All papers in 2020 (Page 4 of 1620 results)

Last updated:  2020-10-23
WARP : Revisiting GFN for Lightweight 128-bit Block Cipher
Subhadeep Banik, Zhenzhen Bao, Takanori Isobe, Hiroyasu Kubo, Fukang Liu, Kazuhiko Minematsu, Kosei Sakamoto, Nao Shibata, Maki Shigeri
In this article, we present WARP, a lightweight 128-bit block cipher with a 128-bit key. It aims at a small-footprint circuit in the field of 128-bit block ciphers, possibly for a unified encryption and decryption functionality. The overall structure of WARP is a variant of the 32-nibble Type-2 Generalized Feistel Network (GFN), with a permutation over nibbles designed to optimize security and efficiency. We conduct a thorough security analysis and report comprehensive hardware and software implementation results. Our hardware results show that WARP is the smallest 128-bit block cipher for most typical hardware implementation strategies. A serialized circuit of WARP achieves around 800 Gate Equivalents (GEs), which is much smaller than previous state-of-the-art implementations of lightweight 128-bit ciphers. While our primary metric is hardware size, WARP also enjoys several other features, most notably low energy consumption. This is somewhat surprising, since a GFN generally needs more rounds than a substitution permutation network (SPN) and has thus been considered less advantageous in this regard. We show that a multi-round implementation of WARP consumes quite little energy. Moreover, WARP also performs well in software: our SIMD implementation is highly competitive with known hardware-oriented 128-bit lightweight ciphers for long inputs, and even much better for small inputs due to the small number of parallel blocks. On 8-bit microcontrollers, the results of our assembly implementations show that WARP is flexible enough to achieve various performance characteristics.
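As an illustration of the Type-2 GFN structure described above, here is a minimal one-round sketch over eight nibbles. The S-box, round keys, and nibble shuffle are arbitrary placeholders chosen for this example, not WARP's actual components (WARP uses 32 nibbles and its own S-box and permutation).

```python
# Toy one-round Type-2 Generalized Feistel Network over nibbles.
# SBOX and the shuffle are arbitrary placeholders, NOT WARP's components.
SBOX = [0xC, 0xA, 0xD, 0x3, 0xE, 0xB, 0xF, 0x7,
        0x8, 0x9, 0x1, 0x5, 0x0, 0x2, 0x4, 0x6]

def gfn_round(nibbles, round_keys, shuffle):
    """Type-2 GFN round: the round function (here a keyed S-box) is applied
    to each even branch and XORed into the adjacent odd branch, then all
    branches are permuted by `shuffle`."""
    out = list(nibbles)
    for i in range(0, len(nibbles), 2):
        out[i + 1] ^= SBOX[nibbles[i]] ^ round_keys[i // 2]
    return [out[shuffle[j]] for j in range(len(out))]

state = list(range(8))               # 8 nibbles for illustration
keys = [1, 2, 3, 4]                  # placeholder round keys
shuffle = [1, 2, 3, 4, 5, 6, 7, 0]   # placeholder branch permutation
state = gfn_round(state, keys, shuffle)
print(state)  # -> [12, 2, 12, 4, 8, 6, 12, 0]
```

Because only half the branches are modified per round, a GFN needs more rounds than an SPN, which is the trade-off the abstract discusses.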
Last updated:  2021-02-04
On Succinct Arguments and Witness Encryption from Groups
Ohad Barta, Yuval Ishai, Rafail Ostrovsky, David J. Wu
Succinct non-interactive arguments (SNARGs) enable proofs of NP statements with very low communication. Recently, there has been significant work in both theory and practice on constructing SNARGs with very short proofs. Currently, the state-of-the-art in succinctness is due to Groth (Eurocrypt 2016) who constructed a SNARG from bilinear maps where the proof consists of just 3 group elements. In this work, we first construct a concretely-efficient designated-verifier (preprocessing) SNARG with inverse polynomial soundness, where the proof consists of just 2 group elements in a standard (generic) group. This leads to a 50% reduction in concrete proof size compared to Groth's construction. We follow the approach of Bitansky et al. (TCC 2013) who describe a compiler from linear PCPs to SNARGs in the preprocessing model. Our improvement is based on a new linear PCP packing technique that allows us to construct 1-query linear PCPs which can then be compiled into a SNARG (using ElGamal encryption over a generic group). An appealing feature of our new SNARG is that the verifier can precompute a statement-independent lookup table in an offline phase; verifying proofs then only requires 2 exponentiations and a single table lookup. This makes our new designated-verifier SNARG appealing in settings that demand fast verification and minimal communication. We then turn to the question of constructing arguments where the proof consists of a single group element. Here, we first show that any (possibly interactive) argument for a language L where the verification algorithm is "generic" (i.e., only performs generic group operations) and the proof consists of a single group element, implies a witness encryption scheme for L. We then show that under a yet-unproven, but highly plausible, hypothesis on the hardness of approximating the minimal distance of linear codes, we can construct a 2-message laconic argument for NP where the proof consists of a single group element. 
Under the same hypothesis, we obtain a witness encryption scheme for NP in the generic group model. Along the way, we show that under a conceptually-similar but proven hardness of approximation result, there is a 2-message laconic argument for NP with negligible soundness error where the prover's message consists of just 2 group elements. In both settings, we obtain laconic arguments (and linear PCPs) with linear decision procedures. Our constructions circumvent a previous lower bound by Groth on such argument systems with linear decision procedures by relying on imperfect completeness. Namely, our constructions have vanishing but not negligible completeness error, while the lower bound of Groth implicitly assumes negligible completeness error of the underlying argument. Our techniques thus highlight new avenues for designing linear PCPs, succinct arguments, and witness encryption schemes.
Last updated:  2021-03-04
Poppins: A Direct Construction for Asymptotically Optimal zkSNARKs
Abhiram Kothapalli, Elisaweta Masserova, Bryan Parno
We present Poppins, a direct construction of a zero-knowledge argument system for general computation that features a linear-time prover and a constant-time verifier (after a single public setup). Our scheme utilizes a universal linear-size structured reference string (SRS) that allows a single trusted setup to be used across all computation instances of a bounded size. Concretely, the prover's cost is dominated by multi-exponentiations of size proportional to the computation, and the verifier's cost is dominated by a constant number of pairings. To achieve the stated asymptotics, we first construct a nearly-optimal zkSNARK with a logarithmic verifier in the random oracle model. We then show how to achieve a constant-time verifier using (single-layer) proof composition. Along the way we design (1) a new polynomial commitment scheme for evaluation-based representations of polynomials, (2) an asymptotically optimal inner-product argument system, (3) an asymptotically optimal multi-Hadamard-product argument system, and (4) a new constraint system for NP that is particularly well-suited for our bundle of techniques.
Last updated:  2024-06-26
Improved Rectangle Attacks on SKINNY and CRAFT
Hosein Hadipour, Nasour Bagheri, and Ling Song
The boomerang and rectangle attacks are adaptations of differential cryptanalysis that regard the target cipher $E$ as a composition of two sub-ciphers, i.e., $E = E_1 \circ E_0$, to construct a distinguisher for $E$ with probability $p^2q^2$ by concatenating two short differential trails for $E_0$ and $E_1$ with probability $p$ and $q$ respectively. According to previous research, the dependency between these two differential characteristics has a great impact on the probability of boomerang and rectangle distinguishers. Dunkelman et al. proposed the sandwich attack to formalise this dependency, regarding $E$ as three parts, i.e., $E = E_1 \circ E_m \circ E_0$, where $E_m$ contains the dependency between the two differential trails and satisfies some differential propagation with probability $r$. Accordingly, the entire probability is $p^2q^2r$. Recently, Song et al. have proposed a general framework to identify the actual boundaries of $E_m$ and systematically evaluate the probability of $E_m$ with any number of rounds, and applied their method to accurately evaluate the probabilities of the best boomerang distinguishers for SKINNY. In this paper, using a more advanced method to search for boomerang distinguishers, we show that the best previous boomerang distinguishers for SKINNY can be significantly improved in terms of probability and number of rounds. More precisely, we propose related-tweakey boomerang distinguishers for up to 19, 21, 23, and 25 rounds of SKINNY-64-128, SKINNY-128-256, SKINNY-64-192 and SKINNY-128-384 respectively, which improve the previous boomerang distinguishers of these variants of SKINNY by 1, 2, 1, and 1 round respectively. Based on the improved boomerang distinguishers for SKINNY, we provide related-tweakey rectangle attacks on 23 rounds of SKINNY-64-128, 24 rounds of SKINNY-128-256, 29 rounds of SKINNY-64-192, and 30 rounds of SKINNY-128-384.
It is worth noting that our improved related-tweakey rectangle attacks on SKINNY-64-192, SKINNY-128-256 and SKINNY-128-384 can be directly applied to the same number of rounds of ForkSkinny-64-192, ForkSkinny-128-256 and ForkSkinny-128-384 respectively. CRAFT is another SKINNY-like tweakable block cipher, for which we provide the first security analysis against rectangle attacks. As a result, we provide a 14-round boomerang distinguisher for CRAFT in the single-tweak model, based on which we propose a single-tweak rectangle attack on 18 rounds of this cipher. Moreover, following previous research on the evaluation of switching over multiple rounds of boomerang distinguishers, we also introduce the Double Boomerang Connectivity Table (DBCT) and related tables as new tools to evaluate the boomerang switch through multiple rounds more accurately.
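The sandwich decomposition discussed above multiplies the two trail probabilities (each traversed twice) with the middle-part probability. A quick sanity check of the resulting distinguisher probability and the implied data requirement, using made-up trail probabilities rather than values from the SKINNY analysis:

```python
import math

def sandwich_probability(p, q, r):
    """Probability of a sandwich/boomerang distinguisher for E = E1 o Em o E0:
    the upper trail (prob. p) and lower trail (prob. q) are each traversed
    twice, and the middle part Em connects them with probability r."""
    return p ** 2 * q ** 2 * r

# Made-up example values, not taken from the SKINNY distinguishers.
p, q, r = 2 ** -10, 2 ** -12, 2 ** -5
prob = sandwich_probability(p, q, r)
data = 1 / prob   # roughly the number of quartets the attacker needs
print(math.log2(prob))  # -> -49.0
```

The naive estimate $p^2q^2$ would ignore the factor $r$; as the abstract notes, accounting for the dependency in $E_m$ can change the distinguisher's probability substantially in either direction.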
Last updated:  2020-10-26
Security of Public Key Encryption against Resetting Attacks
Juliane Krämer, Patrick Struck
Ciphertext indistinguishability under chosen plaintext attacks is a standard security notion for public key encryption. It crucially relies on the usage of good randomness and is trivially unachievable if the randomness is known to the adversary. Yilek (CT-RSA'10) defined security against resetting attacks, where randomness might be reused but remains unknown to the adversary. Furthermore, Yilek claimed that security against adversaries making a single query to the challenge oracle implies security against adversaries making multiple queries to the challenge oracle. This is a typical simplification for indistinguishability security notions proven via a standard hybrid argument. The given proof, however, was pointed out to be flawed by Paterson, Schuldt, and Sibborn (PKC'14). Prior to this work, it had been unclear whether this simplification of the security notion also holds in the case of resetting attacks. We remedy this state of affairs as follows. First, we demonstrate the strength of resetting attacks by showing that many public key encryption schemes are susceptible to them. As our main contribution, we show that the simplification to adversaries making only one query to the challenge oracle also holds in the light of resetting attacks. More precisely, we show that the existing proof cannot be fixed and give a different proof for the claim. Finally, we define real-or-random security against resetting attacks and prove it equivalent to the left-or-right notion by Yilek.
Last updated:  2020-10-23
On Index Calculus Algorithms for Subfield Curves
Steven D. Galbraith, Robert Granger, Simon-Philipp Merz, Christophe Petit
In this paper we further the study of index calculus methods for solving the elliptic curve discrete logarithm problem (ECDLP). We focus on the index calculus for subfield curves, also called Koblitz curves, defined over $\mathbb{F}_q$ with ECDLP in $\mathbb{F}_{q^n}$. Instead of accelerating the solution of polynomial systems during index calculus, as was predominantly done in previous work, we define factor bases that are invariant under the $q$-power Frobenius automorphism of the field $\mathbb{F}_{q^n}$, reducing the number of polynomial systems that need to be solved. A reduction by a factor of $n$ is the best one could hope for. We show how to choose factor bases to achieve this, while simultaneously accelerating the linear algebra step of the index calculus method for Koblitz curves by a factor of $n$. Furthermore, we show how to use the Frobenius endomorphism to improve symmetry breaking for Koblitz curves. We provide constructions of factor bases with the desired properties, and we study their impact on the polynomial system solving costs experimentally. This work answers the question raised in the literature of how the Frobenius endomorphism can be used to speed up index calculus on subfield curves.
Last updated:  2022-02-21
Secure Software Leasing from Standard Assumptions
Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
Secure software leasing (SSL) is a quantum cryptographic primitive that enables an authority to lease software to a user by encoding it into a quantum state. SSL prevents users from generating authenticated pirated copies of leased software, where authenticated copies indicate those run on legitimate platforms. Although SSL is a relaxed variant of quantum copy protection, which prevents users from generating any copy of leased software, it is still meaningful and attractive. Recently, Ananth and La Placa proposed the first SSL scheme. It satisfies a strong security notion called infinite-term security. On the other hand, it has the drawback of being based on public key quantum money, which has not been instantiated with standard cryptographic assumptions so far. Moreover, their scheme only supports a subclass of evasive functions. In this work, we present SSL schemes that satisfy a security notion called finite-term security based on the learning with errors (LWE) assumption. Finite-term security is weaker than infinite-term security, but it still provides a reasonable security guarantee. Specifically, our contributions consist of the following. - We construct a finite-term secure SSL scheme for pseudorandom functions from the LWE assumption against quantum adversaries. - We construct a finite-term secure SSL scheme for a subclass of evasive functions from the LWE assumption against sub-exponential quantum adversaries. - We construct finite-term secure SSL schemes for the functionalities above with classical communication from the LWE assumption against (sub-exponential) quantum adversaries. SSL with classical communication means that entities exchange only classical information, though they run quantum computation locally. Our crucial tool is two-tier quantum lightning, which we introduce in this work as a relaxed version of quantum lightning.
In two-tier quantum lightning schemes, we have a public verification algorithm called semi-verification and a private verification algorithm called full-verification. An adversary cannot generate two (possibly entangled) quantum states with the same serial number such that one passes the semi-verification and the other passes the full-verification. We show that we can construct a two-tier quantum lightning scheme from the LWE assumption.
Last updated:  2021-01-29
Payment Trees: Low Collateral Payments for Payment Channel Networks
Maxim Jourenko, Mario Larangeira, Keisuke Tanaka
The security of blockchain-based decentralized ledgers relies on consensus protocols executed between mutually distrustful parties. Such protocols incur delays that severely limit the throughput of these ledgers. Payment and state channels enable execution of offchain protocols that allow interaction between parties without involving the consensus protocol. Protocols such as Hashed Timelock Contracts (HTLC) and Sprites (FC'19) connect channels into Payment Channel Networks (PCN), allowing payments across a path of payment channels. Such a payment requires each party to lock away funds for an amount of time. The product of funds and locktime is the collateral of the party, i.e., their opportunity cost for forwarding a payment. In the case of HTLC, the locktime is linear in the length of the path, making the total collateral invested across the path quadratic in its length. Sprites improved on this by reducing the locktime to a constant by utilizing smart contracts. Atomic Multi-Channel Updates (AMCU), published at CCS'19, introduced constant collateral payments without smart contracts. In this work we present the Channel Closure attack on AMCU that allows a malicious adversary to make honest parties lose funds. Furthermore, we propose the Payment Trees protocol that allows payments across a PCN with linear total collateral without the aid of smart contracts; it achieves performance competitive with Sprites, yet remains compatible with Bitcoin.
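The collateral comparison above (quadratic for HTLC versus linear with constant locktimes) can be made concrete with a toy calculation; the payment amount, path length, and per-hop timeout below are made-up numbers for illustration only.

```python
def htlc_total_collateral(amount, n_hops, base_locktime):
    """HTLC: hop i must lock funds for a locktime linear in the remaining
    path length, so total collateral (funds x locktime, summed over all
    hops) grows quadratically in the path length."""
    return sum(amount * base_locktime * (n_hops - i) for i in range(n_hops))

def constant_locktime_collateral(amount, n_hops, base_locktime):
    """Sprites-style: every hop locks funds for the same constant locktime,
    so total collateral grows only linearly in the path length."""
    return amount * base_locktime * n_hops

amount, hops, timeout = 100, 5, 10
print(htlc_total_collateral(amount, hops, timeout))         # -> 15000
print(constant_locktime_collateral(amount, hops, timeout))  # -> 5000
```

Doubling the path length roughly quadruples the HTLC total while only doubling the constant-locktime total, which is why collateral is the metric these protocols compete on.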
Last updated:  2020-10-23
Individual Simulations
Yi Deng
We develop an individual simulation technique that explicitly makes use of particular properties/structures of a given adversary's functionality. Using this simulation technique, we obtain the following results. 1. We construct the first protocols that \emph{break previous black-box barriers} of [Xiao, TCC'11 and Alwen et al., Crypto'05] under the standard hardness of factoring, both of which are polynomial time simulatable against all a-priori bounded polynomial size distinguishers: -- Two-round selective opening secure commitment scheme. -- Three-round concurrent zero knowledge and concurrent witness hiding argument for NP in the bare public-key model. 2. We present a simpler two-round weak zero knowledge and witness hiding argument for NP in the plain model under the sub-exponential hardness of factoring. Our technique also yields a significantly simpler proof that existing distinguisher-dependent simulatable zero knowledge protocols are also polynomial time simulatable against all distinguishers of a-priori bounded polynomial size. The core conceptual idea underlying our individual simulation technique is an observation of the existence of nearly optimal extractors for all hard distributions: For any NP-instance(s) sampling algorithm, there exists a polynomial-size witness extractor (depending on the sampler's functionality) that almost outperforms any circuit of a-priori bounded polynomial size in terms of the success probability.
Last updated:  2020-10-20
Cryptanalysis of Feistel-Based Format-Preserving Encryption
Orr Dunkelman, Abhishek Kumar, Eran Lambooij, Somitra Kumar Sanadhya
Format-Preserving Encryption (FPE) is a method to encrypt non-standard domains, thus allowing for securely encrypting not only binary strings, but also special domains, e.g., social security numbers into social security numbers. The need for such schemes has resulted in several standardized constructions, such as the NIST standards FF1 and FF3-1 and the Korean standards FEA-1 and FEA-2. Moreover, there are currently efforts both in ANSI and in ISO to include such block ciphers in standards (e.g., ANSI X9.124 on encryption for financial services). Most of the proposed FPE schemes, including these standards, are based on a Feistel construction with pseudo-random round functions. Moreover, to mitigate enumeration attacks against the possibly small domains, they all employ tweaks, which enrich the actual domain sizes. In this paper we present distinguishing attacks against Feistel-based FPEs. We show distinguishing attacks against the full FF1 and the full FF3-1 using 20-bit plaintexts, and against FEA-1 and the full FEA-2 with 128-bit, 192-bit and 256-bit keys using 8-bit plaintexts, with data complexities that depend on the key length. Moreover, we show how to extend the distinguishing attacks on FEA-1 and FEA-2 using 192-bit and 256-bit keys into key recovery attacks.
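To illustrate the structure being attacked, here is a toy numeric Feistel-based FPE in the FF1/FF3 spirit: a value in a decimal domain is split into two halves that alternate through the rounds, so ciphertexts stay in the same domain. The round function below is an arbitrary placeholder, not the AES-based PRF of the actual standards, and there is no tweak handling.

```python
def toy_round_function(x, key, r):
    """Placeholder keyed mixing function, NOT a real PRF."""
    return (x * 2654435761 + key + r) & 0xFFFFFFFF

def feistel_fpe_encrypt(value, key, radix=10, half=3, rounds=8):
    """Encrypt a value in [0, radix**(2*half)) with a balanced Feistel
    network using modular addition, as in the FF1-style designs."""
    m = radix ** half
    left, right = value // m, value % m
    for r in range(rounds):
        left, right = right, (left + toy_round_function(right, key, r)) % m
    return left * m + right

def feistel_fpe_decrypt(value, key, radix=10, half=3, rounds=8):
    """Invert the rounds in reverse order to recover the plaintext."""
    m = radix ** half
    left, right = value // m, value % m
    for r in reversed(range(rounds)):
        left, right = (right - toy_round_function(left, key, r)) % m, left
    return left * m + right

ct = feistel_fpe_encrypt(123456, key=42)
assert 0 <= ct < 10 ** 6                      # stays in the 6-digit domain
assert feistel_fpe_decrypt(ct, key=42) == 123456
```

Distinguishers of the kind in the paper exploit statistical biases that such balanced Feistel constructions exhibit on very small domains.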
Last updated:  2024-09-16
A note on the low order assumption in class groups of imaginary quadratic number fields
Karim Belabas, Thorsten Kleinjung, Antonio Sanso, and Benjamin Wesolowski
In this short note we analyze the low order assumption in class groups of imaginary quadratic number fields. We show how this assumption is broken for Mersenne primes. We also describe a possible way to attack this assumption for other classes of prime numbers, leveraging new mathematical tools from higher (cubic) number fields.
Last updated:  2021-03-30
Provable Security Analysis of Decentralized Cryptographic Contact Tracing
Noel Danz, Oliver Derwisch, Anja Lehmann, Wenzel Puenter, Marvin Stolle, Joshua Ziemann
Automated contact tracing leverages the ubiquity of smartphones to warn users about an increased exposure risk to COVID-19. In the course of only a few weeks, several cryptographic protocols have been proposed that aim to achieve such contact tracing in a decentralized and privacy-preserving way. Roughly, they let users' phones exchange random-looking pseudonyms that are derived from locally stored keys. If a user is diagnosed, her phone uploads the keys, which allows other users to check for any contact matches. Ultimately, this line of work led to Google and Apple including a variant of these protocols into their phones, which is currently used by millions of users. Due to the obvious urgency, these schemes were pushed to deployment without a formal analysis of the achieved security and privacy features. In this work we address this gap and provide the first formal treatment of such decentralized cryptographic contact tracing. We formally define three main properties in a game-based manner: pseudonym and trace unlinkability to guarantee the privacy of users during healthy and infectious periods, and integrity, ensuring that triggering false positive alarms is infeasible. A particular focus of our work is on the timed aspects of these schemes, as both keys and pseudonyms are rotated regularly, and we specify different variants of the aforementioned properties depending on the time granularity for which they hold. We analyze a selection of practical protocols (DP-3T, TCN, GAEN) and prove their security under well-defined assumptions.
Last updated:  2021-05-01
On the Success Probability of Solving Unique SVP via BKZ
Eamonn W. Postlethwaite, Fernando Virdia
As lattice-based key encapsulation, digital signature, and fully homomorphic encryption schemes near standardisation, ever more focus is being directed to the precise estimation of the security of these schemes. The primal attack reduces key recovery against such schemes to instances of the unique Shortest Vector Problem (uSVP). Dachman-Soled et al. (Crypto 2020) recently proposed a new approach for fine-grained estimation of the cost of the primal attack when using Progressive BKZ for lattice reduction. In this paper we review and extend their technique to BKZ 2.0 and provide extensive experimental evidence of its accuracy. Using this technique we also explain results from previous primal attack experiments by Albrecht et al. (Asiacrypt 2017) where attacks succeeded with smaller than expected block sizes. Finally, we use our simulators to reestimate the cost of attacking the three lattice KEM finalists of the NIST Post Quantum Standardisation Process.
Last updated:  2021-03-01
Multiparty Cardinality Testing for Threshold Private Set Intersection
Pedro Branco, Nico Döttling, Sihang Pu
Threshold Private Set Intersection (PSI) allows multiple parties to compute the intersection of their input sets if and only if the intersection is larger than $n - t$, where $n$ is the size of the sets and $t$ is some threshold. The main appeal of this primitive is that, in contrast to standard PSI, known upper bounds on the communication complexity only depend on the threshold and not on the sizes of the input sets. Current Threshold PSI protocols split into two components: a Cardinality Testing phase, where parties decide if the intersection is larger than some threshold; and a PSI phase, where the intersection is computed. The main source of inefficiency of Threshold PSI is the former part. In this work, we present a new Cardinality Testing protocol that allows parties to check if the intersection of their input sets is larger than $n - t$, with communication complexity depending only on the threshold $t$. We thus obtain a multiparty Threshold PSI scheme whose communication complexity depends only on the threshold, and not on the set sizes.
Last updated:  2023-08-10
Simulation Extractable Versions of Groth’s zk-SNARK Revisited
Oussama Amine, Karim Baghery, Zaira Pindado, Carla Ràfols
Zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs) are the most efficient proof systems in terms of proof size and verification. Currently, Groth's scheme from EUROCRYPT 2016, Groth16, is the state-of-the-art and is widely deployed in practice. Groth16 was originally proven to achieve knowledge soundness, which does not guarantee the non-malleability of proofs. There has been considerable progress in presenting new zk-SNARKs or modifying Groth16 to efficiently achieve Simulation Extractability (SE), which is shown to be a necessary requirement in some applications. In this paper, we revise the Random Oracle (RO) based variant of Groth16 proposed by Bowe and Gabizon, BG18, the most efficient candidate in terms of prover efficiency and CRS size, and present a more efficient variant that reduces the number of pairings in the verification and of group elements in the proof. This supersedes our preliminary construction, presented at CANS 2020 [BPR20], which saved 1 pairing in the verification and was proven in the Generic Group Model (GGM). Our new construction also improves on BG18 in that our proofs are in the Algebraic Group Model (AGM) with Random Oracles and reduce security to standard computational assumptions in bilinear groups (as opposed to using the full power of the GGM). We implement our proposed SE zk-SNARK along with BG18 and compare the efficiency of our scheme with some related works. Our empirical experience confirms that our SE zk-SNARK is more efficient than all previous SE schemes in most dimensions, and its efficiency is very close to that of the original Groth16.
Last updated:  2021-07-09
On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work
Kai-Min Chung, Serge Fehr, Yu-Hsuan Huang, Tai-Ning Liao
We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). This technique has proven to be very powerful for reproving known lower bound results, but also for proving new results that seemed to be out of reach before. Despite being very useful, it is however still quite cumbersome to actually employ the compressed oracle technique. To start off with, we offer a concise yet mathematically rigorous exposition of the compressed oracle technique. We adopt a more abstract view than other descriptions found in the literature, which allows us to keep the focus on the relevant aspects. Our exposition easily extends to the parallel-query QROM, where in each query-round the considered quantum oracle algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis of quantum oracle algorithms. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, we show that, for typical examples, the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main application is to prove hardness of finding a $q$-chain, i.e., a sequence $x_0, x_1, \ldots, x_q$ with the property that $x_i = H(x_{i-1})$ for all $i$, with fewer than $q$ parallel queries. The above problem of producing a hash chain is of fundamental importance in the context of proofs of sequential work.
Indeed, as a concrete application of our new bound, we prove that the ``Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remain secure against quantum attacks. Such a proof is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack, and substantial additional work is necessary. Thanks to our framework, this can now be done with purely classical reasoning.
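The hash-chain problem underlying proofs of sequential work can be stated very concretely: each element of a $q$-chain is the hash of the previous one, so honest computation is inherently sequential. A minimal sketch using SHA-256:

```python
import hashlib

def hash_chain(x0: bytes, q: int):
    """Compute a q-chain x_0, x_1, ..., x_q with x_i = H(x_{i-1}).
    Each step depends on the previous output, so an honest prover needs
    q sequential hash evaluations; the bound discussed above says that
    even with parallel queries, fewer than q query-rounds do not suffice."""
    chain = [x0]
    for _ in range(q):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

chain = hash_chain(b"seed", 100)
assert len(chain) == 101
assert chain[50] == hashlib.sha256(chain[49]).digest()
```

A proof of sequential work convinces a verifier that such a chain was computed, which is why a parallel-query lower bound for chain-finding is the key ingredient of the quantum security analysis.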
Last updated:  2022-12-27
QCB: Efficient Quantum-secure Authenticated Encryption
Ritam Bhaumik, Xavier Bonnetain, André Chailloux, Gaëtan Leurent, María Naya-Plasencia, André Schrottenloher, Yannick Seurin
It was long thought that symmetric cryptography was only mildly affected by quantum attacks, and that doubling the key length was sufficient to restore security. However, recent works have shown that Simon's quantum period finding algorithm breaks a large number of MAC and authenticated encryption algorithms when the adversary can query the MAC/encryption oracle with a quantum superposition of messages. In particular, the OCB authenticated encryption mode is broken in this setting, and no quantum-secure mode is known with the same efficiency (rate-one and parallelizable). In this paper we generalize the previous attacks, show that a large class of OCB-like schemes is unsafe against superposition queries, and discuss the quantum security notions for authenticated encryption modes. We propose a new rate-one parallelizable mode named QCB inspired by TAE and OCB and prove its security against quantum superposition queries.
Last updated:  2021-07-16
Post-Quantum Cryptography with Contemporary Co-Processors: Beyond Kronecker, Schönhage-Strassen & Nussbaumer
Joppe W. Bos, Joost Renes, Christine van Vredendaal
There are currently over 30 billion IoT (Internet of Things) devices installed worldwide. To secure these devices from various threats one often relies on public-key cryptographic primitives whose operations can be costly to compute on resource-constrained IoT devices. To support such operations these devices often include a dedicated co-processor for cryptographic procedures, typically in the form of a big integer arithmetic unit. Such existing arithmetic co-processors do not offer the functionality that is expected by upcoming post-quantum cryptographic primitives. Regardless, contemporary systems may exist in the field for many years to come. In this paper we propose the Kronecker+ algorithm for polynomial multiplication in rings of the form Z[X]/(X^n+1): the arithmetic foundation of many lattice-based cryptographic schemes. We discuss how Kronecker+ allows for re-use of existing co-processors for post-quantum cryptography, and in particular directly applies to the various finalists in the post-quantum standardization effort led by NIST. We demonstrate the effectiveness of our algorithm in practice by integrating Kronecker+ into Saber: one of the finalists in the ongoing NIST standardization effort. On our target platform, a RV32IMC with access to a dedicated arithmetic co-processor designed to accelerate RSA and ECC, Kronecker+ performs the matrix multiplication 2.8 times faster than regular Kronecker substitution and 1.7 times faster than Harvey's negated-evaluation-points method.
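Classical Kronecker substitution, the starting point for Kronecker+, reduces polynomial multiplication to a single big-integer multiplication by evaluating the polynomials at a power of two, which is exactly what an RSA/ECC co-processor is good at. A minimal sketch for Z[X]/(X^n+1), assuming nonnegative coefficients small enough that product coefficients do not overlap packing slots (this is illustrative only, not the Kronecker+ algorithm itself):

```python
def kronecker_multiply(a, b, n, coeff_bits=32):
    """Multiply a, b in Z[X]/(X^n + 1) via Kronecker substitution:
    pack coefficients into one integer (evaluate at X = 2**coeff_bits),
    multiply, unpack, then reduce negacyclically using X^n = -1.
    Assumes nonnegative inputs and coeff_bits wide enough that product
    coefficients (< n * max_coeff**2) fit in one slot without carries."""
    pack = lambda p: sum(c << (coeff_bits * i) for i, c in enumerate(p))
    prod = pack(a) * pack(b)
    mask = (1 << coeff_bits) - 1
    coeffs = [(prod >> (coeff_bits * i)) & mask for i in range(2 * n)]
    return [coeffs[i] - coeffs[i + n] for i in range(n)]  # X^n = -1

# small worked example in Z[X]/(X^4 + 1)
print(kronecker_multiply([1, 2, 3, 4], [5, 6, 7, 8], 4))  # -> [-56, -36, 2, 60]
```

On a co-processor, the single large multiplication replaces the whole schoolbook polynomial product; variants like Kronecker+ refine how the packing interacts with the negacyclic reduction.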
Last updated:  2022-11-09
TMVP-based Multiplication for Polynomial Quotient Rings and Application to Saber on ARM Cortex-M4
İrem Keskinkurt Paksoy, Murat Cenk
Lattice-based NIST PQC finalists need efficient multiplication in $\mathbb{Z}_q[X]/(X^n + 1)$. Multiplication in this ring can be performed very efficiently via the number theoretic transform (NTT), as done in CRYSTALS-Kyber, if the parameters of the scheme allow it. If NTT is not supported, other multiplication algorithms must be employed. For example, if the modulus of the scheme is a power of two, as in Saber and NTRU, then NTT cannot be used directly. In this case, Karatsuba and Toom-Cook methods, together with modular reduction, are commonly used for multiplication in this ring. In this paper, we show that the Toeplitz matrix-vector product (TMVP) representation of modular polynomial multiplication yields better results than Karatsuba and Toom-Cook methods. We present three- and four-way TMVP formulas derived from three- and four-way Toom-Cook algorithms, respectively. We use the four-way TMVP formula to develop an algorithm for multiplication in the ring used by Saber. We implement the proposed algorithm on the ARM Cortex-M4 microcontroller and apply it to Saber, one of the lattice-based finalists of the NIST PQC competition. We compare the results to previous implementations. The TMVP-based multiplication algorithm we propose is faster than the previous algorithm that uses a combination of Toom-Cook, Karatsuba, and schoolbook methods. Our algorithm also speeds up the key generation, encapsulation, and decapsulation algorithms of all variants of Saber. Moreover, our algorithm requires less memory than the others, except for the memory-optimized implementation of Saber.
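Negacyclic polynomial multiplication in Z[X]/(X^n+1) is exactly a Toeplitz matrix-vector product, which is the structure the TMVP formulas exploit. A direct schoolbook sketch (the actual speedups come from the three- and four-way decompositions of this product, which are not shown here):

```python
def negacyclic_toeplitz(a, n):
    """Toeplitz matrix T(a) such that T(a) @ b equals a*b mod X^n + 1.
    Entry (i, j) is a[i-j]; entries that wrap around pick up a minus
    sign because X^n = -1 in this ring."""
    return [[a[i - j] if i >= j else -a[n + i - j] for j in range(n)]
            for i in range(n)]

def tmvp_multiply(a, b, n):
    """Multiply a and b in Z[X]/(X^n + 1) as a Toeplitz matrix-vector
    product (schoolbook evaluation, for illustration)."""
    T = negacyclic_toeplitz(a, n)
    return [sum(T[i][j] * b[j] for j in range(n)) for i in range(n)]

print(tmvp_multiply([1, 2, 3, 4], [5, 6, 7, 8], 4))  # -> [-56, -36, 2, 60]
```

The k-way TMVP formulas split this Toeplitz matrix into structured blocks so that fewer half-size products are needed, analogously to how Toom-Cook splits a polynomial product.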
Last updated:  2020-10-19
Robust Property-Preserving Hash Functions for Hamming Distance and More
Nils Fleischhacker, Mark Simkin
Robust property-preserving hash (PPH) functions, recently introduced by Boyle, Lavigne, and Vaikuntanathan [ITCS 2019], compress large inputs x and y into short digests h(x) and h(y) in a manner that allows for computing a predicate on x and y while only having access to the corresponding hash values. In contrast to locality-sensitive hash functions, a robust PPH function guarantees to correctly evaluate the predicate on x and y even if x and y are chosen adversarially after seeing h. Our main result is a robust PPH function for the exact hamming distance predicate, decided by the hamming distance d(x, y) between x and y. Our PPH function compresses n-bit strings into short digests whose length depends on the distance threshold and the security parameter. The construction is based on the q-strong bilinear discrete logarithm assumption. Along the way, we construct a robust PPH function for the set intersection predicate, which compresses sets X and Y of bounded size with elements from some arbitrary universe U into short digests. This PPH function may be of independent interest. We present an almost matching lower bound on the digest size of any PPH function for the intersection predicate, which indicates that our compression rate is close to optimal. Finally, we also show how to extend our PPH function for the intersection predicate to more than two inputs.
Last updated:  2020-10-19
Byzantine Ordered Consensus without Byzantine Oligarchy
Yunhao Zhang, Srinath Setty, Qi Chen, Lidong Zhou, Lorenzo Alvisi
The specific order of commands agreed upon when running state machine replication (SMR) is immaterial to fault-tolerance: all that is required is for all correct deterministic replicas to follow it. In the permissioned blockchains that rely on Byzantine fault tolerant (BFT) SMR, however, nodes have a stake in the specific sequence that the ledger records, as well as in preventing other parties from manipulating the sequencing to their advantage. Yet the traditional specification of SMR correctness has no language to express these concerns. This paper introduces Byzantine ordered consensus, a new primitive that augments the correctness specification of BFT SMR to include specific guarantees on the total orders it produces; and a new architecture for BFT SMR that, by factoring out ordering from consensus, can enforce these guarantees and prevent Byzantine nodes from controlling ordering decisions (a Byzantine oligarchy). These contributions are instantiated in Pompe, a BFT SMR protocol that is guaranteed to order commands in a way that respects a natural extension of linearizability.
Last updated:  2021-04-02
Unbounded Key-Policy Attribute-based Encryption with Black-Box Traceability
Yunxiu Ye, Zhenfu Cao, Jiachen Shen
Attribute-based encryption received widespread attention as soon as it was proposed. However, due to its specific characteristics, some restrictions on the attribute set are not flexible enough in practice. In addition, since access authority is determined by users' attributes, users sharing the same attributes are difficult to distinguish: once a malicious user profits illicitly from their decryption authority, it is hard to track down the specific user responsible. This paper follows these practical demands and proposes a more flexible key-policy attribute-based encryption scheme with black-box traceability. The scheme has constant-size public parameters, which can be used to construct attribute-related parameters flexibly, and it adopts the traitor-tracing method from broadcast encryption to trace malicious users effectively. The scheme's security and feasibility are demonstrated by the security proofs and performance evaluation in this paper.
Last updated:  2021-05-25
Is Real-time Phishing Eliminated with FIDO? Social Engineering Downgrade Attacks against FIDO Protocols
Enis Ulqinaku, Hala Assal, AbdelRahman Abdou, Sonia Chiasson, Srdjan Čapkun
FIDO’s U2F is a web-authentication mechanism designed to mitigate real-time phishing—an attack that undermines multi-factor authentication by allowing an attacker to relay second-factor one-time tokens from the victim user to the legitimate website in real-time. A U2F dongle is simple to use, and is designed to restrain users from using it incorrectly. We show that social engineering attacks allow an adversary to downgrade FIDO’s U2F to alternative authentication mechanisms. Websites allow such alternatives to handle dongle malfunction or loss. All FIDO-supporting websites in Alexa’s top 100 allow choosing alternatives to FIDO, and are thus potentially vulnerable to real-time phishing attacks. We crafted a phishing website that mimics Google login’s page and implements a FIDO-downgrade attack. We then ran a carefully-designed user study to test the effect on users. We found that, when using FIDO as their second authentication factor, 55% of participants fell for real-time phishing, and another 35% would potentially be susceptible to the attack in practice.
Last updated:  2020-10-19
On the Effect of the (Micro)Architecture on the Development of Side-Channel Resistant Software
Lauren De Meyer, Elke De Mulder, Michael Tunstall
There are many examples of how to assess the side-channel resistance of a hardware implementation for a given order, where one has to take into account all transitions and glitches produced by a given design. However, microprocessors do not conform with the ideal circuit model which is typically used to gain confidence in the security of masking against side-channel attacks. As a result, masked software implementations in practice do not exhibit the security one would expect in theory. In this paper, we generalize and extend work by Papagiannopoulos and Veshchikov to describe the ways in which a microprocessor may leak. We show that the sources of leakage are far more numerous than previously considered and highly dependent on the platform. We further describe how to write high-level code in the C programming language that allows one to work around common micro-architectural features. In particular, we introduce implementation techniques to reduce sensitive combinations made by the CPU and which are devised so as to be preserved through the optimizations made by the compiler. However, these techniques cannot be proven to be secure. In this paper, we seek to highlight leakage not considered in current models used in proofs and describe some potential solutions. We apply our techniques to two case studies (DES and AES) and show that they are able to provide a modest level of security on several platforms.
Last updated:  2020-10-19
Concrete quantum cryptanalysis of binary elliptic curves
Gustavo Banegas, Daniel J. Bernstein, Iggy van Hoof, Tanja Lange
This paper analyzes and optimizes quantum circuits for computing discrete logarithms on binary elliptic curves, including reversible circuits for fixed-base-point scalar multiplication and the full stack of relevant subroutines. The main optimization target is the size of the quantum computer, i.e., the number of logical qubits required, as this appears to be the main obstacle to implementing Shor's polynomial-time discrete-logarithm algorithm. The secondary optimization target is the number of logical Toffoli gates. For an elliptic curve over a field of 2^n elements, this paper reduces the number of qubits to 7n+[log_2(n)]+9. At the same time this paper reduces the number of Toffoli gates to 48n^3+8n^(log_2(3)+1)+352n^2 log_2(n)+512n^2+O(n^(log_2(3))) with double-and-add scalar multiplication, and a logarithmic factor smaller with fixed-window scalar multiplication. The number of CNOT gates is also O(n^3). Exact gate counts are given for various sizes of elliptic curves currently used for cryptography.
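Reading the abstract's qubit count as 7n + floor(log2(n)) + 9 (the rounding of the logarithmic term is an assumption here; the paper gives the exact expression), the space cost for standard binary-field sizes is easy to tabulate. This only evaluates the stated formula:

```python
import math

def qubits(n):
    # 7n + floor(log2 n) + 9 logical qubits for a curve over GF(2^n);
    # the floor on the log term is assumed, check the paper for the exact form.
    return 7 * n + int(math.log2(n)) + 9

# Binary-field sizes commonly used for elliptic-curve cryptography.
for n in (163, 233, 283, 571):
    print(n, qubits(n))
```

For instance, n = 233 needs 1647 logical qubits and n = 571 needs 4015 under this reading, which makes the "qubit count grows linearly in n" point of the abstract concrete.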
Last updated:  2020-10-19
Optimized Software Implementations for theLightweight Encryption Scheme ForkAE
Arne Deprez, Elena Andreeva, Jose Maria Bermudo Mera, Angshuman Karmakar, Antoon Purnal
In this work we develop optimized software implementations for ForkAE, a second round candidate in the ongoing NIST lightweight cryptography standardization process. Moreover, we analyze the performance and efficiency of different ForkAE implementations on two embedded platforms: ARM Cortex-A9 and ARM Cortex-M0. First, we study portable ForkAE implementations. We apply a decryption optimization technique which allows us to accelerate decryption by up to 35%. Second, we go on to explore platform-specific software optimizations. In platforms where cache-timing attacks are not a risk, we present a novel table-based approach to compute the SKINNY round function. Compared to the existing portable implementations, this technique speeds up encryption and decryption by 20% and 25%, respectively. Additionally, we propose a set of platform-specific optimizations for processors with parallel hardware extensions such as ARM NEON. Without relying on parallelism provided by long messages (cf. bit-sliced implementations), we focus on the primitive-level ForkSkinny parallelism provided by ForkAE to reduce encryption and decryption latency by up to 30%. We benchmark the performance of our implementations on the ARM Cortex-M0 and ARM Cortex-A9 processors and give a comparison with the other SKINNY-based schemes in the NIST lightweight competition: SKINNY-AEAD and Romulus.
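The general idea behind a table-based round function, illustrated here with a made-up 4-bit permutation and a placeholder linear layer (not SKINNY's actual components), is to fuse the S-box and the following linear operation into one precomputed table, trading memory for per-round work; this is also why it only suits platforms where cache-timing attacks are not a concern:

```python
# Toy nibble "cipher" pieces; NOT SKINNY's actual S-box or linear layer.
SBOX = [(7 * x + 3) % 16 for x in range(16)]   # a permutation of 0..15

def linear(x):
    return ((x << 1) | (x >> 3)) & 0xF         # rotate a nibble left by 1

# Fuse S-box and linear layer into one table, built once at startup.
TABLE = [linear(SBOX[x]) for x in range(16)]

def round_compute(x):
    return linear(SBOX[x])                      # compute-then-map

def round_lookup(x):
    return TABLE[x]                             # single lookup per nibble
```

Both paths give identical results; the lookup path replaces two operations per nibble with one secret-dependent memory access.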
Last updated:  2021-06-08
Coco: Co-Design and Co-Verification of Masked Software Implementations on CPUs
Barbara Gigerl, Vedad Hadzic, Robert Primas, Stefan Mangard, Roderick Bloem
The protection of cryptographic implementations against power analysis attacks is of critical importance for many applications in embedded systems. The typical approach of protecting against these attacks is to implement algorithmic countermeasures, like masking. However, implementing these countermeasures in a secure and correct manner is challenging. Masking schemes require the independent processing of secret shares, a property that is often violated by CPU microarchitectures in practice. In order to write leakage-free code, the typical approach in practice is to iteratively explore instruction sequences and to empirically verify whether there is leakage caused by the hardware for this instruction sequence or not. Clearly, this approach is neither efficient, nor does it lead to rigorous security statements. In this paper, we overcome the current situation and present the first approach for co-design and co-verification of masked software implementations on CPUs. First, we present Coco, a tool that allows us to provide security proofs at the gate-level for the execution of a masked software implementation on a concrete CPU. Using Coco, we analyze the popular 32-bit RISC-V Ibex core, identify all design aspects that violate the security of our tested masked software implementations, and perform corrections, mostly in hardware. The resulting secured Ibex core has an area overhead of around 10%, the runtime of software on this core is largely unaffected, and the formal verification with Coco of, e.g., a first-order masked Keccak S-box running on the secured Ibex core takes around 156 seconds. To demonstrate the effectiveness of our suggested design modifications, we perform practical leakage assessments using an FPGA evaluation board.
Last updated:  2020-10-19
I Choose You: Automated Hyperparameter Tuning for Deep Learning-based Side-channel Analysis
Lichao Wu, Guilherme Perin, Stjepan Picek
Deep learning-based SCA represents a powerful option for profiling side-channel analysis. Numerous results in the last few years indicate neural networks can break targets protected with countermeasures even with a relatively small number of attack traces. Intuitively, the more powerful the neural network architecture we require, the more effort we need to spend on its hyperparameter tuning. Current results commonly use random search and reach good performance. Yet, the question remains how good such architectures are compared with architectures carefully designed by following a specific methodology. Unfortunately, the works proposing such methodologies are sparse and difficult to use without prior knowledge about the target. This work proposes an automated way of deep learning hyperparameter tuning based on Bayesian Optimization. We build a custom framework denoted as AutoSCA that supports both machine learning and side-channel metrics. Our experimental analysis shows that Bayesian optimization performs well regardless of the dataset, leakage model, or neural network type. What is more, we find a number of neural network architectures outperforming state-of-the-art attacks. Finally, we note that random search, despite being considered not particularly powerful, manages to reach top performance for a number of considered settings. We postulate this happens since the datasets are relatively easy to break, and there are many neural network architectures reaching top performance.
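As a baseline for what AutoSCA automates, the random-search loop the abstract refers to is simply: sample hyperparameters, score the resulting model, keep the best. Everything below is a toy stand-in, with a synthetic objective in place of training a network and scoring it with an SCA metric such as guessing entropy:

```python
import random

random.seed(0)

def toy_objective(lr, layers, neurons):
    # Synthetic validation loss standing in for "train a network, then score it
    # with a machine-learning or side-channel metric"; lower is better.
    return (lr - 1e-3) ** 2 * 1e6 + (layers - 3) ** 2 + (neurons - 128) ** 2 / 1e3

def random_search(trials):
    best = None
    for _ in range(trials):
        cand = (10 ** random.uniform(-5, -1),                 # learning rate
                random.randint(1, 8),                         # dense layers
                random.choice([16, 32, 64, 128, 256, 512]))   # neurons per layer
        score = toy_objective(*cand)
        if best is None or score < best[0]:
            best = (score, cand)
    return best

best_score, (lr, layers, neurons) = random_search(200)
```

Bayesian optimization replaces the blind sampling with a surrogate model that proposes the next candidate based on all previous scores, which is the part the abstract's framework contributes.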
Last updated:  2023-10-04
Optimal Oblivious Parallel RAM
Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Enoch Peserico, Runting Shi
An oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for hiding a RAM's access pattern. That is, for every input, the distribution of the observed locations accessed by the machine is essentially independent of the machine's secret inputs. Recent progress culminated in a work of Asharov et al. (EUROCRYPT '20), obtaining an ORAM with (amortized) logarithmic overhead in total work, which is known to be optimal. Oblivious Parallel RAM (OPRAM) is a natural extension of ORAM to the (more realistic) parallel setting where several processors make concurrent accesses to a shared memory. It is known that any OPRAM must incur logarithmic work overhead, and for highly parallel RAMs a logarithmic depth blowup (in the balls and bins model). Despite the significant recent advances, there is still a large gap: all existing OPRAM schemes incur a poly-logarithmic overhead either in total work or in depth. Our main result closes the aforementioned gap and provides an essentially optimal OPRAM scheme. Specifically, assuming one-way functions, we show that any Parallel RAM with memory capacity N can be obliviously simulated in space O(N), incurring only logarithmic blowup in (amortized) total work as well as in depth. Our transformation supports all PRAMs in the CRCW mode and the resulting simulation is in the CRCW mode as well.
Last updated:  2021-03-04
Efficient Composable Oblivious Transfer from CDH in the Global Random Oracle Model
Bernardo David, Rafael Dowsley
Oblivious Transfer (OT) is a fundamental cryptographic protocol that finds a number of applications, in particular, as an essential building block for two-party and multi-party computation. We construct the first universally composable (UC) protocol for oblivious transfer secure against active static adversaries based on the Computational Diffie-Hellman (CDH) assumption. Our protocol is proven secure in the observable Global Random Oracle model. We start by constructing a protocol that realizes an OT functionality with a selective failure issue, but shown to be sufficient to instantiate efficient OT extension protocols. In terms of complexity, this protocol only requires the computation of 6 modular exponentiations and the communication of 5 group elements, 5 binary strings of security parameter length, and 2 binary strings of message length. Finally, we lift this weak construction to obtain a protocol that realizes the standard OT functionality (without any selective failures) at an additional cost of computing 9 modular exponentiations and communicating 4 group elements, 4 binary strings of security parameter length, and 2 binary strings of message length. As an intermediate step before constructing our CDH-based protocols, we design generic OT protocols from any OW-CPA secure public-key encryption scheme with certain properties, which could potentially be instantiated from assumptions other than CDH.
Last updated:  2021-10-12
FORTIS: Selfish Mining Mitigation by (FOR)geable (TI)me(S)tamps
Osman Biçer, Alptekin Küpçü
The selfish mining (SM) attack of Eyal and Sirer (2018) endangers permissionless Proof-of-Work blockchains by allowing a rational mining pool with a hash power (a) much less than 50% of the whole network to deviate from the honest mining algorithm and to steal from the fair shares of honest miners. Since then, the attack has been studied extensively in various settings, for understanding its interesting dynamics, optimizing it, and mitigating it. In this context, inspired by the ''Freshness Preferred'' defense of Heilman (2014), we propose a timestamp-based defense for the setting where timestamps are not generated by a trusted authority. Removing the timestamp authority to obtain a decentralized defense turns out to be a non-trivial task, due to two natural and simple attacks: Oracle mining, which sets the timestamp to a future time, and Bold mining, which generates an alternative chain starting from a previous block. Unfortunately, these attacks are hard to analyze and optimize, and the available tools, to our knowledge, fail to help us with this task. Thus, we propose generalized formulas for the revenue and profitability of SM attacks to ease the analysis and optimization of these attacks. Our analyses show that although the use of timestamps is promising for selfish mining mitigation, Freshness Preferred, in its current form, is quite vulnerable, as any rational miner with a>0 can directly benefit from our attacks. To cope with this problem, we propose an SM mitigation algorithm, Fortis, with forgeable timestamps (without the need for a trusted authority), which protects the honest miners' shares against any attacker with a<27.0% against all the known SM-type attacks. By building upon the blockchain simulator BlockSim of Alharby and van Moorsel (2019), we simulate our Oracle and Bold mining attacks against the Freshness Preferred and Fortis defenses.
Similar to our theoretical results, the simulation results demonstrate the effectiveness of these attacks against the former and their ineffectiveness against the latter.
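To make the revenue dynamics concrete, the classic selfish-mining strategy of Eyal and Sirer can be Monte-Carlo simulated in a few lines. This sketch fixes gamma = 0 (honest miners never extend the attacker's branch during a race) and is not the BlockSim-based simulation or the generalized formulas of the paper:

```python
import random

def selfish_mining_share(alpha, blocks=1_000_000, seed=1):
    """Monte Carlo of Eyal-Sirer selfish mining with gamma = 0. Returns the
    attacker's long-run share of main-chain blocks; a share above alpha means
    withholding beats honest mining for that hash-power fraction."""
    rng = random.Random(seed)
    lead = 0          # private-chain lead over the public chain
    att = hon = 0     # blocks each side keeps on the eventual main chain
    for _ in range(blocks):
        if rng.random() < alpha:          # attacker finds the next block
            lead += 1                     # ...and withholds it
        else:                             # honest network finds the next block
            if lead == 0:
                hon += 1
            elif lead == 1:               # attacker publishes; a race starts
                if rng.random() < alpha:  # attacker extends its own branch
                    att += 2
                else:                     # gamma = 0: honest branch wins
                    hon += 2
                lead = 0
            elif lead == 2:               # publish both, orphan the honest block
                att += 2
                lead = 0
            else:                         # lead > 2: release one, stay ahead
                att += 1
                lead -= 1
    return att / (att + hon) if att + hon else 1.0
```

With gamma = 0 the profitability threshold is 1/3 of the hash power, so a pool at alpha = 0.35 earns more than its fair share while one at alpha = 0.20 earns less.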
Last updated:  2020-10-16
Sword: An Opaque Blockchain Protocol
Farid Elwailly
I describe a blockchain design that hides the transaction graph from Blockchain Analyzers. The design is based on the realization that today the miner creating a block needs enough information to verify the validity of transactions, which makes details about the transactions public and thus allows blockchain analysis. Some protocols, such as Mimblewimble, obscure the transaction amounts but not the source of the funds which is enough to allow for analysis. The insight in this technical note is that the block creator can be restricted to the task of ensuring no double spends. The task of actually verifying transaction balances really belongs to the receiver. The receiver is the one motivated to verify that she is receiving a valid transaction output since she has to convince the next receiver that the balances are valid, otherwise no one will accept her spending transaction. The bulk of the transaction can thus be encrypted in such a manner that only the receiver can decrypt and examine it. Opening this transaction allows the receiver to also open previous transactions to allow her to work her way backward in a chain until she arrives at the coin generation blocks and completely verify the validity of the transaction. Since transactions are encrypted on the blockchain a blockchain analyzer cannot create a transaction graph until he is the receiver of a transaction that allows backward tracing through to some target transaction.
Last updated:  2021-05-27
Improved attacks against key reuse in learning with errors key exchange
Nina Bindel, Douglas Stebila, Shannon Veitch
Basic key exchange protocols built from the learning with errors (LWE) assumption are insecure if secret keys are reused in the face of active attackers. One example of this is Fluhrer’s attack on the Ding, Xie, and Lin (DXL) LWE key exchange protocol, which exploits leakage from the signal function for error correction. Protocols aiming to achieve security against active attackers generally use one of two techniques: demonstrating well-formed keyshares using re-encryption like in the Fujisaki–Okamoto transform; or directly combining multiple LWE values, similar to MQV-style Diffie–Hellman-based protocols. In this work, we demonstrate improved and new attacks exploiting key reuse in several LWE-based key exchange protocols. First, we show how to greatly reduce the number of samples required to carry out Fluhrer’s attack and reconstruct the secret period of a noisy square waveform, speeding up the attack on DXL key exchange by a factor of over 200. We show how to adapt this to attack a protocol of Ding, Branco, and Schmitt (DBS) designed to be secure with key reuse, breaking the claimed 128-bit security level in under a minute. We also apply our technique to a second authenticated key exchange protocol of DBS that uses an additive MQV design, although in this case our attack makes use of ephemeral key compromise powers of the eCK security model, which was not in scope of the claimed BR-model security proof. Our results show that building secure authenticated key exchange protocols directly from LWE remains a challenging and mostly open problem.
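The core of Fluhrer-style attacks is that the error-correction signal leaks where k*s mod q sits in Z_q. A toy single-coefficient version (hypothetical parameters and a simplified signal, not the DXL protocol itself) shows how sweeping a multiplier and counting signal flips reveals the magnitude of a reused secret coefficient:

```python
def signal(v, q):
    # Toy hint bit: 0 on the half of Z_q nearest 0, 1 on the other half.
    return 0 if (v < q // 4 or v >= 3 * q // 4) else 1

def recover_magnitude(s, q):
    """Sweep k = 0..q-1 against a reused secret coefficient s. The walk
    k*s mod q winds |s| times around Z_q, crossing the two signal-region
    boundaries once per revolution, so the number of flips is about 2*|s|."""
    bits = [signal(k * s % q, q) for k in range(q)]
    flips = sum(bits[k] != bits[k - 1] for k in range(q))   # cyclic count
    return flips // 2
```

The paper's improvement is precisely about needing far fewer such queries to reconstruct the secret's "noisy square waveform" than this naive full sweep.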
Last updated:  2020-10-16
Multivariate Cryptographic Primitive based on the product of the roots of a polynomial over a field
Borja Gómez
Cryptographic primitives in Multivariate Public Key Cryptography are of relevant interest, especially in the quadratic case. These primitives classify the families of schemes that we encounter in this field. In this paper, the reader can find a new primitive based on the product of the roots of a polynomial over a field, where the coefficients of this polynomial are the elementary symmetric polynomials in its variables, which guarantees a solution when inverting the scheme. Moreover, a cryptosystem and a digital signature scheme are built on top of this primitive; distinct parametrizations and criteria that define the schemes are discussed, along with applications of attacks available in the literature.
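The algebraic fact this rests on is Vieta's: the coefficients of a monic polynomial are (signed) elementary symmetric polynomials of its roots, so in particular the product of the roots can be read off the constant term. A small sanity check, unrelated to the concrete scheme's parametrization:

```python
import math
from itertools import combinations

def elementary_symmetric(roots, k):
    # e_k = sum of products over all k-element subsets of the roots
    return sum(math.prod(c) for c in combinations(roots, k))

def poly_from_roots(roots):
    """Monic polynomial with the given roots: by Vieta's formulas the
    coefficient of x^(n-k) is (-1)^k * e_k(roots), so the constant term
    is (-1)^n times the product of the roots."""
    n = len(roots)
    return [(-1) ** k * elementary_symmetric(roots, k) for k in range(n + 1)]

def eval_poly(coeffs, x):
    n = len(coeffs) - 1
    return sum(c * x ** (n - i) for i, c in enumerate(coeffs))
```

For roots {1, 2, 3} this yields x^3 - 6x^2 + 11x - 6, whose coefficients are exactly e_1 = 6, e_2 = 11, e_3 = 6 with alternating signs.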
Last updated:  2021-05-28
Secure Two-Party Quantum Computation Over Classical Channels
Michele Ciampi, Alexandru Cojocaru, Elham Kashefi, Atul Mantri
Secure two-party computation considers the problem of two parties computing a joint function of their private inputs without revealing anything beyond the output of the computation. In this work, we take the first steps towards understanding the setting where: 1) the two parties (Alice and Bob) can communicate only via a classical channel, 2) the input of Bob is quantum, and 3) the input of Alice is classical. Our first result indicates that in this setting it is in general impossible to realize a two-party quantum functionality with black-box simulation in the case of malicious quantum adversaries. In particular, we show that the existence of a secure quantum computing protocol that relies only on classical channels would contradict the quantum no-cloning argument. We circumvent this impossibility following three different approaches. The first is by considering a weaker security notion called one-sided simulation security. This notion protects the input of one party (the quantum Bob) in the standard simulation-based sense and protects the privacy of the other party's input (the classical Alice). We show how to realize a protocol that satisfies this notion relying on the learning with errors assumption. The second way to circumvent the impossibility result, while at the same time providing standard simulation-based security also against a malicious Bob, is by assuming that the quantum input has an efficient classical representation. Finally, we focus our attention on the class of zero-knowledge functionalities and provide a compiler that takes as input a classical proof of quantum knowledge (PoQK) protocol for a QMA relation R (a classical PoQK is a PoQK that can be verified by a classical verifier) and outputs a zero-knowledge PoQK for R that can be verified by classical parties. 
The direct implication of our result is that Mahadev’s protocol for classical verification of quantum computations (FOCS’18) can be turned into a zero-knowledge proof of quantum knowledge with classical verifiers. To the best of our knowledge, we are the first to instantiate such a primitive.
Last updated:  2021-03-03
Multi-Input Quadratic Functional Encryption from Pairings
Shweta Agrawal, Rishab Goyal, Junichi Tomida
We construct the first multi-input functional encryption (MIFE) scheme for quadratic functions from pairings. Our construction supports a polynomial number of users, where user i, for i in [n], encrypts input x_i to obtain ciphertext CT_i; the key generator provides a key SK_c for a vector c; and decryption, given SK_c and CT_1, ..., CT_n, recovers ⟨c, x ⊗ x⟩ and nothing else, where x = (x_1, ..., x_n). We achieve indistinguishability-based (selective) security against unbounded collusions under the standard bilateral matrix Diffie-Hellman assumption. All previous MIFE schemes either support only inner products (linear functions) or rely on strong cryptographic assumptions such as indistinguishability obfuscation or multi-linear maps.
Last updated:  2021-04-20
Entropy Estimation of Physically Unclonable Functions with Offset Error
Mitsuru Shiozaki, Yohei Hori, Takeshi Fujino
Physically unclonable functions (PUFs) are gaining attention as a promising cryptographic technique, with the main applications including challenge-response authentication and key generation (key storage). When a PUF is applied to these applications, min-entropy estimation is essential. Min-entropy is a measure of the lower bound of the unpredictability of PUF responses. Using the test suite of the National Institute of Standards and Technology (NIST) specification (SP) 800-90B is currently considered the best method for estimating the min-entropy of PUF responses. Several previous studies have estimated the min-entropy of PUFs as well as those of random number generators (RNGs). However, we are doubtful of some of these estimated results; for example, an evaluator can reorder PUF responses to make the PUF performance appear much better. It is also known that the test suite of NIST SP 800-90B has no estimator well suited to PUF responses. In particular, it has been reported that concatenating PUF responses of two-dimensional PUFs, such as an SRAM PUF, into one-dimensional data may obfuscate spatial correlations. In this paper, we explore the inherent problems in min-entropy estimation by using our static random-access memory (SRAM) PUF and our complementary metal-oxide-semiconductor (CMOS) image sensor with a PUF (CIS PUF). We apply three orderings to the PUF responses of our SRAM PUF and CIS PUF: row-direction ordering, column-direction ordering, and random-shuffle ordering. We show how much the min-entropy estimated by NIST SP 800-90B varies and discuss the estimation results. Next, we discuss the threat to PUFs (i.e., the predictability of PUF responses) when a digitizer in a PUF has an offset error. PUF sources are generally defined as the circuits and transistors used to extract intrinsic physical properties and generate device-unique responses. Variation in the manufacturing of circuits and transistors other than the PUF sources, especially digitizers, may cause lower entropy.
We call these circuits and transistors "entropy-loss sources." We investigate the effect of entropy-loss sources on min-entropy theoretically and clarify how much the theoretical results differ from those estimated by NIST SP 800-90B. Finally, we propose an entropy prediction scheme that considers entropy-loss sources (offset error). We show through experiments that the proposed scheme more accurately estimates the min-entropy of PUFs.
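Min-entropy itself is just -log2 of the most likely outcome's probability. The most-common-value estimator below illustrates this on fixed-width symbols; note that it is order-insensitive, which is exactly why order-sensitive estimators (and hence the ordering of PUF responses) matter in such evaluations. This is purely illustrative, not an SP 800-90B implementation:

```python
import math
from collections import Counter

def mcv_min_entropy(bits, block=4):
    """Most-common-value estimate of min-entropy per block-bit symbol:
    H_min = -log2(p_max), where p_max is the empirical probability of the
    most frequent symbol. Reordering the bits within symbols can change the
    estimate, but shuffling whole symbols cannot (the count ignores order)."""
    symbols = [tuple(bits[i:i + block])
               for i in range(0, len(bits) - block + 1, block)]
    p_max = Counter(symbols).most_common(1)[0][1] / len(symbols)
    return -math.log2(p_max)

# Strongly correlated toy response: long runs, as a column-wise bias might produce.
correlated = ([0] * 8 + [1] * 8) * 64
```

On the run-heavy sequence every 4-bit symbol is either 0000 or 1111, so the estimate is 1 bit per 4-bit symbol, far below the ideal 4 bits.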
Last updated:  2021-05-20
Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem
Craig Costello, Michael Meyer, Michael Naehrig
We give a sieving algorithm for finding pairs of consecutive smooth numbers that utilizes solutions to the Prouhet-Tarry-Escott (PTE) problem. Any such solution induces two degree-n polynomials, a(x) and b(x), that differ by a constant integer C and completely split into linear factors in Z[x]. It follows that for suitable integer arguments the two values of a and b differ by 1 and necessarily contain factors of roughly the same size. For a fixed smoothness bound B, restricting the search to pairs of integers that are parameterized in this way increases the probability that they are B-smooth. Our algorithm combines a simple sieve with parametrizations given by a collection of solutions to the PTE problem. The motivation for finding large twin smooth integers lies in their application to compact isogeny-based post-quantum protocols. The recent key exchange scheme B-SIDH and the recent digital signature scheme SQISign both require large primes that lie between two smooth integers; finding such a prime can be seen as a special case of finding twin smooth integers under the additional stipulation that their sum is a prime p. When searching for cryptographic parameters, an implementation of our sieve found primes p for which p-1 and p+1 are noticeably smoother than those of the best previously known parameters of similar size. In targeting higher security levels, our sieve found 376-bit, 384-bit, and 512-bit primes lying between two smooth integers. Our analysis shows that using previously known methods to find high-security instances with comparable smoothness bounds is computationally infeasible.
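The simplest instance of this parametrization (a trivial size-1 "PTE solution") is the pair x^2 - 1 = (x-1)(x+1) and x^2: they differ by 1 and both split into pieces of roughly half the size. A toy sieve over just this pair, far from the paper's multi-polynomial search:

```python
def is_smooth(m, bound):
    """Trial division by 2..bound; True iff every prime factor is <= bound."""
    d = 2
    while d <= bound and m > 1:
        while m % d == 0:
            m //= d
        d += 1
    return m == 1

def twin_smooths_from_squares(bound, limit):
    """If x-1, x, x+1 are all B-smooth, then x^2 - 1 = (x-1)(x+1) and x^2
    are twin B-smooth integers; scan x up to `limit` for such pairs."""
    twins = []
    for x in range(2, limit):
        if all(is_smooth(v, bound) for v in (x - 1, x, x + 1)):
            twins.append((x * x - 1, x * x))
    return twins
```

For bound 5 this finds, e.g., the twins (80, 81) from x = 9; higher-degree PTE solutions play the same role with many more linear factors, which is what makes cryptographically sized twins reachable.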
Last updated:  2022-11-14
Compact Authenticated Key Exchange in the Quantum Random Oracle Model
Haiyang Xue, Man Ho Au, Rupeng Yang, Bei Liang, Haodong Jiang
Several quantum-resistant authenticated key exchange (AKE) protocols have been proposed from supersingular isogenies and lattices. Most of their security analyses are conducted in the classical random oracle model, leaving their security in the quantum random oracle model (QROM) as an open problem. In this paper, we propose a generic construction of two-message AKE in the QROM. It can be regarded as a QROM-secure version of X3LH [Xue et al. ASIACRYPT 2018], a generic AKE based on double-key PKE. We prove that, with some modification, the QROM security of X3LH can be reduced to the one-way security of double-key PKE. Aside from answering open problems on the QROM security of prior AKEs, such as SIAKE [Xu et al. ASIACRYPT 2019] based on supersingular isogeny, 2Kyber-AKE based on Module-LWE, and FSXY, we propose a new construction, CSIAKE, based on commutative supersingular isogenies. Our framework enjoys the following desirable features. First of all, it supports PKEs with non-perfect correctness. Secondly, the basic building block is compact and only requires one-wayness. Finally, the resulting AKE achieves security in the CK+ model as strong as X3LH, and the transformation overhead is low.
Last updated:  2021-05-31
Key Agreement for Decentralized Secure Group Messaging with Strong Security Guarantees
Matthew Weidner, Martin Kleppmann, Daniel Hugenroth, Alastair R. Beresford
Secure group messaging protocols, providing end-to-end encryption for group communication, need to handle mobile devices frequently being offline, group members being added or removed, and the possibility of device compromises during long-lived chat sessions. Existing work targets a centralized network model in which all messages are routed through a single server, which is trusted to provide a consistent total order on updates to the group state. In this paper we adapt secure group messaging for decentralized networks that have no central authority. Servers may still optionally be used, but they are trusted less. We define decentralized continuous group key agreement (DCGKA), a new cryptographic primitive encompassing the core of a decentralized secure group messaging protocol; we give a practical construction of a DCGKA protocol and prove its security; and we describe how to construct a full messaging protocol from DCGKA. In the face of device compromise our protocol achieves forward secrecy and post-compromise security. We evaluate the performance of a prototype implementation, and demonstrate that our protocol has practical efficiency.
Last updated:  2021-04-25
DORY: An Encrypted Search System with Distributed Trust
Emma Dauterman, Eric Feng, Ellen Luo, Raluca Ada Popa, Ion Stoica
Efficient, leakage-free search on encrypted data has remained an unsolved problem for the last two decades; efficient schemes are vulnerable to leakage-abuse attacks, and schemes that eliminate leakage are impractical to deploy. To overcome this tradeoff, we reexamine the system model. We surveyed five companies providing end-to-end encrypted filesharing to better understand what they require from an encrypted search system. Based on our findings, we design and build DORY, an encrypted search system that addresses real-world requirements and protects search access patterns; namely, when a user searches for a keyword over the files within a folder, the server learns only that a search happens in that folder, but does not learn which documents match the search, the number of documents that match, or other information about the keyword. DORY splits trust between multiple servers to protect against a malicious attacker who controls all but one of the servers. We develop new cryptographic and systems techniques to meet the efficiency and trust model requirements outlined by the companies we surveyed. We implement DORY and show that it performs orders of magnitude better than a baseline built on ORAM. Parallelized across 8 servers, each with 16 CPUs, DORY takes 116ms to search roughly 50K documents and 862ms to search over 1M documents.
Last updated:  2020-11-13
Tightly-Secure Authenticated Key Exchange, Revisited
Tibor Jager, Eike Kiltz, Doreen Riepel, Sven Schäge
We introduce new tightly-secure authenticated key exchange (AKE) protocols that are extremely efficient, yet have only a constant security loss and can be instantiated in the random oracle model both from the standard DDH assumption and a subgroup assumption over RSA groups. These protocols can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate a security loss with increased parameters and thus decreased computational efficiency. We use the standard “Single-Bit-Guess” AKE security (with forward secrecy and state corruption) requiring all challenge keys to be simultaneously pseudo-random. In contrast, most previous papers on tightly secure AKE protocols (Bader et al., TCC 2015; Gjøsteen and Jager, CRYPTO 2018; Liu et al., ASIACRYPT 2020) concentrated on a non-standard “Multi-Bit-Guess” AKE security which is known not to compose tightly with symmetric primitives to build a secure communication channel. Our key technical contribution is a new generic approach to construct tightly-secure AKE protocols based on non-committing key encapsulation mechanisms. The resulting DDH-based protocols are considerably more efficient than all previous constructions.
Last updated:  2022-10-26
Compact Dilithium Implementations on Cortex-M3 and Cortex-M4
Denisa O. C. Greconici, Matthias J. Kannwischer, Amber Sprenkels
We present implementations of the lattice-based digital signature scheme Dilithium for the ARM Cortex-M3 and ARM Cortex-M4. Dilithium is one of the three signature finalists of the NIST post-quantum cryptography competition. As our Cortex-M4 target, we use the popular STM32F407-DISCOVERY development board. Compared to the previous speed records on the Cortex-M4 by Ravi, Gupta, Chattopadhyay, and Bhasin, we speed up the key operations by 20%, which together with other optimizations results in speedups of 7%, 15%, and 9% for Dilithium3 key generation, signing, and verification respectively. We also present the first constant-time Dilithium implementation on the Cortex-M3 and use the Arduino Due for benchmarks. For Dilithium3, we achieve on average 2 562 kilocycles for key generation, 10 667 kilocycles for signing, and 2 321 kilocycles for verification. Additionally, we present stack consumption optimizations applying to both our Cortex-M3 and Cortex-M4 implementations. Due to the iterative nature of the Dilithium signing algorithm, there is no optimal way to achieve the best speed and lowest stack consumption at the same time. We present three different strategies for the signing procedure which allow trading more stack and flash memory for faster speed or vice versa. Our implementation of Dilithium3 with the smallest memory footprint uses less than 12 kB. As an additional output of this work, we present the first Cortex-M3 implementations of the key-encapsulation schemes NewHope and Kyber.
Last updated:  2020-10-14
A Simple Protocol to Compare EMFI Platforms
J. Toulemont, N. Ouldei-Tebina, J. M. Galliere, P. Nouet, E. Bourbao, P. Maurine
Several electromagnetic fault injection (EMFI) platforms have been developed in recent years. They rely on different technical solutions, and the figures of merit used in the related datasheets or publications also differ. This makes it difficult to compare the various EMFI platforms and to choose the one best suited to one's own usage. This paper suggests a characterization protocol whose application is fast and requires equipment usually available in labs involved in security characterization. It also introduces an effective solution to enhance (by a factor of 5) the timing resolution of EMFI platforms built around a commercial voltage pulse generator designed to drive a 50 Ohm termination.
Last updated:  2020-10-14
Lattice-based Key Sharing Schemes - A Survey
Prasanna Ravi, James Howe, Anupam Chattopadhyay, Shivam Bhasin
Public key cryptography is an indispensable component used in almost all of our present-day digital infrastructure. However, most if not all of it is predominantly built upon hardness guarantees of number-theoretic problems that can be broken by large-scale quantum computers in the future. Sensing the imminent threat from continued advances in quantum computing, NIST has recently initiated a global standardization process for quantum-resistant public-key cryptographic primitives such as public key encryption, digital signatures and key encapsulation mechanisms. While the process received proposals from various categories of post-quantum cryptography, lattice-based cryptography features most prominently among all the submissions. Lattice-based cryptography offers a very attractive alternative to traditional public-key cryptography mainly due to the variety of lattice-based schemes offering varying flavors of security and efficiency guarantees. In this paper, we survey the evolution of lattice-based key sharing schemes (public key encryption and key encapsulation schemes) and cover various aspects ranging from theoretical security guarantees, general algorithmic frameworks, practical implementation aspects and physical attack security, with special focus on lattice-based key sharing schemes competing in NIST's standardization process. Note that our work focuses on the results available from the second round of NIST's standardization process, while the process had progressed to the third and final round at the time of publishing this document.
Last updated:  2020-10-24
Quarks: Quadruple-efficient transparent zkSNARKs
Srinath Setty, Jonathan Lee
We introduce Xiphos and Kopis, new transparent zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for R1CS. They do not require a trusted setup, and their security relies on the standard SXDH problem. They achieve non-interactivity in the random oracle model using the Fiat-Shamir transform. Unlike prior transparent zkSNARKs, which support either a fast prover, short proofs, or quick verification, our work is the first to simultaneously achieve all three properties (both asymptotically and concretely) and in addition an inexpensive setup phase, thereby providing the first quadruple-efficient transparent zkSNARKs (Quarks). Under both schemes, for an R1CS instance of size n and security parameter , the prover incurs costs to produce a proof of size . In Xiphos, verification time is , and in Kopis it is . In terms of concrete efficiency, compared to prior state-of-the-art transparent zkSNARKs, Xiphos offers the fastest verification; its proof sizes are competitive with those of SuperSonic [EUROCRYPT 2020], a prior transparent SNARK with the shortest proofs in the literature. Xiphos’s prover is fast: its prover is of Spartan [CRYPTO 2020], a prior transparent zkSNARK with the fastest prover in the literature, and is faster than SuperSonic. Kopis, at the cost of increased verification time (which is still concretely faster than SuperSonic), shortens Xiphos’s proof sizes further, thereby producing proofs shorter than SuperSonic. Xiphos and Kopis incur -- lower preprocessing costs for the verifier in the setup phase depending on the baseline. Finally, a byproduct of Kopis is Lakonia, a NIZK for R1CS with -sized proofs, which provides an alternative to Bulletproofs [S&P 2018] with over an order of magnitude faster proving and verification times.
Last updated:  2021-11-18
Dory: Efficient, Transparent arguments for Generalised Inner Products and Polynomial Commitments
Jonathan Lee
This paper presents Dory, a transparent setup, public-coin interactive argument for proving correctness of an inner-pairing product between committed vectors of elements of the two source groups. For an inner product of length , proofs are target group elements, element of each source group and scalars. Verifier work is dominated by an multi-exponentiation in the target group. Security is reduced to the symmetric external Diffie Hellman assumption in the standard model. We also show an argument reducing a batch of two such instances to one, requiring work on the Prover and communication. We apply Dory to build a multivariate polynomial commitment scheme via the Fiat-Shamir transform. For the product of one plus the degree in each variable, Prover work to compute a commitment is dominated by a multi-exponentiation in one source group of size . Prover work to show that a commitment to an evaluation is correct is in general and for univariate or multilinear polynomials, whilst communication complexity and Verifier work are both . Using batching, the Verifier can validate polynomial evaluations for polynomials of size at most with group operations and field operations.
Last updated:  2023-10-30
Classical Verification of Quantum Computations with Efficient Verifier
Nai-Hui Chia, Kai-Min Chung, and Takashi Yamakawa
In this paper, we extend the protocol of classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in the following three steps: - We show that parallel repetition of Mahadev's protocol has negligible soundness error. This gives the first constant-round CVQC protocol with negligible soundness error. In this part, we only assume the quantum hardness of the learning with errors (LWE) problem, as in Mahadev's work. - We construct a two-round CVQC protocol in the quantum random oracle model (QROM) where a cryptographic hash function is idealized to be a random function. This is obtained by applying the Fiat-Shamir transform to the parallel repetition version of Mahadev's protocol. - We construct a two-round CVQC protocol with an efficient verifier in the CRS+QRO model where both prover and verifier can access a (classical) common reference string generated by a trusted third party in addition to quantum access to QRO. Specifically, the verifier can verify a computation in time where is the security parameter. For proving soundness, we assume that a standard model instantiation of our two-round protocol with a concrete hash function (say, SHA-3) is sound and the existence of post-quantum indistinguishability obfuscation and post-quantum fully homomorphic encryption in addition to the quantum hardness of the LWE problem.
Last updated:  2020-10-14
Bent Functions from Cellular Automata
Maximilien Gadouleau, Luca Mariot, Stjepan Picek
In this work, we present a primary construction of bent functions based on cellular automata (CA). We consider the well-known characterization of bent functions in terms of Hadamard matrices and employ some recent results about mutually orthogonal Latin squares (MOLS) based on linear bipermutive CA (LBCA) to design families of Hadamard matrices of the form required for bent functions. In particular, the main question to address in this construction can be reduced to finding a large enough set of coprime polynomials over , which are used to define a set of MOLS via LBCA. This set of MOLS is, in turn, used to define a Hadamard matrix of the specific structure characterizing a bent function. We settle the existence question of such bent functions by proving that the required coprime sets exist if and only if the degree of the involved polynomials is either or , and we count the resulting sets. Next, we check if the functions of variables arising from our construction are EA-equivalent to Maiorana-McFarland functions, observing that most of them are not. Finally, we show how to represent the support of these bent functions as a union of the kernels of the underlying linear CA. This allows us, in turn, to prove that the functions generated by our construction belong to the partial spread class . In particular, we remark that for degree our construction is a particular case of the Desarguesian spread construction.
Last updated:  2020-10-14
(F)unctional Sifting: A Privacy-Preserving Reputation System Through Multi-Input Functional Encryption (extended version)
Alexandros Bakas, Antonis Michalas
Functional Encryption (FE) allows users who hold a specific secret key (known as the functional key) to learn a specific function of encrypted data whilst learning nothing about the content of the underlying data. Considering this functionality and the fact that the field of FE is still in its infancy, we sought a route to apply this potent tool to solve the existing problem of designing decentralised additive reputation systems. To this end, we first built a symmetric FE scheme for the norm of a vector space, which allows us to compute the sum of the components of an encrypted vector (i.e. the votes). Then, we utilized our construction, along with functionalities offered by Intel SGX, to design the first FE-based decentralized additive reputation system with Multi-Party Computation. While our reputation system faces certain limitations, this work is amongst the first attempts that seek to utilize FE in the solution of a real-life problem.
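The sum-revealing functionality can be illustrated with a minimal one-time masking sketch: each vote is padded with a secret random value, and the "functional key" for the sum is simply the sum of the pads, so the evaluator learns the aggregate and nothing about individual votes. All names and parameters below are illustrative assumptions, not the paper's construction.

```python
import secrets

Q = 2 ** 31 - 1  # modulus large enough to hold the aggregate

def keygen(n):
    # One secret pad per voter (master secret key).
    return [secrets.randbelow(Q) for _ in range(n)]

def encrypt(msk, i, vote):
    # Each ciphertext individually is a one-time pad of the vote.
    return (vote + msk[i]) % Q

def functional_key_sum(msk):
    # Releasing only the sum of pads reveals only the sum of votes.
    return sum(msk) % Q

def decrypt_sum(ciphertexts, fk):
    return (sum(ciphertexts) - fk) % Q

votes = [5, 3, 0, 4]
msk = keygen(len(votes))
cts = [encrypt(msk, i, v) for i, v in enumerate(votes)]
print(decrypt_sum(cts, functional_key_sum(msk)))  # prints 12
```

This toy is only one-time secure and lacks the vector/norm generality and SGX integration of the paper; it is meant to convey why a sum-only functional key leaks nothing beyond the tally.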
Last updated:  2021-03-05
Classical vs Quantum Random Oracles
Takashi Yamakawa, Mark Zhandry
In this paper, we study the relationship between the security of cryptographic schemes in the random oracle model (ROM) and in the quantum random oracle model (QROM). First, we introduce the notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol to prove the capability to quantumly access a random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on them, we construct digital signature and public key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption. On the other hand, we give lifting theorems from security in the ROM to that in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures, among others. We also discuss applications of our lifting theorems to quantum query complexity.
Last updated:  2020-11-10
PRINCEv2 - More Security for (Almost) No Overhead
Dušan Božilov, Maria Eichlseder, Miroslav Kneževic, Baptiste Lambin, Gregor Leander, Thorben Moos, Ventzislav Nikov, Shahram Rasoolzadeh, Yosuke Todo, Friedrich Wiemer
In this work, we propose tweaks to the PRINCE block cipher that help us to increase its security without changing the number of rounds or round operations. We get substantially higher security for the same complexity. From an implementation perspective, PRINCEv2 comes at an extremely low overhead compared to PRINCE in all key categories, such as area, latency and energy. We expect, as it is already the case for PRINCE, that the new cipher PRINCEv2 will be deployed in various settings.
Last updated:  2020-11-28
A Novel Duplication Based Countermeasure To Statistical Ineffective Fault Analysis
Anubhab Baksi, Vinay B. Y. Kumar, Banashri Karmakar, Shivam Bhasin, Dhiman Saha, Anupam Chattopadhyay
The Statistical Ineffective Fault Analysis, SIFA, is a recent addition to the family of fault-based cryptanalysis techniques. SIFA-based attacks have been shown to be formidable, bypassing virtually all conventional fault attack countermeasures. Reported countermeasures to SIFA incur overheads of at least three times that of the unprotected cipher. We propose a novel countermeasure that reduces this overhead (compared to all existing countermeasures), as we rely on a simple duplication-based technique. In essence, our countermeasure eliminates the observation that enables the attacker to perform SIFA. The core idea is to choose the encoding for the state bits randomly. In this way, each bit of the state is free from statistical bias, which renders SIFA unusable. Our approach protects against stuck-at faults and does not rely on any side-channel countermeasure. We show the effectiveness of the countermeasure through an open-source gate-level fault attack simulation tool. Our approach is probably the simplest and the most cost-effective.
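The bias-elimination idea can be shown in miniature: if each logical bit is stored under a freshly random encoding, every stored wire is uniformly distributed regardless of the bit's value, so an ineffective-fault filter sees no statistical bias. This is an illustrative sketch of the principle, not the paper's gate-level countermeasure.

```python
import secrets

def encode(bit):
    # Randomly chosen encoding: store (mask, bit XOR mask).
    mask = secrets.randbelow(2)
    return mask, bit ^ mask

def decode(share0, share1):
    return share0 ^ share1

# Correctness: decoding always recovers the logical bit.
assert decode(*encode(0)) == 0 and decode(*encode(1)) == 1

# Bias check: even with the logical bit fixed to 1, each stored wire
# is (empirically) a uniform coin, so it carries no exploitable bias.
samples = [encode(1)[0] for _ in range(10_000)]
print(abs(sum(samples) / len(samples) - 0.5) < 0.05)  # prints True
```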
Last updated:  2022-03-25
Fault Attacks In Symmetric Key Cryptosystems
Anubhab Baksi, Shivam Bhasin, Jakub Breier, Dirmanto Jap, Dhiman Saha
Fault attacks are among the well-studied topics in the area of cryptography. These attacks constitute a powerful tool to recover the secret key used in the encryption process. Fault attacks work by forcing a device to operate under non-ideal environmental conditions (such as high temperature) or external disturbances (such as a glitch in the power supply) while performing a cryptographic operation. Research in this direction, which ranges from attacking a particular primitive, to proposing fault countermeasures, to attacking those countermeasures, has grown substantially and will remain an active research interest for the foreseeable future. Hence, a comprehensive yet compact study of the major works is needed. This work, which covers a wide spectrum of present-day research on fault attacks within the purview of symmetric key cryptography, aims to fill the absence of an up-to-date survey. We present nearly all aspects of the topic in a way that is not only understandable for a non-expert reader, but also helpful for an expert as a reference.
Last updated:  2021-09-14
Multi-Party Functional Encryption
Shweta Agrawal, Rishab Goyal, Junichi Tomida
We initiate the study of multi-party functional encryption (MPFE), which unifies and abstracts out various notions of functional encryption that support distributed ciphertexts or secret keys, such as multi-input FE, multi-client FE, decentralized multi-client FE, multi-authority FE, dynamic decentralized FE, ad hoc multi-input FE, and others. Using our framework, we identify several gaps in the literature and provide some constructions to fill these: 1. Multi-Authority ABE with Inner Product Computation: The recent work of Abdalla et al. (ASIACRYPT'20) constructed a novel "composition" of Attribute Based Encryption (ABE) and Inner Product Functional Encryption (IPFE), namely functional encryption schemes that combine the access control functionality of attribute based encryption with the possibility of performing linear operations on the encrypted data. In this work, we extend the access control component to support the much more challenging multi-authority setting, i.e. we "lift" the primitive of ABE in their construction to multi-authority ABE for the same class of access control policies (LSSS structures). This yields, to the best of our knowledge, the first construction of a nontrivial multi-authority FE beyond ABE from simple assumptions on pairings. Our techniques can also be used to generalize the decentralized attribute based encryption scheme of Michalevsky and Joye (ESORICS'18) to support inner product computation on the message. While this scheme only supports inner product predicates, which is less general than those supported by the Lewko-Waters (EUROCRYPT'11) construction, it supports policy hiding, which the latter does not. Our extension inherits these features and is secure based on the -linear assumption, in the random oracle model. 2. Function Hiding DDFE: The novel primitive of dynamic decentralized functional encryption (DDFE) was recently introduced by Chotard et al. (CRYPTO'20), where they also provided the first construction for inner products. However, the primitive of DDFE does not support function hiding, which is a significant limitation for several applications. In this work, we provide a new construction for inner product DDFE which supports function hiding. To achieve our final result, we define and construct the first function hiding multi-client functional encryption (MCFE) scheme for inner products, which may be of independent interest. 3. Distributed Ciphertext-Policy ABE: We provide a distributed variant of the recent ciphertext-policy attribute based encryption scheme constructed by Agrawal and Yamada (EUROCRYPT'20). Our construction supports access policies, and is secure based on "Learning With Errors", relying on the generic bilinear group model as well as the random oracle model. Our new MPFE abstraction predicts meaningful new variants of functional encryption as useful targets for future work.
Last updated:  2020-10-14
Revisiting ECM on GPUs
Jonas Wloka, Jan Richter-Brockmann, Colin Stahlke, Thorsten Kleinjung, Christine Priplata, Tim Güneysu
Modern public-key cryptography is a crucial part of our contemporary life where a secure communication channel with another party is needed. With the advance of more powerful computing architectures – especially Graphics Processing Units (GPUs) – traditional approaches like RSA and Diffie-Hellman schemes are more and more in danger of being broken. We present a highly optimized implementation of Lenstra’s ECM algorithm customized for GPUs. Our implementation uses state-of-the-art elliptic curve arithmetic and optimized integer arithmetic while providing the possibility of arbitrarily scaling ECM’s parameters allowing an application even for larger discrete logarithm problems. Furthermore, the proposed software is not limited to any specific GPU generation and is to the best of our knowledge the first implementation supporting multiple device computation. To this end, for a bound of B1=8,192 and a modulus size of 192 bit, we achieve a throughput of 214 thousand ECM trials per second on a modern RTX 2080 Ti GPU considering only the first stage of ECM. To solve the Discrete Logarithm Problem for larger bit sizes, our software can easily support larger parameter sets such that a throughput of 2,781 ECM trials per second is achieved using B1=50,000, B2=5,000,000, and a modulus size of 448 bit.
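For intuition, ECM stage 1 can be sketched in a few lines of Python: multiply a random curve point by each prime power up to B1, and recover a factor of n the moment a modular inversion fails. This is a plain CPU toy for small numbers, not the paper's optimized GPU implementation; the curve choice and affine arithmetic are deliberately naive.

```python
import math
import random

def inv_mod(v, n):
    # Inversion that reports the gcd when it fails -- that failure is
    # exactly how ECM exposes a factor of n.
    g = math.gcd(v, n)
    if g != 1:
        raise ZeroDivisionError(g)
    return pow(v, -1, n)

def ec_add(P, Q, a, n):
    # Affine short-Weierstrass addition; None is the point at infinity.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * inv_mod(2 * y1 % n, n) % n
    else:
        s = (y2 - y1) * inv_mod((x2 - x1) % n, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

def ec_mul(P, k, a, n):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n, b1, seed=1):
    rng = random.Random(seed)
    while True:
        # Random point and coefficient a; b is implicit, since
        # b = y^2 - x^3 - a*x (mod n) passes through the chosen point.
        x, y, a = (rng.randrange(n) for _ in range(3))
        point = (x, y)
        try:
            for p in range(2, b1 + 1):
                if any(p % d == 0 for d in range(2, int(p ** 0.5) + 1)):
                    continue  # composite, skip
                pe = p
                while pe * p <= b1:  # largest power of p below b1
                    pe *= p
                point = ec_mul(point, pe, a, n)
        except ZeroDivisionError as err:
            g = err.args[0]
            if 1 < g < n:
                return g  # nontrivial factor found

factor = ecm_stage1(101 * 107, b1=20)
print(1 < factor < 101 * 107 and (101 * 107) % factor == 0)  # prints True
```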
Last updated:  2021-06-18
Humanly Computable Passwords as Lattice based OTP generator with LWE
Slawomir Matelski
Safe resource management requires an effective mechanism that identifies a person and his rights to these resources using an appropriate key; the key's degree of security determines not only the property, but sometimes even the life, of its owner. For several decades, this has been based on the security of (bio)material keys, which guarantee only their own authenticity, not that of their owner, owing to the weakness of static password protection. This article presents i-Chip, an innovative challenge-response protocol that implements the Learning with Rounding (LWR) method, a variant of the hard LWE problem, and is based on post-quantum lattice cryptography. The i-Chip scheme requires only a single (albeit detailed) picture, illustrating the grid of a virtual microchip, along with a brief set of rules describing the way such a chip operates. Reverse engineering such a virtual structure is thereby as difficult as the analogous operation on a physical microchip, and the diversity of chips is limited only by the imagination of users and collaborating researchers, inspired by the analogous features of physical elements. We show how the i-Chip generates a one-time password (OTP) or a whole digital signature, even offline on paper documents.
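The core Learning-with-Rounding step behind such an OTP can be sketched as follows: the response to a random challenge vector is the rounded inner product with the secret. The parameters and the protocol framing here are illustrative assumptions, not the i-Chip specification.

```python
import secrets

Q, P, N = 4093, 16, 8  # LWR moduli q > p and secret length (illustrative)

def keygen():
    # The user's long-term secret vector s in Z_q^N.
    return [secrets.randbelow(Q) for _ in range(N)]

def lwr_round(v):
    # Deterministic rounding from Z_q down to Z_p: this rounding noise
    # plays the role of the explicit error term in LWE.
    return (v * P) // Q

def otp_response(secret, challenge):
    # One OTP symbol: the rounded inner product <challenge, s> mod q.
    return lwr_round(sum(a * s for a, s in zip(challenge, secret)) % Q)

secret = keygen()
challenge = [secrets.randbelow(Q) for _ in range(N)]
resp = otp_response(secret, challenge)
print(0 <= resp < P)  # a valid OTP symbol in Z_p: prints True
```

Because the rounding is deterministic, a verifier who shares the secret recomputes the same symbol for the same challenge, with no transmitted error term.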
Last updated:  2020-10-14
Improved Fault Analysis on SIMECK Ciphers
Duc-Phong Le, Rongxing Lu, Ali A. Ghorbani
The advances of the Internet of Things (IoT) have had a fundamental impact and influence in shaping our rich living experiences. However, since IoT devices are usually resource-constrained, lightweight block ciphers have played a major role in serving as a building block for secure IoT protocols. In CHES 2015, SIMECK, a family of block ciphers, was designed for resource-constrained IoT devices. Since its publication, there have been many analyses of its security. In this paper, under the one-bit-flip model, we propose a new efficient fault analysis attack on SIMECK ciphers. Compared to previously reported attacks, our attack can recover the full master key by injecting faults into only a single round of all SIMECK family members. This property is crucial, as it is infeasible for an attacker to inject faults into different rounds of a SIMECK implementation on real-world IoT devices. Specifically, our attack is characterized by a deep analysis of the differential trail between the correct and faulty intermediate ciphertexts. Extensive simulation evaluations are conducted, and the results demonstrate the effectiveness and correctness of our proposed attack.
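The one-bit-flip model can be illustrated on a toy Feistel cipher with a SIMECK-like round function: flipping a single state bit in the last round yields a structured difference between the correct and faulty ciphertexts, which is the signal a differential fault analysis exploits. Everything below is a toy illustration, not SIMECK itself or the paper's attack.

```python
MASK = 0xFF  # 8-bit halves, 16-bit toy state

def rot(v, r):
    return ((v << r) | (v >> (8 - r))) & MASK

def round_fn(v, k):
    # SIMECK-like round function: AND of rotations, XOR rotation, XOR key.
    return ((v & rot(v, 5)) ^ rot(v, 1) ^ k) & MASK

def encrypt(left, right, keys, fault_round=None, fault_bit=0):
    for i, k in enumerate(keys):
        if i == fault_round:
            left ^= 1 << fault_bit  # the single injected bit-flip
        left, right = right ^ round_fn(left, k), left
    return left, right

keys = [0x3A, 0xC5, 0x19, 0x7E]
correct = encrypt(0x12, 0x34, keys)
faulty = encrypt(0x12, 0x34, keys, fault_round=3, fault_bit=2)
diff = (correct[0] ^ faulty[0], correct[1] ^ faulty[1])
# The right half differs in exactly the flipped bit, while the left
# half carries the fault through the round function.
print(diff[1] == 1 << 2 and diff[0] != 0)  # prints True
```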
Last updated:  2021-07-22
Multi-stage Proof-of-Works: Properties and Vulnerabilities
Paolo D'Arco, Zahra Ebadi Ansaroudi, Francesco Mogavero
Since its appearance in 2008, Bitcoin has attracted considerable attention. So far, it has been the most successful cryptocurrency, with the highest market capitalization. Nevertheless, due to the method it uses to append new transactions and blocks to the blockchain, based on a Proof-of-Work, Bitcoin suffers from poor scalability, which strongly limits the number of transactions per second and, hence, its adoption as a global payment layer for everyday uses. In this paper we analyze some recent proposals to address this issue. In particular, we focus our attention on permissionless blockchain protocols whose distributed consensus algorithm relies on a Proof-of-Work composed of more than one sequential hash-puzzle, instead of a single one. Such protocols are referred to as multi-stage Proof-of-Works. We consider a simplified scenario, commonly used in the blockchain literature, in which the number of miners, their hashing powers, and the difficulty values of the hash-puzzles are constant over time. Our contribution is threefold. Firstly, we derive a closed-form expression for the mining probability of a miner, that is, the probability that the miner completes the Proof-of-Work of the next block to be added to the blockchain before any other miner does. Secondly, we show that in multi-stage Proof-of-Works the mining probability might not be strictly related to the miner's hashing power. This feature could be exploited by a smart miner, and could open up potential fairness and decentralization issues in mining. Finally, we focus on a more restricted scenario and present two attacks which can be applied successfully against multi-stage Proof-of-Works: a Selfish Mining attack and a Selfish Stage-Withholding attack. We show that both are effective, and we point out that Selfish Stage-Withholding can be seen as a complementary strategy to Selfish Mining, which in some cases increases the selfish miner's profitability in the Selfish Mining attack.
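A multi-stage Proof-of-Work can be sketched as a chain of sequential hash puzzles, where each stage's winning hash seeds the next puzzle, so the stages cannot be parallelized. Stage count and difficulty below are illustrative, not parameters from the paper.

```python
import hashlib

def solve_stage(seed: bytes, difficulty_bits: int) -> int:
    # Brute-force a nonce whose SHA-256 hash falls below the target.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def multi_stage_pow(block: bytes, stages: int, difficulty_bits: int):
    nonces, seed = [], block
    for _ in range(stages):
        nonce = solve_stage(seed, difficulty_bits)
        nonces.append(nonce)
        # Chain: the winning hash of this stage seeds the next puzzle.
        seed = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
    return nonces

def verify(block: bytes, nonces, difficulty_bits: int) -> bool:
    target, seed = 1 << (256 - difficulty_bits), block
    for nonce in nonces:
        h = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >= target:
            return False
        seed = h
    return True

nonces = multi_stage_pow(b"block-header", stages=3, difficulty_bits=8)
print(verify(b"block-header", nonces, 8))  # prints True
```

Note that verification replays the same chain cheaply, while solving requires an expected 2^difficulty_bits hashes per stage, performed strictly in sequence.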
Last updated:  2023-10-20
MuSig2: Simple Two-Round Schnorr Multi-Signatures
Jonas Nick, Tim Ruffing, and Yannick Seurin
Multi-signatures enable a group of signers to produce a joint signature on a joint message. Recently, Drijvers et al. (S&P'19) showed that all thus far proposed two-round multi-signature schemes in the pure DL setting (without pairings) are insecure under concurrent signing sessions. While Drijvers et al. proposed a secure two-round scheme, this efficiency in terms of rounds comes with the price of having signatures that are more than twice as large as Schnorr signatures, which are becoming popular in cryptographic systems due to their practicality (e.g., they will likely be adopted in Bitcoin). If one needs a multi-signature scheme that can be used as a drop-in replacement for Schnorr signatures, then one is forced to resort either to a three-round scheme or to sequential signing sessions, both of which are undesirable options in practice. In this work, we propose MuSig2, a simple and highly practical two-round multi-signature scheme. This is the first scheme that simultaneously i) is secure under concurrent signing sessions, ii) supports key aggregation, iii) outputs ordinary Schnorr signatures, iv) needs only two communication rounds, and v) has similar signer complexity as ordinary Schnorr signatures. Furthermore, it is the first multi-signature scheme in the pure DL setting that supports preprocessing of all but one round, effectively enabling a non-interactive signing process without forgoing security under concurrent sessions. We prove the security of MuSig2 in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and the algebraic group model. Both our proofs rely on a weaker variant of the OMDL assumption.
Last updated:  2021-06-18
Lattice Reduction with Approximate Enumeration Oracles: Practical Algorithms and Concrete Performance
Martin R. Albrecht, Shi Bai, Jianwei Li, Joe Rowell
This work provides a systematic investigation of the use of approximate enumeration oracles in BKZ, building on recent technical progress on speeding up lattice enumeration: relaxing (the search radius of) enumeration, and extended preprocessing, which preprocesses in a larger rank than the enumeration rank. First, we heuristically justify that relaxing enumeration with certain extreme pruning asymptotically achieves an exponential speed-up for reaching the same root Hermite factor (RHF). Second, we perform simulations/experiments to validate this and the performance of relaxed enumeration with numerically optimised pruning for both regular and extended preprocessing. Upgrading BKZ with such approximate enumeration oracles gives rise to our main result, namely a practical and faster (w.r.t. previous work) polynomial-space lattice reduction algorithm for reaching the same RHF in practical and cryptographic parameter ranges. We assess its concrete time/quality performance with extensive simulations and experiments.
Last updated:  2021-10-04
Correlated Randomness Teleportation via Semi-trusted Hardware - Enabling Silent Multi-party Computation
Yibiao Lu, Bingsheng Zhang, Hong-Sheng Zhou, Weiran Liu, Lei Zhang, Kui Ren
With the advancement of trusted execution environment (TEE) technologies, hardware-supported secure computing has become increasingly popular due to its efficiency. During the protocol execution, typically, the players need to contact a third-party server for remote attestation, ensuring the validity of the involved trusted hardware component, such as Intel SGX, as well as the integrity of the computation result. When the hardware manufacturer is not fully trusted, sensitive information may be leaked to the third-party server through backdoors, steganography, kleptography, etc. In this work, we introduce a new security notion called the semi-trusted hardware model, where the adversary is allowed to passively or maliciously corrupt the hardware. Therefore, she can learn the input of the hardware component and might also tamper with its output. We then show how to utilize such semi-trusted hardware for correlated randomness teleportation. When the semi-trusted hardware is instantiated by Intel SGX, to generate 10k random OTs, our protocol is 24X and 450X faster than the EMP-IKNP-ROT in the LAN and WAN setting, respectively. When SGX is used to teleport garbled circuits, the resulting two-party computation protocol is 5.3-5.7X and 43-47X faster than the EMP-SH2PC in the LAN and WAN setting, respectively, for the AES-128, SHA-256, and SHA-512 evaluation. We also show how to achieve malicious security with little overhead.
Last updated:  2020-10-14
TranSCA: Cross-Family Profiled Side-Channel Attacks using Transfer Learning on Deep Neural Networks
Dhruv Thapar, Manaar Alam, Debdeep Mukhopadhyay
Side-channel analysis (SCA) utilizing the power consumption of a device has proved to be an efficient technique for recovering secret keys by exploiting implementation vulnerabilities of mathematically secure cryptographic algorithms. Recently, Deep Learning-based profiled SCA (DL-SCA) has gained popularity, where an adversary trains a deep learning model using profiled traces obtained from a dummy device (a device similar to the target device) and uses the trained model to retrieve the secret key from the target device. \emph{However, for efficient key recovery from the target device, training such a model requires a large number of profiled traces from the dummy device and extensive training time}. In this paper, we propose \emph{TranSCA}, a new DL-SCA strategy that addresses this issue. \emph{TranSCA} works in three steps -- an adversary (1) performs a one-time training of a \emph{base model} using profiled traces from \emph{any} device, (2) fine-tunes the parameters of the \emph{base model} using significantly fewer profiled traces from a dummy device with the aid of a \emph{transfer learning} strategy, in less time than training from scratch, and (3) uses the fine-tuned model to attack the target device. We validate \emph{TranSCA} on simulated power traces created to represent different FPGA families. Experimental results show that the transfer learning strategy makes it possible to attack a new device using knowledge of another device, even if the new device belongs to a different family. Moreover, \emph{TranSCA} requires far fewer power traces from the dummy device than DL-SCA applied without any prior knowledge.
Last updated:  2021-02-24
Improved Reduction Between SIS Problems over Structured Lattices
ZaHyun Koo, Yongwoo Lee, Joon-Woo Lee, Jong-Seon No, Young-Sik Kim
Lattice-based cryptographic schemes are constructed from hard problems on algebraically structured lattices, such as the short integer solution (SIS) problem. These problems are called ring-SIS (R-SIS) and, in its generalized version, module-SIS (M-SIS). It has generally been considered that problems defined on module lattices are harder than problems defined on ideal lattices. However, Koo, No, and Kim showed that R-SIS is harder than M-SIS under some norm constraints on R-SIS. This reduction has two shortcomings: the rank of the module is limited to about half of the number of R-SIS instances, and the comparison is not performed with the same modulus for R-SIS and M-SIS. In this paper, we show that R-SIS is harder than M-SIS with the same modulus under some constraint on R-SIS. We also show that R-SIS with a prime modulus is harder than M-SIS with a composite modulus that is divided by . In particular, through the reduction from M-SIS to R-SIS with the same modulus, the rank of the module is extended from half of the number of R-SIS instances to the full number of R-SIS instances. Finally, we show that R-SIS is harder than M-SIS under a constraint that is tighter than the M-SIS constraint in previous work.
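As a toy illustration of the underlying problem (not the paper's reduction), the sketch below builds a tiny SIS instance and finds a short solution via the standard pigeonhole argument: since 2^m > q^n, two binary vectors must collide under A, and their difference is a nonzero solution in {-1,0,1}^m. The matrix and parameters are made up.

```python
from itertools import product

# Toy SIS instance: find a short nonzero x with A x = 0 (mod q).
q, n, m = 5, 2, 5                     # 2^m = 32 > q^n = 25, so a {0,1}-collision must exist
A = [[1, 2, 3, 4, 0],
     [2, 0, 1, 3, 4]]                 # fixed illustrative matrix over Z_q

def apply(A, x):
    return tuple(sum(a*xi for a, xi in zip(row, x)) % q for row in A)

seen = {}
for x in product((0, 1), repeat=m):
    y = apply(A, x)
    if y in seen:                     # pigeonhole: two binary vectors collide
        z = tuple(a - b for a, b in zip(x, seen[y]))   # short solution in {-1,0,1}^m
        break
    seen[y] = x

print(z, apply(A, z))                 # A z = 0 (mod q)
```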
Last updated:  2020-10-15
Asymptotically Good Multiplicative LSSS over Galois Rings and Applications to MPC over Z/p^k Z
Mark Abspoel, Ronald Cramer, Ivan Damgård, Daniel Escudero, Matthieu Rambaud, Chaoping Xing, Chen Yuan
We study information-theoretic multiparty computation (MPC) protocols over rings that have good asymptotic communication complexity for a large number of players. An important ingredient for such protocols is arithmetic secret sharing, i.e., linear secret-sharing schemes with multiplicative properties. The standard way to obtain these over fields is with a family of linear codes , such that , and are asymptotically good (strongly multiplicative). For our purposes here it suffices if the square code is not the whole space, i.e., has codimension at least 1 (multiplicative). Our approach is to lift such a family of codes defined over a finite field to a Galois ring, which is a local ring that has as its residue field and that contains as a subring, and thus enables arithmetic that is compatible with both structures. Although arbitrary lifts preserve the distance and dual distance of a code, as we demonstrate with a counterexample, the multiplicative property is not preserved. We work around this issue by showing a dedicated lift that preserves \emph{self-orthogonality} (as well as distance and dual distance), for . Self-orthogonal codes are multiplicative, therefore we can use existing results of asymptotically good self-dual codes over fields to obtain arithmetic secret sharing over Galois rings. For we obtain multiplicativity by using existing techniques of secret-sharing using both and , incurring a constant overhead. As a result, we obtain asymptotically good arithmetic secret-sharing schemes over Galois rings. With these schemes in hand, we extend existing field-based MPC protocols to obtain MPC over , in the setting of a submaximal adversary corrupting less than a fraction of the players, where is arbitrarily small. We consider 3 different corruption models. For passive and active security with abort, our protocols communicate bits per multiplication. 
For full security with guaranteed output delivery we use a preprocessing model and get bits per multiplication in the online phase and bits per multiplication in the offline phase. Thus, we obtain true linear bit complexities, without the common assumption that the ring size depends on the number of players.
Last updated:  2020-11-25
Boolean Ring Cryptographic Equation Solving
Sean Murphy, Maura Paterson, Christine Swart
This paper considers multivariate polynomial equation systems over GF(2) that have a small number of solutions. It gives a new method, EGHAM2, for solving such systems of equations that uses properties of the Boolean quotient ring to potentially reduce memory and time complexity relative to existing XL-type or Groebner basis algorithms applied in this setting. The paper also establishes a direct connection between solving such a multivariate polynomial equation system over GF(2), an MQ problem, and an instance of the LPN problem.
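The Boolean quotient ring exploited here is GF(2)[x_1,…,x_n] modulo the field equations x_i^2 = x_i, in which every polynomial function is multilinear. The toy sketch below (not the EGHAM2 method) simply brute-forces a hypothetical three-variable system to show what a system with a small number of solutions looks like.

```python
from itertools import product

# In the Boolean quotient ring GF(2)[x1..xn]/(x1^2 - x1, ..., xn^2 - xn),
# every polynomial function is multilinear, since x^2 = x for x in {0,1}.
# Hypothetical toy system over GF(2):  x*y + z = 1,  x + y = 1,  y*z = 0
def system(x, y, z):
    return ((x*y ^ z) == 1, (x ^ y) == 1, (y*z) == 0)

solutions = [v for v in product((0, 1), repeat=3) if all(system(*v))]
print(solutions)
```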
Last updated:  2021-06-11
Broadcast-Optimal Two Round MPC with an Honest Majority
Ivan Damgård, Bernardo Magri, Divya Ravi, Luisa Siniscalchi, Sophia Yakoubov
This paper closes the question of the possibility of two-round MPC protocols achieving different security guarantees with and without the availability of broadcast in any given round. Cohen et al. (Eurocrypt 2020) study this question in the dishonest majority setting; we complete the picture by studying the honest majority setting. In the honest majority setting, given broadcast in both rounds, it is known that the strongest guarantee — guaranteed output delivery — is achievable (Gordon et al. Crypto 2015). We show that, given broadcast in the first round only, guaranteed output delivery is still achievable. Given broadcast in the second round only, we give a new construction that achieves identifiable abort, and we show that fairness — and thus guaranteed output delivery — are not achievable in this setting. Finally, if only peer-to-peer channels are available, we show that the weakest guarantee — selective abort — is the only one achievable for corruption thresholds and for and . On the other hand, it is already known that selective abort can be achieved in these cases. In the remaining cases, i.e., and , it is known (from the work of Ishai et al. at Crypto 2010, and Ishai et al. at Crypto 2015) that guaranteed output delivery (and thus all weaker guarantees) are possible.
Last updated:  2023-06-24
New Representations of the AES Key Schedule
Gaëtan Leurent, Clara Pernot
In this paper we present a new representation of the AES key schedule, with implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to linear transformation. Surprisingly, this property has not been described in the literature in more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results of AES-based schemes. First, we observe that iterating an odd number of key schedule rounds results in a permutation with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme. Our new representation also gives efficient ways to combine information from the first subkeys with information from the last subkeys in order to reconstruct the corresponding master key. This results in small improvements to previous attacks: we improve impossible differential attacks against several variants of AES (and Rijndael), and a square attack against AES-192.
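For reference, the schedule whose structure is re-examined above is the standard AES-128 key expansion of FIPS-197. The sketch below implements it in Python, deriving the S-box from GF(2^8) inversion plus the affine map rather than a hard-coded table; the printed word matches the FIPS-197 Appendix A worked example.

```python
# Standard AES-128 key expansion (FIPS-197), the schedule whose new
# representation is studied above.

def gf_mul(a, b):                      # multiplication in GF(2^8) mod x^8+x^4+x^3+x+1
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return r

def sbox(x):
    inv = 0
    if x:                              # x^254 = x^(-1) in GF(2^8)
        inv = x
        for _ in range(253):
            inv = gf_mul(inv, x)
    rot = lambda v, k: ((v << k) | (v >> (8 - k))) & 0xff
    return inv ^ rot(inv, 1) ^ rot(inv, 2) ^ rot(inv, 3) ^ rot(inv, 4) ^ 0x63

def expand_key(key_words):             # 4 key words in -> 44 round-key words out
    w = list(key_words)
    rcon = 1
    for i in range(4, 44):
        t = w[i - 1]
        if i % 4 == 0:
            t = ((t << 8) | (t >> 24)) & 0xffffffff                                  # RotWord
            t = int.from_bytes(bytes(sbox(b) for b in t.to_bytes(4, 'big')), 'big')  # SubWord
            t ^= rcon << 24
            rcon = gf_mul(rcon, 2)
        w.append(w[i - 4] ^ t)
    return w

w = expand_key([0x2b7e1516, 0x28aed2a6, 0xabf71588, 0x09cf4f3c])
print(hex(w[4]))   # FIPS-197 Appendix A gives 0xa0fafe17 for this key
```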
Last updated:  2021-06-24
Adaptive Extractors and their Application to Leakage Resilient Secret Sharing
Nishanth Chandran, Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar
We introduce Adaptive Extractors, which, unlike traditional randomness extractors, guarantee security even when an adversary obtains leakage on the source after observing the extractor output. We make a compelling case for the study of such extractors by demonstrating their use in obtaining adaptive leakage resilience in secret sharing schemes. Specifically, at FOCS 2020, Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman built an adaptively secure leakage resilient secret sharing scheme (LRSS) with both rate and leakage rate being , where is the number of parties. In this work, we build an adaptively secure LRSS that offers an interesting trade-off between the rate, the leakage rate, and the total number of shares from which an adversary can obtain leakage. As a special case, when considering -out-of- secret sharing schemes for threshold (constant ), we build a scheme with a constant rate, constant leakage rate, and allow the adversary leakage from all but of the shares, while giving her the remaining shares completely in the clear. (Prior to this, constant-rate LRSS schemes tolerating adaptive leakage were unknown for any threshold.) Finally, we show applications of our techniques to both non-malleable secret sharing and secure message transmission.
Last updated:  2021-01-24
Bit Security Estimation Using Various Information-Theoretic Measures
Dong-Hoon Lee, Young-Sik Kim, Jong-Seon No
In this paper, we propose various quantitative information-theoretic security reductions that correlate the statistical difference between two probability distributions with the gap in security level between two cryptographic schemes. Security is the most important prerequisite for cryptographic primitives. In general, there are two kinds of security: computational security and information-theoretic security. We focus on the latter in this paper, especially from the viewpoint of bit security, which is a convenient notion for indicating the quantitative security level. We propose tighter and more generalized versions of the information-theoretic security reductions of previous works [1,2]. More specifically, we obtain an about 2.5-bit tighter security reduction than that of the previous work [2], and we devise a further generalized version of the security reduction of the previous work [1] by relaxing the constraint on the upper bound of the information-theoretic measure, that is, -efficiency. Through this work, we propose a methodology to estimate the effect on the security level when a -bit secure original scheme is implemented on a -bit precision system. (Here, can be set to any value as long as a certain condition is satisfied.) In the previous work [1], was fixed as , but the proposed scheme is generalized so that the security level and precision can vary independently. This makes a significant difference: the previous result cannot provide an exact lower bound on the security level for the case , and can only provide inaccurate relative information about the security level. In contrast, the proposed result provides an exact lower bound on the estimated security level as long as the precision satisfies the certain condition. Moreover, we provide diverse types of security reduction formulas for five kinds of information-theoretic measures.
We expect that the proposed schemes can provide an information-theoretic guideline for how much two identical cryptographic schemes that extract their randomness from different probability distributions may differ in security level. In particular, the proposed schemes can be used to quantitatively estimate how much the statistical difference between the ideal distribution and the real distribution affects the security level [8,10,11].
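As a simple illustration of the kind of quantity these reductions relate (not the paper's exact formulas), the sketch below computes the statistical (total variation) distance between an ideal distribution and a finite-precision approximation of it; the query-times-distance heuristic in the comment is a standard coarse bound, not the tighter reduction proposed above. All numbers are made up.

```python
import math

def tv_distance(p, q):
    """Total variation (statistical) distance between two discrete distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Ideal distribution vs. a finite-precision approximation of it (illustrative).
ideal = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
real  = {0: 0.2500, 1: 0.2500, 2: 0.2499, 3: 0.2501}

delta = tv_distance(ideal, real)
# A common (coarse) heuristic: an adversary making Q queries gains at most
# Q * delta distinguishing advantage, so roughly log2(1/delta) bits of
# security remain attributable to the statistical gap alone.
print(delta, math.log2(1 / delta))
```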
Last updated:  2020-10-09
A New Code Based Signature Scheme without Trapdoors
Zhe Li, Chaoping Xing, Sze Ling Yeo
We present a signature scheme for Hamming-metric random linear codes via the Schnorr-Lyubashevsky framework that employs rejection sampling on appropriate probability distributions instead of using trapdoors. Such an approach has been widely believed to be more challenging for linear codes than for lattices with Gaussian distributions. We prove that our signature scheme achieves EUF-CMA security under the assumption of the decoding-one-out-of-many problem, or strong EUF-CMA security under the assumption of the codeword finding problem with relaxed parameters. We provide an instantiation of the signature scheme based on Ring-LPN instances as well as quasi-cyclic codes and present some concrete parameters. In addition, a proof-of-concept implementation of the scheme is provided. We compare our scheme with previous unsuccessful similar attempts and provide a rigorous security analysis of our scheme. Our construction primarily relies on an efficient rejection sampling lemma for binary linear codes with respect to suitably defined variants of the binomial distribution. Essentially, the rejection sampling lemma indicates that adding a small-weight vector to a large-weight vector has no significant effect on the distribution of the large-weight vector. Concretely, we prove that if the large weight is at least the square of the small weight and the large-weight vector follows a binomial distribution, the distribution of the sum of the two vectors can be efficiently adjusted, via the rejection step, to a binomial distribution that is independent of the small-weight vector. As a result, our scheme outputs a signature distribution that is independent of the secret key. Compared to two existing code-based signature schemes, namely Durandal and Wave, the security of our scheme is reduced to full-fledged hard coding problems, i.e., the codeword finding problem and the syndrome decoding problem for random linear codes.
By contrast, the security of the Durandal and Wave schemes is reduced to the newly introduced product spaces subspaces indistinguishability problem and the indistinguishability of generalized codes problem, respectively. We believe that building our scheme upon more mature hard coding problems provides stronger confidence in the security of our signature scheme.
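As a generic illustration of the Schnorr-Lyubashevsky rejection step (using Gaussians for simplicity rather than the binomial variants above; all parameters are illustrative), the sketch below makes the output distribution independent of a secret shift c.

```python
import random, math
random.seed(1)

# Generic Lyubashevsky-style rejection sampling: we output z = y + c
# (y from a wide distribution, c a small secret-dependent shift) but want
# the final distribution of z to be the secret-independent target f.
# Accept z with probability f(z) / (M * g_c(z)), where g_c is the actual
# (shifted) distribution and M bounds f/g_c almost everywhere.

def gauss_pdf(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma))

def sample_independent_of_c(c, sigma=20.0, M=3.0):
    while True:
        y = random.gauss(0, sigma)
        z = y + c                                  # shifted by the "secret" c
        # target: Gaussian centred at 0; actual: Gaussian centred at c
        if random.random() < gauss_pdf(z, sigma) / (M * gauss_pdf(z - c, sigma)):
            return z

samples = [sample_independent_of_c(c=5.0) for _ in range(2000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))   # close to 0 despite every z being shifted by c = 5
```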
Last updated:  2020-10-09
Adversarial Level Agreements for Two-Party Protocols
Marilyn George, Seny Kamara
Adversaries in cryptography have traditionally been modeled as either semi-honest or malicious. Over the years, however, several lines of work have investigated the design of cryptographic protocols against rational adversaries. The best-known example is covert adversaries in secure computation (Aumann & Lindell, TCC '07), which are adversaries that wish to deviate from the protocol but without being detected. To protect against such adversaries, protocols secure in the covert model guarantee that deviations are detected with probability at least , which is known as the deterrence factor. In this work, we initiate the study of contracts in cryptographic protocol design. We show how to design, use and analyze contracts between parties for the purpose of incentivizing honest behavior from rational adversaries. We refer to such contracts as adversarial level agreements (ALA). The framework we propose can result in more efficient protocols and can enforce deterrence in covert protocols, meaning that one can guarantee that a given deterrence factor will deter the adversary instead of assuming it. We show how to apply our framework to two-party protocols, including secure two-party computation (2PC) and proofs of storage (PoS). In the 2PC case, we integrate ALAs into publicly-verifiable covert protocols and show, through a game-theoretic analysis, how to set the parameters of the ALA to guarantee honest behavior. We do the same in the setting of PoS, which are two-party protocols that allow a client to efficiently verify the integrity of a file stored in the cloud.
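A minimal sketch of the expected-utility calculation behind such a contract, with made-up payoff numbers: a rational covert adversary deviates only if the expected gain outweighs the expected contract penalty, so the penalty can be set to make honesty optimal.

```python
# Expected-utility view of a covert adversary facing an adversarial level
# agreement (ALA). All payoff numbers below are hypothetical.

def cheating_utility(gain, penalty, deterrence):
    """Expected utility of deviating: win `gain` if undetected, pay `penalty` if caught."""
    return (1 - deterrence) * gain - deterrence * penalty

def min_penalty_to_deter(gain, deterrence):
    """Smallest contract penalty making honest behaviour (utility 0) weakly preferable."""
    return (1 - deterrence) * gain / deterrence

gain, deterrence = 100.0, 0.5           # hypothetical values
p = min_penalty_to_deter(gain, deterrence)
print(p, cheating_utility(gain, p, deterrence))  # at penalty p, cheating utility is 0
```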
Last updated:  2021-06-12
Random-index PIR and Applications
Craig Gentry, Shai Halevi, Bernardo Magri, Jesper Buus Nielsen, Sophia Yakoubov
Private information retrieval (PIR) lets a client retrieve an entry from a database without the server learning which entry was retrieved. Here we study a weaker variant that we call \emph{random-index PIR} (RPIR), where the retrieved index is an \emph{output} rather than an input of the protocol, and is chosen at random. RPIR is clearly weaker than PIR, but it suffices for some interesting applications and may be realized more efficiently than full-blown PIR. We report here on two lines of work, both tied to RPIR but otherwise largely unrelated. The first line of work studies RPIR as a primitive on its own. Perhaps surprisingly, we show that RPIR is in fact equivalent to PIR when there are no restrictions on the number of communication rounds. On the other hand, RPIR can be implemented in a ``noninteractive'' setting (with pre-processing), which is clearly impossible for PIR. For two-server RPIR we even show a truly noninteractive solution, offering information-theoretic security without any pre-processing. The other line of work, which was the original motivation for this work, uses RPIR to improve on the recent work of Benhamouda et al. (TCC'20) for maintaining secret values on public blockchains. Their solution depends on a method for selecting many random public keys from a PKI while hiding most of the selected keys from an adversary. However, the method they proposed is vulnerable to a double-dipping attack, limiting its resilience. Here we observe that an RPIR protocol, where the client is implemented via secure MPC, can eliminate that vulnerability. We thus get a secrets-on-blockchain protocol (and more generally large-scale MPC) that is resilient to any fraction of corrupted parties, resolving the main open problem left open by Benhamouda et al. As the client in this solution is implemented via secure MPC, it really brings home the need to make it as efficient as possible.
We thus strive to explore whatever efficiency gains we can get by using RPIR rather than PIR. We achieve further gains by using \emph{batch RPIR}, where multiple indexes are retrieved at once. Lastly, we observe that this application can make do with a weaker security guarantee than full RPIR, and we show that this weaker variant can be realized even more efficiently. We discuss one protocol in particular that may be attractive for practical implementations.
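For context, the classic two-server information-theoretic PIR scheme (this is textbook PIR, not the RPIR protocol above): each server's query is a uniformly random subset of indices, so individually it reveals nothing about the target index.

```python
import random
random.seed(7)

# Classic 2-server information-theoretic PIR. Both servers hold the same
# n-bit database; each receives a uniformly random-looking subset, and the
# XOR of their answers equals the target bit, since S1 ^ S2 = {i}.

db = [1, 0, 1, 1, 0, 0, 1, 0]          # shared database (illustrative)
n, i = len(db), 5                       # client wants bit i

S1 = {j for j in range(n) if random.random() < 0.5}   # uniformly random subset
S2 = S1 ^ {i}                           # same set with index i toggled

a1 = 0
for j in S1:
    a1 ^= db[j]                         # server 1 XORs its selected bits
a2 = 0
for j in S2:
    a2 ^= db[j]                         # server 2 likewise

print(a1 ^ a2 == db[i])                 # True: answers XOR to the target bit
```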
Last updated:  2021-05-19
Doubly Efficient Interactive Proofs for General Arithmetic Circuits with Linear Prover Time
Jiaheng Zhang, Tianyi Liu, Weijie Wang, Yinuo Zhang, Dawn Song, Xiang Xie, Yupeng Zhang
We propose a new doubly efficient interactive proof protocol for general arithmetic circuits. The protocol generalizes the doubly efficient interactive proof for layered circuits proposed by Goldwasser, Kalai and Rothblum to arbitrary circuits while preserving the optimal prover complexity that is strictly linear in the size of the circuits. The proof size remains succinct for low-depth circuits and the verifier time is sublinear for structured circuits. We then construct a new zero-knowledge argument scheme for general arithmetic circuits using our new interactive proof protocol together with polynomial commitments. Not only does our new protocol achieve optimal prover complexity asymptotically, but it is also efficient in practice. Our experiments show that it takes only 1 second to generate the proof for a circuit with 600,000 gates, which is 7 times faster than the original interactive proof protocol on the corresponding layered circuit. The proof size is 229 kilobytes and the verifier time is 0.56 seconds. Our implementation can take general arithmetic circuits generated by existing tools directly, without transforming them to layered circuits with high overhead on the size of the circuits.
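The core subroutine of GKR-style doubly efficient interactive proofs is the sumcheck protocol. The self-contained toy run below (an honest-prover sketch, not the protocol above) checks a claimed sum of a multilinear function over the Boolean cube, with the verifier finishing on a single evaluation of the multilinear extension at a random point.

```python
import random
random.seed(0)

# Sumcheck: the prover convinces the verifier of sum_{x in {0,1}^n} f(x)
# over a prime field, one variable per round; the round polynomial is
# degree 1 since f's extension is multilinear.

P = 2**31 - 1                                  # prime field
n = 3
f = {x: random.randrange(P) for x in range(2**n)}   # f given by its values on {0,1}^n

def mle_eval(f, r):
    """Evaluate the multilinear extension of f at a point r in F_P^n."""
    acc = 0
    for x, v in f.items():
        term = v
        for k in range(n):
            bit = (x >> k) & 1
            term = term * (r[k] if bit else (1 - r[k]) % P) % P
        acc = (acc + term) % P
    return acc

claim = sum(f.values()) % P
table, rs = dict(f), []
for rnd in range(n):
    # Prover sends the round polynomial g via g(0), g(1):
    g0 = sum(v for x, v in table.items() if x % 2 == 0) % P
    g1 = sum(v for x, v in table.items() if x % 2 == 1) % P
    assert (g0 + g1) % P == claim              # verifier's consistency check
    r = random.randrange(P)                    # verifier's challenge
    claim = (g0 + (g1 - g0) * r) % P           # g(r) by linear interpolation
    rs.append(r)
    # Fold the lowest variable at r: table'[y] = (1-r)*table[2y] + r*table[2y+1]
    table = {x >> 1: ((1 - r) * table[x] + r * table[x + 1]) % P
             for x in range(0, 2**(n - rnd), 2)}

assert claim == mle_eval(f, rs)                # final check: one oracle query to f
print("sumcheck accepted")
```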
Last updated:  2021-09-17
The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free
Gianluca Brian, Antonio Faonio, Maciej Obremski, João Ribeiro, Mark Simkin, Maciej Skórski, Daniele Venturi
We show that noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO'09 and SICOMP'12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. We also provide a complete picture of the relationships between different noisy-leakage models, and prove lower bounds showing that our reductions are nearly optimal. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS'19) for certain functions do not only require long transcripts, but also necessarily need to reveal enough information about the inputs.
Last updated:  2021-07-15
Two-round trip Schnorr multi-signatures via delinearized witnesses
Handan Kilinc Alper, Jeffrey Burdges
We construct a two-round Schnorr-based multi-signature scheme (DWMS) by delinearizing two pre-commitments supplied by each signer. DWMS is a secure signature scheme in the algebraic group model (AGM) and the random oracle model (ROM) under the hardness of the one-more discrete logarithm problem and of the 2-entwined sum problem that we introduce in this paper. Our new m-entwined sum problem tweaks the k-sum problem in a scalar field using the associated group. We prove the hardness of our new problem in the AGM assuming the hardness of the discrete logarithm problem in the associated group. We believe that our new problem simplifies the security proofs of multi-signature schemes that use the delinearization of commitments.
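For reference, the base scheme that DWMS lifts to a two-round multi-signature is plain Schnorr. The sketch below instantiates it over a toy Schnorr group with tiny, completely insecure parameters, purely to show the equations s = k + c·x and g^s = R·X^c.

```python
import hashlib, random
random.seed(0)

# Single-signer Schnorr over a toy Schnorr group p = 2q + 1 (parameters are
# far too small for real use; g = 4 generates the order-q subgroup).
p, q, g = 2039, 1019, 4

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

x = random.randrange(1, q)              # secret key
X = pow(g, x, p)                        # public key

def sign(m):
    k = random.randrange(1, q)          # fresh per-signature nonce
    R = pow(g, k, p)
    c = H(R, X, m)
    return R, (k + c * x) % q           # s = k + c*x mod q

def verify(m, R, s):
    return pow(g, s, p) == R * pow(X, H(R, X, m), p) % p

R, s = sign("hello")
print(verify("hello", R, s))
```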
Last updated:  2021-12-02
Taming the many EdDSAs
Konstantinos Chalkias, François Garillot, Valeria Nikolaenko
This paper analyses the security of concrete instantiations of EdDSA by identifying exploitable inconsistencies between standardization recommendations and Ed25519 implementations. We mainly focus on the current ambiguity regarding signature verification equations, binding and malleability guarantees, and incompatibilities between randomized batch and single verification. We give a formulation of the Ed25519 signature scheme that achieves the highest level of security, explaining how each step of the algorithm links to the formal security properties. We develop optimizations to allow for more efficient secure implementations. Finally, we designed a set of edge-case test vectors and ran them against some of the most popular Ed25519 libraries. The results allowed us to assess the security level of those implementations and showed that most libraries do not comply with the latest standardization recommendations. The methodology allows testing the compatibility of different Ed25519 implementations, which is of practical importance for consensus-driven applications.
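The verification ambiguity in question is commonly stated as the choice between the following two equations for Ed25519 (cofactor 8, prime subgroup order $\ell$, challenge $h = H(R \,\|\, A \,\|\, M) \bmod \ell$); cofactored verification is the one compatible with randomized batch verification.

```latex
\begin{align*}
  \text{cofactorless (strict):} \quad & [s]B = R + [h]A \\
  \text{cofactored (as in RFC 8032):} \quad & [8][s]B = [8]R + [8][h]A
\end{align*}
```

A signature accepted by the cofactored equation can be rejected by the cofactorless one when $R$ or $A$ has a small-order component, which is one source of the cross-implementation disagreements studied above.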
Last updated:  2021-10-05
A New Variant of Unbalanced Oil and Vinegar Using Quotient Ring: QR-UOV
Hiroki Furue, Yasuhiko Ikematsu, Yutaro Kiyomura, Tsuyoshi Takagi
The unbalanced oil and vinegar signature scheme (UOV) is a multivariate signature scheme that has remained essentially unbroken for over 20 years. However, it requires a large public key; thus, various methods have been proposed to reduce its size. In this paper, we propose a new variant of UOV with a public key represented by block matrices whose components correspond to elements of a quotient ring. We discuss how the security of our proposed scheme is affected by whether or not the quotient ring is a field. Furthermore, we discuss its security against currently known and newly possible attacks and propose parameters for our scheme. We demonstrate that our proposed scheme can achieve a small public key size without significantly increasing the signature size compared with other UOV variants. For example, the public key size of our proposed scheme is 85.8 KB at security level 3 of NIST's Post-Quantum Cryptography Project, whereas that of compressed Rainbow, a UOV variant and one of the third-round finalists of the NIST PQC project, is 252.3 KB.
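The structural idea of representing quotient-ring elements by matrices can be checked on a toy example: multiplication by a fixed element of F_q[x]/(f) is a linear map, hence a matrix, so ring products and matrix-vector products agree. The modulus and values below are made up for illustration and are unrelated to the proposed parameters.

```python
q = 7
# Illustrative quotient ring F_7[x]/(f) with f(x) = x^3 - x - 1, so x^3 = x + 1.

def polymul_mod(a, b):
    """Multiply a, b in F_q[x]/(x^3 - x - 1); elements are coeff lists [c0, c1, c2]."""
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    for k in (4, 3):                                # reduce x^k via x^3 = x + 1
        prod[k - 3] = (prod[k - 3] + prod[k]) % q   # the "+1" part -> x^(k-3)
        prod[k - 2] = (prod[k - 2] + prod[k]) % q   # the "+x" part -> x^(k-2)
        prod[k] = 0
    return prod[:3]

def matrix_of(a):
    """3x3 matrix of 'multiplication by a' acting on coefficient vectors."""
    cols = [polymul_mod(a, e) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

a, b = [2, 0, 3], [1, 4, 0]
M = matrix_of(a)
via_matrix = [sum(M[i][j] * b[j] for j in range(3)) % q for i in range(3)]
print(polymul_mod(a, b), via_matrix)    # the two representations agree
```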
Last updated:  2020-10-30
Improved (Related-key) Differential Cryptanalysis on GIFT
Fulei Ji, Wentao Zhang, Chunning Zhou, Tianyou Ding
In this paper, we reevaluate the security of GIFT against differential cryptanalysis in both the single-key and related-key scenarios. Firstly, we apply Matsui's algorithm to search for related-key differential trails of GIFT. We add three constraints to limit the search space and search for the optimal related-key differential trails on the limited search space. We obtain related-key differential trails of GIFT-64/128 for up to 15/14 rounds, which are the best results on related-key differential trails of GIFT so far. Secondly, we propose an automatic algorithm to increase the probability of the related-key boomerang distinguisher of GIFT by searching for the clustering of the related-key differential trails utilized in the boomerang distinguisher. We find a 20-round related-key boomerang distinguisher of GIFT-64 with probability 2^-58.557. A 25-round related-key rectangle attack on GIFT-64 is constructed based on it; this is the longest attack on GIFT-64. We also find a 19-round related-key boomerang distinguisher of GIFT-128 with probability 2^-109.626, and propose a 23-round related-key rectangle attack on GIFT-128 utilizing the 19-round distinguisher, which is the longest related-key attack on GIFT-128. A 24-round related-key rectangle attack on GIFT-64 and a 22-round related-key boomerang attack on GIFT-128 are also presented. Thirdly, we search for the clustering of the single-key differential trails. We increase the probability of a 20-round single-key differential distinguisher of GIFT-128 from 2^-121.415 to 2^-120.245. The time complexity of the 26-round differential attack on GIFT-128 is improved from 2^124.415 to 2^123.245.
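Trail searches of this kind start from the difference distribution table (DDT) of the S-box. The sketch below computes the DDT of a made-up 4-bit bijection (not GIFT's S-box) and reads off the best one-round differential probability.

```python
# Difference distribution table (DDT) of a 4-bit S-box: the basic object
# behind searching differential trails. The S-box below is a made-up
# bijection for illustration, not GIFT's.
S = [0x6, 0x5, 0xc, 0xa, 0x1, 0xe, 0x7, 0x9,
     0xb, 0x0, 0x3, 0xd, 0x8, 0xf, 0x4, 0x2]

DDT = [[0] * 16 for _ in range(16)]
for x in range(16):
    for dx in range(16):
        DDT[dx][S[x] ^ S[x ^ dx]] += 1

# Probability of the best nontrivial one-round differential:
best = max(DDT[dx][dy] for dx in range(1, 16) for dy in range(16))
print(best / 16)       # 4/16 = 2^-2 for an optimal 4-bit S-box; larger otherwise
```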
Last updated:  2020-10-09
DAPA: Differential Analysis aided Power Attack on (Non-)Linear Feedback Shift Registers (Extended version)
Siang Meng Sim, Dirmanto Jap, Shivam Bhasin
Differential power analysis (DPA) is a form of side-channel analysis (SCA) that performs statistical analysis on the power traces of cryptographic computations. DPA is applicable to many cryptographic primitives, including block ciphers, stream ciphers and even hash-based message authentication codes (HMAC). At COSADE 2017, Dobraunig et al. presented a DPA on the fresh re-keying scheme Keymill that extracts the bit relations of neighbouring bits in its shift registers, reducing the internal-state guessing space from 128 to 4 bits. In this work, we generalise their methodology and combine it with differential analysis, which we call differential analysis aided power attack (DAPA), to uncover more bit relations and to take into account the linear or non-linear functions that feed back into the shift registers (i.e., LFSRs or NLFSRs). Next, we apply our DAPA to LR-Keymill, the improved version of Keymill designed to resist the aforementioned DPA, and break its 67.9-bit security claim with a 4-bit internal-state guess. We experimentally verified our analysis. In addition, we improve the previous DPA on Keymill by halving the amount of data needed for the attack. We also applied our DAPA to Trivium, a hardware-oriented stream cipher from the eSTREAM portfolio, and reduce the key guessing space from 80 to 14 bits.
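The statistical core of such power attacks can be reproduced on simulated traces. The toy sketch below is a generic Hamming-weight correlation attack (not DAPA itself) with a made-up 4-bit S-box: it recovers a key nibble by correlating predicted leakage against noisy simulated traces.

```python
import random
random.seed(42)

# Toy correlation power analysis: simulated traces leak the Hamming weight
# of S[p ^ k] plus Gaussian noise; the correct key guess predicts the
# leakage best. S is a made-up 4-bit bijection, not from any real cipher.
S = [0x6, 0x5, 0xc, 0xa, 0x1, 0xe, 0x7, 0x9,
     0xb, 0x0, 0x3, 0xd, 0x8, 0xf, 0x4, 0x2]
HW = [bin(v).count("1") for v in range(16)]

secret = 0xB
plaintexts = [random.randrange(16) for _ in range(3000)]
traces = [HW[S[p ^ secret]] + random.gauss(0, 0.5) for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

# Rank every key guess by how well its predicted leakage matches the traces.
scores = {k: abs(pearson([HW[S[p ^ k]] for p in plaintexts], traces))
          for k in range(16)}
recovered = max(scores, key=scores.get)
print(hex(recovered))
```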
Last updated:  2021-01-19
SQISign: compact post-quantum signatures from quaternions and isogenies
Luca De Feo, David Kohel, Antonin Leroux, Christophe Petit, Benjamin Wesolowski
We introduce a new signature scheme, SQISign (for Short Quaternion and Isogeny Signature), from isogeny graphs of supersingular elliptic curves. The signature scheme is derived from a new one-round, high-soundness, interactive identification protocol. Targeting the post-quantum NIST-1 level of security, our implementation results in signatures of bytes, secret keys of bytes and public keys of bytes. In particular, the signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. On a modern workstation, our implementation in C takes 0.6s for key generation, 2.5s for signing, and 50ms for verification. While the soundness of the identification protocol follows from classical assumptions, the zero-knowledge property relies on the second main contribution of this paper. We introduce a new algorithm to find an isogeny path connecting two given supersingular elliptic curves of known endomorphism rings. A previous algorithm to solve this problem, due to Kohel, Lauter, Petit and Tignol, systematically reveals paths from the input curves to a `special' curve. This leakage would break the zero-knowledge property of the protocol. Our algorithm does not directly reveal such a path, and subject to a new computational assumption, we prove that the resulting identification protocol is zero-knowledge.
Last updated:  2020-10-09
Authenticated Dictionaries with Cross-Incremental Proof (Dis)aggregation
Alin Tomescu, Yu Xia, Zachary Newman
Authenticated dictionaries (ADs) are a key building block of many cryptographic systems, such as transparency logs, distributed file systems and cryptocurrencies. In this paper, we propose a new notion of cross-incremental proof (dis)aggregation for authenticated dictionaries, which enables aggregating multiple proofs with respect to different dictionaries into a single, succinct proof. Importantly, this aggregation can be done incrementally and can be later reversed via disaggregation. We give an efficient authenticated dictionary construction from hidden-order groups that achieves cross-incremental (dis)aggregation. Our construction also supports updating digests, updating (cross-)aggregated proofs and precomputing all proofs efficiently. This makes it ideal for stateless validation in cryptocurrencies with smart contracts. As an additional contribution, we give a second authenticated dictionary construction, which can be used in more malicious settings where dictionary digests are adversarially-generated, but features only “one-hop” proof aggregation (with respect to the same digest). We add support for append-only proofs to this construction, which gives us an append-only authenticated dictionary (AAD) that can be used for transparency logs and, unlike previous AAD constructions, supports updating and aggregating proofs.
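As a baseline for comparison (the constructions above use hidden-order groups and support cross-incremental aggregation, which a hash tree does not), a dictionary can be authenticated with a plain Merkle tree over its sorted entries:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Minimal Merkle-tree membership proof over sorted (key, value) pairs --
# the classic baseline an authenticated dictionary improves on.
def build(leaves):
    level = [h(k.encode() + b"=" + v.encode()) for k, v in leaves]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, idx):
    proof = []
    for level in tree[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[idx ^ 1], idx % 2))  # (sibling hash, am-I-the-right-child?)
        idx //= 2
    return proof

def verify(root, leaf_hash, proof):
    cur = leaf_hash
    for sib, is_right in proof:
        cur = h(sib + cur) if is_right else h(cur + sib)
    return cur == root

pairs = sorted([("alice", "1"), ("bob", "2"), ("carol", "3")])
tree = build(pairs)
root = tree[-1][0]
print(verify(root, h(b"bob=2"), prove(tree, pairs.index(("bob", "2")))))
```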
Last updated:  2022-05-13
Hardness of Entropic Module-LWE
Hao Lin, Mingqiang Wang, Jincheng Zhuang, Yang Wang
The Learning with Errors (LWE) problem is a versatile basis for building various post-quantum schemes. Goldwasser et al. [ISC 2010] initiated the study of a variant of this problem called the Entropic LWE problem, where the LWE secret is generated from a distribution with a certain min-entropy. Brakerski and Döttling recently further extended the study in this field: they first proved the hardness of the Entropic LWE problem with unbounded secret [Eurocrypt 2020], then gave a similar result for the Entropic Ring-LWE problem [TCC 2020]. In this work, we systematically study the hardness of the Entropic Module-LWE problem. Adapting the "lossiness approach" to the module setting, we give lower entropy bounds on the secret distribution that guarantee the hardness of the Entropic Module-LWE problem in both the search and decision cases, with results divided into two settings: bounded and unbounded norm. We also show that our search entropy lower bound in the unbounded case is essentially tight. As an application of our bounded result, we deduce the hardness of the Binary Module-LWE problem. One of our central techniques is a new generalized leftover hash lemma over rings, which might be of independent interest.
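For reference, the search variant of the entropic problem studied above can be stated roughly as follows (the notation, that is, the ring, rank d, modulus q and entropy bound k, is generic and not taken from the paper):

```latex
% Search Entropic Module-LWE (sketch), over R_q = \mathbb{Z}_q[x]/(x^n + 1):
\text{given samples } (\mathbf{a}_i,\, b_i) \in R_q^d \times R_q
\text{ with }
b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \bmod q,
\quad
\mathbf{a}_i \xleftarrow{\$} R_q^d,\;
e_i \leftarrow \chi,\;
\mathbf{s} \leftarrow S,\;
H_\infty(S) \ge k,
\text{ recover } \mathbf{s}.
```

The entropic aspect is that the secret distribution S is only constrained by its min-entropy, rather than being uniform.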
Last updated:  2024-10-04
A Complete Analysis of the BKZ Lattice Reduction Algorithm
Jianwei Li and Phong Q. Nguyen
We present the first rigorous dynamic analysis of BKZ, the most widely used lattice reduction algorithm besides LLL: we provide guarantees on the quality of the current lattice basis during execution. Previous analyses were either heuristic or only applied to theoretical variants of BKZ, not the real BKZ implemented in software libraries. Our analysis extends to a generic BKZ algorithm where the SVP-oracle is replaced by an approximate oracle and/or the basis update is not necessarily performed by LLL. As an application, we observe that in certain approximation regimes, it is more efficient to use BKZ with an approximate rather than exact SVP-oracle.
Last updated:  2020-10-09
Round-Efficient Byzantine Broadcast under Strongly Adaptive and Majority Corruptions
Jun Wan, Hanshen Xiao, Srinivas Devadas, Elaine Shi
The round complexity of Byzantine Broadcast (BB) has been a central question in distributed systems and cryptography. In the honest majority setting, expected constant-round protocols have been known for decades, even in the presence of a strongly adaptive adversary. In the corrupt majority setting, however, no protocol with sublinear round complexity is known, even when the adversary is allowed to {\it strongly adaptively} corrupt only 51\% of the players, and even under reasonable setup or cryptographic assumptions. Recall that a strongly adaptive adversary can examine what original message an honest player would have wanted to send in some round, adaptively corrupt the player in the same round, and make it send a completely different message instead. In this paper, we are the first to construct a BB protocol with sublinear round complexity in the corrupt majority setting. Specifically, assuming the existence of time-lock puzzles with suitable hardness parameters and that the decisional linear assumption holds in suitable bilinear groups, we show how to achieve BB in a sublinear number of rounds with overwhelming probability, where the round complexity depends on the total number of players, the maximum number of corrupt players, and the security parameter. Our protocol completes in polylogarithmically many rounds even when 99\% of the players can be corrupt.
Last updated:  2021-10-04
Assessing Lightweight Block Cipher Security using Linear and Nonlinear Machine Learning Classifiers
Ting Rong Lee, Je Sen Teh, Norziana Jamil, Jasy Liew Suet Yan, Jiageng Chen
In this paper, we investigate the use of machine learning classifiers to assess block cipher security from the perspective of differential cryptanalysis. These classifiers were trained using common block cipher features (number of rounds, permutation pattern, truncated input and output differences), making our approach generalizable to an entire class of ciphers. Each data sample represents a truncated differential path, for which the level of security is labelled as secure or insecure by the trained classifier based on the number of differentially active S-boxes. We trained six machine learning classifiers (linear and nonlinear) to perform the security prediction task using a dataset generated from a small-scale generalized Feistel structure (GFS) cipher as a proof-of-concept. Prediction accuracy was further refined by determining the best way to represent features in the dataset during training. We then studied how well these classifiers perform the prediction tasks on ciphers that they were trained on (seen) and those that they were not (unseen). When applied on seen ciphers, the classifiers achieved prediction accuracy results of up to 93% whereas for unseen cipher variants, accuracy results of up to 71% were obtained. Our findings indicate that nonlinear classifiers are better suited for the prediction task. Next, we applied the proposed approach to a full-scale lightweight GFS block cipher, TWINE. By training the best performing nonlinear classifiers (k-nearest neighbour and decision tree classifiers) using data from five other GFS ciphers, we obtained an accuracy of up to 74% when labelling data from TWINE. In addition, the trained classifiers could generalize to a larger number of rounds of TWINE despite being trained using data obtained from fewer rounds. These findings showcase the feasibility of using machine learning classifiers, notably nonlinear variants, as a tool to assess block cipher security.
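The prediction task described above can be illustrated with a toy nearest-neighbour classifier. Everything here is hypothetical: the features (rounds, input/output difference weights), the stand-in "active S-box" count, and the security threshold of 16 are invented for illustration and do not match the paper's datasets or ciphers.

```python
# Toy sketch of the paper's task: classify a truncated differential path as
# secure (1) or insecure (0) from coarse cipher features, using k-NN.
# The labelling rule below is a made-up proxy for counting active S-boxes.

def label(rounds, in_wt, out_wt):
    """Hypothetical stand-in for an active S-box count along a path."""
    active = rounds * min(in_wt, out_wt)
    return 1 if active >= 16 else 0   # 1 = secure, 0 = insecure

def knn_predict(train, x, k=3):
    """Majority vote among the k nearest training samples (squared L2)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], x))[:k]
    votes = sum(y for _, y in nearest)
    return 1 if 2 * votes > k else 0

# Synthetic training set over (rounds, input weight, output weight).
train = [((r, i, o), label(r, i, o))
         for r in range(2, 12) for i in (1, 2, 3) for o in (1, 2, 3)]

assert knn_predict(train, (10, 3, 3)) == 1  # many rounds, heavy diffs
assert knn_predict(train, (2, 1, 1)) == 0   # few rounds, light diffs
```

The real study uses six linear and nonlinear classifiers and far richer feature encodings; this only shows the shape of the supervised problem.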
Last updated:  2021-06-08
Impossibility on the Schnorr Signature from the One-more DL Assumption in the Non-programmable Random Oracle Model
Masayuki Fukumitsu, Shingo Hasegawa
In the random oracle model (ROM), the security of the Schnorr signature is provable from the DL assumption, whereas there is negative circumstantial evidence in the standard model. Fleischhacker, Jager, and Schröder showed that the tight security of the Schnorr signature is unprovable from a strong cryptographic assumption, such as the One-More DL (OM-DL) assumption and the computational and decisional Diffie-Hellman assumptions, in the ROM via a generic reduction, as long as the underlying cryptographic assumption holds. However, it remains open whether the security of the Schnorr signature is provable from a strong assumption via a non-tight and reasonable reduction. In this paper, we show that the security of the Schnorr signature is unprovable from the OM-DL assumption in the non-programmable ROM as long as the OM-DL assumption holds. Our impossibility result is proven via a non-tight Turing reduction.
Last updated:  2020-10-09
BVOT: Self-Tallying Boardroom Voting with Oblivious Transfer
Farid Javani, Alan T. Sherman
A boardroom election is an election with a small number of voters carried out with public communications. We present BVOT, a self-tallying boardroom voting protocol with ballot secrecy, fairness (no tally information is available before the polls close), and dispute-freeness (voters can observe that all voters correctly followed the protocol). BVOT works by using a multiparty threshold homomorphic encryption system in which each candidate is associated with a masked unique prime. Each voter engages in an oblivious transfer with an untrusted distributor: the voter selects the index of a prime associated with a candidate and receives the selected prime in masked form. The voter then casts their vote by encrypting their masked prime and broadcasting it to everyone. The distributor does not learn the voter's choice, and no one learns the mapping between primes and candidates until the audit phase. By hiding the mapping between primes and candidates, BVOT provides voters with insufficient information to carry out effective cheating. The threshold feature prevents anyone from computing any partial tally---until everyone has voted. Multiplying all votes, their decryption shares, and the unmasking factor yields a product of the primes each raised to the number of votes received. In contrast to some existing boardroom voting protocols, BVOT does not rely on any zero-knowledge proof; instead, it uses oblivious transfer to assure ballot secrecy and correct vote casting. Also, BVOT can handle multiple candidates in one election. BVOT prevents cheating by hiding crucial information: an attempt to increase the tally of one candidate might increase the tally of another candidate. After all votes are cast, any party can tally the votes.
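The tallying arithmetic described above can be sketched in the clear, outside any encryption or masking. The candidate-to-prime mapping below is hypothetical; in BVOT the primes are masked, the votes are encrypted under a threshold homomorphic scheme, and the mapping is hidden until the audit phase.

```python
# Toy illustration of BVOT-style tallying: each candidate maps to a distinct
# prime; every cast vote contributes one factor of that prime.  The product
# of all votes therefore encodes each candidate's tally as the exponent of
# its prime, recoverable by repeated division.

CANDIDATE_PRIMES = {"alice": 2, "bob": 3, "carol": 5}  # hypothetical mapping

def cast_votes(choices):
    """Product of the primes selected by the voters (unencrypted sketch)."""
    product = 1
    for choice in choices:
        product *= CANDIDATE_PRIMES[choice]
    return product

def tally(product):
    """Recover each candidate's vote count from the prime exponents."""
    counts = {}
    for name, p in CANDIDATE_PRIMES.items():
        n = 0
        while product % p == 0:
            product //= p
            n += 1
        counts[name] = n
    return counts

votes = ["alice", "bob", "alice", "carol", "alice"]
assert tally(cast_votes(votes)) == {"alice": 3, "bob": 1, "carol": 1}
```

This is why multiplying all decrypted votes and the unmasking factor yields "a product of the primes each raised to the number of votes received": unique factorization makes the tally unambiguous.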
Last updated:  2020-10-09
On the Existence of Weak Keys for QC-MDPC Decoding
Nicolas Sendrier, Valentin Vasseur
We study in this work a particular class of QC-MDPC codes for which the decoding failure rate is significantly larger than for typical QC-MDPC codes with the same parameters. Our purpose is to determine whether the existence of such weak codes impacts the security of cryptographic schemes using QC-MDPC codes as secret keys. A class of weak keys was exhibited in [DGK19]. We generalize it and show that, though their Decoding Failure Rate (DFR) is higher than normal, the set is not large enough to contribute significantly to the average DFR. It follows that with the proper semantically secure transform [HHK17], those weak keys do not affect the IND-CCA status of key encapsulation mechanisms, like BIKE, which use QC-MDPC codes.
Last updated:  2021-06-02
vault1317/signal-dakez: An authenticated key exchange protocol with a public key concealing and a participation deniability designed for secure messaging
Richard B. Riddick
A deniable authenticated key exchange can establish a secure communication channel while leaving no cryptographic evidence of communication. Some well-designed protocols today leave no cryptographic evidence even if some participants betray the others and disclose long-term key material. However, this is no longer enough: if "big data" technology is used to analyse traffic fetched from pivotal nodes, it is not difficult to link sessions to an identity through long-term public keys (although this cannot serve as solid evidence, due to deniability). In this article, we analyse the advantages and disadvantages of existing solutions that are claimed to be deniable to some degree, and propose an authenticated key exchange protocol that conceals the public keys from anyone outside the secure channel while remaining deniable to some degree; a reference implementation is also provided.
Last updated:  2021-07-10
Certificateless Public-key Authenticate Searchable Encryption with Probabilistic Trapdoor Generation
Leixiao Cheng, Fei Meng
Boneh et al. proposed the cryptographic primitive public key encryption with keyword search (PEKS) to enable searching over encrypted data without exposing the privacy of the keyword. Most standard PEKS schemes are vulnerable to inside keyword guessing attacks (KGA): a malicious server may generate a ciphertext on its own and then guess the keyword of the trapdoor by testing. Huang et al. solved this problem by proposing public-key authenticated encryption with keyword search (PAEKS), achieving single trapdoor indistinguishability (TI). Certificateless public-key authenticated encryption with keyword search (CLPAEKS) was first formally proposed by He et al. as a combination of Huang's PAEKS and certificateless public key cryptography (CLPKC). Lin et al. revised He et al.'s work and re-formalized the security requirements for CLPAEKS in terms of both trapdoor indistinguishability and ciphertext indistinguishability. However, the trapdoor generation algorithms of all the above works are deterministic. In this case, given two trapdoors, it is trivial to check whether the keywords embedded in them are identical. This feature conflicts with trapdoor indistinguishability. In this paper, we propose the first CLPAEKS scheme with probabilistic trapdoor generation. A formal proof shows that our scheme is provably secure in the random oracle model.
Last updated:  2021-07-13
Decentralized Asset Custody Scheme with Security against Rational Adversary
Zhaohua Chen, Guang Yang
Asset custody is a core financial service in which the custodian holds assets in safekeeping on behalf of the client. Although traditional custody services are typically endorsed by centralized authorities, decentralized custody schemes have become technically feasible since the emergence of digital assets; furthermore, they are greatly needed by new applications such as blockchains and DeFi (Decentralized Finance). In this work, we propose a decentralized asset custody framework that is able to support a large number of custodians and to safely hold customer assets worth multiple times the value of the total security deposit. The proposed custody scheme distributes custodians and assets into many custodian groups via combinatorial designs, where each group fully controls the assets assigned to it. Since every custodian group is small, the overhead cost is significantly reduced. Liveness is also improved, because even a single alive group is able to process transactions. The security of this custody scheme is guaranteed under the rational adversary model: any adversary corrupting a bounded fraction of custodians cannot move assets worth more than the security deposit paid. We further analyze the security and performance of our constructions from both the theoretical and the experimental side, and give explicit examples with concrete numbers and figures for a better understanding of our results.
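The group-based liveness argument can be sketched with the simplest possible combinatorial design, namely all fixed-size subsets of the custodian set. The group size, custodian names, and the rule that a group needs all members alive are illustrative simplifications, not the paper's construction.

```python
# Sketch: distribute custodians into overlapping small groups; as long as a
# single group has all members alive, transactions can still be processed.
from itertools import combinations

def make_groups(custodians, group_size):
    """Trivial combinatorial design: every subset of the given size."""
    return [frozenset(g) for g in combinations(sorted(custodians), group_size)]

def alive_groups(groups, alive):
    """Groups able to act (simplification: all members must be alive)."""
    return [g for g in groups if g <= alive]

custodians = {"c1", "c2", "c3", "c4", "c5"}
groups = make_groups(custodians, 3)          # C(5,3) = 10 groups
assert len(groups) == 10

# Even with two custodians offline, one full group remains alive.
assert alive_groups(groups, {"c1", "c2", "c3"}) == [frozenset({"c1", "c2", "c3"})]
assert alive_groups(groups, {"c1", "c2"}) == []
```

Real designs would balance how many groups each custodian joins against the deposit bound; the point here is only that small overlapping groups trade per-group overhead for system-wide liveness.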
Last updated:  2020-10-09
Low-Cost Body Biasing Injection (BBI) Attacks on WLCSP Devices
Colin O'Flynn
Body Biasing Injection (BBI) applies a voltage with a physical probe onto the backside of the integrated circuit die. Compared to other techniques such as electromagnetic fault injection (EMFI) or Laser Fault Injection (LFI), this technique appears less popular in the academic literature, based on published results. We hypothesize this is due to (1) the moderate cost of equipment, and (2) the effort required in device preparation. This work demonstrates that BBI (and indeed many other backside attacks) can be trivially performed on Wafer-Level Chip-Scale Packaging (WLCSP) devices, which inherently expose the die backside. A low-cost ($15) design for the BBI tool is introduced and validated by introducing faults on an STM32F415OG against code flow and RSA; some initial results on attacks against various hardware blocks are also discussed.
Last updated:  2020-12-09
Integral Cryptanalysis of Reduced-Round Tweakable TWINE
Muhammad ElSheikh, Amr M. Youssef
Tweakable TWINE (T-TWINE) is the first lightweight dedicated tweakable block cipher family built on a Generalized Feistel Structure (GFS). The T-TWINE family is an extension of the conventional block cipher TWINE with minimal modification, adding a simple tweak based on SKINNY's tweakey schedule. Similar to TWINE, T-TWINE has two variants, namely T-TWINE-80 and T-TWINE-128. The two variants have the same block size of 64 bits and a variable key length of 80 and 128 bits. In this paper, we study the implications of adding the tweak for the security of T-TWINE against integral cryptanalysis. In particular, we first utilize the bit-based division property to search for the longest integral distinguisher. As a result, we are able to perform a distinguishing attack against 19 rounds using chosen tweak-plaintext combinations. We then convert this attack to key recovery attacks against 26 and 27 rounds (out of 36) of T-TWINE-80 and T-TWINE-128, respectively. By prepending one round before the distinguisher and using dynamically chosen plaintexts, we manage to extend the attack one more round without using the full codebook of the plaintext. Therefore, we are able to attack 27 and 28 rounds of T-TWINE-80 and T-TWINE-128, respectively.
Last updated:  2020-10-06
Synchronous Constructive Cryptography
Chen-Da Liu-Zhang, Ueli Maurer
This paper proposes a simple synchronous composable security framework as an instantiation of the Constructive Cryptography framework, aiming to capture minimally, without unnecessary artefacts, exactly what is needed to state synchronous security guarantees. The objects of study are specifications (i.e., sets) of systems, and traditional security properties like consistency and validity can naturally be understood as specifications, thus unifying composable and property-based definitions. The framework's simplicity is in contrast to current composable frameworks for synchronous computation which are built on top of an asynchronous framework (e.g. the UC framework), thus not only inheriting artefacts and complex features used to handle asynchronous communication, but adding additional overhead to capture synchronous communication. As a second, independent contribution we demonstrate how secure (synchronous) multi-party computation protocols can be understood as constructing a computer that allows a set of parties to perform an arbitrary, on-going computation. An interesting aspect is that the instructions of the computation need not be fixed before the protocol starts but can also be determined during an on-going computation, possibly depending on previous outputs.
Last updated:  2022-01-26
ABY2.0: Improved Mixed-Protocol Secure Two-Party Computation
Arpita Patra, Thomas Schneider, Ajith Suresh, Hossein Yalame
Secure Multi-party Computation (MPC) allows a set of mutually distrusting parties to jointly evaluate a function on their private inputs while maintaining input privacy. In this work, we improve semi-honest secure two-party computation (2PC) over rings, with a focus on the efficiency of the online phase. We propose an efficient mixed-protocol framework, outperforming the state-of-the-art 2PC framework of ABY. Moreover, we extend our techniques to multi-input multiplication gates without inflating the online communication, i.e., it remains independent of the fan-in. Along the way, we construct efficient protocols for several primitives such as scalar product, matrix multiplication, comparison, maxpool, and equality testing. The online communication of our scalar product is two ring elements irrespective of the vector dimension, which is a feature achieved for the first time in the 2PC literature. The practicality of our new set of protocols is showcased with four applications: i) AES S-box, ii) Circuit-based Private Set Intersection, iii) Biometric Matching, and iv) Privacy-preserving Machine Learning (PPML). Most notably, for PPML, we implement and benchmark training and inference of Logistic Regression and Neural Networks over LAN and WAN networks. For training, we improve online runtime (both for LAN and WAN) over SecureML (Mohassel et al., IEEE S&P’17) in the range 1.5x-6.1x, while for inference, the improvements are in the range of 2.5x-754.3x.
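For contrast with the online costs discussed above, here is the classical Beaver-triple multiplication over a ring that mixed-protocol 2PC frameworks build on; this is the well-known baseline technique, not ABY2.0's improved sharing. Offline triple generation is simulated by a trusted dealer, and the "opening" of values is shortcut by reconstructing shares directly.

```python
# Two-party additive secret sharing over Z_{2^32} with Beaver-triple
# multiplication.  In the online phase the parties only open d = x - a and
# e = y - b; the triple (a, b, c = a*b) comes from the offline phase.
import secrets

MOD = 1 << 32

def share(x):
    """Split x into two additive shares mod 2^32."""
    r = secrets.randbelow(MOD)
    return (r, (x - r) % MOD)

def reconstruct(sh):
    return (sh[0] + sh[1]) % MOD

def beaver_mul(x_sh, y_sh, triple):
    a_sh, b_sh, c_sh = triple            # dealer-provided shares of a, b, a*b
    d = (reconstruct(x_sh) - reconstruct(a_sh)) % MOD   # opened value x - a
    e = (reconstruct(y_sh) - reconstruct(b_sh)) % MOD   # opened value y - b
    # Locally: z_i = [i = 0]*d*e + d*b_i + e*a_i + c_i, so that
    # z = d*e + d*b + e*a + c = (a + d)(b + e) = x*y.
    return tuple((int(i == 0) * d * e + d * b_sh[i] + e * a_sh[i] + c_sh[i]) % MOD
                 for i in range(2))

a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
triple = (share(a), share(b), share((a * b) % MOD))
x_sh, y_sh = share(1234), share(5678)
assert reconstruct(beaver_mul(x_sh, y_sh, triple)) == (1234 * 5678) % MOD
```

A scalar product done this way opens two ring elements per coordinate, which is exactly the dimension-dependent cost that ABY2.0's two-element scalar product avoids.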
Last updated:  2020-10-06
Multi-Input Functional Encryption: Efficient Applications From Symmetric Primitives (extended version)
Alexandros Bakas, Antonis Michalas
Functional Encryption (FE) allows users who hold a specific secret key (known as the functional key) to learn a specific function of encrypted data whilst learning nothing about the content of the underlying data. Considering this functionality and the fact that the field of FE is still in its infancy, we sought a route to apply this potent tool to design efficient applications. To this end, we first built a symmetric FE scheme for the ℓ1 norm of a vector, which allows us to compute the sum of the components of an encrypted vector. Then, we utilized our construction to design an Order-Revealing Encryption (ORE) scheme and a privately encrypted database. While there is room for improvement in our schemes, this work is among the first attempts that seek to utilize FE for the solution of practical problems that can have a tangible effect on people's daily lives.
Last updated:  2021-05-17
Algorithmic Acceleration of B/FV-like Somewhat Homomorphic Encryption for Compute-Enabled RAM
Jonathan Takeshita, Dayane Reis, Ting Gong, Michael Niemier, X. Sharon Hu, Taeho Jung
Somewhat Homomorphic Encryption (SHE) allows arbitrary computation with finite multiplicative depth to be performed on encrypted data, but its overhead is high due to the memory transfer incurred by large ciphertexts. Recent research has recognized the shortcomings of general-purpose computing for high-performance SHE, and has begun to pioneer the use of hardware-based SHE acceleration with hardware including FPGAs, GPUs, and Compute-Enabled RAM (CE-RAM). CE-RAM is well-suited for SHE, as it is not limited by the separation between memory and processing that bottlenecks other hardware. Further, CE-RAM does not move data between different processing elements. Recent research has shown the high effectiveness of CE-RAM for SHE as compared to highly-optimized CPU and FPGA implementations. However, algorithmic optimization for the implementation on CE-RAM is underexplored. In this work, we examine the effect of existing algorithmic optimizations upon a CE-RAM implementation of the B/FV scheme, and further introduce novel optimization techniques for the Full RNS Variant of B/FV. Our experiments show speedups of up to 784x for homomorphic multiplication, 143x for decryption, and 330x for encryption against a CPU implementation. We also compare our approach to similar work in CE-RAM, FPGA, and GPU acceleration, and note general improvement over existing work. In particular, for homomorphic multiplication we see speedups of 506.5x against CE-RAM, 66.85x against FPGA, and 30.8x against GPU as compared to existing work in hardware acceleration of B/FV.
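The "Full RNS" idea mentioned above replaces big-integer coefficient arithmetic with independent arithmetic modulo several small coprime moduli. A toy Chinese Remainder Theorem round-trip shows the principle; the moduli below are illustrative and far smaller than real B/FV parameters.

```python
# Residue number system (RNS) sketch: a value is held as its residues mod
# several coprime moduli, and multiplication is done channel-by-channel.
from math import prod

MODULI = (97, 101, 103)          # toy coprime moduli, product = 1_009_091

def to_rns(x):
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """CRT reconstruction modulo the product of the moduli."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

assert from_rns(to_rns(123456)) == 123456

# Multiplication happens independently in each small residue channel,
# avoiding any big-integer arithmetic until (optional) reconstruction.
a, b = 1234, 567
prod_rns = tuple((u * v) % m for u, v, m in zip(to_rns(a), to_rns(b), MODULI))
assert from_rns(prod_rns) == (a * b) % prod(MODULI)
```

This channel independence is what makes RNS variants attractive for parallel hardware such as CE-RAM, where each residue channel can be processed in place.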
Last updated:  2021-03-25
Practical Post-Quantum Few-Time Verifiable Random Function with Applications to Algorand
Muhammed F. Esgin, Veronika Kuchta, Amin Sakzad, Ron Steinfeld, Zhenfei Zhang, Shifeng Sun, Shumo Chu
In this work, we introduce the first practical post-quantum verifiable random function (VRF) that relies on well-known (module) lattice problems, namely Module-SIS and Module-LWE. Our construction, named LB-VRF, results in a VRF value of only 84 bytes and a proof of around only 5 KB (in comparison to several MBs in earlier works), and runs in about 3 ms for evaluation and about 1 ms for verification. In order to design a practical scheme, we need to restrict the number of VRF outputs per key pair, which makes our construction few-time. Despite this restriction, we show how our few-time LB-VRF can be used in practice and, in particular, we estimate the performance of Algorand using LB-VRF. We find that, due to the significant increase in the communication size in comparison to classical constructions, which is inherent in all existing lattice-based schemes, the throughput in LB-VRF-based consensus protocol is reduced, but remains practical. In particular, in a medium-sized network with 100 nodes, our platform records a 1.14x to 3.4x reduction in throughput, depending on the accompanying signature used. In the case of a large network with 500 nodes, we can still maintain at least 24 transactions per second. This is still much better than Bitcoin, which processes only about 5 transactions per second.
Last updated:  2022-02-09
Verifiable Functional Encryption using Intel SGX
Tatsuya Suzuki, Keita Emura, Toshihiro Ohigashi, Kazumasa Omote
Most functional encryption schemes implicitly assume that inputs to the decryption algorithm, i.e., secret keys and ciphertexts, are generated honestly. However, they may be tampered with by malicious adversaries. Thus, verifiable functional encryption (VFE) was proposed by Badrinarayanan et al. at ASIACRYPT 2016, where anyone can publicly check the validity of secret keys and ciphertexts. They employed indistinguishability-based (IND-based) security, due to an impossibility result for simulation-based (SIM-based) VFE, even though SIM-based security is more desirable. In this paper, we propose a SIM-based VFE scheme. To bypass the impossibility result, we introduce a trusted setup assumption. Although it appears to be a strong assumption, we demonstrate that it is reasonable in a hardware-based construction, e.g., that of Fisch et al. at ACM CCS 2017. Our construction is based on a verifiable public-key encryption scheme (Nieto et al., SCN 2012), a signature scheme, and a secure hardware scheme, which we refer to as VFE-HW. Finally, we discuss our implementation of VFE-HW using Intel Software Guard Extensions (Intel SGX).