All papers in 2019 (Page 11 of 1498 results)

Last updated:  2019-10-02
Dual Isogenies and Their Application to Public-key Compression for Isogeny-based Cryptography
Michael Naehrig, Joost Renes
The isogeny-based protocols SIDH and SIKE have received much attention for being post-quantum key agreement candidates that retain relatively small keys. A recent line of work has proposed and further improved compression of public keys, leading to the inclusion of public-key compression in the SIKE proposal for Round 2 of the NIST Post-Quantum Cryptography Standardization effort. We show how to employ the dual isogeny to significantly increase performance of compression techniques, reducing their overhead from 160--182% to 77--86% for Alice's key generation and from 98--104% to 59--61% for Bob's across different SIDH parameter sets. For SIKE, we reduce the overhead of (1) key generation from 140--153% to 61--74%, (2) key encapsulation from 67--90% to 38--57%, and (3) decapsulation from 59--65% to 34--39%. This is mostly achieved by speeding up the pairing computations, which has until now been the main bottleneck, but we also improve (deterministic) basis generation.
Last updated:  2019-05-20
CSI-FiSh: Efficient Isogeny based Signatures through Class Group Computations
Ward Beullens, Thorsten Kleinjung, Frederik Vercauteren
In this paper we report on a new record class group computation of an imaginary quadratic field having 154-digit discriminant, surpassing the previous record of 130 digits. This class group is central to the CSIDH-512 isogeny-based cryptosystem, and knowing the class group structure and relation lattice implies efficient uniform sampling and a canonical representation of its elements. Both operations were impossible before and allow us to instantiate an isogeny-based signature scheme first sketched by Stolbunov, which we further optimize using multiple public keys and Merkle trees. We also show that including quadratic twists allows us to cut the public key size in half for free. Optimizing for signature size, our implementation takes 390ms to sign/verify and produces signatures of $263$ bytes, at the expense of a large public key. This is 300 times faster and over 3 times smaller than an optimized version of SeaSign for the same parameter set. Optimizing for combined public key and signature size results in a total of 1468 bytes, which is smaller than any other post-quantum signature scheme at the 128-bit security level.
Last updated:  2019-05-20
Forward and Backward-Secure Range-Searchable Symmetric Encryption
Jiafan Wang, Sherman S. M. Chow
Dynamic searchable symmetric encryption (DSSE) allows a client to search or update an outsourced encrypted database. Range queries are commonly needed (AsiaCrypt'18), but the order-preserving encryption approach is vulnerable to reconstruction attacks (SP'17). Previous range-searchable schemes (SIGMOD'16, ESORICS'18) require an ad-hoc instance of an encrypted database to store the updates and/or suffer from other shortcomings, some brought on by the use of asymmetric primitives. In this paper, using our encrypted index that enables queries for a sequence of contiguous keywords, we propose a generic upgrade of any DSSE to support range queries (a.k.a. range DSSE), and a concrete construction which provides a new trade-off: reducing client storage to "reclaim" the benefits of outsourcing. Our schemes achieve forward security, an important property which mitigates file injection attacks. We identify a variant of the file injection attack against a recent solution (ESORICS'18). We also extend the definition of backward security to range DSSE and show that our schemes are compatible with a generic transformation for achieving backward security (CCS'17). We comprehensively analyze the computation and communication overheads, including parts ignored in previous schemes, e.g., index-related operations on the client side. Our experiments demonstrate the high efficiency of our schemes.
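One standard way to reduce a range query to a small number of "contiguous keyword" queries, used throughout the range-searchable-encryption literature, is to cover the range with canonical dyadic intervals, so that any range over a $2^n$ domain needs only O(n) keywords. The following is a generic sketch of that idea, not necessarily this paper's exact index:

```python
def dyadic_cover(lo, hi, domain_bits):
    """Cover the inclusive range [lo, hi] over a 2**domain_bits domain with
    canonical dyadic intervals; returns (prefix, level) pairs, where an
    interval contains the 2**level points sharing the given prefix."""
    out = []

    def rec(start, level):
        end = start + (1 << level) - 1
        if end < lo or start > hi:          # disjoint from the query range
            return
        if lo <= start and end <= hi:       # fully contained: emit this node
            out.append((start >> level, level))
            return
        rec(start, level - 1)               # otherwise split into two halves
        rec(start + (1 << (level - 1)), level - 1)

    rec(0, domain_bits)
    return out
```

Each emitted pair can then be encrypted as one keyword, turning a range search into a handful of keyword searches.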
Last updated:  2019-05-20
Non-malleability for quantum public-key encryption
Christian Majenz, Christian Schaffner, Jeroen van Wier
Non-malleability is an important security property for public-key encryption (PKE). Its significance is due to the fundamental unachievability of integrity and authenticity guarantees in this setting, rendering it the strongest integrity-like property achievable using only PKE, without digital signatures. In this work, we generalize this notion to the setting of quantum public-key encryption. Overcoming the notorious "recording barrier" known from generalizing other integrity-like security notions to quantum encryption, we generalize one of the equivalent classical definitions, comparison-based non-malleability, and show how it can be fulfilled. In addition, we explore one-time non-malleability notions for symmetric-key encryption from the literature by defining plaintext and ciphertext variants and by characterizing their relation.
Last updated:  2019-05-21
Protecting ECC Against Fault Attacks: The Ring Extension Method Revisited
Marc Joye
Due to its shorter key size, elliptic curve cryptography (ECC) is gaining more and more popularity. However, if not properly implemented, the resulting cryptosystems may be susceptible to fault attacks. Over the past few years, several techniques for secure implementations have been published. This paper revisits the ring extension method and its adaptation to the elliptic curve setting.
Last updated:  2021-12-09
On the non-tightness of measurement-based reductions for key encapsulation mechanism in the quantum random oracle model
Haodong Jiang, Zhenfeng Zhang, Zhi Ma
Key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto (FO) transformation (TCC 2017), which turn a weakly-secure public-key encryption (PKE) scheme into an IND-CCA-secure KEM, were widely used among the KEM submissions to the NIST Post-Quantum Cryptography Standardization Project. Under the standard CPA security assumptions, i.e., OW-CPA and IND-CPA, the security of these variants in the quantum random oracle model (QROM) has been proved by black-box reductions, e.g., Jiang et al. (CRYPTO 2018), and by non-black-box reductions (EUROCRYPT 2020). The non-black-box reductions (EUROCRYPT 2020) have a linear security loss, but apply only to specific reversible adversaries with strictly reversible implementations. In contrast, the existing black-box reductions in the literature apply to an arbitrary adversary with an arbitrary implementation, but suffer a quadratic security loss. In this paper, for KEM variants of the FO transformation, we first show the tightness limits of black-box reductions, and prove that any measurement-based reduction in the QROM from breaking the standard OW-CPA (or IND-CPA) security of the underlying PKE to breaking the IND-CCA security of the resulting KEM will inevitably incur a quadratic security loss, where "measurement-based" means the reduction measures a hash query from the adversary and uses the measurement outcome to break the underlying security of the PKE. In particular, most black-box reductions for these FO-like KEM variants are of this type, and our results suggest an explanation for the lack of progress in improving the tightness of these reductions in terms of the degree of security loss. Then, we further show that the quadratic loss is also unavoidable when one turns a search problem into a decision problem using the one-way to hiding technique in a black-box manner, which has been recognized as an essential technique to prove the security of cryptosystems involving quantum random oracles.
Last updated:  2020-07-15
Evaluating the effectiveness of heuristic worst-case noise analysis in FHE
Anamaria Costache, Kim Laine, Rachel Player
The purpose of this paper is to test the accuracy of worst-case heuristic bounds on the noise growth in ring-based homomorphic encryption schemes. We use the methodology of Iliashenko (PhD thesis, 2019) to provide a new heuristic noise analysis for the BGV scheme. We demonstrate that for both the BGV and FV schemes, this approach gives tighter bounds than previous heuristic approaches, by as much as 10 bits of noise budget. Then, we provide experimental data on the noise growth of HElib and SEAL ciphertexts, in order to evaluate how well the heuristic bounds model the noise growth in practice. We find that, in spite of our improvements, there is still a gap between the heuristic estimate of the noise and the observed noise in practice. We extensively justify that a heuristic worst-case approach inherently leads to this gap, and hence leads to selecting significantly larger parameters than needed. As an additional contribution, we update the comparison between the two schemes presented by Costache and Smart (CT-RSA, 2016). Our new analysis shows that the practical crossover point at which BGV begins to outperform FV occurs for very large plaintext moduli, well beyond the crossover point reported by Costache and Smart.
Last updated:  2019-09-23
Decisional second-preimage resistance: When does SPR imply PRE?
Daniel J. Bernstein, Andreas Hülsing
There is a well-known gap between second-preimage resistance and preimage resistance for length-preserving hash functions. This paper introduces a simple concept that fills this gap. One consequence of this concept is that tight reductions can remove interactivity for multi-target length-preserving preimage problems, such as the problems that appear in analyzing hash-based signature systems. Previous reduction techniques applied to only a negligible fraction of all length-preserving hash functions, presumably excluding all off-the-shelf hash functions.
Last updated:  2021-09-25
Best Information is Most Successful
Eloi de Cherisey, Sylvain Guilley, Olivier Rioul, Pablo Piantanida
Using information-theoretic tools, this paper establishes a mathematical link between the probability of success of a side-channel attack and the minimum number of queries needed to reach a given success rate, valid for any possible distinguishing rule and with the best possible knowledge on the attacker's side. This link is a lower bound on the number of queries that depends strongly on Shannon's mutual information between the traces and the secret key. This leads us to derive upper bounds on the mutual information that are as tight as possible and easy to calculate. It turns out that, in the case of additive white Gaussian noise, the bound on the probability of success of any attack is directly related to the signal-to-noise ratio. This allows very easy computations and predictions of the success rate in any leakage model.
Last updated:  2020-02-21
Sigma protocols for MQ, PKP and SIS, and fishy signature schemes
Ward Beullens
This work presents sigma protocols to prove knowledge of (i) a solution to a system of quadratic polynomials, (ii) a solution to an instance of the Permuted Kernel Problem, and (iii) a witness for a variety of lattice statements (including SIS). Our sigma protocols have soundness error 1/q', where q' is any number bounded by the size of the underlying finite field. This is much better than existing proofs, which have soundness error 2/3 or (q'+1)/2q'. The prover and verifier times of our proofs are O(q'). We achieve this by first constructing so-called sigma protocols with helper, which are sigma protocols where the prover and the verifier are assisted by a trusted third party, and then eliminating the helper from the proof with a "cut-and-choose" protocol. We apply the Fiat-Shamir transform to obtain signature schemes with security proofs in the QROM. We show that the resulting signature schemes, which we call the "MUltivariate quaDratic FIat-SHamir" scheme (MUDFISH) and the "ShUffled Solution to Homogeneous linear SYstem FIat-SHamir" scheme (SUSHSYFISH), are more efficient than existing signatures based on the MQ problem and the Permuted Kernel Problem. Our proof system can be used to improve the efficiency of applications relying on (generalizations of) Stern's protocol. We show that the proof size of our SIS proof is smaller than that of Stern's protocol by an order of magnitude and that our proof is more efficient than existing post-quantum secure SIS proofs.
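To see what the smaller soundness error buys, compare the number of parallel repetitions needed to push the soundness error below a target level; a minimal back-of-the-envelope sketch (not part of the schemes themselves):

```python
import math

def repetitions(soundness_error, security_bits=128):
    """Smallest r such that soundness_error**r <= 2**(-security_bits)."""
    return math.ceil(security_bits / -math.log2(soundness_error))

# Stern-like protocols with soundness error 2/3 need ~219 repetitions for
# 128-bit security, while soundness error 1/q' with q' = 256 needs only 16,
# which is where the order-of-magnitude proof-size savings come from.
```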
Last updated:  2019-05-20
Memory-Efficient High-Speed Implementation of Kyber on Cortex-M4
Leon Botros, Matthias J. Kannwischer, Peter Schwabe
This paper presents an optimized software implementation of the module-lattice-based key-encapsulation mechanism Kyber for the ARM Cortex-M4 microcontroller. Kyber is one of the round-2 candidates in the NIST post-quantum project. At the center of our work are novel optimization techniques for the number-theoretic transform (NTT) inside Kyber, which make very efficient use of the computational power offered by the “vector” DSP instructions of the target architecture. We also present results for the recently updated parameter sets of Kyber, which benefit equally from our optimizations. As a result of our efforts, we present software that is 18% faster than an earlier implementation of Kyber optimized for the Cortex-M4 by the Kyber submitters. Our NTT is more than twice as fast as the NTT in that software. Our software runs at about the same speed as the latest speed-optimized implementation of the other module-lattice-based round-2 NIST PQC candidate, Saber. However, our Kyber software achieves this performance with a much smaller RAM footprint: Kyber needs less than half the RAM of the considerably slower RAM-optimized version of Saber. Our software does not make use of any secret-dependent branches or memory accesses and thus offers state-of-the-art protection against timing attacks.
Last updated:  2019-05-20
Enigma 2000: An Authenticated Encryption Algorithm For Human-to-Human Communication
Alan Kaminsky
Enigma 2000 (E2K) is a cipher that updates the World War II-era Enigma Machine for the twenty-first century. Like the original Enigma, E2K is intended to be computed by an offline device; this prevents side channel attacks and eavesdropping by malware. Unlike the original Enigma, E2K uses modern cryptographic algorithms; this provides secure encryption. E2K is intended for encrypted communication between humans only, and therefore it encrypts and decrypts plaintexts and ciphertexts consisting only of the English letters A through Z plus a few other characters. E2K uses a nonce in addition to the secret key, and requires that different messages use unique nonces. E2K performs authenticated encryption, and optional header data can be included in the authentication. This paper defines the E2K encryption and decryption algorithms, analyzes E2K’s security, and describes an encryption appliance based on the Raspberry Pi computer for doing E2K encryptions and decryptions offline.
Last updated:  2020-02-16
From Single-Input to Multi-Client Inner-Product Functional Encryption
Michel Abdalla, Fabrice Benhamouda, Romain Gay
We present a new generic construction of multi-client functional encryption (MCFE) for inner products from single-input functional inner-product encryption and standard pseudorandom functions. In spite of its simplicity, the new construction supports labels, achieves security in the standard model under adaptive corruptions, and can be instantiated from the plain DDH, LWE, and Paillier assumptions. Prior to our work, the only known constructions required discrete-log-based assumptions and the random-oracle model. Since our new scheme is not compatible with the compiler from Abdalla et al. (PKC 2019) that decentralizes the generation of the functional decryption keys, we also show how to modify the latter transformation to obtain a decentralized version of our scheme with similar features.
Last updated:  2019-05-19
Detective Mining: Selfish Mining Becomes Unrealistic under Mining Pool Environment
Suhyeon Lee, Seungjoo Kim
One of Bitcoin’s core security guarantees is that, for an attacker to successfully interfere with the Bitcoin network and reverse transactions, they need to control 51% of the total hash power. Eyal et al., however, significantly reduced Bitcoin’s security guarantee by introducing another type of attack, called "Selfish Mining". The key idea behind selfish mining is for a miner to keep its discovered blocks private, thereby intentionally forking the chain. As a result of a selfish mining attack, even a miner with 25% of the computation power can bias the agreed chain with its blocks. Since Eyal's original paper, the concept of selfish mining has been actively studied within the Bitcoin community for several years. This paper studies a fundamental problem of the selfish mining strategy in the presence of mining pools. To this end, we propose a new counter-strategy, called "Detective Mining", and show that a selfish mining pool is no longer profitable when other miners use our strategy.
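The 25% figure comes from Eyal and Sirer's closed-form expression for the selfish pool's relative revenue as a function of its hash-power share α and the fraction γ of honest miners that mine on the selfish branch during a fork race. A sketch reproducing their formula (this is the original Eyal-Sirer model, not the Detective Mining analysis):

```python
def selfish_revenue(alpha, gamma):
    """Eyal-Sirer relative revenue of a selfish mining pool.
    alpha: pool's share of total hash power.
    gamma: fraction of honest power mining on the selfish branch in a tie."""
    num = (alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha))
           - alpha ** 3)
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

def profitable(alpha, gamma):
    """Selfish mining beats honest mining when relative revenue exceeds alpha."""
    return selfish_revenue(alpha, gamma) > alpha
```

With γ = 1/2 the profitability threshold is exactly α = 1/4, matching the 25% figure in the abstract; the Detective Mining counter-strategy aims to deny the selfish pool this advantage.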
Last updated:  2020-09-29
A taxonomy of pairings, their security, their complexity
Razvan Barbulescu, Nadia El Mrabet, Loubna Ghammam
The Kim-Barbulescu attack against pairings made it necessary to increase the key sizes of the most popular families of pairings: BN, BLS-12, KSS-16, KSS-18 and BLS-24. The computation of new key sizes was a slow process because it was done in two waves: first a series of theoretical estimations, then a wave of precise estimations based on practical models. In this paper, we propose an up-to-date security evaluation for more than a hundred pairing-friendly elliptic curves. We evaluate the complexity of a complete pairing execution, taking into account the Miller algorithm for different degrees of twist and the final exponentiation for the most promising curves. At 128 bits of security we find that the best pairings in the BD model are BLS-24 and BLS-12; the best pairings are not affected by the new polynomial selection method. At 192 bits of security, we find that the new champions are the lesser-known BLS-24, KSS-16 and KSS-18. At 256 bits of security we conclude that the best pairing is BLS-27.
Last updated:  2019-11-16
New Number-Theoretic Cryptographic Primitives
Eric Brier, Houda Ferradi, Marc Joye, David Naccache
This paper introduces new $p^r q$-based one-way functions and companion signature schemes. The new signature schemes are interesting because they do not belong to the two common design blueprints: the inversion of a trapdoor permutation and the Fiat--Shamir transform. In the basic signature scheme, the signer generates multiple RSA-like moduli $n_i = p_i^2 q_i$ and keeps their factors secret. The signature is a bounded-size prime whose Jacobi symbols with respect to the $n_i$'s match the message digest. The generalized signature schemes replace the Jacobi symbol with higher-power residue symbols. Given their singular design, the proposed signature schemes appear to be an overlooked species in the corpus of known signature algorithms.
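The basic mechanism can be illustrated with toy parameters. The sketch below (hypothetical toy moduli, not the paper's parameters; a real instantiation uses large secret $p_i, q_i$ and a bigger size bound) searches for a bounded prime whose Jacobi symbols with respect to the public moduli match the message digest, and verifies using only the moduli:

```python
import hashlib

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                      # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_prime(n):
    """Trial division; fine for toy-sized candidates."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Hypothetical toy parameters: each modulus is p^2 * q with secret p, q.
PQ = [(7, 11), (13, 17), (19, 23), (29, 31)]
MODULI = [p * p * q for p, q in PQ]
BOUND = 10**6                            # size bound on the prime signature

def digest_symbols(msg):
    """Map a message digest to a target pattern of Jacobi symbols (+1/-1)."""
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return [1 if (h >> i) & 1 else -1 for i in range(len(MODULI))]

def sign(msg):
    """Search for a bounded prime whose Jacobi symbols match the digest."""
    target = digest_symbols(msg)
    for s in range(3, BOUND, 2):
        if is_prime(s) and all(jacobi(s, n) == e
                               for n, e in zip(MODULI, target)):
            return s
    raise RuntimeError("no signature found below the bound")

def verify(msg, s):
    return (0 < s < BOUND and is_prime(s) and
            all(jacobi(s, n) == e
                for n, e in zip(MODULI, digest_symbols(msg))))
```

Note that since $(s/p^2 q) = (s/q)$ for $s$ coprime to $p$, the symbols behave like pseudorandom signs, so a matching prime is found quickly.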
Last updated:  2019-05-13
Improved Filter Permutators: Combining Symmetric Encryption Design, Boolean Functions, Low Complexity Cryptography, and Homomorphic Encryption, for Private Delegation of Computations
Pierrick Méaux, Claude Carlet, Anthony Journault, François-Xavier Standaert
Motivated by the application of delegating computation, we revisit the design of filter permutators as a general approach to build stream ciphers that can be efficiently evaluated in a fully homomorphic manner. We first introduce improved filter permutators that allow better security analyses, instances and implementations than the previously proposed FLIP family of stream ciphers. We also put forward the similarities between these improved constructions and a popular PRG design by Goldreich. Then, we exhibit the relevant cryptographic parameters of two families of Boolean functions, direct sums of monomials and XOR-MAJ functions, which give candidates to instantiate the improved filter permutator paradigm. We develop new Boolean function techniques to study them, and refine Goldreich's PRG locality bound for this purpose. We give an asymptotic analysis of the noise level of improved filter permutator instances using both kinds of functions, and recommend them as good candidates for evaluation with a third-generation FHE scheme. Finally, we propose a methodology to evaluate the performance of such symmetric cipher designs in an FHE setting, which primarily focuses on the noise level of the symmetric ciphertexts (hence on the number of operations on these ciphertexts that can be homomorphically evaluated). Evaluations performed with HElib show that instances of improved filter permutators using direct sums of monomials as filters outperform all existing ciphers in the literature on this criterion. We also discuss the (limited) overheads of these instances in terms of latency and throughput.
Last updated:  2019-05-13
Tiny WireGuard Tweak
Jacob Appelbaum, Chloe Martindale, Peter Wu
We show that a future adversary with access to a quantum computer, historic network traffic protected by WireGuard, and knowledge of a WireGuard user's long-term static public key can likely decrypt many of the WireGuard user's historic messages. We propose a simple, efficient alteration to the WireGuard protocol that mitigates this vulnerability, with negligible additional computational and memory costs. Our changes add zero additional bytes of data to the wire format of the WireGuard protocol. Our alteration provides transitional post-quantum security for any WireGuard user who does not publish their long-term static public key -- it should be exchanged out-of-band.
Last updated:  2019-05-14
An Efficient and Compact Reformulation of NIST Collision Estimate Test
Prasanna Raghaw Mishra, Bhartendu Nandan, Navneet Gaba
In this paper we give an efficient and compact reformulation of the NIST collision estimate test given in SP 800-90B. We correct an error in the formulation of the test and show that the test statistic can be computed in a much easier way. We also propose a revised algorithm for the test based on our findings.
Last updated:  2019-07-15
On the Efficiency of Privacy-Preserving Smart Contract Systems
Karim Baghery
Along with blockchain technology, smart contracts have attracted intense interest in many practical applications. A smart contract is a mechanism involving digital assets and some parties, where the parties deposit assets into the contract and the contract redistributes the assets among the parties based on the contract's provisions and the parties' inputs. Recently, several smart contract systems have been constructed that use zk-SNARKs to provide privacy-preserving payments and interconnections in the contracts (e.g. Hawk [IEEE S&P, 2016] and Gyges [ACM CCS, 2016]). The efficiency of such systems is dominated by the efficiency of the underlying UC-secure zk-SNARK, which is achieved by applying the COCO framework [Kosba et al., 2015] to a non-UC-secure zk-SNARK. In this paper, we show that recent progress on zk-SNARKs allows one to simplify the structure and improve the efficiency of both systems with a UC-secure zk-SNARK that has a simpler construction and better efficiency than the ones currently used. To this end, we first show that given a NIZK argument which guarantees non-black-box simulation (knowledge) soundness, one can construct a UC-secure NIZK that has a simpler construction and better efficiency than the ones currently used in Hawk and Gyges. We believe the new technique can be of independent interest.
Last updated:  2019-06-25
Extended 3-Party ACCE and Application to LoRaWAN 1.1
Sébastien Canard, Loïc Ferreira
LoRaWAN is an IoT protocol deployed worldwide. Whereas the first version, 1.0, has been shown to be weak against several types of attacks, the new version, 1.1, has recently been released; it aims, in particular, at correcting the previous release, and it also introduces a third entity, turning the original 2-party protocol into a 3-party protocol. In this paper, we provide the first security analysis of LoRaWAN 1.1 in its 3-party setting using a provable approach, and show that it suffers from several flaws. Based on the 3(S)ACCE model of Bhargavan et al., we then propose an extended framework that we use to analyse the security of LoRaWAN-like 3-party protocols, and describe a generic 3-party protocol provably secure in this extended model. We use this provable-security approach to propose a slightly modified version of LoRaWAN 1.1. We show how to concretely instantiate this alternative, and formally prove its security in our extended model.
Last updated:  2019-05-18
BEARZ Attack FALCON: Implementation Attacks with Countermeasures on the FALCON signature scheme
Sarah McCarthy, James Howe, Neil Smyth, Seamus Brannigan, Máire O’Neill
Post-quantum cryptography is an important and growing area of research due to the threat of quantum computers, as recognised by the National Institute of Standards and Technology's (NIST) recent call for standardisation. Lattice-based signatures have been shown in the past to be susceptible to side-channel attacks. Falcon is a lattice-based signature candidate submitted to NIST, which has good performance but lacks research on implementation attacks and resistance. This research proposes the first fault attack analysis on Falcon and finds that its lattice trapdoor sampler is as vulnerable to fault attacks as the GPV sampler used in alternative signature schemes. We simulate the post-processing component of this fault attack and achieve a 100% success rate at retrieving the private key. This research then proposes an evaluation of countermeasures to prevent this fault attack and timing attacks on Falcon. We provide cost evaluations of the overheads of the proposed countermeasures, which show that Falcon suffers at most a 30% performance deterioration in key generation and only 5% in signing, compared to the unprotected implementation.
Last updated:  2021-07-01
The Complexities of Healing in Secure Group Messaging: Why Cross-Group Effects Matter
Cas Cremers, Britta Hale, Konrad Kohbrok
Modern secure messaging protocols can offer strong security guarantees such as Post-Compromise Security (PCS), which enables participants to heal after compromise. The core PCS mechanism in protocols like Signal is designed for pairwise communication, making it inefficient for large groups, while recently proposed designs for secure group messaging, such as ART and IETF's MLS Draft-11/TreeKEM, use group keys derived from tree structures to efficiently provide PCS to large groups. Until now, research on PCS designs has only considered healing behaviour within a single group. In this work we provide the first analysis of healing behaviour when a user participates in multiple groups. Our analysis reveals that the currently proposed protocols based on group keys, such as ART and TreeKEM/MLS Draft-11, provide significantly weaker PCS guarantees than group protocols based on pairwise PCS channels. In fact, we show that if new users can be created dynamically, ART, TreeKEM, and MLS Draft-11 never fully heal authentication. We map the design space of healing mechanisms, analyzing the security and overhead of possible solutions. This leads us to a promising solution based on (i) global updates that affect all current and future groups, and (ii) post-compromise secure signatures. Our solution allows group messaging protocols such as ART and MLS to achieve substantially stronger PCS guarantees. We provide a security definition for post-compromise secure signatures and an instantiation.
Last updated:  2019-05-10
On MILP-Based Automatic Search for Differential Trails Through Modular Additions with Application to Bel-T
Muhammad ElSheikh, Ahmed Abdelkhalek, Amr M. Youssef
Modular addition is frequently used as a source of nonlinearity in many symmetric-key structures, such as ARX and Lai--Massey schemes. At FSE'16, Fu et al. proposed a Mixed Integer Linear Programming (MILP)-based method to handle the propagation of differential trails through modular additions, assuming that the two inputs to the modular addition and the consecutive rounds are independent. However, this assumption does not necessarily hold. In this paper, we study the propagation of the XOR difference through the modular addition at the bit level and show the effect of the carry bit. Then, we propose a more accurate MILP model to describe the differential propagation through the modular addition, taking into account the dependency between consecutive modular additions. The proposed MILP model is utilized to launch a differential attack against Bel-T-256, a member of the Bel-T block cipher family that has recently been adopted as a national standard of the Republic of Belarus. In particular, we employ the concept of a partial Differential Distribution Table to model the 8-bit S-box of Bel-T using a MILP approach in order to automate finding a differential characteristic of the cipher. Then, we present a $4\frac{1}{7}$-round (out of 8) differential attack which utilizes a $3$-round differential characteristic that holds with probability $2^{-111}$. The data, time and memory complexities of the attack are $2^{114}$ chosen plaintexts, $2^{237.14}$ $4\frac{1}{7}$-round encryptions, and $2^{224}$ 128-bit blocks, respectively.
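The dependency between consecutive modular additions can be observed directly by exhaustive computation on a small word size. A minimal sketch (an illustration of the effect, not the paper's MILP model), using 4-bit words and the chain t = x + y, u = t + y in which the second addition reuses an input of the first:

```python
N = 4                      # word size in bits (small enough to brute-force)
MASK = (1 << N) - 1

def xdp_add(dx, dy, dz):
    """Exact XOR differential probability of N-bit modular addition,
    computed by brute force over all input pairs."""
    hits = sum(1 for x in range(1 << N) for y in range(1 << N)
               if (((x + y) ^ ((x ^ dx) + (y ^ dy))) & MASK) == dz)
    return hits / (1 << (2 * N))

def chain_prob(dx, dy, dt, du):
    """Exact probability that BOTH transitions (dx,dy)->dt and (dt,dy)->du
    hold through the chain t = x + y, u = t + y (the additions share y)."""
    hits = 0
    for x in range(1 << N):
        for y in range(1 << N):
            t = (x + y) & MASK
            t2 = ((x ^ dx) + (y ^ dy)) & MASK
            u = (t + y) & MASK
            u2 = (t2 + (y ^ dy)) & MASK
            if (t ^ t2) == dt and (u ^ u2) == du:
                hits += 1
    return hits / (1 << (2 * N))
```

For the trail dx=1, dy=0, dt=du=1 the exact two-round probability is 1/2, whereas the independence assumption predicts (1/2)·(1/2) = 1/4: exactly the kind of gap a carry-aware MILP model must account for.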
Last updated:  2020-02-25
Dual-Mode NIZKs from Obfuscation
Dennis Hofheinz, Bogdan Ursu
Two standard security properties of a non-interactive zero-knowledge (NIZK) scheme are soundness and zero-knowledge. But while standard NIZK systems can provide only one of those properties against unbounded adversaries, dual-mode NIZK systems allow one to choose dynamically and adaptively which of these properties holds unconditionally. The only known dual-mode NIZK systems are Groth-Sahai proofs (which have proved extremely useful in a variety of applications) and the FHE-based NIZK constructions of Canetti et al. and Peikert et al., which are concurrent with and independent of this work. However, all these constructions rely on specific algebraic settings. Here, we provide a generic construction of dual-mode NIZK systems for all of NP. The public parameters of our scheme can be set up in one of two indistinguishable ways. One way provides unconditional soundness, while the other provides unconditional zero-knowledge. Our scheme relies on subexponentially secure indistinguishability obfuscation and subexponentially secure one-way functions, but otherwise only on comparatively mild and generic computational assumptions. These generic assumptions can be instantiated under any one of the DDH, k-LIN, DCR, or QR assumptions. As an application, we reduce the assumptions necessary for several recent obfuscation-based constructions of multilinear maps. Combined with previous work, our scheme can be used to construct multilinear maps from obfuscation and a group in which the strong Diffie-Hellman assumption holds. We also believe that our work adds to the understanding of the construction of NIZK systems, as it provides a conceptually new way to achieve dual-mode properties.
Last updated:  2021-08-28
A Method to Reduce the Key Size of UOV Signature Scheme
Chengdong Tao
Multivariate public-key signature schemes offer good performance in terms of speed and signature size, but most of them have a huge public key. In this paper, we propose a new method to reduce the public key size of the unbalanced oil and vinegar (UOV) signature scheme. We can reduce the public key size of the UOV scheme to about 4KB at the 128-bit security level. This method can also be used to reduce the public key sizes of other multivariate public-key cryptosystems.
Last updated:  2019-05-24
Defeating the Hart et al, Beullens-Blackburn, Kotov-Menshov-Ushakov, and Merz-Petit Attacks on WalnutDSA(TM)
Iris Anshel, Derek Atkins, Dorian Goldfeld, Paul E Gunnells
The Walnut Digital Signature Algorithm (WalnutDSA) brings together methods in group theory, representation theory, and number theory to yield a public-key method that provides a means for messages to be signed and signatures to be verified on platforms where traditional approaches cannot be executed. After briefly reviewing the various heuristic/practical attacks that have been posited by Hart et al., Beullens-Blackburn, Kotov-Menshov-Ushakov, and Merz-Petit, we detail the parameter choices that defeat each attack, ensure the security of the method, and demonstrate its continued utility.
Last updated:  2019-05-10
UC-Secure CRS Generation for SNARKs
Behzad Abdolmaleki, Karim Baghery, Helger Lipmaa, Janno Siim, Michal Zajac
Zero-knowledge SNARKs (zk-SNARKs) have recently found various applications in verifiable computation and blockchain applications (Zerocash), but unfortunately they rely on a common reference string (CRS) that has to be generated by a trusted party. A standard suggestion, pursued by Ben Sasson et al. [IEEE S&P, 2015], is to generate the CRS via a multi-party protocol. We enhance their CRS-generation protocol to achieve UC-security. This allows one to safely compose the CRS-generation protocol with the zk-SNARK in a black-box manner, with the assurance that the security of the zk-SNARK is not affected. Differently from the previous work, the new CRS-generation protocol also avoids the random oracle model, which is typically not required by zk-SNARKs themselves. As a case study, we apply the protocol to the state-of-the-art zk-SNARK by Groth [EUROCRYPT, 2016].
Last updated:  2020-04-23
A Practical Approach to the Secure Computation of the Moore-Penrose Pseudoinverse over the Rationals
Niek J. Bouman, Niels de Vreede
Solving linear systems of equations is a universal problem. In the context of secure multiparty computation (MPC), a method to solve such systems, especially for the case in which the rank of the system is unknown and should remain private, is an important building block. We devise an efficient and data-oblivious algorithm (meaning that the algorithm's execution time and branching behavior are independent of all secrets) for solving a bounded integral linear system of unknown rank over the rational numbers via the Moore-Penrose pseudoinverse, using finite-field arithmetic. That is, we compute the Moore-Penrose inverse over a finite field of sufficiently large order, so that we can recover the rational solution from the solution over the finite field. While we have designed the algorithm with an MPC context in mind, it could be valuable also in other contexts where data-obliviousness is required, like secure enclaves in CPUs. Previous work by Cramer, Kiltz and Padró (CRYPTO 2007) proposes a constant-rounds protocol for computing the Moore-Penrose pseudoinverse over a finite field. The asymptotic complexity (counted as the number of secure multiplications) of their solution is $O(m^4 + n^2 m)$, where $m$ and $n$, $m\leq n$, are the dimensions of the linear system. To reduce the number of secure multiplications, we sacrifice the constant-rounds property and propose a protocol for computing the Moore-Penrose pseudoinverse over the rational numbers in a linear number of rounds, requiring only $O(m^2n)$ secure multiplications. To obtain the common denominator of the pseudoinverse, required for constructing an integer representation of the pseudoinverse, we generalize a result by Ben-Israel for computing the squared volume of a matrix. Also, we show how to precondition a symmetric matrix to achieve generic rank profile while preserving symmetry and being able to remove the preconditioner after it has served its purpose. These results may be of independent interest.
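The lifting step in the abstract above (compute over a large finite field, then recover the rational answer) can be sketched with standard rational reconstruction via a truncated extended Euclidean run (Wang's algorithm). This is an illustrative sketch only: the modulus and fractions are arbitrary choices, and the actual protocol operates on secret-shared values rather than plaintext integers.

```python
from math import isqrt

def rational_reconstruct(a, m):
    """Recover (n, d) with n/d = a (mod m), assuming 2*|n|*d < m and
    |n|, d <= isqrt(m // 2): a truncated extended Euclidean run on (m, a)."""
    bound = isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > bound:                      # stop at the first small remainder
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        raise ValueError("no small rational congruent to a mod m")
    if t1 < 0:                             # normalise: positive denominator
        r1, t1 = -r1, -t1
    return r1, t1                          # (numerator, denominator)

# Example: encode 3/7 as an element of GF(p), then lift it back.
p = 1_000_000_007
encoded = 3 * pow(7, -1, p) % p            # field element representing 3/7
```

For the pseudoinverse itself one would apply this entry-wise (or once, to the common denominator obtained via the squared-volume result), provided the field order is large enough relative to the entries.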
Last updated:  2019-05-10
Security Analysis of Efficient Anonymous Authentication With Conditional Privacy Preserving Scheme for Vehicular Ad Hoc Networks
Rui Qiao, Qinglong Wang, Zongtao Duan, Na Fan
Protecting a driver’s privacy is one of the major concerns in vehicular ad hoc networks (VANETs). Recently, Azees et al. proposed an efficient anonymous authentication protocol (EAAP) for VANETs. The authors claim that their scheme implements conditional privacy, and that it provides resistance against impersonation attacks and bogus message attacks from an external attacker. In this paper, we show that their scheme fails to resist these two types of attack as well as a forgery attack. By these attacks, an attacker can broadcast any message successfully. Further, the attacker cannot be traced by a trusted authority, which means their scheme does not satisfy the requirement of conditional privacy. The results of this article clearly show that the scheme of Azees et al. is insecure.
Last updated:  2022-10-06
The Mersenne Low Hamming Combination Search Problem can be reduced to an ILP Problem
Alessandro Budroni, Andrea Tenti
In 2017, Aggarwal, Joux, Prakash, and Santha proposed an innovative NTRU-like public-key cryptosystem that was believed to be quantum resistant, based on Mersenne prime numbers q = 2^N-1. After a successful attack designed by Beunardeau, Connolly, Geraud, and Naccache, the authors revised the protocol, which was accepted for Round 1 of the Post-Quantum Cryptography Standardization Process organized by NIST. The security of this protocol is based on the assumption that a so-called Mersenne Low Hamming Combination Search Problem (MLHCombSP) is hard to solve. In this work, we present a reduction of MLHCombSP to Integer Linear Programming (ILP). This opens new research directions for assessing the concrete robustness of this cryptosystem. In particular, we uncover a new family of weak keys, for which our attack runs in polynomial time.
Last updated:  2019-05-10
Revisiting Location Privacy from a Side-Channel Analysis Viewpoint (Extended Version)
Clément Massart, François-Xavier Standaert
Inspired by the literature on side-channel attacks against cryptographic implementations, we describe a framework for the analysis of location privacy. It allows us to revisit (continuous) re-identification attacks with a combination of information theoretic and security metrics. Our results highlight conceptual differences between re-identification attacks exploiting leakages that are internal or external to a pseudonymised database. They put forward the amount of data to collect in order to estimate a predictive model as an important -- yet less discussed -- dimension of privacy assessments. They finally leverage recent results on the security evaluations/certification of cryptographic implementations to connect information theoretic and security metrics, and to formally bound the risk of re-identification with external leakages.
Last updated:  2019-05-10
Privacy-Preserving K-means Clustering with Multiple Data Owners
Jung Hee Cheon, Jinhyuck Jeong, Dohyeong Ki, Jiseung Kim, Joohee Lee, Seok Won Lee
Recently, with the advent of technology, a lot of data are stored and mined in cloud servers. Since most of the data contain potentially private information, it has become necessary to preserve privacy in data mining. In this paper, we propose a protocol for collaboratively performing the K-means clustering algorithm on data distributed among multiple data owners, while protecting the sensitive private data. We employ two service providers in our scenario, namely a main service provider and a key manager. Under the assumption that the cryptosystems used in our protocol are secure and that the two service providers do not collude, we provide perfect secrecy in the sense that the cluster centroids and data are not leaked to any party, including the two service providers. Also, we implement the scenario using the recently proposed leveled homomorphic encryption scheme HEAAN. With our construction, privacy-preserving K-means clustering can be done in less than one minute while maintaining 80-bit security in a situation with 10,000 data points, 8 features and 4 clusters.
Last updated:  2019-10-20
Towards a Practical Cluster Analysis over Encrypted Data
Jung Hee Cheon, Duhyeong Kim, Jai Hyun Park
Cluster analysis is one of the most significant unsupervised machine learning tasks, and it is utilized in various fields associated with privacy issues, including bioinformatics, finance and image processing. In this paper, we propose a practical solution for privacy-preserving cluster analysis based on homomorphic encryption~(HE). Our work is the first HE solution for the mean-shift clustering algorithm. To reduce the super-linear complexity of the original mean-shift algorithm, we adopt a novel random sampling method called dust sampling, which perfectly fits HE and achieves linear complexity. We also replace non-polynomial kernels with a new polynomial kernel so that it can be efficiently computed in HE. The HE implementation of our modified mean-shift clustering algorithm based on the approximate HE scheme HEAAN shows prominent performance in terms of speed and accuracy. It takes about $30$ minutes with $99\%$ accuracy over several public datasets with hundreds of data points, and even for the dataset with $262,144$ data points it takes only $82$ minutes applying SIMD operations in HEAAN. Our results outperform the previously best known result (SAC 2018) by over $400$ times.
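As a rough plaintext illustration of the kind of modification described, the sketch below runs mean-shift with a polynomial weight $(1-d^2)^k$ standing in for a non-polynomial (e.g. Gaussian) kernel. The kernel shape, degree and toy data are assumptions for illustration, not the paper's actual kernel or parameters, and the data are assumed scaled so that all pairwise squared distances stay below 1.

```python
def poly_weight(dist2, degree=4):
    # Polynomial stand-in for a Gaussian kernel; valid while dist2 < 1.
    return (1.0 - dist2) ** degree

def mean_shift_step(x, points, degree=4):
    # One mean-shift update: move x to the kernel-weighted mean of points.
    w = [poly_weight(sum((a - b) ** 2 for a, b in zip(x, p)), degree)
         for p in points]
    s = sum(w)
    return tuple(sum(wi * p[i] for wi, p in zip(w, points)) / s
                 for i in range(len(x)))

def mean_shift_modes(points, iters=60, degree=4):
    # Iterate every point to (approximately) its nearest density mode.
    modes = []
    for x in points:
        for _ in range(iters):
            x = mean_shift_step(x, points, degree)
        modes.append(x)
    return modes
```

In the HE setting every operation here (squared distances, kernel powers, the final division) must itself be expressed polynomially, which is why the kernel substitution matters.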
Last updated:  2022-03-10
The complexity of MinRank
Alessio Caminata, Elisa Gorla
In this note, we leverage some of our previous results to produce a concise and rigorous proof for the complexity of the generalized MinRank Problem in the under-defined and well-defined case. Our main theorem recovers and extends previous results by Faugère, Safey El Din, and Spaenlehauer.
Last updated:  2019-05-11
In Pursuit of Clarity In Obfuscation
Allison Bishop, Lucas Kowalczyk, Tal Malkin, Valerio Pastro, Mariana Raykova, Kevin Shi
An account of meandering research efforts in the area of cryptographic obfuscation over several years.
Last updated:  2019-06-26
How to wrap it up - A formally verified proposal for the use of authenticated wrapping in PKCS\#11
Alexander Dax, Robert Künnemann, Sven Tangermann, Michael Backes
Being the most widely used and comprehensive standard for hardware security modules, cryptographic tokens and smart cards, PKCS#11 has been the subject of academic study for years. PKCS#11 provides a key store that is separate from the application, so that, ideally, an application never sees a key in the clear. Again and again, researchers have pointed out the need for an import/export mechanism that ensures the integrity of the permissions associated with a key. With version 2.40, for the first time, the standard included authenticated deterministic encryption schemes. The interface to this operation is insecure, however, so that an application can get the key in the clear, subverting the purpose of using a hardware security module. This work proposes a formal model for the secure use of authenticated deterministic encryption in PKCS#11, including concrete API changes to allow for secure policies to be implemented. Owing to the authenticated encryption mechanism, the policy we propose provides more functionality than any policy proposed so far and can be implemented without access to a random number generator. Our results cover modes of operation that rely on unique initialisation vectors (IVs), like GCM or CCM, but also modes that generate synthetic IVs. We furthermore provide a proof of the deduction soundness of our modelling of deterministic encryption in Böhl et al.'s composable deduction soundness framework.
Last updated:  2021-04-19
Physical Security of Deep Learning on Edge Devices: Comprehensive Evaluation of Fault Injection Attack Vectors
Xiaolu Hou, Jakub Breier, Dirmanto Jap, Lei Ma, Shivam Bhasin, Yang Liu
Decision-making tasks carried out by the usage of deep neural networks are successfully taking over in many areas, including security-critical ones such as healthcare, transportation and smart grids, where intentional and unintentional failures can be disastrous. Edge computing systems are becoming ubiquitous nowadays, often serving deep learning tasks that do not need to be sent over to servers. Therefore, there is a necessity to evaluate the potential attacks that can target deep learning in the edge. In this work, we present an evaluation of the reliability of deep neural networks (DNNs) against fault injection attacks. We first experimentally evaluate DNNs implemented in an embedded device by using laser fault injection to get insight into possible attack vectors. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. We then perform a deep study on DNNs based on the derived fault models by using several different attack strategies based on random faults. We also investigate a powerful attacker who can find effective fault locations based on a genetic algorithm, to show the most efficient attacks in terms of misclassification success rates. Finally, we show how a state-of-the-art countermeasure against model extraction attacks can be bypassed with a fault attack. Our results can serve as a basis for outlining the susceptibility of DNNs to physical attacks, which can be considered a viable attack vector whenever a device is deployed in a hostile environment.
Last updated:  2019-05-31
Fast Keyed-Verification Anonymous Credentials on Standard Smart Cards
Jan Camenisch, Manu Drijvers, Petr Dzurenda, Jan Hajny
Cryptographic anonymous credential schemes allow users to prove their personal attributes, such as age, nationality, or the validity of a ticket or a pre-paid pass, while preserving their privacy, as such proofs are unlinkable and attributes can be selectively disclosed. Recently, Chase et al. (CCS 2014) observe that in such systems, a typical setup is that the credential issuer also serves as the verifier. They introduce keyed-verification credentials that are tailored to this setting. In this paper, we present a novel keyed-verification credential system designed for lightweight devices (primarily smart cards) and prove its security. By using a novel algebraic MAC based on Boneh-Boyen signatures, we achieve the most efficient proving protocol compared to existing schemes. To demonstrate the practicality of our scheme in real applications, including large-scale services such as public transportation or e-government, we present an implementation on a standard, off-the-shelf, Multos smart card. While using significantly higher security parameters than most existing implementations, we achieve performance that is more than 44 % better than the current state-of-the-art implementation.
Last updated:  2019-05-22
From Collisions to Chosen-Prefix Collisions - Application to Full SHA-1
Gaëtan Leurent, Thomas Peyrin
A chosen-prefix collision attack is a stronger variant of a collision attack, where an arbitrary pair of challenge prefixes is turned into a collision. Chosen-prefix collisions are usually significantly harder to produce than (identical-prefix) collisions, but the practical impact of such an attack is much larger. While many cryptographic constructions rely on collision resistance for their security proofs, collision attacks are hard to turn into a break of concrete protocols, because the adversary has limited control over the colliding messages. On the other hand, chosen-prefix collisions have been shown to break certificates (by creating a rogue CA) and many internet protocols (TLS, SSH, IPsec). In this article, we propose new techniques to turn collision attacks into chosen-prefix collision attacks. Our strategy is composed of two phases: first, a birthday search that aims at taking the random chaining variable difference (due to the chosen-prefix model) to a set of pre-defined target differences. Then, using a multi-block approach and carefully analysing the clustering effect, we map this new chaining variable difference to a colliding pair of states using techniques developed for collision attacks. We apply those techniques to MD5 and SHA-1, and obtain improved attacks. In particular, we have a chosen-prefix collision attack against SHA-1 with complexity between $2^{66.9}$ and $2^{69.4}$ (depending on assumptions about the cost of finding near-collision blocks), while the best-known attack has complexity $2^{77.1}$. This is within a small factor of the complexity of the classical collision attack on SHA-1 (estimated as $2^{64.7}$). This represents yet another warning that industries and users have to move away from using SHA-1 as soon as possible.
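The first phase rests on the generic birthday bound: a collision on an n-bit value is expected after roughly 2^{n/2} trials. The toy search below illustrates only that generic principle, on a truncated SHA-256 (the truncation width is an arbitrary choice here); it has none of the near-collision machinery of the actual SHA-1 attack.

```python
import hashlib

def trunc_hash(msg: bytes, bits: int = 24) -> int:
    # The n-bit value targeted by the birthday search, here truncated SHA-256.
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - bits)

def birthday_collision(bits: int = 24):
    # Generic birthday search: expected ~2^(bits/2) hash evaluations.
    seen = {}
    counter = 0
    while True:
        msg = counter.to_bytes(8, "big")   # distinct messages by construction
        v = trunc_hash(msg, bits)
        if v in seen:
            return seen[v], msg            # two messages, same n-bit value
        seen[v] = msg
        counter += 1
```

In the chosen-prefix setting the quantity being collided is a chaining-variable difference landing in a target set rather than a full hash value, but the square-root cost estimate comes from the same argument.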
Last updated:  2023-07-05
Poseidon: A New Hash Function for Zero-Knowledge Proof Systems
Lorenzo Grassi, Dmitry Khovratovich, Christian Rechberger, Arnab Roy, Markus Schofnegger
The area of practical computational integrity proof systems, like SNARKs, STARKs, Bulletproofs, is seeing a very dynamic development with several constructions having appeared recently with improved properties and relaxed setup requirements. Many use cases of such systems involve, often as their most expensive part, proving the knowledge of a preimage under a certain cryptographic hash function, which is expressed as a circuit over a large prime field. A notable example is a zero-knowledge proof of coin ownership in the Zcash cryptocurrency, where the inadequacy of the SHA-256 hash function for such a circuit caused a huge computational penalty. In this paper, we present a modular framework and concrete instances of cryptographic hash functions which work natively with GF(p) objects. Our hash function Poseidon uses up to 8x fewer constraints per message bit than Pedersen Hash. Our construction is not only expressed compactly as a circuit, but can also be tailored for various proof systems using specially crafted polynomials, thus bringing another boost in performance. We demonstrate this by implementing a 1-out-of-a-billion membership proof with Merkle trees in less than a second by using Bulletproofs.
Last updated:  2019-05-10
Forgery Attack on mixFeed in the Nonce-Misuse Scenario
Mustafa Khairallah
mixFeed [CN19] is a Round 1 candidate in the NIST Lightweight Cryptography Standardization Project. It is a single-pass, nonce-based, AES-based authenticated encryption algorithm. The authors claim that, while there are no guarantees for security in terms of confidentiality in case of nonce misuse (repetition), integrity security still holds up to 2^32 data complexity. In this report, we show that this claim does not hold when the plaintext length is non-zero (≥ 16 bytes, to be exact). We show a forgery attack that requires only two encryption queries with the same nonce and 34 bytes of data.
Last updated:  2019-05-14
UniqueChain: A Fast, Provably Secure Proof-of-Stake Based Blockchain Protocol in the Open Setting
Peifang Ni, Hongda Li, Xianning Meng, Dongxue Pan
We present UniqueChain, a proof-of-stake based blockchain protocol that is secure against a mildly adaptive adversary in the open setting, where newly joining parties can be initialized securely without any additional trusted assumptions. Moreover, UniqueChain provides secure best local chains for existing honest parties and achieves fast message (transaction) confirmation. Security of the protocol holds if the majority of overall stake is controlled by honest parties. To achieve the above guarantees, we formalize a secure bootstrapping mechanism for new parties and a best local chain selection rule for existing honest parties, and propose a new form of two-chain structure that realizes uniqueness of the chains, which contain messages, held by honest parties. Further, we prove that UniqueChain satisfies the security properties of chain growth, chain quality, common prefix and soundness, and two additional properties: uniqueness and high efficiency.
Last updated:  2019-05-10
FloodXMR: Low-cost transaction flooding attack with Monero’s bulletproof protocol
João Otávio Massari Chervinski, Diego Kreutz, Jiangshan Yu
Monero is one of the first and most popular cryptocurrencies to address the privacy issues of other crypto coins such as Bitcoin. Monero has a market capitalization of over one billion US dollars, and is ranked the 12th most valuable cryptocurrency on CoinMarketCap (17 April 2019). This digital coin provides different mechanisms to protect its users, such as decoy keys or mixins to obfuscate transaction inputs. However, in spite of the efforts to protect Monero users’ privacy, transaction tracing attacks are still feasible. Our contribution is twofold. First, we propose and evaluate a new traceability attack, called the transaction flooding attack (FloodXMR). Second, we present an analysis of the costs required for an attacker to conduct FloodXMR. We show how an attacker can take advantage of Monero’s Bulletproof protocol, which reduces transaction fees, to flood the network with his own transactions and, consequently, remove mixins from transaction inputs. Assuming an attack timeframe of 12 months, our findings show that an attacker can trace up to 47.63% of the transaction inputs at a cost of just 1,746.53 USD. Moreover, we also show that more than 90% of the inputs are affected by our tracing algorithm.
Last updated:  2020-01-16
Non-Interactive MPC with Trusted Hardware Secure Against Residual Function Attacks
Ryan Karl, Timothy Burchfield, Jonathan Takeshita, Taeho Jung
Secure multiparty computation (MPC) has been repeatedly optimized, and protocols with two communication rounds and strong security guarantees have been achieved. While progress has been made constructing non-interactive protocols with just one round of online communication (i.e., non-interactive MPC or NI-MPC), since correct evaluation must be guaranteed with only one round, these protocols are by their nature vulnerable to the residual function attack in the standard model. This is because a party that receives a garbled circuit may repeatedly evaluate the circuit locally, while varying their own inputs and fixing the inputs of others, to learn the values entered by other participants. We present the first MPC protocol with a one-round online phase that is secure against the residual function attack. We also present rigorous proofs of correctness and security for our protocol in the covert adversary model, a reduction of the malicious model that is stronger than the semi-honest model and better suited for modeling the behaviour of parties in the real world. Furthermore, we rigorously analyze the communication and computational complexity of current state-of-the-art protocols, which require two rounds of communication or one round during the online phase with a reduced security requirement, and demonstrate that our protocol is comparable to or outperforms their complexity.
Last updated:  2019-05-08
A New Approach to Modelling Centralised Reputation Systems
Lydia Garms, Elizabeth A. Quaglia
A reputation system assigns a user or item a reputation value which can be used to evaluate trustworthiness. Blömer, Juhnke and Kolb in 2015, and Kaafarani, Katsumata and Solomon in 2018, gave formal models for centralised reputation systems, which rely on a central server and are widely used by service providers such as AirBnB, Uber and Amazon. In these models, reputation values are given to items, instead of users. We advocate a need for a shift in how reputation systems are modelled, whereby reputation values are given to users, instead of items, and each user has unlinkable items that other users can give feedback on, contributing to their reputation value. This setting is not captured by the previous models, and we argue it captures more realistically the functionality and security requirements of a reputation system. We provide definitions for this new model, and give a construction from standard primitives, proving it satisfies these security requirements. We show that there is a low efficiency cost for this new functionality.
Last updated:  2022-11-28
A Central Limit Framework for Ring-LWE Decryption
Sean Murphy, Rachel Player
This paper develops Central Limit arguments for analysing the noise in ciphertexts in two homomorphic encryption schemes that are based on Ring-LWE. The first main contribution of this paper is to present an average-case noise analysis for the BGV scheme. Our approach builds upon the recent work of Costache et al. that gives the approximation of a polynomial product as a multivariate Normal distribution. We show how this result can be applied in the BGV context and experimentally verify its improvement over prior, worst-case, approaches. Our second main contribution is to develop a Central Limit framework to analyse the noise growth in the homomorphic Ring-LWE cryptosystem of Lyubashevsky, Peikert and Regev (Eurocrypt 2013, full version). Our approach is very general: apart from finite variance, no assumption on the distribution of the noise is required (in particular, the noise need not be subgaussian). We show that our approach leads to tighter bounds for the probability of decryption failure than have been obtained in prior work.
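The flavour of the Central Limit argument can be checked with a small plaintext simulation: a single coefficient of a product of two noise polynomials is a sum of many independent coefficient products, so it is approximately normal with a variance predicted by the term count. The ring dimension, ternary coefficient distribution and trial count below are illustrative assumptions, not the paper's parameters.

```python
import random

random.seed(1)
N = 64          # illustrative "ring dimension"
TRIALS = 10000

def ternary_poly(n):
    # Coefficients uniform in {-1, 0, 1}; each has mean 0 and variance 2/3.
    return [random.choice((-1, 0, 1)) for _ in range(n)]

def middle_coeff(a, b):
    # Coefficient of x^(N-1) in a*b: a sum of N independent products,
    # which is the shape of sum the Central Limit argument applies to.
    return sum(a[i] * b[N - 1 - i] for i in range(N))

samples = [middle_coeff(ternary_poly(N), ternary_poly(N)) for _ in range(TRIALS)]
mean = sum(samples) / TRIALS
var = sum((s - mean) ** 2 for s in samples) / TRIALS
predicted = N * (2 / 3) ** 2   # CLT variance prediction: N * Var(a_i) * Var(b_j)
```

Empirically the sample variance lands close to the predicted N·Var(a)·Var(b), which is the kind of average-case estimate the paper makes rigorous for BGV and Ring-LWE decryption noise.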
Last updated:  2019-05-08
Reducing the Cost of Authenticity with Leakages: a CIML2-Secure AE Scheme with One Call to a Strongly Protected Tweakable Block Cipher
Francesco Berti, Olivier Pereira, François-Xavier Standaert
This paper presents CONCRETE (Commit-Encrypt-Send-the-Key), a new Authenticated Encryption mode that offers CIML2 security, that is, ciphertext integrity in the presence of nonce misuse and side-channel leakages in both encryption and decryption. CONCRETE improves on a recent line of works aiming at leveled implementations, which mix a strongly protected and energy demanding implementation of a single component, and other weakly protected and much cheaper components. Here, these components all implement a tweakable block cipher TBC. CONCRETE requires the use of the strongly protected TBC only once while supporting the leakage of the full state of the weakly protected components -- it achieves CIML2 security in the so-called unbounded leakage model. All previous works need to use the strongly protected implementation at least twice. As a result, for short messages whose encryption and decryption energy costs are dominated by the strongly protected component, we halve the cost of a leakage-resilient implementation. CONCRETE additionally provides security when unverified plaintexts are released, and confidentiality in the presence of simulatable leakages in encryption and decryption.
Last updated:  2019-05-08
HMAKE: Legacy-Compliant Multi-factor Authenticated Key Exchange from Historical Data
Chenglu Jin, Zheng Yang, Sridhar Adepu, Jianying Zhou
In this paper, we introduce two lightweight historical data based multi-factor authenticated key exchange (HMAKE) protocols in the random oracle model. Our HMAKE protocols use a symmetric secret key as their first authentication factor, together with their second authentication factor, historical data exchanged between the two parties in the past, and the third authentication factor, a set of secret tags associated with the historical data, to establish a secure communication channel between the client and the server. A remarkable security feature of HMAKE is bounded historical tag leakage resilience, which means that (informally speaking) if a small portion of the secret tags is leaked to an adversary, it will not affect the security of the HMAKE protocol with overwhelming probability. Our first HMAKE protocol can provide static bounded leakage resilience, meaning that the secret tags are leaked at the beginning of the security game. To enhance its security, our second HMAKE protocol makes use of our first protocol as a compiler to transform any passively secure two-message key exchange protocol into an actively secure HMAKE protocol with perfect forward secrecy, and therefore it can be secure even if the historical tags are compromised adaptively by an attacker. In addition to the strong security properties we achieved, our protocols can potentially have great impact in practice: they are efficient in computation, and they are compatible with legacy devices in cyber-physical systems.
Last updated:  2019-12-19
Limits to Non-Malleability
Marshall Ball, Dana Dachman-Soled, Mukul Kulkarni, Tal Malkin
There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question: "When can we rule out the existence of a non-malleable code for a tampering class $\mathcal{F}$?" We show that non-malleable codes are impossible to construct for three different tampering classes: 1. Functions that change $d/2$ symbols, where $d$ is the distance of the code; 2. Functions where each input symbol affects only a single output symbol; 3. Functions where each of the $n$ output bits is a function of $n-\log n$ input bits. We additionally rule out constructions of non-malleable codes for certain classes $\mathcal{F}$ via reductions to the assumption that a distributional problem is hard for $\mathcal{F}$, that make black-box use of the tampering functions in the proof. In particular, this yields concrete obstacles for the construction of efficient codes for $\mathsf{NC}$, even assuming average-case variants of $P\not\subseteq\mathsf{NC}$.
Last updated:  2019-05-08
Efficient Pseudorandom Correlation Generators: Silent OT Extension and More
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, Peter Scholl
Secure multiparty computation (MPC) often relies on sources of correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight “non-cryptographic” online phase once the inputs are known. However, since the amount of correlated randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage. A natural tool for addressing the above limitations is a pseudorandom correlation generator (PCG). A PCG allows two or more parties to securely generate long sources of useful correlated randomness via a local expansion of correlated short seeds and no interaction. PCGs enable MPC with silent preprocessing, where a small amount of interaction used for securely sampling the seeds is followed by silent local generation of correlated pseudorandomness. A concretely efficient PCG for Vector-OLE correlations was recently obtained by Boyle et al. (CCS 2018) based on variants of the learning parity with noise (LPN) assumption over large fields. In this work, we initiate a systematic study of PCGs and present concretely efficient constructions for several types of useful MPC correlations. We obtain the following main contributions: – PCG foundations. We give a general security definition for PCGs. Our definition suffices for any MPC protocol satisfying a stronger security requirement that is met by existing protocols. We prove that a stronger security requirement is indeed necessary, and justify our PCG definition by ruling out a stronger and more natural definition. – Silent OT extension. We present the first concretely efficient PCG for oblivious transfer correlations. Its security is based on a variant of the binary LPN assumption and any correlation-robust hash function. 
We expect it to provide a faster alternative to the IKNP OT extension protocol (Crypto ’03) when communication is the bottleneck. We present several applications, including protocols for non-interactive zero-knowledge with bounded-reusable preprocessing from binary LPN, and concretely efficient related-key oblivious pseudorandom functions. – PCGs for simple 2-party correlations. We obtain PCGs for several other types of useful 2-party correlations, including (authenticated) one-time truth-tables and Beaver triples. While the latter PCGs are slower than our PCG for OT, they are still practically feasible. These PCGs are based on a host of assumptions and techniques, including specialized homomorphic secret sharing schemes and pseudorandom generators tailored to their structure. – Multiparty correlations. We obtain PCGs for multiparty correlations that can be used to make the circuit-dependent communication of MPC protocols scale linearly (instead of quadratically) with the number of parties.
Last updated:  2019-05-09
Practical Key-recovery Attacks on Round-Reduced Ketje Jr, Xoodoo-AE and Xoodyak
Haibo Zhou, Zheng Li, Xiaoyang Dong, Keting Jia, Willi Meier
The conditional cube attack was proposed by Huang et al. at EUROCRYPT 2017 to attack the Keccak keyed mode. Inspired by the dynamic cube attack, they reduce the degree by appending key-bit conditions on the initial value (IV). Recently, Li et al. proposed new conditional cube attacks on the Keccak keyed mode with extremely small degrees of freedom. In this paper, we find a new property of Li et al.'s method, modify the new conditional cube attack for lightweight encryption algorithms using an 8-2-2 pattern, and apply it to 5-round Ketje Jr, 6-round Xoodoo-AE and Xoodyak, where Ketje Jr is among the 3rd-round CAESAR competition candidates and Xoodyak is a Round 1 submission to the ongoing NIST lightweight cryptography project. We then give the updated conditional cube attack analysis. All our results have practical time complexity with negligible memory cost, and our test codes are given in this paper. Notably, this is the first third-party cryptanalysis result for Xoodyak.
Last updated:  2019-05-08
Backward Private DSSE: Alternative Formulations of Information Leakage and Efficient Constructions
Sanjit Chatterjee, Shravan Kumar Parshuram Puria, Akash Shah
Dynamic Searchable Symmetric Encryption ($\mathsf{DSSE}$), apart from providing support for search operations, allows a client to perform update operations on an outsourced database efficiently. Two security properties, viz., forward privacy and backward privacy, are desirable from a $\mathsf{DSSE}$ scheme. The former captures that newly updated entries cannot be related to previous search queries, and the latter ensures that search queries should not leak matching entries after they have been deleted. These security properties are formalized in terms of the information leakage that can be incurred by the respective constructions. Existing backward private constructions either have a non-optimal communication overhead or make use of heavy cryptographic primitives. Our main contribution consists of three efficient backward private schemes that aim to achieve practical efficiency by using lightweight symmetric cryptographic components only. In the process, we also revisit the existing definitions of information leakage for backward privacy [Bost et al. CCS'17] and propose alternative formulations. Our first construction $\Pi_\mathsf{BP}\text{-}\mathsf{prime}$ achieves a stronger notion of backward privacy, whereas our next two constructions $\Pi_\mathsf{BP}$ and $\Pi_\mathsf{WBP}$ achieve optimal communication complexity at the cost of some additional leakage. The prototype implementations of our schemes depict the practicability of the proposed constructions and indicate that the cost of achieving backward privacy over forward privacy is substantially small.
Last updated:  2019-05-08
Lattice-based Zero-Knowledge Proofs: New Techniques for Shorter and Faster Constructions and Applications
Muhammed F. Esgin, Ron Steinfeld, Joseph K. Liu, Dongxi Liu
We devise new techniques for design and analysis of efficient lattice-based zero-knowledge proofs (ZKP). First, we introduce one-shot proof techniques for non-linear polynomial relations of degree $k\ge 2$, where the protocol achieves a negligible soundness error in a single execution, and thus performs significantly better in both computation and communication compared to prior protocols requiring multiple repetitions. Such proofs with degree $k\ge 2$ have been crucial ingredients for important privacy-preserving protocols in the discrete logarithm setting, such as Bulletproofs (IEEE S&P '18) and arithmetic circuit arguments (EUROCRYPT '16). In contrast, one-shot proofs in lattice-based cryptography have previously only been shown for the linear case ($k=1$) and a very specific quadratic case ($k=2$), which are obtained as a special case of our technique. Moreover, we introduce two speedup techniques for lattice-based ZKPs: a CRT-packing technique supporting ``inter-slot'' operations, and ``NTT-friendly'' tools that permit the use of fully-splitting rings. The former technique comes at almost no cost to the proof length, and the latter one barely increases it, which can be compensated for by tweaking the rejection sampling parameters while still having faster computation overall. To illustrate the utility of our techniques, we show how to use them to build efficient relaxed proofs for important relations, namely proof of commitment to bits, one-out-of-many proof, range proof and set membership proof. Despite their relaxed nature, we further show how our proof systems can be used as building blocks for advanced cryptographic tools such as ring signatures. Our ring signature achieves a dramatic improvement in length over all the existing proposals from lattices at the same security level. The computational evaluation also shows that our construction is highly likely to outperform all the relevant works in running times. 
Being efficient in both aspects, our ring signature is particularly suitable for both small-scale and large-scale applications such as cryptocurrencies and e-voting systems. No trusted setup is required for any of our proposals.
Last updated:  2019-06-26
Symmetric-key Authenticated Key Exchange (SAKE) with Perfect Forward Secrecy
Gildas Avoine, Sébastien Canard, Loïc Ferreira
Key exchange protocols in the asymmetric-key setting are known to provide stronger security properties than protocols in symmetric-key cryptography. In particular, they can provide perfect forward secrecy, as illustrated by key exchange protocols based on the Diffie-Hellman scheme. However, public-key algorithms are too heavy for low-resource devices, which therefore cannot benefit from forward secrecy. In this paper, we describe a scheme that solves this issue. Using a nifty resynchronisation technique, we propose an authenticated key exchange protocol in the symmetric-key setting that guarantees perfect forward secrecy. We prove that the protocol is sound, and provide a formal security proof.
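The general principle behind symmetric-key forward secrecy can be sketched in a few lines. This is a minimal illustration of key evolution only, not SAKE's authenticated resynchronisation mechanism: the shared master key is ratcheted through a one-way function after each session and the old key is erased, so a later compromise cannot expose past session keys.

```python
import hashlib, hmac

def ratchet(master):
    """One-way step to the next master key; the old key must be deleted."""
    return hashlib.sha256(b"ratchet" + master).digest()

def session_key(master, nonce):
    """Per-session key derived from the current master key."""
    return hmac.new(master, b"session" + nonce, hashlib.sha256).digest()

k = b"\x00" * 32                 # initial shared key (demo value only)
k1 = session_key(k, b"nonce-1")
k = ratchet(k)                   # both devices erase the previous k
k2 = session_key(k, b"nonce-2")
# k1 cannot be recomputed from the current k: SHA-256 is one-way, so
# compromising the device now does not reveal session 1's traffic.
assert k1 != k2 and len(k) == 32
```

The hard part, which the paper addresses, is keeping the two parties' evolving keys authenticated and synchronised when messages are lost; the ratchet itself is the easy half.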
Last updated:  2019-05-08
Contingent payments on a public ledger: models and reductions for automated verification
Sergiu Bursuc, Steve Kremer
We study protocols that rely on a public ledger infrastructure, concentrating on protocols for zero-knowledge contingent payment, whose security properties combine diverse notions of fairness and privacy. We argue that rigorous models are required for capturing the ledger semantics, the protocol-ledger interaction, the cryptographic primitives and, ultimately, the security properties one would like to achieve. Our focus is on a particular level of abstraction, where network messages are represented by a term algebra, protocol execution by state transition systems (e.g. multiset rewrite rules) and where the properties of interest can be analyzed with automated verification tools. We propose models for: (1) the rules guiding the ledger execution, taking the coin functionality of public ledgers such as Bitcoin as an example; (2) the security properties expected from ledger-based zero-knowledge contingent payment protocols; (3) two different security protocols that aim at achieving these properties relying on different ledger infrastructures; (4) reductions that allow simpler term algebras for homomorphic cryptographic schemes. Altogether, these models allow us to derive a first automated verification for ledger-based zero-knowledge contingent payment using the Tamarin prover. Furthermore, our models help in clarifying certain underlying assumptions, security and efficiency tradeoffs that should be taken into account when deploying protocols on the blockchain.
Last updated:  2019-05-03
K2SN-MSS: An Efficient Post-Quantum Signature (Full Version)
Sabyasachi Karati, Reihaneh Safavi-Naini
With the rapid development of quantum technologies, quantum-safe cryptography has attracted significant attention. Hash-based signature schemes have been of particular interest because of (i) the importance of digital signatures as the main source of trust on the Internet, (ii) the fact that the security of these signatures relies on the existence of one-way functions, which is the minimal assumption for signature schemes, and (iii) the fact that they can be efficiently implemented. Basic hash-based signatures are for a single message, but have been extended for signing multiple messages. In this paper we design a Multi-message Signature Scheme (MSS) based on an existing One-Time Signature (OTS) that we refer to as KSN-OTS. KSN-OTS uses SWIFFT, an additively homomorphic lattice-based hash function family with a provable one-wayness property, as the one-way function, and achieves a short signature. We prove security of our proposed signature scheme in a new strengthened security model (multi-target multi-function) of MSS, determine the system parameters for 512-bit classical (256-bit quantum) security, and compare parameter sizes of our scheme against XMSS, a widely studied hash-based MSS that has been a candidate for NIST standardization of post-quantum signature schemes. We give an efficient implementation of our scheme using the Intel SIMD (Single Instruction Multiple Data) instruction set. For this, we first implement the SWIFFT computation using a SIMD parallelization of the Number Theoretic Transform (NTT) of elements of the ring $\mathbb{Z}_p[X]/(X^n+1)$, which can support different levels of parallelization. We compare the efficiency of this implementation with a comparable (security level) implementation of XMSS and show its superior performance on a number of efficiency parameters.
Last updated:  2019-08-10
The State of the Uniform: Attacks on Encrypted Databases Beyond the Uniform Query Distribution
Evgenios M. Kornaropoulos, Charalampos Papamanthou, Roberto Tamassia
Recent foundational work on leakage-based attacks on encrypted databases has broadened our understanding of what an adversary can accomplish with a standard leakage profile. Nevertheless, all known value reconstruction attacks succeed under strong assumptions that may not hold in the real world. The most prevalent assumption is that queries should be issued uniformly at random by the client. We present the first value reconstruction attacks for encrypted databases without any assumptions about the query or data distribution. Our approach uses the search pattern leakage, which exists in all known structured encryption schemes but has not been effectively utilized so far. At the core of our method lies a support size estimator, a technique that utilizes the repetition of search tokens with the same response to estimate distances between encrypted values without any assumptions about the underlying distribution. We develop distribution-agnostic reconstruction attacks for both range queries and k-nearest-neighbor (k-NN) queries based on information extracted from the search pattern leakage. Our new range attack follows a different algorithmic approach than state-of-the-art attacks, which are fine-tuned to succeed under the uniform queries. Instead, we reconstruct plaintext values under a variety of skewed query distributions and even outperform the accuracy of previous approaches under uniform query distribution. Our new k-NN attack succeeds with far fewer samples than a previously proposed attack and scales to much larger values of k. We demonstrate the effectiveness of our attacks by experimentally testing them on a wide range of query distributions and database densities, both unknown to the adversary.
Last updated:  2019-07-20
Elastic-Tweak: A Framework for Short Tweak Tweakable Block Cipher
Avik Chakraborti, Nilanjan Datta, Ashwin Jha, Cuauhtemoc Mancillas Lopez, Mridul Nandi, Yu Sasaki
Tweakable block cipher (TBC), a stronger notion than standard block ciphers, has wide-scale applications in symmetric-key schemes. At a high level, it provides flexibility in design and (possibly) better security bounds. In multi-keyed applications, a TBC with short tweak values can be used to replace multiple keys. However, the existing TBC construction frameworks, including TWEAKEY and XEX, are designed for general-purpose tweak sizes. Specifically, they are not optimized for short tweaks, which might render them inefficient for certain resource-constrained applications. So a dedicated paradigm to construct short-tweak TBCs (tBC) is highly desirable. In this paper, we present a dedicated framework, called the Elastic-Tweak framework (ET in short), to convert any reasonably secure SPN block cipher into a secure tBC. We apply the ET framework on GIFT and AES to construct efficient tBCs, named TweGIFT and TweAES. We present hardware and software results to show that the performance overheads for these tBCs are minimal. We perform comprehensive security analysis and observe that TweGIFT and TweAES provide sufficient security without any increase in the number of block cipher rounds when compared to GIFT and AES. We also show some concrete applications of ET-based tBCs, which are better than their block cipher counterparts in terms of key size, state size, number of block cipher calls, and short message processing. Some notable applications include Twe-FCBC (reduces the key size of FCBC and gives better security than CMAC), Twe-LightMAC Plus (better rate than LightMAC Plus), Twe-CLOC, and Twe-SILC (reduces the number of block cipher calls and simplifies the design of CLOC and SILC).
Last updated:  2019-10-18
A Comprehensive Study of Deep Learning for Side-Channel Analysis
Loïc Masure, Cécile Dumas, Emmanuel Prouff
Recently, several studies have been published on the application of deep learning to enhance Side-Channel Attacks (SCA). These seminal works have practically validated the soundness of the approach, especially against implementations protected by masking or by jittering. Concurrently, important open issues have emerged. Among them, the relevance of machine (and thereby deep) learning based SCA has been questioned in several papers based on the lack of a relation between accuracy, a typical performance metric used in machine learning, and common SCA metrics like the guessing entropy or the key-discrimination success rate. Also, the impact of the classical side-channel counter-measures on the efficiency of deep learning has been questioned, in particular by the semi-conductor industry. Both questions highlight the importance of studying the theoretical soundness of deep learning in the side-channel context and of developing means to quantify its efficiency, especially with respect to the optimality bounds published so far in the literature for side-channel leakage exploitation. The first main contribution of this paper directly concerns the latter point. It is indeed proved that minimizing the Negative Log Likelihood (NLL for short) loss function during the training of deep neural networks is actually asymptotically equivalent to maximizing the Perceived Information introduced by Renauld et al. at EUROCRYPT 2011 as a lower bound of the Mutual Information between the leakage and the target secret. Hence, such a training can be considered as an efficient and effective estimation of the PI, and thereby of the MI (known to be complex to accurately estimate in the context of secure implementations). As a second direct consequence of our main contribution, it is argued that, in a side-channel exploitation context, choosing the NLL loss function to drive the training is sound from an information theory point of view. 
As a third contribution, classical counter-measures like Boolean masking or execution flow shuffling, initially dedicated to classical SCA, are proved to remain sound against deep-learning-based attacks.
Last updated:  2019-05-03
Oblivious PRF on Committed Vector Inputs and Application to Deduplication of Encrypted Data
Jan Camenisch, Angelo De Caro, Esha Ghosh, Alessandro Sorniotti
Ensuring secure deduplication of encrypted data is a very active topic of research because deduplication is effective at reducing storage costs. Schemes supporting deduplication of encrypted data that are not vulnerable to content guessing attacks (such as Message Locked Encryption) have been proposed recently [Bellare et al. 2013, Li et al. 2015]. However, in all these schemes, there is a key derivation phase that solely depends on a short hash of the data and not the data itself. Therefore, a file-specific key can be obtained by anyone possessing the hash. Since hash values are usually not meant to be secret, a desirable solution would be a more robust oblivious key generation protocol where file hashes need not be kept private. Motivated by this use-case, we propose a new primitive for oblivious pseudorandom function (OPRF) on committed vector inputs in the universally composable (UC) framework. We formalize this functionality as $\mathcal{F}_\mathsf{OOPRF}$, where $\mathsf{OOPRF}$ stands for Ownership-based Oblivious PRF. $\mathcal{F}_\mathsf{OOPRF}$ produces a unique random key on input a vector digest provided the client proves knowledge of a (parametrisable) number of random positions of the input vector. To construct an efficient $\mathsf{OOPRF}$ protocol, we carefully combine a hiding vector commitment scheme, a variant of the PRF scheme of Dodis-Yampolskiy [Dodis et al. 2005] and a homomorphic encryption scheme glued together with concrete, efficient instantiations of proofs of knowledge. To the best of our knowledge, our work shows for the first time how these primitives can be combined in a secure, efficient and useful way. We also propose a new vector commitment scheme with constant-sized public parameters but $O(\log n)$-size witnesses, where $n$ is the length of the committed vector. This can be of independent interest.
Last updated:  2019-05-03
Efficient coding for secure computing with additively-homomorphic encrypted data
Thijs Veugen
A framework is introduced for efficiently computing with encrypted data. We assume a semi-honest security model with two computing parties. Two different coding techniques are used with additively homomorphic encryption, such that many values can be put into one large encryption, and additions and multiplications can be performed on all values simultaneously. For more complicated operations such as comparisons and equality tests, bit-wise secret sharing is proposed as an additional technique that has a low computational and communication complexity, and which allows for precomputing. The framework is shown to significantly improve the computational complexity of state-of-the-art solutions on generic operations such as secure comparisons and secure set intersection.
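As a hedged illustration of the coding idea described above (not the paper's exact framework), here is a toy Paillier instance with deliberately tiny, insecure demo parameters: several small values are packed into bit-slots of a single plaintext, and one homomorphic ciphertext multiplication then adds every slot simultaneously.

```python
import math, secrets

# Toy Paillier over ~40-bit modulus; demo primes only, NOT secure.
p, q = 1000003, 1000033
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)  # decryption helper

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

SLOT_BITS = 10  # each slot holds values < 1024, with headroom for carries

def pack(vals):
    return sum(v << (i * SLOT_BITS) for i, v in enumerate(vals))

def unpack(m, k):
    return [(m >> (i * SLOT_BITS)) % (1 << SLOT_BITS) for i in range(k)]

c1 = enc(pack([3, 5, 7]))
c2 = enc(pack([10, 20, 30]))
# Additive homomorphism: multiplying ciphertexts adds all slots at once.
assert unpack(dec(c1 * c2 % n2), 3) == [13, 25, 37]
```

The slot width must leave room so additions never carry into the next slot; the framework in the paper handles multiplications and the more complicated operations on top of this kind of packing.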
Last updated:  2020-02-07
Flexible Authenticated and Confidential Channel Establishment (fACCE): Analyzing the Noise Protocol Framework
Benjamin Dowling, Paul Rösler, Jörg Schwenk
The Noise protocol framework is a suite of channel establishment protocols, of which each individual protocol ensures various security properties of the transmitted messages, but keeps specification, implementation, and configuration relatively simple. Implementations of the Noise protocols are themselves, due to the employed primitives, very performant. Thus, despite its relative youth, Noise is already used by large-scale deployed applications such as WhatsApp and Slack. Though the Noise specification describes and claims the security properties of the protocol patterns very precisely, there has been no computational proof yet. We close this gap. Noise uses only a limited number of cryptographic primitives which makes it an ideal candidate for reduction-based security proofs. Due to its patterns' characteristics as channel establishment protocols, and the usage of established keys within the handshake, the authenticated and confidential channel establishment (ACCE) model (Jager et al. CRYPTO 2012) seems to perfectly fit for an analysis of Noise. However, the ACCE model strictly divides protocols into two non-overlapping phases: the pre-accept phase (i.e., the channel establishment) and post-accept phase (i.e., the channel). In contrast, Noise allows the transmission of encrypted messages as soon as any key is established (for instance, before authentication between parties has taken place), and then incrementally increases the channel's security guarantees. By proposing a generalization of the original ACCE model, we capture security properties of such staged channel establishment protocols flexibly – comparably to the multi-stage key exchange model (Fischlin and Günther CCS 2014). We give security proofs for eight of the 15 basic Noise patterns.
Last updated:  2020-06-21
A Complete and Optimized Key Mismatch Attack on NIST Candidate NewHope
Yue Qin, Chi Cheng, Jintai Ding
In CT-RSA 2019, Bauer et al. analyzed the case when the public key is reused for the NewHope key encapsulation mechanism (KEM), a second-round candidate in the NIST Post-Quantum Standardization process. They proposed an elegant method to recover coefficients ranging from -6 to 4 in the secret key. We repeat their experiments and find two fundamental problems. First, even for coefficients in [-6,4], at least 262 of them in each secret key with 1024 coefficients cannot be recovered. Second, for the coefficients outside [-6,4], they suggested an exhaustive search. But for each secret key on average there are 10 coefficients that need to be exhaustively searched, and each of them has 6 possibilities. This makes Bauer et al.'s method highly inefficient. We propose an improved method, which with 99.22% probability can recover all the coefficients ranging from -6 to 4 in the secret key. Then, inspired by Ding et al.'s key mismatch attack, we propose an efficient strategy which with a probability of 96.88% succeeds in recovering all the coefficients in the secret key. Experiments show that our proposed method is very efficient, completing the attack in about 137.56 ms using the NewHope parameters.
Last updated:  2019-04-29
Masking Fuzzy-Searchable Public Databases
Alexandra Boldyreva, Tianxin Tang, Bogdan Warinschi
We introduce and study the notion of keyless fuzzy search (KlFS), which allows masking a publicly available database in such a way that any third party can retrieve content if and only if it possesses some data that is “close to” the encrypted data – no cryptographic keys are involved. We devise a formal security model that asks a scheme not to leak any information about the data and the queries except for some well-defined leakage function if attackers cannot guess the right query to make. In particular, our definition implies that recovering high-entropy data protected with a KlFS scheme is costly. We propose two KlFS schemes: both use locality-sensitive hashes (LSH), cryptographic hashes and symmetric encryption as building blocks. The first scheme is generic and works for abstract plaintext domains. The second scheme is specifically suited for databases of images. To demonstrate the feasibility of our KlFS for images, we implemented and evaluated a prototype system that supports image search by object similarity on a masked database.
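The keyless flavour of the idea can be sketched with MinHash, one standard locality-sensitive hash. This is a hedged toy (not the paper's scheme, and all names below are invented for illustration): each record is indexed under cryptographic hashes of bands of its LSH signature, so only a party holding a sufficiently similar item can re-derive a lookup key.

```python
import hashlib, random

random.seed(1)                     # fixed coefficients for reproducibility
P = 2**61 - 1
COEFFS = [(random.randrange(1, P), random.randrange(P)) for _ in range(32)]

def _h(word):
    """Map a string feature to a 64-bit integer."""
    return int.from_bytes(hashlib.sha256(word.encode()).digest()[:8], "big")

def minhash(items):
    """32-entry MinHash signature of a set of string features."""
    return [min((a * _h(w) + b) % P for w in items) for a, b in COEFFS]

def bucket_keys(items, band=2):
    """Hash the signature in bands: similar sets collide on some band whp,
    and the SHA-256 output serves as the keyless lookup/masking key."""
    sig = minhash(items)
    return {hashlib.sha256(repr(sig[i:i + band]).encode()).hexdigest()
            for i in range(0, len(sig), band)}

doc  = {"sky", "sea", "sun", "sand", "surf", "wave", "tide", "reef", "palm", "gull"}
near = {"sky", "sea", "sun", "sand", "surf", "wave", "tide", "reef", "palm", "crab"}
far  = {"cog", "bolt", "gear", "axle", "cam", "rod", "pin", "hub", "rim", "nut"}
assert bucket_keys(doc) & bucket_keys(near)       # similar items share a bucket
assert not (bucket_keys(doc) & bucket_keys(far))  # unrelated items do not
```

A masked database would store each record encrypted under a key derived from its bucket values, so an attacker who cannot produce a close query learns nothing beyond the leakage function.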
Last updated:  2022-01-09
Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) versus QUIC
Shan Chen, Samuel Jero, Matthew Jagielski, Alexandra Boldyreva, Cristina Nita-Rotaru
Secure channel establishment protocols such as Transport Layer Security (TLS) are some of the most important cryptographic protocols, enabling the encryption of Internet traffic. Reducing latency (the number of interactions between parties before encrypted data can be transmitted) in such protocols has become an important design goal to improve user experience. The most important protocols addressing this goal are TLS 1.3, the latest TLS version standardized in 2018 to replace the widely deployed TLS 1.2, and Quick UDP Internet Connections (QUIC), a secure transport protocol from Google that is implemented in the Chrome browser. There have been a number of formal security analyses for TLS 1.3 and QUIC, but their security, when layered with their underlying transport protocols, cannot be easily compared. Our work is the first to thoroughly compare the security and availability properties of these protocols. Towards this goal, we develop novel security models that permit "layered'' security analysis. In addition to the standard goals of server authentication and data confidentiality and integrity, we consider the goals of IP spoofing prevention, key exchange packet integrity, secure channel header integrity, and reset authentication, which capture a range of practical threats not usually taken into account by existing security models that focus mainly on the cryptographic cores of the protocols. Equipped with our new models we provide a detailed comparison of three low-latency layered protocols: TLS 1.3 over TCP Fast Open (TFO), QUIC over UDP, and QUIC[TLS] (a new design for QUIC that uses TLS 1.3 key exchange) over UDP. In particular, we show that TFO's cookie mechanism does provably achieve the security goal of IP spoofing prevention. Additionally, we find several new availability attacks that manipulate the early key exchange packets without being detected by the communicating parties. 
By including packet-level attacks in our analysis, our results shed light on how the reliability, flow control, and congestion control of the above layered protocols compare, in adversarial settings. We hope that our models will help protocol designers in their future protocol analyses and that our results will help practitioners better understand the advantages and limitations of secure channel establishment protocols.
Last updated:  2020-03-23
Cryptanalysis of a System Based on Twisted Reed-Solomon Codes
Julien Lavauzelle, Julian Renner
Twisted Reed-Solomon (TRS) codes are a family of codes that contains a large number of maximum distance separable codes that are non-equivalent to Reed-Solomon codes. TRS codes were recently proposed as an alternative to Goppa codes for the McEliece code-based cryptosystem, resulting in a potential reduction of key sizes. The use of TRS codes in the McEliece cryptosystem has been motivated by the fact that a large subfamily of TRS codes is resilient to a direct use of known algebraic key-recovery methods. In this paper, an efficient key-recovery attack on the TRS variant that was used in the McEliece cryptosystem is presented. The algorithm exploits a new approach based on recovering the structure of a well-chosen subfield subcode of the public code. It is proved that the attack always succeeds and breaks the system for all practical parameters in $O(n^4)$ field operations. A software implementation of the algorithm retrieves a valid private key from the public key within a few minutes, for parameters claiming a security level of 128 bits. The success of the attack also indicates that, contrary to common beliefs, subfield subcodes of the public code need to be precisely analyzed when proposing a McEliece-type code-based cryptosystem. Finally, the paper discusses an attempt to repair the scheme and a modification of the attack aiming at Gabidulin-Paramonov-Tretjakov cryptosystems based on twisted Gabidulin codes.
Last updated:  2019-10-01
Cocks-Pinch curves of embedding degrees five to eight and optimal ate pairing computation
Aurore Guillevic, Simon Masson, Emmanuel Thomé
Recent algorithmic improvements of discrete logarithm computation in special extension fields threaten the security of pairing-friendly curves used in practice. A possible answer to this delicate situation is to propose alternative curves that are immune to these attacks, without compromising the efficiency of the pairing computation too much. We follow this direction, and focus on embedding degrees 5 to 8; we extend the Cocks-Pinch algorithm to obtain pairing-friendly curves with an efficient ate pairing. We carefully select our curve parameters so as to thwart possible attacks by “special” or “tower” Number Field Sieve algorithms. We target a 128-bit security level, and back this security claim by time estimates for the DLP computation. We also compare the efficiency of the optimal ate pairing computation on these curves to k = 12 curves (Barreto–Naehrig, Barreto–Lynn–Scott), k = 16 curves (Kachisa–Schaefer–Scott) and k = 1 curves (Chatterjee–Menezes–Rodríguez-Henríquez).
Last updated:  2019-09-17
Composition of Boolean Functions: An Application to the Secondary Constructions of Bent Functions
Guangpu Gao, Dongdai Lin, Wenfen Liu, Yongjuan Wang
Bent functions are optimal combinatorial objects and have attracted research attention for four decades. Secondary constructions play a central role in constructing bent functions since a complete classification of this class of functions is elusive. This paper is devoted to establishing a relationship between the secondary constructions and the composition of Boolean functions. We first prove that some well-known secondary constructions of bent functions can be described by the composition of a plateaued Boolean function and some bent functions. Their dual functions can then be calculated by the Lagrange interpolation formula. Following this observation, two secondary constructions of bent functions are presented. We show that they are inequivalent to the known ones, and may generate bent functions outside the primary classes $\mathcal{M}$ and $\mathcal{PS}$. These results show that the method we present in this paper is generic and unified, and can therefore be applied to the construction of Boolean functions with other cryptographic criteria.
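Bentness is easy to check numerically. As a small worked example (using the classical Maiorana-McFarland function $f(x) = x_1x_2 \oplus x_3x_4$, an illustration rather than one of the paper's new constructions), a function on $n$ variables is bent exactly when every Walsh coefficient has absolute value $2^{n/2}$:

```python
from itertools import product

n = 4

def f(x):
    """The basic Maiorana-McFarland bent function on 4 variables."""
    return (x[0] & x[1]) ^ (x[2] & x[3])

def walsh(a):
    """Walsh coefficient W_f(a) = sum_x (-1)^(f(x) + a.x)."""
    return sum((-1) ** (f(x) ^ sum(ai & xi for ai, xi in zip(a, x)) % 2)
               for x in product((0, 1), repeat=n))

# Bent: the Walsh spectrum is flat with |W_f(a)| = 2^(n/2) = 4 for all a.
assert all(abs(walsh(a)) == 2 ** (n // 2) for a in product((0, 1), repeat=n))
```

The dual function is read off the signs of this flat spectrum, which is where the Lagrange-interpolation computation of duals mentioned in the abstract enters.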
Last updated:  2020-02-02
ASTRA: High Throughput 3PC over Rings with Application to Secure Prediction
Harsh Chaudhari, Ashish Choudhury, Arpita Patra, Ajith Suresh
The concrete efficiency of secure computation has been the focus of many recent works. In this work, we present concretely-efficient protocols for secure $3$-party computation (3PC) over a ring of integers modulo $2^{\ell}$ tolerating one corruption, both with semi-honest and malicious security. Owing to the fact that computation over rings emulates computation on real-world system architectures, secure computation over rings has gained momentum of late. Cast in the offline-online paradigm, our constructions present the most efficient online phase in concrete terms. In the semi-honest setting, our protocol requires communication of $2$ ring elements per multiplication gate during the online phase, attaining a per-party cost of less than one element. This is achieved for the first time in the regime of 3PC. In the malicious setting, our protocol requires communication of $4$ elements per multiplication gate during the online phase, beating the state-of-the-art protocol by $5$ elements. Realized with both the security notions of selective abort and fairness, the malicious protocol with fairness involves slightly more communication than its counterpart with abort security, for the output gates alone. We apply our techniques from 3PC in the regime of secure server-aided machine-learning (ML) inference for a range of prediction functions: linear regression, linear SVM regression, logistic regression, and linear SVM classification. Our setting considers a model-owner with trained model parameters and a client with a query, with the latter willing to learn the prediction of her query based on the model parameters of the former. The inputs and computation are outsourced to a set of three non-colluding servers. Our constructions, catering to both the semi-honest and malicious worlds, invariably perform better than the existing constructions.
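For intuition about 3PC over rings, here is a hedged sketch of replicated secret sharing over $\mathbb{Z}_{2^\ell}$, the general style of sharing used by such protocols (not ASTRA's exact sharing or its offline-online split): each party holds two of the three additive shares, so any two parties can reconstruct while one corrupted party learns nothing.

```python
import secrets

L = 64
MOD = 2 ** L  # the ring Z_{2^l}, matching machine-word arithmetic

def share(x):
    """Split x as x1 + x2 + x3 mod 2^l; party i holds (x_i, x_{i+1})."""
    x1, x2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]

def reconstruct(shares):
    (x1, _), (x2, _), (x3, _) = shares
    return (x1 + x2 + x3) % MOD

def add(sh_a, sh_b):
    """Addition gates are free: each party adds its pairs locally."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(sh_a, sh_b)]

a, b = 1234, 5678
assert reconstruct(add(share(a), share(b))) == (a + b) % MOD
assert reconstruct(share(2**70)) == 2**70 % MOD  # arithmetic wraps mod 2^l
```

Multiplication gates are where protocols differ: the online communication costs of 2 (semi-honest) and 4 (malicious) ring elements per gate quoted in the abstract are precisely the cost of that step.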
Last updated:  2021-05-12
Quantum Lazy Sampling and Game-Playing Proofs for Quantum Indifferentiability
Jan Czajkowski, Christian Majenz, Christian Schaffner, Sebastian Zur
Game-playing proofs constitute a powerful framework for non-quantum cryptographic security arguments, most notably applied in the context of indifferentiability. An essential ingredient in such proofs is lazy sampling of random primitives. We develop a quantum game-playing proof framework by generalizing two recently developed proof techniques. First, we describe how Zhandry's compressed quantum oracles~(Crypto'19) can be used to do quantum lazy sampling of a class of non-uniform function distributions. Second, we observe how Unruh's one-way-to-hiding lemma~(Eurocrypt'14) can also be applied to compressed oracles, providing a quantum counterpart to the fundamental lemma of game-playing. Subsequently, we use our game-playing framework to prove quantum indifferentiability of the sponge construction, assuming a random internal function.
Last updated:  2019-04-29
Improved Secure Integer Comparison via Homomorphic Encryption
Florian Bourse, Olivier Sanders, Jacques Traoré
Secure integer comparison has been one of the first problems introduced in cryptography, both for its simplicity to describe and for its applications. The first formulation of the problem was to enable two parties to compare their inputs without revealing the exact value of those inputs, also called the Millionaires' problem. The recent rise of fully homomorphic encryption has given a new formulation to this problem. In this new setting, one party blindly computes an encryption of the boolean $(a<b)$ given only ciphertexts encrypting $a$ and $b$. In this paper, we present new solutions for the problem of secure integer comparison in both of these settings. The underlying idea for both schemes is to avoid decomposing the integers in binary in order to improve performance. Our fully homomorphic solution is inspired by Bourse et al., and makes use of the recently developed fast bootstrapping techniques to obtain scalability for large integers while preserving high efficiency. On the other hand, our solution to the original Millionaires' problem is inspired by the protocol of Carlton et al., based on partially homomorphic encryption. We tweak their protocol in order to minimize the number of interactions required, while preserving the advantage of comparing non-binary integers. Both our techniques provide efficient solutions to the problem of secure integer comparison for large (even a-priori unbounded in our first scenario) integers with minimum interaction.
Last updated:  2021-02-19
Design of Symmetric-Key Primitives for Advanced Cryptographic Protocols
Abdelrahaman Aly, Tomer Ashur, Eli Ben-Sasson, Siemen Dhooghe, Alan Szepieniec
While traditional symmetric algorithms like AES and SHA3 are optimized for efficient hardware and software implementations, a range of emerging applications using advanced cryptographic protocols such as multi-party computation and zero-knowledge proofs require optimization with respect to a different metric: arithmetic complexity. In this paper we study the design of secure cryptographic algorithms optimized to minimize this metric. We begin by identifying the differences in the design space between such arithmetization-oriented ciphers and traditional ones, with particular emphasis on the available tools, efficiency metrics, and relevant cryptanalysis. This discussion highlights a crucial point --- the considerations for designing arithmetization-oriented ciphers are oftentimes different from the considerations arising in the design of software- and hardware-oriented ciphers. The natural next step is to identify sound principles to securely navigate this new terrain, and to materialize these principles into concrete designs. To this end, we present the Marvellous design strategy which provides a generic way to easily instantiate secure and efficient algorithms for this emerging domain. We then show two examples for families following this approach. These families --- Vision and Rescue --- are benchmarked with respect to three use cases: the ZK-STARK proof system, proof systems based on Rank-One Constraint Satisfaction (R1CS), and Multi-Party Computation (MPC). These benchmarks show that our algorithms achieve a highly compact algebraic description, and thus benefit the advanced cryptographic protocols that employ them. Evidence is provided that this is the case also in real-world implementations.
Last updated:  2019-08-19
Homomorphic Training of 30,000 Logistic Regression Models
Flavio Bergamaschi, Shai Halevi, Tzipora T. Halevi, Hamish Hunt
In this work, we demonstrate the use of the CKKS homomorphic encryption scheme to train a large number of logistic regression models simultaneously, as needed to run a genome-wide association study (GWAS) on encrypted data. Our implementation can train more than 30,000 models (each with four features) in about 20 minutes. To that end, we rely on an iterative Nesterov procedure similar to the one used by Kim, Song, Kim, Lee, and Cheon to train a single model [KSKLC18]. We adapt this method to train many models simultaneously using the SIMD capabilities of the CKKS scheme. We also performed a thorough validation of this iterative method and evaluated its suitability both as a generic method for computing logistic regression models, and specifically for GWAS.
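As a plaintext reference point, a Nesterov-accelerated gradient iteration for logistic regression can be sketched as below. This is an illustrative toy (our own data and hyperparameters, not the paper's encrypted SIMD variant):

```python
import math

def train_logistic_nesterov(X, y, iters=50, lr=0.5, gamma=0.9):
    """Plaintext Nesterov-accelerated gradient descent on logistic loss.
    X: list of feature vectors, y: labels in {0,1}."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    v = [0.0] * d            # momentum term
    for _ in range(iters):
        # gradient at the look-ahead point w + gamma*v
        wl = [wi + gamma * vi for wi, vi in zip(w, v)]
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(wl, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        v = [gamma * vi - lr * g for vi, g in zip(v, grad)]
        w = [wi + vi for wi, vi in zip(w, v)]
    return w

def predict(w, x):
    return 1 if sum(a * b for a, b in zip(w, x)) > 0 else 0

# tiny linearly separable toy set: label is 1 iff the first feature > 0
X = [[1.0, 1.0], [2.0, 1.0], [-1.0, 1.0], [-2.0, 1.0]]
y = [1, 1, 0, 0]
w = train_logistic_nesterov(X, y)
assert all(predict(w, xi) == yi for xi, yi in zip(X, y))
```

The homomorphic version replaces the sigmoid with a low-degree polynomial approximation and packs many models into the slots of a single CKKS ciphertext; the control flow above is what gets evaluated under encryption.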
Last updated:  2019-04-29
Preimage Security of KNOT-Hash
Raghvendra Rohit
KNOT is a Round 1 submission to the ongoing NIST lightweight cryptography project. In this short note, we show that the preimage security of KNOT-Hash instances with squeezing rate half the state size is lower than the claimed security. Our attack exploits non-randomness properties of the KNOT Sbox which reduce the preimage complexities. In particular, if $2n$ is the squeezing rate, then the preimage security is approximately $(\log_2(\frac{3}{4}))^{-n} \times 2^{\frac{3n}{4}} \times (\log_2(3))^{\frac{n}{2}}$. For $n = 64$, 96 and 128, this bound translates to $2^{125.28}$, $2^{187.92}$ and $2^{250.57}$, respectively.
Last updated:  2019-10-28
Chaotic Compilation for Encrypted Computing: Obfuscation but Not in Name
Peter T. Breuer
An `obfuscation' for the encrypted computing context is quantified exactly here, leading to an argument that security against polynomial-time attacks has been achieved for user data, with or without encryption. Encrypted computing is the emerging science and technology of processors that take encrypted inputs to encrypted outputs via encrypted intermediate values (at nearly conventional speeds). The aim is to make user data in general-purpose computing secure against the operator and operating system as potential adversaries. A stumbling block has always been that memory addresses are data and good encryption means the encrypted value varies randomly, and that makes hitting any target in memory problematic without address decryption, but decryption anywhere on the memory path would open up many easily exploitable vulnerabilities. This paper `solves compilation' for processors without address decryption, covering all of ANSI C while satisfying the required security properties and opening up encrypted computing for the standard software tool-chain and infrastructure. The new understanding produces the argument referred to above.
Last updated:  2019-07-01
Parallelizable MACs Based on the Sum of PRPs with Security Beyond the Birthday Bound
Alexander Moch, Eik List
The combination of universal hashing and encryption is a fundamental paradigm for the construction of symmetric-key MACs, dating back to the seminal works by Wegman and Carter, Shoup, and Bernstein. While fully sufficient for many practical applications, the Wegman-Carter construction is well known to break if nonces are ever repeated, and provides only birthday-bound security if instantiated with a permutation. Those limitations inspired several recent proposals that address them, initiated by Cogliati et al.'s Encrypted Wegman-Carter Davies-Meyer (EWCDM) construction. This work extends this line of research by studying two constructions based on the sum of PRPs: (1) a stateless deterministic scheme that uses two hash functions, and (2) a nonce-based scheme with one hash-function call and a nonce. We show up to 2n/3-bit security for both of them if the hash function is universal. Compared to the EWCDM construction, our proposals do not share the property that a single reuse of a nonce can lead to a break.
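The stateless variant has the shape T = P_{k1}(H_{h1}(M)) xor P_{k2}(H_{h2}(M)). The sketch below illustrates that shape only: a toy Feistel network stands in for the PRPs and SHA-256-derived keyed hashing stands in for the universal hash functions, so every primitive here is our own illustrative stand-in, not the paper's instantiation:

```python
import hashlib

BLOCK = 16  # 128-bit blocks

def _round(key, i, half):
    return hashlib.sha256(key + bytes([i]) + half).digest()[:BLOCK // 2]

def feistel_prp(key, block):
    """Toy 4-round Feistel permutation on 16-byte blocks (stand-in PRP)."""
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, i, r)))
    return l + r

def hash_to_block(key, msg):
    """Keyed hash onto one block (stand-in for a universal hash)."""
    return hashlib.sha256(key + msg).digest()[:BLOCK]

def sop_mac(k1, k2, h1, h2, msg):
    """Sum-of-PRPs MAC: T = P_{k1}(H_{h1}(M)) xor P_{k2}(H_{h2}(M))."""
    t1 = feistel_prp(k1, hash_to_block(h1, msg))
    t2 = feistel_prp(k2, hash_to_block(h2, msg))
    return bytes(a ^ b for a, b in zip(t1, t2))

keys = (b'k1', b'k2', b'h1', b'h2')
tag = sop_mac(*keys, b"message one")
assert tag == sop_mac(*keys, b"message one")      # deterministic
assert tag != sop_mac(*keys, b"message two")
```

The beyond-birthday-bound argument of the paper rests on the sum of two independent permutation outputs behaving closer to a random function than either permutation alone.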
Last updated:  2019-04-27
Continuing to reflect on TLS 1.3 with external PSK
Liliya Akhmetzyanova, Evgeny Alekseev, Ekaterina Smyshlyaeva, Alexandr Sokolov
The TLS protocol is the main cryptographic protocol of the Internet. The work on its current version, TLS 1.3, was completed in 2018. This version differs from the previous ones and has been developed taking into account all modern principles of constructing cryptographic protocols. At the same time, even when there are security proofs in some fairly strong security model, it is important to study the possibility of extending this model and then clarifying the security limits of the protocol. In this paper, we consider in detail the restriction on the usage of post-handshake authentication in connections established using an external PSK. We show that a certain vulnerability appears only in the psk_ke mode, if more than a single pair of entities can possess a single PSK. We provide several practical scenarios where this condition can easily arise. We also propose appropriate mitigations.
Last updated:  2019-10-18
Improving Speed of Dilithium’s Signing Procedure
Prasanna Ravi, Sourav Sen Gupta, Anupam Chattopadhyay, Shivam Bhasin
Dilithium is a round 2 candidate for digital signature schemes in the NIST initiative for post-quantum cryptographic schemes. Since Dilithium is built upon the “Fiat Shamir with Aborts” framework, its signing procedure performs rejection sampling of its signatures to ensure they do not leak information about the secret key. Thus, the signing procedure is iterative in nature with a number of rejected iterations, which serve as unnecessary overhead hampering its overall performance. As a first contribution, we propose an optimization that reduces the computation in the rejected iterations through early evaluation of the conditional checks. This allows us to detect the rejection condition and reject a given iteration as early as possible. We also incorporate a number of standard optimizations such as unrolling and inlining to further improve the speed of the signing procedure. We incorporate and evaluate our optimizations over the software implementation of Dilithium on both the Intel Core i5-4460 and ARM Cortex-M4 CPUs. As a second contribution, we identify opportunities to present a more refined evaluation of Dilithium’s signing procedure in several scenarios where pre-computations can be carried out. We also evaluate the performance of our optimizations and the memory requirements for the pre-computed intermediates in the considered scenarios. We achieve speed-ups in the range of 6% up to 35% considering all the aforementioned scenarios, thus presenting the fastest software implementation of Dilithium to date.
Last updated:  2019-09-11
Algebraic Cryptanalysis of STARK-Friendly Designs: Application to MARVELlous and MiMC
Martin R. Albrecht, Carlos Cid, Lorenzo Grassi, Dmitry Khovratovich, Reinhard Lüftenegger, Christian Rechberger, Markus Schofnegger
The block cipher Jarvis and the hash function Friday, both members of the MARVELlous family of cryptographic primitives, are among the first proposed solutions to the problem of designing symmetric-key algorithms suitable for transparent, post-quantum secure zero-knowledge proof systems such as ZK-STARKs. In this paper we describe an algebraic cryptanalysis of Jarvis and Friday and show that the proposed number of rounds is not sufficient to provide adequate security. In Jarvis, the round function is obtained by combining a finite field inversion, a full-degree affine permutation polynomial and a key addition. Yet we show that even though the high degree of the affine polynomial may prevent some algebraic attacks (as claimed by the designers), the particular algebraic properties of the round function make both Jarvis and Friday vulnerable to Gröbner basis attacks. We also consider MiMC, a block cipher similar in structure to Jarvis. However, this cipher proves to be resistant against our proposed attack strategy. Still, our successful cryptanalysis of Jarvis and Friday does illustrate that block cipher designs for “algebraic platforms” such as STARKs, FHE or MPC may be particularly vulnerable to algebraic attacks.
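MiMC's round function, mentioned above, is just a cubing plus key and round-constant addition over a field where x -> x^3 is a permutation. A toy version over a tiny prime field (parameters chosen for illustration only, and obviously insecure at this size) looks like:

```python
# Toy MiMC over F_p with p ≡ 2 (mod 3), so x -> x^3 is a permutation.
P = 101                           # illustrative tiny prime, NOT secure
ROUNDS = 8
CONSTS = [pow(3, i, P) for i in range(ROUNDS)]   # arbitrary round constants
INV3 = pow(3, -1, P - 1)          # 3^{-1} mod (p-1), used for decryption

def mimc_encrypt(x, k):
    for c in CONSTS:
        x = pow((x + k + c) % P, 3, P)
    return (x + k) % P

def mimc_decrypt(y, k):
    y = (y - k) % P
    for c in reversed(CONSTS):
        y = (pow(y, INV3, P) - k - c) % P
    return y

# sanity check: decryption inverts encryption for every field element
for x in range(P):
    assert mimc_decrypt(mimc_encrypt(x, 7), 7) == x
```

The low algebraic degree per round that makes such designs cheap inside proof systems is exactly what the Gröbner-basis attacks in the paper exploit when the number of rounds is too small.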
Last updated:  2019-04-24
Sharing of Encrypted files in Blockchain Made Simpler
S. Sharmila Deva Selvi, Arinjita Paul, Siva Dirisala, Saswata Basu, C. Pandu Rangan
Recently, blockchain technology has attracted much attention from the research community in several domains requiring transparency and data accountability, due to the removal of trust assumptions on intermediate third parties. One such application is enabling file sharing in blockchain-enabled distributed cloud storage. Proxy re-encryption is a cryptographic primitive that allows such file sharing by re-encrypting ciphertexts towards legitimate users via semi-trusted proxies, without the proxies learning any information about the underlying message. To facilitate secure data sharing in the distributed cloud, it is essential to construct efficient proxy re-encryption protocols. In this paper, we introduce the notion of proxy self re-encryption (SE-PRE) that is highly efficient compared to the existing PRE schemes in the literature. We show that our self encryption scheme is provably CCA secure based on the DLP assumption, and our proxy re-encryption scheme with self encryption is CCA secure under the hardness of the Computational Diffie-Hellman (CDH) and Discrete Logarithm (DLP) assumptions. Our novel encryption scheme, called self encryption, has no exponentiation or costly pairing operation. Even the re-encryption in SE-PRE does not have such operations, and this provides the service provider with an efficiency gain.
Last updated:  2019-11-11
Numerical Method for Comparison on Homomorphically Encrypted Numbers
Jung Hee Cheon, Dongwoo Kim, Duhyeong Kim, Hun Hee Lee, Keewoo Lee
We propose a new method to compare numbers which are encrypted by Homomorphic Encryption (HE). Previously, comparison and min/max functions were evaluated using Boolean functions where input numbers are encrypted bit-wisely. However, the bit-wise encryption methods require relatively expensive computation of basic arithmetic operations such as addition and multiplication. In this paper, we introduce iterative algorithms that approximately compute the min/max and comparison operations of several numbers which are encrypted word-wisely. From the concrete error analyses, we show that our min/max and comparison algorithms have $\Theta(\alpha)$ and $\Theta(\alpha\log\alpha)$ computational complexity to obtain approximate values within an error rate $2^{-\alpha}$, while the previous minimax polynomial approximation method requires the exponential complexity $\Theta(2^{\alpha/2})$ and $\Theta(\sqrt{\alpha}\cdot 2^{\alpha/2})$, respectively. We also show the (sub-)optimality of our min/max and comparison algorithms in terms of asymptotic computational complexity among polynomial evaluations to obtain approximate min/max and comparison results. Our comparison algorithm is extended to several applications such as computing the top-$k$ elements and counting numbers over the threshold in an encrypted state. Our new method enables word-wise HEs to enjoy performance in practice comparable to bit-wise HEs for comparison operations while showing much better performance on polynomial operations. Computing an approximate maximum value of any two $\ell$-bit integers encrypted by HEAAN, up to error $2^{\ell-10}$, takes only $1.14$ milliseconds in amortized running time, which is comparable to the result based on bit-wise HEs.
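The flavor of such iterative algorithms can be seen in plaintext via the identity max(a,b) = (a+b)/2 + sqrt((a-b)^2)/2, with the square root computed by an iteration that uses only additions and multiplications by constants, hence no branching, which is what makes it HE-friendly. The sketch below is illustrative and not the authors' exact routine:

```python
def approx_sqrt(x, iters=20):
    """Iteratively approximate sqrt(x) for x in [0, 1] using only
    additions and multiplications by constants (no branching)."""
    a, b = x, x - 1.0
    for _ in range(iters):
        a = a * (1.0 - b / 2.0)
        b = b * b * (b - 3.0) / 4.0
    return a

def approx_max(x, y, iters=20):
    """Approximate max of x, y in [0, 1] without any comparison."""
    return (x + y) / 2.0 + approx_sqrt((x - y) * (x - y), iters) / 2.0

assert abs(approx_sqrt(0.25) - 0.5) < 1e-6
assert abs(approx_max(0.3, 0.7) - 0.7) < 1e-6
```

Under HE, inputs are first scaled into [0, 1], and the number of iterations is fixed in advance, which is what yields the stated $\Theta(\alpha)$ complexity for error rate $2^{-\alpha}$.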
Last updated:  2019-04-29
How many transactions per second can bitcoin really handle? Theoretically.
Evangelos Georgiadis
Transactions are arguably the most important part of the bitcoin mechanism, with everything else facilitating their proper creation, propagation and validation, culminating in their addition to the public ledger – the blockchain. One crucial measure inevitably intertwined with transactions is throughput: the number of transactions confirmed (or added to the blockchain) per second, or simply, tps. Understanding throughput capacity from different angles remains of paramount importance for gaining insights into the underlying infrastructure. We compute the exact upper bound for the maximal transaction throughput of the bitcoin protocol and obtain 27 tps. The previous best known bound for the average transaction throughput is 7 tps. All results are based on legacy infrastructure, i.e., pre-SegWit era.
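The order of magnitude of such a bound is easy to sanity-check with a back-of-the-envelope sketch, assuming pre-SegWit 1 MB blocks, a 600-second average block interval, and a minimal transaction size of roughly 60 bytes (the paper's exact accounting differs in the details):

```python
BLOCK_SIZE_BYTES = 1_000_000   # pre-SegWit maximum block size
BLOCK_INTERVAL_S = 600         # average time between blocks
MIN_TX_BYTES = 60              # approximate minimal transaction size

max_tx_per_block = BLOCK_SIZE_BYTES // MIN_TX_BYTES
max_tps = max_tx_per_block / BLOCK_INTERVAL_S
print(f"~{max_tps:.1f} tps upper bound")   # close to the paper's 27 tps
```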
Last updated:  2019-12-17
Refinement and Verification of CBC Casper
Ryuya Nakamura, Takayuki Jimba, Dominik Harz
Decentralised ledgers are a prime application case for consensus protocols. Changing sets of validators have to agree on a set of transactions in an asynchronous network and in the presence of Byzantine behaviour. Major research efforts focus on creating consensus protocols under such conditions, with proof-of-stake (PoS) representing a promising candidate. PoS aims to reduce the waste of energy inherent to proof-of-work (PoW) consensus protocols. However, a significant challenge is to get PoS protocols "right", i.e. ensure that they are secure w.r.t. safety and liveness. The "Correct-by-Construction" (CBC) Casper approach by the Ethereum project employs pen-and-paper proofs to ensure its security. CBC Casper is a framework to define consensus protocols and aims to prove safety without loss of abstractness. Each member of the CBC Casper family of protocols is defined by five parameters. CBC Casper models the protocol by a state of each validator and messages sent by validators. Each validator can transition its state using messages by other validators that include their current consensus value and a justification (i.e. their previous messages). We extend CBC Casper in three ways. First, we summarise the research of CBC Casper and extend the definitions of safety and liveness properties. To this end, we discuss an instance of CBC Casper called Casper The Friendly GHOST (TFG), a consensus protocol using a variant of the GHOST fork-choice rule. Second, we refine the properties of messages and states in CBC Casper and give a definition of blockchain safety for Casper TFG. Third, we formally verify the CBC Casper framework together with our refined message and state properties as well as our blockchain safety definition in the Isabelle/HOL proof assistant.
Last updated:  2020-05-06
Two-Round Oblivious Transfer from CDH or LPN
Nico Döttling, Sanjam Garg, Mohammad Hajiabadi, Daniel Masny, Daniel Wichs
We show a new general approach for constructing maliciously secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
Last updated:  2020-07-18
On the Streaming Indistinguishability of a Random Permutation and a Random Function
Itai Dinur
An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma which allowed proving that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
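The attack achieving $Q \cdot S/N$ advantage is simply bounded-memory collision detection: remember the first $S$ elements and flag any repeat, since a permutation stream never repeats. A small simulation (our own illustration, not from the paper):

```python
import random

def distinguisher(stream, memory_size):
    """Return True (guess 'random function') if a repeat is seen
    among the first memory_size stored elements, else False."""
    seen = set()
    for x in stream:
        if x in seen:
            return True           # permutations never repeat
        if len(seen) < memory_size:
            seen.add(x)
    return False

rng = random.Random(0)
N, Q, S = 2**16, 2**12, 2**12

# random permutation: sampling without replacement -> never a collision
perm_stream = rng.sample(range(N), Q)
assert distinguisher(perm_stream, S) is False

# random function: sampling with replacement -> a collision is
# overwhelmingly likely at these parameters, so the guess is 'function'
func_stream = [rng.randrange(N) for _ in range(Q)]
assert distinguisher(func_stream, S) is True
```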
Last updated:  2019-09-17
On the complexity of the Permuted Kernel Problem
Eliane KOUSSA, Gilles MACARIO-RAT, Jacques PATARIN
In 1989, A. Shamir introduced an interesting public-key scheme of a new nature, a Zero-Knowledge (ZK) identification scheme, based on PKP: the Permuted Kernel Problem. PKP is an NP-hard algebraic problem which has been extensively studied. Despite this research effort, all known attacks on PKP remain exponential. The problem was used to develop an Identification Scheme (IDS) which has a very efficient implementation on low-cost smart cards. There has recently been renewed interest in PKP-based cryptography, due to post-quantum security considerations, simple security proofs, and the design of new PKP-based signature algorithms. In 2018, through the Fiat-Shamir transform, the PKP-IDS was used to construct a post-quantum signature scheme which was submitted to a Chinese competition for the design of post-quantum cryptographic algorithms (organized by the Chinese association CACR); this scheme was later improved. The aim of this document is two-fold. First, we investigate the complexity of the combinatorial problem, namely PKP. We also present a summary of previously known algorithms devoted to solving this problem. Contrary to what was previously claimed, and after a thorough analysis of the state-of-the-art attacks on PKP, we claim that the Joux-Jaulmes attack is not the most efficient algorithm for solving PKP. In fact, its stated complexity underestimates the cost of an important phase of the algorithm. Second, we examine the complexity given by various algorithms, specifically the ones introduced by Patarin-Chauvaud and Poupard. It is relatively complex to obtain a general complexity formula due to the numerous variants. However, we have been able to develop a program and provide its approximate space and time complexities, which allows us to identify hard instances and secure sets of parameters of this problem with respect to the best attack currently known.
Last updated:  2020-02-17
Exploring the Monero Peer-to-Peer Network
Tong Cao, Jiangshan Yu, Jérémie Decouchant, Xiapu Luo, Paulo Verissimo
As of September 2019, Monero is the most capitalized privacy-preserving cryptocurrency, and is ranked tenth among all cryptocurrencies. Monero’s on-chain data privacy guarantees, i.e., how mixins are selected in each transaction, have been extensively studied. However, despite Monero’s prominence, the network of peers running Monero clients has not been analyzed. Such analysis is of prime importance, since potential vulnerabilities in the peer-to-peer network may lead to attacks on the blockchain’s safety (e.g., by isolating a set of nodes) and on users’ privacy (e.g., tracing transaction flows in the network). This paper provides a first step towards understanding Monero’s peer-to-peer (P2P) network. In particular, we deconstruct Monero’s P2P protocol based on its source code, and develop a toolset to explore Monero’s network, which allows us to infer its topology, size, node distribution, and node connectivity. During our experiments, we collected 510 GB of raw data, from which we extracted 21,678 IP addresses of Monero nodes distributed in 970 autonomous systems. We show that Monero’s network is highly centralized — 13.2% of the nodes collectively maintain 82.86% of the network connections. We have identified approximately 2,758 active nodes per day, which is 68.7% higher than the number reported by the MoneroHash mining pool. We also identified all concurrent outgoing connections maintained by Monero nodes with very high probability (on average 97.98% for nodes with less than 250 outgoing connections, and 93.79% for nodes with more connections).
Last updated:  2020-08-10
Policy-Based Sanitizable Signatures
Kai Samelin, Daniel Slamanig
Sanitizable signatures are a variant of signatures which allow a single, and signer-defined, sanitizer to modify signed messages in a controlled way without invalidating the respective signature. They turned out to be a versatile primitive, proven by different variants and extensions, e.g., allowing multiple sanitizers or adding new sanitizers one-by-one. However, existing constructions are very restricted regarding their flexibility in specifying potential sanitizers. We propose a different and more powerful approach: Instead of using sanitizers' public keys directly, we assign attributes to them. Sanitizing is then based on policies, i.e., access structures defined over attributes. A sanitizer can sanitize, if, and only if, it holds a secret key to attributes satisfying the policy associated to a signature, while offering full-scale accountability.
Last updated:  2020-01-09
Post-Quantum Provably-Secure Authentication and MAC from Mersenne Primes
Houda Ferradi, Keita Xagawa
This paper presents a novel yet efficient secret-key authentication scheme and MAC, which promise post-quantum security; their security is reduced to the conjectured quantum-safe hardness of the Mersenne Low Hamming Combination (MERS) assumption recently introduced by Aggarwal, Joux, Prakash, and Santha (CRYPTO 2018). Our protocols are well suited to weak devices such as smart cards and RFID tags.
Last updated:  2019-04-22
Forgery Attack on SNEIKEN
Mustafa Khairallah
This document presents a collision/forgery attack against SNEIKEN128/192/256, in which every message with more than 128 bytes of associated data can be converted into another message with different associated data and the same ciphertext/tag. The attack is a direct application of the probability-1 differential of the SNEIK permutation found by Léo Perrin in [Per19]. We verify the attack using the designers' reference implementation of SNEIKEN128, giving an example of such a collision.
Last updated:  2020-02-12
Privacy-Preserving Network Path Validation
Binanda Sengupta, Yingjiu Li, Kai Bu, Robert H. Deng
The end-users communicating over a network path currently have no control over the path. For a better quality of service, the source node often opts for a superior (or premium) network path in order to send packets to the destination node. However, the current Internet architecture provides no assurance that the packets indeed follow the designated path. Network path validation schemes address this issue and enable each node present on a network path to validate whether each packet has followed the specific path so far. In this work, we introduce two notions of privacy -- path privacy and index privacy -- in the context of network path validation. We show that, in case a network path validation scheme does not satisfy these two properties, the scheme is vulnerable to certain practical attacks (that affect the reliability, neutrality and quality of service offered by the underlying network). To the best of our knowledge, ours is the first work that addresses privacy issues related to network path validation. We design PrivNPV, a privacy-preserving network path validation protocol, that satisfies both path privacy and index privacy. We discuss several attacks related to network path validation and how PrivNPV defends against these attacks. Finally, we discuss the practicality of PrivNPV based on relevant parameters.
Last updated:  2019-04-22
Fine-Grained and Controlled Rewriting in Blockchains: Chameleon-Hashing Gone Attribute-Based
David Derler, Kai Samelin, Daniel Slamanig, Christoph Striecks
Blockchain technologies recently received a considerable amount of attention. While the initial focus was mainly on the use of blockchains in the context of cryptocurrencies such as Bitcoin, application scenarios now go far beyond this. Most blockchains have the property that once some object, e.g., a block or a transaction, has been registered to be included into the blockchain, it is persisted and there are no means to modify it again. While this is an essential feature of most blockchain scenarios, it is still often desirable - at times it may be even legally required - to allow for breaking this immutability in a controlled way. Only recently, Ateniese et al. (EuroS&P 2017) proposed an elegant solution to this problem on the block level. Thereby, the authors replace standard hash functions with so-called chameleon-hashes (Krawczyk and Rabin, NDSS 2000). While their work seems to offer a suitable solution to the problem of controlled re-writing of blockchains, their approach is too coarse-grained in that it only offers an all-or-nothing solution. We revisit this idea and introduce the novel concept of policy-based chameleon-hashes (PCH). PCHs generalize the notion of chameleon-hashes by giving the party computing a hash the ability to associate access policies to the generated hashes. Anyone who possesses enough privileges to satisfy the policy can then find arbitrary collisions for a given hash. We then apply this concept to transaction-level rewriting within blockchains, and thus support fine-grained and controlled modifiability of blockchain objects. Besides modeling PCHs, we present a generic construction of PCHs (using a strengthened version of chameleon-hashes with ephemeral trapdoors which we also introduce), rigorously prove its security, and instantiate it with efficient building blocks. We report first implementation results.
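The underlying chameleon-hash idea (in the Krawczyk-Rabin discrete-log style) is easy to state: hash as h = g^m * y^r, and whoever knows the trapdoor x with y = g^x can solve for an r' that makes any other message m' collide. A toy sketch with illustrative tiny parameters (not a secure instantiation):

```python
# Toy Krawczyk-Rabin-style chameleon hash (tiny parameters, NOT secure).
Q = 1019                      # prime order of the exponent group
P = 2 * Q + 1                 # safe prime: 2039
G = 4                         # 4 = 2^2 generates the order-Q subgroup

x = 123                       # trapdoor
y = pow(G, x, P)              # public key

def chash(m, r):
    return (pow(G, m, P) * pow(y, r, P)) % P

def collide(m, r, m_new):
    """With trapdoor x, find r' so that chash(m_new, r') == chash(m, r):
    solve m + x*r = m_new + x*r'  (mod Q)."""
    return (r + (m - m_new) * pow(x, -1, Q)) % Q

m, r = 42, 777
m2 = 999
r2 = collide(m, r, m2)
assert chash(m2, r2) == chash(m, r)
```

The PCH construction in the paper wraps this collision-finding capability behind an attribute-based access policy, so that only parties satisfying the policy can compute such an r'.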
Last updated:  2019-04-22
A Novel FPGA Architecture and Protocol for the Self-attestation of Configurable Hardware
Jo Vliegen, Md Masoom Rabbani, Mauro Conti, Nele Mentens
Field-Programmable Gate Arrays or FPGAs are popular platforms for hardware-based attestation. They offer protection against physical and remote attacks by verifying if an embedded processor is running the intended application code. However, since FPGAs are configurable after deployment (thus not tamper-resistant), they are susceptible to attacks, just like microprocessors. Therefore, attesting an electronic system that uses an FPGA should be done by verifying the status of both the software and the hardware, without the availability of a dedicated tamper-resistant hardware module. Inspired by the work of Perito and Tsudik, this paper proposes a partially reconfigurable FPGA architecture and attestation protocol that enable the self-attestation of the FPGA. Through the use of our solution, the FPGA can be used as a trusted hardware module to perform hardware-based attestation of a processor. This way, an entire hardware/software system can be protected against malicious code updates.
Last updated:  2019-04-25
Efficient Message Authentication Codes with Combinatorial Group Testing
Kazuhiko Minematsu
Message authentication code, MAC for short, is a symmetric-key cryptographic function for authenticity. A standard MAC verification only tells whether the message is valid or invalid, and thus we cannot identify which part is corrupted in the case of an invalid message. In this paper we study a class of MAC functions that makes it possible to identify the corrupted part, which we call group testing MAC (GTM). This can be seen as an application of classical (non-adaptive) combinatorial group testing to MACs. Although the basic concept of GTM (or its keyless variant) has been proposed in various application areas, such as data forensics and computer virus testing, these works treat the underlying MAC function as a black box, and the exact computation cost of GTM seems to have been overlooked. In this paper, we study the computational aspect of GTM, and show that a simple yet non-trivial extension of parallelizable MAC (PMAC) enables $O(m+t)$ computation for $m$ data items and $t$ tests, irrespective of the underlying test matrix we use, under a natural security model. This greatly improves efficiency over naively applying a black-box MAC for each test, which requires $O(mt)$ time. Based on existing group testing methods, we also present experimental results of our proposal and observe that ours runs as fast as computing a single MAC tag, with a speed-up over the conventional method by a factor of around 8 to 15 for $m=10^4$ to $10^5$ items.
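The combinatorial idea can be sketched with the naive black-box instantiation that the paper improves on computationally: a bit-matrix of tests, so that roughly log2(m) tags pinpoint a single corrupted item. HMAC stands in here for the underlying MAC, and all parameters are illustrative; this simple matrix only locates one corrupted item:

```python
import hmac, hashlib, math

def tag_tests(key, items):
    """One tag per bit position: test j covers items whose index has bit j
    set. An extra tag over everything detects corruption of item 0."""
    t = max(1, math.ceil(math.log2(len(items))))
    tags = []
    for j in range(t):
        data = b"".join(it for i, it in enumerate(items) if (i >> j) & 1)
        tags.append(hmac.new(key, bytes([j]) + data, hashlib.sha256).digest())
    whole = hmac.new(key, b"all" + b"".join(items), hashlib.sha256).digest()
    return tags, whole

def locate_corrupted(key, items, tags, whole):
    """Recompute tags; the failing bit-tests spell out the corrupted index."""
    new_tags, new_whole = tag_tests(key, items)
    if hmac.compare_digest(whole, new_whole):
        return None                        # nothing corrupted
    idx = 0
    for j, (a, b) in enumerate(zip(tags, new_tags)):
        if not hmac.compare_digest(a, b):
            idx |= 1 << j
    return idx

key = b"secret-key"
items = [bytes([i]) * 8 for i in range(16)]
tags, whole = tag_tests(key, items)
items[5] = b"corrupted!"
assert locate_corrupted(key, items, tags, whole) == 5
```

Note that each item is hashed once per test it appears in, which is the $O(mt)$ cost; the paper's PMAC extension shares the per-item work across tests to reach $O(m+t)$.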
Last updated:  2019-09-30
Fast and simple constant-time hashing to the BLS12-381 elliptic curve
Riad S. Wahby, Dan Boneh
Pairing-friendly elliptic curves in the Barreto-Lynn-Scott family are seeing a resurgence in popularity because of the recent result of Kim and Barbulescu that improves attacks against other pairing-friendly curve families. One particular Barreto-Lynn-Scott curve, called BLS12-381, is the locus of significant development and deployment effort, especially in blockchain applications. This effort has sparked interest in using the BLS12-381 curve for BLS signatures, which requires hashing to one of the groups of the bilinear pairing defined by BLS12-381. While there is a substantial body of literature on the problem of hashing to elliptic curves, much of this work does not apply to Barreto-Lynn-Scott curves. Moreover, the work that does apply has the unfortunate property that fast implementations are complex, while simple implementations are slow. In this work, we address these issues. First, we show a straightforward way of adapting the ``simplified SWU'' map of Brier et al. to BLS12-381. Second, we describe optimizations to this map that both simplify its implementation and improve its performance; these optimizations may be of interest in other contexts. Third, we implement and evaluate. We find that our work yields constant-time hash functions that are simple to implement, yet perform within 9% of the fastest, non--constant-time alternatives, which require much more complex implementations.
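The simplified SWU map of Brier et al. can be sketched on a toy short-Weierstrass curve with $a, b \neq 0$ (BLS12-381 itself has $a = 0$, which is why adapting the map to it takes extra care). The curve parameters below are illustrative only; the core fact used is that for $c = Zu^2$ with $Z$ a non-square, $g(x_2) = c^3\,g(x_1)$, so at least one of $g(x_1), g(x_2)$ is a square.

```python
# toy curve y^2 = x^3 + a*x + b over GF(p), p = 3 (mod 4); parameters illustrative
p, a, b = 103, 1, 3
Z = p - 1                    # -1 is a non-square when p = 3 (mod 4)

def is_square(v):
    return v == 0 or pow(v, (p - 1) // 2, p) == 1   # Euler's criterion

def sqrt_mod(v):
    return pow(v, (p + 1) // 4, p)                  # valid since p = 3 (mod 4)

def g(x):
    return (x * x * x + a * x + b) % p

def swu(u):
    c = Z * u * u % p
    den = (c * c + c) % p                           # Z^2 u^4 + Z u^2
    if den == 0:                                    # exceptional inputs, skipped here
        return None
    x1 = (-b * pow(a, -1, p)) * (1 + pow(den, -1, p)) % p
    x2 = c * x1 % p
    x = x1 if is_square(g(x1)) else x2              # one of the two always works
    return x, sqrt_mod(g(x))

for u in range(2, 30):                              # every output lies on the curve
    pt = swu(u)
    if pt is not None:
        x, y = pt
        assert (y * y - g(x)) % p == 0
```

Because the same fixed sequence of field operations runs for every input, a map of this shape lends itself to constant-time implementation, in contrast to try-and-increment hashing whose running time depends on the input.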
Last updated:  2020-11-07
ILC: A Calculus for Composable, Computational Cryptography
Kevin Liao, Matthew A. Hammer, Andrew Miller
The universal composability (UC) framework is the established standard for analyzing cryptographic protocols in a modular way, such that security is preserved under concurrent composition with arbitrary other protocols. However, although UC is widely used for on-paper proofs, prior attempts at systemizing it have fallen short, either by using a symbolic model (thereby ruling out computational reduction proofs), or by limiting its expressiveness. In this paper, we lay the groundwork for building a concrete, executable implementation of the UC framework. Our main contribution is a process calculus, dubbed the Interactive Lambda Calculus (ILC). ILC faithfully captures the computational model underlying UC---interactive Turing machines (ITMs)---by adapting ITMs to a subset of the pi-calculus through an affine typing discipline. In other words, well-typed ILC programs are expressible as ITMs. In turn, ILC's strong confluence property enables reasoning about cryptographic security reductions. We use ILC to develop a simplified implementation of UC called SaUCy.
Last updated:  2019-04-22
Side-Channel assessment of Open Source Hardware Wallets
Manuel San Pedro, Victor Servant, Charles Guillemet
Side-channel attacks rely on the fact that the physical behavior of a device depends on the data it manipulates. We show in this paper how to use this class of attacks to break the security of some cryptocurrency hardware wallets when the attacker is given physical access to them. We mounted two profiled side-channel attacks: the first extracts the user PIN through the PIN-verification function, and the second extracts the private signing key from the ECDSA scalar multiplication using a single signature. The results of our study were responsibly disclosed to the manufacturer, who patched the PIN vulnerability through a firmware upgrade.
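The general principle — that a trace leaks the data being processed — can be illustrated with a toy correlation-based distinguisher on simulated Hamming-weight leakage. This is not the authors' profiled attack (which builds templates from a clone device); the secret, leakage model, and noise level below are all simulation choices.

```python
import random
from statistics import mean

random.seed(1)
HW = [bin(v).count("1") for v in range(256)]   # Hamming-weight leakage model
SECRET = 0x5A                                  # illustrative secret key byte

# simulate traces: each "power sample" is HW(plaintext XOR key) plus noise
pts = [random.randrange(256) for _ in range(500)]
traces = [HW[p ^ SECRET] + random.gauss(0, 0.3) for p in pts]

def corr(xs, ys):
    # Pearson correlation between predicted leakage and measured traces
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

# the key guess whose predictions best match the traces is the secret
best = max(range(256),
           key=lambda k: abs(corr([HW[p ^ k] for p in pts], traces)))
```

Even with noise, a few hundred traces suffice here because wrong guesses predict leakage that is only partially correlated with the true intermediate values; real attacks on protected devices need profiling and far more care, but the statistical core is the same.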
Last updated:  2019-04-18
Degenerate Fault Attacks on Elliptic Curve Parameters in OpenSSL
Akira Takahashi, Mehdi Tibouchi
In this paper, we describe several practically exploitable fault attacks against OpenSSL's implementation of elliptic curve cryptography, related to the singular curve point decompression attacks of Blömer and Günther (FDTC 2015) and the degenerate curve attacks of Neves and Tibouchi (PKC 2016). In particular, we show that OpenSSL makes it possible to construct EC key files containing explicit curve parameters with a compressed base point. A simple single fault injection upon loading such a file yields a full key recovery attack when the key file is used for signing with ECDSA, and a complete recovery of the plaintext when the file is used for encryption using an algorithm like ECIES. The attack is especially devastating against curves with $j$-invariant equal to 0 such as the Bitcoin curve secp256k1, for which key recovery reduces to a single division in the base field. Additionally, we apply the present fault attack technique to OpenSSL's implementation of ECDH, by combining it with Neves and Tibouchi's degenerate curve attack. This version of the attack applies to usual named curve parameters with nonzero $j$-invariant, such as P192 and P256. Although it is typically more computationally expensive than the one against signatures and encryption, and requires multiple faulty outputs from the server, it can recover the entire static secret key of the server even in the presence of point validation. These various attacks can be mounted with only a single instruction skipping fault, and therefore can be easily injected using low-cost voltage glitches on embedded devices. We validated them in practice using concrete fault injection experiments on a Raspberry Pi single board computer running the up-to-date OpenSSL command-line tools---a setting where the threat of fault attacks is quite significant.
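The reason faulting onto a singular curve is so devastating can be seen on the cusp $y^2 = x^3$: its smooth points form a group isomorphic to $(\mathbb{F}_p, +)$ via $(x, y) \mapsto x/y$, so the discrete logarithm becomes a single field division. The sketch below is a generic illustration of that principle with toy parameters, not OpenSSL's code paths or the paper's exact faulted curve.

```python
# smooth points of the singular curve y^2 = x^3 over GF(p) under the usual
# chord-and-tangent law; p and the point are toy parameters for illustration
p = 10007

def add(P, Q):
    if P is None: return Q          # None is the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p   # tangent slope (a = b = 0)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p    # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def mul(k, P):
    R = None
    for bit in bin(k)[2:]:          # left-to-right double-and-add
        R = add(R, R)
        if bit == "1":
            R = add(R, P)
    return R

P = (4, 8)                          # 8^2 = 4^3, so P lies on y^2 = x^3
k = 1234                            # the "secret scalar"
Q = mul(k, P)

# DLP solved by one division: phi(x, y) = x/y is a group isomorphism to GF(p)+
phi = lambda pt: pt[0] * pow(pt[1], -1, p) % p
recovered = phi(Q) * pow(phi(P), -1, p) % p          # -> 1234
```

On a real device the fault merely has to skip the instruction that loads the curve coefficient, after which the honest scalar-multiplication code performs exactly this degenerate computation.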
Last updated:  2019-04-18
Inception makes non-malleable codes shorter as well!
Divesh Aggarwal, Maciej Obremski
Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs in ICS 2010, have emerged in the last few years as a fundamental object at the intersection of cryptography and coding theory. Non-malleable codes provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely ``unrelated value''. Although such codes do not exist if the family of ``tampering functions'' $\mathcal{F}$ allowed to modify the original codeword is completely unrestricted, they are known to exist for many broad tampering families $\mathcal{F}$. The family which has received the most attention is the family of tampering functions in the so-called (2-part) {\em split-state} model: here the message $x$ is encoded into two shares $L$ and $R$, and the attacker is allowed to tamper arbitrarily with each of $L$ and $R$ individually. Dodis, Kazana, and the authors (STOC 2015) developed a generalization of non-malleable codes, the concept of a non-malleable reduction, under which a non-malleable code for a tampering family $\mathcal{F}$ can be seen as a non-malleable reduction from $\mathcal{F}$ to a family NM of functions comprising the identity function and the constant functions. They also gave a constant-rate reduction from the split-state tampering family to a tampering family $\mathcal{G}$ containing so-called $2$-lookahead functions and forgetful functions.
In this work, we give a constant-rate non-malleable reduction from the family $\mathcal{G}$ to NM, thereby giving the first {\em constant-rate non-malleable code in the split-state model}. Central to our work is a technique called inception coding, introduced by Aggarwal, Kazana and Obremski (TCC 2017), in which a string that detects tampering on a part of the codeword is concatenated to the message being encoded.