All papers in 2016 (Page 11 of 1195 results)

Last updated:  2016-02-24
How to Generalize RSA Cryptanalyses
Atsushi Takayasu, Noboru Kunihiro
Recently, the security of RSA variants with moduli N=p^rq, e.g., the Takagi RSA and the prime power RSA, has been actively studied in several papers. Due to the unusual composite moduli and rather complex key generations, the analyses are more involved than for the standard RSA. Furthermore, the methods used in some of these works are specialized to the form of composite integers N=p^rq. In this paper, we generalize the techniques used in the current best attacks on the standard RSA to the RSA variants. We show that the lattices used to attack the standard RSA can be transformed into lattices to attack the variants, where the dimensions are larger by a factor of (r+1) than those of the original lattices. We believe our approach is more natural than those of previous works; to illustrate this point, we obtain the following results:
- A simpler proof for the small secret exponent attacks on the Takagi RSA proposed by Itoh et al. (CT-RSA 2008). Our proof generalizes the work of Herrmann and May (PKC 2010).
- Partial key exposure attacks on the Takagi RSA; generalizations of the works of Ernst et al. (Eurocrypt 2005) and Takayasu and Kunihiro (SAC 2014). Our attacks improve the result of Huang et al. (ACNS 2014).
- Small secret exponent attacks on the prime power RSA; generalizations of the work of Boneh and Durfee (Eurocrypt 1999). Our attacks improve the results of Sarkar (DCC 2014, ePrint 2015) and Lu et al. (Asiacrypt 2015).
- Partial key exposure attacks on the prime power RSA; generalizations of the works of Ernst et al. and Takayasu and Kunihiro. Our attacks improve the results of Sarkar and Lu et al.
The construction techniques and strategies we use are conceptually easier to understand than those of previous works, owing to the fact that we exploit the exact connections with those of the standard RSA.
Last updated:  2018-05-08
How to Share a Secret, Infinitely
Ilan Komargodski, Moni Naor, Eylon Yogev
Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best known example is the $k$-threshold access structure, where the qualified subsets are those of size at least $k$. When $k=2$ and there are $n$ parties, there are schemes for sharing an $\ell$-bit secret in which the share size of each party is roughly $\max\{\ell,\log n\}$ bits, and this is tight even for secrets of 1 bit. In these schemes, the number of parties $n$ must be given in advance to the dealer. In this work we consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the $t$-th party arriving the smallest possible share as a function of $t$. Our main result is such a scheme for the $k$-threshold access structure and 1-bit secrets where the share size of party $t$ is $(k-1)\cdot \log t + \mathsf{poly}(k)\cdot o(\log t)$. For $k=2$ we observe an equivalence to prefix codes and present matching upper and lower bounds of the form $\log t + \log\log t + \log\log\log t + O(1)$. Finally, we show that for any access structure there exists such a secret sharing scheme with shares of size $2^{t-1}$.
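For k = 2, the equivalence with prefix codes can be made concrete: give the t-th arriving party a prefix-free codeword of its index t, and the share length tracks the stated bound. A minimal sketch using Elias codes (our choice of illustration; the paper's optimal construction differs):

```python
def elias_gamma(x):
    """Prefix-free code for x >= 1: (bit-length of x minus 1) zeros, then x in binary."""
    b = bin(x)[2:]
    return "0" * (len(b) - 1) + b

def elias_delta(x):
    """Prefix-free code of length ~ log x + 2 log log x: gamma-encode the
    bit-length of x, then append x's bits without the leading 1."""
    b = bin(x)[2:]
    return elias_gamma(len(b)) + b[1:]

# The share of party t could be its delta codeword:
# |share(t)| ~ log t + O(log log t), already close to the matching bounds above.
```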
Last updated:  2017-10-09
Security considerations for Galois non-dual RLWE families
Hao Chen, Kristin Lauter, Katherine E. Stange
We explore further the hardness of the non-dual discrete variant of the Ring-LWE problem for various number rings, give improved attacks for certain rings satisfying some additional assumptions, construct a new family of vulnerable Galois number fields, and apply some number theoretic results on Gauss sums to deduce the likely failure of these attacks for 2-power cyclotomic rings and unramified moduli.
Last updated:  2018-06-19
On Negation Complexity of Injections, Surjections and Collision-Resistance in Cryptography
Douglas Miller, Adam Scrivener, Jesse Stern, Muthuramakrishnan Venkitasubramaniam
Goldreich and Izsak (Theory of Computing, 2012) initiated the research on understanding the role of negations in circuits implementing cryptographic primitives, notably, considering one-way functions and pseudo-random generators. More recently, Guo, Malkin, Oliveira and Rosen (TCC, 2014) determined tight bounds on the minimum number of negation gates (i.e., negation complexity) of a wide variety of cryptographic primitives including pseudo-random functions, error-correcting codes, hardcore predicates and randomness extractors. We continue this line of work to establish the following results: 1. First, we determine tight lower bounds on the negation complexity of collision-resistant and target collision-resistant hash-function families. 2. Next, we examine the role of injectivity and surjectivity on the negation complexity of one-way functions. Here we show that: a) Assuming the existence of one-way injections, there exists a monotone one-way injection. Furthermore, we complement our result by showing that, even in the worst case, there cannot exist a monotone one-way injection with constant stretch. b) Assuming the existence of one-way permutations, there exists a monotone one-way surjection. 3. Finally, we show that there exist list-decodable codes with monotone decoders. In addition, we observe some interesting corollaries to our results.
Last updated:  2017-11-29
Optimal Security Proofs for Signatures from Identification Schemes
Eike Kiltz, Daniel Masny, Jiaxin Pan
We perform a concrete security treatment of digital signature schemes obtained from canonical identification schemes via the Fiat-Shamir transform. If the identification scheme is rerandomizable and satisfies the weakest possible security notion (key-recoverability), then the implied signature scheme is unforgeable against chosen-message attacks in the multi-user setting in the random oracle model. The reduction loses a factor of roughly Qh, the number of hash queries. Previous security reductions incorporated an additional multiplicative loss of N, the number of users in the system. As an important application of our framework, we obtain a concrete security treatment for Schnorr signatures. Our analysis is done in small steps via intermediate security notions, and all our implications have relatively simple proofs. Furthermore, for each step we show the optimality of the given reduction via a meta-reduction.
Last updated:  2016-02-23
A MAC Mode for Lightweight Block Ciphers
Atul Luykx, Bart Preneel, Elmar Tischhauser, Kan Yasuda
Lightweight cryptography strives to protect communication in constrained environments without sacrificing security. However, security often conflicts with efficiency, shown by the fact that many new lightweight block cipher designs have block sizes as low as 64 or 32 bits. Due to the birthday bound, such low block sizes lead to impractical limits on how much data a mode of operation can process per key. MAC (message authentication code) modes of operation frequently have bounds which degrade with both the number of messages queried and the message length. We present a MAC mode of operation, LightMAC, where the message length has no effect on the security bound, allowing an order of magnitude more data to be processed per key. Furthermore, LightMAC is incredibly simple, has almost no overhead over the block cipher, and is parallelizable. As a result, LightMAC not only offers compact authentication for resource-constrained platforms, but also allows high-performance parallel implementations. We highlight this in a comprehensive implementation study, instantiating LightMAC with PRESENT and the AES. Moreover, LightMAC allows flexible trade-offs between rate and maximum message length. Unlike PMAC and its many derivatives, LightMAC is not covered by patents. Altogether, this makes it a promising authentication primitive for a wide range of platforms and use cases.
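The structure of the mode can be sketched in a few lines. The sketch below is a simplified illustration, not the specification: it stands in a hash-based toy PRF for the block cipher and handles final-block padding in the simplest way; `S` is the counter size, the rate versus maximum-message-length trade-off parameter mentioned above.

```python
import hashlib

BLOCK = 16  # block size in bytes
S = 4       # counter size in bytes: the rate vs. max-length trade-off parameter

def E(key, block):
    # toy PRF standing in for the block cipher (illustration only)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def lightmac(k1, k2, msg, taglen=8):
    chunk = BLOCK - S
    blocks = [msg[i:i + chunk] for i in range(0, len(msg), chunk)] or [b""]
    last = blocks.pop()  # the final block is handled separately
    v = bytes(BLOCK)
    for i, m in enumerate(blocks, start=1):
        # each block carries a distinct counter, so security is
        # independent of how many blocks a single message has
        v = xor(v, E(k1, i.to_bytes(S, "big") + m))
    pad = last + b"\x80" + bytes(BLOCK - len(last) - 1)  # simple 10* padding
    return E(k2, xor(v, pad))[:taglen]
```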
Last updated:  2017-02-05
Yao's millionaires' problem and public-key encryption without computational assumptions
Dima Grigoriev, Laszlo B. Kish, Vladimir Shpilrain
We offer efficient and practical solutions of Yao's millionaires' problem without using any one-way functions. Some of the solutions involve physical principles, while others are purely mathematical. One of our solutions (based on physical principles) yields a public-key encryption protocol secure against a (passive) computationally unbounded adversary. In that protocol, the legitimate parties are not assumed to be computationally unbounded.
Last updated:  2016-02-23
On the division property of S-boxes
Faruk Göloğlu, Vincent Rijmen, Qingju Wang
In 2015, Todo introduced a property of multisets of a finite field called the division property. It was then used by Todo in an attack against the S7 S-box of the MISTY1 cipher. This paper provides a complete mathematical analysis of the division property. The tool we use is the discrete Fourier transform. We relate the division property to the natural concept of the degree of a subset of a finite field. This indeed provides a characterization of multisets satisfying the division property. In 2015, Sun et al. gave some properties related to the division property. In this paper we give a complete characterization and reprove many of their results. We show that the division property is actually the dual of the degree of $t$-products of the inverse S-box and show that these two characteristics are affine invariants. We then propose a very efficient way to check the vulnerability of a given S-box against attacks of this type. We also reprove some recent interesting results using the method based on the discrete Fourier transform. We finally check whether the S-boxes of the candidate ciphers in the CAESAR competition are vulnerable to attacks based on the division property.
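To make the object of study concrete: a multiset X in F_2^n has division property D_k when the XOR-sum of the bit-product x^u over X vanishes for every u of Hamming weight below k. A minimal brute-force check of this definition (illustration only; the Fourier-based test proposed above is far more efficient):

```python
def bit_product(x, u):
    # x^u = product of the bits of x selected by u; equals 1 iff all bits of u lie in x
    return int(x & u == u)

def division_order(X, n):
    """Largest k such that the sum of x^u over X is even for all u with wt(u) < k."""
    for w in range(n + 1):
        for u in range(2 ** n):
            if bin(u).count("1") == w and sum(bit_product(x, u) for x in X) & 1:
                return w
    return n + 1

# e.g. the full codebook F_2^4 has order 4; a 2-dimensional linear subspace has order 2
```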
Last updated:  2016-05-24
Efficient Secure Multiparty Computation with Identifiable Abort
Carsten Baum, Emmanuela Orsini, Peter Scholl
We study secure multiparty computation (MPC) in the dishonest majority setting providing security with identifiable abort, where if the protocol aborts, the honest parties can agree upon the identity of a corrupt party. All known constructions that achieve this notion require expensive zero-knowledge techniques to obtain active security, so are not practical. In this work, we present the first efficient MPC protocol with identifiable abort. Our protocol has an information-theoretic online phase with message complexity $O(n^2)$ for each secure multiplication (where $n$ is the number of parties), similar to the BDOZ protocol (Bendlin et al., Eurocrypt 2011), and a factor in the security parameter lower than the identifiable abort protocol of Ishai et al. (Crypto 2014). A key component of our protocol is a linearly homomorphic information-theoretic signature scheme, for which we provide the first definitions and construction based on a previous non-homomorphic scheme. We then show how to implement the preprocessing for our protocol using somewhat homomorphic encryption, similarly to the SPDZ protocol (Damgård et al., Crypto 2012) and other recent works with applicable efficiency improvements.
Last updated:  2017-04-25
Lightweight MDS Generalized Circulant Matrices (Full Version)
Meicheng Liu, Siang Meng Sim
In this article, we analyze the circulant structure of generalized circulant matrices to reduce the search space for finding lightweight MDS matrices. We first show that the implementation of circulant matrices can be serialized and can achieve similar area requirement and clock cycle performance as a serial-based implementation. By proving many new properties and equivalence classes for circulant matrices, we greatly reduce the search space for finding lightweight maximum distance separable (MDS) circulant matrices. We also generalize the circulant structure and propose a new class of matrices, called cyclic matrices, which preserve the benefits of circulant matrices and, in addition, have the potential of being self-invertible. In this new class of matrices, we obtain not only the MDS matrices with the least XOR gate requirement for dimensions from 3x3 to 8x8 in GF(2^4) and GF(2^8), but also involutory MDS matrices, which were proven not to exist in the class of circulant matrices. To the best of our knowledge, the latter matrices are the first of their kind, having a matrix structure similar to circulant matrices while being involutory and MDS simultaneously. Compared to the existing best known lightweight matrices, our new candidates either outperform or match them in terms of XOR gates required for a hardware implementation. Notably, our work is generic and independent of the lightweightness metric. Hence, it is applicable for improving the search for efficient circulant matrices under other metrics besides XOR gates.
Last updated:  2016-11-22
On the Influence of Message Length in PMAC's Security Bounds
Atul Luykx, Bart Preneel, Alan Szepieniec, Kan Yasuda
Many MAC (Message Authentication Code) algorithms have security bounds which degrade linearly with the message length. Often there are attacks that confirm the linear dependence on the message length, yet PMAC has remained without attacks. Our results show that PMAC's message length dependence in security bounds is non-trivial. We start by studying a generalization of PMAC in order to focus on PMAC's basic structure. By abstracting away details, we are able to show that there are two possibilities: either there are infinitely many instantiations of generic PMAC with security bounds independent of the message length, or finding an attack against generic PMAC which establishes message length dependence is computationally hard. The latter statement relies on a conjecture on the difficulty of finding subsets of a finite field summing to zero or satisfying a binary quadratic form. Using the insights gained from studying PMAC's basic structure, we then shift our attention to the original instantiation of PMAC, namely, with Gray codes. Despite the initial results on generic PMAC, we show that PMAC with Gray codes is one of the more insecure instantiations of PMAC, by illustrating an attack which roughly establishes a linear dependence on the message length.
Last updated:  2016-02-28
Efficiently Enforcing Input Validity in Secure Two-party Computation
Jonathan Katz, Alex J. Malozemoff, Xiao Wang
Secure two-party computation based on cut-and-choose has made great strides in recent years, with a significant reduction in the total number of garbled circuits required. Nevertheless, the overhead of cut-and-choose can still be significant for large circuits (i.e., a factor of $\rho$ in both communication and computation for statistical security $2^{-\rho}$). We show that for a particular class of computation it is possible to do better. Namely, consider the case where a function on the parties' inputs is computed only if each party's input satisfies some publicly checkable predicate (e.g., is signed by a third party, or lies in some desired domain). Using existing cut-and-choose-based protocols, both the predicate checks and the function would need to be garbled $\rho$ times. Here we show a protocol in which only the underlying function is garbled $\rho$ times, and the predicate checks are each garbled only \emph{once}. For certain natural examples (e.g., signature verification followed by evaluation of a million-gate circuit), this can lead to huge savings in communication (up to 80$\times$) and computation (up to 56$\times$). We provide detailed estimates using realistic examples to validate our claims.
Last updated:  2016-02-23
There is Wisdom in Harnessing the Strengths of your Enemy: Customized Encoding to Thwart Side-Channel Attacks -- Extended Version --
Houssem Maghrebi, Victor Servant, Julien Bringer
Side-channel attacks are an important concern for the security of cryptographic algorithms. To counteract them, a recent line of research has investigated the use of software encoding functions, such as dual-rail, rather than the well-known masking countermeasure. The core idea consists in encoding the sensitive data with a fixed Hamming weight value and performing all operations in this fashion. This new set of countermeasures applies to all devices that leak a function of the Hamming weight of the internal variables. However, when the leakage deviates from this idealized model, the claimed security guarantee vanishes. In this work, we introduce a framework that builds customized encoding functions according to the precise leakage model obtained from stochastic profiling. We specifically investigate how to take advantage of the adversary's knowledge of the physical leakage to select the corresponding optimal encoding. Our solution has been evaluated within several security metrics, proving its efficiency against side-channel attacks in realistic scenarios. A concrete experiment protecting the PRESENT Sbox with our proposal confirms its practicability. In a realistic scenario, our new custom encoding achieves a hundredfold reduction in leakage compared to dual-rail, while using the same code length.
Last updated:  2016-02-23
Side-Channel Watchdog: Run-Time Evaluation of Side-Channel Vulnerability in FPGA-Based Crypto-systems
Souvik Sonar, Debapriya Basu Roy, Rajat Subhra Chakraborty, Debdeep Mukhopadhyay
Besides security against classical cryptanalysis, it is important for cryptographic implementations to have sufficient robustness against side-channel attacks. Many countermeasures have been proposed to thwart side-channel attacks, especially those based on power trace measurements. Additionally, researchers have proposed several evaluation metrics to assess the side-channel security of crypto-systems. However, evaluation of any crypto-system is done during the testing phase and is not part of the actual hardware. In our approach, we propose to implement such evaluation metrics on-chip for run-time side-channel vulnerability estimation of a cryptosystem. The objective is to create a watchdog on the hardware which will monitor the side-channel leakage of the device and will alert the user if that leakage crosses a pre-determined threshold, beyond which the system might be considered vulnerable. Once such an alert signal is activated, proactive countermeasures can be activated either at the device level or at the protocol level to prevent the impending side-channel attack. An FPGA-based prototype designed by us shows low hardware overhead and is an effective option that avoids the use of a bulky and inconvenient on-field measurement setup.
Last updated:  2016-02-23
Cryptographic Properties of Addition Modulo $2^n$
S. M. Dehnavi, A. Mahmoodi Rishakani, M. R. Mirzaee Shamsabad, Hamidreza Maimani, Einollah Pasha
The operation of addition modulo a power of two is one of the most widely used operations in symmetric cryptography. For example, modular addition is used in the RC6, MARS and Twofish block ciphers and the RC4, Bluetooth and Rabbit stream ciphers. In this paper, we study statistical and algebraic properties of addition modulo a power of two. We obtain the probability distribution of the modular addition carry bits, along with the conditional probability distribution of these carry bits. Using these probability distributions and the Markovity of the carry bits, we compute the joint probability distribution of an arbitrary number of modular addition carry bits. Then, we examine algebraic properties of modular addition with a constant and obtain the number of terms as well as the algebraic degrees of the component Boolean functions of modular addition with a constant. Finally, we present another formula for the ANF of the component Boolean functions of modular addition modulo a power of two. This formula contains more information than the representations presented in the cryptographic literature up to now.
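The first of these distributions is easy to verify exhaustively: for uniform x, y mod 2^n, the probability that the carry into bit i equals 1 is (2^i - 1)/2^{i+1}, approaching 1/2 from below. The formula here follows from the standard carry recurrence (P_{i+1} = 1/4 + P_i/2), stated by us for illustration rather than quoted from the paper:

```python
from fractions import Fraction

def carry_dist(n):
    """Exact probability that the carry into bit i is 1, for uniform x, y mod 2^n."""
    counts = [0] * n
    for x in range(2 ** n):
        for y in range(2 ** n):
            carry = 0
            for i in range(n):
                if carry:            # 'carry' is the carry into bit i here
                    counts[i] += 1
                carry = ((x >> i & 1) + (y >> i & 1) + carry) >> 1
    total = 4 ** n                   # number of (x, y) pairs
    return [Fraction(c, total) for c in counts]
```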
Last updated:  2019-01-26
Public-Key Encryption with Simulation-Based Selective-Opening Security and Compact Ciphertexts
Dennis Hofheinz, Tibor Jager, Andy Rupp
In a selective-opening (SO) attack on an encryption scheme, an adversary A gets a number of ciphertexts (with possibly related plaintexts), and can then adaptively select a subset of those ciphertexts. The selected ciphertexts are then opened for A (which means that A gets to see the plaintexts and the corresponding encryption random coins), and A tries to break the security of the unopened ciphertexts. Two main flavors of SO security notions exist: indistinguishability-based (IND-SO) and simulation-based (SIM-SO) ones. Whereas IND-SO security allows for simple and efficient instantiations, its usefulness in larger constructions is somewhat limited, since it is restricted to special types of plaintext distributions. On the other hand, SIM-SO security does not suffer from this restriction, but turns out to be significantly harder to achieve. In fact, all known SIM-SO secure encryption schemes either require O(|m|) group elements in the ciphertext to encrypt |m|-bit plaintexts, or use specific algebraic properties available in the DCR setting. In this work, we present the first SIM-SO secure PKE schemes in the discrete-log setting with compact ciphertexts (whose size is O(1) group elements plus plaintext size). The SIM-SO security of our constructions can be based on, e.g., the k-linear assumption for any k. Technically, our schemes extend previous IND-SO secure schemes by the property that simulated ciphertexts can be efficiently opened to arbitrary plaintexts. We do so by encrypting the plaintext in a bitwise fashion, but such that each encrypted bit leads only to a single ciphertext bit (plus O(1) group elements that can be shared across many bit encryptions). Our approach leads to rather large public keys (of O(|m|^2) group elements), but we also show how this public key size can be reduced (to O(|m|) group elements) in pairing-friendly groups.
Last updated:  2016-02-22
Computing theta functions in quasi-linear time in genus 2 and above
Hugo Labrande, Emmanuel Thomé
We outline an algorithm to compute $\theta(z,\tau)$ in genus 2 in quasi-optimal time, borrowing ideas from the algorithm for theta constants and the one for $\theta(z,\tau)$ in genus 1. Our implementation shows a large speedup for precisions as low as a few thousand decimal digits. We also lay out a strategy to generalize this algorithm to genus $g$.
Last updated:  2016-02-22
Integrals go Statistical: Cryptanalysis of Full Skipjack Variants
Meiqin Wang, Tingting Cui, Huaifeng Chen, Ling Sun, Long Wen, Andrey Bogdanov
Integral attacks form a powerful class of cryptanalytic techniques that have been widely used in the security analysis of block ciphers. The integral distinguishers are based on balanced properties holding with probability one. To obtain a distinguisher covering more rounds, an attacker will normally increase the data complexity by iterating through more plaintexts with a given structure under the strict limitation of the full codebook. On the other hand, an integral property can only be deterministically verified if the plaintexts cover all possible values of a bit selection. These circumstances have somewhat restrained the applications of integral cryptanalysis. In this paper, we aim to address these limitations and propose a novel \emph{statistical integral distinguisher} where only a part of the value sets for these input bit selections is taken into consideration instead of all possible values. This enables us to achieve significantly lower data complexities for our statistical integral distinguisher compared to those of the traditional integral distinguisher. As an illustration, we successfully attack the full-round Skipjack-BABABABA, a variant of NSA's Skipjack block cipher, for the first time.
Last updated:  2016-02-22
Reduced Memory Meet-in-the-Middle Attack against the NTRU Private Key
Christine van Vredendaal
NTRU is a public-key cryptosystem introduced at ANTS-III. The two most used techniques in attacking the NTRU private key are meet-in-the-middle attacks and lattice-basis reduction attacks. In the 2007 CRYPTO paper ``A Hybrid Lattice-Reduction and Meet-in-the-Middle Attack Against NTRU'' both techniques are combined and it is pointed out that the largest obstacle to attacks is the memory capacity that is required for the meet-in-the-middle phase. In this paper an algorithm is presented that applies low-memory techniques to find `golden' collisions to Odlyzko's meet-in-the-middle attack against the NTRU private key. Several aspects of NTRU secret keys and the algorithm are analysed. The running time of the algorithm with a maximum storage capacity of $w$ is estimated and experimentally verified. Experiments indicate that decreasing the storage capacity by a factor $c$ increases the running time by a factor $\sqrt{c}$.
Last updated:  2016-05-20
Anonymous Role-Based Access Control on E-Health Records
Xingguang Zhou, Jianwei Liu, Weiran Liu, Qianhong Wu
Electronic Health Record (EHR) systems greatly facilitate health record management. The privacy risk to patients' records is the dominant obstacle to widely deploying EHRs. Role-based access control (RBAC) schemes offer access control on EHRs according to one's role: only medical staff with roles satisfying the specified access policies can read EHRs. In existing schemes, attackers can link patients' identities to their doctors; therefore, the classification of patients' diseases is leaked without actually knowing the patients' EHRs. To address this problem, we present an anonymous RBAC scheme. Not only does it achieve flexible access control, but it also provides privacy preservation for individuals. Moreover, our scheme maintains the property of constant size for the encapsulated EHRs. The proposed security model, with both semantic security and anonymity, can be proven under decisional bilinear group assumptions. Besides, we provide an approach for EHR owners to search for their targeted EHRs in the anonymous system. For a better user experience, we apply an "online/offline" approach to speed up data processing in our scheme. Experimental results show that key generation and EHR encapsulation can be done in milliseconds.
Last updated:  2016-10-07
Online/Offline OR Composition of Sigma Protocols
Michele Ciampi, Giuseppe Persiano, Alessandra Scafuro, Luisa Siniscalchi, Ivan Visconti
Proofs of partial knowledge allow a prover to prove knowledge of witnesses for k out of n instances of NP languages. Cramer, Schoenmakers and Damg\aa rd [CDS94] provided an efficient construction of a 3-round public-coin witness-indistinguishable (k, n)-proof of partial knowledge for any NP language, by cleverly combining n executions of Sigma-protocols for that language. This transform assumes that all n instances are fully specified before the proof starts, and thus directly rules out the possibility of choosing some of the instances after the first round. Very recently, Ciampi et al. [CPS+16] provided an improved transform where one of the instances can be specified in the last round. They focus on (1,2)-proofs of partial knowledge with the additional feature that one instance is defined in the last round, and could be adaptively chosen by the verifier. They left as an open question the existence of an efficient (1, 2)-proof of partial knowledge where no instance is known in the first round. More generally, they left open the question of constructing an efficient (k, n)-proof of partial knowledge where knowledge of all n instances can be postponed. Indeed, this property is achieved only by inefficient constructions requiring NP reductions [LS90]. In this paper we focus on the question of achieving adaptive-input proofs of partial knowledge. We provide, through a transform, the first efficient construction of a 3-round public-coin witness-indistinguishable (k, n)-proof of partial knowledge where all instances can be decided in the third round. Our construction enjoys adaptive-input witness indistinguishability. Additionally, the proof of knowledge property is preserved even if the adversarial prover selects instances adaptively in the last round, as long as our transform is applied to a proof of knowledge belonging to the widely used class of proofs of knowledge described in [Mau15, CD98].
Since knowledge of instances and witnesses is not needed before the last round, the first round can be precomputed, and in the online/offline setting our performance is similar to that of [CDS94]. Our new transform relies on the DDH assumption (in contrast to the transforms of [CDS94, CPS+16], which are unconditional). We also show how to strengthen the transform of [CPS+16] so that it also achieves adaptive soundness, when the underlying combined protocols belong to the class of protocols described in [Mau15, CD98].
Last updated:  2016-02-23
Honey Encryption Beyond Message Recovery Security
Joseph Jaeger, Thomas Ristenpart, Qiang Tang
Juels and Ristenpart introduced honey encryption (HE) and showed how to achieve message recovery security even in the face of attacks that can exhaustively try all likely keys. This is important in contexts like password-based encryption where keys are very low entropy, and HE schemes based on the JR construction were subsequently proposed for use in password management systems and even long-term protection of genetic data. But message recovery security is, in this setting as in previous ones, a relatively weak property, and in particular does not prohibit an attacker from learning partial information about plaintexts or from usefully mauling ciphertexts. We show that one can build HE schemes that can hide partial information about plaintexts and that prevent mauling even in the face of exhaustive brute-force attacks. To do so, we introduce target-distribution semantic-security and target-distribution non-malleability security notions and prove that a slight variant of the JR HE construction can meet them. The proofs require new balls-and-bins type analyses significantly different from those used in prior work. Finally, we provide a formal proof of the folklore result that an unbounded adversary which obtains a limited number of encryptions of known plaintexts can always succeed at message recovery.
Last updated:  2016-02-22
Circuit Compilers with O(1/log(n)) Leakage Rate
Marcin Andrychowicz, Stefan Dziembowski, Sebastian Faust
The goal of leakage-resilient cryptography is to construct cryptographic algorithms that are secure even if the devices on which they are implemented leak information to the adversary. One of the main parameters for designing leakage-resilient constructions is the leakage \emph{rate}, i.e., the proportion between the amount of leaked information and the complexity of the computation carried out by the construction. We focus on so-called circuit compilers, which are an important tool for transforming any cryptographic algorithm (represented as a circuit) into one that is secure against leakage attacks. Our model is the ``probing attack'' where the adversary learns the values on some (chosen by him) wires of the circuit. Our results can be summarized as follows. First, we construct circuit compilers with perfect security and leakage rate $O(1/\log(n))$, where $n$ denotes the security parameter (previously known constructions achieved rate $O(1/n)$). Moreover, for circuits that have only affine gates we obtain a construction with a constant leakage rate. In particular, our techniques can be used to obtain constant-rate leakage-resilient schemes for refreshing an encoded secret (previously known schemes could tolerate leakage rates of only $O(1/n)$). We also show that our main construction is secure against constant-rate leakage in the random probing leakage model, where the leaking wires are chosen randomly.
Last updated:  2016-06-29
All Your Queries Are Belong to Us: The Power of File-Injection Attacks on Searchable Encryption
Yupeng Zhang, Jonathan Katz, Charalampos Papamanthou
The goal of searchable encryption (SE) is to enable a client to execute searches over encrypted files stored on an untrusted server while ensuring some measure of privacy for both the encrypted files and the search queries. Research has focused on developing efficient SE schemes at the expense of allowing some small, well-characterized "(information) leakage" to the server about the files and/or the queries. The practical impact of this leakage, however, remains unclear. We thoroughly study file-injection attacks--in which the server sends files to the client that the client then encrypts and stores--on the query privacy of single-keyword and conjunctive SE schemes. We show such attacks can reveal the client's queries in their entirety using very few injected files, even for SE schemes having low leakage. We also demonstrate that natural countermeasures for preventing file-injection attacks can be easily circumvented. Our attacks outperform prior work significantly in terms of their effectiveness as well as in terms of their assumptions about the attacker's prior knowledge.
Last updated:  2017-10-18
Commutativity, Associativity, and Public Key Cryptography
Jacques Patarin, Valérie Nachef
In this paper, we study some possible generalizations of the famous Diffie-Hellman algorithm. As we will see, in the end most of these generalizations are not secure or are equivalent to classical schemes. However, these results are not always obvious, and moreover our analysis reveals some interesting connections between the concepts of commutativity, associativity, and public key cryptography.
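For reference, the classical Diffie-Hellman key exchange that the paper generalizes rests on the commutativity of exponentiation: $(g^a)^b = (g^b)^a \bmod p$. A toy sketch with an illustratively small Mersenne-prime modulus (not secure parameters):

```python
import secrets

# Toy parameters: a Mersenne prime modulus, far too small for real security.
p = 2**127 - 1
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1   # private exponent
    return x, pow(g, x, p)             # (private key, public key)

# Commutativity of exponentiation gives both parties the same secret:
# (g^a)^b = (g^b)^a mod p.
a_priv, a_pub = keygen()
b_priv, b_pub = keygen()
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

The paper's question is whether other commutative or associative structures can play the role of modular exponentiation here without losing security.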
Last updated:  2016-02-19
Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning
Ran Raz
We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager and shows that for some learning problems a large storage space is crucial. More formally, in the problem of parity learning, an unknown string $x \in \{0,1\}^n$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where each $a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity learning that uses less than $n^2/25$ bits of memory requires an exponential number of samples. Previously, there was no non-trivial lower bound on the number of samples needed for any learning problem, even if the allowed memory size is $O(n)$ (where $n$ is the space needed to store one sample). We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length $n$, as well as time complexity of $n$ per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses less than $n^2/25$ memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.
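The memory/sample trade-off can be made concrete: a learner that stores a full linear system, i.e. uses Θ(n²) bits of memory, recovers x from only about n samples via Gaussian elimination over GF(2). A small sketch of such a memory-heavy learner (names are ours; rows are stored as n-bit integers):

```python
import random

def parity_samples(x, n, rng):
    """Stream of parity-learning samples: a_t uniform over {0,1}^n (as an
    int), b_t = <a_t, x> mod 2."""
    while True:
        a = rng.getrandbits(n)
        yield a, bin(a & x).count("1") & 1

def learn_parity(n, samples):
    """Gaussian elimination over GF(2): stores up to n rows of n bits
    (Theta(n^2) memory) but needs only about n samples."""
    pivots = {}                                    # pivot bit -> (row, rhs)
    for a, b in samples:
        for p in sorted(pivots, reverse=True):     # reduce by known pivots
            if (a >> p) & 1:
                ra, rb = pivots[p]
                a ^= ra
                b ^= rb
        if a == 0:
            continue                               # linearly dependent sample
        pivots[a.bit_length() - 1] = (a, b)
        if len(pivots) == n:                       # full rank: x is determined
            break
    x = 0
    for p in sorted(pivots):                       # back-substitute, low bits first
        ra, rb = pivots[p]
        rest = ra ^ (1 << p)
        x |= (rb ^ (bin(rest & x).count("1") & 1)) << p
    return x
```

The theorem above says this quadratic memory footprint is essentially unavoidable unless one pays with exponentially many samples.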
Last updated:  2016-02-19
Provably Robust Sponge-Based PRNGs and KDFs
Peter Gaži, Stefano Tessaro
We study the problem of devising provably secure PRNGs with input based on the sponge paradigm. Such constructions are very appealing, as efficient software/hardware implementations of SHA-3 can easily be translated into a PRNG in a nearly black-box way. The only existing sponge-based construction, proposed by Bertoni et al. (CHES 2010), fails to achieve the security notion of robustness recently considered by Dodis et al. (CCS 2013), for two reasons: (1) The construction is deterministic, and thus there are high-entropy input distributions on which the construction fails to extract random bits, and (2) The construction is not forward secure, and presented solutions aiming at restoring forward security have not been rigorously analyzed. We propose a seeded variant of Bertoni et al.'s PRNG with input which we prove secure in the sense of robustness, delivering in particular concrete security bounds. On the way, we make what we believe to be an important conceptual contribution, developing a variant of the security framework of Dodis et al. tailored at the ideal permutation model that captures PRNG security in settings where the weakly random inputs are provided from a large class of possible adversarial samplers which are also allowed to query the random permutation. As a further application of our techniques, we also present a simple and very efficient key-derivation function based on sponges (which can hence be instantiated from SHA-3 in a black-box fashion), which we also prove secure when fed with samples from permutation-dependent distributions.
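To illustrate the "nearly black-box" point, here is a toy PRNG-with-input built from SHAKE-128 (a sponge-based SHA-3 function in Python's hashlib) that absorbs entropy inputs and ratchets its state on every output. This only conveys the general shape of such designs; it is not the seeded construction analyzed and proven robust in the paper.

```python
import hashlib

class ToyShakePrng:
    """Toy PRNG-with-input in the spirit of sponge-based designs: absorb
    (possibly weak) entropy inputs into a state, squeeze output via
    SHAKE-128, and ratchet the state for heuristic forward security."""

    def __init__(self, seed: bytes):
        self.state = hashlib.shake_128(b"seed:" + seed).digest(32)

    def absorb(self, entropy: bytes) -> None:
        # Refresh the state with new (possibly adversarially sampled) input.
        self.state = hashlib.shake_128(b"in:" + self.state + entropy).digest(32)

    def next_bytes(self, n: int) -> bytes:
        out = hashlib.shake_128(b"out:" + self.state).digest(32 + n)
        self.state = out[:32]   # ratchet: the previous state is discarded
        return out[32:]
```

Robustness as defined by Dodis et al. asks for much more than this sketch provides, e.g. recovery from full state compromise after enough fresh entropy has been absorbed.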
Last updated:  2017-08-28
Town Crier: An Authenticated Data Feed for Smart Contracts
Fan Zhang, Ethan Cecchetti, Kyle Croman, Ari Juels, Elaine Shi
Smart contracts are programs that execute autonomously on blockchains. Their key envisioned uses (e.g. financial instruments) require them to consume data from outside the blockchain (e.g. stock quotes). Trustworthy data feeds that support a broad range of data requests will thus be critical to smart contract ecosystems. We present an authenticated data feed system called Town Crier (TC). TC acts as a bridge between smart contracts and existing web sites, which are already commonly trusted for non-blockchain applications. It combines a blockchain front end with a trusted hardware back end to scrape HTTPS-enabled websites and serve source-authenticated data to relying smart contracts. TC also supports confidentiality; it enables private data requests with encrypted parameters and secure use of user credentials to scrape access-controlled online data sources. We describe TC’s design principles and architecture and report on an implementation that uses Intel’s recently introduced Software Guard Extensions (SGX) to furnish data to the Ethereum smart contract system. We formally model TC and define and prove its basic security properties in the Universal Composability (UC) framework. Our results include definitions and techniques of general interest relating to resource consumption (Ethereum’s “gas” fee system) and TCB minimization. We also report on experiments with three example applications. We plan to launch TC soon as an online public service.
Last updated:  2016-02-19
On Bitcoin Security in the Presence of Broken Crypto Primitives
Ilias Giechaskiel, Cas Cremers, Kasper Rasmussen
Digital currencies like Bitcoin rely on cryptographic primitives to operate. However, past experience shows that cryptographic primitives do not last forever: increased computational power and advanced cryptanalysis cause primitives to break frequently, and motivate the development of new ones. It is therefore crucial for maintaining trust in a cryptocurrency to anticipate such breakage. We present the first systematic analysis of the effect of broken primitives on Bitcoin. We identify the core cryptographic building blocks and analyze the various ways in which they can break, and the subsequent effect on the main Bitcoin security guarantees. Our analysis reveals a wide range of possible effects depending on the primitive and type of breakage, ranging from minor privacy violations to a complete breakdown of the currency. Our results lead to several observations on, and suggestions for, the Bitcoin migration plans in case of broken cryptographic primitives.
Last updated:  2017-04-19
Per-Session Security: Password-Based Cryptography Revisited
Grégory Demay, Peter Gaži, Ueli Maurer, Björn Tackmann
Cryptographic security is usually defined as some form of guarantee that holds except when a bad event with negligible probability occurs, and nothing is guaranteed in that case. However, in settings where such failure can happen with substantial probability, one needs to provide guarantees even for the bad case. A typical example is where a (possibly weak) password is used instead of a secure cryptographic key to protect a session, the bad event being that the adversary correctly guesses the password. In a situation with multiple such sessions, a per-session guarantee is desired: any session for which the password has not been guessed remains secure, independently of whether other sessions have been compromised. In particular, a user with a very strong password enjoys the full security guarantees of an analysis in which passwords are replaced by uniform cryptographic keys. Our contributions are two-fold. First, we provide a new, general technique for stating security guarantees that degrade gracefully and which could not be expressed with existing formalisms. Our method is simple, does not require new security definitions, and can be carried out in any simulation-based security framework (thus providing composability). Second, we apply our approach to revisit the analysis of password-based message authentication and of password-based (symmetric) encryption (PBE), investigating whether they provide strong per-session guarantees. In the case of PBE, one would intuitively expect a weak form of confidentiality, where a transmitted message only leaks to the adversary once the underlying password is guessed. Indeed, we show that PBE does achieve this weak confidentiality if an upper-bound on the number of adversarial password-guessing queries is known in advance for each session. 
However, such local restrictions appear to be questionable since we show that standard domain separation techniques employed in password-based cryptography, such as salting, can only provide global restrictions on the number of adversarial password-guessing queries. Quite surprisingly, we show that in this more realistic scenario the desired per-session confidentiality is unachievable.
Last updated:  2017-02-23
PrAd: Enabling Privacy-Aware Location based Advertising
Hung Dang, Ee-Chien Chang
Smart phones and mobile devices have become more and more ubiquitous recently. This ubiquity creates an opportunity for mobile advertising, especially location-based advertising, to develop into a very promising market. In many location-based advertising services, it is implied that service providers obtain the actual locations of users in order to serve relevant advertisements near users' current locations. However, this practice raises a significant privacy concern, as various private information about a user can be inferred from her locations and trajectories. In this work, we propose PrAd, a location-based advertising model that respects users' location privacy, i.e., it never reveals their locations to any untrusted party. Our solution builds on several state-of-the-art privacy-preserving techniques such as data obfuscation, space encoding and private information retrieval (PIR). In particular, we introduce algorithmic modifications to an existing hardware-based PIR technique to make it more practical and thus suitable for real-time applications. Moreover, PrAd enables a correct billing mechanism among the involved parties without revealing any individual's sensitive information. Finally, we confirm the effectiveness of the proposed framework by evaluating its performance on a real-world dataset.
Last updated:  2016-02-19
Sanitization of FHE Ciphertexts
Léo Ducas, Damien Stehle
By definition, fully homomorphic encryption (FHE) schemes support homomorphic decryption, and all known FHE constructions are bootstrapped from a Somewhat Homomorphic Encryption (SHE) scheme via this technique. Additionally, when a public key is provided, ciphertexts are also re-randomizable, e.g., by adding fresh encryptions of 0. From these two operations we devise an algorithm to sanitize a ciphertext by making its distribution canonical. In particular, the distribution of the ciphertext does not depend on the circuit that led to it via homomorphic evaluation, thus providing circuit privacy in the honest-but-curious model. Unlike the previous approach based on noise flooding, our approach does not significantly degrade the security/efficiency trade-off of the underlying FHE. The technique can be applied to all lattice-based FHE schemes proposed so far, without substantially affecting their concrete parameters.
Last updated:  2016-08-12
ZKBoo: Faster Zero-Knowledge for Boolean Circuits
Irene Giacomelli, Jesper Madsen, Claudio Orlandi
In this paper we describe ZKBoo, a proposal for practically efficient zero-knowledge arguments especially tailored for Boolean circuits, and report on a proof-of-concept implementation. As a highlight, we can generate (resp. verify) a non-interactive proof for the SHA-1 circuit in approximately 13ms (resp. 5ms), with a proof size of 444KB. Our techniques are based on the “MPC-in-the-head” approach to zero-knowledge of Ishai et al. (IKOS), which has been successfully used to achieve significant asymptotic improvements. Our contributions include: 1) a thorough analysis of the different variants of IKOS, which highlights their pros and cons for practically relevant soundness parameters; 2) a generalization and simplification of their approach, which leads to faster Sigma-protocols (that can be made non-interactive using the Fiat-Shamir heuristic) for statements of the form “I know x such that y = f(x)” (where f is a circuit and y a public value); 3) a case study, where we provide explicit protocols, implementations and benchmarking of zero-knowledge protocols for the SHA-1 and SHA-256 circuits.
Last updated:  2016-02-20
New Negative Results on Differing-Inputs Obfuscation
Mihir Bellare, Igors Stepanovs, Brent Waters
We provide the following negative results for differing-inputs obfuscation (diO): (1) if sub-exponentially secure one-way functions exist, then sub-exponentially secure diO for TMs does not exist; (2) if, in addition, sub-exponentially secure iO exists, then polynomially secure diO for TMs does not exist.
Last updated:  2020-02-22
Revisiting Structure Graphs: Applications to CBC-MAC and EMAC
Ashwin Jha, Mridul Nandi
In Crypto'05, Bellare et al. proved an $O(\ell q^2 /2^n)$ bound for the PRF (pseudorandom function) security of the CBC-MAC based on an $n$-bit random permutation $\Pi$, provided $\ell < 2^{n/3}$. Here an adversary can make at most $q$ prefix-free queries, each having at most $\ell$ many ``blocks'' (elements of $\{0,1\}^n$). In the same paper an $O(\ell^{o(1)} q^2 /2^n)$ bound for EMAC (or encrypted CBC-MAC) was proved, provided $\ell < 2^{n/4}$. Both proofs are based on {\bf structure graphs} representing all collisions among ``intermediate inputs'' to $\Pi$ during the computation of CBC. The problem of bounding the PRF-advantage is shown to reduce to bounding the number of structure graphs satisfying certain collision patterns. In the present paper, we show that Lemma 10 in the Crypto '05 paper, stating an important result on structure graphs, is incorrect. This is due to the fact that the authors overlooked certain structure graphs. This invalidates the proofs of the PRF bounds. In ICALP '06, Pietrzak improved the bound for EMAC by showing a tight bound $O(q^2/2^n)$ under the restriction that $\ell < 2^{n/8}$. As he used the same flawed lemma, this proof also becomes invalid. In this paper, we have revised and sometimes simplified these proofs. We revisit structure graphs in a slightly different mathematical language and provide a complete characterization of certain types of structure graphs. Using this characterization, we show that the PRF security of CBC-MAC is about $\sigma q /2^n$ provided $\ell < 2^{n/3}$, where $\sigma$ is the total number of blocks in all queries. We also recover the tight bound for the PRF security of EMAC with a much relaxed constraint ($\ell < 2^{n/4}$) than the original ($\ell < 2^{n/8}$).
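For concreteness, plain CBC-MAC over n-bit blocks is y_0 = 0, y_i = Π(y_{i-1} ⊕ m_i), with tag y_ℓ; the values y_{i-1} ⊕ m_i are exactly the "intermediate inputs" whose collisions structure graphs track. A toy sketch with a 16-bit random permutation standing in for Π also exhibits the length-extension identity that forces the prefix-free restriction on queries:

```python
import random

N = 16                       # toy block size in bits (illustration only)
_rng = random.Random(2016)
_pi = list(range(1 << N))    # Pi: one fixed random permutation of n-bit values
_rng.shuffle(_pi)

def cbc_mac(blocks):
    """Plain CBC-MAC: y_0 = 0, y_i = Pi(y_{i-1} XOR m_i); the tag is y_l.
    The values y_{i-1} XOR m_i are the 'intermediate inputs' whose
    collisions structure graphs account for."""
    y = 0
    for m in blocks:
        y = _pi[y ^ m]
    return y
```

The identity cbc_mac(M || [tag(M) XOR m]) == cbc_mac([m]) follows directly from the chaining, which is why the PRF bounds above restrict the adversary to prefix-free queries.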
Last updated:  2016-02-21
Polytopic Cryptanalysis
Tyge Tiessen
Standard differential cryptanalysis uses statistical dependencies between the difference of two plaintexts and the difference of the respective two ciphertexts to attack a cipher. Here we introduce polytopic cryptanalysis which considers interdependencies between larger sets of texts as they traverse through the cipher. We prove that the methodology of standard differential cryptanalysis can unambiguously be extended and transferred to the polytopic case including impossible differentials. We show that impossible polytopic transitions have generic advantages over impossible differentials. To demonstrate the practical relevance of the generalization, we present new low-data attacks on round-reduced DES and AES using impossible polytopic transitions that are able to compete with existing attacks, partially outperforming these.
Last updated:  2016-02-18
Pseudoentropy: Lower-bounds for Chain rules and Transformations
Krzysztof Pietrzak, Maciej Skorski
Computational notions of entropy have recently found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. The two main types of results which make computational notions so useful are (1) chain rules, which quantify by how much the computational entropy of a variable decreases if conditioned on some other variable, and (2) transformations, which quantify to what extent one type of entropy implies another. Such chain rules and transformations typically lose a significant amount in the quality of the entropy, and are the reason why applying these results yields rather weak quantitative security bounds. In this paper we prove, for the first time, lower bounds in this context, showing that existing results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it is hard to imagine how non-black-box reductions or adaptivity could be useful here). A variable $X$ has $k$ bits of HILL entropy of quality $(\epsilon,s)$ if there exists a variable $Y$ with $k$ bits of min-entropy which cannot be distinguished from $X$ with advantage $\epsilon$ by distinguishing circuits of size $s$. A weaker notion is Metric entropy, where we switch quantifiers and only require that for every distinguisher of size $s$ such a $Y$ exists. We first describe our result concerning transformations. By definition, HILL implies Metric without any loss in quality. Metric entropy often comes up in applications, but must be transformed to HILL for meaningful security guarantees. The best known result states that if a variable $X$ has $k$ bits of Metric entropy of quality $(\epsilon,s)$, then it has $k$ bits of HILL entropy with quality $(2\epsilon,s\cdot\epsilon^2)$. We show that this loss of a factor $\Omega(\epsilon^{-2})$ in circuit size is necessary.
In fact, we show the stronger result that this loss is already necessary when transforming so-called deterministic real-valued Metric entropy to randomized Boolean Metric entropy (both these variants of Metric entropy are implied by HILL without loss in quality). The chain rule for HILL entropy states that if $X$ has $k$ bits of HILL entropy of quality $(\epsilon,s)$, then for any variable $Z$ of length $m$, $X$ conditioned on $Z$ has $k-m$ bits of HILL entropy with quality $(\epsilon,s\cdot \epsilon^2/ 2^{m})$. We show that a loss of $\Omega(2^m/\epsilon)$ in circuit size is necessary here. Note that this still leaves a gap of $\epsilon$ between the known bound and our lower bound.
Last updated:  2016-02-18
A Subgradient Algorithm For Computational Distances and Applications to Cryptography
Maciej Skórski
The task of finding a constructive approximation in the computational distance, while simultaneously preserving additional constraints (referred to as "simulators"), appears as the key difficulty in problems related to complexity theory, cryptography and combinatorics. In this paper we develop a general framework to \emph{efficiently} prove results of this sort, based on \emph{subgradient-based optimization applied to computational distances}. This approach is simpler and more natural than the KL-projections already studied in this context (for example the uniform min-max theorem from CRYPTO'13), and may simultaneously lead to quantitatively better results. Some applications of our algorithm include: \begin{itemize} \item Fixing an erroneous boosting proof for simulating auxiliary inputs from TCC'13, and much better bounds for the EUROCRYPT'09 leakage-resilient stream cipher \item Deriving a unified proof for the Impagliazzo Hardcore Lemma, the Dense Model Theorem, and the Weak Szemeredi Theorem (CCC'09) \item Showing that "dense" leakages can be efficiently simulated, with significantly improved bounds \end{itemize} Interestingly, our algorithm can take advantage of small-variance assumptions imposed on distinguishers, which have been studied recently in the context of key derivation.
Last updated:  2016-11-26
Key Derivation for Squared-Friendly Applications: Lower Bounds
Maciej Skorski
Security of a cryptographic application is typically defined by a security game. The adversary, within certain resources, cannot win with probability much better than $0$ (for unpredictability applications, like one-way functions) or much better than $\frac{1}{2}$ (indistinguishability applications, for instance encryption schemes). In so-called \emph{squared-friendly applications} the winning probability of the adversary, for different values of the application's secret randomness, is not only close to $0$ or $\frac{1}{2}$ on average, but also concentrated in the sense that its second central moment is small. The class of squared-friendly applications, which contains all unpredictability applications and many indistinguishability applications, is particularly important in the context of key derivation. Barak et al. observed that for square-friendly applications one can beat the ``RT-bound'', extracting secure keys with significantly smaller entropy loss. In turn, Dodis and Yu showed that in squared-friendly applications one can directly use a ``weak'' key, which has only high entropy, as a secure key. In this paper we give sharp lower bounds on square security assuming security for ``weak'' keys. We show that \emph{any} application which is either (a) secure with weak keys or (b) allows for saving entropy in a key derived by hashing \emph{must} be square-friendly. Quantitatively, our lower bounds match the positive results of Dodis and Yu and of Barak et al. (TCC'13, CRYPTO'11); hence, they can be understood as a general characterization of squared-friendly applications. Whereas the positive results on squared-friendly applications were derived by one clever application of the Cauchy-Schwarz Inequality, for tight lower bounds we need more machinery. In our approach we use convex optimization techniques and some theory of circular matrices.
Last updated:  2017-02-24
More Efficient Constant-Round Multi-Party Computation from BMR and SHE
Yehuda Lindell, Nigel P. Smart, Eduardo Soria-Vazquez
We present a multi-party computation protocol for the case of a dishonest majority which has very low round complexity. Our protocol sits philosophically between Gentry's Fully Homomorphic Encryption based protocol and the SPDZ-BMR protocol of Lindell et al. (CRYPTO 2015), and avoids various inefficiencies of both. Compared to Gentry's protocol, we only require Somewhat Homomorphic Encryption (SHE). Compared to the SPDZ-BMR protocol, we require only quadratic complexity in the number of players (as opposed to cubic), we have fewer rounds, and we require fewer proofs of correctness of ciphertexts. Additionally, we present a variant of our protocol which trades the depth of the garbling circuit (computed using SHE) for some more multiplications in the offline and online phases.
Last updated:  2016-02-18
Cryptanalysis of Multi-Prime $\Phi$-Hiding Assumption
Jun Xu, Lei Hu, Santanu Sarkar, Xiaona Zhang, Zhangjie Huang, Liqiang Peng
In Crypto 2010, Kiltz, O'Neill and Smith used an $m$-prime RSA modulus $N$ with $m\geq 3$ for constructing lossy RSA. The security of the proposal is based on the Multi-Prime $\Phi$-Hiding Assumption. In this paper, we propose a heuristic algorithm based on the Herrmann-May lattice method (Asiacrypt 2008) to solve the Multi-Prime $\Phi$-Hiding Problem when the prime $e>N^{\frac{2}{3m}}$. Further, by combining it with mixed lattice techniques, we give an improved heuristic algorithm that solves this problem when the prime $e>N^{\frac{2}{3m}-\frac{1}{4m^2}}$. Both results are verified by our experiments, and our bounds are better than those of existing works.
Last updated:  2018-03-16
Highly-Efficient Fully-Anonymous Dynamic Group Signatures
David Derler, Daniel Slamanig
Group signatures are a central tool in privacy-enhancing cryptography, allowing members of a group to anonymously produce signatures on behalf of the group. Consequently, they are an attractive means to implement privacy-friendly authentication mechanisms. Ideally, group signatures are dynamic and thus allow new members to be enrolled dynamically and concurrently. For such schemes, Bellare et al. (CT-RSA'05) proposed the currently strongest security model (the BSZ model), which in particular ensures desirable anonymity guarantees. Given the prevalence of resource asymmetry in current computing scenarios, i.e., a multitude of (highly) resource-constrained devices communicating with powerful (cloud-powered) services, it is of utmost importance to have group signatures that are highly efficient and deployable in such scenarios. Satisfying these requirements in particular means that the signing (client) operations are lightweight. We propose a novel, generic approach to constructing dynamic group signature schemes that are provably secure in the BSZ model and particularly suitable for resource-constrained devices. Our results are interesting for various reasons: we can prove our construction secure without requiring random oracles. Moreover, when opting for an instantiation in the random oracle model (ROM), the resulting scheme is extremely efficient and outperforms the fastest constructions providing anonymity in the BSZ model - which also rely on the ROM - known to date. Regarding constructions providing a weaker anonymity notion than BSZ, we surprisingly outperform the popular short BBS group signature scheme (CRYPTO'04; also proven secure in the ROM) and thereby even obtain shorter signatures. We provide a rigorous comparison with existing schemes that highlights the benefits of our scheme. On a more theoretical side, we provide the first construction following the "without encryption" paradigm introduced by Bichsel et al. (SCN'10) in the strong BSZ model.
Last updated:  2016-02-18
Differentially Private Password Frequency Lists
Jeremiah Blocki, Anupam Datta, Joseph Bonneau
Given a dataset of user-chosen passwords, the frequency list reveals the frequency of each unique password. We present a novel mechanism for releasing perturbed password frequency lists with rigorous security, efficiency, and distortion guarantees. Specifically, our mechanism is based on a novel algorithm for sampling that enables an efficient implementation of the exponential mechanism for differential privacy (naïve sampling is exponential time). It provides the security guarantee that an adversary will not be able to use this perturbed frequency list to learn anything of significance about any individual user's password even if the adversary already possesses a wealth of background knowledge about the users in the dataset. We prove that our mechanism introduces minimal distortion, thus ensuring that the released frequency list is close to the actual list. Further, we empirically demonstrate, using the now-canonical password dataset leaked from RockYou, that the mechanism works well in practice: as the differential privacy parameter $\epsilon$ varies from $8$ to $0.002$ (smaller $\epsilon$ implies higher security), the normalized distortion coefficient (representing the distance between the released and actual password frequency list divided by the number of users $N$) varies from $8.8\times10^{-7}$ to $1.9\times 10^{-3}$. Given this appealing combination of security and distortion guarantees, our mechanism enables organizations to publish perturbed password frequency lists. This can facilitate new research comparing password security between populations and evaluating password improvement approaches. To this end, we have collaborated with Yahoo! to use our differentially private mechanism to publicly release a corpus of 50 password frequency lists representing approximately 70 million Yahoo! users. This dataset is now the largest password frequency corpus available. 
Using our perturbed dataset we are able to closely replicate the original published analysis of this data.
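For a small domain, the exponential mechanism that the paper implements efficiently can be sampled naively: output candidate c with probability proportional to exp(ε·score(c)/2). The paper's contribution is doing this over the exponentially large space of frequency lists; the naive sketch below (function names ours) only conveys the mechanism itself.

```python
import math
import random

def exponential_mechanism(candidates, score, epsilon, rng):
    """Naive exponential mechanism: sample c with probability proportional
    to exp(epsilon * score(c) / 2). Linear time in the candidate set, so
    only usable for small domains; the paper's sampler handles the
    exponentially large domain of perturbed frequency lists."""
    weights = [math.exp(epsilon * score(c) / 2.0) for c in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]   # guard against floating-point rounding
```

Smaller ε flattens the weights (more privacy, more distortion); larger ε concentrates the output on the highest-scoring candidates.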
Last updated:  2016-02-18
Attacks and parameter choices in HIMMO
Oscar Garcia-Morchon, Ronald Rietman, Ludo Tolhuizen, Jose-Luis Torre-Arce, Moon Sung Lee, Domingo Gomez-Perez, Jaime Gutierrez, Berry Schoenmakers
The HIMMO scheme has been introduced as a lightweight collusion-resistant key pre-distribution scheme, with excellent efficiency in terms of bandwidth, energy consumption and computation time. As its cryptanalysis relies on lattice techniques, HIMMO is also an interesting quantum-safe candidate. Unlike the schemes by Blom, by Matsumoto and Imai, and by Blundo {\em et al.}, which break down once the number of colluding nodes exceeds a given threshold, it aims at tolerating any number of colluding nodes. In 2015, a contest for the verification of the scheme was held. During the contest, a method was developed to guess a key by finding an approximate solution of one of the problems underlying the scheme. This attack involves finding a short vector in a lattice of dimension linear in a system parameter $\alpha$, and allowed key recovery for several challenges. Thwarting this attack by increasing $\alpha$ would lead to a significant performance degradation, as CPU and memory requirements for the implementation of the scheme scale quadratically in $\alpha$. This paper describes a generalization of the HIMMO parameters that allows configuring the scheme such that both its performance and the dimension of the lattice involved in the attack grow linearly in $\alpha$. Two attacks inspired by the one developed in the contest are described, and the impact of those attacks for different parameter choices is discussed. Parameter choices are described that thwart existing attacks while enabling high-performance implementations of the scheme.
Last updated:  2016-02-25
Pseudorandom Functions in Almost Constant Depth from Low-Noise LPN
Yu Yu, John Steinberger
Pseudorandom functions (PRFs) play a central role in symmetric cryptography. While in principle they can be built from any one-way function by going through the generic HILL (SICOMP 1999) and GGM (JACM 1986) transforms, some of these steps are inherently sequential and far from practical. Naor, Reingold (FOCS 1997) and Rosen (SICOMP 2002) gave parallelizable constructions of PRFs in NC\(^2\) and TC\(^0\) based on concrete number-theoretic assumptions such as DDH, RSA, and factoring. Banerjee, Peikert, and Rosen (Eurocrypt 2012) constructed relatively more efficient PRFs in NC\(^1\) and TC\(^0\) based on ``learning with errors'' (LWE) for a certain range of parameters. It remains an open problem whether parallelizable PRFs can be based on the ``learning parity with noise'' (LPN) problem, both for theoretical interest and for efficiency reasons (as the many modular multiplications and additions in LWE would then be simplified to AND and XOR operations under LPN). In this paper, we give more efficient and parallelizable constructions of randomized PRFs from LPN under noise rate \(n^{-c}\) (for any constant \(0<c<1\)); they can be implemented with a family of polynomial-size circuits with unbounded fan-in AND, OR and XOR gates of depth \(\omega(1)\), where \(\omega(1)\) can be any small super-constant (e.g., \(\log\log\log{n}\) or even less). Our work complements the lower-bound results of Razborov and Rudich (STOC 1994) that PRFs of beyond quasi-polynomial security are not contained in AC\(^0\)(MOD\(_2\)), i.e., the class of polynomial-size, constant-depth circuit families with unbounded fan-in AND, OR, and XOR gates. Furthermore, our constructions lift security by exploiting the redundancy of low-noise LPN.
We show that in addition to parallelizability (in almost constant depth) the PRF enjoys either of (or any tradeoff between) the following: (1) A PRF on a weak key of sublinear entropy (or equivalently, a uniform key that leaks any \((1 - o(1))\)-fraction) has comparable security to the underlying LPN on a linear-size secret. (2) A PRF with key length \(\lambda\) can have security up to \(2^{O(\lambda/\log\lambda)}\) against adversaries making up to a certain super-polynomial number of queries, which goes much beyond the security level of the underlying low-noise LPN.
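The sequential GGM transform mentioned above can be sketched in a few lines: a length-doubling PRG defines a binary tree, and evaluating the PRF means walking from the root down one level per input bit. The sketch below instantiates the PRG with SHA-256 purely for illustration (an assumption, not the paper's LPN-based construction; the paper's point is precisely to avoid this depth-of-the-input sequential structure):

```python
import hashlib

def prg(seed: bytes):
    """Toy length-doubling PRG: a 32-byte seed expands to two 32-byte
    halves. SHA-256 with domain-separation tags is an illustrative
    stand-in for a real PRG, not a proven one."""
    left = hashlib.sha256(b"L" + seed).digest()
    right = hashlib.sha256(b"R" + seed).digest()
    return left, right

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM PRF evaluation: descend the binary tree selected by the
    input bits. Each step depends on the previous one, so the depth
    grows linearly with the input length."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = right if bit == "1" else left
    return node
```

The `for` loop makes the sequential bottleneck explicit: each PRG call must finish before the next one starts.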
Last updated:  2016-06-27
On Garbling Schemes with and without Privacy
Carsten Baum
Garbling schemes allow the construction of two-party secure function evaluation (SFE) with security against cheating parties. To achieve this goal, one party (the Garbler) sends multiple encodings of a circuit (called Garbled Circuits) to the other party (the Evaluator) and opens a subset of these encodings, showing that they were generated honestly. For the remaining garbled circuits, the garbler sends encodings of the inputs. This allows the evaluator to compute the result of the function, while the encoding ensures that no other information beyond the output is revealed. To achieve active security against a malicious adversary, the garbler in current protocols has to send O(s) circuits (where s is the statistical security parameter). In this work we show that, for a certain class of circuits, one can reduce this overhead. We consider circuits where sub-circuits depend only on one party's input. Intuitively, one can evaluate these sub-circuits using only one circuit and privacy-free garbling. This has applications to e.g. input validation in SFE and allows the construction of more efficient SFE protocols in such cases. We additionally show how to integrate our solution with the SFE protocol of Frederiksen et al. (FJN14), thus reducing the overhead even further.
Last updated:  2016-02-18
Improved Integral and Zero-correlation Linear Cryptanalysis of Reduced-round CLEFIA Block Cipher
Wentan Yi, Shaozhen Chen
CLEFIA is a block cipher developed by Sony Corporation in 2007. It is a recommended cipher of CRYPTREC, and has been adopted as an ISO/IEC international standard in lightweight cryptography. In this paper, some new 9-round zero-correlation linear distinguishers of CLEFIA are constructed with the input masks and output masks being independent, which allow multiple zero-correlation linear attacks on 14/15-round CLEFIA-192/256 with the partial sum technique. Furthermore, the relations between integral distinguishers and zero-correlation linear approximations are improved, and some new integral distinguishers over 9 rounds are deduced from zero-correlation linear approximations. By using these integral distinguishers and the partial sum technique, the previous integral results on CLEFIA are improved. The two results have either one more round or lower time complexity than previous attacks by means of integral and zero-correlation linear cryptanalysis.
Last updated:  2016-02-18
Isogeny-based Quantum-resistant Undeniable Blind Signature Scheme
Srinath M. S., V. Chandrasekaran
In this paper, we propose an Undeniable Blind Signature scheme (UBSS) based on isogenies between supersingular elliptic curves. The proposed UBSS is an extension of the Jao-Soukharev undeniable signature scheme. We formalize the notion of a UBSS by giving a formal definition. We then study its properties along with the pros and cons. Based on this, we provide a couple of its applications. We then state the isogeny problems in a more general form and discuss their computational hardness. Finally, we prove that the proposed scheme is secure in the presence of a quantum adversary under certain assumptions.
Last updated:  2016-06-07
Annihilation Attacks for Multilinear Maps: Cryptanalysis of Indistinguishability Obfuscation over GGH13
Eric Miles, Amit Sahai, Mark Zhandry
In this work, we present a new class of polynomial-time attacks on the original multilinear maps of Garg, Gentry, and Halevi (2013). Previous polynomial-time attacks on GGH13 were “zeroizing” attacks that generally required the availability of low-level encodings of zero. Most significantly, such zeroizing attacks were not applicable to candidate indistinguishability obfuscation (iO) schemes. iO has been the subject of intense study. To address this gap, we introduce annihilation attacks, which attack multilinear maps using non-linear polynomials. Annihilation attacks can work in situations where there are no low-level encodings of zero. Using annihilation attacks, we give the first polynomial-time cryptanalysis of candidate iO schemes over GGH13. More specifically, we exhibit two simple programs that are functionally equivalent, and show how to efficiently distinguish between the obfuscations of these two programs. Given the enormous applicability of iO, it is important to devise iO schemes that can avoid attack. We discuss some initial directions for safeguarding against annihilation attacks.
Last updated:  2016-05-07
Improved Progressive BKZ Algorithms and their Precise Cost Estimation by Sharp Simulator
Yoshinori Aono, Yuntao Wang, Takuya Hayashi, Tsuyoshi Takagi
In this paper, we investigate a variant of the BKZ algorithm, called progressive BKZ, which performs BKZ reductions by starting with a small blocksize and gradually switching to larger blocks as the process continues. We discuss techniques to accelerate the progressive BKZ algorithm by optimizing the following parameters: the blocksize, the search radius and pruning probability of the local enumeration algorithm, and the constant in the geometric series assumption (GSA). We then propose a simulator for predicting the length of the Gram-Schmidt basis obtained from the BKZ reduction. We also present a model for estimating the computational cost of the proposed progressive BKZ by considering the efficient implementation of the local enumeration algorithm and the LLL algorithm. Finally, we compare the cost of the proposed progressive BKZ with that of other algorithms using instances from the Darmstadt SVP Challenge. The proposed algorithm is approximately 50 times faster than BKZ 2.0 (proposed by Chen-Nguyen) for solving the SVP Challenge up to 160 dimensions.
Last updated:  2016-08-24
Designing Proof of Human-work Puzzles for Cryptocurrency and Beyond
Jeremiah Blocki, Hong-Sheng Zhou
We introduce the novel notion of a Proof of Human-work (PoH) and present the first distributed consensus protocol from hard Artificial Intelligence problems. As the name suggests, a PoH is a proof that a {\em human} invested a moderate amount of effort to solve some challenge. A PoH puzzle should be moderately hard for a human to solve. However, a PoH puzzle must be hard for a computer to solve, including the computer that generated the puzzle, without sufficient assistance from a human. By contrast, CAPTCHAs are only difficult for other computers to solve --- not for the computer that generated the puzzle. We also require that a PoH be publicly verifiable by a computer without any human assistance and without ever interacting with the agent who generated the proof of human-work. We show how to construct PoH puzzles from indistinguishability obfuscation and from CAPTCHAs. We motivate our ideas with two applications: HumanCoin and passwords. We use PoH puzzles to construct HumanCoin, the first cryptocurrency system with human miners. Second, we use proofs of human work to develop a password authentication scheme which provably protects users against offline attacks.
Last updated:  2016-02-16
Highly-Efficient and Composable Password-Protected Secret Sharing (Or: How to Protect Your Bitcoin Wallet Online)
Stanislaw Jarecki, Aggelos Kiayias, Hugo Krawczyk, Jiayu Xu
Password-Protected Secret Sharing (PPSS) is a central primitive introduced by Bagherzandi et al [BJSL10] which allows a user to store a secret among n servers such that the user can later reconstruct the secret with the sole possession of a single password by contacting t+1 servers for t<n. At the same time, an attacker breaking into t of these servers - and controlling all communication channels - learns nothing about the secret (or the password). Thus, PPSS schemes are ideal for on-line storing of valuable secrets when retrieval solely relies on a memorizable password. We show the most efficient PPSS scheme to date (and its implied Threshold-PAKE scheme), which is optimal in round communication as in Jarecki et al [JKK14] but improves computation and communication complexity over that scheme, requiring a single per-server exponentiation for the client and a single exponentiation for the server. As with the schemes from [JKK14] and Camenisch et al [CLLN14], we do not require secure channels or PKI other than in the initialization stage. We prove the security of our PPSS scheme in the Universally Composable (UC) model. For this we present a UC definition of PPSS that relaxes the UC formalism of [CLLN14] in a way that enables more efficient PPSS schemes (by dispensing with the need to extract the user's password in the simulation), and present a UC-based definition of Oblivious PRF (OPRF) that is more general than the (Verifiable) OPRF definition from [JKK14] and is also crucial for enabling our performance optimization.
Last updated:  2016-07-08
On upper bounds for algebraic degrees of APN functions
Lilya Budaghyan, Claude Carlet, Tor Helleseth, Nian Li, Bo Sun
We study the problem of existence of APN functions of algebraic degree $n$ over $\ftwon$. We characterize such functions by means of derivatives and power moments of the Walsh transform. We deduce some non-existence results which mean, in particular, that for most of the known APN functions $F$ over $\ftwon$ the function $x^{2^n-1}+F(x)$ is not APN, and changing a value of $F$ in a single point results in non-APN functions.
Last updated:  2016-03-01
Hash-Function based PRFs: AMAC and its Multi-User Security
Mihir Bellare, Daniel J. Bernstein, Stefano Tessaro
AMAC is a simple and fast candidate construction of a PRF from an MD-style hash function, which applies the keyed hash function and then a cheap, un-keyed output transform such as truncation. Spurred by its use in the widely-deployed Ed25519 signature scheme, this paper investigates the provable PRF security of AMAC to deliver the following three-fold message: (1) First, we prove PRF security of AMAC. (2) Second, we show that AMAC has a quite unique and attractive feature, namely that its multi-user security is essentially as good as its single-user security, and in particular superior in some settings to that of competitors. (3) Third, AMAC is technically interesting, with its security and analysis intrinsically linked to the security of the compression function in the presence of leakage.
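The shape of the construction described above (keyed hash, then a cheap un-keyed output transform) can be sketched as follows. The hash choice, key placement and output length here are illustrative assumptions, not the exact AMAC or Ed25519 parameters:

```python
import hashlib

def amac_sketch(key: bytes, msg: bytes, out_len: int = 16) -> bytes:
    """AMAC-style PRF sketch: hash the keyed input with an MD-style
    hash, then apply a cheap, un-keyed output transform (truncation).
    Parameters are illustrative, not the deployed ones."""
    digest = hashlib.sha256(key + msg).digest()  # keyed hash step
    return digest[:out_len]                      # un-keyed truncation step
```

Note how the output transform involves no key material at all; the paper's analysis ties the security of exactly this pattern to leakage-resilience of the compression function.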
Last updated:  2016-02-16
On low degree polynomials in 2-round AES
Igor Semaev
Recent observations on polynomial structures of AES-like round functions are analysed in this note. We present computational evidence that input/output bits of AES-like 2-round transforms of up to $40$ bits, constructed with $8$-bit AES S-boxes, do not satisfy any relations of degree $3$. So it is very unlikely that the actual AES 2-round transform admits any relations of degree $\leq 3$.
Last updated:  2016-02-16
Adaptively Secure Identity-Based Encryption from Lattices with Asymptotically Shorter Public Parameters
Shota Yamada
In this paper, we present two new adaptively secure identity-based encryption (IBE) schemes from lattices. The size of the public parameters, ciphertexts, and private keys are $\tilde{O}(n^2 \kappa^{1/d})$, $\tilde{O}(n)$, and $\tilde{O}(n)$ respectively. Here, $n$ is the security parameter, $\kappa$ is the length of the identity, and $d$ is a flexible constant that can be set arbitrarily (but will affect the reduction cost). Ignoring the poly-logarithmic factors hidden in the asymptotic notation, our schemes achieve the best efficiency among existing adaptively secure IBE schemes from lattices. In more detail, our first scheme is anonymous, but proven secure under the LWE assumption with approximation factor $n^{\omega(1)}$. Our second scheme is not anonymous, but proven adaptively secure assuming the LWE assumption for all polynomial approximation factors. As a side result, based on a similar idea, we construct an attribute-based encryption scheme for branching programs that simultaneously satisfies the following properties for the first time: Our scheme achieves compact secret keys, the security is proven under the LWE assumption with polynomial approximation factors, and the scheme can deal with unbounded length branching programs.
Last updated:  2016-06-09
An Algorithm for NTRU Problems and Cryptanalysis of the GGH Multilinear Map without a Low Level Encoding of Zero
Jung Hee Cheon, Jinhyuck Jeong, Changmin Lee
Let $f$ and $g$ be polynomials of bounded Euclidean norm in the ring $\mathbb{Z}[X]/\langle X^n+1\rangle$. Given the polynomial $[f/g]_q \in \mathbb{Z}_q[X]/\langle X^n+1\rangle$, the NTRU problem is to find $a, b \in \mathbb{Z}[X]/\langle X^n+1\rangle$ with a small Euclidean norm such that $[a/b]_q = [f/g]_q$. We propose an algorithm to solve the NTRU problem, which runs in $2^{O(\log^{2} \lambda)}$ time when $\|g\|$, $\|f\|$, and $\|g^{-1}\|$ are within some range. The main technique of our algorithm is the reduction of a problem on a field to one in a subfield. Recently, the GGH scheme, the first candidate of an (approximate) multilinear map, was found to be insecure by the Hu--Jia attack using low-level encodings of zero, but no polynomial-time attack was known without them. In the GGH scheme without low-level encodings of zero, our algorithm can be directly applied to attack this scheme if we have some top-level encodings of zero and a known pair of plaintext and ciphertext. Using our algorithm, we can construct a level-0 encoding of zero and utilize it to break this scheme in time quasi-polynomial in its security parameter, using the parameters suggested by GGH13.
Last updated:  2016-02-16
A new algorithm for residue multiplication modulo $2^{521}-1$
Shoukat Ali, Murat Cenk
We present a new algorithm for residue multiplication modulo the Mersenne prime $2^{521}-1$ based on the Toeplitz matrix-vector product. For this modulus, our algorithm yields better results in terms of the total number of operations than the previously known best algorithm of R. Granger and M. Scott presented in Public Key Cryptography - PKC 2015. Although our algorithm uses nine more multiplications than the Granger-Scott multiplication algorithm, its total number of additions is forty-two less. Even if one takes a ratio of $1:4$ between multiplication and addition, our algorithm still has a smaller total number of operations. We also present the test results of both multiplication algorithms on an Intel Sandy Bridge Core i5-2410M machine, with and without the optimization option in GCC.
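The arithmetic setting can be illustrated with the standard folding trick that makes this modulus attractive: since $2^{521} \equiv 1 \pmod{2^{521}-1}$, reduction needs only shifts and additions. This is the baseline sketch of reduction and multiplication for this prime, not the paper's Toeplitz matrix-vector algorithm:

```python
P521 = (1 << 521) - 1  # the Mersenne prime 2^521 - 1

def reduce_mod_m521(x: int) -> int:
    """Reduce x modulo 2^521 - 1 using only shifts and adds:
    because 2^521 ≡ 1 (mod P521), the bits above position 521
    simply fold back onto the low bits."""
    while x >> 521:
        x = (x & P521) + (x >> 521)
    if x == P521:  # canonicalise the single remaining edge case
        x = 0
    return x

def mul_mod_m521(a: int, b: int) -> int:
    """Multiply then fold. The paper's Toeplitz matrix-vector
    formulation reorganises the product itself, trading
    multiplications for additions before this reduction step."""
    return reduce_mod_m521(a * b)
```

The folding loop runs at most twice for a product of two reduced operands, which is why Mersenne primes are popular moduli in ECC implementations.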
Last updated:  2016-10-04
Rate-1, Linear Time and Additively Homomorphic UC Commitments
Ignacio Cascudo, Ivan Damgård, Bernardo David, Nico Döttling, Jesper Buus Nielsen
We propose the first UC commitment scheme for binary strings with the optimal properties of rate approaching 1 and linear time (in the amortised sense, using a small number of seed OTs). On top of this, the scheme is additively homomorphic, which allows for applications to maliciously secure 2-party computation. As tools for obtaining this, we make three contributions of independent interest: we construct the first (binary) linear time encodable codes with non-trivial distance and rate approaching 1, we construct the first almost universal hash function with small seed that can be computed in linear time, and we introduce a new primitive called interactive proximity testing that can be used to verify whether a string is close to a given linear code.
Last updated:  2016-02-16
Automatic Expectation and Variance Computing for Attacks on Feistel Schemes
Emmanuel Volte, Valérie Nachef, Nicolas Marrière
There are many kinds of attacks that can be mounted on block ciphers: differential attacks, impossible differential attacks, truncated differential attacks, boomerang attacks. We consider generic differential attacks used as distinguishers for various types of Feistel ciphers: they make it possible to distinguish a random permutation from a permutation generated by the cipher. These attacks are based on differences between the expectations of random variables defined by relations on the inputs and outputs of the ciphers. Sometimes, one has to use the value of the variance as well. In this paper, we will provide a tool that computes the exact values of these expectations and variances. We first explain thoroughly how these computations can be carried out by counting the number of solutions of linear systems with equalities and non-equalities. Then we provide the first applications of this tool. For example, it enabled the discovery of a new geometry in 4-point attacks. It also explains some phenomena that can appear in simulations when the inputs and outputs have a small number of bits.
Last updated:  2016-02-15
Cryptanalysis of the New CLT Multilinear Map over the Integers
Jung Hee Cheon, Pierre-Alain Fouque, Changmin Lee, Brice Minaud, Hansol Ryu
Multilinear maps serve as a basis for a wide range of cryptographic applications. The first candidate construction of multilinear maps was proposed by Garg, Gentry, and Halevi in 2013, and soon afterwards, another construction was suggested by Coron, Lepoint, and Tibouchi (CLT13), which works over the integers. However, both of these were found to be insecure in the face of so-called zeroizing attacks, by Hu and Jia, and by Cheon, Han, Lee, Ryu and Stehlé. To improve on CLT13, Coron, Lepoint, and Tibouchi proposed another candidate construction of multilinear maps over the integers at Crypto 2015 (CLT15). This article presents two polynomial attacks on the CLT15 multilinear map, which share ideas similar to the cryptanalysis of CLT13. Our attacks allow recovery of all secret parameters in time polynomial in the security parameter, and lead to a full break of the CLT15 multilinear map for virtually all applications.
Last updated:  2016-08-13
More Practical and Secure History-Independent Hash Tables
Michael T. Goodrich, Evgenios M. Kornaropoulos, Michael Mitzenmacher, Roberto Tamassia
Direct-recording electronic (DRE) voting systems have been used in several countries including the United States, India, and the Netherlands, to name a few. In the majority of those cases, researchers discovered several security flaws in the implementation and architecture of the voting system. A common property of the machines inspected was that the votes were stored sequentially according to the time they were cast, which allows an attacker to break the anonymity of the voters using some side-channel information. Subsequent research (Molnar et al. Oakland’06, Bethencourt et al. NDSS’07, Moran et al. ICALP’07) pointed out the connection between vote storage and history-independence, a privacy property that guarantees that the system does not reveal the sequence of operations that led to the current state. There are two flavors of history-independence. In a weakly history-independent data structure, every possible sequence of operations consistent with the current set of items is equally likely to have occurred. In a strongly history-independent data structure, items must be stored in a canonical way, i.e., for any set of items, there is only one possible memory representation. Strong history-independence implies weak history-independence but considerably constrains the design choices of the data structures. In this work, we present and analyze an efficient hash table data structure that simultaneously achieves the following properties:
• It is based on the classic linear probing collision-handling scheme.
• It is weakly history-independent.
• It is secure against collision-timing attacks. That is, we consider adversaries that can measure the time for an update operation, but cannot observe data values, and we show that those adversaries cannot learn information about the items in the table.
• All operations are significantly faster in practice (in particular, almost 2x faster for high load factors) than those of the commonly used strongly history-independent linear probing method proposed by Blelloch and Golovin (FOCS’07), which is not secure against collision-timing attacks.
The first property is desirable for ease of implementation. The second property is desirable for the sake of maximizing privacy in scenarios where the memory of the hash table is exposed, such as post-election audit of DRE voting machines or direct memory access (DMA) attacks. The third property is desirable for maximizing privacy against adversaries who do not have access to memory but nevertheless are capable of accurately measuring the execution times of data structure operations. To our knowledge, our hash table construction is the first data structure that combines history-independence and protection against a form of timing attacks.
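For reference, the classic linear probing scheme the construction builds on can be sketched as below. This is only the textbook baseline: the paper's variant additionally randomises which colliding item occupies each slot to obtain weak history-independence, which this sketch does not attempt:

```python
class LinearProbingTable:
    """Textbook linear probing: on collision, step to the next slot
    until an empty slot or the matching key is found. Deletion and
    resizing are omitted to keep the sketch minimal."""

    def __init__(self, capacity: int = 8):
        self.slots = [None] * capacity  # each slot holds (key, value) or None

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # linear step on collision
        return i

    def insert(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None
```

The probe sequence is exactly what leaks timing information: the number of `while`-loop iterations depends on the cluster structure, which is what the paper's timing-hardened variant must hide.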
Last updated:  2016-02-15
On the nonlinearity of monotone Boolean functions
Claude Carlet
We first prove the truthfulness of a conjecture on the nonlinearity of monotone Boolean functions in even dimension, proposed in the recent paper ``Cryptographic properties of monotone Boolean functions", by D. Joyner, P. Stanica, D. Tang and the author, to appear in the Journal of Mathematical Cryptology. We then prove an upper bound on such nonlinearity, which is asymptotically much stronger than the conjectured upper bound and than the upper bound proved for odd dimension in the same paper. This bound exposes a deep weakness of monotone Boolean functions: they are too closely approximated by affine functions to be usable as nonlinear components in cryptographic applications. We deduce a necessary criterion to be satisfied by a Boolean (resp. vectorial) function for being nonlinear.
Last updated:  2016-08-24
Cryptanalysis of 6-round PRINCE using 2 Known Plaintexts
Shahram Rasoolzadeh, Håvard Raddum
In this paper we focus on the PRINCE block cipher reduced to 6 rounds, with two known plaintext/ciphertext pairs. We develop two attacks on 6-round PRINCE based on accelerated exhaustive search, one with negligible memory usage and one having moderate memory requirements. The time complexities for the two attacks are $2^{96.78}$ and $2^{88.85}$, respectively. The memory consumption of the second attack is less than 200MB and so is not a restricting factor in a real-world setting.
Last updated:  2016-02-15
New Attacks on the Concatenation and XOR Hash Combiners
Itai Dinur
We study the security of the concatenation combiner $H_1(M) \| H_2(M)$ for two independent iterated hash functions with $n$-bit outputs that are built using the Merkle-Damgård construction. In 2004 Joux showed that the concatenation combiner of hash functions with an $n$-bit internal state does not offer better collision and preimage resistance compared to a single strong $n$-bit hash function. On the other hand, the problem of devising second preimage attacks faster than $2^n$ against this combiner has remained open since 2005 when Kelsey and Schneier showed that a single Merkle-Damgård hash function does not offer optimal second preimage resistance for long messages. In this paper, we develop new algorithms for cryptanalysis of hash combiners and use them to devise the first second preimage attack on the concatenation combiner. The attack finds second preimages faster than $2^n$ for messages longer than $2^{2n/7}$ and has optimal complexity of $2^{3n/4}$. This shows that the concatenation of two Merkle-Damgård hash functions is not as strong as a single ideal hash function. Our methods are also applicable to other well-studied combiners, and we use them to devise a new preimage attack with complexity of $2^{2n/3}$ on the XOR combiner $H_1(M) \oplus H_2(M)$ of two Merkle-Damgård hash functions. This improves upon the attack by Leurent and Wang (presented at Eurocrypt 2015) whose complexity is $2^{5n/6}$ (but unlike our attack is also applicable to HAIFA hash functions). Our algorithms exploit properties of random mappings generated by fixing the message block input to the compression functions of $H_1$ and $H_2$. Such random mappings have been widely used in cryptanalysis, but we exploit them in new ways to attack hash function combiners.
Last updated:  2016-02-15
On the Computation of the Optimal Ate Pairing at the 192-bit Security Level
Loubna Ghammam, Emmanuel Fouotsa
Barreto, Lynn and Scott elliptic curves of embedding degree 12, denoted BLS12, have been proven to give the fastest results for the implementation of pairings at the 192-bit security level [1]. The computation of pairings in general involves the execution of the Miller algorithm and the final exponentiation. In this paper, we improve the complexity of these two steps by up to 8% by searching for an appropriate parameter. We compute the optimal ate pairing on BLS curves of embedding degree 12 and we also extend the same analysis to BLS curves with embedding degree 24. Furthermore, as many pairing-based protocols are implemented on memory-constrained devices such as SIM or smart cards, we describe an efficient, less memory-intensive algorithm for the computation of the final exponentiation, with an improvement of up to 25% with respect to the previous work.
Last updated:  2016-02-17
ECDH Key-Extraction via Low-Bandwidth Electromagnetic Attacks on PCs
Daniel Genkin, Lev Pachmanov, Itamar Pipman, Eran Tromer
We present the first physical side-channel attack on elliptic curve cryptography running on a PC. The attack targets the ECDH public-key encryption algorithm, as implemented in the latest version of GnuPG's Libgcrypt. By measuring the target's electromagnetic emanations, the attack extracts the secret decryption key within seconds, from a target located in an adjacent room across a wall. The attack utilizes a single carefully chosen ciphertext, and tailored time-frequency signal analysis techniques, to achieve full key extraction.
Last updated:  2016-10-12
Removing the Strong RSA Assumption from Arguments over the Integers
Geoffroy Couteau, Thomas Peters, David Pointcheval
Committing to integers and proving relations between them is an essential ingredient in many cryptographic protocols. Among them, range proofs have been shown to be fundamental. They consist in proving that a committed integer lies in a public interval, which can be seen as a particular case of the more general Diophantine relations: for the committed vector of integers x, there exists a vector of integers w such that P(x,w) = 0, where P is a polynomial. In this paper, we revisit the security strength of the statistically hiding commitment scheme over the integers due to Damgard-Fujisaki, and the zero-knowledge proofs of knowledge of openings. Our first main contribution shows how to remove the Strong RSA assumption and replace it by the standard RSA assumption in the security proofs. This improvement naturally extends to generalized commitments and more complex proofs without modifying the original protocols. As a second contribution, we design an interactive technique turning a commitment scheme over the integers into a commitment scheme modulo a prime p. Still under the RSA assumption, this results in more efficient proofs of relations between committed values. Our methods thus improve upon existing proof systems for Diophantine relations both in terms of performance and security. We illustrate this with more efficient range proofs under the sole RSA assumption.
Last updated:  2016-07-04
A subfield lattice attack on overstretched NTRU assumptions: Cryptanalysis of some FHE and Graded Encoding Schemes
Martin Albrecht, Shi Bai, Léo Ducas
The subfield attack exploits the presence of a subfield to solve overstretched versions of the NTRU assumption: norming the public key $h$ down to a subfield may lead to an easier lattice problem and any sufficiently good solution may be lifted to a short vector in the full NTRU-lattice. This approach was originally sketched in a paper of Gentry and Szydlo at Eurocrypt'02 and there also attributed to Jonsson, Nguyen and Stern. However, because it does not apply for small moduli and hence NTRUEncrypt, it seems to have been forgotten. In this work, we resurrect this approach, fill some gaps, analyze and generalize it to any subfields and apply it to more recent schemes. We show that for significantly larger moduli ---a case we call overstretched--- the subfield attack is applicable and asymptotically outperforms other known attacks. This directly affects the asymptotic security of the bootstrappable homomorphic encryption schemes LTV and YASHE which rely on a mildly overstretched NTRU assumption: the subfield lattice attack runs in sub-exponential time $2^{O(\lambda/\log^{1/3}\lambda)}$ invalidating the security claim of $2^{\Theta(\lambda)}$. The effect is more dramatic on GGH-like Multilinear Maps: this attack can run in polynomial time without *encodings of zero* nor the *zero-testing parameter*, yet requiring an additional quantum step to recover the secret parameters exactly. We also report on practical experiments. Running LLL in dimension $512$ we obtain vectors that would have otherwise required running BKZ with block-size $130$ in dimension $8192$. Finally, we discuss concrete aspects of this attack, the condition on the modulus $q$ to guarantee full immunity, discuss countermeasures and propose open questions.
Last updated:  2016-02-14
Server Notaries: A Complementary Approach to the Web PKI Trust Model
Emre Yüce, Ali Aydın Selçuk
SSL/TLS is the de facto protocol for providing secure communication over the Internet. It relies on the Web PKI model for authentication and secure key exchange. Despite its relatively successful past, the number of Web PKI incidents observed has increased recently. These incidents revealed the risks of forged certificates issued by certificate authorities without the consent of the domain owners. Several solutions have been proposed to solve this problem, but no solution has yet received widespread adoption due to complexity and deployability issues. In this paper, we propose a practical mechanism that enables servers to get their certificate views across the Internet, making detection of a certificate substitution attack possible. The origin of the certificate substitution attack can also be located by this mechanism. We have conducted simulation experiments and evaluated our proposal using publicly available, real-world BGP data. We have obtained promising results on the AS-level Internet topology.
Last updated:  2016-10-10
Compact Identity Based Encryption from LWE
Daniel Apon, Xiong Fan, Feng-Hao Liu
We construct an identity-based encryption (IBE) scheme from the standard Learning with Errors (LWE) assumption that has \emph{compact} public-key and achieves adaptive security in the standard model. In particular, our scheme only needs 2 public matrices to support $O(\log^2 \lambda)$-bit length identity, and $O(\lambda / \log^2 \lambda)$ public matrices to support $\lambda$-bit length identity, where $\lambda$ is the security parameter. This improves over previous IBE schemes from lattices substantially. Additionally, our techniques from IBE can be adapted to construct a compact digital signature scheme, which achieves existential unforgeability under the standard Short Integer Solution (SIS) assumption with small polynomial parameters.
Last updated:  2016-05-30
Collecting relations for the Number Field Sieve in $GF(p^6)$
Pierrick Gaudry, Laurent Grémy, Marion Videau
In order to assess the security of cryptosystems based on the discrete logarithm problem in non-prime finite fields, as are the torus-based or pairing-based ones, we investigate thoroughly the case in $GF(p^6)$ with the Number Field Sieve. We provide new insights, improvements, and comparisons between different methods to select polynomials intended for a sieve in dimension 3 using a special-q strategy. We also take into account the Galois action to increase the relation productivity of the sieving phase. To validate our results, we ran several experiments and real computations for various selection methods and field sizes with our publicly available implementation of the sieve in dimension 3, with special-q and various enumeration strategies.
Last updated:  2016-12-23
Robust Password-Protected Secret Sharing
Michel Abdalla, Mario Cornejo, Anca Nitulescu, David Pointcheval
Password-protected secret sharing (PPSS) schemes allow a user to publicly share a high-entropy secret across different servers and to later recover it by interacting with some of these servers using only a password, without requiring any authenticated data. In particular, this secret will remain safe as long as not too many servers get corrupted. However, servers are not always reliable and the communication can be altered. To address this issue, a robust PPSS should additionally guarantee that a user can recover the secret as long as enough servers provide correct answers, and these are received without alteration. In this paper, we propose new robust PPSS schemes which are significantly more efficient than the existing ones. We achieve this goal in two steps. First, we propose a generic technique to build a Robust Gap Threshold Secret Sharing Scheme (RGTSSS) from any threshold secret sharing scheme. In the PPSS construction, this allows us to drop the verifiability requirement on Oblivious Pseudorandom Functions (OPRFs). Then, we use this new approach to design two new robust PPSS schemes from two OPRFs that are quite efficient. They are proven secure in the random oracle model, only because our RGTSSS construction requires random non-malleable fingerprints. This is easily guaranteed when the hash function is modeled as a random oracle.
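The RGTSSS above is built from any threshold secret sharing scheme. As a point of reference for what such an underlying scheme looks like, here is a minimal Shamir sketch over a prime field (illustrative only; it is not the authors' RGTSSS construction, and the prime and parameters are chosen for readability):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is mod PRIME

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```

Any t of the n shares recover the secret; fewer reveal nothing, which is the property the robust gap variant strengthens.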
Last updated:  2016-11-17
Simpira v2: A Family of Efficient Permutations Using the AES Round Function
Shay Gueron, Nicky Mouha
This paper introduces Simpira, a family of cryptographic permutations that supports inputs of $128 \times b$ bits, where $b$ is a positive integer. Its design goal is to achieve high throughput on virtually all modern 64-bit processors, which nowadays have native instructions for AES. To achieve this goal, Simpira uses only one building block: the AES round function. For $b=1$, Simpira corresponds to 12-round AES with fixed round keys, whereas for $b\ge 2$, Simpira is a Generalized Feistel Structure (GFS) with an $F$-function that consists of two rounds of AES. We claim that there are no structural distinguishers for Simpira with a complexity below $2^{128}$, and analyze its security against a variety of attacks in this setting. The throughput of Simpira is close to the theoretical optimum, namely, the number of AES rounds in the construction. For example, on the Intel Skylake processor, Simpira has throughput below 1 cycle per byte for $b \le 4$ and $b=6$. For larger permutations, where moving data in memory has a more pronounced effect, Simpira with $b=32$ (512 byte inputs) evaluates 732 AES rounds, and performs at 824 cycles (1.61 cycles per byte), which is less than 13% off the theoretical optimum. If the data is stored in interleaved buffers, this overhead is reduced to less than 1%. The Simpira family offers an efficient solution when processing wide blocks, larger than 128 bits, is desired.
Last updated:  2016-10-03
Tightly-Secure Pseudorandom Functions via Work Factor Partitioning
Tibor Jager
We introduce a new technique for tight security proofs called work factor partitioning. Using this technique in a modified version of the framework of Döttling and Schröder (CRYPTO 2015), we obtain the first generic construction of tightly-secure pseudorandom functions (PRFs) from PRFs with small domain. By instantiating the small-domain PRFs with the Naor-Reingold function (FOCS 1997) or its generalization by Lewko and Waters (ACM CCS 2009), this yields the first fully-secure PRFs whose black-box security proof loses a factor of only O(log^2 \lambda), where \lambda is the security parameter. Interestingly, our variant of the Naor-Reingold construction can be seen as a standard Naor-Reingold PRF (whose security proof has a loss of \Theta(\lambda)), where a special encoding is applied to the input before it is processed. The tightness gain comes almost for free: the encoding is very efficiently computable and increases the length of the input only by a constant factor smaller than 2.
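Since the result is framed as the Naor-Reingold PRF with an input encoding applied first, a sketch of the base Naor-Reingold function clarifies what is being encoded into. This is a toy instantiation with tiny parameters (real use requires cryptographically large primes), and the encoding step from the paper is omitted:

```python
import random

# Toy parameters: q divides p - 1 and g generates the order-q subgroup
# (real deployments need large primes; these are for illustration only)
p, q = 1019, 509   # 1019 = 2*509 + 1, both prime
g = 4              # 4 = 2^2 has order 509 in Z_1019^*

def keygen(n):
    # key = (a_0, a_1, ..., a_n), all nonzero mod q
    return [random.randrange(1, q) for _ in range(n + 1)]

def nr_prf(key, x_bits):
    """Naor-Reingold: f(x) = g^(a_0 * prod_{i: x_i = 1} a_i mod q) mod p."""
    e = key[0]
    for i, bit in enumerate(x_bits):
        if bit:
            e = e * key[i + 1] % q
    return pow(g, e, p)

key = keygen(8)
y1 = nr_prf(key, [1, 0, 1, 1, 0, 0, 1, 0])
y2 = nr_prf(key, [1, 0, 1, 1, 0, 0, 1, 0])
assert y1 == y2  # deterministic, as a PRF must be
```

The classical security proof for this function loses a factor $\Theta(\lambda)$; the paper's contribution is the pre-encoding that shrinks this loss to $O(\log^2 \lambda)$.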
Last updated:  2016-11-10
Oblivious Transfer from Any Non-Trivial Elastic Noisy Channels via Secret Key Agreement
Ignacio Cascudo, Ivan Damgård, Felipe Lacerda, Samuel Ranellucci
A $(\gamma,\delta)$-elastic channel is a binary symmetric channel between a sender and a receiver where the error rate of an honest receiver is $\delta$ while the error rate of a dishonest receiver lies within the interval $[\gamma, \delta]$. In this paper, we show that from \emph{any} non-trivial elastic channel (i.e., $0<\gamma<\delta<\frac{1}{2}$) we can implement oblivious transfer with information theoretic security. This was previously (Khurana et al., Eurocrypt 2016) only known for a subset of these parameters. Our technique relies on a new way to exploit protocols for information-theoretic key agreement from noisy channels. We also show that information theoretically secure commitments where the receiver commits follow from any non-trivial elastic channel.
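The channel model itself is easy to make concrete: a binary symmetric channel where a dishonest receiver may lower its error rate from $\delta$ down to $\gamma$. A small simulation (illustrative only; the protocol built on top of it is not shown):

```python
import random

def elastic_channel(bits, error_rate, rng):
    """Binary symmetric channel: flip each bit independently with `error_rate`."""
    return [b ^ (rng.random() < error_rate) for b in bits]

rng = random.Random(0)
gamma, delta = 0.1, 0.25          # a non-trivial channel: 0 < gamma < delta < 1/2
msg = [rng.randrange(2) for _ in range(20000)]

honest = elastic_channel(msg, delta, rng)  # honest receiver sees error rate delta
cheat = elastic_channel(msg, gamma, rng)   # dishonest receiver can lower it to gamma

err = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)
assert 0.23 < err(msg, honest) < 0.27
assert 0.08 < err(msg, cheat) < 0.12
```

The gap between the two empirical error rates is exactly the adversarial advantage that the paper's key-agreement technique must neutralize.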
Last updated:  2017-02-17
Lightweight Multiplication in GF(2^n) with Applications to MDS Matrices
Christof Beierle, Thorsten Kranz, Gregor Leander
In this paper we consider the fundamental question of optimizing finite field multiplications with one fixed element. Surprisingly, this question did not receive much attention previously. We investigate which field representation, that is which choice of basis, allows for an optimal implementation. Here, the efficiency of the multiplication is measured in terms of the number of XOR operations needed to implement the multiplication. While our results are potentially of larger interest, we focus on a particular application in the second part of our paper. Here we construct new MDS matrices which outperform or are on par with all previous results when focusing on a round-based hardware implementation.
Last updated:  2016-03-13
Circuit-ABE from LWE: Unbounded Attributes and Semi-Adaptive Security
Zvika Brakerski, Vinod Vaikuntanathan
We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth $d$ circuit can have as many as $2^d$ inputs). This is in contrast to previous LWE-based schemes where the length of the public parameters has to grow linearly with the maximal attribute length. We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the complexity leveraging technique is not applicable for unbounded attributes.) We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to "delay" this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
Last updated:  2016-05-19
Circular Security Separations for Arbitrary Length Cycles from LWE
Venkata Koppula, Brent Waters
We describe a public-key encryption scheme that is IND-CPA secure under the Learning with Errors (LWE) assumption, but that is not circular secure for cycles of arbitrary length. Previous separation results for cycle lengths greater than 2 require the use of indistinguishability obfuscation, which is not currently realizable under standard assumptions.
Last updated:  2016-04-29
Interactive Oracle Proofs
Eli Ben-Sasson, Alessandro Chiesa, Nicholas Spooner
We initiate the study of a proof system model that naturally combines two well-known models: interactive proofs (IPs) and probabilistically-checkable proofs (PCPs). An *interactive oracle proof* (IOP) is an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to the prover's messages, and may probabilistically query them. IOPs simultaneously generalize IPs and PCPs. Thus, IOPs retain the expressiveness of PCPs, capturing NEXP rather than only PSPACE, and also the flexibility of IPs, allowing multiple rounds of communication with the prover. These degrees of freedom allow for more efficient "PCP-like" interactive protocols, because the prover does not have to compute the parts of a PCP that are not requested by the verifier. As a first investigation into IOPs, we offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against *state restoration attacks*, a class of rewinding attacks on the IOP verifier. Our compiler preserves zero knowledge, proof of knowledge, and time complexity of the underlying IOP. As an application, we obtain blackbox unconditional ZK proofs in the random oracle model with quasilinear prover and polylogarithmic verifier, improving on the result of Ishai et al.\ (2015). Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's (standard) soundness and round complexity; and describe a simple adversarial strategy that is optimal across all state restoration attacks. Our compiler can be viewed as a generalization of the Fiat--Shamir paradigm for public-coin IPs (CRYPTO~'86), and of the "CS proof" constructions of Micali (FOCS~'94) and Valiant (TCC~'08) for PCPs. 
Our analysis of the compiler gives, in particular, a unified understanding of all of these constructions, and also motivates the study of state restoration attacks, not only for IOPs, but also for IPs and PCPs.
Last updated:  2016-03-08
Efficiently Computing Data-Independent Memory-Hard Functions
Joel Alwen, Jeremiah Blocki
A memory-hard function (MHF) $f$ is equipped with a {\em space cost} $\sigma$ and {\em time cost} $\tau$ parameter such that repeatedly computing $f_{\sigma,\tau}$ on an application-specific integrated circuit (ASIC) is not economically advantageous relative to a general-purpose computer. Technically we would like that any (generalized) circuit for evaluating an iMHF $f_{\sigma,\tau}$ has area $\times$ time (AT) complexity $\Theta(\sigma^2 * \tau)$. A data-independent MHF (iMHF) has the added property that it can be computed with almost optimal memory and time complexity by an algorithm which accesses memory in a pattern independent of the input value. Such functions can be specified by fixing a directed acyclic graph (DAG) $G$ on $n=\Theta(\sigma * \tau)$ nodes representing its computation graph. In this work we develop new tools for analyzing iMHFs. First we define and motivate a new complexity measure capturing the amount of {\em energy} (i.e., electricity) required to compute a function. We argue that, in practice, this measure is at least as important as the more traditional AT-complexity. Next we describe an algorithm $\mathcal{A}$ for repeatedly evaluating an iMHF based on an arbitrary DAG $G$. We upper-bound both its energy and AT complexities per instance evaluated in terms of a certain combinatorial property of $G$. Next we instantiate our attack for several general classes of DAGs which include those underlying many of the most important iMHF candidates in the literature. In particular, we obtain the following results which hold for all choices of parameters $\sigma$ and $\tau$ (and thread-count) such that $n=\sigma*\tau$. 1) The Catena-Dragonfly function of~\cite{forler2013catena} has AT and energy complexities $O(n^{1.67})$. 2) The Catena-Butterfly function of~\cite{forler2013catena} has complexities in $O(n^{1.67})$. 3) The Double-Buffer and the Linear functions of~\cite{CBS16} both have complexities in $O(n^{1.67})$. 
4) The Argon2i function of~\cite{Argon2} (winner of the Password Hashing Competition~\cite{PHC}) has complexities $O(n^{7/4}\log(n))$. 5) The Single-Buffer function of~\cite{CBS16} has complexities $O(n^{7/4}\log(n))$. 6) \emph{Any} iMHF can be computed by an algorithm with complexities $O(n^2/\log^{1-\epsilon}(n))$ for all $\epsilon > 0$. In particular when $\tau=1$ this shows that the goal of constructing an iMHF with AT-complexity $\Theta(\sigma^2 * \tau)$ is unachievable. Along the way we prove a lemma upper-bounding the depth-robustness of any DAG which may prove to be of independent interest.
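An iMHF specified by a DAG is evaluated by computing, for each node in topological order, a label that hashes together the labels of the node's parents. A minimal sketch of this evaluation (the graph below is a made-up example, not any of the Catena or Argon2i graphs analyzed above):

```python
import hashlib

def compute_labels(parents, seed):
    """Label of node v = H(seed, v, labels of v's parents); `parents[v]` lists
    v's parents, which must appear earlier in the ordering (a DAG)."""
    labels = []
    for v, ps in enumerate(parents):
        h = hashlib.sha256(seed + v.to_bytes(4, "big"))
        for u in ps:
            h.update(labels[u])
        labels.append(h.digest())
    return labels

# A toy 5-node DAG: a chain plus two long-range edges (illustrative only)
parents = [[], [0], [1], [2, 0], [3, 1]]
labels = compute_labels(parents, b"seed")
assert len(labels) == 5
assert labels == compute_labels(parents, b"seed")  # evaluation is deterministic
```

The memory-access pattern here depends only on the graph, never on the data, which is precisely the "data-independent" property; the paper's attacks exploit combinatorial weaknesses of the chosen graphs.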
Last updated:  2016-06-14
The Magic of ELFs
Mark Zhandry
We introduce the notion of an \emph{Extremely Lossy Function} (ELF). An ELF is a family of functions with an image size that is tunable anywhere from injective to having a polynomial-sized image. Moreover, for any efficient adversary, for a sufficiently large polynomial $r$ (necessarily chosen to be larger than the running time of the adversary), the adversary cannot distinguish the injective case from the case of image size $r$. We develop a handful of techniques for using ELFs, and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially-many hardcore bits for any one-way function. Such applications were previously known only from strong knowledge assumptions --- for example polynomially-many hardcore bits were only known from differing-inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with \emph{output intractability}, a new notion we define that may be useful for generating common reference strings. Next, we give a construction of ELFs relying on the \emph{exponential} hardness of the decisional Diffie-Hellman problem, which is plausible in pairing-based groups. Combining with the applications above, our work gives several practical constructions relying on qualitatively different --- and arguably better --- assumptions than prior works.
Last updated:  2016-02-10
On the Composition of Two-Prover Commitments, and Applications to Multi-Round Relativistic Commitments
Serge Fehr, Max Fillinger
We consider the related notions of two-prover and of relativistic commitment schemes. In recent work, Lunghi et al. proposed a new relativistic commitment scheme with a multi-round sustain phase that keeps the binding property alive as long as the sustain phase is running. They prove security of their scheme against classical attacks; however, the proven bound on the error parameter is very weak: it blows up double exponentially in the number of rounds. In this work, we give a new analysis of the multi-round scheme of Lunghi et al., and we show a linear growth of the error parameter instead (also considering classical attacks only). Our analysis is based on a new composition theorem for two-prover commitment schemes. The proof of our composition theorem is based on a better understanding of the binding property of two-prover commitments that we provide in the form of new definitions and relations among them. As an additional consequence of these new insights, our analysis is actually with respect to a strictly stronger notion of security than considered by Lunghi et al.
Last updated:  2016-08-23
On the (In)security of SNARKs in the Presence of Oracles
Dario Fiore, Anca Nitulescu
In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) {\em auxiliary information}, here we consider the scenario where adversarial provers are given {\em access to an oracle}. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold. First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we recognize an issue while doing the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that when considering signature schemes with appropriate restrictions on the message length O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.
Last updated:  2017-03-31
Scalable and Secure Logistic Regression via Homomorphic Encryption
Yoshinori Aono, Takuya Hayashi, Le Trieu Phong, Lihua Wang
Logistic regression is a powerful machine learning tool for classifying data. When dealing with sensitive data such as private or medical information, care is necessary. In this paper, we propose a secure system for protecting both the training and predicting data in logistic regression via homomorphic encryption. Perhaps surprisingly, despite the non-polynomial tasks of training and predicting in logistic regression, we show that only additively homomorphic encryption is needed to build our system. Indeed, we instantiate our system with Paillier, LWE-based, and ring-LWE-based encryption schemes, highlighting the merits and demerits of each instance. Our system is very scalable in both the dataset size and dimension, tolerating, for example, datasets with hundreds of millions ($10^8$) of records. Besides examining the costs of computation and communication, we carefully test our system over real datasets to demonstrate its accuracy and other related measures such as F-score and AUC.
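Since the claim is that additive homomorphism alone suffices, it is worth seeing what that property looks like in the Paillier instantiation: ciphertexts are multiplied to add plaintexts, and exponentiation scales a plaintext by a known constant, which together give encrypted weighted sums. A minimal Paillier sketch with toy primes (real deployments need large primes and careful implementation; this is illustrative only):

```python
import math, random

# Minimal textbook Paillier with toy primes (illustrative only)
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return L(pow(c, lam, n2)) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts
a, b = 1234, 5678
assert dec(enc(a) * enc(b) % n2) == a + b
# Scalar multiply: exponentiation scales the plaintext (weighted sums)
assert dec(pow(enc(a), 3, n2)) == 3 * a
```

These two operations are enough to aggregate per-record contributions on the server side without ever decrypting individual records.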
Last updated:  2016-06-03
Three's Compromised Too: Circular Insecurity for Any Cycle Length from (Ring-)LWE
Navid Alamati, Chris Peikert
Informally, a public-key encryption scheme is \emph{$k$-circular secure} if a cycle of~$k$ encrypted secret keys $(\pkcenc_{\pk_{1}}(\sk_{2}), \pkcenc_{\pk_{2}}(\sk_{3}), \ldots, \pkcenc_{\pk_{k}}(\sk_{1}))$ is indistinguishable from encryptions of zeros. Circular security has applications in a wide variety of settings, ranging from security of symbolic protocols to fully homomorphic encryption. A fundamental question is whether standard security notions like IND-CPA/CCA imply $k$-circular security. For the case $k=2$, several works over the past years have constructed counterexamples---i.e., schemes that are CPA or even CCA secure but not $2$-circular secure---under a variety of well-studied assumptions (SXDH, decision linear, and LWE). However, for $k > 2$ the only known counterexamples are based on strong general-purpose obfuscation assumptions. In this work we construct $k$-circular security counterexamples for any $k \geq 2$ based on (ring-)LWE. Specifically: \begin{itemize} \item for any constant $k=O(1)$, we construct a counterexample based on $n$-dimensional (plain) LWE for $\poly(n)$ approximation factors; \item for any $k=\poly(\lambda)$, we construct one based on degree-$n$ ring-LWE for at most subexponential $\exp(n^{\varepsilon})$ factors. \end{itemize} Moreover, both schemes are $k'$-circular insecure for $2 \leq k' \leq k$. Notably, our ring-LWE construction does not immediately translate to an LWE-based one, because matrix multiplication is not commutative. To overcome this, we introduce a new ``tensored'' variant of LWE which provides the desired commutativity, and which we prove is actually equivalent to plain LWE.
Last updated:  2016-02-10
Fast Multiparty Multiplications from shared bits
Ivan Damgård, Tomas Toft, Rasmus Winther Zakarias
We study the question of securely multiplying N-bit integers that are stored in binary representation, in the context of protocols for dishonest majority with preprocessing. We achieve communication complexity O(N) using only secure operations over small fields F_2 and F_p with log(p) \approx log(N). For semi-honest security we achieve communication O(N) \cdot 2^{O(log*(N))} using only secure operations over F_2. This improves over the straightforward solution of simulating a Boolean multiplication circuit, both asymptotically and in practice.
Last updated:  2017-05-10
An Efficient Toolkit for Computing Private Set Operations
Alex Davidson, Carlos Cid
Private set operation (PSO) protocols provide a natural way of securely performing operations on data sets, such that crucial details of the input sets are not revealed. Such protocols have an ever-increasing number of practical applications, particularly when implementing privacy-preserving data mining schemes. Protocols for computing private set operations have been prevalent in multi-party computation literature over the past decade, and in the case of private set intersection (PSI), have become practically feasible to run in real applications. In contrast, other set operations such as union have received less attention from the research community, and the few existing designs are often limited in their feasibility. In this work we aim to fill this gap, and present a new technique using Bloom filter data structures and additive homomorphic encryption to develop the first private set union protocol with both linear computation and communication complexities. Moreover, we show how to adapt this protocol to give novel ways of computing PSI and private set intersection/union cardinality with only minor changes to the protocol computation. Our work therefore resembles a toolkit for scalable private set computation with linear complexities, and we provide a thorough experimental analysis that shows that the online phase of our designs is practical up to large set sizes.
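The Bloom filter underlying the toolkit is a plain set-membership structure with no false negatives and a tunable false-positive rate. A minimal sketch of the data structure alone (the homomorphic-encryption layer that makes the protocol private is not shown):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over m bits."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, [0] * m

    def _positions(self, item):
        # Derive k positions by hashing the item with k distinct prefixes
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def maybe_contains(self, item):  # no false negatives, some false positives
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(m=1024, k=4)
for x in [b"alice", b"bob", b"carol"]:
    bf.add(x)
assert bf.maybe_contains(b"alice")
assert bf.maybe_contains(b"bob")
```

Representing a set as a fixed-size bit array is what keeps the protocol's computation and communication linear in the set size.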
Last updated:  2016-02-10
Fully Anonymous Transferable Ecash
Hitesh Tewari, Arthur Hughes
Numerous electronic cash schemes have been proposed over the years, ranging from Ecash and Mondex to Millicent. However, none of these schemes has been adopted by the financial institutions as an alternative to traditional paper based currency. The Ecash scheme was the closest to a system that mimicked fiat currency, with the property that it provided anonymity for users when buying coins from the Bank and spending them at a merchant premises (from the Bank's perspective). However, Ecash lacked one crucial element present in current fiat based systems, i.e., the ability to continuously spend or transfer coins within the system. In this paper we propose an extension to the Ecash scheme which allows for the anonymous transfer of coins between users without the involvement of a trusted third party. We make use of a powerful technique which allows for distributed decision making within the network - namely the Bitcoin blockchain protocol. Combined with the proof-of-work technique and the classical discrete logarithm problem we are able to continuously reuse coins within our system, and also prevent double-spending of coins without revealing the identities of the users.
Last updated:  2016-11-23
Access Control Encryption: Enforcing Information Flow with Cryptography
Ivan Damgård, Helene Haagh, Claudio Orlandi
We initiate the study of Access Control Encryption (ACE), a novel cryptographic primitive that allows fine-grained access control, by giving different rights to different users not only in terms of which messages they are allowed to receive, but also which messages they are allowed to send. Classical examples of security policies for information flow are the well known Bell-Lapadula [BL73] or Biba [Bib75] model: in a nutshell, the Bell-Lapadula model assigns roles to every user in the system (e.g., public, secret and top-secret). A user's role specifies which messages the user is allowed to receive (i.e., the no read-up rule, meaning that users with public clearance should not be able to read messages marked as secret or top-secret) but also which messages the user is allowed to send (i.e., the no write-down rule, meaning that a user with top-secret clearance should not be able to write messages marked as secret or public). To the best of our knowledge, no existing cryptographic primitive allows for even this simple form of access control, since no existing cryptographic primitive enforces any restriction on what kind of messages one should be able to encrypt. Our contributions are: - Introducing and formally defining access control encryption (ACE); - A construction of ACE with complexity linear in the number of the roles based on classic number theoretic assumptions (DDH, Paillier); - A construction of ACE with complexity polylogarithmic in the number of roles based on recent results on cryptographic obfuscation.
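The no read-up / no write-down rules the abstract uses as its running example reduce to two comparisons on clearance levels. A toy reference-monitor version (level names are from the abstract; the point of ACE is to enforce these checks cryptographically rather than with trusted code like this):

```python
# Toy Bell-Lapadula policy check (illustrative; ACE replaces this trusted
# monitor with encryption and sanitization).
LEVELS = {"public": 0, "secret": 1, "top-secret": 2}

def can_read(user_level, msg_level):
    return LEVELS[user_level] >= LEVELS[msg_level]   # no read-up

def can_write(user_level, msg_level):
    return LEVELS[user_level] <= LEVELS[msg_level]   # no write-down

assert can_read("top-secret", "public")
assert not can_read("public", "secret")        # read-up forbidden
assert can_write("public", "top-secret")
assert not can_write("top-secret", "secret")   # write-down forbidden
```

Receiving restrictions are what ordinary encryption already gives; the novelty of ACE is making the `can_write` side hold against a corrupted sender.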
Last updated:  2016-06-27
Can there be efficient and natural FHE schemes?
Kristian Gjøsteen, Martin Strand
In 1978, Rivest, Adleman and Dertouzos asked for algebraic systems for which useful privacy homomorphisms exist. To date, the only acknowledged result is noise-based encryption combined with bootstrapping. Before that, there were several failed attempts. We prove that fully homomorphic schemes are impossible for several algebraic structures. Then we develop a characterisation of all fully homomorphic schemes and use it to analyse three examples. Finally, we propose a conjecture stating that secure FHE schemes must either have a significant ciphertext expansion or use unusual algebraic structures.
Last updated:  2016-02-10
Open Sesame: The Password Hashing Competition and Argon2
Jos Wetzels
In this document we present an overview of the background to and goals of the Password Hashing Competition (PHC) as well as the design of its winner, Argon2, and its security requirements and properties.
Last updated:  2016-05-07
Speed Optimizations in Bitcoin Key Recovery Attacks
Nicolas Courtois, Guangyan Song, Ryan Castellucci
In this paper we study and give the first detailed benchmarks on existing implementations of the secp256k1 elliptic curve used by at least hundreds of thousands of users in Bitcoin and other cryptocurrencies. Our implementation improves the state of the art by a factor of 2.5, with focus on the cases where side channel attacks are not a concern and a large quantity of RAM is available. As a result, we are able to scan the Bitcoin blockchain for weak keys faster than any previous implementation. We also give some examples of passwords which we have cracked, showing that brain wallets are not secure in practice even for quite complex passwords.
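The reason brain wallets are scannable at all is that they derive a secp256k1 private key deterministically from a passphrase, typically by hashing it, so the key is only as strong as the passphrase. A sketch of that derivation (a common brain-wallet convention, not the paper's attack code):

```python
import hashlib

# A "brain wallet" hashes a passphrase into a secp256k1 private key; an
# attacker can therefore enumerate candidate passphrases offline.
SECP256K1_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def brainwallet_privkey(passphrase):
    k = int.from_bytes(hashlib.sha256(passphrase.encode()).digest(), "big")
    return k % SECP256K1_ORDER  # reduce into the curve's scalar group

assert brainwallet_privkey("some phrase") == brainwallet_privkey("some phrase")
assert brainwallet_privkey("password1") != brainwallet_privkey("password2")
```

The expensive step in the scan is the scalar multiplication turning each candidate key into an address, which is exactly the operation whose implementations the paper benchmarks and speeds up.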
Last updated:  2017-02-13
Breaking the Sub-Exponential Barrier in Obfustopia
Sanjam Garg, Omkant Pandey, Akshayaram Srinivasan, Mark Zhandry
Indistinguishability obfuscation (\io) has emerged as a surprisingly powerful notion. Almost all known cryptographic primitives can be constructed from general purpose \io\ and other minimalistic assumptions such as one-way functions. A major challenge in this direction of research is to develop novel techniques for using \io\ since \io\ by itself offers virtually no protection for secret information in the underlying programs. When dealing with complex situations, often these techniques have to consider an exponential number of hybrids (usually one per input) in the security proof. This results in a {\em sub-exponential} loss in the security reduction. Unfortunately, this scenario is becoming more and more common and appears to be a fundamental barrier to many current techniques. A parallel research challenge is building obfuscation from simpler assumptions. Unfortunately, it appears that such a construction would likely incur an exponential loss in the security reduction. Thus, achieving any application of \io\ from simpler assumptions would also require a sub-exponential loss, \emph{even if the \io-to-application security proof incurred a polynomial loss}. Functional encryption (\fe) is known to be equivalent to \io\ up to a sub-exponential loss in the \fe-to-\io\ security reduction; yet, unlike \io, \fe\ \emph{can} be achieved from simpler assumptions (namely, specific multilinear map assumptions) with only a polynomial loss. In the interest of basing applications on weaker assumptions, we therefore argue for using \fe\ as the starting point, rather than \io, and restricting to reductions with only a polynomial loss. By significantly expanding on ideas developed by Garg, Pandey, and Srinivasan (CRYPTO 2016), we achieve the following early results in this line of study: \begin{itemize} \item We construct {\em universal samplers} based only on polynomially-secure public-key \fe. 
As an application of this result, we construct a {\em non-interactive multiparty key exchange} (NIKE) protocol for an unbounded number of users without a trusted setup. Prior to this work, such constructions were only known from indistinguishability obfuscation. \item We also construct trapdoor one-way permutations (OWP) based on polynomially-secure public-key \fe. This improves upon the recent result of Bitansky, Paneth, and Wichs (TCC 2016) which requires \io\ of \emph{sub-exponential strength}. We proceed in two steps, first giving a construction requiring \io\ of \emph{polynomial strength}, and then specializing the \fe-to-\io\ conversion to our specific application. \end{itemize} Many of the techniques that have been developed for using \io, including many of those based on the ``punctured programming'' approach, become inapplicable when we insist on polynomial reductions to \fe. As such, our results above require many new ideas that will likely be useful for future works on basing security on \fe.
Last updated:  2016-09-07
Signature Schemes with Efficient Protocols and Dynamic Group Signatures from Lattice Assumptions
Benoit Libert, San Ling, Fabrice Mouhartem, Khoa Nguyen, Huaxiong Wang
A recent line of works - initiated by Gordon, Katz and Vaikuntanathan (Asiacrypt 2010) - gave lattice-based constructions allowing users to authenticate while remaining hidden in a crowd. Despite five years of efforts, known constructions are still limited to static sets of users, which cannot be dynamically updated. This work provides new tools enabling the design of anonymous authentication systems whereby new users can join the system at any time. Our first contribution is a signature scheme with efficient protocols, which allows users to obtain a signature on a committed value and subsequently prove knowledge of a signature on a committed message. This construction is well-suited to the design of anonymous credentials and group signatures. It indeed provides the first lattice-based group signature supporting dynamically growing populations of users. As a critical component of our group signature, we provide a simple mechanism for introducing new group members using our signature scheme. This technique is combined with zero-knowledge arguments allowing registered group members to prove knowledge of a secret short vector of which the corresponding public syndrome was certified by the group manager. These tools provide similar advantages to those of structure-preserving signatures in the realm of bilinear groups. Namely, they allow group members to generate their own public key without having to prove knowledge of the underlying secret key. This results in a two-message joining protocol supporting concurrent enrollments, which can be used in other settings such as group encryption. 
Our zero-knowledge arguments are presented in a unified framework where: (i) The involved statements reduce to arguing possession of a $\{-1,0,1\}$-vector $\mathbf{x}$ with a particular structure and satisfying $\mathbf{P}\cdot \mathbf{x} = \mathbf{v} \bmod q$ for some public matrix $\mathbf{P}$ and vector $\mathbf{v}$; (ii) The reduced statements can be handled using permuting techniques for Stern-like protocols. Our framework can serve as a blueprint for proving many other relations in lattice-based cryptography.
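The reduced statement in the abstract is easy to state concretely: a prover claims knowledge of a ternary vector $\mathbf{x}$ satisfying a public linear relation modulo $q$. The sketch below only checks such a relation directly (the actual zero-knowledge layer via Stern-like protocols is omitted); the matrix, modulus, and witness are illustrative values, not parameters from the paper.

```python
# Toy check of the unified relation from the abstract: given a public
# matrix P and vector v over Z_q, verify that a witness x with entries
# in {-1, 0, 1} satisfies P.x = v (mod q).  All concrete values here
# are illustrative.

q = 7
P = [[1, 2, 3],
     [4, 5, 6]]
v = [2, 5]
x = [1, -1, 1]          # candidate witness with entries in {-1, 0, 1}

def satisfies(P, x, v, q):
    """Return True iff x is ternary and P.x == v componentwise mod q."""
    assert all(e in (-1, 0, 1) for e in x), "witness must be ternary"
    return all(sum(r * e for r, e in zip(row, x)) % q == t % q
               for row, t in zip(P, v))

print(satisfies(P, x, v, q))
```

In the real protocol the verifier never sees $\mathbf{x}$; the point of the paper's framework is that once a statement is massaged into this shape, permuting techniques for Stern-like protocols handle the rest.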
Last updated:  2016-05-06
On the Complexity of Scrypt and Proofs of Space in the Parallel Random Oracle Model
Joël Alwen, Binyi Chen, Chethan Kamath, Vladimir Kolmogorov, Krzysztof Pietrzak, Stefano Tessaro
We investigate lower bounds in terms of time and memory on the {\em parallel} complexity of an adversary $\cal A$ computing labels of randomly selected challenge nodes in directed acyclic graphs, where the $w$-bit label of a node is the hash $H(.)$ (modelled as a random oracle with $w$-bit output) of the labels of its parents. Specific instances of this general problem underlie both proofs-of-space protocols [Dziembowski et al. CRYPTO'15] as well as memory-hardness proofs including {\sf scrypt}, a widely deployed password hashing and key-derivation function which is e.g. used within Proofs-of-Work for digital currencies like Litecoin. Current lower bound proofs for these problems only consider {\em restricted} algorithms $\cal A$ which perform only a single $H(.)$ query at a time and which only store individual labels (but not arbitrary functions thereof). This paper substantially improves this state of affairs. Our first set of results shows that even when allowing multiple parallel $H(.)$ queries, the ``cumulative memory complexity'' (CMC), as recently considered by Alwen and Serbinenko [STOC '15], of ${\sf scrypt}$ is at least $w \cdot (n/\log(n))^2$, when ${\sf scrypt}$ invokes $H(.)$ $n$ times. Our lower bound holds for adversaries which can store (1) arbitrary labels (i.e., random oracle outputs) and (2) certain natural functions of these labels, e.g., linear combinations. The exact power of such adversaries is captured via the combinatorial abstraction of parallel ``entangled'' pebbling games on graphs, which we introduce and study. We introduce a combinatorial quantity $\gamma_n$ and, under the conjecture that it is upper bounded by some constant, we show that the above lower bound on the CMC also holds for arbitrary algorithms $\cal A$, storing in particular arbitrary functions of their labels. 
We also show that under the same conjecture, the {\em time complexity} of computing the label of a random node in a graph on $n$ nodes (given an initial $kw$-bit state) reduces tightly to the time complexity for entangled pebbling on the same graph (given an initial $k$-node pebbling). Under the conjecture, this solves the main open problem from the work of Dziembowski et al. In fact, we note that every non-trivial upper bound on $\gamma_n$ will lead to the first non-trivial bounds for general adversaries for this problem.
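The labeling game at the heart of these bounds is simple to state: each node's $w$-bit label is the hash of its parents' labels. The sketch below instantiates it on a path graph, mirroring the sequential data dependencies that make {\sf scrypt}-style functions memory-hard; SHA-256 stands in for the random oracle $H(.)$, and the graph and parameters are illustrative.

```python
# Minimal sketch of the DAG labeling game from the abstract: the label
# of each node is the hash of its parents' labels (SHA-256 stands in
# for the random oracle H with w-bit output).  A path graph is used
# here for illustration; real instances use graphs with more structure.

import hashlib

def label(node, parents, memo):
    """Compute the label of `node` as H(node id, parent labels)."""
    if node in memo:
        return memo[node]
    h = hashlib.sha256()
    h.update(str(node).encode())              # domain-separate by node id
    for p in parents.get(node, ()):
        h.update(label(p, parents, memo))
    memo[node] = h.digest()
    return memo[node]

# A path graph on n nodes: node i depends only on node i - 1.
n = 8
parents = {i: (i - 1,) for i in range(1, n)}
memo = {}
final = label(n - 1, parents, memo)
print(final.hex())
```

An honest evaluator fills `memo` with all $n$ labels; the lower-bound question is how much memory over time (the CMC) any parallel adversary must spend to answer random challenge nodes, which the paper analyzes via entangled pebbling.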
Last updated:  2016-02-07
Attribute-Based Fully Homomorphic Encryption with a Bounded Number of Inputs
Michael Clear, Ciaran McGoldrick
The only known way to achieve Attribute-based Fully Homomorphic Encryption (ABFHE) is through indistinguishability obfuscation. The best we can do at the moment without obfuscation is Attribute-Based Leveled FHE, which allows circuits of an a priori bounded depth to be evaluated. This has been achieved from the Learning with Errors (LWE) assumption. However, we know of no way, without obfuscation, of constructing a scheme that can evaluate circuits of unbounded depth. In this paper, we present an ABFHE scheme that can evaluate circuits of unbounded depth but with one limitation: there is a bound N on the number of inputs that can be used in a circuit evaluation. The bound N could be thought of as a bound on the number of independent senders. Our scheme allows N to be exponentially large, so we can set the parameters so that there is no limitation on the number of inputs in practice. Our construction relies on multi-key FHE and leveled ABFHE, both of which have been realized from LWE, and therefore we obtain a concrete scheme that is secure under LWE.
Last updated:  2016-10-24
Haraka v2 - Efficient Short-Input Hashing for Post-Quantum Applications
Stefan Kölbl, Martin M. Lauridsen, Florian Mendel, Christian Rechberger
Recently, many efficient cryptographic hash function design strategies have been explored, not least because of the SHA-3 competition. These designs are, almost exclusively, geared towards high performance on long inputs. However, various applications exist where the performance on short (fixed length) inputs matters more. Such hash functions are the bottleneck in hash-based signature schemes like SPHINCS or XMSS, which is currently under standardization. Secure functions specifically designed for such applications are scarce. We attend to this gap by proposing two short-input hash functions (or rather simply compression functions). By utilizing AES instructions on modern CPUs, our proposals are the fastest on such platforms, reaching throughputs below one cycle per hashed byte even for short inputs, while still having a very low latency of less than 60 cycles. Under the hood, this result comes with several innovations. First, we study whether the number of rounds for our hash functions can be reduced if only second-preimage resistance (and not collision resistance) is required. The conclusion is: only a little. Second, since their inception, AES-like designs allow for supportive security arguments by means of counting and bounding the number of active S-boxes. However, this ignores powerful attack vectors using truncated differentials, including the powerful rebound attacks. We develop a general tool-based method to include arguments against attack vectors using truncated differentials.
Last updated:  2016-02-07
A Maiorana-McFarland Construction of a GBF on Galois ring
Shashi Kant Pandey, P. R. Mishra, B. K. Dass
Bent functions exhibit some of the most important properties among combinatorial objects. Their links to combinatorics, cryptography and coding theory attract the scientific community to construct new classes of bent functions. The complete characterisation of bent functions remains unexplored, but several constructions over different algebraic structures are in progress. In this paper we propose a generalized Maiorana-McFarland construction of a bent function over a Galois ring.
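For orientation, the classical (binary) Maiorana-McFarland construction that the paper generalizes to Galois rings is $f(x, y) = \langle x, \pi(y)\rangle \oplus g(y)$ with $\pi$ a permutation of $\mathrm{GF}(2)^m$ and $g$ arbitrary; such $f$ is bent, i.e., its Walsh spectrum is flat. The sketch below verifies this for a small illustrative choice of $\pi$ and $g$ (it is the binary prototype, not the Galois-ring construction of the paper).

```python
# Binary Maiorana-McFarland sketch: f(x, y) = <x, pi(y)> XOR g(y) on
# n = 2m variables, with pi a permutation of GF(2)^m.  Bentness means
# |W_f(a, b)| = 2^m for every point of the Walsh spectrum.  pi and g
# below are arbitrary illustrative choices.

m = 2
M = 1 << m                  # size of GF(2)^m as integers 0..3

pi = [2, 0, 3, 1]           # a permutation of {0, ..., 3}
g = [0, 1, 1, 0]            # any Boolean function on GF(2)^m

def dot(a, b):
    """Inner product mod 2 of two m-bit integers."""
    return bin(a & b).count("1") & 1

def f(x, y):
    return dot(x, pi[y]) ^ g[y]

def walsh(a, b):
    """Walsh transform of f at the point (a, b)."""
    return sum((-1) ** (f(x, y) ^ dot(a, x) ^ dot(b, y))
               for x in range(M) for y in range(M))

flat = all(abs(walsh(a, b)) == M for a in range(M) for b in range(M))
print(flat)
```

The construction works because fixing $y$ makes $f$ an affine function of $x$, so each Walsh coefficient collapses to a single term of magnitude $2^m$; the paper carries this idea over to functions on Galois rings.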
Last updated:  2016-02-05
Provable Security Evaluation of Structures against Impossible Differential and Zero Correlation Linear Cryptanalysis
Bing Sun, Meicheng Liu, Jian Guo, Vincent Rijmen, Ruilin Li
Impossible differential and zero correlation linear cryptanalysis are two of the most important cryptanalytic attack vectors. To characterize the impossible differentials and zero correlation linear hulls which are independent of the choices of the non-linear components, Sun et al. proposed the structure deduced by a block cipher at CRYPTO 2015. Based on that, we concentrate in this paper on the security of the SPN structure and Feistel structure with SP-type round functions. Firstly, we prove that for an SPN structure, if \alpha_1\rightarrow\beta_1 and \alpha_2\rightarrow\beta_2 are possible differentials, \alpha_1|\alpha_2\rightarrow\beta_1|\beta_2 is also a possible differential, i.e., the OR "|" operation preserves differentials. Secondly, we show that for an SPN structure, there exists an r-round impossible differential if and only if there exists an r-round impossible differential \alpha\not\rightarrow\beta where the Hamming weights of both \alpha and \beta are 1. Thus for an SPN structure operating on m bytes, the computation complexity for deciding whether there exists an impossible differential can be reduced from O(2^{2m}) to O(m^2). Thirdly, we associate a primitive index with the linear layers of SPN structures. Based on matrix theory over integer rings, we prove that the length of impossible differentials of an SPN structure is upper bounded by the primitive index of the linear layers. As a result we show that, unless the details of the S-boxes are considered, there do not exist 5-round impossible differentials for the AES and ARIA. Lastly, based on the links between impossible differentials and zero correlation linear hulls, we project these results on impossible differentials to zero correlation linear hulls. It is interesting to note that some of our results also apply to the Feistel structures with SP-type round functions.
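The primitive index used in the abstract's bound is a standard notion from nonnegative matrix theory: the smallest $r$ such that every entry of $M^r$ is positive, where $M$ is a nonnegative characteristic matrix describing which output bytes of the linear layer can depend on which input bytes. The sketch below computes it for an illustrative circulant dependency pattern (not the actual linear layer of any cipher).

```python
# Primitive index of a nonnegative matrix: the smallest r such that
# every entry of M^r is strictly positive.  For an SPN structure, the
# abstract upper-bounds the length of impossible differentials by this
# quantity of the linear layer's characteristic matrix.  The 4x4 matrix
# below is an arbitrary illustration.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def primitive_index(M, limit=64):
    """Smallest r with (M^r)_{ij} > 0 for all i, j; None if none <= limit."""
    P = M
    for r in range(1, limit + 1):
        if all(e > 0 for row in P for e in row):
            return r
        P = mat_mul(P, M)
    return None

# A circulant 0/1 dependency pattern: byte i influences bytes i and i+1.
M = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 0, 1]]
print(primitive_index(M))
```

Once $M^r$ is entrywise positive, every output byte can depend on every input byte after $r$ rounds, which is why no structural (S-box-independent) impossible differential can span more rounds than the primitive index.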