All papers in 2012 (733 results)

Last updated:  2013-05-13
Reusable Garbled Circuits and Succinct Functional Encryption
Shafi Goldwasser, Yael Kalai, Raluca Ada Popa, Vinod Vaikuntanathan, Nickolai Zeldovich
Garbled circuits, introduced by Yao in the mid-1980s, allow computing a function f on an input x without leaking anything about f or x besides f(x). Garbled circuits have found numerous applications, but every known construction suffers from one limitation: it offers no security if used on multiple inputs x. In this paper, we construct for the first time reusable garbled circuits. The key building block is a new succinct single-key functional encryption scheme. Functional encryption is an ambitious primitive: given an encryption Enc(x) of a value x, and a secret key sk_f for a function f, anyone can compute f(x) without learning any other information about x. We construct, for the first time, a succinct functional encryption scheme for any polynomial-time function f, where succinctness means that the ciphertext size does not grow with the size of the circuit for f, but only with its depth. The security of our construction is based on the intractability of the Learning with Errors (LWE) problem and holds as long as an adversary has access to a single key sk_f (or even an a priori bounded number of keys for different functions). Building on our succinct single-key functional encryption scheme, we show several new applications in addition to reusable garbled circuits, such as a paradigm for general function obfuscation which we call token-based obfuscation, homomorphic encryption for a class of Turing machines where the evaluation runs in input-specific time rather than worst-case time, and a scheme for delegating computation which is publicly verifiable and maintains the privacy of the computation.
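As a rough illustration of why classical garbling is one-time (our own toy Python sketch, not this paper's construction; all names and parameters are illustrative):

```python
# Toy one-gate Yao garbling (illustrative sketch only): evaluating the same
# garbled AND gate on two different inputs hands the evaluator both labels
# of each input wire, breaking privacy -- the reuse problem this paper solves.
import os, random, hashlib

def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    lab = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}  # labels for 0/1
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            plain = lab["c"][va & vb] + b"\x00" * 16            # label + check pad
            rows.append(xor(H(lab["a"][va], lab["b"][vb]), plain))
    random.shuffle(rows)                                        # hide the truth table
    return lab, rows

def evaluate(rows, la: bytes, lb: bytes) -> bytes:
    for row in rows:
        plain = xor(row, H(la, lb))
        if plain[16:] == b"\x00" * 16:                          # pad checks out
            return plain[:16]
    raise ValueError("no row decrypted")

lab, rows = garble_and()
evaluate(rows, lab["a"][0], lab["b"][1])   # a single evaluation is fine
# A second evaluation on a different input gives the evaluator BOTH labels of
# each input wire, so it can decrypt all four rows and learn the whole gate.
evaluate(rows, lab["a"][1], lab["b"][0])
```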
Last updated:  2023-08-10
Non-Interactive Key Exchange
Eduarda S. V. Freire, Dennis Hofheinz, Eike Kiltz, Kenneth G. Paterson
Non-interactive key exchange (NIKE) is a fundamental but much-overlooked cryptographic primitive. It appears as a major contribution in the ground-breaking paper of Diffie and Hellman, but NIKE has remained largely unstudied since then. In this paper, we provide different security models for this primitive and explore the relationships between them. We then give constructions for secure NIKE in the Random Oracle Model based on the hardness of factoring, and in the standard model based on the hardness of a variant of the decisional Bilinear Diffie-Hellman problem for asymmetric pairings. We also study the relationship between NIKE and public key encryption (PKE), showing that a secure NIKE scheme can be generically converted into an IND-CCA secure PKE scheme. This conversion also illustrates the fundamental nature of NIKE in public key cryptography.
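For orientation, the canonical example of a NIKE is textbook Diffie-Hellman, sketched below with toy parameters (our own illustration; the paper's constructions and security models go well beyond this):

```python
# Textbook Diffie-Hellman viewed as a NIKE: each party publishes one key,
# and any two parties derive a shared key without ever interacting.
import hashlib, secrets

P = 2**127 - 1   # toy prime modulus; real systems use standardized groups
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def shared_key(my_sk: int, their_pk: int, id_a: bytes, id_b: bytes) -> str:
    # combine own secret with the other party's published key, hash in both ids
    z = pow(their_pk, my_sk, P)
    ids = b"|".join(sorted((id_a, id_b)))
    return hashlib.sha256(ids + z.to_bytes(16, "big")).hexdigest()

ask, apk = keygen()
bsk, bpk = keygen()
assert shared_key(ask, bpk, b"alice", b"bob") == shared_key(bsk, apk, b"alice", b"bob")
```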
Last updated:  2013-01-01
Time-memory Trade-offs for Near-collisions
Gaëtan Leurent
In this work we consider generic algorithms to find near-collisions for a hash function. If we consider only hash computations, it is easy to compute a lower bound for the complexity of near-collision algorithms, and to build a matching algorithm. However, this algorithm needs a lot of memory, and makes more than 2^{n/2} memory accesses. Recently, several algorithms have been proposed without this memory requirement; they require more hash evaluations, but the attack is actually more practical. They can be divided into two main categories: some are based on truncation, and some are based on covering codes. In this paper, we give new insight into the generic complexity of a near-collision attack. First, we consider time-memory trade-offs for truncation-based algorithms. For a practical implementation, it seems reasonable to assume that some memory is available, and we show that taking advantage of this memory can significantly reduce the complexity. Second, we show a new method combining truncation and covering codes. The new algorithm is always at least as good as the previous works, and often gives a significant improvement. We illustrate our results by giving a 10-near-collision for MD5: our algorithm has a complexity of 2^45.4 using 1TB of memory, while the best previous algorithms require significantly more hash evaluations.
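A minimal sketch of the truncation idea (our own toy code, not the paper's optimized algorithms; this table-based variant deliberately spends memory):

```python
# Truncation-based near-collision search: look for full collisions on the
# hash with some bit positions masked out; any colliding pair can then
# differ only in the masked positions, i.e. it is a near-collision.
import hashlib, itertools

N_BITS, MASKED = 32, 8                        # 32-bit toy hash, ignore low 8 bits
MASK = ((1 << N_BITS) - 1) ^ ((1 << MASKED) - 1)

def h(msg: bytes) -> int:                     # truncated MD5 as a toy hash
    return int.from_bytes(hashlib.md5(msg).digest()[:4], "big")

def find_near_collision():
    seen = {}
    for i in itertools.count():
        m = i.to_bytes(8, "big")
        key = h(m) & MASK
        if key in seen:
            return seen[key], m
        seen[key] = m

m1, m2 = find_near_collision()
assert (h(m1) ^ h(m2)) & MASK == 0            # differences confined to masked bits
```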
Last updated:  2013-03-26
Twisted Edwards-Form Elliptic Curve Cryptography for 8-bit AVR-based Sensor Nodes
Dalin Chu, Johann Großschädl, Zhe Liu, Volker Müller, Yang Zhang
Wireless Sensor Networks (WSNs) pose a number of unique security challenges that demand innovation in several areas including the design of cryptographic primitives and protocols. Despite recent progress, the efficient implementation of Elliptic Curve Cryptography (ECC) for WSNs is still a very active research topic and techniques to further reduce the time and energy cost of ECC are eagerly sought. This paper presents an optimized ECC implementation that we developed from scratch to comply with the severe resource constraints of 8-bit sensor nodes such as the MICAz and IRIS motes. Our ECC software uses Optimal Prime Fields (OPFs) as underlying algebraic structure and supports two different families of elliptic curves, namely Weierstraß-form and twisted Edwards-form curves. Due to the combination of efficient field arithmetic and fast group operations, we achieve an execution time of $5.8 \cdot 10^6$ clock cycles for a full 158-bit scalar multiplication on an 8-bit ATmega128 microcontroller, which is 2.78 times faster than the widely-used TinyECC library. Our implementation also shows that the energy cost of scalar multiplication on a MICAz (or IRIS) mote amounts to just 19 mJ when using a twisted Edwards curve over a 160-bit OPF. This result compares fairly well with the energy figures of two recently-presented hardware designs of ECC based on twisted Edwards curves.
Last updated:  2015-04-23
On the Impossibility of Approximate Obfuscation and Applications to Resettable Cryptography
Nir Bitansky, Omer Paneth
The traditional notion of {\em program obfuscation} requires that an obfuscation $\tilde{f}$ of a program $f$ computes the exact same function as $f$, but beyond that, the code of $\tilde{f}$ should not leak any information about $f$. This strong notion of {\em virtual black-box} security was shown by Barak et al. (CRYPTO 2001) to be impossible to achieve, for certain {\em unobfuscatable function families}. The same work raised the question of {\em approximate obfuscation}, where the obfuscated $\tilde{f}$ is only required to approximate $f$; that is, $\tilde{f}$ only agrees with $f$ on some input distribution. We show that, assuming {\em trapdoor permutations}, there exist families of {\em robust unobfuscatable functions} for which even approximate obfuscation is impossible. That is, obfuscation is impossible even if the obfuscated $\tilde{f}$ only agrees with $f$ with probability slightly more than $\frac{1}{2}$, on a uniformly sampled input (below $\frac{1}{2}$-agreement, the function obfuscated by $\tilde{f}$ is not uniquely defined). Additionally, we show that, assuming only one-way functions, we can rule out approximate obfuscation where $\tilde{f}$ is not allowed to err, but may refuse to compute $f$ with probability close to $1$. We then demonstrate the power of robust unobfuscatable functions by exhibiting new implications to resettable protocols that so far have been out of our reach. Concretely, we obtain a new non-black-box simulation technique that reduces the assumptions required for resettably-sound zero-knowledge protocols to {\em one-way functions}, and also reduces the round complexity. We also present a new simplified construction of simultaneously resettable zero-knowledge protocols that does not rely on collision-resistant hashing. Finally, we construct a three-message simultaneously resettable witness-indistinguishable {\em argument of knowledge} (with a non-black-box knowledge extractor). Our constructions are based on a special kind of ``resettable slots'' that are useful for a non-black-box simulator, but not for a resetting prover.
Last updated:  2013-01-01
Defensive Leakage Camouflage
E. Brier, Q. Fortier, R. Korkikian, K. W. Magld, D. Naccache, G. Ozari de Almeida, A. Pommellet, A. H. Ragab, J. Vuillemin
This paper considers the transfer of digital data over {\sl leaky and noisy} communication channels. We develop defensive strategies exploiting the fact that noise prevents the attacker from accurately measuring leakage. The defense strategy described in this paper pairs each useful data element $k$ with a camouflage value $v$ and simultaneously transmits both $k$ and $v$ over the channel. This releases an emission $e(k,v)$. We wish to select the camouflage values $v(k)$ as a function of $k$ in a way that makes the quantities $e(k,v(k))$ as {\sl indistinguishable} as possible from each other. We model the problem and show that optimal camouflage values can be computed from side-channels under very weak physical assumptions. The proposed technique is hence applicable to a wide range of readily available technologies. We propose algorithms for computing optimal camouflage values when the number of samples per trace is moderate (typically $\leq 6$) and justify our models by a statistical analysis. We also provide experimental results obtained using FPGAs.
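A toy instance of the camouflage idea under an idealized Hamming-weight emission model (our own simplification; the paper computes optimal camouflage values from measured side-channel traces rather than assuming a model):

```python
# Toy camouflage: pairing each data byte k with its bitwise complement makes
# the combined emission e(k, v(k)) constant under a Hamming-weight leakage
# model, so all values look identical to a noisy probe.
def camouflage(k: int) -> int:
    return ~k & 0xFF                                 # v(k) = complement of k

def emission(k: int, v: int) -> int:
    return bin(k).count("1") + bin(v).count("1")     # idealized e(k, v)

assert all(emission(k, camouflage(k)) == 8 for k in range(256))
```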
Last updated:  2012-12-28
On the Impossibility of Sender-Deniable Public Key Encryption
Dana Dachman-Soled
The primitive of deniable encryption was first introduced by Canetti et al. (CRYPTO, 1997). Deniable encryption is a regular public key encryption scheme with the added feature that after running the protocol honestly and transmitting a message $m$, both Sender and Receiver may produce random coins showing that the transmitted ciphertext was an encryption of any message $m'$ in the message space. Deniable encryption is a key tool for constructing incoercible protocols, since it allows a party to send one message and later provide apparent evidence to a coercer that a different message was sent. In addition, deniable encryption may be used to obtain \emph{adaptively}-secure multiparty computation (MPC) protocols and is secure under \emph{selective-opening} attacks. Different flavors such as sender-deniable and receiver-deniable encryption, where only the Sender or Receiver can produce fake random coins, have been considered. Recently, several open questions regarding the feasibility of deniable encryption have been resolved (cf. (O'Neill et al., CRYPTO, 2011), (Bendlin et al., ASIACRYPT, 2011)). A fundamental remaining open question is whether it is possible to construct sender-deniable encryption schemes with super-polynomial security, where an adversary has negligible advantage in distinguishing real and fake openings. The primitive of simulatable public key encryption (PKE), introduced by Damgård and Nielsen (CRYPTO, 2000), is a public key encryption scheme with additional properties that allow oblivious sampling of public keys and ciphertexts. It is one of the low-level primitives used to construct adaptively-secure MPC protocols and was used by O'Neill et al. in their construction of bi-deniable encryption in the multi-distributional model (CRYPTO, 2011). Moreover, the original construction of sender-deniable encryption with polynomial security given by Canetti et al. can be instantiated with simulatable PKE. Thus, a natural question to ask is whether it is possible to construct sender-deniable encryption with \emph{super-polynomial security} from simulatable PKE. In this work, we investigate the possibility of constructing sender-deniable public key encryption from the primitive of simulatable PKE in a black-box manner. We show that, in fact, there is no black-box construction of sender-deniable encryption with super-polynomial security from simulatable PKE. This indicates that the original construction of sender-deniable public key encryption given by Canetti et al. is in some sense optimal, since improving on it will require the use of non-black-box techniques, stronger underlying assumptions or interaction.
Last updated:  2012-12-30
Applications of Polynomial Properties to Verifiable Delegation of Computation and Electronic Voting
Marc Obrador, Paz Morillo, Sandra Guasch
This paper presents protocols for two types of schemes, verifiable delegation of computation and remote electronic voting, based on polynomial properties. Our protocols for verifiable delegation of computation are aimed at the efficient evaluation of polynomials, working in settings where the polynomial and/or the input are kept secret from the server. Our proposal for remote electronic voting allows the well-formedness of a vote to be verified upon reception at the voting server, with little computational overhead for the voter.
Last updated:  2012-12-28
Cryptanalysis of an efficient certificateless two-party authenticated key agreement protocol
Qingfeng Cheng
Recently, He et al. (Computers and Mathematics with Applications, 2012, 64(6): 1914-1926) proposed a new efficient certificateless two-party authenticated key agreement protocol and claimed that it is provably secure in the extended Canetti-Krawczyk (eCK) model. In this paper, we show that their protocol is insecure: a type I adversary who obtains one party's ephemeral private key can impersonate that party to cheat the other party and compute the shared session key successfully. To overcome this weakness, we also propose a simple countermeasure.
Last updated:  2013-01-06
A Coding-Theoretic Approach to Recovering Noisy RSA Keys
Kenneth G. Paterson, Antigoni Polychroniadou, Dale L. Sibborn
Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were extended by Henecka, May and Meurer (Crypto 2010) to produce an algorithm that works when all the key bits are subject to error. In this paper, we bring a coding-theoretic viewpoint to bear on the problem of noisy RSA key recovery. This viewpoint allows us to cast the previous work as part of a more general framework. In turn, this enables us to explain why the previous algorithms do not solve the motivating cold boot problem, and to design a new algorithm that does (and more). In addition, we are able to use concepts and tools from coding theory -- channel capacity, list decoding algorithms, and random coding techniques -- to derive bounds on the performance of the previous algorithms and our new algorithm.
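The flavor of the capacity argument can be conveyed with a back-of-the-envelope calculation (our own illustration of the viewpoint; the paper derives its bounds more carefully):

```latex
% Model each noisy key bit as the output of a binary symmetric channel with
% crossover probability \delta. The channel capacity is
C = 1 - H_2(\delta), \qquad
H_2(\delta) = -\delta \log_2 \delta - (1-\delta)\log_2(1-\delta).
% A redundant RSA private key (p, q, d, d_p, d_q) stores roughly five bits
% for every bit of entropy, i.e. rate R = 1/5, so reliable recovery requires
R \le C \iff H_2(\delta) \le \tfrac{4}{5} \iff \delta \lesssim 0.243.
```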
Last updated:  2012-12-27
Two Exponentiation Algorithms Resistant to Cross-correlation Power Analysis and to Other Known Attacks
Yaacov Belenky, Zeev Geyzel, Michael Kara-Ivanov, Avraham Entelis
In order to prevent the SPA (Simple Power Analysis) attack against modular exponentiation algorithms, a multiply-always implementation is generally used. Witteman et al. introduced in \cite{WI} a new cross-correlation power analysis attack against the multiply-always implementation. We suggest two new algorithms that resist this attack as well as other known attacks. The first algorithm is an alternative approach to the exponentiation algorithms used in cryptography, which usually receive as input some representation (e.g. binary) of the exponent. In our approach both the exponent and the result are functions (not necessarily easily invertible) of the exponentiation algorithm's input. We show that this approach can achieve good performance while resisting several known attacks, especially cross-correlation power analysis. It is particularly relevant for cryptographic schemes in which the private exponent can be chosen arbitrarily. The second exponentiation algorithm that we present may be preferable for use with RSA in certain settings. It is resistant to the cross-correlation power analysis attack, the C safe-error attack, and other attacks, even though it involves squaring operations.
Last updated:  2015-10-20
Hardness Preserving Reductions via Cuckoo Hashing
Itay Berman, Iftach Haitner, Ilan Komargodski, Moni Naor
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom functions families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a "birthday attack": after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
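For reference, plain cuckoo hashing, the combinatorial tool the construction builds on, is sketched below (our own illustration; the hardness-preserving PRF constructions involve more than this):

```python
# Standard cuckoo hashing: two tables, two hash functions; an insert kicks
# the current occupant to its alternate table, so lookups touch at most two
# cells and collisions are resolved without chaining.
import random

class CuckooTable:
    def __init__(self, size: int, max_kicks: int = 100):
        self.tables = [[None] * size, [None] * size]
        self.size, self.max_kicks = size, max_kicks
        self.seeds = (random.getrandbits(64), random.getrandbits(64))

    def _slot(self, i: int, x) -> int:
        return hash((self.seeds[i], x)) % self.size

    def insert(self, x):
        i = 0
        for _ in range(self.max_kicks):
            j = self._slot(i, x)
            if self.tables[i][j] is None:
                self.tables[i][j] = x
                return
            self.tables[i][j], x = x, self.tables[i][j]  # evict the occupant
            i = 1 - i                                    # reinsert it in the other table
        raise RuntimeError("eviction cycle: rehash with fresh seeds")

    def __contains__(self, x) -> bool:
        return any(self.tables[i][self._slot(i, x)] == x for i in (0, 1))

t = CuckooTable(64)
for word in ["pad", "key", "tag", "mac"]:
    t.insert(word)
assert "key" in t and "iv" not in t
```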
Last updated:  2012-12-27
On the (In)security of Fischlin's Paradigm
Prabhanjan Ananth, Raghav Bhaskar, Vipul Goyal, Vanishree Rao
The Fiat-Shamir paradigm was proposed as a way to remove interaction from 3-round proof of knowledge protocols and derive secure signature schemes. This generic transformation leads to very efficient schemes and has thus grown quite popular. However, this transformation is proven secure only in the random oracle model. In FOCS 2003, Goldwasser and Kalai showed that this transformation is provably insecure in the standard model by presenting a counterexample of a 3-round protocol, the Fiat-Shamir transformation of which is (although provably secure in the random oracle model) insecure in the standard model, thus showing that the random oracle is uninstantiable. In particular, for every hash function that is used to replace the random oracle, the resulting signature scheme is existentially forgeable. This result was shown by relying on the non-black-box techniques of Barak (FOCS 2001). An alternative to the Fiat-Shamir paradigm was proposed by Fischlin in Crypto 2005. Fischlin's transformation can be applied to any so-called 3-round ``Fiat-Shamir proof of knowledge'' and can be used to derive non-interactive zero-knowledge proofs of knowledge as well as signature schemes. An attractive property of this transformation is that it provides online extractability (i.e., the extractor works without having to rewind the prover). Fischlin remarks that in comparison to the Fiat-Shamir transformation, his construction tries to ``decouple the hash function from the protocol flow'' and hence, the counterexample in the work of Goldwasser and Kalai does not seem to carry over to this setting. In this work, we show a counterexample to Fischlin's transformation. In particular, we construct a 3-round Fiat-Shamir proof of knowledge (on which Fischlin's transformation is applicable), and then present an adversary against both the soundness of the resulting non-interactive zero-knowledge proof and the unforgeability of the resulting signature scheme. Our attacks are successful except with negligible probability for any hash function that is used to instantiate the random oracle, provided that there is an a priori (polynomial) bound on the running time of the hash function. By choosing the right bound, secure instantiation of Fischlin's transformation with most practical cryptographic hash functions can be ruled out. The techniques used in our work are quite unrelated to the ones used in the work of Goldwasser and Kalai. Our primary technique is to bind the protocol flow with the hash function if the code of the hash function is available. We believe that our ideas are of independent interest and may be applicable in other related settings.
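For orientation, textbook Fiat-Shamir applied to Schnorr identification is sketched below with toy parameters (our own illustration, unrelated to the counterexample constructed in the paper):

```python
# Fiat-Shamir on Schnorr identification: the verifier's random challenge is
# replaced by a hash of the prover's first message (and the signed message).
import hashlib, secrets

p, q, g = 23, 11, 4          # toy parameters: g has order q = 11 modulo p = 23

def _challenge(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)    # secret x, public y = g^x

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)          # prover's first message (commitment)
    e = _challenge(r, msg)    # Fiat-Shamir: the hash plays the verifier
    return e, (k + x * e) % q

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    r = pow(g, s, p) * pow(y, -e, p) % p     # recompute g^k
    return e == _challenge(r, msg)

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```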
Last updated:  2013-01-07
Faster index calculus for the medium prime case. Application to 1175-bit and 1425-bit finite fields
Antoine Joux
Many index calculus algorithms generate multiplicative relations between smoothness basis elements by using a process called {\it Sieving}. This process filters potential candidate relations very quickly, without spending too much time on bad candidates. However, from an asymptotic point of view, there is not much difference between sieving and straightforward testing of candidates: even when sieving, a small amount of time is spent on each bad candidate, so asymptotically the total number of candidates contributes to the complexity. In this paper, we introduce a new technique, {\it Pinpointing}, which allows us to construct multiplicative relations much faster, thus reducing the asymptotic complexity of relation construction. Unfortunately, we only know how to implement this technique for finite fields which contain a medium-sized subfield. When applicable, this method improves the asymptotic complexity of the index calculus algorithm in the cases where the sieving phase dominates. In practice, it gives a very interesting boost to the performance of state-of-the-art algorithms. We illustrate the feasibility of the method with discrete logarithm records in medium prime finite fields of sizes 1175~bits and 1425~bits.
Last updated:  2012-12-27
An ideal multi-secret sharing scheme based on minimal privileged coalitions
Yun Song, Zhihui Li
How to construct an ideal multi-secret sharing scheme for general access structures is a difficult problem. In this paper, we solve an open problem recently proposed by Spiez et al. [Finite Fields and Their Applications, 2011(17), 329-342], namely, to design an algorithm that finds privileged coalitions of any length, if such coalitions exist. Furthermore, in terms of privileged coalitions, we show that most existing multi-secret sharing schemes based on Shamir threshold secret sharing are not perfect, by analyzing Yang et al.'s scheme and Pang et al.'s scheme. Finally, based on the algorithm mentioned above, we devise an ideal multi-secret sharing scheme for families of access structures, which admits a richer collection of authorized sets than threshold schemes.
Last updated:  2013-09-15
Succinct Non-Interactive Arguments via Linear Interactive Proofs
Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Rafail Ostrovsky, Omer Paneth
\emph{Succinct non-interactive arguments} (SNARGs) enable verifying NP statements with lower complexity than required for classical NP verification. Traditionally, the focus has been on minimizing the length of such arguments; more recently, researchers have also focused on minimizing verification time, drawing motivation from the problem of delegating computation. A common relaxation is a \emph{preprocessing} SNARG, which allows the verifier to conduct an expensive offline phase that is independent of the statement to be proven later. Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly-verifiable, proofs consist of only $O(1)$ encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement. Additionally, these constructions seem to have ``escaped the hegemony'' of probabilistically-checkable proofs (PCPs) as a basic building block of succinct arguments. We present a general methodology for the construction of preprocessing SNARGs, as well as resulting concrete efficiency improvements. Our contribution is three-fold: (1) We introduce and study a natural extension of the interactive proof model that considers \emph{algebraically-bounded} provers; this new setting is analogous to the common study of algebraically-bounded ``adversaries'' in other fields, such as pseudorandomness and randomness extraction. More concretely, in this work we focus on linear (or affine) provers, and provide several constructions of (succinct two-message) \emph{linear-interactive proofs} (LIPs) for NP. Our constructions are based on general transformations applied to both \emph{linear} PCPs (LPCPs) and traditional ``unstructured'' PCPs. (2) We give conceptually simple cryptographic transformations from LIPs to preprocessing SNARGs, whose security can be based on different forms of \emph{linear targeted malleability} (implied by previous knowledge assumptions). Our transformations convert arbitrary (two-message) LIPs into designated-verifier SNARGs, and LIPs with degree-bounded verifiers into publicly-verifiable SNARGs. We also extend our methodology to obtain \emph{zero-knowledge} LIPs and SNARGs. Our techniques yield SNARGs \emph{of knowledge} and thus can benefit from known recursive composition and bootstrapping techniques. (3) Following this methodology, we exhibit several constructions achieving new efficiency features, such as ``single-ciphertext preprocessing SNARGs'' and improved succinctness-soundness tradeoffs. We also offer a new perspective on existing constructions of preprocessing SNARGs, revealing a direct connection of these to LPCPs and LIPs.
Last updated:  2012-12-27
Practical Yet Universally Composable Two-Server Password-Authenticated Secret Sharing
Jan Camenisch, Anna Lysyanskaya, Gregory Neven
Password-authenticated secret sharing (PASS) schemes, first introduced by Bagherzandi et al. at CCS 2011, allow users to distribute data among several servers so that the data can be recovered using a single human-memorizable password, but no single server (or collusion of servers up to a certain size) can mount an off-line dictionary attack on the password or learn anything about the data. We propose a new, universally composable (UC) security definition for the two-server case (2PASS) in the public-key setting that addresses a number of relevant limitations of the previous, non-UC definition. For example, our definition makes no prior assumptions on the distribution of passwords, preserves security when honest users mistype their passwords, and guarantees secure composition with other protocols in spite of the unavoidable non-negligible success rate of online dictionary attacks. We further present a concrete 2PASS protocol and prove that it meets our definition. Given the strong security guarantees, our protocol is surprisingly efficient: in its most efficient instantiation under the DDH assumption in the random-oracle model, it requires fewer than twenty elliptic-curve exponentiations on the user's device. We achieve our results by careful protocol design and by exclusively focusing on the two-server public-key setting.
Last updated:  2012-12-31
Attribute-Based Functional Encryption on Lattices
Xavier Boyen
We introduce a broad lattice manipulation technique for expressive cryptography, and use it to realize functional encryption for access structures from post-quantum hardness assumptions. Specifically, we build an efficient key-policy attribute-based encryption scheme, and prove its security in the selective sense from learning-with-errors intractability in the standard model.
Last updated:  2013-01-03
New Impossible Differential Attack on $\text{SAFER}_{+}$ and $\text{SAFER}_{++}$
Jingyuan Zhao, Meiqin Wang, Jiazhe Chen, Yuliang Zheng
SAFER+ was a candidate block cipher for the AES with a 128-bit block size and variable key sizes of 128, 192 or 256 bits. Bluetooth uses customized versions of SAFER+ for security. The numbers of rounds for SAFER+ with key sizes of 128, 192 and 256 bits are 8, 12 and 16, respectively. SAFER++, a variant of SAFER+, was among the cryptographic primitives selected for the second phase of the NESSIE project. Its block size is 128 bits and its key size can be either 128 or 256 bits. The numbers of rounds are 7 for SAFER++/128 and 10 for SAFER++/256. Both ciphers use the PHT as their linear transformation. In this paper, we take advantage of properties of the PHT and the S-boxes to identify 3.75-round impossible differentials for SAFER++ and 2.75-round impossible differentials for SAFER+, which lead to impossible differential attacks on 4-round SAFER+/128(256), 5-round SAFER++/128 and 5.5-round SAFER++/256. Our attacks significantly improve previously known impossible differential attacks on 3.75-round SAFER+/128(256) and SAFER++/128(256). Our attacks on SAFER+/128(256) and SAFER++/256 represent the best currently known attacks in terms of the number of rounds.
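For reference, the 2-point Pseudo-Hadamard Transform at the heart of SAFER's linear layer (textbook definition; the attacks hinge on how differences propagate through layers of this map and the S-boxes):

```python
# The 2-point PHT on bytes and its inverse, as used in SAFER's diffusion layer.
def pht(a: int, b: int):
    return (2 * a + b) % 256, (a + b) % 256

def pht_inv(x: int, y: int):
    return (x - y) % 256, (2 * y - x) % 256

# sanity check: the transform is a permutation of byte pairs
assert all(pht_inv(*pht(a, b)) == (a, b) for a in range(256) for b in range(256))
```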
Last updated:  2013-03-22
Discrete Gaussian Leftover Hash Lemma over Infinite Domains
Shweta Agrawal, Craig Gentry, Shai Halevi, Amit Sahai
The classic Leftover Hash Lemma (LHL) is one of the most useful tools in cryptography, and is often used to argue that certain distributions arising from modular subset-sums are close to uniform over some finite domain. Though extremely useful and powerful in general, the applicability of the leftover hash lemma to lattice-based cryptography is limited for two reasons. First, typically the distributions we care about in lattice-based cryptography are {\em discrete Gaussians}, not uniform. Second, the elements chosen from these discrete Gaussian distributions lie in an infinite domain: a lattice rather than a finite field. In this work we prove a ``lattice world'' analog of the LHL over infinite domains, proving that certain ``generalized subset sum'' distributions are statistically close to well-behaved discrete Gaussian distributions, even without any modular reduction. Specifically, given many vectors $\{\vec x_i\}_{i=1}^m$ from some lattice $L\subset\mathbb{R}^n$, we analyze the probability distribution $\sum_{i=1}^m z_i \vec x_i$ where the integer vector $\vec z \in \mathbb{Z}^m$ is chosen from a discrete Gaussian distribution. We show that when the $\vec x_i$'s are ``random enough'' and the Gaussian from which the $\vec z$'s are chosen is ``wide enough'', then the resulting distribution is statistically close to a near-spherical discrete Gaussian over the lattice $L$. Beyond being interesting in its own right, this ``lattice-world'' analog of the LHL has applications for the new construction of multilinear maps \cite{GGH12}, where it is used to sample discrete Gaussians obliviously. Specifically, given encodings of the $\vec x_i$'s, it is used to produce an encoding of a near-spherical Gaussian distribution over the lattice. We believe that our new lemma will have other applications, and sketch some plausible ones in this work.
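A toy numerical illustration of the distribution being analyzed (our own sketch, not part of the proof):

```python
# Draw an integer vector z from a discrete Gaussian and form the
# "generalized subset sum" combination sum_i z_i * x_i over lattice vectors.
import math, random

def sample_dgauss(s: float, tail: float = 12.0) -> int:
    # rejection sampling from D_{Z,s}, mass proportional to exp(-pi z^2 / s^2)
    bound = int(math.ceil(tail * s))
    while True:
        z = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * z * z / (s * s)):
            return z

def gaussian_combination(xs, s: float):
    zs = [sample_dgauss(s) for _ in xs]
    dim = len(xs[0])
    return [sum(z * x[k] for z, x in zip(zs, xs)) for k in range(dim)]

# example: vectors in Z^3 combined with Gaussian weights of width 8
print(gaussian_combination([(1, 0, 2), (0, 1, 1), (3, 1, 0), (1, 1, 1)], 8.0))
```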
Last updated:  2012-12-27
Systematic Treatment of Remote Attestation
Aurelien Francillon, Quan Nguyen, Kasper B. Rasmussen, Gene Tsudik
Embedded computing devices (such as actuators, controllers and sensors of various sizes) increasingly permeate many aspects of modern life: from medical to automotive, from building and factory automation to weapons, from critical infrastructures to home entertainment. Despite their specialized nature as well as limited resources and connectivity, these devices are now becoming increasingly popular and attractive targets for various attacks, especially remote malware infestations. There have been a number of research proposals to detect and/or mitigate such attacks. They vary greatly in terms of application generality and underlying assumptions. However, one common theme is the need for Remote Attestation, a distinct security service that allows a trusted party (verifier) to check the internal state of a remote untrusted embedded device (prover). This paper provides a systematic treatment of Remote Attestation, starting with a precise definition of the desired service and proceeding to its systematic deconstruction into necessary and sufficient properties. These properties are, in turn, mapped into a minimal collection of hardware and software components that results in secure Remote Attestation. One distinguishing feature of this line of research is the need to prove (or at least argue) architectural minimality; this is rarely encountered in security research. This work also offers some insights into vulnerabilities of certain prior techniques and provides a promising platform for attaining more advanced security services and guarantees.
Last updated:  2012-12-27
On the Security of the Core of PRINCE Against Biclique and Differential Cryptanalysis
Farzaneh Abed, Eik List, Stefan Lucks
PRINCE is a modern involutive lightweight cipher proposed by Rechberger et al. in 2012. PRINCE uses a 64-bit core cipher, which holds the major encryption logic and is wrapped by two key additions; the security of the cipher therefore depends mainly on the security properties of this core. In this paper, we present an independent-biclique attack on the full version of the core of PRINCE, as well as a differential inside-out cryptanalysis of its round-reduced version.
Last updated:  2021-06-16
Unprovable Security of 2-Message Zero Knowledge
Kai-Min Chung, Edward Lui, Mohammad Mahmoody, Rafael Pass
Goldreich and Oren (JoC'94) show that only languages in BPP have 2-message zero-knowledge arguments. In this paper we consider weaker, super-polynomial simulation (SPS), notions of zero-knowledge. We present barriers to using black-box reductions for demonstrating soundness of 2-message protocols with efficient prover strategies satisfying SPS zero-knowledge. More precisely, if $poly(T(n))$-hard one-way functions exist for a super-polynomial $T(n)$, the following holds about 2-message efficient prover arguments over statements of length $n$. 1. Black-box reductions cannot prove soundness of 2-message $T(n)$-simulatable arguments based on any polynomial-time intractability assumption, unless the assumption can be broken in polynomial time. This complements known 2-message quasi-polynomial-time simulatable arguments using a quasi-polynomial-time reduction (Pass'03), and 2-message exponential-time simulatable proofs using a polynomial-time reduction (Dwork-Naor'00, Pass'03). 2. Black-box reductions cannot prove soundness of 2-message strong $T(n)$-simulatable arguments, even if the reduction and the challenger both can run in $poly(T(n))$-time, unless the assumption can be broken in $poly(T(n))$ time. Strong $T(\cdot)$-simulatability means that the output of the simulator is indistinguishable also for $poly(T(\cdot))$-size circuits, with a $negl(T(\cdot))$ indistinguishability gap. This complements known 3-message strong quasi-polynomial-time simulatable proofs (Blum'86, Canetti et al.'00), or 2-message quasi-polynomial-time simulatable arguments (Khurana-Sahai'17, Kalai-Khurana-Sahai'18) satisfying a relaxed notion of strong simulation where the distinguisher's size can be large, but the distinguishing gap is negligible in $n$.
Last updated:  2012-12-19
Non Observability in the Random Oracle Model
Prabhanjan Ananth, Raghav Bhaskar
The Random Oracle Model, introduced by Bellare and Rogaway, provides a method to heuristically argue about the security of cryptographic primitives and protocols. The basis of this heuristic is that secure hash functions are close enough to random functions in their behavior, and so, a primitive that is secure using a random function should continue to remain secure even when the random function is replaced by a real hash function. In the security proof, this setting is realized by modeling the hash function as a random oracle. However, this approach in particular also enables any reduction, reducing a hard problem to the existence of an adversary, to \emph{observe} the queries the adversary makes to its random oracle and to \emph{program} the responses that the oracle provides to these queries. While the issue of programmability of query responses has received a lot of attention in the literature, to the best of our knowledge, observability of the adversary's queries has not been identified as an artificial artefact of the Random Oracle Model. In this work, we study the security of several popular schemes when the security reduction cannot ``observe'' the adversary's queries to the random oracle, but can (possibly) continue to ``program'' the query responses. We first show that RSA-PFDH and Schnorr's signatures continue to remain secure when the security reduction is non-observing (NO reductions), which is not surprising as their proofs in the random oracle model rely on programmability. We also provide two example schemes, namely Fischlin's NIZK-PoK \cite{Fischlin05} and a non-interactive extractable commitment scheme, whose extractor algorithms seem to rely on observability in the random oracle model. While we prove that Fischlin's online extractors cannot exist when they are non-observing, our extractable commitment scheme continues to be secure even when the extractors are non-observing. We also introduce Non-Observing Non-Programming reductions, which we believe are closest to standard model reductions.
Last updated:  2012-12-19
Further results on the distinctness of binary sequences derived from primitive sequences modulo square-free odd integers
Qun-Xiong Zheng, Wen-Feng Qi
This paper studies the distinctness of primitive sequences over Z/(M) modulo 2, where M is an odd integer that is composite and square-free, and Z/(M) is the integer residue ring modulo M. A new sufficient condition is given for ensuring that primitive sequences generated by a primitive polynomial f(x) over Z/(M) are pairwise distinct modulo 2. This result improves on a recent result obtained in our previous paper [27]; consequently, the set of primitive sequences over Z/(M) that can be proven to be distinct modulo 2 is greatly enlarged.
Last updated:  2012-12-18
Calling out Cheaters: Covert Security With Public Verifiability
Gilad Asharov, Claudio Orlandi
We introduce the notion of covert security with public verifiability, building on the covert security model introduced by Aumann and Lindell (TCC 2007). Protocols that satisfy covert security guarantee that the honest parties involved in the protocol will notice any cheating attempt with some constant probability $\epsilon$. The idea behind the model is that the fear of being caught cheating will be enough of a deterrent to prevent any cheating attempt. However, in the basic covert security model, the honest parties are not able to persuade any third party (say, a judge) that cheating occurred. We propose (and formally define) an extension of the model where, when an honest party detects cheating, it also receives a certificate that can be published and used to persuade other parties, without revealing any information about the honest party's input. In addition, malicious parties cannot create fake certificates in an attempt to frame innocent parties. Finally, we construct a secure two-party computation protocol for any functionality $f$ that satisfies our definition, and our protocol is almost as efficient as the one of Aumann and Lindell. We believe that the fear of public humiliation or even legal consequences vastly exceeds the deterrent given by standard covert security. Therefore, even a small value of the deterrent factor $\epsilon$ will suffice in discouraging any cheating attempt. As the overall complexity of covert security and the parameter $\epsilon$ are inversely proportional to each other, we believe that the small price to pay to get the public verifiability property on top of covert security will be dominated by the efficiency gain obtained by using a smaller value $\epsilon$.
Last updated:  2012-12-18
Cryptanalysis of WIDEA
Gaëtan Leurent
WIDEA is a family of block ciphers designed by Junod and Macchetti in 2009 as an extension of IDEA to larger block sizes (256 and 512 bits for the main instances WIDEA-4 and WIDEA-8) and key sizes (512 and 1024 bits), with a focus on using them to design a hash function. WIDEA is based on the trusted IDEA design, and was expected to inherit its good security properties. WIDEA-w is composed of w parallel copies of the IDEA block cipher, with an MDS matrix to provide diffusion between them. In this paper we present low complexity attacks on WIDEA based on truncated differentials. We show a distinguisher for the full WIDEA with complexity only 2^65, and we use the distinguisher in a key-recovery attack with complexity w·2^68. We also show a collision attack on WIDEA-8 if it is used to build a hash function using the Merkle-Damgård mode of operation. The attacks exploit the parallel structure of WIDEA and the limited diffusion between the IDEA instances, using differential trails where the MDS diffusion layer is never active. In addition, we use structures of plaintext to reduce the data complexity.
Last updated:  2012-12-18
On the (In)security of the Fiat-Shamir Paradigm, Revisited
Dana Dachman-Soled, Abhishek Jain, Yael Tauman Kalai, Adriana Lopez-Alt
The Fiat-Shamir paradigm [CRYPTO'86] is a heuristic for converting 3-round identification schemes into signature schemes, and more generally, for collapsing rounds in public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and many researchers have studied its security (and insecurity). In this work, we continue this study. As our main result, we show that for many well studied interactive *proofs* (and arguments) the soundness of the Fiat-Shamir heuristic cannot be proven via a black-box reduction to any falsifiable assumption. Previously, the insecurity of this paradigm was exemplified only when applied to interactive arguments (as opposed to proofs). Using similar techniques, we also show a black-box impossibility result for Micali's CS-proofs [FOCS'94]. Namely, we prove that there exist PCPs such that for "sufficiently hard'' NP languages, Micali's CS-proof cannot be proven sound via black-box reduction to any falsifiable assumption. These results are obtained by extending the impossibility of two-message zero knowledge protocols due to Goldreich and Oren [J. Cryptology'94].
Last updated:  2012-12-19
Why "Fiat-Shamir for Proofs" Lacks a Proof
Nir Bitansky, Sanjam Garg, Daniel Wichs
The Fiat-Shamir heuristic (CRYPTO '86) is used to convert any 3-message public-coin proof or argument system into a non-interactive argument, by hashing the prover's first message to select the verifier's challenge. It is known that this heuristic is sound when the hash function is modeled as a random oracle. On the other hand, the surprising result of Goldwasser and Kalai (FOCS '03) shows that there exists a computationally sound argument on which the Fiat-Shamir heuristic is never sound, when instantiated with any actual efficient hash function. This leaves us with the following interesting possibility: perhaps there exists a hash function that securely instantiates the Fiat-Shamir heuristic for all 3-message public-coin statistically sound proofs, even if it can fail for some computationally sound arguments. Indeed, the existence of such hash functions has been conjectured by Barak, Lindell and Vadhan (FOCS '03), who also gave a seemingly reasonable and sufficient condition under which such hash functions exist. However, we do not have any provably secure construction of such hash functions, under any standard assumption such as the hardness of DDH, RSA, QR, LWE, etc. In this work we give a broad black-box separation result, showing that the security of such hash functions cannot be proved under virtually any standard cryptographic assumption via a black-box reduction.
Last updated:  2012-12-18
On the Non-malleability of the Fiat-Shamir Transform
Sebastian Faust, Markulf Kohlweiss, Giorgia Azzurra Marson, Daniele Venturi
The Fiat-Shamir transform is a well-studied paradigm for removing interaction from public-coin protocols. We investigate whether the resulting non-interactive zero-knowledge (NIZK) proof systems also exhibit non-malleability properties that have up to now only been studied for NIZK proof systems in the common reference string model: first, we formally define simulation soundness and a weak form of simulation extraction in the random oracle model (ROM). Second, we show that in the ROM the Fiat-Shamir transform meets these properties under lenient conditions. A consequence of our result is that, in the ROM, we obtain truly efficient non-malleable NIZK proof systems essentially for free. Our definitions are sufficient for instantiating the Naor-Yung paradigm for CCA2-secure encryption, as well as a generic construction for signature schemes from hard relations and simulation-extractable NIZK proof systems. These two constructions are interesting as the former preserves both the leakage resilience and key-dependent message security of the underlying CPA-secure encryption scheme, while the latter lifts the leakage resilience of the hard relation to the leakage resilience of the resulting signature scheme.
Last updated:  2012-12-18
Profiled Model Based Power Simulator for Side Channel Evaluation
Nicolas Debande, Maël Berthier, Yves Bocktaels, Thanh-Ha Le
An embedded cryptographic device performs operations on sensitive data and, as such, is vulnerable to side-channel attacks. This forces smart-card manufacturers to carefully consider the development of security mechanisms. To accelerate this procedure, the use of power and electromagnetic simulators can be relevant and saves non-negligible time. Based on a high-level simulator, we propose to use profiled abstract models to improve the accuracy of the simulated traces. These abstract models are obtained by profiling some parts of the target device, which is physically available to the evaluator.
Last updated:  2012-12-18
Cryptanalysis of RAPP, an RFID Authentication Protocol
Nasour Bagheri, Masoumeh Safkhani, Pedro Peris-Lopez, Juan E. Tapiador
Tian et al. proposed a novel ultralightweight RFID mutual authentication protocol [4] that has recently been analyzed in [1], [2], [5]. In this letter, we first propose a desynchronization attack that succeeds with probability almost 1, which improves upon the 0.25 given by the attack in [1]. We also show that the bad properties of the proposed permutation function can be exploited to disclose several bits of the tag's secret (rather than just one bit as in [2]), which increases the power of a traceability attack. Finally, we show how to extend the above attack to run a full disclosure attack, which requires eavesdropping on fewer protocol runs than the attack described in [5] (i.e., 192 << 2^30).
Last updated:  2012-12-18
Recovering RSA Secret Keys from Noisy Key Bits with Erasures and Errors
Noboru Kunihiro, Naoyuki Shinohara, Tetsuya Izu
We discuss how to recover RSA secret keys from noisy key bits with erasures and errors. There are two known algorithms for recovering original secret keys from noisy keys. At Crypto 2009, Heninger and Shacham proposed a method for the case where an erroneous version of the secret key contains only erasures. Subsequently, at Crypto 2010, Henecka et al. proposed a method for an erroneous version containing only errors. For physical attacks such as side-channel and cold boot attacks, we need to study key recovery from a noisy secret key containing both erasures and errors. In this paper, we propose a method to recover the secret key from such an erroneous version and analyze the condition on the error and erasure rates under which our algorithm succeeds in finding the correct secret key in polynomial time. We also evaluate a theoretical bound for recovering the secret key and discuss to what extent our algorithm achieves this bound.
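A minimal branch-and-prune sketch in the spirit of this line of work (our own toy code: erasures only, and only the factors p and q rather than the full private key; the paper handles errors as well and analyzes the rate thresholds):

```python
# Grow p and q bit by bit from the least significant end; keep a candidate
# only if p*q == N modulo 2^(i+1) and it matches every known (non-erased) bit.
def recover(N: int, p_bits, q_bits, nbits: int):
    # p_bits[i], q_bits[i] in {0, 1, None}; None marks an erased bit
    cands = [(1, 1)]                           # both factors are odd
    for i in range(1, nbits):
        step = []
        for p, q in cands:
            for pb in ((p_bits[i],) if p_bits[i] is not None else (0, 1)):
                for qb in ((q_bits[i],) if q_bits[i] is not None else (0, 1)):
                    pp, qq = p | pb << i, q | qb << i
                    if (pp * qq - N) % (1 << (i + 1)) == 0:   # Hensel-style prune
                        step.append((pp, qq))
        cands = step
    return [(p, q) for p, q in cands if p * q == N]

# 209 = 11 * 19; suppose bit 2 of p and bits 1 and 3 of q were erased
print(recover(209, [1, 1, None, 1, 0], [1, None, 0, None, 1], 5))  # [(11, 19)]
```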
Last updated:  2014-05-14
Efficient, Adaptively Secure, and Composable Oblivious Transfer with a Single, Global CRS
Seung Geol Choi, Jonathan Katz, Hoeteck Wee, Hong-Sheng Zhou
We present a general framework for efficient, universally composable oblivious transfer (OT) protocols in which a single, global, common reference string (CRS) can be used for multiple invocations of oblivious transfer by arbitrary pairs of parties. In addition: - Our framework is round-efficient. E.g., under the DLIN or SXDH assumptions we achieve round-optimal protocols with static security, or 3-round protocols with adaptive security (assuming erasure). - Our resulting protocols are more efficient than any known previously, and in particular yield protocols for string OT using O(1) exponentiations and communicating O(1) group elements. Our result improves on that of Peikert et al. (Crypto 2008), which uses a CRS whose length depends on the number of parties in the network and achieves only static security. Compared to Garay et al. (Crypto 2009), we achieve adaptive security with better round complexity and efficiency.
Last updated:  2014-04-21
How Practical is Public-Key Encryption Based on LPN and Ring-LPN?
Ivan Damgård, Sunoo Park
We conduct a study of public-key cryptosystems based on variants of the Learning Parity with Noise (LPN) problem. The main LPN variant in consideration was introduced by Alekhnovich (FOCS 2003), and we describe several improvements to the originally proposed scheme, inspired by similar existing variants of Regev's LWE-based cryptosystem. To achieve further efficiency, we propose the first public-key cryptosystem based on the Ring-LPN problem, which is a more recently introduced LPN variant that makes for substantial improvements in terms of both time and space. We also introduce a variant of this problem called the transposed Ring-LPN problem. Our public-key scheme based on this problem is even more efficient. For all cases, we compute the parameters required for various security levels in practice, given the best currently known attacks. Our conclusion is that the basic LPN-based scheme is in several respects not competitive with existing practical schemes, as the public key, ciphertexts and encryption time become very large already for 80-bit security. On the other hand, the scheme based on transposed Ring-LPN is far better in all these respects. Although the public key and ciphertexts are still larger than for, say, RSA at comparable security levels, they are not prohibitively large; moreover, for decryption, the scheme outperforms RSA for security levels of 112 bits or more. The basic Ring-LPN scheme is less efficient, however. Thus, LPN-based public-key cryptography seems to be somewhat more promising for practical use than has been generally assumed so far.
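For reference, the LPN distribution underlying all of these schemes (our own sketch of the assumption itself, not any of the paper's cryptosystems):

```python
# LPN samples: pairs (a, <a,s> + e mod 2) with a uniform and e Bernoulli(tau).
# The assumption is that such pairs are hard to distinguish from uniform ones.
import secrets, random

def lpn_samples(n: int, m: int, tau: float):
    s = [secrets.randbelow(2) for _ in range(n)]          # secret vector
    samples = []
    for _ in range(m):
        a = [secrets.randbelow(2) for _ in range(n)]
        e = 1 if random.random() < tau else 0             # Bernoulli(tau) noise
        b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
        samples.append((a, b))
    return s, samples

s, samples = lpn_samples(n=128, m=16, tau=0.125)
```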
Last updated:  2013-07-01
5PM: Secure Pattern Matching
Joshua Baron, Karim El Defrawy, Kirill Minkovich, Rafail Ostrovsky, Eric Tressler
In this paper we consider the problem of secure pattern matching that allows single-character wildcards and substring matching in the malicious (stand-alone) setting. Our protocol, called 5PM, is executed between two parties: Server, holding a text of length $n$, and Client, holding a pattern of length $m$ to be matched against the text, where our notion of matching is more general and includes non-binary alphabets, non-binary Hamming distance and non-binary substring matching. 5PM is the first secure expressive pattern matching protocol designed to optimize round complexity by carefully specifying the entire protocol round by round. In the malicious model, 5PM requires $O((m+n)k^2)$ bandwidth and $O(m+n)$ encryptions, where $m$ is the pattern length and $n$ is the text length. Further, 5PM can hide pattern size with no asymptotic additional costs in either computation or bandwidth. Finally, 5PM requires only two rounds of communication in the honest-but-curious model and eight rounds in the malicious model. Our techniques reduce pattern matching and generalized Hamming distance problems to a novel linear algebra formulation that allows for generic solutions based on any additively homomorphic encryption. We believe our efficient algebraic techniques are of independent interest.
Last updated:  2012-12-14
Verifiable Elections That Scale for Free
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
In order to guarantee a fair and transparent voting process, electronic voting schemes must be verifiable. Most of the time, however, it is important that elections also be anonymous. The notion of a verifiable shuffle describes how to satisfy both properties at the same time: ballots are submitted to a public bulletin board in encrypted form, verifiably shuffled by several mix servers (thus guaranteeing anonymity), and then verifiably decrypted by an appropriate threshold decryption mechanism. To guarantee transparency, the intermediate shuffles and decryption results, together with proofs of their correctness, are posted on the bulletin board throughout this process. In this paper, we present a verifiable shuffle and threshold decryption scheme in which, for security parameter k, L voters, M mix servers, and N decryption servers, the proof that the end tally corresponds to the original encrypted ballots is only O(k(L + M + N)) bits long. Previous verifiable shuffle constructions had proofs of size O(kLM + kLN), which, for elections with thousands of voters, mix servers, and decryption servers, meant that verifying an election on an ordinary computer in a reasonable amount of time was out of the question. The linchpin of each construction is a controlled-malleable proof (cm-NIZK), which allows each server, in turn, to take a current set of ciphertexts and a proof that the computation done by other servers has proceeded correctly so far. After shuffling or partially decrypting these ciphertexts, the server can also update the proof of correctness, obtaining as a result a cumulative proof that the computation is correct so far. In order to verify the end result, it is therefore sufficient to verify just the proof produced by the last server.
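Plugging illustrative numbers into the stated asymptotics (values our own, not from the paper) shows the scale of the improvement:

```latex
% Take k = 256, L = 10^4 voters, and M = N = 10 servers. The proof sizes are
O\big(k(L+M+N)\big) \approx 256 \cdot (10^4 + 20) \approx 2.6 \cdot 10^{6}\ \text{bits} \approx 0.3\ \text{MB},
\qquad
O(kLM + kLN) \approx 256 \cdot 10^4 \cdot 20 \approx 5.1 \cdot 10^{7}\ \text{bits} \approx 6.4\ \text{MB}.
```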
Last updated:  2012-12-14
Cryptanalysis of RAKAPOSHI Stream Cipher
Lin Ding, Jie Guan
RAKAPOSHI is a hardware-oriented stream cipher designed by Carlos Cid et al. in 2009. The stream cipher is based on Dynamic Linear Feedback Shift Registers, with a simple and potentially scalable design, and is particularly suitable for hardware applications with restricted resources. The RAKAPOSHI stream cipher offers 128-bit security. In this paper, we point out some weaknesses in the cipher. First, we show that there are 2^192 weak (key, IV) pairs in the RAKAPOSHI stream cipher. Second, we show that these weak (key, IV) pairs are vulnerable to a linear distinguishing attack and an algebraic attack. Finally, we propose a real-time related-key chosen-IV attack on RAKAPOSHI. The attack recovers the 128-bit secret key with a computational complexity of 2^37, requiring 47 related keys, 2^8 chosen IVs and 2^14.555 keystream bits. The success probability of this attack is 0.999, which is quite close to 1. The experimental results corroborate our assertion.
Last updated:  2013-07-03
Fully Automated Analysis of Padding-Based Encryption in the Computational Model
Gilles Barthe, Juan Manuel Crespo, Benjamin Grégoire, César Kunz, Yassine Lakhnech, Benedikt Schmidt, Santiago Zanella-Béguelin
Computer-aided verification provides effective means of analyzing the security of cryptographic primitives. However, it has remained a challenge to achieve fully automated analyses yielding guarantees that hold against computational (rather than symbolic) attacks. This paper meets this challenge for public-key encryption schemes built from trapdoor permutations and hash functions. Using a novel combination of techniques from computational and symbolic cryptography, we present proof systems for analyzing the chosen-plaintext and chosen-ciphertext security of such schemes in the random oracle model. Building on these proof systems, we develop a toolset that bundles together fully automated proof and attack finding algorithms. We use this toolset to build a comprehensive database of encryption schemes that records attacks against insecure schemes, and proofs with concrete bounds for secure ones.
Last updated:  2012-12-14
Cryptanalysis of matrix conjugation schemes
A. D. Myasnikov, A. Ushakov
In this paper we cryptanalyze two protocols: the Grigoriev-Shpilrain authentication protocol and the public-key encryption protocol of Wang et al., both of which rely on the computational hardness of some variations of the conjugacy search problem in noncommutative monoids. We devise a practical heuristic algorithm solving those problems. As a conclusion, we claim that these protocols are insecure for the proposed parameter values.
Last updated:  2015-02-15
Encoding Functions with Constant Online Rate or How to Compress Garbled Circuits Keys
Benny Applebaum, Yuval Ishai, Eyal Kushilevitz, Brent Waters
\emph{Randomized encodings of functions} can be used to replace a ``complex'' function $f(x)$ by a ``simpler'' randomized mapping $\hat{f}(x;r)$ whose output distribution on an input $x$ encodes the value of $f(x)$ and hides any other information about $x$. One desirable feature of randomized encodings is low \emph{online complexity}. That is, the goal is to obtain a randomized encoding $\hat{f}$ of $f$ in which most of the output can be precomputed and published before seeing the input $x$. When the input $x$ is available, it remains to publish only a short string $\hat{x}$, where the online complexity of computing $\hat{x}$ is independent of (and is typically much smaller than) the complexity of computing $f$. Yao's garbled circuit construction gives rise to such randomized encodings in which the online part $\hat{x}$ consists of $n$ encryption keys of length $\kappa$ each, where $n=|x|$ and $\kappa$ is a security parameter. Thus, the {\em online rate} $|\hat{x}|/|x|$ of this encoding is proportional to the security parameter $\kappa$. In this paper, we show that the online rate can be dramatically improved. Specifically, we show how to encode any polynomial-time computable function $f:\{0,1\}^n\to\{0,1\}^{m(n)}$ with online rate of $1+o(1)$ and with nearly linear online computation. More concretely, the online part $\hat{x}$ consists of an $n$-bit string and a single encryption key. These constructions can be based on the decisional Diffie-Hellman assumption (DDH), the Learning with Errors assumption (LWE), or the RSA assumption. We also present a variant of this result which applies to {\em arithmetic formulas}, where the encoding only makes use of arithmetic operations, as well as several negative results which complement our positive results. Our positive results can lead to efficiency improvements in most contexts where randomized encodings of functions are used. We demonstrate this by presenting several concrete applications. These include protocols for secure multiparty computation and for non-interactive verifiable computation in the preprocessing model which achieve, for the first time, an optimal online communication complexity, as well as non-interactive zero-knowledge proofs which simultaneously minimize the online communication and the prover's online computation.
Last updated:  2014-12-01
Generic Constructions of Integrated PKE and PEKS
Yu Chen, Jiang Zhang, Zhenfeng Zhang, Dongdai Lin
In this paper we investigate the topic of integrated public-key encryption (PKE) and public-key encryption with keyword search (PEKS) schemes (PKE-PEKS as shorthand). We first formalize the strongest security notion to date for PKE-PEKS schemes, named joint CCA-security. We then propose two simple constructions of jointly CCA-secure PKE-PEKS schemes from anonymous (hierarchical) identity-based encryption schemes. Besides, we also define the notion of consistency for PKE-PEKS schemes, as well as revisit its related notions (including consistency of PEKS schemes, robustness and collision-freeness of IBE schemes), which may be of independent interest.
Last updated:  2012-12-10
Root Optimization of Polynomials in the Number Field Sieve
Shi Bai, Richard P. Brent, Emmanuel Thomé
The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the chosen polynomials in polynomial selection can be modelled in terms of size and root properties. In this paper, we describe some algorithms for selecting polynomials with very good root properties.
Last updated:  2012-12-11
The Weakness of Integrity Protection for LTE
Teng Wu, Guang Gong
In this paper, we concentrate on the security issues of the integrity protection of LTE and present two different forgery attacks. The first attack, referred to as a {\em linear forgery attack}, shows that EIA1 and EIA3, two integrity protection algorithms of LTE, are insecure if the initial value (IV) can be repeated twice during the life cycle of an integrity key (IK). Because of the linearity of EIA1 and EIA3, given two valid Message Authentication Codes (MACs), our algorithm can forge up to $2^{32}$ valid MACs. Thus, the probability of finding a valid MAC is dramatically increased. Although the combination of IV and IK never repeats in the ordinary case, in our well-designed scenario, the attacker can make the same combination occur twice. The duplication provides the opportunity to conduct our linear forgery attack, which may harm the security of communication. To test our linear forgery attack algorithm, we generate two counter check messages and successfully forge the third one. We also examine the attack timing by simulating real communication. The experimental results show that our attack is applicable in practice. The second attack is referred to as a {\em trace extension forgery attack}, which works only in theory. However, this attack is more general than the linear forgery attack. Knowing only one message and its MAC, we can construct a different message that has the same MAC as the original one with probability $\frac{1}{2^{16}}$. In this attack, the trace function is applied to the message to shrink the guessing space.
Last updated:  2013-01-16
Cryptography Using CAPTCHA Puzzles
Abishek Kumarasubramanian, Rafail Ostrovsky, Omkant Pandey, Akshay Wadia
A CAPTCHA is a puzzle that is easy for humans but hard to solve for computers. A formal framework, modelling CAPTCHA puzzles (as hard AI problems), was introduced by Ahn, Blum, Hopper, and Langford (Eurocrypt 2003). Despite their attractive features and wide adoption in practice, the use of CAPTCHA puzzles for general cryptographic applications has been limited. In this work, we explore various ways to formally model CAPTCHA puzzles and their human component, and explore new applications for CAPTCHAs. We show that by defining CAPTCHAs with additional (strong but realistic) properties, it is possible to broaden their applicability, including using them to learn a machine's ``secret internal state.'' To facilitate this, we introduce the notion of a human-extractable CAPTCHA, which we believe may be of independent interest. We show that this type of CAPTCHA yields a \emph{constant round} protocol for \emph{fully} concurrent non-malleable zero-knowledge. To enable this, we also define and construct a CAPTCHA-based commitment scheme which admits ``straight line'' extraction. We also explore CAPTCHA definitions in the setting of Universal Composability (UC). We show that there are two (incomparable) ways to model CAPTCHAs within the UC framework that lead to different results. In particular, we show that in the so-called \emph{indirect access model}, for every polynomial-time functionality $\mathcal{F}$ there exists a protocol that UC-realizes $\mathcal{F}$ using human-extractable CAPTCHAs, while for the so-called \emph{direct access model}, UC is impossible, even with the help of human-extractable CAPTCHAs. The security of our constructions using human-extractable CAPTCHAs is proven against the (standard) class of all polynomial-time adversaries. In contrast, most previous works guarantee security only against a very limited class of adversaries, called the \emph{conservative} adversaries.
Last updated:  2014-07-29
A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem
Jintai Ding, Xiang Xie, Xiaodong Lin
We use the learning with errors (LWE) problem to build a new simple and provably secure key exchange scheme. The basic idea of the construction can be viewed as a certain extension of the Diffie-Hellman problem with errors. The mathematical structure behind it comes from the commutativity of computing a bilinear form in two different ways due to the associativity of matrix multiplication: $$(\mathbf{x}^t \times \mathbf{A})\times \mathbf{y}=\mathbf{x}^t \times (\mathbf{A}\times \mathbf{y}),$$ where $\mathbf{x,y}$ are column vectors and $\mathbf{A}$ is a square matrix. We show that our new schemes are more efficient in terms of communication and computation complexity compared with key exchange schemes or key transport schemes via encryption schemes based on the LWE problem. Furthermore, we extend our scheme to the ring learning with errors (RLWE) problem, resulting in small key sizes and better efficiency.
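A toy numeric illustration of this "Diffie-Hellman with errors" idea, with hypothetical small parameters and with the reconciliation step that extracts exactly agreed bits omitted:

```python
# Both parties approximately compute the bilinear form x^t * A * y; their
# results differ only by a small cross error term. Toy parameters, purely
# for illustration; the real scheme adds reconciliation to agree exactly.
import random

q, n = 12289, 8
random.seed(1)

def small_vec():
    # small secret/error vectors, entries in [-2, 2]
    return [random.randint(-2, 2) for _ in range(n)]

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
x, ex = small_vec(), small_vec()   # Alice's secret and error
y, ey = small_vec(), small_vec()   # Bob's secret and error

# Alice publishes pA = x^t A + ex^t; Bob publishes pB = A y + ey.
pA = [(sum(x[i] * A[i][j] for i in range(n)) + ex[j]) % q for j in range(n)]
pB = [(sum(A[i][j] * y[j] for j in range(n)) + ey[i]) % q for i in range(n)]

kA = sum(x[i] * pB[i] for i in range(n)) % q   # x^t (A y + ey)
kB = sum(pA[j] * y[j] for j in range(n)) % q   # (x^t A + ex^t) y

gap = min((kA - kB) % q, (kB - kA) % q)
print(kA, kB, gap)   # the two raw keys differ only by a small error term
```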
Last updated:  2012-12-10
The k-BDH Assumption Family: Bilinear Map Cryptography from Progressively Weaker Assumptions
Karyn Benson, Hovav Shacham, Brent Waters
Over the past decade bilinear maps have been used to build a large variety of cryptosystems. In addition to new functionality, we have concurrently seen the emergence of many strong assumptions. In this work, we explore how to build bilinear map cryptosystems under progressively weaker assumptions. We propose $k$-BDH, a new family of progressively weaker assumptions that generalizes the decisional bilinear Diffie-Hellman (DBDH) assumption. We give evidence in the generic group model that each assumption in our family is strictly weaker than the assumptions before it. DBDH has been used for proving many schemes secure, notably identity-based and functional encryption schemes; we expect that our $k$-BDH will lead to generalizations of many such schemes. To illustrate the usefulness of our $k$-BDH family, we construct a family of selectively secure Identity-Based Encryption (IBE) systems based on it. Our system can be viewed as a generalization of the Boneh-Boyen IBE; however, the construction and proof require new ideas to fit the family. We then extend our methods to produce hierarchical IBEs and CCA security, and give a fully secure variant. In addition, we discuss the opportunities and challenges of building new systems under our weaker assumption family.
Last updated:  2012-12-10
Improved (Pseudo) Preimage Attack and Second Preimage Attack on Round-Reduced Grøstl
Jian Zou, Wenling Wu, Shuang Wu, Le Dong
Grøstl is one of the five finalists in the third round of the SHA-3 competition hosted by NIST. In this paper, we use several techniques to improve the pseudo preimage attack on the Grøstl hash function, such as the subspace preimage attack and the guess-and-determine technique. We present improved pseudo preimage attacks on 5-round Grøstl-256 and 8-round Grøstl-512 respectively. The complexities of the above two attacks are ($2^{239.90},2^{240.40}$) (in time and memory) and ($2^{499.50},2^{499}$) respectively. Furthermore, we propose a pseudo preimage attack and a pseudo second preimage attack on 6-round Grøstl-256. The complexities of our 6-round pseudo preimage and second preimage attacks are ($2^{253.26},2^{253.67}$) and ($2^{251.0},2^{252.0}$) respectively. As far as we know, these are the best known attacks on the round-reduced Grøstl hash function.
Last updated:  2013-07-18
Square root computation over even extension fields
Gora Adj, Francisco Rodríguez-Henríquez
This paper presents a comprehensive study of the computation of square roots over finite extension fields. We propose two novel algorithms for computing square roots over even field extensions of the form $\F_{q^{2}}$, with $q=p^n,$ $p$ an odd prime and $n\geq 1$. Both algorithms have an associated computational cost roughly equivalent to one exponentiation in $\F_{q^{2}}$. The first algorithm is devoted to the case when $q\equiv 1 \bmod 4$, whereas the second one handles the case when $q\equiv 3 \bmod 4$. Numerical comparisons show that the two algorithms presented in this paper are competitive and in some cases more efficient than the square root methods previously known.
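For contrast, here is the textbook one-exponentiation square root over a prime field F_p with p ≡ 3 (mod 4), the kind of baseline that such algorithms generalize to even extensions (a sketch, not the paper's algorithms):

```python
# Textbook exponentiation-based square root over a prime field F_p with
# p = 3 (mod 4): if a is a quadratic residue, a^((p+1)/4) is a root.
def sqrt_mod(a: int, p: int) -> int:
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    if r * r % p != a % p:
        raise ValueError("a is not a quadratic residue mod p")
    return r

p = 2**127 - 1                 # a Mersenne prime, congruent to 3 mod 4
a = 1234567
r = sqrt_mod(a * a % p, p)     # take the root of a known square
assert r in (a, p - a)         # the two square roots of a^2 are +-a
print(r)
```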
Last updated:  2013-07-18
Generic Related-key Attacks for HMAC
Thomas Peyrin, Yu Sasaki, Lei Wang
In this article we describe new generic distinguishing and forgery attacks in the related-key scenario (using only a single related key) for the HMAC construction. When HMAC uses a k-bit key, outputs an n-bit MAC, and is instantiated with an l-bit inner iterative hash function processing m-bit message blocks where m=k, our distinguishing-R attack requires about 2^{n/2} queries, which improves over the currently best known generic attack complexity 2^{l/2} as soon as l>n. This means that, contrary to the general belief, using wide-pipe hash functions as the internal primitive will not increase the overall security of HMAC in the related-key model when the key size is equal to the message block size. We also present generic related-key distinguishing-H, internal state recovery and forgery attacks. Our method is new and elegant, and uses a simple cycle-size detection criterion. The issue in the HMAC construction (not present in the NMAC construction) comes from the non-independence of the two inner hash layers, and we provide a simple patch in order to avoid this generic attack. Our work finally shows that the choice of the opad and ipad constant values in HMAC is important.
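The cycle-size criterion at the heart of such attacks can be illustrated with generic Floyd-style cycle detection on an iterated function; the truncated toy function below is a stand-in, not HMAC itself:

```python
# Generic Floyd cycle detection on an iterated function: find a point on
# the cycle, then walk once around it to measure its length. The 32-bit
# toy function keeps cycles short enough to detect quickly.
import hashlib

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:4]   # truncate to a 32-bit state

def cycle_length(seed: bytes) -> int:
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:                  # meet somewhere on the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    lam, probe = 1, f(tortoise)
    while probe != tortoise:                 # walk once around the cycle
        probe, lam = f(probe), lam + 1
    return lam

print(cycle_length(b"seed"))  # expected length on the order of 2^16 here
```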
Last updated:  2014-06-24
Fingerprint Tables: A Generalization of Rainbow Tables
Gildas Avoine, Adrien Bourgeois, Xavier Carpent
Cryptanalytic time-memory trade-offs were introduced by Hellman in 1980 in order to perform key-recovery attacks on cryptosystems. A major advance was presented at Crypto 2003 by Oechslin, with the rainbow table variant that outperforms Hellman's seminal work. This paper introduces the fingerprint tables, which drastically reduce the number of false alarms during the attack compared to the rainbow tables. The key point of our technique consists in storing in the tables the fingerprints of the chains instead of their endpoints. The fingerprint tables provide a time-memory trade-off that is about two times faster than the rainbow tables on usual problem sizes. We experimentally illustrate the performance of our technique, and demonstrate that it is faster than Ophcrack, a Windows LM Hash password cracker considered so far to be the fastest one ever implemented.
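A schematic sketch of the contrast between endpoint storage and fingerprint storage (toy 16-bit search space; the fingerprint rule of one bit per chosen column is our illustrative choice, not the paper's exact construction):

```python
# Toy chain generation for a time-memory trade-off table: rainbow tables
# store the endpoint, while the fingerprint variant stores a compact
# fingerprint sampled from intermediate chain states.
import hashlib

def h(x: int) -> int:
    return int.from_bytes(hashlib.sha256(x.to_bytes(2, "big")).digest()[:2], "big")

def reduce(y: int, col: int) -> int:
    return (y + col) & 0xFFFF          # column-dependent reduction function

def chain(start: int, length: int, fp_cols: set):
    x, fingerprint = start, 0
    for col in range(length):
        x = reduce(h(x), col)
        if col in fp_cols:             # collect one bit per chosen column
            fingerprint = (fingerprint << 1) | (x & 1)
    return x, fingerprint              # endpoint vs. compact fingerprint

end, fp = chain(0x1234, length=1000, fp_cols={249, 499, 749, 999})
print(hex(end), bin(fp))
```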
Last updated:  2013-03-08
Proofs of Retrievability with Public Verifiability and Constant Communication Cost in Cloud
Jiawei Yuan, Shucheng Yu
For data storage outsourcing services, it is important to allow data owners to efficiently and securely verify that the storage server stores their data correctly. To address this issue, several proof-of-retrievability (POR) schemes have been proposed wherein a storage server must prove to a verifier that all of a client's data are stored correctly. While existing POR schemes offer decent solutions addressing various practical issues, they either have a non-trivial (linear or quadratic) communication complexity, or only support private verification, i.e., only the data owner can verify the remotely stored data. It remains open to design a POR scheme that achieves both public verifiability and constant communication cost simultaneously. In this paper, we solve this open problem and propose the first POR scheme with public verifiability and constant communication cost: in our proposed scheme, the message exchanged between the prover and verifier is composed of a constant number of group elements; different from existing private POR constructions, our scheme allows public verification and releases the data owners from the burden of staying online. We achieve these by tailoring and uniquely combining techniques such as constant size polynomial commitments and homomorphic linear authenticators. Thorough analysis shows that our proposed scheme is efficient and practical. We prove the security of our scheme based on the Computational Diffie-Hellman problem, the Strong Diffie-Hellman assumption and the Bilinear Strong Diffie-Hellman assumption.
Last updated:  2012-12-10
Resilience to Distinguishing Attacks on WG-7 Cipher and Their Generalizations
Guang Gong, Mark Aagaard, Xinxin Fan
The stream cipher WG-7 is a lightweight variant of the well-known Welch-Gong (WG) stream cipher family, targeting resource-constrained devices such as RFID tags, smart cards, and wireless sensor nodes. Recently, a distinguishing attack against the stream cipher WG-7 was discovered by Orumiehchiha, Pieprzyk and Steinfeld. In this paper, we extend their work to a general distinguishing attack and suggest criteria to protect the WG stream cipher family from this attack. Our analysis shows that by properly choosing the minimal polynomial of the linear feedback shift register for a WG stream cipher, the general distinguishing attack can be easily thwarted.
Last updated:  2015-12-10
Natural Generalizations of Threshold Secret Sharing
Oriol Farras, Carles Padro, Chaoping Xing, An Yang
We present new families of access structures that, similarly to the multilevel and compartmented access structures introduced in previous works, are natural generalizations of threshold secret sharing. Namely, they admit ideal linear secret sharing schemes over every large enough finite field, they can be described by a small number of parameters, and they have useful properties for the applications of secret sharing. The use of integer polymatroids makes it possible to find many new such families and greatly simplifies the proofs for the existence of ideal secret sharing schemes for them.
Last updated:  2016-04-01
Hiding the Input-Size in Secure Two-Party Computation
Yehuda Lindell, Kobbi Nissim, Claudio Orlandi
In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their inputs, while preserving properties like privacy, correctness, and independence of inputs. One security property that has typically not been considered in the past relates to the length or size of the parties' inputs. This is despite the fact that in many cases the size of a party's input can be confidential. The reason for this omission seems to have been the folklore belief that, as with encryption, it is impossible to carry out non-trivial secure computation while hiding the size of parties' inputs. However, some recent results (e.g., Ishai and Paskin at TCC 2007, Ateniese, De Cristofaro and Tsudik at PKC 2011) showed that it is possible to hide the input size of one of the parties for some limited class of functions, including secure two-party set intersection. This suggests that the folklore belief may not be fully accurate. In this work, we initiate a theoretical study of input-size hiding secure computation, and focus on the two-party case. We present definitions for this task, and deal with the subtleties that arise in the setting where there is no a priori polynomial bound on the parties' input sizes. Our definitional study yields a multitude of classes of input-size hiding computation, depending on whether a single party's input size remains hidden or both parties' input sizes remain hidden, and depending on who receives output and whether the output size is hidden from a party in the case that it does not receive output. We prove feasibility and impossibility results for input-size hiding secure two-party computation. Some of the highlights are as follows: Under the assumption that fully homomorphic encryption (FHE) exists, there exist non-trivial functions (e.g., the millionaires' problem) that can be securely computed while hiding the input size of both parties. Under the assumption that FHE exists, every function can be securely computed while hiding the input size of one party, when both parties receive output (or when the party not receiving output does learn the size of the output). In the case of functions with fixed output length, this implies that every function can be securely computed while hiding one party's input size. There exist functions that cannot be securely computed while hiding both parties' input sizes. This is the first formal proof that, in general, some information about the size of the parties' inputs must be revealed. Our results are in the semi-honest model. The problem of input-size hiding is already challenging in this scenario. We discuss the additional difficulties that arise in the malicious setting and leave this extension for future work.
Last updated:  2012-12-01
Infective Computation and Dummy Rounds: Fault Protection for Block Ciphers without Check-before-Output
Benedikt Gierlichs, Jorn-Marc Schmidt, Michael Tunstall
Implementation attacks pose a serious threat for the security of cryptographic devices and there are a multitude of countermeasures that are used to prevent them. Two countermeasures used in implementations of block ciphers to increase the complexity of such attacks are the use of dummy rounds and redundant computation with consistency checks to prevent fault attacks. In this paper we present several countermeasures based on the idea of infective computation. Our countermeasures ensure that a fault injected into a cipher, dummy, or redundant round will infect the ciphertext such that an attacker cannot derive any information on the secret key being used. This has one clear advantage: the propagation of faults prevents an attacker from being able to conduct any fault analysis on any corrupted ciphertexts. As a consequence, there is no need for any test at the end of an implementation to determine if a fault has been injected and a ciphertext can always be returned.
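A minimal sketch of the infective pattern under toy assumptions (the `toy_encrypt` and `infect` functions below are illustrative stand-ins, not the paper's concrete countermeasures):

```python
# Infective-computation idea: encrypt twice and, instead of comparing and
# aborting, blend any difference into the output so a faulty ciphertext
# carries no exploitable structure.
import hashlib, secrets

def toy_encrypt(key: bytes, pt: bytes) -> bytes:
    return hashlib.sha256(key + pt).digest()

def infect(delta: bytes) -> bytes:
    # Non-zero difference (a fault) maps to an unpredictable mask;
    # all-zero difference (no fault) maps to the all-zero mask.
    if any(delta):
        return hashlib.sha256(secrets.token_bytes(16) + delta).digest()
    return bytes(32)

def protected_encrypt(key: bytes, pt: bytes) -> bytes:
    c1 = toy_encrypt(key, pt)
    c2 = toy_encrypt(key, pt)          # redundant computation
    delta = bytes(a ^ b for a, b in zip(c1, c2))
    mask = infect(delta)               # non-zero only if a fault occurred
    return bytes(a ^ b for a, b in zip(c1, mask))

print(protected_encrypt(b"k" * 16, b"message").hex())
```

Note that no check-before-output is needed: a correct run returns the ciphertext unchanged, while a faulty run returns randomized garbage.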
Last updated:  2012-11-30
What is the Effective Key Length for a Block Cipher: an Attack on Every Block Cipher
Jialin Huang, Xuejia Lai
Recently, several important block ciphers have been considered to be broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search by going over the entire key space but performing less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher with success probability one. The data complexity of this attack is the smallest according to the unicity distance. The time complexity can be written as $2^k(1-\epsilon)$ where $\epsilon > 0$ for all block ciphers. Previously, the commonly accepted security bound was the length k of the given master key. From our result we point out that this k-bit security is actually always overestimated and can never be reached, due to the inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the problem of the upper bound of effective key bits in block ciphers, and show a more accurate bound. A suggestion about the relation between the key size and block size is given: when the number of rounds is fixed, it is better to take a key size equal to the block size. Moreover, the effective key bits of many well-known block ciphers are calculated and analyzed, which also confirms their lower security margin than previously thought.
Last updated:  2012-11-30
Mixed-integer Linear Programming in the Analysis of Trivium and Ktantan
Julia Borghoff
In this paper we present a rather new approach to apply mixed-integer optimization to the cryptanalysis of cryptographic primitives. We focus on the stream cipher Trivium, that has been recommended by the eSTREAM stream cipher project, and the lightweight block cipher Ktantan. Using these examples we explain how the problem of solving a non-linear multivariate Boolean equation system can be formulated as a mixed-integer linear programming problem. Our main focus is the formulation of the mixed-integer programming model (MIP model), which includes amongst others the choice of a conversion method to convert the Boolean equations into equations over the reals, different guessing strategies and the selection of binary variables. We apply the commercial solver Cplex to our problems. The results and further possible features of the approach are discussed.
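As an example of the conversion step, the standard linearization of a Boolean XOR into linear inequalities over 0/1 variables can be checked exhaustively (this is one standard conversion choice; the paper weighs several such methods):

```python
# Linearization of c = a XOR b into four linear inequalities over binary
# variables. The exhaustive check verifies that the inequalities admit
# exactly the assignments satisfying the XOR and cut off all others.
from itertools import product

def xor_constraints_hold(a: int, b: int, c: int) -> bool:
    return (c <= a + b and
            c >= a - b and
            c >= b - a and
            c <= 2 - a - b)

for a, b, c in product((0, 1), repeat=3):
    assert xor_constraints_hold(a, b, c) == (c == a ^ b)
print("linearization of c = a XOR b verified on all 0/1 assignments")
```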
Last updated:  2013-03-04
Minkowski sum based lattice construction for multivariate simultaneous Coppersmith's technique and applications to RSA
Yoshinori Aono
We investigate a lattice construction method for the Coppersmith technique for finding small solutions of a modular equation. We consider its variant for simultaneous equations and propose a method to construct a lattice by combining lattices for solving single equations. As an application, we consider new RSA cryptanalyses. Our algorithm can factor an RSA modulus from $\ell \ge 2$ pairs of RSA public exponents with the common modulus corresponding to secret exponents smaller than $N^{(9\ell -5)/(12\ell + 4)}$, which improves on the previously best known result by Sarkar and Maitra. For the partial key exposure situation, we can also factor the modulus if $\beta - \delta/2 + 1/4 < (3\ell-1)/(3\ell + 1)$, where $\beta$ and $\delta$ are the bit-lengths $/ \log N$ of the secret exponent and its exposed LSBs, respectively.
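A quick evaluation of the exponent (9ℓ-5)/(12ℓ+4) shows how the bound improves with the number of equations and tends to 3/4:

```python
# Evaluate the exponent in the bound N^((9l-5)/(12l+4)) for a few values
# of l (the number of public exponents sharing the modulus).
from fractions import Fraction

for l in (2, 3, 5, 10, 100):
    e = Fraction(9 * l - 5, 12 * l + 4)
    print(f"l = {l:3d}: secret exponents up to N^{float(e):.4f}")
```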
Last updated:  2013-01-17
Lecture Notes in Secret Sharing
Carles Padro
These are basically the lecture notes for the short course "Applications of Combinatorics to Information-Theoretic Cryptography", Central European University, Budapest, May-June 2012. With the objective of covering a full course on secret sharing, additional content will be added in subsequent versions of these lecture notes.
Last updated:  2012-11-29
Robust Encryption, Revisited
Pooya Farshim, Benoît Libert, Kenneth G. Paterson, Elizabeth A. Quaglia
We revisit the notions of robustness introduced by Abdalla, Bellare, and Neven (TCC 2010). One of the main motivations for the introduction of strong robustness for public-key encryption (PKE) by Abdalla et al. was to prevent certain types of attack on Sako's auction protocol. We show, perhaps surprisingly, that Sako's protocol is still vulnerable to attacks exploiting robustness problems in the underlying PKE scheme, even when it is instantiated with a \emph{strongly} robust scheme. This demonstrates that current notions of robustness are insufficient even for one of its most natural applications. To address this and other limitations in existing notions, we introduce a series of new robustness notions for PKE and explore their relationships. In particular, we introduce \emph{complete} robustness, our strongest new notion of robustness, and give a number of constructions for completely robust PKE schemes.
Last updated:  2013-02-12
Collision Attacks on Up to 5 Rounds of SHA-3 Using Generalized Internal Differentials
Itai Dinur, Orr Dunkelman, Adi Shamir
On October 2nd, 2012, NIST announced its selection of the Keccak scheme as the new SHA-3 hash standard. In this paper we present the first published collision finding attacks on reduced-round versions of Keccak-384 and Keccak-512, providing actual collisions for 3-round versions, and describing an attack which is $2^{45}$ times faster than birthday attacks for 4-round Keccak-384. For Keccak-256, we increase the number of rounds which can be attacked to 5. All these results are based on a generalized {\it internal differential attack} (introduced by Peyrin at Crypto 2010), which we use to map a large number of Keccak inputs into a relatively small subset of possible outputs with a surprisingly large probability. In such a \textit{squeeze attack} it is easier to find random collisions in the reduced target subset by a standard birthday argument.
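The birthday argument behind the squeeze attack is easy to make concrete (the subset size below is hypothetical, chosen only for illustration):

```python
# Birthday estimate: mapping M inputs into a target set of size S gives a
# collision probability of roughly 1 - exp(-M^2 / (2S)). Shrinking S,
# rather than attacking the full output space, is what makes collisions
# cheap in a squeeze attack.
import math

def collision_prob(M: float, S: float) -> float:
    return 1 - math.exp(-M * M / (2 * S))

full_space = 2.0 ** 256          # full 256-bit output space
subset = 2.0 ** 100              # a hypothetical small target subset
M = 2.0 ** 51                    # number of hashed inputs

print(f"vs full space: {collision_prob(M, full_space):.3e}")
print(f"vs subset:     {collision_prob(M, subset):.3f}")
```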
Last updated:  2012-11-28
Fully Secure Unbounded Inner-Product and Attribute-Based Encryption
Tatsuaki Okamoto, Katsuyuki Takashima
In this paper, we present the first inner-product encryption (IPE) schemes that are unbounded in the sense that the public parameters do not impose additional limitations on the predicates and attributes used for encryption and decryption keys. All previous IPE schemes were bounded, i.e., they had a bound on the size of predicates and attributes, given public parameters fixed at setup. The proposed unbounded IPE schemes are fully (adaptively) secure and fully attribute-hiding in the standard model under a standard assumption, the decisional linear (DLIN) assumption. In our unbounded IPE schemes, the inner-product relation is generalized so that the two vectors of the inner product can have different sizes, which provides a great efficiency improvement in many applications. We also present the first fully secure unbounded attribute-based encryption (ABE) schemes, and the security is proven under the DLIN assumption in the standard model. To achieve these results, we develop novel techniques, indexing and consistent randomness amplification, on the (extended) dual system encryption technique and the dual pairing vector spaces (DPVS).
Last updated:  2014-03-13
Fast Cryptography in Genus 2
Joppe W. Bos, Craig Costello, Huseyin Hisil, Kristin Lauter
In this paper we highlight the benefits of using genus 2 curves in public-key cryptography. Compared to the standardized genus 1 curves, or elliptic curves, arithmetic on genus 2 curves is typically more involved but allows us to work with moduli of half the size. We give a taxonomy of the best known techniques to realize genus 2 based cryptography, which includes fast formulas on the Kummer surface and efficient 4-dimensional GLV decompositions. By studying different modular arithmetic approaches on these curves, we present a range of genus 2 implementations. On a single core of an Intel Core i7-3520M (Ivy Bridge), our implementation on the Kummer surface breaks the 125 thousand cycle barrier which sets a new software speed record at the 128-bit security level for constant-time scalar multiplications compared to all previous genus 1 and genus 2 implementations.
Last updated:  2014-08-27
Blackbox Traceable CP-ABE: How to Catch People Leaking Their Keys by Selling Decryption Devices on eBay
Zhen Liu, Zhenfu Cao, Duncan S. Wong
In the context of Ciphertext-Policy Attribute-Based Encryption (CP-ABE), if a decryption device associated with an attribute set $S_{\cal D}$ appears on eBay, and is alleged to be able to decrypt any ciphertexts with policies satisfied by $S_{\cal D}$, no one including the CP-ABE authorities can identify the malicious user(s) who built such a decryption device using their key(s). This has been known as a major practicality concern in CP-ABE applications, for example, providing fine-grained access control on encrypted data. Due to the nature of CP-ABE, users get decryption keys from authorities associated with attribute sets. If there exist two or more users with attribute sets being supersets of $S_{\cal D}$, existing CP-ABE schemes cannot distinguish which user is the malicious one who built and sold such a decryption device. In this paper, we extend the notion of CP-ABE to support \emph{Blackbox Traceability} and propose a concrete scheme which is able to identify a user whose key has been used in building a decryption device from among multiple users whose keys are associated with attribute sets which are all supersets of $S_{\cal D}$. The scheme is efficient with sub-linear overhead, and when compared with the very recent (non-traceable) CP-ABE scheme due to Lewko and Waters in Crypto 2012, this new scheme can be considered as an extension with the property of \emph{fully collusion-resistant blackbox traceability} added, i.e. an adversary can access an arbitrary number of keys when building a decryption device, while the new tracing algorithm can still identify at least one particular key which must have been used for building the underlying decryption device. We show that this new scheme is secure against adaptive adversaries in the standard model, and is highly expressive by supporting any monotonic access structures. Its additional traceability property is also proven against adaptive adversaries in the standard model. Of independent interest, in this paper we also consider another scenario which we call ``\emph{found-in-the-wild}''. In this scenario, a decryption device is found, for example, from a black market, and reported to an authority (e.g. a law enforcement agency). The decryption device is found to be able to decrypt ciphertexts with a certain policy, say $\mathbb{A}$, while the associated attribute set $S_{\cal D}$ is \textbf{missing}. In this found-in-the-wild scenario, we show that the Blackbox Traceable CP-ABE scheme proposed in this paper is still able to find the malicious users whose keys have been used for building the decryption device, and our scheme can achieve \emph{selective} traceability in the standard model under this scenario.
Last updated:  2012-11-28
Construction of Differential Characteristics in ARX Designs -- Application to Skein
Gaetan Leurent
In this paper, we study differential attacks against ARX schemes. We build upon the generalized characteristics of de Cannière and Rechberger and the multi-bit constraints of Leurent. We describe a more efficient way to propagate multi-bit constraints, which allows us to use the complete set of 2^32 2.5-bit constraints, instead of the reduced sets used by Leurent. As a result, we are able to build complex non-linear differential characteristics for reduced versions of the hash function Skein. We present several characteristics for use in various attack scenarios; this results in attacks with a relatively low complexity, in relatively strong settings. In particular, we show practical free-start and semi-free-start collision attacks for 20 rounds and 12 rounds of Skein-256, respectively. To the best of our knowledge, these are the first examples of complex differential trails built for pure ARX designs. We believe this is important work for assessing the security of ARX designs against differential cryptanalysis. Our improved tools will be publicly available with the final version of this paper.
Last updated:  2012-11-28
False Negative probabilities in Tardos codes
Antonino Simone, Boris Skoric
Forensic watermarking is the application of digital watermarks for the purpose of tracing unauthorized redistribution of content. The most powerful type of attack on watermarks is the collusion attack, in which multiple users compare their differently watermarked versions of the same content. Collusion-resistant codes have been developed against these attacks. One of the most famous such codes is the Tardos code. It has the asymptotically optimal property that it can resist c attackers with a code of length proportional to c^2. Determining error rates for the Tardos code and its various extensions and generalizations turns out to be a nontrivial problem. In recent work we developed an approach called the Convolution and Series Expansion (CSE) method to accurately compute false positive accusation probabilities. In this paper we extend the CSE method in order to make it possible to compute false negative accusation probabilities as well.
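For intuition, the asymptotically optimal c^2 length scaling can be sketched as follows (the constant A depends on the analysis: Tardos's original proof used A = 100, and later analyses reduce it, so the numbers are purely illustrative):

```python
# Code length scaling for Tardos-style collusion-resistant codes:
# m = A * c^2 * ln(n / eps), with c the coalition size, n the number of
# users and eps the target error probability. The constant A is
# analysis-dependent; A = 100 below is only illustrative.
import math

def tardos_length(c: int, n: int, eps: float, A: float = 100.0) -> int:
    return math.ceil(A * c * c * math.log(n / eps))

for c in (2, 5, 10, 20):
    print(c, tardos_length(c, n=10**6, eps=1e-3))
```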
Last updated:  2013-02-27
Estimating the Φ(n) of Upper/Lower Bound in its RSA Cryptosystem
Chenglian Liu, Ziwei Ye
The RSA-768 (232 decimal digits) was factored by Kleinjung et al. on December 12, 2009, and the RSA-704 (212 decimal digits) was factored by Bai et al. on July 2, 2012. The RSA-200 (663 bits) was factored by Bahr et al. on May 9, 2005. To date, nobody has succeeded in breaking the RSA-210 challenge (696 bits). In this paper, we discuss an estimation method to approach the lower/upper bound of Φ(n) from the RSA parameters. Our contribution may help researchers pin down Φ(n) and break such challenge RSA numbers in the near future.
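One elementary bound of this kind follows from n = pq and Φ(n) = n - (p + q) + 1: since p + q ≥ 2√n, we always have Φ(n) ≤ n - 2√n + 1. A toy check with hypothetical small primes:

```python
# For an RSA modulus n = p*q, phi(n) = n - (p + q) + 1, and since
# p + q >= 2*sqrt(n), phi(n) <= n - 2*sqrt(n) + 1 always holds.
import math

p, q = 1_000_003, 1_000_033        # toy primes, for illustration only
n = p * q
phi = n - (p + q) + 1
upper = n - 2 * math.isqrt(n) + 1  # integer sqrt keeps the bound valid

assert phi <= upper
print(f"phi(n)      = {phi}")
print(f"upper bound = {upper}  (gap: {upper - phi})")
```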
Last updated:  2012-11-28
Uniform Compression Functions Can Fail to Preserve “Full” Entropy
Daniel R. L. Brown
A draft NIST standard defines having “full” entropy as having min-entropy very close, proportionally, to the min-entropy of a uniform distribution. A function is uniform if all its preimages have the same size. This report proves that the output of any uniform compression function can fail to have full entropy, even when the input has full entropy.
Last updated:  2012-12-03
PRE: Stronger Security Notion and Efficient Construction with New Property
Jiang Zhang, Zhenfeng Zhang, Yu Chen
In a proxy re-encryption (PRE) scheme, a proxy is given a re-encryption key and has the ability to translate a ciphertext under one key into a ciphertext of the same message under a different key, without learning anything about the message encrypted under either key. PREs have been widely used in many exciting applications, such as email forwarding and law enforcement. Based on a close look at the applications of PREs, we find that a PRE receiver needs the ability, analogous to that provided by public-key encryption with non-interactive opening, to non-interactively and efficiently convince third parties of what he obtains from a particular (transformed) ciphertext, while still keeping the security of his secret key and other ciphertexts. To meet such a practical requirement, we first introduce proxy re-encryption with non-interactive opening (PRENO), and formally define the notions of security against \textit{chosen ciphertext attacks} (CCA) and \textit{proof soundness}. Our security model is natural and strong since we allow the CCA adversary to adaptively choose public keys for malicious users (i.e., a chosen key model), and a scheme secure in previous models (i.e., knowledge of secret key models) is not necessarily secure in our model. Then, we present an efficient PRENO scheme which satisfies our security notions based on the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. Compared with two previous PRE schemes, our scheme is competitive in several aspects. First, its CCA security is proved in a strong security model under a well-studied assumption in the standard model. Second, it has a good overall performance in terms of ciphertext length and computational cost. Third, it is the first to provide non-interactive opening for PRE schemes.
Last updated:  2012-11-28
Virtual isomorphisms of ciphers: is AES secure against differential / linear attack?
Alexander Rostovtsev
In [eprint.iacr.org/2009/117], the method of virtual isomorphisms of ciphers was proposed for cryptanalysis. A cipher is vulnerable to an attack iff an isomorphic cipher is vulnerable to that attack. The method is based on conjugation, and in that form it is not practical because all round operations except one become nonlinear. A new isomorphism of AES is proposed; its image IAES has only one nonlinear operation, IXOR, the isomorphic image of the XOR of 5 bytes. The maximal probabilities of byte differentials are increased by a factor of about 10-11, and the maximal biases of linear sums by a factor of about 3.6, compared to the original AES. IAES possesses a computable family of differentials of IXOR with two active input bytes, zero output difference, and probability 1. The zero output difference decreases the rate of multiplication of active nonlinearities in a differential characteristic of IAES.
Last updated:  2012-11-26
Asynchronous Physical Unclonable Functions – AsyncPUF
Julian Murphy
Physically Unclonable Functions (PUFs) exploit the physical characteristics of silicon and provide an alternative to storing digital encryption keys in non-volatile memory. A PUF maps a unique set of digital inputs to a corresponding set of digital outputs. In this paper, we advocate the use of asynchronous logic and design techniques to implement PUFs, yielding Asynchronous Physically Unclonable Functions (APUFs). We describe a new method of using asynchronous rings to implement PUFs, called AsyncPUF, which features inherent field programmability. Compared to the existing state of the art, it is a novel and holistic PUF design, as it naturally addresses the two challenges that have prevented widespread adoption of PUFs to date: robustness and entropy. Results of electrical simulation in a 90-nanometre lithography process are presented and discussed.
Last updated:  2012-11-26
Breaking Another Quasigroup-Based Cryptographic Scheme
Markus Dichtl, Pascale Böffgen
In their paper ``A Quasigroup Based Random Number Generator for Resource Constrained Environments'', the authors Matthew Battey and Abhishek Parakh propose the pseudo random number generator LOQG PRNG 256. We show several highly efficient attacks on LOQG PRNG 256.
Last updated:  2017-08-18
Design of Secure Image Transmission In MANET using Number Theory Based Image Compression and Quasigroup Encryption (NTICQE) Algorithm
Munivel E, Rajeswari Mukesh
Image compression and image encryption are pivotal to proper storage and transmission of images over MANETs. Simultaneous image compression and encryption aims at achieving enhanced bandwidth utilization and security at the same time. The Number Theory based Image Compression and Quasigroup Encryption (NTICQE) algorithm employs a number-theoretic paradigm, the Chinese Remainder Theorem, together with quasigroup encryption to solve congruences and hence realize the twin goals of compression and encryption simultaneously. The quasigroup encryptor has very good data-scrambling properties and therefore has potential uses in symmetric cryptography.
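A minimal sketch of the quasigroup (Latin square) stream-encryption idea such schemes build on, with an illustrative 4x4 quasigroup rather than NTICQE's actual parameters:

```python
# Quasigroup stream encryption: each ciphertext symbol is leader * m_i
# under the quasigroup operation, and the ciphertext symbol becomes the
# next leader. Decryption uses the left-inverse operation.
Q = [[2, 1, 0, 3],          # a Latin square: every row/column is a permutation
     [1, 3, 2, 0],
     [3, 0, 1, 2],
     [0, 2, 3, 1]]
# Left inverse for decryption: INV[a][Q[a][b]] == b
INV = [[row.index(v) for v in range(4)] for row in Q]

def encrypt(msg, leader):
    out = []
    for m in msg:
        leader = Q[leader][m]
        out.append(leader)
    return out

def decrypt(ct, leader):
    out = []
    for c in ct:
        out.append(INV[leader][c])
        leader = c
    return out

msg = [0, 1, 2, 3, 2, 1, 0]
ct = encrypt(msg, leader=1)
assert decrypt(ct, leader=1) == msg
print(ct)
```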
Last updated:  2012-11-26
Does Counting Still Count? Revisiting the Security of Counting based User Authentication Protocols against Statistical Attacks
Hassan Jameel Asghar, Shujun Li, Ron Steinfeld, Josef Pieprzyk
At NDSS 2012, Yan et al. analyzed the security of several challenge-response type user authentication protocols against passive observers, and proposed a generic counting based statistical attack to recover the secret of some counting based protocols given a number of observed authentication sessions. Roughly speaking, the attack is based on the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects when the responses are taken into account. Although they mentioned that a protocol susceptible to this attack should minimize this difference, they did not give details as to how this can be achieved barring a few suggestions. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative which enables us to describe a method to theoretically estimate a lower bound on the number of sessions a protocol can be safely used against the attack. Our results include 1) two proposed fixes to make counting protocols practically safe against the attack at the cost of usability, 2) the observation that the attack can be used on non-counting based protocols too as long as challenge generation is contrived, 3) and two main design principles for user authentication protocols which can be considered as extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting based protocols to determine their susceptibility to this attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
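The core of the counting attack can be reproduced in a toy simulation: in a parity-style counting protocol, pass objects exhibit a response-correlated bias that decoy objects do not (the protocol and parameters below are illustrative, not Foxtail's actual specification):

```python
# Toy counting attack: the response is |challenge ∩ secret| mod 2, and a
# passive observer averages responses per displayed object. Pass objects
# drift measurably away from decoys after enough sessions.
import random

random.seed(7)
UNIVERSE, SECRET_SIZE, SHOWN, SESSIONS = 100, 10, 20, 50_000
secret = set(random.sample(range(UNIVERSE), SECRET_SIZE))

score = [0.0] * UNIVERSE
seen = [0] * UNIVERSE
for _ in range(SESSIONS):
    challenge = random.sample(range(UNIVERSE), SHOWN)
    response = len(secret.intersection(challenge)) % 2
    for obj in challenge:
        seen[obj] += 1
        score[obj] += response

avg = [score[i] / seen[i] for i in range(UNIVERSE)]
top = sorted(range(UNIVERSE), key=lambda i: avg[i], reverse=True)[:SECRET_SIZE]
print("secret objects among top scores:", sum(i in secret for i in top))
```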
Last updated:  2013-06-12
Digital Signatures with Minimal Overhead from Indifferentiable Random Invertible Functions
Eike Kiltz, Krzysztof Pietrzak, Mario Szegedy
In a digital signature scheme with message recovery, rather than transmitting the message $m$ and its signature $\sigma$, a single enhanced signature $\tau$ is transmitted. The verifier is able to recover $m$ from $\tau$ and at the same time verify its authenticity. The two most important parameters of such a scheme are its security and overhead $|\tau|-|m|$. A simple argument shows that for any scheme with ``$n$ bits security" $|\tau|-|m|\ge n$, i.e., the overhead is lower bounded by the security parameter $n$. Currently, the best known constructions in the random oracle model are far from this lower bound requiring an overhead of $n+\log q_h$, where $q_h$ is the number of queries to the random oracle. In this paper we give a construction which basically matches the $n$ bit lower bound. We propose a simple digital signature scheme with $n+o(\log q_h)$ bits overhead, where $q_h$ denotes the number of random oracle queries. Our construction works in two steps. First, we propose a signature scheme with message recovery having optimal overhead in a new ideal model, the random invertible function model. Second, we show that a four-round Feistel network with random oracles as round functions is tightly "public-indifferentiable'' from a random invertible function. At the core of our indifferentiability proof is an almost tight upper bound for the expected number of edges of the densest "small'' subgraph of a random Cayley graph, which may be of independent interest.
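The structure used in the second step, a four-round Feistel network with hash-based round functions, is easy to sketch (SHA-256 standing in for the random oracles; an illustrative sketch, not the paper's exact construction):

```python
# Four-round Feistel network with hash-based round functions; inverting
# it only requires running the rounds in reverse order.
import hashlib

HALF = 16  # bytes per half-block

def F(round_idx: int, half: bytes) -> bytes:
    return hashlib.sha256(bytes([round_idx]) + half).digest()[:HALF]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes) -> bytes:
    L, R = block[:HALF], block[HALF:]
    for i in range(4):
        L, R = R, xor(L, F(i, R))
    return L + R

def feistel_decrypt(block: bytes) -> bytes:
    L, R = block[:HALF], block[HALF:]
    for i in reversed(range(4)):
        L, R = xor(R, F(i, L)), L
    return L + R

m = bytes(range(32))
assert feistel_decrypt(feistel_encrypt(m)) == m
print(feistel_encrypt(m).hex())
```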
Last updated:  2012-11-26
Fixed Argument Pairing Inversion on Elliptic Curves
Sungwook Kim, Jung Hee Cheon
Let $E$ be an elliptic curve over a finite field ${\mathbb F}_q$, where $q$ is a prime power, $r$ a prime dividing $\#E({\mathbb F}_q)$, and $k$ the smallest positive integer satisfying $r \mid \Phi_k(q)$, called the embedding degree. Then a bilinear map $t: E({\mathbb F}_q)[r] \times E({\mathbb F}_{q^k})/rE({\mathbb F}_{q^k}) \rightarrow {\mathbb F}_{q^k}^*$ is defined, called the Tate pairing. The Ate pairing and other variants are obtained by reducing the domain for each argument and raising it to some power. In this paper we consider the {\em Fixed Argument Pairing Inversion (FAPI)} problem for the Tate pairing and its variants. In 2012, considering FAPI for the Ate$_i$ pairing, Kanayama and Okamoto formulated the {\em Exponentiation Inversion (EI)} problem. However, their definition gives a somewhat vague description of the hardness of EI. We point out that the EI problem as described can be easily solved, and hence clarify the description so that the problem does capture the actual hardness in connection with the prescribed domain for given pairings. Next we show that inverting the Ate pairing (including other variants of the Tate pairing) defined on the smaller domain is neither easier nor harder than inverting the Tate pairing defined on the larger domain. This is very interesting because it is commonly believed that the structure of the Ate pairing is so simple and good (that is, the Miller length is short, and the solution domain is small and has an algebraic structure induced from the Frobenius map) that it may leak some information, and thus there would be a chance for attackers to find a further approach to solving FAPI for the Ate pairing, differently from the Tate pairing.
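The embedding degree is just the multiplicative order of q modulo r, which is straightforward to compute for toy parameters:

```python
# Embedding degree: the smallest k with r | q^k - 1, i.e. the
# multiplicative order of q modulo r. Toy parameters for illustration.
def embedding_degree(q: int, r: int) -> int:
    k, t = 1, q % r
    while t != 1:
        t = (t * q) % r
        k += 1
    return k

print(embedding_degree(q=103, r=13))   # 103 = -1 mod 13, so k = 2
print(embedding_degree(q=11, r=7))     # order of 11 modulo 7 is 3
```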
Last updated:  2012-11-21
Security Evaluation of Rakaposhi Stream Cipher
Mohammad Ali Orumiehchiha, Josef Pieprzyk, Elham Shakour, Ron Steinfeld
Rakaposhi is a synchronous stream cipher, which uses three main components: a non-linear feedback shift register (NLFSR), a dynamic linear feedback shift register (DLFSR) and a non-linear filtering function ($NLF$). The NLFSR consists of 128 bits and is initialised by the secret key $K$. The DLFSR holds 192 bits and is initialised by an initial vector ($IV$). $NLF$ takes 8-bit inputs and returns a single output bit. This work identifies weaknesses and properties of the cipher. The main observation is that the initialisation procedure has the so-called sliding property, which can be used to launch distinguishing and key recovery attacks. The distinguisher needs four observations of related $(K,IV)$ pairs. The key recovery algorithm allows the attacker to discover the secret key $K$ after observing $2^{9}$ related $(K,IV)$ pairs; in the proposed related-key attack, the number of related $(K,IV)$ pairs is $2^{(128+192)/4}$. Further, the cipher is studied when the registers enter short cycles. When the NLFSR is set to all ones, the cipher degenerates to a linear feedback shift register with a non-linear filter; consequently, the initial state (and the secret key and $IV$) can be recovered with complexity $2^{63.87}$. If the DLFSR is set to all zeros, $NLF$ reduces to a filter function of low non-linearity; as a result, the cipher is insecure, allowing the adversary to distinguish it from a random cipher after $2^{17}$ observations of keystream bits. There is also a key recovery algorithm that finds the secret key with complexity $2^{54}$.
Last updated:  2014-02-26
Privacy Preserving Revocable Predicate Encryption Revisited
Kwangsu Lee, Intae Kim, Seong Oun Hwang
Predicate encryption (PE), which provides both access control for ciphertexts and privacy of ciphertexts, is a new paradigm of public-key encryption. An important application of PE is a searchable encryption system in cloud storage, where it enables a client to securely outsource the search of a keyword on encrypted data without revealing the keyword to the cloud server. One practical issue of PE is to devise an efficient revocation method to revoke a user when the secret key of the user is compromised. Privacy preserving revocable PE (RPE) can provide not only revocation, but also the privacy of revoked users. In this paper, we first define two new security models of privacy preserving RPE: strongly full-hiding security and weakly full-hiding security. The strongly full-hiding security provides the full privacy of ciphertexts against outside and inside adversaries, whereas the weakly full-hiding security provides the full privacy of ciphertexts against an outside adversary who cannot decrypt the challenge ciphertext. Next, we propose a general RPE construction from any PE scheme, and prove its security in the weakly full-hiding security model. Our generic RPE scheme is efficient since the number of ciphertext elements is not proportional to the number of users in a receiver set. Additionally, our RPE scheme can support polynomial-size circuits if a recently proposed FE scheme for polynomial-size circuits is used as the underlying PE scheme.
Last updated:  2012-11-21
Refine the Concept of Public Key Encryption with Delegated Search
Qiang Tang, Yuanjie Zhao, Xiaofeng Chen, Hua Ma
We revisit the concept of public key encryption with delegated keyword search (PKEDS), a concept proposed by Ibraimi et al. A PKEDS scheme allows a receiver to authorize third-party server(s) to search in two ways: either according to a message chosen by the server itself or according to a trapdoor sent by the receiver. We show that the existing formulation has some defects and the proposed scheme is unnecessarily inefficient. Based on our analysis, we present a refined formulation of the primitive with a new security model. We then propose a new PKEDS scheme, which is proven secure and much more efficient than the original scheme by Ibraimi et al.
Last updated:  2012-11-21
How powerful are the DDH hard groups?
Periklis A. Papakonstantinou, Charles W. Rackoff, Yevgeniy Vahlis
The question whether Identity-Based Encryption (IBE) can be based on the Decisional Diffie-Hellman (DDH) assumption is one of the most prominent questions in cryptography related to DDH. We study limitations on the use of the DDH assumption in cryptographic constructions, and show that it is impossible to construct a secure Identity-Based Encryption system using, in a black box way, only the DDH (or similar) assumption about a group. Our impossibility result is set in the generic groups model, where we describe an attack on any IBE construction that relies on oracle access to the group operation of randomly labelled group elements, a model that naturally formalizes DDH hardness. The vast majority of existing separation results typically give separation from general primitives, whereas we separate a primitive from a class of number theoretic hardness assumptions. Accordingly, we face challenges in creating an attack algorithm that will work against constructions which leverage the underlying algebraic structure of the group. In fact, we know that this algebraic structure is powerful enough to provide generic constructions for several powerful primitives, including oblivious transfer and chosen ciphertext secure public-key cryptosystems (note that an IBE generalizes such systems). Technically, we explore statistical properties of the group algebra associated with a DDH oracle, which can be of independent interest.
Last updated:  2013-02-17
Round-Efficient Concurrently Composable Secure Computation via a Robust Extraction Lemma
Vipul Goyal, Huijia Lin, Omkant Pandey, Rafael Pass, Amit Sahai
We consider the problem of constructing protocols for secure computation that achieve strong concurrent and composable notions of security in the plain model. Unfortunately, UC-secure computation protocols are impossible in this setting, but the Angel-Based Composable Security notion offers a promising alternative. Until now, however, under standard (polynomial-time) assumptions, only protocols with polynomially many rounds were known to exist. In this work, we give the first $\tilde{O}(\log n)$-round secure computation protocol in the plain model that achieves angel-based composable security in the concurrent setting, under standard assumptions. We do so by constructing the first $\tilde{O}(\log n)$-round CCA-secure commitment protocol. Our CCA-secure commitment protocol is secure based on the minimal assumption that one-way functions exist. A central tool in obtaining our result is a new \emph{robust concurrent extraction lemma} that we introduce and prove, based on the minimal assumption that one-way functions exist. This robust concurrent extraction lemma shows how to build concurrent extraction procedures that work even in the context of an ``external'' protocol that cannot be rewound by the extractor. We believe this lemma can be used to simplify many existing works on concurrent security, and is of independent interest. In fact, our lemma, when used in conjunction with the concurrent-simulation schedule of Pass and Venkitasubramaniam (TCC'08), also yields a constant-round construction based additionally on the existence of quasi-polynomial time secure one-way functions.
Last updated:  2014-02-10
TAAC: Temporal Attribute-based Access Control for Multi-Authority Cloud Storage Systems
Kan Yang, Zhen Liu, Zhenfu Cao, Xiaohua Jia, Duncan S. Wong, Kui Ren
Data access control is an effective way to ensure the data security in the cloud. Due to data outsourcing and untrusted cloud servers, the data access control becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-based Encryption (CP-ABE), as a promising technique for access control of encrypted data, is very suitable for access control in cloud storage systems due to its high efficiency and expressiveness. However, the existing CP-ABE schemes cannot be directly applied to data access control for cloud storage systems because of the attribute revocation problem. In this paper, we consider the problem of attribute revocation in multi-authority cloud storage systems where the users' attributes come from different domains each of which is managed by a different authority. We propose TAAC (Temporal Attribute-based Access Control), an efficient data access control scheme for multi-authority cloud storage systems, where the authorities are independent from each other and no central authority is needed. TAAC can efficiently achieve temporal access control on attribute-level rather than on user-level. Moreover, different from the existing schemes with attribute revocation functionality, TAAC does not require re-encryption of any ciphertext when the attribute revocation happens, which means great improvement on the efficiency of attribute revocation. The analysis results show that TAAC is highly efficient, scalable, and flexible to applications in practice.
Last updated:  2015-03-04
Formal analysis of privacy in Direct Anonymous Attestation schemes
Ben Smyth, Mark D. Ryan, Liqun Chen
This article introduces a definition of privacy for Direct Anonymous Attestation schemes. The definition is expressed as an equivalence property which is suited to automated reasoning using Blanchet's ProVerif. The practicality of the definition is demonstrated by analysing the RSA-based Direct Anonymous Attestation protocol by Brickell, Camenisch & Chen. The analysis discovers a vulnerability in the RSA-based scheme which can be exploited by a passive adversary and, under weaker assumptions, corrupt administrators. A security fix is identified and the revised protocol is shown to satisfy our definition of privacy.
Last updated:  2013-02-25
A Robust and Plaintext-Aware Variant of Signed ElGamal Encryption
Yannick Seurin, Joana Treger
Adding a Schnorr signature to ElGamal encryption is a popular proposal aiming at thwarting chosen-ciphertext attacks by rendering the scheme plaintext-aware. However, there is no known security proof for the resulting scheme, at least not in a weaker model than the one obtained by combining the Random Oracle Model (ROM) and the Generic Group Model (Schnorr and Jakobsson, ASIACRYPT 2000). In this paper, we propose a very simple modification to Schnorr-Signed ElGamal encryption such that the resulting scheme is semantically secure under adaptive chosen-ciphertext attacks (IND-CCA2-secure) in the ROM under the Decisional Diffie-Hellman assumption. In fact, we even prove that our new scheme is plaintext-aware in the ROM as defined by Bellare et al. (CRYPTO'98). Interestingly, we also observe that Schnorr-Signed ElGamal is not plaintext-aware (again, for the definition of Bellare et al.) under the Computational Diffie-Hellman assumption. We show that our new scheme additionally achieves anonymity as well as robustness, a notion formalized by Abdalla et al. (TCC 2010) which captures the fact that it is hard to create a ciphertext that is valid under two different public keys. Finally, we study the hybrid variant of our new proposal, and show that it is IND-CCA2-secure in the ROM under the Computational Diffie-Hellman assumption when used with a symmetric encryption scheme satisfying the weakest security notion, namely ciphertext indistinguishability under one-time attacks (IND-OT-security).
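For orientation, the following minimal Python sketch shows the folklore Schnorr-Signed ElGamal construction the paper starts from: plain ElGamal encryption plus a Schnorr proof of knowledge of the encryption randomness, with the proof verified before decryption. This is not the authors' modified variant; the toy group, hash choice, and message encoding are illustrative assumptions only.

```python
# Toy sketch of folklore Schnorr-Signed ElGamal (NOT the paper's modified
# variant; toy parameters, illustration only).
import hashlib
import secrets

p, q, g = 23, 11, 4        # g generates the order-q subgroup of Z_p^*

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q - 1) + 1     # receiver's secret key
y = pow(g, x, p)                     # receiver's public key

def encrypt(m):
    r = secrets.randbelow(q - 1) + 1
    R, C = pow(g, r, p), (m * pow(y, r, p)) % p    # plain ElGamal
    s = secrets.randbelow(q - 1) + 1               # Schnorr proof for r
    A = pow(g, s, p)
    z = (s + H(A, R, C) * r) % q
    return R, C, A, z

def decrypt(ct):
    R, C, A, z = ct
    if pow(g, z, p) != (A * pow(R, H(A, R, C), p)) % p:
        return None                                 # reject invalid proof
    shared = pow(R, x, p)
    return (C * pow(shared, p - 2, p)) % p          # m = C / R^x

assert decrypt(encrypt(9)) == 9
```

The point of the signature layer is that a well-formed ciphertext demonstrates knowledge of r; the paper's observation is that this alone does not yield a plaintext-awareness proof without generic-group-style assumptions.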
Last updated:  2012-11-21
Search in Encrypted Data: Theoretical Models and Practical Applications
Qiang Tang
Recently, the concept of Search in Encrypted Data (SED) has become a prominent topic in cryptography. A SED scheme enables a client to have third-party server(s) perform certain search functionalities on his encrypted data. In this book chapter, we aim at conducting a systematic study of SED schemes. Firstly, we describe three application scenarios and identify the desirable security requirements. Secondly, we provide two orthogonal categorizations and review the related security models for each category of SED schemes. Thirdly, we analyze the practical issues related to SED schemes and identify some future research directions.
Last updated:  2012-11-21
A Measure of Dependence for Cryptographic Primitives Relative to Ideal Functions
Daniel Smith-Tone, Cristina Tone
In this work we present a modification of a well-established measure of dependence appropriate for the analysis of stopping times for adversarial processes on cryptographic primitives. We apply this measure to construct generic criteria for the ideal behavior of fixed functions in both the random oracle and ideal permutation setting. More significantly, we provide a nontrivial extension of the notion of hash function indifferentiability, transporting the theory from the status of providing security arguments for protocols utilizing ideal primitives into the more realistic setting of protocol assurance with fixed functions. The methodology this measure introduces to indifferentiability analysis connects the security of a hash function with an indifferentiable mode to the security of the underlying compression function in a quantitative way; thus, we prove that dependence results on cryptographic primitives provide a direct means of determining the practical resistance or vulnerability of protocols employing such primitives.
Last updated:  2013-10-04
Galindo-Garcia Identity-Based Signature, Revisited
Sanjit Chatterjee, Chethan Kamath, Vikas Kumar
In Africacrypt 2009, Galindo and Garcia proposed a lightweight identity-based signature (IBS) scheme based on the Schnorr signature. The construction is simple and was claimed to be the most efficient IBS to date. Its security is based on the discrete-log assumption, and the security argument consists of two reductions, B1 and B2, both of which use the multiple-forking lemma to solve the discrete-log problem (DLP). In this work, we revisit the original security argument. Our contributions are twofold: (i) we identify several problems in the original argument, and (ii) we provide a detailed new security argument which allows significantly tighter reductions. In particular, we show that the reduction B1 fails in the standard security model for IBS, while the reduction B2 is incomplete. To remedy these problems, we adopt a two-pronged approach. First, we sketch ways to fill the gaps by making minimal changes to the structure of the original security argument; then, we provide a new security argument. The new argument consists of three reductions, R1, R2 and R3, in each of which solving the DLP is reduced to breaking the IBS. R1 uses the general forking lemma together with programming of the random oracles and Coron's technique. Reductions R2 and R3, on the other hand, use the multiple-forking lemma along with programming of the random oracles. We show that the reductions R1 and R2 are significantly tighter than their original counterparts.
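For readers unfamiliar with the scheme, here is a rough Python sketch of the nested-Schnorr structure underlying the Galindo-Garcia IBS: the user key is essentially a Schnorr signature by the master on the identity, and signing applies a second Schnorr layer under that key. The toy group and the exact hash inputs are illustrative assumptions, not the paper's formal specification.

```python
# Toy sketch of a nested-Schnorr IBS in the spirit of Galindo-Garcia.
import hashlib
import secrets

p, q, g = 23, 11, 4   # toy group: g has order q in Z_p^* -- demo only

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

z = secrets.randbelow(q - 1) + 1     # master secret key
Z = pow(g, z, p)                     # master public key

def extract(identity):
    """Master issues a user key: a Schnorr signature on the identity."""
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    y = (r + z * H(identity, R)) % q     # g^y = R * Z^H(id, R)
    return R, y

def sign(identity, R, y, msg):
    """User signs with a second, nested Schnorr layer under key y."""
    a = secrets.randbelow(q - 1) + 1
    A = pow(g, a, p)
    b = (a + y * H(identity, A, msg)) % q
    return A, R, b

def verify(identity, msg, sig):
    A, R, b = sig
    user_pub = (R * pow(Z, H(identity, R), p)) % p   # reconstruct g^y
    return pow(g, b, p) == (A * pow(user_pub, H(identity, A, msg), p)) % p

R, y = extract("alice@example.com")
assert verify("alice@example.com", "hi", sign("alice@example.com", R, y, "hi"))
```

The two forking points visible here (the challenge in the extraction layer and the challenge in the signing layer) are exactly what forces the security argument into the multiple-forking lemma, with the tightness losses the paper revisits.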
Last updated:  2013-03-22
Simple, Efficient and Strongly KI-Secure Hierarchical Key Assignment Schemes
Eduarda S. V. Freire, Kenneth G. Paterson, Bertram Poettering
Hierarchical Key Assignment Schemes can be used to enforce access control policies by cryptographic means. In this paper, we present a new, enhanced security model for such schemes. We also give simple, efficient, and strongly-secure constructions for Hierarchical Key Assignment Schemes for arbitrary hierarchies using pseudorandom functions and forward-secure pseudorandom generators. We compare instantiations of our constructions with state-of-the-art Hierarchical Key Assignment Schemes, demonstrating that our new schemes possess an attractive trade-off between storage requirements and efficiency of key derivation.
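The flavour of such constructions can be conveyed by a minimal sketch: for a tree-shaped hierarchy, each node key is derived from its parent's key with a PRF (HMAC-SHA256 here), so keys derive downwards but never upwards. This toy, under our own simplifying assumptions, omits arbitrary posets and the forward-secure PRGs that the paper's strongly KI-secure constructions rely on.

```python
# Minimal PRF-chain key derivation for a tree hierarchy (HMAC as the PRF).
# Only shows the basic "derive down, never up" mechanic -- not the paper's
# full construction for arbitrary hierarchies.
import hmac
import hashlib

def derive(parent_key: bytes, child_label: str) -> bytes:
    return hmac.new(parent_key, child_label.encode(), hashlib.sha256).digest()

root = b"\x00" * 32                      # top-level key (from a KDF in practice)
k_staff   = derive(root, "staff")
k_payroll = derive(k_staff, "payroll")   # staff key holders can re-derive this

# Anyone holding k_staff recomputes k_payroll, but k_payroll reveals
# nothing about k_staff (one-wayness of the PRF).
assert derive(k_staff, "payroll") == k_payroll
```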
Last updated:  2012-11-20
Impossibility Results for Indifferentiability with Resets
Atul Luykx, Elena Andreeva, Bart Mennink, Bart Preneel
The indifferentiability framework of Maurer, Renner, and Holenstein (MRH) has gained immense popularity in recent years and has proved to be a powerful way to argue the security of cryptosystems that enjoy proofs in the random oracle model. Recently, however, Ristenpart, Shacham, and Shrimpton (RSS) showed that the composition theorem of MRH has a more limited scope than originally thought, and that extending its scope required the introduction of reset-indifferentiability, a notion which no practical domain extenders satisfy with respect to random oracles. In light of the results of RSS, we set out to rigorously tackle the specifics of indifferentiability and reset-indifferentiability by viewing the notions as special cases of a more general definition. Our contributions are twofold. Firstly, we provide the necessary formalism to refine the notion of indifferentiability regarding composition. By formalizing the definition of stage-minimal games we expose new notions lying in between regular indifferentiability (MRH) and reset-indifferentiability (RSS). Secondly, we answer the open problem of RSS by showing that it is impossible to build any domain extender which is reset-indifferentiable from a random oracle. This result formally confirms the intuition that reset-indifferentiability is too strong a notion to be satisfied by any hash function. As a consequence we look at the weaker notion of single-reset-indifferentiability, yet there as well we demonstrate that there are no ``meaningful'' domain extenders which satisfy this notion. Not all is lost though, as we also view indifferentiability in a more general setting and point out the possibility for different variants of indifferentiability.
Last updated:  2012-11-20
Protocols for Multiparty Coin Toss With Dishonest Majority
Amos Beimel, Eran Omri, Ilan Orlov
Coin-tossing protocols are protocols that generate a random bit with uniform distribution. These protocols are used as a building block in many cryptographic protocols. Cleve [STOC 1986] has shown that if at least half of the parties can be malicious, then, in any r-round coin-tossing protocol, the malicious parties can cause a bias of $\Omega(1/r)$ in the bit that the honest parties output. However, for more than two decades the best known protocols had bias $t/\sqrt{r}$, where t is the number of corrupted parties. Recently, in a surprising result, Moran, Naor, and Segev [TCC 2009] have shown that there is an r-round two-party coin-tossing protocol with the optimal bias of $O(1/r)$. We extend the results of Moran et al. to the multiparty model when fewer than 2/3 of the parties are malicious. The bias of our protocol is proportional to 1/r and depends on the gap between the number of malicious parties and the number of honest parties in the protocol. Specifically, for a constant number of parties, or when the number of malicious parties is somewhat larger than half, we present an r-round m-party coin-tossing protocol with the optimal bias of $O(1/r)$.
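To see why naive protocols fail, the following toy Monte Carlo (an assumption-laden illustration of the difficulty, not any protocol from the literature) models each round's jointly generated bit as a fair coin and gives the adversary only the power to abort after seeing the result, forcing honest parties back onto a local random coin. The resulting bias is about 1/4 regardless of the round count, which is what makes the $O(1/r)$ protocols discussed above nontrivial.

```python
# Toy illustration of constant bias in a naive coin-toss protocol:
# output = XOR of per-round bits; an adversary who sees the result first
# aborts on an unfavorable outcome, and honest parties then fall back on
# a fresh local coin.
import random

def run_once(rounds: int) -> int:
    outcome = 0
    for _ in range(rounds):
        outcome ^= random.getrandbits(1)   # each round's jointly tossed bit
    if outcome == 1:
        return 1                           # adversary is happy, lets it stand
    return random.getrandbits(1)           # adversary aborts; honest fallback

trials = 200_000
ones = sum(run_once(rounds=10) for _ in range(trials))
print(f"Pr[output = 1] ~ {ones / trials:.3f}")   # ~0.75, i.e. bias ~1/4
```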
Last updated:  2013-08-20
Practical Covertly Secure MPC for Dishonest Majority – or: Breaking the SPDZ Limits
Ivan Damgard, Marcel Keller, Enrique Larraia, Valerio Pastro, Peter Scholl, Nigel P. Smart
SPDZ (pronounced “Speedz”) is the nickname of the MPC protocol of Damgård et al. from Crypto 2012. SPDZ provided various efficiency innovations on both the theoretical and practical sides compared to previous work in the preprocessing model. In this paper we both resolve a number of open problems with SPDZ and present several theoretical and practical improvements to the protocol. In detail, we start by designing and implementing a covertly secure key generation protocol for distributed BGV secret keys. In prior work this was assumed to be provided by a given setup functionality. Protocols for distributing BGV secret keys are likely to be of wider applicability than to the SPDZ protocol alone. We then construct both a covertly and an actively secure preprocessing phase, both of which compare favourably with previous work in terms of efficiency and provable security. We also build a new online phase, which solves a major problem of the SPDZ protocol: prior to this work, preprocessed data could be used for only one function evaluation and then had to be recomputed from scratch for the next evaluation, while our online phase can support reactive functionalities. This improvement comes mainly from the fact that our construction does not require players to reveal the MAC keys to check correctness of MAC'd values. Since our focus is also on practical instantiations, our implementation offloads as much computation as possible into the preprocessing phase, thus resulting in a faster online phase. Moreover, a better analysis of the parameters of the underlying cryptoscheme and a more specific choice of the field where computation is performed allow us to obtain a better optimized implementation. Improvements are also due to the fact that our construction is in the random oracle model, and the practical implementation is multi-threaded.
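A minimal sketch of the information-theoretic MAC idea behind SPDZ-style online phases, under the simplifying assumptions of a trusted dealer and no commitment round: each value x is additively shared together with shares of alpha*x, and a MAC on an opened value can be checked without ever revealing the global key alpha.

```python
# SPDZ-flavoured additive sharing with MACs over a prime field (toy dealer,
# no commitment round): the MAC on an opened value is checked by opening
# only sigma_i = mac_share_i - alpha_share_i * opened_value, never alpha.
import secrets

P = 2**61 - 1          # toy prime field
N = 3                  # number of players

def share(v):
    parts = [secrets.randbelow(P) for _ in range(N - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

alpha = secrets.randbelow(P)            # global MAC key, itself shared
alpha_sh = share(alpha)

def authenticate(x):
    return share(x), share(alpha * x % P)

x_sh, mac_sh = authenticate(42)

# Partial opening: players reveal their shares of x only.
x_open = sum(x_sh) % P

# MAC check: each player publishes sigma_i; the sigmas must sum to zero.
sigmas = [(mac_sh[i] - alpha_sh[i] * x_open) % P for i in range(N)]
assert sum(sigmas) % P == 0 and x_open == 42
```

In the actual protocol the sigma_i are committed to before being opened, which is what lets the check catch active cheaters without leaking alpha; the sketch omits that round.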
Last updated:  2012-11-11
A unidirectional conditional proxy re-encryption scheme based on non-monotonic access structure
Bin Wang
Recently, Fang et al. [6] introduced an interactive (bidirectional) conditional proxy re-encryption (C-PRE) scheme such that a proxy can only convert ciphertexts that satisfy an access policy set by the delegator. Their scheme supports monotonic access policies expressed by “OR” and “AND” gates. In addition, their scheme is called interactive since the generation of re-encryption keys requires interaction between the delegator and the delegatee. In this paper, we study the problem of constructing a unidirectional (non-interactive) C-PRE scheme supporting non-monotonic access policies expressed by “NOT”, “OR” and “AND” gates. A security model for unidirectional C-PRE schemes is also proposed in this paper. To obtain a unidirectional C-PRE scheme supporting non-monotonic access policies, we extend the unidirectional PRE scheme presented by Libert et al. [8] using ideas from the non-monotonic attribute-based encryption (ABE) scheme presented by Ostrovsky et al. [9]. Furthermore, the security of our C-PRE scheme is proved under the modified 3-weak Decision Bilinear Diffie-Hellman Inversion assumption in the standard model.
Last updated:  2012-11-11
Preimage and Pseudo-Collision Attacks on Step-Reduced SM3 Hash Function
Gaoli Wang, Yanzhao Shen
SM3~\cite{SM3hf} is the Chinese cryptographic hash standard, announced in 2010 and designed by Wang $et\ al.$ It is based on the Merkle-Damgård design and its compression function can be seen as a block cipher used in Davies-Meyer mode. It operates on 512-bit message blocks and outputs a 256-bit hash value. This paper studies the security of the SM3 hash function against preimage attacks and pseudo-collision attacks. We propose preimage attacks on 29-step and 30-step SM3, and pseudo-preimage attacks on 31-step and 32-step SM3, out of 64 steps. The complexities of these attacks are $2^{245}$ 29-step operations, $2^{251.1}$ 30-step operations, $2^{245}$ 31-step operations and $2^{251.1}$ 32-step operations, respectively. These (pseudo) preimage attacks all start from the first step of the reduced SM3. Meanwhile, they can be converted into pseudo-collision attacks on SM3 reduced to 29, 30, 31 and 32 steps with complexities of $2^{122}$, $2^{125.1}$, $2^{122}$ and $2^{125.1}$, respectively. As far as we know, the previously best known preimage attacks on SM3 cover 28 steps (from the first step) and 30 steps (from the 7-th step), and there is no publicly published result on (pseudo) collision attacks on SM3.
Last updated:  2012-11-11
Coarse-grained integer - Smooth? Rough? Both!
Daniel Loebenberger, Michael Nüsken
We count ]B, C]-grained, k-factor integers, which are simultaneously B-rough and C-smooth and have a fixed number k of prime factors. Our aim is to exploit explicit versions of the prime number theorem as much as possible to get good explicit bounds for the count of such integers. This analysis was inspired by certain inner procedures in the general number field sieve. The result should at least provide some insight into what happens there. We estimate the given count in terms of some recursively defined functions. Since these are still difficult to handle, only another approximation step reveals their orders. Finally, we use the obtained bounds to perform numerical experiments that show how well the desired count can be approximated for the parameters of the general number field sieve in the inspiring application mentioned above.
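For tiny parameters the quantity being estimated can be checked by exhaustive enumeration. The Python sketch below is our own illustration, with "k-factor" read as counting prime factors with multiplicity (one plausible reading); it counts the ]B, C]-grained integers up to a bound.

```python
# Brute-force check of the ]B, C]-grained count for tiny parameters:
# integers all of whose prime factors p satisfy B < p <= C (i.e. B-rough
# and C-smooth), with exactly k prime factors counted with multiplicity.
def prime_factors(n):
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def count_grained(limit, B, C, k):
    hits = 0
    for n in range(2, limit + 1):
        fs = prime_factors(n)
        if len(fs) == k and all(B < p <= C for p in fs):
            hits += 1
    return hits

# e.g. 221 = 13 * 17 is a ]10, 100]-grained 2-factor integer
print(count_grained(limit=10_000, B=10, C=100, k=2))
```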
Last updated:  2012-11-11
Cryptanalysis and Improvement of a Multi-Receiver Generalized Signcryption Scheme
Cai-xue Zhou
A generalized signcryption (GSC) scheme can adaptively work as an encryption scheme, a signature scheme or a signcryption scheme with only one algorithm, which makes it very suitable for storage-constrained environments. In this paper, we analyze a multi-receiver GSC scheme and show that it cannot achieve indistinguishability under adaptive chosen-ciphertext attacks (IND-CCA2) in either the pure encryption mode or the hybrid encryption mode. We further propose a revised version of the scheme, which resolves the security issues of the original scheme without sacrificing its high efficiency and simple design. Our improved scheme can be proved IND-CCA2 secure and existentially unforgeable under adaptive chosen-message attacks (EUF-CMA) under the computational Diffie-Hellman (CDH) assumption.
Last updated:  2012-11-11
Efficient Methods for Practical Fully Homomorphic Symmetric-key Encryption, Randomization and Verification
Aviad Kipnis, Eliphaz Hibshoosh
We present high-performance, non-deterministic, fully homomorphic methods for practical randomization of data (over a commutative ring) and symmetric-key encryption of random mod-N data (over the ring of residues mod N), well suited for cryptographic applications. These methods secure, for example, the multivariate input or the coefficients of a polynomial function running in an open untrusted environment. We show that random plaintext is a sufficient condition for a proof of security of the homomorphic encryption. The efficient nature of the methods - one large-number multiplication per encryption and six for the product of two encrypted values - motivates and enables the use of low-cost collaborative security platforms for cryptographic applications such as keyed-hash or private-key derivation algorithms. Such a platform is comprised of a low-cost, low-performance security element supported by an untrusted high-performance server running the homomorphic algorithms. The methods employed may also provide enhanced protection for some existing cryptographic algorithms against certain attacks. Specifically, it is shown how to secure the OSS public-key signature against Pollard's attack. Further, we demonstrate how homomorphic randomization of data can offer protection for an AES key against side-channel attacks. Finally, the methods provide both fault detection and verification of computed-data integrity.
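One way to realize a construction matching this description (our reading of the approach, in the style sometimes called MORE, and not necessarily the authors' exact algorithm) is to hide x as the conjugate S * diag(x, r) * S^{-1} mod N for a secret invertible matrix S and fresh randomness r, so that matrix addition and multiplication act homomorphically on the hidden plaintexts:

```python
# Hedged sketch of a matrix-conjugation homomorphic randomization
# (MORE-style; our reconstruction, toy parameters, illustration only).
import secrets

N = 2**61 - 1   # toy prime modulus (an RSA-like N would be used in practice)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N for j in range(2)]
            for i in range(2)]

def rand_invertible():
    while True:
        S = [[secrets.randbelow(N) for _ in range(2)] for _ in range(2)]
        det = (S[0][0] * S[1][1] - S[0][1] * S[1][0]) % N
        if det:  # nonzero det is invertible mod a prime
            d = pow(det, -1, N)
            S_inv = [[S[1][1] * d % N, -S[0][1] * d % N],
                     [-S[1][0] * d % N, S[0][0] * d % N]]
            return S, S_inv

S, S_inv = rand_invertible()

def enc(x):
    r = secrets.randbelow(N)         # fresh randomness -> non-deterministic
    D = [[x % N, 0], [0, r]]
    return mat_mul(mat_mul(S, D), S_inv)

def dec(M):
    return mat_mul(mat_mul(S_inv, M), S)[0][0]

a, b = enc(6), enc(7)
prod = mat_mul(a, b)                 # conjugation is a ring homomorphism
assert dec(prod) == 42 and dec(enc(6)) == 6
```

Note that security of schemes of this shape hinges on the plaintexts being random, which matches the sufficiency condition stated in the abstract.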
Last updated:  2013-07-12
On the Complexity of the BKW Algorithm on LWE
Martin R. Albrecht, Carlos Cid, Jean-Charles Faugère, Robert Fitzpatrick, Ludovic Perret
This work presents a study of the complexity of the Blum-Kalai-Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, by providing refined estimates for the data and computational effort requirements for solving concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and compare with alternative approaches based on lattice reduction. As a result, we provide new upper bounds for the concrete hardness of these LWE-based schemes. Rather surprisingly, it appears that the BKW algorithm outperforms known estimates for lattice reduction algorithms starting in dimension n ≈ 250 when LWE is reduced to SIS. However, this assumes access to an unbounded number of LWE samples.
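A schematic numpy sketch of one BKW reduction phase on LWE samples (a, b) may help fix ideas: samples are bucketed on a block of coordinates and colliding pairs are subtracted, zeroing that block at the cost of roughly doubling the noise. The parameters here are toy assumptions; the paper's analysis concerns concrete costs of iterating such phases.

```python
# One BKW reduction phase on toy LWE samples: bucket samples on their last
# `width` coordinates and subtract colliding pairs, zeroing that block
# while (roughly) doubling the noise.
import numpy as np

q, n, width = 97, 10, 2
rng = np.random.default_rng(0)

def lwe_samples(m, secret):
    A = rng.integers(0, q, size=(m, n))
    e = rng.integers(-1, 2, size=m)               # tiny toy noise
    return A, (A @ secret + e) % q

def bkw_step(A, b, width):
    buckets, A2, b2 = {}, [], []
    for a_i, b_i in zip(A, b):
        key = tuple(a_i[-width:])
        if key in buckets:
            a_j, b_j = buckets[key]
            A2.append((a_i - a_j) % q)            # last `width` coords now 0
            b2.append((b_i - b_j) % q)
        else:
            buckets[key] = (a_i, b_i)
    return np.array(A2), np.array(b2)

s = rng.integers(0, q, size=n)
A, b = lwe_samples(50_000, s)
A, b = bkw_step(A, b, width)
assert not A[:, -width:].any()                    # block eliminated
```

The sample hunger visible here (each phase roughly halves the usable samples per bucket) is exactly why the unbounded-samples caveat in the abstract matters.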
Last updated:  2013-03-31
Secure Outsourced Attribute-based Encryption
Jin Li, Jingwei Li, Xiaofeng Chen, Chunfu Jia, Duncan S. Wong
Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing (by Attribute Authorities (AAs)) and decryption (by eligible users) are getting prohibitively high. Although existing outsourced ABE solutions can offload some intensive computing tasks to a third party, for example a cloud, to relieve the local burden of eligible users during decryption, the high computational complexity of key-issuing at the AAs has yet to be addressed; meanwhile, an ABE system keeps growing as more users are included, and user revocation, which must be considered in practice, triggers even more key (re-)issuing. Aiming at tackling these challenges, we propose, for the first time, a Secure Outsourced ABE system which not only supports secure outsourced decryption but also provides secure outsourced key-issuing. Unlike current outsourced ABE systems, our new method offloads all access-policy- and attribute-related operations in the key-issuing process and in decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the AAs and eligible users to perform locally. Furthermore, we show that both outsourcing processes (to the KGSP and to the DSP) are secure: the KGSP cannot recover the keys and the DSP cannot decrypt the ciphertexts. In addition, we consider the scenario in which a KGSP or DSP may be dishonest and maliciously return incorrect values rather than following the outsourced operations. We therefore also propose another ABE construction which allows the AAs and eligible users to check the correctness of the outsourced operations in an efficient way. The security of this construction is analyzed under a recently formalized model called Refereed Delegation of Computation (RDoC).
Last updated:  2012-11-11
Cryptanalysis of Double-Block-Length Hash Mode MJH
Deukjo Hong, Daesung Kwon
MJH, a double-block-length (DBL) hash mode of block ciphers, has been proved collision-resistant in the ideal cipher model up to $2^{2n/3-\log n}$ queries. In this paper we provide the first cryptanalytic results for MJH. We show a collision attack on MJH with time complexity below the birthday bound. When block ciphers with 128-bit blocks are used, it has time complexity around $2^{124}$, which is to be compared to the birthday attack of complexity $2^{128}$. We also give a preimage attack on MJH. It has time complexity $2^{3n/2+1}$ with $n$-bit block ciphers, which is to be compared to the brute-force attack of complexity $2^{2n}$.
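For context, the birthday bound that the attack is measured against corresponds to the generic collision search below: a toy Python sketch on a truncated hash (not the paper's dedicated attack on MJH), which needs about $2^{t/2}$ evaluations for a $t$-bit output.

```python
# Generic birthday collision search on a toy hash (truncated SHA-256):
# the generic baseline that dedicated attacks like the one on MJH must
# beat; ~2^(t/2) evaluations for a t-bit output.
import hashlib
from itertools import count

def toy_hash(x: int, t_bytes: int = 4) -> bytes:
    return hashlib.sha256(x.to_bytes(8, "big")).digest()[:t_bytes]

def birthday_collision(t_bytes: int = 4):
    seen = {}
    for x in count():                  # inputs never repeat
        h = toy_hash(x, t_bytes)
        if h in seen:
            return seen[h], x          # two distinct preimages found
        seen[h] = x

x1, x2 = birthday_collision()
assert x1 != x2 and toy_hash(x1) == toy_hash(x2)
```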