All papers in 2010 (Page 6 of 660 results)

Last updated:  2010-03-27
A Flaw in The Internal State Recovery Attack on ALPHA-MAC
Shengbao Wu, Mingsheng Wang, Zheng Yuan
A distinguisher for ALPHA-MAC was constructed from a 2-round collision differential path, requiring about $2^{65.5}$ chosen messages and $2^{65.5}$ queries, and was then used to recover the internal state (\cite{Yuan1},\cite{Yuan2}). However, a flaw is found in the internal state recovery attack: recovering the internal state actually requires a $2^{81}$ exhaustive search, so the complexity of the whole attack rises to $2^{67}$ chosen messages and a $2^{81}$ exhaustive search. To repair the flaw, a modified 2-round differential path of ALPHA-MAC is presented and a new distinguisher based on this path is proposed. Finally, under the new distinguisher, an attack with about $2^{65.5}$ chosen messages and $2^{65.5}$ queries is obtained.
Last updated:  2010-12-07
Identity-Based Encryption Secure against Selective Opening Attack
Mihir Bellare, Brent Waters, Scott Yilek
We present the first Identity-Based Encryption (IBE) schemes that are proven secure against selective opening attack (SOA). This means that if an adversary, given a vector of ciphertexts, adaptively corrupts some fraction of the senders, exposing not only their messages but also their coins, the privacy of the unopened messages is guaranteed. Achieving security against such attacks is well-known to be challenging and was only recently solved in the PKE case, but the techniques used there do not solve the IBE case. Our solutions illustrate two techniques for achieving SOA-secure IBE, one based on the Boyen-Waters anonymous IBE and the other based on Waters’ dual-system approach.
Last updated:  2010-03-24
A variant of the F4 algorithm
Antoine Joux, Vanessa Vitse
Algebraic cryptanalysis usually requires finding solutions of several similar polynomial systems. A standard tool for this task is computing the Gröbner bases of the corresponding ideals, and Faugère's F4 and F5 are two well-known algorithms for this purpose. In this paper, we present a new variant of the F4 algorithm which is well suited to algebraic attacks on cryptosystems, since it is designed to compute Gröbner bases of a set of polynomial systems having the same shape. It is faster than F4 as it avoids all reductions to zero, but preserves F4's simplicity and computational efficiency, thus competing with F5.
Last updated:  2010-03-24
Elliptic Curve Discrete Logarithm Problem over Small Degree Extension Fields. Application to the static Diffie-Hellman problem on $E(\F_{q^5})$
Antoine Joux, Vanessa Vitse
In 2008 and 2009, Gaudry and Diem proposed an index calculus method for the resolution of the discrete logarithm on the group of points of an elliptic curve defined over a small degree extension field $\F_{q^n}$. In this paper, we study a variation of this index calculus method, improving the overall asymptotic complexity when $\log q \leq c n^3$. In particular, we are able to successfully obtain relations on $E(\F_{p^5})$, whereas the more expensive computational complexity of Gaudry and Diem's initial algorithm makes it impractical in this case. An important ingredient of this result is a new variation of Faugère's Gröbner basis algorithm F4, which significantly speeds up the relation computation and might be of independent interest. As an application, we show how this index calculus leads to a practical example of an oracle-assisted resolution of the elliptic curve static Diffie-Hellman problem over a finite field on $130$ bits, which is faster than birthday-based discrete logarithm computations on the same curve.
Last updated:  2010-03-24
Genus 2 Curves with Complex Multiplication
Eyal Z. Goren, Kristin E. Lauter
Genus 2 curves are useful in cryptography for both discrete-log based and pairing-based systems, but a method is required to compute genus 2 curves with Jacobian with a given number of points. Currently, all known methods involve constructing genus 2 curves with complex multiplication via computing their 3 Igusa class polynomials. These polynomials have rational coefficients and require extensive computation and precision to compute. Both the computation and the complexity analysis of these algorithms can be improved by a more precise understanding of the denominators of the coefficients of the polynomials. The main goal of this paper is to give a bound on the denominators of Igusa class polynomials of genus 2 curves with CM by a primitive quartic CM field $K$. We give an overview of Igusa's results on the moduli space of genus two curves and the method to construct genus 2 curves via their Igusa invariants. We also give a complete characterization of the reduction type of a CM abelian surface, for biquadratic, cyclic, and non-Galois quartic CM fields, and for any type of prime decomposition of the prime, including ramified primes.
Last updated:  2010-03-25
The Upper Bounds on Differential Characteristics in Block Cipher SMS4
Zhang MeiLing, Liu JingMei, Wang XinMei
SMS4 is a 128-bit block cipher with a 128-bit user key and 32 rounds, which is used in the Chinese National Standard for Wireless LAN WAPI. In this paper, all possible differential patterns are divided into several sections by six designed rules. In order to evaluate the security against the differential cryptanalysis of SMS4, we calculate the lower bounds on the number of active S-Boxes for all kinds of sections, based on which the lower bounds on the number of active S-Boxes in all possible differential patterns can be derived. Finally, the upper bounds on differential characteristic probabilities of arbitrary round numbers are given, which can be used to estimate the strength of SMS4 against differential attack and linear attack.
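The final step of such an analysis is mechanical: if a differential pattern has at least $A$ active S-boxes, the probability of any characteristic following it is at most $DP_{max}^A$, where $DP_{max}$ is the maximum differential probability of the SMS4 S-box (commonly reported as $2^{-6}$). A minimal sketch, using a placeholder active-S-box count rather than the paper's actual bounds:

```python
from fractions import Fraction

# Maximum differential probability of the SMS4 S-box, commonly reported as
# 2^-6 (an assumption here; the paper's active-S-box bounds are not
# reproduced in this sketch).
DP_MAX = Fraction(1, 2**6)

def char_prob_upper_bound(active_sboxes: int) -> Fraction:
    """Upper bound on any differential characteristic's probability, given a
    lower bound on the number of active S-boxes."""
    return DP_MAX ** active_sboxes

# Example: at least 22 active S-boxes pushes every characteristic's
# probability to at most 2^-132, i.e. below the 2^-128 security level.
assert char_prob_upper_bound(22) == Fraction(1, 2**132)
```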
Last updated:  2010-09-15
Efficient Public-Key Cryptography in the Presence of Key Leakage
Yevgeniy Dodis, Kristiyan Haralambiev, Adriana Lopez-Alt, Daniel Wichs
We study the design of cryptographic primitives resistant to a large class of side-channel attacks, called "memory attacks", where an attacker can repeatedly and adaptively learn information about the secret key, subject *only* to the constraint that the *overall amount* of such information is bounded by some parameter $\ell$. Although the study of such primitives was initiated only recently by Akavia et al. [AGV09], subsequent work already produced many such "leakage-resilient" primitives [NS09,ADW09,KV09], including signature, encryption, identification (ID) and authenticated key agreement (AKA) schemes. Unfortunately, every existing scheme for any of the four fundamental primitives above fails to satisfy at least one of the following desirable properties:
- Efficiency. While the construction may be generic, it should have some *efficient* instantiations, based on standard cryptographic assumptions, and without relying on random oracles.
- Strong Security. The construction should satisfy the strongest possible definition of security (even in the presence of leakage). For example, encryption schemes should be secure against chosen *ciphertext* attack (CCA), while signatures should be *existentially* unforgeable.
- Leakage Flexibility. It should be possible to set the parameters of the schemes so that the leakage bound $\ell$ can come arbitrarily close to the size of the secret key $sk$.
In this work we design the first signature, encryption, ID and AKA schemes which overcome these limitations and satisfy all the properties above. Moreover, all our constructions are generic, in several cases elegantly simplifying and generalizing the prior constructions (which did not have any efficient instantiations). We also introduce several tools of independent interest, such as the abstraction (and constructions) of *simulation-extractable* NIZK arguments, and a new *deniable* DH-based AKA protocol based on any CCA-secure encryption.
Last updated:  2010-03-23
Founding Cryptography on Tamper-Proof Hardware Tokens
Vipul Goyal, Yuval Ishai, Amit Sahai, Ramarathnam Venkatesan, Akshay Wadia
A number of works have investigated using tamper-proof hardware tokens as tools to achieve a variety of cryptographic tasks. In particular, Goldreich and Ostrovsky considered the goal of software protection via oblivious RAM. Goldwasser, Kalai, and Rothblum introduced the concept of \emph{one-time programs}: in a one-time program, an honest sender sends a set of {\em simple} hardware tokens to a (potentially malicious) receiver. The hardware tokens allow the receiver to execute a secret program specified by the sender's tokens exactly once (or, more generally, up to a fixed $t$ times). A recent line of work initiated by Katz examined the problem of achieving UC-secure computation using hardware tokens. Motivated by the goal of unifying and strengthening these previous notions, we consider the general question of basing secure computation on hardware tokens. We show that the following tasks, which cannot be realized in the ``plain'' model, become feasible if the parties are allowed to generate and exchange tamper-proof hardware tokens.
Unconditional non-interactive secure computation: We show that by exchanging simple stateful hardware tokens, any functionality can be realized with unconditional security against malicious parties. In the case of two-party functionalities $f(x,y)$ which take their inputs from a sender and a receiver and deliver their output to the receiver, our protocol is non-interactive and only requires a unidirectional communication of simple stateful tokens from the sender to the receiver. This strengthens previous feasibility results for one-time programs both by providing unconditional security and by offering general protection against malicious senders. As is typically the case for unconditionally secure protocols, our protocol is in fact UC-secure. This improves over previous works on UC-secure computation based on hardware tokens, which provided computational security under cryptographic assumptions.
Interactive secure computation from stateless tokens based on one-way functions: We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions. One cannot hope for security against unbounded adversaries with stateless tokens, since an unbounded adversary could query the token multiple times to ``learn'' the functionality it contains.
Non-interactive secure computation from stateless tokens: We consider the problem of designing non-interactive secure computation from stateless tokens for stateless oblivious reactive functionalities, i.e., reactive functionalities which allow unlimited queries from the receiver (these are the only functionalities one can hope to realize non-interactively with stateless tokens). By building on recent techniques from resettably secure computation, we give a general positive result for stateless oblivious reactive functionalities under standard cryptographic assumptions. This result generalizes the notion of (unlimited-use) obfuscation by providing security against a malicious sender, and also provides the first general feasibility result for program obfuscation using stateless tokens.
Last updated:  2010-11-10
Secure and Fast Implementations of Two Involution Ciphers
Billy Bob Brumley
Anubis and Khazad are closely related involution block ciphers. Building on two recent AES software results, this work presents a number of constant-time software implementations of Anubis and Khazad for processors with a byte-vector shuffle instruction, such as those that support SSSE3. For Anubis, the first is serial in the sense that it employs only one cipher instance and is compatible with all standard block cipher modes. Efficiency is largely due to the S-box construction that is simple to realize using a byte shuffler. The equivalent for Khazad runs two parallel instances in counter mode. The second for each cipher is a parallel bit-slice implementation in counter mode.
Last updated:  2010-03-21
Ring signature with divided private key
Stelian Flonta, Liviu-Cristian Miclea
The ring signature is a group signature without a group manager, so that a signer produces a signature in the name of the group. In some situations it is necessary for a message to be signed by more than one person. The ring signature scheme with a divided key is an algorithm which enables k entities out of a group of n entities to produce a signature. By the design of this scheme, each signer has his own private key, which he uses in the signing phase. Verification is performed with a single common public key. The signature scheme is based on the discrete logarithm problem. This cryptographic primitive ensures the anonymity of the signature, which is a ring signature.
Last updated:  2011-06-05
Black-Box Computational Zero-Knowledge Proofs, Revisited: The Simulation-Extraction Paradigm
Mohammad Sadeq Dousti
The concept of zero-knowledge proofs has been around for about 25 years. It has been redefined over and over to suit the special security requirements of protocols and systems. Common among all definitions is the requirement of the existence of some efficient ``device'' simulating the view of the verifier (or the transcript of the protocol), such that the simulation is indistinguishable from the reality. The definitions differ in many respects, including the type and power of the devices, the order of quantifiers, the type of indistinguishability, and so on. In this paper, we will scrutinize the definition of ``black-box computational'' zero-knowledge, in which there exists one simulator \emph{for all} verifiers, the simulator has black-box access to the verifier, and the quality of simulation is such that the real and simulated views cannot be distinguished by polynomial tests (\emph{computational} indistinguishability). Working in a theoretical model (the Random-Oracle Model), we show that the indistinguishability requirement is stated in a \emph{conceptually} inappropriate way: Present definitions allow the knowledge of the \emph{verifier} and \emph{distinguisher} to be independent, while the two entities are essentially coupled. Therefore, our main take on the problem will be \emph{conceptual} and \emph{semantic}, rather than \emph{literal}. We formalize the concept by introducing a ``knowledge extractor'' into the model, which tries to extract the extra knowledge hard-coded into the distinguisher (if any), and then helps the simulator to construct the view of the verifier. The new paradigm is termed \emph{Simulation-Extraction Paradigm}, as opposed to the previous \emph{Simulation Paradigm}. We also provide an important application of the new formalization: Using the simulation-extraction paradigm, we construct one-round (i.e. two-move) zero-knowledge protocols of proving ``the computational ability to invert some trapdoor permutation'' in the Random-Oracle Model. It is shown that the protocol cannot be proven zero-knowledge in the classical Simulation Paradigm. The proof of the zero-knowledge property in the new paradigm is interesting in that it does not require knowing the internal structure of the trapdoor permutation, or a polynomial-time reduction from it to another (e.g. an $\mathcal{NP}$-complete) problem.
Last updated:  2010-03-21
On Small Subgroup Non-confinement Attack
Feng Hao
The small subgroup confinement attack works by confining cryptographic operations within a small subgroup, in which exhaustive search is feasible. This attack is overt and hence can be easily thwarted by adding public key validation: verifying that the received group element has the proper order. In this paper, we present a different aspect of the small subgroup attack. Sometimes, the fact that an operation does not fall into the small subgroup confinement may provide an oracle to an attacker, leaking partial information about the long-term secrets. This attack is subtle and reflects a structural weakness of a protocol; the question of whether the protocol has public key validation is completely irrelevant. As a concrete example, we show how this attack works on the Secure Remote Password (SRP-6) protocol.
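The order check mentioned above can be sketched as follows; the group values below are illustrative toy parameters, not those of SRP-6:

```python
# Toy Schnorr-style group: p prime, q prime with q | p - 1, and we confine
# operations to the order-q subgroup. Sizes are illustrative only.
p = 23  # prime modulus
q = 11  # prime order of the intended subgroup; q | p - 1

def validate_element(x: int) -> bool:
    """Public-key validation: accept only elements of the order-q subgroup.

    This is the 'overt' countermeasure the abstract refers to: it stops the
    classic small subgroup confinement attack, but not the subtler
    non-confinement oracle described in the paper.
    """
    if not (1 < x < p):          # exclude 0, 1 and out-of-range values
        return False
    return pow(x, q, p) == 1     # element's order divides q

# 2 generates the order-11 subgroup of Z_23^* (2^11 = 2048 = 89*23 + 1).
assert validate_element(2)
# p - 1 (i.e. -1 mod p) has order 2: a small-subgroup element, rejected.
assert not validate_element(p - 1)
```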
Last updated:  2010-03-20
Comments on five smart card based password authentication protocols
Yalin Chen, Jue-Sam Chou, Chun-Hui Huang
In this paper, we use the ten security requirements proposed by Liao et al. for a smart card based authentication protocol to examine five recent works in this area. After analysis, we found that the protocols of Juang et al., Hsiang et al., Kim et al., and Li et al. all suffer from a password guessing attack if the smart card is lost, and the protocol of Xu et al. suffers from an insider attack.
Last updated:  2010-06-22
A New Framework for Password-Based Authenticated Key Exchange
Adam Groce, Jonathan Katz
Protocols for password-based authenticated key exchange (PAKE) allow two users who share only a short, low-entropy password to agree on a cryptographically strong session key. The challenge in designing such protocols is that they must be immune to off-line dictionary attacks in which an eavesdropping adversary exhaustively enumerates the dictionary of likely passwords in an attempt to match a password to the set of observed transcripts. To date, few general frameworks for constructing PAKE protocols in the standard model are known. Here, we abstract and generalize a protocol by Jiang and Gong to give a new methodology for realizing PAKE without random oracles, in the common reference string model. In addition to giving a new approach to the problem, the resulting construction offers several advantages over prior work. We also describe an extension of our protocol that is secure within the universal composability~(UC) framework and, when instantiated using El Gamal encryption, is more efficient than a previous protocol of Canetti et al.
Last updated:  2010-04-07
Some Applications of Lattice Based Root Finding Techniques
Santanu Sarkar, Subhamoy Maitra
In this paper we present some problems and their solutions exploiting lattice based root finding techniques. In CaLC 2001, Howgrave-Graham proposed a method to find the Greatest Common Divisor (GCD) of two large integers when one of the integers is exactly known and the other one is known approximately. In this paper, we present three applications of the technique. The first one is to show deterministic polynomial time equivalence between factoring $N$ ($N = pq$, where $p > q$ or $p, q$ are of the same bit size) and knowledge of $q^{-1} \bmod p$. Next, we consider the problem of finding smooth integers in a short interval. The third one is to factorize $N$ given a multiple of the decryption exponent in RSA. In Asiacrypt 2006, Jochemsz and May presented a general strategy for finding roots of a polynomial. We apply that technique to solve the following two problems. The first one is to factorize $N$ given an approximation of a multiple of the decryption exponent in RSA. The second one is to solve the implicit factorization problem given three RSA moduli, assuming that certain portions of the LSBs as well as MSBs of the three secret primes are the same.
Last updated:  2022-06-15
i-Hop Homomorphic Encryption and Rerandomizable Yao Circuits
Craig Gentry, Shai Halevi, Vinod Vaikuntanathan
Homomorphic encryption (HE) schemes enable computing functions on encrypted data, by means of a public $\Eval$ procedure that can be applied to ciphertexts. But the evaluated ciphertexts so generated may differ from freshly encrypted ones. This brings up the question of whether one can keep computing on evaluated ciphertexts. An \emph{$i$-hop} homomorphic encryption scheme is one where $\Eval$ can be called on its own output up to $i$~times, while still being able to decrypt the result. A \emph{multi-hop} homomorphic encryption is a scheme which is $i$-hop for all~$i$. In this work we study $i$-hop and multi-hop schemes in conjunction with the properties of function-privacy (i.e., $\Eval$'s output hides the function) and compactness (i.e., the output of $\Eval$ is short). We provide formal definitions and describe several constructions. First, we observe that "bootstrapping" techniques can be used to convert any (1-hop) homomorphic encryption scheme into an $i$-hop scheme for any~$i$, and the result inherits the function-privacy and/or compactness of the underlying scheme. However, if the underlying scheme is not compact (such as schemes derived from Yao circuits) then the complexity of the resulting $i$-hop scheme can be as high as $k^{O(i)}$. We then describe a specific DDH-based multi-hop homomorphic encryption scheme that does not suffer from this exponential blowup. Although not compact, this scheme has complexity linear in the size of the composed function, independently of the number of hops. The main technical ingredient in this solution is a \emph{re-randomizable} variant of the Yao circuits. Namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated the original garbled circuit cannot recognize it. This construction may be of independent interest.
Last updated:  2012-05-09
New Definitions and Separations for Circular Security
David Cash, Matthew Green, Susan Hohenberger
Traditional definitions of encryption security guarantee secrecy for any plaintext that can be computed by an outside adversary. In some settings, such as anonymous credential or disk encryption systems, this is not enough, because these applications encrypt messages that depend on the secret key. A natural question to ask is whether standard definitions capture these scenarios. One area of interest is n-circular security, where the ciphertexts E(pk_1, sk_2), E(pk_2, sk_3), ..., E(pk_{n-1}, sk_n), E(pk_n, sk_1) must be indistinguishable from encryptions of zero. Acar et al. (Eurocrypt 2010) provided a CPA-secure public-key cryptosystem that is not 2-circular secure due to a distinguishing attack. In this work, we consider a natural relaxation of this definition. Informally, a cryptosystem is n-weak circular secure if an adversary given the cycle E(pk_1, sk_2), E(pk_2, sk_3), ..., E(pk_{n-1}, sk_n), E(pk_n, sk_1) has no significant advantage in the regular security game (e.g., CPA or CCA), where ciphertexts of chosen messages must be distinguished from ciphertexts of zero. Since this definition is sufficient for some practical applications and the Acar et al. counterexample no longer applies, the hope is that it would be easier to realize, or perhaps even implied by standard definitions. We show that this is unfortunately not the case: even this weaker notion is not implied by standard definitions. Specifically, we show: 1. For symmetric encryption, under the minimal assumption that one-way functions exist, n-weak circular (CPA) security is not implied by CCA security, for any n. In fact, it is not even implied by authenticated encryption security, where ciphertext integrity is guaranteed. 2. For public-key encryption, under a number-theoretic assumption, 2-weak circular security is not implied by CCA security.
In both of these results, which also apply to the stronger circular security definition, we actually show for the first time an attack in which the adversary can recover the secret key of an otherwise-secure encryption scheme after an encrypted key cycle is published. These negative results are an important step in answering deep questions about which attacks are prevented by commonly-used definitions and systems of encryption. They say to practitioners: if key cycles may arise in your system, then even if you use CCA-secure encryption, your system may break catastrophically; that is, a passive adversary might be able to recover your secret keys.
Last updated:  2010-03-18
Small Scale Variants Of The Block Cipher PRESENT
Gregor Leander
In this note we define small scale variants of the block cipher PRESENT [1]. The main reason for this is that the running time of some recent attacks (e.g. [2, 3]) remains unclear, as they are based on heuristics that are hard or even impossible to verify in practice. Those attacks usually require the full codebook of PRESENT to be available, and they work only if some independence assumptions hold in practice. While those assumptions are clearly wrong from a theoretical point of view, their impact on the running times of the attacks in question is not clear. With versions of PRESENT with smaller block size it might be possible to verify how those attacks scale for those versions and hopefully learn something about PRESENT itself.
Last updated:  2010-03-16
Mean value formulas for twisted Edwards curves
Dustin Moody
R. Feng and H. Wu recently established a certain mean-value formula for the x-coordinates of the n-division points on an elliptic curve given in Weierstrass form (A mean value formula for elliptic curves, 2010, available at http://eprint.iacr.org/2009/586.pdf ). We prove a similar result for both the x- and y-coordinates on a twisted Edwards elliptic curve.
Last updated:  2010-08-25
A Reflection on the Security Proofs of Boneh-Franklin Identity-Based Encryption
Yu Chen
Boneh and Franklin constructed the first practical Identity-Based Encryption scheme (BF-IBE) [1] and proved its security based on the computational Bilinear Diffie-Hellman assumption (CBDH) in 2001. The security proof was long believed to be correct until 2005, when Galindo [2] noticed a flawed step in the original proof. In the same paper, Galindo provided a new proof with a looser security reduction. Shortly afterwards, Nishioka [3] improved Galindo's proof to achieve a tighter security reduction. In the same year, Zhang and Imai [4] gave another proof of BF-IBE. Unfortunately, we find that none of their proofs is flawless. In this paper, besides identifying and fixing the lapses in previous proofs, we present two new proofs of the CCA security of BF-IBE. The first proceeds via selective-identity security, imposing a natural constraint on the original scheme. The second directly reduces the security to a stronger assumption, namely the gap Bilinear Diffie-Hellman (GBDH) assumption.
Last updated:  2012-06-11
Improved Agreeing-Gluing Algorithm
Igor Semaev
In this paper we study the asymptotic complexity of solving a system of sparse algebraic equations over finite fields. An equation is called sparse if it depends on a bounded number of variables. Finding solutions to systems of such equations efficiently is an underlying hard problem in the cryptanalysis of modern ciphers. A new deterministic Improved Agreeing-Gluing Algorithm is introduced. The expected running time of the algorithm on uniformly random instances of the problem is rigorously estimated. The estimate is at present the best theoretical bound on the complexity of solving average instances of the problem. In particular, this is a significant improvement over those in our earlier papers [20,21]. For sparse Boolean equations, the gap between the present worst-case and average-case time complexity of the problem has thus significantly increased. We also formulate an Average Time Complexity Conjecture; if proved, it will have far-reaching consequences in the field of cryptanalysis and in computing in general.
Last updated:  2010-03-14
A New Class of Public Key Cryptosystems Constructed Based on Perfect Error-Correcting Codes Realizing Coding Rate of Exactly 1.0
Masao Kasahara
In this paper, we propose a new method for constructing public-key cryptosystems based on a class of perfect error-correcting codes. The constructed PKC is referred to as K(IV)SE(1)PKC. In K(IV)SE(1)PKC, members of the class of perfect error-correcting codes, such as the (7,4,3) cyclic Hamming code and the (3,1,3) code {000,111}, are used, yielding a simple process of encryption and decryption. K(IV)SE(1)PKC has the remarkable feature that the coding rate can be exactly 1.0 due to the use of perfect codes. Besides, the size of the public key for K(IV)SE(1)PKC can be made smaller than that of the McEliece PKC.
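The (7,4,3) Hamming code mentioned above is a perfect single-error-correcting code, and is easy to exercise; a minimal sketch of its standard systematic encoding and syndrome decoding (a textbook construction, not the K(IV)SE(1)PKC encryption itself):

```python
# Systematic (7,4,3) Hamming code over GF(2): four data bits followed by
# three parity bits. It corrects any single-bit error, which is exactly the
# property that makes it a perfect code.
G = [  # generator matrix rows: codeword = data * G (mod 2)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [  # parity-check matrix rows: H * codeword^T = 0 (mod 2)
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    """Encode 4 data bits into a 7-bit codeword."""
    return [sum(data[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(word):
    return [sum(H[r][j] * word[j] for j in range(7)) % 2 for r in range(3)]

def correct(word):
    """Correct up to one flipped bit: a nonzero syndrome equals the column
    of H at the error position."""
    s = syndrome(word)
    if s == [0, 0, 0]:
        return word
    for j in range(7):
        if [H[r][j] for r in range(3)] == s:
            fixed = list(word)
            fixed[j] ^= 1
            return fixed
    raise ValueError("uncorrectable (more than one error)")

cw = encode([1, 0, 1, 1])      # -> [1, 0, 1, 1, 0, 1, 0]
bad = list(cw)
bad[2] ^= 1                    # flip one bit
assert correct(bad) == cw
```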
Last updated:  2010-03-18
On the Security of a Novel Remote User Authentication Scheme using Smart Card based on ECDLP
Uncategorized
Manoj Kumar
Show abstract
Uncategorized
In 2009, Jena et al. proposed a novel remote user authentication scheme using smart cards based on the ECDLP and claimed that the proposed scheme withstands security threats. This paper shows that Jena et al.'s scheme is vulnerable to serious security threats and also does not satisfy the attributes of an ideal password authentication scheme.
Last updated:  2010-10-06
Estimating the Security of Lattice-based Cryptosystems
Markus Rückert, Michael Schneider
Encryption and signature schemes based on worst-case lattice problems are promising candidates for the post-quantum era, where classic number-theoretic assumptions are rendered false. Although there have been many important results and breakthroughs in lattice cryptography, the questions of how to systematically evaluate their security in practice and how to choose secure parameters are still open. This is mainly due to the fact that most security proofs are essentially asymptotic statements. In addition, the hardness of the underlying complexity assumption is controlled by several interdependent parameters rather than just a simple bit length as in many classic schemes. With our work, we close this gap by providing a framework that (1) distills a hardness estimate out of a given parameter set and (2) relates the complexity of practical lattice-based attacks to symmetric "bit security" for the first time. Our approach takes various security levels, or attacker types, into account. Moreover, we use it to predict long-term security in a similar fashion as the results that are collected on www.keylength.com. In contrast to the experiments by Gama and Nguyen (Eurocrypt 2008), our estimates are based on precisely the family of lattices that is relevant in modern lattice-based cryptography. Our framework can be applied in two ways: Firstly, to assess the hardness of the (few) proposed parameter sets so far and secondly, to propose secure parameters in the first place. Our methodology is applicable to essentially all lattice-based schemes that are based on the learning with errors problem (LWE) or the small integer solution problem (SIS) and it allows us to compare efficiency and security across different schemes and even across different types of cryptographic primitives.
Last updated:  2010-03-12
On Robust Key Agreement Based on Public Key Authentication
Feng Hao
This paper discusses public-key authenticated key agreement protocols. First, we critically analyze several authenticated key agreement protocols and uncover various theoretical and practical flaws. In particular, we present two new attacks on the HMQV protocol, which is currently being standardized by IEEE P1363. The first attack presents a counterexample to invalidate the basic authentication in HMQV. The second attack is applicable to almost all past schemes, even though many of them have formal security proofs. These attacks highlight the difficulty of designing a crypto protocol correctly and suggest the caution one should always take. We further point out that many of the design errors are caused by sidestepping an important engineering principle, namely ``Do not assume that a message you receive has a particular form (such as $g^{r}$ for known $r$) unless you can check this''. Constructions in the past generally resisted this principle on the grounds of efficiency: checking the knowledge of the exponent is commonly seen as too expensive. In a concrete example, we demonstrate how to effectively integrate the zero-knowledge proof primitive into the protocol design and meanwhile achieve good efficiency. Our new key agreement protocol, YAK, has comparable computational efficiency to the MQV and HMQV protocols with clear advantages on security. Among all the related techniques, our protocol appears to be the simplest so far. We believe simplicity is also an important engineering principle.
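The engineering principle quoted above is typically realized by attaching a zero-knowledge proof of knowledge of the exponent to each group element sent; a minimal sketch of a non-interactive Schnorr proof (toy parameters, and not YAK's actual message flow):

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q
# subgroup. Illustrative sizes only; real protocols use >= 2048-bit groups.
p, q, g = 2039, 1019, 4

def H(*vals) -> int:
    """Fiat-Shamir challenge hash, reduced mod q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prove knowledge of x such that X = g^x mod p."""
    X = pow(g, x, p)
    v = secrets.randbelow(q - 1) + 1   # ephemeral secret
    V = pow(g, v, p)                   # commitment
    c = H(g, V, X)                     # challenge (Fiat-Shamir)
    r = (v - x * c) % q                # response
    return X, (V, r)

def verify(X: int, proof) -> bool:
    """Accept iff V = g^r * X^c, i.e. the prover knows log_g(X)."""
    V, r = proof
    c = H(g, V, X)
    return V == (pow(g, r, p) * pow(X, c, p)) % p

X, proof = prove(123)
assert verify(X, proof)
```

Verification works because g^r * X^c = g^(v - xc) * g^(xc) = g^v = V; a receiver can thus check that an incoming group element "has the form g^r for known r" before using it.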
Last updated:  2010-04-14
On The Broadcast and Validity-Checking Security of PKCS \#1 v1.5 Encryption
Aurélie Bauer, Jean-Sébastien Coron, David Naccache, Mehdi Tibouchi, Damien Vergnaud
This paper describes new attacks on PKCS \#1 v1.5, a deprecated but still widely used RSA encryption standard. The first cryptanalysis is a broadcast attack, allowing the opponent to reveal an identical plaintext sent to different recipients. This is nontrivial because different randomizers are used for different encryptions (in other words, the plaintexts coincide only partially). The second attack predicts, using a single query to a validity-checking oracle, which of two chosen plaintexts corresponds to a challenge ciphertext. The attack's success odds are very high. The two new attacks rely on different mathematical tools and underline the need to accelerate the phase-out of PKCS \#1 v1.5.
Last updated:  2010-06-18
Barreto-Naehrig Curve With Fixed Coefficient - Efficiently Constructing Pairing-Friendly Curves -
Masaaki Shirase
This paper describes a method for constructing Barreto-Naehrig (BN) curves and twists of BN curves that are pairing-friendly and have embedding degree $12$ by using just primality tests, without a complex multiplication (CM) method. Specifically, this paper shows that the number of points of the elliptic curves $y^2=x^3\pm 16$ and $y^2=x^3 \pm 2$ over $\mathbb{F}_p$ is given by 6 polynomials in $z$, $n_0(z),\ldots, n_5(z)$, two of which are irreducible, classified by the value of $z\bmod{12}$ for a prime $p(z)=36z^4+36z^3+24z^2+6z+1$ with $z$ an integer. For example, the elliptic curve $y^2=x^3+2$ over $\mathbb{F}_p$ always becomes a BN curve for any $z$ with $z \equiv 2,11\!\!\!\pmod{12}$. Let $n_i(z)$ be irreducible. Then, to construct a pairing-friendly elliptic curve, it is enough to find an integer $z$ of appropriate size such that $p(z)$ and $n_i(z)$ are primes.
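The search the abstract describes can be sketched directly: pick $z$ in the right residue class and test the primality of $p(z)$ and of the standard BN group order $n(z) = p(z) + 1 - t(z)$ with trace $t(z) = 6z^2 + 1$. A hedged sketch (the function names and the choice of a Miller-Rabin test are this sketch's, not the paper's):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def p_bn(z):  # BN field characteristic p(z)
    return 36*z**4 + 36*z**3 + 24*z**2 + 6*z + 1

def n_bn(z):  # BN curve order p(z) + 1 - t(z), trace t(z) = 6z^2 + 1
    return 36*z**4 + 36*z**3 + 18*z**2 + 6*z + 1

def find_bn_z(bits):
    """Find z with z ≡ 2 or 11 (mod 12) and p(z), n(z) both prime, so
    that (per the paper) y^2 = x^3 + 2 over F_p is a BN curve."""
    z = random.randrange(1 << (bits - 1), 1 << bits)
    while True:
        z += 1
        if z % 12 in (2, 11) and is_probable_prime(p_bn(z)) \
                and is_probable_prime(n_bn(z)):
            return z
```

For instance, `find_bn_z(63)` yields a roughly 254-bit characteristic, the size range typical for BN curves at the 128-bit security level.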
Last updated:  2010-09-22
Signing on Elements in Bilinear Groups for Modular Protocol Design
Masayuki Abe, Kristiyan Haralambiev, Miyako Ohkubo
A signature scheme is called structure-preserving if its verification keys, messages, and signatures are group elements and the verification predicate is a conjunction of pairing product equations. We answer the open problem of constructing a constant-size structure-preserving signature scheme. The security is proven in the standard model based on a novel non-interactive assumption that can be justified, and has an optimal bound, in the generic bilinear group model. We also present efficient structure-preserving signature schemes with advanced properties, including signing an unbounded number of group elements, allowing simulation in the common reference string model, signing messages from mixed groups in the asymmetric bilinear group setting, and strong unforgeability. Among many applications, we show two examples: an adaptively secure round-optimal blind signature scheme and a group signature scheme with efficient concurrent join. As a by-product, several homomorphic trapdoor commitment schemes and one-time signature schemes are presented as well. In combination with the Groth-Sahai non-interactive proof system, these schemes yield efficient instantiations of modular constructions of cryptographic protocols.
Last updated:  2010-03-10
On the claimed privacy of EC-RAC III
Junfeng Fan, Jens Hermans, Frederik Vercauteren
In this paper we show how to break the most recent version of EC-RAC with respect to privacy. We show that both the ID-Transfer and ID&PWD-Transfer schemes from EC-RAC do not provide the claimed privacy levels by using a man-in-the-middle attack. The existence of these attacks voids the presented privacy proofs for EC-RAC.
Last updated:  2010-03-11
Multi-property-preserving Domain Extension Using Polynomial-based Modes of Operation
Jooyoung Lee, John Steinberger
In this paper, we propose a new double-piped mode of operation for multi-property-preserving domain extension of MACs~(message authentication codes), PRFs~(pseudorandom functions) and PROs~(pseudorandom oracles). Our mode of operation performs twice as fast as the original double-piped mode of operation of Lucks while providing comparable security. Our construction, which uses a class of polynomial-based compression functions proposed by Stam, makes a single call to a $3n$-bit to $n$-bit primitive at each iteration and uses a finalization function $f_2$ at the last iteration, producing an $n$-bit hash function $H[f_1,f_2]$ satisfying the following properties. (1) $H[f_1,f_2]$ is unforgeable up to $O(2^n/n)$ query complexity as long as $f_1$ and $f_2$ are unforgeable. (2) $H[f_1,f_2]$ is pseudorandom up to $O(2^n/n)$ query complexity as long as $f_1$ is unforgeable and $f_2$ is pseudorandom. (3) $H[f_1,f_2]$ is indifferentiable from a random oracle up to $O(2^{2n/3})$ query complexity as long as $f_1$ and $f_2$ are public random functions. To our knowledge, our result constitutes the first time $O(2^n/n)$ unforgeability has been achieved using only an unforgeable primitive of $n$-bit output length. (Yasuda showed unforgeability of $O(2^{5n/6})$ for Lucks' construction assuming an unforgeable primitive, but the analysis is sub-optimal; in this paper, we show how Yasuda's bound can be improved to $O(2^n)$.) In related work, we strengthen Stam's collision resistance analysis of polynomial-based compression functions (showing that unforgeability of the primitive suffices) and discuss how to implement our mode by replacing $f_1$ with a $2n$-bit key blockcipher in Davies-Meyer mode or by replacing $f_1$ with the cascade of two $2n$-bit to $n$-bit compression functions.
Last updated:  2013-03-21
Low Voltage Fault Attacks to AES and RSA on General Purpose Processors
Alessandro Barenghi, Guido Bertoni, Luca Breveglieri, Mauro Pellicioli, Gerardo Pelosi
Fault injection attacks have proven in recent times a powerful tool to exploit implementation weaknesses of robust cryptographic algorithms. A number of different techniques aimed at disturbing the computation of a cryptographic primitive have been devised, and have been successfully employed to leak secret information by inferring it from the erroneous results. In particular, many of these techniques involve directly tampering with the computing device to alter the content of the embedded memory, e.g. through irradiating it with laser beams. In this contribution we present a low-cost, non-invasive and effective technique to inject faults in an ARM9 general purpose CPU by lowering its supply voltage. This is the first result available in the fault attacks literature to attack a software implementation of a cryptosystem running on a full-fledged CPU with a complete operating system. The platform under consideration (an ARM9 CPU running a full Linux 2.6 kernel) is widely used in mobile computing devices such as smartphones, gaming platforms and network appliances. We fully characterise the fault model and the errors induced in the computation, in terms of both their frequency and the corruption patterns on the computed results. First, we validate the effectiveness of the proposed fault model for mounting practical attacks on implementations of the RSA and AES cryptosystems, using techniques known in the open literature. We then devise two new attack techniques, one for each cryptosystem. The attack on AES is able to retrieve all the round keys regardless of both their derivation strategy and the number of rounds. For RSA encryption we devise a known-ciphertext attack: the plaintext is retrieved knowing the result of a correct and a faulty encryption of the same plaintext, and assuming the fault corrupts the public key exponent.
Through experimental validation, we show that we can break any AES with roughly 4 kb of ciphertext, RSA encryption with 3 to 5 faults and RSA signature with 1 to 2 faults.
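One classical way a correct/faulty ciphertext pair under two public exponents can leak the plaintext is a Bezout (common-modulus style) combination when the two exponents are coprime. The sketch below illustrates the setting only; it is an assumption-laden toy, not necessarily the authors' exact technique, and the tiny modulus is for demonstration.

```python
def recover_plaintext(n, e, c, e_faulty, c_faulty):
    """Given c = m^e mod n and c_faulty = m^e_faulty mod n with
    gcd(e, e_faulty) = 1, recover m via a*e + b*e_faulty = 1."""
    def ext_gcd(a, b):
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, a, b = ext_gcd(e, e_faulty)
    assert g == 1, "attack needs coprime exponents"

    def power(c, k):  # handle negative exponents via modular inverse
        return pow(c, k, n) if k >= 0 else pow(pow(c, -1, n), -k, n)

    # m^(a*e) * m^(b*e_faulty) = m^(a*e + b*e_faulty) = m^1 = m
    return (power(c, a) * power(c_faulty, b)) % n

# Toy demo: tiny RSA modulus, hypothetical faulty exponent
n, m = 3233, 42                # n = 61 * 53, illustrative only
e, e_faulty = 17, 5            # e_faulty models a corrupted exponent
m_rec = recover_plaintext(n, e, pow(m, e, n), e_faulty, pow(m, e_faulty, n))
assert m_rec == m
```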
Last updated:  2010-03-08
Relation for Algebraic Attack on E0 combiner
N. Rajesh Pillai, S. S. Bedi, Sanjay Kumar, Roopika Chaudhary
The low degree relation for algebraic attacks on E0 combiner given in \cite{DBLP:conf/crypto/ArmknechtK03} had an error. The correct version of low degree relation for the E0 combiner for use in algebraic attack is given.
Last updated:  2011-03-01
Update-Optimal Authenticated Structures Based on Lattices
Charalampos Papamanthou, Roberto Tamassia
We study the problem of authenticating a \emph{dynamic table} with $n$ entries in the authenticated data structures model, which is related to memory checking. We present the first dynamic authenticated table that is \emph{update-optimal}, using a \emph{lattice-based} construction. In particular, the update time is $O(1)$, improving in this way the ``a priori'' $O(\log n)$ update bounds for previous constructions, such as the Merkle tree. Moreover, the space used by our data structure is $O(n)$ and logarithmic bounds hold for the other complexity measures, such as \emph{proof size}. To achieve this result, we exploit the \emph{linearity} of lattice-based hash functions and show how the security of lattice-based digests can be guaranteed under updates. This is the first construction achieving constant update bounds without causing other time complexities to increase beyond logarithmic. All previous solutions enjoying constant update complexity have $\Omega(n^\epsilon)$ proof or query bounds. As an application of our lattice-based authenticated table, we provide the first construction of an authenticated Bloom filter, an update-intensive data structure that falls into our model.
Last updated:  2011-02-21
CCA-Secure Cryptosystem from Lattice
Chen Huiyan
We propose a simple construction of a CCA-secure public-key encryption scheme based on lattices in the standard model. Our construction uses the lattice-based cryptosystem mR05 of [21], the multi-bit version of the single-bit cryptosystem R05 [20], as a building block, and makes use of its indistinguishable pseudohomomorphism property, which is known to be achievable without random oracles and which is the crux of our construction of a public-key encryption scheme that is CCA-secure in the standard model. This makes our construction approach quite different from existing ones. So far as we know, our construction is the first CCA-secure cryptosystem which is directly constructed from lattices and whose security is directly based on a standard lattice problem which is hard in the worst case for quantum algorithms.
Last updated:  2010-03-07
On the Security of an Efficient Mobile Authentication Scheme for Wireless Networks
Jian-zhu Lu, Jipeng Zhou
Tang and Wu proposed an efficient mobile authentication scheme for wireless networks, and claimed that the scheme can effectively defend against all known attacks on mobile networks, including the denial-of-service attack. This article shows an existential replication attack on the scheme; as a result, an attacker can obtain the communication key between a mobile user and the accessed VLR. We then show how to improve the scheme's security.
Last updated:  2010-03-06
Cryptographic Aspects of Real Hyperelliptic Curves
M. J. Jacobson Jr., R. Scheidler, A. Stein
In this paper, we give an overview of cryptographic applications using real hyperelliptic curves. We review previously proposed cryptographic protocols, and discuss the infrastructure of a real hyperelliptic curve, the mathematical structure underlying all these protocols. We then describe recent improvements to infrastructure arithmetic, including explicit formulas for divisor arithmetic in genus 2; and advances in solving the infrastructure discrete logarithm problem, whose presumed intractability is the basis of security for the related cryptographic protocols.
Last updated:  2010-03-06
A Hardware Wrapper for the SHA-3 Hash Algorithms
Brian Baldwin, Andrew Byrne, Liang Lu, Mark Hamilton, Neil Hanley, Maire O'Neill, William P. Marnane
The second round of the NIST public competition is underway to find a new hash algorithm(s) for inclusion in the NIST Secure Hash Standard (SHA-3). Computational efficiency of the algorithms in hardware is to be addressed during the second round of the contest. For software implementations NIST specifies an application programming interface (API) along with a reference implementation for each of the designs, thereby enabling quick and easy comparison and testing on software platforms; however, no such specification was given for hardware analysis. In this paper we present a hardware wrapper interface which attempts to encompass all the competition entries (and indeed, hash algorithms in general) across any number of both FPGA and ASIC hardware platforms. This interface comprises communications and padding, and attempts to standardise the hashing algorithms to allow accurate and fair area, timing and power measurements between the different designs.
Last updated:  2010-04-08
Delaying Mismatched Field Multiplications in Pairing Computations
Craig Costello, Colin Boyd, Juan Manuel Gonzalez Nieto, Kenneth Koon-Ho Wong
Miller's algorithm for computing pairings involves performing multiplications between elements that belong to different finite fields. Namely, elements in the full extension field $\mathbb{F}_{p^k}$ are multiplied by elements contained in proper subfields $\mathbb{F}_{p^{k/d}}$, and by elements in the base field $\mathbb{F}_{p}$. We show that significant speedups in pairing computations can be achieved by delaying these ``mismatched'' multiplications for an optimal number of iterations. Importantly, we show that our technique can be easily integrated into traditional pairing algorithms; implementers can exploit the computational savings herein by applying only minor changes to existing pairing code.
Last updated:  2010-03-05
Security of Encryption Schemes in Weakened Random Oracle Models
Akinori Kawachi, Akira Numayama, Keisuke Tanaka, Keita Xagawa
Liskov proposed several weakened versions of the random oracle model, called {\em weakened random oracle models} (WROMs), to capture the vulnerability of ideal compression functions, which are expected to have the standard security of hash functions, i.e., the collision resistance, second-preimage resistance, and one-wayness properties. The WROMs offer additional oracles to break such properties of the random oracle. In this paper, we investigate whether public-key encryption schemes in the random oracle model essentially require the standard security of hash functions, using the WROMs. In particular, we deal with four WROMs associated with the standard security of hash functions: the standard, collision tractable, second-preimage tractable, and first-preimage tractable ones (ROM, CT-ROM, SPT-ROM, and FPT-ROM, respectively), introduced by Numayama et al. for digital signature schemes in the WROMs. We obtain the following results: (1) OAEP is secure in all four models. (2) The encryption schemes obtained by the Fujisaki-Okamoto conversion (FO) are secure in the SPT-ROM. However, some encryption schemes with FO are insecure in the FPT-ROM. (3) We consider two artificial variants wFO and dFO of FO to separate the WROMs in the context of encryption schemes. The encryption schemes with wFO (resp. dFO) are secure in the CT-ROM (resp. ROM). However, some encryption schemes obtained by wFO (resp. dFO) are insecure in the SPT-ROM (resp. CT-ROM). These results imply that standard encryption schemes such as OAEP and FO-based ones do not always require the standard security of hash functions. Moreover, in order to make our security proofs complete, we construct an efficient sampling algorithm for the binomial distribution with exponentially large parameters, which was left open in Numayama et al.'s paper.
Last updated:  2011-02-21
Lattice-Based Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack
Chen Huiyan, Li Zichen
We propose a simple and efficient construction of a CCA-secure public-key encryption scheme based on lattices. Our construction needs an encryption scheme, which we call ``matrix encryption'', as a building block, and requires the underlying matrix encryption scheme to satisfy only a relatively weak notion of security which is achievable without random oracles. With the pseudohomomorphism property of mR04 of [3], the multi-bit version of the single-bit cryptosystem R04 [1], we design a matrix encryption scheme which satisfies the above requirements; thus, our construction provides a new approach for constructing CCA-secure encryption schemes in the standard model. So far as we know, our construction is the first CCA-secure cryptosystem which is directly constructed from lattices and whose security is based on the unique shortest vector problem (uSVP). In addition, the method for designing the matrix encryption scheme from mR04 also adapts to mR05, mA05, and mADGGH of [3], the multi-bit versions of the single-bit cryptosystems R05 [2], A05 [5], and ADGGH [7], respectively, since they have the same pseudohomomorphism property as mR04. This makes our approach to constructing CCA-secure cryptosystems generic and universal.
Last updated:  2014-12-11
Universal One-Way Hash Functions and Average Case Complexity via Inaccessible Entropy
Iftach Haitner, Thomas Holenstein, Omer Reingold, Salil Vadhan, Hoeteck Wee
This paper revisits the construction of Universally One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of *inaccessible entropy* (Haitner et al. STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: Consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with x' \neq x such that F(x, i) = F(x', i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction simply amplifies and exploits this basic property. Together with other recent works, this shows that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy. In an additional result, we use the notion of inaccessible entropy to reprove the seminal result of Impagliazzo and Levin (FOCS 1989): a reduction from "uniform distribution" average-case complexity problems to ones with arbitrary (though polynomially samplable) distributions.
Last updated:  2010-03-09
How to Construct Space Efficient Revocable IBE from Non-monotonic ABE
Huang Lin, Zhenfu Cao, Muxin Zhou, Haojin Zhu
Since in practice there always exist users whose private keys are stolen or expired, it is important for an identity based encryption (IBE) system to provide a revocation mechanism. The currently most efficient revocable IBE system has private keys of size $\mathcal{O}(\log n)$ and update information of size $\mathcal{O}(r \log(\frac{n}{r}))$, where $r$ is the number of revoked users. We describe a new revocable IBE system where the private key contains only two group elements and the update information size is $\mathcal{O}(r)$. To the best of our knowledge, the proposed constructions are the most efficient revocable IBE constructions in terms of space cost. Moreover, this construction also provides a generic methodology to transform a non-monotonic attribute based encryption scheme into a revocable IBE scheme. This paper also demonstrates how the proposed method can be employed to obtain an efficient revocable hierarchical IBE scheme.
Last updated:  2010-03-05
Proposal of a Signature Scheme based on STS Trapdoor
Shigeo Tsujii, Masahito Gotaishi, Kohtaro Tadaki, Ryou Fujita
A new digital signature scheme based on the Stepwise Triangular Scheme (STS) is proposed. The proposed trapdoor resolves the vulnerability of STS and is secure against both Gröbner basis and rank attacks. In addition, as a basic trapdoor, it is more efficient than existing systems. With an efficient implementation, the public key of this Multivariate Public Key Cryptosystem (MPKC) signature scheme yields signatures that are, for example, less than 25% longer than the message.
Last updated:  2010-03-05
Cryptographic Agility and its Relation to Circular Encryption
Tolga Acar, Mira Belenkiy, Mihir Bellare, David Cash
We initiate a provable-security treatment of cryptographic \emph{agility}. A primitive (for example PRFs, authenticated encryption schemes or digital signatures) is agile when multiple, individually secure schemes can securely share the same key. We provide a surprising connection between two seemingly unrelated but challenging questions. The first, new to this paper, is whether wPRFs (weak-PRFs) are agile. The second, already posed several times in the literature, is whether every secure (IND-R) encryption scheme is secure when encrypting cycles. We resolve the second question in the negative and thereby the first as well. We go on to provide a comprehensive treatment of agility, with definitions for various different primitives. We explain the practical motivations for agility. We provide foundational results that show to what extent it is achievable and practical constructions to achieve it to the best extent possible. On the theoretical side our work uncovers new notions and relations and settles stated open questions, and on the practical side it serves to guide developers.
Last updated:  2010-03-05
Practical Improvements of Profiled Side-Channel Attacks on a Hardware Crypto-Accelerator
M. Abdelaziz Elaabid, Sylvain Guilley
This article investigates the relevance of the theoretical framework on profiled side-channel attacks presented by F.-X. Standaert et al. at Eurocrypt 2009. The analyses consist in a case-study based on side-channel measurements acquired experimentally from a hardwired cryptographic accelerator. Therefore, with respect to previous formal analyses carried out on software measurements or on simulated data, the investigations we describe are more complex, due to the underlying chip's architecture and to the large amount of algorithmic noise. In this difficult context, we show however that with an engineer's mindset, two techniques can greatly improve both the off-line profiling and the on-line attack. First, we explore the appropriateness of different choices for the sensitive variables. We show that a skilled attacker aware of the register transfers occurring during the cryptographic operations can select the most adequate distinguisher, thus increasing the success rate. Second, we introduce a method based on the thresholding of leakage data to accelerate the profiling or matching stages. Indeed, leveraging on an engineer's common sense, it is possible to visually foresee the shape of some eigenvectors, thereby anticipating their estimation towards their asymptotic value by authoritatively zeroing weak components containing mainly non-informational noise. This method empowers an attacker, in that it saves traces when converging towards correct values of the secret. Concretely, we demonstrate a 5-times speed-up in the on-line phase of the attack.
Last updated:  2010-03-04
A Security Evaluation of DNSSEC with NSEC3
Jason Bau, John C Mitchell
Domain Name System Security Extensions (DNSSEC) and Hashed Authenticated Denial of Existence (NSEC3) are slated for adoption by important parts of the DNS hierarchy, including the root zone, as a solution to vulnerabilities such as "cache-poisoning" attacks. We study the security goals and operation of DNSSEC/NSEC3 using Murphi, a finite-state enumeration tool, to analyze security properties that may be relevant to various deployment scenarios. Our systematic study reveals several subtleties and potential pitfalls that can be avoided by proper configuration choices, including resource records that may remain valid after the expiration of relevant signatures and potential insertion of forged names into a DNSSEC-enabled domain via the opt-out option. We demonstrate the exploitability of the DNSSEC opt-out option in an enterprise setting by constructing a browser cookie-stealing attack on a laboratory domain. Under recommended configuration settings, further Murphi model checking finds no vulnerabilities within our threat model, suggesting that DNSSEC with NSEC3 provides significant security benefits.
Last updated:  2010-04-27
The Discrete Logarithm Problem Modulo One: Cryptanalysing the Ariffin--Abu cryptosystem
Simon R. Blackburn
The paper provides a cryptanalysis of the $AA_\beta$-cryptosystem recently proposed by Ariffin and Abu. The scheme is in essence a key agreement scheme whose security is based on a discrete logarithm problem in the infinite (additive) group $\mathbb{R}/\mathbb{Z}$ (the reals modulo $1$). The paper breaks the $AA_\beta$-cryptosystem (in a passive adversary model) by showing that this discrete logarithm problem can be efficiently solved in practice.
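To see why a discrete logarithm in $\mathbb{R}/\mathbb{Z}$ offers little security, consider the idealized case where the base is a known rational $p/q$: the problem collapses to a modular inversion. The toy sketch below is an illustrative assumption (exact rationals standing in for the finite-precision reals the actual cryptanalysis handles), not the paper's attack.

```python
from fractions import Fraction

def dlog_mod_one(alpha: Fraction, beta: Fraction) -> int:
    """Given alpha = p/q in lowest terms and beta = (x * alpha) mod 1,
    recover x mod q: the congruence x * p ≡ beta * q (mod q) is solved
    with a single modular inverse."""
    p, q = alpha.numerator, alpha.denominator
    r = beta * q
    assert r.denominator == 1, "beta must be a multiple of 1/q"
    return (int(r) * pow(p, -1, q)) % q

alpha = Fraction(31, 257)           # "public" base, hypothetical values
beta = (123 * alpha) % 1            # public value for secret exponent 123
assert dlog_mod_one(alpha, beta) == 123
```

The same idea extends to finite-precision real bases by treating the known digits as a rational approximation, which is essentially why the passive-adversary break is efficient.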
Last updated:  2010-04-01
Cryptanalysis of Two Efficient HIBE Schemes in the Standard Model
Xu An Wang, Xiaoyuan Yang
In Informatica 32 (2008), Ren and Gu proposed an anonymous hierarchical identity based encryption scheme based on the q-ABDHE problem with full security in the standard model. Later, in Indocrypt'08, they proposed another hierarchical identity based encryption scheme, based on the q-TBDHE problem, with full security in the standard model. They claimed that their schemes have short parameters, high efficiency and tight reductions. However, in this paper we give attacks showing that their schemes are in fact insecure. Concretely, from any first-level private key, the adversary can easily derive a proper ``private key'' which can decrypt any ciphertext for the target identity. That is to say, a single key generation query on any first-level identity other than the target's first-level identity is enough to break their schemes.
Last updated:  2010-03-04
CCA-Secure PRE Scheme without Random Oracles
Jun Shao, Zhenfu Cao, Peng Liu
In a proxy re-encryption scheme, a semi-trusted proxy can transform a ciphertext under Alice's public key into another ciphertext that Bob can decrypt. However, the proxy cannot access the plaintext. Due to its transformation property, proxy re-encryption can be used in many applications, such as encrypted email forwarding. In this paper, by using the techniques of Canetti-Hohenberger and Kurosawa-Desmedt, we propose a new single-use unidirectional proxy re-encryption scheme. Our proposal is secure against chosen ciphertext attack (CCA) and collusion attack in the standard model.
Last updated:  2015-10-07
On zero practical significance of “Key recovery attack on full GOST block cipher with zero time and memory”
Vladimir Rudskoy
In this paper we show that the related-key boomerang attack by E. Fleischmann et al. from the paper mentioned in the title does not allow one to recover the master key of the GOST block cipher with complexity less than that of exhaustive search. Next we present modified attacks. Finally we argue that these attacks, and the related-key approach itself, have extremely limited practical applicability and do not represent a fundamental obstacle to practical usage of block ciphers such as GOST, AES and Kasumi.
Last updated:  2011-12-27
Fully Secure Functional Encryption: Attribute-Based Encryption and (Hierarchical) Inner Product Encryption
Allison Lewko, Tatsuaki Okamoto, Amit Sahai, Katsuyuki Takashima, Brent Waters
In this paper, we present two fully secure functional encryption schemes. Our first result is a fully secure attribute-based encryption (ABE) scheme. Previous constructions of ABE were only proven to be selectively secure. We achieve full security by adapting the dual system encryption methodology recently introduced by Waters and previously leveraged to obtain fully secure IBE and HIBE systems. The primary challenge in applying dual system encryption to ABE is the richer structure of keys and ciphertexts. In an IBE or HIBE system, keys and ciphertexts are both associated with the same type of simple object: identities. In an ABE system, keys and ciphertexts are associated with more complex objects: attributes and access formulas. We use a novel information-theoretic argument to adapt the dual system encryption methodology to the more complicated structure of ABE systems. We construct our system in composite order bilinear groups, where the order is a product of three primes. We prove the security of our system from three static assumptions. Our ABE scheme supports arbitrary monotone access formulas. Our second result is a fully secure (attribute-hiding) predicate encryption (PE) scheme for inner-product predicates. As for ABE, previous constructions of such schemes were only proven to be selectively secure. Security is proven under a non-interactive assumption whose size does not depend on the number of queries. The scheme is comparably efficient to existing selectively secure schemes. We also present a fully secure hierarchical PE scheme under the same assumption. The key technique used to obtain these results is an elaborate combination of the dual system encryption methodology (adapted to the structure of inner product PE systems) and a new approach on bilinear pairings using the notion of dual pairing vector spaces (DPVS) proposed by Okamoto and Takashima.
Last updated:  2011-01-14
Practical Adaptive Oblivious Transfer from Simple Assumptions
Matthew Green, Susan Hohenberger
In an adaptive oblivious transfer (OT) protocol, a sender commits to a database of messages and then repeatedly interacts with a receiver in such a way that the receiver obtains one message per interaction of his choice (and nothing more) while the sender learns nothing about any of the choices. Recently, there has been significant effort to design practical adaptive OT schemes and to use these protocols as a building block for larger database applications. To be well suited for these applications, the underlying OT protocol should: (1) support an efficient initialization phase where one commitment can support an arbitrary number of receivers who are guaranteed of having the same view of the database, (2) execute transfers in time independent of the size of the database, and (3) satisfy a strong notion of security under a simple assumption in the standard model. We present the first adaptive OT protocol simultaneously satisfying these requirements. The sole complexity assumption required is that given $(g,g^a,g^b,g^c,Q)$, where $g$ generates a bilinear group of prime order $p$ and $a,b,c$ are selected randomly from $\mathbb{Z}_p$, it is hard to decide if $Q = g^{abc}$. All prior protocols in the standard model either do not meet our efficiency requirements or require dynamic ``q-based'' assumptions. Our construction makes an important change to the established ``assisted decryption'' technique for designing adaptive OT. As in prior works, the sender commits to a database of $n$ messages by publishing an encryption of each message and a signature on each encryption. Then, each transfer phase can be executed in time independent of $n$ as the receiver blinds one of the encryptions and proves knowledge of the blinding factors and a signature on this encryption, after which the sender helps the receiver decrypt the chosen ciphertext.
One of the main obstacles to designing an adaptive OT scheme from a simple assumption is realizing a suitable signature for this purpose (i.e., enabling signatures on group elements in a manner that later allows for efficient proofs). We make the observation that a secure signature scheme is not necessary for this paradigm, provided that signatures can only be forged in certain ways. We then show how to efficiently integrate an insecure signature into a secure adaptive OT construction.
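The ``assisted decryption'' pattern above can be sketched with plain ElGamal over a toy group. The group, parameters, and variable names here are illustrative assumptions only; the actual protocol works in a bilinear group with signatures and zero-knowledge proofs, all omitted here.

```python
import random

# Sketch of blinded "assisted decryption" with ElGamal, under toy
# parameters. The real protocol adds signatures on the ciphertexts and
# proofs of knowledge; none of that is modeled here.
p = 2**127 - 1                        # a Mersenne prime, toy group modulus
g = 5

x = random.randrange(2, p - 1)        # sender's secret key
pk = pow(g, x, p)

def encrypt(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

# Sender commits to the database by publishing encryptions of each message.
db = [encrypt(m) for m in (11, 22, 33)]

# Receiver blinds the chosen ciphertext; the sender sees only the blinded value.
c1, c2 = db[1]                        # hidden choice: message 22
b = random.randrange(2, p - 1)
c1_blind = (c1 * pow(g, b, p)) % p

# Sender assists: exponentiates the blinded value with its secret key.
d = pow(c1_blind, x, p)               # = g^{x r} * g^{x b}

# Receiver unblinds and decrypts.
shared = (d * pow(pow(pk, b, p), -1, p)) % p   # = g^{x r}
recovered = (c2 * pow(shared, -1, p)) % p      # = the chosen message
```

Note that the sender's reply depends only on the blinded value, which is uniformly distributed and so hides the receiver's choice.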
Last updated:  2010-03-02
Perfectly Secure Oblivious RAM Without Random Oracles
Uncategorized
Ivan Damgård, Sigurd Meldgaard, Jesper Buus Nielsen
We present an algorithm for implementing a secure oblivious RAM where the access pattern is perfectly hidden in the information theoretic sense, without assuming that the CPU has access to a random oracle. In addition we prove a lower bound on the amount of randomness needed for information theoretically secure oblivious RAM.
Last updated:  2011-02-18
Adaptive Concurrent Non-Malleability with Bare Public-Keys
Andrew C. Yao, Moti Yung, Yunlei Zhao
Coin-tossing (CT) is one of the earliest and most fundamental protocol problems in the literature. In this work, we formalize and construct (constant-round) concurrent non-malleable coin-tossing (CNMCT) in the bare public-key (BPK) model. The CNMCT protocol can, in particular, be used to transform CNM zero-knowledge (CNMZK) in the common random string (CRS) model into the BPK model with full adaptive input (statements and language) selection. Here, full adaptive input selection in the public-key model means that the concurrent man-in-the-middle (CMIM) adversary can adaptively set statements to all sessions at any point of the concurrent execution evolution (not necessarily at the beginning of each session), and can set the underlying language based upon honest players’ public-keys.
Last updated:  2010-03-01
Perfectly Secure Multiparty Computation and the Computational Overhead of Cryptography
Ivan Damgård, Yuval Ishai, Mikkel Krøigaard
We study the following two related questions: - What are the minimal computational resources required for general secure multiparty computation in the presence of an honest majority? - What are the minimal resources required for two-party primitives such as zero-knowledge proofs and general secure two-party computation? We obtain a nearly tight answer to the first question by presenting a perfectly secure protocol which allows $n$ players to evaluate an arithmetic circuit of size $s$ by performing a total of $\O(s\log s\log^2 n)$ arithmetic operations, plus an additive term which depends (polynomially) on $n$ and the circuit depth, but only logarithmically on $s$. Thus, for typical large-scale computations whose circuit width is much bigger than their depth and the number of players, the amortized overhead is just polylogarithmic in $n$ and $s$. The protocol provides perfect security with guaranteed output delivery in the presence of an active, adaptive adversary corrupting a $(1/3-\epsilon)$ fraction of the players, for an arbitrary constant $\epsilon>0$ and sufficiently large $n$. The best previous protocols in this setting could only offer computational security with a computational overhead of $\poly(k,\log n,\log s)$, where $k$ is a computational security parameter, or perfect security with a computational overhead of $\O(n\log n)$. We then apply the above result towards making progress on the second question. Concretely, under standard cryptographic assumptions, we obtain zero-knowledge proofs for circuit satisfiability with $2^{-k}$ soundness error in which the amortized computational overhead per gate is only {\em polylogarithmic} in $k$, improving over the $\omega(k)$ overhead of the best previous protocols. Under stronger cryptographic assumptions, we obtain similar results for general secure two-party computation.
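The honest-majority protocols discussed above build on secret sharing of field elements. A minimal Shamir-sharing sketch (toy field size, no packed sharing, none of the paper's machinery) illustrates the local addition property such protocols exploit:

```python
import random

# Shamir secret sharing over a prime field: the basic primitive underlying
# honest-majority MPC. Any t+1 shares reconstruct; t shares reveal nothing.
P = 2**61 - 1   # field modulus (a Mersenne prime, toy choice)

def share(secret, t, n):
    """Split `secret` into n points on a random degree-t polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=2, n=7)
shares2 = share(11111, t=2, n=7)
# Addition of shared values is local: each player adds its shares pointwise.
sum_shares = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(shares, shares2)]
```

Multiplication of shared values is what requires interaction, and it is the cost of that step which the paper's protocol drives down.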
Last updated:  2010-07-05
Bias in the nonlinear filter generator output sequence
Uncategorized
Sui-Guan Teo, Leonie Simpson, Ed Dawson
Nonlinear filter generators are common components used in the keystream generators for stream ciphers and more recently for authentication mechanisms. They consist of a Linear Feedback Shift Register (LFSR) and a nonlinear Boolean function to mask the linearity of the LFSR output. Properties of the output of a nonlinear filter are not well studied. Anderson noted that the $m$-tuple output of a nonlinear filter with consecutive taps to the filter function is unevenly distributed. Current designs use taps which are not consecutive. We examine $m$-tuple outputs from nonlinear filter generators constructed using various LFSRs and Boolean functions for both consecutive and uneven (full positive difference sets where possible) tap positions. The investigation reveals that in both cases, the $m$-tuple output is not uniform. However, consecutive tap positions result in a more biased distribution than uneven tap positions, with some $m$-tuples not occurring at all. These biased distributions indicate a potential flaw that could be exploited for cryptanalysis.
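The kind of bias measured in the paper can be reproduced in miniature. The feedback taps, filter function, and tap positions below are toy choices of ours, not the ones studied by the authors; the point is only that $m$-tuple counts over one LFSR period cannot be uniform.

```python
from collections import Counter

# A nonlinear filter generator in miniature: a Fibonacci LFSR whose
# output is masked by a nonlinear Boolean filter on consecutive taps.
def filtered_keystream(state, fb_taps, filt_taps, nbits):
    state = list(state)
    out = []
    for _ in range(nbits):
        x = [state[t] for t in filt_taps]
        out.append((x[0] & x[1]) ^ x[2])   # toy filter: x0*x1 + x2 over GF(2)
        fb = 0
        for t in fb_taps:                  # feedback from a primitive trinomial
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# Degree-7 LFSR: period 2^7 - 1 = 127. Generate one period plus two bits
# so that all 127 cyclic 3-tuple windows of the output appear once.
stream = filtered_keystream([1, 0, 0, 1, 0, 1, 1],
                            fb_taps=[0, 6], filt_taps=[0, 1, 2], nbits=129)
counts = Counter(tuple(stream[i:i + 3]) for i in range(127))
```

Since 127 windows cannot be split evenly among the 8 possible 3-tuples, the distribution is necessarily non-uniform; the paper's experiments quantify how much worse consecutive taps make this.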
Last updated:  2010-04-08
Avoiding Full Extension Field Arithmetic in Pairing Computations
Uncategorized
Craig Costello, Colin Boyd, Juan Manuel Gonzalez Nieto, Kenneth Koon-Ho Wong
The most costly operations encountered in pairing computations are those that take place in the full extension field $\mathbb{F}_{p^k}$. At high levels of security, the complexity of operations in $\mathbb{F}_{p^k}$ dominates the complexity of the operations that occur in the lower degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller's algorithm. Many recent optimizations in the literature have focused on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller's algorithm. Unfortunately, the costly full extension field operations far outweigh the operations in the smaller subfields, limiting the effect of such improvements. In this paper, we propose a new way of carrying out Miller's algorithm that involves new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speed-ups in most practical situations of between 5 and 30 percent.
Last updated:  2010-03-01
The Extended Access Control for Machine Readable Travel Documents
Rafik Chaabouni, Serge Vaudenay
Machine readable travel documents have been rapidly put in place since 2004. The initial standard was made by the ICAO and it was quickly followed by the Extended Access Control (EAC). In this paper we discuss the evolution of these standards and more precisely the evolution of EAC. We intend to give a realistic survey of these standards. We discuss their problems, such as the absence of a clock in the biometric passports and the absence of a switch preventing a closed passport from being read. We also look at the backward compatibility issue, which could easily be solved, and the terminal revocation issue, which is harder.
Last updated:  2010-05-24
Constructing Verifiable Random Functions with Large Input Spaces
Susan Hohenberger, Brent Waters
We present a family of verifiable random functions which are provably secure for exponentially-large input spaces under a non-interactive complexity assumption. Prior constructions required either an interactive complexity assumption or one that could tolerate a factor 2^n security loss for n-bit inputs. Our construction is practical and inspired by the pseudorandom functions of Naor and Reingold and the verifiable random functions of Lysyanskaya. Our construction is set in a bilinear group, where the Decisional Diffie-Hellman problem is easy to solve; we require the Decisional Diffie-Hellman Exponent assumption in the standard model, without a common reference string. Our core idea is to apply a simulation technique where the large space of VRF inputs is collapsed into a small (polynomial-size) input in the view of the reduction algorithm. This view, however, is information-theoretically hidden from the attacker. Since the input space is exponentially large, we can first apply a collision-resistant hash function to handle arbitrarily-large inputs.
Last updated:  2010-03-01
Fair Blind Signatures without Random Oracles
Georg Fuchsbauer, Damien Vergnaud
A fair blind signature is a blind signature with revocable anonymity and unlinkability, i.e., an authority can link an issuing session to the resulting signature and trace a signature to the user who requested it. In this paper we first revisit the security model for fair blind signatures given by Hufschmitt and Traoré in 2007. We then give the first practical fair blind signature scheme with a security proof in the standard model. Our scheme satisfies a stronger variant of the Hufschmitt-Traoré model.
Last updated:  2012-03-22
Correlated Product Security From Any One-Way Function and the New Notion of Decisional Correlated Product Security
Brett Hemenway, Steve Lu, Rafail Ostrovsky
It is well-known that the k-wise product of one-way functions remains one-way, but may no longer be when the k inputs are correlated. At TCC 2009, Rosen and Segev introduced a new notion known as Correlated Product secure functions. These functions have the property that a k-wise product of them remains one-way even under correlated inputs. Rosen and Segev gave a construction of injective trapdoor functions which were correlated product secure from the existence of Lossy Trapdoor Functions (introduced by Peikert and Waters in STOC 2008). The first main result of this work shows the surprising fact that a family of correlated product secure functions can be constructed from any one-way function. Because correlated product secure functions are trivially one-way, this shows an equivalence between the existence of these two cryptographic primitives. In the second main result of this work, we consider a natural decisional variant of correlated product security. Roughly, a family of functions is Decisional Correlated Product (DCP) secure if $f_1(x_1),\ldots,f_k(x_1)$ is indistinguishable from $f_1(x_1),\ldots,f_k(x_k)$ when $x_1,\ldots,x_k$ are chosen uniformly at random. We argue that the notion of Decisional Correlated Product security is a very natural one. To this end, we show a parallel from the Discrete Log Problem and Decision Diffie-Hellman Problem to Correlated Product security and its decisional variant. This intuition gives very simple constructions of PRGs and IND-CPA encryption from DCP secure functions. Furthermore, we strengthen our first result by showing that the existence of DCP secure one-way functions is also equivalent to the existence of any one-way function. When considering DCP secure functions with trapdoors, we give a construction based on Lossy Trapdoor Functions, and show that any DCP secure function family with trapdoors satisfies the security requirements for Deterministic Encryption as defined by Bellare, Boldyreva and O'Neill in CRYPTO 2007.
In fact, we also show that definitionally, DCP secure functions with trapdoors are a strict subset of Deterministic Encryption functions by showing an example of a Deterministic Encryption function which according to the definition is not a DCP secure function.
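The DDH parallel drawn in the abstract can be made concrete: with $f_i(x) = g_i^x$, a correlated-product tuple under a single input is exactly the kind of tuple a DDH-style distinguisher must handle, and the PRG construction simply stretches one input across the family. All parameters below are toy assumptions, not a secure instantiation.

```python
import random

# f_i(x) = g_i^x mod p: under DDH-style assumptions, (f_1(x), f_2(x)) with
# a single x is hard to tell apart from (f_1(x1), f_2(x2)) with independent
# inputs -- the decisional correlated product (DCP) experiment in miniature.
p = 2**127 - 1            # toy prime modulus (not a secure choice)
g1, g2 = 3, 5

def f(i, x):
    return pow((g1, g2)[i], x, p)

def correlated_tuple():
    x = random.randrange(2, p - 1)
    return f(0, x), f(1, x)           # one input, whole family

def uncorrelated_tuple():
    return f(0, random.randrange(2, p - 1)), f(1, random.randrange(2, p - 1))

def prg(x):
    # The simple PRG suggested by DCP security: one input, longer output.
    return f(0, x), f(1, x)
```

If the two tuple distributions are indistinguishable, `prg` is a pseudorandom generator, which is the intuition the paper formalizes.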
Last updated:  2012-03-22
On Homomorphic Encryption and Chosen-Ciphertext Security
Uncategorized
Brett Hemenway, Rafail Ostrovsky
Chosen-Ciphertext (IND-CCA) security is generally considered the right notion of security for a cryptosystem. Because of its central importance much effort has been devoted to constructing IND-CCA secure cryptosystems. In this work, we consider constructing IND-CCA secure cryptosystems from (group) homomorphic encryption. Our main results give natural and efficient constructions of IND-CCA secure cryptosystems from any homomorphic encryption scheme that satisfies weak cyclic properties, either in the plaintext, ciphertext or randomness space. Our results have the added benefit of being simple to describe and analyze.
Last updated:  2010-03-01
A Zero-One Law for Deterministic 2-Party Secure Computation
Hemanta K. Maji, Manoj Prabhakaran, Mike Rosulek
We use security in the Universal Composition framework as a means to study the ``cryptographic complexity'' of 2-party secure computation tasks (functionalities). We say that a functionality $F$ {\em reduces to} another functionality $G$ if there is a UC-secure protocol for $F$ using ideal access to $G$. This reduction is a natural and fine-grained way to compare the relative complexities of cryptographic tasks. There are two natural ``extremes'' of complexity under the reduction: the {\em trivial} functionalities, which can be reduced to any other functionality; and the {\em complete} functionalities, to which any other functionality can be reduced. In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a {\bf zero-one law} for the cryptographic complexity of 2-party deterministic functionalities. Namely, {\em every such functionality is either trivial or complete.} No other qualitative distinctions exist among functionalities, under this computational assumption. While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary {\em reactive} functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. 
Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al.\ (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damgård et al.\ (TCC 2010)).
Last updated:  2010-08-23
Parallel Enumeration of Shortest Lattice Vectors
Özgür Dagdelen, Michael Schneider
Lattice basis reduction is the problem of finding short vectors in lattices. The security of lattice based cryptosystems is based on the hardness of lattice reduction. Furthermore, lattice reduction is used to attack well-known cryptosystems like RSA. One of the algorithms used in lattice reduction is the enumeration algorithm (ENUM), which provably finds a shortest vector of a lattice. We present a parallel version of the lattice enumeration algorithm. Using multi-core CPU systems with up to 16 cores, our implementation achieves a speed-up of up to a factor of 14. Compared to the currently best public implementation, our parallel algorithm saves more than 90% of the runtime.
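For intuition about the search space ENUM explores, here is a brute-force miniature in dimension two. The basis and bound are toy values of ours; real enumeration prunes this search using Gram-Schmidt data, and distributing that pruned tree is what the paper parallelizes.

```python
from itertools import product

# Brute-force search for a shortest nonzero vector in a toy 2-dimensional
# lattice: try all small integer coefficient combinations of the basis.
basis = [(201, 37), (1648, 297)]
bound = 50                            # coefficient search radius (toy)

best, best_norm2 = None, None
for c1, c2 in product(range(-bound, bound + 1), repeat=2):
    if c1 == 0 and c2 == 0:
        continue                      # skip the zero vector
    v = (c1 * basis[0][0] + c2 * basis[1][0],
         c1 * basis[0][1] + c2 * basis[1][1])
    n2 = v[0] ** 2 + v[1] ** 2
    if best_norm2 is None or n2 < best_norm2:
        best, best_norm2 = v, n2
```

For this basis the shortest nonzero vectors are $\pm(1, 32)$, reachable only with coefficients as large as $(41, -5)$, which hints at why naive search explodes in higher dimensions.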
Last updated:  2010-03-01
Secret Sharing Extensions based on the Chinese Remainder Theorem
Kamer Kaya, Ali Aydın Selçuk
In this paper, we investigate how to achieve verifiable secret sharing (VSS) schemes by using the Chinese Remainder Theorem (CRT). We first show that two schemes proposed earlier are not secure from an attack where the dealer is able to distribute inconsistent shares to the users. Then we propose a new VSS scheme based on the CRT and prove its security. Using the proposed VSS scheme, we develop joint random secret sharing (JRSS) and proactive SSS protocols, which, to the best of our knowledge, are the first secure protocols of their kind based on the CRT.
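For context, the CRT-based threshold sharing that such schemes build on can be sketched in a few lines of Mignotte-style code. The moduli and secret below are toy values, and the paper's actual contribution, verifiability, is not modeled.

```python
from math import prod

# Mignotte-style CRT secret sharing: shares are residues of the secret
# modulo pairwise coprime m_i; any t of them reconstruct via the CRT.
moduli = [103, 107, 109, 113, 127]   # pairwise coprime (toy sizes)
t = 3

def crt(residues, mods):
    """Chinese Remainder Theorem reconstruction modulo prod(mods)."""
    M = prod(mods)
    total = 0
    for r, m in zip(residues, mods):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)
    return total % M

# Mignotte condition: 113*127 = 14351 < secret < 103*107*109 = 1201289,
# so any t shares determine the secret while t-1 shares leave it ambiguous.
secret = 1000000
shares = [secret % m for m in moduli]

recovered = crt(shares[:t], moduli[:t])
```

A VSS scheme must additionally let share-holders check that a (possibly malicious) dealer handed out consistent residues, which is exactly the gap the paper addresses.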
Last updated:  2010-02-26
Plaintext-Dependent Decryption: A Formal Security Treatment of SSH-CTR
Kenneth G. Paterson, Gaven J. Watson
This paper presents a formal security analysis of SSH in counter mode in a security model that accurately captures the capabilities of real-world attackers, as well as security-relevant features of the SSH specifications and the OpenSSH implementation of SSH. Under reasonable assumptions on the block cipher and MAC algorithms used to construct the SSH Binary Packet Protocol (BPP), we are able to show that the SSH BPP meets a strong and appropriate notion of security: indistinguishability under buffered, stateful chosen-ciphertext attacks. This result helps to bridge the gap between the existing security analysis of the SSH BPP by Bellare et al. and the recently discovered attacks against the SSH BPP by Albrecht et al. which partially invalidate that analysis.
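As a reminder of the mechanics being modeled, counter mode can be sketched as follows. SHA-256 stands in for the block cipher here, and the MAC placement is a simplification of ours; the SSH BPP actually MACs the plaintext together with a sequence number, which is part of what the paper's model captures.

```python
import hashlib, hmac, os

# Counter mode over a hash-based block function (a stand-in for the block
# cipher; this is an illustration, not the SSH BPP construction).
def block(key, counter):
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()

def ctr_crypt(key, counter0, data):
    """CTR mode is its own inverse: the same call encrypts and decrypts."""
    out = bytearray()
    for i in range(0, len(data), 32):
        keystream = block(key, counter0 + i // 32)
        out.extend(b ^ k for b, k in zip(data[i:i + 32], keystream))
    return bytes(out)

key = os.urandom(32)
pt = b"SSH_MSG_CHANNEL_DATA payload bytes"
ct = ctr_crypt(key, 0, pt)
tag = hmac.new(key, ct, hashlib.sha256).digest()  # simplified MAC placement
```

The attacks of Albrecht et al. exploit how the decryptor buffers and parses length fields before the MAC check, which is why the paper's "buffered, stateful" attack model matters.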
Last updated:  2010-02-22
A Random Number Generator Based on Isogenies Operations
He Debiao, Chen Jianhua, Hu Jin
A random number generator based on the operation of isogenies between elliptic curves over finite fields Fp is proposed. By using the proposed generator together with an isogeny-based cryptographic algorithm, which resists attacks by quantum computers, we can save hardware and software components. Theoretical analyses show that the periods of the proposed random number generator are sufficiently long. Moreover, the generated sequences have passed the U.S. NIST statistical tests.
Last updated:  2010-02-22
New Impossible Differential Attacks on AES
Zheng Yuan
Some new near-$5$-round impossible differential properties of AES are first presented in this paper, in which the active bytes of the $1^{st}$ or $5^{th}$ round lie in different columns, a structure that favors extension. Additionally, we are the first to propose complexity expressions for a universal impossible differential attack, which can help us rapidly search for appropriate impossible differential paths. More importantly, our near-$5$-round impossible differential properties and complexity expressions lead to a series of new impossible differential attacks on 7-round AES-128, 7-9 rounds of AES-192, and 8-12 rounds of AES-256.
Last updated:  2010-02-22
Security Weaknesses in Two Certificateless Signcryption Schemes
S. Sharmila Deva Selvi, S. Sree Vivek, C. Pandu Rangan
Recently, a certificateless signcryption scheme in the standard model was proposed by Liu et al. in \cite{LiuHZM10}. Another certificateless signcryption scheme in the standard model was proposed by Xie et al. in \cite{WZ09}. Here, we show that the schemes in \cite{LiuHZM10} and \cite{WZ09} are not secure against a Type-I adversary.
Last updated:  2010-04-24
Distinguishers for the Compression Function and Output Transformation of Hamsi-256
Uncategorized
Jean-Philippe Aumasson, Emilia Käsper, Lars Ramkilde Knudsen, Krystian Matusiewicz, Rune Odegaard, Thomas Peyrin, Martin Schläffer
Hamsi is one of 14 remaining candidates in NIST's Hash Competition for the future hash standard SHA-3. Until now, little analysis has been published on its resistance to differential cryptanalysis, the main technique used to attack hash functions. We present a study of Hamsi's resistance to differential and higher-order differential cryptanalysis, with focus on the 256-bit version of Hamsi. Our main results are efficient distinguishers and near-collisions for its full (3-round) compression function, and distinguishers for its full (6-round) finalization function, indicating that Hamsi's building blocks do not behave ideally.
Last updated:  2010-02-22
Solving a 676-bit Discrete Logarithm Problem in GF(3^{6n})
Takuya Hayashi, Naoyuki Shinohara, Lihua Wang, Shin'ichiro Matsuo, Masaaki Shirase, Tsuyoshi Takagi
Pairings on elliptic curves over finite fields are crucial for constructing various cryptographic schemes. The \eta_T pairing on supersingular curves over GF(3^n) is particularly popular since it is efficiently implementable. Taking into account the Menezes-Okamoto-Vanstone (MOV) attack, the discrete logarithm problem (DLP) in GF(3^{6n}) becomes a concern for the security of cryptosystems using \eta_T pairings in this case. In 2006, Joux and Lercier proposed a new variant of the function field sieve in the medium prime case, named JL06-FFS. However, we are not aware of any practical implementation of JL06-FFS over GF(3^{6n}). Therefore, we produced such an implementation and successfully set a new record for solving the DLP in GF(3^{6n}): the DLP in GF(3^{6 \cdot 71}) of 676-bit size. In addition, we also compared JL06-FFS and an earlier version, named JL02-FFS, with practical experiments. Our results confirm that the former is several times faster than the latter under certain conditions.
Last updated:  2010-02-22
Interactive Locking, Zero-Knowledge PCPs, and Unconditional Cryptography
Vipul Goyal, Yuval Ishai, Mohammad Mahmoody, Amit Sahai
Motivated by the question of basing cryptographic protocols on stateless tamper-proof hardware tokens, we revisit the question of unconditional two-prover zero-knowledge proofs for $NP$. We show that such protocols exist in the {\em interactive PCP} model of Kalai and Raz (ICALP '08), where one of the provers is replaced by a PCP oracle. This strengthens the feasibility result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC '88) which requires two stateful provers. In contrast to previous zero-knowledge PCPs of Kilian, Petrank, and Tardos (STOC '97), in our protocol both the prover and the PCP oracle are efficient given an $NP$ witness. Our main technical tool is a new primitive that we call {\em interactive locking}, an efficient realization of an unconditionally secure commitment scheme in the interactive PCP model. We implement interactive locking by adapting previous constructions of {\em interactive hashing} protocols to our setting, and also provide a direct construction which uses a minimal amount of interaction and improves over our interactive hashing based constructions. Finally, we apply the above results towards showing the feasibility of basing unconditional cryptography on {\em stateless} tamper-proof hardware tokens, and obtain the following results: *) We show that if tokens can be used to encapsulate other tokens, then there exist unconditional and statistically secure (in fact, UC secure) protocols for general secure computation. *) Even if token encapsulation is not possible, there are unconditional and statistically secure commitment protocols and zero-knowledge proofs for $NP$. *) Finally, if token encapsulation is not possible, then no protocol can realize statistically secure oblivious transfer.
Last updated:  2011-04-14
An Efficient and Parallel Gaussian Sampler for Lattices
Chris Peikert
At the heart of many recent lattice-based cryptographic schemes is a polynomial-time algorithm that, given a `high-quality' basis, generates a lattice point according to a Gaussian-like distribution. Unlike most other operations in lattice-based cryptography, however, the known algorithm for this task (due to Gentry, Peikert, and Vaikuntanathan; STOC 2008) is rather inefficient, and is inherently sequential. We present a new Gaussian sampling algorithm for lattices that is \emph{efficient} and \emph{highly parallelizable}. We also show that in most cryptographic applications, the algorithm's efficiency comes at almost no cost in asymptotic security. At a high level, our algorithm resembles the ``perturbation'' heuristic proposed as part of NTRUSign (Hoffstein et al., CT-RSA 2003), though the details are quite different. To our knowledge, this is the first algorithm and rigorous analysis demonstrating the security of a perturbation-like technique.
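For orientation, the elementary operation that lattice Gaussian samplers reduce to, sampling a one-dimensional discrete Gaussian over the integers, can be sketched by rejection sampling. This is a generic textbook method with parameters of our choosing, not the paper's algorithm.

```python
import math, random

# Rejection sampling of a one-dimensional discrete Gaussian over the
# integers -- the elementary operation that lattice Gaussian samplers
# reduce to. A generic textbook sampler, not the paper's algorithm.
def sample_z(center, s, tail=12):
    """Sample x proportional to exp(-pi * (x - center)^2 / s^2) over Z."""
    lo = math.floor(center - tail * s)
    hi = math.ceil(center + tail * s)
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-math.pi * (x - center) ** 2 / (s * s)):
            return x

samples = [sample_z(0.0, 4.0) for _ in range(2000)]
mean = sum(samples) / len(samples)
```

A lattice sampler must invoke something like this once per basis direction with shifting centers; the GPV algorithm does so strictly sequentially, which is the bottleneck the paper removes.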
Last updated:  2010-02-22
MQ^*-IP: An Identity-based Identification Scheme without Number-theoretic Assumptions
Christopher Wolf, Bart Preneel
In this article, we propose an identification scheme which is based on the two combinatorial problems Multivariate Quadratic equations (MQ) and Isomorphism of Polynomials (IP). We show that this scheme is statistical zero-knowledge. Using a trapdoor for the MQ-problem, it is possible to make it also identity-based, i.e., there is no need for distributing public keys or for certificates within this scheme. The size of the public keys and the communication complexity are within the range of other non-number-theoretic identification schemes. In contrast to MQ^*-IP, these schemes usually do not permit identity-based public keys.
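The MQ side of the scheme rests on the asymmetry that evaluating a random multivariate quadratic system over GF(2) is cheap while inverting it is hard. A toy evaluation sketch (sizes and representation are illustrative assumptions of ours):

```python
import random

# A random system of multivariate quadratic polynomials over GF(2):
# evaluating it at a point is easy; finding a preimage (the MQ problem)
# is hard in general. Toy sizes only.
n, m = 8, 8                           # variables, equations

def random_quadratic():
    quad = {(i, j): random.randrange(2) for i in range(n) for j in range(i, n)}
    lin = [random.randrange(2) for _ in range(n)]
    return quad, lin

def evaluate(poly, x):
    quad, lin = poly
    v = sum(c * x[i] * x[j] for (i, j), c in quad.items())
    v += sum(c * xi for c, xi in zip(lin, x))
    return v % 2

system = [random_quadratic() for _ in range(m)]
secret = [random.randrange(2) for _ in range(n)]
image = [evaluate(poly, secret) for poly in system]   # the easy direction
```

An identification scheme built on MQ lets a prover demonstrate knowledge of `secret` without revealing it; the paper's zero-knowledge protocol and IP component are not modeled here.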
Last updated:  2010-11-16
A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model
Zvika Brakerski, Yael Tauman Kalai
In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles). We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000) and the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present {\em simple} constructions based on the RSA assumption, the {\em short integer solution} (SIS) assumption, and the {\em computational Diffie-Hellman} (CDH) assumption over bilinear groups. Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the {\em short integer solution} (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call \emph{ring trapdoor functions}. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security. Finally, we show a connection between ring signature schemes and identity based encryption (IBE) schemes. 
Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: The first is based on the {\em learning with error} (LWE) assumption, and is similar to the recently introduced IBE scheme of Cash-Hofheinz-Kiltz-Peikert; The second is based on the $d$-linear assumption over bilinear groups.
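The chameleon hash ingredient mentioned above admits a compact discrete-log instantiation in the Krawczyk-Rabin style. Parameters below are toy values, and the exact scheme used inside the paper's transformation may differ.

```python
import random

# Discrete-log chameleon hash: CH(m, r) = g^m * h^r mod p. Anyone can
# hash; whoever knows the trapdoor x (with h = g^x) finds collisions
# freely, which is exactly what the "prefix method" simulation needs.
p, q = 2039, 1019           # p = 2q + 1, both prime (toy sizes)
g = 4                        # generator of the order-q subgroup
x = random.randrange(1, q)   # trapdoor
h = pow(g, x, p)

def ch(m, r):
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new):
    """With the trapdoor, pick r_new so that CH(m_new, r_new) = CH(m, r)."""
    # Solve m + x*r = m_new + x*r_new (mod q) for r_new.
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 123, random.randrange(q)
r_new = collide(m, r, 456)
```

Without the trapdoor, finding such a collision is as hard as computing the discrete log of h, which is what makes the hash binding for everyone else.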
Last updated:  2010-02-22
Pair-wise Cryptographic Models for Secure Data Exchange in P2P Database Management Systems
Sk. Md. Mizanur Rahman, Mehedi Masud, Carlisle Adams, Khalil El-Khatib, Hussein Mouftah, Eiji Okamoto
A peer-to-peer database management system (P2PDBMS) is a collection of autonomous data sources, called peers. In this system each peer augments a conventional database management system with an inter-operability layer (i.e. mappings/policies) for sharing data and services. Peers exchange data in a pair-wise fashion on-the-fly in response to a query without any centralized control. Generally, the communication link between two peers is insecure and peers create a temporary session while exchanging data. When peers exchange highly confidential data between them over an insecure communication network, such as the Internet, the data might be intercepted and disclosed by intruders. In a P2PDBMS there is no centralized control for data exchange, hence we cannot assume any central third party security infrastructure (e.g. PKI) to protect confidential data. To date, no security protocol exists for secure data exchange in P2PDBMSs. In this paper we propose three models for secure data exchange in P2PDBMSs and the corresponding security protocols. The proposed protocol allows the peers to compute their secret session keys dynamically during data exchange based on the policies between them. Our proposed protocol is robust against the man-in-the-middle attack, the masquerade attack, and the replay attack.
Last updated:  2010-04-24
Attribute-based Authenticated Key Exchange
Uncategorized
M. Choudary Gorantla, Colin Boyd, Juan Manuel González Nieto
We introduce the concept of attribute-based authenticated key exchange (AB-AKE) within the framework of ciphertext policy attribute-based systems. A notion of AKE-security for AB-AKE is presented based on the security models for group key exchange protocols and also taking into account the security requirements generally considered in the ciphertext policy attribute-based setting. We also extend the paradigm of hybrid encryption to the ciphertext policy attribute-based encryption schemes. A new primitive called encapsulation policy attribute-based key encapsulation mechanism (EP-AB-KEM) is introduced and a notion of chosen ciphertext security is defined for EP-AB-KEMs. We propose an EP-AB-KEM from an existing attribute-based encryption scheme and show that it achieves chosen ciphertext security in the generic group and random oracle models. We present a generic one-round AB-AKE protocol that satisfies our AKE-security notion. The protocol is generically constructed from any EP-AB-KEM that satisfies chosen ciphertext security. Instantiating the generic AB-AKE protocol with our EP-AB-KEM will result in a concrete one-round AB-AKE protocol also secure in the generic group and random oracle models.
Last updated:  2010-02-22
One Round Group Key Exchange with Forward Security in the Standard Model
M. Choudary Gorantla, Colin Boyd, Juan Manuel González Nieto
Constructing a one round group key exchange (GKE) protocol that provides forward secrecy is an open problem in the literature. In this paper, we investigate whether or not the security of one round GKE protocols can be enhanced with any form of forward secrecy without increasing the number of rounds. We apply the {\em key evolving} approach used for forward secure encryption/signature schemes and then model the notion of forward security for the first time for key exchange protocols. This notion is slightly weaker than forward secrecy, considered traditionally for key exchange protocols. We then revise an existing one round GKE protocol to propose a GKE protocol with forward security. In the security proof of the revised protocol we completely avoid reliance on the random oracle assumption that was needed for the proof of the base protocol. Our security proof can be directly applied to the base protocol, making it the most efficient one round GKE protocol secure in the standard model. Our one round GKE protocol is generically constructed from the primitive of forward secure encryption. We also propose a concrete forward secure encryption scheme with constant size ciphertext that can be used to efficiently instantiate our protocol.
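The key-evolving idea underlying the forward security notion can be sketched as a one-way key update chain. This is only the generic pattern; the paper instantiates the idea via forward secure encryption rather than a bare hash chain.

```python
import hashlib

# One-way key evolution: k_{i+1} = H("update" || k_i), with old keys
# erased after each update. Compromise at period i reveals nothing about
# the keys of earlier periods.
def evolve(key):
    return hashlib.sha256(b"update" + key).digest()

k0 = hashlib.sha256(b"initial group key material").digest()
keys = [k0]
for _ in range(4):
    keys.append(evolve(keys[-1]))     # period keys k0 .. k4

# An adversary holding the period-3 key can move forward...
recomputed = evolve(keys[3])
# ...but recovering keys[2] from keys[3] would require inverting SHA-256.
```

This asymmetry, forward computability with backward one-wayness, is exactly why forward security is achievable in one round while full forward secrecy (protecting against compromise of long-term keys) is harder.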
Last updated:  2010-02-17
Predicate-Based Key Exchange
James Birkett, Douglas Stebila
We provide the first description of and security model for authenticated key exchange protocols with predicate-based authentication. In addition to the standard goal of session key security, our security model also provides for credential privacy: a participating party learns nothing more about the other party's credentials than whether they satisfy the given predicate. Our model also encompasses attribute-based key exchange since it is a special case of predicate-based key exchange. We demonstrate how to realize a secure predicate-based key exchange protocol by combining any secure predicate-based signature scheme with the basic Diffie-Hellman key exchange protocol, providing an efficient and simple solution.
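The shape of the construction, sign your ephemeral Diffie-Hellman value, then derive the session key, can be sketched as follows. An HMAC under a shared key stands in for the predicate-based signature, which this sketch does not implement, and all parameters are toy assumptions.

```python
import hashlib, hmac, random

# Signed ephemeral Diffie-Hellman, with an HMAC standing in for the
# predicate-based signature (not implemented here). Toy parameters.
p = 2**127 - 1
g = 5
cred_key = b"stand-in for predicate-signature credentials"

def sign(msg):
    return hmac.new(cred_key, msg, hashlib.sha256).digest()

def verify(msg, sig):
    return hmac.compare_digest(sign(msg), sig)

# Each party sends its DH share together with a signature on it.
a = random.randrange(2, p - 1); A = pow(g, a, p)
b = random.randrange(2, p - 1); B = pow(g, b, p)
sigA = sign(str(A).encode())
sigB = sign(str(B).encode())

# After verifying the peer's signature, both sides derive the session key.
ok = verify(str(A).encode(), sigA) and verify(str(B).encode(), sigB)
key_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
key_bob = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
```

In the actual protocol, a predicate-based signature replaces the HMAC, so verifying a signature convinces the peer only that the signer's hidden credentials satisfy the predicate, which is what yields credential privacy.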
Last updated:  2010-02-16
The Eris hybrid cipher
Sandy Harris
An earlier paper by the same author (IACR Eprint 2008/473) suggested combining a block cipher and a stream cipher to get a strong hybrid cipher. This paper proposes a specific cipher based on those ideas, using the HC-128 stream cipher and a tweakable block cipher based on Serpent.
Last updated:  2010-07-20
Secrecy-Oriented First-Order Logical Analysis of Cryptographic Protocols
Gergei Bana, Koji Hasebe, Mitsuhiro Okada
We present a computationally sound first-order system for security analysis of protocols that places secrecy of nonces and keys at its center. Even trace properties such as agreement and authentication are proven via proving a non-trace property, namely, secrecy first. This results in a very powerful system, the working of which we illustrate on the agreement and authentication proofs for the Needham-Schroeder-Lowe public-key and the amended Needham-Schroeder shared-key protocols in the case of unlimited sessions. Unlike other available formal verification techniques, computational soundness of our approach does not require any idealizations about parsing of bitstrings or unnecessary tagging. In particular, we have total control over detecting or eliminating the possibility of type-flaw attacks.
Last updated:  2013-04-03
From Dust to Dawn: Practically Efficient Two-Party Secure Function Evaluation Protocols and their Modular Design
Vladimir Kolesnikov, Ahmad-Reza Sadeghi, Thomas Schneider
General two-party Secure Function Evaluation (SFE) allows mutually distrusting parties to (jointly) correctly compute \emph{any} function on their private input data, without revealing the inputs. SFE, properly designed, guarantees to satisfy the most stringent security requirements, even for interactive computation. Two-party SFE can benefit almost any client-server interaction where privacy is required, such as privacy-preserving credit checking, medical classification, or face recognition. Today, SFE is the subject of an immense amount of research in a variety of directions, and this body of work is not easy to navigate. In this paper, we systematize the most \emph{practically important} work of the vast research knowledge on \emph{general} SFE. It turns out that the most efficient SFE protocols today are obtained by combining several basic techniques, such as garbled circuits and homomorphic encryption. We limit our detailed discussion to efficient general techniques. In particular, we do not discuss the details of currently \emph{practically inefficient} techniques, such as fully homomorphic encryption (although we elaborate on its practical relevance), nor do we cover \emph{specialized} techniques applicable only to small classes of functions. As an important practical contribution, we present a framework in which today's practically most efficient techniques for general SFE can be viewed as building blocks with well-defined interfaces that can be easily combined to establish a complete efficient solution. Further, our approach naturally lends itself to automated protocol generation (compilation). This is evidenced by the implementation of (parts of) our framework in the TASTY SFE compiler (introduced at ACM CCS 2010). In sum, our work is positioned as a comprehensive guide to state-of-the-art SFE, with the additional goal of extracting, systematizing and unifying the most relevant and promising general techniques from among the mass of SFE knowledge.
We hope this guide will help developers of SFE libraries and privacy-preserving protocols in selecting the most efficient SFE components available today.
Last updated:  2010-02-20
Multiple Bytes Differential Fault Analysis on CLEFIA
Xin-jie ZHAO, Tao WANG, Jing-zhe GAO
This paper examines the strength of CLEFIA against multiple-byte differential fault attacks. Firstly, it presents the principles of the CLEFIA algorithm and of differential fault analysis; then, according to three scenarios in which faults are injected into the $r$-th, $(r-1)$-th, or $(r-2)$-th CLEFIA round, it proposes three fault models and corresponding analysis methods; finally, all of the fault models and analysis methods above are verified through software simulation. Experimental results demonstrate that CLEFIA is vulnerable to differential fault attack due to its Feistel structure and S-box features: 5-6, 6-8, and 2 faults, respectively, are needed to recover CLEFIA-128 under the three fault models in this paper; the multiple-byte fault model can greatly improve the attack's practicality and even its efficiency; and the fault analysis methods in this paper can suggest fault analysis ideas for other block ciphers using S-boxes.
Last updated:  2010-02-16
ECC2K-130 on Cell CPUs
Joppe W. Bos, Thorsten Kleinjung, Ruben Niederhagen, Peter Schwabe
This paper describes an implementation of Pollard's rho algorithm to compute the elliptic curve discrete logarithm for the Synergistic Processor Elements of the Cell Broadband Engine Architecture. Our implementation targets the elliptic curve discrete logarithm problem defined in the Certicom ECC2K-130 challenge. We compare a bitsliced implementation to a non-bitsliced implementation and describe several optimization techniques for both approaches. In particular, we address the question whether normal-basis or polynomial-basis representation of field elements leads to better performance. Using our software, the ECC2K-130 challenge can be solved in one year using the Synergistic Processor Units of less than 2700 Sony Playstation~3 gaming consoles.
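The iteration counts in the abstract come from Pollard's rho, whose core idea can be shown on a toy discrete-logarithm instance in a prime-order subgroup of $\mathbb{Z}_p^*$ (the actual computation targets a Koblitz curve over a binary field and uses a parallelized, bitsliced walk; the group, parameters, and sequential Floyd cycle-finding below are illustrative assumptions, not the paper's method):

```python
import random

def pollard_rho_dlog(g, h, p, q, seed=1):
    """Toy Pollard rho: find x with g^x = h (mod p), g of prime order q."""
    rng = random.Random(seed)

    def step(y, a, b):
        # Pseudo-random walk, keeping the invariant y = g^a * h^b (mod p).
        s = y % 3
        if s == 0:
            return (y * h) % p, a, (b + 1) % q
        if s == 1:
            return (y * y) % p, (2 * a) % q, (2 * b) % q
        return (y * g) % p, (a + 1) % q, b

    while True:
        a0, b0 = rng.randrange(q), rng.randrange(q)
        y0 = (pow(g, a0, p) * pow(h, b0, p)) % p
        t = (y0, a0, b0)           # tortoise
        hh = step(*step(*t))       # hare moves twice per tortoise step
        while t[0] != hh[0]:
            t = step(*t)
            hh = step(*step(*hh))
        (_, a1, b1), (_, a2, b2) = t, hh
        if (b1 - b2) % q == 0:
            continue               # degenerate collision, restart
        # g^{a1} h^{b1} = g^{a2} h^{b2}  =>  x = (a2 - a1) / (b1 - b2) mod q
        return (a2 - a1) * pow(b1 - b2, -1, q) % q

p, q = 227, 113          # p = 2q + 1, toy parameters
g = 4                    # generator of the order-113 subgroup
h = pow(g, 77, p)        # secret exponent is 77
assert pollard_rho_dlog(g, h, p, q) == 77
```

The bitsliced Cell implementation replaces this sequential cycle-finding with many parallel walks reporting "distinguished points", but the collision argument is the same.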
Last updated:  2011-12-14
Private and Continual Release of Statistics
T-H. Hubert Chan, Elaine Shi, Dawn Song
We ask the question – how can websites and data aggregators continually release updated statistics, and meanwhile preserve each individual user’s privacy? Suppose we are given a stream of 0’s and 1’s. We propose a differentially private continual counter that outputs at every time step the approximate number of 1’s seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow websites to continually give top-k and hot items suggestions while preserving users’ privacy.
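The abstract does not spell out the construction; the standard way to get polylogarithmic error is a binary-tree ("binary mechanism") counter that adds independent noise to the O(log t) dyadic partial sums covering the stream so far. A minimal sketch with invented class and method names, and a `scale` hook so the tree bookkeeping can be checked exactly (with `scale=0` the output is the exact prefix sum):

```python
import math, random

def laplace(scale, rng):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

class BinaryMechanismCounter:
    """Running count of a 0/1 stream, releasing at each step the sum of
    O(log t) noisy dyadic blocks (a sketch of the binary mechanism)."""

    def __init__(self, scale, seed=0):
        self.scale = scale
        self.rng = random.Random(seed)
        self.t = 0
        self.exact = {}        # level -> exact partial sum of a block
        self.noisy = {}        # level -> noisy partial sum of a block

    def feed(self, bit):
        self.t += 1
        # The lowest set bit of t tells which dyadic blocks merge now.
        i = (self.t & -self.t).bit_length() - 1
        total = bit + sum(self.exact.get(j, 0) for j in range(i))
        for j in range(i):
            self.exact.pop(j, None)
            self.noisy.pop(j, None)
        self.exact[i] = total
        noise = laplace(self.scale, self.rng) if self.scale > 0 else 0.0
        self.noisy[i] = total + noise
        # Active blocks correspond to the set bits of t: O(log t) of them.
        return sum(self.noisy.values())
```

Each stream element touches only O(log t) blocks, and each release aggregates only O(log t) noise samples, which is where the polylog error bound comes from.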
Last updated:  2010-02-16
A New Scheme for Zero Knowledge Proof based on Multivariate Quadratic Problem and Quaternion Algebra
Mehdi Vasef
This paper introduces a new intractable security problem whose intractability is due to the NP-completeness of the multivariate quadratic (MQ) problem. This novel problem uses quaternion algebra in conjunction with MQ. Starting with simultaneous multivariate equations, we transform these equations into simultaneous quaternion-based multivariate quadratic equations. A new scheme for computational zero-knowledge proof based on this problem is proposed. It is proved that, according to the black-box definition of a zero-knowledge proof (ZKP) system, the proposed scheme is a ZKP. The proof proceeds through two lemmas. In the first lemma, it is shown that the expected polynomial-time machine $M_{V^*}$ halts in polynomial time. In the second lemma, it is shown that the probability ensembles $\{M_{V^*}(x)\}_{x \in L}$ and $\{\langle P, V^* \rangle(x)\}_{x \in L}$ are polynomially indistinguishable. The scheme has low computational overhead and is particularly useful in cryptographic applications such as digital signatures and key agreement.
Last updated:  2010-02-11
Concurrent Knowledge Extraction in the Public-Key Model
Andrew C. Yao, Moti Yung, Yunlei Zhao
Knowledge extraction is a fundamental notion, modeling machine possession of values (witnesses) in a computational complexity sense and enabling one to argue about the internal state of a party in a protocol without probing its internal secret state. However, when transactions are concurrent (e.g., over the Internet) with players possessing public-keys (as is common in cryptography), assuring that entities ``know" what they claim to know, where adversaries may be well coordinated across different transactions, turns out to be much more subtle and in need of re-examination. Here, we investigate how to formally treat knowledge possession by parties (with registered public-keys) interacting over the Internet. Stated more technically, we look into the relative power of the notion of ``concurrent knowledge-extraction" (CKE) in the concurrent zero-knowledge (CZK) bare public-key (BPK) model, where the statements being proven can be dynamically and adaptively chosen by the prover. We show that the potential vulnerability to man-in-the-middle (MIM) attacks turns out to be a real security threat to existing natural protocols running concurrently in the public-key model, which motivates us to introduce and formalize the notion of CKE, along with clarification of various subtleties. Then, both generic (based on standard polynomial assumptions) and efficient (employing complexity leveraging in a novel way) implementations for NP are presented for constant-round (in particular, round-optimal) concurrently knowledge-extractable concurrent zero-knowledge (CZK-CKE) arguments in the BPK model. The efficient implementation can be further instantiated in practice for specific number-theoretic languages.
Last updated:  2010-02-11
Related-Key Boomerang Attack on Block Cipher SQUARE
Bonwook Koo, Yongjin Yeom, Junghwan Song
Square is an 8-round SPN block cipher whose round function and key schedule were slightly modified to design the building blocks of Rijndael. The key schedule of Square is simple and efficient but fully affine, so we apply a related-key attack to it. We find a 3-round related-key differential trail with probability 2^{-28} that has zero differences on both its input and output states; such a trail is called a local collision in [5]. By extending this related-key differential, we construct a 7-round related-key boomerang distinguisher with probability 2^{-119}, calculating its probability using the ladder switch and local amplification techniques. The best previously known attack on Square is the square attack on a 6-round reduced variant. In this paper, we present a key recovery attack on the full cipher: adding one round on top of the distinguisher yields an attack on the full 8-round Square that recovers 16 bits of key information with 2^36 encryptions and 2^123 data.
Last updated:  2010-06-24
Approximating Addition by XOR: how to go all the way
Didier Alquié
In this paper, we study the approximation of addition by XOR, taking P. Sarkar's publication~\cite{bib:sarkar} as the reference work and starting point. In that work, among various results, it was claimed that explicit formulas seemed difficult to obtain when the number $n$ of summands is more than $5$. In the first part of our work, we show a systematic way to find explicit formulas: the complexity of computing them is $O(n^3)$, which allows large values of $n$. We present some numerical computations and point out a (conjectural) observation on the coefficients. In the second part, we study a generalization of P. Sarkar's work to $q$-ary addition, instead of binary. We show that the mechanics of the addition are essentially the same as in the binary case. In particular, the sequence of carries behaves very similarly: it is a Markov chain whose transition matrix can be computed. Running some experiments on small values of $n$ leads us to a conjecture, the first part of which is intuitive and the second part of which reveals an amazing coincidence (and probably is not one!). Finally, in a section titled ``very last news'', we refer to a paper published by Holte in 1997 that was brought to our attention after our first post and that we had missed before. It happens that this paper studies the topic and solves a major part of our open problems. Hence, the present post is an updated version of our previous ``Approximating Addition by XOR: how to go (a little) further than P. Sarkar'', taking this earlier reference by Holte into account.
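As background for the simplest two-summand binary case (not a result of the paper, whose interest lies in $n$ summands and $q$-ary addition): $x + y$ equals $x \oplus y$ exactly when no bit position produces a carry, i.e. when $x \wedge y = 0$, so the agreement probability over uniform $k$-bit words is $(3/4)^k$. A quick exhaustive check:

```python
def exact_match_count(k):
    """Count pairs (x, y) of k-bit words with x + y == x ^ y."""
    return sum(1 for x in range(1 << k) for y in range(1 << k)
               if x + y == x ^ y)

k = 4
matches = exact_match_count(k)
# x + y == x ^ y iff x & y == 0 (no carry anywhere); each bit position
# independently allows 3 of the 4 bit combinations, giving 3^k pairs.
assert matches == 3 ** k
assert matches / 4 ** k == (3 / 4) ** k
```

For more than two summands the carry at each position is no longer 0/1 and the carry sequence becomes the Markov chain the paper analyzes.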
Last updated:  2010-02-11
2-round Substitution-Permutation and 3-round Feistel Networks have bad Algebraic Degree
Didier Alquié
We study the algebraic degree profile of reduced-round block cipher schemes. We show that the degree is not maximal, using elementary combinatorial and algebraic arguments. We discuss how this can be turned into distinguishers from balanced random functions.
Last updated:  2010-03-01
Strongly Unforgeable Signatures and Hierarchical Identity-based Signatures from Lattices without Random Oracles
Markus Rückert
We propose a variant of Peikert's lattice-based existentially unforgeable signature scheme in the standard model. Our construction offers the same efficiency as Peikert's but supports the stronger notion of strong unforgeability. Strong unforgeability demands that the adversary is unable to produce a new message-signature pair (m, s), even if he or she is allowed to see a different signature s' for m. In particular, we provide the first treeless signature scheme that supports strong unforgeability for the post-quantum era in the standard model. Moreover, we show how to directly implement identity-based, and even hierarchical identity-based, signatures (IBS) in the same strong security model without random oracles. An additional advantage of this direct approach over the usual generic conversion of hierarchical identity-based encryption to IBS is that we can exploit the efficiency of ideal lattices without significantly harming security. We equip all constructions with strong security proofs based on mild worst-case assumptions on lattices, and we also propose concrete security parameters.
Last updated:  2010-04-14
Type-II Optimal Polynomial Bases
Daniel J. Bernstein, Tanja Lange
In the 1990s and early 2000s several papers investigated the relative merits of polynomial-basis and normal-basis computations for $\F_{2^n}$. Even for particularly squaring-friendly applications, such as implementations of Koblitz curves, normal bases fell behind in performance unless a type-I normal basis existed for $\F_{2^n}$. In 2007 Shokrollahi proposed a new method of multiplying in a type-II normal basis. Shokrollahi's method efficiently transforms the normal-basis multiplication into a single multiplication of two size-$(n+1)$ polynomials. This paper speeds up Shokrollahi's method in several ways. It first presents a simpler algorithm that uses only size-$n$ polynomials. It then explains how to reduce the transformation cost by dynamically switching to a `type-II optimal polynomial basis' and by using a new reduction strategy for multiplications that produce output in type-II polynomial basis. As an illustration of its improvements, this paper explains in detail how the multiplication overhead in Shokrollahi's original method has been reduced by a factor of $1.4$ in a major cryptanalytic computation, the ongoing attack on the ECC2K-130 Certicom challenge. The resulting overhead is also considerably smaller than the overhead in a traditional low-weight-polynomial-basis approach. This is the first state-of-the-art binary-elliptic-curve computation in which type-II bases have been shown to outperform traditional low-weight polynomial bases.
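The "traditional low-weight-polynomial-basis approach" that the paper compares against boils down to carryless polynomial multiplication followed by reduction modulo a sparse polynomial. A toy sketch using integer bit operations; the field $\F_{2^8}$ with the AES reduction polynomial $x^8+x^4+x^3+x+1$ is chosen here purely for illustration (the challenge computation itself works in a much larger binary field):

```python
def gf2_mul(a, b, mod, n):
    """Multiply a and b as polynomials over GF(2), reduced modulo the
    degree-n polynomial whose coefficient bits are in `mod`
    (including the x^n term)."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a          # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a >> n:            # degree hit n: subtract (XOR) the modulus
            a ^= mod
    return acc

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): no cross term, since 2x = 0.
assert gf2_mul(0b11, 0b11, 0x11B, 8) == 0b101
```

A low-weight modulus makes the reduction step a handful of shifts and XORs, which is exactly the overhead baseline against which the type-II transformation is measured.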
Last updated:  2010-03-01
Okamoto-Tanaka Revisited: Fully Authenticated Diffie-Hellman with Minimal Overhead
Rosario Gennaro, Hugo Krawczyk, Tal Rabin
The Diffie-Hellman protocol (DHP) is one of the most studied protocols in cryptography. Much work has been dedicated to armoring the original protocol against active attacks while incurring a minimal performance overhead relative to the basic (unauthenticated) DHP. This line of work has resulted in some remarkable protocols, e.g., MQV, where the protocol's communication cost is identical to that of the basic DHP and the computation overhead is small. Unfortunately, MQV and similar 2-message ``implicitly authenticated" protocols do not achieve full security against active attacks since they cannot provide forward secrecy (PFS), a major security goal of DHP, against active attackers. In this paper we investigate the question of whether one can push the limits of authenticated DHPs even further, namely, to achieve communication complexity as in the original DHP (two messages with a single group element per message), maintain low computational overhead, and yet achieve full PFS against active attackers in a provable way. We answer this question in the affirmative by resorting to an old and elegant key agreement protocol: the Okamoto-Tanaka protocol \cite{okta}. We present a variant of the protocol (denoted mOT) which achieves the above minimal communication, incurs a computational overhead relative to the basic DHP that is practically negligible, and yet achieves full provable key agreement security, including PFS, against active attackers. Moreover, due to the identity-based properties of mOT, even the sending of certificates (typical for authenticated DHPs) can be avoided in the protocol. As additional contributions, we apply our analysis to prove the security of a recent multi-domain extension of the Okamoto-Tanaka protocol by Schridde et al. and show how to adapt mOT to the (non id-based) certificate-based setting.
Last updated:  2010-02-11
A Pairing-Based DAA Scheme Further Reducing TPM Resources
Ernie Brickell, Jiangtao Li
Direct Anonymous Attestation (DAA) is an anonymous signature scheme designed for anonymous attestation of a Trusted Platform Module (TPM) while preserving the privacy of the device owner. Since TPM has limited bandwidth and computational capability, one interesting feature of DAA is to split the signer role between two entities: a TPM and a host platform where the TPM is attached. Recently, Chen proposed a new DAA scheme that is more efficient than previous DAA schemes. In this paper, we construct a new DAA scheme requiring even fewer TPM resources. Our DAA scheme is about 5 times more efficient than Chen's scheme for the TPM implementation using the Barreto-Naehrig curves. In addition, our scheme requires much smaller size of software code that needs to be implemented in the TPM. This makes our DAA scheme ideal for the TPM implementation. Our DAA scheme is efficient and provably secure in the random oracle model under the strong Diffie-Hellman assumption and the decisional Diffie-Hellman assumption.
Last updated:  2010-02-09
Some Observations on TWIS Block Cipher
Bozhan Su, Wenling Wu, Lei Zhang, Yanjun Li
The 128-bit block cipher TWIS was proposed by Ojha et al. in 2009. It is a lightweight block cipher whose design is inspired by CLEFIA. In this paper, we first study the properties of the TWIS structure, and as an extension we also consider a generalized TWIS-type structure, which we call the G-TWIS cipher, where the block size and round number can take arbitrary values. Then we present a series of 10-round differential distinguishers for TWIS and an n-round differential distinguisher for G-TWIS, whose probabilities are all equal to 1. Therefore, by utilizing these differential distinguishers, we can break the full 10-round TWIS cipher and the n-round G-TWIS cipher.
Last updated:  2010-02-08
An Anonymous ID-based Encryption Revisited
Zhengjun Cao
In 2006, Boyen and Waters proposed an anonymous ID-based encryption scheme. Notably, the system secret key in the scheme is a tuple of five elements, and each user's secret key is likewise a tuple of five elements. The authors did not explain why so many parameters should be introduced. In this paper, we simulate a general attempt to attack the scheme. This shows which parameters are essential to the scheme and which can reasonably be discarded. Based on this analysis, we present a simplified version and an efficient version of the Boyen-Waters scheme. The analysis technique developed in this paper is helpful for improving other cryptographic protocols.
Last updated:  2010-07-30
New Advances on Privacy-Preserving Policy Reconciliation
Ulrike Meyer, Susanne Wetzel, Sotiris Ioannidis
Entities define their own set of rules under which they are willing to collaborate, e.g., interact, share and exchange resources or information with others. Typically, these individual policies differ for different parties. Thus, collaboration requires the resolving of differences and reaching a consensus. This process is generally referred to as policy reconciliation. Current solutions for policy reconciliation do not take into account the privacy concerns of the reconciling parties. This paper addresses the problem of preserving privacy during policy reconciliation. We introduce new protocols that meet the privacy requirements of the organizations and allow parties to find a common policy rule which optimizes their individual preferences.
Last updated:  2010-12-10
Differential Fault Analysis on SMS4 Using a Single Fault
Ruilin Li, Bing Sun, Chao Li, Jianxiong You
Differential Fault Analysis (DFA) attack is a powerful cryptanalytic technique that could be used to retrieve the secret key by exploiting computational errors in the encryption (decryption) procedure. In the present paper, we propose a new DFA attack on SMS4 using a single fault. We show that if a random byte fault is induced into either the second, third, or fourth word register at the input of the $28$-th round, the 128-bit master key could be recovered with an exhaustive search of $22.11$ bits on average. The proposed attack makes use of the characteristic of the cipher's structure, the speciality of the diffusion layer, and the differential property of the S-box. Furthermore, it can be tailored to any block cipher employing a similar structure and an SPN-style round function as that of SMS4.
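The general DFA recipe the abstract relies on, namely comparing correct and faulty ciphertexts and discarding key guesses inconsistent with the observed differential, can be illustrated on a one-byte toy "last round" (the S-box, key, and fault model below are invented for illustration and are unrelated to SMS4's actual components or to the paper's 2^22.11 analysis):

```python
import random

rng = random.Random(42)
SBOX = list(range(256))
rng.shuffle(SBOX)                     # toy bijective 8-bit S-box
INV = [0] * 256
for i, s in enumerate(SBOX):
    INV[s] = i

K = 0x5A                              # secret last-round key byte

def last_round(x):
    """Toy last round: c = S(x) xor K."""
    return SBOX[x] ^ K

def candidates(c, c_faulty, delta):
    """Key guesses g for which the S-box inputs recovered from the
    ciphertext pair differ by the injected fault delta."""
    return {g for g in range(256)
            if INV[c ^ g] ^ INV[c_faulty ^ g] == delta}

# Intersect candidate sets over a few faulty encryptions.
cands = set(range(256))
for _ in range(4):
    x, delta = rng.randrange(256), rng.randrange(1, 256)
    c, c_faulty = last_round(x), last_round(x ^ delta)
    cands &= candidates(c, c_faulty, delta)
assert K in cands                     # the true key always survives
```

Each fault filters the key space through the S-box's differential table; the real attack gains its efficiency from how SMS4's diffusion layer spreads a single byte fault across round registers.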
Last updated:  2010-02-08
Differential Cryptanalysis of SMS4 Block Cipher
Bozhan Su, Wenling Wu, Wentao Zhang
SMS4 is a 128-bit block cipher used in the WAPI standard for wireless networks in China. In this paper, we analyze the security of the SMS4 block cipher against differential cryptanalysis. First, we prove three theorems and one corollary that reflect relationships between 5- and 6-round SMS4. Next, using these relationships, we determine the minimum number of differentially active S-boxes in 6-, 7- and 12-round SMS4, respectively. Finally, based on the above results, we present a family of about $2^{14}$ differential characteristics for 19-round SMS4, which leads to an attack on 23-round SMS4 with $2^{115}$ chosen plaintexts and $2^{124.3}$ encryptions. Our attack is the best known attack on SMS4 so far.
Last updated:  2010-06-23
Privacy-Preserving Matching Protocols for Attributes and Strings
Pu Duan, Sanmin Liu, Weiqin Ma, Guofei Gu, Jyh-Charn Liu
In this technical report we present two new privacy-preserving matching protocols for singular attributes and strings, respectively. The first one is used for matching of common attributes without revealing unmatched ones to each other. The second protocol is used to discover the longest common sub-string of two input strings in a privacy-preserving manner. Compared with previous work, our solutions are efficient and suitable to implement for many different applications, e.g., discovery of common worm signatures, computation of similarity of IP payloads.