All papers in 2006 (Page 4 of 485 results)

Last updated:  2008-04-10
Deterministic and Efficiently Searchable Encryption
Mihir Bellare, Alexandra Boldyreva, Adam O'Neill
We present as-strong-as-possible definitions of privacy, and constructions achieving them, for public-key encryption schemes where the encryption algorithm is \textit{deterministic}. We obtain as a consequence database encryption methods that permit fast (i.e.~sub-linear, and in fact logarithmic, time) search while provably providing privacy that is as strong as possible subject to this fast search constraint. One of our constructs, called RSA-DOAEP, has the added feature of being length preserving, so that it is the first example of a public-key cipher. We generalize this to obtain a notion of efficiently-searchable encryption schemes which permit more flexible privacy to search-time trade-offs via a technique called bucketization. Our results answer much-asked questions in the database community and provide foundations for work done there.
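The fast-search consequence is easy to see in miniature: when equal plaintexts produce equal ciphertexts, the server can index the ciphertexts like any other keys. The following toy Python sketch is an editorial illustration only, not the paper's RSA-DOAEP construction; it uses a keyed HMAC-SHA256 tag as an assumed stand-in for a deterministic encryption of the searchable field, and the key name and record layout are hypothetical.

# Toy illustration: deterministic tags enable fast (dictionary / tree) lookup.
# This is NOT the RSA-DOAEP scheme; the keyed HMAC tag is only a stand-in.
import hashlib, hmac

KEY = b"client-side secret key"          # hypothetical key held by the data owner

def det_tag(field: bytes) -> bytes:
    # Deterministic: equal plaintext fields give equal tags, so the server can index them.
    return hmac.new(KEY, field, hashlib.sha256).digest()

# The client uploads (tag, encrypted payload); the server builds an index on the tag.
records = {b"alice": b"<ciphertext of alice's row>",
           b"bob":   b"<ciphertext of bob's row>"}
server_index = {det_tag(name): payload for name, payload in records.items()}

# To search for "alice", the client sends det_tag(b"alice") and the server answers
# with a plain dictionary (or B-tree) lookup, i.e. in sublinear time.
print(server_index[det_tag(b"alice")])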
Last updated:  2006-06-19
Statistical Zero-Knowledge Arguments for NP from Any One-Way Function
Minh-Huyen Nguyen, Shien Jin Ong, Salil Vadhan
We show that every language in NP has a *statistical* zero-knowledge argument system under the (minimal) complexity assumption that one-way functions exist. In such protocols, even a computationally unbounded verifier cannot learn anything other than the fact that the assertion being proven is true, whereas a polynomial-time prover cannot convince the verifier to accept a false assertion except with negligible probability. This resolves an open question posed by Naor, Ostrovsky, Venkatesan, and Yung (CRYPTO '92, J. Cryptology '98). Departing from previous works on this problem, we do not construct standard statistically hiding commitments from any one-way function. Instead, we construct a relaxed variant of commitment schemes called "1-out-of-2-binding commitments," recently introduced by Nguyen and Vadhan (STOC '06).
Last updated:  2006-08-08
On Signatures of Knowledge
Melissa Chase, Anna Lysyanskaya
In a traditional signature scheme, a signature $\sigma$ on a message $m$ is issued under a public key $\pk$, and can be interpreted as follows: "The owner of the public key $\pk$ and its corresponding secret key has signed message $m$." In this paper we consider schemes that allow one to issue signatures on behalf of any NP statement, that can be interpreted as follows: "A person in possession of a witness $w$ to the statement that $x \in L$ has signed message $m$." We refer to such schemes as \emph{signatures of knowledge}. We formally define the notion of a signature of knowledge. We begin by extending the traditional definition of digital signature schemes, captured by Canetti's ideal signing functionality, to the case of signatures of knowledge. We then give an alternative definition in terms of games that also seems to capture the necessary properties one may expect from a signature of knowledge. We then gain additional confidence in our two definitions by proving them equivalent. We construct signatures of knowledge under standard complexity assumptions in the common-random-string model. We then extend our definition to allow signatures of knowledge to be \emph{nested} i.e., a signature of knowledge (or another accepting input to a UC-realizable ideal functionality) can itself serve as a witness for another signature of knowledge. Thus, as a corollary, we obtain the first \emph{delegatable} anonymous credential system, i.e., a system in which one can use one's anonymous credentials as a secret key for issuing anonymous credentials to others.
Last updated:  2006-06-02
Information-Theoretic Conditions for Two-Party Secure Function Evaluation
Claude Crépeau, George Savvides, Christian Schaffner, Jürg Wullschleger
The standard security definition of unconditional secure function evaluation, which is based on the ideal/real model paradigm, has the disadvantage of being overly complicated to work with in practice. On the other hand, simpler ad-hoc definitions tailored to special scenarios have often been flawed. Motivated by this unsatisfactory situation, we give an information-theoretic security definition of secure function evaluation which is very simple yet provably equivalent to the standard, simulation-based definitions.
Last updated:  2006-05-30
On the Limits of Point Function Obfuscation
Arvind Narayanan, Vitaly Shmatikov
We study the problem of circuit obfuscation, i.e., transforming the circuit in a way that hides everything except its input-output behavior. Barak et al. showed that a universal obfuscator that obfuscates every circuit class cannot exist, leaving open the possibility of special-purpose obfuscators. Known positive results for obfuscation are limited to point functions (boolean functions that return 1 on exactly one input) and simple extensions thereof in the random oracle model, i.e., assuming black-box access to a true random function. It was also shown by Wee how to instantiate random oracles so as to achieve a slightly weaker form of point function obfuscation. Two natural questions arise: (i) what circuits have obfuscators whose security can be reduced in a black-box way to point function obfuscation? and (ii) for what circuits obfuscatable in the random oracle model can we instantiate the random oracles to build obfuscators in the plain model? We give partial answers to these questions: there is a circuit in AC^0 which can be obfuscated in the random oracle model, but which is not securely obfuscated when the random oracles are instantiated with Wee's construction. More generally, we present evidence for the impossibility of a black-box reduction of the obfuscatability of this circuit to point function obfuscation. Conversely, this result shows that to instantiate random oracles in general obfuscators, one needs to utilize properties of the instantiation that are not satisfied by point function obfuscators.
Last updated:  2006-07-19
There exist Boolean functions on $n$ (odd) variables having nonlinearity $> 2^{n-1} - 2^{\frac{n-1}{2}}$ if and only if $n > 7$
Selçuk Kavut, Subhamoy Maitra, Melek D. Yücel
For the first time we find Boolean functions on 9 variables having nonlinearity 241, which had remained an open question in the literature for almost three decades. Such functions are discovered using a suitably modified steepest-descent based iterative heuristic search in the class of rotation symmetric Boolean functions (RSBFs). This shows that there exist Boolean functions on $n$ (odd) variables having nonlinearity $> 2^{n-1} - 2^{\frac{n-1}{2}}$ if and only if $n > 7$. Using the same search method, we also find several other important functions and we study the autocorrelation, propagation characteristics and resiliency of the RSBFs (using proper affine transformations, if required). The results show that it is possible to get balanced Boolean functions on $n=10$ variables having autocorrelation spectra with maximum absolute value $< 2^{\frac{n}{2}}$, which was not known earlier. In certain cases the functions can be affinely transformed to get first order propagation characteristics. We also obtain 10-variable functions having first order resiliency and nonlinearity 492, which was posed as an open question at Crypto 2000.
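For readers who want to check such nonlinearity values, recall that nl(f) = 2^{n-1} - (1/2) max_a |W_f(a)|, where $W_f$ is the Walsh-Hadamard transform of $f$. The short Python sketch below is our own illustration of this formula (it is not the authors' RSBF search code); it evaluates a random 9-variable function against the bent-concatenation bound $2^{n-1} - 2^{\frac{n-1}{2}} = 240$.

# Sketch: nonlinearity of a Boolean function via the Walsh-Hadamard transform.
# nl(f) = 2^(n-1) - max_a |W_f(a)| / 2, with W_f(a) = sum_x (-1)^(f(x) + a.x).
import random

def walsh_hadamard(signs):
    # Fast WHT on the +/-1 sequence of f.
    w = list(signs)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table, n):
    signs = [(-1) ** b for b in truth_table]
    return 2 ** (n - 1) - max(abs(v) for v in walsh_hadamard(signs)) // 2

n = 9
f = [random.randint(0, 1) for _ in range(2 ** n)]    # a random 9-variable function
print("nl(f) =", nonlinearity(f, n),
      "  bent-concatenation bound:", 2 ** (n - 1) - 2 ** ((n - 1) // 2))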
Last updated:  2006-05-30
Divisibility of the Hamming Weight by $2^k$ and Monomial Criteria for Boolean Functions
Dmitry Khovratovich
In this paper we consider the notions of the Hamming weight and the algebraic normal form. We solve an open problem on checking the divisibility of the weight by $2^k$. We generalize the criterion for checking the evenness of the weight in two ways. Our main result states that for checking whether the Hamming weight of $f$ is divisible by $2^k, \,k>1$, it is necessary and sufficient to know its algebraic normal form up to an additive constant.
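A well-known special case ($k=1$) of the criterion can be verified directly: the Hamming weight of $f$ is odd exactly when the monomial $x_1 x_2 \cdots x_n$ appears in its algebraic normal form. The sketch below is an illustration we add for convenience (it is not code from the paper); it computes the ANF by the binary Moebius transform and checks this parity criterion on random functions.

# Sketch: ANF via the binary Moebius transform, and the k=1 weight criterion:
# wt(f) is odd  <=>  the coefficient of x_1*...*x_n in the ANF equals 1.
import random

def anf(truth_table):
    a = list(truth_table)
    n_points = len(a)
    step = 1
    while step < n_points:
        for i in range(0, n_points, 2 * step):
            for j in range(i, i + step):
                a[j + step] ^= a[j]       # XOR-based Moebius transform
        step *= 2
    return a  # a[m] is the ANF coefficient of the monomial indexed by bitmask m

n = 6
for _ in range(5):
    f = [random.randint(0, 1) for _ in range(2 ** n)]
    top_coeff = anf(f)[2 ** n - 1]        # coefficient of x_1*x_2*...*x_n
    assert (sum(f) % 2 == 1) == (top_coeff == 1)
print("parity criterion verified on random 6-variable functions")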
Last updated:  2006-08-10
FPGA Accelerated Tate Pairing Based Cryptosystems over Binary Fields
Chang Shu, Soonhak Kwon, Kris Gaj
Though the implementation of the Tate pairing is commonly believed to be computationally more intensive than other cryptographic operations, such as ECC point multiplication, there has been substantial progress in speeding up Tate pairing computations. Because of their inherent parallelism, the existing Tate pairing algorithms are very suitable for hardware implementation aimed at achieving a high operation speed. Supersingular elliptic curves over binary fields are good candidates for hardware implementation due to their simple underlying algorithms and binary arithmetic. In this paper we propose efficient Tate pairing implementations over binary fields $\mathbb F_{2^{239}}$ and $\mathbb F_{2^{283}}$ via FPGA. Though our field sizes are larger than those used in earlier architectures with the same security strength based on cubic elliptic curves or binary hyperelliptic curves, fewer multiplications in the underlying field are required, so that the computational latency for one pairing can be reduced. As a result, our pairing accelerators implemented via FPGA can run 15 to 25 times faster than other FPGA realizations at the same level of security strength, and at the same time achieve a lower latency-area product.
Last updated:  2007-03-05
A New Cryptosystem Based On Hidden Order Groups
Amitabh Saxena, Ben Soh
Let $G_1$ be a cyclic multiplicative group of order $n$. It is known that the Diffie-Hellman problem is random self-reducible in $G_1$ with respect to a fixed generator $g$ if $\phi(n)$ is known. That is, given $g, g^x\in G_1$ and having oracle access to a ``Diffie-Hellman Problem solver'' with fixed generator $g$, it is possible to compute $g^{1/x} \in G_1$ in polynomial time (see Theorem 3.2). On the other hand, it is not known if such a reduction exists when $\phi(n)$ is unknown (see Conjecture 3.1). We exploit this ``gap'' to construct a cryptosystem based on hidden order groups and present a practical implementation of a novel cryptographic primitive called an \emph{Oracle Strong Associative One-Way Function} (O-SAOWF). O-SAOWFs have applications in multiparty protocols. We demonstrate this by presenting a key agreement protocol for dynamic ad-hoc groups.
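For convenience, here is a sketch of the standard argument behind the reduction mentioned above (the paper's Theorem 3.2 gives the precise statement). Given oracle access to $\mathrm{DH}_g$ with $\mathrm{DH}_g(g^a, g^b) = g^{ab}$, one call yields $g^{x^2}$ from $g^x$, and a square-and-multiply strategy over the exponent gives $g^{x^k}$ for any $k$ using $O(\log k)$ oracle calls. If $\phi(n)$ is known and $\gcd(x,n)=1$, then $x^{\phi(n)} \equiv 1 \pmod{n}$, so taking $k = \phi(n)-1$ gives
\[ g^{x^{\phi(n)-1}} = g^{x^{\phi(n)} \cdot x^{-1}} = g^{x^{-1}} = g^{1/x}. \]
Without $\phi(n)$ the exponent $\phi(n)-1$ is unavailable, which is exactly the gap the construction relies on.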
Last updated:  2006-05-26
On the (Im-)Possibility of Extending Coin Toss
Dennis Hofheinz, Joern Mueller-Quade, Dominique Unruh
We consider the cryptographic two-party protocol task of extending a given coin toss. The goal is to generate n common random coins from a single use of an ideal functionality which gives m<n common random coins to the parties. In the framework of Universal Composability we show the impossibility of securely extending a coin toss for statistical and perfect security. On the other hand, for computational security the existence of a protocol for coin toss extension depends on the number m of random coins which can be obtained "for free". For the case of stand-alone security, i.e., a simulation based security definition without an environment, we present a novel protocol for unconditionally secure coin toss extension. The new protocol works for superlogarithmic m, which is optimal as we show the impossibility of statistically secure coin toss extension for smaller m. Combining our results with already known results, we obtain a (nearly) complete characterization under which circumstances coin toss extension is possible.
Last updated:  2006-05-22
Counting points on elliptic curves in medium characteristic
Antoine Joux, Reynald Lercier
In this paper, we revisit the problem of computing the kernel of a separable isogeny of degree $\ell$ between two elliptic curves defined over a finite field $\GF{q}$ of characteristic $p$. We describe an algorithm the asymptotic time complexity of which is equal to $\SoftO(\ell^2(1+\ell/p)\log q)$ bit operations. This algorithm is particularly useful when $\ell > p$ and as a consequence, we obtain an improvement of the complexity of the SEA point counting algorithm for small values of $p$. More precisely, we obtain a heuristic time complexity $\SoftO(\log^{4} q)$ and a space complexity $O(\log^{2} q)$, in the previously unfavorable case where $p \simeq \log q$. Compared to the best previous algorithms, the memory requirements of our SEA variation are smaller by a $\log^2 q$ factor.
Last updated:  2008-07-03
Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared Key Models
Moni Naor, Gil Segev, Adam Smith
We address the message authentication problem in two seemingly different communication models. In the first model, the sender and receiver are connected by an insecure channel and by a low-bandwidth auxiliary channel, that enables the sender to ``manually'' authenticate one short message to the receiver (for example, by typing a short string or comparing two short strings). We consider this model in a setting where no computational assumptions are made, and prove that for any $0 < \epsilon < 1$ there exists a $\log^* n$-round protocol for authenticating $n$-bit messages, in which only $2 \log(1 / \epsilon) + O(1)$ bits are manually authenticated, and any adversary (even computationally unbounded) has probability of at most $\epsilon$ to cheat the receiver into accepting a fraudulent message. Moreover, we develop a proof technique showing that our protocol is essentially optimal by providing a lower bound of $2 \log(1 / \epsilon) - O(1)$ on the required length of the manually authenticated string. The second model we consider is the traditional message authentication model. In this model the sender and the receiver share a short secret key; however, they are connected only by an insecure channel. We apply the proof technique above to obtain a lower bound of $2 \log(1 / \epsilon) - 2$ on the required Shannon entropy of the shared key. This settles an open question posed by Gemmell and Naor (CRYPTO '93). Finally, we prove that one-way functions are {\em necessary} (and sufficient) for the existence of protocols breaking the above lower bounds in the computational setting.
Last updated:  2006-06-10
Frobenius expansion and the Diffie Hellman problem
V. R. Sule
This paper proposes investigation of special sessions of the Diffie Hellman (DH) key exchange scheme on elliptic curves for which the shared key can be computed by a polynomial time algorithm. Such sessions are called \emph{singular}. The existence of singular sessions is demonstrated using the Frobenius expansion and polynomial representation of public keys, which lead to an expression for the shared key. When the Weil pairing can be computed on the elliptic curve along with a modified pairing defined by a distortion map efficiently, a sufficient condition is obtained for sessions to be singular which can be verified in polynomial time. Hence this condition identifies sessions whose singular nature can be determined in polynomial time. A single round three party key exchange scheme is proposed using singular sessions in which efficient computation of the shared key of a pair of users by the third party is a necessary requirement. This scheme is thus a positive application of singular sessions and offers a possible alternative to the need for using supersingular curves on which pairings can be computed efficiently.
Last updated:  2006-05-22
Some Practical Public-Key Encryption Schemes in both Standard Model and Random Oracle Model
Le Trieu Phong, Ogata Wakaha
In this paper, we present some more results about the security of the Kurosawa-Desmedt encryption scheme and a variant of it. We prove that after a modification, those schemes are secure against adaptive chosen-ciphertext attack not only under the decisional Diffie-Hellman assumption in the standard model, as before, but also under the computational Diffie-Hellman assumption in the random oracle model. These results ensure that both the Kurosawa-Desmedt scheme and the variant have similar security merits as the Cramer-Shoup encryption scheme, which has been proposed as a standard.
Last updated:  2006-05-17
On Computing Products of Pairings
R Granger, N. P. Smart
Many pairing-based protocols require the evaluation of a product of many pairings. In this paper we consider methods to compute such products efficiently. Focusing on pairing-friendly fields in particular, we evaluate methods for the Weil, Tate and Ate pairing algorithms for ordinary elliptic curves at various security levels. Our operation counts indicate that the minimal cost of each additional pairing relative to the cost of one is $\approx 0.61$, $0.45$, and $0.43$, for each of these pairings respectively at the 128-bit security level. For larger security levels the Ate pairing can have a relative additional cost of as low as $0.13$ for each additional pairing. These estimates allow implementors to make optimal algorithm choices for given scenarios, in which the number of pairings in the product, the security level, and the embedding degree are factors under consideration.
Last updated:  2006-07-17
Key confirmation and adaptive corruptions in the protocol security logic
Prateek Gupta, Vitaly Shmatikov
Cryptographic security for key exchange and secure session establishment protocols is often defined in the so called ``adaptive corruptions'' model. Even if the adversary corrupts one of the participants in the middle of the protocol execution and obtains the victim's secrets such as the private signing key, the victim must be able to detect this and abort the protocol. This is usually achieved by adding a key confirmation message to the protocol. Conventional symbolic methods for protocol analysis assume unforgeability of digital signatures, and thus cannot be used to reason about security in the adaptive corruptions model. We present a symbolic protocol logic for reasoning about authentication and key confirmation in key exchange protocols. The logic is cryptographically sound: a symbolic proof of authentication and secrecy implies that the protocol is secure in the adaptive corruptions model. We illustrate our method by formally proving security of an authenticated Diffie-Hellman protocol with key confirmation.
Last updated:  2006-05-16
Visual Cryptography Schemes with Optimal Pixel Expansion
Carlo Blundo, Stelvio Cimato, Alfredo De Santis
A visual cryptography scheme encodes a black-and-white secret image into n shadow images called shares, which are distributed to the n participants. These shares are such that only qualified subsets of participants can ``visually'' recover the secret image. Usually, the reconstructed image will be darker than the background of the image itself. In this paper we consider visual cryptography schemes satisfying the model introduced by Tzeng and Hu (Designs, Codes and Cryptography, Vol. 27, No. 3, pp. 207-227, 2002). In such a model the recovered secret image can be darker or lighter than the background. We prove a lower bound on the pixel expansion of the scheme and, for (2,n)-threshold visual cryptography schemes, we provide schemes achieving the bound. Our schemes improve on the ones proposed by Tzeng and Hu.
Last updated:  2006-05-16
Simplified pairing computation and security implications
Steven D. Galbraith, Colm O hEigeartaigh, Caroline Sheedy
Recent progress on pairing implementation has made certain pairings extremely simple and fast to compute. Hence, it is natural to examine if there are consequences for the security of pairing-based cryptography. This paper gives a method to compute eta pairings in a way which avoids the requirement for a final exponentiation. The method does not lead to any improvement in the speed of pairing implementation. However, it seems appropriate to re-evaluate the security of pairing based cryptography in light of these new ideas. A multivariate attack on the pairing inversion problem is proposed and analysed. Our findings support the belief that pairing inversion is a hard computational problem.
Last updated:  2006-05-18
How Fast can be Algebraic Attacks on Block Ciphers?
Nicolas T. Courtois
In this paper we give a specification of a new block cipher that can be called the Courtois Toy Cipher (CTC). It is quite simple, and yet very much like any other known block cipher. If the parameters are large enough, it should evidently be secure against all known attack methods. However, we are not proposing a new method for encrypting sensitive data, but rather a research tool that should allow us (and other researchers) to experiment with algebraic attacks on block ciphers and obtain interesting results using a PC with a reasonable quantity of RAM. For this reason the S-box of this cipher has only 3 bits, which is quite small. Ciphers with very small S-boxes are believed quite secure, for example the Serpent S-box has only 4 bits, and in DES all the S-boxes have 4 output bits. The AES S-box is not quite as small but can be described (in many ways) by a very small system of equations with only a few monomials (and this fact can also be exploited in algebraic cryptanalysis). We believe that results on algebraic cryptanalysis of this cipher will have very deep implications for the security of ciphers in general.
Last updated:  2007-01-04
Towards Trustworthy e-Voting using Paper Receipts
Yunho Lee, Kwangwoo Lee, Seungjoo Kim, Dongho Won
Current electronic voting systems are not sufficient to satisfy trustworthy elections as they do not provide any proofs or confirming evidence of their honesty. This lack of trustworthiness is the main reason why e-voting is not widely spread even though e-voting is expected to be more efficient than the current plain paper voting. Many experts believe that the only way to assure voters that their intended votes are cast is to use paper receipts. In this paper, we propose an efficient scheme for issuing receipts to voters in e-voting using the well-known divide-and-choose method. Our scheme does not require any special printers or scanners, nor frequent observation of voting machines. In addition to that, our scheme is more secure than the previous ones.
Last updated:  2007-07-17
General Secret Sharing Based on the Chinese Remainder Theorem
Sorin Iftene
In this paper we extend the threshold secret sharing schemes based on the Chinese remainder theorem in order to deal with more general access structures. Aspects like verifiability, secret sharing homomorphisms and multiplicative properties are also discussed.
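As a concrete reminder of how CRT-based threshold sharing works, the following Python sketch implements a textbook Mignotte-style (t,n) scheme with toy parameters; it is an editorial illustration under simplified assumptions, not the paper's generalized access-structure or verifiable construction, and the moduli are hypothetical toy values.

# Sketch of a Mignotte (t,n) threshold scheme based on the CRT (a classical
# illustration; the paper generalizes CRT sharing to arbitrary access
# structures and adds verifiability and homomorphic properties).
from math import prod
import random

def crt(residues, moduli):
    # Standard Chinese remaindering for pairwise-coprime moduli.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Pairwise-coprime moduli chosen so that any t = 3 shares determine the secret
# while any 2 shares leave it undetermined (the Mignotte condition).
moduli = [101, 103, 107, 109, 113]          # hypothetical toy parameters
t = 3
alpha = prod(sorted(moduli)[:t])            # product of the t smallest moduli
beta = prod(sorted(moduli)[-(t - 1):])      # product of the t-1 largest moduli
secret = random.randrange(beta + 1, alpha)  # the secret must lie in (beta, alpha)

shares = [(m, secret % m) for m in moduli]

# Any t shares recover the secret by CRT; fewer than t do not pin it down.
subset = random.sample(shares, t)
recovered = crt([r for _, r in subset], [m for m, _ in subset])
assert recovered == secret
print("recovered:", recovered)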
Last updated:  2006-05-17
Pairings for Cryptographers
S. D. Galbraith, K. G. Paterson, N. P. Smart
Many research papers in pairing based cryptography treat pairings as a ``black box''. These papers build cryptographic schemes making use of various properties of pairings. If this approach is taken, then it is easy for authors to make invalid assumptions concerning the properties of pairings. The cryptographic schemes developed may not be realizable in practice, or may not be as efficient as the authors assume. The aim of this paper is to outline, in as simple a fashion as possible, the basic choices that are available when using pairings in cryptography. For each choice, the main properties and efficiency issues are summarized. The paper is intended to be of use to non-specialists who are interested in using pairings to design cryptographic schemes.
Last updated:  2006-05-16
Classification of Signature-only Signature Models
Zhengjun Cao
We introduce a set of criteria for classifying signature-only signature models. By these criteria, we classify signature models into 5 basic types and 69 general classes. Theoretically, 21140 kinds of signature models can be deduced by appropriately combining different general classes. The result covers almost all existing signature models. We also contribute many new signature models. Moreover, we find that three signature models, i.e., group-nominee signature, multi-nominee signature and threshold-nominee signature, are of great importance in light of our classification.
Last updated:  2006-05-16
Achieving a log(n) Speed Up for Boolean Matrix Operations and Calculating the Complexity of the Dense Linear Algebra step of Algebraic Stream Cipher Attacks and of Integer Factorization Methods
Gregory V. Bard
The purpose of this paper is to calculate the running time of dense boolean matrix operations, as used in stream cipher cryptanalysis and integer factorization. Several variations of Gaussian Elimination, Strassen's Algorithm and the Method of Four Russians are analyzed. In particular, we demonstrate that Strassen's Algorithm is actually slower than the Four Russians algorithm for matrices of the sizes encountered in these problems. To accomplish this, we introduce a new model for tabulating the running time, tracking matrix reads and writes rather than field operations, and retaining the coefficients rather than dropping them. Furthermore, we introduce an algorithm known heretofore only orally, a ``Modified Method of Four Russians'', which has not appeared in the literature before. This algorithm is $\log n$ times faster than Gaussian Elimination for dense boolean matrices. Finally we list rough estimates for the running time of several recent stream cipher cryptanalysis attacks.
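To make the read/write cost model concrete, here is a small Python sketch of dense Gaussian elimination over GF(2) with each row packed into a single big integer, so that eliminating a row costs one XOR of the whole row; this word-level behavior is what the Method of Four Russians then amortizes further. The sketch is our own illustration, not the author's implementation, and the 256-column example size is arbitrary.

# Sketch: dense Gaussian elimination over GF(2) with bit-packed rows.
# Each row is a Python int, so one row update is a single XOR of the packed
# bits (word-level operations), the setting analyzed in the paper.
import random

def gf2_rank(rows, ncols):
    rows = list(rows)
    rank = 0
    for col in range(ncols):
        pivot_bit = 1 << (ncols - 1 - col)
        # Find a pivot row with a 1 in this column.
        pivot = next((i for i in range(rank, len(rows)) if rows[i] & pivot_bit), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] & pivot_bit:
                rows[i] ^= rows[rank]     # eliminate: one XOR of the whole row
        rank += 1
    return rank

n = 256
matrix = [random.getrandbits(n) for _ in range(n)]   # random dense boolean matrix
print("rank over GF(2):", gf2_rank(matrix, n))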
Last updated:  2006-05-10
A Summary of McEliece-Type Cryptosystems and their Security
D. Engelbert, R. Overbeck, A. Schmidt
In this paper we give an overview of some of the cryptographic applications which were derived from the proposal of R.J. McEliece to use error correcting codes for cryptographic purposes. Code based cryptography is an interesting alternative to number theoretic cryptography. Many basic cryptographic functions like encryption, signing, hashing, etc. can be realized using code theoretic concepts. In this paper we briefly show how to correct errors in transmitted data by employing Goppa codes and describe possible applications to public key cryptography. The main focus of this paper is to provide detailed insight into the state of the art of cryptanalysis of the McEliece cryptosystem and the effect on different cryptographic applications. We conclude that for code based cryptography a public key of $88$KB offers sufficient security for encryption, while we need a public key of at least $597$KB for secure signing.
Last updated:  2006-08-20
Cryptanalysis of 4-Pass HAVAL
Zhangyi Wang, Huanguo Zhang, Zhongping Qin, Qingshu Meng
HAVAL is a cryptographic hash function proposed by Zheng et al. Rompay et al. and Wang et al. found collisions of full 3-Pass HAVAL. In this paper, we study the security of 4-Pass HAVAL. We find collisions of full versions of 4-Pass HAVAL. The attack is similar to the two-block attack on MD5 proposed by Wang et al. The computational complexity of the attack is about 2^30-2^32 for the first block and 2^27-2^29 for the second block. We use this attack to find 256-bit collisions of 4-Pass HAVAL in 3-4 hours on a common PC.
Last updated:  2006-05-04
A Built-in Decisional Function and Security Proof of ID-based Key Agreement Protocols from Pairings
L. Chen, Z. Cheng, N. P. Smart
In recent years, a large number of identity-based key agreement protocols from pairings have been proposed. Some of them are elegant and practical. However, the security of this type of protocols has been surprisingly hard to prove. The main issue is that a simulator is not able to deal with reveal queries, because it requires solving either a computational problem or a decisional problem, both of which are generally believed to be hard (i.e., computationally infeasible). The best solution of security proof published so far uses the gap assumption, which means assuming that the existence of a decisional oracle does not change the hardness of the corresponding computational problem. The disadvantage of using this solution to prove the security for this type of protocols is that such decisional oracles, on which the security proof relies, cannot be performed by any polynomial time algorithm in the real world, because of the hardness of the decisional problem. In this paper we present a method incorporating a built-in decisional function in this type of protocols. The function transfers a hard decisional problem in the proof to an easy decisional problem. We then discuss the resulting efficiency of the schemes and the relevant security reductions in the context of different pairings one can use.
Last updated:  2006-05-04
Repairing a Security-Mediated Certificateless Encryption Scheme from PKC 2006
Joonsang Baek, Guilin Wang
At PKC 2006, Chow, Boyd, and Nieto introduced the concept of security-mediated certificateless (SMC) cryptography. This notion can be considered as a variant of certificateless cryptography with the property of instantaneous key revocation, or a variant of mediated cryptography without full key escrow. They presented a definition of security for SMC encryption, which covers (fully-adaptive) chosen ciphertext attack with public key replacement considered as a strong but essential attack on certificateless cryptographic schemes. They proposed two SMC encryption schemes, one is a generic construction based on any public key encryption, identity-based encryption and one-time signature schemes and the other is a concrete construction based on bilinear pairings, which were shown to be secure under their security definition. In this note, we, however, present two types of attacks demonstrating that their generic construction for SMC encryption fails to meet their security requirement. We then discuss how to repair the scheme and provide a provably-secure solution.
Last updated:  2006-05-03
An Efficient ID-based Proxy Signature Scheme from Pairings
Chunxiang Gu, Yuefei Zhu
This paper proposes a new ID-based proxy signature scheme based on bilinear pairings. Only one pairing operation is involved in the verification procedure of our scheme, so the scheme is comparatively efficient. The new scheme can be proved secure under the hardness assumption of the k-Bilinear Diffie-Hellman Inverse problem in the random oracle model.
Last updated:  2006-05-19
An efficient way to access an array at a secret index
Timothy Atkinson, Marius C. Silaghi
We propose cryptographic primitives for reading and assigning the (shared) secret found at a secret index in a vector of secrets. The problem can also be solved in a constant number of rounds with existing general techniques based on arithmetic circuits and the ``equality test'' in [Damgard.et.al 05]. However, the proposed technique requires fewer bits to be exchanged. The proposed primitives require a number of rounds that is independent of the size N of the vector, and only depends (linearly) on the number t of computing servers. A previously known primitive for reading a vector at a secret index works only for 2-party computations. Our primitives work for any number of computing participants/servers. The proposed techniques are secure against passive attackers, and zero knowledge proofs are provided to show that exactly one index of the array is read/written. The techniques work both with multiparty computations based on secret sharing and with multiparty computations based on threshold homomorphic encryption.
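The core idea of reading at a secret index can be illustrated with a secret-shared selection (unit) vector: the servers compute the dot product of the shared unit vector with the data vector, so no server learns which entry was read. The Python sketch below is a simplified editorial illustration only: it assumes a public data vector, plain additive sharing, and honest servers, whereas the paper's primitives handle shared data, writes, and zero-knowledge proofs that exactly one index is selected.

# Sketch: reading a *public* vector at a *secret* index via an additively
# shared selection (unit) vector.  The paper's protocols go further: the
# vector entries are themselves shared, writes are supported, and
# zero-knowledge proofs enforce that exactly one position is selected.
import random

P = 2 ** 61 - 1          # prime modulus for additive sharing (toy choice)
T = 3                    # number of computing servers (toy choice)

def share(value):
    parts = [random.randrange(P) for _ in range(T - 1)]
    return parts + [(value - sum(parts)) % P]

vector = [11, 22, 33, 44, 55]        # public data vector
secret_index = 3

# The client shares the unit vector e_{secret_index}; no server sees the index.
shared_unit = [share(1 if j == secret_index else 0) for j in range(len(vector))]

# Each server locally dot-products its shares with the public vector ...
server_outputs = [sum(shared_unit[j][s] * vector[j] for j in range(len(vector))) % P
                  for s in range(T)]

# ... and the reconstructed sum is exactly vector[secret_index].
assert sum(server_outputs) % P == vector[secret_index]
print("read value:", sum(server_outputs) % P)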
Last updated:  2006-05-09
The Hardness of the DHK Problem in the Generic Group Model
Alexander W. Dent
In this note we prove that the controversial Diffie-Hellman Knowledge (DHK) assumption holds in the generic group model. This appears to be the first paper that presents any evidence as to whether the Diffie-Hellman Knowledge assumption is true or false.
Last updated:  2006-04-24
Independent Zero-Knowledge Sets
Rosario Gennaro, Silvio Micali
We define and construct Independent Zero-Knowledge Sets (ZKS) protocols. In a ZKS protocol, a Prover commits to a set $S$, and for any $x$, proves non-interactively to a Verifier whether $x \in S$ or $x \notin S$ without revealing any other information about $S$. In the {\em independent} ZKS protocols we introduce, the adversary is prevented from successfully correlating her set with that of an honest prover. Our notion of independence in particular implies that the resulting ZKS protocol is non-malleable. On the way to this result we define the notion of {\em independence} for commitment schemes. It is shown that this notion implies non-malleability, and we argue that this new notion has the potential to simplify the design and security proof of non-malleable commitment schemes. Efficient implementations of ZKS protocols are based on the notion of mercurial commitments. Our efficient constructions of independent ZKS protocols require the design of {\em new} commitment schemes that are simultaneously independent (and thus non-malleable) and mercurial.
Last updated:  2006-04-25
New Public Key Authentication Frameworks with Lite Certification Authority
Xiaolei Dong, Licheng Wang, Zhenfu Cao
Two variants of the CA-based public key authentication framework are proposed in this paper. One is termed public key cryptosystem without certificate management center (PKCwCMC) and the other is termed proxy signature based authentication framework (PS-based AF). Moreover, we give an implementation of the former based on quadratic residue theory and an implementation of the latter from RSA. Both variants can be viewed as lite-CA based authentication frameworks, since the workload and deployment of CAs in these systems are much lighter and easier than in the traditional CA-based PKC.
Last updated:  2006-04-22
On the Relationships Between Notions of Simulation-Based Security
Anupam Datta, Ralf Kuesters, John C. Mitchell, Ajith Ramanathan
Several compositional forms of simulation-based security have been proposed in the literature, including universal composability, black-box simulatability, and variants thereof. These relations between a protocol and an ideal functionality are similar enough that they can be ordered from strongest to weakest according to the logical form of their definitions. However, determining whether two relations are in fact identical depends on some subtle features that have not been brought out in previous studies. We identify the position of a ``master process" in the distributed system, and some limitations on transparent message forwarding within computational complexity bounds, as two main factors. Using a general computational framework, called Sequential Probabilistic Process Calculus (SPPC), we clarify the relationships between the simulation-based security conditions. We also prove general composition theorems in SPPC. Many of the proofs are carried out based on a small set of equivalence principles involving processes and distributed systems. This gives us results that carry over to a variety of computational models.
Last updated:  2006-04-22
Pairing based Mutual Authentication Scheme Using Smart Cards
G. Shailaja, K. Phani Kumar, Ashutosh Saxena
A mutual authentication scheme based on bilinear pairings using smart cards is presented. We propose a novel technique of using two different servers, one for registration and the other for authentication. The scheme is resilient to replay, forgery, man-in-the-middle and insider attacks.
Last updated:  2006-04-22
Simulation-Based Security with Inexhaustible Interactive Turing Machines
Ralf Kuesters
Recently, there has been much interest in extending models for simulation-based security in such a way that the runtime of protocols may depend on the length of their input. Finding such extensions has turned out to be a non-trivial task. In this work, we propose a simple, yet expressive general computational model for systems of Interactive Turing Machines (ITMs) where the runtime of the ITMs may be polynomial per activation and may depend on the length of the input received. One distinguishing feature of our model is that the systems of ITMs that we consider involve a generic mechanism for addressing dynamically generated copies of ITMs. We study properties of such systems and, in particular, show that systems satisfying a certain acyclicity condition run in polynomial time. Based on our general computational model, we state different notions of simulation-based security in a uniform and concise way, study their relationships, and prove a general composition theorem for composing a polynomial number of copies of protocols, where the polynomial is determined by the environment. The simplicity of our model is demonstrated by the fact that many of our results can be proved by mere equational reasoning based on a few equational principles on systems.
Last updated:  2006-04-22
Demonstrating data possession and uncheatable data transfer
Décio Luiz Gazzoni Filho, Paulo Sérgio Licciardi Messeder Barreto
We observe that a certain RSA-based secure hash function is homomorphic. We describe a protocol based on this hash function which prevents `cheating' in a data transfer transaction, while placing little burden on the trusted third party that oversees the protocol. We also describe a cryptographic protocol based on similar principles, through which a prover can demonstrate possession of an arbitrary set of data known to the verifier. The verifier isn't required to have this data at hand during the protocol execution, but rather only a small hash of it. The protocol is also provably as secure as integer factoring.
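To see why homomorphy helps here, consider the classical exponentiation-based hash $h(m) = g^m \bmod N$, which we use as an assumed stand-in for the RSA-based hash the authors refer to: since $h(a+b) = h(a)\,h(b) \bmod N$, a verifier holding only a small hash can check combinations of data blocks supplied by the prover. The Python sketch below, with toy parameters, demonstrates only this homomorphic property, not the paper's protocols.

# Sketch: the multiplicative homomorphism of an exponentiation-based hash
# h(m) = g^m mod N (an assumed stand-in for the hash discussed in the paper).
import random

# Toy parameters: small primes for illustration only; a real modulus must be
# large (2048+ bits) with unknown factorization.
p, q = 10007, 10009
N = p * q
g = 2

def h(m: int) -> int:
    return pow(g, m, N)

a, b = random.randrange(N), random.randrange(N)
# Homomorphic property: hashing a sum corresponds to multiplying hashes mod N.
assert h(a + b) == (h(a) * h(b)) % N

# In a possession proof, the verifier keeps only h(d) for the full datum d and
# can check combinations of blocks without holding the data itself.
print("h(a+b) == h(a)*h(b) mod N:", h(a + b) == (h(a) * h(b)) % N)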
Last updated:  2007-06-08
A method of construction of balanced functions with optimum algebraic immunity
C. Carlet
Because of the recent algebraic attacks, a high algebraic immunity is now an absolutely necessary (but not sufficient) property for Boolean functions used in stream ciphers. A difference of only 1 between the algebraic immunities of two functions can make a crucial difference with respect to algebraic attacks. Very few examples of (balanced) functions with high algebraic immunity have been found so far. These examples seem to be isolated and no method for obtaining such functions is known. In this paper, we introduce a general method for proving that a given function, in any number of variables, has a prescribed algebraic immunity. We deduce an algorithm for generating balanced functions in any odd number of variables, with optimum algebraic immunity. We also give an algorithm, valid for any even number of variables, for constructing (possibly) balanced functions with optimum (or, if this can be useful, with high but not optimal) algebraic immunity. We also give a new example of an infinite class of such functions. We study their Walsh transforms. To this aim, we completely characterize the Walsh transform of the majority function.
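Algebraic immunity can be computed directly for small functions: AI(f) is the least degree $d$ such that $f$ or $f \oplus 1$ has a nonzero annihilator of degree at most $d$, which reduces to a rank computation over GF(2). The Python sketch below is our own illustration (not the authors' algorithm or construction); it checks that the 5-variable majority function, whose Walsh transform the paper characterizes, attains the optimum $\lceil n/2 \rceil = 3$.

# Sketch: algebraic immunity of a small Boolean function by annihilator search.
# A nonzero annihilator g of degree <= d exists iff the evaluation matrix of
# the degree-<=d monomials on the support of the function has deficient rank.
from itertools import combinations

def monomials(n, d):
    # Bitmasks of all monomials of degree <= d on n variables.
    return [sum(1 << i for i in c) for k in range(d + 1) for c in combinations(range(n), k)]

def gf2_rank(rows, width):
    pivot = [0] * width
    rank = 0
    for row in rows:
        for i in reversed(range(width)):
            if not (row >> i) & 1:
                continue
            if pivot[i]:
                row ^= pivot[i]
            else:
                pivot[i], rank = row, rank + 1
                break
    return rank

def has_annihilator(support, n, d):
    mons = monomials(n, d)
    # One row per x in the support: which monomials evaluate to 1 at x.
    rows = [sum(1 << j for j, m in enumerate(mons) if (x & m) == m) for x in support]
    return gf2_rank(rows, len(mons)) < len(mons)

def algebraic_immunity(truth_table, n):
    supp_f = [x for x in range(2 ** n) if truth_table[x] == 1]
    supp_g = [x for x in range(2 ** n) if truth_table[x] == 0]   # support of 1+f
    for d in range(1, n + 1):
        if has_annihilator(supp_f, n, d) or has_annihilator(supp_g, n, d):
            return d

n = 5
majority = [1 if bin(x).count("1") > n // 2 else 0 for x in range(2 ** n)]
print("AI(maj_5) =", algebraic_immunity(majority, n))   # optimum: ceil(5/2) = 3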
Last updated:  2006-05-17
Computational Indistinguishability between Quantum States and Its Cryptographic Application
Akinori Kawachi, Takeshi Koshiba, Harumichi Nishimura, Tomoyuki Yamakami
We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is ``secure'' against any polynomial-time quantum adversary. Our problem QSCDff is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly-used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show three cryptographic properties: (i) QSCDff has the trapdoor property; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is computationally at least as hard in the worst case as the graph automorphism problem. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen plaintext attack of a polynomial-time quantum adversary. We further discuss a generalization of QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme relying on the cryptographic properties of QSCDcyc.
Last updated:  2006-07-31
New Integrated proof Method on Iterated Hash Structure and New Structures
Duo Lei
In this paper, we give an integrated proof method for the security of iterated hash structures. Based on this proof method, we can compare the security of the Merkle-Damgård structure, wide-pipe hash, double-pipe hash and 3C hash, and identify the requirements on the design of the compression function; we also give a new recommended structure. Finally, we give a new hash structure, MAC structure and encryption model, which use the same block cipher round function and key schedule algorithm and are based on the Feistel structure; security proofs for those structures are also given.
Last updated:  2006-04-13
Completeness of Formal Hashes in the Standard Model
Flavio D. Garcia, Peter van Rossum
We study an extension of the well-known Abadi-Rogaway logic with hashes. Previously, we have given a sound computational interpretation of this extension using Canetti's oracle hashing. This paper extends Micciancio and Warinschi's completeness result for the original logic to this setting.
Last updated:  2006-05-29
PUBLIC-KEY CRYPTOSYSTEM BASED ON ISOGENIES
Alexander Rostovtsev, Anton Stolbunov
A new general mathematical problem, suitable for public-key cryptosystems, is proposed: morphism computation in a category of Abelian groups. In connection with elliptic curves over finite fields, the problem becomes the following: compute an isogeny (an algebraic homomorphism) between the elliptic curves given. The problem seems to be hard for solving with a quantum computer. ElGamal public-key encryption and Diffie-Hellman key agreement are proposed for an isogeny cryptosystem. The paper describes theoretical background and a public-key encryption technique, followed by security analysis and consideration of cryptosystem parameters selection. A demonstrative example of encryption is included as well.
Last updated:  2006-05-04
Implementing Cryptographic Pairings on Smartcards
Michael Scott, Neil Costigan, Wesam Abdulwahab
Pairings on elliptic curves are fast coming of age as cryptographic primitives for deployment in new security applications, particularly in the context of implementations of Identity-Based Encryption (IBE). In this paper we describe the implementation of various pairings on a contemporary 32-bit smart-card, the Philips Hi{P}er{S}mart\texttrademark , an instantiation of the MIPS-32 based Smart{MIPS}\texttrademark architecture. Three types of pairing are considered, first the standard Tate pairing on a nonsupersingular curve $E(\F_p)$, second the Ate pairing, also on a nonsupersingular curve $E(\F_p)$, and finally the $\eta_T$ pairing on a supersingular curve $E(\F_{2^m})$. We demonstrate that pairings can be calculated as efficiently as classic cryptographic primitives on this architecture, with a calculation time of as little as 0.15 seconds.
Last updated:  2006-10-04
Blinded Fault Resistant Exponentiation
Guillaume Fumaroli, David Vigilant
As the core operation of many public key cryptosystems, group exponentiation is central to cryptography. Attacks on its implementation in the embedded device setting are hence of great concern. Recently, implementations resisting both simple side-channel analysis and fault attacks were proposed. In this paper, we go further and present an algorithm that also inherently thwarts differential side-channel attacks in any finite abelian group with only limited time and storage overhead.
Last updated:  2006-07-11
Rational Secret Sharing, Revisited
S. Dov Gordon, Jonathan Katz
We consider the problem of secret sharing among $n$ rational players. This problem was introduced by Halpern and Teague (STOC 2004), who claim that a solution is impossible for $n=2$ but show a solution for the case $n\geq 3$. Contrary to their claim, we show a protocol for rational secret sharing among $n=2$ players; our protocol extends to the case $n\geq 3$, where it is simpler than the Halpern-Teague solution and also offers a number of other advantages. We also show how to avoid the continual involvement of the dealer, in either our own protocol or that of Halpern and Teague. Our techniques extend to the case of rational players trying to securely compute an arbitrary function, under certain assumptions on the utilities of the players.
Last updated:  2006-04-11
Linear Sequential Circuit Approximation of Grain and Trivium Stream Ciphers
Shahram Khazaei, Mahdi M. Hasanzadeh, Mohammad S. Kiaei
Grain and Trivium are two hardware oriented synchronous stream ciphers proposed as the simplest candidates to the ECRYPT Stream Cipher Project, both dealing with 80-bit secret keys. In this paper we apply the linear sequential circuit approximation method to evaluate the strength of these stream ciphers against distinguishing attacks. In this approximation method, which was initially introduced by Golic in 1994, linear models are effectively determined for autonomous finite-state machines. We derive linear functions of consecutive key-stream bits which hold with correlation coefficients of about 2^-63.7 and 2^-126 for the Grain and Trivium ciphers, respectively. Then using the concept of the so-called generating function, we turn them into linear functions with correlation coefficient of 2^-29 for Grain and 2^-72 for Trivium. This shows that the Grain output sequence can be distinguished from a purely random sequence, using about 2^58 bits of the output sequence with the same time complexity. However, our attempt fails to find a successful distinguisher for Trivium.
Last updated:  2006-04-10
GVG-RP: A Net-centric Negligibility-based Security Model for Self-organizing Networks
Jiejun Kong
We present a rigorous approach to building a secure self-organizing mobile ad hoc network (MANET). In a highly dynamic environment like MANET, it is impossible to ensure absolute security to protect everything. We have to speak of the "infeasibility" of breaking the security system rather than the "impossibility" of breaking the same system. More formally, security is defined on the concept of "negligible", which is asymptotically sub-polynomial with respect to a pre-defined system parameter $n$. Intuitively, the parameter $n$ in modern cryptography is the key length. The crypto-system's security is broken if the adversary's capability is of exponentials of $n$, and the efficiency of all related algorithms is measured in polynomials of $n$. We adopt the same formal security notion in ad hoc network security research. In network security, the network scale (i.e., number of network members) $N$ replaces the role of key length $n$ in cryptography. If a security scheme can be devised to ensure that the probability of security failure is negligible, then the larger the network scale is or the more complex the network system is, the more secure the network is. In other words, given a negligibility-based protection against a specific security attack, larger or more complex systems are favored over smaller or simpler systems. Intuitively, this is consistent with the evolution theory where more complex entities probabilistically emerge from and likely survive longer than their less complex counterparts. In this paper, we use ``rushing attack'' as the exemplary security attack to disrupt mobile ad hoc routing. We show that ``rushing attack'' is a severe attack against on-demand ad hoc routing schemes. Fortunately, ``localized forwarding community area'' is an available countermeasure to ensure that the failure probability of packet forwarding is negligible. This demonstrates the usefulness of our negligibility-based network security model. We expect to augment the pool of negligibility-based protections and explore the general notion in other types of networks.\\ \emph{Keywords}---Net-centric Security = Negligibility + Scalability
Last updated:  2009-06-10
A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks (extended version)
Francois-Xavier Standaert, Tal G. Malkin, Moti Yung
The fair evaluation and comparison of side-channel attacks and countermeasures has been a long standing open question, limiting further developments in the field. Motivated by this challenge, this work makes a step in this direction and proposes a framework for the analysis of cryptographic implementations that includes a theoretical model and an application methodology. The model is based on commonly accepted hypotheses about side-channels that computations give rise to. It allows quantifying the effect of practically relevant leakage functions with a combination of information theoretic and security metrics, measuring the quality of an implementation and the strength of an adversary, respectively. From a theoretical point of view, we demonstrate formal connections between these metrics and discuss their intuitive meaning. From a practical point of view, the model implies a unified methodology for the analysis of side-channel key recovery attacks. The proposed solution allows getting rid of most of the subjective parameters that were limiting previous specialized and often ad hoc approaches in the evaluation of physically observable devices. It typically determines the extent to which basic (but practically essential) questions such as "How to compare two implementations?" or "How to compare two side-channel adversaries?" can be answered in a sound fashion.
Last updated:  2006-12-05
Trace-Driven Cache Attacks on AES
Onur Acıiçmez, Çetin Kaya Koç
Cache-based side-channel attacks have recently attracted significant attention due to new developments in the field. In this paper, we present efficient trace-driven cache attacks on a widely used implementation of the AES cryptosystem. We also evaluate the cost of the proposed attacks in detail under the assumption of a noiseless environment. We develop an accurate mathematical model that we use in the cost analysis of our attacks. We use two different metrics, specifically, the expected number of necessary traces and the cost of the analysis phase, for the cost evaluation purposes. Each of these metrics represents the cost of a different phase of the attack.
Last updated:  2006-04-09
Defining Strong Privacy for RFID
Ari Juels, Stephen A. Weis
In this work, we consider privacy in Radio Frequency IDentification (RFID) systems. Our contribution is threefold: (1) We propose a simple, formal definition of strong privacy useful for basic analysis of RFID systems, as well as a different (weaker) definition applicable to multi-verifier systems; (2) We apply our definition to reveal vulnerabilities in several proposed privacy-enhancing RFID protocols; and (3) We formally analyze and suggest improvements to ``Hash-Locks,'' one of the first privacy-enhancing RFID protocols in the literature.
Last updated:  2006-04-18
A Challenging but Feasible Blockwise-Adaptive Chosen-Plaintext Attack on SSL
Gregory V. Bard
This paper introduces a chosen-plaintext vulnerability in the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols which enables recovery of low entropy strings such as can be guessed from a likely set of 2--1000 options. SSL and TLS are widely used for securing communication over the Internet. When utilizing block ciphers for encryption, the SSL and TLS standards mandate the use of the cipher block chaining (CBC) mode of encryption which requires an initialization vector (IV) in order to encrypt. Although the first IV used by SSL is a (pseudo)random string which is generated and shared during the initial handshake phase, subsequent IVs used by SSL are chosen in a deterministic, predictable pattern; in particular, the IV of a message is taken to be the final ciphertext block of the immediately-preceding message, and is therefore known to the adversary. The one-channel nature of web proxies, anonymizers or Virtual Private Networks (VPNs), results in all Internet traffic from one machine traveling over the same SSL channel. We show this provides a feasible ``point of entry'' for this attack. Moreover, we show that the location of target data among block boundaries can have a profound impact on the number of guesses required to recover that data, especially in the low-entropy case. The attack in this paper is an application of the blockwise-adaptive chosen-plaintext attack paradigm, and is the only feasible attack to use this paradigm with a reasonable probability of success. The attack will work for all versions of SSL, and TLS version 1.0. This vulnerability and others are closed in TLS 1.1 (which is still in draft status) and OpenSSL after 0.9.6d. It is hoped this paper will encourage the deprecation of SSL and speed the adoption of OpenSSL or TLS 1.1/1.2 when they are finally released.
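The predictable-IV weakness is easy to demonstrate in isolation. The Python sketch below is a self-contained illustration of the underlying CBC property using the pycryptodome library; it does not talk to a real SSL/TLS stack, and the key, secret and record layout are hypothetical. The attacker confirms a guess g for an earlier one-block secret by having g XOR IV_old XOR IV_new encrypted, where IV_new is the last ciphertext block already observed on the wire.

# Sketch: verifying a plaintext guess under CBC with chained (predictable) IVs,
# the core property behind the blockwise-adaptive chosen-plaintext attack.
# Standalone demo with pycryptodome; it does not touch a real SSL/TLS stack.
from Crypto.Cipher import AES      # pip install pycryptodome
import os

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def send_record(key, iv, plaintext):
    # One CBC-encrypted record; SSL 3.0 / TLS 1.0 reuse the last ciphertext
    # block of this record as the IV of the next record.
    ct = AES.new(key, AES.MODE_CBC, iv=iv).encrypt(plaintext)
    return ct, ct[-BLOCK:]

key = os.urandom(16)
iv = os.urandom(16)                           # only the first IV is random

# 1. The victim sends a low-entropy secret (one block) over the channel.
secret = b"PIN=0417".ljust(BLOCK, b"\x00")    # hypothetical low-entropy value
iv_old = iv
ct, iv = send_record(key, iv_old, secret)
target_block = ct[:BLOCK]                     # ciphertext of the secret block

# 2. The attacker injects probe = guess XOR iv_old XOR iv_new into the channel.
#    CBC XORs the probe with iv_new before encrypting, so the cipher input is
#    guess XOR iv_old, and the output equals target_block iff guess == secret.
for guess in (b"PIN=9999".ljust(BLOCK, b"\x00"), secret):
    probe = xor(xor(guess, iv_old), iv)       # iv currently holds the predictable next IV
    ct, iv = send_record(key, iv, probe)
    if ct[:BLOCK] == target_block:
        print("guess confirmed:", guess)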
Last updated:  2006-05-12
The Design Principle of Hash Function with Merkle-Damgård Construction
Duo Lei, Da Lin, Li Chao, Keqin Feng, Longjiang Qu
The paper discusses the security of hash functions with the Merkle-Damgård construction and provides complexity bounds for finding a collision and a preimage of the hash function, based on the conditional probability of the compression function $y=F(x,k)$. We conclude that in the Merkle-Damgård construction the requirement of free-start collision resistance and free-start preimage resistance on the compression function is not necessary, and it is enough if the compression function has the properties of fix-start collision resistance and fix-start preimage resistance. However, the conditional probabilities $P_{Y|X=x}(y)$ and $P_{Y|K=k}(y)$ of the compression function $y=F(x,k)$ have much influence on the security of the hash function. The best design of the compression function should have the property that $P_{Y|X=x}(y)$ and $P_{Y|K=k}(y)$ are both uniformly distributed for all $x$ and $k$. At the end of the paper, we discuss block cipher based hash functions and point out that, among the 20 schemes selected by PGV\cite{Re:Preneel} and BPS\cite{Re:JBlack}, the best scheme is the block cipher itself, if the block cipher has perfect security and perfect key distribution.
Last updated:  2006-04-20
Identity Based Strong Designated Verifier Signature Scheme
K. Phani Kumar, G. Shailaja, Ashutosh Saxena
Identity based cryptosystems simplify the key management and revocation problem. Here we propose an Identity Based Strong Designated Verifier Signature (IBSDVS) scheme using bilinear pairings. The Designated Verifier Signature scheme described in [10] is identity based, but it suffers from delegatability, as pointed out in [4]. We analyse the security of the scheme and show that the problem of delegatability does not exist in our scheme.
Last updated:  2006-04-03
Low Complexity Bit-Parallel Square Root Computation over GF($2^m$) for all Trinomials
Francisco Rodríguez-Henríquez, Guillermo Morales-Luna, Julio López-Hernández
In this contribution we introduce a low-complexity bit-parallel algorithm for computing square roots over binary extension fields. Our proposed method can be applied for any type of irreducible polynomials. We derive explicit formulae for the space and time complexities associated to the square root operator when working with binary extension fields generated using irreducible trinomials. We show that for those finite fields, it is possible to compute the square root of an arbitrary field element with equal or better hardware efficiency than the one associated to the field squaring operation. Furthermore, a practical application of the square root operator in the domain of field exponentiation computation is presented. It is shown that by using as building blocks squarers, multipliers and square root blocks, a parallel version of the classical square-and-multiply exponentiation algorithm can be obtained. A hardware implementation of that parallel version may provide a speedup of up to 50\% when compared with the traditional version.
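Recall that in GF($2^m$) every element has a unique square root, namely $\sqrt{a} = a^{2^{m-1}}$, i.e. $a$ squared $m-1$ times; the contribution of the paper is a direct bit-parallel formula for this map when the field is defined by a trinomial. The Python sketch below is our own software illustration (not the authors' hardware formulation) using the standard trinomial $x^{233} + x^{74} + 1$; it simply checks the identity $\sqrt{a}^2 = a$.

# Sketch: square roots in GF(2^m) via sqrt(a) = a^(2^(m-1)), i.e. squaring
# a total of m-1 times.  Field elements are polynomials over GF(2) packed
# into Python ints; the field is defined by the trinomial x^233 + x^74 + 1.
import random

M = 233
MOD = (1 << 233) | (1 << 74) | 1     # x^233 + x^74 + 1

def gf2m_mul(a, b):
    # Carry-less multiplication followed by reduction modulo MOD.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    # Reduce r (degree <= 2*M - 2) modulo the trinomial.
    for i in range(2 * M - 2, M - 1, -1):
        if (r >> i) & 1:
            r ^= MOD << (i - M)
    return r

def gf2m_sqrt(a):
    # Squaring m-1 times computes a^(2^(m-1)), the unique square root.
    for _ in range(M - 1):
        a = gf2m_mul(a, a)
    return a

a = random.getrandbits(M)
s = gf2m_sqrt(a)
assert gf2m_mul(s, s) == a
print("square root verified for a random element of GF(2^233)")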
Last updated:  2007-05-01
Conditional Reactive Simulatability
Michael Backes, Markus Duermuth, Dennis Hofheinz, Ralf Kuesters
Simulatability has established itself as a salient notion for defining and proving the security of cryptographic protocols since it entails strong security and compositionality guarantees, which are achieved by universally quantifying over all environmental behaviors of the analyzed protocol. As a consequence, however, protocols that are secure except for certain environmental behaviors are not simulatable, even if these behaviors are efficiently identifiable and thus can be prevented by the surrounding protocol. We propose a relaxation of simulatability by conditioning the permitted environmental behaviors, i.e., simulation is only required for environmental behaviors that fulfill explicitly stated constraints. This yields a more fine-grained security definition that is achievable i) for several protocols for which unconditional simulatability is too strict a notion or ii) at lower cost for the underlying cryptographic primitives. Although imposing restrictions on the environment destroys unconditional composability in general, we show that the composition of a large class of conditionally simulatable protocols yields protocols that are again simulatable under suitable conditions. This even holds for the case of cyclic assume-guarantee conditions where protocols only guarantee suitable behavior if they themselves are offered certain guarantees. Furthermore, composing several commonly investigated protocol classes with conditionally simulatable subprotocols yields protocols that are again simulatable in the standard, unconditional sense.
Last updated:  2006-04-29
Provably Secure Ubiquitous Systems: Universally Composable RFID Authentication Protocols
Mike Burmester, Tri van Le, Breno de Medeiros
This paper examines two unlinkably anonymous, simple RFID identification protocols that require only the ability to evaluate hash functions and generate random values, and that are provably secure against Byzantine adversaries. The main contribution is a universally composable security model tuned for RFID applications. By making specific setup, communication, and concurrency assumptions that are realistic in the RFID application setting, we arrive at a model that guarantees strong security and availability properties, while still permitting the design of practical RFID protocols. We show that the two previously proposed protocols are provably secure within the new security model. Our proofs do not employ random oracles---the protocols are shown to be secure in the standard model under the assumption of existence of pseudo-random function families.
Last updated:  2006-04-03
Simulatable Security and Polynomially Bounded Concurrent Composition
Dennis Hofheinz, Dominique Unruh
Simulatable security is a security notion for multi-party protocols that implies strong composability features. The main definitional flavours of simulatable security are standard simulatability, universal simulatability, and black-box simulatability. All three come in "computational," "statistical" and "perfect" subflavours indicating the considered adversarial power. Universal and black-box simulatability, in all of their subflavours, are already known to guarantee that the concurrent composition even of a polynomial number of secure protocols stays secure. We show that computational standard simulatability does not allow for secure concurrent composition of polynomially many protocols, but we also show that statistical standard simulatability does. The first result assumes the existence of an interesting cryptographic tool (namely time-lock puzzles), and its proof employs a cryptographic multi-party computation in an interesting and unconventional way.
Last updated:  2006-08-30
Some Remarks on the TKIP Key Mixing Function of IEEE 802.11i
Uncategorized
Wei Han, Dong Zheng, Ke-fei Chen
Temporal Key Integrity Protocol (TKIP) is a sub-protocol of IEEE 802.11i. TKIP remedies some security flaws in the Wired Equivalent Privacy (WEP) protocol. TKIP adds four new algorithms to WEP: a Message Integrity Code (MIC) called Michael, an Initialization Vector (IV) sequencing discipline, a key mixing function and a re-keying mechanism. The key mixing function, also called temporal key hash, de-correlates the IVs from weak keys. Some cryptographic properties of the S-box used in the key mixing function are investigated in this paper, such as regularity, avalanche effect, differential uniformity and linear structure. V. Moen, H. Raddum and K. J. Hole pointed out that there exists a temporal key recovery attack on the TKIP key mixing function. In this paper a method is proposed to defend against the attack, and the resulting effect on performance is also discussed.
Last updated:  2006-08-15
On the existence of distortion maps on ordinary elliptic curves
Uncategorized
Denis Charles
Distortion maps allow one to solve the Decision Diffie-Hellman problem on subgroups of points on an elliptic curve. In the case of ordinary elliptic curves over finite fields, it is known that in most cases there are no distortion maps for the Frobenius eigenspaces. In this article we characterize the existence of distortion maps in the remaining cases.
Last updated:  2006-04-03
A New Cryptanalytic Time/Memory/Data Trade-off Algorithm
Uncategorized
Sourav Mukhopadhyay, Palash Sarkar
In 1980, Hellman introduced a time/memory trade-off (TMTO) algorithm satisfying the TMTO curve $TM^2=N^2$, where $T$ is the online time, $M$ is the memory and $N$ is the size of the search space. Later work by Biryukov-Shamir incorporated multiple data to obtain the curve $TM^2D^2=N^2$, where $D$ is the number of data points. In this paper, we describe a new table structure obtained by combining Hellman's structure with a structure proposed by Oechslin. Using the new table structure, we design a new multiple data TMTO algorithm both with and without the DP method. The TMTO curve for the new algorithm is obtained to be $T^3M^7D^8=N^7$. This curve is based on a conjecture on the number of distinct points covered by the new table. Support for the conjecture has been obtained through empirical observations. For $D>N^{1/4}$, we show that the trade-offs obtained by our method are better than the trade-offs obtained by the BS method.
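For readers who have not seen the table structures being combined, the following Python sketch builds a single classic Hellman table over a toy search space and runs the online lookup. It covers neither the Oechslin-style tables, the DP method, nor the new combined structure of the paper; the one-way function and the chain parameters are arbitrary illustrative choices.

```python
# Minimal sketch of one classic Hellman time/memory trade-off table.
# The one-way function, search-space size and chain parameters are toys.
import hashlib

N_BITS = 20                 # toy search space of N = 2^20 points
N = 1 << N_BITS
CHAIN_LEN = 256             # t: chain length
NUM_CHAINS = 4096           # m: chains per table

def f(x: int) -> int:
    """Toy one-way function mapping the search space to itself."""
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % N

def build_table() -> dict:
    """Offline phase: walk each chain and store only (end -> start)."""
    table = {}
    for start in range(NUM_CHAINS):   # a real table would use random starts
        x = start
        for _ in range(CHAIN_LEN):
            x = f(x)
        table[x] = start
    return table

def invert(table: dict, y: int):
    """Online phase: try to find some x with f(x) = y using one table."""
    z = y
    for i in range(CHAIN_LEN):
        if z in table:                        # the chain may end here
            x = table[z]                      # rebuild it from its start
            for _ in range(CHAIN_LEN - 1 - i):
                x = f(x)
            if f(x) == y:                     # guard against false alarms
                return x
        z = f(z)
    return None   # a single table covers only part of the search space

if __name__ == "__main__":
    table = build_table()
    target = f(123456)
    x = invert(table, target)
    print("preimage found:", x is not None and f(x) == target)
```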
Last updated:  2006-03-30
ECGSC: Elliptic Curve based Generalized Signcryption Scheme
Yiliang Han, Xiaoyuan Yang
Signcryption is a new cryptographic primitive that simultaneously fulfills the functions of both signature and encryption. The definition of generalized signcryption is first proposed in this paper. Generalized signcryption has the special feature of providing confidentiality or authenticity separately, depending on the inputs, which makes it more useful than ordinary signcryption. Based on ECDSA, a signcryption scheme called ECGSC is designed. It is equivalent to an AtE(OTP, MAC) encryption scheme or to ECDSA when one of the parties is absent. A third party can publicly verify the signcryption text using the ECDSA verification method. Security properties are proven in the random oracle model: confidentiality (CUF-CPA), unforgeability (UF-CMA) and non-repudiation. Compared with other schemes, ECGSC offers a 78% reduction in computational cost for typical security parameters in high-level security applications.
Last updated:  2006-06-16
Fast computation of Tate pairing on general divisors of genus 3 hyperelliptic curves
Uncategorized
Eunjeong Lee, Hyang-Sook Lee, Yoonjin Lee
For the Tate pairing computation over hyperelliptic curves, there are developments by Duursma-Lee and Barreto et al., and those computations are focused on {\it degenerate} divisors. As divisors are generally not in degenerate form, it is necessary to find algorithms on {\it general} divisors for the Tate pairing computation. In this paper, we present two efficient methods for computing the Tate pairing over divisor class groups of the hyperelliptic curves $y^2 = x^p - x + d, ~ d = \pm 1$ of genus 3. First, we provide the {\it pointwise} method, which is a generalization of the previous developments by Duursma-Lee and Barreto et al. In the second method, we use the {\it resultant} for the Tate pairing computation. According to our theoretical analysis of the complexity, the {\it resultant} method is $48.5 \%$ faster than the pointwise method in the best case and $15.3 \%$ faster in the worst case, and our implementation result shows that the {\it resultant} method is much faster than the pointwise method. These two methods are completely general in the sense that they work for general divisors with Mumford representation, and they provide very explicit algorithms.
Last updated:  2006-03-29
Fast Elliptic Scalar Multiplication using New Double-base Chain and Point Halving
K. W. Wong, Edward C. W. Lee, L. M. Cheng, Xiaofeng Liao
The fast implementation of elliptic curve cryptosystems relies on the efficient computation of scalar multiplication. Based on the double-base chain representation of the scalar using powers of 2 and 3, we propose a new representation with powers of ½ and 3 instead. The efficient point halving operation can thus be incorporated in the new double-base chain to achieve fast scalar multiplication. Experimental results show that our approach leads to a lower complexity, which contributes to the efficient implementation of elliptic curve cryptosystems.
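To make the underlying number system concrete, here is a small Python sketch of a greedy double-base decomposition of a scalar into terms $2^a 3^b$. It illustrates neither the authors' powers-of-½ (point halving) variant nor the curve arithmetic itself, and the greedy strategy is just one common way of obtaining such a representation.

```python
# Minimal sketch of a greedy double-base decomposition k = sum(2^a * 3^b).
# This is the plain {2,3} number system, not the paper's {1/2,3} variant.
def greedy_double_base(k: int):
    """Greedily peel off the largest term 2^a * 3^b not exceeding k."""
    terms = []
    while k > 0:
        best, best_ab = 1, (0, 0)
        power3, b = 1, 0
        while power3 <= k:
            a = (k // power3).bit_length() - 1   # largest a with 2^a * 3^b <= k
            candidate = (1 << a) * power3
            if candidate > best:
                best, best_ab = candidate, (a, b)
            power3 *= 3
            b += 1
        terms.append(best_ab)
        k -= best
    return terms

if __name__ == "__main__":
    k = 314159
    terms = greedy_double_base(k)
    assert sum((1 << a) * 3 ** b for a, b in terms) == k
    print(k, "=", " + ".join(f"2^{a}*3^{b}" for a, b in terms))
```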
Last updated:  2011-10-03
Designated Confirmer Signatures Revisited
Uncategorized
Douglas Wikström
Previous definitions of designated confirmer signatures in the literature are incomplete, and the proposed security definitions fail to capture key security properties, such as unforgeability against malicious confirmers and non-transferability. We propose new definitions. Previous schemes rely on the random oracle model or set-up assumptions, or are secure with respect to relaxed security definitions. We construct a practical scheme that is provably secure with respect to our security definition under the strong RSA-assumption, the decision composite residuosity assumption, and the decision Diffie-Hellman assumption. To achieve our results we introduce several new relaxations of standard notions. We expect these techniques to be useful in the construction and analysis of other efficient cryptographic schemes.
Last updated:  2008-11-19
Chosen-Ciphertext Secure Identity-Based Encryption in the Standard Model with short Ciphertexts
Eike Kiltz
We describe a practical identity-based encryption scheme that is secure in the standard model against chosen-ciphertext (CCA2) attacks. Security is based on an assumption comparable to (but slightly stronger than) Bilinear Decisional Diffie-Hellman (BDDH). A comparison shows that our construction outperforms all known identity-based encryption schemes in the standard model and its performance is even comparable with that of the random-oracle based Boneh/Franklin IBE scheme. Our proposed IBE scheme furthermore has the property that it fulfills some notion of ``redundancy-freeness'', i.e. the encryption algorithm is not only a probabilistic injection but also a surjection. As a consequence the ciphertext overhead is nearly optimal: to encrypt $k$ bit messages for $k$ bit identities and with $k$ bit randomness we get $3k$ bit ciphertexts to guarantee (roughly) $k$ bits of security.
Last updated:  2006-03-29
Counting Prime Numbers with Short Binary Signed Representation
José de Jesús Angel Angel, Guillermo Morales-Luna
Modular arithmetic with prime moduli has been crucial in present day cryptography. The primes of Mersenne, Solinas, Crandall and the so called IKE-MODP have been extensively used in efficient implementations. In this paper we study the density of primes with binary signed representation involving a small number of non-zero $\pm 1$-digits.
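For concreteness, the sketch below computes the non-adjacent form (NAF), a standard binary signed representation with digits in $\{-1,0,1\}$, and counts its non-zero digits. The sample primes (a Mersenne prime, the Curve25519 prime and the NIST P-192 prime) are common illustrative choices and are not taken from the paper.

```python
# Minimal sketch: NAF (non-adjacent form) of an integer and the number of
# non-zero +/-1 digits it uses.  The example primes are illustrative only.
def naf(n: int):
    """Return the NAF digits of n, least significant digit first."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def signed_weight(n: int) -> int:
    """Number of non-zero digits in the NAF of n."""
    return sum(1 for d in naf(n) if d != 0)

if __name__ == "__main__":
    for p in (2**127 - 1, 2**255 - 19, 2**192 - 2**64 - 1):
        digits = naf(p)
        assert sum(d << i for i, d in enumerate(digits)) == p   # sanity check
        print(f"{len(digits)}-digit NAF with {signed_weight(p)} non-zero digits")
```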
Last updated:  2006-03-27
Key Privacy for Identity Based Encryption
Jason E. Holt
We define key privacy for IBE systems in terms of two properties, indistinguishability under chosen identity attack, and indistinguishability under chosen key generator attack. Further, we show that the BasicIdent system in the Boneh/Franklin IBE has these properties under chosen plaintext attack.
Last updated:  2006-03-28
Repairing Attacks on a Password-Based Group Key Agreement
Ratna Dutta, Rana Barua
From a design point of view, it is not a trivial task to convert a group key agreement protocol into the password-based setting, where the members of the group share only a human-memorable weak password and the system may not have any secure public key infrastructure. Security analysis against dictionary attacks is the other side of the coin. The low entropy of human-memorable passwords may enable an adversary to mount off-line dictionary attacks if careful approaches are not taken in designing the protocol. Recently, Kim et al. proposed a very efficient provably secure group key agreement protocol KLL, the security of which relies on the Computational Diffie-Hellman (CDH) assumption in the presence of random oracles. Dutta-Barua embedded the protocol KLL into the password-based environment, yielding the protocol DB-PWD. Abdalla et al. detected certain flaws in the protocol DB-PWD. In this paper, we take suitable measures to overcome these attacks. We introduce a protocol MDB-PWD, an improved variant of the protocol DB-PWD, and analyze its security in the security framework formalized by Bellare et al. in both the ideal cipher model and the random oracle model under the CDH assumption.
Last updated:  2006-05-22
On construction of non-normal Boolean functions
Uncategorized
Sugata Gangopadhyay, Deepmala Sharma
Given two non-weakly $k$-normal Boolean functions on $n$ variables a method is proposed to construct a non-weakly $(k+1)$-normal Boolean function on $(n+2)$ variables.
Last updated:  2006-03-29
Conjectured Security of the ANSI-NIST Elliptic Curve RNG
Daniel R. L. Brown
An elliptic curve random number generator (ECRNG) has been proposed in ANSI and NIST draft standards. This paper proves that, if three conjectures are true, then the ECRNG is secure. The three conjectures are hardness of the elliptic curve decisional Diffie-Hellman problem and the hardness of two newer problems, the x-logarithm problem and the truncated point problem.
Last updated:  2006-09-26
Second Preimages for Iterated Hash Functions Based on a b-Block Bypass
Uncategorized
Mario Lamberger, Norbert Pramstaller, Vincent Rijmen
In this article, we present a second preimage attack on a double block-length hash proposal presented at FSE 2006. If the hash function is instantiated with DESX as underlying block cipher, we are able to construct second preimages deterministically. Nevertheless, this second preimage attack does not render the hash scheme insecure. For the hash scheme, we only show that it should not be instantiated with DESX but AES should rather be used. However, we use the instantiation of this hash scheme with DESX to introduce a new property of iterated hash functions, namely a so-called b-block bypass. We will show that if an iterated hash function possesses a b-block bypass, then this implies that second preimages can be constructed. Additionally, the attacker has more degrees of freedom for constructing the second preimage.
Last updated:  2006-03-26
Fast exponentiation via prime finite field isomorphism
Alexander Rostovtsev
Raising a fixed element of a prime-order group to an arbitrary power is the main operation specified by the digital signature algorithms DSA and ECDSA. Fast exponentiation algorithms are proposed. The algorithms use a number system with algebraic integer base (-2)^(1/4), 2^(1/4). The prime group order r can be factored as r = pi*pi1 in the Euclidean rings Z[Sqrt[-2]], Z[Sqrt[2]] by the Pollard and Schnorr algorithm. Further factorization of the prime quadratic divisor as pi = rho*rho1 in the rings Z[(-2)^(1/4)], Z[2^(1/4)] can be done similarly. The finite field of r elements is represented as the quotient ring Z[(-2)^(1/4)]/(rho), Z[2^(1/4)]/(rho). The integer exponent k is reduced in the corresponding quotient ring by minimization of its absolute norm. The algorithms can be used for fast exponentiation in an arbitrary cyclic group if its order can be factored in the corresponding number rings. For a window size of 4 bits, this approach speeds up elliptic curve digital signature verification by a factor of 2.5 compared to known methods with the same window size.
Last updated:  2008-01-09
Tate pairing for $y^{2}=x^{5}-\alpha x$ in Characteristic Five
Uncategorized
Ryuichi Harasawa, Yutaka Sueyoshi, Aichi Kudo
In this paper, for the genus-$2$ hyperelliptic curve $y^{2}=x^{5} -\alpha x$ ($\alpha = \pm2$) defined over finite fields of characteristic five, we construct a distortion map explicitly, and show the map indeed gives an input for which the value of the Tate pairing is not trivial. Next we describe a computation of the Tate pairing by using the proposed distortion map. Furthermore, we also see that this type of curve is equipped with a simple quintuple operation on the Jacobian group, which leads to giving an improvement for computing the Tate pairing. We indeed show that, for the computation of the Tate pairing for genus-$2$ hyperelliptic curves, our method is about twice as efficient as a previous work.
Last updated:  2006-04-01
A New Construction of Time Capsule Signature
Miaomiao Zhang, Gongliang Chen, Jianhua Li, Licheng Wang, Haifeng Qian
In this paper we introduce a new approach to constructing time capsule signatures. Our new construction captures the basic requirements defined by Dodis \emph{et al.}, and it is also very straightforward and flexible. A time capsule signature provides an elegant way to produce a ``future signature'' that becomes valid from a specific future time $t$, when a trusted third party (called the \textit{Time Server}) publishes some trapdoor information associated with the time $t$. It also has many other advantages. Our work includes a developed security model for time capsule signatures, a novel construction based on the bipartite ring signature, which is proven secure in the random oracle model, and a concrete realization of the scheme.
Last updated:  2006-03-22
Entity Authentication and Authenticated Key Exchange with Tree Parity Machines
Markus Volkmer
This paper provides the first analytical and practical treatment of entity authentication and authenticated key exchange in the framework of Tree Parity Machines (TPMs). The interaction of TPMs has been discussed as an alternative concept for secure symmetric key exchange. Several attacks have been proposed on the non-authenticated principle. Adding some extra entity authentication method is straightforward but lies outside the concept of using TPMs. A simple and consistent implicit entity authentication from within the key exchange concept is suggested as an extension to the key exchange protocol. A proof of the soundness of the proposed entity authentication is given. Furthermore, in addition to averting a man-in-the-middle attack, the currently known attacks on the non-authenticated symmetric key exchange principle using TPMs can provably be averted for the authenticated variant.
Last updated:  2006-03-22
Attacking LCCC Batch Verification of RSA Signatures
Martin Stanek
Batch verification of digital signatures is used to improve the computational complexity when a large number of digital signatures must be verified. Lee et al. [2] proposed a new method to efficiently identify bad signatures in batches. We show that the method is flawed.
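For background, the sketch below shows the generic product test that RSA batch verification builds on, together with the classic pitfall that two perturbed signatures whose errors cancel still pass the batch test. This is a generic illustration with toy parameters, not the LCCC method or the specific attack described in the note.

```python
# Generic RSA batch verification product test (illustrative toy parameters,
# *not* the LCCC method): accept iff (prod s_i)^e == prod H(m_i) mod n.
# Two bad signatures with cancelling errors still pass the batch test.
import hashlib
import math
from sympy import randprime

def hash_to_int(msg: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

e = 65537
while True:   # toy RSA key, far too small for real use
    p, q = randprime(2**63, 2**64), randprime(2**63, 2**64)
    n, phi = p * q, (p - 1) * (q - 1)
    if p != q and math.gcd(e, phi) == 1:
        break
d = pow(e, -1, phi)

msgs = [f"message {i}".encode() for i in range(8)]
sigs = [pow(hash_to_int(m, n), d, n) for m in msgs]

def batch_verify(msgs, sigs) -> bool:
    lhs = pow(math.prod(sigs) % n, e, n)
    rhs = math.prod(hash_to_int(m, n) for m in msgs) % n
    return lhs == rhs

print(batch_verify(msgs, sigs))                       # True: all signatures valid

c = 123456789                                         # any unit modulo n
bad = list(sigs)
bad[0] = (bad[0] * c) % n                             # two invalid signatures ...
bad[1] = (bad[1] * pow(c, -1, n)) % n                 # ... whose errors cancel
print(batch_verify(msgs, bad))                        # True again: the batch test is fooled
print(pow(bad[0], e, n) == hash_to_int(msgs[0], n))   # False: individually invalid
```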
Last updated:  2006-06-16
The Eta Pairing Revisited
F. Hess, N. P. Smart, F. Vercauteren
In this paper we simplify and extend the Eta pairing, originally discovered in the setting of supersingular curves by Barreto et al., to ordinary curves. Furthermore, we show that by swapping the arguments of the Eta pairing, one obtains a very efficient algorithm resulting in a speed-up of a factor of around six over the usual Tate pairing, in the case of curves which have large security parameters, complex multiplication by $D=-3$, and when the trace of Frobenius is chosen to be suitably small. Other, more minor savings are obtained for more general curves.
Last updated:  2006-08-26
A Simpler Sieving Device: Combining ECM and TWIRL
Willi Geiselmann, Fabian Januszewski, Hubert Koepfer, Jan Pelzl, Rainer Steinwandt
A main obstacle in manufacturing the TWIRL device for realizing the sieving step of the Number Field Sieve is the sophisticated chip layout. Especially the logic for logging and recovering large prime factors found during sieving adds significantly to the layout complexity. We describe a device building on the Elliptic Curve Method (ECM) that for parameters of interest enables the replacement of the complete logging part in TWIRL by an off-wafer postprocessing. The postprocessing is done in real time, leaving the total sieving time basically unchanged. The proposed device is an optimized ECM implementation building on curves chosen to cope with factor sizes as expected in the output of TWIRL. According to our preliminary analysis, for the relation collection step expected for a 1024-bit factorization our design is realizable with current fab technology at very moderate cost. The proposed ECM engine also finds the vast majority of the needed cofactor factorizations. In summary, we expect the proposed device to enable a significant decrease in TWIRL's layout complexity and therewith its cost.
Last updated:  2006-03-22
Efficient Public Key Encryption with Keyword Search Schemes from Pairings
Chunxiang Gu, Yuefei Zhu, Yajuan Zhang
Public key encryption with keyword search (PEKS) enables a user Alice to send a secret key $T_W$ to a server that will enable the server to locate all encrypted messages containing the keyword $W$, but learn nothing else. In this paper, we propose a new PEKS scheme based on pairings. There is no pairing operation involved in the encryption procedure. We then provide further discussion on removing the secure channel from PEKS, and present an efficient secure-channel-free PEKS scheme. Our two new schemes can be proved secure in the random oracle model, under the appropriate computational assumptions.
Last updated:  2006-03-19
The number field sieve for integers of low weight
Oliver Schirokauer
We define the weight of an integer $N$ to be the smallest $w$ such that $N$ can be represented as $\sum_{i=1}^w \epsilon_i 2^{c_i}$, with $\epsilon_1,\ldots,\epsilon_w\in\{1,-1\}$. Since arithmetic modulo a prime of low weight is particularly efficient, it is tempting to use such primes in cryptographic protocols. In this paper we consider the difficulty of the discrete logarithm problem modulo a prime $N$ of low weight, as well as the difficulty of factoring an integer $N$ of low weight. We describe a version of the number field sieve which handles both problems. Our analysis leads to the conjecture that, for $N\to\infty$ with $w$ fixed, the worst-case running time of the method is bounded above by ${\rm exp}((c+o(1))(\log\,N)^{1/3}(\log\log\,N)^{2/3})$ with $c<((32/9)(2w-3)/(w-1))^{1/3}$ and below by the same expression with $c=(32/9)^{1/3}((\sqrt{2}w-2\sqrt{2}+1)/(w-1))^{2/3}.$ It also reveals that on average the method performs significantly better than it does in the worst case. We consider all the examples given in a recent paper of Koblitz and Menezes and demonstrate that in every case but one, our algorithm runs faster than the standard versions of the number field sieve.
Last updated:  2006-03-19
Further Refinement of Pairing Computation Based on Miller's Algorithm
Chao-Liang Liu, Gwoboa Horng, Te-Yu Chen
In 2006, Blake, Murty and Xu proposed three refinements to Miller's algorithm for computing Weil/Tate Pairings. In this paper we extend their work and propose a generalized algorithm, which integrates their first two algorithms. Our approach is to pre-organize the binary representation of the involved integer to the best cases of Blake's algorithms. Further, our refinement is more suitable for Solinas numbers than theirs. We analyze our algorithm and show that our refinement can perform better than the original algorithms.
Last updated:  2006-04-17
Tunnels in Hash Functions: MD5 Collisions Within a Minute
Vlastimil Klima
In this paper we introduce the new idea of tunneling of hash functions. In some sense tunnels replace multi-message modification methods and exponentially accelerate collision search. We describe several tunnels in the hash function MD5. Using them we find an MD5 collision in roughly one minute on a standard notebook PC (Intel Pentium, 1.6 GHz). The method works for any initializing value. Tunneling is a general idea, which can be used for finding collisions of other hash functions, such as SHA-1 and SHA-2. We show several capabilities of tunnels. A program, whose source code is available on the project homepage, experimentally verified the method. The revised version of this paper contains an appendix with the description of more tunnels. These tunnels further decrease the average time of an MD5 collision to 31 seconds. On a PC with an Intel Pentium 4 (3.2 GHz) it is 17 seconds on average.
Last updated:  2006-03-19
Fast Collision Attack on MD5
Marc Stevens
In this paper, we present an improved attack algorithm to find two-block collisions of the hash function MD5. The attack uses the same differential path of MD5 and the set of sufficient conditions that was presented by Wang et al. We present a new technique which allows us to deterministically fulfill restrictions to properly rotate the differentials in the first round. We will present a new algorithm to find the first block and we will use an algorithm of Klima to find the second block. To optimize the inner loop of these algorithms we will optimize the set of sufficient conditions. We also show that the initial value used for the attack has a large influence on the attack complexity. Therefore we recommend 2 conditions on the initial value of the attack to avoid very hard situations if one has some freedom in choosing this initial value. Our attack can be done in an average of about 1 minute (avg. complexity $2^{32.3}$) on a 3 GHz Pentium 4 for these random recommended initial values. For arbitrary random initial values the average is about 5 minutes (avg. complexity $2^{34.1}$). With a reasonable probability a collision is found within mere seconds, allowing for instance an attack during the execution of a protocol.
Last updated:  2006-10-05
Security of VSH in the Real World
Uncategorized
Markku-Juhani O. Saarinen
In Eurocrypt 2006, Contini, Lenstra, and Steinfeld proposed a new hash function primitive, VSH, very smooth hash. In this brief paper we offer commentary on the resistance of VSH against some standard cryptanalytic attacks, including preimage attacks and collision search for a truncated VSH. Although the authors of VSH claim only collision resistance, we show why one must be very careful when using VSH in cryptographic engineering, where additional security properties are often required.
Last updated:  2006-03-19
Efficient Blind and Partially Blind Signatures Without Random Oracles
Tatsuaki Okamoto
This paper proposes a new efficient signature scheme from bilinear maps that is secure in the standard model (i.e., without the random oracle model). Our signature scheme is more effective in many applications (e.g., blind signatures, group signatures, anonymous credentials etc.) than the existing secure signature schemes in the standard model. As typical applications of our signature scheme, this paper presents efficient blind signatures and partially blind signatures that are secure in the standard model. Here, partially blind signatures are a generalization of blind signatures (i.e., blind signatures are a special case of partially blind signatures) and have many applications including electronic cash and voting. Our blind signature scheme is much more efficient than the existing secure blind signature schemes in the standard model such as the Camenisch-Koprowski-Warinsch and Juels-Luby-Ostrovsky schemes, and is also almost as efficient as the most efficient blind signature schemes whose security has been analyzed heuristically or in the random oracle model. Our partially blind signature scheme is the first one that is secure in the standard model and it is very efficient (as efficient as our blind signatures). The security proof of our blind and partially blind signature schemes requires the 2SDH assumption, a variant of the SDH assumption introduced by Boneh and Boyen, and the 2SDH-IND assumption. This paper also presents an efficient way to convert our (partially) blind signature scheme in the standard model to a scheme secure for a concurrent run of users in the common reference string (CRS) model. Finally, we present a blind signature scheme based on the Waters signature scheme.
Last updated:  2006-03-19
Information-theoretic analysis of coating PUFs
Uncategorized
B. Skoric, S. Maubach, T. Kevenaar, P. Tuyls
Physical Uncloneable Functions (PUFs) can be used as a cost-effective means to store cryptographic key material in an uncloneable way. In coating PUFs, keys are generated from capacitance measurements of a coating containing many randomly distributed particles with different dielectric constants. We introduce a physical model of coating PUFs by simplifying the capacitance sensors to a parallel plate geometry. We estimate the amount of information that can be extracted from the coating. We show that the inherent entropy is proportional to $\sqrt{n}(\log n)^{3/2}$, where $n$ is the number of particles that fit between the capacitor plates in a straight line. However, measurement noise may severely reduce the amount of information that can actually be extracted in practice. In the noisy regime the number of extractable bits is in fact a decreasing function of $n$. We derive an optimal value for $n$ as a function of the noise amplitude, the PUF geometry and the dielectric constants.
Last updated:  2006-09-23
A Shorter Group Signature with Verifier-Location Revocation and Backward Unlinkability
Uncategorized
Zhou Sujing, Lin Dongdai
Group signatures are generalized credential/member authentication schemes with wide applications, such as Trusted Computing. The membership revocation problem is a major issue for group signatures. In some applications where group secret keys are stored in tamper-resistant chips, a Verifier-Local Revocation (VLR) solution is more reasonable than other methods, such as witness-based revocation. Boneh et al. formally defined such VLR group signatures and proposed a VLR solution for a short group signature. Later Nakanishi et al. pointed out that it has the disadvantage of backward linkability, and provided a VLR solution with backward unlinkability at the cost of a longer signature size and more computation. We improve Nakanishi et al.'s scheme by reducing the signature size and the computation required, without compromising VLR and backward unlinkability.
Last updated:  2006-03-28
An Efficient Single-Key Pirates Tracing Scheme Using Cover-Free Families
Dongvu Tonien, Reihaneh Safavi-Naini
A cover-free family is a well-studied combinatorial structure that has many applications in computer science and cryptography. In this paper, we propose a new public key traitor tracing scheme based on cover-free families. The new traitor tracing scheme is similar to the Boneh-Franklin scheme except that in the Boneh-Franklin scheme, decryption keys are derived from Reed-Solomon codes while in our case they are derived from a cover-free family. This results in much simpler and faster tracing algorithms for single-key pirate decoders, compared to the tracing algorithms of the Boneh-Franklin scheme, which use the Berlekamp-Welch algorithm. Our tracing algorithms never accuse innocent users and identify all traitors with overwhelming probability.
Last updated:  2006-07-07
Gröbner Basis Based Cryptanalysis of SHA-1
Makoto Sugita, Mitsuru Kawazoe, Hideki Imai
Recently, Wang proposed a new method to cryptanalyze SHA-1 and found collisions of $58$-round SHA-1. However, many details of Wang's attack are still unpublished, especially: 1) How to find differential paths? 2) How to modify messages properly? For the first issue, some results have already been reported. In our article, we clarify the second issue and give a sophisticated method based on Gröbner basis techniques. We propose two algorithms, based on the basic and an improved message modification technique respectively. The complexity of our algorithm to find a collision for 58-round SHA-1 based on the basic message modification is $2^{29}$ message modifications, and its implementation is experimentally equivalent to $2^{31}$ SHA-1 computations, whereas Wang's method needs $2^{34}$ SHA-1 computations. We propose an improved message modification and apply it to construct a more sophisticated algorithm to find a collision. The complexity to find a collision for 58-round SHA-1 based on this improved message modification technique is $2^8$ message modifications, but our latest implementation is very slow, experimentally equivalent to $2^{31}$ SHA-1 computations. However, we conjecture that our algorithm can be improved by techniques from error correcting codes and Gröbner bases. Using our methods, we have found many collisions for $58$-round SHA-1.
Last updated:  2006-04-18
A Cryptographic Tour of the IPsec Standards
Kenneth G. Paterson
In this article, we provide an overview of cryptography and cryptographic key management as they are specified in IPsec, a popular suite of standards for providing communications security and network access control for Internet communications. We focus on the latest generation of the IPsec standards, recently published as Request for Comments 4301–4309 by the Internet Engineering Task Force, and how they have evolved from earlier versions of the standards.
Last updated:  2006-03-12
Sequential Aggregate Signatures and Multisignatures without Random Oracles
Steve Lu, Rafail Ostrovsky, Amit Sahai, Hovav Shacham, Brent Waters
We present the first aggregate signature, the first multisignature, and the first verifiably encrypted signature provably secure without random oracles. Our constructions derive from a novel application of a recent signature scheme due to Waters. Signatures in our aggregate signature scheme are sequentially constructed, but knowledge of the order in which messages were signed is not necessary for verification. The aggregate signatures obtained are shorter than Lysyanskaya et~al. sequential aggregates and can be verified more efficiently than Boneh et~al. aggregates. We also consider applications to secure routing and proxy signatures.
Last updated:  2009-02-24
MAC Reforgeability
John Black, Martin Cochran
Message Authentication Codes (MACs) are central algorithms deployed in virtually every security protocol in common usage. In these protocols, the integrity and authenticity of messages rely entirely on the security of the MAC; we examine cases in which this security is lost. In this paper, we examine the notion of “reforgeability” for MACs. We first give a definition for this new notion, then examine some of the most widely-used and well-known MACs under our definition. We show that for each of these MACs there exists an attack that allows efficient forgeries after the first one is obtained, and we show that simply making these schemes stateful is usually insufficient. For those schemes where adding state is effective, we go one step further to examine how counter misuse affects the security of the MAC, finding, in many cases, simply repeating a single counter value yields complete insecurity. These issues motivated the design of a new scheme, WMAC, which has a number of desirable properties. It is as efficient as the fastest MACs, resists counter misuse, and has tags which may be truncated to the desired length without affecting security (currently, the fastest MACs do not have this property), making it resistant to reforging attacks and arguably the best MAC for constrained environments.
Last updated:  2006-03-09
Cryptanalysis of the MEM Mode of Operation
Peng Wang, Dengguo Feng, Wenling Wu
The MEM mode is a nonce-based enciphering mode of operation proposed by Chakraborty and Sarkar, which was claimed to be secure against symmetric nonce-respecting adversaries. We show that this is not correct by giving two very simple attacks. One attack needs one encryption query and one decryption query, and the other needs only one encryption query.
Last updated:  2006-03-09
RSA and a higher degree diophantine equation
Abderrahmane Nitaj
Let $N=pq$ be an RSA modulus where $p$, $q$ are large primes of the same bitsize. We study the class of public exponents $e$ for which there exists an integer $m$ with $1\leq m\leq {\log{N}\over \log{32}}$ and small integers $u$, $X$, $Y$ and $Z$ satisfying $$(e+u)Y^m-\psi(N)X^m=Z,$$ where $\psi(N)=(p+1)(q-1)$. First we show that these exponents are of improper use in RSA cryptosystems. Next we show that their number is at least $O\left(mN^{{1\over 2}+{\alpha\over m}-\alpha-\epsilon}\right)$ where $\alpha$ is defined by $N^{1-\alpha}=\psi(N)$.
Last updated:  2006-03-09
Cryptanalysis of RSA with constrained keys
Uncategorized
Abderrahmane Nitaj
Let $n=pq$ be an RSA modulus with unknown prime factors and $F$ any function for which there exists an integer $u\neq 0$ satisfying $F(u)\approx n$ and such that $pu$ or $qu$ is computable from $F(u)$ and $n$. We show that if the public key exponent $e$ is chosen so that there exist positive integers $X$, $Y$ for which $\left\vert eY-XF(u)\right\vert$ and $Y$ are suitably small, then the system is insecure.
Last updated:  2006-03-09
The Complexity of Online Memory Checking
Moni Naor, Guy Rothblum
Suppose you want to store a large file on a remote and unreliable server. You would like to verify that your file has not been corrupted, so you store a small private (randomized) ``fingerprint'' of the file on your own computer. This is the setting for the well-studied authentication problem, and the size of the required private fingerprint is well understood. We study the problem of sub-linear authentication: suppose you would like to encode and store your file in a way that allows you to verify that it has not been corrupted, but without reading all of it. If you only want to read $t$ bits of the file, how large does the size $s$ of the fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between $s$ and $t$ when the adversary is not computationally bounded, namely $s \times t = \Omega(n)$, where $n$ is the file size. This is an easier case of the online memory checking problem, introduced by Blum, Evans, Gemmell, Kannan and Naor in 1991, and hence the same (tight) lower bound applies also to this problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers and sub-linear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the $s \times t = \Omega(n)$ lower bound in a computational setting implies the existence of one-way functions.
Last updated:  2006-03-15
Secure Sketch for Multi-Sets
Uncategorized
Ee-Chien Chang, Vadym Fedyukovych, Qiming Li
Given the original set $X$ where $|X|=s$, a sketch $P$ is computed from $X$ and made public. From another set $Y$ where $|Y| = s$ and $P$, we can reconstruct $X$ if $|X\cap Y|\ge s-t$, where $t<s$ is some threshold. The sketch $P$ is secure if it does not reveal much information about $X$. A few constructions have been proposed, but they cannot handle multi-sets, that is, sets that may contain duplicate elements. We observe that the techniques in the set reconciliation protocol proposed by Minsky et al. (ISIT 2001) can be applied to give a secure sketch that supports multi-sets. If $X$ is a subset of a universe with $n$ elements, the running time of the encoding and decoding algorithms is polynomial in $s$ and $\log n$, and the entropy loss due to the sketch is less than $2t(1+\log n)$.
Last updated:  2006-03-08
A Tree-based Model of Unicast Stream Authentication
Goce Jakimoski, Yvo Desmedt
When proving the security of a message authentication scheme, the messages are considered to be atomic objects. Straightforward application of such schemes to some information resources may introduce security flaws. Gennaro and Rohatgi (Crypto '97) identified streams of data as an important class of information resources that cannot be considered message-like, and they proposed a solution to the problem of stream signing when the stream is not known in advance. The disadvantage of digitally signing streams of data is that it is not efficient when non-repudiation is not important, as in the case of point-to-point communications. We present several schemes, as well as a family of schemes, for stream authentication in the unicast setting. Since many authentication schemes have been broken, we also prove the security of our solutions.
Last updated:  2010-03-12
On the Feasibility of Consistent Computations
Uncategorized
Sven Laur, Helger Lipmaa
In many practical settings, participants are willing to deviate from the protocol only if they remain undetected. Aumann and Lindell introduced a concept of covert adversaries to formalize this type of corruption. In the current paper, we refine their model to get stronger security guarantees. Namely, we show how to construct protocols, where malicious participants cannot learn anything beyond their intended outputs and honest participants can detect malicious behavior that alters their outputs. As this construction does not protect honest parties from selective protocol failures, a valid corruption complaint can leak a single bit of information about the inputs of honest parties. Importantly, it is often up to the honest party to decide whether to complain or not. This potential leakage is often compensated by gains in efficiency---many standard zero-knowledge proof steps can be omitted. As a concrete practical contribution, we show how to implement consistent versions of several important cryptographic protocols such as oblivious transfer, conditional disclosure of secrets and private inference control.
Last updated:  2007-04-26
Analysis of the SPV Secure Routing Protocol: Weaknesses and Lessons
Uncategorized
Barath Raghavan, Saurabh Panjwani, Anton Mityagin
We analyze a secure routing protocol, Secure Path Vector (SPV), proposed in SIGCOMM 2004. SPV aims to provide authenticity for route announcements in the Border Gateway Protocol (BGP) using an efficient alternative to ordinary digital signatures, called constant-time signatures. Today, SPV is often considered the best cryptographic defense for BGP. We find subtle flaws in the design of SPV which lead to attacks that can be mounted by 60% of Autonomous Systems in the Internet. In addition, we study several of SPV's design decisions and assumptions and highlight the requirements for security of routing protocols. In light of our analysis, we reexamine the need for constant-time signatures and find that certain standard digital signature schemes can provide the same level of efficiency for route authenticity.