All papers in 1997 (15 results)

Last updated:  1997-12-05
Optimistic Fair Exchange of Digital Signatures
Uncategorized
N. Asokan, V. Shoup, M. Waidner
We present a new protocol that allows two players to exchange digital signatures (including RSA and DSS) over the Internet in a fair way, so that either each player gets the other's signature, or neither player does. One obvious application is where the signatures represent items of value, for example, an electronic check or airline ticket; the protocol can also be adapted to exchange encrypted data. The protocol relies on a trusted third party, but is "optimistic," in that the third party is only needed in cases where one player attempts to cheat or simply crashes. This is an important property, as it greatly reduces the load on the third party, which in particular facilitates a more robust and secure implementation of the third party.
Last updated:  1997-11-09
Generalized Diffie-Hellman Modulo a Composite is not Weaker than Factoring
Uncategorized
Eli Biham, Dan Boneh, Omer Reingold
The Diffie-Hellman key-exchange protocol may naturally be extended to k>2 parties. This gives rise to the generalized Diffie-Hellman assumption (GDH-Assumption). Naor and Reingold have recently shown an efficient construction of pseudo-random functions and reduced the security of their construction to the GDH-Assumption. In this note, we prove that breaking this assumption modulo a composite would imply an efficient algorithm for factorization. Therefore, the security of both the key-exchange protocol and the pseudo-random functions can be reduced to factoring.
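As a quick illustration of the object the assumption is about (not taken from the paper itself), the Python sketch below computes the k-party generalized Diffie-Hellman value g^(a_1 a_2 ... a_k) mod N for a composite modulus N. The parameters p, q, g and k are made up for demonstration and are far too small to be secure.

```python
# Toy illustration of the k-party generalized Diffie-Hellman value
# g^(a_1 * a_2 * ... * a_k) mod N with a composite modulus N.
# All parameters are tiny and chosen only for illustration -- NOT secure.

import secrets

def gdh_value(g, exponents, N):
    """Raise g to the product of all secret exponents, modulo N."""
    value = g
    for a in exponents:
        value = pow(value, a, N)
    return value

# N = p * q is composite (toy primes; real parameters would be much larger).
p, q = 1019, 1187
N = p * q
g = 2
k = 4  # number of parties

# Each party i picks a private exponent a_i.
exponents = [secrets.randbelow(N - 2) + 1 for _ in range(k)]

# Every party ends up with the same shared value, regardless of the order
# in which the exponentiations are applied.
shared = gdh_value(g, exponents, N)
assert shared == gdh_value(g, list(reversed(exponents)), N)
print("shared GDH value:", shared)
```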
Last updated:  1997-10-06
Visual Authentication and Identification
Uncategorized
Moni Naor, Benny Pinkas
The problems of authentication and identification have received wide interest in cryptographic research. However, there has been no satisfactory solution for the problem of authentication by a human recipient who does not use any trusted computational device. The problem of authentication arises for example in the context of smartcard--human interaction, in particular in the context of electronic wallets. The problem of identification is ubiquitous in communication over insecure networks. This paper introduces visual authentication and visual identification methods, which are authentication and identification methods for human users based on visual cryptography. These methods are very natural and easy to use, and can be implemented using very common ``low tech'' technology. The methods we suggest are efficient in the sense that a single transparency can be used for several authentications or for several identifications. The visual authentication methods we suggest are not limited to authenticating textual messages, and can be used to authenticate any image. An important contribution of this paper is the introduction of a framework for proving the security of protocols in which humans take an active part. We rigorously prove the security of our schemes using this framework.
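For readers unfamiliar with the underlying primitive, here is a minimal sketch of 2-out-of-2 visual secret sharing in the style of Naor and Shamir, the building block on which visual authentication rests. It is an illustrative toy only, not the paper's authentication or identification protocols.

```python
# Minimal 2-out-of-2 visual secret sharing (Naor-Shamir style), the building
# block behind visual authentication.  Each secret pixel is expanded into two
# subpixels per share; stacking the transparencies (bitwise OR) reveals the
# secret: white pixels become half black, black pixels become fully black.

import random

WHITE, BLACK = 0, 1
PATTERNS = [(0, 1), (1, 0)]  # the two possible subpixel patterns

def share_pixel(pixel):
    """Split one secret pixel into two subpixel pairs (one per share)."""
    s1 = random.choice(PATTERNS)
    if pixel == WHITE:
        s2 = s1                        # identical patterns -> stack is half black
    else:
        s2 = tuple(1 - b for b in s1)  # complementary patterns -> stack is all black
    return s1, s2

def stack(p1, p2):
    """Physically stacking two transparencies acts like a bitwise OR."""
    return tuple(a | b for a, b in zip(p1, p2))

secret_row = [WHITE, BLACK, BLACK, WHITE]
share1, share2 = zip(*(share_pixel(px) for px in secret_row))

for px, s1, s2 in zip(secret_row, share1, share2):
    # A black secret pixel yields (1, 1); a white one keeps one white subpixel.
    print(px, s1, s2, "->", stack(s1, s2))
```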
Last updated:  1998-08-01
Self-Delegation with Controlled Propagation - or - What If You Lose Your Laptop.
Uncategorized
Oded Goldreich, Birgit Pfitzmann, Ronald L. Rivest
We introduce delegation schemes wherein a user may delegate rights to himself, i.e., to other public keys he owns, but may not safely delegate those rights to others, i.e., to their public keys. In our motivating application, a user has a primary (long-term) key that receives rights, such as access privileges, that may not be delegated to others, yet the user may reasonably wish to delegate these rights to new secondary (short-term) keys he creates to use on his laptop when traveling, to avoid having to store his primary secret key on the vulnerable laptop. We propose several cryptographic schemes, both generic and practical, that allow such self-delegation while providing strong motivation for the user not to delegate rights that he only obtained for personal use to other parties.
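The generic certificate layer of such a scheme might look like the Python sketch below, in which a long-term primary key signs a statement delegating to a fresh short-term key (Ed25519 from the pyca/cryptography package is used as an arbitrary stand-in signature scheme). The paper's actual contribution, the mechanism that makes it unsafe to delegate to someone else's key, is not captured here.

```python
# A bare-bones delegation certificate: the long-term primary key signs a
# statement binding a freshly generated short-term (laptop) key to an
# expiration date.  This is only the generic "certificate" layer; the
# controlled-propagation mechanism from the paper is not modeled.
# Requires the pyca/cryptography package.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def raw(pub):
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# Long-term primary key (kept off the laptop) and a short-term secondary key.
primary = Ed25519PrivateKey.generate()
secondary = Ed25519PrivateKey.generate()

# The delegation certificate: "this secondary key may act for me until <date>".
statement = b"delegate-to:" + raw(secondary.public_key()) + b"|expires:1997-12-31"
certificate = primary.sign(statement)

# A verifier who trusts the primary public key checks the certificate, then
# accepts signatures made with the secondary key.
primary.public_key().verify(certificate, statement)   # raises on failure
msg = b"access request from the laptop"
sig = secondary.sign(msg)
secondary.public_key().verify(sig, msg)
print("delegation certificate and delegated signature verified")
```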
Last updated:  1997-08-15
Identity Escrow
Uncategorized
Joe Kilian, Erez Petrank
We introduce the notion of escrowed identity, an application of key-escrow ideas to the problem of identification. In escrowed identity, one party, A, does not give his identity to another party B, but rather gives him information that would allow an authorized third party, E, to determine A's identity. However, B receives a guarantee that E can indeed determine A's identity. We give protocols for escrowed identity based on the El-Gamal (signature and encryption) schemes and on the RSA function. A useful feature of our protocol is that after setting up A to use the system, E is only involved when it is actually needed to determine A's identity.
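A heavily simplified sketch of the escrow step, assuming plain ElGamal encryption of A's identity under the escrow agent's public key. The crucial ingredient of the paper, B's guarantee (a proof) that the ciphertext really contains A's certified identity, is omitted, and all parameters are toy-sized.

```python
# Toy ElGamal-based identity deposit: A encrypts her identity under the escrow
# agent E's public key and gives the ciphertext to B, so that only E can later
# determine the identity.  B's verifiable guarantee from the paper is NOT
# modeled here, and the parameters are toy-sized.

import secrets

p = 2**127 - 1   # a Mersenne prime; far too small for real use
g = 3

# Escrow agent E's key pair.
e_priv = secrets.randbelow(p - 2) + 1
e_pub = pow(g, e_priv, p)

# A's identity, encoded as an integer smaller than p.
identity = int.from_bytes(b"alice-id-1997", "big")

# A encrypts the identity under E's public key and hands (c1, c2) to B.
r = secrets.randbelow(p - 2) + 1
c1 = pow(g, r, p)
c2 = (identity * pow(e_pub, r, p)) % p

# Only E can open the deposit and determine A's identity.
recovered = (c2 * pow(pow(c1, e_priv, p), -1, p)) % p
print(recovered.to_bytes((recovered.bit_length() + 7) // 8, "big"))
```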
Last updated:  1997-08-15
CBC MAC for Real-Time Data Sources
Uncategorized
Erez Petrank, Charles Rackoff
The Cipher Block Chaining (CBC) Message Authentication Code (MAC) is an authentication method which is widely used in practice. It is well known that the naive use of CBC MAC for variable-length messages is not secure, and a few rules of thumb for the correct use of CBC MAC are known by folklore. The first rigorous proof of the security of CBC MAC, when used on fixed-length messages, was given only recently by Bellare, Kilian and Rogaway. They also suggested variants of CBC MAC that handle variable-length messages, but in these variants the length of the message has to be known in advance (i.e., before the message is processed). We study CBC authentication of real-time applications in which the length of the message is not known until the message ends, and furthermore, since the application is real-time, it is not possible to start processing the authentication only after the message ends. We first present a variant of CBC MAC, called double MAC (DMAC), which handles messages of variable, unknown length. Computing DMAC on a message is virtually as simple and as efficient as computing the standard CBC MAC on the message. We provide a rigorous proof that its security is implied by the security of the underlying block cipher. Next, we argue that the basic CBC MAC is secure when applied to a prefix-free message space. A message space can be made prefix-free by also authenticating the (usually hidden) last character which marks the end of the message.
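The sketch below follows the way the double-MAC idea is usually presented, as an "encrypted CBC-MAC": the ordinary CBC chaining value is computed online as data arrives, and the final value is re-encrypted under an independent second key. Padding details and the exact construction in the paper may differ; AES via the pyca/cryptography package serves as the block cipher purely for illustration.

```python
# Sketch of a double MAC for streams of unknown length: run ordinary CBC-MAC
# over the data as it arrives, then encrypt the final chaining value under an
# independent second key.  Padding and other details of the original paper may
# differ.  Requires the pyca/cryptography package.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def ecb_encrypt_block(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

class DoubleMAC:
    """Online CBC-MAC whose final value is re-encrypted under a second key."""

    def __init__(self, key1, key2):
        self.key1, self.key2 = key1, key2
        self.state = bytes(BLOCK)   # CBC chaining value
        self.buffer = b""

    def update(self, data):
        self.buffer += data
        while len(self.buffer) >= BLOCK:
            block, self.buffer = self.buffer[:BLOCK], self.buffer[BLOCK:]
            xored = bytes(a ^ b for a, b in zip(self.state, block))
            self.state = ecb_encrypt_block(self.key1, xored)

    def finalize(self):
        # Unambiguous 10* padding of the last (possibly empty) partial block.
        self.update(b"\x80" + b"\x00" * (BLOCK - 1 - len(self.buffer)))
        return ecb_encrypt_block(self.key2, self.state)

k1, k2 = os.urandom(16), os.urandom(16)
mac = DoubleMAC(k1, k2)
for chunk in (b"real-time ", b"data of ", b"unknown length"):
    mac.update(chunk)             # the total length need not be known in advance
print(mac.finalize().hex())
```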
Last updated:  1997-07-11
Collision-Resistant Hashing: Towards Making UOWHFs Practical
Uncategorized
Mihir Bellare, Phillip Rogaway
Recent attacks on the cryptographic hash functions MD4 and MD5 make it clear that (strong) collision-resistance is a hard-to-achieve goal. We look towards a weaker notion, the universal one-way hash functions (UOWHFs) of Naor and Yung, and investigate their practical potential. The goal is to build UOWHFs not based on number theoretic assumptions, but from the primitives underlying current cryptographic hash functions like MD5 and SHA. Pursuing this goal leads us to new questions. The main one is how to extend a compression function to a full-fledged hash function in this new setting. We show that the classic Merkle-Damgard method used in the standard setting fails for these weaker kinds of hash functions, and we present some new methods that work. Our main construction is the "XOR tree." We also consider the problem of input length-variability and present a general solution.
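For context, the classic Merkle-Damgard extension mentioned in the abstract (the one shown to fail for UOWHFs) is sketched below; SHA-256 over the concatenated input serves as a stand-in compression function, and the paper's XOR-tree construction is not reproduced here.

```python
# The classic Merkle-Damgard extension of a compression function to arbitrary
# message lengths -- the construction the abstract says does NOT preserve the
# weaker UOWHF property.  As a stand-in compression function
# f: {0,1}^256 x {0,1}^512 -> {0,1}^256 we use SHA-256 over the concatenation;
# this is purely illustrative.

import hashlib

CHAIN_BYTES, BLOCK_BYTES = 32, 64

def compress(chaining, block):
    return hashlib.sha256(chaining + block).digest()

def merkle_damgard(message, iv=bytes(CHAIN_BYTES)):
    # Strengthening: append a 1 bit (0x80), zero padding, and the bit length.
    length = (8 * len(message)).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK_BYTES) + length

    state = iv
    for i in range(0, len(padded), BLOCK_BYTES):
        state = compress(state, padded[i:i + BLOCK_BYTES])
    return state

print(merkle_damgard(b"message of arbitrary length").hex())
```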
Last updated:  1997-06-13
Factoring via Strong Lattice Reduction Algorithms
Uncategorized
Harald Ritter, Carsten Roessner
We address the problem of factoring a large composite number with lattice reduction algorithms. Schnorr has shown that under reasonable number-theoretic assumptions this problem can be reduced to a simultaneous Diophantine approximation problem. The latter in turn can be solved by finding sufficiently many l_1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied Schnorr's reduction technique to 40-bit integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of the 125 congruences which are necessary to factorize the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques incorporating ideas of Schnorr, Hoerner and Ritter. For 60-bit integers our algorithm yields a complete factorization in less than 3 hours.
Last updated:  1997-06-02
Towards realizing random oracles: Hash functions that hide all partial information
Uncategorized
Ran Canetti
The random oracle model is a very convenient setting for designing cryptographic protocols. In this idealized model all parties have access to a common, public random function, called a random oracle. Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not have a general mechanism for transforming protocols that are secure in the random oracle model into protocols that are secure in real life. In fact, we do not even know how to meaningfully specify the properties required from such a mechanism. Instead, it is a common practice to simply replace - often without mathematical justification - the random oracle with a `cryptographic hash function' (e.g., MD5 or SHA). Consequently, the resulting protocols have no meaningful proofs of security.
Last updated:  1997-05-04
Protecting Data Privacy in Private Information Retrieval Schemes
Uncategorized
Yuval Ishai, Eyal Kushilevitz
Private Information Retrieval (PIR) schemes allow a user to retrieve the i-th bit of a data string x, replicated in k>=2 databases, while keeping the value of i private. The main cost measure for such a scheme is its communication complexity. We study PIR schemes where in addition to the user's privacy we require data privacy. That is, in every invocation of the retrieval protocol the user learns exactly a single physical bit of x and no other information. Further, we require that even a dishonest user would not learn more than a single physical data bit. We present general transformations that allow translating PIR schemes satisfying certain properties into PIR schemes that respect data privacy as well, with a small penalty in the communication complexity. Using our machinery we are able to translate currently known PIR solutions into schemes satisfying the newly introduced, stronger privacy constraint. In particular we get: a k-database scheme of complexity O(log(n) n^{1/(2k-1)}) for every k>=2; an O(log(n))-database scheme of poly-logarithmic complexity; a 2-database computational PIR of complexity O(n^c), for every constant c>0. All these require only a single round of interaction.
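To make the model concrete, here is the textbook 2-database PIR baseline in which the user XORs the answers to two random-subset queries. It has linear communication and offers no data privacy, so it is not one of the schemes from the abstract, merely the setting they improve on.

```python
# Toy 2-database PIR illustrating the model: the user sends each database a
# random-looking subset of indices, each database answers with the XOR of the
# selected bits, and the user recovers x_i.  Each database alone sees a
# uniformly random subset, so i stays hidden.  Linear communication, no data
# privacy -- just the baseline setting, not a scheme from the abstract.

import secrets

def query(n, i):
    """User: build the two queries for bit i of an n-bit database."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}           # symmetric difference: flip membership of i
    return s1, s2

def answer(x, subset):
    """Database: XOR of the requested positions."""
    result = 0
    for j in subset:
        result ^= x[j]
    return result

x = [secrets.randbits(1) for _ in range(32)]   # the replicated data string
i = 17
s1, s2 = query(len(x), i)
recovered = answer(x, s1) ^ answer(x, s2)
assert recovered == x[i]
print("recovered bit", i, "=", recovered)
```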
Last updated:  1997-04-21
A Probabilistic Error-Correcting Scheme
Uncategorized
S. Decatur, O. Goldreich, D. Ron
In the course of research in Computational Learning Theory, we found ourselves in need of an error-correcting encoding scheme for which few bits in the codeword yield no information about the plain message. Being unaware of a previous solution, we came up with the scheme presented here. Since this scheme may be of interest to people working in Cryptography, we thought it may be worthwhile to ``publish'' this part of our work within the Cryptography community. Clearly, a scheme as described above cannot be deterministic. Thus, we introduce a probabilistic coding scheme which, in addition to the standard coding theoretic requirements, has the feature that any constant fraction of the bits in the (randomized) codeword yields no information about the message being encoded. This coding scheme is also used to obtain efficient constructions for the Wire-Tap Channel Problem.
Last updated:  2002-10-07
A note on negligible functions
Uncategorized
Mihir Bellare
In theoretical cryptography, one formalizes the notion of an adversary's success probability being ``too small to matter'' by asking that it be a negligible function of the security parameter. We argue that the issue that really arises is what it might mean for a collection of functions to be ``negligible.'' We consider (and define) two such notions, and prove them equivalent. Roughly, this enables us to say that any cryptographic primitive has a specific associated ``security level.'' In particular we say this for any one-way function. We also reconcile different definitions of negligible error arguments and computational proofs of knowledge that have appeared in the literature. Although the motivation is cryptographic, the main result is purely about negligible functions.
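For reference, the standard single-function notion that the abstract starts from (not the paper's new collection-level definitions) is:

```latex
% A function nu is negligible if it is eventually smaller than the inverse of
% every polynomial in the security parameter k.
\[
  \nu : \mathbb{N} \to \mathbb{R}_{\ge 0} \text{ is negligible}
  \iff
  \forall c > 0 \;\; \exists k_c \;\; \forall k \ge k_c : \; \nu(k) < k^{-c}.
\]
```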
Last updated:  1997-03-05
Efficient Cryptographic Protocols Based on Noisy Channels.
Uncategorized
Claude Crepeau
The Wire-Tap Channel of Wyner shows that a Binary Symmetric Channel may be used as a basis for exchanging a secret key. Later, Crepeau and Kilian showed how a BSC may be used to implement Oblivious Transfer. Unfortunately, this result is rather impractical as it requires $n^{11}$ bits to be sent through the BSC to accomplish a single OT. The current paper provides efficient protocols to achieve Bit Commitment and Oblivious Transfer based on the existence of a BSC. Our protocols respectively use the BSC $n$ times and $n^3$ times. These results are based on a technique known as Generalized Privacy Amplification.
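The shared resource both protocols are built from is a binary symmetric channel; the few lines below model it (each bit is flipped independently with crossover probability p), without attempting to reproduce the bit commitment or oblivious transfer protocols themselves.

```python
# Model of the resource used by the protocols: a binary symmetric channel (BSC)
# that flips each transmitted bit independently with crossover probability p.
# Only the channel model is sketched, not the protocols from the paper.

import random

def bsc(bits, p, rng=random):
    """Send a list of 0/1 bits through a BSC with crossover probability p."""
    return [b ^ int(rng.random() < p) for b in bits]

sent = [random.getrandbits(1) for _ in range(20)]
received = bsc(sent, p=0.1)
errors = sum(s != r for s, r in zip(sent, received))
print(f"{errors} of {len(sent)} bits flipped by the channel")
```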
Last updated:  1997-02-26
Round-Optimal Zero-Knowledge Arguments Based on any One-Way Function
Uncategorized
Mihir Bellare, Markus Jakobsson, Moti Yung
We fill a gap in the theory of zero-knowledge protocols by presenting NP-arguments that achieve negligible error probability and computational zero-knowledge in four rounds of interaction, assuming only the existence of a one-way function. This result is optimal in the sense that four rounds and a one-way function are each individually necessary to achieve a negligible error zero-knowledge argument for NP.
Last updated:  1997-02-26
A New Paradigm for Collision-free Hashing: Incrementality at Reduced Cost
Uncategorized
Mihir Bellare, Daniele Micciancio
We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modified to x' then rather than having to re-compute the hash of x' from scratch, I can quickly ``update'' the old hash value to the new one, in time proportional to the amount of modification made in x to get x'.) Also any function emanating from this paradigm is parallelizable, useful for hardware implementation.
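A sketch of the randomize-then-combine idea behind such incremental hashing, assuming the combine step is addition modulo a large prime and using SHA-256 as a stand-in for the per-block function; the block size, modulus and all other parameters are illustrative rather than the paper's concrete proposal.

```python
# Randomize-then-combine sketch of incremental hashing: hash each
# (index, block) pair independently and combine the results with a group
# operation -- here addition modulo a large prime.  Updating one block means
# subtracting its old contribution and adding the new one, instead of
# rehashing everything.  SHA-256 stands in for the per-block function; all
# parameters are illustrative only.

import hashlib

M = 2**521 - 1          # a large (Mersenne) prime to combine in
BLOCK = 32              # bytes per message block

def block_hash(index, block):
    data = index.to_bytes(8, "big") + block
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % M

def blocks(message):
    padded = message + b"\x00" * ((-len(message)) % BLOCK)
    return [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

def full_hash(message):
    return sum(block_hash(i, b) for i, b in enumerate(blocks(message))) % M

def incremental_update(old_hash, index, old_block, new_block):
    """Replace block `index` without rehashing the whole message."""
    return (old_hash - block_hash(index, old_block)
            + block_hash(index, new_block)) % M

msg = b"A" * BLOCK + b"B" * BLOCK + b"C" * BLOCK
h = full_hash(msg)

new_msg = b"A" * BLOCK + b"X" * BLOCK + b"C" * BLOCK
h_updated = incremental_update(h, 1, b"B" * BLOCK, b"X" * BLOCK)
assert h_updated == full_hash(new_msg)
print(hex(h_updated))
```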