Insecurity of Quantum Computations
It had been widely claimed that quantum mechanics can protect private
information during public decisions, for example in the so-called
two-party secure computation. If this were the case, quantum
smart-cards could prevent fake teller machines from learning the PIN
(Personal Identification Number) from the customers' input. Although
such optimism has been challenged by the recent surprising discovery
of the insecurity of the so-called quantum bit commitment, the
security of quantum two-party computation itself has remained unaddressed.
Here I address this question directly by showing that all <i>one-sided</i>
two-party computations (which allow only one of the two parties to
learn the result) are necessarily insecure. As corollaries to my
results, quantum one-way oblivious password identification and the
so-called quantum one-out-of-two oblivious transfer are impossible. I
also construct a class of functions that cannot be computed securely
in any <i>two-sided</i> two-party computation. Nevertheless, quantum
cryptography remains useful in key distribution and can still provide
partial security in ``quantum money'' proposed by Wiesner.
Relations among Notions of Security for Public-Key Encryption Schemes
We compare the relative strengths of popular notions of security for
public key encryption schemes. We consider the goals of
indistinguishability and non-malleability, each under chosen plaintext
attack and two kinds of chosen ciphertext attack. For each of the
resulting pairs of definitions we prove either an implication (every
scheme meeting one notion must meet the other) or a separation (there
is a scheme meeting one notion but not the other, assuming the first
notion can be met at all). We similarly treat plaintext awareness, a
notion of security in the random oracle model. An additional
contribution of this paper is a new definition of non-malleability
which we believe is simpler than the previous one.
Almost All Discrete Log Bits Are Simultaneously Secure
Let G be a finite cyclic group with generator \alpha and with an
encoding so that multiplication is computable in polynomial time. We
study the security of bits of the discrete log x when given
exp<sub>\alpha</sub>(x), assuming that the exponentiation function
exp<sub>\alpha</sub>(x) = \alpha<sup>x</sup> is one-way. We reduce the
general problem to the case that G has odd order q. If G has odd order
q the security of the least-significant bits of x and of the most
significant bits of the rational number x/q \in [0,1) follows from the
work of Peralta [P85] and Long and Wigderson [LW88]. We generalize
these bits and study the security of consecutive <i>shift bits</i>
lsb(2<sup>-i</sup>x mod q) for i=k+1,...,k+j. When we restrict
exp<sub>\alpha</sub> to arguments x such that some sequence of j
consecutive shift bits of x is constant (i.e., not depending on x) we
call it a 2<sup>-j</sup>-<i>fraction</i> of exp<sub>\alpha</sub>.
For groups of odd group order q we show that every two
2<sup>-j</sup>-fractions of exp<sub>\alpha</sub> are equally one-way
by a polynomial time transformation: Either they are all one-way or
none of them. Our <i>key theorem</i> shows that arbitrary j
consecutive shift bits of x are simultaneously secure when given
exp<sub>\alpha</sub>(x) iff the 2<sup>-j</sup>-fractions of
exp<sub>\alpha</sub> are one-way. In particular this applies to the j
least-significant bits of x and to the j most-significant bits of x/q
\in [0,1). For one-way exp<sub>\alpha</sub> the individual bits of x
are secure when given exp<sub>\alpha</sub>(x) by the method of Hastad and
Naslund [HN98]. For groups of even order 2<sup>s</sup>q we show that
the j least-significant bits of \lfloor x/2<sup>s</sup>\rfloor, as
well as the j most-significant bits of x/q \in [0,1), are
simultaneously secure iff the 2<sup>-j</sup>-fractions of
exp<sub>\alpha'</sub> are one-way for \alpha' := \alpha<sup>2<sup>s</sup></sup>.
We use and extend the models of generic algorithms of Nechaev (1994)
and Shoup (1997). We determine the generic complexity of inverting
fractions of exp<sub>\alpha</sub> for the case that \alpha has prime
order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q
consecutive shift bits of random x are for constant \varepsilon >0
simultaneously secure against generic attacks. Every generic algorithm
using t generic steps (group operations) for distinguishing bit
strings of j consecutive shift bits of x from random bit strings has
at most advantage O((\lg q)j\sqrt{t} (2<sup>j</sup>/q)<sup>1/4</sup>).
Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
The heart of the task of building public key cryptosystems is viewed
as that of ``making trapdoors;'' in fact, public key cryptosystems and
trapdoor functions are often discussed as synonymous. How accurate is
this view? In this paper we endeavor to get a better understanding of
the nature of ``trapdoorness'' and its relation to public key
cryptosystems, by broadening the scope of the investigation: we look
at general trapdoor functions; that is, functions that are not
necessarily injective (i.e., one-to-one). Our first result is somewhat
surprising: we show that non-injective trapdoor functions (with
super-polynomial pre-image size) can be constructed from any one-way
function (and hence it is unlikely that they suffice for public key
encryption). On the other hand, we show that trapdoor functions with
polynomial pre-image size are sufficient for public key encryption.
Together, these two results indicate that the pre-image size is a
fundamental parameter of trapdoor functions. We then turn our
attention to the converse, asking what kinds of trapdoor functions can
be constructed from public key cryptosystems. We take a first step by
showing that in the random-oracle model one can construct injective
trapdoor functions from any public key cryptosystem.
Security and Composition of Multi-party Cryptographic Protocols
We present general definitions of security for multiparty cryptographic
protocols and show that, using these definitions, security is preserved
under a natural composition method.
The definitions follow the general paradigm of known definitions;
yet some substantial modifications and simplifications are introduced. In
particular, `black-box simulation' is no longer required. The composition
method is essentially the natural `subroutine substitution' method suggested
by Micali and Rogaway.
We first present the general definitional approach. Next we consider several
settings for multiparty protocols. These include the cases of non-adaptive
and adaptive adversaries, as well as the information-theoretic and the
computational models.
Making An Empty Promise With A Quantum Computer (Or, A Brief Review on the Impossibility of Quantum Bit Commitment)
Alice has made a decision in her mind.
While she does not want to reveal it to
Bob at this moment, she would like to convince Bob that she is committed to
this particular decision and that she cannot change it at a later time. Is
there a way for Alice to get Bob's trust? Until recently, researchers had
believed that the above task can be performed with the help of quantum
mechanics, with the security of the quantum scheme resting on the uncertainty
principle. Nevertheless, such optimism was recently shattered by Mayers and by
us, who found that Alice can always change her mind if she has a quantum
computer. Here, we survey this dramatic development and its implications for
the security of other quantum cryptographic schemes.
Last updated: 1999-01-01
Quantum Computers Render Quantum Key Distribution Unconditionally Secure Over Arbitrarily Long Distances
Quantum cryptography has long been claimed to be useful for
achieving many tasks that are impossible from the perspective of
conventional cryptography. Arguably, the most important problem
in quantum cryptography has been a rigorous proof of the security of
quantum key distribution, the most well-known application.
This notoriously hard problem has eluded researchers for years and has
become even more important after the recent surprising demonstration
of the insecurity of many other quantum cryptographic schemes including
quantum bit commitment. Here, we solve this long standing problem by
showing that, given quantum computers, quantum key distribution over an
arbitrarily long distance of a realistic noisy channel can be made
unconditionally secure. The novel technique we use is reduction from a
quantum scheme to a classical scheme. The security in realistic noisy
environments is then proven by using the recent theory of fault-tolerant
quantum computation.
More on Proofs of Knowledge
The notion of proofs of knowledge is central to cryptographic
protocols, and many definitions for it have been proposed. In this work
we explore a different facet of this notion, not addressed by prior
definitions. Specifically, prior definitions concentrate on capturing
the properties of the verifier, and do not pay much attention to the
properties of the prover.
Our new definition is strictly stronger than previous ones, and captures
new and desirable properties. In particular, it guarantees prover
feasibility, that is, it guarantees that the time spent by the prover
in a proof of knowledge is comparable to the time it spends in an "extraction"
of this knowledge. Our definition also enables one to consider meaningfully
the case of a single, specific prover.
Randomness versus Fault-Tolerance
We investigate the relations between two major requirements of multiparty
protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}.
Fault-tolerance is measured in terms of the maximum number of colluding faulty
parties, t, that a protocol can withstand and still maintain the privacy of
the inputs and the correctness of the outputs (of the honest parties). Randomness
is measured in terms of the total number of random bits needed by the parties
in order to execute the protocol.
Previously, the upper bound on the amount of randomness required by general
constructions for securely computing any non-trivial function f was polynomial
both in n, the total number of parties, and the circuit-size C(f). This was
the state of knowledge even for the special case t=1 (i.e., when there is at
most one faulty party). In this paper, we show that for any linear-size
circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n)
randomness is sufficient. More generally, we show that for any function f
with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n)
randomness in order to withstand any coalition of size at most t.
Furthermore, in our protocol only t+1 parties flip coins and the rest of
the parties are deterministic. Our results generalize to the case of adaptive
adversaries as well.
A Random Server Model for Private Information Retrieval (or How to Achieve Information Theoretic PIR Avoiding Data Replication)
Private information retrieval (PIR) schemes enable users to obtain
information from databases while keeping their queries secret from the
database managers. We propose a new model for PIR, utilizing
auxiliary random servers to provide privacy services for database
access. In this model, prior to any on-line communication where users
request queries, the database engages in an initial preprocessing
setup stage with the random servers. Using this model we achieve the
first PIR information theoretic solution in which the database does
not need to give away its data to be replicated, and with minimal
on-line computation cost for the database. This solves privacy and
efficiency problems inherent to all previous solutions.
In particular, all previous information theoretic PIR schemes required
multiple replications of the database into separate entities which are
not allowed to communicate with each other; and in all previous
schemes (including ones which do not achieve information theoretic
security), the amount of computation performed by the database on-line
for every query is at least linear in the size of the database.
In contrast, in our solutions the database does not give away its
contents to any other entity; and after the initial setup stage, which
costs at most O(n log n) in computation, the database needs to
perform only O(1) amount of computation to answer questions of users
on-line. All the extra on-line computation is done by the auxiliary
random servers.
Maintaining Authenticated Communication in the Presence of Break-ins
We study the problem of maintaining authenticated communication over untrusted
communication channels, in a scenario where the communicating parties may be
occasionally and repeatedly broken into for transient periods of time. Once
a party is broken into, its cryptographic keys are exposed and perhaps
modified. Yet, we want parties whose security is thus compromised to regain
their ability to communicate in an authenticated way aided by other parties.
In this work we present a mathematical model for this highly adversarial
setting, exhibiting salient properties and parameters, and then describe
a practically-appealing protocol for the task of maintaining authenticated
communication in this model.
A key element in our solution is devising {\em proactive distributed signature
(PDS) schemes} in our model. Although PDS schemes are known in the literature,
they are all designed for a model where authenticated communication and
broadcast primitives are available. We therefore show how these schemes can be
modified to work in our model, where no such primitives are available a-priori.
In the process of devising the above schemes, we also present a new definition
of PDS schemes (and of distributed signature schemes in general). This
definition may be of independent interest.
The Random Oracle Methodology, Revisited
We take a critical look at the relationship between the security of
cryptographic schemes in the Random Oracle Model, and the security of the
schemes that result from implementing the random oracle by so called
"cryptographic hash functions".
The main result of this paper is a negative one: There exist signature and
encryption schemes that are secure in the Random Oracle Model, but for which
any implementation of the random oracle results in insecure schemes.
In the process of devising the above schemes, we consider possible definitions
for the notion of a "good implementation" of a random oracle, pointing out
limitations and challenges.
Chameleon Hashing and Signatures
We introduce CHAMELEON SIGNATURES that provide an undeniable
commitment of the signer to the contents of the signed document (as regular
digital signatures do) but, at the same time, do not allow the recipient
of the signature to disclose the contents of the signed information to any
third party without the signer's consent. These signatures are closely
related to Chaum's "undeniable signatures", but chameleon signatures allow
for simpler and more efficient realizations than the latter.
In particular, they are essentially non-interactive and do not involve the
design and complexity of zero-knowledge proofs on which traditional undeniable
signatures are based. Instead, chameleon signatures are generated
under the standard method of hash-then-sign. Yet, the hash functions
which are used are CHAMELEON HASH FUNCTIONS. These hash functions are
characterized by the non-standard property of being collision-resistant
for the signer but collision tractable for the recipient.
We present simple and efficient constructions of chameleon hashing and
chameleon signatures. The former can be constructed based on standard
cryptographic assumptions (such as the hardness of factoring or discrete
logarithms) and have efficient realizations based on these assumptions.
For the signature part we can use any digital signature (such as RSA or DSS)
and prove the unforgeability property of the resultant chameleon signatures
solely based on the unforgeability of the underlying digital signature
in use.
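For concreteness, one standard discrete-log-based chameleon hash (a sketch of the general idea, not necessarily the paper's exact construction, with toy parameters far below secure sizes) hashes as CH(m, r) = g^m h^r mod p, where the trapdoor is x = log_g h:

```python
# Toy discrete-log chameleon hash (illustrative parameters, NOT secure sizes).
# Public key: (p, q, g, h) with h = g^x mod p; trapdoor: x.
p, q = 467, 233           # p = 2q + 1, both prime
g = 4                     # generator of the order-q subgroup of Z_p^*
x = 57                    # trapdoor (known only to the recipient)
h = pow(g, x, p)

def chash(m, r):
    """CH(m, r) = g^m * h^r mod p: collision-resistant without the trapdoor."""
    return pow(g, m, p) * pow(h, r, p) % p

def collide(m, r, m2):
    """With trapdoor x, find r2 such that CH(m2, r2) = CH(m, r):
    m + x*r = m2 + x*r2 (mod q)  =>  r2 = r + (m - m2) * x^{-1} (mod q)."""
    return (r + (m - m2) * pow(x, -1, q)) % q
```

The trapdoor holder can open the hash to any message of its choice, which is exactly the "collision tractable for the recipient" property the abstract describes.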
A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols
We present a general framework for constructing and analyzing authentication
protocols in realistic models of communication networks. This framework
provides a sound formalization for the authentication problem and suggests
simple and attractive design principles for general authentication and key
exchange protocols. The key element in our approach is a modular treatment of
the authentication problem in cryptographic protocols; this applies to the
definition of security, to the design of the protocols, and to their analysis.
In particular, following this modular approach, we show how to systematically
transform solutions that work in a model of idealized authenticated
communications into solutions that are secure in the realistic setting of
communication channels controlled by an active adversary.
Using these principles we construct and prove the security of simple and
practical authentication and key-exchange protocols. In particular, we provide
a security analysis of some well-known key exchange protocols (e.g.
authenticated Diffie-Hellman key exchange), and of some of the techniques
underlying the design of several authentication protocols that are currently
being deployed on a large scale for the Internet Protocol and other
applications.
An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products
We present efficient zero-knowledge proof systems for quasi-safe
prime products and other related languages. Quasi-safe primes are a
relaxation of safe primes, a class of prime numbers useful in many
cryptographic applications.
Our proof systems achieve higher security and better
efficiency than all previously known ones.
In particular, all our proof systems are perfect or statistical
zero-knowledge, meaning that even a computationally
unbounded adversary cannot extract any information
from the proofs.
Moreover, our proof systems are extremely efficient because they do
not use general reductions to NP-complete problems, can be easily
parallelized while preserving zero-knowledge, and are non-interactive for
computationally unbounded provers. The prover can also be efficiently
implemented given some trapdoor information and using very little
interaction.
We demonstrate the applicability of quasi-safe primes
by showing how they can be effectively used in the
context of RSA based undeniable signatures to enforce
the use of ``good'' public keys, i.e.,
keys such that if a signer can convince a recipient
of the validity of a signature, then he won't be able to
subsequently deny the same signature in case of a dispute.
Fast Batch Verification for Modular Exponentiation and Digital Signatures
Many tasks in cryptography (e.g., digital signature
verification) call for verification of a basic operation like modular
exponentiation in some group: given (g,x,y) check that g<sup>x</sup>=y.
This is typically done by re-computing g<sup>x</sup> and checking that we get y.
We would like to do it differently, and faster.
The approach we use is batching. Focusing first on the basic modular
exponentiation operation, we provide some probabilistic batch verifiers, or
tests, that verify a sequence of modular exponentiations significantly faster
than the naive re-computation method. This yields speedups for several
verification tasks that involve modular exponentiations.
Focusing specifically on digital signatures, we then suggest a weaker notion of
(batch) verification which we call ``screening.'' It seems useful for many
usages of signatures, and has the advantage that it can be done very fast;
in particular, we show how to screen a sequence of RSA signatures at the
cost of one RSA verification plus hashing.
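The flavor of such probabilistic batch tests can be sketched with a random small-exponents check: instead of n full exponentiations, combine all claims into one equation weighted by short random exponents. This is a hedged illustration of the batching idea, not the paper's exact tests; the parameter `sec_bits` controls the error probability.

```python
import secrets

def batch_verify(g, p, pairs, sec_bits=60):
    """Probabilistic batch check of claims g^x_i = y_i (mod p).

    Draw short random exponents s_i and verify one combined equation:
        g^(sum_i s_i * x_i)  ==  prod_i y_i^s_i   (mod p).
    A batch containing a false claim passes with probability about
    2^-sec_bits when g generates a group of large prime order.
    """
    s = [secrets.randbits(sec_bits) for _ in pairs]
    lhs = pow(g, sum(si * xi for si, (xi, _) in zip(s, pairs)), p)
    rhs = 1
    for si, (_, yi) in zip(s, pairs):
        rhs = rhs * pow(yi, si, p) % p
    return lhs == rhs
```

The cost is one full-size exponentiation plus n short ones, rather than n full-size exponentiations, which is where the speedup comes from.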
A Practical Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack
A new public key cryptosystem is presented that is provably secure
against adaptive chosen ciphertext attack.
The scheme is quite practical, and the proof of security relies
only on standard intractability assumptions.
On the possibility of basing Cryptography on the assumption that
Recent works by Ajtai and by Ajtai and Dwork
bring to light the old (general) question of whether it is
at all possible to base the
security of cryptosystems on the assumption that .
We discuss this question and in particular review and extend
a two-decade old result of Brassard regarding this question.
Our conclusion is that the question remains open.
Universal Service Providers for Database Private Information Retrieval
We consider the question of private information retrieval in the
so-called ``commodity-based'' model. This model was recently proposed
by Beaver for practically-oriented service-provider internet
applications. In this paper, we show the following, somewhat
surprising, results regarding this model for the problem of
private-information retrieval: (1) the service-provider model makes it possible
to dramatically reduce the overall communication involving the user,
using off-line pre-processing messages from ``service-providers'' to
databases, where the service-providers do not need to know the
database contents, nor the future user's requests; (2) our
service-provider solutions are resilient against more than a majority
(in fact, all-but-one) coalitions of service-providers; and (3) these
results hold for {\em both} the computational and the
information-theoretic setting.
More specifically, we exhibit a service-provider algorithm which can
``sell'' (i.e., generate and send) ``commodities'' to users and
databases, that subsequently allow users to retrieve database contents
in a way which hides from the database which particular item the user
retrieves. The service-providers need not know anything about the
contents of the databases nor the nature of the user's requests in
order to generate commodities. Our commodity-based solution
significantly improves communication complexity of the users (i.e.,
counting both the size of commodities bought by the user from the
service-providers and the subsequent communication with the databases)
compared to all previously known on-line private information retrieval
protocols (i.e., without the help of the service-providers).
Moreover, we show how commodities from different service-providers can
be {\em combined} in such a way that even if ``all-but-one'' of the
service-providers collude with the database, the user's privacy
remains intact. Finally, we show how to re-use commodities in case of
multiple requests (i.e., in the amortized sense), how to ``check''
commodity-correctness, and how some of the solutions can be extended
to the related problem of {\em Private Information Storage}.
Private Information Retrieval by Keywords
Private information retrieval (PIR) schemes enable a user to
access one or more servers that hold copies of
a database and {\em privately} retrieve parts of the
data stored in the database. This means that the queries
give each individual
database no partial information (in the information theoretic or computational
sense) on the identity of the item retrieved by the user.
All known PIR schemes assume that the user knows the {\em physical address}
of the sought item. This is usually not the case when accessing a public
database that is not managed by the user. Such databases are typically
presented with keywords, which are then internally translated (at the
database end) to physical addresses, using an appropriate search
structure (for example, a hash table or a binary tree). In this note we
describe a simple, modular way to privately access data by keywords.
It combines {\em any} conventional search structure with {\em any}
underlying PIR scheme (including single server schemes). The transformation
requires no modification in the way that the search structure is maintained.
Therefore the same database will support both private and regular (non
private) searches.
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
The input to the Graph Clustering Problem
consists of a sequence of integers
and a sequence of graphs.
The question is whether the equivalence classes,
under the graph isomorphism relation,
of the input graphs have sizes which match the input sequence of integers.
In this note we show that this problem has a (perfect) zero-knowledge
interactive proof system.
This result improves over <a href="../1996/96-14.html">record 96-14</a>,
where a parametrized (by the sequence of integers)
version of the problem was studied.
On Protocol Divertibility
In this paper, we establish the notion of divertibility as a protocol
property, as opposed to the existing notion as a language property (see
Okamoto and Ohta). We give a definition of protocol divertibility that
applies to arbitrary 2-party protocols and is compatible with Okamoto and
Ohta's definition in the case of interactive zero-knowledge proofs. Other
important examples
falling under the new definition are blind signature protocols. A sufficient
criterion for divertibility is presented and found to be satisfied by many
examples of protocols in the literature. The generality of the definition is
further demonstrated by examples from protocol classes that have not been
considered for divertibility before. We show diverted El-Gamal encryption and
diverted Diffie-Hellman key exchange.
Optimistic Fair Exchange of Digital Signatures
We present a new protocol that allows two players to exchange digital
signatures (including RSA and DSS) over the Internet in a fair way, so
that either each player gets the other's signature, or neither player
does. One obvious application is where the signatures represent items
of value, for example, an electronic check or airline ticket; the
protocol can also be adapted to exchange encrypted data. The protocol
relies on a trusted third party, but is "optimistic," in that the
third party is only needed in cases where one player attempts to cheat
or simply crashes. This is an important property, as it greatly
reduces the load on the third party, which in particular facilitates
a more robust and secure implementation of the third party.
Generalized Diffie-Hellman Modulo a Composite is not Weaker than Factoring
The Diffie-Hellman key-exchange protocol may naturally be
extended to k>2 parties. This gives rise to the generalized
Diffie-Hellman assumption (GDH-Assumption).
Naor and Reingold have recently shown an efficient construction
of pseudo-random functions and reduced the security of their
construction to the GDH-Assumption.
In this note, we prove that breaking this assumption modulo a composite
would imply an efficient algorithm for factorization.
Therefore, the security of both the key-exchange protocol and
the pseudo-random functions can be reduced to factoring.
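The Naor-Reingold function referred to above maps an n-bit input x to g raised to a product of secret exponents, one per set bit of x. A toy sketch (illustrative parameter sizes only, nowhere near secure):

```python
import secrets

# Toy group: g generates the subgroup of prime order q in Z_p^*, p = 2q + 1.
# (Real instantiations use parameters of thousands of bits.)
p, q, g = 467, 233, 4

def nr_keygen(n):
    """Key: n + 1 random exponents a_0, a_1, ..., a_n in Z_q^*."""
    return [secrets.randbelow(q - 1) + 1 for _ in range(n + 1)]

def nr_prf(key, bits):
    """Naor-Reingold PRF: f(x) = g^(a_0 * prod_{i: x_i = 1} a_i mod q) mod p."""
    e = key[0]
    for a_i, b in zip(key[1:], bits):
        if b:
            e = e * a_i % q
    return pow(g, e, p)
```

Distinguishing outputs of `nr_prf` from random group elements reduces to the generalized Diffie-Hellman assumption, which the note above in turn reduces (modulo a composite) to factoring.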
Visual Authentication and Identification
The problems of authentication and identification have
received wide interest in cryptographic research.
However, there has been no satisfactory solution for the problem of
authentication by a human recipient
who does not use any trusted computational device.
The problem of authentication arises for example in
the context of smartcard--human interaction, in particular in
the context of electronic wallets. The problem of identification is ubiquitous
in communication over insecure networks.
This paper introduces visual authentication and visual
identification methods, which are
authentication and identification methods for human users
based on visual cryptography. These methods are
very natural and easy to use, and can be implemented using very
common ``low tech'' technology. The methods we suggest are efficient
in the sense that a single transparency can be used
for several authentications or for several identifications. The visual
authentication methods we suggest are not limited to authenticating
textual messages, and can be used to authenticate any image.
An important contribution of this paper is the introduction of a
framework for proving the security of protocols in which humans take an
active part. We rigorously prove the security of our schemes using this
framework.
Self-Delegation with Controlled Propagation - or - What If You Lose Your Laptop.
We introduce delegation schemes wherein a user may delegate rights to
himself, i.e., to other public keys he owns, but may
not safely delegate those rights to others, i.e., to their
public keys. In our motivating application, a user
has a primary (long-term) key that receives rights, such as access
privileges, that may not be delegated to others, yet the user may
reasonably wish to delegate these rights to new
secondary (short-term) keys he creates to use on his laptop when
traveling, to avoid having to store his primary secret key on the
vulnerable laptop.
We propose several cryptographic schemes, both generic and practical,
that allow such self-delegation while providing strong motivation for
the user not to delegate rights that he only obtained for personal use
to other parties.
Identity Escrow
We introduce the notion of escrowed identity, an application of
key-escrow ideas to the problem of identification. In escrowed
identity, one party, A, does not give his identity to another
party B, but rather gives him information that would allow an
authorized third party, E, to determine A's identity. However, B
receives a guarantee that E can indeed determine A's identity. We
give protocols for escrowed identity based on the El-Gamal (signature
and encryption) schemes and on the RSA function. A useful feature
of our protocol is that after setting up A to use the system, E is
only involved when it is actually needed to determine A's identity.
CBC MAC for Real-Time Data Sources
The Cipher Block Chaining (CBC) Message Authentication Code (MAC)
is an authentication method which is widely used in practice.
It is well known that the naive use of CBC MAC for variable
length messages is not secure, and a few rules of thumb for the
correct use of CBC MAC are known by folklore. The first rigorous
proof of the security of CBC MAC, when used on fixed length messages,
was given only recently by Bellare, Kilian and Rogaway. They also
suggested variants of CBC MAC that handle variable length messages
but in these variants the length of the message has to be known
in advance (i.e., before the message is processed).
We study CBC authentication of real time applications in which the
length of the message is not known until the message ends, and
furthermore, since the application is real-time, it is not possible
to start processing the authentication only after the message ends.
We first present a variant of CBC MAC, called {\em double MAC} (DMAC)
which handles messages of variable unknown lengths. Computing DMAC on
a message is virtually as simple and as efficient as computing the
standard CBC MAC on the message. We provide a rigorous proof that its
security is implied by the security of the underlying block cipher.
Next, we argue that the basic CBC MAC is secure when applied to prefix
free message space. A message space can be made prefix free by
authenticating also the (usually hidden) last character which marks
the end of the message.
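To illustrate the flavor of the construction (a sketch, not the paper's exact specification), the following toy Python code uses a trivial keyed permutation in place of a real block cipher and finishes the CBC chain by re-encrypting the result under an independent second key, which is the commonly described shape of DMAC:

```python
# Toy CBC-MAC / DMAC skeleton over a 1-byte "block cipher".
# toy_cipher is a keyed permutation standing in for a real cipher; it is
# NOT secure and exists only to make the chaining structure runnable.

def toy_cipher(key, block):
    """Keyed permutation on 0..255 (17 is invertible mod 256)."""
    return (block * 17 + key) % 256

def cbc_mac(key, msg):
    """Plain CBC MAC: secure only for fixed-length or prefix-free messages."""
    state = 0
    for block in msg:
        state = toy_cipher(key, state ^ block)
    return state

def dmac(key1, key2, msg):
    """DMAC sketch: re-encrypt the CBC chain under an independent second key,
    so messages of unknown, variable length can be authenticated."""
    return toy_cipher(key2, cbc_mac(key1, msg))
```

Note that the CBC state can be updated block by block as data arrives, which is what makes the scheme suitable for the real-time setting the abstract describes.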
Collision-Resistant Hashing: Towards Making UOWHFs Practical
Uncategorized
Recent attacks on the cryptographic hash functions MD4 and MD5
make it clear that (strong) collision-resistance is a hard-to-achieve goal. We
look towards a weaker notion, the <i>universal one-way hash
functions</i> (UOWHFs) of Naor and Yung, and investigate their practical
potential. The goal is to build UOWHFs not based on number theoretic
assumptions, but from the primitives underlying current cryptographic hash
functions like MD5 and SHA. Pursuing this goal leads us to new questions. The
main one is how to extend a compression function to a full-fledged hash
function in this new setting. We show that the classic Merkle-Damgard
method used in the standard setting fails for these weaker kinds of hash
functions, and we present some new methods that work. Our main construction is
the "XOR tree." We also consider the problem of input length-variability and
present a general solution.
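The classic Merkle-Damgard method mentioned above iterates a compression function over fixed-size blocks with length padding; the paper's point is that this extension, while fine for collision resistance, fails for UOWHFs. A minimal sketch of the iteration (the compression function is modeled with SHA-256 purely for illustration):

```python
import hashlib

CHAIN, BLOCK = 32, 64  # chaining-value and message-block sizes (bytes)

def compress(chain: bytes, block: bytes) -> bytes:
    # Toy compression function f : {0,1}^{256} x {0,1}^{512} -> {0,1}^{256}.
    return hashlib.sha256(chain + block).digest()

def merkle_damgard(msg: bytes, iv: bytes = bytes(CHAIN)) -> bytes:
    # Merkle-Damgard strengthening: pad with 0x80, zeros, and the
    # message bit-length, then fold the compression function over
    # the resulting blocks.
    bitlen = 8 * len(msg)
    padded = msg + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += bitlen.to_bytes(8, "big")
    h = iv
    for i in range(0, len(padded), BLOCK):
        h = compress(h, padded[i:i + BLOCK])
    return h
```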
Factoring via Strong Lattice Reduction Algorithms
Uncategorized
We address the problem of factoring a large composite number
by lattice reduction algorithms.
Schnorr has shown that under reasonable number
theoretic assumptions this problem can
be reduced to a simultaneous diophantine
approximation problem. The latter in turn can be solved by finding
sufficiently many l_1--short vectors in a suitably defined lattice.
Using lattice basis reduction algorithms, Schnorr and Euchner applied
Schnorr's reduction technique to 40--bit long integers.
Their implementation needed several hours to compute a 5% fraction
of the solution, i.e., 6 out of 125
congruences which are necessary to factorize the composite.
In this report we describe a more efficient implementation using
stronger lattice basis reduction techniques incorporating ideas
of Schnorr, Hoerner and Ritter.
For 60--bit long integers our algorithm yields a complete factorization
in less than 3 hours.
Towards realizing random oracles: Hash functions that hide all partial information
Uncategorized
The random oracle model is a very convenient setting for designing
cryptographic protocols. In this idealized model all parties have access
to a common, public random function, called a random oracle.
Protocols in this model are often very simple and efficient; also the
analysis is often clearer. However, we do not have a general mechanism for
transforming protocols that are secure in the random oracle model into
protocols that are secure in real life. In fact, we do not even know how
to meaningfully specify the properties required from such a mechanism.
Instead, it is a common practice to simply replace - often without
mathematical justification - the random oracle with a `cryptographic hash
function' (e.g., MD5 or SHA). Consequently, the resulting protocols have
no meaningful proofs of security.
Protecting Data Privacy in Private Information Retrieval Schemes
Uncategorized
Private Information Retrieval (PIR) schemes allow a user to retrieve the
i-th bit of a data string x, replicated in k>=2 databases, while keeping
the value of i private. The main cost measure for such a scheme is its
communication complexity.
We study PIR schemes where in addition to the user's privacy we require
data privacy. That is, in every invocation of the retrieval protocol the
user learns exactly a single physical bit of x and no other information.
Further, we require that even a dishonest user would not learn more than a
single physical data bit.
We present general transformations that allow translating PIR schemes
satisfying certain properties into PIR schemes that respect data privacy
as well, with a small penalty in the communication complexity. Using our
machinery we are able to translate currently known PIR solutions into
schemes satisfying the newly introduced, stronger privacy constraint. In
particular we get: a k-database scheme of complexity
O(log(n) n^{1/(2k-1)}) for every k>=2; an O(log(n))-database scheme of
poly-logarithmic complexity; a 2-database computational PIR of complexity
O(n^c), for every constant c>0. All these require only a single
round of interaction.
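For intuition, the basic two-database idea (without the data-privacy guarantee the paper adds) can be sketched as follows. Note that the user here learns the parity of a random subset of x, i.e. more than a single physical bit: exactly the leakage the stronger requirement rules out. This toy also has linear communication, unlike the schemes above:

```python
import secrets

def server_answer(x, subset):
    # Each database returns the XOR of the bits its query selects.
    a = 0
    for j in subset:
        a ^= x[j]
    return a

def pir_read(x, i):
    # The user picks a uniformly random subset S1 and sets
    # S2 = S1 symmetric-difference {i}. Each server alone sees a
    # uniformly random subset, hence learns nothing about i;
    # XORing the two answers recovers x[i].
    n = len(x)
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}  # flips membership of index i
    return server_answer(x, s1) ^ server_answer(x, s2)
```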
A Probabilistic Error-Correcting Scheme
Uncategorized
In the course of research in Computational Learning Theory,
we found ourselves in need of an error-correcting encoding scheme for which
few bits in the codeword yield no information about the plain message.
Being unaware of a previous solution,
we came up with the scheme presented here.
Since this scheme may be of interest to people working in Cryptography,
we thought it may be worthwhile to ``publish'' this part of our work
within the Cryptography community.
Clearly, a scheme as described above cannot be deterministic.
Thus, we introduce a probabilistic coding scheme which,
in addition to the standard coding theoretic requirements,
has the feature that any constant fraction
of the bits in the (randomized) codeword yields no information about
the message being encoded.
This coding scheme is also used to obtain efficient constructions for
the Wire-Tap Channel Problem.
A note on negligible functions
Uncategorized
In theoretical cryptography, one formalizes the notion of an
adversary's success probability being ``too small to matter'' by asking that it
be a negligible function of the security parameter. We argue that the issue
that really arises is what it might mean for a collection of functions
to be ``negligible.'' We consider (and define) two such notions, and prove them
equivalent. Roughly, this enables us to say that any cryptographic primitive
has a specific associated ``security level.'' In particular we say this
for any one-way function. We also reconcile different definitions of negligible
error arguments and computational proofs of knowledge that have appeared in the
literature. Although the motivation is cryptographic, the main result is
purely about negligible functions.
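For reference, the standard single-function notion the paper starts from can be stated as follows (this is the textbook formulation, not necessarily the paper's exact one; the paper's contribution concerns collections of such functions):

```latex
\nu : \mathbb{N} \to \mathbb{R}_{\ge 0}\ \text{is negligible} \iff
\forall c > 0\ \exists N\ \forall n > N :\ \nu(n) < n^{-c}.
```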
Efficient Cryptographic Protocols Based on Noisy Channels.
Uncategorized
The Wire-Tap Channel of Wyner shows that a Binary Symmetric Channel
may be used as a basis for exchanging a secret key. Later, Crepeau and Kilian
showed how a BSC may be used to implement Oblivious Transfer. Unfortunately,
this result is rather impractical as it requires bits to be sent
through the BSC to accomplish a single OT. The current paper provides efficient
protocols to achieve Bit Commitment and Oblivious Transfer based on the
existence of a BSC. Our protocols respectively use the BSC times and
times. These results are based on a technique known as Generalized
Privacy Amplification.
Round-Optimal Zero-Knowledge Arguments Based on any One-Way Function
Uncategorized
We fill a gap in the theory of zero-knowledge protocols by presenting
NP-arguments that achieve negligible error probability and computational
zero-knowledge in four rounds of interaction, assuming only the existence of a
one-way function. This result is optimal in the sense that four rounds and a
one-way function are each individually necessary to achieve a negligible error
zero-knowledge argument for NP.
A New Paradigm for Collision-free Hashing: Incrementality at Reduced Cost
Uncategorized
We present a simple, new paradigm for the design of collision-free hash
functions. Any function emanating from this paradigm is <i>incremental.</i>
(This means that if a message x which I have previously hashed is modified to
x' then rather than having to re-compute the hash of x' from scratch, I
can quickly ``update'' the old hash value to the new one, in time proportional
to the amount of modification made in x to get x'.) Also, any function
emanating from this paradigm is parallelizable, which is useful for hardware
implementation.
Public-Key Cryptosystems from Lattice Reduction Problems
Uncategorized
We present a new proposal for a trapdoor one-way function, from which
we derive public-key encryption and digital signatures.
The security of the new construction is based on the
conjectured computational difficulty of lattice-reduction problems,
providing a possible alternative to existing
public-key encryption algorithms
and digital signatures such as RSA and DSS.
Verifiable Partial Key Escrow
Uncategorized
One of the main objections to existing proposals for key escrow is that the
individual's privacy relies on too high a level of trust in the law enforcement
agencies. In particular, even if the government is trustworthy today, it may be
replaced by an untrustworthy government tomorrow, which could immediately and
suddenly recover the secret keys of all users.
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
Uncategorized
The Graph Clustering Problem is parameterized by a sequence
of positive integers.
The input is a sequence of graphs,
and the question is whether the equivalence classes
under the graph isomorphism relation have sizes which match
the sequence of parameters.
In this note
we show that this problem has a (perfect) zero-knowledge
interactive proof system.
On the Contrast in Visual Cryptography Schemes
Uncategorized
A visual cryptography scheme is a method to encode a secret image SI into
shadow images called shares such that certain qualified subsets of shares
enable the ``visual'' recovery of the secret image.
The ``visual'' recovery consists of xeroxing the shares onto transparencies,
and then stacking them. The shares of a qualified set will reveal the secret
image without any cryptographic computation.
In this paper we analyze the contrast of the reconstructed image
in k out of n visual cryptography schemes. (In such a scheme
any k shares will reveal the image, but no set of k-1 shares
gives any information about the image.)
In the case of 2 out of n threshold schemes we give a complete
characterization of schemes having optimal contrast and minimum
pixel expansion in terms of certain balanced incomplete block designs.
In the case of k out of n threshold schemes with k>2 we obtain
upper and lower bounds on the optimal contrast.
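The objects being analyzed can be illustrated by the minimal 2-out-of-2 scheme with pixel expansion 2 (our sketch, not one of the paper's constructions). Stacking reconstructs black pixels perfectly but renders white pixels half-black, which is precisely the contrast being measured:

```python
import secrets

def share_pixel(bit):
    # Share one secret pixel (1 = black) into two shares of two
    # subpixels each. One share alone is a uniformly random pattern,
    # independent of the secret bit.
    pattern = (0, 1) if secrets.randbits(1) else (1, 0)
    if bit == 0:
        return pattern, pattern                    # white: identical shares
    return pattern, tuple(1 - s for s in pattern)  # black: complementary

def stack(p1, p2):
    # Stacking transparencies acts as a pixelwise OR.
    return tuple(a | b for a, b in zip(p1, p2))
```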
Proactive RSA
Uncategorized
We consider a "mobile adversary" which may corrupt all
participants throughout the lifetime of the system in a non-monotonic
fashion (i.e. recoveries are possible) but the adversary is unable to
corrupt too many participants during any short time period.
Schemes resilient to such an adversary are called proactive.
We present a proactive RSA system in which a threshold of servers
applies the RSA signature (or decryption) function in a distributed manner.
Employing new combinatorial and elementary number theoretic
techniques, our protocol enables the dynamic updating
of the servers (which hold the RSA key distributively);
it is secure even when a linear number of
the servers are corrupted during any time period;
it efficiently "self-maintains" the
security of the function and its
messages (ciphertexts or signatures); and it enables continuous
availability, namely, correct function application using the shared
key is possible at any time.
On the Construction of Pseudo-Random Permutations: Luby-Rackoff Revisited
Uncategorized
Luby and Rackoff showed a method for constructing a pseudo-random permutation
from a pseudo-random function. The method is based on composing four
(or three for weakened security) so called Feistel permutations
each of which requires the evaluation of a pseudo-random function.
We reduce somewhat the complexity of the construction
and simplify its proof of security by showing
that two Feistel permutations are sufficient together with initial
and final pair-wise independent permutations.
The revised construction and proof provide a framework in which
similar constructions may be brought up and their security can
be easily proved.
We demonstrate this by presenting some additional adjustments
of the construction that achieve the following:
1. Reduce the success probability of the adversary.
2. Provide a construction of pseudo-random permutations with large input
size using pseudo-random functions with small input size.
3. Provide a construction of a pseudo-random permutation
using a single pseudo-random function.
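The building block referred to throughout, a single Feistel round, can be sketched as follows (the pseudo-random function is modeled with a hash purely for illustration; the construction above additionally composes two such rounds with pairwise independent permutations):

```python
import hashlib

HALF = 8  # bytes per half-block

def prf(key: bytes, x: bytes) -> bytes:
    # Pseudo-random function, modeled here with a hash.
    return hashlib.sha256(key + x).digest()[:HALF]

def feistel(key: bytes, l: bytes, r: bytes):
    # One Feistel round: (L, R) -> (R, L xor f_K(R)). This is a
    # permutation for ANY round function f, and it can be inverted
    # without ever inverting f.
    return r, bytes(a ^ b for a, b in zip(l, prf(key, r)))

def feistel_inv(key: bytes, l: bytes, r: bytes):
    # Undo the round: recover the old left half from the new right half.
    return bytes(a ^ b for a, b in zip(r, prf(key, l))), l
```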
Oblivious Transfers and Intersecting Codes
Uncategorized
Assume A owns t secret k-bit strings.
She is willing to disclose one of them to B, at his choosing,
provided he does not learn anything about the other strings.
Conversely, B does not want A to learn which secret he chose to learn.
A protocol for the above task is said to implement
One-out-of-t String Oblivious Transfer. An apparently simpler task
corresponds to the case k=1 and t=2 of two one-bit secrets:
this is known as One-out-of-two Bit OT.
We address the question of implementing the former assuming the
existence of the latter.
In particular, we prove that the general protocol can be implemented from
O(tk) calls to One-out-of-two Bit OT. This is
optimal up to a small multiplicative constant.
Our solution is based on the notion of self-intersecting codes.
Of independent interest, we give several efficient new constructions for
such codes.
Another contribution of this paper is a set
of information-theoretic definitions for correctness and
privacy of unconditionally-secure oblivious transfer.
Collision-Free Hashing from Lattice Problems
Uncategorized
Recently Ajtai described a construction of one-way functions whose
security is equivalent to the difficulty of some well known approximation
problems in lattices. We show that essentially the same
construction can also be used to obtain collision-free hashing.
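The function in question can be sketched as a modular matrix-vector hash. The parameters below are toy values chosen only to make the function compress; this is our illustration of the Ajtai-style construction, not the paper's exact specification:

```python
import secrets

N, M, Q = 8, 128, 257  # toy parameters; secure choices are far larger

# Public random matrix A in Z_q^{N x M}.
A = [[secrets.randbelow(Q) for _ in range(M)] for _ in range(N)]

def lattice_hash(x):
    # h_A(x) = A x mod q for a binary vector x of length M. Since
    # M > N * log2(Q), the function compresses; a collision x != x'
    # yields a short nonzero vector x - x' in the lattice
    # {z : A z = 0 mod q}, which is why finding collisions is
    # believed to be as hard as the underlying lattice problems.
    assert len(x) == M and all(b in (0, 1) for b in x)
    return tuple(sum(A[i][j] * x[j] for j in range(M)) % Q for i in range(N))
```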
Access Control and Signatures via Quorum Secret Sharing
Uncategorized
We suggest a method of controlling the access to a secure
database via quorum systems. A quorum system is a collection of sets
(quorums) every two of which have a nonempty intersection.
Quorum systems have been used for a number of applications in the area of
distributed systems.
We propose a separation between access servers which are protected and
trustworthy, but may be outdated, and the data servers which may all
be compromised. The main paradigm is that only the servers in a
complete quorum can collectively grant (or revoke) access permission.
The method we suggest ensures that after authorization is revoked, a
cheating user Alice will not be able to access the data even if many
access servers still consider her authorized, and even if the complete
raw database is available to her. The method has a low overhead in
terms of communication and computation. It can also be converted into
a distributed system for issuing secure signatures.
Visual Cryptography II: Improving the Contrast Via the Cover Base
Uncategorized
In Eurocrypt'94 we proposed a new type of cryptographic scheme,
which can decode concealed images without any cryptographic computations,
by placing two transparencies on top of each other and using the decoder's
(human) visual systems.
One of the drawbacks of that proposal was a loss in contrast: a black pixel
is translated in the reconstruction into a black region, but a white
pixel is translated into a grey region (half black and half white).
In this paper we propose an alternative model for reconstruction with a
different set of operations (which we call the ``Cover" semi-group).
In this model we are able to obtain a better contrast than
is possible in the previous one.
We prove tight bounds on the contrast
as a function of the complexity of the scheme. We also show
that for constructing k-out-of-n secret sharing schemes when
n and k are greater than 2 the new method is not applicable.
Upper bound on the communication complexity of private information retrieval
Uncategorized
Private information retrieval was introduced
by Chor, Goldreich, Kushilevitz and Sudan.
It is the problem of reading a bit from the database so
that it remains secret which bit we need.
If the database exists in several identical copies,
it is possible to ask queries so that each of the copies
alone does not get any information about the address
of the needed bit.
We construct a scheme for private information retrieval
with k databases and O(n^{1/(2k-1)}) bits of communication.
Private Information Storage
Uncategorized
We consider the setting of hiding information through the use of
multiple databases that do not interact with one another. Previously,
in this setting efficient solutions for the retrieval of data
were given, in which a user achieves this by interacting with all
the databases. We consider the case of both writing and
reading. While the case of reading was well studied before, the case
of writing was previously completely open. In this paper, we show how
to implement both read and write operations. As in the previous
papers, we measure, as a function of k and n the amount of
communication required between a user and all the databases for a
single read/write operation, and achieve efficient read/write schemes.
Moreover, we show a general reduction from reading database scheme to
reading and writing database scheme, with the following guarantees:
for any k, given a retrieval only k-database scheme with communication
complexity R(k,n) we show a (k+1) reading and writing database scheme
with total communication complexity O(R(k,n) * (log n)^{O(1)}). It
should be stressed that prior to the current paper no non-trivial
(i.e. sub-linear) bounds for private information storage were known.
Linear Zero-Knowledge - A note on Efficient Zero-Knowledge Proofs and Arguments
Uncategorized
We present a zero-knowledge proof system for any NP language L, which
allows showing that x is in L using communication corresponding
to bit commitments, with error probability ,
and where c is a constant depending only on L.
The proof can be based on any bit
commitment scheme with a particular set of properties. We suggest an
efficient implementation based on factoring. The protocol allows showing
that a Boolean formula of size n is satisfiable,
with error probability , using O(n) commitments.
This is the first protocol for SAT that is linear in this sense.<br>
[The rest of the abstract was truncated and appears below -- the library.]
On Monotone Function Closure of Statistical Zero-Knowledge
Uncategorized
Assume we are given a language L with an honest verifier
perfect zero-knowledge proof system. Assume also that the proof system is an
Arthur-Merlin game with at most 3 moves. The class of such languages
includes all random self-reducible languages, and also any language with a
perfect zero-knowledge non-interactive proof.
We show that such a language satisfies a certain closure property, namely
that languages constructed from L by applying certain monotone functions to
statements on membership in L have perfect zero-knowledge proof systems.
The new set of languages we can build includes L itself, but also for
example languages consisting of n words of which at least t are in L.
A similar closure property is shown to hold for the complement of L and for
statistical zero-knowledge. The property we need for the monotone functions used
to build the new languages is that there are efficient secret sharing schemes
for their associated access structures. This includes (but is not necessarily
limited to) all monotone functions with polynomial size monotone formulas.
Deniable Encryption
Uncategorized
Consider a situation in which the transmission of encrypted
messages is intercepted by an adversary who can later
ask the sender to reveal
the random choices (and also the secret key, if one exists)
used in generating
the ciphertext, thereby exposing the cleartext.
An encryption scheme is <B>deniable</B> if the sender can generate
`fake random choices' that will make the ciphertext `look like'
an encryption of a different cleartext, thus keeping the
real cleartext private.
Analogous requirements can be formulated with respect to
attacking the receiver and with respect to attacking both parties.
In this paper we introduce deniable encryption and propose
constructions of schemes with polynomial deniability. In addition to
being interesting by itself, and having several applications, deniable
encryption provides a simplified and elegant construction of
<B>adaptively secure</B> multiparty computation.
Incoercible Multiparty Computation
Uncategorized
Current secure multiparty protocols have the following deficiency.
The public transcript of the communication can be used as an involuntary
<B>commitment</B> of the parties to their inputs and outputs. Thus parties
can be later coerced by some authority to reveal their private data.
Previous work that pointed out this interesting problem contained only
a partial treatment.
In this work we present the first general and rigorous treatment of the
coercion problem in secure computation.
First we present a general definition of protocols that
provide resilience to coercion. Our definition
constitutes a natural extension of the general paradigm used
for defining secure multiparty protocols.
Next we show that if trapdoor permutations exist then
any function can be incoercibly computed
(i.e., computed by a protocol that provides resilience to coercion)
in the presence of computationally
bounded adversaries and only public communication channels.
This holds as long as less than half the parties are coerced (or corrupted).
In particular, ours are the first incoercible protocols without
physical assumptions. Also, our protocols constitute an alternative
solution to the recently solved adaptive security problem.