ACE: The Advanced Cryptographic Engine
This document describes
the Advanced Cryptographic Engine (ACE).
It specifies a public key encryption
scheme as well as a
digital signature scheme
with enough detail to ensure interoperability between different
implementations.
These schemes are almost as efficient as commercially used schemes,
yet unlike such schemes, can be proven secure under reasonable
and well-defined
intractability assumptions.
A concrete security analysis of both schemes is presented.
An Efficient Identification Scheme Based on Permuted Patterns
This paper proposes a new identification scheme based on a hard
partition problem rather than factoring or discrete logarithm
problems. The new scheme simultaneously minimizes the communication
complexity and the computational cost required by the parties.
Since only simple operations are needed for an identification,
our scheme is well suited for smart cards with very limited
processing power. With a "good" implementation, the scheme is much
faster than the Fiat-Shamir scheme or Shamir's PKP scheme.
On the Security of Diffie--Hellman Bits
Boneh and Venkatesan have recently proposed a polynomial time
algorithm for recovering a "hidden" element $\alpha$ of a finite field
$\mathbb{F}_p$ of $p$ elements from rather short strings of the most
significant bits of the remainder modulo $p$ of $\alpha t$ for several
values of $t$ selected uniformly at random from $\mathbb{F}_p^*$. We use some
recent bounds of exponential sums to generalize this algorithm to the case
when $t$ is selected from a quite small subgroup of $\mathbb{F}_p^*$.
Namely, our results apply to subgroups of size at least $p^{1/3+\varepsilon}$
for all primes $p$ and to subgroups of size at
least $p^{\varepsilon}$ for almost all primes $p$, for any fixed
$\varepsilon > 0$.
We also use this generalization to improve (and correct)
one of the statements of the aforementioned work about the
computational security of the most significant bits of the
Diffie--Hellman key.
Threshold Cryptography Secure Against the Adaptive Adversary, Concurrently
A threshold cryptosystem or signature scheme is a system with participants
where an honest majority can successfully decrypt a message or issue a
signature, but where the security and functionality properties of the
system are retained even as
the adversary corrupts up to a minority of the players.
We present the novel technique of a committed proof,
which is a new general tool that enables security of threshold
cryptosystems in the presence of the adaptive adversary.
We also put forward a new measure of security for threshold schemes
secure in the adaptive adversary model: security under concurrent
composition.
Using committed proofs, we construct concurrently and adaptively secure
threshold protocols for a variety of cryptographic applications.
In particular, based on the recent scheme by Cramer-Shoup, we
construct adaptively secure threshold cryptosystems secure against
adaptive chosen ciphertext attack under the DDH intractability
assumption.
Last updated: 2000-07-09
Fast Verification of Any Remote Procedure Call: Short Witness-Indistinguishable One-Round Proofs for NP
The paper
is withdrawn.
As communicated to us by C. Dwork, M. Langberg,
M. Naor and K. Nissim [1] the protocol as presented in the paper
is not sufficient to prove the claims. We gratefully acknowledge
the authors of [1] for pointing out this error to us.
REFERENCES:
[1] C. Dwork, M. Langberg, M. Naor, and K. Nissim,
"Succinct Proofs for NP and Spooky Interactions" private communication,
July 4, 2000.
Lower Bounds on the Efficiency of Generic Cryptographic Constructions
We present lower bounds on the efficiency of
constructions for Pseudo-Random Generators (PRGs) and
Universal One-Way Hash Functions (UOWHFs) based on
black-box access to one-way permutations. Our lower bounds are tight as
they match the efficiency of known constructions.
A PRG (resp. UOWHF) construction based on black-box access is
a machine that is given oracle access to a permutation. Whenever
the permutation is hard to invert, the construction is hard to break.
In this paper we give lower bounds on the
number of invocations to
the oracle by the construction.
If $S$ is the assumed security of the oracle permutation $\pi$
(i.e. no adversary of size $S$ can invert $\pi$ on a fraction
larger than $1/S$ of its inputs)
then a PRG (resp. UOWHF) construction that
stretches (resp. compresses) its input by $k$ bits must query
$\pi$ in $q = \Omega(k/\log S)$ points. This matches known constructions.
Our results are given in an extension of the Impagliazzo-Rudich
model. That is, we prove that a proof of the existence of PRG (resp. UOWHF)
black-box constructions that beat our lower bound would imply
a proof of the unconditional existence of such a construction
(which would also imply $P \neq NP$).
Last updated: 2001-06-19
Cryptanalysis of RSA with small prime difference
We show that choosing an RSA modulus with a small difference of its prime factors
yields improvements on the small private exponent attacks of Wiener and Boneh-Durfee.
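As background for why a small prime difference is dangerous: Fermat's classical difference-of-squares method factors N = pq in very few steps when |p - q| is small, since the search starts at ceil(sqrt(N)), which is already close to (p + q)/2. A minimal sketch with toy primes (this illustrates the role of the prime difference only, not the paper's attack on small private exponents):

```python
import math

def fermat_factor(n, max_steps=10**6):
    """Recover p, q from n = p*q by searching for n = a^2 - b^2.

    When |p - q| is small, a = ceil(sqrt(n)) is already close to
    (p + q)/2, so very few iterations are needed.
    """
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(max_steps):
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:          # found n = a^2 - b^2, so n = (a-b)(a+b)
            return a - b, a + b
        a += 1
    return None

# Toy modulus with close primes (real moduli are of course far larger).
p, q = fermat_factor(10007 * 10037)
```

With primes this close, the loop succeeds almost immediately; the attacks of Wiener and Boneh-Durfee exploit much weaker closeness through lattice techniques.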
Identification Protocols Secure Against Reset Attacks
We provide identification protocols that are secure even
when the adversary can reset the internal state and/or randomization source of
the user identifying itself, and when executed in an asynchronous environment
like the Internet that gives the adversary concurrent access to instances of
the user. These protocols are suitable for use by devices (like smartcards)
which when under adversary control may not be able to reliably maintain their
internal state between invocations.
Authenticated Key Exchange Secure Against Dictionary Attacks
This paper gives definitions and results about password-based
protocols for authenticated key exchange (AKE), mutual authentication
(MA), and the combination of these goals (AKE, MA).
Such protocols are designed to work despite interference by an active
adversary and despite the use of passwords drawn from a space so small
that an adversary might well enumerate, off line,
a user's password.
While several such password-based protocols have been suggested,
the underlying theory has been lagging, and
some of the protocols don't actually work.
This is an area strongly in need of foundations,
but definitions and theorems here can get overwhelmingly complex.
To help manage this complexity we begin by defining a model, one rich enough
to deal with password guessing, forward secrecy,
server compromise, and loss of session keys.
The one model can be used to
define various goals.
We take AKE (with implicit authentication---no one besides
your intended partner could possibly get the key, though he may or may
not actually get it) as the basic goal.
Then we prove that any secure
AKE protocol can be
embellished (in a simple and generic way)
to also provide for MA.
This approach turns out to be simpler than trying to
augment an MA protocol to also distribute a session key.
Next we prove correctness for the idea at the center
of the Encrypted Key-Exchange (EKE) protocol
of Bellovin and Merritt:
we prove (in an ideal-cipher model) that
the two-flow protocol at the core of EKE is
a secure AKE.
Combining with the result above we have a
simple 3-flow protocol for AKE,MA which is
proven secure against dictionary attack.
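The two-flow core described above can be sketched as follows. This is an illustration under toy parameters: the 64-bit prime, the generator, and the password-derived XOR pad standing in for the ideal cipher are all assumptions for the sketch; a real instantiation would use a full-size group and a proper cipher.

```python
import hashlib
import secrets

# Toy group parameters (far too small for real use; illustration only).
P = 0xFFFFFFFFFFFFFFC5  # 2^64 - 59, a small prime modulus
G = 5

def pw_mask(password: str, label: bytes, value: int) -> int:
    """Stand-in for the ideal cipher: XOR with a password-derived pad.
    XOR is an involution, so applying the mask twice decrypts."""
    pad = int.from_bytes(
        hashlib.sha256(password.encode() + label).digest()[:8], "big")
    return value ^ pad

def eke_two_flow(password: str):
    # Flow 1: A -> B : E_pw(g^x)
    x = secrets.randbelow(P - 2) + 1
    msg1 = pw_mask(password, b"A", pow(G, x, P))
    # Flow 2: B -> A : E_pw(g^y)
    y = secrets.randbelow(P - 2) + 1
    msg2 = pw_mask(password, b"B", pow(G, y, P))
    # Each side decrypts with the shared password and derives g^{xy}.
    key_a = pow(pw_mask(password, b"B", msg2), x, P)
    key_b = pow(pw_mask(password, b"A", msg1), y, P)
    return key_a, key_b

ka, kb = eke_two_flow("correct horse")
assert ka == kb  # both sides derive the same Diffie-Hellman key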
Concurrent Zero-Knowledge in Poly-logarithmic Rounds
A proof is concurrent zero-knowledge if it remains zero-knowledge when run in an asynchronous environment, such as
the Internet. It is known that zero-knowledge is not necessarily preserved in such an environment; Kilian, Petrank and Rackoff have
shown that any {\bf 4}-round zero-knowledge interactive proof (for a non-trivial language) is not concurrent zero-knowledge. On the
other hand, Richardson and Kilian have shown that there exists a concurrent zero-knowledge argument for all languages in NP, but it
requires a {\bf polynomial} number of rounds. In this paper, we present a concurrent zero-knowledge proof for all languages in NP
with a drastically improved complexity: our proof requires only a poly-logarithmic number of rounds.
Thus, we narrow the huge gap between the known upper and lower bounds on the number of rounds required for a zero-knowledge
proof that is robust for asynchronous composition.
Last updated: 2003-03-26
Chosen Message Attack Against Goldreich-Goldwasser-Halevi's Signature Scheme from Crypto'97
The Goldreich-Goldwasser-Halevi (GGH) signature scheme from Crypto '97, which is based on a well-known lattice problem, is cryptanalyzed. We mount a chosen message attack on the signature scheme, and show that the signature scheme is vulnerable to the attack. We collect lattice points that are linearly independent of each other, and construct a new basis that generates a sub-lattice of the original lattice. The sub-lattice is shown to be sufficient to generate a valid signature. Empirical results are presented to show the effectiveness of the attack. Finally, we show that the cube-like parameter used for private-key generation is harmful to the security of the scheme.
Tailored Key Encryption (TaKE) Tailoring a key for a given pair of plaintext/ciphertext
The prevailing cryptographies are attacked on the basis of
the fact that only a single element in the key space will match a
plausible plaintext with a given ciphertext. Any cryptography that
would violate this unique-key assumption, will achieve added security
through deniability (akin to a One Time Pad). Such a cryptography is
described here. It is achieved by breaking away from the prevailing notion
that the key is a binary string of a fixed length. The described key is
a random-size non-linear array: a graph constructed from vertices and
edges. The binary naming of the vertices and edges, and the
configuration, are all part of the key. Such keys can take on most of
the necessary complexity, which allows the algorithm itself to be
exceedingly simple (a la a Turing Machine).
The Security of Chaffing and Winnowing
This paper takes a closer look at Rivest's
chaffing-and-winnowing paradigm for data privacy. We begin with a
\textit{definition} which enables one to determine clearly whether a
given scheme qualifies as ``chaffing-and-winnowing.'' We then analyze
Rivest's schemes to see what quality of data privacy they provide. His
simplest scheme is easily proven secure but is inefficient. The security
of his more efficient scheme ---based on all-or-nothing transforms
(AONTs)--- is however more problematic. It can be attacked under
Rivest's definition of security of an AONT, and even under stronger
notions does not appear provable. We show however that by using OAEP
as the AONT one can prove security. We also present a different scheme,
still using AONTs, that is equally efficient and easily proven secure
even under the original weak notion of security of AONTs.
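Rivest's basic paradigm is easy to state in code: authentic packets carry valid MACs, chaff packets carry bogus ones, and the receiver "winnows" by MAC verification. A minimal sketch at bit granularity, with an assumed shared MAC key (the AONT-based variants analyzed above amortize this per-bit cost):

```python
import hashlib
import hmac
import os

KEY = b"shared-mac-key"  # known to sender and receiver only

def mac(serial: int, data: bytes) -> bytes:
    return hmac.new(KEY, serial.to_bytes(4, "big") + data,
                    hashlib.sha256).digest()

def chaff(message_bits):
    """For each bit, emit the real (serial, bit, MAC) packet plus a chaff
    packet carrying the complementary bit and a random (invalid) MAC."""
    packets = []
    for i, bit in enumerate(message_bits):
        packets.append((i, bit, mac(i, bytes([bit]))))
        packets.append((i, 1 - bit, os.urandom(32)))  # chaff
    return packets

def winnow(packets):
    """Receiver keeps only packets whose MAC verifies."""
    out = {}
    for serial, bit, tag in packets:
        if hmac.compare_digest(tag, mac(serial, bytes([bit]))):
            out[serial] = bit
    return [out[i] for i in sorted(out)]

assert winnow(chaff([1, 0, 0, 1])) == [1, 0, 0, 1]
```

An eavesdropper without the MAC key cannot tell real packets from chaff, which is the source of the privacy being analyzed.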
New Directions in Design of Resilient Boolean Functions
There has been a recent upsurge of research in the design of resilient
Boolean functions for use in stream cipher systems. The existing
research concentrates on maximum degree resilient functions and tries
to obtain as high nonlinearity as possible. In sharp contrast to this
approach we identify the class of functions with {\em provably best}
possible trade-off among the parameters: number of variables,
resiliency, nonlinearity and algebraic degree. We first prove a
sharper version of McEliece theorem for Reed-Muller codes as applied
to resilient functions, which also generalizes the well known
Xiao-Massey characterization. As a consequence a nontrivial upper
bound on the nonlinearity of resilient functions is obtained. This
result coupled with Siegenthaler's inequality naturally leads to
the notion of provably best resilient functions. We further show that
such best functions can be constructed by the Maiorana-McFarland
like technique. In cases where this method fails, we provide new ideas
to construct best functions. We also briefly discuss efficient
implementation of these functions in hardware.
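The parameters traded off here are all computable from the Walsh spectrum of a function: nonlinearity is determined by the maximal absolute Walsh coefficient, and m-resiliency by the vanishing of all coefficients on inputs of Hamming weight at most m. A brute-force sketch for small n (exponential time, for experimentation only):

```python
from itertools import product

def walsh_spectrum(f, n):
    """Walsh transform of an n-variable Boolean function f: {0,1}^n -> {0,1}."""
    spec = []
    for w in product((0, 1), repeat=n):
        s = 0
        for x in product((0, 1), repeat=n):
            dot = sum(wi & xi for wi, xi in zip(w, x)) & 1
            s += (-1) ** (f(x) ^ dot)
        spec.append((w, s))
    return spec

def nonlinearity(f, n):
    # nl(f) = 2^{n-1} - max_w |W_f(w)| / 2
    return 2 ** (n - 1) - max(abs(s) for _, s in walsh_spectrum(f, n)) // 2

def resiliency_order(f, n):
    """Largest m such that W_f(w) = 0 for every w of weight <= m."""
    spec = dict(walsh_spectrum(f, n))
    m = -1
    while m + 1 <= n and all(s == 0 for w, s in spec.items()
                             if sum(w) <= m + 1):
        m += 1
    return m

# Example: f(x1,x2,x3) = x1*x2 XOR x3 has nonlinearity 2, resiliency order 0.
f = lambda x: (x[0] & x[1]) ^ x[2]
```

Siegenthaler's inequality (degree at most n - m - 1) and the nonlinearity bound proved in the paper are exactly constraints on this spectrum.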
Efficient Protocols based on Probabilistic Encryption using Composite Degree Residue Classes
We study various applications and variants of Paillier's probabilistic
encryption scheme. First, we propose a threshold variant of the scheme,
and also zero-knowledge protocols for proving that a given ciphertext
encodes a given plaintext, and for verifying multiplication of encrypted values.
We then show how these building blocks can be used for applying the
scheme to efficient electronic voting. This dramatically reduces the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the
basic scheme for a yes/no vote can be easily adapted to casting a
vote for up to $t$ out of $L$ candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of $L$ elections can be optimised such that for a certain range of parameter values, a ballot has size only $O(\log L)$ bits.
Finally, we propose a variant of the encryption scheme, that allows
reducing the expansion factor of Paillier's scheme from 2 to almost 1.
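For reference, the basic Paillier scheme these protocols build on can be sketched in a few lines. Votes encrypted this way can be tallied by multiplying ciphertexts, which is the additive homomorphism the voting application exploits. Toy primes only; real keys use large random primes.

```python
import math
import secrets

def paillier_keygen():
    p, q = 10007, 10037        # toy primes; illustration only
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                  # standard simple choice of generator
    # mu = L(g^lam mod n^2)^{-1} mod n, where L(u) = (u - 1) // n
    u = pow(g, lam, n * n)
    mu = pow((u - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n

pk, sk = paillier_keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
# Additive homomorphism: the product of ciphertexts encrypts the sum.
assert decrypt(pk, sk, c1 * c2 % (pk[0] ** 2)) == 42
```

The ciphertext lives modulo n^2 while the plaintext lives modulo n, which is the expansion factor of 2 that the paper's final variant reduces to almost 1.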
Public Electronic Contract Protocol
The notion of a Public Electronic Contract (PEC) protocol is presented in this paper. In this idea, the PEC is published on a public directory (of certain groups), letting all members review the true (raw) transaction information. The collection of PEC information reflects more reliable facts about market trends than data provided by certain agencies for estimation. The goal is to eliminate the opportunities for certain agencies to manipulate the data and persuade investors to make inappropriate decisions on purchases or investments. A perfect open market with open facts should be established in the future. The PEC also has the property of public witnesses, so that transactions will be more secure. In order to keep the protocol simple, its implementation is mainly based on the RSA public key scheme.
An Encryption Algorithm and Key-stream Generator for Chinese Text Messages by Character Internal Code Structure
This paper proposes an algorithm to encipher Chinese plaintext messages written in Big-5/GB Chinese character internal codes. Unlike ordinary ciphers, the crypto-key of the proposed algorithm consists of a pair of formulas and a set of parameter values. Senders can formulate and compose their own unique formulas and parameters for encryption. According to the character internal code structure, we apply the formulas in a key-stream generator to encipher the Chinese plaintext message. Since the proposed stream generator does not contain permanent encryption and decryption operations, opponents are unable to predict the form of its output (ciphertext). Experimental results show that the proposed algorithm achieves data secrecy.
On Resilient Boolean Functions with Maximal Possible Nonlinearity
It is proved that the maximal possible nonlinearity of an $n$-variable
$m$-resilient Boolean function is $2^{n-1} - 2^{m+1}$ for
$\frac{2n-7}{3} \le m \le n-2$. This value can be achieved only for
optimized functions (i.~e. functions with an algebraic degree $n-m-1$).
For this range of parameters, a method is suggested
to construct an $n$-variable $m$-resilient function with maximal possible
nonlinearity such that each variable appears in the ANF of this
function in some term of maximal possible length $n-m-1$.
For suitable parameters,
a scheme of hardware implementation for such a function is given that
demands only a modest number of EXOR and AND gates.
Combinatorial Properties of Frameproof and Traceability Codes
In order to protect copyrighted material, codes may be
embedded in the content or codes may be associated with the
keys used to recover the content. Codes can offer protection
by providing some form of traceability for pirated
data. Several researchers have studied different notions of
traceability and related concepts in recent years. "Strong"
versions of traceability allow at least one member of a
coalition that constructs a "pirate decoder" to be
traced. Weaker versions of this concept ensure that no
coalition can "frame" a disjoint user or group of users. All
these concepts can be formulated as codes having certain
combinatorial properties.
In this paper, we study the relationships between the various
notions, and we discuss equivalent formulations using
structures such as perfect hash families. We use methods from
combinatorics and coding theory to provide bounds (necessary
conditions) and constructions (sufficient conditions) for the
objects of interest.
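The weaker "frameproof" notion can be checked directly on small codes: a coalition's descendant set contains every word it can forge position-by-position from its members' codewords, and a code is c-frameproof if no coalition of size at most c can produce the codeword of an outside user. A brute-force sketch (exponential in the parameters; for experimentation with the combinatorial definitions only):

```python
from itertools import combinations, product

def descendants(coalition):
    """All words a coalition can forge: in each position it may place any
    symbol that at least one coalition member holds there."""
    choices = [set(c[i] for c in coalition)
               for i in range(len(coalition[0]))]
    return {tuple(w) for w in product(*choices)}

def is_frameproof(code, c):
    """c-frameproof: no coalition of <= c users can produce the codeword
    of a user outside the coalition."""
    for size in range(1, c + 1):
        for coalition in combinations(code, size):
            framed = descendants(coalition) & (set(code) - set(coalition))
            if framed:
                return False
    return True

# The "identity" code below is 2-frameproof; the full 2-bit code is not,
# since the coalition {01, 10} can forge both 00 and 11.
assert is_frameproof([(1, 0, 0), (0, 1, 0), (0, 0, 1)], 2)
assert not is_frameproof([(0, 0), (0, 1), (1, 0), (1, 1)], 2)
```

The bounds and constructions in the paper replace this exhaustive check with structures such as perfect hash families.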
Last updated: 2000-03-10
Implications of the Nontriviality of Entropy Approximation
The paper was withdrawn because it contained a fatal flaw.
A New Forward-Secure Digital Signature Scheme
We improve the Bellare-Miner (Crypto '99) construction of signature
schemes with forward security in the random oracle model. Our scheme
has significantly shorter keys and is, therefore, more practical. By
using a direct proof technique not used for forward-secure schemes
before, we are able to provide better security bounds for the original
construction as well as for our scheme.
Bellare and Miner also presented a method for constructing such schemes
without the use of the random oracle. We conclude by proposing an
improvement to their method and an additional, new method for accomplishing
this.
On Security Preserving Reductions -- Revised Terminology
Many of the results in Modern Cryptography are actually
transformations of a basic computational phenomenon (i.e., a
basic primitive, tool or assumption) to a more complex
phenomenon (i.e., a higher level primitive or
application). The transformation is explicit and is always
accompanied by an explicit reduction of the violation of the
security of the former phenomenon to the violation of the
latter. A key aspect is the efficiency of the reduction. We
discuss and slightly modify the hierarchy of reductions
originally suggested by Levin.
A tool for obtaining tighter security analyses of pseudorandom function based constructions, with applications to PRP to PRF conversion
We present a general probabilistic lemma that can be applied
to upper bound the advantage of an adversary in distinguishing between two
families of functions. Our lemma reduces the task of upper bounding the
advantage to that of upper bounding the ratio of two probabilities associated
to the adversary, when this ratio is viewed as a random variable. It
enables us to obtain significantly tighter analyses than more conventional
methods.
In this paper we apply the technique to the problem of PRP to PRF conversion.
We present a simple, new construction of a PRF from a PRP that makes only
two invocations of the PRP and has insecurity linear in the number of
queries made by the adversary. We also improve the analysis of the
truncation construction.
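The truncation construction mentioned last simply discards output bits of the PRP. A sketch with a toy Feistel network standing in for the PRP (the network, key schedule, and sizes here are assumptions for illustration; the paper's contribution is the tighter analysis, not the construction itself):

```python
import hashlib

def toy_prp(key: bytes, x: int, n_bits: int = 32, rounds: int = 4) -> int:
    """A small Feistel network over n_bits (even): each round is invertible,
    so the whole map is a permutation. A stand-in for a real block cipher."""
    half = n_bits // 2
    mask = (1 << half) - 1
    l, r = x >> half, x & mask
    for i in range(rounds):
        h = hashlib.sha256(key + bytes([i]) + r.to_bytes(4, "big")).digest()
        l, r = r, l ^ (int.from_bytes(h[:4], "big") & mask)
    return (l << half) | r

def truncated_prf(key: bytes, x: int, out_bits: int = 16) -> int:
    """The truncation construction: evaluate the PRP, keep out_bits bits."""
    return toy_prp(key, x) >> (32 - out_bits)
```

Truncation hides the permutation structure (distinct inputs may now collide on the visible bits), which is why the resulting object can be analyzed as a PRF.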
Concurrent Zero-Knowledge
One of the toughest challenges in designing
cryptographic protocols is to design them so that they will remain
secure even when composed. For example, concurrent executions of a
zero-knowledge protocol by a single prover (with one or more
verifiers) may leak information and may not be zero-knowledge in
toto. In this work we:
(1) Suggest time as a mechanism to design concurrent cryptographic
protocols and in particular maintaining zero-knowledge under
concurrent execution.
(2) Introduce the notion of Deniable Authentication
and connect it to the problem of concurrent zero-knowledge.
We do not assume global synchronization; however, we assume an
(alpha,beta) timing constraint: for any two processors
P_1 and P_2, if P_1 measures alpha elapsed time on its local
clock and P_2 measures beta elapsed time on its local clock, and
P_2 starts after P_1 does, then P_2 will finish after P_1
does. We show that for an adversary controlling all the
processors' clocks (as well as their communication channels) but
which is constrained by an (alpha,beta) constraint,
there exist four-round almost concurrent zero-knowledge interactive proofs
and perfect concurrent zero-knowledge arguments for every language in NP.
We also address the more specific problem of Deniable Authentication,
for which we propose several particularly efficient solutions.
Deniable Authentication is of independent interest, even in the
sequential case; our concurrent solutions yield sequential
solutions, without recourse to timing, i.e., in the standard model.
Resettable Zero-Knowledge
We introduce the notion of Resettable Zero-Knowledge
(rZK), a new security measure for cryptographic protocols which
strengthens the classical notion of zero-knowledge. In essence, an
rZK protocol is one that remains zero knowledge even if an adversary
can interact with the prover many times, each time resetting the
prover to its initial state and forcing him to use the same random
tape.
Under general complexity assumptions, which hold for example if the
Discrete Logarithm Problem is hard, we construct (1) rZK proof-systems
for NP; (2) constant-round resettable witness-indistinguishable
proof-systems for NP; and (3) constant-round rZK arguments for NP in
the public key model where verifiers have fixed, public keys
associated with them.
In addition to shedding new light on what makes zero knowledge
possible (by constructing ZK protocols that use randomness in a
dramatically weaker way than before), rZK has great relevance to
applications. Firstly, we show that rZK protocols are closed under
parallel and concurrent execution and thus are guaranteed to be secure
when implemented in fully asynchronous networks, even if an adversary
schedules the arrival of every message sent. Secondly, rZK protocols
enlarge the range of physical ways in which provers of ZK protocols
can be securely implemented, including devices which cannot reliably
toss coins on line, nor keep state between invocations. (For
instance, because ordinary smart cards with secure hardware are
resettable, they could not be used to implement securely the provers
of classical ZK protocols, but can now be used to implement securely
the provers of rZK protocols.)
Public-Key Cryptography and Password Protocols: The Multi-User Case
The problem of password authentication over an insecure network
when the user holds only a human-memorizable password has
received much attention in the literature. The first rigorous
treatment was provided by Halevi and Krawczyk (ACM CCS, 1998),
who studied off-line password guessing attacks in the scenario in
which the authentication server possesses a pair of private and
public keys. HK's definition of security concentrates
on the single-user (and single-server) case.
In this work we:
(1) Show the inadequacy of both the Halevi-Krawczyk formalization
and protocol in the case where there is more than a single user:
using a simple and realistic attack, we prove failure of the HK
solution in the two-user case.
(2) Propose a new definition of security for the multi-user case,
expressed in terms of transcripts of the entire system, rather
than individual protocol executions.
(3) Suggest several ways of achieving this security against both
static and dynamic adversaries.
In a recent revision of their paper, Halevi and Krawczyk attempted
to handle the multi-user case. We expose a weakness in their approach.
Improving the Exact Security of Digital Signature Schemes
We provide two contributions to exact security analysis of
digital signatures:
We put forward a new method of constructing Fiat-Shamir-like
signature schemes that yields better "exact security" than the original
Fiat-Shamir method; and
we extend exact security analysis to "exact cost-security analysis" by
showing that digital signature schemes with "loose security" may be
preferable for reasonable measures of cost.
Security of all RSA and Discrete Log Bits
We study the security of individual bits in an RSA
encrypted message E_N(x). We show that given E_N(x), predicting any
single bit in x with only a non-negligible advantage over the trivial
guessing strategy, is (through a polynomial time reduction) as hard as
breaking RSA. Moreover, we prove that blocks of O(log log N) bits of x
are computationally indistinguishable from random bits. The results
carry over to the Rabin encryption scheme.
Considering the discrete exponentiation function, g^x modulo p, with
probability 1-o(1) over random choices of the prime p, the analog
results are demonstrated. Finally, we prove that the bits of ax+b
modulo p give hard core predicates for any one-way function f.
Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-Based Characterization
We prove the equivalence of two definitions of non-malleable
encryption appearing in the literature--- the original one of Dolev, Dwork
and Naor and the later one of Bellare, Desai, Pointcheval and Rogaway. The
equivalence relies on a new characterization of non-malleable encryption in
terms of the standard notion of indistinguishability of Goldwasser and
Micali. We show that non-malleability is equivalent to indistinguishability
under a ``parallel chosen ciphertext attack,'' this being a new kind of
chosen ciphertext attack we introduce, in which the adversary's decryption
queries are not allowed to depend on answers to previous queries, but must be
made all at once. This characterization simplifies both the notion of
non-malleable encryption and its usage, and enables one to see more easily
how it compares with other notions of encryption. The results here apply to
non-malleable encryption under any form of attack, whether chosen-plaintext,
chosen-ciphertext, or adaptive chosen-ciphertext.
A Composition Theorem for Universal One-Way Hash Functions
In this paper we present a new scheme for constructing
universal one-way hash functions that hash arbitrarily long messages
out of universal one-way hash functions that hash fixed-length messages.
The new construction is extremely simple and is also very efficient,
yielding shorter keys than previously proposed composition
constructions.
A forward-secure digital signature scheme
We describe a digital signature scheme in which the
public key is fixed but the secret signing key is updated at regular
intervals so as to provide a <i>forward security</i> property:
compromise of the current secret key does not enable an adversary to
forge signatures pertaining to the past. This can be useful to
mitigate the damage caused by key exposure without requiring
distribution of keys. Our construction uses ideas from the
Fiat-Shamir and Ong-Schnorr identification and
signature schemes, and is proven to be forward secure based
on the hardness of factoring, in the random oracle model. The
construction is also quite efficient.
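The paper's construction is factoring-based, but the key-update idea it implements can be illustrated generically: evolve the secret through a one-way function at each interval and erase the old value, so exposure of the current key reveals nothing about past keys. A toy MAC-based sketch (this is a generic stand-in, not the paper's scheme, which is a true public-key signature):

```python
import hashlib
import hmac

def evolve(secret: bytes) -> bytes:
    """One key update: derive the next period's key; the old one is erased."""
    return hashlib.sha256(b"update" + secret).digest()

def sign(secret: bytes, period: int, msg: bytes) -> bytes:
    """Toy MAC-based 'signature' for period t (illustration only)."""
    return hmac.new(secret, period.to_bytes(4, "big") + msg,
                    hashlib.sha256).digest()

sk = b"initial secret"
tags = []
for t in range(3):
    tags.append(sign(sk, t, b"hello"))
    sk = evolve(sk)  # after this, the period-t key is gone

# An attacker who steals sk now cannot recompute any earlier period's key,
# since evolve() is one-way.
```

The public key staying fixed while the secret evolves is exactly what lets verifiers check old signatures after a compromise.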
Interleaved Zero-Knowledge in the Public-Key Model
We introduce the notion of Interleaved Zero-Knowledge (iZK), a new
security measure for cryptographic protocols which strengthens
the classical notion of zero-knowledge, in a way suitable for
multiple concurrent executions in an asynchronous environment
like the internet. We prove that iZK protocols are robust:
they are ``parallelizable'', and preserve security when run
concurrently in a fully asynchronous network. Furthermore,
this holds even if the prover's random-pads in all these
concurrent invocations are identical. Thus, iZK protocols are
ideal for smart-cards and other devices which cannot reliably
toss coins on line, nor keep state between invocations.
Concurrent Zero-Knowledge is Easy in Practice
We show that if any one-way function exists, then 3-round concurrent
zero-knowledge arguments for all NP problems can be built in a model
where a short auxiliary string with a prescribed distribution is
available to the players. We also show that all known efficient
identification schemes using specialized assumptions can be modified
to work in this model with no essential loss of efficiency. We argue
that the assumptions of the model will be satisfied in most practical
scenarios where public key cryptography is used, in particular our
construction works given any secure public key
infrastructure. Finally, we point out that in a model with
preprocessing (and no auxiliary string) proposed earlier, concurrent
zero-knowledge for NP can be based on any one-way function.
Secure Hash-and-Sign Signatures without the Random Oracle
We present a new signature scheme which is existentially unforgeable
under chosen message attacks, assuming some variant of the RSA conjecture.
This scheme is not based on "signature trees", and instead it uses
the so called "hash-and-sign" paradigm. It is unique in that the
assumptions made on the cryptographic hash function in use are well
defined and reasonable (although non-standard). In particular, we
do not model this function as a random oracle.
We construct our proof of security in steps. First we describe and
prove a construction which operates in the random oracle model. Then
we show that the random oracle in this construction can be replaced
by a hash function which satisfies some strong (but well defined!)
computational assumptions. Finally, we demonstrate that these assumptions
are reasonable, by proving that a function satisfying them exists under
standard intractability assumptions.
On Formal Models for Secure Key Exchange
A new formal security model for session key exchange protocols is
proposed, and several efficient protocols are analyzed in this model.
Our new model is in the style of multi-party simulatability: it
specifies the service and security guarantees that a key exchange
protocol should provide to higher-level protocols as a simple,
natural, and intuitive interface to which a high-level protocol
designer can program. The relationship between this new model and
previously proposed models is explored, and in particular, several
flaws and shortcomings in previously proposed models are discussed.
The model also deals with anonymous users---that is, users who do not
have public keys, but perhaps have passwords that can be used to
authenticate themselves within a secure session.
Practical Threshold Signatures
We present an RSA threshold signature scheme. The scheme enjoys the following
properties:
it is unforgeable and robust in the random oracle model, assuming the RSA problem is hard;
signature share generation and verification is completely non-interactive;
the size of an individual signature share is bounded by a constant
times the size of the RSA modulus.
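Underlying such schemes is polynomial secret sharing of the signing key among the participants. A simplified sketch of k-out-of-n Shamir sharing over a prime field (the field, threshold, and secret below are illustrative; Shoup's scheme actually shares the RSA exponent over a different modulus and adds verifiability of shares):

```python
import secrets

PRIME = 2 ** 61 - 1  # a Mersenne prime; toy field for the sketch

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the degree-(k-1) polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

In the threshold signature setting, each party applies its share in the exponent and the Lagrange recombination happens on the signature shares, which is what makes share generation and verification non-interactive.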
A Relationship between One-Wayness and Correlation Intractability
The notion of correlation intractability was introduced
in an attempt to capture the ``unpredictability" property
of random oracles: It is assumed that if $h$ is a random oracle
then it is infeasible to find an input $x$
such that the input-output pair $(x, h(x))$ has some desired property.
It is desirable that a plausible construction
of correlation intractable function ensembles will be provided since
the unpredictability property is often useful to design many cryptographic
applications in the random oracle model.
However, no plausibility result has been proposed.
In this paper, we show that proving the implication,
``if uniform one-way functions exist then uniform correlation intractable
function ensembles exist",
is as hard as proving a claim regarding the triviality
of 3-round auxiliary-input zero-knowledge Arthur-Merlin proofs
without making any assumptions.
We believe that it is unlikely that one can prove it unconditionally.
Therefore, we conclude that it will be difficult to construct
uniform correlation intractable function ensembles
based solely on uniform one-way functions.
On the Existence of 3-Round Zero-Knowledge Protocols
In this paper, we construct a 3-round zero-knowledge protocol for
any NP language. Our protocol achieves weaker notions of zero-knowledge
than black-box simulation zero-knowledge.
Therefore, our result does not contradict the triviality result of
Goldreich and Krawczyk, which shows that 3-round black-box simulation
zero-knowledge exists only for BPP languages.
Our main contribution is to provide a non-black-box simulation technique.
Whether there exists such a simulation technique was a major open
problem in the theory of zero-knowledge.
Our simulation technique is based on a non-standard computational assumption
related to the Diffie-Hellman problem, which was originally proposed by
Damgard.
Verifiable Encryption and Applications to Group Signatures and Signature Sharing
We generalize and improve the security and efficiency of the
verifiable encryption scheme of Asokan et al., such that it can rely
on more general assumptions, and can be proven secure without
assuming random oracles. We show a new application of verifiable
encryption to group signatures with separability; these schemes do
not need special-purpose keys but can work with a wide range of
signature, identification, and encryption schemes already in use.
Finally, we extend our basic primitive to verifiable threshold and
group encryption. By encrypting digital signatures this way, one
gets new solutions to the verifiable signature sharing problem.
DHAES: An Encryption Scheme Based on the Diffie-Hellman Problem
This paper describes a Diffie-Hellman based encryption scheme, DHAES. The
scheme is as efficient as ElGamal encryption, but has
stronger security properties. Furthermore, these security properties are proven
to hold under appropriate assumptions on the underlying primitive.
We show that DHAES has not only the ``basic'' property of secure encryption
(namely privacy under a chosen-plaintext attack) but also achieves privacy
under both non-adaptive and adaptive chosen-ciphertext attacks. (And hence
it also achieves non-malleability.)
DHAES is built in a generic way from lower-level primitives: a symmetric
encryption scheme, a message authentication code, group operations in an
arbitrary group, and a cryptographic hash function. In particular, the
underlying group may be an elliptic-curve group or the multiplicative
group of integers modulo a prime number.
The proofs of security are based on appropriate assumptions about the
hardness of the Diffie-Hellman problem and the assumption that the
underlying symmetric primitives are secure. The assumptions are
all standard in the sense that no random oracles are involved.
We suggest that DHAES provides an attractive starting point for developing
public-key encryption standards based on the Diffie-Hellman assumption.
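As a rough illustration, the generic composition described above can be sketched as follows. The toy group, the SHA-256-based key derivation, and the XOR-stream "cipher" here are stand-ins of my own choosing, not the concrete primitives the DHAES specification fixes:

```python
import hashlib, hmac, secrets

# Sketch of the DHAES-style generic composition (toy stand-in primitives):
# the sender picks an ephemeral DH value g^u, hashes (g^u, g^(uv)) into a
# symmetric key and a MAC key, then sends (g^u, ciphertext, tag).

p, g, q = 2039, 4, 1019              # toy group: g generates the order-q subgroup mod p

def kdf(ephem_pub, shared):
    # derive (symmetric key, MAC key) from the ephemeral value and DH secret
    d = hashlib.sha256(f"{ephem_pub}|{shared}".encode()).digest()
    return d[:16], d[16:]

def xor_stream(key, data):           # stand-in symmetric cipher (illustrative only)
    pad = hashlib.sha256(key).digest()
    return bytes(b ^ pad[i % 32] for i, b in enumerate(data))

def encrypt(pub_v, msg):
    u = secrets.randbelow(q - 1) + 1
    ephem = pow(g, u, p)             # g^u
    ek, mk = kdf(ephem, pow(pub_v, u, p))
    ct = xor_stream(ek, msg)
    tag = hmac.new(mk, ct, hashlib.sha256).digest()
    return ephem, ct, tag

def decrypt(v, ephem, ct, tag):
    ek, mk = kdf(ephem, pow(ephem, v, p))   # recipient recomputes g^(uv)
    if not hmac.compare_digest(tag, hmac.new(mk, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")
    return xor_stream(ek, ct)

v = 123                              # recipient's secret key
pub = pow(g, v, p)
e, c, t = encrypt(pub, b"attack at dawn")
assert decrypt(v, e, c, t) == b"attack at dawn"
```

The MAC check before decryption is what models the chosen-ciphertext resistance the abstract claims for the real scheme.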
Last updated: 1999-04-19
Fast Proof of Plaintext-Knowledge and Deniable Authentication Based on Chinese Remainder Theorem
We propose a fast and communication-efficient proof of
plaintext-knowledge (PPTK) protocol based on the Chinese Remainder
theorem. With a PPTK the receiver of a ciphertext verifies that the
sender knows the corresponding cleartext in such a way that a
dishonest sender or an eavesdropper does not learn anything about
the plaintext except with sub-polynomial probability. We turn any
semantically secure public key cryptosystem into an efficient
(interactive) one which is immune against adaptive chosen ciphertext
attacks by adding the PPTK protocol. Using our PPTK protocol we also
derive an efficient protocol for deniable authentication.
Lattice Based Cryptography: A Global Improvement
We describe a general technique to simplify as well as to improve
several lattice based cryptographic protocols.
The technique is rather straightforward and is easily applied to
the protocols, and gives both a simpler analysis and better
performance than the original protocols. The improvement is global:
the modified protocols are simpler, faster, require less storage,
use less bandwidth and need fewer random bits than the originals.
Moreover, the improvement is achieved without any loss in security:
we formally prove that the modified protocols are at least as secure
as the original ones. In fact, the modified protocols might even
be more secure as the adversary gets less information. We exemplify
our technique on the Goldreich-Goldwasser zero-knowledge proof systems
for lattice problems and the GGH public key cryptosystem.
Public-key cryptography and password protocols
We study protocols for strong authentication and key exchange in asymmetric
scenarios where the authentication server possesses a pair of private and
public keys while the client has only a weak human-memorizable password
as its authentication key. We present and analyze several simple password
protocols in this scenario, and show that the security of these protocols
can be formally proven based on standard cryptographic assumptions.
Remarkably, our analysis shows optimal resistance to off-line password
guessing attacks under the choice of suitable public key encryption
functions. In addition to user authentication, we enhance our protocols
to provide two-way authentication, authenticated key exchange, defense
against server's compromise, and user anonymity. We complement these
results with a proof that public key techniques are unavoidable for
password protocols that resist off-line guessing attacks.
As a further contribution, we introduce the notion of public passwords
that enables the use of the above protocols in situations where the
client's machine does not have the means to validate the server's
public key. Public passwords serve as "hand-held certificates" that
the user can carry without the need for special computing devices.
An error in the mixed adversary protocol by Fitzi, Hirt and Maurer
We point out an error in the multiparty computation protocol for mixed
adversaries and zero error from the Crypto 98 paper by Fitzi, Hirt and
Maurer. We show that the protocol only works under a stronger
requirement on the adversary than the one claimed. Hence the bound on
the adversary's corruption capability given there is not tight.
Subsequent work has shown, however, a new bound which is indeed tight.
Chinese Remaindering with Errors
The Chinese Remainder Theorem states that a positive integer m is
uniquely specified by its remainders modulo k relatively prime integers
p_1,...,p_k, provided m < \prod_{i=1}^k p_i. Thus the residues of m
modulo relatively prime integers p_1 < p_2 < ... < p_n form a
redundant representation of m if m <= \prod_{i=1}^k p_i and k <
n. This suggests a number-theoretic construction of an
``error-correcting code'' that has been implicitly considered often in
the past. In this paper we provide a new algorithmic tool to go with
this error-correcting code: namely, a polynomial-time algorithm for
error-correction. Specifically, given n residues r_1,...,r_n and an
agreement parameter t, we find a list of all integers m <
\prod_{i=1}^k p_i such that (m mod p_i) = r_i for at least t values of
i in {1,...,n}, provided t = Omega(sqrt{kn (log p_n)/(log p_1)}). We
also give a simpler algorithm to decode from a smaller number of
errors, i.e., when t > n - (n-k)(log p_1)/(log p_1 + \log p_n). In
such a case there is a unique integer which has such agreement with
the sequence of residues.
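As a minimal illustration of the redundant residue representation (not the paper's list-decoding algorithm), the following sketch encodes an integer as its residues modulo pairwise-coprime moduli and recovers it from any k of them via the Chinese Remainder Theorem; the moduli are toy values of my own choosing:

```python
# m < p_1 * ... * p_k is determined by any k residues, so the n residues
# form a redundant "codeword" that tolerates erasures.

def crt(residues, moduli):
    """Reconstruct the unique x < prod(moduli) with x = r_i (mod p_i)."""
    x, M = 0, 1
    for r, p in zip(residues, moduli):
        # Garner-style incremental CRT: adjust x so it also matches r mod p
        x += M * ((r - x) * pow(M, -1, p) % p)
        M *= p
    return x % M

moduli = [101, 103, 107, 109, 113]   # n = 5 pairwise-coprime moduli
k = 3                                # m < 101*103*107, so any 3 residues determine m
m = 1_000_000
codeword = [m % p for p in moduli]   # redundant residue representation of m

assert crt(codeword[:k], moduli[:k]) == m    # first 3 residues suffice
assert crt(codeword[2:], moduli[2:]) == m    # so do the last 3
```

Correcting residues that are wrong rather than missing is the harder problem the paper's polynomial-time algorithm solves.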
One consequence of our result is a strengthening of the
relationship between the average-case complexity of computing the
permanent and its worst-case complexity. Specifically we show that if
a polynomial time algorithm is able to guess the permanent of a random
n x n matrix on 2n-bit integers modulo a random n-bit prime with
inverse polynomial success rate, then #P=BPP. Previous results of
this nature typically worked over a fixed prime modulus or assumed very
small (though non-negligible) error probability (as opposed to small
but non-negligible success probability).
Signature Schemes Based on the Strong RSA Assumption
We describe and analyze a new digital signature scheme.
The new scheme is quite efficient, does not require the signer
to maintain any state, and can be proven secure against adaptive
chosen message attack under a reasonable intractability assumption,
the so-called Strong RSA Assumption.
Moreover, a hash function can be incorporated into the scheme
in such a way that it is also secure in the random oracle model
under the standard RSA Assumption.
Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK
We consider the following (promise) problem, denoted ED (for Entropy
Difference): The input is a pair of circuits, and YES instances
(resp., NO instances) are such pairs in which the first (resp.,
second) circuit generates a distribution with noticeably higher
entropy.
On the one hand, we show that any language having an (honest-verifier)
statistical zero-knowledge proof is Karp-reducible to ED. On the other
hand, we present a public-coin (honest-verifier) statistical
zero-knowledge proof for ED. Thus, we obtain an alternative proof of
Okamoto's result by which HVSZK (i.e., Honest-Verifier Statistical
Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler
than the original one. The above also yields a trivial proof that HVSZK
is closed under complementation (since ED easily reduces to its
complement). Among the new results obtained is an equivalence of a weak
notion of statistical zero-knowledge to the standard one.
Secure Distributed Storage and Retrieval
In his well-known Information Dispersal Algorithm paper, Rabin showed
a way to distribute information in n pieces among n servers in such a
way that recovery of the information is possible in the presence of up
to t inactive servers. An enhanced mechanism to enable construction
in the presence of malicious faults, which can intentionally modify
their pieces of the information, was later presented by Krawczyk.
Yet, these methods assume that the malicious faults occur only at
reconstruction time.
In this paper we address the more general problem of secure storage
and retrieval of information (SSRI), and guarantee that also the
process of storing the information is correct even when some of the
servers fail. Our protocols achieve this while maintaining the
(asymptotical) space optimality of the above methods.
We also consider SSRI with the added requirement of confidentiality,
by which no party except for the rightful owner of the information is
able to learn anything about it. This is achieved through novel
applications of cryptographic techniques, such as the distributed
generation of receipts, distributed key management via threshold
cryptography, and ``blinding.''
An interesting byproduct of our scheme is the construction of a secret
sharing scheme with shorter shares size in the amortized sense. An
immediate practical application of our work is a system for the secure
deposit of sensitive data. We also extend SSRI to a ``proactive''
setting, where an adversary may corrupt all the servers during the
lifetime of the system, but only a fraction during any given time
interval.
The Disparity between Work and Entropy in Cryptology
A brief theory of work is developed. In it, the work-factor and
guesswork of a random variable are linked to intuitive notions of time
complexity in a brute-force attack. Bounds are given for a specific
work-factor called the minimum majority. Tight bounds are given for
the guesswork in terms of variation distance. Differences between
work-factor, guesswork and the entropy of a random variable are
pointed out, calling into question a common misconception about
entropy indicating work.
Security amplification by composition: The case of doubly-iterated, ideal ciphers
We investigate, in the Shannon model, the security of constructions
corresponding to double and (two-key) triple DES. That is, we
consider F_{k1}(F_{k2}(.)) and F_{k1}(F_{k2}^{-1}(F_{k1}(.))) with
the component functions being ideal ciphers. This models the
resistance of these constructions to ``generic'' attacks like
meet-in-the-middle attacks.
We obtain the first proof that composition actually
increases the security in some meaningful sense. We compute a bound
on the probability of breaking the double cipher as a function of
the number of computations of the base cipher made, and the number
of examples of the composed cipher seen, and show that the success
probability is the square of that for a single key cipher. The
same bound holds for the two-key triple cipher. The first bound
is tight and shows that meet-in-the-middle is the best possible
generic attack against the double cipher.
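For intuition about the generic attack the bound addresses, here is a toy meet-in-the-middle key recovery against a double cipher. The 8-bit "cipher" F is an illustrative stand-in of my own choosing, not an ideal cipher:

```python
# Meet-in-the-middle against F_{k1}(F_{k2}(.)): tabulate F_k(pt) for all
# keys k, then match against F_k'^{-1}(ct), recovering candidate key pairs
# with roughly 2 * 2^KEYBITS work instead of 2^(2*KEYBITS).

KEYBITS = 8                           # toy key space (illustrative only)

def F(k, x):                          # stand-in toy "cipher" on bytes
    return (x + 37 * k + 11) % 256

def F_inv(k, y):
    return (y - 37 * k - 11) % 256

def mitm(pt, ct):
    # forward table: middle value -> inner key k2
    forward = {F(k2, pt): k2 for k2 in range(2 ** KEYBITS)}
    for k1 in range(2 ** KEYBITS):
        mid = F_inv(k1, ct)           # peel off the outer encryption
        if mid in forward:
            yield forward[mid], k1    # candidate (k2, k1) pair

k2, k1 = 99, 200
pt = 5
ct = F(k1, F(k2, pt))                 # double encryption of pt
assert (k2, k1) in set(mitm(pt, ct))  # true key pair is among the candidates
```

A second known plaintext/ciphertext pair would filter the surviving candidates; the paper's contribution is proving that, in the Shannon model, no generic attack does meaningfully better.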
Insecurity of Quantum Computations
It had been widely claimed that quantum mechanics can protect private
information during public decision making, as in the so-called
two-party secure computation. If this were the case, quantum
smart-cards could prevent fake teller machines from learning the PIN
(Personal Identification Number) from the customers' input. Although
such optimism has been challenged by the recent surprising discovery
of the insecurity of the so-called quantum bit commitment, the
security of quantum two-party computation itself remains unaddressed.
Here I answer this question directly by showing that all *one-sided*
two-party computations (which allow only one of the two parties to
learn the result) are necessarily insecure. As corollaries to my
results, quantum one-way oblivious password identification and the
so-called quantum one-out-of-two oblivious transfer are impossible. I
also construct a class of functions that cannot be computed securely
in any *two-sided* two-party computation. Nevertheless, quantum
cryptography remains useful in key distribution and can still provide
partial security in ``quantum money'' proposed by Wiesner.
Relations among Notions of Security for Public-Key Encryption Schemes
We compare the relative strengths of popular notions of security for
public key encryption schemes. We consider the goals of
indistinguishability and non-malleability, each under chosen plaintext
attack and two kinds of chosen ciphertext attack. For each of the
resulting pairs of definitions we prove either an implication (every
scheme meeting one notion must meet the other) or a separation (there
is a scheme meeting one notion but not the other, assuming the first
notion can be met at all). We similarly treat plaintext awareness, a
notion of security in the random oracle model. An additional
contribution of this paper is a new definition of non-malleability
which we believe is simpler than the previous one.
Almost All Discrete Log Bits Are Simultaneously Secure
Let G be a finite cyclic group with generator \alpha and with an
encoding so that multiplication is computable in polynomial time. We
study the security of bits of the discrete log x when given
exp_\alpha(x), assuming that the exponentiation function
exp_\alpha(x) = \alpha^x is one-way. We reduce the
general problem to the case that G has odd order q. If G has odd order
q, the security of the least-significant bits of x and of the most
significant bits of the rational number x/q \in [0,1) follows from the
work of Peralta [P85] and Long and Wigderson [LW88]. We generalize
these bits and study the security of consecutive *shift bits*
lsb(2^{-i} x mod q) for i=k+1,...,k+j. When we restrict
exp_\alpha to arguments x such that some sequence of j
consecutive shift bits of x is constant (i.e., not depending on x), we
call it a 2^{-j}-*fraction* of exp_\alpha.
For groups of odd order q we show that every two
2^{-j}-fractions of exp_\alpha are equally one-way
by a polynomial time transformation: either they are all one-way or
none of them is. Our *key theorem* shows that arbitrary j
consecutive shift bits of x are simultaneously secure when given
exp_\alpha(x) iff the 2^{-j}-fractions of
exp_\alpha are one-way. In particular this applies to the j
least-significant bits of x and to the j most-significant bits of x/q
\in [0,1). For one-way exp_\alpha the individual bits of x
are secure when given exp_\alpha(x) by the method of Hastad and
Naslund [HN98]. For groups of even order 2^s q we show that
the j least-significant bits of \lfloor x/2^s \rfloor, as
well as the j most-significant bits of x/q \in [0,1), are
simultaneously secure iff the 2^{-j}-fractions of
exp_{\alpha'} are one-way for \alpha' := \alpha^{2^s}.
We use and extend the models of generic algorithms of Nechaev (1994)
and Shoup (1997). We determine the generic complexity of inverting
fractions of exp_\alpha for the case that \alpha has prime
order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q
consecutive shift bits of random x are, for constant \varepsilon > 0,
simultaneously secure against generic attacks. Every generic algorithm
using t generic steps (group operations) for distinguishing bit
strings of j consecutive shift bits of x from random bit strings has
advantage at most O((\lg q) j \sqrt{t} (2^j/q)^{1/4}).
Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
The heart of the task of building public key cryptosystems is viewed
as that of ``making trapdoors;'' in fact, public key cryptosystems and
trapdoor functions are often discussed as synonymous. How accurate is
this view? In this paper we endeavor to get a better understanding of
the nature of ``trapdoorness'' and its relation to public key
cryptosystems, by broadening the scope of the investigation: we look
at general trapdoor functions; that is, functions that are not
necessarily injective (i.e., one-to-one). Our first result is somewhat
surprising: we show that non-injective trapdoor functions (with
super-polynomial pre-image size) can be constructed {from} any one-way
function (and hence it is unlikely that they suffice for public key
encryption). On the other hand, we show that trapdoor functions with
polynomial pre-image size are sufficient for public key encryption.
Together, these two results indicate that the pre-image size is a
fundamental parameter of trapdoor functions. We then turn our
attention to the converse, asking what kinds of trapdoor functions can
be constructed from public key cryptosystems. We take a first step by
showing that in the random-oracle model one can construct injective
trapdoor functions from any public key cryptosystem.
Security and Composition of Multi-party Cryptographic Protocols
We present general definitions of security for multiparty cryptographic
protocols and show that, using these definitions, security is preserved
under a natural composition method.
The definitions follow the general paradigm of known definitions;
yet some substantial modifications and simplifications are introduced. In
particular, `black-box simulation' is no longer required. The composition
method is essentially the natural `subroutine substitution' method suggested
by Micali and Rogaway.
We first present the general definitional approach. Next we consider several
settings for multiparty protocols. These include the cases of non-adaptive
and adaptive adversaries, as well as the information-theoretic and the
computational models.
Making An Empty Promise With A Quantum Computer (Or, A Brief Review on the Impossibility of Quantum Bit Commitment)
Alice has made a decision in her mind.
While she does not want to reveal it to
Bob at this moment, she would like to convince Bob that she is committed to
this particular decision and that she cannot change it at a later time. Is
there a way for Alice to get Bob's trust? Until recently, researchers had
believed that the above task can be performed with the help of quantum
mechanics; the security of the quantum scheme rests on the uncertainty
principle. Nevertheless, such optimism was recently shattered by Mayers and by
us, who found that Alice can always change her mind if she has a quantum
computer. Here, we survey this dramatic development and its implications for
the security of other quantum cryptographic schemes.
Last updated: 1999-01-01
Quantum Computers Render Quantum Key Distribution Unconditionally Secure Over Arbitrarily Long Distances
Quantum cryptography has long been claimed to be useful for
achieving many tasks that are impossible from the perspective of
conventional cryptography. Arguably, the most important problem
in quantum cryptography has been a rigorous proof of the security of
quantum key distribution, the most well-known application.
This notoriously hard problem has eluded researchers for years and has
become even more important after the recent surprising demonstration
of the insecurity of many other quantum cryptographic schemes including
quantum bit commitment. Here, we solve this long standing problem by
showing that, given quantum computers, quantum key distribution over an
arbitrarily long distance of a realistic noisy channel can be made
unconditionally secure. The novel technique we use is reduction from a
quantum scheme to a classical scheme. The security in realistic noisy
environments is then proven by using the recent theory of fault-tolerant
quantum computation.
More on Proofs of Knowledge
The notion of proofs of knowledge is central to cryptographic
protocols, and many definitions for it have been proposed. In this work
we explore a different facet of this notion, not addressed by prior
definitions. Specifically, prior definitions concentrate on capturing
the properties of the verifier, and do not pay much attention to the
properties of the prover.
Our new definition is strictly stronger than previous ones, and captures
new and desirable properties. In particular, it guarantees prover
feasibility, that is, it guarantees that the time spent by the prover
in a proof of knowledge is comparable to the time it spends in an "extraction"
of this knowledge. Our definition also enables one to consider meaningfully
the case of a single, specific prover.
Randomness versus Fault-Tolerance
We investigate the relations between two major requirements of multiparty
protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}.
Fault-tolerance is measured in terms of the maximum number of colluding faulty
parties, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness
is measured in terms of the total number of random bits needed by the parties
in order to execute the protocol.
Previously, the upper bound on the amount of randomness required by general
constructions for securely computing any non-trivial function f was polynomial
both in n, the total number of parties, and the circuit-size C(f). This was
the state of knowledge even for the special case t=1 (i.e., when there is at
most one faulty party). In this paper, we show that for any linear-size
circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n)
randomness is sufficient. More generally, we show that for any function f
with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n)
randomness in order to withstand any coalition of size at most t.
Furthermore, in our protocol only t+1 parties flip coins and the rest of
the parties are deterministic. Our results generalize to the case of adaptive
adversaries as well.
A Random Server Model for Private Information Retrieval (or How to Achieve Information Theoretic PIR Avoiding Data Replication)
Private information retrieval (PIR) schemes enable users to obtain
information from databases while keeping their queries secret from the
database managers. We propose a new model for PIR, utilizing
auxiliary random servers to provide privacy services for database
access. In this model, prior to any on-line communication where users
request queries, the database engages in an initial preprocessing
setup stage with the random servers. Using this model we achieve the
first PIR information theoretic solution in which the database does
not need to give away its data to be replicated, and with minimal
on-line computation cost for the database. This solves privacy and
efficiency problems inherent to all previous solutions.
In particular, all previous information theoretic PIR schemes required
multiple replications of the database into separate entities which are
not allowed to communicate with each other; and in all previous
schemes (including ones which do not achieve information theoretic
security), the amount of computation performed by the database on-line
for every query is at least linear in the size of the database.
In contrast, in our solutions the database does not give away its
contents to any other entity; and after the initial setup stage, which
costs at most O(n log n) in computation, the database needs to
perform only O(1) amount of computation to answer questions of users
on-line. All the extra on-line computation is done by the auxiliary
random servers.
Maintaining Authenticated Communication in the Presence of Break-ins
We study the problem of maintaining authenticated communication over untrusted
communication channels, in a scenario where the communicating parties may be
occasionally and repeatedly broken into for transient periods of time. Once
a party is broken into, its cryptographic keys are exposed and perhaps
modified. Yet, we want parties whose security is thus compromised to regain
their ability to communicate in an authenticated way aided by other parties.
In this work we present a mathematical model for this highly adversarial
setting, exhibiting salient properties and parameters, and then describe
a practically-appealing protocol for the task of maintaining authenticated
communication in this model.
A key element in our solution is devising {\em proactive distributed signature
(PDS) schemes} in our model. Although PDS schemes are known in the literature,
they are all designed for a model where authenticated communication and
broadcast primitives are available. We therefore show how these schemes can be
modified to work in our model, where no such primitives are available a-priori.
In the process of devising the above schemes, we also present a new definition
of PDS schemes (and of distributed signature schemes in general). This
definition may be of independent interest.
The Random Oracle Methodology, Revisited
We take a critical look at the relationship between the security of
cryptographic schemes in the Random Oracle Model, and the security of the
schemes that result from implementing the random oracle by so called
"cryptographic hash functions".
The main result of this paper is a negative one: There exist signature and
encryption schemes that are secure in the Random Oracle Model, but for which
any implementation of the random oracle results in insecure schemes.
In the process of devising the above schemes, we consider possible definitions
for the notion of a "good implementation" of a random oracle, pointing out
limitations and challenges.
Chameleon Hashing and Signatures
We introduce CHAMELEON SIGNATURES, which provide an undeniable
commitment of the signer to the contents of the signed document (as regular
digital signatures do) but, at the same time, do not allow the recipient
of the signature to disclose the contents of the signed information to any
third party without the signer's consent. These signatures are closely
related to Chaum's "undeniable signatures", but chameleon signatures allow
for simpler and more efficient realizations than the latter.
In particular, they are essentially non-interactive and do not involve the
design and complexity of zero-knowledge proofs on which traditional undeniable
signatures are based. Instead, chameleon signatures are generated
under the standard method of hash-then-sign. Yet, the hash functions
which are used are CHAMELEON HASH FUNCTIONS. These hash functions are
characterized by the non-standard property of being collision-resistant
for the signer but collision tractable for the recipient.
We present simple and efficient constructions of chameleon hashing and
chameleon signatures. The former can be constructed based on standard
cryptographic assumptions (such as the hardness of factoring or discrete
logarithms) and have efficient realizations based on these assumptions.
For the signature part we can use any digital signature (such as RSA or DSS)
and prove the unforgeability property of the resultant chameleon signatures
solely based on the unforgeability of the underlying digital signature
in use.
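A discrete-log-based chameleon hash of the flavor described above can be sketched as follows; the parameters are toy values of my own choosing, and the paper's exact construction may differ in details:

```python
# Chameleon hash sketch: CH(m, r) = g^m * h^r mod p with h = g^x.
# Without the trapdoor x, finding collisions is as hard as computing
# discrete logs; with x, the holder can map any message onto any digest.

p, q, g = 2039, 1019, 4      # toy parameters: g generates the order-q subgroup mod p
x = 777                      # trapdoor (the recipient's secret key)
h = pow(g, x, p)             # public hash key

def ch(m, r):
    return pow(g, m % q, p) * pow(h, r % q, p) % p

m, r = 42, 1000
digest = ch(m, r)

# Trapdoor collision: for any other message m', solve
#   m' + x*r' = m + x*r (mod q)   for r'.
m2 = 313
r2 = (r + (m - m2) * pow(x, -1, q)) % q
assert ch(m2, r2) == digest  # same digest, different message
```

The signer, who lacks x, cannot find such collisions, which is exactly the asymmetry ("collision-resistant for the signer but collision tractable for the recipient") the abstract describes.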
A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols
We present a general framework for constructing and analyzing authentication
protocols in realistic models of communication networks. This framework
provides a sound formalization for the authentication problem and suggests
simple and attractive design principles for general authentication and key
exchange protocols. The key element in our approach is a modular treatment of
the authentication problem in cryptographic protocols; this applies to the
definition of security, to the design of the protocols, and to their analysis.
In particular, following this modular approach, we show how to systematically
transform solutions that work in a model of idealized authenticated
communications into solutions that are secure in the realistic setting of
communication channels controlled by an active adversary.
Using these principles we construct and prove the security of simple and
practical authentication and key-exchange protocols. In particular, we provide
a security analysis of some well-known key exchange protocols (e.g.
authenticated Diffie-Hellman key exchange), and of some of the techniques
underlying the design of several authentication protocols that are currently
being deployed on a large scale for the Internet Protocol and other
applications.
An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products
We present efficient zero-knowledge proof systems for quasi-safe
prime products and other related languages. Quasi-safe primes are a
relaxation of safe primes, a class of prime numbers useful in many
cryptographic applications.
Our proof systems achieve higher security and better
efficiency than all previously known ones.
In particular, all our proof systems are perfect or statistical
zero-knowledge, meaning that even a computationally
unbounded adversary cannot extract any information
from the proofs.
Moreover, our proof systems are extremely efficient because they do
not use general reductions to NP-complete problems, can be easily
parallelized preserving zero-knowledge, and are non-interactive for
computationally unbounded provers. The prover can also be efficiently
implemented given some trapdoor information and using very little
interaction.
We demonstrate the applicability of quasi-safe primes
by showing how they can be effectively used in the
context of RSA based undeniable signatures to enforce
the use of ``good'' public keys, i.e.,
keys such that if a signer can convince a recipient
of the validity of a signature, then he won't be able to
subsequently deny the same signature in case of a dispute.
Fast Batch Verification for Modular Exponentiation and Digital Signatures
Many tasks in cryptography (e.g., digital signature
verification) call for verification of a basic operation like modular
exponentiation in some group: given (g,x,y) check that g<sup>x</sup>=y.
This is typically done by re-computing g<sup>x</sup> and checking we get y.
We would like to do it differently, and faster.
The approach we use is batching. Focusing first on the basic modular
exponentiation operation, we provide some probabilistic batch verifiers, or
tests, that verify a sequence of modular exponentiations significantly faster
than the naive re-computation method. This yields speedups for several
verification tasks that involve modular exponentiations.
Focusing specifically on digital signatures, we then suggest a weaker notion of
(batch) verification which we call ``screening.'' It seems useful for many
usages of signatures, and has the advantage that it can be done very fast;
in particular, we show how to screen a sequence of RSA signatures at the
cost of one RSA verification plus hashing.
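The batching idea can be made concrete with a hedged sketch (not the paper's exact test) of a small-exponents batch verifier for claims g^x_i = y_i in a prime-order subgroup; all names and parameters below are illustrative:

```python
import random

def batch_verify_exp(g, claims, p, q, sec_bits=64, rng=random):
    """Probabilistically check that g**x == y (mod p) for every (x, y) in
    claims, where g generates a subgroup of prime order q of Z_p*.
    Random coefficients s_i collapse all the checks into one:
        g**(sum s_i * x_i)  ==  prod y_i**s_i   (mod p).
    An invalid claim survives with probability about max(2**-sec_bits, 1/q)."""
    s = [rng.randrange(1, 1 << sec_bits) for _ in claims]
    exponent = sum(si * x for si, (x, _) in zip(s, claims)) % q
    lhs = pow(g, exponent, p)
    rhs = 1
    for si, (_, y) in zip(s, claims):
        rhs = rhs * pow(y, si, p) % p
    return lhs == rhs
```

The left side costs one full-size exponentiation plus modular arithmetic, and the right side only short-exponent work, versus one full exponentiation per claim with naive re-computation.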
A Practical Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack
Uncategorized
A new public key cryptosystem is presented that is provably secure
against adaptive chosen ciphertext attack.
The scheme is quite practical, and the proof of security relies
only on standard intractability assumptions.
On the possibility of basing Cryptography on the assumption that P ≠ NP
Uncategorized
Recent works by Ajtai and by Ajtai and Dwork
bring to light the old (general) question of whether it is
at all possible to base the
security of cryptosystems on the assumption that P ≠ NP.
We discuss this question and in particular review and extend
a two-decade old result of Brassard regarding this question.
Our conclusion is that the question remains open.
Universal Service Providers for Database Private Information Retrieval
Uncategorized
We consider the question of private information retrieval in the
so-called ``commodity-based'' model. This model was recently proposed
by Beaver for practically-oriented service-provider internet
applications. In this paper, we show the following, somewhat
surprising, results regarding this model for the problem of
private-information retrieval: (1) the service-provider model makes it
possible to dramatically reduce the overall communication involving the user,
using off-line pre-processing messages from ``service-providers'' to
databases, where the service-providers do not need to know the
database contents, nor the future user's requests; (2) our
service-provider solutions are resilient against more than a majority
(in fact, all-but-one) coalitions of service-providers; and (3) these
results hold for {\em both} the computational and the
information-theoretic setting.
More specifically, we exhibit a service-provider algorithm which can
``sell'' (i.e., generate and send) ``commodities'' to users and
databases, that subsequently allow users to retrieve database contents
in a way which hides from the database which particular item the user
retrieves. The service-providers need not know anything about the
contents of the databases nor the nature of the user's requests in
order to generate commodities. Our commodity-based solution
significantly improves communication complexity of the users (i.e.,
counting both the size of commodities bought by the user from the
service-providers and the subsequent communication with the databases)
compared to all previously known on-line private information retrieval
protocols (i.e., without the help of the service-providers).
Moreover, we show how commodities from different service-providers can
be {\em combined} in such a way that even if ``all-but-one'' of the
service-providers collude with the database, the user's privacy
remains intact. Finally, we show how to re-use commodities in case of
multiple requests (i.e., in the amortized sense), how to ``check''
commodity-correctness, and how some of the solutions can be extended
to the related problem of {\em Private Information Storage}.
Private Information Retrieval by Keywords
Uncategorized
Private information retrieval (PIR) schemes enable a user to
access one or more servers that hold copies of
a database and {\em privately} retrieve parts of the
bits of data stored in the database. This means that the queries
give each individual
database no partial information (in the information theoretic or computational
sense) on the identity of the item retrieved by the user.
All known PIR schemes assume that the user knows the {\em physical address}
of the sought item. This is usually not the case when accessing a public
database that is not managed by the user. Such databases are typically
presented with keywords, which are then internally translated (at the
database end) to physical addresses, using an appropriate search
structure (for example, a hash table or a binary tree). In this note we
describe a simple, modular way to privately access data by keywords.
It combines {\em any} conventional search structure with {\em any}
underlying PIR scheme (including single server schemes). The transformation
requires no modification in the way that the search structure is maintained.
Therefore the same database will support both private and regular
(non-private) searches.
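The modular combination can be sketched as follows; `pir_read` is a placeholder standing in for any underlying PIR scheme (single- or multi-server), and the record layout is illustrative:

```python
def private_keyword_lookup(keyword, n, pir_read):
    """Binary search over a sorted table of n (key, value) records held by
    the database.  Every probe goes through pir_read(i), assumed to privately
    fetch the record at physical address i, so the database sees only PIR
    queries and learns nothing about the keyword.  To hide even the number
    of probes, a real deployment would pad to exactly ceil(log2(n)) PIR
    reads; that padding is omitted here for brevity."""
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        key, value = pir_read(mid)
        if key == keyword:
            return value
        elif key < keyword:
            lo = mid + 1
        else:
            hi = mid - 1
    return None
```

Note that the database maintains its sorted table exactly as it would for ordinary searches, matching the note's claim that no modification to the search structure is required.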
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
Uncategorized
The input to the Graph Clustering Problem
consists of a sequence of integers
and a sequence of graphs.
The question is whether the equivalence classes,
under the graph isomorphism relation,
of the input graphs have sizes which match the input sequence of integers.
In this note we show that this problem has a (perfect) zero-knowledge
interactive proof system.
This result improves over <a href="http:../1996/96-14.html">record 96-14</a>,
where a parametrized (by the sequence of integers)
version of the problem was studied.
On Protocol Divertibility
Uncategorized
In this paper, we establish the notion of divertibility as a
protocol property
as opposed to the existing notion as a language property (see Okamoto,
Ohta). We give a definition of protocol divertibility that applies to
arbitrary 2-party protocols and is compatible with Okamoto and Ohta's
definition
in the case of interactive zero-knowledge proofs. Other important examples
falling under the new definition are blind signature protocols. A sufficient
criterion for divertibility is presented and found to be satisfied by many
examples of protocols in the literature. The generality of the definition is
further demonstrated by examples from protocol classes that have not been
considered for divertibility before. We show diverted El-Gamal encryption and
diverted Diffie-Hellman key exchange.
Optimistic Fair Exchange of Digital Signatures
Uncategorized
We present a new protocol that allows two players to exchange digital
signatures (including RSA and DSS) over the Internet in a fair way, so
that either each player gets the other's signature, or neither player
does. One obvious application is where the signatures represent items
of value, for example, an electronic check or airline ticket; the
protocol can also be adapted to exchange encrypted data. The protocol
relies on a trusted third party, but is "optimistic," in that the
third party is only needed in cases where one player attempts to cheat
or simply crashes. This is an important property, as it greatly
reduces the load on the third party, which in particular facilitates
a more robust and secure implementation of the third party.
Generalized Diffie-Hellman Modulo a Composite is not Weaker than Factoring
Uncategorized
The Diffie-Hellman key-exchange protocol may naturally be
extended to k>2 parties. This gives rise to the generalized
Diffie-Hellman assumption (GDH-Assumption).
Naor and Reingold have recently shown an efficient construction
of pseudo-random functions and reduced the security of their
construction to the GDH-Assumption.
In this note, we prove that breaking this assumption modulo a composite
would imply an efficient algorithm for factorization.
Therefore, the security of both the key-exchange protocol and
the pseudo-random functions can be reduced to factoring.
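The k-party extension can be made concrete with a toy sketch (the parameters below are illustrative and nowhere near secure sizes):

```python
def apply_secret(value, x, p):
    """One party raises the current public value to its secret exponent."""
    return pow(value, x, p)

def gdh_key(g, secrets, p):
    """The k-party generalized Diffie-Hellman key g**(x_1*...*x_k) mod p,
    obtained by letting the parties apply their exponents in any order."""
    value = g
    for x in secrets:
        value = apply_secret(value, x, p)
    return value
```

The GDH-Assumption states, roughly, that this final value is hard to compute given only the values g raised to the products of proper subsets of the secret exponents.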
Visual Authentication and Identification
Uncategorized
The problems of authentication and identification have
received wide interest in cryptographic research.
However, there has been no satisfactory solution for the problem of
authentication by a human recipient
who does not use any trusted computational device.
The problem of authentication arises for example in
the context of smartcard--human interaction, in particular in
the context of electronic wallets. The problem of identification is ubiquitous
in communication over insecure networks.
This paper introduces visual authentication and visual
identification methods, which are
authentication and identification methods for human users
based on visual cryptography. These methods are
very natural and easy to use, and can be implemented using very
common ``low tech'' technology. The methods we suggest are efficient
in the sense that a single transparency can be used
for several authentications or for several identifications. The visual
authentication methods we suggest are not limited to authenticating
textual messages, and can be used to authenticate any image.
An important contribution of this paper is the introduction of a
framework for proving the security of protocols in which humans take an
active part. We rigorously prove the security of our schemes using this
framework.
Self-Delegation with Controlled Propagation - or - What If You Lose Your Laptop.
Uncategorized
We introduce delegation schemes wherein a user may delegate rights to
himself, i.e., to other public keys he owns, but may
not safely delegate those rights to others, i.e., to their
public keys. In our motivating application, a user
has a primary (long-term) key that receives rights, such as access
privileges, that may not be delegated to others, yet the user may
reasonably wish to delegate these rights to new
secondary (short-term) keys he creates to use on his laptop when
traveling, to avoid having to store his primary secret key on the
vulnerable laptop.
We propose several cryptographic schemes, both generic and practical,
that allow such self-delegation while providing strong motivation for
the user not to delegate rights that he only obtained for personal use
to other parties.
Identity Escrow
Uncategorized
We introduce the notion of escrowed identity, an application of
key-escrow ideas to the problem of identification. In escrowed
identity, one party, A, does not give his identity to another
party B, but rather gives him information that would allow an
authorized third party, E, to determine A's identity. However, B
receives a guarantee that E can indeed determine A's identity. We
give protocols for escrowed identity based on the El-Gamal (signature
and encryption) schemes and on the RSA function. A useful feature
of our protocol is that after setting up A to use the system, E is
only involved when it is actually needed to determine A's identity.
CBC MAC for Real-Time Data Sources
Uncategorized
The Cipher Block Chaining (CBC) Message Authentication Code (MAC)
is an authentication method which is widely used in practice.
It is well known that the naive use of CBC MAC for variable
length messages is not secure, and a few rules of thumb for the
correct use of CBC MAC are known by folklore. The first rigorous
proof of the security of CBC MAC, when used on fixed length messages,
was given only recently by Bellare, Kilian and Rogaway. They also
suggested variants of CBC MAC that handle variable length messages
but in these variants the length of the message has to be known
in advance (i.e., before the message is processed).
We study CBC authentication of real time applications in which the
length of the message is not known until the message ends, and
furthermore, since the application is real-time, it is not possible
to start processing the authentication only after the message ends.
We first present a variant of CBC MAC, called {\em double MAC} (DMAC)
which handles messages of variable unknown lengths. Computing DMAC on
a message is virtually as simple and as efficient as computing the
standard CBC MAC on the message. We provide a rigorous proof that its
security is implied by the security of the underlying block cipher.
Next, we argue that the basic CBC MAC is secure when applied to prefix
free message space. A message space can be made prefix free by
authenticating also the (usually hidden) last character which marks
the end of the message.
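A minimal sketch of the double-MAC idea as described above: compute a CBC MAC under one key while the message streams in, then re-encrypt the final value under an independent second key. The `toy_prf` stands in for a real block cipher and is illustrative only:

```python
import hashlib

BLOCK = 16

def toy_prf(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher E_k; fine for a sketch, not for real use.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def _pad(msg: bytes) -> bytes:
    # 10* padding: append 0x80, then zeros up to the block boundary.
    return msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % BLOCK)

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    state = bytes(BLOCK)
    data = _pad(msg)
    for i in range(0, len(data), BLOCK):
        block = bytes(a ^ b for a, b in zip(state, data[i:i + BLOCK]))
        state = toy_prf(key, block)
    return state

def dmac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # The length of msg need not be known in advance: the CBC chain is
    # processed online, and the second key is applied once at the end.
    return toy_prf(k2, cbc_mac(k1, msg))
```

Because the chaining is processed block by block as data arrives, this fits the real-time setting in which the message length is unknown until the message ends.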
Collision-Resistant Hashing: Towards Making UOWHFs Practical
Uncategorized
Recent attacks on the cryptographic hash functions MD4 and MD5
make it clear that (strong) collision-resistance is a hard-to-achieve goal. We
look towards a weaker notion, the <i>universal one-way hash
functions</i> (UOWHFs) of Naor and Yung, and investigate their practical
potential. The goal is to build UOWHFs not based on number theoretic
assumptions, but from the primitives underlying current cryptographic hash
functions like MD5 and SHA. Pursuing this goal leads us to new questions. The
main one is how to extend a compression function to a full-fledged hash
function in this new setting. We show that the classic Merkle-Damgard
method used in the standard setting fails for these weaker kinds of hash
functions, and we present some new methods that work. Our main construction is
the "XOR tree." We also consider the problem of input length-variability and
present a general solution.
Factoring via Strong Lattice Reduction Algorithms
Uncategorized
We address the problem of factoring a large composite number
by lattice reduction algorithms.
Schnorr has shown that under reasonable number-theoretic
assumptions this problem can
be reduced to a simultaneous diophantine
approximation problem. The latter in turn can be solved by finding
sufficiently many l_1-short vectors in a suitably defined lattice.
Using lattice basis reduction algorithms, Schnorr and Euchner applied
Schnorr's reduction technique to 40-bit long integers.
Their implementation needed several hours to compute a 5% fraction
of the solution, i.e., 6 out of the 125
congruences which are necessary to factorize the composite.
In this report we describe a more efficient implementation using
stronger lattice basis reduction techniques incorporating ideas
of Schnorr, Hoerner and Ritter.
For 60-bit long integers our algorithm yields a complete factorization
in less than 3 hours.
Towards realizing random oracles: Hash functions that hide all partial information
Uncategorized
The random oracle model is a very convenient setting for designing
cryptographic protocols. In this idealized model all parties have access
to a common, public random function, called a random oracle.
Protocols in this model are often very simple and efficient; also the
analysis is often clearer. However, we do not have a general mechanism for
transforming protocols that are secure in the random oracle model into
protocols that are secure in real life. In fact, we do not even know how
to meaningfully specify the properties required from such a mechanism.
Instead, it is a common practice to simply replace - often without
mathematical justification - the random oracle with a `cryptographic hash
function' (e.g., MD5 or SHA). Consequently, the resulting protocols have
no meaningful proofs of security.
Protecting Data Privacy in Private Information Retrieval Schemes
Uncategorized
Private Information Retrieval (PIR) schemes allow a user to retrieve the
i-th bit of a data string x, replicated in k>=2 databases, while keeping
the value of i private. The main cost measure for such a scheme is its
communication complexity.
We study PIR schemes where in addition to the user's privacy we require
data privacy. That is, in every invocation of the retrieval protocol the
user learns exactly a single physical bit of x and no other information.
Further, we require that even a dishonest user would not learn more than a
single physical data bit.
We present general transformations that allow translating PIR schemes
satisfying certain properties into PIR schemes that respect data privacy
as well, with a small penalty in the communication complexity. Using our
machinery we are able to translate currently known PIR solutions into
schemes satisfying the newly introduced, stronger privacy constraint. In
particular we get: a k-database scheme of complexity
O(log(n) n^{1/(2k-1)}) for every k>=2; an O(log(n))-database scheme of
poly-logarithmic complexity; a 2-database computational PIR of complexity
O(n^c), for every constant c>0. All these require only a single
round of interaction.
A Probabilistic Error-Correcting Scheme
Uncategorized
In the course of research in Computational Learning Theory,
we found ourselves in need of an error-correcting encoding scheme for which
few bits in the codeword yield no information about the plain message.
Being unaware of a previous solution,
we came up with the scheme presented here.
Since this scheme may be of interest to people working in Cryptography,
we thought it may be worthwhile to ``publish'' this part of our work
within the Cryptography community.
Clearly, a scheme as described above cannot be deterministic.
Thus, we introduce a probabilistic coding scheme which,
in addition to the standard coding theoretic requirements,
has the feature that any constant fraction
of the bits in the (randomized) codeword yields no information about
the message being encoded.
This coding scheme is also used to obtain efficient constructions for
the Wire-Tap Channel Problem.
A note on negligible functions
Uncategorized
In theoretical cryptography, one formalizes the notion of an
adversary's success probability being ``too small to matter'' by asking that it
be a negligible function of the security parameter. We argue that the issue
that really arises is what it might mean for a collection of functions
to be ``negligible.'' We consider (and define) two such notions, and prove them
equivalent. Roughly, this enables us to say that any cryptographic primitive
has a specific associated ``security level.'' In particular we say this
for any one-way function. We also reconcile different definitions of negligible
error arguments and computational proofs of knowledge that have appeared in the
literature. Although the motivation is cryptographic, the main result is
purely about negligible functions.
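For reference, the standard single-function notion being generalized is:

```latex
\nu \colon \mathbb{N} \to \mathbb{R}_{\ge 0}\ \text{is negligible}
\quad\Longleftrightarrow\quad
\forall c > 0\ \exists N\ \forall n > N:\ \nu(n) < n^{-c}.
```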
Efficient Cryptographic Protocols Based on Noisy Channels.
Uncategorized
The Wire-Tap Channel of Wyner shows that a Binary Symmetric Channel (BSC)
may be used as a basis for exchanging a secret key. Later, Crepeau and Kilian
showed how a BSC may be used to implement Oblivious Transfer. Unfortunately,
this result is rather impractical, as it requires a very large number of bits
to be sent through the BSC to accomplish a single OT. The current paper
provides efficient protocols to achieve Bit Commitment and Oblivious Transfer
based on the existence of a BSC. Our protocols use the BSC only a
comparatively small number of times. These results are based on a technique
known as Generalized Privacy Amplification.
Round-Optimal Zero-Knowledge Arguments Based on any One-Way Function
Uncategorized
We fill a gap in the theory of zero-knowledge protocols by presenting
NP-arguments that achieve negligible error probability and computational
zero-knowledge in four rounds of interaction, assuming only the existence of a
one-way function. This result is optimal in the sense that four rounds and a
one-way function are each individually necessary to achieve a negligible error
zero-knowledge argument for NP.
A New Paradigm for Collision-free Hashing: Incrementality at Reduced Cost
Uncategorized
We present a simple, new paradigm for the design of collision-free hash
functions. Any function emanating from this paradigm is <i>incremental.</i>
(This means that if a message x which I have previously hashed is modified to
x' then rather than having to re-compute the hash of x' from scratch, I
can quickly ``update'' the old hash value to the new one, in time proportional
to the amount of modification made in x to get x'.) Also any function
emanating from this paradigm is parallelizable, useful for hardware
implementation.
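One way the incrementality property can be realized, sketched here with a multiplicative combination modulo a prime (the parameters are illustrative and far too small for security, and this is not claimed to be the paper's exact construction): each block is randomized by a hash, the results are multiplied together, and an update simply swaps one factor.

```python
import hashlib

P = (1 << 127) - 1  # a Mersenne prime; illustrative, far too small for real use

def block_factor(i: int, block: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(b"%d|" % i + block).digest(), "big") % P
    return h or 1  # rule out a zero factor

def inc_hash(blocks):
    acc = 1
    for i, b in enumerate(blocks):
        acc = acc * block_factor(i, b) % P
    return acc

def update(h, i, old_block, new_block):
    """Replace block i without rehashing the whole message: divide out the
    old factor (P is prime, so invert via Fermat's little theorem) and
    multiply in the new one -- time proportional to the modification."""
    h = h * pow(block_factor(i, old_block), P - 2, P) % P
    return h * block_factor(i, new_block) % P
```

The combining step is also trivially parallelizable: the per-block factors can be computed independently and multiplied in any order.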
Public-Key Cryptosystems from Lattice Reduction Problems
Uncategorized
We present a new proposal for a trapdoor one-way function, from which
we derive public-key encryption and digital signatures.
The security of the new construction is based on the
conjectured computational difficulty of lattice-reduction problems,
providing a possible alternative to existing
public-key encryption algorithms
and digital signatures such as RSA and DSS.
Verifiable Partial Key Escrow
Uncategorized
One of the main objections to existing proposals for key escrow is that the
individual's privacy relies on too high a level of trust in the law enforcement
agencies. In particular, even if the government is trustworthy today, it may be
replaced by an untrustworthy government tomorrow, which could immediately and
suddenly recover the secret keys of all users.
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
Uncategorized
The Graph Clustering Problem is parameterized by a sequence
of positive integers.
The input is a sequence of graphs,
and the question is whether the equivalence classes
under the graph isomorphism relation have sizes which match
the sequence of parameters.
In this note
we show that this problem has a (perfect) zero-knowledge
interactive proof system.
On the Contrast in Visual Cryptography Schemes
Uncategorized
A visual cryptography scheme is a method to encode a secret image SI into
shadow images called shares such that certain qualified subsets of shares
enable the ``visual'' recovery of the secret image.
The ``visual'' recovery consists of xeroxing the shares onto transparencies,
and then stacking them. The shares of a qualified set will reveal the secret
image without any cryptographic computation.
In this paper we analyze the contrast of the reconstructed image
in k out of n visual cryptography schemes. (In such a scheme
any k shares will reveal the image, but no set of k-1 shares
gives any information about the image.)
In the case of 2 out of n threshold schemes we give a complete
characterization of schemes having optimal contrast and minimum
pixel expansion in terms of certain balanced incomplete block designs.
In the case of k out of n threshold schemes with k>2 we obtain
upper and lower bounds on the optimal contrast.
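The contrast loss under discussion is easy to see in the basic 2-out-of-2 pixel encoding, sketched below (the subpixel patterns are the simplest illustrative choice):

```python
import random

def share_pixel(secret_bit, rng=random):
    """Split one secret pixel (0 = white, 1 = black) into two shares of two
    subpixels each.  Each share alone is a uniformly random pattern, so it
    reveals nothing about the secret; stacking the transparencies computes
    a subpixel-wise OR."""
    a = rng.choice([(0, 1), (1, 0)])
    b = a if secret_bit == 0 else (1 - a[0], 1 - a[1])
    return a, b

def stack(a, b):
    return (a[0] | b[0], a[1] | b[1])
```

A white secret pixel reconstructs to one black subpixel (grey) while a black pixel reconstructs to two, i.e., contrast 1/2; the bounds in this paper quantify the analogous loss for general k out of n schemes.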
Proactive RSA
Uncategorized
We consider a "mobile adversary" which may corrupt all
participants throughout the lifetime of the system in a non-monotonic
fashion (i.e. recoveries are possible) but the adversary is unable to
corrupt too many participants during any short time period.
Schemes resilient to such an adversary are called proactive.
We present a proactive RSA system in which a threshold of servers
applies the RSA signature (or decryption) function in a distributed manner.
Employing new combinatorial and elementary number theoretic
techniques, our protocol enables the dynamic updating
of the servers (which hold the RSA key distributively);
it is secure even when a linear number of
the servers are corrupted during any time period;
it efficiently "self-maintains" the
security of the function and its
messages (ciphertexts or signatures); and it enables continuous
availability, namely, correct function application using the shared
key is possible at any time.
On the Construction of Pseudo-Random Permutations: Luby-Rackoff Revisited
Uncategorized
Luby and Rackoff showed a method for constructing a pseudo-random permutation
from a pseudo-random function. The method is based on composing four
(or three for weakened security) so called Feistel permutations
each of which requires the evaluation of a pseudo-random function.
We reduce somewhat the complexity of the construction
and simplify its proof of security by showing
that two Feistel permutations are sufficient together with initial
and final pair-wise independent permutations.
The revised construction and proof provide a framework in which
similar constructions may be brought up and their security can
be easily proved.
We demonstrate this by presenting some additional adjustments
of the construction that achieve the following:
1. Reduce the success probability of the adversary.
2. Provide a construction of pseudo-random permutations with large input
size using pseudo-random functions with small input size.
3. Provide a construction of a pseudo-random permutation
using a single pseudo-random function.
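The Feistel building block that the construction composes can be sketched as follows; the pseudo-random function is modeled with a hash for illustration, and the pairwise independent pre- and post-processing permutations of the revised construction are omitted:

```python
import hashlib

HALF = 8  # bytes per half-block

def prf(key: bytes, x: bytes) -> bytes:
    # Hash-based stand-in for the pseudo-random function; illustrative only.
    return hashlib.sha256(key + x).digest()[:HALF]

def feistel(keys, block):
    """Apply one Feistel permutation per key to a 16-byte block:
    (L, R) -> (R, L xor F(R))."""
    L, R = block[:HALF], block[HALF:]
    for k in keys:
        L, R = R, bytes(a ^ b for a, b in zip(L, prf(k, R)))
    return L + R

def feistel_inv(keys, block):
    """Invert feistel(); each round is undone in reverse order."""
    L, R = block[:HALF], block[HALF:]
    for k in reversed(keys):
        L, R = bytes(a ^ b for a, b in zip(R, prf(k, L))), L
    return L + R
```

Note that each round is a permutation regardless of the function F, which is what makes the composition invertible even though F itself need not be.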
Oblivious Transfers and Intersecting Codes
Uncategorized
Assume A owns t secret k-bit strings.
She is willing to disclose one of them to B, at his choosing,
provided he does not learn anything about the other strings.
Conversely, B does not want A to learn which secret he chose to learn.
A protocol for the above task is said to implement
One-out-of-t String Oblivious Transfer. An apparently simpler task
corresponds to the case k=1 and t=2 of two one-bit secrets:
this is known as One-out-of-two Bit OT.
We address the question of implementing the former assuming the
existence of the latter.
In particular, we prove that the general protocol can be implemented from
O(tk) calls to One-out-of-two Bit OT. This is
optimal up to a small multiplicative constant.
Our solution is based on the notion of self-intersecting codes.
Of independent interest, we give several efficient new constructions for
such codes.
Another contribution of this paper is a set
of information-theoretic definitions for correctness and
privacy of unconditionally-secure oblivious transfer.
Collision-Free Hashing from Lattice Problems
Uncategorized
Recently Ajtai described a construction of one-way functions whose
security is equivalent to the difficulty of some well known approximation
problems in lattices. We show that essentially the same
construction can also be used to obtain collision-free hashing.
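The construction in question is, in essence, a modular subset-sum: hash a binary vector x by multiplying it with a random matrix A over Z_q. The parameters below are toy-sized; real instances take m > n log q so that the map compresses.

```python
import random

def ajtai_hash(A, x, q):
    """h_A(x) = A x mod q for a binary vector x in {0,1}^m, with A an n-by-m
    matrix over Z_q.  A collision x != x' yields the nonzero vector x - x',
    with entries in {-1, 0, 1}, in the kernel of A mod q -- i.e., a short
    vector in the associated lattice, which is assumed hard to find."""
    return tuple(sum(row[j] * x[j] for j in range(len(x))) % q for row in A)
```

The map is linear over Z_q, which is also what makes it convenient to analyze: h(x) + h(y) = h(x + y) componentwise mod q.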
Access Control and Signatures via Quorum Secret Sharing
Uncategorized
We suggest a method of controlling the access to a secure
database via quorum systems. A quorum system is a collection of sets
(quorums) every two of which have a nonempty intersection.
Quorum systems have been used for a number of applications in the area of
distributed systems.
We propose a separation between access servers which are protected and
trustworthy, but may be outdated, and the data servers which may all
be compromised. The main paradigm is that only the servers in a
complete quorum can collectively grant (or revoke) access permission.
The method we suggest ensures that after authorization is revoked, a
cheating user Alice will not be able to access the data even if many
access servers still consider her authorized, and even if the complete
raw database is available to her. The method has a low overhead in
terms of communication and computation. It can also be converted into
a distributed system for issuing secure signatures.
Visual Cryptography II: Improving the Contrast Via the Cover Base
Uncategorized
In Eurocrypt'94 we proposed a new type of cryptographic scheme,
which can decode concealed images without any cryptographic computations,
by placing two transparencies on top of each other and using the decoder's
(human) visual system.
One of the drawbacks of that proposal was a loss in contrast: a black pixel
is translated in the reconstruction into a black region, but a white
pixel is translated into a grey region (half black and half white).
In this paper we propose an alternative reconstruction model with a
different set of operations (which we call the ``Cover'' semi-group).
In this model we are able to obtain a better contrast than
is possible in the previous one.
We prove tight bounds on the contrast
as a function of the complexity of the scheme. We also show
that for constructing k-out-of-n secret sharing schemes when
n and k are greater than 2 the new method is not applicable.
Upper bound on the communication complexity of private information retrieval
Uncategorized
Private information retrieval was introduced
by Chor, Goldreich, Kushilevitz and Sudan.
It is the problem of reading a bit from a database so
that it remains secret which bit we need.
If the database exists in several identical copies,
it is possible to ask queries so that each of the copies
alone does not get any information about the address
of the needed bit.
We construct a scheme for private information retrieval
with k databases and O(n<sup>1/(2k-1)</sup>) bits of communication.
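The privacy mechanism can be illustrated with the simplest 2-database scheme (communication here is linear in n; the construction above reduces it to O(n<sup>1/(2k-1)</sup>) by different, more involved techniques):

```python
import random

def pir_2server(x, i, rng=random):
    """Toy 2-database PIR for a bit list x.  Each database receives one
    uniformly random index set, so neither learns anything about i on its
    own; the two XORed answers differ in exactly the term x[i]."""
    n = len(x)
    S1 = {j for j in range(n) if rng.random() < 0.5}
    S2 = S1 ^ {i}                      # symmetric difference: flip index i
    a1 = sum(x[j] for j in S1) % 2     # answer of database 1
    a2 = sum(x[j] for j in S2) % 2     # answer of database 2
    return (a1 + a2) % 2               # reconstructs x[i]
```

Correctness is exact: the XOR of the two answers is the parity of the symmetric difference S1 Δ S2 = {i}, i.e., x[i] itself.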
Private Information Storage
Uncategorized
We consider the setting of hiding information through the use of
multiple databases that do not interact with one another. Previously,
in this setting, efficient solutions were given for the retrieval of
data, where a user achieves this by interacting with all
the databases. We consider the case of both writing and
reading. While the case of reading was well studied before, the case
of writing was previously completely open. In this paper, we show how
to implement both read and write operations. As in the previous
papers, we measure, as a function of k and n, the amount of
communication required between a user and all the databases for a
single read/write operation, and achieve efficient read/write schemes.
Moreover, we show a general reduction from a retrieval-only database
scheme to a reading and writing database scheme, with the following
guarantees: for any k, given a retrieval-only k-database scheme with
communication complexity R(k,n), we show a (k+1)-database reading and
writing scheme with total communication complexity
O(R(k,n) * (log n)^{O(1)}). It should be stressed that prior to the
current paper no non-trivial (i.e., sub-linear) bounds for private
information storage were known.
Linear Zero-Knowledge - A note on Efficient Zero-Knowledge Proofs and Arguments
Uncategorized
We present a zero-knowledge proof system for any NP language L, which
allows showing that x is in L with error probability 2<sup>-k</sup>,
using communication corresponding to O(|x|<sup>c</sup>) + k bit
commitments, where c is a constant depending only on L.
The proof can be based on any bit
commitment scheme with a particular set of properties. We suggest an
efficient implementation based on factoring. The protocol allows showing
that a Boolean formula of size n is satisfiable,
with error probability 2<sup>-n</sup>, using O(n) commitments.
This is the first protocol for SAT that is linear in this sense.<br>
[The rest of the abstract was truncated and appears below -- the library.]