Graph-Based Authentication of Digital Streams
We consider the authentication of digital streams over a lossy
network. The overall approach taken is graph-based, as this yields
simple methods for controlling overhead, delay, and the ability to
authenticate, while serving to unify many previously known hash- and
MAC-based techniques. The loss pattern of the network is defined
probabilistically, allowing both bursty and random packet loss to be
modeled. Our authentication schemes are customizable by the
sender of the stream; that is, within reasonable constraints on
the input parameters, we provide schemes that achieve the desired
authentication probability while meeting the input upper bound on the
overhead per packet. In addition, we demonstrate that some of the
shortcomings of previously known schemes correspond to easily
identifiable properties of a graph, and hence, may be more easily
avoided by taking a graph-based approach to designing authentication
schemes.
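A sketch of the general hash-graph paradigm as I read the abstract (an illustration of mine, not one of the paper's actual constructions): each packet carries the hashes of some other packets, one packet is signed, and a received packet is authenticated exactly when a hash path through received packets links it to the signed one, so the ability to authenticate degrades along graph connectivity under loss.

```python
# edges[i] = packets whose hashes packet i carries (i vouches for them once
# i itself is authenticated); "signed" is the one digitally signed packet.
def authenticable(edges, signed, received):
    """Return the subset of received packets that can be authenticated."""
    ok, frontier = set(), [signed] if signed in received else []
    while frontier:
        u = frontier.pop()
        if u in ok:
            continue
        ok.add(u)
        frontier += [v for v in edges.get(u, []) if v in received]
    return ok

# signed packet 0 vouches for 1 and 2; both 1 and 2 vouch for 3
edges = {0: [1, 2], 1: [3], 2: [3]}
print(authenticable(edges, 0, received={0, 2, 3}))   # {0, 2, 3}: 3 survives loss of 1
```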
Session-Key Generation using Human Passwords Only
We present session-key generation protocols in a model where the
legitimate parties share {\em only} a human-memorizable
password, and there is no additional setup assumption in the
network. Our protocol is proven secure under the assumption that
trapdoor permutations exist. The security guarantee holds with
respect to probabilistic polynomial-time adversaries that control
the communication channel (between the parties), and may omit,
insert and modify messages at their choice. Loosely speaking, the
effect of such an adversary that attacks an execution of our
protocol is comparable to an attack in which an adversary is only
allowed to make a constant number of queries of the form ``is $w$
the password of Party $A$''. We stress that the result holds also
in case the passwords are selected at random from a small
dictionary so that it is feasible (for the adversary) to scan the
entire directory. We note that prior to our result, it was not
known whether or not such protocols were attainable without the
use of random oracles or additional setup assumptions.
A Complete Problem for Statistical Zero Knowledge
We present the first complete problem for SZK, the class of
(promise) problems
possessing statistical zero-knowledge proofs (against an
honest verifier).
The problem, called STATISTICAL DIFFERENCE,
is to decide whether two efficiently samplable distributions
are statistically close or far apart. This
gives a new characterization of
SZK that makes no reference to interaction or zero knowledge.
We propose the use of complete problems to unify and
extend the study of statistical zero knowledge. To this end,
we examine several
consequences of our Completeness Theorem and its proof, such as:
(1) A way to make every (honest-verifier) statistical
zero-knowledge proof very communication efficient,
with the prover sending only one bit
to the verifier (to achieve soundness error 1/2).
(2) Simpler proofs of many of the previously known
results about statistical zero knowledge, such as
the Fortnow and Aiello--Håstad upper bounds on the complexity of SZK
and
Okamoto's result that SZK is closed under complement.
(3) Strong closure properties of SZK which amount to
constructing statistical zero-knowledge proofs for complex assertions
built out of simpler assertions already shown to be in SZK.
(4) New results about the various measures of "knowledge complexity,"
including a collapse in the hierarchy corresponding
to knowledge complexity in the "hint" sense.
(5) Algorithms for manipulating the statistical difference
between efficiently samplable distributions, including transformations
which "polarize" and "reverse" the statistical relationship
between a pair of distributions.
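A minimal illustration (mine, not the paper's) of the quantity at the heart of STATISTICAL DIFFERENCE: the statistical (total variation) distance between two distributions, with the standard promise thresholds of at least 2/3 ("far apart") versus at most 1/3 ("close").

```python
def statistical_difference(p, q):
    """Total variation distance between two finite distributions,
    given as dicts mapping outcome -> probability."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# In the promise problem these would be outputs of two samplable circuits.
p = {0: 0.9, 1: 0.1}
q = {0: 0.1, 1: 0.9}
sd = statistical_difference(p, q)    # 0.8
print(sd, "far apart" if sd >= 2/3 else
          ("close" if sd <= 1/3 else "outside the promise"))
```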
Multiparty Computation from Threshold Homomorphic Encryption
We introduce a new approach to multiparty computation (MPC)
basing it on homomorphic
threshold crypto-systems. We show that given keys for any
sufficiently efficient
system of this type, general MPC protocols for $n$ players can be
devised which are
secure against an active adversary that corrupts any minority of the
players.
The total number of bits sent is $O(nk|C|)$, where $k$ is the
security parameter and $|C|$ is
the size of a (Boolean) circuit computing the function to be
securely evaluated.
An earlier proposal by Franklin and Haber with the same complexity
was only secure
for passive adversaries, while all earlier protocols with active
security had complexity at
least quadratic in $n$. We give two examples of threshold
cryptosystems that can support our
construction and lead to the claimed complexities.
Correlation Immune Boolean Functions with Very High Nonlinearity
Here we provide a construction method for unbalanced, first order
correlation immune Boolean functions on an even number of variables
$n \geq 6$. These functions achieve the currently best known
nonlinearity $2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2} - 2}$ .
Then we provide a simple modification of these functions to get
unbalanced correlation immune Boolean functions on an even number of
variables $n$, with nonlinearity
$2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2} - 2} - 2$ and maximum
possible algebraic degree $n-1$. Moreover, we present a detailed
study on the Walsh spectra of these functions.
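For concreteness, a small sketch (not from the paper) of the spectral quantities these abstracts work with: the Walsh transform $W_f(a) = \sum_x (-1)^{f(x) \oplus a\cdot x}$ of an $n$-variable Boolean function, the nonlinearity $2^{n-1} - \max_a |W_f(a)|/2$ derived from it, and first order correlation immunity, which amounts to $W_f(a) = 0$ for every weight-one $a$.

```python
def walsh_spectrum(truth_table):
    n = len(truth_table).bit_length() - 1
    return [sum((-1) ** (truth_table[x] ^ bin(a & x).count("1") % 2)
                for x in range(1 << n))
            for a in range(1 << n)]

def nonlinearity(truth_table):
    n = len(truth_table).bit_length() - 1
    return (1 << (n - 1)) - max(abs(w) for w in walsh_spectrum(truth_table)) // 2

# Example: f(x1,x2,x3) = x1*x2 XOR x3, a quadratic toy function on 3 variables.
tt = [(x >> 2 & 1) & (x >> 1 & 1) ^ (x & 1) for x in range(8)]
print(nonlinearity(tt))   # 2
```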
A Construction of Resilient Functions with High Nonlinearity
The relationship between nonlinearity and
resiliency for a function $F:\mathbb{F}_2^n \mapsto
\mathbb{F}_2^m$ is considered. We give a construction of resilient
functions with high nonlinearity. The construction leads to the
problem of finding a set of linear codes with a fixed minimum
distance, having the property that the intersection
between any two codes is the all zero codeword only. This problem is
considered, and existence results are provided. The constructed
functions obtain a nonlinearity superior to previous construction
methods.
CRYPTANALYSIS OF THE A5/2 ALGORITHM
An attack on the A5/2 stream cipher algorithm is described that
determines the linear relations among the output sequence bits.
The vast majority of the unknown output bits can be reconstructed.
The time complexity of the attack is proportional to $2^{17}$.
Reducing the Gate Count of Bitslice DES
This paper describes various techniques to reduce the number of logic gates needed to implement the DES S-boxes in bitslice software. Using standard logic gates, an average of 56 gates per S-box was achieved, while an average of 51 was produced when non-standard gates were utilized. This is an improvement over the previous best result, which used an average of 61 non-standard gates.
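A toy illustration of the bitslice idea itself (the made-up circuit below is not one of the paper's S-box circuits): bit i of a machine word holds one bit of the i-th of 64 independent blocks, so each AND/OR/XOR/NOT gate acts on all 64 blocks at once, and the cost of an S-box is exactly its gate count.

```python
import random

MASK = (1 << 64) - 1   # simulate a 64-bit word in Python

def toy_sbox_bitsliced(a, b, c):
    """Evaluate a made-up 3-input, 2-output circuit on 64 parallel inputs;
    a, b, c each hold one input bit of all 64 instances."""
    t = a ^ b                 # gate 1: XOR
    u = t & c                 # gate 2: AND
    out0 = u ^ b              # gate 3: XOR
    out1 = (~a & MASK) | c    # gates 4-5: NOT, OR
    return out0, out1

a, b, c = (random.getrandbits(64) for _ in range(3))
o0, o1 = toy_sbox_bitsliced(a, b, c)
# Cross-check instance 17 against a plain (non-bitsliced) evaluation.
i = 17
ai, bi, ci = (a >> i) & 1, (b >> i) & 1, (c >> i) & 1
assert (o0 >> i) & 1 == ((ai ^ bi) & ci) ^ bi
assert (o1 >> i) & 1 == ((1 - ai) | ci)
```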
Spectral Analysis of High Order Correlation Immune Functions
We use the recent results on the spectral structure of
correlation immune and resilient Boolean functions for the
investigation of high order correlation immune functions.
First, we give simple proofs of some theorems for which only
long proofs were known. Next, we introduce the matrix of
nonzero Walsh coefficients and establish important properties
of this matrix. We use these properties to prove the nonexistence
of some high order correlation immune functions. Finally, we
establish the order of magnitude for the number of (n-4)th
order correlation immune functions of n variables.
Spectral Domain Analysis of Correlation Immune and Resilient Boolean Functions
In this paper we prove a general result on the Walsh Transform
of an arbitrary Boolean function. As a consequence, we obtain several
divisibility results on the Walsh Transform of correlation immune and
resilient Boolean functions. This allows us to improve upper bounds
on the nonlinearity of correlation immune and resilient Boolean
functions. Also we provide new necessary conditions on the algebraic
normal form of correlation immune/resilient functions attaining the
maximum possible nonlinearity.
New Constructions of Resilient and Correlation Immune Boolean Functions achieving Upper Bounds on Nonlinearity
Recently weight divisibility results on resilient and correlation
immune Boolean functions have received a lot of attention. These
results have direct consequences towards the upper bound on nonlinearity
of resilient and correlation immune Boolean functions of certain order.
Now the clear benchmark in the design of resilient Boolean functions
(which optimizes Siegenthaler's inequality) is to provide results
which attain the upper bound on nonlinearity. Here we construct a
7-variable, 2-resilient Boolean function with nonlinearity 56. This
solves the maximum nonlinearity issue for 7-variable functions with
any order of resiliency. Using this 7-variable function, we also
construct a 10-variable, 4-resilient Boolean function with nonlinearity
480. The construction of these two functions was posed as an important
open question at Crypto 2000. Also we provide methods to generate an
infinite sequence of Boolean functions on $n = 7 + 3i$ variables
$(i \geq 0)$ with order of resiliency $m = 2 + 2i$, algebraic degree
$4 + i$ and nonlinearity $2^{n-1} - 2^{m+1}$, which were not known
earlier. We conclude with a few interesting construction results
on unbalanced correlation immune functions of 5 and 6 variables.
Highly Nonlinear Balanced Boolean Functions with very good Autocorrelation Property
Constructing highly nonlinear balanced Boolean functions with very good
autocorrelation property is an interesting open question. In this direction
we use the measure $\Delta_f$ for a function $f$ proposed by Zhang and
Zheng (1995). We provide balanced functions $f$ with currently best known
nonlinearity and $\Delta_f$ values together. Our results for 15-variable
functions disprove the conjecture proposed by Zhang and Zheng (1995),
where our constructions are based on modifications of
Patterson-Wiedemann (1983) functions. Also we propose a simple
bent-based construction technique to get functions with very good
$\Delta_f$ values for an odd number of variables. This construction has
a root in Kerdock Codes. Moreover, our construction on an even number
of variables is a recursive one, and we conjecture (similar to Dobbertin's
conjecture (1994) with respect to nonlinearity) that it provides the
minimum possible value of $\Delta_f$ for a function $f$ on an even number
of variables.
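A small sketch (mine, not the paper's) of the Zhang-Zheng measure used above: $\Delta_f$ is the largest absolute autocorrelation value $|\sum_x (-1)^{f(x) \oplus f(x \oplus a)}|$ of $f$ over all nonzero shifts $a$.

```python
def delta_f(truth_table):
    n = len(truth_table).bit_length() - 1
    return max(abs(sum((-1) ** (truth_table[x] ^ truth_table[x ^ a])
                       for x in range(1 << n)))
               for a in range(1, 1 << n))

# f(x1,x2,x3) = x1*x2 XOR x3: for the shift a = 001 the value f(x) XOR
# f(x XOR a) is constant, so Delta_f attains its maximum of 2^n = 8 there.
tt = [(x >> 2 & 1) & (x >> 1 & 1) ^ (x & 1) for x in range(8)]
print(delta_f(tt))   # 8
```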
The Saturation Attack - a Bait for Twofish
We introduce the notion of a saturation attack and present attacks on
reduced-round versions of the Twofish block cipher. Our attack for all
generic key sizes of Twofish (i.e., for 128-bit, 192-bit and 256-bit
keys) improves on exhaustive key search for seven rounds of Twofish
with full whitening, and for eight rounds of Twofish without whitening
at the end. The core of the attack is a key-independent
distinguisher for six rounds of Twofish. The distinguisher is used to
attack up to 7 rounds of Twofish with full whitening and 8 rounds
of Twofish with prewhitening only - half of the cipher. The attacks
take up to 2^127 chosen plaintexts (half of the codebook!) and are 2-4
times faster than exhaustive search.
Efficient Zero-Knowledge Proofs of Knowledge Without Intractability Assumptions
We initiate the investigation of the class of relations
that admit extremely efficient perfect zero knowledge
proofs of knowledge: constant number of rounds, communication
linear in the length of the statement and the witness, and
negligible knowledge error. In its most general incarnation,
our result says that for relations that have a particular
three-move honest-verifier zero-knowledge (HVZK) proof of knowledge,
and which admit a particular three-move HVZK proof of knowledge for
an associated commitment relation, perfect zero knowledge
(against a general verifier) can be achieved essentially for free,
even when proving statements on several instances combined
under monotone function composition. In addition,
perfect zero-knowledge is achieved in an optimal 4 moves.
Instantiations of our main protocol lead to efficient perfect
ZK proofs of knowledge of discrete logarithms and RSA-roots,
or more generally, $q$-one-way group homomorphisms.
None of our results rely on intractability assumptions.
Provably Secure Password-Authenticated Key Exchange Using Diffie-Hellman
When designing password-authenticated key exchange protocols (as
opposed to key exchange protocols authenticated using
cryptographically secure keys), one must not allow any information
to be leaked that would allow verification of the password (a weak
shared key), since an attacker who obtains this information may be
able to run an off-line dictionary attack to determine the
correct password. Of course, it may be extremely difficult to hide
all password information, especially if the attacker may pose as one
of the parties in the key exchange. Nevertheless, we present a new
protocol called PAK which is the first Diffie-Hellman-based
password-authenticated key exchange protocol to provide a formal
proof of security (in the random oracle model) against both passive
and active adversaries. In
addition to the PAK protocol that provides mutual explicit
authentication, we also show a more efficient protocol called PPK that
is provably secure in the implicit-authentication model. We then
extend PAK to a protocol called PAK-X, in which one side (the
client) stores a plaintext version of the password, while the other
side (the server) only stores a verifier for the password. We
formally prove security of PAK-X, even when the server is
compromised. Our formal model for password-authenticated key
exchange is new, and may be of independent interest.
Constructions and Bounds for Unconditionally Secure Commitment Schemes
Commitment schemes have been extensively studied since they
were introduced by Blum in 1982. Rivest recently
showed how to construct unconditionally secure commitment schemes,
assuming the existence of a trusted initializer. In this paper, we present a
formal mathematical model for such schemes, and analyze their
binding and concealing properties. In particular, we
show that such schemes cannot be perfectly concealing: there is necessarily
a small probability that Alice can cheat Bob by committing to one value
but later revealing a different value. We prove several
bounds on Alice's cheating probability, and present constructions
of schemes that achieve optimal cheating probabilities. We also
show a close link between commitment schemes and the classical
``affine resolvable designs''.
Constructing Pseudo-Random Permutations with a Prescribed Structure
We show how to construct pseudo-random permutations that satisfy a
certain cycle restriction, for example that the permutation be
cyclic (consisting of one cycle containing all the elements) or an
involution (a self-inverse permutation) with no fixed points. The
construction can be based on any (unrestricted) pseudo-random
permutation. The resulting permutations
are defined succinctly and their
evaluation at a given point is efficient. Furthermore, they enjoy
a {\em fast forward} property, i.e. it is possible to iterate
them at a very small cost.
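One natural way to realize this, under my reading of the abstract (the paper's constructions may differ in detail): conjugate the cyclic shift $x \mapsto x+1 \bmod N$ by a pseudo-random permutation $P$, giving $F = P \circ (x \mapsto x+1) \circ P^{-1}$. The result is a single $N$-cycle, and fast forward is one addition: $F^k(x) = P(P^{-1}(x) + k \bmod N)$. The toy "PRP" below is just a fixed secret permutation.

```python
import random

N = 12
rng = random.Random(2024)            # stand-in for a keyed PRP
perm = list(range(N)); rng.shuffle(perm)
inv = [0] * N
for i, v in enumerate(perm):
    inv[v] = i

def F(x):                            # the cyclic permutation
    return perm[(inv[x] + 1) % N]

def F_pow(x, k):                     # fast forward: k iterations in O(1)
    return perm[(inv[x] + k) % N]

x, y = 5, 5
for _ in range(1000):
    y = F(y)
assert y == F_pow(x, 1000)
cyc, z = [x], F(x)                   # F is one cycle through all N points
while z != x:
    cyc.append(z); z = F(z)
assert len(cyc) == N
```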
On Symmetrically Private Information Retrieval
In this paper we give a single-round, single-server symmetrically private information retrieval (SPIR) scheme, in which the privacy of the user follows from the intractability of the quadratic residuosity problem and the privacy of the database follows from the number-theoretic XOR assumption introduced in this paper. The proposed scheme achieves communication complexity $O(n^{\e})$, for any $\e > 0$, where $n$ is the number of bits in the database. We also present an efficient block retrieval SPIR scheme. Interestingly, we show that an $O(K \log{n})$ SPIR scheme is possible if there exists a probabilistic bit encryption scheme on which certain operators can be defined with the desired properties. Finally, we generalize the SPIR scheme to private retrieval of secrets and sharing by a group of users; it can also be viewed as an extended secret sharing scheme. We also discover and prove certain properties related to quadratic residuosity in particular and probabilistic bit encryption in general.
Decimation Attack of Stream Ciphers
This paper presents a new attack, called the {\em Decimation Attack},
against most stream ciphers. It exploits the property that multiple clocking
(or equivalently, $d$-th decimation) of an LFSR can simulate the behavior
of many other LFSRs of possibly shorter length. It then yields significant
improvements of all the previously known correlation and fast correlation attacks,
provided a new criterion is satisfied. This criterion on the length of the feedback
polynomial is then defined to resist the decimation attack. Simulation results and
complexity comparisons are detailed for the ciphertext-only attack.
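A toy illustration of the decimation operation itself (the register size, taps, and d below are arbitrary choices of mine): taking every d-th output bit of an LFSR yields a sequence that is again an LFSR sequence (for an irreducible feedback polynomial, one whose roots are the d-th powers of the original roots), possibly of shorter length, which is the property the attack exploits.

```python
def lfsr(state, taps, nbits):
    """Fibonacci LFSR: state is a list of bits, taps are feedback positions."""
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

seq = lfsr([1, 0, 0, 1, 1], taps=[0, 2], nbits=60)   # a 5-stage LFSR
d = 3
decimated = seq[::d]     # the sequence the attacker analyzes instead
print(seq[:15])
print(decimated[:5])
```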
Encryption Modes with Almost Free Message Integrity
We define a new mode of operation for block ciphers which in addition to providing confidentiality also ensures message integrity. In contrast, previously for message integrity a separate pass was required to compute a cryptographic message authentication code (MAC). The new mode of operation, called Integrity Aware Parallelizable Mode (IAPM),
requires a total of m+1 block cipher evaluations on a plain-text of length m blocks. For comparison, the well known CBC (cipher block chaining) encryption mode requires m block cipher evaluations, and the second pass of computing the CBC-MAC essentially requires additional m+1 block cipher evaluations. As the name suggests, the new mode is also highly parallelizable.
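A schematic sketch of the IAPM shape described above (my simplification, not Jutla's exact specification: the offsets here come from SHA-256 rather than the paper's pairwise independent whitening, and a toy four-round Feistel stands in for a real block cipher): m parallel whitened block-cipher calls plus one call on a checksum block, i.e. m+1 calls in total.

```python
import hashlib

B = 16  # block size in bytes

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(key, block):             # invertible toy permutation
    l, r = block[:8], block[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + r).digest()[:8]
        l, r = r, _xor(l, f)
    return l + r

def iapm_like_encrypt(key, nonce, blocks):
    m = len(blocks)
    S = [hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()[:B]
         for i in range(m + 1)]               # toy offsets, NOT the paper's
    cts = [_xor(toy_block_cipher(key, _xor(p, S[i])), S[i])
           for i, p in enumerate(blocks)]     # m cipher calls, parallelizable
    checksum = bytes(B)
    for p in blocks:
        checksum = _xor(checksum, p)
    tag = _xor(toy_block_cipher(key, _xor(checksum, S[m])), S[m])  # call m+1
    return cts, tag

cts, tag = iapm_like_encrypt(b"k" * 16, b"nonce123", [b"A" * 16, b"B" * 16])
print(len(cts), tag.hex()[:16])
```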
On the Complexity of Verifiable Secret Sharing and Multi-Party Computation
We first study the problem of doing Verifiable Secret Sharing (VSS)
information theoretically secure for a general access structure. We
do it in the model where private channels between players and a
broadcast channel is given, and where an active, adaptive adversary
can corrupt any set of players not in the access structure. In
particular, we consider the complexity of protocols for this
problem, as a function of the access structure and the number of
players. For all access structures where VSS is possible at all, we
show that, up to a polynomial time black-box reduction, the
complexity of adaptively secure VSS is the same as that of ordinary
secret sharing (SS), where security is only required against a
passive, static adversary. Previously, such a connection was only
known for linear secret sharing and VSS schemes.
We then show an impossibility result indicating that a similar
equivalence does not hold for Multiparty Computation (MPC): we show
that even if protocols are given black-box access for free to an
idealized secret sharing scheme secure for the access structure in
question, it is not possible to handle all relevant access
structures efficiently, not even if the adversary is passive and
static. In other words, general MPC can only be black-box reduced
efficiently to secret sharing if extra properties of the secret
sharing scheme used (such as linearity) are assumed.
General Secure Multi-Party Computation from any Linear Secret Sharing Scheme
We show that verifiable secret sharing (VSS) and secure multi-party
computation (MPC) among a set of $n$ players can efficiently be based
on {\em any} linear secret sharing scheme (LSSS) for the players,
provided that the access structure of the LSSS allows MPC or VSS at
all. Because an LSSS neither guarantees reconstructability when some
shares are false, nor verifiability of a shared value, nor allows for
the multiplication of shared values, an LSSS is an apparently much weaker
primitive than VSS or MPC.
Our approach to secure MPC is generic and applies to both the
information-theoretic and the cryptographic setting. The
construction is based on 1) a formalization of the special
multiplicative property of an LSSS that is needed to perform a
multiplication on shared values, 2) an efficient generic construction
to obtain from any LSSS a multiplicative LSSS for the same access
structure, and 3) an efficient generic construction to build
verifiability into every LSSS (always assuming that the adversary
structure allows for MPC or VSS at all).
The protocols are efficient. In contrast to all previous
information-theoretically secure protocols, the field size is not
restricted (e.g., to be greater than $n$). Moreover, we exhibit
adversary structures for which our protocols are polynomial in $n$
while all previous approaches to MPC for non-threshold adversaries
provably have super-polynomial complexity.
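A concrete instance of the multiplicative property in the standard LSSS, Shamir's scheme (an illustration of the property, not the paper's generic construction): players locally multiply their shares of a and b, and a fixed public linear combination (Lagrange interpolation at 0 over 2t+1 points) of these local products recovers ab.

```python
import random

P = 2**31 - 1          # a prime field

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):            # Lagrange interpolation at x = 0
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

t, n = 2, 7
a, b = 1234, 5678
sa, sb = share(a, t, n), share(b, t, n)
prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
# the product polynomial has degree 2t, so 2t+1 = 5 shares suffice:
assert reconstruct(prod_shares[:2 * t + 1]) == a * b % P
```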
Using fewer Qubits in Shor's Factorization Algorithm via Simultaneous Diophantine Approximation
While quantum computers might in principle speed up certain
computations dramatically, in practice quantum computing technology
is still in its infancy; at present we cannot even clearly envision
what the hardware of such a machine will be like.
Nevertheless, we can be quite confident that it will be
much easier to build a practical quantum computer
operating on a small number of quantum bits than one operating
on a huge number of quantum bits.
It is therefore of great practical importance to use the resource
of quantum bits sparingly,
i.e., to find quantum algorithms which use as few
quantum bits as possible.
Here, we present a method to reduce the number of actually needed qubits
in Shor's algorithm to factor a composite number $N$.
Exploiting the inherent probabilism of quantum computation we are able to
substitute the continued fraction algorithm to find a certain unknown
fraction by a simultaneous Diophantine approximation.
While the continued fraction algorithm is able to find a Diophantine
approximation to a single known fraction with a denominator greater than
$N^2$, our simultaneous Diophantine approximation method computes in
polynomial time unusually good approximations to known fractions with a
denominator of size $N^{1+\varepsilon}$, where $\varepsilon$ is allowed to be
an arbitrarily small positive constant.
As these unusually good approximations are almost unique we are able to
recover an unknown denominator using fewer qubits in the quantum part of our
algorithm.
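For contrast, a worked toy example of the classical continued-fraction step that the paper replaces (toy numbers mine): the measurement yields an integer y such that y/2^q is close to c/r for an unknown denominator r < N, and one continued-fraction computation recovers r, but the classical guarantee needs 2^q > N^2; the paper's point is that simultaneous Diophantine approximation gets by with a register of size about N^(1+eps).

```python
from fractions import Fraction

N = 21            # toy modulus, so the hidden denominator r < N
r, c = 6, 5       # hidden period and an unknown multiplier
q = 10            # register size: 2^q = 1024 > N^2 = 441
y = round(c * 2**q / r)                        # what the measurement gives
approx = Fraction(y, 2**q).limit_denominator(N - 1)   # continued fractions
print(approx)     # 5/6 -> recovered denominator r = 6
assert approx.denominator == r
```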
Electronic Jury Voting Protocols
This work elicits the fact that all current
proposals for electronic voting schemes disclose the final
tally of the votes.
In certain situations, like jury voting, this may be undesirable.
We present a robust and universally verifiable
Membership Testing Scheme (MTS) that allows, among other things,
a collection of voters to cast votes and determine whether
their tally belongs to some pre--specified set (e.g., exceeds a
given threshold)
--- our scheme discloses no more information than is implied
by the knowledge of such membership.
We discuss several extensions of our basic MTS.
All the constructions presented
combine features of two parallel lines of research
concerning electronic voting schemes: those based on MIX--networks
and those based on homomorphic encryption.
Random Oracles in Constantinople: Practical Asynchronous Byzantine Agreement using Cryptography
Byzantine agreement requires a set of parties in a distributed system to
agree on a value even if some parties are corrupted. A new protocol for
Byzantine agreement in a completely asynchronous network is presented that
makes use of cryptography, specifically of threshold signatures and
coin-tossing protocols. These cryptographic protocols have practical and
provably secure implementations in the ``random oracle'' model. In
particular, a coin-tossing protocol based on the Diffie-Hellman problem is
presented and analyzed.
The resulting asynchronous Byzantine agreement protocol is both practical
and theoretically nearly optimal because it tolerates the maximum number of
corrupted parties, runs in constant expected time, has message
and communication complexity close to the optimum, and uses a trusted dealer
only in a setup phase, after which it can process a virtually unlimited
number of transactions.
The protocol is formulated as a transaction processing service in a
cryptographic security model, which differs from the standard
information-theoretic formalization and may be of independent interest.
The Complete Distribution of Linear Probabilities of MARS' s-box
This paper shows the complete linear probability distribution of MARS' s-box. The best bias is $\dfrac{84}{2^9}$ ($=2^{-2.61}$), while the designers' estimation is $\dfrac{64}{2^9}$ and the best previously known bias is $\dfrac{82}{2^9}$.
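The quantity being tabulated, shown on a made-up 3-bit S-box (MARS's real S-box has 512 entries of 32 bits, so the mask spaces are far larger, but the formula is the same): for an input mask a and output mask b, the bias is $|\#\{x : \langle a,x\rangle = \langle b,S[x]\rangle\}/2^n - 1/2|$.

```python
def parity(v):
    return bin(v).count("1") & 1

def linear_bias(sbox, a, b):
    n = len(sbox)
    hits = sum(parity(a & x) == parity(b & sbox[x]) for x in range(n))
    return abs(hits / n - 0.5)

toy_sbox = [3, 6, 0, 5, 7, 1, 4, 2]          # arbitrary 3-bit permutation
best = max(linear_bias(toy_sbox, a, b)
           for a in range(8) for b in range(1, 8))
print(best)    # the analogue, for this toy table, of the paper's 84/2^9
```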
Anonymous Fingerprinting with Direct Non-Repudiation
Fingerprinting schemes support copyright protection by enabling the merchant of a data item to identify the original buyer of a
redistributed copy. In asymmetric schemes, the merchant can also convince an arbiter of this fact. Anonymous fingerprinting schemes enable buyers to purchase digital items anonymously; however, identification is possible if they redistribute the data item.
Recently, a concrete and reasonably efficient construction based on digital coins was proposed. A disadvantage is that the accused
buyer has to participate in any trial protocol to deny charges. Trials with direct non-repudiation, i.e., the merchant alone
holds enough evidence to convince an arbiter, are more useful in real life. This is similar to the difference between ``normal'' and ``undeniable'' signatures.
In this paper, we present an equally efficient anonymous fingerprinting scheme with direct non-repudiation. The main technique
we use, delayed verifiable encryption, is related to coin tracing in escrowed cash systems. However, there are technical differences, mainly to provide an unforgeable link to license conditions.
Forward Security in Threshold Signature Schemes
We consider the usage of forward security with threshold
signature schemes. This means that even if more than the
threshold number of players are compromised, some security remains:
it is not possible to forge signatures relating to the past. In
this paper, we describe the first forward-secure threshold
signature schemes whose parameters (other than signing or verifying
time) do not vary in length with the number of time periods in the
scheme. Both are threshold versions of the Bellare-Miner
forward-secure signature scheme, which is Fiat-Shamir-based. One
scheme uses multiplicative secret sharing, and tolerates mobile
eavesdropping adversaries. The second scheme is based on polynomial
secret sharing, and we prove it forward-secure based on the security
of the Bellare-Miner scheme. We then sketch modifications which
would allow this scheme to tolerate malicious adversaries. Finally,
we give several general constructions which add forward security to
any existing threshold scheme.
Last updated: 2001-03-16
Secure Multiparty Computation of Approximations
Approximation algorithms can sometimes be used to obtain efficient
solutions where no efficient exact computation is known. In
particular, approximations are often useful in a distributed setting
where the inputs are held by different parties and are extremely
large. Furthermore, for some applications, the parties want to
cooperate to compute a function of their inputs without revealing more
information than they have to.
Suppose the function $\hat{f}$ is an approximation to the function $f$.
Secure multiparty computation of $f$ allows the parties to compute $f$
without revealing more than they have to, but requires some additional
overhead in computation and communication. Hence, if $f$ is
inefficient or just efficient enough to be practical, a secure
computation of $f$ may be impractically expensive. A secure
computation of $\hat{f}$ may be efficient enough, but a secure
computation of $\hat{f}$ is not necessarily as private as a secure
computation of $f$, because the output of $\hat{f}$ may reveal more
information than the output of $f$. In this paper, we present
definitions and protocols of secure multiparty approximate computation
that show how to realize most of the cost savings available by using
$\hat{f}$ instead of $f$ without losing the privacy of a secure
computation of $f$.
We make three contributions in this paper. First, we give formal
definitions of secure multiparty approximate computations. Second, we
introduce some general techniques for constructing secure multiparty
approximations. Finally, we present an efficient,
sublinear-communication, secure approximate computation for the
Hamming and $L^{1}$ distances.
Concrete Security Characterizations of PRFs and PRPs: Reductions and Applications
We investigate, in a concrete security setting, several alternate
characterizations of pseudorandom functions (PRFs) and pseudorandom
permutations (PRPs). By analyzing the concrete complexity of the
reductions between the standard notions and the alternate ones, we
show that the latter, while equivalent under polynomial-time
reductions, are weaker in the concrete security sense. With these
alternate notions, we argue that it is possible to get better
concrete security bounds for certain PRF/PRP-based schemes. As an
example, we show how using an alternate characterization of a PRF
could result in tighter security bounds for a certain class of
message authentication codes. We also apply these techniques to
give a simple concrete security analysis of the counter mode of
encryption. In addition, our results provide some insight into how
injectivity impacts pseudorandomness.
An Information-Theoretic Model for Steganography
An information-theoretic model for steganography with a passive adversary is
proposed. The adversary's task of distinguishing between an innocent cover
message $C$ and a modified message $S$ containing hidden information is
interpreted as a hypothesis testing problem. The security of a
steganographic system is quantified in terms of the relative entropy (or
discrimination) between the distributions of $C$ and $S$, which yields
bounds on the detection capability of any adversary. It is shown that
secure steganographic schemes exist in this model provided the covertext
distribution satisfies certain conditions. A universal stegosystem is
presented in this model that needs no knowledge of the covertext
distribution, except that it is generated from independently repeated
experiments.
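The security measure used here, in miniature (example distributions mine): the relative entropy $D(P_C \| P_S)$ between the covertext distribution and the stegotext distribution; zero relative entropy means no statistical test whatsoever can distinguish them.

```python
from math import log2

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits, for finite dicts."""
    return sum(px * log2(px / q[x]) for x, px in p.items() if px > 0)

covertext = {"a": 0.5, "b": 0.25, "c": 0.25}
stegotext = {"a": 0.4, "b": 0.3, "c": 0.3}
print(relative_entropy(covertext, stegotext))   # ~0.03 bits
```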
Accountable Certificate Management using Undeniable Attestations
This paper initiates a study of accountable certificate management
methods, necessary to support long-term authenticity of digital
documents. Our main contribution is a model for accountable
certificate management, where clients receive attestations
confirming inclusion/removal of their certificates from the database
of valid certificates. We explain why accountability depends on the
inability of the third parties to create contradictory attestations.
After that we define an undeniable attester as a primitive that
provides efficient attestation creation, publishing and
verification, so that it is intractable to create contradictory
attestations. We introduce authenticated search trees and build an
efficient undeniable attester upon them. The proposed system is the
first accountable long-term certificate management system.
Moreover, authenticated search trees can be used in many
security-critical applications instead of the (sorted) hash trees to
reduce trust in the authorities, without decrease in efficiency.
Therefore, the undeniable attester promises to be a very useful
cryptographic primitive with a wide range of applications.
Authentication and Key Agreement via Memorable Password
This paper presents a new password authentication and key agreement protocol, AMP, based on the amplified password idea.
The intrinsic problems with password authentication are that the password itself has low entropy and that the password file is very hard
to protect.
We present the amplified password proof and the amplified password file for solving these problems.
In the amplified password proof, a party commits to high-entropy information and amplifies her password with that information.
She never reveals any information except the fact that she knows the password;
in that sense, our amplified password proof idea is very similar to a zero-knowledge proof.
We add one more idea: the amplified password file, for password file protection.
A server stores the amplified verifiers in the amplified password file that is secure against a server file compromise and a dictionary
attack.
AMP mainly provides the password-verifier based authentication
and the Diffie-Hellman based key agreement, securely and efficiently.
AMP is easy to generalize in any other cyclic groups.
In spite of those plentiful properties, AMP is actually the most efficient protocol among the related protocols
due to the simultaneous multiple exponentiation method.
Several variants such as AMP^i, AMP^n, AMP^n+, AMP+, AMP++, and AMP^c are also proposed.
Among them, AMP^n is actually the basic protocol of this paper that describes the amplified password proof idea
while AMP is the most complete protocol that adds the amplified password file.
AMP^i simply removes the amplified password file from AMP.
In the end, we give a comparison to the related protocols in terms of efficiency.
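A sketch of the simultaneous multiple exponentiation trick credited above for AMP's efficiency (often attributed to Shamir): compute g^a * h^b mod p in a single pass over the bits of a and b with a four-entry precomputed table, instead of two full exponentiations.

```python
def simul_exp(g, h, a, b, p):
    table = [1, g % p, h % p, g * h % p]   # indexed by (bit of b, bit of a)
    r = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        r = r * r % p                      # one squaring per bit position
        idx = (((b >> i) & 1) << 1) | ((a >> i) & 1)
        r = r * table[idx] % p             # at most one multiplication
    return r

p = 2**127 - 1                             # a convenient prime for the demo
g, h, a, b = 3, 7, 0x1234567, 0x89ABCDE
assert simul_exp(g, h, a, b, p) == pow(g, a, p) * pow(h, b, p) % p
```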
Authenticated Encryption: Relations among notions and analysis of the generic composition paradigm
An authenticated encryption scheme is a symmetric encryption scheme whose goal is to provide both privacy and integrity. We consider two possible notions of authenticity for such schemes, namely integrity of plaintexts and integrity of ciphertexts, and relate them (when coupled with IND-CPA) to the standard notions of privacy (IND-CCA, NM-CPA) by presenting implications and separations between all notions considered. We then analyze the security of authenticated encryption schemes designed by ``generic composition,'' meaning making black-box use of a given symmetric encryption scheme and a given MAC. Three composition methods are considered, namely Encrypt-and-MAC, MAC-then-encrypt, and Encrypt-then-MAC. For each of these, and for each notion of security, we indicate whether or not the resulting scheme meets the notion in question assuming the given symmetric encryption scheme is secure against chosen-plaintext attack and the given MAC is unforgeable under chosen-message attack. We provide proofs for the cases where the answer is ``yes'' and counter-examples for the cases where the answer is ``no.''
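A minimal sketch of the Encrypt-then-MAC composition, the strongest of the three methods considered. The stdlib pieces below (a SHA-256 counter-mode stream as the "given" IND-CPA encryption scheme and HMAC-SHA256 as the "given" MAC) are my stand-ins, not primitives from the paper.

```python
import hashlib, hmac, os

def _keystream(key, nonce, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_then_mac(enc_key, mac_key, msg):
    nonce = os.urandom(16)
    ct = nonce + bytes(m ^ k for m, k in
                       zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()   # MAC the ciphertext
    return ct, tag

def decrypt(enc_key, mac_key, ct, tag):
    if not hmac.compare_digest(hmac.new(mac_key, ct, hashlib.sha256).digest(), tag):
        raise ValueError("invalid tag")    # reject before decrypting
    nonce, body = ct[:16], ct[16:]
    return bytes(c ^ k for c, k in zip(body, _keystream(enc_key, nonce, len(body))))

ek, mk = os.urandom(32), os.urandom(32)
ct, tag = encrypt_then_mac(ek, mk, b"attack at dawn")
assert decrypt(ek, mk, ct, tag) == b"attack at dawn"
```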
Security of the Most Significant Bits of the Shamir Message Passing Scheme
Boneh and Venkatesan have recently proposed a polynomial time
algorithm for recovering a ``hidden'' element $\alpha$ of a
finite field $\F_p$ of $p$ elements from rather short
strings of the most significant bits of the remainder
modulo $p$ of $\alpha t$ for several values of $t$ selected uniformly
at random from $\F_p^*$. Unfortunately the applications to the
computational security of most significant bits
of private keys of some finite field exponentiation based cryptosystems
given by Boneh and Venkatesan are not quite correct. For the Diffie-Hellman
cryptosystem the
result of Boneh and Venkatesan has been corrected and
generalized in our recent paper.
Here a similar analysis is given for the Shamir message passing scheme.
The results depend on some bounds
of exponential sums.
Security of Polynomial Transformations of the Diffie--Hellman Key
D. Boneh and R. Venkatesan have recently proposed an approach to proving that reasonably small portions of the most significant bits of the Diffie-Hellman key modulo a prime are as secure as the whole key. Some further improvements and generalizations have been obtained by I. M. Gonzales Vasco and I. E. Shparlinski. E. R. Verheul has obtained certain analogies of these results in the case of Diffie--Hellman keys in extensions of finite fields, when an oracle is given to compute a certain polynomial function of the key, for example, the trace in the background field. Here we obtain some new results in this direction concerning the case of so-called "unreliable" oracles.
ACE: The Advanced Cryptographic Engine
This document describes
the Advanced Cryptographic Engine (ACE).
It specifies a public key encryption
scheme as well as a
digital signature scheme
with enough detail to ensure interoperability between different
implementations.
These schemes are almost as efficient as commercially used schemes,
yet unlike such schemes, can be proven secure under reasonable
and well-defined
intractability assumptions.
A concrete security analysis of both schemes is presented.
An Efficient Identification Scheme Based on Permuted Patterns
This paper proposes a new identification scheme based on a hard
partition problem rather than factoring or discrete logarithm
problems. The new scheme minimizes at the same time the communication
complexity and the computational cost required by the parties.
Since only simple operations are needed for an identification,
our scheme is well suited for smart cards with very limited
processing power. With a "good" implementation, the scheme is much
faster than the Fiat-Shamir or Shamir's PKP schemes.
On the Security of Diffie--Hellman Bits
Boneh and Venkatesan have recently proposed a polynomial time
algorithm for recovering a "hidden" element $\alpha$ of a finite field $\F_p$ of $p$ elements from rather short strings of the most significant bits of the remainder modulo $p$ of $\alpha t$ for several
values of $t$ selected uniformly at random from $\F_p^*$. We use some
recent bounds of exponential sums to generalize this algorithm to the case when $t$ is selected from a quite small subgroup of $\F_p^*$.
Namely, our results apply to subgroups of size at least
$p^{1/3+ \varepsilon}$ for all primes $p$ and to subgroups of size at
least $p^{\varepsilon}$ for almost all primes $p$, for any fixed
$\varepsilon >0$.
We also use this generalization to improve (and correct)
one of the statements of the aforementioned work about the
computational security of the most significant bits of the
Diffie--Hellman key.
Threshold Cryptography Secure Against the Adaptive Adversary, Concurrently
A threshold cryptosystem or signature scheme is a system with $n$ participants
where an honest majority can successfully decrypt a message or issue a
signature, but where the security and functionality properties of the
system are retained even as
the adversary corrupts up to $t$ players.
We present the novel technique of a committed proof,
which is a new general tool that enables security of threshold
cryptosystems in the presence of the adaptive adversary.
We also put forward a new measure of security for threshold schemes
secure in the adaptive adversary model: security under concurrent
composition.
Using committed proofs, we construct concurrently and adaptively secure
threshold protocols for a variety of cryptographic applications.
In particular, based on the recent scheme by Cramer-Shoup, we
construct adaptively secure threshold cryptosystems secure against
adaptive chosen ciphertext attack under the DDH intractability
assumption.
Last updated: 2000-07-09
Fast Verification of Any Remote Procedure Call: Short Witness-Indistinguishable One-Round Proofs for NP
The paper
is withdrawn.
As communicated to us by C. Dwork, M. Langberg,
M. Naor and K. Nissim [1] the protocol as presented in the paper
is not sufficient to prove the claims. We gratefully acknowledge
the authors of [1] for pointing out this error to us.
REFERENCES:
[1] C. Dwork, M. Langberg, M. Naor, and K. Nissim,
"Succinct Proofs for NP and Spooky Interactions" private communication,
July 4, 2000.
Lower Bounds on the Efficiency of Generic Cryptographic Constructions
We present lower bounds on the efficiency of
constructions for Pseudo-Random Generators (PRGs) and
Universal One-Way Hash Functions (UOWHFs) based on
black-box access to one-way permutations. Our lower bounds are tight as
they match the efficiency of known constructions.
A PRG (resp. UOWHF) construction based on black-box access is
a machine that is given oracle access to a permutation. Whenever
the permutation is hard to invert, the construction is hard to break.
In this paper we give lower bounds on the
number of invocations to
the oracle by the construction.
If $S$ is the assumed security of the oracle permutation $\pi$
(i.e. no adversary of size $S$ can invert $\pi$ on a fraction
larger than $1/S$ of its inputs)
then a PRG (resp. UOWHF) construction that
stretches (resp. compresses) its input by $k$ bits must query $\pi$
in $q=\Omega(k/\log S)$ points. This matches known constructions.
Our results are given in an extension of the Impagliazzo-Rudich
model. That is, we prove that a proof of the existence of PRG (resp. UOWHF)
black-box constructions that beat our lower bound would imply
a proof of the unconditional existence of such construction
(which would also imply $P \neq NP$).
Last updated: 2001-06-19
Cryptanalysis of RSA with small prime difference
We show that choosing an RSA modulus with a small difference of its prime factors
yields improvements on the small private exponent attacks of Wiener and Boneh-Durfee.
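The abstract does not spell out its attack, but the classical illustration of why a small prime difference is dangerous is Fermat's factoring method (a related, well-known technique, not the paper's): if N = pq with p - q small, then N = s^2 - t^2 with s = (p+q)/2 barely above sqrt(N), so scanning s upward factors N almost immediately.

```python
from math import isqrt

def fermat_factor(n):
    s = isqrt(n)
    if s * s < n:
        s += 1
    while True:
        t2 = s * s - n
        t = isqrt(t2)
        if t * t == t2:
            return s - t, s + t          # = q, p
        s += 1

p, q = 65537, 65539                      # close primes (tiny toy example)
print(fermat_factor(p * q))              # (65537, 65539), found immediately
```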
Identification Protocols Secure Against Reset Attacks
We provide identification protocols that are secure even
when the adversary can reset the internal state and/or randomization source of
the user identifying itself, and when executed in an asynchronous environment
like the Internet that gives the adversary concurrent access to instances of
the user. These protocols are suitable for use by devices (like smartcards)
which when under adversary control may not be able to reliably maintain their
internal state between invocations.
Authenticated Key Exchange Secure Against Dictionary Attacks
This paper gives definitions and results about password-based
protocols for authenticated key exchange (AKE), mutual authentication
(MA), and the combination of these goals (AKE, MA).
Such protocols are designed to work despite interference by an active
adversary and despite the use of passwords drawn from a space so small
that an adversary might well enumerate, off line,
a user's password.
While several such password-based protocols have been suggested,
the underlying theory has been lagging, and
some of the protocols don't actually work.
This is an area strongly in need of foundations,
but definitions and theorems here can get overwhelmingly complex.
To help manage this complexity we begin by defining a model, one rich enough
to deal with password guessing, forward secrecy,
server compromise, and loss of session keys.
The one model can be used to
define various goals.
We take AKE (with implicit authentication---no one besides
your intended partner could possibly get the key, though he may or may
not actually get it) as the basic goal.
Then we prove that any secure
AKE protocol can be
embellished (in a simple and generic way)
to also provide for MA.
This approach turns out to be simpler than trying to
augment an MA protocol to also distribute a session key.
Next we prove correctness for the idea at the center
of the Encrypted Key-Exchange (EKE) protocol
of Bellovin and Merritt:
we prove (in an ideal-cipher model) that
the two-flow protocol at the core of EKE is
a secure AKE.
Combining with the result above, we obtain a
simple 3-flow protocol for AKE, MA which is
proven secure against dictionary attack.
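A toy transcript of the two-flow core of EKE analyzed above (the ideal cipher is replaced here by XOR with a password-derived pad, and the group parameters are toy values of mine; fine for showing the flows, useless for real security): each side sends its Diffie-Hellman value encrypted under the shared password, and both derive g^(xy).

```python
import hashlib, secrets

p = 2**127 - 1          # toy group parameters
g = 2

def pad(password, label):
    return int.from_bytes(hashlib.sha256(password + label).digest()[:16], "big")

password = b"correct horse"
x = secrets.randbelow(p - 2) + 1               # Alice's exponent
y = secrets.randbelow(p - 2) + 1               # Bob's exponent

flow1 = pow(g, x, p) ^ pad(password, b"A")     # Alice -> Bob
flow2 = pow(g, y, p) ^ pad(password, b"B")     # Bob -> Alice

key_bob = pow(flow1 ^ pad(password, b"A"), y, p)
key_alice = pow(flow2 ^ pad(password, b"B"), x, p)
assert key_alice == key_bob                    # both hold g^(xy) mod p
```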
Concurrent Zero-Knowledge in Poly-logarithmic Rounds
A proof is concurrent zero-knowledge if it remains zero-knowledge when run in an asynchronous environment, such as
the Internet. It is known that zero-knowledge is not necessarily preserved in such an environment; Kilian, Petrank and Rackoff have
shown that any {\bf 4}-round zero-knowledge interactive proof (for a non-trivial language) is not concurrent zero-knowledge. On the
other hand, Richardson and Kilian have shown that there exists a concurrent zero-knowledge argument for all languages in NP, but it
requires a {\bf polynomial} number of rounds. In this paper, we present a concurrent zero-knowledge proof for all languages in NP
with a drastically improved complexity: our proof requires only a poly-logarithmic number of rounds, specifically $\omega(\log^2 k)$.
Thus, we narrow the huge gap between the known upper and lower bounds on the number of rounds required for a zero-knowledge
proof that is robust for asynchronous composition.
Last updated: 2003-03-26
Chosen Message Attack Against Goldreich-Goldwasser-Halevi's Signature Scheme from Crypto'97
The Goldreich-Goldwasser-Halevi (GGH) signature scheme from Crypto '97, which is based on a well-known lattice problem, is cryptanalyzed. We mount a chosen message attack on the signature scheme, and show that the signature scheme is vulnerable to the attack. We collect $n$ lattice points that are linearly independent of each other, and construct a new basis that generates a sub-lattice of the original lattice. The sub-lattice is shown to be sufficient to generate a valid signature. Empirical results are presented to show the effectiveness of the attack. Finally, we show that the cube-like parameter used for the private-key generation is harmful to the security of the scheme.
Tailored Key Encryption (TaKE) Tailoring a key for a given pair of plaintext/ciphertext
The prevailing cryptographies are attacked on the basis of
the fact that only a single element in the key space will match a
plausible plaintext with a given ciphertext. Any cryptography that
violates this unique-key assumption will achieve added security
through deniability (akin to a One Time Pad). Such a cryptography is
described here. It is achieved by breaking away from the prevailing notion
that the key is a binary string of a fixed length. The described key is a
random-size non-linear array: a graph constructed from vertices and
edges. The binary naming of the vertices and edges, and the
configuration, are all part of the key. Such keys can take on most of
the necessary complexity, which allows the algorithm itself to be
exceedingly simple (a la Turing Machine).
The Security of Chaffing and Winnowing
This paper takes a closer look at Rivest's
chaffing-and-winnowing paradigm for data privacy. We begin with a
\textit{definition} which enables one to determine clearly whether a
given scheme qualifies as ``chaffing-and-winnowing.'' We then analyze
Rivest's schemes to see what quality of data privacy they provide. His
simplest scheme is easily proven secure but is inefficient. The security
of his more efficient scheme ---based on all-or-nothing transforms
(AONTs)--- is however more problematic. It can be attacked under
Rivest's definition of security of an AONT, and even under stronger
notions does not appear provable. We show however that by using a OAEP
as the AONT one can prove security. We also present a different scheme,
still using AONTs, that is equally efficient and easily proven secure
even under the original weak notion of security of AONTs.
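Rivest's basic chaffing-and-winnowing scheme, in miniature (the "simplest scheme" the paragraph says is easily proven secure): each bit is sent as a (serial, bit, MAC) packet, chaff packets carry the opposite bit with a bogus MAC, and only the holder of the MAC key can winnow wheat from chaff.

```python
import hmac, hashlib, os, random

def mac(key, serial, bit):
    return hmac.new(key, f"{serial}|{bit}".encode(), hashlib.sha256).digest()

def chaff(key, bits):
    packets = []
    for i, b in enumerate(bits):
        packets.append((i, b, mac(key, i, b)))          # wheat
        packets.append((i, 1 - b, os.urandom(32)))      # chaff: bogus MAC
    random.shuffle(packets)
    return packets

def winnow(key, packets):
    wheat = {i: b for i, b, t in packets
             if hmac.compare_digest(t, mac(key, i, b))}
    return [wheat[i] for i in sorted(wheat)]

key = os.urandom(32)
msg = [1, 0, 1, 1, 0]
assert winnow(key, chaff(key, msg)) == msg
```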
New Directions in Design of Resilient Boolean Functions
There has been a recent upsurge of research in the design of resilient
Boolean functions for use in stream cipher systems. The existing
research concentrates on maximum degree resilient functions and tries
to obtain as high nonlinearity as possible. In sharp contrast to this
approach we identify the class of functions with {\em provably best}
possible trade-off among the parameters: number of variables,
resiliency, nonlinearity and algebraic degree. We first prove a
sharper version of the McEliece theorem for Reed-Muller codes as applied
to resilient functions, which also generalizes the well known
Xiao-Massey characterization. As a consequence a nontrivial upper
bound on the nonlinearity of resilient functions is obtained. This
result coupled with Siegenthaler's inequality naturally leads to
the notion of provably best resilient functions. We further show that
such best functions can be constructed by the Maiorana-McFarland
like technique. In cases where this method fails, we provide new ideas
to construct best functions. We also briefly discuss efficient
implementation of these functions in hardware.
Efficient Protocols based on Probabilistic Encryption using Composite Degree Residue Classes
We study various applications and variants of Paillier's probabilistic
encryption scheme. First, we propose a threshold variant of the scheme,
and also zero-knowledge protocols for proving that a given ciphertext
encodes a given plaintext, and for verifying multiplication of encrypted values.
We then show how these building blocks can be used for applying the
scheme to efficient electronic voting. This reduces dramatically the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the
basic scheme for a yes/no vote can be easily adapted to casting a
vote for up to $t$ out of $L$ candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of $L$ elections can be optimised such that for a certain range of parameter values, a ballot has size only $O(\log L)$ bits.
Finally, we propose a variant of the encryption scheme, that allows
reducing the expansion factor of Paillier's scheme from 2 to almost 1.
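A bare-bones Paillier instance showing the additive homomorphism that makes the tallying cheap (toy parameters mine; the paper's actual contributions, the threshold variant and the zero-knowledge proofs, are not sketched here): multiplying ciphertexts adds the underlying votes.

```python
import math, random

p, q = 65537, 65539              # tiny toy primes; real keys are far larger
n = p * q
n2 = n * n
g = n + 1                        # standard choice of generator
lam = math.lcm(p - 1, q - 1)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * pow(lam, -1, n) % n

votes = [1, 0, 1, 1, 0, 1]                # six yes/no ballots
tally_ct = 1
for v in votes:
    tally_ct = tally_ct * enc(v) % n2     # homomorphic addition of votes
assert dec(tally_ct) == sum(votes)        # 4
```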
Public Electronic Contract Protocol
The notion of a Public Electronic Contract (PEC) protocol is presented in this paper. The idea is that the PEC will be published in a public directory (of certain groups), letting all members review the true (raw) transaction information. The collected PEC information reflects the market trends more reliably than estimates that depend merely on data provided by certain agencies. The goal is to eliminate the opportunities for certain agencies to manipulate the data and persuade investors to make inappropriate decisions on purchases or investments. A perfect open market with open facts should be established in the future. The PEC also has the property of public witnesses, so that transactions will be more secure. In order to keep the protocol simple, its implementation is mainly based on the RSA public key scheme.
An Encryption Algorithm and Key-stream Generator for Chinese Text Messages by Character Internal Code Structure
This paper proposes an algorithm to encipher Chinese plaintext messages written in Big-5/GB Chinese character internal codes. Unlike ordinary ciphers, the crypto-key of our proposed algorithm consists of a pair of formulas and a set of parameter values. Senders can formulate and compose their own unique formulas and parameters for encryption. According to the character internal code structure, we apply the formulas in a key-stream generator to encipher the Chinese plaintext message. Since the proposed stream generator does not contain permanent encryption and decryption operations, opponents are unable to predict the form of its output (ciphertext). Experimental results show that the proposed algorithm achieves data secrecy.
On Resilient Boolean Functions with Maximal Possible Nonlinearity
It is proved that the maximal possible nonlinearity of an $n$-variable
$m$-resilient Boolean function is $2^{n-1}-2^{m+1}$ for
${2n-7\over 3}\le m\le n-2$. This value can be achieved only for
optimized functions (i.e., functions with an algebraic degree $n-m-1$).
For ${2n-7\over 3}\le m\le n-\log_2{n-2\over 3}-2$ we suggest a method
to construct an $n$-variable $m$-resilient function with maximal possible
nonlinearity $2^{n-1}-2^{m+1}$ such that each variable appears in the ANF of this
function in some term of maximal possible length $n-m-1$.
For $n\equiv 2\pmod 3$, $m={2n-7\over 3}$,
we give a hardware implementation scheme for such a function that
requires approximately $2n$ XOR gates and $(2/3)n$ AND gates.
Combinatorial Properties of Frameproof and Traceability Codes
In order to protect copyrighted material, codes may be
embedded in the content or codes may be associated with the
keys used to recover the content. Codes can offer protection
by providing some form of traceability for pirated
data. Several researchers have studied different notions of
traceability and related concepts in recent years. "Strong"
versions of traceability allow at least one member of a
coalition that constructs a "pirate decoder" to be
traced. Weaker versions of this concept ensure that no
coalition can "frame" a disjoint user or group of users. All
these concepts can be formulated as codes having certain
combinatorial properties.
In this paper, we study the relationships between the various
notions, and we discuss equivalent formulations using
structures such as perfect hash families. We use methods from
combinatorics and coding theory to provide bounds (necessary
conditions) and constructions (sufficient conditions) for the
objects of interest.
Last updated: 2000-03-10
Implications of the Nontriviality of Entropy Approximation
The paper was withdrawn because it contained a fatal flaw.
A New Forward-Secure Digital Signature Scheme
We improve the Bellare-Miner (Crypto '99) construction of signature
schemes with forward security in the random oracle model. Our scheme
has significantly shorter keys and is, therefore, more practical. By
using a direct proof technique not used for forward-secure schemes
before, we are able to provide better security bounds for the original
construction as well as for our scheme.
Bellare and Miner also presented a method for constructing such schemes
without the use of the random oracle. We conclude by proposing an
improvement to their method and an additional, new method for accomplishing
this.
On Security Preserving Reductions -- Revised Terminology
Many of the results in Modern Cryptography are actually
transformations of a basic computational phenomenon (i.e., a
basic primitive, tool or assumption) to a more complex
phenomenon (i.e., a higher level primitive or
application). The transformation is explicit and is always
accompanied by an explicit reduction of the violation of the
security of the former phenomenon to the violation of the
latter. A key aspect is the efficiency of the reduction. We
discuss and slightly modify the hierarchy of reductions
originally suggested by Levin.
A tool for obtaining tighter security analyses of pseudorandom function based constructions, with applications to PRP to PRF conversion
We present a general probabilistic lemma that can be applied
to upper bound the advantage of an adversary in distinguishing between two
families of functions. Our lemma reduces the task of upper bounding the
advantage to that of upper bounding the ratio of two probabilities associated
to the adversary, when this ratio is viewed as a random variable. It
enables us to obtain significantly tighter analyses than more conventional
methods.
In this paper we apply the technique to the problem of PRP to PRF conversion.
We present a simple, new construction of a PRF from a PRP that makes only
two invocations of the PRP and has insecurity linear in the number of
queries made by the adversary. We also improve the analysis of the
truncation construction.
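For illustration, one well-known two-call PRP-to-PRF approach: XOR the outputs of the permutation under two independent keys. I have not reproduced the paper's exact construction, and the toy four-round Feistel below is a placeholder for a real PRP; the point is that the XOR is no longer a permutation, which defeats birthday distinguishing at the cost of two PRP invocations per PRF call.

```python
import hashlib

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_prp(key, block):                    # 4-round Feistel permutation
    l, r = block[:8], block[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + r).digest()[:8]
        l, r = r, _xor(l, f)
    return l + r

def prf(k1, k2, x):
    return _xor(toy_prp(k1, x), toy_prp(k2, x))   # two PRP calls

print(prf(b"k1" * 8, b"k2" * 8, b"0123456789abcdef").hex())
```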
Concurrent Zero-Knowledge
One of the toughest challenges in designing
cryptographic protocols is to design them so that they will remain
secure even when composed. For example, concurrent executions of a
zero-knowledge protocol by a single prover (with one or more
verifiers) may leak information and may not be zero-knowledge in
toto. In this work we:
(1) Suggest time as a mechanism for designing concurrent cryptographic
protocols and, in particular, for maintaining zero-knowledge under
concurrent execution.
(2) Introduce the notion of Deniable Authentication
and connect it to the problem of concurrent zero-knowledge.
We do not assume global synchronization, however we assume an
(alpha,beta) timing constraint: for any two processors $P_1$
and $P_2$, if $P_1$ measures alpha elapsed time on its local
clock and $P_2$ measures beta elapsed time on its local clock, and
$P_2$ starts after $P_1$ does, then $P_2$ will finish after
$P_1$ does. We show that for an adversary controlling all the
processors' clocks (as well as their communication channels) but
which is constrained by an (alpha,beta) constraint
there exist four-round almost concurrent zero-knowledge interactive proofs
and perfect concurrent zero-knowledge arguments for every language in NP.
We also address the more specific problem of Deniable Authentication,
for which we propose several particularly efficient solutions.
Deniable Authentication is of independent interest, even in the
sequential case; our concurrent solutions yield sequential
solutions, without recourse to timing, i.e., in the standard model.
Resettable Zero-Knowledge
We introduce the notion of Resettable Zero-Knowledge
(rZK), a new security measure for cryptographic protocols which
strengthens the classical notion of zero-knowledge. In essence, an
rZK protocol is one that remains zero-knowledge even if an adversary
can interact with the prover many times, each time resetting the
prover to its initial state and forcing him to use the same random
tape.
Under general complexity assumptions, which hold, for example, if the
Discrete Logarithm Problem is hard, we construct (1) rZK proof-systems
for NP; (2) constant-round resettable witness-indistinguishable
proof-systems for NP; and (3) constant-round rZK arguments for NP in
the public key model where verifiers have fixed, public keys
associated with them.
In addition to shedding new light on what makes zero knowledge
possible (by constructing ZK protocols that use randomness in a
dramatically weaker way than before), rZK has great relevance to
applications. Firstly, we show that rZK protocols are closed under
parallel and concurrent execution and thus are guaranteed to be secure
when implemented in fully asynchronous networks, even if an adversary
schedules the arrival of every message sent. Secondly, rZK protocols
enlarge the range of physical ways in which provers of ZK protocols
can be securely implemented, including devices which cannot reliably
toss coins on line or keep state between invocations. (For
instance, because ordinary smart cards with secure hardware are
resettable, they could not be used to implement securely the provers
of classical ZK protocols, but can now be used to implement securely
the provers of rZK protocols.)
Public-Key Cryptography and Password Protocols: The Multi-User Case
The problem of password authentication over an insecure network
when the user holds only a human-memorizable password has
received much attention in the literature. The first rigorous
treatment was provided by Halevi and Krawczyk (ACM CCS, 1998),
who studied off-line password guessing attacks in the scenario in
which the authentication server possesses a pair of private and
public keys. HK's definition of security concentrates
on the single-user (and single-server) case.
In this work we:
(1) Show the inadequacy of both the Halevi-Krawczyk formalization
and protocol in the case where there is more than a single user:
using a simple and realistic attack, we prove failure of the HK
solution in the two-user case.
(2) Propose a new definition of security for the multi-user case,
expressed in terms of transcripts of the entire system, rather
than individual protocol executions.
(3) Suggest several ways of achieving this security against both
static and dynamic adversaries.
In a recent revision of their paper, Halevi and Krawczyk attempted
to handle the multi-user case. We expose a weakness in their approach.
Improving the Exact Security of Digital Signature Schemes
We provide two contributions to exact security analysis of
digital signatures:
We put forward a new method of constructing Fiat-Shamir-like
signature schemes that yields better "exact security" than the original
Fiat-Shamir method; and
we extend exact security analysis to "exact cost-security analysis" by
showing that digital signature schemes with "loose security" may be
preferable for reasonable measures of cost.
Security of all RSA and Discrete Log Bits
We study the security of individual bits in an RSA
encrypted message E_N(x). We show that given E_N(x), predicting any
single bit in x with only a non-negligible advantage over the trivial
guessing strategy is (through a polynomial-time reduction) as hard as
breaking RSA. Moreover, we prove that blocks of O(log log N) bits of x
are computationally indistinguishable from random bits. The results
carry over to the Rabin encryption scheme.
For the discrete exponentiation function g^x modulo p, we demonstrate
analogous results with probability 1-o(1) over random choices of the
prime p. Finally, we prove that the bits of ax+b
modulo p give hard core predicates for any one-way function f.
Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-Based Characterization
We prove the equivalence of two definitions of non-malleable
encryption appearing in the literature--- the original one of Dolev, Dwork
and Naor and the later one of Bellare, Desai, Pointcheval and Rogaway. The
equivalence relies on a new characterization of non-malleable encryption in
terms of the standard notion of indistinguishability of Goldwasser and
Micali. We show that non-malleability is equivalent to indistinguishability
under a ``parallel chosen ciphertext attack,'' this being a new kind of
chosen ciphertext attack we introduce, in which the adversary's decryption
queries are not allowed to depend on answers to previous queries, but must be
made all at once. This characterization simplifies both the notion of
non-malleable encryption and its usage, and enables one to see more easily
how it compares with other notions of encryption. The results here apply to
non-malleable encryption under any form of attack, whether chosen-plaintext,
chosen-ciphertext, or adaptive chosen-ciphertext.
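As an illustration of the "parallel" restriction, here is a hypothetical Python sketch of the distinguishing game: the adversary sees the challenge and then submits all of its decryption queries in one batch, so no query can depend on another's answer. All names and interfaces here are illustrative, not from the paper.

def parallel_cca_game(adversary, keygen, enc, dec, bit: int) -> int:
    pk, sk = keygen()
    m0, m1, state = adversary.choose(pk)         # two candidate plaintexts
    challenge = enc(pk, (m0, m1)[bit])           # encrypt one of them
    queries = adversary.batch(state, challenge)  # one batch, all at once
    answers = [dec(sk, c) for c in queries if c != challenge]
    return adversary.guess(state, challenge, answers)  # guess the bit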
A Composition Theorem for Universal One-Way Hash Functions
In this paper we present a new scheme for constructing
universal one-way hash functions that hash arbitrarily long messages
out of universal one-way hash functions that hash fixed-length messages.
The new construction is extremely simple and is also very efficient,
yielding shorter keys than previously proposed composition
constructions.
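For contrast with the paper's short-key construction, the following Python sketch shows a naive baseline composition: a fixed-length keyed compression function (SHA-256 as a stand-in) chained with one fresh key per block. The point of the new scheme is to get by with far shorter keys than this baseline.

import hashlib

def compress(key: bytes, block: bytes) -> bytes:
    # Fixed-length keyed hash; stand-in for a UOWHF on short inputs.
    return hashlib.sha256(key + block).digest()

def hash_long(keys: list, msg: bytes) -> bytes:
    # Naive composition: consume one 32-byte block and one fresh key
    # per iteration. Key material grows linearly with the message,
    # which is exactly the cost that better compositions reduce.
    h = b'\x00' * 32
    blocks = [msg[i:i + 32] for i in range(0, len(msg), 32)]
    assert len(keys) >= len(blocks)
    for k, b in zip(keys, blocks):
        h = compress(k, h + b)
    return h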
A forward-secure digital signature scheme
We describe a digital signature scheme in which the
public key is fixed but the secret signing key is updated at regular
intervals so as to provide a {\em forward security} property:
compromise of the current secret key does not enable an adversary to
forge signatures pertaining to the past. This can be useful to
mitigate the damage caused by key exposure without requiring
distribution of keys. Our construction uses ideas from the
Fiat-Shamir and Ong-Schnorr identification and
signature schemes, and is proven to be forward secure based
on the hardness of factoring, in the random oracle model. The
construction is also quite efficient.
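The forward-security property itself can be illustrated with a deliberately naive scheme (not the construction of this paper): pre-generate one independent key per time period and erase each key when its period ends, so that exposure at period i reveals nothing usable against earlier periods. In the Python sketch below, HMAC merely stands in for a real signing algorithm; this baseline also needs per-period verification keys, precisely the overhead a fixed-public-key scheme avoids.

import hmac, hashlib, os

class NaiveForwardSecureSigner:
    # One independent key per period; update() erases the expired key.
    def __init__(self, periods: int):
        self.keys = [os.urandom(32) for _ in range(periods)]
        self.period = 0

    def sign(self, message: bytes) -> bytes:
        # HMAC stands in for a real signature; illustration only.
        return hmac.new(self.keys[self.period], message,
                        hashlib.sha256).digest()

    def update(self) -> None:
        self.keys[self.period] = None   # irrecoverably delete old key
        self.period += 1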
Interleaved Zero-Knowledge in the Public-Key Model
We introduce the notion of Interleaved Zero-Knowledge (iZK), a new
security measure for cryptographic protocols which strengthens
the classical notion of zero-knowledge, in a way suitable for
multiple concurrent executions in an asynchronous environment
like the internet. We prove that iZK protocols are robust:
they are ``parallelizable'', and preserve security when run
concurrently in a fully asynchronous network. Furthermore,
this holds even if the prover's random-pads in all these
concurrent invocations are identical. Thus, iZK protocols are
ideal for smart-cards and other devices which cannot reliably
toss coins on line or keep state between invocations.
Concurrent Zero-Knowledge is Easy in Practice
We show that if any one-way function exists, then 3-round concurrent
zero-knowledge arguments for all NP problems can be built in a model
where a short auxiliary string with a prescribed distribution is
available to the players. We also show that all known efficient
identification schemes using specialized assumptions can be modified
to work in this model with no essential loss of efficiency. We argue
that the assumptions of the model will be satisfied in most practical
scenarios where public key cryptography is used, in particular our
construction works given any secure public key
infrastructure. Finally, we point out that in a model with
preprocessing (and no auxiliary string) proposed earlier, concurrent
zero-knowledge for NP can be based on any one-way function.
Secure Hash-and-Sign Signatures without the Random Oracle
We present a new signature scheme which is existentially unforgeable
under chosen message attacks, assuming some variant of the RSA conjecture.
This scheme is not based on "signature trees"; instead, it uses
the so-called "hash-and-sign" paradigm. It is unique in that the
assumptions made on the cryptographic hash function in use are well
defined and reasonable (although non-standard). In particular, we
do not model this function as a random oracle.
We construct our proof of security in steps. First we describe and
prove a construction which operates in the random oracle model. Then
we show that the random oracle in this construction can be replaced
by a hash function which satisfies some strong (but well defined!)
computational assumptions. Finally, we demonstrate that these assumptions
are reasonable, by proving that a function satisfying them exists under
standard intractability assumptions.
On Formal Models for Secure Key Exchange
A new formal security model for session key exchange protocols is
proposed, and several efficient protocols are analyzed in this model.
Our new model is in the style of multi-party simulatability: it
specifies the service and security guarantees that a key exchange
protocol should provide to higher-level protocols as a simple,
natural, and intuitive interface to which a high-level protocol
designer can program. The relationship between this new model and
previously proposed models is explored, and in particular, several
flaws and shortcomings in previously proposed models are discussed.
The model also deals with anonymous users---that is, users who do not
have public keys, but perhaps have passwords that can be used to
authenticate themselves within a secure session.
Practical Threshold Signatures
We present an RSA threshold signature scheme. The scheme enjoys the
following properties:
(1) it is unforgeable and robust in the random oracle model, assuming the
RSA problem is hard;
(2) signature share generation and verification are completely
non-interactive;
(3) the size of an individual signature share is bounded by a constant
times the size of the RSA modulus.
A Relationship between One-Wayness and Correlation Intractability
The notion of correlation intractability was introduced
in an attempt to capture the ``unpredictability" property
of random oracles: It is assumed that if $R$ is a random oracle
then it is infeasible to find an input $x$
such that the input-output pair $(x,R(x))$ has some desired property.
Since the unpredictability property is often useful in designing
cryptographic applications in the random oracle model, it is desirable
to provide a plausible construction of correlation intractable function
ensembles. However, no plausibility result has been proposed.
In this paper, we show that proving the implication,
``if uniform one-way functions exist then uniform correlation intractable
function ensembles exist",
is as hard as proving a claim regarding the triviality
of 3-round auxiliary-input zero-knowledge Arthur-Merlin proofs
without making any assumptions.
We believe that it is unlikely that one can prove it unconditionally.
Therefore, we conclude that it will be difficult to construct
uniform correlation intractable function ensembles
based solely on uniform one-way functions.
On the Existence of 3-Round Zero-Knowledge Protocols
In this paper, we construct a 3-round zero-knowledge protocol for
any NP language. Our protocol achieves weaker notions of zero-knowledge
than black-box simulation zero-knowledge.
Therefore, our result does not contradict the triviality result of
Goldreich and Krawczyk which shows that 3-round black-box simulation
zero-knowledge exists only for BPP languages.
Our main contribution is to provide a non-black-box simulation technique.
Whether there exists such a simulation technique was a major open
problem in the theory of zero-knowledge.
Our simulation technique is based on a non-standard computational assumption
related to the Diffie-Hellman problem, which was originally proposed by
Damgård.
Verifiable Encryption and Applications to Group Signatures and Signature Sharing
We generalize and improve the security and efficiency of the
verifiable encryption scheme of Asokan et al., such that it can rely
on more general assumptions, and can be proven secure without
assuming random oracles. We show a new application of verifiable
encryption to group signatures with separability; these schemes do
not need special-purpose keys but can work with a wide range of
signature, identification, and encryption schemes already in use.
Finally, we extend our basic primitive to verifiable threshold and
group encryption. By encrypting digital signatures this way, one
gets new solutions to the verifiable signature sharing problem.
DHAES: An Encryption Scheme Based on the Diffie-Hellman Problem
This paper describes a Diffie-Hellman based encryption scheme, DHAES.
The scheme is as efficient as ElGamal encryption, but has
stronger security properties. Furthermore, these security properties are proven
to hold under appropriate assumptions on the underlying primitive.
We show that DHAES has not only the ``basic'' property of secure encryption
(namely privacy under a chosen-plaintext attack) but also achieves privacy
under both non-adaptive and adaptive chosen-ciphertext attacks. (And hence
it also achieves non-malleability.)
DHAES is built in a generic way from lower-level primitives: a symmetric
encryption scheme, a message authentication code, group operations in an
arbitrary group, and a cryptographic hash function. In particular, the
underlying group may be an elliptic-curve group or the multiplicative
group of integers modulo a prime number.
The proofs of security are based on appropriate assumptions about the
hardness of the Diffie-Hellman problem and the assumption that the
underlying symmetric primitives are secure. The assumptions are
all standard in the sense that no random oracles are involved.
We suggest that DHAES provides an attractive starting point for developing
public-key encryption standards based on the Diffie-Hellman assumption.
Last updated: 1999-04-19
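A rough Python sketch of this generic composition follows, using a toy 64-bit group and hash-derived keys. The group parameters, key-derivation details, and keystream below are illustrative assumptions, not the DHAES specification.

import hashlib, hmac, os

P = 2**64 - 59          # a 64-bit prime; toy parameter, insecure
G = 5                   # assumed generator; illustration only

def kdf(U: int, Z: int):
    d = hashlib.sha256(U.to_bytes(8, 'big') + Z.to_bytes(8, 'big')).digest()
    return d[:16], d[16:]                  # (encryption key, MAC key)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Hash-counter keystream standing in for the generic symmetric scheme.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, 'big')).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

def encrypt(pk: int, msg: bytes):
    u = int.from_bytes(os.urandom(8), 'big') % (P - 1)
    U = pow(G, u, P)                       # ephemeral Diffie-Hellman value
    ek, mk = kdf(U, pow(pk, u, P))         # keys derived from the DH secret
    c = xor_stream(ek, msg)
    return U, c, hmac.new(mk, c, hashlib.sha256).digest()

def decrypt(sk: int, U: int, c: bytes, tag: bytes) -> bytes:
    ek, mk = kdf(U, pow(U, sk, P))
    if not hmac.compare_digest(tag, hmac.new(mk, c, hashlib.sha256).digest()):
        raise ValueError('MAC rejection')
    return xor_stream(ek, c)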
Fast Proof of Plaintext-Knowledge and Deniable Authentication Based on Chinese Remainder Theorem
We propose a fast and communication-efficient proof of
plaintext-knowledge (PPTK) protocol based on the Chinese Remainder
theorem. With a PPTK the receiver of a ciphertext verifies that the
sender knows the corresponding cleartext in such a way that a
dishonest sender or an eavesdropper does not learn anything about
the plaintext except with sub-polynomial probability. We turn any
semantically secure public key cryptosystem into an efficient
(interactive) one which is immune against adaptive chosen ciphertext
attacks by adding the PPTK protocol. Using our PPTK protocol we also
derive an efficient protocol for deniable authentication.
Lattice Based Cryptography: A Global Improvement
We describe a general technique to simplify as well as to improve
several lattice based cryptographic protocols.
The technique is rather straightforward and is easily applied to
the protocols, and gives both a simpler analysis and better
performance than the original protocols. The improvement is global:
the modified protocols are simpler, faster, require less storage,
use less bandwidth and need fewer random bits than the originals.
Moreover, the improvement is achieved without any loss in security:
we formally prove that the modified protocols are at least as secure
as the original ones. In fact, the modified protocols might even
be more secure as the adversary gets less information. We exemplify
our technique on the Goldreich-Goldwasser zero-knowledge proof systems
for lattice problems and the GGH public key cryptosystem.
Public-key cryptography and password protocols
We study protocols for strong authentication and key exchange in asymmetric
scenarios where the authentication server possesses a pair of private and
public keys while the client has only a weak human-memorizable password
as its authentication key. We present and analyze several simple password
protocols in this scenario, and show that the security of these protocols
can be formally proven based on standard cryptographic assumptions.
Remarkably, our analysis shows optimal resistance to off-line password
guessing attacks under the choice of suitable public key encryption
functions. In addition to user authentication, we enhance our protocols
to provide two-way authentication, authenticated key exchange, defense
against server's compromise, and user anonymity. We complement these
results with a proof that public key techniques are unavoidable for
password protocols that resist off-line guessing attacks.
As a further contribution, we introduce the notion of public passwords
that enables the use of the above protocols in situations where the
client's machine does not have the means to validate the server's
public key. Public passwords serve as "hand-held certificates" that
the user can carry without the need for special computing devices.
An error in the mixed adversary protocol by Fitzi, Hirt and Maurer
We point out an error in the multiparty computation protocol for mixed
adversaries and zero error from the Crypto 98 paper by Fitzi, Hirt and
Maurer. We show that the protocol only works under a stronger
requirement on the adversary than the one claimed. Hence the bound on
the adversary's corruption capability given there is not tight.
Subsequent work has shown, however, a new bound which is indeed tight.
Chinese Remaindering with Errors
The Chinese Remainder Theorem states that a positive integer m is
uniquely specified by its remainders modulo k relatively prime integers
p_1,...,p_k, provided m < \prod_{i=1}^k p_i. Thus the residues of m
modulo relatively prime integers p_1 < p_2 < ... < p_n form a
redundant representation of m if m <= \prod_{i=1}^k p_i and k <
n. This suggests a number-theoretic construction of an
``error-correcting code'' that has been implicitly considered often in
the past. In this paper we provide a new algorithmic tool to go with
this error-correcting code: namely, a polynomial-time algorithm for
error-correction. Specifically, given n residues r_1,...,r_n and an
agreement parameter t, we find a list of all integers m <
\prod_{i=1}^k p_i such that (m mod p_i) = r_i for at least t values of
i in {1,...,n}, provided t = \Omega(\sqrt{kn (\log p_n)/(\log p_1)}). We
also give a simpler algorithm to decode from a smaller number of
errors, i.e., when t > n - (n-k)(\log p_1)/(\log p_1 + \log p_n). In
such a case there is a unique integer which has such agreement with
the sequence of residues.
One consequence of our result is a strengthening of the
relationship between the average-case complexity of computing the
permanent and its worst-case complexity. Specifically, we show that if
a polynomial time algorithm is able to guess the permanent of a random
n x n matrix on 2n-bit integers modulo a random n-bit prime with
inverse polynomial success rate, then #P=BPP. Previous results of
this nature typically worked over a fixed prime modulus or assumed a
very small (though non-negligible) error probability (as opposed to a
small but non-negligible success probability).
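In the error-free case, the redundant representation above is inverted by the textbook Chinese Remainder reconstruction, as in this short Python sketch; the paper's contribution, list-decoding when some residues are wrong, is far less trivial.

from math import prod

def crt(residues, moduli):
    # Reconstruct the unique m < prod(moduli) with m % p_i == r_i.
    M = prod(moduli)
    m = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        m += r * Mi * pow(Mi, -1, p)   # pow(Mi, -1, p): modular inverse
    return m % M

assert crt([2, 3, 2], [3, 5, 7]) == 23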
Signature Schemes Based on the Strong RSA Assumption
We describe and analyze a new digital signature scheme.
The new scheme is quite efficient, does not require the signer
to maintain any state, and can be proven secure against adaptive
chosen message attack under a reasonable intractability assumption,
the so-called Strong RSA Assumption.
Moreover, a hash function can be incorporated into the scheme
in such a way that it is also secure in the random oracle model
under the standard RSA Assumption.
Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK
We consider the following (promise) problem, denoted ED (for Entropy
Difference): The input is a pair of circuits, and YES instances
(resp., NO instances) are such pairs in which the first (resp.,
second) circuit generates a distribution with noticeably higher
entropy.
On the one hand, we show that any language having an (honest-verifier)
statistical zero-knowledge proof is Karp-reducible to ED. On the other
hand, we present a public-coin (honest-verifier) statistical
zero-knowledge proof for ED. Thus, we obtain an alternative proof of
Okamoto's result by which HVSZK (i.e., Honest-Verifier Statistical
Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler
than the original one. The above also yields a trivial proof that HVSZK
is closed under complementation (since ED easily reduces to its
complement). Among the new results obtained is an equivalence of a weak
notion of statistical zero-knowledge to the standard one.
Secure Distributed Storage and Retrieval
In his well-known Information Dispersal Algorithm paper, Rabin showed
a way to distribute information in n pieces among n servers in such a
way that recovery of the information is possible in the presence of up
to t inactive servers. An enhanced mechanism to enable reconstruction
in the presence of malicious faults, which can intentionally modify
their pieces of the information, was later presented by Krawczyk.
Yet, these methods assume that the malicious faults occur only at
reconstruction time.
In this paper we address the more general problem of secure storage
and retrieval of information (SSRI), and guarantee that the
process of storing the information is also correct even when some of the
servers fail. Our protocols achieve this while maintaining the
(asymptotic) space optimality of the above methods.
We also consider SSRI with the added requirement of confidentiality,
by which no party except for the rightful owner of the information is
able to learn anything about it. This is achieved through novel
applications of cryptographic techniques, such as the distributed
generation of receipts, distributed key management via threshold
cryptography, and ``blinding.''
An interesting byproduct of our scheme is the construction of a secret
sharing scheme with shorter share size in the amortized sense. An
immediate practical application of our work is a system for the secure
deposit of sensitive data. We also extend SSRI to a ``proactive''
setting, where an adversary may corrupt all the servers during the
lifetime of the system, but only a fraction during any given time
interval.
The Disparity between Work and Entropy in Cryptology
A brief theory of work is developed. In it, the work-factor and
guesswork of a random variable are linked to intuitive notions of time
complexity in a brute-force attack. Bounds are given for a specific
work-factor called the minimum majority. Tight bounds are given for
the guesswork in terms of variation distance. Differences between
work-factor, guesswork and the entropy of a random variable are
pointed out, calling into question a common misconception about
entropy indicating work.
Security amplification by composition: The case of doubly-iterated, ideal ciphers
We investigate, in the Shannon model, the security of constructions
corresponding to double and (two-key) triple DES. That is, we
consider $F_{k_1}(F_{k_2}(\cdot))$ and
$F_{k_1}(F_{k_2}^{-1}(F_{k_1}(\cdot)))$ with
the component functions being ideal ciphers. This models the
resistance of these constructions to ``generic'' attacks like
meet-in-the-middle attacks.
We obtain the first proof that composition actually
increases the security in some meaningful sense. We compute a bound
on the probability of breaking the double cipher as a function of
the number of computations of the base cipher made, and the number
of examples of the composed cipher seen, and show that the success
probability is the square of that for a single key cipher. The
same bound holds for the two-key triple cipher. The first bound
is tight and shows that meet-in-the-middle is the best possible
generic attack against the double cipher.
Insecurity of Quantum Computations
It has been widely claimed that quantum mechanics can protect private
information during public decision-making, for example in the so-called
two-party secure computation. If this were the case, quantum
smart-cards could prevent fake teller machines from learning the PIN
(Personal Identification Number) from the customers' input. Although
such optimism has been challenged by the recent surprising discovery
of the insecurity of the so-called quantum bit commitment, the
security of quantum two-party computation itself remains unaddressed.
Here I answer this question directly by showing that all *one-sided*
two-party computations (which allow only one of the two parties to
learn the result) are necessarily insecure. As corollaries to my
results, quantum one-way oblivious password identification and the
so-called quantum one-out-of-two oblivious transfer are impossible. I
also construct a class of functions that cannot be computed securely
in any *two-sided* two-party computation. Nevertheless, quantum
cryptography remains useful in key distribution and can still provide
partial security in ``quantum money'' proposed by Wiesner.
Relations among Notions of Security for Public-Key Encryption Schemes
We compare the relative strengths of popular notions of security for
public key encryption schemes. We consider the goals of
indistinguishability and non-malleability, each under chosen plaintext
attack and two kinds of chosen ciphertext attack. For each of the
resulting pairs of definitions we prove either an implication (every
scheme meeting one notion must meet the other) or a separation (there
is a scheme meeting one notion but not the other, assuming the first
notion can be met at all). We similarly treat plaintext awareness, a
notion of security in the random oracle model. An additional
contribution of this paper is a new definition of non-malleability
which we believe is simpler than the previous one.
Almost All Discrete Log Bits Are Simultaneously Secure
Let G be a finite cyclic group with generator \alpha and with an
encoding so that multiplication is computable in polynomial time. We
study the security of bits of the discrete log x when given
exp_\alpha(x), assuming that the exponentiation function
exp_\alpha(x) = \alpha^x is one-way. We reduce the
general problem to the case that G has odd order q. If G has odd order
q, the security of the least-significant bits of x and of the most
significant bits of the rational number x/q \in [0,1) follows from the
work of Peralta [P85] and Long and Wigderson [LW88]. We generalize
these bits and study the security of consecutive {\em shift bits}
lsb(2^{-i}x mod q) for i=k+1,...,k+j. When we restrict
exp_\alpha to arguments x such that some sequence of j
consecutive shift bits of x is constant (i.e., not depending on x) we
call it a 2^{-j}-{\em fraction} of exp_\alpha.
For groups of odd group order q we show that every two
2^{-j}-fractions of exp_\alpha are equally one-way
by a polynomial time transformation: either they are all one-way or
none of them is. Our {\em key theorem} shows that arbitrary j
consecutive shift bits of x are simultaneously secure when given
exp_\alpha(x) iff the 2^{-j}-fractions of exp_\alpha are one-way.
In particular this applies to the j
least-significant bits of x and to the j most-significant bits of x/q
\in [0,1). For one-way exp_\alpha the individual bits of x
are secure when given exp_\alpha(x) by the method of Håstad and
Näslund [HN98]. For groups of even order 2^s q we show that
the j least-significant bits of \lfloor x/2^s \rfloor, as
well as the j most-significant bits of x/q \in [0,1), are
simultaneously secure iff the 2^{-j}-fractions of exp_{\alpha'}
are one-way for \alpha' := \alpha^{2^s}.
We use and extend the models of generic algorithms of Nechaev (1994)
and Shoup (1997). We determine the generic complexity of inverting
fractions of exp_\alpha for the case that \alpha has prime
order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q
consecutive shift bits of random x are, for constant \varepsilon > 0,
simultaneously secure against generic attacks. Every generic algorithm
using t generic steps (group operations) for distinguishing bit
strings of j consecutive shift bits of x from random bit strings has
advantage at most O((\lg q) j \sqrt{t} (2^j/q)^{1/4}).
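The shift bits studied here are directly computable when x and the odd order q are known; the following small Python function (an illustration, not code from the paper) produces the j consecutive shift bits lsb(2^{-i} x mod q) for i = k+1, ..., k+j.

def shift_bits(x: int, q: int, k: int, j: int) -> list:
    # q must be odd so that 2 is invertible modulo q.
    inv2 = pow(2, -1, q)
    return [(pow(inv2, i, q) * x % q) & 1 for i in range(k + 1, k + j + 1)]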
Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
The heart of the task of building public key cryptosystems is viewed
as that of ``making trapdoors;'' in fact, public key cryptosystems and
trapdoor functions are often discussed as synonymous. How accurate is
this view? In this paper we endeavor to get a better understanding of
the nature of ``trapdoorness'' and its relation to public key
cryptosystems, by broadening the scope of the investigation: we look
at general trapdoor functions; that is, functions that are not
necessarily injective (i.e., one-to-one). Our first result is somewhat
surprising: we show that non-injective trapdoor functions (with
super-polynomial pre-image size) can be constructed from any one-way
function (and hence it is unlikely that they suffice for public key
encryption). On the other hand, we show that trapdoor functions with
polynomial pre-image size are sufficient for public key encryption.
Together, these two results indicate that the pre-image size is a
fundamental parameter of trapdoor functions. We then turn our
attention to the converse, asking what kinds of trapdoor functions can
be constructed from public key cryptosystems. We take a first step by
showing that in the random-oracle model one can construct injective
trapdoor functions from any public key cryptosystem.
Security and Composition of Multi-party Cryptographic Protocols
We present general definitions of security for multiparty cryptographic
protocols and show that, using these definitions, security is preserved
under a natural composition method.
The definitions follow the general paradigm of known definitions;
yet some substantial modifications and simplifications are introduced. In
particular, `black-box simulation' is no longer required. The composition
method is essentially the natural `subroutine substitution' method suggested
by Micali and Rogaway.
We first present the general definitional approach. Next we consider several
settings for multiparty protocols. These include the cases of non-adaptive
and adaptive adversaries, as well as the information-theoretic and the
computational models.
Making An Empty Promise With A Quantum Computer (Or, A Brief Review on the Impossibility of Quantum Bit Commitment)
Alice has made a decision in her mind.
While she does not want to reveal it to
Bob at this moment, she would like to convince Bob that she is committed to
this particular decision and that she cannot change it at a later time. Is
there a way for Alice to get Bob's trust? Until recently, researchers had
believed that the above task can be performed with the help of quantum
mechanics. And the security of the quantum scheme lies on the uncertainty
principle. Nevertheless, such optimism was recently shattered by Mayers and by
us, who found that Alice can always change her mind if she has a quantum
computer. Here, we survey this dramatic development and its implications for
the security of other quantum cryptographic schemes.
Last updated: 1999-01-01
Quantum Computers Render Quantum Key Distribution Unconditionally Secure Over Arbitrarily Long Distances
Quantum cryptography has long been claimed to be useful for
achieving many tasks that are impossible from the perspective of
conventional cryptography. Arguably, the most important problem
in quantum cryptography has been a rigorous proof of the security of
quantum key distribution, the most well-known application.
This notoriously hard problem has eluded researchers for years and has
become even more important after the recent surprising demonstration
of the insecurity of many other quantum cryptographic schemes including
quantum bit commitment. Here, we solve this long standing problem by
showing that, given quantum computers, quantum key distribution over an
arbitrarily long distance of a realistic noisy channel can be made
unconditionally secure. The novel technique we use is reduction from a
quantum scheme to a classical scheme. The security in realistic noisy
environments is then proven by using the recent theory of fault-tolerant
quantum computation.
More on Proofs of Knowledge
The notion of proofs of knowledge is central to cryptographic
protocols, and many definitions for it have been proposed. In this work
we explore a different facet of this notion, not addressed by prior
definitions. Specifically, prior definitions concentrate on capturing
the properties of the verifier, and do not pay much attention to the
properties of the prover.
Our new definition is strictly stronger than previous ones, and captures
new and desirable properties. In particular, it guarantees prover
feasibility; that is, it guarantees that the time spent by the prover
in a proof of knowledge is comparable to the time it spends in an "extraction"
of this knowledge. Our definition also enables one to consider meaningfully
the case of a single, specific prover.
Randomness versus Fault-Tolerance
We investigate the relations between two major requirements of multiparty
protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}.
Fault-tolerance is measured in terms of the maximum number of colluding faulty
parties, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness
is measured in terms of the total number of random bits needed by the parties
in order to execute the protocol.
Previously, the upper bound on the amount of randomness required by general
constructions for securely computing any non-trivial function f was polynomial
both in $n$, the total number of parties, and the circuit-size C(f). This was
the state of knowledge even for the special case t=1 (i.e., when there is at
most one faulty party). In this paper, we show that for any linear-size
circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n)
randomness is sufficient. More generally, we show that for any function f
with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n)
randomness in order to withstand any coalition of size at most t.
Furthermore, in our protocol only t+1 parties flip coins and the rest of
the parties are deterministic. Our results generalize to the case of adaptive
adversaries as well.
A Random Server Model for Private Information Retrieval (or How to Achieve Information Theoretic PIR Avoiding Data Replication)
Private information retrieval (PIR) schemes enable users to obtain
information from databases while keeping their queries secret from the
database managers. We propose a new model for PIR, utilizing
auxiliary random servers to provide privacy services for database
access. In this model, prior to any on-line communication in which users
make queries, the database engages in an initial preprocessing
setup stage with the random servers. Using this model we achieve the
first PIR information theoretic solution in which the database does
not need to give away its data to be replicated, and with minimal
on-line computation cost for the database. This solves privacy and
efficiency problems inherent to all previous solutions.
In particular, all previous information theoretic PIR schemes required
multiple replications of the database into separate entities which are
not allowed to communicate with each other; and in all previous
schemes (including ones which do not achieve information theoretic
security), the amount of computation performed by the database on-line
for every query is at least linear in the size of the database.
In contrast, in our solutions the database does not give away its
contents to any other entity; and after the initial setup stage, which
costs at most O(n log n) in computation, the database needs to
perform only O(1) amount of computation to answer questions of users
on-line. All the extra on-line computation is done by the auxiliary
random servers.
Maintaining Authenticated Communication in the Presence of Break-ins
We study the problem of maintaining authenticated communication over untrusted
communication channels, in a scenario where the communicating parties may be
occasionally and repeatedly broken into for transient periods of time. Once
a party is broken into, its cryptographic keys are exposed and perhaps
modified. Yet, we want parties whose security is thus compromised to regain
their ability to communicate in an authenticated way aided by other parties.
In this work we present a mathematical model for this highly adversarial
setting, exhibiting salient properties and parameters, and then describe
a practically-appealing protocol for the task of maintaining authenticated
communication in this model.
A key element in our solution is devising {\em proactive distributed signature
(PDS) schemes} in our model. Although PDS schemes are known in the literature,
they are all designed for a model where authenticated communication and
broadcast primitives are available. We therefore show how these schemes can be
modified to work in our model, where no such primitives are available a priori.
In the process of devising the above schemes, we also present a new definition
of PDS schemes (and of distributed signature schemes in general). This
definition may be of independent interest.
The Random Oracle Methodology, Revisited
We take a critical look at the relationship between the security of
cryptographic schemes in the Random Oracle Model, and the security of the
schemes that result from implementing the random oracle by so-called
"cryptographic hash functions".
The main result of this paper is a negative one: There exist signature and
encryption schemes that are secure in the Random Oracle Model, but for which
any implementation of the random oracle results in insecure schemes.
In the process of devising the above schemes, we consider possible definitions
for the notion of a "good implementation" of a random oracle, pointing out
limitations and challenges.
Chameleon Hashing and Signatures
We introduce CHAMELEON SIGNATURES, which provide an undeniable
commitment of the signer to the contents of the signed document (as regular
digital signatures do) but, at the same time, do not allow the recipient
of the signature to disclose the contents of the signed information to any
third party without the signer's consent. These signatures are closely
related to Chaum's "undeniable signatures", but chameleon signatures allow
for simpler and more efficient realizations than the latter.
In particular, they are essentially non-interactive and do not involve the
design and complexity of zero-knowledge proofs on which traditional undeniable
signatures are based. Instead, chameleon signatures are generated
under the standard method of hash-then-sign. Yet, the hash functions
which are used are CHAMELEON HASH FUNCTIONS. These hash functions are
characterized by the non-standard property of being collision-resistant
for the signer but collision tractable for the recipient.
We present simple and efficient constructions of chameleon hashing and
chameleon signatures. The former can be constructed based on standard
cryptographic assumptions (such as the hardness of factoring or discrete
logarithms) and have efficient realizations based on these assumptions.
For the signature part we can use any digital signature (such as RSA or DSS)
and prove the unforgeability property of the resultant chameleon signatures
solely based on the unforgeability of the underlying digital signature
in use.
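The flavor of such a construction can be seen in the standard discrete-log-based chameleon hash, a well-known instantiation consistent with the assumptions named above (toy parameters below, insecure sizes): hashing is H(m, r) = g^m y^r mod p with public y = g^x, and only the holder of the trapdoor x can produce collisions.

P, Q, G = 467, 233, 4        # toy: Q prime, P = 2Q + 1, G of order Q

def chameleon_hash(y: int, m: int, r: int) -> int:
    return (pow(G, m, P) * pow(y, r, P)) % P

def find_collision(x: int, m: int, r: int, m_new: int) -> int:
    # Trapdoor holder solves m + x*r = m_new + x*r' (mod Q) for r'.
    return (r + (m - m_new) * pow(x, -1, Q)) % Q

x = 77                       # recipient's trapdoor
y = pow(G, x, P)             # public hash key
r2 = find_collision(x, m=123, r=45, m_new=99)
assert chameleon_hash(y, 99, r2) == chameleon_hash(y, 123, 45)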
A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols
We present a general framework for constructing and analyzing authentication
protocols in realistic models of communication networks. This framework
provides a sound formalization for the authentication problem and suggests
simple and attractive design principles for general authentication and key
exchange protocols. The key element in our approach is a modular treatment of
the authentication problem in cryptographic protocols; this applies to the
definition of security, to the design of the protocols, and to their analysis.
In particular, following this modular approach, we show how to systematically
transform solutions that work in a model of idealized authenticated
communications into solutions that are secure in the realistic setting of
communication channels controlled by an active adversary.
Using these principles we construct and prove the security of simple and
practical authentication and key-exchange protocols. In particular, we provide
a security analysis of some well-known key exchange protocols (e.g.
authenticated Diffie-Hellman key exchange), and of some of the techniques
underlying the design of several authentication protocols that are currently
being deployed on a large scale for the Internet Protocol and other
applications.