Revocation and Tracing Schemes for Stateless Receivers
We deal with the problem of a center sending a message to a group
of users such that some subset of the users is considered revoked
and should not be able to obtain the content of the message. We
concentrate on the stateless receiver case, where the users
do not (necessarily) update their state from session to session.
We present a framework called the Subset-Cover framework,
which abstracts a variety of revocation schemes including some
previously known ones. We provide sufficient conditions that
guarantee the security of a revocation algorithm in this class.
We describe two explicit Subset-Cover revocation algorithms; these
algorithms are very flexible and work for any number of revoked
users. The schemes require storage at the receiver of $\log N$ and
$\frac{1}{2} \log^2 N$ keys respectively ($N$ is the total number
of users), and in order to revoke $r$ users the required message
lengths are of $r \log N$ and $2r$ keys respectively. We also
provide a general traitor tracing mechanism that can be
integrated with any Subset-Cover revocation scheme that satisfies
a ``bifurcation property''. This mechanism does not need an a
priori bound on the number of traitors and does not expand the
message length by much compared to the revocation of the same set
of traitors.
The main improvements of these methods over previously suggested
methods, when adapted to the stateless scenario, are: (1) reducing
the message length to $O(r)$ regardless of the coalition
size while maintaining a single decryption at the user's end, and (2)
providing seamless integration between revocation and
tracing, so that the tracing mechanism does not require any change
to the revocation algorithm.
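As a concrete illustration, the Complete Subtree method is one simple instance of a Subset-Cover scheme: users sit at the leaves of a binary tree, and the broadcast covers the maximal subtrees containing no revoked user. Below is a minimal sketch; the heap indexing and function names are my own illustrative choices, not the paper's notation.

```python
# Toy Complete Subtree cover computation.  Leaves 0..N-1 are users;
# a node's subtree joins the cover iff it contains no revoked user
# but its parent's subtree does.

def subtree_cover(N, revoked, node=1):
    """Return the cover as heap-indexed nodes (root = 1) of a full
    binary tree with N leaves; N must be a power of two."""
    # Leaves occupy heap indices N..2N-1; the leaf of user u is N+u.
    revoked_leaves = {N + u for u in revoked}
    # Mark every ancestor of a revoked leaf.
    marked = set()
    for r in revoked_leaves:
        while r >= 1:
            marked.add(r)
            r //= 2
    cover = []
    def walk(v):
        if v not in marked:        # clean subtree: cover it whole
            cover.append(v)
        elif v < N:                # marked internal node: recurse
            walk(2 * v)
            walk(2 * v + 1)
        # a marked leaf is a revoked user: covered by nothing
    walk(node)
    return cover

# With N = 8 and users 2 and 5 revoked, the cover consists of the
# maximal revoked-free subtrees.
print(subtree_cover(8, {2, 5}))  # → [4, 11, 12, 7]
```

Each cover element corresponds to one key used to encrypt the session key, which is how the message-length bounds quoted above arise.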
Efficient Zero-knowledge Authentication Based on a Linear Algebra Problem MinRank
A Zero-knowledge protocol provides provably secure entity authentication based on a hard computational problem. Among the many schemes proposed since 1984, the most practical rely on factoring and the discrete logarithm, but there are also practical schemes based on NP-hard problems.
Among them, the problem SD of decoding linear codes remains, in spite of some 30 years of research effort, exponential to solve.
We study a more general problem called MinRank that generalizes SD and contains also other well known hard problems.
MinRank is also used in cryptanalysis of several public key cryptosystems such as birational schemes (Crypto'93), HFE (Crypto'99), GPT cryptosystem (Eurocrypt'91), TTM (Asiacrypt'2000) and Chen's authentication scheme (1996).
We propose a new Zero-knowledge scheme based on MinRank.
We prove it to be Zero-knowledge by black-box simulation.
An adversary able to cheat with a given MinRank instance
is either able to solve it, or is able to compute a collision on a given hash function.
MinRank is one of the most efficient schemes based on NP-complete problems. It can be used to prove in Zero-knowledge a solution to any problem described by multivariate equations. We also present a version with a public key shared by a few users, that allows anonymous group signatures (a.k.a. ring signatures).
On the Security of the SPEKE Password-Authenticated Key Exchange Protocol
In the strictest formal definition of security for
password-authenticated key exchange, an adversary can test at most one
password per impersonation attempt. We propose a slightly relaxed
definition which restricts an adversary to testing at most a constant
number of passwords per impersonation attempt. This definition seems
useful, since there is currently a popular password-authenticated key
exchange protocol called SRP that seems resistant to off-line
dictionary attack, yet does allow an adversary to test two passwords
per impersonation attempt. In this paper we prove (in the random
oracle model) that a certain instantiation of the SPEKE protocol that
uses hashed passwords instead of non-hashed passwords is a secure
password-authenticated key exchange protocol (using our relaxed
definition) based on a new assumption, the Decision
Inverted-Additive Diffie-Hellman assumption. Since this is a new
security assumption, we investigate its security and relation to other
assumptions; specifically we prove a lower bound for breaking this new
assumption in the generic model, and we show that the computational
version of this new assumption is equivalent to the Computational
Diffie-Hellman assumption.
On the Complexity of Matsui's Attack
Linear cryptanalysis remains the most powerful attack against DES
at this time. Given $2^{43}$ known plaintext-ciphertext pairs,
Matsui expected a complexity of less than $2^{43}$ DES evaluations
in 85% of the cases for recovering the key. In this paper, we
present a theoretical and experimental complexity analysis of this attack, which has been simulated 21 times using the idle
time of several computers. The experimental results suggest a complexity upper-bounded
by $2^{41}$ DES evaluations in 85% of the cases, while more than half of the experiments needed fewer than $2^{39}$ DES evaluations. In addition, we give a detailed theoretical analysis of the attack complexity.
Universally Composable Commitments
We propose a new security measure for commitment protocols, called
/universally composable/ (UC) Commitment. The measure
guarantees that commitment protocols behave like an "ideal commitment
service," even when concurrently composed with an arbitrary set of
protocols. This is a strong guarantee: it implies that security is
maintained even when an unbounded number of copies of the scheme are
running concurrently, it implies non-malleability (not only with
respect to other copies of the same protocol but even with respect
to other protocols), it provides resilience to selective
decommitment, and more.
Unfortunately two-party UC commitment protocols do not exist in
the plain model. However, we construct two-party UC commitment
protocols, based on general complexity assumptions, in the
/common reference string model/ where all parties have
access to a common string taken from a predetermined distribution.
The protocols are non-interactive, in the sense that both the
commitment and the opening phases consist of a single message
from the committer to the receiver.
Extending the GHS Weil Descent Attack
In this paper we extend the Weil descent attack
due to Gaudry, Hess and Smart (GHS) to
a much larger class of elliptic curves.
This extended attack still only works for fields of composite
degree over $\F_2$.
The principle behind the extended attack is to use
isogenies to
find a new elliptic curve for which the GHS attack is
effective.
The discrete logarithm problem on the target curve
can be transformed into a discrete logarithm problem
on the new isogenous curve.
One contribution of the paper is to give
an improvement to an algorithm of Galbraith
for constructing isogenies between elliptic curves,
and this is of independent interest in
elliptic curve cryptography.
We conclude that fields of the form $\F_{q^7}$ should be
considered weaker from a cryptographic standpoint than
other fields.
In addition we show that a larger proportion than previously
thought of elliptic curves over $\F_{2^{155}}$ should be
considered weak.
Security Proofs for the RSA-PSS Signature Scheme and Its Variants
We analyze the security of different versions of the adapted
RSA-PSS signature scheme, including schemes with variable salt
lengths and message recovery. We also examine a variant with
Rabin-Williams (RW) as the underlying verification primitive.
Our conclusion is that the security of RSA-PSS and RW-PSS in
the random oracle model can be tightly related to the hardness
of inverting the underlying RSA and RW primitives, at least if
the PSS salt length is reasonably large. Our security proofs
are based on already existing work by Bellare and Rogaway
and by Coron, who examined signature schemes based on the
original PSS encoding method.
Differential Probability of Modular Addition with a Constant Operand
In this article I analyze the exclusive-or differential probability of the function f(X) = A + X (mod 2**n), where A is a constant. The result, with regard to differential cryptanalysis, is a better understanding of ciphers that use f(X) as a primitive operation. A simple O(n) algorithm to compute the probability is given.
Black-Box Concurrent Zero-Knowledge Requires $\tilde\Omega(\log n)$ Rounds
We show that any concurrent zero-knowledge protocol for a non-trivial
language (i.e., for a language outside $\BPP$), whose security is
proven via black-box simulation, must use at least
$\tilde\Omega(\log n)$ rounds of interaction. This result achieves a
substantial improvement over previous lower bounds, and is the first
bound to rule out the possibility of constant-round concurrent
zero-knowledge when proven via black-box simulation. Furthermore, the
bound is polynomially related to the number of rounds in the best
known concurrent zero-knowledge protocol for languages in $\NP$.
Elliptic curve Paillier schemes
This paper is concerned with
generalisations of Paillier's
probabilistic encryption scheme
from the integers modulo a square
to elliptic curves over rings.
Paillier himself described
two public key encryption schemes
based on anomalous elliptic curves over rings.
It is argued that these schemes are not secure.
A more natural generalisation of Paillier's scheme to
elliptic curves is given.
A known plaintext attack on the ISAAC keystream generator
Stream ciphers are often used in applications where high speed and low delay are a requirement. The ISAAC keystream generator is a fast software-oriented encryption algorithm. In this paper the security of the ISAAC keystream generator is investigated. A cryptanalytic algorithm is developed for a known plaintext attack where only a small segment of plaintext is assumed to be known.
Keywords: ISAAC, keystream generator, cryptanalysis.
Forward-Secure Signatures with Optimal Signing and Verifying
Ordinary digital signatures have an inherent weakness: if the secret key is leaked, then all signatures, even the ones generated before the leak, are no longer trustworthy. Forward-secure digital signatures were recently proposed to address this weakness: they ensure that past signatures remain secure even if the current secret key is leaked.
We propose the first forward-secure signature scheme for which both signing and verifying are as efficient as for one of the most efficient ordinary signature schemes (Guillou-Quisquater): each requiring just two modular exponentiations with a short exponent. All previously proposed forward-secure signature schemes took significantly longer to sign and verify than ordinary signature schemes.
Our scheme requires only fractional increases to the sizes of keys and signatures, and no additional public storage. Like the underlying Guillou-Quisquater scheme, our scheme is provably secure in the random oracle model.
ON THE METHOD OF "XL" AND ITS INEFFICIENCY TO TTM
We will show that the paper "Efficient Algorithms for Solving Overdefined Systems of Multivariate Polynomial Equations", published at Eurocrypt 2000 by N. Courtois, A. Shamir, J. Patarin and A. Klimov, ignores the intersection property at infinity of the system, and that the program they propose cannot achieve the desired results. In fact, we produce a very simple example to show that in some cases their program always fails.
The simple ideal cipher system
We address the problem of how to construct ideal cipher systems when the length of a key is much less than the length of an encrypted message. We suggest a new secret-key cipher system in which the message is first transformed into two parts in such a way that the larger part consists of independent and equiprobable letters. The relatively small second part is then enciphered wholly by the Vernam cipher, whereas only a few bits from the larger part are enciphered. This transformation is based on the fast version of the Elias construction of an unbiased random sequence. The time required for encoding and decoding and the memory size of the encoder and decoder are presented as functions of the ratio of the key length to the message length. The suggested scheme can be applied to sources with unknown statistics.
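The unbiasing step this construction rests on can be illustrated with von Neumann's classic trick, which Elias's construction generalizes to near-optimal rate. The toy below is my own illustration, not the paper's fast version.

```python
# Von Neumann unbiasing: from each pair of i.i.d. biased bits, emit a
# bit only when the pair is 01 or 10.  Since Pr[01] == Pr[10], the
# emitted bits are independent and equiprobable, whatever the bias.

def von_neumann(bits):
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

# A heavily biased input still yields unbiased (if fewer) output bits.
print(von_neumann([1, 1, 0, 1, 1, 0, 1, 1, 0, 0]))
```

Elias's refinement recovers close to the full entropy of the source rather than discarding the 00 and 11 pairs.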
The order of encryption and authentication for protecting communications (Or: how secure is SSL?)
We study the question of how to generically compose {\em symmetric}
encryption and authentication when building ``secure channels'' for
the protection of communications over insecure networks.
We show that any secure channels protocol designed to work with any combination
of secure encryption (against chosen plaintext attacks) and secure MAC
must use the encrypt-then-authenticate method.
We demonstrate this by showing that the other common methods
of composing encryption and authentication, including the
authenticate-then-encrypt method used in SSL, are not generically secure.
We show an example of an encryption function
that provides (Shannon's) perfect secrecy but when combined with
any MAC function under the authenticate-then-encrypt method
yields a totally insecure protocol (for example, finding passwords
or credit card numbers transmitted under the protection of such protocol
becomes an easy task for an active attacker).
The same applies to the encrypt-and-authenticate method used in SSH.
On the positive side we show that the authenticate-then-encrypt method
is secure if the encryption method in use is either CBC mode (with
an underlying secure block cipher) or a stream cipher (that xor the
data with a random or pseudorandom pad).
Thus, while we show the generic security of SSL to be broken,
the current standard implementations of the protocol that use
the above modes of encryption are safe.
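The generically secure order argued for here can be sketched with standard-library primitives. The stream cipher below (a SHA-256 counter-mode keystream) and HMAC-SHA256 are illustrative stand-ins of my own choosing, not the paper's constructions, and the sketch is not a vetted implementation.

```python
# Toy encrypt-then-authenticate composition: encrypt first, then MAC
# the ciphertext; the receiver verifies the MAC before decrypting.
import hmac, hashlib

def keystream(key, nonce, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, tag):
    # Verify the MAC over the *ciphertext* first; only then decrypt.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("forgery detected")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

ct, tag = seal(b"ek", b"mk", b"nonce0", b"attack at dawn")
assert open_(b"ek", b"mk", b"nonce0", ct, tag) == b"attack at dawn"
```

Rejecting before decryption is exactly what the authenticate-then-encrypt order cannot guarantee generically.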
Optimistic Asynchronous Multi-Party Contract Signing with Reduced Number of Rounds
Optimistic asynchronous multi-party contract signing protocols have
received attention in recent years as a compromise between efficient
protocols and protocols avoiding a third party as a bottleneck of
security. ``Optimistic'' roughly means: in case all participants are
honest and receive the messages from the other participants as
expected, the third party is not involved at all. The best solutions
known so far terminate within $t+2$ rounds in the optimistic case, for
any fixed set of $n$ signatories and allowing up to $t<n$ dishonest
signatories. The protocols presented here achieve a major improvement
compared to the state of the art: The number of rounds $R$ is reduced
from $O(t)$ to $O(1)$ for all $n \ge 2t+1$, and for $n < 2t+1$, $R$
grows remarkably slowly compared with numbers of rounds in $O(t)$: If
$t \approx \frac{k}{k+1} n $ then $R \approx 2k$.
Cryptanalysis of the Vesta-2M Stream Cipher
In this paper the security of the stream cipher Vesta-2M is investigated. A cryptanalytic algorithm is developed for a known plaintext attack where only a small segment of plaintext is assumed to be known. The complexity of the attack is estimated as the time needed to search through the square root of the number of all possible initial states.
Simple Forward-Secure Signatures From Any Signature Scheme
In Crypto'99, Bellare and Miner introduced {\em forward-secure signatures}
as digital signature schemes with the attractive property that exposure
of the signing key at certain time period does not allow for the forgery
of signatures from previous time periods.
That paper presented the first full design of an efficient forward-secure
signatures scheme, but left open the question of building efficient
and practical schemes based on standard signatures such as RSA or DSS.
In particular, they called for the development of schemes where the
main size-parameters (namely, the size of the private key, public key,
and signature) do not grow with the total number of periods for which
the public key is to be in use.
We present an efficient and extremely simple construction of forward-secure
signatures based on {\em any} regular signature scheme (e.g., RSA and DSS);
the resultant signatures enjoy size-parameters that are independent of the
number of periods (except for the inclusion of an index to the period
in which a signature is issued). The only parameter that grows (linearly)
with the number of periods is the total size of local non-secret
memory of the signer. The forward-security of our schemes is directly
implied by the unforgeability property of the underlying signature
scheme and it requires no extra assumptions.
Our approach can also be applied to some signature schemes with special
properties, such as undeniable signatures, to obtain forward-secure
signatures that still enjoy the added special property.
Solving Elliptic Curve Discrete Logarithm Problems Using Weil Descent
We provide a concrete instance of the discrete logarithm problem
on an elliptic curve over F_{2^{155}} which resists all previously
known attacks, but which can be solved with modest computer
resources using the Weil descent attack methodology of Frey. We
report on our implementation of index-calculus methods for
hyperelliptic curves over characteristic two finite fields, and
discuss the cryptographic implications of our results.
Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels
We present a formalism for the analysis of key-exchange protocols
that combines previous definitional approaches and results in a definition
of security that enjoys some important analytical benefits:
(i) any key-exchange protocol that satisfies the security definition
can be composed with symmetric encryption and authentication functions
to provide provably secure communication channels;
and
(ii) the definition allows for simple modular proofs of security:
one can design and prove security of key-exchange protocols in an
idealized model where the communication links are perfectly authenticated,
and then translate them using general tools to obtain security in
the realistic setting of adversary-controlled links.
We exemplify the usability of our results by applying them to obtain the
proof of two main classes of key-exchange protocols, Diffie-Hellman and
key-transport, authenticated via symmetric or asymmetric techniques.
Further contributions of the paper include the formalization of
``secure channels'' in the context of key-exchange protocols, and
establishing sufficient conditions on the symmetric encryption and
authentication functions to realize these channels.
Robust Software Tokens: Towards Securing a Digital Identity
This paper presents a new method called the robust software token for providing users with a stable and portable container in which a private key is stored and kept from adversaries, by simple software-only techniques. The proposed scheme is comparable with related notable work such as the cryptographic camouflage scheme and the networked cryptographic device, but is equipped with several advantages: (1) it uniquely supports both closed and open domains on public key infrastructures, (2) it supports more protocol setups, and (3) it is more efficient than the others. This paper handles the new RSA-based scheme only. The DSA-based scheme, sharing the basic idea, can be found in our previous work.
Flaws in differential cryptanalysis of Skipjack
This paper is motivated by some results presented by
Knudsen, Robshaw and Wagner at Crypto'99, which
described many attacks on reduced versions of Skipjack,
some of them being erroneous.
Differential cryptanalysis is based on distinguishers:
any attack should prove that the events that
trigger the analysis do not have the same probability
for the cipher as for a random function.
In particular, the composition of differentials
for successive parts of a cipher should be done
very carefully in order to lead to an attack.
This revised version of the paper includes the exact
computations of some probabilities and repairs the
attack on the first half of Skipjack.
EMpowering Side-Channel Attacks
In this paper, we report preliminary results obtained
as a result of a systematic investigation of leakage of compromising
information via EM emanations from chipcards and other devices. Our
findings show that the EM side-channel is more powerful than other
side-channels such as timing and power analysis. Specifically, in
some cases one can obtain much more compromising information about
computations, and one can use this information to defeat the protection
provided by countermeasures to the other side-channel attacks.
Anti-persistence: History Independent Data Structures
Many data structures give away much more information than they
were intended to. Whenever privacy is important, we need to be
concerned that it might be possible to infer information from the
memory representation of a data structure that is not available
through its ``legitimate'' interface. Word processors that
quietly maintain old versions of a document are merely the most
egregious example of a general problem.
We deal with data structures whose current memory representation does
not reveal their history. We focus on dictionaries, where this means
revealing nothing
about the order of insertions or deletions. Our first algorithm is
a hash table based on open addressing,
allowing $O(1)$ insertion and search. We also present
a history independent dynamic perfect hash table that uses space
linear in the number of elements inserted and has expected amortized
insertion
and deletion time $O(1)$. To solve the dynamic perfect hashing
problem we devise a general scheme for history independent
memory allocation.
For fixed-size records this is quite efficient, with insertion and
deletion both linear in the size of the record. Our variable-size
record
scheme is efficient enough for dynamic perfect hashing but not for
general use. The main open problem we leave is whether it is possible
to implement a variable-size record scheme with low overhead.
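The property at stake can be seen in miniature: a dictionary stored in a canonical (sorted) layout is trivially history independent, since its representation is a function of the current set alone. The sketch below is my own toy with O(n) insertion; the paper's open-addressing and dynamic perfect-hashing constructions achieve the same property with O(1) expected operations.

```python
# A trivially history-independent dictionary: the memory layout
# depends only on the *contents*, never on the order of insertions
# and deletions.
import bisect

class CanonicalDict:
    def __init__(self):
        self._keys, self._vals = [], []

    def insert(self, key, val):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._vals[i] = val        # overwrite in place
        else:
            self._keys.insert(i, key)
            self._vals.insert(i, val)

    def delete(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            del self._keys[i], self._vals[i]

    def layout(self):
        return list(zip(self._keys, self._vals))

# Two different histories, same final set -> identical representation.
a, b = CanonicalDict(), CanonicalDict()
for k in [3, 1, 2]:
    a.insert(k, k * k)
for k in [2, 3, 9, 1]:
    b.insert(k, k * k)
b.delete(9)
assert a.layout() == b.layout()
```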
Forward-Security in Private-Key Cryptography
This paper provides a comprehensive treatment of
forward-security in the context of shared-key based cryptographic primitives,
as a practical means to mitigate the damage caused by key-exposure. We provide
definitions of security, practical proven-secure constructions, and
applications for the main primitives in this area. We identify forward-secure
pseudorandom bit generators as the central primitive, providing several
constructions and then showing how forward-secure message authentication
schemes and symmetric encryption schemes can be built based on standard schemes
for these problems coupled with forward-secure pseudorandom bit generators. We
then apply forward-secure message authentication schemes to the problem of
maintaining secure access logs in the presence of break-ins.
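The key-evolution idea behind a forward-secure pseudorandom bit generator can be sketched in a few lines. The SHA-256 instantiation and domain-separation tags below are assumptions of mine for illustration, not one of the paper's constructions.

```python
# Minimal forward-secure PRG sketch: each step outputs one block and
# irreversibly overwrites the state, so exposing the current state
# reveals nothing about earlier outputs (assuming SHA-256 behaves as
# a one-way, PRG-like function).
import hashlib

class ForwardSecurePRG:
    def __init__(self, seed: bytes):
        self.state = seed

    def next_block(self) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        # Evolve and discard the old state: this is the forward-security step.
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out

g = ForwardSecurePRG(b"initial seed")
b0, b1 = g.next_block(), g.next_block()
assert b0 != b1 and len(b0) == 32
```

The abstract's point is that a primitive of exactly this shape suffices to lift standard MACs and symmetric encryption schemes to forward-secure ones.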
Composition and Efficiency Tradeoffs for Forward-Secure Digital Signatures
Forward-secure digital signatures, initially proposed by Anderson in
CCS 97 and formalized by Bellare and Miner in Crypto 99, are signature
schemes which enjoy the additional guarantee that a compromise of the
secret key at some point in time does not help forge signatures
allegedly signed in an earlier time period. Consequently, if the
secret key is lost, then the key can be safely revoked without
invalidating previously-issued signatures. Since the introduction of
the concept, several forward-secure signature schemes have been
proposed, with varying performance both in terms of space and time.
Which scheme is most useful in practice typically depends on the
requirements of the specific application.
In this paper we propose and study some general composition operations
that can be used to combine existing signature schemes (whether
forward-secure or not) into new forward-secure signature schemes. Our
schemes offer interesting trade-offs between the various efficiency
parameters, achieving a greater flexibility in accommodating the
requirements of different applications. As an extension of our
techniques, we also construct the first efficient forward-secure
signature scheme where the total number of time periods for which the
public key is used does not have to be fixed in advance. The scheme
can be used for practically unbounded time, and the performance
depends (minimally) only on the time elapsed so far.
Our scheme achieves excellent performance overall, is very competitive
with previous schemes with respect to all parameters, and outperforms
each of the previous schemes in at least one parameter. Moreover, the
scheme can be based on any underlying digital signature scheme, and
does not rely on specific assumptions. Its forward security is proven
in the standard model, without using a random oracle.
Dual of New Method for Upper Bounding the Maximum Average Linear Hull Probability for SPNs
In [3], we present a new algorithm for computing an upper bound on
the maximum average linear hull probability (MALHP) for the SPN
symmetric cipher structure, a value required to make claims about
provable security against linear cryptanalysis. This algorithm
improves on existing work in that the resulting upper bound is a
function of the number of encryption rounds (other upper bounds
known to the authors are not), and moreover, it can be computed
for an SPN with any linear transformation layer (the best previous
result, that of Hong et al. [4], applies only to SPNs with highly
diffusive linear transformations).
It is well known that there exists a duality between linear
cryptanalysis and differential cryptanalysis which allows certain
results related to one of the attacks to be translated into the
corresponding results for the other attack [1,5]. Since this
duality applies to our work in [3], we immediately obtain an
algorithm for upper bounding the maximum average differential
probability (MADP) for SPNs (required to make claims about
provable security against differential cryptanalysis).
Note: In what follows, we assume familiarity with the notation
and results of [3].
Efficient and Non-Interactive Non-Malleable Commitment
We present new constructions of non-malleable commitment schemes, in
the public parameter model (where a trusted party makes parameters
available to all parties), based on the discrete logarithm or RSA
assumptions. The main features of our schemes are: they achieve
near-optimal communication for arbitrarily-large messages and are
non-interactive. Previous schemes either required (several rounds of)
interaction or focused on achieving non-malleable commitment based on
general assumptions and were thus efficient only when committing to a
single bit. Although our main constructions are for the case of
perfectly-hiding commitment, we also present a communication-efficient,
non-interactive commitment scheme (based on general assumptions) that
is perfectly binding.
Efficient Password-Authenticated Key Exchange Using Human-Memorable Passwords
We present an efficient password-authenticated key exchange protocol
which is secure against off-line dictionary attacks even when users
choose passwords from a very small space (say, a dictionary of English
words). We prove security in the standard model under the decisional
Diffie-Hellman assumption, assuming public parameters generated by a
trusted party. Compared to the recent work of Goldreich and Lindell
(which was the first to give a secure construction, under general
assumptions, in the standard model), our protocol requires only 3
rounds and is efficient enough to be used in practice.
On the Power of Nonlinear Secret-Sharing
A secret-sharing scheme enables a dealer to distribute a secret
among n parties such that only some predefined authorized sets of
parties will be able to reconstruct the secret from their shares.
The (monotone) collection of authorized sets is called an access
structure, and is freely identified with its characteristic monotone
function f:{0,1}^n --> {0,1}. A family of secret-sharing schemes is
called efficient if the total length of the n shares is polynomial
in n. Most previously known secret-sharing schemes belonged to a
class of linear schemes, whose complexity coincides with the
monotone span program size of their access structure. Prior to this
work there was no evidence that nonlinear schemes can be
significantly more efficient than linear schemes, and in particular
there were no candidates for schemes efficiently realizing access
structures which do not lie in NC.
The main contribution of this work is the construction of two
efficient nonlinear schemes:
(1) A scheme with perfect privacy whose access structure is
conjectured not to lie in NC;
(2) A scheme with statistical privacy whose access structure is
conjectured not to lie in P/poly.
Another contribution is the study of a class of nonlinear schemes,
termed quasi-linear schemes, obtained by composing linear schemes
over different fields. We show that while these schemes are possibly
(super-polynomially) more powerful than linear schemes, they cannot
efficiently realize access structures outside NC.
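For contrast with the nonlinear schemes constructed here, the archetypal linear scheme is Shamir's threshold scheme, whose access structure is "any t of n parties". The field size and fixed randomness below are illustrative choices of mine.

```python
# Shamir's threshold scheme, the classic example of a *linear*
# secret-sharing scheme: shares are points on a random degree-(t-1)
# polynomial whose constant term is the secret.
import random

P = 2**31 - 1  # a small Mersenne prime as the field

def share(secret, t, n, rng=random.Random(0)):
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456, t=3, n=5)
assert reconstruct(shares[:3]) == 123456
assert reconstruct(shares[1:4]) == 123456
```

Reconstruction here is a linear function of the shares over the field, which is exactly the structural property the nonlinear schemes of this paper escape.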
On multivariate signature-only public key cryptosystems
A paper published at Asiacrypt 2000 describes a signature scheme that (apparently) cannot be abused for encryption.
The problem is highly non-trivial and every solution should be looked upon with caution.
What is especially hard to achieve is to avoid that the public key should leak some information, to be used as a possible "shadow" secondary public key.
In the present paper we argue that the problem has
many natural solutions within the framework of multivariate cryptography.
First of all it seems that virtually any non-injective multivariate public key is inherently unusable for encryption.
Unfortunately, a lot of leakage is inherent to multivariate cryptosystems. Though it may appear hopeless at first sight, we use this very property to remove leakage. In our new scenario the Certification Authority (CA) makes extensive modifications to the public key such that the user can still use the internal trapdoor,
but has no control on any publicly verifiable property of the actual public key equations published by CA.
Thus we propose a very large class of multivariate non-encryption
PKI schemes with many parameters $q,d,h,v,r,u,f,D$.
The paper is also of independent interest, as it contains all variants of the HFE trapdoor public key cryptosystem.
We give numerous and precise security claims that HFE achieves or appears to achieve and establish some provable security relationships.
Efficient Encryption for Rich Message Spaces Under General Assumptions
We present a new family of public-key encryption schemes which combine modest computational demands with provable security guarantees under only general assumptions. The schemes may be realized with any one-way trapdoor permutation, and provide a notion of security corresponding to semantic security under the condition that the message space has sufficient entropy. Furthermore, these schemes can be implemented with very few applications of the underlying one-way permutation: schemes which provide security for message spaces in $\{0,1\}^n$ with minimum entropy $n - \ell$ can be realized with $\ell + w(k)\log k$ applications of the underlying one-way trapdoor permutation. Here $k$ is the security parameter and $w(k)$ is any function which tends to infinity. In comparison, extant systems offering full semantic security require roughly $n$ applications of the underlying one-way trapdoor permutation. Finally, we give a simplified proof of a fundamental ``elision lemma'' of Goldwasser and Micali.
A Block-Cipher Mode of Operation for Parallelizable Message Authentication
We define and analyze a
simple and fully parallelizable block-cipher mode of operation
for message authentication.
Parallelizability does not come at the
expense of serial efficiency: in a conventional, serial
environment, the algorithm's speed is within
a few percent of the (inherently sequential) CBC~MAC.
The new mode, PMAC, is deterministic,
resembles a standard mode of operation
(and not a Carter-Wegman MAC),
works for strings of any bit length,
employs a single block-cipher key,
and uses just $\max\{1, \lceil |M|/n \rceil\}$
block-cipher calls to MAC any string $M$ using an
$n$-bit block cipher.
We prove PMAC secure,
quantifying an adversary's forgery probability
in terms of the quality of the block cipher as a
pseudorandom permutation.
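The call-count claim above is easy to check numerically. The sketch below (a minimal illustration of the count, not of the PMAC algorithm itself) assumes only the message bit length and the block size:

```python
import math

def pmac_call_count(msg_bits: int, n: int = 128) -> int:
    """Block-cipher calls PMAC needs to MAC a msg_bits-bit string
    with an n-bit block cipher: max{1, ceil(|M|/n)}."""
    return max(1, math.ceil(msg_bits / n))
```

For instance, even the empty string costs one call, and a 129-bit message costs two calls with a 128-bit block cipher.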
OCB Mode
This paper was prepared for NIST, which is considering new
block-cipher modes of operation. It describes a parallelizable
mode of operation that simultaneously provides both privacy
and authenticity. "OCB mode" encrypts-and-authenticates
an arbitrary message $M\in\bits^*$ using only $\lceil |M|/n\rceil + 2$
block-cipher invocations, where $n$ is the block length of the
underlying block cipher. Additional overhead is small.
OCB refines a scheme, IAPM, suggested by Jutla [IACR-2000/39], who
was the first to devise an authenticated-encryption mode with minimal
overhead compared to standard modes. Desirable new properties of
OCB include: very cheap offset calculations; operating on an arbitrary
message $M\in\bits^*$; producing ciphertexts of minimal length;
using a single underlying cryptographic key; making a nearly optimal number
of block-cipher calls; avoiding the need for a random IV; and rendering it
infeasible for an adversary to find "pretag collisions". The paper
provides a full proof of security for OCB.
Last updated: 2001-06-20
Cryptanalysis of some elliptic curve based cryptosystems of Paillier
Two public key encryption schemes
based on anomalous elliptic curves over rings
are studied.
It is argued that these schemes are not secure.
Secure Multiparty Computation of Approximations
Approximation algorithms can sometimes be used to obtain efficient
solutions where no efficient exact computation is known. In
particular, approximations are often useful in a distributed setting
where the inputs are held by different parties and are extremely
large. Furthermore, for some applications, the parties want to
cooperate to compute a function of their inputs without revealing more
information than they have to.
Suppose the function $\fhat$ is an approximation to the function $f$.
Secure multiparty computation of $f$ allows the parties to compute $f$
without revealing more than they have to, but it requires some
additional overhead in computation and communication. Hence, if
computation of $f$ is inefficient or just efficient enough to be
practical, then secure computation of $f$ may be impractically
expensive. Furthermore, a secure computation of $\fhat$ is not
necessarily as private as a secure computation of $f$, because the
output of $\fhat$ may reveal more information than the output of $f$.
In this paper, we present definitions and protocols of secure
multiparty approximate computation that show how to realize most of
the cost savings available by using $\fhat$ instead of $f$ without
losing the privacy of a secure computation of $f$.
We make three contributions. First, we give formal definitions of
secure multiparty approximate computations. Second, we present an
efficient, sublinear-communication, private approximate computation
for the Hamming distance; we also give an efficient,
polylogarithmic-communication solution for the $L^{2}$ distance
in a relaxed model. Finally, we give an efficient private
approximation of the permanent and other related \#P-hard problems.
Robustness for Free in Unconditional Multi-Party Computation
We present a very efficient multi-party computation protocol
unconditionally secure against an active adversary. The security is
maximal, i.e., active corruption of up to $t<n/3$ of the $n$ players is
tolerated. The communication complexity for securely evaluating a circuit
with $m$ multiplication gates over a finite field is $\O(mn^2)$ field
elements, including the communication required for simulating broadcast.
This corresponds to the complexity of the best known protocols for the
passive model, where the corrupted players are guaranteed not to deviate
from the protocol. Even in this model, it seems to be unavoidable that
for every multiplication gate every player must send a value to every
other player, and hence the complexity of our protocol may well be
optimal. The constant overhead factor for robustness is small and the
protocol is practical.
Optimistic Asynchronous Atomic Broadcast
This paper presents a new protocol
for atomic broadcast in an asynchronous network with
tolerating a maximal number of Byzantine failures.
It guarantees both safety and liveness without
making any timing assumptions or using any type of failure detector.
Under normal circumstances, the protocol runs in an optimistic mode,
with extremely low message and computational complexity -- essentially,
just performing a Bracha broadcast for each request.
In particular, no
potentially expensive public-key cryptographic operations are used.
In rare circumstances, the protocol may briefly
switch to a pessimistic mode,
where both the message and computational complexity
are significantly higher than in the optimistic mode,
but are
still reasonable.
The Rectangle Attack - Rectangling the Serpent
Serpent is one of the 5 AES finalists. The best attack published so far
analyzes up to 9 rounds. In this paper we present attacks on 7-round,
8-round, and 10-round variants of Serpent. We attack the 7-round variant of
Serpent with all key lengths, and the 8- and 10-round variants with 256-bit keys.
The 10-round attack on the 256-bit key variant is the best published
attack on the cipher. The attack enhances the amplified boomerang attack
and uses better differentials. We also present the best 3-round, 4-round,
5-round and 6-round differential characteristics of Serpent.
Some observations on the theory of cryptographic hash functions
In this paper, we study several issues related to the notion of
``secure'' hash functions. Several necessary conditions
are considered, as well as a popular sufficient condition
(the so-called random oracle model). We study the security of
various problems that are motivated by the notion of a secure
hash function.
These problems are analyzed in the random oracle model,
and we prove that
the obvious trivial algorithms are optimal.
As well, we look closely at
reductions between various problems. In particular,
we consider the important question ``does preimage resistance imply
collision resistance?''. Finally, we study the relationship
of the security of
hash functions built using the Merkle-Damgard construction
to the security of the underlying compression function.
An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation
A credential system is a system in which users can obtain
credentials from organizations and demonstrate possession of these
credentials. Such a system is anonymous when transactions carried out by the
same user cannot be linked. An anonymous credential system is of significant
practical relevance because it is the best means of providing privacy for
users. In this paper we propose a practical anonymous credential system that
is based on the strong RSA assumption and the decisional Diffie-Hellman
assumption modulo a safe prime product and is considerably superior to
existing ones:
(1) We give the first practical solution that allows
a user to unlinkably demonstrate possession of a credential as many times as
necessary without involving the issuing organization.
(2) To prevent misuse of anonymity, our scheme is the first to offer optional
anonymity revocation for particular transactions.
(3) Our scheme offers separability: all organizations can choose their
cryptographic keys independently of each other.
Moreover, we suggest more effective means of preventing users from sharing their
credentials, by introducing {\em all-or-nothing} sharing: a user who allows a
friend to use one of her credentials once, gives him the ability to use all of
her credentials, i.e., taking over her identity. This is implemented by a new
primitive, called {\em circular encryption}, which is of independent interest,
and can be realized from any semantically secure cryptosystem in the random
oracle model.
Analysis of a Subset Sum Randomizer
In [5] an efficient pseudo-random number generator (PRNG) with
provable security is described. Its security is based on the
hardness of the subset sum or knapsack problem. In this paper
we refine these ideas to design a PRNG with independent seed and
output generation. This independence allows for greater
parallelism, design flexibility, and possibly greater security.
On adaptive vs. non-adaptive security of multiparty protocols
Security analysis of multiparty cryptographic protocols distinguishes
between two types of adversarial settings: In the non-adaptive
setting, the set of corrupted parties is chosen in advance, before the
interaction begins. In the adaptive setting, the adversary chooses
who to corrupt during the course of the computation. We study the
relations between adaptive security (i.e., security in the adaptive
setting) and non-adaptive security, according to two definitions and
in several models of computation. While affirming some prevailing
beliefs, we also obtain some unexpected results. Some highlights of
our results are:
o According to the definition of Dodis-Micali-Rogaway (which is set in
the information-theoretic model), adaptive and non-adaptive security
are equivalent. This holds for both honest-but-curious and Byzantine
adversaries, and for any number of parties.
o According to the definition of Canetti, for honest-but-curious
adversaries, adaptive security is equivalent to non-adaptive
security when the number of parties is logarithmic, and is strictly
stronger than non-adaptive security when the number of parties is
super-logarithmic. For Byzantine adversaries, adaptive security is
strictly stronger than non-adaptive security, for any number of
parties.
Efficient Traitor Tracing Algorithms using List Decoding
We apply powerful, recently discovered techniques for the list decoding of error-correcting codes to the problem of efficiently tracing traitors. Traitor tracing schemes have been extensively studied for use as a piracy deterrent. In a widely studied model for protecting digital content, each user in the system is associated with a unique set of symbols. For example, the sets may be used to install a software CD or decrypt pay-TV content. The assignment of sets is done in such a way that if a bounded collection of sets is used to form a new set to enable piracy, at least one of the traitor sets can be identified by applying a traitor tracing algorithm to the newly formed set. Much work has focused on methods for constructing such traceability schemes, but the complexity of the traitor tracing algorithms has received little attention. A widely used traitor tracing algorithm, the TA algorithm, has a running time of $O(\n)$ in general, where $\n$ is the number of sets in the system (e.g., the number of copies of the CD), and therefore is inefficient for large populations. In this paper we use a coding theoretic approach to produce traceability schemes for which the TA algorithm is very fast. We show that when suitable error-correcting codes are used to construct traceability schemes, and fast list decoding algorithms are used to trace, the run time of the TA algorithm is polynomial in the codeword length. We also use the strength of the error-correcting code approach to construct traceability schemes with more efficient algorithms for finding all possible traitor coalitions. Finally, we provide evidence that amongst traceability schemes in general, TA traceability schemes are the most likely to be amenable to efficient tracing methods.
An observation regarding Jutla's modes of operation
Recently, Jutla suggested two new modes of operation for block ciphers. These modes build on traditional CBC and ECB
modes, respectively, but add to them masking of the outputs and inputs. Jutla proved that these masking operations considerably
strengthen CBC and ECB modes. In particular, together with a simple checksum, the modified modes ensure not only confidentiality, but
also authenticity. Similar modes were also suggested by Gligor and Donescu and by Rogaway.
In Jutla's proposal (as well as in some of the other proposals), the masks themselves are derived from an IV via the same block
cipher as used for the encryption (perhaps with a different key). In this work we note, however, that the function for deriving these masks
need not be cryptographic at all. In particular, we prove that a universal hash function (a-la-Carter-Wegman) is sufficient for this
purpose.
Timed-Release Cryptography
Let $n$ be a large composite number. Without factoring $n$, the
validation of $a^{2^t} (\bmod \, n)$ given $a$, $t$ with $gcd(a, n) =
1$ and $t < n$ can be done in $t$ squarings modulo $n$. For $t \ll n$
(e.g., $n > 2^{1024}$ and $t < 2^{100}$), no lower complexity than $t$
squarings is known to fulfill this task (even considering massive
parallelisation). Rivest et al. suggested using such constructions as
good candidates for realising timed-release crypto problems.
We argue the necessity for zero-knowledge proof of the correctness of
such constructions and propose the first practically efficient
protocol for a realisation. Our protocol proves, in $\log_2 t$
standard crypto operations, the correctness of $(a^e)^{2^t}
(\bmod\,n)$ with respect to $a^e$ where $e$ is an RSA encryption
exponent. With such a proof, a {\em Timed-release RSA Encryption} of a
message $M$ can be given as $a^{2^t} M (\bmod \,n)$ with the
assertion that the correct decryption of the RSA ciphertext $M^e
(\bmod \, n)$ can be obtained by performing $t$ squarings modulo $n$
starting from $a$. {\em Timed-release RSA signatures} can be
constructed analogously.
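The arithmetic behind the construction can be illustrated with a toy example: without the factorization of $n$, computing $a^{2^t} \bmod n$ appears to require $t$ sequential squarings, while a party who knows $\varphi(n)$ can shortcut the computation. The parameters below are tiny and purely illustrative, nowhere near secure sizes:

```python
# Toy illustration of computing a^(2^t) mod n two ways.
p, q = 1000003, 1000033            # secret prime factors (toy sizes)
n = p * q
phi = (p - 1) * (q - 1)
a, t = 5, 1000

# Without the factorization: t sequential squarings modulo n.
x = a % n
for _ in range(t):
    x = x * x % n

# With the factorization: one shortcut exponentiation, valid
# because gcd(a, n) = 1 allows reducing the exponent mod phi(n).
shortcut = pow(a, pow(2, t, phi), n)
assert x == shortcut
```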
Digitally Watermarking RSA Moduli
The moduli used in RSA (see \cite{rsa}) can be generated by many different sources.
The generator of a modulus knows its factorization,
and therefore has the ability to forge signatures or break any system based on that modulus.
If a modulus and the RSA parameters associated with it were generated by a reputable source,
the system would have higher value than if the parameters were generated by an unknown entity.
An RSA modulus is digitally marked, or digitally trade marked, if the generator and other
identifying features of the modulus can be identified and possibly verified from the modulus itself.
The basic concept of digitally marking an RSA modulus is to fix the upper bits of the
modulus to this tag.
Thus anyone who sees the public modulus can tell who generated the modulus and who the generator
believes the intended user/owner of the modulus is.
Two types of trade marking will be described here.
The first is simpler but does not give verifiable trade marking.
The second is more complex, but allows for verifiable trade marking of RSA moduli.
The second part of this paper describes how to generate an RSA modulus with fixed upper bits.
Ciphers with Arbitrary Finite Domains
We introduce the problem of enciphering members of a finite set $M$
where $k=|M|$ is arbitrary (in particular, it need not be a power
of two). We want to achieve this goal starting from a block cipher
(which requires a message space of size $N=2^n$, for some $n$).
We look at a few solutions to this problem, focusing on the case
when $M=[0, k-1]$.
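One simple approach studied in this setting is often called cycle-walking: apply a permutation on the larger set $[0, N-1]$ repeatedly until the value lands back inside $[0, k-1]$. The sketch below is illustrative only; a fixed pseudorandom shuffle stands in for the block cipher:

```python
import random

# Toy "block cipher": a fixed pseudorandom permutation on [0, N-1].
N, k = 256, 100
rng = random.Random(42)
perm = list(range(N))
rng.shuffle(perm)

def E(x: int) -> int:
    return perm[x]

def cycle_walk(m: int) -> int:
    """Encipher m in [0, k-1] by re-applying E until the value
    falls back inside [0, k-1]."""
    c = E(m)
    while c >= k:
        c = E(c)
    return c

# Cycle-walking over a permutation yields a permutation of [0, k-1].
assert sorted(cycle_walk(m) for m in range(k)) == list(range(k))
```

Because each walk follows a cycle of the underlying permutation, the construction is invertible by walking the cycle in the other direction.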
New Zero-knowledge Undeniable Signatures - Forgery of Signature Equivalent to Factorisation
We propose a new zero-knowledge undeniable signature scheme which is
based on the intractability of computing high-order even powers modulo
a composite. The new scheme has a number of desirable properties: (i)
forgery of a signature (including existential forgery) is proven to be
equivalent to factorisation, (ii) perfect zero-knowledge, (iii)
efficient protocols for signature verification and non-signature
denial: both measured by $O(\log k)$ (multiplications) where $1/k$
bounds the probability of error. For a denial protocol, this
performance is unprecedented.
How to achieve a McEliece-based Digital Signature Scheme
McEliece is one of the oldest known public key cryptosystems.
Though it was less widely studied than RSA, it is remarkable that
all known attacks are still exponential. It is widely believed that
code-based cryptosystems like McEliece do not allow practical
digital signatures. In the present paper we disprove this belief
and show a way to build a practical signature scheme based on coding
theory. Its security can be reduced in the random oracle model to
the well-known {\em syndrome decoding problem} and the
distinguishability of permuted binary Goppa codes from a random
code. For example we propose a scheme with signatures of $81$-bits
and a binary security workfactor of $2^{83}$.
Robust key-evolving public key encryption schemes
We propose a key-evolving paradigm to deal with the key
exposure problem of public key encryption schemes.
The key evolving paradigm is like the one used for
forward-secure digital signature schemes.
Let time be divided into time periods such that
at time period $j$, the decryptor holds the secret key
$SK_j$, while the public key PK is fixed during its
lifetime.
At time period $j$, a sender encrypts a message $m$ as
$\langle j, c\rangle$, which can be decrypted only
with the private key $SK_j$.
When time transitions from period $j$ to $j+1$, the
decryptor updates its private key from $SK_j$ to $SK_{j+1}$
and deletes $SK_j$ immediately.
The key-evolving paradigm ensures that compromise of the
private key $SK_j$ does not jeopardize messages encrypted
at other time periods.
\par
We propose two key-evolving public key encryption schemes
with $z$-resilience such that compromise of $z$ private keys
does not affect confidentiality of messages encrypted in
other time periods.
Assuming that the DDH problem is hard,
we show that one scheme is semantically secure against passive
adversaries and the other is semantically secure against
adaptive chosen-ciphertext attack in the random
oracle model.
Fully Distributed Threshold RSA under Standard Assumptions
The aim of the present article is to propose a fully distributed environment for the RSA scheme.
What we have in mind are highly sensitive applications, and even if we are ready to pay a price in
terms of efficiency, we do not want any compromise in the security assumptions that we make.
Recently Shoup proposed a practical RSA threshold signature scheme that allows sharing the
ability to sign among a set of players. This scheme can be used for decryption as well.
However, Shoup's protocol assumes a trusted dealer to generate and distribute the keys.
This comes from the fact that the scheme needs a special assumption on the RSA modulus, and
this kind of RSA modulus cannot be easily generated in an efficient way with many players.
Of course, it is still possible to invoke theoretical results on multiparty computation, but we
cannot hope to design efficient protocols that way. The only practical result for generating RSA moduli
in a distributed manner is the Boneh and Franklin protocol, but this protocol cannot be easily
modified to generate the kind of RSA moduli that Shoup's protocol requires.
The present work takes a different path by proposing a method to enhance the key generation
with some additional properties and revisits the proof of Shoup to work with the resulting
RSA moduli. Both of these enhancements decrease the performance of the basic protocols.
However, we think that in the applications that we target, these enhancements provide
practical solutions. Indeed, the key generation protocol is usually run only once, and the
players have time to perform their task, so that the communication and time complexity
are not overly important.
Are 'Strong' Primes Needed for RSA?
We review the arguments in favor of using so-called ``strong primes''
in the RSA public-key cryptosystem. There are two types of such
arguments: those that say that strong primes are needed to protect
against factoring attacks, and those that say that strong primes are
needed to protect against ``cycling'' attacks (based on repeated
encryption).
We argue that, contrary to common belief, it is unnecessary to use
strong primes in the RSA cryptosystem. That is, by using strong
primes one gains a negligible increase in security over what is
obtained merely by using ``random'' primes of the same size. There
are two parts to this argument. First, the use of strong primes
provides no additional protection against factoring attacks, because
Lenstra's method of factoring based on elliptic curves (ECM) circumvents any
protection that might have been offered by using strong primes.
The methods that 'strong' primes are intended to guard against, as well as ECM, are
probabilistic in nature, but ECM succeeds with higher probability. For the RSA key sizes being proposed now, the probability
of success of these methods is very low.
Additionally, the newer Number Field Sieve algorithm can factor RSA keys with certainty in
less time than these methods.
Second, a simple group-theoretic argument shows that
cycling attacks are extremely unlikely to be effective, as long as the
primes used are large. Indeed, even probabilistic factoring attacks will succeed much more
quickly and with higher probability than cycling attacks.
Secure and Efficient Asynchronous Broadcast Protocols
Reliable broadcast protocols are a fundamental building block for
implementing replication in fault-tolerant distributed systems. This paper
addresses secure service replication in an asynchronous environment with a
static set of servers, where a malicious adversary may corrupt up to a
threshold of servers and controls the network. We develop a formal model
using concepts from modern cryptography, present modular definitions for
several broadcast problems, including reliable, atomic, and secure causal
broadcast, and present protocols implementing them. Reliable broadcast is a
basic primitive, also known as the Byzantine generals problem, providing
agreement on a delivered message. Atomic broadcast imposes additionally a
total order on all delivered messages. We present a randomized atomic
broadcast protocol based on a new, efficient multi-valued asynchronous
Byzantine agreement primitive with an external validity condition.
To the best of our knowledge, no such efficient asynchronous atomic broadcast protocol
maintaining liveness and safety in the Byzantine model has appeared
previously in the literature. Secure causal broadcast extends atomic
broadcast by encryption to guarantee a causal order among the delivered
messages. Threshold-cryptographic protocols for signatures, encryption, and
coin-tossing also play an important role.
A Note on Cryptanalysis of the Preliminary Version of the NTRU Signature Scheme
In this paper a preliminary version of the NTRU signature scheme
is cryptanalyzed. The attack exploits a correlation between some
bits of a signature and coefficients of a secret random
polynomial. The attack does not apply to the next version of the
signature scheme.
Last updated: 2001-01-26
MinRank problem and Zero-knowledge authentication
A zero-knowledge protocol provides provably secure authentication based on a computational problem.
Several such schemes have been proposed since 1984, and the most practical ones rely on
problems such as factoring, which are unfortunately only subexponentially hard.
Still, there are several other practical schemes based on NP-complete problems.
Among them, the problems of coding theory
remain, in spite of some 20 years of significant research effort, exponential to solve.
The problem MinRank recently arose in the cryptanalysis of the HFE (Crypto'99) and TTM (Asiacrypt'2000) public key cryptosystems.
It happens to be a strict generalization of those hard problems of (decoding) error-correcting codes.
We propose a new Zero-knowledge scheme based on MinRank.
We prove it to be Zero-knowledge by black-box simulation.
We compare it to other practical schemes based on NP-complete problems.
The MinRank scheme allows also an easy setup for a public key shared by a few users, and thus allows anonymous group signatures.
Separating Decision Diffie-Hellman from Diffie-Hellman in cryptographic groups
In many cases, the security of a cryptographic scheme based on Diffie--Hellman does in fact rely on the hardness of the
Diffie--Hellman Decision problem. In this paper, we show that the
hardness of Decision Diffie--Hellman is a much stronger hypothesis than the hardness of the regular Diffie--Hellman problem. Indeed, we
describe a reasonable-looking cryptographic group where Decision
Diffie--Hellman is easy while Diffie--Hellman is equivalent to a --
presumably hard -- Discrete Logarithm Problem. This shows that care
should be taken when dealing with Decision Diffie--Hellman, since its
security cannot be taken for granted.
The One-More-RSA-Inversion Problems and the Security of Chaum's Blind Signature Scheme
We introduce a new class of computational problems which we
call the ``one-more-RSA-inversion'' problems. Our main result is that
two problems in this class, which we call the chosen-target and known-target
inversion problems respectively, have polynomially-equivalent computational
complexity. We show how this leads to a proof of security for Chaum's RSA-based
blind signature scheme in the random oracle model based on the assumed hardness
of either of these problems. We define and prove analogous results for
``one-more-discrete-logarithm'' problems. Since the appearance of the
preliminary version of this paper, the new problems we have introduced
have found other uses as well.
Efficient Algorithms for Computing Differential Properties of Addition
In this paper we systematically study the differential properties of
addition modulo $2^n$. We derive $\Theta(\log n)$-time algorithms
for most of the properties, including differential probability of
addition. We also present log-time algorithms for finding good
differentials. Despite the apparent simplicity of modular addition,
the best known algorithms require naive exhaustive computation. Our
results represent a significant improvement over them. In the most
extreme case, we present a complexity reduction from
$\Omega(2^{4n})$ to $\Theta(\log n)$.
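To convey the flavor of such fast computations, here is a sketch of a closed-form test for the XOR differential probability of addition in the style of the algorithms described above; the exact formula used here is reconstructed rather than quoted from the abstract, and should be checked against the paper:

```python
def xdp_add(alpha: int, beta: int, gamma: int, n: int = 32) -> float:
    """Probability that inputs differing by XOR differences
    (alpha, beta) yield sums differing by gamma, modulo 2^n."""
    mask = (1 << n) - 1

    def eq(x, y, z):
        # Bit positions where x, y, and z all agree.
        return (~(x ^ y)) & (~(x ^ z)) & mask

    # Impossibility test on the shifted differences.
    if eq(alpha << 1, beta << 1, gamma << 1) & (alpha ^ beta ^ gamma ^ (beta << 1)) & mask:
        return 0.0
    # Each position (below the top bit) where the differences
    # disagree halves the probability.
    w = bin((~eq(alpha, beta, gamma)) & (mask >> 1)).count("1")
    return 2.0 ** -w
```

As a sanity check, the zero difference passes with probability 1, and flipping only the least significant bit of one addend propagates unchanged with probability 1/2.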
New constructions of resilient Boolean functions with maximal nonlinearity
In this paper we develop a technique that allows us to obtain new effective constructions of highly resilient Boolean functions with high nonlinearity. In particular, we prove that the upper bound
$2^{n-1}-2^{m+1}$ on nonlinearity of m-resilient n-variable Boolean
functions is achieved for $0.6n-1\le m\le n-2$.
A Content Certified E-mail Protocol with a Public Mailbox
This short note presents some ideas for a content certified e-mail protocol with a public mailbox, applying a Diffie-Hellman key exchange style to reflect the agreement between the sender and receiver.
The notion of certified e-mail is similar to the certification procedure for regular mail in real-life post offices. Yet the post office only certifies whether the mail has been sent or received by the appropriate parties, not its content. This insufficient verification in paper authentication protocols can be easily solved by digital signature schemes.
A certified e-mail protocol must have the following security properties:
1. The sender must have some way of proving that the receiver received the mail, should the receiver later try to deny it.
2. The receiver must have some way of proving that the sender did not send the mail, should the sender later try to claim that she did.
Universally Composable Security: A New Paradigm for Cryptographic Protocols
We present a general framework for representing cryptographic protocols and analyzing their security. The framework
allows specifying the security requirements of practically any
cryptographic task in a unified and systematic way.
Furthermore, in this framework the security of protocols
is maintained under a general composition operation, called
universal composition.
The proposed framework with its security-preserving composition property allows for modular design and analysis of complex cryptographic protocols from relatively simple building blocks.
Moreover, within this framework, protocols are guaranteed to maintain their security within any context, even in the presence of an unbounded number of arbitrary protocol instances that run concurrently in an adversarially controlled manner.
This is a useful guarantee that allows arguing about the security of
cryptographic protocols in complex and unpredictable environments such
as modern communication networks.
A Model for Asynchronous Reactive Systems and its Application to Secure Message Transmission
We present the first rigorous model for secure reactive systems in
asynchronous networks with a sound cryptographic semantics, supporting
abstract specifications and the composition of secure systems. This
enables modular proofs of security, which is essential in bridging
the gap between the rigorous proof techniques of cryptography and
tool-supported formal proof techniques.
The model follows the general simulatability approach of modern
cryptography. A variety of network structures and trust models can
be described, such as static and adaptive adversaries.
As an example of our specification methodology we provide the first
abstract and complete specification for Secure Message Transmission,
improving on recent results by Lynch, and we verify one
concrete implementation. Our proof is based on a general theorem on
the security of encryption in a reactive multi-user setting,
generalizing a recent result by Bellare et al.
How to Encrypt Long Messages without Large Size Symmetric/Asymmetric Encryption Schemes
Suppose that we wish to encrypt long messages
with small overhead by a public key encryption scheme
which is secure against adaptive chosen ciphertext attack (IND-CCA2).
Then the previous schemes require either
a large size one-way trapdoor permutation (OAEP)
or both a large size symmetric encryption scheme
and a small size asymmetric encryption scheme (hybrid encryption).
In this paper,
we show a scheme which requires only a small size
asymmetric encryption scheme satisfying IND-CCA2
for our purpose.
Therefore, the proposed scheme is very efficient.
A hash function and
a pseudorandom bit generator are used as random oracles.
On the Security of Modular Exponentiation with Application to the Construction of Pseudorandom Generators
Assuming the intractability of factoring, we show that
the output of the exponentiation modulo a composite function
$f_{N,g}(x)=g^x\bmod N$ (where $N=P\cdot Q$) is pseudorandom,
even when its input is restricted to be half the size.
This result is equivalent to the simultaneous hardness of the upper
half of the bits of $f_{N,g}$,
proven by Hastad, Schrift and Shamir.
Yet, we supply a different proof that is significantly simpler
than the original one. In addition, we suggest a pseudorandom generator
which is more efficient than all previously known factoring
based pseudorandom generators.
Candidate One-Way Functions Based on Expander Graphs
We suggest a candidate one-way function using combinatorial
constructs such as expander graphs. These graphs are used to
determine a sequence of small overlapping subsets of input bits,
to which a hard-wired random predicate is applied.
Thus, the function is extremely easy to evaluate:
all that is needed is to take multiple projections of the
input bits, and to use these as entries to a look-up table.
It is feasible for the adversary to scan the look-up table,
but we believe it would be infeasible to find an input that fits a given
sequence of values obtained for these overlapping projections.
The conjectured difficulty of inverting the suggested function
does not seem to follow from any well-known assumption.
Instead, we propose the study of the complexity of inverting
this function as an interesting open problem,
with the hope that further research will provide evidence for our
belief that the inversion task is intractable.
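A minimal sketch of the evaluation procedure described above (the parameters and the choice of projections here are illustrative; in the paper the subsets come from an expander graph rather than sliding windows):

```python
import random

# Toy version of the candidate: overlapping d-bit projections of the input,
# each fed into a hard-wired random predicate stored as a lookup table.
random.seed(1)
n, d = 16, 5
# overlapping subsets of input-bit positions (illustrative sliding windows;
# the paper derives these from an expander graph)
subsets = [[(i + j) % n for j in range(d)] for i in range(n)]
predicate = [random.randrange(2) for _ in range(2 ** d)]  # lookup table

def f(x_bits):
    """Evaluate by projecting the input bits and indexing the lookup table."""
    out = []
    for s in subsets:
        idx = 0
        for pos in s:
            idx = (idx << 1) | x_bits[pos]
        out.append(predicate[idx])
    return out

x = [random.randrange(2) for _ in range(n)]
print(f(x))
```

Evaluation is just projections plus table lookups, matching the abstract's point that the function is extremely easy to compute in the forward direction.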
Last updated: 2001-01-05
Non-Deforming Digital Watermarks
TaKE cryptography offers subliminal marking of a digital stream so that any tampering induces an unacceptable distortion of the primary information. Encrypted audio and video streams are decrypted by one key to the original content (e.g. music), and through another key to the digital watermark (e.g. the name of the legitimate user). Unlike the prevailing methods, which are based on distorting the protected content or locking it through a digital signature, TaKE -- Tailored Key Encryption -- preserves the integrity of the original stream, and its digital watermarks are inconspicuous. Daniel (tm) is a particular TaKE cryptography which also offers an instant and flexible trade-off between security level on the one hand and speed and convenience on the other. The described method is fast and suitable for both high-capacity streams and secrecy-sensitive streams.
RSA-OAEP is Secure under the RSA Assumption
Recently Victor Shoup noted that there is a gap in
the widely-believed security result of OAEP against adaptive
chosen-ciphertext attacks. Moreover, he showed that,
presumably,
OAEP cannot be proven secure from the {\it one-wayness}
of the underlying trapdoor permutation.
This paper establishes another result on the security
of OAEP. It proves that OAEP offers semantic security
against adaptive chosen-ciphertext attacks,
in the random oracle model, under the {\it partial-domain}
one-wayness of the underlying permutation.
Therefore, this uses a formally stronger assumption.
Nevertheless, since partial-domain one-wayness of the RSA function
is equivalent to its (full-domain) one-wayness, it follows that
the security of RSA--OAEP can actually
be proven under the sole RSA assumption, although
the reduction is not tight.
OAEP Reconsidered
The OAEP encryption scheme was introduced by Bellare and Rogaway
at Eurocrypt '94.
It converts any trapdoor permutation scheme into a public-key
encryption scheme.
OAEP is widely believed to provide
resistance against adaptive chosen ciphertext attack.
The main justification for this belief is a supposed proof of security
in the random oracle model, assuming the underlying
trapdoor permutation scheme is one way.
This paper shows conclusively that this justification is invalid.
First, it observes that there appears to be a non-trivial gap in the
OAEP security proof.
Second,
it proves that this gap cannot be filled,
in the sense that
there can be no standard "black box" security reduction
for OAEP.
This is done by proving that there exists an oracle
relative to which the general OAEP scheme is insecure.
The paper also presents a new scheme OAEP+, along with
a complete proof of security in the random oracle model.
OAEP+ is essentially just as efficient as OAEP,
and even has a tighter security reduction.
It should be stressed that these results do not
imply that a particular instantiation of OAEP, such as RSA-OAEP, is insecure.
They simply undermine the original justification for its security.
In fact, it turns out -- essentially by accident,
rather than by design -- that RSA-OAEP is secure in the random oracle
model; however, this fact relies on special algebraic properties
of the RSA function, and not on the security
of the general OAEP scheme.
Essential Shannon Security with Keys Smaller Than the Encrypted Message
To a cryptographer the claim that "Shannon Security was achieved with keys smaller than the encrypted message" appears unworthy of attention, much as the claim of a "perpetuum mobile" is to a physicist. Albeit, from an engineering point of view, solar cells which power satellites exhibit an "essential perpetuum mobile" and are of great interest. Similarly for Shannon Security, as it is explored in this article. We discuss encryption schemes designed to confound a diligent cryptanalyst who works his way from a captured ciphertext to a disappointing endpoint where more than one otherwise plausible plaintext is found to be associated with keys that encrypt it to that ciphertext. Unlike some previous researchers, who explored this equivocation as a special case of existing schemes, this approach is aimed at devising a symmetric encryption scheme for that purpose per se.
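The "disappointing endpoint" is the familiar equivocation of the one-time pad, which the following small demonstration makes concrete (the ciphertext and candidate plaintexts are made up):

```python
# For a one-time pad, every plausible plaintext of the right length is
# consistent with *some* key -- the cryptanalyst's disappointing endpoint.
ct = bytes([0x1b, 0x0a, 0x1e, 0x00, 0x55])  # a made-up intercepted ciphertext

def key_for(plaintext: bytes) -> bytes:
    """The unique OTP key that decrypts ct to this plaintext."""
    return bytes(c ^ p for c, p in zip(ct, plaintext))

for guess in (b"hello", b"delay"):
    k = key_for(guess)
    # re-encrypting the guess under k reproduces the ciphertext exactly
    assert bytes(p ^ b for p, b in zip(guess, k)) == ct
```

The article's schemes aim to engineer this kind of ambiguity deliberately, with keys shorter than the message.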
Graph-Based Authentication of Digital Streams
We consider the authentication of digital streams over a lossy
network. The overall approach taken is graph-based, as this yields
simple methods for controlling overhead, delay, and the ability to
authenticate, while serving to unify many previously known hash- and
MAC-based techniques. The loss pattern of the network is defined
probabilistically, allowing both bursty and random packet loss to be
modeled. Our authentication schemes are customizable by the
sender of the stream; that is, within reasonable constraints on
the input parameters, we provide schemes that achieve the desired
authentication probability while meeting the input upper bound on the
overhead per packet. In addition, we demonstrate that some of the
shortcomings of previously known schemes correspond to easily
identifiable properties of a graph, and hence, may be more easily
avoided by taking a graph-based approach to designing authentication
schemes.
Session-Key Generation using Human Passwords Only
We present session-key generation protocols in a model where the
legitimate parties share {\em only} a human-memorizable
password, and there is no additional setup assumption in the
network. Our protocol is proven secure under the assumption that
trapdoor permutations exist. The security guarantee holds with
respect to probabilistic polynomial-time adversaries that control
the communication channel (between the parties), and may omit,
insert and modify messages at their choice. Loosely speaking, the
effect of such an adversary that attacks an execution of our
protocol is comparable to an attack in which an adversary is only
allowed to make a constant number of queries of the form ``is $w$
the password of Party $A$''. We stress that the result holds also
in case the passwords are selected at random from a small
dictionary so that it is feasible (for the adversary) to scan the
entire dictionary. We note that prior to our result, it was not
known whether or not such protocols were attainable without the
use of random oracles or additional setup assumptions.
A Complete Problem for Statistical Zero Knowledge
We present the first complete problem for SZK, the class of
(promise) problems
possessing statistical zero-knowledge proofs (against an
honest verifier).
The problem, called STATISTICAL DIFFERENCE,
is to decide whether two efficiently samplable distributions
are either statistically close or far apart. This
gives a new characterization of
SZK that makes no reference to interaction or zero knowledge.
We propose the use of complete problems to unify and
extend the study of statistical zero knowledge. To this end,
we examine several
consequences of our Completeness Theorem and its proof, such as:
(1) A way to make every (honest-verifier) statistical
zero-knowledge proof very communication efficient,
with the prover sending only one bit
to the verifier (to achieve soundness error 1/2).
(2) Simpler proofs of many of the previously known
results about statistical zero knowledge, such as
the Fortnow and Aiello--Håstad upper bounds on the complexity of SZK
and
Okamoto's result that SZK is closed under complement.
(3) Strong closure properties of SZK which amount to
constructing statistical zero-knowledge proofs for complex assertions
built out of simpler assertions already shown to be in SZK.
(4) New results about the various measures of "knowledge complexity,"
including a collapse in the hierarchy corresponding
to knowledge complexity in the "hint" sense.
(5) Algorithms for manipulating the statistical difference
between efficiently samplable distributions, including transformations
which "polarize" and "reverse" the statistical relationship
between a pair of distributions.
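The quantity at the heart of STATISTICAL DIFFERENCE can be estimated empirically; the following sketch (with illustrative distributions and sample sizes) approximates the statistical (total-variation) distance between two samplable distributions:

```python
from collections import Counter
import random

# Empirical estimate of the statistical (total-variation) distance between
# two samplable distributions -- the quantity STATISTICAL DIFFERENCE asks about.
def tv_estimate(sample_a, sample_b, trials=20000, seed=0):
    rng = random.Random(seed)
    ca = Counter(sample_a(rng) for _ in range(trials))
    cb = Counter(sample_b(rng) for _ in range(trials))
    support = set(ca) | set(cb)
    return sum(abs(ca[v] - cb[v]) for v in support) / (2 * trials)

fair   = lambda rng: rng.randrange(2)
biased = lambda rng: 1 if rng.random() < 0.9 else 0

close = tv_estimate(fair, fair)     # two copies of the same distribution
far   = tv_estimate(fair, biased)   # statistically far apart (true TV = 0.4)
print(close, far)
```

The promise problem asks only to distinguish the "close" from the "far" regime, which is what makes a complete problem of this form usable.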
Multiparty Computation from Threshold Homomorphic Encryption
We introduce a new approach to multiparty computation (MPC)
basing it on homomorphic
threshold crypto-systems. We show that given keys for any
sufficiently efficient
system of this type, general MPC protocols for $n$ players can be
devised which are
secure against an active adversary that corrupts any minority of the
players.
The total number of bits sent is $O(nk|C|)$, where $k$ is the
security parameter and $|C|$ is
the size of a (Boolean) circuit computing the function to be
securely evaluated.
An earlier proposal by Franklin and Haber with the same complexity
was only secure
for passive adversaries, while all earlier protocols with active
security had complexity at
least quadratic in $n$. We give two examples of threshold
cryptosystems that can support our
construction and lead to the claimed complexities.
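The kind of primitive the construction builds on can be illustrated with a toy (non-threshold) Paillier cryptosystem, whose additive homomorphism lets ciphertexts be combined without decryption; the tiny primes below are purely illustrative, and the real protocols use a threshold variant of such a scheme:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic. The primes are far too
# small for security, and there is no threshold decryption here.
p, q = 1117, 1123
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)

def enc(m, rng=random.Random(7)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 20000, 31337
# multiplying ciphertexts adds the underlying plaintexts mod n
assert dec((enc(a) * enc(b)) % n2) == (a + b) % n
```

Secure addition of shared values comes almost for free from this property; the MPC protocols' work goes into multiplication and into distributing the decryption key.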
Correlation Immune Boolean Functions with Very High Nonlinearity
Here we provide a construction method for unbalanced, first-order
correlation immune Boolean functions on an even number of variables
$n \geq 6$. These functions achieve the currently best known
nonlinearity $2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2} - 2}$.
Then we provide a simple modification of these functions to get
unbalanced correlation immune Boolean functions on an even number of
variables $n$, with nonlinearity
$2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2} - 2} - 2$ and the maximum
possible algebraic degree $n-1$. Moreover, we present a detailed
study of the Walsh spectra of these functions.
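The two quantities this line of work revolves around can be computed directly; the following sketch evaluates the Walsh spectrum and the nonlinearity of a Boolean function, checked on a small worked example (a 4-variable bent function, not one of the paper's constructions):

```python
# Walsh spectrum and nonlinearity of a Boolean function on n variables.
def walsh_spectrum(f, n):
    """W_f(a) = sum_x (-1)^(f(x) XOR <a,x>)."""
    return [sum((-1) ** (f(x) ^ bin(a & x).count("1") % 2)
                for x in range(2 ** n))
            for a in range(2 ** n)]

def nonlinearity(f, n):
    """nl(f) = 2^(n-1) - max_a |W_f(a)| / 2."""
    return 2 ** (n - 1) - max(abs(w) for w in walsh_spectrum(f, n)) // 2

# the bent function x1*x2 XOR x3*x4 on n = 4 variables
bent = lambda x: ((x & 1) & (x >> 1 & 1)) ^ ((x >> 2 & 1) & (x >> 3 & 1))
print(nonlinearity(bent, 4))  # bent functions attain 2^(n-1) - 2^(n/2-1) = 6
```

The abstracts' bounds such as $2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2}-2}$ are statements about the maximum absolute Walsh coefficient achievable under correlation-immunity constraints.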
A Construction of Resilient Functions with High Nonlinearity
The relationship between nonlinearity and
resiliency for a function $F:\mathbb{F}_2^n \mapsto
\mathbb{F}_2^m$ is considered. We give a construction of resilient
functions with high nonlinearity. The construction leads to the
problem of finding a set of linear codes with a fixed minimum
distance, having the property that the intersection
between any two codes is the all zero codeword only. This problem is
considered, and existence results are provided. The constructed
functions attain a nonlinearity superior to that obtained by previous
construction methods.
CRYPTANALYSIS OF THE A5/2 ALGORITHM
An attack on the A5/2 stream cipher algorithm is described that
determines the linear relations among the output sequence bits.
The vast majority of the unknown output bits can be reconstructed.
The time complexity of the attack is proportional to $2^{17}$.
Reducing the Gate Count of Bitslice DES
This paper describes various techniques to reduce the number of logic gates needed to implement the DES S-boxes in bitslice software. Using standard logic gates, an average of 56 gates per S-box was achieved, while an average of 51 was produced when non-standard gates were utilized. This is an improvement over the previous best result, which used an average of 61 non-standard gates.
Spectral Analysis of High Order Correlation Immune Functions
We use the recent results on the spectral structure of
correlation immune and resilient Boolean functions for the
investigations of high order correlation immune functions.
At first, we give simple proofs of some theorems where only
long proofs were known. Next, we introduce the matrix of
nonzero Walsh coefficients and establish important properties
of this matrix. We use these properties to prove the nonexistence
of some high order correlation immune functions. Finally, we
establish the order of magnitude for the number of (n-4)th
order correlation immune functions of n variables.
Spectral Domain Analysis of Correlation Immune and Resilient Boolean Functions
In this paper we prove a general result on the Walsh Transform
of an arbitrary Boolean function. As a consequence, we obtain several
divisibility results on the Walsh Transform of correlation immune and
resilient Boolean functions. This allows us to improve upper bounds
on the nonlinearity of correlation immune and resilient Boolean
functions. Also we provide new necessary conditions on the algebraic
normal form of correlation immune/resilient functions attaining the
maximum possible nonlinearity.
New Constructions of Resilient and Correlation Immune Boolean Functions achieving Upper Bounds on Nonlinearity
Recently weight divisibility results on resilient and correlation
immune Boolean functions have received a lot of attention. These
results have direct consequences towards the upper bound on nonlinearity
of resilient and correlation immune Boolean functions of certain order.
Now the clear benchmark in the design of resilient Boolean functions
(which optimizes Siegenthaler's inequality) is to provide results
which attain the upper bound on nonlinearity. Here we construct a
7-variable, 2-resilient Boolean function with nonlinearity 56. This
solves the maximum nonlinearity issue for 7-variable functions with
any order of resiliency. Using this 7-variable function, we also
construct a 10-variable, 4-resilient Boolean function with nonlinearity
480. Constructing these two functions was posed as an important
open question in Crypto 2000. Also we provide methods to generate an
infinite sequence of Boolean functions on $n = 7 + 3i$ variables
$(i \geq 0)$ with order of resiliency $m = 2 + 2i$, algebraic degree
$4 + i$ and nonlinearity $2^{n-1} - 2^{m+1}$, which were not known
earlier. We conclude with a few interesting construction results
on unbalanced correlation immune functions of 5 and 6 variables.
Highly Nonlinear Balanced Boolean Functions with very good Autocorrelation Property
Constructing highly nonlinear balanced Boolean functions with very good
autocorrelation property is an interesting open question. In this direction
we use the measure $\Delta_f$ for a function $f$ proposed by Zhang and
Zheng (1995). We provide balanced functions $f$ with currently best known
nonlinearity and $\Delta_f$ values together. Our results for 15-variable
functions disprove the conjecture proposed by Zhang and Zheng (1995),
where our constructions are based on modifications of
Patterson-Wiedemann (1983) functions. Also we propose a simple
bent based construction technique to get functions with very good
$\Delta_f$ values for an odd number of variables. This construction has
a root in Kerdock Codes. Moreover, our construction for an even number
of variables is a recursive one, and we conjecture (analogously to Dobbertin's
conjecture (1994) with respect to nonlinearity) that it provides
the minimum possible value of $\Delta_f$ for a function $f$ on an even number
of variables.
The Saturation Attack - a Bait for Twofish
We introduce the notion of a saturation attack and present attacks on
reduced-round versions of the Twofish block cipher. Our attack for all
generic key sizes of Twofish (i.e., for 128-bit, 192-bit and 256-bit
keys) improves on exhaustive key search for seven rounds of Twofish
with full whitening, and for eight rounds of Twofish without whitening
at the end. The core of the attack is a key-independent
distinguisher for six rounds of Twofish. The distinguisher is used to
attack up to 7 rounds of Twofish with full whitening and 8 rounds
of Twofish with prewhitening only - half of the cipher. The attacks
take up to $2^{127}$ chosen plaintexts (half of the codebook!) and are 2-4
times faster than exhaustive search.
Efficient Zero-Knowledge Proofs of Knowledge Without Intractability Assumptions
We initiate the investigation of the class of relations
that admit extremely efficient perfect zero knowledge
proofs of knowledge: constant number of rounds, communication
linear in the length of the statement and the witness, and
negligible knowledge error. In its most general incarnation,
our result says that for relations that have a particular
three-move honest-verifier zero-knowledge (HVZK) proof of knowledge,
and which admit a particular three-move HVZK proof of knowledge for
an associated commitment relation, perfect zero knowledge
(against a general verifier) can be achieved essentially for free,
even when proving statements on several instances combined
under monotone function composition. In addition,
perfect zero knowledge is achieved with an optimal four moves.
Instantiations of our main protocol lead to efficient perfect
ZK proofs of knowledge of discrete logarithms and RSA-roots,
or more generally, $q$-one-way group homomorphisms.
None of our results rely on intractability assumptions.
Provably Secure Password-Authenticated Key Exchange Using Diffie-Hellman
When designing password-authenticated key exchange protocols (as
opposed to key exchange protocols authenticated using
cryptographically secure keys), one must not allow any information
to be leaked that would allow verification of the password (a weak
shared key), since an attacker who obtains this information may be
able to run an off-line dictionary attack to determine the
correct password. Of course, it may be extremely difficult to hide
all password information, especially if the attacker may pose as one
of the parties in the key exchange. Nevertheless, we present a new
protocol called PAK which is the first Diffie-Hellman-based
password-authenticated key exchange protocol to provide a formal
proof of security (in the random oracle model) against both passive
and active adversaries. In
addition to the PAK protocol that provides mutual explicit
authentication, we also show a more efficient protocol called PPK that
is provably secure in the implicit-authentication model. We then
extend PAK to a protocol called PAK-X, in which one side (the
client) stores a plaintext version of the password, while the other
side (the server) only stores a verifier for the password. We
formally prove security of PAK-X, even when the server is
compromised. Our formal model for password-authenticated key
exchange is new, and may be of independent interest.
Constructions and Bounds for Unconditionally Secure Commitment Schemes
Commitment schemes have been extensively studied since they
were introduced by Blum in 1982. Rivest recently
showed how to construct unconditionally secure commitment schemes,
assuming the existence of a trusted initializer. In this paper, we present a
formal mathematical model for such schemes, and analyze their
binding and concealing properties. In particular, we
show that such schemes cannot be perfectly binding: there is necessarily
a small probability that Alice can cheat Bob by committing to one value
but later revealing a different value. We prove several
bounds on Alice's cheating probability, and present constructions
of schemes that achieve optimal cheating probabilities. We also
show a close link between commitment schemes and the classical
``affine resolvable designs''.
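A sketch in the spirit of the trusted-initializer commitments studied here may make the model concrete; the particular scheme below (a random line over $GF(p)$, with Bob holding one secret point on it) is illustrative and not taken verbatim from the paper:

```python
import random

# Sketch of a trusted-initializer commitment over GF(p). Illustrative only.
p = 10007
rng = random.Random(3)

# Setup: the trusted initializer gives Alice a random line y = a*x + b over
# GF(p), and gives Bob one random point (x0, y0) on that line.
a, b = rng.randrange(p), rng.randrange(p)
x0 = rng.randrange(p)
y0 = (a * x0 + b) % p

def commit(m):                 # Alice: a uniform shift perfectly hides m
    return (m + a) % p

def reveal_ok(c, m, a_, b_):   # Bob: accept iff the opening is consistent
    return c == (m + a_) % p and y0 == (a_ * x0 + b_) % p

c = commit(42)
assert reveal_ok(c, 42, a, b)          # honest opening accepted
assert not reveal_ok(c, 41, a, b)      # naive cheat rejected
# opening to m' != m requires a line a', b' through Bob's unknown point:
# Alice succeeds only by guessing x0, i.e. with probability about 1/p
```

The nonzero cheating probability visible here is exactly the phenomenon the paper's bounds quantify and optimize.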
Constructing Pseudo-Random Permutations with a Prescribed Structure
We show how to construct pseudo-random permutations that satisfy a
certain cycle restriction, for example that the permutation be
cyclic (consisting of one cycle containing all the elements) or an
involution (a self-inverse permutation) with no fixed points. The
construction can be based on any (unrestricted) pseudo-random
permutation. The resulting permutations
are defined succinctly and their
evaluation at a given point is efficient. Furthermore, they enjoy
a {\em fast forward} property, i.e. it is possible to iterate
them at a very small cost.
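One natural way to realize the cyclic case, sketched below with a toy stand-in for the underlying PRP: conjugate the full cycle $x \mapsto x+1 \bmod N$ by an unrestricted pseudo-random permutation $\sigma$. The construction shown is only an illustration of the idea, not necessarily the paper's exact scheme:

```python
import random

# A cyclic permutation pi = sigma o (+1 mod N) o sigma^{-1}, built from an
# unrestricted permutation sigma (here a toy shuffled table, not a real PRP).
N = 10
rng = random.Random(5)
perm = list(range(N)); rng.shuffle(perm)      # stand-in for sigma
inv = [0] * N
for i, v in enumerate(perm):
    inv[v] = i                                # sigma^{-1}

def pi(x, k=1):
    """pi^k(x) = sigma(sigma^{-1}(x) + k mod N): iteration at unit cost."""
    return perm[(inv[x] + k) % N]

# pi is a single cycle through all N elements
x, seen = 0, []
for _ in range(N):
    seen.append(x); x = pi(x)
assert x == 0 and sorted(seen) == list(range(N))
assert pi(3, 4) == pi(pi(pi(pi(3))))          # fast forward agrees
```

The closed form for $\pi^k$ is the "fast forward" property mentioned in the abstract: iterating $k$ steps costs the same as a single evaluation.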
On Symmetrically Private Information Retrieval
In this paper we give a single-round, single-server symmetrically private information retrieval (SPIR) scheme, in which the privacy of the user follows from the intractability of the quadratic residuosity problem and the privacy of the database follows from the number-theoretic XOR assumption introduced in this paper. The proposed scheme achieves communication complexity $O(n^{\epsilon})$, for any $\epsilon > 0$, where $n$ is the number of bits in the database. We also present an efficient block-retrieval SPIR scheme. Interestingly, we show that an $O(K \log{n})$ SPIR scheme is possible if there exists a probabilistic bit encryption scheme on which certain operators with the desired properties can be defined. Finally we go on to generalize the SPIR scheme to private retrieval of secrets and sharing by a group of users. It can also be viewed as an extended secret sharing scheme. We also discover and prove certain properties related to quadratic residuosity in particular and probabilistic bit encryption in general.
Decimation Attack of Stream Ciphers
This paper presents a new attack called {\em Decimation Attack}
of most stream ciphers. It exploits the property that multiple clocking
(or equivalently $d$-th decimation) of an LFSR can simulate the behavior
of many other LFSRs of possibly shorter length. It then yields significant
improvements over all previously known correlation and fast correlation attacks,
provided a new criterion is satisfied. This criterion on the length of the feedback
polynomial is then defined to resist the decimation attack. Simulation results and
complexity comparisons are detailed for the ciphertext-only attack.
Encryption Modes with Almost Free Message Integrity
We define a new mode of operation for block ciphers which in addition to providing confidentiality also ensures message integrity. In contrast, previously for message integrity a separate pass was required to compute a cryptographic message authentication code (MAC). The new mode of operation, called Integrity Aware Parallelizable Mode (IAPM),
requires a total of m+1 block cipher evaluations on a plaintext of length m blocks. For comparison, the well-known CBC (cipher block chaining) encryption mode requires m block cipher evaluations, and the second pass of computing the CBC-MAC essentially requires additional m+1 block cipher evaluations. As the name suggests, the new mode is also highly parallelizable.
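The evaluation-count comparison from the abstract, written out for an m-block message:

```python
# Block-cipher call counts for an m-block message, as stated in the abstract.
def iapm_calls(m):
    """IAPM: confidentiality + integrity in a single pass of m+1 calls."""
    return m + 1

def cbc_plus_mac_calls(m):
    """CBC encryption (m calls) plus a separate CBC-MAC pass (~m+1 calls)."""
    return m + (m + 1)

for m in (1, 16, 1024):
    print(m, iapm_calls(m), cbc_plus_mac_calls(m))
```

For long messages the single-pass mode thus roughly halves the block-cipher work relative to encrypt-then-MAC with CBC.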
On the Complexity of Verifiable Secret Sharing and Multi-Party Computation
We first study the problem of doing Verifiable Secret Sharing (VSS)
information theoretically secure for a general access structure. We
do it in the model where private channels between players and a
broadcast channel is given, and where an active, adaptive adversary
can corrupt any set of players not in the access structure. In
particular, we consider the complexity of protocols for this
problem, as a function of the access structure and the number of
players. For all access structures where VSS is possible at all, we
show that, up to a polynomial time black-box reduction, the
complexity of adaptively secure VSS is the same as that of ordinary
secret sharing (SS), where security is only required against a
passive, static adversary. Previously, such a connection was only
known for linear secret sharing and VSS schemes.
We then show an impossibility result indicating that a similar
equivalence does not hold for Multiparty Computation (MPC): we show
that even if protocols are given black-box access for free to an
idealized secret sharing scheme secure for the access structure in
question, it is not possible to handle all relevant access
structures efficiently, not even if the adversary is passive and
static. In other words, general MPC can only be black-box reduced
efficiently to secret sharing if extra properties of the secret
sharing scheme used (such as linearity) are assumed.
General Secure Multi-Party Computation from any Linear Secret Sharing Scheme
We show that verifiable secret sharing (VSS) and secure multi-party
computation (MPC) among a set of $n$ players can efficiently be based
on {\em any} linear secret sharing scheme (LSSS) for the players,
provided that the access structure of the LSSS allows MPC or VSS at
all. Because an LSSS neither guarantees reconstructability when some
shares are false, nor verifiability of a shared value, nor allows for
the multiplication of shared values, an LSSS is an apparently much weaker
primitive than VSS or MPC.
Our approach to secure MPC is generic and applies to both the
information-theoretic and the cryptographic setting. The
construction is based on 1) a formalization of the special
multiplicative property of an LSSS that is needed to perform a
multiplication on shared values, 2) an efficient generic construction
to obtain from any LSSS a multiplicative LSSS for the same access
structure, and 3) an efficient generic construction to build
verifiability into every LSSS (always assuming that the adversary
structure allows for MPC or VSS at all).
The protocols are efficient. In contrast to all previous
information-theoretically secure protocols, the field size is not
restricted (e.g., to be greater than $n$). Moreover, we exhibit
adversary structures for which our protocols are polynomial in $n$
while all previous approaches to MPC for non-threshold adversaries
provably have super-polynomial complexity.
Using fewer Qubits in Shor's Factorization Algorithm via Simultaneous Diophantine Approximation
While quantum computers might in principle speed up certain
computations dramatically, in practice quantum computing technology
is still in its infancy, and we cannot clearly envision at present
what the hardware of such a machine will be like.
Nevertheless, we can be quite confident that it will be
much easier to build a practical quantum computer
operating on a small number of quantum bits than one operating
on a huge number of quantum bits.
It is therefore of great practical importance to use the resource
of quantum bits sparingly,
i.e., to find quantum algorithms which use as few
quantum bits as possible.
Here, we present a method to reduce the number of actually needed qubits
in Shor's algorithm to factor a composite number $N$.
Exploiting the inherent probabilism of quantum computation we are able to
substitute the continued fraction algorithm to find a certain unknown
fraction by a simultaneous Diophantine approximation.
While the continued fraction algorithm is able to find a Diophantine
approximation to a single known fraction with a denominator greater than
$N^2$, our simultaneous Diophantine approximation method computes in
polynomial time unusually good approximations to known fractions with a
denominator of size $N^{1+\varepsilon}$, where $\varepsilon$ is allowed to be
an arbitrarily small positive constant.
As these unusually good approximations are almost unique we are able to
recover an unknown denominator using fewer qubits in the quantum part of our
algorithm.
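The classical post-processing step the paper improves on is the recovery of an unknown fraction $s/r$ (with $r < N$) from a sufficiently good measured estimate, using continued fractions with a denominator bound around $N^2$; the paper's contribution is to get by with simultaneous Diophantine approximation and a bound of only $N^{1+\varepsilon}$. The numbers below are a made-up worked example of the classical step:

```python
from fractions import Fraction

# Continued-fraction recovery of a hidden fraction s/r (r < N) from a noisy
# estimate, with the classical denominator bound N^2. Illustrative numbers.
N = 21
measured = 0.37500321                      # noisy estimate of the hidden 3/8
recovered = Fraction(measured).limit_denominator(N * N)
print(recovered)                           # prints 3/8
```

`limit_denominator` performs exactly the continued-fraction best-approximation search; shrinking the allowed denominator bound is what lets the quantum part of the algorithm use fewer qubits.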
Electronic Jury Voting Protocols
This work highlights the fact that all current
proposals for electronic voting schemes disclose the final
tally of the votes.
In certain situations, like jury voting, this may be undesirable.
We present a robust and universally verifiable
Membership Testing Scheme (MTS) that allows, among other things,
a collection of voters to cast votes and determine whether
their tally belongs to some pre--specified set (e.g., exceeds a
given threshold)
--- our scheme discloses no additional information than that implied
from the knowledge of such membership.
We discuss several extensions of our basic MTS.
All the constructions presented
combine features of two parallel lines of research
concerning electronic voting schemes: those based on MIX-networks
and those based on homomorphic encryption.
Random Oracles in Constantinople: Practical Asynchronous Byzantine Agreement using Cryptography
Byzantine agreement requires a set of parties in a distributed system to
agree on a value even if some parties are corrupted. A new protocol for
Byzantine agreement in a completely asynchronous network is presented that
makes use of cryptography, specifically of threshold signatures and
coin-tossing protocols. These cryptographic protocols have practical and
provably secure implementations in the ``random oracle'' model. In
particular, a coin-tossing protocol based on the Diffie-Hellman problem is
presented and analyzed.
The resulting asynchronous Byzantine agreement protocol is both practical
and theoretically nearly optimal because it tolerates the maximum number of
corrupted parties, runs in constant expected time, has message
and communication complexity close to the optimum, and uses a trusted dealer
only in a setup phase, after which it can process a virtually unlimited
number of transactions.
The protocol is formulated as a transaction processing service in a
cryptographic security model, which differs from the standard
information-theoretic formalization and may be of independent interest.
The Complete Distribution of Linear Probabilities of MARS' s-box
This paper shows the complete linear probability distribution of MARS' s-box. The best bias is $\dfrac{84}{2^9}$ ($=2^{-2.61}$), while the designers' estimation is $\dfrac{64}{2^9}$ and the best previously known bias is $\dfrac{82}{2^9}$.
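The exhaustive computation behind such a table can be sketched on a toy s-box (a 3-bit permutation chosen arbitrarily here, not the real 9-bit MARS s-box):

```python
# Full linear-bias table of a small s-box: for each mask pair (a, b), count
# inputs x with <a,x> = <b,S(x)> and measure the deviation from 2^(n-1).
sbox = [3, 6, 1, 7, 2, 0, 5, 4]      # an arbitrary 3-bit permutation
n = 3

def parity(v):
    return bin(v).count("1") % 2

def bias(a, b):
    """|#{x : <a,x> = <b,S(x)>} - 2^(n-1)| for the mask pair (a, b)."""
    matches = sum(parity(a & x) == parity(b & sbox[x]) for x in range(2 ** n))
    return abs(matches - 2 ** (n - 1))

best = max(bias(a, b) for a in range(2 ** n) for b in range(1, 2 ** n))
print(best)
```

The paper's figure $\frac{84}{2^9}$ is this maximum, normalized by the input-space size, for the MARS s-box.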
Anonymous Fingerprinting with Direct Non-Repudiation
Fingerprinting schemes support copyright protection by enabling the merchant of a data item to identify the original buyer of a
redistributed copy. In asymmetric schemes, the merchant can also convince an arbiter of this fact. Anonymous fingerprinting schemes enable buyers to purchase digital items anonymously; however, identification is possible if they redistribute the data item.
Recently, a concrete and reasonably efficient construction based on digital coins was proposed. A disadvantage is that the accused
buyer has to participate in any trial protocol to deny charges. Trials with direct non-repudiation, i.e., the merchant alone
holds enough evidence to convince an arbiter, are more useful in real life. This is similar to the difference between ``normal'' and ``undeniable'' signatures.
In this paper, we present an equally efficient anonymous fingerprinting scheme with direct non-repudiation. The main technique
we use, delayed verifiable encryption, is related to coin tracing in escrowed cash systems. However, there are technical differences, mainly to provide an unforgeable link to license conditions.
Forward Security in Threshold Signature Schemes
We consider the usage of forward security with threshold
signature schemes. This means that even if more than the
threshold number of players are compromised, some security remains:
it is not possible to forge signatures relating to the past. In
this paper, we describe the first forward-secure threshold
signature schemes whose parameters (other than signing or verifying
time) do not vary in length with the number of time periods in the
scheme. Both are threshold versions of the Bellare-Miner
forward-secure signature scheme, which is Fiat-Shamir-based. One
scheme uses multiplicative secret sharing, and tolerates mobile
eavesdropping adversaries. The second scheme is based on polynomial
secret sharing, and we prove it forward-secure based on the security
of the Bellare-Miner scheme. We then sketch modifications which
would allow this scheme to tolerate malicious adversaries. Finally,
we give several general constructions which add forward security to
any existing threshold scheme.
Last updated: 2001-03-16
Secure Multiparty Computation of Approximations
Approximation algorithms can sometimes be used to obtain efficient
solutions where no efficient exact computation is known. In
particular, approximations are often useful in a distributed setting
where the inputs are held by different parties and are extremely
large. Furthermore, for some applications, the parties want to
cooperate to compute a function of their inputs without revealing more
information than they have to.
Suppose the function $\hat{f}$ is an approximation to the function $f$.
Secure multiparty computation of $f$ allows the parties to compute $f$
without revealing more than they have to, but requires some additional
overhead in computation and communication. Hence, if $f$ is
inefficient or just efficient enough to be practical, a secure
computation of $f$ may be impractically expensive. A secure
computation of $\hat{f}$ may be efficient enough, but a secure
computation of $\hat{f}$ is not necessarily as private as a secure
computation of $f$, because the output of $\hat{f}$ may reveal more
information than the output of $f$. In this paper, we present
definitions and protocols of secure multiparty approximate computation
that show how to realize most of the cost savings available by using
$\hat{f}$ instead of $f$ without losing the privacy of a secure
computation of $f$.
We make three contributions in this paper. First, we give formal
definitions of secure multiparty approximate computations. Second, we
introduce some general techniques for constructing secure multiparty
approximations. Finally, we present an efficient,
sublinear-communication, secure approximate computation for the
Hamming and $L^{1}$ distances.
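Setting the secure-computation layer aside, the approximation target itself can be sketched: a sublinear estimate of the Hamming distance obtained by comparing only a random sample of coordinates (the vector lengths, flip rate, and sample size below are illustrative):

```python
import random

# Sampling-based approximation of the Hamming distance between two long
# bit vectors; only `samples` coordinates are inspected.
def approx_hamming(x, y, samples=2000, seed=0):
    rng = random.Random(seed)
    n = len(x)
    hits = sum(x[i] != y[i] for i in (rng.randrange(n) for _ in range(samples)))
    return round(hits * n / samples)

rng = random.Random(1)
n = 100_000
x = [rng.randrange(2) for _ in range(n)]
y = [b ^ (rng.random() < 0.1) for b in x]     # flip roughly 10% of the bits
exact = sum(a != b for a, b in zip(x, y))
print(exact, approx_hamming(x, y))
```

The challenge the paper addresses is doing such an estimation *securely*, so that the parties learn no more than the (approximate) distance itself.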
Concrete Security Characterizations of PRFs and PRPs: Reductions and Applications
We investigate, in a concrete security setting, several alternate
characterizations of pseudorandom functions (PRFs) and pseudorandom
permutations (PRPs). By analyzing the concrete complexity of the
reductions between the standard notions and the alternate ones, we
show that the latter, while equivalent under polynomial-time
reductions, are weaker in the concrete security sense. With these
alternate notions, we argue that it is possible to get better
concrete security bounds for certain PRF/PRP-based schemes. As an
example, we show how using an alternate characterization of a PRF
could result in tighter security bounds for a certain class of
message authentication codes. We also apply these techniques to
give a simple concrete security analysis of the counter mode of
encryption. In addition, our results provide some insight into how
injectivity impacts pseudorandomness.
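The counter mode analyzed above is built from an abstract PRF; a minimal sketch follows, using HMAC-SHA256 purely as a stand-in PRF (the block size and counter encoding are illustrative choices):

```python
import hashlib
import hmac

# Counter (CTR) mode from an abstract PRF F_K. HMAC-SHA256 is a stand-in.
def prf(key: bytes, counter: int) -> bytes:
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

def ctr_encrypt(key: bytes, nonce: int, msg: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(msg), 32):
        block = msg[i:i + 32]
        pad = prf(key, nonce + i // 32)   # F_K(ctr), incremented per block
        out += bytes(m ^ p for m, p in zip(block, pad))
    return bytes(out)

key, nonce, msg = b"k" * 16, 7, b"counter mode from a PRF" * 3
ct = ctr_encrypt(key, nonce, msg)
assert ctr_encrypt(key, nonce, ct) == msg     # CTR decryption = encryption
```

Concrete security bounds for this mode quantify how an adversary's advantage against the scheme relates, with explicit constants, to its advantage against the underlying PRF.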