All papers in 2001 (113 results)
Efficient Revocation of Anonymous Group Membership
An accumulator scheme, introduced by Benaloh and de Mare
and further studied by Barić and Pfitzmann, is an algorithm that
allows one to hash a large set of inputs into one short value, called the
\textit{accumulator}, such that there is a short witness that a given
input was incorporated into the accumulator.
We put forward the notion of \textit{dynamic accumulators}, i.e., a method
that allows one to dynamically add inputs to and delete inputs from the accumulator,
such that the cost of an add or delete is independent of the number of
accumulated values. We achieve this under the strong RSA assumption. For
this construction, we also show an efficient zero-knowledge protocol for
proving that a committed value is in the accumulator.
In turn, our construction of dynamic accumulators enables efficient
membership revocation in the anonymous setting. This method applies
to membership revocation in group signature schemes, such as the one
due to Ateniese et al., and efficient revocation of
credentials in anonymous credential systems, such as the one due to
Camenisch and Lysyanskaya. Using our method,
allowing revocation does not alter the complexity of any operations of
the underlying schemes. In particular, the cost of a group signature
verification or credential showing increases by only a small constant
factor, less than 2. All previously known methods (such as the ones
due to Bresson and Stern, and to Ateniese and Tsudik) incurred an increase in these costs that was
linear in the number of members.
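For orientation, the following is a minimal sketch of an RSA-style accumulator of the kind this construction builds on, reconstructed from general knowledge of such schemes; the modulus, base, accumulated values, and helper names are illustrative assumptions, not the authors' exact protocol, and the zero-knowledge protocol is not shown.

```python
# Toy sketch of an RSA-style dynamic accumulator (illustrative parameters only;
# a real instantiation uses a large safe-prime modulus and prime accumulated values).
p, q = 499, 547                  # secret factors: the trapdoor of the accumulator manager
n = p * q
phi = (p - 1) * (q - 1)
g = 3                            # public base

def accumulate(values):
    """acc = g^(x1*x2*...*xk) mod n."""
    acc = g
    for x in values:
        acc = pow(acc, x, n)
    return acc

def witness(values, x):
    """Witness for x: the accumulator of all values except x."""
    return accumulate([v for v in values if v != x])

def verify(acc, x, wit):
    return pow(wit, x, n) == acc

def add(acc, x):
    """Adding a value is a single exponentiation, independent of the set size."""
    return pow(acc, x, n)

def delete(acc, x):
    """Deleting a value uses the trapdoor phi(n): acc' = acc^(x^{-1} mod phi)."""
    return pow(acc, pow(x, -1, phi), n)

values = [5, 11, 17, 19]         # accumulated values (chosen coprime to phi in this toy setup)
acc = accumulate(values)
assert verify(acc, 11, witness(values, 11))
assert add(accumulate([5, 11, 17]), 19) == acc
assert delete(acc, 19) == accumulate([5, 11, 17])
```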
A Proposal for an ISO Standard for Public Key Encryption
This document is an initial proposal for a draft for a forthcoming
ISO standard on public-key encryption.
It is hoped that this proposal will serve as a basis for discussion,
from which a consensus for a standard may be formed.
An Identity Based Authenticated Key Agreement Protocol Based on the Weil Pairing
We describe an ID-based authenticated two-pass key agreement
protocol which makes use of the Weil pairing.
The protocol is described and its properties are discussed
including the ability to add key confirmation.
RSA hybrid encryption schemes
This document compares the two published RSA-based hybrid encryption
schemes having linear reduction in their security proof:
RSA-KEM with DEM1 and RSA-REACT.
While the performance of RSA-REACT is worse than the performance of
RSA-KEM+DEM1, a complete proof of its security has already been
published.
This is indeed an advantage, because we show that the security result
for RSA-KEM+DEM1 has a small hole.
We provide here a complete proof of the security of
RSA-KEM+DEM1.
We also propose some changes to RSA-REACT to improve its efficiency
without changing its security, and conclude that this new RSA-REACT is
a generalisation of RSA-KEM+DEM1, with at most the same security,
and with possibly worse performance.
We therefore conclude that RSA-KEM+DEM1 should be preferred to RSA-REACT.
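As a point of reference for the schemes being compared, here is a hedged sketch of the generic RSA-KEM plus DEM pattern in Python, with textbook RSA, toy parameters, and an ad hoc hash-counter KDF and encrypt-then-MAC DEM; it illustrates the structure only and should not be read as the exact specification of RSA-KEM+DEM1 or RSA-REACT.

```python
# Sketch of the KEM/DEM pattern discussed above (toy parameters, illustration only).
import hashlib, hmac, secrets

p, q, e = 499, 547, 65537            # tiny textbook RSA key; never use such sizes in practice
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def kdf(seed: bytes, length: int) -> bytes:
    """Simple hash-counter key derivation (stand-in for a real KDF)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def kem_encap():
    """RSA-KEM: pick random r in Z_n, send c0 = r^e mod n, derive symmetric keys from r."""
    r = secrets.randbelow(n)
    return pow(r, e, n), kdf(r.to_bytes(8, "big"), 64)

def kem_decap(c0):
    r = pow(c0, d, n)
    return kdf(r.to_bytes(8, "big"), 64)

def dem_encrypt(keys, msg: bytes):
    """DEM: keystream from the KDF plus an HMAC tag (encrypt-then-MAC style)."""
    k_enc, k_mac = keys[:32], keys[32:]
    stream = kdf(k_enc, len(msg))
    ct = bytes(a ^ b for a, b in zip(msg, stream))
    return ct, hmac.new(k_mac, ct, hashlib.sha256).digest()

def dem_decrypt(keys, ct, tag):
    k_enc, k_mac = keys[:32], keys[32:]
    if not hmac.compare_digest(tag, hmac.new(k_mac, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")
    stream = kdf(k_enc, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))

c0, keys = kem_encap()
ct, tag = dem_encrypt(keys, b"hybrid encryption demo")
assert dem_decrypt(kem_decap(c0), ct, tag) == b"hybrid encryption demo"
```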
New Notions of Soundness and Simultaneous Resettability in the Public-Key Model
In this paper, some new notions of soundness in the public-key model are presented. We clarify the relationships among our new notions of soundness and the original four soundness notions presented by Micali and Reyzin. Our new soundness notions also characterize a new model for ZK protocols in the public-key model: the weak soundness model. By ``weak'' we mean that each common input x selected by a malicious prover on the fly is used by the malicious prover at most an a-priori bounded polynomial number of times. The weak soundness model lies in between the BPK model and the UPK model. Namely, it is weaker than the BPK model but stronger than the UPK model. In the weak soundness model (and hence also in the UPK model, since the weak soundness model implies the UPK model), we get a 3-round black-box rZK argument with weak resettable soundness for NP.
Note that simultaneous resettability is an important open problem in the field of ZK protocols, and Reyzin has proven that there are no ZK protocols with resettable soundness in the BPK model. This means that to achieve simultaneous resettability one needs to augment the BPK model in a reasonable fashion. Although Barak et al. [BGGL01] have proven that any language which has a black-box ZK argument with resettable soundness is in BPP, it is the weak soundness that allows us to obtain simultaneous resettability.
More interestingly, our protocols work in a somewhat ``parallel repetition'' manner to reduce the error probability, and the verifier indeed has secret information with respect to historical transcripts. Note that in general the error probability of such protocols cannot be reduced by parallel repetition [BIN97].
Finally, we give a 3-round non-black-box rZK argument system with resettable soundness for NP in the preprocessing model, in which a trusted third party is assumed. Our construction for such a protocol is quite simple. Note that although the preprocessing model is quite imposing, it is still quite reasonable, as indicated in [CGGM00]. For example, in many e-commerce settings a trusted third party is often assumed.
The critical tools used in this paper are verifiable pseudorandom functions, zaps, and complexity leveraging. To our knowledge, our protocols are also the second application of verifiable pseudorandom functions; the first is the 3-round rZK argument with one-time soundness for NP in the public-key model given by Micali and Reyzin [MR01a].
Design and Analysis of Practical Public-Key Encryption Schemes Secure against Adaptive Chosen Ciphertext Attack
A new public key encryption scheme, along with several variants,
is proposed and analyzed.
The scheme and its variants are quite practical, and are proved
secure
against adaptive chosen ciphertext attack under standard
intractability assumptions.
These appear to be the first public-key encryption schemes
in the literature that are simultaneously practical and provably secure.
Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation
In this paper we show that any {\em two-party} functionality can be securely computed in a {\em constant number of rounds}, where security is obtained against malicious adversaries that may arbitrarily deviate from the protocol specification. This is in contrast to Yao's constant-round protocol that ensures security only in the face of semi-honest adversaries, and to its malicious adversary version that requires a polynomial number of rounds.
In order to obtain our result, we present a constant-round protocol for secure coin-tossing of polynomially many coins (in parallel). We then show how this protocol can be used in conjunction with other existing constructions in order to obtain a constant-round protocol for securely computing any two-party functionality. On the subject of coin-tossing, we also present a constant-round {\em perfect} coin-tossing protocol, where by ``perfect'' we mean that the resulting coins are guaranteed to be statistically close to uniform (and not just pseudorandom).
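As background for the coin-tossing building block, the following toy commit-then-reveal toss of many coins in parallel uses a hash-based commitment, which is only a heuristic stand-in for the commitment schemes the actual protocol relies on; it is not the paper's constant-round construction.

```python
# Toy commit-then-reveal coin toss of n coins in parallel (hash commitment is only a
# heuristic stand-in for the commitment schemes used in the actual protocol).
import hashlib, secrets

def commit(bits: int, n: int):
    nonce = secrets.token_bytes(32)
    c = hashlib.sha256(nonce + bits.to_bytes((n + 7) // 8, "big")).digest()
    return c, nonce

n = 128                                   # number of coins tossed in parallel
a = secrets.randbits(n)                   # party A's random string
c, nonce = commit(a, n)                   # A sends only the commitment c

b = secrets.randbits(n)                   # B replies with its own random string

# A opens the commitment; B checks it and both output the XOR of the two strings.
assert hashlib.sha256(nonce + a.to_bytes((n + 7) // 8, "big")).digest() == c
coins = a ^ b
print(format(coins, f"0{n}b"))
```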
Cryptanalysis of the COS (2,128) Stream Ciphers
A new family of very fast stream ciphers called COS (for “crossing over system”) has been proposed by Filiol and Fontaine, and seems to have been adopted for at least one commercial standard. COS(2,128) Mode I and COS(2,128) Mode II are particular members of this family for which the authors proposed a cryptanalysis challenge. The ciphers accept secret keys of 256, 192 or 128 bits. In this note we cryptanalyse both of these ciphers, using a small amount of known keystream — with negligible effort in the case of Mode II, and with effort well below that required for a single DES key search in the case of Mode I.
Universal Arguments and their Applications
We put forward a new type of
computationally-sound proof systems, called universal-arguments,
which are related but different from both CS-proofs (as defined
by Micali) and arguments (as defined by Brassard, Chaum and
Crepeau). In particular, we adopt the instance-based
prover-efficiency paradigm of CS-proofs, but follow the
computational-soundness condition of argument systems (i.e., we
consider only cheating strategies that are implementable by
polynomial-size circuits).
We show that universal-arguments can be constructed based on
standard intractability assumptions that refer to polynomial-size
circuits (rather than assumptions referring to
subexponential-size circuits as used in the construction of
CS-proofs). As an application of universal-arguments, we weaken
the intractability assumptions used in the recent non-black-box
zero-knowledge arguments of Barak. Specifically, we only utilize
intractability assumptions that refer to polynomial-size circuits
(rather than assumptions referring to circuits of some ``nice''
super-polynomial size).
Concurrent Zero-Knowledge With Timing, Revisited
Following Dwork, Naor, and Sahai (30th STOC, 1998),
we consider concurrent execution of protocols in a
semi-synchronized network. Specifically, we assume that each party
holds a local clock such that a constant bound on the relative rates
of these clocks is a-priori known, and consider protocols that
employ time-driven operations
(i.e., time-out in-coming messages and delay out-going messages).
We show that the constant-round zero-knowledge proof for NP
of Goldreich and Kahan (Jour. of Crypto., 1996)
preserves its security when polynomially-many independent copies
are executed concurrently under the above timing model.
We stress that
our main result establishes zero-knowledge of interactive proofs,
whereas the results of Dwork et al. are
either for zero-knowledge arguments
or for a weak notion of zero-knowledge (called $\epsilon$-knowledge) proofs.
Our analysis identifies two extreme schedulings
of concurrent executions under the above timing model:
the first is the case of parallel execution of polynomially-many copies,
and the second is of concurrent execution of polynomially-many
copies such that the number of copies that are simultaneously active
at any time is bounded by a constant (i.e., bounded simultaneity).
Dealing with each of these extreme cases is of independent interest,
and the general result (regarding concurrent executions under
the timing model) is obtained by combining the two treatments.
Countermeasures against Side-Channel Attacks for Elliptic Curve Cryptosystems
Some attacks on cryptographic systems exploit the leakage of information through so-called ``side channels'', such as power consumption or time employed by a computation.
For cryptosystems involving an exponentiation, a few possible countermeasures are suggested.
In the case of elliptic curves over a binary finite field, we show how to split point addition into two blocks which, through the addition of a little overhead, can be made indistinguishable from a point doubling. This allows the whole exponentiation process to be performed as a sequence of homogeneous steps.
To add some randomization to the exponentiation process in the ECC case, we suggest the use of points of small order, computed on the fly. This presents some disadvantages over known methods, but makes it possible to avoid storing points in non-volatile RAM.
A multiplicative variation of ``additive exponent blinding'' is suggested. This involves a two-phase exponentiation and is valid both for discrete log and RSA settings.
Computer experiments implementing some of these ideas are described and analyzed.
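One of the known countermeasures mentioned above is additive exponent blinding. As a baseline illustration (not the authors' multiplicative two-phase variant), the following sketch blinds an RSA private exponent with a fresh random multiple of $\varphi(n)$ on every call; parameters are toy values chosen for readability.

```python
# Baseline ``additive exponent blinding'': randomize the private exponent on every
# invocation so the exponentiation pattern differs, while the result stays the same.
import secrets

p, q, e = 499, 547, 65537          # toy RSA parameters, illustration only
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

def blinded_decrypt(c: int) -> int:
    r = secrets.randbelow(2**20)   # fresh blinding factor per invocation
    d_blind = d + r * phi          # d' = d + r*phi(n), so c^{d'} = c^d (mod n)
    return pow(c, d_blind, n)

m = 123456
c = pow(m, e, n)
assert blinded_decrypt(c) == m and blinded_decrypt(c) == m
```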
An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates
We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT is well-suited for generating large, random prime numbers since on a random input number, it takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability. EQFT extends QFT by verifying additional algebraic properties related to the existence of elements of order 3 and 4. We obtain a simple closed expression that upper bounds the probability of acceptance for any input number. This in turn allows us to give strong bounds on the average-case behaviour of the test: consider the algorithm that repeatedly chooses random odd $k$ bit numbers, subjects them to $t$ iterations of our test and outputs the first one found that passes all tests. We obtain numeric upper bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most $2^{-143}$ for $k=500, t=2$. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point. While EQFT is slower than the average case on a small set of inputs, we present a variant that is always fast, i.e., it takes time about 2 Miller-Rabin tests. The variant has slightly larger worst case error probability than EQFT, but still improves on previously proposed tests.
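EQFT is calibrated against the Miller-Rabin test throughout. For readers who want the baseline spelled out, here is a standard Miller-Rabin implementation together with the random-odd-$k$-bit-candidate loop described above; this is background material, not EQFT itself.

```python
# Standard Miller-Rabin test, the baseline EQFT is compared against (not EQFT itself).
import secrets

def miller_rabin(n: int, t: int = 2) -> bool:
    """Return True if n passes t rounds (probably prime), False if composite."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(t):
        a = 2 + secrets.randbelow(n - 3)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witness found: definitely composite
    return True                   # probably prime

# Generate a random odd k-bit probable prime, as in the average-case setting above.
k = 500
candidate = secrets.randbits(k) | (1 << (k - 1)) | 1
while not miller_rabin(candidate, t=2):
    candidate = secrets.randbits(k) | (1 << (k - 1)) | 1
print(candidate)
```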
Quasi-Efficient Revocation of Group Signatures
A group signature scheme allows any group member to sign
on behalf of the group in an anonymous and unlinkable fashion.
In the event of a dispute, a designated trusted entity can reveal
the identity of the signer. Group signatures are claimed to
have many useful applications such as voting and electronic cash.
A number of group signature schemes have been proposed to-date.
However, in order for the whole group signature concept to become
practical and credible, the problem of secure and efficient group
member revocation must be addressed. In this paper, we construct a
new revocation method for group signatures based on the signature
scheme by Ateniese et al. at Crypto 2000. This new method represents
an advance in the state-of-the-art since the only revocation schemes
proposed thus far either: 1) are based on implicit revocation and
the use of fixed time periods, or 2) require the signature size to
be linear in the number of revoked members. Our method, in contrast,
does not rely on time periods, offers constant-length signatures
and constant work for the signer.
A Note on Girault's Self-Certified Model
In this paper, we describe an important shortcoming of
the first self-certified model proposed by Girault,
that may be exploited by the authority to compute users' secret keys.
We also show that, while it is possible to make the attack ineffective by
taking additional precautions, the resulting model loses all the merits
of the original model and no longer meets the primary
contribution of the self-certified notion.
Linear Code Implies Public-Key Traitor Tracing
In this paper, we first show that three public-key $(k,n)$-traceability schemes can be derived from an $[n,u,d]$-linear code ${\cal C}$ such that $d \geq 2k+1$. The previous schemes are obtained as special cases. This observation gives more freedom and new insight into this field. For example, we show that the Boneh-Franklin scheme is equivalent to a slight modification of the corrected Kurosawa-Desmedt scheme. This means that the BF scheme is redundant or overdesigned, because the modified KD scheme is much simpler. It is also shown that the corrected KD scheme is the best among them.
Fast hashing onto elliptic curves over fields of characteristic 3
We describe a fast hash algorithm that maps arbitrary messages
onto points of an elliptic curve defined over a finite field of
characteristic 3. Our new scheme runs in time $O(m^2)$ for curves
over $\GF{3^m}$. The best previous algorithm for this task runs
in time $O(m^3)$. Experimental data confirms the speedup by a
factor $O(m)$, or approximately a hundred times for practical $m$
values. Our results apply for both standard and normal basis
representations of $\GF{3^m}$.
An Efficient MAC for Short Messages
HMAC is the internet standard for message authentication. What
distinguishes HMAC from other MAC algorithms is that it provides
proofs of security assuming that the underlying cryptographic hash
(e.g. SHA-1) has some reasonable properties. HMAC is efficient for
long messages; however, for short messages the nested construction
results in a significant inefficiency. For example, to MAC a message
shorter than a block, HMAC requires at least two calls to the
compression function rather than one.
This inefficiency may be particularly high for some applications, like
message authentication of signaling messages, where the individual
messages may all fit within one or two blocks. Also, for TCP/IP traffic
it is well known that a large number of packets (e.g., acknowledgments)
have sizes around 40 bytes, which fit within a block of most
cryptographic hashes. We propose an enhancement that allows both
short and long messages to be authenticated more efficiently
than HMAC while also providing proofs of security. In particular, for
a message smaller than a block our MAC only requires one call to the
compression function.
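To make the cost argument concrete, the following snippet computes HMAC-SHA1 over a 40-byte packet with Python's standard library and re-derives it from the nested definition $H((K \oplus opad) \| H((K \oplus ipad) \| M))$, which is where the extra compression-function calls come from; the proposed single-call MAC is not specified in the abstract and is therefore not shown.

```python
# HMAC baseline discussed above: even for a short message, HMAC over a Merkle-Damgard
# hash processes a block in the inner hash and another in the outer hash, on top of
# the key-pad blocks, whereas the proposal aims at a single compression call.
import hmac, hashlib

key = b"k" * 32
short_msg = b"ACK" + b"\x00" * 37          # ~40-byte packet, fits in one SHA-1 block

tag = hmac.new(key, short_msg, hashlib.sha1).digest()
print(tag.hex())

# Structurally, HMAC(K, M) = H((K xor opad) || H((K xor ipad) || M)):
ipad = bytes(b ^ 0x36 for b in key.ljust(64, b"\x00"))
opad = bytes(b ^ 0x5c for b in key.ljust(64, b"\x00"))
inner = hashlib.sha1(ipad + short_msg).digest()
assert hashlib.sha1(opad + inner).digest() == tag
```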
Constructing elliptic curves with a given number of points over a finite field
In using elliptic curves for cryptography, one often needs to construct elliptic curves with a given or known number of points over a given finite field. In the context of primality proving, Atkin and Morain suggested the use of the theory of complex multiplication to construct such curves. One of the steps in this method is the calculation of the Hilbert class polynomial
$H_D(X)$ modulo some integer $p$ for a certain fundamental discriminant $D$. The usual way of doing this is to compute
$H_D(X)$ over the integers and then reduce modulo $p$. But this involves computing the roots with very high accuracy and subsequent rounding of the coefficients to the closest integer. (Such accuracy issues also arise for higher genus cases.) We present a modified version of the Chinese remainder theorem (CRT) to compute $H_D(X)$ modulo $p$ directly from the knowledge of $H_D(X)$ modulo enough small primes. Our algorithm is inspired by Couveignes's method for computing square roots in the number field sieve, which is useful in other scenarios as well. It runs in heuristic expected time less than
that of the CRT method in [CNST]. Moreover, our method requires very few digits of precision to succeed, and avoids calculating the exponentially large coefficients of the Hilbert class polynomial over the integers.
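The central step is a Chinese-remainder reconstruction from many small primes. The sketch below shows a plain CRT recombination of a single integer coefficient; it is only an illustration of that step (the paper's explicit CRT works modulo $p$ directly and never forms the large integer), and all numbers in it are made up.

```python
# Generic CRT step: recover a value modulo p from its residues modulo several small
# primes whose product exceeds the value's magnitude.
from functools import reduce

def crt(residues, moduli):
    """Return (x mod prod(moduli), prod(moduli)) with x = residues[i] (mod moduli[i])."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M, M

small_primes = [101, 103, 107, 109]
coeff = 52_341_987                       # some integer coefficient (unknown to the algorithm)
residues = [coeff % m for m in small_primes]

value, M = crt(residues, small_primes)
p = 2**31 - 1                            # target prime modulus
assert value == coeff                    # coeff < M, so it is recovered exactly ...
print(value % p)                         # ... and hence so is its reduction modulo p
```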
Secure Vickrey Auctions without Threshold Trust
We argue that threshold trust is not an option in most real-life
electronic auctions. We then propose two new cryptographic Vickrey auction schemes that involve, apart from the bidders and the seller $S$, an auction authority $A$ so that unless $S$ and $A$ collude the outcome of the auction will be correct, and moreover, $S$ will not get any information about the bids, while $A$ will learn bid statistics. Further extensions make it possible to decrease the damage that colluding $S$ and $A$ can do, and to construct $(m+1)$st price auction schemes. The communication complexity between $S$ and $A$ in medium-size auctions is at least one order of magnitude less than in the Naor-Pinkas-Sumner scheme.
Slope packings and coverings, and generic algorithms for the discrete logarithm problem
We consider the set of slopes of lines formed by
joining all pairs of points in some subset S of a
Desarguesian affine plane of prime order, p.
If all the slopes are distinct and non-infinite, we have a
slope packing;
if every possible non-infinite slope occurs, then we have
a slope covering. We review and unify some results on these problems
that can be derived from the study of Sidon sets and sum covers.
Then we report some
computational results we have obtained for small values of p.
Finally, we
point out some connections between slope packings and coverings
and generic algorithms for the discrete logarithm problem in prime
order (sub)groups. Our results provide a combinatorial
characterization
of such algorithms, in the sense that any generic algorithm
implies the
existence of a certain slope packing or covering, and conversely.
Threshold Cryptosystems Based on Factoring
We consider threshold cryptosystems over a composite
modulus $N$ where the \emph{factors} of $N$ are shared among the
participants as the secret key.
This is a new paradigm for threshold cryptosystems based on a
composite modulus, differing from the
typical treatment of RSA-based systems where a ``decryption
exponent'' is shared among the participants. Our approach yields
solutions to some open problems in threshold cryptography; in particular, we obtain the following:
1. \emph{Threshold homomorphic encryption}. A number of applications (e.g., electronic voting or efficient multi-party computation) require threshold homomorphic encryption schemes.
We present a protocol for threshold decryption of the homomorphic Goldwasser-Micali encryption scheme \cite{GM84}, answering an open question of \cite{FPS00}.
2. \emph{Threshold cryptosystems as secure as factoring}. We describe a threshold version of a variant of the signature standards ISO 9796-2 and PKCS\#1 v1.5 (cf.\ \cite[Section 11.3.4]{MvOV}), thus giving the first threshold signature scheme
whose security (in the random oracle model) is equivalent to the hardness of factoring \cite{C02}.
Our techniques may be adapted to distribute the Rabin encryption
scheme \cite{R79} whose semantic security may be reduced to the hardness of factoring.
3. \emph{Efficient threshold schemes without a trusted dealer.}
Because our schemes only require sharing of $N$ --- which furthermore need not be a product of strong primes --- our schemes are very efficient (compared to previous schemes) when a trusted dealer is not assumed and key generation is done in a distributed manner.
Extensions to achieve robustness and proactivation are also possible with our schemes.
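Item 1 refers to threshold decryption of the Goldwasser-Micali scheme. For reference, here is a hedged sketch of plain, non-threshold GM encryption with toy parameters; the threshold decryption protocol of the paper is not reproduced.

```python
# Plain Goldwasser-Micali encryption (the homomorphic scheme referenced in item 1);
# the threshold decryption of the paper is not shown.  Toy parameters only.
import secrets

p, q = 499, 547                      # both = 3 (mod 4), so -1 is a non-residue mod each
n = p * q
x = n - 1                            # pseudo-square: Jacobi symbol +1, but a quadratic non-residue

def is_qr(a, prime):
    return pow(a % prime, (prime - 1) // 2, prime) == 1

assert not is_qr(x, p) and not is_qr(x, q)

def encrypt(bit: int) -> int:
    r = 0
    while r == 0 or r % p == 0 or r % q == 0:
        r = secrets.randbelow(n)
    return (pow(r, 2, n) * pow(x, bit, n)) % n

def decrypt(c: int) -> int:
    # c is a square mod n iff it is a square mod both p and q.
    return 0 if (is_qr(c, p) and is_qr(c, q)) else 1

assert decrypt(encrypt(0)) == 0 and decrypt(encrypt(1)) == 1
# XOR-homomorphic: multiplying ciphertexts adds the plaintext bits mod 2.
assert decrypt((encrypt(1) * encrypt(1)) % n) == 0
```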
BDD-based Cryptanalysis of Keystream Generators
Many of the keystream generators which are used in practice are LFSR-based in the sense
that they produce the keystream according to a rule $y=C(L(x))$,
where $L(x)$ denotes an internal linear bitstream, produced by a small number of parallel
linear feedback shift registers (LFSRs),
and $C$ denotes some nonlinear compression function.
We present an $n^{O(1)} 2^{(1-\alpha)/(1+\alpha)\,n}$ time bounded attack, the FBDD-attack,
against LFSR-based generators, which computes the secret initial state
$x\in\{0,1\}^n$ from $cn$ consecutive keystream bits, where $\alpha$ denotes the rate of information
that $C$ reveals about the internal bitstream, and $c$ denotes some small constant.
The algorithm uses Free Binary Decision Diagrams (FBDDs), a data structure for
minimizing and manipulating Boolean functions.
The FBDD-attack yields better
bounds on the effective key length for several keystream generators of practical use, e.g.,
a $0.656n$ bound for the self-shrinking generator, a $0.6403 n$ bound for the A5/1 generator,
used in the GSM standard, a $0.6n$
bound for the $E_0$ encryption standard in the one level mode,
and a $0.8823n$ bound for the two-level $E_0$ generator used in the Bluetooth wireless LAN system.
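A minimal sketch of the generator model $y=C(L(x))$, instantiated with the self-shrinking generator mentioned above (one LFSR as $L$, pairwise compression as $C$), may help fix ideas; the feedback polynomial is a toy choice and the FBDD attack itself is not sketched.

```python
# The LFSR-based model y = C(L(x)), instantiated with the self-shrinking generator:
# L is a single LFSR, and C compresses its output pairwise.
# (Toy feedback taps; the FBDD attack itself is not sketched here.)

def lfsr(state, taps, nbits):
    """Fibonacci-style LFSR: yield nbits output bits from the given initial state."""
    state = list(state)
    for _ in range(nbits):
        out = state[-1]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
        yield out

def self_shrinking(bits):
    """C: read bits in pairs (a, b); output b when a == 1, discard the pair otherwise."""
    it = iter(bits)
    for a in it:
        b = next(it)
        if a == 1:
            yield b

secret_state = [1, 0, 1, 1, 0, 1, 0, 1]                       # x: the secret initial state
internal = lfsr(secret_state, taps=[0, 2, 3, 4], nbits=200)   # L(x): internal linear bitstream
keystream = list(self_shrinking(internal))                    # y = C(L(x))
print("".join(map(str, keystream[:32])))
```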
Perfect Hiding and Perfect Binding Universally Composable Commitment Schemes with Constant Expansion Factor
Canetti and Fischlin have recently proposed the security notion {\em
universal composability} for commitment schemes and provided two
examples. This new notion is very strong. It guarantees that security
is maintained even when an unbounded number of copies of the scheme
are running concurrently; it also guarantees non-malleability,
resilience to selective decommitment, and security against adaptive
adversaries. Both of their schemes use $\Theta(k)$ bits to commit to
one bit and can be based on the existence of trapdoor commitments and
non-malleable encryption.
We present new universally composable commitment schemes based on the
Paillier cryptosystem and the Okamoto-Uchiyama cryptosystem. The
schemes are efficient: to commit to $k$ bits, they use a constant
number of modular exponentiations and communicate $O(k)$
bits. Furthermore, the scheme can be instantiated in either perfectly
hiding or perfectly binding versions. These are the first schemes to
show that constant expansion factor, perfect hiding, and perfect
binding can be obtained for universally composable commitments.
We also show how the schemes can be applied to do efficient
zero-knowledge proofs of knowledge that are universally composable.
Identity Based Encryption From the Weil Pairing
We propose a fully functional identity-based encryption scheme (IBE).
The scheme has chosen ciphertext security in the random oracle model
assuming an elliptic curve variant of the computational Diffie-Hellman
problem. Our system is based on bilinear maps between groups. The
Weil pairing on elliptic curves is an example of such a map. We give
precise definitions for secure identity based encryption schemes and
give several applications for such systems.
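The abstract does not spell out the equations; for orientation, the basic chosen-plaintext version of such a pairing-based IBE, as it is usually presented, is sketched below (reconstructed from general knowledge; the chosen-ciphertext-secure variant adds a further transform).

```latex
% Sketch of the basic (CPA) identity-based scheme as usually presented; treat this as a
% reminder reconstructed from general knowledge, not as text taken from the paper.
\begin{align*}
\textbf{Setup:}   &\ \text{groups } G_1, G_2 \text{ of prime order } q,\ \text{bilinear map } e: G_1 \times G_1 \to G_2,\\
                  &\ \text{generator } P \in G_1,\ \text{master key } s \in \mathbb{Z}_q^*,\ P_{pub} = sP,\\
                  &\ \text{hash functions } H_1:\{0,1\}^* \to G_1^*,\ H_2: G_2 \to \{0,1\}^n.\\
\textbf{Extract:} &\ \text{private key for identity } ID:\ d_{ID} = s\,H_1(ID).\\
\textbf{Encrypt:} &\ \text{pick random } r \in \mathbb{Z}_q^*,\ C = \bigl(rP,\ M \oplus H_2\bigl(e(H_1(ID), P_{pub})^r\bigr)\bigr).\\
\textbf{Decrypt:} &\ \text{given } C=(U,V):\ M = V \oplus H_2\bigl(e(d_{ID}, U)\bigr),
                    \ \text{ since } e(d_{ID}, U) = e(H_1(ID), P_{pub})^r.
\end{align*}
```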
Linear broadcast encryption schemes
A new family of broadcast encryption schemes (BESs), which will be called linear broadcast encryption schemes (LBESs), is presented in this paper by using linear algebraic techniques. This family generalizes most previous proposals and provides a general framework for the study of broadcast encryption schemes. We present a method to construct LBESs for a general specification structure in order to find schemes that fit situations that have not been considered before.
Improving the trade-off between storage and communication in broadcast encryption schemes
The most important point in the design of broadcast encryption schemes (BESs) is to obtain a good trade-off between the amount of secret information that must be stored by every user and the length of the broadcast message, which are measured, respectively, by the information rate $\rho$ and the broadcast information rate $\rho_B$. In this paper we present a simple method to combine two given BESs in order to improve the trade-off between $\rho$ and $\rho_B$ by finding BESs with good information rate $\rho$ for arbitrarily many different values of the broadcast information rate $\rho_B$. We apply this technique to threshold $(R,T)$-BESs and we present a method to obtain, for every rational value $1/R \le \rho_B \le 1$, an $(R,T)$-BES with optimal information rate $\rho$ among all $(R,T)$-BESs that can be obtained by combining two of the $(R,T)$-BESs proposed by Blundo et al.
A Linear Algebraic Approach to Metering Schemes
A metering scheme is a method by which an audit agency
is able to measure the interaction between servers and
clients during a certain number of time frames.
Naor and Pinkas proposed
metering schemes where any server is able to compute
a proof, i.e., a value to be shown to the audit agency
at the end of each time frame,
if and only if it has been visited
by a number of clients larger than or equal to some threshold $h$
during the time frame.
Masucci and Stinson
showed how to construct a metering scheme realizing
any access structure, where the access structure is the family of all subsets
of clients which enable a server to compute its proof.
They also provided lower bounds on the communication
complexity of metering schemes.
In this paper we describe a linear algebraic approach
to design metering schemes
realizing any access structure. Namely,
given any access structure, we present a
method to construct a metering scheme realizing it
from any linear secret sharing scheme
with the same access structure.
In addition, we prove some properties about the relationship
between metering schemes
and secret sharing schemes. These properties provide
some new bounds on the information distributed to clients
and servers in a metering scheme. According to these bounds,
the optimality of the metering schemes obtained by our method
relies upon the optimality of the linear secret sharing
schemes for the given access structure.
Statistical Zero-Knowledge Proofs from Diophantine Equations
A family $(S_t)$ of sets is $p$-bounded Diophantine if $S_t$ has a
representing $p$-bounded polynomial $R_{S,t}$, s.t. $x\in S_t \iff
(\exists y)[R_{S,t}(x;y)=0]$. We say that $(S_t)$ is unbounded
Diophantine if, additionally, $R_{S,t}$ is a fixed $t$-independent
polynomial. We show that every $p$-bounded (resp., unbounded) Diophantine
set has a polynomial-size (resp., constant-size) statistical
zero-knowledge proof system that a committed tuple $x$ belongs to
$S_t$. We describe efficient SZK proof systems for several
cryptographically interesting sets. Finally, we show how to prove in
SZK that an encrypted number belongs to $S$.
Universal Hash Proofs and a Paradigm for Adaptive Chosen Ciphertext Secure Public-Key Encryption
We present several new and fairly practical public-key
encryption schemes and prove them secure against adaptive
chosen ciphertext attack. One scheme is based on
Paillier's Decision Composite Residuosity (DCR)
assumption, while another is based on the classical
Quadratic Residuosity (QR) assumption. The analysis is in
the standard cryptographic model, i.e., the security of
our schemes does not rely on the Random Oracle model.
We also introduce the notion of a universal hash
proof system. Essentially, this is a special kind of
non-interactive zero-knowledge proof system for an NP
language. We do not show that universal hash
proof systems exist for all NP languages, but we do
show how to construct very efficient universal hash
proof systems for a general class of group-theoretic
language membership problems.
Given an efficient universal hash proof system for a
language with certain natural cryptographic
indistinguishability properties, we show
how to construct an efficient public-key encryption
scheme secure against adaptive chosen ciphertext attack
in the standard model. Our construction only uses the
universal hash proof system as a primitive: no other
primitives are required, although even more efficient
encryption schemes can be obtained by using hash
functions with appropriate collision-resistance
properties.
We show how to construct efficient universal hash proof
systems for languages related to the DCR and QR
assumptions. From these we get corresponding public-key
encryption schemes that are secure under these assumptions.
We also show that the Cramer-Shoup encryption scheme
(which up until now was the only practical
encryption scheme that could be proved secure against
adaptive chosen ciphertext attack under a reasonable
assumption, namely, the Decision Diffie-Hellman assumption)
is also a special case of our general theory.
Analysis of the GHS Weil Descent Attack on the ECDLP over Characteristic Two Finite Fields of Composite Degree
In this paper, we analyze the Gaudry-Hess-Smart (GHS) Weil descent
attack on the elliptic curve discrete logarithm problem (ECDLP) for
elliptic curves defined over characteristic two finite fields of
composite extension degree. For each such field $F_{2^N}$,
$N \in [100,600]$, we identify elliptic curve parameters such
that (i) there should exist a cryptographically interesting elliptic
curve $E$ over $F_{2^N}$ with these parameters; and (ii) the GHS
attack is more efficient for solving the ECDLP in $E(F_{2^N})$ than
for solving the ECDLP on any other cryptographically interesting
elliptic curve over $F_{2^N}$. We examine the feasibility of the
GHS attack on the specific elliptic curves over $F_{2^{176}}$,
$F_{2^{208}}$, $F_{2^{272}}$, $F_{2^{304}}$, and $F_{2^{368}}$
that are provided as examples in the ANSI X9.62 standard for the
elliptic curve signature scheme ECDSA. Finally, we provide several
concrete instances of the ECDLP over $F_{2^N}$, $N$ composite,
of increasing difficulty which resist all previously known attacks
but which are within reach of the GHS attack.
On the Constructing of Highly Nonlinear Resilient Boolean Functions by Means of Special Matrices
In this paper we consider matrices of a special form introduced in [11]
and used for constructing resilient functions with cryptographically
optimal parameters. For such matrices we establish the lower bound
${1\over\log_2(\sqrt{5}+1)}=0.5902...$ for the important ratio
${t\over t+k}$ of their parameters and point out that there exists a
sequence of matrices for which the limit of this ratio
equals the lower bound. By means of these matrices we construct
$m$-resilient $n$-variable functions with maximum possible nonlinearity
$2^{n-1}-2^{m+1}$ for $m=0.5902...n+O(\log_2 n)$. This result
supersedes the previous record.
A Description of Protocols for Private Credentials
This document provides a short description of practical protocols for private credential systems. We explain the basic concepts and mechanisms behind issuing and showing of private credentials and e-cash. The goal is to describe concisely how practical private credential systems can be achieved and not to provide intuition or motivation for the technology; for information on these subjects, see [1,2,3]. We give the details of one specific type of practical protocols for private credentials; other choices of functionalities and optimizations are possible. The reader is assumed to have general knowledge of basic concepts of cryptography such as the Discrete Logarithm problem, basic group theory and hash functions. For security proofs and more elaborate descriptions of the techniques used we refer the reader to [2].
A Sufficient Condition for Secure Ping--Pong Protocols
A sufficient condition for secure ping--pong protocols is
presented. This condition, called \emph{name--suffixing}, essentially
consists in inserting the identities of participants into messages. We
prove its sufficiency and discuss the feature of security in terms
of name--suffixing.
COS Ciphers are not "extremely weak"! - The Design Rationale of COS Ciphers
This note summarizes the results of Babbage's cryptanalysis of the COS ciphers and shows that, contrary to what was claimed, the COS ciphers are in fact not weak. These ciphers have been designed according to a novel concept of encryption directly determined by the context of use. This concept is defined more precisely here.
Authenticated Encryption in the Public-Key Setting: Security Notions and Analyses
This paper addresses the security of authenticated encryption schemes
in the public key setting. We present two new notions of
authenticity that are stronger than the integrity notions given in the
symmetric setting \cite{bn00}. We also show that
chosen-ciphertext attack security (IND-CCA) in the public key setting
is not obtained in general from the combination of chosen-plaintext
security (IND-CPA) and integrity of ciphertext (INT-CTXT), which is in
contrast to the results shown in the symmetric setting
\cite{ky00,bn00}. We provide security analyses of authenticated
encryption schemes constructed by combining a given public key
encryption scheme and a given digital signature scheme in a
``generic'' manner ---namely, Encrypt-and-Sign, Sign-then-Encrypt, and
Encrypt-then-Sign--- and show that none of them, in general, provide
security under all notions defined in this paper. We then present a
scheme called {\em ESSR} that meets all security notions defined here.
We also give security analyses on an efficient Diffie-Hellman based
scheme called {\em DHETM}, which can be thought of as a transform of
the encryption scheme ``DHIES'' \cite{abr01} into an {\em
authenticated} encryption scheme in the public key setting.
The COS Stream Ciphers are Extremely Weak
A new family of very fast stream ciphers called COS (for "crossing over system") has been proposed by Filiol and Fontaine, and seems to have been adopted for at least one commercial standard. In this note we show that the COS ciphers are very weak indeed — it requires negligible effort to reconstruct the state of the keystream generator from a very small amount of known keystream.
A Time-Memory Tradeoff Attack Against LILI-128
In this note we discuss a novel but simple time-memory tradeoff attack
against the stream cipher LILI-128. The attack defeats
the security advantage of having an irregular stepping function.
The attack requires $2^{46}$ bits of keystream, a lookup table of
$2^{45}$ 89-bit words and computational effort which is roughly
equivalent to $2^{48}$ DES operations.
Communication Complexity and Secure Function Evaluation
A secure function evaluation protocol allows two parties to jointly compute a function $f(x,y)$ of their inputs in a manner not leaking more information than necessary. A major result in this field is: ``any function $f$ that can be computed using polynomial resources can be computed securely using polynomial resources'' (where `resources' refers to communication and computation). This result follows by a general transformation from any circuit for $f$ to a secure protocol that evaluates $f$.
Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of $f$.
For the design of efficient secure protocols we suggest two new methodologies that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) representation of $f$. We start with an efficient (insecure) protocol for $f$ and transform it into
a secure protocol. In other words, ``any function $f$ that can be computed using communication complexity $c$ can be computed securely using communication complexity that is polynomial in $c$ and a security parameter''. The second methodology uses the circuit computing $f$, enhanced with look-up tables, as its underlying computational model.
It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of $f$ on a RAM machine and transform it into a secure protocol.
We show many applications of these new methodologies resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the ``millionaires problem'', where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.
Pseudo-Random Functions and Factoring
Factoring integers is the most established problem on which
cryptographic primitives are based. This work presents an efficient
construction of {\em pseudorandom functions} whose security is based
on the intractability of factoring. In particular, we are able to
construct efficient length-preserving pseudorandom functions where
each evaluation requires only a {\em constant} number of modular
multiplications per output bit. This is substantially more efficient
than any previous construction of pseudorandom functions based on
factoring, and matches (up to a constant factor) the efficiency of
the best known factoring-based {\em pseudorandom bit generators}.
On the Security of Randomized CBC-MAC Beyond the Birthday Paradox Limit - A New Construction
In this paper, we study the security of randomized CBC-MACs and propose a new construction that resists birthday paradox attacks and provably reaches full security. The size of the MAC tags in this construction is optimal, i.e., exactly twice the size of the block cipher. Up to a constant, the security of the proposed randomized CBC-MAC using an n-bit block cipher is the same as the security of the usual encrypted CBC-MAC using a 2n-bit block cipher. Moreover, this construction adds a negligible computational overhead compared to the cost of a plain, non-randomized CBC-MAC. We give a full proof of our construction in the standard model using one pass of a block cipher with 2n-bit keys, but there is also a proof for block ciphers with n-bit keys in the ideal cipher model.
Efficient oblivious transfer schemes
In this paper we propose a very efficient
(string) $OT_n^1$ scheme
for any $n\geq 2$.
We build our $OT_n^1$ scheme from fundamental cryptographic
techniques directly.
It achieves optimal efficiency in the number of rounds
and the total number of exchanged messages for the case
that the receiver's
choice is unconditionally secure.
The computation of our $OT_n^1$ scheme is very
efficient, too.
The receiver needs to compute only 2 modular
exponentiations no matter how large $n$ is,
and the sender needs to compute $2n$ modular exponentiations.
Furthermore, the system-wide parameters need not change
during the lifetime of the system and are {\em universally
usable}.
That is, all possible receivers and senders use the same
parameters and need no trapdoors specific to each of them.
For our $OT_n^1$ scheme, the privacy of the receiver's choice
is unconditionally secure and the privacy of
the un-chosen secrets is at least as strong as the hardness
of the decisional Diffie-Hellman problem.
\par
We extend our $OT_n^1$ scheme to distributed oblivious
transfer schemes.
Our distributed $OT_n^1$ scheme takes full advantage of
the research results of secret sharing and is conceptually
simple.
It achieves better security than
Naor and Pinkas's scheme does in many aspects.
For example, our scheme is secure against collusion of $R$
and $t-1$ servers
and it need not restrict $R$ to contacting at most $t$ servers,
which is difficult to enforce.
\par
For applications, we present a method of transforming any
single-database PIR
protocol into a symmetric PIR protocol with only one extra
unit of communication cost.
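As a rough illustration of the two-round pattern and operation counts claimed above (two exponentiations for the receiver, about $2n$ for the sender), here is a toy DDH-style $OT_n^1$ sketch reconstructed from general knowledge of such constructions; the group, bases, and messages are illustrative assumptions and this is not asserted to be the paper's exact protocol.

```python
# Toy sketch of a two-round OT_n^1 in the style described above (reconstructed from
# general knowledge of DDH-based OT; not necessarily the paper's exact protocol).
# Group: Z_p^* with two public bases g and h of unknown mutual discrete log.
import secrets

p = 2_147_483_647                 # toy prime modulus (2^31 - 1); real schemes use large groups
g, h = 7, 11                      # public, ``universally usable'' parameters (no trapdoors)
# NOTE: toy parameters chosen for readability; receiver privacy additionally
# requires g to generate the group, which is not verified here.
n = 4
messages = [1001, 1002, 1003, 1004]   # sender's secrets, encoded in Z_p^*

# Receiver: choice alpha, one exponentiation to form the query y = g^r * h^alpha.
alpha = 2
r = secrets.randbelow(p - 1)
y = (pow(g, r, p) * pow(h, alpha, p)) % p

# Sender: roughly 2n exponentiations, one pair (a_i, b_i) per message.
response = []
for i in range(n):
    k = secrets.randbelow(p - 1)
    a_i = pow(g, k, p)
    mask = pow(y * pow(h, -i, p) % p, k, p)   # (y / h^i)^k
    response.append((a_i, (messages[i] * mask) % p))

# Receiver: one more exponentiation recovers exactly m_alpha.
a, b = response[alpha]
recovered = (b * pow(pow(a, r, p), -1, p)) % p
assert recovered == messages[alpha]
print(recovered)
```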
On the Goubin-Courtois Attack on TTM
In the paper [1] published at Asiacrypt 2000, L. Goubin and N.T. Courtois
propose an attack on the TTM cryptosystem. In paper [1], they misrepresent the TTM
cryptosystem. They then extrapolate an attack on one example of TTM to the general
TTM cryptosystem. Finally they conclude: ``There is very little hope that a secure
triangular system (Tame transformation system in our terminology) will ever be
proposed''. This is a serious challenge to many people working in the field.
In this paper, we will show that their attack in section 5 is full of gaps.
Even their attack on one implementation of TTM is questionable. We write a
lengthy introduction to restate the TTM cryptosystem and point out many possible
implementations. It will be clear that their attack on one implementation cannot
be generalized to attacks on other implementations. As is usually said,
``truth is in the fine details''; we quote and analyze their TPM system at the
end of the introduction and $\S$ 2. We further state one
implementation of the TTM cryptosystem in $\S$ 3.
We analyze their MiniRank(r) attack in $\S$ 4 and show that it is infeasible.
We conclude that the attack of [1] on the TTM cryptosystem is infeasible and
full of gaps. There are no known attacks which can crack the TTM cryptosystem.
Multi-Recipient Public-Key Encryption with Shortened Ciphertext
In the trivial
$n$-recipient public-key encryption scheme,
a ciphertext is a concatenation
of independently encrypted messages for $n$ recipients.
In this paper,
we say that an $n$-recipient scheme
has a ``{\it shortened ciphertext}'' property
if
the length of the ciphertext is almost
half (or less) of that of the trivial scheme
and the security is still almost the same
as that of the underlying single-recipient scheme.
We first present
(multi-plaintext, multi-recipient) schemes
with the ``{\it shortened ciphertext}'' property
for the ElGamal scheme and the Cramer-Shoup scheme.
We next show
(single-plaintext, multi-recipient)
hybrid encryption schemes
with the ``{\it shortened ciphertext}'' property.
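The ``shortened ciphertext'' idea for ElGamal is commonly realized by reusing one random exponent across all recipients, so a single $g^r$ replaces $n$ of them. The sketch below shows that randomness-reuse pattern with toy subgroup parameters; it is reconstructed from general knowledge and is not claimed to match the paper's schemes in detail.

```python
# Randomness reuse for multi-recipient ElGamal: a single g^r is shared by all
# recipients, so the ciphertext is roughly half as long as n independent
# ElGamal ciphertexts (toy subgroup parameters; illustration only).
import secrets

p, q = 2039, 1019            # toy safe prime p = 2q + 1
g = 4                        # generator of the order-q subgroup of quadratic residues

# Three recipients with independent key pairs.
sks = [1 + secrets.randbelow(q - 1) for _ in range(3)]
pks = [pow(g, x, p) for x in sks]

# One plaintext per recipient, encoded in the subgroup.
msgs = [pow(g, m, p) for m in (11, 22, 33)]

# Sender: one shared r, ciphertext (g^r, m_1*y_1^r, ..., m_n*y_n^r).
r = 1 + secrets.randbelow(q - 1)
header = pow(g, r, p)
body = [(m * pow(y, r, p)) % p for m, y in zip(msgs, pks)]

# Each recipient decrypts its own component with its secret key.
for i, x in enumerate(sks):
    recovered = (body[i] * pow(pow(header, x, p), -1, p)) % p
    assert recovered == msgs[i]
print("all recipients recovered their messages")
```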
Security Assessment of Hierocrypt and Rijndael against the Differential and Linear Cryptanalysis (Extended Abstract)
The authors analyze the security of Hierocrypt-3 (128-bit) and Hierocrypt-L1 (64-bit), designed on the nested SPN (NSPN) structure, against differential and linear cryptanalysis, and find that they are sufficiently secure; e.g., the maximum average differential and linear hull probabilities (MACP and MALHP) are bounded by $2^{-96}$ for 4 rounds of Hierocrypt-3, and by $2^{-48}$ for 4 rounds of Hierocrypt-L1. The authors obtain these results by extending the provable security theorem of Hong et al. Furthermore, the extended theory is applied to Rijndael, and it is found that the MACP and MALHP of 4-round Rijndael are bounded by $2^{-96}$. This outperforms the best previous result by Keliher et al.
On the (Im)possibility of Obfuscating Programs
Informally, an {\em obfuscator} $O$ is an (efficient,
probabilistic) ``compiler'' that takes as input a program (or
circuit) $P$ and produces a new program $O(P)$ that has the
same functionality as $P$ yet is ``unintelligible'' in some sense.
Obfuscators, if they exist, would have a wide variety of
cryptographic and complexity-theoretic applications, ranging from
software protection to homomorphic encryption to
complexity-theoretic analogues of Rice's theorem. Most of these
applications are based on an interpretation of the
``unintelligibility'' condition in obfuscation as meaning that
$O(P)$ is a ``virtual black box,'' in the sense that anything
one can efficiently compute given $O(P)$, one could also
efficiently compute given oracle access to $P$.
In this work, we initiate a theoretical investigation of
obfuscation. Our main result is that, even under very
weak formalizations of the above intuition, obfuscation
is impossible. We prove this by constructing a family of
functions $F$ that are {\em inherently
unobfuscatable} in the following sense: there is a
property $\pi : F \rightarrow \{0,1\}$ such that (a) given {\em
any program} that computes a function $f\in F$, the
value $\pi(f)$ can be efficiently computed, yet (b) given
{\em oracle access} to a (randomly selected) function
$f\in F$, no efficient algorithm can compute $\pi(f)$
much better than random guessing.
We extend our impossibility result in a number of ways, including
even obfuscators that (a) are not necessarily computable in
polynomial time, (b) only {\em approximately} preserve the
functionality, and (c) only need to work for very restricted
models of computation ($TC^0$). We also rule out several
potential applications of obfuscators, by constructing
``unobfuscatable'' signature schemes, encryption schemes, and
pseudorandom function families.
SQUARE Attacks on Reduced-Round PES and IDEA Block Ciphers
This paper reports on variants of the Square attack applied to
reduced-round versions of the PES and IDEA block ciphers. Attacks
on 2.5 rounds of IDEA require $3\cdot 2^{16}$ chosen-plaintexts and
recover 78 key bits. A new kind of attack, the Square
related-key attack, is applied on 2.5 rounds of IDEA and recovers
32 key bits, with 2 chosen-plaintexts and $2^{17}$
related keys. Similar results hold for 2.5 rounds of PES.
Implementations of the attacks on 32-bit block mini-versions of both
ciphers confirmed the expected computational complexity. Although
our attacks do not improve on previous approaches, this report shows
new variants of the Square attack on word-oriented block ciphers
like IDEA and PES.
An Attack on A Traitor Tracing Scheme
In Crypto'99, Boneh and Franklin proposed a
public key traitor tracing scheme~\cite{Boneh},
which was believed to be able to catch all
traitors while not accusing any innocent users
(i.e., full-tracing and error-free). Assuming
that the Decision Diffie-Hellman problem is
unsolvable in $G_{q}$, Boneh and Franklin
proved that a decoder cannot distinguish
valid ciphertexts from invalid ones that are
used for tracing. However, our novel pirate
decoder $P_{3}$ manages to make some invalid
ciphertexts distinguishable without violating
their assumption, and it can also frame
innocent users to fool the tracer. Neither
the single-key nor arbitrary pirate tracing
algorithm presented in~\cite{Boneh} can
identify all keys used by $P_{3}$ as claimed.
Instead, it is possible for both algorithms
to catch none of the traitors. We believe that
the construction of our novel pirate decoder
also demonstrates a simple way to defeat some
other black-box traitor tracing schemes in
general.
IMPROVED PUBLIC KEY CRYPTOSYSTEM USING FINITE NON ABELIAN GROUPS
In the public key cryptosystem using finite non-abelian groups suggested at CRYPTO 2001, discrete logarithm problems in inner automorphism groups are used. In this paper, we generalize the system and give some examples of non-abelian groups which are applicable to our system.
Analysis of chosen plaintext attacks on the WAKE Stream Cipher
Stream ciphers are an important class of encryption algorithms, which are widely used in practice. In this paper the security of the WAKE stream cipher is investigated. We present two chosen plaintext attacks on this cipher. The complexities of these attacks can be estimated as $10^{19.2}$ and $10^{14.4}$.
An Integer Commitment Scheme based on Groups with Hidden Order
We present a commitment scheme allowing commitment to arbitrary size integers, based on any Abelian group with certain properties, most importantly that it is hard for the committer to compute its order. Potential examples include RSA and class groups. We also give efficient zero-knowledge protocols for proving knowledge of the contents of a commitment and for verifying multiplicative relations over the integers on committed values. This means that our scheme can support, for instance, the efficient interval proofs of Boudot. The scheme can be seen as a modification and a generalization of an earlier scheme of Fujisaki and Okamoto (FO), and in particular our results show that we can use a much larger class of RSA moduli than the safe prime products proposed by FO. Also, we correct some mistakes in the proofs of FO and give what appears to be the first multiplication protocol for a Fujisaki/Okamoto-like scheme with a complete proof of soundness.
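The commitment itself has the familiar shape $c = g^x h^r$ computed in a group whose order the committer cannot determine. A minimal sketch of that shape over squares modulo a toy RSA modulus follows; all parameters are illustrative assumptions and the paper's zero-knowledge protocols are not sketched.

```python
# Shape of the commitment scheme described above: commit(x, r) = g^x * h^r in a group
# whose order is hidden from the committer (here: squares modulo a toy RSA modulus).
# The zero-knowledge protocols of the paper are not sketched.
import secrets

p, q = 499, 547                       # factors known only to the group ``owner''
n = p * q

h = pow(123, 2, n)                    # a fixed square, coprime to n in this demo
a = secrets.randbelow(2**40)          # unknown to the committer in a real setup
g = pow(h, a, n)                      # g in the subgroup generated by h

def commit(x: int, bits: int = 80):
    r = secrets.randbelow(2**bits)    # randomizer much larger than the (hidden) group order
    return (pow(g, x, n) * pow(h, r, n)) % n, r

def open_commitment(c: int, x: int, r: int) -> bool:
    return c == (pow(g, x, n) * pow(h, r, n)) % n

c, r = commit(-123456789)             # arbitrary-size integers, including negative ones
print(open_commitment(c, -123456789, r))   # True
print(open_commitment(c, 42, r))           # False (with overwhelming probability)
```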
Resettably-Sound Zero-Knowledge and its Applications
Resettably-sound proofs and arguments remain sound even when the
prover can reset the verifier, and so force it to use the same
random coins in repeated executions of the protocol. We show that
resettably-sound zero-knowledge {\em arguments} for NP exist
if collision-resistant hash functions exist. In contrast,
resettably-sound zero-knowledge {\em proofs} are possible only
for languages in P/poly.
We present two applications of resettably-sound zero-knowledge
arguments. First, we construct resettable zero-knowledge arguments
of knowledge for NP, using a natural relaxation of the
definition of arguments (and proofs) of knowledge. We note that,
under the standard definition of proofs of knowledge, it is
impossible to obtain resettable zero-knowledge arguments of
knowledge for languages outside BPP. Second, we construct a
constant-round resettable zero-knowledge argument for NP in the
public-key model, under the assumption that collision-resistant
hash functions exist. This improves upon the sub-exponential
hardness assumption required by previous constructions.
We emphasize that our results use non-black-box zero-knowledge
simulations. Indeed, we show that some of the results are {\em
impossible} to achieve using black-box simulations. In
particular, only languages in BPP have resettably-sound
arguments that are zero-knowledge with respect to black-box
simulation.
Optimal security proofs for PSS and other signature schemes
The Probabilistic Signature Scheme (PSS) designed by Bellare and
Rogaway is a signature scheme provably secure against chosen
message attacks in the random oracle model, with a security level equivalent to RSA.
In this paper, we derive a new security proof for PSS in which
a much shorter random salt is used to achieve the same security
level, namely we show that $\log_2 q_{sig}$ bits suffice, where
$q_{sig}$ is the number of signature queries made by the attacker.
When PSS is used with message recovery, a better
bandwidth is obtained because longer messages can now be
recovered. Moreover, we show that this size is optimal: if less
than $\log_2 q_{sig}$ bits of random salt are used, PSS is still
provably secure but no security proof can be tight. This result
is based on a new technique which shows that other
signature schemes such as the Full Domain Hash scheme and
Gennaro-Halevi-Rabin's scheme have optimal security proofs.
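To see what the salt-size parameter looks like in practice, the snippet below signs with RSA-PSS using a 4-byte salt via the third-party `cryptography` package (assumed to be installed); 32 bits already exceed $\log_2 q_{sig}$ for any attacker limited to, say, $2^{30}$ signature queries. This is a usage illustration, not part of the paper's proof.

```python
# Illustration of the salt-size parameter discussed above, using the third-party
# ``cryptography'' package (assumed available).  A 4-byte (32-bit) salt already
# exceeds log2(q_sig) for any attacker making at most 2^30 signature queries.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"PSS with a short random salt"

short_salt_pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=4)

signature = key.sign(message, short_salt_pss, hashes.SHA256())
key.public_key().verify(signature, message, short_salt_pss, hashes.SHA256())
print("verified with a 4-byte salt")
```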
Clock-Controlled Shift Registers for Key-Stream Generation
In this paper we estimate the period of the sequence generated by a clock-controlled LFSR with an irreducible feedback polynomial and an arbitrary structure of the control sequence, as well as some randomness properties of this sequence including element distribution and the autocorrelation function. Also we construct and analyze a specific key-stream generator that applies clock-control. Finally, we present a comprehensive survey of known correlation attacks on clock-controlled registers and their memoryless combiners.
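For readers unfamiliar with the object being analyzed, here is a toy clock-controlled generator in which a control LFSR decides whether the data LFSR steps once or twice before each output bit; the registers and taps are arbitrary illustrative choices, not the specific generator constructed in the paper.

```python
# Toy clock-controlled generator of the kind analyzed above: a control LFSR decides
# whether the data LFSR is stepped once or twice before each output bit
# (a ``step-1/step-2'' rule; not the specific generator constructed in the paper).

def step(state, taps):
    fb = 0
    for t in taps:
        fb ^= state[t]
    return [fb] + state[:-1], state[-1]

def clock_controlled(ctrl_state, data_state, ctrl_taps, data_taps, nbits):
    out = []
    for _ in range(nbits):
        ctrl_state, c = step(ctrl_state, ctrl_taps)
        steps = 1 if c == 0 else 2            # control bit chooses the clocking amount
        for _ in range(steps):
            data_state, b = step(data_state, data_taps)
        out.append(b)
    return out

ctrl = [1, 0, 0, 1, 1]
data = [0, 1, 1, 0, 1, 0, 1]
keystream = clock_controlled(ctrl, data, ctrl_taps=[0, 2], data_taps=[0, 3, 5], nbits=64)
print("".join(map(str, keystream)))
```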
The Security of Practical Two-Party RSA Signature Schemes
In a two-party RSA signature scheme, a client and server, each
holding a share of an RSA decryption exponent $d$, collaborate to compute an
RSA signature under the corresponding public key $N,e$ known to both. This
primitive is of growing interest in the domain of server-aided password-based
security, where the client's share of $d$ is based on its password. To minimize
cost, designers are looking at very simple, practical protocols based on the
early ideas of Boyd, but their security is unclear. We analyze a class of these
protocols. We suggest two notions of security for two-party signature schemes
and provide proofs of security for the schemes in our class based on
assumptions about RSA and the hash function underlying the scheme.
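The basic mechanism behind such protocols is that the two shares of $d$ produce partial signatures whose product is an ordinary RSA signature. The following toy sketch (additive sharing, no proper padding, made-up parameters) shows only that mechanism and none of the analyzed protocols' details or security measures.

```python
# Core idea behind the protocols analyzed above: the exponent d is split into two
# shares, the client and server each produce a partial signature, and the product is
# an ordinary RSA signature.  (Toy parameters, no proper padding; this is only the
# basic Boyd-style splitting, not the exact protocols of the paper.)
import hashlib, math, secrets

p, q, e = 499, 547, 65537
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

# Dealer splits d additively: d = d_client + d_server (mod phi).
d_client = secrets.randbelow(phi)            # e.g., derived from the client's password
d_server = (d - d_client) % phi

def h(msg: bytes) -> int:
    """Toy message encoding; skip values sharing a factor with n (a real padding
    scheme makes this a non-issue)."""
    v = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    while math.gcd(v, n) != 1:
        v += 1
    return v

msg = b"two-party RSA signature"
s_client = pow(h(msg), d_client, n)          # client's partial signature
s_server = pow(h(msg), d_server, n)          # server's partial signature
signature = (s_client * s_server) % n

assert pow(signature, e, n) == h(msg)        # verifies under the ordinary public key (n, e)
print(signature)
```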
Revocation and Tracing Schemes for Stateless Receivers
We deal with the problem of a center sending a message to a group
of users such that some subset of the users is considered revoked
and should not be able to obtain the content of the message. We
concentrate on the stateless receiver case, where the users
do not (necessarily) update their state from session to session.
We present a framework called the Subset-Cover framework,
which abstracts a variety of revocation schemes including some
previously known ones. We provide sufficient conditions that
guarantee the security of a revocation algorithm in this class.
We describe two explicit Subset-Cover revocation algorithms; these
algorithms are very flexible and work for any number of revoked
users. The schemes require storage at the receiver of $\log N$ and
$\frac{1}{2} \log^2 N$ keys respectively ($N$ is the total number
of users), and in order to revoke $r$ users the required message
lengths are of $r \log N$ and $2r$ keys respectively. We also
provide a general traitor tracing mechanism that can be
integrated with any Subset-Cover revocation scheme that satisfies
a ``bifurcation property''. This mechanism does not need an a
priori bound on the number of traitors and does not expand the
message length by much compared to the revocation of the same set
of traitors.
The main improvements of these methods over previously suggested
methods, when adapted to the stateless scenario, are: (1) reducing
the message length to $O(r)$ regardless of the coalition
size while maintaining a single decryption at the user's end, and (2)
providing a seamless integration between revocation and
tracing, so that the tracing mechanism does not require any change
to the revocation algorithm.
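The abstract does not spell the two algorithms out; the sketch below implements the simpler of the two flavours usually associated with this framework, the Complete Subtree method (about $\log N$ keys per receiver), as a hedged reconstruction from general knowledge. The more compact second algorithm (Subset Difference) and the tracing mechanism are not shown.

```python
# Sketch of the Complete Subtree method, one simple instance of the Subset-Cover
# framework described above (reconstructed from general knowledge).
# Users are the leaves of a complete binary tree stored heap-style: root = 1,
# children of v are 2v and 2v+1; user u sits at leaf N + u.

def complete_subtree_cover(N, revoked):
    """Return node ids whose subtrees cover exactly the non-revoked leaves."""
    if not revoked:
        return [1]                          # a single subset: everybody
    # Steiner tree: all nodes on a path from some revoked leaf to the root.
    steiner = set()
    for u in revoked:
        v = N + u
        while v >= 1:
            steiner.add(v)
            v //= 2
    # Cover: children of Steiner nodes that are not themselves in the Steiner tree.
    cover = []
    for v in steiner:
        if v < N:                           # internal node
            for child in (2 * v, 2 * v + 1):
                if child not in steiner:
                    cover.append(child)
    return cover

def leaves_of(v, N):
    lo, hi = v, v
    while lo < N:
        lo, hi = 2 * lo, 2 * hi + 1
    return set(u - N for u in range(lo, hi + 1))

N = 16                                      # number of users (leaves)
revoked = {3, 9, 10}
cover = complete_subtree_cover(N, revoked)
covered = set().union(*(leaves_of(v, N) for v in cover))
assert covered == set(range(N)) - revoked   # exactly the non-revoked users
print(cover)
```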
Efficient Zero-knowledge Authentication Based on a Linear Algebra Problem MinRank
A Zero-knowledge protocol provides provably secure entity authentication based on a hard computational problem. Among many schemes proposed since 1984, the most practical rely on factoring and discrete log, but there are also practical schemes based on NP-hard problems.
Among them, the problem SD of decoding linear codes
is, in spite of some 30 years of research effort, still exponential.
We study a more general problem called MinRank that generalizes SD and contains also other well known hard problems.
MinRank is also used in cryptanalysis of several public key cryptosystems such as birational schemes (Crypto'93), HFE (Crypto'99), GPT cryptosystem (Eurocrypt'91), TTM (Asiacrypt'2000) and Chen's authentication scheme (1996).
We propose a new Zero-knowledge scheme based on MinRank.
We prove it to be Zero-knowledge by black-box simulation.
An adversary able to cheat with a given MinRank instance
is either able to solve it, or is able to compute a collision on a given hash function.
MinRank is one of the most efficient schemes based on NP-complete problems. It can be used to prove in Zero-knowledge a solution to any problem described by multivariate equations. We also present a version with a public key shared by a few users, that allows anonymous group signatures (a.k.a. ring signatures).
On the Security of the SPEKE Password-Authenticated Key Exchange Protocol
In the most strict formal definition of security for
password-authenticated key exchange, an adversary can test at most one
password per impersonation attempt. We propose a slightly relaxed
definition which restricts an adversary to testing at most a constant
number of passwords per impersonation attempt. This definition seems
useful, since there is currently a popular password-authenticated key
exchange protocol called SRP that seems resistant to off-line
dictionary attack, yet does allow an adversary to test two passwords
per impersonation attempt. In this paper we prove (in the random
oracle model) that a certain instantiation of the SPEKE protocol that
uses hashed passwords instead of non-hashed passwords is a secure
password-authenticated key exchange protocol (using our relaxed
definition) based on a new assumption, the Decision
Inverted-Additive Diffie-Hellman assumption. Since this is a new
security assumption, we investigate its security and relation to other
assumptions; specifically we prove a lower bound for breaking this new
assumption in the generic model, and we show that the computational
version of this new assumption is equivalent to the Computational
Diffie-Hellman assumption.
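As a rough picture of the protocol being analyzed, the sketch below runs the basic hashed-password SPEKE flow: both sides derive the Diffie-Hellman base by hashing the password and squaring it into the prime-order subgroup of a safe-prime group, then run ordinary Diffie-Hellman. The 23-element toy group, the hash choice and the key derivation are mine; a real deployment needs a large safe prime, checks for degenerate bases, and key confirmation.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q = 11 prime, so p = 23 is a safe prime.
# Real SPEKE uses a large safe prime; these values are for illustration only.
P = 23
Q = (P - 1) // 2

def speke_base(password: bytes) -> int:
    """Password-derived DH base: hash the password and square it modulo p,
    which lands in the order-q subgroup (the hashed-password variant).
    A real implementation must reject degenerate bases such as 0 and 1."""
    h = int.from_bytes(hashlib.sha256(password).digest(), "big") % P
    return pow(h, 2, P)

def speke_demo(password: bytes) -> str:
    g = speke_base(password)            # both sides compute the same base
    a = secrets.randbelow(Q - 1) + 1    # Alice's ephemeral exponent
    b = secrets.randbelow(Q - 1) + 1    # Bob's ephemeral exponent
    A, B = pow(g, a, P), pow(g, b, P)   # the two exchanged messages
    k_alice, k_bob = pow(B, a, P), pow(A, b, P)
    assert k_alice == k_bob             # shared Diffie-Hellman value
    return hashlib.sha256(k_alice.to_bytes(4, "big")).hexdigest()  # session key

if __name__ == "__main__":
    print(speke_demo(b"correct horse battery staple"))
```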
On the Complexity of Matsui's Attack
Linear cryptanalysis remains the most powerful attack against DES
at this time. Given $2^{43}$ known plaintext-ciphertext pairs,
Matsui expected a complexity of less than $2^{43}$ DES evaluations
in 85% of the cases for recovering the key. In this paper, we
present a theoretical and experimental complexity analysis of this attack, which has been simulated 21 times using the idle
time of several computers. The experimental results suggest a complexity upper-bounded
by $2^{41}$ DES evaluations in 85% of the cases, while more than half of the experiments needed fewer than $2^{39}$ DES evaluations. In addition, we give a detailed theoretical analysis of the attack complexity.
Universally Composable Commitments
We propose a new security measure for commitment protocols, called
{\em universally composable} (UC) commitment. The measure
guarantees that commitment protocols behave like an "ideal commitment
service," even when concurrently composed with an arbitrary set of
protocols. This is a strong guarantee: it implies that security is
maintained even when an unbounded number of copies of the scheme are
running concurrently, it implies non-malleability (not only with
respect to other copies of the same protocol but even with respect
to other protocols), it provides resilience to selective
decommitment, and more.
Unfortunately two-party UC commitment protocols do not exist in
the plain model. However, we construct two-party UC commitment
protocols, based on general complexity assumptions, in the
{\em common reference string model}, where all parties have
access to a common string taken from a predetermined distribution.
The protocols are non-interactive, in the sense that both the
commitment and the opening phases consist of a single message
from the committer to the receiver.
Extending the GHS Weil Descent Attack
In this paper we extend the Weil descent attack
due to Gaudry, Hess and Smart (GHS) to
a much larger class of elliptic curves.
This extended attack still only works for fields of composite
degree over $\F_2$.
The principle behind the extended attack is to use
isogenies to
find a new elliptic curve for which the GHS attack is
effective.
The discrete logarithm problem on the target curve
can be transformed into a discrete logarithm problem
on the new isogenous curve.
One contribution of the paper is to give
an improvement to an algorithm of Galbraith
for constructing isogenies between elliptic curves,
and this is of independent interest in
elliptic curve cryptography.
We conclude that fields of the form $\F_{q^7}$ should be
considered weaker from a cryptographic standpoint than
other fields.
In addition we show that a larger proportion than previously
thought of elliptic curves over $\F_{2^{155}}$ should be
considered weak.
Security Proofs for the RSA-PSS Signature Scheme and Its Variants
We analyze the security of different versions of the adapted
RSA-PSS signature scheme, including schemes with variable salt
lengths and message recovery. We also examine a variant with
Rabin-Williams (RW) as the underlying verification primitive.
Our conclusion is that the security of RSA-PSS and RW-PSS in
the random oracle model can be tightly related to the hardness
of inverting the underlying RSA and RW primitives, at least if
the PSS salt length is reasonably large. Our security proofs
are based on already existing work by Bellare and Rogaway
and by Coron, who examined signature schemes based on the
original PSS encoding method.
Differential Probability of Modular Addition with a Constant Operand
In this article I analyze the exclusive-or differential probability of the function f(X) = A + X (mod 2^n). The result, with regard to differential cryptanalysis, is a better understanding of ciphers that use f(X) as a primitive operation. A simple O(n) algorithm to compute the probability is given.
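The O(n) algorithm itself is not reproduced here, but the quantity it computes is easy to pin down with an exhaustive reference implementation for small word sizes, which is useful as a sanity check against any fast method; this sketch is mine, not the article's.

```python
def dp_add_const(A: int, alpha: int, beta: int, n: int) -> float:
    """Exhaustive xor-differential probability of f(X) = (A + X) mod 2^n:
    Pr_X[ f(X ^ alpha) ^ f(X) == beta ].  O(2^n) reference computation;
    the article's O(n) algorithm is not reproduced here."""
    mask = (1 << n) - 1
    count = 0
    for x in range(1 << n):
        if (((A + x) & mask) ^ ((A + (x ^ alpha)) & mask)) == beta:
            count += 1
    return count / (1 << n)

if __name__ == "__main__":
    # Word size 8, constant A = 0x3B, input difference 0x01.
    for beta in (0x01, 0x03, 0x07):
        print(hex(beta), dp_add_const(0x3B, 0x01, beta, 8))
```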
Black-Box Concurrent Zero-Knowledge Requires $\tilde\Omega(\log n)$ Rounds
We show that any concurrent zero-knowledge protocol for a non-trivial
language (i.e., for a language outside $\BPP$), whose security is
proven via black-box simulation, must use at least
$\tilde\Omega(\log n)$ rounds of interaction. This result achieves a
substantial improvement over previous lower bounds, and is the first
bound to rule out the possibility of constant-round concurrent
zero-knowledge when proven via black-box simulation. Furthermore, the
bound is polynomially related to the number of rounds in the best
known concurrent zero-knowledge protocol for languages in $\NP$.
Elliptic curve Paillier schemes
This paper is concerned with
generalisations of Paillier's
probabilistic encryption scheme
from the integers modulo a square
to elliptic curves over rings.
Paillier himself described
two public key encryption schemes
based on anomalous elliptic curves over rings.
It is argued that these schemes are not secure.
A more natural generalisation of Paillier's scheme to
elliptic curves is given.
A known plaintext attack on the ISAAC keystream generator
Stream ciphers are often used in applications where high speed and low delay are a requirement. The ISAAC keystream generator is a fast software-oriented encryption algorithm. In this paper the security of the ISAAC keystream generator is investigated. A cryptanalytic algorithm is developed for a known plaintext attack where only a small segment of plaintext is assumed to be known.
Keywords. ISAAC. Keystream generator. Cryptanalysis.
Forward-Secure Signatures with Optimal Signing and Verifying
Ordinary digital signatures have an inherent weakness: if the secret key is leaked, then all signatures, even the ones generated before the leak, are no longer trustworthy. Forward-secure digital signatures were recently proposed to address this weakness: they ensure that past signatures remain secure even if the current secret key is leaked.
We propose the first forward-secure signature scheme for which both signing and verifying are as efficient as for one of the most efficient ordinary signature schemes (Guillou-Quisquater): each requiring just two modular exponentiations with a short exponent. All previously proposed forward-secure signature schemes took significantly longer to sign and verify than ordinary signature schemes.
Our scheme requires only fractional increases to the sizes of keys and signatures, and no additional public storage. Like the underlying Guillou-Quisquater scheme, our scheme is provably secure in the random oracle model.
ON THE METHOD OF "XL" AND ITS INEFFICIENCY TO TTM
We will show that the paper "Efficient Algorithms for Solving Overdefined
Systems of Multivariate Polynomial Equations", published
at Eurocrypt 2000 by N. Courtois, A. Shamir, J. Patarin and A. Klimov,
ignores the intersection property at infinity of the system, and that the
program proposed by them cannot achieve their desired results.
In fact, we produce a very simple example to show that in some cases
their program always fails.
The simple ideal cipher system
We address the problem of how to construct ideal cipher systems when the length of the key is much less than the length of the encrypted message. We suggest a new secret-key cipher system in which the message is first transformed into two parts in such a way that the larger part consists of independent and equiprobable letters. Second, the relatively small part is enciphered wholly by the Vernam cipher, whereas only a few bits from the larger part are enciphered. This transformation is based on the fast version of the Elias construction of an unbiased random sequence. The time required for encoding and decoding and the memory size of the encoder and decoder are presented as functions of the ratio of the key length to the message length. The suggested scheme can be applied to sources with unknown statistics.
The order of encryption and authentication for protecting communications (Or: how secure is SSL?)
We study the question of how to generically compose {\em symmetric}
encryption and authentication when building ``secure channels'' for
the protection of communications over insecure networks.
We show that any secure channels protocol designed to work with any combination
of secure encryption (against chosen plaintext attacks) and secure MAC
must use the encrypt-then-authenticate method.
We demonstrate this by showing that the other common methods
of composing encryption and authentication, including the
authenticate-then-encrypt method used in SSL, are not generically secure.
We show an example of an encryption function
that provides (Shannon's) perfect secrecy but when combined with
any MAC function under the authenticate-then-encrypt method
yields a totally insecure protocol (for example, finding passwords
or credit card numbers transmitted under the protection of such protocol
becomes an easy task for an active attacker).
The same applies to the encrypt-and-authenticate method used in SSH.
On the positive side we show that the authenticate-then-encrypt method
is secure if the encryption method in use is either CBC mode (with
an underlying secure block cipher) or a stream cipher (that XORs the
data with a random or pseudorandom pad).
Thus, while we show the generic security of SSL to be broken,
the current standard implementations of the protocol that use
the above modes of encryption are safe.
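For readers who want the recommended composition spelled out, here is a minimal encrypt-then-authenticate sketch (mine, not the paper's): a toy SHA-256 counter keystream stands in for any chosen-plaintext-secure encryption, HMAC-SHA256 stands in for any secure MAC, and the receiver verifies the tag over the ciphertext before decrypting. The generic result above says this ordering is safe for any such pair of primitives, which is exactly what fails for authenticate-then-encrypt.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy pseudorandom pad (SHA-256 in counter mode), standing in for any
    IND-CPA-secure encryption.  Illustration only, not a vetted cipher."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    pad = _keystream(enc_key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, pad))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()   # MAC the ciphertext
    return nonce + ct + tag

def decrypt_then_verify(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):                     # reject before decrypting
        raise ValueError("authentication failed")
    pad = _keystream(enc_key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, pad))

if __name__ == "__main__":
    ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
    blob = encrypt_then_mac(ek, mk, b"card number 4111 1111 1111 1111")
    assert decrypt_then_verify(ek, mk, blob) == b"card number 4111 1111 1111 1111"
```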
Optimistic Asynchronous Multi-Party Contract Signing with Reduced Number of Rounds
Optimistic asynchronous multi-party contract signing protocols have
received attention in recent years as a compromise between efficient
protocols and protocols avoiding a third party as a bottleneck of
security. ``Optimistic'' roughly means: in case all participants are
honest and receive the messages from the other participants as
expected, the third party is not involved at all. The best solutions
known so far terminate within $t+2$ rounds in the optimistic case, for
any fixed set of $n$ signatories and allowing up to $t<n$ dishonest
signatories. The protocols presented here achieve a major improvement
compared to the state of the art: The number of rounds $R$ is reduced
from $O(t)$ to $O(1)$ for all $n \ge 2t+1$, and for $n < 2t+1$, $R$
grows remarkably slowly compared with numbers of rounds in $O(t)$: If
$t \approx \frac{k}{k+1} n $ then $R \approx 2k$.
Cryptanalysis of the Vesta-2M Stream Cipher
In this paper the security of the stream cipher Vesta-2M is investigated. A cryptanalytic algorithm is developed for a known plaintext attack where only a small segment of plaintext is assumed to be known. The complexity of the attack is estimated as the time of searching through the square root of all possible initial states.
Simple Forward-Secure Signatures From Any Signature Scheme
In Crypto'99, Bellare and Miner introduced {\em forward-secure signatures}
as digital signature schemes with the attractive property that exposure
of the signing key at a certain time period does not allow for the forgery
of signatures from previous time periods.
That paper presented the first full design of an efficient forward-secure
signature scheme, but left open the question of building efficient
and practical schemes based on standard signatures such as RSA or DSS.
In particular, they called for the development of schemes where the
main size-parameters (namely, the size of the private key, public key,
and signature) do not grow with the total number of periods for which
the public key is to be in use.
We present an efficient and extremely simple construction of forward-secure
signatures based on {\em any} regular signature scheme (e.g., RSA and DSS);
the resultant signatures enjoy size-parameters that are independent of the
number of periods (except for the inclusion of an index to the period
in which a signature is issued). The only parameter that grows (linearly)
with the number of periods is the total size of local non-secret
memory of the signer. The forward-security of our schemes is directly
implied by the unforgeability property of the underlying signature
scheme and it requires no extra assumptions.
Our approach can also be applied to some signature schemes with special
properties, such as undeniable signatures, to obtain forward-secure
signatures that still enjoy the added special property.
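One natural way to picture a generic construction of this kind (a sketch under my own simplifying assumptions, not necessarily the authors' exact scheme): generate one key pair of the base scheme per period, certify every per-period public key under a long-term key at setup, keep the certificates in non-secret storage (the linearly growing component mentioned above), and erase each period's secret key when the period ends. For brevity the sketch keeps all per-period secret keys in memory, which a real scheme would avoid by deriving them from a short secret state; Ed25519 from the third-party `cryptography` package plays the role of the ordinary signature scheme.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def _raw(pub) -> bytes:
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

class ForwardSecureSigner:
    """Toy generic forward-secure signer: one Ed25519 key pair per period,
    each per-period public key certified under a long-term key at setup.
    Only the certificates (non-secret) grow with the number of periods."""

    def __init__(self, periods: int):
        root = Ed25519PrivateKey.generate()
        self.root_pub = root.public_key()            # the scheme's public key
        self._period_keys = [Ed25519PrivateKey.generate() for _ in range(periods)]
        self.certs = [root.sign(t.to_bytes(4, "big") + _raw(sk.public_key()))
                      for t, sk in enumerate(self._period_keys)]
        del root                                     # drop the long-term secret after setup
        self.period = 0

    def advance(self):
        self._period_keys[self.period] = None        # erase the expired period key
        self.period += 1

    def sign(self, message: bytes):
        sk = self._period_keys[self.period]
        return (self.period, _raw(sk.public_key()),
                self.certs[self.period], sk.sign(message))

def fs_verify(root_pub: Ed25519PublicKey, message: bytes, sig) -> bool:
    period, pub, cert, inner = sig
    try:
        root_pub.verify(cert, period.to_bytes(4, "big") + pub)           # certificate check
        Ed25519PublicKey.from_public_bytes(pub).verify(inner, message)   # period signature
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    signer = ForwardSecureSigner(periods=8)
    sig = signer.sign(b"hello")
    assert fs_verify(signer.root_pub, b"hello", sig)
    signer.advance()                                 # the period-0 secret key is now erased
```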
Solving Elliptic Curve Discrete Logarithm Problems Using Weil Descent
We provide a concrete instance of the discrete logarithm problem
on an elliptic curve over F_{2^{155}} which resists all previously
known attacks, but which can be solved with modest computer
resources using the Weil descent attack methodology of Frey. We
report on our implementation of index-calculus methods for
hyperelliptic curves over characteristic two finite fields, and
discuss the cryptographic implications of our results.
Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels
We present a formalism for the analysis of key-exchange protocols
that combines previous definitional approaches and results in a definition
of security that enjoys some important analytical benefits:
(i) any key-exchange protocol that satisfies the security definition
can be composed with symmetric encryption and authentication functions
to provide provably secure communication channels;
and
(ii) the definition allows for simple modular proofs of security:
one can design and prove security of key-exchange protocols in an
idealized model where the communication links are perfectly authenticated,
and then translate them using general tools to obtain security in
the realistic setting of adversary-controlled links.
We exemplify the usability of our results by applying them to obtain the
proof of two main classes of key-exchange protocols, Diffie-Hellman and
key-transport, authenticated via symmetric or asymmetric techniques.
Further contributions of the paper include the formalization of
``secure channels'' in the context of key-exchange protocols, and
establishing sufficient conditions on the symmetric encryption and
authentication functions to realize these channels.
Robust Software Tokens: Towards Securing a Digital Identity
This paper presents a new method called the robust software token for providing users with a stable and portable container in which a private key is stored and kept from adversaries, by simple software-only techniques. The proposed scheme is comparable with related notable work such as the cryptographic camouflage scheme and the networked cryptographic device, but is equipped with several advantages: (1) it uniquely supports both closed and open domains on public key infrastructures, (2) it supports more protocol setups, and (3) it is more efficient than the others. This paper handles the new RSA-based scheme only. The DSA-based scheme sharing the basic idea can be found in our previous work.
Flaws in differential cryptanalysis of Skipjack
This paper is motivated by some results presented by
Knudsen, Robshaw and Wagner at Crypto'99, which
described many attacks on reduced versions of Skipjack,
some of them erroneous.
Differential cryptanalysis is based on distinguishers:
any attack should prove that the events that
trigger the analysis do not have the same probability
for the cipher as for a random function.
In particular, the composition of differentials
for successive parts of a cipher should be done
very carefully to lead to an attack.
This revised version of the paper includes the exact
computations of some probabilities and repairs the
attack on the first half of Skipjack.
EMpowering Side-Channel Attacks
In this paper, we report preliminary results obtained
as a result of a systematic investigation of leakage of compromising
information via EM emanations from chipcards and other devices. Our
findings show that the EM side--channel is more powerful than other
side--channels such as timing and power analysis. Specifically, in
some cases, one can obtain much more compromising information about
computations and one can use this information to defeat the protection
provided by countermeasures to the other side--channel attacks.
Anti-persistence: History Independent Data Structures
Many data structures give away much more information than they
were intended to. Whenever privacy is important, we need to be
concerned that it might be possible to infer information from the
memory representation of a data structure that is not available
through its ``legitimate'' interface. Word processors that
quietly maintain old versions of a document are merely the most
egregious example of a general problem.
We deal with data structures whose current memory representation does
not reveal their history. We focus on dictionaries, where this means
revealing nothing
about the order of insertions or deletions. Our first algorithm is
a hash table based on open addressing,
allowing $O(1)$ insertion and search. We also present
a history independent dynamic perfect hash table that uses space
linear in the number of elements inserted and has expected amortized
insertion
and deletion time $O(1)$. To solve the dynamic perfect hashing
problem we devise a general scheme for history independent
memory allocation.
For fixed-size records this is quite efficient, with insertion and
deletion both linear in the size of the record. Our variable-size
record
scheme is efficient enough for dynamic perfect hashing but not for
general use. The main open problem we leave is whether it is possible
to implement a variable-size record scheme with low overhead.
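To convey what history independence means operationally, here is a much simpler sketch than the paper's constructions (the code and the ordered-probing choice are mine): an open-addressing table using ordered linear probing, where slot occupancy is determined by a fixed priority order on keys, so the final memory layout depends only on the set of keys present and not on the order in which they were inserted. Insertion and search only; supporting deletion and resizing history-independently is precisely where the paper's techniques come in.

```python
class OrderedLinearProbingTable:
    """Open-addressing table whose memory layout depends only on the set of
    stored keys, not on the insertion order: along every probe path, keys
    are kept sorted by a fixed priority.  Insert/search only; a sketch of
    the history-independence idea, not the paper's construction."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.slots = [None] * capacity

    def _home(self, key):
        return hash(key) % self.capacity

    def _priority(self, key):
        # Any fixed, time-independent total order on keys works here.
        return (hash(key), repr(key))

    def insert(self, key):
        cur, i = key, self._home(key)
        for _ in range(self.capacity):
            occ = self.slots[i]
            if occ is None:
                self.slots[i] = cur
                return
            if occ == cur:
                return                      # already present
            if self._priority(occ) > self._priority(cur):
                # Displace the lower-priority occupant and keep inserting it.
                self.slots[i], cur = cur, occ
            i = (i + 1) % self.capacity
        raise RuntimeError("table full")

    def __contains__(self, key):
        i = self._home(key)
        for _ in range(self.capacity):
            occ = self.slots[i]
            if occ is None:
                return False
            if occ == key:
                return True
            i = (i + 1) % self.capacity
        return False

if __name__ == "__main__":
    t1, t2 = OrderedLinearProbingTable(), OrderedLinearProbingTable()
    for k in [3, 67, 131, 5]:
        t1.insert(k)
    for k in [5, 131, 67, 3]:               # same set, different insertion order
        t2.insert(k)
    assert t1.slots == t2.slots             # identical memory representation
```

Note that 3, 67 and 131 all collide modulo 64, so the example actually exercises the displacement rule.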
Forward-Security in Private-Key Cryptography
This paper provides a comprehensive treatment of
forward-security in the context of shared-key based cryptographic primitives,
as a practical means to mitigate the damage caused by key-exposure. We provide
definitions of security, practical proven-secure constructions, and
applications for the main primitives in this area. We identify forward-secure
pseudorandom bit generators as the central primitive, providing several
constructions and then showing how forward-secure message authentication
schemes and symmetric encryption schemes can be built based on standard schemes
for these problems coupled with forward-secure pseudorandom bit generators. We
then apply forward-secure message authentication schemes to the problem of
maintaining secure access logs in the presence of break-ins.
Composition and Efficiency Tradeoffs for Forward-Secure Digital Signatures
Forward-secure digital signatures, initially proposed by Anderson in
CCS 97 and formalized by Bellare and Miner in Crypto 99, are signature
schemes which enjoy the additional guarantee that a compromise of the
secret key at some point in time does not help forge signatures
allegedly signed in an earlier time period. Consequently, if the
secret key is lost, then the key can be safely revoked without
invalidating previously-issued signatures. Since the introduction of
the concept, several forward-secure signature schemes have been
proposed, with varying performance both in terms of space and time.
Which scheme is most useful in practice typically depends on the
requirements of the specific application.
In this paper we propose and study some general composition operations
that can be used to combine existing signature schemes (whether
forward-secure or not) into new forward-secure signature schemes. Our
schemes offer interesting trade-offs between the various efficiency
parameters, achieving a greater flexibility in accommodating the
requirements of different applications. As an extension of our
techniques, we also construct the first efficient forward-secure
signature scheme where the total number of time periods for which the
public key is used does not have to be fixed in advance. The scheme
can be used for practically unbounded time, and the performance
depends (minimally) only on the time elapsed so far.
Our scheme achieves excellent performance overall, is very competitive
with previous schemes with respect to all parameters, and outperforms
each of the previous schemes in at least one parameter. Moreover, the
scheme can be based on any underlying digital signature scheme, and
does not rely on specific assumptions. Its forward security is proven
in the standard model, without using a random oracle.
Dual of New Method for Upper Bounding the Maximum Average Linear Hull Probability for SPNs
In [3], we present a new algorithm for computing an upper bound on
the maximum average linear hull probability (MALHP) for the SPN
symmetric cipher structure, a value required to make claims about
provable security against linear cryptanalysis. This algorithm
improves on existing work in that the resulting upper bound is a
function of the number of encryption rounds (other upper bounds
known to the authors are not), and moreover, it can be computed
for an SPN with any linear transformation layer (the best previous
result, that of Hong et al. [4], applies only to SPNs with highly
diffusive linear transformations).
It is well known that there exists a duality between linear
cryptanalysis and differential cryptanalysis which allows certain
results related to one of the attacks to be translated into the
corresponding results for the other attack [1,5]. Since this
duality applies to our work in [3], we immediately obtain an
algorithm for upper bounding the maximum average differential
probability (MADP) for SPNs (required to make claims about
provable security against differential cryptanalysis).
Note: In what follows, we assume familiarity with the notation
and results of [3].
Efficient and Non-Interactive Non-Malleable Commitment
We present new constructions of non-malleable commitment schemes, in
the public parameter model (where a trusted party makes parameters
available to all parties), based on the discrete logarithm or RSA
assumptions. The main features of our schemes are: they achieve
near-optimal communication for arbitrarily-large messages and are
non-interactive. Previous schemes either required (several rounds of)
interaction or focused on achieving non-malleable commitment based on
general assumptions and were thus efficient only when committing to a
single bit. Although our main constructions are for the case of
perfectly-hiding commitment, we also present a communication-efficient,
non-interactive commitment scheme (based on general assumptions) that
is perfectly binding.
Efficient Password-Authenticated Key Exchange Using Human-Memorable Passwords
We present an efficient password-authenticated key exchange protocol
which is secure against off-line dictionary attacks even when users
choose passwords from a very small space (say, a dictionary of English
words). We prove security in the standard model under the decisional
Diffie-Hellman assumption, assuming public parameters generated by a
trusted party. Compared to the recent work of Goldreich and Lindell
(which was the first to give a secure construction, under general
assumptions, in the standard model), our protocol requires only 3
rounds and is efficient enough to be used in practice.
On the Power of Nonlinear Secret-Sharing
A secret-sharing scheme enables a dealer to distribute a secret
among n parties such that only some predefined authorized sets of
parties will be able to reconstruct the secret from their shares.
The (monotone) collection of authorized sets is called an access
structure, and is freely identified with its characteristic monotone
function f:{0,1}^n --> {0,1}. A family of secret-sharing schemes is
called efficient if the total length of the n shares is polynomial
in n. Most previously known secret-sharing schemes belonged to a
class of linear schemes, whose complexity coincides with the
monotone span program size of their access structure. Prior to this
work there was no evidence that nonlinear schemes can be
significantly more efficient than linear schemes, and in particular
there were no candidates for schemes efficiently realizing access
structures which do not lie in NC.
The main contribution of this work is the construction of two
efficient nonlinear schemes:
(1) A scheme with perfect privacy whose access structure is
conjectured not to lie in NC;
(2) A scheme with statistical privacy whose access structure is
conjectured not to lie in P/poly.
Another contribution is the study of a class of nonlinear schemes,
termed quasi-linear schemes, obtained by composing linear schemes
over different fields. We show that while these schemes are possibly
(super-polynomially) more powerful than linear schemes, they cannot
efficiently realize access structures outside NC.
On multivariate signature-only public key cryptosystems
In a paper published at Asiacrypt 2000, a signature scheme that (apparently) cannot be abused for encryption was proposed.
The problem is highly non-trivial, and every solution should be looked upon with caution.
What is especially hard to achieve is to prevent the public key from leaking information that could be used as a "shadow" secondary public key.
In the present paper we argue that the problem has
many natural solutions within the framework of multivariate cryptography.
First of all, it seems that virtually any non-injective multivariate public key is inherently unusable for encryption.
Unfortunately, a large amount of leakage is inherent to multivariate cryptosystems. Though it may appear hopeless at first sight, we use this very property to remove the leakage. In our new scenario the Certification Authority (CA) makes extensive modifications of the public key, such that the user can still use the internal trapdoor
but has no control over any publicly verifiable property of the actual public key equations published by the CA.
Thus we propose a very large class of multivariate non-encryption
PKI schemes with many parameters $q,d,h,v,r,u,f,D$.
The paper is also of independent interest, as it contains all variants of the HFE trapdoor public key cryptosystem.
We give numerous and precise security claims that HFE achieves or appears to achieve and establish some provable security relationships.
Efficient Encryption for Rich Message Spaces Under General Assumptions
We present a new family of public-key encryption schemes which combine modest computational demands with provable security guarantees under only general assumptions. The schemes may be realized with any one-way trapdoor permutation, and provide a notion of security corresponding to semantic security under the condition that the message space has sufficient entropy. Furthermore, these schemes can be implemented with very few applications of the underlying one-way permutation: schemes which provide security for message spaces in $\{0,1\}^n$ with minimum entropy $n - \ell$ can be realized with $\ell + w(k)\log k$ applications of the underlying one-way trapdoor permutation. Here $k$ is the security parameter and $w(k)$ is any function which tends to infinity. In comparison, extant systems offering full semantic security require roughly $n$ applications of the underlying one-way trapdoor permutation. Finally, we give a simplified proof of a fundamental ``elision lemma'' of Goldwasser and Micali.
A Block-Cipher Mode of Operation for Parallelizable Message Authentication
We define and analyze a
simple and fully parallelizable block-cipher mode of operation
for message authentication.
Parallelizability does not come at the
expense of serial efficiency: in a conventional, serial
environment, the algorithm's speed is within
a few percent of the (inherently sequential) CBC~MAC.
The new mode, PMAC, is deterministic,
resembles a standard mode of operation
(and not a Carter-Wegman MAC),
works for strings of any bit length,
employs a single block-cipher key,
and uses just $\max\{1, \lceil |M|/n \rceil\}$
block-cipher calls to MAC any string $M$ using an
$n$-bit block cipher.
We prove PMAC secure,
quantifying an adversary's forgery probability
in terms of the quality of the block cipher as a
pseudorandom permutation.
OCB Mode
This paper was prepared for NIST, which is considering new
block-cipher modes of operation. It describes a parallelizable
mode of operation that simultaneously provides both privacy
and authenticity. "OCB mode" encrypts-and-authenticates
an arbitrary message $M\in\bits^*$ using only $\lceil |M|/n\rceil + 2$
block-cipher invocations, where $n$ is the block length of the
underlying block cipher. Additional overhead is small.
OCB refines a scheme, IAPM, suggested by Jutla [IACR-2000/39], who
was the first to devise an authenticated-encryption mode with minimal
overhead compared to standard modes. Desirable new properties of
OCB include: very cheap offset calculations; operating on an arbitrary
message $M\in\bits^*$; producing ciphertexts of minimal length;
using a single underlying cryptographic key; making a nearly optimal number
of block-cipher calls; avoiding the need for a random IV; and rendering it
infeasible for an adversary to find "pretag collisions". The paper
provides a full proof of security for OCB.
Cryptanalysis of some elliptic curve based cryptosystems of Paillier
Two public key encryption schemes
based on anomalous elliptic curves over rings
are studied.
It is argued that these schemes are not secure.
Secure Multiparty Computation of Approximations
Approximation algorithms can sometimes be used to obtain efficient
solutions where no efficient exact computation is known. In
particular, approximations are often useful in a distributed setting
where the inputs are held by different parties and are extremely
large. Furthermore, for some applications, the parties want to
cooperate to compute a function of their inputs without revealing more
information than they have to.
Suppose the function $\fhat$ is an approximation to the function $f$.
Secure multiparty computation of $f$ allows the parties to compute $f$
without revealing more than they have to, but it requires some
additional overhead in computation and communication. Hence, if
computation of $f$ is inefficient or just efficient enough to be
practical, then secure computation of $f$ may be impractically
expensive. Furthermore, a secure computation of $\fhat$ is not
necessarily as private as a secure computation of $f$, because the
output of $\fhat$ may reveal more information than the output of $f$.
In this paper, we present definitions and protocols of secure
multiparty approximate computation that show how to realize most of
the cost savings available by using $\fhat$ instead of $f$ without
losing the privacy of a secure computation of $f$.
We make three contributions. First, we give formal definitions of
secure multiparty approximate computations. Second, we present an
efficient, sublinear-communication, private approximate computation
for the Hamming distance; we also give an efficient,
polylogarithmic-communication solution for the $L^{2}$ distance
in a relaxed model. Finally, we give an efficient private
approximation of the permanent and other related \#P-hard problems.
Robustness for Free in Unconditional Multi-Party Computation
We present a very efficient multi-party computation protocol
unconditionally secure against an active adversary. The security is
maximal, i.e., active corruption of up to $t<n/3$ of the $n$ players is
tolerated. The communication complexity for securely evaluating a circuit
with $m$ multiplication gates over a finite field is $O(mn^2)$ field
elements, including the communication required for simulating broadcast.
This corresponds to the complexity of the best known protocols for the
passive model, where the corrupted players are guaranteed not to deviate
from the protocol. Even in this model, it seems to be unavoidable that
for every multiplication gate every player must send a value to every
other player, and hence the complexity of our protocol may well be
optimal. The constant overhead factor for robustness is small and the
protocol is practical.
Optimistic Asynchronous Atomic Broadcast
This paper presents a new protocol
for atomic broadcast in an asynchronous network with
a maximal number of Byzantine failures.
It guarantees both safety and liveness without
making any timing assumptions or using any type of failure detector.
Under normal circumstances, the protocol runs in an optimistic mode,
with extremely low message and computational complexity -- essentially,
just performing a Bracha broadcast for each request.
In particular, no
potentially expensive public-key cryptographic operations are used.
In rare circumstances, the protocol may briefly
switch to a pessimistic mode,
where both the message and computational complexity
are significantly higher than in the optimistic mode,
but are
still reasonable.
The Rectangle Attack - Rectangling the Serpent
Serpent is one of the 5 AES finalists. The best attack published so far
analyzes up to 9 rounds. In this paper we present attacks on 7-round,
8-round, and 10-round variants of Serpent. We attack the 7-round variant of
Serpent with all key lengths, and the 8- and 10-round variants with 256-bit keys.
The 10-round attack on the 256-bit key variant is the best published
attack on the cipher. The attack enhances the amplified boomerang attack
and uses better differentials. We also present the best 3-round, 4-round,
5-round and 6-round differential characteristics of Serpent.
Some observations on the theory of cryptographic hash functions
In this paper, we study several issues related to the notion of
``secure'' hash functions. Several necessary conditions
are considered, as well as a popular sufficient condition
(the so-called random oracle model). We study the security of
various problems that are motivated by the notion of a secure
hash function.
These problems are analyzed in the random oracle model,
and we prove that
the obvious trivial algorithms are optimal.
As well, we look closely at
reductions between various problems. In particular,
we consider the important question ``does preimage resistance imply
collision resistance?''. Finally, we study the relationship
of the security of
hash functions built using the Merkle-Damgard construction
to the security of the underlying compression function.
An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation
A credential system is a system in which users can obtain
credentials from organizations and demonstrate possession of these
credentials. Such a system is anonymous when transactions carried out by the
same user cannot be linked. An anonymous credential system is of significant
practical relevance because it is the best means of providing privacy for
users. In this paper we propose a practical anonymous credential system that
is based on the strong RSA assumption and the decisional Diffie-Hellman
assumption modulo a safe prime product and is considerably superior to
existing ones:
(1) We give the first practical solution that allows
a user to unlinkably demonstrate possession of a credential as many times as
necessary without involving the issuing organization.
(2) To prevent misuse of anonymity, our scheme is the first to offer optional
anonymity revocation for particular transactions.
(3) Our scheme offers separability: all organizations can choose their
cryptographic keys independently of each other.
Moreover, we suggest more effective means of preventing users from sharing their
credentials, by introducing {\em all-or-nothing} sharing: a user who allows a
friend to use one of her credentials once, gives him the ability to use all of
her credentials, i.e., taking over her identity. This is implemented by a new
primitive, called {\em circular encryption}, which is of independent interest,
and can be realized from any semantically secure cryptosystem in the random
oracle model.
Analysis of a Subset Sum Randomizer
In [5] an efficient pseudo-random number generator (PRNG) with
provable security is described. Its security is based on the
hardness of the subset sum or knapsack problem. In this paper
we refine these ideas to design a PRNG with independent seed and
output generation. This independence allows for greater
parallelism, design flexibility, and possibly greater security.
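The generator of [5] is not reproduced here, but the underlying hardness assumption is easy to illustrate: a textbook subset-sum (knapsack) generator maps an n-bit state to an m-bit output by summing, modulo 2^m, the public weights selected by the bits of the state. The sketch below (mine) shows only this core mapping, not the refined design with independent seed and output generation described above.

```python
import secrets

def subset_sum_map(state: int, weights, n: int, m: int) -> int:
    """Core subset-sum mapping: add, modulo 2^m, the weights whose index
    bits are set in the n-bit state.  Its pseudorandomness rests on the
    hardness of subset sum; the paper's generator adds further structure."""
    acc = 0
    for i in range(n):
        if (state >> i) & 1:
            acc = (acc + weights[i]) % (1 << m)
    return acc

if __name__ == "__main__":
    n, m = 64, 80                                        # m > n gives expansion
    weights = [secrets.randbits(m) for _ in range(n)]    # public random weights
    seed = secrets.randbits(n)                           # secret seed/state
    print(hex(subset_sum_map(seed, weights, n, m)))
```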
On adaptive vs. non-adaptive security of multiparty protocols
Security analysis of multiparty cryptographic protocols distinguishes
between two types of adversarial settings: In the non-adaptive
setting, the set of corrupted parties is chosen in advance, before the
interaction begins. In the adaptive setting, the adversary chooses
who to corrupt during the course of the computation. We study the
relations between adaptive security (i.e., security in the adaptive
setting) and non-adaptive security, according to two definitions and
in several models of computation. While affirming some prevailing
beliefs, we also obtain some unexpected results. Some highlights of
our results are:
o According to the definition of Dodis-Micali-Rogaway (which is set in
the information-theoretic model), adaptive and non-adaptive security
are equivalent. This holds for both honest-but-curious and Byzantine
adversaries, and for any number of parties.
o According to the definition of Canetti, for honest-but-curious
adversaries, adaptive security is equivalent to non-adaptive
security when the number of parties is logarithmic, and is strictly
stronger than non-adaptive security when the number of parties is
super-logarithmic. For Byzantine adversaries, adaptive security is
strictly stronger than non-adaptive security, for any number of
parties.
Efficient Traitor Tracing Algorithms using List Decoding
We apply powerful, recently discovered techniques for the list decoding of error-correcting codes to the problem of efficiently tracing traitors. Traitor tracing schemes have been extensively studied for use as a piracy deterrent. In a widely studied model for protecting digital content, each user in the system is associated with a unique set of symbols. For example, the sets may be used to install a software CD or decrypt pay-TV content. The assignment of sets is done in such a way that if a bounded collection of sets is used to form a new set to enable piracy, at least one of the traitor sets can be identified by applying a traitor tracing algorithm to the newly formed set. Much work has focused on methods for constructing such traceability schemes, but the complexity of the traitor tracing algorithms has received little attention. A widely used traitor tracing algorithm, the TA algorithm, has a running time of $O(N)$ in general, where $N$ is the number of sets in the system (e.g., the number of copies of the CD), and therefore is inefficient for large populations. In this paper we use a coding theoretic approach to produce traceability schemes for which the TA algorithm is very fast. We show that when suitable error-correcting codes are used to construct traceability schemes, and fast list decoding algorithms are used to trace, the run time of the TA algorithm is polynomial in the codeword length. We also use the strength of the error-correcting code approach to construct traceability schemes with more efficient algorithms for finding all possible traitor coalitions. Finally, we provide evidence that amongst traceability schemes in general, TA traceability schemes are the most likely to be amenable to efficient tracing methods.
An observation regarding Jutla's modes of operation
Recently, Jutla suggested two new modes of operation for block ciphers. These modes build on traditional CBC and ECB
modes, respectively, but add to them masking of the outputs and inputs. Jutla proved that these masking operations considerably
strengthen CBC and ECB modes. In particular, together with a simple checksum, the modified modes ensure not only confidentiality, but
also authenticity. Similar modes were also suggested by Gligor and Donescu and by Rogaway.
In Jutla's proposal (as well as in some of the other proposals), the masks themselves are derived from an IV via the same block
cipher as used for the encryption (perhaps with a different key). In this work we note, however, that the function for deriving these masks
need not be cryptographic at all. In particular, we prove that a universal hash function (à la Carter-Wegman) is sufficient for this
purpose.
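As a reminder of what a Carter-Wegman style hash looks like in this role, the sketch below derives per-block masks from a short key with plain modular arithmetic and no block-cipher call. The concrete family $h_{a,b}(x) = (a x + b) \bmod p$ is my illustration of a pairwise-independent family; whether a particular family meets the mode's exact requirements is what the paper's proof addresses, and is not claimed here.

```python
import secrets

P = (1 << 127) - 1   # a Mersenne prime, so reduction modulo P is cheap

def cw_keygen():
    """Key for the pairwise-independent family h_{a,b}(x) = (a*x + b) mod P."""
    return secrets.randbelow(P - 1) + 1, secrets.randbelow(P)

def cw_hash(key, x: int) -> int:
    a, b = key
    return (a * x + b) % P

if __name__ == "__main__":
    key = cw_keygen()
    masks = [cw_hash(key, i) for i in range(4)]   # masks for blocks 0..3, no cipher call
    print([hex(m) for m in masks])
```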
Timed-Release Cryptography
Let $n$ be a large composite number. Without factoring $n$, the
validation of $a^{2^t} (\bmod \, n)$ given $a$, $t$ with $\gcd(a, n) =
1$ and $t < n$ can be done in $t$ squarings modulo $n$. For $t \ll n$
(e.g., $n > 2^{1024}$ and $t < 2^{100}$), no lower complexity than $t$
squarings is known to fulfill this task (even considering massive
parallelisation). Rivest et al. suggested using such constructions as
good candidates for realising timed-release crypto problems.
We argue the necessity for zero-knowledge proof of the correctness of
such constructions and propose the first practically efficient
protocol for a realisation. Our protocol proves, in $\log_2 t$
standard crypto operations, the correctness of $(a^e)^{2^t}
(\bmod\,n)$ with respect to $a^e$ where $e$ is an RSA encryption
exponent. With such a proof, a {\em Timed-release RSA Encryption} of a
message $M$ can be given as $a^{2^t} M (\bmod \,n)$ with the
assertion that the correct decryption of the RSA ciphertext $M^e
(\bmod \, n)$ can be obtained by performing $t$ squarings modulo $n$
starting from $a$. {\em Timed-release RSA signatures} can be
constructed analogously.
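The construction referred to above is easy to state in code. The sketch below (toy parameters, mine) computes $a^{2^t} \bmod n$ both the slow way, by $t$ sequential squarings, and the fast way available to whoever knows the factorization of $n$ (reduce the exponent $2^t$ modulo $\varphi(n)$), then uses the value to mask and unmask a message as in the timed-release encryption described above. The zero-knowledge proof of correctness, which is the paper's contribution, is not sketched.

```python
import secrets
from math import gcd

def slow_open(a: int, t: int, n: int) -> int:
    """Compute a^(2^t) mod n by t sequential squarings (no trapdoor)."""
    x = a % n
    for _ in range(t):
        x = (x * x) % n
    return x

def fast_open(a: int, t: int, p: int, q: int) -> int:
    """Same value using the factorization n = p*q: reduce 2^t modulo phi(n)."""
    n, phi = p * q, (p - 1) * (q - 1)
    return pow(a, pow(2, t, phi), n)

if __name__ == "__main__":
    # Toy parameters; a real puzzle uses a large RSA modulus and a huge t.
    # Requires Python 3.8+ for the modular inverse pow(x, -1, n).
    p, q = 1000003, 999983
    n, t = p * q, 50_000
    a = secrets.randbelow(n - 2) + 2
    while gcd(a, n) != 1:
        a = secrets.randbelow(n - 2) + 2
    assert slow_open(a, t, n) == fast_open(a, t, p, q)

    # Timed-release encryption of a message M < n: ciphertext a^(2^t) * M mod n.
    M = 123456789
    C = (fast_open(a, t, p, q) * M) % n                     # sender uses the trapdoor
    recovered = (C * pow(slow_open(a, t, n), -1, n)) % n    # receiver grinds t squarings
    assert recovered == M
```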