All papers in 2001 (Page 2 of 113 results)
Digitally Watermarking RSA Moduli
The moduli used in RSA (see \cite{rsa}) can be generated by many different sources.
The generator of a modulus knows its factorization and therefore has the ability to forge signatures or break any system based on that modulus.
If a modulus and the RSA parameters associated with it were generated by a reputable source,
the system would have higher value than if the parameters were generated by an unknown entity.
An RSA modulus is digitally marked, or digitally trademarked, if the generator and other identifying features of the modulus can be identified, and possibly verified, from the modulus itself.
The basic concept of digitally marking an RSA modulus is to fix the upper bits of the modulus to a tag encoding this identifying information.
Thus anyone who sees the public modulus can tell who generated it and who the generator believes the intended user/owner of the modulus is.
Two types of trademarking will be described here.
The first is simpler but does not give verifiable trademarking.
The second is more complex, but allows for verifiable trademarking of RSA moduli.
The second part of this paper describes how to generate an RSA modulus with fixed upper bits.
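To make the fixed-upper-bits idea concrete, here is a minimal sketch of one standard way to force a tag into the top bits of N = pq: pick p, aim q at target/p, and retry if rounding disturbs the tag. The function name and parameters are illustrative, and this is not claimed to be the paper's own construction (that is the subject of its second part).

```python
# Sketch: generate an RSA modulus whose upper bits equal a chosen tag.
# Illustrative only; real key generation needs safe sizes and vetted primes.
import secrets
from sympy import nextprime

def modulus_with_tag(tag: int, tag_bits: int, n_bits: int):
    """Return (p, q) with N = p*q of n_bits bits whose top tag_bits equal tag."""
    assert tag >> (tag_bits - 1) == 1, "tag must have its top bit set"
    while True:
        p = nextprime(secrets.randbits(n_bits // 2) | (1 << (n_bits // 2 - 1)))
        # Target N: the tag in the upper bits, random filler below.
        target = (tag << (n_bits - tag_bits)) | secrets.randbits(n_bits - tag_bits)
        q = nextprime(target // p)
        N = p * q
        # nextprime nudged q, so check that the tag survived the perturbation.
        if N.bit_length() == n_bits and N >> (n_bits - tag_bits) == tag:
            return p, q

p, q = modulus_with_tag(tag=0xC0FFEE, tag_bits=24, n_bits=512)
print(hex(p * q)[:10])   # prints 0xc0ffee...
```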
Ciphers with Arbitrary Finite Domains
We introduce the problem of enciphering members of a finite set $M$,
where $k = |M|$ is arbitrary (in particular, it need not be a power
of two). We want to achieve this goal starting from a block cipher
(which requires a message space of size $2^n$, for some $n$).
We look at a few solutions to this problem, focusing on the case
when $M = \{0, 1, \ldots, k-1\}$.
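One classical solution here is cycle-walking: encipher on the smallest convenient power-of-two domain and re-encipher until the value falls back into $\{0, \ldots, k-1\}$. The sketch below is my own toy illustration (the Feistel stand-in is not a secure cipher), not necessarily one of the constructions studied in the paper.

```python
# Cycle-walking a (toy) PRP down to the domain {0, ..., k-1}.
import hashlib

def feistel_prp(x: int, key: bytes, n_bits: int, rounds: int = 4) -> int:
    """Toy Feistel permutation on n_bits-bit values (n_bits even); NOT secure."""
    half = n_bits // 2
    mask = (1 << half) - 1
    L, R = x >> half, x & mask
    for r in range(rounds):
        digest = hashlib.sha256(key + bytes([r]) + R.to_bytes(8, "big")).digest()
        L, R = R, L ^ (int.from_bytes(digest[:8], "big") & mask)
    return (L << half) | R

def encipher(x: int, k: int, key: bytes) -> int:
    """Encipher x in {0,...,k-1}: walk the PRP cycle until back in range."""
    assert 0 <= x < k
    bl = (k - 1).bit_length()
    n_bits = bl + (bl % 2)            # smallest even n with 2^n >= k
    y = feistel_prp(x, key, n_bits)
    while y >= k:                     # expected O(1) steps, since 2^n < 4k
        y = feistel_prp(y, key, n_bits)
    return y

table = [encipher(x, 1000, b"demo-key") for x in range(1000)]
assert sorted(table) == list(range(1000))   # a permutation of {0,...,999}
```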
New Zero-knowledge Undeniable Signatures - Forgery of Signature Equivalent to Factorisation
We propose a new zero-knowledge undeniable signature scheme which is
based on the intractability of computing high-order even powers modulo
a composite. The new scheme has a number of desirable properties: (i)
forgery of a signature (including existential forgery) is proven to be
equivalent to factorisation, (ii) perfect zero-knowledge, (iii)
efficient protocols for signature verification and non-signature
denial, both measured by $O(\log(1/\epsilon))$ multiplications, where
$\epsilon$ bounds the probability of error. For a denial protocol, this
performance is unprecedented.
How to achieve a McEliece-based Digital Signature Scheme
McEliece is one of the oldest known public key cryptosystems.
Though it has been less widely studied than RSA, it is remarkable that
all known attacks are still exponential. It is widely believed that
code-based cryptosystems like McEliece do not allow practical
digital signatures. In the present paper we disprove this belief
and show a way to build a practical signature scheme based on coding
theory. Its security can be reduced in the random oracle model to
the well-known {\em syndrome decoding problem} and the
distinguishability of permuted binary Goppa codes from a random
code. For example, we propose a scheme with signatures of 81 bits
and a binary security workfactor of $2^{83}$.
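The hash-and-sign idea behind such schemes can be miniaturized: hash the message to a syndrome and decode it to a short error vector, which is the signature. The toy below is entirely my own: it swaps the secret permuted Goppa code for the public Hamming(7,4) code, which is perfect, so every syndrome decodes immediately; the real scheme instead re-hashes with a counter until a decodable syndrome appears, and its security rests on hidden code structure omitted here.

```python
# Toy code-based hash-and-sign with Hamming(7,4) (no secrets, no security).
import hashlib

H_COLS = [i + 1 for i in range(7)]   # parity-check column for bit i is i+1

def syndrome_of(msg: bytes) -> int:
    return hashlib.sha256(msg).digest()[0] & 0b111    # 3-bit target syndrome

def sign(msg: bytes) -> int:
    """Return an error pattern of weight <= 1 whose syndrome is hash(msg)."""
    s = syndrome_of(msg)
    return 0 if s == 0 else 1 << (s - 1)    # the unique single-bit error for s

def verify(msg: bytes, e: int) -> bool:
    s = 0
    for i in range(7):
        if (e >> i) & 1:
            s ^= H_COLS[i]                  # accumulate H * e
    return s == syndrome_of(msg)

assert verify(b"hello", sign(b"hello"))
```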
Robust key-evolving public key encryption schemes
We propose a key-evolving paradigm to deal with the key
exposure problem of public key encryption schemes.
The key evolving paradigm is like the one used for
forward-secure digital signature schemes.
Let time be divided into time periods such that
at time period i, the decryptor holds the secret key
SK_i, while the public key PK is fixed during its
lifetime.
At time period i, a sender encrypts a message m as
a ciphertext bound to period i, which can be decrypted only
with the private key SK_i.
When the time makes a transit from period i to i+1, the
decryptor updates its private key from SK_i to SK_{i+1}
and deletes SK_i immediately.
The key-evolving paradigm assures that compromise of the
private key SK_i does not jeopardize messages encrypted
at other time periods.
\par
We propose two key-evolving public key encryption schemes
with z-resilience, such that compromise of up to z private keys
does not affect the confidentiality of messages encrypted in
other time periods.
Assuming that the DDH problem is hard,
we show that one scheme is semantically secure against passive
adversaries and that the other is semantically secure against
adaptive chosen-ciphertext attack in the random oracle
model.
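A sketch of the key-evolving lifecycle may help fix the interface. The toy below uses a one-way hash chain for the update, which gives only forward security, much weaker than the z-resilience of the schemes proposed here; the class and method names are mine, and the XOR "encryption" is a stand-in so the lifecycle is runnable.

```python
# Lifecycle sketch: per-period secret key, one-way update, immediate erasure.
import hashlib, os

class KeyEvolvingReceiver:
    def __init__(self):
        self.period = 0
        self._sk = os.urandom(32)       # SK_0; a real scheme also publishes PK once

    def update(self) -> None:
        """Period i -> i+1: derive SK_{i+1} one-way from SK_i, then erase SK_i."""
        self._sk = hashlib.sha256(b"evolve" + self._sk).digest()
        self.period += 1                # the old SK_i is now unrecoverable here

    def _pad(self, i: int) -> bytes:
        return hashlib.sha256(self._sk + i.to_bytes(4, "big")).digest()

    def encrypt(self, msg: bytes):      # stand-in: real senders use the fixed PK
        return self.period, bytes(a ^ b for a, b in zip(msg, self._pad(self.period)))

    def decrypt(self, ct) -> bytes:
        i, body = ct
        assert i == self.period, "only the current period's key is held"
        return bytes(a ^ b for a, b in zip(body, self._pad(i)))

r = KeyEvolvingReceiver()
ct = r.encrypt(b"hi")                   # ciphertext bound to period 0
assert r.decrypt(ct) == b"hi"
r.update()                              # SK_0 erased; period-0 ciphertexts now lost
```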
Fully Distributed Threshold RSA under Standard Assumptions
The aim of the present article is to propose a fully distributed environment for the RSA scheme.
What we have in mind is highly sensitive applications: even if we are ready to pay a price in
terms of efficiency, we do not want any compromise in the security assumptions that we make.
Recently, Shoup proposed a practical RSA threshold signature scheme that allows a set of
players to share the ability to sign. This scheme can be used for decryption as well.
However, Shoup's protocol assumes a trusted dealer to generate and distribute the keys.
This comes from the fact that the scheme needs a special assumption on the RSA modulus, and
moduli of this kind cannot easily be generated efficiently by many players.
Of course, it is still possible to invoke general results on multiparty computation, but we
cannot hope to design efficient protocols that way. The only practical protocol for generating
RSA moduli in a distributed manner is that of Boneh and Franklin, but it cannot easily be
modified to generate the kind of RSA moduli that Shoup's protocol requires.
The present work takes a different path by proposing a method to enhance the key generation
with some additional properties and revisits the proof of Shoup to work with the resulting
RSA moduli. Both of these enhancements decrease the performance of the basic protocols.
However, we think that in the applications that we target, these enhancements provide
practical solutions. Indeed, the key generation protocol is usually run only once, and the
players have time to perform their task, so the communication and time complexity
are not overly important.
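As background, the reason "sharing the ability to sign" works at all for RSA is that partial signatures combine by multiplication. Below is a toy n-out-of-n additive split of the private exponent d (my own illustration; Shoup's scheme is a genuine t-out-of-n threshold scheme with verifiable shares, which this is not):

```python
# If d = d_1 + ... + d_n (mod phi), then prod_i m^{d_i} = m^d (mod N).
import secrets

p, q = 1000003, 1000033                  # tiny textbook primes, insecure
N, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

n_players = 5
shares = [secrets.randbelow(phi) for _ in range(n_players - 1)]
shares.append((d - sum(shares)) % phi)   # shares sum to d modulo phi

m = 123456789
partials = [pow(m, s, N) for s in shares]   # each player signs independently
sig = 1
for t in partials:
    sig = sig * t % N
assert sig == pow(m, d, N) and pow(sig, e, N) == m
```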
Are 'Strong' Primes Needed for RSA?
We review the arguments in favor of using so-called ``strong primes''
in the RSA public-key cryptosystem. There are two types of such
arguments: those that say that strong primes are needed to protect
against factoring attacks, and those that say that strong primes are
needed to protect against ``cycling'' attacks (based on repeated
encryption).
We argue that, contrary to common belief, it is unnecessary to use
strong primes in the RSA cryptosystem. That is, by using strong
primes one gains a negligible increase in security over what is
obtained merely by using ``random'' primes of the same size. There
are two parts to this argument. First, the use of strong primes
provides no additional protection against factoring attacks, because
Lenstra's method of factoring based on elliptic curves (ECM) circumvents any
protection that might have been offered by using strong primes.
The methods that 'strong' primes are intended to guard against, as well as ECM, are
probabilistic in nature, but ECM succeeds with higher probability. For the RSA key sizes being proposed now, the probability
of success of these methods is very low.
Additionally, the newer Number Field Sieve algorithm can factor RSA keys with certainty in
less time than these methods.
Second, a simple group-theoretic argument shows that
cycling attacks are extremely unlikely to be effective, as long as the
primes used are large. Indeed, even probabilistic factoring attacks will succeed much more
quickly and with higher probability than cycling attacks.
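For concreteness, Pollard's p-1 method is the archetypal attack that strong primes (with p-1 required to have a large prime factor) were meant to block; the sketch below, with parameters of my choosing, factors N quickly precisely because p-1 is smooth.

```python
# Pollard's p-1: finds p | N whenever p-1 divides bound! (i.e., p-1 is smooth).
from math import gcd

def pollard_p_minus_1(N: int, bound: int = 10000) -> int:
    a = 2
    for j in range(2, bound):
        a = pow(a, j, N)              # now a = 2^(j!) mod N
        g = gcd(a - 1, N)
        if 1 < g < N:
            return g                  # nontrivial factor found
    return 1                          # failed; raise the bound or give up

# p - 1 = 10500 = 2^2 * 3 * 5^3 * 7 is smooth, so p falls out almost at once;
# q - 1 = 1000002 has the large factor 166667 and resists this attack.
p, q = 10501, 1000003
assert pollard_p_minus_1(p * q) == p
```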
Secure and Efficient Asynchronous Broadcast Protocols
Reliable broadcast protocols are a fundamental building block for
implementing replication in fault-tolerant distributed systems. This paper
addresses secure service replication in an asynchronous environment with a
static set of servers, where a malicious adversary may corrupt up to a
threshold of servers and controls the network. We develop a formal model
using concepts from modern cryptography, present modular definitions for
several broadcast problems, including reliable, atomic, and secure causal
broadcast, and present protocols implementing them. Reliable broadcast is a
basic primitive, also known as the Byzantine generals problem, providing
agreement on a delivered message. Atomic broadcast imposes additionally a
total order on all delivered messages. We present a randomized atomic
broadcast protocol based on a new, efficient multi-valued asynchronous
Byzantine agreement primitive with an external validity condition.
Apparently, no such efficient asynchronous atomic broadcast protocol
maintaining liveness and safety in the Byzantine model has appeared
previously in the literature. Secure causal broadcast extends atomic
broadcast by encryption to guarantee a causal order among the delivered
messages. Threshold-cryptographic protocols for signatures, encryption, and
coin-tossing also play an important role.
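As a baseline for the reliable-broadcast primitive mentioned above, here is a classical Bracha-style state machine with the usual n >= 3t+1 thresholds. It is a simplified illustration of the primitive only, not the paper's protocol, which adds threshold cryptography and the atomic and secure causal layers on top.

```python
# One party's state machine for Bracha-style reliable broadcast (n >= 3t+1).
from collections import defaultdict

class ReliableBroadcast:
    def __init__(self, n: int, t: int):
        assert n >= 3 * t + 1
        self.n, self.t = n, t
        self.echoes = defaultdict(set)    # message -> parties that sent ECHO
        self.readies = defaultdict(set)   # message -> parties that sent READY
        self.sent_ready = False
        self.delivered = None

    def handle(self, kind: str, m: str, sender: int) -> list:
        """Process one incoming message; return messages to multicast to all."""
        out = []
        if kind == "SEND":                # from the designated broadcaster
            out.append(("ECHO", m))
        elif kind == "ECHO":
            self.echoes[m].add(sender)
            if len(self.echoes[m]) > (self.n + self.t) // 2 and not self.sent_ready:
                self.sent_ready = True
                out.append(("READY", m))
        elif kind == "READY":
            self.readies[m].add(sender)
            if len(self.readies[m]) >= self.t + 1 and not self.sent_ready:
                self.sent_ready = True    # amplification step
                out.append(("READY", m))
            if len(self.readies[m]) >= 2 * self.t + 1 and self.delivered is None:
                self.delivered = m        # no two correct parties deliver differently
        return out

# All-honest run with n = 4, t = 1: everyone delivers the broadcaster's message.
n, t = 4, 1
parties = [ReliableBroadcast(n, t) for _ in range(n)]
queue = [(i, "SEND", "hello", 0) for i in range(n)]
while queue:
    dest, kind, m, src = queue.pop()
    for kind2, m2 in parties[dest].handle(kind, m, src):
        queue += [(j, kind2, m2, dest) for j in range(n)]
assert all(p.delivered == "hello" for p in parties)
```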
A Note on Cryptanalysis of the Preliminary Version of the NTRU Signature Scheme
In this paper a preliminary version of the NTRU signature scheme
is cryptanalyzed. The attack exploits a correlation between some
bits of a signature and coefficients of a secret random
polynomial. The attack does not apply to the next version of the
signature scheme.
MinRank problem and Zero-knowledge authentication
A zero-knowledge protocol provides provably secure authentication based on a computational problem.
Several such schemes have been proposed since 1984, and the most practical ones rely on
problems such as factoring that are, unfortunately, only subexponentially hard.
Still there are several other (practical) schemes based on NP-complete problems.
Among them, the problems of coding theory
remain, in spite of some 20 years of significant research effort, exponential to solve.
The MinRank problem recently arose in the cryptanalysis of the HFE (Crypto '99) and TTM (Asiacrypt 2000) public key cryptosystems.
It happens to be a strict generalization of the hard problem of decoding error-correcting codes.
We propose a new Zero-knowledge scheme based on MinRank.
We prove it to be Zero-knowledge by black-box simulation.
We compare it to other practical schemes based on NP-complete problems.
The MinRank scheme also allows an easy setup for a public key shared by a few users, and thus allows anonymous group signatures.
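For reference, MinRank is compact to state: given matrices M_0, M_1, ..., M_k over a finite field and a target rank r, find coefficients a_1, ..., a_k such that M_0 + a_1 M_1 + ... + a_k M_k has rank at most r. A brute-force illustration over GF(2), with toy parameters of my choosing (real instances are far beyond such search, which is why one proves knowledge of a solution in zero-knowledge instead):

```python
# Brute-force MinRank over GF(2) on toy parameters, to pin down the problem.
from itertools import product
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]     # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                 # eliminate the column
        rank += 1
    return rank

rng = np.random.default_rng(1)
k, n, r = 4, 4, 1
Ms = [rng.integers(0, 2, (n, n)) for _ in range(k + 1)]
for coeffs in product([0, 1], repeat=k):        # 2^k candidate combinations
    M = (Ms[0] + sum(c * Mi for c, Mi in zip(coeffs, Ms[1:]))) % 2
    if gf2_rank(M) <= r:
        print("solution:", coeffs)
        break
else:
    print("this random instance has no combination of rank <=", r)
```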
Separating Decision Diffie-Hellman from Diffie-Hellman in cryptographic groups
In many cases, the security of a cryptographic scheme based on Diffie--Hellman does in fact rely on the hardness of the
Diffie--Hellman Decision problem. In this paper, we show that the
hardness of Decision Diffie--Hellman is a much stronger hypothesis than the hardness of the regular Diffie--Hellman problem. Indeed, we
describe a reasonable-looking cryptographic group where Decision
Diffie--Hellman is easy while Diffie--Hellman is equivalent to a --
presumably hard -- Discrete Logarithm Problem. This shows that care
should be taken when dealing with Decision Diffie--Hellman, since its
security cannot be taken for granted.
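The by-now standard way to see how such a separation can arise is a group equipped with a bilinear pairing (generic notation, not necessarily the paper's exact formulation): given $e : G \times G \to G_T$ with $e(g^x, g^y) = e(g,g)^{xy}$ and a candidate tuple $(g, g^a, g^b, g^c)$,
\[
c \equiv ab \pmod{|G|} \quad\Longleftrightarrow\quad e(g^a, g^b) = e(g, g^c),
\]
since both sides equal $e(g,g)^{ab}$ precisely when $c \equiv ab$. Decision Diffie--Hellman thus falls to two pairing evaluations, while computing $g^{ab}$ itself can remain hard.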
The One-More-RSA-Inversion Problems and the Security of Chaum's Blind Signature Scheme
We introduce a new class of computational problems which we
call the ``one-more-RSA-inversion'' problems. Our main result is that
two problems in this class, which we call the chosen-target and known-target
inversion problems respectively, have polynomially-equivalent computational
complexity. We show how this leads to a proof of security for Chaum's RSA-based
blind signature scheme in the random oracle model based on the assumed hardness
of either of these problems. We define and prove analogous results for
``one-more-discrete-logarithm'' problems. Since the appearance of the
preliminary version of this paper, the new problems we have introduced
have found other uses as well.
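For readers unfamiliar with the scheme under analysis, Chaum's RSA blind signature is short enough to sketch in full (toy parameters, insecure; a deployed scheme hashes the message and uses full-size keys):

```python
# Chaum's RSA blind signature: the signer signs m * r^e without learning m.
from math import gcd
import secrets

p, q = 1000003, 1000033                 # tiny textbook primes, insecure
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 424242                              # message representative in Z_N
while True:
    r = secrets.randbelow(N - 2) + 2    # blinding factor, invertible mod N
    if gcd(r, N) == 1:
        break

blinded = m * pow(r, e, N) % N          # user -> signer: m hidden by r^e
s_blind = pow(blinded, d, N)            # signer: (m * r^e)^d = m^d * r mod N
s = s_blind * pow(r, -1, N) % N         # user unblinds to s = m^d mod N
assert pow(s, e, N) == m                # an ordinary RSA signature on m
```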
Efficient Algorithms for Computing Differential Properties of Addition
In this paper we systematically study the differential properties of
addition modulo $2^n$. We derive $\Theta(\log n)$-time algorithms
for most of the properties, including the differential probability of
addition. We also present log-time algorithms for finding good
differentials. Despite the apparent simplicity of modular addition,
the best previously known algorithms required naive exhaustive computation. Our
results represent a significant improvement over them. In the most
extreme case, we present a complexity reduction from $\Omega(2^{4n})$
to $\Theta(\log n)$.
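To make the flavor of these algorithms concrete, here is the closed-form test for the differential probability of addition as I recall it from the published version of this work (so treat the exact form as a reconstruction); it needs only shifts, XORs, and a popcount, which is where the log-time claim comes from, and the brute-force check below confirms it for 4-bit words.

```python
# xdp+(a, b -> g) = Pr over x, y of [((x^a) + (y^b)) ^ (x + y) = g] (mod 2^n).

def eq(x, y, z):                          # bit i set iff x_i = y_i = z_i
    return ~(x ^ y) & ~(x ^ z)

def xdp_add(alpha, beta, gamma, n):
    mask = (1 << n) - 1
    sh = lambda v: (v << 1) & mask        # left shift within the word
    if eq(sh(alpha), sh(beta), sh(gamma)) & (alpha ^ beta ^ gamma ^ sh(beta)) & mask:
        return 0.0                        # impossible differential
    w = bin(~eq(alpha, beta, gamma) & (mask >> 1)).count("1")
    return 2.0 ** -w                      # one popcount: log-time in practice

def brute(alpha, beta, gamma, n):
    size = 1 << n
    hits = sum((((x ^ alpha) + (y ^ beta)) ^ (x + y)) & (size - 1) == gamma
               for x in range(size) for y in range(size))
    return hits / size ** 2

n = 4
for a in range(1 << n):
    for b in range(1 << n):
        for g in range(1 << n):
            assert abs(xdp_add(a, b, g, n) - brute(a, b, g, n)) < 1e-12
```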