All papers in 2008
Cryptanalysis of the Improved Cellular Message Encryption Algorithm
This paper analyzes the Improved Cellular Message Encryption Algorithm (CMEA-I), an improved version of the Telecommunications Industry Association's Cellular Message Encryption Algorithm (CMEA). We present a chosen-plaintext attack on CMEA-I which requires fewer than 850 plaintexts in its adaptive version. This demonstrates that the improvements made over CMEA are ineffective at thwarting such attacks, and confirms that the security of CMEA and its variants must be reconsidered from the beginning.
Elliptic divisibility sequences and the elliptic curve discrete logarithm problem
We use properties of the division polynomials of an elliptic curve
over a finite field together with a pure result about elliptic divisibility sequences from the 1940s to construct a very simple alternative to the Menezes-Okamoto-Vanstone algorithm for
solving the elliptic curve discrete logarithm problem in the case
where .
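As illustrative background (not code from the paper), an elliptic divisibility sequence is an integer sequence satisfying the recurrence W(m+n)W(m-n) = W(m+1)W(m-1)W(n)^2 - W(n+1)W(n-1)W(m)^2; the following toy sketch extends a sequence from its first four terms using the n = 2 instance of that identity. All names here are illustrative.

```python
# Toy illustration: extending an elliptic divisibility sequence (EDS) from
# its first four terms via the recurrence (with W(1) = 1)
#   W(m+2) * W(m-2) = W(m+1) * W(m-1) * W(2)^2 - W(3) * W(m)^2,
# assuming all required terms are nonzero.

from fractions import Fraction

def extend_eds(w, length):
    """w: list [W1, W2, W3, W4]; returns the sequence up to W(length)."""
    seq = list(w)
    while len(seq) < length:
        m = len(seq) - 1          # computing W(m+2), 1-indexed
        W = lambda i: Fraction(seq[i - 1])
        nxt = (W(m + 1) * W(m - 1) * W(2) ** 2 - W(3) * W(m) ** 2) / W(m - 2)
        assert nxt.denominator == 1, "divisibility property violated"
        seq.append(int(nxt))
    return seq

# The classical example starting 1, 1, -1, 1, ...
print(extend_eds([1, 1, -1, 1], 10))   # -> [1, 1, -1, 1, 2, -1, -3, -5, 7, -4]
```

The division in the recurrence always comes out exact for a genuine EDS, which is the divisibility property the name refers to.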
Key differentiation attacks on stream ciphers
In this paper the applicability of differential cryptanalysis to stream
ciphers is examined, using an algebraic representation in the spirit of
Shannon's early postulates on the concept of confusion. In
2007, Biham and Dunkelman \cite{BihDunk}
formally introduced the concept of differential
cryptanalysis of stream ciphers by addressing three different
scenarios of interest. Here we mainly consider the first scenario,
in which the key difference and/or IV difference influence the internal
state of the cipher.
We then show that under certain circumstances a chosen-IV attack may
be transformed into a chosen-key attack. That is,
whenever at some stage of the key/IV setup algorithm (KSA) we may
identify linear relations between some subset of key and IV bits,
and these key variables only appear through these linear relations, then
using the differentiation of internal state variables (through a
chosen-IV attack scenario) we are able to
eliminate the presence of the corresponding key variables. The method
leads to an attack whose complexity is below that of exhaustive
search, whenever the cipher admits an exact algebraic description of
its internal state variables and the keystream computation is not
complex. A successful application is especially noted in the
context of stream ciphers whose keystream bits evolve relatively slowly
as a function of the secret state bits. A modification of the attack
can be applied to the TRIVIUM stream cipher \cite{Trivium}; in this case
12 linear relations could be identified, but at the same time the same
12 key variables appear in another part of the state register. Still, a
significant decrease in the degree and complexity of the state-bit
expressions after the KSA is achieved. Computer simulations,
currently in progress,
will answer the question of for what number of
initialization rounds the attack is faster than exhaustive search.
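A minimal toy sketch (not the paper's cipher; all names are illustrative) of the core observation: when a key variable enters the internal state only through a linear relation with an IV variable, a chosen-IV shift absorbs the key, so the key variable can be eliminated from state expressions.

```python
# Toy illustration: the key k enters the state only via the linear
# combination t = k ^ iv, so shifting the chosen IV by k relabels the
# state without the attacker needing to know k:
#   toy_state(k, iv) == toy_state(0, iv ^ k)  for every k, iv.

def toy_state(k: int, iv: int) -> int:
    t = (k ^ iv) & 0xFF               # key appears only through k ^ iv
    return ((t * t) ^ (t << 1)) & 0xFF  # some nonlinear state update

k, iv = 0xA7, 0x3C
assert toy_state(k, iv) == toy_state(0, iv ^ k)
```

Because the whole family of states indexed by chosen IVs is key-independent up to relabeling, differentiating state expressions over chosen IV pairs removes such key variables, which is the elimination step the abstract describes.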
Fast Arithmetic on ATmega128 for Elliptic Curve Cryptography
Authentication protocols are indispensable in wireless sensor
networks. Commonly they are based on asymmetric cryptographic
algorithms. In this paper
we investigate all categories of finite fields suitable for elliptic
curve cryptography on the ATmega128 microcontroller. It turns out
that binary fields enable the most efficient implementations.
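As a hedged companion sketch (not the paper's AVR assembly), this is the kind of binary-field arithmetic being benchmarked: carry-less multiplication followed by reduction, shown here in GF(2^8) with the well-known AES/Rijndael polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).

```python
# Sketch of binary-field (GF(2^m)) multiplication: polynomial multiply
# over GF(2), then reduce modulo an irreducible polynomial.

def gf2_clmul(a: int, b: int) -> int:
    """Carry-less (polynomial) multiplication over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2m_mul(a: int, b: int, poly: int = 0x11B, m: int = 8) -> int:
    """Multiply in GF(2^m), reducing modulo the given irreducible polynomial."""
    r = gf2_clmul(a, b)
    for bit in range(2 * m - 2, m - 1, -1):   # clear bits above degree m-1
        if r & (1 << bit):
            r ^= poly << (bit - m)
    return r

# 0x53 and 0xCA are multiplicative inverses in the AES field:
print(hex(gf2m_mul(0x53, 0xCA)))   # -> 0x1
```

On an 8-bit microcontroller such as the ATmega128, the shift-and-XOR structure above maps directly onto byte operations, which is one reason binary fields fare well there.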
How Risky is the Random-Oracle Model?
RSA-FDH and many other schemes secure in the Random-Oracle Model (ROM) require
a hash function with output size larger than standard sizes.
We show that the random-oracle instantiations proposed in the literature for such cases
are weaker than a random oracle,
including the proposals by Bellare and Rogaway from 1993 and 1996,
and the ones implicit in IEEE P1363 and PKCS standards:
for instance, there is a practical preimage attack on BR93 for 1024-bit digests.
Next, we study the security impact of hash function defects for ROM signatures.
As an extreme case, we note that any hash collision would suffice to disclose the master key in the ID-based cryptosystem by Boneh et al. from FOCS '07,
and the secret key in the Rabin-Williams signature for which Bernstein proved tight security at EUROCRYPT '08.
Interestingly, for both of these schemes, a slight modification can prevent these attacks, while preserving the ROM security result.
We give evidence that in the case of RSA and Rabin/Rabin-Williams,
an appropriate PSS padding is more robust than all other paddings known.
Robust Encryption
We provide a provable-security treatment of ``robust''
encryption. Robustness means it is hard to produce a ciphertext that
is valid for two different users. Robustness makes explicit a
property that has been implicitly assumed in the past. We argue that
it is an essential conjunct of anonymous encryption. We show that
natural anonymity-preserving ways to achieve it, such as adding
recipient identification information before encrypting, fail. We
provide transforms that do achieve it, efficiently and provably. We
assess the robustness of specific encryption schemes in the
literature, providing simple patches for some that lack the property.
We discuss applications including
PEKS (Public-key Encryption with Keyword Search) and auctions.
Overall our work enables safer and simpler
use of encryption.
Linear equivalence between elliptic curves in Weierstrass and Hesse form
Elliptic curves in Hesse form admit more convenient arithmetic than those in Weierstrass form, but elliptic curve cryptosystems usually use the Weierstrass form. It is known that the two forms are birationally equivalent, but the birational equivalence is relatively hard to compute. We prove that elliptic curves in Hesse form and in Weierstrass form are linearly equivalent over the initial field or a small extension of it, and that this equivalence is easy to compute. If the cardinality of the finite field satisfies q = 2 (mod 3) and the Frobenius trace satisfies T = 0 (mod 3), then the equivalence is defined over the initial finite field. This linear equivalence allows multiplying an elliptic curve point in Weierstrass form by passing to the Hessian curve, computing the product point there, and passing back. This speeds up point multiplication by a factor of about 1.37.
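For reference (standard background, not taken from the paper), the two projective models in question are:

```latex
% Weierstrass model (char \ne 2,3) and Hesse model, projective coordinates:
Y^2 Z = X^3 + aXZ^2 + bZ^3 \qquad \text{(Weierstrass)}
X^3 + Y^3 + Z^3 = 3dXYZ \qquad \text{(Hesse)}
```

A linear equivalence is an invertible linear change of the projective coordinates (X : Y : Z), which is much cheaper to apply than a general birational map built from rational functions.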
New Related-Key Boomerang Attacks on AES
In this paper we present two new attacks on round reduced versions of the AES. We present
the first application of the related-key boomerang attack to 7 and 9 rounds of AES-192. The 7-round attack requires only 2^{18} chosen plaintexts and ciphertexts and needs 2^{67.5} encryptions. We extend our attack to nine rounds of AES-192, which leads to a data complexity of 2^{67} chosen plaintexts and ciphertexts and a time complexity of about 2^{143.33} encryptions.
Divisibility, Smoothness and Cryptographic Applications
This paper deals with products of moderate-size primes, familiarly known as {\sl smooth numbers}. Smooth numbers play a crucial role in information theory, signal processing and cryptography.
We present various properties of smooth numbers relating to their enumeration, distribution and occurrence in various integer sequences. We then turn our attention to cryptographic applications in which smooth numbers play a pivotal role.
Last updated: 2009-01-05
BGKM: An Efficient Secure Broadcasting Group Key Management Scheme
The Broadcasting Group Key Management (BGKM) scheme is designed to reduce the communication, storage, and
computation overhead of existing Broadcast Encryption Schemes (BES). To this end, BGKM proposes
a new broadcasting group management scheme by utilizing the basic construction of ciphertext-policy
attribute-based encryption and flat table identity management. Compared to previous approaches, our
performance evaluation shows that BGKM greatly improves performance in communication (O(log n)
for a single revocation and O((log n)(log m)) for bulk revocations), storage (O(log n) for
each group member), and computation (O((log n)(log m)) for both encryption and decryption),
where n is the ID space size and m is the number of group members (GMs) in the group. Moreover, the BGKM scheme
provides group forward/backward secrecy, and it is resilient to collusion attacks.
Obtaining and solving systems of equations in key variables only for the small variants of AES
This work is devoted to attacking the small scale variants of the Advanced Encryption Standard (AES) via systems that contain only the initial key variables. To this end, we introduce a system of equations that naturally arises in the AES, and then eliminate all the intermediate variables via normal form reductions. The resulting system in the key variables only is then solved. We also consider the possibility of applying our method in the meet-in-the-middle scenario, especially with several plaintext/ciphertext pairs. We elaborate on the method further by looking for subsystems which contain fewer variables and are overdetermined, thus facilitating solving the large system.
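A far smaller toy than even small-scale AES (and not the paper's method, which uses normal form reductions) can still show the shape of the idea: substitute away the intermediate variable of a one-round mini-cipher to get relations in the key variables only, then solve.

```python
# Toy one-round mini-cipher: c = SBOX[p ^ k0] ^ k1.
# Eliminating the intermediate y = SBOX[p ^ k0] by substitution yields
# equations in (k0, k1) only, solvable here by enumeration.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # the 4-bit PRESENT S-box

def encrypt(p, k0, k1):
    y = SBOX[p ^ k0]        # intermediate variable y ...
    return y ^ k1           # ... eliminated by substitution below

def solve(pairs):
    """Return all (k0, k1) consistent with the key-only relations."""
    return [(k0, k1) for k0 in range(16) for k1 in range(16)
            if all(SBOX[p ^ k0] ^ k1 == c for p, c in pairs)]

pairs = [(p, encrypt(p, 7, 3)) for p in (0, 1, 2)]
print((7, 3) in solve(pairs))   # -> True
```

A few plaintext/ciphertext pairs cut the candidate set down quickly; the paper's point is that the analogous key-only system for small-scale AES can be produced and attacked algebraically rather than by enumeration.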
The computational SLR: a logic for reasoning about computational indistinguishability
Computational indistinguishability is a notion in complexity-theoretic cryptography and is used to
define many security criteria. However, in traditional cryptography, proving computational
indistinguishability is usually informal and becomes error-prone when cryptographic constructions
are complex. This paper presents a formal proof system based on an extension of Hofmann’s SLR
language, which can capture probabilistic polynomial-time computations through typing and is
sufficient for expressing cryptographic constructions. We in particular define rules that justify
directly the computational indistinguishability between programs and prove that these rules are
sound with respect to the set-theoretic semantics, hence the standard definition of security. We also
show that it is applicable in cryptography by verifying, in our proof system, Goldreich and Micali’s
construction of a pseudorandom generator, and the equivalence between next-bit unpredictability and
pseudorandomness.
On differences of quadratic residues
Factoring an integer is equivalent to expressing that integer as the difference of two squares. We verify that, for any odd modulus, any element of the corresponding ring of remainders can be realized as the difference of two quadratic residues, and also that, for a fixed remainder value, the map assigning to each modulus the number of ways to express the remainder as a difference of quadratic residues is non-decreasing with respect to the divisibility ordering on the odd numbers. Reducing the problem of expressing a remainder as the difference of two quadratic residues to remainder rings does not diminish the complexity of the factorization problem.
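The classical equivalence the abstract starts from is Fermat's method: an odd N factors as soon as it is written as a difference of two squares, N = a^2 - b^2 = (a - b)(a + b). A minimal sketch:

```python
# Fermat factorization: search for a with a^2 - N a perfect square.

from math import isqrt

def fermat_factor(n: int):
    assert n % 2 == 1 and n > 1
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b   # possibly the trivial pair (1, n)
        a += 1

print(fermat_factor(5959))   # -> (59, 101)
```

The method is fast when the two factors are close together (b is small) and slow otherwise, which is why writing N as a difference of squares is exactly as hard as factoring in general.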
Dynamic Provable Data Possession
As storage-outsourcing services and resource-sharing networks have become popular, the problem of efficiently proving the integrity of data stored at untrusted servers has received increased attention. In the provable data possession (PDP) model, the client pre-processes the data and then sends it to an untrusted server for storage, while keeping a small amount of meta-data. The client later asks the server to prove that the stored data has not been tampered with or deleted (without downloading the actual data). However, the original PDP scheme applies only to static (or append-only) files.
We present a definitional framework and efficient constructions for dynamic provable data possession (DPDP), which extends the PDP model to support provable updates to stored data. We use a new version of authenticated dictionaries based on rank information. The price of dynamic updates is a performance change from to (or ), for a file consisting of blocks, while maintaining the same (or better, respectively) probability of misbehavior detection. Our experiments show that this slowdown is very low in practice (e.g., 415KB proof size and 30ms computational overhead for a 1GB file). We also show how to apply our DPDP scheme to outsourced file systems and version control systems (e.g., CVS).
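The paper's construction uses rank-based authenticated dictionaries, not the plain Merkle tree below; this is only a rough, simplified sketch (all names mine) of the general flavor of proving possession of one block against a small client-side digest without sending the whole file.

```python
# Toy Merkle-tree possession proof: the client keeps only the root digest;
# the server returns a block plus an authentication path of sibling hashes.

import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(blocks):
    level = [H(b) for b in blocks]            # blocks: power-of-two count
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                               # tree[-1][0] is the root

def prove(tree, idx):
    proof = []
    for level in tree[:-1]:
        sib = idx ^ 1
        proof.append((sib < idx, level[sib])) # (sibling-is-left, hash)
        idx //= 2
    return proof

def verify(root, block, idx, proof):
    h = H(block)
    for left, sib in proof:
        h = H(sib + h) if left else H(h + sib)
    return h == root

blocks = [b"block%d" % i for i in range(8)]
tree = build_tree(blocks)
root = tree[-1][0]
assert verify(root, b"block5", 5, prove(tree, 5))
```

The rank information in the paper's authenticated dictionary is what additionally lets the structure survive insertions and deletions at arbitrary positions, which a static tree like this cannot do efficiently.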
Usable Optimistic Fair Exchange
Fairly exchanging digital content is an everyday problem. It has been shown that fair exchange cannot be done without a trusted third party (called the Arbiter). Yet, even with a trusted party, it is still non-trivial to come up with an efficient solution, especially one that can be used in a p2p file sharing system with a high volume of data exchanged.
We provide an efficient optimistic fair exchange mechanism for bartering digital files, where receiving a payment in return for a file (buying) is also considered fair. The exchange is optimistic, removing the need for the Arbiter's involvement unless a dispute occurs. While previous solutions employ costly cryptographic primitives for every file or block exchanged, our protocol employs them only once per peer, thereby achieving an O(n) efficiency improvement when n blocks are exchanged between two peers. The rest of our protocol uses very efficient cryptography, making it perfectly suitable for a p2p file sharing system where tens of peers exchange thousands of blocks and do not know beforehand which ones they will end up exchanging. Our system therefore yields a one-to-two orders of magnitude improvement in terms of both computation and communication (40 seconds vs. 42 minutes, 1.6MB vs. 200MB). Thus, for the first time, a provably secure (and privacy-respecting, when payments are made using e-cash) fair exchange protocol can be used in real bartering applications (e.g., BitTorrent) without sacrificing performance.
Cryptographic Protocol Composition via the Authentication Tests
Although cryptographic protocols are typically analyzed in
isolation, they are used in combinations. If a protocol was
analyzed alone and shown to meet some security goals, will
it still meet those goals when executed together with a
second protocol? While not every choice of a second
protocol can preserve the goals, there are syntactic
criteria for the second protocol that ensure they will be
preserved. Our main result strengthens previously known
criteria.
Our method has three main elements. First, a language
L(\Pi) in classical logic describes executions of each
protocol \Pi, and expresses its security goals. Strand
spaces provide the models for L(\Pi). Second, the strand
space ``authentication test'' principles suggest our
syntactic criterion for security goals to be preserved.
Third, certain homomorphisms among models for L(\Pi)
preserve the truth of formulas of the syntactic form that
security goals take. This provides a way to extract -- from
a counterexample to a goal that uses both protocols -- a
counterexample using only the first protocol.
This essentially model-theoretic technique studies a
syntactically defined set of formulas using the
homomorphisms among their models. It appears to be novel
for protocol analysis.
Public-Key Encryption with Efficient Amortized Updates
Searching and modifying public-key encrypted data (without having the decryption key) has received a lot of attention in recent literature. In this paper we revisit this important problem and achieve much better amortized communication-complexity bounds. Our solution resolves the main open question posed by Boneh et al.\ \cite{BKOS07}.
First, we consider the following much simpler to state problem (which turns out to be central for the above): A server holds a copy of Alice's database that has been encrypted under Alice's public key. Alice would like to allow other users in the system to replace a
bit of their choice in the server's database by communicating directly with the server, despite other users not having Alice's private key. However, Alice requires that the server should not know
which bit was modified. Additionally, she requires that the
modification protocol should have ``small" communication complexity
(sub-linear in the database size). This task is referred to as
private database modification, and is a central tool in building a more general protocol for modifying and searching over public-key encrypted data with small communication complexity. The problem was first considered by Boneh et al.\ \cite{BKOS07}. The protocol of \cite{BKOS07} to modify a bit of an -bit database has communication complexity . Naturally, one can ask if we can improve upon this. Unfortunately, \cite{OS08} give evidence to the contrary, showing that using current algebraic techniques this is not possible. In this paper, we ask the following question: what is the communication complexity when modifying bits of an -bit database? Of course, one can achieve a naive communication complexity of by simply repeating the protocol of \cite{BKOS07} times. Our main result is a private database modification protocol to modify bits of an -bit database that has communication complexity
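A minimal sketch of the private-modification idea, assuming a Goldwasser-Micali-style XOR-homomorphic scheme (toy, insecure parameters; this is *not* the \cite{BKOS07} protocol and all names are illustrative): the user sends encryptions of an indicator vector, and the server multiplies ciphertexts componentwise, flipping the chosen bit without learning its position.

```python
# Toy Goldwasser-Micali-style XOR-homomorphic encryption and an oblivious
# single-bit database update. Parameters are tiny and NOT secure.

import math, random

P, Q = 499, 547
N = P * Q

def is_qr(c, p):                     # quadratic residuosity mod prime p
    return pow(c % p, (p - 1) // 2, p) != p - 1

# y: a quadratic non-residue modulo both primes
y = next(v for v in range(2, N) if not is_qr(v, P) and not is_qr(v, Q))

def enc(b):
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return (pow(y, b, N) * r * r) % N

def dec(c):
    return 0 if is_qr(c, P) else 1

def xor_hom(c1, c2):                 # Enc(b1) * Enc(b2) decrypts to b1 ^ b2
    return (c1 * c2) % N

# Oblivious update: flip bit 2 of the encrypted database.
db = [enc(b) for b in (0, 1, 1, 0)]
update = [enc(1 if i == 2 else 0) for i in range(4)]
db = [xor_hom(c, u) for c, u in zip(db, update)]
assert [dec(c) for c in db] == [0, 1, 0, 0]
```

Note the communication cost of this naive update is linear in the database size (one ciphertext per position); the paper's contribution is precisely driving the amortized cost well below that when many bits are modified.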
, where
is a constant. (We remark that in contrast with recent work of Lipmaa \cite{L08} on the same topic, our database size {\em does not grow} with every update, and stays exactly the same size.)
As sample corollaries to our main result, we obtain the following:
\begin{itemize}
\item First, we apply our private database modification protocol to
answer the main open question of \cite{BKOS07}. More specifically, we
construct a public key encryption scheme supporting PIR queries that
allows every message to have a non-constant number of keywords
associated with it.
\item Second, we show that one can apply our
techniques to obtain more efficient communication complexity when
parties wish to increment or decrement multiple cryptographic
counters (formalized by Katz et al.~\cite{KMO01}).
\end{itemize}
We believe that ``public-key encrypted'' amortized database modification is an important cryptographic primitive in its own right and will be useful in other applications.
Delegatable Anonymous Credentials
We construct an efficient delegatable anonymous credential system. Users can anonymously and unlinkably obtain credentials from any authority, delegate their credentials to other users, and prove possession of a credential levels away from the given authority. The size of the proof (and time to compute it) is , where is the security parameter. The only other construction of delegatable anonymous credentials (Chase and Lysyanskaya, Crypto 2006) relies on general non-interactive proofs for NP-complete languages of size .
We revise the entire approach to constructing anonymous credentials
and identify \emph{randomizable} zero-knowledge proof of knowledge
systems as the key building block. We formally define the notion of
randomizable non-interactive zero-knowledge proofs, and give the first construction by showing how to appropriately rerandomize Groth and Sahai (Eurocrypt 2008) proofs. We show that such proof systems, in combination with an appropriate authentication scheme and a few other protocols, allow us to construct delegatable anonymous credentials. Finally, we instantiate these building blocks under appropriate assumptions about groups with bilinear maps.
LEGO for Two Party Secure Computation
The first and still most popular solution for secure two-party
computation relies on Yao's garbled circuits. Unfortunately, Yao's
construction provides security only against passive adversaries.
Several constructions (zero-knowledge compilers, cut-and-choose) are
known that provide security against active adversaries, but
most of them are not efficient enough to be considered practical. In
this paper we propose a new approach called LEGO (Large Efficient
Garbled-circuit Optimization) for two-party computation, which allows
constructing more efficient protocols secure against active
adversaries. The basic idea is the following: Alice constructs and
provides Bob with a set of garbled NAND gates. A fraction of them is
opened for checking, with Alice giving Bob the randomness used to
construct them. When the check goes through, with overwhelming
probability there are very few bad gates among the non-checked ones.
Bob permutes these gates and connects them into a Yao circuit,
according to a fault-tolerant circuit design which computes the
desired function even in the presence of a few random faulty gates.
Finally, he evaluates this Yao circuit in the usual way.
For large circuits, our protocol offers better performance than any
other existing protocol.
The protocol is universally composable (UC) in the OT-hybrid model.
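A toy sketch of a single garbled NAND gate of the kind Alice would send in bulk (illustrative only; LEGO's actual gate format and soldering of gates differ, and real schemes do not hand the evaluator the output labels as this toy validity check does):

```python
# Toy garbled NAND gate: random 16-byte wire labels; each truth-table row
# one-time-pads the correct output label with a hash of the input labels.

import hashlib, os, random

def H(ka: bytes, kb: bytes) -> bytes:
    return hashlib.sha256(ka + kb).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_nand():
    wa = [os.urandom(16), os.urandom(16)]   # labels for input wire a = 0/1
    wb = [os.urandom(16), os.urandom(16)]
    wc = [os.urandom(16), os.urandom(16)]   # output wire labels
    table = [xor(H(wa[a], wb[b]), wc[1 - (a & b)])   # NAND truth table
             for a in (0, 1) for b in (0, 1)]
    random.shuffle(table)                    # hide which row is which
    return wa, wb, wc, table

def evaluate(ka, kb, table, out_labels):
    pad = H(ka, kb)
    for row in table:
        cand = xor(row, pad)
        if cand in out_labels:               # toy validity check only
            return out_labels.index(cand)
    raise ValueError("no row decrypted")
```

Opening a checked gate simply means revealing the randomness (labels) behind it so Bob can recompute the table; the cut-and-choose over many such identical NAND gates is what bounds the number of bad gates that survive.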
On Kasami Bent Functions
It is proved that no non-quadratic Kasami bent function is affine equivalent to a bent function of Maiorana-McFarland type.
Efficient Asynchronous Multiparty Computation with Optimal Resilience
Verifiable Secret Sharing (VSS) is a fundamental primitive used as a
building block in many distributed cryptographic tasks, such as
Secure Multiparty Computation (MPC) and Byzantine Agreement (BA). An important variant of VSS is Asynchronous VSS (AVSS) which is designed to work over asynchronous networks. AVSS is a two phase
(Sharing, Reconstruction) protocol carried out among parties in the presence of a computationally unbounded active adversary, who can corrupt up to parties. We assume that every two parties in the network are directly connected by a pairwise secure channel.
In this paper, we present a new statistical AVSS protocol with optimal resilience; i.e. with . Our protocol privately communicates O((\ell n^3 + n^4 \log{\frac{1}{\epsilon}}) \log{\frac{1}{\epsilon}}) bits and A-cast O(n^3 \log(n)) bits to simultaneously share \ell \geq 1 elements from a finite field F, where \epsilon is the error parameter of our protocol.
There are only two known statistical AVSS protocols with
reported in [CR93] and [PCR08]. The AVSS protocol of [CR93] requires a private communication of O(n^9 (\log{\frac{1}{\epsilon}})^4) bits and A-cast of O(n^9 (\log{\frac{1}{\epsilon}})^2 \log(n)) bits to share a single element from F. Thus our AVSS protocol shows a significant improvement in communication complexity over the AVSS of [CR93]. The AVSS protocol of [PCR08] requires a private communication and A-cast of O((\ell n^3 + n^4) \log{\frac{1}{\epsilon}}) bits
to share \ell \geq 1 elements. However, the shared element(s) may be NULL \not \in F. Thus our AVSS is better than the AVSS of [PCR08] due to the following reasons:
1. The A-cast communication of our AVSS is independent of the number of secrets i.e. \ell;
2. Our AVSS makes sure that the shared value(s) always belong to F.
Using our AVSS, we design a new primitive called Asynchronous Complete Secret Sharing (ACSS) which acts as an important building block of asynchronous multiparty computation (AMPC). Using our ACSS scheme, we design a statistical AMPC protocol with optimal resilience; i.e., with , that privately communicates
O(n^5 \log{\frac{1}{\epsilon}}) bits per multiplication gate. This significantly improves the communication complexity of the only known optimally resilient AMPC protocol [BKR93], which privately communicates
\Omega(n^{11} (\log{\frac{1}{\epsilon}})^4) bits and A-cast
\Omega(n^{11} (\log{\frac{1}{\epsilon}})^2 \log(n)) bits per multiplication gate.
Both our ACSS and AVSS employ several new techniques, which are of independent interest.
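As hedged background (synchronous, with no verification step; AVSS adds exactly the machinery this toy lacks), plain Shamir secret sharing over a prime field is the object every VSS/AVSS protocol builds on:

```python
# Plain Shamir sharing with threshold t: the secret is the constant term of
# a random degree-t polynomial; any t+1 shares reconstruct via Lagrange
# interpolation at zero.

import random

PRIME = 2**31 - 1   # field modulus

def share(secret, n, t):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    return [(i, sum(c * pow(i, j, PRIME) for j, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(points):
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(42, n=4, t=1)           # the n = 3t + 1 resilience setting
assert reconstruct(shares[:2]) == 42   # any t + 1 shares suffice
```

What AVSS adds on top of this is agreement, over an asynchronous network and against a malicious dealer, that the shares held by honest parties actually lie on one degree-t polynomial; that verification machinery is where the communication costs compared in the abstract arise.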
Asynchronous Byzantine Agreement with Optimal Resilience
We present an efficient, optimally-resilient Asynchronous Byzantine Agreement (ABA) protocol involving n = 3t+1 parties over a completely asynchronous network, tolerating a computationally unbounded Byzantine adversary, capable of corrupting at most t out of the n parties.
In comparison with the best known optimally-resilient ABA protocols of Canetti and Rabin (STOC 1993) and Abraham, Dolev and Halpern (PODC 2008), our protocol is significantly more efficient in terms of the communication complexity.
Our ABA protocol is built on a new statistical asynchronous verifiable secret sharing (AVSS) protocol with optimal resilience. Our AVSS protocol significantly improves the communication complexity of the only known statistical and optimally-resilient AVSS protocol of Canetti et al. Our AVSS protocol is further built on an asynchronous primitive called asynchronous weak commitment (AWC), while the AVSS of Canetti et al. is built on the primitive called asynchronous weak secret sharing (AWSS). We observe that AWC has weaker requirements than AWSS and hence it can be designed more efficiently than AWSS.
The common coin primitive is one of the most important building blocks for the construction of an ABA protocol. In this paper, we extend the existing common coin protocol to make it compatible with our new AVSS protocol that shares multiple secrets simultaneously. As a byproduct, our new common coin protocol is more communication efficient than all the existing common coin protocols.
Searchable encryption with decryption in the standard model
A *searchable public key encryption (PEKS) scheme* allows one to generate, for any given message, a trapdoor such that this trapdoor allows one to check whether a given ciphertext is an encryption of that message or not. Of course, the trapdoor should not reveal any additional information about the plaintext. PEKS schemes have interesting applications: for instance, consider an email gateway that wants to prioritize or filter encrypted emails based on keywords contained in the message text. The email recipient can enable the gateway to do so by releasing the trapdoors for the corresponding keywords. This way, the gateway can check emails for these keywords, but it learns nothing more about the email contents.
PEKS schemes were first formalized and constructed by Boneh et al. But with one exception, no known construction of a PEKS scheme supports the decryption of ciphertexts. That is, known constructions allow one to *test* for a certain message, but they do not allow one to *retrieve* the message, even given the full secret key. Besides being somewhat unnatural for an encryption scheme, this ``no-decryption'' property also limits the applicability of a PEKS scheme. The one exception, a PEKS scheme with decryption due to Fuhr and Paillier, is formulated in the random oracle model and inherently relies on the statistical properties of the random oracle. In fact, Fuhr and Paillier leave it as an open problem to construct a PEKS scheme with decryption in the standard model.
In this paper, we construct the first PEKS scheme with decryption (PEKSD scheme) in the standard model. Our sole assumption is an anonymous IBE scheme. We explain the technical difficulties that arise with previous attempts to build a PEKS scheme with decryption and how we overcome these difficulties. Technically, we isolate a vital additional property of IBE schemes (a property we call
*well-addressedness* and which states that a ciphertext is tied to an identity and will be rejected when trying to decrypt with respect to any other identity) and show how to generically achieve it.
Our construction of a PEKSD scheme from an anonymous IBE scheme provides a natural example of a *non-shielding* construction (in which the decryption algorithm queries the encryption algorithm). Gertner et al. have shown that an IND-CCA secure public key encryption scheme cannot be constructed and proven from an IND-CPA secure scheme in a black-box and *shielding* way. However, our results give evidence that encryption queries in the decryption algorithm may well prove useful in a security reduction.
A New Approach for Algebraically Homomorphic Encryption
The existence of an efficient and provably secure algebraically
homomorphic scheme (AHS), i.e., one that supports both addition and
multiplication operations, is a long-standing open problem. All
proposals so far are either insecure, not provably secure,
inefficient, or allow only for one multiplication (and arbitrary
additions). As only very limited progress has been made on the
existing approaches in the recent years, the question arises whether
new methods can lead to more satisfactory solutions.
In this paper we show how to construct a provably secure AHS based
on a coding theory problem. It allows for arbitrarily many additions
and for a fixed, but arbitrary, number of multiplications, and works
over arbitrary finite fields. Besides, it possesses some useful
properties: i) the plaintext space can be extended adaptively without
the need for re-encryption, ii) it operates over arbitrary infinite
fields as well, e.g., rational numbers, but the hardness of the
underlying decoding problem in such cases is less studied, and iii)
depending on the parameter choice, the scheme has inherent
error correction up to a certain number of transmission errors in
the ciphertext.
However, since our scheme is symmetric and its ciphertext size grows
exponentially with the expected total number of encryptions, its
deployment is limited to specific client-server applications with a
small number of multiplications. Nevertheless, we believe there is room for
improvement due to the huge number of alternative coding schemes
that can serve as the underlying hardness problem. For these
reasons and because of the interesting properties of our scheme, we
believe that using coding theory to design AHS is a promising
approach and hope to encourage further investigations.
Truly Efficient 2-Round Perfectly Secure Message Transmission Scheme
In the model of perfectly secure message transmission schemes (PSMTs), there are channels between a sender and a receiver. An infinitely powerful adversary may corrupt (observe and forge) the messages sent through out of channels. The sender wishes to send a secret to the receiver perfectly privately and perfectly reliably without sharing any key with the receiver.
In this paper, we show the first -round PSMT for such that not only is the transmission rate but also the computational costs of the sender and the receiver are both polynomial in . This means that we solve the open problem raised by
Agarwal, Cramer and de Haan at CRYPTO 2006.
Oblivious Transfer from Weak Noisy Channels
Various results show that oblivious transfer can be implemented using the assumption of noisy channels. Unfortunately, this assumption is not as weak as one might think, because in a cryptographic setting, these noisy channels must satisfy very strong security requirements.
Unfair noisy channels, introduced by Damgard, Kilian and Salvail [Eurocrypt '99], reduce these limitations: They give the adversary an unfair advantage over the honest player, and therefore weaken the security requirements on the noisy channel. However, this model still has many shortcomings: For example, the adversary's advantage is only allowed to have a very special form, and no error is allowed in the implementation.
In this paper we generalize the idea of unfair noisy channels. We introduce two new models of cryptographic noisy channels that we call the weak erasure channel and the weak binary symmetric channel, and show how they can be used to implement oblivious transfer. Our models
are more general and use much weaker assumptions than unfair noisy channels, which makes implementation a more realistic prospect. For example, these are the first models that allow the parameters to come from experimental evidence.
Parsing ambiguities in authentication and key establishment protocols
A new class of attacks against authentication and authenticated
key establishment protocols is described, which we call
parsing ambiguity attacks. If appropriate precautions
are not deployed, these attacks apply to a very wide range of
such protocols, including those specified in a number of
international standards. Three example attacks are described in
detail, and possible generalisations are also outlined.
Finally, possible countermeasures are given, as are
recommendations for modifications to the relevant standards.
Privacy-Enhancing First-Price Auctions Using Rational Cryptography
We consider enhancing a sealed-bid single-item auction with
\emph{privacy} concerns, our assumption being that bidders primarily
care about monetary payoff and secondarily worry about exposing
information about their type to other players and learning information
about other players' types. To treat privacy explicitly within the
game theoretic context, we put forward a novel \emph{hybrid utility}
model that considers both fiscal and privacy components in the
players' payoffs.
We show how to use rational cryptography to approximately implement a
given \emph{ex interim} individually strictly rational equilibrium of
such an auction (or any game with a winner) without a trusted mediator
through a cryptographic protocol that uses only point-to-point
authenticated channels between the players. By ``ex interim
individually strictly rational'' we mean that, given its type and
before making its move, each player has a strictly positive expected
utility, i.e., it becomes the winner of the auction with positive
probability. By ``approximately implement'' we mean that, under
cryptographic assumptions, running the protocol is a computational
Nash equilibrium with a payoff profile negligibly close to the
original equilibrium.
In addition, the protocol has the stronger property that no coalition,
of any size, can obtain more by deviating in the implementation than
by deviating in the ideal mediated setting in which the mechanism was
designed. Also, despite the non-symmetric payoff profile, the
protocol always terminates correctly.
On the security of pairing-friendly abelian varieties over non-prime fields
Let A be an abelian variety defined over a non-prime finite field F_q that has embedding degree k with respect to a subgroup of prime order r.
In this paper we give explicit conditions on q, k, and r that imply that the minimal embedding field of A with respect to r is F_{q^k}. When these conditions hold, the embedding degree k is a good measure of the security level of a pairing-based cryptosystem that uses A.
We apply our theorem to supersingular elliptic curves and to supersingular genus 2 curves, in each case computing a maximum q-value for which the minimal embedding field must be F_{q^k}. Our results are in most cases stronger (i.e., give larger allowable q-values) than previously known results for supersingular varieties, and our theorem holds for general abelian varieties, not only supersingular ones.
Almost-Asynchronous MPC with Faulty Minority
Secure multiparty computation (MPC) allows a set of n parties to
securely evaluate any agreed function of their inputs, even when up
to t of the parties are faulty. Protocols for synchronous
networks (where every sent message is assumed to arrive within a
constant time) tolerate up to t < n/2 faulty parties, whereas in the
more realistic asynchronous setting (with no \emph{a priori}
information on the maximal message delay) only security against t < n/3
is possible. We present the first protocol that achieves security
against t < n/2 without assuming a fully synchronous network.
Actually our protocol guarantees security against any faulty
minority in an \emph{almost asynchronous} network, i.e. in a network
with one single round of synchronous broadcast (followed by a fully
asynchronous communication). Furthermore our protocol takes inputs
of all n parties (in a fully asynchronous network only the inputs of
n - t parties can be guaranteed), and so achieves everything that is
possible in synchronous networks (but impossible in fully
asynchronous networks) at the price of just one synchronous
broadcast round.
As tools for our protocol we introduce the notions of \emph{almost
non-interactive verifiable secret-sharing} and \emph{almost
non-interactive zero-knowledge proof of knowledge}, which are of
independent interest as they can serve as efficient replacements for
fully non-interactive verifiable secret-sharing and fully
non-interactive zero-knowledge proof of knowledge.
Asynchronous Multiparty Computation: Theory and Implementation
We propose an asynchronous protocol for general multiparty computation with perfect security and communication complexity O(n^2 |C| k) where n is the number of parties, |C| is the size of the arithmetic circuit being computed, and k is the size of elements in the underlying field. The protocol guarantees termination if the adversary allows a preprocessing phase to terminate, in which no information is released. The communication complexity of this protocol is the same as that of a passively secure solution up to a constant factor. It is secure against an adaptive and active adversary corrupting less than n/3 players.
We also present a software framework for implementation of asynchronous protocols called VIFF (Virtual Ideal Functionality Framework), which allows automatic parallelization of primitive operations such as secure multiplications, without having to resort to complicated multithreading. Benchmarking of a VIFF implementation of our protocol confirms that it is applicable to practical non-trivial secure computations. VIFF can be downloaded from http://viff.dk/.
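The arithmetic-circuit evaluation in protocols of this kind is built on linear secret sharing over the underlying field; as a rough, self-contained illustration (not VIFF's actual API), a minimal Shamir sharing with threshold t < n/3 might look like this:

```python
import random

P = 2**31 - 1  # a Mersenne prime; the field the toy shares live in

def share(secret, t, n):
    """Split `secret` into n Shamir shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    # share i is f(i) for the random degree-t polynomial f with f(0) = secret
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the shares at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456, t=2, n=7)          # n = 7 parties, t = 2 < 7/3 corrupt
assert reconstruct(shares[:3]) == 123456  # any t + 1 = 3 shares suffice
```

Secure multiplication, the dominant cost counted in the O(n^2 |C| k) bound, is then a subprotocol that turns shares of x and y into shares of x*y.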
On the Number of Synchronous Rounds Required for Byzantine Agreement
Byzantine agreement is typically considered with respect to either a
fully synchronous network or a fully asynchronous one. In the
synchronous case, either t + 1 deterministic rounds are necessary in
order to achieve Byzantine agreement, or randomized protocols require
at least some expected large constant number of rounds.
In this paper we examine the question of how many initial synchronous
rounds are required for Byzantine agreement if we allow to switch to
asynchronous operation afterwards.
Let n be the number of parties, of which h are honest and t = n - h are
corrupted. As the main result we show that, in the model with a
public-key infrastructure and signatures, d deterministic
synchronous rounds are sufficient, where d is the minimal integer
satisfying a simple threshold condition in n and t. This improves over
the t + 1 necessary deterministic rounds for almost all cases, and over
the exact expected number of rounds in the non-deterministic case for
many cases.
Password Mistyping in Two-Factor-Authenticated Key Exchange
We study the problem of Key Exchange (KE), where authentication is two-factor and based on both electronically stored long keys and human-supplied credentials (passwords or biometrics). The latter credential has low entropy and may be adversarially mistyped. Our main contribution is the first formal treatment of mistyping in this setting.
Ensuring security in presence of mistyping is subtle. We show mistyping-related limitations of previous KE definitions and constructions.
We concentrate on the practical two-factor authenticated KE setting where servers exchange keys with clients, who use short passwords
(memorized) and long cryptographic keys (stored on a card). Our work is thus a natural generalization of Halevi-Krawczyk and Kolesnikov-Rackoff. We discuss the challenges that arise due to mistyping. We propose the first KE definitions in this setting, and formally discuss their guarantees. We present efficient KE protocols and prove their security.
Key Predistribution for Homogeneous Wireless Sensor Networks with Group Deployment of Nodes
Recent literature contains proposals for key predistribution schemes for sensor networks in which nodes are deployed in separate groups. In this paper we consider the implications of group deployment for the connectivity and resilience of a key predistribution scheme. After showing that there is a lack of flexibility in the parameters of a scheme due to Liu, Ning and Du, limiting its applicability in networks with small numbers of groups, we propose a more general scheme, based on the structure of a resolvable transversal design. We demonstrate that this scheme permits effective trade-offs between resilience, connectivity and storage requirements within a group-deployed environment as compared with other schemes in the literature, and show that group deployment can be used to increase network connectivity, without increasing storage requirements or sacrificing resilience.
Cryptanalysis of LU Decomposition-based Key Pre-distribution Scheme for Wireless Sensor Networks
S. J. Choi and H. Y. Youn proposed a key pre-distribution scheme for Wireless Sensor Networks based on LU decomposition of a symmetric matrix, and later many researchers built on this scheme. Nevertheless, we find a mathematical relationship between the L and U matrices decomposed from a symmetric matrix, by which one can calculate either matrix from the other regardless of their product -- the key matrix K. This relationship profoundly harms secure real-world implementations of this decomposition scheme. In this paper, we first present and prove the mathematical theorem. Next we give examples to illustrate how to break such networks using this theorem. Finally, we state the conclusion and some directions for improving the security of the key pre-distribution scheme.
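The structural weakness is easy to see concretely: when a symmetric K is factored as K = LU without pivoting, U = D*L^T with D = diag(U), so either triangular factor determines the other. A minimal sketch (illustrative Doolittle factorization, not the scheme's actual code):

```python
def lu_doolittle(K):
    """Doolittle LU factorization without pivoting (L has a unit diagonal)."""
    n = len(K)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(i, n):
            U[i][k] = K[i][k] - sum(L[i][m] * U[m][k] for m in range(i))
        for k in range(i + 1, n):
            L[k][i] = (K[k][i] - sum(L[k][m] * U[m][i] for m in range(i))) / U[i][i]
    return L, U

# A small symmetric "key matrix" K = L * U
K = [[4, 2, 2], [2, 3, 1], [2, 1, 3]]
L, U = lu_doolittle(K)

# Because K is symmetric, U = D * L^T with D = diag(U), so an attacker
# holding U alone can rebuild L entry by entry: L[i][j] = U[j][i] / U[j][j].
n = len(K)
L_from_U = [[U[j][i] / U[j][j] if j <= i else 0.0 for j in range(n)]
            for i in range(n)]
assert L_from_U == L   # L recovered without ever seeing K
```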
On the Role of PKG for Proxy Re-encryption in Identity Based Setting
In 1998, Blaze, Bleumer, and Strauss proposed a kind of
cryptographic primitive called proxy re-encryption.
In proxy re-encryption, a proxy can transform a ciphertext computed
under Alice's public key into one that can be opened under Bob's
decryption key. In 2007, Matsuo proposed the concept of four types
of proxy re-encryption schemes: CBE(Certificate Based Public
Key Encryption) to IBE(Identity Based Encryption)(type 1),
IBE to IBE(type 2), IBE to CBE (type 3),
CBE to CBE (type 4). Now CBE to
IBE and IBE to IBE proxy re-encryption schemes are being
standardized by the IEEE P1363.3 working group. In this
paper, we pay attention to the role of the
PKG for proxy re-encryption in the identity-based setting. We find
that if we allow the PKG to use its master-key in the
process of generating the re-encryption key for proxy re-encryption in
the identity-based setting, many open problems can be solved. Our main
results are as follows: we construct the first proxy re-encryption
scheme from CBE to IBE which can resist malicious PKG
attack, the first proxy re-encryption scheme from IBE to CBE,
the second proxy re-encryption scheme based on a variant of
BB_1 IBE, the first proxy re-encryption scheme based on BB_2 IBE, the
first proxy re-encryption scheme based on SK IBE, we also
prove their security in their corresponding security models.
A New (k,n)-Threshold Secret Sharing Scheme and Its Extension
In Shamir's (k,n)-threshold secret sharing scheme (threshold scheme), a heavy computational cost is required to make shares and recover the secret. As a solution to this problem, several fast threshold schemes have been proposed. This paper proposes a new fast (k,n)-threshold scheme and its extension to a (k,L,n)-threshold {\it ramp} scheme, similar to the existing {\it ramp} scheme based on Shamir's scheme.
The Enigmatique Toolkit
This paper describes a new method of creating systems of poly-alphabetic, symmetric ciphers with a large set of algorithms. The method uses two translation tables containing the code-text character set, one for encryption and the other for decryption. After each character is encrypted or decrypted, these tables are permuted through a series of pseudo-random, pairwise swaps. The implementation of alternative swap formulas leads to systems with large sets of encryption/decryption algorithms. Systems that contain more than a googol (1E100) algorithms have been easily implemented. An algorithm set can easily be subsetted, producing hierarchical systems where a program may contain a single algorithm or a portion of the full set. The strength of these systems has not been determined. However, strength can be inferred through the use of a substantial number of numeric keys, pseudo-random number generators, algorithm switching, and other features.
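The described mechanism (paired encryption/decryption tables re-permuted by pseudo-random pairwise swaps after every character) can be sketched as follows; the character set, the seeding, and the particular swap formula are all illustrative assumptions, not the Enigmatique implementation:

```python
import random

ALPHABET = [chr(c) for c in range(32, 127)]  # printable ASCII code-text set
N = len(ALPHABET)

def make_tables(key):
    """Derive matching encryption and decryption translation tables."""
    rng = random.Random(key)
    enc = list(range(N))
    rng.shuffle(enc)
    dec = [0] * N
    for i, v in enumerate(enc):
        dec[v] = i
    return enc, dec

def swap_step(table, inverse, rng):
    """One pseudo-random pairwise swap, mirrored into the inverse table."""
    a, b = rng.randrange(N), rng.randrange(N)
    table[a], table[b] = table[b], table[a]
    inverse[table[a]], inverse[table[b]] = a, b

def encrypt(key, text):
    enc, dec = make_tables(key)
    rng = random.Random(key + 1)   # swap schedule, kept in sync on both sides
    out = []
    for ch in text:
        out.append(ALPHABET[enc[ALPHABET.index(ch)]])
        swap_step(enc, dec, rng)
    return "".join(out)

def decrypt(key, text):
    enc, dec = make_tables(key)
    rng = random.Random(key + 1)
    out = []
    for ch in text:
        out.append(ALPHABET[dec[ALPHABET.index(ch)]])
        swap_step(enc, dec, rng)
    return "".join(out)

msg = "Attack at dawn"
assert decrypt(42, encrypt(42, msg)) == msg
```

Because the tables drift after every character, identical plaintext characters map to different ciphertext characters, which is what makes the system poly-alphabetic.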
Indifferentiable Security Analysis of choppfMD, chopMD, chopMDP, chopWPH, chopNI, chopEMD, chopCS, and chopESh Hash Domain Extensions
We provide simple and unified indifferentiable security analyses of the choppfMD, chopMD, chopMDP (where the permutation output is XORed with a non-zero constant), chopWPH (the chopped version of the Wide-Pipe Hash proposed in \cite{Lucks05}), chopEMD, chopNI, chopCS, and chopESh hash domain extensions. Even though security analyses of these exist for the case of no-bit chopping, there is no unified way to give security proofs. All our proofs in this paper follow the technique introduced in \cite{BeDaPeAs08}. These proofs are simple and easy to follow.
An asymptotically optimal RFID protocol against relay attacks
Relay attacks are a major concern for RFID systems: during an authentication process an adversary transparently relays messages between a verifier and a remote legitimate prover.
We present an authentication protocol suited for RFID systems. Our solution is the first that prevents relay attacks without degrading the authentication security level: it minimizes the probability that the verifier accepts a fake proof of identity, whether or not a relay attack occurs.
Slid Pairs in Salsa20 and Trivium
The stream ciphers Salsa20 and Trivium are two of the finalists of the eSTREAM project which are in the final portfolio of new promising stream ciphers.
In this paper we show that the initialization and key-stream generation of these ciphers are {\em slidable}, i.e. one can find distinct (Key, IV) pairs
that produce identical (or closely related) key-streams.
There are and more than such pairs in Salsa20 and Trivium respectively.
We write out and solve the non-linear equations which describe such related (Key, IV) pairs.
This allows us to sample the space of such related pairs efficiently as well as detect such pairs in large portions of key-stream very efficiently.
We show that Salsa20 does not have 256-bit security if one considers general birthday and related key distinguishing and key-recovery attacks.
Pairing with Supersingular Trace Zero Varieties Revisited
A Trace Zero Variety is a specific subgroup of the group of divisor classes on a hyperelliptic curve, which are rational over a small-degree extension of the field of definition.
Trace Zero Varieties (\tzv) are interesting for cryptographic applications since they enjoy properties that can be exploited to achieve fast arithmetic and group construction.
Furthermore, supersingular \tzv allow one to achieve higher MOV security per bit than supersingular elliptic curves, thus making them interesting for applications in pairing-based cryptography.
In this paper we survey algorithms in the literature for computing bilinear pairings and we present a new algorithm for the Tate pairing over supersingular \tzv, which exploits the action of the q-Frobenius.
We give explicit examples and provide experimental results for supersingular \tzv defined over fields of characteristic 2.
Moreover, in the same settings, we propose a more efficient variant of Silverberg's point compression algorithm.
SPICE Simulation of a "Provably Secure" True Random Number Generator
In their paper "A Provably Secure True Random Number Generator with Built-in Tolerance to Active Attacks",
B. Sunar, W. Martin, and D. Stinson propose a design for a true random number generator. Using SPICE simulation we study the behaviour of their random number generator and show that practical implementations produce a signal whose frequency is too high to be processed with current CMOS technology.
Algebraic Cryptanalysis of Curry and Flurry using Correlated Messages
In \cite{BPW}, Buchmann, Pyshkin and Weinmann have described two families of
Feistel and SPN block ciphers called Flurry and Curry
respectively. These two families of ciphers are fully parametrizable and have
a sound design strategy against basic statistical attacks, i.e. linear and
differential attacks. The encryption process can be easily described by a set
of algebraic equations. These ciphers are thus targets of choice for
algebraic attacks. In particular, the key recovery problem has been reduced to
changing the order of a Groebner basis \cite{BPW,BPWext}. This attack --
although more efficient than linear and differential attacks -- remains
quite limited. The purpose of this paper is to overcome this limitation by
using a small number of suitably chosen message/ciphertext pairs to
improve algebraic attacks. It turns out that this approach permits going one
step further in the (algebraic) cryptanalysis of Flurry and
\textbf{Curry}. To explain the behavior of our attack, we have established an
interesting connection between algebraic attacks and high order differential
cryptanalysis \cite{Lai}. From extensive experiments, we estimate that our
approach, which we call ``algebraic-high order
differential'' cryptanalysis, is polynomial when the Sbox is a power function.
As a proof of concept, we have been able to break Flurry -- up to
rounds -- in a few hours.
Two New Efficient CCA-Secure Online Ciphers: MHCBC and MCBC
Online ciphers are those ciphers whose ciphertexts can be computed
in real time by using a length-preserving encryption algorithm.
HCBC1 and HCBC2 are two known examples of Hash Cipher Block
Chaining online ciphers. The first construction is secure against
chosen-plaintext adversaries (i.e., CPA-secure) whereas the
latter is secure against chosen-ciphertext adversaries (i.e.,
CCA-secure). In this paper, we provide a simple security
analysis of these online ciphers. We also propose two new,
more efficient chosen-ciphertext-secure online ciphers,
modified-HCBC (MHCBC) and modified-CBC (MCBC). If one uses a
finite field multiplication based universal hash function, the
former needs one less key and one less field multiplication
compared to HCBC2. The MCBC does not need any universal hash
function and it needs only one blockcipher key unlike the other
three online ciphers where two independent keys (hash function and
blockcipher) are required.
Comments on two password based protocols
Recently, M. Hölbl et al. and I. E. Liao et al. each proposed a user
authentication protocol. Both claimed that their schemes can withstand
password guessing attacks. However, T. Xiang et al. pointed out that
I. E. Liao et al.'s protocol suffers from three kinds of attacks, including
password guessing attacks. We present improved protocols that get
rid of password guessing attacks. In this paper, we first point out
the security loopholes of M. Hölbl et al.'s protocol and review
T. Xiang et al.'s cryptanalysis on I. E. Liao et al.'s protocol. Then,
we present the improvements on M. Hölbl et al.'s protocol and
I. E. Liao et al.'s protocol, respectively.
Round Efficient Unconditionally Secure Multiparty Computation Protocol
In this paper, we propose a round efficient {\it unconditionally secure multiparty computation} (UMPC)
protocol in {\it information theoretic} model with players, in the absence of any physical broadcast channel,
which communicates field elements per multiplication
and requires rounds, even if up to players
are under the control of an active adversary having {\it unbounded computing power}. In the absence of a physical broadcast channel
and with players, the best known UMPC protocol with minimum number of rounds,
requires rounds and
communicates field elements per multiplication, where denotes
the multiplicative depth of the circuit representing the function to be computed securely.
On the other hand, the best known UMPC protocol with minimum communication complexity
requires communication overhead of field elements per multiplication, but has
a round complexity of rounds. Hence our UMPC protocol is
the most round efficient protocol so far and ranks second according to communication complexity.
To design our protocol, we use certain new techniques which are of independent interest.
Generating genus two hyperelliptic curves over large characteristic finite fields
In hyperelliptic curve cryptography, finding a suitable
hyperelliptic curve is an important fundamental problem.
One of the necessary conditions is that the order of its Jacobian
is a product of a large prime number and a small number.
In this paper, we give a probabilistic polynomial
time algorithm that tests whether the Jacobian of a given hyperelliptic curve of the given form
satisfies the condition and, if so, outputs the largest prime factor.
Our algorithm enables us to generate random curves of this form
until the order of the Jacobian is almost prime in the above sense.
A key idea is to obtain candidates of its zeta function over the base field from
its zeta function over the extension field where the Jacobian splits.
Last updated: 2008-11-27
A Framework for the Development of Playfair Cipher Considering Probability of Occurrence of Characters in English Literature
The Playfair cipher was the first practical digraph substitution cipher. The scheme was invented in 1854 by Charles Wheatstone, but was named after Lord Playfair, who promoted its use. In this paper, two algorithms are proposed for writing the Playfair cipher in a new way, considering the probability of occurrence of English characters in English literature. In the first algorithm, at each step the algorithm checks whether the plaintext contains an odd number of characters. If so, it concatenates at the end a dummy character with the lowest probability of occurrence, to form an even-length word, and then searches a dictionary to find out whether the word with the dummy character is present. If that word is present in the dictionary, the dummy character is deleted and the same procedure is repeated with the dummy character of next-lowest probability. The second method adds another new approach: it replaces every blank between two consecutive words by the character least likely to appear at that position in the sentence, and searches the dictionary to check whether this new word has a different meaning; if so, the blank is replaced with the character of next-higher probability for that position. Finally, the encryption and decryption times of some sentences are compared between the original algorithm and the two proposed ones; the decryption times are much better than with the original Playfair algorithm, and the original message is produced from the encrypted message directly.
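For reference, the classical digraph substitution that both proposed algorithms extend can be sketched as follows (standard Playfair with I/J merged and a fixed 'X' as the dummy character; the paper's probability-ranked dummy selection and dictionary search are not implemented here):

```python
def build_square(keyword):
    """5x5 Playfair square from a keyword (I and J share a cell)."""
    seen = []
    for ch in (keyword.upper() + "ABCDEFGHIKLMNOPQRSTUVWXYZ"):
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return seen  # 25 letters, row-major

def encrypt(keyword, plaintext):
    sq = build_square(keyword)
    pos = {ch: divmod(i, 5) for i, ch in enumerate(sq)}
    # normalise: letters only, J->I, split repeated letters, pad odd length
    text = [("I" if c == "J" else c) for c in plaintext.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        b = text[i + 1] if i + 1 < len(text) and text[i + 1] != a else "X"
        pairs.append((a, b))
        i += 1 if b == "X" and (i + 1 >= len(text) or text[i + 1] == a) else 2
    out = []
    for a, b in pairs:
        ra, ca = pos[a]
        rb, cb = pos[b]
        if ra == rb:                       # same row: shift right
            out += [sq[ra * 5 + (ca + 1) % 5], sq[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:                     # same column: shift down
            out += [sq[((ra + 1) % 5) * 5 + ca], sq[((rb + 1) % 5) * 5 + cb]]
        else:                              # rectangle: swap columns
            out += [sq[ra * 5 + cb], sq[rb * 5 + ca]]
    return "".join(out)

print(encrypt("PLAYFAIR EXAMPLE", "Hide the gold in the tree stump"))
# classic worked example; prints BMODZBXDNABEKUDMUIXMMOUVIF
```

The paper's contribution is precisely in how the 'X' above is chosen: a dummy letter ranked by its probability of occurrence, validated against a dictionary.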
Analysis of RC4 and Proposal of Additional Layers for Better Security Margin
In this paper, the RC4 Key Scheduling Algorithm (KSA) is theoretically studied to reveal non-uniformity in the expected number of times each value of the permutation is touched by the indices i and j. Based on our analysis and the results available in the literature regarding the existing weaknesses of RC4, a few additional layers over the RC4 KSA and RC4 Pseudo-Random Generation Algorithm (PRGA) are proposed. Analysis of the modified cipher (we call it RC4+) shows that this new strategy avoids the existing weaknesses of RC4.
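For context, the unmodified KSA and PRGA over which the additional layers are proposed are only a few lines; a standard RC4 sketch with a well-known test vector:

```python
def rc4_ksa(key):
    """RC4 Key Scheduling Algorithm: build the initial permutation S."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_prga(S, n):
    """RC4 Pseudo-Random Generation Algorithm: emit n keystream bytes."""
    S = S[:]          # work on a copy so the caller's state is untouched
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Well-known test vector: key "Key", plaintext "Plaintext"
keystream = rc4_prga(rc4_ksa(list(b"Key")), 9)
cipher = bytes(k ^ p for k, p in zip(keystream, b"Plaintext"))
print(cipher.hex())  # bbf316e8d940af0ad3
```

The biases studied in the paper come from the KSA loop above: values of S are swapped a non-uniform number of times depending on their position.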
New Applications of Differential Bounds of the SDS Structure
In this paper, we present some new applications of the bounds for the differential probability of an SDS (Substitution-Diffusion-Substitution) structure by Park et al. at FSE 2003. Park et al. applied their result to the AES cipher, which uses an SDS structure based on MDS matrices. We shall apply their result to practical ciphers that use SDS structures based on n x n {0,1}-matrices. These structures are useful because they can be efficiently implemented in hardware. We prove a bound on {0,1}-matrices to show that they cannot be MDS and are almost-MDS only when n = 2, 3, or 4. Thus we have to apply Park's result whenever larger {0,1}-matrices are used, because previous results only hold for MDS and almost-MDS diffusion matrices. Based on our bound, we also show that the {0,1}-matrix used in E2 is almost-optimal among {0,1}-matrices. Using Park's result, we prove differential bounds for E2 and an MCrypton-like cipher, from which we can deduce their security against the boomerang attack and some of its variants. At ICCSA 2006, Khoo and Heng constructed block cipher-based universal hash functions, from which they derived Message Authentication Codes (MACs) which are faster than CBC-MAC. Park's result provides us with the means to obtain a more accurate bound for their universal hash function. With this bound, we can restrict the number of MACs performed before a change of MAC key is needed.
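The MDS and almost-MDS notions are easy to check by brute force for small n: the differential branch number of a diffusion matrix M is the minimum of wt(x) + wt(Mx) over nonzero x, and an n x n matrix is MDS when this equals n + 1. A sketch over GF(2) (illustrative, not Park et al.'s bound computation):

```python
from itertools import product

def branch_number(M):
    """Differential branch number over GF(2):
    min over nonzero inputs x of wt(x) + wt(M*x)."""
    n = len(M)
    best = 2 * n + 1
    for x in product([0, 1], repeat=n):
        if not any(x):
            continue
        y = [sum(M[i][j] & x[j] for j in range(n)) % 2 for i in range(n)]
        best = min(best, sum(x) + sum(y))
    return best

# 4x4 {0,1}-matrix with zero diagonal and ones elsewhere: almost-MDS
M4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(branch_number(M4))  # 4 = n, i.e. almost-MDS (MDS would require n + 1 = 5)
```

For n >= 5 no {0,1}-matrix reaches even n, which is why Park's bound, rather than MDS-based results, is needed for such ciphers.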
Attribute-Based Ring Signatures
Ring signatures were proposed to preserve the signer's anonymity when
signing messages on behalf of a ``ring'' of possible signers. In this
paper, we propose a novel notion of ring signature called the
attribute-based ring signature. This kind of signature allows
the signer to sign a message with its attributes obtained from an attribute
center. All users who possess these attributes form a ring. The
identity of the signer is kept anonymous within this ring. Furthermore,
no one outside this ring can forge a signature on behalf of
the ring.
Two constructions of attribute-based ring signatures are also
presented in this paper. The first scheme is proved to be secure in
the random oracle model, with a large universe of attributes. We also
present another scheme that avoids the random oracle model: it
relies neither on a non-standard hardness assumption nor on the random
oracle model. Both schemes in this paper are based on the standard
computational Diffie-Hellman assumption.
How Far Must You See To Hear Reliably
We consider the problem of probabilistic reliable communication (PRC) over synchronous networks modeled as directed graphs in the presence of a Byzantine adversary when players' knowledge of the network topology is not complete. We show that the possibility of PRC is extremely sensitive to changes in the players'
knowledge of the topology. This is in complete contrast with earlier known results on the possibility of perfectly reliable communication over undirected graphs, where the case of each player knowing only its neighbours gives the same result as the case where players have complete knowledge of the network. Specifically, in either case, (2t+1)-vertex connectivity is necessary and sufficient, where t is the number of nodes that can be corrupted by the adversary \cite{DDWY93:PSMT,SKR05}. We introduce a novel model for quantifying players' knowledge of the network topology. Given a directed graph influenced by a Byzantine adversary that can corrupt up to t players, we give a necessary and sufficient condition for the possibility of PRC for any arbitrary topology knowledge. It follows from our main characterization theorem that a bounded number of levels of topology knowledge is sufficient for the solvability of honest-player to honest-player communication over any network over which PRC is possible when each player has complete knowledge of the topology. We also show the existence of networks where PRC is possible when players have complete topology knowledge but impossible when players lack knowledge beyond those levels.
GUC-Secure Set-Intersection Computation
Secure set-intersection computation is one of the important problems in the field of secure multiparty computation, with valuable applications. We propose a very general construction for 2-party set-intersection computation based on an anonymous IBE scheme and its user private-key blind generation techniques. Compared with recently-proposed protocols, e.g., those of Freedman-Nissim-Pinkas, Kissner-Song and Hazay-Lindell, this construction is provably GUC-secure in the standard model with acceptable efficiency. Towards this goal, a new notion of non-malleable zero-knowledge proofs of knowledge and an efficient general construction of it are presented. In addition, we present an efficient instantiation of this general construction via the anonymous Boyen-Waters IBE scheme.
Could The 1-MSB Input Difference Be The Fastest Collision Attack For MD5 ?
So far, two different 2-block collision differentials, both with 3-bit input differences for MD5, have been found by Wang et al. in 2005 and Xie et al. in 2008 respectively, and those differentials have been improved later on to generate a collision respectively within around one minute and half an hour on a desktop PC. Are there more collision differentials for MD5? Can a more efficient algorithm be developed to find MD5 collisions? In this paper, we list the whole set of 1-bit to 3-bit input difference patterns that are possibly qualified to construct a feasible collision differential, from which a new collision differential with only a 1-MSB input difference is then analyzed in detail; finally, the performance is compared with the prior two 3-bit collision attacks according to seven criteria proposed in this paper. In our approach, a two-block message is still needed to produce a collision; the first block differs only in one MSB, while the second block remains the same. Although the differential path appears to be computationally infeasible, most of the conditions that a collision differential path must satisfy can be fulfilled by multi-step modifications, and the collision searching efficiency can be much improved further by a specific divide-and-conquer technique, which transforms a multiplicative accumulation of the computational complexities into an addition by properly grouping the conditional bits. In particular, a tunneling-like technique is applied to enhance the attack algorithm by introducing some additional conditions. As a result, the fastest attack algorithm is obtained, with an average computational complexity of 2^20.96 MD5 compressions, which implies that it is able to find a collision within a second on a common PC for arbitrary random initial values. With reasonable probability a collision can be found within milliseconds, making it possible to mount an attack during the execution of a practical protocol.
The collision searching algorithm is complex, but it has been implemented and is available from http://www.is.iscas.ac.cn/gnomon; interested readers are invited to download the implementation and try it for themselves.
Elliptic Curve Cryptography: The Serpentine Course of a Paradigm Shift
Over a period of sixteen years elliptic curve cryptography went from being an approach that many people mistrusted or misunderstood to being a public key technology that enjoys almost unquestioned acceptance. We describe the sometimes surprising twists and turns in this paradigm shift, and compare this story with the commonly accepted Ideal Model
of how research and development function in cryptography. We also
discuss to what extent the ideas in the literature on "social
construction of technology" can contribute to a better understanding
of this history.
Optimal Subset-Difference Broadcast Encryption with Free Riders
Broadcast encryption (BE) deals with secure transmission of a message to
a group of receivers such that only an authorized subset of receivers can
decrypt the message.
The transmission cost of a BE system can be reduced considerably if
a limited number of free riders can be tolerated in the system.
In this paper, we study the problem of how to optimally place a given
number of free riders in a subset difference (SD) based BE system,
which is currently the most efficient BE scheme in use and has also
been incorporated in standards, and we propose a polynomial-time optimal
placement algorithm and three more efficient heuristics for this problem.
Simulation experiments show that SD-based BE schemes can benefit
significantly from the proposed algorithms.
Double-Base Number System for Multi-Scalar Multiplications
The Joint Sparse Form is currently the standard representation system used to perform multi-scalar multiplications of the form nP + mQ. We introduce the concept of the Joint Double-Base Chain, a generalization of the Double-Base Number System that represents n and m simultaneously. This concept is relevant because of the high redundancy of Double-Base systems, which ensures that we can
find a chain of reasonable length that uses exactly the same terms to compute both and . Furthermore, we discuss an algorithm to produce such a Joint Double-Base Chain. Because of its simplicity, this algorithm is straightforward to implement, efficient, and also quite easy to analyze. Namely, in our main result we show that the average number of terms in the expansion is less than . With respect to the Joint Sparse Form, this induces a reduction by more than of the number of additions. As a consequence, the total number of multiplications required for a scalar multiplication is minimal for our method, across all the methods using two precomputations, and . This is the case even with coordinate systems offering very cheap doublings, in contrast with recent results on scalar multiplications.
Several variants are discussed, including methods using more precomputed points and a generalization relevant for Koblitz curves.
Our second contribution is a new way to evaluate , the dual endomorphism of the Frobenius. Namely, we propose formulae to compute with at most 2 multiplications and 2 squarings in . This represents a speed-up of about with respect to the fastest techniques known. This has very concrete consequences on scalar and multi-scalar multiplications on
Koblitz curves.
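As background, the redundancy of double-base representations can be seen with the classic greedy decomposition of a single scalar. This is a toy sketch only; the paper's Joint Double-Base Chains additionally force both scalars to reuse the same terms and constrain the exponents:

```python
def greedy_dbns(n):
    """Greedy double-base decomposition: write n >= 0 as a sum of
    terms 2^a * 3^b.  Shown only to illustrate the redundancy the
    paper exploits; it is not the paper's joint-chain algorithm."""
    terms = []
    while n > 0:
        # find the largest {2,3}-integer 2^a * 3^b not exceeding n
        best, best_ab = 0, None
        a, pow2 = 0, 1
        while pow2 <= n:
            b, val = 0, pow2
            while val <= n:
                if val > best:
                    best, best_ab = val, (a, b)
                b += 1
                val *= 3
            a += 1
            pow2 *= 2
        terms.append(best_ab)
        n -= best
    return terms

def evaluate(terms):
    """Recombine a list of (a, b) exponent pairs into the integer."""
    return sum(2 ** a * 3 ** b for (a, b) in terms)
```

For example, `greedy_dbns(12)` returns the single term `(2, 1)` since 12 = 2² · 3; larger scalars typically decompose into far fewer terms than their binary length.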
Last updated: 2009-02-13
--Withdrawn--
Uncategorized
None
Shared Key Encryption by the State Machine with Two-Dimensional Random Look-up Table
This paper describes a new approach to creating fast encryption methods with a symmetric (shared) key. The resulting solution is intermediate between block and stream ciphers: the main advantages of both types of ciphers are realized, while most of their disadvantages are eliminated. The new approach combines encryption with a built-in hash calculation for data integrity, a pseudo-random generator, and an option for dual shared keys. These properties are pivotal for secure applications. The new methods may be designed for different sizes of shared secret key. Both software and hardware implementations of the new methods are fast and simple and may be used in various security applications. The presented cryptanalysis establishes the basic security features and opens an option to design provably secure methods.
Cube Attacks on Tweakable Black Box Polynomials
Uncategorized
Almost any cryptographic scheme can be described by
\emph{tweakable polynomials} over , which contain both
secret variables (e.g., key bits) and public variables (e.g.,
plaintext bits or IV bits). The cryptanalyst is allowed to tweak
the polynomials by choosing arbitrary values for the public
variables, and his goal is to solve the resultant system of
polynomial equations in terms of their common secret variables. In
this paper we develop a new technique (called a \emph{cube
attack}) for solving such tweakable polynomials, which is a major
improvement over several previously published attacks of the same
type. For example, on the stream cipher Trivium with a reduced
number of initialization rounds, the best previous attack (due to
Fischer, Khazaei, and Meier) requires a barely practical
complexity of to attack initialization rounds,
whereas a cube attack can find the complete key of the same
variant in bit operations (which take less than a second
on a single PC). Trivium with initialization rounds (which
could not be attacked by any previous technique) can now be broken
with bit operations. Trivium with initialization
rounds can now be broken with bit operations, and the
complexity of the attack can almost certainly be further reduced to about
bit operations. Whereas previous attacks were heuristic,
had to be adapted to each cryptosystem, had no general complexity
bounds, and were not expected to succeed on random looking
polynomials, cube attacks are provably successful when applied to
random polynomials of degree over secret variables
whenever the number of public variables exceeds .
Their complexity is bit operations, which is
polynomial in and amazingly low when is small. Cube
attacks can be applied to any block cipher, stream cipher, or MAC
which is provided as a black box (even when nothing is known about
its internal structure) as long as at least one output bit can be
represented by (an unknown) polynomial of relatively low degree in
the secret and public variables.
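The core mechanism can be illustrated on a toy polynomial: summing the output over all assignments of a chosen "cube" of public variables leaves a low-degree "superpoly" in the secret variables, which the attacker learns offline and then uses online as a linear equation in the key bits. A minimal sketch with a made-up polynomial `p` (not any real cipher):

```python
from itertools import product

def p(v, x):
    # Toy "black box" polynomial over GF(2), with public bits v and
    # secret bits x:  p = v0*v1*x0 + v0*v1*v2 + v1*x1 + v0*x0*x1 + v2
    return (v[0] & v[1] & x[0]) ^ (v[0] & v[1] & v[2]) ^ \
           (v[1] & x[1]) ^ (v[0] & x[0] & x[1]) ^ v[2]

def cube_sum(poly, cube, n_pub, x):
    """XOR poly over all 0/1 assignments to the cube indices,
    with the remaining public variables fixed to 0."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube)):
        v = [0] * n_pub
        for i, b in zip(cube, bits):
            v[i] = b
        acc ^= poly(v, x)
    return acc

# Offline phase: learn the superpoly of cube {v0, v1} as a linear
# function of the secret bits by probing chosen secrets.
cube = [0, 1]
c0 = cube_sum(p, cube, 3, [0, 0])                # constant term
coeffs = []
for j in range(2):
    e = [0, 0]
    e[j] = 1
    coeffs.append(cube_sum(p, cube, 3, e) ^ c0)  # coefficient of x_j

# Online phase: one cube sum under the unknown key yields a linear
# equation in the secret bits:  coeffs[0]*x0 ^ coeffs[1]*x1 = rhs
secret = [1, 0]
rhs = cube_sum(p, cube, 3, secret) ^ c0
```

Here the superpoly of the cube {v0, v1} is exactly x0, so the online sum directly reveals one key bit; a real attack collects many such equations from different cubes.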
Improving the Boneh-Franklin Traitor Tracing Scheme
Traitor tracing schemes are cryptographically secure broadcast methods that allow identification of conspirators: if a pirate key is generated by traitors out of a static set of legitimate
users, then all traitors can be identified given the pirate key. In this paper we address three practicality and security issues of the Boneh-Franklin traitor-tracing scheme. In the first place, without changing the original scheme, we modify its tracing procedure in the non-black-box model such that it allows identification of traitors in time , as opposed to the original tracing complexity . This new tracing procedure works independently of the nature of the Reed-Solomon code used to watermark private keys. As a consequence, in applications with billions of users it takes just a few minutes on a common desktop computer to identify large collusions. Secondly, we exhibit the lack of practical value of list-decoding algorithms to identify more than traitors. Finally, we show that traitors can derive the keys of all legitimate users and we propose a fix to this security issue.
Hierarchical Identity Based Encryption with Polynomially Many Levels
We present the first hierarchical identity based encryption (HIBE) system that has full security for more than a constant number of levels. In all prior HIBE systems in the literature, the security reductions suffered from exponential degradation in the depth of the hierarchy, so these systems were only proven fully secure for identity hierarchies of constant depth. (For deep hierarchies, previous work could only prove the weaker notion of selective-ID security.) In contrast, we offer a tight proof of security, regardless of the number of levels; hence our system is secure for polynomially many levels.
Our result can very roughly be viewed as an application of Boyen's framework for constructing HIBE systems from exponent-inversion IBE systems to a (dramatically souped-up) version of Gentry's IBE system, which has a tight reduction. In more detail, we first describe a generic transformation from ``identity based broadcast encryption with key randomization" (KR-IBBE) to a HIBE, and then construct KR-IBBE by modifying a recent construction of IBBE of Gentry and Waters, which is itself an extension of Gentry's IBE system. Our hardness assumption is similar to that underlying Gentry's IBE system.
Authenticated Wireless Roaming via Tunnels: Making Mobile Guests Feel at Home
Uncategorized
In wireless roaming a mobile device obtains a service from some foreign network while being registered for a similar service at its own home network. Recent proposals keep the service-provider role with the home network and let the foreign network create a tunnel through which all service requests of the mobile device are sent to, and answered directly by, the home network. Such Wireless Roaming via Tunnels (WRT) offers several (security) benefits but also raises new security challenges for authentication and key establishment, as the goal is not only to protect the end-to-end communication between the tunnel peers but also the tunnel itself.
In this paper we formally specify mutual authentication and key establishment goals for WRT and propose an efficient and provably secure protocol that can be used to secure such roaming sessions. Additionally, we describe some modular protocol extensions to address resistance against DoS attacks, anonymity of the mobile device and unlinkability of its roaming sessions, as well as the
accounting claims of the foreign network in commercial scenarios.
New AES software speed records
This paper presents new speed records for AES software, taking advantage of (1) architecture-dependent reduction of instructions used to compute AES and (2) microarchitecture-dependent reduction of cycles used for those instructions.
A wide variety of common CPU architectures---amd64, ppc32, sparcv9, and x86---are discussed in detail, along with several specific microarchitectures.
Dynamic Threshold Cryptosystem without Group Manager
In dynamic networks with flexible memberships, group signatures and distributed signatures pose an important problem.
Dynamic threshold cryptosystems are best suited to realize distributed signatures in dynamic (e.g. meshed) networks. Without a group manager or a trusted third party even more flexible scenarios can be realized.
Gennaro et al. showed that it is possible to dynamically increase the size of the signer group without altering the public key. We extend this idea by removing members from the group, also without changing the public key. This is an important feature for dynamic groups, since it is very common, e.g. in meshed networks, that members leave a group.
Gennaro et al. used RSA and bi-variate polynomials for their scheme. In contrast, we developed a DL-based scheme that uses ideas from the field of proactive secret sharing (PSS). One advantage of our scheme is the possibility to use elliptic curve cryptography and thereby decrease the communication and computation complexity through a smaller security parameter.
Our proposal is an efficient threshold cryptosystem that is able to adapt the group size in both directions. Since it is not possible to realize a non-interactive scheme with the ability to remove members (while the public key stays unchanged), we realized an interactive scheme whose communication efficiency is highly optimized to compete with non-interactive schemes.
Our contribution also includes a security proof for our threshold scheme.
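As background for the secret-sharing machinery, here is a minimal Shamir threshold sharing sketch over a prime field. This is the standard building block behind proactive secret sharing, not the paper's DL-based, group-resizing protocol:

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def share(secret, t, n):
    """Shamir (t, n) sharing: any t shares reconstruct the secret,
    fewer reveal nothing.  The secret is the constant term of a
    random degree-(t-1) polynomial evaluated at points 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Proactive variants periodically re-randomize the shares (adding sharings of zero) without changing the secret, which is the mechanism the membership-change protocols build on.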
A Characterization of Chameleon Hash Functions and New, Efficient Designs
Uncategorized
This paper shows that chameleon hash functions and Sigma
protocols are equivalent. We provide a transform of any suitable Sigma protocol
to a chameleon hash function, and also show that any chameleon hash function is
the result of applying our transform to some suitable Sigma protocol. This
enables us to unify previous designs of chameleon hash functions, seeing them
all as emanating from a common paradigm, and also obtain new designs that are
more efficient than previous ones. In particular, via a modified version of the
Fiat-Shamir protocol, we obtain the fastest known chameleon hash function with
a proof of security based on the STANDARD factoring assumption.
The increasing number of applications of
chameleon hash functions,
including on-line/off-line signing, chameleon signatures, designated-verifier
signatures and conversion from weakly-secure to fully-secure
signatures, make our work of
contemporary interest.
Additively Homomorphic Encryption with d-Operand Multiplications
Uncategorized
The search for encryption schemes that allow the evaluation of functions (or circuits) over encrypted data has attracted a lot of attention since the seminal work on this subject by Rivest, Adleman and Dertouzos in 1978.
In this work we define a theoretical object, chained encryption schemes, which allow an efficient evaluation of polynomials of degree d over encrypted data. Chained encryption schemes are generically constructed by concatenating cryptosystems with the appropriate homomorphic properties; such schemes are common in lattice-based cryptography. As a particular instantiation we propose a chained encryption scheme whose IND-CPA security is based on a worst-case/average-case reduction from uSVP.
TRIVIUM's output partially autocancels
The eSTREAM cipher proposal TRIVIUM outputs an XOR sum of six
internal state bits. These in turn are obtained linearly from 18 bits,
plus six AND terms of two bits each.
We show that 4 of the 18 linear terms cancel between themselves.
Session-state Reveal is stronger than Ephemeral Key Reveal: Attacking the NAXOS Authenticated Key Exchange protocol
In the papers Stronger Security of Authenticated Key Exchange [LLM07, LLM06], a new security model for key exchange protocols is proposed. The new model is suggested to be at least as strong as previous models for key exchange protocols. In particular, the model includes a new notion of an Ephemeral Key Reveal adversary query, which is claimed in [LLM06, Oka07, Ust08] to be at least as strong as existing definitions of the Session-state Reveal query. We show that for some protocols, Session-state Reveal is strictly stronger than Ephemeral Key Reveal. In particular, we show that the NAXOS protocol from [LLM07, LLM06] does not meet its security requirements if the Session-state Reveal query is allowed in the security model.
A public key encryption scheme secure against key dependent chosen plaintext and adaptive chosen ciphertext attacks
Uncategorized
Recently, at Crypto 2008, Boneh, Halevi, Hamburg, and Ostrovsky (BHHO)
solved the long-standing open problem of ``circular encryption,''
by
presenting a public key encryption scheme
and proving that it
is semantically secure against key dependent chosen plaintext attack (KDM-CPA security)
under standard assumptions (and without resorting to random oracles).
However, they left as an open problem that of designing
an encryption scheme that \emph{simultaneously} provides security
against both key dependent chosen plaintext \emph{and} adaptive chosen ciphertext
attack (KDM-CCA2 security).
In this paper, we solve this problem.
First, we show that by applying the Naor-Yung ``double encryption''
paradigm, one can combine
any KDM-CPA secure scheme with any (ordinary) CCA2 secure scheme,
along with an appropriate non-interactive zero-knowledge proof,
to obtain a KDM-CCA2 secure scheme.
Second, we give a concrete instantiation that makes use of
the above KDM-CPA secure scheme of BHHO,
along with a generalization of the Cramer-Shoup CCA2 secure
encryption scheme,
and recently developed
pairing-based NIZK proof systems.
This instantiation increases the complexity of the BHHO
scheme by just a small constant factor.
Chosen Ciphertext Security with Optimal Ciphertext Overhead
Every public-key encryption scheme has to incorporate a certain
amount of randomness into its ciphertexts to provide semantic security
against chosen ciphertext attacks (IND-CCA). The difference between the length of a ciphertext and the embedded message is called the ciphertext overhead. While a generic brute-force adversary running in steps gives a theoretical lower bound of bits on the
ciphertext overhead for IND-CPA security, the best known IND-CCA secure schemes demand roughly bits even in the random oracle model. Is the -bit gap essential for achieving IND-CCA security?
We close the gap by proposing an IND-CCA secure scheme whose ciphertext overhead matches the generic lower bound up to a small constant. Our scheme uses a variation of a four-round Feistel network in the random oracle model and hence belongs to the family of OAEP-based schemes. Maybe of independent interest is a new efficient method to encrypt long messages exceeding the length of the permutation while retaining the minimal overhead.
Analysis and Improvement of Authenticatable Ring Signcryption Scheme
Ring signcryption is an anonymous signcryption which allows a user
to anonymously signcrypt a message on behalf of a set of users
including himself. In an ordinary ring signcryption scheme, even the
ring member who generates a signcryption cannot prove
that the signcryption was produced by himself. In 2008, Zhang, Yang,
Zhu, and Zhang solved this problem by introducing an identity-based
authenticatable ring signcryption scheme (denoted as the ZYZZ
scheme). In the ZYZZ scheme, the actual signcrypter can prove that
the ciphertext is generated by himself, and the others cannot
authenticate it. However, in this paper, we show that the ZYZZ
scheme is not secure against chosen plaintext attacks. Furthermore,
we propose an improved scheme that remedies the weakness of the ZYZZ
scheme. The improved scheme has shorter ciphertext size than the
ZYZZ scheme. We then prove that the improved scheme satisfies
confidentiality,
unforgeability, anonymity and authenticatability.
Enumeration of Balanced Symmetric Functions over GF(p)
It is proved that the construction and enumeration of balanced symmetric functions over GF(p) are equivalent to solving an equation system and enumerating its solutions. Furthermore, we give a lower bound on the number of balanced symmetric functions over GF(p), which is the best result known to date.
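For the Boolean case p = 2, the equivalence is easy to see concretely: a symmetric function is determined by its values on the n+1 Hamming weights, and balancedness becomes a subset-sum condition on binomial coefficients. A brute-force enumeration sketch (GF(2) only, small n):

```python
from itertools import product
from math import comb

def balanced_symmetric_count(n):
    """Count symmetric Boolean functions on n variables that are
    balanced.  A symmetric function is determined by its value on
    each Hamming weight 0..n; it is balanced iff the weights mapped
    to 1 cover exactly half of the 2^n inputs.  (This is the GF(2)
    special case of the paper's GF(p) setting.)"""
    target = 2 ** (n - 1)
    return sum(
        1 for vals in product((0, 1), repeat=n + 1)
        if sum(comb(n, w) for w, v in enumerate(vals) if v) == target
    )
```

For instance, for n = 3 the binomial weights are (1, 3, 3, 1) and exactly four weight subsets sum to 4, so there are four balanced symmetric functions.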
Unconditionally Reliable Message Transmission in Directed Hypergraphs
We study the problem of unconditionally reliable message transmission (URMT), where two non-faulty players, the sender S and the receiver R, are part of a synchronous network modeled as a directed hypergraph, a part of which may be under the influence of an adversary having unbounded computing power. S intends to transmit a message to R such that R correctly obtains S's message with probability at least for arbitrarily small . However, unlike most of the literature on this problem, we assume the adversary modeling the faults is threshold mixed, and can corrupt different sets of nodes in Byzantine, passive and fail-stop fashion simultaneously. The main contribution of this work is the complete characterization of URMT in directed hypergraphs tolerating such an adversary. Working out a direct characterization of URMT over directed hypergraphs tolerating a threshold mixed adversary is highly unintuitive. So we first propose a novel technique which takes as input a directed hypergraph and a threshold mixed adversary on that hypergraph, and outputs a corresponding digraph along with a non-threshold mixed adversary, such that URMT over the hypergraph tolerating the threshold mixed adversary is possible iff a special type of URMT is possible over the obtained digraph tolerating the corresponding non-threshold mixed adversary. Thus the characterization of URMT over directed hypergraphs tolerating a threshold mixed adversary reduces to characterizing a special type of URMT over arbitrary digraphs tolerating a non-threshold mixed adversary. We then characterize URMT in arbitrary digraphs tolerating a non-threshold mixed adversary
and modify it to obtain the characterization for special type of URMT over digraphs tolerating non-threshold mixed adversary. This completes the characterization of URMT over the original hypergraph.
Surprisingly, our results indicate that even passive corruption, in
collusion with active faults, substantially affects the reliability of URMT protocols! This is interesting because it is a general belief that passive corruption (eavesdropping) does not affect reliable communication.
Compartmented Threshold RSA Based on the Chinese Remainder Theorem
Uncategorized
In this paper we combine the compartmented secret
sharing schemes based on the Chinese remainder theorem
with the RSA scheme in order to obtain, as a novelty, a
dedicated solution for compartmented threshold decryption
or compartmented threshold digital signature generation.
AMS Subject Classification: 94A60, 94A62, 11A07
Keywords and phrases: threshold cryptography, secret
sharing, Chinese remainder theorem
New Directions in Cryptanalysis of Self-Synchronizing Stream Ciphers
In cryptology we commonly face the problem of finding an unknown key K from the output of an easily computable keyed function F(C,K), where the attacker has the power to choose the public variable C. In this work we focus on self-synchronizing stream ciphers. First we show how to model these primitives within the above-mentioned general problem by relating appropriate functions F to the underlying ciphers. Then we apply the framework recently presented at AfricaCrypt '08 by Fischer et al. for dealing with this kind of problem to the T-function-based self-synchronizing stream cipher proposed by Klimov and Shamir at FSE '05, and show how to deduce some non-trivial information about
the key. We also open a new window for answering a crucial question raised by Fischer et al. regarding the problem of finding weak IV bits, which is essential for their attack.
Side Channel Attack Resistant Implementation of Multi-Power RSA using Hensel Lifting
Multi-Power RSA [1] is a fast variant of RSA [2] with a small decryption time, making it attractive for implementation on lightweight cryptographic devices such as smart cards. Hensel Lifting is a key component in the implementation of fast Multi-Power RSA Decryption. However, it is found that a naive implementation of this algorithm is vulnerable to a host of side channel attacks, some of them powerful enough to entirely break the cryptosystem by providing a factorisation of the public modulus . We propose here a secure (under reasonable assumptions) implementation of the Hensel Lifting algorithm. We then use this algorithm to obtain a secure implementation of Multi-Power RSA Decryption.
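The Hensel step itself is short. Below is a minimal sketch of lifting a decryption result from mod p to mod p², as used with a Multi-Power RSA modulus N = p²q; the paper's contribution, hardening this step against side channels, is not modeled here. The example values are illustrative only.

```python
def hensel_lift(c, e, p, x0):
    """Lift a root x0 of f(x) = x^e - c over Z/p to a root mod p^2:
    x1 = x0 - f(x0) * f'(x0)^{-1}  (mod p^2).

    Requires Python 3.8+ for the modular inverse pow(base, -1, mod).
    A naive implementation like this leaks through side channels,
    which is exactly the issue the paper addresses."""
    p2 = p * p
    f = (pow(x0, e, p2) - c) % p2          # f(x0); divisible by p
    fprime = (e * pow(x0, e - 1, p)) % p   # f'(x0) mod p, must be a unit
    t = (f // p) * pow(fprime, -1, p) % p
    return (x0 - t * p) % p2
```

For example, with p = 11 and e = 3, the message 50 reduces to 6 mod 11, and one Hensel step lifts the cube root 6 of c = 50³ mod 121 = 7 back to 50 mod 121.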
Threshold Homomorphic Encryption in the Universally Composable Cryptographic Library
Uncategorized
Protocol security analysis has become an active research topic in recent years. Researchers have been trying to build theories sufficient for constructing automated tools that produce security proofs for cryptographic protocols. There are two approaches for analysing protocols: formal and computational. The former, often called Dolev-Yao style, uses abstract terms to model cryptographic messages, with an assumption about the perfect security of the cryptographic primitives. The latter uses indistinguishability to prove mathematically that adversaries with bounded computational resources cannot gain anything significant. The first method is easy to automate, while the second gives sound proofs of security.
Therefore there is a demand to bridge the gap between the two methods in order to build better security-proof tools. One idea is to prove that some Dolev-Yao style cryptographic primitives used in formal tools are computationally sound under arbitrary active attacks in arbitrary reactive environments, i.e. universally composable. As a consequence, protocols that use such primitives can also be proved secure by formal tools.
In this paper, we prove that a homomorphic encryption scheme used together with a non-interactive zero-knowledge proof in Dolev-Yao style is a sound abstraction of the real implementation under certain conditions. This makes it possible to automatically design and analyze a class of protocols that use homomorphic encryption together with non-interactive zero-knowledge proofs, such as e-voting protocols.
Unique Shortest Vector Problem for max norm is NP-hard
Uncategorized
The unique shortest vector problem (uSVP) plays a crucial role in lattice-based cryptography: the security of many public-key cryptosystems is based on the hardness of uSVP. However, so far there is no proof of the hardness of uSVP even in its exact version. In this paper, we show that the exact version of uSVP for the max norm is NP-hard. Furthermore, many other lattice problems, including the unique subspace avoiding problem, the unique closest vector problem and the unique generalized closest vector problem, for any norm, are also shown to be NP-hard.
Entropy Bounds for Traffic Confirmation
Consider an open MIX-based anonymity system with participants and a batch size of .
Assume a global passive adversary who targets a given participant Alice with a set of
communicating partners.
Let denote the entropy of as calculated by the adversary
given message sets (MIX batches) where Alice is a sender in each message set.
Our main result is to express the rate at which the anonymity of Alice
(as measured by )
degrades over time as a function of the main parameters , and .
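The anonymity measure used here is the Shannon entropy of the adversary's posterior distribution. A minimal sketch of that computation (the candidate partner sets and their probabilities below are made up purely for illustration):

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a discrete distribution given as
    a dict mapping outcomes to probabilities."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

# Hypothetical posterior the adversary holds over candidate partner
# sets for Alice after observing some MIX batches; the paper derives
# how this entropy decays as more batches are observed.
posterior = {"set_A": 0.5, "set_B": 0.25, "set_C": 0.25}
print(entropy(posterior))  # 1.5
```

As observations accumulate, the posterior concentrates on the true partner set and the entropy tends to zero, which is exactly the degradation rate the paper quantifies.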
Zcipher Algorithm Specification
This paper describes Zcipher, a symmetric encryption/decryption algorithm designed to be secure and high-performance, with a small memory footprint and a simple structure.
An argument for Hamiltonicity
A constant-round interactive argument is introduced
to show the existence of a Hamiltonian cycle in a directed graph.
The graph is represented by a characteristic polynomial;
the top coefficient of a verification polynomial is tested to fit the cycle,
and soundness follows from the Schwartz-Zippel lemma.
The Cost of False Alarms in Hellman and Rainbow Tradeoffs
Cryptanalytic time memory tradeoff algorithms are generic one-way function inversion techniques that utilize pre-computation. Even though the online time complexity is known up to a small multiplicative factor for any tradeoff algorithm, false alarms pose a major obstacle in its accurate assessment.
In this work, we study the expected pre-image size for an iteration of functions and use the result to analyze the cost incurred by false alarms. We are able to present the expected online time complexities for the Hellman tradeoff and the rainbow table method in a manner that takes false alarms into account. We also analyze the effects of the checkpoint method in reducing false alarm costs.
The ability to accurately compute the online time complexities will allow one to choose tradeoff parameters more optimally before starting the expensive pre-computation process.
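A false alarm occurs when an online chain hits a stored endpoint but regenerating the chain from its start point yields no pre-image. A toy single-table Hellman sketch over a small domain (illustrative parameters; a real tradeoff uses many tables with distinct reduction functions):

```python
import hashlib

N = 2 ** 16          # size of the toy search space
T = 16               # chain length

def f(x):
    """Toy one-way function on a 16-bit domain."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % N

def build_table(starts, t=T):
    """Pre-computation: iterate f t times from each start point and
    store only (endpoint -> start)."""
    table = {}
    for s in starts:
        x = s
        for _ in range(t):
            x = f(x)
        table[x] = s
    return table

def invert(y, table, t=T):
    """Search for some x with f(x) = y, counting false alarms:
    endpoint matches whose regenerated chain contains no pre-image."""
    false_alarms = 0
    x = y
    for i in range(t):
        if x in table:
            z = table[x]
            for _ in range(t - 1 - i):   # regenerate the chain
                z = f(z)
            if f(z) == y:
                return z, false_alarms
            false_alarms += 1
        x = f(x)
    return None, false_alarms
```

Each false alarm costs an extra chain regeneration, which is the cost term the paper quantifies precisely for Hellman and rainbow tables.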
Last updated: 2010-12-20
IEEE P1363.1 Draft 10: Draft Standard for Public Key Cryptographic Techniques Based on Hard Problems over Lattices.
Engineering specifications and security considerations for NTRUEncrypt, secure against the lattice attacks presented at Crypto 2007
An Approach to ensure Information Security through 252-Bit Integrated Encryption System (IES)
In this paper, a bit-level block cipher, the Integrated Encryption System (IES), is presented that requires a 252-bit secret key. In IES, there are at most sixteen rounds, implemented in a cascaded manner. RPSP, TE, RPPO, RPMS, RSBM and RSBP are six independent block-ciphering protocols that are integrated to formulate IES. A round is constituted by implementing one of the protocols on the output produced in the preceding round. The process of encryption is interactive, and the encryption authority has full flexibility in the choice of protocol for each round; however, no protocol can be used in more than three rounds. The key is formed at run time, as the interactive encryption process proceeds. Results of implementing all six protocols in a cascaded manner have been observed and analyzed. Having achieved a satisfactory level of performance in this cascaded implementation, we propose IES and describe its schematic and operational characteristics.
Argument of knowledge of a bounded error
A protocol is introduced to show knowledge of a codeword of a Goppa code
and of the Goppa polynomial.
The protocol does not disclose any useful information about
the codeword or the polynomial coefficients.
A related protocol is introduced to show that the
Hamming weight of an error is below a threshold;
it does not disclose the codeword or the weight of the error.
The verifier only uses commitments to the codeword components and the coefficients
while testing the validity of the statements.
Both protocols are honest-verifier zero knowledge.
History-Independent Cuckoo Hashing
Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization. Various experiments demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes.
In this work we construct a practical {\em history-independent} dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and was also found useful in several algorithmic settings.
Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates are performed in expected amortized constant time, and membership queries are performed in worst case constant time. Moreover, with high probability, the lookup procedure queries only two memory entries which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial randomness, each set of elements has a unique memory representation.
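As background, plain cuckoo hashing can be sketched as follows. This toy uses Python's built-in `hash` as a stand-in for two independent hash functions; the paper's contribution, a canonical and therefore history-independent memory layout on top of such a scheme, is not reproduced here:

```python
class CuckooHash:
    """Minimal cuckoo hashing sketch: two tables, two hash functions,
    worst-case O(1) lookup (at most two probes)."""

    def __init__(self, size=101):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.seeds = (0x9E3779B9, 0x85EBCA6B)  # stand-in hash "keys"

    def _h(self, i, key):
        # toy hash; a real implementation needs independent functions
        return (hash(key) ^ self.seeds[i]) % self.size

    def lookup(self, key):
        # worst case: exactly two table probes
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key, max_kicks=64):
        if self.lookup(key):
            return True
        i = 0
        for _ in range(max_kicks):
            slot = self._h(i, key)
            if self.tables[i][slot] is None:
                self.tables[i][slot] = key
                return True
            # evict the occupant and re-place it in the other table
            key, self.tables[i][slot] = self.tables[i][slot], key
            i = 1 - i
        return False  # a full implementation would rehash here
```

Because an eviction sequence depends on insertion order, the plain scheme above is not history independent; enforcing a unique representation per element set is what the paper's construction adds.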
A protocol for K-multiple substring matching
A protocol is introduced to show that
K copies of a pattern string are embedded in a host string.
Commitments to both strings,
and to the offsets of the copies of the pattern in the host,
are the verifier's input.
The protocol does not leak useful information about the strings,
and is zero knowledge.
Using Commutative Encryption to Share a Secret
It is shown how to use commutative encryption
to share a secret. Suppose Alice wants to share
a secret with Bob such that Bob cannot decrypt the secret
unless a group of trustees agree.
It is assumed that Alice, Bob and the trustees communicate
over insecure channels. This paper presents a scheme that
uses modular
exponentiation and does not require key exchange.
The security of the scheme rests on the difficulty of
the discrete logarithm problem.
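The commutative mechanism can be illustrated with the classical SRA/Pohlig-Hellman exponentiation cipher, in which "locks" commute because exponents multiply modulo p-1. This is a generic sketch of the idea, not the paper's exact scheme, and the parameters are illustrative only:

```python
import math
import random

# SRA-style commutative cipher: E_k(m) = m^k mod p.  Since exponents
# multiply, locks can be applied and removed in any order.
p = 2**127 - 1          # a Mersenne prime

def keygen():
    """Pick a lock exponent k coprime to p-1 and its inverse d,
    so that E_d undoes E_k (Fermat's little theorem)."""
    while True:
        k = random.randrange(3, p - 1)
        if math.gcd(k, p - 1) == 1:
            return k, pow(k, -1, p - 1)

def E(m, k):
    return pow(m, k, p)

secret = 123456789
ka, da = keygen()       # Alice's lock / unlock pair
kt, dt = keygen()       # a trustee's lock / unlock pair
c = E(E(secret, ka), kt)  # doubly locked secret
# The secret is recovered only after both parties remove their
# locks, and the removal order does not matter:
assert E(E(c, da), dt) == E(E(c, dt), da) == secret
```

Chaining one lock per trustee gives the flavor of the trustee-approval requirement: no single party can strip all the locks alone.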
An argument for rank metric
A protocol is introduced to show an upper bound on the rank of a square matrix.
Last updated: 2008-08-31
On DDoS Attack against Proxy in Re-encryption and Re-signature
Uncategorized
In 1998, Blaze, Bleumer, and Strauss proposed a new kind of cryptographic primitive called proxy re-encryption and proxy re-signature [BBS98]. In proxy re-encryption, a proxy can transform a ciphertext computed under Alice's public key into one that can be opened under Bob's decryption key. In proxy re-signature, a proxy can transform a signature computed under Alice's secret key into one that can be verified with Bob's public key. In 2005, Ateniese et al. proposed several new re-encryption schemes and discussed their potential applications, especially in secure distributed storage [AFGH05]. In 2006, they proposed several re-signature schemes and likewise discussed their potential applications [AH06]. They predicted that re-encryption and re-signature would play an important role in our lives. Since then, researchers have been inspired to shed new light on this area, and many excellent schemes have been proposed. In this paper, we introduce a new attack, a DDoS attack against the proxy in proxy re-cryptography. Although this attack can also be mounted against other cryptographic primitives, the danger it poses in proxy re-cryptography seems more serious. We revisit the current literature, paying attention to the schemes' ability to resist DDoS attacks, and we suggest a solution to reduce the impact of DDoS attacks. We also give a new efficient re-encryption scheme, based on the Cramer-Shoup encryption scheme, that achieves CCA2 security, and we prove its security. We point out that this is the most efficient proxy re-encryption scheme in the literature that achieves CCA2 security for the proxy. Finally, we give our conclusions, hoping that researchers will pay more attention to this attack.
Weaknesses in HENKOS Stream Cipher
HENKOS is a synchronous stream cipher posted by Marius Oliver Gheorghita on ePrint. In this paper we present some weaknesses of the cipher. We first present a chosen-IV attack, which is a very straightforward attack on the cipher. Second, we present a group of weak keys.
On Notions of Security for Deterministic Encryption, and Efficient Constructions without Random Oracles
The study of deterministic public-key encryption was initiated by
Bellare et al. (CRYPTO~'07), who provided the ``strongest possible''
notion of security for this primitive (called PRIV) and
constructions in the random oracle (RO) model. We focus on
constructing efficient deterministic encryption schemes
\emph{without} random oracles. To do so, we propose a slightly
weaker notion of security, saying that no partial information about
encrypted messages should be leaked as long as each message is
a-priori hard-to-guess \emph{given the others} (while PRIV did not
have the latter restriction). Nevertheless, we argue that this
version seems adequate for certain practical applications. We show
equivalence of this definition to single-message and
indistinguishability-based ones, which are easier to work with.
Then we give general constructions of both chosen-plaintext (CPA)
and chosen-ciphertext-attack (CCA) secure deterministic encryption
schemes, as well as efficient instantiations of them under standard
number-theoretic assumptions. Our constructions build on the
recently-introduced framework of Peikert and Waters (STOC '08) for
constructing CCA-secure \emph{probabilistic} encryption schemes,
extending it to the deterministic-encryption setting and yielding
some improvements to their original results as well.
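The random-oracle constructions of Bellare et al. that this work seeks to avoid can be summarized by the generic "encrypt-with-hash" idea: derive the encryption coins by hashing the public key together with the message, so that equal messages always yield equal ciphertexts. The Python sketch below applies this idea to toy ElGamal, with SHA-256 standing in for the random oracle; the tiny parameters are illustrative assumptions, not the efficient standard-model constructions of this paper.

```python
# Toy "encrypt-with-hash" deterministic encryption (RO-model idea, illustrative
# only): coins come from H(pk, m), so encryption is a deterministic function.
import hashlib

p, q, g = 2039, 1019, 4           # safe prime p = 2q + 1; g generates the order-q subgroup

def det_encrypt(pk, m):
    # Fresh randomness is replaced by a hash of (pk, m): the "random oracle".
    h = hashlib.sha256(f"{pk}|{m}".encode()).digest()
    r = int.from_bytes(h, "big") % (q - 1) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)       # recover g^r
    return c1 * pow(gr, -1, p) % p        # m = c1 / g^r

sk, pk = 123, pow(g, 123, p)
m = pow(g, 7, p)                  # toy message in the subgroup
assert det_encrypt(pk, m) == det_encrypt(pk, m)   # deterministic: identical ciphertexts
assert decrypt(sk, det_encrypt(pk, m)) == m
```

Determinism is also why PRIV-style definitions are needed at all: an adversary can test any guessed message by re-encrypting it under the public key, so security can only hold for messages that are a-priori hard to guess.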
Flaws in Some Self-Healing Key Distribution Schemes with Revocation
Dutta and Mukhopadhyay have recently proposed some very efficient
self-healing key distribution schemes with revocation. The
parameters of these schemes contradict some results (lower bounds)
presented by Blundo et al. In this paper, two attacks against the
schemes of Dutta and Mukhopadhyay are explained: one of them can
easily be avoided with a slight modification of the schemes, but the
other one is serious.
Higher Order Differential Cryptanalysis of Multivariate Hash Functions
In this paper, we analyze the security of multivariate hash functions
and conclude that low-degree multivariate functions such as MQ-HASH
are neither pseudo-random nor unpredictable. Moreover, they are not
computation-resistant, which makes MAC forgery easy.
Time-Area Optimized Public-Key Engines: MQ-Cryptosystems as Replacement for Elliptic Curves?
In this paper, ways to efficiently implement public-key schemes based on Multivariate Quadratic polynomials (MQ-schemes for short) are investigated; in particular, such schemes are claimed to resist quantum computer attacks. It is shown that these schemes can have a much better time-area product than elliptic curve cryptosystems. For instance, an optimised FPGA implementation of amended TTS is estimated to be over 50 times more efficient with respect to this parameter. Moreover, a general framework for implementing small-field MQ-schemes in hardware is proposed, which includes a systolic architecture performing Gaussian elimination over composite binary fields.
Iterative Probabilistic Reconstruction of RC4 Internal States
It is shown that an improved version of a previously proposed iterative probabilistic algorithm, based on forward and backward probability recursions along a short keystream segment, is capable of reconstructing the RC4 internal states from a relatively small number of known initial permutation entries. Given a modulus , it is argued that about and known entries are sufficient for success, for consecutive and specially generated entries, respectively. The complexities of the corresponding guess-and-determine attacks are analyzed and, e.g., for , the data and time complexities are (conservatively) estimated to be around , and , , for the two types of guessed entries considered, respectively.
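For reference, the internal state being reconstructed is the N-entry permutation S (N = 256 in practice) together with the indices i and j of the output loop. A minimal Python version of standard RC4 (KSA and PRGA) shows the state the attack targets:

```python
# Standard RC4: the attack in this paper reconstructs the permutation S (and
# index j) from known entries, given a short known keystream segment.
def rc4_ksa(key, n=256):
    S = list(range(n))
    j = 0
    for i in range(n):
        j = (j + S[i] + key[i % len(key)]) % n
        S[i], S[j] = S[j], S[i]
    return S

def rc4_prga(S, length, n=256):
    i = j = 0
    out = []
    for _ in range(length):
        i = (i + 1) % n
        j = (j + S[i]) % n
        S[i], S[j] = S[j], S[i]       # each output byte constrains three entries of S
        out.append(S[(S[i] + S[j]) % n])
    return bytes(out)

# Well-known test vector: key "Key" yields keystream EB 9F 77 81 B7 ...
assert rc4_prga(rc4_ksa(b"Key"), 5).hex() == "eb9f7781b7"
```

Each keystream byte depends on only a few permutation entries, which is what makes the forward and backward probability recursions over a short keystream segment feasible.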
Information Leakage in Optimal Anonymized and Diversified Data
To reconcile the demands of information dissemination and
preservation of privacy, a popular approach generalizes the attribute values in the dataset, for example by dropping the last digit of the postal code, so that the published dataset meets certain privacy requirements, such as the notions of k-anonymity and l-diversity. On the other hand, the published dataset should remain useful and not over-generalized. Hence it is desirable to disseminate a database with high "usefulness", as measured by a utility function. This leads to a generic framework whereby the optimal dataset (w.r.t. the utility function) among all the generalized datasets that meet the privacy requirements is chosen to be disseminated. In this paper, we observe that the very fact that a generalized dataset is optimal may leak information about the original. Thus, an adversary who is aware of how the dataset is generalized may be able to derive more information than the privacy requirements permit. This observation challenges the widely adopted approach that treats the generalization
process as an optimization problem. We illustrate the observation by
giving counter-examples in the context of k-anonymity and l-diversity.
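As a small illustration of the generalization-as-optimization framework discussed above, the following Python sketch generalizes postal codes by suppressing trailing digits until k-anonymity holds, keeping as many digits as possible as the utility measure; the records and the utility function are hypothetical examples, not from the paper.

```python
# Toy k-anonymization by generalizing postal codes: drop trailing digits until
# every generalized code occurs at least k times. Records are hypothetical.
from collections import Counter

def is_k_anonymous(codes, k):
    return all(c >= k for c in Counter(codes).values())

def generalize(codes, k):
    # Optimal w.r.t. a simple utility: keep the longest prefix that works.
    # kept = 0 (full suppression) always satisfies any k <= len(codes).
    for kept in range(len(codes[0]), -1, -1):
        gen = [c[:kept] + "*" * (len(c) - kept) for c in codes]
        if is_k_anonymous(gen, k):
            return gen, kept

codes = ["13902", "13907", "13955", "13958"]
gen, kept = generalize(codes, 2)
assert gen == ["1390*", "1390*", "1395*", "1395*"] and kept == 4
```

The leakage observed in the paper stems from optimality itself: an adversary who knows this procedure returns the maximal kept prefix learns, from seeing "1390*" published, that releasing the full codes must have violated 2-anonymity, which rules out some candidate original datasets.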
Remote Integrity Check with Dishonest Storage Server
We are interested in this problem: a verifier, with a small and
reliable storage, wants to periodically check whether a remote
server is keeping a large file . A dishonest server,
by adapting the challenges and responses, tries to discard partial
information of and yet evades detection. Besides the
security requirements, there are considerations on communication,
storage size and computation time. Juels et al. \cite{Pors} gave
a security model for {\em Proof of Retrievability} ({\POR})
system. The model imposes a requirement that the original can be recovered from multiple challenge-response interactions. Such
requirement is not necessary in our problem. Hence, we propose an
alternative security model for {\em Remote Integrity Check}
({\RIC}). We study a few schemes and analyze their efficiency and
security. In particular, we prove the security of a proposed
scheme {\simplePIR}. This scheme can be deployed as a {\POR}
system and it also serves as an example of an effective {\POR}
system whose ``extraction'' is not verifiable. We also propose a
combination of the RSA-based scheme by Filho et al. \cite{DDPs}
and the ECC-based authenticator by Naor et al.
\cite{complex_memcheck}, which achieves good asymptotic
performance. This scheme is not a {\POR} system and seems to be a
secure {\RIC}. So far, all schemes that have been proven
secure can also be adopted as {\POR} systems. This brings out the
question of whether there are fundamental differences between the
two models. To highlight the differences, we introduce a notion,
{\em trap-door compression}, that captures a property on
compressibility.
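To make the interaction pattern of remote integrity checking concrete, the following Python sketch shows the simplest bounded-use idea: before outsourcing, the verifier precomputes a few keyed digests of the file under random nonces, keeps only those small pairs, and spends one pair per audit. This is a hedged toy for exposition only, not any of the schemes analyzed in the paper (which support many audits with small verifier storage).

```python
# Toy remote-integrity check with precomputed (nonce, tag) pairs: the verifier
# keeps small state, the server keeps the file; each audit spends one pair.
# Illustrative only -- not the paper's schemes, which allow unbounded audits.
import hashlib, hmac, os

def verifier_setup(file_bytes, audits):
    # Small, reliable verifier storage: a handful of (nonce, tag) pairs.
    return [(n, hmac.new(n, file_bytes, hashlib.sha256).digest())
            for n in (os.urandom(16) for _ in range(audits))]

def server_respond(stored_bytes, nonce):
    # An honest server must process essentially the whole file to answer.
    return hmac.new(nonce, stored_bytes, hashlib.sha256).digest()

def audit(pairs, respond):
    nonce, tag = pairs.pop()          # each audit consumes one stored pair
    return hmac.compare_digest(tag, respond(nonce))

f = os.urandom(4096)
pairs = verifier_setup(f, audits=3)
assert audit(pairs, lambda n: server_respond(f, n))                 # honest server passes
tampered = f[:-1] + bytes([f[-1] ^ 1])
assert not audit(pairs, lambda n: server_respond(tampered, n))      # tampering detected
```

Because the challenges are unpredictable keyed digests of the entire file, a dishonest server that discards part of the file cannot precompute the answers; the limitation, compared with the schemes studied here, is that the verifier's state bounds the number of audits.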