## All papers in 2008 (545 results)

Resolving the Simultaneous Resettability Conjecture and a New Non-Black-Box Simulation Strategy

Canetti, Goldreich, Goldwasser, and Micali (STOC 2000) introduced the notion of resettable zero-knowledge proofs, where the protocol must be zero-knowledge even if a cheating verifier can reset the prover and have several interactions in which the prover uses the same random tape. Soon afterwards, Barak, Goldreich, Goldwasser, and Lindell (FOCS 2001) studied the closely related notion of resettable soundness, where the soundness condition of the protocol must hold even if the cheating prover can reset the verifier to have multiple interactions with the same verifier's random tape. The main problem left open by this work was whether it is possible to have a single protocol that is simultaneously resettable zero knowledge and resettably sound. We resolve this question by constructing such a protocol.
At the heart of our construction is a new non-black-box simulation strategy, which we believe to be of independent interest. This new strategy allows for simulators which ``marry'' recursive rewinding techniques (common in the context of concurrent simulation) with non-black-box simulation. Previous non-black-box strategies led to exponential blowups in computational complexity in such circumstances, which our new strategy is able to avoid.

Comments on two multi-server authentication protocols

Recently, Tsai et al. and Liao et al. each proposed a multi-server authentication protocol, claiming that their protocols are secure and can withstand various attacks. However, we have found security loopholes in each protocol. In this paper we show attacks on both schemes.

Odd-Char Multivariate Hidden Field Equations

We present a multivariate version of Hidden Field Equations (HFE)
over a finite field of odd characteristic, with an extra
``embedding'' modifier. Combining these known ideas makes our new
MPKC (multivariate public key cryptosystem) more efficient
and scalable than any other extant multivariate encryption scheme.
Switching to odd characteristics in HFE-like schemes affects how an
attacker can make use of field equations. Extensive empirical tests
(using MAGMA-2.14, the best commercially available $F_4$
implementation) suggest that our new construction is indeed secure
against algebraic attacks using Gröbner Basis algorithms. The
``embedding'' serves both to narrow down choices of pre-images and
to guard against a possible Kipnis-Shamir type (rank-based) attack. We
may hence reasonably argue that for practical sizes, prior attacks
take exponential time.
We demonstrate that our construction is in fact efficient by
implementing practical-sized examples of our ``odd-char HFE'' with 3
variables (``THFE'') over $\mathrm{GF}(31)$. To be precise, our preliminary
THFE implementation is $15\times$--$20\times$ the speed of RSA-1024.

Distinguishing Attack and Second-Preimage Attack on the CBC-like MACs

In this paper, we first present a new distinguisher for CBC-MAC, based on a block cipher in Cipher Block Chaining (CBC) mode. It can also be used to distinguish other CBC-like MACs from random functions. The main results of this paper concern second-preimage attacks on CBC-MAC and CBC-like MACs, including TMAC, OMAC, CMAC, PC-MAC and MACs based on the three-key enciphered CBC mode. Instead of exhaustive search, this attack can be performed with birthday-attack complexity.
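
The extension property behind such birthday-complexity second preimages on raw CBC-MAC can be sketched with a toy cipher. The 16-bit "block cipher", its constants, and the message values below are our own illustrative assumptions, not anything from the paper: once two messages collide under the MAC, their internal chaining states are equal, so appending any common suffix preserves the collision.

```python
import random

BLOCK_BITS = 16
MASK = (1 << BLOCK_BITS) - 1

def toy_cipher(key, block):
    # Toy 16-bit "block cipher" (a few add/rotate/multiply rounds);
    # NOT secure, for illustration only.
    x = block
    for r in range(4):
        x = (x + key + r) & MASK
        x ^= ((x << 5) | (x >> (BLOCK_BITS - 5))) & MASK
        x = (x * 40503) & MASK  # odd constant, invertible mod 2^16
    return x

def cbc_mac(key, blocks):
    # Raw CBC-MAC: chain the cipher over the message blocks, zero IV.
    state = 0
    for b in blocks:
        state = toy_cipher(key, state ^ b)
    return state

key = 0xBEEF
random.seed(1)

# Birthday search: with a b-bit block, a collision among roughly 2^(b/2)
# random one-block messages is expected, far below exhaustive key search.
seen = {}
m1 = m2 = None
while m1 is None:
    m = random.getrandbits(BLOCK_BITS)
    t = cbc_mac(key, [m])
    if t in seen and seen[t] != m:
        m1, m2 = seen[t], m
    seen[t] = m

# Equal MACs mean equal internal states, so appending any common suffix
# yields a second preimage for the extended message.
suffix = [0x1234, 0x00FF]
assert cbc_mac(key, [m1] + suffix) == cbc_mac(key, [m2] + suffix)
print("colliding prefixes:", hex(m1), hex(m2))
```

With a $b$-bit block the search needs only about $2^{b/2}$ queries, which is why such attacks cost birthday rather than exhaustive-search complexity.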

Resettably-Sound Resettable Zero Knowledge Arguments for NP

We construct resettably-sound resettable zero knowledge arguments
for NP based on a standard hardness assumption (the existence of
claw-free permutations) in the plain model. This proves the
simultaneous resettability conjecture posed by Barak et al. in [FOCS
2001].
Our construction, inspired by the
paradigm for designing concurrent zero knowledge protocols, makes
crucial use of a tool called an instance-dependent resettably-sound
resettable WI argument of knowledge (\textsf{IDWIAOK}), and a
special-purpose variant, introduced recently by Deng and Lin in
[Eurocrypt 2007]. Roughly speaking, for an NP statement of the form $x_0\vee x_1$, \textsf{IDWIAOK} is an argument for which the resettable WI property holds when both $x_0$ and $x_1$ are YES instances, and
the resettably-sound argument of knowledge property holds when $x_0$ is
a NO instance.
The heart of the simulator for our protocol is a new technique that
allows us to embed the (non-black-box) straight-line simulation
strategy in the (black-box) recursive rewinding simulation strategy.

New Impossible Differential Attacks on AES

In this paper we apply impossible differential attacks to reduced
round AES. Using various techniques, including the early abort
approach and key schedule considerations, we significantly improve
previously known attacks due to Bahrak-Aref and Phan. The improvement
of these attacks leads to the best known impossible differential
attacks on 7-round AES-128 and AES-192, as well as to the best known
impossible differential attacks on 8-round AES-256.

An Accumulator Based on Bilinear Maps and Efficient Revocation for Anonymous Credentials

The success of electronic authentication systems, be it e-ID card systems or Internet authentication
systems such as CardSpace, highly depends on the provided level of user-privacy.
Thereby, an important requirement is an efficient means for revocation of the authentication credentials.
In this paper we consider the problem of revocation for certificate-based privacy-protecting authentication systems.
To date, the most efficient solutions for revocation for such systems are based on cryptographic accumulators.
Here, an accumulator value of all currently valid certificates is published regularly and each user holds
a {\em witness} enabling her to prove the validity of her (anonymous) credential while retaining anonymity.
Unfortunately, the users' witnesses must be updated at least each time a credential is revoked.
For the known solutions, these updates are computationally very expensive for users and/or certificate issuers, which is very problematic
since, as practice shows, revocation is a frequent event.
In this paper, we propose a new dynamic accumulator scheme based on bilinear maps and show how to apply it to the problem
of revocation of anonymous credentials.
In the resulting scheme, proving a credential's validity and updating witnesses both come at (virtually) no cost for
credential owners and verifiers.
In particular, updating a witness requires the issuer to do only one multiplication per addition or revocation of a credential
and can also be delegated to untrusted entities from which a user could just retrieve the updated witness.
We believe that thereby we provide the first authentication system offering privacy protection suitable for implementation
with electronic tokens such as eID cards or drivers' licenses.

Supporting Non-membership Proofs with Bilinear-map Accumulators

In this short note, we present an extension of Nguyen's bilinear-map
based accumulator scheme to support
\emph{non-membership witnesses} and corresponding
\emph{non-membership proofs}, i.e., cryptographic proofs that an
element has not been accumulated to a given set. This complements
the non-membership proofs developed by Li \emph{et
al.} for the RSA
accumulator, making the
functionality of the bilinear-map accumulator equivalent to that of
the RSA accumulator. Our non-membership extension of Nguyen's
scheme makes use of the $q$-Strong Diffie-Hellman
assumption on which the security of the original scheme is based.

A Secure Threshold Anonymous Password-Authenticated Key Exchange Protocol

At Indocrypt 2005, Viet et al. [22] proposed an
anonymous password-authenticated key exchange (PAKE) protocol and
its threshold construction, both of which are designed for a client's
password-based authentication and anonymity against a passive
server, who does not deviate from the protocol. In this paper, we first
point out that their threshold construction is completely insecure
against off-line dictionary attacks. For the threshold $t > 1$, we
propose a secure threshold anonymous PAKE (for short, TAP)
protocol with the number of clients $n$ upper-bounded, such that
$n \leq 2\sqrt{N-1} - 1$, where $N$ is the dictionary size of passwords.
We rigorously prove that the TAP protocol has semantic
security of session keys in the random oracle model by showing the
reduction to the computational Diffie-Hellman problem. In addition,
the TAP protocol provides unconditional anonymity against a
passive server. For the threshold $t = 1$, we propose an efficient
anonymous PAKE protocol that significantly improves efficiency in
terms of computation costs and communication bandwidth compared to
the original (not threshold) anonymous PAKE protocol [22].

Predicate Privacy in Encryption Systems

Predicate encryption is a new encryption paradigm which
gives the secret key owner fine-grained control over access to
encrypted data. The secret key owner can generate tokens corresponding to predicates. An encryption of a
plaintext x can be decrypted using a token corresponding to a
predicate f if the plaintext satisfies the predicate, i.e., f(x) = 1.
Prior work on public-key predicate encryption has focused on the
notion of plaintext privacy, the property that ciphertexts
reveal no information about the encrypted plaintext. In this paper,
we consider a new notion called predicate privacy, the
property that tokens reveal no information about the encoded query
predicate. Predicate privacy is inherently impossible to achieve in
the public-key setting and has therefore received little attention
in prior work. In this work, we consider predicate encryption in the
symmetric-key setting and present a symmetric-key predicate
encryption scheme which supports inner product queries. We prove
that our scheme achieves both plaintext privacy and predicate
privacy.

A Recursive Threshold Visual Cryptography Scheme

This paper presents a recursive hiding scheme for 2 out of 3 secret sharing. In recursive hiding of secrets, the user encodes additional information about smaller secrets in the shares of a larger secret without an expansion in the size of the latter, thereby increasing the efficiency of secret sharing. We present applications of our proposed protocol to images as well as text.
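
As background on the 2-out-of-3 threshold structure the scheme builds on, a minimal Shamir-style sketch over a small prime field may help. Note this is only the underlying threshold idea with parameters we chose for illustration; the paper's construction is a visual cryptography scheme with recursive hiding, not this algebraic one.

```python
import random

P = 257  # prime field large enough for byte-valued secrets, demo only

def share(secret, n=3, k=2):
    # Degree k-1 = 1 polynomial: f(x) = secret + coeff * x over GF(P).
    coeff = random.randrange(1, P)
    return [(x, (secret + coeff * x) % P) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 from any k = 2 of the n = 3 shares.
    (x1, y1), (x2, y2) = shares
    l1 = x2 * pow(x2 - x1, P - 2, P) % P  # Lagrange basis value at 0
    l2 = x1 * pow(x1 - x2, P - 2, P) % P
    return (y1 * l1 + y2 * l2) % P

s = 123
sh = share(s)
assert reconstruct([sh[0], sh[2]]) == s  # any two shares recover s
assert reconstruct([sh[1], sh[0]]) == s
```

A single share gives no information about the secret, since every secret value is consistent with it for exactly one choice of the random coefficient.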

Somewhat Non-Committing Encryption and Efficient Adaptively Secure Oblivious Transfer

Designing efficient cryptographic protocols tolerating adaptive
adversaries, who are able to corrupt parties on the fly as the
computation proceeds, has been an elusive task. Indeed, thus far no
\emph{efficient} protocols achieve adaptive security for general
multi-party computation, or even for many specific two-party tasks
such as oblivious transfer (OT). In fact, it is difficult and
expensive to achieve adaptive security even for the task of
\emph{secure communication}, which is arguably the most basic task
in cryptography.
In this paper we make progress in this area. First, we introduce a
new notion called \emph{semi-adaptive} security which is slightly
stronger than static security but \emph{significantly weaker than
fully adaptive security}. The main difference between adaptive and
semi-adaptive security is that, for semi-adaptive security, the
simulator is not required to handle the case where \emph{both}
parties start out honest and one becomes corrupted later on during
the protocol execution. As such, semi-adaptive security is much
easier to achieve than fully adaptive security. We then give a
simple, generic protocol compiler which transforms any
semi-adaptively secure protocol into a fully adaptively secure one.
The compilation effectively decomposes the problem of adaptive
security into two (simpler) problems which can be tackled
separately: the problem of semi-adaptive security and the problem of
realizing a weaker variant of secure channels.
We solve the latter problem by means of a new primitive that we call
{\em somewhat non-committing encryption} resulting in significant
efficiency improvements over the standard method for realizing
(fully) secure channels using (fully) non-committing encryption.
Somewhat non-committing encryption has two parameters: an
equivocality parameter $\ell$ (measuring the number of ways that a
ciphertext can be ``opened'') and the message size $k$. Our
implementation is very efficient for small values of $\ell$,
\emph{even} when $k$ is large. This translates into a very efficient
compilation of many semi-adaptively secure protocols (in particular,
for a task with small input/output domains such as bit-OT) into a
fully adaptively secure protocol.
Finally, we showcase
our methodology by applying it to the recent Oblivious Transfer
protocol by Peikert et al. [Crypto 2008], which is only secure
against static corruptions, to obtain the first efficient, adaptively secure and composable OT protocol.
In particular, to transfer an $n$-bit message, we use a constant number of rounds and $O(n)$ public key operations.

Collusion-Free Multiparty Computation in the Mediated Model

Collusion-free protocols prevent subliminal communication (i.e., covert channels) between parties running the protocol. In the standard communication model (and assuming the existence of one-way functions), protocols satisfying any reasonable degree of privacy cannot be collusion-free. To circumvent this impossibility result, Alwen et al. recently suggested the mediated model where all communication passes through a mediator; the goal is to design protocols where collusion-freeness is guaranteed as long as the mediator is honest, while standard security guarantees continue to hold if the mediator is dishonest. In this model, they gave constructions of collusion-free protocols for commitments and zero-knowledge proofs in the two-party setting.
We strengthen the definition of Alwen et al. in several ways, and resolve the key open questions in this area by showing a collusion-free protocol (in the mediated model) for computing any multi-party functionality.

Semi-free start collision attack on Blender

Blender is a cryptographic hash function submitted to NIST's SHA3 competition. We have found a semi-free start collision attack on Blender with trivial complexity. One pair of semi-free start collision messages with zero initial values is presented.

Collision Attack on the Waterfall Hash Function

We give a method that appears to be able to find colliding messages for the Waterfall hash function with approximately $O(2^{70})$ work for all hash sizes. If correct, this would show that the Waterfall hash function does not meet the required collision resistance.

Fast hashing to G2 on pairing friendly curves

When using pairing-friendly ordinary elliptic curves over prime fields to implement identity-based protocols, there is often a need to hash identities to points on one or both of the two elliptic curve groups of prime order $r$ involved in the pairing. Of these, $G_1$ is a group of points on the base field $E(\F_p)$ and $G_2$ is instantiated as a group of points with coordinates on some extension field, over a twisted curve $E'(\F_{p^d})$, where $d$ divides the embedding degree $k$. While hashing to $G_1$ is relatively easy, hashing to $G_2$ has been less considered, and is regarded as likely to be more expensive as it appears to require multiplication by a large cofactor. In this paper we introduce a fast method for this cofactor multiplication in $G_2$ which exploits an efficiently computable homomorphism.

A Hardware Interface for Hashing Algorithms

The submissions to the SHA-3 competition include a reference implementation in C, built on top of a standard programmer's interface (API). This greatly improves the evaluation process: it enables portability across platforms, and it makes performance comparison of the algorithms easy. For hardware crypto-implementations, such a standard interface does not exist. As a result, the evaluation and comparison of hardware hashing algorithms becomes complex and error prone. The first step to improve the evaluation process for hardware is the definition of an interface. This document describes a general hardware interface for hashing algorithms. The operation of the interface is discussed, and the appendix lists a SHA-256 reference implementation that uses the interface.

Encrypting Proofs on Pairings and Its Application to Anonymity for Signatures

We give a generic methodology to unlinkably anonymize cryptographic schemes in bilinear groups using the Boneh-Goh-Nissim cryptosystem and NIZK proofs in the line of Groth, Ostrovsky and Sahai.
We illustrate our techniques by presenting the first instantiation of anonymous proxy signatures, a recent primitive unifying the functionalities and strong security notions of group and proxy signatures. To construct our scheme, we introduce various efficient NIZK and witness-indistinguishable proofs, and a relaxed version of simulation soundness.

Properties of Cryptographic Hash Functions

This paper extends the work of Rogaway and Shrimpton (2004), who formalized seven security properties: notions of preimage resistance (Pre, aPre, ePre), second-preimage resistance (Sec, aSec, eSec) and collision resistance (Coll), and gave all the implications and separations among the properties. In this paper we consider three additional security properties which are important in applications of hash functions: unforgeability (MAC), pseudo-random function (Prf) and pseudo-random oracle (Pro). We give a new type of implication and separation between the security notions, since the ones defined by Rogaway and Shrimpton were too constraining, and work out all the relationships among the ten security notions above. Some of the relations have been proven before; some of them appear to be new. We show that the property pseudo-random oracle (Pro), introduced by Coron, Dodis, Malinaud and Puniya, is (as expected) the strongest one, since it implies almost all of the other properties.

Novel Precomputation Schemes for Elliptic Curve Cryptosystems

We present an innovative technique to add elliptic curve points of the form $P \pm Q$, and discuss its application to the generation of precomputed tables for scalar multiplication. Our analysis shows that the proposed schemes offer, to the best of our knowledge, the lowest costs for precomputing points in both single and multiple scalar multiplication and for various elliptic curve forms, including the highly efficient Jacobi quartics and Edwards curves.

On The Diffie-Hellman Assumption

We generalize the Strong Boneh-Boyen (SBB) signature scheme to sign vectors (GSBB). We show that if a particular (but most natural) average case reduction from SBB to GSBB exists, then the Strong Diffie-Hellman (SDH) and the Computational Diffie-Hellman (CDH) have the same worst case complexity.

Round-Optimal Zero-Knowledge Proofs of Knowledge for NP

It is well known that all the known black-box zero-knowledge proofs
of knowledge for NP are non-constant-round. Whether there exist
constant-round black-box zero-knowledge proofs of knowledge for all
NP languages under certain standard assumptions is an open problem.
This paper focuses on this problem and gives a positive answer by
presenting two constructions of constant-round (black-box)
zero-knowledge proofs of knowledge for the HC (Hamiltonian Cycle) problem. By a recent result of Katz, our second construction, which relies on the existence of claw-free functions, has optimal round complexity (5 rounds) assuming the polynomial hierarchy does not collapse.

Last updated: 2011-03-19

Privacy Preserving Multiset Union with ElGamal Encryption

The privacy preserving multiset union (PPMU) protocol allows a set of parties, each with a multiset, to collaboratively compute a multiset union secretly, meaning that any information other than the union is not revealed. We propose an efficient PPMU protocol, using the
multiplicative homomorphic property of ElGamal encryption over $F_q[x]/(f(x))$, where $q$ is a prime and $f(x)$ is an irreducible polynomial over $F_q$. The protocol involves a constant number of rounds and improves the computation and communication complexities of the scheme proposed
by Kissner and Song. We also prove the security of the protocol in the random oracle model.
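
The multiplicative homomorphism the protocol relies on can be sketched with textbook ElGamal. The sketch below works over a prime field rather than the paper's polynomial quotient ring, and the prime, generator, and messages are illustrative assumptions of ours:

```python
import random

# Classic ElGamal over Z_p^*; the componentwise product of two
# ciphertexts decrypts to the product of the two plaintexts.
p = 467  # small prime, demo only
g = 2    # demo generator choice
x = random.randrange(2, p - 1)  # secret key
h = pow(g, x, p)                # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(c):
    c1, c2 = c
    # c2 / c1^x, using Fermat's little theorem for the inverse.
    return (c2 * pow(c1, p - 1 - x, p)) % p

def mul(ca, cb):
    # Homomorphic step: multiply ciphertext components.
    return ((ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p)

m1, m2 = 5, 7
c = mul(enc(m1), enc(m2))
assert dec(c) == (m1 * m2) % p
print(dec(c))  # 35
```

In the protocol this property lets parties combine encrypted polynomial representations of their multisets without ever decrypting the individual inputs.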

Twisted Edwards Curves Revisited

This paper introduces fast algorithms for performing group operations on twisted Edwards curves, pushing the recent speed limits of Elliptic Curve Cryptography (ECC) forward in a wide range of applications. Notably, the new addition algorithm uses 8M for suitably selected curve constants. In comparison, the fastest point addition algorithms for (twisted) Edwards curves stated in the literature use 9M+1S. It is also shown that the new addition algorithm can be implemented with four processors dropping the effective cost to 2M. This implies an effective speed increase by the full factor of 4 over the sequential case. Our results allow faster implementation of elliptic curve scalar multiplication. In addition, the new point addition algorithm can be used to provide a natural protection from side channel attacks based on simple power analysis (SPA).
(M: Multiplication, S: Squaring)
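
For reference, the unified twisted Edwards addition law underlying these algorithms can be sketched in affine coordinates. The 8M count in the abstract applies to extended projective coordinates, not to this inversion-based sketch, and the curve parameters below are toy values we chose so that the law is complete:

```python
# Affine unified addition on a twisted Edwards curve
# a*x^2 + y^2 = 1 + d*x^2*y^2 over GF(p).
p = 229      # toy prime with p % 8 == 5, so d = 2 is a non-square mod p
a, d = 1, 2  # a square and d non-square => the addition law is complete

def on_curve(P):
    x, y = P
    return (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p

def add(P, Q):
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    # Unified law: also valid for doubling (P == Q); completeness
    # guarantees the denominators 1 +- t are never zero.
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, p - 2, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, p - 2, p) % p
    return (x3, y3)

# Enumerate the curve's affine points and check the law is closed and unital.
points = [(x, y) for x in range(p) for y in range(p) if on_curve((x, y))]
P, Q = points[3], points[7]
assert on_curve(add(P, Q))      # closure
assert add(P, (0, 1)) == P      # (0, 1) is the identity element
assert add(P, Q) == add(Q, P)   # the law is symmetric
```

The paper's speedups come from evaluating this same law in extended coordinates, trading the field inversions here for a handful of multiplications.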

Generating Shorter Bases for Hard Random Lattices

We revisit the problem of generating a `hard' random lattice together
with a basis of relatively short vectors. This problem has gained in
importance lately due to new cryptographic schemes that use such a
procedure to generate public/secret key pairs. In these applications,
a shorter basis corresponds to milder underlying complexity
assumptions and smaller key sizes.
The contributions of this work are twofold. First, we simplify and
modularize an approach originally due to Ajtai (ICALP 1999). Second,
we improve the construction and its analysis in several ways, most
notably by making the output basis asymptotically as short as
possible.

Cryptanalysis of the Hash Function LUX-256

LUX is a new hash function submitted to NIST's SHA-3 competition. In this paper, we find some non-random properties of LUX due to the weakness of its origin shift vector. We also give a reduced-blank-round collision attack, a free-start collision attack and a free-start preimage attack on LUX-256. The two collision attacks are trivial. The free-start preimage attack has complexity of about 2^80 and requires negligible memory.

Collision attack on NaSHA-512

The hash function NaSHA is a new algorithm proposed for SHA-3. It follows the wide-pipe structure, and its compression function adopts quasigroup transformations. These properties of quasigroup operations raise obstacles to analysis. However, high-probability differences causing inner collisions can be found in the quasigroup transformations. We propose a collision attack on NaSHA-512 with complexity 2^{192}, which is lower than the complexity of a birthday attack on NaSHA-512. Using a similar method, we can find free-start collisions on all versions with negligible complexity.

Last updated: 2009-01-02

A New Hash Algorithm: Khichidi-1

This is a technical report describing a new hash algorithm called Khichidi-1, written in response to the hash competition (SHA-3) announced by the National Institute of Standards and Technology (NIST), USA. The algorithm generates a condensed representation of a message called a message digest.
A group of functions used in the development of Khichidi-1 is described, followed by a detailed explanation of the preprocessing approach. Differences between Khichidi-1 and NIST SHA-2, the algorithm that is currently in wide use, are also highlighted. Analytical proofs, implementations with examples, various attack scenarios, entropy tests, security strength and computational complexity of the algorithm are described in separate sections of this document.

Improving the Rules of the DPA Contest

A DPA contest has been launched at CHES 2008. The goal of this initiative is to make it possible for researchers to compare different side-channel attacks in an objective manner. For this purpose, a set of 80000 traces corresponding to the encryption of 80000 different plaintexts with the Data Encryption Standard and a fixed key has been made available. In this short note, we discuss the rules that the contest uses to rate the effectiveness of different distinguishers. We first describe practical examples of attacks in which these rules can be misleading. Then, we suggest an improved set of rules that can be implemented easily in order to obtain a better interpretation of the comparisons performed.

Distinguishing and Forgery Attacks on Alred and Its AES-based Instance Alpha-MAC

In this paper, we present new distinguishers for the MAC construction \textsc{Alred} and its specific instance \textsc{Alpha}-MAC based on AES, which was proposed by Daemen and Rijmen in 2005. For the \textsc{Alred} construction, we describe a general distinguishing attack which leads directly to a forgery attack. The complexity is $2^{64.5}$ chosen messages and $2^{64.5}$ queries with success probability 0.63. We also use a two-round collision differential path for \textsc{Alpha}-MAC to construct a new distinguisher with about $2^{65.5}$ queries. Most importantly, the new distinguisher can be used to recover the internal state, which is an equivalent secret subkey, and leads to a second preimage attack. Moreover, the distinguisher on the \textsc{Alred} construction is also applicable to MACs based on the CBC and CFB encryption modes.

Cryptanalysis of RadioGatun

In this paper we study the security of the RadioGatun family of hash functions, and more precisely the collision resistance of this proposal. We show that it is possible to find differential paths with acceptable probability of success. Then, by using the degrees of freedom available from the incoming message words, we provide a significant improvement over the best previously known cryptanalysis. As a proof of concept, we provide a colliding pair of messages for RadioGatun with 2-bit words. We finally argue that, under a light assumption, our technique is very likely to provide the first collision attack on RadioGatun.

Noncommutative Polly Cracker-type cryptosystems and chosen-ciphertext security

In this paper we consider chosen-ciphertext attacks against noncommutative Polly Cracker-type cryptosystems. We present several versions of these attacks, as well as techniques to counter them. First we introduce a chosen-ciphertext attack, which assumes a very simple private key. We then present generalizations of this attack which are valid in more general situations, and propose a simple but
effective technique to counter these attacks. Finally, we show how this technique can also be used to counter the adaptive chosen-ciphertext attacks against noncommutative Polly Cracker-type cryptosystems.

Improved Cryptanalysis of SHAMATA-BC

We state the design flaws of the 1-round block cipher SHAMATA-BC, designed by Fleischmann and Gorski using the building blocks of the SHAMATA hash function. We fix the flaws and then show that the amended version of SHAMATA-BC is much weaker. We believe that there is no connection between the security level of SHAMATA as a hash function and that of SHAMATA-BC as a block cipher.

A new class of Bent functions in Polynomial Forms

This paper is a contribution to the construction of bent functions
having the form $f(x) = \mathrm{Tr}^{o(s_1)}(a x^{s_1}) +
\mathrm{Tr}^{o(s_2)}(b x^{s_2})$, where $o(s_i)$ denotes the cardinality of the cyclotomic
class of 2 modulo $2^n-1$ which contains $s_i$, and whose coefficients
$a$ and $b$ are, respectively, in $F_{2^{o(s_1)}}$ and
$F_{2^{o(s_2)}}$. Many constructions of monomial bent functions are
presented in the literature, but very few are known even in the
binomial case.
We prove that the exponents $s_1=2^{\frac{n}{2}}-1$ and
$s_2=\frac{2^n-1}{3}$, where $a\in F_{2^n}$ and $b\in F_4$, provide the
construction of a new infinite class of bent functions over $F_{2^n}$
with maximum algebraic degree. For $m$ odd, we give an explicit
characterization of the bentness of these functions,
in terms of the Kloosterman sums of the corresponding coefficients. For $m$ even,
we give a necessary condition in terms of these Kloosterman sums.

Classification of the SHA-3 Candidates

In this note we give an overview on the current state of the SHA-3
candidates. First, we classify all publicly known candidates and,
second, we outline and summarize the performance data as given in the
candidates documentation for 64-bit and 32-bit implementations. We
define performance classes and classify the hash algorithms. Note
that this article will be updated as soon as new candidates arrive or
new cryptanalytic results get published. Comments to the authors of
this article are welcome.

Reconstructing RSA Private Keys from Random Key Bits

We show that an RSA private key with small public exponent can be
efficiently recovered given a 0.27 fraction of its bits at
random. An important application of this work is to the "cold
boot" attacks of Halderman et al. We make new observations about
the structure of RSA keys that allow our algorithm to make use of
the redundant information in the typical storage format of an RSA
private key. Our algorithm itself is elementary and does not
make use of the lattice techniques used in other RSA key
reconstruction problems. We give an analysis of the running time
behavior of our algorithm that matches the threshold phenomenon
observed in our experiments.
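
A branch-and-prune reconstruction in the spirit of the paper can be sketched at toy scale: build $p$ and $q$ bit by bit from the low end, keeping only candidates whose product matches $N$ on the bits fixed so far. The primes, bit length, and 0.5 known-bit fraction below are our own demo choices; the paper's algorithm additionally exploits the redundancy of $d$, $d_p$, $d_q$ in stored keys:

```python
import random

random.seed(42)
p_true, q_true = 40009, 40031  # small 16-bit primes, demo only
N = p_true * q_true
BITS = 16

def known_bits(value, fraction):
    # Reveal each bit independently with the given probability; None = unknown.
    return [(value >> i) & 1 if random.random() < fraction else None
            for i in range(BITS)]

kp = known_bits(p_true, 0.5)
kq = known_bits(q_true, 0.5)

def reconstruct(N, kp, kq):
    cands = [(1, 1)]  # p and q are odd, so bit 0 of each is 1
    for i in range(1, BITS):
        nxt = []
        for pc0, qc0 in cands:
            for bp in ([kp[i]] if kp[i] is not None else (0, 1)):
                for bq in ([kq[i]] if kq[i] is not None else (0, 1)):
                    pc = pc0 | (bp << i)
                    qc = qc0 | (bq << i)
                    # Prune: the product must match N on the low i+1 bits,
                    # since higher bits of p, q cannot affect them.
                    if (pc * qc) % (1 << (i + 1)) == N % (1 << (i + 1)):
                        nxt.append((pc, qc))
        cands = nxt
    return [(pc, qc) for pc, qc in cands if pc * qc == N]

sols = reconstruct(N, kp, kq)
assert (p_true, q_true) in sols
print(sols)
```

The pruning keeps the candidate set small because each new bit must satisfy one parity constraint of the product; the paper's analysis shows where the threshold fraction of known bits lies for this growth to stay polynomial.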

Chosen-Ciphertext Secure Proxy Re-Encryption without Pairings

Proxy re-encryption (PRE), introduced by Blaze, Bleumer and Strauss, allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into an encryption of the same message intended for Bob. Proxy re-encryption has found many practical applications, such as encrypted email forwarding, secure distributed file systems, and outsourced filtering of encrypted spam. In ACM CCS'07, Canetti and Hohenberger presented a bidirectional PRE scheme with chosen-ciphertext security, and left as an important open problem the construction of a chosen-ciphertext secure proxy re-encryption scheme without pairings. In this paper, we propose such a bidirectional PRE scheme with chosen-ciphertext security. The proposed scheme is fairly efficient due to two distinctive features: (i) it does not use the costly bilinear pairings; (ii) the computational cost and the ciphertext length decrease with re-encryption.

Some Formal Solutions in Side-channel Cryptanalysis - An Introduction

We propose to revisit Side-channel Cryptanalysis from the point of view, for instance, of C. E. Shannon: The calculation of a posteriori probabilities is the generalized problem of cryptanalysis. So, our goal will be to provide analytic formulae for the marginal posterior probability mass functions for the targets of those attacks. Since we are concerned with the probabilities of single and perfectly determined cases, we need above all to place ourselves in a probabilistic system enjoying an epistemic “interpretation”. We select Probability as Logic, the most suitable system for our purpose. With this powerful and flexible system at hand, we first solve two independent problems for known, non-chosen messages: the determination of side-channel leakage times (generalized for high-order attacks) and the determination of the target, given those leakage times. The first problem belongs to Hypotheses Testing Theory and admits a formal solution in terms of Bayes Factors in the parametric framework. The calculation of those factors requires marginalizing over all possible values of the target, so that this new procedure has no equivalent in frequentist Statistics and we indicate how it could be proved to outperform previous procedures more and more, as the target space size increases. We present preliminary experimental results and give some clues on how to extend this solution to the nonparametric framework. The second problem is a classical Parameter Estimation problem with many hyperparameters. It also admits a unique maximum a posteriori solution under 0-1 loss function within Decision Theory. When it is not possible to solve both problems independently, we must solve them simultaneously in order to get general solutions for Side-channel Cryptanalysis on symmetric block ciphers, at least. 
Taking advantage of the duality between Hypotheses Testing and Parameter Estimation in our system of inference, we transform the determination of the generalized leakage times into a parameter estimation problem, in order to fall back into a global parameter estimation problem. Generally speaking, it appears that (marginal) side-channel parametric leakage models are in fact averages between attack and “non-attack” models and, more generally, between many conditional models, so that likelihoods cannot be frequency sampling distributions. Then, we give the marginal posterior probability mass function for the targets of the most general known-messages attacks: “correlation” attacks, template attacks, high-order attacks, multi-decision functions attacks, multi-attack models attacks and multi-“non-attack” models attacks. Essentially, it remains to explain how to assign joint prior and discrete direct probability distributions by logical inspection, to extend this approach to the nonparametric framework and other cryptographic primitives, to deal with analytic, symbolic, numerical and computational implementation issues and especially to derive formal adaptive chosen-messages attacks.

A non-delegatable identity-based strong designated verifier signature scheme

In a strong designated verifier signature scheme, no third party can verify the validity of a signature. Non-delegatability, proposed by Lipmaa, Wang and Bao, is another, stronger notion for designated verifier signature schemes. In this paper, we formalize a security model for non-delegatable identity-based strong designated verifier signature (IDSDVS) schemes. Then a novel non-delegatable IDSDVS scheme based on pairing is presented. The presented scheme is proved to be non-delegatable, non-transferable and unforgeable under the Gap Bilinear Diffie-Hellman assumption.

Unconditionally Secure Message Transmission in Arbitrary Directed Synchronous Networks Tolerating Generalized Mixed Adversary

In this paper, we re-visit the problem of {\it unconditionally secure message transmission} (USMT) from a sender {\bf S} to a receiver {\bf R}, who are part of a distributed synchronous network, modeled as an {\it arbitrary} directed graph. Some of the intermediate nodes between {\bf S} and {\bf R} can be under the control of an adversary having {\it unbounded} computing power. Desmedt and Wang \cite{Desmedt} have given a characterization of USMT in directed networks. However, in their model, the underlying network is abstracted as directed node-disjoint paths (also called wires or channels) between {\bf S} and {\bf R}, where the intermediate nodes are oblivious message-passing nodes that perform no other computation. In this work, we first show that the characterization of USMT given by Desmedt et al. \cite{Desmedt} does not hold for {\it arbitrary} directed networks, where the intermediate nodes may perform some computation besides forwarding messages.
We then give the {\it true} characterization of USMT in arbitrary directed networks. To the best of our knowledge, this is the first {\it true} characterization of USMT in arbitrary directed networks.

The $n^c$-Unique Shortest Vector Problem is Hard

The unique Shortest Vector Problem (uSVP) gained prominence because
it was the problem upon which the first provably-secure
lattice-based cryptosystems were built. But it was an open problem
as to whether uSVP was as hard as the standard, more general,
version of the shortest vector problem.
We show that there is a reduction from the approximate decision
version of the shortest vector problem (GapSVP) to the unique
shortest vector problem. In particular, we show that for any
$\gamma>6\sqrt{n}$, there is a reduction from GapSVP$_\gamma$ to
$\frac{\gamma}{6\sqrt{n}}$-uSVP. This implies that the Ajtai-Dwork
and the Regev cryptosystems are based on the hardness of the
worst-case GapSVP$_{O(n^{2.5})}$ and GapSVP$_{O(n^{2})}$,
respectively. Our reduction is quite elementary, but it does use a
clever, yet surprisingly simple (in retrospect!), idea of Peikert
that was recently used by him to construct a cryptosystem based on
the worst-case hardness of GapSVP$_{O(n^3)}$.

Non-Malleable Extractors and Symmetric Key Cryptography from Weak Secrets

We study the question of basing symmetric key cryptography on weak secrets. In this setting, Alice and Bob share an $n$-bit secret $W$, which might not be uniformly random, but the adversary has at least $k$ bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use $W$ to agree on a nearly uniform key $R$, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e. one message) protocols do not work when $k < n/2$, and require poor parameters even when $n/2 < k \ll n$.
On the other hand, for arbitrary values of $k$, we design a communication efficient two-round (challenge-response) protocol extracting nearly $k$ random bits. This dramatically improves the only previously known protocol of Renner and Wolf~\cite{RennerW03}, which required $O(\lambda)$ rounds where $\lambda$ is the security parameter. Our solution takes a new approach by studying and constructing ``non-malleable'' seeded randomness extractors --- if an attacker sees a random seed $X$ and comes up with an arbitrarily related seed $X'$, then we bound the relationship between $R= \Ext(W;X)$ and $R' = \Ext(W;X')$.
We also extend our two-round key agreement protocol to the ``fuzzy'' setting, where Alice and Bob share ``close'' (but not equal) secrets $W_A$ and $W_B$, and to the Bounded Retrieval Model (BRM) where the size of the secret $W$ is huge.

Extended Access Structures and Their Cryptographic Applications

In secret sharing schemes a secret is distributed among a set of users $\mathcal{P}$ in such a way that only some sets, the authorized sets, can recover it. The family $\Gamma$ of authorized sets is called the access structure. Given such a monotone family $\Gamma \subset 2^\P$, we introduce the concept of \emph{extended access structures}, defined over a larger set $\P' = \P \cup \tilde{\P}$, satisfying these two properties: (1) the set $\P$ is a minimal subset of $\Gamma'$, i.e. $\P - \{R_i\} \notin \Gamma'$ for every $R_i \in \P$, (2) a subset $A \subset \P$ is in $\Gamma$ if and only if the subset $A \cup \tilde{\P}$ is in $\Gamma'$.
As our first contribution, we give an explicit construction of an extended access structure $\Gamma'$ starting from a vector space access structure $\Gamma$, and we prove that $\Gamma'$ is also a vector space access structure. Our second contribution is to show that the concept of extended access structure can be used to design encryption schemes which involve access structures that are chosen ad-hoc at the time of encryption. Specifically, we design and analyze a dynamic distributed encryption scheme and a ciphertext-policy attribute-based encryption scheme. In some cases, the new schemes enjoy better properties than the existing ones.

Some Observations on SHAMATA

In this note we discuss some observations on the SHA-3 candidate SHAMATA. We observe that its internal block cipher is very weak, which could possibly lead to an attack on the hash function.

Strongly Secure Authenticated Key Exchange Protocol Based on Computational Diffie-Hellman Problem

Many authenticated key exchange (AKE) protocols have been proposed in
the literature, but proving their security has turned out to be a
non-trivial task. The main issue is that, without the static private
key, it is difficult for the simulator to fully support the
SessionKeyReveal and EphemeralKeyReveal queries. Proposals that have
been proven secure either hold only in relatively weak models, which do
not fully support the above-mentioned two queries, or rely on the
stronger gap assumption.
In this paper, using the twin Diffie-Hellman technique proposed by
Cash, Kiltz and Shoup, we present a new AKE protocol based on the
computational Diffie-Hellman (CDH) assumption, which is more standard
than the gap Diffie-Hellman (GDH) assumption. Moreover, our scheme is
shown to be secure under a strong security definition, the enhanced
Canetti-Krawczyk (eCK) model introduced by LaMacchia, Lauter and
Mityagin, which better supports the adversaries' queries than previous
models.

Some Observations on HC-128

In this paper, we use linear approximations of the addition modulo $2^n$ of three $n$-bit integers to identify linear approximations of $g_1, g_2$, the feedback functions of HC-128. This, in turn, shows that the keystream output generation of HC-128 can be well approximated by linear functions. In this direction, we show that the ``least significant bit''-based distinguisher of HC-128 (presented by the designer himself) works for the complete 32-bit word. On a different note, in the line of Dunkelman's observation, we also study how HC-128 keystream words leak secret state information of the cipher due to the properties of the functions $h_1, h_2$, and present improved results.
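The flavor of such linear approximations is easy to reproduce. The sketch below (an illustration only, not the paper's analysis of the actual HC-128 feedback functions, and with a toy word size instead of n = 32) exhaustively measures, for each bit position i, how often bit i of (a+b+c) mod 2^n agrees with the XOR a_i ^ b_i ^ c_i; the least significant bit is carry-free and agrees always, which is the intuition behind an LSB-based distinguisher.

```python
# Toy check of how well XOR approximates 3-operand addition mod 2^n:
# for each bit i, count over all triples (a, b, c) how often bit i of
# (a + b + c) mod 2^n equals a_i ^ b_i ^ c_i.

def match_fractions(n):
    mask = (1 << n) - 1
    hits, total = [0] * n, 0
    for a in range(1 << n):
        for b in range(1 << n):
            for c in range(1 << n):
                s = (a + b + c) & mask      # true modular sum
                x = a ^ b ^ c               # linear approximation
                total += 1
                for i in range(n):
                    hits[i] += ((s >> i) & 1) == ((x >> i) & 1)
    return [h / total for h in hits]

fracs = match_fractions(4)
# bit 0 is carry-free, so the approximation is exact there;
# higher bits agree only with some probability determined by the carries
```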

Small Odd Prime Field Multivariate PKCs

We show that Multivariate Public Key Cryptosystems (MPKCs)
over fields of small odd prime characteristic, say 31, can be highly
efficient. Indeed, at the same design security of $2^{80}$ under
the best known attacks, odd-char MPKC is generally
faster than prior MPKCs over \GF{2^k}, which are in turn faster than
``traditional'' alternatives.
This seemingly counter-intuitive feat is accomplished by exploiting
the comparative over-abundance of small integer arithmetic resources
in commodity hardware, here embodied by SSE2 or more advanced
special multimedia instructions on modern x86-compatible CPUs.
We explain our implementation techniques and design choices for our
chosen MPKC instances modulo a small odd prime.
The same techniques are also applicable to modern FPGAs, which often
contain a large number of multipliers.

On the Correctness of An Approach Against Side-channel attacks

Side-channel attacks are a very powerful cryptanalytic technique. Li and Gu [ProvSec'07] proposed an approach against side-channel attacks, which states that a symmetric encryption scheme is IND-secure in the side-channel model if it is IND-secure in the black-box model and no adversary can recover the whole key of the scheme computationally in the side-channel model, i.e. WKR-SCA ^ IND -> IND-SCA. Our research shows that this is not the case. We analyze notions of security against key recovery attacks and security against distinguishing attacks, and then construct a scheme which is WKR-SCA-secure and IND-secure, but not IND-SCA-secure in the same side-channel environment. Furthermore, even if the scheme is secure against partial key recovery attacks in the side-channel model, this approach still does not hold true.

Constructing Variable-Length PRPs and SPRPs from Fixed-Length PRPs

We create variable-length pseudorandom permutations (PRPs) and strong PRPs (SPRPs), accepting any input length in the range of b to 2b bits, from fixed-length, b-bit PRPs. We utilize the elastic network that underlies the recently introduced concrete design of elastic block ciphers, exploiting it as a network of PRPs. We prove that three- and four-round elastic networks are variable-length PRPs and five-round elastic networks are variable-length SPRPs, accepting any input length that is fixed in the range of b to 2b bits, when the round functions are independently chosen fixed-length PRPs on b bits. We also prove that these numbers of rounds are minimal.

Non-Malleable Obfuscation

Existing definitions of program obfuscation do not rule out malleability attacks, where an adversary that sees an obfuscated program is able to generate another (potentially obfuscated) program that is related to the original one in some way.
We formulate two natural flavors of non-malleability requirements for program obfuscation, and show that they are incomparable in general. We also construct non-malleable obfuscators of both flavors for some program families of interest. Some of our constructions are in the Random Oracle model, whereas another one is in the common reference string model. We also define the notion of verifiable obfuscation which is of independent interest.

Key Agreement from Close Secrets over Unsecured Channels

We consider information-theoretic key agreement between two parties sharing somewhat different versions of a secret w that has relatively little entropy. Such key agreement, also known as information reconciliation and privacy amplification over unsecured channels, was shown to be theoretically feasible by Renner and Wolf (Eurocrypt 2004), although no protocol that runs in polynomial time was
described. We propose a protocol that is not only polynomial-time,
but actually practical, requiring only a few seconds on consumer-grade computers.
Our protocol can be seen as an interactive version of robust fuzzy extractors (Boyen et al., Eurocrypt 2005, Dodis et al., Crypto 2006).
While robust fuzzy extractors, due to their noninteractive nature, require w to have entropy at least half its length, we have no such constraint. In fact, unlike in prior solutions, in our solution the entropy loss is essentially unrelated to the length or the entropy of w, and depends only on the security parameter.

Secure Parameters for SWIFFT

The SWIFFT compression functions, proposed by Lyubashevsky et al. at FSE 2008, are very efficient instantiations of generalized compact knapsacks for a specific set of parameters. They have the property that, asymptotically, finding collisions for a randomly chosen compression function implies being able to solve computationally hard ideal lattice problems in the worst case.
We present three results. First, we present new average-case problems, which may be used for all lattice schemes whose security is proven with the worst-case to average-case reduction in either general or ideal lattices. The new average-case problems require fewer description bits, resulting in improved key size and speed for these schemes. Second, we propose a parameter generation algorithm for SWIFFT where the main parameter n can be any integer in the image of Euler's totient function, and not necessarily a power of 2 as before. Third, we give experimental evidence that finding pseudo-collisions for SWIFFT is as hard as breaking a 68-bit symmetric cipher according to the well-known heuristic by Lenstra and Verheul. We also recommend conservative parameters corresponding to a 127-bit symmetric cipher.

Modeling Computational Security in Long-Lived Systems, Version 2

For many cryptographic protocols, security relies on the assumption
that adversarial entities have limited computational power.
This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e. super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol.
This paper proposes a new paradigm for the analysis of long-lived
security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded.
We propose a new notion of long-term implementation, which is an
adaptation of computational indistinguishability to the long-lived
setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.

A CM construction for curves of genus 2 with p-rank 1

We construct Weil numbers corresponding to genus-2 curves with $p$-rank 1 over the finite field $\F_{p^2}$ of $p^2$ elements. The corresponding curves can be constructed using explicit CM constructions. In one of our algorithms, the group of $\F_{p^2}$-valued points of the Jacobian has prime order, while another allows for a prescribed embedding degree with respect to a subgroup of prescribed order. The curves are defined over $\F_{p^2}$ out of necessity: we show that curves of $p$-rank 1 over $\F_p$ for large $p$ cannot be efficiently constructed using explicit CM constructions.

On the final exponentiation for calculating pairings on ordinary elliptic curves

When using pairing-friendly ordinary elliptic curves to compute the Tate and related pairings, the computation consists of two main components, the Miller loop and the so-called final exponentiation. As a result of good progress being made to reduce the Miller loop component of the algorithm (particularly with the discovery of
``truncated loop'' pairings like the R-ate pairing), the final exponentiation has become a more significant component of the overall calculation. Here we exploit the structure of pairing-friendly elliptic curves to reduce the computation required for the final exponentiation to a minimum.
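The standard split of the final exponentiation into an "easy" part (cheap Frobenius powers plus one inversion) and a "hard" cyclotomic part can be sanity-checked numerically. The sketch below uses the well-known BN-curve polynomial parametrization for embedding degree k = 12 with a toy parameter x; it only verifies the exponent algebra and is not the paper's optimized computation.

```python
# Sanity check: for BN curves (k = 12), the final exponentiation
# exponent (p^12 - 1)/r factors as
#   (p^6 - 1) * (p^2 + 1)  *  (p^4 - p^2 + 1)/r
# where the first two factors are "easy" (Frobenius maps) and
# r divides the cyclotomic value Phi_12(p) = p^4 - p^2 + 1.

def bn_params(x):
    p = 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1
    r = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1
    return p, r

p, r = bn_params(1)                     # toy values: p = 103, r = 97
exponent = (p**12 - 1) // r
easy = (p**6 - 1) * (p**2 + 1)          # computable with Frobenius powers
hard = (p**4 - p**2 + 1) // r           # the expensive cyclotomic part
assert (p**4 - p**2 + 1) % r == 0
assert exponent == easy * hard
```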

HAIL: A High-Availability and Integrity Layer for Cloud Storage

We introduce HAIL (High-Availability and Integrity Layer), a distributed cryptographic system that permits a set of servers to prove to a client that a stored file is intact and retrievable. HAIL strengthens, formally unifies, and streamlines distinct approaches from the cryptographic and distributed-systems communities. Proofs in HAIL are efficiently computable by servers and highly compact---typically tens or hundreds of bytes, irrespective of file size. HAIL cryptographically verifies and reactively reallocates file shares. It is robust against an active, mobile adversary, i.e., one that may progressively corrupt the full set of servers. We propose a strong, formal adversarial model for HAIL, and rigorous analysis and parameter choices. We show how HAIL improves on the security and efficiency of existing tools, like Proofs of Retrievability (PORs) deployed on individual servers. We also report on a prototype implementation.

Efficient Rational Secret Sharing in Standard Communication Networks

We propose a new methodology for rational secret sharing leading
to various instantiations (in both the two-party and multi-party
settings) that are simple and efficient in terms of computation,
share size, and round complexity. Our protocols do not require
physical assumptions or simultaneous channels, and can even be run
over asynchronous, point-to-point networks.
We also propose new equilibrium notions (namely, computational
versions of strict Nash equilibrium and stability with respect to
trembles) and prove that our protocols satisfy them. These notions
guarantee, roughly speaking, that at each point in the protocol there
is a unique legal message a party can send. This, in turn, ensures that
protocol messages cannot be used as subliminal channels, something
achieved in prior work only by making strong assumptions on the
communication network.

Secure Certificateless Public Key Encryption without Redundancy

Certificateless public key cryptography was introduced to solve the
key escrow problem in identity based cryptography while enjoying the
most attractive {\em certificateless} property. In this paper, we
present the first {\em secure} certificateless public key encryption
(CLPKE) scheme {\em without redundancy}. Our construction provides
optimal bandwidth and a quite efficient decryption process among all
the existing CLPKE schemes. It is provably secure against adaptive
chosen ciphertext attacks in the random oracle model under a
slightly stronger assumption.

Inside the Hypercube

Bernstein's CubeHash is a hash function family that includes four functions submitted to the NIST Hash Competition. A CubeHash function is parametrized by a number of rounds r, a block byte size b, and a digest bit length h (the compression function makes r rounds, while the finalization function makes 10r rounds). The 1024-bit internal state of CubeHash is represented as a five-dimensional hypercube. The submissions to NIST recommend r=8, b=1, and h in {224,256,384,512}.
This paper presents the first external analysis of CubeHash, with: improved standard generic attacks for collisions and preimages; a multicollision attack that exploits fixed points; a study of the round function symmetries; a preimage attack that exploits these symmetries; a practical collision attack on a weakened version of CubeHash; a study of fixed points and an example of nontrivial fixed point; high-probability truncated differentials over 10 rounds.
Since the first publication of these results, several collision attacks for reduced versions of CubeHash were published by Dai, Peyrin, et al. Our results are more general, since they apply to any choice of the parameters, and show intrinsic properties of the CubeHash design, rather than attacks on specific versions.

Last updated: 2008-11-19

Fast Point Multiplication Formulae on Elliptic Curves of Weierstrass Form

\begin{abstract}
In this paper, we present fast point multiplication formulae for
two different families of elliptic curves in Weierstrass form,
$y^2=x^3+ax^2+bx$ and $y^2=x^3+ax^2+2atx+at^2$. The costs of the
doubling and tripling formulae are $5S+2M+3C$ and $6S+6M+4C$
respectively, and even $5S+2M$ and $6S+6M$ when the parameters of the
curve are suitably chosen.
\end{abstract}

Sharp lower bounds on the extractable randomness from non-uniform sources

Extraction of uniform randomness from (noisy) non-uniform sources
is an important primitive in many security applications,
e.g. (pseudo-)random number generators, privacy-preserving biometrics, and key storage based on Physical Unclonable Functions.
Generic extraction methods exist, using universal hash functions.
There is a trade-off between the length of the extracted bit string and the uniformity of the string.
In the literature there are proven lower bounds on this length as a function of the desired uniformity. The best known bound involves a quantity known as smooth min-entropy. Unfortunately, there exist at least three definitions of smooth entropy.
In this paper we compare three of these definitions,
and we derive improved lower bounds on the extractable randomness.
We also investigate the use of almost universal hash functions,
which are slightly worse at extracting randomness than universal hash functions, but are preferable in practice because they require far less nonvolatile memory. We show that using them has negligible effect on the extractable randomness.
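Generic extraction with a universal hash can be illustrated in a few lines. The toy sketch below is not one of the paper's constructions: it picks, as an illustrative universal family, multiplication in GF(2^8) truncated to m output bits, and checks by exact enumeration that the average statistical distance from uniform stays below the leftover-hash-lemma bound, even though the source is far from uniform. The parameters are far too small for real use.

```python
# Toy leftover-hash-lemma extractor.  The seed a selects the universal
# hash h_a(w) = low m bits of a*w in GF(2^8); the source W is uniform
# on a 64-element subset of {0,...,255}, i.e. min-entropy k = 6.

def gf_mul(a, b, poly=0x11B):       # multiplication in GF(2^8), AES polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def ext(seed, w, m):                # extract m bits from w under this seed
    return gf_mul(seed, w) & ((1 << m) - 1)

m = 2
source = range(64)                  # min-entropy k = 6 out of 8 bits
# exact statistical distance of (seed, Ext(seed, W)) from (seed, uniform)
sd = 0.0
for a in range(256):
    counts = [0] * (1 << m)
    for w in source:
        counts[ext(a, w, m)] += 1
    sd += 0.5 * sum(abs(c / 64 - 1 / (1 << m)) for c in counts)
sd /= 256
# leftover hash lemma bound: 0.5 * sqrt(2^(m-k)) = 0.125
```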

Sharing DSS by the Chinese Remainder Theorem

In this paper, we propose a new threshold scheme for the Digital Signature Standard (DSS) using Asmuth-Bloom secret sharing based on the Chinese Remainder Theorem (CRT). To achieve the desired result, we first show how to realize certain other threshold primitives using Asmuth-Bloom secret sharing, such as joint random secret sharing, joint exponential random secret sharing, and joint exponential inverse random secret sharing. We prove the security of our scheme against a static adversary. To the best of our knowledge, this is the first provably secure threshold DSS scheme based on the CRT.
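The CRT-based sharing primitive underlying the scheme can be sketched compactly. Below is a minimal (t, n) Asmuth-Bloom secret sharing with toy parameters; it shows only the plain sharing and reconstruction, not the joint random sharing or the DSS-specific machinery of the paper.

```python
# Minimal Asmuth-Bloom (t, n) secret sharing over the CRT.
# Toy parameters: m0 = 3, moduli 11, 13, 17, threshold t = 2.
import math, random

def share(secret, t, m0, moduli):
    # Asmuth-Bloom condition: the product of the t smallest moduli must
    # exceed m0 times the product of the t-1 largest ones.
    lo = math.prod(sorted(moduli)[:t])
    hi = m0 * math.prod(sorted(moduli)[-(t - 1):])
    assert lo > hi and 0 <= secret < m0
    y = secret + m0 * random.randrange(lo // m0)   # blinded lift, y < lo
    return [(m, y % m) for m in moduli]

def reconstruct(shares, m0):
    # CRT on any t shares recovers y; the secret is y mod m0
    M = math.prod(m for m, _ in shares)
    y = 0
    for m, s in shares:
        Mi = M // m
        y = (y + s * Mi * pow(Mi, -1, m)) % M
    return y % m0

shares = share(2, t=2, m0=3, moduli=[11, 13, 17])
assert reconstruct(shares[:2], m0=3) == 2          # any 2 shares suffice
```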

The Generic Hardness of Subset Membership Problems under the Factoring Assumption

We analyze a large class of subset membership problems related to integer factorization. We show that there is no algorithm solving these problems efficiently without exploiting properties of the given representation of ring elements, unless factoring integers is easy. Our results imply that problems with high relevance for a large number of cryptographic applications, such as the quadratic residuosity and the subgroup decision problems, are generically equivalent to factoring.
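The quadratic residuosity problem mentioned above is easy to state concretely: the Jacobi symbol is efficiently computable without knowing the factorization, yet among the elements with symbol +1, squares and non-squares ("pseudosquares") are believed to be hard to distinguish. The following toy sketch, using the standard binary Jacobi algorithm and a tiny modulus, is illustrative only.

```python
# Jacobi symbol (a/n) via the binary algorithm, and a pseudosquare
# example: 5 has Jacobi symbol +1 modulo 21 = 3*7 but is not a square.
import math

def jacobi(a, n):                     # n odd and positive
    a %= n
    t = 1
    while a:
        while a % 2 == 0:             # pull out factors of 2, rule for (2/n)
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a                   # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

N = 3 * 7
squares = {pow(x, 2, N) for x in range(1, N) if math.gcd(x, N) == 1}
assert jacobi(5, N) == 1 and 5 not in squares     # 5 is a pseudosquare mod 21
```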

Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem

We construct public-key cryptosystems that are secure assuming the
\emph{worst-case} hardness of approximating the length of a shortest
nonzero vector in an $n$-dimensional lattice to within a small
$\poly(n)$ factor. Prior cryptosystems with worst-case connections
were based either on the shortest vector problem for a \emph{special
class} of lattices (Ajtai and Dwork, STOC 1997; Regev, J.~ACM 2004),
or on the conjectured hardness of lattice problems for \emph{quantum}
algorithms (Regev, STOC 2005).
Our main technical innovation is a reduction from certain variants of
the shortest vector problem to corresponding versions of the
``learning with errors'' ($\lwe$) problem; previously, only a
\emph{quantum} reduction of this kind was known. In addition, we
construct new cryptosystems based on the \emph{search} version of
$\lwe$, including a very natural \emph{chosen ciphertext-secure}
system that has a much simpler description and tighter underlying
worst-case approximation factor than prior constructions.

ECM on Graphics Cards

This paper reports record-setting performance for the elliptic-curve method of integer factorization: for example, 926.11 curves/second for ECM stage 1 with B1=8192 for 280-bit integers on a single PC. The state-of-the-art GMP-ECM software handles 124.71 curves/second for ECM stage 1 with B1=8192 for 280-bit integers using all four cores of a 2.4 GHz Core 2 Quad Q6600.
The extra speed takes advantage of extra hardware, specifically two NVIDIA GTX 295 graphics cards, using a new ECM implementation introduced in this paper. Our implementation uses Edwards curves, relies on new parallel addition formulas, and is carefully tuned for the highly parallel GPU architecture. On a single GTX 295 the implementation performs 41.88 million modular multiplications per second for a general 280-bit modulus. GMP-ECM, using all four cores of a Q6600, performs 13.03 million modular multiplications per second.
This paper also reports speeds on other graphics processors: for example, 2414 280-bit elliptic-curve scalar multiplications per second on an older NVIDIA 8800 GTS (G80), again for a general 280-bit modulus. For comparison, the CHES 2008 paper ``Exploiting the Power of GPUs for Asymmetric Cryptography'' reported 1412 elliptic-curve scalar multiplications per second on the same graphics processor despite having fewer bits in the scalar (224 instead of 280), fewer bits in the modulus (224 instead of 280), and a special modulus (2^{224}-2^{96}+1).
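The unified Edwards addition law that makes such implementations attractive is easy to state and test over a toy prime field. The sketch below is only an illustration of the group law; the actual GPU code works projectively (without inversions) modulo the composite number being factored, with the parallel formulas of the paper.

```python
# Unified Edwards addition on x^2 + y^2 = 1 + d*x^2*y^2 over F_13.
# d = 2 is a non-square mod 13, so the addition law is complete.
p, d = 13, 2

def on_curve(P):
    x, y = P
    return (x*x + y*y - 1 - d*x*x*y*y) % p == 0

def add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2
    x3 = (x1*y2 + y1*x2) * pow((1 + t) % p, -1, p) % p
    y3 = (y1*y2 - x1*x2) * pow((1 - t) % p, -1, p) % p
    return (x3, y3)

def scalar_mul(k, P):                 # the core operation of ECM stage 1
    R = (0, 1)                        # neutral element
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R
```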

Formal Proof of Relative Strengths of Security between ECK2007 Model and other Proof Models for Key Agreement Protocols

In 2005, Choo, Boyd and Hitchcock compared four well-known indistinguishability-based proof models for key agreement protocols: the Bellare and Rogaway (1993, 1995) models, the Bellare, Pointcheval and Rogaway (2000) model, and the Canetti and Krawczyk (2001) model. Subsequently, researchers from Microsoft presented a stronger security model, the extended Canetti-Krawczyk (2007) model. In this paper, we point out the differences between the new proof model and the four previous models, and analyze the relative strengths of security of these models. To support each implication or non-implication relation between these models, we provide a proof or a counter-example.

Attribute-Based Encryption with Key Cloning Protection

In this work, we consider the problem of key cloning in attribute-based encryption schemes. We introduce a new type of attribute-based encryption scheme, called token-based attribute-based encryption, that provides strong deterrence against key cloning, in the sense that delegation of keys reveals some personal information about the user. We formalize the security requirements for such a scheme in terms of indistinguishability of the ciphertexts and two new security requirements, which we call uncloneability and privacy-preservation. We construct a privacy-preserving, uncloneable token-based attribute-based encryption scheme based on Cheung and Newport's ciphertext-policy attribute-based encryption scheme and prove that it satisfies the above three security requirements. We discuss our results and show directions for future research.

On a New Formal Proof Model for RFID Location Privacy (Extended Version)

We discuss a recently proposed formal proof model for RFID
location privacy. We show that protocols which intuitively and in several other models are considered not to be location private, are provably location private in this model. Conversely, we also show that protocols which obviously are location private, are not considered location private in this model.
Specifically, we prove a protocol in which every tag transmits the same constant message to not be location private in the proposed model. Then we prove a protocol in which a tag’s identity is transmitted in clear text to be weakly location private in the model. Finally, we consider a protocol with known weaknesses with respect to location privacy and show it to be location private in the model.

The $F_f$-Family of Protocols for RFID-Privacy and Authentication

In this paper, we present the design of the lightweight $F_f$ family
of privacy-preserving authentication protocols for RFID-systems.
$F_f$ is based on a new algebraic framework for reasoning about and
analyzing this kind of authentication protocols. $F_f$ offers
user-adjustable, strong authenticity and privacy against known
algebraic and also recent SAT-solving attacks. In contrast to related
work, $F_f$ achieves these two security properties without requiring
an expensive cryptographic hash function. $F_f$ is designed for a
challenge-response protocol, where the tag sends random nonces and the
results of HMAC-like computations of one of the nonces together
with its secret key. In this paper, the authenticity and privacy of
$F_f$ are evaluated using analytical and experimental methods.

Sphinx: A Compact and Provably Secure Mix Format

Sphinx is a cryptographic message format used to relay anonymized messages within a mix network. It is more compact than any comparable scheme, and supports a full set of security features: indistinguishable replies, hiding the path length and relay position, as well as providing unlinkability for each leg of the message's journey over the network. We prove the full cryptographic security of Sphinx in the random oracle model, and we describe how it can be used as an efficient drop-in replacement in deployed remailer systems.

Access Controls for Oblivious and Anonymous Systems

The use of privacy-enhancing cryptographic protocols, such as anonymous credentials and oblivious transfer, often has a detrimental effect on the ability of providers to effectively implement access controls on their content. In this paper, we propose a stateful anonymous credential system that allows the provider to implement non-trivial, real-world access controls on oblivious protocols conducted with anonymous users. Our stateful anonymous credential system models the behavior of users as a state machine, and embeds that state within an anonymous credential to restrict access to resources based on the state information. The use of state machine models of user behavior allows the provider to restrict the users' actions according to a wide variety of access control models without learning anything about the users' identities or actions. Our system is secure in the standard model under basic assumptions, and, after an initial setup phase, each transaction requires only constant time. As a concrete example, we show how to implement the Brewer-Nash (Chinese Wall) and Bell-La Padula (Multilevel Security) access control models within our credential system. Furthermore, we combine our credential system with a simulatable, adaptive oblivious transfer scheme to create a privacy-friendly oblivious database with strong access controls.

Exploring Cipherspace: Combining stream ciphers and block ciphers

This paper looks at the possibility of combining a block cipher and a stream cipher to get a strong hybrid cipher. It includes two specific proposals for combining AES-128 and RC4-128 to get a cipher that takes a 256-bit key and is significantly faster than AES-256, and arguably more secure; one of the two proposals is also immune to algebraic attacks.

Practical attacks against WEP and WPA

In this paper, we describe two attacks on IEEE 802.11 based wireless LANs. The first attack is an improved key recovery attack on WEP, which reduces the average number of packets an attacker has to intercept to recover the secret key. The second attack is (to our knowledge) the first practical attack on WPA-secured wireless networks, besides launching a dictionary attack when a weak pre-shared key (PSK) is used. The attack works if the network is using TKIP to encrypt the traffic. An attacker who has about 12-15 minutes of access to the network is then able to decrypt an ARP request or response and send 7 packets with custom content to the network.

Automatic Generation of Sound Zero-Knowledge Protocols

Efficient zero-knowledge proofs of knowledge (ZK-PoK) are basic building blocks of many practical cryptographic applications such as identification schemes, group signatures, and secure multiparty computation. Currently, the first applications that essentially rely on ZK-PoK are being deployed in the real world. The most prominent example is Direct Anonymous Attestation (DAA), which was adopted by the Trusted Computing Group (TCG) and implemented as one of the functionalities of the cryptographic chip Trusted Platform Module (TPM).
Implementing systems using ZK-PoK turns out to be challenging, since ZK-PoK are, loosely speaking, significantly more complex than standard crypto primitives such as encryption and signature schemes. As a result, implementation cycles of ZK-PoK are time-consuming and error-prone, in particular for developers with little or no cryptographic background.
To overcome these challenges, we have designed and implemented a compiler, with corresponding languages, that given a high-level ZK-PoK protocol specification automatically generates a sound implementation of it. The output is given in the form of $\Sigma$-protocols, which are the most efficient protocols for ZK-PoK currently known. Our compiler translates ZK-PoK protocol specifications, written in a high-level protocol description language, into Java code or \LaTeX\ documentation of the protocol.
The compiler is based on a unified theoretical framework that encompasses a large number of existing ZK-PoK techniques. Within this framework we present a new efficient ZK-PoK protocol for exponentiation homomorphisms in hidden order groups. Our protocol overcomes several limitations of the existing proof techniques.

From Weaknesses to Secret Disclosure in a Recent Ultra-Lightweight RFID Authentication Protocol

A recent research trend, motivated by the massive deployment of RFID technology, looks at cryptographic protocols for securing communication between entities in which at least one of the parties has very limited computing capabilities.
In this paper we focus our attention on SASI, a new Ultra-Lightweight RFID Authentication Protocol, designed for providing Strong
Authentication and Strong Integrity.
The protocol, suitable for passive Tags with limited computational power and storage, involves simple bitwise operations like $and$, $or$, exclusive $or$, modular addition, and cyclic shift operations. It is efficient, fits the hardware constraints, and can be seen as an example of the above research trend.
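For concreteness, the cyclic shift in SASI-style protocols acts on 96-bit words; since the rotation amount is not pinned down here, the sketch below takes it as y mod 96, one common reading in the literature, and is an illustration rather than the protocol's definitive specification.

```python
def rot(x: int, y: int, w: int = 96) -> int:
    """Left-rotate the w-bit word x by (y mod w) positions.

    A sketch of the Rot(x, y) cyclic shift used by SASI-style
    ultra-lightweight protocols; taking the rotation amount to be
    y mod w is an assumed interpretation.
    """
    s = y % w
    mask = (1 << w) - 1
    return ((x << s) | (x >> (w - s))) & mask
```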
We start by showing some weaknesses in the protocol and, then, we describe how such weaknesses, through a sequence of simple steps, can be used to compute in an efficient way all secret data used for the authentication process. More precisely, we describe three attacks:
- A de-synchronisation attack, through which an adversary
can break the synchronisation between the RFID Reader and the Tag.
- An identity disclosure attack, through which an adversary can compute the identity of the Tag.
- A full disclosure attack, which enables an adversary to retrieve all secret data stored in the Tag.
Then we present some experimental results we have obtained by running several tests on an implementation of the protocol, in order to evaluate the performance of the proposed attacks. The results confirm that the attacks are effective and efficient.

Classification and Generation of Disturbance Vectors for Collision Attacks against SHA-1

In this paper, we present a deterministic algorithm to produce disturbance vectors for collision attacks against SHA-1.
We show that all published disturbance vectors can be classified into two types of vectors, type-I and type-II.
We define a cost function, close to those described in \cite{Mendel06}, to evaluate the complexity of a collision attack for a given disturbance vector.
Using the classification and the cost function, we performed an exhaustive search which allowed us to retrieve all known vectors.
We also found new vectors which have lower cost.
This may lead to the best collision attack against SHA-1, with a theoretical attack complexity of $2^{51}$ hash function calls.

A CCA2 Secure Variant of the McEliece Cryptosystem

The McEliece public-key encryption scheme has become an interesting alternative to cryptosystems based on number-theoretical problems. Differently from RSA and ElGamal, the McEliece PKC is not known to be broken by a quantum computer. Moreover, even though the McEliece PKC has a relatively big key size, encryption and decryption operations are rather efficient. In spite of all the recent results in coding-theory-based cryptosystems, to date there are no constructions secure against chosen ciphertext attacks in the standard model, the de facto security notion for public-key cryptosystems.
In this work, we show the first construction of a McEliece based public-key cryptosystem secure against chosen ciphertext attacks in the standard model. Our construction is inspired by a recently proposed technique by Rosen and Segev.

Cryptanalysis of EnRUPT

In this paper we present a preimage attack on EnRUPT-512. We exploit the fact that the internal state is only a little larger than the critical security level: 1152 bits against 1024 bits. The absence of a message expansion and a fairly simple compression function allow us to fix the values of some state words and thus reduce the size of the birthday state space in the meet-in-the-middle attack below 1024 bits. Equations that arise through the analysis are solved using look-up tables. The complexity of the attack is around 2^{480} compression function calls and the memory requirement is around 2^{384}.

Combined (identity-based) public key schemes

Consider a scenario in which parties use a public key encryption scheme and a signature scheme with a single public key/private key pair---so the private key sk is used for both signing and decrypting. Such a simultaneous use of a key is in general considered poor cryptographic practice, but from an efficiency point of view looks attractive.
We offer security notions to analyze such violations of key separation. For both the identity- and the non-identity-based setting, we show that---although being insecure in general---for schemes of interest the resulting combined (identity-based) public key scheme can offer strong security guarantees.

Secure Arithmetic Computation with No Honest Majority

We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of {\em two-party} protocols with security against {\em malicious} parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead.
We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include:
\begin{itemize}
\item An {\em unconditionally secure} protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log|R|$.
\item Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate and the computational cost is dominated by $O(\log k)$ field operations per gate, where $k$ is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation ({\em SIAM J.\ Comput.}, 35(5), 2006).
\item A protocol for the rings $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant.
\end{itemize}
All of our protocols are in fact {\em UC-secure} in the OT-hybrid model and can be generalized to {\em multiparty} computation with an arbitrary number of malicious parties.

Vortex: A New Family of One Way Hash Functions based on Rijndael Rounds and Carry-less Multiplication

We present Vortex, a new family of one-way hash functions that can produce message digests of 224, 256, 384 and 512 bits. The main idea behind the design of these hash functions is that we use well-known algorithms that can support very fast diffusion in a small number of steps. We also balance the cryptographic strength that comes from iterating block cipher rounds with SBox substitution and diffusion (as in Whirlpool) against the need for a lightweight implementation with as small a number of rounds as possible. We use a variable number of Rijndael rounds with a stronger key schedule. Our goal is not to protect a secret symmetric key but to support perfect mixing of the bits of the input into the hash value. Rijndael rounds are followed by our variant of Galois Field multiplication. This achieves cross-mixing between 128-bit or 256-bit sets. Our hash function uses the Enveloped Merkle-Damgard construction to support properties such as collision resistance, first and second pre-image resistance, pseudorandom oracle preservation and pseudorandom function preservation. We provide analytical results that demonstrate that the number of queries required for finding a collision with probability greater than or equal to 0.5 in an ideal block cipher approximation of Vortex 256 is at least 1.18x2^122.55 if the attacker uses randomly selected message words. We also provide experimental results that indicate that the compression function of Vortex is not inferior to that of the SHA family regarding its capability to preserve the pseudorandom oracle property. We list a number of well-known attacks and discuss how the Vortex design addresses them. The main strength of the Vortex design is that this hash function can demonstrate an expected performance of 2.2-2.5 cycles per byte in future processors with instruction set support for Rijndael rounds and carry-less multiplication. We provide arguments why we believe this is a trend in the industry.
We also discuss how optimized assembly code can be written that demonstrates such performance.
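Carry-less multiplication, the mixing primitive the design leans on, is ordinary multiplication with XOR in place of addition, i.e. multiplication of polynomials over GF(2); a bit-level sketch follows (production code would of course use the hardware instruction instead):

```python
def clmul(a: int, b: int) -> int:
    """Carry-less multiply: treat a and b as GF(2) polynomials and
    multiply them, accumulating shifted partial products with XOR
    instead of addition (what a hardware carry-less multiply
    instruction computes per word)."""
    r = 0
    while b:
        if b & 1:
            r ^= a  # add (XOR in GF(2)) the shifted copy of a
        a <<= 1
        b >>= 1
    return r
```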

Key-Private Proxy Re-Encryption

Proxy re-encryption (PRE) allows a proxy to convert a ciphertext encrypted under one key into an encryption of the same message under another key. The main idea is to place as little trust and reveal as little information to the proxy as necessary to allow it to perform its translations. At the very least, the proxy should not be able to learn the keys of the participants or the content of the messages it re-encrypts. However, in all prior PRE schemes, it is easy for the proxy to determine between which participants a re-encryption key can transform ciphertexts. This can be a problem in practice. For example, in a secure distributed file system, content owners may want to use the proxy to help re-encrypt sensitive information *without* revealing to the proxy the *identity* of the recipients.
In this work, we propose key-private (or anonymous) re-encryption keys as an additional useful property of PRE schemes. We formulate a definition of what it means for a PRE scheme to be secure and key-private. Surprisingly, we show that this property is not captured by prior definitions or achieved by prior schemes, including even the secure *obfuscation* of PRE by Hohenberger, Rothblum, shelat and Vaikuntanathan (TCC 2007). Finally, we propose the first key-private PRE construction and prove its security under a simple extension of the Decisional Bilinear Diffie Hellman assumption and its key-privacy under the Decision Linear assumption in the standard model.

Unconditionally Secure Multiparty Set Intersection Re-Visited

In this paper, we re-visit the problem of unconditionally secure multiparty set intersection in information theoretic model.
Li et al. \cite{LiSetMPCACNS07} have proposed a protocol for the $n$-party set intersection problem, which provides unconditional security when $t < \frac{n}{3}$ players are corrupted by an active adversary having {\it unbounded computing power}. Moreover, they have claimed that their protocol takes six rounds of communication and incurs a communication complexity of ${\cal O}(n^4m^2)$, where each player has a set of size $m$. However, we show that the round complexity and communication complexity of the protocol in \cite{LiSetMPCACNS07} are much higher than claimed. We then propose a {\it novel} unconditionally secure protocol for the multiparty set intersection problem with $n > 3t$ players, which significantly improves the "actual" round and communication complexity (as shown in this paper) of the protocol given in \cite{LiSetMPCACNS07}. To design our protocol, we use several tools which are of independent interest.

Last updated: 2009-02-11

On Communication Complexity of Perfectly Reliable and Secure Communication in Directed Networks

In this paper, we re-visit the problem of {\it perfectly reliable message transmission} (PRMT) and {\it perfectly secure message transmission} (PSMT) in a {\it directed network} under the presence of a threshold adaptive Byzantine adversary having {\it unbounded computing power}. Desmedt et al. have given the necessary and sufficient condition for the existence of PSMT protocols in directed networks. In this paper, we first show that the necessary and sufficient condition (characterization) given by Desmedt et al. does not hold for two-phase\footnote{A phase is a send from sender to receiver or vice-versa.} PSMT protocols. Hence we provide a different necessary and sufficient condition for two-phase PSMT in directed networks. We also derive the lower bound on the communication complexity of two-phase PSMT and show that our lower bound is {\it asymptotically tight} by designing a two-phase PSMT protocol whose communication complexity matches the lower bound. Though the characterization of three or more phase PSMT is resolved by the result of Desmedt et al., the lower bound on communication complexity for the same has not been investigated yet. Here we derive the lower bound on the communication complexity of three or more phase PSMT in directed networks and show that our lower bound is {\it asymptotically tight} by designing {\it communication optimal} PSMT protocols. Finally, we characterize the class of directed networks over which communication optimal PRMT, or PRMT with constant factor overhead, is possible. By communication optimal PRMT, or PRMT with constant factor overhead, we mean that the PRMT protocol is able to send $\ell$ field elements by communicating $O(\ell)$ field elements from a finite field $\mathbb F$. To design our protocols, we use several techniques which are of independent interest.

Various Security Analysis of a pfCM-MD Hash Domain Extension and Applications based on the Extension

We propose a new hash domain extension, \textit{a prefix-free-Counter-Masking-MD (pfCM-MD)}. Among the security notions for hash functions, we focus on the indifferentiability security notion, by which we can check whether the structure of a given hash function has any weakness. Next, we consider the security of HMAC, two new prf constructions, the NIST SP 800-56A key derivation function, and the randomized hashing in NIST SP 800-106, where all of them are based on the pfCM-MD. In particular, due to its counter, the pfCM-MD is secure against all generic second-preimage attacks such as the Kelsey-Schneier attack \cite{KeSc05} and the attack of Elena {\em et al.} \cite{AnBoFoHoKeShZi08}. Our proof technique and most of our notations follow those in \cite{BeDaPeAs08,Bellare06,BeCaKr96a}.

A variant of Wiener's attack on RSA

Wiener's attack is a well-known polynomial-time attack on an RSA cryptosystem with small secret decryption exponent d, which works if d < n^{0.25}, where n = pq is the modulus of the cryptosystem. Namely, in that case, d is the denominator of some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n, e). There are several extensions of Wiener's attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n^{0.25}. They all have run-time complexity (at least) O(D^2), where d = Dn^{0.25}. Here we propose a new variant of Wiener's attack, which uses results on Diophantine approximations of the form |\alpha - p/q| < c/q^2, and a "meet-in-the-middle" variant for testing the candidates (of the form rq_{m+1} + sq_m) for the secret exponent. This decreases the run-time complexity of the attack to O(D log(D)) (with space complexity O(D)).
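For reference, the classical continued-fraction step that this variant refines can be sketched as follows; the Diophantine-approximation and meet-in-the-middle refinements from the paper are not reproduced here.

```python
from math import isqrt

def convergents(a: int, b: int):
    """Yield the convergents p/q of the continued fraction of a/b."""
    p1, p2 = 1, 0  # p_{-1}, p_{-2}
    q1, q2 = 0, 1  # q_{-1}, q_{-2}
    while b:
        x = a // b
        a, b = b, a - x * b
        p1, p2 = x * p1 + p2, p1
        q1, q2 = x * q1 + q2, q1
        yield p1, q1

def wiener(e: int, n: int):
    """Classical Wiener attack: recover a small secret exponent d
    (roughly d < n**0.25 / 3) from the public key (n, e), or None."""
    for k, d in convergents(e, n):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k  # candidate for Euler's phi(n)
        # p and q would be the roots of x^2 - (n - phi + 1)x + n
        s = n - phi + 1
        disc = s * s - 4 * n
        if disc < 0:
            continue
        r = isqrt(disc)
        if r * r == disc and (s + r) % 2 == 0 \
                and ((s + r) // 2) * ((s - r) // 2) == n:
            return d
    return None
```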

Complete Fairness in Multi-Party Computation Without an Honest Majority

Gordon et al. recently showed that certain (non-trivial) functions can be computed with complete fairness in the two-party setting. Motivated by their results, we initiate a study of complete fairness in the multi-party case and demonstrate the first completely-fair protocols for non-trivial functions in this setting. We also provide evidence that achieving fairness is "harder" in the multi-party setting, at least with regard to round complexity.

On the Composability of Statistically Secure Bit Commitments

We show that stand-alone statistically secure commitments based on two-party stateless primitives are statistically universally composable. That is, they are simulatably secure with an unlimited adversary, an unlimited simulator, and an unlimited environment machine.
In particular, these protocols can be used in arbitrary statistically secure applications without lowering the security.

The Diffie-Hellman problem and generalization of Verheul's theorem

Bilinear pairings on elliptic curves have been of much interest in cryptography recently. Most of the protocols involving pairings rely on the hardness of the bilinear Diffie-Hellman problem. In contrast to the discrete log (or Diffie-Hellman) problem in a finite field, the difficulty of this problem has not yet been much studied. In 2001, Verheul \cite{Ver} proved that on a certain class of curves, the discrete log and Diffie-Hellman problems are unlikely to be provably equivalent to the same problems in a corresponding finite field unless both Diffie-Hellman problems are easy. In this paper we generalize Verheul's theorem and discuss the implications on the security of pairing based systems. We also include a large table of distortion maps.

New hash function designs

We suggest new hash function design principles. The basic idea is that the hash value may be a combination of XORs and modular additions of values independently produced from different parts of the message. We instantiate this framework with a function for computing the above values based on modular multiplication.

Complexity of Multiparty Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation

In symmetric secure function evaluation (SSFE), Alice has an input
$x$, Bob has an input $y$, and both parties wish to securely
compute $f(x,y)$. We classify these functions $f$ according
to their ``cryptographic complexities,'' and show that the
landscape of complexity among these functions is surprisingly
rich.
We give combinatorial characterizations of the SSFE functions $f$ that have passive-secure protocols, and of those that have protocols secure in the standalone setting. With respect to universally composable security (for unbounded parties), we show that there is an infinite hierarchy of increasing complexity for SSFE functions. That is, we describe a family of SSFE functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$.
Our main technical tool for deriving complexity separations
is a powerful protocol simulation theorem which states that,
even in the strict setting of UC security, the canonical
protocol for $f$ is as secure as any other protocol for $f$,
as long as $f$ satisfies a certain combinatorial characterization.
We can then show intuitively clear impossibility results by
establishing the combinatorial properties of $f$ and then
describing attacks against the very simple canonical
protocols, which by extension are also feasible
attacks against {\em any} protocol for the same functionality.

Generalized Universal Circuits for Secure Evaluation of Private Functions with Application to Data Classification

Secure Evaluation of Private Functions (PF-SFE) allows two parties to compute a private function, known to one party only, on private data of both. It is known that PF-SFE can be reduced to Secure Function Evaluation (SFE) of a Universal Circuit (UC). Previous UC constructions only simulated circuits with gates of $d=2$ inputs, while gates with $d>2$ inputs were decomposed into many $2$-input gates, which is inefficient for large $d$ since the size of the UC depends heavily on the number of gates.
We present generalized UC constructions to efficiently simulate any circuit with gates of $d \ge 2$ inputs having efficient circuit representation. Our constructions are non-trivial generalizations of previously known UC constructions.
As an application we show how to securely evaluate private functions such as neural networks (NN), which are increasingly used in commercial applications. Our provably secure PF-SFE protocol needs only one round in the semi-honest model (or even no online communication at all using non-interactive oblivious transfer) and evaluates a generalized UC that entirely hides the structure of the private NN. This enables applications like privacy-preserving data classification based on private NNs without a trusted third party, while simultaneously protecting the user's data and the NN owner's intellectual property.

Last updated: 2008-10-28

Injective Trapdoor Functions are Necessary and Sufficient for CCA2 Secure Public-Key Cryptosystems

We make the following contributions in this paper:
1. We show that the existence of semantically secure public-key cryptosystems implies the existence of injective one-way trapdoor functions. This resolves one of the long-standing open problems in cryptography. Moreover, the black-box construction of injective trapdoor functions from any semantically secure cryptosystem disproves a conclusion at FOCS'01.
2. We further show that the injective trapdoor functions constructed are secure under correlated products for the uniform, repetitional distribution. This shows that the existence of semantically secure public-key cryptosystems implies the existence of CCA2 secure cryptosystems, by a result of Rosen and Segev. This settles another intensively investigated, long-standing and fundamental open problem in cryptography. It also indicates that trapdoor functions secure under correlated products (for the uniform, repetitional distribution) exist if and only if injective trapdoor functions exist, which in turn answers the motivating question posed by Rosen and Segev.
Combining these with prior results, we achieve a somewhat surprising result: injective trapdoor functions exist if and only if CCA2 secure cryptosystems exist. Considering that CCA2 security is the strongest of the security notions for public-key cryptosystems, this makes the security hierarchy for public-key cryptosystems, in the sense of existence, collapse into one level.
The conclusions of this work have many consequences: for example, trapdoor functions with poly-bounded pre-image size exist if and only if injective trapdoor functions exist; there exists a collection of efficient trapdoor functions from the Ajtai-Dwork lattice-based cryptosystem, and several others.

Algebraic Cryptanalysis of MQQ Public Key Cryptosystem by MutantXL

In this paper, we present an efficient attack on the Multivariate Quadratic Quasigroups (MQQ) cryptosystem. Our cryptanalysis breaks MQQ cryptosystems by solving systems of multivariate quadratic polynomial equations using a modified version of the MutantXL algorithm. We present experimental results comparing the behavior of our implementation of MutantXL to Magma's implementation of $F_4$ on MQQ systems ($\geq$ 135 bit). Based on our results we show that the MutantXL implementation solves the systems with much less memory than Magma's implementation of the $F_4$ algorithm.

On the Security of Fully Collusion Resistant Traitor Tracing Schemes

This paper investigates the security of FTT (fully collusion resistant traitor tracing) schemes in terms of DOT (Denial Of Tracing) and framing. In a DOT attack, a decoder is able to detect tracing activity and then prolongs the tracing process so that the tracer is unable to complete the tracing job in a realistic time and hence has to abort. On the other hand, by merely embedding several bytes of non-volatile memory in the decoder, we demonstrate, for the FTT schemes, how the decoder can frame innocent users at will. Furthermore, we propose a countermeasure against the framing attack.

A New Variant of the Cramer-Shoup KEM Secure against Chosen Ciphertext Attack

We propose a new variant of the Cramer-Shoup KEM (key
encapsulation mechanism). The proposed variant is more
efficient than the original Cramer-Shoup KEM scheme in terms of
public key size and encapsulation cost, but is proven to be
(still) secure against chosen ciphertext attack in the standard
model, relative to the Decisional Diffie-Hellman problem.

Authenticated Adversarial Routing

The aim of this paper is to demonstrate the feasibility of authenticated throughput-efficient routing in an unreliable and dynamically changing synchronous network in which a majority of the nodes are malicious insiders that try to destroy and alter messages or disrupt communication in any way. More specifically, in this paper we seek to answer the following question: Given a network in which the majority of nodes are controlled by a malicious adversary and whose topology is changing every round, is it possible to develop a protocol with polynomially-bounded memory per processor that guarantees throughput-efficient and correct end-to-end communication? We answer the question affirmatively for extremely general corruption patterns: we only request that the topology of the network and the corruption pattern of the adversary leave at least one path each round connecting the sender and receiver through honest nodes (though this path may change at every round). Our construction works in the public-key setting and enjoys bounded memory per processor (which does not depend on the amount of traffic and is polynomial in the network size). Our protocol achieves optimal transfer rate with negligible decoding error. We stress that our protocol assumes no knowledge of which nodes are corrupted nor which path is reliable at any round, and is also fully distributed, with nodes making decisions locally, so that they need not know the topology of the network at any time.
The optimality that we prove for our protocol is very strong. Given any routing protocol, we evaluate its efficiency (rate of message delivery) in the ``worst case,'' that is with respect to the worst possible graph and against the worst possible (polynomially bounded) adversarial strategy (subject to the above mentioned connectivity constraints). Using this metric, we show that there does not exist any protocol that can be asymptotically superior (in terms of throughput) to ours in this setting.
We remark that the aim of our paper is to demonstrate via explicit example the feasibility of throughput-efficient authenticated adversarial routing. However, we stress that our protocol is not intended to provide a practical solution: due to its complexity, no attempt has thus far been made to make the protocol practical by reducing constants or the large (though polynomial) memory requirements per processor.
Our result is related to recent work of Barak, Goldberg and Xiao in 2008, who studied fault localization in networks assuming a private-key trusted-setup setting. Our work, in contrast, assumes a public-key PKI setup and aims at not only fault localization but also transmission optimality. Among other things, our work answers one of the open questions posed in the Barak et al. paper regarding fault localization on multiple paths. The use of a public-key setting to achieve strong error-correction results in networks was inspired by the work of Micali, Peikert, Sudan and Wilson, who showed that classical error-correction against a polynomially-bounded adversary can be achieved with surprisingly high precision. Our work is also related to an interactive coding theorem of Rajagopalan and Schulman, who showed that in noisy-edge static-topology networks a constant overhead in communication can also be achieved (provided none of the processors are malicious), thus establishing an optimal-rate routing theorem for static-topology networks.
Finally, our work is closely related to, and builds upon, the problem of End-To-End Communication in distributed networks, studied by Afek and Gafni; Awerbuch, Mansour, and Shavit; and Afek, Awerbuch, Gafni, Mansour, Rosen, and Shavit, though none of these papers consider or ensure correctness in the setting of a malicious adversary controlling the majority of the network.

Divisible On-line/Off-line Signatures

On-line/Off-line signatures are used in a particular scenario where the signer must respond quickly once the message to be signed is presented. The idea is to split the signing procedure into two phases: the off-line and on-line phases. The signer can do some pre-computations in the off-line phase before he sees the message to be signed.
In most of these schemes, when signing a message $m$, a partial signature of $m$ is computed in the off-line phase. We call this part of signature the off-line signature token of message $m$. In some special applications, the off-line signature tokens might be exposed in the off-line phase. For example, some signers might want to transmit off-line signature tokens in the off-line phase in order to save the on-line transmission bandwidth. Another example is in the case of on-line/off-line threshold signature schemes, where off-line signature tokens are unavoidably exposed to all the players in the off-line phase.
This paper discusses this exposure problem and introduces a new notion: divisible on-line/off-line signatures, in which exposure of off-line signature tokens in the off-line phase is allowed. An efficient construction of this type of signature is also proposed. Furthermore, we show an important application of divisible on-line/off-line signatures in the area of on-line/off-line threshold signatures.

Secure Random Key Pre-Distribution Against Semi-Honest Adversaries

Recently, Eschenauer and Gligor [EG02] proposed a model (the EG-model) for random key pre-distribution in distributed sensor networks (DSN) that allows sensors to establish private shared keys. In this model, each sensor is randomly assigned a set of keys, called a key-ring, from a secret key-pool. Two nodes can communicate securely by using a shared key (direct key) or via a chain of shared keys (key-path). The authors show how the key-ring size can be chosen so that nodes are guaranteed to be linked either by direct keys or by key-paths. Security of this system is proven for an eavesdropping (passive) adversary.
In this paper we assume the same key pre-distribution set-up but consider a semi-honest adversary. Semi-honest adversaries are privacy adversaries that have access to a fraction of the keys in the key pool, the compromised keys, but are otherwise passive, in the sense that they do not cause nodes to deviate from protocol executions (to remain undetectable). Since they can decrypt messages secured by key-paths with compromised keys, the security guarantees of the EG model break down.
We revisit the security of key establishment in the presence of such adversaries and make a number of contributions. First, we show that it is possible to choose the size of the key-rings so that any two
nodes can exchange a private key securely in the presence of a semi-honest adversary. Second, we give a protocol that achieves this guarantee and prove its security. Third, we introduce a new efficiency parameter for the EG-model that allows the protocol designer to trade-off the communication required for key establishment with the key-ring size. Finally, we propose a concrete key establishment protocol (based on the DSR protocol) that guarantees security in the presence of a semi-honest adversary.
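The key-ring-size trade-off at the heart of the EG-model is governed by the probability that two randomly assigned key-rings intersect; a small helper to compute it exactly (function and parameter names are ours, not from the paper):

```python
from math import comb

def share_probability(pool_size: int, ring_size: int) -> float:
    """Probability that two key-rings of ring_size keys, each drawn
    uniformly without replacement from a pool of pool_size keys,
    share at least one key (complement of a fully disjoint draw)."""
    return 1.0 - comb(pool_size - ring_size, ring_size) / comb(pool_size, ring_size)
```

Choosing the smallest `ring_size` that reaches a target probability is, in essence, how the key-ring size would be tuned in such a scheme.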