All papers in 2007 (482 results)
TinyPBC: Pairings for Authenticated Identity-Based Non-Interactive Key Distribution in Sensor Networks
Key distribution in Wireless Sensor Networks (WSNs) is challenging.
Symmetric cryptosystems can perform it efficiently, but they often do
not provide a perfect trade-off between resilience and storage.
Further, even though conventional public-key and elliptic-curve
cryptosystems are computationally feasible on sensor nodes, protocols
based on them are not. They require exchange and storage of large
keys and certificates, which is expensive.
Using Pairing-based Cryptography (PBC) protocols, conversely, parties
can agree on keys without any interaction. In this work, we (i) show
how security in WSNs can be bootstrapped using an authenticated
identity-based non-interactive protocol and (ii) present TinyPBC, to
our knowledge, the most efficient implementation of PBC primitives
for an 8-bit processor.
TinyPBC is open-source code able to compute pairings as well
as binary field multiplications in about 5.5 s and 4019.46 $\mu$s,
respectively, on the ATmega128L 7.3828-MHz/4KB SRAM/128KB ROM
processor -- the MICA2 and MICAZ node processor.
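As background for the claim that pairings allow key agreement without interaction, recall the identity underlying Sakai-Ohgishi-Kasahara-style schemes (a sketch of the principle only, not necessarily TinyPBC's exact protocol): with a symmetric pairing $e$, a master secret $s$, and private keys $S_{ID} = s\,H(ID)$ issued by the key distribution center, nodes $A$ and $B$ both compute
\[
K_{AB} = e(S_A, H(ID_B)) = e(H(ID_A), H(ID_B))^{s} = e(S_B, H(ID_A)) = K_{BA}
\]
using only each other's identities, so no messages need to be exchanged.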
Last updated: 2008-05-06
MAC-free variant of KD04
Kurosawa and Desmedt proposed an efficient hybrid encryption scheme (KD04) which is secure
against adaptive chosen ciphertext attacks (IND-CCA) although the underlying KEM (key
encapsulation mechanism) is not IND-CCA secure \cite{Kurosawa2004}. We show a variant of
KD04 which is IND-CCA secure when the underlying DEM part is IND-CCA secure. We need
a DEM built from a one-time symmetric encryption scheme and a MAC in the security reduction
to check whether the KEM part of a ciphertext is valid. In practice, however, the validity of
the KEM part of a ciphertext can be checked without the help of the MAC. So the
hybrid encryption scheme can also use redundancy-free IND-CCA secure DEMs that avoid the
overhead due to the MAC. When using redundancy-free (MAC-free) IND-CCA secure DEMs, the
new scheme is more efficient than KD04 in bandwidth.
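As an illustration of the DEM pattern at issue, here is a toy Python sketch (all names, e.g. kdf and dem_encrypt, are hypothetical, and the primitives are illustrative only, not KD04 itself); the MAC tag appended by dem_encrypt is exactly the redundancy that a MAC-free IND-CCA secure DEM would avoid:

    import hashlib, hmac

    def kdf(key: bytes, label: bytes, n: int) -> bytes:
        # Derive n pseudorandom bytes from key (counter-mode hashing; toy only).
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + label + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def dem_encrypt(k: bytes, msg: bytes) -> bytes:
        ke, km = kdf(k, b"enc", 32), kdf(k, b"mac", 32)
        stream = kdf(ke, b"stream", len(msg))
        ct = bytes(m ^ s for m, s in zip(msg, stream))
        tag = hmac.new(km, ct, hashlib.sha256).digest()   # the MAC redundancy
        return ct + tag

    def dem_decrypt(k: bytes, ct_tag: bytes) -> bytes:
        ke, km = kdf(k, b"enc", 32), kdf(k, b"mac", 32)
        ct, tag = ct_tag[:-32], ct_tag[-32:]
        if not hmac.compare_digest(tag, hmac.new(km, ct, hashlib.sha256).digest()):
            raise ValueError("invalid ciphertext")        # CCA rejection via the MAC
        stream = kdf(ke, b"stream", len(ct))
        return bytes(c ^ s for c, s in zip(ct, stream))

    assert dem_decrypt(b"k" * 32, dem_encrypt(b"k" * 32, b"hello")) == b"hello"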
Differential Fault Analysis on the AES Key Schedule
This letter proposes a differential fault analysis
on the AES key schedule and shows how an entire 128-bit AES key can be retrieved.
In the workshop at FDTC 2007, we presented the DFA mechanism on the AES key schedule and proposed general attack rules.
Using our proposed rules, we showed an efficient attack that can retrieve 80 bits of the 128-bit key.
Recently, we have found a new attack that can obtain an additional 8 bits compared with our previous attack.
As a result, we present the most efficient attack for
retrieving 88 bits of the 128-bit key using approximately
two pairs of correct and faulty ciphertexts.
An Efficient Identification Protocol and the Knowledge-of-Exponent Assumption
In this paper, we propose an extremely simple identification protocol and prove its security using the Knowledge-of-Exponent Assumption (KEA). We discuss the applicability of KEA in various protocol settings as well. Recently, doubts have been raised about applying KEA in some protocols where an adversary has auxiliary inputs. However, we suggest that KEA is applicable in these cases. We present two variants of KEA, Generalized KEA (GKEA) and Auxiliary-Input KEA (AI-KEA), to clarify the proper use of KEA.
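For flavor, here is a toy two-flow discrete-log identification sketch of the general kind discussed (our own illustrative parameters and protocol shape, not claimed to be the paper's exact construction; KEA-style assumptions enter when arguing that a successful prover must actually know $x$):

    import secrets

    p = 2**127 - 1        # toy Mersenne prime modulus (NOT a secure group choice)
    g = 3                 # assumed generator, for illustration only

    x = secrets.randbelow(p - 1) + 1     # prover's secret key
    X = pow(g, x, p)                     # prover's public key

    # Flow 1: verifier sends a random challenge u = g^r.
    r = secrets.randbelow(p - 1) + 1
    u = pow(g, r, p)

    # Flow 2: prover answers with v = u^x.
    v = pow(u, x, p)

    # Verifier accepts iff v = X^r, since u^x = g^{rx} = (g^x)^r.
    # (This checks completeness only; soundness is where KEA-type reasoning lives.)
    assert v == pow(X, r, p)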
Impossibility Results for Universal Composability in Public-Key Models and with Fixed Inputs
Universal composability and concurrent general composition consider a setting where secure protocols are run concurrently with each other and with arbitrary other possibly insecure protocols. Protocols that meet the definition of universal composability are guaranteed to remain secure even when run in this strongly adversarial setting. In the case of an honest majority, or where there is a trusted setup phase of some kind (like a common reference string or the key-registration public-key infrastructure of Barak et al.~in FOCS 2004), it has been shown that any functionality can be securely computed in a universally composable way. On the negative side, it has also been shown that in the {\em plain model}\/ where there is no trusted setup at all, there are large classes of functionalities which cannot be securely computed in a universally composable way without an honest majority.
In this paper we extend these impossibility results for universal composability. We study a number of public-key models and show for which models the impossibility results of universal composability hold and for which they do not. We also consider a setting where the inputs to the protocols running in the network are fixed before any execution begins. The majority of our results are negative and we show that the known impossibility results for universal composability in the case of no honest majority extend to many other settings.
Algebraic Side-Channel Collision Attacks on AES
This paper presents a new powerful side-channel cryptanalytic method - algebraic collision attacks - an efficient class of power analysis based on both the power consumption information leakage and the specific structure of the attacked cryptographic algorithm. This can result in an extremely low number of measurements needed for a key recovery.
The algebraic collision attacks are well applicable to AES, if one-byte collisions are detectable. For the recovery of the complete AES key, one needs 3 measurements with a probability of 0.42 and 4.24 PC hours of post-processing, 4 measurements with a probability of 0.82 and several seconds of offline computations, or 5 measurements with success probability close to 1 and several seconds of post-processing.
Dynamic SHA
In this paper I describe the construction of the Dynamic SHA family of cryptographic hash functions. They are built with design components from the SHA-2 family, but there is a new function R in the hash function. It enables a novel design principle: when the message changes, a different rotate-right operation may be performed. This makes the system resistant against all extant attacks.
Obtaining Universally Composable Security: Towards the Bare Bones of Trust
A desirable goal for cryptographic protocols is to guarantee security
when the protocol is composed with other protocol instances. Universally Composable (UC) security provides this guarantee in a strong sense: A UC-secure protocol maintains its security properties even when composed concurrently with an unbounded number of instances of arbitrary protocols. However, many interesting cryptographic tasks are provably impossible to realize with UC security in the standard, ``plain'' model of computation. Impossibility holds even if ideally authenticated communication channels are provided. In contrast, it has been demonstrated that general secure computation can be obtained in a number of idealized models. Each one of these models represents a form of trust that is put in some of the system's components.
This survey examines and compares some of these trust models, both
from the point of view of their sufficiency for building UC secure
protocols, and from the point of view of their practical realizability. We start with the common reference string (CRS) model, and then describe several relaxations and alternatives including the Defective CRS model, the key registration models, the hardware token model, the global and augmented CRS models, and a timing assumption.
Finally, we briefly touch upon trust models for obtaining
authenticated communication.
Notes on the Wang et al. $2^{63}$ SHA-1 Differential Path
Although advances in SHA-1 cryptanalysis have been made since the 2005 announcement of a $2^{63}$ attack by Wang et al., the details of the attack have not yet been vetted; this note does just that. Working from Adi Shamir's 2005 CRYPTO rump session presentation of Wang et al.'s work, this note corroborates and presents the differential path and associated conditions for the two-block attack. Although the error analysis for the advanced condition correction technique is not verified, a method is given which yields a two-block collision attack on SHA-1 requiring an estimated $2^{62}$ SHA-1 computations if the original error analysis by Wang et al. is correct.
Authenticated Key Exchange and Key Encapsulation Without Random Oracles
This paper presents a new paradigm to realize cryptographic primitives such as authenticated key exchange and key encapsulation without random oracles under three assumptions: the decisional Diffie-Hellman (DDH) assumption, target collision resistant (TCR) hash functions and a class of pseudo-random functions (PRFs), $\pi$PRFs, i.e., PRFs with pairwise-independent random sources. We propose a (PKI-based) two-pass authenticated key exchange (AKE) protocol that is comparable in efficiency to the most efficient existing protocols, such as MQV, and that is secure without random oracles (under these assumptions). Our protocol is shown to be secure in the (currently) strongest security definition, the extended Canetti-Krawczyk
(eCK) security definition introduced by LaMacchia, Lauter and Mityagin. We also show that a variant of the Kurosawa-Desmedt key encapsulation mechanism (KEM) using a $\pi$PRF is CCA-secure under the three assumptions. This scheme is secure in a stronger security notion, chosen public-key and ciphertext attack (CPCA) security,
when a generalized TCR (GTCR) hash function is used in place of a TCR hash function. The proposed schemes in this paper are validity-check-free, and the implication is that combining them with validity-check-free symmetric encryption (DEM) will yield validity-check-free (e.g., MAC-free) CCA-secure hybrid encryption.
New Features of Latin Dances: Analysis of Salsa, ChaCha, and Rumba
The stream cipher Salsa20 was introduced by Bernstein in 2005 as a candidate in the eSTREAM project, accompanied by the reduced versions Salsa20/8 and Salsa20/12. ChaCha is a variant of Salsa20 aiming at bringing better diffusion for similar performance. Variants of Salsa20 with up to 7 rounds (instead of 20) have been broken by differential cryptanalysis, while ChaCha has not been analyzed yet. We introduce a novel method for differential cryptanalysis of Salsa20 and ChaCha, inspired by correlation attacks and related to the notion of neutral bits. This is the first application of neutral bits in stream cipher cryptanalysis. It allows us to break the 256-bit version of Salsa20/8, to bring faster attacks on the 7-round variant, and to break 6- and 7-round ChaCha. In a second part, we analyze the compression function Rumba, built as the XOR of four Salsa20 instances and returning a 512-bit output. We find collision and preimage attacks for two simplified variants, then we discuss differential attacks on the original version, and exploit a high-probability differential to reduce the complexity of collision search from $2^{256}$ to $2^{79}$ for 3-round Rumba. To prove the correctness of our approach we provide examples of collisions and near-collisions on simplified versions.
Attacks on the WEP protocol
WEP is a protocol for securing wireless networks. In the past years, many attacks on WEP have been published, totally breaking WEP's security. This thesis summarizes all major attacks on WEP. Additionally, a new attack, the PTW attack, is introduced, which was partially developed by the author of this document. Some advanced versions of the PTW attack which are more suitable in certain environments are described as well. Currently, the PTW attack is the fastest publicly known key recovery attack against WEP protected networks.
Faster Multi-Exponentiation through Caching: Accelerating (EC)DSA Signature Verification
We consider the task of computing power products $\prod_{1 \leq i \leq k} g_i^{e_i}$ ("multi-exponentiation") where base elements $g_2, ..., g_k$ are fixed while $g_1$ is variable between multi-exponentiations but may repeat, and where the exponents are bounded (e.g., in a finite group). We present a new technique that entails two different ways of computing such a result. The first way applies to the first occurrence of any $g_1$ where, besides obtaining the actual result, we create a cache entry based on $g_1$, investing very little memory or time overhead. The second way applies to any multi-exponentiation once such a cache entry exists for the $g_1$ in question: the cache entry provides for a significant speed-up. Our technique is useful for ECDSA or DSA signature verification with common domain parameters and recurring signers.
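A minimal sketch of the caching idea, under our own simplifications (illustrative Python; fixed windows of width $w$, two bases, and a cache keyed by the base, whereas the paper's technique is more refined):

    cache = {}   # (base, modulus) -> table of small powers, built on first use

    def _table(g, p, w):
        # g^0, g^1, ..., g^(2^w - 1) mod p
        tbl = [1] * (1 << w)
        for d in range(1, 1 << w):
            tbl[d] = tbl[d - 1] * g % p
        return tbl

    def multi_exp(g1, e1, g2, e2, p, w=4):
        # Compute g1^e1 * g2^e2 mod p, reusing each base's table across calls.
        for g in (g1, g2):
            if (g, p) not in cache:
                cache[(g, p)] = _table(g, p, w)   # the "cache entry" for a new base
        t1, t2 = cache[(g1, p)], cache[(g2, p)]
        result = 1
        nwin = (max(e1.bit_length(), e2.bit_length()) + w - 1) // w
        mask = (1 << w) - 1
        for i in range(nwin - 1, -1, -1):
            for _ in range(w):                    # w squarings per window
                result = result * result % p
            result = result * t1[(e1 >> (i * w)) & mask] % p
            result = result * t2[(e2 >> (i * w)) & mask] % p
        return result

    assert multi_exp(7, 12345, 11, 67890, 1009) == \
           pow(7, 12345, 1009) * pow(11, 67890, 1009) % 1009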
ID-Based Group Password-Authenticated Key Exchange
Password-authenticated key exchange (PAKE) protocols are designed to be secure even when the secret key used for authentication is a human-memorable password. In this paper, we consider PAKE protocols in the group scenario, in which a group of clients, each of whom shares a password with an ``honest but curious'' server, intends to establish a common secret key (i.e., a group key) with the help of the server. In this setting, the key established is known to the clients only and no one else, including the server. Each client needs to remember only a password, while the server keeps the passwords in addition to private keys related to its identity. Towards our goal, we present a compiler that transforms any group key exchange (KE) protocol secure against passive eavesdropping into a group PAKE which is secure against an active adversary who controls all communication in the network. This compiler is built on any group KE protocol (e.g., the Burmester-Desmedt protocol), any identity-based encryption (IBE) scheme (e.g., Gentry's scheme), and any identity-based signature (IBS) scheme (e.g., the Paterson-Schuldt scheme). It adds only two rounds and $O(1)$ communication (per client) to the original group KE protocol. As long as the underlying group KE protocol, IBE scheme and IBS scheme are provably secure without random oracles, a group PAKE constructed by our compiler can be proven secure without random oracles.
Last updated: 2009-10-24
On the hash function of ODH assumption
M. Abdalla, M. Bellare and P. Rogaway proposed a variation of the Diffie-Hellman assumption
called the oracle Diffie-Hellman (ODH) assumption. They recommend using a one-way
cryptographic hash function for the ODH assumption. We notice that if the hash function
is just one-way then there is an attack. We show that if the hash function is
non-malleable then the computational version of the ODH assumption can be reduced to the
computational Diffie-Hellman (CDH) assumption. However, we cannot reduce the ODH assumption to
the decisional Diffie-Hellman (DDH) assumption even if the hash function is non-malleable. It seems
that we need a random oracle hash function to reduce the ODH assumption to the DDH
assumption.
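For reference, the ODH assumption of Abdalla, Bellare and Rogaway can be stated as follows: for a group $\langle g \rangle$ and random exponents $u, v$, no efficient adversary, given access to the oracle $\mathcal{H}_v(X) = H(X^v)$ for any $X \neq g^u$, can distinguish
\[
\bigl(g^u, g^v, H(g^{uv})\bigr) \quad \text{from} \quad \bigl(g^u, g^v, R\bigr),
\]
where $R$ is a uniformly random string of the same length as the hash output.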
Interactive and Noninteractive Zero Knowledge are Equivalent in the Help Model
We show that interactive and noninteractive zero-knowledge are
equivalent in the `help model' of Ben-Or and Gutfreund ({\em J.
Cryptology}, 2003). In this model, the shared reference string is
generated by a probabilistic polynomial-time dealer who is given
access to the statement to be proven. Our results do not rely on
any unproven complexity assumptions and hold for statistical zero
knowledge, for computational zero knowledge restricted to AM,
and for quantum zero knowledge when the help is a pure quantum
state.
Improved Impossible Differential Cryptanalysis of CLEFIA
This paper presents an improved impossible differential attack on the new
block cipher CLEFIA, proposed by Sony Corporation at FSE
2007. Combining some observations with new tricks, we can filter out
the wrong keys more efficiently, and improve the impossible
differential attack on 11-round CLEFIA-192/256; the attack is also
the first to work for CLEFIA-128. The complexity is about $2^{103.1}$
encryptions and $2^{103.1}$ chosen plaintexts. By putting more
constraint conditions on plaintext pairs, we give the first attack
on 12-round CLEFIA for all three key lengths with $2^{119.1}$
encryptions and $2^{119.1}$ chosen plaintexts. For CLEFIA-192/256,
our attack is applicable to the 13-round variant, with time
complexity about $2^{181}$ and data complexity $2^{120}$.
We also extend our attack to 14-round CLEFIA-256, with about
$2^{245.4}$ encryptions and $2^{120.4}$ chosen plaintexts. Moreover,
a birthday sieve method is introduced to decrease the complexity of
the core precomputation.
A Synthetic Indifferentiability Analysis of Some Block-Cipher-Based Hash Functions
At ASIACRYPT 2006, Chang et al. analyzed the indifferentiability of some popular hash functions based on block ciphers, namely, the twenty collision-resistant PGV constructions, MDC2 and the PBGV hash function, etc. In particular, two indifferentiability attacks were presented on four of the twenty collision-resistant PGV constructions and on the PBGV hash function with prefix-free padding. In this article, a synthetic indifferentiability analysis of some block-cipher-based hash functions is considered. First, a more precise definition of the indifferentiability adversary in block-cipher-based hash functions is proposed. Next, the indifferentiability advantage is analyzed separately according to whether the hash function is keyed or not. Finally, a limitation is observed in Chang et al.'s indifferentiability attacks on the four PGV and the PBGV hash functions. The formal proofs show that those hash functions are indifferentiable from a random oracle in the ideal cipher model with prefix-free padding, the NMAC/HMAC and the chop constructions.
Secure Computation Without Authentication
Research on secure multiparty computation has mainly concentrated on
the case where the parties can authenticate each other and the
communication between them. This work addresses the question of what
security can be guaranteed when authentication is not available. We
consider a completely unauthenticated setting, where {\em all}
messages sent by the parties may be tampered with and modified by
the adversary without the uncorrupted parties being able to detect
this fact. In this model, it is not possible to achieve the same
level of security as in the authenticated-channel setting.
Nevertheless, we show that meaningful security guarantees {\em
can} be provided: Essentially, all the adversary can do is to
partition the network into disjoint sets, where in each set the
computation is secure in and of itself, and also {\em independent} of
the computation in the other sets. In this setting we provide, for
the first time, non-trivial security guarantees in a model with {\em
no setup assumptions whatsoever.} We also obtain similar results
while guaranteeing universal composability, in some variants of the
common reference string model. Finally, our protocols can be used to
provide conceptually simple and unified solutions to a number of
problems that were studied separately in the past, including
password-based authenticated key exchange and non-malleable
commitments. As an application of our results, we study the
question of constructing secure protocols in partially-authenticated
networks, where some of the links are authenticated and some are not
(as is the case in most networks today).
Efficient GF(3m) Multiplication Algorithm for eta T Pairing
The computation speed of pairing-based cryptosystems is
slow compared with other public key cryptosystems, even though
several efficient computation algorithms have been proposed. Thus more efficient computation of the Tate pairing is an important research goal. $GF(3^m)$ multiplication in $GF(3^{6m})$ in the pairing algorithm is the greatest consumer of time. Past research concentrated on reducing the number of $GF(3^m)$ multiplications, for instance via the Karatsuba method. In this article, we propose a new method to reduce the number of online precomputations in $GF(3^m)$ multiplications for the $\eta_T$
pairing. The proposed algorithm reduces the 18 online precomputations in
$GF(3^{6m})$ in the $\eta_T$ pairing to 4 online precomputations by reusing the intermediate products obtained in precomputation. We implement the proposed algorithm and compare the time taken by the proposed algorithm with that of previous work. Our algorithm offers a 40% performance increase for $GF(3^m)$ multiplications in $GF(3^{6m})$ on an AMD 64-bit processor. Additionally, a completely new finding is obtained: the results show that reducing the number of multiplications in $GF(3^{6m})$ does not necessarily lead to a speed-up of the $\eta_T$ pairing calculation.
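To fix ideas, here is a toy version of the kind of arithmetic being optimized: $GF(3)[x]$ polynomial multiplication with one level of Karatsuba (plain Python, coefficients as lists mod 3; this is the textbook method, not the authors' precomputation-reuse algorithm):

    def add(a, b):
        n = max(len(a), len(b))
        a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
        return [(x + y) % 3 for x, y in zip(a, b)]

    def sub(a, b):
        n = max(len(a), len(b))
        a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
        return [(x - y) % 3 for x, y in zip(a, b)]

    def mul(a, b):
        if len(a) <= 8 or len(b) <= 8:        # schoolbook base case
            res = [0] * (len(a) + len(b) - 1)
            for i, x in enumerate(a):
                for j, y in enumerate(b):
                    res[i + j] = (res[i + j] + x * y) % 3
            return res
        h = min(len(a), len(b)) // 2          # split a = a0 + a1 x^h, b likewise
        a0, a1, b0, b1 = a[:h], a[h:], b[:h], b[h:]
        p0, p2 = mul(a0, b0), mul(a1, b1)     # Karatsuba: 3 products, not 4
        p1 = sub(mul(add(a0, a1), add(b0, b1)), add(p0, p2))
        res = [0] * (len(a) + len(b) - 1)
        for off, part in ((0, p0), (h, p1), (2 * h, p2)):
            for i, c in enumerate(part):
                res[i + off] = (res[i + off] + c) % 3
        return res

    assert mul([1, 2], [2, 1]) == [2, 2, 2]   # (1+2x)(2+x) = 2+2x+2x^2 over GF(3)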
Construction of Universal Designated-Verifier Signatures and Identity-Based Signatures from Standard Signatures
We give a generic construction for universal designated-verifier signature schemes from a large class, C, of signature schemes. The resulting schemes are efficient and have two important properties. Firstly, they are provably DV-unforgeable, non-transferable and also non-delegatable. Secondly, the signer and the designated verifier can independently choose their cryptographic settings. We also propose a generic construction for identity-based signature schemes from any signature scheme in C and prove that the construction is secure against adaptive chosen message and identity attacks. We discuss possible extensions of our constructions to hierarchical identity-based signatures, identity-based universal designated verifier signatures, and identity-based ring signatures from any signature in C.
Verifiable Attribute-based Encryption
In this paper, we construct two verifiable attribute-based encryption (VABE) schemes: one with a single authority, and the other with multiple authorities. Not only are our schemes proved secure, as the previous ABE schemes were, they also provide a verification property. Adding the verification property has a few advantages. First, it allows the user to immediately check the correctness of the keys; if a check fails, the user only needs the authority to resend the corresponding shares. In particular, in the multi-authority case, if the key does not pass the check, the user only needs to ask the particular authority to resend its own part, without going to all the authorities; this saves a lot of time when errors appear. Second, if the keys pass the verification but the user still cannot correctly decrypt the message, something might be wrong with the attributes or ciphertexts, and the user has to contact the encryptor. We formalize the notion of VABE and prove our schemes secure in our model.
Guarantees for Customers of Incentive Anonymizing Networks
We raise and propose solutions to the problem of guaranteeing that a user of incentive remailing services for anonymization cannot lose money if he does not get full service, i.e., if his message does not reach its destination. Applications such as voting over the Internet or reviewing of articles require anonymous delivery of messages. An anonymizing technique was proposed several decades ago by Chaum and is based on a group of volunteer agents called a {\em mixnet}. However, mixnets are not yet widely known and used today, and one often mentioned reason is the lack of incentives for volunteers. A recently proposed solution is based on adding digital coins to messages, such that each volunteer can extract only the digital coin designated as a payment for her. However, registered volunteers can sabotage the system by extracting and using their coins without performing their task --- which consists of forwarding anonymized messages. The main improvement we propose is to guarantee that no money is lost by the user unless his message reaches its destination. This is an essential property for a viable service. The solutions described are based on handshaking mechanisms where each volunteer gets her payment (or the key to decrypt the payment) from the agent to which she is expected to forward the message, or from the destination using a public board or a reply message. This ensures that a volunteer gets her financial support only if she fulfills her task. We discuss how techniques for non-repudiation of receipt of a message, together with reputation systems, can address the remaining problems.
Practical Anonymous Divisible E-Cash From Bounded Accumulators
We present an efficient off-line divisible e-cash scheme which is
\emph{truly anonymous} without a trusted third party. This is the
second scheme in the literature which achieves full unlinkability
and anonymity, after the seminal work proposed by Canard and Gouget.
The main trick of our scheme is the use of a bounded accumulator in
combination with the classical binary tree approach.
The aims of this paper are twofold. Firstly, we analyze Canard and
Gouget's seminal work on the efficient off-line divisible e-cash. We
point out some subtleties on the parameters generation of their
scheme. Moreover, spending a coin of small value requires
computation of several hundreds of multi-based exponentiations,
which is very costly. In short, although this seminal work provides
a new approach of achieving a truly anonymous divisible e-cash,
unfortunately it is rather impractical. Secondly, we present our
scheme that uses a novel approach of incorporating a bounded
accumulator. In terms of time and space complexities, our scheme is
$50$ to $100$ times more efficient than Canard and Gouget's work in
the spend protocol, at the cost of a $10$ to $500$ times less
efficient withdrawal protocol (the large range is due to whether
pre-processing is taken into account and to the probabilistic nature
of our withdrawal protocol). We believe this trade-off between the
withdrawal protocol and the spend protocol is reasonable, as the
former protocol is executed much less frequently than the
latter. Nonetheless, while their scheme provides an affirmative
answer to whether divisible e-cash can be \emph{truly anonymous},
our result puts it a step further and we show that truly anonymous
divisible e-cash can be \emph{practical}.
Saving Private Randomness in One-Way Functions and Pseudorandom Generators
Can a one-way function $f$ on $n$ input bits be used
with fewer than $n$ bits while retaining comparable hardness of
inversion? We show that the answer to this fundamental question is
negative, if one is limited to black-box reductions.
Instead, we ask whether one can save on secret random bits
at the expense of more public random bits.
Using a shorter secret input is highly desirable, not only
because it saves resources, but also because it can yield
tighter reductions from higher-level primitives to one-way functions.
Our first main result
shows that if the number of output elements of $f$ is at most
$2^k$, then a simple construction using pairwise-independent hash
functions results in a new one-way function that uses
only $k$ secret bits. We also demonstrate that it is not the knowledge of the security of $f$, but rather of its structure, that enables the savings: a black-box reduction cannot, for a general $f$, reduce the secret-input length, even given the knowledge that the security of $f$ is only $2^{-k}$; nor can a black-box reduction use fewer than $k$ secret input bits when $f$ has $2^k$ distinct outputs.
Our second main result is an application of the public-randomness approach: we show
a construction of a pseudorandom generator based on any
regular one-way function with output range of known size $2^k$. The construction requires a seed of only $2n + O(k \log k)$ bits
(as opposed to $O(n \log n)$ in previous constructions); the savings
come from the reusability of public randomness.
The secret part of the seed is of length only $k$
(as opposed to $n$ in previous constructions), less than the length of
the one-way function input.
Comparing Implementation Efficiency of Ordinary and Squared Pairings
In this paper, we will implement a standard probabilistic method of computing bilinear pairings. We will compare its performance to a deterministic algorithm introduced in [5] to compute the squared Tate/Weil pairings, which are claimed to be 20 percent faster than the standard method. All pairings will be evaluated over pairing-friendly ordinary elliptic curves of embedding degrees 8 and 10 and a supersingular curve of embedding degree 6. For these curves, we can make the algorithms that compute both the ordinary Weil and Tate pairings deterministic, and we apply optimizations to improve them. We will show that the evaluation of the squared Weil pairing is, indeed, faster than that of the ordinary Weil pairing even with optimizations. However, evaluation of the squared Tate pairing is not faster than that of the ordinary Tate pairing over the curves that we used when optimizations are applied.
Last updated: 2008-03-16
Precise Zero-Knowledge in Concurrent Setting
We present a stronger notion of zero-knowledge: precise concurrent
zero-knowledge. Our notion captures the idea that the view of any
verifier in concurrent interaction can be reconstructed in almost
the same time (within a constant/polynomial factor). Precise
zero-knowledge in the stand-alone setting was introduced by Micali and
Pass in STOC'06 (the original work used the term ``local
zero-knowledge''). Their notion shows that the view of any verifier
can be reconstructed in almost the same time in the stand-alone setting.
Hence our notion is the generalization of their notion to the concurrent
setting.
Furthermore, we propose an $\omega(\log^2 n)$-round concurrent
zero-knowledge argument for ${\rm{NP}}$ with linear precision, which
shows that the view of any verifier in concurrent interaction can be
reconstructed by the simulator with linear-time overhead. Our
argument is of the Feige-Lapidot-Shamir type, consisting of a
proof-preamble and a proof-body for a modified NP statement. Our
result assumes a restriction on the adversarial scheduling of the
communication: the concurrent interaction of the preambles of all
sessions is scheduled by the adversarial verifier before any
proof-body.
Analysis and optimization of elliptic-curve single-scalar multiplication
Let $P$ be a point on an elliptic curve over a finite field of large characteristic.
Exactly how many points $2P,3P,5P,7P,9P,\ldots,mP$
should be precomputed in a sliding-window computation of $nP$?
Should some or all of the points be converted to affine form,
and at which moments during the precomputation should these conversions take place?
Exactly how many field multiplications are required for the resulting computation of $nP$?
The answers depend on the size of $n$,
the inversion/multiplication cost ratio,
the choice of curve shape,
the choice of coordinate system,
and the choice of addition formulas.
This paper presents answers that, compared to previous analyses,
are more carefully optimized and cover a much wider range of situations.
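A bare-bones sketch of the sliding-window method under analysis (Python over a stand-in group, with integer addition playing the role of point addition; curve formulas and affine/projective conversions are exactly what the paper's analysis optimizes):

    def scalar_mult(n, P, add, dbl, w=4):
        # Precompute odd multiples P, 3P, 5P, ..., (2^w - 1)P.
        P2 = dbl(P)
        odd = {1: P}
        for k in range(3, 1 << w, 2):
            odd[k] = add(odd[k - 2], P2)
        Q = None                                   # None plays the identity element
        i = n.bit_length() - 1
        while i >= 0:
            if (n >> i) & 1 == 0:
                Q = dbl(Q) if Q is not None else None
                i -= 1
            else:
                j = max(0, i - w + 1)              # widest window ending in a 1-bit
                while (n >> j) & 1 == 0:
                    j += 1
                val = (n >> j) & ((1 << (i - j + 1)) - 1)   # odd window value
                for _ in range(i - j + 1):
                    Q = dbl(Q) if Q is not None else None
                Q = odd[val] if Q is None else add(Q, odd[val])
                i = j - 1
        return Q

    # Sanity check with integers standing in for curve points:
    assert scalar_mult(1234567, 5, lambda a, b: a + b, lambda a: 2 * a) == 1234567 * 5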
Efficient Certificateless Signatures Suitable for Aggregation
This technical report describes a novel certificateless signature scheme suitable for aggregation that requires no pairing computations for signing and only 3 pairing computations for signature verification. We provide proofs for the security of single and aggregate signatures.
On the Relations Between Non-Interactive Key Distribution, Identity-Based Encryption and Trapdoor Discrete Log Groups
We investigate the relationships between identity-based non-interactive key distribution and identity-based encryption. We provide constructions for these schemes that make use of general trapdoor discrete log groups. We then investigate the schemes that result in two concrete settings, obtaining new, provably secure, near-practical identity-based encryption schemes.
Constructing Brezing-Weng pairing friendly elliptic curves using elements in the cyclotomic field
We describe a new method for constructing Brezing-Weng-like pairing-friendly elliptic curves. The new construction uses the minimal polynomials of elements in a cyclotomic field. Using this new construction we present new ``record breaking'' families of pairing-friendly curves with embedding degrees of $k \in \{16,18,36,40\}$, and
some interesting new constructions for the cases $k \in \{8,32\}$.
Precise Concurrent Zero Knowledge
\emph{Precise zero knowledge} introduced by Micali and Pass
(STOC'06) guarantees that the view of any verifier $V$ can be
simulated in time closely related to the \emph{actual} (as opposed
to worst-case) time spent by $V$ in the generated view. We provide
the first constructions of precise concurrent zero-knowledge
protocols. Our constructions have essentially optimal precision;
consequently this improves also upon the previously tightest
non-precise concurrent zero-knowledge protocols by Kilian and
Petrank (STOC'01) and Prabhakaran, Rosen and Sahai (FOCS'02) whose
simulators have a quadratic worst-case overhead.
Additionally, we achieve a statistically-precise concurrent
zero-knowledge property---which requires simulation of unbounded verifiers participating in an unbounded number of concurrent executions;
as such we obtain the first (even non-precise)
concurrent zero-knowledge protocols which handle verifiers
participating in a super-polynomial number of concurrent executions.
Short Group Signature without Random Oracles
We construct a short group signature which is proven secure without random oracles. By making certain reasonable assumptions and applying the technique of non-interactive proof systems, we prove that our scheme achieves full anonymity and full traceability. Compared with other related works, such as BW06 and BW07, ours is more practical due to the short size of both the public key and the group signature.
Generic Attacks on Unbalanced Feistel Schemes with Expanding Functions
Unbalanced Feistel schemes with expanding functions are used to construct pseudo-random permutations from $kn$ bits to $kn$ bits by using random functions from $n$ bits to $(k-1)n$ bits. At each round, all the bits except $n$ bits are changed by using a function that depends only on these $n$ bits. C. S. Jutla \cite{Jut} investigated such schemes, which he denotes by $F^d_k$, where $d$ is the number of rounds.
In this paper, we describe novel Known Plaintext Attacks (KPA) and Non-Adaptive Chosen Plaintext Attacks (CPA-1) against these schemes. With these attacks we are often able to improve the results of C. S. Jutla. We also give precise formulas for the complexity of our attacks in terms of $d$, $k$ and $n$.
Generalized Correlation and Higher Order Nonlinearity for Probabilistic Algebraic Attacks
Algebraic attacks are a relatively new and interesting subject in cryptanalysis. They were introduced in [1], where several possible attack scenarios were given; most attention there was paid to the deterministic scenarios. In this paper, probabilistic scenarios are studied. The concepts of conditional correlation and partial higher order nonlinearity of a Boolean function are introduced (briefly, the definition of conditional correlation: $C(g,f|f = a) := \Pr (g = f|f = a) - \Pr (g \ne f|f = a)$). It is shown that both types of scenarios can be seen as one unified attack, a higher order correlation attack that uses conditional correlation. Clear criteria for the vulnerability of a Boolean function to both types of scenarios are given. Accordingly, the notion of algebraic immunity is extended.
Some functions are very vulnerable to the probabilistic scenario. Calculations show that if a function with a very low partial higher order nonlinearity were used in a cipher like SFINKS [8], a simple attack would require only about $2^{42}$ operations and $32$Kb of keystream. The question about the relation between partial higher order nonlinearity and algebraic immunity remains open.
Weak adaptive chosen ciphertext secure hybrid encryption scheme
We propose a security notion called weak adaptive chosen ciphertext security (IND-WCCA)
for hybrid encryption schemes. Although it is weaker than adaptive chosen ciphertext
security (IND-CCA), an IND-WCCA secure hybrid encryption scheme can be used in any
situation in which an IND-CCA secure hybrid encryption scheme is used. We show that an IND-WCCA
secure hybrid encryption scheme can be constructed from an IND-CCA secure KEM and an IND-PA
secure DEM. Since IND-PA is the basic requirement on symmetric key encryption schemes,
IND-WCCA hybrid encryption schemes are very flexible and can use most stream ciphers
and block ciphers as the DEM part of the scheme. Using the new security notion, we can refine
current IND-CCA secure hybrid encryption schemes and get more efficient IND-WCCA secure
hybrid encryption schemes.
A Lattice-Based Computationally-Efficient Private Information Retrieval Protocol
A PIR scheme is a scheme that allows a user to get an element of a database without giving any information about which part of the database he is interested in.
In this paper we present a lattice-based PIR scheme, using an NTRU-like approach, in which the computational cost is a few thousand bit-operations per bit in the database. This improves the protocol computational performance by two orders of magnitude when compared to existing approaches. Our scheme has worse communication performance than other existing protocols, but we show that practical usability of PIR schemes is not as dependent on communication performance as the literature suggests, and that a trade-off between communication and computation leads to much more versatile schemes.
Proposal of a new efficient public key system for encryption and digital signatures
In this paper a new efficient public key cryptosystem usable for both
encryption and digital signatures is presented.
Due to its simple structure this public key cipher can be implemented easily in any software or hardware device, making the cryptosystem available in circumstances where the implementation of an alternative like RSA, ElGamal/Diffie-Hellman, etc. is too complicated.
Furthermore, the construction from the closest and shortest vector problems using a new homomorphic "almost" linear one-way function not only gives strong evidence of the cipher's security, but may also be the basis for a new class of "error-prone" cryptographic primitives based on lattice problems. Therefore this cipher and its construction is a good alternative to cryptosystems based on the integer factorization problem or the discrete logarithm, and might be a basis for secure "error-prone" applications like biometrics or image watermarking.
Tight bounds between algebraic immunity and nonlinearities of high orders
Among the cryptographically significant characteristics of Boolean functions used
in symmetric ciphers, the algebraic immunity and the nonlinearities of
high orders play an important role. Some bounds on the nonlinearities of
high orders of Boolean functions via their algebraic immunity
were obtained in recent papers.
In this paper we improve these results and obtain new tight bounds.
We prove a new universal tight lower bound that reduces the problem
of estimating high order nonlinearities to the problem of finding the dimensions of certain
linear spaces of Boolean functions. As simple consequences we obtain all previously known
bounds in this field. For polynomials with disjoint terms
we reduce finding the dimensions
of the linear spaces of Boolean functions mentioned above to a simple combinatorial
analysis. Finally, we prove a tight lower bound on
the second order nonlinearity of a function via its algebraic immunity.
Template Attacks with a Power Model
This article analyses some properties of the \emph{template attack}.
Examples come from attacks against an unprotected ASIC implementation of DES.
Principal component analysis (PCA) is used to represent the templates in two dimensions.
We give a physical interpretation of the templates' PCA eigenvalues and
eigenvectors.
We show that the S-boxes are \emph{not} the target of template attacks.
We point out that the efficiency of template attacks on unprotected
implementations can be unleashed by using a power model.
The most suitable power model happens to be linked to the key schedule.
This casts a new light on key schedule requirements for SCA resistance
against a ``template'' attacker.
The results are tailored for DES, because this symmetric block cipher is emblematic and is still promised a long life.
Its key schedule is also remarkably simple, with cryptanalytic weaknesses that paradoxically turn out to be a strength against SCA.
Another Look at Non-Standard Discrete Log and Diffie-Hellman Problems
We examine several versions of the one-more-discrete-log and
one-more-Diffie-Hellman problems. In attempting to evaluate
their intractability, we find conflicting evidence of the
relative hardness of the different problems. Much of this
evidence comes from natural families of groups associated with
curves of genus 2, 3, 4, 5, and 6. This leads to questions
about how to interpret reductionist security arguments that
rely on these non-standard problems.
Faster Group Operations on Elliptic Curves
This paper improves implementation techniques of Elliptic Curve Cryptography. We introduce new formulae and algorithms for the group law on Jacobi quartic, Jacobi intersection, Edwards, and Hessian curves. The proposed formulae and algorithms can save time in suitable point representations. To support our claims, a cost comparison is made with classic scalar multiplication algorithms using previous and current operation counts. Most notably, the best speedup is obtained in the case of Jacobi quartic curves which also lead to one of the most efficient scalar multiplications benefiting from the proposed 2M + 5S + 1D (i.e. 2 multiplications, 5 squarings, and 1 multiplication by a curve constant) point doubling and 7M + 3S + 1D point addition algorithms. Furthermore, the new addition algorithm provides an efficient way to protect against side channel attacks which are based on simple power analysis (SPA).
An Improved Remote User Authentication Scheme using Bilinear Pairings
In 2005, Das et al. [5] proposed a remote user authentication scheme using bilinear pairings. Fang and Huang [7] analyzed the scheme, pointed out some weaknesses, and proposed an improvement. Recently, Giri and Srivastava [9] observed that the improved scheme is still vulnerable to off-line attacks and proposed a further improvement. However, that improved scheme is still insecure. In this paper, we show some weaknesses in the existing scheme and propose an improvement. The proposed scheme also enables users to choose and change their password without the help of the remote server.
Multiparty Key Agreement Using Bilinear Map
A key agreement protocol is a cryptographic primitive which allows participants to share a common secret key via an insecure channel. In particular, a multiparty key agreement protocol is a key agreement protocol that can manage an arbitrary number of participants at once. From the security point of view, authentication and forward secrecy are the most important requirements in such protocols. One interesting problem in key agreement protocols is to construct a multiparty key agreement protocol satisfying the above security requirements with a minimal number of communication rounds (i.e., one round). In the literature, there has been no one-round multiparty key agreement protocol that satisfies both authentication and forward secrecy.
In this paper, we present a new multiparty key agreement protocol using bilinear map and adopting the key generation center. The protocol demands only one round for arbitrary number of participants to share a group key and satisfies both authentication and (partial) forward secrecy.
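The classical starting point that such one-round protocols generalize is Joux's (unauthenticated) tripartite key agreement: $A$, $B$ and $C$ broadcast $aP$, $bP$ and $cP$ in a single round, and by bilinearity each party computes the same key
\[
K = e(bP, cP)^{a} = e(aP, cP)^{b} = e(aP, bP)^{c} = e(P, P)^{abc}.
\]
The protocol above extends this idea to an arbitrary number of participants and adds authentication via the key generation center.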
Ordered Multisignatures and Identity-Based Sequential Aggregate Signatures, with Applications to Secure Routing
We construct two new multiparty digital signature schemes that allow multiple signers to sequentially produce a compact, fixed-length signature. First, we introduce a new primitive that we call \emph{ordered multisignatures} (OMS), which allows signers to attest to a common message as well as the order in which they signed. Our OMS construction substantially improves computational efficiency and scalability over any existing scheme with suitable functionality. Second, we design a new identity-based sequential aggregate signature scheme, where signers can attest to different messages and signature verification does not require knowledge of traditional public keys. The latter property permits savings on bandwidth and storage as compared to public-key solutions. In contrast to the only prior scheme to provide this functionality, ours offers improved security that does not rely on synchronized clocks or a trusted first signer. We provide formal security definitions and support the proposed schemes with security proofs under appropriate computational assumptions. We focus on potential applications of our schemes to secure network routing, but we believe they will find many other applications as well.
Reconfigurable Hardware Implementations of Tweakable Enciphering Schemes
Tweakable enciphering schemes are length preserving block cipher
modes of operation that provide a strong pseudo-random permutation.
It has been suggested that these schemes can be used as the main
building blocks for achieving in-place disk encryption. In the past
few years there has been an intense research activity towards
constructing secure and efficient tweakable enciphering schemes.
But, actual experimental performance data of these newly proposed
schemes are yet to be reported. Accordingly, in this paper we
present optimized FPGA implementations of five tweakable enciphering
schemes, namely, HCH, HCTR, XCB, EME and TET, using a 128-bit AES
core as the underlying block cipher. We report performance timings
of these modes when using both pipelined and sequential AES
structures. The universal polynomial hash function included in the
specification of HCH, HCHfp (a variant of HCH), HCTR, XCB and TET,
was implemented using a Karatsuba-Ofman multiplier as the main
building block. We provide detailed analyses of each of the schemes
and their experimental performances achieved in various scenarios.
Our experiments show that a sequential AES core is not an attractive
option for the design of these modes as it leads to rather poor
throughputs. In contrast, by using an encryption/decryption
pipelined AES core we get a throughput of 3.67 Gbps for HCTR, and by
using an encryption-only pipelined AES core we get a throughput of
5.71 Gbps for EME. The performance results reported in this paper
provide experimental evidence that hardware implementations of
tweakable enciphering schemes can actually match and even outperform
the data rates achieved by state-of-the-art disk controllers,
thus showing that they might be used for achieving provably secure
in-place hard disk encryption.
New Attacks on the Stream Cipher TPy6 and Design of New Ciphers TPy6-A and TPy6-B
The stream ciphers Py, Pypy and Py6 were designed by Biham and Seberry for the
ECRYPT-eSTREAM project in 2005. The ciphers were promoted to the `Focus' ciphers of the
Phase II of the eSTREAM project. However, due to some cryptanalytic results on the
ciphers, strengthened versions of the ciphers, namely TPy, TPypy and TPy6, were built. So
far no attacks on TPy6 exist. In this paper, we find hitherto unknown weaknesses in
the keystream generation algorithms of the Py6 and of its stronger variant TPy6. Exploiting
these weaknesses, a large number of distinguishing attacks are mounted on the ciphers, the
best of which works with $2^{224.6}$ data and comparable time. In the second part, we present two new
ciphers derived from TPy6, namely TPy6-A and TPy6-B, whose performances are 2.65
cycles/byte and 4.4 cycles/byte on Pentium III. As a result, to the best of our knowledge,
on Pentium platforms TPy6-A becomes the fastest stream cipher in the literature. Based on our security analysis,
we conjecture that no attacks better than brute force are possible on the ciphers TPy6-A and
TPy6-B.
Irreducibility to the One-More Evaluation Problems: More May Be Less
For a random-self-reducible function, the evaluation problem is
irreducible to the one-more evaluation problem, in the following
sense. An irreduction algorithm exists that, given a reduction
algorithm from the evaluation to the one-more evaluation problem,
solves a separator problem: the evaluation problem itself. Another
irreduction shows that if the computational Diffie-Hellman problem
is reduced to the gap Diffie-Hellman problem, then the decision
Diffie-Hellman problem is easy. Irreductions are primarily of
theoretical interest, because they do not actually prove
inequivalence between problems. What these irreductions suggest,
though, is that one-more variants of the RSA and discrete logarithm
problems may be easier than the standard variants, and that the gap
Diffie-Hellman problem may be easier than the standard
Diffie-Hellman problem.
Computing the Ate Pairing on Elliptic Curves with Embedding Degree $k=9$
For the AES-128 security level there are several natural choices of
pairing-friendly elliptic curves. In particular, as we will explain,
one might choose curves with $k=9$ or curves with $k=12$. The case
$k=9$ has not been studied in the literature, and so it is not clear
how efficiently pairings can be computed in that case. In this
paper, we present efficient methods for the $k=9$ case, including
generation of elliptic curves with a shorter Miller loop,
denominator elimination and a speed-up of the final exponentiation.
We then compare the performance of these choices. From the analysis, we conclude
that for pairing-based cryptography at the AES-128 security level,
the Barreto-Naehrig curves are the most efficient choice, and the
performance of the case $k=9$ is comparable to that of the Barreto-Naehrig
curves.
An (Almost) Constant-Effort Solution-Verification Proof-of-Work Protocol based on Merkle Trees
Proof-of-work schemes are economic measures to deter denial-of-service attacks: service requesters compute moderately hard functions that are easy to check by the provider. We present such a new scheme for solution-verification protocols. Although most schemes to date are probabilistic unbounded iterative processes with high variance of the requester effort, our Merkle tree scheme is deterministic, with an almost constant effort and null variance, and is computation-optimal.
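A minimal sketch of the underlying data structure (Python with hashlib; the proof-of-work protocol built on top, i.e., which leaves and authentication paths the requester must produce, is not reproduced here):

    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [H(leaf) for leaf in leaves]       # assumes len(leaves) is a power of two
        while len(level) > 1:
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_path(leaves, idx):
        # Sibling hashes from leaf idx up to the root.
        level = [H(leaf) for leaf in leaves]
        path = []
        while len(level) > 1:
            path.append(level[idx ^ 1])            # sibling at this level
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            idx //= 2
        return path

    def verify_path(leaf, idx, path, root):
        node = H(leaf)
        for sib in path:
            node = H(sib + node) if idx & 1 else H(node + sib)
            idx //= 2
        return node == root

    leaves = [bytes([i]) * 8 for i in range(8)]
    root = merkle_root(leaves)
    assert verify_path(leaves[3], 3, merkle_path(leaves, 3), root)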
Trapdoors for Hard Lattices and New Cryptographic Constructions
We show how to construct a variety of ``trapdoor'' cryptographic tools
assuming the worst-case hardness of standard lattice problems (such as
approximating the length of the shortest nonzero vector to within
certain polynomial factors). Our contributions include a new notion
of \emph{preimage sampleable} functions, simple and efficient
``hash-and-sign'' digital signature schemes, and identity-based
encryption.
A core technical component of our constructions is an efficient
algorithm that, given a basis of an arbitrary lattice, samples lattice
points from a \emph{discrete Gaussian} probability distribution whose
standard deviation is essentially the length of the longest
Gram-Schmidt vector of the basis. A crucial security property is that
the output distribution of the algorithm is oblivious to the
particular geometry of the given basis.
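The one-dimensional building block of such a sampler can be sketched by rejection sampling (toy Python; the paper's algorithm composes samples of this kind along the Gram-Schmidt directions of the basis):

    import math, random

    def sample_z(center: float, s: float) -> int:
        # Sample z in Z with probability proportional to exp(-pi (z - center)^2 / s^2).
        tau = 12                                   # tail cut, in multiples of s
        lo, hi = math.floor(center - tau * s), math.ceil(center + tau * s)
        while True:
            z = random.randint(lo, hi)             # uniform proposal on the support
            rho = math.exp(-math.pi * (z - center) ** 2 / (s * s))
            if random.random() < rho:              # accept with the Gaussian weight
                return z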
Notions of Efficiency in Simulation Paradigm
There are some well-known conceptual and technical issues related to a common setting of the simulation paradigm, i.e., EPT (expected polynomial time) simulators versus SPT (strict polynomial time) adversaries. In fact, it has been shown that this setting is essential for achieving constant-round black-box zero-knowledge protocols. Many suggestions and results have been proposed to deal with these issues. In this paper, we propose an alternative solution. We study a new class of machines, MPT (Markov polynomial time), which is a cryptographic adaptation of Levin's average polynomial time. Since MPT has good compatibility with SPT and intuitive composition properties, we can use it as a drop-in replacement for SPT. Moreover, it is easy to construct simulators in MPT.
Cryptanalysis of LASH
We show that the LASH-$x$ hash function is vulnerable to attacks
that trade time for memory, including collision attacks as
fast as $2^{\frac{4}{11}x}$ and preimage attacks as fast as $2^{\frac{4}{7}x}$.
Moreover, we describe heuristic lattice based collision attacks that
use small memory but require very long messages.
Based upon experiments, the lattice attacks are expected to find
collisions much faster than $2^{x/2}$.
All of these attacks exploit the designers' choice of an all zero IV.
We then consider whether LASH can be patched simply by changing the IV.
In this case, we show that LASH is vulnerable to a $2^{\frac{7}{8}x}$
preimage attack. We also show that LASH is trivially not a PRF when any subset of input bytes is used as a secret key.
None of our attacks depend upon the particular contents of the LASH matrix -- we only assume that the distribution of elements is more or less uniform.
Additionally, we show a generalized birthday attack
on the final compression of LASH which requires
$O\left(x2^{\frac{x}{2(1+\frac{107}{105})}}\right) \approx O(x2^{x/4})$ time and memory.
Our method extends the Wagner algorithm to
truncated sums, as is done in the final transform in LASH.
On compressible pairings and their computation
In this paper we provide explicit formul\ae\ to compute bilinear pairings in compressed form. We indicate families of curves where the proposed compressed computation method can be applied and where particularly generalized versions of the Eta and Ate pairings due to Zhao \emph{et al.} are especially efficient. Our approach introduces more flexibility when trading off computation speed and memory requirement. Furthermore, compressed computation of reduced pairings can be done without any finite field inversions. We also give a performance evaluation and compare the new method with conventional pairing algorithms.
Isogenies and the Discrete Logarithm Problem on Jacobians of Genus 3 Hyperelliptic Curves
We describe the use of explicit isogenies
to reduce Discrete Logarithm Problems (DLPs)
on Jacobians of hyperelliptic genus~$3$ curves
to Jacobians of non-hyperelliptic genus~$3$ curves,
which are vulnerable to faster index calculus attacks.
We provide algorithms which compute an isogeny
with kernel isomorphic to $(\mathbb{Z}/2\mathbb{Z})^3$
for any hyperelliptic genus~$3$ curve.
These algorithms provide a rational isogeny
for a positive fraction of all hyperelliptic genus~$3$ curves
defined over a finite field of characteristic $p > 3$.
Subject to reasonable assumptions,
our algorithms provide an explicit and efficient
reduction from hyperelliptic DLPs to non-hyperelliptic DLPs
for around $18.57\%$ of all hyperelliptic genus~$3$ curves
over a given finite field.
Idempotents in the Neighbourhood of Patterson-Wiedemann Functions having Walsh Spectra Zeros
In this paper we study the neighbourhood of $15$-variable Patterson-Wiedemann (PW) functions, i.e., the functions that differ by a small Hamming distance from the PW functions in terms of truth table representation. We exploit the idempotent structure of the PW functions and interpret them as Rotation Symmetric Boolean Functions (RSBFs). We present techniques to modify these RSBFs to introduce zeros in the Walsh spectra of the modified functions with minimum reduction in nonlinearity. Our technique demonstrates 15-variable balanced and $1$-resilient functions with the currently best known nonlinearities 16272 and 16264, respectively. In the process, we find functions for which the autocorrelation spectra and algebraic immunity parameters are the best known to date.
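The quantities being tuned here are computable directly from a truth table; a small Python sketch of the Walsh spectrum (via the fast Walsh-Hadamard transform) and the derived nonlinearity:

    def walsh_spectrum(tt):
        # tt: truth table of f as a 0/1 list of length 2^n; returns all W_f(a).
        w = [1 - 2 * b for b in tt]            # (-1)^{f(x)}
        h = 1
        while h < len(w):                      # in-place fast Walsh-Hadamard transform
            for i in range(0, len(w), 2 * h):
                for j in range(i, i + h):
                    u, v = w[j], w[j + h]
                    w[j], w[j + h] = u + v, u - v
            h *= 2
        return w

    def nonlinearity(tt):
        # nl(f) = 2^{n-1} - (1/2) max_a |W_f(a)|
        return len(tt) // 2 - max(abs(c) for c in walsh_spectrum(tt)) // 2

    # Example: the 3-variable majority function, which has nonlinearity 2.
    maj = [int(bin(x).count("1") >= 2) for x in range(8)]
    assert nonlinearity(maj) == 2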
Implementing Cryptographic Pairings over Curves of Embedding Degrees 8 and 10
In this paper, we will describe efficient implementations of
the Tate and Ate pairings over ordinary elliptic curves of embedding degrees
8 and 10. We will discuss the possible curve-dependent optimizations
that can be applied to evaluate the pairings. We pay particular
attention to the use of elliptic curve twists and the denominator elimination
method to make computations more efficient. Our main goal is
to draw together the best possible optimizations that can be used to
efficiently evaluate the Tate and the Ate pairings on both curves,
and to give timings and an appropriate interpretation of the running
times of our programs for both curves. To come to an
adequate conclusion, we will compare the performance of our chosen
curves to that of a previously studied curve of embedding degree 12.
On prime-order elliptic curves with embedding degrees k=3,4 and 6
We further analyze the solutions to the Diophantine equations from which prime-order elliptic curves of embedding degrees $k=3,4$ or $6$ (MNT curves) may be obtained. We give an explicit algorithm to generate such curves. We derive a heuristic lower bound for the number $E(z)$ of MNT curves with $k=6$ and discriminant $D\le z$, and compare this lower bound with experimental data.
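For reference, the MNT parametrizations behind these Diophantine equations are, as usually stated (standard facts about the families, not details from this paper):
\[
k=3:\ p=12\ell^2-1,\ t=-1\pm 6\ell;\qquad
k=4:\ p=\ell^2+\ell+1,\ t=-\ell\ \text{or}\ \ell+1;\qquad
k=6:\ p=4\ell^2+1,\ t=1\pm 2\ell,
\]
where $p$ is the field size and $t$ the Frobenius trace; requiring the group order $p+1-t$ to be prime leads to generalized Pell equations, whose solutions are the curves being counted.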
When e-th Roots Become Easier Than Factoring
We show that computing $e$-th roots modulo $n$ is easier than
factoring $n$ with currently known methods, given subexponential
access to an oracle outputting the roots of numbers of the form
$x_i + c$.
Here $c$ is fixed and $x_i$ denotes small integers of the
attacker's choosing.
Several variants of the attack are presented, with varying
assumptions on the oracle, and goals ranging from selective to
universal forgeries. The computational complexity of the attack
is $L_n(\frac{1}{3}, \sqrt[3]{\frac{32}{9}})$ in most significant
situations, which matches the {\sl
special} number field sieve's ({\sc snfs}) complexity.
This sheds additional light on {\sc rsa}'s malleability in
general and on {\sc rsa}'s resistance to affine forgeries in
particular -- a problem known to be polynomial for $x_i >
\sqrt[3]{n}$, but for which no algorithm faster than factoring
was known before this work.
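For readers unfamiliar with the notation, $L_n(\alpha,c)$ is the standard subexponential complexity function
\[
L_n(\alpha,c)=\exp\left((c+o(1))(\ln n)^{\alpha}(\ln\ln n)^{1-\alpha}\right),
\]
so $L_n(\frac{1}{3},\sqrt[3]{32/9})$ is exactly the running time usually quoted for the special number field sieve.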
Finding Low Weight Polynomial Multiples Using Lattices
The low weight polynomial multiple problem arises in the context of stream cipher
cryptanalysis and of efficient finite field arithmetic, and is believed to be difficult. It can be formulated as follows: given
a polynomial $f \in \F_2[X]$ of degree $d$, and a bound $n$, the task is to find a low weight multiple of
$f$ of degree at most $n$. The best algorithm known so far to
solve this problem is based on a time memory trade-off and runs in time
${\cal O}(n^{ \lceil {(w - 1)}/{2} \rceil})$ using ${\cal O}(n^{ \lceil
{(w - 1)}/{4} \rceil})$ of memory, where $w$ is the estimated minimal weight.
In this paper, we propose a new technique to find low weight multiples using
lattice basis reduction. Our algorithm runs in time ${\cal O}(n^6)$ and uses
${\cal O}(nd)$ of memory. This improves the space needed and
gives a better theoretical time estimate when
$w \geq 12$. Such a situation is plausible when the bound $n$, which represents the
available keystream, is small.
We ran our experiments using the NTL library on some polynomials known from
cryptanalysis, and the results confirm our analysis.
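As a point of comparison, the time-memory trade-off baseline mentioned above is easy to sketch for the smallest interesting weight. The following is a minimal illustration (not this paper's lattice algorithm): it searches for a weight-3 multiple $1 + X^a + X^b$ of $f$ by a birthday-style collision on powers of $X$ modulo $f$, with polynomials over $\F_2$ encoded as Python integers.

    # Minimal birthday-style search for a weight-3 multiple 1 + X^a + X^b of f.
    # Polynomials are Python ints: bit i = coefficient of X^i.

    def poly_mod(p, f, d):
        """Reduce the polynomial p modulo f, where d = deg(f)."""
        while p.bit_length() - 1 >= d:
            p ^= f << (p.bit_length() - 1 - d)
        return p

    def weight3_multiple(f, n):
        """Find (a, b) with 1 + X^a + X^b divisible by f and 0 < a < b <= n."""
        d = f.bit_length() - 1
        seen = {}                                  # X^a mod f  ->  a
        x_pow = 1
        for i in range(1, n + 1):
            x_pow = poly_mod(x_pow << 1, f, d)     # X^i mod f
            # 1 + X^a + X^i = 0 (mod f)  <=>  X^a = 1 + X^i (mod f)
            if x_pow ^ 1 in seen:
                return seen[x_pow ^ 1], i
            seen[x_pow] = i
        return None

    # Example: f = X^4 + X + 1 divides 1 + X + X^4 (itself), so:
    assert weight3_multiple(0b10011, 15) == (1, 4)

The lattice algorithm of the paper replaces this exponential-in-$w$ search with LLL-style basis reduction on an integer lattice built from $f$, which is where the ${\cal O}(n^6)$ time and ${\cal O}(nd)$ memory come from.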
Structural Identity-Based Encryption
In this paper, we introduce the concept of structural identity-based
encryption (SIBE). As in hierarchical identity-based encryption
(HIBE), entities in the system are organized into a hierarchy. An
entity in SIBE can decrypt ciphertexts for all its ancestors. It can
thus be seen as the opposite of HIBE, where an entity can decrypt
ciphertexts for all its descendants.
We formalize the notion and security requirements, propose an
efficient construction and show that our construction is secure
under appropriate assumptions in the random oracle model.
The role of help in Classical and Quantum Zero-Knowledge
We study the role of help in Non-Interactive Zero-Knowledge protocols and its relation to the standard interactive model. In the classical case, we show that help and interaction are equivalent, answering an open question of Ben-Or and Gutfreund (\cite{BG03}). This implies a new complete problem for the class SZK, the Image Intersection Density. For this problem, we also prove a polarization lemma which is stronger than the previously known one.
In the quantum setting, we define the notion of quantum help and show in a more direct way that help and interaction are again equivalent. Moreover, we define quantum Non-Interactive Zero-Knowledge with classical help and prove that
it is equal to the class of languages that have classical honest-Verifier Zero Knowledge protocols secure against quantum Verifiers (\cite{Wat06, HKSZ07}). Last, we provide new complete problems for all these quantum classes.
Similar results were independently discovered by Dragos Florin Ciocan and Salil Vadhan.
A Critical Analysis and Improvement of AACS Drive-Host Authentication
This paper presents a critical analysis of the AACS drive-host authentication scheme. A few weaknesses are identified which could lead to various attacks on the scheme. In particular, we observe that the scheme is susceptible to unknown key-share and man-in-the-middle attacks. Modifications of the scheme are suggested in order to provide better security. A proof of security of the modified scheme is also presented. The modified scheme achieves better efficiency than the original scheme.
Cryptanalysis of the Random Number Generator of the Windows Operating System
The pseudo-random number generator (PRNG) used by the Windows operating system is the most commonly used PRNG. The pseudo-randomness of the output of this generator is crucial for the security of almost any application running in Windows. Nevertheless, its exact algorithm was never published.
We examined the binary code of a distribution of Windows 2000, which is still the second most popular operating system after Windows XP. (This investigation was done without any help from Microsoft.) We
reconstructed, for the first time, the algorithm used by the pseudo-random number generator (namely, the function CryptGenRandom). We analyzed the security of the algorithm and found a non-trivial attack: given the internal state of the generator, the previous state can be computed in $O(2^{23})$ work (this is an attack on the forward-security of the generator, an $O(1)$ attack on backward security is trivial). The attack on forward-security demonstrates that the design of the generator is flawed, since it is well known how to prevent such attacks.
We also analyzed the way in which the generator is run by the operating system, and found that it amplifies the effect of the
attacks: The generator is run in user mode rather than in kernel mode, and therefore it is easy to access its state even without administrator privileges. The initial values of part of the state of the generator are not set explicitly, but rather are defined by whatever values are present on the stack when the generator is
called. Furthermore, each process runs a different copy of the generator, and the state of the generator is refreshed with system-generated entropy only after generating 128 KBytes of output for the process running it. The result of combining this observation with our attack is that learning a single state may reveal 128 KBytes of the past and future output of the generator.
The implication of these findings is that a buffer overflow attack or a similar attack can be used to learn a single state of the generator, which can then be used to predict all random values, such as SSL keys, used by a process in all its past and future operation. This attack is more severe and more efficient than known attacks, in which an attacker can only learn SSL keys if it is controlling the attacked machine at the time the keys are used.
Last updated: 2007-11-07
An Improved Remote User Authentication Scheme with Smart Cards using Bilinear Pairings
Manik et al. [3] proposed a novel remote user authentication scheme using bilinear pairings, and various attacks on it have since been discussed. Fang et al. [15] re-analyzed these schemes, pointed out that the subsequently proposed schemes are insecure, and proposed an improvement. Giri and Srivastava [16] then observed that the improved scheme is still vulnerable to off-line attack and suggested a further improvement on Fang et al.'s scheme. However, their improved scheme remains insecure. In this paper, we discuss these attacks and propose an improvement that provides better security than the previously published schemes.
Algorithms and Arithmetic Operators for Computing the $\eta_T$ Pairing in Characteristic Three
Since their introduction in constructive cryptographic applications,
pairings over (hyper)elliptic curves are at the heart of an ever
increasing number of protocols. Software implementations being rather
slow, the study of hardware architectures became an active research
area.
In this paper, we discuss several algorithms to compute the $\eta_T$
pairing in characteristic three and suggest further improvements.
These algorithms involve addition, multiplication, cubing, inversion,
and sometimes cube root extraction over $\mathbb{F}_{3^m}$. We propose
a hardware accelerator based on a unified arithmetic operator able to
perform the operations required by a given algorithm. We describe the
implementation of a compact coprocessor for the field
$\mathbb{F}_{3^{97}}$ given by $\mathbb{F}_3[x]/(x^{97}+x^{12}+2)$,
which compares favorably with other solutions described in the open
literature.
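To make the arithmetic concrete, here is a toy software model (our own sketch, not the paper's unified hardware operator) of $\mathbb{F}_{3^{97}} = \mathbb{F}_3[x]/(x^{97}+x^{12}+2)$. It shows in particular why cubing is so cheap in characteristic three: it is just the Frobenius map, i.e., coefficient spreading, followed by one reduction.

    # Toy model of F_{3^97} = F_3[x]/(x^97 + x^12 + 2).
    # Elements are lists of 97 coefficients in {0, 1, 2}.

    M = 97

    def reduce_mod(c):
        """Reduce a coefficient list using x^97 = 2*x^12 + 1 (mod 3),
        which follows from x^97 + x^12 + 2 = 0."""
        for i in range(len(c) - 1, M - 1, -1):
            if c[i]:
                c[i - M + 12] = (c[i - M + 12] + 2 * c[i]) % 3
                c[i - M] = (c[i - M] + c[i]) % 3
                c[i] = 0
        return c[:M]

    def mul(a, b):
        """Schoolbook product in F_3[x] followed by one reduction."""
        c = [0] * (2 * M - 1)
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    c[i + j] = (c[i + j] + ai * bj) % 3
        return reduce_mod(c)

    def cube(a):
        """Frobenius: t^3 = t in F_3, so (sum a_i x^i)^3 = sum a_i x^(3i);
        cubing is mere coefficient spreading plus one reduction."""
        c = [0] * (3 * (M - 1) + 1)
        for i, ai in enumerate(a):
            c[3 * i] = ai
        return reduce_mod(c)

A hardware operator can exploit the same structure: cubing and cube root extraction are linear maps over $\mathbb{F}_3$, which is what makes a unified add/multiply/cube operator attractive.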
Compression Function Design Principles Supporting Variable Output Lengths from a Single Small Function
In this paper, we introduce new compression function design principles supporting variable output lengths (multiples of size $n$).
They are based on a function or block cipher with an $n$-bit output size. For a compression function with a $(t+1)n$-bit output size, the maximum advantage in breaking collision resistance, in the random oracle and ideal cipher models, is $O(\frac{t^2q}{2^{tn}}+\frac{q^2}{2^{(t+1)n}})$. For $t=1$ this bound is near-optimal; for $t>1$ it is optimal.
Cryptanalytic Flaws in Oh et al.'s ID-Based Authenticated Key Agreement Protocol
A key agreement protocol is designed for two or more entities to agree upon a shared secret key, which is used to preserve confidentiality and data integrity over an open network. In 2007, Oh et al. proposed an efficient ID-based authenticated key agreement protocol on elliptic curve pairings, which is believed to be able to generate two session keys securely after a protocol execution. However, we discover that their protocol is in fact susceptible to the basic impersonation attack as well as the key compromise impersonation attack. In this paper, we present the imperfections of Oh et al.'s scheme and subsequently we suggest a slight modification to the scheme which would resolve the problems.
Optimizing double-base elliptic-curve single-scalar multiplication
This paper analyzes the best speeds that can be obtained for single-scalar multiplication with variable base point by combining a huge range of options:
– many choices of coordinate systems and formulas for individual group operations, including new formulas for tripling on Edwards curves;
– double-base chains with many different doubling/tripling ratios, including standard base-2 chains as an extreme case;
– many precomputation strategies, going beyond Dimitrov, Imbert, Mishra (Asiacrypt 2005) and Doche and Imbert (Indocrypt 2006).
The analysis takes account of speedups such as S-M tradeoffs and includes recent advances such as inverted Edwards coordinates.
The main conclusions are as follows. Optimized precomputations and triplings save time for single-scalar multiplication in Jacobian coordinates, Hessian curves, and tripling-oriented Doche/Icart/Kohel curves. However, even faster single-scalar multiplication is possible in Jacobi intersections, Edwards curves, extended Jacobi-quartic coordinates, and inverted Edwards coordinates, thanks to extremely fast doublings and additions; there is no evidence that double-base chains are worthwhile for the fastest curves. Inverted Edwards coordinates are the speed leader.
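To illustrate what a double-base chain is, the following sketch (a plain greedy, unsigned expansion in the style of the Dimitrov-Imbert-Mishra algorithm, not the optimized precomputation strategies analyzed in this paper) writes a scalar as a sum of terms $2^a 3^b$; each term then costs the corresponding doublings and triplings plus one point addition in the scalar-multiplication loop.

    def best_le(n):
        """Largest integer of the form 2^a * 3^b that is <= n."""
        best, p3 = 1, 1
        while p3 <= n:
            a = (n // p3).bit_length() - 1     # largest a with p3 * 2^a <= n
            best = max(best, p3 << a)
            p3 *= 3
        return best

    def double_base_expansion(n):
        """Greedy expansion n = sum of terms 2^a * 3^b; each term drives
        a run of doublings/triplings plus one addition."""
        terms = []
        while n > 0:
            t = best_le(n)
            terms.append(t)
            n -= t
        return terms

    assert sum(double_base_expansion(314159)) == 314159

The doubling/tripling ratio of the chain, and hence its cost, depends on which terms the expansion favors, which is exactly the trade-off space the paper explores for each coordinate system.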
Breaking ONE.FIVIUM by AIDA an Algebraic IV Differential Attack
We show how to break TRIVIUM with a setup of 576 (instead of 1152) clock cycles, with an effort of $2^6$ chosen-IV resynchronisations up to cycle 625 for each of the 47 recovered key bits.
Proposing a Master One-Way Function
By making an arbitrary binary string fit as a fixed-size cipher key (via hashing), one can use an arbitrary string $x$ as both plaintext and key to generate a ciphertext $y$, defined as "the crypto square of $x$", while $x$ is the crypto square root of $y$. Extended to higher powers, this formalism allows for a polynomial morphology that combines all one-way function candidates into a single master function which is at least as intractable as its best ingredient one-way function. The master function has some interesting and useful attributes: at-will size for both input and output, controlled forward computational burden, milestone computing, and of course the best practical chance of being one-way.
Cryptanalysis on Improved One-round Lin-Li's Tripartite Key Agreement Protocol
A tripartite authenticated key agreement protocol is designed to allow three entities to establish a shared key and communicate securely over an open network. Recently, in a previous report, we improved the one-round tripartite authenticated key agreement protocol proposed by Lin-Li owing to its vulnerability to a forging attack. However, we have since discovered that both the original Lin-Li scheme and our previous enhancement are vulnerable to the insider replay attack. Moreover, we have realized that both protocols falsely claim the forward secrecy attribute. In this paper, we revise our improvements to secure this protocol against these cryptanalytic attacks while restoring the perfect forward secrecy property.
Inverted Edwards coordinates
Edwards curves have attracted great interest for several reasons.
When curve parameters are chosen properly, the addition formulas use
only $10M+1S$. The formulas are {\it strongly unified}, i.e., work
without change for doublings; even better, they are {\it complete},
i.e., work without change for all inputs. Dedicated doubling formulas
use only $3M+4S$, and dedicated tripling formulas use only $9M+4S$.
This paper introduces {\it inverted Edwards coordinates}. Inverted
Edwards coordinates $(X_1:Y_1:Z_1)$ represent the affine point
$(Z_1/X_1,Z_1/Y_1)$ on an Edwards curve; for comparison, standard
Edwards coordinates $(X_1:Y_1:Z_1)$ represent the affine point
$(X_1/Z_1,Y_1/Z_1)$.
This paper presents addition formulas for inverted Edwards coordinates
using only $9M+1S$. The formulas are not complete but still are
strongly unified. Dedicated doubling formulas use only $3M+4S$, and
dedicated tripling formulas use only $9M+4S$. Inverted Edwards
coordinates thus save $1M$ for each addition, without slowing down
doubling or tripling.
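The following sketch transcribes our reading of the stated $9M+1S$ addition law and checks it against the affine Edwards addition law over a toy prime field; the field and curve parameter below are arbitrary choices for illustration, and exceptional inputs (points with a zero coordinate, degenerate denominators) are ignored.

    # Toy check of the 9M+1S inverted-Edwards addition (a sketch, not code
    # from the paper).  Exceptional cases are ignored.

    p = 8191                     # toy prime, p = 3 (mod 4)
    d = 5                        # hypothetical curve parameter

    def inv(a):
        return pow(a, p - 2, p)

    def add_inverted(P1, P2):
        """(X1:Y1:Z1) + (X2:Y2:Z2), where (X:Y:Z) represents the affine
        point (Z/X, Z/Y) on x^2 + y^2 = 1 + d*x^2*y^2."""
        X1, Y1, Z1 = P1
        X2, Y2, Z2 = P2
        A = Z1 * Z2 % p                           # 1M
        B = d * A * A % p                         # 1S (plus mult. by d)
        C = X1 * X2 % p                           # 1M
        D = Y1 * Y2 % p                           # 1M
        E = C * D % p                             # 1M
        H = (C - D) % p
        I = ((X1 + Y1) * (X2 + Y2) - C - D) % p   # 1M
        return ((E + B) * H % p, (E - B) * I % p, A * H % p * I % p)

    def add_affine(P1, P2):
        x1, y1 = P1
        x2, y2 = P2
        t = d * x1 * x2 % p * y1 % p * y2 % p
        return ((x1 * y2 + x2 * y1) * inv(1 + t) % p,
                (y1 * y2 - x1 * x2) % p * inv(1 - t) % p)

    def point(x):
        """A curve point with abscissa x, if one exists (uses p = 3 mod 4)."""
        v = (1 - x * x) * inv(1 - d * x * x) % p
        y = pow(v, (p + 1) // 4, p)
        return (x, y) if y * y % p == v and y else None

    P = next(q for q in map(point, range(2, p)) if q)
    Q = next(q for q in map(point, range(P[0] + 1, p)) if q)
    Pi, Qi = (inv(P[0]), inv(P[1]), 1), (inv(Q[0]), inv(Q[1]), 1)
    X3, Y3, Z3 = add_inverted(Pi, Qi)
    assert add_affine(P, Q) == (Z3 * inv(X3) % p, Z3 * inv(Y3) % p)

Clearing denominators shows why this works: with $A=Z_1Z_2$, the affine sum has $x_3 = I/(E+B)$ and $y_3 = H/(E-B)$ up to the common factor $A$, which is exactly what the three projective coordinates encode.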
Building a Collision-Resistant Compression Function from Non-Compressing Primitives
We consider how to build an efficient compression function from a small number of random, non-compressing primitives.
Our main goal is to achieve a level of collision resistance as close as possible to the optimal birthday bound. We present a $2n$-to-$n$ bit compression function based on three independent $n$-to-$n$ bit random functions, each called only once. We show that if the three random functions are treated as black boxes, finding collisions requires $\Theta(2^{n/2}/n^c)$ queries for $c\approx 1$. This result remains valid if two of the three random functions are replaced by a fixed-key ideal cipher in Davies-Meyer mode (i.e., $E_K(x)\oplus x$ for a permutation $E_K$). We also give a heuristic, backed by experimental results, suggesting that the security loss is at most four bits for block sizes up to 256 bits.
We believe this is the best result to date on the matter of building
a collision-resistant compression function from non-compressing functions. It also relates to an open question of Black et al. (Eurocrypt'05), who showed that compression functions that invoke a single non-compressing random function cannot suffice.
We also explore the relationship of our problem with that of
doubling the output of a hash function and we show how our
compression function can be used to double the output length
of ideal hashes.
Differential Cryptanalysis of PRESENT
PRESENT was proposed by A. Bogdanov et al. at CHES 2007 for extremely
constrained environments such as RFID tags and sensor networks. In
this paper, we find differential characteristics for $r$ rounds
($5 \leq r \leq 15$) and then give a differential cryptanalysis of
reduced-round variants of PRESENT. We attack 16-round PRESENT using
$2^{64}$ chosen plaintexts, $2^{32}$ 6-bit counters, and $2^{65}$
memory accesses.
Last updated: 2010-05-12
Provably Secure Grouping-proofs for RFID tags
We investigate an application of RFIDs referred to in the literature
as group scanning, in which several tags are "simultaneously" scanned by a reader device. Our goal is to study the group scanning problem in strong adversarial models. We present a security model for this application and give a formal description of the attending security requirements, focusing on the privacy (anonymity) of the grouped tags, and/or forward-security properties. Our model is based on the Universal Composability framework and supports reusability (through modularity of security guarantees). We introduce novel protocols that realize the security models, focusing on efficient solutions based on off-the-shelf components, such as highly optimized pseudo-random function designs that require fewer than 2000 Gate-Equivalents.
Modeling Computational Security in Long-Lived Systems
For many cryptographic protocols, security relies on the assumption that adversarial entities have limited computational power. This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e., super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol.
This paper proposes a new paradigm for the analysis of long-lived security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a new notion of long-term implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.
Secure PRNGs from Specialized Polynomial Maps over Any $F_q$
We prove that a random map drawn from any class ${\frak C}$ of polynomial
maps from $F_q^n$ to $F_q^{n+r}$ that is (i) totally
random in the affine terms, and (ii) has a negligible chance of not
being strongly one-way, provides a secure PRNG (hence a secure stream
cipher) for any $q$. Plausible choices for ${\frak C}$ are semi-sparse
(i.e., the affine terms are truly random) systems and other systems
that are easy to evaluate from a small (compared to a generic map)
number of parameters.
To our knowledge, there are no other positive results on the provable
security of specialized polynomial systems, in particular sparse ones
(which are natural candidates to investigate for speed). We can build
a family of provably secure stream ciphers whose rough implementation,
at the same security level, can run more than twice as fast as an
optimized QUAD (and any other provably secure stream cipher proposed
so far) while using much less storage. This may also help build faster
provably secure hashes.
We also examine the effects on security of recent results on
specialization, e.g., Aumasson-Meier (ICISC 2007), which precludes
Merkle-Damgard compression using polynomial systems uniformly very
sparse in every degree from being universally collision-free. We
conclude that our ideas are consistent with and complement these new
results. We think that we can build more efficient secure primitives
based on specialized (versus generic) polynomial maps.
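As background, here is a minimal toy of the QUAD-style construction the comparison refers to (our own sketch with arbitrary small parameters, not the paper's specialized systems): iterate a random quadratic map $S:\F_2^n \to \F_2^{n+r}$, feeding $n$ output bits back as the new state and emitting the remaining $r$ bits as keystream.

    import secrets

    n, r = 16, 16

    def rand_poly():
        """A random quadratic polynomial over F_2 in n variables: row i of A
        holds coefficients of x_i*x_j, b is the linear part, c the constant."""
        return ([secrets.randbits(n) for _ in range(n)],
                secrets.randbits(n),
                secrets.randbits(1))

    def evaluate(poly, x):
        """Evaluate the polynomial at the point x (an n-bit integer) over F_2."""
        A, b, c = poly
        acc = c ^ (bin(b & x).count("1") & 1)          # constant + linear part
        for i in range(n):
            if (x >> i) & 1:
                acc ^= bin(A[i] & x).count("1") & 1    # quadratic terms with x_i
        return acc

    S = [rand_poly() for _ in range(n + r)]     # the map F_2^n -> F_2^(n+r)

    def keystream(seed, blocks):
        """Feed n output bits back as the state, emit the other r bits."""
        x = seed & ((1 << n) - 1)
        for _ in range(blocks):
            bits = [evaluate(q, x) for q in S]
            x = sum(bit << i for i, bit in enumerate(bits[:n]))
            yield bits[n:]

The point of the abstract is that $S$ need not be generic: sparse or semi-sparse choices make the evaluation step far cheaper while, under the stated assumptions, preserving provable security.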
Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products
Predicate encryption is a new paradigm generalizing, among other
things, identity-based encryption. In a predicate encryption scheme,
secret keys correspond to predicates and ciphertexts are associated
with attributes; the secret key $SK_f$ corresponding to the predicate
$f$ can be used to decrypt a ciphertext associated with attribute $I$ if and only if $f(I)=1$. Constructions of such schemes are currently known for relatively few classes of predicates.
We construct such a scheme for predicates corresponding to the evaluation of inner products over $\mathbb{Z}_N$ (for some large integer $N$).
This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulae, or threshold predicates (among others). Besides serving as what we feel is a significant step forward in the theory of predicate encryption,
our results lead to a number of applications that are interesting in their own right.
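The reduction from polynomial evaluation to inner products can be seen in a standard small example: to test the disjunction $x=a$ or $x=b$ on attribute $x$, take $p(X)=(X-a)(X-b)$ and encode
\[
\vec{c}=(ab,\,-(a+b),\,1),\qquad \vec{v}=(1,\,x,\,x^2),\qquad
\langle\vec{c},\vec{v}\rangle = ab-(a+b)x+x^2 = p(x),
\]
which vanishes exactly when the disjunction holds; a scheme whose predicates test $\langle\vec{c},\vec{v}\rangle=0 \bmod N$ therefore handles such disjunctions directly.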
Turbo SHA-2
In this paper we describe the construction of the Turbo SHA-2 family of cryptographic hash functions. They are built with design components from the SHA-2 family, but the new hash functions have three times more chaining variables, are more robust and resistant against generic multi-block collision attacks, are resistant against generic length-extension attacks by design, and are 2-8 times faster than the original SHA-2. They use two novel principles in the design of hash functions: {\em 1. Computations in the iterative part of the compression function start by using variables produced in the message expansion part that have the complexity level of a random Boolean function, 2. Variables produced in the message expansion part are not discarded after the processing of the current message block, but are used for the construction of the three times wider chain for the next message block.} These two novel principles, combined with the already robust design principles present in SHA-2 (such as the nonlinear message expansion part), enabled us to build the compression function of Turbo SHA-2 with just 16 new variables in the message expansion part (compared to 48 for SHA-256 and 64 for SHA-512) and just 8 rounds in the iterative part (compared to 64 for SHA-256 and 80 for SHA-512).
Robust, Anonymous RFID Authentication with Constant Key-Lookup
A considerable number of anonymous RFID authentication schemes have been proposed. However, current proposals either do not provide robust security guarantees, or suffer from scalability issues when the number of tags issued by the system is very large.
In this paper, we focus on approaches that reconcile these important requirements. In particular, we seek to reduce the complexity of identifying tags by the back-end server in anonymous RFID authentication protocols---what we term the key-lookup problem.
We propose a compiler that transforms a generic RFID authentication protocol (supporting anonymity) into one that achieves the same guarantees with constant key-lookup cost even when the number of tags is very large (billions of tags and beyond). This approach uses
a lightweight one-way trapdoor function and produces protocols that are suitable for deployment into current tag architectures. We then explore the issue of minimal assumptions required, and show that one-way trapdoor functions are necessary to achieve highly scalable, robustly secure solutions. We then relax the requirement of unlinkable anonymity, and consider scalable solutions that are provably secure and for which the loss of privacy is minimal.
Another Look at Automated Theorem-Proving
I examine the use of automated theorem-proving for reductionist security arguments in cryptography and discuss three papers that purport to show the potential of computer-assisted proof-writing and proof-checking. I look at the proofs that the authors give to illustrate the "game-hopping" technique -- for Full-Domain Hash signatures, ElGamal encryption, and Cramer-Shoup encryption -- and ask whether there is evidence that automated theorem-proving can contribute anything of value to the security analysis of cryptographic protocols.
Remarks on IBE Scheme of Wang and Cao
In this paper we analyze and find an anomaly in the security proof of the identity-based encryption (IBE) scheme fullM-IBE of Wang and Cao [9], which is based on mBDHP. Here we give another proof for fullM-IBE which is based on the Bilinear Diffie-Hellman Problem (BDHP). We also obtain a tightness improvement using a stronger assumption, namely, the Bilinear Inverse Decision Diffie-Hellman problem (BIDDHP).
Ceremony Design and Analysis
The concept of ceremony is introduced as an extension of the concept of network protocol, with human nodes alongside computer nodes and with communication links that include UI, human-to-human communication and transfers of physical objects that carry data. What is out-of-band to a protocol is in-band to a ceremony, and therefore subject to design and analysis using variants of the same mature techniques used for the design and analysis of protocols. Ceremonies include all protocols, as well as all applications with a user interface, all workflow and all provisioning scenarios. A secure ceremony is secure against both normal attacks and social engineering. However, some secure protocols imply ceremonies that cannot be made secure.
Last updated: 2012-07-31
A Short Signature Scheme in the Standard Model
In this paper, by elaborately choosing the parameters of the Waters
hash function, we propose a new efficient signature scheme. We show
that the scheme is strongly unforgeable under chosen-message attacks
in the standard model under the Computational Diffie-Hellman (CDH)
assumption. Further, among all known secure signature schemes in the
standard model, ours is the shortest and has an efficient security
reduction as well.
On the security defects of an image encryption scheme
This paper studies the security of a recently-proposed chaos-based
image encryption scheme, and points out the following problems: 1)
there exist a number of invalid keys and weak keys, and some keys
are partially equivalent for encryption/decryption; 2) given one
chosen plain-image, a subkey $K_{10}$ can be guessed with a smaller
computational complexity than that of the simple brute-force attack;
3) given at most 128 chosen plain-images, a chosen-plaintext attack
can possibly break the following part of the secret key: $\{K_i\bmod
128\}_{i=4}^{10}$, which works very well when $K_{10}$ is not too
large; 4) when $K_{10}$ is relatively small, a known-plaintext
attack can be carried out with only one known plain-image to recover
some visual information of any other plain-images encrypted by the
same key.
Proxy Re-Signature Schemes without Random Oracles
Constructing a suitable and secure proxy re-signature scheme is not an easy job; up to now, only three schemes exist: one proposed by Blaze et al. at EUROCRYPT 1998, and the other two proposed by Ateniese and Hohenberger at ACM CCS 2005. However, none of these schemes is proved secure in the standard model (i.e., without relying on the random oracle heuristic). In this paper, based on Waters' approach, we first propose a multi-use bidirectional proxy re-signature scheme, denoted $S_{mb}$, which is existentially unforgeable in the standard model. We then extend $S_{mb}$ to a multi-use bidirectional ID-based proxy re-signature scheme, denoted $S_{id-mb}$, which is also existentially unforgeable in the standard model. Both proposed schemes are computationally efficient, and their security is based on the Computational Diffie-Hellman (CDH) assumption.
Second Preimage Attacks on Dithered Hash Functions
The goal of this paper is to analyze the security of dithered variants of the Merkle-Damgard mode of operation that use a third input to indicate the position of a block in the message to be hashed. These modes of operation for hash functions have been proposed to avoid some structural weaknesses of the Merkle-Damgard paradigm, e.g. that second preimages can be constructed in much less than $2^n$ work, as pointed out by Kelsey and Schneier. Among the modes of operation that use such a third input are Rivest's dithered hashing and Biham and Dunkelman's HAIFA proposal.
We propose several new second preimage attacks on the Merkle-Damgard mode of operation, which can also attack Rivest's dithered hash with almost the same complexity. When applied to Shoup's UOWHF, these attacks can be shown to be optimal since their complexity matches Shoup's security bound.
Almost-everywhere Secure Computation
Secure multi-party computation (MPC) is a central problem in cryptography. Unfortunately, it is well known that MPC is possible if and only if the underlying communication network has very large connectivity---specifically, $\Omega(t)$, where $t$ is the number of potential corruptions in the network. This impossibility result renders existing MPC results far less applicable in practice, since most deployed networks have in fact a very small degree.
In this paper, we show how to circumvent this impossibility result and achieve meaningful security guarantees for graphs with small degree (such as expander graphs and several other topologies). In fact, the notion we introduce, which we call {\em almost-everywhere MPC}, building on the notion of almost-everywhere agreement due to Dwork, Peleg, Pippenger and Upfal, allows the degree of the network to be much smaller than the total number of allowed corruptions. In essence, our definition allows the adversary to {\em implicitly} wiretap some of the good nodes by corrupting sufficiently many nodes in the ``neighborhood'' of those nodes. We show protocols that satisfy our new definition, retaining both correctness and privacy for most nodes despite small connectivity, no matter how the adversary chooses his corruptions.
Instrumental in our constructions is a new model and protocol for the {\em secure message transmission} (SMT) problem, which we call {\em SMT by public discussion}, and which we use for the establishment of pairwise secure channels in limited connectivity networks.
Overlap-free Karatsuba-Ofman Polynomial Multiplication Algorithms
We describe how a simple way to split input operands allows for fast VLSI implementations of subquadratic $GF(2)[x]$ Karatsuba-Ofman multipliers. The theoretical XOR gate delay of the resulting multipliers is reduced significantly. For example, it is reduced by about 33\% and 25\% for $n=2^{t}$ and $n=3^{t}$ $(t>1)$, respectively. To the best of our knowledge, this parameter has never been improved since the original Karatsuba-Ofman algorithm was first used to design $GF(2^n)$ multipliers in 1990.
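For background, the classical Karatsuba-Ofman recursion over $GF(2)[x]$ that such multipliers implement is sketched below in software, with the usual low/high operand split (the paper's overlap-free variant splits the operands differently, which is what shortens the XOR critical path in hardware). Polynomials are encoded as Python integers.

    # Classical Karatsuba-Ofman over GF(2)[x]: bit i = coefficient of x^i.

    def gf2_mul(a, b, bits):
        """Carry-less (GF(2)[x]) product of a and b, each of < `bits` bits."""
        if bits <= 8:                       # base case: schoolbook
            res = 0
            while b:
                if b & 1:
                    res ^= a
                a <<= 1
                b >>= 1
            return res
        h = bits // 2
        mask = (1 << h) - 1
        a0, a1 = a & mask, a >> h
        b0, b1 = b & mask, b >> h
        p0 = gf2_mul(a0, b0, h)                      # low * low
        p1 = gf2_mul(a1, b1, bits - h)               # high * high
        pm = gf2_mul(a0 ^ a1, b0 ^ b1, bits - h)     # (a0+a1)(b0+b1)
        # ab = p1*x^(2h) + (pm + p0 + p1)*x^h + p0, addition = XOR
        return p0 ^ ((p0 ^ p1 ^ pm) << h) ^ (p1 << (2 * h))

    # (x^3 + x + 1)(x^2 + x) = x^5 + x^4 + x^3 + x
    assert gf2_mul(0b1011, 0b0110, 16) == 0b111010

In hardware, the `<< h` shifts make the three partial products overlap, and XORing the overlapping bits is what contributes the gate delay that the overlap-free splitting removes.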
Efficient Computationally Private Information Retrieval From Anonymity or Trapdoor Groups
A Private Information Retrieval (PIR) protocol allows a database user, or client, to obtain information from a database in a manner that prevents the database from knowing which data was retrieved. Although substantial progress has been made in the discovery of
computationally PIR (cPIR) protocols with reduced communication complexity, there has been relatively little work in reducing the
computational complexity of cPIR protocols. In particular, Sion \cite{sion} argues that existing cPIR protocols are slower than the trivial PIR protocol (in overall performance). In this paper, we present a new family of cPIR protocols with a variety of security and performance properties. Our protocols enable much lower CPU overhead for the database server: when the database is viewed as a bit sequence, only addition operations are performed by the database server. We can view our protocol as a middle ground between the trivial protocol (fastest possible computational complexity and slowest possible communication complexity) and protocols such as Gentry-Ramzan \cite{gentry} (fast communication complexity but slower computational complexity); this middle ground enjoys a much better overall performance.
The security of the general version of our protocol depends on either a trapdoor group assumption or sender anonymity \cite{pfitzmann}, and we present two specialized versions, the first of which depends on the trapdoor group assumption, and the second of which depends on the sender anonymity assumption. We may view both Gentry-Ramzan and our cPIR protocol as instances of a more general new construct: the \textit{trapdoor group}. In a trapdoor group, knowledge of the trapdoor allows efficient computation of an inversion problem, such as computing discrete logarithms; without the trapdoor, it is computationally hard to solve the inversion problem. For our protocol, we assume, roughly speaking, that given only the elements $be_1, \ldots, be_t$ in the group $\mathbb{Z}_m$, where $e_i < m/t$ and $t$ is small, it is hard to compute low-order bits of the group order $m$. One version of our cPIR protocol depends only on sender anonymity, which, to our knowledge, makes it the first cPIR protocol to depend only on an anonymity assumption.
Our prototype implementation shows that our performance compares favorably with existing cPIR protocols.
A novel public key crypto system based on semi-modules over quotient semi-rings
The generalization of the original Diffie-Hellman key exchange in $(\mathbb{Z}/p\mathbb{Z})^*$ found a new depth when Miller and Koblitz suggested that such a protocol could be used with the group of points of an elliptic curve. Maze, Monico and Rosenthal extended this generalization to the setting of a semigroup action on a finite set, more precisely, linear actions of abelian semi-rings on semi-modules. In this paper, we extend the generalization to linear actions of quotient semi-rings on semi-modules. In fact, we show how the action of a quotient semi-ring on a semi-module gives rise to generalized Diffie-Hellman and ElGamal protocols. This leads naturally to a cryptographic protocol whose difficulty is based on the hardness of a particular control problem: steering the state of a dynamical system from an initial vector to some final location.
Implementing Cryptographic Pairings over Barreto-Naehrig Curves
In this paper we describe an efficient implementation of the Tate and Ate pairings using Barreto-Naehrig pairing-friendly curves, on both a
standard 32-bit PC and a 32-bit smartcard. First we introduce a sub-family of such curves with a particularly simple representation. Next we consider the issues that arise in the efficient implementation of field arithmetic in $\F_{p^{12}}$, which is crucial to good performance. Various optimisations are suggested, including a novel approach to the `final exponentiation', which is faster and requires less memory than the methods previously recommended.
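For reference (standard facts about the family, not specifics of this implementation), Barreto-Naehrig curves have embedding degree $k=12$ and are parametrized by
\[
p(u)=36u^4+36u^3+24u^2+6u+1,\qquad
n(u)=36u^4+36u^3+18u^2+6u+1,\qquad
t(u)=6u^2+1,
\]
with $n(u)=p(u)+1-t(u)$ the prime group order; a sub-family with a simple representation then corresponds to restricting the parameter $u$ to a convenient shape.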
Interactive and Noninteractive Zero Knowledge Coincide in the Help Model
We show that a problem in $\AM$ has an interactive zero-knowledge
proof system {\em if and only if} it has a noninteractive
zero-knowledge proof system in the `help model' of Ben-Or and Gutfreund
({\em J. Cryptology}, 2003). In this model, the shared reference
string is generated by a probabilistic polynomial-time dealer who
is given access to the statement to be proven. Our result holds
for both computational zero knowledge and statistical zero
knowledge, and does not rely on any unproven complexity
assumptions.
We also show that help does not add power to interactive
computational zero-knowledge proofs, paralleling a result of
Ben-Or and Gutfreund for the case of statistical zero knowledge.
On Ciphertext Undetectability
We propose a novel security notion for public-key encryption schemes -- ciphertext undetectability. Informally, an encryption scheme has the property of ciphertext undetectability if the attacker is unable to distinguish between valid and invalid ciphertexts. We compare this notion with established ones, such as indistinguishability of ciphertexts and plaintext awareness. We analyze the possibilities of constructing schemes with the property of ciphertext undetectability. Moreover, we prove that the Damgard ElGamal scheme, the Cramer-Shoup scheme and its lite variant achieve ciphertext undetectability under standard assumptions.
Last updated: 2007-11-09
Analysis of Local Optima in Block Ciphers
We present a technique to perform key distinguishing attacks on block ciphers. The method is based on profiling the behaviour of a simple search algorithm when it is applied to recover the key under which a set of known plaintexts has been encrypted. Even though the probability of finding the correct key is negligible, it is observed that the solutions (local optima) yielded by successive searches can be highly dependent on the key, forming patterns that can be unequivocally (in a statistical sense) associated with each particular key. When a cipher suffers from such a weakness, this provides us with an effective procedure to tell apart ciphertexts generated by different and unknown keys. We illustrate the method by applying it to the TEA block cipher, for which attacks of this kind can be successfully mounted against the full version (64 rounds) with extremely simple profiling methods. The technique itself is completely black-box and admits a number of refinements. We suspect it might be applied to many other ciphers by using the same or more complex profiling schemes.
(Convertible) Undeniable Signatures without Random Oracles
We propose a convertible undeniable signature scheme without random oracles. Our construction is based on Waters' and Kurosawa-Heng's schemes that were proposed at Eurocrypt 2005. The security of our scheme is based on the CDH and the decision linear assumptions. Considering only the undeniable-signature part, our scheme uses more standard assumptions than the existing undeniable signatures without random oracles due to Laguillaumie and Vergnaud.
On the insecurity of interchanged use of OFB and CBC modes of operation
The security of interchanged use of modes of operation of block ciphers has not been discussed in the public literature. So far, the modes of operation of block ciphers have been treated as completely independent and uncorrelated. In this paper we represent both CBC and OFB as quasigroup string transformations, and then show that OFB mode is a special case of the CBC mode of operation. That raises the possibility of several devastating attack scenarios against the interchanged use of CBC and OFB. These attacks are not addressed in NIST Special Publication 800-38A 2001, ``Recommendation for Block Cipher Modes of Operation''. More specifically, in the chosen-plaintext attack scenario with interchanged use of CBC and OFB modes, we give a concrete list of openssl commands that extract the complete plaintext without knowing the secret key.
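The structural relation claimed here can be checked in a few lines: CBC-encrypting an all-zero message produces exactly the OFB keystream for the same key and IV, since $C_1 = E_K(0 \oplus IV) = E_K(IV)$ and $C_{i+1} = E_K(0 \oplus C_i) = E_K(C_i)$. A minimal check using the pycryptodome library (our illustration; the paper's openssl command list is not reproduced here):

    # CBC of an all-zero message equals the OFB keystream (same key and IV).
    from Crypto.Cipher import AES

    key, iv = bytes(16), bytes(range(16))
    zeros = bytes(64)

    cbc = AES.new(key, AES.MODE_CBC, iv=iv).encrypt(zeros)
    ofb = AES.new(key, AES.MODE_OFB, iv=iv).encrypt(zeros)  # keystream xor 0
    assert cbc == ofb

This identity is what makes interchanged use dangerous: a CBC encryption oracle fed chosen plaintexts doubles as an OFB keystream oracle for the same key and IV.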
Non-Interactive Anonymous Credentials
In this paper, we introduce P-signatures. A P-signature scheme
consists of a signature scheme, a commitment scheme, and (1) an
interactive protocol for obtaining a signature on a committed value;
(2) a non-interactive proof system for proving that the contents of a
commitment have been signed; (3) a non-interactive proof system for
proving that a pair of commitments are commitments to the same value.
We give a definition of security for P-signatures and show how they
can be realized under appropriate assumptions about groups with a
bilinear map. Namely, we make extensive use of the powerful suite of
non-interactive proof techniques due to Groth and Sahai.
Our P-signatures enable, for the first time, the design of a practical
non-interactive anonymous credential system whose security does not
rely on the random oracle model. In addition, they may serve as a useful building block for other privacy-preserving authentication mechanisms.
Cryptanalysis on Improved Chou et al.'s ID-Based Deniable Authentication Protocol
A deniable authentication protocol enables the protocol participants to authenticate their respective peers, while being able to deny their participation after the protocol execution. Such a protocol can be extremely useful in practical applications such as online negotiation, online shopping and electronic voting. Recently, in a previous report, we improved a deniable authentication scheme proposed by Chou et al. owing to its vulnerability to the key compromise impersonation attack. However, we have since discovered that our previous enhanced protocol is vulnerable to the insider key compromise impersonation attack and the key replicating attack. In this paper, we again secure this protocol against these attacks and provide a heuristic security analysis.