All papers in 2006 (485 results)
Inductive Trace Properties for Computational Security
Protocol authentication properties are generally trace-based, meaning that authentication holds for the protocol if authentication holds for individual traces (runs of the protocol and adversary). Computational secrecy conditions, on the other hand, often are not trace based: the ability to computationally distinguish a system that transmits a secret from one that does not is measured by overall success on the \textit{set} of all traces of each system. This presents a challenge for inductive or compositional methods: induction is a natural way of reasoning about traces of a system, but it does not appear applicable to non-trace properties. We therefore investigate the semantic connection between trace properties that could be established by induction and non-trace-based security requirements. Specifically, we prove that a certain trace property implies computational secrecy and authentication properties, assuming the encryption scheme provides chosen ciphertext security and ciphertext integrity. We also prove a similar theorem for computational secrecy assuming Decisional Diffie-Hellman and a chosen plaintext secure encryption scheme.
Indifferentiability of Single-Block-Length and Rate-1 Compression Functions
The security notion of indifferentiability was proposed by Maurer,
Renner, and Holenstein in 2004. In 2005, Coron, Dodis, Malinaud, and
Puniya discussed the indifferentiability of hash functions. They
showed that the Merkle-Damgaard construction is not secure in the
sense of indifferentiability. In this paper, we analyze the security
of single-block-length and rate-1 compression functions in the sense
of indifferentiability. We formally show that all single-block-length
and rate-1 compression functions, which include the Davies-Meyer
compression function, are insecure. Furthermore, we show how to
construct a secure single-block-length and rate-1 compression function
in the sense of indifferentiability. This does not contradict our
result above.
Last updated: 2008-01-15
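To fix ideas, the Davies-Meyer function mentioned above is the canonical single-block-length, rate-1 compression function: it makes one block-cipher call per message block and outputs $f(h, m) = E_m(h) \oplus h$. A minimal sketch of this shape (our illustration, with AES-128 from the third-party pycryptodome package standing in for the block cipher):

```python
# Sketch: the Davies-Meyer compression function f(h, m) = E_m(h) XOR h,
# instantiated here with AES-128 (requires the pycryptodome package).
from Crypto.Cipher import AES

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """Compress a 16-byte chaining value h under a 16-byte message block m."""
    assert len(h) == 16 and len(m) == 16
    e = AES.new(m, AES.MODE_ECB).encrypt(h)    # E_m(h): the message block is the key
    return bytes(x ^ y for x, y in zip(e, h))  # feed-forward XOR

h = bytes(16)                            # an arbitrary all-zero IV
h = davies_meyer(h, b"block one.......")
print(h.hex())
```

Rate 1 means one encryption per message block, and single block length means the chaining value is one cipher block wide, which is exactly the class of constructions whose indifferentiability the paper analyzes.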
A New Identity Based Encryption Scheme From Pairing
We construct an efficient identity based encryption scheme from pairing. The
basic version of the new scheme is provably secure against chosen plaintext
attack, and the full version of the new scheme is provably secure against
adaptive chosen ciphertext attack. Our scheme is based on a new assumption (the decision weak bilinear Diffie-Hellman assumption), which is no stronger than the decision bilinear Diffie-Hellman assumption.
New Constructions for Provably-Secure Time-Bound Hierarchical Key Assignment Schemes
A time-bound hierarchical key assignment scheme is a method
to assign time-dependent encryption keys to a set of classes in a
partially ordered hierarchy, in such a way that each class in the
hierarchy can compute the keys of all classes lower down in the
hierarchy, according to temporal constraints.
In this paper we propose new constructions for time-bound
hierarchical key assignment schemes which are provably secure with
respect to key indistinguishability. Our constructions use as a
building block any provably-secure hierarchical key assignment
scheme without temporal constraints and exhibit a tradeoff among
the amount of private information held by each class, the amount
of public data, the complexity of key derivation, and the
computational assumption on which their security is based.
Moreover, the proposed schemes support updates to the access
hierarchy with local changes to the public information and without
requiring any private information to be re-distributed.
Countermeasures for the Simple Branch Prediction Analysis
Branch Prediction Analysis has been proposed as an attack method to obtain key bits from a cryptographic application.
In this report, we put forth several solutions to avoid or prevent this attack.
The reported countermeasures require only minimal hardware support that is commonly available in modern superscalar processors.
A Practical Limit of Security Proof in the Ideal Cipher Model: Possibility of Using the Constant As a Trapdoor In Several Double Block Length Hash Functions
Recently, Shoichi Hirose \cite{Hirose06} proposed several double
block length (DBL) hash functions. Each DBL hash function uses a
constant which has a role to make the DBL hash function
collision-resistant in the ideal cipher model. However, we have to
instantiate a block cipher. In this paper, we show that the constant may be used as a trapdoor to help an attacker find a collision easily. In the case of a 256-bit output size, we can find a collision with complexity $2^{64}$. This demonstrates a gap between the security of the DBL hash function in the ideal cipher model and its security when based on a concrete block cipher.
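For reference, the Hirose DBL compression function in question (as we recall it from \cite{Hirose06}) maps $(g_{i-1}, h_{i-1}, m_i)$ to $$g_i = E_{h_{i-1}\|m_i}(g_{i-1}) \oplus g_{i-1}, \qquad h_i = E_{h_{i-1}\|m_i}(g_{i-1} \oplus c) \oplus g_{i-1} \oplus c,$$ so the nonzero constant $c$ is the only difference between the two block-cipher calls; it is exactly this value that, for a malicious choice of cipher, can serve as the trapdoor described above.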
Cryptanalysis of REESSE1+ Public Key Cryptosystem
A new public key cryptosystem, called REESSE1+, was recently proposed. REESSE1+ consists of two primitive algorithms: a public key encryption/decryption algorithm and a digital signature algorithm. We analyze REESSE1+ and show that the system is totally insecure. We show how to derive the private key from the public key. At the same time, we also show how to forge signatures for any message, given two valid signatures.
Efficient Provably-Secure Hierarchical Key Assignment Schemes
A hierarchical key assignment scheme is a method to assign
some private information and encryption keys to a set of classes
in a partially ordered hierarchy, in such a way that the private
information of a higher class can be used to derive the keys of
all classes lower down in the hierarchy.
In this paper we design and analyze hierarchical key assignment
schemes which are provably-secure and support dynamic updates to
the hierarchy with local changes to the public information and
without requiring any private information to be re-distributed.
We first consider the problem of constructing a hierarchical key assignment scheme by using as a building block a symmetric encryption scheme. We propose a new construction which is provably secure with respect to key indistinguishability, requires a single computational assumption, and improves on previous proposals.
Then, we show how to reduce key derivation time at the expense of an
increment of the amount of public information, by improving a
previous result.
Finally, we show how to construct a hierarchical key assignment scheme by using as a building block a public-key broadcast encryption
scheme. In particular, one of our constructions provides constant
private information and public information linear in the number of
classes in the hierarchy.
Near-Collision Attack and Collision Attack on Double Block Length Compression Functions based on the Block Cipher IDEA
IDEA is a block cipher designed by Xuejia Lai and James L. Massey and was first described in 1991. IDEA does not vary the constants in its key schedule. In \cite{ChYu06}, Donghoon Chang and Moti Yung showed that there may be a weakness in hash functions based on a block cipher whose key schedule does not use varying constants. Based on their result, we investigate the security of double block length compression functions based on the block cipher IDEA, whose key size is 128 bits and whose block length is 64 bits. We use the double block length hash functions proposed by Shoichi Hirose at the second hash workshop in 2006 \cite{Hirose06}. Then we can easily find a near-collision by hand; moreover, for a constant $c$ of the DBL hash functions, we can find a collision by hand. This means that the constant $c$ may be used as a trapdoor to let an attacker find collisions easily.
Dynamic Cryptographic Hash Functions
We present the dynamic cryptographic hash function, a new type of hash function which takes two parameters instead of one. The additional parameter, the security parameter, specifies the internal workings and the size of the digest produced. We provide formal definitions for a dynamic cryptographic hash function and for the traditional security properties, modified for dynamic hash functions. Two additional properties, security parameter collision resistance and digest resistance, are also defined. The additional properties are motivated by scenarios where a dynamic hash function more cleanly provides a solution to a typical cryptographic problem.
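The interface change is easy to picture. A minimal sketch of a two-parameter hash, with Python's standard SHAKE-256 extendable-output function standing in for a real dynamic hash (here the security parameter merely selects the digest length):

```python
# Sketch: a two-parameter "dynamic" hash h(x, s) where the security
# parameter s selects the digest size.  SHAKE-256 is only a stand-in.
import hashlib

def dynamic_hash(data: bytes, security_param_bits: int) -> bytes:
    assert security_param_bits % 8 == 0
    return hashlib.shake_256(data).digest(security_param_bits // 8)

msg = b"the same message"
print(dynamic_hash(msg, 128).hex())   # 16-byte digest
print(dynamic_hash(msg, 256).hex())   # 32-byte digest
```

Note that SHAKE digests of different lengths are prefixes of one another, so this stand-in would not satisfy the paper's additional properties such as digest resistance; it only illustrates the two-parameter interface.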
Password-Authenticated Multi-Party Key Exchange with Different Passwords
Password-authenticated key exchange (PAKE) allows two or more parties to share a session key using a human-memorable password only. PAKE has been applied in various environments, especially in the "client-server" model of remotely accessed systems. Designing a secure PAKE scheme has been a challenging task because of the low entropy of the password space and newly recognized attacks in emerging environments. In this paper, we study PAKE for multiple parties with different passwords, which allows group users holding different passwords to agree on a common session key with the help of a trusted server, using their passwords only. In this setting, the users do not share a password between themselves but only with the server. The fundamental security goal of PAKE is security against dictionary attacks. We present the first two provably secure protocols for this problem in the standard model under the DDH assumption; our first protocol is designed to provide forward secrecy and to be secure against known-key attacks. The second protocol is designed to additionally provide key secrecy against curious servers. Both protocols require a constant number of rounds.
New Technique for Solving Sparse Equation Systems
Most of the recent cryptanalysis of symmetric key ciphers has focused on algebraic attacks. The cipher being attacked is
represented as a non-linear equation system, and various techniques
(Buchberger, F4/F5, XL, XSL) can be tried in order to solve the
system, thus breaking the cipher. The success of these attacks has
been limited so far. In this paper we take a different approach to
the problem of solving non-linear equation systems, and propose a
new method for solving them. Our method differs from the others in
that the equations are not represented as multivariate polynomials,
and that the core of the algorithm for finding the solution can be
seen as message-passing on a graph. Bounds on the complexities for
the main algorithms are presented and they compare favorably with
the known bounds. The methods have also been tested on
reduced-round versions of DES with good results. This paper was
posted on ECRYPT's STVL website on January 16th 2006.
Speeding up the Bilinear Pairings Computation on Curves with Automorphisms
In this paper we present an algorithm for computing the
bilinear pairings on a family of non-supersingular elliptic curves with non-trivial automorphisms. We obtain a short iteration loop in Miller's algorithm using non-trivial efficient automorphisms. The proposed algorithm is as efficient as Scott's algorithm in [12].
Identity-Based Proxy Re-encryption
In a proxy re-encryption scheme a semi-trusted proxy converts a ciphertext for Alice into a ciphertext for Bob {\em without} seeing the underlying plaintext. A number of solutions have
been proposed in the public-key setting. In this paper, we address the problem of Identity-Based proxy re-encryption, where ciphertexts are transformed from one {\it identity} to another. Our schemes are compatible with current IBE deployments and do not require any extra work from the IBE trusted-party key generator. In addition, they are non-interactive and one of them permits multiple re-encryptions. Their security is based on a standard assumption (DBDH) in the random oracle model.
A Framework for Interactive Argument Systems using Quasigroupic Homomorphic Commitment
Using a model based on \textit{probabilistic functions} (\textit{PF}), we introduce the concept of a \textit{perfect zero knowledge} (\textit{PZK}) \textit{commitment scheme} (\textit{CS}) allowing \textit{quasigroupic homomorphic commitment} (\textit{QHC}). Using \textit{QHC} of $+_m$ (modular sum in $\mathbb{Z}_m$), we consider applications to interactive argument systems (\textit{IAS}) for several languages. In four of the examples -- generalized nand ($L_{\text{nand}}^{(\alpha)}$), string equality ($=_{(m,\alpha)}$), string inequality ($\neq_{(m,\alpha)}$) and graph three-colourability ($G3C$) -- complexity improvements are obtained, in comparison with other established results. Motivation then arises to define a general framework for \textit{PZK}-\textit{IAS} for membership in a language with committed alphabet (\textit{MLCA}), such that the properties of soundness and \textit{PZK} result from high-level parametrizable aspects. A general simulator is constructed for sequential and (most interestingly) for parallel versions of execution. It therefore becomes easier to conceptualize functionalities of this kind of \textit{IAS} without considering low-level aspects of cryptographic primitives. The constructed framework is able to embrace \textit{CS} allowing \textit{QHC} of functions that are not themselves quasigroupic. Several theoretical considerations are made, namely recognizing necessary requirements to demand of an eventual \textit{CS} allowing \textit{QHC} of some complete function in a Boolean sense.
Multiplication and Squaring on Pairing-Friendly Fields
Pairing-friendly fields are finite fields that are suitable for the implementation of cryptographic bilinear pairings. In this paper we review multiplication and squaring methods for pairing-friendly fields $\mathbb{F}_{p^k}$ with $k \in \{2,3,4,6\}$. For composite $k$, we consider every possible towering construction. We compare the methods to determine which is most efficient based on the number of basic $\mathbb{F}_p$ operations, as well as the best constructions for these finite extension fields. We also present experimental results for every method.
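As a small illustration of the operation counts being compared, take the quadratic extension $\mathbb{F}_{p^2} = \mathbb{F}_p(i)$ with $i^2 = \beta$ a non-residue: schoolbook multiplication costs four $\mathbb{F}_p$ multiplications, while Karatsuba costs three. A sketch with toy parameters (not taken from the paper):

```python
# Sketch: Karatsuba multiplication in F_{p^2} = F_p[i]/(i^2 - beta),
# using 3 base-field multiplications instead of 4.
p, beta = 2**13 - 1, -1   # toy Mersenne prime; p = 3 (mod 4), so -1 is a non-residue

def mul_fp2(x, y):
    a0, a1 = x
    b0, b1 = y
    v0 = (a0 * b0) % p                    # 1st F_p multiplication
    v1 = (a1 * b1) % p                    # 2nd
    v2 = ((a0 + a1) * (b0 + b1)) % p      # 3rd
    c0 = (v0 + beta * v1) % p             # a0*b0 + beta*a1*b1
    c1 = (v2 - v0 - v1) % p               # a0*b1 + a1*b0
    return (c0, c1)

print(mul_fp2((3, 5), (7, 11)))           # (3+5i)(7+11i) = -34 + 68i (mod p)
```

Towering repeats this idea: $\mathbb{F}_{p^6}$ built as a cubic extension of $\mathbb{F}_{p^2}$ (or a quadratic extension of $\mathbb{F}_{p^3}$) inherits such savings at each level, which is why the choice of construction matters.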
On the security of a group key agreement protocol
In this paper we show that the group key agreement protocol proposed by Tseng suffers from a number of serious security vulnerabilities.
An Attack on Disguised Elliptic Curves
We present an attack on one of the Hidden Pairing schemes proposed by Dent and Galbraith. We drastically reduce the number of variables necessary to perform a multivariate attack and in some cases we can completely recover the private key. Our attack relies only on knowledge of the public system parameters.
White Box Cryptography: Another Attempt
At CMS 2006 Bringer et al. show how to conceal the algebraic structure of a ``traceable block cipher'' by adding perturbations to its description. We here exploit and strengthen their ideas by further perturbing the representation of a cipher towards a white box implementation. Our technique is quite general, and we apply it -- as a challenging example in the domain of white box cryptography -- to a variant of the block cipher AES.
Do We Need to Vary the Constants? (Methodological Investigation of Block-Cipher Based Hash Functions)
The recent collision attacks on the MD hash function family do not depend on the constants used in the function, but rather on its structure (i.e., changing the constants will not affect the differential-analysis-based attacks). Thus, it seems that the role of constants in maintaining security and preventing these attacks is unclear at best for this case; in particular, fixing or varying the constants will not matter for these analyses.
In this work we present a methodological investigation into the case of the block-cipher-based PGV hash function family, and investigate the importance of constants in securing these designs.
To this end we consider the twelve variants of the PGV family that yield a secure hash in the generic ideal cipher case (as was shown by Black, Rogaway and Shrimpton), but consider them under concrete instantiation.
To investigate the role of constants in the key derivation procedure we simply ignore them. In this more uniform setting we further consider a very regular cipher, namely AES modified to have MixColumns also in the last round (which should still be a strong cipher).
Analyzing this modified-AES based hashing, we show that with about 16\%
probability we can find collisions with complexity $2^{49}$ (much
smaller than the birthday attack complexity $2^{64}$).
While we do not claim to break the AES-based version, this nevertheless shows that constants in a block cipher play an important role in resisting collision attacks (in particular, there is a need to vary the constants). It also shows that (in the symmetric modified version) the concrete AES structure alone does not guarantee the security of AES-based hash functions (shown secure under the ideal cipher model). This is undesirable and non-robust: even though a block cipher has complicated structures in its round function and its key scheduling algorithm, we cannot be confident about the security of hash functions based solely on it (note that several block ciphers, such as IDEA and 3-key triple DES, do not use any constants).
Given the above methodological findings, we suggest new AES-based hash
function constructions (essentially modified PGV) which can be
generalized to any block cipher. The functions inherit the security
under the ideal cipher model on the one hand, while, on the other
hand, concretely assure in their structure that the weakness exhibited
herein is dealt with.
Prime Order Primitive Subgroups in Torus-Based Cryptography
We use the Bateman-Horn conjecture to study the order of the set of $\mathbb{F}_q$-rational points of primitive subgroups that arise in torus-based cryptography. We provide computational evidence to support the heuristics and make some suggestions regarding parameter selection for torus-based cryptography.
Security and Composition of Cryptographic Protocols: A Tutorial
What does it mean for a cryptographic protocol to be "secure"?
Capturing the security requirements of cryptographic tasks in
a meaningful way is a slippery business: On the one hand, we want security criteria that prevent "all feasible attacks" against a protocol. On the other hand, we want our criteria to not be overly restrictive; that is, we want them to accept those protocols that do not succumb to "feasible attacks".
This tutorial studies a general methodology for defining security of
cryptographic protocols. The methodology, often dubbed the "trusted party paradigm", allows for defining the security requirements of practically any cryptographic task in a unified and natural way.
We first review a basic formulation that captures security in isolation from other protocol instances. Next we address the secure composition problem, namely the vulnerabilities resulting from the often unexpected interactions among different protocol instances that run alongside each other in the same system. We demonstrate the limitations of the basic formulation and review a formulation that guarantees security of protocols even in general composite systems.
Remarks on "Analysis of One Popular Group Signature Scheme'' in Asiacrypt 2006
In \cite{Cao}, a putative framing ``attack'' against the ACJT group signature scheme \cite{ACJT00} is presented. This note shows that the attack framework considered in \cite{Cao} is \emph{invalid}. As we clearly illustrate, there is \textbf{no security weakness} in the ACJT group signature scheme as long as all the detailed specifications in \cite{ACJT00} are being followed.
Obfuscation for Cryptographic Purposes
An obfuscation O of a function F should satisfy two requirements: firstly, using O it should be possible to evaluate F; secondly, O should not reveal anything about F that cannot be learnt from oracle access to F. Several definitions for obfuscation exist. However, most of them are either too weak for or incompatible with cryptographic applications, or have been shown impossible to achieve, or both.
We give a new definition of obfuscation and argue for its reasonability and usefulness. In particular, we show that it is strong enough for cryptographic applications, yet that it still allows for interesting positive results. We illustrate this with the following two results:
- If the encryption algorithm of a secure secret-key encryption scheme can be obfuscated according to our definition, then the result is a secure public-key encryption scheme.
- A uniformly random point function can be easily obfuscated according to our definition, by simply applying a one-way permutation. Previous obfuscators for point functions, under varying notions of security, are either probabilistic or in the random oracle model (but work for arbitrary distributions on the point function).
On the negative side, we show that
- Following Hada and Wee, any family of deterministic functions that can be obfuscated according to our definition must already be ``approximately learnable.'' Thus, many deterministic functions cannot be obfuscated. However, a probabilistic functionality such as a probabilistic secret-key encryption scheme can potentially be obfuscated. In particular, this is possible for a public-key encryption scheme when viewed as a secret-key scheme.
- There exists a secure probabilistic secret-key encryption scheme that cannot be obfuscated according to our definition. Thus, we cannot hope for a general-purpose cryptographic obfuscator for encryption schemes.
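To make the positive result for point functions concrete: obfuscating the point function $I_x$ (outputting 1 exactly on input $x$) for a uniformly random $x$ amounts to publishing $\pi(x)$ for a one-way permutation $\pi$. A sketch of that shape, with SHA-256 standing in for $\pi$ (SHA-256 is a hash, not a permutation, so this only illustrates the structure, not the proved statement):

```python
# Sketch: obfuscating a point function I_x by publishing y = pi(x);
# evaluation checks pi(input) == y.  SHA-256 stands in for the OWP.
import hashlib, os

def obfuscate(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def evaluate(obf: bytes, candidate: bytes) -> bool:
    return hashlib.sha256(candidate).digest() == obf

x = os.urandom(32)                       # uniformly random point
obf = obfuscate(x)
print(evaluate(obf, x))                  # True
print(evaluate(obf, os.urandom(32)))     # False, except with negligible probability
```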
Improved Collision and Preimage Resistance Bounds on PGV Schemes
Preneel, Govaerts, and Vandewalle [14] (PGV) considered the 64 most basic ways to construct a hash function from a block cipher, and regarded 12 of those 64 schemes as secure. Black, Rogaway and Shrimpton [3] (BRS) provided a formal and quantitative treatment of those 64 constructions and proved that, in the black-box model, the 12 schemes (group-1) that PGV singled out as secure really are secure. By stepping outside of the Merkle-Damgard [4] approach to analysis, an additional 8 (group-2) of the 64 schemes are just as collision resistant as the first group of schemes. Tight upper and lower bounds on the collision resistance of those 20 schemes were given. In this paper, those collision resistance and preimage resistance bounds are improved, which shows that, in the black-box model, the collision bounds of those 20 schemes are the same. For 8 out of the 12 group-1 schemes, fixed points can be found easily. Bounds on second preimage resistance, multicollisions of Joux [6], fixed-point multicollisions [8] and combinations of the two kinds of multicollisions are also given. From those bounds, group-1 schemes can also be divided into two groups.
On Post-Modern Cryptography
This essay relates to a recent article of Koblitz & Menezes
(Cryptology ePrint Report 2004/152)
that ``criticizes several typical `provable security' results''
and argues that the ``theorem-proof paradigm of theoretical
mathematics is often of limited relevance'' to cryptography.
Although it feels ridiculous to answer such a claim,
we undertake to do so in this essay.
In particular, we point out some of the fundamental philosophical flaws
that underlie the said article and some of its misconceptions regarding
theoretical research in Cryptography in the last quarter of a century.
Preimage Attacks On Provably Secure FFT Hashing proposed at Second Hash Workshop in 2006
`Provably Secure FFT Hashing' (which we call FFT-Hash in this paper) was suggested by Lyubashevsky et al. at the Second Hash Workshop in Aug. 2006. This paper shows preimage attacks on hash functions based on three modes of FFT-Hash. In the case of `Nano', whose output size is 513 bits, we can find a preimage with complexity $2^{385}$. In the case of `Mini', whose output size is 1025 bits, we can find a preimage with complexity $2^{769}$. In the case of `Mini', whose output size is 28672 bits, we can find a preimage with complexity $2^{24576}$. This means that the structure of FFT-Hash is weak from the viewpoint of preimage resistance. We recommend that FFT-Hash not be used when the output size is less than 256 bits, because full security against preimage attacks is crucial at such short output sizes. The hash output should also not be chopped to obtain a short output in the style of SHA-224 and SHA-384: for example, we can find a preimage with complexity $2^{128}$ (not $2^{256}$) for `Nano' with 257 bits chopped, whose hash output is 256 bits.
Recursive lower bounds on the nonlinearity profile of Boolean functions and their applications
The nonlinearity profile of a Boolean function (i.e. the sequence of its minimum Hamming distances $nl_r(f)$ to all functions of degree at most $r$, for $r\geq 1$) is a cryptographic criterion whose role against attacks on stream and block ciphers has been illustrated by many papers. It also plays a role in coding theory, since it is related to the covering radii of Reed-Muller codes. We introduce a method for lower bounding its values and we deduce bounds on the second-order nonlinearity for several classes of cryptographic Boolean functions, including the Welch and the multiplicative inverse functions (the latter used in the S-boxes of the AES). For this last infinite class of functions, we are able to bound the whole profile, and we do so efficiently when the number of variables is not too small. This shows the good behavior of this function with respect to this criterion as well.
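For the first entry of the profile there is a classical closed form via the Walsh transform, $nl_1(f) = 2^{n-1} - \frac{1}{2}\max_a |W_f(a)|$, which can be evaluated exhaustively for small $n$. A brute-force sketch (toy function of our choosing; the paper's bounds are of interest exactly when such computation is infeasible):

```python
# Sketch: first-order nonlinearity via the Walsh transform,
# nl_1(f) = 2^(n-1) - max_a |W_f(a)| / 2.  Brute force, small n only.
n = 4

def f(x):          # toy bent function x1*x2 XOR x3*x4
    return ((x & 1) & (x >> 1 & 1)) ^ ((x >> 2 & 1) & (x >> 3 & 1))

def walsh(a):      # W_f(a) = sum_x (-1)^(f(x) XOR <a,x>)
    return sum((-1) ** (f(x) ^ bin(a & x).count("1") % 2) for x in range(2 ** n))

nl1 = 2 ** (n - 1) - max(abs(walsh(a)) for a in range(2 ** n)) // 2
print(nl1)         # 6 = 2^(n-1) - 2^(n/2 - 1), the maximum for n = 4
```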
Copyrighting Public-key Functions and Applications to Black-box Traitor Tracing
Copyrighting a function is the process of embedding hard-to-remove
marks in the function's implementation while retaining its original
functionality. Here we consider the above problem in the context of
public-key encryption and we parallel the process of copyrighting a function to the process of designing traitor tracing schemes.
We derive two copyrighted public-key encryption functions for the
$2$-key setting, solving an open question left by earlier work
with respect to copyrighting discrete-logarithm based functions. We then follow a modular design approach and show how to
elevate the $2$-key case to the multi-user setting, employing
collusion secure codes. Our methodology provides a general framework for constructing
public-key traitor tracing schemes that has the interesting property
that the transmission rate remains constant if the plaintext size can be
calibrated to reach an appropriate minimal length.
Achieving a constant rate, i.e., constant expansion in the
size of ciphertexts and keys, is an important open problem
in the area of traitor tracing schemes. Our design shows
how one can solve it for settings that accommodate the
required plaintext calibration (e.g., when a bulk of
symmetric cipher keys can be encrypted in one message).
Our constructions support ``black-box traitor
tracing'', the setting where the tracer only accesses
the decryption box in input/output queries/responses. For the first time here we provide a modeling of black-box traitor tracing that takes into account adversarially chosen plaintext distributions, a security notion we call {\em semantic black-box traceability}. In order to facilitate the design of schemes with semantic black-box traceability we introduce as part of our modular design approach a simpler notion called semantic user separability and we show that this notion implies semantic black-box traceability. In the multi-user setting our constructions also demonstrate how one can derive public-key traitor tracing by reducing the required ``marking assumption'' of collusion-secure codes to cryptographic hardness assumptions.
Linear Approximating to Integer Addition
Integer addition is often used in ciphers as a cryptographic operation. In this paper we present some results on linear approximations of integer addition.
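One such result can be checked directly: for uniform inputs, bit $i$ of $x + y \bmod 2^n$ agrees with the linear expression $x_i \oplus y_i$ with probability $\tfrac{1}{2} + 2^{-(i+1)}$ (probability 1 for $i = 0$, $3/4$ for $i = 1$, $5/8$ for $i = 2$, and so on), since the approximation fails exactly when a carry enters position $i$. A quick exhaustive verification with toy parameters:

```python
# Sketch: probability that bit i of x+y (mod 2^n) equals x_i XOR y_i,
# i.e. the quality of the linear approximation of integer addition.
n = 8
for i in range(3):
    hits = sum(((x + y) >> i & 1) == ((x >> i & 1) ^ (y >> i & 1))
               for x in range(2 ** n) for y in range(2 ** n))
    print(f"bit {i}: {hits / 4 ** n:.4f}")   # 1.0000, 0.7500, 0.6250
```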
Indistinguishability Amplification
A random system is the abstraction of the input-output behavior of
any kind of discrete system, in particular cryptographic systems. Many
aspects of cryptographic security analyses and proofs can be seen as
the proof that a certain random system (e.g. a block cipher) is
indistinguishable from an ideal system (e.g. a random permutation),
for different types of distinguishers.
This paper presents a new generic approach to proving upper
bounds on the distinguishing advantage of a combined system,
assuming upper bounds of various types on the component systems. For
a general type of combination operation of systems (including the
combination of functions or the cascade of permutations), we prove
two amplification theorems.
The first is a direct-product theorem, similar in spirit to the
XOR-Lemma: The distinguishing advantage (or security) of the
combination of two (possibly stateful) systems is twice the product
of the individual distinguishing advantages, which is optimal.
The second theorem states that the combination of systems is
secure against some strong class of distinguishers, assuming
only that the components are secure against some weaker
class of attacks. As a corollary we obtain tight bounds on the
adaptive security of the cascade and parallel composition of
non-adaptively (or only random-query) secure component systems.
A key technical tool of the paper is
to show a tight two-way correspondence, previously only
known to hold in one direction, between the distinguishing
advantage of two systems and the probability of provoking an
appropriately defined event on one of the systems.
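In symbols, writing $\Delta(\cdot)$ here for the advantage of the best distinguisher (of the relevant type) between a system and its ideal counterpart, the direct-product theorem bounds the advantage for a combination $F \star G$ as $$\Delta(F \star G) \;\leq\; 2\,\Delta(F)\,\Delta(G),$$ and, as stated above, this factor of two is optimal.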
On Achieving the ''Best of Both Worlds'' in Secure Multiparty Computation
Two settings are typically considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Protocols designed under this assumption provide full security (and, in particular, guarantee output delivery and fairness) when this assumption is correct; however, if half or more of the parties are dishonest then security is completely compromised. On the other hand, protocols tolerating arbitrarily-many faults do not provide fairness or guaranteed output delivery even if only a single party is dishonest. It is natural to wonder whether it is possible to achieve the ''best of both worlds''; namely, a single protocol that simultaneously achieves the best possible security in both the above settings. Ishai, et al. (Crypto 2006) recently addressed this question, and ruled out constant-round protocols of this type.
As our main result, we completely settle the question by ruling out
protocols using any (expected) polynomial number of rounds. Given this stark negative result, we ask what can be achieved if we are willing to assume simultaneous message transmission (or, equivalently, a non-rushing adversary). In this setting, we show that impossibility still holds for logarithmic-round protocols. We also show, for any polynomial $p$, a protocol (whose round complexity depends on $p$) that can be simulated to within closeness $O(1/p)$.
How to Win the Clone Wars: Efficient Periodic n-Times Anonymous Authentication
We create a credential
system that lets a user anonymously authenticate at most $n$ times in
a single time period. A user withdraws a dispenser of $n$ e-tokens.
She shows an e-token to a verifier to authenticate herself; each
e-token can be used only once, however, the dispenser automatically
refreshes every time period.
The only prior solution to this problem,
due to Damgård et al.~[DDP05], uses protocols that are a factor of $k$ slower for the user and verifier, where $k$ is the security parameter.
Damgård et al. also only support one authentication per time
period, while we support $n$. Because our construction is based on
e-cash, we can use existing techniques to identify a cheating user,
trace all of her e-tokens, and revoke her dispensers. We also offer a
new anonymity service: glitch protection for basically honest users
who (occasionally) reuse e-tokens. The verifier can always recognize
a reused e-token; however, we preserve the anonymity of users who do
not reuse e-tokens too often.
Key Replacement Attack on a Certificateless Signature Scheme
Yap, Heng and Goi propose an efficient certificateless signature
scheme based on the intractability of the computational
Diffie-Hellman problem, and prove that the scheme is secure in the
random oracle model. This paper shows that their certificateless
signature scheme is vulnerable to key replacement attacks, where
an adversary who replaces the public key of a signer can forge
valid signatures on any messages for that signer without knowing
the signer's private key.
Hybrid Protocol For Password-based Key Exchange in Three-party Setting
Modular design is a common approach for dealing with complex tasks in modern cryptology. The critical point of this approach is designing a secure hybrid protocol. In this paper, we study password-based key exchange in the three-party setting within the UC framework and design a hybrid protocol that UC-securely realizes such a task. That is, we first define an appropriate ideal functionality F3-pwKE for password-based three-party key exchange. Next we partition the task into two sub-tasks, three-party key distribution and password-based two-party key exchange, and propose the two relevant ideal functionalities, F3-KD and FpwKE. Finally, we present a (F3-KD, FpwKE)-hybrid protocol for password-based three-party key exchange that is proved to be UC-secure with respect to non-adaptive party corruption.
Combined Differential, Linear and Related-Key Attacks on Block Ciphers and MAC Algorithms
Differential and linear attacks are the most widely used
cryptanalytic tools to evaluate the security of symmetric-key
cryptography. Since the introduction of differential and linear
attacks in the early 1990's, various variants of these attacks have
been proposed such as the truncated differential attack, the
impossible differential attack, the square attack, the boomerang
attack, the rectangle attack, the differential-linear attack, the
multiple linear attack, the nonlinear attack and the bilinear
attack. One of the other widely used cryptanalytic tools is the
related-key attack. Unlike the differential and linear attacks, this
attack is based on the assumption that the cryptanalyst can obtain
plaintext and ciphertext pairs by using different, but related keys.
This thesis provides several new combined differential, linear and
related-key attacks, and shows their applications to block ciphers,
hash functions in encryption mode and message authentication code
(MAC) algorithms. The first part of this thesis introduces how to
combine the differential-style, linear-style and related-key
attacks: we combine them to devise the
differential-nonlinear attack, the square-(non)linear
attack, the related-key differential-(non)linear attack, the
related-key boomerang attack and the related-key
rectangle attack. The second part of this thesis presents some
applications of the combined attacks to existing symmetric-key
cryptography. Firstly, we present their applications to the block
ciphers SHACAL-1, SHACAL-2 and AES. In particular, we show that the
differential-nonlinear attack is applicable to 32-round SHACAL-2,
which leads to the best known attack on SHACAL-2 that uses a single
key. We also show that the related-key rectangle attack is
applicable to the full SHACAL-1, 42-round SHACAL-2 and 10-round
AES-192, which lead to the first known attack on the full SHACAL-1
and the best known attacks on SHACAL-2 and AES-192 that use related
keys. Secondly, we exploit the related-key boomerang attack to
present practical distinguishing attacks on the cryptographic hash
functions MD4, MD5 and HAVAL in encryption mode. Thirdly, we show
that the related-key rectangle attack can be used to distinguish
instantiated HMAC and NMAC from HMAC and NMAC with a random
function.
Secure Cryptographic Workflow in the Standard Model
Following the work of Al-Riyami et al. we define the notion of key encapsulation mechanism supporting cryptographic workflow (WF-KEM) and prove a KEM-DEM composition theorem which extends the notion of hybrid encryption to cryptographic workflow. We then generically construct a WF-KEM from an identity-based encryption (IBE) scheme and a secret sharing scheme. Chosen ciphertext security is achieved using one-time signatures. Adding a public-key encryption scheme we are able to modify the construction to obtain escrow-freeness. We prove all our constructions secure in the standard model.
Robust Computational Secret Sharing and a Unified Account of Classical Secret-Sharing Goals
We give a unified account of classical secret-sharing goals from a modern cryptographic vantage. Our treatment encompasses perfect, statistical, and computational secret sharing; static and dynamic adversaries; schemes with or without robustness; schemes where a participant recovers the secret and those where an external party does so. We then show that Krawczyk's 1993 protocol for robust computational secret sharing (RCSS) need not be secure, even in the random-oracle model and for threshold schemes, if the encryption primitive it uses satisfies only one-query indistinguishability (ind1), the only notion Krawczyk defines. Nonetheless, we show that the protocol is secure (in the random-oracle model, for threshold schemes) if the encryption scheme also satisfies one-query key-unrecoverability (key1). Since practical encryption schemes are ind1+key1 secure, our result effectively shows that Krawczyk's RCSS protocol is sound (in the random-oracle model, for threshold schemes). Finally, we prove the security for a variant of Krawczyk's protocol, in the standard model and for arbitrary access structures, assuming ind1 encryption and a statistically-hiding, weakly-binding commitment scheme.
Universally Composable and Forward Secure RFID Authentication and Key Exchange
Protocols proven secure in universally composable models remain secure under concurrent and modular composition,
and may be easily plugged into more complex protocols without having their security re-assessed with each new use.
Recently, a universally composable framework has been proposed for Radio-Frequency Identification (RFID) authentication protocols, that simultaneously provides for availability, anonymity, and authenticity. In this paper we extend that framework to support
key-compromise and forward-security issues.
We also introduce new, provably secure, and highly practical protocols for anonymous authentication and key-exchange by RFID devices.
The new protocols are lightweight, requiring only a pseudo-random bit generator. The new protocols satisfy forward-secure anonymity,
authenticity, and availability requirements in the Universal Composability model. The proof exploits pseudo-randomness in the standard model.
Towards a Separation of Semantic and CCA Security for Public Key Encryption
We address the question of whether or not semantically secure public-key encryption primitives imply the existence of chosen ciphertext attack (CCA) secure primitives. We show a black-box separation, using the methodology introduced by Impagliazzo and Rudich, for a large non-trivial class of constructions. In particular, we show that if the proposed CCA construction's decryption algorithm does not query the semantically secure primitive's encryption algorithm, then the proposed construction cannot be CCA secure.
New Identity-Based Authenticated Key Agreement Protocols from Pairings (without Random Oracles)
We present the first provably secure ID-based key agreement protocol, inspired by the ID-based encryption scheme of Gentry, in the standard (non-random-oracle) model. We show how this key agreement can be used in either escrowed or escrowless mode. We also give a protocol which enables users of separate private key generators to agree on a shared secret key. All our proposed protocols have comparable performance to all known protocols that are proven secure in the random oracle model.
A class of quadratic APN binomials inequivalent to power functions
We exhibit an infinite class of almost perfect nonlinear quadratic binomials from $\mathbb{F}_{2^n}$ to $\mathbb{F}_{2^n}$ ($n\geq 12$, $n$ divisible by 3 but not by 9). We prove that these functions are EA-inequivalent to any power function and that they are CCZ-inequivalent to any Gold function and to any Kasami function. It means that for $n$ even they are CCZ-inequivalent to any known APN function, and in particular for $n=12,24$, they are therefore CCZ-inequivalent to any power function.
It is also proven that, except in particular cases, the Gold mappings are CCZ-inequivalent to the Kasami and Welch functions.
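The APN property itself is easy to test exhaustively in small dimensions: $f$ is APN iff the derivative $x \mapsto f(x+a) + f(x)$ is 2-to-1 for every $a \neq 0$. A sketch checking the Gold function $x^3$ over $\mathbb{F}_{2^5}$ (a toy instance; the binomials of this paper live in much larger fields):

```python
# Sketch: exhaustive APN test over F_{2^n}: f is APN iff
# x -> f(x+a) + f(x) is 2-to-1 for every nonzero a.
n, mod = 5, 0b100101          # F_{2^5} via the irreducible x^5 + x^2 + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> n & 1:        # reduce once degree n is reached
            a ^= mod
        b >>= 1
    return r

def f(x):                     # Gold function x^3
    return gf_mul(gf_mul(x, x), x)

def is_apn(f):
    for a in range(1, 2 ** n):
        counts = {}
        for x in range(2 ** n):
            d = f(x ^ a) ^ f(x)      # addition in F_{2^n} is XOR
            counts[d] = counts.get(d, 0) + 1
        if any(c != 2 for c in counts.values()):
            return False
    return True

print(is_apn(f))              # True: x^3 is APN over F_{2^n} for all n
```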
Lattices that Admit Logarithmic Worst-Case to Average-Case Connection Factors
We demonstrate an \emph{average-case} problem which is as hard as
finding $\gamma(n)$-approximate shortest vectors in certain
$n$-dimensional lattices in the \emph{worst case}, where $\gamma(n)
= O(\sqrt{\log n})$. The previously best known factor for any class
of lattices was $\gamma(n) = \tilde{O}(n)$.
To obtain our results, we focus on families of lattices having
special algebraic structure. Specifically, we consider lattices
that correspond to \emph{ideals} in the ring of integers of an
algebraic number field. The worst-case assumption we rely on is
that in some $\ell_p$ length, it is hard to find approximate
shortest vectors in these lattices, under an appropriate form of
preprocessing of the number field. Our results build upon prior
works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and
Lyubashevsky and Micciancio (ICALP 2006).
For the connection factors $\gamma(n)$ we achieve, the corresponding
\emph{decisional} promise problems on ideal lattices are \emph{not}
known to be NP-hard; in fact, they are in P. However, the
\emph{search} approximation problems still appear to be very hard.
Indeed, ideal lattices are well-studied objects in computational
number theory, and the best known algorithms for them seem to
perform \emph{no better} than the best known algorithms for general
lattices.
To obtain the best possible connection factor, we instantiate our
constructions with infinite families of number fields having
constant \emph{root discriminant}. Such families are known to exist
and are computable, though no efficient construction is yet known.
Our work motivates the search for such constructions. Even
constructions of number fields having root discriminant up to
$O(n^{2/3-\epsilon})$ would yield connection factors better than the
current best of~$\tilde{O}(n)$.
Scalable Authenticated Tree Based Group Key Exchange for Ad-Hoc Groups
Task-specific groups are often formed in an ad-hoc manner within big structures, like companies. Take the following typical scenario: A high rank manager decides that a task force group for some project needs to be built. This order is passed down the hierarchy where it finally reaches a manager who calls some employees to form a group. The members should communicate in a secure way and for efficiency reasons symmetric systems are the common choice. To establish joint secret keys for groups, group key exchange (GKE) protocols were developed. If the users are part of e.g. a Public Key Infrastructure (PKI), which is usually the case within a company or a small network, it is possible to achieve authenticated GKE by modifying the protocol and particularly by including signatures.
In this paper we recall a GKE due to Burmester and Desmedt which needs only $O(\log n)$ communication and computation complexity per user, rather than $O(n)$ as in the more well-known Burmester-Desmedt protocol, and runs in a constant number of rounds. To achieve authenticated GKE one can apply compilers, however, the existing ones would need $O(n)$ computation and communication thereby mitigating the advantages of the faster protocol. Our contribution is to extend an existing compiler so that it preserves the computation and communication complexity of the non-authenticated protocol. This is particularly important for tree based protocols.
An attack on the certificateless signature scheme from EUC Workshops 2006
In this paper, we will show that the certificateless signature scheme recently proposed by Yap, Heng and Goi at EUC Workshops 2006 is insecure against a key replacement attack. Our attack shows that anyone who replaces a signer's public key can forge valid signatures for that signer without knowledge of the signer's private key.
General Distinguishing Attacks on NMAC and HMAC with Birthday Attack Complexity
Kim {\em et al}. \cite{KiBiPrHo06} and Contini {\em et al}. \cite{CoYi06} studied the security of HMAC and NMAC based on HAVAL, MD4, MD5, SHA-0 and SHA-1. In particular, they considered distinguishing attacks. However, they did not describe generic distinguishing attacks on NMAC and HMAC. In this paper, we describe generic distinguishers that distinguish NMAC and HMAC with birthday attack complexity, and we prove the corresponding security bound when the underlying compression function is a random oracle.
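The generic idea is the classical birthday distinguisher for iterated MACs: collect about $2^{n/2}$ tags until two distinct messages collide, then test whether the collision survives a common suffix; for an iterated construction it does (the internal states already collided), while for a random function it almost never does. A toy sketch of this kind of distinguisher (our illustration with a 16-bit toy MAC, not the actual NMAC/HMAC):

```python
# Sketch: generic birthday distinguisher for iterated MACs.  A tag
# collision on one-block messages is an internal-state collision, so it
# survives any common suffix -- unlike for a random function.
import os, hashlib

STATE = 2                                 # 16-bit state: collisions are cheap

def compress(state: bytes, block: bytes) -> bytes:
    return hashlib.sha256(state + block).digest()[:STATE]

def toy_mac(key: bytes, msg: bytes) -> bytes:
    state = key
    for i in range(0, len(msg), 4):       # iterate over 4-byte blocks
        state = compress(state, msg[i:i + 4])
    return state

key, seen, pair = os.urandom(STATE), {}, None
while pair is None:                       # birthday search: ~2^8 queries here
    m = os.urandom(4)
    t = toy_mac(key, m)
    if t in seen and seen[t] != m:
        pair = (seen[t], m)
    seen[t] = m

m1, m2 = pair
print(toy_mac(key, m1 + b"suff") == toy_mac(key, m2 + b"suff"))  # True
```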
A New Type of Group Signature Scheme
On the basis of the BB short signature scheme, this paper derives a new signature scheme, from which we construct a new kind of group signature scheme. The security of the new group signature scheme is based on the q-Strong Diffie-Hellman assumption and the Decisional Diffie-Hellman assumption in the random oracle model. The length of the new group signature is a little longer than that of the BBS short group signature. However, in the new group signature scheme, issuing certificates and private keys to group members does not need any trusted third party, while in the BBS short group signature scheme it does.
A New Type of Group Blind Signature Scheme Based on Bilinear Pairings
This paper constructs a new group blind signature scheme on the basis of the CZK group signature scheme. The security of the new scheme is based on the computational Diffie-Hellman problem in the random oracle model. In our scheme, blinding the group signature of CZK adds only the computation of modular additions, while the scheme in LR98 adds computations of double discrete logarithms, roots of discrete logarithms and random permutations in order to blind the group signature of CS97. As far as computational complexity is concerned, our scheme is more efficient than LR98.
On the pseudo-random generator ISAAC
This paper presents some properties of the deterministic random bit generator ISAAC (FSE'96), contradicting several statements of its introducing article. In particular, it characterizes huge subsets of internal states which induce a strongly non-uniform distribution in the $8\,192$ first bits produced. A previous attack on ISAAC presented at Asiacrypt'06 by Paul and Preneel is shown to be not relevant, since it relies on an erroneous algorithm. Finally, a modification of the algorithm is proposed to fix the weaknesses discovered.
On Zigzag Functions and Related Objects in New Metric
In \cite{BCS96}, the concept of zigzag function was introduced in
relation with oblivious transfer \cite{R84}. This subject has
later been studied in \cite{S99,DS01,CFW01}. The definition
of zigzag functions has been generalized to $s$-zigzag functions for
$2\leq s\leq n$. It turns out that zigzag functions are also interesting
combinatorial objects, thanks to their relation with self-intersecting codes and
orthogonal arrays \cite{BCS96,S99}.
The aim of this work is to formulate these objects with respect to a new metric
following the approach proposed in \cite{BNNP} and to investigate the properties
of the generalized zigzag functions and related concepts.
Statistically-Hiding Commitment from Any One-Way Function
We give a construction of statistically-hiding commitment schemes (ones where the hiding property holds information theoretically), based on the minimal cryptographic assumption that one-way functions exist. Our construction employs two-phase commitment schemes, recently constructed by Nguyen, Ong and Vadhan (FOCS `06), and universal one-way hash functions introduced and constructed by Naor and Yung (STOC `89) and Rompel (STOC `90).
Searching for Shapes in Cryptographic Protocols (extended version)
We describe a method for enumerating all essentially
different executions possible for a cryptographic protocol.
We call them the shapes of the protocol. Naturally
occurring protocols have only finitely many, indeed very few
shapes. Authentication and secrecy properties are easy to
determine from them, as are attacks and anomalies. CPSA,
our Cryptographic Protocol Shape Analyzer, implements the
method.
In searching for shapes, CPSA starts with some initial
behavior, and discovers what shapes are compatible with it.
Normally, the initial behavior is the point of view of one
participant. The analysis reveals what the other principals
must have done, given this participant's view.
The search is complete, i.e. every shape can in fact be
found in a finite number of steps. The steps in question
are applications of two authentication tests, fundamental
patterns for protocol analysis and heuristics for protocol
design. We have formulated the authentication tests in a
new, stronger form, and proved completeness for a search
algorithm based on them.
Balanced Boolean Functions with (more than) Maximum Algebraic Immunity
In this correspondence, the construction of balanced Boolean functions with maximum possible algebraic (annihilator) immunity (AI) is studied, with an additional property which is necessary to resist fast algebraic attacks. The additional property considered here is: given an $n$-variable ($n$ even) balanced function $f$ with maximum possible AI $\frac{n}{2}$, and given two $n$-variable Boolean functions $g, h$ such that $fg = h$, if $\deg(h) = \frac{n}{2}$ then $\deg(g)$ must be at least $\frac{n}{2}$. Our results can also be used to present theoretical constructions of resilient Boolean functions having maximum possible AI.
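Algebraic immunity can be computed by brute force in very small dimensions, directly from the definition: $AI(f)$ is the minimum degree of a nonzero $g$ with $gf = 0$ or $g(f \oplus 1) = 0$. A sketch for a toy function (illustration only; real constructions target much larger $n$):

```python
# Sketch: algebraic immunity by brute force -- the minimum degree of a
# nonzero annihilator of f or of f+1.  Only feasible for tiny n.
from itertools import combinations

n = 4
f = [1 if bin(x).count("1") >= 2 else 0 for x in range(2 ** n)]  # toy threshold function

def monomials_up_to(d):
    return [s for k in range(d + 1) for s in combinations(range(n), k)]

def eval_poly(monos, x):                  # XOR of the selected monomials at x
    return sum(all(x >> i & 1 for i in s) for s in monos) % 2

def has_annihilator(target, d):
    monos = monomials_up_to(d)
    for mask in range(1, 2 ** len(monos)):          # every nonzero g, deg(g) <= d
        sel = [m for j, m in enumerate(monos) if mask >> j & 1]
        if all(eval_poly(sel, x) * target[x] == 0 for x in range(2 ** n)):
            return True
    return False

ai = next(d for d in range(1, n + 1)
          if has_annihilator(f, d) or has_annihilator([1 - v for v in f], d))
print(ai)                                 # 2, the maximum n/2 for n = 4
```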
Information Theoretic Bounds on Authentication Systems in Query Model
Authentication codes provide message integrity guarantees
in an information theoretic sense
within a symmetric key setting. Information theoretic
bounds on the success probability of an adversary who has access to
previously authenticated messages have been derived by Simmons and
Rosenbaum, among others. In this paper we consider a strong attack
scenario where the adversary is adaptive and has access to
authentication and verification oracles. We derive information
theoretic bounds on the success probability of the adversary and on
the key size of the code. This brings the study of unconditionally
secure authentication systems on a par with the study of
computationally secure ones. We characterize the codes that meet
these bounds and compare our result with the earlier ones.
Universally Composable Security with Global Setup
Cryptographic protocols are often designed and analyzed under some trusted setup assumptions, namely in settings where the participants have access to global information that is trusted to have some basic security properties. However, current modeling of security in the presence of such setup falls short of providing the expected security guarantees. A quintessential example of this phenomenon is the deniability concern: there exist natural protocols that meet the strongest known composable security notions, and are still vulnerable to bad interactions with rogue protocols that use the same setup.
We extend the notion of universally composable (UC) security in a way that re-establishes its original intuitive guarantee even for protocols that use globally available setup. The new formulation prevents bad interactions even with adaptively chosen protocols that use the same setup. In particular, it guarantees deniability. While for protocols that use no setup the proposed requirements are the same as in traditional UC security, for protocols that use global setup the proposed requirements are significantly stronger. In fact, realizing Zero Knowledge or commitment becomes provably impossible, even in the Common Reference String model. Still, we propose reasonable alternative setup assumptions and protocols that allow realizing practically any cryptographic task under standard hardness assumptions even against adaptive corruptions.
Some Efficient Algorithms for the Final Exponentiation of $\eta_T$ Pairing
Recently the Tate pairing and its variations have attracted attention in cryptography. Their computation consists of a main iteration loop and a final exponentiation. The final exponentiation is necessary for generating a unique value of the bilinear pairing in the extension field. The speed of the main loop has improved thanks to recent work, e.g., the Duursma-Lee algorithm and the $\eta_T$ pairing. In this paper we discuss how to enhance the speed of the final exponentiation of the $\eta_T$ pairing in the extension field ${\mathbb F}_{3^{6n}}$. Indeed, we propose some efficient algorithms using the torus $T_2({\mathbb F}_{3^{3n}})$ that can efficiently compute an inversion and a powering by $3^{n}+1$. Consequently, the total processing cost of computing the $\eta_T$ pairing can be reduced by 17% for $n=97$.
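The torus trick for cheap inversion is easiest to see in a quadratic extension: an element of $T_2(\mathbb{F}_q) \subset \mathbb{F}_{q^2}$ has norm 1, so its inverse is simply its conjugate, i.e., a single Frobenius application. A sketch in $\mathbb{F}_{p^2}$ with a toy prime (the paper works in characteristic three, over $T_2({\mathbb F}_{3^{3n}})$, but the principle is identical):

```python
# Sketch: in the norm-1 torus T_2(F_p) inside F_{p^2} = F_p(i), i^2 = -1,
# inversion is conjugation: (a + b*i)^(-1) = a - b*i.
p = 10007                                  # toy prime, p = 3 (mod 4)

def mul(x, y):                             # multiplication in F_{p^2}
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def conj(x):
    return (x[0], -x[1] % p)

# t = u / conj(u) = u^2 / norm(u) has norm 1 for any u != 0.
u = (1234, 5678)
inv_norm = pow((u[0] ** 2 + u[1] ** 2) % p, -1, p)
t = mul(mul(u, u), (inv_norm, 0))
print(mul(t, conj(t)))                     # (1, 0): the conjugate inverts t
```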
From Weak to Strong Watermarking
The informal goal of a watermarking scheme is to ``mark'' a digital object,
such as a picture or video, in such a way that it is difficult for an
adversary to remove the mark without destroying the content of the object.
Although there has been considerable work proposing and breaking
watermarking schemes, there has been little attention given to the
formal security goals of such a scheme. In this work, we provide a
new complexity-theoretic definition of security for watermarking
schemes. We describe some shortcomings of previous attempts at
defining watermarking security, and show that security under our
definition also implies security under previous definitions. We
also propose two weaker security conditions that seem to capture the
security goals of practice-oriented work on watermarking and show
how schemes satisfying these weaker goals can be strengthened to
satisfy our definition.
On a new invariant of Boolean functions
A new invariant of the set of $n$-variable Boolean functions with respect to the action of $AGL(n,2)$ is studied. Application of this invariant to prove affine nonequivalence of two Boolean functions is outlined. The value of this invariant is computed for $PS_{ap}$ type bent functions.
Another class of quadratic APN binomials over $\F_{2^n}$: the case $n$ divisible by 4
We exhibit an infinite class of almost perfect nonlinear quadratic binomials from $\mathbb{F}_{2^{n}}$ to $\mathbb{F}_{2^{n}}$ with $n=4k$ and $k$ odd. We prove that these functions are CCZ-inequivalent to known APN power functions when $k\ne 1$. In particular it means that for $n=12,20,28$, they are CCZ-inequivalent to any power function.
Pairing-friendly elliptic curves with small security loss by Cheon's algorithm
Pairing-based cryptography is a new paradigm for public-key cryptography. An elliptic curve suitable for pairing-based cryptography is called a ``pairing-friendly'' elliptic curve. After Mitsunari, Sakai and Kasahara's traitor tracing scheme and Boneh and Boyen's short signature scheme, many protocols based on pairing-related problems such as the $q$-weak Diffie-Hellman problem have been proposed.
In Eurocrypt 2006, Cheon proposed a new efficient algorithm to solve pairing-related problems, and recently the complexity of Cheon's algorithm has been improved by Kozaki, Kutsuma and Matsuo.
Due to these two works, the influence of Cheon's algorithm should be considered when we construct curves suitable for protocols based on pairing-related problems.
Among known methods for constructing pairing-friendly elliptic curves, those using cyclotomic polynomials, such as the Brezing-Weng method and the Freeman-Scott-Teske method, are affected by Cheon's algorithm.
In this paper, we study how to reduce the security loss that Cheon's algorithm inflicts on a cyclotomic family. The proposed method constructs many pairing-friendly elliptic curves with small security loss under Cheon's algorithm, suitable for protocols based on pairing-related problems.
Last updated: 2007-01-09
The Bilinear Pairing-based Accumulator Proposed at CT-RSA'05 is not Collision Resistant
In this paper, we demonstrate that the construction proposed by Lan Nguyen at CT-RSA'05 leads to a cryptographic accumulator which is not collision resistant.
Last updated: 2006-12-04
A protocol
Sorry for the absence, but I want to revise this part after anonymous review!
Security Analysis of Voice-over-IP Protocols
The transmission of voice communications as datagram
packets over IP networks, commonly known as Voice-over-IP
(VoIP) telephony, is rapidly gaining wide acceptance.
With private phone conversations being conducted on
insecure public networks, security of VoIP communications
is increasingly important. We present a structured
security analysis of the VoIP protocol stack, which
consists of signaling (SIP), session description (SDP),
key establishment (SDES, MIKEY, and ZRTP) and secure
media transport (SRTP) protocols. Using a combination
of manual and tool-supported formal analysis, we uncover
several design flaws and attacks, most of which are caused
by subtle inconsistencies between the assumptions that
protocols at different layers of the VoIP stack make about
each other.
The most serious attack is a replay attack on SDES,
which causes SRTP to repeat the keystream used for media
encryption, thus completely breaking transport-layer
security. We also demonstrate a man-in-the-middle
attack on ZRTP, which allows the attacker to convince the
communicating parties that they have lost their shared
secret. If they are using VoIP devices without displays
and thus cannot execute the ``human authentication''
procedure, they are forced to communicate insecurely,
or not communicate at all, i.e., this becomes a denial of
service attack. Finally, we show that the key derivation
process used in MIKEY cannot be used to prove security of
the derived key in the standard cryptographic model for
secure key exchange.
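The practical impact of the SDES replay attack is the classic two-time-pad failure. The sketch below illustrates the generic consequence of keystream reuse; the data is made up, and this is not the actual SRTP packet format or cipher.

```python
# Generic keystream-reuse (two-time pad) illustration; not the actual
# SRTP packet format or cipher. Keystream and plaintexts are made up.
import os

keystream = os.urandom(32)              # reused keystream (the flaw)
p1 = b"caller:  meet at six tonight    "
p2 = b"callee:  ok, bring the documents"

c1 = bytes(a ^ b for a, b in zip(p1, keystream))
c2 = bytes(a ^ b for a, b in zip(p2, keystream))

# An eavesdropper XORs the two ciphertexts: the keystream cancels,
# leaving p1 XOR p2, from which both plaintexts are typically
# recoverable via crib-dragging.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(p1, p2))

# Knowing (or guessing) p1 immediately reveals p2:
recovered_p2 = bytes(a ^ b for a, b in zip(xored, p1))
assert recovered_p2 == p2
```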
Perfect NIZK with Adaptive Soundness
This paper presents a very simple and efficient adaptively-sound perfect NIZK argument system for any NP-language. In contrast to recently proposed schemes by Groth, Ostrovsky and Sahai, our scheme does not pose any restriction on the statements to be proven. Besides, it enjoys a number of desirable properties: it allows the common reference string (CRS) to be re-used, it can handle arithmetic circuits, and the CRS can be set up very efficiently without the need for an honest party.
We then show an application of our techniques in constructing efficient NIZK schemes for proving arithmetic relations among committed secrets, whereas previous methods required expensive generic NP-reductions.
The security of the proposed schemes is based on a strong non-standard assumption, an extended version of the so-called Knowledge-of-Exponent Assumption (KEA) over bilinear groups. We give some justification for using such an assumption by showing that the commonly-used approach for proving NIZK arguments sound does not allow for adaptively-sound statistical NIZK arguments
(unless NP is in P/poly). Furthermore, we show that the assumption used in our construction holds with respect to generic adversaries that do not exploit the specific representation of the group elements. We also discuss how to avoid the non-standard
assumption in a pre-processing model.
Long-term Security and Universal Composability
Algorithmic progress and future technological advances threaten
today's cryptographic protocols. This may allow adversaries to break a
protocol retrospectively by breaking the underlying complexity
assumptions long after the execution of the protocol. Long-term secure
protocols, i.e., protocols that do not reveal any information after the end
of their execution even to a then possibly unlimited adversary, could meet
this threat. On the other hand, in many applications, it is necessary
that a protocol is secure not only when executed alone, but within
arbitrary contexts. The established notion of universal composability
(UC) captures this requirement.
This is the first paper to study protocols which are simultaneously
long-term secure and universally composable. We show that the usual
set-up assumptions used for UC protocols (e.g., a common reference
string) are not sufficient to achieve long-term secure and composable
protocols for commitments or zero-knowledge protocols.
We give practical alternatives (e.g., signature cards) to these usual
setup assumptions and show that they enable the implementation of the
important primitives of commitment and zero-knowledge.
Universally Composable Three-Party Key Distribution
In this paper, we formulate and realize a definition of security for three-party key distribution within the universally composable (UC) framework. That is, we formulate an appropriate ideal functionality that captures the basic security requirements of three-party key distribution. We show that the UC definition of security for three-party key distribution is strictly more stringent than a previous definition known as AKE-security. Finally, we present a real-life protocol that securely realizes the formulated ideal functionality with respect to non-adaptive adversaries.
The REESSE1+ Public Key Cryptosystem v 2.21
In this paper, the authors define a coprime sequence and a lever function, and describe the five algorithms and six characteristics of a prototypal public-key cryptosystem used for encryption and signature, based on three new problems and one existing problem: the multivariate permutation problem (MPP), the anomalous subset product problem (ASPP), the transcendental logarithm problem (TLP), and the polynomial root finding problem (PRFP). They prove by reduction that MPP, ASPP, and TLP are computationally at least as hard as the discrete logarithm problem (DLP) in the same prime field, and present some evidence suggesting that each of the new problems is harder than DLP, namely unsolvable in DLP subexponential time. They demonstrate the correctness of decryption and verification, deduce that the probability of a plaintext solution being non-unique is nearly zero, and analyze the exact security of the cryptosystem against recovering a plaintext from a ciphertext, extracting a private key from a public key or a signature, and forging a signature from known signatures, public keys, and messages, on the assumption that IFP, DLP, and LSSP can be solved. The analysis indicates that the running times of effective attacks are at least O(2^n) when n = 80, 96, 112, or 128 with lg M = 696, 864, 1030, or 1216. From a practical standpoint, how to decrease the length of the modulus and increase the speed of decryption deserves further research.
Some New Hidden Ideal Cryptosystems
We propose public-key cryptosystems whose public key is a system of polynomial equations and whose private key is an ideal.
Analysis of Privacy-Preserving Element Reduction of Multiset
Among private set operations, the privacy-preserving element reduction of a multiset
can be an important tool for privacy-enhancing technology, either by itself or in combination
with other private set operations. Recently, a protocol for privacy-preserving element
reduction of a multiset, the over-threshold set-union protocol, was proposed by Kissner and Song at
Crypto 2005. In this paper, we point out a mathematical flaw in their polynomial
representation of element reduction of a multiset, and the protocol error that results from this flaw
in the polynomial representation of a multiset. We correct their polynomial representation of a
multiset and propose an over-threshold set-operation protocol based on the corrected representation.
Our over-threshold set-operation protocol can be combined with a privacy-preserving set
operation and outputs those elements that appear more than a predetermined threshold number of times
in the resulting multiset of the set operation.
The Recent Attack of Nie et al On TTM is Faulty
Recently, a paper entitled "{\it Breaking a New Instance of TTM Cryptosystem}" by Xuyun Nie, Lei
Hu, Jianyu Li, Crystal Updegrove and Jintai Ding [1] appeared, claiming a successful attack on the
TTM scheme presented in [3]. In the present article, we show that their claim is a
{\bf misunderstanding}.
Authenticated Interleaved Encryption
We present AIE (Authenticated Interleaved Encryption), a new scheme that allows nodes of a network to exchange messages securely (i.e. encrypted and authenticated) without sharing a common key or using public key cryptography.
Our scheme is well adapted to networks, such as ad hoc, overlay or sensor networks, where nodes have limited capabilities and can share only a small number of symmetric keys. It provides privacy and integrity. An eavesdropper listening to a communication can neither decrypt it nor modify it without being detected. We show that our proposal can be used in wireless sensor networks to send encrypted packets to very dynamic sets of nodes without having to establish and maintain group keys. These sets of nodes can be explicitly specified by the source or can be specified by the network according to some criteria, such as location, proximity to an object, or temperature range. As a result, a node can, for example, send encrypted data to all the nodes within a given geographical area, without having to identify the destination nodes in advance. Finally we show that our proposal can be used to implement a secure and scalable aggregation scheme for wireless sensor networks.
On the Minimal Embedding Field
We discuss the underlying mathematics that causes the embedding
degree of a curve of any genus to not necessarily correspond to the
minimal embedding field, and hence why it may fail to capture the
security of a pairing-based cryptosystem. Let $C$ be a curve of
genus $g$ defined over a finite field $\F_q$, where $q=p^m$ for a
prime $p$. The Jacobian of the curve is an abelian variety,
$J_C(\F_q)$, of dimension $g$ defined over $\F_q$. For some prime
$N$, coprime to $p$, the embedding degree of $J_C(\F_q)[N]$ is
defined to be the smallest positive integer $k$ such that $N$
divides $q^k-1$. Hence, $\F_{q^k}^*$ contains a subgroup of order
$N$. To determine the security level of a pairing-based
cryptosystem, it is important to know the minimal field containing
the $N$th roots of unity, since the discrete logarithm problem can
be transported from the curve to this field, where one can perform
index calculus. We show that it is possible to have a dramatic
(unbounded) difference between the size of the field given by the
embedding degree, $\F_{p^{mk}}$, and the minimal embedding field
that contains the $N$th roots of unity, $\F_{p^d}$, where $d\mid
mk$.
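The gap between the two fields is easy to compute directly: the embedding degree is the order of $q$ modulo $N$, while the minimal embedding field is determined by the order of $p$ modulo $N$. A small sketch follows; the toy parameters are chosen for illustration and are not taken from the paper.

```python
# Embedding degree vs. minimal embedding field: a direct computation.
# The toy parameters below (p, m, N) are illustrative assumptions.

def mult_order(a, N):
    """Multiplicative order of a modulo N (assumes gcd(a, N) == 1)."""
    o, x = 1, a % N
    while x != 1:
        x = x * a % N
        o += 1
    return o

p, m = 2, 4          # q = p^m = 16 (non-prime field, where gaps can occur)
q = p ** m
N = 3                # subgroup order, coprime to p, with N | q^k - 1

k = mult_order(q, N)     # embedding degree: smallest k with N | q^k - 1
d = mult_order(p, N)     # minimal field containing mu_N is F_{p^d}

print(k, m * k, d)       # 1, 4, 2: mu_3 already lies in F_{2^2}, not just F_{2^4}
assert (m * k) % d == 0  # d divides mk, but may be strictly smaller
```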
The embedding degree has utility as it indicates the field one must
work over to compute the pairing, while a security parameter should
indicate the minimal field containing the embedding. We discuss a
way of measuring the difference between the size of the two fields
and we advocate the use of two separate parameters. We offer a
possible security parameter, $k'=\frac{\ord_Np}{g}$, and we present
examples of elliptic curves and genus 2 curves which highlight the
difference between them. While our observation provides a proper
theoretical understanding of minimal embedding fields in
pairing-based cryptography, it is unlikely to affect curves used in
practice, as a discrepancy may only occur when $q$ is non-prime.
Nevertheless, it is an important point to keep in mind and a
motivation to recognize two separate parameters when describing a
pairing-based cryptosystem.
Zero Knowledge and Soundness are Symmetric
We give a complexity-theoretic characterization of the class of problems in NP having zero-knowledge argument systems. This characterization is symmetric in its treatment of the zero knowledge and the soundness conditions, and thus we deduce that the class of problems in NP intersect coNP having zero-knowledge arguments is closed under complement. Furthermore, we show that a problem in NP has a *statistical* zero-knowledge argument system if and only if its complement has a computational zero-knowledge *proof* system. What is novel about these results is that they are *unconditional*, i.e., do not rely on unproven complexity assumptions such as the existence of one-way functions.
Our characterization of zero-knowledge arguments also enables us to prove a variety of other unconditional results about the class of problems in NP having zero-knowledge arguments, such as equivalences between honest-verifier and malicious-verifier zero knowledge, private coins and public coins, inefficient provers and efficient provers, and non-black-box simulation and black-box simulation. Previously, such results were only known unconditionally for zero-knowledge *proof systems*, or under the assumption that one-way functions exist for zero-knowledge argument systems.
Preimage Attack on Parallel FFT-Hashing
Parallel FFT-Hashing was designed by C. P. Schnorr and S. Vaudenay
in 1993. The function is a simple and lightweight hash algorithm
with a 128-bit digest. Its basic component is a multi-permutation,
which helps in proving its resistance to collision attacks.
In this work we show a preimage attack on Parallel FFT-Hashing with
complexity $2^{t+64}+2^{128-t}$ and memory $2^{t}$ which is less
than the generic complexity $2^{128}$. When $t=32$, we can find a
preimage with complexity $2^{97}$ and memory $2^{32}$. Our method
can be described as a ``disseminative meet-in-the-middle attack'';
we actually use the properties of the multi-permutation (helpful against
collision attacks) to our advantage in the attack. Overall, this type
of attack (beating the generic one) demonstrates that the structure
of Parallel FFT-Hashing has some weaknesses when preimage attacks are
considered. To the best of our knowledge, this is the first attack
on Parallel FFT-Hashing.
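The choice $t=32$ is simply the balance point of the two terms:
\[
2^{t+64} + 2^{128-t} \ \text{is minimized when}\ t+64 = 128-t,\ \text{i.e.,}\ t=32,\ \text{giving}\ 2^{96}+2^{96} = 2^{97}.
\]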
Preimage Attacks on CellHash, SubHash and Strengthened Versions of CellHash and SubHash
CellHash \cite{DaGoVa91} and SubHash \cite{DaGoVa92} were suggested
by J. Daemen, R. Govaerts and J. Vandewalle in 1991 and 1992.
SubHash is an improved version of CellHash. Both have a 257-bit
internal state and a 256-bit hash output. In this paper, we show a
preimage attack on CellHash (SubHash) with complexity
$2^{129+t}$ and memory $2^{128-t}$ for any $t$ (with
complexity about $2^{242}$ and memory size $2^{17}$). Even
when they are modified in a well-known way, we show that we can find a
preimage of the modified CellHash (the modified SubHash) with
complexity $2^{200}$ and memory size $2^{59}$ (with
complexity about $2^{242}$ and memory size $2^{17}$).
Preimage Attack on Hashing with Polynomials proposed at ICISC'06
In this paper, we present a preimage attack on Hashing with
Polynomials \cite{Shpilrain06b}. The algorithm has an $n$-bit hash
output and an $n$-bit intermediate state (for example, $n=163$). The
algorithm is very simple and lightweight, so it can be implemented in
low-memory environments. Our attack is based on the
meet-in-the-middle attack. We show that we can find a preimage with
time complexity $2^{n-t}+2^{t}*(n+1/33)$ and memory $2^{t}$,
even when the recursive formula $H$ uses any \textsf{f} each of whose
terms has degree $2^a$ in \textsf{x} for some non-negative
integer $a$. We recommend that hash functions such as Hashing with
Polynomials have an intermediate state at least twice as large as the
output size.
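To illustrate the meet-in-the-middle principle underlying these preimage attacks, here is a sketch on a toy two-block construction with an invertible step. Everything below, including the 16-bit "cipher", is an illustrative assumption rather than any of the designs above.

```python
# Generic meet-in-the-middle preimage sketch on a TOY two-block hash
# H(m1, m2) = E(m2, E(m1, IV)) with an invertible step E. This is an
# illustration of the principle only, not the actual design attacked.

MASK = 0xFFFF          # 16-bit toy state
C = 0x6B2D             # odd => invertible multiplier mod 2^16
C_INV = pow(C, -1, 1 << 16)
IV = 0x1234

def rotl(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def rotr(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def E(k, x):   return rotl(((x ^ k) * C) & MASK, 3)

def D(k, y):   return ((rotr(y, 3) * C_INV) & MASK) ^ k  # E(k, D(k, y)) == y

def H(m1, m2): return E(m2, E(m1, IV))

target = H(0xBEEF, 0xCAFE)

# Forward phase: table of 2^t midpoints. Backward phase: invert the last
# step and look for a match. Costs ~2^t memory and ~2^(16-t) time here,
# instead of ~2^16 exhaustive search. Any preimage counts, not
# necessarily the original (m1, m2).
t = 8
table = {E(m1, IV): m1 for m1 in range(1 << t)}
for m2 in range(1 << 16):
    mid = D(m2, target)
    if mid in table:
        m1 = table[mid]
        assert H(m1, m2) == target
        print(f"preimage found: m1={m1:#06x}, m2={m2:#06x}")
        break
```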
Galois Field Commitment Scheme
In [3] the authors give the first mathematical formalization of an unconditionally secure commitment
scheme. Their construction has some similarities to one used to build authentication
codes, so they raise the question whether there is some relation between commitment schemes and
authentication schemes. They conjecture that authentication schemes with arbitration can be used,
but they stress that the information flows are different.
In this paper, we show that there is indeed a relation between unconditionally secure commitment
schemes and unconditionally secure authentication schemes, and that an unconditionally
secure commitment scheme can be built from such an authentication scheme and an unconditionally
secure cipher system. This parallel is then used to analyse a new attack against commitment
schemes that is the counterpart of the impersonation attack in an authentication system.
To investigate the opposite direction, we start by defining an optimal commitment system
and showing that this must be a resolvable design commitment scheme as proposed in the aforementioned
paper. Then, a proof is given that the resolvable design commitment schemes are a
composition of an authentication system and a cipher system and the conclusion follows that this
is the case for all optimal commitment systems.
We prove that there is a commitment scheme based on Galois Fields that uses the One-Time
Pad as the cipher system, which to our knowledge is new in the literature. The main technique
in the proof is the construction of an appropriate design for any $n$, giving rise to an authentication
system that is perfectly secure against deception attacks of levels 0 and 1. The commitment scheme
proposed here uses only very simple operations and can be implemented very efficiently in both
hardware and software.
Finally, we give a brief look at the possibility of building commitment schemes from other
primitives.
A NEW MAC: LAMA
In this paper, we propose a MAC with two versions, called LAMA. The proposed MAC has an input size of 128 bits in version 1 and 256 bits in version 2, and an output size of 128 bits. No attack better than exhaustive key search has been found against the MAC, and it admits fast implementations running at about 5 cycles/byte in both versions.
A Generic Construction of CCA-Secure Cryptosystems without NIZKP for a Bounded Number of Decryption Queries
In this paper, we propose a generic construction of chosen-ciphertext secure cryptosystems against adversaries with a bounded number of decryption queries, from arbitrary semantically secure encryption in a black-box manner. Our construction is not only an alternative to the previously known technique, i.e. the Naor-Yung paradigm, but also has some interesting properties. In particular, (1) it does not require a non-interactive zero-knowledge proof, and (2) its component ciphertexts can be compressed into only one if the underlying encryption has a certain homomorphic property. Consequently, when applying our construction to the ElGamal encryption, the ciphertext overhead of the resulting scheme is only one group element, which is considered optimal since it matches the original ElGamal. Disadvantages compared to previous schemes are that the upper bound on the number of decryption queries (e.g. 2^{30}) has to be known before the set-up phase, and that the public key is large.
Cryptography in the Multi-string Model
The common random string model permits the construction of cryptographic protocols that are provably impossible to realize in the standard model. In this model, a trusted party generates a random string and gives it to all parties in the protocol. However, the introduction of such a third party should set alarm bells ringing: Who is this trusted party? Why should we trust that the string is random? Even if the string is uniformly random, how do we know it does not leak private information to the trusted party? The very point of doing cryptography in the first place is to prevent us from trusting the wrong people with our secrets.
In this paper, we propose the more realistic multi-string model. Instead of having one trusted authority, we have several authorities that generate random strings. We do not trust any single authority; we only assume that a majority of them generate their random strings honestly. We demonstrate the use of this model for two fundamental cryptographic tasks. We define non-interactive zero-knowledge in the multi-string model and construct NIZK proofs in the multi-string model. We also consider multi-party computation and show that any functionality can be securely realized in the multi-string model.
Redundancy of the Wang-Yu Sufficient Conditions
Wang and Yu showed that MD5 was not collision-resistant, but it is known that their sufficient conditions for finding a collision of MD5 include some mistakes. In this paper, we examine the sufficient conditions by computer simulation. We show that the Wang-Yu conditions include 16 conditions that are unnecessary for making a collision. Sasaki et al. claimed that modifying one condition made it possible to remove eleven conditions. However, the result of our computer simulation shows that their conditions do not yield a collision.
Universally Composable Blind Signatures in the Plain Model
In the universal composability framework, we define an ideal functionality for blind signatures,
as an alternative to a functionality recently proposed by Fischlin. Fischlin proves that
his functionality cannot be realized in the plain model, but this result does not apply to our
functionality. We show that our functionality is realized in the plain model by a blind signature
protocol if and only if the corresponding blind signature scheme is secure with respect to
blindness and non-forgeability, as defined by Juels, Luby and Ostrovsky.
Faugere's F5 Algorithm Revisited
We present and analyze the F5 algorithm for computing Groebner bases. On the practical side, we correct minor errors in Faugere's pseudo code, and report our experiences implementing the (to our knowledge) first working public version of F5. While not designed for efficiency, it will doubtless be useful to anybody implementing or experimenting with F5. In addition, we list some experimental results, hinting that the version of F5 presented in Faugere's original paper can be considered more or less naive, and that Faugere's actual implementations are a lot more sophisticated. We also suggest further improvements to the F5 algorithm and point out some problems we encountered when attempting to merge F4 and F5 into an "F4.5" algorithm. On the theoretical side, we slightly refine Faugere's theorem that it suffices to consider all normalized critical pairs, and give the first full proof, completing his sketches. We strive to present a more accessible account of the termination and correctness proofs of F5. Unfortunately, we still rely on a conjecture about the correctness of certain optimizations. Finally, we suggest directions for future research on F5.
Non-Wafer-Scale Sieving Hardware for the NFS: Another Attempt to Cope with 1024-bit
Significant progress has been made in the design of special-purpose hardware for supporting the Number Field Sieve (NFS). From a practical cryptanalytic point of view, however, none of the published proposals for coping with the sieving step is satisfactory. Even for the best known designs, the technological obstacles faced for the parameters expected for a 1024-bit RSA modulus are significant.
Below we present a new hardware design for implementing the sieving step. The suggested chips are of moderate size and the inter-chip communication does not seem unrealistic. According to our preliminary analysis of the 1024-bit case, we expect the new design to be about 2 to 3.5 times slower than TWIRL (a wafer-scale design). Due to the more moderate technological requirements, however, from a practical cryptanalytic point of view the new design seems to be no less attractive than TWIRL.
Algebraic Cryptanalysis of the Data Encryption Standard
In spite of the growing importance of AES, the Data Encryption Standard is by no means obsolete. DES has never been broken from the practical point of view. Triple DES is believed to be very secure, is widely used, especially in the financial sector, and should remain so for many years to come. In addition, some doubts have been raised about whether its replacement AES is secure, given the extreme level of ``algebraic vulnerability'' of the AES S-boxes (their low I/O degree and exceptionally large number of quadratic I/O equations).
Is DES secure from the point of view of algebraic cryptanalysis, a new, very fast-growing area of research? We do not really hope to break it, but rather to advance the field of cryptanalysis. At first glance, DES seems to be a very poor target, as there is (apparently) no strong algebraic structure of any kind in DES. However, Courtois and Pieprzyk showed that ``small'' S-boxes always have a low I/O degree (cubic for DES, as we show). In addition, due to their low gate-count requirements, by introducing additional variables we can always get an extremely sparse system of quadratic equations.
Assessing the algebraic vulnerabilities is the easy part, and may appear unproductive. In this paper we demonstrate that in this way,
several interesting attacks on a real-life ``industrial'' block cipher can be found. One of our attacks is the fastest known algebraic attack on 6 rounds of DES. Yet it requires only ONE SINGLE known plaintext (instead of a very large quantity), which is quite interesting in itself.
Though (on a PC) we recover the key for only six rounds, in a much weaker sense we can also attack 12 rounds of DES. These results are very interesting because DES is known to be a very robust cipher,
and our methods are very generic. They can be applied to DES with modified S-boxes and potentially to other reduced-round block ciphers.
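The counting argument behind the claim that small S-boxes always satisfy low-degree I/O equations can be replayed in a few lines of linear algebra over GF(2): if the number of monomials of degree at most $d$ in the input and output bits exceeds the number of input/output pairs, nonzero equations must exist. A sketch on a made-up 3-bit S-box (not a DES S-box) follows.

```python
# Counting argument: over GF(2) there are 1 + 6 + C(6,2) = 22 monomials of
# degree <= 2 in the 3 input bits x_i and 3 output bits y_i, but only 8
# input/output pairs, so at least 22 - 8 = 14 independent quadratic
# equations hold on every pair. The 3-bit S-box is an arbitrary toy choice.
from itertools import combinations

SBOX = [3, 6, 1, 7, 4, 0, 5, 2]  # toy 3-bit S-box (assumption)

def bits(v):
    return [(v >> i) & 1 for i in range(3)]

monomials = [()] + [(i,) for i in range(6)] + list(combinations(range(6), 2))
ncols = len(monomials)  # 22

# One row per I/O pair, encoded as a bitmask over the monomial columns.
rows = []
for x in range(8):
    vals = bits(x) + bits(SBOX[x])
    rows.append(sum(1 << j for j, mon in enumerate(monomials)
                    if all(vals[i] for i in mon)))

def nullspace_gf2(rows, ncols):
    """Basis of {c : for every row r, parity(r & c) == 0}."""
    pivots = {}  # pivot column -> reduced row (lowest set bit = pivot column)
    for r in rows:
        for c in range(ncols):
            if (r >> c) & 1:
                if c in pivots:
                    r ^= pivots[c]
                else:
                    pivots[c] = r
                    break
    basis = []
    for free in (c for c in range(ncols) if c not in pivots):
        v = 1 << free
        for c in sorted(pivots, reverse=True):
            if bin((pivots[c] ^ (1 << c)) & v).count("1") % 2:
                v |= 1 << c
        basis.append(v)
    return basis

eqs = nullspace_gf2(rows, ncols)
print(len(eqs), "independent quadratic I/O equations found")  # >= 14

# Pretty-print one equation and verify it on all I/O pairs.
names = ["x0", "x1", "x2", "y0", "y1", "y2"]
mon_str = lambda m: "1" if not m else "*".join(names[i] for i in m)
c0 = eqs[0]
print(" + ".join(mon_str(monomials[j]) for j in range(ncols) if (c0 >> j) & 1), "= 0")
for r in rows:
    assert bin(r & c0).count("1") % 2 == 0
```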
Last updated: 2007-01-15
On the cost of cryptanalytic attacks
This note discusses the complexity evaluation of cryptanalytic attacks, with the example of exhaustive key search, illustrated with several ciphers from the eSTREAM project. A measure is proposed to evaluate the effective computational cost of cryptanalytic algorithms, based on the observation that the standard one is not precise enough.
Concurrent Statistical Zero-Knowledge Arguments for NP from One Way Functions
In this paper we show a general transformation from any honest-verifier
statistical zero-knowledge argument to a concurrent
statistical zero-knowledge argument. Our transformation relies only on the existence of one-way functions. It is known that the existence of zero-knowledge systems for any non-trivial language implies the existence of one-way functions. Hence our transformation \emph{unconditionally} shows that concurrent statistical zero-knowledge arguments for a non-trivial language exist if and only if standalone-secure statistical zero-knowledge arguments for that language exist.
Further, applying our transformation to the recent statistical zero-knowledge argument system of Nguyen et al (STOC'06) yields the first concurrent statistical zero-knowledge argument system for all languages in \textbf{NP} from any one way function.
Multi-Property-Preserving Hash Domain Extension and the EMD Transform
We point out that the seemingly strong pseudorandom oracle preserving (PRO-Pr) property of hash function domain-extension transforms defined and implemented by Coron et al. [12] can actually weaken our guarantees on the hash function, in particular producing a hash function that fails to be even collision-resistant (CR) even though the compression function to which the transform is applied is CR. Not only is this true in general, but we show that all the transforms presented in [12] have this weakness. We suggest that the appropriate goal of a domain-extension transform for the next generation of hash functions is to be multi-property preserving, namely that one should have a single transform that is simultaneously at least collision-resistance preserving, pseudorandom-function preserving and PRO-Pr. We present an efficient new transform that is proven to be multi-property preserving in this sense.
The Layered Games Framework for Specifications and Analysis of Security Protocols
We establish rigorous foundations for the use of modular, layered design in building complex distributed systems. Layering is key to the design of the Internet and other distributed systems; hence such solid theoretical foundations are essential, especially when considering adversarial settings, such as those of security and cryptographic protocols.
We define the basic concepts for modular, layered design: protocols, systems, configurations, executions, and models, and three relations: indistinguishability (between two systems), satisfaction (of a model by a system), and realization (by a protocol, of one model over another model). We prove several basic properties, including the {\em layering lemma} and the {\em indistinguishability lemma}. The indistinguishability lemma shows that if two systems $\Gamma_L$, $\Gamma_R$ are indistinguishable, and $\Gamma_L$ satisfies some model $M$, then $\Gamma_R$ also satisfies $M$. The layering lemma shows that, given protocols $\{\pi_i\}_{i=1}^{u}$, if every protocol $\pi_i$ realizes model $M_i$ over model $M_{i-1}$, then the composite protocol $\pi_{1\|\dots\|u}$ realizes model $M_u$ over $M_0$. This allows specification, design and analysis of each layer independently, and combining the results to ensure properties of the complete system.
Our framework is based on {\em games}, following many works in applied cryptography. This differs from existing frameworks allowing compositions of cryptographic protocols, based on {\em simulatability of ideal functionality}. Game-based models are more general and flexible than ideal functionality specifications, supporting different adversarial models and avoiding over-specification, which is essential for practical distributed systems and networks.
Revisiting the Efficiency of Malicious Two-Party Computation
In a recent paper, Mohassel and Franklin study the efficiency of secure two-party computation in the presence of malicious behavior. Their aim is to make classical solutions to this problem, such as zero-knowledge compilation, more efficient. The authors provide several schemes which are the most efficient to date. We propose a modification to their main scheme using expanders. Our modification asymptotically improves at least one measure of efficiency of all known schemes. We also point out an error and improve the analysis of one of their schemes.
Security Protocols with Isotropic Channels
We investigate the security properties of "isotropic channels",
broadcast media in which a receiver cannot reliably determine whether
a message originated from any particular sender and a sender cannot
reliably direct a message away from any particular receiver. We show
that perfect isotropism implies perfect (information-theoretic)
secrecy, and that asymptotically close to perfect secrecy can be
achieved on any channel that provides some (bounded) uncertainty as to
sender identity. We give isotropic security protocols under
both passive and active adversary models, and discuss the practicality
of realizing isotropic channels over various media.
Security-Focused Survey on Group Key Exchange Protocols
In this paper we overview a large number of currently known group key exchange
protocols while focusing on the protocols designed for more than three participants
(for an overview of two- and three-party key exchange protocols we refer to
[BM03, DB05c]). For each mentioned protocol we briefly describe the current state
of security based on the original analysis as well as on later results that have appeared in the literature.
We distinguish between (i) protocols with heuristic security arguments based
on informally defined security requirements and (ii) protocols that have been proven
secure in one of the existing security models for group key exchange. Note that
this paper continues the work started in Manulis (ePrint Rep. 2006/388), which provides an analytical
survey on security requirements and currently known models for group key exchange.
We emphasize that the following survey focuses on the security aspects of the protocols
and does not aim to provide any efficiency comparison. Readers interested in surveys of
that kind are referred to Rafaeli and Hutchison (ACM Comp. Surveys, 2003) and Amir et al. (ACM Trans. on Inf. and Syst. Sec., 2004).
Identity Based Strong Designated Verifier Proxy Signature Schemes
The paper proposes four new ID-based strong designated verifier proxy signature (SDVPS) schemes. The schemes are formed by introducing a proxy into ID-based SDVS, identity into SDVPS, and an ID-based proxy into SDVS. We also analyze the security of the schemes and their computational aspects.
Last updated: 2007-01-05
The Identity Escrow (Group Signature) Scheme at CT-RSA'05 Is Not Non-frameable
Following the attack, put forward at Asiacrypt'06, against the exculpability of the ACJT group signature scheme, we further found that Nguyen's identity escrow (group signature) scheme does not satisfy non-frameability either.
The Tate Pairing via Elliptic Nets
We derive a new algorithm for computing the Tate pairing on an elliptic curve over a finite field. The algorithm uses a generalisation of elliptic divisibility sequences known as elliptic nets, which are maps from $\Z^n$ to a ring that satisfy a certain recurrence relation. We explain how an elliptic net is associated to an elliptic curve and reflects its group structure. Then we give a formula for the Tate pairing in terms of values of the net. Using the recurrence relation we can calculate these values in linear time.
Computing the Tate pairing is the bottleneck to efficient pairing-based cryptography. The new algorithm has time complexity comparable to Miller's algorithm, and is likely to yield to further optimisation.
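The recurrence the algorithm exploits is that of elliptic divisibility sequences, which admit the standard doubling formulas shown below. The initial values in this sketch give a classical integer example; they are an illustrative choice, not data from the paper.

```python
# Elliptic divisibility sequence via the standard doubling recurrences
#   W(2m+1) = W(m+2) W(m)^3 - W(m-1) W(m+1)^3
#   W(2m)   = W(m) ( W(m+2) W(m-1)^2 - W(m-2) W(m+1)^2 ) / W(2)
# The initial values below give a classical integer EDS (associated with
# y^2 + y = x^3 - x and the point (0,0)); they are an illustrative choice.
from functools import lru_cache

INIT = {0: 0, 1: 1, 2: 1, 3: -1, 4: 1}

@lru_cache(maxsize=None)
def W(n):
    if n in INIT:
        return INIT[n]
    m, r = divmod(n, 2)
    if r:  # n = 2m + 1
        return W(m + 2) * W(m) ** 3 - W(m - 1) * W(m + 1) ** 3
    # n = 2m; division by W(2) is exact for this sequence
    num = W(m) * (W(m + 2) * W(m - 1) ** 2 - W(m - 2) * W(m + 1) ** 2)
    assert num % INIT[2] == 0
    return num // INIT[2]

print([W(n) for n in range(11)])  # [0, 1, 1, -1, 1, 2, -1, -3, -5, 7, -4]
```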
A Note on Bounded Chosen Ciphertext Security from Black-box Semantical Security
Designing public key encryption schemes withstanding chosen
ciphertext attacks, which is the highest security level for such
schemes, is generally perceived as a delicate and intricate task,
and for good reason. In the standard model, there are essentially
three well-known but quite involved approaches.
This state of affairs is to be contrasted with the situation for
semantically secure encryption schemes, a much weaker security
notion that only guarantees security in the absence of active
attack but that appears to be much easier to fulfill,
both conceptually and practically. Thus, the boundary
between passive attack and active attack seems to make up the
dividing line between which security levels are relatively easily
achieved and which are not. Our contributions are two-fold.
First, we show a simple, efficient black-box construction of a
public key encryption scheme withstanding chosen ciphertext attack
from any given semantically secure one. Our scheme is $q$-bounded in
the sense that security is only guaranteed if the adversary makes at
most $q$ adaptive chosen ciphertext queries. Here, $q$ is an
arbitrary polynomial that is fixed in advance in the key-generation.
Our work thus shows that whether or not the number of active,
adversarial queries is known in advance is the dividing line, and
not passive versus active attack. In recent work, Gertner, Malkin
and Myers show that such black-box reductions are impossible if
instead $q$ is a polynomial that only depends on the adversary.
Thus, in a sense, our result appears to be the best black-box result
one can hope for. Second, we give a non-black-box reduction from bounded chosen ciphertext security to semantic security where the length of the public/secret keys and ciphertexts drops from quadratic to linear in $q$, compared to our black-box construction. This latter scheme, however, is only of theoretical interest as it uses general NP-reductions, and our black-box construction is in fact much more practical.
Last updated: 2008-01-15
Revisit of CS98
Cramer and Shoup proposed the first provably secure practical public-key
encryption scheme in the standard model (CS98). We find a new way to construct
the security reduction, in which the decryption oracle is no longer needed. We
thus obtain a simplified version of CS98 which is more efficient than the original
scheme, and is also provably secure against chosen ciphertext attack in the standard
model.
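For reference, a toy rendition of the original CS98 scheme that this note simplifies (tiny, utterly insecure parameters; the group, generators, and hash truncation are illustrative assumptions):

```python
# Toy Cramer-Shoup (CS98) over a tiny safe-prime group, for intuition only;
# the parameters are utterly insecure and chosen just to make it runnable.
import hashlib, random

P = 467                      # safe prime: P = 2*Q + 1
Q = 233                      # prime order of the subgroup of squares
g1, g2 = 4, 9                # squares mod P => generators of the subgroup

def H(*parts):               # hash to Z_Q (truncation is an assumption)
    data = ",".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# key generation
x1, x2, y1, y2, z = (random.randrange(Q) for _ in range(5))
c = pow(g1, x1, P) * pow(g2, x2, P) % P
d = pow(g1, y1, P) * pow(g2, y2, P) % P
h = pow(g1, z, P)

def encrypt(m):              # m must lie in the subgroup of squares
    r = random.randrange(Q)
    u1, u2 = pow(g1, r, P), pow(g2, r, P)
    e = pow(h, r, P) * m % P
    a = H(u1, u2, e)
    v = pow(c, r, P) * pow(d, r * a, P) % P
    return u1, u2, e, v

def decrypt(u1, u2, e, v):
    a = H(u1, u2, e)
    # the validity check that the reduction discussed above revolves around
    if pow(u1, (x1 + y1 * a) % Q, P) * pow(u2, (x2 + y2 * a) % Q, P) % P != v:
        raise ValueError("reject: ciphertext fails validity check")
    return e * pow(u1, Q - z, P) % P   # e / u1^z  (u1 has order dividing Q)

m = pow(5, 2, P)
assert decrypt(*encrypt(m)) == m
```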
Traceable Ring Signature
The ring signature allows a signer to leak secrets anonymously,
without the risk of identity escrow. At the same time,
the ring signature provides great flexibility: no group manager,
no special setup, and dynamic group choice.
The ring signature is, however, vulnerable to malicious or irresponsible signers in some applications,
because of its anonymity. In this paper, we propose a traceable ring signature scheme. A traceable ring signature scheme is a ring signature scheme,
except that it can restrict ``excessive'' anonymity.
The traceable ring signature has a tag that consists of a list of ring members and an issue that refers to, for instance, a social affair or an election. A ring member can make any signed but anonymous opinion regarding the issue, but only once (per tag).
If the member submits another signed opinion, possibly pretending to be another person who supports the first opinion, the identity of the member is immediately revealed. If the member submits the same opinion, for instance, voting ``yes'' regarding the same issue twice, everyone can see that these two are linked.
The traceable ring signature is suited to many applications,
such as anonymous voting on a BBS, the dishonest whistle-blower problem, and unclonable group identification.
We formalize the security definitions for this primitive
and show an efficient and simple construction.
Survey on Security Requirements and Models for Group Key Exchange
In this report we provide an analytical survey on security issues that are relevant for group key exchange (GKE) protocols. We start with the description of the security requirements that have been informally described in the literature and widely used to analyze the security of earlier GKE protocols. Most of these definitions were originally stated for two-party protocols and then adapted to a group setting. These informal definitions are foundational for the formal security models for GKE protocols that appeared later, whose development, strengths, and weaknesses are also described and analyzed.
A Note on the Security of NTRUSign
At Eurocrypt '06, Nguyen and Regev presented a new key-recovery attack on the
Goldreich-Goldwasser-Halevi (GGH) lattice-based signature scheme:
when applied to NTRUSign-251 without perturbation, the attack recovers the secret key
given only 90,000 signatures. At the rump session, Whyte speculated that the number
of required signatures might be significantly decreased, to say 1,000, due to the special
properties of NTRU lattices. This short note shows that this is indeed the case: it turns out that as few as 400 NTRUSign-251 signatures are sufficient in practice to recover the
secret key. Hence, NTRUSign without perturbation should be considered totally insecure.