All papers in 2007 (Page 2 of 482 results)
Oblivious Transfer via McEliece's PKC and Permuted Kernels
We present two efficient protocols for two flavors of oblivious
transfer (OT): the Rabin and 1-out-of-2 OT using the McEliece
cryptosystem and Shamir's zero-knowledge identification scheme based
on permuted kernels. This is a step towards diversifying computational
assumptions on which OT -- the primitive of central importance -- can
be based.
Although we obtain a weak version of Rabin OT (where the malicious
receiver may decrease his erasure probability), it can nevertheless be
reduced to secure 1-out-of-2 OT.
Elaborating on the first protocol, we provide a practical construction
for 1-out-of-2 OT.
Cryptanalysis of Two New Instances of TTM Cryptosystem
In 2006, Nie et al. proposed an attack breaking an instance of the TTM
cryptosystem. However, the inventor of TTM disputed this attack and
proposed two new instances of TTM to support his viewpoint. At
the time, he did not give the details of the key construction
--- the construction of the lock polynomials in these instances,
which would be used in decryption. The two instances are claimed to
be secure against the attack of Nie et al. In this
paper, we show that these instances are both still insecure, and in
fact, they do not achieve a better design in the sense that we can
find a ciphertext-only attack utilizing the First Order
Linearization Equations while for the previous version of TTM, only
Second Order Linearization Equations can be used in the beginning
stage of the previous attack. Unlike previous attacks, we
use an iterated linearization method to break these two instances.
For any given valid ciphertext, we can find its corresponding
plaintext within -computations, after
performing, once per public key, a precomputation of complexity less
than . Our experimental results show that we have unlocked the lock
polynomials after several iterations, though we do not know the
detailed construction of lock polynomials.
X-FCSR: a new software oriented stream cipher based upon FCSRs
Feedback with Carry Shift Registers (FCSRs) are a promising alternative to LFSRs in the design of stream ciphers. Previous constructions based on FCSRs were dedicated to hardware applications. In this paper, we describe X-FCSR, a family of software-oriented stream ciphers using FCSRs. The core of the system is composed of two 256-bit FCSRs. We propose two versions, X-FCSR-128 and X-FCSR-256, which output respectively 128 and 256 bits at each iteration. We study the resistance of our design against several cryptanalytic attacks. In this way, we achieve high-throughput, secure stream ciphers suitable for software applications (6.3 cycles/byte).
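The FCSR update rule underlying such designs can be sketched as follows. This is a toy Fibonacci-style FCSR with arbitrarily chosen register size and tap positions for illustration only, not the actual X-FCSR parameters; the distinguishing feature compared to an LFSR is the integer carry.

```python
# Toy Fibonacci FCSR: like an LFSR, but the tapped cells are summed
# as integers with a carry instead of XORed. Parameters (register
# size, taps) are illustrative only, not those of X-FCSR.

def fcsr_step(state, carry, taps):
    """Advance the FCSR one step; return (new_state, new_carry, output_bit)."""
    out = state[0]                      # output the oldest bit
    s = carry + sum(state[i] for i in taps)
    new_bit = s & 1                     # feedback bit enters the register
    new_carry = s >> 1                  # carry propagates to the next step
    return state[1:] + [new_bit], new_carry, out

def fcsr_keystream(state, taps, n, carry=0):
    bits = []
    for _ in range(n):
        state, carry, out = fcsr_step(state, carry, taps)
        bits.append(out)
    return bits

print(fcsr_keystream([1, 0, 1, 1, 0, 0, 1, 0], taps=[0, 2, 5, 7], n=16))
```

The carry is what gives FCSR sequences their nonlinear (2-adic) structure, which is the property the X-FCSR designers exploit.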
On The Inequivalence Of Ness-Helleseth APN Functions
In this paper, the Ness-Helleseth functions over defined
by the form are proven to be
a new class of almost perfect nonlinear (APN) functions and they are
CCZ-inequivalent to all other known APN functions when .
The original method of Ness and Helleseth, used to show the functions
are APN for and odd , is also suitable for showing their
APN property for any prime with and odd .
Algebraic Structure Defectoscopy
We present a novel instrument for automated cryptanalysis, algebraic structure defectoscopy (ASD), suitable for
measuring the number of cipher rounds that can build one PRF round, so that four such rounds could be recommended as a Luby-Rackoff cipher secure against adaptive attacks. ASD tests can detect structural flaws in all kinds of cryptographic primitives and their implementations. We present our results for some well-known ciphers and hash functions and for some of the eSTREAM candidates. Our tools can distinguish the complete Achterbahn, Grain v1, and Grain-128 from random, detect weak keys in the complete IDEA cipher, and find fatal structural flaws even in complete ciphers like LILI, KeeLoq, or TEA in a matter of seconds. Cryptanalysts can save valuable time by requiring that all new ciphers pass not only randomness tests but also automated cryptanalysis tests like ours before being considered interesting for manual cryptanalytic study.
Last updated: 2007-10-08
Fast Point Multiplication on Elliptic Curves of Even Order
Every elliptic curve of even order over a finite field of
characteristic >3 is birationally equivalent to a curve in Jacobi
quartic form. This paper presents fast explicit formulas for
group operations on a Jacobi quartic curve. The doubling
algorithm uses only 1M+6S, mixed addition uses only 8M+2S,
and the unified addition formula uses only 9M+2S in the best case.
For elliptic curves of even order, these algorithms are more efficient
than the other algorithms in the literature.
An Efficient Range-Bounded Commitment Scheme
Checking whether a committed integer lies in a specific interval has many cryptographic applications. In Eurocrypt'98, Chan et al. proposed an instantiation (CFT for short). Based on CFT, Boudot presented an efficient range-bounded commitment scheme in Eurocrypt'2000. Both CFT proof and Boudot proof are based on the
encryption , where is an RSA
modulus whose factorization is \textit{unknown} to the prover. They
did not use a single base, as is usual, so an increase in cost occurs.
In this paper we show that it suffices to adopt a single base. The
cost of the improved Boudot proof is about half that of the
original scheme. Moreover, the key restriction in the original
scheme, i.e., that both the discrete logarithm of in base and
the discrete logarithm of in base are unknown to the prover,
which is a potential threat to the Boudot proof, is entirely
removed.
Further Musings on the Wang et al. MD5 Collision: Improvements and Corrections on the Work of Hawkes, Paddon, and Rose
The recent successful attack on the widely used hash function, the MD5 Message Digest Algorithm, was a breakthrough in cryptanalysis. The original paper, published in 2004 by Wang et al., described this attack in an obscure and elliptical manner. Hawkes, Paddon, and Rose later presented the attack in more detail, but even their paper contained numerous unproven statements and several significant errors. In a seven-fold process, this paper will prove assertions made by Hawkes, Paddon, and Rose, provide original corrections and illustrations, and explicate their work to make it more accessible to the mathematically literate reader. First, this paper will augment their introductory material by adding original insight to compare their unorthodox description of MD5 to the more conventional notation of Ron Rivest. Second, it will provide original examples for conditions that they present for the Tt. Third, it will elaborate on the description of the first block of the differential by asserting why and how the conditions on the Tt are determined. Fourth, it will develop a step-by-step analysis of the description of the second block of the differential based only on the table that Hawkes, Paddon, and Rose provide. Fifth, it will supply original proofs for the assertions that they make for the conditions for the propagation of the differences through the ft functions for the first block. Sixth, it will give both the assertions and the proofs for the propagation of the differences through the ft functions for the second block. Finally, it will correct two significant errors in the work of Hawkes, Paddon, and Rose, demonstrating that the complexity of the attack is only about half of what they stated it to be and that their Case Two does not succeed in fulfilling the conditions required for the collision differential to hold.
On Factoring Arbitrary Integers with Known Bits
We study the {\em factoring with known bits problem}, where we are
given a composite integer and oracle access to the
bits of the prime factors , . Our goal is to find
the full factorization of in polynomial time with a minimal number
of calls to the oracle. We present a rigorous algorithm that
efficiently factors given bits, where
denotes the harmonic number.
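A degenerate special case illustrates the problem setup (the paper's rigorous algorithm needs far fewer oracle bits; the parameters below are toy values of our choosing): if the oracle has revealed all but a few low-order bits of a prime factor, the factorization can be completed by brute force over the missing bits.

```python
# Toy illustration of "factoring with known bits": suppose an oracle
# revealed all but the k lowest bits of a prime p dividing N. We
# recover the factorization by trying every completion of those bits.
# (The paper's algorithm is far more efficient in oracle calls.)

def factor_with_known_high_bits(N, p_high, k):
    """p_high = p with its k low bits zeroed; try all 2^k completions."""
    for low in range(1 << k):
        candidate = p_high | low
        if candidate > 1 and N % candidate == 0:
            return candidate, N // candidate
    return None

p, q = 2_147_483_647, 1_000_003               # two primes (toy sizes)
N = p * q
k = 8
p_high = (p >> k) << k                        # oracle output: p minus its 8 low bits
print(factor_with_known_high_bits(N, p_high, k))  # → (2147483647, 1000003)
```

The interesting regime, which the paper addresses, is when the number of unknown bits is too large for such exhaustive search.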
A Meet-in-the-Middle Collision Attack Against the New FORK-256
We show that a collision attack exists against the FORK-256 Hash Function. The attack is surprisingly simple compared to existing published FORK-256 cryptanalysis work, yet is the best known result against the new, tweaked version of the hash. The attack is based on "splitting" the message schedule and compression function into two halves in a meet-in-the-middle attack. This in turn reduces the space of possible hash function results, which leads to significantly faster collision search. The attack strategy is also applicable to the original version of FORK-256 published in FSE 2006.
On the Authentication of One Popular Signcryption Scheme
Whether a recipient \textit{can prove} the validity of a signature to others is of great importance. This capability is one reason we call a signature a ``signature'' rather than something else. In this paper, we point out that one popular signcryption scheme convinces \textit{only} the document's designated recipient that the signer deliberately signed the document. The \textit{designated recipient} can \textit{check} the validity of a given signcryptext but \textit{cannot prove} it to others. We also improve the scheme, using the efficient technique developed in Schnorr's signature instead of a zero-knowledge proof, such that the receiver can \textit{check} the
validity of a given signcryptext and \textit{can prove} it to a
third party.
Group-oriented encryption secure against collude attack
A group-oriented encryption scheme is presented in this paper. In this scheme, a sender is allowed to encrypt a message using the group public key and send the ciphertext to the group. Any user in the group can independently decrypt the ciphertext via his private key. The scheme is secure against adaptive chosen-ciphertext attacks and collusion attacks.
FURTHER PROPERTIES OF SEVERAL CLASSES OF BOOLEAN FUNCTIONS WITH OPTIMUM ALGEBRAIC IMMUNITY
Thanks to a method proposed by Carlet, several classes of balanced
Boolean functions with optimum algebraic immunity are obtained. By
choosing suitable parameters, for even , the balanced
-variable functions can have nonlinearity
,
and for odd , the functions can have nonlinearity
, where the function
is described in Theorem 4.4. The algebraic
degree of some constructed functions is also discussed.
Universally Composable Multi-Party Computation with an Unreliable Common Reference String
Universally composable multi-party computation has been studied in two settings:
\begin{itemize}
\item When a majority of participants are honest, universally composable multi-party computation
is known to be possible without any assumptions.
\item When honest participants are \emph{not} in the majority, universally composable multi-party computation is known to be impossible (under any cryptographic assumption) in the bare model. On the other hand, feasibility results have been obtained (under standard cryptographic assumptions) in various augmented models, the most popular of which posits the existence of a \emph{common reference string} (CRS) available to all parties who are executing the protocol.
\end{itemize}
In either of the above settings, some \emph{assumption} regarding the protocol execution is made (i.e., that many parties are honest in the first case, or that a legitimately-chosen string is available in the second), and if this assumption is incorrect then all security is lost.
A natural question is whether it is possible to design protocols giving \emph{some} assurance of security in case \emph{either one} of these assumptions holds, i.e., a single protocol (that uses a CRS) which is secure if \emph{either} at most players are dishonest \emph{or} if up to players are dishonest (with ) but the CRS is chosen in the prescribed manner. We show that such protocols exist if and only if .
Reducing Trust in the PKG in Identity Based Cryptosystems
One day, you suddenly find that a private key corresponding to your identity is up for sale on eBay. Since you do not suspect a key compromise, it must be the PKG who is acting dishonestly and trying to make money by selling your key. How do you find out for sure, and even prove it in a court of law?
This paper introduces the concept of Accountable Authority Identity based Encryption (A-IBE). A-IBE is a new approach to mitigate the (inherent) key escrow problem in identity based encryption schemes. Our main goal is to restrict the ways in which the PKG can misbehave. In our system, if the PKG ever maliciously generates and distributes a decryption key for an Identity, it runs the risk of being caught and prosecuted.
In contrast to other mitigation approaches, our approach does not require multiple key generation authorities.
Cryptanalysis of Rational Multivariate Public Key Cryptosystems
In 1989, Tsujii, Fujioka, and Hirayama proposed a family of multivariate public key cryptosystems, where the public key is given as a set of multivariate rational functions of degree 4\cite{Tsujii-Fujioka:89}. These cryptosystems are constructed via composition of two quadratic rational maps. In this paper, we present the cryptanalysis of this family of cryptosystems. The key point of our attack is to transform a problem of decomposition of two rational maps into a problem of decomposition of two polynomial maps. We develop a new improved 2R decomposition method and other new techniques, which allow us to find an equivalent decomposition of the rational maps and break the system completely. For the example suggested for practical applications, deriving an equivalent private key is extremely fast, requiring only a few seconds on a standard PC.
Breaking the Symmetry: a Way to Resist the New Differential Attack
Sflash was recently broken by Dubois, Stern, Shamir, and others, using
a differential attack on the public key. The signature
schemes are hence no longer practical. In this paper, we study
the new attack from the point of view of symmetry, then (1) present a
simple concept (projection) for modifying several multivariate schemes
to resist the new attacks; (2) demonstrate with practical examples
that this simple method can work well; and (3) show that the same
attack-and-defence discussion applies to other big-field
multivariates. The speed of encryption schemes is not affected, and
we can still have big-field multivariate signatures resisting the
new differential attacks with speeds comparable to Sflash.
Pairings on Jacobians of Hyperelliptic Curves
Consider the Jacobian of a hyperelliptic genus-two curve defined over a finite field. Under certain restrictions on the endomorphism ring of the Jacobian, we give an explicit description of all non-degenerate, bilinear, anti-symmetric and Galois-invariant pairings on the Jacobian. From this description it follows that no such pairing can be computed more efficiently than the Weil pairing.
To establish this result, we need an explicit description of the representation of the Frobenius endomorphism on the l-torsion subgroup of the Jacobian. This description is given. In particular, we show that if the characteristic polynomial of the Frobenius endomorphism splits into linear factors modulo l, then the Frobenius is diagonalizable.
Finally, under the restriction that the Frobenius element is an element of a certain subring of the endomorphism ring, we prove that if the characteristic polynomial of the Frobenius endomorphism splits into linear factors modulo l, then the embedding degree and the total embedding degree of the Jacobian with respect to l coincide.
A Proof of Security of a Mesh Security Architecture
The IEEE 802.11s standard is tasked to provide ways of establishing and securing a wireless mesh network. One proposal establishes a Mesh Security Architecture (MSA), with an interesting key hierarchy and full protocol definitions. This paper proves the correctness and security of the MSA proposal and its corresponding protocols. We also propose and prove the security of an additional protocol (an abbreviated handshake) which offers a substantial efficiency improvement in certain instances. To prove the entire architecture secure, we utilize Protocol Composition Logic (PCL) to prove each protocol secure. From that basis, we show that the protocols compose securely, proving the entire architecture secure. We also contribute some novel concepts to PCL that allow us to prove the security of the overall architecture.
Fuzzy Private Matching (Extended Abstract)
In the private matching problem, a client and a server each hold a set of input elements. The client wants to privately compute the intersection of these two sets: he learns which elements he has in common with the server (and nothing more), while the server gains no information at all. In certain applications it would be useful to have a private matching protocol that reports a match even if two elements are only similar instead of equal. Such a private matching protocol is called \emph{fuzzy}, and is useful, for instance, when elements may be inaccurate or corrupted by errors.
We consider the fuzzy private matching problem, in a semi-honest environment. Elements are similar if they match on out of attributes. First we show that the original solution proposed by Freedman et al. is incorrect. Subsequently we present two fuzzy private matching protocols. The first, simple, protocol has bit message complexity . The second, improved, protocol has a much better bit message complexity of , but here the client incurs a factor time complexity. Additionally, we present protocols based on the computation of the Hamming distance and on oblivious transfer, that have different, sometimes more efficient, performance characteristics.
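The (non-private) matching predicate itself is simple; the protocols' job is to evaluate it without revealing anything else. A sketch of the plain functionality, with a threshold t out of T attributes (the concrete attribute values below are our own illustrative data):

```python
# The functionality computed by fuzzy private matching, shown here
# WITHOUT any privacy: two elements match if they agree on at least
# t of their T attributes. The protocols in the paper evaluate this
# obliviously; this sketch only shows what is being computed.

def fuzzy_match(x, y, t):
    """x, y: tuples of T attributes; match if >= t positions agree."""
    assert len(x) == len(y)
    return sum(a == b for a, b in zip(x, y)) >= t

def fuzzy_intersection(client_set, server_set, t):
    """What the client learns: his elements fuzzily matching some server element."""
    return [x for x in client_set if any(fuzzy_match(x, y, t) for y in server_set)]

client = [("alice", "1980-01-01", "paris"), ("bob", "1975-05-05", "rome")]
server = [("alice", "1980-01-01", "lyon"), ("carol", "1990-09-09", "oslo")]
print(fuzzy_intersection(client, server, t=2))  # → [('alice', '1980-01-01', 'paris')]
```

A private protocol must produce exactly this output for the client while the server learns nothing at all.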
Statistical Testing for Disk Encryption Modes of Operations
In this paper we present a group of statistical tests that
explore the random behavior of modes of operation when
used in disk encryption applications. The results of these tests help us
to better understand how these modes work. We tested ten modes of
operation with the presented statistical tests, five of the narrow-block
type and the other five of the wide-block type. Our analysis reveals
weaknesses in some of these modes.
Proxy Re-encryption Systems for Identity-based Encryption
A proxy re-encryption system allows the proxy to transform ciphertexts encrypted under Alice's public key into ciphertexts that can be decrypted with Bob's secret key. In this paper, we propose new proxy re-encryption systems: one for the transformation from ciphertexts encrypted under a traditional certificate-based public key into ciphertexts that can be decrypted with a secret key for Identity-Based Encryption (IBE), and the other for the transformation from ciphertexts encrypted in the IBE manner into different ciphertexts that can be decrypted with another secret key for the IBE.
Sufficient Conditions for Intractability over Black-Box Groups: Generic Lower Bounds for Generalized DL and DH Problems
The generic (aka. black-box) group model is a valuable methodology for analyzing the computational hardness of number-theoretic problems used in cryptography. Since the properties ensuring generic hardness have not been well-studied and formalized yet, for each newly proposed problem an entire hardness proof has to be done from scratch. In this work we identify criteria that guarantee the hardness of generalized DL and DH problems in an extended generic group model where algorithms are allowed to perform any operation representable by a polynomial function. Assuming our conditions are satisfied, we are able to provide negligible upper bounds on the success probability of such algorithms. As useful means for the formalization of definitions and conditions we explicitly relate the concepts of generic algorithms and straight-line programs that have only been used independently in cryptography so far.
Intrusion-Resilient Secret Sharing
We introduce a new primitive called Intrusion-Resilient Secret Sharing
(IRSS), whose security proof exploits the fact that there exist
functions which can be efficiently computed interactively with low
communication complexity in k rounds, but not in k - 1 rounds.
IRSS is a means of sharing a secret message amongst a set of players
which comes with a very strong security guarantee. The shares in an
IRSS are made artificially large so that it is hard to retrieve them
completely, and the reconstruction procedure is interactive requiring
the players to exchange k short messages. The adversaries considered
can attack the scheme in rounds, where in each round the adversary
chooses some player to corrupt and some function, and retrieves the
output of that function applied to the share of the corrupted
player. This model captures, for example, computers connected to a
network which can occasionally be infected by malicious software such as
viruses, which can compute any function on the infected machine but
cannot send out a huge amount of data.
Using methods from the Bounded-Retrieval Model, we construct an IRSS
scheme which is secure against any computationally unbounded adversary
as long as the total amount of information retrieved by the adversary
is somewhat less than the length of the shares, and the adversary
makes at most k - 1 corruption rounds (as described above, where k
rounds are necessary for reconstruction). We extend our basic scheme
in several ways in order to allow the shares sent by the dealer to be
short (the players then blow them up locally) and to handle even
stronger adversaries who can learn some of the shares completely.
As mentioned, there is an obvious connection between IRSS schemes and
the fact that there exist functions with an exponential gap in their
communication complexity for k and k - 1 rounds. Our scheme implies
such a separation which is in several aspects stronger than the
previously known ones.
Improving the Round Complexity of VSS in Point-to-Point Networks
We revisit the following question: what is the optimal round complexity of verifiable secret sharing~(VSS)? We focus here on the case of perfectly-secure VSS where the number of corrupted parties~ satisfies , with being the total number of parties. Work of Gennaro et al. (STOC~2001) and Fitzi et al. (TCC~2006) shows that, assuming a broadcast channel, 3~rounds are necessary and sufficient for efficient VSS. The efficient 3-round protocol of Fitzi et al., however, treats the broadcast channel as being available ``for free'' and does not attempt to minimize its usage. This approach leads to relatively poor round complexity when protocols are compiled for a point-to-point network.
We show here a VSS protocol that is simultaneously optimal in terms of both the number of rounds and the number of invocations of broadcast. Our protocol also has a certain ``2-level sharing'' property that makes it useful for constructing protocols for general secure computation.
A Note on Signature Standards
A major security goal for signature schemes is to prevent an
adversary from producing new valid signatures even though he can
receive valid signatures of any messages from the legitimate signer.
On the one hand, the security of elliptic curve signature schemes such
as ECDSA, ECGDSA, or ECKCDSA is based on the elliptic curve discrete
logarithm problem and on the security of the hash function
used. On the other hand, some special cases for ephemeral keys
and signature components also have to be excluded to guarantee the
security of the signature scheme. In this paper we are going to
investigate some exceptional cases, which are not covered by current
signature generation algorithms, but leak information on the private
signature key.
A Block Cipher based PRNG Secure Against Side-Channel Key Recovery
We study the security of a block cipher-based pseudorandom number generator, both in the black box world and in the physical world, separately. We first show that the construction is a secure PRNG in the ideal cipher model. Then, we demonstrate its security against a Bayesian side-channel key recovery adversary. As a main result, we show that our construction guarantees that the success rate of the adversary increases with the number of physical observations only in a limited and controlled way. Besides, we observe that, under common assumptions on side-channel attack strategies, increasing the security parameter (typically the block cipher key size) by a polynomial factor involves an increase of a side-channel attack complexity by an exponential factor, making the probability of a successful attack negligible. We believe this work provides a first interesting example of the way the algorithmic design of a cryptographic scheme influences its side-channel resistance.
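The rekeying idea behind such side-channel-resistant generators can be sketched as follows. This is our own illustrative sketch, not the paper's exact construction, and it substitutes HMAC-SHA256 from Python's standard library for the block cipher:

```python
# Sketch of a PRNG with per-iteration rekeying: each round derives one
# output block and a fresh key, so side-channel leakage on the current
# key does not accumulate across many outputs (the old key is erased).
# HMAC-SHA256 stands in for the block cipher here; this is NOT the
# paper's construction, only the general rekeying pattern.
import hmac
import hashlib

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def prng(seed, n_blocks):
    key, out = seed, []
    for _ in range(n_blocks):
        out.append(prf(key, b"output"))   # public output block
        key = prf(key, b"rekey")          # key update: previous key is discarded
    return b"".join(out)

stream = prng(b"\x00" * 32, 4)
print(len(stream), stream[:8].hex())
```

The point of the design is that each physical observation targets a different key, which is why the adversary's success rate grows only in a controlled way.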
Secret sharing on the infinite ladder
The notion of a perfect secret sharing scheme has been extended to encompass infinite access structures, in particular infinite graphs. The participants are the vertices of the graph and the edges are the minimal qualified subsets. The information ratio (the inverse of the information rate) of a graph is the largest lower bound on the amount of information, measured in secret bits, that some vertex must receive in each scheme realizing this access structure. We show that this value is 7/4 for the infinite ladder, solving an open problem. We give bounds for other infinite graphs as well.
Identity-Committable Signatures and Their Extension to Group-Oriented Ring Signatures
The identity of "Deep Throat", a pseudonym of the information source in the Watergate scandal, remained mysterious for more than three decades. In 2005, an ex-FBI official claimed that he was the anonymous source. Nevertheless, some remain unconvinced. In this paper, we introduce a new notion of identity-committable signatures (ICS) to ensure the anonymity of "Deep Throat" inside a group. A member of an organization can sign a message on behalf of himself (regular signature) or the organization (identity-committed signature). In the latter case, the signer's identity is hidden from everyone, and can be opened by himself only. We describe the requirements of ICS and give its formal definition.
Then we extend the notion of ICS to group-oriented ring signatures (GRS) which further allow the signer to hide his identity behind multiple groups. We believe a GRS scheme is more efficient and practical than a ring signature scheme for leaking secrets. Finally, we provide concrete constructions of ICS and GRS with information-theoretic anonymity, that is, the identity of the signer is fully-protected.
Multiparty Computation to Generate Secret Permutations
We make use of a universal re-encryption mixnet to efficiently perform a
secure multiparty computation to generate a secret permutation.
When complete, the permutation is shared among the players in such
a way that each player knows his own share of the permutation but no one else's.
Such a permutation is useful in dining cryptographers networks (DC-nets) to determine in which slot each player should transmit.
We also see this primitive as being useful in online gaming for either
shuffling cards or ordering players without the need for a trusted dealer or other third party.
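Ignoring privacy, the target functionality is just a uniformly random permutation, which a single trusted dealer could produce with a Fisher-Yates shuffle; the mixnet-based protocol distributes this role so that no single party learns the whole permutation. A sketch of the trusted-dealer baseline:

```python
# What the multiparty protocol computes, shown with a trusted dealer:
# a uniformly random permutation of the players' slots. The point of
# the mixnet-based protocol is to obtain the same distribution with
# no single party knowing the full permutation.
import random

def dealer_permutation(n, rng=random):
    """Fisher-Yates shuffle: uniform over all n! permutations."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

# e.g. assign each of 5 players a distinct DC-net transmission slot
slots = dealer_permutation(5)
print(slots)
```

In the actual protocol, each player holds only his own slot assignment, and universal re-encryption hides the correspondence from everyone else.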
New Local Collisions for the SHA-2 Hash Family
The starting point for collision attacks on practical hash functions is a local collision. In this paper, we make a systematic study of local collisions for the SHA-2 family. The possible linear approximations of the constituent Boolean functions are considered and certain impossible conditions for such approximations are identified. Based on appropriate approximations, we describe a general method for finding local collisions. Applying this method, we obtain several local collisions and compute the probabilities of the various differential paths. Previously, only one local collision due to Gilbert-Handschuh was known. We point out two impossible conditions in the GH local collision and provide an example of an impossible differential path for linearized SHA-2 using this local collision. Sixteen new local collisions are obtained, none of which have any impossible conditions. The probabilities of these local collisions are a little lower than that of the GH local collision. On the other hand, the absence of impossible conditions may make them more suitable for (reduced round) collision search attacks on the SHA-2 family.
A Linear Lower Bound on the Communication Complexity of Single-Server Private Information Retrieval
We study the communication complexity of single-server Private Information Retrieval (PIR) protocols that are based on fundamental cryptographic primitives in a black-box manner. In this setting, we establish a tight lower bound on the number of bits communicated by the server in any polynomially-preserving construction that relies on trapdoor permutations. More specifically, our main result states that in such constructions bits must be communicated by the server, where is the size of the server's database, and this improves the lower bound due to Haitner, Hoch, Reingold and Segev (FOCS '07). Therefore, in the setting under consideration, the naive solution in which the user downloads the entire database turns out to be optimal up to constant multiplicative factors. We note that the lower bound we establish holds for the most generic form of trapdoor permutations, including in particular enhanced trapdoor
permutations.
Technically speaking, this paper consists of two main contributions from which our lower bound is obtained. First, we derive a tight lower bound on the number of bits communicated by the sender during the commit stage of any black-box construction of a statistically-hiding bit-commitment scheme from a family of trapdoor permutations. This lower bound asymptotically matches the upper bound provided by the scheme of Naor, Ostrovsky, Venkatesan and Yung (CRYPTO '92). Second, we improve the efficiency of the reduction of statistically-hiding commitment schemes to low-communication single-server PIR, due to Beimel, Ishai, Kushilevitz and Malkin (STOC '99). In particular, we present a reduction that essentially preserves the communication complexity of the underlying single-server PIR protocol.
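The "naive solution" referred to above is easy to make concrete: the user downloads the entire database and selects the desired bit locally, so the server learns nothing about the query index. The lower bound says that, in the black-box setting considered, constructions from trapdoor permutations cannot beat this by more than a constant factor in server communication. A sketch:

```python
# Trivial single-server PIR: the server sends the whole database and
# the user selects the bit of interest locally. Server communication
# is n bits, and the query index is perfectly hidden, since the
# server receives nothing that depends on it.

def server_respond(database):
    """Server's message: the entire database, independent of any index."""
    return list(database)

def user_retrieve(response, index):
    return response[index]

db = [0, 1, 1, 0, 1, 0, 0, 1]
response = server_respond(db)
print(user_retrieve(response, 5))  # → 0
```

Non-trivial PIR asks for sublinear server communication; the result above rules that out for black-box constructions from trapdoor permutations, up to constant factors.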
On Tweaking Luby-Rackoff Blockciphers
Tweakable blockciphers, first formalized by Liskov, Rivest, and Wagner, are blockciphers with an additional input, the tweak, which allows for variability. An open problem proposed by Liskov et al. is how to construct tweakable blockciphers without using a pre-existing blockcipher. This problem has yet to receive any significant study. There are many natural questions in this area: is it significantly more efficient to incorporate a tweak directly? How do direct constructions compare to existing techniques? Are these direct constructions optimal, and for what levels of security? How large a tweak can be securely added? In this work, we address these questions for Luby-Rackoff blockciphers. We show that tweakable blockciphers can be created directly from Feistel ciphers, and in some cases show that direct constructions of tweakable blockciphers are more efficient than previously known constructions.
Statistically Hiding Sets
Zero-knowledge set is a primitive introduced by Micali, Rabin, and
Kilian (FOCS 2003) which enables a prover to commit a set to a
verifier, without revealing even the size of the set. Later the
prover can give zero-knowledge proofs to convince the verifier of
membership/non-membership of elements in/not in the committed set.
We present a new primitive called {\em Statistically Hiding Sets}
(SHS), similar to zero-knowledge sets, but providing an
information theoretic hiding guarantee, rather than one based on
efficient simulation. This is comparable to relaxing
zero-knowledge proofs to {\em witness independent proofs}. More
precisely, we continue to use the simulation paradigm for our
definition, but do not require the simulator (nor the
distinguisher) to be efficient.
We present a new scheme for statistically hiding sets, which does
not fit into the ``Merkle-tree/mercurial-commitment'' paradigm
that has been used for {\em all} zero-knowledge set constructions so far. This not only provides efficiency gains compared to the best
schemes in that paradigm, but also lets us achieve {\em
statistical} hiding; to do so, previous approaches required the prover to maintain a growing amount of state with each new proof.
Our construction is based on an algebraic tool called {\em
trapdoor DDH groups} (\tdg), introduced recently by Dent and
Galbraith (ANTS 2006). Ours is perhaps the first non-trivial
application of this tool. However, the specific hardness
assumptions we associate with \tdg are different, and of a strong
nature --- strong RSA and a knowledge-of-exponent assumption. Our
new knowledge-of-exponent assumption may be of independent
interest. We prove this assumption in the generic group model.
A Framework for Efficient and Composable Oblivious Transfer
We propose a simple and general framework for constructing oblivious
transfer (OT) protocols that are \emph{efficient}, \emph{universally
composable}, and \emph{generally realizable} from a variety of
standard number-theoretic assumptions, including the decisional
Diffie-Hellman assumption, the quadratic residuosity assumption, and
\emph{worst-case} lattice assumptions.
Our OT protocols are round-optimal (one message each way), quite
efficient in computation and communication, and can use a single
common string for an unbounded number of executions. Furthermore, the
protocols can provide \emph{statistical} security to either the sender
or receiver, simply by changing the distribution of the common string.
For certain instantiations of the protocol, even a common
\emph{random} string suffices.
Our key technical contribution is a simple abstraction that we call a
\emph{dual-mode} cryptosystem. We implement dual-mode cryptosystems
by taking a unified view of several cryptosystems that have what we
call ``messy'' public keys, whose defining property is that a
ciphertext encrypted under such a key carries \emph{no information}
(statistically) about the encrypted message.
As a contribution of independent interest, we also provide a multi-bit
version of Regev's lattice-based cryptosystem (STOC 2005) whose time
and space efficiency are improved by a linear factor in the security
parameter . The amortized encryption and decryption time is only
bit operations per message bit, and the ciphertext
expansion can be made as small as a constant; the public key size and
underlying lattice assumption remain essentially the same.
Lai-Massey Scheme and Quasi-Feistel Networks
We introduce the notion of a quasi-Feistel network, which is a generalization of the Feistel network and contains the Lai-Massey scheme as an instance. We show that some of the works on the Feistel network, including those of Luby-Rackoff, Patarin, Naor-Reingold and Piret, can be naturally extended to our setting. This gives a new proof for theorems of Vaudenay on the security of the Lai-Massey scheme, and also introduces for Lai-Massey a new construction of pseudorandom permutations, analogous to the construction of Naor-Reingold using pairwise independent permutations.
Also, we prove the birthday security of - and -round unbalanced quasi-Feistel networks with b branches against CPA and CPCA attacks, respectively. This answers an unsolved problem pointed out by Patarin et al.
Last updated: 2009-06-23
Secure multi-party computation on incomplete networks
Uncategorized
Secure multiparty computation of a multivariate function is a central problem in cryptography. It is known that secure multiparty computation can be realized by a set of parties iff the connectivity of the underlying (authenticated) communication network is more than twice the number of corrupted parties. This impossibility result makes secure multiparty computation far less applicable in practice, as most deployed networks have a much lower degree than and one would ideally like to tolerate corrupted parties.
This work considers a model for (unconditional) secure multiparty computation on networks of low degree, in which authenticated channels are available between only a few pairs of parties. In this setting, not all honest parties can achieve the traditional security guarantees of multiparty computation. This formulation of secure multiparty computation, which permits some of the honest parties to be "sacrificed," is called almost everywhere secure computation (a.e.s.c.). We show how to realize a.e.s.c. on a few special families of incomplete networks, for the case of Byzantine corruptions.
Analysis of Underlying Assumptions in NIST DRBGs
In \cite{NIST}, four different DRBGs are recommended for cryptographic purposes. Each generator is based on some underlying cryptographic concept. This article examines each of these concepts to determine the necessary and sufficient conditions for the DRBG to be secure in its generation process. In addition, the effects of failure of typical cryptographic requirements of each underlying concept are discussed.
From \cite{MC}, permutation-based DRBGs are never indistinguishable from a true random source. From \cite{DB}, elliptic-curve-based DRBGs are secure provided that a set of problems on elliptic curves remains difficult. This article demonstrates that a pseudo-random family is required for both hash-based and HMAC-based DRBGs.
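For the hash-based case, the structure under analysis can be sketched roughly as follows. This is a greatly simplified illustration in the spirit of a hash-based DRBG, with made-up domain-separation constants and state handling; it is not the NIST-specified Hash_DRBG algorithm.

```python
import hashlib

class SimpleHashDRBG:
    """Greatly simplified sketch of a hash-based DRBG. The security of
    such a generator rests on the hash behaving like a pseudo-random
    family; constants here are illustrative, not standardized."""

    def __init__(self, seed: bytes):
        # Derive the initial internal state V from the seed material.
        self.v = hashlib.sha256(b"init" + seed).digest()

    def generate(self, nbytes: int) -> bytes:
        out = b""
        counter = 0
        # Output blocks are hashes of (state, counter), as in Hashgen.
        while len(out) < nbytes:
            out += hashlib.sha256(self.v + counter.to_bytes(4, "big")).digest()
            counter += 1
        # Update the state so past output does not reveal future output.
        self.v = hashlib.sha256(b"update" + self.v).digest()
        return out[:nbytes]
```

If the hash is a random oracle this is clearly pseudo-random; the article's point is pinning down which weaker properties of the hash suffice.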
Security Analysis of WAPI Authentication and Key Exchange Protocol
We first give an in-depth security analysis of the authenticated key exchange protocol WAI in WAPI (WLAN Authentication and Privacy Infrastructure), point out its flaws, and improve it. Next, we give a security proof of the new protocol WAI' in the CK security model, which indicates that WAI' has the corresponding security attributes in the CK security model and satisfies the requirements of WAPI.
Updated standards for validating elliptic curves
We give a concise statement of a test for security of elliptic
curves that should be inserted into the standards for elliptic curve
cryptography. In particular, current validation for parameters
related to the MOV condition that appears in the latest draft of the
IEEE P1363 standard \cite[Section A.12.1, Section A.16.8]{P1363}
should be replaced with our subfield-adjusted MOV condition.
Similarly, the Standards for Efficient Cryptography Group's document
SEC 1 \cite{sec_1} should make adjustments accordingly.
A New Security Model for Cross-Realm C2C-PAKE Protocol
Uncategorized
Cross-realm client-to-client password authenticated key exchange (C2C-PAKE) schemes are designed to enable two clients in different realms to agree on a common session key using different passwords. In 2006, Yin-Bao presented the first provably secure cross-realm C2C-PAKE, whose security is proven rigorously within a formally defined security model and based on the hardness of some computationally intractable assumptions. However, soon after, Phan et al. pointed out that the Yin-Bao scheme was flawed. In this paper, we first analyze the necessary security attributes in the cross-realm C2C-PAKE scenario, and then give a new security model for cross-realm C2C-PAKE. Analogous to the general construction of a 3PAKE protocol in the single-server C2C-PAKE setting, we give a general construction of a cross-realm C2C-PAKE protocol, whose security is proved in the new security model.
Multi-Party Indirect Indexing and Applications
Uncategorized
We develop a new multi-party generalization of Naor-Nissim indirect indexing, making it possible for many participants to simulate a RAM machine with only poly-logarithmic blow-up. Our most efficient instantiation (built from length-flexible additively homomorphic public key encryption) improves the communication complexity of secure multi-party computation for a number of problems in the literature. Underlying our approach is a new multi-party variant of oblivious transfer which may be of independent interest.
Efficient Implementation of the Pairing on Mobilephones using BREW
Pairing based cryptosystems can accomplish novel security applications, such as ID-based cryptosystems, which have not been constructed efficiently without the pairing. The processing speed of pairing based cryptosystems is relatively slow compared with other conventional public key cryptosystems. However, several efficient algorithms for computing the pairing have been proposed, namely the Duursma-Lee algorithm and its variants.
In this paper, we present an efficient implementation of the pairing on some mobilephones. The processing speed of our implementation on ARM9 processors on BREW achieves under 100 milliseconds using the supersingular curve over . It has become efficient enough to implement security applications, such as ID-based cryptosystems and broadcast encryption, using the pairing on BREW mobilephones.
On the security of a class of image encryption schemes
Uncategorized
Recently four chaos-based image encryption schemes were proposed.
Essentially, the four schemes can be classified into one class,
which is composed of two basic parts: permutation of position and
diffusion of pixel value with the same cipher-text feedback
function. The operations involved in the two basic parts are
determined by a pseudo random number sequence (PRNS) generated from
iterating a chaotic dynamic system. According to the security
requirement, the two basic parts are performed alternatively for
some rounds. Although the designers claimed that the schemes are of
high quality, we found the following security problems: 1) the
schemes are not sensitive to the changes of plain-images; 2) the
schemes are not sensitive to the changes of the key streams
generated by any secret key; 3) there exists a serious flaw in the
diffusion function; 4) the schemes can be broken with no more than
chosen-images when the iteration number
is equal to one, where is the size of the plain-image and
is the number of different pixel values; 5) the cryptanalysis on one
scheme proposed by another research group is questionable.
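The permutation-then-diffusion structure shared by this class of schemes can be illustrated with a toy sketch. The logistic map, its parameters, and the feedback rule below are illustrative assumptions standing in for the PRNS and functions of any specific proposed scheme.

```python
def logistic_prns(x0: float, n: int, r: float = 3.99):
    """Pseudo-random number sequence (PRNS) from iterating the logistic
    map, a chaotic system commonly used in these schemes (parameters
    here are illustrative)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def encrypt_round(pixels, key_x0=0.3141):
    """One round of the generic two-part structure: position permutation
    followed by pixel-value diffusion with cipher-text feedback."""
    n = len(pixels)
    prns = logistic_prns(key_x0, 2 * n)
    # Part 1: permutation of position, with the order driven by the PRNS.
    order = sorted(range(n), key=lambda i: prns[i])
    permuted = [pixels[i] for i in order]
    # Part 2: diffusion of pixel value with cipher-text feedback --
    # each output byte depends on the key stream and the previous output.
    out, prev = [], 0
    for i, p in enumerate(permuted):
        k = int(prns[n + i] * 256) % 256
        c = (p + k + prev) % 256
        out.append(c)
        prev = c
    return out, order
```

The attacks summarized above exploit exactly this shape: the feedback chain diffuses changes in only one direction, and the PRNS does not depend on the plain-image, so differential and chosen-image techniques apply.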
VHASH Security
VHASH is an almost-delta-universal hash family, designed for exceptional performance on computers that multiply 64-bit quantities efficiently. Changes to the algorithm detailed in this note improve both security and performance over the original 2006 version. Speed is improved through a newly analyzed hash construction which allows the use of lower-degree polynomials. Claimed security is higher due primarily to improved analysis and a change in prime modulus. The result is a hash family capable of hashing cache-resident one kilobyte messages on the Intel Core 2 architecture at a rate of about one-half processor cycle per byte of message with a collision probability of less than .
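The core idea behind such hash families, polynomial evaluation over a prime field, can be sketched as follows. The Mersenne modulus and single-level structure here are illustrative only; VHASH's actual primes, multi-level construction, and the stronger delta-universal (difference) property differ.

```python
P = (1 << 61) - 1  # a Mersenne prime modulus (illustrative; not VHASH's)

def poly_hash(key: int, message_words: list) -> int:
    """Polynomial-evaluation hash: H_k(m) = sum_i m_i * k^i mod P,
    computed by Horner's rule. Two distinct equal-length messages of L
    words agree for at most L choices of k, so over a random key the
    collision probability is at most L / P -- the almost-universal
    property that families like VHASH build on."""
    acc = 0
    for w in reversed(message_words):
        acc = (acc * key + w) % P
    return acc
```

Lowering the polynomial degree (fewer words per evaluation) is exactly what trades collision probability against speed, which is the lever the note's new construction adjusts.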
Mobile Phones as Secure Gateways for Message-Based Ubiquitous Communication (Revised)
For ubiquitous communication, self-organising ad-hoc networks are
becoming increasingly important. We consider mobile phones as appropriate
secure gateways to provide access to the Internet for external
machines with low communication needs. A message-based approach is
best in such a scenario with moving mobile phones and machines. In
this paper we propose a security model for access control to the
communication infrastructure, which is also message oriented. To
meet the requirements of ubiquitously communicating machines, all
algorithms on the sender's side are based on symmetric cryptography
resulting in low computation requirements. Our sophisticated
symmetric key infrastructure for access control is based on unique
combinations of keys and is complemented by effective key
management. This results in a carrier-grade security level although
many parties share the same keys. Adopting the Subscriber Identity
Module as a secure storage and computing module achieves the
trustworthiness of the mobile phone. This makes it possible to use
the mobile phone not only as a user terminal but also as a trusted
infrastructure component of the mobile network.
This document is an update of earlier work [BWS07]
presented at the Workshop in Information Security Theory and
Practices 2007 in Crete, Greece.
A Major Vulnerability in RSA Implementations due to MicroArchitectural Analysis Threat
Recently, Aciicmez, Koc, and Seifert have introduced new side-channel analysis types, namely Branch Prediction Analysis (BPA) and Simple Branch Prediction Analysis (SBPA), which take advantage of branch mispredictions that occur during the operations of cryptosystems [4, 5]. Even more recently, Aciicmez has developed another attack type, I-cache analysis, which exploits the internal functionalities of instruction/trace caches [1]. These MicroArchitectural Analysis (MA) techniques, more specifically SBPA and I-cache Analysis, have the potential of disclosing the entire execution flow of a cryptosystem, as stated in [4, 1]. Our focus in this paper is that these attacks can reveal whether an extra reduction step is performed in each Montgomery multiplication operation. First Walter et al. and then Schindler developed attacks on RSA which result in a total break of the system if the occurrences of extra reduction steps can be determined with a reasonable error rate [39, 30, 29]. These attacks may be viewed as theoretical in the sense that neither Walter et al. nor Schindler implemented actual attacks on real systems; instead they assumed that side-channel information obtained via power and timing analysis would reveal such occurrences of the extra reduction step. In this paper we adapt the attack from [30] to the current OpenSSL standard and put this attack into practice, proving its practicality via MA. The second part of the attack exploits the previously gathered information on the required extra reductions in an optimal way, using advanced stochastic methods such as the formulation and analysis of stochastic processes.
Our results show the feasibility of compromising current RSA implementations such as OpenSSL. After we shared our results with the OpenSSL development team, they included a patch in the stable branch ([45]), which allows users to compile an OpenSSL version that is resistant against our attack ([46]). In particular, this patch will affect the upcoming version 0.9.8f. We also contacted US-CERT, who informed software vendors. US-CERT assigned the vulnerability explained in this paper the CVE name CVE-2007-3108 and CERT vulnerability number VU#724968, and issued a vulnerability note ([47–49]). We point out that this publication appears in accordance with the OpenSSL development team.
Several countermeasures have been developed and employed in widely used cryptographic libraries like OpenSSL to mitigate such side-channel analysis threats. However, the current implementations still do not provide sufficient protection against MicroArchitectural Analysis, despite all the sophisticated mitigation techniques employed in these implementations. In this paper, we show that one can completely break the RSA implementation of the current OpenSSL version (v.0.9.8e) even if the most secure configuration, including all of the countermeasures against side-channel and MicroArchitectural analysis, is in place. We have only analyzed OpenSSL, and thus do not currently know the strength of other cryptographic libraries. Other libraries and software products need to be thoroughly analyzed and, if necessary, appropriately modified. At the least, developers of current software applications that rely on the OpenSSL RSA implementation need to update their products based on the recent OpenSSL changes. Our results indicate that MicroArchitectural Analysis threatens at least 60% of the internet traffic worldwide, and that current systems should be analyzed thoroughly to evaluate their overall strength against MicroArchitectural Analysis ([44]). Finally, we discuss appropriate countermeasures that should be employed in security systems.
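The data-dependent "extra reduction" these attacks observe can be illustrated with a sketch of Montgomery multiplication. This is a textbook big-integer formulation, not OpenSSL's word-level implementation; it simply makes the leaking conditional subtraction explicit.

```python
def montgomery_multiply(a, b, n, r_bits):
    """Montgomery multiplication sketch for odd modulus n. Returns
    a*b*R^-1 mod n (with R = 2^r_bits > n) and whether the final
    conditional subtraction -- the 'extra reduction' that the side
    channel reveals -- was performed."""
    R = 1 << r_bits
    n_prime = -pow(n, -1, R) % R          # n * n_prime == -1 (mod R)
    t = a * b
    m = (t * n_prime) % R
    u = (t + m * n) >> r_bits             # exact division by R
    extra_reduction = u >= n
    if extra_reduction:                   # data-dependent step that leaks
        u -= n
    return u, extra_reduction
```

Whether the subtraction fires depends on the operands, so observing it across many exponentiation steps (via timing, power, SBPA, or I-cache analysis) reveals information about the secret exponent.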
Encryption Techniques for Secure Database Outsourcing
While the idea of database outsourcing is becoming increasingly popular, the associated security risks still prevent many potential users from deploying it. In particular, the need to give a third party, the database service provider, full access to one's data remains a major obstacle. A seemingly obvious solution is to encrypt the data in such a way that the service provider retains the ability to perform relational operations on the encrypted database. In this paper we present a model and an encryption scheme that solves this problem at least partially. Our approach represents a provably secure solution to the database outsourcing problem that allows the operations exact select, Cartesian product, and projection, and that guarantees that the probability of erroneous answers is negligible. Our scheme is simple and practical, and it allows effective searches on encrypted tables: for a table consisting of n tuples the scheme performs a search in O(n) steps.
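One common way to support exact select on encrypted data is to store a deterministic keyed tag beside a randomized ciphertext, so the server can match tags without learning values. The sketch below (HMAC tags plus a hash-counter keystream) is an illustrative construction under that general idea, not the paper's actual scheme.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Counter-mode keystream from SHA-256 (illustrative randomized cipher).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_cell(enc_key: bytes, tag_key: bytes, value: bytes):
    """Randomized ciphertext plus a deterministic HMAC tag that lets the
    server answer exact-select queries without seeing the value."""
    nonce = os.urandom(16)
    ct = bytes(v ^ k for v, k in zip(value, keystream(enc_key, nonce, len(value))))
    tag = hmac.new(tag_key, value, hashlib.sha256).digest()
    return nonce, ct, tag

def select_exact(table, tag_key: bytes, value: bytes):
    """Server-side exact select: an O(n) scan comparing deterministic tags."""
    want = hmac.new(tag_key, value, hashlib.sha256).digest()
    return [row for row in table if row[2] == want]
```

The linear scan mirrors the O(n) search cost stated in the abstract; the deterministic tag is what makes equality testable while the randomized ciphertext hides everything else.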
New Constructions for UC Secure Computation using Tamper-proof Hardware
The Universal Composability framework was introduced by Canetti to study the security of protocols which are concurrently executed with other protocols in a network environment. Unfortunately, it was shown that in the so-called plain model, a large class of functionalities cannot be securely realized. These severe impossibility results motivated the study of other models involving some sort of setup assumption, where general positive results can be obtained. Until recently, all the proposed setup assumptions required some trusted third party (or parties).
Katz recently proposed using a \emph{physical setup} to avoid such trusted setup assumptions. In his model, the physical setup phase includes the parties exchanging tamper proof hardware tokens implementing some functionality. The tamper proof hardware is modeled so as to assume that the receiver of the token can do nothing more than observe its input/output characteristics. It is further assumed that the sender \emph{knows} the program code of the hardware token which it distributed. Based on the DDH assumption, Katz gave general positive results for universally composable multi-party computation tolerating any number of dishonest parties, making this model quite attractive.
In this paper, we present new constructions for UC secure computation using tamper proof hardware (in a stronger model). Our results represent an improvement over the results of Katz in several directions, using substantially different techniques. Interestingly, our security proofs do not rely on being able to rewind the hardware tokens created by malicious parties. This means that we are able to relax the assumption that the parties \emph{know} the code of the hardware token which they distributed. This allows us to model real-life attacks where, for example, a party may simply pass on a token obtained from one party to another without actually knowing its functionality. Furthermore, our construction models the interaction with the tamper-resistant hardware as a simple request-reply protocol. Thus, we show that the hardware tokens used in our construction can be \emph{resettable}. In fact, it suffices to use tokens which are completely stateless (and thus cannot execute a multi-round protocol). Our protocol is also based on general assumptions (namely enhanced trapdoor permutations).
Towards Key-Dependent Message Security in the Standard Model
Uncategorized
Standard security notions for encryption schemes do not guarantee any security if the encrypted messages depend on the secret key. Yet it is exactly the stronger notion of security in the presence of key-dependent messages (KDM security) that is required in a number of applications: most prominently, KDM security plays an important role in analyzing cryptographic multi-party protocols in a formal calculus. But although often assumed, the mere existence of KDM secure schemes is an open problem. The only previously known construction was proven secure in the random oracle model.
We present symmetric encryption schemes that are KDM secure in the standard model (i.e., without random oracles). The price we pay is that we achieve only a relaxed (but still useful) notion of key-dependent message security. Our work answers (at least partially) an open problem posed by Black, Rogaway, and Shrimpton. More concretely, our contributions are as follows:
- We present a (stateless) symmetric encryption scheme that is information-theoretically secure in the face of a bounded number and length of encryptions for which the messages depend in an arbitrary way on the secret key.
- We present a stateful symmetric encryption scheme that is computationally secure in the face of an arbitrary number of encryptions for which the messages depend only on the respective current secret state/key of the scheme. The underlying computational assumption is minimal: we assume the existence of one-way functions.
- We give evidence that the only previously known KDM secure encryption scheme cannot be proven secure in the standard model (i.e., without random oracles).
Universally Composable Multiparty Computation with Partially Isolated Parties
It is well known that universally composable multiparty computation
cannot, in general, be achieved in the standard model without setup
assumptions when the adversary can corrupt an arbitrary number of
players. One way to get around this problem is by having a
\emph{trusted third party} generate some global setup such as a
\emph{common reference string (CRS)} or a \emph{public key
infrastructure (PKI)}. The recent work of Katz shows that we may
instead rely on physical assumptions, and in particular
\emph{tamper-proof hardware tokens}. In this paper, we consider a similar but \emph{strictly weaker}
physical assumption. We assume that a player (Alice) can
\emph{partially isolate} another player (Bob) for a brief portion of
the computation and prevent Bob from communicating more than some limited number of bits with the environment.
For example, isolation might be achieved by asking Bob to put his functionality on a tamper-proof hardware token and assuming
that Alice can prevent this token from communicating to the outside world.
Alternatively, Alice may interact with Bob directly but in a special office which she administers and where there are no high-bandwidth
communication channels to the outside world. We show that, under \emph{standard} cryptographic assumptions, such physical setup can
be used to UC-realize any two party and multiparty computation in
the presence of an active and \emph{adaptive} adversary corrupting
any number of players. We also consider an alternative scenario, in
which there are some trusted third parties but no single such party
is trusted by all of the players. This compromise allows us to
significantly limit the use of the physical set-up and hence might
be preferred in practice.
Isolated Proofs of Knowledge and Isolated Zero Knowledge
We introduce a new notion called -isolated proofs of knowledge
( -IPoK). These are proofs of knowledge where a cheating prover
is allowed to exchange up to bits of communication with some
external adversarial environment during the run of the proof.
Without any additional setup assumptions, no witness hiding protocol can be an -IPoK for \emph{unbounded} values of .
However, for any \emph{pre-defined} threshold , and any relation in NP, we construct
an -IPoK protocol for that relation. The resulting protocols
are zero knowledge (ZK) in the standard sense, i.e.,
w.r.t. a verifier that communicates only with the prover during the proof.
The cost of having a large threshold is a large communication complexity of the constructed protocol.
We analyze these costs and present a solution that is asymptotically optimal.
If a cheating verifier is allowed to communicate arbitrarily with an
external environment, it is not possible to construct an -IPoK that
is also ZK with respect to such a verifier. As another new notion,
we define -isolated zero knowledge ( -IZK) where the
verifier is -isolated. For every relation in NP and
every , we construct an -IPoK protocol that is
also -IZK.
We describe several applications of -IPoK protocols
under the physical assumption that one can -isolate a prover for
the duration of the proof phase. Firstly, we can use a witness
indistinguishable (WI) -IPoK to prevent ``man-in-the-middle''
attacks on identification schemes. Prior results for this scenario
required all verifiers to register keys under a PKI, or the ability to fully
isolate the prover. Secondly, a partially isolated prover can register a public key and use a WI -IPoK to prove knowledge of the corresponding secret key to another party acting as a verifier. This allows us to set up a PKI where the key registrant does not need to trust the Certificate Authority. The PKI is not perfect since the proof is only witness indistinguishable and not zero knowledge. In a companion paper, we show how to set up such a PKI and use it to implement arbitrary multiparty computation securely in the UC framework without relying on any trusted third parties.
Remote Power Analysis of {RFID} Tags
Uncategorized
We describe the first power analysis attack on passive RFID tags. Compared to standard power analysis attacks, this attack is unique in that it requires no physical contact with the device under attack. The power analysis can be carried out even if both the tag and the attacker are passive and transmit no data, making the attack
very hard to detect.
As a proof of concept, we use power analysis to extract the kill passwords from Class 1 EPC tags operating in the UHF frequency range. Tags from several major vendors were successfully attacked. Our attack can be extended to HF tags and to remote fault analysis.
The main significance of our attack is not in the discovery of kill passwords but in its implications on future tag design -- any cryptographic functionality built into tags needs to be designed to be resistant to power analysis, and achieving this resistance is an undertaking which has an effect both on the price and on the
performance of tags.
(this is my Master's thesis, carried out under the supervision of Prof. Adi Shamir. It may be considered as the extended version of the article "Remote Password Extraction from RFID Tags", recently published in IEEE Transactions on Computers and indexed as http://dx.doi.org/10.1109/TC.2007.1050 or as http://ieeexplore.ieee.org/iel5/12/4288079/04288095.pdf)
A Tunable Broadcast Encryption Scheme
In this paper, we describe yet another broadcast encryption scheme
for stateless receivers. The main difference between our scheme and
the classical schemes derived from the complete subtree and its
subsequent improvements is that in our scheme the group management
is based upon a more adaptable data structure. In these classical
schemes, users must be spread on a tree structure where each
level of the tree is associated to some distinguishing property of
the users. The fact that the underlying data structure is a fixed
tree is a strong limitation for some applications where an operator
wants to select users very dynamically, following criteria with
changing levels of priority. Our scheme may be thought of as a
complete subtree scheme in which the different levels of the tree can
be exchanged, making it very efficient to revoke or select a class of
users. It is also very efficient in cases where there exist very
unbalanced groups of users.
This scheme allows one to select or revoke users by sending
ciphertexts of linear size with respect to the number of groups,
which is in general far less than the number of users. Moreover, by
using a specific group repartition, it is possible to recover a tree
structure in order to apply the classical methods, which guarantees
that our scheme is in general as efficient as the usual ones.
We prove that our scheme is fully collusion secure in the generic
group model with pairing.
A Tight High-Order Entropic Quantum Uncertainty Relation With Applications
We derive a new entropic quantum uncertainty relation involving min-entropy. The relation is tight and can be applied in various quantum-cryptographic settings.
Protocols for quantum 1-out-of-2 Oblivious Transfer and quantum Bit Commitment are presented and the uncertainty relation is used to prove the security of these protocols in the bounded-quantum-storage model according to new strong security definitions.
As another application, we consider the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers. The uncertainty relation allows us to prove the security of QKD protocols in this setting while tolerating considerably higher error rates compared to the standard model with unbounded adversaries. For instance, for the six-state protocol with one-way communication, a bit-flip error rate of up to 17% can be tolerated (compared to 13% in the standard model).
Our uncertainty relation also yields a lower bound on the min-entropy key uncertainty against known-plaintext attacks when quantum ciphers are composed. Previously, the key uncertainty of these ciphers was only known with respect to Shannon entropy.
Secure Identification and QKD in the Bounded-Quantum-Storage Model
We consider the problem of secure identification: user U proves to server S that he knows an agreed (possibly low-entropy) password w, while giving away as little information on w as possible, namely the adversary can exclude at most one possible password for each execution of the scheme. We propose a solution in the bounded-quantum-storage model, where U and S may exchange qubits, and a dishonest party is assumed to have limited quantum memory. No other restriction is posed upon the adversary.
An improved version of the proposed identification scheme is also secure against a man-in-the-middle attack, but requires U and S to additionally share a high-entropy key k. However, security is still guaranteed if one party loses k to the attacker but notices the loss. In both versions of the scheme, the honest participants need no quantum memory, and noise and imperfect quantum sources can be tolerated. The schemes compose sequentially, and w and k can securely be re-used.
A small modification to the identification scheme results in a quantum-key-distribution (QKD) scheme, secure in the bounded-quantum-storage model, with the same re-usability properties of the keys, and without assuming authenticated channels. This is in sharp contrast to known QKD schemes (with unbounded adversary) without authenticated channels, where authentication keys must be updated, and unsuccessful executions can cause the parties to run out of keys.
Efficient Password-based Authenticated Key Exchange without Public Information
Since the first password-based authenticated key exchange (PAKE) was
proposed, it has enjoyed a considerable amount of interest from the
cryptographic research community. To the best of our knowledge, most
proposed PAKEs based on Diffie-Hellman key exchange need some public
information, such as generators of a finite cyclic group. However,
in a client-server environment, not all servers use the same public
information, which requires clients to authenticate that public
information before beginning PAKE. This is cumbersome for users.
Worse, it may introduce security problems for PAKE, such as
substitution attacks. To remove these problems, in this paper we
present an efficient password-based authenticated key exchange
protocol without any public information. We also provide a
formal security analysis in the non-concurrent setting, including
basic security, mutual authentication, and forward secrecy, by using
the random oracle model.
Faster and Shorter Password-Authenticated Key Exchange
This paper presents an improved password-based authenticated key exchange protocol in the common reference string model. Its security proof requires no idealized assumptions (such as random oracles).
The protocol is based on the GL framework introduced by Gennaro and Lindell, which generalizes the KOY key exchange protocol of Katz et al.\ Both the KOY and the GL protocols use (one-time) signatures as a
non-malleability tool in order to prevent man-in-the-middle attacks against the protocol. This negatively affects the efficiency of the resulting protocol: regular signatures require a large amount of computation (almost as much as the rest of the protocol) and additional computational assumptions, while one-time signatures substantially increase the bandwidth requirement.
Our improvement avoids using digital signatures altogether, replacing them with faster and shorter message authentication codes. The crucial idea is to leverage as much as possible the non-malleability of the encryption scheme used in the protocol, by including various values into the ciphertexts as {\em labels}. As in the case of the GL framework, our protocol can be efficiently instantiated using either the DDH, Quadratic Residuosity or N-Residuosity Assumptions.
For typical security parameters our solution saves as much as 12 Kbytes of bandwidth if one-time signatures are implemented in the GL framework with
fast symmetric primitives. If we use number-theoretic signatures in the GL framework, our solution saves several large exponentiations (almost a third of the exponentiations computed in the GL protocol). The end result is that we bring provably secure password-authenticated key exchange one step closer to practice.
Towards provable security for route discovery protocols in mobile ad hoc networks
Mobile ad hoc networks (MANETs) are collections of wireless mobile
devices with restricted broadcast range and resources, and no fixed infrastructure. Communication is achieved by relaying data along appropriate routes that are dynamically discovered and maintained
through collaboration between the nodes. Discovery of such routes is a major task, both from an efficiency and from a security point of view.
Recently, a security model tailored to the specific requirements of
MANETs was introduced by Acs, Buttyán, and Vajda. Among the novel characteristics of this security model is that it promises
security guarantees under concurrent executions, a feature of crucial practical implication for this type of distributed computation. A novel route discovery algorithm called endairA was
also proposed, together with a claimed security proof within the
same model. In this paper we show that the security proof for the route discovery algorithm endairA is flawed, and that moreover this
algorithm is vulnerable to a {\em hidden channel} attack. We also
analyze the security framework that was used for route discovery,
and argue that composability is an essential feature for ubiquitous
applications. We conclude by discussing some of the major security
challenges for route discovery in MANETs.
Attribute-Based Encryption with Non-Monotonic Access Structures
We construct an Attribute-Based Encryption (ABE) scheme
that allows a user's private key to be expressed in terms
of any access formula over attributes. Previous ABE
schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the
Decisional Bilinear Diffie-Hellman (BDH) assumption.
Furthermore, the performance of our new scheme compares
favorably with existing, less-expressive schemes.
Identifying Ideal Lattices
Micciancio defined a generalization of cyclic lattices, called ideal lattices. These lattices can be used in cryptosystems to decrease the number of parameters necessary to describe a lattice by a square root, making them more efficient. He proved that the computational intractability of classical lattice problems for these lattices gives rise to provably secure one-way and collision-resistant hash functions. This provable security relies on the assumption that reducing bases of ideal lattices is similar to reducing bases of random lattices. We give an indication that lattice problems in ideal lattices do not represent the general case by providing a distinguisher, which decides in time whether a given basis of rank spans an ideal lattice or not. Using this algorithm we perform a statistical analysis for several dimensions and show that randomly generated lattices are practically never ideal.
Balanced Boolean Functions with Nonlinearity > 2^{n-1} - 2^{(n-1)/2}
Recently, balanced 15-variable Boolean functions with nonlinearity 16266 were obtained by suitably modifying unbalanced Patterson-Wiedemann (PW) functions, which possess nonlinearity 2^{n-1}-2^{(n-1)/2}+20 = 16276. In this short paper, we present an idempotent (interpreted as a rotation symmetric Boolean function) with nonlinearity 16268 having 15 zeros in the Walsh spectrum, within the neighborhood of the PW functions. This function can be transformed into balanced functions keeping the nonlinearity and autocorrelation distribution unchanged. The nonlinearity value of 16268 is currently the best known for balanced 15-variable Boolean functions. Furthermore, we obtain several balanced 13-variable Boolean functions with nonlinearity 4036, which improves upon the recent result of 4034.
On the Big Gap Between and in DSA
We introduce a message attack against DSA and show that the security of DSA is indeed reduced to the following problem, i.e., find such that
\centerline{ }
where and
is randomly chosen by the adversary.
Compared with the common key-only attack, i.e., find such that
\centerline{ } the message attack is more effective because of the big gap between
p (1024-bit) and q (160-bit).
A New Security Definition for Public Key Encryption Schemes and Its Applications
The strongest security definition for public key encryption (PKE)
schemes is indistinguishability against adaptive chosen ciphertext
attacks (IND-CCA). A practical IND-CCA secure PKE scheme in the
standard model is well known to be difficult to construct, as
evidenced by the fact that only a few such schemes are
available. From another perspective, we observe that for a large
class of PKE-based applications, although IND-CCA security is
sufficient, it is not a necessary requirement. Examples are Key
Encapsulation Mechanism (KEM), MT-authenticator, providing
pseudorandomness with a-priori information, and so on. This
observation leads us to propose a slightly weaker version of
IND-CCA, which requires that ciphertexts of two randomly
selected messages be indistinguishable under chosen-ciphertext
attacks. Under this new security notion, we show that highly
efficient schemes proven secure in the standard model can be built
in a straightforward way. We also demonstrate that such a security
definition is already sufficient for the applications above.
On the complexity of side-channel attacks on AES-256 -- methodology and quantitative results on cache attacks
Larger key lengths translate into an exponential increase in the complexity of an exhaustive search. Side-channel attacks, however, use a divide-and-conquer approach, and hence it is generally assumed that increasing the key length cannot serve as a mitigation. Yet the internal round structure of AES-256 and its key scheduling seem to hinder a direct extension of the existing attacks on AES-128 and thus challenge the proposition above. Indeed, two consecutive round keys are required to infer the secret key, and the MixColumns operation, not present in the last round, apparently increases the key search complexity from 2^8 to 2^32. Additionally, it is unclear what the impact of the different round structures is on the number of required measurements. In this paper, we explore this question and show how to attack AES-256 with a key search complexity of O(2^8). This work confirms with practical experiments that AES-256 only offers a marginal increase in resistance against the attacks, both in the required number of measurements and in the required processing time. As an example, we quantify this increase for the case of cache-based side-channel attacks: AES-256 only provides an increase in complexity of a factor of 6 to 7 compared to cache-based attacks on AES-128.
Improving Upon the TET Mode of Operation
Naor and Reingold proposed the construction of a strong pseudo-random
permutation (SPRP) by using a layer of ECB encryption between two layers of
invertible block-wise universal hash functions. At Crypto 2007, Halevi presented
constructions of invertible block-wise universal hash functions and a new mode
of operation (called TET) based on them. In this paper, we present a new mode
of operation
called HEH using the Naor-Reingold approach. It is built using a new
construction of an invertible block-wise universal hash function. The new
construction improves over Halevi's construction by removing restrictions on
the hashing key. This, in turn, allows HEH to improve
over TET by permitting more efficient encryption and decryption of variable-length
messages as well as better key agility. For the important application
of disk encryption, we present a variant called HEHfp which has better
key agility than TET.
SECURITY PROOF FOR SHENGBAO WANG’S IDENTITY-BASED ENCRYPTION SCHEME
This paper analyzes the security of an IBE scheme proposed by Wang in 2007. It is shown that, under the BDHP assumption (which is polynomial-time equivalent to BIDHP), the scheme is secure in the random oracle model.
Security under Key-Dependent Inputs
In this work we re-visit the question of building cryptographic primitives that remain secure even when queried on inputs that depend on the secret key. This was investigated by Black, Rogaway, and Shrimpton in the context of randomized encryption schemes and in the random oracle model. We extend the investigation to deterministic symmetric schemes (such as PRFs and block ciphers) and to the standard model. We term this notion "security against key-dependent-input attack", or KDI-security for short. Our motivation for studying KDI security is the existence of significant real-world implementations of deterministic encryption (in the context of storage encryption) that actually rely on their building blocks to be KDI secure.
We consider many natural constructions for PRFs, ciphers, tweakable ciphers and randomized encryption, and examine them with respect to their KDI security. We exhibit inherent limitations of this notion and show many natural constructions that fail to be KDI secure in the standard model, including some schemes that have been proven in the random oracle model. On the positive side, we demonstrate examples where some measure of KDI security can be provably achieved (in particular, we show such examples in the standard model).
Last updated: 2007-10-01
Formal Certification of Code-Based Cryptographic Proofs
As cryptographic proofs have become essentially unverifiable, cryptographers have argued in favor of systematically structuring proofs as sequences of games. Code-based techniques form an instance of this approach that takes a code-centric view of games, and that relies on programming language theory to justify steps in the proof-transitions between games. While these techniques contribute to increase confidence in the security of cryptographic systems, code-based proofs involve such a large palette of concepts from different fields that machine-verified proofs seem necessary to achieve the highest degree of confidence. Indeed, Halevi has convincingly argued that a tool assisting in the construction and verification of proofs is necessary to solve the crisis with cryptographic proofs. This article reports a first step towards the completion of Halevi's programme through the implementation of a fully formalized framework, CertiCrypt, for code-based proofs built on top of the Coq proof assistant. The framework has been used to yield machine-checked proofs of the PRP/PRF switching lemma and semantic security of ElGamal and OAEP encryption schemes.
Perfect Forward Secure Identity-Based Authenticated Key Agreement Protocol in the Escrow Mode
There are several essential features in key agreement protocols such as key escrow (essential when confidentiality, audit trail and legal interception are required) and perfect forward secrecy (i.e., the security of a session key established between two or more entities is guaranteed even when the private keys of the entities are compromised). Majority of the existing escrowable identity-based key agreement protocols, however, only provide partial forward secrecy. Therefore, such protocols are unsuitable for real-word applications that require a stronger sense of forward secrecy --- perfect forward secrecy. In this paper, we propose an efficient perfect forward secure identity-based key agreement protocol in the escrow mode. We prove the security of our protocol in the random oracle model, assuming the intractability of the Gap Bilinear Diffie-Hellman (GBDH) problem. Security proofs are invaluable tools in assuring protocol implementers about the security properties of protocols. We note, however, that many existing security proofs of previously published identity-based protocols entail lengthy and complicated mathematical proofs. In this paper, our proof adopts a modular approach and, hence, simpler to follow.
Secure Similarity Search
One of the most substantial ways to protect users' sensitive
information is encryption. This paper is about the keyword index
search system on encrypted documents. It has been thought that
searching with errors over encrypted data is impossible, because a
one-bit difference in the plaintext may lead to an enormous difference
in the ciphertext. We propose a novel idea to handle searching
with errors over encrypted data. We develop two similarity search
schemes, implement prototypes, and provide substantial analysis.
We define security requirements for the similarity search over
encrypted data. The first scheme can achieve perfect privacy in
similarity search, while the second scheme is more efficient.
A Refined Algorithm for the Pairing Calculation in Characteristic Three
We describe further improvements of the pairing algorithm in
characteristic three. Our approach combines the loop unrolling
technique introduced by Granger {\em et al.} for the Duursma-Lee
algorithm, and a novel algorithm for multiplication over
proposed by Gorla {\em et al.} at SAC 2007. For
, the refined algorithm reduces the number of multiplications
over from to .
A Note on Point Multiplication on Supersingular Elliptic Curves over Ternary Fields
Recently, supersingular elliptic curves over ternary fields have been widely used in pairing-based cryptographic applications, since they achieve the best possible ratio between security level and space requirement. We propose new algorithms for projective arithmetic on these curves, in which point tripling requires no field multiplications, and point addition and point doubling each require one field multiplication fewer than the best known algorithms. Combined with double-base number system (DBNS) representations, these algorithms can appreciably speed up scalar multiplication on the curves.
Balanced Boolean Function on 13-variables having Nonlinearity strictly greater than the Bent Concatenation Bound
Very recently, Kavut and Yucel identified 9-variable Boolean functions having nonlinearity 242, which is currently the best known. However, none of these functions contains a zero in the Walsh spectrum, which is why they cannot be made balanced. We use these functions to construct a 13-variable balanced Boolean function having nonlinearity
which is strictly greater than the bent concatenation bound. This is the first demonstration of balanced Boolean functions on an odd number of variables having nonlinearity strictly greater than the bent concatenation bound for fewer than 15 input variables.
Generalized Rotation Symmetric and Dihedral Symmetric Boolean Functions - 9 variable Boolean Functions with Nonlinearity 242
Recently, 9-variable Boolean functions having nonlinearity 241, which is strictly greater than the bent concatenation bound of 240, have been discovered in the class of Rotation Symmetric Boolean Functions (RSBFs) by Kavut, Maitra and Yucel. In this paper, we present several 9-variable Boolean functions having nonlinearity of 242, which we obtain by suitably generalizing the classes of RSBFs and Dihedral Symmetric Boolean Functions (DSBFs).
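The nonlinearity values quoted in the Boolean-function abstracts above are distances to the nearest affine function, computable from the Walsh spectrum as nl(f) = 2^{n-1} - max_a |W_f(a)|/2. A minimal sketch of that computation (the 3-variable majority function is merely a convenient example input, not taken from these papers):

```python
def walsh_spectrum(f, n):
    """Walsh spectrum of f, given as a 0/1 truth table of length 2^n."""
    W = [(-1) ** v for v in f]
    h = 1
    while h < len(W):               # in-place fast Walsh-Hadamard transform
        for i in range(0, len(W), 2 * h):
            for j in range(i, i + h):
                W[j], W[j + h] = W[j] + W[j + h], W[j] - W[j + h]
        h *= 2
    return W

def nonlinearity(f, n):
    """Distance to the nearest affine function: 2^(n-1) - max|W|/2."""
    return 2 ** (n - 1) - max(abs(w) for w in walsh_spectrum(f, n)) // 2

# 3-variable majority as an arbitrary example; its nonlinearity is 2.
maj = [int(bin(x).count("1") > 1) for x in range(8)]
assert nonlinearity(maj, 3) == 2
```

A function is balanced exactly when W_f(0) = 0, which is why the abstracts emphasize zeros in the Walsh spectrum: only functions with a zero at the right position can be transformed into balanced ones without losing nonlinearity.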
Locally Invertible Boolean Mappings
The aim of this paper is to study a novel property of Boolean mappings called local invertibility. We focus on the local invertibility of Boolean mappings that model filtering generators and study the case when the filtering function is linear in the last variable.
Novel Approaches for Improving the Power Consumption Models in Correlation Analysis
Differential Power Analysis (DPA) is a powerful technique for
revealing secret data of cryptographic algorithms such as DES, AES and RSA implemented on a specific platform. In recent years, Correlation Power Analysis (CPA) has allowed the differential approach of DPA to be better formalized through the use of a power model. We propose two methods to optimize the power model for the targeted bits of the analyzed algorithm, based on the observation that the targeted bits do not all contribute equally to the power consumption. Our first method consists in finding the optimal ratio among the bits of a specific device. The second method is based on a statistical analysis of attack results obtained while applying different candidate ratios among the bits. Experimental electromagnetic radiation signals intercepted from an ASIC during DES operations show that our proposed methods significantly improve the attack performance.
On Non-Randomness of the Permutation after RC4 Key Scheduling
Here we study a weakness of the RC4 Key Scheduling Algorithm (KSA) that has already been noted by Mantin and Mironov. Consider the RC4 permutation of (usually 256) bytes and denote it by after the KSA. Under reasonable assumptions we present a simple proof that each permutation byte after the KSA is significantly biased (either positively or negatively) towards many values in the range . These biases are independent of the secret key and thus present evidence that the permutation after the KSA can be distinguished from a random permutation without any assumption on the secret key. We also present a detailed empirical study of Mantin's work in cases where the theoretical formulae deviate significantly from experimental results due to the repetition of short keys in RC4. Further, we explain how these results can be used to identify new distinguishers for the RC4 keystream.
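The permutation the abstract analyzes is easy to sample empirically. Below is a small illustrative sketch (key length and trial count are arbitrary choices, not parameters from the paper) tallying how often each value lands in S[0] after the KSA:

```python
# A small experiment illustrating key-independent biases in the RC4
# permutation after the KSA. The key length (16) and trial count are
# arbitrary illustrative choices, not parameters from the paper.
import random

N = 256

def ksa(key):
    S = list(range(N))
    j = 0
    for i in range(N):
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    return S

random.seed(1)
trials = 2000
counts = [0] * N                     # how often each value lands in S[0]
for _ in range(trials):
    key = [random.randrange(N) for _ in range(16)]
    S = ksa(key)
    counts[S[0]] += 1

# A uniformly random permutation would put each value in S[0] about
# trials/N times; the spread around that hints at the KSA's bias.
expected = trials / N
print(min(counts), max(counts), expected)
```

Under a uniform permutation each count would sit near trials/N; the observed spread is the kind of key-independent deviation the paper quantifies with far larger sample sizes.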
A Bound on the Size of Separating Hash Families
The paper provides an upper bound on the size of a (generalised)
separating hash family, a notion introduced by Stinson, Wei and
Chen. The upper bound generalises and unifies several previously known
bounds which apply in special cases, namely bounds on perfect hash
families, frameproof codes, secure frameproof codes and separating
hash families of small type.
A Forward Secure Remote User Authentication Scheme
Remote user authentication schemes allow a valid user to log in to a
remote server. In 2000, Hwang and Li proposed a new remote user
authentication scheme with smart cards. In recent years, some researchers
have pointed out security weaknesses of Hwang and Li's scheme and have
proposed modified schemes to avoid these weaknesses. This
paper shows that Hwang and Li's scheme does
not satisfy some essential security requirements: neither Hwang and Li's
scheme nor any of the modified schemes supports mutual authentication
between the remote user and the remote server, and none provides a
session key generation phase for secure communication. In
addition, in Hwang and Li's scheme, the remote user is not free to change his password. This
paper presents an ideal remote user authentication scheme
with smart cards that not only resolves all the security problems of Hwang and Li's scheme,
but also provides all the essential security requirements as well as forward secrecy with respect to the remote server.
Compression Functions Suitable for the Multi-Property-Preserving Transform
Since Bellare and Ristenpart showed a multi-property-preserving domain extension transform, the problem of constructing multi-property hash functions has been reduced to that of constructing multi-property compression functions. However, the Davies-Meyer compression function that is widely used in standard hash functions is not a multi-property compression function: in the ideal cipher model, the Davies-Meyer compression function is collision resistant, but it is not indifferentiable from a random oracle. In this paper, we show that the compression function proposed by Lai and Massey is a multi-property compression function. In addition, we show that a simplified version of the Lai-Massey compression function is also a multi-property compression function. The use of these compression functions enables us to construct multi-property hash functions via the multi-property-preserving domain extension transform.
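The Davies-Meyer construction discussed above compresses a chaining value h with a message block m as h' = E_m(h) XOR h, where the message block keys the cipher. A structural sketch follows, in which a toy Feistel network with invented constants stands in for the block cipher (the paper's analysis is in the ideal cipher model, so this is illustration only):

```python
# Structural sketch of the Davies-Meyer compression function
#     h' = E_m(h) XOR h,
# where the message block m keys the cipher and the chaining value h is
# the plaintext. A toy 8-round Feistel network (invented round function
# and constants) stands in for the block cipher.
def feistel_encrypt(key: int, block: int, rounds: int = 8) -> int:
    L, R = block >> 16, block & 0xFFFF
    for r in range(rounds):
        # toy round function mixing the right half, the key and the round index
        F = (R * 0x9E37 + (key >> (r % 16)) + r) & 0xFFFF
        L, R = R, L ^ F          # Feistel swap keeps the map invertible
    return (L << 16) | R

def davies_meyer(h: int, m: int) -> int:
    return feistel_encrypt(m, h) ^ h   # feed-forward destroys invertibility

# Iterating over message blocks gives a Merkle-Damgard style hash.
h = 0x12345678                          # arbitrary IV
for m in [0xDEADBEEF, 0x0BADF00D]:
    h = davies_meyer(h, m)
assert 0 <= h < 2 ** 32
```

The Feistel structure guarantees E_m is a permutation of the 32-bit block space; the XOR feed-forward is what makes the compression function hard to invert yet, as the abstract notes, not indifferentiable from a random oracle.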
On Asymptotic Behavior of the Ratio Between the Numbers of Binary Primitive and Irreducible Polynomials
In this paper we study the ratio ,
where is the number of primitive polynomials and
is the number of irreducible polynomials in
of degree .
Let be the prime factorization of , where are odd primes.
We show that tends to 1 and is asymptotically
not less than 2/3 when are fixed and tend to infinity. We also describe an infinite
series of values such that is strictly
less than .
A Note on Automata-based Dynamic Convolutional Cryptosystems
In [1], the automata-based dynamic convolutional
cryptosystem is proposed and analyzed; the author claims that
``finding partial information about the cipher is quite easy, and
the main idea of such an attack, described in detail in Section
4.1, is based on Gaussian elimination.'' However, the deduction
supporting this claim in Section 4.1 of [1] does not
work. So far, this cipher does not appear to be that weak.
Optimizing Multiprecision Multiplication for Public Key Cryptography
In this paper we recall the hybrid method of Gura et al. for multi-precision multiplication which is an improvement on the basic Comba method and which exploits the increased number of registers available on modern architectures in order to avoid duplicated loads from memory. We then show how to improve and generalise the method for application across a wide range of processor types, setting some new records in the process.
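The Comba (product-scanning) baseline recalled above computes the product one result column at a time with a wide accumulator, so each output word is stored exactly once; the hybrid method then schedules operand loads across the extra registers. A sketch of the baseline only, with an illustrative word size:

```python
# Product-scanning (Comba) multi-precision multiplication over base-2^16
# words -- the baseline that the hybrid method of Gura et al. refines.
# Word size and operand lengths here are illustrative choices.
W = 16
MASK = (1 << W) - 1

def comba_mul(a, b):
    """Multiply equal-length little-endian word arrays column by column."""
    n = len(a)
    r = [0] * (2 * n)
    acc = 0                                  # wide column accumulator
    for k in range(2 * n - 1):
        for i in range(max(0, k - n + 1), min(k + 1, n)):
            acc += a[i] * b[k - i]           # every partial product of column k
        r[k] = acc & MASK                    # emit the column word
        acc >>= W                            # carry the rest forward
    r[2 * n - 1] = acc & MASK
    return r

def to_int(words):
    return sum(w << (W * i) for i, w in enumerate(words))

x, y = [0x1234, 0xABCD], [0xFFFF, 0x0001]
assert to_int(comba_mul(x, y)) == to_int(x) * to_int(y)
```

Each result word is written to memory once, which is the property that makes product scanning attractive on register-constrained processors; the hybrid method interpolates between this and operand scanning to also minimize loads.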
The Security of the Extended Codebook (XCB) Mode of Operation
The XCB mode of operation was outlined in 2004 as a contribution to the IEEE Security in Storage effort, but no security analysis was provided. In this paper, we provide a proof of security for XCB, and show that it is a secure tweakable (super) pseudorandom permutation. Our analysis makes several new contributions: it uses an algebraic property of XCB's internal universal hash function to simplify the proof, and it defines a nonce mode in which XCB can be securely used even when the plaintext is shorter than twice the width of the underlying block cipher. We also show minor modifications that improve the performance of XCB and make it easier to analyze. XCB is interesting because it is highly efficient in both hardware and software, it has no alignment restrictions on input lengths, it can be used in nonce mode, and it uses the internal functions of the Galois/Counter Mode (GCM) of operation, which facilitates design re-use and admits multi-purpose implementations.
Secret sharing on infinite graphs
We extend the notion of perfect secret sharing schemes to access structures with infinitely many participants. In particular we investigate cases when the participants are the vertices of an (infinite) graph, and the minimal qualified sets are the edges. The (worst case) {\it information ratio} of an access structure is the largest lower bound on the amount of information some participant must remember for each bit of the secret -- just the inverse of the information rate. We determine this value for several infinite graphs: the infinite path, and the two-dimensional square and honeycomb lattices; and we give upper and lower bounds on the ratio for the triangular lattice.
It is also shown that the information ratio is not necessarily {\em local}, i.e.~all finite spanned subgraphs have strictly smaller ratio than the whole graph. We conclude the paper by posing several open problems.
Construction of Efficient and Secure Pairing Algorithm and its Application
The randomized projective coordinate (RPC) method applied to a
pairing computation algorithm is a good solution that provides an
efficient countermeasure against side channel attacks. In this
study, we investigate measures for increasing the efficiency of
the RPC-based countermeasures and construct a method that provides
an efficient RPC-based countermeasure against side channel
attacks. We then apply our method to the well-known
pairing algorithm over binary fields and obtain an RPC-based
countermeasure for the pairing; our method is more
efficient than the RPC method applied to the original
pairing algorithm.
Linearization Attacks Against Syndrome Based Hashes
At MyCrypt 2005, Augot, Finiasz, and Sendrier proposed FSB, a family of cryptographic hash functions. The security claim of the FSB hashes is based on a coding theory problem with hard average-case complexity. At the ECRYPT 2007 Hash Function Workshop, new versions with essentially the same compression function but radically different security parameters and an additional final transformation were presented. We show that the average-case hardness of the underlying problem is irrelevant to collision search by presenting a linearization method that can be used to produce collisions in a matter of seconds on a desktop PC for the variant of FSB with claimed security.
Improved Privacy of the Tree-Based Hash protocols using Physically Unclonable Function
In 2004, Molnar and Wagner introduced a very appealing scheme dedicated to the identification of RFID tags. Their protocol relies on a binary tree of secrets which are shared -- for all nodes except the leaves -- amongst the tags. Hence the compromise of one tag also has implications for the other tags with which it shares keys. We describe a new man-in-the-middle attack against this protocol which allows privacy to be broken even without opening tags. Moreover, it can be applied to other RFID protocols that use correlated keys, such as the one described recently by Damgard and Pedersen at CT-RSA 2008.
We introduce a modification of the initial scheme that thwarts this attack and strengthens RFID tags by implementing secrets with Physically Obfuscated Keys (POKs). In doing so, we improve the privacy of both the tags and the scheme, in particular the general resistance against physical threats.
Fully Resilient Traitor Tracing Scheme using Key Update
This paper proposes fully resilient traitor tracing schemes, which
place no restriction on the number of traitors. By using the
concept of key updates, the schemes can render pirate decoders
useless within some time period, which we call the lifetime of
the decoder. There is a trade-off between the size of the ciphertext
and the lifetime of pirate decoders.
Improved security analysis of OMAC
We present an improved security analysis of OMAC, a construction
widely used as a MAC or pseudorandom function (PRF). The first
result in this direction was given at Crypto 2005,
where an improved security analysis of CBC (for fixed-length or for
arbitrary-length prefix-free messages) was provided. Following
this work, improved bounds for XCBC, TMAC and PMAC were found. The
improved bounds are of the form , where
the original bounds are , which is
roughly . Here, a distinguisher can
make at most queries having at most many blocks, with
as the maximum block size. The original bound for OMAC, roughly
, was shown at FSE 2003, and the next improved
bound was shown at Indocrypt 2003. In this
paper we provide an improved bound for OMAC (of a form similar to
those provided for the others), which is roughly
.
Relations Among Notions of Plaintext Awareness
We introduce a new simplified notion of plaintext awareness, which we term PA2I, and show that this is equivalent to the standard definition of PA2 plaintext awareness for encryption schemes that satisfy certain weak security and randomness requirements. We also show that PA2 plaintext awareness is equivalent to PA2+ plaintext awareness under similar security and randomness requirements. This proves a conjecture of Dent that, for suitably random public-key encryption schemes, PA2 plaintext awareness implies PA1+ plaintext awareness.
Construction of Rotation Symmetric Boolean Functions with Maximum Algebraic Immunity on Odd Number of Variables
In this paper we present a theoretical construction of Rotation Symmetric Boolean Functions (RSBFs) on an odd number of variables with maximum possible algebraic immunity; further, these functions are not symmetric.
Our RSBFs have better nonlinearity than the existing theoretical
constructions with maximum possible algebraic immunity. To obtain very good nonlinearity, which is important for practical cryptographic design, we generalize our construction to a construction-cum-search technique in the RSBF class. We find 7-, 9- and 11-variable RSBFs with maximum possible algebraic immunity having nonlinearities 56, 240 and 984 respectively, with a very small amount of search after our basic construction.
Zero-Knowledge in the Applied Pi-calculus and Automated Verification of the Direct Anonymous Attestation Protocol
We devise an abstraction of zero-knowledge protocols that is
accessible to a fully mechanized analysis. The abstraction is
formalized within the applied pi-calculus using a novel equational
theory that abstractly characterizes the cryptographic semantics of
zero-knowledge proofs. We present an encoding from the equational
theory into a convergent rewriting system that is suitable for the
automated protocol verifier ProVerif. The encoding is sound and fully
automated. We successfully used ProVerif to obtain the first
mechanized analysis of the Direct Anonymous Attestation (DAA)
protocol. The analysis in particular required us to devise novel
abstractions of sophisticated cryptographic security definitions based
on interactive games.
Secure Hybrid Encryption from Weakened Key Encapsulation
We put forward a new paradigm for building hybrid encryption schemes
from constrained chosen-ciphertext secure (CCCA) key-encapsulation mechanisms (KEMs) plus authenticated symmetric encryption. Constrained chosen-ciphertext security is a new security notion for KEMs that we propose. CCCA has less demanding security requirements than standard chosen-ciphertext (CCA) security (since it requires the adversary to have a certain plaintext-knowledge when making a decapsulation query) yet we can prove that CCCA is sufficient for secure hybrid encryption.
Our notion is useful not only to express the Kurosawa-Desmedt public-key encryption scheme and its generalizations to hash-proof systems in an abstract KEM/DEM security framework. It also has a very constructive appeal, which we demonstrate with a new encryption scheme whose security relies on a class of intractability assumptions that we show (in the generic group model) to be strictly weaker than the Decision Diffie-Hellman (DDH) assumption. This appears to be the first practical public-key encryption scheme in the literature based on an algebraic assumption strictly weaker than DDH.
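The generic KEM/DEM composition the abstract builds on can be sketched in a few lines. The following is a toy illustration only, not the paper's scheme: a Diffie-Hellman-style KEM over a deliberately simplistic group, composed with an encrypt-then-MAC DEM built from SHA-256 and HMAC. All parameters here (the modulus, the generator, the key-derivation details) are illustrative assumptions.

```python
import hashlib, hmac, secrets

# Toy KEM/DEM hybrid encryption: a Diffie-Hellman KEM over a small
# (insecure, illustration-only) group, composed with an encrypt-then-MAC
# DEM. This sketches the generic paradigm, not the paper's construction.
p = 2**127 - 1          # toy modulus; NOT a safe choice for real use
g = 3

def kem_keygen():
    sk = secrets.randbelow(p - 2) + 1
    pk = pow(g, sk, p)
    return sk, pk

def kem_encap(pk):
    r = secrets.randbelow(p - 2) + 1
    encap = pow(g, r, p)                             # the encapsulation
    k = hashlib.sha256(str(pow(pk, r, p)).encode()).digest()
    return encap, k

def kem_decap(sk, encap):
    return hashlib.sha256(str(pow(encap, sk, p)).encode()).digest()

def dem_encrypt(k, msg):
    # one-time pad from a hash-derived stream, then a MAC over the result
    stream = hashlib.sha256(k + b"enc").digest()
    c = bytes(m ^ s for m, s in zip(msg, stream))    # msg must fit the stream
    tag = hmac.new(k, c, hashlib.sha256).digest()
    return c, tag

def dem_decrypt(k, c, tag):
    if not hmac.compare_digest(tag, hmac.new(k, c, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    stream = hashlib.sha256(k + b"enc").digest()
    return bytes(x ^ s for x, s in zip(c, stream))

sk, pk = kem_keygen()
encap, key = kem_encap(pk)           # sender: encapsulate a session key
ct, tag = dem_encrypt(key, b"hello hybrid world")
key2 = kem_decap(sk, encap)          # receiver: recover the same key
assert dem_decrypt(key2, ct, tag) == b"hello hybrid world"
```

The point of the paradigm is the division of labor: the KEM only has to transport a uniformly random key, and the authenticated DEM handles the actual payload; the paper's CCCA notion weakens what is demanded of the KEM while keeping the composition secure.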
The Effectiveness of Receipt-Based Attacks on ThreeBallot
The ThreeBallot voting system is an end-to-end (E2E) voter-verifiable voting system. Each voter fills out three ballots according to a few simple rules and takes a copy of one of them home as a receipt for verification purposes. All ballots are posted on a public bulletin board so that any voter may verify the result. In this paper we investigate the effectiveness of attacks using the voter's receipt and the bulletin board. Focusing on two-candidate races, we determine thresholds for when the voter's vote can be reconstructed from a receipt, and when a coercer can effectively verify if a voter followed instructions by looking for pre-specified patterns on the bulletin board. Combining these two results allows us to determine safe ballot sizes that resist known attacks. We also generalize a previous observation that an individual receipt can leak information about a voter's choices.
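The ThreeBallot filling rule for a two-candidate race can be made concrete with a small simulation. This is a hypothetical toy sketch (the paper analyzes receipt-based attacks, not the tally itself): each voter marks their candidate on exactly two of their three ballots and the other candidate on exactly one, so each candidate's raw mark count on the bulletin board exceeds their true vote count by exactly the number of voters.

```python
import random

# Toy ThreeBallot simulation for a two-candidate race: illustrates the
# filling rule and why subtracting n (the number of voters) from each
# candidate's raw count recovers the true tally. Illustrative only.
def fill_ballots(choice):
    """choice: 0 or 1. Returns three (mark_A, mark_B) ballots."""
    two = random.sample(range(3), 2)    # ballots carrying the chosen mark
    one = random.randrange(3)           # ballot carrying the other mark
    return [((i in two) if choice == 0 else (i == one),
             (i in two) if choice == 1 else (i == one)) for i in range(3)]

random.seed(1)
votes = [random.randrange(2) for _ in range(100)]     # 100 voters
board = [b for v in votes for b in fill_ballots(v)]   # public bulletin board

raw_a = sum(a for a, _ in board)
raw_b = sum(b for _, b in board)
n = len(votes)
assert raw_a - n == votes.count(0)    # subtracting n recovers the tally
assert raw_b - n == votes.count(1)
```

A receipt is a copy of one of the three ballots; the paper's thresholds concern how much such a copy, together with the public board, reveals about the voter's choice.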
Faster addition and doubling on elliptic curves
Edwards recently introduced a new normal form for elliptic curves.
Every elliptic curve over a non-binary field
is birationally equivalent to a curve in Edwards form
over an extension of the field,
and in many cases over the original field.
This paper presents fast
explicit formulas (and register allocations)
for group operations on an Edwards curve.
The algorithm for doubling
uses only 3M+4S, i.e., 3 field multiplications and 4 field squarings.
If curve parameters are chosen to be small
then the algorithm for mixed addition uses only 9M+1S
and the algorithm for non-mixed addition uses only 10M+1S.
Arbitrary Edwards curves can be handled at the cost of just one extra
multiplication by a curve parameter.
For comparison,
the fastest algorithms known for the popular "a_4=-3 Jacobian" form
use 3M+5S for doubling;
use 7M+4S for mixed addition;
use 11M+5S for non-mixed addition;
and use 10M+4S for non-mixed addition when one input has been added
before.
The explicit formulas for non-mixed addition on an Edwards curve
can be used for doublings at no extra cost,
simplifying protection against side-channel attacks.
Even better, many elliptic curves
(approximately 1/4 of all isomorphism classes of elliptic curves over a non-binary finite field)
are birationally equivalent---over the original field---to Edwards curves where
this addition algorithm works for all pairs of curve points,
including inverses, the neutral element, etc.
This paper contains an
extensive comparison of different forms of elliptic curves and
different coordinate systems for the basic group operations
(doubling, mixed addition, non-mixed addition, and unified addition)
as well as higher-level operations such as multi-scalar multiplication.
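The unified addition law the abstract refers to can be written down directly. Below is a toy sketch over a tiny prime field, using the simplified Edwards form x^2 + y^2 = 1 + d*x^2*y^2 with d a non-square (the concrete p, d, and point are illustrative choices, and real implementations use the paper's projective formulas to avoid the field inversions shown here).

```python
# Toy arithmetic on a complete Edwards curve x^2 + y^2 = 1 + d*x^2*y^2
# over a small prime field. Illustrative only: the paper's fast formulas
# work in projective coordinates and never compute these inversions.
p = 13          # tiny prime field, for illustration
d = 2           # a non-square mod 13, making the addition law complete

def on_curve(P):
    x, y = P
    return (x*x + y*y - 1 - d*x*x*y*y) % p == 0

def add(P, Q):
    """Unified Edwards addition: the same formula handles doubling,
    the neutral element (0, 1), and inverses (x, y) -> (-x, y)."""
    x1, y1 = P
    x2, y2 = Q
    t = (d * x1 * x2 * y1 * y2) % p
    x3 = (x1*y2 + y1*x2) * pow(1 + t, -1, p) % p
    y3 = (y1*y2 - x1*x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

P = (4, 4)                      # a point on the curve mod 13
assert on_curve(P)
Q = add(P, P)                   # doubling via the same unified formula
assert on_curve(Q)
assert add(P, (0, 1)) == P                # (0, 1) is the neutral element
assert add(P, (-4 % p, 4)) == (0, 1)      # (-x, y) is the inverse of (x, y)
```

That a single formula covers doubling, generic addition, and the exceptional inputs is exactly what simplifies side-channel protection, since no input-dependent branching is needed.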
Solving MRHS linear equations
A new method for solving algebraic equation systems common in
cryptanalysis is proposed. Our method differs from the others in
that the equations are not represented as multivariate polynomials,
but as a system of Multiple Right Hand Sides (MRHS) linear equations. The
method was tested on scaled versions of the AES. The results
significantly improve on what was previously achieved with
Gröbner-basis-related algorithms.
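The MRHS representation itself is easy to make concrete: each equation pairs a linear map over GF(2) with a *set* of allowed right-hand-side vectors. The sketch below just brute-forces a tiny hypothetical system to show the representation; the paper's solving method (based on combining equations) is of course far more efficient than enumeration.

```python
from itertools import product

# Illustrating the Multiple Right Hand Sides (MRHS) representation over
# GF(2): each equation is a matrix A together with a set S of allowed
# right-hand sides, i.e. the constraint "A @ x in S". Toy example only.
def mat_vec(A, x):
    return tuple(sum(a * b for a, b in zip(row, x)) % 2 for row in A)

# Two MRHS equations in 3 unknowns over GF(2) (arbitrary illustrative data):
A1 = [(1, 0, 1), (0, 1, 1)]
S1 = {(0, 1), (1, 0)}               # two allowed right-hand sides
A2 = [(1, 1, 0), (0, 0, 1)]
S2 = {(1, 1), (0, 0)}

solutions = [x for x in product((0, 1), repeat=3)
             if mat_vec(A1, x) in S1 and mat_vec(A2, x) in S2]
print(solutions)    # both equations satisfied simultaneously
```

The appeal for cipher cryptanalysis is that an S-box relating input bits to output bits is naturally one MRHS equation: the linear part selects the bits, and the set of right-hand sides enumerates the S-box's possible input/output pairs.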
Last updated: 2007-09-23
No title
Uncategorized
No Abstract
Provably Secure Framework for Information Aggregation in Sensor Networks
Information aggregation is an important operation in wireless sensor
networks, executed for the purpose of monitoring and reporting environmental data. Due to the performance constraints of sensor nodes, the in-network form of aggregation is especially attractive, since it saves expensive resources during frequent network queries. The easy accessibility of networks and nodes, and the near-total lack of physical protection against corruption, raise serious challenges
for the security of the aggregation process. In particular, protection against attacks aiming to falsify the aggregated result is considered to be of prime importance.
In this paper we propose a novel security model for the aggregation process based on well-established cryptographic techniques, focusing on the scenario with a single aggregator node. To show the soundness and feasibility of our definitions, we describe a generic practical approach that achieves provable security against node corruptions during the aggregation process, based solely on symmetric cryptographic primitives. To the best of our knowledge, this is the first paper that combines the paradigm of provable security in the cryptographic sense with the task of information aggregation in WSNs.
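To make the symmetric-primitives-only setting concrete, here is a minimal (and deliberately bandwidth-inefficient) illustration of MAC-based verification of an aggregate: each sensor authenticates its reading under a key shared with the sink, and the sink audits the aggregator's claimed sum. This is NOT the paper's protocol, only the general flavor of achieving integrity with MACs alone; all names and the auditing strategy are illustrative assumptions.

```python
import hmac, hashlib, secrets

# Toy verifiable aggregation with symmetric primitives only: each node
# MACs its reading under a key shared with the sink; the aggregator
# reports the sum plus the authenticated readings, and the sink audits
# the claim. Illustrative sketch, not the paper's construction.
node_keys = {i: secrets.token_bytes(16) for i in range(5)}   # shared with sink

def sense(node_id, reading):
    msg = f"{node_id}:{reading}".encode()
    tag = hmac.new(node_keys[node_id], msg, hashlib.sha256).digest()
    return node_id, reading, tag

def aggregate(reports):
    # the aggregator is untrusted: it may report a falsified sum
    return sum(r for _, r, _ in reports), reports

def sink_verify(claimed_sum, reports):
    for node_id, reading, tag in reports:
        msg = f"{node_id}:{reading}".encode()
        expect = hmac.new(node_keys[node_id], msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return False                     # forged or altered reading
    return claimed_sum == sum(r for _, r, _ in reports)

reports = [sense(i, 20 + i) for i in range(5)]
total, reports = aggregate(reports)
assert sink_verify(total, reports)           # honest aggregate accepted
assert not sink_verify(total + 7, reports)   # falsified aggregate rejected
```

Forwarding every authenticated reading defeats the bandwidth savings of in-network aggregation, which is precisely the tension such protocols must resolve.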