Fast Implementations of AES on Various Platforms
This paper presents new software speed records for encryption and decryption with the block cipher AES-128 on several architectures. The target platforms are 8-bit AVR microcontrollers, NVIDIA graphics processing units (GPUs), and the Cell Broadband Engine. The new AVR implementation requires 124.6 and 181.3 cycles per byte for encryption and decryption, respectively, with a code size of less than two kilobytes. Compared to the previous AVR records for encryption, our code is 38 percent smaller and 1.24 times faster. The byte-sliced implementation for the synergistic processing elements of the Cell architecture achieves speeds of 11.7 and 14.4 cycles per byte for encryption and decryption. Similarly, our fastest GPU implementation, running on the GTX 295 and handling many input streams in parallel, delivers throughputs of 0.17 and 0.19 cycles per byte for encryption and decryption, respectively. Furthermore, this is the first AES implementation for the GPU that implements both encryption and decryption.
Key Recovery Attack on QuiSci
This paper shows a key recovery attack on QuiSci (quick stream cipher), designed by Stefan Müller (FGAN-FHR, a German research institute) in 2001. With one or a few known plaintexts, it is possible to recover most of the key with negligible time complexity. The attack exploits the weak key setup of QuiSci.
Underlying Assumptions and Designated Verifier Signatures
In this paper, we define an underlying computational problem and its decisional variant. As an application of these problems, we propose an efficient designated verifier signature (DVS) scheme without random oracles (based on symmetric pairings). We formally redefine the (Strong) Privacy of Signer's Identity, and prove that our DVS scheme satisfies this security notion based on the difficulty of the problems. We also prove that the difficulty of the computational problem is tightly equivalent to the Strong Unforgeability of our proposed conventional signature scheme (without random oracles) based on asymmetric pairings. We believe that our underlying problems are useful for constructing many efficient cryptographic schemes.
NTRU based group oriented signature
In order to prevent illegal tracking and the theft of personal or cargo information, authentication services should be provided so that tags can identify a reader. An NTRU-based signature scheme is proposed in this paper, which meets the demand for a group of tags to quickly and securely identify a reader in an RFID system. In our scheme, only a tag in the specified group can verify the reader's message. Because of its fast operation, easy key generation, and low resource consumption, our signature scheme is well suited to RFID systems.
Cube Attack on Courtois Toy Cipher
The cube attack was introduced by Itai Dinur and Adi Shamir [8] as a known-plaintext attack on symmetric primitives. The attack has been applied to reduced variants of the stream ciphers Trivium [3, 8] and Grain-128 [2], to a three-round variant of the block cipher Serpent [9], and to a reduced version of the hash function MD6 [3]. A special case of the attack appeared earlier in the ePrint articles of M. Vielhaber [13, 14], where it was named AIDA (Algebraic Initial Value Differential Attack) and applied to modified versions of Trivium. In this paper, we present experimental results on the application of the cube attack to four rounds of the Courtois Toy Cipher (CTC), with full recovery of the 120-bit key. We then extend the attack to five rounds by applying the meet-in-the-middle principle.
Key words: Cube attack, symmetric primitives, Boolean polynomials,
CTC, the meet-in-the-middle method
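The cube-sum mechanism behind the attack can be sketched on a made-up toy polynomial (an assumption for illustration only, not the CTC attack itself): summing the cipher output over all assignments of a chosen set of public "cube" variables cancels every monomial that misses one of them, leaving a low-degree superpoly in the key.

```python
# Toy cube attack over GF(2) on an assumed master polynomial (not CTC):
#   f(x1, x2, x3, k1, k2) = x1*x2*k1 + x1*x2*x3 + x2*k2 + k1*k2
# Summing f over the cube {x1, x2} cancels every monomial missing x1 or
# x2 (each is added an even number of times), leaving the superpoly k1 + x3.

def f(x1, x2, x3, k1, k2):
    return (x1 * x2 * k1) ^ (x1 * x2 * x3) ^ (x2 * k2) ^ (k1 * k2)

def cube_sum(x3, k1, k2):
    """XOR f over all assignments of the cube variables x1, x2."""
    s = 0
    for x1 in (0, 1):
        for x2 in (0, 1):
            s ^= f(x1, x2, x3, k1, k2)
    return s

def recover_k1(k1, k2):
    """With x3 = 0 the cube sum equals the superpoly k1 + x3 = k1."""
    return cube_sum(0, k1, k2)
```

Here one cube sum with x3 = 0 reveals the key bit k1; the real attack collects many such linear equations from different cubes and solves for the full key.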
Anonymous Fuzzy Identity-based Encryption for Similarity Search
In this paper, we consider the problem of predicate encryption and focus on the predicate for testing whether the Hamming distance between the attribute of a data item and a target is equal to (or less than) a threshold. Existing solutions either do not provide attribute protection or produce a large ciphertext. For the equality version of the problem, we provide a scheme which is match-concealing (MC) secure with small ciphertexts and tokens. For the inequality version of the problem, we give two practical schemes. The first one, also achieving MC security, produces a small ciphertext if the maximum value of the threshold is known in advance and is a constant. We also show how to update the ciphertext if the user wants to increase the threshold without constructing the ciphertext from scratch. On the other hand, in many real applications, the security requirement can be lowered from MC to MR (match-revealing). Our second scheme, which is MR secure, produces small ciphertexts and tokens.
Security Weakness in Two Authenticated Key Exchange Protocols
In ICA3PP 2009, Xinglan Zhang proposed two one-round authenticated key exchange protocols and proved their security in the standard model. In this paper, we analyze these two protocols and find that both of them have security flaws.
A Framework for Universally Composable Non-Committing Blind Signatures
A universally composable (UC) blind signature functionality requires users to commit to the message to be blindly signed. It is thereby impossible to realize in the plain model. This paper shows that even non-committing variants of the UC blind signature functionality cannot be realized in the plain model. We characterize UC non-committing blind signatures in the common reference string model by presenting equivalent stand-alone security notions under static corruption. The usefulness of the characterization is demonstrated by showing that Fischlin's basic stand-alone blind signature scheme can be transformed into a UC non-committing blind signature protocol without extra cryptographic components. We extend the results to the adaptive corruption model and present analogous notions, theorems, and constructions in both the erasure model and the non-erasure model.
Remarks on Some Quantum Cryptographic Schemes
We remark that the schemes [PhysRevLett.98.020503,
PhysRevA.74.012315, PhysRevA.71.022321, PhysRevA.72.012304,
PhysRevA.69.052307, PhysRevA.59.1829] are not secret sharing schemes
as claimed.
Efficient Statistical Asynchronous Verifiable Secret Sharing and Multiparty Computation with Optimal Resilience
Verifiable Secret Sharing (VSS) is a fundamental primitive used as a
building block in many distributed cryptographic tasks, such as
Secure Multiparty Computation (MPC) and Byzantine Agreement (BA). An important variant of VSS is Asynchronous VSS (AVSS), which is designed to work over asynchronous networks. AVSS is a two-phase (Sharing, Reconstruction) protocol carried out among n parties in the presence of a computationally unbounded active adversary, who can corrupt up to t parties. We assume that every two parties in the network are directly connected by a pairwise secure channel.
In this paper, we present a new statistical AVSS protocol with
optimal resilience; i.e. with n = 3t+1. Our protocol privately communicates O((\ell n^3 + n^4 \log{\frac{1}{\epsilon}}) \log{\frac{1}{\epsilon}}) bits and A-casts O(n^3 \log(n)) bits to simultaneously share \ell \geq 1 elements from a finite field F, where \epsilon is the error parameter of our protocol.
There are only two known statistical AVSS protocols with n = 3t+1
reported in [CR93] and [PCR09]. The AVSS protocol of [CR93] requires a private communication of O(n^9 (\log{\frac{1}{\epsilon}})^4) bits and A-cast of O(n^9 (\log{\frac{1}{\epsilon}})^2 \log(n)) bits to share a single element from F. Thus our AVSS protocol shows a significant improvement in communication complexity over the AVSS of [CR93]. The AVSS protocol of [PCR09] requires a private communication and A-cast of O((\ell n^3 + n^4) \log{\frac{1}{\epsilon}}) bits to share \ell \geq 1 elements. However, the shared element(s) may be NULL \not \in {\mathbb F}. Thus our AVSS is better than the AVSS of [PCR09] due to the following reasons:
1. The A-cast communication of our AVSS is independent of the number of secrets, i.e., \ell;
2. Our AVSS makes sure that the shared value(s) always belong to F.
Using our AVSS, we design a new primitive called Asynchronous Complete Secret Sharing (ACSS) which acts as an important building block of asynchronous multiparty computation (AMPC). Using our ACSS scheme, we design a statistical AMPC protocol with optimal resilience; i.e., with n = 3t+1, that privately communicates
O(n^5 \log{\frac{1}{\epsilon}}) bits per multiplication gate. This significantly improves on the communication complexity of the only known optimally resilient statistical AMPC protocol of [BKR93], which privately communicates \Omega(n^{11} (\log{\frac{1}{\epsilon}})^4) bits and A-casts \Omega(n^{11} (\log{\frac{1}{\epsilon}})^2 \log(n)) bits per multiplication gate.
Both our ACSS and AVSS employ several new techniques, which are of independent interest.
Practical Private Set Intersection Protocols with Linear Computational and Bandwidth Complexity
Increasing dependence on anytime-anywhere availability of data and the commensurately increasing fear of losing privacy motivate the need for privacy-preserving techniques. One interesting and common problem occurs when two parties need to privately compute an intersection of their respective sets of data. In doing so, one or both parties must obtain the intersection (if one exists), while neither should learn anything else about the other party's set. Although prior work has yielded a number of effective and elegant Private Set Intersection (PSI) techniques, the quest for efficiency is still underway. This paper explores some PSI variations and constructs several secure protocols that are appreciably more efficient than the state of the art.
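For orientation, the classic commutative-encryption (Diffie-Hellman-style) PSI baseline that such protocols improve on can be sketched as follows; the prime, the hash-to-group map, and the two-round message flow are illustrative assumptions, not the constructions of this paper.

```python
import hashlib
import math
import secrets

P = 2**255 - 19  # a well-known prime (the Curve25519 field prime)

def h2g(item):
    """Hash an item into Z_P^* (assuming SHA-256 mod P is adequate here)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def keygen():
    """A Pohlig-Hellman-style exponent, invertible mod P - 1, so that
    x -> x^a is a bijection on Z_P^* and exponentiations commute."""
    while True:
        a = secrets.randbelow(P - 1)
        if a > 1 and math.gcd(a, P - 1) == 1:
            return a

def psi(set_a, set_b):
    a, b = keygen(), keygen()
    # Round 1: each party sends its items hashed and raised to its own key.
    msg_a = [pow(h2g(x), a, P) for x in set_a]
    msg_b = [pow(h2g(x), b, P) for x in set_b]
    # Round 2: each side re-exponentiates what it received; an item x is in
    # the intersection iff H(x)^(a*b) shows up on both sides.
    dbl_from_a = {pow(y, b, P) for y in msg_a}
    dbl_from_b = {pow(y, a, P) for y in msg_b}
    common = dbl_from_a & dbl_from_b
    return {x for x in set_a if pow(h2g(x), a * b, P) in common}
```

Both parties learn only the doubly-exponentiated values, which match exactly on common items; the linear number of exponentiations per party is the cost baseline the paper's protocols compete with.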
Cryptanalysis of Multiple-Server Password-Authenticated Key Agreement
Password-based user-authentication schemes have been widely used when users access a server to obtain Internet services. Multiserver password-authentication schemes enable remote users to obtain service from multiple servers without separately registering with each server. In 2008, Jia-Lun Tsai proposed an improved and efficient password-authenticated key agreement scheme for a multiserver architecture based on Chang-Lee's scheme proposed in 2004. However, we found that Tsai's scheme does not provide forward secrecy and is vulnerable to insider impersonation and denial-of-service attacks. In this article, we describe the drawbacks of Tsai's scheme and provide a countermeasure to satisfy the forward secrecy property.
Impossible Boomerang Attack for Block Cipher Structures
The impossible boomerang attack \cite{lu} (IBA) is a new variant of differential cryptanalysis against block ciphers. As is evident from its name, it combines the ideas of both impossible differential cryptanalysis and the boomerang attack. Though such an attack might not be the best attack available, its complexity is still less than that of exhaustive search. In an impossible boomerang attack, impossible boomerang distinguishers are used to retrieve some of the subkeys. Thus the security of a block cipher against IBA can be evaluated by impossible boomerang distinguishers. In this paper, we study impossible boomerang distinguishers for block cipher structures whose round functions are bijective. Inspired by the U-method in \cite{kim}, we provide an algorithm to compute the maximum length of impossible boomerang distinguishers for general block cipher structures, and apply the algorithm to known block cipher structures such as Nyberg's generalized Feistel network, a generalized CAST256-like structure, a generalized MARS-like structure, a generalized RC6-like structure, etc.
Little Dragon Two: An efficient Multivariate Public Key Cryptosystem
In 1998 [8], Patarin proposed an efficient cryptosystem called Little Dragon, which was a variant of the Matsumoto-Imai cryptosystem C*. However, Patarin later found that the Little Dragon cryptosystem is not secure [8], [3]. In this paper we propose a public-key cryptosystem, Little Dragon Two, which is as efficient as the Little Dragon cryptosystem but secure against all known attacks. Like the Little Dragon cryptosystem, the public key of Little Dragon Two is of mixed type, that is, quadratic in the plaintext and ciphertext variables. So the public key size of Little Dragon Two is equal to that of the Little Dragon cryptosystem. Our public-key algorithm is bijective and can be used for both encryption and signatures.
Error Decodable Secret Sharing and One-Round Perfectly Secure Message Transmission for General Adversary Structures
An error decodable secret-sharing scheme is a secret-sharing
scheme with the additional property that the secret can be
recovered from the set of all shares, even after a coalition of
participants corrupts the shares they possess. In this paper we
consider schemes that can tolerate corruption by sets of
participants belonging to a monotone coalition structure, thus
generalising both a related notion studied by Kurosawa, and the
well-known error-correction properties of threshold schemes based
on Reed-Solomon codes. We deduce a necessary and sufficient
condition for the existence of such schemes, and we show how to
reduce the storage requirements of a technique of Kurosawa for
constructing error-decodable secret-sharing schemes with efficient
decoding algorithms.
In addition, we explore the connection between one-round perfectly
secure message transmission (PSMT) schemes with general adversary
structures and secret-sharing schemes, and we exploit this
connection to investigate factors affecting the performance of
one-round PSMT schemes such as the number of channels required,
the communication overhead, and the efficiency of message recovery.
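The error-correction property discussed above can be illustrated with a minimal sketch: Shamir sharing over a prime field, with a brute-force decoder standing in for an efficient Reed-Solomon decoder (the field size, the threshold setting, and the plurality-vote decoder are assumptions for illustration, not the paper's constructions).

```python
import itertools
import random
from collections import Counter

P = 2**31 - 1  # a Mersenne prime; the field F_P for a small demo

def share(secret, t, n):
    """Shamir sharing: a random degree-t polynomial with free term = secret,
    evaluated at x = 1..n. Any t+1 correct shares determine the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def interpolate0(points):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def decode(shares, t):
    """Brute-force error decoding: with n >= 3t + 1 shares, at most t of them
    corrupted, the true secret wins a plurality vote over all (t+1)-subsets
    (exponential in n; for illustration only)."""
    votes = Counter(interpolate0(subset)
                    for subset in itertools.combinations(shares, t + 1))
    return votes.most_common(1)[0][0]
```

With n = 3t + 1 shares, every all-honest subset interpolates to the true secret, while subsets containing a corrupted share scatter across distinct wrong values, which is why the vote succeeds; efficient decoders achieve the same guarantee without the exponential subset search.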
Efficient Pseudorandom Functions From the Decisional Linear Assumption and Weaker Variants
In this paper, we generalize Naor and Reingold's construction of pseudorandom functions under the DDH Assumption to yield a construction of pseudorandom functions under the decisional k-Linear Assumption, for each k >= 1. The decisional Linear Assumption was first introduced by Boneh, Boyen, and Shacham as an alternative assumption for settings where the DDH problem is easy, such as bilinear groups. This assumption can be generalized to obtain the decisional k-Linear Assumptions. Shacham, and Hofheinz and Kiltz, showed that the decisional (k+1)-Linear problem is hard for generic groups even when the decisional k-Linear problem is easy. It is thus desirable to have constructions of cryptographic primitives based on the decisional k-Linear Assumption instead of DDH. Not surprisingly, one must pay a small price for added security: as k increases, our constructed functions become slightly less efficient to compute and the key size increases (quadratically in k).
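For reference, the original Naor-Reingold DDH-based construction that the paper generalizes can be sketched in a toy group (the tiny parameters are an assumption for illustration only and offer no security):

```python
import secrets

# Toy group: P = 2*Q + 1 with Q prime; G = 4 generates the order-Q subgroup
# of squares mod P. Real parameters must be cryptographically sized.
P, Q, G = 2879, 1439, 4

def keygen(n):
    """Key: n + 1 secret exponents a_0, ..., a_n, each nonzero mod Q."""
    return [1 + secrets.randbelow(Q - 1) for _ in range(n + 1)]

def nr_prf(key, x_bits):
    """Naor-Reingold PRF: f(x) = G^(a_0 * product of a_i over set bits x_i)."""
    e = key[0]
    for a_i, bit in zip(key[1:], x_bits):
        if bit:
            e = e * a_i % Q
    return pow(G, e, P)
```

The paper's generalization replaces the single group element per evaluation with the matrix-style structure needed under the k-Linear assumption, trading some efficiency for the weaker assumption.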
Black-Box Circular-Secure Encryption Beyond Affine Functions
We show how to achieve public-key encryption schemes that can securely encrypt nonlinear functions of their own secret key. Specifically, we show that for any constant d, there exists a public-key encryption scheme that can securely encrypt any function f of its own secret key, assuming f can be expressed as a polynomial of total degree d. Such a scheme is said to be key-dependent message (KDM) secure w.r.t.\ degree-d polynomials. We also show that there exists a public-key encryption scheme that is KDM secure w.r.t.\ all Turing machines with bounded description size and running time in the security parameter. The security of such public-key schemes can be based either on the standard decision Diffie-Hellman (DDH) assumption or on the learning with errors (LWE) assumption (with certain parameter settings).
In the case of functions that can be expressed as degree-d polynomials, we show that the resulting schemes are also secure with respect to \emph{key cycles} of any length. Specifically, for any polynomial number of key pairs, our schemes can securely encrypt a degree-d polynomial whose variables are the collection of coordinates of all the secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions.
Our key idea is a general transformation that amplifies KDM security. The transformation takes an encryption scheme that is KDM secure w.r.t.\ \emph{some} functions even when the secret keys are weak (i.e., chosen from an arbitrary distribution with sufficient entropy), and outputs a scheme that is KDM secure w.r.t.\ a \emph{richer} class of functions. The resulting scheme may no longer be secure with weak keys. Thus, in some sense, this transformation converts security with weak keys into amplified KDM security.
New Pseudo-Near-Collision Attack on Reduced-Round of Hamsi-256
Hamsi-256 was designed by Özgül Küçük and is a candidate hash function in the second round of the SHA-3 competition. The compression function of Hamsi-256 maps a 256-bit chaining value and a 32-bit message block to a new 256-bit chaining value. When hashing a message, Hamsi-256 applies 3 rounds, except for the last message block, for which it applies 6 rounds. In this paper, we give a pseudo-near-collision for 5-round Hamsi-256. By message modification, pseudo-near-collisions for 3, 4, and 5 rounds can be found with a small number of compression function computations.
On the Security of UOV
In this short note, we investigate the security of the Unbalanced Oil and Vinegar (UOV) scheme \cite{uov}. To do so, we use a hybrid approach for solving the algebraic systems that naturally arise when mounting a signature-forgery attack. The basic idea is to compute Gröbner bases of several modified systems rather than a Gröbner basis of the initial system. It turns out that our approach is efficient in practice. We obtained an upper bound on the complexity of forging a signature, within hours of computation, for a set of parameters proposed by the designers of UOV.
New Techniques for Dual System Encryption and Fully Secure HIBE with Short Ciphertexts
We construct a fully secure HIBE scheme with short ciphertexts. The previous construction of Boneh, Boyen, and Goh was only proven to be secure in the selective model, under a non-static assumption which depended on the depth of the hierarchy.
To obtain full security, we apply the dual system encryption concept recently introduced by Waters. A straightforward application of this technique is insufficient to achieve short ciphertexts, since the original instantiation of the technique includes tags that do not compress. To overcome this challenge, we design a new method for realizing dual system encryption. We provide a system in composite order groups (of three primes) and prove the security of our scheme under three static assumptions.
PPS: Privacy Preserving Statistics using RFID Tags
As RFID applications are entering our daily life, many new security and privacy challenges arise. However, current research in RFID security focuses mainly on simple authentication and privacy-preserving identification. In this paper, we discuss the possibility of widening the scope of RFID security and privacy by introducing a new application scenario. The suggested application consists of computing statistics on private properties of individuals stored in RFID tags. The main requirement is to compute global statistics while preserving the privacy of individual readings. PPS assures the privacy of properties stored in each tag through the combination of homomorphic encryption and aggregation at the readers. Re-encryption is used to prevent tracking of users. The readers scan tags and forward the aggregate of their encrypted readings to the back-end server. The back-end server then decrypts the aggregates it receives and updates the global statistics accordingly. PPS is provably privacy-preserving. Moreover, tags can be very simple, since they are not required to perform any kind of computation, but only to store data.
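The additive homomorphism that lets readers aggregate encrypted readings can be illustrated with a minimal Paillier sketch (toy primes; Paillier itself is an assumption here, and the paper's actual homomorphic scheme and re-encryption layer are more involved):

```python
import secrets
from math import gcd

# Toy Paillier parameters (small primes, illustration only).
PP, QQ = 10007, 10009
N = PP * QQ
N2 = N * N
LAM = (PP - 1) * (QQ - 1) // gcd(PP - 1, QQ - 1)  # lcm(p-1, q-1)
MU = pow(LAM, -1, N)  # valid because we take g = N + 1

def enc(m):
    """Enc(m) = (N+1)^m * r^N mod N^2 for random r coprime to N."""
    while True:
        r = secrets.randbelow(N - 1) + 1
        if gcd(r, N) == 1:
            break
    return pow(N + 1, m, N2) * pow(r, N, N2) % N2

def dec(c):
    """Dec(c) = L(c^LAM mod N^2) * MU mod N, where L(u) = (u - 1) / N."""
    return (pow(c, LAM, N2) - 1) // N * MU % N

def aggregate(c1, c2):
    """Homomorphic addition: Enc(m1) * Enc(m2) decrypts to m1 + m2."""
    return c1 * c2 % N2
```

A reader can multiply the ciphertexts of many tag readings and forward a single aggregate; the back-end decrypts the sum without ever seeing an individual reading, which is the privacy property PPS builds on.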
On Cryptographic Protocols Employing Asymmetric Pairings -- The Role of ψ Revisited
Asymmetric pairings for which an efficiently computable isomorphism ψ : G_2 → G_1 is known are called Type 2 pairings; if no such isomorphism is known, the pairing is called a Type 3 pairing. Many cryptographic protocols in the asymmetric setting rely on the existence of ψ for their security reduction, while some use it in the protocol itself. For these reasons, it is believed that some of these protocols cannot be implemented with Type 3 pairings, while for others the security reductions either cannot be transformed to the Type 3 setting or else require a stronger complexity assumption. Contrary to these widely held beliefs, we argue that Type 2 pairings are merely inefficient implementations of Type 3 pairings, and appear to offer no benefit for protocols based on asymmetric pairings from the point of view of functionality, security, and performance.
Preimage Attacks on 41-Step SHA-256 and 46-Step SHA-512
In this paper, we propose preimage attacks on 41-step SHA-256 and 46-step SHA-512, which drastically increase the number of attacked steps compared to the best previous preimage attack, which worked for only 24 steps. For both 41-step SHA-256 and 46-step SHA-512, the time complexity is below that of a brute-force search and the memory requirement is a moderate number of words. Our attack is a meet-in-the-middle attack. We first consider the application of previous meet-in-the-middle attack techniques to SHA-2. We then analyze the message expansion of SHA-2, taking all previous techniques into account, to find a new independent message-word partition. We first explain the attack on 40-step SHA-256 to describe the ideas, and then explain how to extend the attack.
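The meet-in-the-middle principle behind the attack can be illustrated on a deliberately toy "compression function" whose two input words contribute independently (everything below is an assumption for illustration, not the SHA-2 attack): tabulate one half forward, then scan the other half for a match, turning 2^24 work into roughly 2^13.

```python
import hashlib

# Toy compression function: two 16-bit words that combine by XOR, so the
# input splits into independent chunks (SHA-2 is far less forgiving).

def f1(m1):
    return int.from_bytes(hashlib.sha256(b"A" + m1).digest()[:2], "big")

def f2(m2):
    return int.from_bytes(hashlib.sha256(b"B" + m2).digest()[:2], "big")

def toy_hash(m1, m2):
    return f1(m1) ^ f2(m2)

def mitm_preimage(target, bits=12):
    """Find (m1, m2) with toy_hash(m1, m2) == target using about 2^13 work
    instead of 2^24: table the first chunk forward, scan the second."""
    table = {f1(i.to_bytes(2, "big")) ^ target: i for i in range(1 << bits)}
    for j in range(1 << bits):
        m2 = j.to_bytes(2, "big")
        i = table.get(f2(m2))
        if i is not None:
            return i.to_bytes(2, "big"), m2
    return None  # vanishingly unlikely for a 16-bit target
```

The paper's contribution is precisely finding such an independent message-word partition inside the real SHA-2 message expansion, where the chunks are not handed to the attacker by construction.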
Pseudo-cryptanalysis of the Original Blue Midnight Wish
The hash function Blue Midnight Wish (BMW) is a candidate in the SHA-3
competition organised by the U.S. National Institute of Standards and Technology (NIST). BMW was selected for the second round of the competition, but the algorithm was tweaked in a number of ways. In this paper we describe cryptanalysis of the original version of BMW, as submitted to the SHA-3 competition in October 2008. When we refer to BMW, we therefore mean the original version of the algorithm.
The attacks described are (near-)collision, preimage and second preimage attacks on the BMW compression function. These attacks can also be described as pseudo-attacks on the full hash function, i.e., as attacks in which the adversary is allowed to choose the initial value of the hash function. The complexities of the attacks are about 2^{14} for the near-collision attack, about 2^{3n/8+1} for the pseudo-collision attack, and about 2^{3n/4+1} for the pseudo-(second) preimage attack, where n is the output length of the hash function. Memory requirements are negligible. Moreover, the attacks are not (or only moderately) affected by the choice of security parameter for BMW.
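As a quick sanity check, the stated attack costs can be evaluated for the concrete output sizes (the digest lengths below are the standard SHA-3 candidate sizes, assumed here for illustration):

```python
# Attack costs as functions of the output length n, per the abstract:
# near-collision about 2^14, pseudo-collision 2^(3n/8+1),
# pseudo-(second) preimage 2^(3n/4+1).

def pseudo_collision_cost(n):
    return 2 ** (3 * n // 8 + 1)

def pseudo_preimage_cost(n):
    return 2 ** (3 * n // 4 + 1)

for n in (224, 256, 384, 512):  # standard candidate output sizes
    print(f"n={n}: pseudo-collision 2^{3 * n // 8 + 1}, "
          f"pseudo-preimage 2^{3 * n // 4 + 1}")
```

For n = 256 this gives a pseudo-collision cost of 2^97 and a pseudo-preimage cost of 2^193, both far below the generic 2^128 and 2^256 bounds the hash function targets.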
Preimages for Step-Reduced SHA-2
In this paper, we present a preimage attack on 42-step-reduced SHA-256; the same attack also applies to 42-step-reduced SHA-512. Our attack is a meet-in-the-middle preimage attack.
On the Security of PAS (Predicate-based Authentication Service)
Recently a new human authentication scheme called PAS (predicate-based authentication service) was proposed, which does not require the assistance of any supplementary device. The main security claim of PAS is to resist passive adversaries who can observe the whole authentication session between the human user and the remote server.
In this paper we give a detailed security analysis of PAS and show that PAS is insecure against both brute force attack and a probabilistic attack. In particular we show that the security of PAS against brute force attack was strongly overestimated. Furthermore, we introduce a probabilistic attack, which can break part of the password even with a very small number of observed authentication sessions. Although the proposed attack cannot completely break the password, it can downgrade the PAS system to a much weaker system similar to common OTP (one-time password) systems.
Double-Exponentiation in Factor-4 Groups and its Applications
In previous work we showed how to compress certain prime-order subgroups of certain cyclotomic subgroups by a factor of 4. We also showed that single-exponentiation can be efficiently performed using compressed representations. In this paper we show that double-exponentiation can be efficiently performed using factor-4 compressed representation of elements. In addition to giving a considerable speed up to the previously known fastest single-exponentiation algorithm for general bases, double-exponentiation can be used to adapt our compression technique to ElGamal type signature schemes.
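For context, generic double-exponentiation via Shamir's trick, the standard technique that compressed-representation arithmetic competes with, can be sketched as follows (plain modular arithmetic, not the factor-4 compressed representation of the paper):

```python
def double_exp(g, a, h, b, p):
    """Compute g^a * h^b mod p with one shared square-and-multiply pass
    (Shamir's trick): precompute g*h, then scan the exponent bits together."""
    gh = g * h % p
    result = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        result = result * result % p  # one squaring per bit position
        bit_a, bit_b = (a >> i) & 1, (b >> i) & 1
        if bit_a and bit_b:
            result = result * gh % p
        elif bit_a:
            result = result * g % p
        elif bit_b:
            result = result * h % p
    return result
```

The shared squaring chain is what makes double-exponentiation cheaper than two separate exponentiations; the paper shows the analogous saving survives when elements are kept in factor-4 compressed form.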
Resettable Public-Key Encryption: How to Encrypt on a Virtual Machine
Typical security models used for proving security of deployed cryptographic primitives do not allow adversaries to rewind or reset honest parties to an earlier state. Thus, it is common to see cryptographic protocols rely on the assumption that fresh random numbers can be continually generated. In this paper, we argue that because of the growing popularity of virtual machines and, specifically, their state snapshot and revert features, the security of cryptographic protocols proven under these assumptions is called into question. We focus on public-key encryption security in a setting where resetting is possible and random numbers might be reused. We show that existing schemes and security models are insufficient in this setting. We then provide new formal security models and show that making a simple and efficient modification to any existing PKE scheme gives us security under our new models.
A Simple Power Analysis Attack on the Serpent Key Schedule
We describe an SPA attack on an 8-bit smart card implementation of the Serpent block cipher. Our attack uses measurements taken during an on-the-fly key expansion together with linearity in the cipher's key schedule algorithm to drastically reduce the search time for an initial key. An implementation finds 256-bit keys in 3.736 ms on average. Our work shows that linearity in key schedule design and other cryptographic applications should be carefully evaluated for susceptibility to side-channel attacks and that search algorithm design can greatly speed up side-channel attacks.
Cryptanalysis of a Message Recognition Protocol by Mashatan and Stinson
At CANS 2008, Mashatan and Stinson suggested a message recognition protocol for ad hoc pervasive networks. The protocol provides a procedure to resynchronize in case of a (possibly adversarial) disruption of communication. We show that this resynchronization process does not provide the functionality intended and in fact enables an adversary to create selective forgeries. The computational effort for the attack is negligible and allows the insertion of arbitrary messages.
Improving the Berlekamp algorithm for binomials
In this paper, we describe an improvement of the Berlekamp algorithm for binomials over prime fields.
We implement the proposed method for various cases and
compare the results with the original Berlekamp method.
The proposed method can be extended easily to the case
where the base field is not a prime field.
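As a baseline for the problem being solved, the linear factors of a binomial x^n - a over a small prime field can be found by brute force (illustration only; the improved Berlekamp-style algorithm is the efficient route for realistic field sizes):

```python
def binomial_roots(n, a, p):
    """All r in F_p with r^n == a, i.e. the linear factors x - r of the
    binomial x^n - a (brute force over the field; sensible only for small p)."""
    return [r for r in range(p) if pow(r, n, p) == a % p]
```

For example, x^2 - 4 over F_7 splits off the factors (x - 2)(x - 5), while x^3 - 2 over F_7 has no roots at all, since 2 is not a cube mod 7.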
On The Communication Complexity of Perfectly Secure Message Transmission in Directed Networks
In this paper, we revisit the problem of perfectly secure message transmission (PSMT) in a directed network in the presence of a threshold adaptive Byzantine adversary with unbounded computing power. Desmedt et al. have given the characterization of three or more phase PSMT protocols over directed networks. Recently, Patra et al. have given the characterization of two-phase PSMT over directed networks. Even though the issue of the tradeoff between phase complexity and communication complexity of PSMT protocols has been resolved in undirected networks, nothing is known in the literature regarding directed networks. In this paper, we completely settle this issue. Specifically, we derive lower bounds on the communication complexity of (a) two-phase PSMT protocols and (b) three or more phase PSMT protocols in directed networks. Moreover, we show that our lower bounds are asymptotically tight by designing communication-optimal PSMT protocols in directed networks, which are the first of their kind.
We revisit the problem of perfectly reliable message transmission (PRMT) as well. Any PRMT protocol that sends a message containing \ell field elements has a trivial lower bound of \Omega(\ell) field elements on its communication complexity. Thus any PRMT protocol that sends a message of \ell field elements by communicating O(\ell) field elements is referred to as a communication-optimal PRMT, or PRMT with constant factor overhead. Here, we characterize the class of directed networks over which communication-optimal PRMT is possible. Moreover, we design a communication-optimal PRMT over a directed network that satisfies the conditions stated in our characterization. Our communication-optimal PRMT/PSMT protocols employ several new techniques based on coding theory, which are of independent interest.
Additive Combinatorics and Discrete Logarithm Based Range Protocols
We show how to express an arbitrary integer interval as a sumset of smaller integer intervals for some small values , , and , where and . We show how to derive such expression of as a sumset for any value of , and in particular, how the coefficients can be found by using a nontrivial but efficient algorithm. This result may be interesting by itself in the context of additive combinatorics. Given the sumset-representation of , we show how to decrease both the communication complexity and the computational complexity of the recent pairing-based range proof of Camenisch, Chaabouni and shelat from ASIACRYPT 2008 by a factor of . Our results are important in applications like e-voting where a voting server has to verify thousands of proofs of e-vote correctness per hour. Therefore, our new result in additive combinatorics has direct relevance in practice.
Password Based Key Exchange with Hidden Elliptic Curve Public Parameters
We describe a new Password-based Authenticated Key Exchange (PAKE) protocol based on elliptic curve cryptography. We prove it secure in the Bellare-Pointcheval-Rogaway (BPR) model. Our proposal is conceived in such a way that it ensures that the elliptic curve public parameters remain private. This is important in the context of ID contactless devices since, in this case, it is easy to link these parameters with the nationality of the ID document owners.
Last updated: 2009-09-29
The LPN Problem with Auxiliary Input
This paper investigates the Learning from Parity with Noise (LPN) problem under the scenario in which the unknowns (secret keys) are only unpredictable, instead of being uniformly random, to the adversaries. In practice, this corresponds to the case where an adversary already possesses some additional knowledge about the secret key. In the information-theoretic setting, we show that the problem is robust against arbitrary leakage as long as the unknowns retain a sufficient amount of min-entropy. In the computational setting, we prove Dodis et al.'s [STOC'09] conjecture that the auxiliary-input LPN assumption is implied by the standard LPN assumption, i.e., encryption schemes based on the standard LPN assumption are secure against any exponentially hard-to-invert auxiliary input.
Our setting is more general than the traditional model of auxiliary
input, which deals with secret keys of sufficient min-entropy; in
particular, we allow leakages that information-theoretically
determine the secret keys, as long as the keys retain a linear
amount of unpredictability pseudoentropy. Further, unlike most other
schemes, our result reduces to a well-known hardness problem
and does not quantify over all hard-to-invert auxiliary functions.
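For reference, an LPN instance has the following shape; a minimal sketch with illustrative dimension, noise rate, and seed (none of these parameters are from the paper):

```python
import random

def lpn_sample(s, tau=0.125, rng=random):
    """One LPN sample: random a, label = <a, s> + e (mod 2),
    where e is Bernoulli(tau) noise."""
    a = [rng.randrange(2) for _ in range(len(s))]
    e = 1 if rng.random() < tau else 0
    label = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
    return a, label

rng = random.Random(1)
s = [rng.randrange(2) for _ in range(16)]            # the secret key
samples = [lpn_sample(s, rng=rng) for _ in range(1000)]
# Roughly a tau fraction of the labels disagree with the noiseless
# parity <a, s>; recovering s from such samples is the LPN problem.
wrong = sum((sum(ai * si for ai, si in zip(a, s)) % 2) != b
            for a, b in samples)
print(wrong / len(samples))
```

The auxiliary-input setting then additionally hands the adversary some hard-to-invert function of s.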
The Certicom Challenges ECC2-X
To encourage research on the hardness of the elliptic-curve
discrete-logarithm problem (ECDLP), Certicom has published a series
of challenge curves and DLPs.
This paper analyzes the costs of breaking the Certicom challenges
over the binary fields and on a
variety of platforms. We describe details of the choice of step
function and distinguished points for the Koblitz and non-Koblitz
curves. In contrast to the implementations for the previous Certicom
challenges, we do not restrict ourselves to software and conventional
PCs, but branch out to cover the majority of available platforms,
such as various ASICs, FPGAs, CPUs and the Cell Broadband Engine.
For the field arithmetic we investigate polynomial and
normal basis arithmetic for these specific fields; in particular for
the challenges on Koblitz curves normal bases become more
attractive on ASICs and FPGAs.
Readers Behaving Badly: Reader Revocation in PKI-Based RFID Systems
Recent emergence of RFID tags capable of performing public key
operations motivates new RFID applications, including
electronic travel documents, identification cards and payment
instruments. In such settings, public key certificates form the
cornerstone of the overall system security. In this paper, we
argue that one of the prominent (and still woefully
unaddressed) challenges is how to handle revocation checking of
RFID reader certificates. This is an important issue
considering that these high-end RFID tags are geared for
applications such as e-documents and contactless payment
instruments. Furthermore, the problem is unique to public
key-based RFID systems, since tags (even those capable of
complex cryptographic operations) have no clock and thus cannot
use traditional (time-based) off-line revocation checking
methods, whereas on-line methods require unrealistic
connectivity assumptions.
In this paper, we address the problem of reader revocation in
PKI-Based RFID systems. We begin by observing an important
distinguishing feature of personal RFID tags used in
authentication, access control or payment applications -the
involvement of a human user. We then take advantage of the
user's awareness and presence to construct a simple, efficient,
secure and (most importantly) feasible solution for reader
revocation checking. And finally, we evaluate the usability and
practical security our solution via usability studies and discuss
its feasibility in a case study of e-Passports.
In our approach, the main extra feature is the requirement for
a small passive on-tag display. However, as discussed in the
paper, modern low-power display technology (e.g., e-paper) is
low-cost and appealing for other (e.g., authentication and
verification) purposes.
On Key Authentic Degree of Cryptosystem
We discuss in detail the key authentic degree of a cryptosystem in the face of attacks such as the rubber-hose attack, and point out its significance. We give the key authentic degrees of modern cryptosystems under different conditions. Our research shows that under most realistic situations the key authentic degree of a modern cryptosystem is high, which means that modern cryptosystems are threatened by attacks such as the rubber-hose attack. We analyze the feasibility of reliably achieving a low key authentic degree, and study the implementation of low key authentic degree algorithms.
On Linear Cryptanalysis with Many Linear Approximations
In this paper we present a theoretical framework
to quantify the information brought by several linear approximations of a block-cipher without putting any restriction on these approximations.
We quantify here the entropy of the key given the plaintext-ciphertext pairs statistics
which is a much more accurate measure than the ones studied earlier. The techniques which
are developed here apply to various ways of performing the linear attack
and can also be used to measure the entropy of the key for other statistical attacks.
Moreover, we present a realistic attack on the full DES with a time complexity of
for
pairs, which is a big improvement compared to Matsui's Algorithm 2 ( ).
Certificateless KEM and Hybrid Signcryption Schemes Revisited
Authentication and confidentiality are often required simultaneously in cryptographic applications. The cryptographic primitive called signcryption implements both, and while most public key based systems are appropriate for small messages, hybrid encryption (KEM-DEM) provides an efficient and practical way to securely communicate very large messages. Recently, Lippold et al. \cite{GCJ09} proposed a certificateless KEM in the standard model, and the first certificateless hybrid signcryption scheme was proposed by Fagen Li et al. \cite{LST09}. The concept of certificateless hybrid signcryption has evolved by combining the ideas of signcryption based on tag-KEM and certificateless cryptography. In this paper, we show that \cite{GCJ09} is not Type-I CCA secure and \cite{LST09} is existentially forgeable. We also propose an improved certificateless hybrid signcryption scheme and formally prove the security of the improved scheme against both adaptive chosen ciphertext attack and existential forgery in the appropriate security models for certificateless hybrid signcryption.
A Framework for Non-Interactive Instance-Dependent Commitment Schemes (NIC)
Zero-knowledge protocols are often studied through specific problems, like Graph Isomorphism. In many cases this approach prevents an important level of abstraction and leads to limited results, whereas in fact the constructions apply to a wide variety of problems. We propose to address this issue with a formal framework of non-interactive instance-dependent commitment schemes (NIC). We define NIC in the perfect, statistical, and computational settings, and formally characterize problems admitting NIC in all of these settings. We also prove other useful lemmas such as closure properties. Consequently, results that previously applied only to specific problems are now strengthened by our framework to apply to classes of problems. By providing formal yet intuitive tools, our framework facilitates the construction of zero-knowledge protocols for a wide variety of problems, in various settings, without the need to refer to a specific problem. Our results are unconditional.
Asymptotic enumeration of correlation-immune boolean functions
A boolean function of n boolean variables is correlation-immune of order k if the function value is uncorrelated with the values of any k of the arguments. Such functions are of considerable interest due to their cryptographic properties, and are also related to the orthogonal arrays of statistics and the balanced hypercube colourings of combinatorics. The weight of a boolean function is the number of argument values that produce a function value of 1. If this is exactly half the argument values, that is, 2^{n-1} values, a correlation-immune function is called resilient.
An asymptotic estimate of the number N(n,k) of n-variable correlation-immune boolean functions of order k was obtained in 1992 by Denisov for constant k. Denisov repudiated that estimate in 2000, but we will show that the repudiation was a mistake.
The main contribution of this paper is an asymptotic estimate of N(n,k) which holds if k increases with n within generous limits and specialises to functions with a given weight, including the resilient functions. In the case of k=1, our estimates are valid for all weights.
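The order-k condition can be checked directly from the definition above; a brute-force sketch over a function given as its truth table (the example functions are illustrative, not from the paper):

```python
from itertools import combinations, product

def is_correlation_immune(f, n, k):
    """f (truth table over all 2^n inputs) is correlation-immune of
    order k if, for every choice of k argument positions and every
    fixed assignment to them, the fraction of inputs mapped to 1
    equals the overall fraction."""
    total_weight = sum(f[x] for x in range(2 ** n))
    for positions in combinations(range(n), k):
        for fixed in product((0, 1), repeat=k):
            w = 0
            for x in range(2 ** n):
                bits = [(x >> i) & 1 for i in range(n)]
                if all(bits[p] == v for p, v in zip(positions, fixed)):
                    w += f[x]
            if w * (2 ** k) != total_weight:
                return False
    return True

n = 4
# parity of all n bits: correlation-immune of every order up to n-1,
# and balanced (weight 2^{n-1}), hence resilient
xor_all = [bin(x).count("1") % 2 for x in range(2 ** n)]
print(is_correlation_immune(xor_all, n, 3))
```

A non-example: the AND of all arguments fails already at order 1, since fixing any single input to 0 forces the output to 0.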
Efficient Oblivious Polynomial Evaluation with Simulation-Based Security
The study of secure multiparty computation has yielded powerful feasibility results showing that any efficient functionality can be securely computed in the presence of malicious adversaries. Despite this, there are few problems of specific interest for which we have highly efficient protocols that are secure in the presence of malicious adversaries under full simulation based definitions (following the ideal/real model paradigm). Due to the difficulties of constructing such protocols, many researchers have resorted to weaker definitions of security and weaker adversary models. In this paper, we construct highly efficient protocols for the well-studied problem of oblivious polynomial evaluation.
Our protocol is secure under standard cryptographic assumptions in the setting of malicious adversaries, and readily transforms into protocols that are secure under universal composability and in the presence of covert adversaries. Our protocol is constant-round and requires O(d \cdot s) exponentiations, where d is the degree of the polynomial and s is a statistical security parameter (which should equal about 160 in practice).
Security Analysis and Design of Proxy Signature Schemes over Braid Groups
The braid groups have attracted much attention as a new platform for constructing cryptosystems. This paper first analyzes the security vulnerabilities of existing proxy signature schemes over braid groups and presents feasible attacks. Then a new proxy signature scheme is proposed based on the difficulty of the conjugacy search problem and the multiple conjugacy search problem. Security analysis shows that the proposed scheme satisfies the security requirements of proxy signature.
A remark on the computation of cube roots in finite fields
We consider the computation of cube roots
in finite fields. For the computation of square roots in finite fields, there are two typical methods: the Tonelli-Shanks method and the Cipolla-Lehmer method. The former can be extended easily
to the case of -th roots, which is called the Adleman-Manders-Miller method, but it seems difficult to extend the latter to more general cases.
In this paper, we propose two explicit algorithms
for realizing the Cipolla-Lehmer method in the case of cube roots
for prime fields with .
We implement these methods and compare the results.
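As a concrete illustration of the two regimes (these are not the paper's algorithms): when p ≡ 2 (mod 3), cubing is a bijection on F_p and a cube root is a single exponentiation, while for p ≡ 1 (mod 3) only part of the field consists of cubes, which is the case the Cipolla-Lehmer and Adleman-Manders-Miller methods address. A minimal sketch:

```python
def cube_root_easy(a, p):
    """Cube root in F_p in the easy case p % 3 == 2, where cubing is a
    bijection: x -> x**((2p-1)//3) inverts it, since
    3 * (2p-1)/3 = 2p - 1 ≡ 1 (mod p-1)."""
    assert p % 3 == 2
    return pow(a, (2 * p - 1) // 3, p)

def cube_root_small(a, p):
    """Brute force for small p; returns None when a is not a cube.
    For p % 3 == 1 only (p+2)/3 of the field elements are cubes, and
    that is the case the real cube-root algorithms handle."""
    for x in range(p):
        if pow(x, 3, p) == a % p:
            return x
    return None

p = 1019                      # a prime with p % 3 == 2
r = cube_root_easy(5, p)
assert pow(r, 3, p) == 5
```

The exponent (2p-1)/3 is an integer exactly when p ≡ 2 (mod 3), which is why the hard case needs different machinery.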
Last updated: 2009-10-19
An Automata-Theoretic Interpretation of Iterated Hash Functions - Application to Multicollisions
In this paper we present a new method of constructing multicollision
sets for iterated hash functions. Multicollisions have been studied
quite actively since Joux published an attack against iterated hash
functions, which works in complexity for
-multicollisions. Recently Kelsey & Schneier and Aumasson have
published even faster multicollision attacks, which are based on fixed
points of the compression function and some assumptions on the
attacker's capabilities. Our method has complexity for arbitrarily large multicollision sets and does not have any
additional assumptions on the compression function or attacker's
capabilities. The drawback of our method is the extremely long
messages generated by the algorithm in the worst case. Our method is based on automata
theory and, given a compression function , we construct a finite
state transducer, which realizes the iterated hash function based on
. The method for generating multicollision sets is based on well
known pumping properties of these automata.
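The iteration itself can be viewed as a finite transducer whose state is the chaining value; a toy sketch (with an invented 8-bit compression function, not one from the paper) also shows why a collision, once found, can be extended by appending common suffix blocks:

```python
def iterate(compress, iv, blocks):
    """Merkle-Damgard style iteration: the chaining value is the state
    of a finite transducer and the compression function is its
    transition function (toy 8-bit state, 8-bit message blocks)."""
    h = iv
    for m in blocks:
        h = compress(h, m)
    return h

def toy_compress(h, m):
    # deliberately weak invented compression function on 8-bit values
    return (h * 31 + m * 17 + 7) % 256

# By the pigeonhole principle some two 2-block messages collide on the
# 8-bit state; once they do, any common suffix keeps them colliding,
# which is the pumping-style behaviour the automata view captures.
iv, seen, pair = 0, {}, None
for m1 in range(256):
    for m2 in range(256):
        h = iterate(toy_compress, iv, [m1, m2])
        if h in seen:
            pair = (seen[h], (m1, m2))
            break
        seen[h] = (m1, m2)
    if pair:
        break

a, b = pair
assert iterate(toy_compress, iv, list(a) + [5]) == \
       iterate(toy_compress, iv, list(b) + [5])
```

Repeating this suffix-extension is the "pumping" that turns one collision into arbitrarily large multicollision sets, at the price of long messages.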
Identity-Based Hybrid Signcryption
Signcryption is a cryptographic primitive that fulfills both the
functions of digital signature and public key encryption
simultaneously, at a cost significantly lower than that required by
the traditional signature-then-encryption approach. In this paper,
we address the question of whether it is possible to construct a hybrid
signcryption scheme in the identity-based setting. This question seems
to have never been addressed in the literature. We answer the
question positively in this paper. In particular, we extend the
concept of signcryption key encapsulation mechanism to the
identity-based setting. We show that an identity-based signcryption
scheme can be constructed by combining an identity-based
signcryption key encapsulation mechanism with a data encapsulation
mechanism. We also give an example of identity-based signcryption
key encapsulation mechanism.
An Efficient Convertible Undeniable Signature Scheme with Delegatable Verification
Undeniable signatures, introduced by Chaum and van Antwerpen, require a verifier to interact with the signer to verify a signature, and hence allow the signer to control the verifiability of his signatures. Convertible undeniable signatures, introduced by Boyar, Chaum, Damg\aa{}rd, and Pedersen, furthermore allow the signer to convert signatures to publicly verifiable ones by publicizing a verification token, either for individual signatures or for all signatures universally. In addition, the signer is able to delegate the ability to prove validity and convert signatures to a semi-trusted third party by providing a verification key. While the latter functionality is implemented by the early convertible undeniable signature schemes, most recent schemes do not consider this despite its practical appeal.
In this paper we present an updated definition and security model for schemes allowing delegation, and highlight a new essential security property, token soundness, which is not formally treated in the previous security models for convertible undeniable signatures. We then propose a new convertible undeniable signature scheme. The scheme allows delegation of verification and is provably secure in the standard model assuming the computational co-Diffie-Hellman problem, a closely related problem, and the decisional linear problem are hard. Our scheme is, to the best of our knowledge, the currently most efficient convertible undeniable signature scheme which provably fulfills all security requirements in the standard model.
A Note on Linear Approximations of BLUE MIDNIGHT WISH Cryptographic Hash Function
The BLUE MIDNIGHT WISH hash function is the fastest among the
14 algorithms in the second round of the SHA-3 competition [1]. At the
beginning of this round, authors were invited to add tweaks before September 15th, 2009. In this paper we discuss the tweaked version (BMW). The BMW algorithm [3] is of the AXR type, since it uses only the operations ADD (SUB), XOR and ROT (SHIFT). If we substitute the operation ADD with XOR, we get BMWlin, which is an affine transformation. In this paper we consider only the BMWlin function and its building blocks. These affine transformations can be represented as a linear matrix and a
constant vector. We found that all matrices of the main blocks of BMWlin
have full rank, or a rank very close to full. The structure of the matrices was examined. Matrices of elementary blocks have an expected non-random structure, while those of main blocks have a random structure. We also show matrices for different values of the security parameter ExpandRounds1 (values between 0 and 16). We observed that increasing ExpandRounds1 tends to increase randomness, as intended by the designers. These observations hold for both BMW256lin and BMW512lin. In this analysis we did not find any property useful for cryptanalysis, nor any weaknesses of BMW. A study of all building blocks will follow.
Cryptanalysis of the Niederreiter Public Key Scheme Based on GRS Subcodes
A new structural attack on the McEliece/Niederreiter public key cryptosystem based on subcodes of generalized Reed-Solomon codes proposed by Berger and Loidreau is described. It allows the reconstruction of the private key for almost all practical parameter choices in polynomial time with high probability.
Efficient Certificateless KEM in the Standard Model
We give a direct construction of a certificateless key encapsulation
mechanism (KEM) in the standard model that is more efficient than the generic constructions proposed before by Huang and Wong \cite{DBLP:conf/acisp/HuangW07}. We use a direct construction from Kiltz and Galindo's KEM scheme \cite{DBLP:conf/acisp/KiltzG06} to obtain a certificateless KEM in the standard model; our construction is roughly twice as efficient as the generic construction. We also address the security flaw discovered by Selvi et al. \cite{cryptoeprint:2009:462}.
On Hierarchical Threshold Secret Sharing
Recently, two novel schemes have been proposed for hierarchical threshold secret sharing, one based on Birkhoff interpolation and another based on bivariate Lagrange interpolation. In this short paper, we propose a much simpler solution for this problem.
One for All - All for One: Unifying Standard DPA Attacks
In this paper, we examine the relationship between and the efficiency of different approaches to standard (univariate) DPA attacks. We first show that, when fed with the same assumptions about the target device (i.e. with the same leakage model), the most popular approaches such as using a distance-of-means test, correlation analysis, and Bayes attacks are essentially equivalent in this setting. Differences observed in practice are not due to differences in the statistical tests but to statistical artifacts. Then, we establish a link between the correlation coefficient and the conditional entropy in side-channel attacks. In a first-order attack scenario, this relationship allows linking currently used metrics to evaluate standard DPA attacks (such as the number of power traces needed to perform a key recovery) with an information theoretic metric (the mutual information). Our results show that in the practical scenario defined formally in this paper, both measures are equally suitable to compare devices with respect to their susceptibility to DPA attacks. Together with observations regarding key and algorithm independence, we consequently extend theoretical strategies for the sound evaluation of leaking devices towards the practice of side-channel attacks.
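A toy simulation of a standard (univariate) correlation DPA under a Hamming-weight leakage model; the S-box, noise level, and trace model below are invented for illustration and are not from the paper:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

HW = [bin(v).count("1") for v in range(256)]

rng = random.Random(42)
SBOX = list(range(256))
rng.shuffle(SBOX)             # invented random S-box stand-in

true_key = 0x3C
plaintexts = [rng.randrange(256) for _ in range(2000)]
# one simulated leakage point per trace: Hamming weight plus noise
traces = [HW[SBOX[p ^ true_key]] + rng.gauss(0, 1.0) for p in plaintexts]

# correlation DPA: model the leakage under every key guess and keep
# the guess whose predictions correlate best with the measurements
scores = {k: pearson([HW[SBOX[p ^ k]] for p in plaintexts], traces)
          for k in range(256)}
best = max(scores, key=scores.get)
print(hex(best), round(scores[best], 3))
```

Swapping the ranking statistic (distance-of-means, Bayes) while keeping the same Hamming-weight model is exactly the comparison the paper formalises.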
Precise Bounded-Concurrent Zero-Knowledge in Almost Constant Rounds
Precise concurrent zero-knowledge is a new notion introduced by
Pandey et al. \cite{P:P:M:T:V} in Eurocrypt'08 (which generalizes
the work on precise zero-knowledge by Micali and Pass \cite{M:P} in
STOC'06). This notion captures the idea that the view of any
verifier in concurrent interaction can be reconstructed in
almost the same time. \cite{P:P:M:T:V} constructed some (private-coin)
concurrent zero-knowledge argument systems for which achieve
precision at different levels, and all these protocols use at least
rounds. In this paper we investigate the
feasibility of reducing the round complexity and still keeping
precision simultaneously. Our result is that we construct a
public-coin precise bounded-concurrent zero-knowledge argument
system for only using almost constant rounds, i.e.,
rounds. Bounded concurrency means that an a-priori bound on
the (polynomial) number of concurrent sessions is specified before
the protocol is constructed. Our result does not need any setup
assumption. We stress that this result cannot be obtained by
\cite{P:P:M:T:V} even in the bounded-concurrent setting.
ROSSLER NONLINEAR DYNAMICAL MACHINE FOR CRYPTOGRAPHY APPLICATIONS
In many cryptography applications, such as password or IP address encryption schemes, symmetric cryptography is useful. In these relatively simple applications of cryptography, asymmetric cryptography is difficult to justify on account of the computational and implementation complexities associated with it. Symmetric schemes make use of a single shared key known only to the two communicating hosts. This shared key is used both for the encryption and the decryption of data. This key has to be small in size besides being a subset of a potentially large keyspace, making it convenient for the communicating hosts while at the same time making cryptanalysis difficult for potential attackers. In the present work, an abstract Rossler nonlinear dynamical machine is described first. The Rossler system exhibits chaotic dynamics for certain values of the system parameters and initial conditions. The chaotic dynamics of the Rossler system, with its apparently erratic and irregular characteristics and extreme sensitivity to the initial conditions, has been used for the design of the cryptographic key in an attempt to increase the confusion and the challenge for potential attackers.
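A minimal sketch of the idea; the Euler discretisation, parameter values, and byte-extraction rule below are illustrative assumptions, not the paper's design:

```python
def rossler_bytes(x0, y0, z0, n_bytes, a=0.2, b=0.2, c=5.7,
                  dt=0.001, burn_in=100_000, stride=100):
    """Euler-integrate the Rossler system
        x' = -y - z,  y' = x + a*y,  z' = b + z*(x - c)
    and turn the chaotic x-coordinate into bytes after a burn-in."""
    x, y, z = x0, y0, z0
    out, step = [], 0
    while len(out) < n_bytes:
        dx, dy, dz = -y - z, x + a * y, b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        step += 1
        if step > burn_in and step % stride == 0:
            out.append(int(abs(x) * 1e6) % 256)   # low-order digits
    return bytes(out)

k1 = rossler_bytes(0.100, 0.0, 0.0, 16)
k2 = rossler_bytes(0.101, 0.0, 0.0, 16)   # slightly perturbed x0
# sensitivity to initial conditions: the two keys should differ
```

The initial conditions and parameters play the role of the shared secret; the burn-in lets the small perturbation be amplified by the chaotic dynamics before any bytes are emitted.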
NTRU-like Public Key Cryptosystems beyond Dedekind Domain Up to Alternative Algebra
In this paper, we show that the fundamental concepts behind the NTRU cryptosystem can be extended to a broader algebra than Dedekind domains. Also, we present an abstract and generalized algorithm for constructing an NTRU-like cryptosystem such that the underlying algebra can be non-commutative or even non-associative. To prove the main claim, we show that it is possible to generalize NTRU over the non-commutative quaternions (an algebra in the sense of Cayley-Dickson, of dimension four over an arbitrary principal ideal domain) as well as the non-associative octonions (a power-associative and alternative algebra of dimension eight over a principal ideal domain).
Given the serious challenges non-commutative/non-associative algebra poses in quaternionic or octonionic lattices, the proposed cryptosystems are more resistant to lattice-based attacks than NTRU. Concisely, this paper draws an abstract image of the mathematical base of NTRU in such a way that one can build a similar cryptosystem based on various algebraic structures with the
goal of better security against lattice attacks and/or more capability for protocol design.
Computing Hilbert class polynomials with the Chinese Remainder Theorem
We present a space-efficient algorithm to compute the Hilbert class polynomial H_D(X) modulo a positive integer P, based on an explicit form of the Chinese Remainder Theorem. Under the Generalized Riemann Hypothesis, the algorithm uses O(|D|^(1/2+o(1)) log P) space and has an expected running time of O(|D|^(1+o(1))). We describe practical optimizations that allow us to handle larger discriminants than other methods, with |D| as large as 10^13 and h(D) up to 10^6. We apply these results to construct pairing-friendly elliptic curves of prime order, using the CM method.
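A hypothetical sketch of the CRT step: reconstructing one coefficient of H_D modulo P from its residues modulo small primes. The paper's space-efficient variant reduces mod P on the fly instead of storing the big integer; the primes and coefficient below are invented for illustration:

```python
from math import prod

def crt_coefficient(residues, primes, P):
    """Recover a coefficient c from its images c mod p_i, assuming the
    true integer satisfies |c| < M/2 with M = prod(primes), then
    reduce it mod P."""
    M = prod(primes)
    c = 0
    for r, p in zip(residues, primes):
        Mi = M // p
        c += r * Mi * pow(Mi, -1, p)   # standard CRT combination
    c %= M
    if c > M // 2:                     # map to the signed representative
        c -= M
    return c % P

# toy check with an invented (possibly negative) coefficient value
primes = [101, 103, 107]
c_true = -123456
res = [c_true % p for p in primes]
assert crt_coefficient(res, primes, 1000003) == c_true % 1000003
```

The signed-representative step matters because the coefficients of H_D(X) are integers that can be negative, while the residues only determine them modulo M.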
Secure and Efficient HB-CM Entity Authentication Protocol
The simple, computationally efficient LPN-based HB-like entity authentication protocols have attracted a great deal of attention in the past few years due to their broad application prospects in low-cost pervasive devices. At present, the most efficient protocol is HB , which is proven to resist the GRS attack under the conjecture that it is secure in the DET-model. In this paper, we introduce an innovative HB-CM protocol, which significantly reduces the storage requirement while maintaining the same level of communication cost. We develop the concept of an equivalence class, and present an HB-CM reductionist proof that overcomes an inherent limitation in the HB security proof. In fact, HB is only provably resistant to partial instances of the GRS attack, while we prove that HB-CM can prevent the full GRS attack except in one trivial case. In addition, we propose a new noise mode for all HB-like protocols in order to thwart the latest OOV man-in-the-middle attack, which can effectively compromise all current HB-like protocols with the basic Bernoulli noise mode. The HB-CM protocol along with the proposed noise mode constitutes our final protocol: HB-CM.
Rebound Attack on the Full LANE Compression Function
In this work, we apply the rebound attack to the AES-based SHA-3
candidate LANE. The hash function LANE uses a permutation-based
compression function, consisting of a linear message expansion
and 6 parallel lanes. In the rebound attack on LANE, we apply
several new techniques to construct a collision for the full
compression function of LANE-256 and LANE-512. Using a
relatively sparse truncated differential path, we are able to solve
for a valid message expansion and colliding lanes independently.
Additionally, we are able to apply the inbound phase more than once
by exploiting the degrees of freedom in the parallel AES states.
This allows us to construct semi-free-start collisions for full
LANE-256 with compression function evaluations and
memory, and for full LANE-512 with compression
function evaluations and memory.
Fuzzy Privacy Preserving Peer-to-Peer Reputation Management
The P2PRep algorithm is a reputation-management mechanism in which a peer uses fuzzy techniques to compute local reputations and aggregates these results to compute a global reputation for another peer which has made an offer of service. While this mechanism is known to be extremely effective in the presence of malicious peers, it has one drawback: it does not preserve the anonymity of peers in the network during the voting phase of the protocol. This makes it unsuitable for use in networks which associate peers with a routing identifier such as an IP address. We propose in this paper a solution to this problem: the 3PRep (Privacy Preserving P2PRep) algorithm, which implements two protocols to maintain vote privacy in P2PRep without significant additional computation and communication overhead. In doing so, we also provide a method to compute the Ordered Weighted Average (OWA) over distributed datasets while maintaining the privacy of these data.
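For reference, the plain (non-private) OWA aggregation used as the building block looks like this; the example weights are illustrative:

```python
def owa(values, weights):
    """Ordered Weighted Average: sort the inputs in descending order,
    then take the dot product with a fixed weight vector
    (nonnegative weights summing to 1)."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v
               for w, v in zip(weights, sorted(values, reverse=True)))

# an "olympic" style aggregation of votes that discards the extremes
votes = [0.9, 0.2, 0.7, 0.8]
score = owa(votes, [0.0, 0.5, 0.5, 0.0])   # average of the middle two
```

The privacy-preserving version computes the same quantity when the values are split across peers, without revealing the individual votes.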
An Efficient Two-Party Identity-Based Key Exchange Protocol based on ECDLP
This paper presents an efficient identity-based key exchange protocol based on the difficulty of the Elliptic Curve Discrete Logarithm Problem. Compared with previously proposed protocols, it has better performance in terms of computational cost and communication steps. Key exchange protocols allow two parties communicating over a public network to establish a common secret key, called a session key, to encrypt the communication data. Due to their significance in building a secure communication channel, a number of key exchange protocols have been suggested over the years for a variety of settings. The proposed key exchange protocol provides implicit key authentication as well as the desired security attributes of an authenticated key exchange protocol.
A Multivariate Signature Scheme with an almost cyclic public key
Multivariate public key cryptography is one of the main approaches to guarantee the security of communication in a post quantum world.
One of the major drawbacks in this area is the huge size of the public key. In this paper we present a new idea for creating a multivariate signature scheme with an almost cyclic public key. The scheme is very similar to the UOV scheme of Kipnis and Patarin but reduces the size of the public key by about 83%.
A Fast Mental Poker Protocol
We present a fast and secure mental poker protocol. It is twice as fast as the protocols of Barnett-Smart and Castellà-Roca. The protocol is provably secure under the DDH assumption.
Improved Cryptanalysis of Skein
The hash function Skein is the submission of Ferguson et al. to the NIST Hash Competition, and is arguably a serious candidate for selection as SHA-3. This paper presents the first third-party analysis of Skein, with an extensive study of its main component: the block cipher Threefish. We notably investigate near collisions, distinguishers, impossible differentials, key recovery using related-key differential and boomerang attacks. In particular, we present near collisions on up to 17 rounds, an impossible differential on 21 rounds, a related-key boomerang distinguisher on 34 rounds, a known-related-key boomerang distinguisher on 35 rounds, and key recovery attacks on up to 32 rounds, out of 72 in total for Threefish-512. None of our attacks directly extends to the full Skein hash. However, the pseudorandomness of Threefish is required to validate the security proofs on Skein, and our results conclude that at least 36 rounds of Threefish seem required for optimal security guarantees.
On the Relations Between Diffie-Hellman and ID-Based Key Agreement from Pairings
This paper studies the relationships between the traditional
Diffie-Hellman key agreement protocol and the identity-based
(ID-based) key agreement protocol from pairings.
For the Sakai-Ohgishi-Kasahara (SOK) ID-based key construction, we
show that, just like the Diffie-Hellman protocol, the SOK key
agreement protocol has three variants, namely \emph{ephemeral},
\emph{semi-static} and \emph{static} versions. Building on this, we
establish solid relations between authenticated Diffie-Hellman (Auth-DH)
protocols and ID-based authenticated key agreement (IB-AK)
protocols, whereby we present two \emph{substitution rules} for these
two types of protocols. The rules enable a conversion between the
two types of protocols. In particular, we obtain the \emph{real}
ID-based version of the well-known MQV (and HMQV) protocol.
Similarly, for the Sakai-Kasahara (SK) key construction, we show
that the key transport protocol underlying the SK ID-based
encryption scheme (which we call the ``SK protocol") has its non-ID
counterpart, namely the Hughes protocol. Based on this observation,
we establish relations between corresponding ID-based and non-ID-based protocols. In particular, we propose a highly enhanced version of the McCullagh-Barreto protocol.
On the Connection between Signcryption and One-pass Key Establishment
Key establishment between two parties that uses only one message transmission is referred to as one-pass key establishment (OPKE). OPKE provides the opportunity for very efficient constructions, even though they will typically provide a lower level of security than the corresponding multi-pass variants. In this paper, we explore the intuitive connection between signcryption and OPKE. By establishing a formal relationship between these two primitives, we show that with appropriate security notions, OPKE can be used as a signcryption KEM and vice versa. In order to establish the connection we explore the definitions of security for signcryption (KEM) and give new and generalised definitions. By making our generic constructions concrete we are able to provide new examples of signcryption KEMs and an OPKE protocol.
Efficient Confirmer Signatures from the ``Signature of a Commitment'' Paradigm
Generic constructions of designated confirmer signatures follow one of
two strategies: either produce a digital signature on the message to
be signed, then encrypt the resulting signature, or produce a commitment on
the message, encrypt the string used to generate the commitment and finally
sign the latter.
We study the second strategy by determining the exact security property needed
in the encryption to achieve secure constructions. This study
infers the exclusion of a useful type of encryption from the design due to an
intrinsic weakness in the paradigm. Next, we propose a simple method to
remedy this weakness, and we obtain efficient constructions which can be
used with \emph{any} digital signature.
Tight Bounds for Protocols with Hybrid Security
We consider broadcast and multi-party computation (MPC) in the setting where a digital signature scheme and a respective public-key infrastructure (PKI) are given among the players. However, neither the signature scheme nor the PKI are fully trusted. The goal is to achieve unconditional (PKI- and signature-independent) security up to a certain threshold, and security beyond this threshold under stronger assumptions, namely, that the forgery of signatures is impossible and/or that the given PKI is not under adversarial control. We give protocols for broadcast and MPC that achieve an optimal trade-off between these different levels of security.
Communication Optimal Multi-Valued Asynchronous Byzantine Agreement with Optimal Resilience
Byzantine Agreement (BA) and Broadcast (BC) are considered to be the most fundamental primitives for fault-tolerant distributed computing and cryptographic protocols. Important variants of BA and BC are Asynchronous Byzantine Agreement (ABA) and Asynchronous Broadcast (called A-cast), respectively. Most often in the literature, protocols for ABA and A-cast were designed for a single-bit message. But in many applications, these protocols may be invoked on long messages rather than on single bits. Therefore, it is important to design efficient multi-valued protocols (i.e. protocols for long messages) which take advantage of dealing with long messages directly and are far more efficient than multiple invocations of the existing single-bit protocols. In synchronous network settings, this line of research was initiated by Turpin and Coan and later culminated in the result of Fitzi et al., who presented the first communication-optimal multi-valued BA and BC protocols, built with the help of BA and BC protocols for short messages. It was left open by Fitzi et al. to achieve the same in asynchronous settings.
Recently, Patra et al. presented a communication-optimal multi-valued A-cast using the existing A-cast of Bracha for small messages.
Here we achieve the same for ABA, which is known to be a harder problem than A-cast. Specifically, we design a communication-optimal, optimally resilient multi-valued ABA protocol, based on an existing ABA protocol for short messages.
Last updated: 2011-04-25
Practical Distributed Key Generation Scheme
Uncategorized
Generating a distributed key, where a constant fraction of the players can reconstruct the key, is an essential component of threshold cryptography. Previous solutions are based on univariate polynomials. We present a new distributed key generation (DKG) protocol. The key idea of our scheme is to use a bivariate symmetric polynomial. Compared with other solutions, our construction is efficient in computation and simple and flexible in applications. In addition, our construction admits a rigorous proof of security. We also study the new-member-joining problem which often occurs in applications: players P1, ..., Pn, who make up a group G, have jointly generated a pair of private and public keys using some DKG; when a new player hopes to join G, how does he obtain a share of the private key? We present a new-member-joining protocol which is based on our DKG scheme and which completely solves this problem. We also prove the security of this new-member-joining protocol in terms of correctness and secrecy.
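The central object in such a scheme is a symmetric bivariate polynomial over a finite field: player i holds the univariate slice f(x, i), and any two players i and j can both compute the common value f(i, j) = f(j, i). The following is only a rough illustration of this symmetry property; the field modulus, degree, and all names are hypothetical choices, not parameters from the paper.

```python
import random

P = 2_147_483_647  # prime field modulus (hypothetical toy parameter)
t = 2              # degree in each variable (hypothetical)

def sym_bivariate(t):
    """Random symmetric coefficient matrix: a[i][j] == a[j][i],
    so f(x, y) = sum a[i][j] x^i y^j satisfies f(x, y) == f(y, x)."""
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = random.randrange(P)
    return a

def eval_f(a, x, y):
    """Evaluate f(x, y) mod P."""
    return sum(a[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(len(a)) for j in range(len(a))) % P

a = sym_bivariate(t)
# symmetry gives players i and j a common value f(i, j) = f(j, i)
assert eval_f(a, 3, 5) == eval_f(a, 5, 3)
# the jointly generated secret would be the constant term f(0, 0)
assert eval_f(a, 0, 0) == a[0][0]
```

In a full DKG, each player would contribute its own such polynomial and the effective f would be their sum; the sketch above shows only the symmetry the construction exploits.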
On the Design of Trivium
eSTREAM called for new stream ciphers designed for niche areas such as exceptional performance in software and hardware where resources are restricted. This project provides an open platform to discuss these ciphers. Trivium is one of the promising new ciphers submitted to it. Until now, no attack has been successfully applied to it. This paper illustrates new design principles of stream ciphers based on the structure of Trivium and introduces the definition of k-order primitive polynomials. New designs of Trivium are also given according to the principles in this paper.
One-time-password-authenticated key exchange
To reduce the damage of phishing and spyware attacks, banks, governments, and other security-sensitive industries are deploying one-time password systems, where users have many passwords and use each password only once. If a single password is compromised, it can only be used to impersonate the user once, limiting the damage caused. However, existing practical approaches to one-time passwords have been susceptible to sophisticated phishing attacks.
We give a formal security treatment of this important practical problem. We consider the use of one-time passwords in the context of password-authenticated key exchange (PAKE), which allows for mutual authentication, session key agreement, and resistance to phishing attacks. We describe a security model for the use of one-time passwords, explicitly considering the compromise of past (and future) one-time passwords, and show a general technique for building a secure one-time-PAKE protocol from any secure PAKE protocol. Our techniques also allow for the secure use of pseudorandomly generated and time-dependent passwords.
Precise Time and Space Simulatable Zero-Knowledge
Traditionally, the definition of zero-knowledge states that an
interactive proof provides zero (additional) knowledge
if the view of any \emph{polynomial-time} verifier can be
reconstructed by a \emph{polynomial-time} simulator. Since this
definition only requires that the worst-case running-time of the
verifier and simulator are polynomials, zero-knowledge becomes a
worst-case notion.
In STOC'06, Micali and Pass proposed a new notion of precise
zero-knowledge, which captures the idea that the view of any
verifier in every interaction can be reconstructed in (almost) the
same time (i.e., the view can be ``indistinguishably
reconstructed''). This is the strongest notion among the known
efforts to make the definition of zero-knowledge precise.
However, every algorithm consumes two kinds of computational
resources in computation: time and space.
Although the view of a verifier in the interaction of a precise
zero-knowledge protocol can be reconstructed in almost the same
time, the simulator may run in very large space while the verifier
runs in only very small space. In this case it is still doubtful
whether the verifier is indifferent between taking part in the
interaction and running the simulator. Thus the notion of precise
zero-knowledge may still be insufficient. This shows that making
the definition of zero-knowledge precise needs further
investigation.
In this paper, we propose a new notion of precise time and space
simulatable zero-knowledge (PTSSZK), which captures the idea that
the view of any verifier in each interaction can be reconstructed
\emph{not only} in the same time, \emph{but also} in the same space.
We construct the first PTSSZK proofs and arguments with simultaneous
linear time and linear space precisions for all languages in .
Our protocols do not use noticeably more rounds than the known
precise zero-knowledge protocols, and the probability analysis of
the successful extraction in the new simulation strategy may be of
independent interest.
Efficiently from Semi-honest to Malicious OT via OLFE
A combiner securely implements a functionality out of a set of implementations of another functionality, some of which may be insecure. We present two efficient combiners for oblivious linear function evaluation (OLFE). The first is a constant-rate OLFE combiner in the semi-honest model; the second implements Rabin string oblivious transfer (RabinOT) from OLFE in the malicious model.
As an application, we show very efficient reductions in the malicious model of RabinOT over strings to one-out-of-two oblivious transfer over bits (OT) that is only secure in the semi-honest model. For strings of size , our reduction uses only instances of OT, while previous results required . Our new reduction leads to an efficiency improvement for general multi-party computation (MPC) based on semi-honest OT, and makes it almost as efficient as MPC based on malicious OT.
All reductions are unconditionally secure, black-box, universally composable and secure against adaptive adversaries.
Efficient Verifiable Escrow and Fair Exchange with Trusted Hardware
At the heart of many fair exchange problems is verifiable escrow: a
sender encrypts some value using the public key of a trusted party
(called the recovery agent), and then must convince the receiver of
the ciphertext that the corresponding plaintext satisfies some
property (e.g., it contains the sender's signature on a
contract). Previous solutions to this problem are interactive, and
often rely on communication-intensive cut-and-choose zero-knowledge
proofs. In this paper, we provide a solution that uses generic trusted
hardware to create an efficient, non-interactive verifiable escrow
scheme. Our solution allows the protocol to use a set of recovery
agents with a threshold access structure, the \emph{verifiable group
escrow} notion which was informally introduced by Camenisch and
Damgard and which is formalized here. Finally, this paper shows how
this new non-interactive verifiable escrow scheme can be used to
create an efficient optimistic protocol for fair exchange of
signatures.
Cheating Detection and Cheater Identification in CRT-based Secret Sharing Schemes
In this paper we analyze the cheating detection and
cheater identification problems for secret sharing schemes
based on the Chinese remainder theorem (CRT), more precisely
for the Mignotte [1] and Asmuth-Bloom [2] schemes. We prove
that the majority of the solutions for Shamir's scheme [3] can
be translated to these schemes and that, moreover, there are some
interesting scheme-specific solutions.
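For reference, the Mignotte scheme analyzed above shares a secret by reducing it modulo pairwise-coprime moduli and reconstructs it with the Chinese remainder theorem. The sketch below uses hypothetical toy moduli chosen to satisfy the Mignotte range condition; none of these values come from the paper.

```python
from functools import reduce

def crt(residues, moduli):
    """Chinese remainder theorem: the unique x mod prod(moduli)
    with x = r_i (mod m_i) for each pair (r_i, m_i)."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

# (k, n)-Mignotte sketch: the secret must lie above the product of the
# k-1 largest moduli and below the product of the k smallest
moduli = [103, 107, 109, 113, 127]  # pairwise coprime (toy values)
k, secret = 3, 1_000_000            # 113*127 < secret < 103*107*109

shares = [(secret % m, m) for m in moduli]  # share i is (S mod m_i, m_i)

# any k shares determine the secret via CRT
subset = shares[:k]
recovered = crt([s for s, _ in subset], [m for _, m in subset])
assert recovered == secret
```

A cheating share corrupts the CRT reconstruction silently, which is exactly why the detection and identification mechanisms studied in the paper are needed on top of this basic scheme.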
Cryptanalysis and Security Enhancement on the Generation of Mu-Varadharajan Electronic Voting Protocol
Uncategorized
Mu and Varadharajan proposed an electronic voting scheme and claimed
that their scheme authenticates the voters, protects their anonymity, and detects the identity of double voters. Due to some weaknesses in the Mu-Varadharajan scheme, several modified schemes have been proposed by Lin et al., Hwang et al., Rodriguez-Henriquez et al. and Asaar et al.; however, this paper shows that these schemes fail to fulfill the stated properties. For this purpose, we take the Hwang et al. scheme as a case study and apply our new attacks to it. We also consider the applicability of the attacks to the other schemes. In addition, we present a new scheme and show that it resists the proposed attacks without losing
efficiency.
Double Voter Perceptible Blind Signature Based Electronic Voting Protocol
Uncategorized
Mu et al. have proposed an electronic voting protocol and
claimed that it protects the anonymity of voters, detects double
voting and authenticates eligible voters. It has been shown that
it neither protects voters' privacy nor prevents double voting.
Since then, several schemes have been presented to fulfill these
properties. However, many of them suffer from the same
weaknesses. In this paper, taking the Asadpour et al. scheme
as one of the latest of these and showing its weaknesses, we propose
a new voting scheme which is immune to the weaknesses
of previous schemes without losing efficiency. The scheme
is based on a special structure which directly
uses the identity of the voter, hides it in that structure and reveals
it after double voting. We also show that the security of this
scheme depends on the hardness of the RSA cryptosystem, the Discrete
Logarithm problem and the Representation problem.
Utilizing postponed ephemeral and pseudo-static keys in tripartite and identity-based key agreement protocols
We propose a new one-round implicitly authenticated three-party protocol that extends Joux's protocol, as well as a two-party identity-based protocol. Our protocols have a single communication round that consists of ephemeral (one-time) public keys, along with certificates in the tripartite protocol and identities in the identity-based setting. As such, our protocols are communication efficient and furthermore do not require an enhanced message format.
Attacks on {RFID}-Based Electronic Voting Systems
Uncategorized
Many secure systems, such as contactless credit cards and secure entrance systems, are built with contactless smart-card RFID technologies. In many cases these systems are claimed to be secure based on the assumption that readers and tags need to be in close proximity (about 5cm) in order to communicate. However, it is known that this proximity assumption is false: Relay attacks are a class of hardware-based attacks which compromise the safety of such systems by dramatically extending the interrogation range of the contactless system. Interestingly, the proposed Israeli e-voting scheme is based on contactless smartcards.
In this work we show how the proposed system can be completely compromised using low-cost relay attacks. Our attacks allow an adversary to read out all votes already cast into the ballot box, suppress the votes of one or several voters, rewrite votes at will, and even completely disqualify all votes in a single voting station. Our attacks are easy to mount, very difficult to detect, and compromise both the confidentiality and the integrity of the election system.
How to Construct Identity-Based Signatures without the Key Escrow Problem
The inherent key escrow problem is one of the main reasons for the
slow adoption of identity-based cryptography. The existing solution for mitigating the key escrow problem is by adopting multiple Private Key Generators (PKGs). Recently, there was a proposal that attempted to reduce the trust of the PKG by allowing a malicious PKG to be caught if he reveals the user's identity-based secret key illegally. Nonetheless, the proposal does not consider that the PKG can simply decrypt the ciphertext instead of revealing the secret key itself (in the case of identity-based encryption schemes).
The aim of this paper is to present an escrow-free identity-based signature (IBS) scheme, in which the malicious PKG will be caught if it releases a signature on behalf of the user but signed by itself. We present a formal model to capture such a scheme and provide a concrete construction.
Higher-order Masking and Shuffling for Software Implementations of Block Ciphers
Differential Power Analysis (DPA) is a powerful side channel key recovery attack that efficiently breaks block cipher implementations. In software, two main techniques are usually applied to thwart it: masking and operation shuffling. To benefit from the advantages of both techniques, recent works have proposed to combine them. However, the schemes which have been
designed until now only provide limited resistance levels, and some
advanced DPA attacks have turned out to break them. In this paper,
we investigate the combination of masking and shuffling. We moreover
extend the approach with the use of higher-order masking and we
show that it enables to significantly improve the security level of
such a scheme. We first conduct a theoretical analysis in which the
efficiency of advanced DPA attacks targeting masking and shuffling
is quantified. Based on this analysis, we design a generic scheme
combining higher-order masking and shuffling. This scheme is
scalable and its security parameters can be chosen according to any
desired resistance level. As an illustration, we apply it to protect
a software implementation of AES for which we give several
security/efficiency trade-offs.
An Efficient Method for Random Delay Generation in Embedded Software
Random delays are a countermeasure against a range of side channel and fault attacks that is often implemented in embedded software. We propose a new method for generation of random delays and a criterion for measuring the efficiency of a random delay countermeasure. We implement this new method along with the existing ones on an 8-bit platform and mount practical side-channel attacks against the implementations. We show that the new method is significantly more secure in practice than the previously published solutions and also more lightweight.
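The baseline countermeasure being improved here is the insertion of dummy work of random length between sensitive instructions, so a targeted operation no longer sits at a fixed time offset in the power trace. The following generic sketch illustrates only that baseline idea, not the paper's specific generator or its efficiency criterion; all names are hypothetical.

```python
import random

def random_delay(max_len):
    """Busy-loop for a random number of iterations. In embedded C the
    counter would be volatile so the compiler cannot remove the loop."""
    x = 0
    for _ in range(random.randrange(max_len)):
        x ^= 0xA5  # dummy work
    return x

def protected_operation(do_sensitive_step):
    """Desynchronize traces by padding the sensitive step on both sides."""
    random_delay(64)
    result = do_sensitive_step()
    random_delay(64)
    return result
```

A real implementation must tune the delay distribution: the paper's contribution is precisely a better generation method and a criterion for comparing such distributions against their performance cost.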
Subtleties in the Definition of IND-CCA: When and How Should Challenge-Decryption be Disallowed?
The definition of IND-CCA disallows an adversary from querying the
challenge ciphertext to its decryption oracle. We point out that there are several ways to formalize this. We show that, surprisingly, for
public-key encryption the resulting notions are not all equivalent.
We then consider the same question for key-encapsulation mechanisms
(KEMs) and show that in this case the four notions are indeed all
equivalent. Our discoveries are another manifestation of the
subtleties that make the study of cryptography so attractive, and are
important for achieving the definitional clarity and unity
required for firm foundations.
More Differential Paths of TIB3
Uncategorized
The TIB3-256 hashing algorithm [3] is a first round candidate in the SHA-3 competition [2]. Properties of the message expansion and the PHTX function are observed, and then exploited to create new high-probability differential paths through the compression function. Examples conforming to the differential paths are presented. Only one of these differential paths can be applied to the tweaked version of TIB3v2 [4]. Due to the dual-block input mode used in TIB3 and TIB3v2, these differential paths do not seem extensible to the full hash functions.
Note: In the time between when this paper was written and when the paper was made public, the SHA-3 Round 2 Candidates were announced, and TIB3 had been eliminated from the competition.
KronCrypt - A New Symmetric Cryptosystem Based on Kronecker's Approximation Theorem
In this paper we show how to use an old concept from Diophantine analysis, the approximation theorem of Kronecker, in symmetric cryptography. As a first practical application we propose and analyze the new symmetric 128-bit block cipher KronCrypt. The cipher is a 4-round Feistel network with a non-bijective round function f made up of a variable number of large key-dependent S-boxes, XORs and modular additions. Its key length is variable but not less than 128 bits. The main innovation of KronCrypt in the area of symmetric cryptography is the fact that the key-dependent S-boxes are based upon a constructive proof of the approximation theorem of Kronecker used as a Boolean function. We prove the correctness of our concept in general and show how we designed the new cipher KronCrypt. Furthermore, results concerning statistical behaviour, i.e. confusion, diffusion and completeness, as well as differential cryptanalysis are presented.
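Since KronCrypt's overall shape is a standard 4-round Feistel network, the encrypt/decrypt symmetry can be sketched generically. The round function f below is a hypothetical stand-in, not KronCrypt's Kronecker-based S-box construction, and the keys are toy values.

```python
MASK64 = 2**64 - 1

def feistel_encrypt(block, round_keys, f):
    # split the 128-bit block into two 64-bit halves
    L, R = block >> 64, block & MASK64
    for k in round_keys:
        L, R = R, L ^ f(R, k)
    return (L << 64) | R

def feistel_decrypt(block, round_keys, f):
    # run the rounds backwards, undoing each swap-and-XOR step
    L, R = block >> 64, block & MASK64
    for k in reversed(round_keys):
        L, R = R ^ f(L, k), L
    return (L << 64) | R

# hypothetical non-bijective round function (placeholder for KronCrypt's f)
def f(x, k):
    return ((x + k) ^ (x >> 7)) & MASK64

keys = [0x1111, 0x2222, 0x3333, 0x4444]  # 4 rounds, toy round keys
pt = 0x0123456789abcdef_fedcba9876543210
assert feistel_decrypt(feistel_encrypt(pt, keys, f), keys, f) == pt
```

Note that decryption works even though f is non-bijective, a property KronCrypt relies on: a Feistel network never needs to invert its round function.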
Attacks Against Permute-Transform-Xor Compression Functions and Spectral Hash
Uncategorized
This paper presents an attack on the strong collision resistance of the Spectral Hash SHA-3 candidate. Spectral-Hash (shash) is a Merkle-Damgård based hash function, carefully designed to resist all known cryptographic attacks. To the best of our knowledge, our attack is the only known attack against the shash algorithm. We exploit the fundamental structure of the algorithm, completely bypassing the hash function's formidable cryptographic protections. Our attack is presented in three stages. First, we define the family of functions which have the structure we wish to exploit. We call members of this family PTX functions. Next, we show that all PTX functions, including functions which use random oracles, are vulnerable to our collision attack. Finally, we reformulate the shash compression function, showing that it is a PTX function and thus vulnerable. We present results on a practical implementation of our attack, generating collisions for shash in less than a second on a typical desktop computer.
Security Bounds for the Design of Code-based Cryptosystems
Uncategorized
Code-based cryptography is often viewed as an interesting ``Post-Quantum'' alternative to classical number-theoretic cryptography. Unlike many other such alternatives, it has the convenient advantage of having only a few, well-identified, attack algorithms. However, improvements to these algorithms have made their effective complexity quite difficult to compute. We give here some lower bounds on the work factor of idealized versions of these algorithms, taking into account all possible tweaks which could improve their practical complexity. The aim of this article is to help designers select durably secure parameters.
Three Improved Algorithms for Multi-path Key Establishment in Sensor Networks Using Protocols for Secure Message Transmission
In this paper, we propose a security model to capture active attacks against multi-path key establishment (MPKE) in sensor networks. Our model strengthens previous models to capture more attacks and achieve essential security goals for multi-path key establishment. In this model, we can apply protocols for perfectly secure message transmission to solve the multi-path key establishment problem. We propose a simple new protocol for optimal one-round perfectly secure message transmission based on Reed-Solomon codes. Then we use this protocol to obtain two new multi-path key establishment schemes that can be applied provided that fewer than one third of the paths are controlled by the adversary. Finally, we describe another MPKE scheme that tolerates a higher fraction (less than 1/2) of paths controlled by the adversary. This scheme is based on a new protocol for a weakened version of message transmission, which is very simple and efficient.
Our multi-path key establishment schemes achieve improved security and lower communication complexity, as compared to previous schemes.
Distinguishing Attacks on Stream Ciphers Based on Arrays of Pseudo-random Words
In numerous modern stream ciphers, the internal state consists of
a large array of pseudo-random words, and the output key-stream is
a relatively simple function of the state. In [Paul-Preneel],
it was heuristically shown that in various cases this structure
may lead to distinguishing attacks on the cipher. In this paper
we further investigate this structural attack. We present a
rigorous proof of the main probabilistic claim used in the attack
in the basic cases, and demonstrate by examining a concrete
example (the cipher SN3) that the heuristic
assumptions of the attack are remarkably precise in more
complicated cases. Furthermore, we use the general technique to
devise a distinguishing attack on the stream cipher
MV3 requiring words of key-stream.
Unlike the attacks in [Paul-Preneel], our attack does not
concentrate on the least significant bits of the words, thus
allowing us to handle a combination of more operations
(XORs, modular additions and multiplications, and
rotations by a fixed number of bits) in the update and output
rules of the cipher.
Improved Garbled Circuit Building Blocks and Applications to Auctions and Computing Minima
We consider generic Garbled Circuit (GC)-based techniques for Secure Function Evaluation (SFE) in the semi-honest model.
We describe efficient GC constructions for addition, subtraction, multiplication, and comparison functions. Our circuits for subtraction and comparison are approximately two times smaller (in terms of garbled tables) than previous constructions. This implies corresponding computation and communication improvements in SFE of functions using our efficient building blocks. The techniques rely on the recently proposed ``free XOR'' GC technique.
Further, we present concrete and detailed improved GC protocols
for the problem of secure integer comparison, and related
problems of auctions, minimum selection, and minimal distance.
Performance improvement comes both from building on
our efficient basic blocks and several problem-specific GC optimizations.
We provide precise cost evaluation of our constructions, which serves as a
baseline for future protocols.
Authenticated Broadcast with a Partially Compromised Public-Key Infrastructure
Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Existing protocols, however, do not distinguish between corrupted parties who do not follow the protocol, and honest parties whose secret (signing) keys have been compromised but continue to behave honestly. We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter.
Consider a network of parties, where an adversary has compromised the secret keys of up to honest parties and, in addition, fully controls the behavior of up to other parties. We show that for any fixed and any fixed , there exists an efficient protocol for broadcast if and only if . (When , standard results imply feasibility for all .) We also show that if are not fixed, but are only guaranteed to satisfy the above bound, then broadcast is impossible to achieve except for a few specific values of ; for these ``exceptional'' values of , we demonstrate broadcast protocols. Taken together, our results give a complete characterization of this problem.
A Tree Based Recursive Scheme for Space Efficient Secret Sharing
This paper presents a k-out-of-n recursive secret sharing scheme based on an n-ary tree data structure. In recursive hiding of secrets, the user encodes additional secrets in the shares of the original secret intended to be shared, without an expansion in the size of the latter, thereby decreasing the effective share size per secret and increasing the overall space efficiency of secret sharing, with a trade-off in security. The proposed scheme has applications in secure distributed storage and information dispersal protocols. It may also be used as a steganographic channel to transmit hidden information in secret sharing, which may serve for authentication and verification of shares and of the reconstructed secret itself.
A Secure and Efficient Authenticated Diffie–Hellman Protocol
The Exponential Challenge Response (XRC) and Dual Exponential Challenge Response (DCR) signature schemes are the building blocks of the HMQV protocol. We propose a complementary analysis of these schemes; on the basis of this analysis we show how impersonation and man-in-the-middle attacks can be mounted against the HMQV protocol when certain session-specific information leakages happen.
We define the Full Exponential Challenge Response (FXRC) and Full Dual Exponential Challenge Response (FDCR) signature schemes; using these schemes we propose the Fully Hashed MQV protocol (with security arguments), which preserves the remarkable performance of the (H)MQV protocols and resists the attacks we present.
Single Block Attacks and Statistical Tests on CubeHash
This paper describes a second preimage attack
on the CubeHash cryptographic one-way hash function.
The attack finds a second preimage
in less time than brute force search
for these CubeHash variants:
CubeHash / -224 for ;
CubeHash / -256 for ;
CubeHash / -384 for ; and
CubeHash / -512 for .
However, the attack does not break
the CubeHash variants recommended for SHA-3.
The attack requires minimal memory
and can be performed in a massively parallel fashion.
This paper also describes several statistical randomness tests on CubeHash.
The tests were unable to disprove the hypothesis
that CubeHash behaves as a random mapping.
These results support CubeHash's viability
as a secure cryptographic hash function.
On-line Non-transferable Signatures Revisited
Undeniable signatures, introduced by Chaum and van Antwerpen, and designated confirmer signatures, introduced by Chaum, allow a signer to control the verifiability of his signatures by requiring a verifier to interact with the signer to verify a signature. An important security requirement for these types of signature schemes is \emph{non-transferability} which informally guarantees that even though a verifier has confirmed the validity of a signature by interacting with the signer, he cannot prove this knowledge to a third party. Recently Liskov and Micali pointed out that the commonly used notion of non-transferability only guarantees security against an off-line attacker which cannot influence the verifier while he interacts with the signer, and that almost all previous schemes relying on interactive protocols are vulnerable to on-line attacks. To address this, Liskov and Micali formalized on-line non-transferable signatures which are resistant to on-line attacks, and proposed a generic construction based on a standard signature scheme and an encryption scheme.
In this paper, we revisit on-line non-transferable signatures. Firstly, we extend the security model of Liskov and Micali to cover not only the sign protocol, but also the confirm and disavow protocols executed by the confirmer. Our security model furthermore considers the use of multiple (potentially corrupted or malicious) confirmers, and guarantees security against attacks related to the use of signer specific confirmer keys. We then present a new approach to the construction of on-line non-transferable signatures, and propose a new concrete construction which is provably secure in the standard model. Unlike the construction by Liskov and Micali, our construction does not require the signer to issue ``fake'' signatures to maintain security, and allows the confirmer to both confirm and disavow signatures. Lastly, our construction provides noticeably shorter signatures than the construction by Liskov and Micali.
Generic Attacks on Misty Schemes -5 rounds is not enough-
Misty schemes are classic cryptographic schemes used to construct pseudo-random permutations from bits to bits by using pseudo-random permutations from bits to bits. These permutations will be called the ``internal'' permutations, and is the number of rounds of the Misty scheme. Misty schemes are important from a practical point of view since for example, the Kasumi algorithm based on Misty schemes has been adopted as the standard blockcipher in the third generation mobile systems. In this paper we describe the best known ``generic'' attacks on Misty schemes, i.e. attacks when the internal permutations do not have special properties, or are randomly chosen. We describe known plaintext attacks (KPA), non-adaptive chosen plaintext attacks (CPA-1) and adaptive chosen plaintext and ciphertext attacks (CPCA-2) against these schemes. Some of these attacks were previously known, some are new. One important result of this paper is that we will show that when rounds, there exist such attacks with a complexity strictly less than . Consequently, at least 6 rounds are necessary to avoid these generic attacks on Misty schemes. When we also describe some attacks on Misty generators, i.e. attacks where more than one Misty permutation is required.
Last updated: 2009-08-26
Pairing-Friendly Elliptic Curves With Various Discriminants
In this paper, we extend the Brezing-Weng method by parameterizing the discriminant by a polynomial, and give a construction of families of pairing-friendly elliptic curves with various discriminants. For 5, 8, 9, 15, 16, 20, 24 and 28, our method gives smaller values than those in previous results.
On Generic Constructions of Designated Confirmer Signatures (The ``Encryption of a Signature'' Paradigm Revisited)
Uncategorized
Designated Confirmer signatures were introduced to limit the verification property
inherent to digital signatures. In fact, the verification in these
signatures is replaced by a confirmation/denial protocol between the
\emph{designated confirmer} and some verifier. An intuitive way to obtain such signatures consists in
first generating a digital signature on the message to be signed, then
encrypting the result using a suitable encryption scheme. This approach, referred to as the ``encryption of a signature'' paradigm, requires the
constituents (encryption and signature schemes) to meet the highest
security notions in order to achieve secure constructions.
In this paper, we revisit this method and establish the necessary and
sufficient assumptions on the building blocks in order to attain secure
confirmer signatures. Our study concludes that the paradigm, used in its basic
form, cannot accommodate a class of encryption schemes which is vital for the
efficiency of the confirmation/denial protocols. Next, we consider a slight
variation of the paradigm, proposed in the context of undeniable signatures;
we recast it in the confirmer signature framework along with changes that
yield more flexibility, and we demonstrate its efficiency
by explicitly describing its confirmation/denial protocols when instantiated with building blocks from a large class of
signature/encryption schemes. Interestingly, the class of signatures we
consider is very popular and has been for instance used to build efficient
designated verifier signatures.
AIDA Breaks BIVIUM (A&B) in 1 Minute Dual Core CPU Time
The stream cipher BIVIUM (both BIVIUM-A and BIVIUM-B), a modification of the eSTREAM finalist TRIVIUM, can be broken completely by the Algebraic IV Differential Attack, AIDA, using simulations or one minute of dual core processing.
AIDA uses the subspaces of two 32-dimensional vector spaces over subsets of IV bits to recover 56 of the 80 key bits. The remaining 24 key bits are most easily determined by brute force search.
We applied the Fast Reed-Muller Transform to speed up the search for linear equations in the key bits
and the Wavefront Model to rule out nonlinear relations in the key bits early on.
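The fast Reed-Muller transform used in that search step converts a Boolean function's truth table into its algebraic normal form (ANF) with O(n 2^n) XORs, which is what makes scanning for linear relations in the key bits cheap. A minimal sketch, under the usual indexing convention that truth-table position m encodes the input assignment given by the bits of m:

```python
def reed_muller_transform(tt):
    """Butterfly turning a truth table (length 2^n, entries 0/1) into
    ANF coefficients: output[m] is the coefficient of the monomial
    whose variables are the set bits of m."""
    tt = list(tt)
    step = 1
    while step < len(tt):
        for i in range(0, len(tt), 2 * step):
            for j in range(i, i + step):
                tt[j + step] ^= tt[j]
        step *= 2
    return tt

# f(x0, x1) = x0 AND x1: truth table [0,0,0,1] -> ANF has only x0*x1
assert reed_muller_transform([0, 0, 0, 1]) == [0, 0, 0, 1]
# f(x0, x1) = x0 XOR x1 is linear: only singleton monomials survive
assert reed_muller_transform([0, 1, 1, 0]) == [0, 1, 1, 0]
```

After the transform, a function is affine in its inputs exactly when the only nonzero ANF coefficients sit at the constant term and at power-of-two indices, so linearity can be checked by inspecting those positions.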