All papers (Page 208 of 24263 results)
New Pseudo-Near-Collision Attack on Reduced-Round of Hamsi-256
Hamsi-256 was designed by Özgül Küçük and is a second-round candidate hash function in the SHA-3 competition. The compression function of Hamsi-256 maps a 256-bit chaining value and a 32-bit message to a new 256-bit chaining value. When hashing a message, Hamsi-256 applies 3 rounds to each message block, except for the last block, to which it applies 6 rounds. In this paper, we give a pseudo-near-collision for 5-round Hamsi-256. Using message modification, the pseudo-near-collisions for 3, 4 and 5 rounds can be found with , and compression function computations, respectively.
On the Security of UOV
In this short note, we investigate the security of the Unbalanced Oil and Vinegar Scheme \cite{uov}. To do so, we use a hybrid approach for solving the algebraic systems naturally arising when mounting a signature-forgery attack. The basic idea is to compute Gröbner bases of several modified systems rather than a Gröbner basis of the initial system. It turns out that our approach is efficient in practice. We have obtained a complexity bounded from above by (or hours of computation) to forge a signature on a set of parameters proposed by the designers of UOV.
New Techniques for Dual System Encryption and Fully Secure HIBE with Short Ciphertexts
We construct a fully secure HIBE scheme with short ciphertexts. The previous construction of Boneh, Boyen, and Goh was only proven to be secure in the selective model, under a non-static assumption which depended on the depth of the hierarchy.
To obtain full security, we apply the dual system encryption concept recently introduced by Waters. A straightforward application of this technique is insufficient to achieve short ciphertexts, since the original instantiation of the technique includes tags that do not compress. To overcome this challenge, we design a new method for realizing dual system encryption. We provide a system in composite order groups (of three primes) and prove the security of our scheme under three static assumptions.
PPS: Privacy Preserving Statistics using RFID Tags
As RFID applications are entering our daily life, many new security and privacy challenges arise. However, current research in RFID security focuses mainly on simple authentication and privacy-preserving identification. In this paper, we discuss the possibility of widening the scope of RFID security and privacy by introducing a new application scenario. The suggested application consists of computing statistics on private properties of individuals stored in RFID tags. The main requirement is to compute global statistics while preserving the privacy of individual readings. PPS assures the privacy of properties stored in each tag through the combination of homomorphic encryption and aggregation at the readers. Re-encryption is used to prevent tracking of users. The readers scan tags and forward the aggregate of their encrypted readings to the back-end server. The back-end server then decrypts the aggregates it receives and updates the global statistics accordingly. PPS is provably privacy-preserving. Moreover, tags can be very simple since they are not required to perform any kind of computation, but only to store data.
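To illustrate the aggregation idea described in this abstract, here is a minimal sketch using additively homomorphic "exponential ElGamal" over Z_p^*; the group, key handling and re-encryption below are simplified stand-ins and not the paper's actual construction.

```python
# Sketch of the PPS aggregation idea (illustrative, not the paper's scheme).
import random

p = 2**127 - 1           # a Mersenne prime; subgroup structure is ignored here
g = 3                    # assumed generator for this sketch

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def encrypt(bit, pk):
    """Encrypt a 0/1 reading as (g^r, g^bit * pk^r): additively homomorphic."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (pow(g, bit, p) * pow(pk, r, p)) % p)

def rerandomize(ct, pk):
    """Re-encryption: multiply by a fresh encryption of 0 to prevent tracking."""
    c1, c2 = ct
    z1, z2 = encrypt(0, pk)
    return ((c1 * z1) % p, (c2 * z2) % p)

def aggregate(cts):
    """A reader multiplies ciphertexts component-wise: this sums the plaintexts."""
    a1, a2 = 1, 1
    for c1, c2 in cts:
        a1, a2 = (a1 * c1) % p, (a2 * c2) % p
    return (a1, a2)

def decrypt_sum(ct, sk, max_sum):
    """Back-end server: recover g^sum, then brute-force the small discrete log."""
    c1, c2 = ct
    gs = (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 / c1^sk
    for s in range(max_sum + 1):
        if pow(g, s, p) == gs:
            return s
    raise ValueError("sum out of range")

readings = [1, 0, 1, 1, 0, 1]                # private tag properties (bits)
cts = [rerandomize(encrypt(b, pk), pk) for b in readings]
print(decrypt_sum(aggregate(cts), sk, len(readings)))   # -> 4
```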
On Cryptographic Protocols Employing Asymmetric Pairings -- The Role of ψ Revisited
Asymmetric pairings for which an efficiently-computable isomorphism ψ is known are called Type 2 pairings; if such an isomorphism is not known, the pairing is called a Type 3 pairing. Many cryptographic protocols in the asymmetric setting rely on the existence of ψ for their security reduction, while some use it in the protocol itself. For these reasons, it is believed that some of these protocols cannot be implemented with Type 3 pairings, while for some the security reductions either cannot be transformed to the Type 3 setting or else require a stronger complexity assumption. Contrary to these widely held beliefs, we argue that Type 2 pairings are merely inefficient implementations of Type 3 pairings, and appear to offer no benefit for protocols based on asymmetric pairings from the point of view of functionality, security, and performance.
Preimage Attacks on 41-Step SHA-256 and 46-Step SHA-512
In this paper, we propose preimage attacks on 41-step SHA-256 and 46-step SHA-512, which drastically increase the number of attacked steps compared to the best previous preimage attack, which worked for only 24 steps. The time complexity for 41-step SHA-256 is compression function operations and the memory requirement is words. The time complexity for 46-step SHA-512 is compression function operations and the memory requirement is words. Our attack is a meet-in-the-middle attack. We first consider the application of previous meet-in-the-middle attack techniques to SHA-2. We then analyze the message expansion of SHA-2, taking all previous techniques into account, to find a new independent message-word partition. To describe the ideas, we first explain the attack on 40-step SHA-256, and then explain how to extend the attack.
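As a generic illustration of the meet-in-the-middle strategy mentioned above (not the paper's SHA-2 analysis), the following toy sketch splits a deliberately weak compression function into two independently searchable halves; all names and parameters are invented for the example.

```python
# Toy meet-in-the-middle preimage search: the compression function is built
# from two independent halves (one driven by m1, one by m2, and the second
# half is invertible), so a preimage costs about 2*2^16 work instead of the
# 2^32 work of brute force over (m1, m2).
MASK = 0xFFFF

def forward(h, m1):                      # first half, computed forwards from h
    return (h * 31 + m1) & MASK

def backward(target, m2):                # second half, inverted from the target
    return (target - 17 * m2) & MASK

def compress(h, m1, m2):                 # the full toy compression function
    return (forward(h, m1) + 17 * m2) & MASK

h0, target = 0x1234, 0xBEEF

table = {forward(h0, m1): m1 for m1 in range(2**16)}   # forward table
for m2 in range(2**16):                                # meet in the middle
    mid = backward(target, m2)
    if mid in table:
        m1 = table[mid]
        assert compress(h0, m1, m2) == target
        print(f"preimage: m1={m1:#06x}, m2={m2:#06x}")
        break
```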
Pseudo-cryptanalysis of the Original Blue Midnight Wish
The hash function Blue Midnight Wish (BMW) is a candidate in the SHA-3
competition organised by the U.S. National Institute of Standards and Technology (NIST). BMW was selected for the second round of the competition, but the algorithm was tweaked in a number of ways. In this paper we describe cryptanalysis on the original version of BMW, as submitted to the SHA-3 competition in October 2008. When we refer to BMW, we therefore mean the original version of the algorithm.
The attacks described are (near-)collision, preimage and second preimage attacks on the BMW compression function. These attacks can also be described as pseudo-attacks on the full hash function, i.e., as attacks in which the adversary is allowed to choose the initial value of the hash function. The complexities of the attacks are about 2^{14} for the near-collision attack, about 2^{3n/8+1} for the pseudo-collision attack, and about 2^{3n/4+1} for the pseudo-(second) preimage attack, where n is the output length of the hash function. Memory requirements are negligible. Moreover, the attacks are not (or only moderately) affected by the choice of security parameter for BMW.
Preimages for Step-Reduced SHA-2
In this paper, we present a preimage attack on 42 step-reduced SHA-256 with time complexity and memory requirements of order . The same attack also applies to 42 step-reduced SHA-512 with time complexity and memory requirements of order . Our attack is a meet-in-the-middle preimage attack.
On the Security of PAS (Predicate-based Authentication Service)
Recently a new human authentication scheme called PAS (predicate-based authentication service) was proposed, which does not require the assistance of any supplementary device. The main security claim of PAS is to resist passive adversaries who can observe the whole authentication session between the human user and the remote server.
In this paper we give a detailed security analysis of PAS and show that PAS is insecure against both a brute force attack and a probabilistic attack. In particular, we show that the security of PAS against brute force attack was strongly overestimated. Furthermore, we introduce a probabilistic attack which can break part of the password even with a very small number of observed authentication sessions. Although the proposed attack cannot completely break the password, it can downgrade the PAS system to a much weaker system similar to common OTP (one-time password) systems.
Double-Exponentiation in Factor-4 Groups and its Applications
In previous work we showed how to compress certain prime-order subgroups of certain cyclotomic subgroups by a factor of 4. We also showed that single-exponentiation can be efficiently performed using compressed representations. In this paper we show that double-exponentiation can be efficiently performed using factor-4 compressed representation of elements. In addition to giving a considerable speed up to the previously known fastest single-exponentiation algorithm for general bases, double-exponentiation can be used to adapt our compression technique to ElGamal type signature schemes.
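A minimal sketch of what double-exponentiation means computationally, using the classical simultaneous square-and-multiply method ("Shamir's trick") over plain modular arithmetic; the paper's factor-4 compressed representation is not reproduced here.

```python
# Simultaneous double-exponentiation: g^a * h^b with one shared
# square-and-multiply pass instead of two separate exponentiations.
def double_exp(g, h, a, b, p):
    gh = (g * h) % p                      # precompute the joint base
    result = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        result = (result * result) % p    # one squaring per bit for both exponents
        bits = ((a >> i) & 1, (b >> i) & 1)
        if bits == (1, 1):
            result = (result * gh) % p
        elif bits == (1, 0):
            result = (result * g) % p
        elif bits == (0, 1):
            result = (result * h) % p
    return result

p = 2**61 - 1
g, h, a, b = 5, 7, 123456789, 987654321
assert double_exp(g, h, a, b, p) == (pow(g, a, p) * pow(h, b, p)) % p
print("ok")
```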
Resettable Public-Key Encryption: How to Encrypt on a Virtual Machine
Typical security models used for proving security of deployed cryptographic primitives do not allow adversaries to rewind or reset honest parties to an earlier state. Thus, it is common to see cryptographic protocols rely on the assumption that fresh random numbers can be continually generated. In this paper, we argue that because of the growing popularity of virtual machines and, specifically, their state snapshot and revert features, the security of cryptographic protocols proven under these assumptions is called into question. We focus on public-key encryption security in a setting where resetting is possible and random numbers might be reused. We show that existing schemes and security models are insufficient in this setting. We then provide new formal security models and show that making a simple and efficient modification to any existing PKE scheme gives us security under our new models.
A Simple Power Analysis Attack on the Serpent Key Schedule
We describe an SPA attack on an 8-bit smart card implementation of the Serpent block cipher. Our attack uses measurements taken during an on-the-fly key expansion together with linearity in the cipher's key schedule algorithm to drastically reduce the search time for an initial key. An implementation finds 256-bit keys in 3.736 ms on average. Our work shows that linearity in key schedule design and other cryptographic applications should be carefully evaluated for susceptibility to side-channel attacks and that search algorithm design can greatly speed up side-channel attacks.
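To see why linearity in the key schedule is so damaging, the sketch below implements the standard Serpent prekey recurrence and, assuming an SPA trace has already revealed any 8 consecutive prekey words, runs the linear recurrence backwards to the user key; the measurement step itself and Serpent's subsequent S-box derivation of round keys are out of scope.

```python
# Serpent prekey recurrence and its inversion (the recurrence is linear over
# GF(2), so known intermediate words propagate back to the key).
PHI = 0x9e3779b9
M32 = 0xFFFFFFFF
rol = lambda x, r: ((x << r) | (x >> (32 - r))) & M32
ror = lambda x, r: ((x >> r) | (x << (32 - r))) & M32

def expand(key_words, n):
    """w[i] = (w[i-8] ^ w[i-5] ^ w[i-3] ^ w[i-1] ^ PHI ^ i) <<< 11."""
    w = list(key_words)                       # the 8 user-key words w[-8..-1]
    for i in range(n):
        w.append(rol(w[-8] ^ w[-5] ^ w[-3] ^ w[-1] ^ PHI ^ i, 11))
    return w[8:]

def invert(window, start):
    """Given 8 consecutive prekey words w[start..start+7], run the linear
    recurrence backwards to recover the 8 preceding words."""
    w, base = list(window), start
    for _ in range(8):
        i = base + 7                          # index of the newest word in the window
        prev = ror(w[7], 11) ^ w[2] ^ w[4] ^ w[6] ^ PHI ^ i
        w, base = [prev] + w[:7], base - 1
    return w

key = [0x00112233, 0x44556677, 0x8899AABB, 0xCCDDEEFF,
       0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210]
assert invert(expand(key, 8), 0) == key       # 8 leaked words give the full key
print("user key recovered from 8 consecutive prekey words")
```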
Cryptanalysis of a Message Recognition Protocol by Mashatan and Stinson
At CANS 2008, Mashatan and Stinson suggested a message recognition protocol for ad hoc pervasive networks. The protocol provides a procedure to resynchronize in case of a (possibly adversarial) disruption of communication. We show that this resynchronization process does not provide the functionality intended and in fact enables an adversary to create selective forgeries. The computational effort for the attack is negligible and allows the insertion of arbitrary messages.
Improving the Berlekamp algorithm for binomials
In this paper, we describe an improvement of the Berlekamp algorithm for binomials over prime fields. We implement the proposed method for various cases and compare the results with the original Berlekamp method. The proposed method extends easily to the case where the base field is not a prime field.
On The Communication Complexity of Perfectly Secure Message Transmission in Directed Networks
In this paper, we re-visit the problem of perfectly secure message transmission (PSMT) in a directed network in the presence of a threshold adaptive Byzantine adversary having unbounded computing power. Desmedt et al. have given the characterization for three or more phase PSMT protocols over directed networks. Recently, Patra et al. have given the characterization of two phase PSMT over directed networks. Even though the issue of the tradeoff between phase complexity and communication complexity of PSMT protocols has been resolved in undirected networks, nothing is known in the literature regarding directed networks. In this paper, we completely settle this issue. Specifically, we derive the lower bounds on the communication complexity of (a) two phase PSMT protocols and (b) three or more phase PSMT protocols in directed networks. Moreover, we show that our lower bounds are asymptotically tight, by designing communication optimal PSMT protocols in directed networks, which are first of their kind.

We re-visit the problem of perfectly reliable message transmission (PRMT) as well. Any PRMT protocol that sends a message containing \ell field elements has a trivial lower bound of O(\ell) field elements on its communication complexity. Thus any PRMT protocol that sends a message of \ell field elements by communicating O(\ell) field elements is referred to as a communication optimal PRMT, or PRMT with constant factor overhead. Here, we characterize the class of directed networks over which communication optimal PRMT or PRMT with constant factor overhead is possible. Moreover, we design a communication optimal PRMT over a directed network that satisfies the conditions stated in our characterization. Our communication optimal PRMT/PSMT protocols employ several new techniques based on coding theory, which are of independent interest.
Additive Combinatorics and Discrete Logarithm Based Range Protocols
We show how to express an arbitrary integer interval [0, H] as a sumset of smaller integer intervals with small coefficients, and in particular how the coefficients can be found by using a nontrivial but efficient algorithm. This result may be interesting by itself in the context of additive combinatorics. Given the sumset representation of [0, H], we show how to decrease both the communication complexity and the computational complexity of the recent pairing-based range proof of Camenisch, Chaabouni and shelat from ASIACRYPT 2008. Our results are important in applications like e-voting where a voting server has to verify thousands of proofs of e-vote correctness per hour. Therefore, our new result in additive combinatorics has direct relevance in practice.
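A small self-check of the sumset idea: the coefficient formula G_i = floor((H + 2^i)/2^(i+1)) used below is one classical decomposition of [0, H] into scaled copies of [0, 1] and is only an illustration (the paper's more general decomposition with parts [0, u-1] is not reproduced in this listing); the script verifies it exhaustively for small H.

```python
# Verify that [0, H] = G_0*[0,1] + ... + G_{k-1}*[0,1] as a sumset.
from itertools import product

def coefficients(H):
    """One classical choice: G_i = floor((H + 2^i) / 2^(i+1)), i = 0..k-1."""
    k = H.bit_length()
    return [(H + (1 << i)) >> (i + 1) for i in range(k)]

def sumset(coeffs):
    """All values of sum(G_i * b_i) with b_i in {0, 1}."""
    return {sum(c * b for c, b in zip(coeffs, bits))
            for bits in product((0, 1), repeat=len(coeffs))}

for H in range(1, 513):                  # exhaustive check for small H
    assert sumset(coefficients(H)) == set(range(H + 1)), H
print("sumset decomposition of [0,H] verified for all H < 513")
```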
Password Based Key Exchange with Hidden Elliptic Curve Public Parameters
We here describe a new Password-based Authenticated Key Exchange (PAKE) protocol based on elliptic curve cryptography. We prove it secure in the Bellare-Pointcheval-Rogaway (BPR) model. Our proposal is conceived in such a way that it ensures that the elliptic curve public parameters remain private. This is important in the context of contactless ID devices as, in this case, it is easy to link these parameters with the nationality of the ID document owners.
Last updated: 2009-09-29
The LPN Problem with Auxiliary Input
This paper investigates the Learning from Parity with Noise (LPN) problem under the scenario that the unknowns (secret keys) are only unpredictable instead of being uniformly random to the adversaries. In practice, this corresponds to the case where an adversary already possesses some additional knowledge about the secret key. In the information-theoretic setting, we show that the problem is robust against arbitrary leakages as long as the unknowns retain a sufficient amount of min-entropy. In the computational setting, we prove Dodis et al.'s [STOC'09] conjecture that the auxiliary-input LPN assumption is implied by the standard LPN assumption, i.e., encryption schemes based on the standard LPN assumption are secure against any exponentially hard-to-invert auxiliary input.

Our setting is more general than the traditional model of auxiliary input, which deals with secret keys of sufficient min-entropy; in particular, we allow leakages that information-theoretically determine the secret keys, as long as the keys retain a linear amount of unpredictability pseudoentropy. Further, unlike most other schemes, our result is reducible to a well-known hardness problem and does not quantify over all hard-to-invert auxiliary functions.
The Certicom Challenges ECC2-X
To encourage research on the hardness of the elliptic-curve discrete-logarithm problem (ECDLP), Certicom has published a series of challenge curves and DLPs. This paper analyzes the cost of breaking the Certicom challenges over the binary fields and on a variety of platforms. We describe details of the choice of step function and distinguished points for the Koblitz and non-Koblitz curves. In contrast to the implementations for the previous Certicom challenges, we do not restrict ourselves to software and conventional PCs, but branch out to cover the majority of available platforms, such as various ASICs, FPGAs, CPUs and the Cell Broadband Engine. For the field arithmetic we investigate polynomial and normal basis arithmetic for these specific fields; in particular, for the challenges on Koblitz curves, normal bases become more attractive on ASICs and FPGAs.
Readers Behaving Badly: Reader Revocation in PKI-Based RFID Systems
Recent emergence of RFID tags capable of performing public key
operations motivates new RFID applications, including
electronic travel documents, identification cards and payment
instruments. In such settings, public key certificates form the
cornerstone of the overall system security. In this paper, we
argue that one of the prominent (and still woefully unaddressed) challenges is how to handle revocation checking of RFID reader certificates. This is an important issue considering that these high-end RFID tags are geared for applications such as e-documents and contactless payment instruments. Furthermore, the problem is unique to public key-based RFID systems, since tags (even those capable of complex cryptographic operations) have no clock and thus cannot use traditional (time-based) off-line revocation checking methods, whereas on-line methods require unrealistic connectivity assumptions.
In this paper, we address the problem of reader revocation in
PKI-Based RFID systems. We begin by observing an important
distinguishing feature of personal RFID tags used in
authentication, access control or payment applications: the involvement of a human user. We then take advantage of the user's awareness and presence to construct a simple, efficient, secure and (most importantly) feasible solution for reader revocation checking. Finally, we evaluate the usability and practical security of our solution via usability studies, and discuss its feasibility in a case study of e-Passports.
In our approach, the main extra feature is the requirement for
a small passive on-tag display. However, as discussed in the
paper, modern low-power display technology (e.g., e-paper) is
low-cost and appealing for other (e.g., authentication and
verification) purposes.
On Key Authentic Degree of Cryptosystem
Motivated by attacks such as the rubber-hose attack, we discuss the key authentic degree of a cryptosystem in detail and point out its significance. We give the key authentic degrees of modern cryptosystems under different conditions. Our analysis shows that under most realistic situations the key authentic degree of a modern cryptosystem is high, which means that modern cryptosystems are threatened by attacks such as the rubber-hose attack. We analyze the feasibility of reliable low key authentic degree, and study the implementation of low key authentic degree algorithms.
On Linear Cryptanalysis with Many Linear Approximations
In this paper we present a theoretical framework to quantify the information brought by several linear approximations of a block cipher, without putting any restriction on these approximations. We quantify the entropy of the key given the plaintext-ciphertext pair statistics, which is a much more accurate measure than the ones studied earlier. The techniques developed here apply to various ways of performing the linear attack, and can also be used to measure the entropy of the key for other statistical attacks. Moreover, we present a realistic attack on the full DES with a time complexity of for pairs, which is a big improvement compared to Matsui's algorithm 2 ( ).
Certificateless KEM and Hybrid Signcryption Schemes Revisited
Authentication and confidentiality are often required simultaneously in many cryptographic applications. The cryptographic primitive called signcryption effectively implements both, and while most public key based systems are appropriate for small messages, hybrid encryption (KEM-DEM) provides an efficient and practical way to securely communicate very large messages. Recently, Lippold et al. \cite{GCJ09} proposed a certificateless KEM in the standard model, and the first certificateless hybrid signcryption scheme was proposed by Fagen Li et al. \cite{LST09}. The concept of certificateless hybrid signcryption has evolved by combining the ideas of signcryption based on tag-KEM and certificateless cryptography. In this paper, we show that \cite{GCJ09} is not Type-I CCA secure and \cite{LST09} is existentially forgeable. We also propose an improved certificateless hybrid signcryption scheme and formally prove the security of the improved scheme against both adaptive chosen ciphertext attack and existential forgery in the appropriate security models for certificateless hybrid signcryption.
A Framework for Non-Interactive Instance-Dependent Commitment Schemes (NIC)
Zero-knowledge protocols are often studied through specific problems, like Graph-Isomorphism. In many cases this approach prevents an important level of abstraction and leads to limited results, whereas in fact the constructions apply to a wide variety of problems. We propose to address this issue with a formal framework of non-interactive instance-dependent commitment schemes (NIC). We define NIC in the perfect, statistical, and computational settings, and formally characterize problems admitting NIC in all of these settings. We also prove other useful lemmas such as closure properties. Consequently, results that previously applied only to specific problems are now strengthened by our framework to apply to classes of problems. By providing formal yet intuitive tools, our framework facilitates the construction of zero-knowledge protocols for a wide variety of problems, in various settings, without the need to refer to a specific problem. Our results are unconditional.
Asymptotic enumeration of correlation-immune boolean functions
A boolean function of n boolean variables is correlation-immune of order k if the function value is uncorrelated with the values of any k of the arguments. Such functions are of considerable interest due to their cryptographic properties, and are also related to the orthogonal arrays of statistics and the balanced hypercube colourings of combinatorics. The weight of a boolean function is the number of argument values that produce a function value of 1. If this is exactly half the argument values, that is, 2^{n-1} values, a correlation-immune function is called resilient.
An asymptotic estimate of the number N(n,k) of n-variable correlation-immune boolean functions of order k was obtained in 1992 by Denisov for constant k. Denisov repudiated that estimate in 2000, but we will show that the repudiation was a mistake.
The main contribution of this paper is an asymptotic estimate of N(n,k) which holds if k increases with n within generous limits and specialises to functions with a given weight, including the resilient functions. In the case of k=1, our estimates are valid for all weights.
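A small checker for the central definition, based on the Xiao-Massey characterization (f is correlation-immune of order k iff all Walsh coefficients of (-1)^f on masks of weight 1..k vanish); the example function and parameters are illustrative.

```python
def walsh(f, n, alpha):
    """Walsh coefficient of (-1)^f at mask alpha (f given as a truth table)."""
    return sum((-1) ** (f[x] ^ (bin(x & alpha).count("1") % 2))
               for x in range(2 ** n))

def ci_order(f, n):
    """Largest k such that f is correlation-immune of order k."""
    for k in range(1, n + 1):
        if any(walsh(f, n, a) != 0
               for a in range(1, 2 ** n) if bin(a).count("1") == k):
            return k - 1
    return n

n = 3
parity = [bin(x).count("1") % 2 for x in range(2 ** n)]   # x1 xor x2 xor x3
# Correlation-immune of order 2 and balanced, hence 2-resilient:
print(ci_order(parity, n), sum(parity) == 2 ** (n - 1))    # -> 2 True
```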
Efficient Oblivious Polynomial Evaluation with Simulation-Based Security
The study of secure multiparty computation has yielded powerful feasibility results showing that any efficient functionality can be securely computed in the presence of malicious adversaries. Despite this, there are few problems of specific interest for which we have highly efficient protocols that are secure in the presence of malicious adversaries under full simulation based definitions (following the ideal/real model paradigm). Due to the difficulties of constructing such protocols, many researchers have resorted to weaker definitions of security and weaker adversary models. In this paper, we construct highly efficient protocols for the well-studied problem of oblivious polynomial evaluation.
Our protocol is secure under standard cryptographic assumptions in the setting of malicious adversaries, and readily transforms to protocols that are secure under universal composability and in the presence of covert adversaries. Our protocol is constant round and requires O(d \cdot s) exponentiations, where d is the degree of the polynomial and s is a statistical security parameter (that should equal about 160 in practice).
Security Analysis and Design of Proxy Signature Schemes over Braid Groups
The braid groups have attracted much attention as a new platform of constructing cryptosystems. This paper firstly analyzes the security vulnerabilities of existing proxy signature schemes over braid groups and presents feasible attacks. Then a new proxy signature scheme is proposed based on the difficulty of the conjugacy search problem and the multiple conjugacy search problem. Security analysis shows that the proposed scheme satisfies the security requirements of proxy signature.
A remark on the computation of cube roots in finite fields
We consider the computation of cube roots in finite fields. For the computation of square roots in finite fields, there are two typical methods: the Tonelli-Shanks method and the Cipolla-Lehmer method. The former extends easily to the case of r-th roots, which is called the Adleman-Manders-Miller method, but it seems to be difficult to extend the latter to more general cases. In this paper, we propose two explicit algorithms for realizing the Cipolla-Lehmer method in the case of cube roots for prime fields with p ≡ 1 (mod 3). We implement these methods and compare the results.
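For context on the two cases, a minimal sketch: when p ≡ 2 (mod 3), cubing is a bijection on F_p and a cube root is a single exponentiation, while the methods compared in the paper target the harder case p ≡ 1 (mod 3), for which the sketch below only tests cubic residuosity.

```python
def cube_root_easy(a, p):
    """Cube root in F_p for p = 2 (mod 3): cubing is a bijection, and the
    root is the single exponentiation a^((2p-1)/3) mod p."""
    assert p % 3 == 2
    return pow(a, (2 * p - 1) // 3, p)

def is_cubic_residue(a, p):
    """Euler-type criterion for p = 1 (mod 3): a is a cube iff a^((p-1)/3) = 1."""
    assert p % 3 == 1 and a % p != 0
    return pow(a, (p - 1) // 3, p) == 1

p = 101                                   # p = 2 (mod 3): the easy case
for a in (2, 3, 50):
    assert pow(cube_root_easy(a, p), 3, p) == a

q = 1000003                               # q = 1 (mod 3): the hard case
print(is_cubic_residue(8, q))             # 8 = 2^3 is always a cube -> True
```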
Last updated: 2009-10-19
An Automata-Theoretic Interpretation of Iterated Hash Functions - Application to Multicollisions
In this paper we present a new method of constructing multicollision sets for iterated hash functions. Multicollisions have been studied quite actively since Joux published an attack against iterated hash functions, which works in complexity for -multicollisions. Recently Kelsey & Schneier and Aumasson have published even faster multicollision attacks, which are based on fixed points of the compression function and some assumptions on the attacker's capabilities. Our method has complexity for arbitrarily large multicollision sets and does not make any additional assumptions on the compression function or the attacker's capabilities. The drawback of our method is the extremely long messages generated by the algorithm in the worst case. Our method is based on automata theory: given a compression function f, we construct a finite state transducer which realizes the iterated hash function based on f. The method for generating multicollision sets is based on well-known pumping properties of these automata.
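For reference, a toy sketch of the Joux-style baseline mentioned above: t chained birthday collisions in a truncated compression function yield a 2^t-multicollision; the 24-bit compression function is an invented stand-in.

```python
import hashlib
from itertools import product

def compress(h, block):                       # toy 24-bit compression function
    return hashlib.sha256(h + block).digest()[:3]

def find_collision(h):
    """Birthday search: two distinct blocks colliding from chaining value h."""
    seen, i = {}, 0
    while True:
        b = i.to_bytes(8, "big")
        out = compress(h, b)
        if out in seen:
            return seen[out], b, out
        seen[out] = b
        i += 1

h = b"\x00\x00\x00"
pairs = []
for _ in range(4):                            # 4 chained collisions
    b1, b2, h = find_collision(h)
    pairs.append((b1, b2))

digests = set()
for choice in product(*pairs):                # any of the 2^4 = 16 messages
    hh = b"\x00\x00\x00"
    for block in choice:
        hh = compress(hh, block)
    digests.add(hh)
print(f"{2**len(pairs)} messages, {len(digests)} digest(s)")  # -> 16 messages, 1 digest(s)
```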
Identity-Based Hybrid Signcryption
Signcryption is a cryptographic primitive that fulfills both the
functions of digital signature and public key encryption
simultaneously, at a cost significantly lower than that required by
the traditional signature-then-encryption approach. In this paper,
we address the question of whether it is possible to construct a hybrid
signcryption scheme in the identity-based setting. This question seems
to have never been addressed in the literature. We answer the
question positively in this paper. In particular, we extend the
concept of signcryption key encapsulation mechanism to the
identity-based setting. We show that an identity-based signcryption
scheme can be constructed by combining an identity-based
signcryption key encapsulation mechanism with a data encapsulation
mechanism. We also give an example of identity-based signcryption
key encapsulation mechanism.
An Efficient Convertible Undeniable Signature Scheme with Delegatable Verification
Undeniable signatures, introduced by Chaum and van Antwerpen, require a verifier to interact with the signer to verify a signature, and hence allow the signer to control the verifiability of his signatures. Convertible undeniable signatures, introduced by Boyar, Chaum, Damgård, and Pedersen, furthermore allow the signer to convert signatures to publicly verifiable ones by publicizing a verification token, either for individual signatures or for all signatures universally. In addition, the signer is able to delegate the ability to prove validity and convert signatures to a semi-trusted third party by providing a verification key. While the latter functionality is implemented by the early convertible undeniable signature schemes, most recent schemes do not consider this despite its practical appeal.
In this paper we present an updated definition and security model for schemes allowing delegation, and highlight a new essential security property, token soundness, which is not formally treated in the previous security models for convertible undeniable signatures. We then propose a new convertible undeniable signature scheme. The scheme allows delegation of verification and is provably secure in the standard model assuming the computational co-Diffie-Hellman problem, a closely related problem, and the decisional linear problem are hard. Our scheme is, to the best of our knowledge, the currently most efficient convertible undeniable signature scheme which provably fulfills all security requirements in the standard model.
A Note on Linear Approximations of BLUE MIDNIGHT WISH Cryptographic Hash Function
BLUE MIDNIGHT WISH is the fastest hash function among the 14 algorithms in the second round of the SHA-3 competition [1]. At the beginning of this round, authors were invited to add tweaks before September 15th, 2009. In this paper we discuss the tweaked version (BMW). The BMW algorithm [3] is of AXR type, since it uses only the operations ADD (SUB), XOR and ROT (SHIFT). If we substitute the operation ADD with XOR, we get BMWlin, which is an affine transformation. In this paper we consider only the BMWlin function and its building blocks. These affine transformations can be represented as a linear matrix and a constant vector. We found that all matrices of the main blocks of BMWlin have full rank, or a rank very close to full. The structure of the matrices was examined. Matrices of elementary blocks have an expected non-random structure, while main blocks have a random structure. We also show matrices for different values of the security parameter ExpandRounds1 (values between 0 and 16). We observed that increasing the number of rounds ExpandRounds1 tends to increase randomness, as intended by the designers. These observations hold for both BMW256lin and BMW512lin. In this analysis we did not find any useful property which would help in cryptanalysis, nor did we find any weaknesses of BMW. A study of all building blocks will follow.
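The matrix/rank methodology is easy to reproduce at toy scale: linearize an AXR step by keeping only XOR/ROT/SHIFT, extract its GF(2) matrix column by column, and compute the rank; the 16-bit step below is an invented example, not an actual BMW block.

```python
W = 16
MASK = (1 << W) - 1
rotl = lambda x, r: ((x << r) | (x >> (W - r))) & MASK

def linstep(x):
    """A toy 'linearized' AXR step: ADD already replaced by XOR, so the map
    consists only of XOR, rotations and a shift, i.e. it is GF(2)-linear."""
    return x ^ rotl(x, 3) ^ (x >> 5) ^ rotl(x, 11)

def matrix_columns(f, w):
    """Extract the GF(2) matrix of a linear map: column i is f(e_i)."""
    return [f(1 << i) for i in range(w)]

def rank_gf2(cols):
    """Gaussian elimination over GF(2), columns encoded as integers."""
    pivots, rank = {}, 0
    for v in cols:
        while v:
            hb = v.bit_length() - 1
            if hb not in pivots:
                pivots[hb] = v
                rank += 1
                break
            v ^= pivots[hb]
    return rank

print(rank_gf2(matrix_columns(linstep, W)), "of", W)   # rank vs. full rank 16
```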
Cryptanalysis of the Niederreiter Public Key Scheme Based on GRS Subcodes
A new structural attack on the McEliece/Niederreiter public key cryptosystem based on subcodes of generalized Reed-Solomon codes proposed by Berger and Loidreau is described. It allows the reconstruction of the private key for almost all practical parameter choices in polynomial time with high probability.
Efficient Certificateless KEM in the Standard Model
We give a direct construction of a certificateless key encapsulation
mechanism (KEM) in the standard model that is more efficient than the generic constructions proposed before by Huang and Wong \cite{DBLP:conf/acisp/HuangW07}. We use a direct construction from Kiltz and Galindo's KEM scheme \cite{DBLP:conf/acisp/KiltzG06} to obtain a certificateless KEM in the standard model; our construction is roughly twice as efficient as the generic construction. We also address the security flaw discovered by Selvi et al. \cite{cryptoeprint:2009:462}.
On Hierarchical Threshold Secret Sharing
Recently, two novel schemes have been proposed for hierarchical threshold secret sharing, one based on Birkhoff interpolation and another based on bivariate Lagrange interpolation. In this short paper, we propose a much simpler solution for this problem.
One for All - All for One: Unifying Standard DPA Attacks
In this paper, we examine the relationship between and the efficiency of different approaches to standard (univariate) DPA attacks. We first show that, when fed with the same assumptions about the target device (i.e. with the same leakage model), the most popular approaches such as using a distance-of-means test, correlation analysis, and Bayes attacks are essentially equivalent in this setting. Differences observed in practice are not due to differences in the statistical tests but due to statistical artifacts. Then, we establish a link between the correlation coefficient and the conditional entropy in side-channel attacks. In a first-order attack scenario, this relationship allows linking currently used metrics to evaluate standard DPA attacks (such as the number of power traces needed to perform a key recovery) with an information theoretic metric (the mutual information). Our results show that in the practical scenario defined formally in this paper, both measures are equally suitable to compare devices with respect to their susceptibility to DPA attacks. Together with observations regarding key and algorithm independence, we consequently extend theoretical strategies for the sound evaluation of leaking devices towards the practice of side-channel attacks.
Precise Bounded-Concurrent Zero-Knowledge in Almost Constant Rounds
Precise concurrent zero-knowledge is a new notion introduced by Pandey et al. \cite{P:P:M:T:V} in Eurocrypt'08 (which generalizes the work on precise zero-knowledge by Micali and Pass \cite{M:P} in STOC'06). This notion captures the idea that the view of any verifier in concurrent interaction can be reconstructed in almost the same time. \cite{P:P:M:T:V} constructed some (private-coin) concurrent zero-knowledge argument systems which achieve precision at different levels, and all these protocols use at least rounds. In this paper we investigate the feasibility of reducing the round complexity while still keeping precision. Our result is that we construct a public-coin precise bounded-concurrent zero-knowledge argument system using almost constant rounds, i.e., rounds. Bounded-concurrency means that an a-priori bound on the (polynomial) number of concurrent sessions is specified before the protocol is constructed. Our result does not need any setup assumption. We stress that this result cannot be obtained by \cite{P:P:M:T:V} even in the bounded-concurrent setting.
ROSSLER NONLINEAR DYNAMICAL MACHINE FOR CRYPTOGRAPHY APPLICATIONS
In many of the cryptography applications like password or IP address encryption schemes, symmetric cryptography is useful. In these relatively simpler applications of cryptography, asymmetric cryptography is difficult to justify on account of the computational and implementation complexities associated with asymmetric cryptography. Symmetric schemes make use of a single shared key known only between the two communicating hosts. This shared key is used both for the encryption as well as the decryption of data. This key has to be small in size besides being a subset of a potentially large keyspace making it convenient for the communicating hosts while at the same time making cryptanalysis difficult for the potential attackers. In the present work, an abstract Rossler nonlinear dynamical machine has been described first. The Rossler system exhibits chaotic dynamics for certain values of system parameters and initial conditions. The chaotic dynamics of the Rossler system with its apparently erratic and irregular characteristics and extreme sensitivity to the initial conditions has been used for the design of the cryptographic key in an attempt to increase the confusion and the challenge for the potential attackers.
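A minimal sketch of the dynamics in question, assuming plain Euler integration and a naive byte extraction (a real design would require cryptographic post-processing); the constants a=0.2, b=0.2, c=5.7 are the classical chaotic parameter choice for the Rossler system.

```python
def rossler_bytes(x, y, z, a=0.2, b=0.2, c=5.7, dt=0.001,
                  skip=150_000, stride=500, nbytes=16):
    """Euler-integrate the Rossler system
         x' = -y - z,  y' = x + a*y,  z' = b + z*(x - c)
    past its transient, then crudely quantize the x coordinate into bytes."""
    out, step = [], 0
    while len(out) < nbytes:
        dx, dy, dz = -y - z, x + a * y, b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        step += 1
        if step > skip and step % stride == 0:
            out.append(int(abs(x) * 1e6) & 0xFF)   # naive byte extraction
    return bytes(out)

# Sensitivity to initial conditions: two barely different seeds end up
# producing unrelated byte strings once the perturbation has been amplified.
print(rossler_bytes(0.1, 0.0, 0.0).hex())
print(rossler_bytes(0.1 + 1e-9, 0.0, 0.0).hex())
```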
NTRU-like Public Key Cryptosystems beyond Dedekind Domain up to Alternative Algebra
In this paper, we show that the fundamental concepts behind the NTRU cryptosystem can be extended to a broader algebra than Dedekind domains. Also, we present an abstract and generalized algorithm for constructing an NTRU-like cryptosystem such that the underlying algebra can be non-commutative or even non-associative. To prove the main claim, we show that it is possible to generalize NTRU over the non-commutative Quaternions (an algebra in the sense of Cayley-Dickson, of dimension four over an arbitrary principal ideal domain) as well as the non-associative Octonions (a power-associative and alternative algebra of dimension eight over a principal ideal domain).
Given the serious challenges ahead of non-commutative/non-associative algebra in quaternionic or octonionic lattices, the proposed cryptosystems are more resistant to lattice-based attacks when compared to NTRU. Concisely, this paper draws an abstract image of the mathematical base of NTRU in such a way that one can build a similar cryptosystem based on various algebraic structures, with the goal of better security against lattice attacks and/or more capability for protocol design.
Computing Hilbert class polynomials with the Chinese Remainder Theorem
We present a space-efficient algorithm to compute the Hilbert class polynomial H_D(X) modulo a positive integer P, based on an explicit form of the Chinese Remainder Theorem. Under the Generalized Riemann Hypothesis, the algorithm uses O(|D|^(1/2+o(1)) log P) space and has an expected running time of O(|D|^(1+o(1))). We describe practical optimizations that allow us to handle larger discriminants than other methods, with |D| as large as 10^13 and h(D) up to 10^6. We apply these results to construct pairing-friendly elliptic curves of prime order, using the CM method.
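The recombination step can be illustrated in a few lines: the "explicit" form of the CRT reconstructs a coefficient modulo P directly from its residues, without reducing the full integer modulo the product of the primes; the sizes here are toy values, and the class-polynomial computation itself is of course not shown.

```python
from math import floor, prod

def explicit_crt_mod(residues, primes, P):
    """Recover x mod P from the residues x mod p_i via
       x = sum(t_i * M_i) - k * M,   t_i = x * (M/p_i)^{-1} mod p_i,
       k = floor(sum(t_i / p_i)),
    working modulo P throughout instead of forming x mod M."""
    M = prod(primes)                     # fine at this toy scale
    acc, frac = 0, 0.0
    for r, p in zip(residues, primes):
        Mi = M // p
        t = r * pow(Mi, -1, p) % p
        acc = (acc + t * (Mi % P)) % P   # contribution of this prime, mod P
        frac += t / p                    # tracks the multiple k of M to subtract
    return (acc - floor(frac) * (M % P)) % P

primes = [1000003, 1000033, 1000037]
x = 123456789012345                      # a "coefficient", below prod(primes)
P = 2**61 - 1
assert explicit_crt_mod([x % p for p in primes], primes, P) == x % P
print("explicit CRT ok")
```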
Secure and Efficient HB-CM Entity Authentication Protocol
The simple, computationally efficient LPN-based HB-like entity authentication protocols have attracted a great deal of attention in the past few years due to their broad application prospects in low-cost pervasive devices. At present, the most efficient such protocol is HB#, which is proven to resist the GRS attack under the conjecture that it is secure in the DET-model. In this paper, we introduce an innovative HB-CM protocol, which significantly reduces the storage requirement while maintaining the same level of communication cost. We develop the concept of equivalence classes, and present a reductionist proof for HB-CM that overcomes an inherent limitation in the HB# security proof. In fact, HB# is only provably resistant to partial instances of the GRS attack, while we prove that HB-CM can prevent the full GRS attack except in one trivial case. In addition, we propose a new noise mode for all HB-like protocols in order to thwart the latest OOV man-in-the-middle attack, which can effectively compromise all current HB-like protocols using the basic Bernoulli noise mode. The HB-CM protocol along with the proposed noise mode constitutes our final protocol: HB-CM.
Rebound Attack on the Full LANE Compression Function
In this work, we apply the rebound attack to the AES-based SHA-3 candidate LANE. The hash function LANE uses a permutation-based compression function, consisting of a linear message expansion and 6 parallel lanes. In the rebound attack on LANE, we apply several new techniques to construct a collision for the full compression function of LANE-256 and LANE-512. Using a relatively sparse truncated differential path, we are able to solve for a valid message expansion and colliding lanes independently. Additionally, we are able to apply the inbound phase more than once by exploiting the degrees of freedom in the parallel AES states. This allows us to construct semi-free-start collisions for full LANE-256 with compression function evaluations and memory, and for full LANE-512 with compression function evaluations and memory.
Fuzzy Privacy Preserving Peer-to-Peer Reputation Management
The P2PRep algorithm is a reputation-management mechanism in which a peer uses fuzzy techniques to compute local reputations and aggregates these results to compute a global reputation for another peer which has made an offer of service. While this mechanism is known to be extremely effective in the presence of malicious peers, it has one drawback: it does not preserve the anonymity of peers in the network during the voting phase of the protocol. This makes it unsuitable for use in networks which associate peers with a routing identifier such as an IP address. We propose in this paper a solution to this problem: the 3PRep (Privacy Preserving P2PRep) algorithm, which implements two protocols to maintain vote privacy in P2PRep without significant additional computation and communications overhead. In doing so, we also provide a method to compute the Ordered Weighted Average (OWA) over distributed datasets while maintaining the privacy of these data.
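For reference, the plain (non-private) OWA operator that 3PRep computes in distributed form: sort the inputs in descending order and take the weighted sum; the numbers below are illustrative.

```python
def owa(values, weights):
    """Ordered Weighted Average: weights are applied by rank, not by source."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

votes = [0.9, 0.4, 0.7, 0.1]
weights = [0.4, 0.3, 0.2, 0.1]       # emphasizes the highest-ranked votes
print(owa(votes, weights))            # 0.4*0.9 + 0.3*0.7 + 0.2*0.4 + 0.1*0.1
```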
An Efficient Two-Party Identity-Based Key Exchange Protocol based on ECDLP
This paper presents an efficient identity-based key exchange protocol based on the difficulty of computing the Elliptic Curve Discrete Logarithm Problem. As compared with previously proposed protocols, it has better performance in terms of computational cost and communication steps. Key exchange protocols allow two parties communicating over a public network to establish a common secret key, called a session key, to encrypt the communication data. Due to their significance in building a secure communication channel, a number of key exchange protocols have been suggested over the years for a variety of settings. The proposed key exchange protocol provides implicit key authentication as well as the desired security attributes of an authenticated key exchange protocol.
A Multivariate Signature Scheme with an almost cyclic public key
Multivariate public key cryptography is one of the main approaches to guarantee the security of communication in a post quantum world.
One of the major drawbacks in this area is the huge size of the public key. In this paper we present a new idea to create a multivariate signature scheme with an almost cyclic public key. The scheme is very similar to the UOV scheme of Kipnis and Patarin, but reduces the size of the public key by about 83%.
A Fast Mental Poker Protocol
We present a fast and secure mental poker protocol. It is twice as fast as the Barnett-Smart and Castellà-Roca protocols. The protocol is provably secure under the DDH assumption.
Improved Cryptanalysis of Skein
The hash function Skein is the submission of Ferguson et al. to the NIST Hash Competition, and is arguably a serious candidate for selection as SHA-3. This paper presents the first third-party analysis of Skein, with an extensive study of its main component: the block cipher Threefish. We notably investigate near collisions, distinguishers, impossible differentials, key recovery using related-key differential and boomerang attacks. In particular, we present near collisions on up to 17 rounds, an impossible differential on 21 rounds, a related-key boomerang distinguisher on 34 rounds, a known-related-key boomerang distinguisher on 35 rounds, and key recovery attacks on up to 32 rounds, out of 72 in total for Threefish-512. None of our attacks directly extends to the full Skein hash. However, the pseudorandomness of Threefish is required to validate the security proofs on Skein, and our results conclude that at least 36 rounds of Threefish seem required for optimal security guarantees.
On the Relations Between Diffie-Hellman and ID-Based Key Agreement from Pairings
This paper studies the relationships between the traditional
Diffie-Hellman key agreement protocol and the identity-based
(ID-based) key agreement protocol from pairings.
For the Sakai-Ohgishi-Kasahara (SOK) ID-based key construction, we
show that identical to the Diffie-Hellman protocol, the SOK key
agreement protocol also has three variants, namely \emph{ephemeral},
\emph{semi-static} and \emph{static} versions. Upon this, we build
solid relations between authenticated Diffie-Hellman (Auth-DH)
protocols and ID-based authenticated key agreement (IB-AK)
protocols, whereby we present two \emph{substitution rules} for these
two types of protocols. The rules enable a conversion between the
two types of protocols. In particular, we obtain the \emph{real}
ID-based version of the well-known MQV (and HMQV) protocol.
Similarly, for the Sakai-Kasahara (SK) key construction, we show
that the key transport protocol underlying the SK ID-based
encryption scheme (which we call the ``SK protocol") has its non-ID
counterpart, namely the Hughes protocol. Based on this observation,
we establish relations between corresponding ID-based and non-ID-based protocols. In particular, we propose a highly enhanced version of the McCullagh-Barreto protocol.
On the Connection between Signcryption and One-pass Key Establishment
Key establishment between two parties that uses only one message transmission is referred to as one-pass key establishment (OPKE). OPKE provides the opportunity for very efficient constructions, even though they will typically provide a lower level of security than the corresponding multi-pass variants. In this paper, we explore the intuitive connection between signcryption and OPKE. By establishing a formal relationship between these two primitives, we show that with appropriate security notions, OPKE can be used as a signcryption KEM and vice versa. In order to establish the connection we explore the definitions of security for signcryption (KEM) and give new and generalised definitions. By making our generic constructions concrete we are able to provide new examples of signcryption KEMs and an OPKE protocol.
Efficient Confirmer Signatures from the ``Signature of a Commitment'' Paradigm
Generic constructions of designated confirmer signatures follow one of the following two strategies: either produce a digital signature on the message to be signed, then encrypt the resulting signature, or produce a commitment on the message, encrypt the string used to generate the commitment and finally sign the latter.
We study the second strategy by determining the exact security property needed in the encryption to achieve secure constructions. This study infers the exclusion of a useful type of encryption from the design, due to an intrinsic weakness in the paradigm. Next, we propose a simple method to remedy this weakness, and we get efficient constructions which can be used with \emph{any} digital signature.
Tight Bounds for Protocols with Hybrid Security
We consider broadcast and multi-party computation (MPC) in the setting where a digital signature scheme and a respective public-key infrastructure (PKI) are given among the players. However, neither the signature scheme nor the PKI are fully trusted. The goal is to achieve unconditional (PKI- and signature-independent) security up to a certain threshold, and security beyond this threshold under stronger assumptions, namely, that the forgery of signatures is impossible and/or that the given PKI is not under adversarial control. We give protocols for broadcast and MPC that achieve an optimal trade-off between these different levels of security.
Communication Optimal Multi-Valued Asynchronous Byzantine Agreement with Optimal Resilience
Byzantine Agreement (BA) and Broadcast (BC) are considered to be the most fundamental primitives for fault-tolerant distributed computing and cryptographic protocols. Important variants of BA and BC are Asynchronous Byzantine Agreement (ABA) and Asynchronous Broadcast (called A-cast), respectively. Most often in the literature, protocols for ABA and A-cast were designed for a single-bit message. But in many applications, these protocols may be invoked on long messages rather than on single bits. Therefore, it is important to design efficient multi-valued protocols (i.e. protocols for long messages) which extract the advantage of directly dealing with long messages and are far better than multiple invocations of existing protocols for a single bit. In synchronous network settings, this line of research was initiated by Turpin and Coan, and later culminated in the result of Fitzi et al., who presented the first ever communication optimal multi-valued BA and BC protocols with the help of BA and BC protocols for short messages. It was left open by Fitzi et al. to achieve the same in asynchronous settings. Recently, Patra et al. presented a communication optimal multi-valued A-cast using the existing A-cast of Bracha for small messages. Here we achieve the same for ABA, which is known to be a harder problem than A-cast. Specifically, we design a communication optimal, optimally resilient multi-valued ABA protocol, based on the existing ABA protocol for short messages.
Last updated: 2011-04-25
Practical Distributed Key Generation Scheme
Generating a distributed key, where a constant fraction of the players can reconstruct the key, is an essential component of threshold cryptography. Previous solutions are based on univariate polynomials. We present a new distributed key generation (DKG) protocol. The key idea of our scheme is to use a bivariate symmetric polynomial. Compared with other solutions, our construction is efficient in computation, and simple and flexible in applications. In addition, our construction admits a rigorous proof of security. We also study the new-member-joining problem which often occurs in some applications: players P1, ..., Pn, who make up a group G, have jointly generated a pair of private and public keys based on some DKG; when a new player hopes to join G, how does he get a share of the private key? We present a new-member-joining protocol which is based on our DKG scheme and which completely solves this problem. We also give a proof of the security of this new-member-joining protocol in terms of correctness and secrecy.
On the Design of Trivium
eSTREAM called for new stream ciphers designed for niche areas such as exceptional performance in software and hardware where resources are restricted. This project provides an open platform to discuss these ciphers. Trivium is one of the promising new ciphers submitted to it. Until now, no attack has been successfully applied to it. This paper illustrates new design principles of stream ciphers based on the structure of Trivium and introduces the definition of k-order primitive polynomials. New designs of Trivium are also given according to the principles in this paper.
One-time-password-authenticated key exchange
To reduce the damage of phishing and spyware attacks, banks, governments, and other security-sensitive industries are deploying one-time password systems, where users have many passwords and use each password only once. If a single password is compromised, it can only be used to impersonate the user once, limiting the damage caused. However, existing practical approaches to one-time passwords have been susceptible to sophisticated phishing attacks.
We give a formal security treatment of this important practical problem. We consider the use of one-time passwords in the context of password-authenticated key exchange (PAKE), which allows for mutual authentication, session key agreement, and resistance to phishing attacks. We describe a security model for the use of one-time passwords, explicitly considering the compromise of past (and future) one-time passwords, and show a general technique for building a secure one-time-PAKE protocol from any secure PAKE protocol. Our techniques also allow for the secure use of pseudorandomly generated and time-dependent passwords.
Precise Time and Space Simulatable Zero-Knowledge
Traditionally, the definition of zero-knowledge states that an interactive proof provides zero (additional) knowledge if the view of any \emph{polynomial-time} verifier can be reconstructed by a \emph{polynomial-time} simulator. Since this definition only requires that the worst-case running times of the verifier and simulator are polynomial, zero-knowledge becomes a worst-case notion.

In STOC'06, Micali and Pass proposed a new notion of precise zero-knowledge, which captures the idea that the view of any verifier in every interaction can be reconstructed in (almost) the same time (i.e., the view can be ``indistinguishably reconstructed''). This is the strongest notion among the known works towards making the definition of zero-knowledge precise.

However, there are two kinds of computational resources (time and space) that every algorithm consumes in computation. Although the view of a verifier in the interaction of a precise zero-knowledge protocol can be reconstructed in almost the same time, the simulator may run in very large space while at the same time the verifier only runs in very small space. In this case it is still doubtful whether the verifier is indifferent between taking part in the interaction and running the simulator. Thus the notion of precise zero-knowledge may still be insufficient. This shows that making the definition of zero-knowledge precise needs further investigation.

In this paper, we propose a new notion of precise time and space simulatable zero-knowledge (PTSSZK), which captures the idea that the view of any verifier in each interaction can be reconstructed \emph{not only} in the same time, \emph{but also} in the same space. We construct the first PTSSZK proofs and arguments with simultaneous linear time and linear space precision for all languages in NP. Our protocols do not use noticeably more rounds than the known precise zero-knowledge protocols, and the probability analysis of the successful extraction of the new simulation strategy may be of independent interest.
Efficiently from Semi-honest to Malicious OT via OLFE
A combiner securely implements a functionality out of a set of implementations of another functionality, some of which may be insecure. We present two efficient combiners for oblivious linear function evaluation (OLFE). The first is a constant-rate OLFE combiner in the semi-honest model; the second implements Rabin string oblivious transfer (RabinOT) from OLFE in the malicious model.

As an application, we show a very efficient reduction in the malicious model of RabinOT over strings to one-out-of-two oblivious transfer over bits (OT) that is only secure in the semi-honest model. For strings of size , our reduction uses only instances of OT, while previous results required . Our new reduction leads to an efficiency improvement for general multi-party computation (MPC) based on semi-honest OT, and makes it almost as efficient as MPC based on malicious OT.

All reductions are unconditionally secure, black-box, universally composable and secure against adaptive adversaries.
Efficient Verifiable Escrow and Fair Exchange with Trusted Hardware
At the heart of many fair exchange problems is verifiable escrow: a sender encrypts some value using the public key of a trusted party (called the recovery agent), and then must convince the receiver of the ciphertext that the corresponding plaintext satisfies some property (e.g., it contains the sender's signature on a contract). Previous solutions to this problem are interactive, and often rely on communication-intensive cut-and-choose zero-knowledge proofs. In this paper, we provide a solution that uses generic trusted hardware to create an efficient, non-interactive verifiable escrow scheme. Our solution allows the protocol to use a set of recovery agents with a threshold access structure, realizing the \emph{verifiable group escrow} notion which was informally introduced by Camenisch and Damgård and which is formalized here. Finally, this paper shows how this new non-interactive verifiable escrow scheme can be used to create an efficient optimistic protocol for fair exchange of signatures.
Cheating Detection and Cheater Identification in CRT-based Secret Sharing Schemes
In this paper we analyze the cheating detection and cheater identification problems for secret sharing schemes based on the Chinese remainder theorem (CRT), more exactly for the Mignotte [1] and Asmuth-Bloom [2] schemes. We prove that the majority of the solutions for Shamir's scheme [3] can be translated to these schemes and, moreover, that there are some interesting specific solutions.
Cryptanalysis and Security Enhancement on the Generation of Mu-Varadharajan Electronic Voting Protocol
Mu and Varadharajan proposed an electronic voting scheme and claimed that their scheme authenticates the voters, protects their anonymity, and detects the identity of double voters. Due to some weaknesses in the Mu-Varadharajan scheme, several modified schemes have been proposed by Lin et al., Hwang et al., Rodriguez-Henriquez et al. and Asaar et al.; however, this paper shows that these schemes suffer from some weaknesses in fulfilling the claimed properties. For this purpose, we take the Hwang et al. scheme as a case study and apply our new attacks to it. We also consider the applicability of the attacks to the other schemes. In addition, we present a new scheme and show that it resists the proposed attacks without losing efficiency.
Double Voter Perceptible Blind Signature Based Electronic Voting Protocol
Mu et al. have proposed an electronic voting protocol and
claimed that it protects anonymity of voters, detects double
voting and authenticates eligible voters. It has been shown that
it neither protects voters' privacy nor prevents double voting.
After that, several schemes have been presented to fulfill these
properties. However, many of them suffer from the same
weaknesses. In this paper, getting Asadpour et al. scheme
as one of the latest one and showing its weaknesses, we propose
a new voting scheme which is immune to the weaknesses
of previous schemes without loosing efficiency. The scheme,
is based on a special structure, which directly
use the identity of voter, hides it in that structure and reveals
it after double voting. We also, show that the security of this
scheme depends on hardness of RSA cryptosystem, Discrete Logarithm
problem and Representation problem.
Utilizing postponed ephemeral and pseudo-static keys in tripartite and identity-based key agreement protocols
We propose a new one-round implicitly authenticated three-party protocol that extends Joux's protocol as well as a two-party identity-based protocol. Our protocols have a single communication round that consists of ephemeral (one-time) public keys along with certificates in the tripartite protocol, and identities in the identity-based setting. As such, our protocols are communication-efficient and furthermore do not require an enhanced message format.
Attacks on {RFID}-Based Electronic Voting Systems
Many secure systems, such as contactless credit cards and secure entrance systems, are built with contactless smart-card RFID technologies. In many cases these systems are claimed to be secure based on the assumption that readers and tags need to be in close proximity (about 5cm) in order to communicate. However, it is known that this proximity assumption is false: Relay attacks are a class of hardware-based attacks which compromise the safety of such systems by dramatically extending the interrogation range of the contactless system. Interestingly, the proposed Israeli e-voting scheme is based on contactless smartcards.
In this work we show how the proposed system can be completely compromised using low-cost relay attacks. Our attacks allow an adversary to read out all votes already cast into the ballot box, suppress the votes of one or several voters, rewrite votes at will, and even completely disqualify all votes in a single voting station. Our attacks are easy to mount, very difficult to detect, and compromise both the confidentiality and the integrity of the election system.
How to Construct Identity-Based Signatures without the Key Escrow Problem
The inherent key escrow problem is one of the main reasons for the
slow adoption of identity-based cryptography. The existing solution for mitigating the key escrow problem is to adopt multiple Private Key Generators (PKGs). Recently, there was a proposal that attempted to reduce the trust in the PKG by allowing a malicious PKG to be caught if it reveals the user's identity-based secret key illegally. Nonetheless, the proposal does not consider that the PKG can simply decrypt the ciphertext instead of revealing the secret key itself (in the case of identity-based encryption schemes).
The aim of this paper is to present an escrow-free identity-based signature (IBS) scheme, in which the malicious PKG will be caught if it releases a signature on behalf of the user but signed by itself. We present a formal model to capture such a scheme and provide a concrete construction.
Higher-order Masking and Shuffling for Software Implementations of Block Ciphers
Differential Power Analysis (DPA) is a powerful side-channel key recovery attack that efficiently breaks block cipher implementations. In software, two main techniques are usually applied to thwart it: masking and operation shuffling. To benefit from the advantages of both techniques, recent works have proposed to combine them. However, the schemes which have been
designed until now only provide limited resistance levels and some
advanced DPA attacks have turned out to break them. In this paper,
we investigate the combination of masking and shuffling. We moreover
extend the approach with the use of higher-order masking and we
show that it enables a significant improvement of the security level of
such a scheme. We first conduct a theoretical analysis in which the
efficiency of advanced DPA attacks targeting masking and shuffling
is quantified. Based on this analysis, we design a generic scheme
combining higher-order masking and shuffling. This scheme is
scalable and its security parameters can be chosen according to any
desired resistance level. As an illustration, we apply it to protect
a software implementation of AES for which we give several
security/efficiency trade-offs.
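For concreteness, a minimal sketch of the (d+1)-share Boolean masking that higher-order masking builds on (ours; the paper's combined masking-and-shuffling scheme and its AES instantiation are not reproduced):

    import secrets

    def mask(x, d):
        # Split a sensitive byte x into d+1 shares whose XOR equals x
        # (order-d Boolean masking).
        shares = [secrets.randbelow(256) for _ in range(d)]
        last = x
        for s in shares:
            last ^= s
        return shares + [last]

    def unmask(shares):
        out = 0
        for s in shares:
            out ^= s
        return out

    x = 0xA7
    sh = mask(x, 2)          # second-order masking: 3 shares
    assert unmask(sh) == x   # any d of the d+1 shares alone are uniformly random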
An Efficient Method for Random Delay Generation in Embedded Software
Random delays are a countermeasure against a range of side channel and fault attacks that is often implemented in embedded software. We propose a new method for generation of random delays and a criterion for measuring the efficiency of a random delay countermeasure. We implement this new method along with the existing ones on an 8-bit platform and mount practical side-channel attacks against the implementations. We show that the new method is significantly more secure in practice than the previously published solutions and also more lightweight.
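For orientation, a hedged sketch of the generic countermeasure being improved here: a random number of dummy iterations inserted around sensitive operations (the paper's specific generation method and efficiency criterion are not reproduced):

    import secrets

    def random_delay(max_iters=255):
        # Dummy work standing in for NOP loops on an 8-bit microcontroller;
        # the random iteration count desynchronizes power/fault traces.
        n = secrets.randbelow(max_iters + 1)
        acc = 0
        for _ in range(n):
            acc = (acc + 0x5A) & 0xFF
        return acc

    def protected_operation(process_block, block):
        random_delay()
        out = process_block(block)   # the sensitive computation being protected
        random_delay()
        return out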
Subtleties in the Definition of IND-CCA: When and How Should Challenge-Decryption be Disallowed?
The definition of IND-CCA disallows an adversary from querying the
challenge ciphertext to its decryption oracle. We point out that there are several ways to formalize this. We show that, surprisingly, for
public-key encryption the resulting notions are not all equivalent.
We then consider the same question for key-encapsulation mechanisms
(KEMs) and show that in this case the four notions ARE all
equivalent. Our discoveries are another manifestation of the
subtleties that make the study of cryptography so attractive and are
important towards achieving the definitional clarity and unity
required for firm foundations.
More Differential Paths of TIB3
The TIB3-256 hashing algorithm [3] is a first-round candidate in the SHA-3 competition [2]. Properties of the message expansion and the PHTX function are observed, and then exploited to create new high-probability differential paths through the compression function. Examples conforming to the differential paths are presented. Only one of these differential paths can be applied to the tweaked version, TIB3v2 [4]. Due to the dual-block input mode used in TIB3 and TIB3v2, these differential paths do not seem extensible to the full hash functions.
Note: In the time between when this paper was written and when the paper was made public, the SHA-3 Round 2 Candidates were announced, and TIB3 had been eliminated from the competition.
KronCrypt - A New Symmetric Cryptosystem Based on Kronecker's Approximation Theorem
In this paper we show how to use an old mathematical concept of diophantine analysis, the approximation theorem of Kronecker, in symmetric cryptography. As a first practical application we propose and analyze the new symmetric 128-bit block cipher KronCrypt. The cipher is a 4-round Feistel network with a non-bijective round function f made up of a variable number of large key-dependent S-boxes, XORs and modular additions. Its key length is variable but not less than 128 bits. The main innovation of KronCrypt in the area of symmetric cryptography is the fact that the key-dependent S-boxes are based upon a constructive proof of the approximation theorem of Kronecker used as a boolean function. We prove the correctness of our concept in general and show how we designed the new cipher KronCrypt. Furthermore, results concerning statistical behaviour, i.e. confusion, diffusion and completeness, and differential cryptanalysis are presented.
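For orientation, a minimal sketch of the 4-round Feistel skeleton described above, with a placeholder round function standing in for the Kronecker-based key-dependent S-boxes (whose construction is the paper's actual contribution and is not reproduced here):

    def feistel_encrypt(block128, round_sboxes):
        L = block128 >> 64
        R = block128 & ((1 << 64) - 1)
        for sbox in round_sboxes:            # 4 rounds
            L, R = R, L ^ f(R, sbox)
        return (L << 64) | R

    def f(half, sbox):
        # Placeholder non-bijective round function: byte-wise S-box lookups
        # mixed with modular additions and XORs, echoing the operations above.
        mask = (1 << 64) - 1
        out = 0
        for i in range(8):
            out = (out + sbox[(half >> (8 * i)) & 0xFF]) & mask
            out ^= (out << 8) & mask
        return out

    sboxes = [list(range(256))[::-1] for _ in range(4)]  # stand-in key-dependent S-boxes
    ct = feistel_encrypt(0x0123456789ABCDEF0123456789ABCDEF, sboxes)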
Attacks Against Permute-Transform-Xor Compression Functions and Spectral Hash
This paper presents an attack on the strong collision resistance of the Spectral Hash SHA-3 candidate. Spectral-Hash (shash) is a Merkle-Damgård based hash function, carefully designed to resist all known cryptographic attacks. To the best of our knowledge, our attack is the only known attack against the shash algorithm. We exploit the fundamental structure of the algorithm, completely bypassing the hash function's formidable cryptographic protections. Our attack is presented in three stages. First, we define the family of functions which have the structure we wish to exploit. We call members of this family PTX functions. Next, we show that all PTX functions, including functions which use random oracles, are vulnerable to our collision attack. Finally, we reformulate the shash compression function, showing that it is a PTX function and thus vulnerable. We present results on a practical implementation of our attack, generating collisions for shash in less than a second on a typical desktop computer.
Security Bounds for the Design of Code-based Cryptosystems
Code-based cryptography is often viewed as an interesting ``Post-Quantum'' alternative to classical number-theoretic cryptography. Unlike many other such alternatives, it has the convenient advantage of having only a few, well identified, attack algorithms. However, improvements to these algorithms have made their effective complexity quite difficult to compute. We give here some lower bounds on the work factor of idealized versions of these algorithms, taking into account all possible tweaks which could improve their practical complexity. The aim of this article is to help designers select durably secure parameters.
Three Improved Algorithms for Multi-path Key Establishment in Sensor Networks Using Protocols for Secure Message Transmission
In this paper, we propose a security model to capture active attacks against multi-path key establishment (MPKE) in sensor networks. Our model strengthens previous models to capture more attacks and achieve essential security goals for multi-path key establishment. In this model, we can apply protocols for perfectly secure message transmission to solve the multi-path key establishment problem. We propose a simple new protocol for optimal one-round perfectly secure message transmission based on Reed-Solomon codes. Then we use this protocol to obtain two new multi-path key establishment schemes that can be applied provided that fewer than one third of the paths are controlled by the adversary. Finally, we describe another MPKE scheme that tolerates a higher fraction (less than 1/2) of paths controlled by the adversary. This scheme is based on a new protocol for a weakened version of message transmission, which is very simple and efficient.
Our multi-path key establishment schemes achieve improved security and lower communication complexity, as compared to previous schemes.
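For concreteness, a minimal sketch of the Reed-Solomon idea behind one-round perfectly secure message transmission (ours, with a brute-force decoder for toy sizes; the paper's optimal protocol is more refined): the secret is the constant term of a random degree-t polynomial, one evaluation travels over each path, and with n >= 3t+1 paths the receiver can decode despite t corrupted shares.

    from itertools import combinations

    P = 257  # small prime field

    def send(secret, n, t, coeffs):
        poly = [secret] + list(coeffs)   # degree-t polynomial, random coefficients
        return [sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P
                for x in range(1, n + 1)]

    def lagrange_eval(points, x):
        # Evaluate the polynomial through the given points at x, over GF(P).
        s = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            s = (s + yi * num * pow(den, -1, P)) % P
        return s

    def receive(shares, t):
        # Brute-force unique decoding: with n >= 3t+1, only the true
        # polynomial can agree with at least n - t of the shares.
        pts = list(enumerate(shares, start=1))
        for sub in combinations(pts, t + 1):
            agree = sum(1 for (x, y) in pts if lagrange_eval(list(sub), x) == y)
            if agree >= len(shares) - t:
                return lagrange_eval(list(sub), 0)

    # Example: n = 4 paths tolerate t = 1 corrupted path.
    sh = send(42, 4, 1, coeffs=[17])
    sh[2] = (sh[2] + 99) % P           # adversary corrupts one path
    assert receive(sh, 1) == 42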
Distinguishing Attacks on Stream Ciphers Based on Arrays of Pseudo-random Words
In numerous modern stream ciphers, the internal state consists of
a large array of pseudo-random words, and the output key-stream is
a relatively simple function of the state. In [Paul-Preneel],
it was heuristically shown that in various cases this structure
may lead to distinguishing attacks on the cipher. In this paper
we further investigate this structural attack. We present a
rigorous proof of the main probabilistic claim used in the attack
in the basic cases, and demonstrate by examining a concrete
example (the cipher SN3) that the heuristic
assumptions of the attack are remarkably precise in more
complicated cases. Furthermore, we use the general technique to
devise a distinguishing attack on the stream cipher
MV3 requiring words of key-stream.
Unlike the attacks in [Paul-Preneel], our attack does not
concentrate on the least significant bits of the words, thus
allowing us to handle the combination of more operations
(XORs, modular additions and multiplications, and
rotations by a fixed number of bits) in the update and output
rules of the cipher.
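As a generic illustration (ours, greatly simplified) of a word-based statistical distinguisher of this general kind, one can compare the empirical distribution of keystream words against uniform with a chi-squared score; the actual attacks exploit finer correlations between the array-based state words.

    import random
    from collections import Counter

    def chi_squared_score(words, domain_size):
        counts = Counter(words)
        expected = len(words) / domain_size
        return sum((counts.get(v, 0) - expected) ** 2 / expected
                   for v in range(domain_size))

    # For truly uniform words the score concentrates around domain_size;
    # a score far above that flags a bias. Toy domain of one byte:
    uniform = [random.randrange(256) for _ in range(1 << 16)]
    print(chi_squared_score(uniform, 256))   # about 256 on average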
Improved Garbled Circuit Building Blocks and Applications to Auctions and Computing Minima
We consider generic Garbled Circuit (GC)-based techniques for Secure Function Evaluation (SFE) in the semi-honest model.
We describe efficient GC constructions for addition, subtraction, multiplication, and comparison functions. Our circuits for subtraction and comparison are approximately two times smaller (in terms of garbled tables) than previous constructions. This implies corresponding computation and communication improvements in SFE of functions using our efficient building blocks. The techniques rely on the recently proposed ``free XOR'' GC technique.
Further, we present concrete and detailed improved GC protocols
for the problem of secure integer comparison, and related
problems of auctions, minimum selection, and minimal distance.
Performance improvement comes both from building on
our efficient basic blocks and several problem-specific GC optimizations.
We provide precise cost evaluation of our constructions, which serves as a
baseline for future protocols.
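The ``free XOR'' technique these constructions rely on is easy to see in code; a minimal sketch (ours, semi-honest setting, toy 128-bit labels): every wire's two labels differ by a global offset R, so an XOR gate needs no garbled table at all.

    import secrets

    R = secrets.randbits(128)        # global offset, secret to the garbler

    def fresh_wire():
        w0 = secrets.randbits(128)
        return (w0, w0 ^ R)          # labels encoding 0 and 1

    def xor_gate(a, b):
        # Output-0 label is A0 XOR B0; the offset R carries through, so the
        # evaluator's XOR of input labels yields the correct output label.
        w0 = a[0] ^ b[0]
        return (w0, w0 ^ R)

    a, b = fresh_wire(), fresh_wire()
    out = xor_gate(a, b)
    assert (a[1] ^ b[0]) == out[1]   # evaluating 1 XOR 0 gives the label for 1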
Authenticated Broadcast with a Partially Compromised Public-Key Infrastructure
Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Existing protocols, however, do not distinguish between corrupted parties who do not follow the protocol, and honest parties whose secret (signing) keys have been compromised but who continue to behave honestly. We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter.
Consider a network of parties, where an adversary has compromised the secret keys of up to honest parties and, in addition, fully controls the behavior of up to other parties. We show that for any fixed and any fixed , there exists an efficient protocol for broadcast if and only if . (When , standard results imply feasibility for all .) We also show that if are not fixed, but are only guaranteed to satisfy the above bound, then broadcast is impossible to achieve except for a few specific values of ; for these ``exceptional'' values of , we demonstrate broadcast protocols. Taken together, our results give a complete characterization of this problem.
A Tree Based Recursive Scheme for Space Efficient Secret Sharing
This paper presents a k-out-of-n recursive secret sharing scheme based on an n-ary tree data structure. In recursive hiding of secrets, the user encodes additional secrets in the shares of the original secret being shared, without an expansion in the size of the latter, thereby decreasing the effective share size per secret and increasing the overall space efficiency of secret sharing, with a trade-off in security. The proposed scheme has applications in secure distributed storage and information dispersal protocols. It may be used as a steganographic channel to transmit hidden information in secret sharing, which may be used for authentication and verification of shares and the reconstructed secret itself.
A Secure and Efficient Authenticated Diffie–Hellman Protocol
The Exponential Challenge Response (XCR) and Dual Exponential Challenge Response (DCR) signature schemes are the building blocks of the HMQV protocol. We propose a complementary analysis of these schemes; on the basis of this analysis we show how impersonation and man-in-the-middle attacks can be mounted against the HMQV protocol when some session-specific information leakages happen.
We define the Full Exponential Challenge Response (FXCR) and Full Dual Exponential Challenge Response (FDCR) signature schemes; using these schemes we propose the Fully Hashed MQV protocol (with security arguments), which preserves the remarkable performance of the (H)MQV protocols and resists the attacks we present.
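For orientation, a sketch of the session-signature computation in this protocol family, in our notation (not the paper's full specification): parties $\hat{A}, \hat{B}$ hold static keys $(a, A = g^{a})$, $(b, B = g^{b})$ and exchange ephemeral values $X = g^{x}$, $Y = g^{y}$. HMQV derives

\[
d = \bar{H}(X, \hat{B}), \quad e = \bar{H}(Y, \hat{A}), \quad
\sigma = (Y B^{e})^{\,x + da} = (X A^{d})^{\,y + eb}, \quad K = H(\sigma),
\]

and the FXCR/FDCR-based fix, as we read it, hashes both ephemeral values and both identities into each digest, e.g. $d = \bar{H}(X, Y, \hat{A}, \hat{B})$ and $e = \bar{H}(Y, X, \hat{A}, \hat{B})$, which is what blocks the leakage-based attacks above.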
Single Block Attacks and Statistical Tests on CubeHash
This paper describes a second preimage attack
on the CubeHash cryptographic one-way hash function.
The attack finds a second preimage
in less time than brute force search
for these CubeHash variants:
CubeHash / -224 for ;
CubeHash / -256 for ;
CubeHash / -384 for ; and
CubeHash / -512 for .
However, the attack does not break
the CubeHash variants recommended for SHA-3.
The attack requires minimal memory
and can be performed in a massively parallel fashion.
This paper also describes several statistical randomness tests on CubeHash.
The tests were unable to disprove the hypothesis
that CubeHash behaves as a random mapping.
These results support CubeHash's viability
as a secure cryptographic hash function.
On-line Non-transferable Signatures Revisited
Undeniable signatures, introduced by Chaum and van Antwerpen, and designated confirmer signatures, introduced by Chaum, allow a signer to control the verifiability of his signatures by requiring a verifier to interact with the signer to verify a signature. An important security requirement for these types of signature schemes is \emph{non-transferability} which informally guarantees that even though a verifier has confirmed the validity of a signature by interacting with the signer, he cannot prove this knowledge to a third party. Recently Liskov and Micali pointed out that the commonly used notion of non-transferability only guarantees security against an off-line attacker which cannot influence the verifier while he interacts with the signer, and that almost all previous schemes relying on interactive protocols are vulnerable to on-line attacks. To address this, Liskov and Micali formalized on-line non-transferable signatures which are resistant to on-line attacks, and proposed a generic construction based on a standard signature scheme and an encryption scheme.
In this paper, we revisit on-line non-transferable signatures. Firstly, we extend the security model of Liskov and Micali to cover not only the sign protocol, but also the confirm and disavow protocols executed by the confirmer. Our security model furthermore considers the use of multiple (potentially corrupted or malicious) confirmers, and guarantees security against attacks related to the use of signer specific confirmer keys. We then present a new approach to the construction of on-line non-transferable signatures, and propose a new concrete construction which is provably secure in the standard model. Unlike the construction by Liskov and Micali, our construction does not require the signer to issue ``fake'' signatures to maintain security, and allows the confirmer to both confirm and disavow signatures. Lastly, our construction provides noticeably shorter signatures than the construction by Liskov and Micali.
Generic Attacks on Misty Schemes -5 rounds is not enough-
Misty schemes are classic cryptographic schemes used to construct pseudo-random permutations from bits to bits by using pseudo-random permutations from bits to bits. These permutations will be called the ``internal'' permutations, and is the number of rounds of the Misty scheme. Misty schemes are important from a practical point of view since, for example, the Kasumi algorithm based on Misty schemes has been adopted as the standard block cipher in third-generation mobile systems. In this paper we describe the best known ``generic'' attacks on Misty schemes, i.e. attacks when the internal permutations do not have special properties, or are randomly chosen. We describe known plaintext attacks (KPA), non-adaptive chosen plaintext attacks (CPA-1) and adaptive chosen plaintext and ciphertext attacks (CPCA-2) against these schemes. Some of these attacks were previously known, some are new. One important result of this paper is that we will show that when rounds, there exist such attacks with a complexity strictly less than . Consequently, at least 6 rounds are necessary to avoid these generic attacks on Misty schemes. When we also describe some attacks on Misty generators, i.e. attacks where more than one Misty permutation is required.
Pairing-Friendly Elliptic Curves With Various Discriminants
In this paper, we extend the Brezing-Weng method by parameterizing the discriminant by a polynomial and give a construction of families of pairing-friendly elliptic curves with various discriminants. For 5, 8, 9, 15, 16, 20, 24 and 28, our method gives smaller values than the ones in previous results.
On Generic Constructions of Designated Confirmer Signatures (The ``Encryption of a Signature'' Paradigm Revisited)
Designated Confirmer signatures were introduced to limit the verification property
inherent to digital signatures. In fact, the verification in these
signatures is replaced by a confirmation/denial protocol between the
\emph{designated confirmer} and some verifier. An intuitive way to obtain such signatures consists in
first generating a digital signature on the message to be signed, then
encrypting the result using a suitable encryption scheme. This approach, referred to as the ``encryption of a signature'' paradigm, requires the
constituents (encryption and signature schemes) to meet the highest
security notions in order to achieve secure constructions.
In this paper, we revisit this method and establish the necessary and
sufficient assumptions on the building blocks in order to attain secure
confirmer signatures. Our study concludes that the paradigm, used in its basic
form, cannot allow a class of encryption schemes that is vital for the
efficiency of the confirmation/denial protocols. Next, we consider a slight
variation of the paradigm, proposed in the context of undeniable signatures;
we recast it in the confirmer signature framework along with changes that
yield more flexibility, and we demonstrate its efficiency
by explicitly describing its confirmation/denial protocols when instantiated with building blocks from a large class of
signature/encryption schemes. Interestingly, the class of signatures we
consider is very popular and has been for instance used to build efficient
designated verifier signatures.
AIDA Breaks BIVIUM (A&B) in 1 Minute Dual Core CPU Time
The stream cipher BIVIUM (both BIVIUM-A and BIVIUM-B), a modification of the eSTREAM finalist TRIVIUM, can be broken completely by the Algebraic IV Differential Attack, AIDA, using simulations or one minute of dual core processing.
AIDA uses the subspaces of two 32-dimensional vector spaces over subsets of IV bits to recover 56 of the 80 key bits. The remaining 24 key bits are most easily determined by brute force search.
We applied the Fast Reed-Muller Transform to speed up the search for linear equations in the key bits
and the Wavefront Model to rule out nonlinear relations in the key bits early on.
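For orientation, a minimal sketch of the fast Reed-Muller (Möbius) transform used in such searches for linear equations (toy code, not the attack itself): it converts a Boolean function's truth table into its algebraic normal form in-place with n * 2^(n-1) XORs.

    def moebius_transform(table, n):
        f = list(table)              # truth table of length 2^n, entries 0/1
        for k in range(n):
            bit = 1 << k
            for x in range(1 << n):
                if x & bit:
                    f[x] ^= f[x ^ bit]
        return f                     # f[m] is the ANF coefficient of monomial m

    # Example: f(x0, x1) = x0 AND x1 has truth table [0, 0, 0, 1] and its
    # only nonzero ANF coefficient is that of the monomial x0*x1.
    assert moebius_transform([0, 0, 0, 1], 2) == [0, 0, 0, 1]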
Longest Common Subsequence as Private Search
At STOC 2006 and CRYPTO 2007, Beimel et al. introduced a set of privacy requirements for algorithms that solve search problems. In this paper, we consider the longest common subsequence (LCS) problem as a private search problem, where the task is to find a string of (or embedding corresponding to) an LCS. We show that deterministic selection strategies do not meet the privacy guarantees considered for private search problems and, in fact, may ``leak'' an amount of information proportional to the entire input.
We then put forth and investigate several privacy structures for the LCS problem and design new and efficient output sampling and equivalence protecting algorithms that provably meet the corresponding privacy notions. Along the way, we also provide output sampling and equivalence protecting algorithms for finite regular languages, which may be of independent interest.
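For concreteness, the standard LCS dynamic program with a fixed tie-breaking rule in its traceback, i.e. exactly the kind of deterministic selection strategy shown above to leak (sketch ours):

    def lcs(a, b):
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n):
            for j in range(m):
                dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                    else max(dp[i][j + 1], dp[i + 1][j]))
        out, i, j = [], n, m
        while i > 0 and j > 0:
            if a[i - 1] == b[j - 1]:
                out.append(a[i - 1]); i -= 1; j -= 1
            elif dp[i - 1][j] >= dp[i][j - 1]:   # deterministic tie-break
                i -= 1
            else:
                j -= 1
        return ''.join(reversed(out))

    # Both "a" and "b" are LCSs of ("ab", "ba"); this traceback always
    # returns "a" -- that fixed choice is itself a leakage channel.
    assert lcs("ab", "ba") == "a"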
Identity-Based Chameleon Hash Scheme Without Key Exposure
In this paper, we propose the first identity-based chameleon hash
scheme without key exposure, which gives a positive answer to the open problem introduced by Ateniese and de Medeiros in 2004.
Leakage-Resilient Storage
We study the problem of secure data storage on hardware that may leak information. We introduce a new primitive, that we call {\em leakage-resilient storage} (LRS), which is an (unkeyed) scheme for encoding messages, and can be viewed as a generalization of the {\em All-Or-Nothing Transform} (AONT, Rivest 1997). The standard definition of AONT requires that it should be hard to reconstruct a message if not all the bits of its encoding are known. LRS is defined more generally, with respect to a class of functions. The security definition of LRS requires that it should be hard to reconstruct even if some values are known (where ), as long as the total length of is smaller than some parameter .
We construct an LRS scheme that is secure with respect to being a set of functions that can depend only on some restricted part of the memory. More precisely: we assume that the memory is divided into parts, and the functions in can be applied to just one of these parts. We also construct a scheme that is secure if the cardinality of is restricted (but still it can be exponential in the length of the encoding). This construction implies security in the case when the set consists of functions that are computable by Boolean circuits of a small size.
We also discuss the connection between the problem of constructing leakage-resilient storage and a theory of the compressibility of NP-instances.
Fast Architectures for the Pairing over Small-Characteristic Supersingular Elliptic Curves
This paper is devoted to the design of fast parallel accelerators for the cryptographic pairing on supersingular elliptic curves over finite fields of characteristics two and three. We propose here a novel hardware implementation of Miller's algorithm based on a parallel pipelined Karatsuba multiplier. After a short description of the strategies we considered to design our multiplier, we point out the intrinsic parallelism of Miller's loop and outline the architecture of coprocessors for the pairing over and . Thanks to a careful choice of algorithms for the tower field arithmetic associated with the pairing, we manage to keep the pipelined multiplier at the heart of each coprocessor busy. A final exponentiation is still required to obtain a unique value, which is desirable in most cryptographic protocols. We supplement our pairing accelerators with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources.
According to our place-and-route results on Xilinx FPGAs, our designs improve both the computation time and the area-time trade-off compared to previously published coprocessors.
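For orientation, one step of the Karatsuba recursion at the heart of such multipliers, over GF(2) where addition is XOR (a software sketch, ours; the paper's pipelined hardware design is not reproduced): each level trades one full-size product for three half-size ones.

    def gf2_karatsuba(a, b, n):
        # Multiply two degree-< n polynomials over GF(2), n a power of two;
        # polynomials are represented as Python ints (bit i = coeff of x^i).
        if n <= 2:                       # small base case: schoolbook
            r = 0
            for i in range(n):
                if (a >> i) & 1:
                    r ^= b << i
            return r
        h = n // 2
        a0, a1 = a & ((1 << h) - 1), a >> h
        b0, b1 = b & ((1 << h) - 1), b >> h
        lo = gf2_karatsuba(a0, b0, h)
        hi = gf2_karatsuba(a1, b1, h)
        mid = gf2_karatsuba(a0 ^ a1, b0 ^ b1, h) ^ lo ^ hi
        return lo ^ (mid << h) ^ (hi << n)   # recombine the three products

    # (x^2 + 1)(x + 1) = x^3 + x^2 + x + 1 over GF(2)
    assert gf2_karatsuba(0b101, 0b011, 4) == 0b1111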
Linear Cryptanalysis of Reduced-Round PRESENT
PRESENT is a hardware-oriented block cipher suitable for resource-constrained
environments. In this paper we analyze PRESENT using the
multidimensional linear cryptanalysis method.
We claim that our attack can recover the 80-bit secret key of PRESENT up to 25 rounds
out of 31 rounds with around data complexity.
Furthermore, we show that the 26-round version of PRESENT can be attacked
faster than exhaustive key search with the data complexity
by an advanced key search technique.
Our results are superior to all the previous attacks.
We demonstrate our results by performing
linear attacks on reduced variants of PRESENT.
Our results exemplify that
the performance of the multidimensional linear attack is superior
to that of the classical linear attack.
Computational Indistinguishability Amplification: Tight Product Theorems for System Composition
Computational indistinguishability amplification is the problem of strengthening cryptographic primitives whose security is defined by bounding the distinguishing advantage of an efficient distinguisher. Examples include pseudorandom generators (PRGs), pseudorandom functions (PRFs), and pseudorandom permutations (PRPs).
The literature on computational indistinguishability amplification consists of only a few isolated results. Yao's XOR-lemma implies, by a hybrid argument, that no efficient distinguisher has advantage better than (roughly) in distinguishing the XOR of independent -bit PRG outputs from uniform randomness if no efficient distinguisher has advantage more than in distinguishing from a uniform -bit string. The factor allows for security amplification only if : For the case of PRFs, a random-offset XOR-construction of Myers was the first result to achieve {\em strong} security amplification, i.e., also for .
This paper proposes a systematic treatment of computational indistinguishability amplification. We generalize and improve the above product theorem for the XOR of PRGs along five axes. First, we prove the {\em tight} information-theoretic bound (without factor ) also for the computational setting. Second, we prove results for {\em interactive} systems (e.g. PRFs or PRPs). Third, we consider the general class of {\em neutralizing combination constructions}, not just XOR. As an application this yields the first indistinguishability amplification results for the cascade of PRPs (i.e., block ciphers) converting a weak PRP into an arbitrarily strong PRP, both for single-sided and two-sided queries. Fourth, {\em strong} security amplification is achieved for a subclass of neutralizing constructions which includes as a special case the construction of Myers. As an application we obtain highly practical optimal security amplification for block ciphers, simply by adding random offsets at the input and output of the cascade. Fifth, we show strong security amplification also for {\em weakened assumptions} like security against random-input (as opposed to chosen-input) attacks.
A key technique is a generalization of Yao's XOR-lemma to (interactive) systems, which is of independent interest.
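As a toy information-theoretic instance of such a product theorem (our illustration, not the paper's general statement), the piling-up lemma for two independent biased bits $X_1, X_2$ with $\Pr[X_i = 0] = \tfrac{1}{2} + \varepsilon_i$ gives

\[
\Pr[X_1 \oplus X_2 = 0] = \tfrac{1}{2} + 2\,\varepsilon_1 \varepsilon_2,
\]

so the distance of the XOR from uniform is the product $2\varepsilon_1\varepsilon_2$, strictly smaller than either $\varepsilon_i$ whenever both are below $\tfrac{1}{2}$; the paper establishes computational analogues of bounds of this product type.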
First CPIR Protocol with Data-Dependent Computation
We design a new -CPIR protocol for
-bit strings as a combination of a noncryptographic (BDD-based) data structure and a more basic cryptographic primitive (communication-efficient -CPIR). is the first CPIR protocol where the server's online computation depends substantially on the concrete database. We then show that (a) for reasonably small values of , is guaranteed to have simultaneously log-squared communication and sublinear online computation, and (b) can handle huge but sparse matrices, common in data-mining applications, significantly more efficiently compared to all previous protocols. The security of can be based on the well-known Decisional Composite Residuosity assumption.
Provably Secure Convertible Undeniable Signatures with Unambiguity
This paper shows some efficient and provably-secure convertible undeniable signature schemes (with both selective conversion and all conversion), in the standard model and discrete logarithm setting. They further satisfy unambiguity, which is traditionally required for anonymous signatures. Briefly, unambiguity means that it is hard to generate a (message, signature) pair which is valid for two {\em different} public-keys. In other words, our schemes can be viewed as anonymous signature schemes as well as convertible undeniable signature schemes. Besides other applications, we show that such schemes are very suitable for anonymous auctions.
Permutation Polynomials modulo
A polynomial over a finite ring is called a
\textit{permutation polynomial} if the mapping
defined by is one-to-one. In this paper we consider the
problem of characterizing permutation polynomials; that is, we
seek conditions on the coefficients of a polynomial which are
necessary and sufficient for it to represent a permutation. We
also present a new class of permutation binomials over finite
fields of prime order.
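For concreteness, a brute-force check of the definition over a small ring (sketch ours):

    def is_permutation_polynomial(coeffs, n):
        # coeffs[i] is the coefficient of x^i; test whether x -> f(x) mod n
        # is a bijection on {0, ..., n-1}.
        image = {sum(c * pow(x, i, n) for i, c in enumerate(coeffs)) % n
                 for x in range(n)}
        return len(image) == n

    assert is_permutation_polynomial([0, 1], 7)         # f(x) = x
    assert is_permutation_polynomial([0, 1, 2], 8)      # f(x) = 2x^2 + x mod 8
    assert not is_permutation_polynomial([0, 0, 1], 7)  # x^2 is not a bijection mod 7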
Computational Soundness for Key Exchange Protocols with Symmetric Encryption
Formal analysis of security protocols based on symbolic
models has been very successful in finding flaws in
published protocols and proving protocols secure, using
automated tools. An important question is whether this kind
of formal analysis implies security guarantees in the
strong sense of modern cryptography. Initiated by the
seminal work of Abadi and Rogaway, this question has been
investigated and numerous positive results showing this
so-called computational soundness of formal analysis have
been obtained. However, for the case of active adversaries
and protocols that use symmetric encryption computational
soundness has remained a challenge.
In this paper, we show the first general computational
soundness result for key exchange protocols with symmetric
encryption, along the lines of a paper by Canetti and
Herzog on protocols with public-key encryption. More
specifically, we develop a symbolic, automatically
checkable criterion, based on observational equivalence,
and show that a key exchange protocol that satisfies this
criterion realizes a key exchange functionality in the
sense of universal composability. Our results hold under
standard cryptographic assumptions.
Threshold Decryption and Zero-Knowledge Proofs for Lattice-Based Cryptosystems
We present a variant of Regev's cryptosystem, but with a new choice of parameters. Using a recent classical reduction by Peikert, we prove the scheme semantically secure based on the worst-case lattice problem GapSVP. From this we construct a threshold cryptosystem which has a very efficient and non-interactive decryption protocol. We prove the threshold cryptosystem secure against passive adversaries corrupting all but one of the players, and against active adversaries corrupting less than one third of the players. We also describe how one can build a distributed key generation protocol. In the final part of the paper, we show how one can, in zero-knowledge, prove knowledge of the plaintext contained in a given ciphertext from Regev's original cryptosystem or our variant. The proof is of size only a constant times the size of a ciphertext.
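For orientation, a minimal sketch of the base Regev cryptosystem that the threshold construction and zero-knowledge proofs build on (toy parameters, insecure; the paper's parameter choice and threshold decryption are not reproduced):

    import random

    q, n, m = 2053, 10, 200   # toy modulus/dimensions, far too small for security

    def keygen():
        s = [random.randrange(q) for _ in range(n)]
        A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
        e = [random.choice([-1, 0, 1]) for _ in range(m)]   # narrow noise
        b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
        return s, (A, b)

    def encrypt(pk, bit):
        A, b = pk
        r = [random.randrange(2) for _ in range(m)]          # random subset sum
        u = [sum(A[i][j] * r[i] for i in range(m)) % q for j in range(n)]
        c = (sum(b[i] * r[i] for i in range(m)) + bit * (q // 2)) % q
        return u, c

    def decrypt(s, ct):
        # The accumulated noise stays well below q/4, so rounding recovers the bit.
        u, c = ct
        d = (c - sum(s[j] * u[j] for j in range(n))) % q
        return 1 if q // 4 < d < 3 * q // 4 else 0

    s, pk = keygen()
    assert decrypt(s, encrypt(pk, 1)) == 1 and decrypt(s, encrypt(pk, 0)) == 0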
Sub-linear Size Pairing-based Non-interactive Zero-Knowledge Arguments
We construct non-interactive zero-knowledge arguments for circuit satisfiability and arithmetic circuits with perfect completeness, perfect zero-knowledge and computational (co-)soundness. The non-interactive zero-knowledge arguments have sub-linear size and very efficient public verification. Our construction uses bilinear groups and is only proven secure in the generic group model, but does not rely on random oracles.
On the Security of 1024-bit RSA and 160-bit Elliptic Curve Cryptography
Meeting the requirements of NIST’s new cryptographic standards means phasing out usage of 1024-bit RSA and 160-bit elliptic curve cryptography (ECC) by the end of the year 2010. This write-up comments on the vulnerability of these systems to an open community attack effort and aims to assess the risk of their continued usage beyond 2010. We conclude that for 1024-bit RSA the risk is small at least until the year 2014, and that 160-bit ECC over a prime field may safely be used for much longer – with the current state of the art in cryptanalysis we would be surprised if a public effort could make a dent in 160-bit prime field ECC by the year 2020. Our assessment is based on the latest practical data of large scale integer factorization and elliptic curve discrete logarithm computation efforts.
A Simple Secret Sharing Scheme for Hierarchical Threshold Access Structures
One of the recent generalizations of (t, n) secret sharing for
hierarchical threshold secret access structures is given by Tassa, who
answers the natural question of sharing a secret among three employees
at least one of which is a manager. However, the schemes proposed
to address this problem require a significant amount of theoretical
background. We give a simpler yet efficient method for hierarchical
threshold access structures. Our scheme employs a different approach
than previous works, as it involves a certain distribution of polynomials,
where members of higher compartments are given a summation of
evaluations of a higher number of polynomials, resulting in a hierarchical
effect. The simplicity of our scheme is advantageous both in theory and
in practical implementations.
Securing Plastic Money Using an RFID Based Protocol Stack
Since 2006, there have been three major systems that have been implemented in an attempt to reduce the threat of credit card fraud - Chip and PIN (United Kingdom), Chip Authentication Program - CAP (European Union), and RFID-enabled credit cards (United States of America). In spite of a big effort by EMV\footnote{EMV Co.: a body comprising Europay, Mastercard, and Visa which develops standards for credit card interaction.}, there has been little evidence to demonstrate the success of these schemes in stopping fraudsters, scammers, and identity thieves. This may be attributed to combinations of poor usability, lack of trusted interfaces, the absence of smart-card cryptography that takes full advantage of the available computation resources, and inadequate authentication protocols. In this paper, we explain the shortcomings and vulnerabilities of each of these systems, and then explain the requirements of a secure and usable cashless payment system. We also describe a new RFID-based protocol stack - SECAPS (Secure Cashless Payment System), which obviates many of the attacks on the current schemes by using the newly available computation resources on modern RFID tags.
QTRU: A Lattice Attack Resistant Version of NTRU
We propose QTRU, a probabilistic and multi-dimensional public key cryptosystem based on the NTRU public key cryptosystem using quaternion algebra. QTRU encrypts four data vectors in each encryption session and the only other major difference between NTRU and QTRU is that the underlying algebraic structure has been changed to a non-commutative algebraic structure. As a result, QTRU inherits the strength of NTRU and its positive points. In addition, the non-commutativity of the underlying structure of QTRU makes it much more resistant to some lattice-based attacks. After a brief description of NTRU, we begin by describing the algebraic structure used in QTRU. Further, we present the details of the key generation, encryption and decryption algorithms of QTRU and discuss the issues regarding key security, message security, and the probability of successful decryption. Last but not least, QTRU's resistance against lattice-based attacks is investigated.
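For orientation, the Hamilton product that supplies QTRU's non-commutativity, sketched with integer coefficients standing in for the truncated polynomial ring (ours):

    def quat_mul(p, q):
        # Hamilton product of quaternions p = a1 + b1*i + c1*j + d1*k and
        # q = a2 + b2*i + c2*j + d2*k over any commutative coefficient ring.
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    # Non-commutativity, the property QTRU leans on against lattice attacks:
    i, j = (0, 1, 0, 0), (0, 0, 1, 0)
    assert quat_mul(i, j) == (0, 0, 0, 1)    # i * j = k
    assert quat_mul(j, i) == (0, 0, 0, -1)   # j * i = -k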
Dual System Encryption: Realizing Fully Secure IBE and HIBE under Simple Assumptions
We present a new methodology for proving security of encryption
systems using what we call Dual System Encryption. Our techniques
result in fully secure Identity-Based Encryption (IBE) and
Hierarchical Identity-Based Encryption (HIBE) systems under the simple
and established decisional Bilinear Diffie-Hellman and decisional
Linear assumptions. Our IBE system has ciphertexts, private keys, and
public parameters each consisting of a constant number of group
elements. These are the first HIBE system and the first IBE system with short parameters under simple assumptions.
In a Dual System Encryption system, both ciphertexts and private keys can take on one of two indistinguishable forms. A private key or
ciphertext is normal if it is generated
from the system's key generation or encryption algorithm, respectively. These keys
and ciphertexts will behave as one expects in an IBE system. In
addition, we define semi-functional keys and ciphertexts. A
semi-functional private key will be able to decrypt all normally
generated ciphertexts; however, decryption will fail if one attempts
to decrypt a semi-functional ciphertext with a semi-functional private
key. Analogously, semi-functional ciphertexts will be decryptable only
by normal private keys.
Dual System Encryption opens up a new way to prove security of IBE and
related encryption systems. We define a sequence of games where
we change first the challenge ciphertext and then the private keys
one by one to be semi-functional. We finally
end up in a game where the challenge ciphertext and all private keys
are semi-functional, at which point proving security is straightforward.
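Schematically, in our rendering of this game sequence for an adversary making $q$ private-key queries:

\[
\mathrm{Game}_{\mathrm{Real}} \rightarrow \mathrm{Game}_{0} \rightarrow \mathrm{Game}_{1} \rightarrow \cdots \rightarrow \mathrm{Game}_{q} \rightarrow \mathrm{Game}_{\mathrm{Final}},
\]

where $\mathrm{Game}_{0}$ replaces the challenge ciphertext by a semi-functional one, $\mathrm{Game}_{k}$ additionally makes the first $k$ queried keys semi-functional, and each consecutive pair of games is computationally indistinguishable, so the adversary's real advantage is bounded by the sum of the game-hopping gaps plus its (trivial) advantage in the final game.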