All papers (Page 201 of 24425 results)

Last updated:  2011-01-26
A non-Abelian factorization problem and an associated cryptosystem
Srinath Baba, Srinivas Kotyad, Raghu Teja
In this note, we define a cryptosystem based on non-commutative properties of groups. The cryptosystem is based on the hardness of the problem of factoring over these groups. This problem, interestingly, boils down to the discrete logarithm problem on some Abelian groups. Further, we illustrate this method in three different non-Abelian groups: GL, UT and the braid groups.
Last updated:  2011-06-17
Constructing differential 4-uniform permutations from known ones
Yuyin Yu, Mingsheng Wang, Yongqiang Li
It is observed that exchanging two values of a function over , its differential uniformity and nonlinearity change only a little. Using this idea, we find differential 4-uniform permutations over whose number of pairs of input and output differences with differential -uniform is , less than , which provides a solution to an open problem proposed by Berger et al. \cite{ber}. Moreover, for the inverse function over ( even), the various possible differential uniformities are completely determined after two of its values are exchanged. As a consequence, we obtain some highly nonlinear permutations with differential uniformity which are CCZ-inequivalent to the inverse function on .
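As a side note (not from the paper), the notion of differential uniformity used above can be checked directly on a lookup table. The sketch below computes the differential uniformity of an S-box under XOR differences and shows the effect of swapping two output values; the inverse map over GF(2^4) is used purely as a convenient, well-known example, not as the paper's construction.

```python
# Illustrative sketch: differential uniformity of an S-box given as a lookup
# table (XOR differences), and its change after swapping two output values.
# GF(2^4) with irreducible polynomial x^4 + x + 1 and the inverse map are used
# only as a familiar example.

def gf16_mul(a, b, poly=0b10011):
    """Multiply two elements of GF(2^4), reducing modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= poly
    return r

def gf16_inv(a):
    """Inverse in GF(2^4) by brute force; 0 maps to 0 by convention."""
    if a == 0:
        return 0
    return next(x for x in range(1, 16) if gf16_mul(a, x) == 1)

def differential_uniformity(sbox):
    """max over a != 0 and b of #{x : S(x ^ a) ^ S(x) = b}."""
    n = len(sbox)
    best = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        best = max(best, max(counts))
    return best

inv = [gf16_inv(x) for x in range(16)]
print("inverse map over GF(2^4):", differential_uniformity(inv))

# Swap two output values and observe how little the differential uniformity moves.
swapped = list(inv)
swapped[1], swapped[2] = swapped[2], swapped[1]
print("after swapping S(1) and S(2):", differential_uniformity(swapped))
```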
Last updated:  2011-09-13
Lower and Upper Bounds for Deniable Public-Key Encryption
Rikke Bendlin, Jesper Buus Nielsen, Peter Sebastian Nordholt, Claudio Orlandi
A deniable cryptosystem allows a sender and a receiver to communicate over an insecure channel in such a way that the communication is still secure even if the adversary can threaten the parties into revealing their internal states after the execution of the protocol. This is done by allowing the parties to change their internal state to make it look like a given ciphertext decrypts to a message different from what it really decrypts to. Deniable encryption was in this way introduced to allow to deny a message exchange and hence combat coercion. Depending on which parties can be coerced, the security level, the flavor and the number of rounds of the cryptosystem, it is possible to define a number of notions of deniable encryption. In this paper we prove that there does not exist any non-interactive receiver-deniable cryptosystem with better than polynomial security. This also shows that it is impossible to construct a non-interactive bi-deniable public-key encryption scheme with better than polynomial security. Specifically, we give an explicit bound relating the security of the scheme to how efficient the scheme is in terms of key size. Our impossibility result establishes a lower bound on the security. As a final contribution we give constructions of deniable public-key encryption schemes which establishes upper bounds on the security in terms of key length. There is a gap between our lower and upper bounds, which leaves the interesting open problem of finding the tight bounds.
Last updated:  2011-01-25
Private Identification, Authentication and Key Agreement Protocol with Security Mode Setup
Farshid Farhat, Somayeh Salimi, Ahmad Salahi
Identification, authentication and key agreement in UMTS networks with security mode setup has some weaknesses regarding mutual freshness of key agreement, DoS-attack resistance, and efficient bandwidth consumption. In this article we consider UMTS AKA and some other proposed schemes. We then explain the known weaknesses of the previous frameworks suggested for the UMTS AKA protocol. After that we propose a new protocol, called the private identification, authentication and key agreement protocol (PIAKAP), for UMTS mobile networks. Our suggested protocol combines the identification and AKA stages of the UMTS AKA protocol while eliminating the disadvantages of related works and bringing some new features to improve the UMTS AKA mechanism. These features include reducing the interactive rounds of UMTS AKA with security mode setup and establishing user privacy.
Last updated:  2011-01-25
Fast Scalar Multiplication in ECC using the Multibase Number System
G. N. Purohit, Asmita Singh Rawat
As a generalization of double-base chains, the multibase number system is very suitable for efficient computation of scalar multiplication on an elliptic curve because of its shorter representation length and lower Hamming weight. In this paper, combined with the given formulas for computing the 7-fold of an elliptic curve point P, an efficient scalar multiplication algorithm for elliptic curves is proposed using 2, 3 and 7 as the bases of the multibase number system. The algorithm costs less compared with Shamir's trick and the interleaving-with-NAFs method.
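For illustration only (this is not the paper's algorithm): a minimal greedy sketch of a {2,3,7}-expansion, where each term has the form 2^a * 3^b * 7^c; a shorter expansion translates into fewer point additions during scalar multiplication.

```python
# Greedy {2,3,7}-expansion of an integer (a classical baseline, not the
# construction proposed in the paper).

def closest_237_integer(n):
    """Largest 2^a * 3^b * 7^c that does not exceed n."""
    best = 1
    p7 = 1
    while p7 <= n:
        p37 = p7
        while p37 <= n:
            p = p37
            while p * 2 <= n:
                p *= 2
            best = max(best, p)
            p37 *= 3
        p7 *= 7
    return best

def greedy_multibase(n):
    """Greedy {2,3,7}-expansion of n; returns the list of terms."""
    terms = []
    while n > 0:
        t = closest_237_integer(n)
        terms.append(t)
        n -= t
    return terms

k = 314159
terms = greedy_multibase(k)
assert sum(terms) == k
print(len(terms), "terms:", terms)
```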
Last updated:  2011-01-25
Proxy Blind Multi-signature Scheme using ECC for handheld devices
Jayaprakash Kar
A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of two or more original signers without knowing the content of the message or document. It combines the advantages of proxy signature, blind signature and multi-signature schemes. This paper describes an efficient proxy blind multi-signature scheme. The security of the proposed scheme is based on the difficulty of breaking the one-way hash function and the elliptic curve discrete logarithm problem (ECDLP). It can be implemented on handheld devices with low power and small processors, such as smart cards and PDAs. The scheme utilizes a trusted third party called a certificate authority to ensure that signatures can only be generated during a valid delegation period. It satisfies the security properties of both proxy and blind signature schemes.
Last updated:  2011-02-14
Computing endomorphism rings of elliptic curves under the GRH
Gaetan Bisson
We design a probabilistic algorithm for computing endomorphism rings of ordinary elliptic curves defined over finite fields that we prove has a subexponential runtime in the size of the base field, assuming solely the generalized Riemann hypothesis. Additionally, we improve the asymptotic complexity of previously known, heuristic, subexponential methods by describing a faster isogeny-computing routine.
Last updated:  2013-09-19
Reclaiming Privacy for Smartphone Applications (Revised Version)
Emiliano De Cristofaro, Anthony Durussel, Imad Aad
The scope of mobile phones has skyrocketed in recent years to such an extent that smartphone sales are expected to surpass those of PCs by the end of 2011. Equipped with relatively powerful processors and fairly large memory and storage capabilities, smartphones can accommodate increasingly complex interactive applications. As a result, the growing amount of sensitive information shared by smartphone users raises serious privacy concerns and motivates the need for appropriate privacy-preserving mechanisms. In this paper, we present a novel architecture geared for privacy-sensitive applications where personal information is shared among users and decisions are made based on given optimization criteria. Specifically, we focus on two application scenarios: (i) privacy-preserving interest sharing, i.e., discovering shared interests without leaking users' private information, and (ii) private scheduling, i.e., determining common availabilities and location preferences that minimize associated costs, without exposing any sensitive information. We propose efficient yet provably-private solutions, and conduct an extensive experimental analysis that attests to the practicality of the attained privacy features.
Last updated:  2011-01-21
Simple and Exact Formula for Minimum Loop Length in Ate_i Pairing based on Brezing-Weng Curves
Hoon Hong, Eunjeong Lee, Hyang-Sook Lee, Cheol-Min Park
We provide a simple and exact formula for the minimum Miller loop length in Ate_i pairing based on Brezing-Weng curves, in terms of the involved parameters, under a mild condition on the parameters. It will be also shown that almost all cryptographically useful parameters satisfy the mild condition. Hence the simple and exact formula is valid for them. It will also turn out that the formula depends only on two parameters, providing freedom to choose the other parameters to address the design issues other than minimizing the loop length.
Last updated:  2020-12-30
Fast point quadrupling on elliptic curves
Duc-Phong Le, Binh P Nguyen
Ciet et al. (2006) proposed an elegant method for trading inversions for multiplications when computing [2]P + Q from two given points P and Q on elliptic curves in Weierstrass form. Motivated by their work, this paper proposes a fast algorithm for computing [4]P with only one inversion in affine coordinates. Our algorithm, which requires 1I + 8S + 8M, is faster than two repeated doublings whenever the cost of one field inversion is more expensive than the cost of four field multiplications plus four field squarings (i.e. I > 4M + 4S). It saves one field multiplication and one field squaring in comparison with the Sakai-Sakurai method (2001). Even better, for special curves that allow an "a = 0" (or "b = 0") speedup, we obtain [4]P in affine coordinates using just 1I + 5S + 9M (or 1I + 5S + 6M, respectively).
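A quick sanity check of the trade-off stated above, under the standard assumption that one affine doubling costs 1I + 2M + 2S (so two repeated doublings cost 2I + 4S + 4M, against 1I + 8S + 8M for direct quadrupling); the I/M and S/M ratios used below are illustrative placeholders, not measurements.

```python
# Compare 2*DBL (2I + 4S + 4M) with direct quadrupling (1I + 8S + 8M) for a few
# illustrative inversion/multiplication cost ratios.

def cost(inversions, squarings, mults, i_over_m, s_over_m=0.8):
    """Total cost in field-multiplication units."""
    return inversions * i_over_m + squarings * s_over_m + mults

for i_over_m in (4, 8, 10, 30, 100):
    two_doublings = cost(2, 4, 4, i_over_m)
    quadrupling = cost(1, 8, 8, i_over_m)
    verdict = "quadrupling wins" if quadrupling < two_doublings else "two doublings win"
    print(f"I/M = {i_over_m:>5}: 2*DBL = {two_doublings:6.1f}M, QPL = {quadrupling:6.1f}M  ->  {verdict}")
```

With S = 0.8M the break-even point sits at I = 4M + 4S = 7.2M, matching the condition quoted in the abstract.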
Last updated:  2011-01-21
Cold Boot Key Recovery by Solving Polynomial Systems with Noise
Martin Albrecht, Carlos Cid
A method for extracting cryptographic key material from DRAM used in modern computers has been recently proposed in [9]; the technique was called Cold Boot attacks. When considering block ciphers, such as AES and DES, simple algorithms were also proposed in [9] to recover the cryptographic key from the observed set of round subkeys in memory (computed via the cipher's key schedule operation), which were however subject to errors due to memory bit decay. In this work we extend this analysis to consider key recovery for other ciphers used in Full Disk Encryption (FDE) products. Our algorithms are also based on closest-codeword decoding methods, but apply a novel method for solving a set of non-linear algebraic equations with noise based on Integer Programming. This method should have further applications in cryptology, and is likely to be of independent interest. We demonstrate the viability of the Integer Programming method by applying it against the Serpent block cipher, which has a much more complex key schedule than AES. Furthermore, we also consider the Twofish key schedule, to which we apply a dedicated method of recovery.
Last updated:  2011-01-21
Higher-Order Differential Attack on Reduced SHA-256
Mario Lamberger, Florian Mendel
In this work, we study the application of higher-order differential attacks on hash functions. We show a second-order differential attack on the SHA-256 compression function reduced to 46 out of 64 steps. We implemented the attack and give the result in Table 1. The best attack so far (in a different attack model) with practical complexity was for 33 steps of the compression function.
Last updated:  2011-06-12
The Complexity Analysis of the MutantXL Family
Mohamed Saied Emam Mohamed, Jintai Ding, Johannes Buchmann
Algebraic attacks are based on the problem of solving systems of multivariate polynomial equations. The complexity of algorithms for solving this problem is essentially affected by the method used to enlarge these multivariate systems. The MutantXL algorithm was presented as an efficient algorithm for solving multivariate systems. In this paper, we study the complexity of the MutantXL algorithm and give an upper bound on the number of mutants necessary for terminating the computation of the algorithm. At the ePrint archive, E. Thomae and C. Wolf recently published a new paper presenting two formulas for an upper bound on the number of linearly independent equations generated by MutantXL. We have noticed that these two formulas are not accurate: the first one indeed leads to an overly large upper bound, while the second one does not take into account many linearly independent equations. We indicate the errors in these formulas and propose new ones, based on a theoretical evaluation of the maximum number of linearly independent equations produced by MutantXL. We have verified the new formulas with a large number of experiments on HFE and randomly generated systems. Furthermore, we study the complexity of the MGB algorithm for computing Gröbner bases. The analysis of MGB suggests a new upper bound for the complexity of computing Gröbner bases.
Last updated:  2012-10-11
A New Family of Implicitly Authenticated Diffie-Hellman Protocols
Andrew C. Yao, Yunlei Zhao
Cryptographic algorithm standards play a key role both in the practice of information security and in cryptographic research. Among them, the MQV and HMQV protocols ((H)MQV, in short) are a family of implicitly authenticated Diffie-Hellman key-exchange (DHKE) protocols that are among the most efficient and are widely standardized. In this work, from some new perspectives and under some new design rationales, and also inspired by the security analysis of HMQV, we develop a new family of practical implicitly authenticated DHKE (IA-DHKE) protocols that enjoy a notable balance among security, efficiency, privacy, fairness and ease of deployment. We make detailed comparisons between our new protocols and (H)MQV, showing that the newly developed protocols outperform HMQV in most aspects. Very briefly speaking, we achieve: 1. the most efficient provably secure IA-DHKE protocol to date, and the first online-optimal provably secure IA-DHKE protocols; 2. the first IA-DHKE protocol that is provably secure and resilient to the leakage of DH components and exponents under merely standard assumptions, without additionally relying on the knowledge-of-exponent assumption (KEA); 3. the first provably secure privacy-preserving and computationally fair IA-DHKE protocol, with the privacy-preserving properties of reasonable deniability and post-ID computability and the property of session-key computational fairness. Guided by our new design rationales, in this work we also formalize and introduce a new concept, session-key computational fairness (as a complement to session-key security), to the literature.
Last updated:  2012-09-13
Secure Authentication from a Weak Key, Without Leaking Information
Niek J. Bouman, Serge Fehr
We study the problem of authentication based on a weak key in the information-theoretic setting. A key is weak if its min-entropy is an arbitrarily small fraction of its bit length. This problem has recently received considerable attention, with different solutions optimizing different parameters. We study the problem in an extended setting, where the weak key is used as a one-time session key that is derived from a public source of randomness with the help of a (potentially also weak) long-term key. Our goal now is to authenticate a message by means of the weak session key in such a way that (nearly) no information on the long-term key is leaked. Ensuring privacy of the long-term key is vital for the long-term key to be re-usable. Previous work has not considered such a privacy issue, and previous solutions do not seem to satisfy this requirement. We show the existence of a practical four-round protocol that provides message authentication from a weak session key and that avoids non-negligible leakage on the long-term key. The security of our scheme also holds in the quantum setting, where the adversary may have limited quantum side information on the weak session key. As an application of our scheme, we show the existence of an identification scheme in the bounded quantum storage model that is secure against a man-in-the-middle attack and that is truly password-based: it does not need any high-entropy key, in contrast to the scheme proposed by Damgaard et al.
Last updated:  2011-09-02
The Geometry of Flex Tangents to a Cubic Curve and its Parameterizations
Jean-Marc Couveignes, Jean-Gabriel Kammerer
We show how the study of the geometry of the nine flex tangents to a cubic produces many pseudo-parameterizations, including the ones given by Icart, Kammerer, Lercier, Renault and Farashahi.
Last updated:  2011-01-19
Corrigendum to: The Cube Attack on Stream Cipher Trivium and Quadraticity Tests
Piotr Mroczkowski, Janusz Szmidt
In 2008, I. Dinur and A. Shamir presented a new type of algebraic attack on symmetric ciphers named the cube attack. The method has been applied to reduced variants of the stream ciphers Trivium and Grain-128, reduced variants of the block ciphers Serpent and CTC, and to a reduced version of the keyed hash function MD6. Independently, a very similar attack named AIDA was introduced by M. Vielhaber. In this paper we develop quadraticity tests within the cube attack and apply them to a variant of the stream cipher Trivium reduced to 709 initialization rounds. Using this method we obtain the full 80-bit secret key, which eliminates the brute-force search over some secret key bits that occurred in previous cube attacks. In this corrigendum to our previous paper, the indexing of cubes and key bits has been reversed to make it consistent with other papers.
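To make the cube-attack principle concrete, here is a toy sketch (the polynomial f is an arbitrary example, not Trivium): summing a black-box Boolean polynomial over all assignments of a chosen cube of public variables isolates its superpoly, which for a good cube is linear in the secret key bits.

```python
# Toy cube-attack demonstration on an arbitrary small Boolean polynomial.

from itertools import product

def f(k, v):
    # toy master polynomial over GF(2): key bits k = (k0, k1), IV bits v = (v0, v1, v2)
    k0, k1 = k
    v0, v1, v2 = v
    return v0 * v1 * k0 ^ v0 * v1 * k1 ^ v0 * v1 ^ v2 * k0 ^ k1 * v1 ^ k0 * k1 * v2

def cube_sum(k, cube_bits, fixed):
    """XOR f over all assignments of the cube variables, other IV bits fixed."""
    s = 0
    for assignment in product((0, 1), repeat=len(cube_bits)):
        v = list(fixed)
        for bit, val in zip(cube_bits, assignment):
            v[bit] = val
        s ^= f(k, v)
    return s

# Summing over the cube {v0, v1} (with v2 fixed to 0) isolates the superpoly
# k0 + k1 + 1, a linear equation in the key that an attacker can collect.
for k in product((0, 1), repeat=2):
    print(k, cube_sum(k, cube_bits=[0, 1], fixed=[0, 0, 0]))
```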
Last updated:  2012-07-11
Efficient Unconditional Asynchronous Byzantine Agreement with Optimal Resilience
Ashish Choudhury, Arpita Patra
We present an efficient and optimally resilient Asynchronous Byzantine Agreement (ABA) protocol involving n = 3t+1 parties over a completely asynchronous network, tolerating a computationally unbounded Byzantine adversary who can control at most t parties out of the n parties. The amortized communication complexity of our ABA protocol is O(n^{3} \log \frac{1}{\epsilon}) bits for attaining agreement on a single bit, where \epsilon (\epsilon > 0) denotes the probability of non-termination. We compare our protocol with the best known optimally resilient ABA protocols of Canetti et al. (STOC 1993) and Abraham et al. (PODC 2008) and show that our protocol gains by a factor of O(n^{8} \log \frac{1}{\epsilon}^{3}) over the ABA protocol of Canetti et al. and by a factor of O(n^{5} \frac{\log{n}}{\log \frac{1}{\epsilon}}) over the ABA protocol of Abraham et al. in terms of communication complexity. To design our protocol, we first present a new, optimally resilient statistical asynchronous verifiable secret sharing (AVSS) protocol with n = 3t+1, which significantly improves the communication complexity of the only known optimally resilient statistical AVSS protocol of Canetti et al. Our AVSS protocol shares multiple secrets simultaneously and incurs lower communication complexity than executing multiple instances of an AVSS protocol sharing a single secret. To design our AVSS protocol, we further present a new asynchronous primitive called asynchronous weak commitment (AWC), which acts as a substitute for asynchronous weak secret sharing (AWSS), which was used as a primitive for designing AVSS by Canetti et al. We observe that AWC has weaker requirements than AWSS and hence can be designed more efficiently. The common coin primitive is one of the most important building blocks for the construction of an ABA protocol. The best known common coin protocol of Feldman et al. requires multiple instances of an AVSS protocol sharing a single secret as a black-box. Unfortunately, this common coin protocol does not achieve its goal when the multiple invocations of AVSS sharing a single secret are replaced by a single invocation of an AVSS protocol sharing multiple secrets simultaneously. Therefore, in this paper, we extend the existing common coin protocol to make it compatible with our new AVSS protocol (sharing multiple secrets). As a byproduct, our new common coin protocol is much more communication efficient than the existing common coin protocol.
Last updated:  2011-01-18
Fast Elliptic Curve Cryptography Using Optimal Double-Base Chains
Vorapong Suppakitpaisarn, Masato Edahiro, Hiroshi Imai
In this work, we propose an algorithm to produce double-base chains that optimize the time used for computing an elliptic curve cryptosystem. A double-base chain is a representation that combines the binary and ternary representations. With this method, we can reduce the Hamming weight of the expansion and hence the time for computing the scalar point multiplication (Q = rS), which is the bottleneck operation of an elliptic curve cryptosystem. This representation is very redundant, i.e. a number can be represented by many expansions, so we can select the one that makes the operation fastest. However, previous works on double-base chains have used a greedy algorithm, and their solutions are not optimal. We propose an algorithm based on dynamic programming that outputs an optimized double-base chain. The experiments show that we reduce the time for computing scalar multiplication by 3.88-3.95%, multi-scalar multiplication by 2.55-4.37%, and multi-scalar multiplication on a larger digit set by 3.5-12%.
Last updated:  2011-03-14
Outline of a proposal responding to E.U. and U.S. calls for trustworthy global-scale IdM and CKM designs
Benjamin Gittins
In 2007, the E.U. FP6 SecurIST called for trustworthy international identity management (IdM) that was user-centric. In 2009, the U.S. Department of Homeland Security (DHS) called for trustworthy global-scale IdM and the U.S. National Institute of Standards and Technology (NIST) called for new cryptographic key management (CKM) designs. In this paper we outline the core architecture for (apparently) the first globally scalable, post quantum secure, symmetric key based platform for provisioning IdM, key distribution/agreement and inter-enterprise CKM services. Our proposal employs a decentralised trust model that exploits compartmentalisation, redundancy and diversification simultaneously across service provider, software developer, hardware vendor, class of cryptographic primitive, and protocol axis. It employs behavioural analysis techniques and supports the collaborative management of international name spaces, management of client transactions using public identifiers and supports user-centric cross-cutting control mechanisms. Our proposal is suitable for use with commercial off the shelf hardware and is designed to wrap-around and protect the output of existing security deployments. The platform addresses the U.S. Networking and Information Technology Research and Development Program (NITRD) call to create a digital immune system (multi-layered protection, decentralised control, diversity, pattern recognition), the DHS call for combating insider attacks and malware, achieving survivability and availability, and NIST managers' call for a CKM design supporting billions of users without the use of public key technologies. This proposal has been designed as part of our Trustworthy Resilient Universal Secure Infrastructure Platform project.
Last updated:  2012-02-10
The Parazoa Family: Generalizing the Sponge Hash Functions
Elena Andreeva, Bart Mennink, Bart Preneel
Sponge functions were introduced by Bertoni et al. as an alternative to the classical Merkle-Damgaard design. Many hash function submissions to the SHA-3 competition launched by NIST in 2007, such as CubeHash, Fugue, Hamsi, JH, Keccak and Luffa, derive from the original sponge design, and security guarantees from some of these constructions are typically based on indifferentiability results. Although indifferentiability proofs for these designs often bear significant similarities, these have so far been obtained independently for each construction. In this work, we introduce the parazoa family of hash functions as a generalization of ``sponge-like'' functions. Similarly to the sponge design, the parazoa family consists of compression and extraction phases. The parazoa hash functions, however, extend the sponge construction by enabling the use of a wider class of compression and extraction functions that need to satisfy certain properties. More importantly, we prove that the parazoa functions satisfy the indifferentiability notion of Maurer et al. under the assumption that the underlying permutation is ideal. Not surprisingly, our indifferentiability result confirms the bound on the original sponge function, but it also carries over to a wider spectrum of hash functions and eliminates the need for a separate indifferentiability analysis.
Last updated:  2011-01-14
Simple and Efficient Single Round Almost Perfectly Secure Message Transmission Tolerating Generalized Adversary
Ashish Choudhury, Kaoru Kurosawa, Arpita Patra
Patra et al. gave a necessary and sufficient condition for the possibility of almost perfectly secure message transmission protocols tolerating a general, non-threshold Q^2 adversary structure. However, their protocol requires at least three rounds and performs exponential (exponential in the size of the adversary structure) computation and communication. Moreover, they left it as an open problem to design an efficient protocol for almost perfectly secure message transmission tolerating a Q^2 adversary structure. In this paper, we show the first single-round almost perfectly secure message transmission protocol tolerating a Q^2 adversary structure. The computation and communication complexities of the protocol are both polynomial in the size of the underlying linear secret sharing scheme (LSSS) and the adversary structure. This solves the open problem raised by Patra et al. When we restrict our general protocol to a threshold adversary with n = 2t+1, we obtain a single-round, communication-optimal almost secure message transmission protocol tolerating a threshold adversary, which is much more computationally efficient and relatively simpler than the previous communication-optimal protocol of Srinathan et al.
Last updated:  2011-04-14
Private Discovery of Common Social Contacts
Emiliano De Cristofaro, Mark Manulis, Bertram Poettering
The increasing use of computing devices for social interactions fuels the proliferation of online social applications, yet prompts a number of privacy concerns. One common problem occurs when two unfamiliar users, in the process of establishing social relationships, want to assess their social proximity by discovering mutual social contacts. In this paper, we introduce \emph{Private Contact Discovery}, a novel cryptographic primitive that lets two users, on input their respective contact lists, learn their common contacts (if any), and nothing else. We present an efficient and provably secure construction, that (i) prevents arbitrary list manipulation by means of contact certification, and (ii) guarantees user authentication and revocability. Following a rigorous cryptographic treatment of the problem, we define \emph{contact-hiding} security and prove it for our solution, under the RSA assumption in the Random Oracle Model (ROM). We also show that other related cryptographic techniques are unsuitable in this context. Experimental analysis on various types of devices attests to the practicality of our technique, which achieves computational and communication overhead almost linear in the number of contacts.
Last updated:  2011-01-14
Supporting Publication and Subscription Confidentiality in Pub/Sub Networks
Mihaela Ion, Giovanni Russello, Bruno Crispo
The publish/subscribe model offers a loosely-coupled communication paradigm where applications interact indirectly and asynchronously. Publisher applications generate events that are sent to interested applications through a network of brokers. Subscriber applications express their interest by specifying filters that brokers can use for routing the events. Supporting confidentiality of messages being exchanged is still challenging. First of all, it is desirable that any scheme used for protecting the confidentiality of both the events and filters should not require the publishers and subscribers to share secret keys. In fact, such a restriction is against the loose-coupling of the model. Moreover, such a scheme should not restrict the expressiveness of filters and should allow the broker to perform event filtering to route the events to the interested parties. Existing solutions do not fully address these issues. In this paper, we provide a novel scheme that supports (i) confidentiality for events and filters; (ii) filters can express very complex constraints on events even if brokers are not able to access any information on both events and filters; (iii) and finally it does not require publishers and subscribers to share keys.
Last updated:  2011-01-14
Secure evaluation of polynomial using privacy ring homomorphisms
Alexander Rostovtsev, Alexey Bogdanov, Mikhail Mikhaylov
A method for secure evaluation of a polynomial y = F(x_1, …, x_k) over certain rings on an untrusted computer is proposed. Two models of the untrusted computer are considered: passive and active. In the passive model, the untrusted computer correctly computes the polynomial F and tries to learn the secret input (x_1, …, x_k) and output y. In the active model, the untrusted computer tries to learn the input and output and tries to change the correct output y so that this change cannot be detected. Secure computation is achieved by using a one-time privacy ring homomorphism Z/nZ -> Z/nZ[z]/(f(z)), n = pq, generated by the trusted computer. In the active model, a secret check point v = F(u_1, …, u_k) is used. The trusted computer generates a polynomial f(z) = (z-t)(z+t), t in Z/nZ, and inputs X_i(z) in Z/nZ[z]/(f(z)) such that X_i(t) = x_i (mod n) for the passive model, and f(z) = (z-t_1)(z-t_2)(z-t_3), t_i in Z/nZ, and inputs X_i(z) in Z/nZ[z]/(f(z)) such that X_i(t_1) = x_i (mod n) and X_i(t_2) = u_i (mod n) for the active model. The untrusted computer computes the function Y(z) = F(X_1(z), …, X_k(z)) in the ring Z/nZ[z]/(f(z)). For the passive model, the trusted computer determines the secret output y = Y(t) (mod n). For the active model, the trusted computer checks that Y(t_2) = v (mod n), then determines the correct output y = Y(t_1) (mod n).
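A toy rendering of the passive-model construction described above, with deliberately tiny parameters chosen only for readability (a real instantiation would use a large RSA modulus n = pq); elements of Z/nZ[z]/(z^2 - t^2) are stored as pairs (a0, a1) meaning a0 + a1*z.

```python
# Toy passive-model example: the trusted computer keeps p, q and t secret, while
# the untrusted computer only needs n and t^2 (the constant coefficient of f).

import random

p, q = 1009, 1013                  # toy primes
n = p * q
t = random.randrange(2, n)         # f(z) = (z - t)(z + t) = z^2 - t^2
t2 = (t * t) % n

def add(u, v):
    return ((u[0] + v[0]) % n, (u[1] + v[1]) % n)

def mul(u, v):
    # (a0 + a1 z)(b0 + b1 z) = a0 b0 + a1 b1 z^2 + (a0 b1 + a1 b0) z, with z^2 = t^2
    a0, a1 = u
    b0, b1 = v
    return ((a0 * b0 + a1 * b1 * t2) % n, (a0 * b1 + a1 * b0) % n)

def encode(x):
    """Pick X(z) with X(t) = x (mod n): X(z) = (x - r*t) + r*z for random r."""
    r = random.randrange(n)
    return ((x - r * t) % n, r)

def decode(Y):
    """Evaluate Y at z = t; evaluation at t is a ring homomorphism to Z/nZ."""
    return (Y[0] + Y[1] * t) % n

# The untrusted computer evaluates, say, F(x1, x2) = x1^2 + 3*x1*x2 on encodings.
x1, x2 = 12345, 6789
X1, X2 = encode(x1), encode(x2)
Y = add(mul(X1, X1), mul((3 % n, 0), mul(X1, X2)))   # computed without knowing t
assert decode(Y) == (x1 * x1 + 3 * x1 * x2) % n
print("recovered output:", decode(Y))
```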
Last updated:  2011-01-14
Improved zero-sum distinguisher for full round Keccak-f permutation
Ming Duan, Xuejia Lai
Keccak is one of the five hash functions selected for the final round of the SHA-3 competition, and its inner primitive is a permutation called Keccak-f. In this paper, we find that for the inverse of the only nonlinear transformation of Keccak-f, the algebraic degrees of any output coordinate and of the product of any two output coordinates are both 3, which is 2 less than its size 5. Combining this observation with a proposition giving an upper bound on the degree of iterated permutations, we improve the zero-sum distinguisher of the full 24-round Keccak-f permutation by lowering the size of the zero-sum partition from to .
Last updated:  2011-01-14
Cryptanalysis with Ternary Difference: Applied to Block Cipher PRESENT
Farzaneh Abazari, Babak Sadeghian
The signed difference approach was first introduced by Wang for finding collisions in MD5. In this paper we introduce a ternary difference approach, represented with 3 symbols. To show its application, we combine the ternary difference approach with conventional differential cryptanalysis and apply it to the cryptanalysis of reduced-round PRESENT. We also use an ant colony technique to obtain the best differential characteristic. To illustrate the gain in the experimental results, we calculate the advantage of the attack.
Last updated:  2011-01-17
Fully Secure Anonymous Hierarchical Identity-Based Encryption with Constant Size Ciphertexts
Jae Hong Seo, Jung Hee Cheon
Efficient and privacy-preserving constructions for search functionality on encrypted data are important issues for data outsourcing and data retrieval. Fully secure anonymous Hierarchical ID-Based Encryption (HIBE) schemes are useful primitives applicable to searchable encryption [4], such as ID-based searchable encryption, temporary searchable encryption [1], and anonymous forward-secure HIBE [9]. We propose a fully secure anonymous HIBE scheme with constant size ciphertexts.
Last updated:  2012-01-30
Cover and Decomposition Index Calculus on Elliptic Curves made practical. Application to a seemingly secure curve over
Antoine Joux, Vanessa Vitse
We present a new variant of cover and decomposition attacks on the elliptic curve discrete logarithm problem, that combines Weil descent and decomposition-based index calculus into a single discrete logarithm algorithm. This variant applies, at least theoretically, to all composite degree extension fields, and is particularly well-suited for curves defined over . We give a real-size example of discrete logarithm computations on a seemingly secure curve defined over a 1306$ extension field.
Last updated:  2011-01-14
Collision Resistance of the JH Hash Function
Jooyoung Lee, Deukjo Hong
In this paper, we analyze the collision resistance of the JH hash function in the ideal primitive model. The JH hash function is one of the five SHA-3 candidates accepted for the final round of evaluation. It uses a mode of operation based on a permutation, while its security has been elusive even in the random permutation model. One can find a collision for the JH compression function with only two backward queries to the underlying primitive. However, the security is significantly enhanced in iteration. For , we prove that the JH hash function using an ideal -bit permutation and producing -bit outputs by truncation is collision resistant up to queries. This bound implies that the JH hash function provides optimal collision resistance in the random permutation model.
Last updated:  2011-04-05
Homomorphic Signatures for Polynomial Functions
Dan Boneh, David Mandell Freeman
We construct the first homomorphic signature scheme that is capable of evaluating multivariate polynomials on signed data. Given the public key and a signed data set, there is an efficient algorithm to produce a signature on the mean, standard deviation, and other statistics of the signed data. Previous systems for computing on signed data could only handle linear operations. For polynomials of constant degree, the length of a derived signature only depends logarithmically on the size of the data set. Our system uses ideal lattices in a way that is a ``signature analogue'' of Gentry's fully homomorphic encryption. Security is based on hard problems on ideal lattices similar to those in Gentry's system.
Last updated:  2011-01-19
New Impossible Differential Attacks of Reduced-Round Camellia-192 and Camellia-256
Jiazhe Chen, Keting Jia, Hongbo Yu, Xiaoyun Wang
Camellia is a block cipher selected as a standard by ISO/IEC, and it has been analyzed by a number of cryptanalysts. In this paper, we propose several 6-round impossible differential paths of Camellia with the FL/FL^{-1} layer in the middle of them. With the impossible differentials and a well-organized precomputation table, impossible differential attacks on 10-round Camellia-192 and 11-round Camellia-256 are given, with time complexities of and respectively. An impossible differential attack on 15-round Camellia-256 without FL/FL^{-1} layers and whitening is also given, which needs about encryptions. To the best of our knowledge, these are the best cryptanalytic results on Camellia-192/-256 with FL/FL^{-1} layers and on Camellia-256 without FL/FL^{-1} layers to date.
Last updated:  2011-01-08
An Anonymous Health Care System
Melissa Chase, Kristin Lauter
As medical records are converted to electronic form, risks of compromise of patients' privacy increase dramatically. The electronic format makes misuse of many patients' data much easier, so we must be extremely careful with who has access to this data. At the same time, this move to an electronic approach also gives us opportunities to improve patient privacy by leveraging recent cryptographic techniques, and in some ways to improve upon the traditional system. Here we look in particular at those parties, such as insurers and pharmacies, that are not actively involved in patient care. Currently patients who are insured are required to share the entire record of their medical treatment with their insurer in order to receive benefits, and a pharmacy may store all prescriptions filled for each patient. However, there is no medical reason for these parties to see this information --- they only need enough information to be able to prevent fraud and verify that the provided treatment should be covered under the patient's policy, or that the patient has a valid prescription for the medication being dispensed. We argue that, using recent developments in cryptography, we can allow this verification without revealing any additional information about the patient's record, thus obtaining optimal privacy guarantees.
Last updated:  2011-04-01
Exponential attacks on 6-round Luby-Rackoff and on 5-round Lai-Massey
Jean-Philippe Aumasson
The random oracle model and the ideal cipher model were proven equivalent after Coron et al. (CRYPTO 08) showed that six Feistel rounds are indifferentiable from an ideal cipher. This result, however, does not imply the inexistence of superpolynomial-time attacks outperforming generic (exponential-time) attacks. The finding of such attacks was left open by Coron et al., and is of utmost importance for evaluating the security of concrete fixed-parameter systems, as deployed in practice, for which the superpolynomial guarantee is an insufficient security argument. In addressing this issue, this paper proposes an exponential attack on six Feistel rounds, thus showing that at least seven rounds are necessary for optimal security guarantees. We then consider the Lai-Massey construction, as used in the block ciphers IDEA and FOX, for which we present an efficient attack on four rounds and an exponential attack on five. As a consequence, at least five Lai-Massey rounds are necessary to achieve indifferentiability in the general model.
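For readers who want the construction concrete, here is a generic balanced Feistel network with arbitrary keyed round functions; it is only a minimal sketch of the object being attacked (the Lai-Massey scheme used in IDEA and FOX is not shown), not a reproduction of the paper's setting.

```python
# Minimal balanced Feistel network with arbitrary keyed round functions.

import random

def feistel_encrypt(left, right, round_funcs):
    for F in round_funcs:
        left, right = right, left ^ F(right)
    return left, right

def feistel_decrypt(left, right, round_funcs):
    for F in reversed(round_funcs):
        left, right = right ^ F(left), left
    return left, right

# Six rounds with toy 32-bit round functions (illustrative only).
keys = [random.randrange(1 << 32) for _ in range(6)]
rounds = [lambda x, k=k: (x * 2654435761 ^ k) & 0xFFFFFFFF for k in keys]

pt = (0x01234567, 0x89ABCDEF)
ct = feistel_encrypt(*pt, rounds)
assert feistel_decrypt(*ct, rounds) == pt
print("ciphertext halves:", hex(ct[0]), hex(ct[1]))
```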
Last updated:  2011-01-08
Unconditionally Reliable Message Transmission in Directed Neighbour Networks
Shashank Agrawal, Abhinav Mehta, Kannan Srinathan
The problem of unconditionally reliable message transmission (URMT) is to design a protocol which when run by players in a network enables a sender S to deliver a message to a receiver R with high probability, even when some players in the network are under the control of an unbounded adversary. Renault and Tomala [JoC2008] gave a characterization of undirected neighbour networks over which URMT tolerating Byzantine adversary is possible. In this paper, we generalize their result to the case of directed networks.
Last updated:  2011-01-08
Secure Message Transmission In Asynchronous Directed Networks
Shashank Agrawal, Abhinav Mehta, Kannan Srinathan
We study the problem of information-theoretically secure message transmission (SMT) in asynchronous directed networks. In line with the literature, the distrust and failures of the network are captured via a computationally unbounded Byzantine adversary that may corrupt some subset of nodes. We give a characterization of networks over which SMT from sender S to receiver R is possible in both the well-known settings, namely perfect SMT (PSMT) and unconditional SMT (USMT). We distinguish between two variants of USMT: one in which R can output an incorrect message (with small probability) and another in which R never outputs a wrong message, but may choose to abort (with small probability). We also provide efficient protocols for an important class of networks.
Last updated:  2011-01-08
Minimizing Non-interactive Zero-Knowledge Proofs Using Fully Homomorphic Encryption
Jens Groth
A non-interactive zero-knowledge proof can be used to demonstrate the truth of a statement without revealing anything else. It has been shown under standard cryptographic assumptions that non-interactive zero-knowledge proofs of membership exist for all languages in NP. However, known non-interactive zero-knowledge proofs of membership of NP-languages yield proofs that are larger than the corresponding membership witnesses. We investigate the question of minimizing the communication overhead involved in making non-interactive zero-knowledge proofs and show that if fully homomorphic encryption exists then it is possible to minimize the size of non-interactive zero-knowledge proofs and get proofs that are of the same size as the witnesses. Our technique is applicable to many types of non-interactive zero-knowledge proofs. We apply it to both standard non-interactive zero-knowledge proofs and to universally composable non-interactive zero-knowledge proofs. The technique can also be applied outside the realm of non-interactive zero-knowledge proofs, for instance to get witness-size interactive zero-knowledge proofs in the plain model without any setup.
Last updated:  2011-01-06
After-the-Fact Leakage in Public-Key Encryption
Shai Halevi, Huijia Lin
What does it mean for an encryption scheme to be leakage-resilient? Prior formulations require that the scheme remains semantically secure even in the presence of leakage, but only considered leakage that occurs \emph{before the challenge ciphertext is generated}. Although seemingly necessary, this restriction severely limits the usefulness of the resulting notion. In this work we study after-the-fact leakage, namely leakage that the adversary obtains after seeing the challenge ciphertext. We seek a ``natural'' and realizable notion of security, which is usable in higher-level protocols and applications. To this end, we formulate \emph{entropic leakage-resilient PKE}. This notion captures the intuition that as long as the entropy of the encrypted message is higher than the amount of leakage, the message still has some (pseudo) entropy left. We show that this notion is realized by the Naor-Segev constructions (using hash proof systems). We demonstrate that entropic leakage-resilience is useful by showing a simple construction that uses it to get semantic security in the presence of after-the-fact leakage, in a model of bounded memory leakage from a split state.
Last updated:  2011-01-06
Structured Encryption and Controlled Disclosure
Melissa Chase, Seny Kamara
We consider the problem of encrypting structured data (e.g., a web graph or a social network) in such a way that it can be efficiently and privately queried. For this purpose, we introduce the notion of structured encryption which generalizes previous work on symmetric searchable encryption (SSE) to the setting of arbitrarily-structured data. In the context of cloud storage, structured encryption allows a client to encrypt data without losing the ability to query and retrieve it efficiently. Another application, which we introduce in this work, is to the problem of controlled disclosure, where a data owner wishes to grant access to only part of a massive dataset. We propose a model for structured encryption, a formal security definition and several efficient constructions. We present schemes for performing queries on two simple types of structured data, specifically lookup queries on matrix-structured data, and search queries on labeled data. We then show how these can be used to construct efficient schemes for encrypting graph data while allowing for efficient neighbor and adjacency queries. Finally, we consider data that exhibits a more complex structure such as labeled graph data (e.g., web graphs). We show how to encrypt this type of data in order to perform focused subgraph queries, which are used in several web search algorithms. Our construction is based on our labeled data and basic graph encryption schemes and provides insight into how several simpler algorithms can be combined to generate an efficient scheme for more complex queries.
Last updated:  2012-01-05
Progression-Free Sets and Sublinear Pairing-Based Non-Interactive Zero-Knowledge Arguments
Helger Lipmaa
In 2010, Groth constructed the only previously known sublinear-communication NIZK circuit satisfiability argument in the common reference string model. We optimize Groth's argument by, in particular, reducing both the CRS length and the prover's computational complexity from quadratic to quasilinear in the circuit size. We also use a (presumably) weaker security assumption, and have tighter security reductions. Our main contribution is to show that the complexity of Groth's basic arguments is dominated by the quadratic number of monomials in certain polynomials. We collapse the number of monomials to quasilinear by using a recent construction of progression-free sets.
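To make the notion of a progression-free set concrete, here is the classical construction keeping the integers whose base-3 digits are all 0 or 1; this is not the denser construction the paper relies on, and is included only as an illustration.

```python
# A minimal example of a progression-free set (no three-term arithmetic
# progression): integers whose base-3 expansion uses only the digits 0 and 1.

from itertools import combinations

def only_digits_0_and_1_in_base_3(x):
    while x:
        if x % 3 == 2:
            return False
        x //= 3
    return True

def progression_free_subset(N):
    return [x for x in range(N) if only_digits_0_and_1_in_base_3(x)]

S = progression_free_subset(200)
# No distinct a < b < c in S satisfy a + c == 2*b.
assert all(a + c != 2 * b for a, b, c in combinations(S, 3))
print(len(S), "elements below 200, e.g.", S[:8])
```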
Last updated:  2011-01-05
Computing Elliptic Curve Discrete Logarithms with the Negation Map
Ping Wang, Fangguo Zhang
It is clear that the negation map can be used to speed up the computation of elliptic curve discrete logarithms with the Pollard rho method. However, the random walks defined on equivalence classes of elliptic curve points used by Pollard rho will always get trapped in fruitless cycles. We propose an efficient alternative approach to resolve fruitless cycles. Besides the theoretical analysis, we also examine the performance of the new algorithm in experiments with elliptic curve groups. The experimental results show that we can achieve a speedup by a factor extremely close to , which is the best performance one can achieve in theory, using the new algorithm with the negation map.
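A toy illustration of the negation-map equivalence classes mentioned above: P and -P are merged into one class with a canonical representative, which halves the search space and is what yields the (at most) sqrt(2) speed-up for Pollard rho. The tiny curve below is an arbitrary choice, and the fruitless-cycle phenomenon itself is not demonstrated.

```python
# Negation-map equivalence classes on a toy curve y^2 = x^3 + 2x + 3 over F_1009.

p = 1009
A, B = 2, 3

def on_curve(P):
    x, y = P
    return (y * y - (x ** 3 + A * x + B)) % p == 0

def neg(P):
    x, y = P
    return (x, (-y) % p)

def canon(P):
    """Canonical representative of the class {P, -P}: keep the smaller y."""
    x, y = P
    return (x, min(y, (-y) % p))

points = [(x, y) for x in range(p) for y in range(p) if on_curve((x, y))]
classes = {canon(P) for P in points}

assert all(canon(P) == canon(neg(P)) for P in points)
print(len(points), "affine points collapse to", len(classes), "classes")
```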
Last updated:  2011-10-20
KISS: A Bit Too Simple
Greg Rose
KISS (`Keep it Simple Stupid') is an efficient pseudo-random number generator specified by G. Marsaglia and A. Zaman in 1993. G. Marsaglia in 1998 posted a C version to various USENET newsgroups, including \texttt{sci.crypt}. Marsaglia himself has never claimed cryptographic security for the KISS generator, but many others have made the intellectual leap and claimed that it is of cryptographic quality. In this paper we show a number of reasons why the generator does not meet the KISS authors' claims, why it is not suitable for use as a stream cipher, and that it is not cryptographically secure. Our best attack requires about 70 words of generated output and a few hours of computation to recover the initial state.
Last updated:  2011-01-05
Exploring the Limits of Common Coins Using Frontier Analysis of Protocols
Hemanta K. Maji, Pichayoot Ouppaphan, Manoj Prabhakaran, Mike Rosulek
In 2-party secure computation, access to common, trusted randomness is a fundamental primitive. It is widely employed in the setting of computationally bounded players (under various complexity assumptions) to great advantage. In this work we seek to understand the power of trusted randomness, primarily in the computationally unbounded (or information theoretic) setting. We show that a source of common randomness does not add any additional power for secure evaluation of deterministic functions, even when one of the parties has arbitrary influence over the distribution of the common randomness. Further, common randomness helps only in a trivial sense for realizing randomized functions too (namely, it only allows for sampling from publicly fixed distributions), if UC security is required. To obtain these impossibility results, we employ a recently developed protocol analysis technique, which we call the {\em frontier analysis}. This involves analyzing carefully defined ``frontiers'' in a weighted tree induced by the protocol's execution (or executions, with various inputs), and establishing various properties regarding one or more such frontiers. We demonstrate the versatility of this technique by employing carefully chosen frontiers to derive the different results. To analyze randomized functionalities we introduce a frontier argument that involves a geometric analysis of the space of probability distributions. Finally, we relate our results to computational intractability questions. We give an equivalent formulation of the ``cryptomania assumption'' (that there is a semi-honest or standalone secure oblivious transfer protocol) in terms of UC-secure reduction among randomized functionalities. Also, we provide an {\em unconditional result} on the uselessness of common randomness, even in the computationally bounded setting. Our results make significant progress towards understanding the exact power of shared randomness in cryptography. To the best of our knowledge, our results are the first to comprehensively characterize the power of large classes of randomized functionalities.
Last updated:  2012-11-23
Is privacy compatible with truthfulness?
David Xiao
In the area of privacy-preserving data mining, a differentially private mechanism intuitively encourages people to share their data because they are at little risk of revealing their own information. However, we argue that this interpretation is incomplete because external incentives are necessary for people to participate in databases, and so data release mechanisms should not only be differentially private but also compatible with incentives, otherwise the data collected may be false. We apply the notion of \emph{truthfulness} from game theory to this problem. In certain settings, it turns out that existing differentially private mechanisms do not encourage participants to report their information truthfully. On the positive side, we exhibit a transformation that takes truthful mechanisms and transforms them into differentially private mechanisms that remain truthful. Our transformation applies to games where the type space is small and the goal is to optimize an insensitive quantity such as social welfare. Our transformation incurs only a small additive loss in optimality, and it is computationally efficient. Combined with the VCG mechanism, our transformation implies that there exists a differentially private, truthful, and approximately efficient mechanism for any social welfare game with small type space. We also study a model where an explicit numerical cost is assigned to the information leaked by a mechanism. We show that in this case, even differential privacy may not be strong enough of a notion to motivate people to participate truthfully. We show that mechanisms that release a perturbed histogram of the database may reveal too much information. We also show that, in general, any mechanism that outputs a synopsis that resembles the original database (such as the mechanism of Blum et al. (STOC '08)) may reveal too much information.
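For readers unfamiliar with the term, a "perturbed histogram" release usually refers to the textbook Laplace mechanism sketched below; this generic example is not one of the specific mechanisms analyzed in the paper.

```python
# Epsilon-differentially-private histogram via the Laplace mechanism. Under the
# add/remove-one-record notion of neighboring databases, each record changes one
# bin count by at most 1, so Laplace noise of scale 1/epsilon per bin suffices.

import random
from collections import Counter

def laplace_noise(scale):
    # The difference of two independent exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_histogram(records, bins, epsilon):
    counts = Counter(records)
    return {b: counts[b] + laplace_noise(1.0 / epsilon) for b in bins}

data = ["a", "b", "a", "c", "a", "b"]
print(private_histogram(data, bins=["a", "b", "c"], epsilon=0.5))
```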
Last updated:  2011-01-05
A low-memory algorithm for finding short product representations in finite groups
Uncategorized
Gaetan Bisson, Andrew V. Sutherland
Show abstract
Uncategorized
We describe a space-efficient algorithm for solving a generalization of the subset sum problem in a finite group G, using a Pollard-rho approach. Given an element z and a sequence of elements S, our algorithm attempts to find a subsequence of S whose product in G is equal to z. For a random sequence S of length d*log2(n), where n=#G and d>=2 is a constant, we find that its expected running time is O(sqrt(n)*log(n)) group operations (we give a rigorous proof for d>4), and it only needs to store O(1) group elements. We consider applications to class groups of imaginary quadratic fields, and to finding isogenies between elliptic curves over a finite field.
Last updated:  2011-01-05
On the correct use of the negation map in the Pollard rho method
Daniel J. Bernstein, Tanja Lange, Peter Schwabe
Bos, Kaihara, Kleinjung, Lenstra, and Montgomery recently showed that ECDLPs on the 112-bit secp112r1 curve can be solved in an expected time of 65 years on a PlayStation 3. This paper shows how to solve the same ECDLPs at almost twice the speed on the same hardware. The improvement comes primarily from a new variant of Pollard's rho method that fully exploits the negation map without branching, and secondarily from improved techniques for modular arithmetic.
Last updated:  2011-01-05
A Zero-One Law for Secure Multi-Party Computation with Ternary Outputs (full version)
Gunnar Kreitz
There are protocols to privately evaluate any function in the honest-but-curious setting assuming that the honest nodes are in the majority. For some specific functions, protocols are known which remain secure even without an honest majority. The seminal work by Chor and Kushilevitz [CK91] gave a complete characterization of Boolean functions, showing that each Boolean function either requires an honest majority, or can be privately evaluated regardless of the number of colluding nodes. Discovering the threshold for secure evaluation of more general functions remains an open problem. Towards a resolution, we provide a complete characterization of the security threshold for functions with three different outputs. Surprisingly, the zero-one law for Boolean functions extends to Z_3, meaning that each function with range Z_3 either requires an honest majority or tolerates up to colluding nodes.
Last updated:  2016-03-20
Practical Frameworks For -Out-Of- Oblivious Transfer With Security Against Covert and Malicious Adversaries
Zeng Bing, Tang Xueming, Xu Peng, Jing Jiandu
We present two practical frameworks for -out-of- oblivious transfer (). The first is secure against covert adversaries, who are not always willing to cheat at any price; its security is proven under the ideal/real simulation paradigm (we call such security fully simulatable security). The second is secure against malicious adversaries, who are always willing to cheat; it provides fully simulatable security and privacy for the sender and the receiver, respectively (we call such security one-sided simulatable security). The two frameworks can be instantiated from the decisional Diffie-Hellman (DDH) assumption, the decisional -th residuosity assumption, the decisional quadratic residuosity assumption, and so on. The DDH-based instantiation of our first framework requires the minimum number of communication rounds and the minimum computational overhead, compared with existing practical protocols for oblivious transfer with fully simulatable security against covert or malicious adversaries. Our second framework is not as efficient as existing practical protocols with one-sided simulatable security against malicious adversaries; however, it is the first to provide a way to handle the general case at this security level. Moreover, its DDH-based instantiation is more efficient than existing practical protocols for oblivious transfer with fully simulatable security against malicious adversaries.
Last updated:  2012-01-17
Security Evaluation of MISTY Structure with SPN Round Function
Ruilin Li, Chao Li, Jinshu Su, Bing Sun
This paper deals with the security of the MISTY structure with an SPN round function. We study the lower bound on the number of active s-boxes for differential and linear characteristics of such a block cipher construction. Previous results show that the differential bound is consistent with that of the Feistel structure with an SPN round function, yet the situation changes when considering the linear bound. We carefully revisit this issue and prove that the same bound can in fact be obtained for linear characteristics. This result, combined with the previous one, thus demonstrates a similar practical security level for both the Feistel and MISTY structures. Besides, we also discuss the resistance of the MISTY structure with an SPN round function against other kinds of cryptanalytic approaches, including integral cryptanalysis and impossible differential cryptanalysis. We confirm the existence of 6-round integral distinguishers when the linear transformation of the round function employs a binary matrix (i.e., every element of the matrix is either 0 or 1), and briefly describe how to characterize 5/6/7-round impossible differentials through the matrix-based method.
Last updated:  2010-12-31
Identification of Multiple Invalid Pairing-based Signatures in Constrained Batches
Brian J. Matt
This paper describes a new method, for pairing-based signature schemes, for identifying the invalid digital signatures in a batch after batch verification has failed. The method identifies non-trivial numbers of invalid signatures in constrained-size batches more efficiently than previously published methods, and does not require that the verifier possess detailed knowledge of . Our method uses a ``divide-and-conquer'' search to identify the invalid signatures within a batch, pruning the search tree to reduce the number of pairing computations required. The method prunes the search tree more rapidly than previously published techniques and thereby provides performance gains for batch sizes of interest. We are motivated by wireless systems, where the verifier seeks to conserve computations, or a related resource such as energy, by using large batches. However, the batch size is constrained by how long the verifier can delay batch verification while accumulating signatures to verify. We compare the expected performance of our method (for a number of different signature schemes at varying security levels) for varying batch sizes and numbers of invalid signatures against earlier methods. We find that our new method provides the best performance for constrained batches whenever the number of invalid signatures is less than half the batch size. We include recently published methods based on techniques from the group-testing literature in our analysis. Our new method consistently outperforms these group-testing based methods, and substantially reduces the cost () when .
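The divide-and-conquer idea can be sketched generically against an abstract batch_verify predicate (a hypothetical interface); the plain binary-splitting baseline below is not the paper's optimized pruning strategy.

```python
# Generic divide-and-conquer identification of invalid signatures in a batch.

def find_invalid(batch, batch_verify):
    """Return the indices of the invalid signatures in `batch`."""
    def rec(lo, hi):
        if batch_verify(batch[lo:hi]):
            return []                  # this sub-batch verifies: prune the subtree
        if hi - lo == 1:
            return [lo]                # a single failing signature has been isolated
        mid = (lo + hi) // 2
        return rec(lo, mid) + rec(mid, hi)
    return rec(0, len(batch))

# Toy usage: "signatures" are booleans and batch verification is simply all().
batch = [True] * 16
batch[3] = batch[11] = False
print(find_invalid(batch, all))        # -> [3, 11]
```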
Last updated:  2010-12-31
Practical Affiliation-Hiding Authentication from Improved Polynomial Interpolation
Mark Manulis, Bertram Poettering
Among the plethora of privacy-friendly authentication techniques, affiliation-hiding (AH) protocols are valuable for their ability to hide not only the identities of communicating users behind their affiliations (memberships to groups), but also these affiliations from non-members. These qualities become increasingly important in our highly computerized, user-centric information society, where privacy is an elusive good. Little work on practical aspects of AH schemes, pursuing optimized implementations and deployment, has been done so far, and the main question a practitioner might ask --- whether affiliation-hiding schemes are truly practical today --- has remained largely unanswered. Improving upon recent advances in the area of AH protocols, in particular on pioneering results in the multi-affiliation setting, we can give an affirmative answer to this question. To this end, we propose numerous algorithmic optimizations to a recent AH scheme, leading to a remarkable performance gain. Our results are demonstrated not only at the theoretical level; we also offer implementations, performance measurements, and comparisons. At the same time, our improvements advance the area of efficient polynomial interpolation in finite fields, which is one of our building blocks.
Last updated:  2011-08-06
ABC - A New Framework for Block Ciphers
Uri Avraham, Eli Biham, Orr Dunkelman
We suggest a new framework for block ciphers named Advanced Block Cipher, or ABC for short. An ABC has additional non-secret parameters that ensure that each call to the underlying block cipher uses a different pseudo-random permutation. It therefore ensures that attacks requiring more than one block encrypted under the same secret permutation cannot be applied. In particular, this framework protects against dictionary attacks and against differential and linear attacks, and eliminates weaknesses of the ECB and CBC modes. The new framework shares a common structure with HAIFA, and can share the same logic with HAIFA compression functions. We analyze the security of several modes of operation for ABCs, and suggest a few instances of ABCs.
Last updated:  2010-12-31
On small secret key attack against RSA with high bits known prime factor
Yasufumi Hashimoto
It is well known that if the higher half of the bits of a prime factor is known, or the secret key is small enough, then the RSA cryptosystem is broken (e.g. [Coppersmith, J. Cryptology, 1997] and [Boneh-Durfee, Eurocrypt'99]). Recently, Sarkar-Maitra-Sarkar [Cryptology ePrint Archive, 2008/315] proposed attacks against RSA under the conditions that the higher bits of a prime factor are known and the secret key is small. In the present paper, we improve their attacks to be effective for larger secret keys.
Last updated:  2012-09-24
A Note on Constant-Round Zero-Knowledge Proofs of Knowledge
Yehuda Lindell
In this note, we show the existence of \emph{constant-round} computational zero-knowledge \emph{proofs of knowledge} for all of NP. The existence of constant-round zero-knowledge proofs was proven by Goldreich and Kahan (Journal of Cryptology, 1996), and the existence of constant-round zero-knowledge \emph{arguments} of knowledge was proven by Feige and Shamir (CRYPTO 1989). However, to the best of our knowledge, the existence of constant-round zero-knowledge proofs of knowledge for all of NP is folklore, since no proof of this fact has been published.
Last updated:  2011-01-01
On the Affine Equivalence and Nonlinearity Preserving Bijective Mappings
İsa Sertkaya, Ali Doğanaksoy
It is well known that affine equivalence relations keep nonlinearity invariant for all Boolean functions. The set of all Boolean functions in n variables is naturally regarded as a 2^n-dimensional vector space over the binary field. Thus, while analyzing the transformations acting on this space, the group of all bijective mappings from the space onto itself should be considered. As shown in \cite{ser,ser:dog,ser:dog:2}, there exist non-affine bijective transformations that preserve nonlinearity. In this paper, we first prove that the group of affine equivalence relations is isomorphic to the automorphism group of Sylvester Hadamard matrices. Then, we show that new nonlinearity preserving non-affine bijective mappings also exist. Moreover, we propose that the automorphism group of the nonlinearity classes should be studied as a subgroup of the group of all bijective mappings, since it contains transformations which are not affine equivalence relations.
Last updated:  2011-04-04
Completeness Theorems with Constructive Proofs for Finite Deterministic 2-Party Functions (full version)
Daniel Kraschewski, Jörn Müller-Quade
In this paper we present simple but comprehensive combinatorial criteria for completeness of finite deterministic 2-party functions with respect to information-theoretic security. We give a general protocol construction for efficient and statistically secure reduction of oblivious transfer to any finite deterministic 2-party function that fulfills our criteria. For the resulting protocols we prove universal composability. Our results are tight in the sense that our criteria are still necessary for any finite deterministic 2-party function to allow for implementation of oblivious transfer with statistical privacy and correctness. We unify and generalize results of Joe Kilian (1991, 2000) in two ways. Firstly, we show that his completeness criteria also hold in the UC framework. Secondly, and this is our main contribution, our criteria also cover a wide class of primitives that are not subject to previous criteria. We show that there are non-trivial examples of finite deterministic 2-party functions that are neither symmetric nor asymmetric and therefore have not been covered by existing completeness criteria so far. As a corollary of our work, every finite deterministic 2-party function is either complete or can be considered equivalent to a non-complete symmetric 2-party function---this assertion holds true with respect to active adversaries as well as passive adversaries. Thereby, known results on non-complete symmetric 2-party functions are strengthened.
Last updated:  2010-12-21
Cubic groups
M. A. Popov
It is shown that the existence of cubic groups may suggest the existence of a one-way function.
Last updated:  2012-11-29
Active Domain Expansion for Normal Narrow-pipe Hash Functions
Xigen Yao
Several recent reports in the Cryptology ePrint Archive observed that, for a normal iterative hash function, the entropy and the codomain shrink greatly across iterations, and drew the following conclusions: narrow-pipe hash functions cannot resist this reduction (whereas wide-pipe hash functions can), and generic collision attacks on narrow-pipe hash functions would be faster than the birthday paradox suggests. These observations and conclusions rely on cases of active-domain reduction, in which part of the codomain becomes empty (unreachable) with an approximative probability in an iteration. We show how to thwart these conclusions by Active Domain Expansion, which keeps or recovers the entropy and can be realized by amending any normal narrow-pipe hash function; some hash modes, such as the LAB mode, can realize it even more simply. In this paper we introduce Active Domain Expansion, which consists of a surjection round and a sum block. The key point is to define a sum block, the sum of the foregoing i ``Encoded Blocks'', which replaces the normal message block as the input to the compression function. Since the surjection round serves the same purpose and its form is part of Active Domain Expansion, it is treated only briefly in this paper. In addition, the last block of additional bits can be redefined. With these changes, a normal narrow-pipe hash function can resist the reduction completely.
Last updated:  2010-12-21
On the Impossibility of Instantiating PSS in the Standard Model
Rishiraj Bhattacharyya, Avradip Mandal
In this paper we consider the problem of securely instantiating the Probabilistic Signature Scheme (PSS) in the standard model. PSS, proposed by Bellare and Rogaway \cite{BellareR96}, is a widely deployed randomized signature scheme, provably secure (\emph{unforgeable under adaptively chosen message attacks}) in the Random Oracle Model. Our main result is a black-box impossibility result showing that one cannot prove unforgeability of PSS against chosen message attacks using black-box techniques, even assuming the existence of \emph{ideal trapdoor permutations} (a strong abstraction of trapdoor permutations which inherits all security properties of a random permutation, introduced by Kiltz and Pietrzak at Eurocrypt 2009) or \emph{lossy trapdoor permutations} \cite{PeikertW08}. Moreover, we show that \emph{one-wayness}, the most common security property of a trapdoor permutation, does not suffice to prove even the weakest security criterion, namely \emph{unforgeability under zero message attack}. Our negative results can easily be extended to any randomized signature scheme where one can recover the random string from a valid signature.
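For readers who want to see PSS in action before diving into the impossibility result, the following is a minimal signing-and-verification sketch using the pyca/cryptography Python library; the library choice, key size and hash function are our own illustrative assumptions and have no bearing on the paper's negative result.

    # Hedged sketch: RSA-PSS sign/verify with pyca/cryptography (assumed installed).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"message to sign"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)   # randomized padding

    signature = private_key.sign(message, pss, hashes.SHA256())

    # Verification raises InvalidSignature on failure, returns None on success.
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("PSS signature verified")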
Last updated:  2010-12-21
Cryptanalysis of the RSA Subgroup Assumption from TCC 2005
Jean-Sebastien Coron, Antoine Joux, Avradip Mandal, David Naccache, Mehdi Tibouchi
At TCC 2005, Groth underlined the usefulness of working in small RSA subgroups of hidden order. In assessing the security of the relevant hard problems, however, the best attack considered for a subgroup of size 2^{2k} had a complexity of O(2^k). Accordingly, k=100 bits was suggested as a concrete parameter. This paper exhibits an attack with a complexity of roughly 2^{k/2} operations, suggesting that Groth's original choice of parameters was overly aggressive. It also discusses the practicality of this new attack and various implementation issues.
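For intuition about the baseline, the toy computation below runs textbook baby-step giant-step in a small subgroup, illustrating why a generic attack on a subgroup of order about 2^{2k} costs roughly 2^k group operations; it is only an illustration with toy parameters, not the paper's 2^{k/2} attack, which exploits additional structure.

    # Toy baby-step giant-step discrete log in a small subgroup (illustration only).
    from math import isqrt

    def bsgs(g, h, order, modulus):
        """Find x with g^x = h (mod modulus), 0 <= x < order, or None."""
        m = isqrt(order) + 1
        table = {}
        e = 1
        for j in range(m):          # baby steps: store g^j
            table[e] = j
            e = (e * g) % modulus
        factor = pow(g, -m, modulus)   # g^{-m} via modular inverse (Python >= 3.8)
        gamma = h
        for i in range(m):          # giant steps: h * g^{-im}
            if gamma in table:
                return i * m + table[gamma]
            gamma = (gamma * factor) % modulus
        return None

    # Subgroup of order 101 inside (Z/607Z)*, since 607 - 1 = 2 * 3 * 101.
    p, q = 607, 101
    g = pow(3, (p - 1) // q, p)     # element of order 101
    x = 57
    h = pow(g, x, p)
    assert bsgs(g, h, q, p) == x    # ~sqrt(q) group operations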
Last updated:  2013-02-20
Stronger difficulty notions for client puzzles and denial-of-service-resistant protocols
Douglas Stebila, Lakshmi Kuppusamy, Jothi Rangasamy, Colin Boyd, Juan Gonzalez Nieto
Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles where this is the case, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than required to solve a single puzzle. A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner. We describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
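As a concrete, generic example of the kind of moderately hard puzzle being discussed, the sketch below implements a Hashcash-style hash-preimage puzzle; it is an illustrative stand-in under our own parameter choices, not one of the constructions or difficulty definitions analysed in the paper.

    # Hedged sketch of a hash-based client puzzle: find a nonce so that
    # SHA-256(challenge || nonce) has at least k leading zero bits.
    import hashlib, os, itertools

    def make_puzzle(difficulty_bits):
        return os.urandom(16), difficulty_bits          # (challenge, k)

    def leading_zero_bits(digest):
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            return bits + (8 - byte.bit_length())
        return bits

    def solve(puzzle):
        challenge, k = puzzle
        for nonce in itertools.count():                 # expected ~2^k hash calls
            d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(d) >= k:
                return nonce

    def verify(puzzle, nonce):
        challenge, k = puzzle
        d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return leading_zero_bits(d) >= k                # one hash call to check

    puzzle = make_puzzle(16)                            # ~65k hashes on average
    nonce = solve(puzzle)
    assert verify(puzzle, nonce)
    print("solved with nonce", nonce)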
Last updated:  2011-09-15
Uniqueness is a Different Story: Impossibility of Verifiable Random Functions from Trapdoor Permutations
Dario Fiore, Dominique Schröder
Verifiable random functions (VRFs), first proposed by Micali, Rabin, and Vadhan (FOCS 99), are pseudorandom functions with the additional property that the owner of the seed can issue publicly-verifiable proofs of correct evaluation for any input. Moreover, the output of a VRF is guaranteed to be unique, meaning that for every input there is only one output value that can be proven correct. Due to their properties, VRFs are a fascinating primitive that has found several theoretical and practical applications. However, despite their popularity, constructing VRFs seems to be a challenging task. Indeed, only a few constructions based on specific number-theoretic problems are known, and basing a scheme on a general assumption is still an open problem. Towards this direction, Brakerski, Goldwasser, Rothblum, and Vaikuntanathan (TCC 2009) recently showed that verifiable random functions cannot be constructed from one-way permutations in a black-box way. In this paper we put forward the study of the relationship between VRFs and well-established cryptographic primitives. As our main result, we prove that VRFs cannot be based on the existence of trapdoor permutations (TDPs) in a black-box manner. Our result sheds light on the nature of VRFs and can be considered interesting for at least two reasons: - First, given the separation result of Brakerski \emph{et al.}, one may think that VRFs naturally belong to the ``public-key world'', and thus it is interesting to figure out their relationship with other public-key primitives. In this sense, our result shows that VRFs are much stronger, because we imply separations of VRFs from most of the primitives in this world: basically everything that is implied by TDPs is strictly weaker. These primitives include, e.g., public-key encryption (even CCA secure), oblivious transfer, and key agreement. - Second, the notion of VRFs is closely related to two other primitives: weak verifiable random functions (weak-VRFs) and verifiable pseudorandom generators (VPRGs). Weak-VRFs, defined by Brakerski \emph{et al.}, are a relaxation of VRFs in which pseudorandomness holds only with respect to random inputs. VPRGs, introduced by Dwork and Naor (FOCS 2000), are pseudorandom generators that allow the owner of the seed to prove the correctness of subsets of the generated bits. It is well known that ``regular'' PRFs can be constructed from pseudorandom generators and from weak-PRFs in a black-box way. It was thus an open problem (already recognized by Dwork and Naor in their paper) whether similar approaches could be applied to the ``verifiable'' analogues of these primitives. Since weak-VRFs and VPRGs are implied by TDPs, we give a negative answer to this problem, showing that the case of verifiable random functions is essentially different.
Last updated:  2011-08-28
Improved Nguyen-Vidick Heuristic Sieve Algorithm for Shortest Vector Problem
Xiaoyun Wang, Mingjie Liu, Chengliang Tian, Jingguo Bi
In this paper, we present an improvement of the Nguyen-Vidick heuristic sieve algorithm for the shortest vector problem in general lattices, whose time complexity is 2^{0.3836n} polynomial-time computations and whose space complexity is 2^{0.2557n}. In the new algorithm, we introduce a new two-level sieve technique instead of the previous one-level sieve, and complete the complexity estimation by calculating the irregular spherical cap covering.
Last updated:  2010-12-21
Statistical Analysis of Second Order Differential Power Analysis
Emmanuel Prouff, Matthieu Rivain, Régis Bévan
Second Order Differential Power Analysis (2O-DPA) is a powerful side channel attack that allows an attacker to bypass the widely used masking countermeasure. To thwart 2O-DPA, higher order masking may be employed, but it implies a non-negligible overhead. In this context, there is a need to know how efficient a 2O-DPA can be, in order to evaluate the resistance of an implementation that uses first order masking and, possibly, some hardware countermeasures. Different methods of mounting a practical 2O-DPA attack have been proposed in the literature. However, it is not yet clear which of these methods is the most efficient. In this paper, we give a formal description of the higher order DPA attacks that are mounted against software implementations. We then introduce a framework in which the attack efficiencies may be compared. The attacks we focus on involve the combining of several leakage signals and the computation of correlation coefficients to discriminate the wrong key hypotheses. In the second part of this paper, we pay particular attention to 2O-DPA that involves the product combining or the absolute difference combining. We study them under the assumption that the device leaks the Hamming weight of the processed data together with independent Gaussian noise. After showing a way to improve the product combining, we argue that in this model the product combining is more efficient not only than the absolute difference combining, but also than all the other combining techniques proposed in the literature.
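The following small simulation sketches the two combining functions named in the abstract under the Hamming-weight-plus-Gaussian-noise model; the trace count, noise level and the decision to correlate directly against the Hamming weight of the sensitive value are our own illustrative assumptions, not the paper's exact evaluation framework.

    # Hedged simulation: product vs. absolute-difference combining for 2O-DPA.
    import numpy as np

    rng = np.random.default_rng(1)
    N, sigma = 100_000, 0.5
    hw = np.array([bin(i).count("1") for i in range(256)])   # Hamming weight table

    z = rng.integers(0, 256, N)                  # sensitive value (known in simulation)
    m = rng.integers(0, 256, N)                  # random mask
    L1 = hw[m] + rng.normal(0, sigma, N)         # leakage of the mask share
    L2 = hw[z ^ m] + rng.normal(0, sigma, N)     # leakage of the masked share

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    prod = (L1 - L1.mean()) * (L2 - L2.mean())   # centered product combining
    absd = np.abs(L1 - L2)                       # absolute difference combining

    print("first-order leak :", corr(L1,   hw[z]))  # ~0: first-order masking holds
    print("product combining:", corr(prod, hw[z]))  # clearly nonzero (negative here)
    print("abs-diff combining:", corr(absd, hw[z])) # also nonzero; the paper compares the two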
Last updated:  2010-12-22
A Timed Logic for Modeling and Reasoning about Security Protocols
Xinfeng Lei, Rui Xue, Ting Yu
Many logical methods are usually considered suitable for expressing the static properties of security protocols but unsuitable for modeling dynamic processes or properties. However, a security protocol itself is in fact a dynamic process over time, and sometimes it is important to be able to express time-dependent security properties of protocols. In this paper, we present a new timed logic based on predicate modal logic, in which time is explicitly expressed in parameters of predicates or modal operators. This makes it possible to model an agent's actions, knowledge and beliefs at different and exact time points, which enables us to model both protocols and their properties, especially time-dependent properties. We formalize the semantics of the presented logic and prove its soundness. We also present a modeling scheme for formalizing protocols and the security properties of authentication and secrecy under the logic. The scheme provides a flexible and succinct framework to reason about security protocols, and essentially enhances the power of logical methods for protocol analysis. As a case study, we then analyze a timed-release protocol using this framework, and discover a new vulnerability that did not appear previously in the literature. We provide a further example to show additional advantages of the modeling scheme in the new logic.
Last updated:  2010-12-21
A Practical Platform for Cube-Attack-like Cryptanalyses
Bo Zhu, Wenye Yu, Tao Wang
Recently, various cryptanalysis methods related to the Cube Attack have attracted a lot of interest. We designed a practical platform to perform such cryptanalytic attacks. We also developed a web-based application at \url{http://cube-attack.appspot.com/}, which is open to the public for simple testing and verification. In this paper, we focus on linearity testing and try to verify the data provided in several papers. Some interesting results produced in our work indicate that certain improper assumptions were made in these papers.
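For intuition, linearity testing of a cube's superpoly is typically a BLR-style probabilistic test over the secret variables; the sketch below runs such a test on a toy Boolean function of our own choosing (it is not the platform's code, and the toy function is purely illustrative).

    # Hedged sketch: cube summation plus a BLR linearity (affineness) test.
    import random

    def cube_sum(f, cube_vars, key, n_pub):
        """XOR f(key, pub) over all assignments of the cube variables."""
        s = 0
        for mask in range(1 << len(cube_vars)):
            pub = [0] * n_pub
            for j, v in enumerate(cube_vars):
                pub[v] = (mask >> j) & 1
            s ^= f(key, pub)
        return s

    def blr_affine(superpoly, n_key, trials=64):
        """BLR test: p(x) ^ p(y) ^ p(x^y) ^ p(0) == 0 for random x, y."""
        zero = [0] * n_key
        for _ in range(trials):
            x = [random.randint(0, 1) for _ in range(n_key)]
            y = [random.randint(0, 1) for _ in range(n_key)]
            xy = [a ^ b for a, b in zip(x, y)]
            if superpoly(x) ^ superpoly(y) ^ superpoly(xy) ^ superpoly(zero):
                return False
        return True

    def toy_f(key, pub):                        # toy "cipher output bit"
        k0, k1, k2 = key
        v0, v1, v2 = pub
        return (v0 & v1 & k0) ^ (v0 & v1 & v2 & k1 & k2) ^ (v2 & k1)

    n_key, n_pub = 3, 3
    cube = [0, 1]                               # sum over v0, v1 (v2 fixed to 0)
    sp = lambda key: cube_sum(toy_f, cube, key, n_pub)
    print("superpoly over cube {v0,v1} affine?", blr_affine(sp, n_key))  # True (it equals k0)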
Last updated:  2010-12-25
Construct MD5 Collisions Using Just A Single Block Of Message
Tao Xie, Dengguo Feng
So far, all the differential attacks on MD5 were constructed through the multi-block collision method. Can collisions for MD5 be found using just a single block of message (i.e. 512 bits)? This has been an open problem since the first 2-block collision attack was given. Today, in the last month (Dec.) of 2010, we make public a result of our 1-block collision attacks on MD5 in Table 1 below, which was actually obtained at the beginning of 2010; for security reasons, the techniques are not allowed to be disclosed at the moment. Here, we are calling for a challenge to the cryptology community: anyone who first gives a new, different 1-block collision attack on MD5 will win 10,000 US dollars (about 50,000 RMB in Chinese Yuan) as a reward for his or her excellent work. This call for challenge will end on Jan 1st, 2013. This announcement's first affiliated unit will be responsible for this reward when a new, different 1-block collision attack is received and verified.
Last updated:  2010-12-21
More Insights on Blockcipher-Based Hash Functions
Yiyuan Luo, Xuejia Lai
In this paper we give more insights on the security of blockcipher-based hash functions. We give a very simple criterion for building a secure large class of Single-Block-Length (SBL) or double-call Double-Block-Length (DBL) compression functions based on blockciphers, stated in terms of the key length and the block length of the underlying blockcipher. This criterion is simpler than previous works in the literature. From this criterion we obtain many results and reach a conclusion on this class of blockcipher-based hash functions. We solve the open problem left by Hirose. Our results show that building a secure double-call DBL compression function requires a condition involving the number of message blocks; thus, we can only build rate-1/2 secure double-call DBL blockcipher-based compression functions when this condition holds. Finally, we point out flaws in Stam's theorem about supercharged functions, give a revision of this theorem, and add another condition for the security of supercharged compression functions.
Last updated:  2011-08-30
A new algorithm for computing Groebner bases
Shuhong Gao, Frank Volny IV, Mingsheng Wang
Buchberger's algorithm for computing Groebner bases was introduced in 1965, and subsequently there have been extensive efforts in improving its efficiency. Major algorithms include F4 (Faugère 1999), XL (Courtois et al. 2000) and F5 (Faugère 2002). F5 is believed to be the fastest algorithm known in the literature. Most recently, Gao, Guan and Volny (2010) introduced an incremental algorithm (G2V) that is simpler and several times faster than F5. In this paper, a new algorithm is presented that can avoid the incremental nature of F5 and G2V. It matches Buchberger's algorithm in simplicity and yet is more flexible. More precisely, given a list of polynomials, the new algorithm computes simultaneously a Groebner basis for the ideal generated by the polynomials and a Groebner basis for the leading terms of the syzygy module of the given list of polynomials. For any term order for the ideal, one may vary signature orders (i.e. the term orders for the syzygy module). Under one signature order, the new algorithm specializes to the G2V, and under another signature order, the new algorithm is several times faster than G2V, as indicated by computer experiments on benchmark examples.
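Readers who want a baseline to experiment against can compute Groebner bases of small systems with SymPy's built-in routine (a Buchberger-style implementation, unrelated to the signature-based algorithm of the paper); the polynomial system below is an arbitrary toy example.

    # Hedged usage example: a Groebner basis of a toy ideal via SymPy.
    from sympy import groebner, symbols

    x, y, z = symbols("x y z")
    F = [x**2 + y**2 + z**2 - 1, x - y, y - z]          # toy polynomial system
    G = groebner(F, x, y, z, order="grevlex")           # graded reverse lex order
    print(G)                                            # basis of the ideal <F>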
Last updated:  2013-01-21
Short collusion-secure fingerprint codes against three pirates
Koji Nuida
In this article, we propose a new construction of probabilistic collusion-secure fingerprint codes against up to three pirates and give a theoretical security evaluation. Our pirate tracing algorithm combines a scoring method analogous to Tardos codes (J. ACM, 2008) with an extension of parent search techniques of some preceding 2-secure codes. Numerical examples show that our code lengths are significantly shorter than (about 30% to 40% of) the shortest known c-secure codes by Nuida et al. (Des. Codes Cryptogr., 2009) with c = 3. Some preliminary proposal for improving efficiency of our tracing algorithm is also given.
Last updated:  2011-12-10
Enumerating Results of Homogeneous Rotation over
Guang-Pu Go, Xi-Yong Zhang, Wen-Fen Liu
In this paper, we consider the open problem of counting homogeneous rotation symmetric Boolean functions. By using the Inclusion--Exclusion Principle, we obtain a formula that exactly enumerates this class of functions. As a consequence, the known formula of \cite[Theorem 9]{Maitra} in the Boolean case is simplified.
Last updated:  2010-12-22
One-Pass HMQV and Asymmetric Key-Wrapping
Shai Halevi, Hugo Krawczyk
Consider the task of asymmetric key-wrapping, where a key-management server encrypts a cryptographic key under the public key of a client. When used in storage and access-control systems, it is often the case that the server has no knowledge about the client (beyond its public key) and no means of coordinating with it. For example, a wrapped key used to encrypt a backup tape may be needed many years after wrapping, when the server is no longer available, key-wrapping standards have changed, and even the security requirements of the client might have changed. Hence we need a flexible mechanism that seamlessly supports different options depending on what the original server was using and on current standards and requirements. We show that one-pass HMQV (which we call HOMQV) is a perfect fit for this type of application in terms of security, efficiency and flexibility. It offers server authentication if the server has its own public key, and degenerates down to the standardized DHIES encryption scheme if the server does not have a public key. The performance difference between the unauthenticated DHIES and the authenticated HOMQV is minimal (essentially free for the server and only half an exponentiation for the client). We provide a formal analysis of the protocol's security showing many desirable properties such as sender's forward secrecy and resilience to compromise of ephemeral data. When adding a DEM part (as needed for key-wrapping) it yields a secure signcryption scheme (equivalently, a UC-secure messaging protocol). The combination of security, flexibility, and efficiency makes HOMQV a very desirable protocol for asymmetric key wrapping, one that we believe should be incorporated into implementations and standards.
Last updated:  2011-08-30
Breaking An Identity-Based Encryption Scheme based on DHIES
Martin R. Albrecht, Kenneth G. Paterson
We present collusion attacks against a recently proposed IBE scheme of Chen et al. from ASIACCS 2010. The attacks recover the master secret key of the scheme and thereby invalidate the existing security analysis of this scheme. The attacks are flexible, allowing, for example, the amount of computation needed to be traded-off against the size of the collusion.
Last updated:  2010-12-15
Differential Fault Analysis of AES using a Single Multiple-Byte Fault
Subidh Ali, Debdeep Mukhopadhyay, Michael Tunstall
In this paper we present an improved fault attack on the Advanced Encryption Standard (AES). This paper presents an improvement on a recently published differential fault analysis of AES that requires one fault to recover the secret key being used. This attack requires that one byte entering into the eighth round is corrupted. We show that the attack is possible where more than one byte has been affected. Experimental results are described where a fault is injected using a glitch in the clock, demonstrating that this attack is practical.
Last updated:  2010-12-14
An Efficient and Information Theoretically Secure Rational Secret Sharing Scheme based on Symmetric Bivariate Polynomials
Zhang Yun, Christophe Tartary
The design of rational cryptographic protocols is a recently created research area at the intersection of cryptography and game theory. In this paper, we propose a new t-out-of-n rational secret sharing scheme requiring neither the involvement of the dealer (except during the initial share distribution) nor a trusted mediator. Our protocol leads to a Nash equilibrium surviving the iterated deletion of weakly dominated strategies. Our construction is information-theoretically secure and is immune against backward induction attacks. Contrary to Kol and Naor, who used a specific cryptographic primitive in their TCC'08 paper (namely, meaningful/meaningless encryption), the immunity of our scheme is based on the use of bivariate polynomials and one-time pads. To the best of our knowledge, it is the first time that such polynomials have been used for rational secret sharing. Our scheme is efficient and does not require any physical assumptions such as envelopes or ballot boxes. As with most existing rational protocols, our construction requires simultaneous broadcast channels. However, our proposed scheme does not require any computational assumption and it provides information-theoretical security.
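For context, here is a minimal sketch of secret sharing with a symmetric bivariate polynomial over a prime field, the basic building block mentioned in the abstract; the field, threshold and interpolation-based reconstruction are standard textbook choices of ours and omit all of the paper's game-theoretic machinery (one-time pads, simultaneous broadcast, punishment strategies).

    # Hedged sketch: symmetric bivariate polynomial sharing and reconstruction.
    import random

    P = 2**61 - 1                 # prime modulus (illustrative)
    T = 2                         # degree: any T+1 parties can reconstruct

    def share(secret, n_parties, rng):
        # Symmetric coefficient matrix a[i][j] = a[j][i], with a[0][0] = secret.
        a = [[0] * (T + 1) for _ in range(T + 1)]
        for i in range(T + 1):
            for j in range(i, T + 1):
                a[i][j] = a[j][i] = rng.randrange(P)
        a[0][0] = secret % P

        def F(x, y):
            return sum(a[i][j] * pow(x, i, P) * pow(y, j, P)
                       for i in range(T + 1) for j in range(T + 1)) % P

        # Party k holds f_k(y) = F(k, y); pairwise consistency: f_k(l) = f_l(k).
        return {k: (lambda y, k=k: F(k, y)) for k in range(1, n_parties + 1)}

    def reconstruct(shares):      # shares: {k: f_k} for at least T+1 parties
        pts = [(k, f(0)) for k, f in shares.items()]   # f_k(0) = F(k, 0)
        secret = 0
        for i, (xi, yi) in enumerate(pts):             # Lagrange interpolation at 0
            num = den = 1
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    rng = random.Random(7)
    fs = share(123456789, 5, rng)
    subset = {k: fs[k] for k in (1, 3, 5)}             # any T+1 = 3 parties suffice
    assert reconstruct(subset) == 123456789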
Last updated:  2011-06-09
ROTIV: RFID Ownership Transfer with Issuer Verification
Kaoutar Elkhiyaoui, Erik-Oliver Blass, Refik Molva
RFID tags travel between partner sites in a supply chain. For privacy reasons, each partner “owns” the tags present at his site, i.e., the owner is the only entity able to authenticate his tags. However, when passing tags on to the next partner in the supply chain, ownership of the old partner is “transferred” to the new partner. In this paper, we propose ROTIV, a protocol that allows for secure ownership transfer against some malicious owners. Furthermore, ROTIV offers issuer verification to prevent malicious partners from injecting fake tags not originally issued by some trusted party. As part of ownership, ROTIV provides a constant-time, privacy-preserving authentication. ROTIV’s main idea is to combine an HMAC-based authentication with tag key and state updates during ownership transfer. To assure privacy, ROTIV implements tag state re-encryption techniques and key update techniques, performed on the reader. ROTIV is designed for lightweight tags – tags are only required to evaluate a hash function.
Last updated:  2011-02-23
Low Data Complexity Attacks on AES
Charles Bouillaguet, Patrick Derbez, Orr Dunkelman, Nathan Keller, Vincent Rijmen, Pierre-Alain Fouque
The majority of current attacks on reduced-round variants of block ciphers seeks to maximize the number of rounds that can be broken, using less data than the entire codebook and less time than exhaustive key search. In this paper, we pursue a different approach, restricting the data available to the adversary to a few plaintext/ciphertext pairs. We show that consideration of such attacks (which received little attention in recent years) serves an important role in assessing the security of block ciphers and of other cryptographic primitives based on block ciphers. In particular, we show that these attacks can be leveraged to more complex attacks, either on the block cipher itself or on other primitives (e.g., stream ciphers, MACs, or hash functions) that use a small number of rounds of the block cipher as one of their components. As a case study, we consider the AES --- the most widely used block cipher, whose round function is used in various cryptographic primitives. We present attacks on up to four rounds of AES that require at most 10 known/chosen plaintexts. We then apply these attacks to cryptanalyze a variant of the stream cipher LEX, and to mount a new known plaintext attack on 6-round AES.
Last updated:  2011-02-27
Efficient and provably-secure certificateless signature scheme without bilinear pairings
He Debiao, Chen Jianhua, Zhang Rui
Many certificateless signature schemes using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately twenty times higher than that of a scalar multiplication over an elliptic curve group. In order to improve the performance, we propose a certificateless signature scheme without bilinear pairings. With the running time greatly reduced, our scheme is more practical than the previous related schemes for practical applications.
Last updated:  2010-12-13
Black-box property of Cryptographic Hash Functions
Michal Rjaško
We define a new black-box property for cryptographic hash function families which guarantees that, for a randomly chosen hash function from the family, everything ``non-trivial'' we are able to compute with access to the key can also be computed with only oracle access to the hash function. If a hash function family is pseudo-random and has the black-box property, then a randomly chosen hash function from the family is resistant to all non-trivial types of attack. We also show that the HMAC domain extension transform is Prf-BB preserving, i.e. if a compression function is pseudo-random and has the black-box property (Prf-BB for short), then the resulting hash function is Prf-BB. On the other hand, we show that the Merkle-Damg\aa rd construction is not Prf-BB preserving. Finally, we show that every pseudo-random oracle preserving domain extension transform is Prf-BB preserving and vice versa. Hence, Prf-BB seems to be an all-in-one property for cryptographic hash function families, which guarantees their ``total'' security.
Last updated:  2010-12-13
Division Polynomials for Alternate Models of Elliptic Curves
Dustin Moody
In this paper we find division polynomials for Huff curves, Jacobi quartics, and Jacobi intersections. These curves are alternative models of elliptic curves to the more common Weierstrass form. Division polynomials for Weierstrass curves are well known, and the division polynomials we find are analogues for these alternate models. Using the division polynomials, we give recursive formulas for the n-th multiple of a point on each curve. As an application, we prove a type of mean-value theorem for Huff curves, Jacobi quartics and Jacobi intersections.
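For reference, the classical division polynomials in the Weierstrass case y^2 = x^3 + ax + b, to which the paper's formulas are analogues, are given by the standard textbook recursion (these are not the paper's new results):

    \[
    \begin{aligned}
    \psi_0 &= 0, \qquad \psi_1 = 1, \qquad \psi_2 = 2y,\\
    \psi_3 &= 3x^4 + 6ax^2 + 12bx - a^2,\\
    \psi_4 &= 4y\left(x^6 + 5ax^4 + 20bx^3 - 5a^2x^2 - 4abx - 8b^2 - a^3\right),\\
    \psi_{2m+1} &= \psi_{m+2}\,\psi_m^3 - \psi_{m-1}\,\psi_{m+1}^3 \qquad (m \ge 2),\\
    \psi_{2m} &= \frac{\psi_m}{2y}\left(\psi_{m+2}\,\psi_{m-1}^2 - \psi_{m-2}\,\psi_{m+1}^2\right) \qquad (m \ge 3),
    \end{aligned}
    \]
    and the multiple of a point is recovered as
    \[
    [n]P = \left(x - \frac{\psi_{n-1}\psi_{n+1}}{\psi_n^2},\; \frac{\psi_{2n}}{2\psi_n^4}\right).
    \]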
Last updated:  2010-12-13
On the Security of Hash Functions Employing Blockcipher Postprocessing
Donghoon Chang, Mridul Nandi, Moti Yung
Analyzing desired generic properties of hash functions is an important current area in cryptography. For example, at Eurocrypt 2009, Dodis, Ristenpart and Shrimpton [7] introduced the elegant notion of ``Preimage Awareness'' (PrA) of a hash function H^P, and they showed that a PrA hash function followed by an output transformation modeled as a FIL (fixed input length) random oracle is a PRO (pseudorandom oracle), i.e. indifferentiable from a VIL (variable input length) random oracle. We observe that in recent practice of designing hash functions (e.g. the SHA-3 candidates) most output transformations are based on permutation(s) or blockcipher(s), which are not PRO. Thus, a natural question is how the notion of PrA can be employed directly with these more prevalent types of output transformations. We consider the Davies-Meyer type output transformation OT(x) := E(x) xor x, where E is an ideal permutation. We prove that OT(H^P(.)) is PRO if H^P is PrA, preimage resistant and computable message aware (a related but not redundant notion, needed in the analysis, that we introduce in the paper). A similar result is also obtained for 12 PGV output transformations. We also observe that some popular double block length output transformations cannot be employed as output transformations.
Last updated:  2010-12-13
State convergence and keyspace reduction of the Mixer stream cipher
Sui-Guan Teo, Kenneth Koon-Ho Wong, Leonie Simpson, Ed Dawson
This paper presents an analysis of the stream cipher Mixer, a bit-based cipher with structural components similar to the well-known Grain cipher and the LILI family of keystream generators. Mixer uses a 128-bit key and 64-bit IV to initialise a 217-bit internal state. The analysis is focused on the initialisation function of Mixer and shows that there exist multiple key-IV pairs which, after initialisation, produce the same initial state, and consequently will generate the same keystream. Furthermore, if the number of iterations of the state update function performed during initialisation is increased, then the number of distinct initial states that can be obtained decreases. It is also shown that there exist some distinct initial states which produce the same keystream, resulting in a further reduction of the effective key space.
Last updated:  2011-09-17
Secure and Efficient Protocols for Iris and Fingerprint Identification
Marina Blanton, Paolo Gasti
Recent advances in biometric recognition and the increasing use of biometric data prompt significant privacy challenges associated with the possible misuse, loss or theft, of biometric data. Biometric matching is often performed by two mutually suspicious parties, one of which holds one biometric image while the other owns a possibly large biometric collection. Due to privacy and liability considerations, neither party is willing to share its data. This gives rise to the need to develop secure computation techniques over biometric data where no information is revealed to the parties except the outcome of the comparison or search. To address the problem, in this work we develop and implement the first privacy-preserving identification protocol for iris codes. We also design and implement a secure protocol for fingerprint identification based on FingerCodes with a substantial improvement in the performance compared to existing solutions. We show that new techniques and optimizations employed in this work allow us to achieve particularly efficient protocols suitable for large data sets and obtain notable performance gain compared to the state-of-the-art prior work.
Last updated:  2011-08-22
Public-Key Encryption with Fuzzy Keyword Search: A Provably Secure Scheme under Keyword Guessing Attack
Peng Xu, Hai Jin
A lot of interest has been drawn recently to public-key encryption with keyword search (PEKS), which keeps public-key encrypted documents amenable to secure keyword search. However, PEKS resists keyword guessing attacks only under the assumption that the size of the keyword space is beyond the polynomial level; this assumption is ineffective in practice, and without it PEKS is insecure under keyword guessing attacks. As we observe, the key to defending against such attacks is to avoid making the exact search trapdoor available to adversaries. Accordingly, we compromise the exactness of the search trapdoor by mapping at least two different keywords to one fuzzy search trapdoor. We propose a novel concept called public-key encryption with fuzzy keyword search (PEFKS), by which the untrusted server only obtains the fuzzy search trapdoor instead of the exact search trapdoor, and define its semantic security under chosen keyword attack (SS-CKA) and indistinguishability of keywords under non-adaptively chosen keywords and keyword guessing attack (IK-NCK-KGA). For keyword spaces with and without uniform distribution, we respectively present two universal transformations from anonymous identity-based encryption to PEFKS, and prove their SS-CKA and IK-NCK-KGA security. To our knowledge, PEFKS is the first scheme to resist keyword guessing attacks when the keyword space is not more than polynomial in size.
Last updated:  2013-09-25
Attacking and fixing Helios: An analysis of ballot secrecy
Veronique Cortier, Ben Smyth
Helios 2.0 is an open-source, web-based, end-to-end verifiable electronic voting system, suitable for use in low-coercion environments. In this article, we analyse ballot secrecy in Helios and discover a vulnerability which allows an adversary to compromise the privacy of voters. The vulnerability exploits the absence of ballot independence in Helios and works by replaying a voter's ballot or a variant of it; the replayed ballot magnifies the voter's contribution to the election outcome, and this magnification can be used to violate privacy. We demonstrate the practicality of the attack by violating a voter's privacy in a mock election using the software implementation of Helios. Moreover, the feasibility of an attack is considered in the context of French legislative elections and, based upon our findings, we believe it constitutes a real threat to ballot secrecy. We present a fix and show that our solution satisfies a formal definition of ballot secrecy using the applied pi calculus. Furthermore, we present similar vulnerabilities in other electronic voting protocols -- namely, the schemes by Lee et al., Sako & Kilian, and Schoenmakers -- which do not assure ballot independence. Finally, we argue that independence and privacy properties are unrelated, and that non-malleability is stronger than independence.
Last updated:  2012-02-08
No-leak authentication by the Sherlock Holmes method
Dima Grigoriev, Vladimir Shpilrain
We propose a class of authentication schemes that are literally zero-knowledge, as compared to what is formally defined as ``zero-knowledge" in cryptographic literature. We call this ``no-leak" authentication to distinguish from an established ``zero-knowledge" concept. The ``no-leak" condition implies ``zero-knowledge" (even ``perfect zero-knowledge"), but it is actually stronger, as we illustrate by examples. The principal idea behind our schemes is: the verifier challenges the prover with questions that he (the verifier) already knows answers to; therefore, even a computationally unbounded verifier who follows the protocol cannot possibly learn anything new during any number of authentication sessions. This is therefore also true for a computationally unbounded passive adversary.
Last updated:  2011-02-13
Cryptanalysis of Skein
Daniel J. Bernstein, Tanja Lange
Several attacks on Skein.
Last updated:  2010-12-08
A new result on the distinctness of primitive sequences over Z(pq) modulo 2
Qunxiong Zheng, Wenfeng Qi
Let Z/(pq) be the integer residue ring modulo pq with odd prime numbers p and q. This paper studies the distinctness problem of modulo 2 reductions of two primitive sequences over Z/(pq), which has been studied by H.J. Chen and W.F. Qi in 2009. First, it is shown that almost every element in Z/(pq) occurs in a primitive sequence of order n > 2 over Z/(pq). Then based on this element distribution property of primitive sequences over Z/(pq), previous results are greatly improved and the set of primitive sequences over Z/(pq) that are known to be distinct modulo 2 is further enlarged.
Last updated:  2012-08-02
Generic Compilers for Authenticated Key Exchange (Full Version)
Tibor Jager, Florian Kohlar, Sven Schäge, Jörg Schwenk
So far, all solutions proposed for {\em authenticated key agreement} combine key agreement and authentication into a single cryptographic protocol. However, in many important application scenarios, key agreement and entity authentication are clearly separated protocols. This fact enables efficient attacks on the na\"ıve combination of these protocols. In this paper, we propose new compilers for two-party key agreement and authentication, which are provably secure in the standard Bellare-Rogaway model. The constructions are generic: key agreement is executed first and results (without intervention of the adversary) in a secret session key on both sides. This key (or a derived key) is handed over, together with a transcript of all key exchange messages, to the authentication protocol, where it is combined with the random challenge(s) exchanged during authentication.
Last updated:  2010-12-10
Identity-based Digital Signature Scheme Without Bilinear Pairings
He Debiao, Chen Jianhua, Hu Jin
Many identity-based digital signature schemes using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately twenty times higher than that of a scalar multiplication over an elliptic curve group. In order to save both running time and signature size, in this letter we propose an identity-based signature scheme without bilinear pairings. With both the running time and the signature size greatly reduced, our scheme is more practical than the previous related schemes for practical applications.
Last updated:  2010-12-08
Further Observations on Certificate-Based Encryption and its Generic Construction from Certificateless Public Key Encryption
Yang Lu
Certificate-based encryption (CBE) is a new asymmetric encryption paradigm which was introduced to solve the certificate management problem in traditional public key encryption (PKE). It combines PKE and identity-based encryption (IBE) while preserving some of their most attractive features. CBE provides an efficient implicit certificate mechanism which eliminates third-party queries and simplifies the certificate revocation problem of the traditional PKI. It also solves the key escrow problem and the key distribution problem inherent in IBE. In this paper, we introduce the key replacement attack and the malicious-but-passive certifier attack into CBE, and define a class of new security models for CBE under different security levels according to the power of the adversaries against CBE. Our new security models are more elaborate and stronger compared with other existing ones. Then, we propose a generic construction of CBE from certificateless public key encryption and prove its security under the proposed security models in the standard model. We also show a concrete conversion using the proposed generic construction.
Last updated:  2010-12-08
A Forgery Attack on the Candidate LTE Integrity Algorithm 128-EIA3
Thomas Fuhr, Henri Gilbert, Jean-René Reinhard, Marion Videau
In this note we show that the message authentication code 128-EIA3 considered for adoption as one of the integrity algorithms of the emerging mobile standard LTE is vulnerable to a simple existential forgery attack. This attack allows, given any message and the associated MAC value under an unknown integrity key and an initial vector, to predict the MAC value of a related message under the same key and the same initial vector with a success probability 1/2.
Last updated:  2018-11-23
Computing Discrete Logarithms in an Interval
Steven D. Galbraith, John M. Pollard, Raminder S. Ruprai
The discrete logarithm problem in an interval of size N in a group G is: given g, h in G and an integer N, find an integer n with 0 <= n <= N, if it exists, such that h = g^n. Previously, the best low-storage algorithm to solve this problem was the van Oorschot and Wiener version of the Pollard kangaroo method, whose heuristic average case running time is approximately (2 + o(1))\sqrt{N} group operations. We present two new low-storage algorithms for the discrete logarithm problem in an interval of size N. The first algorithm is based on the Pollard kangaroo method, but uses four kangaroos instead of the usual two; we explain why this algorithm has a smaller heuristic average case expected running time (in group operations). The second algorithm is based on the Gaudry-Schost algorithm and the ideas of our first algorithm; we explain why it has a smaller heuristic average case expected running time still. We give experimental results showing that the methods work close to what is predicted by the theoretical analysis. This is a revised version of the published paper that contains a corrected proof of Theorem 6 (the statement of Theorem 6 is unchanged). We thank Ravi Montenegro for pointing out the errors.
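For readers new to the kangaroo method, the following is a minimal sketch of the classical two-kangaroo walk for the DLP in an interval, with a toy group and jump-set heuristics of our own choosing; the paper's four-kangaroo and Gaudry-Schost-based algorithms refine this idea and are not shown.

    # Hedged sketch of the classical Pollard two-kangaroo (lambda) method.
    import hashlib, math

    def kangaroo(g, h, p, a, b, seed=0):
        """Try to find x in [a, b] with g^x = h (mod p); returns None on failure."""
        N = b - a
        k = max(1, round(math.log2(math.isqrt(N) + 2)))   # number of jump sizes
        jumps = [2 ** i for i in range(k)]                 # small power-of-two jumps

        def jump(y):   # pseudorandom jump size derived from the current point
            d = hashlib.sha256(f"{seed}:{y}".encode()).digest()
            return jumps[d[0] % k]

        # Tame kangaroo: start at g^b, record its final position (the trap).
        yT, dT = pow(g, b, p), 0
        for _ in range(4 * math.isqrt(N) + 10):
            s = jump(yT)
            yT = (yT * pow(g, s, p)) % p
            dT += s
        trap = yT

        # Wild kangaroo: start at h and walk until it hits the trap or overshoots.
        yW, dW = h % p, 0
        while dW <= N + dT:
            if yW == trap:
                return b + dT - dW
            s = jump(yW)
            yW = (yW * pow(g, s, p)) % p
            dW += s
        return None

    # Toy example in a prime-order subgroup; retry with fresh walks if unlucky.
    p, q = 10007, 5003                      # 10007 = 2*5003 + 1, both prime
    g = pow(5, 2, p)                        # element of order q
    x = 1234
    h = pow(g, x, p)
    sol = next(s for s in (kangaroo(g, h, p, 0, 2000, seed) for seed in range(20))
               if s is not None)
    assert pow(g, sol, p) == h
    print("recovered exponent:", sol)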
Last updated:  2012-02-14
A non-uniform birthday problem with applications to discrete logarithms
Steven D. Galbraith, Mark Holmes
We consider a generalisation of the birthday problem that arises in the analysis of algorithms for certain variants of the discrete logarithm problem in groups. More precisely, we consider sampling coloured balls and placing them in urns, such that the distribution of assigning balls to urns depends on the colour of the ball. We determine the expected number of trials until two balls of different colours are placed in the same urn. As an aside we present an amusing ``paradox'' about birthdays.
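The quantity being analysed can be sanity-checked numerically; the Monte Carlo sketch below estimates the expected number of balls thrown until two balls of different colours share an urn, using two illustrative colour-dependent urn distributions of our own choosing (not the distributions of the paper).

    # Hedged Monte Carlo estimate of the non-uniform "bi-coloured" birthday quantity.
    import random

    def trials_until_bi_collision(n_urns, colour_probs, urn_dists, rng):
        seen = [set() for _ in colour_probs]        # urns hit so far, per colour
        t = 0
        while True:
            t += 1
            c = rng.choices(range(len(colour_probs)), weights=colour_probs)[0]
            u = rng.choices(range(n_urns), weights=urn_dists[c])[0]
            # Stop as soon as this urn already holds a ball of a different colour.
            if any(u in seen[c2] for c2 in range(len(colour_probs)) if c2 != c):
                return t
            seen[c].add(u)

    rng = random.Random(0)
    n = 365
    uniform = [1.0] * n
    skewed = [2.0 if i < n // 2 else 1.0 for i in range(n)]   # colour-dependent urns
    reps = 2000
    est = sum(trials_until_bi_collision(n, [0.5, 0.5], [uniform, skewed], rng)
              for _ in range(reps)) / reps
    print("estimated expected number of trials:", est)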
Last updated:  2010-12-08
Using Equivalence Classes to Accelerate Solving the Discrete Logarithm Problem in a Short Interval
Steven D. Galbraith, Raminder S. Ruprai
The Pollard kangaroo method solves the discrete logarithm problem (DLP) in an interval of size with heuristic average case expected running time approximately group operations. A recent variant of the kangaroo method, requiring one or two inversions in the group, solves the problem in approximately group operations. It is well-known that the Pollard rho method can be sped-up by using equivalence classes (such as orbits of points under an efficiently computed group homomorphism), but such ideas have not been used for the DLP in an interval. Indeed, it seems impossible to implement the standard kangaroo method with equivalence classes. The main result of the paper is to give an algorithm, building on work of Gaudry and Schost, to solve the DLP in an interval of size with heuristic average case expected running time of close to group operations for groups with fast inversion. In practice the algorithm is not quite this fast, due to problems with pseudorandom walks going outside the boundaries of the search space, and due to the overhead of handling fruitless cycles. We present some experimental results. This is the full version (with some minor corrections and updates) of the paper which was published in P. Q. Nguyen and D. Pointcheval (eds.), PKC 2010, Springer LNCS 6056 (2010) 368-383.
Last updated:  2010-12-02
An Evaluation of Hash Functions on a Power Analysis Resistant Processor Architecture
Simon Hoerder, Marcin Wojcik, Stefan Tillich, Dan Page
Cryptographic hash functions are omnipresent components in security-critical software and devices; they support, for example, digital signature and data authenticity schemes, mechanisms for key derivation, pseudo-random number generation and so on. A criterion for candidate hash functions in the SHA-3 contest is resistance against side-channel analysis, which is a major concern for mobile devices as well. This paper explores the implementation of said candidates on a variant of the Power-Trust platform; our results highlight that this platform represents a flexible solution to power analysis attacks, implying only a modest performance overhead.
Last updated:  2010-11-30
Better Key Sizes (and Attacks) for LWE-Based Encryption
Richard Lindner, Chris Peikert
We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the ``learning with errors'' (LWE) problem. Our main contributions are: (1)~a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success tradeoff, which performs better than the simple distinguishing attack considered in prior analyses; (2)~concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the well-known schemes from the literature. Our new key sizes are up to several times smaller than prior examples, while providing even stronger concrete security levels.
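To fix notation for readers unfamiliar with LWE-based encryption, here is a toy Regev-style scheme with deliberately insecure parameters; the parameter choices and Gaussian-rounded error are our own illustrative assumptions, far smaller than the key sizes analysed in the paper.

    # Hedged toy example: textbook Regev-style LWE encryption of a single bit.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, q = 64, 512, 4093                       # toy parameters, not secure

    def keygen():
        A = rng.integers(0, q, size=(m, n))
        s = rng.integers(0, q, size=n)            # secret key
        e = rng.normal(0, 2.0, size=m).round().astype(int) % q   # small error
        b = (A @ s + e) % q
        return (A, b), s

    def encrypt(pk, bit):
        A, b = pk
        r = rng.integers(0, 2, size=m)            # random 0/1 combination of rows
        u = (r @ A) % q
        v = (r @ b + bit * (q // 2)) % q
        return u, v

    def decrypt(sk, ct):
        u, v = ct
        d = (v - u @ sk) % q
        return int(q // 4 < d < 3 * q // 4)       # closer to q/2 means bit 1

    pk, sk = keygen()
    for bit in (0, 1):
        assert decrypt(sk, encrypt(pk, bit)) == bit
    print("toy LWE round-trip OK")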
Last updated:  2011-01-06
Cryptanalysis of Hummingbird-1
Markku-Juhani O. Saarinen
Hummingbird-1 is a lightweight encryption and message authentication primitive published in RISC '09 and WLC '10. Hummingbird-1 utilizes a 256-bit secret key and a 64-bit IV. We report a chosen-IV, chosen-message attack that can recover the full secret key with a few million chosen messages processed under two related IVs. The attack requires at most 2^64 off-line computational effort. The attack has been implemented and demonstrated to work against a real-life implementation of Hummingbird-1. By attacking the differentially weak E component, the overall attack complexity can be reduced by a significant factor. Our cryptanalysis is based on a differential divide-and-conquer method with some novel techniques that are uniquely applicable to ciphers of this type.
Last updated:  2010-11-30
Statistical Analysis of Reduced Round Compression Functions of SHA-3 Second Round Candidates
Ali Doğanaksoy, Barış Ege, Onur Koçak, Fatih Sulak
The National Institute of Standards and Technology announced a competition in 2008, the winner of which will be acknowledged as the new hash standard SHA-3. There are 14 second round candidates, selected among 51 first round algorithms. In this paper, we apply statistical analysis to the second round candidate algorithms using two different methods, and observe how conservative the algorithms are in terms of randomness. The first method evaluates 256-bit outputs, obtained from reduced round versions of the algorithms, through statistical randomness tests. The second method evaluates the randomness of the reduced round compression functions based on certain cryptographic properties. This analysis gives a rough idea of the security factor of the compression functions.
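As a flavour of the first method, the simplest statistical randomness check applied to candidate outputs is the monobit (frequency) test; the sketch below runs it on a random 256-bit stand-in string (the real analysis of course uses reduced-round candidate outputs and a full battery of tests).

    # Hedged sketch: NIST SP 800-22 style monobit (frequency) test on 256 bits.
    import math, os

    def monobit_p_value(bits):
        """Frequency test: p-value via the complementary error function."""
        s = sum(1 if b else -1 for b in bits)          # +1 for 1-bits, -1 for 0-bits
        s_obs = abs(s) / math.sqrt(len(bits))
        return math.erfc(s_obs / math.sqrt(2))

    output = os.urandom(32)                            # random 256-bit stand-in
    bits = [(byte >> i) & 1 for byte in output for i in range(8)]
    p = monobit_p_value(bits)
    print(f"monobit p-value = {p:.3f} ->", "pass" if p >= 0.01 else "fail")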
Last updated:  2013-06-06
Separating Succinct Non-Interactive Arguments From All Falsifiable Assumptions
Craig Gentry, Daniel Wichs
In this paper, we study succinct computationally sound proofs (arguments) for NP, whose communication complexity is polylogarithmic in the instance and witness sizes. The seminal works of Kilian '92 and Micali '94 show that such arguments can be constructed under standard cryptographic hardness assumptions with four rounds of interaction, and that they can be made non-interactive in the random-oracle model. The latter construction also gives us some evidence that succinct non-interactive arguments (SNARGs) may exist in the standard model with a common reference string (CRS), by replacing the oracle with a sufficiently complicated hash function whose description goes in the CRS. However, we currently do not know of any construction of SNARGs with a formal proof of security under any simple cryptographic assumption. In this work, we give a broad black-box separation result, showing that black-box reductions cannot be used to prove the security of any SNARG construction based on any falsifiable cryptographic assumption. This includes essentially all common assumptions used in cryptography (one-way functions, trapdoor permutations, DDH, RSA, LWE etc.). More generally, we say that an assumption is falsifiable if it can be modeled as an interactive game between an adversary and an efficient challenger that can efficiently decide if the adversary won the game. This is similar, in spirit, to the notion of falsifiability of Naor '03, and captures the fact that we can efficiently check whether an adversarial strategy breaks the assumption. Our separation result also extends to designated verifier SNARGs, where the verifier needs a trapdoor associated with the CRS to verify arguments, and to slightly succinct SNARGs, whose size is only required to be sublinear in the statement and witness size.