
Last updated:  2009-02-24
A Brief History of Provably-Secure Public-Key Encryption
Alexander W. Dent
Public-key encryption schemes are a useful and interesting field of cryptographic study. The ultimate goal for the cryptographer in the field of public-key encryption would be the production of a very efficient encryption scheme with a proof of security in a strong security model using a weak and reasonable computational assumption. This ultimate goal has yet to be reached. In this invited paper, we survey the major results that have been achieved in the quest to find such a scheme.
Last updated:  2009-05-07
A Provably Secure And Efficient Countermeasure Against Timing Attacks
Boris Köpf, Markus Dürmuth
We show that the amount of information about the key that an unknown-message attacker can extract from a deterministic side-channel is bounded from above by |O| \log_2 (n+1) bits, where n is the number of side-channel measurements and O is the set of possible observations. We use this bound to derive a novel countermeasure against timing attacks, where the strength of the security guarantee can be freely traded for the resulting performance penalty. We give algorithms that efficiently and optimally adjust this trade-off for given constraints on the side-channel leakage or on the efficiency of the cryptosystem. Finally, we perform a case-study that shows that applying our countermeasure leads to implementations with minor performance overhead and formal security guarantees.
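The bound quoted above is simple enough to evaluate for concrete parameters; a minimal sketch (the function name and the example numbers are ours, purely for illustration):

```python
import math

def leakage_bound_bits(num_observations: int, num_measurements: int) -> float:
    """Upper bound on the key information (in bits) an unknown-message
    attacker can extract from a deterministic side channel:
    |O| * log2(n + 1), where O is the set of possible observations and
    n is the number of side-channel measurements."""
    return num_observations * math.log2(num_measurements + 1)

# e.g. 4 distinguishable timing classes and 1000 measurements
print(leakage_bound_bits(4, 1000))   # just under 40 bits
```

Note the bound grows only logarithmically in the number of measurements, which is what makes trading leakage against performance tractable.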
Last updated:  2012-03-22
Lossy Encryption: Constructions from General Assumptions and Efficient Selective Opening Chosen Ciphertext Security
Brett Hemenway, Benoit Libert, Rafail Ostrovsky, Damien Vergnaud
In this paper, we present new and general constructions of lossy encryption schemes. By applying results from Eurocrypt '09, we obtain new general constructions of cryptosystems secure against selective opening adversaries (SOA). Although it was recognized almost twenty years ago that SOA security was important, it was not until the recent breakthrough works of Hofheinz and Bellare, Hofheinz and Yilek that any progress was made on this fundamental problem. The Selective Opening problem is as follows: suppose an adversary receives $n$ commitments (or encryptions) of (possibly) correlated messages, and now the adversary can choose $n/2$ of the messages, and receive de-commitments (or decryptions and the randomness used to encrypt them). Do the unopened commitments (encryptions) remain secure? A protocol achieving this type of security is called secure against a selective opening adversary (SOA). This question arises naturally in the context of Byzantine Agreement and Secure Multiparty Computation, where an active adversary is able to eavesdrop on all the wires, and then choose a subset of players to corrupt. Unfortunately, the traditional definitions of security (IND-CPA, IND-CCA) do not guarantee security in this setting. In this paper: We formally define re-randomizable encryption and show that every re-randomizable encryption scheme gives rise to efficient encryptions secure against a selective opening adversary. (Very informally, an encryption is re-randomizable if, given any ciphertext, there is an efficient way to map it to an almost uniform re-encryption of the same underlying message). We define re-randomizable one-way functions and show that every re-randomizable one-way function family gives rise to efficient commitments secure against a selective opening adversary. We show that statistically-hiding 2-round Oblivious Transfer (OT) implies Lossy Encryption and so do smooth hash proof systems, as defined by Cramer-Shoup.
Combining this with known results immediately shows that Private Information Retrieval and homomorphic encryption both imply Lossy Encryption, and thus Selective Opening Secure Public Key Encryption. Applying our constructions to well-known cryptosystems (such as Elgamal or Paillier), we obtain selective opening secure commitments and encryptions from the Decisional Diffie-Hellman (DDH), Decisional Composite Residuosity (DCR) and Quadratic Residuosity (QR) assumptions, that are either simpler or more efficient than existing constructions of Bellare, Hofheinz and Yilek. By applying our general results to the Paillier cryptosystem, we obtain the first cryptosystem to achieve simulation-based selective opening security from the DCR assumption. We provide (indistinguishability and simulation-based) definitions of adaptive chosen-ciphertext security (CCA2) in the selective opening setting and describe the first encryption schemes that provide security in the sense of this definition. In the indistinguishability-based model, we notably achieve short ciphertexts under standard number theoretic assumptions. In the simulation-based security chosen-ciphertext attack scenario, we handle non-adaptive (i.e., CCA1) adversaries and describe the first encryption scheme which is simultaneously CCA1 and SOA-secure.
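The re-randomizability used in the constructions above is concretely visible in Elgamal: multiplying a ciphertext by a fresh encryption of the identity yields a uniform re-encryption of the same message. A toy sketch with tiny parameters (our illustration, not the paper's construction, and in no way secure):

```python
import random

# Toy Elgamal over Z_p^* (tiny prime, illustration only -- NOT secure).
p, g = 467, 2
x = random.randrange(1, p - 1)        # secret key
y = pow(g, x, p)                      # public key

def encrypt(m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (m * pow(y, r, p)) % p

def rerandomize(c1, c2):
    # Multiply by a fresh encryption of 1, i.e. (g^s, y^s): the result
    # decrypts to the same message but is distributed like a fresh
    # encryption -- the re-randomizability property.
    s = random.randrange(1, p - 1)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(y, s, p)) % p

def decrypt(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 * c1^(-x) mod p

ct = encrypt(42)
assert decrypt(*rerandomize(*ct)) == 42
```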
Last updated:  2012-07-11
Unconditionally Secure Asynchronous Multiparty Computation with Quadratic Communication Per Multiplication Gate
Arpita Patra, Ashish Choudhary, C. Pandu Rangan
Secure multiparty computation (MPC) allows a set of $n$ parties to securely compute an agreed function, even if up to $t$ parties are under the control of an adversary. In this paper, we propose a new {\it Asynchronous secure multiparty computation} (AMPC) protocol that provides information theoretic security with $n = 4t+1$, where $t$ out of $n$ parties can be under the influence of a {\it Byzantine (active)} adversary ${\cal A}_t$ having {\it unbounded computing power}. Our protocol communicates ${\cal O}(n^2 \log|{\mathbb F}|)$ bits per multiplication and involves a negligible error probability of $2^{-\Omega(\kappa)}$, where $\kappa$ is the error parameter and ${\mathbb F}$ is the field over which the computation is carried out. The best known information theoretically secure AMPC with $n=4t+1$ communicates ${\cal O}(n^3 \log|{\mathbb F}|)$ bits per multiplication and does not involve any error probability in computation. Though a negligible error probability is involved, our AMPC protocol provides the best communication complexity among all the known AMPC protocols providing information theoretic security. Moreover, the communication complexity of our AMPC is the same as that of the best known AMPC protocol with {\it cryptographic assumptions}. As a tool for our AMPC protocol, we propose a new method of efficiently generating {\it $d$-sharings} of multiple secrets concurrently in the asynchronous setting, which is of independent interest, where $t \leq d \leq 2t$. In the literature, though there are protocols for generating $t$-sharing and $2t$-sharing separately, there is no generic protocol for generating {\it $d$-sharing} for the whole range $t \leq d \leq 2t$. Moreover, our protocol provides better communication complexity than the existing methods for generating $2t$-sharing.
Last updated:  2010-09-27
Point Compression for Koblitz Elliptic Curves
P. N. J. Eagle, Steven D. Galbraith, John Ong
Elliptic curves over finite fields have applications in public key cryptography. A Koblitz curve is an elliptic curve $E$ over $\F_2$; the group $E( \Ftn )$ has convenient features for efficient implementation of elliptic curve cryptography. Wiener and Zuccherato and Gallant, Lambert and Vanstone showed that one can accelerate the Pollard rho algorithm for the discrete logarithm problem on Koblitz curves. This implies that when using Koblitz curves, one has a lower security per bit than when using general elliptic curves defined over the same field. Hence for a fixed security level, systems using Koblitz curves require slightly more bandwidth. We present a method to reduce this bandwidth when a normal basis representation for $\Ftn$ is used. Our method is appropriate for applications such as Diffie-Hellman key exchange or Elgamal encryption. We show that, with a low probability of failure, our method gives the expected bandwidth for a given security level.
Last updated:  2009-02-24
UC-Secure Source Routing Protocol
Tao Feng, Xian Guo, Jianfeng Ma, Xinghua Li
The multi-path routing scheme provides a reliability guarantee for mobile ad hoc networks. A new method is proposed for analyzing the security of multi-path routing protocols within the framework of Universally Composable (UC) security. Based on a topological model that admits adversarial nodes, the concept of a plausible route is extended and the definition of a plausible-route set is presented. The plausible-route set is used to describe multi-path routing in ad hoc networks, and a formal security definition based on UC-RP is given. A provably secure Multiple Node-Disjoint Paths source routing protocol (SMNDP) is proposed to address the security faults of MNDP in the active adversary model. The new approach shows that the security of SMNDP can be reduced to the security of the message authentication code and the digital signature. SMNDP guarantees the correctness of the route discovery process, the authentication of node identifiers, and the integrity of route information.
Last updated:  2009-02-24
Simulation without the Artificial Abort: Simplified Proof and Improved Concrete Security for Waters' IBE Scheme
Mihir Bellare, Thomas Ristenpart
Waters' variant of the Boneh-Boyen IBE scheme is attractive because of its efficiency, applications, and security attributes, but suffers from a relatively complex proof with poor concrete security. This is due in part to the proof's ``artificial abort'' step, which has since been inherited by numerous derivative works. It has often been asked whether this step is necessary. We show that it is not, providing a new proof that eliminates this step. The new proof is not only simpler than the original one but offers better concrete security for important ranges of the parameters. As a result, one can securely use smaller groups, resulting in significant efficiency improvements.
Last updated:  2009-02-24
Multi-authority attribute based encryption with honest-but-curious central authority
Vladimir Bozovic, Daniel Socek, Rainer Steinwandt, Viktoria I. Villanyi
An attribute based encryption scheme capable of handling multiple authorities was recently proposed by Chase. The scheme is built upon a single-authority attribute based encryption scheme presented earlier by Sahai and Waters. Chase’s construction uses a trusted central authority that is inherently capable of decrypting arbitrary ciphertexts created within the system. We present a multi-authority attribute based encryption scheme in which only the set of recipients defined by the encrypting party can decrypt a corresponding ciphertext. The central authority is viewed as “honest-but-curious”: on the one hand it honestly follows the protocol, and on the other hand it is curious to decrypt arbitrary ciphertexts thus violating the intent of the encrypting party. The proposed scheme, which like its predecessors relies on the Bilinear Diffie-Hellman assumption, has a complexity comparable to that of Chase’s scheme. We prove that our scheme is secure in the selective ID model and can tolerate an honest-but-curious central authority.
Last updated:  2009-12-02
The Case for Quantum Key Distribution
Douglas Stebila, Michele Mosca, Norbert Lütkenhaus
Quantum key distribution (QKD) promises secure key agreement by using quantum mechanical systems. We argue that QKD will be an important part of future cryptographic infrastructures. It can provide long-term confidentiality for encrypted information without reliance on computational assumptions. Although QKD still requires authentication to prevent man-in-the-middle attacks, it can make use of either information-theoretically secure symmetric key authentication or computationally secure public key authentication: even when using public key authentication, we argue that QKD still offers stronger security than classical key agreement.
Last updated:  2009-04-27
Ensuring Data Storage Security in Cloud Computing
Cong Wang, Qian Wang, Kui Ren, Wenjing Lou
Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features not found in its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness assurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.
Last updated:  2009-10-26
CoSP: A General Framework For Computational Soundness Proofs
Michael Backes, Dennis Hofheinz, Dominique Unruh
We describe CoSP, a general framework for conducting computational soundness proofs of symbolic models and for embedding these proofs into formal calculi. CoSP considers arbitrary equational theories and computational implementations, and it abstracts away many details that are not crucial for proving computational soundness, such as message scheduling, corruption models, and even the internal structure of a protocol. CoSP enables soundness results, in the sense of preservation of trace properties, to be proven in a conceptually modular and generic way: proving x cryptographic primitives sound for y calculi only requires x+y proofs (instead of x*y proofs without this framework), and the process of embedding calculi is conceptually decoupled from computational soundness proofs of cryptographic primitives. We exemplify the usefulness of CoSP by proving the first computational soundness result for the full-fledged applied pi-calculus under active attacks. Concretely, we embed the applied pi-calculus into CoSP and give a sound implementation of public-key encryption and digital signatures.
Last updated:  2009-11-09
From Dolev-Yao to Strong Adaptive Corruption: Analyzing Security in the Presence of Compromising Adversaries
David Basin, Cas Cremers
We formalize a hierarchy of adversary models for security protocol analysis, ranging from a Dolev-Yao style adversary to more powerful adversaries who can reveal different parts of principals' states during protocol execution. We define our hierarchy by a modular operational semantics describing adversarial capabilities. We use this to formalize various, practically-relevant notions of key and state compromise. Our semantics can be used as a basis for protocol analysis tools. As an example, we extend an existing symbolic protocol-verification tool with our adversary models. The result is the first tool that systematically supports notions such as weak perfect forward secrecy, key compromise impersonation, and adversaries capable of so-called strong corruptions and state-reveal queries. As further applications, we use our model hierarchy to relate different adversarial notions, gaining new insights on their relative strengths, and we use our tool to find new attacks on protocols.
Last updated:  2009-02-16
Attacks on the DECT authentication mechanisms
Stefan Lucks, Andreas Schuler, Erik Tews, Ralf-Philipp Weinmann, Matthias Wenzel
Digital Enhanced Cordless Telecommunications (DECT) is a standard for connecting cordless telephones to a fixed telecommunications network over a short range. The cryptographic algorithms used in DECT are not publicly available. In this paper we reveal one of the two algorithms used by DECT, the DECT Standard Authentication Algorithm (DSAA). We give a very detailed security analysis of the DSAA including some very effective attacks on the building blocks used for DSAA as well as a common implementation error that can practically lead to a total break of DECT security. We also present a low cost attack on the DECT protocol, which allows an attacker to impersonate a base station and therefore listen to and reroute all phone calls made by a handset.
Last updated:  2009-02-16
On the Security of Iterated Hashing based on Forgery-resistant Compression Functions
Charles Bouillaguet, Orr Dunkelman, Pierre-Alain Fouque, Antoine Joux
In this paper we re-examine the security notions suggested for hash functions, with an emphasis on the delicate notion of second preimage resistance. We start by showing that, in the random oracle model, both Merkle-Damgaard and HAIFA achieve second preimage resistance beyond the birthday bound, and actually up to the level of known generic attacks, hence demonstrating the optimality of HAIFA in this respect. We then try to distill a more elementary requirement out of the compression function to get some insight on the properties it should have to guarantee the second preimage resistance of its iteration. We show that if the (keyed) compression function is a secure FIL-MAC then the Merkle-Damgaard mode of iteration (or HAIFA) still maintains the same level of second preimage resistance. We conclude by showing that this ``new'' assumption (or security notion) implies the recently introduced Preimage-Awareness while ensuring all other classical security notions for hash functions.
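The Merkle-Damgård mode of iteration at the center of this discussion is compact enough to sketch. The compression function below is improvised from SHA-256 and the padding is the usual length-strengthening; none of this is taken from the paper, and per its result any secure keyed FIL-MAC could stand in for the compression function:

```python
import hashlib

def md_iterate(compress, iv, message, block_size=64):
    """Plain Merkle-Damgård iteration: length-strengthening padding,
    then chain h_i = compress(h_{i-1}, m_i) over the message blocks."""
    bitlen = len(message) * 8
    message += b"\x80"                                 # padding marker
    message += b"\x00" * ((-len(message) - 8) % block_size)
    message += bitlen.to_bytes(8, "big")               # message bit length
    h = iv
    for i in range(0, len(message), block_size):
        h = compress(h, message[i:i + block_size])
    return h

def f(chain, block):
    # Stand-in compression function built from SHA-256 (illustration only).
    return hashlib.sha256(chain + block).digest()

digest = md_iterate(f, b"\x00" * 32, b"hello world")
```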
Last updated:  2009-02-16
Construction of large families of pseudorandom subsets using elliptic curves
Zhixiong Chen, Chenhuang Wu
Recently, Dartyge and Sárközy investigated the measures, i.e., the well distribution measure and the correlation measure of order $k$, of pseudorandomness of subsets of the set $\{1, 2,\ldots, N\}$, and they presented several constructive examples for subsets with strong pseudorandom properties when $N$ is a prime number. In this article, we present a construction of pseudorandom subsets using elliptic curves over finite fields and estimate the pseudorandom measures. Character sums play an important role in the proofs.
Last updated:  2010-07-29
Security of Practical Cryptosystems Using Merkle-Damgard Hash Function in the Ideal Cipher Model
Yusuke Naito, Kazuki Yoneyama, Lei Wang, Kazuo Ohta
Since the Merkle-Damgård (MD) type hash functions are differentiable from ROs even when compression functions are modeled by ideal primitives, there is no guarantee as to the security of cryptosystems when ROs are instantiated with structural hash functions. In this paper, we study the security of instantiated cryptosystems where the hash functions have the well-known structure of Merkle-Damgård construction with Stam's type-II compression function (denoted MD-TypeII) in the Ideal Cipher Model (ICM). Note that since the Type-II scheme includes the Davies-Meyer compression function, SHA-256 and SHA-1 have the MD-TypeII structure. We show that OAEP, RSA-KEM, PSEC-KEM, ECIES-KEM and many other encryption schemes are secure when using the MD-TypeII hash function. In order to show this, we customize the indifferentiability framework of Maurer, Renner and Holenstein. We call the customized framework ``indifferentiability with condition''. In this framework, for some condition $\alpha$ that cryptosystem $C$ satisfies, if hash function $H$ is indifferentiable from RO under condition $\alpha$, $C$ is secure when RO is instantiated with $H$. We note the ``prefix-free'' condition that the above schemes satisfy. We show that the MD-TypeII hash function is indifferentiable from RO under this condition. When the output length of RO is incompatible with that of the hash function, the output size is expanded by Key Derivation Functions (KDFs). Since a KDF is specified as MGF1 in RSA's PKCS $\#$1 V2.1, its security discussion is important in practice. We show that KDFs using the MD-TypeII hash function (KDF-MD-TypeII) are indifferentiable from ROs under this ``prefix-free'' condition. Therefore, we can conclude that the above practical encryption schemes are secure even when ROs are instantiated with (KDF-)MD-TypeII hash functions.
Dodis, Ristenpart and Shrimpton showed that FDH, PSS, Fiat-Shamir, and so on are secure when RO is instantiated with the MD-TypeII hash function in the ICM; their analyses use a different approach from ours, called indifferentiability from a public-use RO (pub-RO). They showed that the above cryptosystems are secure in the pub-RO model and the MD-TypeII hash function is indifferentiable from pub-RO. Since their analyses did not consider the structure of KDFs, there might exist some attack using a KDF's structure. We show that KDFs using pub-RO (KDF-pub-RO) are differentiable from pub-RO. Thus, we cannot trivially extend the result of Dodis et al. to the indifferentiability of KDF-MD-TypeII hash functions. We propose a new oracle called private interface leak RO (privleak-RO). We show that KDF-pub-ROs are indifferentiable from privleak-ROs and the above cryptosystems are secure in the privleak-RO model. Therefore, by combining the result of Dodis et al. with our result, we can conclude that the above cryptosystems are secure when ROs are instantiated with KDF-MD-TypeII hash functions. Since OAEP, RSA-KEM, PSEC-KEM, ECIES-KEM and many other encryption schemes are insecure in the pub-RO (privleak-RO) model, we cannot confirm the security of these encryption schemes using the approach of Dodis et al. Therefore, the result of Dodis et al. can be supplemented with our result. Consequently, from the two results we can confirm the security of almost all practical cryptosystems when ROs are instantiated with (KDF-)MD-TypeII hash functions.
Last updated:  2009-05-29
Computational Oblivious Transfer and Interactive Hashing
Kirill Morozov, George Savvides
We use interactive hashing to achieve the most efficient OT protocol to date based solely on the assumption that trapdoor permutations (TDP) exist. Our protocol can be seen as the following (simple) modification of either of the two famous OT constructions: 1) In the one by Even et al. (1985), a receiver must send a random domain element to a sender through IH; 2) In the one by Ostrovsky et al. (1993), the players should use TDP instead of one-way permutation. A similar approach is employed to achieve oblivious transfer based on the security of the McEliece cryptosystem. In this second protocol, the receiver inputs a public key into IH, while privately keeping the corresponding secret key. Two different versions of IH are used: the computationally secure one in the first protocol, and the information-theoretically secure one in the second.
Last updated:  2009-02-16
Automatic Approach of Provable Security and its Application for OAEP+
GU Chun-Xiang, Guang Yan, ZHU Yue-Fei
Provable security is an important criterion for analyzing the security of cryptographic protocols. However, writing and verifying proofs by hand is prone to error. This paper introduces the game-based approach to writing security proofs and its automation. It advocates an automatic security proof approach based on process calculus, and presents the initial game and observational equivalences for OAEP+.
Last updated:  2009-02-16
Implementing cryptographic pairings: a magma tutorial
Luis J Dominguez Perez, Ezekiel J Kachisa, Michael Scott
In this paper we show an efficient implementation of the Tate, ate, and R-ate pairings in Magma. This is demonstrated using the KSS curves with embedding degree k=18.
Last updated:  2009-02-16
Secret sharing on trees: problem solved
Laszlo Csirmaz, Gabor Tardos
We determine the worst case information rate for all secret sharing schemes based on trees. It is the inverse of $2-1/c$, where $c$ is the size of the maximal core in the tree. A {\it core} is a connected subset of the vertices so that every vertex in the core has a neighbor outside the core. The upper bound comes from an application of the entropy method, while the lower bound is achieved by a construction using Stinson's decomposition theorem. It is shown that $2-1/c$ is also the {\it fractional cover number} of the tree where the edges of the tree are covered by stars, and the vertex cover should be minimized. We also give an $O(n^2)$ algorithm which finds an optimal cover on any tree, and thus a perfect secret sharing scheme with optimal rate.
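The definition of a core can be explored by brute force on small trees. The exhaustive search below is our illustration, not the paper's $O(n^2)$ algorithm; it recovers $c=2$ (hence worst-case rate $2/3$) for a 4-vertex path:

```python
from itertools import combinations

def max_core_size(adj):
    """Largest 'core' of a tree by brute force: a connected vertex subset
    in which every vertex has at least one neighbor outside the subset."""
    verts, best = list(adj), 0
    for k in range(1, len(verts) + 1):
        for sub in combinations(verts, k):
            s = set(sub)
            seen, stack = {sub[0]}, [sub[0]]   # connectivity check
            while stack:
                for w in adj[stack.pop()]:
                    if w in s and w not in seen:
                        seen.add(w)
                        stack.append(w)
            if seen == s and all(any(w not in s for w in adj[v]) for v in s):
                best = k
    return best

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # the path 0-1-2-3
c = max_core_size(path)                         # c = 2 (the subset {1, 2})
rate = 1 / (2 - 1 / c)                          # worst-case rate 2/3
```

For a star, the maximal core has size 1, and the formula gives rate 1, matching the fact that star access structures are ideal.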
Last updated:  2009-11-13
Low Complexity Cubing and Cube Root Computation over $\F_{3^m}$ in Polynomial Basis
Omran Ahmadi, Francisco Rodríguez-Henriquez
We present low complexity formulae for the computation of cubing and cube root over $\F_{3^m}$ constructed using special classes of irreducible trinomials, tetranomials and pentanomials. We show that for all those special classes of polynomials, field cubing and field cube root operation have the same computational complexity when implemented in hardware or software platforms. As one of the main applications of these two field arithmetic operations lies in pairing-based cryptography, we also give in this paper a selection of irreducible polynomials that lead to low cost field cubing and field cube root computations for supersingular elliptic curves defined over $\F_{3^m}$, where $m$ is a prime number in the pairing-based cryptographic range of interest, namely, $m\in [47, 541]$.
Last updated:  2013-03-29
Optimistic Fair Exchange with Multiple Arbiters
Alptekin Kupcu, Anna Lysyanskaya
Fair exchange is one of the most fundamental problems in secure distributed computation. Alice has something that Bob wants, and Bob has something that Alice wants. A fair exchange protocol would guarantee that, even if one of them maliciously deviates from the protocol, either both of them get the desired content, or neither of them do. It is known that no two-party protocol can guarantee fairness in general; therefore the presence of a trusted arbiter is necessary. In optimistic fair exchange, the arbiter only gets involved in case of faults, but needs to be trusted. To reduce the trust put in the arbiter, it is natural to consider employing multiple arbiters. Expensive techniques like Byzantine agreement or secure multi-party computation with Omega(n^2) communication can be applied to distribute arbiters in a non-autonomous way. Yet we are interested in efficient protocols that can be achieved by keeping the arbiters autonomous (non-communicating), especially for p2p settings in which the arbiters do not even know each other. Avoine and Vaudenay employ multiple autonomous arbiters in their optimistic fair exchange protocol which uses global timeout mechanisms; all arbiters have access to loosely synchronized clocks. They left two open questions regarding the use of distributed autonomous arbiters: (1) Can an optimistic fair exchange protocol without timeouts provide fairness (since it is hard to achieve synchronization in a p2p setting) when employing multiple autonomous arbiters? (2) Can any other optimistic fair exchange protocol with timeouts achieve better bounds on the number of honest arbiters required? In this paper, we answer both questions negatively. To answer these questions, we define a general class of optimistic fair exchange protocols with multiple arbiters, called "distributed arbiter fair exchange" (DAFE) protocols.
Informally, in a DAFE protocol, if a participant fails to send a correctly formed message, the other party must contact some subset of the arbiters and get correctly formed responses from them. The arbiters do not communicate with each other, only with Alice and Bob. We prove that no DAFE protocol can meaningfully exist.
Last updated:  2009-02-10
Overview of Turbo-Code Reconstruction Techniques
Johann Barbier, Eric Filiol
In this paper we analyze different techniques to blindly recover the parameters of turbo-code encoders with only the knowledge of a noisy eavesdropped binary stream. We compare different strategies to reconstruct convolutional encoders, particularly when both are recursive systematic coders. We explain the intrinsic indetermination due to these techniques but also how to overcome such an issue in the case of turbo-code encoders. Moreover, we generalize Burel et al.'s particular reconstruction of the second (n,1)-encoder to (n,k)-encoders.
Last updated:  2009-02-10
On fractional correlation immunity of majority functions
Chuan-Kun Wu
Correlation immunity is known as an important cryptographic measure of a Boolean function with respect to its resistance against correlation attacks. This paper generalizes the concept of correlation immunity to a fractional value between 0 and 1, called fractional correlation immunity; a correlation immune function is the extreme case in which the fractional correlation immunity is 1. Even when a function is not correlation immune in the traditional sense, it may still have a nonzero fractional correlation immunity, which likewise indicates its resistance against correlation attacks. This paper first shows why this generalized concept of fractional correlation immunity is a reasonable measure of resistance against correlation attacks, then studies the fractional correlation immunity of a special class of Boolean functions, the majority functions, whose symmetric subset has been proved to have the highest algebraic immunity. This paper shows that all the majority functions, both symmetric and non-symmetric, are not correlation immune. However, their fractional correlation immunity approaches 1 as the number of variables grows. This means that this class of functions also has good resistance against correlation attacks, although its members are not correlation immune in the traditional sense.
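The claim that majority functions are not correlation immune, yet become "almost" immune as the number of variables grows, can be checked exhaustively for small n. The bias measure below is our illustration, not the paper's exact definition of fractional correlation immunity:

```python
from itertools import product

def majority_bias(n):
    """Pr[maj = 1 | x0 = 1] - Pr[maj = 1 | x0 = 0] for the n-variable
    (n odd) majority function; this is zero iff the output is independent
    of x0, i.e. iff the function is 1st-order correlation immune in x0."""
    hi = lo = 0
    for x in product((0, 1), repeat=n):
        f = int(sum(x) > n // 2)
        if x[0]:
            hi += f
        else:
            lo += f
    half = 2 ** (n - 1)
    return hi / half - lo / half

for n in (3, 5, 7, 9):
    print(n, majority_bias(n))   # nonzero, but shrinking as n grows
```

The bias is 0.5 at n=3 and decreases steadily, consistent with the fractional correlation immunity approaching 1.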
Last updated:  2009-05-22
Adaptive Preimage Resistance and Permutation-based Hash Functions
Jooyoung Lee, Je Hong Park
In this paper, we introduce a new notion of security, called \emph{adaptive preimage resistance}. We prove that a compression function that is collision resistant and adaptive preimage resistant can be combined with a public random function to yield a hash function that is indifferentiable from a random oracle. Specifically, we analyze adaptive preimage resistance of $2n$-bit to $n$-bit compression functions that use three calls to $n$-bit public random permutations. This analysis also provides a simpler proof of their collision resistance and preimage resistance than the one provided by Rogaway and Steinberger. By using such compression functions as building blocks, we obtain permutation-based pseudorandom oracles that outperform the Sponge construction and the MD6 compression function both in terms of security and efficiency.
Last updated:  2009-02-10
Foundations of Non-Malleable Hash and One-Way Functions
Alexandra Boldyreva, David Cash, Marc Fischlin, Bogdan Warinschi
Non-malleability is an interesting and useful property which ensures that a cryptographic protocol preserves the independence of the underlying values: given for example an encryption Enc(m) of some unknown message m, it should be hard to transform this ciphertext into some encryption Enc(m*) of a related message m*. This notion has been studied extensively for primitives like encryption, commitments and zero-knowledge. Non-malleability of one-way functions and hash functions has surfaced as a crucial property in several recent results, but it has not undergone a comprehensive treatment so far. In this paper we initiate the study of such non-malleable functions. We start with the design of an appropriate security definition. We then show that non-malleability for hash and one-way functions can be achieved, via a theoretical construction that uses perfectly one-way hash functions and simulation-sound non-interactive zero-knowledge proofs of knowledge (NIZKPoK). We also discuss the complexity of non-malleable hash and one-way functions. Specifically, we give a black-box based separation of non-malleable functions from one-way permutations (which our construction bypasses due to the 'non-black-box' NIZKPoK). We exemplify the usefulness of our definition in cryptographic applications by showing that non-malleability is necessary and sufficient to securely replace one of the two random oracles in the IND-CCA encryption scheme by Bellare and Rogaway, and to improve the security of client-server puzzles.
Last updated:  2009-02-12
On the Data Complexity of Statistical Attacks Against Block Ciphers (full version)
Céline Blondeau, Benoît Gérard
Many attacks on iterated block ciphers rely on statistical considerations using plaintext/ciphertext pairs to distinguish some part of the cipher from a random permutation. We provide here a simple formula for estimating the number of plaintext/ciphertext pairs needed for such distinguishers, which applies to many different scenarios (linear cryptanalysis, differential-linear cryptanalysis, differential/truncated differential/impossible differential cryptanalysis). The asymptotic data complexities of all these attacks are then derived. Moreover, we give an efficient algorithm for computing the data complexity accurately.
Last updated:  2009-02-16
CCZ-equivalence and Boolean functions
Lilya Budaghyan, Claude Carlet
We study further CCZ-equivalence of $(n,m)$-functions. We prove that for Boolean functions (that is, for $m=1$), CCZ-equivalence coincides with EA-equivalence. On the contrary, we show that for $(n,m)$-functions, CCZ-equivalence is strictly more general than EA-equivalence when $n\ge5$ and $m$ is greater than or equal to the smallest positive divisor of $n$ different from 1. Our result on Boolean functions allows us to study the natural generalization of CCZ-equivalence corresponding to the CCZ-equivalence of the indicators of the graphs of the functions. We show that it coincides with CCZ-equivalence.
Last updated:  2010-02-11
On Deterministic Polynomial-Time Equivalence of Computing the CRT-RSA Secret Keys and Factoring
Subhamoy Maitra, Santanu Sarkar
Let $N = pq$ be the product of two large primes. Consider CRT-RSA with the public encryption exponent $e$ and private decryption exponents $d_p, d_q$. It is well known that given any one of $d_p$ or $d_q$ (or both) one can factorize $N$ in probabilistic poly$(\log N)$ time with success probability almost equal to 1. Though this serves all practical purposes, from a theoretical point of view it is not a deterministic polynomial time algorithm. In this paper, we present a lattice based deterministic poly$(\log N)$ time algorithm that uses both $d_p, d_q$ (in addition to the public information $e, N$) to factorize $N$ for certain ranges of $d_p, d_q$. We would like to stress that proving the equivalence for all values of $d_p, d_q$ may be a nontrivial task.
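The probabilistic reduction the abstract alludes to (factoring $N$ given $e$ and $d_p$) is classical and easy to sketch; the following is a minimal illustration with toy parameters, not the paper's lattice-based deterministic algorithm. Since $e \cdot d_p \equiv 1 \pmod{p-1}$, we have $a^{e d_p} \equiv a \pmod p$ for every $a$, so $\gcd(a^{e d_p} - a, N)$ reveals $p$ for almost all choices of $a$:

```python
import math
import random

def factor_with_dp(N, e, d_p):
    """Probabilistically factor N = p*q given e and d_p = e^{-1} mod (p-1).

    Because e*d_p == 1 (mod p-1), a^(e*d_p) == a (mod p) for any a, so
    gcd(a^(e*d_p) - a, N) is a multiple of p and, for most a, not of q.
    """
    while True:
        a = random.randrange(2, N - 1)
        g = math.gcd((pow(a, e * d_p, N) - a) % N, N)
        if 1 < g < N:
            return g

# Toy parameters for illustration only; real moduli are 1024+ bits.
p, q = 10007, 10009
N, e = p * q, 5
d_p = pow(e, -1, p - 1)  # requires Python 3.8+ for modular inverse via pow
assert factor_with_dp(N, e, d_p) in (p, q)
```

Each trial succeeds unless $a$ happens to satisfy the same congruence modulo $q$ as well, which occurs for only a tiny fraction of residues, so the loop terminates almost immediately.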
Last updated:  2009-02-06
Security Enhancement of Various MPKCs by 2-layer Nonlinear Piece In Hand Method
Shigeo Tsujii, Kohtaro Tadaki, Ryou Fujita, Masahito Gotaishi, Toshinobu Kaneko
Following the earlier proposal of the nonlinear Piece in Hand method, which has a 3-layer structure, a 2-layer nonlinear Piece in Hand method is proposed. Both aim at enhancing the security of existing and future multivariate public key cryptosystems. The new nonlinear Piece in Hand method is compared with the 3-layer method and with PMI+, which was proposed by Ding et al.
Last updated:  2009-09-22
Comparing Two Pairing-Based Aggregate Signature Schemes
Sanjit Chatterjee, Darrel Hankerson, Edward Knapp, Alfred Menezes
In 2003, Boneh, Gentry, Lynn and Shacham (BGLS) devised the first provably-secure aggregate signature scheme. Their scheme uses bilinear pairings and their security proof is in the random oracle model. The first pairing-based aggregate signature scheme which has a security proof that does not make the random oracle assumption was proposed in 2006 by Lu, Ostrovsky, Sahai, Shacham and Waters (LOSSW). In this paper, we compare the security and efficiency of the BGLS and LOSSW schemes when asymmetric pairings derived from Barreto-Naehrig (BN) elliptic curves are employed.
Last updated:  2009-02-06
On the impossibility of graph secret sharing
Laszlo Csirmaz
A {\it perfect secret sharing scheme} based on a graph $G$ is a randomized distribution of a secret among the vertices of the graph so that the secret can be recovered from the information assigned to vertices at the endpoints of any edge, while the total information assigned to an independent set of vertices is independent (in the statistical sense) of the secret itself. The efficiency of a scheme is measured by the amount of information the most heavily loaded vertex receives divided by the amount of information in the secret itself. The (worst case) {\it information ratio} of $G$ is the infimum of this number. We calculate the best lower bound on the information ratio, for an infinite family of graphs, that the celebrated entropy method can give. We argue that all existing constructions for secret sharing schemes are special cases of the generalized vector space construction. We give direct constructions of this type for the first two members of the family, and show that for the other members no construction exists that would match the bound yielded by the entropy method.
Last updated:  2010-10-11
On Generalization of Cheon's Algorithm
Takakazu Satoh
We consider a generalization of Cheon's algorithm for the strong Diffie-Hellman problem. More specifically, we consider the circumstance that p^k-1 has a small divisor for k>=3, where p is the order of the group on which we consider the strong Diffie-Hellman problem. It seems that our algorithm is effective only for k=1, 2, that is, for the original Cheon algorithm.
Last updated:  2009-02-06
Anonymity in Shared Symmetric Key Primitives
Gregory M. Zaverucha, Douglas R. Stinson
We provide a stronger definition of anonymity in the context of shared symmetric key primitives, and show that existing schemes do not provide this level of anonymity. A new scheme is presented to share symmetric key operations amongst a set of participants according to a $(t,n)$-threshold access structure. We quantify the amount of information the output of the shared operation provides about the group of participants which collaborated to produce it.
Last updated:  2009-07-14
Designing an ASIP for Cryptographic Pairings over Barreto-Naehrig Curves
David Kammler, Diandian Zhang, Peter Schwabe, Hanno Scharwaechter, Markus Langenberg, Dominik Auras, Gerd Ascheid, Rainer Leupers, Rudolf Mathar, Heinrich Meyr
This paper presents a design-space exploration of an application-specific instruction-set processor (ASIP) for the computation of various cryptographic pairings over Barreto-Naehrig curves (BN curves). Cryptographic pairings are based on elliptic curves over finite fields--in the case of BN curves a field $\mathbb{F}_p$ of large prime order $p$. Efficient arithmetic in these fields is crucial for fast computation of pairings. Moreover, computation of cryptographic pairings is much more complex than elliptic-curve cryptography (ECC) in general. Therefore, we facilitate programming of the proposed ASIP by providing a C compiler. In order to speed up $\mathbb{F}_p$-arithmetic, a RISC core is extended with additional functional units. The critical path delay of these units is adjusted to the base architecture in order to maintain the operating frequency. Independently from that adjustment, these units are scalable, allowing for a trade-off between execution time and area consumption. Because the resulting speedup can be limited by the memory throughput, utilization of multiple data memories is proposed. However, developing a C compiler for multiple memories is a challenging task. Therefore, we introduce an enhanced memory system enabling multiple concurrent memory accesses while remaining totally transparent to the C compiler. The proposed design needs 15.8 ms for the computation of the Optimal-Ate pairing over a 256-bit BN curve at 338 MHz implemented with a 130 nm standard cell library. The processor core consumes 97 kGates, making it suitable for use in embedded systems.
Last updated:  2009-08-11
Universally Composable Symmetric Encryption
Ralf Kuesters, Max Tuengerthal
For most basic cryptographic tasks, such as public key encryption, digital signatures, authentication, key exchange, and many other more sophisticated tasks, ideal functionalities have been formulated in the simulation-based security approach, along with their realizations. Surprisingly, however, no such functionality exists for symmetric encryption, except for a more abstract Dolev-Yao style functionality. In this paper, we fill this gap. We propose two functionalities for symmetric encryption, an unauthenticated and an authenticated version, and show that they can be implemented based on standard cryptographic assumptions for symmetric encryption schemes, namely IND-CCA security and authenticated encryption, respectively. We also illustrate the usefulness of our functionalities in applications, both in simulation-based and game-based security settings.
Last updated:  2009-02-04
On the Security of Tandem-DM
Ewan Fleischmann, Michael Gorski, Stefan Lucks
We provide the first proof of security for Tandem-DM, one of the oldest and most well-known constructions for turning a blockcipher with n-bit blocklength and 2n-bit keylength into a 2n-bit cryptographic hash function. We prove that, when Tandem-DM is instantiated with AES-256, i.e. blocklength 128 bits and keylength 256 bits, any adversary that asks fewer than 2^{120.4} queries cannot find a collision with success probability greater than 1/2. We also prove a bound for the preimage resistance of Tandem-DM. Interestingly, as there is only one other practical construction known (FSE'06, Hirose) turning such an (n,2n)-bit blockcipher into a 2n-bit compression function with provable birthday-type collision resistance, Tandem-DM is one of only two structures that possess this desirable feature.
Last updated:  2009-02-03
New commutative semifields defined by PN multinomials
Lilya Budaghyan, Tor Helleseth
We introduce infinite families of perfect nonlinear Dembowski-Ostrom multinomials over $F_{p^{2k}}$ where $p$ is any odd prime. We prove that for $k$ odd and $p\ne3$ these PN functions define new commutative semifields (in part by studying the nuclei of these semifields). This implies that these functions are CCZ-inequivalent to all previously known PN mappings.
Last updated:  2009-05-01
ON THE SECURITY OF TWO RING SIGNCRYPTION SCHEMES
S. Sree Vivek, S. Sharmila Deva Selvi, C. Pandu Rangan
Ring signcryption is a cryptographic primitive that allows a user to send a message in a confidential, authentic and anonymous way, i.e. the recipient of the message is convinced that the message is valid and comes from one of the ring members, but does not know the actual sender. In this paper, we show attacks on the ring signcryption schemes of Li et al. \cite{FHY08} and Chung et al. \cite{ChungWLC06}. We demonstrate anonymity and confidentiality attacks on the scheme by Li et al. \cite{FHY08} and a confidentiality attack on the scheme by Chung et al. \cite{ChungWLC06}.
Last updated:  2009-12-05
Enhanced Target Collision Resistant Hash Functions Revisited
Mohammad Reza Reyhanitabar, Willy Susilo, Yi Mu
The Enhanced Target Collision Resistance (eTCR) property for a hash function was put forth by Halevi and Krawczyk in Crypto 2006, in conjunction with the randomized hashing mode that is used to realize such a hash function family. eTCR is a strengthened variant of the well-known TCR (or UOWHF) property for a hash function family (i.e. a dedicated-key hash function). The contributions of this paper are twofold. First, we compare the new eTCR property with the well-known collision resistance (CR) property, where both properties are considered for a dedicated-key hash function. We show that there is a separation between the two notions; that is, in general, the eTCR property cannot be claimed to be weaker (or stronger) than the CR property for an arbitrary dedicated-key hash function. Second, we consider the problem of eTCR-preserving domain extension. We study several domain extension methods for this purpose, including (plain, strengthened, and prefix-free) Merkle-Damgård, Randomized Hashing (considered in the dedicated-key hash setting), Shoup, Enveloped Shoup, XOR Linear Hash (XLH), and Linear Hash (LH) methods. Interestingly, we show that the only eTCR-preserving method is a nested variant of LH, which has the drawback of a high key expansion factor. Therefore, it remains interesting to design a new and efficient eTCR-preserving domain extension in the standard model.
Last updated:  2009-01-30
On the Portability of Generalized Schnorr Proofs
Jan Camenisch, Aggelos Kiayias, Moti Yung
The notion of Zero Knowledge Proofs (of knowledge) [ZKP] is central to cryptography; it provides a set of security properties that proved indispensable in concrete protocol design. These properties are defined for any given input and also for any auxiliary verifier private state, as they are aimed at any use of the protocol as a subroutine in a bigger application. Many times, however, moving the theoretical notion to practical designs has been quite problematic. This is due to the fact that the most efficient protocols fail to provide the above ZKP properties {\em for all} possible inputs and verifier states. This situation has created various problems for protocol designers, who have often either introduced imperfect protocols with mistakes or lacking security arguments, or been forced to use much less efficient protocols in order to achieve the required properties. In this work we address this issue by introducing the notion of ``protocol portability,'' a property that identifies input and verifier state distributions under which a protocol becomes a ZKP when called as a subroutine in a sequential execution of a larger application. We then concentrate on the very efficient and heavily employed ``Generalized Schnorr Proofs'' (GSP) and identify the portability of such protocols. We also point to previous protocol weaknesses and errors that have been made in numerous applications throughout the years, due to employment of GSP instances while lacking the notion of portability (primarily in the case of unknown order groups). This demonstrates that cryptographic application designers who care about efficiency need to consider our notion carefully. We provide a compact specification language for GSP protocols that protocol designers can employ. 
Our specification language is consistent with the ad-hoc notation that is currently widely used and it offers automatic derivation of the proof protocol while dictating its portability (i.e., the proper initial state and inputs) and its security guarantees. Thus, our language specifications can be used modularly in designs and proofs. This assures that the protocol implementation can indeed be used as a subroutine that is ZKP in its context. Finally, as a second alternative to designers wishing to use GSPs, we present a modification of GSP protocols that is unconditionally portable (i.e., ZKP) and is still quite efficient. Our constructions are the first such protocols proven secure in the standard model (while the previously known efficient constructions relied on the Random Oracle model).
Last updated:  2009-06-19
Extensions of the Cube Attack based on Low Degree Annihilators
Aileen Zhang, Chu-Wee Lim, Khoongming Khoo, Wei Lei, Josef Pieprzyk
At Crypto 2008, Shamir introduced a new algebraic attack called the cube attack, which allows us to solve black-box polynomials if we are able to tweak the inputs by varying an initialization vector. In a stream cipher setting where the filter function is known, we can extend it to the cube attack with annihilators: By applying the cube attack to Boolean functions for which we can find low-degree multiples (equivalently annihilators), the attack complexity can be improved. When the size of the filter function is smaller than the LFSR, we can improve the attack complexity further by considering a sliding window version of the cube attack with annihilators. Finally, we extend the cube attack to vectorial Boolean functions by finding implicit relations with low-degree polynomials.
Last updated:  2009-07-30
A Trade-Off Between Collision Probability and Key Size in Universal Hashing Using Polynomials
Palash Sarkar
Let $\mathbb{F}$ be a finite field and suppose that a single element of $\mathbb{F}$ is used as an authenticator (or tag). Further, suppose that any message consists of at most $L$ elements of $\mathbb{F}$. For this setting, usual polynomial based universal hashing achieves a collision bound of $(L-1)/|\mathbb{F}|$ using a single element of $\mathbb{F}$ as the key. The well-known multi-linear hashing achieves a collision bound of $1/|\mathbb{F}|$ using $L$ elements of $\mathbb{F}$ as the key. In this work, we present a new universal hash function which achieves a collision bound of $m\lceil\log_m L\rceil/|\mathbb{F}|$, $m\geq 2$, using $1+\lceil\log_m L\rceil$ elements of $\mathbb{F}$ as the key. This provides a new trade-off between key size and collision probability for universal hash functions.
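For context, the baseline scheme mentioned above, polynomial-based universal hashing with a single-element key and collision bound $(L-1)/|\mathbb{F}|$, can be sketched as follows. The prime field and Horner evaluation are illustrative choices, not the paper's new construction:

```python
# Polynomial universal hashing over F = GF(prime):
#   tag(m; k) = m_1 + m_2*k + ... + m_L*k^(L-1)  (mod prime).
# Two distinct equal-length messages collide exactly when the key is a root
# of their (nonzero, degree <= L-1) difference polynomial, so at most L-1
# keys collide, giving the (L-1)/|F| collision bound with a one-element key.
PRIME = 2**61 - 1  # a Mersenne prime; a common, fast field choice

def poly_hash(msg, key, prime=PRIME):
    """Horner evaluation of the message polynomial at the key."""
    acc = 0
    for block in reversed(msg):
        acc = (acc * key + block) % prime
    return acc
```

For example, `poly_hash([1, 2], 5)` evaluates $1 + 2\cdot 5 = 11$; enumerating all keys of a small field confirms that two fixed distinct messages of length $L$ collide for at most $L-1$ keys.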
Last updated:  2009-02-03
On Approximating Addition by Exclusive OR
Palash Sarkar
Let $X^{(1)},X^{(2)},\ldots,X^{(n)}$ be independent and uniformly distributed over the non-negative integers $\{0,1,\ldots,2^i-1\}$; $S^{(n)}=X^{(1)}+X^{(2)}+\cdots+X^{(n)}$ and $L^{(n)}=X^{(1)}\oplus X^{(2)}\oplus \cdots\oplus X^{(n)}$. Denote the $i$-th bits of $S^{(n)}$ and $L^{(n)}$ by $S^{(n)}_i$ and $L^{(n)}_i$ respectively. We show that as $i\rightarrow \infty$, $\Pr[S^{(n)}_i=L^{(n)}_i]\rightarrow \gamma^{(n)}= \frac{1}{2}+\frac{2^{n+1}(2^{n+1}-1)}{2(n+1)}\times\frac{b_{n+1}}{n!}$, where $b_n$ is the $n$-th Bernoulli number. As a consequence, $\gamma^{(2r)}=1/2$ for every $r$; and we show that $\gamma^{(2r+1)}\rightarrow 1/2$ as $r\rightarrow \infty$. For small values of $r$, $\gamma^{(2r+1)}$ is significantly different from $1/2$; for example $\gamma^{(3)}=1/3$ and $\gamma^{(5)}=17/30$. The behaviour of $\gamma^{(n)}$ for even and odd values of $n$ was earlier shown by Staffelbach and Meier without actually obtaining the formula mentioned above. For every fixed $n\geq 2$, we give a simple method for computing $\Pr[S^{(n)}_i=L^{(n)}_i]$ for each $i\geq 0$. The expression involving Bernoulli numbers is arrived at via the Eulerian number triangle which in turn arises in relation to the stationary distribution of a Markov chain formed by the carry values.
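The closed-form limit can be checked mechanically with exact rational arithmetic. This sketch computes Bernoulli numbers by the standard recurrence (with the convention $b_1=-1/2$) and reproduces the values quoted in the abstract, e.g. $\gamma^{(3)}=1/3$, $\gamma^{(5)}=17/30$ and $\gamma^{(2r)}=1/2$:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(m):
    """Bernoulli numbers B_0..B_m via sum_{j<=k} C(k+1,j) B_j = 0 (B_1 = -1/2)."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        s = sum(comb(k + 1, j) * B[j] for j in range(k))
        B.append(-s / (k + 1))
    return B

def gamma(n):
    """Limit of Pr[S_i^{(n)} = L_i^{(n)}] as i -> infinity, per the formula above."""
    b = bernoulli(n + 1)[n + 1]
    return (Fraction(1, 2)
            + Fraction(2**(n + 1) * (2**(n + 1) - 1), 2 * (n + 1)) * b / factorial(n))

assert gamma(3) == Fraction(1, 3) and gamma(5) == Fraction(17, 30)
assert gamma(2) == gamma(4) == Fraction(1, 2)  # b_{n+1} = 0 for even n >= 2
```

The even case falls out immediately: the odd-index Bernoulli numbers $b_3, b_5, \ldots$ vanish, so the correction term is zero and $\gamma^{(2r)}=1/2$.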
Last updated:  2009-01-29
Traceability Codes
Simon R. Blackburn, Tuvi Etzion, Siaw-Lynn Ng
Traceability codes are combinatorial objects introduced by Chor, Fiat and Naor in 1994 to be used in traitor tracing schemes to protect digital content. A $k$-traceability code is used in a scheme to trace the origin of digital content under the assumption that no more than $k$ users collude. It is well known that an error correcting code of high minimum distance is a traceability code. When does this `error correcting construction' produce good traceability codes? The paper explores this question. The paper shows (using probabilistic techniques) that whenever $k$ and $q$ are fixed integers such that $k\geq 2$ and $q\geq k^2-\lceil k/2\rceil+1$, or such that $k=2$ and $q=3$, there exist infinite families of $q$-ary $k$-traceability codes of constant rate. These parameters are of interest since the error correcting construction cannot be used to construct $k$-traceability codes of constant rate for these parameters: suitable error correcting codes do not exist because of the Plotkin bound. This answers a question of Barg and Kabatiansky from 2004. Let $\ell$ be a fixed positive integer. The paper shows that there exists a constant $c$, depending only on $\ell$, such that a $q$-ary $2$-traceability code of length $\ell$ contains at most $cq^{\lceil \ell/4\rceil}$ codewords. When $q$ is a sufficiently large prime power, a suitable Reed--Solomon code may be used to construct a $2$-traceability code containing $q^{\lceil \ell/4\rceil}$ codewords. So this result may be interpreted as implying that the error correcting construction produces good $q$-ary $2$-traceability codes of length $\ell$ when $q$ is large when compared with $\ell$.
Last updated:  2009-01-29
Efficient Protocols for Set Intersection and Pattern Matching with Security Against Malicious and Covert Adversaries
Carmit Hazay, Yehuda Lindell
In this paper we construct efficient secure protocols for \emph{set intersection} and \emph{pattern matching}. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that are based on polynomials. In addition to the above, we also use secure pseudorandom function evaluation in order to achieve secure pattern matching. In this case, we utilize specific properties of the Naor-Reingold pseudorandom function in order to achieve high efficiency. Our results are presented in two adversary models. Our protocol for secure pattern matching and one of our protocols for set intersection achieve security against \emph{malicious adversaries} under a relaxed definition where one corruption case is simulatable and for the other only privacy (formalized through indistinguishability) is guaranteed. We also present a protocol for set intersection that is fully simulatable in the model of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat, but will then be caught with good probability.
Last updated:  2009-01-29
Un-Trusted-HB: Security Vulnerabilities of Trusted-HB
Dmitry Frumkin, Adi Shamir
With increased use of passive RFID tags, the need for secure lightweight identification protocols arose. HB+ is one such protocol, which was proven secure in the detection-based model, but shown breakable by man-in-the-middle attacks. Trusted-HB is a variant of HB+, specifically designed to resist man-in-the-middle attacks. In this paper, we discuss several weaknesses of Trusted-HB, show that the formal security proof provided by its designers is incorrect, and demonstrate how to break it in realistic scenarios.
Last updated:  2009-01-29
Image Encryption by Pixel Property Separation
Karthik Chandrashekar Iyer, Aravinda Subramanya
Pixels in an image are essentially constituted of two properties, position and colour. Pixel Property Separation, a radically different approach to symmetric-key image encryption, separates these properties to disturb the semantics of the image. The scheme operates in two orthogonal stages, each requiring an encryption key. The first stage creates the Position Vector, an ordered set of Pixel Position Information controlled by a set of plaintext-dependent Random Permutations. A bitmap flagging the presence of all the 24-bit colours is generated. The second stage randomly positions the image width and height within the ciphertext and finally applies a byte transposition on the ciphertext bytes. The complete set of image properties, including width, height and pixel position-colour correlation, is obscured, resulting in practically unbreakable encryption. The orthogonality of the stages acts as an anti-catalyst for cryptanalysis: the information retrieved from compromising one stage is totally independent and cannot be used to derive the other. Classical cryptanalytic techniques demand a huge number of attempts, most failing to generate valid encryption information. Plaintext attacks are rendered ineffective by the dependency of the Random Permutations on the plaintext. Linear and differential cryptanalysis are highly inefficient due to high diffusion and confusion. Although the paper describes the algorithm as applied to images, its applicability is not limited to images. The cryptographic strength and the efficiency of operation are independent of the nature of the plaintext.
Last updated:  2009-04-27
On CCZ-equivalence and its use in secondary constructions of bent functions
Lilya Budaghyan, Claude Carlet
We prove that, for bent vectorial functions, CCZ-equivalence coincides with EA-equivalence. However, we show that CCZ-equivalence can be used for constructing bent functions which are new up to CCZ-equivalence. Using this approach we construct classes of nonquadratic bent Boolean and bent vectorial functions.
Last updated:  2009-01-29
Proofs of Retrievability via Hardness Amplification
Yevgeniy Dodis, Salil Vadhan, Daniel Wichs
Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we \begin{itemize} \item Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski~\cite{JuelsK07}, without making any simplifying assumptions on the behavior of the adversary. \item Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters~\cite{ShachamW08}. \item Build the first bounded-use scheme with {\em information-theoretic} security. \end{itemize} The main insight of our work comes from a simple connection between PoR schemes and the notion of {\em hardness amplification}, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of {\em PoR codes}, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
Last updated:  2009-01-25
How to Prove the Security of Practical Cryptosystems with Merkle-Damgård Hashing by Adopting Indifferentiability
Yusuke Naito, Kazuki Yoneyama, Lei Wang, Kazuo Ohta
In this paper, we show that major cryptosystems such as FDH, OAEP, and RSA-KEM are secure under a hash function $MD^h$ with Merkle-Damgård (MD) construction that uses a random oracle compression function $h$. First, we propose two new ideal primitives called Traceable Random Oracle ($\mathcal{TRO}$) and Extension Attack Simulatable Random Oracle ($\mathcal{ERO}$) which are weaker than a random oracle ($\mathcal{RO}$). Second, we show that $MD^h$ is indifferentiable from $\mathcal{LRO}$, $\mathcal{TRO}$ and $\mathcal{ERO}$, where $\mathcal{LRO}$ is Leaky Random Oracle proposed by Yoneyama et al. This result means that if a cryptosystem is secure in these models, then the cryptosystem is secure under $MD^h$ following the indifferentiability theory proposed by Maurer et al. Finally, we prove that OAEP is secure in the $\mathcal{TRO}$ model and RSA-KEM is secure in the $\mathcal{ERO}$ model. Since it is also known that FDH is secure in the $\mathcal{LRO}$ model, as a result, major cryptosystems, FDH, OAEP and RSA-KEM, are secure under $MD^h$, though $MD^h$ is not indifferentiable from $\mathcal{RO}$.
Last updated:  2009-01-25
Key Insulation and Intrusion Resilience Over a Public Channel
Mihir Bellare, Shanshan Duan, Adriana Palacio
Key insulation (KI) and intrusion resilience (IR) are methods to protect a user's key against exposure by utilizing periodic communications with an auxiliary helper. But existing work assumes a secure channel between user and helper. If we want to realize KI or IR in practice, we must realize this secure channel. This paper looks at the question of how to do this when the communication is over what we are more likely to have in practice, namely a public channel such as the Internet or a wireless network. We explain why this problem is not trivial, introduce models and definitions that capture the desired security in a public-channel setting, and provide a complete (and surprising) answer to the question of when KI and IR are possible over a public channel. The information we provide is important to guide practitioners with regard to the usage of KI and IR and also to guide future research in this area.
Last updated:  2009-01-25
On Algebraic Relations of Serpent S-Boxes
Bhupendra Singh, Lexy Alexander, Sanjay Burman
Serpent is a 128-bit block cipher designed by Ross Anderson, Eli Biham and Lars Knudsen as a candidate for the Advanced Encryption Standard (AES). It was a finalist in the AES competition. The winner, Rijndael, got 86 votes at the last AES conference while Serpent got 59 votes [1]. The designers of Serpent claim that Serpent is more secure than Rijndael. In this paper we observe that the nonlinear order of all output bits of the Serpent S-boxes is not 3, as claimed by the designers.
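A claim about the nonlinear order (algebraic degree) of S-box output bits can be checked mechanically: compute the algebraic normal form of each coordinate function with the binary Moebius transform and take the largest monomial weight. The sketch below shows the technique on placeholder S-boxes; Serpent's actual S-box tables are not reproduced here:

```python
def anf_degree(truth_table, n):
    """Algebraic degree of a Boolean function on n variables, given its
    truth table of length 2^n, via the binary Moebius transform."""
    a = list(truth_table)
    for i in range(n):                    # butterfly over each variable
        for x in range(1 << n):
            if x & (1 << i):
                a[x] ^= a[x ^ (1 << i)]   # a[u] = XOR of f(x) over x subset u
    return max((bin(u).count("1") for u in range(1 << n) if a[u]), default=0)

def sbox_degrees(sbox, n=4):
    """Nonlinear order of each output bit of an n-bit S-box."""
    return [anf_degree([(sbox[x] >> bit) & 1 for x in range(1 << n)], n)
            for bit in range(n)]

# Placeholder checks: the identity S-box is linear in every output bit,
# and an AND-of-all-inputs coordinate has full degree 4.
assert sbox_degrees(list(range(16))) == [1, 1, 1, 1]
assert sbox_degrees([1 if x == 15 else 0 for x in range(16)]) == [4, 0, 0, 0]
```

Running `sbox_degrees` on each of Serpent's eight published S-box tables would directly test the designers' degree claim.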
Last updated:  2009-01-25
Common Modulus Attacks on Small Private Exponent RSA and Some Fast Variants (in Practice)
M. Jason Hinek, Charles C. Y. Lam
In this work we re-examine two common modulus attacks on RSA. First, we show that Guo's continued fraction attack works much better in practice than previously expected. Given three instances of RSA with a common modulus $N$ and private exponents each smaller than $N^{0.33}$, the attack can factor the modulus about $93\%$ of the time in practice. The success rate of the attack can be increased up to almost $100\%$ by including a relatively small exhaustive search. Next, we consider Howgrave-Graham and Seifert's lattice-based attack and show that a second necessary condition for the attack exists that limits the bounds (beyond the original bounds) once $n \geq 7$ instances of RSA are used. In particular, by construction, the attack can only succeed when the private exponents are each smaller than $N^{0.5-\epsilon}$, given sufficiently many instances, instead of the original bound of $N^{1-\epsilon}$. In addition, we also consider the effectiveness of the attacks when mounted against multi-prime RSA and Takagi's variant of RSA. For multi-prime RSA, we show that three (or more) instances with a common modulus and private exponents smaller than $N^{1/3-\epsilon}$ are unsafe. For Takagi's variant, we show that three or more instances with a common modulus $N=p^rq$ are unsafe when all the private exponents are smaller than $N^{2/(3(r+1))-\epsilon}$. The results for both variants are obtained using Guo's method and are almost always successful with the inclusion of a small exhaustive search. When only two instances are available, Howgrave-Graham and Seifert's attack can be mounted on multi-prime RSA when the private exponents are smaller than $N^{(3+r)/7r-\epsilon}$, where there are $r$ primes in the modulus.
Last updated:  2009-03-08
Constructions of Truly Practical Secure Protocols using Standard Smartcards
Carmit Hazay, Yehuda Lindell
In this paper we show that using standard smartcards it is possible to construct truly practical secure protocols for a variety of tasks. Our protocols achieve full \emph{simulation-based security} in the presence of \emph{malicious adversaries}, and can be run on very large inputs. We present protocols for secure set intersection, oblivious database search and more. We have also implemented our set intersection protocol in order to show that it is truly practical: on sets of 30,000 elements it takes 20 seconds for one party and 30 minutes for the other. This demonstrates that in settings where physical smartcards can be sent between parties (as in the case of private data mining tasks between security and governmental agencies), it is possible to use secure protocols with proven simulation-based security.
Last updated:  2009-08-13
Key-Exposure Free Chameleon Hashing and Signatures Based on Discrete Logarithm Systems
Xiaofeng Chen, Fangguo Zhang, Haibo Tian, Baodian Wei, Kwangjo Kim
Chameleon signatures simultaneously provide the properties of non-repudiation and non-transferability for the signed message. However, the initial constructions of chameleon signatures suffer from the problem of key exposure. This creates a strong disincentive for the recipient to forge signatures, partially undermining the concept of non-transferability. Recently, some specific constructions of discrete logarithm based chameleon hashing and signatures without key exposure have been presented, albeit in the setting of gap Diffie-Hellman groups with pairings. In this paper, we propose the first key-exposure free chameleon hash and signature scheme based on discrete logarithm systems, without using gap Diffie-Hellman groups. This provides more flexible constructions of efficient key-exposure free chameleon hash and signature schemes. Moreover, one distinguishing advantage of the resulting chameleon signature scheme is that the property of ``message hiding'' or ``message recovery'' can be achieved freely by the signer, i.e., the signer can efficiently prove which message was the original one if he desires.
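The classic discrete-log chameleon hash (in the style of Krawczyk-Rabin) shows both the trapdoor-collision property and the key-exposure problem the abstract addresses: publishing one collision pair reveals the trapdoor. A toy-parameter sketch (real schemes need large primes):

```python
p, q = 23, 11                  # toy safe-prime group; illustration only
g = 4                          # generator of the order-q subgroup of Z_23^*
x = 3                          # trapdoor
h = pow(g, x, p)               # public key h = g^x

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# With the trapdoor x, collisions are easy: pick any m2 and solve for r2.
m1, r1, m2 = 2, 5, 9
r2 = (r1 + (m1 - m2) * pow(x, -1, q)) % q
assert ch(m1, r1) == ch(m2, r2)

# Key exposure: anyone seeing both openings recovers the trapdoor x.
assert (m1 - m2) * pow((r2 - r1) % q, -1, q) % q == x
```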
Last updated:  2009-01-17
On a Conditional Collision Attack on NaSHA-512
S. Markovski, A. Mileva, V. Dimitrova, D. Gligoroski
A collision attack on NaSHA-512 was proposed by L. Ji et al. The claimed complexity of the attack is $2^{192}$. The proposed attack is realized by using a suitable differential pattern. In this note we show that the correct result that can be inferred from their differential pattern is in fact a conditional one. It can be stated correctly as follows: A collision attack on NaSHA-512 of complexity $k=1,2,\dots,2^{320}$ can be performed with an unknown probability of success $p_k$, where $0\le p_1\le p_2\le \dots \le p_{2^{320}}\le 1$. Consequently, the attack proposed by L. Ji et al. can be considered only as a direction for how a possible collision attack on NaSHA-512 could be realized. The birthday attack remains the best possible attack on NaSHA-512.
Last updated:  2009-05-02
NESHA-256, NEw 256-bit Secure Hash Algorithm (Extended Abstract)
Yaser Esmaeili Salehani, Amir Tabatabaei, Mohammad Reza Sohizadeh Abyaneh, Mehdi Mohammad Hassanzadeh
In this paper, we introduce a new dedicated 256-bit hash function: NESHA-256. The recent contest for hash functions held by NIST motivated us to design a new hash function with a parallel structure. The advantages of parallel structures, together with ideas from the design of block-cipher-based hash functions, strengthen our proposed hash function in both security and efficiency. NESHA-256 is designed not only to have higher security but also to be faster than SHA-256: the performance of NESHA-256 is at least 38% better than that of SHA-256 in software. We give security proofs supporting our design against existing known cryptographic attacks on hash functions.
Last updated:  2009-01-17
A Fast Implementation of $\eta_T$ Pairing in Characteristic Three on Intel Core 2 Duo Processor
MITSUNARI Shigeo
We present an efficient implementation of the $\eta_T$ pairing on an Intel Core 2 Duo processor. The processing speed of our implementation achieves 92 $\mu$sec over $\mathbb{F}_{3^{97}}$ and 553 $\mu$sec over $\mathbb{F}_{3^{193}}$ on a 2.6GHz processor.
Last updated:  2013-12-25
Adaptively Secure Two-Party Computation with Erasures
Yehuda Lindell
In the setting of multiparty computation a set of parties with private inputs wish to compute some joint function of their inputs, whilst preserving certain security properties (like privacy and correctness). An adaptively secure protocol is one in which the security properties are preserved even if an adversary can adaptively and dynamically corrupt parties during a computation. This provides a high level of security, that is arguably necessary in today's world of active computer break-ins. Until now, the work on adaptively secure multiparty computation has focused almost exclusively on the setting of an honest majority, and very few works have considered the honest minority and two-party cases. In addition, significant computational and communication costs are incurred by most protocols that achieve adaptive security. In this work, we consider the two-party setting and assume that honest parties may \emph{erase} data. We show that in this model it is possible to securely compute any two-party functionality in the presence of \emph{adaptive semi-honest adversaries}. Furthermore, our protocol remains secure under concurrent general composition (meaning that it remains secure irrespective of the other protocols running together with it). Our protocol is based on Yao's garbled-circuit construction and, importantly, is as efficient as the analogous protocol for static corruptions. We argue that the model of adaptive corruptions with erasures has been unjustifiably neglected and that it deserves much more attention.
Last updated:  2009-07-21
An efficient fuzzy extractor for limited noise
B. Skoric, P. Tuyls
A fuzzy extractor is a security primitive that allows for reproducible extraction of an almost uniform key from a noisy non-uniform source. We analyze a fuzzy extractor scheme that uses universal hash functions for both information reconciliation and privacy amplification. This is a useful scheme when the number of error patterns likely to occur is limited, regardless of the error probabilities. We derive a sharp bound on the uniformity of the extracted key, making use of the concatenation property of universal hash functions and a recent tight formulation of the leftover hash lemma.
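The privacy-amplification step can be sketched with the textbook 2-universal family $h_{a,b}(x) = ((ax+b) \bmod P) \bmod 2^{\ell}$; the prime, seed, and stand-in source value below are illustrative assumptions, not the paper's parameters:

```python
import secrets

P = (1 << 89) - 1   # a Mersenne prime; the field for the hash family

def universal_hash(a, b, x, out_bits):
    """2-universal family h_{a,b}(x) = ((a*x + b) mod P) mod 2^out_bits."""
    return ((a * x + b) % P) % (1 << out_bits)

# Enrollment: pick a public seed (a, b); derive a key from the source value.
a, b = secrets.randbelow(P - 1) + 1, secrets.randbelow(P)
w = 0x1234567890ABCDEF          # stand-in for the reconciled source output
key = universal_hash(a, b, w, 64)

# Reproduction: once information reconciliation has corrected the noise,
# the same public seed on the same value yields the same key.
assert universal_hash(a, b, w, 64) == key
```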
Last updated:  2009-05-18
Nofish - A new stream cipher
Marius Oliver Gheorghita
The proposed algorithm is a synchronous stream cipher, more precisely a binary additive stream cipher, because it uses the XOR function to encrypt the plaintext. The design is based on the HENKOS stream cipher (http://eprint.iacr.org/2004/080.pdf): the functions used in the internal state are kept, while the initialization and key-mixing parts are modified to address its revealed weaknesses. This stream cipher uses a 64-byte (512-bit) secret key and no initialization vector. Nofish is free to use for any non-commercial purposes, and the reference source code can be found in the appendix.
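A binary additive stream cipher has the shape sketched below; the keystream generator here is a generic SHA-256 counter-mode stand-in, not Nofish's HENKOS-derived state update:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Stand-in keystream generator (SHA-256 in counter mode)."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Binary additive cipher: ciphertext = plaintext XOR keystream."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"\x01" * 64            # Nofish uses a 64-byte key and no IV
msg = b"attack at dawn"
ct = xor_crypt(key, msg)
assert xor_crypt(key, ct) == msg   # XOR is an involution: decrypt = encrypt
```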
Last updated:  2009-06-14
Realizing Hash-and-Sign Signatures under Standard Assumptions
Susan Hohenberger, Brent Waters
Currently, there are relatively few instances of ``hash-and-sign'' signatures in the standard model. Moreover, most current instances rely on strong and less studied assumptions such as the Strong RSA and q-Strong Diffie-Hellman assumptions. In this paper, we present a new approach for realizing hash-and-sign signatures in the standard model. In our approach, a signer associates each signature with an index i that represents how many signatures that signer has issued up to that point. Then, to make use of this association, we create simple and efficient techniques that restrict an adversary making q signature requests to forging on an index no greater than 2q. Finally, we develop methods for dealing with this restricted adversary. Our approach requires that a signer maintains a small amount of state --- a counter of the number of signatures issued. We achieve two new realizations for hash-and-sign signatures respectively based on the RSA assumption and the Computational Diffie-Hellman assumption in bilinear groups.
Last updated:  2009-02-22
Security of Verifiably Encrypted Signatures
Markus Rückert, Dominique Schröder
In a verifiably encrypted signature scheme, signers encrypt their signature under the public key of a trusted third party and prove that they did so correctly. The security properties are unforgeability and opacity. Unforgeability states that a malicious signer should not be able to forge verifiably encrypted signatures and opacity prevents extraction from an encrypted signature. This paper proposes two novel fundamental requirements for verifiably encrypted signatures, called \emph{extractability} and \emph{abuse-freeness}, and analyzes their effects on the security model of Boneh et al. Extractability ensures that the trusted third party is always able to extract a valid signature from a valid verifiably encrypted signature, and abuse-freeness guarantees that a malicious signer, who cooperates with the trusted party, is not able to forge a verifiably encrypted signature. We further show that both properties are not covered by the model of Boneh et al., introduced at Eurocrypt 2003.
Last updated:  2009-06-16
Collision Attacks on NaSHA-384/512
Zhimin Li, Licheng Wang, Daofeng Li, Yixian Yang
NaSHA is a family of hash functions submitted by Markovski and Mileva as a SHA-3 candidate. In this paper, we present a collision attack on the hash function NaSHA for the output sizes 384-bit and 512-bit. This attack is based on a weakness in the generation of the state words and the fact that the quasigroup operation used in the compression function is determined by only part of the state words. Its time complexity is about $2^{128}$ with negligible memory, and its success probability is more than $(1- \frac{2}{{2^{64} - 1}})^2$ ($\gg \frac{1}{2}$). This is currently by far the best known cryptanalysis result on this SHA-3 candidate.
Last updated:  2009-05-26
Short Redactable Signatures Using Random Trees
Ee-Chien Chang, Chee Liang Lim, Jia Xu
A redactable signature scheme for a string of objects supports verification even if multiple substrings are removed from the original string. It is important that the redacted string and its signature do not reveal anything about the content of the removed substrings. Existing schemes completely or partially leak a piece of information: the lengths of the removed substrings. Such length information could be crucial for many applications, especially when the removed substring has low entropy. We propose a scheme that can hide the length. Our scheme consists of two components. The first component $\mathcal{H}$, which is a ``collision resistant'' hash, maps a string to an unordered set, whereby existing schemes on unordered sets can then be applied. However, a sequence of random numbers has to be explicitly stored, and thus it produces a large signature of size at least $(mk)$ bits, where $m$ is the number of objects and $k$ is the size of a key sufficiently large for cryptographic operations. The second component uses the RGGM tree, a variant of the GGM tree, to generate the pseudorandom numbers from a short seed, expected to be of size $O(k + tk\log m)$ where $t$ is the number of removed substrings. Unlike the GGM tree, the structure of the proposed RGGM tree is random. By an intriguing statistical property of the random tree, the redacted tree does not reveal the lengths of the substrings removed. The hash function $\mathcal{H}$ and the RGGM tree can be of independent interest.
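The standard GGM construction that RGGM randomizes expands a short seed into many pseudorandom leaves via a length-doubling PRG; a sketch with SHA-256 standing in for the PRG (an assumption for illustration, not the paper's instantiation):

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG stand-in: hash seed||label, split left/right."""
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def ggm_leaf(seed: bytes, index: int, depth: int) -> bytes:
    """Walk from the root to leaf `index` in a depth-`depth` GGM tree."""
    node = seed
    for level in reversed(range(depth)):
        left, right = prg(node)
        node = right if (index >> level) & 1 else left
    return node

seed = b"short-seed"
leaves = [ggm_leaf(seed, i, 4) for i in range(16)]  # 16 pseudorandom leaves
assert len(set(leaves)) == 16
```

Revealing an inner node discloses exactly its subtree of leaves, which is what makes tree shape (and hence, in plain GGM, the number of redacted leaves) visible; RGGM's randomized structure is designed to hide that.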
Last updated:  2009-06-10
On Second-Order Fault Analysis Resistance for CRT-RSA Implementations
Emmanuelle Dottax, Christophe Giraud, Matthieu Rivain, Yannick Sierra
Since their publication in 1996, Fault Attacks have been widely studied from both theoretical and practical points of view, and most cryptographic systems have been shown vulnerable to this kind of attack. Until recently, most of the theoretical fault attacks and countermeasures used a fault model which assumes that the attacker is able to disturb the execution of a cryptographic algorithm only once. However, this approach seems too restrictive since the publication in 2007 of the successful experiment of an attack based on the injection of two faults, namely a second-order fault attack. Amongst the few papers dealing with second-order fault analysis, three countermeasures were published at WISTP'07 and FDTC'07 to protect the RSA cryptosystem using the CRT mode. In this paper, we analyse the security of these countermeasures with respect to the second-order fault model considered by their authors. We show that these countermeasures are not intrinsically resistant and we propose a new method allowing us to implement a CRT-RSA that resists this kind of second-order fault attack.
Last updated:  2009-01-13
Polynomial Runtime and Composability
Dennis Hofheinz, Dominique Unruh, Jörn Müller-Quade
To prove security of a multi-party cryptographic protocol, one often reduces attacks on the protocol to attacks on a suitable computational problem. Thus, if the computational problem is hard, then the protocol is secure. But to allow for a security reduction, the protocol itself and the attack on the protocol must be efficient, i.e., polynomial-time. Of course, the obvious way to enforce an overall polynomial runtime of the protocol is to require each individual protocol machine and adversarial entity to be polynomial-time. However, as the specific case of zero-knowledge protocols demonstrates, an a priori polynomial-time bound on all entities may not be an optimal choice because the running time of some machines needs to depend on that of others. As we want to be able to model arbitrary protocol tasks, we work in the Universal Composability framework (UC). This framework additionally provides strong composability guarantees. We will point out that in the UC setting, finding a useful notion of polynomial-time for the analysis of general protocols is a highly non-trivial task. Our goal in this work is to find a good and useful definition of polynomial-time for multi-party protocols in the UC setting that matches the intuition of what is feasible. A good definition should have the following properties: * Flexibility: All "intuitively feasible" protocols and protocol tasks should be considered polynomial-time. * Soundness: All "intuitively feasible" attacks (i.e., adversaries) should be considered polynomial-time. * Completeness: Only "intuitively feasible" attacks should be considered polynomial-time. In particular, this implies that the security of protocols can be reduced to computational hardness assumptions. * Composability: The induced security notion should support secure (universal) composition of protocols. 
* Simplicity: The notion should be easy to formulate, and for all practical cases, it should be easy to decide whether a protocol or attack runs in polynomial time. The problem of finding a good definition of polynomial time in the UC framework has been considered in a number of works, but no definition satisfying the five above criteria had been found so far. This seemingly simple problem is surprisingly elusive and it is hard to come up with a definition that does not involve many technical artifacts. In this contribution, we give a definition of polynomial time for cryptographic protocols in the UC model, called reactively polynomial, that satisfies all five properties. Our notion is simple and easy to verify. We argue for its flexibility, completeness and soundness with practical examples that are problematic with previous approaches. We give a very general composition theorem for reactively polynomial protocols. The theorem states that arbitrarily many instances of a secure protocol can be used in any larger protocol without sacrificing security. Our proof is technically different from and substantially more involved than proofs for previous protocol composition theorems (for previous definitions of polynomial runtime). We believe that it is precisely this additional proof complexity, which appears only once and for all in the proof of the composition theorem, that makes a useful definition as simple as ours possible.
Last updated:  2009-01-13
Correctness of Li Generalization of RSA Cryptosystem
Roman Popovych
For a given N=pq, with p and q distinct odd primes, and a natural number m, Li introduced a public-key cryptosystem. In the case m=1 the system is just the famous RSA system. We answer Li's question about the correctness of the system.
Last updated:  2009-01-13
Comparing With RSA
Julien Cathalo, David Naccache, Jean-Jacques Quisquater
A multi-set (MS) is a set where an element can occur more than once. MS hash functions (MSHFs) map MSs of arbitrary cardinality to fixed-length strings. This paper introduces a new RSA-based MSHF. The new function is efficient and produces small hashes. We prove that the proposed MSHF is collision-resistant under the assumption of unforgeability of deterministic RSA signatures. In many practical applications, programmers need to compare two (unordered) sets of integers. A trivial solution consists in sorting both sets ($\mathcal{O}(n \log n)$) and comparing them linearly. We show how MS hash functions can be turned into a linear-time, constant-space integer set equality test.
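The idea of an order-independent hash enabling a linear-time, constant-space set equality test can be illustrated with a toy multiplicative construction (an assumption for illustration; the paper's MSHF is RSA-based and collision-resistant under a different argument):

```python
import hashlib

P = (1 << 127) - 1   # prime modulus for the toy construction

def elem_hash(x: int) -> int:
    h = int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big")
    return h % P or 1          # avoid a zero factor

def ms_hash(multiset) -> int:
    """Order-independent hash: the product of element hashes mod P."""
    acc = 1
    for x in multiset:
        acc = acc * elem_hash(x) % P
    return acc

# Linear-time, constant-space unordered equality test: no sorting needed.
assert ms_hash([3, 1, 2, 2]) == ms_hash([2, 1, 2, 3])
assert ms_hash([1, 2, 3]) != ms_hash([1, 2, 4])
```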
Last updated:  2009-01-13
Applying Time-Memory-Data Trade-Off to Meet-in-the-Middle Attack
Jiali Choy, Khoongming Khoo, Chuan-Wen Loe
In this paper, we present several new attacks on multiple encryption block ciphers based on the meet-in-the-middle attack. In the first attack (GDD-MTM), we guess a certain number of secret key bits and apply the meet-in-the-middle attack on multiple ciphertexts. The second attack (TMTO-MTM) is derived from applying the time-memory trade-off attack to the meet-in-the-middle attack on a single ciphertext. We may also use rainbow chains in the table construction to get the Rainbow-MTM attack. The fourth attack (BS-MTM) is defined by combining the time-memory-data trade-off attack proposed by Biryukov and Shamir to the meet-in-the-middle attack on multiple ciphertexts. Lastly, for the final attack (TMD-MTM), we apply the TMTO-Data curve, which demonstrates the general methodology for multiple data trade-offs, to the meet-in-the-middle attack. GDD-MTM requires no pre-processing, but the attack complexity is high while memory requirement is low. In the last four attacks, pre-processing is required but we can achieve lower (faster) online attack complexity at the expense of more memory in comparison with the GDD-MTM attack. To illustrate how the attacks may be used, we applied them in the cryptanalysis of triple DES. In particular, for the BS-MTM attack, we managed to achieve pre-computation and data complexity which are much lower while maintaining almost the same memory and online attack complexity, as compared to a time-memory-data trade-off attack by Biryukov et al. at SAC 2005. In all, our new methodologies offer viable alternatives and provide more flexibility in achieving time-memory-data trade-offs.
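The basic meet-in-the-middle attack underlying all five variants trades a $2^{2k}$ key search for roughly $2 \cdot 2^k$ work plus a table, sketched here on a toy 16-bit double encryption (the "cipher" below is an invertible stand-in, illustrative only):

```python
MASK = 0xFFFF

def enc(k, x):
    """Toy invertible 16-bit 'cipher' (illustration only, not secure)."""
    return ((x ^ k) + ((k << 1) & MASK)) & MASK

def dec(k, y):
    return ((y - ((k << 1) & MASK)) & MASK) ^ k

k1, k2 = 0x1234, 0xBEEF                      # secret double-encryption keys
pairs = [(p, enc(k2, enc(k1, p))) for p in (0x0001, 0x00FF)]

# Meet in the middle: 2 * 2^16 cipher calls instead of 2^32.
p0, c0 = pairs[0]
table = {}
for a in range(1 << 16):                     # forward half-encryptions
    table.setdefault(enc(a, p0), []).append(a)
candidates = [(a, b) for b in range(1 << 16) for a in table.get(dec(b, c0), [])]

# Filter surviving key pairs against a second known plaintext/ciphertext pair.
p1, c1 = pairs[1]
keys = [(a, b) for a, b in candidates if enc(b, enc(a, p1)) == c1]
assert (k1, k2) in keys
```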
Last updated:  2009-01-13
Communication-Efficient Private Protocols for Longest Common Subsequence
Matthew Franklin, Mark Gondree, Payman Mohassel
We design communication efficient two-party and multi-party protocols for the longest common subsequence (LCS) and related problems. Our protocols achieve privacy with respect to passive adversaries, under reasonable cryptographic assumptions. We benefit from the somewhat surprising interplay of an efficient block-retrieval PIR (Gentry-Ramzan, ICALP 2005) with the classic “four Russians” algorithmic design. This result is the first improvement to the communication complexity for this application over generic results (such as Yao’s garbled circuit protocol) and, as such, is interesting as a contribution to the theory of communication efficiency for secure two-party and multiparty applications.
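The underlying (non-private) LCS computation is the classic quadratic dynamic program, which the protocols evaluate obliviously:

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a)*len(b)) dynamic program for the LCS length,
    keeping only one row of the table at a time."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

assert lcs_length("AGGTAB", "GXTXAYB") == 4   # common subsequence "GTAB"
```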
Last updated:  2009-01-13
Huge 2nd-preimages and collisions of Khichidi-1
prasanth Kumar Thandra, S. A. V. Satya Murty
Khichidi-1 is a contestant of SHA-3. In this paper we exploit a weakness in the design of the algorithm, which allows us to propose two kinds of attacks: 1) a second-preimage attack, and 2) a collision attack. Our attacks are applicable to all the versions (224, 256, 384 and 512) and are potentially strong.
Last updated:  2009-01-13
Anonymous signature scheme
Chunbo Ma, Jun Ao
In order to hide the identity of a signer, an anonymous signature scheme is presented in this paper. In this scheme, a signer located in a specified group produces a signature on behalf of the group. The recipient can verify whether the signature is valid and comes from the specified group, while tracing the signature to its source is impossible. The proposed anonymous signature is similar to a ring signature in some respects; for example, there is no manager, and no revocation mechanism against the signer's anonymity. The main difference between these two kinds of signatures is that the group in a ring signature is adaptively constructed by the signer, while the group in our scheme is fixed.
Last updated:  2009-04-01
Fast elliptic-curve cryptography on the Cell Broadband Engine
Neil Costigan, Peter Schwabe
This paper is the first to investigate the power of the Cell Broadband Engine for state-of-the-art public-key cryptography. We present a high-speed implementation of elliptic-curve Diffie-Hellman (ECDH) key exchange for this processor, which needs 697080 cycles on one Synergistic Processor Unit for a scalar multiplication on a 255-bit elliptic curve, including the costs for key verification and key compression. This cycle count is independent of inputs, therefore protecting against timing attacks. This speed relies on a new representation of elements of the underlying finite field suited for the unconventional instruction set of this architecture. Furthermore we demonstrate that an implementation based on the multi-precision integer arithmetic functions provided by IBM's multi-precision math (MPM) library would take at least 2227040 cycles. Comparison with implementations of the same function for other architectures shows that the Cell Broadband Engine is competitive in terms of cost-performance ratio to other recent processors such as the Intel Core 2 for public-key cryptography. Specifically, the state-of-the-art Galbraith-Lin-Scott ECDH software performs 27370 scalar multiplications per second using all four cores of a 2.5GHz Intel Core 2 Quad Q9300 inside a $296 computer, while the new software reported in this paper performs 27474 scalar multiplications per second on a Playstation 3 that costs just $221. Both of these speed reports are for high-security 256-bit elliptic-curve cryptography.
Last updated:  2011-04-04
Cube Attacks on Trivium
S S Bedi, N Rajesh Pillai
This paper discusses the Cube attacks proposed by Dinur and Shamir applied to Trivium. Independent verification of the equations given in Dinur and Shamir's paper was carried out. Experimentation showed that the precomputed equations were not general. They are correct when applied to the class of IVs for which they were computed - where IV bits at locations other than those corresponding to the cube are fixed at 0. When these IV bits are fixed at some other values, the relations do not hold. The probable cause for this is given, and an extra step in the equation-generation method is suggested to take care of such cases.
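The note's observation — a cube sum that yields different superpolys depending on how the non-cube IV bits are fixed — can be reproduced on a toy Boolean polynomial (illustrative, not Trivium's output function):

```python
from itertools import product

def f(v1, v2, v3, k1, k2):
    """Black-box GF(2) polynomial standing in for a cipher output bit."""
    return (v1 & v2 & k1) ^ (v1 & v2 & k2) ^ (v1 & v2 & v3 & k1) ^ (v2 & k1) ^ k2

def cube_sum(v3, k1, k2):
    """XOR f over the cube {v1, v2}, with the non-cube IV bit v3 fixed."""
    acc = 0
    for v1, v2 in product((0, 1), repeat=2):
        acc ^= f(v1, v2, v3, k1, k2)
    return acc

for k1, k2 in product((0, 1), repeat=2):
    assert cube_sum(0, k1, k2) == k1 ^ k2   # superpoly when v3 is fixed at 0
    assert cube_sum(1, k1, k2) == k2        # a different relation when v3 = 1
```

Terms not containing the full monomial v1*v2 cancel over the cube, so the sum equals the superpoly k1 ^ k2 ^ (v3 & k1) — which depends on the non-cube bit v3, exactly the effect described above.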
Last updated:  2009-01-12
Key Predistribution Techniques for Grid-Based Wireless Sensor Networks
Simon R. Blackburn, Tuvi Etzion, Keith M. Martin, Maura B. Paterson
We consider symmetric key predistribution in grid-based wireless sensor networks. Networks consisting of wireless sensor nodes arranged in a grid pattern have many useful applications, including environmental monitoring and agribusiness. The structured physical distribution of nodes in such networks facilitates efficient distribution of keys to the nodes prior to deployment. It has been shown that combinatorial objects known as distinct-difference configurations (DDCs) can be used to construct effective key predistribution schemes (KPSs) for grid-based networks. In this paper we observe that the regular topology of a grid-based network enables an efficient trade-off between the connectivity, resilience and storage requirements of a KPS, and we discuss the balancing of these properties to suit application requirements. We then show how recent results on the construction of DDCs can be used to produce KPSs that achieve the desired balance, and we provide explicit algorithms for the instantiation of these schemes.
Last updated:  2009-01-12
Comparison-Based Key Exchange and the Security of the Numeric Comparison Mode in Bluetooth v2.1
Yehuda Lindell
In this paper we study key exchange protocols in a model where the key exchange takes place between devices with limited displays that can be compared by a human user. If the devices display the same value then the human user is convinced that the key exchange terminated successfully and securely, and if they do not then the user knows that it came under attack. The main result of this paper is a rigorous proof that the numeric comparison mode for device pairing in Bluetooth version 2.1 is secure, under appropriate assumptions regarding the cryptographic functions used. Our proof is in the standard model and in particular does not model any of the functions as random oracles. In order to prove our main result, we present formal definitions for key exchange in this model and show our definition to be equivalent to a simpler definition. This is a useful result of independent interest that facilitates an easier security analysis of protocols in this model.
Last updated:  2009-01-15
Avoid Mask Re-use in Masked Galois Multipliers
D. Canright
This work examines a weakness in re-using masks for masked Galois inversion, specifically in the masked Galois multipliers. Here we show that the mask re-use scheme included in our work [1] cannot result in "perfect masking," regardless of the order in which the terms are added; explicit distributions are derived for each step. The same problem requires new masks in the subfield calculations, not included in [1]. Hence, for resistance to first-order differential attacks, the masked S-box must use distinct, independent masks for the input and output bytes of the masked inverter, and new masks in the subfields, resulting in a larger size. Ref [1]: Canright, D., Batina, L.: A Very Compact "Perfectly Masked" S-Box for AES. In ACNS 2008, LNCS 5037, Springer-Verlag (2008), 446-459
Last updated:  2009-01-15
A Very Compact "Perfectly Masked" S-Box for AES (corrected)
D. Canright, Lejla Batina
Implementations of the Advanced Encryption Standard (AES), including hardware applications with limited resources (e.g., smart cards), may be vulnerable to "side-channel attacks" such as differential power analysis. One countermeasure against such attacks is adding a random mask to the data; this randomizes the statistics of the calculation at the cost of computing "mask corrections." The single nonlinear step in each AES round is the "S-box" (involving a Galois inversion), which incurs the majority of the cost for mask corrections. Oswald et al. showed how the "tower field" representation allows maintaining an additive mask throughout the Galois inverse calculation. This work applies a similar masking strategy to the most compact (unmasked) S-box to date. The result is the most compact masked S-box so far, with "perfect masking" (by the definition of Blömer) giving suitable implementations immunity to first-order differential side-channel attacks.
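Additive (Boolean) masking is trivial for linear operations such as XOR — each share is processed independently, and each share alone is uniformly distributed — which is why the nonlinear Galois inversion dominates the mask-correction cost. A minimal first-order sketch on 8-bit values:

```python
import secrets

def mask(x: int) -> tuple[int, int]:
    """Split an 8-bit value into two shares; each share alone is uniform."""
    m = secrets.randbelow(256)
    return x ^ m, m

def masked_xor(a, b):
    """XOR on shares, never recombining the sensitive values."""
    return (a[0] ^ b[0], a[1] ^ b[1])

x, y = 0x53, 0xCA
sx, sy = mask(x), mask(y)
sz = masked_xor(sx, sy)
assert sz[0] ^ sz[1] == x ^ y    # unmasking recovers the true result
```

For the nonlinear inversion no such share-wise rule exists: the shares must interact, and (as the companion note above shows) re-using masks in those cross terms breaks perfect masking.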
Last updated:  2010-03-12
Optimal Multicast Group Communication
Zhibin Zhou, Dijiang Huang
Many IP multicast based applications, such as Pay-TV and multiplayer games, require controlling the group memberships of senders and receivers. One common solution is to encrypt the data with a session key shared with all authorized senders/receivers. To efficiently update the session key in the event of member removal, many rooted-tree based group key distribution schemes have been proposed. However, most of the existing rooted-tree based schemes are not optimal. In other words, given the O(log N) storage overhead, the communication overhead is not minimized. On the other hand, although the Flat Table scheme achieves optimality, it is rather dismissed due to its vulnerability to collusion attacks. In this paper, we propose a key distribution scheme -- EGK -- that attains the same optimality as Flat Table without collusion vulnerability. EGK also supports dynamic subgroup communication initiated by group members (imagine a virtual chat room in the multicast group). Additionally, EGK provides constant message size and requires O(log N) storage overhead at the group controller, which makes EGK suitable for applications containing a large number of multicasting group members. Moreover, adding members in EGK requires just one multicasting message. EGK is the first work with such features and outperforms all existing schemes.
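The rooted-tree baseline that EGK is compared against (an LKH-style complete binary key tree, used here as an illustrative assumption) gives each member the keys on its leaf-to-root path, so storage — and re-keying cost on removal — is O(log N):

```python
import math

def path_to_root(leaf: int, n_leaves: int) -> list:
    """Node indices on a leaf's path in a complete binary key tree
    (1-based heap layout: node k has parent k // 2)."""
    node = leaf + n_leaves          # leaves occupy positions n .. 2n-1
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

N = 1024
member_keys = path_to_root(5, N)    # key indices this member must store
assert len(member_keys) == int(math.log2(N)) + 1   # O(log N) storage
```

Removing a member invalidates exactly these path keys, so the controller must refresh O(log N) keys — the communication overhead that EGK's optimality claim is about.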
Last updated:  2011-03-02
Hybrid-Secure MPC: Trading Information-Theoretic Robustness for Computational Privacy
Christoph Lucas, Dominik Raub, Ueli Maurer
Most protocols for distributed, fault-tolerant computation, or multi-party computation (MPC), provide security guarantees in an all-or-nothing fashion: If the number of corrupted parties is below a certain threshold, these protocols provide all specified security guarantees. Beyond this threshold, they provide no security guarantees at all. Furthermore, most previous protocols achieve either information-theoretic (IT) security, in which case this threshold is low, or they achieve only computational security, in which case this threshold is high. In contrast, a hybrid-secure protocol provides different security guarantees depending on the set of corrupted parties and the computational power of the adversary, without being aware of the actual adversarial setting. Thus, hybrid-secure MPC protocols allow for graceful degradation of security. We present a hybrid-secure MPC protocol that provides an optimal trade-off between IT robustness and computational privacy: For any robustness parameter r<n/2, we obtain one MPC protocol that is simultaneously IT secure with robustness for up to t<=r actively corrupted parties, IT secure with fairness (no robustness) for up to t<n/2, and computationally secure with agreement on abort (privacy and correctness only) for up to t<n-r. Our construction is secure in the universal composability (UC) framework (based on a network of secure channels, a broadcast channel, and a common reference string). It achieves the bound on the trade-off between robustness and privacy shown by Ishai et al. [CRYPTO'06] and Katz [STOC'07], the bound on fairness shown by Cleve [STOC'86], and the bound on IT security shown by Kilian [STOC'00], and is the first protocol that achieves all these bounds simultaneously. For example, in the special case r=0 our protocol simultaneously achieves non-robust MPC for up to t<n corrupted parties in the computational setting (like Goldreich et al. 
[STOC'87]), while providing security with fairness in the IT setting for up to t<n/2 corrupted parties (like Rabin and Ben-Or [STOC'89] though without robustness).
Last updated:  2009-01-05
A note on Agrawal conjecture
Roman Popovych
We prove that Lenstra's proposition suggesting the existence of many counterexamples to the Agrawal conjecture is true in a more general case. At the same time we obtain a strictly ascending chain of subgroups of the group (Zp[X]/(Cr(X)))* and state the modified conjecture that the set {X-1, X+2} generates a big enough subgroup of this group.
Last updated:  2009-01-04
Homomorphic Trapdoor Commitments to Group Elements
Jens Groth
We present a homomorphic trapdoor commitment to group elements. In contrast, previous homomorphic trapdoor commitment schemes only allow the messages to be exponents. Our commitment scheme is length-reducing, we can make a short commitment to many group elements at once, and it is perfectly hiding and computationally binding. The construction is based on groups with a bilinear map and the binding property follows from the simultaneous triple pairing assumption. While the simultaneous triple pairing assumption is new, we demonstrate that it is implied by the well-known decision linear assumption.
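The exponent-message commitments that this work generalizes are typified by Pedersen's scheme, where $C = g^m h^r$ and multiplying commitments adds the messages. A toy-parameter sketch of that classical baseline (real use requires large primes; this is not Groth's new scheme):

```python
p, q = 23, 11                  # toy safe-prime group; illustration only
g, h = 4, 9                    # generators of the order-q subgroup of Z_23^*

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p; the message m is an exponent."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Homomorphic: the product of commitments commits to the sum of messages.
m1, r1, m2, r2 = 3, 5, 7, 2
assert (commit(m1, r1) * commit(m2, r2)) % p == commit((m1 + m2) % q, (r1 + r2) % q)
```

Groth's contribution is a commitment whose messages are group elements themselves (not exponents), while retaining this homomorphic, length-reducing behavior.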
Last updated:  2009-01-04
Huge Multicollisions and Multipreimages of Hash Functions BLENDER-n
Vlastimil Klima
In this paper we present a multicollision and multipreimage attack on the hash function Blender-n [1] for all output sizes n = 224, 256, 384 and 512. The complexity and memory requirements for finding 2^{2n} multipreimages (multicollisions) of Blender-n are roughly 10 times those of finding a collision for an n/2-bit random hash function. All previous attacks were based on the trick by Joux [2] using many messages; our attacks are based on one message with several fixed points. The state register has eight words. By properly choosing message words, we force half of the register to return to its original state; we then find a collision in the remaining half with complexity 2^{n/4}. Each such collision creates a fixed point in the sequence of states of the state register, and we use 10 such fixed points. Previously known attacks [4, 5] on Blender-n have complexity at least 2^{n/2}; our 2^{2n}-multicollision and multipreimage attacks have complexity 10*2^{n/4}.
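The gap between the claimed cost and earlier attacks can be tabulated by comparing base-2 logarithms of the complexities (an illustrative computation only; `log2_costs` is our own helper):

```python
import math

def log2_costs(n):
    # Base-2 logs of the attack costs for output size n: the new attack costs
    # 10 * 2^(n/4); previously known attacks cost at least 2^(n/2).
    return math.log2(10) + n / 4, n / 2

for n in (224, 256, 384, 512):
    new, old = log2_costs(n)
    print(f"n = {n}: new attack 2^{new:.2f}, previous attacks >= 2^{old:.0f}")
```

Even for the smallest output size (n = 224) the new attack costs about 2^59.3 versus at least 2^112 previously.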
Last updated:  2009-01-04
Impossible Differential Cryptanalysis of Pelican, MT-MAC-AES and PC-MAC-AES
Wei Wang, Xiaoyun Wang, Guangwu Xu
In this paper, the impossible differential cryptanalysis is extended to MAC algorithms \textsc{Pelican}, MT-MAC and PC-MAC based on AES and 4-round AES. First, we collect message pairs that produce the inner near-collision with some specific differences by the birthday attack. Then the impossible differential attack on 4-round AES is implemented using a 3-round impossible differential property. For \textsc{Pelican}, our attack can recover the internal state, which is an equivalent subkey. For MT-MAC-AES, the attack turns out to be a subkey recovery attack directly. The data complexity of the two attacks is $2^{85.5}$ chosen messages, and the time complexity is about $2^{85.5}$ queries. For PC-MAC-AES, we can recover the 256-bit key with $2^{85.5}$ chosen messages and $2^{128}$ queries.
Last updated:  2009-01-26
On Stateless Schemes for Message Authentication Using Pseudorandom Functions
Palash Sarkar
We consider the construction and analysis of pseudorandom functions (PRFs) for message authentication. Earlier work due to Bernstein and Vaudenay shows how to reduce the analysis of PRFs to some probability calculations. We revisit this result and use it to prove some general results on constructions which use a PRF with ``small'' domain to build a PRF with ``large'' domain. These results are then used to analyse several existing and new constructions. Important among them is a simplified proof of a bound on the PRF property of the cipher block chaining (CBC) mode of operation of a block cipher for message authentication codes (MACs). Several existing variants of CBC-MAC are analysed using our framework and new schemes are described. One of the new schemes improves upon the NIST standard CMAC scheme by reducing the number of block cipher invocations by one for messages which are longer than $n$ bits. Next, we consider parallelizable constructions. An improved version of the well-known PMAC scheme is described; the improvement consists of removing the requirement of a discrete log computation in the design stage of PMAC. An earlier parallel construction called the protected counter sum (PCS) had been proposed by Bernstein. PCS uses a keyed compressing function rather than a block cipher. We describe a variant of PMAC which works with a keyed compressing function and, compared to PCS, requires fewer invocations. All our constructions are in the stateless setting, i.e., a setting where the sender and the receiver do not share any state (apart from the common secret key). One aspect of our work is the simple and direct approach to the analysis of PRFs. In particular, we avoid the extensive and heavy machinery of the game-playing technique which is used in most papers on this topic.
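The simplest small-domain-to-large-domain construction in this line of work, CBC chaining, can be sketched as follows. This is a toy illustration only: `toy_prf` is a hash-based stand-in for a block cipher call, and plain CBC-MAC as written is secure only for fixed-length or prefix-free messages (handling arbitrary lengths is exactly what the analysed variants address):

```python
import hashlib

BLOCK = 16  # bytes, i.e. n = 128 bits

def toy_prf(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher call E_K(block). NOT a real PRF; it merely
    # makes the CBC chaining structure below runnable and self-contained.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    # Basic CBC-MAC: 10*-pad the message to a block boundary, then chain the
    # small-domain PRF over the blocks, feeding each output into the next call.
    padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 1) % BLOCK)
    state = bytes(BLOCK)
    for i in range(0, len(padded), BLOCK):
        block = bytes(a ^ b for a, b in zip(state, padded[i:i + BLOCK]))
        state = toy_prf(key, block)
    return state
```

The construction is stateless in the paper's sense: sender and receiver share only the key, and the tag for a message is a deterministic function of key and message.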
Last updated:  2009-11-28
Separating two roles of hashing in one-way message authentication
Uncategorized
L. H. Nguyen, A. W. Roscoe
Show abstract
Uncategorized
We analyse two new and related families of one-way authentication protocols, where a party wants to authenticate its public information to another. In the first, the objective is to do without shared passwords or a PKI, making use of low-bandwidth empirical/authentic channels where messages cannot be faked or modified. The analysis of these leads to a new security principle, termed separation of security concerns, under which protocols should be designed to tackle one-shot attacks and combinatorial search separately. This also leads us to develop a new class of protocols for cases, such as PKI, where a relatively expensive signature mechanism exists. We demonstrate as part of this work that a popular protocol in the area, termed MANA I, neither optimises human effort nor offers as much security as had previously been believed. We offer a number of improved versions of MANA I that provide more security for half the empirical work, using a more general empirical channel.
Last updated:  2009-01-04
Thermocommunication
Julien Brouchier, Nora Dabbous, Tom Kean, Carol Marsh, David Naccache
Looking up -- you realize that one of the twelve light bulbs of your living room chandelier has to be replaced. You turn the electricity off, move a table to the center of the room, put a chair on top of the table and, new bulb in hand, you climb up on top of the chair. As you reach the chandelier, you notice that... all bulbs look alike and that you have forgotten which bulb needed to be changed. Restart all over again? Luckily, an effortless creative solution exists. By just touching the light bulbs you can determine the cold one and replace it! Put differently, information about the device's malfunction leaked out via its temperature...
Last updated:  2009-01-04
A Hardware Analysis of Twisted Edwards Curves for an Elliptic Curve Cryptosystem
Brian Baldwin, Richard Moloney, Andrew Byrne, Gary McGuire, William P. Marnane
This paper presents implementation results of a reconfigurable elliptic curve processor defined over prime fields $GF(p)$. We use this processor to compare a new algorithm for point addition and point doubling operations on the twisted Edwards curves, against a current standard algorithm in use, namely the Double-and-Add. Secure power analysis versions of both algorithms are also examined and compared. The algorithms are implemented on an FPGA, and the speed, area and power performance of each are then evaluated for various modes of circuit operation using parallel processing. To the authors' knowledge, this work introduces the first documented FPGA implementation for computations on twisted Edwards curves over fields $GF(p)$.
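The point operations compared in the paper can be sketched in software. Below is the unified twisted Edwards addition law over a toy prime field (the parameters p = 13, a = 1, d = 2 are ours for illustration, nothing like the cryptographic sizes used on the FPGA; d = 2 is a non-square mod 13 while a = 1 is a square, so the formulas are complete and the denominators never vanish):

```python
# Unified addition on a twisted Edwards curve a*x^2 + y^2 = 1 + d*x^2*y^2
# over GF(p), with toy illustrative parameters.
p, a, d = 13, 1, 2

def inv(x):
    return pow(x, p - 2, p)          # modular inverse via Fermat's little theorem

def on_curve(P):
    x, y = P
    return (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p

def add(P, Q):
    # One formula handles both addition and doubling -- the "unified" property
    # that is attractive for resisting simple power analysis.
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * inv((1 + t) % p) % p
    y3 = (y1 * y2 - a * x1 * x2) * inv((1 - t) % p) % p
    return x3, y3

O = (0, 1)                            # the neutral element
P = (1, 0)                            # a point on the curve: a*1 + 0 = 1 + 0
assert on_curve(P) and add(P, O) == P and on_curve(add(P, P))
```

In hardware the same formulas are evaluated with modular multipliers; the paper's comparison concerns how this addition law performs against Double-and-Add on Weierstrass-style arithmetic.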
Last updated:  2009-10-23
Resolving the Simultaneous Resettability Conjecture and a New Non-Black-Box Simulation Strategy
Vipul Goyal, Amit Sahai
Canetti, Goldreich, Goldwasser, and Micali (STOC 2000) introduced the notion of resettable zero-knowledge proofs, where the protocol must be zero-knowledge even if a cheating verifier can reset the prover and have several interactions in which the prover uses the same random tape. Soon afterwards, Barak, Goldreich, Goldwasser, and Lindell (FOCS 2001) studied the closely related notion of resettable soundness, where the soundness condition of the protocol must hold even if the cheating prover can reset the verifier to have multiple interactions with the same verifier's random tape. The main problem left open by this work was whether it is possible to have a single protocol that is simultaneously resettable zero knowledge and resettably sound. We resolve this question by constructing such a protocol. At the heart of our construction is a new non-black-box simulation strategy, which we believe to be of independent interest. This new strategy allows for simulators which ``marry'' recursive rewinding techniques (common in the context of concurrent simulation) with non-black-box simulation. Previous non-black-box strategies led to exponential blowups in computational complexity in such circumstances, which our new strategy is able to avoid.
Last updated:  2008-12-29
Comments on two multi-server authentication protocols
Yalin Chen, Chun-Hui Huang, Jue-Sam Chou
Recently, Tsai and Liao et al. each proposed a multi-server authentication protocol, claiming that their protocols are secure and can withstand various attacks. However, we have found security loopholes in each protocol, and we show the attacks on their schemes.
Last updated:  2008-12-29
Odd-Char Multivariate Hidden Field Equations
Chia-Hsin Owen Chen, Ming-Shing Chen, Jintai Ding, Fabian Werner, Bo-Yin Yang
We present a multivariate version of Hidden Field Equations (HFE) over a finite field of odd characteristic, with an extra ``embedding'' modifier. Combining these known ideas makes our new MPKC (multivariate public key cryptosystem) more efficient and scalable than any other extant multivariate encryption scheme. Switching to odd characteristic in HFE-like schemes affects how an attacker can make use of field equations. Extensive empirical tests (using MAGMA-2.14, the best commercially available $F_4$ implementation) suggest that our new construction is indeed secure against algebraic attacks using Gröbner basis algorithms. The ``embedding'' serves both to narrow down choices of pre-images and to guard against a possible Kipnis-Shamir type (rank-based) attack. We may hence reasonably argue that for practical sizes, prior attacks take exponential time. We demonstrate that our construction is in fact efficient by implementing practical-sized examples of our ``odd-char HFE'' with 3 variables (``THFE'') over $\mathrm{GF}(31)$. To be precise, our preliminary THFE implementation is $15\times$--$20\times$ the speed of RSA-1024.
Last updated:  2009-01-13
Distinguishing Attack and Second-Preimage Attack on the CBC-like MACs
Keting Jia, Xiaoyun Wang, Zheng Yuan, Guangwu Xu
In this paper, we first present a new distinguisher for CBC-MAC based on a block cipher in Cipher Block Chaining (CBC) mode. It can also be used to distinguish other CBC-like MACs from random functions. The main results of this paper are second-preimage attacks on CBC-MAC and CBC-like MACs, including TMAC, OMAC, CMAC, PC-MAC and MACs based on three-key encipher CBC mode. Instead of exhaustive search, these attacks can be performed with birthday-attack complexity.
Last updated:  2009-02-19
Resettably-Sound Resettable Zero Knowledge Arguments for NP
Yi Deng
We construct resettably-sound resettable zero knowledge arguments for NP based on a standard hardness assumption (the existence of claw-free permutations) in the plain model. This proves the simultaneous resettability conjecture posed by Barak et al. in [FOCS 2001]. Our construction, inspired by the paradigm for designing concurrent zero knowledge protocols, makes crucial use of a tool called an instance-dependent resettably-sound resettable WI argument of knowledge (\textsf{IDWIAOK}), and a special-purpose variant, introduced recently by Deng and Lin in [Eurocrypt 2007]. Roughly speaking, for an NP statement of the form $x_0\vee x_1$, \textsf{IDWIAOK} is an argument for which the resettable WI property holds when both $x_0$ and $x_1$ are YES instances, and the resettably-sound argument of knowledge property holds when $x_0$ is a NO instance. The heart of the simulator for our protocol is a new technique that allows us to embed the (non-black-box) straight-line simulation strategy in the (black-box) recursive rewinding simulation strategy.
Last updated:  2008-12-28
New Impossible Differential Attacks on AES
Jiqiang Lu, Orr Dunkelman, Nathan Keller, Jongsung Kim
In this paper we apply impossible differential attacks to reduced round AES. Using various techniques, including the early abort approach and key schedule considerations, we significantly improve previously known attacks due to Bahrak-Aref and Phan. The improvement of these attacks leads to the best known impossible differential attacks on 7-round AES-128 and AES-192, as well as to the best known impossible differential attacks on 8-round AES-256.
Last updated:  2008-12-28
An Accumulator Based on Bilinear Maps and Efficient Revocation for Anonymous Credentials
Jan Camenisch, Markulf Kohlweiss, Claudio Soriente
The success of electronic authentication systems, be they e-ID card systems or Internet authentication systems such as CardSpace, depends highly on the provided level of user privacy. Thereby, an important requirement is an efficient means for revocation of the authentication credentials. In this paper we consider the problem of revocation for certificate-based privacy-protecting authentication systems. To date, the most efficient solutions for revocation in such systems are based on cryptographic accumulators. Here, an accumulator of all currently valid certificates is published regularly and each user holds a {\em witness} enabling her to prove the validity of her (anonymous) credential while retaining anonymity. Unfortunately, the users' witnesses must be updated at least each time a credential is revoked. For the known solutions, these updates are computationally very expensive for users and/or certificate issuers, which is very problematic as practice shows revocation to be a frequent event. In this paper, we propose a new dynamic accumulator scheme based on bilinear maps and show how to apply it to the problem of revocation of anonymous credentials. In the resulting scheme, proving a credential's validity and updating witnesses both come at (virtually) no cost for credential owners and verifiers. In particular, updating a witness requires the issuer to do only one multiplication per addition or revocation of a credential, and this task can also be delegated to untrusted entities from which a user could simply retrieve the updated witness. We believe that we thereby provide the first authentication system offering privacy protection suitable for implementation with electronic tokens such as eID cards or drivers' licenses.
Last updated:  2008-12-28
Supporting Non-membership Proofs with Bilinear-map Accumulators
Ivan Damgård, Nikos Triandopoulos
In this short note, we present an extension of Nguyen's bilinear-map based accumulator scheme to support \emph{non-membership witnesses} and corresponding \emph{non-membership proofs}, i.e., cryptographic proofs that an element has not been accumulated to a given set. This complements the non-membership proofs developed by Li \emph{et al.} for the RSA accumulator, making the functionality of the bilinear-map accumulator equivalent to that of the RSA accumulator. Our non-membership extension of Nguyen's scheme makes use of the $q$-Strong Diffie-Hellman assumption, on which the security of the original scheme is based.
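The algebra behind such membership and non-membership witnesses can be simulated in the exponent. Below, a group element $g^x$ is represented directly by its exponent mod a prime q, so the pairing $e(g^a, g^b) = e(g,g)^{ab}$ becomes multiplication of exponents; this toy model (all names and parameters are ours) captures only the equations and offers no security, since the trapdoor s is in the clear:

```python
q = 10**9 + 7        # stand-in for the prime group order
s = 12345            # trapdoor (kept secret by the issuer in a real scheme)

def accumulate(S):
    # Represents g^{prod_{y in S} (y + s)}.
    acc = 1
    for y in S:
        acc = acc * (y + s) % q
    return acc

def membership_witness(S, x):
    # Represents g^{prod_{y != x} (y + s)}; check: w * (x + s) == acc.
    w = 1
    for y in S:
        if y != x:
            w = w * (y + s) % q
    return w

def nonmembership_witness(S, x):
    # Write f(X) = prod_{y in S} (X + y) = c(X)*(X + x) + r with r = f(-x);
    # r != 0 exactly when x is not in S. The witness is (c(s), r).
    r = 1
    for y in S:
        r = r * (y - x) % q
    c_at_s = (accumulate(S) - r) * pow(x + s, q - 2, q) % q
    return c_at_s, r

def verify_nonmembership(acc, x, wit):
    # In exponents, the pairing equation reads c(s)*(x + s) + r == acc (mod q).
    c_at_s, r = wit
    return r % q != 0 and (c_at_s * (x + s) + r) % q == acc
```

In the real scheme the verifier sees only group elements and checks these equations through the bilinear map, without ever learning s.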
Last updated:  2008-12-28
A Secure Threshold Anonymous Password-Authenticated Key Exchange Protocol
SeongHan Shin, Kazukuni Kobara, Hideki Imai
At Indocrypt 2005, Viet et al. [22] proposed an anonymous password-authenticated key exchange (PAKE) protocol and its threshold construction, both of which are designed for clients' password-based authentication and anonymity against a passive server, who does not deviate from the protocol. In this paper, we first point out that their threshold construction is completely insecure against off-line dictionary attacks. For threshold t > 1, we propose a secure threshold anonymous PAKE (for short, TAP) protocol with the number of clients n upper-bounded such that n \leq 2 \sqrt{N-1} -1, where N is the dictionary size of passwords. We rigorously prove that the TAP protocol has semantic security of session keys in the random oracle model by showing a reduction to the computational Diffie-Hellman problem. In addition, the TAP protocol provides unconditional anonymity against a passive server. For threshold t=1, we propose an efficient anonymous PAKE protocol that significantly improves efficiency in terms of computation costs and communication bandwidth compared to the original (non-threshold) anonymous PAKE protocol [22].
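The stated bound directly limits how many clients a given password dictionary can support; the small helper below (our own, purely illustrative) evaluates it:

```python
import math

def max_clients(N):
    # Largest number of clients n permitted for a password dictionary of
    # size N under the abstract's bound n <= 2*sqrt(N-1) - 1.
    return int(2 * math.sqrt(N - 1) - 1)

# A 4-digit-PIN dictionary (N = 10^4) supports at most 198 clients.
assert max_clients(10**4) == 198
```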
Last updated:  2008-12-28
Predicate Privacy in Encryption Systems
Uncategorized
Emily Shen, Elaine Shi, Brent Waters
Show abstract
Uncategorized
Predicate encryption is a new encryption paradigm which gives the secret key owner fine-grained control over access to encrypted data. The secret key owner can generate tokens corresponding to predicates. An encryption of a plaintext x can be decrypted using a token corresponding to a predicate f if the plaintext satisfies the predicate, i.e., f(x) = 1. Prior work on public-key predicate encryption has focused on the notion of plaintext privacy, the property that ciphertexts reveal no information about the encrypted plaintext. In this paper, we consider a new notion called predicate privacy, the property that tokens reveal no information about the encoded query predicate. Predicate privacy is inherently impossible to achieve in the public-key setting and has therefore received little attention in prior work. In this work, we consider predicate encryption in the symmetric-key setting and present a symmetric-key predicate encryption scheme which supports inner product queries. We prove that our scheme achieves both plaintext privacy and predicate privacy.
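The predicate class supported by the scheme is inner-product queries. The snippet below illustrates only the functionality f(x) = 1 iff <v, x> = 0 (mod p), not the cryptography; the parameters and encoding are ours:

```python
p = 101  # illustrative modulus for the vector coordinates

def inner_product_predicate(v, x):
    # A token for vector v decrypts a ciphertext with attribute vector x
    # exactly when their inner product vanishes mod p.
    return sum(vi * xi for vi, xi in zip(v, x)) % p == 0

# Example: the equality test "attribute == 5" encodes as v = (5, -1) and
# attribute a as x = (1, a), since <v, x> = 5 - a.
assert inner_product_predicate((5, -1), (1, 5))
assert not inner_product_predicate((5, -1), (1, 7))
```

Predicate privacy then demands that a token for v = (5, -1) reveals nothing about the encoded query, a property the paper shows is achievable in the symmetric-key setting.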