
Last updated:  2009-09-04
One-time-password-authenticated key exchange
Kenneth G. Paterson, Douglas Stebila
To reduce the damage of phishing and spyware attacks, banks, governments, and other security-sensitive industries are deploying one-time password systems, where users have many passwords and use each password only once. If a single password is compromised, it can only be used to impersonate the user once, limiting the damage caused. However, existing practical approaches to one-time passwords have been susceptible to sophisticated phishing attacks. We give a formal security treatment of this important practical problem. We consider the use of one-time passwords in the context of password-authenticated key exchange (PAKE), which allows for mutual authentication, session key agreement, and resistance to phishing attacks. We describe a security model for the use of one-time passwords, explicitly considering the compromise of past (and future) one-time passwords, and show a general technique for building a secure one-time-PAKE protocol from any secure PAKE protocol. Our techniques also allow for the secure use of pseudorandomly generated and time-dependent passwords.
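The "pseudorandomly generated" passwords mentioned above are in the spirit of the classic Lamport hash-chain OTP (not this paper's PAKE construction). A minimal sketch, assuming SHA-256 as the one-way function: the server stores only the chain tip, and each revealed preimage is usable exactly once.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """chain[i] = H^i(seed).  The server registers chain[n]; the user
    reveals chain[n-1], chain[n-2], ... as successive one-time passwords."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def verify(server_state: bytes, otp: bytes) -> bool:
    # Accept iff hashing the submitted password yields the stored value.
    return H(otp) == server_state

seed = b"user-secret-seed"          # hypothetical user secret
chain = make_chain(seed, 100)
server = chain[100]                 # tip registered with the server
for i in range(99, 95, -1):         # spend four one-time passwords
    assert verify(server, chain[i])
    server = chain[i]               # server rolls its state back one step
```

A compromised password `chain[i]` only lets an attacker impersonate the user once, exactly the damage-limiting property the abstract describes; the phishing resistance of one-time PAKE is a separate, stronger goal.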
Last updated:  2009-09-04
Precise Time and Space Simulatable Zero-Knowledge
Ning Ding, Dawu Gu
Traditionally, the definition of zero-knowledge states that an interactive proof of $x\in L$ provides zero (additional) knowledge if the view of any \emph{polynomial-time} verifier can be reconstructed by a \emph{polynomial-time} simulator. Since this definition only requires that the worst-case running-times of the verifier and simulator are polynomial, zero-knowledge becomes a worst-case notion. In STOC'06, Micali and Pass proposed a new notion of precise zero-knowledge, which captures the idea that the view of any verifier in every interaction can be reconstructed in (almost) the same time (i.e., the view can be ``indistinguishably reconstructed''). This is the strongest notion among the known efforts to make the definition of zero-knowledge precise. However, every algorithm consumes two kinds of computational resources: time and space. Although the view of a verifier in the interaction of a precise zero-knowledge protocol can be reconstructed in almost the same time, the simulator may run in very large space while the verifier runs in very small space. In this case it is questionable whether the verifier is truly indifferent between taking part in the interaction and running the simulator. Thus the notion of precise zero-knowledge may still be insufficient, which shows that making the definition of zero-knowledge precise needs further investigation. In this paper, we propose a new notion of precise time and space simulatable zero-knowledge (PTSSZK), which captures the idea that the view of any verifier in each interaction can be reconstructed \emph{not only} in the same time, \emph{but also} in the same space. We construct the first PTSSZK proofs and arguments with simultaneous linear time and linear space precisions for all languages in $\NP$. 
Our protocols do not use noticeably more rounds than the known precise zero-knowledge protocols, and the probability analysis of the successful extraction in the new simulation strategy may be of independent interest.
Last updated:  2009-09-04
Efficiently from Semi-honest to Malicious OT via OLFE
Jürg Wullschleger
A combiner securely implements a functionality out of a set of implementations of another functionality, some of which may be insecure. We present two efficient combiners for oblivious linear function evaluation (OLFE). The first is a constant-rate OLFE combiner in the semi-honest model; the second implements Rabin string oblivious transfer (RabinOT) from OLFE in the malicious model. As an application, we show a very efficient reduction in the malicious model of RabinOT over strings to one-out-of-two oblivious transfer over bits (OT) that is only secure in the semi-honest model. For strings of size $\ell = \omega(k^2)$, our reduction uses only $4 \ell + o(\ell)$ instances of OT, while previous results required $\Omega(\ell k^2)$. Our new reduction leads to an efficiency improvement for general multi-party computation (MPC) based on semi-honest OT, and makes it almost as efficient as MPC based on malicious OT. All reductions are unconditionally secure, black-box, universally composable and secure against adaptive adversaries.
Last updated:  2013-05-29
Efficient Verifiable Escrow and Fair Exchange with Trusted Hardware
Stephen R. Tate, Roopa Vishwanathan
At the heart of many fair exchange problems is verifiable escrow: a sender encrypts some value using the public key of a trusted party (called the recovery agent), and then must convince the receiver of the ciphertext that the corresponding plaintext satisfies some property (e.g., it contains the sender's signature on a contract). Previous solutions to this problem are interactive, and often rely on communication-intensive cut-and-choose zero-knowledge proofs. In this paper, we provide a solution that uses generic trusted hardware to create an efficient, non-interactive verifiable escrow scheme. Our solution allows the protocol to use a set of recovery agents with a threshold access structure, realizing the \emph{verifiable group escrow} notion which was informally introduced by Camenisch and Damgard and which we formalize here. Finally, this paper shows how this new non-interactive verifiable escrow scheme can be used to create an efficient optimistic protocol for fair exchange of signatures.
Last updated:  2009-09-04
Cheating Detection and Cheater Identification in CRT-based Secret Sharing Schemes
Daniel Pasaila, Vlad Alexa, Sorin Iftene
In this paper we analyze the cheating detection and cheater identification problems for secret sharing schemes based on the Chinese remainder theorem (CRT), more exactly for the Mignotte [1] and Asmuth-Bloom [2] schemes. We prove that the majority of the solutions for Shamir’s scheme [3] can be translated to these schemes and, moreover, that there are some interesting specific solutions.
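For readers unfamiliar with CRT-based sharing, here is a minimal sketch of the underlying Mignotte scheme (the object being analyzed, not the paper's detection machinery), with toy parameters chosen to satisfy the Mignotte sequence condition:

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem: the unique x mod prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # Mi^{-1} mod m (Python 3.8+)
    return x % M

def mignotte_share(secret, moduli):
    # Share i is simply the secret reduced modulo the i-th modulus.
    return [secret % m for m in moduli]

# (2,3)-threshold scheme with pairwise coprime moduli.
moduli, k, secret = [11, 13, 17], 2, 100
# Mignotte condition: prod of k smallest > secret > prod of k-1 largest.
assert 11 * 13 > secret > 17
shares = mignotte_share(secret, moduli)
# Any k shares recover the secret by CRT, since secret < prod of any k moduli:
assert crt([shares[0], shares[2]], [11, 17]) == secret
```

A cheater submitting a wrong share makes the CRT output fall outside the legal secret range with high probability, which is the kind of structural fact cheating-detection solutions build on.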
Last updated:  2010-07-27
Cryptanalysis and Security Enhancement on the Generation of Mu-Varadharajan Electronic Voting Protocol
Vahid Jahandideh, Amir S. Mortazavi, Yaser Baseri, Javad Mohajeri
Mu and Varadharajan proposed an electronic voting scheme and claimed that their scheme authenticates the voters, protects their anonymity, and detects the identity of double voters. Due to some weaknesses in the Mu-Varadharajan scheme, several modified schemes have been proposed by Lin et al., Hwang et al., Rodriguez-Henriquez et al. and Asaar et al.; however, this paper shows that these schemes suffer from weaknesses in fulfilling the claimed properties. For this purpose, we take the Hwang et al. scheme as a case study and apply our new attacks to it. We also consider the applicability of the attacks to the other schemes mentioned. In addition, we present a new scheme and show that it resists the proposed attacks without losing efficiency.
Last updated:  2011-03-05
Double Voter Perceptible Blind Signature Based Electronic Voting Protocol
Yaser Baseri, Amir S. Mortazavi, Maryam Rajabzadeh Asaar, Mohsen Pourpouneh, Javad Mohajeri
Mu et al. have proposed an electronic voting protocol and claimed that it protects the anonymity of voters, detects double voting and authenticates eligible voters. It has been shown that it neither protects voters' privacy nor prevents double voting. Since then, several schemes have been presented to fulfill these properties; however, many of them suffer from the same weaknesses. In this paper, taking the Asadpour et al. scheme as one of the latest and showing its weaknesses, we propose a new voting scheme which is immune to the weaknesses of previous schemes without losing efficiency. The scheme is based on a special structure which directly uses the identity of the voter, hides it in that structure and reveals it after double voting. We also show that the security of this scheme depends on the hardness of the RSA cryptosystem, the Discrete Logarithm problem and the Representation problem.
Last updated:  2009-09-01
Utilizing postponed ephemeral and pseudo-static keys in tripartite and identity-based key agreement protocols
Atsushi Fujioka, Koutarou Suzuki, Berkant Ustaoglu
We propose a new one-round implicitly authenticated three-party protocol that extends Joux's protocol, as well as a two-party identity-based protocol. Our protocols have a single communication round that consists of ephemeral (one-time) public keys along with certificates in the tripartite protocol, and identities in the identity-based setting. As such, our protocols are communication efficient and furthermore do not require an enhanced message format.
Last updated:  2009-09-21
Attacks on {RFID}-Based Electronic Voting Systems
Yossef Oren, Avishai Wool
Many secure systems, such as contactless credit cards and secure entrance systems, are built with contactless smart-card RFID technologies. In many cases these systems are claimed to be secure based on the assumption that readers and tags need to be in close proximity (about 5 cm) in order to communicate. However, it is known that this proximity assumption is false: relay attacks are a class of hardware-based attacks which compromise the safety of such systems by dramatically extending the interrogation range of the contactless system. Interestingly, the proposed Israeli e-voting scheme is based on contactless smartcards. In this work we show how the proposed system can be completely compromised using low-cost relay attacks. Our attacks allow an adversary to read out all votes already cast into the ballot box, suppress the votes of one or several voters, rewrite votes at will and even completely disqualify all votes in a single voting station. Our attacks are easy to mount, very difficult to detect, and compromise both the confidentiality and the integrity of the election system.
Last updated:  2009-09-25
How to Construct Identity-Based Signatures without the Key Escrow Problem
Tsz Hon Yuen, Willy Susilo, Yi Mu
The inherent key escrow problem is one of the main reasons for the slow adoption of identity-based cryptography. The existing solution for mitigating the key escrow problem is by adopting multiple Private Key Generators (PKGs). Recently, there was a proposal that attempted to reduce the trust of the PKG by allowing a malicious PKG to be caught if he reveals the user's identity-based secret key illegally. Nonetheless, the proposal does not consider that the PKG can simply decrypt the ciphertext instead of revealing the secret key itself (in the case of identity-based encryption schemes). The aim of this paper is to present an escrow-free identity-based signature (IBS) scheme, in which the malicious PKG will be caught if it releases a signature on behalf of the user but signed by itself. We present a formal model to capture such a scheme and provide a concrete construction.
Last updated:  2009-09-01
Higher-order Masking and Shuffling for Software Implementations of Block Ciphers
Matthieu Rivain, Emmanuel Prouff, Julien Doget
Differential Power Analysis (DPA) is a powerful side channel key recovery attack that efficiently breaks block cipher implementations. In software, two main techniques are usually applied to thwart it: masking and operation shuffling. To benefit from the advantages of both techniques, recent works have proposed to combine them. However, the schemes designed until now only provide limited resistance levels, and some advanced DPA attacks have turned out to break them. In this paper, we investigate the combination of masking and shuffling. We moreover extend the approach with the use of higher-order masking and show that it enables a significant improvement of the security level of such a scheme. We first conduct a theoretical analysis in which the efficiency of advanced DPA attacks targeting masking and shuffling is quantified. Based on this analysis, we design a generic scheme combining higher-order masking and shuffling. This scheme is scalable and its security parameters can be chosen according to any desired resistance level. As an illustration, we apply it to protect a software implementation of AES, for which we give several security/efficiency trade-offs.
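The two countermeasures being combined can be illustrated with a first-order sketch (not the paper's higher-order scheme): an S-box layer where every intermediate value is Boolean-masked via table recomputation, and the bytes are processed in a random order. A random table stands in for the AES S-box.

```python
import secrets

SBOX = list(range(256))
secrets.SystemRandom().shuffle(SBOX)   # stand-in for the AES S-box

def masked_shuffled_sub_bytes(state, sbox):
    """First-order masking (table recomputation) + operation shuffling."""
    m_in = secrets.randbelow(256)
    m_out = secrets.randbelow(256)
    # Recomputed masked table: S'(x ^ m_in) = S(x) ^ m_out.
    masked_sbox = [0] * 256
    for x in range(256):
        masked_sbox[x ^ m_in] = sbox[x] ^ m_out
    masked = [b ^ m_in for b in state]     # only masked bytes are handled
    order = list(range(len(state)))
    secrets.SystemRandom().shuffle(order)  # operation shuffling
    out = [0] * len(state)
    for i in order:                        # random processing order
        out[i] = masked_sbox[masked[i]]
    return [b ^ m_out for b in out]        # unmask at the very end

state = [secrets.randbelow(256) for _ in range(16)]
assert masked_shuffled_sub_bytes(state, SBOX) == [SBOX[b] for b in state]
```

A DPA trace then sees only randomized values at randomized times; higher-order masking, as in the paper, splits each byte into more than two shares to resist attacks that combine several leakage points.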
Last updated:  2009-09-01
An Efficient Method for Random Delay Generation in Embedded Software
Jean-Sébastien Coron, Ilya Kizhvatov
Random delays are a countermeasure against a range of side channel and fault attacks that is often implemented in embedded software. We propose a new method for generation of random delays and a criterion for measuring the efficiency of a random delay countermeasure. We implement this new method along with the existing ones on an 8-bit platform and mount practical side-channel attacks against the implementations. We show that the new method is significantly more secure in practice than the previously published solutions and also more lightweight.
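One reasonable way to picture the kind of efficiency criterion the abstract mentions (this is an illustrative simulation, not the paper's exact metric or its new generation method): the attacker faces the variance of the cumulative delay at the targeted operation, while the defender pays its mean.

```python
import random
import statistics

def cumulative_delays(num_delays, max_delay, trials=2000):
    """Sample the cumulative delay accumulated before the attacked
    operation, with each delay drawn uniformly from [0, max_delay]."""
    return [sum(random.randint(0, max_delay) for _ in range(num_delays))
            for _ in range(trials)]

samples = cumulative_delays(num_delays=32, max_delay=100)
mean = statistics.mean(samples)       # performance overhead (defender pays)
var = statistics.variance(samples)    # trace misalignment (attacker faces)
print(f"mean overhead ~ {mean:.0f}, variance ~ {var:.0f}")
# For i.i.d. uniform delays both mean and variance grow linearly in
# num_delays, so the variance-per-overhead ratio is fixed; the point of
# a better generation method is to improve that ratio.
```

This makes it easy to compare candidate delay distributions empirically: keep the mean fixed and measure which distribution yields the larger cumulative variance.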
Last updated:  2009-09-01
Subtleties in the Definition of IND-CCA: When and How Should Challenge-Decryption be Disallowed?
Mihir Bellare, Dennis Hofheinz, Eike Kiltz
The definition of IND-CCA disallows an adversary from querying the challenge ciphertext to its decryption oracle. We point out that there are several ways to formalize this. We show that, surprisingly, for public-key encryption the resulting notions are not all equivalent. We then consider the same question for key-encapsulation mechanisms (KEMs) and show that in this case the four notions ARE all equivalent. Our discoveries are another manifestation of the subtleties that make the study of cryptography so attractive and are important towards achieving the definitional clarity and unity required for firm foundations.
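The subtlety lies in where, mechanically, the "no challenge decryption" check sits in the security game. A runnable skeleton of one formalization (with a deliberately insecure toy cipher, used only so the game executes; the variants the paper distinguishes differ in details such as how pre-challenge queries are treated):

```python
import secrets

class ToyScheme:
    """Insecure XOR 'encryption'; only here to make the game runnable."""
    def keygen(self):
        return secrets.token_bytes(16)
    def enc(self, k, m):
        return bytes(a ^ b for a, b in zip(k, m))
    def dec(self, k, c):
        return bytes(a ^ b for a, b in zip(k, c))

def ind_cca_game(scheme, adversary, b):
    k = scheme.keygen()
    challenge = {"c": None}
    def dec_oracle(c):
        # One way to formalize the restriction: refuse to decrypt the
        # challenge ciphertext once it exists.  Other formalizations
        # differ, e.g. in whether a pre-challenge query that happens to
        # equal the eventual challenge also disqualifies the adversary.
        if challenge["c"] is not None and c == challenge["c"]:
            return None   # "invalid"
        return scheme.dec(k, c)
    def lor_oracle(m0, m1):
        challenge["c"] = scheme.enc(k, m0 if b == 0 else m1)
        return challenge["c"]
    return adversary(dec_oracle, lor_oracle)

def adversary(dec, lor):
    c = lor(b"\x00" * 16, b"\xff" * 16)
    assert dec(c) is None                  # challenge decryption refused
    assert dec(b"\x01" * 16) is not None   # other queries still answered
    return 0

ind_cca_game(ToyScheme(), adversary, b=1)
```

Writing the game out this concretely makes it visible that the check placement is a genuine modeling choice rather than a cosmetic detail, which is the paper's starting observation.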
Last updated:  2009-09-01
More Differential Paths of TIB3
Harry Wiggins, Philip Hawkes, Gregory G. Rose, Cameron McDonald
The TIB3-256 hashing algorithm [3] is a first round candidate in the SHA-3 competition [2]. Properties of the message expansion and the PHTX function are observed, and then exploited to create new high-probability differential paths through the compression function. Examples conforming to the differential paths are presented. Only one of these differential paths can be applied to the tweaked version of TIB3v2 [4]. Due to the dual-block input mode used in TIB3 and TIB3v2, these differential paths do not seem extensible to the full hash functions. Note: In the time between when this paper was written and when the paper was made public, the SHA-3 Round 2 Candidates were announced, and TIB3 had been eliminated from the competition.
Last updated:  2009-09-01
KronCrypt - A New Symmetric Cryptosystem Based on Kronecker's Approximation Theorem
Carsten Elsner, Martin Schmidt
In this paper we show how to use an old mathematical concept of diophantine analysis, the approximation theorem of Kronecker, in symmetric cryptography. As a first practical application we propose and analyze the new symmetric 128-bit block cipher KronCrypt. The cipher is a 4-round Feistel network with a non-bijective round function f made up of a variable number of large key-dependent S-boxes, XORs and modular additions. Its key length is variable but not less than 128 bit. The main innovation of KronCrypt in the area of symmetric cryptography is the fact that the key-dependent S-boxes are based upon a constructive proof of the approximation theorem of Kronecker used as a boolean function. We prove the correctness of our concept in general and show how we designed the new cipher KronCrypt. Furthermore, results concerning statistical behaviour, i.e. confusion, diffusion and completeness, and differential cryptanalysis are presented.
Last updated:  2009-09-01
Attacks Against Permute-Transform-Xor Compression Functions and Spectral Hash
Ethan Heilman
This paper presents an attack on the strong collision resistance of the Spectral Hash SHA-3 candidate. Spectral-Hash (shash) is a Merkle-Damgård based hash function, carefully designed to resist all known cryptographic attacks. To the best of our knowledge, our attack is the only known attack against the shash algorithm. We exploit the fundamental structure of the algorithm, completely bypassing the hash function's formidable cryptographic protections. Our attack is presented in three stages. First, we define the family of functions which have the structure we wish to exploit. We call members of this family PTX functions. Next, we show that all PTX functions, including functions which use random oracles, are vulnerable to our collision attack. Finally, we reformulate the shash compression function showing that it is a PTX function and thus vulnerable. We present results on a practical implementation of our attack, generating collisions for shash in less than a second on a typical desktop computer.
Last updated:  2009-09-01
Security Bounds for the Design of Code-based Cryptosystems
Matthieu Finiasz, Nicolas Sendrier
Code-based cryptography is often viewed as an interesting ``Post-Quantum'' alternative to the classical number theory cryptography. Unlike many other such alternatives, it has the convenient advantage of having only a few, well identified, attack algorithms. However, improvements to these algorithms have made their effective complexity difficult to compute. We give here some lower bounds on the work factor of idealized versions of these algorithms, taking into account all possible tweaks which could improve their practical complexity. The aim of this article is to help designers select durably secure parameters.
Last updated:  2009-09-01
Three Improved Algorithms for Multi-path Key Establishment in Sensor Networks Using Protocols for Secure Message Transmission
Jiang Wu, Douglas R. Stinson
In this paper, we propose a security model to capture active attacks against multi-path key establishment (MPKE) in sensor networks. Our model strengthens previous models to capture more attacks and achieve essential security goals for multi-path key establishment. In this model, we can apply protocols for perfectly secure message transmission to solve the multi-path key establishment problem. We propose a simple new protocol for optimal one-round perfectly secure message transmission based on Reed-Solomon codes. Then we use this protocol to obtain two new multi-path key establishment schemes that can be applied provided that fewer than one third of the paths are controlled by the adversary. Finally, we describe another MPKE scheme that tolerates a higher fraction (less than 1/2) of paths controlled by the adversary. This scheme is based on a new protocol for a weakened version of message transmission, which is very simple and efficient. Our multi-path key establishment schemes achieve improved security and lower communication complexity, as compared to previous schemes.
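The Reed-Solomon mechanism behind one-round perfectly secure message transmission can be sketched concretely (a toy over a small field with brute-force decoding, not the paper's optimized protocol): the secret is $f(0)$ for a random degree-$t$ polynomial, one evaluation travels over each of $n = 3t+1$ paths, and the receiver corrects up to $t$ corrupted paths.

```python
import random
from itertools import combinations

P = 257  # prime field for the toy example

def interp_eval(points, x):
    """Lagrange interpolation over GF(P), evaluated at x."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def send(secret, n, t):
    """Send f(i) on path i, for a random degree-t f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [f(i) for i in range(1, n + 1)]

def receive(shares, n, t):
    """Brute-force RS decoding: the correct degree-t polynomial agrees
    with >= n - t = 2t+1 received shares; any other degree-t polynomial
    agrees with at most t correct + t corrupted = 2t, so it is unique."""
    pts = list(enumerate(shares, start=1))
    for subset in combinations(pts, t + 1):
        agree = sum(interp_eval(list(subset), x) == y for x, y in pts)
        if agree >= 2 * t + 1:
            return interp_eval(list(subset), 0)

t, n = 1, 4
shares = send(42, n, t)
shares[2] = (shares[2] + 7) % P   # adversary corrupts one path
assert receive(shares, n, t) == 42
```

Privacy follows from the same algebra: any $t$ shares of a random degree-$t$ polynomial reveal nothing about $f(0)$, so an adversary controlling fewer than a third of the paths learns nothing and changes nothing.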
Last updated:  2009-09-01
Distinguishing Attacks on Stream Ciphers Based on Arrays of Pseudo-random Words
Nathan Keller, Stephen D. Miller
In numerous modern stream ciphers, the internal state consists of a large array of pseudo-random words, and the output key-stream is a relatively simple function of the state. In [Paul-Preneel], it was heuristically shown that in various cases this structure may lead to distinguishing attacks on the cipher. In this paper we further investigate this structural attack. We present a rigorous proof of the main probabilistic claim used in the attack in the basic cases, and demonstrate by examining a concrete example (the cipher SN3) that the heuristic assumptions of the attack are remarkably precise in more complicated cases. Furthermore, we use the general technique to devise a distinguishing attack on the stream cipher MV3 requiring $2^{82}$ words of key-stream. Unlike the attacks in [Paul-Preneel], our attack does not concentrate on the least significant bits of the words, thus allowing us to handle the combination of more operations (XORs, modular additions and multiplications, and rotations by a fixed number of bits) in the update and output rules of the cipher.
Last updated:  2017-12-08
Improved Garbled Circuit Building Blocks and Applications to Auctions and Computing Minima
Vladimir Kolesnikov, Ahmad-Reza Sadeghi, Thomas Schneider
We consider generic Garbled Circuit (GC)-based techniques for Secure Function Evaluation (SFE) in the semi-honest model. We describe efficient GC constructions for addition, subtraction, multiplication, and comparison functions. Our circuits for subtraction and comparison are approximately two times smaller (in terms of garbled tables) than previous constructions. This implies corresponding computation and communication improvements in SFE of functions using our efficient building blocks. The techniques rely on the recently proposed ``free XOR'' GC technique. Further, we present concrete and detailed improved GC protocols for the problem of secure integer comparison, and related problems of auctions, minimum selection, and minimal distance. Performance improvement comes both from building on our efficient basic blocks and several problem-specific GC optimizations. We provide precise cost evaluation of our constructions, which serves as a baseline for future protocols.
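For context, a single garbled gate, the unit being counted when the abstract says circuits are "two times smaller in terms of garbled tables", can be sketched as follows. This is a textbook garbled AND gate using trial decryption with a zero tag (a simplification: real constructions use point-and-permute, and the free-XOR trick the paper builds on makes XOR gates cost no table at all).

```python
import hashlib
import secrets

LABEL = 16  # bytes per wire label

def H(a, b):
    return hashlib.sha256(a + b).digest()

def garble_and():
    """Two random labels per wire; four rows, each encrypting the output
    label under the matching pair of input labels.  A zero tag lets the
    evaluator recognize the one row it can decrypt."""
    wa = [secrets.token_bytes(LABEL) for _ in range(2)]  # wire a: 0/1
    wb = [secrets.token_bytes(LABEL) for _ in range(2)]
    wc = [secrets.token_bytes(LABEL) for _ in range(2)]
    table = []
    for x in range(2):
        for y in range(2):
            pad = H(wa[x], wb[y])
            pt = wc[x & y] + b"\x00" * (32 - LABEL)
            table.append(bytes(p ^ q for p, q in zip(pad, pt)))
    secrets.SystemRandom().shuffle(table)
    return wa, wb, wc, table

def evaluate(la, lb, table):
    """Try every row; only the matching one decrypts to label||zero-tag."""
    pad = H(la, lb)
    for row in table:
        pt = bytes(p ^ q for p, q in zip(pad, row))
        if pt[LABEL:] == b"\x00" * (32 - LABEL):
            return pt[:LABEL]

wa, wb, wc, table = garble_and()
for x in range(2):
    for y in range(2):
        assert evaluate(wa[x], wb[y], table) == wc[x & y]
```

The evaluator learns one output label but not which bit it encodes, which is what lets whole circuits for comparison, auctions, or minima be evaluated obliviously.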
Last updated:  2012-10-03
Authenticated Broadcast with a Partially Compromised Public-Key Infrastructure
S. Dov Gordon, Jonathan Katz, Ranjit Kumaresan, Arkady Yerukhimovich
Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Existing protocols, however, do not distinguish between corrupted parties who do not follow the protocol, and honest parties whose secret (signing) keys have been compromised but continue to behave honestly. We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter. Consider a network of $n$ parties, where an adversary has compromised the secret keys of up to $t_c$ honest parties and, in addition, fully controls the behavior of up to $t_a$ other parties. We show that for any fixed $t_c > 0$ and any fixed $t_a$, there exists an efficient protocol for broadcast if and only if $2t_a + \min(t_a, t_c) < n$. (When $t_c = 0$, standard results imply feasibility for all $t_a<n$.) We also show that if $t_c, t_a$ are not fixed, but are only guaranteed to satisfy the above bound, then broadcast is impossible to achieve except for a few specific values of $n$; for these ``exceptional'' values of $n$, we demonstrate broadcast protocols. Taken together, our results give a complete characterization of this problem.
Last updated:  2009-09-01
A Tree Based Recursive Scheme for Space Efficient Secret Sharing
Abhishek Parakh, Subhash Kak
This paper presents a k-out-of-n recursive secret sharing scheme based on an n-ary tree data structure. In recursive hiding of secrets, the user encodes additional secrets in the shares of the original secret intended to be shared, without expanding the size of the latter, thereby decreasing the effective share size per secret and increasing the overall space efficiency of secret sharing, with a trade-off in security. The proposed scheme has applications in secure distributed storage and information dispersal protocols. It may be used as a steganographic channel to transmit hidden information in secret sharing, which may be used for authentication and verification of shares and the reconstructed secret itself.
Last updated:  2012-01-05
A Secure and Efficient Authenticated Diffie–Hellman Protocol
Augustin P. Sarr, Philippe Elbaz–Vincent, Jean–Claude Bajard
The Exponential Challenge Response (XCR) and Dual Exponential Challenge Response (DCR) signature schemes are the building blocks of the HMQV protocol. We propose a complementary analysis of these schemes; on the basis of this analysis we show how impersonation and man-in-the-middle attacks can be mounted against the HMQV protocol when some session-specific information leaks. We define the Full Exponential Challenge Response (FXCR) and Full Dual Exponential Challenge Response (FDCR) signature schemes; using these schemes we propose the Fully Hashed MQV protocol (with security arguments), which preserves the remarkable performance of the (H)MQV protocols and resists the attacks we present.
Last updated:  2009-08-24
Single Block Attacks and Statistical Tests on CubeHash
Benjamin Bloom, Alan Kaminsky
This paper describes a second preimage attack on the CubeHash cryptographic one-way hash function. The attack finds a second preimage in less time than brute force search for these CubeHash variants: CubeHash$r$/$b$-224 for $b > 100$; CubeHash$r$/$b$-256 for $b > 96$; CubeHash$r$/$b$-384 for $b > 80$; and CubeHash$r$/$b$-512 for $b > 64$. However, the attack does not break the CubeHash variants recommended for SHA-3. The attack requires minimal memory and can be performed in a massively parallel fashion. This paper also describes several statistical randomness tests on CubeHash. The tests were unable to disprove the hypothesis that CubeHash behaves as a random mapping. These results support CubeHash's viability as a secure cryptographic hash function.
Last updated:  2011-03-07
On-line Non-transferable Signatures Revisited
Jacob C. N. Schuldt, Kanta Matsuura
Undeniable signatures, introduced by Chaum and van Antwerpen, and designated confirmer signatures, introduced by Chaum, allow a signer to control the verifiability of his signatures by requiring a verifier to interact with the signer to verify a signature. An important security requirement for these types of signature schemes is \emph{non-transferability} which informally guarantees that even though a verifier has confirmed the validity of a signature by interacting with the signer, he cannot prove this knowledge to a third party. Recently Liskov and Micali pointed out that the commonly used notion of non-transferability only guarantees security against an off-line attacker which cannot influence the verifier while he interacts with the signer, and that almost all previous schemes relying on interactive protocols are vulnerable to on-line attacks. To address this, Liskov and Micali formalized on-line non-transferable signatures which are resistant to on-line attacks, and proposed a generic construction based on a standard signature scheme and an encryption scheme. In this paper, we revisit on-line non-transferable signatures. Firstly, we extend the security model of Liskov and Micali to cover not only the sign protocol, but also the confirm and disavow protocols executed by the confirmer. Our security model furthermore considers the use of multiple (potentially corrupted or malicious) confirmers, and guarantees security against attacks related to the use of signer specific confirmer keys. We then present a new approach to the construction of on-line non-transferable signatures, and propose a new concrete construction which is provably secure in the standard model. Unlike the construction by Liskov and Micali, our construction does not require the signer to issue ``fake'' signatures to maintain security, and allows the confirmer to both confirm and disavow signatures. 
Lastly, our construction provides noticeably shorter signatures than the construction by Liskov and Micali.
Last updated:  2009-08-24
Generic Attacks on Misty Schemes -5 rounds is not enough-
Valerie Nachef, Jacques Patarin, Joana Treger
Misty schemes are classic cryptographic schemes used to construct pseudo-random permutations from $2n$ bits to $2n$ bits by using $d$ pseudo-random permutations from $n$ bits to $n$ bits. These $d$ permutations will be called the ``internal'' permutations, and $d$ is the number of rounds of the Misty scheme. Misty schemes are important from a practical point of view since, for example, the KASUMI algorithm based on Misty schemes has been adopted as the standard blockcipher in third-generation mobile systems. In this paper we describe the best known ``generic'' attacks on Misty schemes, i.e. attacks when the internal permutations do not have special properties, or are randomly chosen. We describe known plaintext attacks (KPA), non-adaptive chosen plaintext attacks (CPA-1) and adaptive chosen plaintext and ciphertext attacks (CPCA-2) against these schemes. Some of these attacks were previously known, some are new. One important result of this paper is that we will show that when $d=5$ rounds, there exist such attacks with a complexity strictly less than $2^{2n}$. Consequently, at least 6 rounds are necessary to avoid these generic attacks on Misty schemes. When $d \geq 6$ we also describe some attacks on Misty generators, i.e. attacks where more than one Misty permutation is required.
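The structure under attack can be sketched as follows. This is one common way to write a Misty-type round (orientation conventions vary in the literature, so treat this as illustrative): each round maps $(L, R) \mapsto (R, p(L) \oplus R)$ with an internal permutation $p$, and the whole construction is itself an invertible $2n$-bit permutation.

```python
import random

N = 4  # branch width in bits; the scheme maps 2N bits to 2N bits

def random_perm():
    p = list(range(1 << N))
    random.shuffle(p)
    return p

def misty_encrypt(L, R, perms):
    """One Misty-type round per internal permutation: (L,R) -> (R, p[L]^R)."""
    for p in perms:
        L, R = R, p[L] ^ R
    return L, R

def misty_decrypt(L, R, perms):
    """Invert each round: from (A,B) = (R, p[L]^R), recover
    R = A and L = p^{-1}(B ^ A)."""
    for p in reversed(perms):
        L, R = p.index(R ^ L), L   # list.index inverts the permutation
    return L, R

perms = [random_perm() for _ in range(5)]   # d = 5 internal permutations
for L in range(1 << N):
    for R in range(1 << N):
        assert misty_decrypt(*misty_encrypt(L, R, perms), perms) == (L, R)
```

The paper's generic attacks exploit statistical relations between such input/output pairs that hold for any choice of the internal permutations; the result above says five rounds leave enough structure to detect with complexity below $2^{2n}$.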
Last updated:  2009-08-26
Pairing-Friendly Elliptic Curves With Various Discriminants
Woo Sug Kang, Ki Taek Kim
In this paper, we extend the Brezing-Weng method by parameterizing the discriminant $D$ by a polynomial $D(x)$ and give a construction of families of pairing-friendly elliptic curves with various discriminants. For $k =$ 5, 8, 9, 15, 16, 20, 24 and 28, our method gives smaller $\rho$ value than the ones in previous results.
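To make the $\rho$ value concrete, here is how it is measured on a well-known polynomial family: the Barreto-Naehrig curves (embedding degree 12, $\rho = 1$), shown purely as an illustration; the paper's families for its listed $k$ are different. The value in question is $\rho = \log q / \log r$, roughly $\deg q(x) / \deg r(x)$ for a polynomial family.

```python
from math import log

# Barreto-Naehrig family polynomials.
def p_BN(x): return 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1   # field size q
def r_BN(x): return 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1   # subgroup order
def t_BN(x): return 6*x**2 + 1                              # trace

x = 2**62 + 2**55 + 1   # sample value; a real instantiation also needs
                        # p_BN(x) and r_BN(x) to be prime
p, r, t = p_BN(x), r_BN(x), t_BN(x)
assert p + 1 - t == r          # curve order is exactly r: rho-optimal family
rho = log(p) / log(r)          # rho = log q / log r ~ deg p / deg r = 4/4
assert abs(rho - 1.0) < 1e-3
```

Smaller $\rho$ means less field-size overhead per bit of group-order security, which is why reducing $\rho$ for embedding degrees like 5, 8, 9, 15, 16, 20, 24 and 28 is the abstract's headline result.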
Last updated:  2009-11-24
On Generic Constructions of Designated Confirmer Signatures (The ``Encryption of a Signature'' Paradigm Revisited)
Laila El Aimani
Designated Confirmer signatures were introduced to limit the verification property inherent to digital signatures. In fact, the verification in these signatures is replaced by a confirmation/denial protocol between the \emph{designated confirmer} and some verifier. An intuitive way to obtain such signatures consists in first generating a digital signature on the message to be signed, then encrypting the result using a suitable encryption scheme. This approach, referred to as the ``encryption of a signature'' paradigm, requires the constituents (encryption and signature schemes) to meet the highest security notions in order to achieve secure constructions. In this paper, we revisit this method and establish the necessary and sufficient assumptions on the building blocks in order to attain secure confirmer signatures. Our study concludes that the paradigm, used in its basic form, cannot allow a class of encryption schemes, which is vital for the efficiency of the confirmation/denial protocols. Next, we consider a slight variation of the paradigm, proposed in the context of undeniable signatures; we recast it in the confirmer signature framework along with changes that yield more flexibility, and we demonstrate its efficiency by explicitly describing its confirmation/denial protocols when instantiated with building blocks from a large class of signature/encryption schemes. Interestingly, the class of signatures we consider is very popular and has been for instance used to build efficient designated verifier signatures.
Last updated:  2009-08-17
AIDA Breaks BIVIUM (A&B) in 1 Minute Dual Core CPU Time
Michael Vielhaber
The stream cipher BIVIUM (both BIVIUM-A and BIVIUM-B), a modification of the eSTREAM finalist TRIVIUM, can be broken completely by the Algebraic IV Differential Attack, AIDA, using $2^{27.5}$ simulations or one minute of dual core processing. AIDA uses the subspaces of two 32-dimensional vector spaces over subsets of IV bits to recover 56 of the 80 key bits. The remaining 24 key bits are most easily determined by brute force search. We applied the Fast Reed-Muller Transform to speed up the search for linear equations in the key bits and the Wavefront Model to rule out nonlinear relations in the key bits early on.
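The core mechanism of algebraic IV differential (cube-style) attacks can be shown on a toy output polynomial (this is a generic illustration, not BIVIUM itself): summing the output bit over all assignments of a chosen set of IV bits cancels every term that does not contain their full product, leaving a low-degree "superpoly" in the key bits.

```python
from itertools import product

def f(k, v):
    """Toy output bit over GF(2): the monomial v0*v1 carries the linear
    superpoly k0 ^ k2; all other terms vanish under the cube sum."""
    return (v[0] & v[1] & (k[0] ^ k[2])) ^ (v[0] & k[1]) ^ v[2] ^ k[3]

def cube_sum(k, cube_bits, num_v=3):
    """XOR f over all 2^|cube| assignments of the chosen IV bits,
    with the remaining IV bits fixed to zero."""
    acc = 0
    for assignment in product([0, 1], repeat=len(cube_bits)):
        v = [0] * num_v
        for bit, val in zip(cube_bits, assignment):
            v[bit] = val
        acc ^= f(k, v)
    return acc

# Summing over the cube {v0, v1} leaves exactly the linear relation k0 ^ k2:
for k in product([0, 1], repeat=4):
    assert cube_sum(k, [0, 1]) == k[0] ^ k[2]
```

Collecting enough such linear relations in the key bits (which the abstract's Fast Reed-Muller Transform accelerates) reduces key recovery to solving a small linear system plus brute force on the leftover bits.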
Last updated:  2009-08-17
Longest Common Subsequence as Private Search
Mark Gondree, Payman Mohassel
At STOC 2006 and CRYPTO 2007, Beimel et al. introduced a set of privacy requirements for algorithms that solve search problems. In this paper, we consider the longest common subsequence (LCS) problem as a private search problem, where the task is to find a string of (or embedding corresponding to) an LCS. We show that deterministic selection strategies do not meet the privacy guarantees considered for private search problems and, in fact, may ``leak'' an amount of information proportional to the entire input. We then put forth and investigate several privacy structures for the LCS problem and design new and efficient output sampling and equivalence protecting algorithms that provably meet the corresponding privacy notions. Along the way, we also provide output sampling and equivalence protecting algorithms for finite regular languages, which may be of independent interest.
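As a concrete illustration (not code from the paper): the textbook dynamic-programming LCS with a fixed tie-breaking rule during backtracking is exactly the kind of deterministic selection strategy the abstract says can leak information beyond the LCS itself. A minimal sketch:

```python
def lcs_deterministic(a: str, b: str) -> str:
    """Standard LCS dynamic program with a fixed, deterministic
    tie-breaking rule in the backtrace.  The choice of which optimal
    path to follow on ties is what a private-search algorithm must
    instead sample carefully."""
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Deterministic backtrace: on ties, always prefer moving up.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```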
Last updated:  2009-08-15
Identity-Based Chameleon Hash Scheme Without Key Exposure
Xiaofeng Chen, Fangguo Zhang, Haibo Tian, Kwangjo Kim
In this paper, we propose the first identity-based chameleon hash scheme without key exposure, which gives a positive answer for the open problem introduced by Ateniese and de Medeiros in 2004.
Last updated:  2010-04-13
Leakage-Resilient Storage
Francesco Davì, Stefan Dziembowski, Daniele Venturi
We study a problem of secure data storage on hardware that may leak information. We introduce a new primitive, that we call {\em leakage-resilient storage} (LRS), which is an (unkeyed) scheme for encoding messages, and can be viewed as a generalization of the {\em All-Or-Nothing Transform} (AONT, Rivest 1997). The standard definition of AONT requires that it should be hard to reconstruct a message $m$ if not all the bits of its encoding $\Encode(m)$ are known. LRS is defined more generally, with respect to a class $\Gamma$ of functions. The security definition of LRS requires that it should be hard to reconstruct $m$ even if some values $g_1(\Encode(m)),\ldots,$ $g_t(\Encode(m))$ are known (where $g_1,\ldots,g_t \in \Gamma$), as long as the total length of $g_1(\Encode(m)),\ldots,g_t(\Encode(m))$ is smaller than some parameter $c$. We construct an LRS scheme that is secure with respect to $\Gamma$ being a set of functions that can depend only on some restricted part of the memory. More precisely: we assume that the memory is divided in $2$ parts, and the functions in $\Gamma$ can be just applied to one of these parts. We also construct a scheme that is secure if the cardinality of $\Gamma$ is restricted (but still it can be exponential in the length of the encoding). This construction implies security in the case when the set $\Gamma$ consists of functions that are computable by Boolean circuits of a small size. We also discuss the connection between the problem of constructing leakage-resilient storage and a theory of the compressibility of NP-instances.
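A toy instance of the two-part memory model described above (an illustration only, not the scheme from the paper): encode $m$ as a uniformly random part $r$ together with $r \oplus m$. Any leakage function applied to one part alone sees a distribution independent of $m$; this toy version offers no protection once leakage touches both parts.

```python
import secrets


def encode(m: bytes) -> tuple[bytes, bytes]:
    """Split memory into two parts: a uniformly random r and the
    one-time pad r XOR m.  Each part on its own is uniform, so any
    function of a single part is independent of m."""
    r = secrets.token_bytes(len(m))
    return r, bytes(x ^ y for x, y in zip(r, m))


def decode(part1: bytes, part2: bytes) -> bytes:
    """Recombine the two parts to recover the message."""
    return bytes(x ^ y for x, y in zip(part1, part2))
```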
Last updated:  2009-08-19
Fast Architectures for the $\eta_T$ Pairing over Small-Characteristic Supersingular Elliptic Curves
Jean-Luc Beuchat, Jérémie Detrey, Nicolas Estibals, Eiji Okamoto, Francisco Rodríguez-Henríquez
This paper is devoted to the design of fast parallel accelerators for the cryptographic $\eta_T$ pairing on supersingular elliptic curves over finite fields of characteristics two and three. We propose here a novel hardware implementation of Miller's algorithm based on a parallel pipelined Karatsuba multiplier. After a short description of the strategies we considered to design our multiplier, we point out the intrinsic parallelism of Miller's loop and outline the architecture of coprocessors for the $\eta_T$ pairing over $\mathbb{F}_{2^m}$ and $\mathbb{F}_{3^m}$. Thanks to a careful choice of algorithms for the tower field arithmetic associated with the $\eta_T$ pairing, we manage to keep the pipelined multiplier at the heart of each coprocessor busy. A final exponentiation is still required to obtain a unique value, which is desirable in most cryptographic protocols. We supplement our pairing accelerators with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources. According to our place-and-route results on Xilinx FPGAs, our designs improve both the computation time and the area-time trade-off compared to previously published coprocessors.
Last updated:  2010-01-25
Linear Cryptanalysis of Reduced-Round PRESENT
Joo Yeon Cho
PRESENT is a hardware-oriented block cipher suitable for resource-constrained environments. In this paper we analyze PRESENT by the multidimensional linear cryptanalysis method. We claim that our attack can recover the 80-bit secret key of PRESENT up to 25 rounds out of 31 rounds with around $2^{62.4}$ data complexity. Furthermore, we show that the 26-round version of PRESENT can be attacked faster than key exhaustive search with $2^{64}$ data complexity by an advanced key search technique. Our results are superior to all the previous attacks. We demonstrate our result by performing the linear attacks on reduced variants of PRESENT. Our results exemplify that the performance of the multidimensional linear attack is superior to that of the classical linear attack.
Last updated:  2009-08-15
Computational Indistinguishability Amplification: Tight Product Theorems for System Composition
Ueli Maurer, Stefano Tessaro
Computational indistinguishability amplification is the problem of strengthening cryptographic primitives whose security is defined by bounding the distinguishing advantage of an efficient distinguisher. Examples include pseudorandom generators (PRGs), pseudorandom functions (PRFs), and pseudorandom permutations (PRPs). The literature on computational indistinguishability amplification consists of only a few isolated results. Yao's XOR-lemma implies, by a hybrid argument, that no efficient distinguisher has advantage better than (roughly) $n2^{m-1} \delta^m$ in distinguishing the XOR of $m$ independent $n$-bit PRG outputs $S_1,\ldots,S_m$ from uniform randomness if no efficient distinguisher has advantage more than $\delta$ in distinguishing $S_i$ from a uniform $n$-bit string. The factor $2^{m-1}$ allows for security amplification only if $\delta<\frac{1}{2}$: For the case of PRFs, a random-offset XOR-construction of Myers was the first result to achieve {\em strong} security amplification, i.e., also for $\frac{1}{2} \le \delta < 1$. This paper proposes a systematic treatment of computational indistinguishability amplification. We generalize and improve the above product theorem for the XOR of PRGs along five axes. First, we prove the {\em tight} information-theoretic bound $2^{m-1}\delta^m$ (without factor $n$) also for the computational setting. Second, we prove results for {\em interactive} systems (e.g. PRFs or PRPs). Third, we consider the general class of {\em neutralizing combination constructions}, not just XOR. As an application this yields the first indistinguishability amplification results for the cascade of PRPs (i.e., block ciphers) converting a weak PRP into an arbitrarily strong PRP, both for single-sided and two-sided queries. Fourth, {\em strong} security amplification is achieved for a subclass of neutralizing constructions which includes as a special case the construction of Myers. 
As an application we obtain highly practical optimal security amplification for block ciphers, simply by adding random offsets at the input and output of the cascade. Fifth, we show strong security amplification also for {\em weakened assumptions} like security against random-input (as opposed to chosen-input) attacks. A key technique is a generalization of Yao's XOR-lemma to (interactive) systems, which is of independent interest.
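The basic combination construction analyzed above is easy to state in code (an illustration, not from the paper): XOR the $m$ independent PRG outputs together, which is the neutralizing construction whose distinguishing advantage the paper tightly bounds by $2^{m-1}\delta^m$.

```python
def xor_combine(outputs: list[bytes]) -> bytes:
    """XOR of m independent n-bit PRG outputs S_1, ..., S_m.  If each
    S_i is delta-indistinguishable from uniform, the combined output is
    (roughly) 2^{m-1} * delta^m indistinguishable from uniform."""
    acc = bytearray(len(outputs[0]))
    for s in outputs:
        for i, byte in enumerate(s):
            acc[i] ^= byte
    return bytes(acc)
```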
Last updated:  2010-01-20
First CPIR Protocol with Data-Dependent Computation
Helger Lipmaa
We design a new $(n, 1)$-CPIR protocol $\mathsf{BddCpir}$ for $\ell$-bit strings as a combination of a noncryptographic (BDD-based) data structure and a more basic cryptographic primitive (communication-efficient $(2, 1)$-CPIR). $\mathsf{BddCpir}$ is the first CPIR protocol where server's online computation depends substantially on the concrete database. We then show that (a) for reasonably small values of $\ell$, $\mathsf{BddCpir}$ is guaranteed to have simultaneously log-squared communication and sublinear online computation, and (b) $\mathsf{BddCpir}$ can handle huge but sparse matrices, common in data-mining applications, significantly more efficiently compared to all previous protocols. The security of $\mathsf{BddCpir}$ can be based on the well-known Decisional Composite Residuosity assumption.
Last updated:  2010-06-14
Provably Secure Convertible Undeniable Signatures with Unambiguity
Le Trieu Phong, Kaoru Kurosawa, Wakaha Ogata
This paper shows some efficient and provably-secure convertible undeniable signature schemes (with both selective conversion and all conversion), in the standard model and discrete logarithm setting. They further satisfy unambiguity, which is traditionally required for anonymous signatures. Briefly, unambiguity means that it is hard to generate a (message, signature) pair which is valid for two {\em different} public-keys. In other words, our schemes can be viewed as anonymous signature schemes as well as convertible undeniable signature schemes. Besides other applications, we show that such schemes are very suitable for anonymous auctions.
Last updated:  2009-08-15
Permutation Polynomials modulo $p^n$
Rajesh P Singh, Soumen Maity
A polynomial $f$ over a finite ring $R$ is called a \textit{permutation polynomial} if the mapping $R\rightarrow R$ defined by $f$ is one-to-one. In this paper we consider the problem of characterizing permutation polynomials; that is, we seek conditions on the coefficients of a polynomial which are necessary and sufficient for it to represent a permutation. We also present a new class of permutation binomials over finite fields of prime order.
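For small moduli, the one-to-one property can be checked exhaustively (an illustration, not from the paper); the coefficient conditions the authors seek are precisely what makes this brute-force test unnecessary.

```python
def is_permutation_polynomial(coeffs: list[int], modulus: int) -> bool:
    """Brute-force check: does f(x) = sum(coeffs[i] * x^i) define a
    bijection on Z_modulus?  coeffs[0] is the constant term.  Feasible
    only for small moduli."""
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % modulus
        return acc
    return len({f(x) for x in range(modulus)}) == modulus
```

For example, $x^3$ permutes $\mathbb{F}_5$ (since $\gcd(3, 4) = 1$) but $x^2$ does not.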
Last updated:  2009-08-15
Computational Soundness for Key Exchange Protocols with Symmetric Encryption
Ralf Kuesters, Max Tuengerthal
Formal analysis of security protocols based on symbolic models has been very successful in finding flaws in published protocols and proving protocols secure, using automated tools. An important question is whether this kind of formal analysis implies security guarantees in the strong sense of modern cryptography. Initiated by the seminal work of Abadi and Rogaway, this question has been investigated and numerous positive results showing this so-called computational soundness of formal analysis have been obtained. However, for the case of active adversaries and protocols that use symmetric encryption computational soundness has remained a challenge. In this paper, we show the first general computational soundness result for key exchange protocols with symmetric encryption, along the lines of a paper by Canetti and Herzog on protocols with public-key encryption. More specifically, we develop a symbolic, automatically checkable criterion, based on observational equivalence, and show that a key exchange protocol that satisfies this criterion realizes a key exchange functionality in the sense of universal composability. Our results hold under standard cryptographic assumptions.
Last updated:  2010-03-29
Threshold Decryption and Zero-Knowledge Proofs for Lattice-Based Cryptosystems
Rikke Bendlin, Ivan Damgård
We present a variant of Regev's cryptosystem, but with a new choice of parameters. By a recent classical reduction by Peikert we prove the scheme semantically secure based on the worst-case lattice problem GapSVP. From this we construct a threshold cryptosystem which has a very efficient and non-interactive decryption protocol. We prove the threshold cryptosystem secure against passive adversaries corrupting all but one of the players, and against active adversaries corrupting less than one third of the players. We also describe how one can build a distributed key generation protocol. In the final part of the paper, we show how one can prove, in zero-knowledge, knowledge of the plaintext contained in a given ciphertext from Regev's original cryptosystem or our variant. The proof is of size only a constant times the size of a ciphertext.
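For background, a textbook single-sample sketch of Regev-style encryption (toy parameters, not the paper's, and omitting the subset-sum of public samples in the real scheme): a bit $b$ is hidden as $\langle a, s\rangle + e + b\lfloor q/2\rfloor$ and recovered by rounding.

```python
import random


def regev_toy(bit: int, n: int = 8, q: int = 257, seed: int = 1) -> int:
    """Toy Regev round trip.  Ciphertext: (a, <a,s> + e + bit*(q//2)).
    With |e| small, rounding after subtracting <a,s> recovers the bit."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]   # secret key
    a = [rng.randrange(q) for _ in range(n)]   # public randomness
    e = rng.randrange(-4, 5)                   # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    # Decrypt: subtract <a,s>, then round toward 0 or q/2.
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 0 if d < q // 4 or d > 3 * q // 4 else 1
```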
Last updated:  2009-08-15
Sub-linear Size Pairing-based Non-interactive Zero-Knowledge Arguments
Jens Groth
We construct non-interactive zero-knowledge arguments for circuit satisfiability and arithmetic circuits with perfect completeness, perfect zero-knowledge and computational (co-)soundness. The non-interactive zero-knowledge arguments have sub-linear size and very efficient public verification. Our construction uses bilinear groups and is only proven secure in the generic group model, but does not rely on random oracles.
Last updated:  2009-09-01
On the Security of 1024-bit RSA and 160-bit Elliptic Curve Cryptography
Joppe W. Bos, Marcelo E. Kaihara, Thorsten Kleinjung, Arjen K. Lenstra, Peter L. Montgomery
Meeting the requirements of NIST’s new cryptographic standards means phasing out usage of 1024-bit RSA and 160-bit elliptic curve cryptography (ECC) by the end of the year 2010. This write-up comments on the vulnerability of these systems to an open community attack effort and aims to assess the risk of their continued usage beyond 2010. We conclude that for 1024-bit RSA the risk is small at least until the year 2014, and that 160-bit ECC over a prime field may safely be used for much longer – with the current state of the art in cryptanalysis we would be surprised if a public effort can make a dent in 160-bit prime field ECC by the year 2020. Our assessment is based on the latest practical data of large scale integer factorization and elliptic curve discrete logarithm computation efforts.
Last updated:  2009-12-24
A Simple Secret Sharing Scheme for Hierarchical Threshold Access Structures
Kerem Kaskaloglu, Ferruh Ozbudak
One of the recent generalizations of $(t, n)$ secret sharing for hierarchical threshold secret access structures is given by Tassa, where he answers the natural question of sharing a secret among three employees, at least one of whom is a manager. However, the schemes proposed to address this problem require a significant amount of theoretical background. We give a simpler yet efficient method for hierarchical threshold access structures. Our scheme employs a different approach than previous works, as it involves a certain distribution of polynomials, where members of higher compartments are given a summation of evaluations of a higher number of polynomials, resulting in a hierarchical effect. The simplicity of our scheme is advantageous both in theory and in practical implementations.
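For reference, the classical Shamir $(t, n)$ scheme that such hierarchical constructions build on (an illustrative sketch, not the paper's scheme; in the hierarchical variant, higher compartments would receive sums of evaluations of several such polynomials):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for toy secrets


def share(secret: int, t: int, n: int, seed: int = 0) -> list[tuple[int, int]]:
    """Classical Shamir (t, n) sharing: a random degree-(t-1) polynomial
    with constant term `secret`, evaluated at points 1..n."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```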
Last updated:  2009-11-01
Securing Plastic Money Using an RFID Based Protocol Stack
Rishab Nithyanand
Since 2006, there have been three major systems that have been implemented in an attempt to reduce the threat of credit card fraud - Chip and PIN (United Kingdom), Chip Authentication Program - CAP (European Union), and RFID enabled credit cards (United States of America). In spite of a big effort by the EMV\footnote{EMV Co.: a body comprising of Europay, Mastercard, and Visa which develops standards for credit card interaction.}, there has been little evidence to demonstrate the success of these schemes in stopping fraudsters, scammers, and identity thieves. This may be attributed to combinations of poor usability, lack of trusted interfaces, the absence of smart-card cryptography that takes full advantage of the available computation resources, and inadequate authentication protocols. In this paper, we explain the shortcomings and vulnerabilities of each of these systems, and then explain requirements of a secure and usable cashless payment system. We also describe a new RFID based protocol stack - SECAPS (Secure Cashless Payment System), which obviates many of the attacks on the current schemes by using the newly available computation resources on modern RFID Tags.
Last updated:  2009-08-10
QTRU: A Lattice Attack Resistant Version of NTRU
Ehsan Malekian, Ali Zakerolhosseini, Atefeh Mashatan
We propose QTRU, a probabilistic and multi-dimensional public key cryptosystem based on the NTRU public key cryptosystem using quaternion algebra. QTRU encrypts four data vectors in each encryption session, and the only other major difference between NTRU and QTRU is that the underlying algebraic structure has been changed to a non-commutative one. As a result, QTRU inherits the strength of NTRU and its positive points. In addition, the non-commutativity of the underlying structure of QTRU makes it much more resistant to some lattice-based attacks. After a brief description of NTRU, we begin by describing the algebraic structure used in QTRU. Further, we present the details of the key generation, encryption and decryption algorithms of QTRU and discuss the issues regarding key security, message security, and the probability of successful decryption. Last but not least, QTRU's resistance against lattice-based attacks is investigated.
Last updated:  2009-08-10
Dual System Encryption: Realizing Fully Secure IBE and HIBE under Simple Assumptions
Brent Waters
We present a new methodology for proving security of encryption systems using what we call Dual System Encryption. Our techniques result in fully secure Identity-Based Encryption (IBE) and Hierarchical Identity-Based Encryption (HIBE) systems under the simple and established decisional Bilinear Diffie-Hellman and decisional Linear assumptions. Our IBE system has ciphertexts, private keys, and public parameters each consisting of a constant number of group elements. These results are the first HIBE system and the first IBE system with short parameters under simple assumptions. In a Dual System Encryption system both ciphertexts and private keys can take on one of two indistinguishable forms. A private key or ciphertext will be normal if they are generated respectively from the system's key generation or encryption algorithm. These keys and ciphertexts will behave as one expects in an IBE system. In addition, we define semi-functional keys and ciphertexts. A semi-functional private key will be able to decrypt all normally generated ciphertexts; however, decryption will fail if one attempts to decrypt a semi-functional ciphertext with a semi-functional private key. Analogously, semi-functional ciphertexts will be decryptable only by normal private keys. Dual System Encryption opens up a new way to prove security of IBE and related encryption systems. We define a sequence of games where we change first the challenge ciphertext and then the private keys one by one to be semi-functional. We finally end up in a game where the challenge ciphertext and all private keys are semi-functional at which point proving security is straightforward.
Last updated:  2009-08-10
Practical Attacks on NESHA-256
Orr Dunkelman, Tor E. Bjørstad
NESHA-256 is a cryptographic hash function designed by Esmaeili et al. and presented at WCC '09. We show that NESHA-256 is highly insecure.
Last updated:  2009-08-10
A Registration Scheme to Allocate a Unique Identification Number
Manoj Kumar
Identification is always a necessity of human life. Currently, our government has decided to allocate a unique identity to every Indian. This paper proposes a registration scheme in which a controlling agency can generate a unique identification number in such a way that the registration number cannot be forged or misused. In the proposed scheme, only the number holder can use his or her number, and its validity can be proved to any third party whenever necessary.
Last updated:  2009-09-03
Linearization Framework for Collision Attacks: Application to CubeHash and MD6
Eric Brier, Shahram Khazaei, Willi Meier, Thomas Peyrin
In this paper, an improved differential cryptanalysis framework for finding collisions in hash functions is provided. Its principle is based on linearization of compression functions in order to find low weight differential characteristics as initiated by Chabaud and Joux. This is formalized and refined however in several ways: for the problem of finding a conforming message pair whose differential trail follows a linear trail, a condition function is introduced so that finding a collision is equivalent to finding a preimage of the zero vector under the condition function. Then, the dependency table concept shows how much influence every input bit of the condition function has on each output bit. Careful analysis of the dependency table reveals degrees of freedom that can be exploited in accelerated preimage reconstruction under the condition function. These concepts are applied to an in-depth collision analysis of reduced-round versions of the two SHA-3 candidates CubeHash and MD6, and are demonstrated to give by far the best currently known collision attacks on these SHA-3 candidates.
Last updated:  2021-02-07
A Short Note on the Discrete Log Problem in $\mathbb{F}_p$
Habeeb Syed
Consider finite prime fields $\mathbb{F}_p$ for which $2$ is a primitive element. In this short note we propose a new algorithm to compute discrete logs in such finite fields. Our algorithm is based on elementary properties of finite fields and is purely theoretical in nature. Further, the complexity of the algorithm is exponential, and as such it is not suggested for any computational purpose.
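The precondition above — that $2$ is a primitive element of $\mathbb{F}_p$ — can be verified by checking that the order of $2$ modulo $p$ is exactly $p - 1$ (an illustration, not part of the paper's algorithm):

```python
def is_primitive_root_of_two(p: int) -> bool:
    """Check whether 2 generates the multiplicative group of F_p:
    2 is primitive iff 2^((p-1)/q) != 1 (mod p) for every prime q
    dividing p - 1."""
    if p == 2:
        return False
    m, factors = p - 1, set()
    d = 2
    while d * d <= m:          # trial-divide p-1 into prime factors
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return all(pow(2, (p - 1) // q, p) != 1 for q in factors)
```

For example, $2$ is primitive modulo $11$ and $13$, but has order $3$ modulo $7$.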
Last updated:  2009-08-03
Untraceable Tags based on Mild Assumptions
Carlo Blundo, Angelo De Caro, Giuseppe Persiano
Radio frequency identification (RFID) chips have been widely deployed in large-scale systems such as inventory control and supply chain management. While RFID technology has many advantages, it may create new privacy problems. Tag untraceability is a significant concern that needs to be addressed when deploying RFID-based systems. In this paper we propose a new construction for untraceable tags. Ours is the first construction in the symmetric bilinear setting based on a mild assumption. That is, our assumption is tautological in the generic group model and is ``efficiently falsifiable'' in the sense that its problem instances are stated non-interactively and concisely (i.e., independently of the number of adversarial queries and other large quantities).
Last updated:  2014-07-01
Protecting Circuits from Computationally Bounded and Noisy Leakage
Sebastian Faust, Tal Rabin, Leonid Reyzin, Eran Tromer, Vinod Vaikuntanathan
Physical computational devices leak side-channel information that may, and often does, reveal secret internal states. We present a general transformation that compiles any circuit into a circuit with the same functionality but resilience against well-defined classes of leakage. Our construction requires a small, stateless and computation-independent leak-proof component that draws random elements from a fixed distribution. In essence, we reduce the problem of shielding arbitrarily complex circuits to the problem of shielding a single, simple component. Our approach is based on modeling the adversary as a powerful observer that inspects the device via a limited measurement apparatus. We allow the apparatus to access all the bits of the computation (except those inside the leak-proof component), and the amount of leaked information to grow unbounded over time. However, we assume that the apparatus is limited in the amount of output bits per iteration and the ability to decode certain linear encodings. While our results apply in general to such leakage classes, in particular, we obtain security against: - Constant-depth circuits leakage, where the leakage function is computed by an AC^0 circuit (composed of NOT gates and unbounded fan-in AND and OR gates). - Noisy leakage, where the leakage function reveals all the bits of the internal state of the circuit, perturbed by independent binomial noise. Namely, for some number p \in (0,1/2], each bit of the computation is flipped with probability p, and remains unchanged with probability 1-p.
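The noisy-leakage model above is simple to simulate (an illustration, not from the paper): the leakage function reveals every bit of the internal state, with each bit independently flipped with probability $p$.

```python
import random


def binomial_noise_leakage(state_bits: list[int], p: float,
                           seed: int = 0) -> list[int]:
    """Reveal all state bits, each independently flipped with
    probability p and left unchanged with probability 1 - p."""
    rng = random.Random(seed)
    return [b ^ (1 if rng.random() < p else 0) for b in state_bits]
```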
Last updated:  2009-08-03
Detectable correlations in Edon-R
Peter Novotney, Niels Ferguson
The Edon-R compression function has a large set of useful differentials that produce easily detectable output bit biases. We show how to construct such differentials, and use them to create a distinguisher for Edon-R-512 that requires around $2^{54}$ compression function evaluations (or $2^{28}$ evaluations after a pre-computation of $2^{66}$ evaluations). The differentials can also be used to attack a variety of MAC and KDF constructions when they use Edon-R-512.
Last updated:  2009-08-19
Chosen-Ciphertext Secure RSA-type Cryptosystems
Benoit Chevallier-Mames, Marc Joye
This paper explains how to design fully secure RSA-type cryptosystems from schemes only secure against passive attacks, in the standard model. We rely on instance-independence assumptions, which, roughly speaking, conjecture that for certain problems, an interactive access to a solver for another problem does not help the challenger. Previously, instance-independence assumptions were used in a "negative" way, to prove that certain schemes proven in the random oracle model were not provable in the standard model. Our paradigm applies virtually to all (weakly secure) RSA-type encryption schemes for which public-key RSA exponent can be arbitrarily chosen. As an illustration, we present a chosen-ciphertext secure variant of the Naccache-Stern encryption scheme.
Last updated:  2009-08-03
Cryptanalysis of the Tillich-Zémor hash function
Markus Grassl, Ivana Ilic, Spyros Magliveras, Rainer Steinwandt
At CRYPTO ’94, Tillich and Zémor proposed a family of hash functions, based on computing a suitable matrix product in groups of the form SL_2(F_{2^n}). We show how to construct collisions between palindromic bit strings of length 2n + 2 for Tillich and Zémor’s construction. The approach also yields collisions for related proposals by Petit et al. from ICECS ’08 and CT-RSA ’09. It seems fair to consider our attack as practical: for parameters of interest, the colliding bit strings have a length of a few hundred bits and can be found on a standard PC within seconds.
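A sketch of the hash being attacked, using the standard Tillich–Zémor generators (the modulus below is just an example irreducible polynomial for $n = 8$; the attack itself is not reproduced here):

```python
def tz_hash(bits: str, n: int = 8, modulus: int = 0b100011011) -> tuple:
    """Tillich-Zemor hash sketch: map bit 0 to A = [[x,1],[1,0]] and
    bit 1 to B = [[x,x+1],[1,1]] over F_{2^n} = F_2[x]/(modulus), and
    multiply the matrices corresponding to the input bits."""
    def gf_mul(a: int, b: int) -> int:  # carry-less multiply, reduced
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a >> n:
                a ^= modulus
        return r
    def mat_mul(M, N):
        return tuple(
            tuple(gf_mul(M[i][0], N[0][j]) ^ gf_mul(M[i][1], N[1][j])
                  for j in range(2))
            for i in range(2))
    X = 0b10                        # the element x of F_{2^n}
    A, B = ((X, 1), (1, 0)), ((X, X ^ 1), (1, 1))
    H = ((1, 0), (0, 1))            # identity matrix
    for bit in bits:
        H = mat_mul(H, A if bit == "0" else B)
    return H
```

Because the group is non-commutative, reordering input bits changes the hash; the attack instead exploits structured (palindromic) inputs.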
Last updated:  2009-08-03
Forgotten Secret Recovering Scheme and Fuzzy Vault Scheme Constructed Based on Systematic Error-Correcting Codes
Masao KASAHARA
In this paper, we revisit the Forgotten Secret Recovering Scheme for $k$ users, referred to as FSRS($k$) previously proposed by the present author. FSRS($k$) takes advantage of the fact that Reed-Solomon code used in FSRS($k$) is constructed in a form of systematic code. We show that a particular class of FSRS($k$), FSRS($1$), can be successfully applied to the various cryptoscheme including Fuzzy Vault Scheme (FVS).
Last updated:  2009-08-19
Key Recovery Attacks of Practical Complexity on AES Variants With Up To 10 Rounds
Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, Adi Shamir
AES is the best known and most widely used block cipher. Its three versions (AES-128, AES-192, and AES-256) differ in their key sizes (128 bits, 192 bits and 256 bits) and in their number of rounds (10, 12, and 14, respectively). In the case of AES-128, there is no known attack which is faster than the $2^{128}$ complexity of exhaustive search. However, AES-192 and AES-256 were recently shown to be breakable by attacks which require $2^{176}$ and $2^{119}$ time, respectively. While these complexities are much faster than exhaustive search, they are completely non-practical, and do not seem to pose any real threat to the security of AES-based systems. In this paper we describe several attacks which can break {\it with practical complexity} variants of AES-256 whose number of rounds are comparable to that of AES-128. One of our attacks uses only two related keys and $2^{39}$ time to recover the complete 256-bit key of a 9-round version of AES-256 (the best previous attack on this variant required 4 related keys and $2^{120}$ time). Another attack can break a 10 round version of AES-256 in $2^{45}$ time, but it uses a stronger type of {\it related subkey attack} (the best previous attack on this variant required 64 related keys and $2^{172}$ time). While neither AES-128 nor AES-256 can be directly broken by these attacks, the fact that their hybrid (which combines the smaller number of rounds from AES-128 along with the larger key size from AES-256) can be broken with such a low complexity raises serious concern about the remaining safety margin offered by the AES family of cryptosystems.
Last updated:  2010-03-05
Utility Dependence in Correct and Fair Rational Secret Sharing
Gilad Asharov, Yehuda Lindell
The problem of carrying out cryptographic computations when the participating parties are \emph{rational} in a game-theoretic sense has recently gained much attention. One problem that has been studied considerably is that of rational secret sharing. In this setting, the aim is to construct a mechanism (protocol) so that parties behaving rationally have incentive to cooperate and provide their shares in the reconstruction phase, even if each party prefers to be the only one to learn the secret. Although this question was only recently asked by Halpern and Teague (STOC 2004), a number of works with beautiful ideas have been presented to solve this problem. However, they all have the property that the protocols constructed need to know the actual utility values of the parties (or at least a bound on them). This assumption is very problematic because the utilities of parties are not public knowledge. We ask whether this \emph{dependence on the actual utility values} is really necessary and prove that in the case of two parties, rational secret sharing cannot be achieved without it. On the positive side, we show that in the multiparty case it is possible to construct a single mechanism that works for all (polynomial) utility functions. Our protocol has an expected number of rounds that is constant, and is optimally resilient to coalitions. In addition to the above, we observe that the known protocols for rational secret sharing that do not assume simultaneous channels all suffer from the problem that one of the parties can cause the others to output an incorrect value. (This problem arises when a party gains higher utility by having another output an incorrect value than by learning the secret itself; we argue that such a scenario needs to be considered.) We show that this problem is inherent in the non-simultaneous channels model, unless the actual values of the parties' utilities from this attack is known, in which case it is possible to prevent this from happening.
Last updated:  2009-07-31
More on Key Wrapping
Rosario Gennaro, Shai Halevi
We address the practice of key-wrapping, where one symmetric cryptographic key is used to encrypt another. This practice is used extensively in key-management architectures, often to create an ``adapter layer'' between incompatible legacy systems. Although in principle any secure encryption scheme can be used for key wrapping, practical constraints (which are commonplace when dealing with legacy systems) may severely limit the possible implementations, sometimes to the point of ruling out any ``secure general-purpose encryption.'' It is therefore desirable to identify the security requirements that are ``really needed'' for the key-wrapping application, and have a large variety of implementations that satisfy these requirements. This approach was developed in a work by Rogaway and Shrimpton at EUROCRYPT 2006. They focused on allowing deterministic encryption, and defined a notion of deterministic authenticated encryption (DAE), which roughly formalizes ``the strongest security that one can get without randomness.'' Although DAE is weaker than full blown authenticated encryption, it seems to suffice for the case of key wrapping (since keys are random and therefore the encryption itself can be deterministic). Rogaway and Shrimpton also described a mode of operation for block ciphers (called SIV) that realizes this notion. We continue in the direction initiated by Rogaway and Shrimpton. We first observe that the notion of DAE still rules out many practical and ``seemingly secure'' implementations. We thus look for even weaker notions of security that may still suffice. Specifically we consider notions that mirror the usual security requirements for symmetric encryption, except that the inputs to be encrypted are random rather than adversarially chosen. These notions are all strictly weaker than DAE, yet we argue that they suffice for most applications of key wrapping. 
As for implementations, we begin by observing that many standard encryption modes satisfy the key-wrapping notion that mirrors CPA-security, even when used with a fixed IV (with the notable exception of CTR mode). To achieve the notion that mirrors authenticated encryption, we investigate a template of Hash-then-Encrypt (HtE), which seems practically appealing: In this method the key is first ``hashed'' into a short nonce, and then the nonce and key are encrypted using some standard encryption mode. We consider a wide array of ``hash functions'', ranging from a simple XOR to collision-resistant hashing, and examine what ``hash function'' can be used with what encryption mode.
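The Hash-then-Encrypt template can be sketched roughly as follows. This is an illustrative toy in Python, not the paper's concrete instantiation: the SHA-256-based keystream, the 16-byte nonce length, and the function names are our own stand-ins. The key is hashed to a short nonce, the key is then encrypted deterministically under the key-encryption key (KEK), and on unwrap the recomputed hash serves as an integrity check.

```python
import hashlib

def _keystream(kek: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream derived from SHA-256; a real deployment
    # would use a standard block-cipher mode instead.
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(kek + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def wrap(kek: bytes, key: bytes, nonce_len: int = 16) -> bytes:
    # Hash-then-Encrypt: derive a short nonce from the wrapped key itself,
    # then encrypt the key deterministically under the KEK.
    nonce = hashlib.sha256(key).digest()[:nonce_len]
    ct = bytes(a ^ b for a, b in zip(key, _keystream(kek, nonce, len(key))))
    return nonce + ct

def unwrap(kek: bytes, wrapped: bytes, nonce_len: int = 16) -> bytes:
    nonce, ct = wrapped[:nonce_len], wrapped[nonce_len:]
    key = bytes(a ^ b for a, b in zip(ct, _keystream(kek, nonce, len(ct))))
    # The hash doubles as an authentication tag: a mismatch means the
    # wrapped blob was modified or the wrong KEK was used.
    if hashlib.sha256(key).digest()[:nonce_len] != nonce:
        raise ValueError("wrapped key failed integrity check")
    return key
```

Note that wrapping is deterministic, which is acceptable here precisely because the plaintext (a fresh key) is random rather than adversarially chosen.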
Last updated:  2009-07-31
Attribute-Sets: A Practically Motivated Enhancement to Attribute-Based Encryption
Rakesh Bobba, Himanshu Khurana, Manoj Prabhakaran
In distributed systems users need to share sensitive objects with others based on the recipients’ ability to satisfy a policy. Attribute-Based Encryption (ABE) is a new paradigm where such policies are specified and cryptographically enforced in the encryption algorithm itself. Ciphertext-Policy ABE (CP-ABE) is a form of ABE where policies are associated with encrypted data and attributes are associated with keys. In this work we focus on improving the flexibility of representing user attributes in keys. Specifically, we propose Ciphertext Policy Attribute Set Based Encryption (CP-ASBE) - a new form of CP-ABE - which, unlike existing CP-ABE schemes that represent user attributes as a monolithic set in keys, organizes user attributes into a recursive set based structure and allows users to impose dynamic constraints on how those attributes may be combined to satisfy a policy. We show that the proposed scheme is more versatile and supports many practical scenarios more naturally and efficiently. We provide a prototype implementation of our scheme and evaluate its performance overhead.
Last updated:  2009-07-31
A study of pairing computation for elliptic curves with embedding degree 15
Nadia El Mrabet, Nicolas Guillermin, Sorina Ionica
This paper presents the first study of pairing computation on curves with embedding degree $15$. We compute the Ate and the twisted Ate pairing for a family of curves with parameter $\rho \approx 1.5$ and embedding degree $15$. We use a twist of degree 3 to perform most of the operations in $\F_p$ or $\F_{p^5}$. Furthermore, we present a new arithmetic for extension fields of degree $5$. Our computations show that these curves give very efficient implementations for pairing-based cryptography at high security levels.
Last updated:  2013-03-04
Quantum readout of Physical Unclonable Functions: Remote authentication without trusted readers and authenticated Quantum Key Exchange without initial shared secrets
Boris Skoric
Physical Unclonable Functions (PUFs) are physical structures that are hard to clone and have a unique challenge-response behaviour. The term PUF was coined by Pappu et al. in 2001. That work triggered a lot of interest, and since then a substantial number of papers have been written about the use of a wide variety of physical structures for different security purposes such as identification, authentication, read-proof key storage, key distribution, tamper evidence, anti-counterfeiting, software-to-hardware binding and trusted computing. In this paper we propose a new security primitive: the quantum-readout PUF (QR-PUF). This is a classical PUF which is challenged using a quantum state, e.g. a single-photon state, and whose response is also a quantum state. By the no-cloning property of unknown quantum states, attackers cannot intercept challenges or responses without noticeably disturbing the readout process. Thus, a verifier who sends quantum states as challenges and receives the correct quantum states back can be certain that he is probing a specific QR-PUF without disturbances, even if the QR-PUF is far away `in the field' and under hostile control. For PUFs whose information content is not exceedingly large, all currently known PUF-based authentication and anti-counterfeiting schemes require trusted readout devices in the field. Our quantum readout scheme has no such requirement. Furthermore, we show how the QR-PUF authentication scheme can be interwoven with Quantum Key Exchange (QKE), leading to an authenticated QKE protocol between two parties. This protocol has the special property that it requires no a priori secret, or entangled state, shared by the two parties.
Last updated:  2009-07-30
A Simulation-Based Treatment of Authenticated Message Exchange
Klaas Ole Kuertz, Henning Schnoor, Thomas Wilke
Simulation-based security notions for cryptographic protocols are regarded as highly desirable, primarily because they admit strong composability and, consequently, a modular design. In this paper, we give a simulation-based security definition for two-round authenticated message exchange and show that a concrete protocol, 2AMEX-1, satisfies our security property, that is, we provide an ideal functionality for two-round authenticated message exchange and show that 2AMEX-1 realizes it securely. To model the involved public-key infrastructure adequately, we use a joint-state approach.
Last updated:  2010-01-25
Non-delegatable Identity-based Designated Verifier Signature
Qiong Huang, Willy Susilo, Duncan S. Wong
Designated verifier signature is a cryptographic primitive which allows a signer to convince a designated verifier of the validity of a statement but in the meanwhile prevents the verifier from transferring this conviction to any third party. In this work we present the \emph{first} identity-based designated verifier signature scheme that supports non-delegatability, and prove its security in the random oracle model, based on computational Diffie-Hellman assumption. Our scheme is perfectly non-transferable, and its non-delegatability follows the original definition proposed by Lipmaa et al. \cite{LipmaaWaBa05}.
Last updated:  2009-07-28
Adaptive Zero-Knowledge Proofs and Adaptively Secure Oblivious Transfer
Yehuda Lindell, Hila Zarosim
In the setting of secure computation, a set of parties wish to securely compute some function of their inputs, in the presence of an adversary. The adversary in question may be static (meaning that it controls a predetermined subset of the parties) or adaptive (meaning that it can choose to corrupt parties during the protocol execution and based on what it sees). In this paper, we study two fundamental questions relating to the basic zero-knowledge and oblivious transfer protocol problems: 1) Adaptive zero-knowledge proofs: We ask whether it is possible to construct adaptive zero-knowledge proofs (with unconditional soundness). Beaver (STOC 1996) showed that known zero-knowledge proofs are not adaptively secure, and in addition showed how to construct zero-knowledge arguments (with computational soundness). 2) Adaptively secure oblivious transfer: All known protocols for adaptively secure oblivious transfer rely on seemingly stronger hardness assumptions than for the case of static adversaries. We ask whether this is inherent, and in particular, whether it is possible to construct adaptively secure oblivious transfer from enhanced trapdoor permutations alone. We provide surprising answers to the above questions, showing that achieving adaptive security is sometimes harder than achieving static security, and sometimes not. First, we show that assuming the existence of one-way functions only, there exist adaptive zero-knowledge proofs for all languages in NP. In order to prove this, we overcome the problem that all adaptive zero-knowledge protocols known until now used equivocal commitments (which would enable an all-powerful prover to cheat). Second, we prove a black-box separation between adaptively secure oblivious transfer and enhanced trapdoor permutations. As a corollary, we derive a black-box separation between adaptively and statically secure oblivious transfer. 
This is the first black-box separation to relate to adaptive security and thus the first evidence that it is indeed harder to achieve security in the presence of adaptive adversaries than in the presence of static adversaries.
Last updated:  2009-07-27
Space Efficient Secret Sharing: A Recursive Approach
Abhishek Parakh, Subhash Kak
This paper presents a k-threshold secret sharing technique that distributes a secret S into shares of size |S|/(k-1), where |S| denotes the secret size. This bound is close to the optimal bound of |S|/k, if the secret is to be recovered from k shares. The proposed scheme makes use of repeated polynomial interpolation. Our technique has potential applications in secure and reliable storage of information on the Web and in sensor networks.
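The repeated polynomial interpolation underlying such a scheme builds on standard Shamir-style threshold sharing. The following sketch shows plain (non-recursive) k-of-n sharing over a prime field, as a hedged illustration of the interpolation step only; the prime, the function names, and the integer encoding of the secret are our own illustrative choices, not the paper's recursive construction.

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for small integer secrets

def make_shares(secret: int, k: int, n: int):
    # Random polynomial of degree k-1 with constant term = secret;
    # share i is the point (i, f(i)) on that polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant
    # term, i.e. the secret, from any k shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

In plain Shamir sharing each share is as long as the secret; the space saving claimed above comes from applying interpolation recursively, which this sketch does not attempt.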
Last updated:  2009-07-27
Position Based Cryptography
Nishanth Chandran, Vipul Goyal, Ryan Moriarty, Rafail Ostrovsky
We consider what constitutes {\em identities\/} in cryptography. Typical examples include your name and your social-security number, or your fingerprint/iris-scan, or your address, or your (non-revoked) public-key coming from some trusted public-key infrastructure. In many situations, however, {\bf where you are} defines your identity. For example, we know the role of a bank-teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party are derived from its \emph{geographic location}. We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Retrieval Model (a variant of the Bounded Storage Model) and formalize and construct information theoretically secure protocols for two fundamental tasks: \begin{itemize} \item Secure Positioning; and \item Position Based Key Exchange. \end{itemize} We then show that these tasks are in fact {\em universal\/} in this setting -- we show how we can use them to realize Secure Multi-Party Computation. Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-retrieval framework is, in fact, one of the ``right" frameworks (there may be others) to study the foundations of position-based cryptography.
Last updated:  2010-11-10
Some Lattice Attacks on DSA and ECDSA
Dimitrios Poulakis
In this paper, using the LLL reduction method and computing the integral points of two classes of conics, we develop attacks on DSA and ECDSA in case where the secret and the ephemeral key and their modular inverse are quite small or quite large.
Last updated:  2009-07-24
Toward a Generic Construction of Convertible Undeniable Signatures from Pairing-Based Signatures
Laila El Aimani
Undeniable signatures were proposed to limit the verification property of ordinary digital signatures. In fact, the verification of such signatures cannot be attained without the help of the signer, via the confirmation/denial protocols. Later, the concept was refined to give the possibility of converting a \emph{selected} signature into an ordinary one, or publishing a \emph{universal} receipt that turns all undeniable signatures publicly verifiable. In this paper, we present the first generic construction for convertible undeniable signatures from certain weakly secure cryptosystems and any secure digital signature scheme. Next, we give two specific approaches for building convertible undeniable signatures from a large class of pairing-based signatures. These methods find a nice and practical instantiation with known encryption and signature schemes. For instance, we achieve the most efficient undeniable signatures with regard to the signature length and cost, the underlying assumption and the security model. We believe these constructions could be an interesting starting point to develop more efficient schemes or give better security analyses of the existing ones.
Last updated:  2009-07-22
On the Security of a Proxy Blind Signature Scheme over Braid Groups
Manoj Kumar
A proxy blind signature scheme is the combination of a proxy signature and a blind signature scheme. In 2009, Verma proposed a proxy blind signature scheme over braid groups. Verma claimed that the proposed scheme is secure against all possible security lapses and also satisfies all essential security attributes. This paper analyzes Verma's proposed scheme and finds that it suffers from serious security vulnerabilities. We show that the proposed scheme does not satisfy unforgeability and unlinkability, which are two essential security requirements of a secure proxy blind signature scheme.
Last updated:  2012-06-20
Cryptanalysis of a Generalized Unbalanced Feistel Network Structure
Ruilin Li, Bing Sun, Chao Li, Longjiang Qu
This paper reevaluates the security of GF-NLFSR, a new kind of generalized unbalanced Feistel network structure that was proposed at ACISP 2009. We show that GF-NLFSR itself reveals a very slow diffusion rate, which could lead to several distinguishing attacks. For GF-NLFSR containing $n$ sub-blocks, we find an $n^2$-round integral distinguisher by algebraic methods and further use this integral to construct an $(n^2+n-2)$-round impossible differential distinguisher. Compared with the original $(3n-1)$-round integral and $(2n-1)$-round impossible differential, ours are significantly better. Another contribution of this paper is to introduce a kind of non-surjective attack by analyzing a variant structure of GF-NLFSR, whose provable security against differential and linear cryptanalysis can also be provided. The advantage of the proposed non-surjective attack is that traditional non-surjective attack is only applicable to Feistel ciphers with non-surjective (non-uniform) round functions, while ours could be applied to block ciphers with bijective ones. Moreover, its data complexity is $\mathcal{O}(l)$ with $l$ the block length.
Last updated:  2010-12-02
Bonsai Trees (or, Arboriculture in Lattice-Based Cryptography)
Chris Peikert
We introduce *bonsai trees*, a lattice-based cryptographic primitive that we apply to resolve some important open problems in the area. Applications of bonsai trees include: 1. An efficient, stateless `hash-and-sign' signature scheme in the *standard model* (i.e., no random oracles), and 2. The first *hierarchical* identity-based encryption (HIBE) scheme (also in the standard model) that does not rely on bilinear pairings. Interestingly, the abstract properties of bonsai trees seem to have no known realization in conventional number-theoretic cryptography.
Last updated:  2015-09-08
MAC Precomputation with Applications to Secure Memory
Juan A. Garay, Vladimir Kolesnikov, Rae McLellan
We present ShMAC (Shallow MAC), a fixed input length message authentication code that performs most of the computation prior to the availability of the message. Specifically, ShMAC's message-dependent computation is much faster and smaller in hardware than the evaluation of a pseudorandom permutation (PRP), and can be implemented by a small shallow circuit, while its precomputation consists of one PRP evaluation. A main building block for ShMAC is the notion of strong differential uniformity (SDU), which we introduce, and which may be of independent interest. We present an efficient SDU construction built from previously considered differentially uniform functions. Our motivating application is a system architecture where a hardware-secured processor uses memory controlled by an adversary. We present in technical detail a novel, more efficient approach to encrypting and authenticating memory and discuss the associated trade-offs, while paying special attention to minimizing hardware costs and the reduction of DRAM latency.
Last updated:  2009-07-22
Impossible Differential Cryptanalysis of FOX
Zhongming Wu, Xuejia Lai, Bo Zhu, Yiyuan Luo
Block ciphers are the very foundation of computer and information security. FOX, also known as IDEA NXT, is a family of block ciphers published in 2004 and is famous for its provable security against cryptanalysis. In this paper, we apply impossible differential cryptanalysis to the FOX cipher. We find a 4-round impossible differential, using which adversaries can attack 5, 6 and 7-round FOX64 with $2^{71}$, $2^{135}$ and $2^{199}$ one-round encryptions respectively. Compared to the previous best attacks with $2^{109.4}$, $2^{173.4}$ and $2^{237.4}$ full-round encryptions for 5, 6 and 7-round FOX64, the method in this paper is the best known attack on the FOX cipher. This attack can also be applied to 5-round FOX128 with $2^{135}$ one-round encryptions.
Last updated:  2009-07-21
A Domain Extender for the Ideal Cipher
Jean-Sebastien Coron, Yevgeniy Dodis, Avradip Mandal, Yannick Seurin
We describe the first domain extender for ideal ciphers, {\sl i.e.} we show a construction that is indifferentiable from a $2n$-bit ideal cipher, given an $n$-bit ideal cipher. Our construction is based on a $3$-round Feistel, and is more efficient than first building an $n$-bit random oracle from an $n$-bit ideal cipher and then a $2n$-bit ideal cipher from an $n$-bit random oracle (using a $6$-round Feistel). We also show that $2$ rounds are not enough for indifferentiability by exhibiting a simple attack. We also consider our construction in the standard model: we show that $2$ rounds are enough to get a $2n$-bit tweakable block-cipher from an $n$-bit tweakable block-cipher and we show that with $3$ rounds we can get beyond the birthday security bound.
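The $3$-round Feistel skeleton of such a domain extender can be sketched as follows. This is only an illustration of the structure: a SHA-256-based keyed round function stands in for the underlying $n$-bit ideal cipher, and the per-round key derivation is our own arbitrary choice, so the sketch carries none of the indifferentiability guarantees of the actual construction.

```python
import hashlib

N = 16  # bytes per half, so the extended cipher acts on 2N-byte blocks

def _round_fn(round_key: bytes, half: bytes) -> bytes:
    # Stand-in for the n-bit primitive applied in each round; the real
    # construction invokes the underlying ideal cipher here.
    return hashlib.sha256(round_key + half).digest()[:N]

def feistel_encrypt(key: bytes, block: bytes, rounds: int = 3) -> bytes:
    assert len(block) == 2 * N
    left, right = block[:N], block[N:]
    for r in range(rounds):
        rk = key + bytes([r])  # illustrative per-round key separation
        left, right = right, bytes(
            a ^ b for a, b in zip(left, _round_fn(rk, right)))
    return left + right

def feistel_decrypt(key: bytes, block: bytes, rounds: int = 3) -> bytes:
    assert len(block) == 2 * N
    left, right = block[:N], block[N:]
    for r in reversed(range(rounds)):
        rk = key + bytes([r])
        # Undo one round: the XOR structure makes each round involutive
        # once the halves are swapped back.
        left, right = bytes(
            a ^ b for a, b in zip(right, _round_fn(rk, left))), left
    return left + right
```

The Feistel structure guarantees the $2N$-byte map is a permutation regardless of the round function, which is why decryption needs only the same round function evaluated in reverse order.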
Last updated:  2010-07-23
Asynchronous Distributed Private-Key Generators for Identity-Based Cryptography
Aniket Kate, Ian Goldberg
An identity-based encryption (IBE) scheme can greatly reduce the complexity of sending encrypted messages over the Internet. However, an IBE scheme necessarily requires a private-key generator (PKG), which can create private keys for clients, and so can passively eavesdrop on all encrypted communications. Although a distributed PKG has been suggested as a way to mitigate this problem for Boneh and Franklin's IBE scheme, the security of this distributed protocol has not been proven and the proposed solution does not work over the asynchronous Internet. Further, a distributed PKG has not been considered for any other IBE scheme. In this paper, we design distributed PKG setup and private key extraction protocols in an asynchronous communication model for three important IBE schemes; namely, Boneh and Franklin's IBE, Sakai and Kasahara's IBE, and Boneh and Boyen's BB1-IBE. We give special attention to the applicability of our protocols to all possible types of bilinear pairings and prove their IND-ID-CCA security in the random oracle model. Finally, we also perform a comparative analysis of these protocols and present recommendations for their use.
Last updated:  2009-09-14
Cache Timing Attacks on Camellia Block Cipher
ZHAO Xin-jie, WANG Tao, ZHENG Yuan-yuan
Camellia, the final 128-bit block cipher winner in NESSIE, is regarded as one of the most secure block ciphers. In 2003, Tsunoo proposed a cache attack exploiting the timing of the CPU cache, successfully recovering the Camellia-128 key within $2^{28}$ plaintexts and 35 minutes. In 2004, Ikeda Yoshitaka made further improvements on Tsunoo's attacks, recovering the Camellia-128 key within $2^{21.4}$ plaintexts and 22 minutes. All of their attacks belong to the class of timing-driven cache attacks. Our research shows that, due to its frequent S-box lookup operations, Camellia is also quite vulnerable to access-driven cache timing attacks, which are much more effective than timing-driven cache attacks. First, we provide a general analysis model for symmetric ciphers using S-boxes under access-driven cache timing attacks, and point out that the F function of Camellia can leak information about the result of the encryption key XORed with the expanded key, and that the left circular rotation in Camellia's key schedule has a serious design problem. Next, we present several attacks on Camellia-128/192/256 with and without FL/FL$^{-1}$. Experimental results demonstrate that 500 random plaintexts are enough to recover the full Camellia-128 key, and 900 random plaintexts are enough to recover the full Camellia-192/256 key; our attacks can also be extended to known-ciphertext conditions by attacking the Camellia decryption procedure; moreover, our attacks are easily extended to remote scenarios, where 3000 random plaintexts are enough to recover the full encryption key of Camellia-128/192/256 in both local and campus networks. Finally, we discuss why Camellia is weak against this type of attack, and provide some advice to cipher designers for hardening ciphers against cache timing attacks.
Last updated:  2009-07-21
Comparing SessionStateReveal and EphemeralKeyReveal for Diffie-Hellman protocols (extended version)
Berkant Ustaoglu
Both the ``eCK'' model, by LaMacchia, Lauter and Mityagin, and the ``CK01'' model, by Canetti and Krawczyk, address the effect of leaking session specific ephemeral data on the security of key establishment schemes. The CK01-adversary is given a \SessionStateReveal{} query to learn session specific private data defined by the protocol specification, whereas the eCK-adversary is equipped with an \RevealEphemeralKey{} query to access all ephemeral private input required to carry session computations. \SessionStateReveal{} \emph{cannot} be issued against the test session; by contrast \RevealEphemeralKey{} \emph{can} be used against the test session under certain conditions. On the other hand, it is not obvious how \RevealEphemeralKey{} compares to \SessionStateReveal{}. Thus it is natural to ask which model is more useful and practically relevant. While formally the models are not comparable, we show that recent analysis utilizing \SessionStateReveal{} and \RevealEphemeralKey{} have a similar approach to ephemeral data leakage. First we pinpoint the features that determine the approach. Then by examining common motives for ephemeral data leakage we conclude that the approach is meaningful, but does not take into account timing, which turns out to be critical for security. Lastly, for Diffie-Hellman protocols we argue that it is important to consider security when discrete logarithm values of the outgoing ephemeral public keys are leaked and offer a method to achieve security even if the values are exposed.
Last updated:  2009-07-21
On the Duality of Probing and Fault Attacks
Berndt M. Gammel, Stefan Mangard
In this work we investigate the problem of simultaneous privacy and integrity protection in cryptographic circuits. We consider a white-box scenario with a powerful, yet limited attacker. A concise metric for the level of probing and fault security is introduced, which is directly related to the capabilities of a realistic attacker. In order to investigate the interrelation of probing and fault security we introduce a common mathematical framework based on the formalism of information and coding theory. The framework unifies the known linear masking schemes. We prove a central theorem about the properties of linear codes which leads to optimal secret sharing schemes. These schemes provide the lower bound for the number of masks needed to counteract an attacker with a given strength. The new formalism reveals an intriguing duality principle between the problems of probing and fault security, and provides a unified view on privacy and integrity protection using error detecting codes. Finally, we introduce a new class of linear tamper-resistant codes. These are eligible to preserve security against an attacker mounting simultaneous probing and fault attacks.
Last updated:  2009-07-24
How to Delegate a Lattice Basis
David Cash, Dennis Hofheinz, Eike Kiltz
We present a technique, which we call basis delegation, that allows one to use a short basis of a given lattice to derive a new short basis of a related lattice in a secure way. And since short bases for lattices essentially function like cryptographic trapdoors, basis delegation turns out to be a very powerful primitive. As the main application of our technique, we show how to construct hierarchical identity-based encryption (HIBE) that is secure, without random oracles, under the assumption that certain standard lattice problems are hard in the worst case. This construction and its variants constitute the first HIBE schemes from lattices, as well as the first lattice-based constructions of stateless signatures and identity-based encryption without random oracles.
Last updated:  2009-07-21
Game Theoretic Resistance to Denial of Service Attacks Using Hidden Difficulty Puzzles
Harikrishna Narasimhan, Venkatanathan Varadarajan, C. Pandu Rangan
Denial of Service (DoS) vulnerabilities are one of the major concerns in today’s Internet. Client-puzzles offer a good mechanism to defend servers against DoS attacks. In this paper, we introduce the notion of hidden puzzle difficulty, where the attacker cannot determine the difficulty of the puzzle without expending a minimal amount of computational resource. Game theory is used to develop defense mechanisms, which make use of such puzzles. New puzzles that satisfy the requirements of the defense mechanisms have been proposed. We also show that our defense mechanisms are more effective than the ones proposed in the earlier work by Fallah.
Last updated:  2009-07-18
Compact Hardware Implementations of the SHA-3 Candidates ARIRANG, BLAKE, Grøstl, and Skein
Stefan Tillich, Martin Feldhofer, Wolfgang Issovits, Thomas Kern, Hermann Kureck, Michael Mühlberghuber, Georg Neubauer, Andreas Reiter, Armin Köfler, Mathias Mayrhofer
The weakening of the widely used SHA-1 hash function has also cast doubts on the strength of the related algorithms of the SHA-2 family. The US NIST has therefore initiated the SHA-3 competition in order to select a modern hash function algorithm as a ``backup'' for SHA-2. This algorithm should be efficiently implementable both in software and hardware under different constraints. In this paper, we present hardware implementations of the four SHA-3 candidates ARIRANG, BLAKE, Grøstl, and Skein with the primary constraint of minimizing chip area.
Last updated:  2009-07-20
A provably secure really source hiding designated verifier signature scheme based on random oracle model
Huang-Ta Huang, Jue-Sam Chou
Many designated verifier signature (DVS) schemes have been proposed. However, all of them provide only the basic security requirement that only the designated verifier can check the validity of the signature. They are either not secure enough or lack source hiding. Hence, in this article, we design a provably secure DVS scheme. It not only attains the basic security requirement but also hides the original signer's identity, which makes our scheme more suitable for applications such as electronic voting systems.
Last updated:  2009-07-18
An Efficient Concurrent Repetition Theorem
Douglas Wikström
Håstad et al. (2008) prove, using Raz's lemma (STOC '95), the first efficient parallel repetition theorem for protocols with a non-constant number of rounds, for a natural generalization of public-coin protocols. They show that a parallel prover that convinces a fraction $1-\gamma$ of the embedded verifiers of a $\reps$-wise repeated $\exchanges$-message verifier can be turned into a prover with error probability $1-\gamma-O(\exchanges\sqrt{-\logg{\epsilon}/\reps})$. This improves previous results of Impagliazzo et al. (Crypto 2007) and Pass and Venkitasubramaniam (STOC 2007) that study the constant-round case. We prove a generalization of Raz's Lemma to random processes that allows us to improve the analysis of the reduction of Håstad et al. in the public-coin case to $1-\gamma-O(\sqrt{-\logg{\epsilon}/\reps})$, i.e., we remove the dependence on the number of rounds completely, and thus the restriction to settings where $\reps>\exchanges^2$. An important implication of the strengthened parallel repetition theorem is the first efficient \emph{concurrent} repetition theorem for protocols with a non-constant number of rounds. In concurrent repetition, the verifiers execute completely independently and only report their final decision, i.e., the prover chooses arbitrarily in which order it interacts with the individual verifiers. This should be contrasted with parallel repetition, where the verifiers are synchronized in each round.
Last updated:  2009-07-18
Security Analysis of the GF-NLFSR Structure and Four-Cell Block Cipher
Wenling Wu, Lei Zhang, Liting Zhang, Wentao Zhang
The overall structure is one of the most important properties of block ciphers. At present, the most common structures include Feistel structure, SP structure, MISTY structure, L-M structure and Generalized Feistel structure. In \cite{29}, Choy et al. proposed a new structure called GF-NLFSR (Generalized Feistel-NonLinear Feedback Shift Register), and designed a new block cipher called Four-Cell which is based on the 4-cell GF-NLFSR. In this paper, we first study properties of the $n$-cell GF-NLFSR structure, and prove that for an $n$-cell GF-NLFSR, there exists an $(n^2+n-2)$ rounds impossible differential. Then we present an impossible differential attack on the full 25-round Four-Cell using this kind of 18-round impossible differential distinguisher together with differential cryptanalysis technique. The data complexity of our attack is $2^{111.5}$ and the time complexity is less than $2^{123.5}$ encryptions. In addition, we expect the attack to be more efficient when the relations between different round subkeys can be exploited by taking the key schedule algorithm into consideration.
Last updated:  2009-07-16
Anonymous ID Based Signcryption Scheme for Multiple Receivers
Sunder Lal, Prashant Kushwah
Anonymous signcryption is a synonym for ring signcryption, which provides anonymity of the sender along with the advantages of signcryption. Multi-receiver signcryption is suited for situations where a sender wants to send a message to multiple receivers in a confidential and authenticated way. This paper proposes an identity-based anonymous signcryption scheme in the multi-receiver setting. It also provides security proofs for the proposed scheme under some computationally hard problems.
Last updated:  2009-07-16
Comments on Shao-Cao's Unidirectional Proxy Re-Encryption Scheme from PKC 2009
Xi Zhang, Min-Rong Chen, Xia Li
In Eurocrypt'98, Blaze, Bleumer and Strauss [4] introduced a primitive named proxy re-encryption (PRE), in which a semi-trusted proxy can convert - without seeing the plaintext - a ciphertext originally intended for Alice into an encryption of the same message intended for Bob. PRE systems can be categorized into bidirectional PRE, in which the proxy can transform from Alice to Bob and vice versa, and unidirectional PRE, in which the proxy cannot transform ciphertexts in the opposite direction. How to construct a PRE scheme secure against chosen-ciphertext attack (CCA) without pairings was left as an open problem in ACM CCS'07 by Canetti and Hohenberger [7]. In CANS'08, Deng et al. [8] successfully proposed a CCA-secure bidirectional PRE scheme without pairings. In PKC'09, Shao and Cao [10] proposed a unidirectional PRE without pairings, and claimed that their scheme is CCA-secure. They compared their scheme with Libert-Vergnaud's pairing-based unidirectional PRE scheme from PKC'08, and wanted to indicate that their scheme gains advantages over Libert-Vergnaud's scheme. However, Weng et al. [13] recently pointed out that Shao-Cao's scheme is not CCA-secure by giving a concrete chosen-ciphertext attack, and they also presented a more efficient CCA-secure unidirectional PRE scheme without pairings. In this paper, we further point out that Shao-Cao's comparison between their scheme and Libert-Vergnaud's scheme is unfair, since Shao-Cao's scheme is not even secure against chosen-plaintext attack (CPA) in Libert-Vergnaud's security model.
Last updated:  2009-07-14
Partitioning Multivariate Polynomial Equations via Vertex Separators for Algebraic Cryptanalysis and Mathematical Applications
Kenneth Koon-Ho Wong, Gregory V. Bard, Robert H. Lewis
We present a novel approach for solving systems of polynomial equations via graph partitioning. The concept of a variable-sharing graph of a system of polynomial equations is defined. If such a graph is disconnected, then the system of equations is actually two separate systems that can be solved individually. This can provide a significant speed-up in computing the solution to the system, but is unlikely to occur either randomly or in applications. However, by deleting a small number of vertices of the graph, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations is separated into smaller ones of similar sizes. In graph theory terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. The techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach to the QUAD family of stream ciphers, algebraic cryptanalysis of the stream cipher Trivium and its variants, as well as some mathematical problems in game theory and computational algebraic geometry are presented. In each of these cases, the systems of polynomial equations involved are well-suited to our graph partitioning method, and constructive results are discussed.
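As a rough illustration of the idea (not the authors' implementation), the following Python sketch builds a variable-sharing graph from equations represented simply as sets of variable names, and shows how deleting a separator vertex splits the system into independently solvable subsystems:

```python
from collections import defaultdict, deque

def variable_sharing_graph(equations):
    """Vertices are variables; two variables are adjacent if they
    appear together in at least one equation."""
    adj = defaultdict(set)
    for eq in equations:
        vs = sorted(eq)
        for i, u in enumerate(vs):
            adj[u]                      # ensure even isolated variables appear
            for v in vs[i + 1:]:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def components(adj, removed=frozenset()):
    """Connected components after deleting the vertices in `removed`
    (a candidate vertex separator)."""
    seen, comps = set(removed), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(sorted(comp))
    return comps
```

For example, with equations chaining a-b, b-c, c-e, e-d, d-f, the graph is connected, but deleting the separator vertex e leaves the two balanced subsystems {a, b, c} and {d, f}.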
Last updated:  2009-07-31
FPGA Implementations of SHA-3 Candidates:CubeHash, Grøstl, L{\sc ane}, Shabal and Spectral Hash
Brian Baldwin, Andrew Byrne, Mark Hamilton, Neil Hanley, Robert P. McEvoy, Weibo Pan, William P. Marnane
Hash functions are widely used in, and form an important part of, many cryptographic protocols. Currently, a public competition is underway to find one or more new hash algorithms for inclusion in the NIST Secure Hash Standard (SHA-3). Computational efficiency of the algorithms in hardware will form one of the evaluation criteria. In this paper, we focus on five of the candidate algorithms, namely CubeHash, Grøstl, L{\sc ane}, Shabal and Spectral Hash. Using Xilinx Spartan-3 and Virtex-5 FPGAs, we present architectures for each of these hash functions and explore area-speed trade-offs in each design. The efficiency of the various architectures for the five hash functions is compared in terms of throughput per unit area. To the best of the authors' knowledge, this is the first such comparison of these SHA-3 candidates in the literature.
Last updated:  2010-03-13
Leakage Resilient Cryptography in Practice
Francois-Xavier Standaert, Olivier Pereira, Yu Yu, Jean-Jacques Quisquater, Moti Yung, Elisabeth Oswald
In this report, we are concerned with models for analyzing the security of cryptographic algorithms against side-channel attacks. Our objectives are threefold. In the first part of the paper, we survey a number of well-known intuitions related to physical security and connect them with more formal results in this area. For this purpose, we study the definition of leakage function introduced by Micali and Reyzin in 2004 and its relation to practical power consumption traces. Then, we discuss the non-equivalence between the unpredictability and indistinguishability of pseudorandom generators in physically observable cryptography. Finally, we examine the assumption of bounded leakage per iteration that has recently been used to prove the security of different constructions against side-channel attacks. We show that approximate leakage bounds can be obtained using the framework for the analysis of side-channel key recovery attacks published at Eurocrypt 2009. In the second part of the paper, we investigate two recent leakage resilient pseudorandom generators, from both a theoretical and a practical point of view. On the one hand, we consider a forward secure generator from ASIACCS 2008 and its similarities with a previous construction by Bellare and Yee. On the other hand, we analyze Pietrzak's block cipher based construction from Eurocrypt 2009. In doing so, we highlight the difficulty of meaningfully restricting the physical leakages and show that this difficulty leads to different drawbacks, which allows us to emphasize the differences between these two designs. First, one of the constructions requires strong black box assumptions (i.e. random oracles), while the other considers unrealistic leakages, leading to (possibly useless) performance overheads. Second, one construction considers an adversary able to adaptively choose a leakage function, while the other does not permit this adaptivity. Third, the security proof of the Eurocrypt 2009 construction relies on the assumption that ``only computation leaks'' (or relaxed but related hypotheses), while this assumption is not necessary for the ASIACCS construction. We then discuss the impact of these hypotheses with respect to recent technological advances. In the third part of the paper, we show that Pietrzak's leakage resilient mode of operation from Eurocrypt 2009 can be broken with a standard DPA if it is re-initialized without sharing new keys. We then propose solutions to fix this issue and extend the initial proposal from ASIACCS 2008 in order to rely on more standard cryptographic constructions. We use these alternative designs to illustrate the incompatibility between a fully adaptive selection of the leakage function and the secure initialization of a pseudorandom generator. We also argue that simple pseudorandom functions (e.g. the one of Goldreich, Goldwasser and Micali) can be shown leakage resilient using the random oracle methodology. We additionally discuss the security vs. performance tradeoff that is inherent to these different schemes. Finally, we show that the security of the forward secure pseudorandom number generator of Bellare and Yee against side-channel attacks cannot be directly generalized in the standard model. It is an open problem to determine the minimal black box assumptions and restrictions on the leakage function needed for this purpose.
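The forward secure generator idea attributed to Bellare and Yee can be sketched as follows. This is a hedged toy instantiation using SHA-256 with one-byte domain-separation tags as the underlying PRF (the constructions analyzed in the paper differ in their details): at each step the current key yields an output block and a fresh key, and the old key is erased, so past outputs stay pseudorandom even if the current state leaks.

```python
import hashlib

def fs_prg_step(key):
    """One iteration: derive an output block and the next key from the
    current key, with domain separation via a one-byte tag."""
    out      = hashlib.sha256(b"\x01" + key).digest()
    next_key = hashlib.sha256(b"\x00" + key).digest()
    return out, next_key

def fs_prg(seed, n_blocks):
    """Run the generator for n_blocks iterations; returns the output
    blocks and the final (current) key."""
    key, blocks = seed, []
    for _ in range(n_blocks):
        out, key = fs_prg_step(key)
        blocks.append(out)
    return blocks, key
```

Compromising the final key reveals nothing about earlier blocks unless SHA-256 can be inverted, which is exactly the forward-security property discussed above.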
Last updated:  2014-06-03
Efficient Indifferentiable Hashing into Ordinary Elliptic Curves
Eric Brier, Jean-Sebastien Coron, Thomas Icart, David Madore, Hugues Randriam, Mehdi Tibouchi
We provide the first construction of a hash function into ordinary elliptic curves that is indifferentiable from a random oracle, based on Icart's deterministic encoding from Crypto 2009. While almost as efficient as Icart's encoding, this hash function can be plugged into any cryptosystem that requires hashing into elliptic curves, while not compromising proofs of security in the random oracle model. We also describe a more general (but less efficient) construction that works for a large class of encodings into elliptic curves, for example the Shallue-Woestijne-Ulas (SWU) algorithm. Finally we describe the first deterministic encoding algorithm into elliptic curves in characteristic 3.
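The paper's general construction has the shape H(m) = f(h1(m)) + f(h2(m)), where h1, h2 are independent hash functions and f is a deterministic encoding into the curve. The toy Python sketch below illustrates only this shape: it works over a small illustrative curve and substitutes a simple try-and-increment routine for Icart's encoding f (the real construction requires a suitable deterministic encoding such as Icart's or SWU; all parameters here are assumptions for illustration).

```python
import hashlib

P, A, B = 10007, 1, 6   # toy curve y^2 = x^3 + A*x + B over F_P, with P % 4 == 3

def sqrt_mod(n):
    """Square root mod P when it exists (valid since P % 4 == 3)."""
    r = pow(n, (P + 1) // 4, P)
    return r if r * r % P == n % P else None

def encode(u):
    """Stand-in encoding: step x forward from u until the curve equation
    has a solution. NOT Icart's encoding - illustration only."""
    x = u % P
    while True:
        y = sqrt_mod((x * x * x + A * x + B) % P)
        if y is not None:
            return (x, y)
        x = (x + 1) % P

def add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def hash_to_curve(msg):
    """H(m) = f(h1(m)) + f(h2(m)) with two domain-separated hashes."""
    u1 = int.from_bytes(hashlib.sha256(b"h1" + msg).digest(), "big") % P
    u2 = int.from_bytes(hashlib.sha256(b"h2" + msg).digest(), "big") % P
    return add(encode(u1), encode(u2))
```

The sum of two encoded points is what lets the construction cover (essentially) the whole curve, which a single encoding like Icart's does not.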
Last updated:  2009-07-13
A Novel ID-based Electronic Cash System from Pairings
Jue-Sam Chou, Yalin Chen, Ming-Hsun Cho, Hung-Min Sun
Recently, Chen et al. and Juang et al. proposed one and two e-cash payment systems, respectively, and claimed that their schemes are secure. In this paper, we present the shortcomings of their schemes and then propose a novel scheme from pairings. After security analysis and comparison, we conclude that our scheme is not only more secure but also provides more of the functions that a secure electronic cash system should encompass than any of the previously proposed protocols.
Last updated:  2009-07-13
Security weaknesses in two multi-server password based authentication protocols
Jue-Sam Chou, Chun-Hui Huang, Cheng-Chung Ding
In 2004 and 2005, Tsaur et al. proposed smart-card-based password authentication schemes for multi-server environments. They claimed that their protocols are secure and can withstand various kinds of attacks. However, after analysis, we found that each of their schemes has some security loopholes. In this article, we show the security flaws in these two protocols.
Last updated:  2009-07-13
A New Lattice-Based Cryptosystem Mixed with a Knapsack
Yanbin Pan, Yingpu Deng, Yupeng Jiang, Ziran Tu
In this paper, we present a new lattice-based public-key cryptosystem mixed with a knapsack, which has a reasonable key size and fast encryption and decryption. The module strategy in our cryptosystem can also be used to construct a framework for some GGH-type cryptosystems to improve their security.
Last updated:  2011-08-28
Partial Signatures and their Applications
Mihir Bellare, Shanshan Duan
We introduce Partial Signatures, where a signer, given a message, can compute a ``stub'' which preserves her anonymity, yet later she, but nobody else, can complete the stub to a full and verifiable signature under her public key. We provide a formal definition requiring three properties, namely anonymity, unambiguity and unforgeability. We provide schemes meeting our definition both with and without random oracles. Our schemes are surprisingly cheap in both bandwidth and computation. We describe applications including anonymous bidding and betting.
Last updated:  2009-12-26
Related-Key Rectangle Attack of the Full 80-Round HAS-160 Encryption Mode
Ewan Fleischmann, Michael Gorski, Stefan Lucks
In this paper we investigate the security of the encryption mode of the HAS-160 hash function. HAS-160 is a Korean hash standard which is widely used in Korea's industry. The structure of HAS-160 is similar to SHA-1 but includes some improvements. The encryption mode of HAS-160 is defined analogously to the encryption mode of SHA-1, which is called SHACAL-1. In 2006, Dunkelman et al. successfully broke the full 80-round SHACAL-1. In this paper, we present the first cryptographic attack that breaks the encryption mode of the full 80-round HAS-160. SHACAL-1 and the encryption mode of HAS-160 are both block ciphers with a 512-bit key and a 160-bit plain-/ciphertext block. We apply a key recovery attack that needs about 2^{155} chosen plaintexts and 2^{375.98} 80-round HAS-160 encryptions. The attack does not yield a collision, preimage or second preimage, but it shows that HAS-160 used as a block cipher can be distinguished from an ideal cipher faster than by exhaustive search.
Last updated:  2009-07-09
Attacking Reduced Rounds of the ARIA Block Cipher
Ewan Fleischmann, Michael Gorski, Stefan Lucks
ARIA is a block cipher proposed at ICISC'03. Its design is very similar to that of the Advanced Encryption Standard (AES). The authors state that on 32-bit processors its encryption speed is at least 70% of that of the AES, and they claim that it offers a higher security level than AES. In this paper we present two attacks on reduced-round ARIA which show some weaknesses of the cipher. Moreover, our attacks have the lowest memory requirements among existing attacks on ARIA, at the cost of an increase in time complexity.
Last updated:  2009-07-09
Hard Fault Analysis of Trivium
Yupu Hu, Fengrong Zhang, Yiwei Zhang
Fault analysis is a powerful attack against stream ciphers. Up to now, the main idea of fault analysis has been to simplify the cipher system by injecting soft faults; we call this soft fault analysis. As a hardware-oriented stream cipher, Trivium is weak under soft fault analysis. In this paper we consider another type of fault analysis of stream ciphers, which simplifies the cipher system by injecting hard faults; we call this hard fault analysis. We present the following results on such attacks against Trivium. In Case 1, with probability not smaller than 0.2396, the attacker can obtain 69 bits of the 80-bit key. In Case 2, with probability not smaller than 0.2291, the attacker can obtain all 80 key bits. In Case 3, with probability not smaller than 0.2291, the attacker can partially solve for the key. In Case 4, with non-negligible probability, the attacker obtains a simplified cipher with fewer state bits and a slower non-linearization procedure. In Case 5, with non-negligible probability, the attacker obtains another simplified cipher. Moreover, which of these 5 cases occurs can be determined by observing the keystream.
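For reference, the Trivium state update that such fault injections target can be sketched in a few lines of Python. This follows the 1-based register indices of the eSTREAM specification mapped to 0-based lists; key/IV bit ordering conventions are glossed over, so treat it as an illustrative sketch rather than a validated implementation.

```python
def trivium_keystream(key_bits, iv_bits, n):
    """Generate n keystream bits from 80-bit key and IV (lists of 0/1)."""
    # 288-bit state: key || 13 zeros || IV || 4 zeros || 108 zeros || 1,1,1
    s = key_bits[:80] + [0] * 13 + iv_bits[:80] + [0] * 4 + [0] * 108 + [1, 1, 1]
    assert len(s) == 288
    out = []
    for i in range(4 * 288 + n):          # 4*288 warm-up rounds, no output
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:
            out.append(t1 ^ t2 ^ t3)      # keystream bit z = t1 + t2 + t3
        # nonlinear feedback: AND gates feeding the three shift registers
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # rotate the three registers, inserting the feedback bits
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

A hard fault that permanently fixes one of the AND-gate inputs removes a nonlinear term from the feedback, which is what makes the simplified systems in the cases above solvable.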
Last updated:  2009-07-08
Untraceable RFID protocols are not trivially composable: Attacks on the revision of EC-RAC
Ton van Deursen, Sasa Radomirovic
It is well-known that protocols that satisfy a security property when executed in isolation do not necessarily satisfy the same security property when they are executed in an environment containing other protocols. We demonstrate this fact on a family of recently proposed RFID protocols by Lee, Batina, and Verbauwhede. We invalidate the authentication and untraceability claims made for several of the family's protocols. We also present man-in-the-middle attacks on untraceability in all of the protocols in the family. Similar attacks can be carried out on some other protocols in the literature, as well. We briefly indicate how to repair the protocols.
Last updated:  2009-07-07
Security Notions and Generic Constructions for Client Puzzles
L. Chen, P. Morrissey, N. P. Smart, B. Warinschi
Computational puzzles are mildly difficult computational problems that require resources (processor cycles, memory, or both) to solve. Puzzles have found a variety of uses in security. In this paper we are concerned with {\em client puzzles}: a type of puzzle used as a defense against Denial of Service (DoS) attacks. Before engaging in a resource consuming protocol with a client, a server demands that the client solve a freshly generated client puzzle. Despite their widespread use, the lack of formal models for the security of client puzzles prevents a full analysis of proposed puzzles and, more importantly, prevents rigorous proofs of the effectiveness of puzzles as a DoS defense. The main contribution of this paper is a formal model for the security of client puzzles as a stepping stone towards solving the above problems. We clarify the interface that client puzzles should offer and give two security notions for puzzles. Both functionality and security are inspired by, and tailored to, the use of puzzles as a defense against DoS attacks. The first notion -- puzzle unforgeability -- requires that an adversary is unable to produce valid-looking puzzles on its own. The second notion -- puzzle difficulty -- requires that an adversary spends at least an appropriate amount of resources solving puzzles. Our definitions fill an important gap: breaking either of the two properties immediately leads to successful DoS attacks. We illustrate this point with an attack against a previously proposed puzzle construction. We show that a subtle flaw renders the construction forgeable, and we explain how to exploit this flaw to mount a DoS attack on certain protocols that use this puzzle. We also provide a generic construction of a client puzzle. Our construction uses a pseudorandom function family to provide unforgeability and a one-way function for the difficulty. We prove that our generic construction meets our definitions of unforgeability and difficulty for client puzzles. Finally, we discuss and analyze (in the random oracle model) a practical instantiation of our construction based on hash functions.
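A common hash-based instantiation in the spirit of this generic construction (not the paper's exact scheme; the difficulty parameter and message formats below are illustrative assumptions) uses HMAC as the pseudorandom function for unforgeability and a partial hash preimage search for difficulty:

```python
import hashlib
import hmac
import itertools
import os

def make_puzzle(server_key, client_id, difficulty):
    """Server side: a fresh nonce plus an HMAC tag binds the puzzle to
    this client; without server_key, valid-looking puzzles can't be forged."""
    nonce = os.urandom(16)
    tag = hmac.new(server_key, nonce + client_id, hashlib.sha256).digest()
    return (nonce, tag, difficulty)

def verify_puzzle(server_key, client_id, puzzle):
    nonce, tag, _ = puzzle
    expected = hmac.new(server_key, nonce + client_id, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

def solve(puzzle):
    """Client side: brute-force a solution s such that
    SHA-256(nonce || tag || s) starts with `difficulty` zero hex digits."""
    nonce, tag, difficulty = puzzle
    for s in itertools.count():
        cand = str(s).encode()
        h = hashlib.sha256(nonce + tag + cand).hexdigest()
        if h.startswith("0" * difficulty):
            return cand

def check_solution(puzzle, solution):
    nonce, tag, difficulty = puzzle
    h = hashlib.sha256(nonce + tag + solution).hexdigest()
    return h.startswith("0" * difficulty)
```

Raising `difficulty` by one multiplies the client's expected work by 16 while the server's generation and verification cost stays at two hash computations, which is the asymmetry a DoS defense needs.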