All papers in 2011 (714 results)

Last updated:  2012-01-02
Position-Verification in Multi-Channel Models
Huajun Zhang, Zongyang Zhang, Zhenfu Cao
We propose a collusion-attack-resistant position-verification protocol in a new model called the multi-channel model. In the multi-channel model, there are many communication channels. When a player picks a random channel and sends a short message over it, the message slips past an adversary with high probability if the adversary does not know the channel beforehand. This idea is motivated by spread-spectrum communication techniques, and we adopt it to solve the position-verification task. By adding different constraints to the multi-channel model, we obtain three sub-models: the receiving-constrained, the sending-constrained and the cover-constrained multi-channel model. Our position-verification protocol is secure under all of these sub-models with appropriate parameters.
Last updated:  2012-01-02
A server-aided verification signature scheme without random oracles
Bin Wang, Qing Zhao
Server-aided verification (SAV) signatures are useful for power-constrained devices since a powerful server can assist in performing costly operations such as pairing computations. Wu et al. [13] defined three security notions for SAV protocols to prevent a server from convincing a verifier that an invalid signature is valid. Security against strong collusion attack provides the strongest guarantee among these notions, and they [13] constructed SAV protocols that meet each of these notions respectively. However, they did not provide concrete running times showing that a verifier in their SAV protocols runs strictly faster than a verifier in the original verification protocol. In addition, a problem left open by their work is to design an SAV signature that is unforgeable without random oracles as well as sound against strong collusion attack. To address these issues, we first design an SAV protocol, called SAV-Hofheinz, for a short signature scheme proposed by Hofheinz that is unforgeable in the standard model. We then implement SAV-Hofheinz with the PBC library and show that the running time of a verifier in SAV-Hofheinz is strictly less than that of a verifier in the verification protocol of the Hofheinz short signature.
Last updated:  2012-08-28
Efficient Java Implementation of Elliptic Curve Cryptography for J2ME-Enabled Mobile Devices
Johann Großschädl, Dan Page, Stefan Tillich
The Micro Edition of the Java 2 platform (J2ME) provides an application environment specifically designed to address the demands of embedded devices like cell phones, PDAs or set-top boxes. Since the J2ME platform does not include a crypto package, developers are forced to use third-party classes or to implement all cryptographic primitives from scratch. However, most existing implementations of elliptic curve (EC) cryptography for J2ME do not perform well on resource-restricted devices, in most cases due to poor efficiency of the underlying arithmetic operations. In this paper we present an optimized Java implementation of EC scalar multiplication that combines efficient finite-field arithmetic with efficient group arithmetic. More precisely, our implementation uses a pseudo-Mersenne (PM) prime field for fast modular reduction and a Gallant-Lambert-Vanstone (GLV) curve with an efficiently computable endomorphism to speed up the scalar multiplication with random base points. Our experimental results show that a conventional mobile phone without Java acceleration, such as the Nokia 6610, is capable of executing a 174-bit scalar multiplication in roughly 400 msec, which is more than 45 times faster than the widely-used Bouncy Castle Lightweight Crypto API for J2ME.
Last updated:  2011-12-31
Evolutionary Construction of de Bruijn Sequences
Meltem Sonmez Turan
A binary de Bruijn sequence of order $n$ is a cyclic sequence of period $2^n$ in which each $n$-bit pattern appears exactly once. These sequences are commonly used in random number generation and symmetric-key cryptography, particularly in stream cipher design, mainly due to their good statistical properties. Constructing de Bruijn sequences is of interest and well studied in the literature. In this study, we propose a new randomized construction method based on genetic algorithms. The method models de Bruijn sequences as a special type of traveling salesman problem (TSP) tour and tries to find optimal solutions. We present some experimental results for $n\leq 14$.
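As a concrete illustration of the defining property (each n-bit pattern occurring exactly once per period), the sketch below builds a small de Bruijn sequence with the classical greedy prefer-one rule and checks it; this is only meant to make the object tangible and is not the genetic-algorithm construction of the paper.

    # Illustrative only: the classical greedy "prefer-one" construction of a
    # de Bruijn sequence of order n, plus a check of the defining property.
    # This is not the genetic-algorithm method proposed in the paper.
    def de_bruijn_prefer_one(n):
        seq = [0] * n                      # start with the all-zero n-gram
        seen = {tuple(seq)}
        while True:
            window = seq[len(seq) - n + 1:] if n > 1 else []
            for bit in (1, 0):             # prefer appending a 1
                cand = tuple(window + [bit])
                if cand not in seen:
                    seen.add(cand)
                    seq.append(bit)
                    break
            else:
                break                      # no unseen extension: construction done
        return seq[:2 ** n]                # first 2^n bits give the cyclic sequence

    def is_de_bruijn(seq, n):
        ext = seq + seq[:n - 1]            # read the sequence cyclically
        patterns = {tuple(ext[i:i + n]) for i in range(len(seq))}
        return len(seq) == 2 ** n and len(patterns) == 2 ** n

    s = de_bruijn_prefer_one(4)
    print(''.join(map(str, s)), is_de_bruijn(s, 4))   # a 16-bit sequence, True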
Last updated:  2011-12-31
Cryptanalysis of the Full AES Using GPU-Like Special-Purpose Hardware
Alex Biryukov, Johann Großschädl
The block cipher Rijndael has undergone more than ten years of extensive cryptanalysis since its submission as a candidate for the Advanced Encryption Standard (AES) in April 1998. To date, most of the publicly-known cryptanalytic results are based on reduced-round variants of the AES (respectively Rijndael) algorithm. Among the few exceptions that target the full AES are the Related-Key Cryptanalysis (RKC) introduced at ASIACRYPT 2009 and attacks exploiting Time-Memory-Key (TMK) trade-offs such as those demonstrated at SAC 2005. However, all these attacks are generally considered infeasible in practice due to their high complexity (i.e. $2^{99.5}$ AES operations for RKC, $2^{80}$ for TMK). In this paper, we evaluate the cost of cryptanalytic attacks on the full AES when using special-purpose hardware in the form of multi-core AES processors that are designed in a similar way to modern Graphics Processing Units (GPUs) such as the NVIDIA GT200b. Using today's VLSI technology would allow for the implementation of a GPU-like processor reaching a throughput of up to $10^{12}$ AES operations per second. An organization able to spend one trillion US\$ on designing and building a supercomputer based on such processors could theoretically break the full AES in a time frame of as little as one year when using RKC, or in merely one month when performing a TMK attack. We also analyze different time-cost trade-offs and assess the implications of progress in VLSI technology under the assumption that Moore's law will continue to hold for the next ten years. These assessments raise some concerns about the long-term security of the AES.
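To give a feel for the scale of these numbers, the following back-of-envelope check is our own arithmetic (not a figure from the paper), under the simplifying assumption that the attack parallelizes perfectly across processors running at the quoted $10^{12}$ AES operations per second.

    # Back-of-envelope check (illustrative arithmetic, not the paper's figures):
    # how many 10^12-ops/sec processors are needed to perform 2^99.5 AES
    # operations within one year, assuming perfect parallelization.
    ops_rkc = 2 ** 99.5                  # related-key cryptanalysis complexity
    throughput = 1e12                    # AES operations per second per processor
    seconds_per_year = 365 * 24 * 3600

    processors = ops_rkc / (throughput * seconds_per_year)
    budget = 1e12                        # one trillion US$, as in the abstract
    print(f"processors needed: {processors:.2e}")                        # roughly 3e10
    print(f"implied budget per processor: ${budget / processors:.0f}")  # a few tens of dollars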
Last updated:  2011-12-31
Fault Attack against Miller's algorithm
Nadia El Mrabet
We complete the study of [23] and [27] about Miller's algorithm. Miller's algorithm is a central step in the computation of the Weil, Tate and Ate pairings. The aim of this article is to analyze the weakness of Miller's algorithm when it undergoes a fault attack. We prove that Miller's algorithm is vulnerable to a fault attack which is valid in all coordinate systems, through the resolution of a nonlinear system. We highlight the fact that putting the secret as the first argument of the pairing is not a countermeasure. This article is an extended version of the article [15].
Last updated:  2011-12-31
Computational Extractors and Pseudorandomness
Dana Dachman-Soled, Rosario Gennaro, Hugo Krawczyk, Tal Malkin
Computational extractors are efficient procedures that map a source of sufficiently high min-entropy to an output that is computationally indistinguishable from uniform. By relaxing the statistical closeness property of traditional randomness extractors one hopes to improve the efficiency and entropy parameters of these extractors, while keeping their utility for cryptographic applications. In this work we investigate computational extractors and consider questions of existence and inherent complexity from the theoretical and practical angles, with particular focus on the relationship to pseudorandomness. An obvious way to build a computational extractor is via the ``extract-then-prg" method: apply a statistical extractor and use its output to seed a PRG. This approach carries with it the entropy cost inherent to implementing statistical extractors, namely, the source entropy needs to be substantially higher than the PRG's seed length. It also requires a PRG and thus relies on one-way functions. We study the necessity of one-way functions in the construction of computational extractors and determine matching lower and upper bounds on the ``black-box efficiency" of generic constructions of computational extractors that use a one-way permutation as an oracle. Under this efficiency measure we prove a direct correspondence between the complexity of computational extractors and that of pseudorandom generators, showing the optimality of the extract-then-prg approach for generic constructions of computational extractors and confirming the intuition that to build a computational extractor via a PRG one needs to make up for the entropy gap intrinsic to statistical extractors. On the other hand, we show that with stronger cryptographic primitives one can have more entropy- and computationally-efficient constructions. In particular, we show a construction of a very practical computational extractor from any weak PRF without resorting to statistical extractors.
Last updated:  2011-12-31
Cryptanalysis of The Atmel Cipher in SecureMemory, CryptoMemory and CryptoRF
Alex Biryukov, Ilya Kizhvatov, Bin Zhang
SecureMemory (SM), CryptoMemory (CM) and CryptoRF (CR) are the Atmel chip families with wide applications in practice. They implement a proprietary stream cipher, which we call the Atmel cipher, to provide authenticity, confidentiality and integrity. At CCS'2010, it was shown that given $1$ keystream frame, the secret key in SM protected by the simple version of the cipher can be recovered in $2^{39.4}$ cipher ticks and if $2640$ keystream frames are available, the secret key in CM guarded by the more complex version of the cipher can be restored in $2^{58}$ cipher ticks. In this paper, we show much more efficient and practical attacks on both versions of the Atmel cipher. The idea is to dynamically reconstruct the internal state of the underlying register by exploiting the different diffusion speeds of the different cells. For SM, we can recover the secret key in $2^{29.8}$ cipher ticks given $1$ keystream frame; for CM, we can recover the secret key in $2^{50}$ cipher ticks with around $24$ frames. Practical implementation of the full attack confirms our results.
Last updated:  2012-01-24
Improved Side Channel Attacks on Pairing Based Cryptography
Johannes Blömer, Peter Günther, Gennadij Liske
Techniques from pairing based cryptography (PBC) are used in an increasing number of cryptographic schemes. With progress regarding efficient implementations, pairings also become interesting for applications on smart cards. With these applications the question of the vulnerability to side channel attacks (SCAs) arises. Several known invasive and non-invasive attacks against pairing algorithms only work if the second, but not if the first, argument of the pairing is the secret. In this paper we extend some of these attacks to the case where the first argument is the secret. Hence we may conclude that positioning the secret as the first argument of the pairing does not improve the security against SCAs, as has sometimes been suggested.
Last updated:  2011-12-27
Differential Attacks on Generalized Feistel Schemes
Valerie Nachef, Emmanuel Volte, Jacques Patarin
While generic attacks on classical Feistel schemes and unbalanced Feistel schemes have been studied a lot, generic attacks on several generalized Feistel schemes like type-1, type-2 and type-3 and Alternating Feistel schemes, as defined in~\cite{HR}, have not been systematically investigated. This is the aim of this paper. We give our best Known Plaintext Attacks and non-adaptive Chosen Plaintext Attacks on these schemes and we determine the maximum number of rounds that we can attack. It is interesting to have generic attacks since there are well known block cipher networks that use generalized Feistel schemes: CAST-256 (type-1), RC-6 (type-2), MARS (type-3) and BEAR/LION (alternating). Also, Type-1 and Type-2 Feistel schemes are respectively used in the construction of the hash functions $Lesamnta$ and $SHAvite-3_{512}$.
Last updated:  2012-01-09
Security Analysis of a PUF based RFID Authentication Protocol
Masoumeh Safkhani, Nasour Bagheri, Majid Naderi
In this paper we consider the security of a PUF based RFID authentication protocol recently proposed by Bassil et al. The designers claim that their protocol offers immunity against a broad range of attacks while providing excellent performance. However, contrary to the designers' claim, we prove that this protocol does not provide any security. We present an efficient secret disclosure attack which retrieves all secret parameters of the protocol. Given those secret parameters, it would be trivial to apply any other attack on the protocol. Nevertheless, to highlight further weaknesses of the protocol, we present additional reader traceability, impersonation and desynchronization attacks that do not necessarily require disclosing the secret parameters. The success probability of all mentioned attacks is almost 1, while the complexity is at most two runs of the protocol.
Last updated:  2013-10-29
Waters Signatures with Optimal Security Reduction
Dennis Hofheinz, Tibor Jager, Edward Knapp
Waters signatures (Eurocrypt 2005) can be shown existentially unforgeable under chosen-message attacks under the assumption that the computational Diffie-Hellman problem in the underlying (pairing-friendly) group is hard. The corresponding security proof has a reduction loss of O(l*q), where l is the bitlength of messages, and q is the number of adversarial signature queries. The original reduction has meanwhile been improved to O(\sqrt{l}*q) (Hofheinz and Kiltz, Crypto 2008); however, it is currently unknown whether a better reduction exists. We answer this question as follows: (a) We give a simple modification of Waters signatures, where messages are encoded such that each two encoded messages have a suitably large Hamming distance. Somewhat surprisingly, this simple modification suffices to prove security under the CDH assumption with a reduction loss of O(q). (b) We also show that any black-box security proof for a signature scheme with re-randomizable signatures must have a reduction loss of at least \Omega(q), or the underlying hardness assumption is false. Since both Waters signatures and our variant from (a) are re-randomizable, this proves our reduction from (a) optimal up to a constant factor. Understanding and optimizing the security loss of a cryptosystem is important to derive concrete parameters, such as the size of the underlying group. We provide a complete picture for Waters-like signatures: there is an inherent lower bound for the security loss, and we show how to achieve it.
Last updated:  2012-01-02
Comments of an efficient and secure multi-server authentication scheme with key agreement
Yitao Chen
Recently, Tsaur et al. proposed an authentication scheme for multi-server environments and claimed their scheme could withstand various attacks. In this letter, we point out that Tsaur et al.'s scheme is not suitable for multi-server environments since the user has to register for every server. Furthermore, we show that Tsaur et al.'s scheme is vulnerable to the password guessing attack and the privileged insider attack.
Last updated:  2012-07-27
Decentralized Attribute-Based Signatures
Tatsuaki Okamoto, Katsuyuki Takashima
In this paper, we present the first decentralized multi-authority attribute-based signature (DMA-ABS) scheme, in which no central authority and no trusted setup are required. The proposed DMA-ABS scheme for general (non-monotone) predicates is fully secure (adaptive-predicate unforgeable and perfectly private) under a standard assumption, the decisional linear (DLIN) assumption, in the random oracle model. Our DMA-ABS scheme is comparably as efficient as the most efficient ABS scheme. As a by-product, this paper also presents an adaptively secure DMA functional encryption (DMA-FE) scheme under the DLIN assumption.
Last updated:  2014-08-29
Efficient Attribute-Based Signatures for Non-Monotone Predicates in the Standard Model
Tatsuaki Okamoto, Katsuyuki Takashima
This paper presents a fully secure (adaptive-predicate unforgeable and private) attribute-based signature (ABS) scheme in the standard model. The security of the proposed ABS scheme is proven under standard assumptions, the decisional linear (DLIN) assumption and the existence of collision resistant (CR) hash functions. The admissible predicates of the proposed ABS scheme are more general than those of the existing ABS schemes, i.e., the proposed ABS scheme is the first to support general non-monotone predicates, which can be expressed using NOT gates as well as AND, OR, and Threshold gates, while the existing ABS schemes only support monotone predicates. The proposed ABS scheme is comparably as efficient as (several times worse than) one of the most efficient ABS schemes, which is proven to be secure in the generic group model.
Last updated:  2012-02-17
Public-Key Encryption with Cluster-Chain-based Keyword Search
Peng Xu, Hai Jin, Wei Wang, Deqing Zou
It is widely acknowledged that keyword search performance and the privacy of keywords are equally important for keyword-searchable ciphertexts. To our knowledge, no public-key-encryption-based work in the literature accelerates keyword search while preserving semantic security of keywords under chosen keyword attacks. In this paper, we propose public-key encryption with cluster-chain-based keyword search (PCCS), an innovation on public-key encryption with keyword search (PEKS). PCCS not only enables much more efficient keyword search, but also achieves the same SS-CKA security as PEKS under the computational bilinear Diffie-Hellman (CBDH) assumption. With these advantages, PCCS only slightly increases the size of the public parameters and the space complexity of keyword-searchable ciphertexts.
Last updated:  2012-11-12
A generalization of the class of hyper-bent Boolean functions in binomial forms
Chunming Tang, Yu Lou, Yanfeng Qi, Baocheng Wang, Yixian Yang
Bent functions, which are maximally nonlinear Boolean functions with an even number of variables and whose Hamming distance to the set of all affine functions equals $2^{n-1}\pm 2^{\frac{n}{2}-1}$, were introduced by Rothaus in 1976 when he considered problems in combinatorics. Bent functions have been extensively studied due to their applications in cryptography, such as in S-boxes, block ciphers and stream ciphers. Further, they have been applied to coding theory, spread spectrum and combinatorial design. Hyper-bent functions, a special class of bent functions with stronger properties and rarer elements, were introduced by Youssef and Gong in 2001. Much research focuses on the construction of bent and hyper-bent functions. In this paper, we consider functions defined over $\mathbb{F}_{2^n}$ by $f^{(r)}_{a,b}:=\mathrm{Tr}_{1}^{n}(ax^{r(2^m-1)}) +\mathrm{Tr}_{1}^{4}(bx^{\frac{2^n-1}{5}})$, where $n=2m$, $m\equiv 2\pmod 4$, $a\in \mathbb{F}_{2^m}$ and $b\in\mathbb{F}_{16}$. When $r\equiv 0\pmod 5$, we characterize the hyper-bentness of $f^{(r)}_{a,b}$. When $r\not\equiv 0\pmod 5$, $a\in \mathbb{F}_{2^m}$ and $(b+1)(b^4+b+1)=0$, with the help of Kloosterman sums and the factorization of $x^5+x+a^{-1}$, we present a characterization of the hyper-bentness of $f^{(r)}_{a,b}$. Further, we give all the hyper-bent functions of the form $f^{(r)}_{a,b}$ in the case $a\in\mathbb{F}_{2^{\frac{m}{2}}}$.
Last updated:  2011-12-23
SPONGENT: The Design Space of Lightweight Cryptographic Hashing
Andrey Bogdanov, Miroslav Knezevic, Gregor Leander, Deniz Toz, Kerem Varici, Ingrid Verbauwhede
The design of secure yet efficiently implementable cryptographic algorithms is a fundamental problem of cryptography. Lately, lightweight cryptography - optimizing the algorithms to fit the most constrained environments - has received a great deal of attention, the recent research being mainly focused on building block ciphers. As opposed to that, the design of lightweight hash functions is still far from being well-investigated, with only a few proposals in the public domain. In this article, we aim to address this gap by exploring the design space of lightweight hash functions based on the sponge construction instantiated with PRESENT-type permutations. The resulting family of hash functions is called SPONGENT. We propose 13 SPONGENT variants -- for different levels of collision and (second) preimage resistance as well as for various implementation constraints. For each of them we provide several ASIC hardware implementations - ranging from the lowest area to the highest throughput. We make efforts to address the fairness of comparison with other designs in the field by providing an exhaustive hardware evaluation on various technologies, including an open core library. We also prove essential differential properties of SPONGENT permutations, give a security analysis in terms of collision and preimage resistance, as well as study in detail dedicated linear distinguishers.
Last updated:  2012-03-14
Efficient Network Coding Signatures in the Standard Model
Dario Catalano, Dario Fiore, Bogdan Warinschi
Network coding is a routing technique where each node may actively modify the received packets before transmitting them. While this departure from passive networks improves throughput and resilience to packet loss, it renders transmission susceptible to {\em pollution attacks} where nodes can misbehave and maliciously change the transmitted messages. Nodes cannot use standard signature schemes to authenticate the modified packets: this would require knowledge of the original sender's signing key. Network coding signature schemes offer a cryptographic solution to this problem. Very roughly, such signatures allow signing vector spaces (or rather bases of such spaces). Furthermore, these signatures are homomorphic: given signatures on a set of vectors it is possible to create signatures for any linear combination of these vectors. Designing such schemes is a difficult task, and the few existing constructions either rely on random oracles or are rather inefficient. In this paper we introduce two new network coding signature schemes. Both of our schemes are provably secure in the standard model, rely on standard assumptions, {\em and} are in the same efficiency class with previous solutions based on random oracles.
Last updated:  2011-12-23
Deterministic Identity Based Signature Scheme and its Application for Aggregate Signatures
S. Sharmila Deva Selvi, S. Sree Vivek, C. Pandu Rangan
The revolutionary impact offered by identity based cryptography is phenomenal. This novel mechanism was first coined by Adi Shamir in 1984. Since then, several identity based signature schemes have been reported. But surprisingly, none of the reported identity based signature schemes is deterministic without relying on bilinear pairings. We answer this long standing question affirmatively by realizing a deterministic identity based signature in composite order groups, and we succeed in developing a deterministic signature scheme based on the RSA assumption. This is indeed helpful in devising variants of the signature primitive. Fully aggregateable identity based signature schemes that require no prior communication between the signing parties are an interesting issue in identity based cryptography, and it is easy to see that deterministic identity based signature schemes lead to full aggregation of signatures without the aforementioned overhead. The major contribution of this paper is a novel deterministic identity based signature scheme whose security relies on the strong RSA assumption and random oracles. Based on this newly proposed deterministic identity based signature scheme, we design an identity based aggregate signature scheme which achieves full aggregation in one round. We formally prove the schemes to be existentially unforgeable under adaptive chosen message and identity attack.
Last updated:  2011-12-29
Generic Side-channel Distinguisher Based on Kolmogorov-Smirnov Test: Explicit Construction and Practical Evaluation
Jiye Liu, Yongbin Zhou, Shuguo Yang, Dengguo Feng
Construction and evaluation of efficient distinguishers with broad generality is a fundamental problem in the area of side-channel cryptanalysis. Due to their capability to deal with general correlations, MIA-like distinguishers have received wide attention from academia. In this paper, we conduct a comprehensive comparison of existing MIA-like distinguishers, and then propose a new generic side-channel distinguisher based on the partial Kolmogorov-Smirnov test, namely the PKS distinguisher. Theoretical analysis and experimental attacks consistently show that the PKS distinguisher works remarkably well with both linear and non-linear leakage models. Specifically, the PKS distinguisher has clear advantages over existing MIA-like distinguishers in terms of both success rate and guessing entropy. Additionally, the lower computational complexity of the PKS distinguisher further demonstrates its better applicability compared to MIA-like distinguishers.
Last updated:  2011-12-23
A non-interactive deniable authentication scheme in the standard model
Bin Wang, Qing Zhao, Ke Dai
Deniable authentication protocols enable a sender to authenticate a message to a receiver such that the receiver is unable to prove the identity of the sender to a third party. In contrast to interactive schemes, non-interactive deniable authentication schemes improve communication efficiency. Currently, several non-interactive deniable authentication schemes have been proposed with provable security in the random oracle model. In this paper, we study the problem of constructing non-interactive deniable authentication schemes that are secure in the standard model without bilinear groups. An efficient non-interactive deniable authentication scheme is presented by combining the Diffie-Hellman key exchange protocol with authenticated encryption schemes. We prove the security of our scheme by sequences of games and show that the computational cost of our construction can be dramatically reduced by applying a pre-computation technique.
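The deniability intuition behind such Diffie-Hellman-plus-symmetric-authentication designs is that a tag computed under a shared key convinces the receiver but proves nothing to a third party, since the receiver could have produced the same tag. The toy sketch below illustrates only that intuition; its group, key derivation and MAC are illustrative choices, not the construction or proof of the paper.

    # Toy sketch of the deniability intuition only; the tiny group, key
    # derivation and MAC below are illustrative and NOT the paper's scheme.
    import hashlib, hmac, secrets

    p = 2 ** 127 - 1        # a toy prime modulus (far too small for real use)
    g = 5

    a = secrets.randbelow(p - 2) + 1          # sender's ephemeral secret
    b = secrets.randbelow(p - 2) + 1          # receiver's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)         # exchanged Diffie-Hellman values

    k_sender = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
    k_receiver = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
    assert k_sender == k_receiver             # both ends derive the same key

    msg = b"meet at noon"
    tag = hmac.new(k_sender, msg, hashlib.sha256).digest()   # sender's tag

    # The receiver verifies the tag with the shared key ...
    assert hmac.compare_digest(tag, hmac.new(k_receiver, msg, hashlib.sha256).digest())
    # ... but cannot use it to convince anyone else: holding the same key,
    # the receiver could have forged the tag itself, which yields deniability.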
Last updated:  2011-12-23
Fully Secure (Doubly-)Spatial Encryption under Simpler Assumptions
Cheng Chen, Zhenfeng Zhang, Dengguo Feng
Spatial encryption was first proposed by Boneh and Hamburg in 2008. It is one implementation of generalized identity-based encryption, and many systems with a variety of properties can be derived from it. Recently, Hamburg improved the notion by presenting a variant called doubly-spatial encryption. Doubly-spatial encryption is more powerful and expressive, and more useful cryptographic systems, such as attribute-based encryption, can be built from it. However, most existing spatial encryption schemes are only proven selectively secure. Only a few spatial encryption schemes achieve adaptive security, and not under standard assumptions, and no fully secure doubly-spatial encryption scheme has been presented before. In this paper, we primarily focus on the adaptive security of (doubly-)spatial encryption. We propose a spatial encryption scheme and a doubly-spatial encryption scheme, applying the dual system methodology proposed by Waters in the security proofs. Both schemes can be proven adaptively secure under standard assumptions, the decisional linear (DLIN) assumption and the decisional bilinear Diffie-Hellman (DBDH) assumption, over prime-order groups in the standard model. To the best of our knowledge, our second scheme is the first fully secure construction of doubly-spatial encryption.
Last updated:  2012-01-09
Yet Another Ultralightweight Authentication Protocol that is Broken
Gildas Avoine, Xavier Carpent
Eghdamian and Samsudin published at ICIEIS 2011 an ultralightweight mutual authentication protocol that requires few bitwise operations. The simplicity of the design makes the protocol very suitable to low-cost RFID tags. However, we demonstrate in this paper that the long-term key shared by the reader and the tag can be recovered by an adversary with a few eavesdropped sessions only.
Last updated:  2011-12-23
A New Class of Multivariate Public Key Cryptosystem Constructed on the Basis of Message-Dependent Transformation
Masao KASAHARA
In this paper, a new class of Public-Key Cryptosystem (PKC) based on Random Simultaneous Equations of degree g (RSE(g)PKC) is presented. The proposed scheme uses a new class of trap-doors based on two classes of transformation, i.e. random transformation and message-dependent random transformation. For constructing the proposed scheme, random transformations X and Ψ are used. The transformation Ψ could yield a breakthrough in the field of multivariate cryptosystems in the sense that it depends on the message; namely, it is a message-dependent transformation on the basis of random coding. We show that the proposed PKCs can be secure against various powerful attacks, such as the attack based on Gröbner basis calculation (Gröbner basis attack, GB attack), Patarin's attack and the Braeken-Wolf-Preneel attack, due to the random transformations using the new trap-doors.
Last updated:  2013-02-05
(Efficient) Universally Composable Two-Party Computation Using a Minimal Number of Stateless Tokens
Seung Geol Choi, Jonathan Katz, Dominique Schröder, Arkady Yerukhimovich, Hong-Sheng Zhou
We continue the line of work initiated by Katz (Eurocrypt 2007) on using tamper-proof hardware tokens for universally composable secure computation with no additional setup. As our main result, we show an efficient oblivious-transfer (OT) protocol in which two parties each create and exchange a single, stateless token and can then run an unbounded number of OTs. This, in turn, means that the parties can perform (repeated) secure computation of arbitrary functions without exchanging additional tokens. Our protocol yields what we believe is the most practical and efficient known approach for universally composable computation based on tamper-proof hardware. Motivated by our result, we investigate the minimal number of stateless tokens needed for universally composable secure computation. We prove that our protocol is optimal in this regard for constructions having black-box security proofs. We also show that nonblack-box techniques can be used to obtain a construction using only a single stateless token.
Last updated:  2011-12-23
Authenticated Key Exchange under Bad Randomness
Guomin Yang, Shanshan Duan, Duncan S. Wong, Chik How Tan, Huaxiong Wang
We initiate the formal study on authenticated key exchange (AKE) under bad randomness. This could happen when (1) an adversary compromises the randomness source and hence directly controls the randomness of each AKE session; and (2) the randomness repeats in different AKE sessions due to reset attacks. We construct two formal security models, Reset-1 and Reset-2, to capture these two bad randomness situations respectively, and investigate the security of some widely used AKE protocols in these models by showing that they become insecure when the adversary is able to manipulate the randomness. On the positive side, we propose simple but generic methods to make AKE protocols secure in Reset-1 and Reset-2 models. The methods work in a modular way: first, we strengthen a widely used AKE protocol to achieve Reset-2 security, then we show how to transform any Reset-2 secure AKE protocol to a new one which also satisfies Reset-1 security.
Last updated:  2012-05-29
Cryptanalysis of WG-7 (A Lightweight Stream Cipher for RFID Encryption)
Mohammad Ali Orumiehchiha, Josef Pieprzyk, Ron Steinfeld
WG-7 is a stream cipher based on the WG stream cipher and was designed by Y. Luo, Q. Chai, G. Gong, and X. Lai in 2010. The cipher is designed for low-cost and lightweight applications (for instance, RFID tags and mobile phones). This paper addresses cryptographic weaknesses of the WG-7 stream cipher. We show that the keystream generated by WG-7 can be distinguished from a random sequence after knowing $2^{13.5}$ keystream bits and with a negligible error probability. We also investigate the security of WG-7 against algebraic attacks and propose an algebraic key recovery attack on the cipher. The attack recovers both the internal state and the secret key with a time complexity of about $2^{27}$.
Last updated:  2012-07-04
Analysis of some natural variants of the PKP Algorithm
Rodolphe LAMPE, Jacques PATARIN
In 1989, Adi Shamir proposed a new zero-knowledge identification scheme based on an NP-complete problem called PKP (Permuted Kernel Problem): for a given prime $p$, a given matrix $A$ and a given vector $V$, the problem is to find a permutation $\pi$ such that the permuted vector $V_\pi$ satisfies $A \cdot V_\pi \equiv 0 \pmod p$. In 2011 this scheme is still known as one of the most efficient identification schemes based on a combinatorial problem. However, we will see in this paper that it is possible to improve the scheme significantly by combining new ideas that reduce the total number of computations to be performed and that very efficiently improve the security against side-channel attacks using precomputations. In this way we obtain a new scheme that we call SPKP. Moreover, if we use precomputed values in the scheme SPKP, then the prover needs to perform no computations (i.e. only selection and transmission of precomputed values). This is very interesting for security against side-channel attacks because our scheme is zero-knowledge and no computations involving the key are performed during the identification, so we prove that any attacker (even one using side-channel attacks) who is successfully identified must possess a solution to the NP-complete problem PKP.
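To make the underlying combinatorial problem concrete (the identification protocol itself is not shown), here is a brute-force solver for a hypothetical toy PKP instance; the factorial search space is exactly what makes honestly sized parameters intractable.

    # Brute-force solver for a toy Permuted Kernel Problem instance; purely
    # illustrative, since real PKP parameters make exhaustive search infeasible.
    from itertools import permutations

    def solve_pkp(A, V, p):
        """Return a permutation of V with A . V_pi == 0 (mod p), or None."""
        for perm in permutations(V):
            if all(sum(a * v for a, v in zip(row, perm)) % p == 0 for row in A):
                return perm
        return None

    # Hypothetical toy instance: prime p, a 1x4 matrix A and a vector V.
    p = 7
    A = [[1, 2, 3, 4]]
    V = [1, 2, 5, 6]
    print(solve_pkp(A, V, p))    # prints (2, 1, 6, 5): 2 + 2*1 + 3*6 + 4*5 = 42 = 0 mod 7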
Last updated:  2011-12-23
Cryptanalysis of Symmetric Block Ciphers Based on the Feistel Network with Non-bijective S-boxes in the Round Function
Roman Oliynykov
We consider a ciphertext-only attack on symmetric block ciphers based on the Feistel network with secret S-boxes installed as an additional parameter, as in the Soviet GOST 28147-89. In the case when the S-boxes are generated by an authorized agency and cannot be verified by the end user of the cipher (e.g., in the case of special encryption equipment), the use of non-bijective S-boxes allows the authorized agency to decrease deciphering complexity significantly while preserving high-level strength against other cryptanalysts. We show that it is necessary to have non-bijective S-boxes whose outputs form a non-trivial subgroup, and we give example deciphering complexities for GOST with known and with secret non-bijective S-boxes.
Last updated:  2011-12-23
Identification Based Encryption with RSA-OAEP. Using SEM and Without
Rkia Aouinatou, Mostafa Belkasmi
In this article we show how to integrate RSA (RSA-OAEP) into IBE. Our proof can be carried out in either the standard model or the random oracle model. We first develop the basic ideas made in this direction in order to create a novel scheme with which we can sign and encrypt at the same time. Then we give our new approach, which properly preserves the syntax of classic RSA. Additionally, we compare our authentication with the signature of Shamir. Moreover, in RSA-based IBE there is the problem of relating the exponent to an identity; even though there have been some proposals in this direction, they operate only in the random oracle model. In this article we also respond to the question of Xuhua Ding and Gene Tsudik in order to propose an efficient exponent for an RSA-IBE. At the end of the article we give a useful appendix.
Last updated:  2012-08-07
Timing Attacks against the Syndrome Inversion in Code-based Cryptosystems
Falko Strenzke
In this work we present the first practical key-aimed timing attack against code-based cryptosystems. It arises from vulnerabilities that are present in the inversion of the error syndrome through the Extended Euclidean Algorithm, which is part of the decryption operation of these schemes. Three types of timing vulnerabilities are combined into a successful attack. Each is used to gain information about the secret support, which is part of code-based decryption keys: the first allows recovery of the zero element, the second is a refinement of a previously described vulnerability yielding linear equations, and the third enables the retrieval of cubic equations.
Last updated:  2011-12-18
UC framework for anonymous communication
István Vajda
In this research report we present a UC framework for the general task of anonymous communication. Definitions of the ideal and real models are carried out in the BPW (Backes-Pfitzmann-Waidner) formalism. It is shown how this approach relates to and extends earlier proposals [10],[15]. We also consider the adaptive adversary. An example is given for a wireless application.
Last updated:  2011-12-18
Physically Uncloneable Functions in the Universal Composition Framework
Chris Brzuska, Marc Fischlin, Heike Schröder, Stefan Katzenbeisser
Recently, there have been numerous works about hardware-assisted cryptographic protocols, either improving previous constructions in terms of efficiency, or in terms of security. In particular, many suggestions use Canetti's universal composition (UC) framework to model hardware tokens and to derive schemes with strong security guarantees in the UC framework. Here, we augment this approach by considering Physically Uncloneable Functions (PUFs) in the UC framework. Interestingly, when doing so, one encounters several peculiarities specific to PUFs, such as the intrinsic non-programmability of such functions. Using our UC notion of PUFs, we then devise efficient UC-secure protocols for basic tasks like oblivious transfer, commitments, and key exchange. It turns out that designing PUF-based protocols is fundamentally different than for other hardware tokens. For one part this is because of the non-programmability. But also, since the functional behavior is unpredictable even for the creator of the PUF, this causes an asymmetric situation in which only the party in possession of the PUF has full access to the secrets.
Last updated:  2011-12-18
Better Bootstrapping in Fully Homomorphic Encryption
Craig Gentry, Shai Halevi, Nigel P. Smart
Gentry's bootstrapping technique is currently the only known method of obtaining a "pure" fully homomorphic encryption (FHE) scheme, and it may offer performance advantages even in cases that do not require pure FHE (such as when using the new noise-control technique of Brakerski-Gentry-Vaikuntanathan). The main bottleneck in bootstrapping is the need to evaluate homomorphically the reduction of one integer modulo another. This is typically done by emulating a binary modular reduction circuit, using bit operations on the binary representation of integers. We present a simpler approach that bypasses the homomorphic modular-reduction bottleneck to some extent, by working with a modulus very close to a power of two. Our method is easier to describe and implement than the generic binary circuit approach, and is likely to be faster in practice. In some cases it also allows us to store the encryption of the secret key as a single ciphertext, thus reducing the size of the public key. We also show how to combine our new method with the SIMD homomorphic computation techniques of Smart-Vercauteren and Gentry-Halevi-Smart, to get a bootstrapping method that works in time quasi-linear in the security parameter. This last part requires extending the techniques from prior work to handle arithmetic not only over fields, but also over some rings. (Specifically, our method uses arithmetic modulo a power of two, rather than over characteristic-two fields.)
Last updated:  2012-05-21
CTL: A Platform-Independent Crypto Tools Library Based on Dataflow Programming Paradigm
Junaid Jameel Ahmad, Shujun Li, Ahmad-Reza Sadeghi, Thomas Schneider
The diversity of computing platforms is increasing rapidly. In order to allow security applications to run on such diverse platforms, implementing and optimizing the same cryptographic primitives for multiple target platforms and heterogeneous systems can result in high costs. In this paper, we report our efforts in developing and benchmarking a platform-independent Crypto Tools Library (CTL). CTL is based on a dataflow programming framework called Reconfigurable Video Coding (RVC), which was recently standardized by ISO/IEC for building complicated reconfigurable video codecs. CTL benefits from various properties of the RVC framework including tools to 1) simulate the platform-independent designs, 2) automatically generate implementations in different target programming languages (e.g., C/C++, Java, LLVM, and Verilog/VHDL) for deployment on different platforms as software and/or hardware modules, and 3) design space exploitation such as automatic parallelization for multi- and many-core systems. We benchmarked the performance of the SHA-256 and AES implementations in CTL on single-core target platforms and demonstrated that implementations automatically generated from platform-independent RVC applications can achieve a run-time performance comparable to reference implementations manually written in C and Java. For a quad-core target platform, we benchmarked a 4-adic hash tree application based on SHA-256 that achieves a performance gain of up to 300% for hashing messages of size 8 MB.
Last updated:  2013-10-29
On definitions of selective opening security
Florian Böhl, Dennis Hofheinz, Daniel Kraschewski
Assume that an adversary observes many ciphertexts, and may then ask for openings, i.e. the plaintext and the randomness used for encryption, of some of them. Do the unopened ciphertexts remain secure? There are several ways to formalize this question, and the ensuing security notions are not known to be implied by standard notions of encryption security. In this work, we relate the two existing flavors of selective opening security. Our main result is that indistinguishability-based selective opening security and simulation-based selective opening security do not imply each other. We show our claims by counterexamples. Concretely, we construct two public-key encryption schemes. One scheme is secure under selective openings in a simulation-based sense, but not in an indistinguishability-based sense. The other scheme is secure in an indistinguishability-based sense, but not in a simulation-based sense. Our results settle an open question of Bellare et al. (Eurocrypt 2009). Also, taken together with known results about selective opening secure encryption, we get an almost complete picture how the two flavors of selective opening security relate to standard security notions.
Last updated:  2011-12-18
CommitCoin: Carbon Dating Commitments with Bitcoin
Jeremy Clark, Aleksander Essex
In the standard definition of a commitment scheme, the sender commits to a message and immediately sends the commitment to the recipient interested in it. However the sender may not always know at the time of commitment who will become interested in verifying it. Further, when the interested party does emerge, it could be critical to establish when the commitment was made. Employing a proof of work protocol at commitment time will later allow anyone to "carbon date" when the commitment was made, approximately, without trusting any external parties. We present CommitCoin, an instantiation of this approach that harnesses the existing processing power of the Bitcoin peer-to-peer network; a network used to mint and trade digital cash.
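The carbon-dating idea can be illustrated in isolation from Bitcoin: commit with a hash, then immediately start moderately hard proof-of-work seeded by the commitment, so the accumulated work later lower-bounds how long the commitment must have existed. The sketch below is a generic hash-based illustration under simplifying assumptions of ours, not CommitCoin's Bitcoin-based instantiation.

    # Generic illustration of "carbon dating" a commitment with proof of work.
    # The difficulty and hash choices are illustrative assumptions, not the
    # CommitCoin protocol, which outsources the work to the Bitcoin network.
    import hashlib, os

    def commit(message: bytes):
        r = os.urandom(32)                              # commitment randomness
        return hashlib.sha256(message + r).hexdigest(), r

    def prove_work(commitment: str, difficulty_bits: int = 20):
        """Find a nonce whose hash with the commitment has difficulty_bits leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            h = hashlib.sha256(f"{commitment}:{nonce}".encode()).digest()
            if int.from_bytes(h, 'big') < target:
                return nonce                            # ~2^difficulty_bits hashes expected
            nonce += 1

    c, r = commit(b"prediction: candidate X wins")
    nonce = prove_work(c)
    # Since the work is seeded by c and its expected cost is known, it later
    # gives a rough lower bound on how long ago c must already have existed.
    print(c, nonce)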
Last updated:  2011-12-16
Enhanced Biometrics-based Remote User Authentication Scheme Using Smart Cards
Jian-Zhu Lu, Shaoyuan Zhang, Shijie Qie
Authentication and key exchange are fundamental techniques for enabling secure communication over mobile networks. In order to reduce implementation complexity and achieve computational efficiency, design issues for efficient and secure biometrics-based remote user authentication schemes have been extensively investigated by the research community in recent years. Recently, two well-designed biometrics-based authentication schemes using smart cards were introduced by Li and Hwang and by Li et al., respectively: Li and Hwang proposed an efficient biometrics-based remote user authentication scheme using smart cards, and Li et al. proposed an improvement. The authors of both schemes claimed that their protocols deliver important security features and system functionalities, such as operation without synchronized clocks, free password change, and mutual authentication, as well as low computation costs. However, these two schemes still leave much room for security enhancement. In this paper, we first demonstrate a series of vulnerabilities in these two schemes. Then, an enhanced scheme with corresponding remedies is proposed to eliminate all identified security flaws in both schemes.
Last updated:  2012-01-16
Basing Obfuscation on Simple Tamper-Proof Hardware Assumptions
Nico Döttling, Thilo Mie, Jörn Müller-Quade, Tobias Nilges
Code obfuscation is one of the most powerful concepts in cryptography. It could yield functional encryption, digital rights management, and maybe even secure cloud computing. However, general code obfuscation has been proven impossible and research then focused on obfuscating very specific functions, studying weaker security definitions for obfuscation, and using tamper-proof hardware tokens to achieve general code obfuscation. Following this last line, this work presents the first scheme which bases general code obfuscation of multiple programs on one single stateless hardware token. Our construction is proven secure in the UC-framework and proceeds in three steps: 1. We construct an obfuscation scheme based on fully homomorphic encryption (FHE) and a hybrid functionality conditional decrypt, which decrypts the result of a homomorphic computation given a proof that the computation was performed as intended. One difficulty of the first step is possible decryption errors in the FHE. These decryption errors can occur whenever the randomness for the encryption is chosen maliciously by the receiver of the obfuscated code. Such decryption errors could then make a real obfuscated computation distinguishable from a black box use of the non-obfuscated program. 2. Given two common reference strings (CRS) we construct a UC-protocol realizing the functionality conditional decrypt with a stateless hardware token. As the token is stateless it is resettable by a dishonest receiver and the proofs given to the token must be resettably sound. One additional difficulty occurs when the issuer of the token can be corrupted. A malicious token can be stateful and it cannot be prevented that it aborts after a hardwired number of invocations. To prevent adaptive behavior of a malicious token the data of the receiver has to be hidden from the token and the proofs given to the token must even hide the size of the program and the length of the computation. 3. Finally, we construct a protocol generating a CRS with a stateless hardware token. Care has to be taken here not to let the token learn anything about the resulting CRS which could not be simulated, because the very same token will later be used in a protocol based on the security of this CRS.
Last updated:  2012-07-03
Extended Combinatorial Constructions for Peer-to-peer User-Private Information Retrieval
Colleen M. Swanson, Douglas R. Stinson
We consider user-private information retrieval (UPIR), an interesting alternative to private information retrieval (PIR) introduced by Domingo-Ferrer et al. In UPIR, the database knows which records have been retrieved, but does not know the identity of the person making the query. The goal of UPIR, then, is to disguise user profiles from the point of view of the database. Domingo-Ferrer et al. focus on using a peer-to-peer community to construct a UPIR scheme, which we term P2P UPIR. In this paper, we establish a strengthened model for P2P UPIR and clarify the privacy goals of such schemes using standard terminology from the field of privacy research. In particular, we argue that any solution providing privacy against the database should attempt to minimize any corresponding loss of privacy against other users. We consider the problem of user-privacy against other users in detail and consider a stronger adversarial model than previous work. We give an analysis of existing P2P UPIR schemes, which includes a new attack by the database. Finally, we introduce two new P2P UPIR protocols and give an analysis of the privacy properties provided by these protocols. Our P2P UPIR schemes, like those from existing work, draw from the field of combinatorial designs. Unlike previous work, however, which focuses on a special type of design known as a configuration, our protocols make use of more general designs. This allows for more flexibility in protocol set-up, allowing for a choice between having a dynamic scheme (in which users are permitted to enter and leave the system), or providing increased privacy against other users.
Last updated:  2013-03-04
Pseudorandom Signatures
Nils Fleischhacker, Felix Günther, Franziskus Kiefer, Mark Manulis, Bertram Poettering
We develop a three-level hierarchy of privacy notions for (unforgeable) digital signature schemes. We first prove mutual independence of existing notions of anonymity and confidentiality, and then show that these are implied by higher privacy goals. The top notion in our hierarchy is \emph{pseudorandomness}: signatures with this property hide the entire information about the signing process and cannot be recognized as signatures when transmitted over a public network. This implies very strong unlinkability guarantees across different signers and even different signing algorithms, and gives rise to new forms of private public-key authentication. We show that one way towards pseudorandom signatures leads over our mid-level notion, called \emph{indistinguishability}: such signatures can be simulated using only the public parameters of the scheme. As we reveal, indistinguishable signatures exist in different cryptographic settings (e.g. based on RSA, discrete logarithms, pairings) and can be efficiently lifted to pseudorandomness deploying general transformations using appropriate encoding techniques. We also examine a more direct way for obtaining pseudorandomness for any unforgeable signature scheme. All our transformations work in the standard model. We keep public verifiability of signatures in the setting of system-wide known public keys. Some results even hold if signing keys are disclosed to the adversary --- given that signed messages have high entropy.
Last updated:  2012-08-07
Fast and Secure Root Finding for Code-based Cryptosystems
Falko Strenzke
In this work we analyze five previously published or trivial approaches, as well as two new hybrid variants, for the task of finding the roots of the error locator polynomial during the decryption operation of code-based encryption schemes. We compare the performance of these algorithms and show that optimizations concerning finite field element representations play a key role for the speed of software implementations. Furthermore, we point out a number of timing attack vulnerabilities that can arise in root-finding algorithms, some aimed at recovering the message, others at the secret support. We give experimental results of software implementations showing that manifestations of these vulnerabilities are present in straightforward implementations of most of the root-finding variants presented in this work. As a result, we find that one of the variants provides security with respect to all vulnerabilities as well as competitive computation time for code parameters that minimize the public key size.
Last updated:  2011-12-22
Improved Results on Impossible Differential Cryptanalysis of Reduced-Round Camellia-192/256
Ya Liu, Dawu Gu, Zhiqiang Liu, Wei Li, Ying Man
As an international standard adopted by ISO/IEC, the block cipher Camellia has been used in various cryptographic applications. In this paper, we reevaluate the security of Camellia against impossible differential cryptanalysis. Specifically, we propose several 7-round impossible differentials with the $FL/FL^{-1}$ layers. Based on them, we mount impossible differential attacks on 11-round Camellia-192 and 12-round Camellia-256. The data complexities of our attacks on 11-round Camellia-192 and 12-round Camellia-256 are about $2^{120}$ chosen plaintexts and $2^{119.8}$ chosen plaintexts, respectively. The corresponding time complexities are approximately $2^{167.1}$ 11-round encryptions and $2^{220.87}$ 12-round encryptions. As far as we know, our attacks are $2^{16.9}$ times and $2^{19.13}$ times faster than the previously best known ones but require slightly more data.
Last updated:  2011-12-16
SHA-3 on ARM11 processors
Peter Schwabe, Bo-Yin Yang, Shang-Yi Yang
This paper presents high-speed assembly implementations of the 256-bit-output versions of all five SHA-3 finalists and of SHA-256 for the ARM11 family of processors. We report new speed records for all of the six implemented functions. For example, our implementation of the round-3 version of JH-256 is 35% faster than the fastest implementation of the round-2 version of JH-256 in eBASH. Scaled with the number of rounds this is more than a 45% improvement. We also improve upon previous assembly implementations for 32-bit ARM processors. For example, the implementation of Groestl-256 described in this paper is about 20% faster than the arm32 implementation in eBASH.
Last updated:  2011-12-16
Small Linearization: Memory Friendly Solving of Non-Linear Equations over Finite Fields
Christopher Wolf, Enrico Thomae
Solving non-linear and in particular Multivariate Quadratic equations over finite fields is an important cryptanalytic problem. Apart from needing exponential time in general, we also need very large amounts of memory, namely $\approx Nn^2$ for $n$ variables, solving degree $D$, and $N \approx n^D$. Exploiting systematic structures in the linearization matrix, we show how to reduce this amount of memory by a factor of $n^2$ to $\approx N$. For practical problems, this is a significant improvement and allows the overall algorithm to fit in the RAM of \emph{one} machine, even for larger values of $n$. Hence we call our technique Small Linearization (sl). We achieve this by introducing a probabilistic version of the F$_5$ criterion. It allows us to replace (sparse) Gaussian Elimination by black box methods for solving the underlying linear algebra problem. Therefore, we achieve a drastic reduction in the algorithm's memory requirements. In addition, Small Linearization allows for far easier parallelization than algorithms using structured Gaussian elimination.
Last updated:  2011-12-09
Re-Encryption-Based Key Management Towards Secure and Scalable Mobile Applications in Clouds
Piotr K. Tysowski, M. Anwarul Hasan
Cloud computing confers strong economic advantages, but many clients are reluctant to implicitly trust a third-party cloud provider. To address these security concerns, data may be transmitted and stored in encrypted form. Major challenges exist concerning the generation, distribution, and usage of encryption keys in cloud systems, such as the safe location of keys and support for the recent trend of users connecting to contemporary cloud applications with resource-constrained mobile devices in extremely large numbers simultaneously; these characteristics make efficient and highly scalable key management difficult. In this work, a model for key distribution based on the principle of dynamic data re-encryption is applied to a cloud computing system in a unique way to address the demands of a mobile device environment, including limitations on client wireless data usage, storage capacity, processing power, and battery life. The proposed cloud-based re-encryption model is secure, efficient, and highly scalable in a cloud computing context, as keys are managed by the client for trust reasons, processor-intensive data re-encryption is handled by the cloud provider, and key redistribution is minimized to conserve communication costs on mobile devices. A versioning history mechanism effectively manages keys for a continuously changing user population. Finally, an implementation on commercial mobile and cloud platforms is used to validate the performance of the model.
Last updated:  2013-03-13
An Efficient and Private RFID Authentication Protocol Supporting Ownership Transfer
Suleyman Kardas, Atakan Arslan, Serkan Celik, Albert Levi
Radio Frequency IDentification (RFID) based systems, the best-known example of ubiquitous networks, are being pervasively deployed in many daily-life applications in which privacy sensitivity is entrusted to the tag or the server. In some applications, ownership transfer of RFID tags is another significant requirement: the owner of a tag may change several times during its lifetime. During the transfer, the new owner first obtains the necessary private information from the old owner; with this information it then takes over tag identification and authorization so as to communicate securely with the tag. Security and privacy are major issues in the presence of a malicious adversary. Therefore, the protocol used to identify a tag should not only allow a legitimate reader to authenticate the tag but should also protect the privacy of the tag against unauthorized tracing. Besides, after ownership transfer, the authentication protocol should prevent the old owner from tracing the tag and prevent the new owner from tracing the tag's old transactions. On the other hand, while achieving privacy and security on the tag and server side, the computational complexity is also very important. Numerous authentication protocols have been proposed in the literature to resolve these security and privacy problems; many of them fail to provide security and privacy, and their server-side computation is also very high. Motivated by this need, in this paper we first analyze an existing RFID authentication protocol and show that it does not resist tag tracking attacks. Then, we propose an RFID mutual authentication protocol which can also be used to realize ownership transfer. In our protocol, the server needs only constant-time complexity for identification when the tag and server are synchronized. In case of ownership transfer, our protocol preserves both old-owner and new-owner privacy. Our protocol also achieves backward untraceability against a strong adversary who compromises the tag, and forward untraceability under the assumption that the adversary misses at least one subsequent successful session between the tag and the reader.
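The constant-time identification property claimed for the synchronized case follows a familiar pattern; the hypothetical sketch below (plain SHA-256 as the PRF, invented names, and no ownership-transfer or desynchronization handling) only illustrates that pattern, not the proposed protocol:

    # Server-side O(1) identification of a synchronized tag: the server keeps a
    # table of the next expected pseudonym per tag and advances it after each use.
    import hashlib, os

    def prf(key, counter):
        return hashlib.sha256(key + counter.to_bytes(8, "big")).hexdigest()

    class Server:
        def __init__(self):
            self.tags = {}      # tag_id -> (key, counter)
            self.lookup = {}    # expected pseudonym -> tag_id

        def register(self, tag_id, key):
            self.tags[tag_id] = (key, 0)
            self.lookup[prf(key, 0)] = tag_id

        def identify(self, pseudonym):
            tag_id = self.lookup.pop(pseudonym, None)       # O(1) when synchronized
            if tag_id is not None:
                key, ctr = self.tags[tag_id]
                self.tags[tag_id] = (key, ctr + 1)
                self.lookup[prf(key, ctr + 1)] = tag_id     # precompute next pseudonym
            return tag_id

    key = os.urandom(16)
    server = Server()
    server.register("tag-42", key)
    print(server.identify(prf(key, 0)))    # 'tag-42'
    print(server.identify(prf(key, 1)))    # 'tag-42' again, still a single lookup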
Last updated:  2011-12-09
A Gross-Zagier formula for quaternion algebras over totally real fields
Eyal Z. Goren, Kristin E. Lauter
We prove a higher dimensional generalization of Gross and Zagier's theorem on the factorization of differences of singular moduli. Their result is proved by giving a counting formula for the number of isomorphisms between elliptic curves with complex multiplication by two different imaginary quadratic fields $K$ and $K^\prime$, when the curves are reduced modulo a supersingular prime and its powers. Equivalently, the Gross-Zagier formula counts optimal embeddings of the ring of integers of an imaginary quadratic field into particular maximal orders in $B_{p, \infty}$, the definite quaternion algebra over $\mathbb{Q}$ ramified only at $p$ and infinity. Our work gives an analogous counting formula for the number of simultaneous embeddings of the rings of integers of primitive CM fields into superspecial orders in definite quaternion algebras over totally real fields of strict class number $1$. Our results can also be viewed as a counting formula for the number of isomorphisms modulo $\frak{p} \vert p$ between abelian varieties with CM by different fields. Our counting formula can also be used to determine which superspecial primes appear in the factorizations of differences of values of Siegel modular functions at CM points associated to two different CM fields, and to give a bound on those supersingular primes which can appear. In the special case of Jacobians of genus $2$ curves, this provides information about the factorizations of numerators of Igusa invariants, and so is also relevant to the problem of constructing genus $2$ curves for use in cryptography.
Last updated:  2011-12-09
Efficient Modular Exponentiation-based Puzzles for Denial-of-Service Protection
Jothi Rangasamy, Douglas Stebila, Lakshmi Kuppusamy, Colin Boyd, Juan Gonzalez Nieto
Client puzzles are moderately-hard cryptographic problems --- neither easy nor impossible to solve --- that can be used as a countermeasure against denial of service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial of service attack, this is a significant improvement as the best known non-parallelisable puzzle proposed by Karame and Čapkun (ESORICS 2010) requires at least a $2k$-bit modular exponentiation, where $k$ is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen \etal{} (Asiacrypt 2009). We present experimental results which show that, for $1024$-bit moduli, our proposed puzzle can be up to $30 \times$ faster to verify than the Karame-Čapkun puzzle and $99 \times$ faster than Rivest \etal's time-lock puzzle.
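For contrast with the cheap verification claimed above, here is a minimal sketch of the Rivest et al. time-lock puzzle used as a baseline in the comparison (assuming sympy is available for prime generation); it illustrates the baseline only, not the proposed puzzle:

    # Time-lock puzzle: generation is cheap with the trapdoor phi(N), while solving
    # requires T inherently sequential squarings modulo N.
    import random
    from sympy import randprime

    p = randprime(2**511, 2**512)
    q = randprime(2**511, 2**512)
    N, phi = p * q, (p - 1) * (q - 1)
    T = 50_000                                 # difficulty parameter
    x = random.randrange(2, N)                 # gcd(x, N) = 1 with overwhelming probability

    y_fast = pow(x, pow(2, T, phi), N)         # generator: one reduced exponentiation

    y_slow = x                                 # solver: T sequential squarings
    for _ in range(T):
        y_slow = y_slow * y_slow % N

    assert y_fast == y_slow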
Last updated:  2012-09-23
On the Security of ID Based Signcryption Schemes
S. Sharmila Deva Selvi, S. Sree Vivek, Dhinakaran Vinayagamurthy, C. Pandu Rangan
A signcryption scheme is secure only if it satisfies both the confidentiality and the unforgeability properties. All the ID based signcryption schemes presented in the standard model so far lack the confidentiality property, the unforgeability property, or both. Cryptanalysis of some of these schemes has already been proposed. In this work, we present security attacks on `Secure ID based signcryption in the standard model' proposed by Li-Takagi and `Further improvement of an identity-based signcryption scheme in the standard model' by Li et al., and the flaws in the proof of security of `Efficient ID based signcryption in the standard model' proposed by Li et al., which are the recently proposed ID based signcryption schemes in the standard model. We also present the cryptanalysis of `Construction of identity based signcryption schemes' proposed by Pandey-Barua and the cryptanalysis of `Identity-Based Signcryption from Identity-Based Cryptography' proposed by Lee-Seo-Lee. These schemes present methods for constructing an ID based signcryption scheme in the random oracle model from an ID based signature scheme and an ID based encryption scheme. Since none of the existing schemes in the standard model is found to be provably secure, we analyse the security of signcryption schemes obtained by directly combining an ID based signature scheme and an ID based encryption scheme in the standard model.
Last updated:  2011-12-09
Cloud-Assisted Multiparty Computation from Fully Homomorphic Encryption
Adriana Lopez-Alt, Eran Tromer, Vinod Vaikuntanathan
We construct protocols for secure multiparty computation with the help of a computationally powerful party, namely the "cloud''. Our protocols are simultaneously efficient in a number of metrics: * Rounds: our protocols run in 4 rounds in the semi-honest setting, and 5 rounds in the malicious setting. * Communication: the number of bits exchanged in an execution of the protocol is independent of the complexity of the function f being computed, and depends only on the length of the inputs and outputs. * Computation: the computational complexity of all parties is independent of the complexity of the function f, whereas that of the cloud is linear in the size of the circuit computing f. In the semi-honest case, our protocol relies on the ``ring learning with errors'' (RLWE) assumption, whereas in the malicious case, security is shown under the Ring LWE assumption as well as the existence of simulation-extractable NIZK proof systems and succinct non-interactive arguments. In the malicious setting, we also relax the communication and computation requirements above, and only require that they be ``small'' -- polylogarithmic in the computation size and linear in the joint size of the inputs. Our constructions leverage the key homomorphic property of the recent fully homomorphic encryption schemes of Brakerski and Vaikuntanathan (CRYPTO 2011, FOCS 2011). Namely, these schemes allow combining encryptions of messages under different keys to produce an encryption (of the sum of the messages) under the sum of the keys. We also design an efficient, non-interactive threshold decryption protocol for these fully homomorphic encryption schemes.
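The key-homomorphic property invoked in the last paragraph can be illustrated with a toy LWE-flavoured symmetric scheme (shared public randomness a, tiny toy parameters, hypothetical code; this is only an illustration of the property, not the Brakerski-Vaikuntanathan scheme): adding ciphertexts produced under secrets s1 and s2 yields a ciphertext of the sum of the messages under s1 + s2.

    # Toy demonstration of key homomorphism: Enc_s(m) = <a,s> + e + m*delta (mod q).
    import random

    q, t, n = 2**20, 16, 8                         # modulus, plaintext modulus, dimension (toy)
    delta = q // t
    a = [random.randrange(q) for _ in range(n)]    # shared public randomness

    def keygen():
        return [random.randrange(q) for _ in range(n)]

    def enc(s, m):
        e = random.randrange(-2, 3)                # small noise
        return (sum(ai * si for ai, si in zip(a, s)) + e + m * delta) % q

    def dec(s, b):
        v = (b - sum(ai * si for ai, si in zip(a, s))) % q
        return round(v / delta) % t

    s1, s2 = keygen(), keygen()
    c = (enc(s1, 3) + enc(s2, 5)) % q              # add ciphertexts under different keys
    s_sum = [(x + y) % q for x, y in zip(s1, s2)]
    print(dec(s_sum, c))                           # 8, i.e. 3 + 5 under the sum of the keys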
Last updated:  2011-12-16
Deploying secure multi-party computation for financial data analysis
Dan Bogdanov, Riivo Talviste, Jan Willemson
In this paper we describe a secure system for jointly collecting and analyzing financial data for a consortium of ICT companies. To guarantee each participant's privacy, we use secret sharing and secure multi-party computation (MPC) techniques. While MPC has been used to solve real-life problems before, this is the first time that the actual MPC computation was done over the internet with computing nodes spread geographically apart. We describe the system architecture, security considerations and implementation details. We also present a user feedback analysis revealing that secure multi-party computation techniques give sufficient assurance for data donors to submit their sensitive information, and act as a critical enabling feature for privacy-preserving data mining.
Last updated:  2011-12-09
New Impossible Differential Attacks on Camellia
Dongxia Bai, Leibo Li
Camellia is one of the most widely used block ciphers worldwide, and has been selected as a standard by ISO/IEC. In this paper, we propose several new 7-round impossible differentials of Camellia with 2 $FL/FL^{-1}$ layers, which turn out to be the first 7-round impossible differentials with 2 $FL/FL^{-1}$ layers. Combined with some basic techniques including the early abort approach and key schedule considerations, we achieve impossible differential attacks on 11-round Camellia-128, 11-round Camellia-192, 12-round Camellia-192, and 14-round Camellia-256, with time complexities of $2^{123.6}$, $2^{121.7}$, $2^{171.4}$ and $2^{238.2}$, respectively. As far as we know, these are the best results against the reduced-round variants of Camellia. In particular, we give the first attack on the 11-round reduced version of Camellia-128 with $FL/FL^{-1}$ layers.
Last updated:  2011-12-21
Program Obfuscation with Leaky Hardware
Nir Bitansky, Ran Canetti, Shafi Goldwasser, Shai Halevi, Yael Tauman Kalai, Guy N. Rothblum
We consider general program obfuscation mechanisms using ``somewhat trusted'' hardware devices, with the goal of minimizing the usage of the hardware, its complexity, and the required trust. Specifically, our solution has the following properties: \begin{itemize} \item The obfuscation remains secure even if all the hardware devices in use are {\em leaky}. That is, the adversary can obtain the result of evaluating any polynomial-time computable function on the local state of the device, as long as this function has short output. In addition the adversary also controls the communication between the devices. \item The number of hardware devices used in an obfuscation and the amount of work they perform are polynomial in the security parameter {\em independently} of the obfuscated function's complexity. \item A ({\em universal}) set of hardware components, owned by the user, is initialized only once and from that point on can be used with multiple ``software-based" obfuscations sent by different vendors. \end{itemize}
Last updated:  2013-01-02
Formally Assessing Cryptographic Entropy
Daniel R. L. Brown
Cryptography relies on the secrecy of keys. Measures of information, and thus of secrecy, are called entropy. Previous work does not formally assess the cryptographically appropriate entropy of secret keys. This report defines several new forms of entropy appropriate for cryptographic situations, together with statistical inference methods appropriate for assessing cryptographic entropy.
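As one concrete example of an entropy measure that matters for keys (a generic illustration, not the report's specific inference procedure), min-entropy is determined by the most probable outcome and can be estimated from samples:

    # Min-entropy estimate: -log2 of the empirical probability of the most common symbol.
    import math
    from collections import Counter

    def min_entropy_per_symbol(samples):
        counts = Counter(samples)
        p_max = max(counts.values()) / len(samples)
        return -math.log2(p_max)

    biased_bits = [0] * 60 + [1] * 40           # a biased bit source
    print(min_entropy_per_symbol(biased_bits))  # about 0.74 bits per bit, vs 1.0 if uniform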
Last updated:  2013-01-24
Anonymous attestation with user-controlled linkability
D. Bernhard, G. Fuchsbauer, E. Ghadafi, N. P. Smart, B. Warinschi
This paper is motivated by the observation that existing security models for Direct Anonymous Attestation (DAA) have problems to the extent that insecure protocols may be deemed secure when analysed under these models. This is particularly disturbing as DAA is one of the few complex cryptographic protocols resulting from recent theoretical advances actually deployed in real life. Moreover, standardisation bodies are currently looking into designing the next generation of such protocols. Our first contribution is to identify issues in existing models for DAA and explain how these errors allow for proving security of insecure protocols. These issues are exhibited in all deployed and proposed DAA protocols (although they can often be easily fixed). Our second contribution is a new security model for a class of “pre-DAA scheme”, i.e., DAA schemes where the computation on the user side takes place entirely on the trusted platform. Our model captures more accurately than any previous model the security properties demanded from DAA by the Trusted Computing Group (TCG), the group that maintains the DAA standard. Extending the model from pre-DAA to full DAA is only a matter of refining the trust models on the parties involved. Finally, we present a generic construction of a DAA protocol from new building blocks tailored for anonymous attestation. Some of them are new variations on established ideas, and may be of independent interest. We give instantiations for these building blocks that yield a DAA scheme more efficient than the one currently deployed, and as efficient as the one about to be standardised by the TCG which has no valid security proof.
Last updated:  2011-12-09
A Systematic Method to Evaluate and Compare the Performance of Physical Unclonable Functions
Abhranil Maiti, Vikash Gunreddy, Patrick Schaumont
In this work, we propose a systematic method to evaluate and compare the performance of Physical Unclonable Functions (PUFs). The need for such a method is justified by the fact that various types of PUFs have been proposed so far. However, there is no common method that can fairly compare them in terms of their performances. We first propose three generic dimensions of PUF measurements. We then define several parameters to quantify the performance of a PUF along these dimensions. We also analyze existing parameters proposed by other researchers. Based on our analysis, we propose a compact set of parameters that will be used as a tool to evaluate as well as compare the performance of different PUFs. To make the method independent of the underlying PUF technique, we focus on the statistical properties of the binary PUF responses. Finally, we show a detailed comparison analysis between two PUFs: ring-oscillator-based PUF (RO PUF) and Arbiter-based PUF (APUF) using measured PUF data.
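Two of the commonly used statistical parameters in this space, uniformity of a single response and uniqueness as the average inter-chip Hamming distance, are easy to state in code; the sketch below is a generic illustration and does not cover the paper's full set of parameters and dimensions:

    # Uniformity (fraction of ones; ideal 0.5) and uniqueness (average pairwise
    # fractional Hamming distance across chips; ideal 0.5) of binary PUF responses.
    from itertools import combinations

    def uniformity(resp):
        return sum(resp) / len(resp)

    def uniqueness(responses):
        dists = [sum(a != b for a, b in zip(r1, r2)) / len(r1)
                 for r1, r2 in combinations(responses, 2)]
        return sum(dists) / len(dists)

    chip_responses = [
        [0, 1, 1, 0, 1, 0, 0, 1],
        [1, 1, 0, 0, 1, 0, 1, 1],
        [0, 0, 1, 1, 0, 1, 0, 1],
    ]
    print(uniformity(chip_responses[0]), uniqueness(chip_responses))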
Last updated:  2011-12-09
Use Data-depend Function Build Message Expansion Function
ZiJie Xu, Ke Xu
We previously found functions that can be used to fix bits [2] for given differences. We use these functions to build a message expansion function. In this message expansion function, certain bits, comprising message bits and incremental bits derived from the message bits, are used as parameters of data-dependent functions. The message expansion function fixes at least n × 5.5 bits for given differences, and any message modification affects at least 8 data-dependent function parameters.
Last updated:  2011-12-11
Privacy-Preserving Stream Aggregation with Fault Tolerance
T-H. Hubert Chan, Elaine Shi, Dawn Song
We consider applications where an untrusted aggregator would like to collect privacy sensitive data from users, and compute aggregate statistics periodically. For example, imagine a smart grid operator who wishes to aggregate the total power consumption of a neighborhood every ten minutes; or a market researcher who wishes to track the fraction of population watching ESPN on an hourly basis. We design novel mechanisms that allow an aggregator to accurately estimate such statistics, while offering provable guarantees of user privacy against the untrusted aggregator. Our constructions are resilient to user failure and compromise, and can efficiently support dynamic joins and leaves. Our constructions also exemplify the clear advantage of combining applied cryptography and differential privacy techniques.
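A toy version of the general blueprint, zero-sum masks so that the aggregator learns only the sum, plus one Laplace noise term for differential privacy, looks as follows (noise placement and parameters here are arbitrary illustrative choices; the paper's fault-tolerant construction is considerably more involved):

    # Each user masks its reading; the masks cancel in the sum, and added Laplace
    # noise hides individual contributions in the aggregate.
    import math, random

    def zero_sum_masks(n, modulus=2**32):
        masks = [random.randrange(modulus) for _ in range(n - 1)]
        masks.append((-sum(masks)) % modulus)
        return masks

    def laplace_noise(scale):
        u = random.random() - 0.5              # inverse-CDF sampling of Laplace(0, scale)
        return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

    readings = [12, 7, 30, 5]                  # private per-user values
    masks = zero_sum_masks(len(readings))
    reports = [(x + m) % 2**32 for x, m in zip(readings, masks)]
    reports[0] = (reports[0] + round(laplace_noise(2.0))) % 2**32   # one noise term

    total = sum(reports) % 2**32               # masks cancel; only the noisy sum remains
    if total > 2**31:                          # re-centre in case the noise was negative
        total -= 2**32
    print(total, "vs true sum", sum(readings))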
Last updated:  2011-12-09
Elliptic Curve Cryptography in JavaScript
Laurie Haustenne, Quentin De Neyer, Olivier Pereira
We document our development of a library for elliptic curve cryptography in JavaScript. We discuss design choices and investigate optimizations at various levels, from integer multiplication and field selection to various fixed-base EC point multiplication techniques. Relying on a small volume of public precomputed data, our code provides a speed-up by a factor of 50 compared to previously existing implementations. We conclude with a discussion of the impact of our work on a concrete application: the Helios browser-based voting system.
Last updated:  2013-05-03
An Improved Certificateless Authenticated Key Agreement Protocol
Haomin Yang, Yaoxue Zhang, Yuezhi Zhou
Recently, Mokhtarnameh, Ho and Muthuvelu proposed a certificateless key agreement protocol. In this paper, we show that their protocol is insecure against a man-in-the-middle attack, which is devastating for a key agreement protocol. In addition, the authors claimed that their scheme provides a binding between a long-term public key and the corresponding partial private key. In fact, their protocol does not realize this binding. We propose an improved key agreement protocol based on the protocol of Mokhtarnameh, Ho and Muthuvelu. The improved protocol resists the man-in-the-middle attack and satisfies the desired security properties for key agreement. It truly realizes a one-to-one correspondence between the long-term public key and the partial private key of a user. If there are two different, working long-term public keys for the same identity, the key generation center will be identified as having misbehaved in issuing both corresponding partial private keys.
Last updated:  2021-02-16
Security Enhancement of the Vortex Family of Hash Functions
Shay Gueron, Michael Kounavis
Vortex is a new family of one-way hash functions which has been submitted to the NIST SHA-3 competition. Its design is based on using the Rijndael block cipher round as a building block, and on a multiplication-based merging function to support fast mixing in a small number of steps. Vortex is designed to be a fast hash function when running on a processor that has AES acceleration, and has proven collision resistance [2]. Several attacks on Vortex have been published recently [3, 4, 5, 6], exploiting some structural properties of its design as presented in the version submitted to the SHA-3 competition. These are mainly first and second preimage attacks with time complexity below the ideal, as well as attempts to distinguish the Vortex output from random. In this paper we study the root cause of the attacks and propose a few amendments to the Vortex structure, which eliminate the attacks without affecting its collision resistance and performance.
Last updated:  2012-05-09
CHECKER: On-site checking in RFID-based supply chains
Kaoutar Elkhiyaoui, Erik-Oliver Blass, Refik Molva
Counterfeit detection in RFID-based supply chains aims at preventing adversaries from injecting fake products that do not meet quality standards. This paper introduces CHECKER, a new protocol for counterfeit detection in RFID-based supply chains through on-site checking. While RFID-equipped products travel through the supply chain, RFID readers can verify product genuineness by checking the validity of the product’s path. CHECKER uses a polynomial-based encoding to represent paths in the supply chain. Each tag T in CHECKER stores an IND-CCA encryption of T’s identifier ID and a signature of ID using the polynomial encoding of T’s path as secret key. CHECKER is provably secure and privacy-preserving. An adversary can neither inject fake products into the supply chain nor trace products. Moreover, RFID tags in CHECKER can be cheap read/write-only tags that are not required to perform any computation. Storage requirements for a tag are low, at only 120 bytes.
Last updated:  2011-12-09
Fully Secure Spatial Encryption under Simple Assumptions with Constant-Size Ciphertexts
Jie Chen, Hoon Wei Lim, San Ling, Huaxiong Wang
In this paper, we propose two new spatial encryption (SE) schemes based on existing inner product encryption (IPE) schemes. Both of our SE schemes are fully secure under simple assumptions and in prime order bilinear groups. Moreover, one of our SE schemes has constant-size ciphertexts. Since SE implies hierarchical identity-based encryption (HIBE), we also obtain a fully secure HIBE scheme with constant-size ciphertexts under simple assumptions. Our second SE scheme is attribute-hiding (or anonymous). It has sizes of public parameters, secret keys and ciphertexts that are quadratically smaller than the currently known SE scheme with similar properties. As a side result, we show that negated SE is equivalent to non-zero IPE. This is somewhat interesting since the latter is known to be a special case of the former.
Last updated:  2011-12-09
On the Security of NMAC and Its Variants
Fanbao Liu, Changxiang Shen, Tao Xie, Dengguo Feng
We first propose a general equivalent key recovery attack on NMAC$_1$, an $H^2$-MAC variant that is also provably secure, by applying a generalized birthday attack. Our result shows that NMAC$_1$, even instantiated with a secure Merkle-Damgård hash function, is not secure. We further show that this equivalent key recovery attack on NMAC$_1$ is also applicable to NMAC for recovering the equivalent inner key of NMAC, in a related-key setting. We propose and analyze a series of NMAC variants with different secret approaches and key distributions, and find that one variant, NMAC-E, with a secret envelope approach, can withstand most of the known attacks in this paper. However, all variants, including NMAC itself, are vulnerable to an on-line birthday attack for verifiable forgery. Hence, the underlying cryptographic hash functions, based on the Merkle-Damgård construction, should be re-evaluated seriously.
Last updated:  2012-07-27
Achieving Short Ciphertexts or Short Secret-Keys for Adaptively Secure General Inner-Product Encryption
Tatsuaki Okamoto, Katsuyuki Takashima
In this paper, we present two non-zero inner-product encryption (NIPE) schemes that are adaptively secure under a standard assumption, the decisional linear (DLIN) assumption, in the standard model. One of the proposed NIPE schemes features constant-size ciphertexts and the other features constant-size secret-keys. Our NIPE schemes imply an identity-based revocation (IBR) system with constant-size ciphertexts or constant-size secret-keys that is adaptively secure under the DLIN assumption. Any previous IBR scheme with constant-size ciphertexts or constant-size secret-keys was not adaptively secure in the standard model. This paper also presents two zero inner-product encryption (ZIPE) schemes each of which has constant-size ciphertexts or constant-size secret-keys and is adaptively secure under the DLIN assumption in the standard model. They imply an identity-based broadcast encryption (IBBE) system with constant-size ciphertexts or constant-size secret-keys that is adaptively secure under the DLIN assumption. We also extend the proposed ZIPE schemes into two directions, one is a fully-attribute-hiding ZIPE scheme with constant-size secret-keys, and the other a hierarchical ZIPE scheme with constant-size ciphertexts.
Last updated:  2011-12-09
Breaking $H^2$-MAC Using Birthday Paradox
Fanbao Liu, Tao Xie, Changxiang Shen
$H^2$-MAC was proposed to increase efficiency over HMAC by omitting its outer key, while keeping the advantages and security of HMAC. However, as pointed out by the designer, the security of $H^2$-MAC also depends on the secrecy of the intermediate value (the equivalent key) of the inner hashing. In this paper, we propose an efficient method to break $H^2$-MAC by using a generalized birthday attack to recover the equivalent key, under the assumption that the underlying hash function is secure (weakly collision resistant). We can successfully recover the equivalent key of $H^2$-MAC with about $2^{n/2}$ on-line MAC queries and $2^{n/2}$ off-line MAC computations with high probability. Moreover, we can improve the attack efficiency by reducing the on-line MAC queries, which cannot be made concurrently. This attack shows that the security of $H^2$-MAC depends entirely on the (weak) collision resistance of the underlying hash function, instead of on the PRF-AX property of the underlying compression function assumed in the original security proof of $H^2$-MAC.
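The $2^{n/2}$ figure is simply the birthday bound; the toy script below (SHA-256 truncated to 32 bits, so roughly $2^{16}$ trials are expected) shows why collisions, and hence this kind of equivalent-key recovery, come at square-root cost:

    # Birthday collision search on a 32-bit truncated hash.
    import hashlib, os

    def h32(msg):
        return hashlib.sha256(msg).digest()[:4]

    seen, trials = {}, 0
    while True:
        m = os.urandom(16)
        tag = h32(m)
        trials += 1
        if tag in seen and seen[tag] != m:
            print("collision after", trials, "messages:", seen[tag].hex(), m.hex())
            break
        seen[tag] = m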
Last updated:  2012-07-25
The security impact of a new cryptographic library
Daniel J. Bernstein, Tanja Lange, Peter Schwabe
This paper introduces a new cryptographic library, NaCl, and explains how the design and implementation of the library avoid various types of cryptographic disasters suffered by previous cryptographic libraries such as OpenSSL. Specifically, this paper analyzes the security impact of the following NaCl features: no data flow from secrets to load addresses; no data flow from secrets to branch conditions; no padding oracles; centralizing randomness; avoiding unnecessary randomness; extremely high speed; and cryptographic primitives chosen conservatively in light of the cryptanalytic literature.
Last updated:  2011-12-04
Fully Homomorphic Encryption Based on Approximate Matrix GCD
Gu Chunsheng
We first introduce approximate matrix GCD problem (AMGCD), and construct public key encryption schemes based on AMGCD. Then, we define a variant of AMGCD and design a new fully homomorphic encryption scheme (FHE) based on the variant AMGCD, whose security depends on the hardness assumption of the variant AMGCD problem.
Last updated:  2013-12-13
McOE: A Family of Almost Foolproof On-Line Authenticated Encryption Schemes
Ewan Fleischmann, Christian Forler, Stefan Lucks, Jakob Wenzel
On-Line Authenticated Encryption (OAE) combines privacy with data integrity and is on-line computable. Most block cipher-based schemes for Authenticated Encryption can be run on-line and are provably secure against nonce-respecting adversaries, but they fail badly for more general adversaries. This is not only a theoretical observation -- in practice, the reuse of nonces is a frequent issue. In recent years, cryptographers developed misuse-resistant schemes for Authenticated Encryption. These guarantee excellent security even against general adversaries which are allowed to reuse nonces. Their disadvantage is that encryption can only be performed off-line. This paper introduces a new family of OAE schemes -- called McOE -- dealing both with nonce-respecting and with general adversaries. Furthermore, we present two family members, McOE-X and McOE-G. They are based on a 'simple' block cipher. In contrast to every other OAE scheme known in the literature, they provably guarantee reasonable security against general adversaries as well as standard security against nonce-respecting adversaries.
Last updated:  2011-11-30
Some Words About Cryptographic Key Recognition In Data Streams
Alexey Chilikov, Evgeny Alekseev
Searching for cryptographic keys in RAM is a new and promising technology which can be used primarily in computer forensics. In order to use it, a cryptanalyst must solve at least two problems: creating a memory dump of the target machine and distinguishing the target cryptographic keys from other data. The latter leads to a new mathematical task: "recognition of cryptographic keys in a (random) data stream". The complexity of this task depends significantly on the target cryptoalgorithm. For some algorithms (e.g., AES or Serpent) this task is trivial, but for others it may be very hard. In this work we present effective algorithms for expanded-key recognition for Blowfish and Twofish. As far as we know, this task has never been considered before for these algorithms.
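A crude but generic first filter for this setting (not the Blowfish/Twofish recognizers developed in the paper) is an entropy scan over fixed-size windows of the dump; the hypothetical sketch below flags the window where random key-like bytes were planted:

    # Sliding-window entropy scan: high-entropy windows are candidate key material.
    import math, os
    from collections import Counter

    def shannon_entropy(block):
        counts = Counter(block)
        return -sum(c / len(block) * math.log2(c / len(block)) for c in counts.values())

    def candidate_key_offsets(dump, window=32, threshold=4.6):
        return [i for i in range(0, len(dump) - window, window)
                if shannon_entropy(dump[i:i + window]) > threshold]

    dump = bytes(1024) + os.urandom(32) + bytes(1024)   # zeros with a "key" embedded
    print(candidate_key_offsets(dump))                  # reports the offset 1024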
Last updated:  2011-11-30
Constructing differentially 4-uniform permutations over $\mbf_{2^{2m}}$ from quadratic APN permutations over $\mbf_{2^{2m+1}}$
Yongqiang Li, Mingsheng Wang
In this paper, by means of the idea proposed in \cite{carlet4uniformpermu}, differentially 4-uniform permutations with the best known nonlinearity over $\mbf_{2^{2m}}$ can be constructed by using quadratic APN permutations over $\mbf_{2^{2m+1}}$. Special emphasis is given to the Gold functions. The algebraic degrees of the constructions and of their compositional inverses are also investigated. One of the constructions and its compositional inverse both have algebraic degree $m+1$ over $\mbf_{2^{2m}}$.
Last updated:  2011-11-30
Collision for 75-step SHA-1: Intensive Parallelization with GPU
E. A. Grechnikov, A. V. Adinetz
We present a brief report on the collision search for the reduced SHA-1. With a few improvements to our previous work, directed at efficient parallelization on a GPU cluster, we managed to construct a new collision for 75-step reduced SHA-1 hash function.
Last updated:  2012-03-06
Hummingbird: Privacy at the time of Twitter
Emiliano De Cristofaro, Claudio Soriente, Gene Tsudik, Andrew Williams
In the last several years, micro-blogging Online Social Networks (OSNs), such as Twitter, have taken the world by storm, now boasting over 100 million subscribers. As an unparalleled stage for an enormous audience, they offer fast and reliable centralized diffusion of pithy tweets to great multitudes of information-hungry and always-connected followers. At the same time, this information gathering and dissemination paradigm prompts some important privacy concerns about relationships between tweeters, followers and interests of the latter. In this paper, we assess privacy in today's Twitter-like OSNs and describe an architecture and a trial implementation of a privacy-preserving service called Hummingbird. It is essentially a variant of Twitter that protects tweet contents, hashtags and follower interests from the (potentially) prying eyes of the centralized server. We argue that, although inherently limited by Twitter's mission of scalable information-sharing, this degree of privacy is valuable. We demonstrate, via a working prototype, that Hummingbird's additional costs are tolerably low. We also sketch out some viable enhancements that might offer better privacy in the long term.
Last updated:  2011-11-29
Towards a Probabilistic Complexity-theoretic Modeling of Biological Cyanide Poisoning as Service Attack in Self-organizing Networks
Jiejun Kong, Dapeng Wu, Xiaoyan Hong, Mario Gerla
We draw an analogy of \emph{biological cyanide poisoning} to security attacks in self-organizing mobile ad hoc networks. When a circulatory system is treated as an enclosed network space, a hemoglobin is treated as a mobile node, and a hemoglobin binding with cyanide ion is treated as a compromised node (which cannot bind with oxygen to furnish its oxygen-transport function), we show how cyanide poisoning can reduce the probability of oxygen/message delivery to a rigorously defined ``negligible'' quantity. Like formal cryptography, security problem in our network-centric model is defined on the complexity-theoretic concept of ``negligible'', which is asymptotically sub-polynomial with respect to a pre-defined system parameter $x$. Intuitively, the parameter $x$ is the key length $n$ in formal cryptography, but is changed to the network scale, or the number of network nodes $N$, in our model. We use the $\RP$ ($n$-runs) complexity class with a virtual oracle to formally model the cyanide poisoning phenomenon and similar network threats. This new analytic approach leads to a new view of biological threats from the perspective of network security and complexity theoretic study.
Last updated:  2011-11-26
Rubik's for cryptographers
Christophe Petit, Jean-Jacques Quisquater
Hard mathematical problems are at the core of security arguments in cryptography. In this paper, we study mathematical generalizations of the famous Rubik's cube puzzle, namely the factorization, representation and balance problems in non-Abelian groups. These problems arise naturally when describing the security of Cayley hash functions, a class of cryptographic hash functions with very interesting properties. The factorization problem is also strongly related to a famous long-standing conjecture of Babai, at the intersection of group theory and graph theory. A constructive proof of Babai's conjecture would make all Cayley hash functions insecure, but on the other hand it would have many positive applications in graph theory and computer science. In this paper, we classify existing attacks against Cayley hash functions and we review known results on Babai's conjecture. Despite recent cryptanalytic progress on particular instances, we show that the factorization, representation and balance problems presumably remain good sources of cryptographic hard problems. Our study demonstrates that Cayley hash functions deserve further interest by the cryptography community.
Last updated:  2011-11-26
Random Number Generation Based on Oscillatory Metastability in Ring Circuits
Laszlo Hars
Random number generator designs are discussed, which utilize oscillatory metastability, induced by switching between two stable states of ring-connected digital gates. For a short time after the switch-over the circuits behave quite randomly, influenced by the circuit noise. We provide simple programs, which simulate the fundamental behavior of our circuits. We also present a mathematical model and theoretical explanations of the underlying physical phenomena, the random phase drift and pulse decay. These also illuminate the principles of other recently published random number generators. The feasibility of the designs was confirmed by FPGA prototypes. These random number generators are small, fast and built of standard logic gates. The simplest example contains just one XOR gate as the source of randomness.
Last updated:  2012-09-03
Untangling RFID Privacy Models
Iwen Coisel, Tania Martin
This article investigates privacy in Radio Frequency IDentification (RFID) systems. We survey the eight most well-known RFID privacy models. We examine their advantages and drawbacks, and provide a comprehensive comparison of these models. The first conclusion is that none of these models is complete, and combining all of their positive features does not yield the best privacy model either. We then give a privacy analysis of five distinct RFID authentication protocols according to these models, and highlight a main concern: no model is able to perfectly distinguish all five RFID protocols with regard to privacy.
Last updated:  2011-11-26
Groestl Tweaks and their Effect on FPGA Results
Marcin Rogawski, Kris Gaj
In January 2011, the Groestl team published tweaks to the specification of Groestl. In this paper, we investigate the influence of these tweaks on the performance of Groestl in hardware. The results indicate that the performance penalty in terms of the throughput to area ratio depends strongly on the architecture used. This penalty is smaller in the case of architectures in which the permutations P and Q are implemented using two independent units.
Last updated:  2011-11-26
Security of Multiple-Key Agreement Protocols and Propose an Enhanced Protocol
Mohammad Sabzinejad Farash, Mahmoud Ahmadian Attari, Majid Bayat
Multiple key agreement protocols produce several session keys instead of one. Most multiple key agreement protocols do not use hash functions in the signature schemes employed for identification. Omitting hash functions causes these protocols to fail to satisfy some required security properties. In this paper we review multiple key agreement protocols and present attacks on some of them. We then introduce a new multiple key agreement protocol and show that the proposed protocol is more secure than the existing multiple key agreement protocols.
Last updated:  2012-03-06
Practical realisation and elimination of an ECC-related software bug attack
B. B. Brumley, M. Barbosa, D. Page, F. Vercauteren
We analyse and exploit implementation features in OpenSSL version 0.9.8g which permit an attack against ECDH-based functionality. The attack, although more general, can recover the entire (static) private key from an associated SSL server via $633$ adaptive queries when the NIST curve P-256 is used. One can view it as a software-oriented analogue of the bug attack concept due to Biham et al. and, consequently, as the first bug attack to be successfully applied against a real-world system. In addition to the attack and a posteriori countermeasures, we show that formal verification, while rarely used at present, is a viable means of detecting the features which the attack hinges on. Based on the security implications of the attack and the extra justification posed by the possibility of intentionally incorrect implementations in collaborative software development, we conclude that applying and extending the coverage of formal verification to augment existing test strategies for OpenSSL-like software should be deemed a worthwhile, long-term challenge.
Last updated:  2011-11-23
A Scalable Method for Constructing Galois NLFSRs with Period $2^n-1$ using Cross-Join Pairs
Elena Dubrova
This paper presents a method for constructing $n$-stage Galois NLFSRs with period $2^n-1$ from $n$-stage maximum length LFSRs. We introduce nonlinearity into state cycles by adding a nonlinear Boolean function to the feedback polynomial of the LFSR. Each assignment of variables for which this function evaluates to 1 acts as a crossing point for the LFSR state cycle. By adding a copy of the same function to a later stage of the register, we cancel the effect of nonlinearity and join the state cycles back. The presented method requires no extra time steps and it has a smaller area overhead compared to the previous approaches based on cross-join pairs. It is feasible for large $n$. However, it has a number of limitations. One is that the resulting NLFSRs can have at most $\lfloor n/2 \rfloor - 1$ stages with a nonlinear update. Another is that feedback functions depend only on state variables which are updated linearly. The latter implies that sequences generated by the presented method can also be generated using a nonlinear filter generator.
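As background, the sketch below shows the starting point of such constructions: a maximum-length Fibonacci LFSR whose feedback taps realize a primitive polynomial, so that all $2^n-1$ nonzero states lie on one cycle (here $n=4$); the paper's method then perturbs and re-joins such a cycle via a cross-join pair. This is a generic illustration, not the Galois NLFSR construction itself:

    # A 4-stage maximum-length Fibonacci LFSR: with these taps it cycles through
    # all 2^4 - 1 = 15 nonzero states before returning to the seed.
    def lfsr_states(seed=0b0001, taps=(3, 0), nbits=4):
        state, seen = seed, []
        while True:
            seen.append(state)
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = ((state << 1) | fb) & ((1 << nbits) - 1)
            if state == seed:
                return seen

    states = lfsr_states()
    print(len(states), "distinct states")
    assert len(set(states)) == 2**4 - 1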
Last updated:  2013-08-14
Cheating Human Vision in Visual Secret Sharing
Yu-Chi Chen, Gwoboa Horng, Du-Shiau Tsai
Visual Secret Sharing (VSS), first introduced by Naor and Shamir, is a variant of secret sharing; in particular, the secret is decoded by stacking shares together, without performing any complicated cryptographic computation. The recovered secret is visible to the human visual system (HVS). However, Horng et al. showed that cheating is possible in VSS, inspired by cheating in secret sharing. Since then many cheating activities and cheating-immune schemes have been proposed, but all presented cheating activities treat a single pixel as the unit of cheating. In this paper, we analyze some presented cheating activities, and we propose a new kind of cheating, the Region Cheating Attack (RCA), which exploits properties of the HVS. In contrast to previous attacks, the RCA cheats on a region, i.e., several adjacent pixels, as a unit. We use the RCA to strengthen the effect of several previously proposed attacks. Moreover, a new attack, the deterministic white-to-black attack (DWtBA), is proposed to point out that a well-known cheating-immune scheme, proposed by De Prisco and De Santis, suffers from both the RCA and the DWtBA. Finally, we propose a remedy to overcome the problem of this scheme.
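For readers unfamiliar with VSS, the toy sketch below shows classical Naor-Shamir (2,2) sharing at the single-pixel level (the baseline scheme whose shares a cheater manipulates, not the schemes or attacks studied in the paper):

    # (2,2) visual secret sharing: each secret pixel becomes a 2x2 subpixel block.
    # White pixels get identical blocks in both shares, black pixels complementary
    # ones, so stacking (OR) leaves white half-black and black fully black.
    import random

    PATTERNS = [(0, 1, 1, 0), (1, 0, 0, 1)]

    def share_pixel(secret_bit):                # 0 = white, 1 = black
        p = random.choice(PATTERNS)
        q = p if secret_bit == 0 else tuple(1 - x for x in p)
        return p, q

    def stack(p, q):                            # stacking transparencies = pixel-wise OR
        return tuple(a | b for a, b in zip(p, q))

    for bit in (0, 1):
        s1, s2 = share_pixel(bit)
        print(bit, stack(s1, s2), sum(stack(s1, s2)), "black subpixels")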
Last updated:  2012-11-15
Indifferentiability Security of the Fast Wide Pipe Hash: Breaking the Birthday Barrier
Dustin Moody, Souradyuti Paul, Daniel Smith-Tone
A hash function secure in the indifferentiability framework (TCC 2004) is able to resist all meaningful generic attacks. Such hash functions also play a crucial role in establishing the security of protocols that use them as random functions. To eliminate multi-collision type attacks on the MD mode (Crypto 1989), Lucks proposed widening the size of the internal state of hash functions. More specifically, he suggested that hash functions h:{0,1}*->{0,1}^n use underlying primitives of the form C:{0,1}^a -> {0,1}^{2n} (Asiacrypt 2005). The Fast Wide Pipe (FWP) hash mode was introduced by Nandi and Paul at Indocrypt 2010, as a faster variant of Lucks' Wide Pipe mode. Despite the higher speed, the proven indifferentiability bound of the FWP mode has so far been only up to the birthday barrier of n/2 bits. The main result of this paper is the improvement of the FWP bound to 2n/3 bits (up to an additive constant). The 2n/3-bit bound for FWP comes with two important implications. Many popular hash modes use primitives with a=2n, that is C:{0,1}^{2n}->{0,1}^{2n}. For this important case, the FWP becomes the _only_ mode to achieve indifferentiability security of more than n/2 bits; thus we solve a longstanding open problem. Secondly, among n-bit hash modes with a>2n, the FWP mode has the highest rate among all modes which have beyond-birthday-barrier security. To obtain the bound of 2n/3 bits, we follow the usual technique of constructing games with simulators, with certain BAD events to distinguish between the games. However, we introduce some novel ideas. In designing the BAD events, we used multi-collisions in addition to collisions. We also allowed the query-response graphs, maintained by the simulators, to grow for two phases every iteration, rather than just one phase. Finally, our carefully chosen set of sixteen BAD events establishes an isomorphism of simulator graphs, from which the 2n/3-bit bound follows. We also provide evidence that extending the bound beyond 2n/3 bits may be possible if we allow the simulator-graph to grow for three (or more) phases every iteration. Another noteworthy feature of our proof -- that may be of independent interest -- is that we work with only three games rather than a long sequence of games.
Last updated:  2012-08-17
Near-Linear Unconditionally-Secure Multiparty Computation with a Dishonest Minority
Eli Ben-Sasson, Serge Fehr, Rafail Ostrovsky
Secure multiparty computation (MPC) allows a set of n players to compute any public function, given as an arithmetic circuit, on private inputs, so that privacy of the inputs as well as correctness of the output are guaranteed. Of special importance both in cryptography and in complexity theory is the setting of information-theoretic MPC, where (dishonest) players are unbounded, and no cryptographic assumptions are used. In this setting, it was known since the 1980's that an honest majority of players is both necessary and sufficient to achieve privacy and correctness. The main open question that was left in this area is to establish the exact communication complexity of MPC protocols that can tolerate malicious behavior of a minority of dishonest players. In all works, there was a large gap between the communication complexity of the best known protocols in the malicious setting and the ``honest-but-curious'' setting, where players do not deviate from the protocol. In this paper, we show, for the first time, an MPC protocol that can tolerate a dishonest minority of malicious players and matches the communication complexity of the best known MPC protocol in the honest-but-curious setting. More specifically, we present a new n-player MPC protocol that is secure against a computationally-unbounded active and malicious adversary that can adaptively corrupt up to a minority t < n/2 of the players. For polynomially-large binary circuits that are not too unshaped, our protocol has an amortized communication complexity of O(n log n + k/n^c) bits per multiplication (i.e. AND) gate, where k denotes the security parameter and c is an arbitrary non-negative constant. This improves on the previously most efficient protocol with the same security guarantee, which offers an amortized communication complexity of O(n^2 k) bits per multiplication gate. For any k polynomial in n, the amortized communication complexity of our protocol matches the best known O(n log n) communication complexity of passively secure MPC protocols. Thus, our result gives the first near-linear complexity of MPC (instead of quadratic) in the dishonest-minority setting and settles the question of the difference in communication complexity between the honest-but-curious and fully malicious settings. For sufficiently large circuits, our protocol can be improved only if the honest-but-curious protocol can be improved. We introduce several novel techniques for reducing communication complexity of MPC in the malicious setting that are of independent interest and we believe will have wider applicability. One is a novel idea of computing authentication tags by means of a mini MPC, which allows us to avoid expensive double-sharings when dealing with malicious players; the other is a batch-wise multiplication verification that allows us to speed up Beaver's ``multiplication triples''. The techniques draw from the PCP world and this infusion of new techniques from other domains of computational complexity may find further uses in the context of MPC.
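Since the result leans on Beaver's multiplication triples, here is a minimal two-party sketch of that standard trick over additive secret sharing (illustrative field and parameter choices; the paper's batched verification and malicious-security machinery are not shown):

    # Beaver multiplication: given shares of x, y and a preprocessed triple
    # (a, b, c = a*b), the parties obtain shares of x*y after opening x-a and y-b.
    import random

    P = 2**61 - 1                               # prime field (illustrative choice)

    def share(v):
        r = random.randrange(P)
        return [r, (v - r) % P]

    def reconstruct(shares):
        return sum(shares) % P

    def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
        # In a real protocol the parties open d = x - a and e = y - b;
        # here we simply reconstruct them for brevity.
        d = (reconstruct(x_sh) - reconstruct(a_sh)) % P
        e = (reconstruct(y_sh) - reconstruct(b_sh)) % P
        z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(2)]
        z_sh[0] = (z_sh[0] + d * e) % P         # the public d*e term is added once
        return z_sh

    a, b = random.randrange(P), random.randrange(P)
    triple = (share(a), share(b), share(a * b % P))
    x_sh, y_sh = share(20), share(21)
    print(reconstruct(beaver_multiply(x_sh, y_sh, *triple)))   # 420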
Last updated:  2014-02-21
Maximum Leakage Resilient IBE and IPE
Kaoru Kurosawa, Le Trieu Phong
In this paper, we show the first identity-based encryption (IBE) scheme, and inner product encryption (IPE) scheme, both of which achieve the maximum possible leakage rate $1-o(1)$ in the standard model under a static assumption. Specifically, under the DLIN assumption, even if $1-o(1)$ fraction of each private key is arbitrarily leaked, the IBE scheme is fully secure and the IPE scheme is selectively secure. (To our knowledge, no {\it leakage resilient} IPE scheme has been known so far.)
Last updated:  2011-11-28
A note on semi-bent functions with multiple trace terms and hyperelliptic curves
Sihem Mesnager
Semi-bent functions in an even number of variables are a class of important Boolean functions whose Hadamard transform takes three values. In this note we are interested in the property of semi-bentness of Boolean functions defined on the Galois field $F_{2^n}$ ($n$ even) with multiple trace terms obtained via Niho functions and two Dillon-like functions (the first one has been studied by Mesnager and the second one has been studied very recently by Wang, Tang, Qi, Yang and Xu). We subsequently give a connection between the property of semi-bentness and the number of rational points on some associated hyperelliptic curves. We use the hyperelliptic curve formalism to reduce the computational complexity in order to provide a polynomial time and space test leading to an efficient characterization of semi-bentness of such functions (which includes an efficient characterization of the hyperbent functions proposed by Wang et al.). The idea of this approach goes back to the recent work of Lisonek on the hyperbent functions studied by Charpin and Gong.
Last updated:  2015-11-06
Algebraic Complexity Reduction and Cryptanalysis of GOST
Nicolas T. Courtois
GOST 28147-89 is a well-known Russian government encryption standard. Its large key size of 256 bits and particularly low implementation cost have made it widely implemented and used, in OpenSSL and elsewhere. In 2010 GOST was submitted to ISO to become an international standard. GOST was analysed by Schneier, Biham, Biryukov, Dunkelman, Wagner, various Australian, Japanese, and Russian scientists, and all researchers seemed to agree that it looks quite secure. Though the internal structure of GOST seems quite weak compared to DES, and in particular the diffusion is not quite as good, it is always stipulated that this should be compensated by the large number of 32 rounds and by the additional non-linearity and diffusion provided by modular additions. At Crypto 2008 the hash function based on this cipher was broken. Yet as far as traditional encryption applications with keys generated at random are concerned, until 2011 no cryptographically significant attack on GOST was found. In this paper we present several new attacks on full 32-round GOST. Our methodology is derived from the idea of conditional algebraic attacks on block ciphers, which can be defined as attacks in which the problem of key recovery is written as a problem of solving a large system of algebraic equations, and where the attacker makes some "clever" assumptions on the cipher which lead to an important simplification in the algebraic description of the problem, making it solvable in practice if the assumptions hold. Our methods work by black-box reduction and allow us to literally break the cipher apart into smaller pieces and reduce breaking GOST to a low-data-complexity software/algebraic/MITM attack on 8 or fewer rounds. Overall we obtain some 60 distinct attacks faster than brute force on the full 32-round GOST, and we provide five nearly practical attacks on two major 128-bit variants of GOST (cf. Table 6). Our single-key attacks are summarized in Table 3 p.53 and Table 7 p.153, and attacks with multiple keys in Table 4 page 128.
Last updated:  2013-08-09
Two RFID Privacy Models in Front of a Court
Mohammad Hassan Habibi, Mohammad Reza Aref
In ASIACRYPT 2007, Vaudenay proposed a comprehensive privacy model for unilateral RFID schemes. Soon after, in ASIACCS 2008, Paise and Vaudenay presented a new version of the cited model which includes mutual authentication protocols. Recently, Armknecht et al. published two papers, in ACNS 2010 and Transactions on Computational Science, that examine the Paise-Vaudenay model. They claim that the Paise-Vaudenay model has several inadequacies and prove some results that contradict the Paise-Vaudenay outcomes. In this paper, we investigate Armknecht et al.'s papers and show some subtle faults in their work. In particular, our contribution is twofold. First, we show that the privacy definition and the adversary goal presented by Armknecht et al. are completely different from the Paise-Vaudenay ones. Therefore, their different results arise from fundamental differences in the definition of privacy, and their results cannot be valid in the Paise-Vaudenay model. Furthermore, we examine Armknecht et al.'s results and show that, by using their methodology, different outcomes are achieved for the same theorems. In fact, we prove, using their approach to privacy, that the highest achievable privacy level is narrow-weak privacy, which contradicts most of the theorems proved by Armknecht et al.
Last updated:  2011-11-21
New attacks on Keccak-224 and Keccak-256
Itai Dinur, Orr Dunkelman, Adi Shamir
The Keccak hash function is one of the five finalists in NIST's SHA-3 competition, and so far it showed remarkable resistance against practical collision finding attacks: After several years of cryptanalysis and a lot of effort, the largest number of Keccak rounds for which actual collisions were found was only 2. In this paper we develop improved collision finding techniques which enable us to double this number. More precisely, we can now find within a few minutes on a single PC actual collisions in standard Keccak-224 and Keccak-256, where the only modification is to reduce their number of rounds to 4. When we apply our techniques to 5-round Keccak, we can get in a few days excellent near collisions, where the Hamming distance is 5 in the case of Keccak-224 and 10 in the case of Keccak-256. Our new attack combines differential and algebraic techniques, and uses the fact that each round of Keccak is only a quadratic mapping in order to efficiently find pairs of messages which follow a high probability differential characteristic.
Last updated:  2011-11-21
Indifferentiability of the Hash Algorithm BLAKE
Donghoon Chang, Mridul Nandi, Moti Yung
The hash algorithm BLAKE, one of the SHA-3 finalists, was designed by Aumasson, Henzen, Meier, and Phan. Unlike the other SHA-3 finalists, BLAKE has no known indifferentiability security proof. In this paper, we provide an indifferentiability security proof for BLAKE with the bound $O(\delta^2/2^{n-3})$, where $\delta$ is the total number of blocks of queries, and $n$ is the hash output size.
Last updated:  2011-11-21
Homomorphic encryption from codes
Andrej Bogdanov, Chin Ho Lee
We propose a new homomorphic encryption scheme based on the hardness of decoding under independent random noise from certain affine families of codes. Unlike in previous lattice-based homomorphic encryption schemes, where the message is hidden in the noisy part of the ciphertext, our scheme carries the message in the affine part of the transformation and applies noise only to achieve security. Our scheme can tolerate noise of arbitrary magnitude, as long as the noise vector has sufficiently small Hamming weight (and its entries are independent). Our design achieves "proto-homomorphic" properties in an elementary manner: message addition and multiplication are emulated by pointwise addition and multiplication of the ciphertext vectors. Moreover, the extremely simple nature of our decryption makes the scheme easily amenable to bootstrapping. However, some complications are caused by the inherent presence of noticeable encryption error. Our main technical contribution is the development of two new techniques for handling this error in the homomorphic evaluation process. We also provide a definitional framework for homomorphic encryption that may be useful elsewhere.
Last updated:  2011-11-22
Adaptive Security of Concurrent Non-Malleable Zero-Knowledge
Zhenfu Cao, Zongyang Zhang, Yunlei Zhao
A zero-knowledge protocol allows a prover to convince a verifier of the correctness of a statement without disclosing any other information to the verifier. It is a basic tool and widely used in many other cryptographic applications. However, when stand-alone zero-knowledge protocols are used in complex environments, e.g., the Internet, the basic properties may not be sufficient. This is why researchers have considered the security of zero-knowledge protocols under concurrent composition and man-in-the-middle attacks. Moreover, it is likely that an adversary might break into computers that run the protocol and obtain internal information of the parties. It is thus necessary to take into account the security of zero-knowledge protocols when adaptive corruptions are allowed. Previous adaptively secure zero-knowledge protocols work either in a stand-alone setting, or in a concurrent setting with trusted setup assumptions. In this paper, we study adaptive security of zero-knowledge protocols under both concurrent self-composition and man-in-the-middle attacks in the plain model (i.e., without any setup assumptions). We provide a construction of an adaptively secure concurrent non-malleable zero-knowledge proof/argument for every language in NP.
Last updated:  2011-11-21
Provable Security of BLAKE with Non-Ideal Compression Function
Elena Andreeva, Atul Luykx, Bart Mennink
We analyze the security of the SHA-3 finalist BLAKE. The BLAKE hash function follows the HAIFA design methodology, and as such it achieves optimal preimage, second preimage and collision resistance, and is indifferentiable from a random oracle up to approximately 2^{n/2} queries, assuming the underlying compression function is ideal. In our work we show, however, that the compression function employed by BLAKE exhibits a non-random behavior and is in fact differentiable in only 2^{n/4} queries. Our attack on the indifferentiability of the BLAKE compression function seriously undermines the security strength of BLAKE not only with respect to its overall indifferentiability, but also its collision and (second) preimage security in the ideal model. Our next contribution is the restoration of the security results for BLAKE in the ideal model by refining the level of modularity and assuming that BLAKE's underlying block cipher is an ideal cipher. We prove that BLAKE is optimally collision, second preimage, and preimage secure (up to a constant). We go on to show that BLAKE is still indifferentiable from a random oracle up to the old bound of 2^{n/2} queries, albeit under a weaker assumption: the ideality of its block cipher.
Last updated:  2014-08-27
Multidimensional Meet-in-the-Middle Attack and Its Applications to KATAN32/48/64
Bo Zhu, Guang Gong
This paper investigates a new framework for analyzing symmetric ciphers by guessing intermediate states and dividing algorithms into consecutive sub-ciphers. It is suitable for lightweight ciphers with simple key schedules and block sizes smaller than key lengths. New attacks on the block cipher family KATAN are proposed by adopting this framework. Our new attacks can recover the master keys of 175-round KATAN32, 130-round KATAN48 and 112-round KATAN64 faster than exhaustive search, and thus reach many more rounds than previous attacks. We also provide new attacks on 115-round KATAN32 and 100-round KATAN48 to demonstrate that this new kind of attack can be more time-efficient and memory-efficient than existing attacks.
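As a rough illustration of the general meet-in-the-middle principle the framework builds on, guessing an intermediate state and attacking the two resulting sub-ciphers independently, the sketch below recovers the key of a toy two-stage cipher. The toy permutation, the 16-bit subkeys and the helper names are illustrative assumptions and have nothing to do with KATAN itself.

    # Toy meet-in-the-middle key recovery on a 2-stage cipher E_k2(E_k1(p)).
    # Each stage uses an independent 16-bit subkey, so exhaustive search costs
    # about 2^32 stage evaluations while meeting in the middle costs roughly 2 * 2^16.
    def stage(x, k):                        # illustrative keyed permutation of 16-bit words
        return ((x + k) * 0x9E37 + 1) & 0xFFFF

    def stage_inv(y, k):
        inv = pow(0x9E37, -1, 0x10000)      # modular inverse of the odd multiplier
        return (((y - 1) * inv) - k) & 0xFFFF

    def encrypt(p, k1, k2):
        return stage(stage(p, k1), k2)

    def mitm(p, c):
        # tabulate all forward intermediate values under k1, then meet them
        # by computing backward from the ciphertext under every k2
        forward = {stage(p, k1): k1 for k1 in range(1 << 16)}
        for k2 in range(1 << 16):
            mid = stage_inv(c, k2)
            if mid in forward:
                yield forward[mid], k2      # candidate key pair; filter with more pairs

    k1, k2 = 0x1234, 0xABCD
    p = 0x0F0F
    c = encrypt(p, k1, k2)
    print((k1, k2) in set(mitm(p, c)))      # True; other candidates may survive one pair

The multidimensional variant in the paper pushes this further by guessing several internal states at once, which is what makes it effective against ciphers like KATAN whose key length exceeds the block size.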
Last updated:  2012-02-24
Practical Relay Attack on Contactless Transactions by Using NFC Mobile Phones
Lishoy Francis, Gerhard Hancke, Keith Mayes, Konstantinos Markantonakis
Contactless technology is widely used in security sensitive applications, including identification, payment and access-control systems. Near Field Communication (NFC) is a short-range contactless technology allowing mobile devices to act primarily as either a reader or a token. Relay attacks exploit the assumption that a contactless token within communication range is in close proximity, by placing a proxy-token in range of a contactless reader and relaying communication over a greater distance to a proxy-reader communicating with the authentic token. It has been theorised that NFC-enabled mobile phones could be used as a generic relay attack platform without any additional hardware, but this has not been successfully demonstrated in practice. We present a practical implementation of an NFC-enabled relay attack, requiring only suitable mobile software applications. This implementation reduces the complexity of relay attacks and therefore has potential security implications for current contactless systems. We also discuss countermeasures to mitigate the attack.
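To illustrate the structure of such a relay independently of any NFC stack, the sketch below simply forwards command/response bytes between a "proxy-token" end and a "proxy-reader" end over a TCP link. The host, port, framing and the canned responder are illustrative assumptions; the attack described in the paper uses the phones' NFC software interfaces rather than sockets.

    # Toy relay link (no NFC involved): the proxy-token end forwards each
    # command it would receive from the genuine reader across the network,
    # and the proxy-reader end returns whatever "the card" answers.
    import socket, threading

    HOST, PORT = "127.0.0.1", 9000

    srv = socket.socket()
    srv.bind((HOST, PORT))
    srv.listen(1)

    def proxy_reader_end():
        # In a real attack this end sits next to the genuine token (card);
        # here a canned responder stands in for the card.
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(256)
            conn.sendall(b"RESP:" + cmd)     # pretend the authentic token answered

    def proxy_token_end(cmd):
        # In a real attack this end is presented to the genuine reader and
        # relays every command it receives over the network link.
        with socket.create_connection((HOST, PORT)) as s:
            s.sendall(cmd)
            return s.recv(256)

    threading.Thread(target=proxy_reader_end, daemon=True).start()
    print(proxy_token_end(b"SELECT-AID"))    # b'RESP:SELECT-AID', relayed over the link
    srv.close()

The distance-related countermeasures discussed in the paper (e.g., tighter timing constraints) target exactly the extra latency such a relay link introduces.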
Last updated:  2011-11-19
Charm: A Framework for Rapidly Prototyping Cryptosystems
Joseph A. Akinyele, Matthew D. Green, Avi D. Rubin
We describe Charm, an extensible framework designed for rapid prototyping of cryptographic systems that utilize the latest advances in cryptography, such as identity and attribute-based encryption, as well as the traditional cryptographic functions. Charm is designed to minimize code complexity, promote code re-use, and to automate interoperability, while not compromising on efficiency. Charm was designed from the ground up to support the implementation of advanced cryptographic schemes. It includes support for multiple cryptographic settings, an extensive library of re-usable code, and a protocol engine to aid in the development of interactive protocols. Our framework also provides a series of specialized tools that enable different cryptosystems to interoperate. We implemented over twenty cryptographic schemes using Charm, including some new ones that to our knowledge have never been built in practice. This paper describes our modular architecture, which includes a built-in benchmarking module that we use to compare the performance of primitives written in Python to comparable C implementations. We show that in many cases our techniques result in a potential order of magnitude decrease in code size, while inducing an acceptable performance impact.
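To give a flavor of the kind of prototyping the framework targets, the following sketch implements a BLS-style short signature on top of Charm's pairing-group toolbox. It assumes the charm.toolbox.pairinggroup module and the 'SS512' parameter set; exact module paths, parameter names and hash helpers may differ between Charm versions, so treat this as an illustrative sketch rather than canonical Charm code.

    # Illustrative BLS-style signature prototype using Charm's pairing toolbox.
    # Module path and parameter names are assumptions; consult the Charm docs.
    from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, G2, pair

    group = PairingGroup('SS512')              # symmetric pairing parameters

    def keygen():
        g = group.random(G2)
        x = group.random(ZR)                   # secret key
        return (g, g ** x), x                  # public key (g, g^x), secret key x

    def sign(x, message):
        return group.hash(message, G1) ** x    # sigma = H(m)^x

    def verify(pk, message, sigma):
        g, gx = pk
        return pair(sigma, g) == pair(group.hash(message, G1), gx)

    pk, sk = keygen()
    sigma = sign(sk, "hello charm")
    print(verify(pk, "hello charm", sigma))    # True if the pairing equation holds

The brevity of such code, compared with a hand-rolled C implementation of the same scheme, is the code-size reduction the benchmarking discussion above refers to.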
Last updated:  2012-04-17
Impossible Differential Cryptanalysis of the Lightweight Block Ciphers TEA, XTEA and HIGHT
Jiazhe Chen, Meiqin Wang, Bart Preneel
TEA, XTEA and HIGHT are lightweight block ciphers with 64-bit block sizes and 128-bit keys. The round functions of the three ciphers are based on the simple operations XOR, modular addition and shift/rotation. TEA and XTEA are Feistel ciphers with 64 rounds designed by Needham and Wheeler, where XTEA was proposed by the same authors as an enhanced successor of TEA. HIGHT, designed by Hong et al., is a generalized Feistel cipher with 32 rounds operating on eight 8-bit words in each round. On the one hand, all these ciphers are simple and easy to implement; on the other hand, their diffusion is slow, which allows us to find impossible properties. This paper proposes a method to identify impossible differentials for TEA and XTEA by using the diffusion property of these block ciphers, where the impossible differential comes from a one-bit contradiction. By means of this method, a 14-round impossible differential of XTEA and a 13-round impossible differential of TEA are derived, which result in improved impossible differential attacks on 23-round XTEA and 17-round TEA, respectively. These attacks significantly improve upon the previous 11-round impossible differential attack on TEA and 14-round impossible differential attack on XTEA given by Moon et al. at FSE 2002. For HIGHT, we improve the 26-round impossible differential attack proposed by Özen et al.; an impossible differential attack on 27-round HIGHT that is slightly faster than exhaustive search is also given. The attacks on TEA, XTEA and HIGHT are also the best attacks in terms of time complexity.
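For reference, the simplicity of the round operations mentioned above is visible in the TEA encryption routine itself, sketched below in a straightforward, unoptimized form. The 32-cycle (64-round) structure and the constant delta follow the published TEA description; the sketch only illustrates the ARX-style operations and slow per-round diffusion, not the impossible-differential attack.

    # TEA encryption of a 64-bit block (two 32-bit words) under a 128-bit key
    # (four 32-bit words).  Each of the 32 cycles applies two Feistel-like
    # half-round updates built only from XOR, modular addition and fixed shifts.
    MASK = 0xFFFFFFFF
    DELTA = 0x9E3779B9

    def tea_encrypt(v0, v1, k0, k1, k2, k3, cycles=32):
        s = 0
        for _ in range(cycles):
            s = (s + DELTA) & MASK
            v0 = (v0 + ((((v1 << 4) + k0) & MASK)
                        ^ ((v1 + s) & MASK)
                        ^ (((v1 >> 5) + k1) & MASK))) & MASK
            v1 = (v1 + ((((v0 << 4) + k2) & MASK)
                        ^ ((v0 + s) & MASK)
                        ^ (((v0 >> 5) + k3) & MASK))) & MASK
        return v0, v1

    print([hex(w) for w in tea_encrypt(0, 0, 1, 2, 3, 4)])

Because each half-round mixes only neighboring bit positions through shifts and carries, a single-bit input difference needs many rounds to spread across the whole state, which is exactly the slow diffusion the impossible-differential search exploits.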
Last updated:  2011-12-16
On the Joint Security of Encryption and Signature in EMV
Jean Paul Degabriele, Anja Lehmann, Kenneth G. Paterson, Nigel P. Smart, Mario Strefler
We provide an analysis of current and future algorithms for signature and encryption in the EMV standards in the case where a single key-pair is used for both signature and encryption. We give a theoretical attack for EMV's current RSA-based algorithms, showing how access to a partial decryption oracle can be used to forge a signature on a freely chosen message. We show how the attack might be integrated into EMV's CDA protocol flow, enabling an attacker with a wedge device to complete an offline transaction without knowing the cardholder's PIN. Finally, the elliptic curve signature and encryption algorithms that are likely to be adopted in a forthcoming version of the EMV standards are analyzed in the single key-pair setting, and shown to be secure.
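The following toy sketch is not the EMV attack itself, which relies on a partial decryption oracle and the specific EMV message formats, but a minimal reminder, under textbook-RSA assumptions with tiny illustrative parameters, of why sharing one RSA key-pair between decryption and signing is dangerous: a full decryption oracle is already a signing oracle.

    # Toy illustration (textbook RSA, tiny parameters): decryption and signing
    # both apply the same private exponentiation, so an attacker with access to
    # a decryption oracle can forge a signature on an arbitrary value.
    p, q, e = 1009, 1013, 65537                     # illustrative toy parameters
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))               # private exponent

    def decryption_oracle(c):                        # stands in for the oracle access
        return pow(c, d, n)                          # gained via a wedge device

    message = 123456                                 # attacker-chosen value
    forged_signature = decryption_oracle(message)    # ask the oracle to "decrypt" it
    print(pow(forged_signature, e, n) == message)    # True: it verifies as a signature

Real deployments use padding schemes that separate the two operations, which is why the paper's attack has to work harder with only a partial decryption oracle, and why the analyzed elliptic curve algorithms are shown to remain secure in the single key-pair setting.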