All papers in 2012 (Page 7 of 733 results)

Last updated:  2013-04-02
On the Collision and Preimage Security of MDC-4 in the Ideal Cipher Model
Bart Mennink
We present the first collision and preimage security analysis of MDC-4, a 24-year-old construction for transforming an n-bit block cipher into a 2n-bit hash function. We start with the MDC-4 compression function based on two independent block ciphers, and prove that any adversary with query access to the underlying block ciphers requires at least 2^{5n/8} queries (asymptotically) to find a collision, and at least 2^{5n/4} queries to find a preimage. These results then directly carry over to the MDC-4 hash function design. Next, we consider MDC-4 based on one single block cipher, and confirm that the collision bound carries over to the single block cipher setting. In the case of preimage resistance we present a more negative result: for a target image with the same left and right half, an MDC-4 preimage in the single block cipher setting can be found in approximately 2^n queries. Yet, restricted to target images with different left and right halves, the preimage security bound of 2^{5n/4} queries is retained.
Last updated:  2012-02-29
On The Nonlinearity of Maximum-length NFSR Feedbacks
Meltem Sonmez Turan
Linear Feedback Shift Registers (LFSRs) are the main building block of many classical stream ciphers; however, due to their inherent linearity, most LFSR-based designs do not offer the desired security levels. In the last decade, using Nonlinear Feedback Shift Registers (NFSRs) in stream ciphers has become very popular. However, the theory of NFSRs is not well understood: there is no efficient method for constructing a cryptographically strong feedback function with maximum period, and, given a feedback function, it is hard to predict its period. In this paper, we study maximum-length NFSRs, focusing on the nonlinearity of their feedback functions. First, we provide some upper bounds on the nonlinearity of maximum-length feedback functions, and then we study the feedback functions having nonlinearity 2 in detail. We also show some techniques to improve the nonlinearity of a given feedback function using cross-joining.
Last updated:  2012-02-29
On the Immunity of Rotation Symmetric Boolean Functions Against Fast Algebraic Attacks
Yin Zhang, Meicheng Liu, Dongdai Lin
In this paper, it is shown that an $n$-variable rotation symmetric Boolean function $f$ with $n$ even but not a power of 2 admits a rotation symmetric function $g$ of degree at most $e\leq n/3$ such that the product $gf$ has degree at most $n-e-1$.
Last updated:  2012-02-29
Finding Optimal Formulae for Bilinear Maps
Razvan Barbulescu, Jérémie Detrey, Nicolas Estibals, Paul Zimmermann
We describe a unified framework to search for optimal formulae evaluating bilinear --- or quadratic --- maps. This framework applies to polynomial multiplication and squaring, finite field arithmetic, matrix multiplication, etc. We then propose a new algorithm to solve problems in this unified framework. With an implementation of this algorithm, we prove the optimality of various published upper bounds, and find improved upper bounds.
Last updated:  2012-03-20
Chosen-Ciphertext Secure Efficiently Searchable Encryption in the Standard Model
Yang Cui, Kirill Morozov
In the standard model, deterministic public-key encryption (PKE) secure against chosen-ciphertext attacks by a privacy adversary (PRIV-CCA) is known to be built only from lossy trapdoor functions, as demonstrated by Boldyreva et al at Crypto 2008. We show that the method of achieving IND-CCA security via correlated products, recently introduced by Rosen and Segev at TCC 2009, can be used to achieve PRIV-CCA secure PKE of uniform messages from any trapdoor permutation (TDP) in the standard model. Our schemes are {\em not} deterministic as a whole; however, randomness is only applied to a particular part of the ciphertext - a one-time signature used for the validity check. This allows efficient (logarithmic in the database size) search on encrypted data. In a nutshell, our first construction (which is generic) starts from any IND-CPA secure PKE (implied by TDP), builds its k-correlated version, transforms it into a k-correlated PRIV-CPA encryption, and finally lifts it up to PRIV-CCA security. In contrast to Rosen and Segev's correlated products method, we do not assume one-wayness under correlated inputs, thus any IND-CPA secure PKE can be used in our construction. In addition, we present a second construction -- more efficient than the first one -- based on assumptions from coding theory and any TDP. Note that for the price of allowing some limited use of randomness, we achieve PRIV security for multiple messages, which is strictly stronger than the single-message notion PRIV1 achieved by the scheme of Boldyreva et al at Crypto 2008.
Last updated:  2015-12-18
On the Optimality of Lattices for the Coppersmith Technique
Yoshinori Aono, Manindra Agrawal, Takakazu Satoh, Osamu Watanabe
We investigate a method for finding small integer solutions of a univariate modular equation, which was introduced by Coppersmith and extended by May. We refer to this method as the Coppersmith technique. This paper provides a way to analyze general limitations of the lattice construction for the Coppersmith technique. Our analysis upper bounds the possible range of $U$, and this upper bound is asymptotically equal to the bound given by the original results of Coppersmith and May. This means that they have already given the best lattice construction. In addition, we investigate the optimality for the bivariate equation used to solve the small inverse problem, inspired by Kunihiro's argument. In particular, we show the optimality for the Boneh-Durfee equation used in RSA cryptanalysis. To show our results, we establish a framework for the technique following the relation of Howgrave-Graham, and then concretely define the conditions under which the technique succeeds and fails. We then provide a way to analyze the range of $U$ that satisfies these conditions. Technically, we show that the original result of Coppersmith achieves the optimal bound for $U$ when constructing a lattice in the standard way. We then provide evidence which indicates that constructing a non-standard lattice is generally difficult.
Last updated:  2012-02-29
Security Analysis of A Single Sign-On Mechanism for Distributed Computer Networks
Guilin Wang, Jiangshan Yu, Qi Xie
Single sign-on (SSO) is an authentication mechanism that enables a legal user with a single credential to be authenticated by multiple service providers in distributed computer networks. Recently, Chang and Lee proposed a new SSO scheme and claimed its security by providing well-organized security arguments. In this paper, however, we show that their scheme is actually insecure, as it fails to meet credential privacy and soundness of authentication. Specifically, we present two impersonation attacks. The first attack allows a malicious service provider, who has successfully communicated with a legal user twice, to recover the user's credential and then to impersonate the user to access resources and services offered by other service providers. In the other attack, an outsider without any credential may be able to enjoy network services freely by impersonating any legal user or a nonexistent user. We identify the flaws in their security arguments to explain why these attacks are possible against their SSO scheme. Our attacks also apply to another SSO scheme proposed by Hsu and Chuang, which inspired the design of the Chang-Lee scheme. We put forward the study of the soundness of authentication as an open problem.
Last updated:  2012-02-29
More on Correcting Errors in RSA Private Keys: Breaking CRT-RSA with Low Weight Decryption Exponents
Santanu Sarkar, Subhamoy Maitra
Several schemes have been proposed towards fast encryption and decryption in RSA and its variants. One popular idea is to use integers having low Hamming weight in the preparation of the decryption exponents. This reduces the multiplication effort in the square-and-multiply exponentiation routine, both in encryption and decryption. In this paper we show that such schemes are insecure in CRT-RSA when the encryption exponent is small (e.g., $e = 2^{16}+1$). In particular, we show that the CRT-RSA schemes presented in SAC 1996 and ACISP 2005 with low weight decryption exponents can be broken in a few minutes in certain cases. Further, the scheme of CT-RSA 2010, where the decryption exponents are not of low weight but have large low weight factors, can also be cryptanalysed. To mount the attack, we exploit the heuristic proposed by Henecka et al (Crypto 2010) that is capable of correcting errors in the secret parameters when the encryption exponent is small. In the process, we identify a few modifications of the error correction strategy that provide significantly improved experimental outcomes and also beat the theoretical bounds given in the work of Henecka et al.
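As a side note on the efficiency motivation mentioned in this abstract, the following minimal Python sketch (my own illustration, with hypothetical exponents and modulus, not taken from the paper) counts operations in a plain left-to-right square-and-multiply loop, showing why low-Hamming-weight exponents were attractive in the first place.

```python
# Minimal sketch (hypothetical parameters): left-to-right square-and-multiply,
# counting operations to show why low-Hamming-weight exponents are cheap.
def modexp_count(base, exp, mod):
    """Return (base**exp mod mod, #squarings, #multiplications)."""
    result, squarings, mults = 1, 0, 0
    for bit in bin(exp)[2:]:              # most significant bit first
        result = (result * result) % mod
        squarings += 1
        if bit == '1':                    # one multiplication per set bit
            result = (result * base) % mod
            mults += 1
    return result, squarings, mults

weight3 = (1 << 1023) | (1 << 512) | 1    # a 1024-bit exponent of Hamming weight 3
dense   = (2**1024 - 1) // 3              # a ~1024-bit exponent with 512 set bits
modulus = 2**1024 - 105                   # stand-in odd modulus

_, s1, m1 = modexp_count(7, weight3, modulus)
_, s2, m2 = modexp_count(7, dense, modulus)
print(s1, m1)   # about the same number of squarings, but only 3 multiplications
print(s2, m2)   # versus roughly 512 multiplications for the dense exponent
```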
Last updated:  2012-02-29
Generic Construction of Certificate Based Encryption from Certificateless Encryption Revisited
Wei Gao, Guilin Wang, Kefei Chen, Xueli Wang
Certificateless public key encryption (CLE) and certificate based encryption (CBE) are two novel public key cryptographic primitives requiring no authenticity verification of the recipient's public key. Both are motivated by the goal of simultaneously solving the heavy certificate management problem inherent in traditional public key encryption (PKE) and the key escrow problem inherent in identity-based encryption (IBE). It is an attractive cryptographic task to formally explore the relation between CBE and CLE. In 2005, Al-Riyami and Paterson proposed a general conversion from CLE to CBE. Shortly afterwards, Kang and Park pointed out a flaw in the security proof of the Al-Riyami-Paterson conversion. In 2012, Wu et al. proposed another generic conversion from CLE to CBE. Compared with the Al-Riyami-Paterson conversion, Wu et al.'s method can be proved secure, but it has to additionally involve collision resistant hash functions. It remains an open problem whether the generic conversion due to Al-Riyami and Paterson, which is very neat, is provably secure. We aim to solve this open problem. First, we formalize a new security model for CLE, featuring a security property overlooked by previous security models. With this new security model as the basic tool, we succeed in proving that the Al-Riyami-Paterson generic conversion from CLE to CBE is secure, provided the CLE scheme is secure in our new security model. A concrete provably secure CBE scheme is presented to demonstrate the application of our result.
Last updated:  2012-04-27
Provably Secure Generic Construction of Certificate Based Signature from Certificateless Signature in Standard Model
Wei Gao, Guilin Wang, Kefei Chen, Xueli Wang
Certificateless cryptography (CLC) and certificate-based cryptography (CBC) are two novel public key paradigms which combine the merits of traditional public key cryptography (PKC) and identity-based cryptography (IBC). They succeed in avoiding the key escrow problem in IBC and reducing the public key management overhead in traditional PKC. This paper deals with the generic construction of certificate based signature (CBS) from certificateless signature (CLS). Wu et al. proposed the first generic conversion from CLS to CBS provably secure in the random oracle model. This paper proposes an intuitive, simple and provably secure generic conversion from CLS to CBS. The security of this conversion is proved in the standard model. To develop the security proof of this conversion, we put forth a novel security model which introduces a previously neglected nontrivial attack and better captures the CLS security notion. Following this generic conversion, a provably secure CBS scheme is constructed as an example.
Last updated:  2012-02-29
FlipIt: The Game of "Stealthy Takeover"
Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest
Recent targeted attacks have increased significantly in sophistication, undermining the fundamental assumptions on which most cryptographic primitives rely for security. For instance, attackers launching an Advanced Persistent Threat (APT) can steal full cryptographic keys, violating the very secrecy of "secret" keys that cryptographers assume in designing secure protocols. In this article, we introduce a game-theoretic framework for modeling various computer security scenarios prevalent today, including targeted attacks. We are particularly interested in situations in which an attacker periodically compromises a system or critical resource completely, learns all its secret information and is not immediately detected by the system owner or defender. We propose a two-player game between an attacker and defender called FlipIt or The Game of "Stealthy Takeover". In FlipIt, players compete to control a shared resource. Unlike most existing games, FlipIt allows players to move at any given time, taking control of the resource. The identity of the player controlling the resource, however, is not revealed until a player actually moves. To move, a player pays a certain move cost. The objective of each player is to control the resource for a large fraction of the time, while minimizing his total move cost. FlipIt provides a simple and elegant framework in which we can formally reason about the interaction between attackers and defenders in practical scenarios. In this article, we restrict ourselves to games in which one of the players (the defender) plays with a renewal strategy, one in which the intervals between consecutive moves are chosen independently at random according to a fixed probability distribution. We consider attacker strategies ranging in increasing sophistication from simple periodic strategies (with moves spaced at equal time intervals) to more complex adaptive strategies, in which moves are determined based on feedback received during the game. For different classes of strategies employed by the attacker, we determine strongly dominant strategies for both players (when they exist), i.e., strategies that achieve a higher benefit than all other strategies in a particular class. When strongly dominant strategies do not exist, our goal is to characterize the residual game consisting of strategies that are not strongly dominated by other strategies. We also prove equivalence or strict inclusion of certain classes of strategies under different conditions. Our analysis of different FlipIt variants teaches cryptographers, system designers, and the community at large some valuable lessons: 1. Systems should be designed under the assumption of repeated total compromise, including theft of cryptographic keys. FlipIt provides guidance on how to implement a cost-effective defensive strategy. 2. Aggressive play by one player can motivate the opponent to drop out of the game (essentially not to play at all). Therefore, moving fast is a good defensive strategy, but it can only be implemented if move costs are low. We believe that virtualization has a huge potential in this respect. 3. Close monitoring of one’s resources is beneficial in detecting potential attacks faster, gaining insight into the attacker’s strategies, and scheduling defensive moves more effectively. Interestingly, FlipIt finds applications in other security realms besides modeling of targeted attacks. Examples include cryptographic key rotation, password changing policies, refreshing virtual machines, and cloud auditing.
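To make the game setup concrete, here is a toy Monte Carlo sketch (my own illustration with made-up periods and move costs, not the paper's analysis) in which both players move periodically with a uniformly random phase and we estimate each player's control time and net benefit.

```python
# Toy FlipIt-style simulation (illustrative only): both players move
# periodically with a uniformly random phase; we estimate the fraction of
# time each controls the resource and the resulting net benefit.
import random

def simulate(period_d, period_a, cost_d, cost_a, horizon=10_000.0, trials=200):
    benefit_d = benefit_a = 0.0
    for _ in range(trials):
        d_moves = [random.uniform(0, period_d) + k * period_d
                   for k in range(int(horizon / period_d) + 1)]
        a_moves = [random.uniform(0, period_a) + k * period_a
                   for k in range(int(horizon / period_a) + 1)]
        events = sorted([(t, 'D') for t in d_moves] + [(t, 'A') for t in a_moves])
        owner, last_t, time_d = 'D', 0.0, 0.0       # defender starts in control
        for t, who in events:
            if t > horizon:
                break
            if owner == 'D':
                time_d += t - last_t
            owner, last_t = who, t
        if owner == 'D':
            time_d += horizon - last_t
        benefit_d += time_d / horizon - cost_d / period_d
        benefit_a += (1 - time_d / horizon) - cost_a / period_a
    return benefit_d / trials, benefit_a / trials

# Faster (smaller-period) defensive play buys more control time at a higher move cost.
print(simulate(period_d=1.0, period_a=2.0, cost_d=0.1, cost_a=0.3))
print(simulate(period_d=0.5, period_a=2.0, cost_d=0.1, cost_a=0.3))
```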
Last updated:  2012-03-07
On the Circular Security of Bit-Encryption
Ron Rothblum
Motivated by recent developments in fully homomorphic encryption, we consider the folklore conjecture that every semantically-secure bit-encryption scheme is circular secure, or in other words, that every bit-encryption scheme remains secure even when the adversary is given encryptions of the individual bits of the private-key. We show the following obstacles to proving this conjecture: 1. We construct a public-key bit-encryption scheme that is plausibly semantically secure, but is not circular secure. The circular security attack manages to fully recover the private-key. The construction is based on an extension of the Symmetric External Diffie-Hellman assumption (SXDH) from bilinear groups, to $\ell$-multilinear groups of order $p$ where $\ell \geq c \cdot \log p$ for some $c>1$. While there do exist $\ell$-multilinear groups (unconditionally), for $\ell \geq 3$ there are no known candidates for which the SXDH problem is believed to be hard. Nevertheless, there is also no evidence that such groups do not exist. Our result shows that in order to prove the folklore conjecture, one must rule out the possibility that there exist $\ell$-multilinear groups for which SXDH is hard. 2. We show that the folklore conjecture cannot be proved using a black-box reduction. That is, there is no reduction of circular security of a bit-encryption scheme to semantic security of that very same scheme that uses both the encryption scheme and the adversary as black-boxes. Both of our negative results extend also to the (seemingly) weaker conjecture that every CCA secure bit-encryption scheme is circular secure. As a final contribution, we show an equivalence between three seemingly distinct notions of circular security for public-key bit-encryption schemes. In particular, we give a general search to decision reduction that shows that an adversary that distinguishes between encryptions of the bits of the private-key and encryptions of zeros can be used to actually recover the private-key.
Last updated:  2012-04-23
Unbalanced Elementary Symmetric Boolean Functions with the Degree "d" and "wt(d)>=3"
Zhihui Ou
In this paper, for $d=2^{t}k$, $n=2^{t}(2k+q)+m$ and special $k=2^{w}(2^0+2^1+\cdots+2^s)$, we show that a majority of $X(d,n)$ are not balanced. The results include many cases with $wt(d)\geq 3$ and $n\equiv 0,1,2,3 \pmod 4$. These results also support the conjecture that $X(2^t,2^{t+1}l-1)$ is the only nonlinear balanced elementary symmetric Boolean function. Here $t\geq 2$, $q\geq 1$, $s\geq 0$, $w\geq 0$ and $m\geq -1$ are integers, and $X(d,n)=\bigoplus\limits_{1\le i_{1} <\cdots<i_{d}\le n} x_{i_{1} } \cdots x_{i_{d} }$.
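For small parameters, balancedness of X(d,n) can be tested exactly; the sketch below (my own illustration, not from the paper) relies on the standard fact that X(d,n) evaluates to binom(w,d) mod 2 on an input of Hamming weight w.

```python
# Minimal sketch: exact balancedness test for the elementary symmetric
# Boolean function X(d,n), using the fact that its value on an input of
# Hamming weight w is binom(w, d) mod 2.
from math import comb

def weight_of_X(d, n):
    """Number of n-bit inputs on which X(d,n) equals 1."""
    return sum(comb(n, w) for w in range(n + 1) if comb(w, d) % 2 == 1)

def is_balanced(d, n):
    return weight_of_X(d, n) == 2 ** (n - 1)

# The conjectured nonlinear balanced cases have the form X(2^t, 2^(t+1)*l - 1):
print(is_balanced(4, 15))   # True  (d = 2^2, n = 2^3 * 2 - 1)
print(is_balanced(4, 16))   # False (an unbalanced case)
```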
Last updated:  2012-02-29
Cryptanalysis of a Universally Verifiable Efficient Re-encryption Mixnet
Shahram Khazaei, Björn Terelius, Douglas Wikström
We study the heuristically secure mix-net proposed by Puiggalí and Guasch (EVOTE 2010). We present practical attacks on both correctness and privacy for some sets of parameters of the scheme. Although our attacks only allow us to replace a few inputs, or to break the privacy of a few voters, this shows that the scheme cannot be proven secure.
Last updated:  2015-01-03
Homomorphic Evaluation of the AES Circuit
Craig Gentry, Shai Halevi, Nigel P. Smart
We describe a working implementation of leveled homomorphic encryption (with or without bootstrapping) that can evaluate the AES-128 circuit. This implementation is built on top of the HElib library, whose design was inspired by an early version of the current work. Our main implementation (without bootstrapping) takes about 4 minutes and 3GB of RAM, running on a small laptop, to evaluate an entire AES-128 encryption operation. Using SIMD techniques, we can process up to 120 blocks in each such evaluation, yielding an amortized rate of just over 2 seconds per block. For cases where further processing is needed after the AES computation, we describe a different setting that uses bootstrapping. We describe an implementation that lets us process 180 blocks in just over 18 minutes using 3.7GB of RAM on the same laptop, yielding an amortized cost of just over 6 seconds per block. We note that a somewhat better amortized per-block cost can be obtained using "byte-slicing" (and maybe also "bit-slicing") implementations, at the cost of significantly slower wall-clock time for a single evaluation.
Last updated:  2012-02-29
Combined Attacks on the AES Key Schedule
François Dassance, Alexandre Venelli
We present new combined attacks on the AES key schedule based on the work of Roche et al. The main drawbacks of the original attack are: the need for high repeatability of the fault, a very particular fault model and a very high complexity of the key recovery algorithm. We consider more practical fault models, we obtain improved key recovery algorithms and we present more attack paths for combined attacks on AES. We propose to inject faults on the different operations of the key schedule instead of the key state of round 9 or the corresponding data state. We also consider fault injections in AES constants such as the RCON or the affine transformation of the SubWord. By corrupting these constants, the attacker can easily deduce the value of the error. The key recovery complexity can then be greatly improved. Notably, we can obtain a complexity identical to a classical differential side-channel attack. Our attacks defeat most AES implementations secure against both high-order side-channel attacks and fault attacks.
Last updated:  2012-02-29
An algorithm for factoring integers
Yingpu Deng, Yanbin Pan
We propose an algorithm for factoring a composite number. The method seems new.
Last updated:  2012-04-21
The Collision Security of MDC-4
Ewan Fleischmann, Christian Forler, Stefan Lucks, Jakob Wenzel
There are four somewhat classical double-length block-cipher-based compression functions known: MDC-2, MDC-4, Abreast-DM, and Tandem-DM. They were all developed over 20 years ago. In recent years, cryptographic research has put a focus on block cipher based hashing and found collision security results for three of them (MDC-2, Abreast-DM, Tandem-DM). In this paper, we add MDC-4, which is part of the IBM CLiC cryptographic module (FIPS 140-2 Security Policy for IBM CryptoLite in C, October 2003), to that list by showing that - 'instantiated' using an ideal block cipher with 128 bit key/plaintext/ciphertext size - no adversary asking less than $2^{74.76}$ queries can find a collision with probability greater than $1/2$. This is the first result on the collision security of the hash function MDC-4. The compression function MDC-4 is created by interconnecting two MDC-2 compression functions but hashing only one message block with them instead of two. The developers' aim for MDC-4 was to offer a higher security margin, compared to MDC-2, while still being fast enough for practical purposes. The MDC-2 collision security proof of Steinberger (EUROCRYPT 2007) cannot be directly applied to MDC-4 due to the structural differences. Although sharing many commonalities, our proof for MDC-4 is much shorter and we claim that our presentation is also easier to grasp.
Last updated:  2012-12-28
Recursive Composition and Bootstrapping for SNARKs and Proof-Carrying Data
Nir Bitansky, Ran Canetti, Alessandro Chiesa, Eran Tromer
\emph{Succinct non-interactive arguments} (SNARGs) enable verifying NP statements with much lower complexity than required for classical NP verification (in fact, with complexity that is \emph{independent} of the NP language at hand). In particular, SNARGs provide strong solutions to the problem of verifiably delegating computation. Despite recent progress in the understanding and construction of SNARGs, there remain unattained goals. First, \emph{publicly-verifiable SNARGs} are only known either in the random oracle model, or in a model that allows expensive offline preprocessing. Second, known SNARGs require from the prover significantly more time or space than required for classical NP verification. We show that, assuming collision-resistant hashing, \emph{any} SNARG having a natural \emph{proof of knowledge} property (i.e., a SNARK) can be ``bootstrapped" to obtain a \emph{complexity-preserving} SNARK, i.e., one without expensive preprocessing and where the prover's time and space complexity is essentially the same as that required for classical NP verification. By applying our transformation to known publicly-verifiable SNARKs with expensive preprocessing, we obtain the first publicly-verifiable complexity-preserving SNARK in the plain model (and in particular, eliminate the expensive preprocessing), thereby attaining the aforementioned goals. We also show an analogous transformation for privately-verifiable SNARKs, assuming fully-homomorphic encryption. Curiously, our transformations do not rely on PCPs. At the heart of our transformations is \emph{recursive composition} of SNARKs and, more generally, new techniques for constructing and using \emph{proof-carrying data} (PCD) systems, which extend the notion of a SNARK to the distributed setting. Concretely, to bootstrap a given SNARK, we recursively compose the SNARK to obtain a ``weak'' PCD system for shallow distributed computations, and then use the PCD framework to attain stronger, complexity-preserving SNARKs and PCD systems.
Last updated:  2012-02-24
Algebraic attack on lattice based cryptosystems via solving equations over real numbers.
Jintai Ding, Dieter Schmidt
In this paper we present a new algorithm to attack lattice based cryptosystems by solving a problem over the real numbers. In the case of the NTRU cryptosystem, if we assume additional information on the modular operations is available, we can break the NTRU cryptosystem completely by recovering the secret key. We believe that this fact was not known before.
Last updated:  2012-04-23
Recent Results on Balanced Symmetric Boolean Functions
Yingming Guo, Guangpu Gao, Yaqun Zhao
In this paper we prove that all balanced symmetric Boolean functions of fixed degree are trivially balanced when the number of variables grows large enough. We also show that there are no trivially balanced elementary symmetric Boolean functions except for $n=l\cdot2^{t+1}-1$ and $d=2^t$, where $t$ and $l$ are any positive integers; this shows that Cusick's conjecture for balanced elementary symmetric Boolean functions is exactly the conjecture that all balanced elementary symmetric Boolean functions are trivially balanced. In addition, we obtain an integer $n_0$, depending only on $d$, such that Cusick's conjecture holds for any $n>n_0$.
Last updated:  2012-02-23
Tolerant Algebraic Side-Channel Analysis of AES
Yossef Oren, Avishai Wool
We report on a Tolerant Algebraic Side-Channel Analysis (TASCA) attack on an AES implementation, using an optimizing pseudo-Boolean solver to recover the secret key from a vector of Hamming weights corresponding to a single encryption. We first develop a bound on the maximum error rate that can be tolerated as a function of the set size output by the decoder and the number of measurements. Then, we show that the TASCA approach is capable of recovering the secret key from errored traces in a reasonable time for error rates approaching this theoretical bound – specifically, the key was recovered in 10 hours on average from 100 measurements with error rates of up to 20%. We discovered that, perhaps counter-intuitively, there are strong incentives for the attacker to use as few leaks as possible to recover the key. We describe the equation setup and the experiment setup, and discuss the results.
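To illustrate what the "set size output by the decoder" means in this setting, here is a tiny sketch (my own illustration, assuming an 8-bit Hamming-weight leakage model) that lists the candidate byte values compatible with one noisy weight measurement.

```python
# Illustrative sketch: candidate bytes compatible with a measured Hamming
# weight, for a given error tolerance (the "set" handed to the solver).
def candidate_set(measured_hw, tolerance=0):
    return [b for b in range(256) if abs(bin(b).count("1") - measured_hw) <= tolerance]

print(len(candidate_set(4, tolerance=0)))   # 70 of 256 bytes have weight 4
print(len(candidate_set(4, tolerance=1)))   # 182 once weights 3..5 are allowed
```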
Last updated:  2012-11-05
Hardness of decision (R)LWE for any modulus
Adeline Langlois, Damien Stehle
The decision Learning With Errors problem has proven an extremely flexible foundation for devising provably secure cryptographic primitives. LWE can be expressed in terms of linear algebra over Z/qZ. This modulus q is the subject of study of the present work. When q is prime and small, or when it is exponential and composite with small factors, LWE is known to be at least as hard as standard worst-case problems over euclidean lattices (sometimes using quantum reductions). The Ring Learning With Errors problem is a structured variant of LWE allowing for more compact keys and more efficient primitives. It is known to be at least as hard as standard worst-case problems restricted to so-called ideal lattices, but under even more restrictive arithmetic conditions on q. In this work, we prove that the arithmetic form of the modulus q is irrelevant to the computational hardness of LWE and RLWE. More precisely, we show that these problems are at least as hard as standard worst-case problems on lattices, under the unique condition that q is of polynomial bit-size. This result is most useful for adapting LWE-based cryptographic constructions to the RLWE setting. Among others, this allows us to derive the first Identity-Based Encryption scheme of quasi-optimal performance proven secure under standard worst-case lattice assumptions, in the standard model. Other applications include authentication, functional encryption and traitor tracing.
Last updated:  2013-08-15
Worst-Case to Average-Case Reductions for Module Lattices
Adeline Langlois, Damien Stehle
Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiencies can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes along with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. In this work, we define the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. We prove that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves generalize arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: Ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
Last updated:  2012-09-07
ECM at Work
Joppe W. Bos, Thorsten Kleinjung
The performance of the elliptic curve method (ECM) for integer factorization plays an important role in the security assessment of RSA-based protocols as a cofactorization tool inside the number field sieve. The efficient arithmetic for Edwards curves has found an application in speeding up ECM. We propose techniques based on generating and combining addition-subtraction chains to optimize Edwards ECM in terms of both performance and memory requirements. This makes our approach very suitable for memory-constrained devices such as graphics processing units (GPUs). For commonly used ECM parameters we are able to lower the required memory by up to a factor of 55 compared to the state-of-the-art Edwards ECM approach. Our ECM implementation on a GTX 580 GPU sets a new throughput record, outperforming the best GPU, CPU and FPGA results reported in the literature.
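As a minimal illustration of the addition-subtraction idea (my own sketch, not the chain-generation algorithm of the paper), the signed-digit non-adjacent form already trades some additions for subtractions and lowers the number of curve operations per scalar:

```python
# Minimal sketch: non-adjacent form (NAF) of a scalar, the simplest
# addition-subtraction recoding used in elliptic-curve scalar multiplication.
def naf(k):
    """Return the NAF digits of k (in {-1, 0, 1}), least significant first."""
    digits = []
    while k > 0:
        if k % 2 == 1:
            d = 2 - (k % 4)        # pick +1 or -1 so the remaining value is divisible by 4
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

k = 0b1011011101111
print(bin(k).count("1"))                # 10 additions in plain double-and-add
print(sum(1 for d in naf(k) if d))      # 5 additions/subtractions with NAF
```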
Last updated:  2013-02-25
A Lattice-Based Traitor Tracing Scheme
San Ling, Damien Stehle
A traitor tracing scheme is a multi-receiver encryption scheme where malicious receiver coalitions aiming at building pirate decryption devices are deterred by the existence of a tracing algorithm: Using the pirate decryption device, the tracing algorithm can recover at least one member of the malicious coalition. All existing traitor tracing schemes rely either on rather inefficient generic constructions from arbitrary encryption schemes and collusion-secure fingerprinting codes, or on algebraic constructions exploiting the assumed hardness of variants of the Discrete Logarithm Problem. In this work, we present the first algebraic construction of a traitor tracing encryption scheme whose security relies on the assumed (quantum) worst-case hardness of standard lattice problems. The scheme is public-key, provably resists Chosen Plaintext Attacks and allows for minimal access black-box tracing (i.e., tracing works even if granted a very limited access to the pirate decryption device). It inherits the standard features of lattice-based cryptography, such as provable security under mild computational assumptions, conjectured resistance to quantum computers, and asymptotic efficiency. For proving the security, we introduce a Learning With Errors variant of the k-SIS problem from Boneh and Freeman [PKC'11], which we prove at least as hard as the standard LWE problem. We also describe a variant of our scheme with security based on the assumed hardness of the Ring Learning With Errors problem which achieves quasi-optimal asymptotic performance with respect to the security parameter.
Last updated:  2012-02-23
Collision Bounds for the Additive Pollard Rho Algorithm for Solving Discrete Logarithms
Joppe W. Bos, Alina Dudeanu, Dimitar Jetchev
We prove collision bounds for the Pollard rho algorithm to solve the discrete logarithm problem in a general cyclic group $G$. Unlike the setting studied by Kim et al. we consider additive walks: the setting used in practice to solve the elliptic curve discrete logarithm problem. Our bounds differ from the birthday bound $O(\sqrt{|G|})$ by a factor of $\sqrt{\log{|G|}}$ and are based on mixing time estimates for random walks on finite abelian groups due to Hildebrand.
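For readers unfamiliar with additive (r-adding) walks, the following toy sketch (my own illustration, in a small multiplicative subgroup rather than an elliptic-curve group, and with a lookup table instead of memoryless cycle detection) shows the walk and how a collision yields the discrete logarithm; the group parameters are hypothetical small values.

```python
# Toy r-adding walk for discrete logarithms in a small prime-order subgroup.
# Real ECDLP implementations walk over curve points and use cycle detection.
import random

def dlog_additive_walk(g, h, p, order, r=16):
    # Precompute r random steps M_j = g^a_j * h^b_j and track the exponents.
    steps = []
    for _ in range(r):
        a, b = random.randrange(order), random.randrange(order)
        steps.append((pow(g, a, p) * pow(h, b, p) % p, a, b))
    x, a, b = 1, 0, 0
    seen = {x: (a, b)}
    while True:
        m, da, db = steps[x % r]                    # step chosen by the current value
        x, a, b = x * m % p, (a + da) % order, (b + db) % order
        if x in seen:                               # collision: g^a h^b = g^a' h^b'
            a2, b2 = seen[x]
            if (b - b2) % order == 0:
                return None                         # degenerate collision, retry
            return (a2 - a) * pow(b - b2, -1, order) % order
        seen[x] = (a, b)

p, q = 1019, 509            # 1019 = 2*509 + 1; work in the subgroup of prime order 509
g = 4                       # a generator of that subgroup
secret = 123
h = pow(g, secret, p)
print(dlog_additive_walk(g, h, p, q))   # 123 (rerun on the rare None)
```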
Last updated:  2012-05-08
Remarks on "An ideal multi-secret sharing scheme based on MSP"
Zhi-hui Li, Jing Li
In 2010, C.-F. Hsu, Q. Cheng, X. M. Tang and B. Zeng proposed an ideal linear multi-secret sharing scheme based on monotone span programs (the HCTZ scheme for short). This paper analyzes the problems in the HCTZ scheme. We also present an efficient ideal multi-secret sharing scheme based on monotone span programs for a family of access structures. The new scheme effectively overcomes the deficiency of the HCTZ scheme, and has the advantage of a small computational cost.
Last updated:  2012-05-15
Study of the invariant coset attack on PRINTcipher: more weak keys with practical key recovery
Stanislav Bulygin, Michael Walter
In this paper we investigate the invariant property of PRINTcipher first discovered by Leander et al. in their CRYPTO 2011 paper. We provide a thorough study and show that there exist 64 families of weak keys for PRINTcipher-48 and many more for PRINTcipher-96. Moreover, we show that searching the weak key space may be substantially sped up by splitting the search into two consecutive steps. We show that for many classes of weak keys key recovery can be done in a matter of minutes in the chosen/known plaintext scenario. In fact, at least $2^{45}$ weak keys can be recovered in less than 20 minutes per key on a single PC using only a few chosen and one known plaintext(s). We provide detailed treatment of the methods and put them in a more general context that opens new interesting directions of research for PRESENT-like ciphers.
Last updated:  2012-04-16
Improved Algebraic Side-Channel Attack on AES
Mohamed Saied Emam Mohamed, Stanislav Bulygin, Michael Zohner, Annelie Heuser, Michael Walter
In this paper we present improvements of the algebraic side-channel analysis of the Advanced Encryption Standard (AES) proposed in [9]. In particular, we optimize the algebraic representation of AES and the algebraic representation of the obtained side-channel information in order to speed up the attack and increase the success rate. We study the performance of our improvements in both known and unknown plaintext/ciphertext attack scenarios. Our experiments indicate that in both cases the amount of required side-channel information is less than the one required in the attacks introduced in [9]. Furthermore, we introduce a method for error handling, which allows our improved algebraic side-channel attack to escape the assumption of an error-free measurement and thus become applicable in practice. We demonstrate the practical use of our improved algebraic side-channel attack by inserting predictions from a single-trace template attack.
Last updated:  2012-06-22
Optimally Robust Private Information Retrieval
Casey Devet, Ian Goldberg, Nadia Heninger
We give a protocol for multi-server information-theoretic private information retrieval which achieves the theoretical limit for Byzantine robustness. That is, the protocol can allow a client to successfully complete queries and identify server misbehavior in the presence of the maximum possible number of malicious servers. We have implemented our scheme and it is extremely fast in practice: up to thousands of times faster than previous work. We achieve these improvements by using decoding algorithms for error-correcting codes that take advantage of the practical scenario where the client is interested in multiple blocks of the database.
Last updated:  2012-02-23
Semi-Supervised Template Attack
Liran Lerman, Stephane Fernandes Medeiros, Nikita Veshchikov, Cedric Meuter, Gianluca Bontempi, Olivier Markowitch
Side channel attacks take advantage of the information leakage in a cryptographic device. A template attack is a family of side channel attacks which is reputed to be extremely effective. This kind of attack assumes that the attacker can fully control a cryptographic device before attacking a similar one. In this paper, we propose a method based on a semi-supervised learning strategy to relax this assumption. The effectiveness of our proposal is confirmed by software simulations as well as by experiments on an 8-bit microcontroller.
Last updated:  2012-02-23
Computational Soundness of Symbolic Zero-knowledge Proofs: Weaker Assumptions and Mechanized Verification
Michael Backes, Fabian Bendun, Dominique Unruh
The abstraction of cryptographic operations by term algebras, called symbolic models, is essential in almost all tool-supported methods for analyzing security protocols. Significant progress was made in proving that symbolic models offering basic cryptographic operations such as encryption and digital signatures can be sound with respect to actual cryptographic realizations and security definitions. Even abstractions of sophisticated modern cryptographic primitives such as zero-knowledge (ZK) proofs were shown to have a computationally sound cryptographic realization, but only in ad-hoc formalisms and at the cost of placing strong assumptions on the underlying cryptography, which leaves only highly inefficient realizations. In this paper, we make two contributions to this problem space. First, we identify weaker cryptographic assumptions that we show to be sufficient for computational soundness of symbolic ZK proofs. These weaker assumptions are fulfilled by existing efficient ZK schemes as well as generic ZK constructions. Second, we conduct all computational soundness proofs in CoSP, a recent framework that allows for casting computational soundness proofs in a modular manner, independent of the underlying symbolic calculi. Moreover, all computational soundness proofs conducted in CoSP automatically come with mechanized proof support through an embedding of the applied $\pi$-calculus.
Last updated:  2013-11-24
Strongly Unforgeable Proxy Re-Signatures in the Standard Model
S. Sree Vivek, S. Sharmila Deva Selvi, Guhan Balasubramanian, C. Pandu Rangan
Proxy re-signatures are generally used for the delegation of signing rights of a user (delegator) to a semi-trusted proxy and a delegatee. The proxy can convert the signature of one user on a message into the signature of another user on the same message by using the delegation information (rekey) provided by the delegator. This is a handy primitive for network security and automated delegations in hierarchical organizations. Though proxy re-signature schemes that are secure in the standard model are available, none of them have addressed the security notion of strong existential unforgeability, where the adversary will not be able to forge even on messages for which signatures are already available. This is an important property for applications which involve the delegation of authentication on sensitive data. In this paper, we define the security model for strong unforgeability of proxy re-signature schemes. We propose two concrete strongly unforgeable proxy re-signature schemes, where we induce the strong unforgeability in the scheme by embedding the transformation techniques carefully in the sign and resign algorithms. The security of both schemes is related to the hardness of solving the Computational Diffie-Hellman (CDH) problem.
Last updated:  2012-02-23
Public Key Cryptosystems Constructed Based on Reed-Solomon Codes, K(XV)SE(2)PKC, Realizing Coding Rate of Exactly 1.0
Masao KASAHARA
In this paper, we present a new class of public-key cryptosystems, K(XV)SE(2)PKC realizing the coding rate of exactly 1.0, based on Reed-Solomon codes (RS codes). We show that K(XV)SE(2)PKC is secure against the various attacks including the attacks based on the Gröbner basis calculation (Gröbner basis attack, GB attack) and a linear transformation attack.
Last updated:  2012-05-18
Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP
Zvika Brakerski
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically ($B \to B^2\cdot\mathrm{poly}(n)$) with every multiplication (before ``refreshing''), our noise only grows linearly ($B \to B\cdot\mathrm{poly}(n)$). We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values. Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be \emph{classically} reduced to the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
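A back-of-the-envelope comparison (with made-up parameters, not the paper's) of how many multiplicative levels fit under a fixed modulus when the noise grows quadratically versus linearly:

```python
# Hypothetical parameters: count multiplication levels until the noise
# magnitude approaches the decryption threshold (roughly q/4).
import math

def max_depth(growth, log_q=200, poly=1024**2, B=3.2):
    noise, depth = B, 0
    while math.log2(growth(noise, poly)) < log_q - 2:
        noise = growth(noise, poly)
        depth += 1
    return depth

print(max_depth(lambda B, p: B * B * p))   # quadratic growth: a handful of levels
print(max_depth(lambda B, p: B * p))       # linear growth: several times more levels
```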
Last updated:  2012-03-12
MAGNITUDE SQUARED COHERENCE BASED SCA
Sebastien Tiran, Amine Dehbaoui, Philippe Maurine
Magnitude Squared Coherence is a signal processing tool that indicates how well two time domain signals match each other by tracking linear dependencies in their spectral decomposition. This paper introduces different ways of using the Magnitude Squared Coherence for Side Channel Analysis. This distinguisher has several advantages over well-known distinguishers.
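As a toy illustration of using magnitude squared coherence as a side-channel distinguisher (my own sketch with simulated single-point Hamming-weight leakage, not the attack from the paper), one can score key guesses by the average coherence between measured traces and a leakage model; scipy.signal.coherence computes the MSC.

```python
# Toy MSC-based distinguisher on simulated Hamming-weight leakage.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
true_key = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
hw = np.vectorize(lambda b: bin(int(b)).count("1"))
traces = hw(plaintexts ^ true_key) + rng.normal(0, 2.0, size=2000)   # simulated leakage

def msc_score(key_guess):
    model = hw(plaintexts ^ key_guess).astype(float)
    _, cxy = coherence(traces, model, nperseg=256)
    return cxy.mean()                       # average coherence over frequencies

scores = [msc_score(k) for k in range(256)]
print(hex(int(np.argmax(scores))))          # expected to recover 0x3c
```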
Last updated:  2012-06-01
Secure Identity-Based Encryption in the Quantum Random Oracle Model
Mark Zhandry
We give the first proof of security for an identity-based encryption scheme in the quantum random oracle model. This is the first proof of security for any scheme in this model that requires no additional assumptions. Our techniques are quite general and we use them to obtain security proofs for two random oracle hierarchical identity-based encryption schemes and a random oracle signature scheme, all of which have previously resisted quantum security proofs, even using additional assumptions. We also explain how to remove the extra assumptions from prior quantum random oracle model proofs. We accomplish these results by developing new tools for arguing that quantum algorithms cannot distinguish between two oracle distributions. Using a particular class of oracle distributions, so called semi-constant distributions, we argue that the aforementioned cryptosystems are secure against quantum adversaries.
Last updated:  2012-02-26
Efficient identity-based threshold decryption scheme from bilinear pairings
Wei Gao, Guilin Wang, Kefei Chen, Xueli Wang, Guoyan Zhang
Taking advantage of a technique that allows a private key to be safely distributed among decryption servers, we introduce a new identity-based threshold scheme, proven secure in the random oracle model. This new pairing-based scheme features a number of improvements compared to other schemes that can be found in the literature. The two most noticeable ones are efficiency, obtained by reducing the number of pairing computations, and the ability for a user to generate and share a private key without requiring any access to a PKG.
Last updated:  2013-04-24
Another look at HMAC
Neal Koblitz, Alfred Menezes
HMAC is the most widely-deployed cryptographic-hash-function-based message authentication code. First, we describe a security issue that arises because of inconsistencies in the standards and the published literature regarding keylength. We prove a separation result between two versions of HMAC, which we denote HMAC^{std} and HMAC^{Bel}, the former being the real-world version standardized by Bellare et al. in 1997 and the latter being the version described in Bellare's proof of security in his Crypto 2006 paper. Second, we describe how HMAC^{NIST} (the FIPS version standardized by NIST), while provably secure (in the single-user setting), succumbs to a practical attack in the multi-user setting. Third, we describe a fundamental defect from a practice-oriented standpoint in Bellare's 2006 security result for HMAC, and show that because of this defect his proof gives a security guarantee that is of little value in practice. We give a new proof of NMAC security that gives a stronger result for NMAC and HMAC and we discuss why even this stronger result by itself fails to give convincing assurance of HMAC security.
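The keylength subtleties discussed above stem in part from how HMAC canonicalizes its key; the following short sketch with Python's standard hmac module (my own illustration) shows two standard consequences: zero-padding of short keys is invisible, and over-long keys are first hashed down.

```python
# Standard-library illustration of HMAC key handling (RFC 2104 behaviour).
import hmac, hashlib

msg = b"example message"
key = b"short key"
padded = key + b"\x00" * (64 - len(key))    # SHA-256 block size is 64 bytes

t1 = hmac.new(key, msg, hashlib.sha256).hexdigest()
t2 = hmac.new(padded, msg, hashlib.sha256).hexdigest()
print(t1 == t2)                             # True: zero-padding is invisible

long_key = b"K" * 100                       # longer than the block size
t3 = hmac.new(long_key, msg, hashlib.sha256).hexdigest()
t4 = hmac.new(hashlib.sha256(long_key).digest(), msg, hashlib.sha256).hexdigest()
print(t3 == t4)                             # True: long keys are hashed down first
```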
Last updated:  2012-02-23
Efficient identity-based threshold signature scheme from bilinear pairings in the standard model
Wei Gao, Guilin Wang, Xueli Wang, Kefei Chen
We propose a new identity-based threshold signature (IBTHS) scheme from bilinear pairings enjoying the following advantages in efficiency, security and functionality. The round complexity of the threshold signing protocol is optimal, since each party pays no communication cost other than broadcasting one single message. The computational complexity of the threshold signing procedure is considerably low, since no time-consuming pairing is needed except the two pairings for verifying each signature share. The communication channel requirement of the threshold signing procedure is minimal, since a broadcast channel among the signers is enough. It is proved secure with optimal resilience in the standard model. It is the private key associated with an identity, rather than a master key of the Public Key Generator (PKG), that is shared among the signature generation servers. All these excellent properties are due to our new basic technique, by which the private key in the bilinear group is indirectly shared through simply sharing an element in the finite field.
Last updated:  2012-02-23
Particularly Friendly Members of Family Trees
Craig Costello
The last decade has witnessed many clever constructions of parameterized families of pairing-friendly elliptic curves that now enable implementors targeting a particular security level to gather suitable curves in bulk. However, choosing the best curves from a (usually very large) set of candidates belonging to any particular family involves juggling a number of efficiency issues, such as the nature of binomials used to construct extension fields, the Hamming weight of key pairing parameters and the existence of compact generators in the pairing groups. In light of these issues, two recent works considered the best families for k=12 and k=24 respectively, and detailed subfamilies that offer very efficient pairing instantiations. In this paper we closely investigate the other eight attractive families with 8 \leq k < 50, and systematically sub-divide each family into its family tree, branching off until concrete subfamilies are highlighted that simultaneously provide highly-efficient solutions to all of the above computational issues.
Last updated:  2012-08-18
Fast Reductions from RAMs to Delegatable Succinct Constraint Satisfaction Problems
Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer
Succinct arguments for NP are proof systems that allow a weak verifier to retroactively check computation done by a powerful prover. Constructions of such protocols prove membership in languages consisting of \emph{very large yet succinctly-represented constraint satisfaction problems} that, alas, are unnatural in the sense that the problems that arise in practice are not in such form. For general computation tasks, the most natural representation is typically as random-access machine (RAM) algorithms, because such a representation can be obtained very efficiently by applying a compiler to code written in a high-level programming language. Thus, understanding the efficiency of reductions from RAM computations to other NP-complete problem representations for which succinct arguments (or proofs) are known is a prerequisite to a more complete understanding of the applicability of these arguments. Existing succinct argument constructions rely either on circuit satisfiability or (in PCP-based constructions) on algebraic constraint satisfaction problems. In this paper, we present new and more efficient reductions from RAM (and parallel RAM) computations to both problems that (a) preserve succinctness (i.e., do not ``unroll'' the computation of a machine), (b) preserve zero-knowledge and proof-of-knowledge properties, and (c) enjoy fast and highly-parallelizable algorithms for transforming a witness to the RAM computation into a witness for the corresponding problem. These additional properties are typically not considered in ``classical'' complexity theory but are often required or very desirable in the application of succinct arguments. Fulfilling all these efficiency requirements poses significant technical challenges, and we develop a set of tools (both unconditional and leveraging computational assumptions) for generically and efficiently structuring and arithmetizing RAM computations for use in succinct arguments. More generally, our results can be applied to proof systems for NP relying on the aforementioned problem representations; these include various zero-knowledge proof constructions.
Last updated:  2012-02-23
Finding ECM-Friendly Curves through a Study of Galois Properties
Razvan Barbulescu, Joppe W. Bos, Cyril Bouvier, Thorsten Kleinjung, Peter L. Montgomery
In this paper we prove some divisibility properties of the cardinality of elliptic curves modulo primes. These proofs explain the good behavior of certain parameters when using Montgomery or Edwards curves in the setting of the elliptic curve method (ECM) for integer factorization. The ideas of the proofs help us to find new families of elliptic curves with good division properties which increase the success probability of ECM.
Last updated:  2012-02-23
Automatic Search of Attacks on round-reduced AES and Applications
Charles Bouillaguet, Patrick Derbez, Pierre-Alain Fouque
In this paper, we describe versatile and powerful algorithms for searching for guess-and-determine and meet-in-the-middle attacks on some byte-oriented symmetric primitives. To demonstrate the strength of these tools, we show that they allow us to automatically discover new attacks on round-reduced AES with very low data complexity, and to find improved attacks on the AES-based MACs Alpha-MAC and Pelican-MAC, as well as on the AES-based stream cipher LEX. Finally, the tools can be used in the context of fault attacks. These algorithms exploit the algebraically simple byte-oriented structure of the AES. When the attacks found by the tool are practical, they have been implemented and validated experimentally.
Last updated:  2016-11-11
Extended Security Arguments for (Ring) Signature Schemes
Sidi Mohamed El Yousfi Alaoui, Özgür Dagdelen, Pascal Véron, David Galindo, Pierre-Louis Cayrel
The well-known forking lemma by Pointcheval and Stern has been used to prove the security of the so-called generic signature schemes. These signature schemes are obtained via the Fiat-Shamir transform from three-pass identification schemes. A number of five-pass identification protocols have been proposed in the last few years. Extending the forking lemma and the Fiat-Shamir transform would allow us to obtain new signature schemes since, unfortunately, these newly proposed schemes fall outside the original framework. In this paper, we provide an extension of the forking lemma in order to assess the security of what we call n-generic signature schemes. These include signature schemes that are derived from certain (2n + 1)-pass identification schemes. We thus obtain a generic methodology for proving the security of a number of signature schemes derived from recently published five-pass identification protocols, and eventually for (2n+1)-pass identification schemes to come. Finally, we propose a similar extension of the forking lemma for ring signatures, originally proposed by Herranz and Sáez.
Last updated:  2012-06-05
Parallelizing message schedules to accelerate the computations of hash functions
Shay Gueron, Vlad Krasnov
This paper describes an algorithm for accelerating the computations of Davies-Meyer based hash functions. It is based on parallelizing the computation of several message schedules for several message blocks of a given message. This parallelization, together with the proper use of vector processor instructions (SIMD) improves the overall algorithm’s performance. Using this method, we obtain a new software implementation of SHA-256 that performs at 12.11 Cycles/Byte on the 2nd and 10.84 Cycles/Byte on the 3rd Generation Intel® Core™ processors. We also show how to extend the method to the soon-to-come AVX2 architecture, which has wider registers. Since processors with AVX2 will be available only in 2013, exact performance reporting is not yet possible. Instead, we show that our resulting SHA-256 and SHA-512 implementations have a reduced number of instructions. Based on our findings, we make some observations on the SHA3 competition. We argue that if the prospective SHA3 standard is expected to be competitive against the performance of SHA-256 or SHA-512, on the high end platforms, then its performance should be well below 10 Cycles/Byte on the current, and certainly on the near future processors. Not all the SHA3 finalists have this performance. Furthermore, even the fastest finalists will probably offer only a small performance advantage over the current SHA-256 and SHA-512 implementations.
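The core idea, expanding the message schedules of several independent blocks in lock-step, can be sketched with NumPy in place of SIMD intrinsics (an illustration of the approach, not the paper's optimized code, and covering only the schedule, not the compression function):

```python
# Vectorized SHA-256 message-schedule expansion for many blocks at once.
import numpy as np

def rotr(x, n):
    n = np.uint32(n)
    return (x >> n) | (x << (np.uint32(32) - n))

def message_schedules(blocks):
    """blocks: uint32 array of shape (num_blocks, 16). Returns (num_blocks, 64)."""
    W = np.zeros((blocks.shape[0], 64), dtype=np.uint32)
    W[:, :16] = blocks
    for t in range(16, 64):
        s0 = rotr(W[:, t-15], 7) ^ rotr(W[:, t-15], 18) ^ (W[:, t-15] >> np.uint32(3))
        s1 = rotr(W[:, t-2], 17) ^ rotr(W[:, t-2], 19) ^ (W[:, t-2] >> np.uint32(10))
        W[:, t] = W[:, t-16] + s0 + W[:, t-7] + s1    # additions wrap mod 2^32
    return W

blocks = np.random.randint(0, 2**32, size=(8, 16), dtype=np.uint32)
print(message_schedules(blocks).shape)                # (8, 64): eight schedules at once
```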
Last updated:  2012-02-23
Weak Keys of the Full MISTY1 Block Cipher for Related-Key Cryptanalysis
Jiqiang Lu, Wen-She Yap, Yongzhuang Wei
The MISTY1 block cipher has a 64-bit block length, a 128-bit user key and a recommended number of 8 rounds. It is a Japanese CRYPTREC-recommended e-government cipher, a European NESSIE-selected cipher, and an ISO international standard. Despite considerable cryptanalytic efforts during the past fifteen years, there has been no published cryptanalytic attack on the full MISTY1 cipher algorithm. In this paper, we present related-key differential and related-key amplified boomerang attacks on the full MISTY1 under certain weak key assumptions: We describe $2^{103.57}$ weak keys and a related-key differential attack on the full MISTY1 with a data complexity of $2^{61}$ chosen ciphertexts and a time complexity of $2^{87.94}$ encryptions; and we also describe $2^{92}$ weak keys and a related-key amplified boomerang attack on the full MISTY1 with a data complexity of $2^{60.5}$ chosen plaintexts and a time complexity of $2^{80.18}$ encryptions. For the very first time, our results exhibit a cryptographic weakness in the full MISTY1 cipher (when used with the recommended 8 rounds), and show that the MISTY1 cipher is distinguishable from a random function and thus cannot be regarded as an ideal cipher.
Last updated:  2012-02-23
Modified version of “Latin Dances Revisited: New Analytic Results of Salsa20 and ChaCha”
Tsukasa Ishiguro
This paper is a modified version of our own paper “Latin Dances Revisited: New Analytic Results of Salsa20 and ChaCha” presented at ICICS 2011. The original paper contained incorrect data because of a software bug in the experimental program; we have therefore rerun the experiments with a corrected program. Additionally, we modified the algorithm with a view to improving the precision of the analysis.
Last updated:  2012-02-17
Ron was wrong, Whit is right
Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, Christophe Wachter
We performed a sanity check of public keys collected on the web. Our main goal was to test the validity of the assumption that different random choices are made each time keys are generated. We found that the vast majority of public keys work as intended. A more disconcerting finding is that two out of every one thousand RSA moduli that we collected offer no security. Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for ``multiple-secrets'' cryptosystems such as RSA is significantly riskier than for ``single-secret'' ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.
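The abstract does not spell out how the affected moduli fail, but the classic symptom of repeated randomness in RSA key generation is a prime factor shared between two moduli, which a gcd computation exposes immediately. A quadratic-time toy sketch written for this listing (surveys of millions of keys would use an asymptotically faster batch-gcd instead):

from math import gcd

def find_shared_factors(moduli):
    """Return (i, j, p) for every pair of RSA moduli sharing a prime p."""
    hits = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:      # shared prime => both moduli are factored
                hits.append((i, j, g))
    return hits

# Toy example with small "RSA" moduli sharing the prime 101.
print(find_shared_factors([101 * 103, 101 * 107, 109 * 113]))   # -> [(0, 1, 101)]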
Last updated:  2012-02-14
Randomized Partial Checking Revisited
Shahram Khazaei, Douglas Wikström
We study mix-nets with randomized partial checking (RPC) as proposed by Jakobsson, Juels, and Rivest (2002). RPC is a technique to verify the correctness of an execution both for Chaumian and homomorphic mix-nets. The idea is to relax the correctness and privacy requirements to achieve a more efficient mix-net. We identify serious issues in the original description of mix-nets with RPC and show how to exploit these to break both correctness and privacy, both for Chaumian and homomorphic mix-nets. Our attacks are practical and applicable to real world mix-net implementations, e.g., the Civitas and the Scantegrity voting systems.
Last updated:  2012-03-20
On the Security of Attribute Based Signature Schemes
S Sharmila Deva Selvi, Subhashini Venugopalan, C. Pandu Rangan
Attribute-based signatures allow users possessing a set of credentials to sign documents; although the credentials of the signer can be verified, signers can still continue to retain a reasonable degree of anonymity. In this work we discuss aspects regarding the security of some attribute-based signature schemes. In particular, we show multiple breaks in the existing threshold attribute-based signature schemes by Li et al. 2010. We first claim that the scheme is not secure, since it allows a signer possessing keys for some attributes to perform universal forgery and produce a signature that can satisfy the threshold for a set of attributes she does not possess. We then show a total break in the system, where the attacker can derive a part of the secret key and act as the key generating authority to generate private keys for other users. We also include examples of the attacks to highlight the flaws of this scheme, and other ABS schemes in Li and Kim 2008, Shahandashti et al. 2009, and Kumar et al. 2010; all of which employ the same or a similar key construct.
Last updated:  2012-02-10
A Pairing Based Strong Designated Verifier Signature Scheme without Random Oracles
Maryam Rajabzadeh Asaar, Mahmoud Salmasizadeh
In this study, a novel strong designated verifier signature scheme based on bilinear pairings with provable security in the standard model is proposed, while the existing ones are secure in the random oracle model. In 2007 and 2011, two strong designated verifier signature schemes in the standard model were proposed by Huang et al. and Zhang et al., respectively; in the former, the property of privacy of the signer’s identity is not proved, and the security of the latter is based on the security of a pseudorandom function. Our proposal can deal with the aforementioned drawbacks of the previous schemes. Furthermore, it satisfies non-delegatability for signature verification.
Last updated:  2012-03-10
Improved Security for Linearly Homomorphic Signatures: A Generic Framework
David Mandell Freeman
We propose a general framework that converts (ordinary) signature schemes having certain properties into linearly homomorphic signature schemes, i.e., schemes that allow authentication of linear functions on signed data. The security of the homomorphic scheme follows from the same computational assumption as is used to prove security of the underlying signature scheme. We show that the following signature schemes have the required properties and thus give rise to secure homomorphic signatures in the standard model: - The scheme of Waters (Eurocrypt 2005), secure under the computational Diffie-Hellman assumption in bilinear groups. - The scheme of Boneh and Boyen (Eurocrypt 2004, J. Cryptology 2008), secure under the $q$-strong Diffie-Hellman assumption in bilinear groups. - The scheme of Gennaro, Halevi, and Rabin (Eurocrypt 1999), secure under the strong RSA assumption. - The scheme of Hohenberger and Waters (Crypto 2009), secure under the RSA assumption. Our systems not only allow weaker security assumptions than were previously available for homomorphic signatures in the standard model, but also are secure in a model that allows a stronger adversary than in other proposed schemes. Our framework also leads to efficient linearly homomorphic signatures that are secure against our stronger adversary under weak assumptions (CDH or RSA) in the random oracle model; all previous proofs of security in the random oracle model break down completely when faced with our stronger adversary.
Last updated:  2012-10-29
Message Authentication, Revisited
Yevgeniy Dodis, Eike Kiltz, Krzysztof Pietrzak, Daniel Wichs
Traditionally, symmetric-key message authentication codes (MACs) are easily built from pseudorandom functions (PRFs). In this work we propose a wide variety of other approaches to building efficient MACs, without going through a PRF first. In particular, unlike deterministic PRF-based MACs, where each message has a unique valid tag, we give a number of probabilistic MAC constructions from various other primitives/assumptions. Our main results are summarized as follows: * We show several new probabilistic MAC constructions from a variety of general assumptions, including CCA-secure encryption, Hash Proof Systems and key-homomorphic weak PRFs. By instantiating these frameworks under concrete number theoretic assumptions, we get several schemes which are more efficient than just using a state-of-the-art PRF instantiation under the corresponding assumption. For example, we obtain elegant DDH-based MACs with much shorter keys than the quadratic-sized key of the Naor-Reingold PRF. We also show that several natural (probabilistic) digital signature schemes, such as those by Boneh-Boyen and Waters, can be significantly optimized when “downgraded” into a MAC, both in terms of their efficiency (e.g., no bilinear pairings) and security assumptions (e.g., standard CDH instead of bilinear CDH). * For probabilistic MACs, unlike deterministic ones, unforgeability against a chosen message attack (uf-cma) alone does not imply security if the adversary can additionally make verification queries (uf-cmva). In fact, a number of elegant constructions, such as recently constructed MACs based on Learning Parity with Noise (LPN) and some of the new MACs constructed in this work, are uf-cma but not uf-cmva secure by themselves. We give an efficient generic transformation from any uf-cma secure MAC which is "message-hiding" into a uf-cmva secure MAC. Applied to LPN-based MACs, this resolves the main open problem of Kiltz et al. from Eurocrypt '11. * While all our new MAC constructions immediately give efficient actively secure, two-round symmetric-key identification schemes, we also show a very simple, three-round actively secure identification protocol from any weak PRF. In particular, the resulting protocol is much more efficient than the trivial approach of building a regular PRF from a weak PRF.
Last updated:  2012-05-31
Key recycling in authentication
Christopher Portmann
In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a universally composable framework. It turns out that the above argument is insufficient: information about the hash function is in fact leaked in every round to the adversary, and after a bounded, finite number of rounds it is completely known. We show however that this leak is very small, and Wegman and Carter's protocol is still $\epsilon$-secure, if $\epsilon$-almost strongly universal hash functions are used. This implies that the secret key corresponding to the choice of hash function can be recycled for any task with no additional error beyond this $\epsilon$. We illustrate this by applying it to quantum key distribution (QKD): if the same hash function is recycled to authenticate the classical communication in every round of a QKD protocol, and used $\ell$ times per round, the total error after $r$ rounds is upper bounded by $r(\ell\epsilon+\epsilon')$, where $\epsilon'$ is the error of one round of QKD given an authentic channel.
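A minimal toy instantiation of the Wegman-Carter pattern discussed above, assuming the standard strongly universal hash family $h_{a,b}(m) = am + b \bmod p$ and a fresh pad per message (an illustration written for this listing, not the paper's construction):

import secrets

P = (1 << 61) - 1                      # a prime; messages are integers mod P

def keygen():
    """Reusable hash key (a, b) for the strongly universal family h(m) = a*m + b mod P."""
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(hash_key, pad, message):
    a, b = hash_key
    return (a * message + b + pad) % P  # hash of the message, masked by the fresh pad

def verify(hash_key, pad, message, t):
    return tag(hash_key, pad, message) == t

key = keygen()
pads = [secrets.randbelow(P) for _ in range(3)]   # one fresh one-time pad per message
tags = [tag(key, pads[i], m) for i, m in enumerate([11, 22, 33])]
assert verify(key, pads[0], 11, tags[0])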
Last updated:  2013-07-05
Anonymous Constant-Size Ciphertext HIBE From Asymmetric Pairings
Somindu C. Ramanna, Palash Sarkar
We present a new hierarchical identity based encryption (HIBE) scheme with constant-size ciphertext that can be implemented using the most efficient bilinear pairings, namely, Type-3 pairings. In addition to being fully secure, our scheme is anonymous. The HIBE is obtained by extending an asymmetric pairing based IBE scheme due to Lewko and Waters. The extension uses the approach of Boneh-Boyen-Goh to obtain constant-size ciphertexts and that of Boyen-Waters for anonymity. The security argument is based on the dual-system technique of Waters. The resulting HIBE is the only known scheme using Type-3 pairings achieving constant-size ciphertext, security against adaptive-identity attacks and anonymity under static assumptions without random oracles.
Last updated:  2012-02-06
A New Pseudorandom Generator from Collision-Resistant Hash Functions
Alexandra Boldyreva, Virendra Kumar
We present a new hash-function-based pseudorandom generator (PRG). Our PRG is reminiscent of the classical constructions iterating a function on a random seed and extracting Goldreich-Levin hardcore bits at each iteration step. The latest PRG of this type that relies on reasonable assumptions (regularity and one-wayness) is due to Haitner et al. In addition to a regular one-way function, each iteration in their ``randomized iterate'' scheme uses a new pairwise-independent function, whose descriptions are part of the seed of the PRG. Our construction does not use pairwise-independent functions and is thus more efficient, requiring less computation and a significantly shorter seed. Our scheme's security relies on the standard notions of collision-resistance and regularity of the underlying hash function, where the collision-resistance is required to be {\em exponential}. In particular, any polynomial-time adversary should have less than $2^{-n/2}$ probability of finding collisions, where $n$ is the output size of the hash function. We later show how to relax the regularity assumption by introducing a new notion that we call {\em worst-case regularity}, which lower bounds the size of preimages of different elements from the range (while the common regularity assumption requires all such sets to be of equal size). Unlike previous results, we provide a concrete security statement.
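For intuition, here is a bare-bones sketch of the classical "iterate a function and extract a Goldreich-Levin hardcore bit" pattern that the abstract refers to (a simplification for illustration only; the paper's actual construction and security requirements differ in the details):

import hashlib, secrets

N_BYTES = 32                                    # state size = hash output size

def gl_bit(x: bytes, r: bytes) -> int:
    """Goldreich-Levin hardcore bit: inner product of x and r over GF(2)."""
    acc = 0
    for xb, rb in zip(x, r):
        acc ^= xb & rb
    return bin(acc).count("1") & 1              # parity of the AND of the two strings

def prg(seed: bytes, r: bytes, nbits: int):
    state, out = seed, []
    for _ in range(nbits):
        state = hashlib.sha256(state).digest()  # iterate the hash function
        out.append(gl_bit(state, r))            # extract one hardcore bit per step
    return out

seed, r = secrets.token_bytes(N_BYTES), secrets.token_bytes(N_BYTES)
print(prg(seed, r, 16))                         # 16 pseudorandom bits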
Last updated:  2012-02-08
Cryptanalysis of Mun et al.'s anonymous authentication scheme for roaming service in global mobility networks
Hongbin Tang, Xinsong Liu
An anonymous user authentication scheme allows the user and the remote server to authenticate each other, and should preserve user anonymity. In 2011, Mun et al. proposed an enhanced secure anonymous user authentication scheme for roaming service in global mobility networks. They claimed that their scheme was more secure and efficient than others. However, we demonstrate that their scheme is vulnerable to insider, impersonation, server spoofing, and denial-of-service attacks, and also suffers from efficiency and password-related issues. Moreover, it cannot provide user anonymity. Thus it is not feasible for real-life implementation.
Last updated:  2012-04-07
On the performance of certain Private Set Intersection protocols
Emiliano De Cristofaro, Gene Tsudik
Private Set Intersection (PSI) is a useful cryptographic primitive that allows two parties (client and server) to interact based on their respective (private) input sets, in such a way that client obtains nothing other than the set intersection, while server learns nothing beyond client set size. This paper considers one PSI construct from [DT10] and reports on its optimized implementation and performance evaluation. Several key implementation choices that significantly impact real-life performance are identified and a comprehensive experimental analysis (including micro-benchmarking, with various input sizes) is presented. Finally, it is shown that our optimized implementation of this RSA-OPRF-based PSI protocol markedly outperforms the one presented in [HEK12].
Last updated:  2012-02-06
Beating Shannon requires BOTH efficient adversaries AND non-zero advantage
Yevgeniy Dodis
In this note we formally show a "folklore" (but, to the best of our knowledge, not documented) fact that in order to beat the famous Shannon lower bound on key length for one-time-secure encryption, one must *simultaneously* restrict the attacker to be efficient, and also allow the attacker to break the system with some non-zero (i.e., negligible) probability. Although the result is "folklore", we were unable to find a clean and simple proof of it, despite asking several experts in the field. We hope that cryptography instructors will find this note useful when justifying the transition from information-theoretic to computational cryptography. We note that our proof cleanly handles *probabilistic* encryption, as well as a small *decryption error*.
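(For reference, and not part of the note itself: the Shannon bound in question says that perfect secrecy forces the key to carry at least as much entropy as the message,

$$ H(K) \;\ge\; H(M), $$

so for uniformly distributed messages the key must be at least as long as the message; the point of the note is that beating this bound requires restricting to efficient attackers and tolerating a non-zero success probability at the same time.)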
Last updated:  2012-02-06
Identity-based Encryption with Efficient Revocation
Alexandra Boldyreva, Vipul Goyal, Virendra Kumar
Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However, in the setting of IBE, there has been little work on revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well -- as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and binary tree data structure, and is provably secure.
Last updated:  2012-02-08
Eavesdropping on Satellite Telecommunication Systems
Benedikt Driessen
While communication infrastructures rapidly intertwine with our daily lives, public understanding of underlying technologies and privacy implications is often limited by their closed-source nature. Lacking the funding and resources of corporations and the intelligence community, developing and expanding this understanding is a sometimes tedious, but nonetheless important process. In this sense, we document how we have decrypted our own communication in the Thuraya satellite network. We have used open-source software to build on recent work which reverse-engineered and cryptanalyzed both stream ciphers currently used in the competing satellite communication standards GMR-1 and GMR-2. To break Thuraya’s encryption (which implements the GMR-1 standard) in a real-world scenario, we have enhanced an existing ciphertext-only attack. We have used common and moderately expensive equipment to capture a live call session and executed the described attack. We show that, after computing for less than an hour on regular PC hardware, we were able to obtain the session key from a handful of speech data frames. This effectively allows decryption of the entire session, thus demonstrating that the Thuraya system (and probably also SkyTerra and TerreStar, who are currently implementing GMR-1) is weak at protecting privacy.
Last updated:  2012-02-06
Investigating the Potential of Custom Instruction Set Extensions for SHA-3 Candidates on a 16-bit Microcontroller Architecture
Jeremy Constantin, Andreas Burg, Frank K. Gurkaynak
In this paper, we investigate the benefit of instruction set extensions for software implementations of all five SHA-3 candidates. To this end, we start from optimized assembly code for a common 16-bit microcontroller instruction set architecture. By themselves, these implementations provide reference for complexity of the algorithms on 16-bit architectures, commonly used in embedded systems. For each algorithm, we then propose suitable instruction set extensions and implement the modified processor core. We assess the gains in throughput, memory consumption, and the area overhead. Our results show that with less than 10% additional area, it is possible to increase the execution speed on average by almost 40%, while reducing memory requirements on average by more than 40%. In particular, the Grøstl algorithm, which was one of the slowest algorithms in previous reference implementations, ends up being the fastest implementation by some margin, once minor (but dedicated) instruction set extensions are taken into account.
Last updated:  2012-02-06
2-Dimension Sums: Distinguishers Beyond Three Rounds of RIPEMD-128 and RIPEMD-160
Yu Sasaki, Lei Wang
This paper presents differential-based distinguishers against the ISO standard hash functions RIPEMD-128 and RIPEMD-160. The compression functions of RIPEMD-128/-160 adopt the double-branch structure, which updates a chaining variable by computing two functions and merging their outputs. Due to the double size of the internal state and the difficulty of controlling two functions simultaneously, only a few results had been published before. In this paper, second-order differential paths are constructed on reduced RIPEMD-128 and -160. This leads to a practical 4-sum attack on 47 steps (out of 64 steps) of RIPEMD-128 and 40 steps (out of 80 steps) of RIPEMD-160. We then extend the distinguishing property from the 4-sum to other properties, which we call \emph{a 2-dimension sum} and \emph{a partial 2-dimension sum}. As a result, the practical partial 2-dimension sum is generated on 48 steps of RIPEMD-128 and 42 steps of RIPEMD-160, with a complexity of $2^{35}$ and $2^{36}$, respectively. Theoretically, $2$-dimension sums are generated faster than the exhaustive search up to 52 steps of RIPEMD-128 and 51 steps of RIPEMD-160, with a complexity of $2^{101}$ and $2^{158}$, respectively. The practical attacks are implemented, and examples of generated (partial) 2-dimension sums are presented.
Last updated:  2012-02-01
Designing Integrated Accelerator for Stream Ciphers with Structural Similarities
Sourav Sen Gupta, Anupam Chattopadhyay, Ayesha Khalid
To date, the basic idea for implementing stream ciphers has been confined to individual standalone designs. In this paper, we introduce the notion of integrated implementation of multiple stream ciphers within a single architecture, where the goal is to achieve area and throughput efficiency by exploiting the structural similarities of the ciphers at an algorithmic level. We present two case studies to support our idea. First, we propose the merger of the SNOW 3G and ZUC stream ciphers, which constitute a part of the 3GPP LTE-Advanced security suite. We propose HiPAcc-LTE, a high performance integrated design that combines the two ciphers in hardware, based on their structural similarities. The integrated architecture reduces the area overhead significantly compared to two distinct cores, and also provides almost double the throughput in terms of keystream generation, compared with the state-of-the-art implementations of the individual ciphers. As our second case study, we present IntAcc-RCHC, an integrated accelerator for the stream ciphers RC4 and HC-128. We show that the integrated accelerator achieves a slight reduction in area without any loss in throughput compared to our standalone implementations. We also achieve at least 1.5 times better throughput compared to general purpose processors. The long-term vision of this hardware integration approach for cryptographic primitives is to build a flexible core supporting multiple designs having similar algorithmic structures.
Last updated:  2012-02-01
Incremental Deterministic Public-Key Encryption
Ilya Mironov, Omkant Pandey, Omer Reingold, Gil Segev
Motivated by applications in large storage systems, we initiate the study of incremental deterministic public-key encryption. Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O'Neill (CRYPTO '07), provides a realistic alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security for low-entropy plaintext distributions, and Bellare et al. demonstrated that a strong notion of security can in fact be realized for relatively high-entropy plaintext distributions. In order to achieve a meaningful level of security, a deterministic encryption algorithm should typically be used for encrypting rather long plaintexts to ensure a sufficient amount of entropy. This requirement may be at odds with efficiency constraints, such as communication complexity and computation complexity in the presence of small updates. Thus, a highly desirable property of deterministic encryption algorithms is incrementality: small changes in the plaintext translate into small changes in the corresponding ciphertext. We present a framework for modeling the incrementality of deterministic public-key encryption. Within our framework we propose two schemes, which we prove to enjoy an optimal tradeoff between their security and incrementality up to small polylogarithmic factors. Our first scheme is a generic method which can be based on any deterministic public-key encryption scheme, and in particular, can be instantiated with any semantically-secure (randomized) public-key encryption scheme in the random oracle model. Our second scheme is based on the Decisional Diffie-Hellman assumption in the standard model. The approach underpinning our schemes is inspired by the fundamental ``sample-then-extract'' technique due to Nisan and Zuckerman (JCSS '96) and refined by Vadhan (J. Cryptology '04), and by the closely related notion of ``locally-computable extractors'' due to Vadhan. Most notably, whereas Vadhan used such extractors to construct private-key encryption schemes in the bounded-storage model, we show that techniques along these lines can also be used to construct incremental public-key encryption schemes.
Last updated:  2012-02-01
Modifying Boolean Functions to Ensure Maximum Algebraic Immunity
Konstantinos Limniotis, Nicholas Kolokotronis, Nicholas Kalouptsidis
The algebraic immunity of cryptographic Boolean functions is studied in this paper. Proper modifications of functions achieving maximum algebraic immunity are proved, in order to yield new functions of also maximum algebraic immunity. It is shown that the derived results apply to known classes of functions. Moreover, two new efficient algorithms to produce functions of guaranteed maximum algebraic immunity are developed, which further extend and generalize known constructions of functions with maximum algebraic immunity.
Last updated:  2015-01-28
Signature Schemes Secure against Hard-to-Invert Leakage
Sebastian Faust, Carmit Hazay, Jesper Buus Nielsen, Peter Sebastian Nordholt, Angela Zottarel
Side-channel attacks allow the adversary to gain partial knowledge of the secret key when cryptographic protocols are implemented in real-world hardware. The goal of leakage resilient cryptography is to design cryptosystems that withstand such attacks. In the auxiliary input model an adversary is allowed to see a computationally hard-to-invert function of the secret key. The auxiliary input model weakens the bounded leakage assumption commonly made in leakage resilient cryptography as the hard-to-invert function may information-theoretically reveal the entire secret key. In this work, we propose the first constructions of digital signature schemes that are secure in the auxiliary input model. Our main contribution is a digital signature scheme that is secure against chosen message attacks when given any exponentially hard-to-invert function of the secret key. As a second contribution, we construct a signature scheme that achieves security for random messages assuming that the adversary is given a polynomial-time hard-to-invert function (where both the challenge as well as the signatures seen prior to that are computed on random messages). Here, polynomial-hardness is required even when given the entire public-key. We further show that such signature schemes readily give us auxiliary input secure identification schemes.
Last updated:  2012-01-30
PSCPA: Patient Self-controllable Privacy-preserving Cooperative Authentication in Distributed m-Healthcare Systems
Jun Zhou, Zhenfu Cao
Distributed m-healthcare systems significantly facilitate efficient, high-quality patient treatment, while bringing about the challenge of simultaneously keeping the personal health information confidential and preserving the patients' identity privacy. This renders many existing data access control and anonymous authentication schemes inefficient in distributed m-healthcare systems. To solve the problem, in this paper, a novel authorized accessible privacy model (AAPM) is established. Patients can authorize physicians by setting an access tree supporting flexible threshold predicates. Then, based on it, a patient self-controllable privacy-preserving cooperative authentication scheme (PSCPA) realizing three levels of security and privacy requirements in distributed m-healthcare systems is proposed. The directly authorized physicians can both decipher the personal health information and authenticate patients' identities by satisfying the access tree with their attribute sets. Due to the indistinguishability of the transcript simulation from the patients and physicians for the indirectly authorized physicians, they can only decipher the personal health information rather than authenticate patients' identities. The unauthorized persons can obtain neither. Moreover, PSCPA is extended to handle emergency cases and to resist Denial-of-Service (DoS) attacks. Finally, the formal security proof and simulation results show our scheme far outperforms the previous ones in terms of computational, communication and storage overhead.
Last updated:  2012-01-30
A novel Group Key Transfer Protocol
Chingfang Hsu, Bing Zeng, Qi Cheng, Guohua Cui
Group key transfer protocols depend on a mutually trusted key generation center (KGC) to secretly transport the group key to all group members. This approach requires that a trusted server be set up, and it incurs communication overhead costs. In addition, the existing group key transfer protocols based on secret sharing all use threshold schemes that need to compute an interpolating polynomial to encrypt and decrypt the secret group key, which increases the computational complexity of the system. In this paper, we first present a novel group key transfer protocol without an online KGC, which is based on DH key agreement and a perfect linear secret sharing scheme (LSSS). The confidentiality of the group key transfer phase of this protocol is information-theoretically secure, which is ensured by this LSSS. Furthermore, this protocol can resist potential attacks and also reduce the overhead of system implementation. The goals and security threats of our proposed group key transfer protocol are analyzed in detail.
Last updated:  2012-06-19
Key Length Estimation of Pairing-based Cryptosystems using $\eta_T$ Pairing
Naoyuki Shinohara, Takeshi Shimoyama, Takuya Hayashi, Tsuyoshi Takagi
The security of pairing-based cryptosystems depends on the difficulty of the discrete logarithm problem (DLP) over certain types of finite fields. One of the most efficient algorithms for computing a pairing is the $\eta_T$ pairing over supersingular curves on finite fields whose characteristic is $3$. Indeed many high-speed implementations of this pairing have been reported, and it is an attractive candidate for practical deployment of pairing-based cryptosystems. The embedding degree of the $\eta_T$ pairing is 6, so we deal with the difficulty of a DLP over the finite field $ GF(3^{6n})$, where the function field sieve (FFS) is known as the asymptotically fastest algorithm of solving it. Moreover, several efficient algorithms are employed for implementation of the FFS, such as the large prime variation. In this paper, we estimate the time complexity of solving the DLP for the extension degrees $n=97,163, 193,239,313,353,509$, when we use the improved FFS. To accomplish our aim, we present several new computable estimation formulas to compute the explicit number of special polynomials used in the improved FFS. Our estimation contributes to the evaluation for the key length of pairing-based cryptosystems using the $\eta_T$ pairing.
Last updated:  2012-06-04
A NEW DEDICATED CRYPTOGRAPHIC HASH FUNCTION
Norziana Jamil, Ramlan Mahmood, Muhammad Reza Z'aba, Nur Izura Udzir, Zuriati Ahmad Zukarnaen
Recent progress in the cryptanalysis of cryptographic hash functions has shown that most of the hash functions based on the design principles of MD4 are susceptible to differential attacks. This paper describes a new 256-bit hash function, STITCH-256, which is based on parallel branches having a stronger compression function. It is designed to have higher security than the MD family and its variants. The performance of the new hash function is evaluated and compared with SHA-256 and FORK-256. It is shown that STITCH-256 exhibits the desired cryptographic properties and that its compression function is comparable with those of SHA-256 and FORK-256.
Last updated:  2012-01-29
Single-block collision attack on MD5
Marc Stevens
In 2010, Tao Xie and Dengguo Feng [ePrint 2010/643] constructed the first single-block collision for MD5 consisting of two 64-byte messages that have the same MD5 hash. Details of their attack, developed using what they call an evolutionary approach, have not been disclosed ``for security reasons''. Instead they have posted a challenge to the cryptology community to find a new different single-block collision attack for MD5. This paper answers that challenge by presenting a single-block collision attack based on other message differences together with an example colliding message pair. The attack is based on a new collision finding algorithm that exploits the low number of bitconditions in the first round. It uses a new way to choose message blocks that satisfy bitconditions up to step 22 and additionally uses three known tunnels to correct bitconditions up to step 25. The attack has an average runtime complexity equivalent to $2^{49.8}$ calls to MD5's compression function.
Last updated:  2012-01-29
Security Analysis of a Multi-Factor Authenticated Key Exchange Protocol
Feng Hao, Dylan Clarke
This paper shows several security weaknesses of a Multi-Factor Authenticated Key Exchange (MK-AKE) protocol, proposed by Pointcheval and Zimmer at ACNS'08. The Pointcheval-Zimmer scheme was designed to combine three authentication factors in one system, including a password, a secure token (that stores a private key) and biometrics. In a formal model, Pointcheval and Zimmer formally proved that an attacker had to break all three factors to win. However, the formal model only considers the threat that an attacker may impersonate the client; it however does not discuss what will happen if the attacker impersonates the server. We fill the gap by analyzing the case of the server impersonation, which is a realistic threat in practice. We assume that an attacker has already compromised the password, and we then present two further attacks: in the first attack, an attacker is able to steal a fresh biometric sample from the victim without being noticed; in the second attack, he can discover the victim's private key based on the Chinese Remainder theorem. Both attacks have been experimentally verified. In summary, an attacker actually only needs to compromise a single password factor in order to break the entire system. We also discuss the deficiencies in the Pointcheval-Zimmer formal model and countermeasures to our attacks.
Last updated:  2012-01-29
Cryptanalysis of the CHES 2009/2010 Random Delay Countermeasure
François Durvaux, Mathieu Renauld, François-Xavier Standaert, Loic van Oldeneel tot Oldenzeel, Nicolas Veyrat-Charvillon
Inserting random delays in cryptographic implementations is often used as a countermeasure against side-channel attacks. Most previous works on the topic focus on improving the statistical distribution of these delays. For example, efficient random delay generation algorithms have been proposed at CHES 2009/2010. These solutions increase security against attacks that solve the lack of synchronization between different leakage traces by integrating them. In this paper, we demonstrate that integration may not be the best tool to evaluate random delay insertions. For this purpose, we first describe different attacks exploiting pattern recognition techniques and Hidden Markov Models. Using these tools, we succeed in cryptanalyzing a (straightforward) implementation of the CHES 2009/2010 proposal in an Atmel microcontroller, with the same data complexity as an unprotected implementation of the AES Rijndael. In other words, we completely cancel the countermeasure in this case. Next, we show that our cryptanalysis tools are remarkably robust to attack improved variants of the countermeasure, e.g. with additional noise or irregular dummy operations. We also exhibit that the attacks remain applicable in a non-profiled adversarial scenario. Overall, these results suggest that the use of random delays may not be effective for protecting small embedded devices against side-channel leakage. They also confirm the need of worst-case analysis in physical security evaluations.
Last updated:  2012-04-22
Some results on $q$-ary bent functions
Deep Singh, Maheshanand Bhaintwal, Brajesh Kumar Singh
Kumar et al. (1985) extended the notion of classical bent Boolean functions to the generalized setup on $\mathbb{Z}_q^n$. They provided an analogue of classical Maiorana-McFarland type bent functions. In this paper, we study the crosscorrelation of a subclass of such generalized Maiorana-McFarland (GMMF) type bent functions. We provide a construction of quaternary ($q = 4$) bent functions on $n+1$ variables in terms of their subfunctions on $n$ variables. Analogues of the sum-of-squares indicator and the absolute indicator of crosscorrelation of Boolean functions are defined in the generalized setup. Further, $q$-ary functions are studied in terms of these indicators and some upper bounds on these indicators are obtained. Finally, we provide some constructions of balanced quaternary functions with high nonlinearity under the Lee metric.
Last updated:  2012-01-29
Efficient Leakage-free Authentication of Trees, Graphs and Forests
Ashish Kundu, Mikhail Atallah, Elisa Bertino
Leakage-free authentication of trees and graphs has been studied in the literature. Such schemes have several practical applications, especially in the cloud computing area. In this paper, we propose an authentication scheme that computes only one signature (optimal). Our scheme is not only super-efficient in the number of signatures it computes and in its runtime, but also is highly versatile -- it can be applied not only to trees, but also to graphs and forests (disconnected trees and graphs). While achieving such efficiency and versatility, our scheme also achieves the desired security -- leakage-free authentication of data objects represented as trees, graphs and forests. This is achieved by another novel scheme that we propose in this paper -- a secure naming scheme for nodes of such data structures. Such a scheme assigns "secure names" to nodes such that these secure names can be used to verify the order between the nodes efficiently without leaking information about other nodes. As far as we know, our scheme is the first such scheme in the literature that is optimal in its efficiency, supports two important security concerns -- authenticity and leakage-freeness (privacy-preserving/confidentiality) -- and is versatile in that it applies to trees, graphs as well as forests. We have carried out complexity as well as experimental analysis of this scheme, which corroborates its performance.
Last updated:  2012-01-30
Key-Alternating Ciphers in a Provable Setting: Encryption Using a Small Number of Public Permutations
Andrey Bogdanov, Lars R. Knudsen, Gregor Leander, Francois-Xavier Standaert, John Steinberger, Elmar Tischhauser
This paper considers---for the first time---the concept of key-alternating ciphers in a provable security setting. Key-alternating ciphers can be seen as a generalization of a construction proposed by Even and Mansour in 1991. This construction builds a block cipher $PX$ from an $n$-bit permutation $P$ and two $n$-bit keys $k_0$ and $k_1$, setting $PX_{k_0,k_1}(x)=k_1\oplus P(x\oplus k_0)$. Here we consider a (natural) extension of the Even-Mansour construction with $t$ permutations $P_1,\ldots,P_t$ and $t+1$ keys, $k_0,\ldots, k_t$. We demonstrate in a formal model that such a cipher is secure in the sense that an attacker needs to make at least $2^{2n/3}$ queries to the underlying permutations to be able to distinguish the construction from random. We argue further that the bound is tight for $t=2$ but there is a gap in the bounds for $t>2$, which is left as an open and interesting problem. Additionally, in terms of statistical attacks, we show that the distribution of Fourier coefficients for the cipher over all keys is close to ideal. Lastly, we define a practical instance of the construction with $t=2$ using AES referred to as AES$^2$. Any attack on AES$^2$ with complexity below $2^{85}$ will have to make use of AES with a fixed known key in a non-black box manner. However, we conjecture its security is $2^{128}$.
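A toy sketch of the iterated Even-Mansour construction described above, with $t = 2$: $E_{k_0,k_1,k_2}(x) = k_2 \oplus P_2(k_1 \oplus P_1(x \oplus k_0))$. The sketch below (written for this listing) uses small seeded random permutations purely for illustration; a concrete instance such as the paper's AES$^2$ would use AES with fixed keys as the public permutations.

import random

N = 16                                            # toy block size: 16-bit blocks

def random_permutation(seed):
    """A fixed, public toy permutation of {0,...,2^N - 1} (not cryptographic)."""
    rng = random.Random(seed)
    table = list(range(1 << N))
    rng.shuffle(table)
    return table

P1, P2 = random_permutation(1), random_permutation(2)    # the public permutations

# Inverse tables, so decryption can undo the permutations.
INV1 = [0] * (1 << N)
INV2 = [0] * (1 << N)
for i in range(1 << N):
    INV1[P1[i]] = i
    INV2[P2[i]] = i

def encrypt(keys, x):
    k0, k1, k2 = keys
    y = P1[x ^ k0]            # first key addition, first permutation
    y = P2[y ^ k1]            # second key addition, second permutation
    return y ^ k2             # final key addition

def decrypt(keys, y):
    k0, k1, k2 = keys
    x = INV2[y ^ k2] ^ k1
    return INV1[x] ^ k0

keys = tuple(random.randrange(1 << N) for _ in range(3))  # k0, k1, k2 (toy only)
assert decrypt(keys, encrypt(keys, 0x1234)) == 0x1234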
Last updated:  2012-02-16
Automatic Quantification of Cache Side-Channels
Boris Köpf, Laurent Mauborgne, Martin Ochoa
The latency gap between caches and main memory has been successfully exploited for recovering sensitive input to programs, such as cryptographic keys from implementation of AES and RSA. So far, there are no practical general-purpose countermeasures against this threat. In this paper we propose a novel method for automatically deriving upper bounds on the amount of information about the input that an adversary can extract from a program by observing the CPU's cache behavior. At the heart of our approach is a novel technique for efficient counting of concretizations of abstract cache states that enables us to connect state-of-the-art techniques for static cache analysis and quantitative information-flow. We implement our counting procedure on top of the AbsInt TimingExplorer, one of the most advanced engines for static cache analysis. We use our tool to perform a case study where we derive upper bounds on the cache leakage of a 128-bit AES executable on an ARM processor with a realistic cache configuration. We also analyze this implementation with a commonly suggested (but until now heuristic) countermeasure applied, obtaining a formal account of the corresponding increase in security.
Last updated:  2012-01-29
A note on hyper-bent functions via Dillon-like exponents
Sihem Mesnager, Jean-Pierre Flori
This note is devoted to hyper-bent functions with multiple trace terms (including binomial functions) via Dillon-like exponents. We show how the approach developed by Mesnager to extend the Charpin–Gong family and subsequently extended by Wang et al. fits in a much more general setting. To this end, we first explain how the original restriction for Charpin–Gong criterion can be weakened before generalizing the Mesnager approach to arbitrary Dillon-like exponents. Afterward, we tackle the problem of devising infinite families of extension degrees for which a given exponent is valid and apply these results not only to reprove straightforwardly the results of Mesnager and Wang et al., but also to characterize the hyper-bentness of new infinite classes of Boolean functions.
Last updated:  2012-01-29
Counterexamples to Hardness Amplification Beyond Negligible
Yevgeniy Dodis, Abhishek Jain, Tal Moran, Daniel Wichs
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the ``direct product''; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. ``parallel repetition'') of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly-verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability $> \frac{1}{2}$, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to $2^{-n}$ probability, or at least to some fixed/known negligible such as $n^{-\log n}$ in the security parameter $n$, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC '95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function $\epsilon(n)$, we instantiate these primitives so that the direct product can always be broken with probability $\epsilon(n)$, no matter how many copies we take.
Last updated:  2012-01-29
An error in "On a new formal proof model for RFID location privacy"
Da-Zhi Sun
In Information Processing Letters 110 (2) (2009) 57-61, Deursen and Radomirović evaluated five formal RFID privacy models. One main result is that Ha et al.’s RFID privacy model is incorrect. The supporting fact is that a constant-response protocol cannot pass the test of Ha et al.’s RFID privacy model. However, we demonstrate that the constant-response protocol is artificial, and the corresponding result is therefore unwarranted. This means that Ha et al.’s RFID privacy model is not a trivial model. Hence, more effort can still be made to improve Ha et al.’s RFID privacy model.
Last updated:  2012-02-03
Fault Analysis of the KATAN Family of Block Ciphers
Shekh Faisal Abdul-Latip, Mohammad Reza Reyhanitabar, Willy Susilo, Jennifer Seberry
In this paper, we investigate security of the KATAN family of block ciphers against differential fault attacks. KATAN consists of three variants with 32, 48 and 64-bit block sizes, called KATAN32, KATAN48 and KATAN64, respectively. All three variants have the same key length of 80 bits. We assume a single-bit fault injection model where the adversary is supposed to be able to corrupt a single random bit of the internal state of the cipher and this fault induction process can be repeated (by resetting the cipher); i.e., the faults are transient rather than permanent. First, we show how to identify the exact position of faulty bits within the internal state by precomputing difference characteristics for each bit position at a given round and comparing these characteristics with ciphertext differences (XOR of faulty and non-faulty ciphertexts) during the online phase of the attack. Then, we determine suitable rounds for effective fault inductions by analyzing distributions of low-degree (mainly, linear and quadratic) polynomial equations obtainable using the cube and extended cube attack techniques. The complexity of our attack on KATAN32 is $2^{59}$ computations and about 115 fault injections. For KATAN48 and KATAN64, the attack requires $2^{55}$ computations (for both variants), while the required number of fault injections is 211 and 278, respectively.
Last updated:  2012-01-22
On the Exact Security of Schnorr-Type Signatures in the Random Oracle Model
Yannick Seurin
The Schnorr signature scheme has been known to be provably secure in the Random Oracle Model under the Discrete Logarithm (DL) assumption since the work of Pointcheval and Stern (EUROCRYPT '96), at the price of a very loose reduction though: if there is a forger making at most $q_h$ random oracle queries, and forging signatures with probability $\varepsilon_F$, then the Forking Lemma tells that one can compute discrete logarithms with constant probability by rewinding the forger $\mathcal{O}(q_h/\varepsilon_F)$ times. In other words, the security reduction loses a factor $\mathcal{O}(q_h)$ in its time-to-success ratio. This is rather unsatisfactory since $q_h$ may be quite large. Yet Paillier and Vergnaud (ASIACRYPT 2005) later showed that under the One More Discrete Logarithm (OMDL) assumption, any \emph{algebraic} reduction must lose a factor at least $q_h^{1/2}$ in its time-to-success ratio. This was later improved by Garg~\emph{et al.} (CRYPTO 2008) to a factor $q_h^{2/3}$. Up to now, the gap between $q_h^{2/3}$ and $q_h$ remained open. In this paper, we show that the security proof using the Forking Lemma is essentially the best possible. Namely, under the OMDL assumption, any algebraic reduction must lose a factor $f(\varepsilon_F)q_h$ in its time-to-success ratio, where $f\le 1$ is a function that remains close to 1 as long as $\varepsilon_F$ is noticeably smaller than 1. Using a formulation in terms of expected-time and queries algorithms, we obtain an optimal loss factor $\Omega(q_h)$, independently of $\varepsilon_F$. These results apply to other signature schemes based on one-way group homomorphisms, such as the Guillou-Quisquater signature scheme.
Last updated:  2012-01-22
A First-Order Leak-Free Masking Countermeasure
Houssem MAGHREBI, Emmanuel PROUFF, Sylvain GUILLEY, Jean-Luc DANGER
One protection of cryptographic implementations against side-channel attacks is the masking of the sensitive variables. In this article, we present a first-order masking that does not leak information when the registers change values according to some specific (and realistic) rules. This countermeasure applies to all devices that leak a function of the distance between consecutive values of internal variables. In particular, we illustrate its practicality on both hardware and software implementations. Moreover, we introduce a framework to evaluate the soundness of the new first-order masking when the leakage slightly deviates from the rules involved to design the countermeasure. It reveals that the countermeasure remains more efficient than the state-of-the-art first-order masking if the deviation from the ideal model is equal to a few tens of percents, and that it is as good as a first-order Boolean masking even if the deviation is $50$\%.
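For readers unfamiliar with masking, here is generic first-order Boolean masking, shown only to fix ideas (it is not the paper's leak-free countermeasure, which additionally constrains how register values transition): every sensitive byte is split into two shares so that neither share alone depends on the secret.

import secrets

def mask(x: int):
    """Split a byte x into two shares (x XOR m, m) with a fresh random mask m."""
    m = secrets.randbits(8)
    return x ^ m, m

def masked_xor(share_a, share_b):
    """XOR of two masked bytes, computed without ever recombining a secret."""
    (xa, ma), (xb, mb) = share_a, share_b
    return xa ^ xb, ma ^ mb

def unmask(share):
    xm, m = share
    return xm ^ m

a, b = mask(0x3A), mask(0xC5)
assert unmask(masked_xor(a, b)) == 0x3A ^ 0xC5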
Last updated:  2012-02-01
Breaking the provably secure SAKE-C authenticated key exchange protocol with Extended Key Compromise Impersonation (E-KCI) Attack
Ali Mackvandi, Maryam Saeed, Mansour Naddafiun
Authenticated Key Exchange (AKE) protocols allow two or more entities to agree on a common session key in an authenticated manner, where this key is then used to encrypt the subsequent communication. In 2010, Zhao et al. proposed a provably secure authenticated key exchange protocol under the CDH assumption (referred to as SAKE and SAKE-C). Although the security of the proposed protocol is proved in a formal model, that model does not consider all the prerequisite queries; in this letter we show that the supposedly secure protocol is vulnerable to the Extended Key Compromise Impersonation (E-KCI) attack, a practical flaw first signaled by Tang et al. in 2011. Unfortunately, most AKE and PAKE protocols appear to be vulnerable to the E-KCI attack, a recently introduced flaw in this field; even one of the most famous, secure, and efficient PAKE protocols, the 3-pass HMQV protocol, suffers from this vulnerability.
Last updated:  2012-01-20
Decoding Random Binary Linear Codes in $2^{n/20}$: How $1+1=0$ Improves Information Set Decoding
Anja Becker, Antoine Joux, Alexander May, Alexander Meurer
Decoding random linear codes is a well studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to $2^{0.0556n}$. Using {\it representations} this bound was improved to $2^{0.0537n}$ by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time $2^{0.0494n}$.
Last updated:  2012-01-20
A new remote data integrity checking scheme for cloud storage
Xiangtao Yan, Yifa Li
Cloud storage services enable users to enjoy high-capacity and high-quality storage with less overhead, but they also bring many potential threats, for example, to data integrity and data availability. In this paper, we propose a new remote integrity and availability checking scheme for cloud storage. This new scheme can check the integrity and availability of massive files with less storage, computation and communication resources. The new scheme also supports dynamic data updates, public verifiability and privacy preservation.
Last updated:  2012-01-18
Variants of Waters' Dual-System Primitives Using Asymmetric Pairings
Somindu C. Ramanna, Sanjit Chatterjee, Palash Sarkar
Waters, in 2009, introduced an important technique, called dual-system encryption, to construct identity-based encryption (IBE) and related schemes. The resulting IBE scheme was described in the setting of symmetric pairing. A key feature of the construction is the presence of random tags in the ciphertext and decryption key. Later work by Lewko and Waters has removed the tags and proceeding through composite-order pairings has led to a more efficient dual-system IBE scheme using asymmetric pairings whose security is based on non-standard but static assumptions. In this work, we have systematically simplified Waters' 2009 IBE scheme in the setting of asymmetric pairing. The simplifications retain tags used in the original description. This leads to several variants, the first one of which is based on standard assumptions and in comparison to Waters' original scheme reduces ciphertexts and keys by two elements each. Going through several stages of simplifications, we finally obtain a simple scheme whose security can be based on two standard assumptions and a natural and minimal extension of the decision Diffie-Hellman problem for asymmetric pairing groups. The scheme itself is also minimal in the sense that apart from the tags, both encryption and key generation use exactly one randomiser each. This final scheme is more efficient than both the previous dual-system IBE scheme in the asymmetric setting due to Lewko and Waters and the more recent dual-system IBE scheme due to Lewko. We extend the IBE scheme to hierarchical IBE (HIBE) and broadcast encryption (BE) schemes. Both primitives are secure in their respective full models and have better efficiencies compared to previously known schemes offering the same level and type of security.
Last updated:  2012-02-18
On the security of Lo et al.’s ownership transfer protocol
Masoumeh Safkhani, Nasour Bagheri, Majid Naderi, Ali Mahani
Recently, Lo et al. proposed an ownership transfer protocol for RFID objects using lightweight computing operators and claimed that their protocol provides stronger security robustness and higher performance efficiency in comparison with existing solutions. However, in this paper we show that their claim unfortunately does not hold. More precisely, we present a tag's secret disclosure attack, a new owner's secret disclosure attack, and a fraud attack against Lo et al.'s ownership transfer protocol. The success probability of all attacks is "1", while the complexity is only one run of the protocol. Our observations show that this protocol compromises the privacy of the tag and the new owner.
Last updated:  2012-01-20
Polynomial-Time, Semantically-Secure Encryption Achieving the Secrecy Capacity
Mihir Bellare, Stefano Tessaro
In the wiretap channel setting, one aims to get information-theoretic privacy of communicated data based only on the assumption that the channel from sender to adversary is noisier than the one from sender to receiver. The secrecy capacity is the optimal (highest possible) rate of a secure scheme, and the existence of schemes achieving it has been shown. For thirty years the ultimate and unreached goal has been to achieve this optimal rate with a scheme that is polynomial-time. (This means both encryption and decryption are proven polynomial time algorithms.) This paper finally delivers such a scheme. In fact it does more. Our scheme not only meets the classical notion of security from the wiretap literature, called MIS-R (mutual information security for random messages) but achieves the strictly stronger notion of semantic security, thus delivering more in terms of security without loss of rate.
Last updated:  2012-01-19
Security Analysis of J-PAKE
Mohsen Toorani
J-PAKE is a balanced Password-Authenticated Key Exchange (PAKE) protocol, proposed in 2008 and presented again in 2010 and 2011. One of its distinguishing features is that it does not require Public Key Infrastructure (PKI). Instead, it deploys Zero-Knowledge (ZK) techniques through Schnorr's signature, and requires many computations and random number generations. J-PAKE has been submitted as a candidate for the IEEE P1363.2 standard for password-based public key cryptography, included in OpenSSL and OpenSSH, and used in Mozilla Firefox's Sync mechanism. In this paper, we show that the J-PAKE protocol is vulnerable to a password compromise impersonation attack, and has other shortcomings with respect to replay and Unknown Key-Share (UKS) attacks.
Last updated:  2012-01-18
Dickson polynomials, hyperelliptic curves and hyper-bent functions
Jean-Pierre Flori, Sihem Mesnager
In this paper, we study the action of Dickson polynomials on subsets of finite fields of even characteristic related to the trace of the inverse of an element and provide an alternate proof of a not-so-well-known result. Such properties are then applied to the study of a family of Boolean functions and a characterization of their hyper-bentness in terms of exponential sums recently proposed by Wang et al. Finally, we extend previous works of Lisoněk and of Flori and Mesnager to reformulate this characterization in terms of the number of points on hyperelliptic curves and present some numerical results leading to an interesting problem.
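For reference, the Dickson polynomials acted on here are the standard family $D_n(x,a)$ (the paper works in even characteristic, typically with $a = 1$), defined over any commutative ring by
\[
D_0(x,a) = 2, \qquad D_1(x,a) = x, \qquad D_n(x,a) = x\,D_{n-1}(x,a) - a\,D_{n-2}(x,a) \quad (n \ge 2),
\]
and satisfying the functional identity $D_n\!\bigl(y + \tfrac{a}{y},\, a\bigr) = y^n + \bigl(\tfrac{a}{y}\bigr)^n$, which underlies their interaction with traces and exponential sums.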
Last updated:  2012-09-21
Towards Unconditional Soundness: Computationally Complete Symbolic Attacker
Gergei Bana, Hubert Comon-Lundh
We consider the question of the adequacy of symbolic models versus computational models for the verification of security protocols. We neither try to include in the symbolic model properties that reflect the properties of the computational primitives, nor add computational requirements that enforce the soundness of the symbolic model. We propose in this paper a different approach: everything is possible in the symbolic model unless it contradicts a computational assumption. In this way, we obtain unconditional soundness almost by construction, and we do not need to assume the absence of dynamic corruption or of key-cycles, hypotheses that are routinely made in related work. We set up the basic framework for arbitrary cryptographic primitives and arbitrary protocols, though only for trace security properties.
Last updated:  2013-05-14
Attacks and Security Proofs of EAX-Prime
Uncategorized
Kazuhiko Minematsu, Stefan Lucks, Hiraku Morita, Tetsu Iwata
Show abstract
Uncategorized
EAX$'$ (EAX-prime) is an authenticated encryption (AE) scheme specified by ANSI C12.22 as a standard security function for Smart Grid. EAX$'$ is based on EAX proposed by Bellare, Rogaway, and Wagner. While EAX has a proof of security based on the pseudorandomness of the internal blockcipher, no published security result is known for EAX$'$. This paper studies the security of EAX$'$ and shows that there is a sharp distinction in the security of EAX$'$ depending on the input length. EAX$'$ encryption takes two inputs, called cleartext and plaintext, and we present various efficient attacks against EAX$'$ using single-block cleartext and plaintext. At the same time, we prove that EAX$'$ is provably secure based on the pseudorandomness of the blockcipher, provided that cleartexts are always longer than one block.
Last updated:  2012-02-03
Secondary constructions on generalized bent functions
Brajesh Kumar Singh
In this paper, we construct generalized bent Boolean functions in $n+2$ variables from $4$ generalized Boolean functions in $n$ variables. We also show that the direct sum of two generalized bent Boolean functions is generalized bent. Finally, we identify a set of affine functions in which every function is generalized bent.
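A sketch of why the direct-sum statement holds, under one common definition of generalized bentness (the paper's conventions may differ in the choice of codomain): for $f : \mathbb{F}_2^n \rightarrow \mathbb{Z}_q$ let $\zeta = e^{2\pi i/q}$ and $\mathcal{H}_f(u) = \sum_{x \in \mathbb{F}_2^n} \zeta^{f(x)} (-1)^{u \cdot x}$; then $f$ is generalized bent if $|\mathcal{H}_f(u)| = 2^{n/2}$ for all $u$. For the direct sum $h(x,y) = f(x) + g(y)$ with $f$ on $n_1$ and $g$ on $n_2$ variables, the transform factors:
\[
\mathcal{H}_h(u,v) = \sum_{x,y} \zeta^{f(x) + g(y)} (-1)^{u \cdot x + v \cdot y} = \mathcal{H}_f(u)\,\mathcal{H}_g(v),
\]
so $|\mathcal{H}_h(u,v)| = 2^{(n_1 + n_2)/2}$ whenever both $f$ and $g$ are generalized bent.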
Last updated:  2012-02-07
Efficient Mix-Net Verification by Proofs of Random Blocks
Uncategorized
Denise Demirel, Melanie Volkamer, Hugo Jonker
Uncategorized
In order for a mix-net to be usable in practice (e.g. in electronic voting), efficient verification is a must. Despite many advances in the last decade, zero-knowledge proofs remain too computationally intensive. Two alternative proof approaches have been suggested: optimistic mix-net verification and randomized partial checking. Puiggalí et al. proposed a verification method combining these two approaches. This paper investigates their mix-net and proposes a verification method that offers both improved efficiency and stronger privacy.
Last updated:  2012-01-20
A Cryptographic Treatment of the Wiretap Channel
Uncategorized
Mihir Bellare, Stefano Tessaro, Alexander Vardy
Show abstract
Uncategorized
The wiretap channel is a setting where one aims to provide information-theoretic privacy of communicated data based solely on the assumption that the channel from sender to adversary is ``noisier'' than the channel from sender to receiver. It has been the subject of decades of work in the information and coding (I&C) community. This paper bridges the gap between this body of work and modern cryptography with contributions along two fronts, namely METRICS (definitions) of security, and SCHEMES. We explain that the metric currently in use is weak and insufficient to guarantee security of applications and propose two replacements. One, which we call mis-security, is a mutual-information-based metric in the I&C style. The other, semantic security, adapts to this setting a cryptographic metric that, in the cryptography community, has been vetted by decades of evaluation and endorsed as the target for standards and implementations. We show that they are equivalent (any scheme secure under one is secure under the other), thereby connecting two fundamentally different ways of defining security and providing a strong, unified and well-founded target for designs. Moving on to schemes, results from the wiretap community are mostly non-constructive, proving the existence of schemes without necessarily yielding ones that are explicit, let alone efficient, and only meeting their weak notion of security. We apply cryptographic methods based on extractors to produce explicit, polynomial-time and even practical encryption schemes that meet our new and stronger security target.
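A hedged paraphrase of the two metrics compared here (notation is illustrative, not the paper's): with $\mathcal{E}$ the encryption map and $\mathsf{ChA}$ the adversary's channel, the mutual-information metric bounds
\[
\max_{M} \; I\bigl(M;\; \mathsf{ChA}(\mathcal{E}(M))\bigr),
\]
the maximum taken over all message distributions $M$ (the weaker MIS-R notion from the earlier literature fixes $M$ to be uniform), while semantic security asks that no adversary, for any message distribution and any target function of the message, can predict that function from $\mathsf{ChA}(\mathcal{E}(M))$ noticeably better than it could without seeing it.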
Last updated:  2014-06-04
Reset Indifferentiability from Weakened Random Oracle Salvages One-pass Hash Functions
Uncategorized
Yusuke Naito, Kazuki Yoneyama, Kazuo Ohta
Show abstract
Uncategorized
Ristenpart et al. showed a limitation of the indifferentiability theorem of Maurer et al.: it covers only single-stage security notions, not all multi-stage ones. They defined a new concept, reset indifferentiability, and proved a reset indifferentiability theorem, an analogue of the indifferentiability theorem that covers all security notions: if H^U is reset indifferentiable from a random oracle (RO), then for any security notion a cryptosystem C is at least as secure in the U model as in the RO model. Unfortunately, they also proved that H^U cannot be reset indifferentiable from a RO when H is a one-pass hash function such as the ChopMD and Sponge constructions. In this paper, we propose a new modular proof approach in place of the RO methodology, Reset Indifferentiability from Weakened Random Oracle (the WRO methodology), in order to ensure the security of C with H^U and thereby salvage ChopMD and Sponge. The proof procedure of the WRO methodology is as follows: 1. define the new concept of a WRO in place of the RO; 2. prove that H^U is reset indifferentiable from a WRO (examples of H are ChopMD and Sponge); and 3. prove that C is secure in the WRO model. We can then conclude that C with H^U is secure by combining Steps 2 and 3 with the theorem of Ristenpart et al. Moreover, for public-key encryption (as the cryptosystem C) under chosen-distribution attack we prove that C(WRO) is secure, which demonstrates the appropriateness of the new WRO model.