All papers in 2011 (Page 4 of 714 results)

Last updated:  2011-08-05
Fuzzy Identity Based Encryption from Lattices
Shweta Agrawal, Xavier Boyen, Vinod Vaikuntanathan, Panagiotis Voulgaris, Hoeteck Wee
Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the standard Learning With Errors (LWE) problem. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We discuss why further extensions are not as easy as they may seem. As such, ours is among the first examples of advanced-functionality cryptosystems from lattices that go “beyond IBE”.
Last updated:  2014-08-04
Higher-Order Glitches Free Implementation of the AES using Secure Multi-Party Computation Protocols - Extended Version
Thomas Roche, Emmanuel Prouff
Higher-order side channel attacks (HO-SCA) are a powerful class of attacks against cryptographic implementations, and the design of appropriate countermeasures is nowadays an important topic. In parallel, another class of attacks, called glitch attacks, has been investigated, which exploits the hardware glitch phenomena occurring during the physical execution of algorithms. Some solutions have been proposed to counteract HO-SCA at any order or to defeat glitch attacks, but no work has until now focused on the definition of a sound countermeasure thwarting both attacks. We introduce in this paper a circuit model in which side-channel resistance in the presence of glitch effects can be characterized. This allows us to construct the first glitch-free HO-SCA countermeasure. The new construction can be built from any Secure Multi-Party Computation protocol and, as an illustration, we propose to apply the protocol introduced by Ben-Or et al. at STOC in 1988. The adaptation of the latter protocol to the context of side-channel analysis results in a completely new higher-order masking scheme, particularly interesting when addressing resistance in the presence of glitches. An application of our scheme to the AES block cipher is detailed, as well as an information theoretic evaluation of the new masking function that we call polynomial masking.
Last updated:  2011-08-15
Automatic Insertion of DPA Countermeasures
Andrew Moss, Elisabeth Oswald, Dan Page, Michael Tunstall
Differential Power Analysis (DPA) attacks find a statistical correlation between the power consumption of a cryptographic device and intermediate values within the computation. Randomization of intermediate values breaks statistical dependence and thus prevents such attacks. The current state of the art in countermeasures involves manual manipulation of low-level assembly language to insert random masking. This paper introduces an algorithm to automate the process allowing the development of compilers capable of protecting programs against DPA.
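The countermeasure being automated here is the random masking of intermediate values. As a minimal illustration of that idea only (this is a sketch of first-order Boolean masking, not the paper's compiler transformation), each sensitive byte is split into two shares, each of which is individually independent of the secret:

    import secrets

    def mask_byte(v):
        """Split a sensitive byte v into two shares (v ^ r, r).

        Each share on its own is uniformly distributed and hence statistically
        independent of v, which is what breaks the first-order statistical
        dependence exploited by DPA."""
        r = secrets.randbits(8)
        return v ^ r, r

    def unmask_byte(masked, r):
        """Recombine the two shares to recover the original value."""
        return masked ^ r

    shares = mask_byte(0x3A)      # e.g. an AES state byte before a table lookup
    assert unmask_byte(*shares) == 0x3A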
Last updated:  2011-08-05
Comments on a password authentication and update scheme based on elliptic curve cryptography
Debiao He
The security of a password authentication and update scheme based on elliptic curve cryptography proposed by Islam et al. [S.K. Hafizul Islam, G.P. Biswas, Design of improved password authentication and update scheme based on elliptic curve cryptography, Mathematical and Computer Modelling (2011), doi:10.1016/j.mcm.2011.07.001] is analyzed. Three kinds of attacks are presented in different scenarios.
Last updated:  2011-08-16
Functional Encryption for Inner Product Predicates from Learning with Errors
Shweta Agrawal, David Mandell Freeman, Vinod Vaikuntanathan
We propose a lattice-based functional encryption scheme for inner product predicates whose security follows from the difficulty of the "learning with errors" (LWE) problem. This construction allows us to achieve applications such as range and subset queries, polynomial evaluation, and CNF/DNF formulas on encrypted data. Our scheme supports inner products over small fields, in contrast to earlier works based on bilinear maps. Our construction is the first functional encryption scheme based on lattice techniques that goes beyond basic identity-based encryption. The main technique in our scheme is a novel twist to the identity-based encryption scheme of Agrawal, Boneh and Boyen (Eurocrypt 2010).
Last updated:  2016-04-13
--Withdrawn--
Xiaoyuan Yang, Weiyi Cai, Xu An Wang
Last updated:  2011-08-03
Resettable Cryptography in Constant Rounds -- the Case of Zero Knowledge
Yi Deng, Dengguo Feng, Vipul Goyal, Dongdai Lin, Amit Sahai, Moti Yung
A fundamental question in cryptography deals with understanding the role that randomness plays in cryptographic protocols and to what extent it is necessary. One particular line of work was initiated by Canetti, Goldreich, Goldwasser, and Micali (STOC 2000) who introduced the notion of resettable zero-knowledge, where the protocol must be zero-knowledge even if a cheating verifier can reset the prover and have several interactions in which the prover uses the same random tape. Soon afterwards, Barak, Goldreich, Goldwasser, and Lindell (FOCS 2001) studied the setting where the \emph{verifier} uses a fixed random tape in multiple interactions. Subsequent to these works, a number of papers studied the notion of resettable protocols in the setting where \emph{only one} of the participating parties uses a fixed random tape multiple times. The notion of resettable security has been studied in two main models: the plain model and the bare public key model (also introduced in the above paper by Canetti et al.). In a recent work, Deng, Goyal and Sahai (FOCS 2009) gave the first construction of a \emph{simultaneous} resettable zero-knowledge protocol where both participants of the protocol can reuse a fixed random tape in any (polynomial) number of executions. Their construction however required $O(n^\epsilon)$ rounds of interaction between the prover and the verifier. Both in the plain as well as the BPK model, this construction remains the only known simultaneous resettable zero-knowledge protocol. In this work, we study the question of round complexity of simultaneous resettable zero-knowledge in the BPK model. We present a \emph{constant round} protocol in such a setting based on standard cryptographic assumptions. Our techniques are significantly different from the ones used by Deng, Goyal and Sahai.
Last updated:  2011-11-29
Oblivious RAM with O((log N)^3) Worst-Case Cost
Elaine Shi, Hubert Chan, Emil Stefanov, Mingfei Li
Oblivious RAM (O-RAM) is a useful primitive that allows a client to hide its data access patterns from an untrusted server in storage outsourcing applications. This paper proposes novel O-RAM constructions that achieve poly-logarithmic worst-case cost, while consuming constant client-side storage. Our techniques for constructing Oblivious RAM are fundamentally different from previous approaches. Specifically, we organize the O-RAM storage into a binary tree over data buckets, while moving data blocks obliviously along tree edges. Our construction (instantiated with the trivial bucket O-RAM) has remarkable conceptual simplicity, and eliminates the need to perform expensive oblivious sorting operations. As a result, to the best of our knowledge, our construction is by far the most practical scheme with constant client-side memory, under realistic parameterizations.
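For intuition about the tree layout described above, the sketch below shows only the indexing skeleton used by such tree-based constructions (heap-indexed buckets, one root-to-leaf path per logical access); it omits the paper's bucket O-RAMs and eviction procedure:

    def path_buckets(leaf_label, depth):
        """Heap indices of the buckets on the root-to-leaf path of a complete
        binary tree of the given depth (root = 1, children of node i are 2i, 2i+1).

        In tree-based O-RAM constructions a block tagged with `leaf_label` may
        reside in any bucket on this path, so one logical access scans exactly
        these depth + 1 buckets."""
        node = (1 << depth) + leaf_label     # heap index of the leaf bucket
        path = []
        while node >= 1:
            path.append(node)
            node //= 2                       # move up to the parent bucket
        return list(reversed(path))          # root first

    print(path_buckets(5, 3))                # [1, 3, 6, 13]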
Last updated:  2011-08-11
Composition Theorems Without Pre-Established Session Identifiers
Ralf Kuesters, Max Tuengerthal
Canetti's universal composition theorem and the joint state composition theorems by Canetti and Rabin are useful and widely employed tools for the modular design and analysis of cryptographic protocols. However, these theorems assume that parties participating in a protocol session have pre-established a unique session ID (SID). While the use of such SIDs is a good design principle, existing protocols, in particular real-world security protocols, typically do not use pre-established SIDs, at least not explicitly and not in the particular way stipulated by the theorems. As a result, the composition theorems cannot be applied for analyzing such protocols in a modular and faithful way. In this paper, we therefore present universal and joint state composition theorems which do not assume pre-established SIDs. In our joint state composition theorem, the joint state is an ideal functionality which supports several cryptographic operations, including public-key encryption, (authenticated and unauthenticated) symmetric encryption, MACs, digital signatures, and key derivation. This functionality has recently been proposed by Küsters and Tuengerthal and has been shown to be realizable under standard cryptographic assumptions and for a reasonable class of environments. We demonstrate the usefulness of our composition theorems by several case studies on real-world security protocols, including IEEE 802.11i, SSL/TLS, SSH, IPsec, and EAP-PSK. While our applications focus on real-world security protocols, our theorems, models, and techniques should be useful beyond this domain.
Last updated:  2011-09-01
Can Homomorphic Encryption be Practical?
Kristin Lauter, Michael Naehrig, Vinod Vaikuntanathan
The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. The privacy concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but it is hard to ignore the elephant in the room, namely efficiency -- can homomorphic encryption ever be efficient enough to be practical? Certainly, it seems that all known fully homomorphic encryption schemes have a long way to go before they can be used in practice. Given this state of affairs, our contribution is two-fold. First, we exhibit a number of real-world applications, in the medical, financial, and the advertising domains, which require only that the encryption scheme is "somewhat" homomorphic. Somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations, can be much faster, and more compact than fully homomorphic encryption schemes. Secondly, we show a proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem. The system is very efficient, and has reasonably short ciphertexts. Our unoptimized implementation in magma enjoys comparable efficiency to even optimized pairing-based schemes with the same level of security and homomorphic capacity. We also show a number of application-specific optimizations to the encryption scheme, most notably the ability to convert between different message encodings in a ciphertext.
Last updated:  2011-07-30
A constant-round resettably-sound resettable zero-knowledge argument in the BPK model
Seiko Arita
In resetting attacks against a proof system, a prover or a verifier is reset and forced to use the same random tape on various inputs as many times as an adversary may want. The recent deployment of cloud computing gives these attacks a new importance. This paper shows that argument systems for any NP language that are both resettably-sound and resettable zero-knowledge are possible via a constant-round protocol in the BPK model. To that end, we define and construct a resettably-extractable {\em conditional} commitment scheme.
Last updated:  2011-08-01
A Fair Evaluation Framework for Comparing Side-Channel Distinguishers
Carolyn Whitnall, Elisabeth Oswald
The ability to make meaningful comparisons between side-channel distinguishers is important both to attackers seeking an optimal strategy and to designers wishing to secure a device against the strongest possible threat. The usual experimental approach requires the distinguishing vectors to be estimated: outcomes do not fully represent the inherent theoretic capabilities of distinguishers and do not provide a basis for conclusive, like-for-like comparisons. This is particularly problematic in the case of mutual information-based side channel analysis (MIA) which is notoriously sensitive to the choice of estimator. We propose an evaluation framework which captures those theoretic characteristics of attack distinguishers having the strongest bearing on an attacker's general ability to estimate with practical success, thus enabling like-for-like comparisons between different distinguishers in various leakage scenarios. We apply our framework to an evaluation of MIA relative to its rather more well-established correlation-based predecessor and a proposed variant inspired by the Kolmogorov-Smirnov distance. Our analysis makes sense of the rift between the a priori reasoning in favour of MIA and the disappointing empirical findings of previous comparative studies, and moreover reveals several unprecedented features of the attack distinguishers in terms of their sensitivity to noise. It also explores---to our knowledge, for the first time---theoretic properties of near-generic power models previously proposed (and experimentally verified) for use in attacks targeting injective functions.
Last updated:  2014-03-18
Formalizing Group Blind Signatures and Practical Constructions without Random Oracles
Essam Ghadafi
Group blind signatures combine anonymity properties of both group signatures and blind signatures and offer privacy for both the message to be signed and the signer. Their applications include multi-authority e-voting and distributed e-cash systems. The primitive has been introduced with only informal definitions for its required security properties. We offer two main contributions: first, we provide foundations for the primitive where we present formal security definitions offering various flavors of anonymity relevant to this setting. In the process, we identify and address some subtle issues which were not considered by previous constructions and (informal) security definitions. Our second main contribution is a generic construction that yields practical schemes with round-optimal signing and constant-size signatures. Our constructions permit dynamic and concurrent enrollment of new members, satisfy strong security requirements, and do not rely on random oracles. In addition, we introduce some new building blocks which may be of independent interest.
Last updated:  2011-08-10
Pseudorandom Functions and Lattices
Abhishek Banerjee, Chris Peikert, Alon Rosen
We give direct constructions of pseudorandom function (PRF) families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively \emph{small} low-depth arithmetic or boolean circuits (e.g., in NC$^{1}$ or even TC$^{0}$). In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new ``derandomization'' technique for the learning with errors (\lwe) problem which, in effect, generates the error terms deterministically.
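One way to picture such deterministic error generation is as a rounding map that keeps only the high-order part of $\langle a, s\rangle \bmod q$ modulo a smaller modulus $p$ (the approach known as learning with rounding). The Python sketch below illustrates only that rounding map, with toy parameters chosen for readability rather than security:

    def round_q_to_p(x, q, p):
        """Deterministic rounding from Z_q to Z_p: keep the high-order part of x."""
        return (p * x + q // 2) // q % p

    def lwr_sample(a, s, q, p):
        """One 'learning with rounding' sample: round <a, s> mod q into Z_p.
        The rounding step plays the role of the LWE error term, but is
        generated deterministically from a and s."""
        inner = sum(ai * si for ai, si in zip(a, s)) % q
        return round_q_to_p(inner, q, p)

    q, p = 2**16, 2**8                 # toy moduli (illustrative only)
    s = [3, 141, 59, 26]               # toy secret
    a = [65001, 12, 77, 4095]          # toy public vector
    print(lwr_sample(a, s, q, p))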
Last updated:  2012-02-14
On a generalized combinatorial conjecture involving addition $\mod 2^k - 1$
Gérard Cohen, Jean-Pierre Flori
In this note, we give a simple proof of the combinatorial conjecture proposed by Tang, Carlet and Tang, based on which they constructed two classes of Boolean functions with many good cryptographic properties. We also give more general properties about the generalization of the conjecture they propose.
Last updated:  2011-07-28
Cryptanalysis of HFE, Multi-HFE and Variants for Odd and Even Characteristic
Luk Bettale, Jean-Charles Faugère, Ludovic Perret
We investigate in this paper the security of HFE and Multi-HFE schemes as well as their minus and embedding variants. Multi-HFE is a generalization of the well-known HFE schemes. The idea is to use a multivariate quadratic system -- instead of a univariate polynomial in HFE -- over an extension field as a private key. According to the authors, this should make the classical direct algebraic (message-recovery) attack proposed by Faugère and Joux on HFE no longer efficient against Multi-HFE. We consider here the hardness of key recovery in Multi-HFE and its variants, but also in HFE (both for odd and even characteristic). We first improve and generalize the basic key-recovery attack proposed by Kipnis and Shamir on HFE. To do so, we express this attack as matrix/vector operations. On the one hand, this permits us to improve the basic Kipnis-Shamir (KS) attack on HFE. On the other hand, it allows us to generalize the attack to Multi-HFE. Due to its structure, we prove that a Multi-HFE scheme has many more equivalent keys than a basic HFE. This induces a structural weakness which can be exploited to adapt the KS attack against classical modifiers of multivariate schemes such as minus and embedding. Along the way, we discovered that the KS attack as initially described cannot be applied against HFE in characteristic 2. We have then strongly revised KS in characteristic 2 to make it work. In all cases, the cost of our attacks is related to the complexity of solving MinRank. Thanks to recent complexity results on this problem, we prove that our attack is polynomial in the degree of the extension field for all possible practical settings used in HFE and Multi-HFE. This then makes Multi-HFE less secure than basic HFE for equally-sized keys. As a proof of concept, we have been able to practically break the most conservative proposed parameters of Multi-HFE in a few days (256-bit security broken in 9 days).
Last updated:  2012-05-19
Hardness of Learning Problems over Burnside Groups of Exponent 3
Nelly Fazio, Kevin Iga, Antonio Nicolosi, Ludovic Perret, William E. Skeith III
In this work we investigate the hardness of a computational problem introduced in the recent work of Baumslag et al. In particular, we study the $B_n$-LHN problem, which is a generalized version of the learning with errors (LWE) problem, instantiated with a particular family of non-abelian groups (free Burnside groups of exponent 3). In our main result, we demonstrate a random self-reducibility property for $B_n$-LHN. Along the way, we also prove a sequence of lemmas regarding homomorphisms of free Burnside groups of exponent 3 that may be of independent interest.
Last updated:  2011-10-09
The n-Diffie-Hellman Problem and its Applications
Liqun Chen, Yu Chen
The main contributions of this paper are twofold. On the one hand, the twin Diffie-Hellman (twin DH) problem proposed by Cash, Kiltz and Shoup is extended to the $n$-Diffie-Hellman ($n$-DH) problem for an arbitrary integer $n$, and this new problem is shown to be at least as hard as the ordinary DH problem. Like the twin DH problem, the $n$-DH problem remains hard even in the presence of a decision oracle that recognizes solutions to the problem. On the other hand, we observe that the double-size key in the Cash et al. twin DH based encryption scheme can be replaced by two separate keys, one for each entity, which results in a 2-party encryption scheme that offers the same security as the original scheme but removes the key redundancy. This idea is further extended to an $n$-party case, which is also known as $n$-out-of-$n$ encryption. As examples, a variant of ElGamal encryption and a variant of Boneh-Franklin IBE are presented; both are proved CCA secure under the computational DH assumption and the computational bilinear Diffie-Hellman (BDH) assumption respectively, in the random oracle model. The two schemes are efficient, due partially to the size of their ciphertexts, which is independent of the value $n$.
Last updated:  2015-04-14
Fair Computation with Rational Players
Amos Beimel, Adam Groce, Jonathan Katz, Ilan Orlov
We consider the problem of fair multiparty computation, where fairness means (informally) that all parties should learn the correct output. A seminal result of Cleve (STOC 1986) shows that fairness is, in general, impossible to achieve if a majority of the parties is malicious. Here, we treat all parties as rational and seek to understand what can be done. Asharov et al. (Eurocrypt 2011) showed impossibility of rational fair computation in the two-party setting, for a particular function and a particular choice of utilities. We observe, however, that in their setting the parties have no strict incentive to compute the function even in an ideal world where fairness is guaranteed. Revisiting the problem, we show that rational fair computation is possible, for arbitrary functions, as long as the parties have a strict incentive to compute the function in an ideal world where fairness is guaranteed. Our results extend to more general utility functions that do not directly correspond to fairness, as well as to the multi-party setting. Our work thus shows a new setting in which game-theoretic considerations can be used to circumvent a cryptographic impossibility result.
Last updated:  2011-07-28
Improved Anonymity for Key-Trees
Michael Beye, Thijs Veugen
Randomized hash-lock protocols for Radio Frequency IDentification (RFID) tags offer forward untraceability, but incur heavy search on the server. Key trees have been proposed as a way to reduce search times, but because partial keys in such trees are shared, key compromise affects several tags. Buttyán et al. have quantified the resulting loss of anonymity in the system, and proposed specific tree properties in an attempt to minimize this loss. We will further improve upon these results, and provide a proof of optimality. Finally, our proposals are compared to existing results by means of simulation.
Last updated:  2012-06-30
A More Efficient Computationally Sound Non-Interactive Zero-Knowledge Shuffle Argument
Helger Lipmaa, Bingsheng Zhang
We propose a new non-interactive (perfect) zero-knowledge (NIZK) shuffle argument that, when compared to the only previously known efficient NIZK shuffle argument by Groth and Lu, has a small constant factor times smaller computation and communication, and is based on more standard computational assumptions. Differently from Groth and Lu, who only prove the co-soundness of their argument under purely computational assumptions, we prove computational soundness under a necessary knowledge assumption. We also present a general transformation that results in a shuffle argument that has a quadratically smaller common reference string (CRS) and a small constant factor times longer argument than the original shuffle. Our main technical result is a ``$1$-sparsity'' argument that has linear CRS length and prover's communication. This should be compared to the basic arguments of Groth (Asiacrypt 2010) and Lipmaa (TCC 2012), where the prover's computational complexity is quadratic. This gives a new insight into the NIZK arguments of Groth and Lipmaa, and we hope that the $1$-sparsity argument (and possible related future basic arguments) can be used to build NIZK arguments for other interesting languages.
Last updated:  2011-08-10
Analysis and Improvement of Thing's Time-Memory Trade-Off Attack
Zhenqi Li, Dongdai Lin, Wenhao Wang
Cryptanalytic time memory trade-off is a probabilistic algorithm for inverting a generic one-way function. Since its first introduction by Hellman, many variants and their analysis results have appeared. Thing gave a new time-memory trade-off method in 2009. His new method has a higher success probability and lower memory requirements compared with Oechslin's rainbow table, but it has an obvious drawback that makes it impractical. In this paper, we analyze and improve Thing's method and propose a new method. We compare the success probability, memory requirements and analysis time of our method and Thing's method. Results show that our new method is better than Thing's method and fixes its innate drawback.
Last updated:  2011-07-20
An Efficient Rational Secret Sharing Scheme Based on the Chinese Remainder Theorem (Revised Version)
Yun Zhang, Christophe Tartary, Huaxiong Wang
The design of rational cryptographic protocols is a recently created research area at the intersection of cryptography and game theory. At TCC'10, Fuchsbauer \emph{et al.} introduced two equilibrium notions (a computational version of strict Nash equilibrium and stability with respect to trembles) offering a computational relaxation of traditional game theory equilibria. Using trapdoor permutations, they constructed a rational $t$-out-of-$n$ sharing technique satisfying these new security models. Their construction only requires standard communication networks but the share bitsize is $2 n |s| + O(k)$ for security against a single deviation and rises to $(n-t+1)\cdot (2n|s|+O(k))$ to achieve $(t-1)$-resilience, where $k$ is a security parameter. In this paper, we propose a new rational $t$-out-of-$n$ secret sharing scheme based on the Chinese remainder theorem. Under some computational assumptions related to the discrete logarithm problem and RSA, this construction leads to a $(t-1)$-resilient computational strict Nash equilibrium that is stable with respect to trembles with share bitsize $O(k)$. Our protocol does not rely on simultaneous channels. Instead, it only requires a synchronous broadcast channel and synchronous pairwise private channels.
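Independently of the game-theoretic layer, the Chinese-remainder-based sharing itself can be sketched in the Asmuth-Bloom style: the secret is lifted to a random representative below the product of the $t$ smallest moduli and then reduced modulo each share modulus. The toy sketch below shows only that basic layer; it is not the authors' rational protocol, and its parameters are chosen for readability rather than real secrecy:

    import secrets
    from math import prod

    def crt(residues, moduli):
        """Chinese remainder theorem for pairwise-coprime moduli."""
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    def share(secret, p, moduli, t):
        """t-out-of-n CRT sharing of a secret in Z_p (Asmuth-Bloom style).
        Assumes p and the moduli are pairwise coprime and satisfy the usual
        Asmuth-Bloom size condition."""
        bound = prod(sorted(moduli)[:t])
        alpha = secrets.randbelow((bound - secret) // p)   # lift the secret below the bound
        y = secret + alpha * p
        return [(y % m, m) for m in moduli]

    def reconstruct(shares, p):
        """Recover the secret from any t shares given as (residue, modulus) pairs."""
        residues, moduli = zip(*shares)
        return crt(residues, moduli) % p

    p, moduli, t = 7, [19, 23, 29, 31], 2      # toy parameters
    shares = share(5, p, moduli, t)
    print(reconstruct(shares[:t], p))          # 5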
Last updated:  2011-11-07
On the Portability of Side-Channel Attacks - An Analysis of the Xilinx Virtex 4, Virtex 5, and Spartan 6 Bitstream Encryption Mechanism
Amir Moradi, Markus Kasper, Christof Paar
This paper is a short summary of our real-world side-channel analysis of the bitstream encryption mechanism provided by Xilinx FPGAs. This work covers our results analyzing the Virtex 4, Virtex 5, and Spartan 6 family showing that the encryption mechanism can be completely broken with moderate effort. The presented results provide an overview of a practical real-world analysis and should help practitioners to judge the necessity to implement side-channel countermeasures. We demonstrate sophisticated attacks on off-the-shelf FPGAs that go far beyond schoolbook attacks on 8-bit AES S-boxes. We were able to perform the key extraction by using only the measurements of a single power-up. Access to the key allows cloning and manipulating a design, which has been encrypted to protect the intellectual property and to prevent fraud. As a consequence, the target product faces serious threats like IP theft and more advanced attacks such as reverse engineering or the introduction of hardware Trojans. To the best of our knowledge, this is the first successful attack against the bitstream encryption of Xilinx Virtex 4, Virtex 5, and Spartan 6 reported in the open literature.
Last updated:  2011-07-22
On the Vulnerability of FPGA Bitstream Encryption against Power Analysis Attacks - Extracting Keys from Xilinx Virtex-II FPGAs
Amir Moradi, Alessandro Barenghi, Timo Kasper, Christof Paar
Over the last two decades FPGAs have become central components for many advanced digital systems, e.g., video signal processing, network routers, data acquisition and military systems. In order to protect the intellectual property and to prevent fraud, e.g., by cloning an FPGA or manipulating its content, many current FPGAs employ a bitstream encryption feature. We develop a successful attack on the bitstream encryption engine integrated in the widespread Virtex-II Pro FPGAs from Xilinx, using side-channel analysis. After measuring the power consumption of a single power-up of the device and a modest amount of off-line computation, we are able to recover all three different keys used by its triple DES module. Our method allows extracting secret keys from any real-world device where the bitstream encryption feature of Virtex-II Pro is enabled. As a consequence, the target product can be cloned and manipulated at will of the attacker. Also, more advanced attacks such as reverse engineering or the introduction of hardware Trojans become potential threats. As part of the side-channel attack, we were able to deduce certain internals of the hardware encryption engine. To our knowledge, this is the first attack against the bitstream encryption of a commercial FPGA reported in the open literature.
Last updated:  2011-07-18
Spatial Encryption
Mike Hamburg
In this thesis, we build on Boneh and Hamburg's work on generalized identity-based encryption and spatial encryption. GIBE is a model that covers many of the generalizations of identity-based cryptography that have appeared in the past decade. It is simple and flexible, and can be tuned to different attack models such as selective security. Spatial encryption is an encryption system within the GIBE model. It is designed as a toolbox from which other encryption systems can be built, using linear algebra as an encoding method. It is based on the work of Boneh, Boyen and Goh, and generalizes hierarchical IBE as well as the hierarchical inner-product encryption of Okamoto and Takashima. Here we show several new results related to spatial encryption. We show how to build adaptively secure spatial systems (under a compact but nonstandard assumption) using Lewko and Waters' dual-system encryption. In doing this, we also show how to adapt Lewko and Waters' result to a prime-order setting without sacrificing constant-size ciphertexts. We also show new embeddings of other cryptosystems into spatial encryption. Beyond spatial encryption, we propose a variant called "doubly-spatial encryption", which generalizes both spatial encryption and Attrapadung and Libert's "negated spatial encryption". This generalization adds more flexibility, including more flexible revocation systems and potential improvements in policy language. Unfortunately, we were only able to prove selective security for doubly-spatial encryption, and its ciphertext is no longer constant-size.
Last updated:  2011-07-28
Modulus Fault Attacks Against RSA-CRT Signatures
Eric Brier, David Naccache, Phong Q. Nguyen, Mehdi Tibouchi
RSA-CRT fault attacks have been an active research area since their discovery by Boneh, DeMillo and Lipton in 1997. We present alternative key-recovery attacks on RSA-CRT signatures: instead of targeting one of the sub-exponentiations in RSA-CRT, we inject faults into the public modulus before CRT interpolation, which makes a number of countermeasures against Boneh et al.'s attack ineffective. Our attacks are based on orthogonal lattice techniques and are very efficient in practice: depending on the fault model, between 5 and 45 faults suffice to recover the RSA factorization within a few seconds. Our simplest attack requires that the adversary knows the faulty moduli, but more sophisticated variants work even if the moduli are unknown, under reasonable fault models. All our attacks have been fully validated experimentally with fault-injection laser techniques.
Last updated:  2011-09-29
Analysis of the Parallel Distinguished Point Tradeoff
Jin Hong, Ga Won Lee, Daegun Ma
Cryptanalytic time memory tradeoff algorithms are tools for quickly inverting one-way functions and many consider the rainbow table method to be the most efficient tradeoff algorithm. However, it was recently announced, mostly based on experiments, that the parallelization of the perfect distinguished point tradeoff algorithm brings about an algorithm that is 50\% more efficient than the perfect rainbow table method. Motivated by this claim, while noting that the massive pre-computation associated with any tradeoff algorithm makes the non-perfect forms of tradeoff algorithms more practical, we provide an accurate theoretic analysis of the parallel version of the non-perfect distinguished point tradeoff algorithm. Performance differences between different tradeoff algorithms are usually not very large, but even these small differences can be crucial in practice. So we take care not to ignore the side effects of false alarms in providing an online time complexity analysis of the parallel distinguished point tradeoff algorithm. Our complexity results are used to compare the parallel non-perfect distinguished point tradeoff against the non-perfect rainbow table method. The two algorithms are compared under identical success rate requirements and the pre-computation efforts are also taken into account. Contrary to our anticipation, we find that the rainbow table method is superior in typical situations, even though the parallelization did have a positive effect on the efficiency of the distinguished point tradeoff algorithm.
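As background on the distinguished point technique itself: a chain is iterated until its current value is "distinguished", for example until a fixed number of leading bits are zero, so that only chain endpoints need to be stored. A minimal sketch, in which the step function and the parameters are illustrative placeholders rather than the paper's setting:

    import hashlib

    SPACE_BYTES = 4      # toy search space of 2^32 points (illustrative)
    DP_BITS = 8          # a point is "distinguished" if its top 8 bits are zero

    def step(x):
        """Hypothetical one-way step function: SHA-256 truncated to the space size."""
        digest = hashlib.sha256(x.to_bytes(SPACE_BYTES, "big")).digest()
        return int.from_bytes(digest[:SPACE_BYTES], "big")

    def dp_chain(start, max_len=1 << 16):
        """Iterate `step` from `start` until a distinguished point is reached.
        Only (start, end, length) needs to be stored in the pre-computed table.
        Returns None if no distinguished point appears within max_len steps;
        such chains are discarded."""
        x = start
        for length in range(1, max_len + 1):
            x = step(x)
            if x >> (8 * SPACE_BYTES - DP_BITS) == 0:
                return start, x, length
        return None

    print(dp_chain(123456789))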
Last updated:  2011-07-18
How to share secrets simultaneously
Laszlo Csirmaz
Each member of a team consisting of $n$ persons has a secret. The $k$ out of $n$ simultaneous threshold secret sharing requires that any group of $k$ members should be able to recover the secrets of the other $n-k$ members, while any group of $k-1$ or fewer members should have no information on the secrets of the other team members. We show that when all secrets are independent and have size $s$ then each team member must receive a share of size at least $(n-k)s$, and we present a scheme which achieves this bound. This result shows a significant saving over $n$ independent applications of $k$ out of $n-1$ threshold schemes, which assign shares of size $(n-1)s$ to each team member independently of $k$.
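For concreteness, a small instance of the bound (the numbers are illustrative):

    % n = 5 team members, threshold k = 3, each secret of size s:
    \text{share size per member} \;\ge\; (n-k)s = 2s,
    \qquad\text{versus}\qquad
    n \text{ independent } k\text{-out-of-}(n-1)\text{ schemes: } (n-1)s = 4s.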
Last updated:  2011-07-15
Efficient Implementation of Grand Cru with TI C6x+ Processor
Azhar Ali Khan, Ghulam Murtaza
Grand Cru, a candidate cipher algorithm of the NESSIE project, is based on the strategy of multiple layered security and is derived from AES-128. This algorithm was not selected for the second-phase evaluation of the NESSIE project due to its implementation and processing cost. In this paper we present a relatively fast implementation of the cipher using Texas Instruments' C64x+ DSP.
Last updated:  2011-11-06
Distributed Oblivious RAM for Secure Two-Party Computation
Steve Lu, Rafail Ostrovsky
A secure two-party computation protocol allows two players, Alice with secret input $x$ and Bob with secret input $y$, to jointly execute an arbitrary program $\pi(x,y)$ such that only the output of the program is revealed to one or both players. In the two player setting, under computational assumptions most approaches require first ``unrolling'' $\pi$ into a circuit, and then executing this circuit in a private manner. Without unrolling the circuit, an alternative method was proposed by Ostrovsky and Shoup (in STOC 1997) with both players simulating the CPU of an oblivious RAM machine, using ``off-the-shelf'' secure two-party computation to perform the CPU simulation with atomic instructions implemented by circuits, and relying on one of the players to implement encrypted memory. The simulation of the CPU must be done through circuit-based secure two-party computation, thus CPU ``memory'' in the Oblivious RAM simulation must be minimized, as otherwise it impacts the simulation of each step of the computation. Luckily, there are multiple Oblivious RAM solutions that require $O(1)$ CPU memory in the security parameter. The Ostrovsky-Shoup compiler suffers from two drawbacks: -The best running time of Oblivious RAM simulation with $O(1)$ memory is still prohibitive and requires $O(\log^2t/\log\log t)$ overhead for running programs of length $t$ by Kushilevitz, Lu, and Ostrovsky (in SODA 2012). -The most problematic part of this approach is that all Oblivious RAM simulations, starting with Goldreich and Ostrovsky (in JACM 1996), require ``Oblivious Sorting'', which introduces a huge constant into the Oblivious RAM simulation that essentially kills all practicality. In this paper, we observe that in secure two-party computation, Alice and Bob can simulate two non-communicating databases. We show how to extend the Ostrovsky-Shoup compiler so that two non-communicating servers can eliminate all the drawbacks stated above. More specifically, we design a new Oblivious RAM protocol where a client interacts with two non-communicating servers, that supports $n$ reads and writes, and requires $O(n)$ memory for the servers, $O(1)$ memory for the client, and $O(\log n)$ amortized read/write overhead for data access. The constants in the big-O notation are tiny, and we show that the storage and data access overhead of our solution concretely compares favorably to the state-of-the-art single-server schemes. As alluded to above, our protocol enjoys an important feature from a practical perspective as well. At the heart of almost all previous single-server Oblivious RAM solutions, a crucial but inefficient process known as oblivious sorting was required. In our two-server model, we describe a novel technique to bypass oblivious sorting, and show how this can be carefully blended with existing techniques to attain a more practical Oblivious RAM protocol in comparison to all prior work. This new protocol leads to a novel application in the realm of secure two-party RAM program computation. We show that our multi-server Oblivious RAM construction can be composed with an extended version of the Ostrovsky-Shoup compiler to obtain a new method for secure two-party RAM computation with overhead lower by a factor of $O(\log n/\log\log n)$ than the (implicitly) existing constructions of Gordon et al.
Last updated:  2011-07-15
A representation of the $p$-sylow subgroup of $\perm(\F_p^n)$ and a cryptographic application
Stefan Maubach
This article concerns itself with the triangular permutation group, induced by triangular polynomial maps over $\F_p$, which is a $p$-sylow subgroup of $\perm(\F_p^n)$. The aim of this article is twofold: on the one hand, we give an alternative to $\F_p$-actions on $\F_p^n$, namely $\Z$-actions on $\F_p^n$ and how to describe them as what we call ``$\Z$-flows''. On the other hand, we describe how the triangular permutation group can be used in applications, in particular we give a cryptographic application for session-key generation. The described system has a certain degree of information theoretic security. We compute its efficiency and storage size. To make this work, we give explicit criteria for a triangular permutation map to have only one orbit, which we call ``maximal orbit maps''. We describe the conjugacy classes of maximal orbit maps, and show how one can conjugate them even further to the map $z \mapsto z+1$ on $\Z/p^n\Z$.
Last updated:  2014-03-11
Generic Fully Simulatable Adaptive Oblivious Transfer
Kaoru Kurosawa, Ryo Nojima, Le Trieu Phong
We aim at constructing adaptive oblivious transfer protocols, enjoying fully simulatable security, from various well-known assumptions such as DDH, $d$-Linear, QR, DCR, and LWE. To this end, we present two generic constructions of adaptive OT, one of which utilizes verifiable shuffles together with threshold decryption schemes, while the other uses permutation networks together with what we call {\em loosely-homomorphic} key encapsulation schemes. We then show that specific choices of the building blocks lead to concrete adaptive OT protocols with fully simulatable security in the standard model under the targeted assumptions. Our generic methods can be extended to build universally composable (UC) secure, and leakage-resilient OT protocols.
Last updated:  2011-07-17
A Novel RFID Authentication Protocol based on Elliptic Curve Cryptosystem
Yalin Chen, Jue-Sam Chou, Chi-Fong Lin, Cheng-Lun Wu
Recently, many researchers have proposed RFID authentication protocols. These protocols mainly fall into two types: symmetric-key based and asymmetric-key based. The symmetric-key based systems usually have some weaknesses, such as vulnerability to brute-force, de-synchronization, impersonation, and tracing attacks. In addition, the asymmetric-key based systems usually suffer from impersonation, man-in-the-middle, physical, and tracing attacks. To get rid of those weaknesses and reduce the system workload, we adopt an elliptic curve cryptosystem (ECC) to construct an asymmetric-key based RFID authentication system. Our scheme needs only two passes and can resist various kinds of attacks. It not only outperforms other RFID schemes with the same security level but is also the most efficient.
Last updated:  2011-07-13
An Exploration of the Kolmogorov-Smirnov Test as Competitor to Mutual Information Analysis
Carolyn Whitnall, Elisabeth Oswald, Luke Mather
A theme of recent side-channel research has been the quest for distinguishers which remain effective even when few assumptions can be made about the underlying distribution of the measured leakage traces. The Kolmogorov-Smirnov (KS) test is a well known non-parametric method for distinguishing between distributions, and, as such, a perfect candidate and an interesting competitor to the (already much discussed) mutual information (MI) based attacks. However, the side-channel distinguisher based on the KS test statistic has received only cursory evaluation so far, which is the gap we narrow here. This contribution explores the effectiveness and efficiency of Kolmogorov-Smirnov analysis (KSA), and compares it with mutual information analysis (MIA) in a number of relevant scenarios ranging from optimistic first-order DPA to multivariate settings. We show that KSA shares certain ‘generic’ capabilities in common with MIA whilst being more robust to noise than MIA in univariate settings. This has the practical implication that designers should consider results of KSA to determine the resilience of their designs against univariate power analysis attacks.
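As a concrete picture of a KS-based distinguisher, the sketch below partitions the measured leakage according to one predicted S-box output bit per key guess and scores each guess by the two-sample KS statistic. The names traces, plaintexts and sbox are assumed inputs for this illustration, and the single-bit partition is just one simple option, not necessarily the exact setup evaluated in the paper:

    import numpy as np
    from scipy.stats import ks_2samp

    def ks_distinguisher(traces, plaintexts, sbox):
        """Rank AES key-byte guesses by the Kolmogorov-Smirnov distance between
        the leakage distributions conditioned on one predicted S-box output bit.

        traces:     1-D numpy array, one leakage sample per trace
        plaintexts: 1-D numpy integer array of known plaintext bytes
        sbox:       length-256 numpy integer array (the AES S-box)"""
        scores = []
        for k in range(256):
            predicted_bit = (sbox[plaintexts ^ k] >> 7) & 1   # hypothesized intermediate bit
            group0 = traces[predicted_bit == 0]
            group1 = traces[predicted_bit == 1]
            result = ks_2samp(group0, group1)                 # KS distance between the groups
            scores.append(result.statistic)
        return int(np.argmax(scores))                         # best-scoring key guess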
Last updated:  2015-01-22
Cryptanalysis and improvement of a certificateless multi-proxy signature scheme
Miaomiao Tian, Wei Yang, Liusheng Huang
Multi-proxy signature allows an original signer to authorize a proxy group as his proxy agent, so that only the cooperation of all proxy signers in the group can create a proxy signature on behalf of the original signer. Recently, Jin and Wen defined a formal model of certificateless multi-proxy signature and proposed a concrete scheme. They claimed that their scheme is provably secure in their security model. Unfortunately, by giving concrete attacks, we show that Jin-Wen's certificateless multi-proxy signature scheme is not secure according to their security model. Possible improvements of their scheme are also suggested to prevent these attacks.
Last updated:  2011-07-24
A generalization of the Lucas addition chains
Amadou TALL
In this paper, we give a generalization of Lucas addition chains in which subtraction is allowed. We call them ``Lucas addition-subtraction chains''. We also show that this new method gives minimal addition-subtraction chains for infinitely many integers. This new method will also be used to prove that Lucas addition chains are optimal for many integers. Moreover, we show that Lucas addition chains give minimal addition chains for all integers of Hamming weight $3$, like the \emph{binary method}. Finally, we give a theorem to get short (and often minimal) Lucas addition-subtraction chains.
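For comparison, the binary method mentioned above yields an addition chain of length $\lfloor\log_2 n\rfloor + \mathrm{wt}(n) - 1$, where $\mathrm{wt}(n)$ is the Hamming weight of $n$. A sketch of the binary method only (not of Lucas chains):

    def binary_addition_chain(n):
        """Addition chain for n produced by the left-to-right binary method.
        Its length (number of additions) is floor(log2(n)) + wt(n) - 1."""
        chain = [1]
        for bit in bin(n)[3:]:                 # bits of n after the leading 1
            chain.append(chain[-1] * 2)        # doubling step (x + x)
            if bit == "1":
                chain.append(chain[-1] + 1)    # add the initial element 1
        return chain

    print(binary_addition_chain(21))           # n = 21 = 0b10101, Hamming weight 3
                                               # -> [1, 2, 4, 5, 10, 20, 21]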
Last updated:  2011-07-12
Improved Generalized Birthday Attack
Paul Kirchner
Let $r$, $B$ and $w$ be positive integers. Let $C$ be a linear code of length $Bw$ and a subspace of $\mathbb{F}_2^r$. The $k$-regular-decoding problem is to find a nonzero codeword consisting of $w$ length-$B$ blocks with Hamming weight $k$. This problem was mainly studied after 2002. Its hardness is critical for cryptography, as solving it gives a fast attack against FSB, SWIFFT and learning parity with noise. In this paper, the classical methods are combined in a single algorithm and improved.
Last updated:  2012-05-14
Backward Unlinkability for a VLR Group Signature Scheme with Efficient Revocation Check
Julien Bringer, Alain Patey
Verifier-Local Revocation (VLR) group signatures, introduced by Boneh and Shacham in 2004, are a particular case of dynamic group signature schemes where the revocation process does not influence the activity of the signers. The verifiers use a Revocation List to check if the signers are revoked. In all known schemes, checking a signature requires a computational time linear in the number of revoked members. Usually, it requires one pairing per revoked user. Recently, Chen and Li proposed a scheme where the Revocation Check uses exponentiations instead of pairings. In this paper, we first propose a correction of their scheme to enable a full proof of the traceability property. Then our main contribution is to extend this tweaked scheme to ensure Backward Unlinkability. This important property prevents the loss of anonymity of past signatures when a user is revoked. We achieve this significant improvement with only a constant additional cost. We thus obtain the scheme with the most efficient Revocation Check among VLR schemes enabling Backward Unlinkability.
Last updated:  2011-07-12
Complexity of universal access structures
Laszlo Csirmaz
An important parameter in a secret sharing scheme is the number of minimal qualified sets. Given this number, the universal access structure is the richest possible structure, namely the one in which there are one or more participants in every possible Boolean combination of the minimal qualified sets. Every access structure is a substructure of the universal structure for the same number of minimal qualified subsets, thus universal access structures have the highest complexity given the number of minimal qualified sets. We show that the complexity of the universal structure with $n$ minimal qualified sets is between $n/\log_2n$ and $n/2.7182\dots$ asymptotically.
Last updated:  2011-09-22
Restoring the Differential Resistance of MD6
Ethan Heilman
These notes present new results to reestablish the differential resistance of MD6. In this paper we introduce a classification system of differential weight patterns that allows us to extend previous analysis to prove that MD6 is resistant to differential cryptanalysis. Our analysis allows us to more than double the security margin of MD6 against differential attacks.
Last updated:  2012-01-09
An efficient characterization of a family of hyperbent functions with multiple trace terms
Jean-Pierre Flori, Sihem Mesnager
Lisoněk recently reformulated the characterization by Charpin and Gong of a large class of hyperbent functions in terms of cardinalities of curves. In this note, we show that such a reformulation can be naturally extended to a distinct family of functions proposed by Mesnager. In doing so, we obtain a polynomial time and space test for the hyperbentness of functions in this family. Finally, we show how this reformulation can be transformed to obtain a more efficient test.
Last updated:  2011-07-10
Identity based signcryption schemes without random oracles
Prashant Kushwah, Sunder Lal
Signcryption is a cryptographic primitive which performs encryption and signature in a single logical step at a cost lower than the signature-then-encryption approach. In this paper we give attacks on the confidentiality and unforgeability of two identity-based signcryption schemes without random oracles. We then propose an improved identity-based signcryption scheme without random oracles. We also propose an identity-based publicly verifiable signcryption scheme with third-party verification without random oracles.
Last updated:  2011-09-16
Monoidic Codes in Cryptography
Paulo S. L. M. Barreto, Richard Lindner, Rafael Misoczki
At SAC 2009, Misoczki and Barreto proposed a new class of codes, which have parity-check matrices that are quasi-dyadic. A special subclass of these codes were shown to coincide with Goppa codes and those were recommended for cryptosystems based on error-correcting codes. Quasi-dyadic codes have both very compact representations and allow for efficient processing, resulting in fast cryptosystems with small key sizes. In this paper, we generalize these results and introduce quasi-monoidic codes, which retain all desirable properties of quasi-dyadic codes. We show that, as before, a subclass of our codes contains only Goppa codes or, for a slightly bigger subclass, only Generalized Srivastava codes. Unlike before, we also capture codes over fields of odd characteristic. These include wild Goppa codes that were proposed at SAC 2010 by Bernstein, Lange, and Peters for their exceptional error-correction capabilities. We show how to instantiate standard code-based encryption and signature schemes with our codes and give some preliminary parameters.
Last updated:  2012-09-04
Socio-Rational Secret Sharing as a New Direction in Rational Cryptography
Mehrdad Nojoumian, Douglas R. Stinson
Rational secret sharing was proposed by Halpern and Teague in STOC'04. The authors show that, in a setting with rational players, secret sharing and multiparty computation are only possible if the actual secret reconstruction round remains unknown to the players. All the subsequent works use a similar approach with different assumptions. We change the direction by bridging cryptography, game theory, and reputation systems, and propose a social model for repeated rational secret sharing. We provide a novel scheme, named "socio-rational secret sharing", in which players are invited to each game based on their reputations in the community. The players run secret sharing protocols while founding and sustaining a public trust network. As a result, new concepts such as a rational foresighted player, social game, and social Nash equilibrium are introduced. To motivate our approach, consider a repeated secret sharing game such as secure auctions, where the auctioneers receive sealed-bids from the bidders to compute the auction outcome without revealing the losing bids. If we assume each party has a reputation value, we can then penalize (or reward) the players who are selfish (or unselfish) from game to game. We show that this social reinforcement rationally stimulates the players to be cooperative.
Last updated:  2011-07-10
Storing Secrets on Continually Leaky Devices
Yevgeniy Dodis, Allison Lewko, Brent Waters, Daniel Wichs
We consider the question of how to store a value secretly on devices that continually leak information about their internal state to an external attacker. If the secret value is stored on a single device from which it is efficiently retrievable, and the attacker can leak even a single predicate of the internal state of that device, then she may learn some information about the secret value itself. Therefore, we consider a setting where the secret value is shared between multiple devices (or multiple components of a single device), each of which continually leaks arbitrary adaptively chosen predicates of its individual state. Since leakage is continual, each device must also continually update its state so that an attacker cannot just leak it entirely one bit at a time. In our model, the devices update their state individually and asynchronously, without any communication between them. The update process is necessarily randomized, and its randomness can leak as well. As our main result, we construct a sharing scheme for two devices, where a constant fraction of the internal state of each device can leak in between and during updates. Our scheme has the structure of a public-key encryption, where one share is a secret key and the other is a ciphertext. As a contribution of independent interest, we also get public-key encryption in the continual leakage model, introduced by Brakerski et al. and Dodis et al. (FOCS '10). This scheme tolerates continual leakage on the secret key and the updates, and simplifies the recent construction of Lewko, Lewko and Waters (STOC '11). For our main result, we show how to update the ciphertexts of the encryption scheme so that the message remains hidden even if an attacker interleaves leakage on secret key and ciphertext shares. The security of our scheme is based on the linear assumption in prime-order bilinear groups. We also provide an extension to general access structures realizable by linear secret sharing schemes across many devices. The main advantage of this extension is that the state of some devices can be compromised entirely, while that of all the remaining devices is susceptible to continual leakage. Lastly, we show the impossibility of information theoretic sharing schemes in our model, where continually leaky devices update their state individually.
Last updated:  2011-09-27
High-speed high-security signatures
Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, Bo-Yin Yang
This paper shows that a $390 mass-market quad-core 2.4GHz Intel Westmere (Xeon E5620) CPU can create 109000 signatures per second and verify 71000 signatures per second on an elliptic curve at a 2^128 security level. Public keys are 32 bytes, and signatures are 64 bytes. These performance figures include strong defenses against software side-channel attacks: there is no data flow from secret keys to array indices, and there is no data flow from secret keys to branch conditions.
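This signature system is the design now widely deployed as Ed25519. Independently of the authors' optimized implementation, the 32-byte public keys and 64-byte signatures can be reproduced with an off-the-shelf binding such as PyNaCl; the snippet below is purely illustrative:

    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()             # fresh 32-byte secret seed
    verify_key = signing_key.verify_key             # corresponding public key

    signed = signing_key.sign(b"attack at dawn")    # message plus 64-byte signature
    print(len(verify_key.encode()))                 # 32
    print(len(signed.signature))                    # 64

    # Raises nacl.exceptions.BadSignatureError if the signature is invalid.
    verify_key.verify(b"attack at dawn", signed.signature)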
Last updated:  2011-07-10
Decoding One Out of Many
Nicolas Sendrier
Generic decoding of linear codes is the best known attack against most code-based cryptosystems. Understanding and measuring the complexity of the best decoding technique is thus necessary to select secure parameters. We consider here the possibility that an attacker has access to many cryptograms and is satisfied by decrypting (i.e.\ decoding) only one of them. We show that, in many cases of interest in cryptology, a variant of Stern's collision decoding can be adapted to gain a factor almost $\sqrt{N}$ when $N$ instances are given. If the attacker has access to an unlimited number of instances, we show that the attack complexity is significantly lower, in fact raised by a power slightly larger than $2/3$. Finally we give indications on how to counter those attacks.
Last updated:  2011-07-10
Highly Nonlinear Boolean Functions with Optimal Algebraic Immunity and Good Behavior Against Fast Algebraic Attacks
Deng Tang, Claude Carlet, Xiaohu Tang
In this paper, we present a new combinatorial conjecture about binary strings. Based on the new conjecture, two classes of Boolean functions of $2k$ variables with optimal algebraic immunity are proposed, where $k\ge 2$. The first class contains unbalanced functions having high algebraic degree and nonlinearity. The functions in the second one are balanced and have maximal algebraic degree and high nonlinearity. It is checked that, at least for small numbers of variables, both classes of functions have a good behavior against fast algebraic attacks. Compared with the known Boolean functions resisting algebraic attacks and fast algebraic attacks, the two classes of functions possess the highest lower bounds on nonlinearity. These bounds are however not enough for ensuring a sufficient nonlinearity for allowing resistance to the fast correlation attack. Nevertheless, as for previously found functions with the same features, there is a gap between the bound that we can prove and the actual values computed for small numbers of variables. Moreover, these values are very good and much better than for the previously found functions having all the necessary features for being used in the filter model of pseudo-random generators.
Last updated:  2011-07-11
Security flaws in a biometrics-based multi-server authentication with key agreement scheme
Debiao He
Recently, Yoon et al. proposed an efficient biometrics-based multi-server authentication with key agreement scheme for smart cards on elliptic curve cryptosystem (ECC) for multi-server communication environments [E.-J. Yoon, K.-Y. Yoo (2011) Robust biometrics-based multi-server authentication with key agreement scheme for smart cards on elliptic curve cryptosystem, Journal of Supercomputing, DOI: 10.1007/s11227-010-0512-1]. They claimed their scheme could withstand various attacks. In this letter, we show that Yoon et al.'s scheme is vulnerable to the privileged insider attack, the masquerade attack and the lost smart card attack.
Last updated:  2011-07-10
The Value $4$ of Binary Kloosterman Sums
Jean-Pierre Flori, Sihem Mesnager, Gérard Cohen
Kloosterman sums have recently become the focus of much research, most notably due to their applications in cryptography and their relations to coding theory. Very recently, Mesnager has shown that the value $4$ of binary Kloosterman sums gives rise to several infinite classes of bent functions, hyper-bent functions and semi-bent functions in even dimension. In this paper we analyze the different strategies used to find zeros of binary Kloosterman sums in order to develop and implement an algorithm that finds the value $4$ of such sums. We then present experimental results showing that the value $4$ of binary Kloosterman sums gives rise to bent functions for small dimensions, a case for which no mathematical solution is known so far.
Last updated:  2011-07-24
Dynamic Group Blind Signatures
Essam Ghadafi
Group signatures provide authenticity of a message while maintaining the signer's privacy. A blind signature, on the other hand, allows a user to obtain a signature while maintaining the privacy of the message. Group blind signatures combine properties of both group signatures and blind signatures and therefore offer a stronger notion of anonymity where both the message to be signed and the identity of the signer remain anonymous. Group blind signatures have many useful applications in practice, including multi-authority e-voting systems and distributed e-cash systems. In this paper, we first present a formalized security model for dynamic group blind signatures and then propose a new group blind signature scheme which has a round-optimal signing protocol and yet does not rely on random oracles; to the best of our knowledge, it is the first scheme with such properties. Most of the previous schemes in the literature require either lengthy (many-round) signing protocols and/or random oracles to prove their security. Our scheme also allows for concurrent joining and yields signatures of constant size. In addition, the new variant of the Camenisch-Lysyanskaya signature scheme which we introduce and use for the joining protocol is of interest in its own right and could be used either on its own to sign group elements or as a building block for other cryptographic constructions.
Last updated:  2011-10-15
Practically Efficient Proof of Retrievability in Cloud Storage
Uncategorized
Jia XU, Ee-Chien CHANG
Proofs of Retrievability ({\POR}) is a cryptographic method for remotely auditing the integrity of files stored in the cloud, without keeping a copy of the original files in local storage. In a {\POR} scheme, a user Alice backs up her data file together with some authentication data to a potentially dishonest cloud storage server Bob. Later, Alice can periodically and remotely verify the integrity of her data stored with Bob using the authentication data, without retrieving the data file during a verification. Besides security, performance in communication, storage overhead and computation is a major consideration. Shacham and Waters~\cite{CompactPOR} gave a fast scheme with $\mathcal{O}(s)$ communication bits and a factor of $1/s$ file size expansion. Although Ateniese~\emph{et al.}~\cite{PDP} achieve a constant communication requirement with the same $1/s$ storage overhead, their scheme requires intensive computation in the setup and verification. In this paper, we incorporate a recent construction of a constant-size polynomial commitment scheme into the Shacham and Waters~\cite{CompactPOR} scheme. The resulting scheme requires constant communication bits (particularly, 720 bits if an elliptic curve is used or 3312 bits if a modulo group is used) per verification and a factor of $1/s$ file size expansion, and its computation in the setup and verification is significantly reduced compared to Ateniese~\emph{et al.}~\cite{PDP}. Essentially, Ateniese~\emph{et al.}~\cite{PDP} require one group multiplication per bit of the data file in the setup, while the proposed scheme requires one group multiplication per chunk of data bits (160 bits per chunk if an elliptic curve is used or 1024 bits per chunk if a modulo group is used). The experimental results show that our proposed scheme is indeed efficient and practical. Our security proof is based on the Strong Diffie-Hellman assumption.
Last updated:  2012-03-20
The Exact Security of a Stateful IBE and New Compact Stateful PKE Schemes
Uncategorized
S. Sree Vivek, S. Sharmila Deva Selvi, C. Pandu Rangan
Recently, Baek et al. proposed a stateful identity based encryption scheme with compact ciphertext and commented that the security of the scheme can be reduced to the Computational Bilinear Diffie-Hellman (CBDH) problem. In this paper, we formally prove that the security of the stateful identity based encryption scheme by Baek et al. cannot be reduced to the CBDH problem. In fact, we show that the challenger will confront the Y-Computational problem while providing the decryption oracle access to the adversary. We provide the exact and formal security proof for the scheme, assuming the hardness of the Gap Bilinear Diffie-Hellman (GBDH) problem. We also propose two new stateful public key encryption schemes with ciphertext verifiability. Our schemes offer more compact ciphertexts when compared to all existing stateful public key encryption schemes with ciphertext verifiability. All the schemes are proved secure in the random oracle model.
Last updated:  2011-07-06
Certificateless Aggregate Signcryption Schemes
Ziba Eslami, Nasrollah Pakniat
The concept of an aggregate signcryption scheme was first introduced in 2009 by Selvi et al. in the identity-based setting. The aggregation process of these schemes reduces the amount of exchanged information and is particularly useful in low-bandwidth communication networks and computationally restricted environments. In this paper, we define a suitable security model for certificateless aggregate signcryption schemes and propose an example which we prove secure in the random oracle model under the gap Bilinear Diffie-Hellman and computational Diffie-Hellman intractability assumptions.
Last updated:  2012-06-15
High-Entropy Visual Identification for Touch Screen Devices
Nathaniel Wesley Filardo, Giuseppe Ateniese
We exhibit a system for improving the quality of user-derived keying material on touch-screen devices. We allow a device to recover previously generated, highly entropic data suitable for use as (part of) a strong secret key from a user’s act of identifying to the device. Our system uses visual cryptography [22], using no additional electronics and no memorization on the part of the user. Instead, we require the use of a transparency overlaid on the touch-screen. Our scheme is similar to the identification scheme of [23] but tailored for constrained, touch-screen displays.
Last updated:  2012-05-10
Constructing a Ternary FCSR with a Given Connection Integer
Uncategorized
Lin Zhiqiang, Pei Dingyi
FCSRs have been proposed as an alternative to LFSRs for the design of stream ciphers. In 2009, a new "ring" representation of FCSRs was presented. This new representation preserves the statistical properties and circumvents the weaknesses of the Fibonacci and the Galois FCSRs. Moreover, an extension of the ring FCSRs called ternary FCSRs has been proposed. They are suitable for hardware and software implementations of FCSRs. In this paper, we show a method of constructing a ternary FCSR with a given connection integer for hardware implementation. The construction is simple and convenient, and the ternary FCSRs we obtain meet the hardware criteria.
Last updated:  2011-07-04
Generalized Learning Problems and Applications to Non-Commutative Cryptography
Gilbert Baumslag, Nelly Fazio, Antonio R. Nicolosi, Vladimir Shpilrain, William E. Skeith III
We propose a generalization of the learning parity with noise (LPN) and learning with errors (LWE) problems to an abstract class of group-theoretic learning problems that we term _learning homomorphisms from noise_ (LHN). This class of problems contains LPN and LWE as special cases, but is much more general. It allows, for example, instantiations based on non-abelian groups, resulting in a new avenue for the application of combinatorial group theory to the development of cryptographic primitives. We then study a particular instantiation using relatively free groups and construct a symmetric cryptosystem based upon it.
Last updated:  2011-07-04
An Efficient Attack on All Concrete KKS Proposals
Ayoub Otmani, Jean-Pierre Tillich
Kabatianskii, Krouk and Smeets proposed in 1997 a digital signature scheme based on a couple of random error-correcting codes. A variation of this scheme was proposed recently and was proved to be EUF-1CMA secure in the random oracle model. In this paper we investigate the security of these schemes and suggest a simple attack based on (essentially) Stern's algorithm for finding low-weight codewords. It efficiently recovers the private key of all schemes of this type existing in the literature. This is basically due to the fact that we can define a code from the available public data with unusual properties: it has many codewords whose support is concentrated in a rather small subset. In such a case, Stern's algorithm performs much better and we provide a theoretical analysis substantiating this claim. Our analysis actually shows that the insecurity of the proposed parameters is related to the fact that the rates of the couple of random codes used in the scheme were chosen to be too close. This does not compromise the security of the whole KKS scheme. It just points out that the region of weak parameters is really much larger than previously thought.
Last updated:  2012-05-10
On the (Non-)Equivalence of UC Security Notions
Uncategorized
Oana Ciobotaru
Over the years, various security notions have been proposed in order to cope with a wide range of security scenarios. Recently, the study of security notions has been extended towards comparing cryptographic definitions of secure implementation with game-theoretic definitions of universal implementation of a trusted mediator. In this work we go a step further: We define the notion of game universal implementation and we show it is equivalent to weak stand-alone security. Thus, we are able to answer positively the open question from [Halpern&Pass2010] regarding the existence of game-theoretic definitions that are equivalent to cryptographic security notions for which the ideal world simulator does not depend on both the distinguisher and the input distribution. Moreover, we investigate the propagation of the weak stand-alone security notion through the existing security hierarchy, from stand-alone to universal composability. Our main achievement in this direction is a separation result between two variants of the UC security definition: 1-bit specialized simulator UC security and specialized simulator UC security. This solves an open question from [Lindell03] and comes in contrast with the well known equivalence result between 1-bit UC security and UC security. We also show that weak security under 1-bounded concurrent general composition is equivalent to 1-bit specialized simulator UC security. As a consequence, we obtain that the notion of weak stand-alone security and the notion of stand-alone security are not equivalent.
Last updated:  2011-07-04
A coprocessor for secure and high speed modular arithmetic
Nicolas Guillermin
We present a coprocessor design for fast arithmetic over large numbers of cryptographic size. Our design provides an efficient way to prevent side channel analysis as well as fault analysis targeting modular arithmetic with large prime or composite numbers. These two countermeasures are suitable both for Elliptic Curve Cryptography over prime fields and for RSA, with or without CRT. To do so, we use the residue number system (RNS) in an efficient manner to protect against leakage and faults, while keeping its ability to execute modular arithmetic with large numbers quickly. We illustrate our countermeasure with a fully protected RSA-CRT implementation using our architecture, and show that it is possible to execute a secure 1024-bit RSA-CRT in less than 0.7 ms on an FPGA.
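For readers unfamiliar with RNS arithmetic, the following Python sketch shows the textbook idea the design builds on: a large integer is split into small independent residues, arithmetic is done channel by channel, and the result is recovered with the Chinese Remainder Theorem. The base {255, 256, 257} and the helper names are illustrative only and are not the coprocessor's actual parameters.

from math import prod

# Classic toy RNS base {2^k - 1, 2^k, 2^k + 1} with k = 8; the moduli are
# pairwise coprime, so every integer below M = 255*256*257 has a unique
# residue representation.
BASE = [255, 256, 257]
M = prod(BASE)

def to_rns(x):
    return [x % m for m in BASE]

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(residues, BASE):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # modular inverse (Python 3.8+)
    return x % M

def rns_mul(a, b):
    # Each small channel is processed independently (and, in hardware,
    # in parallel); no carries propagate between channels.
    return [(ra * rb) % m for ra, rb, m in zip(a, b, BASE)]

x, y = 1234, 5678
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M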
Last updated:  2011-07-04
Hidden Pair of Bijection Signature Scheme
Masahito Gotaishi, Shigeo Tsujii
A new signature system based on multivariate public key cryptosystems is proposed. The new system, Hidden Pair of Bijection (HPB), is an advanced version of the Complementary STS system. This system realizes both high security and quick signing. Experiments showed that the cryptanalysis of HPB by Gröbner bases has no less complexity than that of random polynomial systems. It is also secure against other ways of cryptanalysis that are effective against Complementary STS. On the other hand, since it is based on bijections, signatures exist for any message, unlike other cryptosystems based on non-bijections such as HFE or Unbalanced Oil and Vinegar.
Last updated:  2011-09-15
Bi-Deniable Public-Key Encryption
Adam O'Neill, Chris Peikert, Brent Waters
In CRYPTO 1997, Canetti \etal put forward the intriguing notion of \emph{deniable encryption}, which (informally) allows a sender and/or receiver, having already performed some encrypted communication, to produce `fake' (but legitimate-looking) random coins that open the ciphertext to another message. Deniability is a powerful notion for both practice and theory: apart from its inherent utility for resisting coercion, a deniable scheme is also noncommitting (a useful property in constructing adaptively secure protocols) and secure under selective-opening attacks on whichever parties can equivocate. To date, however, known constructions have achieved only limited forms of deniability, requiring at least one party to withhold its randomness, and in some cases using an interactive protocol or external parties. In this work we construct \emph{bi-deniable} public-key cryptosystems, in which both the sender and receiver can simultaneously equivocate; we stress that the schemes are noninteractive and involve no third parties. One of our systems is based generically on ``simulatable encryption'' as defined by Damgård and Nielsen (CRYPTO 2000), while the other is lattice-based and builds upon the results of Gentry, Peikert and Vaikuntanathan (STOC 2008) with techniques that may be of independent interest. Both schemes work in the so-called ``multi-distributional'' model, in which the parties run alternative key-generation and encryption algorithms for equivocable communication, but claim under coercion to have run the prescribed algorithms. Although multi-distributional deniability has not attracted much attention, we argue that it is meaningful and useful because it provides credible coercion resistance in certain settings, and suffices for all of the related properties mentioned above.
Last updated:  2011-08-31
Cryptanalysis of the $AA_{\beta}$ Cryptosystem based on Linear Diophantine Equation Discrete Log Problem
Yanbin Pan, Yingpu Deng
Very recently, Ariffin and Abu proposed a new version of the $AA_{\beta}$ cryptosystem based on linear diophantine equation discrete log problem. However, we give an efficient simple ciphertext-only attack to show that it is not secure.
Last updated:  2011-08-05
$HB^N$: An HB-like protocol secure against man-in-the-middle attacks
Carl Bosley, Kristiyan Haralambiev, Antonio Nicolosi
We construct a simple authentication protocol, secure against man-in-the-middle attacks, whose security is based solely on the problem of Learning Parity with Noise (LPN). Our protocol is suitable for RFID devices, whose limited circuit size and power constraints rule out the use of more heavyweight operations such as modular exponentiation. The protocol is extremely simple: both parties compute a noisy bilinear function of their inputs. The proof, however, is quite technical, and we believe that some of our technical tools may be of independent interest.
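For context, the LPN problem underlying this protocol asks to recover a secret bit-vector s from noisy parities <a, s> xor e. The Python sketch below generates such samples under assumed toy parameters; it is standard LPN sample generation, not the HB^N protocol itself.

import random

def lpn_sample(s, tau=0.125):
    """One LPN sample (a, z) with z = <a, s> xor e and Pr[e = 1] = tau."""
    a = [random.getrandbits(1) for _ in s]
    parity = sum(ai & si for ai, si in zip(a, s)) & 1   # inner product mod 2
    e = 1 if random.random() < tau else 0               # Bernoulli noise
    return a, parity ^ e

# Toy example: a 16-bit secret and a few noisy samples. Recovering s from
# many such samples is believed hard for suitably large parameters.
s = [random.getrandbits(1) for _ in range(16)]
for a, z in (lpn_sample(s) for _ in range(3)):
    print(a, z)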
Last updated:  2011-07-01
Efficient Methods for Exploiting Faults Induced at AES Middle Rounds
Chong Hee Kim
Faults occurring during the operation of a hardware device cause many problems such as performance deterioration, unreliable output, etc. If a fault occurs in a cryptographic hardware device, the effect can be even more serious because an adversary may exploit it to find the secret information stored in the device. More precisely, the adversary can find the key of a block cipher using differential information between correct and faulty ciphertexts obtained by inducing faults during the computation of ciphertexts. This kind of attack is called \emph{Differential Fault Analysis} (DFA). Among many ciphers, the \emph{Advanced Encryption Standard} (AES) has been the main target of DFA due to its popularity. AES is widely used in different platforms and systems including Intel and AMD microprocessors. Normally DFA on AES exploits faults induced at the last few rounds. Hence, a general countermeasure is to recompute the last few rounds of AES and compare the result with the original output. As redundancy is a costly countermeasure, one should ascertain exactly which rounds need to be protected. In 2006, Phan and Yen introduced a new type of DFA, the so-called Square-DFA, that works even when faults are induced into some middle rounds. However, it is impractical as it requires several hundred faulty ciphertexts as well as a bit fault model. In this article, we propose new attacks that need only dozens of faulty ciphertexts in a byte fault model. Normally it is believed that randomly corrupting a byte is easier than corrupting a specific bit. In addition, we extend the attacks to AES-192 and AES-256, which is the first such result in the literature.
Last updated:  2011-07-28
Extractors Against Side-Channel Attacks: Weak or Strong?
Marcel Medwed, Francois-Xavier Standaert
Randomness extractors are important tools in cryptography. Their goal is to compress a high-entropy source into a more uniform output. Beyond their theoretical interest, they have recently gained attention because of their use in the design and proof of leakage-resilient primitives, such as stream ciphers and pseudorandom functions. However, for these proofs of leakage resilience to be meaningful in practice, it is important to instantiate and implement the components they are based on. In this context, while numerous works have investigated the implementation properties of block ciphers such as the AES Rijndael, very little is known about the application of side-channel attacks against extractor implementations. In order to close this gap, this paper instantiates a low-cost hardware extractor and analyzes it both from a performance and from a side-channel security point of view. Our investigations lead to contrasted conclusions. On the one hand, extractors can be efficiently implemented and protected with masking. On the other hand, they provide adversaries with many more exploitable leakage samples than, e.g. block ciphers. As a result, they can ensure high security margins against standard (non-profiled) side-channel attacks and turn out to be much weaker against profiled attacks. From a methodological point of view, our analysis consequently raises the question of which attack strategies should be considered in security evaluations.
Last updated:  2011-08-01
An efficient certificateless authenticated key agreement protocol without bilinear pairings
Debiao He
Certificateless public key cryptography simplifies the complex certificate management in traditional public key cryptography and resolves the key escrow problem in identity-based cryptography. Many certificateless authenticated key agreement protocols using bilinear pairings have been proposed. But the relative computation cost of the pairing is approximately twenty times higher than that of the scalar multiplication over an elliptic curve group. Recently, several certificateless authenticated key agreement protocols without pairings were proposed to improve the performance. In this paper, we propose a new certificateless authenticated key agreement protocol without pairings. The user in our protocol needs to compute only five scalar multiplications to finish the key agreement. We also show that the proposed protocol is secure in the random oracle model.
Last updated:  2011-06-27
Strongly Secure One Round Authenticated Key Exchange Protocol with Perfect Forward Security
Hai Huang
This paper investigates two-pass authenticated key exchange protocols in the enhanced Canetti-Krawczyk (eCK) model with perfect forward security. Currently, there exist no authenticated key exchange protocols which are provably secure in the eCK model and at the same time achieve perfect forward security against an active adversary in one round. We propose a new two-pass authenticated key exchange protocol which enjoys the following desirable properties. \textbf{First}, our protocol is shown secure in the eCK model under the gap Diffie-Hellman (GDH) assumption. Moreover, our protocol does not use the NAXOS transformation, the drawback of which will be discussed in the introduction. \textbf{Second}, under the same assumption, we prove that our protocol achieves perfect forward security against an active adversary in one round. To the best of our knowledge, our proposal is the first two-pass (one-round) AKE protocol that is provably secure in the eCK model and achieves perfect forward security against an active adversary.
Last updated:  2011-06-29
LBlock: A Lightweight Block Cipher
Wenling Wu, Lei Zhang
In this paper, we propose a new lightweight block cipher called LBlock. Similar to many other lightweight block ciphers, the block size of LBlock is 64 bits and the key size is 80 bits. Our security evaluation shows that LBlock can achieve enough security margin against known attacks, such as differential cryptanalysis, linear cryptanalysis, impossible differential cryptanalysis and related-key attacks. Furthermore, LBlock can be implemented efficiently not only in hardware environments but also on software platforms such as 8-bit microcontrollers. Our hardware implementation of LBlock requires about 1320 GE on 0.18 $\mu m$ technology with a throughput of 200 Kbps at 100 KHz. The software implementation of LBlock on an 8-bit microcontroller requires about 3955 clock cycles to encrypt a plaintext block.
Last updated:  2011-08-04
Efficient Fully Homomorphic Encryption from (Standard) LWE
Zvika Brakerski, Vinod Vaikuntanathan
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of ``short vector problems'' on arbitrary lattices. Our construction improves on previous works in two aspects: 1. We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new {\em re-linearization} technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. 2. We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new {\em dimension-modulus reduction} technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, {\em without introducing additional assumptions}. Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \polylog(k)+\log |DB|$ bits per single-bit query (here, $k$ is a security parameter).
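As background for the LWE assumption the construction rests on, here is a minimal Python sketch of plain (non-homomorphic) secret-key LWE encryption of a single bit in Regev's style, with toy parameters that are far too small for security. It is not the paper's scheme, only the basic primitive: c = (a, <a,s> + e + m*floor(q/2)) mod q, decrypted by rounding.

import random

q, n = 4093, 64                       # toy parameters, not secure

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, m):
    """Encrypt bit m: (a, <a, s> + e + m * floor(q/2)) mod q."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-4, 4)         # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    # m = 0 leaves d close to 0 (mod q); m = 1 leaves d close to q/2.
    return 0 if min(d, q - d) < q // 4 else 1

s = keygen()
assert all(decrypt(s, encrypt(s, m)) == m for m in (0, 1))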
Last updated:  2012-03-27
Another Look at Security Definitions
Uncategorized
Neal Koblitz, Alfred Menezes
We take a critical look at security models that are often used to give "provable security'' guarantees. We pay particular attention to digital signatures, symmetric-key encryption, and leakage resilience. We find that there has been a surprising amount of uncertainty about what the "right'' definitions might be. Even when definitions have an appealing logical elegance and nicely reflect certain notions of security, they fail to take into account many types of attacks and do not provide a comprehensive model of adversarial behavior.
Last updated:  2011-06-27
A Domain Transformation for Structure-Preserving Signatures on Group Elements
Melissa Chase, Markulf Kohlweiss
We present a generic transformation that allows us to use a large class of pairing-based signatures to construct schemes for signing group elements in a structure preserving way. As a result of our transformation we obtain a new efficient signature scheme for signing a vector of group elements that is based only on the well established decisional linear assumption (DLIN). Moreover, the public keys and signatures of our scheme consist of group elements only, and a signature is verified by evaluating a set of pairing-product equations. In combination with the Groth-Sahai proof system, such a signature scheme is an ideal building block for many privacy-enhancing protocols. To do this, we start by proposing a new stateful signature scheme for signing vectors of exponents that is F-unforgeable under weak chosen message attacks. This signature scheme is of independent interest as it is compatible with Groth-Sahai proofs and secure under a computational assumption implied by DLIN. Then we give a general transformation for signing group elements based on signatures (for signing exponents) with efficient non-interactive zero-knowledge proofs. This transform also removes any dependence on state in the signature used to sign exponents. Finally, we obtain our result by instantiating this transformation with the above signature scheme and Groth-Sahai proofs.
Last updated:  2012-01-09
An Improved Internet Voting Protocol
Uncategorized
Mehmet Sabir Kiraz, Süleyman Kardaş, Muhammed Ali Bingöl, Fatih Birinci
Norway is going to trial an Internet voting scheme in September 2011 for local governmental elections, targeting a comprehensive Internet voting system for national elections in 2017. This protocol is strong in several respects. First of all, it resists malicious voters' computers: an honest voter will be aware of malicious behavior caused by the computer during the entire voting procedure. However, the security of the protocol depends on the assumption that the players (organizations) are completely independent and reliable, and that the receipt codes are sent to the voters securely. In this work, we take a closer look at the Internet voting protocol and investigate the following. – The privacy of voters is compromised if the players Ballot Box (BB) and Receipt Generator (RG) cooperate, since the private key of the Decryption Service (DS) can be obtained by these two players. To prevent this possible issue, we propose an improved protocol without adding additional players. – To verify the correctness of the overall protocol, two additional channels are used: receipt codes are sent to the voters over the pre-channel (e.g., postal service) and also over the post-channel (e.g., SMS). However, if a voter holds both the SMS and the paper of receipt codes at the same time, he can reveal his/her vote even after the election. To overcome this issue, we propose a new method where the SMS is used only as a notification message, and an additional phone call is used for the complete verification of the vote. – The reliability of the Norwegian scheme is based on the correctness of the receipt codes that are sent to the voters over a secure pre-channel. However, if the printed receipt codes are falsely generated (or falsely printed) or the pre-channel is not completely secure, a vote can be counted for a different candidate without any detection. In order to prevent this problem, in our protocol the voters also take part in the verification of the receipt codes before the vote casting protocol.
Last updated:  2011-06-27
Encrypting More Information in Visual Cryptography Scheme
Feng Liu, Peng Li, ChuanKun Wu
The visual cryptography scheme (VCS) is a scheme which encodes a secret image into several shares; only qualified sets of shares can recover the secret image visually, while other sets of shares cannot get any information about the content of the secret image. From the point of view of encrypting (carrying) the secret information, the traditional VCS is not an efficient method. The amount of information that a VCS encrypts depends on the number of secret pixels. Because of the limitations of the human eye, the pixel expansion and the alignment problem of the VCS, a VCS can perhaps only be used to encrypt a small secret image. A VCS requires a random number generator to guide the generation of the shares. As we will show in this paper, the random input of a VCS can be seen as a subchannel which helps carry more secret information. We propose a general method to increase the amount of secret information that a threshold VCS can encrypt by treating the pseudo-random inputs of the VCS as a subchannel, i.e. the Encrypting More Information Visual Cryptography Scheme (EMIVCS). We also study the bandwidth of the proposed EMIVCS. The disadvantage of the proposed scheme is that the decoding process is computer aided. However, compared with other computer-aided VCSs, the proposed scheme is more efficient.
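For readers new to visual cryptography, the Python sketch below implements the classic (2,2) scheme with pixel expansion 2 (the textbook construction, not the EMIVCS proposed here): white pixels get identical random subpixel pairs in both shares, black pixels get complementary pairs, so stacking the shares (an OR of subpixels) reveals the image while each share alone is uniformly random.

import random

def share_22(image):
    """image: 2D list of bits (1 = black, 0 = white). Returns two shares,
    each pixel expanded into a pair of subpixels."""
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:
            pattern = random.choice([(0, 1), (1, 0)])          # random subpixel pair
            r1.extend(pattern)
            if pixel == 0:
                r2.extend(pattern)                             # white: identical pair
            else:
                r2.extend((1 - pattern[0], 1 - pattern[1]))    # black: complementary pair
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def overlay(s1, s2):
    # Stacking transparencies corresponds to OR-ing the subpixels.
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0, 1], [0, 1, 0]]
s1, s2 = share_22(secret)
print(overlay(s1, s2))   # black pixels -> two black subpixels; white -> one black, one white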
Last updated:  2011-06-26
Careful with Composition: Limitations of Indifferentiability and Universal Composability
Uncategorized
Thomas Ristenpart, Hovav Shacham, Thomas Shrimpton
We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any cryptosystem. We characterize the uncovered limitation of the indifferentiability framework by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function nonmalleability, key-dependent message security, and more. We formalize a stronger notion, reset indifferentiability, that enables an indifferentiability-style composition theorem covering such multi-stage security notions, but then show that practical hash constructions cannot be reset indifferentiable. We discuss how these limitations also affect the universal composability framework. We finish by showing the chosen-distribution attack security (which requires a multi-stage game) of some important public-key encryption schemes built using a hash construction paradigm introduced by Dodis, Ristenpart, and Shrimpton.
Last updated:  2011-08-17
Fast and Regular Algorithms for Scalar Multiplication over Elliptic Curves
Matthieu Rivain
Elliptic curve cryptosystems are more and more widespread in everyday-life applications. This trend should still gain momentum in coming years thanks to the exponential security enjoyed by these systems compared to the subexponential security of other systems such as RSA. For this reason, efficient elliptic curve arithmetic is still a hot topic for cryptographers. The core operation of elliptic curve cryptosystems is the scalar multiplication which multiplies some point on an elliptic curve by some (usually secret) scalar. When such an operation is implemented on an embedded system such as a smart card, it is subject to {\em side channel attacks}. To withstand such attacks, one must constrain the scalar multiplication algorithm to be {\em regular}, namely to have an operation flow independent of the input scalar. A large amount of work has been published that focuses on efficient and regular scalar multiplication, and the choice leading to the best performance in practice is not clear. In this paper, we look into this question for general-form elliptic curves over large prime fields and we complete the current state of the art. One of the fastest low-memory algorithms in the current literature is the Montgomery ladder using co-$Z$ Jacobian arithmetic {\em with $X$ and $Y$ coordinates only}. We detail the regular implementation of this algorithm with various trade-offs and we introduce a new binary algorithm achieving comparable performance. For implementations that are less constrained in memory, windowing techniques and signed exponent recoding enable reaching better timings. We survey regular algorithms based on such techniques and we discuss their security with respect to side-channel attacks. On the whole, our work gives a clear view of the currently best time-memory trade-offs for regular implementation of scalar multiplication over prime-field elliptic curves.
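To illustrate what "regular" means here, the Python sketch below shows the generic Montgomery-ladder structure over an abstract group: exactly one addition and one doubling are executed per scalar bit, whatever the bit value. The helper names are hypothetical, integer addition stands in for point operations, and none of the paper's co-Z Jacobian formulas are implemented; a hardened version would also replace the visible branch by a constant-time swap.

def montgomery_ladder(k, P, add, double, zero):
    """Compute k*P using one `add` and one `double` per scalar bit."""
    R0, R1 = zero, P                   # invariant: R1 = R0 + P
    for bit in bin(k)[2:]:             # most significant bit first
        if bit == '1':
            R0, R1 = add(R0, R1), double(R1)
        else:
            R1, R0 = add(R0, R1), double(R0)
        # Either way, exactly one addition and one doubling were performed.
    return R0

# Toy check with integers standing in for curve points: double(x) = 2x, add = +.
assert montgomery_ladder(123456789, 7, lambda a, b: a + b, lambda a: 2 * a, 0) == 123456789 * 7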
Last updated:  2012-04-17
Functional Re-encryption and Collusion-Resistant Obfuscation
Nishanth Chandran, Melissa Chase, Vinod Vaikuntanathan
We introduce a natural cryptographic functionality called \emph{functional re-encryption}. Informally, this functionality, for a public-key encryption scheme and a function $F$ with $n$ possible outputs, transforms (``re-encrypts") an encryption of a message $m$ under an ``input public key" $\pk$ into an encryption of the same message $m$ under one of the $n$ ``output public keys", namely the public key indexed by $F(m)$. In many settings, one might require that the program implementing the functional re-encryption functionality should reveal nothing about both the input secret key $\sk$ as well as the function $F$. As an example, consider a user Alice who wants her email server to share her incoming mail with one of a set of $n$ recipients according to an access policy specified by her function $F$, but who wants to keep this access policy private from the server. Furthermore, in this setting, we would ideally obtain an even stronger guarantee: that this information remains hidden even when some of the $n$ recipients may be corrupted. To formalize these issues, we introduce the notion of \emph{collusion-resistant obfuscation} and define this notion with respect to average-case secure obfuscation (Hohenberger \emph{et al.} - TCC 2007). We then provide a construction of a functional re-encryption scheme for any function $F$ with a polynomial-size domain and show that it satisfies this notion of collusion-resistant obfuscation. We note that collusion-resistant security can be viewed as a special case of dependent auxiliary input security (a setting where virtually no positive results are known), and this notion may be of independent interest. Finally, we show that collusion-resistant obfuscation of functional re-encryption for a function $F$ gives a way to obfuscate $F$ in the sense of Barak {\em et al.} (CRYPTO 2001), indicating that this task is impossible for arbitrary (polynomial-time computable) functions $F$.
Last updated:  2012-03-22
Cryptanalysis of an Authenticated Key Agreement Protocol for Wireless Mobile Communications
Uncategorized
Debiao He
With the rapid progress of wireless mobile communication, the authenticated key agreement (AKA) protocol has attracted an increasing amount of attention. However, due to the limitations of bandwidth and storage of the mobile devices, most of the existing AKA protocols are not suitable for wireless mobile communication. Recently, Lo et al. presented an efficient authenticated key agreement protocol based on elliptic curve cryptography and included their protocol in 3GPP2 specifications. However, in this letter, we point out that Lo et al.'s protocol is vulnerable to an off-line password guessing attack. To resist the attack, we also propose an efficient countermeasure.
Last updated:  2011-06-22
New look at impossibility result on Dolev-Yao models with hashes
István Vajda
Backes, Pfitzmann and Waidner showed in [7] that for protocols with hashes, Dolev-Yao style models do not have a cryptographically sound realization in the sense of BRSIM/UC in the standard model of cryptography. They proved that the random oracle model provides a cryptographically sound realization. Canetti [9] introduced the notion of oracle hashing “towards realizing random oracles”. Based on these two approaches, we propose a random hash primitive which makes a cryptographically sound realization possible in the sense of BRSIM/UC in the standard model of cryptography.
Last updated:  2011-11-19
On the Efficient Implementation of Pairing-Based Protocols
Michael Scott
The advent of pairing-based protocols has had a major impact on the applicability of cryptography to the solution of more complex real-world problems. However, there has always been a question mark over the performance of such protocols. In response much work has been done to optimize pairing implementation, and it is now generally accepted that being pairing-based does not preclude a protocol from consideration as a practical proposition. However, although a lot of effort has gone into the optimization of the stand-alone pairing, in many protocols the pairing calculation appears in a particular context within which further optimizations may be possible. It is the purpose of this paper to bridge the gap between theory and practice, and to show that even complex protocols may have a surprisingly efficient implementation. We also point out that in some cases the usually recommended pairing-friendly curves may not in fact be optimal. We claim a new record with our implementation of a pairing at the AES-256 bit level.
Last updated:  2011-06-22
Cryptanalysis of a key agreement protocol based on chaotic Hash
Debiao He
With the rapid development of the theory and application of chaos, more and more researchers are focusing on chaos-based cryptosystems. Recently, Guo et al. [X. Guo, J. Zhang, Secure group key agreement protocol based on chaotic Hash, Information Sciences 180 (2010) 4069–4074] proposed a secure key agreement protocol based on chaotic Hash. They claimed that their scheme could withstand various attacks. Unfortunately, by giving concrete attacks, we show that Guo et al.’s scheme is vulnerable to the off-line password guessing attack. The analysis shows that Guo et al.’s scheme is not secure for practical applications.
Last updated:  2011-06-22
A depth-16 circuit for the AES S-box
Joan Boyar, Rene Peralta
New techniques for reducing the depth of circuits for cryptographic applications are described and applied to the AES S-box. These techniques also keep the number of gates quite small. The result, when applied to the AES S-box, is a circuit with depth 16 and only 128 gates. For the inverse, it is also depth 16 and has only 127 gates. There is a shared middle part, common to both the S-box and its inverse, consisting of 63 gates.
Last updated:  2011-06-22
Cryptanalysis of Cho \textit{et al.}'s Protocol, A Hash-Based Mutual Authentication Protocol for RFID Systems
Masoumeh Safkhani, Pedro Peris-Lopez, Julio Cesar Hernandez-Castro, Nasour Bagheri, Majid Naderi
Radio frequency identification systems need protocols to provide confidentiality, user privacy, mutual authentication, etc. These protocols should resist active and passive attacks such as forgery, traceability, replay and desynchronization attacks. In this paper we cryptanalyze a hash-based RFID mutual authentication protocol which has been recently proposed by Cho \textit{et al.} More precisely, we present the following attacks on this protocol: \begin{enumerate} \item \textbf{Desynchronization attack}: the success probability of the attack is ``1'' and the attack complexity is one run of the protocol. \item \textbf{Tag impersonation attack}: the success probability of the attack is ``$\frac{1}{4}$'' for two runs of the protocol. \item \textbf{Reader impersonation attack}: the success probability of the attack is ``$\frac{1}{4}$'' for two runs of the protocol. \end{enumerate}
Last updated:  2011-06-22
Simple and Asymptotically Optimal $t$-Cheater Identifiable Secret Sharing Scheme
Ashish Choudhury
In this paper, we consider the problem of k-out-of-n secret sharing schemes capable of identifying t cheaters. We design a very simple k-out-of-n secret sharing scheme which can identify up to t cheaters, with probability at least 1 - \epsilon, where 0 < \epsilon < 1/2, provided t < k / 2. This is the maximum number of cheaters which can be identified by any k-out-of-n secret sharing scheme capable of identifying t cheaters (we call such schemes Secret Sharing with Cheater Identification (SSCI)). In our scheme, the set of all possible i^{th} shares V_i satisfies the condition that |V_i| = |S| / \epsilon^{3n}, where S denotes the set of all possible secrets. Moreover, our scheme requires polynomial computation. In EUROCRYPT 2011, Satoshi Obana presented two SSCI schemes, which can identify up to t < k / 2 cheaters. However, the schemes require |V_i| \approx (n (t+1) 2^{3t-1} |S|) / \epsilon and |V_i| \approx ((n t 2^{3t})^2 |S|) / (\epsilon^2) respectively. Moreover, both schemes are computationally inefficient, as they require exponential computation in general. So comparing our scheme with the schemes of Obana, we find that not only is our scheme computationally efficient, but also that the share size in our scheme is significantly smaller than that of Obana. Thus our scheme solves one of the open problems left by Obana, namely to design an efficient SSCI scheme with t < k/2. In CRYPTO 1995, Kurosawa, Obana and Ogata showed that in any SSCI scheme, |V_i| \geq (|S| - 1) / (\epsilon) + 1. Though our proposed scheme does not exactly match this bound, we show that our scheme {\it asymptotically} satisfies the above bound. To the best of our knowledge, our scheme is the best SSCI scheme capable of identifying the maximum number of cheaters.
Last updated:  2012-05-21
Hardness of Computing Individual Bits for One-way Functions on Elliptic Curves
Uncategorized
Alexandre Duc, Dimitar Jetchev
We prove that if one can predict any of the bits of the input to an elliptic curve based one-way function over a finite field, then we can invert the function. In particular, our result implies that if one can predict any of the bits of the input to a classical pairing-based one-way function with non-negligible advantage over a random guess then one can efficiently invert this function and thus, solve the Fixed Argument Pairing Inversion problem (FAPI-1/FAPI-2). The latter has implications on the security of various pairing-based schemes such as the identity-based encryption scheme of Boneh-Franklin, Hess’ identity-based signature scheme, as well as Joux’s three-party one-round key agreement protocol. Moreover, if one can solve FAPI-1 and FAPI-2 in polynomial time then one can solve the Computational Diffie–Hellman problem (CDH) in polynomial time. Our result implies that all the bits of the functions defined above are hard-to-compute assuming these functions are one-way. The argument is based on a list-decoding technique via discrete Fourier transforms due to Akavia–Goldwasser–Safra as well as an idea due to Boneh–Shparlinski.
Last updated:  2012-01-08
Cryptanalysis of the Smart-Vercauteren and Gentry-Halevi’s Fully Homomorphic Encryption
Uncategorized
Gu Chunsheng
For the fully homomorphic encryption schemes in [SV10, GH11], this paper presents attacks that recover an equivalent secret key and directly recover the plaintext from a ciphertext for lattice dimension n=2048 by using a lattice reduction algorithm. According to the average-case behavior of LLL in [NS06], their schemes are also not secure for n=8192.
Last updated:  2011-11-04
On the (In)security of Hash-based Oblivious RAM and a New Balancing Scheme
Uncategorized
Eyal Kushilevitz, Steve Lu, Rafail Ostrovsky
With the gaining popularity of remote storage (e.g. in the Cloud), we consider the setting where a small, protected local machine wishes to access data on a large, untrusted remote machine. This setting was introduced in the RAM model in the context of software protection by Goldreich and Ostrovsky. A secure Oblivious RAM simulation allows for a client, with small (e.g., constant size) protected memory, to hide not only the data but also the sequence of locations it accesses (both reads and writes) in the unprotected memory of size $n$. Our main results are as follows: -We analyze several schemes from the literature, observing a repeated design flaw that leaks information on the memory access pattern. For some of these schemes, the leakage is actually non-negligible, while for others it is negligible. -On the positive side, we present a new secure oblivious RAM scheme, extending a recent scheme by Goodrich and Mitzenmacher. Our scheme uses only $O(1)$ local memory, and its (amortized) overhead is $O(\log^2 n/\log\log n)$, outperforming the previously-best $O(\log^2n)$ overhead (among schemes where the client only uses $O(1)$ additional local memory). -We also present a transformation of our scheme above (whose amortized overhead is $O(\log^2 n/\log\log n)$) into a scheme with worst-case overhead of $O(\log^2 n/\log\log n)$.
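As a point of reference for the overhead figures above, here is a Python sketch of the trivial oblivious RAM, in which every logical access scans all n cells so that the physical access pattern is independent of the addresses requested. Its O(n) overhead per access is the baseline that hierarchical schemes such as the one in this paper reduce to polylogarithmic; a real construction would also re-encrypt each cell on every pass, which is omitted here.

class TrivialORAM:
    """O(n)-overhead ORAM: read and write every cell on every access, so the
    server-visible access pattern never depends on the logical address."""

    def __init__(self, n):
        self.mem = [0] * n

    def access(self, op, index, value=None):
        result = None
        for i in range(len(self.mem)):   # always touch every cell
            cell = self.mem[i]           # one physical read per cell
            if i == index:
                result = cell
                if op == 'write':
                    cell = value
            self.mem[i] = cell           # one physical write per cell
        return result

oram = TrivialORAM(8)
oram.access('write', 3, 42)
print(oram.access('read', 3))            # 42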
Last updated:  2011-11-04
SGCM: The Sophie Germain Counter Mode
Uncategorized
Markku-Juhani O. Saarinen
Sophie Germain Counter Mode (SGCM) is an authenticated encryption mode of operation, to be used with 128-bit block ciphers such as AES. SGCM is a variant of the NIST standardized Galois / Counter Mode (GCM) which has been found to be susceptible to weak key / short cycle forgery attacks. The GCM attacks are made possible by its extremely smooth-order multiplicative group which splits into 512 subgroups. Instead of GCM's $GF(2^{128})$, we use $GF(p)$ with $p=2^{128}+12451$, where $\frac{p-1}{2}$ is also a prime. SGCM is intended for those who want a concrete, largely technically compatible alternative to GCM. In this memo we give a technical specification of SGCM, together with some elements of its implementation, security and performance analysis. Test vectors are also included.
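The stated properties of the modulus can be checked directly. The short Python sketch below uses sympy (an assumed, widely available library) to test the abstract's claim that p = 2^128 + 12451 and (p-1)/2 are both prime, i.e. that p is a safe prime, which is what rules out the small multiplicative subgroups exploited in the GCM attacks.

from sympy import isprime

p = 2**128 + 12451

# The abstract claims p is a safe prime: both p and (p - 1)/2 are prime,
# so GF(p)* has only the subgroups of order 1, 2, (p-1)/2 and p-1.
print("p prime:       ", isprime(p))
print("(p-1)/2 prime: ", isprime((p - 1) // 2))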
Last updated:  2011-06-17
New Receipt-Free E-Voting Scheme and Self-Proving Mix Net as New Paradigm
Aram Jivanyan, Gurgen Khachatryan
The contribution of this paper is twofold. First, we present a new, simple electronic voting scheme with a standard re-encryption mix net back-end, which allows a voter to cast a ballot and verify its correctness in a new way. Then we extend the proposed scheme into a new, very efficient mix network construction. We call our mix network a self-proving mix, because we show how the correctness of the mixing process can be verified without demanding a special proof from the mixing parties. Our proposed mix network reveals all cheating that occurred during the mixing process when the work of the decryption parties is verified.
Last updated:  2011-06-17
On the Efficiency of Bit Commitment Reductions
Uncategorized
Samuel Ranellucci, Alain Tapp, Severin Winkler, Jürg Wullschleger
Two fundamental building blocks of secure two-party computation are oblivious transfer and bit commitment. While there exist unconditionally secure implementations of oblivious transfer from noisy correlations or channels that achieve constant rates, similar constructions are not known for bit commitment. In this paper we show that any protocol that implements $n$ instances of bit commitment with an error of at most $2^{-k}$ needs at least $\Omega(kn)$ instances of a given resource such as oblivious transfer or a noisy channel. This implies in particular that it is impossible to achieve a constant rate. We then show that it is possible to circumvent the above lower bound by restricting the way in which the bit commitments can be opened. In the special case where only a constant number of instances can be opened, our protocol achieves a constant rate, which is optimal. Our protocol implements these restricted bit commitments from string commitments and is universally composable. The protocol provides significant speed-up over individual commitments in situations where restricted commitments are sufficient.
Last updated:  2011-11-20
A New Related-Key Boomerang Distinguishing Attack of Reduced-Round Threefish-256
Uncategorized
Shusheng Liu, Libin Wang, Zheng Gong
In November 2007, NIST announced the SHA-3 competition to select a new hash standard as a replacement for SHA-2. In December 2010, five submissions were selected as the final-round candidates, including Skein, which has components based on ARX. In this paper, a new related-key boomerang distinguishing attack is proposed on 31-round Threefish-256 with a time complexity of about $2^{234}$. Our improved attack is based on efficient algorithms for calculating differentials of modular addition.
Last updated:  2011-06-17
A Comprehensive Evaluation of Mutual Information Analysis Using a Fair Evaluation Framework
Carolyn Whitnall, Elisabeth Oswald
The resistance of cryptographic implementations to side channel analysis is a matter of considerable interest to those concerned with information security. It is particularly desirable to identify the attack methodology (e.g. differential power analysis using correlation or distance-of-means as the distinguisher) able to produce the best results. Attempts to answer this question are complicated by the many and varied factors contributing to attack success: the device power consumption characteristics, an attacker's power model, the distinguisher by which measurements and model predictions are compared, the quality of the estimations, and so on. Previous work has delivered partial answers for certain restricted scenarios. In this paper we assess the effectiveness of mutual information analysis within a generic and comprehensive evaluation framework. Complementary to existing work, we present several notions/characterisations of attack success, as well as a means of indicating the amount of data required by an attack. We are thus able to identify scenarios in which mutual information offers performance advantages over other distinguishers. Furthermore we observe an interesting feature -- unique to the mutual information based distinguisher -- resembling a type of stochastic resonance, which could potentially enhance the effectiveness of such attacks over other methods in certain noisy scenarios.
Last updated:  2011-06-17
A Formal Approach to Distance-Bounding RFID Protocols
Ulrich Duerholz, Marc Fischlin, Michael Kasper, Cristina Onete
Distance-Bounding identification protocols aim at impeding man-in-the-middle attacks by measuring response times. There are three kinds of attacks such protocols could address: (1) Mafia attacks where the adversary relays communication between honest prover and honest verifier in different sessions; (2) Terrorist attacks where the adversary gets limited active support from the prover to impersonate. (3) Distance attacks where a malicious prover claims to be closer to the verifier than it actually is. Many protocols in the literature address one or two such threats, but no rigorous cryptographic security models ---nor clean security proofs--- exist so far. For resource-constrained RFID tags, distance-bounding is more difficult to achieve. Our contribution here is to formally define security against the above-mentioned attacks and to relate the properties. We thus refute previous beliefs about relations between the notions, showing instead that they are independent. Finally we use our new framework to assess the security of the RFID distance-bounding scheme due to Kim and Avoine, and enhance it to include impersonation security and allow for errors due to noisy channel transmissions.
Last updated:  2011-06-17
Minimal Connectivity for Unconditionally Secure Message Transmission in Synchronous Directed Networks
Uncategorized
Manan Nayak, Shashank Agrawal, Kannan Srinathan
In this paper we give the minimal connectivity required in a synchronous directed network, which is under the influence of a computationally unbounded \emph{Byzantine} adversary that can corrupt a subset of nodes, so that Secure Message Transmission is possible between sender $S$ and receiver $R$. We also show that secure communication between a pair of nodes in a given synchronous directed network is possible in both directions if and only if reliable communication is possible between them. We assume that in a network, every node is capable of computation and we model the network along the lines of \cite{SR06}.
Last updated:  2011-06-17
Structure Preserving CCA Secure Encryption and Its Application to Oblivious Third Parties
Jan Camenisch, Kristiyan Haralambiev, Markulf Kohlweiss, Jorn Lapon, Vincent Naessens
In this paper we present the first public key encryption scheme that is structure preserving, i.e., our encryption scheme uses only algebraic operations. In particular, it does not use hash functions or interpret group elements as bit strings. This makes our scheme a perfect building block for cryptographic protocols where parties, for instance, want to prove properties about ciphertexts to each other or jointly compute ciphertexts. Our scheme is also very efficient and is secure against adaptive chosen ciphertext attacks. We also provide a few example protocols for our scheme, for instance, a joint computation of a ciphertext, generated from two secret plaintexts, one from each party, where in the end only one of the parties learns the ciphertext. This latter protocol serves as a building block for our second contribution, which is a set of protocols that implement the concept of oblivious trusted third parties. This concept has been proposed before, but no concrete realization was known.
Last updated:  2011-06-17
Scalar Multiplication on Koblitz Curves using $\tau^2-$NAF
Sujoy Sinha Roy, Chester Rebeiro, Debdeep Mukhopadhyay, Junko Takahashi, Toshinori Fukunaga
The paper proposes a $\tau^2-$NAF method for scalar multiplication on Koblitz curves, which requires asymptotically $0.215m$ point additions in $GF(2^m)$. In the $\tau^2-$NAF method, a point quading operation $(a\rightarrow a^4)$ is performed instead of point squarings. The proposed method is faster than the normal $\tau-$NAF method, which requires around $\frac{m}{3}$ point additions. However, like the width-$w$ $\tau-$NAF methods, the $\tau^2-$NAF method incurs a pre-computation overhead. For extended binary fields of small size, $\tau^2-$NAF based scalar multiplication requires almost the same number of point additions as the width-$4$ $\tau-$NAF method. Although the two methods have similar complexity, the techniques are different.
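As background for the expansions compared above (standard Koblitz-curve facts, not a statement of the paper's $\tau^2-$NAF construction): on a Koblitz curve over $GF(2^m)$ the Frobenius endomorphism $\tau(x,y)=(x^2,y^2)$ satisfies
\[ \tau^2 P - \mu\,\tau P + 2P = \mathcal{O}, \qquad \mu = \pm 1, \]
so a (suitably reduced) scalar $k$ can be rewritten as a sparse $\tau$-adic expansion $k = \sum_i c_i \tau^i$ with digits $c_i \in \{0, \pm 1\}$, replacing point doublings by cheap field squarings; the nonzero-digit density (about $1/3$ for the plain $\tau$-NAF, hence the $\frac{m}{3}$ additions quoted above) determines the number of point additions.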
Last updated:  2011-06-17
Two Simple Code-Verification Voting Protocols
Helger Lipmaa
Norwegian nationwide Internet voting will make use of a setting that we will call the code-verification voting. The concrete protocol that will be used in Norway was proposed by Scytl and improved by Gjøsteen. As we show, Gjøsteen's protocol has several undesirable properties. In particular, one of the online servers shares the secret key with the offline tallier. Even without considering that, the coalition of two online voting servers can breach voter privacy. We propose two new code-verification voting protocols. The first protocol separates the secret keys, and is as efficient as Gjøsteen's protocol. The second protocol provides voter privacy against the coalition of online voting servers but is somewhat less efficient. While the new protocols are more secure than the protocol that is going to be used in the Norwegian nationwide Internet voting, they are based on the same setting, so as to minimize the required infrastructural changes.
Last updated:  2011-06-17
Security of Blind Signatures Revisited
Dominique Schröder, Dominique Unruh
We revisit the definition of unforgeability of blind signatures as proposed by Pointcheval and Stern (Journal of Cryptology 2000). Surprisingly, we show that this established definition falls short in two ways of what one would intuitively expect from a secure blind signature scheme: It is not excluded that an adversary submits the same message $m$ twice for signing, and then produces a signature for $m'\neq m$. The reason is that the forger only succeeds if \emph{all} messages are distinct. Moreover, it is not excluded that an adversary performs $k$ signing queries and produces signatures on $k+1$ messages as long as \emph{each} of these signatures does not pass verification with probability~$1$. Finally, we propose a new definition, honest-user unforgeability, that covers these attacks. We give a simple and efficient transformation that transforms any unforgeable blind signature scheme (with deterministic verification) into an honest-user unforgeable one.
Last updated:  2011-10-17
Implementing 4-Dimensional GLV Method on GLS Elliptic Curves with j-Invariant 0
Zhi Hu, Patrick Longa, Maozhi Xu
The Gallant-Lambert-Vanstone (GLV) method is a very efficient technique for accelerating point multiplication on elliptic curves with efficiently computable endomorphisms. Galbraith, Lin and Scott (J. Cryptol. 24(3), 446-469 (2011)) showed that point multiplication exploiting the 2-dimensional GLV method on a large class of curves over GF(p^2) was faster than the standard method on general elliptic curves over GF(p), and left as an open problem the study of 4-dimensional GLV on special curves (e.g., j(E) = 0) over GF(p^2). We study the above problem in this paper. We show how to get the 4-dimensional GLV decomposition with properly bounded decomposed coefficients, and thus reduce the number of doublings for point multiplication on these curves to only a quarter. The resulting implementation shows that the 4-dimensional GLV method on a GLS curve runs in about 0.78 times the time of the 2-dimensional GLV method on the same curve, and in 0.78-0.87 times the time of the 2-dimensional GLV method used with the standard method over GF(p). In particular, our implementation reduces by up to 27% the time of the previously fastest implementation of point multiplication on x86-64 processors due to Longa and Gebotys (CHES 2010).
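For readers unfamiliar with the technique, the decomposition alluded to above works, in the standard GLV/GLS formulation (not necessarily with the paper's exact constants), by using an efficiently computable endomorphism $\psi$ whose eigenvalue on the order-$n$ subgroup is $\lambda$:
\[ k \equiv k_1 + k_2\lambda + k_3\lambda^2 + k_4\lambda^3 \pmod{n}, \qquad |k_i| \approx n^{1/4}, \]
\[ kP = k_1 P + k_2\,\psi(P) + k_3\,\psi^2(P) + k_4\,\psi^3(P), \]
so a 4-way multi-scalar multiplication with quarter-length scalars replaces one full-length scalar multiplication, which is what cuts the number of doublings to roughly a quarter, at the cost of precomputing $\psi^i(P)$ and related combinations.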