All papers in 2012 (Page 2 of 733 results)

Last updated:  2013-03-14
Resource-Restricted Indifferentiability
Grégory Demay, Peter Gaži, Martin Hirt, Ueli Maurer
A major general paradigm in cryptography is the following argument: Whatever an adversary could do in the real world, it could just as well do in the ideal world. The standard interpretation of ``just as well'' is that the translation from the real to the ideal world, usually called a simulator, is achieved by a probabilistic polynomial-time algorithm. This means that a polynomial blow-up of the adversary's time and memory requirements is considered acceptable. In certain contexts this interpretation of ``just as well'' is inadequate, for example if the concrete amount of memory used by the adversary is relevant. The example of Ristenpart et al. (Eurocrypt 2011), for which the original indifferentiability notion introduced by Maurer et al. (TCC 2004) is shown to be insufficient, turns out to be exactly of this type. It requires a fine-grained statement about the adversary's memory capacity, calling for a generalized treatment of indifferentiability where specific resource requirements can be taken into account by modeling them explicitly. We provide such treatment and employ the new indifferentiability notion to prove lower bounds on the memory required by any simulator in a domain extension construction of a public random function. In particular, for simulators without memory, even domain extension by a single bit turns out to be impossible. Moreover, for the construction of a random oracle from an ideal compression function, memory roughly linear in the length of the longest query is required. This also implies the impossibility of such domain extension in any multi-party setting with potential individual misbehavior by parties (i.e., no central adversary).
Last updated:  2013-10-15
Analysis of the Non-Perfect Table Fuzzy Rainbow Tradeoff
Byoung-Il Kim, Jin Hong
Time memory tradeoff algorithms are tools for inverting one-way functions, and they are often used to recover passwords from unsalted password hashes. There are many publicly known tradeoff algorithms, and the rainbow tradeoff is widely believed to be the best algorithm. This work provides an accurate complexity analysis of the non-perfect table version of the fuzzy rainbow tradeoff algorithm, which has not yet received much attention. It is shown that, when the pre-computation cost and the online efficiency are both taken into consideration, the non-perfect fuzzy rainbow tradeoff is preferable to the original rainbow tradeoff in many situations.
Last updated:  2012-11-01
A coding theory foundation for the analysis of general unconditionally secure proof-of-retrievability schemes for cloud storage
Maura B. Paterson, Douglas R. Stinson, Jalaj Upadhyay
There has been considerable recent interest in ``cloud storage'' wherein a user asks a server to store a large file. One issue is whether the user can verify that the server is actually storing the file, and typically a challenge-response protocol is employed to convince the user that the file is indeed being stored correctly. The security of these schemes is phrased in terms of an extractor which will recover or retrieve the file given any ``proving algorithm'' that has a sufficiently high success probability. This paper treats proof-of-retrievability schemes in the model of unconditional security, where an adversary has unlimited computational power. In this case retrievability of the file can be modelled as error-correction in a certain code. We provide a general analytical framework for such schemes that yields exact (non-asymptotic) reductions that precisely quantify conditions for extraction to succeed as a function of the success probability of a proving algorithm, and we apply this analysis to several archetypal schemes. In addition, we provide a new methodology for the analysis of keyed POR schemes in an unconditionally secure setting, and use it to prove the security of a modified version of a scheme due to Shacham and Waters under a slightly restricted attack model, thus providing the first example of a keyed POR scheme with unconditional security. We also show how classical statistical techniques can be used to evaluate whether the responses of the prover are accurate enough to permit successful extraction. Finally, we prove a new lower bound on storage and communication complexity of POR schemes.
Last updated:  2013-03-17
Candidate Multilinear Maps from Ideal Lattices
Sanjam Garg, Craig Gentry, Shai Halevi
We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups, and show an example application of such multilinear maps that can be realized using our approximation. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function.
Last updated:  2014-06-04
A New Approach to the Discrete Logarithm Problem with Auxiliary Inputs
Taechan Kim, Jung Hee Cheon
The discrete logarithm problem with auxiliary inputs is to recover $\alpha$ given elements $g, g^\alpha, \ldots, g^{\alpha^d}$ of a cyclic group $G=\langle g \rangle$ of prime order~$p$. The best-known algorithm, proposed by Cheon in 2006, solves for $\alpha$ in the case of $d \mid (p\pm 1)$ with running time of $O\left( \sqrt{p/d} + d^i \right)$ group exponentiations ($i=1$ or $1/2$ depending on the sign). There have been several attempts to generalize this algorithm to the case of $\Phi_k(p)$ for $k \ge 3$, but it has been shown, by Kim, Cheon and Lee, that they cannot have better complexity than the usual square root algorithms. We propose a new algorithm to solve the DLPwAI. The complexity of the algorithm is determined by a chosen polynomial $f \in \mathbb{F}_p[x]$ of degree $d$. We show that the proposed algorithm has a running time of $\widetilde O\left( \sqrt{p / \tau_f} +d \right)$ group exponentiations, where $\tau_f$ is the number of absolutely irreducible factors of $f(x)-f(y)$. We note that this is always smaller than $\widetilde O(p^{1/2})$. To obtain a better complexity of the algorithm, we investigate an upper bound on $\tau_f$ and try to find polynomials that achieve it. We can find such polynomials in the case of $d\mid(p\pm 1)$. In this case, the algorithm has a running time of $\widetilde O\left(\sqrt{p/d} +d \right)$ group operations, which matches the lower bound in the generic group model. In contrast, we show that no polynomial exists that achieves the upper bound in the case of $d \mid \Phi_3(p)=p^2+p+1$. Of independent interest, we present an analysis of a non-uniform birthday problem. Precisely, we show that a collision occurs with high probability after $O\big( 1/\sqrt{\sum_{k} {w_k}^2} \big)$ samplings of balls, where the probability $w_k$ of assigning balls to the bin $k$ is arbitrary.
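The non-uniform birthday claim at the end is easy to sanity-check numerically. Below is a small Monte Carlo sketch (our own illustration in Python; all names are ours, not the paper's): skewing the bin probabilities increases $\sum_k w_k^2$ and makes the first collision arrive correspondingly sooner.

```python
import random
from itertools import accumulate

def draws_until_collision(bins, cum_weights):
    """Throw balls into bins with the given cumulative probabilities and
    count the draws made up to and including the first repeated bin."""
    seen, n = set(), 0
    while True:
        k = random.choices(bins, cum_weights=cum_weights)[0]
        n += 1
        if k in seen:
            return n
        seen.add(k)

def mean_collision_time(weights, trials=1000):
    bins = range(len(weights))
    cw = list(accumulate(weights))
    return sum(draws_until_collision(bins, cw) for _ in range(trials)) / trials

# Uniform over 10^4 bins: sum(w_k^2) = 1e-4, collisions on the order of
# 1/sqrt(1e-4) = 100 draws.
uniform = [1e-4] * 10**4
# Half the mass on only 100 bins: sum(w_k^2) grows ~25x, collisions ~5x sooner.
skewed = [0.5 / 100] * 100 + [0.5 / 9900] * 9900
print(mean_collision_time(uniform), mean_collision_time(skewed))
```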
Last updated:  2012-10-30
On the (Non-)Reusability of Fuzzy Sketches and Extractors and Security Improvements in the Computational Setting
Marina Blanton, Mehrdad Aliasgari
Secure sketches and fuzzy extractors enable the use of biometric data in cryptographic applications by correcting errors in noisy biometric readings and producing cryptographic materials suitable for authentication, encryption, and other purposes. Such constructions work by producing a public sketch, which is later used to reproduce the original biometric and all derived information exactly from a noisy biometric reading. It has been previously shown that release of multiple sketches associated with a single biometric presents security problems for certain constructions. We continue the analysis to demonstrate that all other constructions in the literature are also prone to similar problems and cannot be safely reused. To mitigate the problem, we propose for each user to store one short secret string for all possible uses of her biometric, and show that simple constructions in the computational setting have numerous advantageous security and usability properties under standard hardness assumptions. Our constructions are generic in that they can be used with any existing secure sketch as a black box.
Last updated:  2012-10-29
Graph-Theoretic Algorithms for the ``Isomorphism of Polynomials'' Problem
Charles Bouillaguet, Pierre-Alain Fouque, Amandine Véber
We give three new algorithms to solve the ``isomorphism of polynomials'' problem, which underlies the hardness of recovering the secret key in some multivariate trapdoor one-way functions. In this problem, the adversary is given two quadratic functions, with the promise that they are equal up to linear changes of coordinates. Her objective is to compute these changes of coordinates, a task which is known to be harder than Graph-Isomorphism. Our new algorithms build on previous work in a novel way. Exploiting the birthday paradox, we break instances of the problem in time $q^{2n/3}$ (rigorously) and $q^{n/2}$ (heuristically), where $q^n$ is the time needed to invert the quadratic trapdoor function by exhaustive search. These results are obtained by turning the algebraic problem into a combinatorial one, namely that of recovering partial information on an isomorphism between two exponentially large graphs. These graphs, derived from the quadratic functions, are new tools in multivariate cryptanalysis.
Last updated:  2013-09-11
Quantum-Secure Message Authentication Codes
Dan Boneh, Mark Zhandry
We construct the first Message Authentication Codes (MACs) that are existentially unforgeable against a quantum chosen message attack. These chosen message attacks model a quantum adversary’s ability to obtain the MAC on a superposition of messages of its choice. We begin by showing that a quantum secure PRF is sufficient for constructing a quantum secure MAC, a fact that is considerably harder to prove than its classical analogue. Next, we show that a variant of Carter-Wegman MACs can be proven to be quantum secure. Unlike in the classical setting, we present an attack showing that a pair-wise independent hash family is insufficient to construct a quantum secure one-time MAC, but we prove that a four-wise independent family is sufficient for one-time security.
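For context, a standard way to realize a four-wise independent family (a textbook construction, not taken from the paper; the field size and code below are our own choices) is to evaluate a uniformly random polynomial of degree at most three over a prime field:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; the hash family maps Z_P -> Z_P

def sample_fourwise_hash():
    """Sample a uniformly random polynomial of degree <= 3 over GF(P).
    A random degree-(k-1) polynomial gives a k-wise independent family,
    so the values at any four distinct points are uniform and mutually
    independent -- the kind of family the paper proves sufficient for
    one-time quantum security (pairwise independence being insufficient)."""
    coeffs = [secrets.randbelow(P) for _ in range(4)]
    def h(x):
        y = 0
        for c in coeffs:          # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return h

h = sample_fourwise_hash()
print([h(x) for x in (1, 2, 3, 4)])
```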
Last updated:  2012-10-26
Secure Outsourced Attribute-Based Signatures
Jin Li, Xiaofeng Chen, Jingwei Li, Chunfu Jia, Duncan S. Wong, Willy Susilo
Attribute-based signature (ABS) is a useful variant of digital signature, which enables users to sign messages over attributes without revealing any information other than the fact that they have attested to the messages. However, existing ABS schemes require heavy computation during signing, which grows linearly with the size of the predicate formula. As a result, this presents a significant challenge for resource-limited users (such as mobile devices) to perform such heavy computation independently. Aiming at tackling this challenge, we propose and formalize a new paradigm called OABS, in which the computational overhead at the user side is greatly reduced by outsourcing the intensive computation to an untrusted signing-cloud service provider (S-CSP). Furthermore, we apply this novel paradigm to existing ABS schemes to reduce complexity and present two schemes: i) in the first OABS scheme, the number of exponentiations involved in signing is reduced from $O(d)$ to $O(1)$ (nearly three), where $d$ is the upper bound of the threshold value defined in the predicate; ii) our second scheme is built on Herranz et al.'s construction with constant-size signatures, where the number of exponentiations in signing is reduced from $O(d^2)$ to $O(d)$ and the communication overhead is $O(1)$. Security analysis demonstrates that both OABS schemes are secure in terms of the unforgeability and attribute-signer privacy definitions specified in the proposed security model. Finally, to allow for high efficiency and flexibility, we discuss extensions of OABS and show how to achieve accountability and outsourced verification as well.
Last updated:  2014-12-13
Leakage-Resilient Cryptography from Minimal Assumptions
Carmit Hazay, Adriana Lopez-Alt, Hoeteck Wee, Daniel Wichs
We present new constructions of leakage-resilient cryptosystems, which remain provably secure even if the attacker learns some arbitrary partial information about their internal secret key. For any polynomial $\ell$, we can instantiate these schemes so as to tolerate up to $\ell$ bits of leakage. While there has been much prior work constructing such leakage-resilient cryptosystems under concrete number-theoretic and algebraic assumptions, we present the first schemes under general and minimal assumptions. In particular, we construct: - Leakage-resilient public-key encryption from any standard public-key encryption. - Leakage-resilient weak pseudorandom functions, symmetric-key encryption, and message-authentication codes from any one-way function. These are the first constructions of leakage-resilient symmetric-key primitives that do not rely on public-key assumptions. We also get the first constructions of leakage-resilient public-key encryption from ``search assumptions'', such as the hardness of factoring or CDH. Although our schemes can tolerate arbitrarily large amounts of leakage, the tolerated rate of leakage (defined as the ratio of leakage-amount to key-size) is rather poor in comparison to prior results under specific assumptions. As a building block of independent interest, we study a notion of weak hash-proof systems in the public-key and symmetric-key settings. While these inherit some of the interesting security properties of standard hash-proof systems, we can instantiate them under general assumptions.
Last updated:  2012-10-25
Collecting Data while Preserving Individuals' Privacy: A Case Study
Alexis Bonnecaze, Robert Rolland
Several companies exploit medical data to better understand medication consumption patterns. Their analyses are useful to various health actors in order to enhance health care management. In this article, we focus on a configuration which allows a network of pharmacies to forward medical data to a private company in order to construct a database. Pharmacies must operate in full compliance with legal requirements in terms of confidentiality and privacy. We show that our solution fulfills all the requirements. Our work leads us to introduce the concept of a generalized discrete logarithm problem, which is proven to be as hard as the discrete logarithm problem.
Last updated:  2012-10-25
A note on invariant linear transformations in multivariate public key cryptography
Andreas Wiemers
Imai and Matsumoto introduced a public key cryptosystem based on multivariate quadratic polynomials. In a simplified way, the essence of their cryptosystem can be described in the following way: Start with a central monomial F. The secret key comprises two invertible linear transformations T and L such that TFL is the public key. In order to study equivalent public keys it is natural to ask for the "invariant" secret keys (T,L), i.e. TFL=F. Lin, Faugere, Perret and Wang give a partial answer to this question by considering such L which fulfill FL=F. In this paper we will determine all invariant invertible linear transformations (T,L).
Last updated:  2013-03-13
How to Garble RAM Programs
Steve Lu, Rafail Ostrovsky
Assuming solely the existence of one-way functions, we show how to construct Garbled RAM Programs (GRAM) whose size depends only on a fixed polynomial in the security parameter times the program running time. We stress that we avoid converting the RAM programs into circuits. As an example, our techniques imply the first garbled binary search program (searching over sorted encrypted data stored in a cloud) that is poly-logarithmic in the data size instead of linear. Our result requires only the existence of one-way functions and enjoys the same non-interactive properties as Yao's original garbled circuits.
Last updated:  2012-10-25
The LED Block Cipher
Jian Guo, Thomas Peyrin, Axel Poschmann, Matt Robshaw
We present a new block cipher LED. While dedicated to compact hardware implementation, and offering the smallest silicon footprint among comparable block ciphers, the cipher has been designed to simultaneously tackle three additional goals. First, we explore the role of an ultra-light (in fact non-existent) key schedule. Second, we consider the resistance of ciphers, and LED in particular, to related-key attacks: we are able to derive simple yet interesting AES-like security proofs for LED regarding related- or single-key attacks. And third, while we provide a block cipher that is very compact in hardware, we aim to maintain a reasonable performance profile for software implementation.
Last updated:  2013-01-24
On the coefficients of the polynomial in the number field sieve
Min Yang, Qingshu Meng, Zhangyi Wang, Li Li, Huanguo Zhang
Polynomial selection is very important in the number field sieve. If the yield of a pair of polynomials is closely correlated with the coefficients of the polynomials, we can select polynomials by checking the coefficients first. This can speed up the selection of good polynomials. In this paper, we aim to study the correlation between the polynomial coefficients and the yield of the polynomials. By theoretical analysis and experiments, we find that a polynomial whose ending coefficient contains more small primes is usually better in yield than one whose ending coefficient contains fewer. One advantage of the ending coefficient over the leading coefficient is that the ending coefficient is bigger and can contain more small primes in the root optimizing stage. Using the complete discrimination system, we also analyze the condition on the coefficients to obtain more real roots.
Last updated:  2013-02-28
Taking proof-based verified computation a few steps closer to practicality (extended version)
Srinath Setty, Victor Vu, Nikhil Panpalia, Benjamin Braun, Muqeet Ali, Andrew J. Blumberg, Michael Walfish
We describe Ginger, a built system for unconditional, general-purpose, and nearly practical verification of outsourced computation. Ginger is based on Pepper, which uses the PCP theorem and cryptographic techniques to implement an \emph{efficient argument} system (a kind of interactive protocol). Ginger slashes the query size and costs via theoretical refinements that are of independent interest; broadens the computational model to include (primitive) floating-point fractions, inequality comparisons, logical operations, and conditional control flow; and includes a parallel GPU-based implementation that dramatically reduces latency.
Last updated:  2012-10-25
A Novel Permutation-based Hash Mode of Operation FP and the Hash Function SAMOSA
Souradyuti Paul, Ekawat Homsirikamol, Kris Gaj
The contribution of the paper is two-fold. First, we design a novel permutation-based hash mode of operation FP, and analyze its security. The FP mode is derived by replacing the hard-to-invert primitive of the FWP mode -- designed by Nandi and Paul, Indocrypt 2010 -- with an easy-to-invert permutation; since easy-to-invert permutations with good cryptographic properties are normally easier to design, and are more efficient than hard-to-invert functions, the FP mode is more suitable for practical applications than the FWP mode. We show that any n-bit hash function that uses the FP mode is indifferentiable from a random oracle up to 2^{n/2} queries (up to a constant factor), if the underlying 2n-bit permutation is free from any structural weaknesses. Based on our further analysis and experiments, we conjecture that the FP mode is resistant to all non-trivial generic attacks with work less than brute force, mainly due to its large internal state. We compare the FP mode with other permutation-based hash modes, and observe that it displays the best security/rate trade-off so far. To put this into perspective, our second contribution is a proposal for a concrete hash function SAMOSA using the new mode and the $P$-permutations of the SHA-3 finalist Groestl. Based on our analysis we claim that the SAMOSA family cannot be attacked with work significantly less than brute force. We also provide hardware implementation (FPGA) results for SAMOSA to compare it with the SHA-3 finalists. In our implementations, the SAMOSA family consistently beats Groestl, Blake and Skein in the throughput-to-area ratio. With a more efficient underlying permutation, it seems possible to design a hash function based on the FP mode that achieves even higher performance.
Last updated:  2013-02-06
Evaluating User Privacy in Bitcoin
Elli Androulaki, Ghassan Karame, Marc Roeschlin, Tobias Scherer, Srdjan Capkun
Bitcoin is quickly emerging as a popular digital payment system. However, in spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place are publicly announced in the system. In this paper, we investigate the privacy guarantees of Bitcoin in the setting where Bitcoin is used as a primary currency for the daily transactions of individuals. More specifically, we evaluate the privacy that is provided by Bitcoin (i) by analyzing the genuine Bitcoin system and (ii) through a simulator that mimics Bitcoin clients' behavior in the context where Bitcoin is used for all transactions within a university. In this setting, our results show that the profiles of almost 40% of the users can be, to a large extent, recovered even when users adopt the privacy measures recommended by Bitcoin. To the best of our knowledge, this is the first work that comprehensively analyzes and evaluates the privacy implications of Bitcoin. As a by-product, we have designed and implemented the first simulator of Bitcoin; our simulator can be used to model the interaction between Bitcoin users in generic settings.
Last updated:  2012-12-18
Extending Brickell-Davenport Theorem to Non-Perfect Secret Sharing Schemes
Oriol Farràs, Carles Padró
One important result in secret sharing is the Brickell-Davenport Theorem: every ideal perfect secret sharing scheme defines a matroid that is uniquely determined by the access structure. Even though a few attempts have been made, there is no satisfactory definition of ideal secret sharing scheme for the general case, in which non-perfect schemes are considered as well. Without providing another unsatisfactory definition of ideal non-perfect secret sharing scheme, we present a generalization of the Brickell-Davenport Theorem to the general case. After analyzing that result from a new point of view and identifying its combinatorial nature, we present a characterization of the (not necessarily perfect) secret sharing schemes that are associated to matroids. Some optimality properties of such schemes are discussed.
Last updated:  2012-10-25
Improved Impossible Differential Attack on Reduced Version of Camellia-192/256
Ya Liu, Dawu Gu, Zhiqiang Liu, Wei Li
As an ISO/IEC international standard, Camellia has been used in various cryptographic applications. In this paper, we improve previous attacks on Camellia-192/256 with key-dependent layers $FL/FL^{-1}$ by using the intrinsic weakness of keyed functions. Specifically, we present the first impossible differential attack on 13-round Camellia with $2^{121.6}$ chosen ciphertexts and $2^{189.9}$ 13-round encryptions, whereas the best previous results on Camellia-192 reached only 12 rounds. Furthermore, we successfully attack 14-round Camellia-256 with $2^{122.1}$ chosen ciphertexts and $2^{229.3}$ 14-round encryptions. Compared with the previously best known attack on 14-round Camellia-256, the time complexity of our attack is reduced by a factor of $2^{8.9}$, and the data complexity is comparable.
Last updated:  2012-10-25
Factor-4 and 6 (De)compression for Values of Pairings using Trace Maps
Tomoko Yonemura, Taichi Isogai, Hirofumi Muratani, Yoshikazu Hanatani
The security of pairing-based cryptosystems relies on the hardness of the discrete logarithm problems in elliptic curves and in finite fields related to the curves, namely, their embedding fields. Public keys and ciphertexts in pairing-based cryptosystems are composed of points on the curves or values of pairings. Although the values of the pairings belong to the embedding fields, the representation of the field is inefficient in size because the size of the embedding fields is usually larger than the size of the elliptic curves. We show factor-4 and 6 compression and decompression for the values of pairings on supersingular elliptic curves of embedding degrees 4 and 6, respectively. For compression, we use the fact that the values of the pairings belong to algebraic tori that are multiplicative subgroups of the embedding fields. The algebraic tori can be expressed by the affine representation or the trace representation. Although the affine representation allows decompression maps, decompression maps for the trace representation have not been known. In this paper, we propose a trace representation with decompression maps for characteristics 2 and 3. We first construct efficient decompression maps for trace maps by adding extra information to the trace representation. Our decompressible trace representation with additional information is as efficient as the affine representation in terms of the costs of compression, decompression and exponentiation, as well as size.
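As background, the object doing the work here is the field trace; in standard notation (ours, not necessarily the paper's):

```latex
\mathrm{Tr}_{\mathbb{F}_{q^k}/\mathbb{F}_q}(g) \;=\; \sum_{i=0}^{k-1} g^{\,q^{i}} \;\in\; \mathbb{F}_q,
\qquad g \in \mathbb{F}_{q^k}.
% Storing Tr(g) takes one F_q element instead of the k needed for g, a
% factor-k compression.  But Tr(g) is identical for the k conjugates
% g, g^q, ..., g^{q^{k-1}}, so a decompression map must single out the
% intended conjugate -- which is why the paper augments the trace
% representation with extra information.
```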
Last updated:  2012-12-14
Attribute-Based Encryption for Circuits from Multilinear Maps
Amit Sahai, Brent Waters
In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.
Last updated:  2013-05-27
Biclique Cryptanalysis of PRESENT, LED, and KLEIN
Farzaneh Abed, Christian Forler, Eik List, Stefan Lucks, Jakob Wenzel
In this paper, we analyze the resistance of the lightweight ciphers PRESENT, LED, and KLEIN to biclique attacks. Primarily, we describe attacks on the full-round versions PRESENT-80, PRESENT-128, LED-64, LED-128, KLEIN-80, and KLEIN-96. Our attacks have time complexities of $2^{79.49}$, $2^{127.32}$, $2^{63.58}$, $2^{127.42}$, $2^{79.00}$, and $2^{95.18}$ encryptions, respectively. In addition, we consider attacks on round-reduced versions of PRESENT and LED, to show the security margin for which an adversary can obtain an advantage of at least a factor of two compared to exhaustive search.
Last updated:  2012-11-06
--withdrawn--
Last updated:  2012-11-06
--withdrawn--
Last updated:  2012-10-25
Breaking Public Keys - How to Determine an Unknown RSA Public Modulus
Hans-Joachim Knobloch
Not surprisingly, the common use of any public key crypto system involves publishing the public key and keeping the private key secret. There are however a few applications where both the private and public key are kept secret, thereby effectively converting a public key crypto algorithm to a symmetric algorithm. We show that if the RSA cryptosystem is used in such a symmetric application, it is possible to determine the public RSA modulus if the public exponent is known and short, such as 3 or F4=65537, and two or more plaintext/ciphertext (or, if RSA is used for signing, signed value/signature) pairs are known.
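The number-theoretic observation such an attack builds on is folklore and simple to demonstrate. The sketch below (our own, with a deliberately tiny modulus) recovers $N$ from two plaintext/ciphertext pairs: since $c = m^e \bmod N$, the integer $m^e - c$ is a multiple of $N$, and the gcd of two such values is $N$ up to small spurious factors.

```python
from math import gcd

def recover_modulus(pairs, e=3):
    """Recover an unknown RSA modulus from plaintext/ciphertext pairs
    (m, c) with c = m**e mod N and a known short public exponent e.
    Each m**e - c is a multiple of N, so N divides their gcd; removing
    small leftover prime factors isolates N with high probability."""
    g = 0
    for m, c in pairs:
        g = gcd(g, m**e - c)
    for p in (2, 3, 5, 7, 11, 13):   # peel off small spurious factors
        while g > 1 and g % p == 0:
            g //= p
    return g

# Toy check with a deliberately tiny modulus (real ones are 1024+ bits):
N, e = 101 * 113, 3
pairs = [(m, pow(m, e, N)) for m in (42, 77)]
print(recover_modulus(pairs, e) == N)  # True
```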
Last updated:  2012-10-16
Symbolic computation in block cipher with application to PRESENT
Changyong Peng, Chuangying Zhu, Yuefei Zhu, Fei Kang
In this paper, we give an example of how symbolic computation is used to analyze the block cipher PRESENT, an ultra-lightweight block cipher proposed by Bogdanov et al. at CHES '07. The block size is 64 bits and the key size can be 80 or 128 bits. Using Mathematica 7.0, this paper obtains the unexpanded polynomial expressions of the output of rounds 1-6 of PRESENT-80 (the 80-bit key variant). The time complexity of obtaining these expressions is 4 minutes on a PC with a 2.6 GHz CPU and 8 GB RAM. Then we expand the expressions of the output of rounds 1-2 and the LSB (least significant bit) of the output of round 3 and obtain the ANFs (Algebraic Normal Forms) of these 129 (= 2*64+1) expressions. The time complexity of obtaining these ANFs is 22 minutes. It is known that the time complexity of the classical method of computing the ANF of an n-ary Boolean function from its truth table is O(n*2^n), so the total time complexity of obtaining these 129 ANFs that way would be O(129*144*2^144) = O(2^158) (each of the 129 ANFs can be viewed as a 144-ary Boolean function, where 144 = 64 + 80, the sum of the block size and the key size). As an application, we give a side channel attack on PRESENT-80 under the single-bit leakage model proposed by Dinur and Shamir. If the LSB of the output of the 3rd round can be obtained without error, then with 200 known plaintexts we can set up an equation system in terms of the master key bits and recover 43 key bits by the Gröbner basis method. Compared with previous side channel attacks on PRESENT, such as Yang et al. at CANS 2009, Abdul-Latip et al. at ASIACCS 2011 and Zhao et al. in 2011, each of which needs at least 2^13 chosen plaintexts, the data complexity of our attack is the best.
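The classical method mentioned above is the binary Möbius transform; a minimal Python version (our illustration) makes the O(n*2^n) count visible: n passes over a table of size 2^n.

```python
def anf_from_truth_table(tt):
    """Compute the ANF coefficients of an n-ary Boolean function from its
    truth table (length 2**n, index = input vector read as an integer)
    via the binary Moebius transform: n butterfly passes, O(n * 2**n)."""
    a = list(tt)
    n = len(a).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(a)):
            if j & step:
                a[j] ^= a[j ^ step]
    return a  # a[m] = 1 iff the monomial with variable set m is in the ANF

# Example: f(x2, x1, x0) = x0*x1 XOR x2, truth table indexed by (x2 x1 x0):
tt = [((x & 1) * ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
print(anf_from_truth_table(tt))  # 1s exactly at x0*x1 (0b011) and x2 (0b100)
```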
Last updated:  2012-10-16
SHADE: Secure HAmming DistancE computation from oblivious transfer
Julien Bringer, Herve Chabanne, Alain Patey
We introduce two new schemes for securely computing Hamming distance in the two-party setting. Our first scheme is a very efficient protocol, based solely on 1-out-of-2 Oblivious Transfer, that achieves full security in the semi-honest setting and one-sided security in the malicious setting. Moreover we show that this protocol is significantly more efficient than the previous proposals, that are either based on garbled circuits or on homomorphic encryption. Our second scheme achieves full security against malicious adversaries and is based on Committed Oblivious Transfer. These protocols have direct applications to secure biometric identification.
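To convey the flavor of OT-based Hamming distance computation, here is a toy sketch of the underlying masking trick (our simplification, with an insecure stand-in for the oblivious transfer; the paper's actual protocols and security arguments are more involved):

```python
import random

def ot_1of2(m0, m1, choice):
    """Stand-in for 1-out-of-2 oblivious transfer: the receiver obtains
    m_choice.  A real instantiation hides `choice` from the sender and
    the other message from the receiver; this stub only models the I/O."""
    return m1 if choice else m0

def hamming_distance_via_ot(x_bits, y_bits):
    """For each position the sender offers r_i + (x_i XOR b) for b in
    {0, 1}; the receiver picks b = y_i, so the obtained shares sum to
    sum(r_i) + HD(x, y) mod (n+1).  The sender then reveals sum(r_i)."""
    n = len(x_bits)
    mod = n + 1                                   # HD lies in [0, n]
    r = [random.randrange(mod) for _ in range(n)]  # sender's masks
    shares = [ot_1of2((r[i] + (x_bits[i] ^ 0)) % mod,
                      (r[i] + (x_bits[i] ^ 1)) % mod,
                      y_bits[i]) for i in range(n)]
    return (sum(shares) - sum(r)) % mod

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [1, 1, 1, 0, 0, 1, 1, 0]
print(hamming_distance_via_ot(x, y))  # 3
```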
Last updated:  2013-07-29
On Provably Secure Code-based Signature and Signcryption Scheme
Preetha Mathew K, Sachin Vasant, C. Pandu Rangan
Signcryption is a cryptographic protocol that provides authentication and confidentiality as a single primitive at a cost lower than the combined cost of signing and encryption. Code-based cryptography, a likely candidate for post-quantum cryptography, provides an exciting alternative to number-theoretic cryptography. Courtois, Finiasz and Sendrier proposed the only practical code-based signature (CFS signature) at Asiacrypt 2001. But that signature scheme currently lacks a formal proof of security due to the existence of the high rate distinguisher proposed by Faugère et al. In this paper, we make use of an alternate key-construct for the CFS signature, and thus prove its existential unforgeability under chosen message attacks (EUF-CMA). Also, we propose a code-based signcryption scheme and prove its security. To the best of our knowledge, this is the first code-based, provably secure signature and signcryption scheme in the literature.
Last updated:  2012-10-26
Quantitative Analysis of the Full Bitcoin Transaction Graph
Dorit Ron, Adi Shamir
The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.
Last updated:  2014-08-28
New Constructions and Proof Methods for Large Universe Attribute-Based Encryption
Yannis Rouselakis, Brent Waters
We propose two large universe Attribute-Based Encryption constructions. In a large universe ABE construction any string can be used as an attribute and attributes need not be enumerated at system setup. Our first construction establishes a novel large universe Ciphertext-Policy ABE scheme on prime order bilinear groups, while the second achieves a significant efficiency improvement over the large universe Key-Policy ABE systems of Lewko-Waters and Lewko. Both schemes are selectively secure in the standard model under two "q-type" assumptions similar to ones used in prior works. Our work brings back "program and cancel" techniques to this problem. We provide implementations and benchmarks of our constructions in Charm, a programming environment for rapid prototyping of cryptographic primitives.
Last updated:  2012-10-16
Using Randomizers for Batch Verification of ECDSA Signatures
Sabyasachi Karati, Abhijit Das, Dipanwita Roychowdhury
Randomizers are popularly used to prevent various types of attacks on batch-verification schemes. Recently, several algorithms based upon symbolic computation are proposed for the batch verification of ECDSA signatures. In this article, we demonstrate that the concept of randomizers can be easily embedded in these symbolic-computation algorithms. The performance degradation caused by randomizers is comparable with that associated with ECDSA*.
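The effect of randomizers is easiest to see on a generic batch of claimed group equations (a toy illustration of the concept, not the paper's ECDSA algorithms): without random exponents, errors in two bad equations can cancel across the batch; with them, such cancellation survives only with negligible probability.

```python
import random

P = 2**127 - 1  # a Mersenne prime; work in the multiplicative group mod P

def batch_check(pairs, randomize=True, rand_bits=64):
    """Verify claimed equations A_i == B_i in one shot by testing
    prod(A_i^r_i) == prod(B_i^r_i) mod P.  With r_i = 1 the check can be
    fooled by cancelling errors; with random r_i a bad batch passes only
    with probability about 2**-rand_bits."""
    lhs = rhs = 1
    for a, b in pairs:
        r = random.getrandbits(rand_bits) if randomize else 1
        lhs = lhs * pow(a, r, P) % P
        rhs = rhs * pow(b, r, P) % P
    return lhs == rhs

good = [(x, x) for x in (12, 34, 56)]
delta = 999  # adversarial error injected into two claimed equations
forged = [(12 * delta % P, 12), (34 * pow(delta, -1, P) % P, 34)]
print(batch_check(good))                     # True
print(batch_check(forged, randomize=False))  # True -- the errors cancel
print(batch_check(forged))                   # False (w.h.p.)
```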
Last updated:  2013-07-24
On the (in)security of some smart-card-based password authentication schemes for WSN
Ding Wang, Chun-guang Ma
Understanding security failures of cryptographic protocols is the key to both patching existing protocols and designing future schemes. The design of secure and efficient remote user authentication schemes for real-time data access in wireless sensor networks (WSN) is still an open and quite challenging problem, though many schemes have been proposed lately. In this study, we analyze two recent proposals in this research domain. Firstly, Das et al.'s scheme is scrutinized, demonstrating its vulnerabilities to smart card security breach attack and privileged insider attack, resistance to which is among the security objectives pursued in their protocol specification. Then, we investigate a temporal-credential-based password authentication scheme introduced by Xue et al. in 2012. This protocol involves only hash and XOR operations and thus is suitable for the resource-constrained WSN environments where an external user wants to obtain real-time data from the sensor nodes inside the WSN. However, notwithstanding their security arguments, we point out that Xue et al.'s protocol is still vulnerable to smart card security breach attack and privileged insider attack, and fails to provide identity protection. The proposed cryptanalysis discourages any use of the two schemes under investigation in practice and reveals some subtleties and challenges in designing this type of scheme. Besides reporting the security flaws, we put forward a principle that is vital for designing more robust two-factor authentication schemes for WSN.
Last updated:  2012-11-29
Cryptanalysis of the OKH Authenticated Encryption Scheme
Peng Wang, Wenling Wu, Liting Zhang
Alomair proposed a new authenticated encryption scheme OKH at ACNS 2012, and proved its security, i.e. authenticity and privacy. Our research shows that this is not the case. We need only one query to break the authenticity of OKH with success probability $1$, and two queries to break the privacy of OKH with success probability $1-1/2^n$, where $n$ is the block length of the underlying blockcipher.
Last updated:  2012-10-16
Defending Against the Unknown Enemy: Applying FlipIt to System Security
Kevin D. Bowers, Marten van Dijk, Robert Griffin, Ari Juels, Alina Oprea, Ronald L. Rivest, Nikos Triandopoulos
Most cryptographic systems carry the basic assumption that entities are able to preserve the secrecy of their keys. With attacks today showing ever increasing sophistication, however, this tenet is eroding. “Advanced Persistent Threats” (APTs), for instance, leverage zero-day exploits and extensive system knowledge to achieve full compromise of cryptographic keys and other secrets. Such compromise is often silent, with defenders failing to detect the loss of private keys critical to the protection of their systems. The growing virulence of today’s threats clearly calls for new models of defenders’ goals and abilities. In this paper, we explore applications of FlipIt, a novel game-theoretic model of system defense introduced recently. In FlipIt, an attacker periodically gains complete control of a system, with the unique feature that system compromises are stealthy, i.e., not immediately detected by the system owner, called the defender. We distill out several lessons from our study of FlipIt and demonstrate their application to several real-world problems, including password reset policies, key rotation, VM refresh and cloud auditing.
Last updated:  2012-10-16
Security Evaluations Beyond Computing Power: How to Analyze Side-Channel Attacks you Cannot Mount?
Nicolas Veyrat-Charvillon, Benoît Gérard, François-Xavier Standaert
Present key sizes for symmetric cryptography are usually required to be at least 80-bit long for short-term protection, and 128-bit long for long-term protection. However, current tools for security evaluations against side-channel attacks do not provide a precise estimation of the remaining key strength after some leakage has been observed, e.g. in terms of number of candidates to test. This leads to an uncomfortable situation, where the security of an implementation can be anywhere between enumerable values (i.e. $2^{40}$ -- $2^{50}$ key candidates to test) and the full key size (i.e. $2^{80}$ -- $2^{128}$ key candidates to test). In this paper, we mitigate this important issue, and describe a key rank estimation algorithm that provides tight bounds for the security level of leaking cryptographic devices. As a result and for the first time, we are able to analyze the full complexity of “standard” (i.e. divide-and-conquer) side-channel attacks, in terms of their tradeoff between time, data and memory complexity.
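To make the estimated quantity concrete, the brute-force computation below (our toy example, feasible only for tiny keys) defines key rank exactly: the position of the true key among all full-key candidates ordered by the probability scores a divide-and-conquer attack assigns to each subkey.

```python
from itertools import product
from math import prod

def key_rank(subkey_probs, true_key):
    """Exact rank of the true key: 1 + the number of full-key candidates
    whose combined probability (product over independent subkeys) is
    strictly higher.  Infeasible for, say, 16 subkey bytes, which is why
    tight rank *estimation* algorithms are needed in practice."""
    p_true = prod(probs[k] for probs, k in zip(subkey_probs, true_key))
    better = sum(
        1
        for cand in product(*(range(len(p)) for p in subkey_probs))
        if prod(probs[c] for probs, c in zip(subkey_probs, cand)) > p_true
    )
    return better + 1

# Two 4-valued subkeys for illustration; real attacks face 16 x 256.
probs = [[0.5, 0.3, 0.15, 0.05], [0.4, 0.35, 0.2, 0.05]]
print(key_rank(probs, (1, 0)))  # 3: only (0,0) and (0,1) score higher
```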
Last updated:  2017-03-24
A Framework for Unique Ring Signatures
Matthew Franklin, Haibin Zhang
We propose a simple, general, and unified framework for constructing unique ring signatures that simplify and capture the spirit of linkable ring signatures. The framework, which can be efficiently instantiated in the random oracle and the standard model, is obtained by generalizing the Bellare-Goldwasser ``PRF made public" paradigm. Security of the first instantiation can be more tightly related to the CDH problem and the DDH problem, compared to prior linkable ring signatures. The scheme leads to the most efficient linkable ring signature in the random oracle model, for a given level of provable security. The second one based on stronger assumptions partly simplifies and slightly improves the sublinear size traceable ring signature of Fujisaki (CT-RSA 2011).
Last updated:  2012-10-24
Concurrent Signature without Random Oracles
Xiao Tan, Qiong Huang, Duncan S. Wong
Concurrent signatures provide a way to exchange digital signatures among parties in an efficient and fair manner. To the best of our knowledge, all the existing solutions can only be proven secure in the random oracle model. How to build an efficient concurrent signature scheme in the standard model has remained an open problem since the notion's introduction in 2004. In this paper we answer the problem affirmatively. Based on a novel idea, we propose a new concurrent signature construction, the security of which does not rely on the random oracle assumption. Our idea stems from an attempt to achieve a strong ambiguity feature: anyone should be able to produce indistinguishable ambiguous signatures by just using public information available in the system. In the multi-user setting, we prove the security of the new scheme based on the Computational Diffie-Hellman (CDH) assumption, which is a rather standard and well-studied assumption in cryptography.
Last updated:  2013-11-30
Nanoelectronic Solutions for Hardware Security
Jeyavijayan Rajendran, Ramesh Karri, James B. Wendt, Miodrag Potkonjak, Nathan McDonald, Garrett S. Rose, Bryant Wysocki
Information security has emerged as an important system and application metric. Classical security solutions use algorithmic mechanisms that address a small subset of emerging security requirements, often at high energy and performance overhead. Further, emerging side channel and physical attacks can compromise classical security solutions. Hardware-based security solutions overcome many of the limitations of classical security while consuming less energy and improving performance. Nanoelectronics-based hardware security preserves all of these advantages while enabling conceptually new security mechanisms and security applications. This paper highlights nanoelectronics based security capabilities and challenges. The paper describes nanoelectronics-based hardware security primitives for device identification, digital forensics, and tamper detection. These primitives can be developed using the unique characteristics of emerging nanoelectronic devices such as complex device and system models, bidirectional operation, and temporal drift of state variables. We also identify important desiderata and outstanding challenges in nanoelectronics-based security.
Last updated:  2012-10-14
Quantum algorithm for the discrete logarithm problem for matrices over finite group rings
A. D. Myasnikov, A. Ushakov
We propose a polynomial time quantum algorithm for solving the discrete logarithm problem in matrices over finite group rings. The hardness of this problem was recently employed in the design of a key-exchange protocol proposed by D. Kahrobaei, C. Koupparis, and V. Shpilrain. Our result implies that the Kahrobaei et al. protocol does not belong to the realm of post-quantum cryptography.
Last updated:  2013-01-14
Limits on the Usefulness of Random Oracles
Iftach Haitner, Eran Omri, Hila Zarosim
In the random oracle model, parties are given oracle access to a random function (i.e., a uniformly chosen function from the set of all functions), and are assumed to have unbounded computational power (though they can only make a bounded number of oracle queries). This model provides powerful properties that allow proving the security of many protocols, even ones that cannot be proved secure in the standard model (under any hardness assumption). The random oracle model is also used for showing that a given cryptographic primitive cannot be used in a black-box way to construct another primitive; in their seminal work, Impagliazzo and Rudich [STOC ’89] showed that no key-agreement protocol exists in the random oracle model, yielding that key-agreement cannot be black-box reduced to one-way functions. Their work has a long line of followup works (Simon [EC ’98], Gertner et al. [STOC ’00] and Gennaro et al. [SICOMP ’05], to name a few), showing that oracle access to a certain type of function family (e.g., the family that “implements” public-key encryption) is not sufficient for building a given cryptographic primitive (e.g., oblivious transfer). Yet, the following question remained open: What is the exact power of the random oracle model? We make progress towards answering the above question, showing that, essentially, any no-private-input, semi-honest two-party functionality that can be securely implemented in the random oracle model can be securely implemented information theoretically (where parties are assumed to be all powerful, and no oracle is given). We further generalize the above result to function families that provide some natural combinatorial property. Our result immediately yields that, essentially, the only no-input functionalities that can be securely realized in the random oracle model (in the sense of secure function evaluation) are the trivial ones (ones that can be securely realized information theoretically). In addition, we use the recent information theoretic impossibility result of McGregor et al. [FOCS ’10] to show the existence of functionalities (e.g., inner product) that cannot be computed both accurately and in a differentially private manner in the random oracle model; yielding that protocols for computing these functionalities cannot be black-box reduced to the existence of one-way functions.
Last updated:  2012-10-14
On Constant-Round Concurrent Zero-Knowledge from a Knowledge Assumption
Divya Gupta, Amit Sahai
In this work, we consider the long-standing open question of constructing constant-round concurrent zero-knowledge protocols in the plain model. Resolving this question is known to require non-black-box techniques. We consider non-black-box techniques for zero-knowledge based on knowledge assumptions, a line of thinking initiated by the work of Hada and Tanaka (CRYPTO 1998). Prior to our work, it was not known whether knowledge assumptions could be used for achieving security in the concurrent setting, due to a number of significant limitations that we discuss here. Nevertheless, we obtain the following results: 1. We obtain the first constant round concurrent zero-knowledge argument for \textbf{NP} in the plain model based on a new variant of the knowledge of exponent assumption. Furthermore, our construction avoids the inefficiency inherent in previous non-black-box techniques such as those of Barak (FOCS 2001); we obtain our result through an efficient protocol compiler. 2. Unlike Hada and Tanaka, we do not require a knowledge assumption to argue the soundness of our protocol. Instead, we use a discrete log like assumption, which we call the Diffie-Hellman Logarithm Assumption, to prove the soundness of our protocol. 3. We give evidence that our new variant of the knowledge of exponent assumption is in fact plausible. In particular, we show that our assumption holds in the generic group model. 4. Knowledge assumptions are especially delicate assumptions whose plausibility may be hard to gauge. We give a novel framework to express knowledge assumptions in a more flexible way, which may allow for formulation of plausible assumptions and exploration of their impact and application in cryptography.
Last updated:  2012-10-14
Improved side channel attack on the block cipher NOEKEON
Changyong Peng, Chuangying Zhu, Yuefei Zhu, Fei Kang
NOEKEON is a block cipher with a 128-bit key and a 128-bit block, proposed by Daemen et al. Shekh Faisal Abdul-Latip et al. gave a side channel attack (under the single-bit leakage model) on the cipher at ISPEC 2010. Their analysis shows that one can recover the 128-bit key of the cipher, by considering a one-bit information leakage from the internal state after the second round, with time complexity of O(2^68) evaluations of the cipher, and data complexity of about 2^10 chosen plaintexts. Our side channel attack improves upon the previous work of Shekh Faisal Abdul-Latip et al. in two aspects. First, we use the Hamming weight leakage model (supposing the Hamming weights of the lower 64 bits and the higher 64 bits of the output of the first round can be obtained without error), which is a more relaxed leakage assumption, supported by many previously known practical results on side channel attacks, compared to the more challenging assumption that the adversary has access to the ”exact” value of the internal state bits as used by Shekh Faisal Abdul-Latip et al. Second, our attack also has a reduced complexity compared to that of Shekh Faisal Abdul-Latip et al. Namely, our attack recovering the 128-bit key of NOEKEON has a time complexity of 20.1 seconds on a PC with a 2.6 GHz CPU and 8 GB RAM and data complexity of 99 known plaintexts; whereas that of Shekh Faisal Abdul-Latip et al. has time complexity of O(2^68) and needs about 2^10 chosen plaintexts.
Last updated:  2012-12-23
Zero-Correlation Linear Cryptanalysis of Reduced-Round LBlock
Hadi Soleimany, Kaisa Nyberg
Zero-correlation linear attack is a new method for cryptanalysis of block ciphers developed by Bogdanov et al. in 2012. In this paper we adapt the matrix method to find zero-correlation linear approximations. Then we present several zero-correlation linear approximations for 14 rounds of LBlock and describe a cryptanalysis of 22 rounds of the reduced LBlock. After biclique attacks on LBlock revealed weaknesses in its key schedule, its designers presented a new version of the cipher with a revised key schedule. The attack presented in this paper is applicable to the LBlock structure independently of the key scheduling. The attack needs only distinct known plaintexts, which is a more realistic attack model in comparison with impossible differential cryptanalysis, which uses chosen plaintext pairs. Moreover, we performed simulations on a small variant of LBlock and present the first experimental results on the theoretical model of the multidimensional zero-correlation linear cryptanalysis method.
Last updated:  2014-01-12
Improved Zero-knowledge Proofs of Knowledge for the ISIS Problem, and Applications
San Ling, Khoa Nguyen, Damien Stehle, Huaxiong Wang
In all existing efficient proofs of knowledge of a solution to the infinity norm Inhomogeneous Small Integer Solution ($\mathrm{ISIS}^{\infty}$) problem, the knowledge extractor outputs a solution vector that is only guaranteed to be~$\widetilde{O}(n)$ times longer than the witness possessed by the prover. As a consequence, in many cryptographic schemes that use these proof systems as building blocks, there exists a gap between the hardness of solving the underlying $\mathrm{ISIS}^{\infty}$ problem and the hardness underlying the security reductions. In this paper, we generalize Stern's protocol to obtain two statistical zero-knowledge proofs of knowledge for the $\mathrm{ISIS}^{\infty}$ problem that remove this gap. Our result yields the potential of relying on weaker security assumptions for various lattice-based cryptographic constructions. As applications of our proof system, we introduce a concurrently secure identity-based identification scheme based on the worst-case hardness of the $\mathrm{SIVP}_{\widetilde{O}(n^{1.5})}$ problem (in the $\ell_2$ norm) in general lattices in the random oracle model, and an efficient statistical zero-knowledge proof of plaintext knowledge with small constant gap factor for Regev's encryption scheme.
Last updated:  2012-10-07
On Transaction Pseudonyms with Implicit Attributes
Stefan G. Weber
Transaction pseudonyms with implicit attributes are a novel approach to multilevel linkable transaction pseudonyms. We extend earlier work of Juels and Pappu on reencryption-based transaction pseudonyms by developing new mechanisms for controlled pseudonym linkability. This includes mechanisms for cooperative, stepwise re-identification as well as individual authentication of pseudonyms. Our proposal makes use of efficient techniques from the area of secure multiparty computation and cryptographically secure PRNGs.
Last updated:  2012-10-07
Leakage Squeezing of Order Two
Claude Carlet, Jean-Luc Danger, Sylvain Guilley, Houssem Maghrebi
In masking schemes, \emph{leakage squeezing} is the study of the optimal shares' representation that maximizes the resistance order against high-order side-channel attacks. Squeezing the leakage of first-order Boolean masking has been problematized and solved previously in~\cite{DBLP:conf/africacrypt/MaghrebiCGD12}. The solution consists in finding a bijection $F$ that modifies the mask, in such a way that its graph, seen as a code, is of greatest dual distance. This paper studies second-order leakage squeezing, \emph{i.e.}, leakage squeezing with two independent random masks. It is proved that, compared to first-order leakage squeezing, second-order leakage squeezing at least increments (by one unit) the resistance against high-order attacks, such as high-order correlation power analyses (HO-CPA). Moreover, better improvements over first-order leakage squeezing are possible by relevant constructions of the pair of squeezing bijections. We provide linear bijections that improve the resistance order by strictly more than one (instead of one). Specifically, when the masking is applied on bytes (which suits AES), resistance against $1$st-order (resp. $2$nd-order) attacks is possible with one (resp. two) masks. Optimal leakage squeezing with one mask resists HO-CPA of orders up to $5$. In this paper, with two masks, we provide resistance against HO-CPA not only of order $5+1=6$, but also of order $7$.
Last updated:  2014-01-17
Quantization in Continuous-Source Zero Secrecy Leakage Helper Data Schemes
Joep de Groot, Boris Škorić, Niels de Vreede, Jean-Paul Linnartz
A Helper Data Scheme (HDS) is a cryptographic primitive that extracts a high-entropy noise-free string from noisy data. Helper Data Schemes are used for preserving privacy in biometric databases and for Physical Unclonable Functions. HDSs are known for the guided quantization of continuous-valued biometrics as well as for repairing errors in discrete-valued (digitized) extracted values. We refine the theory of Helper Data Schemes with the Zero Leakage (ZL) property, i.e., the mutual information between the helper data and the extracted secret is zero. We focus on quantization and prove that ZL necessitates particular properties of the helper data generating function: (i) the existence of “sibling points”, enrollment values that lead to the same helper data but different secrets; (ii) quantile helper data. We present an optimal reconstruction algorithm for our ZL scheme, that not only minimizes the reconstruction error rate but also yields a very efficient implementation of the verification. We compare the error rate to schemes that do not have the ZL property.
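A minimal sketch of the quantile idea, as we read it from the abstract (assumed Gaussian features and hypothetical parameter names; not the authors' exact scheme): the secret is the index of the quantile interval containing the enrollment value, the helper data is the position within that interval (uniform and independent of the secret, hence zero leakage), and reconstruction snaps a noisy reading to the nearest sibling point.

```python
import random
from statistics import NormalDist

F = NormalDist(0.0, 1.0)  # assumed feature distribution (our choice)
N = 4                     # number of quantile intervals = secret alphabet

def enroll(x):
    """Secret s = which of the N quantile intervals contains x; helper
    data w = position of x inside that interval.  w is uniform on [0,1)
    and independent of s -- the zero-leakage property."""
    t = N * F.cdf(x)
    s = int(t)
    return s, t - s

def reconstruct(y, w):
    """Snap the noisy reading y to the nearest sibling point: one of the
    N enrollment values that would have produced the same helper data w."""
    siblings = [F.inv_cdf((s + w) / N) for s in range(N)]
    return min(range(N), key=lambda s: abs(y - siblings[s]))

x = random.gauss(0.0, 1.0)       # enrollment measurement
s, w = enroll(x)
y = x + random.gauss(0.0, 0.05)  # noisy verification measurement
print(s == reconstruct(y, w))    # True unless the noise is large
```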
Last updated:  2012-10-07
Packed Ciphertexts in LWE-based Homomorphic Encryption
Zvika Brakerski, Craig Gentry, Shai Halevi
In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of ``General-LWE''). Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
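In rough strokes (our notation; a sketch rather than the note's exact formulation), the packing replaces Regev's secret vector by a secret matrix, so a single ciphertext carries $\ell$ plaintext slots:

```latex
S \in \mathbb{Z}_q^{\ell \times n}, \qquad
(\mathbf{a}, \mathbf{b}) \in \mathbb{Z}_q^{n} \times \mathbb{Z}_q^{\ell}, \qquad
\mathbf{b} \;=\; S\,\mathbf{a} + \mathbf{e} + \left\lfloor q/2 \right\rfloor \mathbf{m}
\quad\text{for } \mathbf{m} \in \{0,1\}^{\ell}.
% Adding two such ciphertexts componentwise adds all l plaintext slots
% at once -- the SIMD behaviour exploited for homomorphic operations
% under plain (non-ring) LWE.
```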
Last updated:  2013-06-30
Adaptively Secure Garbling with Applications to One-Time Programs and Secure Outsourcing
Mihir Bellare, Viet Tung Hoang, Phillip Rogaway
Standard constructions of garbled circuits provide only static security, meaning the input x is not allowed to depend on the garbled circuit F. But some applications—notably one-time programs (Goldwasser, Kalai, and Rothblum 2008) and secure outsourcing (Gennaro, Gentry, Parno 2010)—need adaptive security, where x may depend on F. We identify gaps in proofs from these papers with regard to adaptive security and suggest the need of a better abstraction boundary. To this end we investigate the adaptive security of garbling schemes, an abstraction of Yao’s garbled-circuit technique that we recently introduced (Bellare, Hoang, Rogaway 2012). Building on that framework, we give definitions encompassing privacy, authenticity, and obliviousness, with either coarse-grained or fine-grained adaptivity. We show how adaptively secure garbling schemes support simple solutions for one-time programs and secure outsourcing, with privacy being the goal in the first case and obliviousness and authenticity the goal in the second. We give transforms that promote static-secure garbling schemes to adaptive-secure ones. Our work advances the thesis that conceptualizing garbling schemes as a first-class cryptographic primitive can simplify, unify, or improve treatments for higher-level protocols.
Last updated:  2012-10-07
Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions
Kai-Min Chung, Huijia Lin, Rafael Pass
We present a constant-round concurrent zero-knowledge protocol for NP. Our protocol is sound against uniform polynomial-time attackers, and relies on the existence of families of collision-resistant hash functions, and a new (but in our eyes, natural) falsifiable intractability assumption: Roughly speaking, that Micali's non-interactive CS-proofs are sound for languages in P.
Last updated:  2013-11-24
Aggregating CL-Signatures Revisited: Extended Functionality and Better Efficiency
Kwangsu Lee, Dong Hoon Lee, Moti Yung
An aggregate signature scheme is a public-key signature scheme that allows anyone to aggregate different signatures generated by different signers on different messages into a short signature, called an aggregate signature. The notion has many applications where compressing the signature space is important: secure routing protocols, compressed certificate-chain signatures, software module authentication, and secure high-scale repositories and logs for financial transactions. In spite of its importance, the state of the art of the primitive is that it has not been easy to devise a suitable aggregate signature scheme that satisfies the conditions of real applications with reasonable parameters: short public key size, short aggregate signature size, and efficient aggregate signing/verification. In this paper, we propose aggregate signature schemes based on the Camenisch-Lysyanskaya (CL) signature scheme (Crypto 2004), whose security is reduced to that of the CL signature, and which substantially improve the efficiency conditions for real applications. - We first propose an efficient \textit{sequential aggregate signature} scheme with the shortest public key to date and very efficient aggregate verification requiring only a constant number of pairing operations and $l$ exponentiations ($l$ being the number of signers). - Next, we propose an efficient \textit{synchronized aggregate signature} scheme with a very short public key size and with the shortest (to date) aggregate signatures among synchronized aggregate signature schemes. Signing and aggregate verification are very efficient: they likewise take a constant number of pairing operations and $l$ exponentiations. - Finally, we introduce a new notion of aggregate signature named \textit{combined aggregate signature} that allows a signer to dynamically use the two modes of aggregation, ``sequential'' and ``synchronized,'' with the same private/public key. We also present an efficient combined aggregate signature scheme based on our previous two aggregate signature schemes. This combined-mode scheme allows for application flexibility depending on the real-world scenario: for example, it can be used sequentially to sign incrementally generated legal documents, and synchronously to aggregate the end-of-day logs of all branches of an institution into a single location with a single aggregate signature.
Last updated:  2012-10-02
An Attack on a Fully Homomorphic Encryption Scheme
Hu Yupu, Wang Fenghe
In this paper we present an attack on a fully homomorphic encryption scheme presented at PKC 2010. We construct a modified secret key, a modified decryption algorithm and a subset of the ciphertext space. When the ciphertext is from the subset, we can correctly decrypt it by our modified secret key and modified decryption algorithm. We also discuss when our modified decryption algorithm is efficient, and when the subset is not negligible.
Last updated:  2012-09-30
Computational Soundness of Coinductive Symbolic Security under Active Attacks
Mohammad Hajiabadi, Bruce M. Kapron
In Eurocrypt 2010, Micciancio initiated an investigation of cryptographically sound, symbolic security analysis with respect to coinductive adversarial knowledge, and demonstrated that under an adversarially passive model, certain security criteria (e.g. indistinguishability) may be given a computationally sound symbolic characterization, without the assumption of key acyclicity. Left open in his work was the fundamental question of ``the viability of extending the coinductive approach to prove computational soundness results in the presence of active adversaries.'' In this paper we make some initial steps toward answering this question in the affirmative with respect to an extension of a trace-based security model (proposed by Micciancio and Warinschi in TCC 2004) including asymmetric and symmetric encryption; in particular we prove that a random computational trace can be soundly abstracted by a coinductive symbolic trace with overwhelming probability, provided that both the underlying encryption schemes provide IND-CCA2 security (plus {ciphertext integrity} for the symmetric scheme), and that the diameter of the underlying coinductively-hidden subgraph is constant in every symbolic trace. This result holds even if the protocol allows arbitrarily nested applications of symmetric/asymmetric encryption, unrestricted transmission of symmetric keys, and adversaries who adaptively corrupt users, along with other forms of active attack. As part of our proof, we formulate a game-based definition of encryption security allowing adaptive corruptions of keys and certain forms of adaptive key-dependent plaintext attack, along with other common forms of CCA2 attack. We prove that (with assumptions similar to the above) security under this game is implied by IND-CCA2 security. This also characterizes a provably benign form of cyclic encryption which can be achieved under standard notions of encryption security, which may be of independent interest.
Last updated:  2014-01-27
Plaintext Awareness in Identity-Based Key Encapsulation
Mark Manulis, Bertram Poettering, Douglas Stebila
The notion of plaintext awareness (PA) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen ciphertext attacks (INDCCA), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving PA in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving PA of schemes in one model cannot be adapted to the other model. Existing research addresses PA in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus is laid mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of PA in proof strategies of INDCCA security, and explores relationships between PA and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and INDCCA-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of gamma-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs) this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is PA and INDCCA-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.
Last updated:  2012-09-30
Domain-Specific Pseudonymous Signatures for the German Identity Card
Jens Bender, Özgür Dagdelen, Marc Fischlin, Dennis Kügler
The restricted identification protocol for the new German identity card basically provides a method to use pseudonyms such that they can be linked by individual service providers, but not across different service providers (not even malicious ones). The protocol can be augmented to also allow for signatures under the pseudonyms. In this paper, we thus view---and define---this idea more abstractly as a new cryptographic signature primitive with some form of anonymity, and use the term domain-specific pseudonymous signatures. We then analyze the restricted identification solutions in terms of the formal security requirements.
Last updated:  2012-09-30
PUFs: Myth, Fact or Busted? A Security Evaluation of Physically Unclonable Functions (PUFs) Cast in Silicon (Extended Version)
Stefan Katzenbeisser, Ünal Kocabaş, Vladimir Rožić, Ahmad-Reza Sadeghi, Ingrid Verbauwhede, Christian Wachsmann
Physically Unclonable Functions~(PUFs) are an emerging technology and have been proposed as central building blocks in a variety of cryptographic protocols and security architectures. However, the security features of PUFs are still under investigation: Evaluation results in the literature are difficult to compare due to varying test conditions, different analysis methods and the fact that representative data sets are publicly unavailable. In this paper, we present the first large-scale security analysis of ASIC implementations of the five most popular intrinsic electronic PUF types, including arbiter, ring oscillator, SRAM, flip-flop and latch PUFs. Our analysis is based on PUF data obtained at different operating conditions from $96$ ASICs housing multiple PUF instances, which have been manufactured in TSMC 65nm CMOS technology. In this context, we present an evaluation methodology and quantify the robustness and unpredictability properties of PUFs. Since all PUFs have been implemented in the same ASIC and analyzed with the same evaluation methodology, our results allow for the first time a fair comparison of their properties.
Last updated:  2012-09-28
Resource-based Corruptions and the Combinatorics of Hidden Diversity
Juan Garay, David Johnson, Aggelos Kiayias, Moti Yung
In the setting of cryptographic protocols, the corruption of a party has traditionally been viewed as a simple, uniform and atomic operation, where the adversary decides to get control over a party and this party immediately gets corrupted. In this paper, motivated by the fact that different players may require different resources to get corrupted, we put forth the notion of {\em resource-based corruptions}, where the adversary must invest some resources in order to do so. If the adversary has full information about the system configuration then resource-based corruptions would provide no fundamental difference from the standard corruption model. However, in a resource ``anonymous'' setting, in the sense that such configuration is hidden from the adversary, much is to be gained in terms of efficiency and security. We showcase the power of such {\em hidden diversity} in the context of secure multiparty computation (MPC) with resource-based corruptions and prove that it can effectively be used to circumvent known impossibility results. Specifically, if $OPT$ is the corruption budget that violates the completeness of MPC (the case when half or more of the players are corrupted), we show that if hidden diversity is available, the completeness of MPC can be made to hold against an adversary with as much as a $B\cdot OPT$ budget, for any constant $B>1$. This result requires a suitable choice of parameters (in terms of number of players and their hardness to corrupt), which we provide and further prove other tight variants of the result when the said choice is not available. Regarding efficiency gains, we show that hidden diversity can be used to force the corruption threshold to drop from 1/2 to 1/3, in turn allowing the use of much more efficient (information-theoretic) MPC protocols. We achieve the above through a series of technical contributions: o The modeling of the corruption process in the setting of cryptographic protocols through {\em corruption oracles} as well as the introduction of a notion of reduction to relate such oracles; o the abstraction of the corruption game as a combinatorial problem and its analysis; and, importantly, o the formulation of the notion of {\em inversion effort preserving} (IEP) functions which is a type of direct-sum property, and the property of {\em hardness indistinguishability}. While hardness indistinguishability enables the dissociation of parties' identities and the resources needed to corrupt them, IEP enables the discretization of adversarial work into corruption tokens, all of which may be of independent interest.
Last updated:  2012-10-16
New Impossibility Results for Concurrent Composition and a Non-Interactive Completeness Theorem for Secure Computation
Shweta Agrawal, Vipul Goyal, Abhishek Jain, Manoj Prabhakaran, Amit Sahai
We consider the client-server setting for the concurrent composition of secure protocols: in this setting, a single server interacts with multiple clients concurrently, executing with each client a specified protocol where only the client should receive any nontrivial output. Such a setting is easily motivated from an application standpoint. There are important special cases for which positive results are known – such as concurrent zero-knowledge protocols – and it has been an open question, explicitly asked, for instance, by Lindell [J. Cryptology’08], whether other natural functionalities such as Oblivious Transfer (OT) are possible in this setting. In this work: 1. We resolve this open question by showing that unfortunately, even in this very limited concurrency setting, broad new impossibility results hold, ruling out not only OT, but in fact all nontrivial asymmetric functionalities. Our new negative results hold even if the inputs of all honest parties are fixed in advance, and the adversary receives no auxiliary information. 2. Along the way, we establish a new unconditional completeness result for asymmetric functionalities, where we characterize functionalities that are non-interactively complete, secure against active adversaries. When we say that a functionality F is non-interactively complete, we mean that every other asymmetric functionality can be realized by parallel invocations of several copies of F, with no other communication in any direction. Our result subsumes a completeness result of Kilian [STOC’00] that uses protocols which require additional interaction in both directions.
Last updated:  2012-09-27
Security weakness in the Proof of Storage with Deduplication
Youngjoo Shin, Junbeom Hur, Kwangjo Kim
Achieving both security and efficiency is a challenging issue for data outsourcing services in cloud computing. Proof of Storage with Deduplication (POSD) is the first solution that addresses this issue for cloud storage. However, the validity of the POSD scheme rests on the strong assumption that all clients are honest in terms of generating their keys. We show the insecurity of the scheme under a new attack model in which malicious clients exploit dishonestly manipulated keys. We also propose an improvement of the POSD scheme to mitigate our attack.
Last updated:  2012-09-27
Bellcore attack in practice
Andrey Sidorenko, Joachim van den Berg, Remko Foekema, Michiel Grashuis, Jaap de Vos
In this paper we analyze practical aspects of the differential fault attack on RSA published by Boneh, Demillo and Lipton from Bellcore. We focus on the CRT variant, which requires only one faulty signature to be entirely broken provided that no DFA countermeasures are in use. Usually the easiest approach for the attacker is to introduce a fault in one of the two RSA-CRT exponentiations. These are time-consuming and often clearly visible in the power profiles. However, protection of the exponentiations against faults does not always circumvent the Bellcore attack. Our goal is to investigate and classify other possible targets of the attack.
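For background, the core Bellcore observation can be demonstrated in a few lines of Python with toy parameters (the fault is simply injected into the mod-q half of the CRT computation; all numbers are illustrative, and Python 3.8+ is assumed for modular inverses via pow):

    from math import gcd

    # Toy RSA-CRT parameters (illustrative only).
    p, q = 1009, 1013
    N = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))
    m = 123456 % N

    def crt(sp, sq):
        # Combine s_p = m^d mod p and s_q = m^d mod q into s mod N.
        q_inv = pow(q, -1, p)
        return (sq + q * ((q_inv * (sp - sq)) % p)) % N

    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    s_faulty = crt(s_p, (s_q + 1) % q)   # fault corrupts the mod-q exponentiation

    # Bellcore attack: the faulty signature is still correct mod p,
    # so gcd(s^e - m, N) reveals the secret factor p.
    print(gcd(pow(s_faulty, e, N) - m, N))  # prints 1009 == p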
Last updated:  2014-02-27
Provably Secure Concurrent Error Detection Against Differential Fault Analysis
Xiaofei Guo, Debdeep Mukhopadhyay, Ramesh Karri
Differential fault analysis (DFA) poses a significant threat to the Advanced Encryption Standard (AES). It has been demonstrated that DFA can reveal the secret key of AES using only a single faulty ciphertext with an average of $2^{30}$ computations. Traditionally, concurrent error detection (CED) is used to protect AES against DFA. However, we emphasize that conventional CED assumes a uniform distribution of faults, which is not a valid assumption in the context of DFA. In contrast, we show practical examples which highlight that an attacker can inject specific and exploitable faults, thus threatening existing CED. This paper brings to the surface a new CED approach for cryptography, aimed at providing provable security by detecting all possible DFA-exploitable faults, which form a small subset of the entire fault space. We analyze the fault coverage of conventional CED against DFA-exploitable faults, and we find that the fault coverage of most of these techniques is significantly lower than claimed. We stress that for security, it is imperative that CED provide 100% fault coverage for DFA-exploitable faults. We further propose an invariance-based CED which provides 100% provable security against all known DFA of AES.
Last updated:  2012-09-24
Faster Pairing Computation on Jacobi quartic Curves with High-Degree Twists
Liangze Li, Hongfeng Wu, Fan Zhang
In this paper, we propose an elaborate geometric approach to explain the group law on Jacobi quartic curves, which are seen as the intersection of two quadratic surfaces in space. Using the geometric interpretation we construct the Miller function. Then we present explicit formulae for the addition and doubling steps in Miller's algorithm to compute the Tate pairing on Jacobi quartic curves. Both the addition step and the doubling step of our formulae for Tate pairing computation on Jacobi quartic curves are faster than previously proposed ones. Finally, we present efficient formulae for Jacobi quartic curves with twists of degree 4 or 6. For twists of degree 4, both the addition steps and doubling steps in our formulae are faster than the fastest results on Weierstrass curves. For twists of degree 6, the addition steps of our formulae are faster than the fastest results on Weierstrass curves.
Last updated:  2013-03-15
Dynamic Proofs of Retrievability via Oblivious RAM
David Cash, Alptekin Kupcu, Daniel Wichs
Proofs of retrievability allow a client to store her data on a remote server (``in the cloud'') and periodically execute an efficient audit protocol to check that all of the data is being maintained correctly and can be recovered from the server. For efficiency, the computation and communication of the server and client during an audit protocol should be significantly smaller than reading/transmitting the data in its entirety. Although the server is only asked to access a few locations of its storage during an audit, it must maintain full knowledge of all client data to be able to pass. Starting with the work of Juels and Kaliski (CCS '07), all prior solutions to this problem crucially assume that the client data is static and do not allow it to be efficiently updated. Indeed, they all store a redundant encoding of the data on the server, so that the server must delete a large fraction of its storage to `lose' any actual content. Unfortunately, this means that even a single bit modification to the original data will need to modify a large fraction of the server storage, which makes updates highly inefficient. Overcoming this limitation was left as the main open problem by all prior works. In this work, we give the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data by running an efficient protocol with the server. At any point in time, the client can execute an efficient audit protocol to ensure that the server maintains the latest version of the client data. The computation and communication complexity of the server and client in our protocols is only polylogarithmic in the size of the client's data. The starting point of our solution is to split up the data into small blocks and redundantly encode each block of data individually, so that an update inside any data block only affects a few codeword symbols. The main difficulty is to prevent the server from identifying and deleting too many codeword symbols belonging to any single data block. We do so by hiding where the various codeword symbols for any individual data block are stored on the server and when they are being accessed by the client, using the algorithmic techniques of oblivious RAM.
Last updated:  2012-09-22
Faster batch forgery identification
Daniel J. Bernstein, Jeroen Doumen, Tanja Lange, Jan-Jaap Oosterwijk
Batch signature verification detects whether a batch of signatures contains any forgeries. Batch forgery identification pinpoints the location of each forgery. Existing forgery-identification schemes vary in their strategies for selecting subbatches to verify (individual checks, binary search, combinatorial designs, etc.) and in their strategies for verifying subbatches. This paper exploits synergies between these two levels of strategies, reducing the cost of batch forgery identification for elliptic-curve signatures.
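To make the subbatch-selection level concrete, here is a minimal Python sketch of the binary-search strategy with the subbatch verifier left abstract; the paper's actual contribution, exploiting synergies between the selection and verification levels, is not captured here.

    def find_forgeries(batch, batch_verify):
        """Return the indices of all forged signatures in `batch`.

        `batch_verify(sub)` is an oracle that returns True iff every
        signature in the sub-batch `sub` is valid.
        """
        def search(lo, hi):
            if batch_verify(batch[lo:hi]):
                return []                    # no forgery in this sub-batch
            if hi - lo == 1:
                return [lo]                  # isolated a single forgery
            mid = (lo + hi) // 2
            return search(lo, mid) + search(mid, hi)
        return search(0, len(batch))

    # Toy usage: "signatures" are ints, a forgery is a negative value.
    batch = [1, 1, -1, 1, 1, 1, -1, 1]
    print(find_forgeries(batch, lambda sub: all(s > 0 for s in sub)))  # [2, 6]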
Last updated:  2013-09-09
Efficient Modular NIZK Arguments from Shift and Product
Prastudy Fauzi, Helger Lipmaa, Bingsheng Zhang
We propose a non-interactive product argument that is more efficient than the one by Groth and Lipmaa, and a novel shift argument. We then use them to design several novel non-interactive zero-knowledge (NIZK) arguments. We obtain the first range proof with constant communication and subquadratic prover's computation. We construct NIZK arguments for $\mathbf{NP}$-complete languages, {\textsc{Set-Partition}}, {\textsc{Subset-Sum}} and {\textsc{Decision-Knapsack}}, with constant communication, subquadratic prover's computation and linear verifier's computation.
Last updated:  2012-09-22
Constrained Search for a Class of Good S-Boxes with Improved DPA Resistivity
Bodhisatwa Mazumdar, Debdeep Mukhopadhyay, Indranil Sengupta
In FSE 2005, \emph{transparency order} was proposed as a parameter for the robustness of S-boxes to \emph{Differential Power Analysis} (DPA): a lower \emph{transparency order} implies more resistance. However, most cryptographically strong Boolean functions have been found to have high \emph{transparency order}. It is also a difficult problem to search for Boolean functions which are cryptographically strong and yet have low \emph{transparency order}, the total search space for $(n,n)$-bit Boolean functions being as large as $2^{n2^n}$. In this paper we characterize \emph{transparency order} for various classes of Boolean functions by computing upper and lower bounds of \emph{transparency order} for both even and odd numbers of variables. The transparency order is defined in terms of the \emph{diffusion} properties of Boolean functions, namely the number of bit flips in the output of a function corresponding to the number of bit flips at its input. The calculated bounds depend on the number of vectors flipping the input of the S-box for which the bias of the probability of an S-box output bit deviates from 0.5. The \emph{transparency order} is found to be high in the class of those Boolean functions which have a larger cardinality of input differences for which the probability of an output bit flip is 0.5. We also find that, instead of \emph{propagation characteristics}, the \emph{autocorrelation spectrum} of the S-box function $F$ is a better qualified candidate for deciding the characteristics of \emph{transparency order}. The relations developed to characterize \emph{transparency order} aid our constrained random generation and search of a class of balanced $8\times8$ S-boxes with \emph{transparency order} upper bounded by 7.8, \emph{nonlinearity} in the range $(104,110)$ and \emph{absolute indicator values} of the \emph{GAC} in the range $(48,88)$.
Last updated:  2013-02-21
Rotational cryptanalysis of round-reduced Keccak
Pawel Morawiecki, Josef Pieprzyk, Marian Srebrny
In this paper we attack the round-reduced Keccak hash function with a technique called rotational cryptanalysis. We focus on the Keccak variants proposed as SHA-3 candidates in NIST's contest for a new cryptographic hash function standard. Our main results are a preimage attack on 4-round Keccak and a 5-round distinguisher on the Keccak-f[1600] permutation --- the main building block of the Keccak hash function.
Last updated:  2012-11-04
A Versatile Multi-Input Multiplier over Finite Fields
Haibo Yi, Shaohua Tang, Lingling Xu
Multiplication of three elements over finite fields is used extensively in multivariate public key cryptography and in solving systems of linear equations over finite fields. This contribution shows how multiplication of three elements over finite fields can be enhanced by using a specific architecture. We first propose a versatile multi-input multiplier over finite fields. The parameters of this multiplier can be changed according to the requirements of the users, which makes it reusable in different applications. Our evaluation of this multiplier gives optimum choices for multiplication of three elements over finite fields. Implementation results show that each multiplication of three elements over $GF((2^4)^2)$ takes $22.062$ ns and $16.354$ ns on an FPGA, based on table look-up and on polynomial basis, respectively. Experimental results and mathematical proofs clearly demonstrate the improvement of the proposed versatile multiplier over finite fields.
Last updated:  2012-11-22
Differential Analysis of the LED Block Cipher
Florian Mendel, Vincent Rijmen, Deniz Toz, Kerem Varici
In this paper, we present a security analysis of the lightweight block cipher LED proposed by Guo et al. at CHES 2011. Since the design of LED is very similar to the Even-Mansour scheme, we first review existing attacks on this scheme and extend them to related-key and related-key-cipher settings before we apply them to LED. We obtain results for $12$ and $16$ rounds (out of $32$) for LED-$64$ and $16$ and $24$ rounds (out of $48$) for LED-$128$. Furthermore, we present an observation on full LED in the related-key-cipher setting. For all these attacks we need to find good differentials for one step (4 rounds) of LED. Therefore, we extend the study of plateau characteristics for AES-like structures from two rounds to four rounds when the key addition is replaced with a constant addition. We introduce an algorithm that can be used to find good differentials and right pairs for one step of LED. To be more precise, we can find more than $2^{10}$ right pairs for one step of LED with complexity of $2^{16}$ and memory requirement of $5 \times 2^{17}$. Moreover, a similar algorithm can also be used to find iterative characteristics for LED.
Last updated:  2014-04-08
Enhanced Chosen-Ciphertext Security and Applications
Dana Dachman-Soled, Georg Fuchsbauer, Payman Mohassel, Adam O'Neill
We introduce and study a new notion of \emph{enhanced chosen-ciphertext security} (ECCA) for public-key encryption. Loosely speaking, in the ECCA security experiment, the decryption oracle provided to the adversary is augmented to return not only the output of the decryption algorithm on a queried ciphertext but also of a \emph{randomness recovery} algorithm associated to the scheme. Our results mainly concern the case where the randomness recovery algorithm is efficient. We provide constructions of ECCA-secure encryption from adaptive trapdoor functions as defined by Kiltz \emph{et al.}~(EUROCRYPT 2010), resulting in ECCA encryption from standard number-theoretic assumptions. We then give two applications of ECCA-secure encryption: (1) We use it as a unifying concept in showing equivalence of adaptive trapdoor functions and tag-based adaptive trapdoor functions, resolving an open question of Kiltz \emph{et al}.~(2) We show that ECCA-secure encryption can be used to securely realize an approach to public-key encryption with non-interactive opening (PKENO) originally suggested by Damgård and Thorbek (EUROCRYPT 2007), resulting in new and practical PKENO schemes quite different from those in prior work. Our results demonstrate that ECCA security is of both practical and theoretical interest.
Last updated:  2012-09-20
Salus: A System for Server-Aided Secure Function Evaluation
Seny Kamara, Payman Mohassel, Ben Riva
Secure function evaluation (SFE) allows a set of mutually distrustful parties to evaluate a function of their joint inputs without revealing their inputs to each other. SFE has been the focus of active research and recent work suggests that it can be made practical. Unfortunately, current protocols and implementations have inherent limitations that are hard to overcome using standard and practical techniques. Among them are: (1) requiring participants to do work linear in the size of the circuit representation of the function; (2) requiring all parties to do the same amount of work; and (3) not being able to provide complete fairness. A promising approach for overcoming these limitations is to augment the SFE setting with a small set of untrusted servers that have no input to the computation and that receive no output, but that make their computational resources available to the parties. In this model, referred to as server-aided SFE, the goal is to trade off the parties' work at the expense of the servers. Motivated by the emergence of public cloud services such as Amazon EC2 and Microsoft Azure, recent work has explored the extent to which server-aided SFE can be achieved with a single server. In this work, we revisit the server-aided setting from a practical perspective and design single-server-aided SFE protocols that are considerably more efficient than all previously-known protocols. We achieve this in part by introducing several new techniques for garbled-circuit-based protocols, including a new and efficient input-checking mechanism for cut-and-choose and a new pipelining technique that works in the presence of malicious adversaries. Furthermore, we extend the server-aided model to guarantee fairness, which is an important property to achieve in practice. Finally, we implement and evaluate our constructions experimentally and show that our protocols (regardless of the number of parties involved) yield implementations that are 4 and 6 times faster than the most optimized two-party SFE implementation when the server is assumed to be malicious and covert, respectively.
Last updated:  2012-10-05
2048XKS - A Software Oriented High Security Block Cipher
Dieter Schmidt
The block cipher 2048XKS is a derivative of the block ciphers 1024 and 1024XKS, which in turn used the block ciphers MMB, SAFER, and Blowfish as building blocks. The block cipher 2048XKS has a block size of 2048 bits and key lengths of 4096 bits and 8192 bits, respectively. 2048XKS is a Substitution-Permutation Network (SPN). It is designed for 32-bit microprocessors with a hardware integer multiplier.
Last updated:  2014-06-22
A Comparison of Perfect Table Cryptanalytic Tradeoff Algorithms
Ga Won Lee, Jin Hong
The performances of three major time memory tradeoff algorithms were compared in a recent paper. The algorithms considered there were the classical Hellman tradeoff and the non-perfect table versions of the distinguished point method and the rainbow table method. This paper adds the perfect table versions of the distinguished point method and the rainbow table method to the list, so that all the major tradeoff algorithms may now be compared against each other. Even though there are existing claims as to the superiority of one tradeoff algorithm over another algorithm, the algorithm performance comparisons provided by the current work and the recent preceding paper are of more practical value. Comparisons that take both the cost of pre-computation and the efficiency of the online phase into account, at parameters that achieve a common success rate, can now be carried out with ease. Comparisons can be based on the expected execution complexities rather than the worst case complexities, and details such as the effects of false alarms and various storage optimization techniques need no longer be ignored. A significant portion of this paper is allocated to accurately analyzing the execution behavior of the perfect table distinguished point method. In particular, we obtain a closed-form formula for the average length of chains associated with a perfect distinguished point table.
Last updated:  2012-09-20
Efficient Implementation of RSA Algorithm with MKE
Sami A. Nagar, Dr. Saad Alshamma
The aim of this paper is to improve the implementation of the RSA algorithm in certain situations. The improvement is based on the ideas of H. Ren-Junn, S. Feng-Fu, Y. Yi-Shiung and C. Chia-Yao, and allows us to speed up the transmission of data between various nodes in networks. We realized our idea in the C{\#} language with a database created in SQL Server 2008 R2. An identical database must be used in all network gateways; a special protocol (the RSA Handshake Database Protocol) was designed for controlling the creation of this database. Each gateway runs an RSA-Key Generations Offline procedure, controlled according to its specific requirements; this stage is called RSA-Key Generations Offline. We propose a new method for exchanging key values among gateways: roughly speaking, gateways exchange indexes (Indexes Exchange) that refer to database fields containing the values of the public and private keys.
Last updated:  2012-10-24
Private Top-k Aggregation Protocols
Myungsun Kim, Abedelaziz Mohaisen, Jung Hee Cheon, Yongdae Kim
In this paper, we revisit the private top-$k$ data aggregation problem. First we formally define the problem’s security requirements as both data and user privacy goals. To achieve both goals, and to strike a balance between efficiency and functionality, we devise a novel cryptographic construction that comes in two schemes: a fully decentralized simple construction and a practical, semi-decentralized variant. Both schemes are provably secure in the semi-honest model. We analyze the computational and communication complexities of our construction, and show that it is much more efficient than the existing protocols in the literature.
Last updated:  2013-09-17
Intercepting Tokens: The Empire Strikes Back in the Clone Wars
Özgür Dagdelen, Marc Fischlin
We discuss interception attacks on cryptographic protocols which rely on trustworthy hardware like one-time memory tokens (Goldwasser et al., Crypto 2008). In such attacks the adversary can mount man-in-the-middle attacks and access, or even substitute, transmitted tokens. We show that many of the existing token-based protocols are vulnerable against this kind of attack, which typically lies outside of the previously considered security models. We also give a positive result for protocols remaining secure against such attacks. We present a very efficient protocol for password-based authenticated key exchange based on the weak model of one-time memory tokens. Our protocol only requires four moves, very basic operations, and the sender to send l tokens in the first step for passwords of length l. At the same time we achieve information-theoretic security in Canetti's universal composition framework (FOCS 2001) against adaptive adversaries (assuming reliable erasure), even if the tokens are not guaranteed to be transferred securely, i.e., even if the adversary can read or substitute transmitted tokens.
Last updated:  2012-09-20
Secret Sharing and Secure Computing from Monotone Formulae
Ivan Bjerre Damgård, Jonas Kölker, Peter Bro Miltersen
We present a construction of log-depth formulae for various threshold functions based on atomic threshold gates of constant size. From this, we build a new family of linear secret sharing schemes that are multiplicative, scale well as the number of players increases, and allow a shared value to be raised to the characteristic of the underlying field without interaction. Some of these schemes are in addition strongly multiplicative. Our formulae can also be used to construct multiparty protocols from protocols for a constant number of parties. In particular we implement black-box multiparty computation over non-Abelian groups in a way that is much simpler than previously known, and we also show how to get a protocol in this setting that is efficient and actively secure against a constant fraction of corrupted parties, a long-standing open problem. Finally, we show a negative result on the usage of our scheme for pseudorandom secret sharing as defined by Cramer, Damgård and Ishai.
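For intuition, the classical monotone-formula-to-secret-sharing connection (in the style of Benaloh and Leichter) that underlies such constructions can be sketched in Python as follows; the formula encoding is ad hoc, and the multiplicativity and scaling properties that are the point of the paper are not modelled.

    import random

    P = 2**61 - 1  # a prime field (illustrative choice)

    def share(secret, formula, shares=None):
        """Share `secret` according to a monotone formula.

        `formula` is a party id (leaf), ('or', f1, f2) or ('and', f1, f2).
        OR duplicates the secret; AND splits it additively mod P.
        """
        if shares is None:
            shares = {}
        if isinstance(formula, str):                  # leaf: a party
            shares.setdefault(formula, []).append(secret)
        elif formula[0] == 'or':
            share(secret, formula[1], shares)
            share(secret, formula[2], shares)
        else:                                         # 'and'
            r = random.randrange(P)
            share(r, formula[1], shares)
            share((secret - r) % P, formula[2], shares)
        return shares

    # (A and B) or (A and C): either {A,B} or {A,C} can reconstruct.
    f = ('or', ('and', 'A', 'B'), ('and', 'A', 'C'))
    sh = share(42, f)
    print((sh['A'][0] + sh['B'][0]) % P)  # 42, via the left AND branch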
Last updated:  2012-09-20
A Low-Area Unified Hardware Architecture for the AES and the Cryptographic Hash Function Grøstl
Nuray At, Jean-Luc Beuchat, Eiji Okamoto, Ismail San, Teppei Yamazaki
This article describes the design of an 8-bit coprocessor for the AES (encryption, decryption, and key expansion) and the cryptographic hash function Grøstl on several Xilinx FPGAs. Our Arithmetic and Logic Unit performs a single instruction that allows for implementing AES encryption, AES decryption, AES key expansion, and Grøstl at all levels of security. Thanks to a careful organization of AES and Grøstl internal states in the register file, we manage to generate all read and write addresses by means of a modulo-128 counter and a modulo-256 counter. A fully autonomous implementation of Grøstl and AES on a Virtex-6 FPGA requires 169 slices and a single 36k memory block, and achieves a competitive throughput. Assuming that the security guarantees of Grøstl are at least as good as the ones of the other SHA-3 finalists, our results show that Grøstl is the best candidate for low-area cryptographic coprocessors.
Last updated:  2012-11-07
A Simple Combinatorial Treatment of Constructions and Threshold Gaps of Ramp Schemes
Maura B. Paterson, Douglas R. Stinson
We give easy proofs of some recent results concerning threshold gaps in ramp schemes. We then generalise a construction method for ramp schemes employing error-correcting codes so that it can be applied using nonlinear (as well as linear) codes. Finally, as an immediate consequence of these results, we provide a new explicit bound on the minimum length of a code having a specified distance and dual distance.
Last updated:  2012-09-20
Solving Hard Lattice Problems and the Security of Lattice-Based Cryptosystems
Thijs Laarhoven, Joop van de Pol, Benne de Weger
This paper is a tutorial introduction to the present state-of-the-art in the field of security of lattice-based cryptosystems. After a short introduction to lattices, we describe the main hard problems in lattice theory that cryptosystems base their security on, and we present the main methods of attacking these hard problems, based on lattice basis reduction. We show how to find shortest vectors in lattices, which can be used to improve basis reduction algorithms. Finally we give a framework for assessing the security of cryptosystems based on these hard problems.
Last updated:  2012-09-27
Pairing computation on Edwards curves with high-degree twists
Liangze Li, Hongfeng Wu, Fan Zhang
In this paper, we propose an elaborate geometric approach to explain the group law on twisted Edwards curves, which are seen as the intersection of quadric surfaces in space. Using the geometric interpretation of the group law we obtain the Miller function for Tate pairing computation on twisted Edwards curves. Then we present explicit formulae for pairing computation on twisted Edwards curves. Our formulae for the doubling step are a little faster than those proposed by Arene et al. Finally, to improve the efficiency of pairing computation we present twists of degree 4 and 6 on twisted Edwards curves.
Last updated:  2013-05-06
Generic Construction of Trace and Revoke Schemes
Murat Ak, Aggelos Kiayias, Serdar Pehlivanoglu, Ali Aydin Selcuk
Broadcast encryption (BE) is a cryptographic primitive that allows a broadcaster to encrypt digital content to a privileged set of users and in this way prevent revoked users from accessing the content. In BE schemes, a group of users, called traitors, may leak their keys and enable an adversary to receive the content. Such malicious users can be detected through traitor tracing (TT) schemes. The ultimate goal in a content distribution system would be combining traitor tracing and broadcast encryption (resulting in a trace and revoke system) so that any receiver key found to be compromised in a tracing process would be revoked from future transmissions. In this paper, we propose a generic method to transform a broadcast encryption scheme into a trace and revoke scheme. This transformation involves the utilization of a fingerprinting code over the underlying BE transmission. While fingerprinting codes have been used for constructing traitor tracing schemes in the past, their usage has various shortcomings, such as an increase of the public key size by a factor linear in the length of the code. Instead, we propose a novel way to apply fingerprinting codes that allows for efficient parameters while retaining the traceability property. Our approach is based on a new property of fingerprinting codes we introduce, called public samplability. We have instantiated our generic transformation with the BE schemes of [4, 13, 20], which enables us to produce trace and revoke schemes with novel properties. Specifically, we show (i) a trace and revoke scheme with constant private key size and short ciphertext size, (ii) the first ID-based trace and revoke scheme, (iii) the first publicly traceable scheme with constant private key size and (iv) the first trace and revoke scheme against pirate rebroadcasting attacks in the public key setting.
Last updated:  2012-09-08
Dynamic Searchable Symmetric Encryption
Seny Kamara, Charalampos Papamanthou, Tom Roeder
Searchable symmetric encryption (SSE) allows a client to encrypt its data in such a way that this data can still be searched. The most immediate application of SSE is to cloud storage, where it enables a client to securely outsource its data to an untrusted cloud provider without sacrificing the ability to search over it. SSE has been the focus of active research and a multitude of schemes that achieve various levels of security and efficiency have been proposed. Any practical SSE scheme, however, should (at a minimum) satisfy the following properties: sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently. Unfortunately, none of the previously-known SSE constructions achieve all these properties at the same time. This severely limits the practical value of SSE and decreases its chance of deployment in real-world cloud storage systems. To address this, we propose the first SSE scheme to satisfy all the properties outlined above. Our construction extends the inverted index approach (Curtmola et al., CCS 2006) in several non-trivial ways and introduces new techniques for the design of SSE. In addition, we implement our scheme and conduct a performance evaluation, showing that our approach is highly efficient and ready for deployment.
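For background, here is a toy Python sketch of the static inverted-index idea that the scheme extends: PRF-derived labels hide which keyword a posting belongs to, so a server holding the encrypted index learns nothing until given a keyword-specific token. This is an illustrative simplification, not the authors' construction.

    import hmac, hashlib

    def prf(key, msg):
        return hmac.new(key, msg, hashlib.sha256).digest()

    def build_index(master_key, inverted_index):
        """inverted_index: dict keyword (bytes) -> list of file ids (bytes)."""
        enc_index = {}
        for word, file_ids in inverted_index.items():
            k_label = prf(master_key, b'label:' + word)
            k_mask = prf(master_key, b'mask:' + word)
            for i, fid in enumerate(file_ids):
                ctr = str(i).encode()
                label = prf(k_label, ctr)              # looks random to the server
                mask = prf(k_mask, ctr)[:len(fid)]
                enc_index[label] = bytes(a ^ b for a, b in zip(fid, mask))
        return enc_index

    def search(master_key, word, enc_index):
        # In a real scheme the client would send (k_label, k_mask) as the
        # search token and the server would do this walk.
        k_label = prf(master_key, b'label:' + word)
        k_mask = prf(master_key, b'mask:' + word)
        results, i = [], 0
        while True:
            ctr = str(i).encode()
            label = prf(k_label, ctr)
            if label not in enc_index:
                return results
            ct = enc_index[label]
            mask = prf(k_mask, ctr)[:len(ct)]
            results.append(bytes(a ^ b for a, b in zip(ct, mask)))
            i += 1

    key = b'0' * 32
    idx = build_index(key, {b'crypto': [b'file1', b'file7'], b'cloud': [b'file7']})
    print(search(key, b'crypto', idx))   # [b'file1', b'file7']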
Last updated:  2014-06-12
PRINCE - A Low-latency Block Cipher for Pervasive Computing Applications (Full version)
Julia Borghoff, Anne Canteaut, Tim Güneysu, Elif Bilge Kavun, Miroslav Knežević, Lars R. Knudsen, Gregor Leander, Ventzislav Nikov, Christof Paar, Christian Rechberger, Peter Rombouts, Søren S. Thomsen, Tolga Yalçın
This paper presents a block cipher that is optimized with respect to latency when implemented in hardware. Such ciphers are desirable for many future pervasive applications with real-time security needs. Our cipher, named PRINCE, allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. The fully unrolled fashion in which such algorithms need to be implemented calls for innovative design choices. The number of rounds must be moderate and rounds must have short delays in hardware. At the same time, the traditional requirement that a cipher be iterative with very similar round functions disappears, an observation that increases the design space for the algorithm. A further important requirement is that realizing decryption and encryption results in minimal additional cost. PRINCE is designed in such a way that the overhead for decryption on top of encryption is negligible. More precisely, for our cipher it holds that decryption for one key corresponds to encryption with a related key. This property, which we refer to as alpha-reflection, is of independent interest and we prove its soundness against generic attacks.
Last updated:  2012-09-16
An ID-Based Signcryption Scheme with Compartmented Secret Sharing for Unsigncryption
Graham Enos, Yuliang Zheng
In this paper the ID-based signcryption scheme of Li, Xin, and Hu is extended to a compartmented scheme. If an organization is partitioned into different compartments, this scheme allows any member of a specific compartment to participate in the unsigncryption; moreover, each member of a compartment has information unique to that individual. This construction is the first (to the authors’ knowledge) to combine identity-based encryption, Shamir’s threshold scheme, and signcryption into an implementable compartmented sharing scheme.
Last updated:  2012-09-07
Cryptanalysis of a recent two factor authentication scheme
Michael Scott
Very recently, Wang and Ma proposed a robust smart-card based password authentication scheme which claims to be secure against a smart-card security breach. In this short note we attempt an initial cryptanalysis of this scheme.
Last updated:  2013-07-08
Invertible Polynomial Representation for Private Set Operations
Jung Hee Cheon, Hyunsook Hong, Hyung Tae Lee
In many private set operations, a set is represented by a polynomial over a ring $\Z_{\sigma}$ for a composite integer $\sigma$, where $\Z_\sigma$ is the message space of some additive homomorphic encryption. While it is useful for implementing set operations with polynomial additions and multiplications, a polynomial representation has a limitation due to the hardness of polynomial factorization over $\Z_\sigma$. That is, it is hard to recover a corresponding set from a resulting polynomial over $\Z_\sigma$ if $\sigma$ is not a prime. In this paper, we propose a new representation of a set by a polynomial over $\Z_\sigma$, in which $\sigma$ is a composite integer with {\em known factorization} but a corresponding set can be efficiently recovered from a polynomial, except with negligible probability in the security parameter. Note that $\Z_\sigma[x]$ is not a unique factorization domain, so a polynomial may be written as a product of linear factors in several ways. To exclude irrelevant linear factors, we introduce a special encoding function which supports an early abort strategy. As a result, our representation can be efficiently inverted by computing all the linear factors of a polynomial in $\Z_{\sigma}[x]$ whose roots lie in the image of the encoding function. When we consider group decryption as in most private set operation protocols, inverting polynomial representations should be done without a single party possessing the secret of the utilized additive homomorphic encryption. This is very hard for Paillier's encryption, whose message space is $\Z_N$ with unknown factorization of $N$. Instead, we sidestep this problem by using Naccache-Stern encryption with message space $\Z_\sigma$, where $\sigma$ is a smooth integer with public factorization. As an application of our representation, we obtain a constant round privacy-preserving set union protocol. Our construction improves on the complexity of previous protocols that do not assume an honest majority. It can also be used for a constant round multi-set union protocol and a private set intersection protocol even when decryptors do not possess a superset of the resulting set.
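For context, here is a small Python sketch of the baseline polynomial representation of sets over a prime field, where inversion by root-finding is easy; the paper's contribution is precisely to make inversion work over a composite modulus $\sigma$, which this sketch does not attempt.

    P = 10007  # a prime modulus; over Z_P, root-finding by trial is easy here

    def poly_from_set(s):
        # Represent the set s as the monic polynomial prod_{a in s} (x - a) mod P,
        # coefficients stored highest-degree first.
        coeffs = [1]
        for a in s:
            coeffs = [(c_hi - a * c_lo) % P
                      for c_hi, c_lo in zip(coeffs + [0], [0] + coeffs)]
        return coeffs

    def poly_mul(f, g):
        out = [0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                out[i + j] = (out[i + j] + fi * gj) % P
        return out

    def eval_poly(f, x):
        acc = 0
        for c in f:                 # Horner's rule
            acc = (acc * x + c) % P
        return acc

    def recover_set(f, domain):
        # Invert the representation: the roots of f lying in the domain.
        return {x for x in domain if eval_poly(f, x) == 0}

    A, B = {3, 17, 100}, {17, 42}
    union_poly = poly_mul(poly_from_set(A), poly_from_set(B))  # multiset union
    print(recover_set(union_poly, range(P)))  # {3, 17, 42, 100}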
Last updated:  2012-09-07
Computing endomorphism rings of abelian varieties of dimension two
Gaetan Bisson
Generalizing a method of Sutherland and the author for elliptic curves, we design a subexponential algorithm for computing the endomorphism ring structure of ordinary abelian varieties of dimension two over finite fields. Although its correctness and complexity bound rely on several assumptions, we report on practical computations showing that it performs very well and can easily handle previously intractable cases.
Last updated:  2012-09-07
Tahoe – The Least-Authority Filesystem
Zooko Wilcox-O'Hearn, Brian Warner
Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.
Last updated:  2012-09-10
The Curious Case of Non-Interactive Commitments
Mohammad Mahmoody, Rafael Pass
It is well-known that one-way permutations (and even one-to-one one-way functions) imply the existence of non-interactive commitments. Furthermore the construction is black-box (i.e., the underlying one-way function is used as an oracle to implement the commitment scheme, and an adversary attacking the commitment scheme is used as an oracle in the proof of security). We rule out the possibility of black-box constructions of non-interactive commitments from general (possibly not one-to-one) one-way functions. As far as we know, this is the first result showing a natural cryptographic task that can be achieved in a black-box way from one-way permutations but not from one-way functions. We next extend our black-box separation to constructions of non-interactive commitments from a stronger notion of one-way functions, which we refer to as \emph{hitting} one-way functions. Perhaps surprisingly, Barak, Ong, and Vadhan (SIAM JoC '07) showed that there does exist a non-black-box construction of non-interactive commitments from hitting one-way functions. As far as we know, this is the first result to establish a ``separation'' between the power of black-box and non-black-box use of a primitive to implement a natural cryptographic task. We finally show that unless the complexity class NP has program checkers, the above separations extend also to non-interactive instance-based commitments, and 3-message public-coin honest-verifier zero-knowledge protocols with O(log n)-bit verifier messages. The well-known classical zero-knowledge proof for NP falls into this category.
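To recall the classical positive result mentioned above, here is a toy Python sketch of the standard non-interactive commitment from a one-way permutation via the Goldreich-Levin hardcore bit; modular exponentiation with an assumed generator stands in for the permutation and is purely illustrative, not a proven one-way permutation.

    import secrets

    P = 2**127 - 1          # a Mersenne prime
    G = 3                   # assumed generator of Z_P^* (illustrative)
    NBITS = 126

    def f(x):
        # Stand-in candidate one-way permutation on exponents:
        # x -> G^x mod P is injective (hence binding) and conjecturally
        # hard to invert; a toy placeholder only.
        return pow(G, x, P)

    def dot(x, r):
        # Goldreich-Levin hardcore bit: inner product of bit strings mod 2.
        return bin(x & r).count('1') % 2

    def commit(b):
        x = secrets.randbelow(P - 1)
        r = secrets.randbits(NBITS)
        com = (f(x), r, dot(x, r) ^ b)   # hiding rests on the GL hardcore bit
        return com, x                    # (commitment, opening)

    def verify(com, x, b):
        y, r, masked = com
        return f(x) == y and masked == dot(x, r) ^ b

    com, opening = commit(1)
    print(verify(com, opening, 1), verify(com, opening, 0))  # True False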
Last updated:  2012-09-06
False Positive probabilities in q-ary Tardos codes: comparison of attacks
A. Simone, B. Skoric
We investigate False Positive (FP) accusation probabilities for q-ary Tardos codes in the Restricted Digit Model. We employ a computation method recently introduced by us, to which we refer as Convolution and Series Expansion (CSE). We present a comparison of several collusion attacks on q-ary codes: majority voting, minority voting, Interleaving, $\tilde\mu$-minimizing and Random Symbol (the q-ary equivalent of the Coin Flip strategy). The comparison is made by looking at the FP rate at approximately fixed False Negative rate. In nearly all cases we find that the strongest attack is either minority voting or $\tilde\mu$-minimizing, depending on the exact setting of parameters such as alphabet size, code length, and coalition size. Furthermore, we present results on the convergence speed of the CSE method, and we show how FP rate computations for the Random Symbol strategy can be sped up by a pre-computation step.
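For readers unfamiliar with the object under analysis, here is a short Python sketch of a binary (rather than q-ary) Tardos code: biased column probabilities drawn from the arcsine density, and the symmetric accusation score evaluated against a majority-voting coalition. All parameters are illustrative, not those of the paper.

    import math, random

    def gen_code(n_users, code_len, t=0.01):
        # Column biases p_i from the arcsine density, cut off to [t, 1-t].
        ps = []
        for _ in range(code_len):
            u = random.uniform(math.asin(math.sqrt(t)),
                               math.asin(math.sqrt(1 - t)))
            ps.append(math.sin(u) ** 2)
        code = [[1 if random.random() < p else 0 for p in ps]
                for _ in range(n_users)]
        return code, ps

    def accuse(code, ps, pirate_copy):
        # Symmetric Tardos score: high scores point at likely colluders.
        scores = []
        for row in code:
            s = 0.0
            for x, y, p in zip(row, pirate_copy, ps):
                g = math.sqrt((1 - p) / p) if x == 1 else -math.sqrt(p / (1 - p))
                s += g if y == 1 else -g
            scores.append(s)
        return scores

    random.seed(1)
    code, ps = gen_code(n_users=20, code_len=2000)
    coalition = [0, 1, 2]
    pirate = [1 if sum(code[j][i] for j in coalition) >= 2 else 0  # majority voting
              for i in range(len(ps))]
    scores = accuse(code, ps, pirate)
    print(sorted(range(20), key=lambda j: -scores[j])[:3])  # very likely [0, 1, 2]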
Last updated:  2012-09-06
Functional Encryption with Bounded Collusions via Multi-Party Computation
Sergey Gorbunov, Vinod Vaikuntanathan, Hoeteck Wee
We construct a functional encryption scheme secure against an a priori bounded polynomial number of collusions for the class of all polynomial-size circuits. Our constructions require only semantically secure public-key encryption schemes and pseudo-random generators computable by small-depth circuits (known to be implied by most concrete intractability assumptions). For certain special cases such as predicate encryption schemes with public index, the construction requires only semantically secure encryption schemes, which is clearly the minimal necessary assumption. Our constructions rely heavily on techniques from secure multiparty computation and randomized encodings. All our constructions are secure under a strong, adaptive simulation-based definition of functional encryption.
Last updated:  2012-09-06
Optimizing Segment Based Document Protection (Corrected Version)
Miroslaw Kutylowski, Maciej Gebala
In this paper we provide a corrected and generalized version of the scheme presented at SOFSEM'2012 in our paper ``Optimizing Segment Based Document Protection'' (SOFSEM 2012: Theory and Practice of Computer Science, LNCS 7147, pp. 566-575). We develop techniques for protecting documents with restricted access rights. In these documents so called \emph{segments} are encrypted. Different segments may be encrypted with different keys so that different users may be given different \emph{access rights}. The hierarchy of access rights is represented by means of a directed acyclic \emph{access graph}. The segments are encrypted with keys, where each key corresponds to one node in the access graph. The main feature of the access graph is that if there is an arc $\overrightarrow{AB}$ in the graph, then all segments labelled with $B$ can be decrypted with the key corresponding to node $A$. We show how to minimize the space overhead necessary for the auxiliary keying information stored in the document. We provide an algorithm based on node-disjoint paths in the access graph and key derivation based on one-way functions. Our current solution, based on maximal weighted matchings, provides an optimal solution for creating subdocuments in the case when the frequency of creating each subdocument is known.
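A minimal Python sketch of the one-way key derivation used along paths of the access graph: a node's key is derived by hashing its parent's key, so an ancestor key yields all descendant keys but not vice versa. The node-disjoint-path and matching-based optimizations are not modelled, and the labels are illustrative.

    import hashlib

    def derive_key(parent_key, child_label):
        # One-way derivation: knowing key(A) and the arc A -> B yields key(B),
        # but key(A) cannot be recovered from key(B).
        return hashlib.sha256(parent_key + b'|' + child_label).digest()

    # A path A -> B -> C in the access graph: A's key opens everything below.
    key_a = b'\x00' * 32                     # root key (illustrative)
    key_b = derive_key(key_a, b'B')
    key_c = derive_key(key_b, b'C')

    # A user granted key_b can decrypt segments labelled B and C, but not A.
    assert derive_key(derive_key(key_a, b'B'), b'C') == key_c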
Last updated:  2013-12-28
Faster implementation of scalar multiplication on Koblitz curves
Diego F. Aranha, Armando Faz-Hernández, Julio López, Francisco Rodríguez-Henríquez
We design a state-of-the-art software implementation of field and elliptic curve arithmetic in standard Koblitz curves at the 128-bit security level. Field arithmetic is carefully crafted by using the best formulae and implementation strategies available, and the increasingly common native support for binary field arithmetic in modern desktop computing platforms. The i-th power of the Frobenius automorphism on Koblitz curves is exploited to obtain new and faster interleaved versions of the well-known $\tau$NAF scalar multiplication algorithm. The $\tau^{\lfloor m/3 \rfloor}$ and $\tau^{\lfloor m/4 \rfloor}$ maps are employed to create analogues of the 3- and 4-dimensional GLV decompositions and, in general, the $\lfloor m/s \rfloor$-th power of the Frobenius automorphism is applied as an analogue of an $s$-dimensional GLV decomposition. The effectiveness of these techniques is illustrated by timing the scalar multiplication operation for fixed, random and multiple points. To our knowledge, our library was the first to compute a random point scalar multiplication in less than $10^5$ clock cycles among all curves with or without endomorphisms defined over binary or prime fields. The results of our optimized implementation suggest a trade-off between speed, compliance with the published standards and side-channel protection. Finally, we estimate the performance of curve-based cryptographic protocols instantiated using the proposed techniques and compare our results to related work.
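To illustrate the kind of scalar recoding being interleaved, here is Solinas's basic $\tau$NAF algorithm in Python (recoding only; the field arithmetic, Frobenius maps and GLV-style decompositions that the paper optimizes are not shown). The final check evaluates the digits at the complex root $\tau = (\mu + \sqrt{-7})/2$ of $\tau^2 - \mu\tau + 2$.

    import cmath

    def tnaf(k, mu=1):
        """Solinas tau-adic NAF of the integer k for a Koblitz curve
        with tau^2 = mu*tau - 2, mu in {-1, +1}. Digits are in {-1, 0, 1},
        least significant first."""
        r0, r1, digits = k, 0, []
        while r0 != 0 or r1 != 0:
            if r0 % 2 != 0:
                u = 2 - ((r0 - 2 * r1) % 4)   # u in {-1, +1}
                r0 -= u
            else:
                u = 0
            digits.append(u)
            r0, r1 = r1 + mu * r0 // 2, -(r0 // 2)   # divide by tau
        return digits

    k, mu = 409, 1
    digits = tnaf(k, mu)
    tau = (mu + cmath.sqrt(-7)) / 2
    print(digits)
    print(sum(d * tau**i for i, d in enumerate(digits)))  # ~ (409+0j)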
Last updated:  2012-12-17
Sequential Aggregate Signatures with Short Public Keys: Design, Analysis and Implementation Studies
Kwangsu Lee, Dong Hoon Lee, Moti Yung
The notion of aggregate signature has been motivated by applications, and it enables any user to compress different signatures signed by different signers on different messages into a short signature. Sequential aggregate signature, in turn, is a special kind of aggregate signature that only allows a signer to add his signature into an aggregate signature in sequential order. This latter scheme has applications in diversified settings, such as reducing the bandwidth of certificate chains and secure routing protocols. Lu, Ostrovsky, Sahai, Shacham, and Waters presented the first sequential aggregate signature scheme in the standard (non-idealized-ROM) model. The size of their public key, however, is quite large (i.e., the number of group elements is proportional to the security parameter), and therefore they suggested as an open problem the construction of such a scheme with short keys. Schröder recently proposed a sequential aggregate signature (SAS) with short public keys using the Camenisch-Lysyanskaya signature scheme, but the security is only proven under an interactive assumption (which is considered a relaxed notion of security). In this paper, we propose the first sequential aggregate signature scheme with short public keys (i.e., a constant number of group elements) in prime order (asymmetric) bilinear groups which is secure under static assumptions in the standard model. Further, our scheme employs a constant number of pairing operations per message for signing and for verification. Technically, we start with a public key signature scheme based on the recent dual system encryption technique of Lewko and Waters. This technique cannot directly give an aggregate signature scheme since, as we observed, additional elements should be published in the public key to support aggregation. Thus, our construction is a careful augmentation of the dual system technique to allow it to support a sequential aggregate signature scheme via randomized verification. We further implemented our scheme and conducted a performance study and implementation optimization.
Last updated:  2013-08-02
Unconditionally Secure Asynchronous Multiparty Computation with Linear Communication Complexity
Ashish Choudhury, Martin Hirt, Arpita Patra
Unconditionally secure multiparty computation (MPC) allows a set of n mutually distrusting parties to securely compute an agreed function f over some finite field in the presence of a computationally unbounded adversary, who can actively corrupt any t out of the n parties. Designing an asynchronous MPC (AMPC) protocol with a communication complexity of O(n) field elements per multiplication gate is a long-standing open problem. We solve this open problem by presenting two AMPC protocols with corruption threshold t < n/4. Our first protocol is statistically secure (i.e., it involves a negligible error in the protocol outcome) in a completely asynchronous setting and improves the communication complexity of the previous best AMPC protocol in the same setting by a factor of \Theta(n). Our second protocol is perfectly secure (i.e., error free) in a hybrid setting where one round of communication is assumed to be synchronous, and it improves the communication complexity of the previous best AMPC protocol in the hybrid setting by a factor of \Theta(n^2). Starting with the seminal work of Beaver (CRYPTO 1991), it is by now a well-known technique to evaluate multiplication gates in an MPC protocol using shared random multiplication triples. The central contribution common to both presented protocols is a new and simple framework for generating shared random multiplication triples. All existing protocols approach the problem by first producing shared pairs of random values and then computing the shared product of each pair by invoking known multiplication protocols. Our framework takes a completely different approach and avoids these typically communication-intensive multiplication protocols: we ask the parties to verifiably share random multiplication triples and then securely extract shared random multiplication triples unknown to the adversary. The framework is of independent interest and can be adapted to any honest-majority setting.
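For background on the triple-based technique the abstract builds on, the following minimal Python sketch shows Beaver's trick with plain additive sharing over a prime field; the triple is handed out by a simulated dealer, whereas the paper's actual contribution, generating such triples asynchronously with linear communication, is not depicted.

    # Beaver's circuit-randomization trick (CRYPTO 1991): multiply additively
    # shared x and y using a preprocessed random triple (a, b, c) with c = a*b.
    # Sketch over GF(p); triple generation is simulated by a trusted dealer.
    import random

    p = 2**61 - 1  # illustrative prime modulus
    n = 4          # number of parties

    def share(v):
        """Additively share v among n parties."""
        parts = [random.randrange(p) for _ in range(n - 1)]
        parts.append((v - sum(parts)) % p)
        return parts

    def reconstruct(parts):
        return sum(parts) % p

    x, y = 1234, 5678
    a, b = random.randrange(p), random.randrange(p)
    xs, ys = share(x), share(y)
    as_, bs, cs = share(a), share(b), share(a * b % p)

    # Parties open d = x - a and e = y - b (this leaks nothing: a, b are random).
    d = reconstruct([(xi - ai) % p for xi, ai in zip(xs, as_)])
    e = reconstruct([(yi - bi) % p for yi, bi in zip(ys, bs)])

    # Locally compute shares of z = x*y = c + d*b + e*a + d*e.
    zs = [(ci + d * bi + e * ai) % p for ci, ai, bi in zip(cs, as_, bs)]
    zs[0] = (zs[0] + d * e) % p  # one party adds the public constant d*e

    assert reconstruct(zs) == (x * y) % p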
Last updated:  2015-02-24
Garbling XOR Gates ``For Free'' in the Standard Model
Benny Applebaum
Yao's Garbled Circuit (GC) technique is a powerful cryptographic tool which allows one to ``encrypt'' a circuit $C$ by another circuit $\hat{C}$ in a way that hides all information except for the final output. Yao's original construction incurs a constant overhead in both computation and communication per gate of the circuit $C$ (proportional to the complexity of symmetric encryption). Kolesnikov and Schneider (ICALP 2008) introduced an optimized variant that garbles XOR gates ``for free'', involving no cryptographic operations and no communication. This variant has become very popular and has been employed in several practical implementations, leading to notable performance improvements. The security of the free-XOR optimization was originally proven in the random oracle model. In the same paper, Kolesnikov and Schneider also addressed the question of replacing the random oracle with a standard cryptographic assumption, and suggested using a hash function that achieves some form of security under correlated inputs. This claim was revisited by Choi et al. (TCC 2012), who showed that a stronger form of security is required, and proved that the free-XOR optimization can be realized based on a new primitive called a \emph{circular 2-correlation hash function}. Unfortunately, it is currently unknown how to implement this primitive based on standard assumptions, and so the feasibility of realizing the free-XOR optimization in the standard model remained an open question. We resolve this question by showing that the free-XOR approach can be realized in the standard model under the \emph{learning parity with noise} (LPN) assumption. Our result is obtained in two steps: (1) we show that the hash function can be replaced with a symmetric encryption scheme that remains secure under a combined form of related-key and key-dependent attacks; and (2) we show that such a symmetric encryption scheme can be constructed based on the LPN assumption.
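For intuition, here is a minimal Python sketch of the label arithmetic behind the free-XOR trick: with a single global offset $\Delta$, an XOR gate needs no garbled table at all. The paper's LPN-based instantiation of the underlying encryption is, of course, not shown.

    # Free-XOR label arithmetic: each wire w carries labels
    # (W_w^0, W_w^1 = W_w^0 XOR Delta) for one global secret offset Delta.
    # For an XOR gate the garbler sets W_out^0 = W_a^0 XOR W_b^0; the
    # evaluator simply XORs the two labels it holds -- no ciphertexts needed.
    import os

    KEYLEN = 16  # 128-bit labels

    def bxor(x: bytes, y: bytes) -> bytes:
        return bytes(u ^ v for u, v in zip(x, y))

    delta = os.urandom(KEYLEN)            # global offset, known only to the garbler

    def new_wire():
        zero = os.urandom(KEYLEN)
        return zero, bxor(zero, delta)    # labels encoding bit 0 and bit 1

    a0, a1 = new_wire()
    b0, b1 = new_wire()
    out0 = bxor(a0, b0)
    out1 = bxor(out0, delta)

    # Evaluating on any inputs (x, y) yields the label for x XOR y:
    for x, ax in ((0, a0), (1, a1)):
        for y, by in ((0, b0), (1, b1)):
            assert bxor(ax, by) == (out0 if x ^ y == 0 else out1)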
Last updated:  2012-09-05
Semantically-Secure Functional Encryption: Possibility Results, Impossibility Results and the Quest for a General Definition
Mihir Bellare, Adam O'Neill
This paper explains that SS1-secure functional encryption (FE), as defined by Boneh, Sahai and Waters, implicitly incorporates security under key-revealing selective opening attacks (SOA-K). This connection helps intuitively explain their impossibility results and also allows us to prove stronger ones. To move closer to the (laudable) goal of a general and achievable notion of FE security, we seek and provide two ``sans SOA-K'' definitions of FE security, which we call SS2 and SS3. We prove various possibility results about these definitions. We view our work as a first step towards the challenging goal of a general, meaningful and achievable notion of FE security.
Last updated:  2013-04-09
RKA Security beyond the Linear Barrier: IBE, Encryption and Signatures
Mihir Bellare, Kenneth G. Paterson, Susan Thomson
We provide a framework enabling the construction of IBE schemes that are secure under related-key attacks (RKAs). Specific instantiations of the framework yield RKA-secure IBE schemes for sets of related-key derivation functions that are non-linear, thus overcoming a current barrier in RKA security. In particular, we obtain IBE schemes that are RKA secure for sets consisting of all affine functions and all polynomial functions of bounded degree. Based on this, we obtain the first constructions of RKA-secure schemes for the same sets for the following primitives: CCA-secure public-key encryption, CCA-secure symmetric encryption, and signatures. All our results are in the standard model and hold under reasonable hardness assumptions.
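To make the function sets concrete, the following illustrative Python sketch writes down the linear, affine and bounded-degree polynomial related-key derivation (RKD) classes over $Z_p$ that the result ranges over; the modulus and key below are placeholders, not parameters from the paper.

    # Illustrative related-key derivation (RKD) function classes over Z_p.
    # Earlier RKA results were confined to the linear class phi(k) = a*k;
    # the framework here reaches affine and bounded-degree polynomial classes.
    p = 2**255 - 19  # placeholder prime modulus

    def linear(a):
        return lambda k: (a * k) % p

    def affine(a, b):
        return lambda k: (a * k + b) % p

    def poly(coeffs):
        """coeffs[i] multiplies k^i; degree is len(coeffs) - 1."""
        return lambda k: sum(c * pow(k, i, p) for i, c in enumerate(coeffs)) % p

    k = 42                    # placeholder secret key
    phi = poly([3, 0, 7])     # phi(k) = 3 + 7*k^2, a degree-2 RKD function
    tampered = phi(k)         # the RKA adversary gets access under phi(k), not k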