
All papers in 2012

Last updated:  2012-11-13
New Preimage Attack on MDC-4
Deukjo Hong, Daesung Kwon
In this paper, we provide cryptanalytic results for the double-block-length (DBL) hash mode MDC-4. Our preimage attacks follow the framework of Knudsen et al.'s time/memory trade-off preimage attack on MDC-2, which we show how to apply to MDC-4. When the block length of the underlying block cipher is $n$ bits, the most efficient preimage attack on MDC-4 requires time and space about $2^{3n/2}$, compared to the previous best known preimage attack with time complexity $2^{7n/4}$. Additionally, we propose MDC-4$^*$, an enhanced version of MDC-4 based on a simple idea. It is secure against our preimage attack and previous attacks, and has the same efficiency as MDC-4.
Last updated:  2012-11-11
Pairings on Generalized Huff Curves
Abdoul Aziz Ciss, Djiby Sow
This paper presents the Tate pairing computation on generalized Huff curves proposed by Wu and Feng in \cite{Wu}. In fact, we extend the results of the Tate pairing computation on the standard Huff elliptic curves done previously by Joye, Tibouchi and Vergnaud in \cite{Joux}. We show that the addition step of the Miller loop can be performed in $1\mathbf{M}+(k+15)\mathbf{m}+2\mathbf{c}$ and the doubling one in $1\mathbf{M} + 1\mathbf{S} + (k + 12) \mathbf{m} + 5\mathbf{s} + 2\mathbf{c}$ on the generalized Huff curve.
Last updated:  2013-03-11
Message-Locked Encryption and Secure Deduplication
Mihir Bellare, Sriram Keelveedhi, Thomas Ristenpart
We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side, the challenge is standard-model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs, and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
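A well-known member of this family is convergent encryption, where the key is simply a hash of the message itself. The sketch below (a toy Python illustration with a hash-based keystream of our own choosing, not any deployed scheme) shows why deduplication works: two independent encryptions of the same file produce identical ciphertexts and tags.

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a toy keystream (illustration only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def mle_encrypt(message: bytes):
    key = hashlib.sha256(b"key" + message).digest()   # K = H(M): the message locks itself
    ct = bytes(a ^ b for a, b in zip(message, _stream(key, len(message))))
    tag = hashlib.sha256(b"tag" + ct).digest()        # tag lets the server test equality
    return ct, tag, key

def mle_decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, _stream(key, len(ct))))

# Two users independently encrypting the same file produce identical outputs:
ct1, tag1, k1 = mle_encrypt(b"shared file contents")
ct2, tag2, k2 = mle_encrypt(b"shared file contents")
assert (ct1, tag1) == (ct2, tag2)                     # server deduplicates on the tag
assert mle_decrypt(k1, ct1) == b"shared file contents"
```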
Last updated:  2013-08-22
On the Security of TLS Renegotiation
Florian Giesen, Florian Kohlar, Douglas Stebila
The Transport Layer Security (TLS) protocol is the most widely used security protocol on the Internet. It supports negotiation of a wide variety of cryptographic primitives through different cipher suites, various modes of client authentication, and additional features such as renegotiation. Despite its widespread use, only recently has the full TLS protocol been proven secure, and only the core cryptographic protocol with no additional features. These additional features have been the cause of several practical attacks on TLS. In 2009, Ray and Dispensa demonstrated how TLS renegotiation allows an attacker to splice together its own session with that of a victim, resulting in a man-in-the-middle attack on TLS-reliant applications such as HTTP. TLS was subsequently patched with two defence mechanisms for protection against this attack. We present the first formal treatment of renegotiation in secure channel establishment protocols. We add optional renegotiation to the authenticated and confidential channel establishment model of Jager et al., an adaptation of the Bellare--Rogaway authenticated key exchange model. We describe the attack of Ray and Dispensa on TLS within our model. We show generically that the proposed fixes for TLS offer good protection against renegotiation attacks, and give a simple new countermeasure that provides renegotiation security for TLS even in the face of stronger adversaries.
Last updated:  2017-01-02
SCAPI: The Secure Computation Application Programming Interface
Yael Ejgenberg, Moriya Farbstein, Meital Levy, Yehuda Lindell
Secure two-party and multiparty computation has long stood at the center of the foundations of theoretical cryptography. Recently, however, interest has grown regarding the efficiency of such protocols and their application in practice. As a result, there has been significant progress on this problem and it is possible to actually carry out secure computation for non-trivial tasks on reasonably large inputs. Part of this research goal of making secure computation practical has also involved \emph{implementations}. Such implementations are of importance for two reasons: first, they demonstrate the real efficiency of known and new protocols; second, they deepen our understanding regarding where the bottlenecks in efficiency lie. However, it is very hard to compare between implementations by different research groups since they are carried out on different platforms and using different infrastructures. In addition, most implementations have been carried out without the goal of code reuse, and so are not helpful to other researchers. The difficulty of beginning implementation projects is further compounded by the fact that existing cryptographic libraries (like OpenSSL, Bouncy Castle, and others) are tailored for tasks like encryption, authentication and key-exchange, and not for secure computation. We have developed SCAPI in order to address these problems. SCAPI is an \emph{open-source} general library tailored for secure computation implementations. Our aim in developing SCAPI has been to provide a flexible and efficient infrastructure for secure computation implementations that is both easy to use and robust. Great care has been taken in the design of the library, in writing clean code, and in documentation. We hope that this library will be useful to the community interested in implementations of secure protocols, and will help to promote the goal of making secure computation practical.
Last updated:  2012-11-08
Efficient Group Key Management Schemes for Multicast Dynamic Communication Systems
Muhammad Yasir Malik
Key management in multicast dynamic groups, where users can leave or join at will, is one of the most crucial and essential parts of secure communication. Various efficient management strategies have been proposed during the last decade, aiming to decrease encryption costs and transmission overheads. In this report, two different types of key management schemes are proposed. The first proposed scheme is based on the one-way function tree (OFT); it closes the security gaps that have been pointed out in recent years. The second proposed scheme is based on the logical key hierarchy (LKH) and improves the performance of this rather inflexible and expensive scheme.
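For context, the generic LKH rekeying idea (not the authors' specific variants) can be sketched in a few lines: keys live in a binary tree, each member holds the keys on its leaf-to-root path, and a departure triggers only O(log n) rekey messages. A minimal sketch, with a toy hash-pad standing in for real encryption:

```python
import hashlib, os

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy encryption (hash-derived pad); a real system would use AES.
    pad = hashlib.sha256(key).digest()[: len(plaintext)]
    return bytes(a ^ b for a, b in zip(pad, plaintext))

class LKHTree:
    """Complete binary key tree: member i holds the keys on its leaf-to-root path."""

    def __init__(self, n_leaves):        # n_leaves must be a power of two
        self.n = n_leaves
        self.keys = {i: os.urandom(16) for i in range(1, 2 * n_leaves)}

    def path(self, leaf_index):
        node = self.n + leaf_index       # heap layout: leaves occupy n .. 2n-1
        while node >= 1:
            yield node
            node //= 2

    def rekey_on_leave(self, leaf_index):
        """Refresh every key the leaving member knew, sending each new key
        encrypted under its surviving children's keys: O(log n) messages."""
        del self.keys[self.n + leaf_index]
        messages = []
        for node in self.path(leaf_index):
            if node not in self.keys:    # the retired leaf itself
                continue
            self.keys[node] = os.urandom(16)
            for child in (2 * node, 2 * node + 1):
                if child in self.keys:
                    messages.append((child, encrypt(self.keys[child], self.keys[node])))
        return messages

tree = LKHTree(8)
print(len(tree.rekey_on_leave(3)), "rekey messages for 8 members")  # 5 = 2*log2(8) - 1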
Last updated:  2012-11-26
Efficient Group Signatures in the Standard Model
Laila El Aimani, Olivier Sanders
In a group signature scheme, group members are able to sign on behalf of the group. Since the introduction of this cryptographic authentication mechanism, several schemes have been proposed, but only a few of them enjoy security in the standard model. Moreover, those proven secure in the standard model either rely on non-standard assumptions or suffer from the expensive cost and bandwidth of the resulting signature. We provide three practical group signature schemes that are provably secure in the standard model under standard assumptions. The three schemes permit dynamic enrollment of new members while keeping a constant size for both keys and group signatures, and they improve the state of the art by several orders of magnitude.
Last updated:  2012-11-08
Bit-Parallel $GF(2^{n})$ Squarer Using Shifted Polynomial Basis
Xi Xiong, Haining Fan
We present explicit formulae and complexities of bit-parallel shifted polynomial basis (SPB) squarers in finite fields $GF(2^{n})$ generated by general irreducible trinomials $x^{n}+x^{k}+1$ ($0< k <n$) and type-II irreducible pentanomials $x^{n}+x^{k+1}+x^{k}+x^{k-1}+1$ ($3<k<(n-3)/2$). The complexities of the proposed squarers match or slightly outperform the previous best results. These formulae can also be used to design polynomial basis Montgomery squarers without any change. Furthermore, we show by examples that the XOR gate counts of SPB squarers differ when different shift factors in the SPB definition, i.e., the parameter $v$ in ${\{}x^{i-v}|0\leq i\leq n-1 {\}}$, are used. This corrects a previous misinterpretation.
Last updated:  2012-11-05
Order-Preserving Encryption Revisited: Improved Security Analysis and Alternative Solutions
Alexandra Boldyreva, Nathan Chenette, Adam O’Neill
We further the study of order-preserving symmetric encryption (OPE), a primitive for allowing efficient range queries on encrypted data, recently initiated (from a cryptographic perspective) by Boldyreva et al.~(Eurocrypt '09). First, we address the open problem of characterizing what encryption via a random order-preserving function (ROPF) leaks about underlying data (ROPF being the ``ideal object'' in the security definition, POPF, satisfied by their scheme). In particular, we show that, for a database of randomly distributed plaintexts and appropriate choice of parameters, ROPF encryption leaks neither the precise value of any plaintext nor the precise distance between any two of them. The analysis here introduces useful new techniques. On the other hand, we show that ROPF encryption leaks both the approximate value of any plaintext and the approximate distance between any two plaintexts, each to an accuracy of about the square root of the domain size. We then study schemes that are not order-preserving, but which nevertheless allow efficient range queries and achieve security notions stronger than POPF. In a setting where the entire database is known in advance of key-generation (considered in several prior works), we show that recent constructions of ``monotone minimal perfect hash functions'' allow one to efficiently achieve (an adaptation of) the notion of IND-O(rdered) CPA also considered by Boldyreva et al., which asks that \emph{only} the order relations among the plaintexts are leaked. Finally, we introduce {\em modular} order-preserving encryption (MOPE), in which the scheme of Boldyreva et al. is prepended with a random shift cipher. MOPE improves the security of OPE in a sense, as it does not leak any information about plaintext location. We clarify that our work should not be interpreted as saying the original scheme of Boldyreva et al., or the variants that we introduce, are ``secure'' or ``insecure.'' Rather, the goal of this line of research is to help practitioners decide whether the options provide a suitable security-functionality tradeoff for a given application.
Last updated:  2012-11-05
Order-Preserving Symmetric Encryption
Alexandra Boldyreva, Nathan Chenette, Younho Lee, Adam O’Neill
We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by Agrawal et al.~(SIGMOD '04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard security notions for encryption such as indistinguishability against chosen-plaintext attack (IND-CPA) is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions (PRFs) and related primitives asking that an OPE scheme look ``as-random-as-possible'' subject to the order-preserving constraint. We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying blockcipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution. In particular, it makes black-box use of an efficient sampling algorithm for the latter.
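The hypergeometric connection can be illustrated directly: a random order-preserving function is sampled by recursively deciding, with a hypergeometric draw, how many domain points fall into the lower half of the range. A sketch of the sampling idea using numpy's hypergeometric sampler (the actual scheme derives its coins from a block cipher so that repeated encryptions stay consistent):

```python
import numpy as np

rng = np.random.default_rng(2012)

def sample_ropf(d_lo, d_hi, r_lo, r_hi, table):
    """Sample a random order-preserving function {d_lo..d_hi} -> {r_lo..r_hi}
    by recursive hypergeometric splitting over the range."""
    d_size = d_hi - d_lo + 1
    if d_size <= 0:
        return
    if d_size == 1:
        table[d_lo] = int(rng.integers(r_lo, r_hi + 1))
        return
    r_mid = (r_lo + r_hi) // 2
    lower = r_mid - r_lo + 1
    # x = how many of the d_size domain points map into the lower half of the range
    x = int(rng.hypergeometric(d_size, (r_hi - r_lo + 1) - d_size, lower))
    sample_ropf(d_lo, d_lo + x - 1, r_lo, r_mid, table)
    sample_ropf(d_lo + x, d_hi, r_mid + 1, r_hi, table)

table = {}
sample_ropf(0, 15, 0, 255, table)
values = [table[m] for m in range(16)]
assert values == sorted(values)      # order-preserving by construction
```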
Last updated:  2012-11-20
Impossible plaintext cryptanalysis and probable-plaintext collision attacks of 64-bit block cipher modes
David McGrew
The block cipher modes of operation that are widely used (CBC, CTR, CFB, OFB) are secure up to the birthday bound; that is, if $w2^{w/2}$ or fewer bits of data are encrypted with a $w$-bit block cipher. However, the detailed security properties close to this bound are not widely appreciated, despite the fact that $64$-bit block ciphers are sometimes used in that domain. This work addresses the issue by analyzing plaintext-recovery attacks that are effective close to that bound. We describe probable-plaintext attacks, which can learn unknown plaintext values that are encrypted with CBC, CFB, or OFB. We also introduce \textit{impossible plaintext} cryptanalysis, which can recover information encrypted with CTR, and can improve attacks against the aforementioned modes as well. These attacks work at the birthday bound, or even slightly below that bound, when the target plaintext values are encrypted under a succession of keys.
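To make the bound concrete for $w = 64$: the birthday bound is reached after about $2^{32}$ blocks, i.e. only 32 GiB of data under a single key, by which point the standard birthday approximation already gives a collision probability near 40%. A quick check:

```python
import math

w = 64                               # block size in bits
blocks = 2 ** (w // 2)               # birthday bound: about 2^32 blocks
print(blocks * (w // 8) // 2**30, "GiB")             # 32 GiB under one key

q = 2 ** 32                          # blocks encrypted
p_collision = 1 - math.exp(-q * (q - 1) / (2 * 2**w))
print(f"collision probability ~ {p_collision:.2f}")  # ~ 0.39
```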
Last updated:  2013-03-12
Resolving the conflict between generality and plausibility in verified computation
Srinath Setty, Benjamin Braun, Victor Vu, Andrew J. Blumberg, Bryan Parno, Michael Walfish
The area of proof-based verified computation (outsourced computation built atop probabilistically checkable proofs and cryptographic machinery) has lately seen renewed interest. Although recent work has made great strides in reducing the overhead of naive applications of the theory, these schemes still cannot be considered practical. A core issue is that the work for the prover is immense, in general; it is near-practical only for hand-compiled computations that can be expressed in special forms. This paper addresses that problem. Provided one is willing to batch verification, we develop a protocol that achieves the efficiency of the best manually constructed protocols in the literature yet applies to most computations. Our protocol is built on the observation that the recently-proposed QAPs of Gennaro et al. (ePrint 2012/215) yield a linear PCP that works with the efficient argument protocol of Setty et al. (ePrint 2012/598, Security 2012, NDSS 2012), itself based on the proposal of Ishai et al. (CCC 2007). The consequence is a prover whose total work is not much more than \emph{linear} in the running time of the computation. We implement the protocol in the context of a built system that includes a compiler and a GPU implementation. The result, as indicated by an experimental evaluation, is a system that is {\em almost} usable for real problems---without special-purpose tailoring.
Last updated:  2012-11-08
Biclique Cryptanalysis of Lightweight Block Ciphers PRESENT, Piccolo and LED
Kitae Jeong, HyungChul Kang, Changhoon Lee, Jaechul Sung, Seokhie Hong
In this paper, we evaluate the security of lightweight block ciphers PRESENT, Piccolo and LED against biclique cryptanalysis. To recover the secret key of PRESENT-80/128, our attacks require $2^{79.76}$ full PRESENT-80 encryptions and $2^{127.91}$ full PRESENT-128 encryptions, respectively. Our attacks on Piccolo-80/128 require computational complexities of $2^{79.13}$ and $2^{127.35}$, respectively. The attack on a $29$-round reduced LED-64 needs $2^{63.58}$ 29-round reduced LED-64 encryptions. In the cases of LED-80/96/128, we propose the attacks on two versions. First, to recover the secret key of $45$-round reduced LED-80/96/128, our attacks require computational complexities of $2^{79.45}, 2^{95.45}$ and $2^{127.45}$, respectively. To attack the full version, we require computational complexities of $2^{79.37}, 2^{95.37}$ and $2^{127.37}$, respectively. However, in these cases, we need the full codebook. These results are superior to known biclique cryptanalytic results on them.
Last updated:  2012-11-05
Solving Subset Sum Problems of Density close to 1 by "randomized" BKZ-reduction
Claus P. Schnorr, Taras Shevchenko
Subset sum or knapsack problems of dimension $n$ are known to be hardest for knapsacks of density close to 1. These problems are NP-hard for arbitrary $n$. One can solve such problems either by lattice basis reduction or by optimized birthday algorithms. Recently Becker, Coron, Joux [BCJ10] presented a birthday algorithm that follows Schroeppel, Shamir [SS81], and Howgrave-Graham, Joux [HJ10]. This algorithm solves 50 random knapsacks of dimension 80 and density close to 1 in roughly 15 hours on a 2.67 GHz PC. We present an optimized lattice basis reduction algorithm that follows Schnorr, Euchner [SE03], using the pruning of Schnorr, Hörner [SH95], that solves such random knapsacks of dimension 80 on average in less than a minute, and 50 such problems altogether about 9.4 times faster than [BCJ10], using much less space, on another 2.67 GHz PC.
Last updated:  2012-11-07
Asynchronous Computational VSS with Reduced Communication Complexity
Michael Backes, Amit Datta, Aniket Kate
Verifiable secret sharing (VSS) is a vital primitive in secure distributed computing. It allows an untrusted dealer to verifiably share a secret among n parties in the presence of an adversary controlling at most t of them. VSS in the synchronous communication model has received tremendous attention in the cryptographic research community. Nevertheless, recent interest in deploying secure distributed computing over the Internet requires going beyond the synchronous communication model and thoroughly investigating VSS in the asynchronous communication model. In this work, we consider the communication complexity of asynchronous VSS in the computational setting for the optimal resilience of n = 3t + 1. The best known asynchronous VSS protocol by Cachin et al. has O(n^2) message complexity and O(kn^3) communication complexity, where k is a security parameter corresponding to the size of the secret. We close the linear complexity gap between these two measures for asynchronous VSS by presenting two protocols with O(n^2) message complexity and O(kn^2) communication complexity. Our first protocol satisfies the standard VSS definition, and can be used in stand-alone VSS scenarios as well as in applications such as Byzantine agreement. Our second and more intricate protocol satisfies a stronger VSS definition, and is useful in all VSS applications including multiparty computation and threshold cryptography.
Last updated:  2014-10-22
An ultra-lightweight ID-based pairwise key establishment scheme aiming at full collusion resistance
Oscar Garcia-Morchon, Ronald Rietman, Ludo Tolhuizen, Domingo Gomez-Perez, Jaime Gutierrez, Santos Merino del Pozo
This paper introduces a new key establishment scheme aiming at fully collusion-resistant identity-based symmetric-key agreement. In an identity-based pairwise key agreement scheme, a Trusted Third Party (TTP) manages the system and securely provides any node, e.g., Alice or Bob, with private keying materials. Alice can generate a pairwise key with Bob given her own secret keying material and Bob's identity. The full collusion resistance property would ensure that the scheme remains secure even if arbitrarily many devices collude or are compromised. Our scheme, the HIMMO algorithm, relies on two design concepts: Hiding Information and Mixing Modular Operations. Hiding information is related to the Noisy Interpolation Problem; the Mixing Modular Operations problem seems to be a new hard problem. We describe our scheme, the security of its underlying design principles and give order of magnitude estimations for secure configuration parameters. For these parameters, we show that our prototypic implementation of HIMMO on the 8-bit CPU ATmega128L can generate 128-bit keys in less than 7 ms based on an algorithm fitting in 428 B and with secret keying materials of size 656 B.
Last updated:  2012-11-02
Security Analysis of an Open Car Immobilizer Protocol Stack
Stefan Tillich, Marcin Wójcik
An increasing number of embedded security applications---which traditionally have been heavily reliant on secret and/or proprietary solutions---apply the principle of open evaluation. A recent example is the specification of an open security protocol stack for car immobilizer applications by Atmel, which has been presented at ESCAR 2010. This stack is primarily intended to be used in conjunction with automotive transponder chips of this manufacturer, but could in principle be deployed on any suitable type of transponder chip. In this paper we re-evaluate the security of this protocol stack. We were able to uncover a number of security vulnerabilities. We show that an attacker with a cheap standard reader close to such a car key can track it, lock sections of its EEPROM, and even render its immobilizer functionality completely useless. After eavesdropping on a genuine run of the authentication protocol between the car key and the car, an attacker is enabled to read and write the memory of the car key. Furthermore, we point out the threats of relay attacks and session hijacking, which require slightly more elaborate attack setups. For each of the indicated attacks we propose possible fixes and discuss their overhead.
Last updated:  2015-02-13
Balancing Output Length and Query Bound in Hardness Preserving Constructions of Pseudorandom Functions
Nishanth Chandran, Sanjam Garg
We revisit hardness-preserving constructions of a pseudo-random function (PRF) from any length doubling pseudo-random generator (PRG) when there is a non-trivial upper bound $q$ on the number of queries that the adversary can make to the PRF. Very recently, Jain, Pietrzak, and Tentes (TCC 2012) gave a hardness-preserving construction of a PRF that makes only $O(\log q)$ calls to the underlying PRG when $q = 2^{n^\epsilon}$ and $\epsilon \geq \frac{1}{2}$. This dramatically improves upon the efficiency of the construction of Goldreich, Goldwasser, and Micali (FOCS 1984). However, they explicitly left open the question of whether such constructions exist when $\epsilon < \frac{1}{2}$. In this work, we give constructions of PRFs that make only $O(\log q)$ calls to the underlying PRG when $q = 2^{n^\epsilon}$, for $0<\epsilon<1$; our PRF outputs $O(n^{2\epsilon})$ bits (on every input), as opposed to the construction of Jain {\em et al.} that outputs $n$ bits. That is, our PRF is not length preserving; however it outputs more bits than the PRF of Jain {\em et al.} when $\epsilon>\frac{1}{2}$. We obtain our construction through the use of information-theoretic tools such as {\em almost} $\alpha$-wise independent hash functions coupled with a novel proof strategy.
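For reference, the GGM construction that these works improve upon makes one PRG call per input bit. A minimal sketch, with SHA-256 standing in for the length-doubling PRG (our choice, for illustration only):

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Length-doubling PRG, instantiated here with SHA-256 for illustration.
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM PRF: walk a binary tree from the root key, one PRG call per input bit."""
    s = key
    for bit in x:
        s = prg(s)[int(bit)]
    return s

print(ggm_prf(b"\x00" * 32, "0110").hex())
```

The constructions discussed in the abstract reduce the number of PRG calls from |x| to O(log q) when the adversary is limited to q queries.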
Last updated:  2013-01-28
Polynomial time solutions of computational problems in noncommutative-algebraic cryptography
Boaz Tsaban
We introduce the \emph{linear centralizer method}, and use it to devise a provable polynomial time solution of the Commutator Key Exchange Problem, the computational problem on which, in the passive adversary model, the security of the Anshel--Anshel--Goldfeld 1999 \emph{Commutator} key exchange protocol is based. We also apply this method to the computational problem underlying the \emph{Centralizer} key exchange protocol, introduced by Shpilrain and Ushakov in 2006. This is the first provable polynomial time cryptanalysis of the Commutator key exchange protocol, hitherto the most important key exchange protocol in the realm of noncommutative-algebraic cryptography, and the first cryptanalysis (of any kind) of the Centralizer key exchange protocol. Unlike earlier cryptanalyses of the Commutator key exchange protocol, our cryptanalyses cannot be foiled by changing the distributions used in the protocol.
Last updated:  2012-11-01
An arithmetic intersection formula for denominators of Igusa class polynomials
Kristin Lauter, Bianca Viray
In this paper we prove an explicit formula for an arithmetic intersection number on the Siegel moduli space of abelian surfaces, generalizing the work of Bruinier-Yang and Yang. These intersection numbers allow one to compute the denominators of Igusa class polynomials, which has important applications to the construction of genus 2 curves for use in cryptography. Bruinier and Yang conjectured a formula for intersection numbers on an arithmetic Hilbert modular surface, and as a consequence obtained a conjectural formula for the intersection number relevant to denominators of Igusa class polynomials under strong assumptions on the ramification of the primitive quartic CM field K. Yang later proved this conjecture assuming that the ring of integers is freely generated by one element over the ring of integers of the real quadratic subfield. In this paper, we prove a formula for the intersection number for more general primitive quartic CM fields, and we use a different method of proof than Yang. We prove a tight bound on this intersection number which holds for all primitive quartic CM fields. As a consequence, we obtain a formula for a multiple of the denominators of the Igusa class polynomials for an arbitrary primitive quartic CM field. Our proof entails studying the Embedding Problem posed by Goren and Lauter and counting solutions using our previous article that generalized work of Gross-Zagier and Dorman to arbitrary discriminants.
Last updated:  2013-03-14
Resource-Restricted Indifferentiability
Grégory Demay, Peter Gaži, Martin Hirt, Ueli Maurer
A major general paradigm in cryptography is the following argument: Whatever an adversary could do in the real world, it could just as well do in the ideal world. The standard interpretation of ``just as well'' is that the translation from the real to the ideal world, usually called a simulator, is achieved by a probabilistic polynomial-time algorithm. This means that a polynomial blow-up of the adversary's time and memory requirements is considered acceptable. In certain contexts this interpretation of ``just as well'' is inadequate, for example if the concrete amount of memory used by the adversary is relevant. The example of Ristenpart et al. (Eurocrypt 2011), for which the original indifferentiability notion introduced by Maurer et al. (Eurocrypt 2004) is shown to be insufficient, turns out to be exactly of this type. It requires a fine-grained statement about the adversary's memory capacity, calling for a generalized treatment of indifferentiability where specific resource requirements can be taken into account by modeling them explicitly. We provide such treatment and employ the new indifferentiability notion to prove lower bounds on the memory required by any simulator in a domain extension construction of a public random function. In particular, for simulators without memory, even domain extension by a single bit turns out to be impossible. Moreover, for the construction of a random oracle from an ideal compression function, memory roughly linear in the length of the longest query is required. This also implies the impossibility of such domain extension in any multi-party setting with potential individual misbehavior by parties (i.e., no central adversary).
Last updated:  2013-10-15
Analysis of the Non-Perfect Table Fuzzy Rainbow Tradeoff
Byoung-Il Kim, Jin Hong
Time memory tradeoff algorithms are tools for inverting one-way functions, and they are often used to recover passwords from unsalted password hashes. There are many publicly known tradeoff algorithms, and the rainbow tradeoff is widely believed to be the best algorithm. This work provides an accurate complexity analysis of the non-perfect table version of the fuzzy rainbow tradeoff algorithm, which has not yet received much attention. It is shown that, when the pre-computation cost and the online efficiency are both taken into consideration, the non-perfect fuzzy rainbow tradeoff is preferable to the original rainbow tradeoff in many situations.
Last updated:  2012-11-01
A coding theory foundation for the analysis of general unconditionally secure proof-of-retrievability schemes for cloud storage
Maura B. Paterson, Douglas R. Stinson, Jalaj Upadhyay
There has been considerable recent interest in ``cloud storage'' wherein a user asks a server to store a large file. One issue is whether the user can verify that the server is actually storing the file, and typically a challenge-response protocol is employed to convince the user that the file is indeed being stored correctly. The security of these schemes is phrased in terms of an extractor which will recover or retrieve the file given any ``proving algorithm'' that has a sufficiently high success probability. This paper treats proof-of-retrievability schemes in the model of unconditional security, where an adversary has unlimited computational power. In this case retrievability of the file can be modelled as error-correction in a certain code. We provide a general analytical framework for such schemes that yields exact (non-asymptotic) reductions that precisely quantify conditions for extraction to succeed as a function of the success probability of a proving algorithm, and we apply this analysis to several archetypal schemes. In addition, we provide a new methodology for the analysis of keyed POR schemes in an unconditionally secure setting, and use it to prove the security of a modified version of a scheme due to Shacham and Waters under a slightly restricted attack model, thus providing the first example of a keyed POR scheme with unconditional security. We also show how classical statistical techniques can be used to evaluate whether the responses of the prover are accurate enough to permit successful extraction. Finally, we prove a new lower bound on storage and communication complexity of POR schemes.
Last updated:  2013-03-17
Candidate Multilinear Maps from Ideal Lattices
Sanjam Garg, Craig Gentry, Shai Halevi
We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups, and show an example application of such multilinear maps that can be realized using our approximation. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function.
Last updated:  2014-06-04
A New Approach to the Discrete Logarithm Problem with Auxiliary Inputs
Taechan Kim, Jung Hee Cheon
The discrete logarithm problem with auxiliary inputs is to solve for $\alpha$ given elements $g, g^\alpha, \ldots, g^{\alpha^d}$ of a cyclic group $G=\langle g \rangle$ of prime order~$p$. The best-known algorithm, proposed by Cheon in 2006, solves for $\alpha$ in the case of $d \mid (p\pm 1)$ with running time of $O\left( \sqrt{p/d} + d^i \right)$ group exponentiations~($i=1$ or $1/2$ depending on the sign). There have been several attempts to generalize this algorithm to the case of $\Phi_k(p)$ for $k \ge 3$, but it has been shown, by Kim, Cheon and Lee, that they cannot have better complexity than the usual square root algorithms. We propose a new algorithm to solve the DLPwAI. The complexity of the algorithm is determined by a chosen polynomial $f \in \mathbb{F}_p[x]$ of degree $d$. We show that the proposed algorithm has a running time of $\widetilde O\left( \sqrt{p / \tau_f} +d \right)$ group exponentiations, where $\tau_f$ is the number of absolutely irreducible factors of $f(x)-f(y)$. We note that this is always smaller than $\widetilde O(p^{1/2})$. To obtain a better complexity of the algorithm, we investigate an upper bound of $\tau_f$ and try to find polynomials that achieve it. We can find such polynomials in the case of $d\mid(p\pm 1)$. In this case, the algorithm has a running time of $\widetilde O\left(\sqrt{p/d} +d \right)$ group operations, which matches the lower bound in the generic group model. On the contrary, we show that no polynomial exists that achieves the upper bound in the case of $d \mid \Phi_3(p)=p^2+p+1$. Of independent interest, we present an analysis of a non-uniform birthday problem. Precisely, we show that a collision occurs with high probability after $O\big( 1/ \sqrt{\sum_{k} {w_k}^2} \big)$ samplings of balls, where the probability $w_k$ of assigning a ball to bin $k$ is arbitrary.
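The non-uniform birthday claim is easy to check empirically. The sketch below compares the heuristic prediction $\sqrt{\pi/2}\,/\sqrt{\sum_k w_k^2}$ (using the uniform-case constant as an approximation; the theorem only asserts the order of magnitude) against simulation for a uniform and a mildly skewed distribution:

```python
import math, random, statistics
from itertools import accumulate

def mean_collision_time(weights, trials=1000):
    bins = range(len(weights))
    cw = list(accumulate(weights))           # precomputed cumulative weights
    times = []
    for _ in range(trials):
        seen, n = set(), 0
        while True:
            n += 1
            k = random.choices(bins, cum_weights=cw)[0]
            if k in seen:
                break
            seen.add(k)
        times.append(n)
    return statistics.mean(times)

N = 1024
uniform = [1 / N] * N
skewed = [3 / (2 * N)] * (N // 2) + [1 / (2 * N)] * (N // 2)
for w in (uniform, skewed):
    predicted = math.sqrt(math.pi / 2) / math.sqrt(sum(x * x for x in w))
    print(f"predicted ~ {predicted:.1f}, simulated ~ {mean_collision_time(w):.1f}")
```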
Last updated:  2012-10-30
On the (Non-)Reusability of Fuzzy Sketches and Extractors and Security Improvements in the Computational Setting
Marina Blanton, Mehrdad Aliasgari
Secure sketches and fuzzy extractors enable the use of biometric data in cryptographic applications by correcting errors in noisy biometric readings and producing cryptographic materials suitable for authentication, encryption, and other purposes. Such constructions work by producing a public sketch, which is later used to reproduce the original biometric and all derived information exactly from a noisy biometric reading. It has been previously shown that release of multiple sketches associated with a single biometric presents security problems for certain constructions. We continue the analysis to demonstrate that all other constructions in the literature are also prone to similar problems and cannot be safely reused. To mitigate the problem, we propose for each user to store one short secret string for all possible uses of her biometric, and show that simple constructions in the computational setting have numerous advantageous security and usability properties under standard hardness assumptions. Our constructions are generic in that they can be used with any existing secure sketch as a black box.
Last updated:  2012-10-29
Graph-Theoretic Algorithms for the ``Isomorphism of Polynomials'' Problem
Charles Bouillaguet, Pierre-Alain Fouque, Amandine Véber
We give three new algorithms to solve the ``isomorphism of polynomials'' problem, which underlies the hardness of recovering the secret key in some multivariate trapdoor one-way functions. In this problem, the adversary is given two quadratic functions, with the promise that they are equal up to linear changes of coordinates. Her objective is to compute these changes of coordinates, a task which is known to be harder than Graph Isomorphism. Our new algorithms build on previous work in a novel way. Exploiting the birthday paradox, we break instances of the problem in time $q^{2n/3}$ (rigorously) and $q^{n/2}$ (heuristically), where $q^n$ is the time needed to invert the quadratic trapdoor function by exhaustive search. These results are obtained by turning the algebraic problem into a combinatorial one, namely that of recovering partial information on an isomorphism between two exponentially large graphs. These graphs, derived from the quadratic functions, are new tools in multivariate cryptanalysis.
Last updated:  2013-09-11
Quantum-Secure Message Authentication Codes
Dan Boneh, Mark Zhandry
We construct the first Message Authentication Codes (MACs) that are existentially unforgeable against a quantum chosen message attack. These chosen message attacks model a quantum adversary’s ability to obtain the MAC on a superposition of messages of its choice. We begin by showing that a quantum secure PRF is sufficient for constructing a quantum secure MAC, a fact that is considerably harder to prove than its classical analogue. Next, we show that a variant of Carter-Wegman MACs can be proven to be quantum secure. Unlike in the classical setting, we present an attack showing that a pair-wise independent hash family is insufficient to construct a quantum secure one-time MAC, but we prove that a four-wise independent family is sufficient for one-time security.
Last updated:  2012-10-26
Secure Outsourced Attribute-Based Signatures
Jin Li, Xiaofeng Chen, Jingwei Li, Chunfu Jia, Duncan S. Wong, Willy Susilo
Attribute-based signature (ABS) is a useful variant of digital signature, which enables users to sign messages over attributes without revealing any information other than the fact that they have attested to the messages. However, heavy computational cost is required during signing in existing work on ABS, which grows linearly with the size of the predicate formula. As a result, this presents a significant challenge for resource-limited users (such as mobile devices) to perform such heavy computation independently. Aiming at tackling the challenge above, we propose and formalize a new paradigm called OABS, in which the computational overhead at user side is greatly reduced through outsourcing such intensive computation to an untrusted signing-cloud service provider (S-CSP). Furthermore, we apply this novel paradigm to existing ABS to reduce complexity and present two schemes: i) in the first OABS scheme, the number of exponentiations involved in signing is reduced from $O(d)$ to $O(1)$ (nearly three), where $d$ is the upper bound of threshold value defined in the predicate; ii) our second scheme is built on Herranz et al.'s construction with constant-size signatures. The number of exponentiations in signing is reduced from $O(d^2)$ to $O(d)$ and the communication overhead is $O(1)$. Security analysis demonstrates that both OABS schemes are secure in terms of the unforgeability and attribute-signer privacy definitions specified in the proposed security model. Finally, to allow for high efficiency and flexibility, we discuss extensions of OABS and show how to achieve accountability and outsourced verification as well.
Last updated:  2014-12-13
Leakage-Resilient Cryptography from Minimal Assumptions
Carmit Hazay, Adriana Lopez-Alt, Hoeteck Wee, Daniel Wichs
We present new constructions of leakage-resilient cryptosystems, which remain provably secure even if the attacker learns some arbitrary partial information about their internal secret key. For any polynomial $\ell$, we can instantiate these schemes so as to tolerate up to $\ell$ bits of leakage. While there has been much prior work constructing such leakage-resilient cryptosystems under concrete number-theoretic and algebraic assumptions, we present the first schemes under general and minimal assumptions. In particular, we construct: - Leakage-resilient public-key encryption from any standard public-key encryption. - Leakage-resilient weak pseudorandom functions, symmetric-key encryption, and message-authentication codes from any one-way function. These are the first constructions of leakage-resilient symmetric-key primitives that do not rely on public-key assumptions. We also get the first constructions of leakage-resilient public-key encryption from ``search assumptions'', such as the hardness of factoring or CDH. Although our schemes can tolerate arbitrarily large amounts of leakage, the tolerated rate of leakage (defined as the ratio of leakage-amount to key-size) is rather poor in comparison to prior results under specific assumptions. As a building block of independent interest, we study a notion of weak hash-proof systems in the public-key and symmetric-key settings. While these inherit some of the interesting security properties of standard hash-proof systems, we can instantiate them under general assumptions.
Last updated:  2012-10-25
Collecting Data while Preserving Individuals' Privacy: A Case Study
Alexis Bonnecaze, Robert Rolland
Several companies exploit medical data to better understand medication consumption patterns. Their analyses are useful to various health actors in order to enhance health care management. In this article, we focus on a configuration which allows a network of pharmacies to forward medical data to a private company in order to construct a database. Pharmacies must operate in full compliance with legal requirements in terms of confidentiality and privacy. We show that our solution fulfills all the requirements. Our work leads us to introduce the concept of generalized discrete logarithm problem which is proven to be as hard as the discrete logarithm problem.
Last updated:  2012-10-25
A note on invariant linear transformations in multivariate public key cryptography
Andreas Wiemers
Imai and Matsumoto introduced a public key cryptosystem based on multivariate quadratic polynomials. In simplified form, the essence of their cryptosystem can be described as follows: start with a central monomial F. The secret key comprises two invertible linear transformations T and L such that TFL is the public key. In order to study equivalent public keys, it is natural to ask for the "invariant" secret keys (T,L), i.e. those with TFL=F. Lin, Faugère, Perret and Wang give a partial answer to this question by considering those L which fulfill FL=F. In this paper we determine all invariant invertible linear transformations (T,L).
Last updated:  2013-03-13
How to Garble RAM Programs
Steve Lu, Rafail Ostrovsky
Assuming solely the existence of one-way functions, we show how to construct Garbled RAM Programs (GRAM) whose size depends only on a fixed polynomial in the security parameter times the program's running time. We stress that we avoid converting the RAM programs into circuits. As an example, our techniques imply the first garbled binary search program (searching over sorted encrypted data stored in a cloud) that is poly-logarithmic in the data size instead of linear. Our result requires only the existence of one-way functions and enjoys the same non-interactive properties as Yao's original garbled circuits.
Last updated:  2012-10-25
The LED Block Cipher
Jian Guo, Thomas Peyrin, Axel Poschmann, Matt Robshaw
We present a new block cipher LED. While dedicated to compact hardware implementation, and offering the smallest silicon footprint among comparable block ciphers, the cipher has been designed to simultaneously tackle three additional goals. First, we explore the role of an ultra-light (in fact non-existent) key schedule. Second, we consider the resistance of ciphers, and LED in particular, to related-key attacks: we are able to derive simple yet interesting AES-like security proofs for LED regarding related- or single-key attacks. And third, while we provide a block cipher that is very compact in hardware, we aim to maintain a reasonable performance profile for software implementation.
Last updated:  2013-01-24
On the coefficients of the polynomial in the number field sieve
Min Yang, Qingshu Meng, Zhangyi Wang, Li Li, Huanguo Zhang
Polynomial selection is very important in the number field sieve. If the yield of a pair of polynomials is closely correlated with the coefficients of the polynomials, we can select polynomials by checking the coefficients first. This can speed up the selection of good polynomials. In this paper, we aim to study the correlation between the polynomial coefficients and the yield of the polynomials. By theoretical analysis and experiments, we find that a polynomial whose ending coefficient contains more small primes usually has better yield than one whose ending coefficient contains fewer. One advantage of the ending coefficient over the leading coefficient is that the ending coefficient is bigger and can contain more small primes in the root optimization stage. Using the complete discrimination system, we also analyze the condition on the coefficients to obtain more real roots.
Last updated:  2013-02-28
Taking proof-based verified computation a few steps closer to practicality (extended version)
Srinath Setty, Victor Vu, Nikhil Panpalia, Benjamin Braun, Muqeet Ali, Andrew J. Blumberg, Michael Walfish
We describe Ginger, a built system for unconditional, general-purpose, and nearly practical verification of outsourced computation. Ginger is based on Pepper, which uses the PCP theorem and cryptographic techniques to implement an \emph{efficient argument} system (a kind of interactive protocol). Ginger slashes the query size and costs via theoretical refinements that are of independent interest; broadens the computational model to include (primitive) floating-point fractions, inequality comparisons, logical operations, and conditional control flow; and includes a parallel GPU-based implementation that dramatically reduces latency.
Last updated:  2012-10-25
A Novel Permutation-based Hash Mode of Operation FP and the Hash Function SAMOSA
Souradyuti Paul, Ekawat Homsirikamol, Kris Gaj
The contribution of the paper is two-fold. First, we design a novel permutation-based hash mode of operation FP, and analyze its security. The FP mode is derived by replacing the hard-to-invert primitive of the FWP mode -- designed by Nandi and Paul, Indocrypt 2010 -- with an easy-to-invert permutation; since easy-to-invert permutations with good cryptographic properties are normally easier to design, and are more efficient than hard-to-invert functions, the FP mode is more suitable for practical applications than the FWP mode. We show that any n-bit hash function that uses the FP mode is indifferentiable from a random oracle up to $2^{n/2}$ queries (up to a constant factor), if the underlying 2n-bit permutation is free from any structural weaknesses. Based on our further analysis and experiments, we conjecture that the FP mode is resistant to all non-trivial generic attacks with work less than brute force, mainly due to its large internal state. We compare the FP mode with other permutation-based hash modes, and observe that it displays the best security/rate trade-off so far. To put this into perspective, our second contribution is a proposal for a concrete hash function SAMOSA using the new mode and the $P$-permutations of the SHA-3 finalist Groestl. Based on our analysis we claim that the SAMOSA family cannot be attacked with work significantly less than brute force. We also provide hardware implementation (FPGA) results for SAMOSA to compare it with the SHA-3 finalists. In our implementations, the SAMOSA family consistently beats Groestl, BLAKE and Skein in the throughput-to-area ratio. With a more efficient underlying permutation, it seems possible to design a hash function based on the FP mode that achieves even higher performance.
Last updated:  2013-02-06
Evaluating User Privacy in Bitcoin
Elli Androulaki, Ghassan Karame, Marc Roeschlin, Tobias Scherer, Srdjan Capkun
Bitcoin is quickly emerging as a popular digital payment system. However, in spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place are publicly announced in the system. In this paper, we investigate the privacy guarantees of Bitcoin in the setting where Bitcoin is used as a primary currency for the daily transactions of individuals. More specifically, we evaluate the privacy that is provided by Bitcoin (i) by analyzing the genuine Bitcoin system and (ii) through a simulator that mimics a Bitcoin client's behavior in the context where Bitcoin is used for all transactions within a university. In this setting, our results show that the profiles of almost 40% of the users can be, to a large extent, recovered even when users adopt privacy measures recommended by Bitcoin. To the best of our knowledge, this is the first work that comprehensively analyzes and evaluates the privacy implications of Bitcoin. As a by-product, we have designed and implemented the first simulator of Bitcoin; our simulator can be used to model the interaction between Bitcoin users in generic settings.
Last updated:  2012-12-18
Extending Brickell-Davenport Theorem to Non-Perfect Secret Sharing Schemes
Oriol Farràs, Carles Padró
One important result in secret sharing is the Brickell-Davenport Theorem: every ideal perfect secret sharing scheme defines a matroid that is uniquely determined by the access structure. Even though a few attempts have been made, there is no satisfactory definition of ideal secret sharing scheme for the general case, in which non-perfect schemes are considered as well. Without providing another unsatisfactory definition of ideal non-perfect secret sharing scheme, we present a generalization of the Brickell-Davenport Theorem to the general case. After analyzing that result under a new point of view and identifying its combinatorial nature, we present a characterization of the (not necessarily perfect) secret sharing schemes that are associated to matroids. Some optimality properties of such schemes are discussed.
Last updated:  2012-10-25
Improved Impossible Differential Attack on Reduced Version of Camellia-192/256
Ya Liu, Dawu Gu, Zhiqiang Liu, Wei Li
As an ISO/IEC international standard, Camellia has been used in various cryptographic applications. In this paper, we improve previous attacks on Camellia-192/256 with the key-dependent layers $FL/FL^{-1}$ by using the intrinsic weakness of keyed functions. Specifically, we present the first impossible differential attack on 13-round Camellia with $2^{121.6}$ chosen ciphertexts and $2^{189.9}$ 13-round encryptions, whereas the previous best results on Camellia-192 reached only 12 rounds. Furthermore, we successfully attack 14-round Camellia-256 with $2^{122.1}$ chosen ciphertexts and $2^{229.3}$ 14-round encryptions. Compared with the previously best known attack on 14-round Camellia-256, the time complexity of our attack is reduced by a factor of $2^{8.9}$, and the data complexity is comparable.
Last updated:  2012-10-25
Factor-4 and 6 (De)compression for Values of Pairings using Trace Maps
Tomoko Yonemura, Taichi Isogai, Hirofumi Muratani, Yoshikazu Hanatani
The security of pairing-based cryptosystems relies on the hardness of the discrete logarithm problems in elliptic curves and in finite fields related to the curves, namely, their embedding fields. Public keys and ciphertexts in the pairing-based cryptosystems are composed of points on the curves or values of pairings. Although the values of the pairings belong to the embedding fields, the representation of the field is inefficient in size because the size of the embedding fields is usually larger than the size of the elliptic curves. We show factor-4 and 6 compression and decompression for the values of the pairings with the supersingular elliptic curves of embedding degrees 4 and 6, respectively. For compression, we use the fact that the values of the pairings belong to algebraic tori that are multiplicative subgroups of the embedding fields. The algebraic tori can be expressed by the affine representation or the trace representation. Although the affine representation allows decompression maps, decompression maps for the trace representation have not been known. In this paper, we propose a trace representation with decompression maps for characteristics 2 and 3. We first construct efficient decompression maps for trace maps by adding extra information to the trace representation. Our decompressible trace representation with additional information is as efficient as the affine representation in terms of the costs of compression, decompression and exponentiation, and in size.
Last updated:  2012-12-14
Attribute-Based Encryption for Circuits from Multilinear Maps
Amit Sahai, Brent Waters
In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.
Last updated:  2013-05-27
Biclique Cryptanalysis Of PRESENT, LED, And KLEIN
Farzaneh Abed, Christian Forler, Eik List, Stefan Lucks, Jakob Wenzel
In this paper, we analyze the resistance of the lightweight ciphers PRESENT, LED, and KLEIN to biclique attacks. Primarily, we describe attacks on the full-round versions PRESENT-80, PRESENT-128, LED-64, LED-128, KLEIN-80, and KLEIN-96. Our attacks have time complexities of $2^{79.49}$, $2^{127.32}$, $2^{63.58}$, $2^{127.42}$, $2^{79.00}$, and $2^{95.18}$ encryptions, respectively. In addition, we consider attacks on round-reduced versions of PRESENT and LED, to show the security margin for which an adversary can obtain an advantage of at least a factor of two compared to exhaustive search.
Last updated:  2012-11-06
--withdrawn--
Last updated:  2012-11-06
--withdrawn--
Last updated:  2012-10-25
Breaking Public Keys - How to Determine an Unknown RSA Public Modulus
Hans-Joachim Knobloch
Not surprisingly, the common use of any public key crypto system involves publishing the public key and keeping the private key secret. There are however a few applications where both the private and public key are kept secret, thereby effectively converting a public key crypto algorithm to a symmetric algorithm. We show that if the RSA cryptosystem is used in such a symmetric application, it is possible to determine the public RSA modulus if the public exponent is known and short, such as 3 or F4=65537, and two or more plaintext/ciphertext (or, if RSA is used for signing, signed value/signature) pairs are known.
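The attack itself is little more than a gcd computation: since $c = m^e \bmod N$ implies that $N$ divides $m^e - c$, the gcd of a few such differences reveals $N$. A toy reconstruction with small illustrative parameters of our own choosing (not from the paper):

```python
from math import gcd

def recover_modulus(e, pairs):
    # c = m^e mod N means N divides m^e - c, so the gcd over pairs converges to N.
    g = 0
    for m, c in pairs:
        g = gcd(g, pow(m, e) - c)
    return g

# Toy demonstration with a small 'secret' modulus:
p, q, e = 1000003, 1000033, 3
N = p * q
pairs = [(m, pow(m, e, N)) for m in (123456, 987654, 5551212, 31337)]
g = recover_modulus(e, pairs)
assert g % N == 0   # g is N, possibly times a small cofactor removed by extra pairs
print(g == N)
```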
Last updated:  2012-10-16
Symbolic computation in block cipher with application to PRESENT
Changyong Peng, Chuangying zhu, Yuefei Zhu, Fei Kang
In this paper, we give an example of how symbolic computation is used to analyze the block cipher PRESENT, an ultra-lightweight block cipher proposed by Bogdanov et al. at CHES'07. The block size is 64 bits and the key size can be 80 or 128 bits. Using Mathematica 7.0, this paper obtains the unexpanded polynomial expressions of the output of rounds 1-6 of PRESENT-80 (the 80-bit key variant). The time complexity of getting these expressions is 4 minutes on a PC with a 2.6 GHz CPU and 8 GB RAM. Then we expand the expressions of the output of rounds 1-2 and the LSB (least significant bit) of the output of round 3, and obtain the ANFs (algebraic normal forms) of these 129 (= 2*64+1) expressions. The time complexity of getting these ANFs is 22 minutes. It is known that the time complexity of the classical method of computing the ANF of an n-ary Boolean function from its truth table is O(n*2^n), giving a total time complexity for obtaining these 129 ANFs of O(129*144*2^144) = O(2^158) (each of the 129 ANFs can be viewed as a 144-ary Boolean function, where 144 = 64+80, the sum of the block size and the key size). As an application, we give a side channel attack on PRESENT-80 under the single bit leakage model proposed by Dinur and Shamir. If the LSB of the output of the 3rd round can be obtained without error, then with 200 known plaintexts, we can set up an equation system in terms of the master key bits and recover 43 key bits by the Gröbner basis method. Compared with previous side channel attacks on PRESENT, such as Yang et al. in CANS 2009, Abdul-Latip et al. in ASIACCS 2011 and Zhao et al. in 2011, each of which needs at least 2^13 chosen plaintexts, the data complexity of our attack is the best.
Last updated:  2012-10-16
SHADE: Secure HAmming DistancE computation from oblivious transfer
Julien Bringer, Herve Chabanne, Alain Patey
We introduce two new schemes for securely computing Hamming distance in the two-party setting. Our first scheme is a very efficient protocol, based solely on 1-out-of-2 Oblivious Transfer, that achieves full security in the semi-honest setting and one-sided security in the malicious setting. Moreover we show that this protocol is significantly more efficient than the previous proposals, that are either based on garbled circuits or on homomorphic encryption. Our second scheme achieves full security against malicious adversaries and is based on Committed Oblivious Transfer. These protocols have direct applications to secure biometric identification.
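The OT-based core of such a protocol can be sketched functionally: for each bit position the sender masks its bit with a fresh random value and the receiver obliviously picks the share matching the XOR of the two bits; the masked shares sum to the Hamming distance. In the sketch below both roles run in one process and the oblivious transfer is a trusted placeholder, so this illustrates the arithmetic only, not the paper's exact protocol.

```python
import secrets

def ot_1of2(messages, choice_bit):
    # Trusted placeholder for 1-out-of-2 oblivious transfer: the receiver
    # learns messages[choice_bit] and nothing else; the sender learns nothing.
    # A real instantiation would replace this call with an OT protocol.
    return messages[choice_bit]

def hamming_distance_2pc(x_bits, y_bits):
    n = len(x_bits)
    mod = n + 1                         # distances live in {0, ..., n}
    mask_sum = 0
    t = 0
    for xi, yi in zip(x_bits, y_bits):
        r = secrets.randbelow(mod)      # sender's fresh additive mask
        mask_sum = (mask_sum + r) % mod
        # Sender offers (r + xi, r + (1 - xi)); picking with bit yi yields
        # r + (xi XOR yi), so neither party's bit is revealed to the other.
        t = (t + ot_1of2(((r + xi) % mod, (r + 1 - xi) % mod), yi)) % mod
    # Receiver holds t = sum(r_i) + d; unmasking with sum(r_i) yields d.
    return (t - mask_sum) % mod

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [1, 1, 1, 0, 0, 1, 1, 0]
assert hamming_distance_2pc(x, y) == sum(a ^ b for a, b in zip(x, y))
```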
Last updated:  2013-07-29
On Provably Secure Code-based Signature and Signcryption Scheme
Preetha Mathew K, Sachin Vasant, C. Pandu Rangan
Signcryption is a cryptographic protocol that provides authentication and confidentiality as a single primitive, at a cost lower than the combined cost of signing and encryption. Code-based cryptography, a likely candidate for post-quantum cryptography, provides an exciting alternative to number-theoretic cryptography. Courtois, Finiasz and Sendrier proposed the only practical code-based signature (the CFS signature) at Asiacrypt 2001. But that signature scheme currently lacks a formal proof of security due to the existence of the high rate distinguisher proposed by Faugère et al. In this paper, we make use of an alternate key construct for the CFS signature, and thus prove its existential unforgeability under chosen message attacks (EUF-CMA). Also, we propose a code-based signcryption scheme and prove its security. To the best of our knowledge, this is the first code-based, provably secure signature and signcryption scheme in the literature.
Last updated:  2012-10-26
Quantitative Analysis of the Full Bitcoin Transaction Graph
Uncategorized
Dorit Ron, Adi Shamir
Show abstract
Uncategorized
The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.
Last updated:  2014-08-28
New Constructions and Proof Methods for Large Universe Attribute-Based Encryption
Yannis Rouselakis, Brent Waters
We propose two large universe Attribute-Based Encryption constructions. In a large universe ABE construction any string can be used as an attribute and attributes need not be enumerated at system setup. Our first construction establishes a novel large universe Ciphertext-Policy ABE scheme on prime order bilinear groups, while the second achieves a significant efficiency improvement over the large universe Key-Policy ABE systems of Lewko-Waters and Lewko. Both schemes are selectively secure in the standard model under two "q-type" assumptions similar to ones used in prior works. Our work brings back "program and cancel" techniques to this problem. We provide implementations and benchmarks of our constructions in Charm, a programming environment for rapid prototyping of cryptographic primitives.
Last updated:  2012-10-16
Using Randomizers for Batch Verification of ECDSA Signatures
Sabyasachi Karati, Abhijit Das, Dipanwita Roychowdhury
Randomizers are popularly used to prevent various types of attacks on batch-verification schemes. Recently, several algorithms based upon symbolic computation have been proposed for the batch verification of ECDSA signatures. In this article, we demonstrate that the concept of randomizers can be easily embedded in these symbolic-computation algorithms. The performance degradation caused by randomizers is comparable to that associated with ECDSA*.
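To illustrate the generic randomizer principle (a toy example of my own, not the paper's ECDSA-specific algorithm), the following sketch checks one random linear combination of equations over a stand-in group of integers modulo a prime; a set of forged equations can slip through only with probability about 2^-t for t-bit randomizers:

```python
# Toy illustration of the randomizer trick for batch verification,
# using arithmetic modulo a prime in place of elliptic-curve scalars.
# Instead of checking every equation lhs_i == rhs_i individually, the
# verifier checks one random linear combination of all of them.
import random

P = 2**127 - 1  # a Mersenne prime, standing in for the group order

def batch_verify(pairs, t=64):
    """pairs: list of (lhs_i, rhs_i) in Z_P that should all be equal."""
    acc_l = acc_r = 0
    for lhs, rhs in pairs:
        z = random.getrandbits(t) | 1          # small nonzero randomizer
        acc_l = (acc_l + z * lhs) % P
        acc_r = (acc_r + z * rhs) % P
    return acc_l == acc_r

good = [(x, x) for x in (random.randrange(P) for _ in range(5))]
assert batch_verify(good)
bad = good + [(123, 456)]                      # one forged equation
assert not batch_verify(bad)                   # rejected (w.h.p. in general)
```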
Last updated:  2013-07-24
On the (in)security of some smart-card-based password authentication schemes for WSN
Uncategorized
Ding Wang, Chun-guang Ma
Show abstract
Uncategorized
Understanding security failures of cryptographic protocols is the key to both patching existing protocols and designing future schemes. The design of secure and efficient remote user authentication schemes for real-time data access in wireless sensor networks (WSN) is still an open and quite challenging problem, though many schemes have been proposed lately. In this study, we analyze two recent proposals in this research domain. Firstly, Das et al.'s scheme is scrutinized, demonstrating its vulnerability to the smart card security breach attack and the privileged insider attack, resistance to which is among the security objectives pursued in their protocol specification. Then, we investigate a temporal-credential-based password authentication scheme introduced by Xue et al. in 2012. This protocol involves only hash and XOR operations and thus is suitable for resource-constrained WSN environments where an external user wants to obtain real-time data from the sensor nodes inside the WSN. However, notwithstanding their security arguments, we point out that Xue et al.'s protocol is still vulnerable to the smart card security breach attack and the privileged insider attack, and fails to provide identity protection. The proposed cryptanalysis discourages any use of the two schemes under investigation in practice and reveals some subtleties and challenges in designing this type of scheme. Besides reporting the security flaws, we put forward a principle that is vital for designing more robust two-factor authentication schemes for WSN.
Last updated:  2012-11-29
Cryptanalysis of the OKH Authenticated Encryption Scheme
Peng Wang, Wenling Wu, Liting Zhang
Alomair proposed a new authenticated encryption scheme, OKH, at ACNS 2012, and proved its security, i.e. authenticity and privacy. Our research shows that this is not the case. We need only one query to break the authenticity of OKH with success probability $1$, and two queries to break the privacy of OKH with success probability $1-1/2^n$, where $n$ is the block length of the underlying blockcipher.
Last updated:  2012-10-16
Defending Against the Unknown Enemy: Applying FlipIt to System Security
Kevin D. Bowers, Marten van Dijk, Robert Griffin, Ari Juels, Alina Oprea, Ronald L. Rivest, Nikos Triandopoulos
Most cryptographic systems carry the basic assumption that entities are able to preserve the secrecy of their keys. With attacks today showing ever increasing sophistication, however, this tenet is eroding. “Advanced Persistent Threats” (APTs), for instance, leverage zero-day exploits and extensive system knowledge to achieve full compromise of cryptographic keys and other secrets. Such compromise is often silent, with defenders failing to detect the loss of private keys critical to the protection of their systems. The growing virulence of today’s threats clearly calls for new models of defenders’ goals and abilities. In this paper, we explore applications of FlipIt, a novel game-theoretic model of system defense introduced recently. In FlipIt, an attacker periodically gains complete control of a system, with the unique feature that system compromises are stealthy, i.e., not immediately detected by the system owner, called the defender. We distill out several lessons from our study of FlipIt and demonstrate their application to several real-world problems, including password reset policies, key rotation, VM refresh and cloud auditing.
Last updated:  2012-10-16
Security Evaluations Beyond Computing Power: How to Analyze Side-Channel Attacks you Cannot Mount?
Nicolas Veyrat-Charvillon, Benoît Gérard, François-Xavier Standaert
Present key sizes for symmetric cryptography are usually required to be at least 80 bits long for short-term protection, and 128 bits long for long-term protection. However, current tools for security evaluations against side-channel attacks do not provide a precise estimation of the remaining key strength after some leakage has been observed, e.g. in terms of the number of candidates to test. This leads to an uncomfortable situation, where the security of an implementation can be anywhere between enumerable values (i.e. $2^{40}$ -- $2^{50}$ key candidates to test) and the full key size (i.e. $2^{80}$ -- $2^{128}$ key candidates to test). In this paper, we mitigate this important issue, and describe a key rank estimation algorithm that provides tight bounds for the security level of leaking cryptographic devices. As a result and for the first time, we are able to analyze the full complexity of “standard” (i.e. divide-and-conquer) side-channel attacks, in terms of their tradeoff between time, data and memory complexity.
Last updated:  2017-03-24
A Framework for Unique Ring Signatures
Matthew Franklin, Haibin Zhang
We propose a simple, general, and unified framework for constructing unique ring signatures that simplify and capture the spirit of linkable ring signatures. The framework, which can be efficiently instantiated in the random oracle and the standard model, is obtained by generalizing the Bellare-Goldwasser ``PRF made public" paradigm. Security of the first instantiation can be more tightly related to the CDH problem and the DDH problem, compared to prior linkable ring signatures. The scheme leads to the most efficient linkable ring signature in the random oracle model, for a given level of provable security. The second one based on stronger assumptions partly simplifies and slightly improves the sublinear size traceable ring signature of Fujisaki (CT-RSA 2011).
Last updated:  2012-10-24
Concurrent Signature without Random Oracles
Uncategorized
Xiao Tan, Qiong Huang, Duncan S. Wong
Show abstract
Uncategorized
Concurrent signatures provide a way to exchange digital signatures among parties in an efficient and fair manner. To the best of our knowledge, all the existing solutions can only be proven secure in the random oracle model. How to build an efficient concurrent signature scheme in the standard model has remained an open problem since the notion's introduction in 2004. In this paper we answer the problem affirmatively. Based on a novel idea, we propose a new concurrent signature construction, the security of which does not rely on the random oracle assumption. Our idea stems from an attempt to achieve a strong ambiguity feature: anyone should be able to produce indistinguishable ambiguous signatures by using only public information available in the system. In the multi-user setting, we prove the security of the new scheme based on the Computational Diffie-Hellman (CDH) assumption, which is a rather standard and well-studied assumption in cryptography.
Last updated:  2013-11-30
Nanoelectronic Solutions for Hardware Security
Uncategorized
Jeyavijayan Rajendran, Ramesh Karri, James B. Wendt, Miodrag Potkonjak, Nathan McDonald, Garrett S. Rose, Bryant Wysocki
Show abstract
Uncategorized
Information security has emerged as an important system and application metric. Classical security solutions use algorithmic mechanisms that address a small subset of emerging security requirements, often at high energy and performance overhead. Further, emerging side channel and physical attacks can compromise classical security solutions. Hardware-based security solutions overcome many of the limitations of classical security while consuming less energy and improving performance. Nanoelectronics-based hardware security preserves all of these advantages while enabling conceptually new security mechanisms and security applications. This paper highlights nanoelectronics based security capabilities and challenges. The paper describes nanoelectronics-based hardware security primitives for device identification, digital forensics, and tamper detection. These primitives can be developed using the unique characteristics of emerging nanoelectronic devices such as complex device and system models, bidirectional operation, and temporal drift of state variables. We also identify important desiderata and outstanding challenges in nanoelectronics-based security.
Last updated:  2012-10-14
Quantum algorithm for the discrete logarithm problem for matrices over finite group rings
A. D. Myasnikov, A. Ushakov
We propose a polynomial time quantum algorithm for solving the discrete logarithm problem in matrices over finite group rings. The hardness of this problem was recently employed in the design of a key-exchange protocol proposed by D. Kahrobaei, C. Koupparis, and V. Shpilrain. Our result implies that the Kahrobaei et al. protocol does not belong to the realm of post-quantum cryptography.
Last updated:  2013-01-14
Limits on the Usefulness of Random Oracles
Iftach Haitner, Eran Omri, Hila Zarosim
In the random oracle model, parties are given oracle access to a random function (i.e., a uniformly chosen function from the set of all functions), and are assumed to have unbounded computational power (though they can only make a bounded number of oracle queries). This model provides powerful properties that allow proving the security of many protocols, even ones that cannot be proved secure in the standard model (under any hardness assumption). The random oracle model is also used for showing that a given cryptographic primitive cannot be used in a black-box way to construct another primitive; in their seminal work, Impagliazzo and Rudich [STOC ’89] showed that no key-agreement protocol exists in the random oracle model, yielding that key-agreement cannot be black-box reduced to one-way functions. Their work has a long line of followup works (Simon [EC ’98], Gertner et al. [STOC ’00] and Gennaro et al. [SICOMP ’05], to name a few), showing that oracle access to a certain type of function family (e.g., the family that “implements” public-key encryption) is not sufficient for building a given cryptographic primitive (e.g., oblivious transfer). Yet, the following question remained open: What is the exact power of the random oracle model? We make progress towards answering the above question, showing that, essentially, any no-private-input, semi-honest two-party functionality that can be securely implemented in the random oracle model can be securely implemented information theoretically (where parties are assumed to be all powerful, and no oracle is given). We further generalize the above result to function families that provide some natural combinatorial property. Our result immediately yields, essentially, that the only no-input functionalities that can be securely realized in the random oracle model (in the sense of secure function evaluation) are the trivial ones (ones that can be securely realized information theoretically). In addition, we use the recent information theoretic impossibility result of McGregor et al. [FOCS ’10] to show the existence of functionalities (e.g., inner product) that cannot be computed both accurately and in a differentially private manner in the random oracle model; yielding that protocols for computing these functionalities cannot be black-box reduced to the existence of one-way functions.
Last updated:  2012-10-14
On Constant-Round Concurrent Zero-Knowledge from a Knowledge Assumption
Divya Gupta, Amit Sahai
In this work, we consider the long-standing open question of constructing constant-round concurrent zero-knowledge protocols in the plain model. Resolving this question is known to require non-black-box techniques. We consider non-black-box techniques for zero-knowledge based on knowledge assumptions, a line of thinking initiated by the work of Hada and Tanaka (CRYPTO 1998). Prior to our work, it was not known whether knowledge assumptions could be used for achieving security in the concurrent setting, due to a number of significant limitations that we discuss here. Nevertheless, we obtain the following results: 1. We obtain the first constant-round concurrent zero-knowledge argument for \textbf{NP} in the plain model based on a new variant of the knowledge of exponent assumption. Furthermore, our construction avoids the inefficiency inherent in previous non-black-box techniques such as those of Barak (FOCS 2001); we obtain our result through an efficient protocol compiler. 2. Unlike Hada and Tanaka, we do not require a knowledge assumption to argue the soundness of our protocol. Instead, we use a discrete-log-like assumption, which we call the Diffie-Hellman Logarithm Assumption, to prove the soundness of our protocol. 3. We give evidence that our new variant of the knowledge of exponent assumption is in fact plausible. In particular, we show that our assumption holds in the generic group model. 4. Knowledge assumptions are especially delicate assumptions whose plausibility may be hard to gauge. We give a novel framework to express knowledge assumptions in a more flexible way, which may allow for the formulation of plausible assumptions and exploration of their impact and application in cryptography.
Last updated:  2012-10-14
Improved side channel attack on the block cipher NOEKEON
Changyong Peng, Chuangying zhu, Yuefei Zhu, Fei Kang
NOEKEON is a block cipher with a 128-bit key and a 128-bit block, proposed by Daemen et al. Shekh Faisal Abdul-Latip et al. gave a side channel attack (under the single bit leakage model) on the cipher at ISPEC 2010. Their analysis shows that one can recover the 128-bit key of the cipher, by considering a one-bit information leakage from the internal state after the second round, with time complexity of O(2^68) evaluations of the cipher and data complexity of about 2^10 chosen plaintexts. Our side channel attack improves upon their work in two aspects. First, we use the Hamming weight leakage model (supposing the Hamming weights of the lower 64 bits and the higher 64 bits of the output of the first round can be obtained without error), which is a more relaxed leakage assumption, supported by many previously known practical results on side channel attacks, compared to the more challenging assumption that the adversary has access to the "exact" values of the internal state bits, as used by Shekh Faisal Abdul-Latip et al. Second, our attack also has a reduced complexity compared to theirs: namely, our attack recovering the 128-bit key of NOEKEON has a time complexity of 20.1 seconds on a PC with a 2.6 GHz CPU and 8 GB of RAM and a data complexity of 99 known plaintexts, whereas their attack has time complexity of O(2^68) and needs about 2^10 chosen plaintexts.
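For readers unfamiliar with the leakage model, the following minimal Python sketch (illustrative only, not the attack code) shows what the assumed Hamming weight leakage of the two 64-bit halves of the first-round output looks like:

```python
# Minimal sketch of the Hamming-weight leakage model assumed above:
# the attacker learns HW(state), not the state bits themselves.

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def leak_first_round(state_128: int):
    """Simulated single-trace leakage: the Hamming weights of the low
    and high 64-bit halves of the 128-bit state after round 1."""
    lo = state_128 & ((1 << 64) - 1)
    hi = state_128 >> 64
    return hamming_weight(lo), hamming_weight(hi)

# Example: low half 0b101 (HW 2), high half 0xF (HW 4).
assert leak_first_round((0xF << 64) | 0b101) == (2, 4)
```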
Last updated:  2012-12-23
Zero-Correlation Linear Cryptanalysis of Reduced-Round LBlock
Uncategorized
Hadi Soleimany, Kaisa Nyberg
Show abstract
Uncategorized
The zero-correlation linear attack is a new method for the cryptanalysis of block ciphers, developed by Bogdanov et al. in 2012. In this paper we adapt the matrix method to find zero-correlation linear approximations. We then present several zero-correlation linear approximations for 14 rounds of LBlock and describe a cryptanalysis of 22 rounds of the reduced LBlock. After biclique attacks on LBlock revealed weaknesses in its key schedule, its designers presented a new version of the cipher with a revised key schedule. The attack presented in this paper is applicable to the LBlock structure independently of the key scheduling. The attack needs only distinct known plaintexts, which is a more realistic attack model compared with impossible differential cryptanalysis, which uses chosen plaintext pairs. Moreover, we performed simulations on a small variant of LBlock and present the first experimental results on the theoretical model of the multidimensional zero-correlation linear cryptanalysis method.
Last updated:  2014-01-12
Improved Zero-knowledge Proofs of Knowledge for the ISIS Problem, and Applications
Uncategorized
San Ling, Khoa Nguyen, Damien Stehle, Huaxiong Wang
Show abstract
Uncategorized
In all existing efficient proofs of knowledge of a solution to the infinity norm Inhomogeneous Small Integer Solution ($\mathrm{ISIS}^{\infty}$) problem, the knowledge extractor outputs a solution vector that is only guaranteed to be~$\widetilde{O}(n)$ times longer than the witness possessed by the prover. As a consequence, in many cryptographic schemes that use these proof systems as building blocks, there exists a gap between the hardness of solving the underlying $\mathrm{ISIS}^{\infty}$ problem and the hardness underlying the security reductions. In this paper, we generalize Stern's protocol to obtain two statistical zero-knowledge proofs of knowledge for the $\mathrm{ISIS}^{\infty}$ problem that remove this gap. Our result yields the potential of relying on weaker security assumptions for various lattice-based cryptographic constructions. As applications of our proof system, we introduce a concurrently secure identity-based identification scheme based on the worst-case hardness of the $\mathrm{SIVP}_{\widetilde{O}(n^{1.5})}$ problem (in the $\ell_2$ norm) in general lattices in the random oracle model, and an efficient statistical zero-knowledge proof of plaintext knowledge with small constant gap factor for Regev's encryption scheme.
Last updated:  2012-10-07
On Transaction Pseudonyms with Implicit Attributes
Uncategorized
Stefan G. Weber
Show abstract
Uncategorized
Transaction pseudonyms with implicit attributes are a novel approach to multilevel linkable transaction pseudonyms. We extend earlier work of Juels and Pappu on re-encryption-based transaction pseudonyms by developing new mechanisms for controlled pseudonym linkability. This includes mechanisms for cooperative, stepwise re-identification as well as individual authentication of pseudonyms. Our proposal makes use of efficient techniques from the area of secure multiparty computation and cryptographically secure PRNGs.
Last updated:  2012-10-07
Leakage Squeezing of Order Two
Claude Carlet, Jean-Luc Danger, Sylvain Guilley, Houssem Maghrebi
In masking schemes, \emph{leakage squeezing} is the study of the optimal shares' representation that maximizes the resistance order against high-order side-channel attacks. Squeezing the leakage of first-order Boolean masking has been problematized and solved previously in~\cite{DBLP:conf/africacrypt/MaghrebiCGD12}. The solution consists in finding a bijection $F$ that modifies the mask, in such a way that its graph, seen as a code, is of greatest dual distance. This paper studies second-order leakage squeezing, \emph{i.e.} leakage squeezing with two independent random masks. It is proved that, compared to first-order leakage squeezing, second-order leakage squeezing at least increments (by one unit) the resistance against high-order attacks, such as high-order correlation power analyses (HO-CPA). Moreover, better improvements over first-order leakage squeezing are possible through relevant constructions of the pair of squeezing bijections. We provide linear bijections that improve the resistance order by strictly more than one (instead of exactly one). Specifically, when the masking is applied on bytes (which suits AES), resistance against $1$st-order (resp. $2$nd-order) attacks is possible with one (resp. two) masks. Optimal leakage squeezing with one mask resists HO-CPA of orders up to $5$. In this paper, with two masks, we provide resistance against HO-CPA not only of order $5+1=6$, but also of order $7$.
Last updated:  2014-01-17
Quantization in Continuous-Source Zero Secrecy Leakage Helper Data Schemes
Uncategorized
Joep de Groot, Boris Škorić, Niels de Vreede, Jean-Paul Linnartz
Show abstract
Uncategorized
A Helper Data Scheme (HDS) is a cryptographic primitive that extracts a high-entropy noise-free string from noisy data. Helper Data Schemes are used for preserving privacy in biometric databases and for Physical Unclonable Functions. HDSs are known for the guided quantization of continuous-valued biometrics as well as for repairing errors in discrete-valued (digitized) extracted values. We refine the theory of Helper Data Schemes with the Zero Leakage (ZL) property, i.e., the mutual information between the helper data and the extracted secret is zero. We focus on quantization and prove that ZL necessitates particular properties of the helper data generating function: (i) the existence of “sibling points”, enrollment values that lead to the same helper data but different secrets; (ii) quantile helper data. We present an optimal reconstruction algorithm for our ZL scheme, that not only minimizes the reconstruction error rate but also yields a very efficient implementation of the verification. We compare the error rate to schemes that do not have the ZL property.
Last updated:  2012-10-07
Packed Ciphertexts in LWE-based Homomorphic Encryption
Zvika Brakerski, Craig Gentry, Shai Halevi
In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of ``General-LWE''). Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
Last updated:  2013-06-30
Adaptively Secure Garbling with Applications to One-Time Programs and Secure Outsourcing
Uncategorized
Mihir Bellare, Viet Tung Hoang, Phillip Rogaway
Show abstract
Uncategorized
Standard constructions of garbled circuits provide only static security, meaning the input x is not allowed to depend on the garbled circuit F. But some applications—notably one-time programs (Goldwasser, Kalai, and Rothblum 2008) and secure outsourcing (Gennaro, Gentry, Parno 2010)—need adaptive security, where x may depend on F. We identify gaps in proofs from these papers with regard to adaptive security and suggest the need for a better abstraction boundary. To this end we investigate the adaptive security of garbling schemes, an abstraction of Yao’s garbled-circuit technique that we recently introduced (Bellare, Hoang, Rogaway 2012). Building on that framework, we give definitions encompassing privacy, authenticity, and obliviousness, with either coarse-grained or fine-grained adaptivity. We show how adaptively secure garbling schemes support simple solutions for one-time programs and secure outsourcing, with privacy being the goal in the first case and obliviousness and authenticity the goal in the second. We give transforms that promote static-secure garbling schemes to adaptive-secure ones. Our work advances the thesis that conceptualizing garbling schemes as a first-class cryptographic primitive can simplify, unify, or improve treatments for higher-level protocols.
Last updated:  2012-10-07
Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions
Kai-Min Chung, Huijia Lin, Rafael Pass
We present a constant-round concurrent zero-knowledge protocol for NP. Our protocol is sound against uniform polynomial-time attackers, and relies on the existence of families of collision-resistant hash functions, and a new (but in our eyes, natural) falsifiable intractability assumption: Roughly speaking, that Micali's non-interactive CS-proofs are sound for languages in P.
Last updated:  2013-11-24
Aggregating CL-Signatures Revisited: Extended Functionality and Better Efficiency
Kwangsu Lee, Dong Hoon Lee, Moti Yung
An aggregate signature is a public-key signature that allows anyone to aggregate different signatures generated by different signers on different messages into a short (so-called aggregate) signature. The notion has many applications where compressing the signature space is important: secure routing protocols, compressed certificate chain signatures, software module authentication, and secure high-scale repositories and logs for financial transactions. In spite of its importance, the state of the art of the primitive is that it has not been easy to devise a suitable aggregate signature scheme that satisfies the conditions of real applications, with reasonable parameters: short public key size, short aggregate signature size, and efficient aggregate signing/verification. In this paper, we propose aggregate signature schemes based on the Camenisch-Lysyanskaya (CL) signature scheme (Crypto 2004), whose security is reduced to that of the CL signature, and which substantially improve the efficiency conditions for real applications. - We first propose an efficient \textit{sequential aggregate signature} scheme with the shortest public key size, to date, and very efficient aggregate verification requiring only a constant number of pairing operations and $l$ exponentiations ($l$ being the number of signers). - Next, we propose an efficient \textit{synchronized aggregate signature} scheme with a very short public key size, and with the shortest (to date) aggregate signature size among synchronized aggregate signature schemes. Signing and aggregate verification are very efficient as well: they take a constant number of pairing operations and $l$ exponentiations. - Finally, we introduce a new notion of aggregate signature named \textit{combined aggregate signature} that allows a signer to dynamically use the two modes of aggregation, ``sequential'' and ``synchronized,'' employing the same private/public key. We also present an efficient combined aggregate signature based on our previous two aggregate signature schemes. This combined-mode scheme allows for application flexibility depending on the real-world scenario: for example, it can be used sequentially to sign incrementally generated legal documents, and synchronously to aggregate the end-of-day logs of all branches of an institute into a single location with a single aggregate signature.
Last updated:  2012-10-02
An Attack on a Fully Homomorphic Encryption Scheme
Hu Yupu, Wang Fenghe
In this paper we present an attack on a fully homomorphic encryption scheme from PKC 2010. We construct a modified secret key, a modified decryption algorithm and a subset of the ciphertext space. When the ciphertext is from this subset, we can correctly decrypt it with our modified secret key and modified decryption algorithm. We also discuss when our modified decryption algorithm is efficient, and when the subset is not negligible.
Last updated:  2012-09-30
Computational Soundness of Coinductive Symbolic Security under Active Attacks
Mohammad Hajiabadi, Bruce M. Kapron
In Eurocrypt 2010, Micciancio initiated an investigation of cryptographically sound, symbolic security analysis with respect to coinductive adversarial knowledge, and demonstrated that under an adversarially passive model, certain security criteria (e.g. indistinguishability) may be given a computationally sound symbolic characterization, without the assumption of key acyclicity. Left open in his work was the fundamental question of ``the viability of extending the coinductive approach to prove computational soundness results in the presence of active adversaries.'' In this paper we make some initial steps toward answering this question in the affirmative with respect to an extension of a trace-based security model (proposed by Micciancio and Warinschi in TCC 2004) including asymmetric and symmetric encryption; in particular we prove that a random computational trace can be soundly abstracted by a coinductive symbolic trace with overwhelming probability, provided that both the underlying encryption schemes provide IND-CCA2 security (plus {ciphertext integrity} for the symmetric scheme), and that the diameter of the underlying coinductively-hidden subgraph is constant in every symbolic trace. This result holds even if the protocol allows arbitrarily nested applications of symmetric/asymmetric encryption, unrestricted transmission of symmetric keys, and adversaries who adaptively corrupt users, along with other forms of active attack. As part of our proof, we formulate a game-based definition of encryption security allowing adaptive corruptions of keys and certain forms of adaptive key-dependent plaintext attack, along with other common forms of CCA2 attack. We prove that (under assumptions similar to the above) security under this game is implied by IND-CCA2 security. This also characterizes a provably benign form of cyclic encryption which can be achieved under standard notions of encryption security, which may be of independent interest.
Last updated:  2014-01-27
Plaintext Awareness in Identity-Based Key Encapsulation
Mark Manulis, Bertram Poettering, Douglas Stebila
The notion of plaintext awareness (PA) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen ciphertext attacks (INDCCA), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving PA in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving PA of schemes in one model cannot be adapted to the other model. Existing research addresses PA in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus is laid mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of PA in proof strategies of INDCCA security, and explores relationships between PA and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and INDCCA-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of gamma-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs) this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is PA and INDCCA-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.
Last updated:  2012-09-30
Domain-Specific Pseudonymous Signatures for the German Identity Card
Jens Bender, Özgür Dagdelen, Marc Fischlin, Dennis Kügler
The restricted identification protocol for the new German identity card basically provides a method to use pseudonyms such that they can be linked by individual service providers, but not across different service providers (even not malicious ones). The protocol can be augmented to allow also for signatures under the pseudonyms. In this paper, we thus view---and define---this idea more abstractly as a new cryptographic signature primitive with some form of anonymity, and use the term domain-specific pseudonymous signatures. We then analyze the restricted identification solutions in terms of the formal security requirements.
Last updated:  2012-09-30
PUFs: Myth, Fact or Busted? A Security Evaluation of Physically Unclonable Functions (PUFs) Cast in Silicon (Extended Version)
Stefan Katzenbeisser, Ünal Kocabaş, Vladimir Rožić, Ahmad-Reza Sadeghi, Ingrid Verbauwhede, Christian Wachsmann
Physically Unclonable Functions~(PUFs) are an emerging technology and have been proposed as central building blocks in a variety of cryptographic protocols and security architectures. However, the security features of PUFs are still under investigation: Evaluation results in the literature are difficult to compare due to varying test conditions, different analysis methods and the fact that representative data sets are publicly unavailable. In this paper, we present the first large-scale security analysis of ASIC implementations of the five most popular intrinsic electronic PUF types, including arbiter, ring oscillator, SRAM, flip-flop and latch PUFs. Our analysis is based on PUF data obtained at different operating conditions from $96$ ASICs housing multiple PUF instances, which have been manufactured in TSMC 65nm CMOS technology. In this context, we present an evaluation methodology and quantify the robustness and unpredictability properties of PUFs. Since all PUFs have been implemented in the same ASIC and analyzed with the same evaluation methodology, our results allow for the first time a fair comparison of their properties.
Last updated:  2012-09-28
Resource-based Corruptions and the Combinatorics of Hidden Diversity
Juan Garay, David Johnson, Aggelos Kiayias, Moti Yung
In the setting of cryptographic protocols, the corruption of a party has traditionally been viewed as a simple, uniform and atomic operation, where the adversary decides to get control over a party and this party immediately gets corrupted. In this paper, motivated by the fact that different players may require different resources to get corrupted, we put forth the notion of {\em resource-based corruptions}, where the adversary must invest some resources in order to do so. If the adversary has full information about the system configuration then resource-based corruptions would provide no fundamental difference from the standard corruption model. However, in a resource ``anonymous'' setting, in the sense that such configuration is hidden from the adversary, much is to be gained in terms of efficiency and security. We showcase the power of such {\em hidden diversity} in the context of secure multiparty computation (MPC) with resource-based corruptions and prove that it can effectively be used to circumvent known impossibility results. Specifically, if $OPT$ is the corruption budget that violates the completeness of MPC (the case when half or more of the players are corrupted), we show that if hidden diversity is available, the completeness of MPC can be made to hold against an adversary with as much as a $B\cdot OPT$ budget, for any constant $B>1$. This result requires a suitable choice of parameters (in terms of number of players and their hardness to corrupt), which we provide and further prove other tight variants of the result when the said choice is not available. Regarding efficiency gains, we show that hidden diversity can be used to force the corruption threshold to drop from 1/2 to 1/3, in turn allowing the use of much more efficient (information-theoretic) MPC protocols. We achieve the above through a series of technical contributions: o The modeling of the corruption process in the setting of cryptographic protocols through {\em corruption oracles} as well as the introduction of a notion of reduction to relate such oracles; o the abstraction of the corruption game as a combinatorial problem and its analysis; and, importantly, o the formulation of the notion of {\em inversion effort preserving} (IEP) functions which is a type of direct-sum property, and the property of {\em hardness indistinguishability}. While hardness indistinguishability enables the dissociation of parties' identities and the resources needed to corrupt them, IEP enables the discretization of adversarial work into corruption tokens, all of which may be of independent interest.
Last updated:  2012-10-16
New Impossibility Results for Concurrent Composition and a Non-Interactive Completeness Theorem for Secure Computation
Shweta Agrawal, Vipul Goyal, Abhishek Jain, Manoj Prabhakaran, Amit Sahai
We consider the client-server setting for the concurrent composition of secure protocols: in this setting, a single server interacts with multiple clients concurrently, executing with each client a specified protocol where only the client should receive any nontrivial output. Such a setting is easily motivated from an application standpoint. There are important special cases for which positive results are known – such as concurrent zero knowledge protocols – and it has been an open question explicitly asked, for instance, by Lindell [J. Cryptology’08] – whether other natural functionalities such as Oblivious Transfer (OT) are possible in this setting. In this work: 1. We resolve this open question by showing that unfortunately, even in this very limited concurrency setting, broad new impossibility results hold, ruling out not only OT, but in fact all nontrivial asymmetric functionalities. Our new negative results hold even if the inputs of all honest parties are fixed in advance, and the adversary receives no auxiliary information. 2. Along the way, we establish a new unconditional completeness result for asymmetric functionalities, where we characterize functionalities that are non-interactively complete secure against active adversaries. When we say that a functionality F is non-interactively complete, we mean that every other asymmetric functionality can be realized by parallel invocations of several copies of F, with no other communication in any direction. Our result subsumes a completeness result of Kilian [STOC’00] that uses protocols which require additional interaction in both directions.
Last updated:  2012-09-27
Security weakness in the Proof of Storage with Deduplication
Youngjoo Shin, Junbeom Hur, Kwangjo Kim
Achieving both security and efficiency is a challenging issue for data outsourcing services in cloud computing. Proof of Storage with Deduplication (POSD) is the first solution that addresses this issue for cloud storage. However, the validity of the POSD scheme rests on the strong assumption that all clients are honest when generating their keys. We show the insecurity of the scheme under a new attack model in which malicious clients exploit dishonestly manipulated keys. We also propose an improvement of the POSD scheme to mitigate our attack.
Last updated:  2012-09-27
Bellcore attack in practice
Andrey Sidorenko, Joachim van den Berg, Remko Foekema, Michiel Grashuis, Jaap de Vos
In this paper we analyze practical aspects of the differential fault attack on RSA published by Boneh, DeMillo and Lipton from Bellcore. We focus on the CRT variant, which requires only one faulty signature to be entirely broken provided that no DFA countermeasures are in use. Usually the easiest approach for the attacker is to introduce a fault in one of the two RSA-CRT exponentiations. These are time-consuming and often clearly visible in the power profiles. However, protection of the exponentiations against faults does not always circumvent the Bellcore attack. Our goal is to investigate and classify other possible targets of the attack.
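The core of the Bellcore CRT attack fits in a few lines. The following toy Python example (with small illustrative primes, far too small for real RSA) shows how a single fault in the mod-q half-exponentiation lets a gcd computation factor the modulus:

```python
# Toy demonstration of the Bellcore RSA-CRT fault attack: a faulty
# signature s' satisfies s'^e == m (mod p) but not (mod q), so
# gcd(s'^e - m, N) reveals the secret factor p. Requires Python 3.8+
# for pow(x, -1, m).
from math import gcd

p, q = 1000003, 1000033          # toy primes, illustration only
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 424242                        # the (padded) message representative

# Correct CRT signing: two half-exponentiations, then recombination
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)

def crt(sp, sq):
    h = (q_inv * (sp - sq)) % p
    return sq + q * h

assert pow(crt(sp, sq), e, N) == m        # correct signature verifies

# Fault injection: corrupt only the mod-q half-exponentiation
sq_faulty = (sq + 1) % q
s_faulty = crt(sp, sq_faulty)

# The attack: one faulty signature factors N
assert gcd((pow(s_faulty, e, N) - m) % N, N) == p
```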
Last updated:  2014-02-27
Provably Secure Concurrent Error Detection Against Differential Fault Analysis
Xiaofei Guo, Debdeep Mukhopadhyay, Ramesh Karri
Differential fault analysis (DFA) poses a significant threat to the Advanced Encryption Standard (AES). It has been demonstrated that DFA can use only a single faulty ciphertext to reveal the secret key of AES in an average of 2^30 computations. Traditionally, concurrent error detection (CED) is used to protect AES against DFA. However, we emphasize that conventional CED assumes a uniform distribution of faults, which is not a valid assumption in the context of DFA. In contrast, we show practical examples which highlight that an attacker can inject specific and exploitable faults, thus threatening existing CED. This paper brings to the surface a new CED approach for cryptography, aimed at providing provable security by detecting all possible DFA-exploitable faults, which form a small subset of the entire fault space. We analyze the fault coverage of conventional CED against DFA-exploitable faults, and we find that the fault coverage of most of these techniques is significantly lower than claimed. We stress that for security, it is imperative that CED provide 100% fault coverage for DFA-exploitable faults. We further propose an invariance-based CED which provides 100% provable security against all known DFA of AES.
Last updated:  2012-09-24
Faster Pairing Computation on Jacobi quartic Curves with High-Degree Twists
Liangze Li, Hongfeng Wu, Fan Zhang
In this paper, we propose an elaborate geometric approach to explain the group law on Jacobi quartic curves, which are seen as the intersection of two quadratic surfaces in space. Using this geometric interpretation, we construct the Miller function. We then present explicit formulae for the addition and doubling steps in Miller's algorithm to compute the Tate pairing on Jacobi quartic curves. Both the addition and doubling steps of our formulae for Tate pairing computation on Jacobi curves are faster than previously proposed ones. Finally, we present efficient formulae for Jacobi quartic curves with twists of degree 4 or 6. For twists of degree 4, both the addition and doubling steps in our formulae are faster than the fastest results on Weierstrass curves. For twists of degree 6, the addition steps of our formulae are faster than the fastest results on Weierstrass curves.
Last updated:  2013-03-15
Dynamic Proofs of Retrievability via Oblivious RAM
David Cash, Alptekin Kupcu, Daniel Wichs
Proofs of retrievability allow a client to store her data on a remote server (``in the cloud'') and periodically execute an efficient audit protocol to check that all of the data is being maintained correctly and can be recovered from the server. For efficiency, the computation and communication of the server and client during an audit protocol should be significantly smaller than reading/transmitting the data in its entirety. Although the server is only asked to access a few locations of its storage during an audit, it must maintain full knowledge of all client data to be able to pass. Starting with the work of Juels and Kaliski (CCS '07), all prior solutions to this problem crucially assume that the client data is static and do not allow it to be efficiently updated. Indeed, they all store a redundant encoding of the data on the server, so that the server must delete a large fraction of its storage to `lose' any actual content. Unfortunately, this means that even a single bit modification to the original data will need to modify a large fraction of the server storage, which makes updates highly inefficient. Overcoming this limitation was left as the main open problem by all prior works. In this work, we give the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data by running an efficient protocol with the server. At any point in time, the client can execute an efficient audit protocol to ensure that the server maintains the latest version of the client data. The computation and communication complexity of the server and client in our protocols is only polylogarithmic in the size of the client's data. The starting point of our solution is to split up the data into small blocks and redundantly encode each block of data individually, so that an update inside any data block only affects a few codeword symbols. The main difficulty is to prevent the server from identifying and deleting too many codeword symbols belonging to any single data block. We do so by hiding where the various codeword symbols for any individual data block are stored on the server and when they are being accessed by the client, using the algorithmic techniques of oblivious RAM.
Last updated:  2012-09-22
Faster batch forgery identification
Daniel J. Bernstein, Jeroen Doumen, Tanja Lange, Jan-Jaap Oosterwijk
Batch signature verification detects whether a batch of signatures contains any forgeries. Batch forgery identification pinpoints the location of each forgery. Existing forgery-identification schemes vary in their strategies for selecting subbatches to verify (individual checks, binary search, combinatorial designs, etc.) and in their strategies for verifying subbatches. This paper exploits synergies between these two levels of strategies, reducing the cost of batch forgery identification for elliptic-curve signatures.
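As a sketch of the generic binary-search strategy mentioned above (my own illustration, not this paper's optimized scheme), forgery identification can recurse on failing subbatches, so the verification cost scales with the number of forgeries rather than the batch size:

```python
# Generic binary-search forgery identification on top of any batch
# verifier: if a batch fails, split it and recurse on both halves.
# `verify_batch` abstracts the batch-verification subroutine.

def find_forgeries(batch, verify_batch):
    """Return the indices of all forged signatures in `batch`."""
    def rec(lo, hi):                    # half-open interval [lo, hi)
        if verify_batch(batch[lo:hi]):
            return []                   # subbatch is all-valid
        if hi - lo == 1:
            return [lo]                 # isolated a single forgery
        mid = (lo + hi) // 2
        return rec(lo, mid) + rec(mid, hi)
    return rec(0, len(batch))

# Toy stand-in: a "signature" is valid iff it is True, so the batch
# verifier is simply all().
batch = [True, True, False, True, False, True, True, True]
assert find_forgeries(batch, all) == [2, 4]
```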
Last updated:  2013-09-09
Efficient Modular NIZK Arguments from Shift and Product
Uncategorized
Prastudy Fauzi, Helger Lipmaa, Bingsheng Zhang
Show abstract
Uncategorized
We propose a non-interactive product argument, that is more efficient than the one by Groth and Lipmaa, and a novel shift argument. We then use them to design several novel non-interactive zero-knowledge (NIZK) arguments. We obtain the first range proof with constant communication and subquadratic prover's computation. We construct NIZK arguments for $\mathbf{NP}$-complete languages, {\textsc{Set-Partition}}, {\textsc{Subset-Sum}} and {\textsc{Decision-Knapsack}}, with constant communication, subquadratic prover's computation and linear verifier's computation.
Last updated:  2012-09-22
Constrained Search for a Class of Good S-Boxes with Improved DPA Resistivity
Bodhisatwa Mazumdar, Debdeep Mukhopadhyay, Indranil Sengupta
In FSE 2005, \emph{transparency order} was proposed as a parameter for the robustness of S-boxes to \emph{Differential Power Analysis} (DPA): lower \emph{transparency order} implies more resistance. However, most cryptographically strong Boolean functions have been found to have high \emph{transparency order}. It is also a difficult problem to search for Boolean functions which are cryptographically strong and yet have low \emph{transparency order}, the total search space for $(n,n)$-bit S-boxes being as large as $2^{n \cdot 2^n}$. In this paper we characterize \emph{transparency order} for various classes of Boolean functions by computing upper and lower bounds of \emph{transparency order} for both even and odd numbers of variables. The transparency order is defined in terms of the \emph{diffusion} properties of Boolean functions, namely the number of bit flips in the output of a function corresponding to bit flips at its input. The calculated bounds depend on the number of input-difference vectors of the S-box for which the probability of an output bit flip deviates from 0.5. The \emph{transparency order} is found to be high for those Boolean functions having a larger number of input differences for which the probability of an output bit flip is 0.5. We also find that, rather than the \emph{propagation characteristics}, the \emph{autocorrelation spectrum} of the S-box function $F$ is the better qualified candidate for deciding the characteristics of \emph{transparency order}. The relations developed to characterize \emph{transparency order} aid our constrained random generation and search of a class of balanced $8\times8$ S-boxes with \emph{transparency order} upper bounded by 7.8, \emph{nonlinearity} in the range $(104,110)$ and \emph{absolute indicator values} of the \emph{GAC} in the range $(48,88)$.
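The autocorrelation spectrum the abstract refers to is straightforward to compute directly. The following Python sketch (illustrative only, not the paper's search code) evaluates $r_f(a) = \sum_x (-1)^{f(x) \oplus f(x \oplus a)}$ for a component function $f_b(x) = b \cdot F(x)$ of a toy 3-bit S-box:

```python
# Autocorrelation spectrum of an S-box component function, the
# quantity the abstract identifies as governing transparency order.

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def autocorrelation(sbox, b):
    """Spectrum r_f(a) for f(x) = parity(b & sbox[x]), all a."""
    n = (len(sbox) - 1).bit_length()
    f = [parity(b & y) for y in sbox]          # component function b.F(x)
    return [sum(1 - 2 * (f[x] ^ f[x ^ a]) for x in range(1 << n))
            for a in range(1 << n)]

# Toy 3-bit S-box (a permutation of {0..7}), component b = 001
S = [0, 1, 3, 6, 7, 4, 5, 2]
spec = autocorrelation(S, 0b001)
assert spec[0] == len(S)                       # r_f(0) is always 2^n
print(spec)
```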
Last updated:  2013-02-21
Rotational cryptanalysis of round-reduced Keccak
Uncategorized
Pawel Morawiecki, Josef Pieprzyk, Marian Srebrny
Show abstract
Uncategorized
In this paper we attack the round-reduced Keccak hash function with a technique called rotational cryptanalysis. We focus on the Keccak variants proposed as SHA-3 candidates in NIST's contest for a new cryptographic hash function standard. Our main results are a preimage attack on 4-round Keccak and a 5-round distinguisher on the Keccak-f[1600] permutation --- the main building block of the Keccak hash function.
Last updated:  2012-11-04
A Versatile Multi-Input Multiplier over Finite Fields
Uncategorized
Haibo Yi, Shaohua Tang, Lingling Xu
Show abstract
Uncategorized
Multiplication of three elements over finite fields is used extensively in multivariate public key cryptography and in solving systems of linear equations over finite fields. This contribution shows how multiplication of three elements over finite fields can be enhanced by a specific architecture. We first propose a versatile multi-input multiplier over finite fields. The parameters of this multiplier can be changed according to the requirements of the user, which makes it reusable in different applications. Our evaluation of this multiplier gives optimum choices for multiplication of three elements over finite fields. Implementation results show that it takes $22.062$ ns and $16.354$ ns to execute each multiplication of three elements over $GF((2^4)^2)$ based on table look-up and on polynomial basis, respectively, on an FPGA. Experimental results and mathematical proofs clearly demonstrate the improvement of the proposed versatile multiplier over finite fields.
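For concreteness, here is a minimal software sketch (not the paper's FPGA architecture) of three-element multiplication in polynomial basis over the base field GF(2^4); the paper's tower field GF((2^4)^2) is built on top of such a base-field multiplier:

```python
# Three-element multiplication over GF(2^4) in polynomial basis.
# Field: GF(2)[x] / (x^4 + x + 1); elements are 4-bit integers.

IRRED = 0b10011          # x^4 + x + 1

def gf16_mul(a, b):
    """Carry-less multiply with interleaved reduction mod IRRED."""
    r = 0
    while b:
        if b & 1:
            r ^= a       # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0x10:     # degree reached 4: reduce
            a ^= IRRED
    return r

def gf16_mul3(a, b, c):
    """Multiplication of three field elements, as in the abstract."""
    return gf16_mul(gf16_mul(a, b), c)

# Sanity checks: 1 is the multiplicative identity, and x * x = x^2.
assert gf16_mul3(1, 1, 5) == 5
assert gf16_mul(0b0010, 0b0010) == 0b0100
```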
Last updated:  2012-11-22
Differential Analysis of the LED Block Cipher
Florian Mendel, Vincent Rijmen, Deniz Toz, Kerem Varici
In this paper, we present a security analysis of the lightweight block cipher LED proposed by Guo et al. at CHES 2011. Since the design of LED is very similar to the Even-Mansour scheme, we first review existing attacks on this scheme and extend them to related-key and related-key-cipher settings before we apply them to LED. We obtain results for $12$ and $16$ rounds (out of $32$) for LED-$64$ and $16$ and $24$ rounds (out of $48$) for LED-$128$. Furthermore, we present an observation on full LED in the related-key-cipher setting. For all these attacks we need to find good differentials for one step (4 rounds) of LED. Therefore, we extend the study of plateau characteristics for AES-like structures from two rounds to four rounds when the key addition is replaced with a constant addition. We introduce an algorithm that can be used to find good differentials and right pairs for one step of LED. To be more precise, we can find more than $2^{10}$ right pairs for one step of LED with complexity of $2^{16}$ and memory requirement of $5 \times 2^{17}$. Moreover, a similar algorithm can also be used to find iterative characteristics for LED.
Last updated:  2014-04-08
Enhanced Chosen-Ciphertext Security and Applications
Dana Dachman-Soled, Georg Fuchsbauer, Payman Mohassel, Adam O'Neill
We introduce and study a new notion of \emph{enhanced chosen-ciphertext security} (ECCA) for public-key encryption. Loosely speaking, in the ECCA security experiment, the decryption oracle provided to the adversary is augmented to return not only the output of the decryption algorithm on a queried ciphertext but also of a \emph{randomness recovery} algorithm associated to the scheme. Our results mainly concern the case where the randomness recovery algorithm is efficient. We provide constructions of ECCA-secure encryption from adaptive trapdoor functions as defined by Kiltz \emph{et al.}~(EUROCRYPT 2010), resulting in ECCA encryption from standard number-theoretic assumptions. We then give two applications of ECCA-secure encryption: (1) We use it as a unifying concept in showing equivalence of adaptive trapdoor functions and tag-based adaptive trapdoor functions, resolving an open question of Kiltz \emph{et al}.~(2) We show that ECCA-secure encryption can be used to securely realize an approach to public-key encryption with non-interactive opening (PKENO) originally suggested by Damgård and Thorbek (EUROCRYPT 2007), resulting in new and practical PKENO schemes quite different from those in prior work. Our results demonstrate that ECCA security is of both practical and theoretical interest.
Last updated:  2012-09-20
Salus: A System for Server-Aided Secure Function Evaluation
Seny Kamara, Payman Mohassel, Ben Riva
Secure function evaluation (SFE) allows a set of mutually distrustful parties to evaluate a function of their joint inputs without revealing their inputs to each other. SFE has been the focus of active research and recent work suggests that it can be made practical. Unfortunately, current protocols and implementations have inherent limitations that are hard to overcome using standard and practical techniques. Among them are: (1) requiring participants to do work linear in the size of the circuit representation of the function; (2) requiring all parties to do the same amount of work; and (3) not being able to provide complete fairness. A promising approach for overcoming these limitations is to augment the SFE setting with a small set of untrusted servers that have no input to the computation and that receive no output, but that make their computational resources available to the parties. In this model, referred to as server-aided SFE, the goal is to trade off the parties' work at the expense of the servers. Motivated by the emergence of public cloud services such as Amazon EC2 and Microsoft Azure, recent work has explored the extent to which server-aided SFE can be achieved with a single server. In this work, we revisit the server-aided setting from a practical perspective and design single-server-aided SFE protocols that are considerably more efficient than all previously-known protocols. We achieve this in part by introducing several new techniques for garbled-circuit-based protocols, including a new and efficient input-checking mechanism for cut-and-choose and a new pipelining technique that works in the presence of malicious adversaries. Furthermore, we extend the server-aided model to guarantee fairness, which is an important property to achieve in practice. Finally, we implement and evaluate our constructions experimentally and show that our protocols (regardless of the number of parties involved) yield implementations that are 4 and 6 times faster than the most optimized two-party SFE implementation when the server is assumed to be malicious and covert, respectively.
Last updated:  2012-10-05
2048XKS - A Software Oriented High Security Block Cipher
Dieter Schmidt
The block cipher 2048XKS is a derivative of the block ciphers 1024 and 1024XKS, which in turn used the block ciphers MMB, SAFER, and Blowfish as building blocks. 2048XKS has a block size of 2048 bits and a key length of either 4096 or 8192 bits. It is a substitution-permutation network (SPN), designed for 32-bit microprocessors with a hardware integer multiplier.
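For flavor, here is a generic SPN round on 32-bit words in the style the abstract describes: key XOR, a multiplication that exploits the hardware multiplier, and a diffusion layer. The constant, rotation amount, and layout are placeholder assumptions, not the actual 2048XKS round function.

```python
# Illustrative SPN round on 64 x 32-bit words (2048-bit block); the
# operations mirror the abstract's design goals, not 2048XKS itself.
MASK32 = 0xFFFFFFFF

def spn_round(state, round_keys):
    """state, round_keys: lists of 64 words of 32 bits each."""
    # key mixing
    state = [w ^ k for w, k in zip(state, round_keys)]
    # word-wise nonlinearity over GF(2): multiply by an odd constant
    # mod 2^32 (odd, hence invertible, so the round stays a permutation)
    state = [(w * 0x9E3779B1) & MASK32 for w in state]
    # diffusion: rotate each word's bits and rotate the word order
    state = [((w << 7) | (w >> 25)) & MASK32 for w in state[1:] + state[:1]]
    return state
```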
Last updated:  2014-06-22
A Comparison of Perfect Table Cryptanalytic Tradeoff Algorithms
Ga Won Lee, Jin Hong
The performances of three major time-memory tradeoff algorithms were compared in a recent paper. The algorithms considered there were the classical Hellman tradeoff and the non-perfect table versions of the distinguished point method and the rainbow table method. This paper adds the perfect table versions of the distinguished point method and the rainbow table method to the list, so that all the major tradeoff algorithms may now be compared against each other. Even though there are existing claims as to the superiority of one tradeoff algorithm over another, the performance comparisons provided by the current work and the preceding paper are of more practical value. Comparisons that take both the cost of pre-computation and the efficiency of the online phase into account, at parameters that achieve a common success rate, can now be carried out with ease. Comparisons can be based on the expected execution complexities rather than the worst-case complexities, and details such as the effects of false alarms and various storage optimization techniques need no longer be ignored. A significant portion of this paper is devoted to accurately analyzing the execution behavior of the perfect table distinguished point method. In particular, we obtain a closed-form formula for the average length of the chains associated with a perfect distinguished point table.
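The distinguished point method at the heart of the analysis is easy to state in code. The sketch below generates DP chains and keeps at most one chain per endpoint, which is precisely the perfect-table property; the one-way function, DP criterion, and chain-length bound are illustrative choices, not the paper's parameters.

```python
# Distinguished-point chain generation and a perfect-table build.
import hashlib

def f(x: int) -> int:
    """Stand-in one-way function (cipher composed with a reduction map)."""
    h = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:4], "big")

def dp_chain(start: int, d: int = 12, max_len: int = 1 << 16):
    """Iterate f until a DP (d low-order zero bits); return (end, length)."""
    x = start
    for length in range(1, max_len + 1):
        x = f(x)
        if x & ((1 << d) - 1) == 0:
            return x, length
    return None  # over-long chain: discarded during precomputation

def build_perfect_dp_table(num_chains: int, d: int = 12):
    table = {}
    for start in range(num_chains):
        rec = dp_chain(start, d)
        if rec:
            end, length = rec
            # perfect table: one chain per endpoint (keep the longest)
            if end not in table or table[end][1] < length:
                table[end] = (start, length)
    return table
```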
Last updated:  2012-09-20
Efficient Implementation of RSA Algorithm with MKE
Sami A. Nagar, Dr. Saad Alshamma
The aim of this paper is to improve the implementation of the RSA algorithm in certain situations. The improvement is based on the ideas of H. Ren-Junn, S. Feng-Fu, Y. Yi-Shiung and C. Chia-Yao, and allows us to speed up the transmission of data between various nodes in networks. We realized our idea in the C{\#} language and in a database created with SQL Server 2008 R2. An identical database must be used in all network gateways, and a special protocol (the RSA Handshake Database Protocol) was designed to control its creation. Each gateway runs an RSA key-generation procedure offline, controlled according to its specific requirements; this stage is called RSA-Key Generations Offline. We propose a new method for exchanging key values among gateways: roughly speaking, gateways exchange indexes (Indexes Exchange) that refer to the database fields containing the values of the public and private keys.
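A minimal sketch of the index-exchange idea, under the assumption that every gateway holds an identical pre-generated key table: only a table index ever crosses the network. The table layout and the toy key sizes are placeholders, not the paper's actual schema.

```python
# Index exchange over an identical pre-generated key table (toy version;
# the primes are far too small for real use).
import random

def build_key_table():
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    # exponents coprime to phi; pow(e, -1, phi) is the private exponent
    return {i: (n, e, pow(e, -1, phi)) for i, e in enumerate([7, 11, 17, 19])}

TABLE = build_key_table()  # generated offline, identical at every gateway

idx = random.randrange(len(TABLE))  # only this index is transmitted
n, e, d = TABLE[idx]                # receiver looks the keys up locally
c = pow(42, e, n)                   # encrypt under the indexed public key
assert pow(c, d, n) == 42           # decrypt with the indexed private key
```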
Last updated:  2012-10-24
Private Top-k Aggregation Protocols
Myungsun Kim, Abedelaziz Mohaisen, Jung Hee Cheon, Yongdae Kim
In this paper, we revisit the private top-$k$ data aggregation problem. First we formally define the problem's security requirements as both data and user privacy goals. To achieve both goals, and to strike a balance between efficiency and functionality, we devise a novel cryptographic construction that comes in two schemes: a fully decentralized simple construction and its practical, semi-decentralized variant. Both schemes are provably secure in the semi-honest model. We analyze the computational and communication complexities of our construction, and show that it is much more efficient than the existing protocols in the literature.
Last updated:  2013-09-17
Intercepting Tokens: The Empire Strikes Back in the Clone Wars
Özgür Dagdelen, Marc Fischlin
We discuss interception attacks on cryptographic protocols which rely on trustworthy hardware like one-time memory tokens (Goldwasser et al., Crypto 2008). In such attacks the adversary can mount man-in-the-middle attacks and access, or even substitute, transmitted tokens. We show that many of the existing token-based protocols are vulnerable to this kind of attack, which typically lies outside the previously considered security models. We also give a positive result for protocols that remain secure against such attacks. We present a very efficient protocol for password-based authenticated key exchange based on the weak model of one-time memory tokens. Our protocol requires only four moves and very basic operations, and the sender sends $\ell$ tokens in the first step for passwords of length $\ell$. At the same time we achieve information-theoretic security in Canetti's universal composition framework (FOCS 2001) against adaptive adversaries (assuming reliable erasure), even if the tokens are not guaranteed to be transferred securely, i.e., even if the adversary can read or substitute transmitted tokens.
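The one-time memory token underlying the protocol has a very small interface, which the following sketch models: the token is loaded with two values and reveals exactly one before refusing further queries. This models only the token abstraction, not the paper's key-exchange protocol.

```python
# Model of a one-time memory (OTM) token: two stored values, one read.
class OneTimeMemory:
    def __init__(self, value0: bytes, value1: bytes):
        self._values = (value0, value1)
        self._used = False

    def read(self, choice: int) -> bytes:
        if self._used:
            raise RuntimeError("OTM token already consumed")
        self._used = True  # the token is spent after a single read
        return self._values[choice & 1]

token = OneTimeMemory(b"key-for-0", b"key-for-1")
assert token.read(1) == b"key-for-1"
# any further token.read(...) raises RuntimeError
```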
Last updated:  2012-09-20
Secret Sharing and Secure Computing from Monotone Formulae
Ivan Bjerre Damgård, Jonas Kölker, Peter Bro Miltersen
We present a construction of log-depth formulae for various threshold functions based on atomic threshold gates of constant size. From this, we build a new family of linear secret sharing schemes that are multiplicative, scale well as the number of players increases, and allow raising a shared value to the characteristic of the underlying field without interaction. Some of these schemes are in addition strongly multiplicative. Our formulae can also be used to construct multiparty protocols from protocols for a constant number of parties. In particular, we implement black-box multiparty computation over non-Abelian groups in a way that is much simpler than previously known, and we also show how to get a protocol in this setting that is efficient and actively secure against a constant fraction of corrupted parties, a long-standing open problem. Finally, we show a negative result on the use of our scheme for pseudorandom secret sharing as defined by Cramer, Damgård and Ishai.
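The idea of turning a monotone formula into a secret sharing scheme can be seen in the classic Benaloh-Leichter recursion, sketched below for plain AND/OR gates: an OR gate copies the secret to each child, an AND gate splits it additively. The paper's construction works with constant-size threshold gates rather than this simple base case; the field and example formula here are illustrative.

```python
# Benaloh-Leichter-style sharing over a monotone AND/OR formula (the
# simple relative of the paper's threshold-gate construction).
import random

P = 2**61 - 1  # working field (a Mersenne prime, chosen for illustration)

def share(secret: int, formula):
    """formula: ('leaf', player) | ('and', [subs]) | ('or', [subs])."""
    kind = formula[0]
    if kind == "leaf":
        return [(formula[1], secret)]
    if kind == "or":   # any child alone suffices: copy the secret
        return [s for sub in formula[1] for s in share(secret, sub)]
    if kind == "and":  # all children needed: split additively
        subs = formula[1]
        parts = [random.randrange(P) for _ in subs[:-1]]
        parts.append((secret - sum(parts)) % P)
        return [s for part, sub in zip(parts, subs) for s in share(part, sub)]
    raise ValueError(kind)

# players {1,2} or {1,3} can sum their share values mod P to recover 42
shares = share(42, ("and", [("leaf", 1), ("or", [("leaf", 2), ("leaf", 3)])]))
```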
Last updated:  2012-09-20
A Low-Area Unified Hardware Architecture for the AES and the Cryptographic Hash Function Grøstl
Nuray At, Jean-Luc Beuchat, Eiji Okamoto, Ismail San, Teppei Yamazaki
This article describes the design of an 8-bit coprocessor for the AES (encryption, decryption, and key expansion) and the cryptographic hash function Grøstl on several Xilinx FPGAs. Our Arithmetic and Logic Unit performs a single instruction that allows for implementing AES encryption, AES decryption, AES key expansion, and Grøstl at all levels of security. Thanks to a careful organization of AES and Grøstl internal states in the register file, we manage to generate all read and write addresses by means of a modulo-128 counter and a modulo-256 counter. A fully autonomous implementation of Grøstl and AES on a Virtex-6 FPGA requires 169 slices and a single 36k memory block, and achieves a competitive throughput. Assuming that the security guarantees of Grøstl are at least as good as the ones of the other SHA-3 finalists, our results show that Grøstl is the best candidate for low-area cryptographic coprocessors.
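The address-generation trick can be paraphrased in a few lines: two free-running counters, one modulo 128 and one modulo 256, drive all register-file addresses. The mapping from counter values to read/write addresses below is an assumption for illustration; the actual scheduling is tied to the paper's register-file layout.

```python
# Two free-running counters standing in for all address-generation logic.
class AddressGenerator:
    def __init__(self):
        self.ctr128 = 0  # 7-bit counter
        self.ctr256 = 0  # 8-bit counter

    def step(self):
        read_addr, write_addr = self.ctr128, self.ctr256  # illustrative map
        self.ctr128 = (self.ctr128 + 1) % 128
        self.ctr256 = (self.ctr256 + 1) % 256
        return read_addr, write_addr
```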
Last updated:  2012-11-07
A Simple Combinatorial Treatment of Constructions and Threshold Gaps of Ramp Schemes
Maura B. Paterson, Douglas R. Stinson
We give easy proofs of some recent results concerning threshold gaps in ramp schemes. We then generalise a construction method for ramp schemes employing error-correcting codes so that it can be applied using nonlinear (as well as linear) codes. Finally, as an immediate consequence of these results, we provide a new explicit bound on the minimum length of a code having a specified distance and dual distance.
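For orientation, the textbook polynomial ramp scheme that the coding-theoretic construction generalises can be sketched as follows: an L-symbol secret occupies L coefficients of a random degree-(t-1) polynomial, so any t shares reconstruct it while t-L or fewer reveal nothing. The field size and parameters are illustrative.

```python
# Shamir-style (t, L, n) ramp scheme: the simple instance of the
# code-based construction the paper generalises.
import random

P = 257  # toy prime field

def ramp_share(secret, t, n):
    """secret: L <= t field elements; returns n shares (x, f(x))."""
    L = len(secret)
    coeffs = list(secret) + [random.randrange(P) for _ in range(t - L)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

shares = ramp_share([3, 1, 4], t=5, n=8)  # 3 secret symbols, threshold 5
```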