All papers in 2012 (Page 6 of 733 results)

Last updated:  2012-05-14
A Cryptanalysis of HummingBird-2: The Differential Sequence Analysis
Qi Chai, Guang Gong
Hummingbird-2 is a recent lightweight block cipher design targeting constrained devices, which not only enables a compact hardware implementation and ultra-low power consumption but also meets the stringent response-time requirements specified in ISO 18000-6C. In this paper, we present the first cryptanalytic result on the full version of this cipher using two pairs of related keys, i.e., four keys. We discover that the differential sequences for the last invocation of the round function can be computed by running the full cipher, which allows the search space for the key to be reduced. Based upon this observation, we propose a probabilistic attack encompassing two phases: a preparation phase and a key recovery phase. The preparation phase, requiring $2^{80}$ effort in time, aims to reach an internal state, with success probability $0.5$, that satisfies particular conditions. In the key recovery phase, by attacking the last invocation of the round function of the encryption (resp. decryption) using the proposed differential sequence analysis (DSA), we are able to recover $36$ bits (resp. another $44$ bits) of the $128$-bit key. The remaining $48$ bits of the key can be exhaustively searched, and the overall time complexity of the key recovery phase is $2^{48.14}$. Note that the proposed attack, though exhibiting an interesting tradeoff between success probability and time complexity, is only of theoretical interest at the moment and does not affect the security of Hummingbird-2 in practice.
Last updated:  2012-04-30
Implementing Pairings at the 192-bit Security Level
Diego F. Aranha, Laura Fuentes-Castañeda, Edward Knapp, Alfred Menezes, Francisco Rodríguez-Henríquez
We implement asymmetric pairings derived from Kachisa-Schaefer-Scott (KSS), Barreto-Naehrig (BN), and Barreto-Lynn-Scott (BLS) elliptic curves at the 192-bit security level. Somewhat surprisingly, we find pairings derived from BLS curves with embedding degree 12 to be the fastest for our serial as well as our parallel implementations. Our serial implementations provide a factor-3 speedup over the previous state-of-the-art, demonstrating that pairing computation at the 192-bit security level is not as expensive as previously thought. We also present a general framework for deriving a Weil-type pairing that is well-suited for computing a single pairing on a multi-processor machine.
Last updated:  2012-05-02
A General Construction for 1-round $\delta$-RMT and (0, $\delta$)-SMT
Reihaneh Safavi-Naini, Mohammed Ashraful Alam Tuhin, Pengwei Wang
In the Secure Message Transmission (SMT) problem, a sender $\cal S$ is connected to a receiver $\cal R$ through $N$ node-disjoint bidirectional paths in the network, $t$ of which are controlled by an adversary with \textit{unlimited computational power}. $\cal{S}$ wants to send a message $m$ to $\cal{R}$ in a \textit{reliable} and \textit{private} way. It is proved that SMT is possible if and only if $N\ge2t+1$. In the Reliable Message Transmission (RMT) problem, the network setting is the same but the goal is to provide reliability only. In this paper we focus on 1-round $\delta$-RMT and $(0,\delta)$-SMT, where the chance of protocol failure (the receiver cannot decode the sent message) is at most $\delta$ and, in the case of SMT, privacy is perfect. We propose a new approach to the construction of 1-round $\delta$-RMT and (0, $\delta$)-SMT for all connectivities $N \ge 2t+1$, using list-decodable codes and message authentication codes. Our concrete constructions use folded Reed-Solomon codes and multireceiver message authentication codes. The protocols have optimal transmission rates and provide the highest reliability among all known comparable protocols. Important advantages of these constructions are that (i) they can be adapted to all connectivities, and (ii) they have simple and direct security (privacy and reliability) proofs using properties of the underlying codes, and $\delta$ can be calculated from the parameters of the underlying codes. We discuss our results in relation to previous work in this area and propose directions for future research.
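The $N \ge 2t+1$ bound can be illustrated by the trivial, rate-inefficient baseline that the paper's coding-based protocols improve upon: replicate the message over all paths and decode by majority. A minimal sketch (not the paper's protocol):

```python
import random
from collections import Counter

def transmit_majority(message, N, corrupted, adversary_msg):
    """Trivial 1-round RMT baseline: replicate the message over all N
    node-disjoint paths; the receiver decodes by majority vote.  With
    N >= 2t+1 and at most t corrupted paths, the t+1 honest copies form
    a strict majority, so decoding never fails -- but the transmission
    rate is poor, which is what the paper's folded-Reed-Solomon-based
    protocols improve."""
    received = [adversary_msg if i in corrupted else message
                for i in range(N)]
    return Counter(received).most_common(1)[0][0]

t = 3
N = 2 * t + 1                                   # minimum connectivity
corrupted = set(random.sample(range(N), t))     # adversary picks t paths
assert transmit_majority("m", N, corrupted, "forged") == "m"
```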
Last updated:  2013-06-26
On Ideal Lattices and Learning with Errors Over Rings
Vadim Lyubashevsky, Chris Peikert, Oded Regev
The ``learning with errors'' (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called \emph{ring-LWE}, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
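The LWE distribution described above can be made concrete with a toy generator of noise-perturbed linear equations (parameters are illustrative, far from cryptographic sizes; ring-LWE additionally imposes ring structure on the vectors $a$):

```python
import random

def lwe_samples(n, q, m, secret, noise_bound):
    """m LWE samples (a, <a,s> + e mod q) with |e| <= noise_bound.
    Distinguishing these from uniformly random pairs is the decisional
    LWE problem; ring-LWE gives the vectors a a ring structure, shrinking
    key material from roughly n^2 to n elements."""
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-noise_bound, noise_bound)   # small error term
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
        samples.append((a, b))
    return samples

q, n = 97, 8
s = [random.randrange(q) for _ in range(n)]
for a, b in lwe_samples(n, q, 5, s, noise_bound=2):
    # Knowing s, the residual b - <a,s> recovers the small noise e.
    r = (b - sum(ai * si for ai, si in zip(a, s))) % q
    assert min(r, q - r) <= 2
```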
Last updated:  2012-04-30
Languages with Efficient Zero-Knowledge PCP's are in SZK
Mohammad Mahmoody, David Xiao
A \emph{Zero-Knowledge PCP} (ZK-PCP) is a randomized PCP such that the view of any (perhaps cheating) efficient verifier can be efficiently simulated up to small statistical distance. Kilian, Petrank, and Tardos (STOC '97) constructed ZK-PCPs for all languages in $\NEXP$. Ishai, Mahmoody, and Sahai (TCC '12), motivated by cryptographic applications, revisited the possibility of \emph{efficient} ZK-PCPs for all $L \in \NP$ where the PCP is encoded as a polynomial-size circuit that given a query $i$ returns the $i\th$ symbol of the PCP. Ishai \etal showed that there is no efficient ZK-PCP for $\NP$ with a \emph{non-adaptive} verifier, who prepares all of its PCP queries before seeing any answers, unless $\NP \se \coAM$ and the polynomial-time hierarchy collapses. The question of whether \emph{adaptive} verification can lead to efficient ZK-PCPs for $\NP$ remained open. In this work, we resolve this question and show that any language or promise problem with efficient ZK-PCPs must be in $\SZK$ (the class of promise problems with a statistical zero-knowledge \emph{single prover} proof system). Therefore, no $\NP$-complete problem can have an efficient ZK-PCP unless $\NP \se \SZK$ (which also implies $\NP \se \coAM$ and that the polynomial-time hierarchy collapses). We prove our result by reducing any promise problem with an efficient ZK-PCP to two instances of the $\CEA$ problem defined and studied by Vadhan (FOCS '04), which is known to be complete for the class $\SZK$.
Last updated:  2012-04-30
Physical Unclonable Functions in Cryptographic Protocols: Security Proofs and Impossibility Results
Marten van Dijk, Ulrich Rührmair
We investigate the power of physical unclonable functions (PUFs) as a new primitive in cryptographic protocols. Our contributions split into three parts. Firstly, we focus on the realizability of PUF-protocols in a special type of stand-alone setting (the “stand-alone, good PUF setting”) under minimal assumptions. We provide new PUF definitions that require only weak average security properties of the PUF, and prove that these definitions suffice to realize secure PUF-based oblivious transfer (OT), bit commitment (BC) and key exchange (KE) in said setting. Our protocols for OT, BC and KE are partly new, and have certain practicality and security advantages compared to existing schemes. In the second part of the paper, we formally prove that there are very sharp limits on the usability of PUFs for OT and KE {\em beyond} the above stand-alone, good PUF scenario. We introduce two new and realistic attack models, the so-called posterior access model (PAM) and the bad PUF model, and prove several impossibility results in these models. First, OT and KE protocols whose security is solely based on PUFs are generally impossible in the PAM. More precisely, one-time access of an adversary to the PUF after the end of a single protocol (sub-)session makes all previous (sub-)sessions provably insecure. Second, OT whose security is solely based on PUFs is impossible in the bad PUF model, even if only a stand-alone execution of the protocol is considered (i.e., even if no adversarial PUF access after the protocol is allowed). Our impossibility proofs hold not only for the weak PUF definition of the first part of the paper, but even apply if ideal randomness and unpredictability are assumed in the PUF, i.e., if the PUF is modeled as a random permutation oracle. In the third part, we investigate the feasibility of PUF-based bit commitment beyond the stand-alone, good PUF setting. For a number of reasons, this case is more complicated than OT and KE. We first prove that BC is impossible in the bad PUF model if players have access to the PUF between the commit and the reveal phase. Again, this result holds even if the PUF is “ideal” and modeled as a random permutation oracle. Secondly, we sketch (without proof) two new BC-protocols, which can deal with bad PUFs or with adversarial access between the commit and reveal phase, but not with both. We hope that our results can contribute to a clarification of the usability of PUFs in cryptographic protocols. They show that new hardware properties such as offline certifiability and the erasure of PUF responses would be required in order to make PUFs a broadly applicable cryptographic tool. These features have not yet been realized in practical PUF-implementations and generally seem hard to achieve at low cost. Our findings also show that the question of how PUFs can be modeled comprehensively in a UC-setting must be considered at least partly open.
Last updated:  2013-04-13
Secure password-based remote user authentication scheme with non-tamper resistant smart cards
Ding Wang, Chun-guang Ma, Peng Wu
It is a challenge for password authentication protocols using non-tamper resistant smart cards to achieve user anonymity, forward secrecy, immunity to various attacks and high performance at the same time. In DBSec'11, Li et al. showed that Kim and Chung's password-based remote user authentication scheme is vulnerable to various attacks if the smart card is non-tamper resistant. Consequently, an improved version was proposed and claimed to be secure against smart card security breach attacks. In this paper, however, we show that Li et al.'s scheme still cannot withstand an offline password guessing attack under the non-tamper resistance assumption of the smart card. In addition, their scheme is also vulnerable to a denial of service attack and fails to provide user anonymity and forward secrecy. As our main contribution, a robust scheme is presented to cope with the aforementioned defects, while keeping the merits of different password authentication schemes using smart cards. The analysis demonstrates that our scheme meets all the proposed criteria and eliminates several hard security threats that are difficult to tackle simultaneously in previous work.
Last updated:  2012-04-30
ZKPDL: A Language-Based System for Efficient Zero-Knowledge Proofs and Electronic Cash
Sarah Meiklejohn, C. Chris Erway, Alptekin Küpçü, Theodora Hinkle, Anna Lysyanskaya
In recent years, many advances have been made in cryptography, as well as in the performance of communication networks and processors. As a result, many advanced cryptographic protocols are now efficient enough to be considered practical, yet research in the area remains largely theoretical and little work has been done to use these protocols in practice, despite a wealth of potential applications. This paper introduces a simple description language, ZKPDL, and an interpreter for this language. ZKPDL implements non-interactive zero-knowledge proofs of knowledge, a primitive which has received much attention in recent years. Using our language, a single program may specify the computation required by both the prover and verifier of a zero-knowledge protocol, while our interpreter performs a number of optimizations to lower both computational and space overhead. Our motivating application for ZKPDL has been the efficient implementation of electronic cash. As such, we have used our language to develop a cryptographic library, Cashlib, that provides an interface for using ecash and fair exchange protocols without requiring expert knowledge from the programmer.
Last updated:  2012-09-01
When Homomorphism Becomes a Liability
Zvika Brakerski
We show that an encryption scheme cannot have a simple decryption function and be homomorphic at the same time, even with added noise. Specifically, if a scheme can homomorphically evaluate the majority function, then its decryption cannot be weakly learnable (in particular, linear), even if large decryption error is allowed. (In contrast, without homomorphism, such schemes do exist and are presumed secure, e.g. based on LPN.) An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of low Hamming-weight noise cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public key encryption. Using these techniques, we show that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: one that uses homomorphism, and another that directly attacks a component of the scheme.
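The tension between majority evaluation and large decryption error can be seen in a small simulation: taking a majority over many ciphertexts of the same bit drives the effective error toward zero, which is the amplification step behind learning-based attacks. This models only the decrypted plaintexts, not any concrete scheme:

```python
import random

def noisy_encryptions(bit, k, err):
    """Model k ciphertexts of the same bit under a scheme whose decryption
    errs independently with probability err; we track only the plaintext
    each ciphertext would decrypt to, not any concrete scheme."""
    return [bit ^ (random.random() < err) for _ in range(k)]

def majority(bits):
    """The function the homomorphic evaluation is assumed to compute."""
    return int(sum(bits) * 2 > len(bits))

# Majority over many encryptions of the same bit drives the effective
# decryption error toward 0 (a Chernoff-bound effect), which is why large
# decryption error cannot rescue a weakly-learnable decryption function.
random.seed(1)
trials = 200
wrong = sum(majority(noisy_encryptions(1, 101, err=0.3)) != 1
            for _ in range(trials))
assert wrong < 10      # far below the per-ciphertext error rate of 0.3
```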
Last updated:  2014-04-14
Shorter IBE and Signatures via Asymmetric Pairings
Jie Chen, Hoon Wei Lim, San Ling, Huaxiong Wang, Hoeteck Wee
We present efficient Identity-Based Encryption (IBE) and signature schemes under the Symmetric External Diffie-Hellman (SXDH) assumption in bilinear groups; our IBE scheme also achieves anonymity. In both the IBE and the signature schemes, all parameters consist of a constant number of group elements and are shorter than those of previous constructions based on the Decisional Linear (DLIN) assumption. Our constructions use both dual system encryption (Waters, Crypto '09) and dual pairing vector spaces (Okamoto and Takashima, Pairing '08, Asiacrypt '09). Specifically, we show how to adapt the recent DLIN-based instantiations of Lewko (Eurocrypt '12) to the SXDH assumption. To our knowledge, this is the first work to instantiate either dual system encryption or dual pairing vector spaces under the SXDH assumption. Furthermore, our work can be extended to many other functional encryption schemes. In particular, we show how to instantiate our framework for Inner Product Encryption (IPE) and Key-Policy Functional Encryption (KP-FE). All parameters of our constructions are shorter than those of DLIN-based constructions.
Last updated:  2012-08-11
A Generalization of the Rainbow Band Separation Attack and its Applications to Multivariate Schemes
Enrico Thomae
The Rainbow Signature Scheme is a non-trivial generalization of the well known Unbalanced Oil and Vinegar (UOV) signature scheme (Eurocrypt '99) minimizing the length of the signatures. By now the Rainbow Band Separation attack is the best key recovery attack known. For some sets of parameters it is even faster than a direct attack on the public key. Unfortunately the available description of the attack is very ad hoc and does not provide deep insights. In this article we provide another view on the Rainbow Band Separation attack using the theory of equivalent keys and a new generalization called good keys. Thereby we generalize the attack into a framework that also includes Reconciliation attacks. We further formally prove the correctness of the attack and show that it does not only perform well on Rainbow, but on all multivariate quadratic (MQ) schemes that suffer from missing cross-terms. We apply our attack and break the Enhanced STS signature scheme and all its variants, as well as the MFE encryption scheme and its variant based on Diophantine equations. In the case of Rainbow and Enhanced TTS we show that parameters have to be chosen carefully and that the remaining efficiency gain over UOV is small. As there is still some room to improve the Band Separation attack, it is not clear whether layer-based MQ-schemes will eventually become superfluous or not.
Last updated:  2012-08-21
A secret sharing scheme of prime numbers based on hardness of factorization
Kai-Yuen Cheong
Secret sharing schemes usually have perfect information theoretic security. This implies that each share must be as large as the secret. In this work we propose a scheme where the shares are smaller, while the security becomes computational. The computational security assumption is hardness of factorization, which is a simple and rather standard assumption in cryptography. In our scheme, the shared secret can only be a set of prime numbers.
Last updated:  2012-04-23
Almost-Everywhere Secure Computation with Edge Corruptions
Nishanth Chandran, Juan Garay, Rafail Ostrovsky
We consider secure multi-party computation (MPC) in a setting where the adversary can separately corrupt not only the parties (nodes) but also the communication channels (edges), and can furthermore choose selectively and adaptively which edges or nodes to corrupt. Note that if an adversary corrupts an edge, even if the two nodes that share that edge are honest, the adversary can control the link and thus deliver wrong messages to both players. We consider this question in the information-theoretic setting, and require security against a computationally unbounded adversary. In a fully connected network the above question is simple (and we also provide an answer that is optimal up to a constant factor). What makes the problem more challenging is to consider the case of sparse networks. Partially connected networks are far more realistic than fully connected networks, which led Garay and Ostrovsky [Eurocrypt'08] to formulate the notion of (unconditional) \emph{almost everywhere (a.e.) secure computation} in the node-corruption model, i.e., a model in which not all pairs of nodes are connected by secure channels and the adversary can corrupt some of the nodes (but not the edges). In such a setting, MPC amongst all honest nodes cannot be guaranteed due to the possible poor connectivity of some honest nodes with other honest nodes, and hence some of them must be ``given up'' and left out of the computation. The number of such nodes is a function of the underlying communication graph and the adversarial set of nodes. In this work we introduce the notion of \emph{almost-everywhere secure computation with edge corruptions}, which is exactly the same problem as described above, except that we additionally allow the adversary to completely control some of the communication channels between two correct nodes---i.e., to ``corrupt'' edges in the network. While it is easy to see that an a.e. secure computation protocol for the original node-corruption model is also an a.e. secure computation protocol tolerating edge corruptions (albeit for a reduced fraction of edge corruptions with respect to the bound for node corruptions), no polynomial-time protocol is known in the case where a {\bf constant fraction} of the edges can be corrupted (i.e., the maximum that can be tolerated) and the degree of the network is sub-linear. We make progress on this front, by constructing graphs of degree $O(n^\epsilon)$ (for arbitrary constant $0<\epsilon<1$) on which we can run a.e. secure computation protocols tolerating a constant fraction of adversarial edges. The number of given-up nodes in our construction is $\mu n$ (for some constant $0<\mu<1$ that depends on the fraction of corrupted edges), which is also asymptotically optimal.
Last updated:  2012-04-22
Hedged Public-key Encryption: How to Protect against Bad Randomness
Mihir Bellare, Zvika Brakerski, Moni Naor, Thomas Ristenpart, Gil Segev, Hovav Shacham, Scott Yilek
Public-key encryption schemes rely for their IND-CPA security on per-message fresh randomness. In practice, randomness may be of poor quality for a variety of reasons, leading to failure of the schemes. Expecting the systems to improve is unrealistic. What we show in this paper is that we can, instead, improve the cryptography to offset the possible lack of good randomness. We provide public-key encryption schemes that achieve IND-CPA security when the randomness they use is of high quality, but, when the latter is not the case, rather than breaking completely, they achieve a weaker but still useful notion of security that we call IND-CDA. This hedged public-key encryption provides the best possible security guarantees in the face of bad randomness. We provide simple RO-based ways to make in-practice IND-CPA schemes hedge secure with minimal software changes. We also provide non-RO model schemes relying on lossy trapdoor functions (LTDFs) and techniques from deterministic encryption. They achieve adaptive security by establishing and exploiting the anonymity of LTDFs, which we believe is of independent interest.
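The RO-based hedging idea mentioned in the abstract derives encryption coins by hashing the public key, message, and supplied randomness together. A sketch with SHA-256 standing in for the random oracle; the function name and parameters are illustrative, not the paper's notation:

```python
import hashlib, os

def hedged_coins(pk_bytes, message, randomness, out_len=32):
    """Derive encryption coins as H(pk || m || r), with SHA-256 standing
    in for the random oracle.  If r is fresh and uniform, the coins are
    too (IND-CPA is preserved); if r is bad, the coins still depend on
    the whole (pk, m, r) triple, which is the intuition behind the weaker
    hedged guarantee.  (A real implementation would use an unambiguous
    encoding of the three inputs rather than plain concatenation.)"""
    return hashlib.sha256(pk_bytes + message + randomness).digest()[:out_len]

pk, r = b"public-key-bytes", os.urandom(16)
c = hedged_coins(pk, b"msg", r)
assert c == hedged_coins(pk, b"msg", r)     # deterministic in (pk, m, r)
assert c != hedged_coins(pk, b"msg2", r)    # coins vary with the message
```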
Last updated:  2012-04-22
Private Fingerprint Matching
Siamak F. Shahandashti, Reihaneh Safavi-Naini, Philip Ogunbona
We propose a fully private fingerprint matching protocol that compares two fingerprints based on the most widely-used minutia-based fingerprint matching algorithm. The protocol enables two parties, each holding a private fingerprint, to find out if their fingerprints belong to the same individual. Unlike previous works, we do not make any simplifying assumption on the matching algorithm or use generic multiparty computation protocols in our constructions. We employ a commonly-used algorithm that works by first comparing minutia pairs from the two fingerprints based on their types, locations, and orientations, and then checking if the number of matching minutia pairs is more than a threshold, and we propose a concrete, scalable, and modular protocol. We prove security against honest-but-curious adversaries and discuss how security against malicious adversaries can be achieved using standard cryptographic techniques. Our protocol is realized using common cryptographic primitives and does not require pairing- or lattice-based cryptography.
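For reference, the plaintext (non-private) minutia comparison that the protocol computes under encryption looks roughly as follows; the tolerances and threshold are illustrative, not taken from the paper:

```python
import math

def minutiae_match(fp1, fp2, d_max=10.0, theta_max=15.0, threshold=3):
    """Plaintext version of the minutia-based comparison the protocol
    computes under encryption.  A minutia is (type, x, y, angle); two
    minutiae match if their types agree and their locations and
    orientations are within tolerance; the fingerprints match if at
    least `threshold` minutia pairs do.  Tolerances are illustrative."""
    matched, used = 0, set()
    for t1, x1, y1, a1 in fp1:
        for j, (t2, x2, y2, a2) in enumerate(fp2):
            if j in used or t1 != t2:
                continue
            angle_diff = abs(a1 - a2) % 360
            if math.hypot(x1 - x2, y1 - y2) <= d_max and \
               min(angle_diff, 360 - angle_diff) <= theta_max:
                matched += 1
                used.add(j)
                break
    return matched >= threshold

fp = [("ridge_end", 10, 12, 30), ("bifurcation", 40, 41, 90),
      ("ridge_end", 70, 15, 200), ("bifurcation", 25, 60, 300)]
noisy = [(t, x + 2, y - 1, (a + 5) % 360) for t, x, y, a in fp]
assert minutiae_match(fp, noisy)            # same finger, small distortion
assert not minutiae_match(fp, fp[:1] * 4)   # degenerate non-match
```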
Last updated:  2012-04-22
Differentially Private Continual Monitoring of Heavy Hitters from Distributed Streams
T-H. Hubert Chan, Mingfei Li, Elaine Shi, Wenchang Xu
We consider application scenarios where an untrusted aggregator wishes to continually monitor the heavy hitters across a set of distributed streams. Since each stream can contain sensitive data, such as the purchase history of customers, we wish to guarantee the privacy of each stream, while allowing the untrusted aggregator to accurately detect the heavy hitters and their approximate frequencies. Our protocols are scalable in settings where the volume of streaming data is large, since we guarantee low memory usage and processing overhead by each data source, and low communication overhead between the data sources and the aggregator.
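A simplified sketch of the privacy side of such a system: each source perturbs its local counts with Laplace noise before reporting, and the untrusted aggregator thresholds the noisy sums. The paper's protocols additionally bound memory and handle continual observation; parameters here are illustrative:

```python
import math, random
from collections import Counter

def laplace_noise(eps, sensitivity=1.0):
    """One sample of Laplace noise with scale sensitivity/eps, the
    standard mechanism for eps-differential privacy of a count."""
    u = random.random() - 0.5
    return -(sensitivity / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_report(stream, eps):
    """Each source perturbs its local item counts before sending them to
    the untrusted aggregator (sketch only: no memory bound, single round)."""
    return {item: c + laplace_noise(eps)
            for item, c in Counter(stream).items()}

random.seed(7)
streams = [["a"] * 50 + ["b"] * 2, ["a"] * 60 + ["c"] * 3]
total = Counter()
for s in streams:
    for item, c in noisy_report(s, eps=1.0).items():
        total[item] += c
heavy = {item for item, c in total.items() if c >= 20}   # frequency threshold
assert heavy == {"a"}
```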
Last updated:  2019-06-20
Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications
Itai Dinur, Orr Dunkelman, Nathan Keller, Adi Shamir
In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called {\it dissection}, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with $r$ independent $n$-bit keys. All the previous error-free attacks required time $T$ and memory $M$ satisfying $TM = 2^{rn}$, and even if ``false negatives'' are allowed, no attack could achieve $TM<2^{3rn/4}$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of $TM$, such as $T=2^{4n}$ time and $M=2^{n}$ memory for breaking the sequential execution of $r=7$ block ciphers. The improvement ratio we obtain increases in an unbounded way as $r$ increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to improve rebound attacks on hash functions and to solve with better time complexities (for small memory complexities) hard combinatorial search problems, such as the well-known knapsack problem.
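The classical $TM = 2^{rn}$ baseline for $r=2$ is the familiar meet-in-the-middle attack: tabulate forward encryptions under all first keys, then meet them with backward decryptions under all second keys. A toy instance with an invented 8-bit cipher (not from the paper):

```python
from collections import defaultdict

n = 8
MASK = (1 << n) - 1
INV157 = pow(157, -1, 1 << n)      # 157 is odd, hence invertible mod 2^8

def E(k, x):
    """Invented toy 8-bit 'block cipher' (illustrative only)."""
    return ((x ^ k) * 157 + k) & MASK

def D(k, y):
    return (((y - k) * INV157) & MASK) ^ k

k1, k2, pt = 0x3A, 0xC5, 0x17
ct = E(k2, E(k1, pt))              # double encryption, r = 2 independent keys

# Meet in the middle: 2^n memory for the forward table, 2^n time for the
# backward pass, so T * M = 2^{2n} = 2^{rn} -- the baseline that dissection
# beats for larger r.
table = defaultdict(list)
for k in range(1 << n):
    table[E(k, pt)].append(k)
candidates = [(ka, kb) for kb in range(1 << n)
              for ka in table.get(D(kb, ct), [])]
assert (k1, k2) in candidates
```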
Last updated:  2012-07-24
Adaptive CCA Broadcast Encryption with Constant-Size Secret Keys and Ciphertexts
Duong-Hieu Phan, David Pointcheval, Siamak F. Shahandashti, Mario Strefler
We consider designing broadcast encryption schemes with constant-size secret keys and ciphertexts, achieving chosen-ciphertext security. We first argue that known CPA-to-CCA transforms currently do not yield such schemes. We then propose a scheme, modifying a previous selective CPA secure proposal by Boneh, Gentry, and Waters. Our proposed scheme has constant-size secret keys and ciphertexts and we prove that it is selective chosen-ciphertext secure based on standard assumptions. Our scheme has ciphertexts that are shorter than those of the previous CCA secure proposals. Then we propose a second scheme that provides the functionality of both broadcast encryption and revocation schemes simultaneously using the same set of parameters. Finally we show that it is possible to prove our first scheme adaptive chosen-ciphertext secure under reasonable extensions of the bilinear Diffie-Hellman exponent and the knowledge of exponent assumptions. We prove both of these extended assumptions in the generic group model. Hence, our scheme becomes the first to achieve constant-size secret keys and ciphertexts (both asymptotically optimal) and adaptive chosen-ciphertext security at the same time.
Last updated:  2012-06-18
Quadratic Span Programs and Succinct NIZKs without PCPs
Rosario Gennaro, Craig Gentry, Bryan Parno, Mariana Raykova
We introduce a new characterization of the NP complexity class, called Quadratic Span Programs (QSPs), which is a natural extension of span programs defined by Karchmer and Wigderson. Our main motivation is the construction of succinct arguments of NP-statements that are quick to construct and verify. QSPs seem well-suited for this task, perhaps even better than Probabilistically Checkable Proofs (PCPs). In 2010, Groth constructed a NIZK argument in the common reference string (CRS) model for Circuit-SAT consisting of only 42 elements in a bilinear group. Interestingly, his argument does not (explicitly) use PCPs. But his scheme has some disadvantages -- namely, the CRS size and prover computation are both quadratic in the circuit size. In 2011, Lipmaa reduced the CRS size to quasi-linear, but with prover computation still quadratic. Using QSPs we construct a NIZK argument in the CRS model for Circuit-SAT consisting of just 7 group elements. The CRS size is linear in the circuit size, and prover computation is quasi-linear, making our scheme seemingly quite practical. (The prover only needs to do a linear number of group operations; the quasi-linear computation is a multipoint evaluation and interpolation.) Our results are complementary to those of Valiant (TCC 2008) and Bitansky et al. (2012), who use "bootstrapping" (recursive composition) of arguments to reduce CRS size and prover and verifier computation. QSPs also provide a crisp mathematical abstraction of some of the techniques underlying Groth's and Lipmaa's constructions.
Last updated:  2012-10-29
Automatic Search of Truncated Impossible Differentials for Word-Oriented Block Ciphers (Full Version)
Shengbao Wu, Mingsheng Wang
Impossible differential cryptanalysis is a powerful technique to recover the secret key of block ciphers by exploiting the fact that specific input and output differences are incompatible. This paper introduces a novel tool to search for truncated impossible differentials in word-oriented block ciphers with bijective Sboxes. Our tool generalizes the earlier $\mathcal{U}$-method and the UID-method. It reduces the gap between the best impossible differentials found by these methods and the best known differentials found by ad hoc methods that rely on cryptanalytic insights. The time and space complexities of our tool in judging an $r$-round truncated impossible differential are about $O(c\cdot l^4\cdot r^4)$ and $O(c'\cdot l^2\cdot r^2)$ respectively, where $l$ is the number of words in the plaintext and $c$, $c'$ are constants depending on the machine and the block cipher. To demonstrate the strength of our tool, we show that it not only automatically rediscovers the longest truncated impossible differentials of many word-oriented block ciphers, but also finds new results. It independently rediscovers all 72 known truncated impossible differentials on 9-round CLEFIA. In addition, it finds new truncated impossible differentials for AES, ARIA, Camellia without FL and FL$^{-1}$ layers, E2, LBlock, MIBS and Piccolo. Although our tool does not improve the lengths of impossible differentials for existing block ciphers, it helps to close the gap between the best known results of previous tools and those of manual cryptanalysis.
Last updated:  2012-04-22
Relation between Verifiable Random Functions and Convertible Undeniable Signatures, and New Constructions
Kaoru Kurosawa, Ryo Nojima, Le Trieu Phong
Verifiable random functions (VRF) and selectively-convertible undeniable signature (SCUS) schemes were proposed independently in the literature. In this paper, we observe that they are tightly related. This directly yields several deterministic SCUS schemes based on existing VRF constructions. In addition, we create a new probabilistic SCUS scheme, which is very compact. The confirmation and disavowal protocols of these SCUS are efficient, and can be run either sequentially, concurrently, or arbitrarily. These protocols are based on what we call zero-knowledge protocols for generalized DDH and non-DDH, which are of independent interest.
Last updated:  2012-08-08
Perfect Algebraic Immune Functions
Meicheng Liu, Yin Zhang, Dongdai Lin
A perfect algebraic immune function is a Boolean function with perfect immunity against algebraic and fast algebraic attacks. The main results are that for a perfect algebraic immune balanced function the number of input variables is one more than a power of two, while for a perfect algebraic immune unbalanced function the number of input variables is a power of two. Also, the Carlet-Feng functions on $2^s+1$ variables and the modified Carlet-Feng functions on $2^s$ variables are shown to be perfect algebraic immune functions. Furthermore, it is shown that a perfect algebraic immune function also behaves well against probabilistic algebraic attacks.
Last updated:  2013-08-19
Strongly Secure Authenticated Key Exchange from Factoring, Codes, and Lattices
Atsushi Fujioka, Koutarou Suzuki, Keita Xagawa, Kazuki Yoneyama
An unresolved problem in research on authenticated key exchange (AKE) is to construct a protocol secure against advanced attacks, such as key compromise impersonation and maximal exposure attacks, without relying on random oracles. HMQV, a state-of-the-art AKE protocol, achieves both efficiency and the strong security proposed by Krawczyk (we call it the CK+ model), which includes resistance to advanced attacks. However, its security proof is given under the random oracle model. We propose a generic construction of AKE from a key encapsulation mechanism (KEM). The construction is based on a chosen-ciphertext secure KEM, and the resultant AKE protocol is CK+ secure in the standard model. The construction gives the first CK+ secure AKE protocols based on the hardness of the integer factorization problem, code-based problems, or learning problems with errors. In addition, instantiations under the Diffie-Hellman assumption or its variants can be proved to have strong security without non-standard assumptions such as $\pi$PRF and KEA1. Furthermore, we extend the CK+ model to the identity-based setting (called the id-CK+ model), and propose a generic construction of identity-based AKE (ID-AKE) based on identity-based KEM, which satisfies id-CK+ security. The construction leads to the first strongly secure ID-AKE protocols under the hardness of the integer factorization problem or learning problems with errors.
Last updated:  2012-04-22
On the Existence of Boolean Functions with Optimal Resistance against Fast Algebraic Attacks
Yusong Du, Fangguo Zhang
It has been pointed out that an $n$-variable Boolean function $f$ has optimal resistance against fast algebraic attacks if and only if there does not exist a nonzero $n$-variable Boolean function $g$ of degree lower than $\frac{n}{2}$ such that $fg=h$ and $\mathrm{deg}(g)+\mathrm{deg}(h)<n$. In this correspondence, we show that there does not exist an $n$-variable Boolean function with optimal resistance against fast algebraic attacks for most values of $n$.
Last updated:  2012-04-22
Adaptive Preimage Resistance Analysis Revisited: Requirements, Subtleties and Implications
Donghoon Chang, Moti Yung
In the last few years, the need to design new cryptographic hash functions has led to the intense study of when desired hash multi-properties are preserved or assured under compositions and domain extensions. In this area, it is important to identify the exact notions and provide often complex proofs of the resulting properties. Getting this analysis right (as part of provable security studies) is, in fact, analogous to cryptanalysis. We note that it is important, and quite subtle, to get the ``right'' notions and properties, and the ``right'' proofs, in this relatively young area. Specifically, the security notion we deal with is ``adaptive preimage resistance'' (apr), which was introduced by Lee and Park as an extension of ``preimage resistance'' (pr). At Eurocrypt 2010, in turn, Lee and Steinberger used the apr security notion to prove ``preimage awareness'' and ``indifferentiable security'' of their new double-piped mode of operation. They claimed that if $H^P$ is collision-resistant (cr) and apr, then $F(M)=\mathcal{R}(H^P(M))$ is indifferentiable from a variable output length (VIL) random oracle $\mathcal{F}$, where $H^P$ is a function based on an ideal primitive $P$ and $\mathcal{R}$ is a fixed input length (FIL) random oracle. However, there are some limitations in their claim, because they considered only the indifferentiability security notion in the information-theoretic adversarial model, not in the computation-theoretic adversarial model. As we show in the current work, the above statement is \textit{not} correct in the computation-theoretic adversarial model. First, we give a counterexample to the above statement. Secondly, we describe \textit{a new requirement} on $H^P$ (called ``admissibility'') so that the above statement is correct even in the computation-theoretic adversarial model. Thirdly, we show that apr is, in fact, not a strengthened notion of preimage resistance.
Fourthly, we explain the relation between preimage awareness and cr+apr+(our new requirement) in the computation-theoretic adversarial model. Finally, we show that a polynomial-based mode of operation \cite{LeSt10} satisfies our new requirement; namely, the polynomial-based mode of operation with fixed-input-length random oracles is indifferentiable from a variable-input-length random oracle in the computation-theoretic adversarial model.
Last updated:  2012-05-03
A New Guess-and-Determine Attack on the A5/1 Stream Cipher
Jay Shah, Ayan Mahalanobis
In Europe and North America, the most widely used stream cipher to ensure privacy and confidentiality of conversations in GSM mobile phones is the A5/1. In this paper, we present a new attack on the A5/1 stream cipher with an average time complexity of $2^{48.5}$, which is much less than the brute-force attack with a complexity of $2^{64}$. The attack has a $100\%$ success rate and requires about 5.65 GB of storage. We provide a detailed description of our new attack along with its implementation and results.
Last updated:  2012-05-25
Cryptanalysis of Hummingbird-2
Kai Zhang, Lin Ding, Jie Guan
Hummingbird is a lightweight encryption and message authentication primitive published in RISC’09 and WLC’10. In FSE’11, Markku-Juhani O. Saarinen presented a differential divide-and-conquer method with complexity upper bounded by $2^{64}$ operations which requires processing of a few megabytes of chosen messages under two related nonces (IVs). The improved version, Hummingbird-2, was presented at RFIDSec 2011. Based on the idea of differential collisions, this paper discovers some weaknesses in the round function WD16 combined with the key loading algorithm, and we propose a related-key chosen-IV attack which can recover the full secret key. Under 24 pairs of related keys, the 128-bit initial key can be recovered with a computational complexity of $O(2^{32.6})$ and a data complexity of $O(2^{32.6})$. The result shows that the Hummingbird-2 cipher cannot resist related-key attacks.
Last updated:  2012-09-10
(Pseudo) Preimage Attack on Round-Reduced Grøstl Hash Function and Others (Extended Version)
Shuang Wu, Dengguo Feng, Wenling Wu, Jian Guo, Le Dong, Jian Zou
The Grøstl hash function is one of the 5 final round candidates of the SHA-3 competition hosted by NIST. In this paper, we study the preimage resistance of the Grøstl hash function. We propose pseudo preimage attacks on the Grøstl hash function for both the 256-bit and 512-bit versions, i.e., we need to choose the initial value in order to invert the hash function. The pseudo preimage attack on 5 (out of 10)-round Grøstl-256 has a complexity of $(2^{244.85},2^{230.13})$ (in time and memory) and the pseudo preimage attack on 8 (out of 14)-round Grøstl-512 has a complexity of $(2^{507.32},2^{507.00})$. To the best of our knowledge, our attacks are the first (pseudo) preimage attacks on the round-reduced Grøstl hash function, including its compression function and output transformation. These results are obtained by a variant of the meet-in-the-middle preimage attack framework of Aoki and Sasaki. We also improve the time complexities of the preimage attacks against 5-round Whirlpool and the 7-round AES hashes by Sasaki from FSE 2011.
Last updated:  2012-04-15
Information-flow control for programming on encrypted data
J. C. Mitchell, R. Sharma, D. Stefan, J. Zimmerman
Using homomorphic encryption and secure multiparty computation, cloud servers may perform regularly structured computation on encrypted data, without access to decryption keys. However, prior approaches for programming on encrypted data involve restrictive models such as boolean circuits, or standard languages that do not guarantee secure execution of all expressible programs. We present an expressive core language for secure cloud computing, with primitive types, conditionals, standard functional features, mutable state, and a secrecy preserving form of general recursion. This language, which uses an augmented information-flow type system to prevent control-flow leakage, allows programs to be developed and tested using conventional means, then exported to a variety of secure cloud execution platforms, dramatically reducing the amount of specialized knowledge needed to write secure code. We present a Haskell-based implementation and prove that cloud implementations based on secret sharing, homomorphic encryption, or other alternatives satisfying our general definition meet precise security requirements.
Last updated:  2012-07-13
Unique Group Signatures
Matthew Franklin, Haibin Zhang
We initiate the study of unique group signatures, in which signatures of the same message by the same user always have a large common component (i.e., a unique identifier). This enables an efficient detection algorithm revealing the identities of illegal users, which is fundamentally different from previous primitives. We present a number of unique group signature schemes (without random oracles) under a variety of security models that extend the standard security models of ordinary group signatures. Our work is a beneficial step towards mitigating the well-known group signature paradox, and it also has many other interesting applications and efficiency implications.
Last updated:  2012-04-13
Robust biometric-based user authentication scheme for wireless sensor networks
Debiao He
Wireless sensor networks (WSNs) are applied widely in a variety of areas such as real-time traffic monitoring, measurement of seismic activity, wildlife monitoring and so on. User authentication in WSNs is a critical security issue due to their unattended and hostile deployment in the field. In 2010, Yuan et al. proposed the first biometric-based user authentication scheme for WSNs. However, Yoon et al. pointed out that Yuan et al.’s scheme is vulnerable to the insider attack, user impersonation attack, GW-node impersonation attack and sensor node impersonation attack. To improve security, Yoon et al. proposed an improved scheme and claimed their scheme could withstand various attacks. Unfortunately, we show that Yoon et al.’s scheme is vulnerable to the denial-of-service (DoS) attack and the sensor node impersonation attack. To overcome the weaknesses in Yoon et al.’s scheme, we propose a new biometric-based user authentication scheme for WSNs. The analysis shows our scheme is more suitable for practical applications.
Last updated:  2012-04-13
Secure Similarity Coefficients Computation with Malicious Adversaries
Bo Zhang, Fangguo Zhang
Similarity coefficients play an important role in many applications. Recently, a privacy-preserving similarity coefficients protocol for binary data was proposed by Wong and Kim (Computers and Mathematics with Applications, 2012). In this paper, we show that their protocol is not secure, even in the semi-honest model, since the client can retrieve the input of the server without deviating from the protocol. We also propose a secure similarity coefficients computation in the presence of malicious adversaries, and prove it secure using the standard simulation-based security definitions for secure two-party computation. We further discuss several extensions of our protocol for settling other problems. Technical tools in our protocol include zero-knowledge proofs and distributed ElGamal encryption.
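For reference, the binary similarity coefficients in question — e.g., the Jaccard and Dice coefficients — are simple to state. A plain Python computation in the clear (illustration only; the paper's contribution is computing such values without revealing the parties' inputs):

```python
def jaccard(x, y):
    # |x AND y| / |x OR y| for equal-length binary vectors.
    inter = sum(a & b for a, b in zip(x, y))
    union = sum(a | b for a, b in zip(x, y))
    return inter / union

def dice(x, y):
    # 2|x AND y| / (|x| + |y|).
    inter = sum(a & b for a, b in zip(x, y))
    return 2 * inter / (sum(x) + sum(y))

x, y = [1, 1, 0, 1, 0], [1, 0, 0, 1, 1]
assert jaccard(x, y) == 0.5            # 2 shared ones, 4 in the union
assert abs(dice(x, y) - 2 / 3) < 1e-12
```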
Last updated:  2012-04-19
Comment on an Anonymous Multi-receiver Identity-based Encryption Scheme
J. H. Zhang, Y. B. Cui
Anonymous receiver encryption is an important cryptographic primitive that can protect the privacy of the receiver. In 2010, Fan \emph{et al.} proposed an anonymous multi-receiver ID-based encryption scheme using Lagrange interpolating polynomials. Recently, Wang \emph{et al.} showed that Fan \emph{et al.}'s scheme does not satisfy the anonymity of the receivers. They then provided an improved scheme to fix this and showed that the improved scheme was secure. Unfortunately, by analyzing the security of the scheme, we point out that Wang \emph{et al.}'s improved scheme still does not satisfy the receiver's anonymity. After analyzing the cause of this flaw, we give an improved method to repair it and show that our improved scheme satisfies the receiver's anonymity; moreover, the improved scheme has an advantage over Wang \emph{et al.}'s scheme in terms of computational cost.
Last updated:  2012-04-13
Aggregate Signcryption
Alexander W. Dent
Signcryption schemes provide an efficient messaging system for data that needs to be sent with data confidentiality, data integrity and data origin authentication. However, the bandwidth overhead for the use of signcryption in a network in which a large number of messages need to be sent may be high. Motivated by aggregate signature schemes, we propose the concept of an aggregate signcryption scheme. An aggregate signcryption scheme allows distinct signcryption ciphertexts intended for the same recipient to be merged into a single signcryption ciphertext of smaller size without losing any of their security guarantees. This has the potential to provide significant bandwidth savings. We propose security models for this scheme, analyse the trivial generic constructions, propose an efficient new scheme, and analyse the bandwidth requirements of these schemes for a practical distributed database application.
Last updated:  2013-06-18
Using Symmetries in the Index Calculus for Elliptic Curves Discrete Logarithm
Jean-Charles Faugère, Pierrick Gaudry, Louise Huot, Guénaël Renault
In 2004, an algorithm was introduced to solve the DLP for elliptic curves defined over a non-prime finite field $\F_{q^n}$. One of the main steps of this algorithm requires decomposing points of the curve $E(\F_{q^n})$ with respect to a factor base; this problem is denoted PDP. In this paper, we apply this algorithm to the case of Edwards curves, the well-known family of elliptic curves that allow faster arithmetic, as shown by Bernstein and Lange. More precisely, we show how to take advantage of some symmetries of twisted Edwards and twisted Jacobi intersections curves to gain an exponential factor \(2^{\omega (n-1)}\) in solving the corresponding PDP, where $\omega$ is the exponent in the complexity of multiplying two dense matrices. Practical experiments supporting the theoretical result are also given. For instance, the complexity of solving the ECDLP for twisted Edwards curves defined over $\F_{q^5}$, with \(q\approx2^{64}\), is estimated at $\sim 2^{160}$ operations in $E(\F_{q^5})$ using generic algorithms, compared to \(2^{130}\) operations (multiplications of two $32$-bit words) with our method. For these parameters the PDP is intractable with the original algorithm. The main tool to achieve these results relies on the use of the symmetries and the quasi-homogeneous structure induced by these symmetries during the polynomial system solving step. We also use recent work on a new algorithm for the change of ordering of a Gröbner basis, which provides a better heuristic complexity for the total solving process.
Last updated:  2012-06-25
Beyond the Limitation of Prime-Order Bilinear Groups, and Round Optimal Blind Signatures
Jae Hong Seo, Jung Hee Cheon
At Eurocrypt 2010, Freeman proposed a transformation from pairing-based schemes in composite-order bilinear groups to equivalent ones in prime-order bilinear groups. His transformation can be applied to pairing-based cryptosystems exploiting only one of two properties of composite-order bilinear groups: cancelling and projecting. At Asiacrypt 2010, Meiklejohn, Shacham, and Freeman showed that prime-order bilinear groups according to Freeman's construction cannot have both properties simultaneously except with negligible probability and, as an instance of implausible conversion, proposed a (partially) blind signature scheme whose security proof exploits both the cancelling and projecting properties of composite-order bilinear groups. In this paper, we invalidate their evidence by presenting a security proof of the prime-order version of their blind signature scheme. Our security proof follows a different strategy and exploits only the projecting property. Instead of the cancelling property, a new property, which we call {\em translating}, on prime-order bilinear groups plays an important role in the security proof; its existence was not known in composite-order bilinear groups. With this proof, we obtain a $2$-move (i.e., round optimal) (partially) blind signature scheme (without random oracles) based on the decisional linear assumption in the common reference string model, which is of independent interest. As the second contribution of this paper, we construct prime-order bilinear groups that possess both the cancelling and projecting properties at the same time by considering more general base groups. That is, we take a rank-$n$ $\zp$-submodule of $\zp^{n^2}$, instead of $\zp^n$, to be a base group $G$, and consider the projections onto its rank-1 submodules.
We show that the subgroup decision assumption on this base group $G$ holds in the generic bilinear group model for $n=2$, and provide an efficient membership-checking algorithm to $G$, which was trivial in the previous setting. Consequently, it is still open whether there exists a cryptosystem on composite-order bilinear groups that cannot be constructed on prime-order bilinear groups.
Last updated:  2013-05-22
On The Security of One-Witness Blind Signature Schemes
Foteini Baldimtsi, Anna Lysyanskaya
Blind signatures have proved to be an essential building block for applications that protect privacy while ensuring unforgeability, e.g., electronic cash and electronic voting. One of the oldest, and most efficient, blind signature schemes is the one due to Schnorr, based on his famous identification scheme. Although it was proposed over twenty years ago, its unforgeability remains an open problem, even in the random-oracle model. In this paper, we show that current techniques for proving security in the random oracle model do not work for the Schnorr blind signature. Our results generalize to other important blind signatures, such as the one due to Brands. Brands' blind signature is at the heart of Microsoft's newly implemented U-Prove system, which makes this work relevant to cryptographic practice as well.
Last updated:  2013-05-31
Multi-Instance Security and its Application to Password-Based Cryptography
Mihir Bellare, Thomas Ristenpart, Stefano Tessaro
This paper develops a theory of multi-instance (mi) security and applies it to provide the first proof-based support for the classical practice of salting in password-based cryptography. Mi-security comes into play in settings (like password-based cryptography) where it is computationally feasible to compromise a single instance, and provides a second line of defense, aiming to ensure (in the case of passwords, via salting) that the effort to compromise all of some large number $m$ of instances grows linearly with $m$. The first challenge is definitions, where we suggest LORX-security as a good metric for mi security of encryption and support this claim by showing it implies other natural metrics, illustrating in the process that even lifting simple results from the si setting to the mi one calls for new techniques. Next we provide a composition-based framework to transfer standard single-instance (si) security to mi-security with the aid of a key-derivation function. Analyzing password-based KDFs from the PKCS#5 standard to show that they meet our indifferentiability-style mi-security definition for KDFs, we are able to conclude with the first proof that per-password salts amplify mi-security as hoped in practice. We believe that mi-security is of interest in other domains and that this work provides the foundation for its further theoretical development and practical application.
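The classical practice the paper supports — per-instance salts in password-based key derivation — can be illustrated with PBKDF2 from PKCS#5, using Python's `hashlib` (a minimal sketch, not the paper's formal model):

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes, iters: int = 100_000) -> bytes:
    # PBKDF2-HMAC-SHA256 as standardized in PKCS#5 / RFC 8018.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iters)

# Independent salts give unrelated keys even for identical passwords,
# so an attacker cannot amortize one dictionary attack across many
# instances: the effort to break all m instances grows linearly in m.
salt_a, salt_b = os.urandom(16), os.urandom(16)
assert derive_key(b"hunter2", salt_a) != derive_key(b"hunter2", salt_b)
```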
Last updated:  2012-04-16
The BlueJay Ultra-Lightweight Hybrid Cryptosystem
Markku-Juhani O. Saarinen
We report on the development of BlueJay, a hybrid Rabin-based public key encryption cryptosystem that is suitable for ultra-lightweight (total 2000-3000 GE) platforms such as microsensors and RFID authentication tags. The design is related to the author’s Passerine and the Oren-Feldhofer WIPR proposals, but is suitable for a wider array of applications. The encryption mechanism is significantly faster and the implementation more lightweight than RSA (even with public exponent 3) and ECC at the same security level. Hardware implementations of the asymmetric encryption component of the hybrid cryptosystem require less than a thousand gate equivalents in addition to the memory storage required for the payload and public key data. An inexpensive, milliscale MCU SoC BlueJay implementation is reported and compared to RSA-AES on the same platform. The private key operation (not performed by the light-weight device but by the sensor network base station or a data acquisition reader) has roughly the same complexity as the RSA private key operation.
Last updated:  2012-04-13
A Multivariate based Threshold Ring Signature Scheme
Albrecht Petzoldt, Stanislav Bulygin, Johannes Buchmann
In \cite{SS11}, Sakumoto et al. presented a new multivariate identification scheme whose security is based solely on the MQ-problem of solving systems of quadratic equations over finite fields. In this paper we extend this scheme to a threshold ring identification and signature scheme. Our scheme is the first multivariate scheme of this type and, more generally, the first multivariate signature scheme with special properties. Despite the fact that we need more rounds to achieve given levels of security, the signatures are less than half as long as those obtained by other post-quantum (e.g., code-based) constructions. Furthermore, our scheme offers provable security, which is quite rare in multivariate cryptography.
Last updated:  2012-04-13
An Efficient Homomorphic Encryption Protocol for Multi-User Systems
Liangliang Xiao, Osbert Bastani, I-Ling Yen
The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32-bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to $m \ln \mathrm{poly}(\lambda)$ chosen plaintexts for any predetermined $m$, where $\lambda$ is the security parameter. Multiplication, encryption, and decryption are almost linear in $m\lambda$, and addition is linear in $m\lambda$. Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for $\lambda=1024$ and $m=16$. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.
Last updated:  2012-04-13
Extending Order Preserving Encryption for Multi-User Systems
Liangliang Xiao, I-Ling Yen, Dung T. Huynh
Several order preserving encryption (OPE) algorithms have been developed in the literature to support search on encrypted data. However, existing OPE schemes only consider a single encryption key, which is infeasible for a practical system with multiple users (implying that all users should have the single encryption key in order to encrypt or decrypt confidential data). In this paper, we develop the first protocols, DOPE and OE-DOPE, to support the use of OPE in multi-user systems. First, we introduce a group of key agents into the system and invent the DOPE protocol to enable “distributed encryption”, assuring that the OPE encryption key is not known by any entity in the system. However, in DOPE, if a key agent is compromised, the share of the secret data that is sent to this key agent is compromised. To solve this problem, we develop a novel oblivious encryption (OE) protocol based on the oblivious transfer concept to deliver and encrypt the shares obliviously. Then, we integrate it with DOPE to obtain the OE-DOPE protocol. Security of OE-DOPE is further enhanced with additional techniques. Both DOPE and OE-DOPE can be used with any existing OPE algorithm while retaining all the advantages of OPE, without requiring the users to share a single encryption key, making the OPE approach feasible in practical systems.
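For context, the order-preserving property that DOPE and OE-DOPE build on can be shown with a toy (insecure) OPE: any keyed, strictly increasing mapping preserves plaintext order, so comparisons and range queries work directly on ciphertexts. A sketch of ours, not the paper's construction:

```python
import random

def gen_ope_table(key: int, domain_size: int, range_size: int):
    # A keyed, strictly increasing random map from [0, domain_size)
    # into [0, range_size): sample a random subset of the range and
    # sort it; position i of the table is the ciphertext of i.
    rng = random.Random(key)
    return sorted(rng.sample(range(range_size), domain_size))

table = gen_ope_table(key=42, domain_size=16, range_size=1 << 12)
encrypt = lambda m: table[m]

# Order is preserved, so an encrypted database can answer range
# queries without decrypting.
assert all(encrypt(i) < encrypt(i + 1) for i in range(15))
```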
Last updated:  2012-04-13
Security Analysis and Enhancement for Prefix-Preserving Encryption Schemes
Liangliang Xiao, I-Ling Yen
Prefix-preserving encryption (PPE) is an important type of encryption scheme with a wide range of applications, such as IP address anonymization, prefix-matching search, and range search. There are two issues in PPE schemes: the security proof and the single-key requirement. Existing security proofs for PPE only reduce the security of a real PPE scheme to that of the ideal PPE object by showing their computational indistinguishability \cite{Ama07,Xu02}. Such a security proof is incomplete, since the security of the ideal encryption object is unknown. Also, existing prefix-preserving encryption schemes only consider a single encryption key, which is infeasible for a practical system with multiple users (implying that all users would need to have the single encryption key in order to encrypt or decrypt confidential data). In this paper we develop a novel mechanism to analyze the security of the ideal PPE object. We follow the modern cryptographic approach and create a new security notion, IND-PCPA. Then, we show that such a weakened security notion is necessary and that the ideal PPE object is secure under IND-PCPA. We also design a new, security-enhanced PPE protocol to support its use in multi-user systems, where no single entity in the system knows the PPE key. The protocol secret-shares and distributes the PPE key to a group of key agents and lets them ``distributedly encrypt'' critical data. We develop a novel distributed PPE algorithm and the corresponding request and response protocols. Experimental results show that the protocol is feasible in practical systems.
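The prefix-preserving property itself can be sketched with a toy Crypto-PAn-style construction (our own illustration, not the paper's protocol): each ciphertext bit is the plaintext bit XORed with a pseudorandom bit derived from the plaintext prefix preceding it, so inputs sharing a $k$-bit prefix yield ciphertexts sharing a $k$-bit prefix.

```python
import hashlib
import hmac

def ppe_encrypt(key: bytes, addr: int, nbits: int = 32) -> int:
    # Bit i of the output is bit i of the input XORed with one
    # PRF bit computed from the i most significant input bits,
    # so equal prefixes map to equal ciphertext prefixes.
    out = 0
    for i in range(nbits):
        prefix = addr >> (nbits - i)          # the i most significant bits
        pad = hmac.new(key, f"{i}:{prefix}".encode(),
                       hashlib.sha256).digest()[0] & 1
        bit = (addr >> (nbits - 1 - i)) & 1
        out = (out << 1) | (bit ^ pad)
    return out

# 192.168.0.1 and 192.168.0.255 share a 24-bit prefix, and so
# do their encryptions under the same key.
a, b = 0xC0A80001, 0xC0A800FF
assert ppe_encrypt(b"key", a) >> 8 == ppe_encrypt(b"key", b) >> 8
```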
Last updated:  2012-08-10
On the Security of an Improved Password Authentication Scheme Based on ECC
Ding Wang, Chun-guang Ma
The design of secure remote user authentication schemes for mobile applications is still an open and quite challenging problem, though many schemes have been published lately. Recently, Islam and Biswas pointed out that Lin and Hwang et al.'s password-based authentication scheme is vulnerable to various attacks, and then presented an improved scheme based on elliptic curve cryptography (ECC) to overcome the drawbacks. Based on heuristic security analysis, Islam and Biswas claimed that their scheme is secure and can withstand all related attacks. In this paper, however, we show that Islam and Biswas's scheme cannot achieve the claimed security goals and report its flaws: (1) It is vulnerable to offline password guessing attack, stolen verifier attack and denial of service (DoS) attack; (2) It fails to preserve user anonymity. The cryptanalysis demonstrates that the scheme under study is unfit for practical use.
Last updated:  2013-04-11
Universally Composable Key-Management
Steve Kremer, Robert Künnemann, Graham Steel
We present the first universally composable key-management functionality, formalized in the GNUC framework by Hofheinz and Shoup. It allows the enforcement of a wide range of security policies and can be extended by diverse key usage operations with no need to repeat the security proof. We illustrate its use by proving an implementation of a security token secure with respect to arbitrary key-usage operations and explore a proof technique that allows the storage of cryptographic keys externally, a novel development in simulation-based security frameworks.
Last updated:  2012-04-11
Non-Malleable Extractors, Two-Source Extractors and Privacy Amplification
Xin Li
Dodis and Wichs introduced the notion of a non-malleable extractor to study the problem of privacy amplification with an active adversary. A non-malleable extractor is a much stronger version of a strong extractor. Given a weakly-random string $x$ and a uniformly random seed $y$ as the inputs, the non-malleable extractor $nmExt$ has the property that $nmExt(x,y)$ appears uniform even given $y$ as well as $nmExt(x,A(y))$, for an arbitrary function $A$ with $A(y) \neq y$. Dodis and Wichs showed that such an object can be used to give optimal privacy amplification protocols with an active adversary. Previously, there were only two known constructions of non-malleable extractors \cite{DLWZ11, CRS11}. Both constructions only work for $(n, k)$-sources with $k>n/2$. Interestingly, both constructions are also two-source extractors. In this paper, we present a strong connection between non-malleable extractors and two-source extractors. The first part of the connection shows that non-malleable extractors can be used to construct two-source extractors. If the non-malleable extractor works for small min-entropy and has a short seed length with respect to the error, then the resulting two-source extractor beats the best known construction of two-source extractors. This partially explains why previous constructions of non-malleable extractors only work for sources with entropy rate $>1/2$, and why explicit non-malleable extractors for small min-entropy may be hard to get. The second part of the connection shows that certain two-source extractors can be used to construct non-malleable extractors. Using this connection, we obtain the first construction of non-malleable extractors for $k < n/2$. Specifically, we give an unconditional construction for min-entropy $k=(1/2-\delta)n$ for some constant $\delta>0$, and a conditional (semi-explicit) construction that can potentially achieve $k=\alpha n$ for any constant $\alpha>0$.
We also generalize non-malleable extractors to the case where there is more than one adversarial seed, and show a similar connection between the generalized non-malleable extractors and two-source extractors. Finally, despite the lack of explicit non-malleable extractors for arbitrary linear entropy, we give the first 2-round privacy amplification protocol with asymptotically optimal entropy loss and communication complexity for $(n, k)$-sources with $k=\alpha n$ for any constant $\alpha>0$. This dramatically improves previous results and answers an open problem in \cite{DLWZ11}.
Last updated:  2012-04-13
SmartTokens: Delegable Access Control with NFC-enabled Smartphones (Full Version)
Alexandra Dmitrienko, Ahmad-Reza Sadeghi, Sandeep Tamrakar, Christian Wachsmann
Today's smartphones and tablets offer compelling computing and storage capabilities, enabling a variety of mobile applications with rich functionality. The integration of new interfaces, in particular near field communication (NFC), opens opportunities for new applications and business models, as the most recent trend in industry for payment and ticketing shows. These applications require storing and processing security-critical data on smartphones, making them attractive targets for a variety of attacks. The state of the art in enhancing platform security is to outsource security-critical computations to hardware-isolated Trusted Execution Environments (TrEE). However, since these TrEEs are used by software running in commodity operating systems, malware could impersonate the software and use the TrEE in an unintended way. Further, existing NFC-based access control solutions for smartphones are either not public or based on strong assumptions that are hard to achieve in practice. We present the design and implementation of a generic access control system for NFC-enabled smartphones based on a multi-level security architecture for smartphones. Our solution allows users to delegate their access rights and addresses the bandwidth constraints of NFC. Our prototype captures electronic access to facilities, such as entrances and offices, and binds NFC operations to a software-isolated TrEE established on the widely used Android smartphone operating system. We provide a formal security analysis of our protocols and evaluate the performance of our solution.
Last updated:  2012-04-11
Third-order nonlinearities of some biquadratic monomial Boolean functions
Uncategorized
Brajesh Kumar Singh
Show abstract
Uncategorized
In this paper, we estimate the lower bounds on third-order nonlinearities of some biquadratic monomial Boolean functions of the form $Tr_1^n(\lambda x^d)$ for all $x \in \mathbb F_{2^n}$, where $\lambda \in \mathbb F_{2^n}^{*}$, \begin{itemize} \item [{(1)}]$d = 2^i + 2^j + 2^k + 1$, $i, j, k$ are integers such that $ i > j > k \geq 1$ and $n > 2 i$. \item [{(2)}] $d = 2^{3\ell} + 2^{2\ell} + 2^{\ell} + 1$, $\ell$ is a positive integer such that $\gcd (\ell, n) = 1$ and $n > 6$. \end{itemize}
Last updated:  2012-05-25
Replay attacks that violate ballot secrecy in Helios
Ben Smyth
Helios 2.0 is a web-based end-to-end verifiable electronic voting system, suitable for use in low-coercion environments. In this paper we identify a vulnerability in Helios which allows an adversary to compromise the privacy of voters who cast abstention votes. The vulnerability can be attributed to the absence of ballot independence and the use of homomorphic ElGamal encryption. In particular, these properties can be exploited by an adversary to construct a ballot related to an abstention vote cast by an honest voter; this ballot can then be submitted by a corrupt voter to influence the election outcome, thereby introducing information that can be used to violate privacy. We demonstrate the attack by breaking privacy in a mock election using the current Helios implementation. It is unlikely that the vulnerability will be exploited in a real-world election and therefore our results are largely theoretical. Nonetheless, we cannot expect any computational proofs of ballot secrecy without fixing this vulnerability and, moreover, the attack methodology may be of interest -- in particular, it could represent a viable threat to existing protocols in the literature -- thus providing motivation to report these results.
Last updated:  2012-04-16
Asymptotic fingerprinting capacity in the Combined Digit Model
Uncategorized
Dion Boesten, Boris Skoric
Show abstract
Uncategorized
We study the channel capacity of $q$-ary fingerprinting in the limit of large attacker coalitions. We extend known results by considering the Combined Digit Model, an attacker model that captures signal processing attacks such as averaging and noise addition. For $q=2$ we give results for various attack parameter settings. For $q\geq 3$ we present the relevant equations without providing a solution. We show how the channel capacity in the Restricted Digit Model is obtained as a limiting case of the Combined Digit Model.
Last updated:  2013-08-02
Differentially Private Smart Metering with Battery Recharging
Uncategorized
Michael Backes, Sebastian Meiser
Show abstract
Uncategorized
The energy industry has recently begun using smart meters to take fine-grained readings of energy usage. These smart meters enable flexible time-of-use billing, forecasting, and demand response, but they also raise serious user privacy concerns. We propose a novel technique for provably hiding sensitive power consumption information in the overall power consumption stream. Our technique relies on a rechargeable battery that is connected to the household's power supply. This battery is used to modify the household's power consumption by adding or subtracting noise (i.e., increasing or decreasing power consumption), in order to establish strong privacy guarantees in the sense of differential privacy. To achieve these privacy guarantees in realistic settings, we first investigate the influence of, and the interplay between, capacity and throughput bounds that batteries face in reality. We then propose an integrated method based on noise cascading that allows for recharging the battery on-the-fly so that differential privacy is retained, while adhering to capacity and throughput constraints, and while keeping the additional consumption of energy induced by our technique to a minimum.
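As a minimal sketch of the battery idea (not the paper's noise-cascading mechanism; all parameter names and values below are illustrative assumptions), one can draw Laplace-style noise and clip it to what the battery can physically absorb or supply in one metering interval:

```python
import random

def battery_masked_reading(true_load, charge, capacity=500.0, max_rate=200.0, epsilon=0.5):
    """Mask one interval's true load: draw Laplace noise (scale 1/epsilon)
    and clip it to the battery's charge, free capacity, and throughput."""
    # Difference of two iid exponentials is Laplace-distributed.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    supply_limit = min(charge, max_rate)             # battery can discharge at most this
    absorb_limit = min(capacity - charge, max_rate)  # battery can charge at most this
    delta = max(-supply_limit, min(absorb_limit, noise))
    reported = true_load + delta   # what the smart meter sees at the mains
    new_charge = charge + delta    # positive delta = battery charging
    return reported, new_charge
```

The clipping step is exactly where the analysis becomes non-trivial: bounded capacity and throughput truncate the noise distribution, which is what the paper's noise-cascading and on-the-fly recharging are designed to compensate for while retaining differential privacy.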
Last updated:  2013-10-08
How to Construct Quantum Random Functions
Uncategorized
Mark Zhandry
Show abstract
Uncategorized
In the presence of a quantum adversary, there are two possible definitions of security for a pseudorandom function. The first, which we call standard-security, allows the adversary to be quantum, but requires queries to the function to be classical. The second, quantum-security, allows the adversary to query the function on a quantum superposition of inputs, thereby giving the adversary a superposition of the values of the function at many inputs at once. Existing techniques for proving the security of pseudorandom functions fail when the adversary can make quantum queries. We give the first quantum-security proofs for pseudorandom functions by showing that some classical constructions of pseudorandom functions are quantum-secure. Namely, we show that the standard constructions of pseudorandom functions from pseudorandom generators or pseudorandom synthesizers are secure, even when the adversary can make quantum queries. We also show that a direct construction from lattices is quantum-secure. To prove security, we develop new tools to prove the indistinguishability of distributions under quantum queries. In light of these positive results, one might hope that all standard-secure pseudorandom functions are quantum-secure. To the contrary, we show a separation: under the assumption that standard-secure pseudorandom functions exist, there are pseudorandom functions secure against quantum adversaries making classical queries, but insecure once the adversary can make quantum queries.
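The standard construction of a pseudorandom function from a pseudorandom generator referenced here is the GGM tree construction. A minimal sketch, with SHA-256 standing in for a length-doubling PRG purely for illustration (not a claim about the paper's instantiation):

```python
import hashlib

def prg(seed):
    """Toy length-doubling PRG G(s) = (G0(s), G1(s)); SHA-256 is a stand-in."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key, x):
    """GGM PRF: walk a binary tree from the key, taking the left or right
    PRG half according to each input bit of the string x."""
    s = key
    for bit in x:
        s = prg(s)[int(bit)]
    return s
```

The quantum-security question in the paper is precisely whether such classical constructions survive when the adversary may query `ggm_prf` in superposition over all inputs `x` at once.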
Last updated:  2014-09-17
A Modular Framework for Multi-Factor Authentication and Key Exchange
Nils Fleischhacker, Mark Manulis, Amir Azodi
Multi-Factor Authentication (MFA), often coupled with Key Exchange (KE), offers very strong protection for secure communication and has been recommended by many major governmental and industrial bodies for use in highly sensitive applications. Over the past few years many companies have started to offer various MFA services to their users and this trend is ongoing. The MFAKE protocol framework presented in this paper offers \emph{à la carte} design of multi-factor authentication and key exchange protocols by mixing multiple \emph{types} and \emph{quantities} of authentication factors in a secure way: MFAKE protocols designed using our framework can combine any subset of multiple low-entropy (one-time) passwords/PINs, high-entropy private/public keys, and biometric factors. This combination is obtained in a modular way from efficient single-factor password-based, public key-based, and biometric-based authentication-only protocols that can be executed in concurrent sessions and bound to a single session of an unauthenticated key exchange protocol to guarantee forward secrecy. The modular approach used in the framework is particularly attractive for MFAKE solutions that require backward compatibility with existing single-factor authentication solutions or where new factors should be introduced gradually over some period of time. The framework is proven secure using state-of-the-art game-based security definitions where specifics of authentication factors such as dictionary attacks on passwords and imperfectness of the biometric matching processes are taken into account.
Last updated:  2012-04-11
Yet Another SHA-3 Round 3 FPGA Results Paper
Brian Baldwin, William P. Marnane
The NIST-run SHA-3 competition is nearing completion. Currently in its final round, the five remaining competitors are still being examined in hardware, software and for security metrics in order to select a final winner. While many area and speed results have been reported, one metric that does not appear to be covered in great detail is that of power and energy measurements on FPGA. This work attempts to add some new results to this area, namely measured area, power, energy and iteration time results, thereby giving NIST further metrics on which to base their selection decision.
Last updated:  2012-08-14
Billion-Gate Secure Computation with Malicious Adversaries
Uncategorized
Benjamin Kreuter, abhi shelat, Chih-hao Shen
Show abstract
Uncategorized
The goal of this paper is to assess the feasibility of two-party secure computation in the presence of a malicious adversary. Prior work has shown the feasibility of billion-gate circuits in the semi-honest model, but only the 35k-gate AES circuit in the malicious model, in part because security in the malicious model is much harder to achieve. We show that by incorporating the best known techniques and parallelizing almost all steps of the resulting protocol, evaluating billion-gate circuits is feasible in the malicious model. Our results are in the standard model (i.e., no common reference strings or PKIs) and, in contrast to prior work, we do not use the random oracle model which has well-established theoretical shortcomings.
Last updated:  2012-04-11
Eperio: Mitigating Technical Complexity in Cryptographic Election Verification
Aleksander Essex, Jeremy Clark, Urs Hengartner, Carlisle Adams
Cryptographic (or end-to-end) election verification is a promising approach to providing transparent elections in an age of electronic voting technology. In terms of execution time and software complexity however, the technical requirements for conducting a cryptographic election audit can be prohibitive. In an effort to reduce these requirements we present Eperio: a new, provably secure construction for providing a tally that can be efficiently verified using only a small set of primitives. We show how commonplace utilities, like the use of file encryption, can further simplify the verification process for election auditors. Using Python, the verification code can be expressed in 50 lines. Compared to other proposed proof-verification methods for end-to-end election audits, Eperio lowers the technical requirements in terms of execution time, data download times, and code size. As an interesting alternative, we explain how verification can be implemented using TrueCrypt and the built-in functions of a spreadsheet, making Eperio the first end-to-end system to not require special-purpose verification software.
Last updated:  2013-08-23
Everlasting Multi-Party Computation
Dominique Unruh
A protocol has everlasting security if it is secure against adversaries that are computationally unlimited after the protocol execution. This models the fact that we cannot predict which cryptographic schemes will be broken, say, several decades after the protocol execution. In classical cryptography, everlasting security is difficult to achieve: even using trusted setup like common reference strings or signature cards, many tasks such as secure communication and oblivious transfer cannot be achieved with everlasting security. An analogous result in the quantum setting excludes protocols based on common reference strings, but not protocols using a signature card. We define a variant of the Universal Composability framework, everlasting quantum-UC, and show that in this model, we can implement secure communication and general multi-party computation using signature cards as trusted setup.
Last updated:  2012-04-11
Improvements of Algebraic Attacks Based on Structured Gaussian Elimination
Uncategorized
Satrajit Ghosh, Abhijit Das
Show abstract
Uncategorized
Algebraic attacks are studied as a potential cryptanalytic procedure for various types of ciphers. The XL_SGE algorithm has been recently proposed to improve the complexity of the XL attack. XL_SGE uses structured Gaussian elimination (SGE) during the expansion phase of XL. In this paper, we establish that XL_SGE suffers from some serious drawbacks that impair the effectiveness of SGE-based reduction at all multiplication stages except the first. In order to avoid this problem, we propose several improvements of XL_SGE. Our modifications are based upon partial monomial multiplication and handling of columns of weight two. Our modified algorithms have been experimentally verified to be substantially superior to XL_SGE.
Last updated:  2012-04-11
Optimal First-Order Masking with Linear and Non-Linear Bijections
Houssem MAGHREBI, Claude CARLET, Sylvain GUILLEY, Jean-Luc DANGER
Hardware devices can be protected against side-channel attacks by introducing one random mask per sensitive variable. The computation throughout is unaltered if the shares (masked variable and mask) are processed concomitantly, in two distinct registers. Nonetheless, this setup can be attacked by a zero-offset second-order CPA attack. The countermeasure can be improved by manipulating the mask through a bijection $F$, aimed at reducing the dependency between the shares. Thus $d$th-order zero-offset attacks, which consist in applying CPA on the $d$th power of the centered side-channel traces, can be thwarted for $d \geq 2$ at no extra cost. We denote by $n$ the size in bits of the shares and call $F$ the transformation function, a bijection of $\mathbb{F}_2^n$. In this paper, we explore the functions $F$ that thwart zero-offset HO-CPA of maximal order $d$. We mathematically demonstrate that optimal choices for $F$ relate to optimal binary codes (in the sense of communication theory). First, we exhibit optimal linear $F$ functions. Second, we note that for values of $n$ for which non-linear codes exist with better parameters than linear ones, the optimal $F$ can be identified. These results are exemplified in the case $n=8$: the optimal $F$ is derived from the optimal rate~$1/2$ binary code of length $2n$, namely the Nordstrom-Robinson $(16, 256, 6)$ code. This explicitly provides the optimal protection, limited to one mask, for byte-oriented algorithms such as AES or AES-based SHA-3 candidates. It protects against all zero-offset HO-CPA attacks of order $d \leq 5$. Finally, the countermeasure is shown to be resilient to imperfect leakage models.
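The data flow of the countermeasure can be sketched as follows, with a randomly chosen byte bijection standing in for $F$ (the paper's optimal choice, derived from the Nordstrom-Robinson code, is not reproduced here):

```python
import random

rng = random.Random(42)
F = list(range(256))
rng.shuffle(F)                 # stand-in bijection F on F_2^8 (NOT the optimal one)
F_inv = [0] * 256
for i, y in enumerate(F):
    F_inv[y] = i

def mask(x):
    """Split sensitive byte x into shares (x XOR m, F(m)); neither register
    ever holds x or m directly, and F decorrelates the two shares."""
    m = rng.randrange(256)
    return (x ^ m, F[m])

def unmask(share0, share1):
    """Recombine: recover m through F's inverse, then strip the mask."""
    return share0 ^ F_inv[share1]
```

The design question the paper answers is which bijection $F$ minimizes the joint statistical dependency of the two shares' leakages, which is where optimal binary codes enter.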
Last updated:  2012-12-09
Zero Knowledge with Rubik's Cubes and Non-Abelian Groups
Uncategorized
Emmanuel VOLTE, Jacques PATARIN, Valérie NACHEF
Show abstract
Uncategorized
The factorization problem in non-abelian groups is still an open and difficult problem. The Rubik's cube is a famous group that illustrates the hardness of the problem. We will define a public key identification scheme based on this problem, in the case of the Rubik's cube, when the number of moves is fixed to a given value. Our scheme consists of an interactive protocol which is a zero-knowledge argument of knowledge under the assumption of the existence of a commitment scheme. We will see that our scheme works with any non-abelian group with a set of authorized moves that has a specific property. Then we will generalize the scheme for larger Rubik's cubes and for any groups.
Last updated:  2022-06-27
Automatically Verified Mechanized Proof of One-Encryption Key Exchange
Bruno Blanchet
We present a mechanized proof of the password-based protocol One-Encryption Key Exchange (OEKE) using the computationally-sound protocol prover CryptoVerif. OEKE is a non-trivial protocol, and thus mechanizing its proof provides additional confidence that it is correct. This case study was also an opportunity to implement several important extensions of CryptoVerif, useful for proving many other protocols. We have indeed extended CryptoVerif to support the computational Diffie-Hellman assumption. We have also added support for proofs that rely on Shoup's lemma and additional game transformations. In particular, it is now possible to insert case distinctions manually and to merge cases that no longer need to be distinguished. Finally, we have improved the computation of the probability bounds for attacks, providing better reductions. In particular, we improve over the standard computation of probabilities when Shoup's lemma is used, which allows us to improve the bound given in a previous manual proof of OEKE, and to show that the adversary can test at most one password per session of the protocol. In this paper, we present these extensions, with their application to the proof of OEKE. All steps of the proof are verified by CryptoVerif. This document is an updated version of a report from 2012. In the 10 years between 2012 and 2022, CryptoVerif has made a lot of progress. In particular, the probability bound obtained by CryptoVerif for OEKE has been improved, reaching an almost optimal probability: only statistical terms corresponding to collisions between group elements or between hashes are overestimated by a small constant factor.
Last updated:  2012-04-11
Attacking RSA-CRT Signatures with Faults on Montgomery Multiplication
Pierre-Alain Fouque, Nicolas Guillermin, Delphine Leresteux, Mehdi Tibouchi, Jean-Christophe Zapalowicz
In this paper, we present several efficient fault attacks against implementations of RSA-CRT signatures that use modular exponentiation algorithms based on Montgomery multiplication. They apply to any padding function, including randomized paddings, and as such are the first fault attacks effective against RSA-PSS. The new attacks work provided that a small register can be forced to either zero, or a constant value, or a value with zero high-order bits. We show that these models are quite realistic, as such faults can be achieved against many proposed hardware designs for RSA signatures.
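For reference, the operation targeted by these fault models is Montgomery multiplication, which computes $a \cdot b \cdot R^{-1} \bmod N$. A high-level sketch (the word-level loop, where the attacked small register lives, is abstracted into whole-integer arithmetic):

```python
def mont_mul(a, b, N, R):
    """Montgomery multiplication: returns a*b*R^{-1} mod N, for a, b < N,
    where R is a power of two coprime to the odd modulus N."""
    n_prime = (-pow(N, -1, R)) % R    # N' such that N * N' = -1 (mod R)
    t = a * b
    m = (t * n_prime) % R
    u = (t + m * N) // R              # exact division: t + m*N = 0 (mod R)
    return u - N if u >= N else u     # single conditional final subtraction
```

In hardware, `t` is accumulated word by word; the attacks in the paper exploit forcing such a small intermediate register to zero, a constant, or a value with zero high-order bits.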
Last updated:  2012-04-11
Quantum Money from Hidden Subspaces
Scott Aaronson, Paul Christiano
Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is (1) public-key, meaning that anyone can verify a banknote as genuine, not only the bank that printed it, and (2) cryptographically secure, under a "classical" hardness assumption that has nothing to do with quantum money. Our scheme is based on hidden subspaces, encoded as the zero-sets of random multivariate polynomials. A main technical advance is to show that the "black-box" version of our scheme, where the polynomials are replaced by classical oracles, is unconditionally secure. Previously, such a result had only been known relative to a quantum oracle (and even there, the proof was never published). Even in Wiesner's original setting -- quantum money that can only be verified by the bank -- we are able to use our techniques to patch a major security hole in Wiesner's scheme. We give the first private-key quantum money scheme that allows unlimited verifications and that remains unconditionally secure, even if the counterfeiter can interact adaptively with the bank. Our money scheme is simpler than previous public-key quantum money schemes, including a knot-based scheme of Farhi et al. The verifier needs to perform only two tests, one in the standard basis and one in the Hadamard basis -- matching the original intuition for quantum money, based on the existence of complementary observables. Our security proofs use a new variant of Ambainis's quantum adversary method, and several other tools that might be of independent interest.
Last updated:  2012-03-31
Approaches for the performance increasing of software implementation of integer multiplication in prime fields
Vladislav Kovtun, Andrew Okhrimenko
The authors propose an approach to increase the performance of software implementations of finite field multiplication on 32-bit and 64-bit platforms. The approach is based on a delayed-carry mechanism for the significant bit when accumulating the sum, which avoids having to account for the significant-bit carry at each iteration of the sum-accumulation loop. The delayed-carry mechanism reduces the total number of additions and opens the opportunity to apply modern parallelization technologies.
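The delayed-carry idea can be sketched as follows (an illustrative model, not the authors' implementation): per-column partial products are accumulated in wide registers with no carry handling, and carries are propagated in a single pass at the end, which also removes the loop-carried dependency that blocks parallelization.

```python
MASK32 = (1 << 32) - 1

def mul_delayed_carry(a_words, b_words):
    """Schoolbook multiply of little-endian 32-bit limb arrays: column sums
    are accumulated in wide integers (modelling 64-bit accumulators) and
    carries are propagated only once, after all partial products are added."""
    cols = [0] * (len(a_words) + len(b_words))
    for i, ai in enumerate(a_words):
        for j, bj in enumerate(b_words):
            cols[i + j] += ai * bj        # no per-addition carry handling
    out, carry = [], 0
    for c in cols:                        # single deferred carry sweep
        carry += c
        out.append(carry & MASK32)
        carry >>= 32
    while carry:
        out.append(carry & MASK32)
        carry >>= 32
    return out
```

Because the inner loop's column accumulations are independent, they can be distributed across threads or vector lanes, with only the final sweep being sequential.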
Last updated:  2012-03-31
Analysis of Minimum Numbers of Linearly Active S-Boxes of a Class of Generalized Feistel Block Ciphers
Xiaopei Guo, Kejian Xu, Tongsen Sun, Xiubin Fan
For a class of generalized Feistel block ciphers, an explicit recurrent formula for the minimum numbers of linearly active $S$-boxes of any round $r$ is presented.
Last updated:  2012-03-30
A Distinguisher-Based Attack of a Homomorphic Encryption Scheme Relying on Reed-Solomon Codes
Valérie Gauthier, Ayoub Otmani, Jean-Pierre Tillich
Bogdanov and Lee suggested a homomorphic public-key encryption scheme based on error correcting codes. The underlying public code is a modified Reed-Solomon code obtained from inserting a zero submatrix in the Vandermonde generating matrix defining it. The columns that define this submatrix are kept secret and form a set $L$. We give here a distinguisher that detects if one or several columns belong to $L$ or not. This distinguisher is obtained by considering the code generated by component-wise products of codewords of the public code (the so-called ``square code''). This operation is applied to punctured versions of this square code obtained by picking a subset $I$ of the whole set of columns. It turns out that the dimension of the punctured square code is directly related to the cardinality of the intersection of $I$ with $L$. This allows an attack which recovers the full set $L$ and which can then decrypt any ciphertext.
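The distinguisher rests on a dimension argument: the component-wise ("square") product of an $[n,k]$ Reed-Solomon code spans only evaluations of polynomials of degree at most $2k-2$, so its dimension is $\min(n, 2k-1)$, far below the $\min(n, \binom{k+1}{2})$ typical of a random code. A small sketch over a toy prime field (all parameters illustrative):

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

p, n, k = 13, 12, 4
alphas = list(range(1, n + 1))                             # distinct evaluation points
rs = [[pow(a, e, p) for a in alphas] for e in range(k)]    # Vandermonde RS generator
# Square code: all component-wise products of pairs of generator rows.
square = [[(u * v) % p for u, v in zip(r1, r2)] for r1 in rs for r2 in rs]
```

Here the square code has dimension $2k-1 = 7 < n = 12$, and the attack measures how this dimension shifts when the punctured column set $I$ intersects the secret set $L$.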
Last updated:  2013-03-31
Pairing-based methods for genus 2 jacobians with maximal endomorphism ring
Uncategorized
Sorina Ionica
Show abstract
Uncategorized
Using Galois cohomology, Schmoyer characterizes cryptographic non-trivial self-pairings of the \ell-Tate pairing in terms of the action of the Frobenius on the \ell-torsion of the Jacobian of a genus 2 curve. We apply similar techniques to study the non-degeneracy of the \ell-Tate pairing restricted to subgroups of the \ell-torsion which are maximal isotropic with respect to the Weil pairing. First, we deduce a criterion to verify whether the Jacobian of a genus 2 curve has maximal endomorphism ring. Second, we derive a method to construct horizontal (\ell,\ell)-isogenies starting from a Jacobian with maximal endomorphism ring.
Last updated:  2012-03-30
A List of Maximum Period NLFSRs
Elena Dubrova
Non-Linear Feedback Shift Registers (NLFSRs) are a generalization of Linear Feedback Shift Registers (LFSRs) in which a current state is a non-linear function of the previous state. While the theory behind LFSRs is well-understood, many fundamental problems related to NLFSRs remain open. Probably the most important one is finding a systematic procedure for constructing NLFSRs with a guaranteed long period. Available algorithms either consider some special cases, or are applicable to small NLFSRs only. In this paper, we present a complete list of $n$-bit NLFSRs with the period $2^n-1$, $n < 25$, for three different types of feedback functions with algebraic degree two. We hope that the presented experimental data may help in analysing feedback functions of maximum-period NLFSRs and in finding a supporting theory characterizing them.
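Checking whether a candidate feedback function attains the maximum period $2^n - 1$ can be sketched as follows. The linear feedback used in the usage note is only a sanity check (a primitive LFSR is a degenerate degree-1 case); finding feedback functions of algebraic degree two with this property is precisely what the paper's list provides.

```python
def nlfsr_period(n, feedback, start):
    """Cycle length through `start` of the Fibonacci (N)LFSR whose next
    state is (s1, ..., s_{n-1}, feedback(s)). Returns None if the orbit
    does not return to `start` within 2^n steps."""
    state = start
    for steps in range(1, 2 ** n + 1):
        state = tuple(state[1:]) + (feedback(state),)
        if state == start:
            return steps
    return None

def is_max_period(n, feedback):
    """True iff all 2^n - 1 nonzero states lie on a single cycle."""
    start = (0,) * (n - 1) + (1,)
    return nlfsr_period(n, feedback, start) == 2 ** n - 1
```

For example, with $n=3$ the linear feedback $s_0 \oplus s_1$ yields the full period 7, while the quadratic feedback $s_0 \oplus s_1 \oplus s_1 s_2$ does not; the paper's contribution is an exhaustive list of degree-2 feedbacks that do, for $n < 25$.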
Last updated:  2012-04-04
Key Updates for RFID Distance-Bounding Protocols: Achieving Narrow-Destructive Privacy
Cristina Onete
Distance-bounding protocols address man-in-the-middle (MITM) attacks in authentication protocols: by measuring response times, verifiers ensure that the responses are not purely relayed. Durholz et al. [13] formalize the following attacks against distance-bounding protocols: (1) mafia fraud, where adversaries must authenticate to the verifier in the presence of honest provers; (2) terrorist fraud, where malicious provers help the adversary (in offline phases) to authenticate (however, the adversary shouldn't authenticate on its own); (3) distance fraud, where a malicious prover must convince the verifier that it is closer to it than in reality; (4) impersonation security, where the prover must authenticate to the verifier in the rounds where response times are not measured. A scenario where distance-bounding can be successfully deployed is RFID authentication, where the provers are RFID tags and the verifiers are RFID readers. Security models and most distance-bounding schemes designed so far are static, i.e. the used secret key is never updated. The scenario considered by [13] features a single reader and a single tag. However, a crucial topic in RFID authentication is privacy, as formalized by Vaudenay [32]. Adversaries against privacy can corrupt tags and learn the secret keys; in this scenario, key updates ensure better privacy. In this paper we extend distance-bounding security to include key updates, and show a compiler that preserves mafia, distance, and impersonation security, and turns a narrow-weak private distance-bounding protocol into a narrow-destructive private distance-bounding protocol as in [32]. We discuss why it is much harder to attain terrorist fraud resistance, for both stateless and stateful scenarios.
We optimize our compiler for cases where (i) the underlying distance-bounding protocol does not have reader authentication; (ii) impersonation security is achieved (by using a pseudorandom function) before the distance-bounding phase; or (iii) the prover ends by sending a MAC of the transcript. We also use our compiler on the enhanced construction in [13].
Last updated:  2015-10-16
On Secure Two-party Integer Division
Morten Dahl, Chao Ning, Tomas Toft
We consider the problem of {\it secure integer division}: given two Paillier encryptions of $\ell$-bit values $n$ and $d$, determine an encryption of $\lfloor n/d \rfloor$ without leaking any information about $n$ or $d$. We propose two new protocols solving this problem. The first requires $O(\ell)$ arithmetic operations on encrypted values (secure addition and multiplication) in $O(1)$ rounds. This is the most efficient constant-rounds solution to date. The second protocol requires only $O\left( (\log^2 \ell)(\kappa + \log\log \ell) \right)$ arithmetic operations in $O(\log^2 \ell)$ rounds, where $\kappa$ is a correctness parameter. Theoretically, this is the most efficient solution to date, as all previous solutions have required $\Omega(\ell)$ operations. Indeed, the fact that an $o(\ell)$ solution is possible at all is highly surprising.
Last updated:  2012-04-26
Differential propagation analysis of Keccak
Uncategorized
Joan Daemen, Gilles Van Assche
Show abstract
Uncategorized
In this paper we introduce new concepts that help read and understand low-weight differential trails in Keccak. We then propose efficient techniques to exhaustively generate all 3-round trails in its largest permutation below a given weight. This allows us to prove that any 6-round differential trail in Keccak-f[1600] has weight at least 74. In the worst-case diffusion scenario where the mixing layer acts as the identity, we refine the lower bound to 82 by systematically constructing trails using a specific representation of states.
Last updated:  2013-01-08
Provably Secure Online/Off-line Identity-Based Signature Scheme for Wireless Sensor Network
Uncategorized
Jayaprakash Kar
Show abstract
Uncategorized
This paper describes an efficient and secure online/off-line signature scheme for wireless sensor networks (WSNs). The security of the proposed scheme is based on the hardness of the Bilinear Diffie-Hellman problem (BDHP). WSNs are usually deployed in hostile environments where they encounter a wide variety of malicious attacks. The information collected within the sensor network is valuable and should be kept confidential. In order to protect messages transmitted between any two adjacent sensor nodes, a mutual authentication and key establishment protocol is required. Because of inherent restrictions of sensor nodes, which include low power, limited storage, low computation ability and short communication range, most existing protocols attempt to establish a pairwise key between any two adjacent sensor nodes by adopting a key pre-distribution approach. To further reduce the computational cost of signature generation, online/off-line signing is well suited to WSNs. In an online/off-line signature scheme, the signing process is broken into two phases: the first phase, performed off-line, is independent of the particular message to be signed, while the second phase is performed online, once the message is presented.
Last updated:  2012-03-28
New Constructions of Low Correlation Sequences with High Linear Complexity
Hai Xiong, Chao Li, Qingping Dai, Shaojing Fu
In this paper, we propose a new concept named similar-bent function and present two general methods to construct balanced sequences with low correlation by using similar-bent functions and orthogonal similar-bent functions. We find that the bent sequence sets are special cases of our construction. We also investigate the linear complexity of the newly constructed sequences. If a suitable similar-bent function is given, the sequences constructed from it can have high linear complexity. As examples, we construct two new low correlation sequence sets. One, based on Dobbertin's iterative function, is asymptotically optimal with respect to Welch's bound; the other, based on the Kasami function, consists of sequences with high linear complexity.
Last updated:  2012-07-07
New Construction of Perfect Sequence Set and Low Correlation Zone Sequence Set
Hai Xiong, Longjiang Qu, Chao Li
For a given binary ideal autocorrelation sequence, we construct a perfect sequence set by changing a few bits of the sequence. The set has a large size with respect to the period of its sequences. Based on the constructed perfect sequence set, a new class of low correlation zone sequence sets, whose low correlation zone length can be chosen flexibly, is obtained. Moreover, the newly constructed low correlation zone sequence sets can attain the Tang-Fan-Matsufuji bound with suitably chosen parameters.
Last updated:  2012-03-28
Hybrid Encryption in the Multi-User Setting
G. M. Zaverucha
This paper presents an attack in the multi-user setting on various public-key encryption schemes standardized in IEEE 1363a, SECG SEC 1 and ISO 18033-2. The multi-user setting is a security model proposed by Bellare et al., which allows adversaries to simultaneously attack multiple ciphertexts created by one or more users. An attack is considered successful if the attacker learns information about any of the plaintexts. We show that many standardized public-key encryption schemes are vulnerable in this model, and give ways to prevent the attack. We also show that the key derivation function and pseudorandom generator used to implement a hybrid encryption scheme must be secure in the multi-user setting, in order for the overall primitive to be secure in the multi-user setting. As an illustration of the former, we show that using HKDF (as standardized in NIST SP 800-56C) as a key derivation function for certain standardized hybrid public-key encryption schemes is insecure in the multi-user setting.
Last updated:  2012-03-24
Efficient and Optimally Secure Key-Length Extension for Block Ciphers via Randomized Cascading
Peter Gazi, Stefano Tessaro
We consider the question of efficiently extending the key length of block ciphers. To date, the approach providing the highest security is triple encryption (used e.g. in Triple-DES), which was proved to have roughly $k + \min\{n/2, k/2\}$ bits of security when instantiated with ideal block ciphers with key length $k$ and block length $n$, at the cost of three block-cipher calls per message block. This paper presents a new practical key-length extension scheme exhibiting $k + n/2$ bits of security -- hence improving upon the security of triple encryption -- solely at the cost of two block cipher calls and a key of length $k + n$. We also provide matching generic attacks showing the optimality of the security level achieved by our approach with respect to a general class of two-query constructions.
Last updated:  2012-03-23
Attack on Fully Homomorphic Encryption over the Integers
Gu Chunsheng
This paper presents a heuristic attack on fully homomorphic encryption over the integers using a lattice reduction algorithm. Our result shows that the FHE scheme in [DGHV10] is not secure for some parameter settings. We also present an improved scheme to avoid the lattice attack described in this paper.
Last updated:  2012-03-23
Fast Embedded Software Hashing
Dag Arne Osvik
We present new software speed records for several popular hash functions on low-end 8-bit AVR microcontrollers. Target algorithms include widely deployed hash functions like SHA-1 and SHA-256 as well as the SHA-3 (second round) candidates Blake-32 and Skein-256. A significant aspect of our implementations is that they reduce the overall resource requirements, improving not only execution time but also RAM footprint and sometimes ROM/Flash memory footprint, providing the best memory/performance trade-offs reported so far. We believe that our results will shed new light on the ongoing SHA-3 competition, and be helpful for the next stage of the competition.
Last updated:  2012-03-23
Toward Practical Group Encryption
Laila El Aimani, Marc Joye
A group encryption scheme allows anyone to form a ciphertext for a given group member while keeping the receiver's identity private. At the same time, the encryptor is capable of proving that some (anonymous) group member is able to decrypt the ciphertext and, optionally, that the corresponding plaintext satisfies some a priori relation (to prevent sending bogus messages). Finally, in case of a dispute, the identity of the intended receiver can be recovered by a designated authority. In this paper, we abstract a generic approach to construct group encryption schemes. We also introduce several new implementation tricks. As a result, we obtain group encryption schemes that significantly improve the state of the art. Both interactive and non-interactive constructions are considered.
Last updated:  2017-06-14
The Joint Signature and Encryption Revisited
Laila El Aimani
We study the Sign_then_Encrypt, Commit_then_Encrypt_and_Sign, and Encrypt_then_Sign paradigms in the context of two cryptographic primitives, namely designated confirmer signatures and signcryption. Our study identifies weaknesses in those paradigms which impose the use of expensive encryption (as a building block) in order to meet a reasonable security level. Next, we propose some optimizations which eliminate the identified weaknesses and consequently allow cheap encryption without compromising the overall security. Our optimizations further enjoy verifiability, a property profoundly needed in many real-life applications of the studied primitives.
Last updated:  2012-04-11
A Framework for the Cryptographic Verification of Java-like Programs
Ralf Kuesters, Tomasz Truderung, Juergen Graf
We consider the problem of establishing cryptographic guarantees -- in particular, computational indistinguishability -- for Java or Java-like programs that use cryptography. For this purpose, we propose a general framework that enables existing program analysis tools that can check (standard) noninterference properties of Java programs to establish cryptographic security guarantees, even if the tools a priori cannot deal with cryptography. The approach that we take is new and combines techniques from program analysis and simulation-based security. Our framework is stated and proved for a Java-like language that comprises a rich fragment of Java. The general idea of our approach should, however, be applicable also to other practical programming languages. As a proof of concept, we use an automatic program analysis tool for checking noninterference properties of Java programs, namely the tool Joana, in order to establish computational indistinguishability for a Java program that involves clients sending encrypted messages over a network, controlled by an active adversary, to a server.
Last updated:  2012-04-05
On security of a Certificateless Aggregate Signature Scheme
Limin Shen, Yinxia Sun
Aggregate signatures are useful in special areas where the signatures on many different messages generated by many different users need to be compressed. Recently, Xiong et al. proposed a certificateless aggregate signature scheme provably secure in the random oracle model under the Computational Diffie-Hellman assumption. Unfortunately, by presenting concrete attacks, we show that Xiong et al.'s aggregate signature scheme does not meet the basic requirement of unforgeability.
Last updated:  2012-03-22
On Boolean Ideals and Varieties with Application to Algebraic Attacks
Alexander Rostovtsev, Alexey Mizyukin
Finding the key of a symmetric cipher amounts to computing a common zero of polynomials, which define an ideal and a corresponding variety, usually considered over an algebraically closed field. The solution is the point of the variety over the prime field; it is defined by the sum of the polynomial ideal and the field ideal that defines the prime field. Some authors partition this sum and reduce syzygies of the polynomial ideal modulo the field ideal. We generalize this method and consider the polynomial ideal as a sum of two ideals, one of which is given by short polynomials, and add this ideal to the field ideal. Syzygies are reduced modulo this sum of ideals. The accuracy with which the substitution ideal is defined by short polynomials can be increased using affine equivalence of ideals. This method decreases the degree and length of syzygies and reduces the complexity of Groebner basis computation.
Last updated:  2018-10-09
Circular chosen-ciphertext security with compact ciphertexts
Uncategorized
Dennis Hofheinz
Show abstract
Uncategorized
A key-dependent message (KDM) secure encryption scheme is secure even if an adversary obtains encryptions of messages that depend on the secret key. Such key-dependent encryptions naturally occur in scenarios such as harddisk encryption, formal cryptography, or in specific protocols. However, there are not many provably secure constructions of KDM-secure encryption schemes. Moreover, only one construction, due to Camenisch, Chandran, and Shoup (Eurocrypt 2009) is known to be secure against active (i.e., CCA) attacks. In this work, we construct the first public-key encryption scheme that is KDM-secure against active adversaries and has compact ciphertexts. As usual, we allow only circular key dependencies, meaning that encryptions of arbitrary *entire* secret keys under arbitrary public keys are considered in a multi-user setting. Technically, we follow the approach of Boneh, Halevi, Hamburg, and Ostrovsky (Crypto 2008) to KDM security, which however only achieves security against passive adversaries. We explain an inherent problem in adapting their techniques to active security, and resolve this problem using a new technical tool called ``lossy algebraic filters'' (LAFs). We stress that we significantly deviate from the approach of Camenisch, Chandran, and Shoup to obtain KDM security against active adversaries. This allows us to develop a scheme with compact ciphertexts that consist only of a constant number of group elements.
Last updated:  2012-12-01
Attacking Scrambled Burrows-Wheeler Transform
Martin Stanek
Scrambled Burrows-Wheeler transform [6] is an attempt to combine privacy (encryption) and data compression. We show that the proposed approach is insecure. We present chosen plaintext and known plaintext attacks and estimate their complexity in various scenarios.
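For background on the primitive under attack: the (unscrambled) Burrows-Wheeler transform sorts all rotations of the input and outputs the last column; the scrambled variant of [6] additionally applies key-dependent scrambling, which is not modelled here. A naive sketch:

```python
def bwt(s: str, sentinel: str = "\x00") -> str:
    """Naive Burrows-Wheeler transform: last column of the sorted rotations.
    The sentinel is a unique end marker that makes the transform invertible."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)
```

For example, `bwt("banana")` groups repeated characters together ("annb\x00aa"), which is what makes the transform useful for compression; the attacks in the paper exploit exactly this structure surviving the scrambling.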
Last updated:  2012-08-07
Replacing Username/Password with Software-Only Two-Factor Authentication
Michael Scott
It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol, which in turn enables this functionality. The single point of failure in PKI, and hence the focus of attacks, is the Certification Authority. However, this entity is commonly off-line, well defended, and not easily reached. For a client to authenticate itself to the server is much more problematic. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password -- and the reality is that we cannot expect a client to be sufficiently sophisticated or well organised to protect larger secrets. However, Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover files containing information related to passwords, and unless the passwords are of sufficiently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server is successfully hacked. Note that in an ideal world a client should be able to use a low-entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multi-factor authentication. In the simplest case the client must demonstrate possession of both a token and a password. The banks have been at the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.
Last updated:  2012-03-22
On Security Arguments of the Second Round SHA-3 Candidates
Elena Andreeva, Andrey Bogdanov, Bart Mennink, Bart Preneel, Christian Rechberger
In 2007, the US National Institute for Standards and Technology (NIST) announced a call for the design of a new cryptographic hash algorithm in response to vulnerabilities like differential attacks identified in existing hash functions, such as MD5 and SHA-1. NIST received many submissions, 51 of which were accepted to the first round. 14 candidates were left in the second round, out of which 5 candidates have recently been chosen for the final round. An important criterion in the selection process is the SHA-3 hash function security. We identify two important classes of security arguments for the new designs: (1) the possible reductions of the hash function security to the security of its underlying building blocks, and (2) arguments against differential attacks on building blocks. In this paper, we compare the state-of-the-art provable security reductions for the second round candidates, and review arguments and bounds against classes of differential attacks. We discuss all the SHA-3 candidates at a high functional level, and analyze and summarize the security reduction results and bounds against differential attacks. Additionally, we generalize the well-known proof of collision resistance preservation, such that all SHA-3 candidates with a suffix-free padding are covered.
Last updated:  2012-05-20
On Polynomial Systems Arising from a Weil Descent
Uncategorized
Christophe Petit, Jean-Jacques Quisquater
Show abstract
Uncategorized
In the last two decades, many computational problems arising in cryptography have been successfully reduced to various systems of polynomial equations. In this paper, we revisit a class of polynomial systems introduced by Faugère, Perret, Petit and Renault. Seeing these systems as natural generalizations of HFE systems, we provide experimental and theoretical evidence that their degrees of regularity are only slightly larger than the original degree of the equations, resulting in a very low complexity compared to generic systems. We then revisit the applications of these systems to the elliptic curve discrete logarithm problem (ECDLP) for binary curves, to the factorization problem in $SL(2,\mathbf{F}_{2^n})$ and to other discrete logarithm problems. As a main consequence, we provide a heuristic analysis showing that Diem's variant of index calculus for ECDLP requires a \emph{subexponential} number of bit operations $O(2^{c\,n^{2/3}\log n})$ over the binary field $\mathbf{F}_{2^n}$, where $c$ is a constant smaller than $2$. According to our estimations, generic discrete logarithm methods are outperformed for any $n>N$ where $N\approx2000$, but elliptic curves of currently recommended key sizes ($n\approx160$) are not immediately threatened. The analysis can be easily generalized to other extension fields.
Last updated:  2012-03-22
Construction of the Tsujii-Shamir-Kasahara (TSK) Type Multivariate Public Key Cryptosystem, which relies on the Difficulty of Prime Factorization
Shigeo Tsujii, Kohtaro Tadaki, Masahito Gotaishi, Ryou Fujita
A new multivariate public-key cryptosystem (MPKC) whose security is based on the difficulty of prime factorization is proposed. Unlike conventional cryptosystems such as RSA, most MPKCs are expected to be secure against quantum computers, and their encryption and decryption operations are expected to be fast, because they do not require exponentiation. However, their security against quantum computers is very difficult to prove mathematically. We propose a new MPKC based on the sequential solution method, assuming security against von Neumann computers, for which an attack appears as difficult as prime factorization. This cryptosystem is applicable to both encryption and signature.
Last updated:  2012-03-22
Somewhat Practical Fully Homomorphic Encryption
Junfeng Fan, Frederik Vercauteren
In this paper we port Brakerski's fully homomorphic scheme based on the Learning With Errors (LWE) problem to the ring-LWE setting. We introduce two optimised versions of relinearisation that not only result in a smaller relinearisation key, but also faster computations. We provide a detailed, but simple analysis of the various homomorphic operations, such as multiplication, relinearisation and bootstrapping, and derive tight worst case bounds on the noise caused by these operations. The analysis of the bootstrapping step is greatly simplified by using a modulus switching trick. Finally, we derive concrete parameters for which the scheme provides a given level of security and becomes fully homomorphic.
Last updated:  2013-05-15
Universally Composable Secure Computation with (Malicious) Physically Uncloneable Functions
Rafail Ostrovsky, Alessandra Scafuro, Ivan Visconti, Akshay Wadia
Physically Uncloneable Functions (PUFs) [Pap01] are noisy physical sources of randomness. As such, they are naturally appealing for cryptographic applications, and have caught the interest of both theoreticians and practitioners. A major step towards understanding and securely using PUFs was recently taken in [Crypto 2011], where Brzuska, Fischlin, Schröder and Katzenbeisser model PUFs in the Universal Composition (UC) framework of Canetti [FOCS 2001]. Their model considers trusted PUFs only, and thus real-world adversaries cannot create malicious PUFs, and can access the physical object only via the prescribed procedure. However, this does not accurately reflect real-life scenarios, where an adversary could be able to create and use malicious PUFs, or access the PUF through other procedures. The goal of this work is to extend the model proposed in [Crypto 2011] in order to capture such real-world attacks. The main contribution of this work is the study of the Malicious PUFs model. Namely, we extend the PUF functionality of Brzuska et al. so that it allows the adversary to create arbitrarily malicious PUFs. Then, we provide positive results in this more realistic model. We show that, under computational assumptions, it is possible to UC-securely realize any functionality. Furthermore, we achieve unconditional (non-UC) security with malicious PUFs, by showing a statistically hiding, statistically binding commitment scheme that uses only one PUF, and that PUF can be malicious. As an additional contribution, we investigate another attack model, where adversaries access a trusted PUF in a different way (i.e., not following the prescribed procedure). Technically, this attack translates into the fact that the simulator cannot observe the queries made to an honest PUF. In this model, queries are oblivious to the simulator, and we call it the "Oblivious Query" model. We are able to achieve unconditionally UC-secure computation even in this more severe model.
This protocol is secure against stronger adversaries than those considered by Brzuska et al. Finally, we show the impossibility of UC-secure computation in the combination of the above two new models, where the real-world adversary can create malicious PUFs and maliciously access honest PUFs. Our work sheds light on the significant power and applicability of PUFs in the design of cryptographic protocols modeling adversaries that misbehave with PUFs.
Last updated:  2012-04-26
Identity-Based Encryption with Master Key-Dependent Message Security and Applications
David Galindo, Javier Herranz, Jorge Villar
We introduce the concept of identity-based encryption (IBE) with master key-dependent chosen-plaintext (mKDM-sID-CPA) security. These are IBE schemes that remain secure even after the adversary sees encryptions, under some initially selected identities, of functions of the master secret key(s). We then propose a generic construction of chosen-ciphertext secure key-dependent encryption (KDM-CCA) schemes in the public key setting starting from mKDM-sID-CPA secure IBE schemes. This is reminiscent of the celebrated work by Canetti, Halevi and Katz (Eurocrypt 2004) on the traditional key-oblivious setting. Previously only one generic construction of KDM-CCA secure public key schemes was known, due to Camenisch, Chandran and Shoup (Eurocrypt 2009), and it required non-interactive zero knowledge proofs (NIZKs). Our transformation shows that NIZKs are not intrinsic to KDM-CCA public key encryption. Additionally, we are able to instantiate our new concept under the Rank assumption on pairing groups and for affine functions of the secret keys. The scheme builds on previous work by Boneh, Halevi, Hamburg and Ostrovsky (Crypto 2008). Our concrete schemes are only able to provide security against a bounded number of encryption queries, which is enough in some practical scenarios. As a corollary we obtain a KDM-CCA secure public key encryption scheme, in the standard model, whose security reduction to a static assumption is independent of the number of challenge queries. As an independent contribution, we give new and better reductions between the Rank problem (previously named as Matrix DDH problem) and the Decisional Linear and the Decisional 3-Party Diffie-Hellman problems.
Last updated:  2014-02-06
Bicliques for permutations: collision and preimage attacks in stronger settings
Uncategorized
Dmitry Khovratovich
Show abstract
Uncategorized
We extend and improve biclique attacks, which were recently introduced for the cryptanalysis of block ciphers and hash functions. While previous attacks required a primitive to have a key or a message schedule, we show how to mount attacks on the primitives with these parameters fixed, i.e. on permutations. We introduce the concept of sliced bicliques, which is a translation of regular bicliques to the framework with permutations. The new framework allows us to convert preimage attacks into collision attacks and derive the first collision attacks on the reduced SHA-3 finalist Skein in the hash function setting up to 11 rounds. We also demonstrate new preimage attacks on the reduced Skein and the output transformation of the reduced Grøstl. Finally, the sophisticated technique of message compensation gets a simple explanation with bicliques.
Last updated:  2012-03-22
Highly-Parallel Montgomery Multiplication for Multi-core General-Purpose Microprocessors
Selcuk Baktir, Erkay Savas
Popular public key algorithms such as RSA and Diffie-Hellman key exchange, and more advanced cryptographic schemes such as Paillier's and Damgård-Jurik's algorithms (with applications in private information retrieval), require efficient modular multiplication with large integers of size at least 1024 bits. The Montgomery multiplication algorithm has proven successful for modular multiplication of large integers. While general-purpose multi-core processors have become the mainstream on desktop as well as portable computers, utilization of their computing resources has been largely overlooked when it comes to performing computationally intensive cryptographic operations. In this work, we propose a new parallel Montgomery multiplication algorithm which exhibits up to 39% better performance than the best known serial Montgomery multiplication variant for bit-lengths of 2048 or larger. Furthermore, for bit-lengths of 4096 or larger, the proposed algorithm exhibits better performance utilizing the multiple cores available. It achieves speedups of up to 81%, 3.37 times and 4.87 times on general-purpose microprocessors with 2, 4 and 6 cores, respectively. To our knowledge, this is the first work that shows with actual implementation results that Montgomery multiplication can be practically parallelized on general-purpose multi-core processors.
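As background, Montgomery multiplication avoids division by the modulus n by working with residues in the form a*R mod n, where R is a power of two; reduction then needs only shifts and masks. A minimal Python sketch of the classic serial REDC step (the paper's contribution is a parallel variant, which is not shown here):

```python
def montgomery_setup(n: int, bits: int):
    """Precompute R = 2^bits > n and n' = (-n)^{-1} mod R (n must be odd)."""
    r = 1 << bits
    n_inv = pow(-n, -1, r)  # modular inverse via three-argument pow (Python 3.8+)
    return r, n_inv

def montgomery_mul(a_bar: int, b_bar: int, n: int, r: int, n_inv: int) -> int:
    """REDC: map (a*R mod n, b*R mod n) to a*b*R mod n with no division by n."""
    t = a_bar * b_bar
    m = (t * n_inv) % r                       # cheap mask: r is a power of two
    u = (t + m * n) >> (r.bit_length() - 1)   # exact division by r: just a shift
    return u - n if u >= n else u

# Example: multiply 5 and 7 modulo 97 in Montgomery form.
n, bits = 97, 8
r, n_inv = montgomery_setup(n, bits)
a_bar, b_bar = (5 * r) % n, (7 * r) % n        # convert to Montgomery form
u = montgomery_mul(a_bar, b_bar, n, r, n_inv)  # equals 5*7*r mod n
result = (u * pow(r, -1, n)) % n               # convert back: 35
```

In practice operands are multi-word and the reduction is interleaved word by word; the single-word sketch above only illustrates why the reduction avoids a division by n.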
Last updated:  2013-08-15
Formal verification of secure ad-hoc network routing protocols using deductive model-checking
Ta Vinh Thong
Ad-hoc networks do not rely on a pre-installed infrastructure; they are formed by end-user devices in a self-organized manner. A consequence of this principle is that end-user devices must also perform routing functions. However, end-user devices can easily be compromised, and they may not follow the routing protocol faithfully. Such compromised and misbehaving nodes can disrupt routing, and hence disable the operation of the network. In order to cope with this problem, several secure routing protocols have been proposed for ad-hoc networks. However, many of them have design flaws that still make them vulnerable to attacks mounted by compromised nodes. In this paper, we propose a formal verification method for secure ad-hoc network routing protocols that helps increase the confidence in a protocol by providing an analysis framework that is more systematic, and hence less error-prone, than informal analysis. Our approach is based on a new process algebra that we specifically developed for secure ad-hoc network routing protocols, and on a deductive proof technique. The novelty of this approach is that, contrary to prior attempts at formal verification of secure ad-hoc network routing protocols, our verification method can be made fully automated, and provides expressiveness for explicitly modelling cryptographic primitives.
Last updated:  2015-12-17
An Improved Differential Attack on Full GOST (extended version)
Uncategorized
Nicolas T. Courtois
Show abstract
Uncategorized
GOST 28147-89 is a well-known block cipher and the official encryption standard of the Russian Federation. A block cipher with a 256-bit key, it is considered an alternative to AES-256 and triple DES, has an amazingly low implementation cost, and is becoming increasingly popular. Until 2010 researchers unanimously agreed that “despite considerable cryptanalytic efforts spent in the past 20 years, GOST is still not broken”, and in 2010 it was submitted to ISO 18033 to become a worldwide industrial encryption standard. In 2011 it was suddenly discovered that GOST can be broken and is insecure on more than one account. There is a substantial variety of recent innovative attacks on GOST: reflection attacks, attacks with double, triple and even quadruple reflections, and a large variety of self-similarity and black-box reduction attacks, some of which use no reflections whatsoever. The final key recovery step in various attacks is in many cases a software algebraic attack and/or a Meet-In-The-Middle attack. In differential attacks, key bits are guessed and confirmed by the differential properties, and there have already been quite a few papers about advanced differential attacks on GOST. There are also several even more advanced “combination” attacks which combine the complexity reduction approach, based on high-level self-similarity, with various advanced differential properties with 2, 3 or 4 points. In this paper we consider some recent differential attacks on GOST and show how to further improve them. We present a single-key attack against the full 32-round 256-bit-key GOST with time complexity 2^179, which is substantially faster than any previous single-key attack on GOST.
Last updated:  2012-03-22
Usable assembly language for GPUs: a success story
Daniel J. Bernstein, Hsieh-Chung Chen, Chen-Mou Cheng, Tanja Lange, Ruben Niederhagen, Peter Schwabe, Bo-Yin Yang
The NVIDIA compilers nvcc and ptxas leave the programmer with only very limited control over register allocation, register spills, instruction selection, and instruction scheduling. In theory a programmer can gain control by writing an entire kernel in van der Laan's cudasm assembly language, but this requires tedious, error-prone tracking of register assignments. This paper introduces a higher-level assembly language, qhasm-cudasm, that allows much faster programming while providing the same amount of control over the GPU. This language has been used successfully to build a 90000-machine-instruction kernel for a computation described in detail in the paper, the largest public cryptanalytic project in history. The best GTX 295 speed that has been obtained for this computation with nvcc and ptxas is 25 million iterations per second; the best GTX 295 speed that has been obtained with qhasm-cudasm is 63 million iterations per second.
Last updated:  2012-03-22
Adaptive Key Protection in Complex Cryptosystems with Attributes
Zilong Wang, Danfeng (Daphne) Yao, Rongquan Feng
In the attribute-based encryption (ABE) model, attributes (as opposed to identities) are used to encrypt messages, and all the receivers with qualifying attributes can decrypt the ciphertext. However, compromised attribute keys may affect the communications of many users who share the same access control policies. We present the notion of forward-secure attribute-based encryption (fs-ABE) and give a concrete construction based on bilinear map and decisional bilinear Diffie-Hellman assumption. Forward security means that a compromised private key by an adversary at time $t$ does not break the confidentiality of the communication that took place prior to $t$. We describe how to achieve both forward security and encryption with attributes, and formally prove our security against the adaptive chosen-ciphertext adversaries. Our scheme is non-trivial, and the key size only grows polynomially with $\log N$ (where $N$ is the number of time periods). We further generalize our scheme to support the individualized key-updating schedule for each attribute, which provides a finer granularity for key management. Our insights on the required properties that an ABE scheme needs to possess in order to be forward-secure compatible are useful beyond the specific fs-ABE construction given. We raise an open question at the end of the paper on the escrow problem of the master key in ABE schemes.
Last updated:  2018-11-23
David & Goliath Oblivious Affine Function Evaluation - Asymptotically Optimal Building Blocks for Universally Composable Two-Party Computation from a Single Untrusted Stateful Tamper-Proof Hardware Token
Nico Döttling, Daniel Kraschewski, Jörn Müller-Quade
In a seminal work, Katz (Eurocrypt 2007) showed that parties being able to issue tamper-proof hardware can implement universally composable secure computation without a trusted setup. Our contribution to the line of research initiated by Katz is a construction for general, information-theoretically secure, universally composable two-party computation based on a single stateful tamper-proof token. We provide protocols for multiple one-time memories, multiple commitments in both directions, and also bidirectional oblivious transfer. From this, general secure two-party computation (and even one-time programs) can be implemented by known techniques. Moreover, our protocols have asymptotically optimal communication complexity. The central part of our work is a construction for oblivious affine function evaluation (OAFE), which can be seen as a generalization of the oblivious transfer primitive: Parametrized by a finite field F and a dimension k, the OAFE primitive allows a designated sender to choose an affine function f:F->F^k, such that hidden from the sender a designated receiver can learn f(x) for exactly one input x in F of his choice. All our abovementioned results build upon this primitive and it may also be of particular interest for the construction of garbled arithmetic circuits.
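The OAFE interface described above can be illustrated as an ideal functionality, i.e. a trusted-party abstraction with no cryptography. The field size and class below are illustrative only, not taken from the paper:

```python
P = 2**61 - 1  # a prime, standing in for the field F of the abstract

class OAFE:
    """Ideal OAFE functionality for f(x) = a*x + b over F^k: the sender fixes
    the affine map (hidden from the receiver), the receiver learns f(x) for
    exactly one x of her choice, and the sender learns nothing about x."""
    def __init__(self, a: list, b: list):
        self._a, self._b = a, b   # the sender's coefficient vectors, length k
        self._used = False
    def query(self, x: int) -> list:
        assert not self._used, "only one evaluation is allowed"
        self._used = True
        return [(ai * x + bi) % P for ai, bi in zip(self._a, self._b)]
```

Oblivious transfer drops out as the special case k = 1 with f(x) = (s1 - s0)*x + s0 and x restricted to {0, 1}: querying x returns exactly s_x.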
Last updated:  2015-04-30
A Digital Signature Scheme for Long-Term Security
Uncategorized
Dimitrios Poulakis, Robert Rolland
Show abstract
Uncategorized
In this paper we propose a signature scheme based on two intractable problems, namely the integer factorization problem and the discrete logarithm problem for elliptic curves. It is suitable for applications requiring long-term security and provides smaller signatures than the existing schemes based on the integer factorization and integer discrete logarithm problems.