All papers in 2012 (Page 8 of 733 results)

Last updated:  2012-01-29
A note on hyper-bent functions via Dillon-like exponents
Sihem Mesnager, Jean-Pierre Flori
This note is devoted to hyper-bent functions with multiple trace terms (including binomial functions) via Dillon-like exponents. We show how the approach developed by Mesnager to extend the Charpin–Gong family, and subsequently extended by Wang et al., fits into a much more general setting. To this end, we first explain how the original restriction for the Charpin–Gong criterion can be weakened before generalizing the Mesnager approach to arbitrary Dillon-like exponents. Afterward, we tackle the problem of devising infinite families of extension degrees for which a given exponent is valid and apply these results not only to reprove straightforwardly the results of Mesnager and Wang et al., but also to characterize the hyper-bentness of new infinite classes of Boolean functions.
Last updated:  2012-01-29
Counterexamples to Hardness Amplification Beyond Negligible
Yevgeniy Dodis, Abhishek Jain, Tal Moran, Daniel Wichs
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the ``direct product''; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. ``parallel repetition'') of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly-verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability $> \frac{1}{2}$, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to $2^{-n}$ probability, or at least to some fixed/known negligible function such as $n^{-\log n}$ in the security parameter $n$, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC '95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function $\varepsilon(n)$, we instantiate these primitives so that the direct product can always be broken with probability $\varepsilon(n)$, no matter how many copies we take.
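To make the gap concrete (our restatement, not notation from the paper): if each instance can be broken with probability at most $\delta = \frac{1}{2}$, the natural conjecture is that $k$ independent instances can be broken with probability at most roughly $\delta^k = 2^{-k}$, so taking $k = n$ copies would drive the success probability down to $2^{-n}$. The counterexamples show instead that for any prescribed negligible $\varepsilon(n)$ the primitives can be instantiated so that $\Pr[\text{adversary breaks all } k \text{ copies}] \ge \varepsilon(n)$ for every $k$, i.e., the success probability never drops below $\varepsilon(n)$, however many copies are taken.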
Last updated:  2012-01-29
An error in "On a new formal proof model for RFID location privacy"
Da-Zhi Sun
In Information Processing Letters 110 (2) (2009) 57-61, Deursen and Radomirović evaluated five formal RFID privacy models. One main result is that Ha et al.’s RFID privacy model is incorrect. The supporting fact is that a constant-response protocol cannot pass the test of Ha et al.’s RFID privacy model. However, we demonstrate that the constant-response protocol is artificial, and the corresponding result is therefore unwarranted. This means that Ha et al.’s RFID privacy model is not a trivial model. Hence, more effort can still be made to improve Ha et al.’s RFID privacy model.
Last updated:  2012-02-03
Fault Analysis of the KATAN Family of Block Ciphers
Shekh Faisal Abdul-Latip, Mohammad Reza Reyhanitabar, Willy Susilo, Jennifer Seberry
In this paper, we investigate the security of the KATAN family of block ciphers against differential fault attacks. KATAN consists of three variants with 32-, 48- and 64-bit block sizes, called KATAN32, KATAN48 and KATAN64, respectively. All three variants have the same key length of 80 bits. We assume a single-bit fault injection model in which the adversary is able to corrupt a single random bit of the internal state of the cipher, and this fault induction process can be repeated (by resetting the cipher); i.e., the faults are transient rather than permanent. First, we show how to identify the exact position of faulty bits within the internal state by precomputing difference characteristics for each bit position at a given round and comparing these characteristics with ciphertext differences (XOR of faulty and non-faulty ciphertexts) during the online phase of the attack. Then, we determine suitable rounds for effective fault inductions by analyzing distributions of low-degree (mainly, linear and quadratic) polynomial equations obtainable using the cube and extended cube attack techniques. The complexity of our attack on KATAN32 is $2^{59}$ computations and about 115 fault injections. For KATAN48 and KATAN64, the attack requires $2^{55}$ computations (for both variants), while the required number of fault injections is 211 and 278, respectively.
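As a rough illustration of the fault-position identification step, the following Python sketch (a toy stand-in, not the actual KATAN round function, whose difference characteristics are probabilistic rather than exact matches) precomputes the ciphertext difference induced by flipping each internal-state bit and matches the observed difference against this table in the online phase:

def precompute_characteristics(encrypt_from_round, state_bits, round_index):
    # Offline phase: for every internal-state bit position, record the
    # ciphertext difference produced by flipping that single bit at
    # `encrypt_from_round` is a hypothetical stand-in for running the
    # remaining rounds of the cipher from the given round onward.
    table = {}
    for pos in range(state_bits):
        reference = [0] * state_bits
        faulty = list(reference)
        faulty[pos] ^= 1                       # inject a single-bit fault
        c0 = encrypt_from_round(reference, round_index)
        c1 = encrypt_from_round(faulty, round_index)
        table[tuple(a ^ b for a, b in zip(c0, c1))] = pos
    return table

def identify_fault_position(table, ciphertext, faulty_ciphertext):
    # Online phase: XOR the faulty and non-faulty ciphertexts and look
    # the resulting difference up in the precomputed table.
    diff = tuple(a ^ b for a, b in zip(ciphertext, faulty_ciphertext))
    return table.get(diff)   # None if the difference matches no position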
Last updated:  2012-01-22
On the Exact Security of Schnorr-Type Signatures in the Random Oracle Model
Yannick Seurin
The Schnorr signature scheme has been known to be provably secure in the Random Oracle Model under the Discrete Logarithm (DL) assumption since the work of Pointcheval and Stern (EUROCRYPT '96), albeit at the price of a very loose reduction: if there is a forger making at most $q_h$ random oracle queries and forging signatures with probability $\varepsilon_F$, then the Forking Lemma says that one can compute discrete logarithms with constant probability by rewinding the forger $\mathcal{O}(q_h/\varepsilon_F)$ times. In other words, the security reduction loses a factor $\mathcal{O}(q_h)$ in its time-to-success ratio. This is rather unsatisfactory since $q_h$ may be quite large. Yet Paillier and Vergnaud (ASIACRYPT 2005) later showed that under the One More Discrete Logarithm (OMDL) assumption, any algebraic reduction must lose a factor at least $q_h^{1/2}$ in its time-to-success ratio. This was later improved by Garg et al. (CRYPTO 2008) to a factor $q_h^{2/3}$. Up to now, the gap between $q_h^{2/3}$ and $q_h$ remained open. In this paper, we show that the security proof using the Forking Lemma is essentially the best possible. Namely, under the OMDL assumption, any algebraic reduction must lose a factor $f(\varepsilon_F)q_h$ in its time-to-success ratio, where $f\le 1$ is a function that remains close to 1 as long as $\varepsilon_F$ is noticeably smaller than 1. Using a formulation in terms of expected-time and queries algorithms, we obtain an optimal loss factor $\Omega(q_h)$, independently of $\varepsilon_F$. These results apply to other signature schemes based on one-way group homomorphisms, such as the Guillou-Quisquater signature scheme.
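In terms of the time-to-success ratio (a standard way of stating the loss, in our notation): if the forger runs in time $t$ and succeeds with probability $\varepsilon_F$, the Forking-Lemma reduction runs in time about $t \cdot q_h/\varepsilon_F$ and succeeds with constant probability $\Omega(1)$, so $\frac{t'}{\varepsilon'} \approx \frac{t\,q_h/\varepsilon_F}{\Omega(1)} = \mathcal{O}(q_h)\cdot\frac{t}{\varepsilon_F}$, which is exactly the $\mathcal{O}(q_h)$ loss factor the paper shows to be essentially unavoidable for algebraic reductions.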
Last updated:  2012-01-22
A First-Order Leak-Free Masking Countermeasure
Houssem MAGHREBI, Emmanuel PROUFF, Sylvain GUILLEY, Jean-Luc DANGER
One protection of cryptographic implementations against side-channel attacks is the masking of sensitive variables. In this article, we present a first-order masking that does not leak information when the registers change values according to some specific (and realistic) rules. This countermeasure applies to all devices that leak a function of the distance between consecutive values of internal variables. In particular, we illustrate its practicality on both hardware and software implementations. Moreover, we introduce a framework to evaluate the soundness of the new first-order masking when the leakage slightly deviates from the rules used to design the countermeasure. It reveals that the countermeasure remains more efficient than the state-of-the-art first-order masking if the deviation from the ideal model is equal to a few tens of percent, and that it is as good as a first-order Boolean masking even if the deviation is 50%.
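The leakage model at stake can be made concrete with a small Python sketch (an illustrative assumption about the device, in the spirit of the paper's model): the device leaks a function of the distance between consecutive register values, the prototypical instance being the Hamming distance.

import os

def hamming_distance_leak(prev_value: int, new_value: int) -> int:
    # Hamming-distance model: the leak is the number of bit positions
    # that flip between consecutive register contents.
    return bin(prev_value ^ new_value).count("1")

def first_order_boolean_shares(sensitive_byte: int):
    # Classical first-order Boolean masking: split s into (s ^ m, m)
    # for a fresh random mask m.  The countermeasure in the paper
    # additionally constrains how registers may transition so that the
    # observed distance is independent of the sensitive value.
    m = os.urandom(1)[0]
    return sensitive_byte ^ m, m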
Last updated:  2012-02-01
Breaking the provably secure SAKE-C authenticated key exchange protocol with Extended Key Compromise Impersonation (E-KCI) Attack
Ali Mackvandi, Maryam Saeed, Mansour Naddafiun
Authenticated Key Exchange (AKE) protocols allow two or more entities to agree on a common session key in an authentic manner, where this key is then used to encrypt subsequent communications. In 2010, Zhao et al. proposed a provably secure authenticated key exchange protocol under the CDH assumption (referred to as SAKE and SAKE-C). Although the security of the proposed protocol is proved in a formal model, the model does not consider all the prerequisite queries in its definitions. In this letter, it is shown that the so-called secure protocol is vulnerable to the Extended Key Compromise Impersonation (E-KCI) attack, a practical flaw first signaled by Tang et al. in 2011. Unfortunately, most AKE and PAKE protocols are vulnerable to the E-KCI attack, a newly introduced flaw in this field: even one of the most famous, secure, and efficient PAKE protocols, the 3-pass HMQV protocol, suffers from this vulnerability.
Last updated:  2012-01-20
Decoding Random Binary Linear Codes in $2^{n/20}$: How $1+1=0$ Improves Information Set Decoding
Anja Becker, Antoine Joux, Alexander May, Alexander Meurer
Decoding random linear codes is a well studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to $2^{0.0556n}$. Using representations, this bound was improved to $2^{0.0537n}$ by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time $2^{0.0494n}$.
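The counting phenomenon behind the speed-up can be seen in a toy Python sketch (ours, purely illustrative): a fixed error vector $e$ over $\mathbb{F}_2$ has many decompositions $e = e_1 \oplus e_2$ into vectors of equal weight, because overlapping ones cancel ($1+1=0$), and each extra representation lets the algorithm survey a smaller fraction of the search space.

from itertools import combinations

def count_representations(e_support, n, w1):
    # Count pairs (e1, e2), both of weight w1 and length n, whose XOR
    # equals the vector with support e_support.
    e = set(e_support)
    count = 0
    for s1 in combinations(range(n), w1):
        e2 = set(s1).symmetric_difference(e)  # forced by e1 ^ e2 = e
        if len(e2) == w1:
            count += 1
    return count

# A weight-2 error in length 8 already has 12 decompositions into two
# weight-2 vectors sharing one overlapping (cancelling) position:
print(count_representations({0, 1}, 8, 2))   # prints 12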
Last updated:  2012-01-20
A new remote data integrity checking scheme for cloud storage
Xiangtao Yan, Yifa Li
Cloud storage services enable users to enjoy high-capacity and high-quality storage with less overhead, but they also bring many potential threats, for example to data integrity and data availability. In this paper, we propose a new remote integrity and availability checking scheme for cloud storage. The new scheme can check the integrity and availability of a large number of files with fewer storage, computation and communication resources. It also supports dynamic data updates, public verifiability and privacy preservation.
Last updated:  2012-01-18
Variants of Waters' Dual-System Primitives Using Asymmetric Pairings
Somindu C. Ramanna, Sanjit Chatterjee, Palash Sarkar
Waters, in 2009, introduced an important technique, called dual-system encryption, to construct identity-based encryption (IBE) and related schemes. The resulting IBE scheme was described in the setting of symmetric pairing. A key feature of the construction is the presence of random tags in the ciphertext and decryption key. Later work by Lewko and Waters removed the tags and, proceeding through composite-order pairings, led to a more efficient dual-system IBE scheme using asymmetric pairings whose security is based on non-standard but static assumptions. In this work, we systematically simplify Waters' 2009 IBE scheme in the setting of asymmetric pairing. The simplifications retain the tags used in the original description. This leads to several variants, the first of which is based on standard assumptions and, in comparison to Waters' original scheme, reduces ciphertexts and keys by two elements each. Going through several stages of simplifications, we finally obtain a simple scheme whose security can be based on two standard assumptions and a natural and minimal extension of the decision Diffie-Hellman problem for asymmetric pairing groups. The scheme itself is also minimal in the sense that, apart from the tags, both encryption and key generation use exactly one randomiser each. This final scheme is more efficient than both the previous dual-system IBE scheme in the asymmetric setting due to Lewko and Waters and the more recent dual-system IBE scheme due to Lewko. We extend the IBE scheme to hierarchical IBE (HIBE) and broadcast encryption (BE) schemes. Both primitives are secure in their respective full models and have better efficiencies compared to previously known schemes offering the same level and type of security.
Last updated:  2012-02-18
On the security of Lo et al.’s ownership transfer protocol
Masoumeh Safkhani, Nasour Bagheri, Majid Naderi, Ali Mahani
Recently Lo et al. have proposed an ownership transfer protocol for RFID objects using lightweight computing operators, and claim their protocol provides stronger security robustness and higher performance efficiency in comparison with existing solutions. However, in this paper we show that their claim unfortunately does not hold. More precisely, we present a tag’s secret disclosure attack, a new owner’s secret disclosure attack and a fraud attack against Lo et al.’s ownership transfer protocol. The success probability of all attacks is “1” while the complexity is only one run of the protocol. Our observations show that this protocol compromises the privacy of the tag and the new owner.
Last updated:  2012-01-20
Polynomial-Time, Semantically-Secure Encryption Achieving the Secrecy Capacity
Mihir Bellare, Stefano Tessaro
In the wiretap channel setting, one aims to get information-theoretic privacy of communicated data based only on the assumption that the channel from sender to adversary is noisier than the one from sender to receiver. The secrecy capacity is the optimal (highest possible) rate of a secure scheme, and the existence of schemes achieving it has been shown. For thirty years the ultimate and unreached goal has been to achieve this optimal rate with a scheme that is polynomial-time. (This means both encryption and decryption are proven polynomial time algorithms.) This paper finally delivers such a scheme. In fact it does more. Our scheme not only meets the classical notion of security from the wiretap literature, called MIS-R (mutual information security for random messages) but achieves the strictly stronger notion of semantic security, thus delivering more in terms of security without loss of rate.
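For reference (the standard information-theoretic definition, not notation from this paper): for a degraded wiretap channel with receiver output $Y$ and adversary output $Z$, the secrecy capacity is $C_s = \max_{P_X}\bigl(I(X;Y) - I(X;Z)\bigr)$, the highest rate at which the polynomial-time scheme of this paper is required to operate while remaining semantically secure.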
Last updated:  2012-01-19
Security Analysis of J-PAKE
Mohsen Toorani
J-PAKE is a balanced Password-Authenticated Key Exchange (PAKE) protocol, proposed in 2008 and presented again in 2010 and 2011. One of its distinguishing features is that it does not require a Public Key Infrastructure (PKI). Instead, it deploys Zero-Knowledge (ZK) techniques through Schnorr's signature, and requires many computations and random number generations. J-PAKE has been submitted as a candidate for the IEEE P1363.2 standard for password-based public key cryptography, included in OpenSSL and OpenSSH, and used in Mozilla Firefox's Sync mechanism. In this paper, we show that the J-PAKE protocol is vulnerable to a password compromise impersonation attack, and has other shortcomings with respect to replay and Unknown Key-Share (UKS) attacks.
Last updated:  2012-01-18
Dickson polynomials, hyperelliptic curves and hyper-bent functions
Jean-Pierre Flori, Sihem Mesnager
In this paper, we study the action of Dickson polynomials on subsets of finite fields of even characteristic related to the trace of the inverse of an element, and provide an alternate proof of a not-so-well-known result. Such properties are then applied to the study of a family of Boolean functions and a characterization of their hyper-bentness in terms of exponential sums recently proposed by Wang et al. Finally, we extend previous works of Lisoněk and of Flori and Mesnager to reformulate this characterization in terms of the number of points on hyperelliptic curves, and present some numerical results leading to an interesting problem.
Last updated:  2012-09-21
Towards Unconditional Soundness: Computationally Complete Symbolic Attacker
Gergei Bana, Hubert Comon-Lundh
We consider the question of the adequacy of symbolic models versus computational models for the verification of security protocols. We neither try to include properties in the symbolic model that reflect the properties of the computational primitives, nor add computational requirements that enforce the soundness of the symbolic model. We propose in this paper a different approach: everything is possible in the symbolic model unless it contradicts a computational assumption. In this way, we obtain unconditional soundness almost by construction. And we do not need to assume the absence of dynamic corruption or the absence of key-cycles, which are examples of hypotheses that are always used in related works. We set up the basic framework for arbitrary cryptographic primitives and arbitrary protocols, though for trace security properties only.
Last updated:  2013-05-14
Attacks and Security Proofs of EAX-Prime
Kazuhiko Minematsu, Stefan Lucks, Hiraku Morita, Tetsu Iwata
EAX$'$ (EAX-prime) is an authenticated encryption (AE) scheme specified by ANSI C12.22 as a standard security function for Smart Grid. EAX$'$ is based on EAX proposed by Bellare, Rogaway, and Wagner. While EAX has a proof of security based on the pseudorandomness of the internal blockcipher, no published security result is known for EAX$'$. This paper studies the security of EAX$'$ and shows that there is a sharp distinction in the security of EAX$'$ depending on the input length. EAX$'$ encryption takes two inputs, called cleartext and plaintext, and we present various efficient attacks against EAX$'$ using single-block cleartext and plaintext. At the same time we prove that if cleartexts are always longer than one block, it is provably secure based on the pseudorandomness of the blockcipher.
Last updated:  2012-02-03
Secondary constructions on generalized bent functions
Brajesh Kumar Singh
In this paper, we construct generalized bent Boolean functions in $n+ 2$ variables from $4$ generalized Boolean functions in $n$ variables. We also show that the direct sum of two generalized bent Boolean functions is generalized bent. Finally, we identify a set of affine functions in which every function is generalized bent.
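The direct-sum construction can be stated concisely (our sketch of the standard argument, with $q$ the modulus of the generalized setting): for $f:\mathbb{F}_2^n\to\mathbb{Z}_q$ and $g:\mathbb{F}_2^m\to\mathbb{Z}_q$, let $h(x,y)=f(x)+g(y) \bmod q$. The generalized Walsh–Hadamard transform then factors as $\mathcal{H}_h(a,b) = \mathcal{H}_f(a)\,\mathcal{H}_g(b)$, so $|\mathcal{H}_h(a,b)| = 2^{(n+m)/2}$ for all $(a,b)$ whenever $|\mathcal{H}_f| \equiv 2^{n/2}$ and $|\mathcal{H}_g| \equiv 2^{m/2}$; that is, the direct sum of two generalized bent functions is generalized bent.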
Last updated:  2012-02-07
Efficient Mix-Net Verification by Proofs of Random Blocks
Denise Demirel, Melanie Volkamer, Hugo Jonker
In order for a mix-net to be usable in practice (e.g. in electronic voting), efficient verification is a must. Despite many advances in the last decade, zero-knowledge proofs remain too computationally intense. Two alternative proof approaches have been suggested: optimistic mix-net verification and randomized partial checking. Puiggalí et al. proposed a verification method combining these two approaches. This paper investigates their mix-net and proposes a verification method which offers both improved efficiency and more privacy.
Last updated:  2012-01-20
A Cryptographic Treatment of the Wiretap Channel
Mihir Bellare, Stefano Tessaro, Alexander Vardy
The wiretap channel is a setting where one aims to provide information-theoretic privacy of communicated data based solely on the assumption that the channel from sender to adversary is ``noisier'' than the channel from sender to receiver. It has been the subject of decades of work in the information and coding (I&C) community. This paper bridges the gap between this body of work and modern cryptography with contributions along two fronts, namely METRICS (definitions) of security, and SCHEMES. We explain that the metric currently in use is weak and insufficient to guarantee security of applications and propose two replacements. One, that we call mis-security, is a mutual-information based metric in the I&C style. The other, semantic security, adapts to this setting a cryptographic metric that, in the cryptography community, has been vetted by decades of evaluation and endorsed as the target for standards and implementations. We show that they are equivalent (any scheme secure under one is secure under the other), thereby connecting two fundamentally different ways of defining security and providing a strong, unified and well-founded target for designs. Moving on to schemes, results from the wiretap community are mostly non-constructive, proving the existence of schemes without necessarily yielding ones that are explicit, let alone efficient, and only meeting their weak notion of security. We apply cryptographic methods based on extractors to produce explicit, polynomial-time and even practical encryption schemes that meet our new and stronger security target.
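In symbols (our paraphrase of the metric): writing $\mathsf{ChA}$ for the adversary's channel and $\mathcal{E}$ for the encryption scheme, mutual-information security asks that $\mathbf{Adv}^{\mathrm{mis}}(\mathcal{E}) = \max_{M} I\bigl(M;\,\mathsf{ChA}(\mathcal{E}(M))\bigr)$ be negligible, the maximum taken over all message distributions $M$; the weaker MIS-R notion of the prior literature restricts $M$ to the uniform distribution.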
Last updated:  2014-06-04
Reset Indifferentiability from Weakened Random Oracle Salvages One-pass Hash Functions
Yusuke Naito, Kazuki Yoneyama, Kazuo Ohta
Ristenpart et al. showed a limitation of the indifferentiability theorem of Maurer et al.: it covers only single-stage security notions, not all multi-stage security notions. They defined a new concept, reset indifferentiability, and proved a reset indifferentiability theorem analogous to the indifferentiability theorem that covers all security notions S: if H^U is reset indifferentiable from a RO, then for any security notion, a cryptosystem C is at least as secure in the U model as in the RO model. Unfortunately, they also proved that H^U cannot be reset indifferentiable from a RO when H is a one-pass hash function such as the ChopMD and Sponge constructions. In this paper, we propose a new modular proof approach in place of the RO methodology, Reset Indifferentiability from Weakened Random Oracle, called the WRO methodology, in order to ensure the security of C with H^U, salvaging ChopMD and Sponge. The concrete proof procedure of the WRO methodology is as follows: 1. Define a new concept, WRO, instead of RO; 2. Prove that H^U is reset indifferentiable from a WRO (examples of H being ChopMD and Sponge); and 3. Prove that C is secure in the WRO model. As a result we can prove that C with H^U is secure by combining the results of Steps 2 and 3 with the theorem of Ristenpart et al. Moreover, for public-key encryption (as the cryptosystem C) under chosen-distribution attack, we prove that C(WRO) is secure, which implies the appropriateness of the new concept of the WRO model.
Last updated:  2012-08-27
Higher Order Algebraic Attacks on Stream Ciphers
Qichun Wang, Thomas Johansson
Last updated:  2012-01-17
Malleable Proof Systems and Applications
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
Malleability for cryptography is not necessarily an opportunity for attack, but in many cases a potentially useful feature that can be exploited. In this work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully control the malleability of the proof system, as in many settings we would like to guarantee that only certain types of transformations can be performed. We also define notions for the cases in which we do not necessarily want a user to know that a proof has been obtained by applying a particular transformation; these are analogous to function/circuit privacy for encryption. As our motivating application, we consider a shorter proof for verifiable shuffles. Our controlled-malleable proofs allow us for the first time to use one compact proof to prove the correctness of an entire multi-step shuffle. Each authority takes as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability. As another application, we generically use controlled-malleable proofs to realize a strong notion of encryption security. Finally, we examine malleability in existing proof systems and observe that Groth-Sahai proofs are malleable. We then go beyond this observation by characterizing all the ways in which they are malleable, and use them to efficiently instantiate our generic constructions from above; this means we can instantiate our proofs and all their applications using only the Decision Linear (DLIN) assumption.
Last updated:  2012-01-10
Biclique Attack of the Full ARIA-256
Shao-zhen Chen, Tian-min Xu
In this paper, combining the biclique cryptanalysis with the MITM attack, we present the first key recovery method for the full ARIA-256 faster than brute-force. The attack requires $2^{80}$ chosen plaintexts, and the time complexity is about $2^{255.2}$ full-round ARIA encryptions in the processing phase.
Last updated:  2012-01-10
PayTree: "Amortized Signature" for Flexible Micro-Payments
Charanjit Jutla, Moti Yung
We present the idea of PayTree, a method to amortize the work of a single signature production and verification among numerous micropayments made from a payer to a set of merchants (under various trust assumptions regarding these merchants). The PayTree scheme is simple yet flexible: it can support an arbitrary number of payees while using a single signature (unlike PayWord); it is easily extendible dynamically (without the use of further signatures) and has a reasonable computational penalty. It can be viewed as a ``divisible coin'' mechanism as well.
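A minimal Python sketch of the amortization idea (our reading of it, not the paper's exact construction): hash the micro-payments into a binary tree, sign only the root once, and authenticate each individual payment to a merchant by its path of sibling hashes.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(payments):
    # Assumes len(payments) is a power of two, for simplicity.
    levels = [[h(p) for p in payments]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels            # levels[-1][0] is the single root to sign

def auth_path(levels, index):
    # Sibling hashes that let a merchant verify one payment against
    # the one signed root, without any further signatures.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path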
Last updated:  2012-01-07
On the Indifferentiability of the Integrated-Key Hash Functions
Saif Al-Kuwari
Most of today's popular hash functions are keyless: they accept variable-length messages and return fixed-length fingerprints. However, recent separation results reported several serious inherent weaknesses in these functions, motivating the design of hash functions in the keyed setting. The challenge in this case, however, is that on one hand, it is economically undesirable to abandon the already adopted (keyless) functions in favour of new (keyed) ones, and on the other hand, the process of converting a keyless function to a keyed one is, evidently, non-trivial. A solution to this dilemma is to adopt the "integrated-key" approach that creates keyed hash functions out of "unmodified" keyless primitives. In this paper, we adopt several integrated-key constructions and prove that they are indifferentiable from a random oracle, showing in detail how to develop indifferentiability proofs in the integrated-key setting. The presented indifferentiability proof is generic and can be applied to other hash functions constructed in this setting with structures sufficiently similar to the constructions in this paper.
Last updated:  2012-01-07
Security proof with dishonest keys
Hubert Comon-Lundh, Véronique Cortier, Guillaume Scerri
Symbolic and computational models are the two families of models for rigorously analysing security protocols. Symbolic models are abstract but offer a high level of automation, while computational models are more precise but security proofs can be tedious. Since the seminal work of Abadi and Rogaway, a new direction of research aims at reconciling the two views, and many soundness results establish that symbolic models are actually sound w.r.t. computational models. This is however not true for the prominent case of encryption. Indeed, all existing soundness results assume that the adversary only uses honestly generated keys. While this assumption is acceptable in the case of asymmetric encryption, it is clearly unrealistic for symmetric encryption. In this paper, we provide several examples of attacks that do not show up in the classical Dolev-Yao model, and that do not break the IND-CPA nor INT-CTXT properties of the encryption scheme. Our main contribution is to show the first soundness result for symmetric encryption and arbitrary adversaries. We consider arbitrary indistinguishability properties and an unbounded number of sessions. This result relies on an extension of the symbolic model, while keeping standard security assumptions: IND-CPA and INT-CTXT for the encryption scheme.
Last updated:  2012-01-08
Optimal Multiple Assignments with (m,m)-Scheme for General Access Structures
Qiang Li, Xiangxue Li, Dong Zheng, Kefei Chen
Given the number n of participants, one can solve an integer programming problem on 2^n variables to construct an optimal multiple assignment with threshold schemes for a general access structure. In this paper, we focus on finding optimal multiple assignments with (m,m)-schemes. We prove that most of the variables in the corresponding integer programming problem take the value 0, while the remaining variables take values of either 0 or 1. We also show that given a complete access structure, an optimal scheme may be obtained directly from the scheme by Ito, Saito, and Nishizeki ("Secret sharing scheme realizing general access structure", in Globecom 1987).
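The underlying (m,m)-scheme is the familiar XOR sharing, sketched below in Python for concreteness (the paper's contribution is how such shares are optimally assigned to participant subsets, which this sketch does not attempt):

import os

def share_mm(secret: bytes, m: int):
    # All m shares are required: m-1 shares are uniformly random, and
    # the last is fixed so that the XOR of all m shares is the secret.
    shares = [os.urandom(len(secret)) for _ in range(m - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def reconstruct(shares):
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out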
Last updated:  2012-05-16
Detecting Dangerous Queries: A New Approach for Chosen Ciphertext Security
Susan Hohenberger, Allison Lewko, Brent Waters
We present a new approach for creating chosen ciphertext secure encryption. The focal point of our work is a new abstraction that we call "Detectable Chosen Ciphertext Security" (DCCA). Intuitively, this notion is meant to capture systems that are not necessarily chosen ciphertext attack (CCA) secure, but where we can detect whether a certain query CT can be useful for decrypting (or distinguishing) a challenge ciphertext CT*. We show how to build chosen ciphertext secure systems from DCCA security. We motivate our techniques by describing multiple examples of DCCA systems including creating them from 1-bit CCA secure encryption --- capturing the recent Myers-shelat result (FOCS 2009). Our work identifies DCCA as a new target for building CCA secure systems.
Last updated:  2014-01-07
A Unified Approach to Deterministic Encryption: New Constructions and a Connection to Computational Entropy
Benjamin Fuller, Adam O'Neill, Leonid Reyzin
This paper addresses deterministic public-key encryption schemes (DE), which are designed to provide meaningful security when the only source of randomness in the encryption process comes from the message itself. We propose a general construction of DE that unifies prior work and gives novel schemes. Specifically, its instantiations include: - The first construction from any trapdoor function that has sufficiently many hardcore bits. - The first construction that provides "bounded" multi-message security (assuming lossy trapdoor functions). The security proofs for these schemes are enabled by three tools that are of broader interest: - A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M|E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.) - A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log_2 1/p. - A generalization of the leftover hash lemma to correlated distributions. We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking \lambda bits of information reduces the quality of computational entropy by a factor of 2^\lambda and its quantity by \lambda.
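Stated as a formula (our restatement of the second tool): if $X$ has HILL entropy $k$ with quality $\varepsilon$, i.e., $X$ is $\varepsilon$-indistinguishable from some distribution of min-entropy $k$, and $E$ is an event with $\Pr[E]=p$, then $X|E$ has HILL entropy at least $k - \log_2 \frac{1}{p}$ with quality $\varepsilon/p$.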
Last updated:  2012-02-20
The new SHA-3 software shootout
Daniel J. Bernstein, Tanja Lange
This paper introduces a new graphing mechanism to allow easy comparison of software performance of the SHA-3 candidates. The new mechanism concisely captures a large amount of performance data without oversimplifying the data. We have integrated this graphing mechanism into our eBASH (ECRYPT Benchmarking of All Submitted Hashes) project. New graphs are automatically posted at the top of http://bench.cr.yp.to/results-sha3.html whenever the eBASH performance results are updated. This paper includes snapshots of these graphs, but readers are advised to check the web page for the latest updates. See http://bench.cr.yp.to for more information regarding eBASH.
Last updated:  2012-01-05
On the distinctness of binary sequences derived from primitive sequences modulo square-free odd integers
Qun-Xiong Zheng, Wen-Feng Qi, Tian Tian
Let M be a square-free odd integer and Z/(M) the integer residue ring modulo M. This paper studies the distinctness of primitive sequences over Z/(M) modulo 2. Recently, for the case of M = pq, a product of two distinct prime numbers p and q, the problem has been almost completely solved. As for the case that M is a product of more prime numbers, the problem has been quite resistant to proof. In this paper, a partial proof is given by showing that a class of primitive sequences of order 2k+1 over Z/(M) is distinct modulo 2. In addition, and of independent interest, the paper establishes two distribution properties of primitive sequences over Z/(M), which relate closely to our main results.
Last updated:  2012-01-02
ECC2K-130 on NVIDIA GPUs
Daniel J. Bernstein, Hsieh-Chung Chen, Chen-Mou Cheng, Tanja Lange, Ruben Niederhagen, Peter Schwabe, Bo-Yin Yang
A major cryptanalytic computation is currently underway on multiple platforms, including standard CPUs, FPGAs, PlayStations and GPUs, to break the Certicom ECC2K-130 challenge. This challenge is to compute an elliptic-curve discrete logarithm on a Koblitz curve over F_2^131 . Optimizations have reduced the cost of the computation to approximately 2^77 bit operations in 2^61 iterations. GPUs are not designed for fast binary-field arithmetic; they are designed for highly vectorizable floating-point computations that fit into very small amounts of static RAM. This paper explains how to optimize the ECC2K-130 computation for this unusual platform. The resulting GPU software performs more than 63 million iterations per second, including 320 million F_2^131 multiplications per second, on a $500 NVIDIA GTX 295 graphics card. The same techniques for finite-field arithmetic and elliptic-curve arithmetic can be reused in implementations of larger systems that are secure against similar attacks, making GPUs an interesting option as coprocessors when a busy Internet server has many elliptic-curve operations to perform in parallel.
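For readers outside the binary-field world, the operation being optimized is polynomial multiplication over $\mathbb{F}_2$ followed by modular reduction; a reference Python version is below (the reduction pentanomial shown is an illustrative assumption for this sketch, not necessarily the polynomial defining the ECC2K-130 field):

def gf2_mul(a: int, b: int, poly: int, nbits: int = 131) -> int:
    # Schoolbook carryless multiply-and-reduce in F_2[x]/(poly).
    # GPUs have no carryless multiplier, hence the paper's careful
    # bitsliced approach; this version only documents the arithmetic.
    r = 0
    while b:
        if b & 1:
            r ^= a          # XOR is addition in characteristic 2
        b >>= 1
        a <<= 1
        if (a >> nbits) & 1:  # degree reached nbits: reduce once
            a ^= poly
    return r

# Illustrative degree-131 pentanomial (an assumption of this sketch):
POLY = (1 << 131) | (1 << 13) | (1 << 2) | (1 << 1) | 1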
Last updated:  2012-01-02
Digital Signatures from Challenge-Divided Sigma-Protocols
Andrew C. Yao, Yunlei Zhao
Digital signatures are one of the basic primitives in cryptography. A common paradigm for obtaining signatures, known as the Fiat-Shamir (FS) paradigm, is to collapse any Σ-protocol (which is 3-round, public-coin, honest-verifier zero-knowledge) into a non-interactive scheme with hash functions that are modeled as random oracles (RO). The Digital Signature Standard (DSS) and Schnorr’s signature scheme are two salient examples following the FS paradigm. In this work, we present a modified Fiat-Shamir paradigm, named the challenge-divided Fiat-Shamir paradigm, which is applicable to a variant of Σ-protocol with divided random challenges. This new paradigm yields a new family of (online/offline efficient) digital signatures from challenge-divided Σ-protocols, including in particular a variant of Schnorr’s signature scheme called the challenge-divided Schnorr signature. We then present a formal analysis of the challenge-divided Schnorr signature in the random oracle model. Finally, we give comparisons between the challenge-divided Schnorr signature and DSS and Schnorr’s signature, showing that the newly developed challenge-divided Schnorr signature can enjoy better (online/offline) efficiency (besides provable security in the random oracle model). Of independent interest is a new forking lemma, referred to as the divided forking lemma, for dealing with multiple ordered rewinding points in the RO model, which can be applied to analyzing other cryptographic schemes in the RO model.
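For orientation (standard Schnorr via Fiat-Shamir, not the challenge-divided variant itself): in a group $G = \langle g \rangle$ of prime order $q$ with secret key $x$ and public key $y = g^x$, signing a message $m$ picks $r \leftarrow \mathbb{Z}_q$ and outputs $(c, s)$ where $R = g^r$, $c = H(R, m)$, and $s = r + cx \bmod q$; verification recomputes $R' = g^s y^{-c}$ and checks $c = H(R', m)$. The challenge-divided paradigm modifies how the challenge enters this flow; we refer to the paper for the exact form.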