All papers in 2011 (Page 7 of 714 results)

Last updated:  2011-07-04
Fully Homomorphic Encryption, Approximate Lattice Problem and LWE
Gu Chunsheng
In this paper, we first introduce a new concept, the approximate lattice problem (ALP), which is an extension of learning with errors (LWE). Next, we propose two ALP-based public key encryption schemes. Then, we construct two new fully homomorphic encryption (FHE) schemes based respectively on the approximate principal ideal lattice problem with related modulus (APIP-RM) and the approximate lattice problem with related modulus (ALP-RM). Moreover, we also extend our ALP-RM-based FHE to the ALP problem with unrelated modulus (ALP-UM). Our work differs from previous works in three aspects: (1) We extend the LWE problem to the ALP problem. This ALP problem is similar to the closest vector problem in lattices. We believe that this problem is of independent interest. (2) We construct a new FHE by using a re-randomizing method, which is different from the squashing decryption in previous works. (3) The expansion rate of our FHE is merely O(k), with k a security parameter, and can be improved to O(log k) by using dimension reduction [BV11], whereas all previous schemes have expansion rate at least O(k*log k) [BV11, Gen11, LNV11]. Our method can also decrease the expansion rate of their schemes by a factor of k.
Last updated:  2012-06-22
Efficient Techniques for Privacy-Preserving Sharing of Sensitive Information
Emiliano De Cristofaro, Yanbin Lu, Gene Tsudik
The need for controlled (privacy-preserving) sharing of sensitive information occurs in many different and realistic everyday scenarios, ranging from national security to social networking. We consider two interacting parties, at least one of which seeks information from the other: the latter is either willing, or compelled, to share information. This poses two challenges: (1) how to enable this type of sharing such that parties learn no information beyond what they are entitled to, and (2) how to do so efficiently, in real-world practical terms. This paper explores the notion of Privacy-Preserving Sharing of Sensitive Information (PPSSI), and provides two concrete and efficient instantiations, modeled in the context of simple database querying. The proposed techniques function as a privacy shield to protect parties from disclosing more than the required minimum of their respective sensitive information. PPSSI deployment prompts several challenges that are addressed in this paper. Extensive experimental results attest to the practicality of the attained privacy features and show that they incur quite low overhead (e.g., $10\%$ slower than standard MySQL).
Last updated:  2011-03-10
An efficient certificateless two-party authenticated key agreement scheme from pairings
Debiao He, Jin Hu
Key agreement (KA) allows two or more users to negotiate a secret session key among them over an open network. Authenticated key agreement (AKA) is a KA protocol enhanced to prevent active attacks. AKA can be achieved using a public-key infrastructure (PKI) or identity-based cryptography. However, the former suffers from a heavy certificate management burden, while the latter is subject to the so-called key escrow problem. Recently, certificateless cryptography was introduced to mitigate these limitations. We propose an efficient certificateless two-party AKA protocol. Security is proven under the standard computational Diffie-Hellman (CDH) and bilinear Diffie-Hellman (BDH) assumptions. Our protocol is efficient and practical, because it requires only one pairing operation and three scalar multiplications by each party. Moreover, the pairing operation and one scalar multiplication can be pre-computed, so only two scalar multiplications are needed to finish the key agreement.
Last updated:  2011-03-06
Generalizations of Bent Functions. A Survey
Natalia Tokareva
Bent functions (Boolean functions with extreme nonlinearity properties) are actively studied for their numerous applications in cryptography, coding theory, and other fields. New statements of problems lead to a large number of generalizations of bent functions, many of which remain little known to experts in Boolean functions. In this article, we offer a systematic survey of them.
Last updated:  2011-03-08
Fully Homomorphic Encryption over the Binary Polynomials
Gu Chunsheng
This paper presents a new fully homomorphic encryption scheme over the binary polynomials. By using the self-loop bootstrappable technique, a ciphertext is refreshed to a new ciphertext that carries the same message as the original ciphertext but has smaller error terms. The ciphertext size remains fixed, and the ciphertext expansion is O(n) in our scheme. The security of our scheme is based on the hardness of an approximate-GCD problem over the binary polynomials: given a list of binary polynomials perturbed by error polynomials of smaller degree, recover their common divisor.
Last updated:  2011-03-05
Secure Blind Decryption
Matthew Green
In this work we construct public key encryption schemes that admit a protocol for /blindly/ decrypting ciphertexts. In a blind decryption protocol, a user with a ciphertext interacts with a secret keyholder such that the user obtains the decryption of the ciphertext and the keyholder learns nothing about what it decrypted. While we are not the first to consider this problem, previous works provided only weak security guarantees against malicious users. We provide, to our knowledge, the first practical blind decryption schemes that are secure under a strong CCA security definition. We prove our construction secure in the standard model under simple, well-studied assumptions in bilinear groups. To motivate the usefulness of this primitive we discuss several applications including privacy-preserving distributed file systems and Oblivious Transfer schemes that admit /public/ contribution.
Last updated:  2011-03-05
Practical Secure and Efficient Multiparty Linear Programming Based on Problem Transformation
Jannik Dreier, Florian Kerschbaum
Cryptographic solutions to privacy-preserving multiparty linear programming are slow. This makes them unsuitable for many economically important applications, such as supply chain optimization, whose size exceeds their practically feasible input range. In this paper we present a privacy-preserving transformation that allows secure outsourcing of the linear program computation in an efficient manner. We evaluate security by quantifying the leakage about the input after the transformation and present implementation results. Using this transformation, we can mostly replace the costly cryptographic operations and securely solve problems several orders of magnitude larger.
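To make the outsourcing idea concrete, here is a minimal plaintext sketch of one generic way to disguise a linear program before handing it to an external solver. The variable substitution and the scaling matrices below are illustrative assumptions only, not the transformation analysed in the paper, and no leakage claim is made for them.

import numpy as np

rng = np.random.default_rng(0)

# Original (private) LP:  minimize c^T x  subject to  A x <= b,  x >= 0
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([10.0, 12.0])
c = np.array([4.0, 5.0])

Q = np.diag(rng.uniform(1.0, 5.0, size=A.shape[1]))  # substitute x = Q y (x >= 0 becomes y >= 0)
S = np.diag(rng.uniform(1.0, 5.0, size=A.shape[0]))  # rescale each constraint by a positive factor

# Published (disguised) LP:  minimize (Q^T c)^T y  subject to  (S A Q) y <= S b,  y >= 0
A_pub, b_pub, c_pub = S @ A @ Q, S @ b, Q.T @ c

# The solver returns an optimum y*; the data owner recovers the original optimum x* = Q y*.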
Last updated:  2011-05-17
Threshold Encryption into Multiple Ciphertexts
Martin Stanek
We propose a (T,N) multi-ciphertext scheme for symmetric encryption. The scheme encrypts a message into N distinct ciphertexts. Knowledge of the symmetric key allows decryption of the original message from any single ciphertext. Moreover, knowing any T+1 ciphertexts allows efficient recovery of the original message without the key (and without revealing the key). We define the security property of the scheme and prove the security of the proposed scheme. We discuss several variants of the basic scheme that provide additional authenticity and efficiency.
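For intuition about this interface only, the following toy sketch realises the same functionality in the simplest possible way; it is emphatically not the scheme proposed in the paper and carries no security claim. Each ciphertext is $c_i = m + f(i) \bmod p$ for a key-derived degree-$T$ polynomial $f$ with $f(0)=0$: the key holder strips $f(i)$ from any single ciphertext, while any $T+1$ ciphertexts determine $m$ by Lagrange interpolation at zero. All names and parameters below are illustrative.

import hashlib

p, T, N = 2**61 - 1, 2, 5   # toy prime modulus, threshold, number of ciphertexts

def coeffs(key: bytes):
    # Key-derived coefficients a_1..a_T of f(X) = a_1 X + ... + a_T X^T, so f(0) = 0.
    return [int.from_bytes(hashlib.sha256(key + bytes([j])).digest(), "big") % p
            for j in range(1, T + 1)]

def f(key: bytes, i: int) -> int:
    return sum(a * pow(i, j + 1, p) for j, a in enumerate(coeffs(key))) % p

def encrypt(key: bytes, m: int):
    return [(i, (m + f(key, i)) % p) for i in range(1, N + 1)]

def decrypt_with_key(key: bytes, ct):
    i, c = ct
    return (c - f(key, i)) % p

def recover_without_key(cts):
    # Lagrange interpolation at X = 0 from any T+1 ciphertexts (points of m + f(X)).
    m = 0
    for i, c in cts:
        lam = 1
        for j, _ in cts:
            if j != i:
                lam = lam * j % p * pow(j - i, -1, p) % p
        m = (m + lam * c) % p
    return m

key, m = b"a symmetric key", 424242
cts = encrypt(key, m)
assert decrypt_with_key(key, cts[3]) == m
assert recover_without_key(cts[:T + 1]) == m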
Last updated:  2011-03-05
Common Randomness and Secret Key Capacities of Two-way Channels
Hadi Ahmadi, Reihaneh Safavi-Naini
Common Randomness Generation (CRG) and Secret Key Establishment (SKE) are fundamental primitives that are used in information-theoretic coding and cryptography. We study these two problems over the two-way channel model of communication, introduced by Shannon. In this model, the common randomness (CR) capacity is defined as the maximum number of random bits per channel use that the two parties can generate. The secret key (SK) capacity is defined similarly when the random bits are also required to be secure against a passive adversary. We provide lower bounds on the two capacities. These lower bounds are tighter than those one might derive based on the previously known results. We prove our lower bounds by proposing a two-round, two-level coding construction over the two-way channel. We show that the lower bound on the common randomness capacity can also be achieved using a simple interactive channel coding (ICC) method. We furthermore provide upper bounds on these capacities and show that the lower and the upper bounds coincide when the two-way channel consists of two independent (physically degraded) one-way channels. We apply the results to the case where the channels are binary symmetric.
Last updated:  2011-03-05
Explicit Formulas for Real Hyperelliptic Curves of Genus 2 in Affine Representation
S. Erickson, M. J. Jacobson Jr., A. Stein
We present a complete set of efficient explicit formulas for arithmetic in the degree 0 divisor class group of a genus two real hyperelliptic curve given in affine coordinates. In addition to formulas suitable for curves defined over an arbitrary finite field, we give simplified versions for both the odd and the even characteristic cases. Formulas for baby steps, inverse baby steps, divisor addition, doubling, and special cases such as adding a degenerate divisor are provided, with variations for divisors given in reduced and adapted basis. We describe the improvements and the correctness together with a comprehensive analysis of the number of field operations for each operation. Finally, we perform a direct comparison of cryptographic protocols using explicit formulas for real hyperelliptic curves with the corresponding protocols presented in the imaginary model.
Last updated:  2011-03-05
Unconditionally Secure Signature Schemes Revisited
Colleen M. Swanson, Douglas R. Stinson
Unconditionally secure signature (USS) schemes provide the ability to electronically sign documents without the reliance on computational assumptions needed in traditional digital signatures. Unlike digital signatures, USS schemes require both different signing and different verification algorithms for each user in the system. Thus, any viable security definition for a USS scheme must carefully treat the subject of what constitutes a valid signature. That is, it is important to distinguish between signatures that are created using a user's signing algorithm and signatures that may satisfy one or more user verification algorithms. Moreover, given that each verifier has his own distinct verification algorithm, a USS scheme must necessarily handle the event of a disagreement. In this paper, we present a new security model for USS schemes that incorporates these notions, as well as give a formal treatment of dispute resolution and the trust assumptions required. We provide formal definitions of non-repudiation and transferability in the context of dispute resolution, and give sufficient conditions for a USS scheme to satisfy these properties. Finally, we give an analysis of the construction of Hanaoka et al. in our security model.
Last updated:  2011-03-05
Cryptographically Sound Security Proof for On-Demand Source Routing Protocol EndairA
István Vajda
We present the first cryptographically sound security proof of a routing protocol for mobile ad-hoc networks. More precisely, we show that the route discovery protocol does not output a non-existing path under arbitrary active attacks, where on a non-existing path there exists at least one pair of neighboring nodes without communication connection during the run of the route discovery protocol. The proof relies on the Dolev-Yao-style model of Backes, Pfitzmann and Waidner, which allows for mapping results obtained symbolically within this model to cryptographically sound proofs if certain assumptions are met.
Last updated:  2012-07-09
Optimal and Parallel Online Memory Checking
Charalampos Papamanthou, Roberto Tamassia
Memory checking studies the problem of cryptographically verifying the correctness of untrusted indexed storage. After a series of results yielding checkers with $O(\log n)$ query complexity, Dwork, Naor, Rothblum and Vaikuntanathan~\cite{naor-lower-mm08} derived an $\Omega(\log n/\log \log n)$ lower bound on the query complexity of any checker operating on memory words of polylogarithmic size, where $n$ is the number of memory indices. In view of this lower bound, we make the following two contributions: (1) We construct an \emph{optimal online memory checker} of $\Theta(\log n/\log \log n)$ query complexity, closing in this way the relevant complexity gap. Our construction employs pseudorandom functions and a simple data grouping technique inspired by I/O algorithms. (2) In our second and main result, we put forth the notion of \emph{parallel online memory checking} and provide parallel checker constructions with $O(1)$ query complexity and $O(\log n)$ processors. We initially show that checkers that use \emph{secret} small memory, including our optimal checker, are easily parallelizable; however, checkers that use only \emph{reliable} small memory cannot be naturally parallelized. We overcome this barrier by employing an algebraic hash function based on lattice assumptions and construct such parallel checkers with only reliable memory. To achieve our result, we establish and exploit a property that we call \emph{repeated linearity} of lattice-based hash functions, which might be of independent interest. Applications of our checkers include \emph{update-optimal} external memory authenticated data structures. We construct an authenticated B-tree data structure which can be updated with \emph{two} I/Os, outperforming the \emph{logarithmic} update complexity of hash-based external memory Merkle trees.
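As background for the comparison in the last sentence, here is a minimal sketch of the hash-based Merkle-tree baseline that such constructions improve upon: verifying a single memory word touches one leaf-to-root path, i.e. $O(\log n)$ hashes. The helper names and the power-of-two restriction below are illustrative assumptions, not the authors' optimal or parallel checkers.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Level 0 holds the hashed memory words; the last level holds the trusted root.
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index):
    # Sibling hashes along the path from leaf 'index' to the root.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, word, index, path):
    node = h(word)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

memory = [bytes([i]) * 4 for i in range(8)]        # n = 8 memory words
levels = build_tree(memory)
root = levels[-1][0]                               # kept in small reliable memory
assert verify(root, memory[3], 3, prove(levels, 3))          # honest read passes
assert not verify(root, b"tampered", 3, prove(levels, 3))    # modified word is caught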
Last updated:  2011-03-02
Lightweight Anonymous Authentication with TLS and DAA for Embedded Mobile Devices
Liqun Chen, Kurt Dietrich, Hans Löhr, Ahmad-Reza Sadeghi, Christian Wachsmann, Johannes Winter
Although anonymous authentication has been extensively studied, so far no scheme has been widely adopted in practice. A particular issue with fully anonymous authentication schemes is that users cannot easily be prevented from copying and sharing credentials. In this paper, we propose an anonymous authentication scheme for mobile devices that prevents copying and sharing of credentials based on hardware security features. Our system is an optimized adaptation of an existing direct anonymous attestation (DAA) scheme, specifically designed for resource-constrained mobile devices. Our solution provides (i) anonymity and untraceability of mobile embedded devices against service providers, (ii) secure device authentication even against collusions of malicious service providers, and (iii) allows for revocation of authentication credentials. We present a new cryptographic scheme with a proof of security, as well as an implementation on ARM TrustZone. Moreover, we evaluate the efficiency of our approach and demonstrate its suitability for mobile devices.
Last updated:  2011-03-28
A Novel Group Signature Scheme Based on MPKC
Guangdong Yang, Shaohua Tang, Li Yang
Group signature allows a group member to sign messages anonymously on behalf of a group. In the case of a dispute, the designated group manager can open the signature to reveal the identity of its originator. As far as we know, most of the group signatures are based on traditional cryptography, such as RSA and discrete logarithm. Unfortunately, these schemes would be broken if quantum computers emerge. The $\mathcal{MQ}$-problem based Multivariate Public-Key Cryptosystem (MPKC) is an important alternative to traditional PKCs for its potential to resist future attacks of quantum computers. The first group signature scheme based on MPKC is proposed in this paper. This scheme has two special but important features. First, the group signature can be divided into different time periods. The signatures are linkable in the same time period, but un-linkable between different time periods. Second, the privileges of the group manager are limited. The group manager cannot open a signature without the help of the verifier. These features are important in some applications such as e-voting systems. The theory of this scheme is simple, and its security relies on the Isomorphism of Polynomials (IP) Problem and a random hash function.
Last updated:  2011-03-02
Can Code Polymorphism Limit Information Leakage?
Antoine Amarilli, Sascha Müller, David Naccache, Daniel Page, Pablo Rauzy, Michael Tunstall
In addition to its usual complexity assumptions, cryptography silently assumes that information can be physically protected in a single location. As one can easily imagine, real-life devices are not ideal and information may leak through different physical side-channels. It is a known fact that information leakage is a function of both the executed code $F$ and its input $x$. In this work we explore the use of polymorphic code as a way of resisting side channel attacks. We present experimental results with procedural and functional languages. In each case we rewrite the protected code $F_i$ before its execution. The outcome is a genealogy of programs $F_0,F_1,\ldots$ such that for all inputs $x$ and all indices $i \neq j$ we have $F_i(x)=F_j(x)$ and $F_i \neq F_j$. This is shown to increase resistance to side channel attacks.
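As a toy illustration of the genealogy $F_0, F_1, \ldots$ described above (not the paper's code-rewriting engine), the sketch below draws a fresh mask and instruction order per variant while every variant still computes the same function of its input; all names and the particular target function are illustrative assumptions.

import random

def make_variant(seed: int):
    rng = random.Random(seed)
    mask = rng.randrange(256)
    order = rng.sample(range(4), 4)      # a per-variant schedule of independent steps
    def F(x: int) -> int:
        # Each variant XORs the four low bytes of x: the per-variant mask cancels
        # (it is XORed an even number of times) and XOR is order-independent, so
        # all variants agree on every input even though their code differs.
        parts = [((x >> (8 * i)) & 0xFF) ^ mask for i in range(4)]
        acc = 0
        for i in order:
            acc ^= parts[i]
        return acc
    return F

F0, F1 = make_variant(0), make_variant(1)
assert all(F0(x) == F1(x) for x in (0, 0xDEADBEEF, 123456789))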
Last updated:  2011-02-28
Computing Discrete Logarithms in the Jacobian of High-Genus Hyperelliptic Curves over Even Characteristic Finite Fields
M. D. Velichka, M. J. Jacobson Jr., A. Stein
We describe improved versions of index-calculus algorithms for solving discrete logarithm problems in Jacobians of high-genus hyperelliptic curves defined over even characteristic fields. Our first improvement is to incorporate several ideas for the low-genus case by Gaudry and Theriault, including the large prime variant and using a smaller factor base, into the large-genus algorithm of Enge and Gaudry. We extend the analysis in [24] to our new algorithm, allowing us to predict accurately the number of random walk steps required to find all relations, and to select optimal degree bounds for the factor base. Our second improvement is the adaptation of sieving techniques from Flassenberg and Paulus, and Jacobson to our setting. The new algorithms are applied to concrete problem instances arising from the Weil descent attack methodology for solving the elliptic curve discrete logarithm problem, demonstrating significant improvements in practice.
Last updated:  2011-02-28
Fastplay-A Parallelization Model and Implementation of SMC on CUDA based GPU Cluster Architecture
Shi Pu, Pu Duan, Jyh-Charn Liu
We propose a four-tiered parallelization model for accelerating secure multiparty computation (SMC) on a CUDA-based Graphic Processing Unit (GPU) cluster architecture. The specification layer is the top layer, which adopts the SFDL of Fairplay for the specification of secure computations. The SHDL file generated by the SFDL compiler of Fairplay is used as input to the function layer, for which we developed both multi-core and GPU-based control functions for garbling various types of Boolean gates and for ECC-based 1-out-of-2 Oblivious Transfer (OT). These high-level control functions invoke computation of 3-DGG (3-DES gate garbling), EGG (ECC-based gate garbling), and ECC-based OT, which run at the secure protocol layer. An ECC Arithmetic GPU Library (EAGL), which co-runs on the GPU cluster and its host, manages the utilization of GPUs in the parallel computation of ECC arithmetic. Experimental results show highly linear acceleration of ECC-related computations when the system is not overloaded. When running on a GPU cluster consisting of 6 Tesla C870 devices, with the GPU devices fully loaded with over 3000 execution threads, Fastplay achieved a 35 to 40 times acceleration over a serial implementation running on a 2.53 GHz dual-core CPU with 4 GB of memory. When the execution thread count exceeds this number, the speedup factor remains fairly constant, increasing only slightly.
Last updated:  2011-12-26
Computing on Authenticated Data
Jae Hyun Ahn, Dan Boneh, Jan Camenisch, Susan Hohenberger, abhi shelat, Brent Waters
In tandem with recent progress on computing on encrypted data via fully homomorphic encryption, we present a framework for computing on *authenticated* data via the notion of slightly homomorphic signatures, or P-homomorphic signatures. With such signatures, it is possible for a third party to derive a signature on the object m' from a signature of m as long as P(m,m')=1 for some predicate P which captures the ``authenticatable relationship" between m' and m. Moreover, a derived signature on m' reveals no extra information about the parent m. Our definition is carefully formulated to provide one unified framework for a variety of distinct concepts in this area, including arithmetic, homomorphic, quotable, redactable, transitive signatures and more. It includes being unable to distinguish a derived signature from a fresh one even when given the original signature. The inability to link derived signatures to their original sources prevents some practical privacy and linking attacks, which is a challenge not satisfied by most prior works. Under this strong definition, we then provide generic constructions for all univariate and closed predicates, and specific efficient constructions for a broad class of natural predicates such as quoting, subsets, weighted sums, averages, and Fourier transforms. To our knowledge, these are the first efficient constructions for these predicates (excluding subsets) that provably satisfy this strong security notion.
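As a concrete instance of the predicate interface described above, quoting can be captured by the relation below; this shows only the predicate $P$ (with illustrative names), not the signature construction itself.

def P_quote(m: str, m_derived: str) -> bool:
    # P(m, m') = 1 iff m' is a contiguous quote of the signed message m,
    # i.e. a signature on m may legitimately be transformed into one on m'.
    return m_derived in m

assert P_quote("computing on authenticated data", "authenticated data")
assert not P_quote("computing on authenticated data", "encrypted data")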
Last updated:  2011-05-26
ALRED Blues: New Attacks on AES-Based MAC's
Orr Dunkelman, Nathan Keller, Adi Shamir
The ALRED family of Message Authentication Codes (MAC's) is based on three principles: Using a keyless block cipher in CBC mode to process the message, choosing AES-128 as this cipher, and reducing the effective number of rounds to 4 in order to speed up the processing. In this paper we show that each one of these principles creates significant weaknesses. More specifically, we show that any ALRED-type MAC which uses a keyless block cipher is vulnerable to new time/memory tradeoff attacks which are faster than generic tradeoff attacks on one-way functions. We then use the special properties of keyless AES to attack any number of rounds (4, 10, or a million) by forging the MAC of essentially any desired message in negligible time and space after a one-time preprocessing stage requiring 2^{96} time and negligible space. For the recommended 4-round version we show how to do the same using an improved preprocessing stage with a semi-practical time complexity of 2^{65}, which is the best one can hope for in such MAC constructions. Finally, we show that even if we replace the 4-round keyless AES by a 5-round or a 6-round version with additional secret round keys we can still compute such MAC's much faster than via exhaustive search.
Last updated:  2011-02-28
Graceful Degradation in Multi-Party Computation
Martin Hirt, Christoph Lucas, Ueli Maurer, Dominik Raub
The goal of \emph{Multi-Party Computation} (MPC) is to perform an arbitrary computation in a distributed, private, and fault-tolerant way. For this purpose, a fixed set of $n$ parties runs a protocol that tolerates an adversary corrupting a subset of the participating parties, and still preserves certain security guarantees. Most MPC protocols provide security guarantees in an \emph{all-or-nothing} fashion: Either the set of corrupted parties is tolerated and the protocol provides all specified security guarantees, or the set of corrupted parties is not tolerated and the protocol provides no security guarantees at all. Similarly, corruptions are in an all-or-nothing fashion: Either a party is fully honest, or it is fully corrupted. For example, an actively secure protocol is rendered completely insecure when just one party is corrupted additionally to what is tolerated, even if all corrupted parties are only passive. In this paper, we provide the first treatment of MPC with graceful degradation of both security and corruptions. First of all, our protocols provide graceful degradation of security, i.e., different security guarantees depending on the actual number of corrupted parties: the more corruptions, the weaker the security guarantee. We consider all security properties generally discussed in the literature (secrecy, correctness, robustness, fairness, and agreement on abort). Furthermore, the protocols provide graceful degradation with respect to the corruption type, by distinguishing fully honest parties, passively corrupted parties, and actively corrupted parties. Security can be maintained against more passive corruptions than is possible for active corruptions. We focus on perfect security, and prove exact bounds for which MPC with graceful degradation of security and corruptions is possible for both threshold and general adversaries. Furthermore, we provide protocols that meet these bounds. This strictly generalizes known results on hybrid security and mixed adversaries.
Last updated:  2011-02-28
Linear Cryptanalysis Using Multiple Linear Approximations
Miia Hermelin, Kaisa Nyberg
In this article, the theory of multidimensional linear attacks on block ciphers is developed and the basic attack algorithms and their complexity estimates are presented. As an application the multidimensional linear distinguisher derived by Cho for the block cipher PRESENT is discussed in detail.
Last updated:  2012-08-15
Characterization of the relations between information-theoretic non-malleability, secrecy, and authenticity
Akinori Kawachi, Christopher Portmann, Keisuke Tanaka
Roughly speaking, an encryption scheme is said to be non-malleable, if no adversary can modify a ciphertext so that the resulting message is meaningfully related to the original message. We compare this notion of security to secrecy and authenticity, and provide a complete characterization of their relative strengths. In particular, we show that information-theoretic perfect non-malleability is equivalent to perfect secrecy of two different messages. This implies that for $n$-bit messages a shared secret key of length roughly $2n$ is necessary to achieve non-malleability, which meets the previously known upper bound. We define approximate non-malleability by relaxing the security conditions and only requiring non-malleability to hold with high probability (over the choice of secret key), and show that any authentication scheme implies approximate non-malleability. Since authentication is possible with a shared secret key of length roughly $\log n$, the same applies to approximate non-malleability.
Last updated:  2012-02-14
A New Approach to Practical Active-Secure Two-Party Computation
Jesper Buus Nielsen, Peter Sebastian Nordholt, Claudio Orlandi, Sai Sheshank Burra
We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce a number of novel techniques for relating the outputs and inputs of OTs in a larger construction. We also report on an implementation of this approach, which shows that our protocol is more efficient than any previous one: For big enough circuits, we can evaluate more than 20000 Boolean gates per second. As an example, evaluating one oblivious AES encryption (~34000 gates) takes 64 seconds, but when the task is repeated 27 times, it takes less than 3 seconds per instance.
Last updated:  2011-03-01
Generic Methods to Achieve Tighter Security Reductions for a Category of IBE Schemes
Yu Chen, Liqun Chen, Zhong Chen
We show that Katz-Wang’s duplicating key and ciphertext technique can be extended to a generic method that can be used in a certain category of Identity-Based Encryption (IBE) schemes for the purpose of improving their security reductions. We further develop two refined approaches by adapting the randomness reuse technique in the Katz-Wang technique: one is public key duplication, and the other is master key duplication. Compared to the Katz-Wang technique, our two refined approaches not only improve the performance of the resulting IBE schemes but also enable a reduction algorithm to deal with decryption queries correctly and therefore can achieve chosen ciphertext security. As case studies, we apply these two approaches to modify the Boneh-Franklin IBE scheme and the Boneh-Boyen IBE scheme, respectively. Both of the modifications improve the tightness of the security reductions, compared to the original schemes, with a reasonably low cost.
Last updated:  2011-03-04
Octal Bent Generalized Boolean Functions
Pantelimon Stanica, Thor Martinsen
In this paper we characterize (octal) bent generalized Boolean functions defined on $\mathbb{Z}_2^n$ with values in $\mathbb{Z}_8$. Moreover, we propose several constructions of such generalized bent functions for both $n$ even and $n$ odd.
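For readers unfamiliar with the generalized setting, the usual convention (stated here as background under the standard definitions, which the paper may refine) is that $f:\mathbb{Z}_2^n \to \mathbb{Z}_8$ has the generalized Walsh-Hadamard transform $\mathcal{H}_f(u) = \sum_{x \in \mathbb{Z}_2^n} \zeta_8^{f(x)} (-1)^{u \cdot x}$ with $\zeta_8 = e^{2\pi i/8}$, and $f$ is called generalized bent when $|\mathcal{H}_f(u)| = 2^{n/2}$ for every $u \in \mathbb{Z}_2^n$.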
Last updated:  2011-09-03
Leftover Hash Lemma, Revisited
Boaz Barak, Yevgeniy Dodis, Hugo Krawczyk, Olivier Pereira, Krzysztof Pietrzak, Francois-Xavier Standaert, Yu Yu
The famous Leftover Hash Lemma (LHL) states that (almost) universal hash functions are good randomness extractors. Despite its numerous applications, LHL-based extractors suffer from the following two drawbacks: (1) Large Entropy Loss: to extract v bits from distribution X of min-entropy m which are e-close to uniform, one must set v <= m - 2*log(1/e), meaning that the entropy loss L = m-v >= 2*log(1/e). (2) Large Seed Length: the seed length n of (almost) universal hash function required by the LHL must be at least n >= min(u-v, v + 2*log(1/e))-O(1), where u is the length of the source. Quite surprisingly, we show that both limitations of the LHL --- large entropy loss and large seed --- can often be overcome (or, at least, mitigated) in various quite general scenarios. First, we show that entropy loss could be reduced to L=log(1/e) for the setting of deriving secret keys for a wide range of cryptographic applications. Specifically, the security of these schemes gracefully degrades from e to at most e + sqrt(e * 2^{-L}). (Notice that, unlike standard LHL, this bound is meaningful even for negative entropy loss, when we extract more bits than the min-entropy we have!) Based on these results we build a general *computational extractor* that enjoys low entropy loss and can be used to instantiate a generic key derivation function for *any* cryptographic application. Second, we study the soundness of the natural *expand-then-extract* approach, where one uses a pseudorandom generator (PRG) to expand a short "input seed" S into a longer "output seed" S', and then uses the resulting S' as the seed required by the LHL (or, more generally, any randomness extractor). Unfortunately, we show that, in general, the expand-then-extract approach is not sound if the Decisional Diffie-Hellman assumption is true. Despite that, we show that it is sound either: (1) when extracting a "small" (logarithmic in the security of the PRG) number of bits; or (2) in *minicrypt*. Implication (2) suggests that the expand-then-extract approach is likely secure when used with "practical" PRGs, despite lacking a reductionist proof of security! Finally, we combine our main results to give a very *simple and efficient* AES-based extractor, which easily supports variable-length messages, and is likely to offer our *improved entropy loss bounds* for any computationally-secure application, despite having a *fixed-length* seed.
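A worked instance of the bounds quoted above (the numbers are illustrative): to derive $v = 128$ key bits at distance $e = 2^{-64}$ from uniform, the standard LHL requires min-entropy $m \ge v + 2\log(1/e) = 128 + 128 = 256$; with the improved loss $L = \log(1/e) = 64$, min-entropy $m = v + L = 192$ suffices, and the application's security degrades from $e$ to at most $e + \sqrt{e \cdot 2^{-L}} = 2^{-64} + 2^{-64} = 2^{-63}$.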
Last updated:  2011-12-29
On the Instantiability of Hash-and-Sign RSA Signatures
Yevgeniy Dodis, Iftach Haitner, Aris Tentes
The hash-and-sign RSA signature is one of the most elegant and well-known signature schemes, extensively used in a wide variety of cryptographic applications. Unfortunately, the only existing analysis of this popular signature scheme is in the random oracle model, where the resulting idealized signature is known as the RSA Full Domain Hash signature scheme (RSA-FDH). In fact, prior work has shown several “uninstantiability” results for various abstractions of RSA-FDH, where the RSA function was replaced by a family of trapdoor random permutations, or the hash function instantiating the random oracle could not be keyed. These abstractions, however, do not allow the reduction and the hash function instantiation to use the algebraic properties of the RSA function, such as the multiplicative group structure of $\mathbb{Z}_n^*$. In contrast, the multiplicative property of the RSA function is critically used in many standard model analyses of various RSA-based schemes. Motivated by closing this gap, we consider the setting where the RSA function representation is generic (i.e., black-box) but multiplicative, whereas the hash function itself is in the standard model, and can be keyed and exploit the multiplicative properties of the RSA function. This setting abstracts all known techniques for designing provably secure RSA-based signatures in the standard model, and aims to address the main limitations of prior uninstantiability results. Unfortunately, we show that it is still impossible to reduce the security of RSA-FDH to any natural assumption even in our model. Thus, our result suggests that in order to prove the security of a given instantiation of RSA-FDH, one should use a non-black box security proof, or use specific properties of the RSA group that are not captured by its multiplicative structure alone. We complement our negative result with a positive result, showing that the RSA-FDH signatures can be proven secure under the standard RSA assumption, provided that the number of signing queries is a priori bounded.
Last updated:  2011-05-10
Fault-propagation Pattern Based DFA on SPN Structure Block Ciphers using Bitwise Permutation, with Application to PRESENT and PRINTcipher
Xin-jie Zhao, Tao Wang, Shi-ze Guo
This paper proposes a novel fault-propagation pattern based differential fault analysis method, FPP-DFA, and proves its feasibility on SPN-structure block ciphers using bitwise permutation, such as PRESENT and PRINTcipher. Simulated experiments demonstrate that, with the fault model of injecting one nibble fault into the $(r-2)$-th round substitution layer, on average 8 and 16 faulty samples can reduce the master key search space of PRESENT-80/128 to $2^{14.7}$ and $2^{21.1}$ respectively, and 12 and 24 effective faulty samples can reduce the master key search space of PRINTcipher-48/96 to $2^{13.7}$ and $2^{22.8}$ respectively; with the fault model of injecting one nibble fault into the $(r-3)$-th round substitution layer, 8 samples can reduce the master key search space of PRINTcipher-96 to $2^{18.7}$.
Last updated:  2011-05-21
Co-induction and Computational Semantics for Public-key Encryption with Key Cycles
Mohammad Hajiabadi, Bruce M. Kapron
We consider the computational soundness and completeness of formal indistinguishability in public-key cryptography, in the presence of key-cycles. This problem in the absence of key-cycles is addressed in [1] by Herzog. It is known from [2] that the standard notions of computational security cannot give a computationally-sound interpretation to key-cyclic expressions. However, the work of [2] takes into account formal adversaries whose knowledge set is formulated using an inductive definition. In recent work by Micciancio [3], an alternate way of specifying the knowledge set of formal adversaries is proposed, giving a co-inductive definition of security. Using this new specification, a soundness result in the presence of key-cycles is proved in the private key setting. In this paper, we explore the co-inductive definition of security in public-key cryptography, and extend the soundness result of Herzog by proving it for key-cyclic expressions. We also show that the completeness result is achievable in the absence of key-cycles with respect to any length-revealing computational encryption system. In the presence of key-cycles, we show that the completeness result does not hold even with respect to CCA-2 secrecy.
Last updated:  2011-03-04
Traitor Tracing against Public Collaboration (Full Version)
Xingwen Zhao, Fangguo Zhang
Broadcast encryption provides a convenient method to distribute digital content to subscribers over an insecure broadcast channel. Traitor tracing is needed because some users may give out their decryption keys to construct pirate decoders. There are many traitor tracing schemes based on collusion secure codes and identifiable parent property codes. However, these schemes are subject to public collaboration of traitors, which was presented by Billet and Phan at EUROCRYPT 2009 as an attack against code-based traitor tracing schemes. In this paper, we describe a generic scheme based on collusion secure codes that is secure against such collaboration. Our scheme is motivated by the idea of identity-based encryption with wildcards (WIBE). We regard the collusion secure codeword for each user as his/her identity, and issue private keys accordingly. When broadcasting, we use a special pattern of WIBE, namely all bit positions in the codewords of intended receivers are set as wildcards. When tracing, we use another special pattern of WIBE, namely all positions are set as wildcards except the tracing position. By using WIBE, each user is issued one decryption key which must be used as a whole and any incomplete part of the key is useless, while in previous code-based schemes each user holds a number of keys that can be used separately for different bit positions in the codeword. Thus our scheme is resistant to public collaboration, since if the decryption key is disclosed as a whole, it immediately leads to the accusation of that traitor. Our idea fits code-based traitor tracing schemes well, whether they use collusion secure codes or identifiable parent property codes. We provide an instance based on the Boneh-Boyen-Goh WIBE scheme, achieving constant private key storage cost for each user. We also present another instance achieving shorter ciphertexts, at the expense of larger public and private keys. Our scheme presents an answer to the problem left open by Billet and Phan.
Last updated:  2012-02-16
On the number of bent functions from iterative constructions: lower bounds and hypotheses
Natalia Tokareva
In this paper we study lower bounds on the number of bent functions that can be obtained by iterative constructions, namely by the construction proposed by A. Canteaut and P. Charpin in 2003. The number of bent iterative functions is expressed in terms of the sizes of finite sets, and it is shown that evaluating this number is closely connected to the problem of decomposing a Boolean function into the sum of two bent functions. A new lower bound for the number of bent iterative functions that is supposed to be asymptotically tight is given. Applying Monte-Carlo methods, the number of bent iterative functions in 8 variables is counted. Based on the performed calculations, several hypotheses on the asymptotic value of the number of all bent functions are formulated.
Last updated:  2011-04-12
Does Pseudo-basis Extend to General Adversary?
Ashish Choudhury, Kaoru Kurosawa, Arpita Patra
Pseudo-basis is a powerful and fundamental concept in coding theory, introduced by Kurosawa and Suzuki (EUROCRYPT '08), which allows one to efficiently localize t errors in a codeword in an interactive fashion, even by using a linear error correcting code with distance only t+1. It is used to construct the first efficient, communication and round optimal, 2-round perfectly secure message transmission (PSMT) scheme for n=2t+1, where an infinitely powerful adversary can actively corrupt t out of n (bidirectional) channels between a sender S and a receiver R. Recently, Yang and Desmedt (ASIACRYPT '10) claimed that by using pseudo-basis, they constructed the first ever efficient, constant round PSMTs against general (non-threshold) adversaries in undirected as well as directed networks. In this paper, our main contribution is to show that pseudo-basis cannot be extended to general adversaries. This means that the PSMTs of Yang and Desmedt (based on pseudo-basis) are flawed. We next show a truly efficient, three round PSMT over directed networks against general adversaries without using pseudo-basis. While the previous PSMT schemes in directed networks designed without using pseudo-basis are either inefficient or prone to a guessing attack (shown by Yang and Desmedt in ICITS '09), our efficient scheme can resist the guessing attack. Instead of using pseudo-basis, we generalize the union technique to our setting, which was previously used to construct PSMTs against threshold proactive adversaries over undirected networks. We also present a simple and efficient 3-round scheme in undirected networks, without basing it on pseudo-basis, which can send multiple messages concurrently.
Last updated:  2011-02-20
Secure Datastructures based on Multiparty Computation
Tomas Toft
The problem of secure multiparty computation -- performing some computation based on distributed, private inputs -- has been studied intensively for more than twenty years. This work includes both ``one shot'' applications as well as reactive tasks, where the exact computation is not known in advance. We extend this line of work by asking whether it is possible to \emph{efficiently} both update and query secret data. A clearer formulation is, perhaps, to ask whether it is possible to construct efficient datastructures based on secure multiparty computation primitives. It is possible to construct arbitrary secure datastructures based on an oblivious RAM (ORAM). However, current state-of-the-art information-theoretically secure solutions incur a poly-logarithmic overhead on both secure computation and memory. The overhead is much smaller when considering computationally secure solutions; however, this requires secure evaluation of a one-way function as a primitive, which may reintroduce a considerable overhead. By constructing a secure priority queue we show that practical datastructures are possible. The ideas are radically different from those used in any ORAM implementation: The present solution accesses data in a \emph{deterministic} manner, whereas all ORAMs \emph{randomize} the access pattern in order to hide it. The priority queue operations -- insertion into the structure and deletion of the minimal element contained therein -- both require $O(\log^2 n)$ invocations of the cryptographic primitives (secure arithmetic and comparison) and $O(1)$ rounds, both amortized, where $n$ is the overall number of operations performed.
Last updated:  2011-02-20
Turbo Codes Can Be Asymptotically Information-Theoretically Secure
Xiao Ma
This paper shows that a turbo-coded communication system can be made secure at a small cost in complexity. The classical permutation ciphers are revisited and analyzed. Firstly, the ideal stream permutation ciphers are shown to be asymptotically information-theoretically secure in the sense that the channel from plaintext to ciphertext has vanishing capacity, while the practical stream permutation ciphers are shown to be more secure than the classical stream ciphers in terms of protecting keys. Secondly, a necessary condition for breaking a block permutation cipher is derived, which is then utilized to guarantee the computational security of a modified block permutation cipher. Thirdly, turbo ciphers (turbo-like codes with private interleavers) are proposed and analyzed.
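For context on the classical objects being revisited, the sketch below is a toy block permutation (transposition) cipher in which the secret key is the permutation itself; it is meant only to fix terminology (all names and parameters are illustrative) and is unrelated to the turbo ciphers with private interleavers proposed in the paper.

import random

def keygen(block_len: int, seed: int):
    # The secret key is a random permutation of the positions within a block.
    perm = list(range(block_len))
    random.Random(seed).shuffle(perm)
    return perm

def encrypt_block(block, perm):
    return [block[perm[i]] for i in range(len(perm))]

def decrypt_block(block, perm):
    out = [None] * len(perm)
    for i, p in enumerate(perm):
        out[p] = block[i]
    return out

perm = keygen(8, seed=42)
msg = list(b"PLAINTXT")
assert decrypt_block(encrypt_block(msg, perm), perm) == msg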
Last updated:  2011-02-20
Identity-based Digital Signature Scheme Without Bilinear Pairings
He Debiao, Chen Jianhua, Hu Jin
Many identity-based digital signature schemes using bilinear pairings have been proposed. However, the computation cost of a pairing is approximately twenty times higher than that of a scalar multiplication over an elliptic curve group. In order to reduce the running time and the size of the signature, we propose an identity-based signature scheme without bilinear pairings. Since both the running time and the signature size are greatly reduced, our scheme is more practical than the previous related schemes for practical applications.
Last updated:  2012-09-15
A Low-Area Unified Hardware Architecture for the AES and the Cryptographic Hash Function ECHO
Jean-Luc Beuchat, Eiji Okamoto, Teppei Yamazaki
We propose a compact coprocessor for the AES (encryption, decryption, and key expansion) and the cryptographic hash function ECHO on Virtex-$5$ and Virtex-$6$ FPGAs. Our architecture is built around an $8$-bit datapath. The Arithmetic and Logic Unit performs a single instruction that allows for implementing AES encryption, AES decryption, AES key expansion, and ECHO at all levels of security. Thanks to a careful organization of AES and ECHO internal states in the register file, we manage to generate all read and write addresses by means of a modulo-$16$ counter and a modulo-$256$ counter. A fully autonomous implementation of ECHO and AES on a Virtex-$5$ FPGA requires $193$ slices and a single $36$k memory block, and achieves competitive throughputs. Assuming that the security guarantees of ECHO are at least as good as the ones of the SHA-$3$ finalists BLAKE and Keccak, our results show that ECHO is a better candidate for low-area cryptographic coprocessors. Furthermore, the design strategy described in this work can be applied to combine the AES and the SHA-$3$ finalist Grøstl.
Last updated:  2011-02-20
DPA Leakage Evaluation and Countermeasure Plug-in
Tang Ming, Wang Xiaorong, Qiu Zhenlong, Gao Si, Zhang Huanguo, Wu Qianhong
There are three different types of research on SCAs: SCA analysis, SCA evaluation, and SCA countermeasures. All of these studies try to establish more security in cryptographic software, hardware, and systems. SCA evaluation tries to identify the factors behind different SCAs; moreover, SCA evaluation can be regarded as the first step toward building countermeasures against SCAs. We choose DPA, one of the most popular and realistic SCAs at present, as our research target, and build a practical evaluation scheme and countermeasure that can serve as a plug-in for EDA toolkits and help circuit designers judge power leakage and improve resistance against DPA automatically. Our contributions include: a more accurate evaluation scheme; a more efficient balancing scheme; and a portable way to build countermeasures based on the evaluation scheme. Furthermore, our countermeasures can be plugged into EDA toolkits, making them automatic and transparent to circuit designers.
Last updated:  2012-01-09
A Unified Approach to Combinatorial Key Predistribution Schemes for Sensor Networks
Maura B. Paterson, Douglas R. Stinson
There have been numerous recent proposals for key predistribution schemes for wireless sensor networks based on various types of combinatorial structures such as designs and codes. Many of these schemes have very similar properties and are analysed in a similar manner. We seek to provide a unified framework to study these kinds of schemes. We derive general formulas for the metrics of the resulting key predistribution schemes that can be evaluated for a particular scheme simply by substituting appropriate parameters of the underlying combinatorial structure. We also compare various classes of schemes based on different designs, and point out that some existing proposed schemes are in fact identical, even though their descriptions may seem different.
Last updated:  2012-01-18
A Novel RFID Distance Bounding Protocol Based on Physically Unclonable Functions
Suleyman Kardas, Mehmet Sabir Kiraz, Muhammed Ali Bingol, Huseyin Demirci
Radio Frequency Identification (RFID) systems are vulnerable to relay attacks (i.e., mafia, terrorist and distance frauds) when they are used for authentication purposes. Distance bounding protocols are particularly designed as a countermeasure against these attacks. These protocols aim to ensure that the tags are within a bounded distance by measuring the round-trip delays during a rapid challenge-response exchange of short authenticated messages. Terrorist fraud is the most challenging attack to avoid, because a legitimate user (a tag owner) collaborates with an attacker to defeat the authentication system. Many RFID distance bounding protocols have been proposed recently, with encouraging results. However, none of them provides the ideal security against the terrorist fraud. Motivated by this need, we first introduce a strong adversary model for Physically Unclonable Function (PUF) based authentication protocols in which the adversary has access to the volatile memory of the tag. We show that Sadeghi \textit{et al.}'s PUF-based authentication protocol is not secure in this model. We provide a new technique to improve the security of their protocol. Namely, in our scheme, even if an adversary has access to the volatile memory, she cannot obtain all long-term keys to clone the tag. Next, we propose a novel RFID distance bounding protocol based on PUFs which satisfies the expected security requirements. Compared to the previous protocols, the use of PUFs in our protocol enhances the system in terms of security and privacy. We also prove that our extended protocol with a final signature provides the ideal security against all those frauds, notably the terrorist fraud. Besides that, our protocols enjoy the attractive properties of PUFs, which provide a cost-efficient and reliable method to fingerprint chips based on their physical properties.
Last updated:  2011-05-14
Really fast syndrome-based hashing
Daniel J. Bernstein, Tanja Lange, Christiane Peters, Peter Schwabe
The FSB (fast syndrome-based) hash function was submitted to the SHA-3 competition by Augot, Finiasz, Gaborit, Manuel, and Sendrier in 2008, after preliminary designs proposed in 2003, 2005, and 2007. Many FSB parameter choices were broken by Coron and Joux in 2004, Saarinen in 2007, and Fouque and Leurent in 2008, but the basic FSB idea appears to be secure, and the FSB submission remains unbroken. On the other hand, the FSB submission is also quite slow, and was not selected for the second round of the competition. This paper introduces RFSB, an enhancement to FSB. In particular, this paper introduces the RFSB-509 compression function, RFSB with a particular set of parameters. RFSB-509, like the FSB-256 compression function, is designed to be used inside a 256-bit collision-resistant hash function: all known attack strategies cost more than 2^128 to find collisions in RFSB-509. However, RFSB-509 is an order of magnitude faster than FSB-256. On a single core of a Core 2 Quad CPU, RFSB-509 runs at 13.62 cycles/byte: faster than SHA-256, faster than 6 of the 14 second-round SHA-3 candidates, and faster than 2 of the 5 SHA-3 finalists.
Last updated:  2011-02-14
Cryptanalysis of three matrix-based key establishment protocols
Simon R. Blackburn, Carlos Cid, Ciaran Mullan
We cryptanalyse a matrix-based key transport protocol due to Baumslag, Camps, Fine, Rosenberger and Xu from 2006. We also cryptanalyse two recently proposed matrix-based key agreement protocols, due to Habeeb, Kahrobaei and Shpilrain, and due to Romanczuk and Ustimenko.
Last updated:  2011-02-14
AES Variants Secure Against Related-Key Differential and Boomerang Attacks
Jiali Choy, Aileen Zhang, Khoongming Khoo, Matt Henricksen, Axel Poschmann
In this paper, we summarize the recent related-key differential and boomerang attacks on AES by Biryukov et al. and present a framework for protection against these attacks. Then we study an alternative AES key schedule proposed by May et al. at ACISP 2002 as a possible candidate to protect against these related key attacks. We find that there exist equivalent keys for this key schedule and in response, we propose an improvement to overcome this weakness. We proceed to prove, using our framework, that our improved May et al.'s key schedule is secure against related-key differential and boomerang attacks. Since May et al.'s key schedule is not on-the-fly (which is a requirement for some hardware implementations), we propose an on-the-fly AES key schedule that is resistant against related-key differential and boomerang attacks.
Last updated:  2011-04-25
Information-theoretic Bounds for Differentially Private Mechanisms
Gilles Barthe, Boris Köpf
There are two active and independent lines of research that aim at quantifying the amount of information that is disclosed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensus guarantee used for privacy-preserving data analysis. On the other hand, information-theoretic notions of leakage are used for characterizing the confidentiality properties of programs in language-based settings. The purpose of this article is to establish formal connections between both notions of confidentiality, and to compare them in terms of the security guarantees they deliver. We obtain the following results. First, we establish upper bounds for the leakage of every $\varepsilon$-differentially private mechanism in terms of $\varepsilon$ and the size of the mechanism's input domain. We achieve this by identifying and leveraging a connection to coding theory. Second, we construct a class of $\varepsilon$-differentially private channels whose leakage grows with the size of their input domains. Using these channels, we show that there cannot be domain-size-independent bounds for the leakage of all $\varepsilon$-differentially private mechanisms. Moreover, we perform an empirical evaluation that shows that the leakage of these channels almost matches our theoretical upper bounds, demonstrating the accuracy of these bounds. Finally, we show that the question of providing optimal upper bounds for the leakage of $\varepsilon$-differentially private mechanisms in terms of rational functions of $\varepsilon$ is in fact decidable.
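For readers coming from the language-based side, a minimal sketch of a standard $\varepsilon$-differentially private mechanism (the Laplace mechanism for a sensitivity-1 counting query) is given below; it is background only, with illustrative names, and is not one of the channels constructed in the article.

import numpy as np

def private_count(true_count: int, eps: float, rng=None) -> float:
    # Laplace mechanism: add noise of scale sensitivity/eps (sensitivity 1 for
    # a counting query) to obtain an eps-differentially private answer.
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / eps)

noisy_answer = private_count(1234, eps=0.1)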
Last updated:  2011-02-14
Rational authentication protocols
Long H. Nguyen
We use ideas from game theory to transform two families of authentication protocols so that even if an intruder attacks a protocol, its payoff will still be lower than if it does not. This is particularly useful in resisting or discouraging a powerful and rational intruder (as present in military applications) who makes many attempts to break a protocol, because (1) even if the intruder fails, a denial-of-service attack is still mounted successfully, and (2) in a password-based protocol, the chance of a successful attack increases quite significantly as more and more attempts are launched to guess the password.
Last updated:  2011-08-23
Constant-Rounds, Linear Multi-party Computation for Exponentiation and Modulo Reduction with Perfect Security
Chao Ning, Qiuliang Xu
Bit-decomposition is an important primitive in multi-party computation (MPC). Given a sharing of secret $x$, it allows the parties to compute the sharings of the bits of $x$ in constant rounds. With the help of bit-decomposition, we will be able to construct constant-rounds protocols for various MPC problems, such as \emph{equality test}, \emph{comparison}, \emph{public modulo reduction} and \emph{private exponentiation}, which are four main applications of bit-decomposition. However, when considering perfect security, bit-decomposition does \emph{not} have a linear communication complexity. Thus any protocols involving bit-decomposition inherit this inefficiency, i.e. the communication complexity is non-linear. Constructing protocols for MPC problems without relying on bit-decomposition is a meaningful work because this may provide us with perfect secure protocols with linear communication complexity. It is already proved that \emph{equality test}, \emph{comparison} and \emph{public modulo reduction} can be solved without involving bit-decomposition and the communication complexity can be reduced to linear. However, it remains an open problem whether \emph{private exponentiation} could be done without relying on bit-decomposition. In this paper, maybe somewhat surprisingly, we show that it can. That is to say, we construct a \emph{constant-rounds, linear, perfect secure} protocol for private exponentiation \emph{without} relying on bit-decomposition though it seems essential to this problem. Compared with the previous solution for this problem (with perfect security), our protocol has a lower round complexity and a much lower communication complexity. In a recent work, Ning and Xu proposed a generalization of bit-decomposition which can, given a sharing of secret $x$ and an integer $m \ge 2$, compute the sharings (or bitwise sharings) of the base-$m$ digits of $x$. They also proposed a linear protocol for public modulo reduction as a simplification of their generalization. In this paper, we show that their generalization can be further generalized. More importantly, as a simplification of our further generalization, we propose a public modulo reduction protocol which is more efficient than theirs. Specifically, the round complexity of our (modulo reduction) protocol is the same with theirs, but the communication complexity can be considerably lower.
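To fix the functionality being discussed, in the clear and with no sharing or MPC involved, bit-decomposition and its base-$m$ generalization simply compute the digits of $x$; a plaintext sketch follows, with the helper name chosen purely for illustration.

def digits_base_m(x: int, m: int, length: int):
    # Return the 'length' least-significant base-m digits of x, least significant
    # first; m = 2 gives ordinary bit-decomposition.
    ds = []
    for _ in range(length):
        ds.append(x % m)
        x //= m
    return ds

assert digits_base_m(200, 2, 8) == [0, 0, 0, 1, 0, 0, 1, 1]   # 200 = 0b11001000
assert digits_base_m(200, 10, 3) == [0, 0, 2]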
Last updated:  2011-03-02
Rational Secret Sharing with Honest Players over an Asynchronous Channel
William K. Moses Jr., C. Pandu Rangan
We consider the problem of rational secret sharing introduced by Halpern and Teague \cite{HT04}, where the players involved in secret sharing play only if it is to their advantage. This can be characterized in the form of preferences. Players would prefer to get the secret rather than not get it, and secondly, with lesser preference, they would like as few other players as possible to get the secret. Several positive results have already been published to efficiently solve the problem of rational secret sharing. However, only a handful of papers have touched upon the use of an asynchronous broadcast channel, and in those papers, either the protocol involved cryptographic primitives \cite{FKN10} or else the protocol required the dealer to be interactively involved \cite{MSR08a}. The work of \cite{OPRV09} did handle such a case through the use of an honest minority of players, but their paper placed a restriction on the number of honest players that could take part in relation to the total number of players active in the protocol. In our paper, we propose an $m$-out-of-$n$ rational secret sharing scheme which can function over an asynchronous broadcast channel without the use of cryptographic primitives and with a non-interactive dealer. This is possible because our scheme uses a small number, $k+1$, of honest players. The protocol is resilient to coalitions of size up to $k$ and furthermore it is $\varepsilon$-resilient to coalitions of size up to $m-1$. The protocol has a strict Nash equilibrium with probability $Pr(\frac{k+1}{n})$ and an $\varepsilon$-Nash equilibrium with probability $Pr(\frac{n-k-1}{n})$. Furthermore, our protocol is immune to backward induction. Later on in the paper, we extend our results to include malicious players as well. We also show that our protocol handles the possibility of a player deviating in order to force another player to get a wrong value. This type of deviation was discussed and handled by Asharov and Lindell \cite{AL09} by increasing the number of rounds. However, our protocol handles this in what we believe to be a more time efficient manner.
Last updated:  2011-02-08
On the Distribution of the Subset Sum Pseudorandom Number Generator on Elliptic Curves
Uncategorized
Simon R. Blackburn, Alina Ostafe, Igor E. Shparlinski
Show abstract
Uncategorized
Given a prime $p$, an elliptic curve $\mathcal{E}/\mathbb{F}_p$ over the finite field $\mathbb{F}_p$ of $p$ elements and a binary linear recurrence sequence $(u(n))_{n=1}^\infty$ of order~$r$, we study the distribution of the sequence of points $$ \sum_{j=0}^{r-1} u(n+j)P_j, \qquad n =1,\ldots, N, $$ on average over all possible choices of $\mathbb{F}_p$-rational points $P_0,\ldots, P_{r-1}$ on $\mathcal{E}$. For a sufficiently large $N$ we improve and generalise a previous result in this direction due to E.~El~Mahassni.
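To make the object of study concrete, here is a small Python sketch that generates such a point sequence. The prime, curve, recurrence and base points below are toy values chosen for this note, not parameters from the paper (which analyzes the distribution of these sequences on average, rather than generating them).

```python
# Toy illustration of the point sequence W_n = sum_{j=0}^{r-1} u(n+j) * P_j
# for a binary linear recurrence u. All parameters are small illustrative choices.

p = 97                      # small prime field F_p (illustrative only)
a, b = 2, 3                 # curve y^2 = x^3 + a*x + b over F_p

def add(P, Q):
    """Affine point addition on the toy Weierstrass curve (None = infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def u_seq(N, init=(1, 0)):
    """Binary linear recurrence of order r = 2: u(n+2) = u(n+1) XOR u(n)."""
    u = list(init)
    while len(u) < N + 2:
        u.append(u[-1] ^ u[-2])
    return u

# Points P_0, P_1 on the curve (hand-picked: both satisfy y^2 = x^3 + 2x + 3 mod 97).
points = [(3, 6), (80, 10)]

u = u_seq(20)
sequence = []
for n in range(1, 21):
    W = None
    for j, Pj in enumerate(points):
        if u[n + j]:
            W = add(W, Pj)
    sequence.append(W)
print(sequence[:5])
```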
Last updated:  2011-05-19
Deniable Encryption with Negligible Detection Probability: An Interactive Construction
Markus Duermuth, David Mandell Freeman
Deniable encryption, introduced in 1997 by Canetti, Dwork, Naor, and Ostrovsky, guarantees that the sender or the receiver of a secret message is able to "fake" the message encrypted in a specific ciphertext in the presence of a coercing adversary, without the adversary detecting that he was not given the real message. To date, constructions are only known either for weakened variants with separate "honest" and "dishonest" encryption algorithms, or for single-algorithm schemes with non-negligible detection probability. We propose a sender-deniable public key encryption system with a single encryption algorithm. We describe a generic interactive construction based on a public key bit encryption scheme that has certain properties, and we give two examples of encryption schemes with these properties, one based on the quadratic residuosity assumption and the other on trapdoor permutations. Subsequent to the publication of this work in EUROCRYPT 2011, it was shown by Peikert and Waters that, contrary to our initial claim, the proposed scheme does not have negligible detection probability. In this updated version we describe the attack of Peikert and Waters and the error in our original proof. We include the previous version of the paper with the construction and the (incorrect) security theorem.
Last updated:  2011-06-23
Fully Simulatable Quantum-Secure Coin-Flipping and Applications
Carolin Lunemann, Jesper Buus Nielsen
We propose a coin-flip protocol which yields a string of strong, random coins and is fully simulatable against poly-sized quantum adversaries on both sides. It can be implemented with quantum-computational security without any set-up assumptions, since our construction only assumes mixed commitment schemes, which we show how to construct in the given setting. We then show that the interactive generation of random coins at the beginning of or during outer protocols allows for quantum-secure realizations of classical schemes, again without any set-up assumptions. As example applications we discuss quantum zero-knowledge proofs of knowledge and quantum-secure two-party function evaluation. Both applications assume only fully simulatable coin-flipping and mixed commitments. Since our framework allows us to construct fully simulatable coin-flipping from mixed commitments, this in particular shows that mixed commitments are complete for quantum-secure two-party function evaluation. This seems to be the first completeness result for quantum-secure two-party function evaluation from a generic assumption.
Last updated:  2011-02-08
Cryptographic Treatment of Private User Profiles
Felix Günther, Mark Manulis, Thorsten Strufe
Publishing private data in user profiles in a way that is both secure and private is a growing problem, and of special interest in, e.g., online social networks that are becoming more and more popular. Current approaches, especially for decentralized networks, often do not address this issue or impose a large storage overhead. In this paper, we present a cryptographic approach to \emph{private profile management} that is seen as a building block for applications in which users maintain their own profiles, publish and retrieve data, and authorize other users to access different portions of data in their profiles. In this context, we provide: (i) formalization of \emph{confidentiality} and \emph{unlinkability} as the two main security and privacy goals for the data which is kept in profiles and the users who are authorized to retrieve this data, and (ii) specification, analysis, and comparison of two private profile management schemes based on different encryption techniques.
Last updated:  2011-03-08
Secret Keys from Channel Noise
Hadi Ahmadi, Reihaneh Safavi-Naini
We study the problem of unconditionally secure Secret Key Establishment (SKE) when Alice and Bob are connected by two noisy channels in opposite directions, and the channels are eavesdropped by Eve. We consider the case that Alice and Bob do not have any sources of initial randomness at their disposal. We start by discussing special cases of interest where SKE is impossible, and then provide a simple SKE construction over a binary symmetric channel that achieves some rates of secret key. We next focus on the Secret Key (SK) capacity, i.e., the highest rate of secure and reliable key establishment (in bits per channel use) that the parties can achieve. Relying on the existence of capacity-achieving coding schemes, we propose a multi-round SKE protocol, called the \emph{main protocol}, that proves a lower bound on the SK capacity. The main protocol consists of an initialization round, followed by repeated use of a two-round SKE protocol, called the \emph{basic protocol}. We also provide an upper bound on the SK capacity and show that the two bounds coincide when channels do not leak information to the adversary. We apply the results to the case that communicants are connected by binary symmetric channels.
Last updated:  2011-12-02
Cryptanalysis and Security Enhancement of an Advanced Authentication Scheme using Smart Cards, and a Key Agreement Scheme for Two-Party Communication
Swapnoneel Roy, Amlan K Das, Yu Li
In this work we consider two protocols for cryptanalysis and security enhancement. The first one, by Song, is a password authentication scheme based on smart cards. We note that this scheme has already been shown vulnerable to the off-line password guessing attack by Tapiador et al. We perform a further cryptanalysis on this protocol and observe that it is prone to the clogging attack, a kind of denial of service (DoS) attack. We observe that all smart card based authentication protocols which precede the one by Song, and require the server to compute the computationally intensive modular exponentiation, like the ones by Xu et al. or Lee et al., are prone to the clogging attack. We then suggest an improvement to the protocol to prevent the clogging attack. The other protocol we consider is a two-party identity-based authenticated key agreement protocol by Hölbl et al. They devised two such protocols in their work, which they call Protocol 1 and Protocol 2. Both protocols have already been shown vulnerable to the insider attack in a recent work by Chen et al. Here we consider Protocol 2 and show its vulnerability to a simple man-in-the-middle attack where the adversary does not know or calculate either party's private key, or the session key. Protocol 2 by Hölbl et al. is an improvement over a previous work by Tseng, so Tseng's protocol is also vulnerable to the attack we illustrate. We further suggest an additional step for these protocols to make them immune against the man-in-the-middle attack.
Last updated:  2011-02-09
Cryptanalysis of Some Protocols for RFID Systems
Masoumeh Safkhani, Majid Naderi, Nasour Bagheri, Somitra Kumar Sanadhya
In this paper we analyze the security of the mutual authentication and ownership transfer protocols which have been recently proposed by Kulseng \textit{et al.} Our analysis demonstrates a variety of attacks against these protocols. We present a secret parameters disclosure attack which discloses any secret parameter between the tag and the reader. Our disclosure attack can be easily used as an impersonation attack against the mutual authentication protocol. In addition, we present an attack that retrieves the $PIN$-value in the ownership transfer protocol, where the $PIN$-value is a parameter that must be kept secret from any party including the owner of the tag. All the attacks presented in this work are passive, have low complexity and have the success probability of 1.
Last updated:  2011-02-08
A Group Signature Scheme from Lattice Assumptions
S. Dov Gordon, Jonathan Katz, Vinod Vaikuntanathan
Group signature schemes allow users to sign messages on behalf of a group while (1) maintaining anonymity (within that group) with respect to an observer, yet (2) ensuring traceability of a signer (by the group manager) when needed. In this work we give the first construction of a group signature scheme based on lattices (more precisely, the learning with errors assumption), in the random oracle model. Toward our goal, we construct a new algorithm for sampling a random superlattice of a given modular lattice together with a short basis, that may be of independent interest.
Last updated:  2011-02-01
Extending Baby-step Giant-step algorithm for FACTOR problem
Uncategorized
Martin Stanek
Show abstract
Uncategorized
Recently, a non-abelian factorization problem together with an associated asymmetric encryption scheme were introduced in [1]. We show how a classical baby-step giant-step algorithm for discrete logarithm can be extended to this problem. This contradicts the claims regarding the complexity of the proposed problem.
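For reference, below is a minimal Python version of the classical baby-step giant-step routine for ordinary discrete logarithms that this note extends; the extension to the FACTOR problem itself is not reproduced here, and the group, generator and target are toy values.

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p).
    Classical O(sqrt(group order)) time/memory algorithm (g assumed to generate)."""
    n = p - 1                                  # group order of Z_p^* for prime p
    m = isqrt(n) + 1
    # Baby steps: table of g^j for j = 0..m-1.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: h * (g^{-m})^i for i = 0..m-1.
    step = pow(g, -m, p)                       # modular inverse power (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % p
    return None

# Tiny example: recover x from h = 3^x mod 101.
p, g, x = 101, 3, 57
h = pow(g, x, p)
assert bsgs(g, h, p) == x
print(bsgs(g, h, p))
```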
Last updated:  2011-06-07
Supplemental Access Control (PACE v2): Security Analysis of PACE Integrated Mapping
Jean-Sébastien Coron, Aline Gouget, Thomas Icart, Pascal Paillier
We describe and analyze the password-based key establishment protocol PACE v2 Integrated Mapping (IM), an evolution of PACE v1 jointly proposed by Gemalto and Sagem Sécurité. PACE v2 IM enjoys the following properties: patent-freeness (to the best of current knowledge in the field); full resistance to dictionary attacks, secrecy and forward secrecy in the security model agreed upon by the CEN TC224 WG16 group; optimal performance. The PACE v2 IM protocol is intended to provide an alternative to the German PACE v1 protocol, which is also the German PACE v2 Generic Mapping (GM) protocol, proposed by the German Federal Office for Information Security (BSI). In this document, we provide a description of PACE v2 IM, a description of the security requirements one expects from a password-based key establishment protocol in order to support secure applications, and a security proof of PACE v2 IM in the so-called Bellare-Pointcheval-Rogaway (BPR) security model.
Last updated:  2016-04-25
Another Look at RSA Signatures With Affine Padding
Uncategorized
Jean-Sébastien Coron, David Naccache, Mehdi Tibouchi
Show abstract
Uncategorized
Affine-padding {\sc rsa} signatures consist in signing $\omega\cdot m+\alpha$ instead of the message $m$ for some fixed constants $\omega,\alpha$. A thread of publications progressively reduced the size of $m$ for which affine signatures can be forged in polynomial time. The current bound is $\log m \sim \frac{N}{3}$ where $N$ is the {\sc rsa} modulus' bit-size. Improving this bound to $\frac{N}{4}$ has been an elusive open problem for the past decade. In this invited talk we consider a slightly different problem: instead of minimizing $m$'s size we try to minimize its {\sl entropy}. We show that affine-padding signatures on $\frac{N}{4}$ entropy-bit messages can be forged in polynomial time. This problem has no direct cryptographic impact but allows one to better understand how malleable the {\sc rsa} function is. In addition, the techniques presented in this talk might constitute some progress towards a solution to the longstanding $\frac{N}{4}$ forgery open problem. We also exhibit a sub-exponential time technique (faster than factoring) for creating affine modular relations between strings containing three messages of size $\frac{N}{4}$ and a fourth message of size $\frac{3N}{8}$. Finally, we show that $\frac{N}{4}$-relations can be obtained in specific scenarios, {\sl e.g.} when one can pad messages with two independent patterns or when the modulus' most significant bits can be chosen by the opponent.
Last updated:  2011-01-31
Spectral Coherence Analysis - First Experimental Results -
Uncategorized
Amine Dehbaoui, Sébastien Tiran, Philippe Maurine, François-Xavier Standaert, Nicolas Veyrat-Charvillon
Show abstract
Uncategorized
This paper introduces a new family of distinguishers for side channel analysis, based on the spectral coherence between leakage traces. Its main goal is to allow adversaries and evaluators of cryptographic devices to take advantage of both time domain and frequency domain intuitions, while still allowing a generic attack in case such intuitions are not available. Compared to previous side channel analysis tools working in the frequency domain, Spectral Coherence Analysis has the significant advantage of directly capturing the degree of similarity between different time domain traces, rather than comparing them with a hypothetical (e.g. Hamming distance) leakage model. In other words, we exploit leakage models to build partitions of the leakage, but not to correlate with an estimated spectrum. As a result, we obtain a more generic and remarkably robust distinguisher. First experiments performed against an unprotected DES implementation suggest that we also gain improved efficiency in certain meaningful application contexts.
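As a rough illustration of the underlying quantity, the sketch below computes magnitude-squared coherence between two synthetic "traces" with scipy. The traces, sampling rate and segment length are made up for this note; the paper's actual distinguisher additionally partitions real measurements according to a leakage model before comparing coherence across partitions.

```python
# Toy sketch: spectral coherence between two traces sharing a common component.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1e6                                    # sampling rate (arbitrary toy value)
t = np.arange(10_000) / fs

# Two traces sharing a data-dependent 50 kHz component plus independent noise.
shared = np.sin(2 * np.pi * 50e3 * t)
trace_a = shared + 0.5 * rng.standard_normal(t.size)
trace_b = shared + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(trace_a, trace_b, fs=fs, nperseg=1024)
print("coherence near 50 kHz:", Cxy[np.argmin(abs(f - 50e3))])
print("mean coherence elsewhere:", Cxy.mean())
```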
Last updated:  2011-01-30
On Enumeration of Polynomial Equivalence Classes and Their Application to MPKC
Dongdai Lin, Jean-Charles Faugere, Ludovic Perret, Tianze Wang
The Isomorphism of Polynomials (IP) is one of the most fundamental problems in multivariate public key cryptography (MPKC). In this paper, we introduce a new framework to study the counting problem associated to IP. Namely, we present tools from finite geometry allowing us to investigate this counting problem. Precisely, we focus on enumerating or estimating the number of isomorphism equivalence classes of homogeneous quadratic polynomial systems. These problems are equivalent to finding the scale of the key space of a multivariate cryptosystem and the total number of different multivariate cryptographic schemes respectively, which might impact the security and the potential capability of MPKC. We also consider their applications in the analysis of a specific multivariate public key cryptosystem. Our results not only answer how many cryptographic schemes can be derived from monomials and how big the key space is for a fixed scheme, but also show that quite many HFE cryptosystems are equivalent to a Matsumoto-Imai scheme.
Last updated:  2011-02-17
Non-Applicability of Pseudobasis for Designing Perfectly Secure Message Transmission Protocols Against Non-Threshold Adversary
Ashish Choudhury, Kaoru Kurosawa, Arpita Patra
In EUROCRYPT 2008, Kurosawa and Suzuki introduced a very fundamental and interesting concept in coding theory called the pseudobasis, which was used to design the first efficient and communication-optimal two-round perfectly secure message transmission (PSMT) protocol tolerating a t-active threshold adversary. Recently, in ASIACRYPT 2010, Yang and Desmedt designed the first efficient two-round and three-round PSMT protocols tolerating a non-threshold adversary in both directed and undirected network settings. The key idea behind their protocols is the adaptation of the concept of pseudobasis from the threshold setting to the non-threshold setting. However, in this paper, we show that the concept of pseudobasis will not work in the non-threshold setting. This automatically implies that the protocols presented by Yang et al. fail to provide perfect reliability. As an additional contribution, we present efficient two-round and three-round PSMT protocols tolerating a non-threshold adversary in undirected networks. Our two-round protocol sends a single message, while our three-round protocol can send multiple messages concurrently. Our protocols are conceptually simpler, do not use the pseudobasis, and also do not use the idea of converting a linear secret sharing scheme (LSSS) into a linear code, as done in the protocols of Yang et al.
Last updated:  2011-01-28
Adaptive Pseudo-Free Groups and Applications
Dario Catalano, Dario Fiore, Bogdan Warinschi
A computational group is pseudo-free if an adversary cannot find solutions in this group for equations that are not trivially solvable in the free group. This notion was put forth by Rivest as a unifying abstraction of multiple group-related hardness assumptions commonly used in cryptography. Rivest's conjecture that the RSA group is pseudo-free had been settled by Micciancio for the case of RSA moduli that are the product of two safe primes. This result holds for a static setting where the adversary is only given the description of the group (together with a set of randomly chosen generators) and has to come up with the equation and the solution. In this paper we explore a powerful extension of the notion of pseudo-freeness. We identify, motivate, and study pseudo-freeness in the face of adaptive adversaries who may learn solutions to other non-trivial equations before having to solve a new non-trivial equation. Our first contribution is a carefully crafted definition of adaptive pseudo-freeness that walks a fine line between being too weak and being unsatisfiable. We give generic constructions that show how any group that satisfies our definition can be used to construct digital signatures and network signature schemes. Next, we prove that the RSA group meets our more stringent notion of pseudo-freeness, and as a consequence we obtain several results. First, we obtain a new network (homomorphic) signature scheme in the standard model. Secondly, we demonstrate the generality of our framework for signatures by showing that all existing strong-RSA-based signature schemes are instantiations of our generic construction in the RSA group.
Last updated:  2011-01-28
Revocable Attribute-Based Signatures with Adaptive Security in the Standard Model
Alex Escala, Javier Herranz, Paz Morillo
An attribute-based signature with respect to a signing policy, chosen ad-hoc by the signer, convinces the verifier that the signer holds a subset of attributes satisfying that signing policy. Ideally, the verifier must obtain no other information about the identity of the signer or the attributes he holds. This primitive has many applications in real scenarios requiring both authentication and anonymity/privacy properties. We propose in this paper the first attribute-based signature scheme satisfying at the same time the following properties: (1) it admits general signing policies, (2) it is proved secure against fully adaptive adversaries, in the standard model, and (3) the number of elements in a signature depends only on the size of the signing policy. Furthermore, our scheme enjoys the additional property of revocability: an external judge can break the anonymity of a signature, when necessary. This property may be very interesting in real applications where authorities are unwilling to allow full anonymity of users.
Last updated:  2011-06-18
Towards Strong Adaptive Corruption Security of Authenticated Key Exchange
Uncategorized
Zheng Yang
Uncategorized
In this paper we study strong adaptive corruption security definitions for authenticated key exchange (AKE) protocols. Many recent AKE protocols have been proven secure in the CK01 or eCK security model, and the newer model is suggested to be at least as strong as previous models for authenticated key exchange protocols. However, we observe several kinds of attacks on existing AKE protocols that lie beyond the current class of security definitions, and that further reveal shortcomings in the security proofs in the related AKE security models, in particular concerning protocols proven in the eCK model. Since the two models are not formally comparable, we discuss the ambiguities of existing security definitions and then provide a general framework for defining AKE security in the presence of strong adversary capabilities, in which we formulate the timing of authentication, key generation and key confirmation for different classes of AKE protocols. In addition, we propose a new two-pass AKE protocol called $\Sigma^y$ as an instance, which is proven secure under our proposed strong security definitions, in the random oracle model and under the GDH assumption. With this protocol we show that our proposed model also provides helpful guidance for designing a secure protocol under a strong adversary model. The intuition is generic: we embed a globally unique identifier for uniquely matching sessions into the key materials before submitting them to the final key derivation function.
Last updated:  2013-05-17
Authenticated Key Exchange with Synchronized State
Uncategorized
Zheng Yang
Uncategorized
We study the problem of how to either prevent identity impersonation (IDI) attacks or limit their consequences by detecting previously unidentified IDI attacks on-line, where IDI attacks are normally caused by the leakage of an identity-related long-term key. Such a problem has, up until now, lacked a provably good solution. We deal with this problem through the scenario of authenticated key exchange with synchronized state (AKESS). This work provides a security model for AKESS protocols, in which we particularly formalize the security of the synchronized state. We propose a two-party execution state synchronization framework for the symmetric case, based on which we propose a generic compiler for AKESS protocols. Our goal is to compile any existing passively secure key exchange (KE) protocol into an AKESS protocol using synchronized state, without any modification of those KE protocols. The proposal is provably secure in the standard model under standard assumptions.
Last updated:  2011-01-26
Unbounded HIBE and Attribute-Based Encryption
Uncategorized
Allison Lewko, Brent Waters
Show abstract
Uncategorized
In this work, we present HIBE and ABE schemes which are ``unbounded" in the sense that the public parameters do not impose additional limitations on the functionality of the systems. In all previous constructions of HIBE in the standard model, a maximum hierarchy depth had to be fixed at setup. In all previous constructions of ABE in the standard model, either a small universe size or a bound on the size of attribute sets had to be fixed at setup. Our constructions avoid these limitations. We use a nested dual system encryption argument to prove full security for our HIBE scheme and selective security for our ABE scheme, both in the standard model and relying on static assumptions. Our ABE scheme supports LSSS matrices as access structures and also provides delegation capabilities to users.
Last updated:  2011-01-26
A non-Abelian factorization problem and an associated cryptosystem
Srinath Baba, Srinivas Kotyad, Raghu Teja
In this note, we define a cryptosystem based on non-commutative properties of groups. The cryptosystem is based on the hardness of the problem of factoring over these groups. This problem, interestingly, boils down to discrete logarithm problem on some Abelian groups. Further, we illustrate this method in three different non-Abelian groups GL$_n({{\mathbb{F}}_q})$, UT$_n({{\mathbb{F}}_q})$ and the Braid Groups.
Last updated:  2011-06-17
Constructing differential 4-uniform permutations from known ones
Yuyin Yu, Mingsheng Wang, Yongqiang Li
It is observed that after exchanging two values of a function over ${\mathbb F}_{2^n}$, its differential uniformity and nonlinearity change only a little. Using this idea, we find differentially $4$-uniform permutations over ${\mathbb F}_{2^6}$ for which the number of pairs of input and output differences attaining the maximal value $4$ is $54$, less than $63$, which provides a solution for an open problem proposed by Berger et al. \cite{ber}. Moreover, for the inverse function over $\mathbb{F}_{2^n}$ ($n$ even), the possible differential uniformities are completely determined after two of its values are exchanged. As a consequence, we obtain some highly nonlinear permutations with differential uniformity $4$ which are CCZ-inequivalent to the inverse function on $\mathbb{F}_{2^n}$.
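A small Python sketch of the swap-and-measure idea: compute the differential uniformity of a function over $\mathbb{F}_{2^n}$ (elements as $n$-bit integers, addition as XOR), swap two output values, and recompute. The toy permutation used here is a random stand-in chosen for this note, not the inverse function considered in the paper.

```python
import random

def differential_uniformity(F, n):
    """Max over a != 0 and b of #{x : F(x ^ a) ^ F(x) == b}."""
    best = 0
    for a in range(1, 1 << n):
        counts = {}
        for x in range(1 << n):
            b = F[x ^ a] ^ F[x]
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

random.seed(1)
n = 6
F = list(range(1 << n))
random.shuffle(F)                       # a random toy permutation of F_{2^6}
print("before swap:", differential_uniformity(F, n))

# Exchange two output values, as in the construction above, and remeasure.
G = list(F)
G[0], G[1] = G[1], G[0]
print("after swap :", differential_uniformity(G, n))
```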
Last updated:  2011-09-13
Lower and Upper Bounds for Deniable Public-Key Encryption
Rikke Bendlin, Jesper Buus Nielsen, Peter Sebastian Nordholt, Claudio Orlandi
A deniable cryptosystem allows a sender and a receiver to communicate over an insecure channel in such a way that the communication is still secure even if the adversary can threaten the parties into revealing their internal states after the execution of the protocol. This is done by allowing the parties to change their internal state to make it look like a given ciphertext decrypts to a message different from what it really decrypts to. Deniable encryption was introduced in this way to allow parties to deny a message exchange and hence combat coercion. Depending on which parties can be coerced, the security level, the flavor and the number of rounds of the cryptosystem, it is possible to define a number of notions of deniable encryption. In this paper we prove that there does not exist any non-interactive receiver-deniable cryptosystem with better than polynomial security. This also shows that it is impossible to construct a non-interactive bi-deniable public-key encryption scheme with better than polynomial security. Specifically, we give an explicit bound relating the security of the scheme to how efficient the scheme is in terms of key size. Our impossibility result establishes a lower bound on the security. As a final contribution we give constructions of deniable public-key encryption schemes which establish upper bounds on the security in terms of key length. There is a gap between our lower and upper bounds, which leaves the interesting open problem of finding the tight bounds.
Last updated:  2011-01-25
Private Identification, Authentication and Key Agreement Protocol with Security Mode Setup
Farshid Farhat, Somayeh Salimi, Ahmad Salahi
The identification, authentication and key agreement protocol of UMTS networks with security mode setup has some weaknesses regarding mutual freshness of key agreement, DoS-attack resistance, and efficient bandwidth consumption. In this article we consider UMTS AKA and some other proposed schemes. Then we explain the known weaknesses of the previous frameworks suggested for the UMTS AKA protocol. After that we propose a new protocol, called private identification, authentication, and key agreement protocol (PIAKAP), for UMTS mobile networks. Our suggested protocol combines the identification and AKA stages of the UMTS AKA protocol while eliminating the disadvantages of related works and bringing some new features to improve the UMTS AKA mechanism. These features consist of reducing the interactive rounds of UMTS AKA with security mode setup and establishing user privacy.
Last updated:  2011-01-25
Fast Scalar Multiplication in ECC using the Multibase Number System
G. N. Purohit, Asmita Singh Rawat
As a generalization of double-base chains, the multibase number system is very suitable for efficient computation of the scalar multiplication of an elliptic curve point because of its shorter representation length and lower Hamming weight. In this paper, combined with given formulas for computing the 7-fold of an elliptic curve point P, an efficient scalar multiplication algorithm for elliptic curves is proposed using 2, 3 and 7 as bases of the multibase number system. The algorithm costs less compared with Shamir's trick and the interleaving-with-NAFs method.
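For illustration only, the sketch below greedily decomposes a scalar into a short sum of terms $2^a 3^b 7^c$; fewer terms means fewer point additions in $Q = kP$. This naive greedy decomposition is an assumption of this note used to show the representation, not the paper's algorithm, and the septupling formulas themselves are not reproduced.

```python
# Greedy multibase {2, 3, 7} decomposition of a scalar k into terms 2^a * 3^b * 7^c.
def closest_237_term(k):
    """Largest value 2^a * 3^b * 7^c not exceeding k."""
    best = 1
    c = 1
    while c <= k:
        b = c
        while b <= k:
            a = b
            while a <= k:
                best = max(best, a)
                a *= 2
            b *= 3
        c *= 7
    return best

def multibase_terms(k):
    """Write k as a sum of {2,3,7}-smooth terms (greedy, not necessarily optimal)."""
    terms = []
    while k:
        t = closest_237_term(k)
        terms.append(t)
        k -= t
    return terms

k = 314159
terms = multibase_terms(k)
assert sum(terms) == k
# Compare the term count with the binary Hamming weight bin(k).count("1").
print(len(terms), terms)
```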
Last updated:  2011-01-25
Proxy Blind Multi-signature Scheme using ECC for handheld devices
Jayaprakash Kar
A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of two or more original signers without knowing the content of the message or document. It combines the advantages of proxy signature, blind signature and multi-signature schemes. This paper describes an efficient proxy blind multi-signature scheme. The security of the proposed scheme is based on the difficulty of breaking the one-way hash function and the elliptic curve discrete logarithm problem (ECDLP). It can be implemented on low-power, small-processor handheld devices such as smart cards and PDAs. The scheme utilizes a trusted third party called a certificate authority to ensure that signatures can only be generated during the valid delegation period. It satisfies the security properties of both proxy and blind signature schemes.
Last updated:  2011-02-14
Computing endomorphism rings of elliptic curves under the GRH
Gaetan Bisson
We design a probabilistic algorithm for computing endomorphism rings of ordinary elliptic curves defined over finite fields that we prove has a subexponential runtime in the size of the base field, assuming solely the generalized Riemann hypothesis. Additionally, we improve the asymptotic complexity of previously known, heuristic, subexponential methods by describing a faster isogeny-computing routine.
Last updated:  2013-09-19
Reclaiming Privacy for Smartphone Applications (Revised Version)
Emiliano De Cristofaro, Anthony Durussel, Imad Aad
The scope of mobile phones has skyrocketed in recent years to such an extent that smartphone sales are expected to surpass those of PCs by the end of 2011. Equipped with relatively powerful processors and fairly large memory and storage capabilities, smartphones can accommodate increasingly complex interactive applications. As a result, the growing amount of sensitive information shared by smartphone users raises serious privacy concerns and motivates the need for appropriate privacy-preserving mechanisms. In this paper, we present a novel architecture geared for privacy-sensitive applications where personal information is shared among users and decisions are made based on given optimization criteria. Specifically, we focus on two application scenarios: (i) privacy-preserving interest sharing, i.e., discovering shared interests without leaking users' private information, and (ii) private scheduling, i.e., determining common availabilities and location preferences that minimize associate costs, without exposing any sensitive information. We propose efficient yet provably-private solutions, and conduct an extensive experimental analysis that attests to the practicality of the attained privacy features.
Last updated:  2011-01-21
Simple and Exact Formula for Minimum Loop Length in Ate_i Pairing based on Brezing-Weng Curves
Hoon Hong, Eunjeong Lee, Hyang-Sook Lee, Cheol-Min Park
We provide a simple and exact formula for the minimum Miller loop length in Ate_i pairing based on Brezing-Weng curves, in terms of the involved parameters, under a mild condition on the parameters. It will also be shown that almost all cryptographically useful parameters satisfy the mild condition. Hence the simple and exact formula is valid for them. It will also turn out that the formula depends only on two parameters, providing freedom to choose the other parameters to address design issues other than minimizing the loop length.
Last updated:  2020-12-30
Fast point quadrupling on elliptic curves
Duc-Phong Le, Binh P Nguyen
Ciet et al. (2006) proposed an elegant method for trading inversions for multiplications when computing $[2]P+Q$ from two given points $P$ and $Q$ on elliptic curves of Weierstrass form. Motivated by their work, this paper proposes a fast algorithm for computing $[4]P$ with only one inversion in affine coordinates. Our algorithm, which requires $1I + 8S + 8M$, is faster than two repeated doublings whenever the cost of one field inversion is more expensive than the cost of four field multiplications plus four field squarings (i.e. $I > 4M + 4S$). It saves one field multiplication and one field squaring in comparison with the Sakai-Sakurai method (2001). Even better, for special curves that allow an "$a = 0$" (or "$b = 0$") speedup, we obtain $[4]P$ in affine coordinates using just $1I + 5S + 9M$ (or $1I + 5S + 6M$, respectively).
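For concreteness, the claimed trade-off can be written out as follows, assuming the usual affine Weierstrass doubling cost of $1I + 2M + 2S$ (this per-doubling cost is a standard figure, not a number stated in the abstract):

```latex
\begin{align*}
\text{two affine doublings:}\quad & 2\,(1I + 2M + 2S) = 2I + 4M + 4S,\\
\text{direct quadrupling:}\quad   & 1I + 8M + 8S,\\
\text{quadrupling is cheaper iff}\quad & 2I + 4M + 4S > 1I + 8M + 8S
  \;\Longleftrightarrow\; I > 4M + 4S.
\end{align*}
```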
Last updated:  2011-01-21
Cold Boot Key Recovery by Solving Polynomial Systems with Noise
Martin Albrecht, Carlos Cid
A method for extracting cryptographic key material from DRAM used in modern computers has been recently proposed in [9]; the technique was called Cold Boot attacks. When considering block ciphers, such as the AES and DES, simple algorithms were also proposed in [9] to recover the cryptographic key from the observed set of round subkeys in memory (computed via the cipher's key schedule operation), which were however subject to errors due to memory bits decay. In this work we extend this analysis to consider key recovery for other ciphers used in Full Disk Encryption (FDE) products. Our algorithms are also based on closest code word decoding methods, however apply a novel method for solving a set of non-linear algebraic equations with noise based on Integer Programming. This method should have further applications in cryptology, and is likely to be of independent interest. We demonstrate the viability of the Integer Programming method by applying it against the Serpent block cipher, which has a much more complex key schedule than AES. Furthermore, we also consider the Twofish key schedule, to which we apply a dedicated method of recovery.
Last updated:  2011-01-21
Higher-Order Differential Attack on Reduced SHA-256
Uncategorized
Mario Lamberger, Florian Mendel
Show abstract
Uncategorized
In this work, we study the application of higher-order differential attacks on hash functions. We show a second-order differential attack on the SHA-256 compression function reduced to 46 out of 64 steps. We implemented the attack and give the result in Table 1. The best attack so far (in a different attack model) with practical complexity was for 33 steps of the compression function.
Last updated:  2011-06-12
The Complexity Analysis of the MutantXL Family
Uncategorized
Mohamed Saied Emam Mohamed, Jintai Ding, Johannes Buchmann
Uncategorized
Algebraic attacks are based on the problem of solving systems of multivariate polynomial equations. The complexity of algorithms for solving this problem is essentially affected by the method of enlarging these multivariate systems. The MutantXL algorithm was presented as an efficient algorithm for solving multivariate systems. In this paper, we study the complexity of the MutantXL algorithm and give an upper bound on the number of mutants necessary for terminating the computations of the algorithm. At the ePrint archive, E. Thomae and C. Wolf recently published a new paper and presented two formulas for an upper bound on the number of linearly independent equations generated by MutantXL. We have noticed that these two formulas are not accurate. The first one indeed leads to a large upper bound. The second one, however, does not take into account many linearly independent equations. We indicate the errors in these formulas and propose new ones. Our formulas are based on a theoretical evaluation of the maximum number of linearly independent equations produced by MutantXL. We have verified the new formulas with a large number of experiments on some HFE and randomly generated systems. Furthermore, we study the complexity of the MGB algorithm for computing Gröbner bases. The analysis of MGB suggests a new upper bound for the complexity of computing Gröbner bases.
Last updated:  2012-10-11
A New Family of Implicitly Authenticated Diffie-Hellman Protocols
Uncategorized
Andrew C. Yao, Yunlei Zhao
Show abstract
Uncategorized
Cryptographic algorithm standards play a key role both in the practice of information security and in cryptographic theory research. Among them, the MQV and HMQV protocols ((H)MQV, in short) are a family of implicitly authenticated Diffie-Hellman key-exchange (DHKE) protocols that are among the most efficient and are widely standardized. In this work, from some new perspectives and under some new design rationales, and also inspired by the security analysis of HMQV, we develop a new family of practical implicitly authenticated DHKE (IA-DHKE) protocols, which enjoy notable performance in terms of security, efficiency, privacy, fairness and ease of deployment. We make detailed comparisons between our new protocols and (H)MQV, showing that the newly developed protocols outperform HMQV in most aspects. Very briefly speaking, we achieve: 1. The most efficient provably secure IA-DHKE protocol to date, and the first online-optimal provably secure IA-DHKE protocols. 2. The first IA-DHKE protocol that is provably secure and resilient to the leakage of DH components and exponents, under merely standard assumptions without additionally relying on the knowledge-of-exponent assumption (KEA). 3. The first provably secure privacy-preserving and computationally fair IA-DHKE protocol, with the privacy-preserving properties of reasonable deniability and post-ID computability and the property of session-key computational fairness. Guided by our new design rationales, in this work we also formalize and introduce a new concept, session-key computational fairness (as a complement to session-key security), to the literature.
Last updated:  2012-09-13
Secure Authentication from a Weak Key, Without Leaking Information
Niek J. Bouman, Serge Fehr
We study the problem of authentication based on a weak key in the information-theoretic setting. A key is weak if its min-entropy is an arbitrarily small fraction of its bit length. This problem has recently received considerable attention, with different solutions optimizing different parameters. We study the problem in an extended setting, where the weak key serves as a one-time session key that is derived from a public source of randomness with the help of a (potentially also weak) long-term key. Our goal now is to authenticate a message by means of the weak session key in such a way that (nearly) no information on the long-term key is leaked. Ensuring privacy of the long-term key is vital for the long-term key to be re-usable. Previous work has not considered such a privacy issue, and previous solutions do not seem to satisfy this requirement. We show the existence of a practical four-round protocol that provides message authentication from a weak session key and that avoids non-negligible leakage on the long-term key. The security of our scheme also holds in the quantum setting, where the adversary may have limited quantum side information on the weak session key. As an application of our scheme, we show the existence of an identification scheme in the bounded quantum storage model that is secure against a man-in-the-middle attack and that is truly password-based: it does not need any high-entropy key, in contrast to the scheme proposed by Damgaard et al.
Last updated:  2011-09-02
The Geometry of Flex Tangents to a Cubic Curve and its Parameterizations
Jean-Marc Couveignes, Jean-Gabriel Kammerer
We show how the study of the geometry of the nine flex tangents to a cubic produces many pseudo-parameterizations, including the ones given by Icart, Kammerer, Lercier, Renault and Farashahi.
Last updated:  2011-01-19
Corrigendum to: The Cube Attack on Stream Cipher Trivium and Quadraticity Tests
Piotr Mroczkowski, Janusz Szmidt
In 2008, I. Dinur and A. Shamir presented a new type of algebraic attack on symmetric ciphers named the cube attack. The method has been applied to reduced variants of the stream ciphers Trivium and Grain-128, reduced variants of the block ciphers Serpent and CTC, and to a reduced version of the keyed hash function MD6. Independently, a very similar attack named AIDA was introduced by M. Vielhaber. In this paper we develop quadraticity tests within the cube attack and apply them to a variant of the stream cipher Trivium reduced to 709 initialization rounds. Using this method we obtain the full 80-bit secret key, thereby eliminating the stage of brute-force search over some secret key bits which occurred in previous cube attacks. In this corrigendum to our previous paper, the indexing of cubes and key bits has been reversed, making it consistent with other papers.
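A toy Python sketch of the two ingredients named above: summing the first output bit over a "cube" of IV bits to obtain a superpoly in the key, and a quadraticity test on that superpoly. The small Boolean function below is an arbitrary stand-in invented for this note; it is not Trivium, and the real attack works with far larger cubes and key sizes.

```python
from itertools import product
import random

# Toy stand-in "cipher": one output bit as a Boolean function of 4 key bits
# and 3 IV bits (purely illustrative, chosen for this note).
def toy_output_bit(k, v):
    return (v[0] & v[1] & ((k[0] & k[1]) ^ k[2])) ^ (v[0] & v[1] & v[2] & k[3]) ^ (k[0] & v[2])

CUBE = (0, 1)                          # IV indices summed over (the "cube")

def superpoly(key):
    """Cube sum: XOR the output bit over all values of the cube IV bits
    (remaining IV bits fixed to 0); the result is a polynomial in the key bits."""
    s = 0
    for bits in product((0, 1), repeat=len(CUBE)):
        iv = [0, 0, 0]
        for idx, bit in zip(CUBE, bits):
            iv[idx] = bit
        s ^= toy_output_bit(key, iv)
    return s

def second_derivative(f, x, a, b):
    xor = lambda u, v: [ui ^ vi for ui, vi in zip(u, v)]
    return f(x) ^ f(xor(x, a)) ^ f(xor(x, b)) ^ f(xor(xor(x, a), b))

def looks_quadratic(f, nvars, trials=200):
    """If deg(f) <= 2, the second derivative D_a D_b f is constant in x;
    the test compares it at random points x, y for random directions a, b."""
    rnd = lambda: [random.randrange(2) for _ in range(nvars)]
    for _ in range(trials):
        a, b, x, y = rnd(), rnd(), rnd(), rnd()
        if second_derivative(f, x, a, b) != second_derivative(f, y, a, b):
            return False
    return True

random.seed(0)
# Here the superpoly equals k0*k1 ^ k2, so the test reports True.
print(looks_quadratic(superpoly, nvars=4))
```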
Last updated:  2012-07-11
Efficient Unconditional Asynchronous Byzantine Agreement with Optimal Resilience
Ashish Choudhury, Arpita Patra
We present an efficient and optimally resilient Asynchronous Byzantine Agreement (ABA) protocol involving n = 3t+1 parties over a completely asynchronous network, tolerating a computationally unbounded Byzantine adversary, who can control at most t parties out of the n parties. The amortized communication complexity of our ABA protocol is O(n^{3} \log \frac{1}{\epsilon}) bits for attaining agreement on a single bit, where \epsilon (\epsilon > 0) denotes the probability of non-termination. We compare our protocol with the best known optimally resilient ABA protocols of Canetti et al.(STOC 1993) and Abraham et al.~(PODC 2008) and show that our protocol gains by a factor of O(n^{8} \log \frac{1}{\epsilon}^{3}) over the ABA protocol of Canetti et al. and by a factor of O(n^{5} \frac{\log{n}}{\log \frac{1}{\epsilon}}) over the ABA protocol of Abraham et al. in terms of the communication complexity. To design our protocol, we first present a new, optimally resilient statistical asynchronous verifiable secret sharing (AVSS) protocol with n = 3t+1, which significantly improves the communication complexity of the only known optimally resilient statistical AVSS protocol of Canetti et al. Our AVSS protocol shares multiple secrets simultaneously and incurs lower communication complexity than executing multiple instances of an AVSS protocol sharing a single secret. To design our AVSS protocol, we further present a new asynchronous primitive called asynchronous weak commitment (AWC), which acts as a substitute for asynchronous weak secret sharing (AWSS), which was used as a primitive for designing AVSS by Canetti et al. We observe that AWC has weaker requirements than the AWSS and hence can be designed more efficiently. The common coin primitive is one of the most important building blocks for the construction of an ABA protocol. The best known common coin protocol of Feldman et al. requires multiple instances of an AVSS protocol sharing a single secret as a black-box. Unfortunately, this common coin protocol does not achieve its goal when the multiple invocations of AVSS sharing a single secret is replaced by a single invocation of an AVSS protocol sharing multiple secrets simultaneously. Therefore in this paper, we extend the existing common coin protocol to make it compatible with our new AVSS protocol (sharing multiple secrets). As a byproduct, our new common coin protocol is much more communication efficient than the existing common coin protocol.
Last updated:  2011-01-18
Fast Elliptic Curve Cryptography Using Optimal Double-Base Chains
Vorapong Suppakitpaisarn, Masato Edahiro, Hiroshi Imai
In this work, we propose an algorithm to produce double-base chains that optimize the time used for computing an elliptic curve cryptosystem. The double-base chain is a representation that combines the binary and ternary representations. By this method, we can reduce the Hamming weight of the expansion, and thus reduce the time for computing the scalar point multiplication (Q = rS), which is the bottleneck operation of an elliptic curve cryptosystem. This representation is very redundant, i.e. a number can be represented by many expansions, so we can select the one that makes the operation fastest. However, previous works on double-base chains have used a greedy algorithm, and their solutions are not optimal. We propose an algorithm based on a dynamic programming scheme that outputs an optimized double-base chain. The experiments show that we have reduced the time for computing the scalar multiplication by 3.88-3.95%, the multi-scalar multiplication by 2.55-4.37%, and the multi-scalar multiplication on the larger digit set by 3.5-12%.
Last updated:  2011-03-14
Outline of a proposal responding to E.U. and U.S. calls for trustworthy global-scale IdM and CKM designs
Benjamin Gittins
In 2007, the E.U. FP6 SecurIST called for trustworthy international identity management (IdM) that was user-centric. In 2009, the U.S. Department of Homeland Security (DHS) called for trustworthy global-scale IdM and the U.S. National Institute of Standards and Technology (NIST) called for new cryptographic key management (CKM) designs. In this paper we outline the core architecture for (apparently) the first globally scalable, post quantum secure, symmetric key based platform for provisioning IdM, key distribution/agreement and inter-enterprise CKM services. Our proposal employs a decentralised trust model that exploits compartmentalisation, redundancy and diversification simultaneously across service provider, software developer, hardware vendor, class of cryptographic primitive, and protocol axis. It employs behavioural analysis techniques and supports the collaborative management of international name spaces, management of client transactions using public identifiers and supports user-centric cross-cutting control mechanisms. Our proposal is suitable for use with commercial off the shelf hardware and is designed to wrap-around and protect the output of existing security deployments. The platform addresses the U.S. Networking and Information Technology Research and Development Program (NITRD) call to create a digital immune system (multi-layered protection, decentralised control, diversity, pattern recognition), the DHS call for combating insider attacks and malware, achieving survivability and availability, and NIST managers' call for a CKM design supporting billions of users without the use of public key technologies. This proposal has been designed as part of our Trustworthy Resilient Universal Secure Infrastructure Platform project.
Last updated:  2012-02-10
The Parazoa Family: Generalizing the Sponge Hash Functions
Elena Andreeva, Bart Mennink, Bart Preneel
Sponge functions were introduced by Bertoni et al. as an alternative to the classical Merkle-Damgaard design. Many hash function submissions to the SHA-3 competition launched by NIST in 2007, such as CubeHash, Fugue, Hamsi, JH, Keccak and Luffa, derive from the original sponge design, and security guarantees from some of these constructions are typically based on indifferentiability results. Although indifferentiability proofs for these designs often bear significant similarities, these have so far been obtained independently for each construction. In this work, we introduce the parazoa family of hash functions as a generalization of ``sponge-like'' functions. Similarly to the sponge design, the parazoa family consists of compression and extraction phases. The parazoa hash functions, however, extend the sponge construction by enabling the use of a wider class of compression and extraction functions that need to satisfy certain properties. More importantly, we prove that the parazoa functions satisfy the indifferentiability notion of Maurer et al. under the assumption that the underlying permutation is ideal. Not surprisingly, our indifferentiability result confirms the bound on the original sponge function, but it also carries over to a wider spectrum of hash functions and eliminates the need for a separate indifferentiability analysis.
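For orientation, here is a minimal sponge-style absorb/squeeze skeleton in Python with a toy 16-bit permutation. The rate, capacity, padding rule and permutation are all illustrative stand-ins chosen for this note (they are not Keccak or any SHA-3 candidate); the parazoa family generalizes exactly these compression and extraction phases.

```python
# Minimal sponge skeleton: absorb r-bit blocks, then squeeze r bits per permutation call.
RATE, CAPACITY = 8, 8                         # toy state of r + c = 16 bits

def toy_permutation(state):
    """A toy 16-bit bijection standing in for a real permutation such as Keccak-f."""
    state = (state * 0x2D) & 0xFFFF           # multiply by an odd constant (bijective mod 2^16)
    state ^= state >> 7                       # xorshift step (bijective)
    state = ((state << 5) | (state >> 11)) & 0xFFFF   # rotate left by 5
    return state ^ 0xA5C3                     # add a round constant

def sponge_hash(message_bytes, out_bits=16):
    state = 0
    # Compression (absorbing) phase: XOR each rate-sized block into the outer part.
    padded = message_bytes + b"\x01"          # toy padding rule
    for byte in padded:
        state ^= byte << CAPACITY             # outer r bits are the top 8 bits here
        state = toy_permutation(state)
    # Extraction (squeezing) phase: read r bits, permute, repeat until enough output.
    out, produced = 0, 0
    while produced < out_bits:
        out = (out << RATE) | (state >> CAPACITY)
        produced += RATE
        state = toy_permutation(state)
    return out

print(hex(sponge_hash(b"parazoa")))
```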
Last updated:  2011-01-14
Simple and Efficient Single Round Almost Perfectly Secure Message Transmission Tolerating Generalized Adversary
Ashish Choudhury, Kaoru Kurosawa, Arpita Patra
Patra et al. gave a necessary and sufficient condition for the possibility of almost perfectly secure message transmission protocols tolerating a general, non-threshold Q^2 adversary structure. However, their protocol requires at least three rounds and performs exponential (in the size of the adversary structure) computation and communication. Moreover, they left it as an open problem to design an efficient protocol for almost perfectly secure message transmission tolerating a Q^2 adversary structure. In this paper, we show the first single-round almost perfectly secure message transmission protocol tolerating a Q^2 adversary structure. The computation and communication complexities of the protocol are both polynomial in the size of the underlying linear secret sharing scheme (LSSS) and the adversary structure. This solves the open problem raised by Patra et al. When we restrict our general protocol to a threshold adversary with n=2t+1, we obtain a single-round, communication-optimal almost secure message transmission protocol tolerating a threshold adversary, which is much more computationally efficient and relatively simpler than the previous communication-optimal protocol of Srinathan et al.
Last updated:  2011-04-14
Private Discovery of Common Social Contacts
Emiliano De Cristofaro, Mark Manulis, Bertram Poettering
The increasing use of computing devices for social interactions fuels the proliferation of online social applications, yet prompts a number of privacy concerns. One common problem occurs when two unfamiliar users, in the process of establishing social relationships, want to assess their social proximity by discovering mutual social contacts. In this paper, we introduce \emph{Private Contact Discovery}, a novel cryptographic primitive that lets two users, on input their respective contact lists, learn their common contacts (if any), and nothing else. We present an efficient and provably secure construction, that (i) prevents arbitrary list manipulation by means of contact certification, and (ii) guarantees user authentication and revocability. Following a rigorous cryptographic treatment of the problem, we define \emph{contact-hiding} security and prove it for our solution, under the RSA assumption in the Random Oracle Model (ROM). We also show that other related cryptographic techniques are unsuitable in this context. Experimental analysis on various types of devices attests to the practicality of our technique, which achieves computational and communication overhead almost linear in the number of contacts.
Last updated:  2011-01-14
Supporting Publication and Subscription Confidentiality in Pub/Sub Networks
Uncategorized
Mihaela Ion, Giovanni Russello, Bruno Crispo
Show abstract
Uncategorized
The publish/subscribe model offers a loosely-coupled communication paradigm where applications interact indirectly and asynchronously. Publisher applications generate events that are sent to interested applications through a network of brokers. Subscriber applications express their interest by specifying filters that brokers can use for routing the events. Supporting confidentiality of messages being exchanged is still challenging. First of all, it is desirable that any scheme used for protecting the confidentiality of both the events and filters should not require the publishers and subscribers to share secret keys. In fact, such a restriction is against the loose-coupling of the model. Moreover, such a scheme should not restrict the expressiveness of filters and should allow the broker to perform event filtering to route the events to the interested parties. Existing solutions do not fully address these issues. In this paper, we provide a novel scheme that supports (i) confidentiality for events and filters; (ii) filters can express very complex constraints on events even if brokers are not able to access any information on both events and filters; (iii) and finally it does not require publishers and subscribers to share keys.
Last updated:  2011-01-14
Secure evaluation of polynomial using privacy ring homomorphisms
Alexander Rostovtsev, Alexey Bogdanov, Mikhail Mikhaylov
A method for secure evaluation of a polynomial y = F(x_1, …, x_k) over certain rings on an untrusted computer is proposed. Two models of the untrusted computer are considered: passive and active. In the passive model the untrusted computer correctly computes the polynomial F and tries to learn the secret input (x_1, …, x_k) and output y. In the active model the untrusted computer tries to learn the input and output and tries to change the correct output y so that this change cannot be detected. Secure computation is achieved by using a one-time privacy ring homomorphism Z/nZ -> Z/nZ[z]/(f(z)), n = pq, generated by the trusted computer. In the case of the active model a secret check point v = F(u_1, …, u_k) is used. The trusted computer generates the polynomial f(z) = (z-t)(z+t), t in Z/nZ, and inputs X_i(z) in Z/nZ[z]/(f(z)) such that X_i(t) = x_i (mod n) for the passive model, and f(z) = (z-t_1)(z-t_2)(z-t_3), t_i in Z/nZ, and inputs X_i(z) in Z/nZ[z]/(f(z)) such that X_i(t_1) = x_i (mod n), X_i(t_2) = u_i (mod n) for the active model. The untrusted computer computes the function Y(z) = F(X_1(z), …, X_k(z)) in the ring Z/nZ[z]/(f(z)). For the passive model the trusted computer determines the secret output y = Y(t) (mod n). For the active model the trusted computer checks that Y(t_2) = v (mod n), then determines the correct output y = Y(t_1) (mod n).
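A small Python sketch of the passive-model construction described above. The primes are tiny and the example polynomial F(x1, x2) = x1^2*x2 + 3*x2 is an arbitrary choice of this note; a real deployment would use a cryptographically sized n.

```python
# Passive-model sketch: trusted side picks n = p*q and secret t, so that
# f(z) = (z - t)(z + t) = z^2 - t^2; elements of Z/nZ[z]/(f(z)) are degree<=1
# polynomials and reduction modulo f amounts to replacing z^2 with t^2.
import random

p, q = 1009, 1013                     # toy primes; real n would be RSA-sized
n = p * q
t = random.randrange(2, n)            # secret evaluation point held by the trusted side
t2 = t * t % n                        # t^2 is what f(z) reveals to the untrusted side

def encode(x):
    """Random X(z) = r*z + (x - r*t) with X(t) = x (mod n); done by the trusted side."""
    r = random.randrange(n)
    return (r, (x - r * t) % n)       # (coefficient of z, constant term)

def radd(A, B):
    return ((A[0] + B[0]) % n, (A[1] + B[1]) % n)

def rmul(A, B):
    a1, a0 = A
    b1, b0 = B
    c2 = a1 * b1 % n                  # z^2 coefficient before reduction
    c1 = (a1 * b0 + a0 * b1) % n
    c0 = a0 * b0 % n
    return (c1, (c0 + c2 * t2) % n)   # reduce z^2 -> t^2 (i.e. reduce mod f(z))

# Untrusted computer: evaluate F(X1, X2) = X1^2 * X2 + 3*X2 in Z/nZ[z]/(f(z)).
def F_ring(X1, X2):
    return radd(rmul(rmul(X1, X1), X2), rmul((0, 3), X2))

x1, x2 = 17, 42
X1, X2 = encode(x1), encode(x2)       # trusted side prepares the masked inputs
Y = F_ring(X1, X2)                    # untrusted side computes on masked inputs
y = (Y[0] * t + Y[1]) % n             # trusted side recovers y = Y(t) mod n
assert y == (x1 * x1 * x2 + 3 * x2) % n
print(y)
```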
Last updated:  2011-01-14
Improved zero-sum distinguisher for full round Keccak-f permutation
Ming Duan, Xuejia Lai
K$\textsc{eccak}$ is one of the five hash functions selected for the final round of the SHA-3 competition, and its inner primitive is a permutation called K$\textsc{eccak}$-$f$. In this paper, we find that for the inverse of the only nonlinear transformation in K$\textsc{eccak}$-$f$, the algebraic degrees of any output coordinate and of the product of any two output coordinates are both 3, which is 2 less than its size 5. Combining this observation with a proposition giving an upper bound on the degree of iterated permutations, we improve the zero-sum distinguisher of the full 24-round K$\textsc{eccak}$-$f$ permutation by lowering the size of the zero-sum partition from $2^{1590}$ to $2^{1579}$.
Last updated:  2011-01-14
Cryptanalysis with Ternary Difference: Applied to Block Cipher PRESENT
Farzaneh Abazari, Babak Sadeghian
The signed difference approach was first introduced by Wang for finding collisions in MD5. In this paper we introduce the ternary difference approach and represent it with 3 symbols. To show its application, we combine the ternary difference approach with conventional differential cryptanalysis and apply it to the cryptanalysis of reduced-round PRESENT. We also use the ant colony technique to obtain the best differential characteristic. To illustrate the merit of the experimental results, we calculate the advantage of the attack.
Last updated:  2011-01-17
Fully Secure Anonymous Hierarchical Identity-Based Encryption with Constant Size Ciphertexts
Jae Hong Seo, Jung Hee Cheon
Efficient and privacy-preserving constructions for search functionality on encrypted data are important for data outsourcing, data retrieval, etc. Fully secure anonymous Hierarchical ID-Based Encryption (HIBE) schemes are useful primitives applicable to searchable encryption [4], such as ID-based searchable encryption, temporary searchable encryption [1], and anonymous forward-secure HIBE [9]. We propose a fully secure anonymous HIBE scheme with constant size ciphertexts.
Last updated:  2012-01-30
Cover and Decomposition Index Calculus on Elliptic Curves made practical. Application to a seemingly secure curve over $\F_{p^6}$
Antoine Joux, Vanessa Vitse
We present a new variant of cover and decomposition attacks on the elliptic curve discrete logarithm problem, that combines Weil descent and decomposition-based index calculus into a single discrete logarithm algorithm. This variant applies, at least theoretically, to all composite degree extension fields, and is particularly well-suited for curves defined over $\F_{p^6}$. We give a real-size example of discrete logarithm computations on a seemingly secure curve defined over a $130$-bit degree $6$ extension field.
Last updated:  2011-01-14
Collision Resistance of the JH Hash Function
Jooyoung Lee, Deukjo Hong
In this paper, we analyze the collision resistance of the JH hash function in the ideal primitive model. The JH hash function is one of the five SHA-3 candidates accepted for the final round of evaluation. The JH hash function uses a mode of operation based on a permutation, while its security has remained elusive even in the random permutation model. One can find a collision for the JH compression function with only two backward queries to the underlying primitive. However, the security is significantly enhanced in iteration. For $c\leq n/2$, we prove that the JH hash function using an ideal $n$-bit permutation and producing $c$-bit outputs by truncation is collision resistant up to $O(2^{c/2})$ queries. This bound implies that the JH hash function provides optimal collision resistance in the random permutation model.
Last updated:  2011-04-05
Homomorphic Signatures for Polynomial Functions
Dan Boneh, David Mandell Freeman
We construct the first homomorphic signature scheme that is capable of evaluating multivariate polynomials on signed data. Given the public key and a signed data set, there is an efficient algorithm to produce a signature on the mean, standard deviation, and other statistics of the signed data. Previous systems for computing on signed data could only handle linear operations. For polynomials of constant degree, the length of a derived signature only depends logarithmically on the size of the data set. Our system uses ideal lattices in a way that is a ``signature analogue'' of Gentry's fully homomorphic encryption. Security is based on hard problems on ideal lattices similar to those in Gentry's system.
Last updated:  2011-01-19
New Impossible Differential Attacks of Reduced-Round Camellia-192 and Camellia-256
Jiazhe Chen, Keting Jia, Hongbo Yu, Xiaoyun Wang
Camellia is a block cipher selected as a standard by ISO/IEC, which has been analyzed by a number of cryptanalysts. In this paper, we propose several 6-round impossible differential paths of Camellia with the $FL/FL^{-1}$ layer in the middle of them. With these impossible differentials and a well-organized precomputed table, we give impossible differential attacks on 10-round Camellia-192 and 11-round Camellia-256 with time complexities of $2^{175}$ and $2^{206.8}$, respectively. An impossible differential attack on 15-round Camellia-256 without $FL/FL^{-1}$ layers and whitening is also given, which needs about $2^{236.1}$ encryptions. To the best of our knowledge, these are the best cryptanalytic results to date on Camellia-192/-256 with $FL/FL^{-1}$ layers and on Camellia-256 without $FL/FL^{-1}$ layers.
Last updated:  2011-01-08
An Anonymous Health Care System
Melissa Chase, Kristin Lauter
As medical records are converted to electronic form, risks of compromise of patients' privacy increase dramatically. The electronic format makes misuse of many patients' data much easier, so we must be extremely careful with who has access to this data. At the same time, this move to an electronic approach also gives us opportunities to improve patient privacy by leveraging recent cryptographic techniques, and in some ways to improve upon the traditional system. Here we look in particular at those parties, such as insurers and pharmacies, that are not actively involved in patient care. Currently patients who are insured are required to share the entire record of their medical treatment with their insurer in order to receive benefits, and a pharmacy may store all prescriptions filled for each patient. However, there is no medical reason for these parties to see this information --- they only need enough information to be able to prevent fraud and verify that the provided treatment should be covered under the patient's policy, or that the patient has a valid prescription for the medication being dispensed. We argue that, using recent developments in cryptography, we can allow this verification without revealing any additional information about the patient's record, thus obtaining optimal privacy guarantees.
Last updated:  2011-04-01
Exponential attacks on 6-round Luby-Rackoff and on 5-round Lai-Massey
Jean-Philippe Aumasson
The random oracle model and the ideal cipher model were proven equivalent after Coron et al. (CRYPTO 08) showed that six Feistel rounds are indifferentiable from an ideal cipher. This result, however, does not rule out superpolynomial-time attacks outperforming generic (exponential-time) attacks. Finding such attacks was left open by Coron et al., and is of utmost importance for evaluating the security of concrete fixed-parameter systems, as deployed in practice, for which a superpolynomial guarantee is an insufficient security argument. Addressing this issue, this paper proposes an exponential attack on six Feistel rounds, thus showing that at least seven rounds are necessary for optimal security guarantees. We then consider the Lai-Massey construction, as used in the block ciphers IDEA and FOX, for which we present an efficient attack on four rounds and an exponential attack on five. As a consequence, at least five Lai-Massey rounds are necessary to achieve indifferentiability in the general model.
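For readers unfamiliar with the two constructions discussed above, here is a minimal Python sketch of an r-round Feistel network and an XOR-based Lai-Massey scheme (the variant used in FOX-style designs); the round functions and the orthomorphism choice are placeholders for illustration, not anything taken from the paper, IDEA, or FOX.

    # Sketch of the Feistel and (XOR-based) Lai-Massey constructions.
    # Round functions F_i are throwaway placeholders built from SHA-256.
    import hashlib

    W = 8  # bytes per half

    def F(i, x):                       # placeholder round function keyed by round index i
        return hashlib.sha256(bytes([i]) + x).digest()[:W]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def feistel(L, R, rounds):
        """r-round Feistel network: (L, R) -> (R, L xor F(R))."""
        for i in range(rounds):
            L, R = R, xor(L, F(i, R))
        return L, R

    def sigma(x):
        """Orthomorphism on one half: (a, b) -> (b, a xor b)."""
        a, b = x[:W // 2], x[W // 2:]
        return b + xor(a, b)

    def lai_massey(L, R, rounds):
        """r-round Lai-Massey scheme: t = F(L xor R); (L, R) -> (sigma(L xor t), R xor t)."""
        for i in range(rounds):
            t = F(i, xor(L, R))
            L, R = sigma(xor(L, t)), xor(R, t)
        return L, R

    L, R = b'\x00' * W, b'\x01' * W
    print(feistel(L, R, 6))            # the 6-round Feistel case attacked above
    print(lai_massey(L, R, 5))         # the 5-round Lai-Massey case attacked above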