All papers in 2013 (Page 4 of 881 results)

Last updated:  2013-09-14
The Special Number Field Sieve in $\F_{p^{n}}$, Application to Pairing-Friendly Constructions
Antoine Joux, Cécile Pierrot
In this paper, we study the discrete logarithm problem in finite fields related to pairing-based curves. We start with a precise analysis of the state-of-the-art algorithms for computing discrete logarithms that are suitable for finite fields related to pairing-friendly constructions. To improve upon these algorithms, we extend the Special Number Field Sieve to compute discrete logarithms in $\F_{p^{n}}$, where $p$ has an adequate sparse representation. Our improved algorithm works for the whole range of applicability of the Number Field Sieve.
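For concreteness, the "adequate sparse representation" refers to $p$ being the evaluation of a low-degree polynomial with small coefficients, which is typical of pairing-friendly constructions. For example, Barreto-Naehrig curves parameterize the field characteristic as

```latex
p(u) \;=\; 36u^{4} + 36u^{3} + 24u^{2} + 6u + 1 ,
```

so $p$ is given by a sparse degree-4 polynomial evaluated at a comparatively small integer $u$, which is exactly the special form the Special Number Field Sieve exploits.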
Last updated:  2013-09-14
PriWhisper: Enabling Keyless Secure Acoustic Communication for Smartphones
Bingsheng Zhang, Qin Zhan, Junfei Wang, Kui Ren, Cong Wang, Di Ma
Short-range wireless communication technologies have been used in many security-sensitive smartphone applications and services, such as contactless micro payment and device pairing. Typically, the data confidentiality of existing short-range communication systems relies on a so-called "key-exchange then encryption" mechanism. Namely, both parties need to spend extra communication to establish a common key before transmitting their actual messages, which is inefficient, especially for short communication sessions. In this work, we present PriWhisper -- a keyless secure acoustic short-range communication system for smartphones. It is designed to provide a purely software-based solution to secure smartphone short-range communication without the key agreement phase. PriWhisper adopts the emerging friendly jamming technique from radio communication for data confidentiality. The system prototype is implemented and evaluated on several Android smartphone platforms for efficiency and usability. We theoretically and experimentally analyze the security of our proposed acoustic communication system against various passive and active adversaries. In particular, we also study the (in)separability of the data signal and jamming signal against Blind Signal Segmentation (BSS) attacks such as Independent Component Analysis (ICA). The result shows that PriWhisper provides sufficient security guarantees for commercial smartphone applications and yet strong compatibility with most legacy smartphone platforms.
Last updated:  2013-09-14
Random Projections, Graph Sparsification, and Differential Privacy
Jalaj Upadhyay
This paper initiates the study of preserving {\em differential privacy} ({\sf DP}) when the dataset is sparse. We study the problem of constructing an efficient sanitizer that preserves {\sf DP} and guarantees high utility for answering cut-queries on graphs. The main motivation for studying sparse graphs arises from the empirical evidence that social networks are sparse graphs. We also motivate and advocate the necessity of including the efficiency of sanitizers, in addition to the utility guarantee, if one wishes to have a practical deployment of privacy-preserving sanitizers. We show that the technique of Blocki et al. (FOCS 2012) ({\sf BBDS}) can be adapted to preserve {\sf DP} for answering cut-queries on sparse graphs, with an asymptotically more efficient sanitizer than {\sf BBDS}. We use this as the base technique to construct an efficient sanitizer for arbitrary graphs. In particular, we use a preconditioning step that preserves the spectral properties (and therefore the size of any cut), and then apply our basic sanitizer. We first prove that our sanitizer preserves {\sf DP} for graphs with high conductance. We then carefully compose our basic technique with the modified sanitizer to prove the result for arbitrary graphs. In a certain sense, our approach is complementary to the randomized sanitization for answering cut-queries (Gupta, Roth, and Ullman, TCC 2012): we use graph sparsification, while randomized sanitization uses graph densification. Our sanitizer almost achieves the best of both worlds with the same privacy guarantee, i.e., it is almost as efficient as the most efficient sanitizer, and its utility guarantee is almost as strong as that of the best sanitization algorithm. We also make some progress towards answering a few open problems posed by {\sf BBDS}.
We make a combinatorial observation that allows us to argue that the sanitized graph can also answer $(S,T)$-cut queries with the same asymptotic efficiency, utility, and {\sf DP} guarantee as our sanitization algorithm for $(S, \bar{S})$-cuts. Moreover, we achieve a better utility guarantee than Gupta, Roth, and Ullman (TCC 2012). As a further optimization, we show that the fast Johnson-Lindenstrauss transform of Ailon and Chazelle~\cite{AC09} also preserves {\sf DP}.
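To illustrate the norm-preservation property at the heart of Johnson-Lindenstrauss projections (here a plain dense random $\pm 1$ projection, not the Hadamard-based fast transform of Ailon and Chazelle), a minimal sketch with hypothetical toy dimensions:

```python
import random
random.seed(2013)

# Hypothetical toy parameters: ambient dimension d, target dimension t.
d, t = 1000, 400
x = [random.gauss(0.0, 1.0) for _ in range(d)]

# Dense random +/-1 projection matrix, scaled by 1/sqrt(t) so that
# the projected vector's Euclidean norm is close to the original's.
R = [[random.choice((-1.0, 1.0)) for _ in range(d)] for _ in range(t)]
y = [sum(r * xi for r, xi in zip(row, x)) / t ** 0.5 for row in R]

norm = lambda v: sum(c * c for c in v) ** 0.5
print(round(norm(y) / norm(x), 2))  # close to 1 with high probability
```

The fast transform replaces the dense matrix $R$ with a sparse matrix composed with a randomized Hadamard transform, reducing the projection cost; the norm-preservation guarantee is the same.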
Last updated:  2013-09-14
On Measurable Side-Channel Leaks inside ASIC Design Primitives
Takeshi Sugawara, Daisuke Suzuki, Minoru Saeki, Mitsuru Shiozaki, Takeshi Fujino
Leaks inside semi-custom ASIC (Application-Specific Integrated Circuit) design primitives are rigorously investigated. The study is conducted by measuring a dedicated TEG (Test Element Group) chip with a small magnetic-field probe on the chip surface. The measurement targets are standard cells and a memory macro cell. We focus on leaks inside the primitives because many conventional countermeasures place their measurability boundaries at these primitives. Firstly, it is shown that the current-path leak, a leak based on the input-dependent active current path within a standard cell, is measurable. Major gate-level countermeasures (RSL, MDPL, and WDDL) become vulnerable once the current-path leak is considered. Secondly, it is shown that the internal-gate leak, a leak based on a non-linear sub-circuit within an XOR cell, is measurable. It can be exploited to bias the distribution of the random mask. Thirdly, it is shown that the geometric leak, a leak based on the geometric layout of the memory matrix structure, is measurable. It is correlated with the integer representation of the memory address. We also show that a ROM-based countermeasure (dual-rail RSL memory) becomes vulnerable in the presence of the geometric leak. A general transistor-level design method to counteract the current-path and internal-gate leaks is also presented.
Last updated:  2013-09-14
A Method For Generation Of High-Nonlinear S-Boxes Based On Gradient Descent
Oleksandr Kazymyrov, Valentyna Kazymyrova, Roman Oliynykov
Criteria for the selection of substitutions (S-boxes) for symmetric cryptographic primitives, based on the analysis of the properties of vectorial Boolean functions, are given. We propose an improved gradient descent method that increases the performance of generating nonlinear vectorial Boolean functions with optimal cryptographic properties. For the most common case of 8-bit input and output, substitutions generated by the proposed method have nonlinearity 104, 8-uniformity, and algebraic immunity 3.
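The two properties quoted above (differential uniformity and nonlinearity) have standard brute-force definitions. A minimal sketch of how they are computed, using the small 4-bit PRESENT S-box purely as an illustration (the paper targets 8-bit substitutions, where the same code applies with larger tables):

```python
def differential_uniformity(sbox):
    """Max number of solutions x to sbox(x) ^ sbox(x ^ a) == b over all a != 0, b."""
    n = len(sbox)
    best = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        best = max(best, max(counts))
    return best

def nonlinearity(sbox):
    """Minimum Hamming distance of all component functions to affine functions."""
    n = len(sbox)
    best = 0
    for b in range(1, n):          # non-zero output mask
        for a in range(n):         # input mask
            # Walsh coefficient: correlation between <b, sbox(x)> and <a, x>.
            corr = sum((-1) ** (bin(a & x).count("1") ^ bin(b & sbox[x]).count("1"))
                       for x in range(n))
            best = max(best, abs(corr))
    return n // 2 - best // 2

PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(differential_uniformity(PRESENT))  # 4  (the S-box is 4-uniform)
print(nonlinearity(PRESENT))             # 4
```

For 8-bit S-boxes such as those generated in the paper, the same routines report the quoted nonlinearity 104 and 8-uniformity.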
Last updated:  2013-09-14
Secure Two-Party Computation with Reusable Bit-Commitments, via a Cut-and-Choose with Forge-and-Lose Technique
Luís T. A. N. Brandão
A Secure Two Party Computation (S2PC) protocol allows two parties to compute over their combined private inputs, as if intermediated by a trusted third party. In the active model, security is maintained even if one party is malicious, deviating from the protocol specification. For example, an honest party retains privacy of its input and is ensured a correct output. This can be achieved with a cut-and-choose of garbled circuits (C&C-GCs), where some GCs are verified for correctness and the remaining ones are evaluated to determine the circuit output. This paper presents a new C&C-GCs-based S2PC protocol, with significant advantages in efficiency and applicability. First, in contrast with prior protocols that require a majority of evaluated GCs to be correct, the new protocol only requires that at least one evaluated GC is correct. In practice, this reduces the total number of GCs to approximately one third, for the same statistical security goal. This is accomplished by augmenting the C&C with a new forge-and-lose technique based on bit commitments with trapdoor. Second, the output of the new protocol includes reusable XOR-homomorphic bit commitments of all circuit input and output bits, thereby enabling efficient linkage of several S2PCs in a reactive manner. The protocol has additional interesting characteristics (which may allow new comparison tradeoffs). The number of exponentiations is only linear in the number of input and output wires and a statistical parameter -- this is an improvement over protocols whose number of exponentiations is proportional to the number of GCs multiplied by the number of input and output wires. It uses unconditionally hiding bit commitments with trapdoor as the basis of oblivious transfers, with the circuit evaluator choosing a single value and the circuit constructor receiving two (a sort of 2-out-of-1 oblivious transfer, instead of the typical 1-out-of-2).
The verification of consistency of circuit input and output keys across different GCs is embedded in the C&C structure.
Last updated:  2014-06-12
Extended Criterion for Absence of Fixed Points
Oleksandr Kazymyrov, Valentyna Kazymyrova
One of the criteria for substitutions used in block ciphers is the absence of fixed points. In this paper we show that this criterion must be extended to take the key-mixing function into consideration. In practice, we give a description of how fixed points can be reached in AES. Additionally, it is shown that modular addition has advantages over the XOR operation.
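A minimal illustration of the paper's point, using the small 4-bit PRESENT S-box instead of the AES one (an assumption made here purely to keep the table short): an S-box may have no fixed points on its own, yet the keyed map x -> S(x XOR k) has fixed points for most keys k.

```python
# 4-bit PRESENT S-box (illustrative stand-in for the 8-bit AES S-box).
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

# The S-box itself has no fixed points: S(x) != x for all x.
assert all(PRESENT[x] != x for x in range(16))

# But once a key is XORed in before the S-box (as in AddRoundKey
# followed by SubBytes), fixed points x = S(x ^ k) appear for most k:
# equivalently, k is in {y ^ S(y)} for some y.
keys_with_fp = [k for k in range(16)
                if any(PRESENT[x ^ k] == x for x in range(16))]
print(len(keys_with_fp))  # 10 of the 16 keys admit a fixed point
```

This is why the criterion must consider the key-mixing function: the zero key is the only one guaranteed to inherit the fixed-point-freeness of the S-box itself.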
Last updated:  2013-09-13
Equivalence between MAC and PRF for Blockcipher based Constructions
Nilanjan Datta, Mridul Nandi
In FSE 2010, Nandi proved a sufficient condition for the pseudorandom function (PRF) security of affine domain extensions (ADE), a wide class of blockcipher-based domain extensions. This sufficient condition is satisfied by all known blockcipher-based ADE constructions; however, it is not a characterization of PRF. In this paper we completely characterize the ADE and show that {\em message authentication code (MAC) and weakly collision resistant (WCR) security are indeed equivalent to PRF security}. Note that a PRF is trivially a MAC and WCR; however, the converse need not be true in general. So our result suggests that, to construct a pseudorandom function ADE, it suffices to ensure resistance against weak-collision attacks or forging attacks. Unlike the FSE 2010 paper, here we consider the {\em forced collisions of inputs of the underlying blockciphers by incorporating the final outputs of a domain extension queried by an adaptive adversary}. This is the main reason why we are able to obtain a characterization of PRF. Our approach is more general and hence might be of independent theoretical interest.
Last updated:  2014-02-14
On the Minimum Number of Multiplications Necessary for Universal Hash Constructions
Mridul Nandi
Let $d \geq 1$ be an integer and $R_1$ be a finite ring whose elements are called {\bf blocks}. A $d$-block universal hash over $R_1$ is a vector of $d$ multivariate polynomials in the message and key blocks such that the maximum {\em differential probability} of the hash function is ``low''. Two such single-block hashes are the pseudo dot-product (\tx{PDP}) hash and the Bernstein-Rabin-Winograd (\tx{BRW}) hash, which require $\frac{n}{2}$ multiplications for $n$ message blocks. The Toeplitz construction and $d$ independent invocations of \tx{PDP} are $d$-block hash outputs which require $d \times \frac{n}{2}$ multiplications. However, here we show that {\em at least $(d-1) + \frac{n}{2}$ multiplications are necessary} to compute a universal hash over $n$ message blocks. We construct a {\em $d$-block universal hash, called \tx{EHC}, which requires the matching $(d -1) + \frac{n}{2}$ multiplications for $d \leq 4$}. Hence it is optimal and our lower bound is tight when $d \leq 4$. It has parallelizability and key size similar to the Toeplitz construction, and so it can be used as a lightweight universal hash.
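To make the multiplication counts concrete, the pseudo dot-product trick (for even $n$, with message blocks $m_i$ and key blocks $k_i$ in the ring) is the hash

```latex
\mathrm{PDP}_{k}(m) \;=\; \sum_{i=1}^{n/2}
  \left(m_{2i-1} + k_{2i-1}\right)\left(m_{2i} + k_{2i}\right),
```

which needs only $\frac{n}{2}$ ring multiplications, compared with the $n$ multiplications of a plain dot product $\sum_{i} m_i k_i$. The paper's lower bound says that producing $d$ such output blocks cannot beat $(d-1) + \frac{n}{2}$ multiplications in total.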
Last updated:  2013-09-10
Improved Meet-in-the-Middle Attacks on AES-192 and PRINCE
Leibo Li, Keting Jia, Xiaoyun Wang
This paper studies key-recovery attacks on AES-192 and PRINCE in the single-key model using the meet-in-the-middle methodology. A new technique named the key-dependent sieve is proposed to further reduce the memory complexity of Demirci et al.'s attack from EUROCRYPT 2013, which helps us achieve a 9-round attack on AES-192 by using a 5-round distinguisher; the data, time and memory complexities are 2^{121} chosen plaintexts, 2^{185} encryptions and 2^{185} 128-bit words of memory, respectively. The new technique is also applied to the block cipher PRINCE. Improving on the 6-round results of previous cryptanalysis, we first present attacks on 8 rounds (out of 12) of PRINCEcore and PRINCE with about 2^{53} and 2^{60} encryptions, respectively. Furthermore, we construct an interesting 7-round distinguisher and extend the attack to 9-round PRINCE; the attack needs about 2^{57} chosen plaintexts, 2^{64} encryptions and 2^{57.3} 64-bit words of memory.
Last updated:  2013-09-09
Quad-RC4: Merging Four RC4 States towards a 32-bit Stream Cipher
Goutam Paul, Subhamoy Maitra, Anupam Chattopadhyay
RC4 has remained the most popular software stream cipher for the last two decades. In parallel to cryptanalytic attempts, researchers have come up with many variants of RC4, some aiming at more security, some at more throughput. We observe that the design of RC4 has been changed substantially in most of the variants. Since the RC4 structure is quite secure if the cipher is used with proper precautions, an arbitrary change in the design may lead to potential vulnerabilities, such as the distinguishing attack (Tsunoo et al., 2007) on the word-oriented variant GGHN (Gong et al., 2005). Some variants keep the RC4 structure (Maitra et al., 2008), but are byte-oriented and hence an overkill for modern wide-word processors. In this paper, we try to combine the best of both worlds. We keep the basic RC4 structure, which guarantees reasonable security (if properly used), and we tacitly combine 4 RC4 states to design a high-throughput stream cipher called {\em Quad-RC4} that produces a $32$-bit output at every round. The storage requirement for the internal state is only $1024$ bits. In terms of speed, this cipher performs much faster than normal RC4 and is comparable with HC-128, the fastest software stream cipher amongst the eSTREAM finalists. We also discuss the issue of generalizing the structure of Quad-RC4 to higher word-width variants.
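For readers unfamiliar with the base design, here is a sketch of the standard RC4 key-scheduling (KSA) and output generation (PRGA) that Quad-RC4 and the other variants build on; the exact state-merging of Quad-RC4 is described in the paper itself and is not reproduced here.

```python
def rc4_ksa(key):
    """Key-scheduling algorithm: derive the 256-byte state from the key."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_prga(S, n):
    """Pseudo-random generation: produce n keystream bytes from state S."""
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Well-known test vector: key "Key" yields keystream EB 9F 77 81 ...
keystream = rc4_prga(rc4_ksa(b"Key"), 4)
print([f"{b:02X}" for b in keystream])  # ['EB', '9F', '77', '81']
```

Quad-RC4's gain comes from running four such 8-bit states and combining them into one 32-bit output word per round, so the state is $4 \times 256$ bytes of permutation data packed into $1024$ bits of effective entropy per the paper's accounting.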
Last updated:  2013-09-09
Efficient General-Adversary Multi-Party Computation
Martin Hirt, Daniel Tschudi
Secure multi-party computation (MPC) allows a set P of n players to evaluate a function f in the presence of an adversary who corrupts a subset of the players. In this paper we consider active, general adversaries, characterized by a so-called adversary structure Z which enumerates all possible subsets of corrupted players. In particular, for small sets of players, general adversaries capture real-world requirements better than classical threshold adversaries. Protocols for general adversaries are ``efficient'' in the sense that they require |Z|^O(1) bits of communication. However, as |Z| is usually very large (even exponential in n), the exact exponent is very relevant. In the setting with perfect security, the most efficient protocol known to date communicates |Z|^3 bits; we present a protocol for this setting which communicates |Z|^2 bits. In the setting with statistical security, |Z|^3 bits of communication is needed in general (whereas for a very restricted subclass of adversary structures, a protocol with communication |Z|^2 bits is known); we present a protocol for this setting (without limitations) which communicates |Z|^1 bits.
Last updated:  2013-09-11
New Efficient Identity-Based Encryption From Factorization
Jun Shao, Licheng Wang, Xiaolei Dong, Zhenfu Cao
Identity Based Encryption (IBE) systems are often constructed using pairings or lattices. Three exceptions are due to Cocks in 2001, Boneh, Gentry and Hamburg in 2007, and Paterson and Srinivasan in 2009. The main goal of this paper is to propose new IBE schemes, which may point a way to finding IBEs without pairings or lattices. Essentially, the security of our IBE schemes is rooted in the intractability of integer factorization. We believe that our constructions have some essential differences from all existing IBEs.
Last updated:  2013-09-24
More Efficient Cryptosystems From $k^{th}$-Power Residues
Zhenfu Cao, Xiaolei Dong, Licheng Wang, Jun Shao
At Eurocrypt 2013, Joye and Libert proposed a method for constructing public key cryptosystems (PKCs) and lossy trapdoor functions (LTDFs) from $(2^\alpha)^{th}$-power residue symbols. Their work can be viewed as non-trivial extensions of the well-known PKC scheme due to Goldwasser and Micali, and the LTDF scheme due to Freeman et al., respectively. In this paper, we demonstrate that this kind of work can be extended \emph{more generally}: all related constructions can work for any $k^{th}$-power residues, provided $k$ contains only small prime factors, instead of $(2^\alpha)^{th}$-power residues only. The resulting PKCs and LTDFs are more efficient than those obtained from the Joye-Libert method, in terms of decryption speed, for the same message length.
Last updated:  2013-10-09
Cryptanalysis of the Speck Family of Block Ciphers
Farzaneh Abed, Eik List, Stefan Lucks, Jakob Wenzel
Simon and Speck are two families of ultra-lightweight block ciphers which were announced by the U.S. National Security Agency in June 2013. This paper presents differential and rectangle attacks for almost all members of the Speck family of ciphers, where we target up to 11/22, 12/23, 15/27, 15/29, and 18/34 rounds of the 32-, 48-, 64-, 96-, and 128-bit versions, respectively.
Last updated:  2013-09-09
KDM Security in the Hybrid Framework
Gareth T. Davies, Martijn Stam
We study the natural question of how well suited the hybrid encryption paradigm is in the context of key-dependent message (KDM) attacks. We prove that if a key derivation function (KDF) is used in between the public (KEM) and symmetric (DEM) part of the hybrid scheme and this KDF is modelled as a random oracle, then one-wayness of the KEM and indistinguishability of the DEM together suffice for KDM security of the resulting hybrid scheme. We consider the most general scenario, namely CCA attacks and KDM functions that can call the random oracle. Although the result itself is not entirely unexpected -- it does solve an open problem from Black, Rogaway, and Shrimpton (SAC 2002) -- proving it is considerably less straightforward; we develop some proof techniques that might be applicable in a wider context.
Last updated:  2013-12-13
Attacking PUF-Based Pattern Matching Key Generators via Helper Data Manipulation
Jeroen Delvaux, Ingrid Verbauwhede
Physically Unclonable Functions (PUFs) provide a unique signature for integrated circuits (ICs), similar to a fingerprint for humans. They are primarily used to generate secret keys, thereby exploiting the unique manufacturing variations of an IC. Unfortunately, PUF output bits are not perfectly reproducible and not uniformly distributed. To obtain a high-quality key, one needs to implement additional post-processing logic on the same IC. Fuzzy extractors are the well-established standard solution. Pattern Matching Key Generators (PMKGs) have been proposed as an alternative. In this work, we demonstrate the latter construction to be vulnerable to manipulation of its public helper data. Full key recovery is possible, although it depends on system design choices. We demonstrate our attacks using a 4-XOR arbiter PUF, manufactured in 65nm CMOS technology. We also propose a simple but effective countermeasure.
Last updated:  2013-09-05
Non-Malleable Coding Against Bit-wise and Split-State Tampering
Mahdi Cheraghchi, Venkatesan Guruswami
Non-malleable coding, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), aims for protecting the integrity of information against tampering attacks in situations where error-detection is impossible. Intuitively, information encoded by a non-malleable code either decodes to the original message or, in the presence of any tampering, to an unrelated message. Non-malleable coding is possible against any class of adversaries of bounded size. In particular, Dziembowski et al. show that such codes exist and may achieve positive rates for any class of tampering functions of size at most $2^{2^{\alpha n}}$, for any constant $\alpha \in [0, 1)$. However, this result is existential and has thus attracted a great deal of subsequent research on explicit constructions of non-malleable codes against natural classes of adversaries. In this work, we consider constructions of coding schemes against two well-studied classes of tampering functions; namely, bit-wise tampering functions (where the adversary tampers each bit of the encoding independently) and the much more general class of split-state adversaries (where two independent adversaries arbitrarily tamper each half of the encoded sequence). We obtain the following results for these models. 1. For bit-tampering adversaries, we obtain explicit and efficiently encodable and decodable non-malleable codes of length $n$ achieving rate $1-o(1)$ and error (also known as "exact security") $\exp(-\tilde{\Omega}(n^{1/7}))$. Alternatively, it is possible to improve the error to $\exp(-\tilde{\Omega}(n))$ at the cost of making the construction Monte Carlo with success probability $1-\exp(-\Omega(n))$ (while still allowing a compact description of the code). Previously, the best known construction of bit-tampering coding schemes was due to Dziembowski et al. (ICS 2010), which is a Monte Carlo construction achieving rate close to 0.1887. 2. 
We initiate the study of seedless non-malleable extractors as a natural variation of the notion of non-malleable extractors introduced by Dodis and Wichs (STOC 2009). We show that construction of non-malleable codes for the split-state model reduces to construction of non-malleable two-source extractors. We prove a general result on existence of seedless non-malleable extractors, which implies that codes obtained from our reduction can achieve rates arbitrarily close to 1/5 and exponentially small error. In a separate recent work, the authors show that the optimal rate in this model is 1/2. Currently, the best known explicit construction of split-state coding schemes is due to Aggarwal, Dodis and Lovett (ECCC TR13-081) which only achieves vanishing (polynomially small) rate.
Last updated:  2013-09-05
Capacity of Non-Malleable Codes
Mahdi Cheraghchi, Venkatesan Guruswami
Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), encode messages $s$ in a manner so that tampering the codeword causes the decoder to either output $s$ or a message that is independent of $s$. While this is an impossible goal to achieve against unrestricted tampering functions, rather surprisingly non-malleable coding becomes possible against every fixed family $F$ of tampering functions that is not too large (for instance, when $|F| \le \exp(2^{\alpha n})$ for some $\alpha \in [0, 1)$ where $n$ is the number of bits in a codeword). In this work, we study the "capacity of non-malleable coding", and establish optimal bounds on the achievable rate as a function of the family size, answering an open problem from Dziembowski et al. (ICS 2010). Specifically, 1. We prove that for every family $F$ with $|F| \le \exp(2^{\alpha n})$, there exist non-malleable codes against $F$ with rate arbitrarily close to $1-\alpha$ (this is achieved w.h.p. by a randomized construction). 2. We show the existence of families of size $\exp(n^{O(1)} 2^{\alpha n})$ against which there is no non-malleable code of rate $1-\alpha$ (in fact this is the case w.h.p. for a random family of this size). 3. We also show that $1-\alpha$ is the best achievable rate for the family of functions which are only allowed to tamper the first $\alpha n$ bits of the codeword, which is of special interest. As a corollary, this implies that the capacity of non-malleable coding in the split-state model (where the tampering function acts independently but arbitrarily on the two halves of the codeword) equals $1/2$. We also give an efficient Monte Carlo construction of codes of rate close to 1 with polynomial time encoding and decoding that is non-malleable against any family $F$ of size $\exp(n^c)$ for any fixed $c > 0$; in particular, tampering functions with, say, cubic-size circuits.
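As a sanity check on the split-state corollary, the family of split-state tampering functions on $n$-bit codewords can be counted directly (a calculation not spelled out in the abstract). Each adversary is a pair $(f,g)$ of arbitrary functions on $n/2$-bit halves, so

```latex
|F| \;=\;
\underbrace{\bigl(2^{n/2}\bigr)^{2^{n/2}}}_{\text{choices of } f}
\cdot
\underbrace{\bigl(2^{n/2}\bigr)^{2^{n/2}}}_{\text{choices of } g}
\;=\; 2^{\,n \cdot 2^{n/2}}
\;=\; \exp\!\bigl(2^{\,n/2 + \log n}\bigr),
```

which matches $|F| \le \exp(2^{\alpha n})$ with $\alpha = 1/2$ up to lower-order terms, so the achievable rate is $1 - \alpha = 1/2$, as the corollary states.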
Last updated:  2013-09-30
Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding
Zvika Brakerski, Guy N. Rothblum
We present a new general-purpose obfuscator for all polynomial-size circuits. The obfuscator uses graded encoding schemes, a generalization of multilinear maps. We prove that the obfuscator exposes no more information than the program's black-box functionality, and achieves {\em virtual black-box security}, in the generic graded encoding scheme model. This proof is under the Bounded Speedup Hypothesis (BSH, a plausible worst-case complexity-theoretic assumption related to the Exponential Time Hypothesis), in addition to standard cryptographic assumptions. We also show that the weaker notion of {\em indistinguishability obfuscation} can be achieved without BSH. Very recently, Garg et al.~(FOCS 2013) used graded encoding schemes to present a candidate obfuscator for indistinguishability obfuscation. They posed the problem of constructing a provably secure indistinguishability obfuscator in the generic graded encoding scheme model. Our obfuscator resolves this problem. Indeed, under BSH it achieves the stronger notion of virtual black-box security, which is our focus in this work. Our construction is different from that of Garg et al., but is inspired by it, in particular by their use of permutation branching programs. We obtain our obfuscator by developing techniques used to obfuscate $d$-CNF formulas (ePrint 2013), and applying them to permutation branching programs. This yields an obfuscator for the complexity class $NC^1$. We then use homomorphic encryption to obtain an obfuscator for any polynomial-size circuit.
Last updated:  2013-09-05
Self-pairings on supersingular elliptic curves with embedding degree $three$
Binglong Chen, Chang-An Zhao
Self-pairings are a special subclass of pairings and have interesting applications in cryptographic schemes and protocols. In this paper, we explore the computation of self-pairings on supersingular elliptic curves with embedding degree $k = 3$. We construct a novel self-pairing which has the same Miller loop as the Eta/Ate pairing. However, the proposed self-pairing has a simple final exponentiation. Our results suggest that the proposed self-pairings are more efficient than the other ones on the corresponding curves. We compare the efficiency of self-pairing computations on different curves over fields of large characteristic and estimate that the proposed self-pairings on curves with $k=3$ require $44\%$ fewer field multiplications than the fastest ones on curves with $k=2$ at the AES 80-bit security level.
Last updated:  2013-09-04
Preimage attacks on the round-reduced Keccak with the aid of differential cryptanalysis
Pawel Morawiecki, Josef Pieprzyk, Marian Srebrny, Michal Straus
In this paper we use differential cryptanalysis to attack the winner of the SHA-3 competition, namely the Keccak hash function. Despite more than 6 years of intensive cryptanalysis, only two preimage attacks reaching 3 (or slightly more) rounds have been known. Our 3-round preimage attack improves the complexity of those two existing attacks and is obtained with a different technique. We also show a partial preimage attack on the 4-round Keccak, exploiting two properties of the linear step of the Keccak-f permutation.
Last updated:  2014-04-02
Sometimes-Recurse Shuffle: Almost-Random Permutations in Logarithmic Expected Time
Ben Morris, Phillip Rogaway
We describe a security-preserving construction of a random permutation of domain size~$N$ from a random function, the construction tolerating adversaries asking all~$N$ plaintexts, yet employing just $\Theta(\lg N)$ calls, on average, to the one-bit-output random function. The approach is based on card shuffling. The basic idea is to use the \textit{sometimes-recurse} transformation: lightly shuffle the deck (with some other shuffle), cut the deck, and then recursively shuffle one of the two halves. Our work builds on a recent paper of Ristenpart and Yilek.
Last updated:  2015-09-08
A Definitional Framework for Functional Encryption
Christian Matt, Ueli Maurer
Functional encryption (FE) is a powerful generalization of various types of encryption. We investigate how FE can be used by a trusted authority to enforce access-control policies to data stored in an untrusted repository. Intuitively, if (functionally) encrypted data items are put in a publicly-readable repository, the effect of the encryption should be that every user has access to exactly (and only) those functions of the data items for which he has previously received the corresponding decryption key. That is, in an ideal-world view, the key authority can flexibly manage read access of users to the repository. This appears to be exactly what FE is supposed to achieve, and most natural applications of FE can be understood as specific uses of such a repository with access control. However, quite surprisingly, it is unclear whether known security definitions actually achieve this goal and hence whether known FE schemes can be used in such an application. In fact, there seems to be agreement in the cryptographic community that identifying the right security definitions for FE remains open. To resolve this problem, we treat FE in the constructive cryptography framework and propose a new conventional security definition, called composable functional encryption security (CFE-security), which exactly matches the described ideal-world interpretation. This definition (and hence the described application) is shown to be unachievable in the standard model but achievable in the random oracle model. Moreover, somewhat weaker definitions, which are achievable in the standard model, can be obtained by certain operational restrictions of the ideal-world repository, making explicit how schemes satisfying such a definition can (and cannot) meaningfully be used. Finally, adequate security definitions for generalizations of FE (such as multi-input, randomized functions, malicious ciphertext generation, etc.) 
can be obtained by straightforward operational extensions of the repository and by extracting the corresponding security definitions. This leads towards a unified treatment of the security of FE.
Last updated:  2013-09-04
Practical approaches to varying network size in combinatorial key predistribution schemes
Kevin Henry, Maura B. Paterson, Douglas R. Stinson
Combinatorial key predistribution schemes can provide a practical solution to the problem of distributing symmetric keys to the nodes of a wireless sensor network. Such schemes often inherently suit networks in which the number of nodes belongs to some restricted set of values (such as powers of primes). In a recent paper, Bose, Dey and Mukerjee have suggested that this might pose a problem, since discarding keyrings to suit a smaller network might adversely affect the properties of the scheme. In this paper we explore this issue, with specific reference to classes of key predistribution schemes based on transversal designs. We demonstrate through experiments that, for a wide range of parameters, randomly removing keyrings in fact has a negligible and largely predictable effect on the parameters of the scheme. In order to facilitate these computations, we provide a new, efficient, generally applicable approach to computing important properties of combinatorial key predistribution schemes. We also show that the structure of a resolvable transversal design can be exploited to give a deterministic method of removing keyrings to adjust the network size, in such a way that the properties of the resulting scheme are easy to analyse. We show that these schemes have the same asymptotic properties as the transversal design schemes on which they are based, and that for most parameter choices their behaviour is very similar.
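A hedged sketch of the kind of transversal-design scheme the paper studies, using the standard "linear" $TD(k,p)$ construction (node $(a,b)$ holds the keys on the line $y = ax + b$ over $\mathbb{Z}_p$ restricted to $x < k$); the toy parameters below are purely illustrative, and the paper's exact schemes and parameter ranges are given in the text:

```python
from itertools import combinations

p, k = 7, 4                      # toy parameters: prime p, keyring size k <= p
nodes = [(a, b) for a in range(p) for b in range(p)]
# Node (a, b) holds key IDs (x, a*x + b mod p) for x = 0 .. k-1.
keyring = {(a, b): {(x, (a * x + b) % p) for x in range(k)}
           for (a, b) in nodes}

shared = [len(keyring[u] & keyring[v]) for u, v in combinations(nodes, 2)]
print(max(shared))   # 1: any two distinct nodes share at most one key
# Fraction of node pairs that share a key (the connectivity probability):
print(sum(1 for s in shared if s == 1) / len(shared))  # 0.5 for p=7, k=4
```

Removing keyrings to shrink the network amounts to dropping some of the $p^2$ lines; the experiments in the paper quantify how little this perturbs the intersection statistics computed above.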
Last updated:  2013-09-04
Black-Box Obfuscation for d-CNFs
Zvika Brakerski, Guy N. Rothblum
We show how to securely obfuscate a new class of functions: {\em conjunctions of $NC^0_d$ circuits}. These are functions of the form $C(x) = \bigwedge_{i=1}^m C_i(x)$, where each $C_i$ is a boolean $NC^0_d$ circuit, whose output bit is only a function of $d = O(1)$ bits of the input $x$. For example, $d$-CNFs, where each clause is a disjunction of at most $d$ variables, are in this class. Given such a function, we produce an obfuscated program that preserves the input-output functionality of the given function, but reveals nothing else. Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al.\ (CRYPTO 2001). Security is based on a new {\em worst-case} hardness assumption about exponential hardness of the NP-complete problem 3-SAT, the {\em Bounded Speedup Hypothesis}. One of the new techniques we introduce is a method for enforcing input consistency, which we call {\em randomizing sub-assignments}. We hope that this technique can find further application in constructing secure obfuscators. The family of functions we obfuscate is considerably richer than in previous works that consider black-box obfuscation. As one application, we show how to achieve {\em obfuscated functional point testing}: namely, to construct a circuit that checks whether $f(x)=y$, where $f$ is an arbitrary ``public'' polynomial-time computable function, but $y$ is a ``secret'' point that is hidden in the obfuscation.
Last updated:  2013-09-04
Algebraic Aspects of the Russian Hash Standard GOST R 34.11-2012
Oleksandr Kazymyrov, Valentyna Kazymyrova
The new GOST R 34.11-2012 standard was recently selected by the Russian government to replace the old one. The algorithm is based on the hash function Stribog, introduced in 2010. The high-level structure of the new hash function is similar to GOST R 34.11-94 with minor modifications; however, the compression function was changed significantly. This choice of compression algorithm was motivated by Rijndael, owing to its simplicity and understandable algebraic structure. In this paper we consider a number of algebraic aspects of GOST R 34.11. We show how one can express the cipher in AES-like form over the finite field $\F_{2^8}$, and consider some approaches that can be used for fast software implementation.
Last updated:  2014-01-23
Key Exchange with Unilateral Authentication: Composable Security Definition and Modular Protocol Design
Ueli Maurer, Björn Tackmann, Sandro Coretti
Key exchange with unilateral authentication (short: unilateral key exchange) is an important primitive in practical security protocols; a prime example is the widely deployed TLS protocol, which is usually run in this mode. Unilateral key-exchange protocols are employed in a client-server setting where only the server has a certified public key. The client is then authenticated by sending credentials via a connection that is secured with the key obtained from the protocol. Somewhat surprisingly, and despite its importance in practical scenarios, this type of key exchange has received relatively little attention in the cryptographic literature compared to the type with mutual authentication. In this work, we follow the constructive cryptography paradigm of Maurer and Renner (ICS 2011) to obtain a (composable) security definition for key-exchange protocols with unilateral authentication: We describe a "unilateral key" resource and require from a key-exchange protocol that it constructs this resource in a scenario where only the server is authenticated. One main advantage of this approach is that it comes with strong composition guarantees: Any higher-level protocol proven secure with respect to the unilateral key resource remains secure if the key is obtained using a secure unilateral key-exchange protocol. We then describe a simple protocol based on any CPA-secure KEM and prove that it constructs a unilateral key (previous protocols in this setting relied on a CCA-secure KEM). The protocol design and our security analysis are fully modular and allow us to replace a sub-protocol $\pi$ by a different sub-protocol $\pi'$ by proving security only of the sub-protocol $\pi'$; the composition theorem immediately guarantees that the security of the modified full protocol is maintained. In particular, one can replace the KEM by a sub-protocol based on Diffie-Hellman, obtaining a protocol similar to the A-DHKE protocol proposed by Shoup.
Moreover, our analysis is simpler because the actual key-exchange part of the protocol can be analyzed in a simple three-party setting; we show that the extension to the multi-party setting follows generically. Compared to the TLS handshake protocol, the "de facto" standard for unilateral key exchange on the Internet, our protocol is more efficient (only two messages) and is based on weaker assumptions.
Last updated:  2015-06-21
Formally Proved Security of Assembly Code Against Power Analysis: A Case Study on Balanced Logic
Pablo Rauzy, Sylvain Guilley, Zakaria Najm
In his keynote speech at CHES 2004, Kocher advocated that side-channel attacks were an illustration that formal cryptography was not as secure as it was believed because some assumptions (e.g., no auxiliary information is available during the computation) were not modeled. This failure is caused by formal methods' focus on models rather than implementations. In this paper we present formal methods and tools for designing protected code and proving its security against power analysis. These formal methods avoid the discrepancy between the model and the implementation by working on the latter rather than on a high-level model. Indeed, our methods allow us (a) to automatically insert a power balancing countermeasure directly at the assembly level, and to prove the correctness of the induced code transformation; and (b) to prove that the obtained code is balanced with regard to a reasonable leakage model, and we show how to characterize the hardware to use the resources which maximize the relevancy of the model. The tools implementing our methods are then demonstrated in a case study in which we generate a provably protected PRESENT implementation for an 8-bit AVR smartcard.
Last updated:  2014-05-26
Multi-Valued Byzantine Broadcast: the $t < n$ Case
Martin Hirt, Pavel Raykov
All known protocols implementing broadcast from synchronous point-to-point channels tolerating any $t < n$ Byzantine corruptions have communication complexity at least $\Omega(\ell n^2)$. We give cryptographically secure and information-theoretically secure protocols for $t < n$ that communicate $O(\ell n)$ bits in order to broadcast sufficiently long $\ell$ bit messages. This matches the optimal communication complexity bound for any protocol allowing to broadcast $\ell$ bit messages. While broadcast protocols with the optimal communication complexity exist in cases where $t < n/3$ (by Liang and Vaidya in PODC '11) or $t < n/2$ (by Fitzi and Hirt in PODC '06), this paper is the first to present such protocols for $t < n$.
Last updated:  2013-09-04
More Efficient Oblivious Transfer and Extensions for Faster Secure Computation
Gilad Asharov, Yehuda Lindell, Thomas Schneider, Michael Zohner
Protocols for secure computation enable parties to compute a joint function on their private inputs without revealing anything but the result. A foundation for secure computation is oblivious transfer (OT), which traditionally requires expensive public key cryptography. A more efficient way to perform many OTs is to extend a small number of base OTs using OT extensions based on symmetric cryptography. In this work we present optimizations and efficient implementations of OT and OT extensions in the semi-honest model. We propose a novel OT protocol with security in the standard model and improve OT extensions with respect to communication complexity, computation complexity, and scalability. We also provide specific optimizations of OT extensions that are tailored to the secure computation protocols of Yao and Goldreich-Micali-Wigderson and reduce the communication complexity even further. We experimentally verify the efficiency gains of our protocols and optimizations. By applying our implementation to current secure computation frameworks, we can securely compute a Levenshtein distance circuit with 1.29 billion AND gates at a rate of 1.2 million AND gates per second. Moreover, we demonstrate the importance of correctly implementing OT within secure computation protocols by presenting an attack on the FastGC framework.
Last updated:  2013-09-04
Puzzle Encryption Algorithm
Gregory Alvarez, Charles Berenguer
This document describes the symmetric encryption algorithm called Puzzle. It is free and open. The objective of this paper is to obtain an opinion about its security from the cryptology community. It is divided into two parts: a technical description of the algorithm and its cryptanalysis. The algorithm has some interesting properties: the block size is variable and unknown to an attacker; the size of the key has no limit and is unknown to an attacker; the key size does not affect the algorithm's speed (using a 256-bit key is the same as using a 1024-bit key); and the algorithm is much faster than the average cryptographic function. Experimental tests showed 600 MB/s (4 cycles/byte) on an Intel Core 2 Duo P8600 2.40GHz and 1.2 GB/s (2 cycles/byte) on an Intel i5-3210M 2.50GHz. Both CPUs had only two cores.
Last updated:  2013-09-05
More Efficient Cryptosystems From k-th Power Residues
Zhenfu Cao, Xiaolei Dong, Licheng Wang, Jun Shao
At Eurocrypt 2013, Joye and Libert proposed a method for constructing public key cryptosystems (PKCs) and lossy trapdoor functions (LTDFs) from $(2^\alpha)^{th}$-power residue symbols. Their work can be viewed as non-trivial extensions of the well-known PKC scheme due to Goldwasser and Micali, and the LTDF scheme due to Freeman et al., respectively. In this paper, we demonstrate that this kind of work can be extended \emph{more generally}: all related constructions can work for any $k^{th}$ residues as long as $k$ contains only small prime factors, instead of $(2^\alpha)^{th}$-power residues only. The resulting PKCs and LTDFs are more efficient than those obtained from the Joye-Libert method in terms of decryption speed for the same message length.
Last updated:  2013-11-05
Equations System coming from Weil descent and subexponential attack for algebraic curve cryptosystem
Koh-ichi Nagao
Faugère et al. showed that the decomposition problem for a point on an elliptic curve over the binary field $F_{2^n}$ reduces to solving a low-degree equation system over $F_2$ obtained from a Weil descent. Using this method, the discrete logarithm problem on an elliptic curve over $F_{2^n}$ reduces to linear constraints, i.e., to solving the equation system by linear algebra over monomials modulo the field equations, and its complexity is expected to be subexponential in the input size $n$. Unfortunately, using linear constraints alone, the complexity is exponential. Petit et al. showed that, under the first fall degree assumption, by which the complexity of solving a low-degree equation system via Gröbner basis computation is subexponential, the total complexity is heuristically subexponential. On the other hand, the author has shown that the decomposition problem for the Jacobian of a plane curve over $F_{p^n}$ also essentially reduces to solving a low-degree equation system over $F_p$ obtained from a Weil descent. In this paper, we revise the results of Petit et al. by giving a precise estimation of the first fall degree, and show that, under the first fall degree assumption, the discrete logarithm problem on an elliptic curve over a small-characteristic field $F_{p^n}$ is subexponential in the input size $n$, as is the discrete logarithm problem on the Jacobian of a small-genus curve over such a field.
Last updated:  2013-12-13
Decomposition formula of the Jacobian group of plane curve
Koh-ichi Nagao
We give an algorithm for decomposing a given element of the Jacobian group into a sum of elements of the decomposed factor, which consists of a certain subset of the points of the curve.
Last updated:  2013-09-04
Automatic Security Evaluation of Block Ciphers with S-bP Structures against Related-key Differential Attacks
Siwei Sun, Lei Hu, Ling Song, Yonghong Xie, Peng Wang
Counting the number of active S-boxes is a common way to evaluate the security of symmetric-key cryptographic schemes against differential attack. Based on Mixed Integer Linear Programming (MILP), Mouha et al. proposed a method to accomplish this task automatically for word-oriented symmetric-key ciphers with SPN structures. However, this method cannot be applied directly to block ciphers of SPN structure with bitwise permutation diffusion layers (S-bP structures), since it ignores the diffusion effect derived collaboratively by the nonlinear substitution layers and the bitwise permutation layers. Moreover, the MILP constraints presented in Mouha et al.'s method are not enough to describe the differential propagation behaviour of a linear diffusion layer constructed from a non-MDS code, or even an almost-MDS code. In this paper we extend Mouha et al.'s method to S-bP structures by introducing new representations for exclusive-or (XOR) differences that describe bit- and word-level differences simultaneously, and by taking the collaborative diffusion effect of S-boxes and bitwise permutations into account. Our method is applied to the block cipher PRESENT-80, an international standard for lightweight symmetric-key cryptography, to automatically evaluate its security against differential attacks. We obtain lower bounds on the numbers of active S-boxes in the single-key model for full 31-round PRESENT-80 and in the related-key model for round-reduced PRESENT-80 up to 12 rounds, and therefore automatically prove that full-round PRESENT-80 is secure against single-key differential attack, and that the cost of a related-key differential attack on full-round PRESENT-80 is close to that of an exhaustive search: the probability of the best related-key differential characteristic for full PRESENT-80 is upper bounded by $2^{-72}$.
Last updated:  2013-09-04
TRS-80 with a grain of salt
Jean-Marie Chauvet
This paper presents early results of a (very) experimental implementation of the elliptic curve and stream cipher calculations of the Networking and Cryptography library (NaCl), on the TRS-80 Model I. Needless to say, the demonstration that such a library, which has been optimized for many modern platforms including leading edge desktops, servers and, recently, modern microcontrollers, is even feasible on such early home microcomputers is, at best, to be considered a recreation rather than as a practical application of technology. In the process, however, lessons were learned in implementing trade-offs for basic cryptographic primitives and, more importantly maybe, in experimenting with some transformative aspects of retrocomputing.
Last updated:  2013-08-30
Private Over-threshold Aggregation Protocols over Distributed Databases
Myungsun Kim, Abedelaziz Mohaisen, Jung Hee Cheon, Yongdae Kim
In this paper, we revisit the private over-threshold data aggregation problem, and formally define the problem's security requirements as both data and user privacy goals. To achieve both goals, and to strike a balance between efficiency and functionality, we devise a novel cryptographic construction that comes in two schemes; a fully decentralized construction and its practical but semi-decentralized variant. Both schemes are provably secure in the semi-honest model. We analyze the computational and communication complexities of our construction, and show that it is much more efficient than the existing protocols in the literature. Finally, we show that our basic protocol is efficiently transformed into a stronger protocol secure in the presence of malicious adversaries, together with performance and security analysis.
Last updated:  2013-08-30
Warrant-Hiding Delegation-by-Certificate Proxy Signature Schemes
Christian Hanser, Daniel Slamanig
Proxy signatures allow an entity (the delegator) to delegate his signing capabilities to other entities (called proxies), who can then produce signatures on behalf of the delegator. Typically, a delegator may not want to give a proxy the power to sign any message on his behalf, but only messages from a well-defined message space. Therefore, the so-called delegation-by-warrant approach has been introduced. Here, a warrant is included in the delegator's signature (the so-called certificate) to describe the message space from which a proxy is allowed to choose messages to produce valid signatures for. Interestingly, in all previously known constructions of proxy signatures following this approach, the warrant is made explicit and, thus, is an input to the verification algorithm of a proxy signature. This means that a verifier learns the entire message space for which the proxy has been given the signing power. However, it may be desirable to hide the remaining messages in the allowed message space from a verifier. This scenario has never been investigated in the context of proxy signatures, but seems to be interesting for practical applications. In this paper, we resolve this issue by introducing so-called warrant-hiding proxy signatures. We provide a formal security definition of such schemes by augmenting the well-established security model for proxy signatures by Boldyreva et al. Furthermore, we discuss strategies for realizing this warrant-hiding property and provide two concrete instantiations of such a scheme. They enjoy different advantages, but are both entirely practical. Moreover, we prove them secure with respect to the augmented security model.
Last updated:  2013-08-30
Cryptanalysis of the SIMON Family of Block Ciphers
Hoda A. Alkhzaimi, Martin M. Lauridsen
Recently, the U.S. National Security Agency has published the specifications of two families of lightweight block ciphers, SIMON and SPECK, in ePrint report 2013/404. The ciphers are developed with optimization towards both hardware and software in mind. While the specification paper discusses design requirements and performance of the presented lightweight ciphers thoroughly, no security assessment is given. This paper is a move towards filling that cryptanalysis gap for the SIMON family of ciphers. We present a series of observations on the presented construction that, in some cases, yield attacks, while in other cases may provide a basis for further analysis by the cryptographic community. Specifically, we obtain attacks using classical as well as truncated differentials. In the former case, we show how the smallest version of SIMON, Simon32/64, exhibits a strong differential effect.
Last updated:  2013-09-03
Searching for Nonlinear Feedback Shift Registers with Parallel Computing
Przemysław Dąbrowski, Grzegorz Łabuzek, Tomasz Rachwalik, Janusz Szmidt
Nonlinear feedback shift registers (NLFSRs) are used to construct pseudorandom generators for stream ciphers. Their theory is not as complete as that of linear feedback shift registers (LFSRs). In general, it is not known how to construct all NLFSRs with maximum period. The direct method is to search for such registers with suitable properties. Advanced parallel computing technology has been applied, both in software and hardware, to search for maximum-period NLFSRs having a fairly simple algebraic normal form.
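For intuition, the search described above can be made exhaustive at toy sizes; the sketch below is ours and is not the authors' parallel implementation. An n-bit NLFSR has full period 2^n exactly when its state transition traces a binary de Bruijn cycle, and full period forces the invertible feedback form f(s) = s0 XOR g(s1, ..., s_{n-1}), so for small n every candidate g can simply be tested:

```python
from itertools import product

def nlfsr_cycle_len(n, feedback):
    """Length of the state cycle through the all-zero state for the n-bit
    NLFSR (s0, ..., s_{n-1}) -> (s1, ..., s_{n-1}, feedback(s))."""
    state, steps = 0, 0
    while True:
        bits = tuple((state >> i) & 1 for i in range(n))
        state = (state >> 1) | (feedback(bits) << (n - 1))
        steps += 1
        if state == 0:
            return steps

def count_full_period(n):
    """Exhaustively count feedback functions whose state graph is one full
    cycle of length 2^n (a binary de Bruijn cycle). Only feedbacks of the
    invertible form f(s) = s0 XOR g(s1, ..., s_{n-1}) can qualify, so we
    enumerate the 2^(2^(n-1)) truth tables of g."""
    count = 0
    for tt in product([0, 1], repeat=1 << (n - 1)):
        def feedback(b, tt=tt):
            idx = sum(b[i + 1] << i for i in range(n - 1))
            return b[0] ^ tt[idx]
        if nlfsr_cycle_len(n, feedback) == 1 << n:
            count += 1
    return count
```

For n = 3 and n = 4 the counts found (2 and 16) match the classical formula 2^(2^(n-1) - n) for the number of binary de Bruijn sequences; beyond tiny n this search explodes, which is what motivates the parallel software and hardware of the paper.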
Last updated:  2013-08-30
Lattice-Based FHE as Secure as PKE
Zvika Brakerski, Vinod Vaikuntanathan
We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of $\tilde{O}(n^{1.5+\epsilon})$-approximation for lattice problems (such as GapSVP) under quantum reductions for any $\epsilon>0$ (or $\tilde{O}(n^{2+\epsilon})$-approximation under classical reductions). This matches the best known hardness for ``regular'' (non-homomorphic) lattice based public-key encryption up to the $\epsilon$ factor. A number of previous methods had hit a roadblock at quasipolynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.) Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; Circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Last updated:  2013-08-30
On the security of a password-only authenticated three-party key exchange protocol
Junghyun Nam, Kim-Kwang Raymond Choo, Juryon Paik, Dongho Won
This note reports major previously unpublished security vulnerabilities in the password-only authenticated three-party key exchange protocol due to Lee and Hwang (Information Sciences, 180, 1702-1714, 2010): (1) the Lee-Hwang protocol is susceptible to a man-in-the-middle attack and thus fails to achieve implicit key authentication; (2) the protocol cannot protect clients' passwords against an offline dictionary attack; and (3) the indistinguishability-based security of the protocol can be easily broken even in the presence of a passive adversary.
Last updated:  2014-01-17
Rebound attacks on Stribog
Riham AlTawy, Aleksandar Kircanski, Amr M. Youssef
In August 2012, the Stribog hash function was selected as the new Russian hash standard (GOST R 34.11-2012). Stribog is an AES-based primitive and is considered an asymmetric reply to the new SHA-3. In this paper we investigate the collision resistance of the Stribog compression function and its internal cipher. Specifically, we present a message differential path for the internal block cipher that allows us to efficiently obtain a 5-round free-start collision and a 7.75-round free-start near collision for the internal cipher with complexities $2^8$ and $2^{40}$, respectively. Finally, the compression function is analyzed: a 7.75-round semi-free-start collision and 8.75- and 9.75-round semi-free-start near collisions are presented, along with an example of a 4.75-round near-colliding message pair agreeing on 49 out of 64 bytes.
Last updated:  2014-01-07
Practical Issues with TLS Client Certificate Authentication
Arnis Parsovs
The most widely used secure Internet communication standard TLS (Transport Layer Security) has an optional client certificate authentication feature that in theory has significant security advantages over HTML form-based password authentication. In this paper we discuss practical security and usability issues related to TLS client certificate authentication stemming from the server-side and browser implementations. In particular, we analyze Apache's mod_ssl implementation on the server side and the most popular browsers – Mozilla Firefox, Google Chrome and Microsoft Internet Explorer on the client side. We complement our paper with a measurement study performed in Estonia where TLS client certificate authentication is widely used. We present our recommendations to improve the security and usability of TLS client certificate authentication.
Last updated:  2013-10-06
Inter-FSP Funds Transfer Protocol
Amir Herzberg, Shay Nachmani
The present work introduces the first decentralized secure funds transfer protocol with multiple participants. The protocol guarantees that a participant only loses money if a trusted peer happens to be corrupt; furthermore, the loss is limited to the amount of credit given to that peer. The protocol supports expiration times for payment orders, and takes into consideration actual network queuing delays. To achieve our goals, we use several models and techniques from the Quality of Service area to handle delays and avoid the expiration of payment orders. We provide rigorous proofs of the security requirements of the protocol.
Last updated:  2014-10-22
A Three-Level Sieve Algorithm for the Shortest Vector Problem
Feng Zhang, Yanbin Pan, Gengran Hu
In AsiaCCS 2011, Wang \textit{et al.} proposed a two-level heuristic sieve algorithm for the shortest vector problem in lattices, which improves the Nguyen-Vidick sieve algorithm. Inspired by their idea, we present a three-level sieve algorithm in this paper, which is shown to have better time complexity. More precisely, the time complexity of our algorithm is $2^{0.3778n+o(n)}$ polynomial-time operations and the corresponding space complexity is $2^{0.2833n+o(n)}$ polynomially many bits.
Last updated:  2014-04-29
Accelerating Scalar Conversion for Koblitz Curve Cryptoprocessors on Hardware Platforms
Sujoy Sinha Roy, Junfeng Fan, Ingrid Verbauwhede
Koblitz curves are a class of computationally efficient elliptic curves where scalar multiplications can be accelerated using $\tau$NAF representations of scalars. However, conversion from an integer scalar to a short $\tau$NAF is a costly operation. In this paper we improve the recently proposed scalar conversion scheme based on division by $\tau^2$. We apply two levels of optimization to the scalar conversion architecture. First, we reduce the number of long integer subtractions during the scalar conversion. This optimization reduces the computation cost and also simplifies the critical paths present in the conversion architecture. Then we implement pipelines in the architecture. The pipeline splitting increases the operating frequency without increasing the number of cycles. We provide detailed experimental results to support our claims.
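The recoding being accelerated is Solinas' τNAF: repeated division by τ in Z[τ], where τ² = μτ - 2 with μ = ±1. A minimal software sketch for intuition (this is the standard algorithm, not the paper's hardware architecture or its division-by-τ² variant; the self-check `eval_tau` is ours):

```python
def tau_naf(n, mu=1):
    """Solinas' tau-adic NAF: write n = sum(u_i * tau^i) with digits
    u_i in {-1, 0, 1} and no two adjacent nonzero digits, where tau
    satisfies tau^2 = mu*tau - 2 (mu = +1 or -1)."""
    r0, r1 = n, 0                       # current element r0 + r1*tau of Z[tau]
    digits = []
    while r0 != 0 or r1 != 0:
        if r0 % 2:
            u = 2 - (r0 - 2 * r1) % 4   # choose u = +/-1 so the next digit is 0
            r0 -= u
        else:
            u = 0
        digits.append(u)
        # divide r0 + r1*tau by tau, using 1/tau = (mu - tau)/2
        r0, r1 = r1 + mu * r0 // 2, -(r0 // 2)
    return digits

def eval_tau(digits, mu=1):
    """Self-check: evaluate sum(u_i * tau^i) in Z[tau]; returns (a, b)
    meaning a + b*tau, so a correct recoding of n gives (n, 0)."""
    a, b = 0, 0
    for u in reversed(digits):          # Horner: (a + b*tau)*tau + u
        a, b = -2 * b + u, a + mu * b
    return a, b
```

The hardware cost the paper attacks is visible here: the long-integer subtraction `r0 -= u` and the halvings inside the loop, which is why reducing subtractions and pipelining the datapath pay off.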
Last updated:  2013-08-30
Efficient Unobservable Anonymous Reporting against Strong Adversaries
Nethanel Gelernter, Amir Herzberg
We present DURP, a decentralized protocol for unobservable, anonymous reporting to an untrusted destination, with low latency and overhead. DURP provably ensures strong anonymity properties, as required for some applications (and not provided by existing systems and practical designs, e.g., Tor), specifically: provable unobservability against a global eavesdropper and malicious participants; provable source anonymity against a malicious destination; and probable innocence against a malicious destination that is also a global eavesdropper. The DURP design is a modular combination of two modules: a queuing module, ensuring fixed rates for certain events, together with an anonymization module, which can use either Onion-Routing (DURP^OR) or Crowds (DURP^Crowds). We present analysis, backed by simulation results, of the network properties and performance of DURP, and show it has reasonable overhead. We also use the analysis results to create an optimized version of DURP.
Last updated:  2013-08-30
Gossip Latin Square and The Meet-All Gossipers Problem
Nethanel Gelernter, Amir Herzberg
Given a network of n = 2^k gossipers, we want to schedule a cyclic calendar of meetings between all of them, such that: (1) each gossiper communicates (gossips) only once a day, with one other gossiper; (2) in every (n - 1) consecutive days, each gossiper meets all other gossipers; and (3) every gossip, initiated by any gossiper, reaches all gossipers within k = log(n) days. In this paper we study the above-stated meet-all gossipers problem, by defining and constructing the Gossip Latin Square (GLS), a combinatorial structure which solves the problem.
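Properties (1) and (2) alone are already met by the classical XOR round-robin: on day d, gossiper x meets gossiper x XOR d. A minimal Python sketch, illustrative only (this is not the paper's GLS construction, whose contribution is ordering the days so that property (3) holds for a gossip initiated on any day):

```python
def xor_schedule(k):
    """Cyclic meeting calendar for n = 2**k gossipers: on day d
    (1 <= d <= n-1), gossiper x meets gossiper x XOR d.
    Each day is a perfect matching, giving properties (1) and (2)."""
    n = 1 << k
    return [[x ^ d for x in range(n)] for d in range(1, n)]

def spread_days(schedule, source, start_day):
    """Simulate a gossip initiated by `source` just before `start_day`;
    return the number of days until all gossipers know it."""
    n = len(schedule[0])
    informed = {source}
    days = 0
    while len(informed) < n:
        meet = schedule[(start_day + days) % len(schedule)]
        informed |= {meet[x] for x in informed}
        days += 1
    return days
```

With the naive day order d = 1, ..., n-1, spreading can stall (for n = 8 a day-0 gossip needs 4 > log(n) days), whereas reordering the same matchings as 1, 2, 4, 3, 5, 6, 7 spreads a day-0 gossip in exactly 3 days; guaranteeing log(n)-day spreading for a gossip started on *any* day of the cycle is what the GLS construction achieves.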
Last updated:  2019-04-07
On a Relation between the Ate Pairing and the Weil Pairing for Supersingular Elliptic Curves
Takakazu Satoh
The hyperelliptic curve Ate pairing provides an efficient way to compute a bilinear pairing on the Jacobian variety of a hyperelliptic curve. We prove that, for supersingular elliptic curves with embedding degree two, the square of the Ate pairing is nothing but the Weil pairing. Using this formula, we develop an X-coordinate-only pairing inversion method. However, the algorithm is still infeasible for cryptographic-size problems.
Last updated:  2013-09-12
On the Limits of Provable Anonymity
Nethanel Gelernter, Amir Herzberg
We study provably secure anonymity, focusing on ultimate anonymity - the strongest possible anonymity requirements and adversaries. We begin with a rigorous definition of anonymity against a wide range of computationally bounded attackers, including eavesdroppers, malicious peers, malicious destinations, and their combinations. Following the work of Hevia and Micciancio [15], our definition is generic, and captures different notions of anonymity (e.g., unobservability and sender anonymity). We then study the feasibility of ultimate anonymity. We show that there is a protocol satisfying this requirement, but with absurd (although polynomial) inefficiency and overhead. We show that such inefficiency and overhead are unavoidable for `ultimate anonymity'. We then present a slightly relaxed requirement and present feasible protocols for it.
Last updated:  2013-09-23
The Parallel-Cut Meet-In-The-Middle Attack
Ivica Nikolic, Lei Wang, Shuang Wu
We propose a new type of meet-in-the-middle attack that splits the cryptographic primitive in parallel to the execution of the operations. The division yields two primitives that have smaller input sizes and thus require lower attack complexities. However, the division is not completely independent: the sub-primitives mutually depend on a certain number of bits (the output of one is the input of the other). When the number of such bits is relatively small, we show a technique based on three classical meet-in-the-middle attacks that can recover the secret key of the cipher faster than an exhaustive search. We apply our findings to the lightweight block cipher Klein and show attacks on 10/11/13 rounds of Klein-64/-80/-96. Our approach requires only one or two pairs of known plaintexts and always recovers the secret key.
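As background, the classical meet-in-the-middle attack that the parallel-cut variant generalizes can be shown on a toy two-stage cipher (the 8-bit half-cipher, its affine byte map, and the key values below are purely illustrative and have nothing to do with Klein): tabulate the forward half under all k1, invert the backward half under all k2, and match in the middle.

```python
def sbox(x):                      # toy invertible byte map (affine mod 256, illustrative)
    return (7 * x + 3) % 256

def sbox_inv(y):
    return ((y - 3) * 183) % 256  # 183 = 7^{-1} mod 256

def half(x, k):                   # toy half-cipher: key XOR, then substitution
    return sbox(x ^ k)

def encrypt(p, k1, k2):           # two halves under independent 8-bit keys
    return half(half(p, k1), k2)

def mitm(pairs):
    """Classical meet-in-the-middle: match the value between the two halves.
    Cost is about 2 * 2^8 half-evaluations instead of 2^16 full trials."""
    p0, c0 = pairs[0]
    forward = {}
    for k1 in range(256):
        forward.setdefault(half(p0, k1), []).append(k1)
    survivors = []
    for k2 in range(256):
        mid = sbox_inv(c0) ^ k2                    # peel off the second half
        for k1 in forward.get(mid, []):
            if all(encrypt(p, k1, k2) == c for p, c in pairs[1:]):
                survivors.append((k1, k2))         # filter accidental matches
    return survivors
```

Here the two halves share a full 8-bit middle value; the paper's point is that when the sub-primitives share only a few bits, a similar matching still beats exhaustive search.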
Last updated:  2013-09-10
How to Withstand Mobile Virus Attacks, Revisited
Joshua Baron, Karim El Defrawy, Joshua Lampkins, Rafail Ostrovsky
Secure Multiparty Computation (MPC) protocols allow a set of distrusting participants to securely compute a joint function of their private inputs without revealing anything but the output of the function to each other. In 1991 Ostrovsky and Yung introduced the \emph{proactive security model}, where faults spread throughout the network, analogous to the spread of a virus or a worm. More specifically, in the proactive security model, the adversary is not limited in the number of parties it can corrupt but rather in the {\em rate} of corruption with respect to a ``rebooting'' rate. In the same paper, Ostrovsky and Yung showed that constructing a general purpose MPC protocol in the proactive security model is indeed feasible when the rate of corruption is a constant fraction of the parties. Their result, however, was shown only for stand-alone security and incurred a large polynomial communication overhead for each gate of the computation. In contrast, protocols for ``classical'' MPC models (where the adversary is limited to corrupt in total up to a fixed fraction of the parties) have seen dramatic progress in reducing communication complexity in recent years. The question that we consider in this paper is whether continuous improvements of communication overhead in protocols for the ``classical'' stationary corruptions model in the MPC literature can lead to communication complexity reductions in the proactive security model as well. 
It turns out that improving communication complexity of proactive MPC protocols using modern techniques encounters two fundamental roadblocks due to the nature of the mobile faults model: First, in the proactive security model there is the inherent impossibility of ``bulk pre-computation'' to generate cryptographic material that can be slowly consumed during protocol computation in order to amortize communication cost (the adversary can easily discover pre-computed values if they are not refreshed, and refreshing is expensive); second, there is an apparent need for double-sharing (which requires high communication overhead) of data in order to achieve proactive security guarantees. Thus, techniques that were used to speed up classical MPC do not work, and new ideas are needed. That is exactly what we do in this paper: we show a novel MPC protocol in the proactive security model that can tolerate a $\frac13-\epsilon$ (resp. $\frac12-\epsilon$) fraction of moving faults, is perfectly (resp. statistically) UC-secure, and achieves near-linear communication complexity for each step of the computation. Our results match the asymptotic communication complexity of the best known results in the ``classical'' model of stationary faults \cite{DIK10}. One of the important building blocks that we introduce is a new near-linear ``packed'' proactive secret sharing (PPSS) scheme, where the amortized communication and computational cost of maintaining each individual secret share is just a constant. We believe that our PPSS scheme might be of independent interest.
Last updated:  2013-08-30
Anonymous HIBE from Standard Assumptions over Type-3 Pairings using Dual System Encryption
Somindu C. Ramanna, Palash Sarkar
We present the first anonymous hierarchical identity-based encryption (HIBE) scheme over Type-3 pairings with adaptive security based on standard assumptions. Previous constructions of anonymous HIBE schemes did not achieve all of these features simultaneously. The new construction combines dual pairing vector spaces with an identity hash earlier used by Boneh, Boyen and Goh. The proof of security follows the dual system approach, based on decisional subspace assumptions which are implied by the Symmetric eXternal Diffie-Hellman (SXDH) assumption in Type-3 pairing groups.
Last updated:  2014-01-07
The Spammed Code Offset Method
Boris Skoric, Niels de Vreede
Helper data schemes are a security primitive used for privacy-preserving biometric databases and Physical Unclonable Functions. One of the oldest known helper data schemes is the Code Offset Method (COM). We propose an extension of the COM: the helper data is accompanied by many instances of fake helper data that are drawn from the same distribution as the real one. While the adversary has no way to distinguish between them, the legitimate party has more information and {\em can} see the difference. We use an LDPC code in order to improve the efficiency of the legitimate party's selection procedure. Our construction provides a new kind of trade-off: more effective use of the source entropy, at the price of increased helper data storage. We give a security analysis in terms of Shannon entropy and order-2 Renyi entropy. We also propose a variant of our scheme in which the helper data list is not stored but pseudorandomly generated, changing the trade-off to source entropy utilization vs. computation effort.
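The basic Code Offset Method that the paper extends can be sketched as follows, using a 3x repetition code as a toy stand-in for the LDPC code; the fake-helper-data ("spam") extension is omitted, and all names are illustrative:

```python
import secrets

# Toy Code Offset Method: helper data is a random codeword XOR'd with the
# enrollment reading w; a noisy reading close to w lets the legitimate
# party strip w and decode back to the secret. Repetition code is a
# stand-in for the LDPC code used in the paper.

def encode(bits):
    """Repetition-code encoder: each bit is repeated three times."""
    return [b for b in bits for _ in range(3)]

def decode(codeword):
    """Majority-vote decoder for the 3x repetition code."""
    return [int(sum(codeword[i:i + 3]) >= 2) for i in range(0, len(codeword), 3)]

def gen_helper(w):
    """Enrollment: helper = encode(random secret) XOR w."""
    secret = [secrets.randbelow(2) for _ in range(len(w) // 3)]
    helper = [c ^ b for c, b in zip(encode(secret), w)]
    return secret, helper

def reproduce(helper, w_noisy):
    """Reconstruction: strip the (noisy) reading and error-correct."""
    return decode([h ^ b for h, b in zip(helper, w_noisy)])
```

In the paper's extension, the real helper string would be stored among many decoy strings drawn from the same distribution; only a party holding a reading close to w can tell which one decodes consistently.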
Last updated:  2013-10-09
Differential and Linear Cryptanalysis of Reduced-Round Simon
Farzaneh Abed, Eik List, Stefan Lucks, Jakob Wenzel
This paper presents differential attacks on round-reduced versions of Simon with up to 18/32, 19/36, 25/44, 35/54, and 46/72 rounds for the 32-, 48-, 64-, 96-, and 128-bit versions, respectively. Furthermore, we briefly consider related-key rectangle, impossible-differential, and linear attacks. While all our attacks are completely academic, they demonstrate the drawback of the aggressive optimizations in Simon.
Last updated:  2016-12-12
Catena: A Memory-Consuming Password-Scrambling Framework
Christian Forler, Stefan Lucks, Jakob Wenzel
It is common wisdom that servers should store the one-way hash of their clients’ passwords, rather than storing the password in the clear. In this paper we introduce a set of functional properties a key-derivation function (password scrambler) should have. Unfortunately, none of the existing algorithms satisfies our requirements and therefore, we introduce a novel and provably secure password scrambling framework (PSF) called Catena. Furthermore, we introduce two instantiations of Catena based on memory-consuming one-way functions. Thus, Catena excellently thwarts massively parallel attacks on cheap memory-constrained hardware, such as recent graphics processing units (GPUs). Additionally, we show that Catena is also a good key-derivation function, since – in the random oracle model – it is indistinguishable from a random function. Furthermore, the memory-access pattern of both instantiations is password-independent and therefore, Catena provides resistance against cache-timing attacks. Moreover, Catena is the first PSF which naturally supports (1) client-independent updates (the server can increase the security parameters and update the password hash without user interaction or knowing the password), (2) an optional server relief protocol (saving the server’s resources at the cost of the client), and (3) a variant Catena-KG for secure key derivation (to securely generate many cryptographic keys of arbitrary lengths such that compromising some keys does not help to break others). We denote a password scrambler as a PSF with a certain instantiation.
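The client-independent update property described above can be illustrated with a much simpler construction: if the stored hash is an iterated hash, the server can raise the cost parameter without the password. This is a hedged sketch of the idea only; Catena's actual construction is a memory-consuming graph-based function, not plain iteration, and all names here are illustrative:

```python
import hashlib

# Sketch of client-independent updates: stored = H^g(salt || pwd), so the
# server can raise the cost from g to g' > g by applying H another g' - g
# times, without user interaction or knowledge of the password.

def scramble(pwd: bytes, salt: bytes, g: int) -> bytes:
    """Illustrative password scrambler: g extra iterations of SHA-256."""
    h = hashlib.sha256(salt + pwd).digest()
    for _ in range(g):
        h = hashlib.sha256(h).digest()
    return h

def increase_cost(stored: bytes, g_old: int, g_new: int) -> bytes:
    """Server-side update from cost g_old to g_new, password not needed."""
    h = stored
    for _ in range(g_new - g_old):
        h = hashlib.sha256(h).digest()
    return h
```

The update commutes with hashing the password from scratch at the higher cost, which is exactly what makes it client-independent.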
Last updated:  2013-08-30
Threshold Secret Image Sharing
Teng Guo, Feng Liu, ChuanKun Wu, ChingNung Yang, Wen Wang, YaWei Ren
A (k, n) threshold secret image sharing scheme, abbreviated as (k, n)-TSISS, splits a secret image into n shadow images in such a way that any k shadow images can be used to reconstruct the secret image exactly. In 2002, for (k, n)-TSISS, Thien and Lin reduced the size of each shadow image to 1/k of the original secret image. Their main technique is to adopt all coefficients of a (k-1)-degree polynomial to embed the secret pixels. This benefit of small shadow size has drawn many researchers' attention and their technique has been extensively used in subsequent studies. In this paper, we first show that this technique is neither information-theoretically secure nor computationally secure. Furthermore, we point out the security defect of previous (k, n)-TSISSs for sharing textual images, and then fix this security defect by adding an AES encryption process. At last, we prove that this new (k, n)-TSISS is computationally secure.
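The Thien-Lin sharing step discussed above can be sketched as follows: every coefficient of a degree-(k-1) polynomial carries a secret pixel, so each shadow stores one field element per k pixels, which is where the 1/k shadow size comes from. This is an illustrative sketch over GF(251), not the authors' code, and it sidesteps the handling of pixel values above 250:

```python
# Thien-Lin style sharing sketch over GF(251): k pixels become the k
# coefficients of one polynomial; shadow i holds its evaluation at x = i + 1.

P = 251  # prime modulus; pixels >= 251 need special treatment (omitted)

def share(pixels, k, n):
    """Split `pixels` into n shadows, each of size len(pixels) / k."""
    shadows = [[] for _ in range(n)]
    for i in range(0, len(pixels), k):
        coeffs = pixels[i:i + k]                  # k pixels = k coefficients
        for x in range(1, n + 1):
            y = sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            shadows[x - 1].append(y)
    return shadows

def reconstruct(points, k):
    """Recover pixels from k (x, shadow) pairs by solving, per block,
    the k-by-k Vandermonde system with Gaussian elimination mod P."""
    pixels = []
    xs = [x for x, _ in points[:k]]
    for blk in range(len(points[0][1])):
        ys = [s[blk] for _, s in points[:k]]
        A = [[pow(x, j, P) for j in range(k)] + [y] for x, y in zip(xs, ys)]
        for col in range(k):
            piv = next(r for r in range(col, k) if A[r][col] % P)
            A[col], A[piv] = A[piv], A[col]
            inv = pow(A[col][col], P - 2, P)
            A[col] = [a * inv % P for a in A[col]]
            for r in range(k):
                if r != col and A[r][col]:
                    f = A[r][col]
                    A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
        pixels.extend(A[j][k] for j in range(k))
    return pixels
```

Note the property the paper attacks: unlike Shamir's scheme, no coefficient is random, so the shares are fully determined by the pixels.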
Last updated:  2013-08-30
White-Box Security Notions for Symmetric Encryption Schemes
Cécile Delerablée, Tancrède Lepoint, Pascal Paillier, Matthieu Rivain
White-box cryptography has attracted a growing interest from researchers in the last decade. Several white-box implementations of standard block-ciphers (DES, AES) have been proposed but they have all been broken. On the other hand, neither evidence of existence nor proofs of impossibility have been provided for this particular setting. This might be in part because it is still quite unclear what {white-box} cryptography really aims to achieve and which security properties are expected from white-box programs in applications. This paper builds a first step towards a practical answer to this question by translating folklore intuitions behind white-box cryptography into concrete security notions. Specifically, we introduce the notion of white-box compiler that turns a symmetric encryption scheme into randomized white-box programs, and we capture several desired security properties such as one-wayness, incompressibility and traceability for white-box programs. We also give concrete examples of white-box compilers that already achieve some of these notions. Overall, our results open new perspectives on the design of white-box programs that securely implement symmetric encryption.
Last updated:  2013-10-28
The Resistance of PRESENT-80 Against Related-Key Differential Attacks
Sareh Emami, San Ling, Ivica Nikolic, Josef Pieprzyk, Huaxiong Wang
We examine the security of the 64-bit lightweight block cipher PRESENT-80 against related-key differential attacks. With a computer search we are able to prove that no related-key differential characteristic exists with probability higher than $2^{-64}$ for the full-round PRESENT-80. To overcome the exponential (in the state and key sizes) computational complexity, we use truncated differences; however, as the key schedule is not nibble-oriented, we switch to actual differences and apply early-abort techniques to prune the tree-based search. With a new method called the extended split approach we are able to make the whole search feasible, and we implement and run it in real time. Our approach targets the PRESENT-80 cipher; however, with small modifications it can be reused for other lightweight ciphers as well.
Last updated:  2013-08-30
Multiple Limited-Birthday Distinguishers and Applications
Jérémy Jean, María Naya-Plasencia, Thomas Peyrin
In this article, we propose a new improvement of the rebound techniques used for cryptanalyzing AES-like permutations in recent years. Our improvement, which reduces the complexity of the attacks, increases the probability of the outbound part by considering a new type of differential path. Moreover, we propose a new type of distinguisher, the multiple limited-birthday problem, based on the limited-birthday one, but where differences on the input and on the output may have randomized positions. We also discuss the generic complexity of solving this problem, provide a lower bound for it, and propose an efficient and generic algorithm for solving it. Our advances lead to improved distinguishing or collision results for many AES-based functions such as AES, ECHO, Grøstl, LED, PHOTON and Whirlpool.
Last updated:  2014-03-10
Locally Updatable and Locally Decodable Codes
Nishanth Chandran, Bhavana Kanukurthi, Rafail Ostrovsky
We introduce the notion of locally updatable and locally decodable codes (LULDCs). In addition to having low decode locality, such codes allow us to update a codeword (of a message) to a codeword of a different message, by rewriting just a few symbols. While, intuitively, updatability and error-correction seem to be contrasting goals, we show that for a suitable, yet meaningful, metric (which we call the Prefix Hamming metric), one can construct such codes. Informally, the Prefix Hamming metric allows the adversary to arbitrarily corrupt bits of the codeword subject to one constraint -- he does not corrupt more than a $\delta$ fraction (for some constant $\delta$) of the $t$ ``most-recently changed" bits of the codeword (for all $1\leq t\leq n$, where $n$ is the length of the codeword). Our results are as follows. First, we construct binary LULDCs for messages in $\{0,1\}^k$ with constant rate, update locality of $O(\log^2 k)$, and read locality of $O(k^\epsilon)$ for any constant $\epsilon<1$. Next, we consider the case where the encoder and decoder share a secret state and the adversary is computationally bounded. Here too, we obtain local updatability and decodability for the Prefix Hamming metric. Furthermore, we also ensure that the local decoding algorithm never outputs an incorrect message -- even when the adversary can corrupt an arbitrary number of bits of the codeword. We call such codes locally updatable locally decodable-detectable codes (LULDDCs) and obtain dramatic improvements in the parameters (over the information-theoretic setting). Our codes have constant rate, an update locality of $O(\log^2 k)$ and a read locality of $O(\lambda \log^2 k)$, where $\lambda$ is the security parameter. Finally, we show how our techniques apply to the setting of dynamic proofs of retrievability (DPoR) and present a construction of this primitive with better parameters than existing constructions.
In particular, we construct a DPoR scheme with linear storage, $O(\log^2 k)$ write complexity, and $O(\lambda \log k)$ read and audit complexity.
Last updated:  2013-08-21
Montgomery Multiplication Using Vector Instructions
Joppe W. Bos, Peter L. Montgomery, Daniel Shumow, Gregory M. Zaverucha
In this paper we present a parallel approach to compute interleaved Montgomery multiplication. This approach is particularly suitable to be computed on 2-way single instruction, multiple data platforms as can be found on most modern computer architectures in the form of vector instruction set extensions. We have implemented this approach for tablet devices which run the x86 architecture (Intel Atom Z2760) using SSE2 instructions as well as devices which run on the ARM platform (Qualcomm MSM8960, NVIDIA Tegra 3 and 4) using NEON instructions. When instantiating modular exponentiation with this parallel version of Montgomery multiplication we observed a performance increase of more than a factor of 1.5 compared to the sequential implementation in OpenSSL for the classical arithmetic logic unit on the Atom platform for 2048-bit moduli.
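The sequential interleaved Montgomery multiplication that the paper's 2-way SIMD variant parallelizes can be sketched as follows; this is a reference sketch only, and the word size and names are illustrative, not taken from the paper's implementation:

```python
# Interleaved (operand-scanning) Montgomery multiplication, one limb of
# `a` per iteration; the paper computes a variant of this on 2-way SIMD.

W = 32                  # limb size in bits (illustrative)
MASK = (1 << W) - 1

def montmul(a, b, m, m0inv, nlimbs):
    """Return a * b * R^{-1} mod m, with R = 2^(W * nlimbs), odd modulus m,
    a, b < m, and m0inv = -m^{-1} mod 2^W. Multiplication and Montgomery
    reduction are interleaved rather than performed one after the other."""
    t = 0
    for i in range(nlimbs):
        t += ((a >> (W * i)) & MASK) * b       # accumulate a_i * b
        u = ((t & MASK) * m0inv) & MASK        # make low limb of t + u*m zero
        t = (t + u * m) >> W                   # exact division by 2^W
    return t - m if t >= m else t              # single conditional subtraction
```

Feeding Montgomery-form inputs (or post-multiplying by R^2 mod m) yields the plain product mod m; in modular exponentiation the conversion cost is amortized over all the multiplications.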
Last updated:  2013-08-22
Universal Leaky Random Oracle Model
Guangjun Fan, Yongbin Zhou, Dengguo Feng
K. Yoneyama et al. introduced the Leaky Random Oracle Model at ProvSec 2008, which only considers leakage of the hash list of a hash function used by a cryptosystem, caused by implementation flaws or sloppy usage. However, such attacks leak not only the hash list of a hash function, but also other secret states of the cryptosystem outside the hash list (e.g. the secret key). In most cases, an adversary is more interested in revealing these secret states. Therefore, the Leaky Random Oracle Model is very limited because it only considers leakage of the hash list and ignores the leakage of other secret states. In this paper, we present a new leakage model based on the Leaky Random Oracle Model. In our new model, both the secret states (secret key) and the hash list can be leaked. Furthermore, the secret key can be leaked continually. Hence, our new model is more universal and stronger than the Leaky Random Oracle Model and some other leakage models. Furthermore, we give a public key encryption scheme which is provably IND-CCA secure in our new model.
Last updated:  2013-08-21
Improvement of One Adaptive Oblivious Transfer Scheme
Zhengjun Cao, Lihua Liu
In 2011, the authors of [8] presented an adaptive oblivious transfer (OT) scheme based on the Decisional 3-Party Diffie-Hellman (3DDH) assumption. The encryption used in the scheme is a combination of the Boneh-Boyen IBE scheme and a variation of the Hohenberger-Waters signature. The scheme is somewhat inefficient since it combines the two underlying schemes in a simple way. In this paper, we present an improvement of the OT scheme and show its security under the 3DDH assumption. The proposed techniques are helpful for designing and analyzing other cryptographic schemes.
Last updated:  2014-09-08
Algebraic MACs and Keyed-Verification Anonymous Credentials
Melissa Chase, Sarah Meiklejohn, Gregory M. Zaverucha
We consider the problem of constructing anonymous credentials for use in a setting where the issuer of credentials is also the verifier, or more generally where the issuer and verifier have a shared key. In this setting we can use message authentication codes (MACs) instead of public key signatures as the basis for the credential system. To this end, we construct two algebraic MACs in prime-order groups, along with efficient protocols for issuing credentials, asserting possession of a credential, and proving statements about hidden attributes (e.g., the age of the credential owner). We prove the security of the first scheme in the generic group model, and prove the security of the second scheme -- using a dual-system-based approach -- under decisional Diffie-Hellman (DDH). Our MACs are of independent interest, as they are the only uf-cmva-secure MACs with efficient proofs of knowledge. Finally, we compare the efficiency of our new systems to two existing constructions of anonymous credentials: U-Prove and Idemix. We show that the performance of the new schemes is competitive with U-Prove (which is not provably secure, whereas ours is based on DDH), and many times faster than Idemix.
Last updated:  2016-08-04
When Private Set Intersection Meets Big Data: An Efficient and Scalable Protocol
Changyu Dong, Liqun Chen, Zikai Wen
Large scale data processing brings new challenges to the design of privacy-preserving protocols: how to meet the increasing requirements of speed and throughput of modern applications, and how to scale up smoothly when data being protected is big. Efficiency and scalability become critical criteria for privacy preserving protocols in the age of Big Data. In this paper, we present a new Private Set Intersection (PSI) protocol that is extremely efficient and highly scalable compared with existing protocols. The protocol is based on a novel approach that we call oblivious Bloom intersection. It has linear complexity and relies mostly on efficient symmetric key operations. It has high scalability due to the fact that most operations can be parallelized easily. The protocol has two versions, a basic protocol and an enhanced protocol; the security of the two variants is analyzed and proved in the semi-honest model and the malicious model, respectively. A prototype of the basic protocol has been built. We report the result of performance evaluation and compare it against the two previously fastest PSI protocols. Our protocol is orders of magnitude faster than these two protocols. To compute the intersection of two million-element sets, our protocol needs only 41 seconds (80-bit security) and 339 seconds (256-bit security) on moderate hardware in parallel mode.
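As background for the oblivious Bloom intersection, a plain (non-private) Bloom-filter intersection can be sketched as follows; the garbled Bloom filter and oblivious-transfer steps that make the protocol privacy-preserving are deliberately omitted, and the parameters below are illustrative:

```python
import hashlib

# Plain Bloom-filter set intersection: the server encodes its set into a
# bit array, the client tests its own elements against it. This only
# illustrates the underlying data structure, with no privacy guarantees.

M, K = 1024, 5  # filter size and number of hash functions (illustrative)

def positions(item):
    """Derive K bit positions for `item` from a single SHA-256 digest."""
    h = hashlib.sha256(item.encode()).digest()
    return [int.from_bytes(h[2 * i:2 * i + 2], 'big') % M for i in range(K)]

def build(items):
    """Server side: set the K bits of every element."""
    bits = [0] * M
    for it in items:
        for p in positions(it):
            bits[p] = 1
    return bits

def query(bits, item):
    """Client side: `item` may be in the set iff all K bits are set."""
    return all(bits[p] for p in positions(item))

server_filter = build({"alice", "bob", "carol"})
matches = [x for x in sorted({"bob", "dave"}) if query(server_filter, x)]
# matches contains "bob"; a false positive for "dave" is possible but unlikely
```

Bloom filters never produce false negatives, only false positives with a rate controlled by M and K; the same parallelism across elements is what gives the paper's protocol its scalability.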
Last updated:  2014-03-28
Leakage Resilient Proofs of Ownership in Cloud Storage, Revisited
Jia Xu, Jianying Zhou
Client-side deduplication is a very effective mechanism to reduce both storage and communication cost in cloud storage service. Halevi~\emph{et al.} (CCS '11) discovered a security vulnerability in existing implementations of client-side deduplication and proposed a cryptographic primitive called ``proofs of ownership'' (PoW) as a countermeasure. In a proof of ownership scheme, any owner of the same file can prove to the cloud storage server that he/she owns that file in an efficient and secure manner, even if a bounded amount of any efficiently extractable information of that file has been leaked. We revisit Halevi~\emph{et al.}'s formulation of PoW and significantly improve the understanding and construction of PoW. Our contribution is twofold. Firstly, we propose a generic and conceptually simple approach to construct \emph{Privacy-Preserving} Proofs of Ownership schemes, by leveraging well-known primitives (i.e. Randomness Extractor and Proofs of Retrievability) and techniques (i.e. sample-then-extract). Our approach can be roughly described as Privacy-Preserving PoW = Randomness Extractor + Proofs of Retrievability. Secondly, in order to provide a better instantiation of Privacy-Preserving PoW, we propose a novel design of randomness extractor with large output size, which improves the state of the art by reducing both the random seed length and the entropy loss (i.e. the difference between the entropy of input and output) simultaneously.
Last updated:  2014-01-14
Enforcing Language Semantics Using Proof-Carrying Data
Stephen Chong, Eran Tromer, Jeffrey A. Vaughan
Sound reasoning about the behavior of programs relies on program execution adhering to the language semantics. However, in a distributed computation, when a value is sent from one party to another, the receiver faces the question of whether the value is well-traced: could it have been produced by a computation that respects the language semantics? If not, then accepting the non-well-traced value may invalidate the receiver's reasoning, leading to bugs or vulnerabilities. Proof-Carrying Data (PCD) is a recently-introduced cryptographic mechanism that allows messages in a distributed computation to be accompanied by proof that the message, and the history leading to it, complies with a specified predicate. Using PCD, a verifier can be convinced that the predicate held throughout the distributed computation, even in the presence of malicious parties, and at a verification cost that is independent of the size of the computation producing the value. Unfortunately, previous approaches to using PCD required tailoring a specialized predicate for each application, using an inconvenient formalism and with little methodological support. We connect these two threads by introducing a novel, PCD-based approach to enforcing language semantics in distributed computations. We show how to construct an object-oriented language runtime that ensures that objects received from potentially untrusted parties are well-traced with respect to a set of class definitions. Programmers can then soundly reason about program behavior, despite values received from untrusted parties, without needing to be aware of the underlying cryptographic techniques.
Last updated:  2013-08-17
Rounding LLL: Finding Faster Small Roots of Univariate Polynomial Congruences
Jingguo Bi, Phong Q. Nguyen
In a seminal work at EUROCRYPT '96, Coppersmith showed how to find all small roots of a univariate polynomial congruence in polynomial time: this has found many applications in public-key cryptanalysis and in a few security proofs. However, the running time of the algorithm is a high-degree polynomial, which limits experiments: the bottleneck is an LLL reduction of a high-dimensional matrix with extra-large coefficients. We present in this paper a polynomial speedup over Coppersmith's algorithm. Our improvement is based on a special property of the matrices used by Coppersmith's algorithm, which allows us to speed up the LLL reduction by rounding. The exact speedup depends on the LLL algorithm used: for instance, the speedup is quadratic in the bit-size of the small-root bound if one uses the Nguyen-Stehlé L^2 algorithm.
Last updated:  2013-08-17
Low Data Complexity Biclique Cryptanalysis of Block Ciphers with Application to Piccolo and HIGHT
Siavash Ahmadi, Zahra Ahmadian, Javad Mohajeri, Mohammad Reza Aref
In this paper, we present a framework for biclique cryptanalysis of block ciphers with an extremely low data complexity. To that end, we introduce a new representation of the biclique attack, and present an algorithm for choosing two differential characteristics that simultaneously minimizes the data complexity and controls the computational complexity. We then characterize those block ciphers that are vulnerable to this technique and, among them, apply the attack to the lightweight block ciphers Piccolo-80, Piccolo-128 and HIGHT. The data complexities of these attacks are considerably lower than those of existing results. For full-round Piccolo-80 and Piccolo-128, the data complexity of the attacks is only 16 plaintext-ciphertext pairs, and for full-round HIGHT our attack requires 256 pairs. In all attacks the computational complexity remains the same as in previous attacks or is even slightly improved.
Last updated:  2013-08-17
Discrete Ziggurat: A Time-Memory Trade-off for Sampling from a Gaussian Distribution over the Integers
Johannes Buchmann, Daniel Cabarcas, Florian Göpfert, Andreas Hülsing, Patrick Weiden
Several lattice-based cryptosystems require to sample from a discrete Gaussian distribution over the integers. Existing methods to sample from such a distribution either need large amounts of memory or they are very slow. In this paper we explore a different method that allows for a flexible time-memory trade-off, offering developers freedom in choosing how much space they can spare to store precomputed values. We prove that the generated distribution is close enough to a discrete Gaussian to be used in lattice-based cryptography. Moreover, we report on an implementation of the method and compare its performance to existing methods from the literature. We show that for large standard deviations, the Ziggurat algorithm outperforms all existing methods.
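For context, the slow-but-simple baseline that table-based methods such as the Ziggurat accelerate is plain rejection sampling; a minimal sketch (not the paper's algorithm, with an illustrative tail cut `tau`) looks like this:

```python
import math
import random

# Baseline rejection sampler for the discrete Gaussian over the integers:
# propose x uniformly in a truncated interval, accept with probability
# proportional to the Gaussian weight exp(-x^2 / (2 sigma^2)).

def sample_dgauss(sigma, tau=10):
    """Sample x in Z with probability proportional to exp(-x^2/(2 sigma^2)),
    restricted to |x| <= tau * sigma (the tail cut)."""
    bound = int(math.ceil(tau * sigma))
    while True:
        x = random.randint(-bound, bound)
        if random.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x

samples = [sample_dgauss(3.2) for _ in range(10000)]
# the empirical mean is near 0 and the empirical variance near sigma^2
```

This uses almost no memory but wastes many proposals per accepted sample; precomputing tables (as in the Ziggurat) trades memory for far fewer rejections, which is exactly the trade-off the paper studies.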
Last updated:  2014-01-26
Replacing a Random Oracle: Full Domain Hash From Indistinguishability Obfuscation
Susan Hohenberger, Amit Sahai, Brent Waters
Our main result gives a way to instantiate the random oracle with a concrete hash function in ``full domain hash'' applications. The term full domain hash was first proposed by Bellare and Rogaway and referred to a signature scheme from any trapdoor permutation that was part of their seminal work introducing the random oracle heuristic. Over time the term full domain hash has (informally) encompassed a broader range of notable cryptographic schemes including the Boneh-Franklin IBE scheme and Boneh-Lynn-Shacham (BLS) signatures. All of the above described schemes required a hash function that had to be modeled as a random oracle to prove security. Our work utilizes recent advances in indistinguishability obfuscation to construct specific hash functions for use in these schemes. We then prove security of the original cryptosystems when instantiated with our specific hash function. Of particular interest, our work evades the impossibility results of Dodis, Oliveira, and Pietrzak, who showed that there can be no black-box construction of hash functions that allow Full-Domain Hash Signatures to be based on trapdoor permutations, and its extension by Dodis, Haitner, and Tentes to the RSA Full-Domain Hash Signatures. This indicates that our techniques applying indistinguishability obfuscation may be useful in the future for circumventing other such black-box impossibility proofs.
Last updated:  2013-08-17
Multi-Key Searchable Encryption
Raluca Ada Popa, Nickolai Zeldovich
We construct a searchable encryption scheme that enables keyword search over data encrypted with {\em different} keys. The scheme is practical and was designed to be included in a new system for protecting data confidentiality in client-server applications against attacks on the server.
Last updated:  2013-10-07
SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge
Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer, Madars Virza
An argument system for NP is a proof system that allows efficient verification of NP statements, given proofs produced by an untrusted yet computationally-bounded prover. Such a system is non-interactive and publicly-verifiable if, after a trusted party publishes a proving key and a verification key, anyone can use the proving key to generate non-interactive proofs for adaptively-chosen NP statements, and proofs can be verified by anyone by using the verification key. We present an implementation of a publicly-verifiable non-interactive argument system for NP. The system, moreover, is a zero-knowledge proof-of-knowledge. It directly proves correct executions of programs on TinyRAM, a random-access machine tailored for efficient verification of nondeterministic computations. Given a program $P$ and time bound T, the system allows for proving correct execution of $P$, on any input $x$, for up to T steps, after a one-time setup requiring $\tilde{O}(|P| T)$ cryptographic operations. An honest prover requires $\tilde{O}(|P| \cdot T)$ cryptographic operations to generate such a proof, while proof verification can be performed with only $O(|x|)$ cryptographic operations. This system can be used to prove the correct execution of C programs, using our TinyRAM port of the GCC compiler. This yields a zero-knowledge Succinct Non-interactive ARgument of Knowledge (zk-SNARK) for program executions in the preprocessing model -- a powerful solution for delegating NP computations, with several features not achieved by previously-implemented primitives. Our approach builds on recent theoretical progress in the area. We present efficiency improvements and implementations of two main ingredients: * Given a C program, we produce a circuit whose satisfiability encodes the correctness of execution of the program. Leveraging nondeterminism, the generated circuit's size is merely quasilinear in the size of the computation. 
In particular, we efficiently handle arbitrary and data-dependent loops, control flow, and memory accesses. This is in contrast with existing ``circuit generators'', which in the general case produce circuits of quadratic size. * Given a linear PCP for verifying satisfiability of circuits, we produce a corresponding SNARK. We construct such a linear PCP (which, moreover, is zero-knowledge and very efficient) by building on and improving on recent work on quadratic arithmetic programs.
Last updated:  2014-01-30
A Formal Proof of Countermeasures Against Fault Injection Attacks on CRT-RSA
Pablo Rauzy, Sylvain Guilley
In this article, we describe a methodology that aims at either breaking or proving the security of CRT-RSA implementations against fault injection attacks. In the specific case-study of the BellCoRe attack, our work bridges a gap between formal proofs and implementation-level attacks. We apply our results to three implementations of CRT-RSA, namely the unprotected one, that of Shamir, and that of Aumüller et al. Our findings are that many attacks are possible on both the unprotected and the Shamir implementations, while the implementation of Aumüller et al. is resistant to all single-fault attacks. It is also resistant to double-fault attacks if we consider the less powerful threat-model of its authors.
Last updated:  2013-08-17
Improvement of One Anonymous Identity-Based Encryption
Zhengjun Cao, Lihua Liu
In 2009, Seo et al. proposed an anonymous hierarchical identity-based encryption (IBE) scheme. The ciphertext consists of $(C_1, C_2, C_3, C_4)$, where $C_1$ is the blinded message, $C_4$ is the blinded identity, and both $C_2$ and $C_3$ are used as decrypting helpers. To prove its security, the authors defined five games and introduced a strong simulator who is able to select different Setups for those games. In this paper, we optimize the IBE scheme by removing one decrypting helper and the strong simulator. We show its security under the $\ell$-computational Diffie-Hellman assumption with a normal simulator who only requires a unique Setup.
Last updated:  2013-08-17
A Comparison of Double Point Multiplication Algorithms and their Implementation over Binary Elliptic Curves
Reza Azarderakhsh, Koray Karabina
Efficient implementation of double point multiplication is crucial for elliptic curve cryptographic systems. We revisit three recently proposed simultaneous double point multiplication algorithms. We propose hardware architectures for these algorithms, and provide a comparative analysis of their performance. We implement the proposed architectures on a Xilinx Virtex-4 FPGA, and report the area and timing results. Our results indicate that differential addition chain based algorithms are better suited to compute double point multiplication over binary elliptic curves for both high-performance and resource-constrained applications.
Last updated:  2016-07-18
On secret sharing with nonlinear product reconstruction
Ignacio Cascudo, Ronald Cramer, Diego Mirandola, Carles Padro, Chaoping Xing
Multiplicative linear secret sharing is a fundamental notion in the area of secure multi-party computation (MPC) and, since recently, in the area of two-party cryptography as well. In a nutshell, this notion guarantees that ``the product of two secrets is obtained as a linear function of the vector consisting of the coordinate-wise product of two respective share-vectors''. This paper focuses on the following foundational question, which is novel to the best of our knowledge. Suppose we {\em abandon the latter linearity condition} and instead require that this product is obtained by {\em some}, not-necessarily-linear ``product reconstruction function''. {\em Is the resulting notion equivalent to multiplicative linear secret sharing?} We show the (perhaps somewhat counter-intuitive) result that this relaxed notion is strictly {\em more general}. Concretely, fix a finite field $\FF_q$ as the base field over which linear secret sharing is considered. Then we show there exists an (exotic) linear secret sharing scheme with an unbounded number of players $n$ such that it has $t$-privacy with $t = \Omega(n)$ and such that it does admit a product reconstruction function, yet this function is {\em necessarily} nonlinear. In addition, we determine the minimum number of players for which those exotic schemes exist. Our proof is based on combinatorial arguments involving quadratic forms. It extends to similar separation results for important variations, such as strongly multiplicative secret sharing.
Last updated:  2013-08-15
Proving TLS-attack related open biases of RC4
Santanu Sarkar, Sourav Sen Gupta, Goutam Paul, Subhamoy Maitra
After a series of works on RC4 cryptanalysis in the last few years (published in flagship cryptology conferences and journals), the most significant (and also very recent) attack on the cipher has been the discovery of vulnerabilities in the SSL/TLS protocol, by AlFardan, Bernstein, Paterson, Poettering and Schuldt. They ran extensive computations to identify significant short-term single-byte keystream biases of RC4, and utilized that knowledge in the attack. The biases identified by AlFardan et al. consist of earlier known biases of RC4, as well as some newly discovered ones. In this paper, we attempt to prove the new, unproved or partially proved biases amongst the above-mentioned ones. The theoretical proofs of these biases not only provide a scientific justification, but also uncover intricate patterns and operations of the cipher associated with these biases. For example, while attempting the proof of a bias of the first output byte towards 129, we observe that this bias occurs prominently only for certain lengths of the secret key of RC4. In addition, our findings reveal that this bias may be related to the old and unsolved problem of ``anomalies'' in the distribution of the state array after the Key Scheduling Algorithm. In this connection, we prove the anomaly in $S_0[128] = 127$, a problem open for more than a decade. Other than proving the new biases, we also complete the proof for the extended keylength dependent biases in RC4, a problem attempted and partially solved by Isobe, Ohigashi, Watanabe and Morii in FSE 2013. Our new proofs and observations in this paper, along with the connection to the older results, provide a comprehensive view on the state-of-the-art literature in RC4 cryptanalysis.
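Short-term single-byte biases of this kind are easy to reproduce empirically. Below is a plain RC4 implementation together with an experiment for the well-known bias of the second output byte towards 0 (roughly 2/256 instead of the ideal 1/256); this is an illustrative sketch, not the authors' code:

```python
import random

# RC4 keystream generator (KSA + PRGA), used to observe empirically the
# kind of short-term single-byte bias the paper proves analytically.

def rc4_keystream(key, nbytes):
    """Return the first `nbytes` keystream bytes for `key` (list of ints)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # Key Scheduling Algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(nbytes):                    # Pseudo-Random Generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 20000
hits = sum(rc4_keystream([random.randrange(256) for _ in range(16)], 2)[1] == 0
           for _ in range(trials))
# hits / trials sits noticeably above the ideal 1/256
```

Running the experiment over random 16-byte keys, the frequency of a zero second byte lands near 2/256, matching the known bias rather than the uniform rate.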
Last updated:  2013-08-16
Type-Based Analysis of Protected Storage in the TPM (full version)
Jianxiong Shao, Dengguo Feng, Yu Qin
The Trusted Platform Module (TPM) is designed to enable trustworthy computation and communication over open networks. The TPM provides a way to store cryptographic keys and other sensitive values in its shielded memory and acts as the \emph{Root of Trust for Storage} (RTS). The TPM interacts with applications via a predefined set of commands (an API). In this paper, we give an abstraction model of the TPM 2.0 specification, concentrating on the Protected Storage part. After identifying and formalizing its secrecy properties, we devise a type system with asymmetric cryptographic primitives to statically enforce and prove them.
Last updated:  2013-08-15
Obfuscating Branching Programs Using Black-Box Pseudo-Free Groups
Ran Canetti, Vinod Vaikuntanathan
We show that the class of polynomial-size branching programs can be obfuscated according to a virtual black-box notion akin to that of Barak et al. [Crypto 01], in an idealized black-box group model over pseudo-free groups. This class is known to lie between $NC^1$ and $P$ and includes most interesting cryptographic algorithms. The construction is rather simple and is based on Kilian's randomization technique for Barrington's branching programs. The black-box group model over pseudo-free groups is a strong idealization. In particular, in a pseudo-free group, the group operation can be performed efficiently, while finding surprising relations between group elements is intractable. A black-box representation of the group provides an ideal interface which permits the prescribed group operations, and nothing else. Still, the algebraic structure and security requirements appear natural and potentially realizable. They are also unrelated to the specific function to be obfuscated. Our modeling should be compared with the recent breakthrough obfuscation scheme of Garg et al. [FOCS 2013]: while the high-level structure is similar, some important details differ. It should be stressed, however, that unlike Garg et al. we do not provide a candidate concrete instantiation of our abstract structure.
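Kilian's randomization technique, on which the construction is based, can be sketched in a few lines: each matrix $M_i$ of the program is replaced by $R_{i-1} M_i R_i^{-1}$ for fresh random invertible $R_i$ (with $R_0 = R_n = I$), so the randomizers cancel telescopically and the product is preserved while each individual matrix is re-randomized. The sketch below uses $2\times 2$ matrices over a small prime field purely for illustration; the actual construction works with the width-5 permutation matrices of Barrington's theorem inside a black-box pseudo-free group.

```python
import random

P = 101  # small prime field, for illustration only

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % P
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % P

def inv2(A):
    d = pow(det2(A), -1, P)
    return [[A[1][1] * d % P, -A[0][1] * d % P],
            [-A[1][0] * d % P, A[0][0] * d % P]]

def random_invertible2(rng):
    while True:
        A = [[rng.randrange(P) for _ in range(2)] for _ in range(2)]
        if det2(A) != 0:
            return A

def kilian_randomize(mats, rng):
    """Replace M_i by R_{i-1} M_i R_i^{-1} with R_0 = R_n = I: the randomizers
    telescope, so the product is unchanged, but each matrix is re-randomized."""
    n = len(mats)
    I = [[1, 0], [0, 1]]
    R = [I] + [random_invertible2(rng) for _ in range(n - 1)] + [I]
    return [matmul(matmul(R[i], mats[i]), inv2(R[i + 1])) for i in range(n)]

def product(ms):
    out = [[1, 0], [0, 1]]
    for m in ms:
        out = matmul(out, m)
    return out

rng = random.Random(0)
program = [random_invertible2(rng) for _ in range(5)]
randomized = kilian_randomize(program, rng)
assert product(randomized) == product(program)
```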
Last updated:  2013-08-15
Limits on the Power of Cryptographic Cheap Talk
Pavel Hubacek, Jesper Buus Nielsen, Alon Rosen
We revisit the question of whether cryptographic protocols can replace correlated equilibria mediators in two-player strategic games. This problem was first addressed by Dodis, Halevi and Rabin (CRYPTO 2000), who suggested replacing the mediator with a secure protocol and proved that their solution is stable in the Nash equilibrium (NE) sense, provided that the players are computationally bounded. We show that there exist two-player games for which no cryptographic protocol can implement the mediator in a sequentially rational way; that is, without introducing empty threats. This explains why all solutions so far were either sequentially unstable, or were restricted to a limited class of correlated equilibria (specifically, those that do not dominate any NE, and hence playing them does not offer a clear advantage over playing any NE). In the context of computational NE, we classify necessary and sufficient cryptographic assumptions for implementing a mediator that allows one to achieve a given utility profile of a correlated equilibrium. The picture that emerges is somewhat different from the one arising in semi-honest secure two-party computation. Specifically, while in the latter case every functionality is either ``complete'' (i.e., implies Oblivious Transfer) or ``trivial'' (i.e., can be securely computed unconditionally), in the former there exist some ``intermediate'' utility profiles whose implementation is equivalent to the existence of one-way functions.
Last updated:  2013-08-15
Non-Malleable Codes from Two-Source Extractors
Stefan Dziembowski, Tomasz Kazana, Maciej Obremski
We construct an efficient information-theoretically non-malleable code in the split-state model for one-bit messages. Non-malleable codes were introduced recently by Dziembowski, Pietrzak and Wichs (ICS 2010), as a general tool for storing messages securely on hardware that can be subject to tampering attacks. Informally, a code $(Enc : \cal M \rightarrow \cal L \times \cal R, Dec : \cal L \times \cal R \rightarrow \cal M)$ is {\em non-malleable in the split-state model} if any adversary, by manipulating {\em independently} $L$ and $R$ (where $(L,R)$ is an encoding of some message $M$), cannot obtain an encoding of a message $M'$ that is not equal to $M$ but is ``related'' to $M$ in some way. Until now it was unknown how to construct an information-theoretically secure code with such a property, even for $\cal M = \{0,1\}$. Our construction solves this problem. Additionally, it is leakage-resilient, and the amount of leakage that we can tolerate can be an arbitrary fraction $\xi < {1}/{4}$ of the length of the codeword. Our code is based on the inner-product two-source extractor, but in general it can be instantiated by any two-source extractor that has large output and has the property of being {\em flexible}, which is a new notion that we define. We also show that non-malleable codes for one-bit messages have an equivalent, perhaps simpler characterization, namely such codes can be defined as follows: if $M$ is chosen uniformly from $\{0,1\}$ then the probability (in the experiment described above) that the output message $M'$ is not equal to $M$ can be at most $1/2 + \epsilon$.
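The split-state syntax $(Enc : \cal M \rightarrow \cal L \times \cal R, Dec)$, with the inner product over a prime field in the role of the two-source extractor, can be illustrated with a toy sketch. This is only an interface demo built around the inner-product extractor; it is not the paper's non-malleable construction and makes no security claim (in particular, this naive coset encoding is trivially malleable).

```python
import random

P = 257  # small prime; the actual construction uses much larger fields
N = 4    # length of each half of the codeword

def inner(L, R):
    """Inner-product two-source extractor (here used directly as the decoder)."""
    return sum(l * r for l, r in zip(L, R)) % P

def encode(bit, rng):
    """Sample split-state halves L, R by rejection so that the inner product
    is zero exactly when the encoded bit is 0. Toy interface sketch only."""
    while True:
        L = [rng.randrange(P) for _ in range(N)]
        R = [rng.randrange(P) for _ in range(N)]
        if (inner(L, R) == 0) == (bit == 0):
            return L, R

def decode(L, R):
    return 0 if inner(L, R) == 0 else 1

rng = random.Random(42)
for bit in (0, 1):
    L, R = encode(bit, rng)
    assert decode(L, R) == bit
```

The point of the real construction is precisely that an adversary manipulating $L$ and $R$ independently cannot steer the decoded bit in a related direction; the sketch above only fixes the Enc/Dec shapes that such a statement quantifies over.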
Last updated:  2013-08-15
Improvement of Camenisch-Neven-Shelat Oblivious Transfer Scheme
Zhengjun Cao, Hanyue Cao
In 2007, Camenisch, Neven and Shelat proposed an adaptive oblivious transfer (OT) scheme in which a sender has $N$ messages, of which a receiver can adaptively choose to receive $k$, one after the other. In this paper, we show that the scheme has the drawback that the sender can serve a single receiver only once. The drawback results from the deterministic encryption used. To fix it, we suggest replacing the deterministic encryption with a probabilistic one. The OT scheme adopts the paradigm of ``encryption and proof of knowledge'' in order to force the sender to keep the transferred messages consistent. We remark that this paradigm is unnecessary: in most reasonable applications of OT, the transferred messages must be recognizable to the receiver, or the sender is willing to disclose some messages to the receiver. This property was explicitly specified in the earlier works by Rabin, Even, Goldreich and Lempel.
Last updated:  2013-08-15
Rational Protocol Design: Cryptography Against Incentive-driven Adversaries
Juan Garay, Jonathan Katz, Ueli Maurer, Bjoern Tackmann, Vassilis Zikas
Existing work on “rational cryptographic protocols” treats each party (or coalition of parties) running the protocol as a selfish agent trying to maximize its utility. In this work we propose a fundamentally different approach that is better suited to modeling a protocol under attack from an external entity. Specifically, we consider a two-party game between a protocol designer and an external attacker. The goal of the attacker is to break security properties such as correctness or privacy, possibly by corrupting protocol participants; the goal of the protocol designer is to prevent the attacker from succeeding. We lay the theoretical groundwork for a study of cryptographic protocol design in this setting by providing a methodology for defining the problem within the traditional simulation paradigm. Our framework provides ways of reasoning about important cryptographic concepts (e.g., adaptive corruptions or attacks on communication resources) not handled by previous game-theoretic treatments of cryptography. We also prove composition theorems that—for the first time—provide a sound way to design rational protocols assuming “ideal communication resources” (such as broadcast or authenticated channels) and then instantiate these resources using standard cryptographic tools. Finally, we investigate the problem of secure function evaluation in our framework, where the attacker has to pay for each party it corrupts. Our results demonstrate how knowledge of the attacker’s incentives can be used to
Last updated:  2013-11-20
Revocable IBE Systems with Almost Constant-size Key Update
Le Su, Hoon Wei Lim, San Ling, Huaxiong Wang
Identity-based encryption (IBE) has been regarded as an attractive alternative to more conventional certificate-based public key systems. It has recently attracted not only considerable research from the academic community, but also interest from the industry and standardization bodies. However, while key revocation is a fundamental requirement of any public key system, not much work has been done in the identity-based setting. In this paper, we continue the study of revocable IBE (RIBE) initiated by Boldyreva, Goyal, and Kumar. Their proposal of a selectively secure RIBE scheme, and a subsequent construction by Libert and Vergnaud in a stronger adaptive security model, are based on a binary tree approach, so that their key update size is logarithmic in the number of users. We ask the question of whether or not the key update size could be further reduced by using a cryptographic accumulator. We show that, indeed, the key update material can be made constant with some small amount of auxiliary information, through a novel combination of the Lewko and Waters IBE scheme and the Camenisch, Kohlweiss, and Soriente pairing-based dynamic accumulator.
Last updated:  2013-08-15
Differential Fault Attack against Grain family with very few faults and minimal assumptions
Santanu Sarkar, Subhadeep Banik, Subhamoy Maitra
The published works on Differential Fault Attacks (DFA) against the Grain family so far require (i) quite a large number (hundreds) of faults (around $n \ln n$, where $n = 80$ for Grain v1 and $n = 128$ for Grain-128 and Grain-128a) and (ii) several assumptions on the location and timing of the injected faults. In this paper we present a significantly improved scenario from the adversarial point of view for DFA against the Grain family of stream ciphers. Our model is the most realistic one so far, as it only requires that the cipher be re-keyed a few times and that faults can be injected at any random location and at any random point of time, i.e., no precise control is needed over the location and timing of fault injections. We construct equations based on the algebraic description of the cipher by introducing new variables so that the degrees of the equations do not increase. In line with algebraic cryptanalysis, we accumulate such equations based on the fault-free and faulty keystream bits and solve them using the SAT solver Cryptominisat-2.9.5 installed with SAGE 5.7. Within a few minutes we can recover the state of Grain v1, Grain-128 and Grain-128a with as few as 10, 4 and 10 faults respectively (and this may be improved further with more computational effort).
Last updated:  2013-08-15
A new class of semi-bent quadratic Boolean functions
Chunming Tang, Yanfeng Qi
In this paper, we present a new class of semi-bent quadratic Boolean functions of the form $f(x)=\sum_{i=1}^{\lfloor\frac{m-1}{2}\rfloor}Tr^n_1(c_ix^{1+4^{i}})$ $~(c_i\in \mathbb{F}_4$, $n=2m)$. We first characterize the semi-bentness of these quadratic Boolean functions; semi-bent functions exist only when $m$ is odd. For the case $m=p^r$, where $p$ is an odd prime satisfying certain conditions, we enumerate the semi-bent functions. Further, we give a simple characterization of semi-bentness for these functions in terms of linear properties of the $c_i$. In particular, for a special class of primes $p$, any quadratic Boolean function $f(x)=\sum_{i=1}^{\frac{p-1}{2}}Tr^{2p}_1(c_ix^{1+4^{i}})$ over $\mathbb{F}_{2^{2p}}$ is a semi-bent function.
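Semi-bentness of such functions can be checked directly for small parameters. The sketch below takes $m=3$ (so $n=6$) and $c_1=1$, giving $f(x)=Tr^6_1(x^5)$ over $\mathbb{F}_{2^6}$, and computes the Walsh spectrum; semi-bentness for even $n$ means the spectrum takes only the values $0$ and $\pm 2^{(n+2)/2} = \pm 16$. The irreducible polynomial $x^6+x+1$ is one standard choice of field representation.

```python
# F_{2^6} represented as polynomials over F_2 modulo x^6 + x + 1.
N = 6
MOD = 0b1000011  # x^6 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication with reduction modulo x^6 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr(x):
    """Absolute trace F_{2^6} -> F_2: x + x^2 + x^4 + ... + x^{32}."""
    t, y = 0, x
    for _ in range(N):
        t ^= y
        y = gf_mul(y, y)
    return t  # always 0 or 1

def walsh(f_vals):
    """Walsh transform W_f(a) = sum_x (-1)^{f(x) + Tr(a x)}."""
    return [sum((-1) ** (f_vals[x] ^ tr(gf_mul(a, x))) for x in range(1 << N))
            for a in range(1 << N)]

f_vals = [tr(gf_pow(x, 5)) for x in range(1 << N)]  # f(x) = Tr(x^{1+4})
spec = set(walsh(f_vals))
assert spec == {-16, 0, 16}  # |W_f| in {0, 2^{(n+2)/2}}: semi-bent
print(sorted(spec))          # [-16, 0, 16]
```

By Parseval's relation the squared Walsh values sum to $2^{2n} = 4096$, which matches exactly 16 spectral values of magnitude 16 here.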
Last updated:  2013-08-15
Cryptographically Enforced RBAC
Anna Lisa Ferrara, George Fuchsbauer, Bogdan Warinschi
Cryptographic access control promises to offer easily distributed trust and broader applicability, while reducing reliance on low-level online monitors. Traditional implementations of cryptographic access control rely on simple cryptographic primitives, whereas recent endeavors employ primitives with richer functionality and security guarantees. Worryingly, few of the existing cryptographic access-control schemes come with precise guarantees, the gap between the policy specification and the implementation being analyzed only informally, if at all. In this paper we begin addressing this shortcoming. Unlike prior work that targeted ad-hoc policy specification, we look at the well-established Role-Based Access Control (RBAC) model, as used in a typical file system. In short, we provide a precise syntax for a computational version of RBAC, offer rigorous definitions for cryptographic policy enforcement of a large class of RBAC security policies, and demonstrate that an implementation based on attribute-based encryption meets our security notions. We view our main contribution as being at the conceptual level. Although we work with RBAC for concreteness, our general methodology could guide future research for uses of cryptography in other access-control models.
Last updated:  2013-08-15
Improved OT Extension for Transferring Short Secrets
Vladimir Kolesnikov, Ranjit Kumaresan
We propose an optimization and generalization of the OT extension of Ishai et al. (Crypto 2003). For computational security parameter k, our OT extension for short secrets offers an O(log k) factor performance improvement in communication and computation, compared to prior work. In concrete terms, for today's security parameters, this means approximately a factor 2-3 improvement. This results in corresponding improvements in applications relying on such OT. In particular, for two-party semi-honest SFE, this results in an O(log k) factor improvement in communication over the state-of-the-art Yao Garbled Circuit, and has the same asymptotic complexity as the recent multi-round construction of Kolesnikov and Kumaresan (SCN 2012). For multi-party semi-honest SFE, where their construction is inapplicable, our construction implies an O(log k) factor communication and computation improvement over the best previous constructions. As with our OT extension, for today's security parameters, this means approximately a factor 2 improvement in semi-honest multi-party SFE. Our building block of independent interest is a novel IKNP-based framework for 1-out-of-n OT extension, which offers an O(log n) factor performance improvement over previous work (for n <= k), and a concrete factor improvement of up to 5 for today's security parameters (n = k = 128). Our protocol is the first practical OT with communication/computation cost sublinear in the security parameter (prior sublinear constructions of Ishai et al. are not efficient in concrete terms).
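The IKNP protocol being optimized rests on a simple correlation: the receiver holds random $k$-bit rows $t_j$, and the sender, holding a secret string $s$, obtains $q_j = t_j \oplus (r_j \cdot s)$ through $k$ base OTs on the matrix columns. The sketch below reproduces this core for 1-out-of-2 OT of short secrets, with the base-OT step simulated in-process by computing the sender's rows directly; it is a plain IKNP sketch, not the paper's improved 1-out-of-n construction, and SHA-256 stands in for the random oracle.

```python
import hashlib
import random

K = 128  # number of base OTs = computational security parameter

def H(j, v, nbytes):
    """Random-oracle-style mask derived from the row index and a K-bit value."""
    d = hashlib.sha256(j.to_bytes(4, "big") + v.to_bytes(K // 8, "big")).digest()
    return int.from_bytes(d[:nbytes], "big")

def ot_extension(pairs, choices, rng):
    """Core IKNP correlation for m 1-out-of-2 OTs of 32-bit secrets.
    In the real protocol the K columns of T reach the sender through K base
    OTs; here that step is simulated by computing the sender's rows
    q_j = t_j XOR (choices_j * s) directly."""
    m = len(pairs)
    s = rng.getrandbits(K)                        # sender's base-OT choice string
    T = [rng.getrandbits(K) for _ in range(m)]    # receiver's random rows t_j
    Q = [T[j] ^ (s if choices[j] else 0) for j in range(m)]

    # Sender masks each message pair; the receiver knows only one mask per
    # row, since computing H(j, Q[j] ^ s) from t_j would require knowing s.
    y = [(x0 ^ H(j, Q[j], 4), x1 ^ H(j, Q[j] ^ s, 4))
         for j, (x0, x1) in enumerate(pairs)]

    # Receiver unmasks the chosen message using its row t_j.
    return [y[j][choices[j]] ^ H(j, T[j], 4) for j in range(m)]

rng = random.Random(7)
pairs = [(rng.getrandbits(32), rng.getrandbits(32)) for _ in range(8)]
choices = [rng.randrange(2) for _ in range(8)]
assert ot_extension(pairs, choices, rng) == [pairs[j][choices[j]] for j in range(8)]
```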
Last updated:  2013-08-15
For an EPC-C1 G2 RFID compliant Protocol, CRC with Concatenation : No; PRNG with Concatenation : Yes
Masoumeh Safkhani, Nasour Bagheri
In this paper we present new design constraints for the EPCglobal Class 1 Generation 2 (EPC-C1 G2) standard which, if taken into account in the design of EPC-C1 G2 compliant authentication protocols, prevent the weaknesses of their predecessors and yield secure protocols. As an example, we use the recently proposed EPC-C1 G2-friendly protocol of Pang \textit{et al.} to illustrate our proposed constraints on the EPC-C1 G2 standard. Our security analysis of Pang \textit{et al.}'s protocol shows how its claims of untraceability and resistance against de-synchronization attacks are invalidated. More precisely, we present a very efficient de-synchronization attack and a traceability attack against the protocol. Finally, drawing on the vulnerabilities of Pang \textit{et al.}'s protocol, we formulate new conditions for the design of EPC-C1 G2 compliant protocols and, based on them, propose a secure EPC-C1 G2 RFID authentication scheme that serves as a good example of such protocols.
Last updated:  2013-08-15
An Efficient Scheme for Centralized Group Key Management in Collaborative Environments
Constantinos Patsakis, Agusti Solanas
The increasing demand for on-line collaborative applications has sparked interest in multicast services, which in many cases have to guarantee properties such as authentication or confidentiality within groups of users. To do so, cryptographic protocols are generally used, and the cryptographic keys on which they rely have to be managed (e.g. created, updated, distributed). The procedures to perform these operations are determined by so-called Group Key Management Schemes. Many schemes have been proposed and some of them have been proven to be vulnerable. This is the case of the Piao et al. scheme, whose scalability/efficiency is very good but which is vulnerable to many attacks, because its security is based on a ``weak'' mathematical problem that can be broken in polynomial time. Inspired by the concepts proposed in the Piao et al. scheme, we have re-designed the protocol, founded it on a hard mathematical problem, and tweaked some of its procedures. This way, we propose a new scheme that is efficient, collusion free, and provides backward and forward secrecy.
Last updated:  2014-07-21
Adaptively Secure Broadcast Encryption under Standard Assumptions with Better Efficiency
Kwangsu Lee, Dong Hoon Lee
In this paper, we present an efficient public-key broadcast encryption (PKBE) scheme with sub-linear size of public keys, private keys, and ciphertexts, and prove its adaptive security under standard assumptions. Compared with the currently best scheme of Garg {\it et al.} (CCS 2010), which provides adaptive security under standard assumptions and sub-linear size of various parameters, the ciphertext size of our scheme is $94\%$ shorter and the encryption algorithm of our scheme is $2.8$ times faster. To achieve our scheme, we adapt the dual system encryption technique of Waters. However, there is a challenging problem in using this technique for the construction of PKBE with sub-linear ciphertext size, namely the tag compression problem. To overcome this problem, we first devise a novel tag update technique for broadcast encryption. Using this technique, we build an efficient PKBE scheme in symmetric bilinear groups and prove its adaptive security under standard assumptions. After that, we build another PKBE scheme in asymmetric bilinear groups and also prove its adaptive security under simple assumptions.
Last updated:  2015-02-18
Classification of Elliptic/hyperelliptic Curves with Weak Coverings against the GHS attack under an Isogeny Condition
Tsutomu Iijima, Fumiyuki Momose, Jinhui Chao
The GHS attack is known to map the discrete logarithm problem (DLP) in the Jacobian of a curve $C_{0}$ defined over the degree-$d$ extension $k_{d}$ of a finite field $k$ to the DLP in the Jacobian of a new curve $C$ over $k$ which is a covering curve of $C_0$, and then to solve the DLP of the curves $C/k$ by variations of index calculus algorithms. It is therefore important to know which curves $C_0/k_d$ are subject to the GHS attack, especially those whose coverings $C/k$ have the smallest genus $g(C)=dg(C_0)$, a property we call the isogeny condition. Until now, 4 classes of such curves had been found by Thériault and 6 classes by Diem. In this paper, we present a classification, i.e. a complete list, of all elliptic curves and hyperelliptic curves $C_{0}/k_{d}$ of genus 2 and 3 which possess $(2,...,2)$ coverings $C/k$ of $\Bbb{P}^1$ under the isogeny condition (i.e. $g(C)=d \cdot g(C_{0})$) in the odd characteristic case. In particular, a classification of the Galois representation of $\Gal(k_{d}/k)$ acting on the covering group $\cov(C/\Bbb{P}^1)$ is used, together with an analysis of the ramification points of these coverings. Moreover, a general existence condition for a model of $C$ over $k$ is also obtained. As a result, a complete list of the defining equations of all curves $C_0/k_d$ with coverings $C/k$ is provided explicitly. Besides the 10 classes of $C_0/k_d$ already known, 17 new classes are found.
Last updated:  2014-02-13
Handling Authentication and Detection Probability in Multi-tag RFID Environment
Subhasish Dhal, Indranil Sengupta
In Radio Frequency Identification (RFID) technology, an adversary may access classified information about an object tagged with an RFID tag; therefore, authentication is a necessary requirement. Using multiple tags on an object increases the detection probability and simultaneously makes multiple resources available in the form of memory and computational power. However, the authentication process in a multi-tag arrangement may increase the traffic between reader and object and/or decrease the detection probability. The challenge, therefore, is to keep the detection probability intact without increasing the traffic. The presence of multiple tags makes it possible to distribute the authentication responsibility for an object among them. In this paper, we assume that an object is attached with multiple active tags and that in each session a randomly selected tag is responsible for the authentication process. The detection probability remains intact, since any active tag within the range of the reader can act as an intermediary.
Last updated:  2014-03-14
A New Object Searching Protocol for Multi-tag RFID
Subhasish Dhal, Indranil Sengupta
Searching for an object in a large set is a tedious task. Radio Frequency IDentification (RFID) technology helps us search for the desired object efficiently. In this technology, a small chip called an RFID tag, which contains identification information about an object, is attached to that object; in general, a whole set of objects is tagged in this way. To find a particular object while preserving the necessary security requirements, the RFID reader requests the tag in the desired object to respond with its encrypted identification information. Since only the tag in the desired object responds, the adversary learns of the existence of the desired object. Fake responses from tags in undesired objects may fool the adversary; however, computing fake responses is an overhead. In this paper, we propose a search technique that requires a negligible amount of computation for fake responses. Multiple tags in the same object increase the detection probability and also the probability of success of the search process. Our aim is to search for a particular object efficiently, preserving the necessary security requirements amid the various resource limitations of low-cost RFID tags.
Last updated:  2013-08-27
Efficient Public Integrity Checking for Cloud Data Sharing with Multi-User Modification
Jiawei Yuan, Shucheng Yu
In past years a body of data integrity checking techniques has been proposed for securing cloud data services. Most of these works assume that only the data owner can modify cloud-stored data. Recently a few attempts have started considering more realistic scenarios by allowing multiple cloud users to modify data with integrity assurance. However, these attempts are still far from practical due to the tremendous computational cost imposed on cloud users. Moreover, collusion between misbehaving cloud servers and revoked users is not considered. This paper proposes a novel data integrity checking scheme characterized by multi-user modification, collusion resistance and a constant computational cost of integrity checking for cloud users, thanks to our novel design of polynomial-based authentication tags and proxy tag update techniques. Our scheme also supports public checking and efficient user revocation, and is provably secure. Numerical analysis and extensive experimental results show the efficiency and scalability of our proposed scheme.
Last updated:  2013-08-14
A Variant of Coppersmith's Algorithm with Improved Complexity and Efficient Exhaustive Search
Jean-Sébastien Coron, Jean-Charles Faugère, Guénaël Renault, Rina Zeitoun
Coppersmith described at Eurocrypt '96 a polynomial-time algorithm for finding small roots of univariate modular equations, based on lattice reduction. In this paper we describe the first improvement of the asymptotic complexity of Coppersmith's algorithm. Our method takes advantage of Coppersmith's matrix structure in order to apply the LLL algorithm to a matrix whose elements are smaller than those of Coppersmith's original matrix. Using the $L^2$ algorithm, the asymptotic complexity of our method is $O(\log^{6+\epsilon} N)$ for any $\epsilon > 0$, instead of $O(\log^{8+\epsilon} N)$ previously. Furthermore, we devise a method that allows us to speed up the exhaustive search which is usually performed to reach Coppersmith's theoretical bound. Our approach takes advantage of the LLL reduction performed to test one guess to reduce the complexity of the LLL reduction performed for the next guess. Experimental results confirm that it leads to a considerable performance improvement.