All papers in 2015 (Page 3 of 1255 results)

Last updated:  2015-10-30
The Complexity of Computing the Optimal Composition of Differential Privacy
Jack Murtagh, Salil Vadhan
In the study of differential privacy, composition theorems (starting with the original paper of Dwork, McSherry, Nissim, and Smith (TCC'06)) bound the degradation of privacy when composing several differentially private algorithms. Kairouz, Oh, and Viswanath (ICML'15) showed how to compute the optimal bound for composing k arbitrary (epsilon,delta)-differentially private algorithms. We characterize the optimal composition for the more general case of k arbitrary (epsilon_1, delta_1),...,(epsilon_k, delta_k)-differentially private algorithms where the privacy parameters may differ for each algorithm in the composition. We show that computing the optimal composition in general is #P-complete. Since computing optimal composition exactly is infeasible (unless FP=#P), we give an approximation algorithm that computes the composition to arbitrary accuracy in polynomial time. The algorithm is a modification of Dyer's dynamic programming approach to approximately counting solutions to knapsack problems (STOC'03).
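For orientation (this sketch is not from the paper), the two classical composition bounds that the optimal-composition result tightens can be stated in a few lines of Python; the homogeneous Kairouz-Oh-Viswanath formula and the #P-hard heterogeneous case are deliberately omitted, and the function names are ours.

```python
import math

def basic_composition(params):
    """Basic composition: running k mechanisms that are (eps_i, delta_i)-DP
    yields (sum of eps_i, sum of delta_i)-DP overall."""
    return sum(e for e, _ in params), sum(d for _, d in params)

def advanced_composition(eps, delta, k, delta_slack):
    """Advanced composition (Dwork-Rothblum-Vadhan) for k copies of one
    (eps, delta)-DP mechanism, paying an extra failure probability delta_slack."""
    eps_total = (eps * math.sqrt(2 * k * math.log(1 / delta_slack))
                 + k * eps * (math.exp(eps) - 1))
    return eps_total, k * delta + delta_slack

# Composing 100 copies of a (0.1, 1e-6)-DP mechanism:
eps_b, delta_b = basic_composition([(0.1, 1e-6)] * 100)      # eps grows linearly in k
eps_a, delta_a = advanced_composition(0.1, 1e-6, 100, 1e-6)  # eps grows roughly as sqrt(k)
```

The optimal bound characterized in the paper sits below both of these; the point of the #P-completeness result is that evaluating it exactly for heterogeneous parameters is infeasible, hence the approximation algorithm.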
Last updated:  2015-10-30
Information-theoretic Local Non-malleable Codes and their Applications
Nishanth Chandran, Bhavana Kanukurthi, Srinivasan Raghuraman
Error correcting codes, though powerful, are only applicable in scenarios where the adversarial channel does not introduce ``too many" errors into the codewords. Yet, the question of having guarantees even in the face of many errors is well-motivated. Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), address precisely this question. Such codes guarantee that even if an adversary completely over-writes the codeword, he cannot transform it into a codeword for a related message. Not only is this a creative solution to the problem mentioned above, it is also a very meaningful one. Indeed, non-malleable codes have inspired a rich body of theoretical constructions as well as applications to tamper-resilient cryptography, CCA2 encryption schemes and so on. Another remarkable variant of error correcting codes were introduced by Katz and Trevisan (STOC 2000) when they explored the question of decoding ``locally". Locally decodable codes are coding schemes which have an additional ``local decode" procedure: in order to decode a bit of the message, this procedure accesses only a few bits of the codeword. These codes too have received tremendous attention from researchers and have applications to various primitives in cryptography such as private information retrieval. More recently, Chandran, Kanukurthi and Ostrovsky (TCC 2014) explored the converse problem of making the ``re-encoding" process local. Locally updatable codes have an additional ``local update" procedure: in order to update a bit of the message, this procedure accesses/rewrites only a few bits of the codeword. At TCC 2015, Dachman-Soled, Liu, Shi and Zhou initiated the study of locally decodable and updatable non-malleable codes, thereby combining all the important properties mentioned above into one tool. Achieving locality and non-malleability is non-trivial. 
Yet, Dachman-Soled et al. provide a meaningful definition of local non-malleability and provide a construction that satisfies it. Unfortunately, their construction is secure only in the computational setting. In this work, we construct information-theoretic non-malleable codes which are locally updatable and decodable. Our codes are non-malleable against $\mathcal{F}_{\mathsf{half}}$, the class of tampering functions where each function is arbitrary but acts (independently) on two separate parts of the codeword. This is one of the strongest adversarial models for which explicit constructions of standard non-malleable codes (without locality) are known. Our codes have $O(1)$ rate and locality $O(\lambda)$, where $\lambda$ is the security parameter. We also show a rate $1$ code with locality $\omega(1)$ that is non-malleable against bit-wise tampering functions. Finally, similar to Dachman-Soled et al., our work finds applications to information-theoretic secure RAM computation.
Last updated:  2015-12-21
Making the Best of a Leaky Situation: Zero-Knowledge PCPs from Leakage-Resilient Circuits
Yuval Ishai, Mor Weiss, Guang Yang
A Probabilistically Checkable Proof (PCP) allows a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form ``$x\in L$'' by querying only a few bits of the proof. A zero-knowledge PCP (ZKPCP) is a PCP with the additional guarantee that the view of any verifier querying a bounded number of proof bits can be efficiently simulated given the input $x$ alone, where the simulated and actual views are statistically close. Originating from the first ZKPCP construction of Kilian et al. (STOC '97), all previous constructions relied on locking schemes, an unconditionally secure oracle-based commitment primitive. The use of locking schemes makes the verifier inherently adaptive; namely, it needs to make at least two rounds of queries to the proof. Motivated by the goal of constructing non-adaptively verifiable ZKPCPs, we suggest a new technique for compiling standard PCPs into ZKPCPs. Our approach is based on leakage-resilient circuits, which are circuits that withstand certain ``side-channel'' attacks, in the sense that these attacks reveal nothing about the (properly encoded) input, other than the output. We observe that the verifier's oracle queries constitute a side-channel attack on the wire-values of the circuit verifying membership in $L$, so a PCP constructed from a circuit resilient against such attacks would be ZK. However, a leakage-resilient circuit evaluates the desired function only if its input is properly encoded, i.e., has a specific structure, whereas by generating a ``proof'' from the wire-values of the circuit on an ill-formed ``encoded'' input, one can cause the verification to accept inputs $x\notin L$ with probability 1. We overcome this obstacle by constructing leakage-resilient circuits with the additional guarantee that ill-formed encoded inputs are detected.
Using this approach, we obtain the following results: (1) We construct the first witness-indistinguishable PCPs (WIPCP) for NP with non-adaptive verification. WIPCPs relax ZKPCPs by only requiring that different witnesses be indistinguishable. Our construction combines strong leakage-resilient circuits as above with the PCP of Arora and Safra (FOCS '92), in which queries correspond to side-channel attacks by shallow circuits, and with correlation bounds for shallow circuits due to Lovett and Srinivasan (RANDOM '11). (2) Building on these WIPCPs, we construct non-adaptively verifiable computational ZKPCPs for NP in the common random string model, assuming that one-way functions exist. (3) As an application of the above results, we construct 3-round WI and ZK proofs for NP in a distributed setting in which the prover and the verifier interact with multiple servers, of which $t$ can be corrupted, and the total communication involving the verifier consists of $\mathrm{poly}\log(t)$ bits.
Last updated:  2015-10-30
Computational Soundness of Uniformity Properties for Multi-party Computation based on LSSS
HUI ZHAO, Kouichi Sakurai
We provide a symbolic model for multi-party computation based on a linear secret-sharing scheme, and prove that this model is computationally sound: if there is an attack in the computational world, then there is an attack in the symbolic (abstract) model. Our original contribution is that we deal with uniformity properties, which cannot be described using a single execution trace, while considering an unbounded number of sessions of the protocols in the presence of active and adaptive adversaries.
Last updated:  2015-10-30
Oblivious Parallel RAM: Improved Efficiency and Generic Constructions
Binyi Chen, Huijia Lin, Stefano Tessaro
Oblivious RAM (ORAM) garbles read/write operations by a client (to access a remote storage server or a random-access memory) so that an adversary observing the garbled access sequence cannot infer any information about the original operations, other than their overall number. This paper considers the natural setting of Oblivious Parallel RAM (OPRAM), recently introduced by Boyle, Chung, and Pass (TCC 2016-A), where $m$ clients simultaneously access the storage server in parallel. The clients are additionally connected via point-to-point links to coordinate their accesses. However, this additional inter-client communication must also remain oblivious. The main contribution of this paper is twofold: We construct the first OPRAM scheme that (nearly) matches the storage and server-client communication complexities of the most efficient single-client ORAM schemes. Our scheme is based on an extension of Path-ORAM by Stefanov et al. (CCS 2013). Moreover, we present a generic transformation turning any (single-client) ORAM scheme into an OPRAM scheme.
Last updated:  2015-10-30
PLayPUF: Programmable Logically Erasable PUFs for Forward and Backward Secure Key Management
Chenglu Jin, Xiaolin Xu, Wayne Burleson, Ulrich Rührmair, Marten van Dijk
A silicon Physical Unclonable Function (PUF) is a hardware security primitive which implements a unique and unclonable function on a chip: given a challenge as input, it computes a response by measuring and leveraging (semiconductor process) manufacturing variations which differ from PUF to PUF. In this paper, we observe that by equipping a PUF with a small, constant-sized, tamper-resistant state, whose content cannot be modified but can be read by adversaries, new and powerful cryptographic applications of PUFs become feasible. In particular, we show a new hardware concept which we call a Programmable Logically erasable PUF (PLayPUF). Its distinctive feature is that it allows the selective erasure of single challenge-response pairs (CRPs) without altering any other PUF-CRPs. The selective erasure of a CRP can be programmed a priori by using a counter to indicate how many times the CRP can be read out before erasure. We show that PLayPUFs can realize forward and backward secure key management schemes for public-key encryption. The new notion of backward security informally means that even if an attacker uncovers a session key through the key management interface, the legitimate user will detect this leakage before he will ever use the session key. Backward security and its implementation via PLayPUFs allow the construction of novel, self-recovering certificate authorities (CAs) without relying on a digital master key. Our new CAs immediately detect key exposure through their interfaces, and recover from it without stopping their service and without ever issuing certificates based on such exposed keys. This is a crucial step forward in implementing secure key management. We deliver a full proof-of-concept implementation of our new scheme on FPGA, together with detailed performance data, as well as formal definitions of our new concepts, including the first definition of stateful PUFs.
Last updated:  2015-10-30
Cryptanalysis and Improvement of Identity-based Proxy Multi-signature scheme
Jayaprakash Kar
Cao and Cao recently proposed an identity-based proxy multi-signature scheme and claimed that the scheme is provably secure in the random oracle model. In this paper we review the scheme and prove that it is vulnerable to a chosen-message attack under the defined security model. To prevent this attack, we propose an improved version of the scheme. A proxy multi-signature scheme allows an authorized proxy signer to sign a message on behalf of a group of original signers.
Last updated:  2015-10-31
Comparison Between Irreducible and Separable Goppa Code in McEliece Cryptosystem
Thuraya M. Qaradaghi, Newroz N. Abdulrazaq
The McEliece cryptosystem is an asymmetric cryptosystem based on error-correcting codes. The classical McEliece scheme uses an irreducible binary Goppa code, which is considered unbreakable until now, especially with the parameters [1024, 524, 101], but it suffers from a large public-key matrix, which makes it difficult to use in practice. In this work, both irreducible and separable Goppa codes are used, with flexible parameters and dynamic error vectors, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem is carried out. For the encryption stage, two types of test were chosen to obtain a better comparison: in the first, the random message is kept constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are kept constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity-check matrix is higher for the separable than for the irreducible McEliece cryptosystem, which is expected because the decryption process for the separable type must compute an extra parity-check matrix for g2(z); on the other hand, the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The implementation was done in Visual Studio with C#.
Last updated:  2017-05-22
Counter-in-Tweak: Authenticated Encryption Modes for Tweakable Block Ciphers
Thomas Peyrin, Yannick Seurin
We propose the Synthetic Counter-in-Tweak (SCT) mode, which turns a tweakable block cipher into a nonce-based authenticated encryption scheme (with associated data). The SCT mode combines in a SIV-like manner a Wegman-Carter MAC inspired from PMAC for the authentication part and a new counter-like mode for the encryption part, with the unusual property that the counter is applied on the tweak input of the underlying tweakable block cipher rather than on the plaintext input. Unlike many previous authenticated encryption modes, SCT enjoys provable security beyond the birthday bound (and even up to roughly $2^n$ tweakable block cipher calls, where $n$ is the block length, when the tweak length is sufficiently large) in the nonce-respecting scenario where nonces are never repeated. In addition, SCT ensures security up to the birthday bound even when nonces are reused, in the strong nonce-misuse resistance sense (MRAE) of Rogaway and Shrimpton (EUROCRYPT 2006). To the best of our knowledge, this is the first authenticated encryption mode that provides at the same time close-to-optimal security in the nonce-respecting scenario and birthday-bound security for the nonce-misuse scenario. While two passes are necessary to achieve MRAE-security, our mode enjoys a number of desirable features: it is simple, parallelizable, it requires the encryption direction only, it is particularly efficient for small messages compared to other nonce-misuse resistant schemes (no precomputation is required) and it allows incremental update of associated data.
Last updated:  2015-10-30
Verifiable Random Functions from Standard Assumptions
Dennis Hofheinz, Tibor Jager
The question whether there exist verifiable random functions with exponential-sized input space and full adaptive security based on a non-interactive, constant-size assumption is a long-standing open problem. We construct the first verifiable random functions which simultaneously achieve all these properties. Our construction can securely be instantiated in symmetric bilinear groups, based on any member of the (n-1)-linear assumption family with n >= 3. This includes, for example, the 2-linear assumption, which is also known as the decision linear (DLIN) assumption.
Last updated:  2015-10-29
Reconfigurable Cryptography: A flexible approach to long-term security
Julia Hesse, Dennis Hofheinz, Andy Rupp
We put forward the concept of a reconfigurable cryptosystem. Intuitively, a reconfigurable cryptosystem allows to increase the security of the system at runtime, by changing a single central parameter we call common reference string (CRS). In particular, e.g., a cryptanalytic advance does not necessarily entail a full update of a large public-key infrastructure; only the CRS needs to be updated. In this paper we focus on the reconfigurability of encryption and signature schemes, but we believe that this concept and the developed techniques can also be applied to other kind of cryptosystems. Besides a security definition, we offer two reconfigurable encryption schemes, and one reconfigurable signature scheme. Our first reconfigurable encryption scheme uses indistinguishability obfuscation (however only in the CRS) to adaptively derive short-term keys from long-term keys. The security of long-term keys can be based on a one-way function, and the security of both the indistinguishability obfuscation and the actual encryption scheme can be increased on-the-fly, by changing the CRS. We stress that our scheme remains secure even if previous short-term secret keys are leaked. Our second reconfigurable encryption scheme has a similar structure (and similar security properties), but relies on a pairing-friendly group instead of obfuscation. Its security is based on the recently introduced hierarchy of \(k\)-SCasc assumptions. Similar to the \(k\)-Linear assumption, it is known that \(k\)-SCasc implies \((k+1)\)-SCasc, and that this implication is proper in the generic group model. Our system allows to increase \(k\) on-the-fly, just by changing the CRS. In that sense, security can be increased without changing any long-term keys. We also offer a reconfigurable signature scheme based on the same hierarchy of assumptions.
Last updated:  2016-07-13
From Private Simultaneous Messages to Zero-Information Arthur-Merlin Protocols and Back
Benny Applebaum, Pavel Raykov
Göös, Pitassi and Watson (ITCS, 2015) have recently introduced the notion of Zero-Information Arthur-Merlin Protocols (ZAM). In this model, which can be viewed as a private version of the standard Arthur-Merlin communication complexity game, Alice and Bob are holding a pair of inputs $x$ and $y$ respectively, and Merlin, the prover, attempts to convince them that some public function $f$ evaluates to 1 on $(x,y)$. In addition to standard completeness and soundness, Göös et al. require an additional ``zero-knowledge'' property which asserts that on each yes-input, the distribution of Merlin's proof leaks no information about the inputs $(x,y)$ to an external observer. In this paper, we relate this new notion to the well-studied model of Private Simultaneous Messages (PSM) that was originally suggested by Feige, Kilian and Naor (STOC, 1994). Roughly speaking, we show that the randomness complexity of ZAM essentially corresponds to the communication complexity of PSM, and that the communication complexity of ZAM essentially corresponds to the randomness complexity of PSM. This relation works in both directions where different variants of PSM are being used. Consequently, we derive better upper-bounds on the communication complexity of ZAM for arbitrary functions. As a secondary contribution, we reveal new connections between different variants of PSM protocols which we believe to be of independent interest.
Last updated:  2015-10-29
Exploiting Transformations of the Galois Configuration to Improve Guess-and-Determine Attacks on NFSRs
Gefei Li, Yuval Yarom, Damith C. Ranasinghe
Guess-and-determine attacks are based on guessing a subset of internal state bits and subsequently using these guesses, together with the cipher's output function, to determine the value of the remaining state. These attacks have been successfully employed to break NFSR-based stream ciphers. The complexity of a guess-and-determine attack is directly related to the number of state bits used in the output function. Consequently, an opportunity exists for efficient cryptanalysis of NFSR-based stream ciphers if the NFSRs used can be transformed to derive an equivalent stream cipher with a simplified output function. In this paper, we present a new technique for transforming NFSRs. We show how we can use this technique to transform NFSRs into equivalent NFSRs with simplified output functions. We explain how such transformations can assist in cryptanalysis of NFSR-based ciphers and demonstrate the application of the technique to successfully cryptanalyse the lightweight cipher Sprout. Our attack on Sprout has a time complexity of 2^70.87, which is 2^3.64 times better than any published non-TMD attack, and requires only 164 bits of plaintext-ciphertext pairs.
Last updated:  2016-04-13
Homomorphic evaluation requires depth
Andrej Bogdanov, Chin Ho Lee
We show that homomorphic evaluation of any non-trivial functionality of sufficiently many inputs with respect to any CPA secure homomorphic encryption scheme cannot be implemented by circuits of polynomial size and constant depth, i.e., in the class AC0. In contrast, we observe that there exist ordinary public-key encryption schemes of quasipolynomial security in AC0 assuming noisy parities are exponentially hard to learn. We view this as evidence that homomorphic evaluation is inherently more complex than basic operations in encryption schemes.
Last updated:  2015-10-28
The Index j in RC4 is not Pseudo-random due to Non-existence of Finney Cycle
Subhamoy Maitra
In this very short note we prove that the pseudo-random index j of RC4 is indeed not pseudo-random. This is a simple result that missed our attention for quite a long time. We show that in the long term Pr(j = i+1) = 1/N - 1/N^2, instead of the random association 1/N, and this happens due to the non-existence of the condition S[i] = 1 with j = i+1 that is mandatory for the Finney cycle.
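The stated long-term behaviour of j can be probed empirically. Below is a minimal sketch (standard RC4 KSA and PRGA; this code and the key bytes are ours, not from the note) that counts how often j lands on i+1:

```python
def rc4_ksa(key, n=256):
    """RC4 key scheduling: build the initial permutation S from the key bytes."""
    S = list(range(n))
    j = 0
    for i in range(n):
        j = (j + S[i] + key[i % len(key)]) % n
        S[i], S[j] = S[j], S[i]
    return S

def prga_j_bias(key, steps, n=256):
    """Run the RC4 PRGA (keystream output itself is not needed here) and
    return the empirical frequency of the event j == i+1 (mod n)."""
    S = rc4_ksa(key, n)
    i = j = hits = 0
    for _ in range(steps):
        i = (i + 1) % n
        j = (j + S[i]) % n
        S[i], S[j] = S[j], S[i]
        if j == (i + 1) % n:
            hits += 1
    return hits / steps

p = prga_j_bias(b"\x01\x02\x03\x04\x05", 500_000)
# p should be close to 1/256; the claimed correction term 1/256^2 is far
# below the noise floor of a single run of this length.
```

Note that resolving the 1/N^2 term for N = 256 would require averaging over many keys and far longer runs; the snippet only illustrates the measurement, not the proof.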
Last updated:  2016-02-03
ARMed SPHINCS -- Computing a 41KB signature in 16KB of RAM
Andreas Hülsing, Joost Rijneveld, Peter Schwabe
This paper shows that it is feasible to implement the stateless hash-based signature scheme SPHINCS-256 on an embedded microprocessor with memory even smaller than a signature and limited computing power. We demonstrate that it is possible to generate and verify the 41 KB signature on an ARM Cortex M3 that only has 16 KB of memory available. We provide benchmarks for our implementation which show that this can be used in practice. To analyze the costs of using the stateless SPHINCS scheme instead of its stateful alternatives, we also implement XMSS$^{MT}$ on this platform and give a comparison.
Last updated:  2016-03-10
The Number of Boolean Functions with Multiplicative Complexity 2
Magnus Gausdal Find, Daniel Smith-Tone, Meltem Sonmez Turan
Multiplicative complexity is a complexity measure defined as the minimum number of AND gates required to implement a given primitive by a circuit over the basis (AND, XOR, NOT). Implementations of ciphers with a small number of AND gates are preferred in protocols for fully homomorphic encryption, multi-party computation and zero-knowledge proofs. In 2002, Fischer and Peralta showed that the number of $n$-variable Boolean functions with multiplicative complexity one equals $2\binom{2^n}{3}$. In this paper, we study Boolean functions with multiplicative complexity 2. By characterizing the structure of these functions in terms of affine equivalence relations, we provide a closed form formula for the number of Boolean functions with multiplicative complexity 2.
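The Fischer-Peralta closed form quoted above is straightforward to evaluate; a two-line sketch (the function name is ours, and the printed values for small n are purely illustrative):

```python
from math import comb

def mc1_count(n):
    """Number of n-variable Boolean functions with multiplicative
    complexity exactly 1, per the Fischer-Peralta formula 2 * C(2^n, 3)
    quoted in the abstract above."""
    return 2 * comb(2 ** n, 3)

for n in range(2, 6):
    print(n, mc1_count(n))
```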
Last updated:  2015-10-28
Fully Homomorphic Encryption with Composite Number Modulus
Masahiro Yagisawa
Gentry’s bootstrapping technique is the most famous method of obtaining fully homomorphic encryption. In previous work I proposed a fully homomorphic encryption scheme without bootstrapping, which had a weakness in the plaintext. In this paper I propose an improved fully homomorphic encryption scheme on a non-associative octonion ring over a finite ring with a composite number modulus, where the plaintext p consists of three numbers u, v, w. The proposed fully homomorphic encryption scheme is immune to the “p and -p attack”. The scheme is based on the computational difficulty of solving multivariate algebraic equations of high degree, while almost all multivariate cryptosystems proposed until now are based on quadratic equations in order to avoid an explosion of the coefficients. Because the proposed fully homomorphic encryption scheme is based on multivariate algebraic equations of high degree or with too many variables, it resists the Gröbner basis attack, the differential attack, the rank attack and so on. It is proved that if there exists a PPT algorithm that decrypts the plaintext from the ciphertexts of the proposed scheme, then there exists a PPT algorithm that factors the given composite number modulus.
Last updated:  2016-10-27
Maturity and Performance of Programmable Secure Computation
David W. Archer, Dan Bogdanov, Benny Pinkas, Pille Pullonen
Secure computation (SC) research has gained traction internationally in the last five years. In the United States, the DARPA PROCEED program (2011-2015) focused on the development of multiple SC paradigms and improving their performance. In the European Union, the PRACTICE program (2013-2016) focuses on its use to secure cloud computing. Both programs have demonstrated exceptional prototypes and performance improvements. In this paper, we collect the results from both programs and other published literature to present the state of the art in what can be achieved with today's secure computing technology. We consider linear secret sharing based computations, garbled circuits and fully homomorphic encryption. We describe theoretical and practical criteria that can be used to characterize secure computation paradigms and provide an overview of common benchmarks such as AES evaluation.
Last updated:  2015-11-04
Revisiting LEGOs: Optimizations, Analysis, and their Limit
Yan Huang, Ruiyu Zhu
The cut-and-choose paradigm gives by far the most popular and efficient secure two-party computation protocols in the standard malicious model, able to offer s bits of security with only s copies of garbled circuits in the one-time execution scenario. Nielsen and Orlandi have even proposed the seminal idea of LEGO-style cut-and-choose to further reduce the number of circuit copies to less than s while still keeping constant round complexity. However, a substantial gap still exists between the theoretical idea of LEGO cut-and-choose and a practical implementation: e.g., LEGO is not compatible with free-XOR and uses expensive asymmetric-key operations for soldering, while MiniLEGO leaves the important building block of soldering unspecified. In this work, we introduce XOR-Homomorphic Interactive Hash and propose an efficient implementation of this primitive by combining Reed-Solomon encoding and k-out-of-n oblivious transfers. We show how to apply this primitive to solve the performance-critical wire-soldering problem and propose a new LEGO-style cut-and-choose protocol. Compared to previous LEGO-style protocols, ours only requires a single (as opposed to “a majority of”) correctly garbled gate in each bucket to guarantee security against malicious adversaries. Plus, through integrating half-gates garbling, we double the chance of a “bad” gate being detected in the check stage (compared to MiniLEGO). Our construction is more bandwidth-efficient than Lindell (CRYPTO, 2013) either when the circuit size N is sufficiently large, or when N is larger than a threshold solely determined by the ratio between the input and circuit sizes; e.g., we use less bandwidth for computing most linear and sub-linear functions. Deploying a LEGO-style cut-and-choose protocol involves convoluted protocol parameter selection.
To this end, we give a thorough analysis of the relations among all protocol parameters and propose efficient algorithms that automate the search for the optimal parameter configuration based on a requirement specification (i.e., the security parameters s,k and application parameter N) with provable accuracy. Last, we formally prove a tight bound on the benefit of LEGO-style secure computation protocols, in the sense that the circuit duplication factor $\kappa$ has to be larger than 2 and any $\kappa > 2$ is indeed achievable. This corrects a common mistake of claiming LEGO cut-and-choose can reduce $\kappa$ to $O(sk/ \log N)$ since $2 \not\in O(sk/\log N)$.
Last updated:  2016-06-02
Cryptanalysis of GGH15 Multilinear Maps
Jean-Sebastien Coron, Moon Sung Lee, Tancrede Lepoint, Mehdi Tibouchi
We describe a cryptanalysis of the GGH15 multilinear maps. Our attack breaks in polynomial time the multipartite key-agreement protocol by generating an equivalent user private key. Our attack only applies to GGH15 without safeguards; for GGH15 with safeguards we only have a partial cryptanalysis that can recover any ratio of secret exponents. We also describe attacks against variants of the GGH13 multilinear maps proposed by Halevi (ePrint 2015/866) aiming at supporting graph-induced constraints, as in GGH15.
Last updated:  2015-10-28
Patterson-Wiedemann type functions on 21 variables with Nonlinearity greater than Bent Concatenation bound
Selcuk Kavut, Subhamoy Maitra
Nonlinearity is one of the most challenging combinatorial properties in the domain of Boolean function research. Obtaining nonlinearity greater than the bent concatenation bound for an odd number of variables continues to be one of the most sought-after combinatorial research problems. The pioneering result in this direction was discovered by Patterson and Wiedemann in 1983 (IEEE-IT), who considered Boolean functions on $5 \times 3 = 15$ variables that are invariant under the actions of the cyclic group ${GF(2^5)}^\ast \cdot {GF(2^3)}^\ast$ as well as the group of Frobenius automorphisms. Some of these Boolean functions possess nonlinearity greater than the bent concatenation bound. The next possible option for exploring such functions is on $7 \times 3 = 21$ variables. However, obtaining such functions remained elusive for more than three decades, even after substantial efforts as evident in the literature. In this paper, we exploit combinatorial arguments together with heuristic search to demonstrate such functions for the first time.
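For context (standard background, not from the paper): for odd n, concatenating two bent functions on n-1 variables achieves nonlinearity 2^(n-1) - 2^((n-1)/2), which is the bent concatenation bound referred to above. A quick sketch evaluating the bound for the two cases discussed (the function name is ours):

```python
def bent_concatenation_bound(n):
    """Bent concatenation bound on nonlinearity for odd n: concatenating
    two bent functions on n-1 variables gives 2^(n-1) - 2^((n-1)/2)."""
    assert n % 2 == 1, "bound stated for an odd number of variables"
    return 2 ** (n - 1) - 2 ** ((n - 1) // 2)

print(bent_concatenation_bound(15))  # the bound Patterson-Wiedemann exceeded
print(bent_concatenation_bound(21))  # the bound the present paper exceeds
```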
Last updated:  2017-01-16
CARIBE: Cascaded IBE for Maximum Flexibility and User-side Control
Britta Hale, Christopher Carr, Danilo Gligoroski
Mass surveillance and a lack of end-user encryption, coupled with a growing demand for key escrow under legal oversight and certificate authority security concerns, raise the question of the appropriateness of continued general dependency on PKI. Under this context, we examine Identity-Based Encryption (IBE) as an alternative to public-key encryption. Cascade encryption, or sequential multiple encryption, is the concept of layering encryption such that the ciphertext from one encryption step is the plaintext of the next. We describe CARIBE, a cascaded IBE scheme, for which we also provide a cascaded CCA security experiment, IND-ID-C.CCA, and prove its security in the computational model. CARIBE combines the ease-of-use of IBE with key escrow, limited to the case when the entire set of participating PKGs collaborate. Furthermore, we describe a particular CARIBE scheme, CARIBE-S, where the receiver is a self-PKG – one of the several PKGs included in the cascade. CARIBE-S inherits IND-ID-C.CCA from CARIBE, and avoids key escrow entirely. In essence, CARIBE-S offers the maximum flexibility of the IBE paradigm and gives the users complete control without the key escrow problem.
Last updated:  2015-10-27
Real time detection of cache-based side-channel attacks using Hardware Performance Counters
Marco Chiappetta, Erkay Savas, Cemal Yilmaz
In this paper we analyze three methods to detect cache-based side-channel attacks in real time, preventing or limiting the amount of leaked information. Two of the three methods are based on machine learning techniques, and all three can successfully detect an attacker in about one fifth of the time required to complete the attack. There were no false positives in our test environment. Moreover, we could not measure a change in the execution time of the processes involved in the attack, meaning there is no perceivable overhead. We also analyze how the detection systems behave with a modified version of one of the spy processes. With some optimization, we are confident these systems can be used in real-world scenarios.
Last updated:  2015-10-27
The Ultimate Transposition Cipher (UTC)
Gideon Samid
An Ultimate Transposition Cipher (UTC) is defined as a cipher that transposes any permutation of some n elements to any other permutation of the same elements. Hence, by listing together the protected message and plausible alternatives to it, and then mixing them, one secures a ciphertext which the intended reader will readily "un-mix" (using the shared key), but the cryptanalyst will find proper keys for all the 'decoy messages' and will not be able to go further. The UTC-transposed permutation (the ciphertext) projects mathematical parity towards these plausible candidates for the actual message sent. We show that in real-life situations, when both sides can reasonably prepare a list of plausible plaintexts, the UTC equals the mathematical security offered by Vernam's One-Time Pad, albeit without Vernam's key-size inconvenience. UTC decoys may be constructed manually or with AI. Applying UTC protection before further encrypting with any common cipher will add a new dimension of equivocation (a clear entropic advantage) to the prevailing intractability-only protection. An example of an actual UTC is referenced and described.
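As a toy illustration of the transposition idea (a hypothetical encoding of ours, not the referenced UTC construction): the key is a permutation of positions, encryption applies it, and decryption applies its inverse. Since any rearrangement of the same symbols is reachable under some key, every plausible decoy has a key that "decrypts" to it, which is the source of the equivocation.

```python
def encrypt(plaintext, key):
    # key is a permutation of range(len(plaintext)); ciphertext[i] = plaintext[key[i]].
    return "".join(plaintext[k] for k in key)

def decrypt(ciphertext, key):
    # Apply the inverse permutation to recover the plaintext.
    out = [None] * len(key)
    for i, k in enumerate(key):
        out[k] = ciphertext[i]
    return "".join(out)
```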
Last updated:  2015-11-05
Essentially Optimal Robust Secret Sharing with Maximal Corruptions
Allison Bishop, Valerio Pastro, Rajmohan Rajaraman, Daniel Wichs
In a $t$-out-of-$n$ robust secret sharing scheme, a secret message is shared among $n$ parties who can reconstruct the message by combining their shares. An adversary can adaptively corrupt up to $t$ of the parties, get their shares, and modify them arbitrarily. The scheme should satisfy privacy, meaning that the adversary cannot learn anything about the shared message, and robustness, meaning that the adversary cannot cause the reconstruction procedure to output an incorrect message. Such schemes are only possible in the case of an honest majority, and here we focus on unconditional security in the maximal corruption setting where $n = 2t+1$. In this scenario, to share an $m$-bit message with a reconstruction failure probability of at most $2^{-k}$, a known lower bound shows that the share size must be at least $m + k$ bits. On the other hand, all prior constructions have share size that scales linearly with the number of parties $n$, and the prior state-of-the-art scheme due to Cevallos et al. (EUROCRYPT '12) achieves $m + \widetilde{O}(k + n)$. In this work, we construct the first robust secret sharing scheme in the maximal corruption setting with $n=2t+1$, that avoids the linear dependence between share size and the number of parties $n$. In particular, we get a share size of only $m + \widetilde{O}(k)$ bits. Our scheme is computationally efficient and relies on approximation algorithms for the minimum graph bisection problem.
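For context, the non-robust baseline here is Shamir's $t$-out-of-$n$ scheme (any $t+1$ shares reconstruct, $t$ reveal nothing); robustness is about additionally tolerating $t$ adaptively modified shares. A minimal Shamir sketch over a prime field (illustrative parameters, not the paper's construction):

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field the shares live in

def share(secret, t, n):
    # Random degree-t polynomial with constant term = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at 0 from any t+1 points.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Each share above is about as long as the secret; the paper's contribution is keeping robust shares at $m + \widetilde{O}(k)$ bits even when $t$ of the $2t+1$ shares may be maliciously altered.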
Last updated:  2015-10-27
Secure Dating with Four or Fewer Cards
Antonio Marcedone, Zikai Wen, Elaine Shi
In Cornell's “CS4830: Introduction to Cryptography” offered in Fall 2015, students were asked to devise a physical secure two-party protocol for computing AND, using 4 cards or fewer. An elegant 5-card scheme was first proposed by Boer et al. Recently, at Asiacrypt 2012, Mizuki et al. were the first to improve the scheme to 4 cards. Although they mention that 4 cards is the minimum, the minimum only holds when users must each encode their input with two cards. Given the collective wisdom of our Cornell CS4830 students, we demonstrate an array of creative schemes using from 1 to 4 cards. Our students documented these solutions in a homework assignment, and many of them were unanticipated by the instructor and the TAs. We had fun with the students' solutions and therefore would like to share them. Several of the students' solutions are simpler than the standard textbook version by Boer et al., and we imagine that they could be useful for pedagogical purposes.
Last updated:  2015-10-26
Second Coordinate Sequence of MP-LRS over Nontrivial Galois Ring of Odd Characteristic
Vadim N. Tsypyschev
We investigate a well-known way to construct pseudo-random sequences by separating the p-adic coordinate sequences of linear recurrences over a Galois ring. Commonly it is necessary to know rank estimations of the separated sequences. In this article we describe the divisors of the minimal polynomial of the second p-adic coordinate sequence of a linear recurrent sequence of maximal period (MP-LRS) over a non-trivial Galois ring of odd characteristic, depending on the initial vector of this LRS. We also describe polynomials divisible by that minimal polynomial, depending on the initial vector of this LRS. As a corollary we obtain non-trivial upper and lower estimates for the rank of the second coordinate sequence of such an MP-LRS, which makes it possible to use it in pseudo-random generation. We say that a Galois ring is non-trivial if it differs both from a Galois field and from a quotient ring. These results were worked out with the participation of V.L. Kurakin as a supervisor. The author is very grateful to V.L. Kurakin for his participation in this work.
Last updated:  2015-10-26
The Energy Budget for Wireless Security: Extended Version
Dave Singelée, Stefaan Seys, Lejla Batina, Ingrid Verbauwhede
Due to the numerous security and privacy risks, applications deployed in wireless networks require strong cryptographic protection. Reducing the energy cost of cryptographic algorithms and protocols that run on wireless embedded devices is a crucial requirement when developing security and privacy solutions for wireless networks. The goal of this work is to give insight into the global energy cost of secure wireless communications. We compare the energy cost of different wireless standards and a wide range of cryptographic primitives. To illustrate these numbers, we evaluate the energy consumption of several authentication schemes for RFID. The results show that both the computation and communication costs are important factors in the energy budget, and are clearly connected to the security and privacy properties of the wireless applications.
Last updated:  2015-10-26
Reviving the Idea of Incremental Cryptography for the Zettabyte era Use case: Incremental Hash Functions Based on SHA-3
Hristina Mihajloska, Danilo Gligoroski, Simona Samardjiska
One of the crucial factors for enabling fast and secure computations in the Zettabyte era is the use of incremental cryptographic primitives. For files ranging from several megabytes up to hundreds of gigabytes, incremental cryptographic primitives offer speedup factors measured in multiple orders of magnitude. In this paper we define two incremental hash functions, iSHAKE128 and iSHAKE256, based on the recent NIST proposal for SHA-3 extendable-output functions, SHAKE128 and SHAKE256. We discuss two practical implementation aspects of the newly introduced hash functions and compare them with an already known tree-based hash scheme. We show the trends of efficiency gains as the amount of data increases in comparisons between our proposed hash functions and the standard tree-based incremental schemes. Our proposals have security levels against collision attacks of 128 and 256 bits, respectively.
Last updated:  2016-06-03
Extended Tower Number Field Sieve: A New Complexity for the Medium Prime Case
Taechan Kim, Razvan Barbulescu
We introduce a new variant of the number field sieve algorithm for discrete logarithms in $\mathbb{F}_{p^n}$ called exTNFS. The most important modification is done in the polynomial selection step, which determines the cost of the whole algorithm: if one knows how to select good polynomials to tackle discrete logarithms in $\mathbb{F}_{p^\kappa}$, exTNFS allows one to use this method when tackling $\mathbb{F}_{p^{\eta\kappa}}$ whenever $\gcd(\eta,\kappa)=1$. This simple fact has consequences for the asymptotic complexity of NFS in the medium prime case, where the complexity is reduced from $L_Q(1/3,\sqrt[3]{96/9})$ to $L_Q(1/3,\sqrt[3]{48/9})$, $Q=p^n$, respectively from $L_Q(1/3,2.15)$ to $L_Q(1/3,1.71)$ if multiple number fields are used. On the practical side, exTNFS can be used when $n=6$ and $n=12$, and this requires updating the key sizes used for the associated pairing-based cryptosystems.
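The improvement can be quantified directly from the constants stated in the abstract, using the standard sub-exponential notation $L_Q(\alpha, c) = \exp(c (\ln Q)^{\alpha} (\ln \ln Q)^{1-\alpha})$:

```python
import math

def L(Q, alpha, c):
    # Sub-exponential complexity notation L_Q(alpha, c).
    lnq = math.log(Q)
    return math.exp(c * lnq ** alpha * math.log(lnq) ** (1 - alpha))

c_nfs = (96 / 9) ** (1 / 3)     # ~2.20, NFS in the medium prime case
c_extnfs = (48 / 9) ** (1 / 3)  # ~1.75, exTNFS
```

Any reduction of the constant $c$ translates into a super-polynomial speedup as $Q$ grows, which is why the stated drop from $\sqrt[3]{96/9}$ to $\sqrt[3]{48/9}$ forces key-size updates.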
Last updated:  2015-10-27
Hardness Estimation of LWE via Band Pruning
Yoshinori Aono, Le Trieu Phong, Lihua Wang
This paper, examining the hardness of the search LWE problem, is a refined continuation of previous works (Lindner-Peikert 2011, Liu-Nguyen 2013, Aono et al. 2013) using lattice reduction and lattice vector enumeration. We adapt the attack to LWE with a discrete Gaussian distribution, and propose a new bounding method, named band pruning, for lattice enumeration. We update the security estimations for several parameter sets proposed in the literature. Finally, using the data gained in our experiments, we derive an explicit formula linking the LWE parameters to the bit security.
Last updated:  2015-10-26
PAGES+,PAGES-, and PAGES-- - Three Families of Block Ciphers
Dieter Schmidt
PAGES+ is a family of block ciphers based on the block ciphers Speck [2] and PAGES [9]. The block length was increased from 128 bits to 512 bits and additional rounds were introduced to make the cipher more resistant against attacks. The numbers of rounds are 64, 96, and 128. The key sizes are 1024 bits, 1536 bits, and 2048 bits, respectively. The size of the variables, as with PAGES, is 128 bits. Thus the variables can be stored in registers of the microprocessors of two leading suppliers. As with Speck, PAGES+ utilizes instructions with a low latency, such as addition modulo $2^{128}$, subtraction modulo $2^{128}$, XOR, and circular shifts (rotations). All these instructions are usually carried out within a few cycles. Hence, although the number of rounds is considerable, PAGES+ has a high software throughput. For a processor with a frequency of 2.5 GHz, the software throughput of the optimized version of the reference implementation is 30 megabytes per second with a key length of 2048 bits and 128 rounds. In hardware or on an FPGA, considerable performance is expected at limited expense. PAGES- is a family of block ciphers based on the block ciphers Speck [2] and PAGES [9]. The block length was increased from 128 bits to 256 bits and additional rounds were introduced to make the cipher more resistant against attacks. The numbers of rounds are 32, 48, 64, 96, and 128. The key sizes are 256 bits, 384 bits, 512 bits, 768 bits, and 1024 bits, respectively. The size of the variables, as with Speck, is 64 bits. As with Speck, PAGES- utilizes instructions with a low latency, such as addition modulo $2^{64}$, subtraction modulo $2^{64}$, XOR, and circular shifts (rotations). All these instructions are usually carried out within one cycle. Hence, although the number of rounds is considerable, PAGES- has a high software throughput. For a processor with a frequency of 2.5 GHz, the software throughput of the optimized version of the reference implementation is 80 megabytes per second with a key length of 1024 bits and 128 rounds. PAGES-- is a family of block ciphers based on the block ciphers Speck [2] and PAGES [9]. The block length was increased from 128 bits to 512 bits and additional rounds were introduced to make the cipher more resistant against attacks. The numbers of rounds are 32, 48, 64, 96, and 128. The key sizes are 512 bits, 768 bits, 1024 bits, 1536 bits, and 2048 bits, respectively. The size of the variables, as with Speck, is 64 bits. For a processor with a frequency of 2.5 GHz, the software throughput of the optimized version of the reference implementation is 65 megabytes per second with a key length of 2048 bits and 128 rounds.
Last updated:  2015-10-23
Parallel Implementation of Number Theoretic Transform
Hwajeong Seo, Zhe Liu, Yasuyuki Nogami, Jongseok Choi, Taehwan Park, Howon Kim
Number Theoretic Transform (NTT) based polynomial multiplication is the most important operation for lattice-based cryptography. In this paper, we implement the parallel NTT computation over the ARM-NEON architecture. Our contributions include the following optimizations: (1) we vectorized the iterative Number Theoretic Transform, (2) we propose the 32-bit wise Shifting-Addition-Multiplication-Subtraction-Subtraction (SAMS2) technique for speeding up the modular coefficient multiplication, (3) we exploit incomplete arithmetic for representing the coefficients to ensure constant-time modular reduction. For the medium-term security level, our optimized NTT implementation requires only 27,160 clock cycles. Similarly, for the long-term security level, it takes 62,160 clock cycles. These results are faster than the state-of-the-art sequential implementations by 31% and 34%, respectively.
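The sequential baseline the paper vectorizes, the iterative NTT, can be sketched in plain Python; the toy parameters below ($p = 17$ and $w = 9$, a primitive 8th root of unity mod 17) are ours for illustration, not the paper's lattice parameters:

```python
def ntt(a, p, w):
    # Iterative Cooley-Tukey NTT; len(a) must be a power of two and
    # w a primitive len(a)-th root of unity mod p.
    a = a[:]
    n = len(a)
    j = 0
    for i in range(1, n):  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:  # log(n) butterfly passes
        wl = pow(w, n // length, p)  # primitive length-th root of unity
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % p
                a[k], a[k + length // 2] = (u + v) % p, (u - v) % p
                wn = wn * wl % p
        length <<= 1
    return a

def intt(a, p, w):
    # Inverse NTT: run the forward transform with w^-1, then scale by n^-1.
    n = len(a)
    res = ntt(a, p, pow(w, p - 2, p))
    inv_n = pow(n, p - 2, p)
    return [x * inv_n % p for x in res]

def polymul(f, g, p=17, w=9):
    # Cyclic convolution of f and g via pointwise multiplication in the NTT domain.
    fa, ga = ntt(f, p, w), ntt(g, p, w)
    return intt([x * y % p for x, y in zip(fa, ga)], p, w)
```

The inner butterfly loop is the part that the paper maps onto NEON vector lanes, with SAMS2 replacing the generic modular multiplication.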
Last updated:  2017-06-07
Indistinguishability Obfuscation for Turing Machines: Constant Overhead and Amortization
Prabhanjan Ananth, Abhishek Jain, Amit Sahai
We study the asymptotic efficiency of indistinguishability obfuscation (iO) on two fronts: - Obfuscation size: Present constructions of indistinguishability obfuscation (iO) create obfuscated programs where the size of the obfuscated program is at least a multiplicative factor of the security parameter larger than the size of the original program. In this work, we construct the first iO scheme for (bounded-input) Turing machines that achieves only a constant multiplicative overhead in size. The constant in our scheme is, in fact, 2. - Amortization: Suppose we want to obfuscate an arbitrary polynomial number of (bounded-input) Turing machines M_1,...,M_n. We ask whether it is possible to obfuscate M_1,...,M_n using a single application of an iO scheme for a circuit family where the size of any circuit is independent of n as well as the size of any Turing machine M_i. In this work, we resolve this question in the affirmative, obtaining a new bootstrapping theorem for obfuscating arbitrarily many Turing machines. Our results rely on the existence of sub-exponentially secure iO for circuits and re-randomizable encryption schemes. In order to obtain these results, we develop a new template for obfuscating Turing machines that is of independent interest and has recently found application in subsequent work on patchable obfuscation [Ananth et al, EUROCRYPT'17].
Last updated:  2015-10-23
Recent progress on the elliptic curve discrete logarithm problem
Steven D. Galbraith, Pierrick Gaudry
We survey recent work on the elliptic curve discrete logarithm problem. In particular we review index calculus algorithms using summation polynomials, and claims about their complexity.
Last updated:  2017-04-18
New Proof Techniques for DLIN-Based Adaptively Secure Attribute-Based Encryption
Katsuyuki Takashima
We propose adaptively secure attribute-based encryption (ABE) schemes for boolean formulas over large-universe attributes from the decisional linear (DLIN) assumption, which allow attribute reuse in an available formula without the previously employed redundant multiple encoding technique. Thus our KP-(resp. CP-)ABE has non-redundant ciphertexts (resp. secret keys). To achieve these results, we develop a new encoding method for the access policy matrix for ABE, by decoupling linear secret sharing (LSS) into its matrix and randomness, and partially randomizing the LSS shares in the simulation. The new techniques are of independent interest and we expect they will find applications beyond ABE.
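The LSS decoupling described above operates on the standard share equation: for an access matrix $M$ and column vector $v = (s, r_2, \ldots, r_n)$ with secret $s$ and fresh randomness $r_i$, the share vector is $M \cdot v$. A toy sketch (a hypothetical two-row "A AND B" policy matrix of ours, not the paper's encoding):

```python
P = 101  # small prime field, for illustration only

def lss_shares(M, secret, rand):
    # Shares = M . (secret, r_2, ..., r_n)^T over GF(P), one share per policy row.
    v = [secret] + rand
    return [sum(m * x for m, x in zip(row, v)) % P for row in M]

# Policy "A AND B": (1, 0) is the sum of the two rows, so a linear
# combination recovering the secret needs both shares; either share
# alone is masked by the randomness r.
M_and = [[1, 1],   # row labeled with attribute A: share = s + r
         [0, -1]]  # row labeled with attribute B: share = -r
```

The paper's simulation technique keeps the matrix $M$ fixed while re-randomizing (part of) the vector $v$; the algebra above is the object being manipulated.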
Last updated:  2016-01-07
Attacking the Network Time Protocol
Aanchal Malhotra, Isaac E. Cohen, Erik Brakke, Sharon Goldberg
We explore the risk that network attackers can exploit unauthenticated Network Time Protocol (NTP) traffic to alter the time on client systems. We first discuss how an on-path attacker, who hijacks traffic to an NTP server, can quickly shift time on the server's clients. Then, we present an extremely low-rate (single packet) denial-of-service attack that an off-path attacker, located anywhere on the network, can use to disable NTP clock synchronization on a client. Next, we show how an off-path attacker can exploit IPv4 packet fragmentation to shift time on a client. We discuss the implications of these attacks for other core Internet protocols, quantify their attack surface using Internet measurements, and suggest a few simple countermeasures that can improve the security of NTP.
Last updated:  2016-10-13
Speed-Security Tradeoffs in Blockchain Protocols
Aggelos Kiayias, Giorgos Panagiotakos
Transaction processing speed is one of the major considerations in cryptocurrencies that are based on proof of work (POW), such as Bitcoin. At an intuitive level it is widely understood that processing speed is at odds with the security of the underlying POW-based consensus mechanism of such protocols; nevertheless, the tradeoff between the two properties is still not well understood. In this work, motivated by recent work \cite{GKL15} on the formal analysis of the Bitcoin backbone protocol, we investigate the tradeoff between provable security and transaction processing speed, viewing the latter as a function of the block generation rate. We introduce a new formal property of blockchain protocols, called {\em chain growth}, and we show it is fundamental for arguing the security of a robust transaction ledger. We strengthen the results of \cite{GKL15} in the following ways: we show how the properties of persistence and liveness of the ledger reduce in a black-box fashion to the underlying properties of the backbone protocol, namely common prefix, chain quality and chain growth, and we improve the security bounds, showing that the robustness of the ledger holds even for block generation rates faster than Bitcoin's, which have been adopted by other ``alt-coins.'' We also present a theoretical attack against Bitcoin, which we validate in simulation, that works when the block generation rate is highly accelerated. This presents a natural upper bound in the context of the speed-security tradeoff. By combining our positive and negative results we map the speed/security domain for blockchain protocols and list open problems for future work.
Last updated:  2018-05-20
A Riddle Wrapped in an Enigma
Neal Koblitz, Alfred J. Menezes
In August 2015 the U.S. National Security Agency (NSA) released a major policy statement on the need for post-quantum cryptography (PQC). This announcement will be a great stimulus to the development, standardization, and commercialization of new quantum-safe algorithms. However, certain peculiarities in the wording and timing of the statement have puzzled many people and given rise to much speculation concerning the NSA, elliptic curve cryptography (ECC), and quantum-safe cryptography. Our purpose is to attempt to evaluate some of the theories that have been proposed.
Last updated:  2015-11-03
Functional Encryption: Decentralised and Delegatable
Nishanth Chandran, Vipul Goyal, Aayush Jain, Amit Sahai
Recent advances in encryption schemes have allowed us to go far beyond point-to-point encryption, the scenario typically envisioned in public key encryption. In particular, Functional Encryption (FE) allows an authority to provide users with keys corresponding to various functions, such that a user with a secret key corresponding to a function $f$ can compute $f(m)$ (and only that) from a ciphertext that encrypts $m$. While FE is a very powerful primitive, a key downside is the requirement of a central point of trust. FE requires the assumption of a central trusted authority which performs the system setup as well as manages the credentials of every party in the system on an ongoing basis. This is in contrast to public key infrastructure, which may have multiple certificate authorities and allows a party to have different (and varying) levels of trust in them. \\ \\ In this work, we address this issue of trust in two ways: \begin{itemize} \item First, we ask how realistic it is to have a central authority that manages all credentials and is trusted by everyone. For example, one may need to either obtain the permission of an income tax official or the permission of the police department and a court judge in order to be able to obtain specific financial information of a user from encrypted financial data. Towards that end, we introduce a new primitive that we call {\em Multi-Authority Functional Encryption} (MAFE) as a generalization of both Functional Encryption and Multi-Authority Attribute-Based Encryption (MABE). We show how to obtain MAFE for arbitrary polynomial-time computations based on subexponentially secure indistinguishability obfuscation and injective one-way functions. \item Second, we consider the notion of \emph{delegatable} functional encryption where any user in the system may independently act as a key generation authority. In delegatable FE, any user may derive a decryption key for a policy which is ``more restrictive" than its own.
Thus, in delegatable functional encryption, keys can be generated in a hierarchical way, instead of directly by a central authority. In contrast to MAFE, however, in a delegatable FE scheme, the trust still ``flows'' outward from the central authority. \end{itemize} Finally, we remark that our techniques are of independent interest: we construct FE in arguably a more natural way where a decryption key for a function $f$ is simply a signature on $f$. Such a direct approach allows us to obtain a construction with interesting properties enabling multiple authorities as well as delegation.
Last updated:  2016-04-20
One-Key Compression Function Based MAC with Security beyond Birthday Bound
Avijit Dutta, Mridul Nandi, Goutam Paul
Ga{\v z}i et al. [CRYPTO 2014] analyzed the NI-MAC construction proposed by An and Bellare [CRYPTO 1999] and gave a tight birthday bound of $O(\ell q^{2}/2^{n})$, as an improvement over the previous bound of $O(\ell^{2}q^{2}/2^{n})$. In this paper, we design a simple extension of NI-MAC, called NI$^+$-MAC, and prove that it has a security bound beyond the birthday bound (BBB) of order $O(q^2\ell^2 / 2^{2n})$, provided $\ell \leq 2^{n/4}$. Our construction not only lifts the security of NI-MAC beyond the birthday bound, it also reduces the number of keys from 2 (NI uses 2 independent keys) to 1. Before this work, Yasuda had proposed [FSE 2008] a single fixed-key compression-function-based BBB-secure MAC with security bound $O(\ell q^2/2^{2n})$ that uses an extra mask and requires storage space to store the mask. Our proposed construction NI$^+$ does not require any extra mask and thereby has a reduced state size compared to Yasuda's proposal [FSE 2008], while providing the same order of security bound for light-weight applications.
Last updated:  2015-10-19
On Bitcoin as a public randomness source
Joseph Bonneau, Jeremy Clark, Steven Goldfeder
We formalize the use of Bitcoin as a source of publicly-verifiable randomness. As a side-effect of Bitcoin's proof-of-work-based consensus system, random values are broadcast every time new blocks are mined. We can derive strong lower bounds on the computational min-entropy in each block: currently, at least 68 bits of min-entropy are produced every 10 minutes, from which one can derive over 32 near-uniform bits using standard extractor techniques. We show that any attack on this beacon would form an attack on Bitcoin itself and hence have a monetary cost that we can bound, unlike any other construction for a public randomness beacon in the literature. In our simplest construction, we show that a lottery producing a single unbiased bit is manipulation-resistant against an attacker with a stake of less than 50 bitcoins in the output, or about US$12,000 today. Finally, we propose making the beacon output available to smart contracts and demonstrate that this simple tool enables a number of interesting applications.
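As a toy illustration of the final extraction step (a seeded hash used as an extractor; a stand-in of ours, not the paper's extractor construction), a block header hash with high min-entropy can be condensed into a short near-uniform output:

```python
import hashlib

def beacon_output(block_hash_hex, seed, out_bits=32):
    # Condense a (high min-entropy) block hash into out_bits bits by
    # hashing it together with a public seed and truncating.
    h = hashlib.sha256(seed + bytes.fromhex(block_hash_hex)).digest()
    return int.from_bytes(h, "big") >> (256 - out_bits)
```

The paper's security argument is orthogonal to the choice of extractor: biasing the output requires discarding valid blocks, whose block rewards put a concrete monetary price on manipulation.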
Last updated:  2016-05-04
Fast Fourier Orthogonalization
Léo Ducas, Thomas Prest
The classical fast Fourier transform (FFT) allows one to compute in quasi-linear time the product of two polynomials in the {\em circular convolution ring} $\mathbb R[x]/(x^d -1)$ --- a task that naively requires quadratic time. Equivalently, it allows one to accelerate matrix-vector products when the matrix is *circulant*. In this work, we discover that the ideas of the FFT can be applied to speed up the orthogonalization process of matrices with circulant blocks of size $d\times d$. We show that, when $d$ is composite, it is possible to proceed with the orthogonalization in an inductive way --- up to an appropriate re-indexation of rows and columns. This leads to a structured Gram-Schmidt decomposition. In turn, this structured Gram-Schmidt decomposition accelerates a cornerstone lattice algorithm: the nearest plane algorithm. The complexity of both algorithms may be brought down to $\Theta(d \log d)$. Our results easily extend to *cyclotomic rings*, and can be adapted to Gaussian samplers. This finds applications in lattice-based cryptography, improving the performance of trapdoor functions.
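The circulant case the authors generalize can be sketched directly: a circulant matrix-vector product is a circular convolution, computed in $O(d \log d)$ by diagonalizing in the Fourier basis. A plain-Python sketch with a textbook recursive FFT (illustration only, not the paper's orthogonalization algorithm):

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    # For the inverse, call with invert=True and divide by len(a) once at the top level.
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def circulant_matvec(c, v):
    # C v where C is circulant with first column c, i.e. C[i][j] = c[(i - j) % d].
    # Equals IFFT(FFT(c) * FFT(v)): quasi-linear instead of quadratic.
    d = len(c)
    prod = [x * y for x, y in zip(fft(c), fft(v))]
    return [x.real / d for x in fft(prod, invert=True)]
```

The paper's observation is that a similar divide-and-conquer structure survives Gram-Schmidt orthogonalization of matrices built from such circulant blocks when $d$ is composite.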
Last updated:  2019-04-08
Inception Makes Non-malleable Codes Stronger
Divesh Aggarwal, Tomasz Kazana, Maciej Obremski
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs~\cite{DPW10}, provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. NMCs have emerged as a fundamental object at the intersection of coding theory and cryptography. A large body of the recent work has focused on various constructions of non-malleable codes in the split-state model. Many variants of NMCs have been introduced in the literature, such as strong NMCs, super-strong NMCs and continuous NMCs. Perhaps the most useful notion among these is that of continuous non-malleable codes, which allows for continuous tampering by the adversary. In this paper we give the first efficient, information-theoretically secure construction of continuous non-malleable codes in the $2$-split-state model. En route to our main result, we obtain constructions for almost all possible notions of non-malleable codes that have been considered in the split-state model, and for which such a construction is possible. Our result is obtained by a series of black-box reductions starting from the non-malleable codes from~\cite{ADL14}. One of the main technical ingredients of our result is a new concept that we call \emph{inception coding}. We believe it may be of independent interest.
Last updated:  2016-02-15
An Efficient Multiple PKG Compatible Identity Based Authenticated Key Agreement protocol
Harish Karthikeyan, Suvradip Chakraborty, Kunwar Singh, C. Pandu Rangan
In this paper we propose an efficient single-round, two-party identity-based authenticated key agreement protocol in the setting of multiple Private Key Generators (PKGs). One of the major advantages of our construction is that it does not involve any pairing operations. To date, existing protocols in the identity-based key agreement domain revolve around a single-PKG environment. Efforts to exploit the multiple-PKGs paradigm have placed excessive reliance on elliptic curve cryptography and bilinear pairings. These are computationally intensive and cannot be used when computation is at a premium, especially in applications such as a Vehicular Ad-Hoc Network (VANET), where the vehicles may need to perform a large number of key agreement sessions. Previous attempts to model identity-based key agreement in the multiple-PKG scenario, by Chen and Kundla, and McCullagh, have very limited scope and provide weak security guarantees. We propose a new security model for identity-based key agreement protocols involving multiple PKGs based on the eCK security model, which is much stronger than the existing models and captures additional properties like key compromise impersonation and forward secrecy that were not captured by the previous models. Our protocol is proven secure in this new security model under the Gap Diffie-Hellman (GDH) assumption in the Random Oracle (RO) model.
Last updated:  2016-09-15
Hierarchical Functional Encryption
Zvika Brakerski, Gil Segev
Functional encryption provides fine-grained access control for encrypted data, allowing each user to learn only specific functions of the encrypted data. We study the notion of \emph{hierarchical} functional encryption, which augments functional encryption with \emph{delegation} capabilities, offering significantly more expressive access control. We present a {\em generic transformation} that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing (somewhat surprisingly) that the existence of functional encryption is equivalent to that of its hierarchical generalization. Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support \emph{arbitrary} hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Last updated:  2016-02-19
TWORAM: Round-Optimal Oblivious RAM with Applications to Searchable Encryption
Sanjam Garg, Payman Mohassel, Charalampos Papamanthou
We present TWORAM, the first efficient round-optimal oblivious RAM (ORAM) scheme. TWORAM provides oblivious access to a memory index $y$ in exactly two rounds: The client prepares an encrypted query encapsulating $y$ and sends it to the server. The server accesses memory obliviously and returns encrypted information containing the desired value $\mathsf{M}[y]$. The cost of TWORAM is only a multiplicative factor of the security parameter higher than that of tree-based ORAM schemes such as the path ORAM of Stefanov et al. (CCS, 2013). TWORAM gives rise to interesting applications, and in particular to the first fully-secure searchable symmetric encryption scheme where search is sublinear and the search pattern is not leaked---the access pattern can also be concealed if we assume the documents are stored in the obliviously accessed memory $\mathsf{M}$.
Last updated:  2015-10-19
Applications of Key Recovery Cube-attack-like
Pawel Morawiecki, Josef Pieprzyk, Michal Straus, Marian Srebrny
In this paper, we describe a variant of the cube attack with a much better-understood preprocessing phase, whose complexity can be calculated without running the actual experiments or a random-like search for the cubes. We apply our method to a few different cryptographic algorithms, showing that the method can be used against a wide range of cryptographic primitives, including hash functions and authenticated encryption schemes. We also show that our key-recovery approach could serve as a framework for side-channel attacks, where the attacker has to deal with random errors in measurements.
Last updated:  2015-10-26
Provisions: Privacy-preserving proofs of solvency for Bitcoin exchanges
Gaby G. Dagher, Benedikt Buenz, Joseph Bonneau, Jeremy Clark, Dan Boneh
Bitcoin exchanges function like banks, securely holding their customers' bitcoins on their behalf. Several exchanges have suffered catastrophic losses with customers permanently losing their savings. A proof of solvency demonstrates that the exchange controls sufficient reserves to settle each customer's account. We introduce Provisions, a privacy-preserving proof of solvency whereby an exchange does not have to disclose its Bitcoin addresses; total holdings or liabilities; or any information about its customers. We also propose an extension which prevents exchanges from colluding to cover for each other's losses. We have implemented Provisions and show that it offers practical computation times and proof sizes even for a large Bitcoin exchange with millions of customers.
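Provisions hides balances inside homomorphic commitments (the paper uses Pedersen commitments), whose additive property lets per-customer commitments be aggregated without opening any of them. A minimal sketch with toy, insecure parameters (in real use the group is an elliptic curve and nobody may know $\log_g h$):

```python
# Toy Pedersen commitment: C = g^v * h^r mod p. Illustrative parameters only
# (a 467-element group is NOT secure); here h's discrete log is known, which
# would break the binding property in a real deployment.
p, q = 467, 233            # p = 2q + 1, both prime
g = 4                      # generator of the order-q subgroup of Z_p*
h = pow(g, 77, p)          # hypothetical second generator

def commit(v: int, r: int) -> int:
    """Commit to value v with blinding randomness r."""
    return (pow(g, v, p) * pow(h, r, p)) % p

# Additive homomorphism: the product of commitments commits to the sum,
# so a total liability can be proven without revealing individual balances.
total = (commit(3, 5) * commit(4, 6)) % p
assert total == commit(3 + 4, 5 + 6)
```

The hiding property comes from the random blinding exponent; the proof of solvency then shows in zero knowledge that the committed total of liabilities does not exceed the committed reserves.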
Last updated:  2015-10-16
How to Vote Privately Using Bitcoin
Zhichao Zhao, T-H. Hubert Chan
Bitcoin is the first decentralized crypto-currency and is currently by far the most popular one in use. The bitcoin transaction syntax is expressive enough to set up digital contracts whose fund transfers can be enforced automatically. In this paper, we design protocols for the bitcoin voting problem, in which there are n voters, each of whom wishes to fund exactly one of two candidates A and B. The winning candidate is determined by majority voting, while the privacy of each individual vote is preserved. Moreover, the decision is irrevocable in the sense that once the outcome is revealed, the winning candidate is guaranteed to have the funding from all n voters. As in previous works, each voter is incentivized to follow the protocol by being required to put a deposit in the system, which will be used as compensation if he deviates from the protocol. Our solution is similar to previous protocols used for lottery, but needs an additional phase to distribute secret random numbers via zero-knowledge proofs. Moreover, we have resolved a security issue in previous protocols that could prevent compensation from being paid.
Last updated:  2015-10-16
Confidential Benchmarking based on Multiparty Computation
Ivan Damgård, Kasper Damgård, Kurt Nielsen, Peter Sebastian Nordholt, Tomas Toft
We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential: the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping state during the computation. We ran the system with two servers doing the secure computation, using a database with information on about 2500 users. Answers arrived in about 25 seconds.
Last updated:  2015-10-16
Cryptanalysis of Yasuda, Takagi and Sakurai's Signature Scheme Using Invariant Subspaces
Wenbin Zhang, Chik How Tan
In PQCrypto 2013, Yasuda, Takagi and Sakurai proposed an interesting signature scheme of efficiency $O(n^2)$ with parameters $(q=6781, n=121)$, claimed to have a 140-bit security level. Later, two independent and different attacks were proposed almost simultaneously, by Y. Hashimoto in PQCrypto 2014 and by the authors in ICISC 2014. Hashimoto's attack has complexity $O(n^4)$ and breaks $(q=6781, n=121)$ in several minutes. In this paper, we make an essential extension of our work in ICISC 2014. We develop a thorough and rigorous mathematical theory for our previous method by intensively applying the theory of invariant subspaces, then work out a much better attack with complexity $O(n^4)$, and in particular implement it successfully. Our new attack efficiently recovers equivalent private keys of many randomly generated instances, breaking $(q=6781, n=121)$ in only about 14.77 seconds, much faster than Y. Hashimoto's attack. The approach developed here might have further applications.
Last updated:  2015-11-17
Security Analysis of Cryptosystems Using Short Generators over Ideal Lattices
Shinya Okumura, Shingo Sugiyama, Masaya Yasuda, Tsuyoshi Takagi
In this paper, we analyze the security of cryptosystems using short generators over ideal lattices, such as the candidate multilinear maps by Garg, Gentry and Halevi and the fully homomorphic encryption by Smart and Vercauteren. Our approach is based on a recent work by Cramer, Ducas, Peikert and Regev on the analysis of recovering a short generator of an ideal in the $q$-th cyclotomic field for a prime power $q$. In their analysis, implicit lower bounds on the special values of Dirichlet $L$-functions at 1 are essentially used for estimating sizes of the dual basis in the log-unit lattice of the $q$-th cyclotomic field. Our main contribution is to improve Cramer et al.'s analysis by giving explicit lower and upper bounds on the special values of Dirichlet $L$-functions at 1 for any non-trivial even Dirichlet character modulo $q$. Moreover, we give various experimental evidence that recovering short generators of principal ideals in $2^k$-th cyclotomic fields for $k \geq 10$ succeeds with high probability. As a consequence, our analysis suggests that the security of the above cryptosystems, based on the difficulty of recovering a short generator, reduces to solving the principal ideal problem under the number-theoretic conjecture known as Weber's class number problem.
Last updated:  2015-10-16
Results on polynomial interpolation with mixed modular operations and unknown moduli
Oscar Garcia-Morchon, Ronald Rietman, Igor Shparlinski, Ludo Tolhuizen
Motivated by the recently introduced HIMMO key predistribution scheme, we investigate the limits of various attacks on the polynomial interpolation problem with mixed modular operations and hidden moduli. We first review the classical attack and consider it in a quantum setting. Then, we introduce new techniques for finding the secret moduli and consider quantum speed-ups.
Last updated:  2015-10-15
got HW crypto? On the (in)security of a Self-Encrypting Drive series
Gunnar Alendal, Christian Kison, modg
Self-encrypting devices (SEDs) performing full-disk encryption are becoming more and more widespread. Hardware-implemented AES encryption provides fast and transparent encryption of all user data on the storage medium, at all times. In this paper we look into some models of a self-encrypting external hard drive series: the Western Digital My Passport series. We describe the security model of these devices and show several security weaknesses like RAM leakage, weak key attacks and even backdoors on some of these devices, resulting in decrypted user data without the knowledge of any user credentials.
Last updated:  2016-01-08
Dismantling real-world ECC with Horizontal and Vertical Template Attacks
Margaux Dugardin, Louiza Papachristodoulou, Zakaria Najm, Lejla Batina, Jean-Luc Danger, Sylvain Guilley, Jean-Christophe Courrege, Carine Therond
Recent side-channel attacks on elliptic curve algorithms have shown that the security of these cryptosystems is a matter of serious concern. The development of techniques in the area of Template Attacks makes it feasible to extract a 256-bit secret key with only 257 traces. This paper enhances the applicability of this attack by exploiting both the horizontal leakage of the carry propagation during the finite field multiplication, and the vertical leakage of the input data. As a further contribution, our method provides detection and auto-correction of possible errors that may occur during the key recovery. These enhancements come at the cost of extra traces, while still providing a practical attack. Finally, we show that the elliptic curve technology developed in PolarSSL running on a ARM STM32F4 platform is completely vulnerable, when used without any modifications or countermeasures.
Last updated:  2016-01-16
Factoring as a Service
Luke Valenta, Shaanan Cohney, Alex Liao, Joshua Fried, Satya Bodduluri, Nadia Heninger
The difficulty of integer factorization is fundamental to modern cryptographic security using RSA encryption and signatures. Although a 512-bit RSA modulus was first factored in 1999, 512-bit RSA remains surprisingly common in practice across many cryptographic protocols. Popular understanding of the difficulty of 512-bit factorization does not seem to have kept pace with developments in computing power. In this paper, we optimize the CADO-NFS and Msieve implementations of the number field sieve for use on the Amazon Elastic Compute Cloud platform, allowing a non-expert to factor 512-bit RSA public keys in under four hours for \$75. We go on to survey the RSA key sizes used in popular protocols, finding hundreds or thousands of deployed 512-bit RSA keys in DNSSEC, HTTPS, IMAP, POP3, SMTP, DKIM, SSH, and PGP.
Last updated:  2022-03-16
Improved Masking for Tweakable Blockciphers with Applications to Authenticated Encryption
Robert Granger, Philipp Jovanovic, Bart Mennink, Samuel Neves
A popular approach to tweakable blockcipher design is via masking, where a certain primitive (a blockcipher or a permutation) is preceded and followed by an easy-to-compute tweak-dependent mask. In this work, we revisit the principle of masking. We do so alongside the introduction of the tweakable Even-Mansour construction MEM. Its masking function combines the advantages of word-oriented LFSR- and powering-up-based methods. We show in particular how recent advancements in computing discrete logarithms over finite fields of characteristic 2 can be exploited in a constructive way to realize highly efficient, constant-time masking functions. If the masking satisfies a set of simple conditions, then MEM is a secure tweakable blockcipher up to the birthday bound. The strengths of MEM are exhibited by the design of fully parallelizable authenticated encryption schemes OPP (nonce-respecting) and MRO (misuse-resistant). If instantiated with a reduced-round BLAKE2b permutation, OPP and MRO achieve speeds up to 0.55 and 1.06 cycles per byte on the Intel Haswell microarchitecture, and are able to significantly outperform their closest competitors.
Last updated:  2015-10-15
All or Nothing at All
Paolo D'Arco, Navid Nasr Esfahani, Douglas R. Stinson
We continue a study of unconditionally secure all-or-nothing transforms (AONT) begun in \cite{St}. An AONT is a bijective mapping that constructs $s$ outputs from $s$ inputs. We consider the security of $t$ inputs, when $s-t$ outputs are known. Previous work concerned the case $t=1$; here we consider the problem for general $t$, focussing on the case $t=2$. We investigate constructions of binary matrices for which the desired properties hold with the maximum probability. Upper bounds on these probabilities are obtained via a quadratic programming approach, while lower bounds can be obtained from combinatorial constructions based on symmetric BIBDs and cyclotomy. We also report some results on exhaustive searches and random constructions for small values of $s$.
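The AONTs studied in this paper are unconditionally secure; for intuition about the all-or-nothing property itself, Rivest's package transform (a *computational* AONT, not the unconditional notion analyzed here) shows how every output block becomes necessary to recover any input block. A minimal sketch, with truncated SHA-256 standing in as a hypothetical PRF:

```python
import hashlib
import secrets

BLOCK = 16  # bytes per block

def prf(key: bytes, i: int) -> bytes:
    """Hypothetical PRF instantiation: truncated SHA-256 of key || counter."""
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def package(blocks):
    """Rivest's package transform: s blocks in, s + 1 blocks out."""
    k = secrets.token_bytes(BLOCK)            # fresh random inner key
    pseudo = [xor(m, prf(k, i)) for i, m in enumerate(blocks)]
    tail = k                                  # k masked by digests of ALL pseudo-blocks
    for i, c in enumerate(pseudo):
        tail = xor(tail, prf(c, i))
    return pseudo + [tail]

def unpackage(out):
    pseudo, tail = out[:-1], out[-1]
    k = tail                                  # recovering k needs every pseudo-block
    for i, c in enumerate(pseudo):
        k = xor(k, prf(c, i))
    return [xor(c, prf(k, i)) for i, c in enumerate(pseudo)]
```

Withholding any single output block leaves the inner key $k$, and hence every input block, computationally undetermined; the paper's question is how much information about $t$ inputs survives *unconditionally* when $s-t$ outputs are known.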
Last updated:  2017-06-04
Incremental Program Obfuscation
Sanjam Garg, Omkant Pandey
Recent advances in program obfuscation suggest that it is possible to create software that can provably safeguard secret information. However, software systems usually contain large executable code that is updated multiple times and sometimes very frequently. Freshly obfuscating the program for every small update will lead to a considerable efficiency loss. Thus, an extremely desirable property for obfuscation algorithms is incrementality: small changes to the underlying program translate into small changes to the corresponding obfuscated program. We initiate a thorough investigation of incremental program obfuscation. We show that the strong simulation-based notions of program obfuscation, such as ``virtual black-box'' and ``virtual grey-box'' obfuscation, cannot be incremental (according to our efficiency requirements) even for very simple functions such as point functions. We then turn to the indistinguishability-based notions, and present two security definitions of varying strength --- namely, a weak one and a strong one. To understand the overall strength of our definitions, we formulate the notion of incremental best-possible obfuscation and show that it is equivalent to our strong indistinguishability-based notion. Finally, we present constructions for incremental program obfuscation satisfying both our security notions. We first give a construction achieving the weaker security notion based on the existence of general purpose indistinguishability obfuscation. Next, we present a generic transformation using oblivious RAM to amplify security from weaker to stronger, while maintaining the incrementality property.
Last updated:  2015-10-14
Multi-user Schnorr security, revisited
Daniel J. Bernstein
Three recent proposals for standardization of next-generation ECC signatures have included "key prefixing" modifications to Schnorr's signature system. Bernstein, Duif, Lange, Schwabe, and Yang stated in 2011 that key prefixing is "an inexpensive way to alleviate concerns that several public keys could be attacked simultaneously". However, a 2002 theorem by Galbraith, Malone-Lee, and Smart states that, for the classic Schnorr signature system, single-key security tightly implies multi-key security. Struik and then Hamburg, citing this theorem, argued that key prefixing was unnecessary for multi-user security and should not be standardized. This paper identifies an error in the 2002 proof, and an apparently insurmountable obstacle to the claimed theorem. The proof idea does, however, lead to a different theorem, stating that single-key security of the classic Schnorr signature system tightly implies multi-key security of the key-prefixed variant of the system. This produces exactly the opposite conclusion regarding standardization.
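The difference at issue is only whether the signer's public key is included in the hash input. A sketch of both variants over a toy group (illustrative, insecure parameters):

```python
import hashlib

# Toy Schnorr group (NOT secure sizes): p = 2q + 1, g generates the order-q subgroup.
p, q, g = 467, 233, 4

def H(*parts) -> int:
    data = b"|".join(x if isinstance(x, bytes) else str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x: int, m: bytes, r: int, key_prefixed: bool):
    A, R = pow(g, x, p), pow(g, r, p)
    e = H(A, R, m) if key_prefixed else H(R, m)  # key prefixing: hash A as well
    return R, (r + e * x) % q

def verify(A: int, m: bytes, sig, key_prefixed: bool) -> bool:
    R, s = sig
    e = H(A, R, m) if key_prefixed else H(R, m)
    return pow(g, s, p) == (R * pow(A, e, p)) % p
```

In both cases verification checks $g^s = R \cdot A^e$; the paper's theorem says single-key security of the classic (non-prefixed) scheme tightly implies multi-key security of the *prefixed* variant.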
Last updated:  2015-10-14
Updates on Sorting of Fully Homomorphic Encrypted Data
Nitesh Emmadi, Praveen Gauravaram, Harika Narumanchi, Habeeb Syed
In this paper, we show implementation results of various algorithms that sort data encrypted with a Fully Homomorphic Encryption scheme based on integers. We analyze the complexities of sorting algorithms over encrypted data by considering Bubble Sort, Insertion Sort, Bitonic Sort and Odd-Even Merge Sort. Our complexity analysis, together with implementation results, shows that Odd-Even Merge Sort has better performance than the other sorting techniques. We observe that sorting in the homomorphic domain always incurs worst-case complexity, independent of the nature of the input. In addition, we show that combining different sorting algorithms to sort encrypted data does not give any performance gain when compared to applying the sorting algorithms individually.
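Odd-Even Merge Sort suits homomorphic evaluation because its compare-and-swap sequence is fixed in advance, independent of the data. A plaintext sketch of Batcher's network (power-of-two input length assumed; over FHE data each compare-and-swap would become an oblivious conditional swap on ciphertexts):

```python
def compare_swap(a, i, j):
    # On ciphertexts this would be an oblivious conditional swap; crucially,
    # WHICH index pairs get compared never depends on the data.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, hi, r):
    """Merge the two sorted halves of a[lo:hi] by comparing elements r apart."""
    step = r * 2
    if step < hi - lo:
        oddeven_merge(a, lo, hi, step)        # merge even-indexed subsequence
        oddeven_merge(a, lo + r, hi, step)    # merge odd-indexed subsequence
        for i in range(lo + r, hi - r, step):
            compare_swap(a, i, i + r)
    else:
        compare_swap(a, lo, lo + r)

def oddeven_merge_sort(a, lo=0, hi=None):
    """Batcher's odd-even merge sort; len(a) must be a power of two."""
    if hi is None:
        hi = len(a)
    if hi - lo > 1:
        mid = lo + (hi - lo) // 2
        oddeven_merge_sort(a, lo, mid)
        oddeven_merge_sort(a, mid, hi)
        oddeven_merge(a, lo, hi, 1)
```

The network performs $O(n \log^2 n)$ comparisons on every input, which matches the paper's observation that homomorphic sorting always pays its worst-case cost.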
Last updated:  2015-10-16
An Efficient Scheme to Reduce Side-Channel Leakage of MAC-Keccak for Smart Card
Pei Luo, Liwei Zhang, Yunsi Fei, A. Adam Ding
As the new SHA-3 standard, Keccak's side-channel security has attracted a lot of attention. Previous works show that both software and hardware implementations of Keccak have strong side-channel leakages, and these leakages can easily be used by an attacker to recover secret key information. Secret sharing schemes have been introduced to mask the leakages, but such schemes incur large resource overhead. In this paper, we introduce another scheme, based on the round rotation probability of Keccak, to reduce the side-channel leakages. This scheme is easy to implement while efficiently reducing the side-channel leakages of Keccak.
Last updated:  2015-11-01
Bi-Deniable Inner Product Encryption from LWE
Daniel Apon, Xiong Fan, Feng-Hao Liu
Deniable encryption (Canetti et al. CRYPTO '97) is an intriguing primitive that provides a security guarantee against not only eavesdropping attacks, as required by semantic security, but also stronger coercion attacks performed after the fact. The concept of deniability has since proven useful and powerful in many other contexts, such as leakage resilience, adaptive security of protocols, and security against selective opening attacks. Despite its conceptual usefulness, our understanding of how to construct deniable primitives under standard assumptions is limited. In particular, from standard assumptions such as Learning with Errors (LWE), we have only multi-distributional or non-negligible-advantage deniable encryption schemes, whereas with the much stronger assumption of indistinguishability obfuscation, we can obtain at least fully-secure sender-deniable PKE and computation. How to achieve deniability for other more advanced encryption schemes under standard assumptions remains an interesting open question. In this work, we construct a bi-deniable inner product encryption (IPE) in the multi-distributional model without relying on obfuscation as a black box. Our techniques involve new ways of manipulating Gaussian noise, and lead to a significantly tighter analysis of noise growth in Dual Regev type encryption schemes. We hope these ideas can give insight into achieving deniability and related properties for further, advanced cryptographic constructions under standard assumptions.
Last updated:  2015-10-13
Multilinear Map via Scale-Invariant FHE: Enhancing Security and Efficiency
Jinsu Kim, Sungwook Kim, Jae Hong Seo
Cryptographic multilinear maps are a useful tool for constructing numerous secure protocols, and a Graded Encoding System (GES) is an {\em approximate} concept of a multilinear map. In the multilinear map context, there are several important issues, mainly concerning security and efficiency. All early-stage candidate multilinear maps were recently broken by so-called zeroizing attacks, so it is highly important to develop reliable mechanisms to prevent them. Moreover, the encoding size in all candidate multilinear maps grows quadratically in the multilinearity parameter $\kappa$, which makes them less attractive for applications requiring large $\kappa$. In this paper, we propose a new integer-based multilinear map that has several advantages over previous schemes. In terms of security, we expect that our construction is resistant to the zeroizing attack. In terms of efficiency, the bit-size of an encoding grows sublinearly with $\kappa$, more precisely $O((\log_2\kappa)^2)$. To this end, we essentially utilize a technique from the multiplication procedure in {\em scale-invariant} fully homomorphic encryption (FHE), which enables us to achieve sublinear complexity in terms of multilinearity and, at the same time, security against the zeroizing attacks (EUROCRYPT 2015, IACR-Eprint 2015/934, IACR-Eprint 2015/941) that totally broke Coron, Lepoint, and Tibouchi's integer-based construction (CRYPTO 2013, CRYPTO 2015). We find that the technique of scale-invariant FHE does not harmonize well with previous approaches to building a GES from (non-scale-invariant) FHE. Therefore, we first devise a new approach for approximate multilinear maps, called a {\em Ring Encoding System (RES)}, and prove that a multilinear map built via RES is generically secure. Next, we propose a new efficient scale-invariant FHE with special properties, and then construct a candidate RES based on it.
It is worth noting that, contrary to the CLT multilinear map (CRYPTO 2015), the multiplication procedure in our construction does not add hidden constants generated by ladders of zero encodings, but mixes the randomness in encodings in non-linear ways without using such ladders. This feature is obtained by using the scale-invariant FHE and is essential to preventing Cheon et al.'s zeroizing attack.
Last updated:  2019-03-31
Ed3363 (HighFive) -- An alternative Elliptic Curve
Mike Scott
We propose a new Elliptic curve at a security level significantly greater than the standard 128 bits, that fills a gap in current proposals while bucking the expected security vs cost curve by exploiting the new trick recently described by Granger and Scott. This essentially reduces the cost of field multiplication to that of a field squaring.
Last updated:  2016-12-23
Encryption Switching Protocols
Geoffroy Couteau, Thomas Peters, David Pointcheval
We put forth a novel cryptographic primitive: encryption switching protocols (ESP), which allow switching between two encryption schemes. Intuitively, this two-party protocol converts given ciphertexts from one scheme into ciphertexts of the same messages in the other scheme, for any polynomial number of switches, in any direction. Although an ESP is a special kind of two-party computation protocol, it turns out that ESP implies general two-party computation under natural conditions. In particular, our new paradigm is tailored to the evaluation of functions over rings. Indeed, assuming the compatibility of two additively and multiplicatively homomorphic encryption schemes, switching ciphertexts makes it possible to efficiently reconcile the two internal laws. Since no such pair of schemes appeared in the literature, except for the non-interactive case of fully homomorphic encryption which still remains prohibitive in practice, we build the first ElGamal-like encryption scheme over (Zn;x) as a complement to the Paillier encryption scheme over (Zn;+), where n is a strong RSA modulus. Eventually, we also instantiate secure ESP between the two schemes, in the presence of malicious adversaries. Thanks to a pre-processing step, we manage to get an online communication in terms of group elements which depends neither on the security parameter nor on the modulus n. This makes use of a new technique called a refreshable twin-ciphertext pool, which is of independent interest.
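The additive half of the pair is standard Paillier encryption; its homomorphism over (Zn;+), which the switching protocol reconciles with a multiplicative ElGamal-like scheme, can be demonstrated with toy parameters (primes far too small for security, illustration only):

```python
from math import gcd
import secrets

# Toy Paillier over (Z_n, +), g = n + 1 as in the standard scheme.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def enc(m: int) -> int:
    r = 0
    while gcd(r, n) != 1:                      # random r coprime to n
        r = secrets.randbelow(n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts mod n.
assert dec((enc(12) * enc(30)) % n2) == 42
```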
Last updated:  2015-10-14
Fast Oblivious AES: A dedicated application of the MiniMac protocol
Ivan Damgård, Rasmus Winther Zakarias
We present an actively secure multi-party computation of the Advanced Encryption Standard (AES). To the best of our knowledge it is the fastest of its kind to date. We start from an efficient actively secure evaluation of general binary circuits implemented by the authors of [DLT14]. They presented an optimized implementation of the so-called MiniMac protocol [DZ13], which runs in the pre-processing model, and applied it to a binary AES circuit. In this paper we describe how to dedicate the pre-processing to the structure of AES, which significantly improves the throughput and latency of previous actively secure implementations. We get a latency of about 6 ms and an amortised time of about 0.4 ms per AES block, which seems completely adequate for practical applications such as verification of one-time passwords.
Last updated:  2015-10-13
Improved Linear Cryptanalysis of reduced-round SIMON-32 and SIMON-48
Mohamed Ahmed Abdelraheem, Javad Alizadeh, Hoda A. Alkhzaimi, Mohammad Reza Aref, Nasour Bagheri, Praveen Gauravaram
In this paper we analyse two variants of the SIMON family of lightweight block ciphers against linear cryptanalysis and present the best linear cryptanalytic results on these variants of reduced-round SIMON to date. We propose a time-memory trade-off method that finds differential/linear trails for any permutation allowing low Hamming weight differential/linear trails. Our method combines low Hamming weight trails found by the correlation matrix representing the target permutation with heavy Hamming weight trails found using a Mixed Integer Programming model representing the target differential/linear trail. Our method enables us to find a 17-round linear approximation for SIMON-48, which is the best current linear approximation for SIMON-48. Using only the correlation matrix method, we are able to find a 14-round linear approximation for SIMON-32, which is also the current best linear approximation for SIMON-32. The presented linear approximations allow us to mount a 23-round key recovery attack on SIMON-32 and a 24-round key recovery attack on SIMON-48/96, which are the current best results on SIMON-32 and SIMON-48. In addition, we have an attack on 24 rounds of SIMON-32 with marginal complexity.
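The SIMON round function whose linear trails are bounded here combines three rotations with AND and XOR in a Feistel structure. A sketch of one (invertible) round for SIMON-32's 16-bit words, with the key schedule omitted:

```python
MASK = (1 << 16) - 1   # SIMON-32 operates on 16-bit words

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def f(x):
    # The non-linear core whose linear approximations the paper analyses.
    return (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)

def simon_round(x, y, k):
    """One Feistel round of SIMON; key schedule omitted."""
    return y ^ f(x) ^ k, x

def simon_round_inv(x, y, k):
    """Inverse round: undoes simon_round for the same round key k."""
    return y, x ^ f(y) ^ k
```

The low algebraic degree of the AND makes low-Hamming-weight linear trails through `f` plentiful, which is what the correlation-matrix search exploits.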
Last updated:  2016-06-21
Blazing Fast 2PC in the Offline/Online Setting with Security for Malicious Adversaries
Yehuda Lindell, Ben Riva
Recently, several new techniques were presented to dramatically improve key parts of secure two-party computation (2PC) protocols that use the cut-and-choose paradigm on garbled circuits for 2PC with security against malicious adversaries. These include techniques for reducing the number of garbled circuits (Lindell 13, Huang et al.~13, Lindell and Riva 14, Huang et al.~14) and techniques for reducing the overheads besides garbled circuits (Mohassel and Riva 13, Shen and Shelat~13). We design a highly optimized protocol in the offline/online setting that makes use of all state-of-the-art techniques, along with several new techniques that we introduce. A crucial part of our protocol is a new technique for enforcing consistency of the inputs used by the party who garbles the circuits. This technique has both theoretical and practical advantages over previous methods. We present a prototype implementation of our new protocol, which is also the first implementation of the amortized cut-and-choose technique of Lindell and Riva (Crypto 2014). Our prototype achieves a speed of just \emph{$7$ ms in the online stage} and just $74$ ms in the offline stage per 2PC invoked, for securely computing AES in the presence of malicious adversaries (using 9 threads on two 2.9GHz machines located in the same Amazon region). We note that no prior work has gone below one second overall on average for the secure computation of AES for malicious adversaries (nor below 20ms in the online stage). Our implementation securely evaluates SHA-256 (which is a \emph{much bigger circuit}) with $33$ ms online time and $206$ ms offline time, per 2PC invoked.
Last updated:  2015-10-12
Bit Coincidence Mining Algorithm
Koh-ichi Nagao
Here, we propose a new algorithm for solving ECDLP, named the "Bit Coincidence Mining Algorithm", which reduces ECDLP to solving a system of quadratic equations. In this algorithm, the ECDLP of an elliptic curve $E$ defined over $\bF_q$ ($q$ prime or a power of a prime) reduces to solving a quadratic equation system of $d-1$ variables and $d+C_0-1$ equations, where $C_0$ is a small natural number and $d \sim C_0 \, \log_2 q$. This equation system is too large to be solved directly by computer. However, we can show theoretically that the cost of solving it with the XL algorithm is subexponential, under reasonable assumptions on the XL algorithm.
Last updated:  2015-10-12
Polynomial time reduction from 3SAT to solving low first fall degree multivariable cubic equations system
Koh-ichi Nagao
Koster shows that the problem of deciding whether the value of Semaev's formula $S_m(x_1,...,x_m)$ is $0$ is NP-complete. This result does not directly mean that ECDLP is NP-complete, but it suggests as much. Further, Semaev shows that the equation system using $m-2$ copies of $S_3(x_1,x_2,x_3)$, which is equivalent to deciding whether the value of $S_m(x_1,...,x_m)$ is $0$, has constant first fall degree (independent of $m$ and $n$). So, under the first fall degree assumption, its complexity is polynomial in $n$ ($O(n^{Const})$). Supposing $P\ne NP$, as almost all researchers do, this gives a contradiction, and we see that the first fall degree assumption is not true. Koster obtains the NP-completeness by reducing the group membership problem, which is NP-complete, in polynomial time to the problem of deciding whether the value of $S_m(x_1,...,x_m)$ is $0$. In this paper, we discuss this situation from another point of view. We construct, from any 3SAT instance, an equation system defined over an arbitrary field $K$ whose first fall degree is small. The cost of solving this equation system is polynomial under the first fall degree assumption, so the 3SAT problem, which is NP-complete, reduces to a problem in P under that assumption. Since almost all researchers assume $P \ne NP$, this again implies that the first fall degree assumption is not true. However, we can take $K=\bR$ (not a finite field), which means that 3SAT reduces to solving a multivariable equation system defined over $\bR$, and there are many methods for solving such systems by numerical computation. So we must point out the very small possibility that an NP-complete problem reduces to solving a cubic equation system over $\bR$ that can be solved in polynomial time.
Last updated:  2015-10-12
Complexity of ECDLP under the First Fall Degree Assumption
Koh-ichi Nagao
Semaev shows that under the first fall degree assumption, the complexity of ECDLP over $\bF_{2^n}$, where $n$ is the input size, is $O(2^{n^{1/2+o(1)}})$. In his manuscript, the cost of solving the equation system is $O((nm)^{4w})$, where $m$ ($2 \le m \le n$) is the number of decompositions and $w \sim 2.7$ is the linear algebra constant. It is remarkable that the cost of solving the equation system under the first fall degree assumption is polynomial in the input size $n$. He uses a normal factor base, and balances the probability that the decomposition succeeds against the size of the factor base. Here, applying a disjoint factor base to his method, the probability that the decomposition succeeds becomes $\sim 1$, and taking a very small factor base is advantageous from a complexity point of view. Thus we have the result: "Under the first fall degree assumption, the cost of ECDLP over $\bF_{2^n}$, where $n$ is the input size, is $O(n^{8w+1})$." Moreover, using the author's results, in the case of field characteristic $\ge 3$, the first fall degree of the desired equation system is estimated by $\le 3p+1$. (In the $p=2$ case, Semaev shows it is $\le 4$; but that case is exceptional.) So we have the similar result: "Under the first fall degree assumption, the cost of ECDLP over $\bF_{p^n}$, where $n$ is the input size and (small) $p$ is a constant, is $O(n^{(6p+2)w+1})$."
Last updated:  2015-10-19
Fast, uniform, and compact scalar multiplication for elliptic curves and genus 2 Jacobians with applications to signature schemes
Ping Ngai Chung, Craig Costello, Benjamin Smith
We give a general framework for uniform, constant-time one- and two-dimensional scalar multiplication algorithms for elliptic curves and Jacobians of genus~2 curves that operate by projecting to the \(x\)-line or Kummer surface, where we can exploit faster and more uniform pseudomultiplication, before recovering the proper ``signed'' output back on the curve or Jacobian. This extends the work of López and Dahab, Okeya and Sakurai, and Brier and Joye to genus~2, and also to two-dimensional scalar multiplication. Our results show that many existing fast pseudomultiplication implementations (hitherto limited to applications in Diffie--Hellman key exchange) can be wrapped with simple and efficient pre- and post-computations to yield competitive full scalar multiplication algorithms, ready for use in more general discrete logarithm-based cryptosystems, including signature schemes. This is especially interesting for genus~2, where Kummer surfaces can outperform comparable elliptic curve systems. As an example, we construct an instance of the Schnorr signature scheme driven by Kummer surface arithmetic.
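The uniform pseudomultiplication referred to above generalizes the Montgomery ladder, whose fixed per-bit operation pattern is the source of uniformity. A sketch on a toy short Weierstrass curve with full $(x,y)$ coordinates (hypothetical, insecure parameters; the paper's point is to run the ladder on the $x$-line or Kummer surface instead, and recover the signed result afterwards):

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 (illustrative parameters only).
p, a = 97, 2

def ec_add(P, Q):
    if P is None: return Q                    # None represents the identity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                           # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def ladder(k, P, bits=8):
    """Montgomery ladder: one addition and one doubling per bit, always,
    regardless of the bit value -- the uniformity property."""
    R0, R1 = None, P                          # invariant: R1 - R0 = P
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            R0, R1 = ec_add(R0, R1), ec_add(R1, R1)
        else:
            R0, R1 = ec_add(R0, R0), ec_add(R0, R1)
    return R0
```

On the $x$-line the same ladder runs with cheaper pseudo-addition formulas that need only $x$-coordinates, and the paper's pre- and post-computations wrap this to recover the full "signed" point.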
Last updated:  2015-10-13
A note on constructions of bent functions from involutions
Sihem Mesnager
Bent functions are maximally nonlinear Boolean functions. They are important functions introduced by Rothaus and studied first by Dillon and then by many researchers over four decades. Since the complete classification of bent functions seems elusive, many researchers have turned to designing constructions of bent functions. In this note, we show that linear involutions (an important class of permutations) over finite fields give rise to bent functions in bivariate representation. In particular, we exhibit new constructions of bent functions involving binomial linear involutions whose dual functions are obtained directly without computation.
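To make the notion of bentness concrete, here is a minimal Python sketch (not from the paper) that computes the Walsh spectrum of a 4-variable Boolean function and checks the defining property of bent functions, namely $|W_f(a)| = 2^{n/2}$ for all $a$; the function $x_1x_2 \oplus x_3x_4$ is a classic bent example.

```python
def walsh_spectrum(f, n):
    # W_f(a) = sum over all x of (-1)^(f(x) XOR <a, x>)
    spec = []
    for a in range(2 ** n):
        s = 0
        for x in range(2 ** n):
            dot = bin(a & x).count("1") & 1  # inner product <a, x> mod 2
            s += (-1) ** (f(x) ^ dot)
        spec.append(s)
    return spec

def f(x):
    # Classic bent function on 4 variables: x1*x2 XOR x3*x4
    b = [(x >> i) & 1 for i in range(4)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

spec = walsh_spectrum(f, 4)
# f is bent iff every Walsh coefficient has absolute value 2^(n/2) = 4
is_bent = all(abs(w) == 4 for w in spec)
print(is_bent)  # True
```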
Last updated:  2015-10-12
Searching and Sorting of Fully Homomorphic Encrypted Data on Cloud
Ayantika Chatterjee, Indranil Sengupta
The challenge of maintaining confidentiality of data stored in the cloud is of utmost importance to realizing the potential of cloud computing. Storing data in encrypted form may solve the problem, but performing operations on cloud data through repeated decryption and re-encryption raises security issues and diminishes the benefits of the cloud. Hence, fully homomorphic encryption (FHE) is an effective scheme to support arbitrary operations directly on encrypted data. Further, since the cloud mostly acts as a storage database, secure sorting and searching of FHE cloud data is an effective field of research. We have investigated the feasibility of performing comparison-based as well as partition-based sorting on CPA-resistant FHE data, and highlight an important observation: the time requirement of partition-based sorting on FHE data is no better than that of comparison-based sorting, owing to the security of the cryptosystem. We identify the recrypt operation, the denoising step of FHE, as the main reason for the high cost of such operations. Finally, we propose a two-stage sorting technique, termed Lazy sort, with fewer recrypt operations, which proves to perform better on FHE data than both partition-based and comparison-based sorting.
Last updated:  2015-10-12
Analysis of an RFID Authentication Protocol in Accordance with EPC Standards
Behzad Abdolmaleki, Hamidreza Bakhshi, Karim Baghery, Mohammad Reza Aref
In the past few years, the design of RFID authentication protocols in accordance with the EPC Class-1 Generation-2 (EPC C1 G2) standard has been one of the most important challenges in the information security domain. Although RFID systems provide user-friendly services for end-users, they can raise security and privacy concerns. In this paper we analyze the security of an RFID mutual authentication protocol, based on the EPC C1 G2 standard, that was proposed in 2013. The designers claimed that their protocol is secure against different security attacks and provides user privacy. We show that, contrary to their claims, the protocol is not secure against most of these attacks, including replay attacks, tag ID exposure, and spoofing attacks. As a result, their protocol cannot ensure the security of RFID users in different authentication applications. Finally, in order to prevent the aforementioned attacks and overcome all the existing weaknesses, we modify the updating procedure of the protocol and propose a strengthened version of it.
Last updated:  2015-10-16
Guidelines for Using the CryptDB System Securely
Raluca Ada Popa, Nickolai Zeldovich, Hari Balakrishnan
This report has two goals. First, we review guidelines for using the CryptDB system [PRZB11, Pop14] securely by the administrators of database applications. These guidelines were already described in [PRZB11] and elaborated on in [Pop14], but in light of some recent work [NKW15] that applied these guidelines incorrectly, a short document devoted to summarizing these guidelines may be useful. Second, we explain that the study of Naveed, Kamara, and Wright [NKW15] represents an unsafe usage of CryptDB, violating CryptDB’s security guidelines. Hence, the conclusions drawn in that paper regarding CryptDB’s guarantees for medical applications are incorrect: had the guidelines been followed, none of the claimed attacks would have been possible.
Last updated:  2015-10-12
The OPTLS Protocol and TLS 1.3
Hugo Krawczyk, Hoeteck Wee
We present the OPTLS key-exchange protocol, its design, rationale and cryptographic analysis. OPTLS design has been motivated by the ongoing work in the TLS working group of the IETF for specifying TLS 1.3, the next-generation TLS protocol. The latter effort is intended to revamp the security of TLS that has been shown inadequate in many instances as well as to add new security and functional features. The main additions that influence the cryptographic design of TLS 1.3 (hence also of OPTLS) are a new "0-RTT requirement" (0-RTT stands for "zero round trip time") to allow clients that have a previously retrieved or cached public key of the server to send protected data already in the first flow of the protocol; making forward secrecy (PFS) a mandatory requirement; and moving to elliptic curves as the main cryptographic basis for the protocol (for performance and security reasons). Accommodating these requirements calls for moving away from the traditional RSA-centric design of TLS in favor of a protocol based on Diffie-Hellman techniques. OPTLS offers a simple design framework that supports all the above requirements with a uniform and modular logic that helps in the specification, analysis, performance optimization, and future maintenance of the protocol. The current (draft) specification of TLS 1.3 builds upon the OPTLS framework as a basis for the cryptographic core of the handshake protocol, adapting the different modes of OPTLS and its HKDF-based key derivation to the TLS 1.3 context.
Last updated:  2015-10-12
Faster point scalar multiplication on NIST elliptic curves over GF(p) using (twisted) Edwards curves over GF(p³)
Michał Wroński
In this paper we present a new method for fast scalar multiplication on elliptic curves over GF(p) in FPGA using Edwards and twisted Edwards curves over GF(p³). The presented solution works for curves with prime group order (for example, for all NIST curves over GF(p)). This is possible because 2-isogenous twisted Edwards curves over GF(p³) are used instead of short Weierstrass curves over GF(p) for point scalar multiplication. This approach was considered by Verneuil [1], but it is unattractive in software because multiplication in GF(p³) is much more expensive than multiplication in GF(p). Fortunately, in hardware it is possible to implement fast GF(p³) multiplication in FPGA using parallel computation. A single multiplication in GF(p³) is still slightly slower than in GF(p), but operations on twisted Edwards curves require fewer multiplications than operations on short Weierstrass curves. These observations imply that scalar multiplication on a twisted Edwards curve may, in some situations, be up to 26% faster than scalar multiplication on a short Weierstrass curve. Moreover, Edwards and twisted Edwards curve arithmetic admits a unified formula (the same formula for point addition and point doubling), which protects against some kinds of side-channel attacks. We also present a full coprocessor for fast scalar multiplication in FPGA using the described techniques.
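To see why GF(p³) multiplication parallelizes well in hardware, note that it is just polynomial multiplication modulo an irreducible cubic, whose coefficient products are independent. The sketch below uses toy parameters chosen for illustration (p = 7 and the reduction polynomial x³ − 2, irreducible mod 7 since 2 is not a cube mod 7), not the parameters of the paper.

```python
p = 7  # toy prime; real curves use large primes

def gfp3_mul(a, b):
    # Elements of GF(p^3) as triples (a0, a1, a2) meaning a0 + a1*x + a2*x^2.
    # Schoolbook polynomial product: the nine coefficient multiplications are
    # independent, which is what an FPGA can evaluate in parallel.
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    # Reduce modulo x^3 - 2: substitute x^3 = 2, hence x^4 = 2x.
    return ((c[0] + 2 * c[3]) % p, (c[1] + 2 * c[4]) % p, c[2])

# sanity check: x * x * x = 2 in this field
x = (0, 1, 0)
print(gfp3_mul(gfp3_mul(x, x), x))  # (2, 0, 0)
```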
Last updated:  2015-10-12
On Reverse-Engineering S-Boxes with Hidden Design Criteria or Structure
Alex Biryukov, Léo Perrin
S-Boxes are the key components of many cryptographic primitives and designing them to improve resilience to attacks such as linear or differential cryptanalysis is well understood. In this paper, we investigate techniques that can be used to reverse-engineer S-box design and illustrate those by studying the S-Box $F$ of the Skipjack block cipher whose design process so far remained secret. We first show that the linear properties of $F$ are far from random and propose a design criteria, along with an algorithm which generates S-Boxes very similar to that of Skipjack. Then we consider more general S-box decomposition problems and propose new methods for decomposing S-Boxes built from arithmetic operations or as a Feistel Network of up to 5 rounds. Finally, we develop an S-box generating algorithm which can fix a large number of DDT entries to the values chosen by the designer. We demonstrate this algorithm by embedding images into the visual representation of S-box's DDT.
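The DDT mentioned in the last step can be computed directly; the short sketch below builds the difference distribution table whose entries the authors' generating algorithm fixes. PRESENT's public 4-bit S-box serves as a stand-in example here, not Skipjack's $F$.

```python
def ddt(sbox):
    # DDT[din][dout] counts inputs x with S(x) XOR S(x XOR din) == dout.
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            dout = sbox[x] ^ sbox[x ^ din]
            table[din][dout] += 1
    return table

# PRESENT's 4-bit S-box, used as a concrete public example
PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
t = ddt(PRESENT)
# Each row sums to 16 (= 2^4 inputs); entry (0, 0) is always 16.
# The largest entry over nonzero input differences is the differential
# uniformity, which an S-box designer wants small.
```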
Last updated:  2015-10-22
Extended Functionality in Verifiable Searchable Encryption
James Alderman, Christian Janson, Keith M. Martin, Sarah Louise Renwick
When outsourcing the storage of sensitive data to an (untrusted) remote server, a data owner may choose to encrypt the data beforehand to preserve confidentiality. However, it is then difficult to efficiently retrieve specific portions of the data as the server is unable to identify the relevant information. Searchable encryption has been well studied as a solution to this problem, allowing data owners and other authorised users to generate search queries which the server may execute over the encrypted data to identify relevant data portions. However, many current schemes lack two important properties: verifiability of search results, and expressive queries. We introduce Extended Verifiable Searchable Encryption (eVSE) that permits a user to verify that search results are correct and complete. We also permit verifiable computational queries over keywords and specific data values, that go beyond the standard keyword matching queries to allow functions such as averaging or counting operations. We formally define the notion of eVSE within relevant security models and give a provably secure instantiation.
Last updated:  2015-10-11
The Conjoined Microprocessor
Ehsan Aerabi, A. Elhadi Amirouche, Houda Ferradi, Rémi Géraud, David Naccache, Jean Vuillemin
Over the last twenty years, the research community has devised sophisticated methods for retrieving secret information from side-channel emanations, and for resisting such attacks. This paper introduces a new CPU architecture called the Conjoined Microprocessor, which can randomly interleave the execution of two programs at very low extra hardware cost. We developed for the Conjoined Microprocessor a preprocessor tool that turns a target algorithm into two (or more) separate queues, $Q_0$ and $Q_1$, that can run in alternation; together, $Q_0$ and $Q_1$ fulfill the same operation as the original target algorithm. Power-analysis resistance is achieved by randomly alternating the execution of $Q_0$ and $Q_1$, with different runs resulting in different interleavings. Experiments reveal that this architecture is indeed effective against CPA.
Last updated:  2015-10-09
Some Cryptanalytic Results on Zipper Hash and Concatenated Hash
Ashwin Jha, Mridul Nandi
At SAC 2006, Liskov proposed the zipper hash, a technique for constructing secure (indifferentiable from random oracles) hash functions based on weak (invertible) compression functions. The zipper hash is a two-pass scheme, which makes it unfit for practical use; from the theoretical point of view, however, it seemed to be secure, as it had long resisted standard attacks. Recently, Andreeva {\em et al.} gave a forced-suffix herding attack on the zipper hash, and Chen and Jin showed a second preimage attack provided $f_1$ is strong invertible. In this paper, we analyse the construction under the random oracle model as well as when the underlying compression functions have some weakness. We show (second) preimage and herding attacks on an $n$-bit zipper hash and its relaxed variant with $f_1 = f_2$, all of which require fewer than $2^{n}$ online computations. Hoch and Shamir have shown that the concatenated hash offers only $\frac{n}{2}$-bit security when both underlying compression functions are strong invertible. We show that this bound is tight even when only one of the underlying compression functions is strong invertible.
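For readers unfamiliar with the construction, the zipper hash processes the message blocks twice, the second pass in reverse order, chaining the output of the first pass into the second. A toy sketch follows, with compression functions improvised from SHA-256 purely for illustration (the actual analysis concerns weak, invertible compression functions, which SHA-256 is not):

```python
import hashlib

def toy_f(label, h, m):
    # Stand-in compression function built from SHA-256; the label distinguishes
    # f1 from f2. Illustrative only: the paper's setting assumes weak,
    # invertible compression functions.
    return hashlib.sha256(label + h + m).digest()

def zipper_hash(blocks, iv=b"\x00" * 32):
    # First pass: f1 over m1..mk starting from the IV.
    h = iv
    for m in blocks:
        h = toy_f(b"f1", h, m)
    # Second pass: f2 over the same blocks in reverse order (the "zipper").
    for m in reversed(blocks):
        h = toy_f(b"f2", h, m)
    return h

digest = zipper_hash([b"hello", b"world"])
```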
Last updated:  2016-12-22
Cut Down the Tree to Achieve Constant Complexity in Divisible E-Cash
David Pointcheval, Olivier Sanders, Jacques Traoré
Divisible e-cash, proposed in 1991 by Okamoto and Ohta, addresses a practical concern of electronic money: the problem of paying the exact amount. Users of such systems can indeed withdraw a coin of a large value $N$ and then divide it into many pieces of any desired values $V\leq N$. Such a primitive therefore avoids the use of several denominations or change issues. Since its introduction, many constructions have been proposed, but all of them make use of the same framework: they associate each coin with a binary tree, which implies, at the least, logarithmic complexity for spending. In this paper, we propose the first divisible e-cash system without such a tree structure, and so without its inherent downsides. Our construction is the first to achieve constant-time spending while offering quite easy management of the coins. It compares favorably with the state of the art, while being provably secure in the standard model.
Last updated:  2017-10-09
Attacks on the Search-RLWE problem with small error
Hao Chen, Kristin E. Lauter, Katherine E. Stange
The Ring Learning-With-Errors (RLWE) problem shows great promise for post-quantum cryptography and homomorphic encryption. We describe a new attack on the non-dual search RLWE problem with small error widths, using ring homomorphisms to finite fields and the chi-squared statistical test. In particular, we identify a ``subfield vulnerability'' (Section 5.2) and give a new attack which finds this vulnerability by mapping to a finite field extension and detecting non-uniformity with respect to the number of elements in the subfield. We use this attack to give examples of vulnerable RLWE instances in Galois number fields. We also extend the well-known search-to-decision reduction result to Galois fields with any unramified prime modulus $q$, regardless of the residue degree $f$ of $q$, and we use this in our attacks. The time complexity of our attack is $O(n q^{2f})$, where $n$ is the degree of $K$ and $f$ is the {\it residue degree} of $q$ in $K$. We also show an attack on the non-dual (resp. dual) RLWE problem with narrow error distributions in prime cyclotomic rings when the modulus is a ramified prime (resp. any integer). We demonstrate the attacks in practice by finding many vulnerable instances and successfully attacking them. We include the code for all attacks.
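The chi-squared statistic used to detect non-uniformity can be sketched in a few lines; this is the generic Pearson test against a uniform distribution, not the paper's specific attack code.

```python
def chi_squared_stat(counts):
    # Pearson chi-squared statistic against the uniform distribution:
    # sum over bins of (observed - expected)^2 / expected.
    # A large value signals non-uniformity, the signature the attack looks
    # for after mapping RLWE samples into a finite field.
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

print(chi_squared_stat([25, 25, 25, 25]))      # 0.0 (perfectly uniform)
print(chi_squared_stat([70, 10, 10, 10]) > 0)  # True (skewed sample)
```

In the attack, a statistic far beyond the chi-squared threshold for the given number of bins distinguishes the reduced error distribution from uniform.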
Last updated:  2015-12-12
Multilinear Maps over the Integers Using Modulus Switching
Gu Chunsheng
After the CLT13 multilinear map over the integers was broken by Cheon, Han, Lee, Ryu and Stehlé using a zeroizing attack, a new variant, CLT15, was proposed by Coron, Lepoint and Tibouchi by constructing a new zero-testing parameter. Very recently, CLT15 was broken independently by Cheon, Lee and Ryu, and by Minaud and Fouque, using an extension of Cheon et al.'s attack. To immunize CLT13 against zeroizing attacks, we present a new construction of a multilinear map over the integers using a modulus switching technique. The security of our construction depends upon a new hardness assumption.
Last updated:  2015-10-09
Zero-Knowledge Interactive Proof Systems for New Lattice Problems
Claude Crépeau, Raza Ali Kazmi
In this work we introduce a new hard problem in lattices called the Isometric Lattice Problem (ILP) and reduce Linear Code Equivalence over prime fields and Graph Isomorphism to this problem. We also show that this problem has an (efficient prover) perfect zero-knowledge interactive proof; this is the only hard problem in lattices that is known to have this property (with respect to malicious verifiers). Under the assumption that the polynomial hierarchy does not collapse, we also show that ILP cannot be NP-complete. We finally introduce a variant of ILP over the rational radicands and provide similar results for this new problem.
Last updated:  2016-02-22
Improved Differential-Linear Cryptanalysis of 7-round Chaskey with Partitioning
Gaëtan Leurent
In this work we study the security of Chaskey, a recent lightweight MAC designed by Mouha et al., currently being considered for standardisation by ISO/IEC and ITU-T. Chaskey uses an ARX structure very similar to SipHash. We present the first cryptanalysis of Chaskey in the single-user setting, with a differential-linear attack against 6 and 7 rounds, hinting that the full version of Chaskey with 8 rounds has a rather small security margin. In response to these attacks, a 12-round version has been proposed by the designers. To improve the complexity of the differential-linear cryptanalysis, we refine a partitioning technique recently proposed by Biham and Carmeli to improve the linear cryptanalysis of addition operations. We also propose an analogous improvement of differential cryptanalysis of addition operations. Roughly speaking, these techniques reduce the data complexity of linear and differential attacks, at the cost of more processing time per data. They can be seen as the analogue for ARX ciphers of partial key guessing and partial decryption for SBox-based ciphers. When applied to the differential-linear attack against Chaskey, this partitioning technique greatly reduces the data complexity, which also results in a reduced time complexity. While a basic differential-linear attack on 7 rounds takes 2^78 data and time (respectively 2^35 for 6 rounds), the improved attack requires only 2^48 data and 2^67 time (respectively 2^25 data and 2^29 time for 6 rounds). We also show an application of the partitioning technique to FEAL-8X, and we hope that this technique will lead to a better understanding of the security of ARX designs.
Last updated:  2016-02-22
Freestart collision for full SHA-1
Marc Stevens, Pierre Karpman, Thomas Peyrin
This article presents an explicit freestart colliding pair for SHA-1, i.e. a collision for its internal compression function. This is the first practical break of the full SHA-1, reaching all 80 out of 80 steps. Only 10 days of computation on a 64-GPU cluster were necessary to perform this attack, for a cost of approximately $2^{57.5}$ calls to the compression function of SHA-1. This work builds on a continuous series of cryptanalytic advancements on SHA-1 since the theoretical collision attack breakthrough of 2005. In particular, we reuse the recent work on 76-step SHA-1 of Karpman et al. from CRYPTO 2015 that introduced an efficient framework to implement (freestart) collisions on GPUs; we extend it by incorporating more sophisticated accelerating techniques such as boomerangs. We also rely on the results of Stevens from EUROCRYPT 2013 to obtain optimal attack conditions; using these techniques required further refinements for this work. Freestart collisions do not directly imply a collision for the full hash function. However, this work is an important milestone towards an actual SHA-1 collision and it further shows how GPUs can be used very efficiently for this kind of attack. Based on the state-of-the-art collision attack on SHA-1 by Stevens from EUROCRYPT 2013, we are able to present new projections on the computational and financial cost required for a SHA-1 collision computation. These projections are significantly lower than what was previously anticipated by the industry, due to the use of the more cost efficient GPUs compared to regular CPUs. We therefore recommend the industry, in particular Internet browser vendors and Certification Authorities, to retract SHA-1 quickly. We hope the industry has learned from the events surrounding the cryptanalytic breaks of MD5 and will retract SHA-1 before concrete attacks such as signature forgeries appear in the near future.
Last updated:  2016-02-17
Vulnerabilities of ``McEliece in the World of Escher"
Dustin Moody, Ray Perlner
Recently, Gligoroski et al. proposed code-based encryption and signature schemes using list decoding, blockwise triangular private keys, and a nonuniform error pattern based on ``generalized error sets." The general approach was referred to as \emph{McEliece in the World of Escher.} This paper demonstrates attacks which are significantly cheaper than the claimed security level of the parameters given by Gligoroski et al. We implemented an attack on the proposed 80-bit parameters which was able to recover private keys for both encryption and signatures in approximately 2 hours on a single laptop. We further find that increasing the parameters to avoid our attack will require parameters to grow by almost an order of magnitude for signatures, and (at least) two orders of magnitude for encryption.
Last updated:  2015-10-08
Private Genome Analysis through Homomorphic Encryption
Miran Kim, Kristin Lauter
The rapid development of genome sequencing technology allows researchers to access large genome datasets. However, outsourcing the data processing to the cloud poses high risks for personal privacy. The aim of this paper is to give a practical solution for this problem using homomorphic encryption. In our approach, all the computations can be performed in an untrusted cloud without requiring the decryption key or any interaction with the data owner, which preserves the privacy of genome data. In this paper, we present evaluation algorithms for secure computation of the minor allele frequencies and chi-squared statistic in a genome-wide association studies setting. We also describe how to privately compute the Hamming distance and approximate Edit distance between encrypted DNA sequences. Finally, we compare performance details of using two practical homomorphic encryption schemes - the BGV scheme by Gentry, Halevi and Smart and the YASHE scheme by Bos, Lauter, Loftus and Naehrig. Such an approach with the YASHE scheme analyzes data from 400 people within about 2 seconds and picks a variant associated with disease from 311 spots. For another task, using the BGV scheme, it took about 65 seconds to securely compute the approximate Edit distance for DNA sequences of size 5K and figure out the differences between them.
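The encrypted Hamming-distance computation reduces to a XOR-then-sum circuit; the sketch below shows that circuit on plaintext bits. Under FHE, each XOR would be a homomorphic operation on ciphertext bits and the running sum an encrypted adder; the function name and bit encoding here are illustrative, not the paper's implementation.

```python
def hamming_distance(a_bits, b_bits):
    # Hamming distance as the circuit evaluated homomorphically in the paper:
    # XOR corresponding bits, then sum the results. Shown on plaintext bits;
    # under FHE every gate acts on ciphertexts instead.
    assert len(a_bits) == len(b_bits)
    return sum(x ^ y for x, y in zip(a_bits, b_bits))

print(hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]))  # 2
```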
Last updated:  2015-11-21
Improved Linear (hull) Cryptanalysis of Round-reduced Versions of KATAN
Danping Shi, Lei Hu, Siwei Sun, Ling Song
KATAN is a family of block ciphers published at CHES 2009. Based on the mixed-integer linear programming (MILP) technique, we propose the first third-party linear cryptanalysis of KATAN. Furthermore, we evaluate the security of KATAN against linear attacks without ignoring the dependence of the input bits of the $2\times 1$ S-box (the AND operation). Note that in previous analyses this dependence was not considered, and therefore the previous results are not accurate. The 131-round and 120-round attacks mounted on KATAN32 and KATAN48, based on our 84-round and 90-round linear hulls respectively, are the best single-key known-plaintext attacks. In addition, a 94-round linear hull attack is mounted on KATAN64 using our 76-round linear hull.
Last updated:  2015-10-06
When Organized Crime Applies Academic Results - A Forensic Analysis of an In-Card Listening Device
Houda Ferradi, Rémi Géraud, David Naccache, Assia Tria
This paper describes the forensic analysis of what the authors believe to be the most sophisticated smart card fraud encountered to date. In 2010, Murdoch et al. [7] described a man-in-the-middle attack against EMV cards. [7] demonstrated the attack using a general purpose FPGA board, noting that miniaturization is mostly a mechanical challenge, and well within the expertise of criminal gangs. This indeed happened in 2011, when about 40 sophisticated card forgeries surfaced in the field. These forgeries are remarkable in that they embed two chips wired top-to-tail. The first chip is clipped from a genuine stolen card. The second chip plays the role of the man-in-the-middle and communicates directly with the point of sale (PoS) terminal. The entire assembly is embedded in the plastic body of yet another stolen card. The forensic analysis relied on X-ray chip imaging, side-channel analysis, protocol analysis, and microscopic optical inspections.
Last updated:  2015-10-06
SOME REMARKS ON THE LOGARITHMIC SIGNATURES OF FINITE ABELIAN GROUPS
Thuong T. Dang, Tri T. Ton, Van H. Dang, Thuc D. Nguyen
In the paper on the cryptosystem MST3, Svaba and Trung proposed a way to build a cryptosystem based on the concept of logarithmic signatures, choosing Suzuki's group, which is not abelian, for the implementation. Recently, to explain why these methods cannot be applied to abelian groups, Svaba, Trung and Wolf developed some algorithms to factorize fused transversal logarithmic signatures (FTLS). Their attacks can be avoided by some modifications, which is the aim of this paper, where we use the weakness of the discrete logarithm problem (DLP) to propose two cryptosystems. The first is based on a new concept of quasi-logarithmic signatures of finite solvable groups, which is a generalization of logarithmic signatures. The second is built on logarithmic signatures of finite cyclic 2-groups, and includes two interesting examples on Pell's curves and elliptic curves over finite fields.
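A logarithmic signature can be verified by brute force on a toy group: it is an ordered sequence of blocks such that every group element factors uniquely as a product of one element per block. The snippet below checks this for the cyclic 2-group (Z_8, +); the blocks are an illustrative choice, not one from the paper.

```python
from itertools import product

# Candidate logarithmic signature for (Z_8, +): one block per bit position.
blocks = [[0, 1], [0, 2], [0, 4]]

# Enumerate all choices of one element per block and sum them in Z_8.
factorizations = [sum(choice) % 8 for choice in product(*blocks)]

# A logarithmic signature hits every group element exactly once.
is_logarithmic_signature = sorted(factorizations) == list(range(8))
print(is_logarithmic_signature)  # True
```

The same exhaustive check scales only to small groups; the point of the cryptosystems above is that factoring with respect to a hidden logarithmic signature is hard in general.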
Last updated:  2015-11-25
Short Structure-Preserving Signatures
Essam Ghadafi
We construct a new structure-preserving signature scheme in the efficient Type-III asymmetric bilinear group setting with signatures shorter than those of all existing schemes. Our signatures consist of 3 group elements from the first source group and are therefore shorter than those of all existing schemes, since existing ones have at least one component of the signature in the second source group, whose elements' bit size is at least double that of their first-group counterparts. Besides enjoying short signatures, our scheme is fully re-randomizable, which is a useful property for many applications. Our result also constitutes a proof that the impossibility result of Abe et al.~(Crypto 2011) for unilateral structure-preserving signatures in the Type-III setting does not apply to constructions in which the message space is dual in both source groups. Besides checking the well-formedness of the message, verifying a signature in our scheme requires checking $2$ pairing product equations (PPE) and the evaluation of only $5$ pairings in total, which matches the best existing scheme and outperforms many others. Reducing the number of pairings in the verification equations is very important when combining structure-preserving signature schemes with Groth-Sahai proofs, as the number of pairings required for verifying Groth-Sahai proofs of PPE equations grows linearly with the number of pairing monomials in the source equations. We give some examples of how using our new scheme instead of existing ones improves the efficiency of existing cryptographic protocols such as direct anonymous attestation and group-signature-related constructions.
Last updated:  2015-10-05
More Efficient Secure Outsourcing Methods for Bilinear Maps
Öznur Arabacı, Mehmet Sabir Kiraz, İsa Sertkaya, Osmanbey Uzunkol
Bilinear maps are popular cryptographic primitives which are commonly used in various modern cryptographic protocols. However, computing bilinear maps is expensive because of their realization using variants of the Weil and Tate pairings on elliptic curves. With the increasing availability of cloud computing services, devices with limited computational resources can outsource this heavy computation to more powerful external servers. Currently, the checkability probability of the most efficient outsourcing algorithm is $1/2$, and the overall computation requires $4$ point additions in the preimage and $3$ multiplications in the image of the bilinear map, under the one-malicious version of a two-untrusted-program model. In this paper, we propose two efficient new algorithms which decrease not only the memory requirement but also the overall communication overhead.
Last updated:  2015-10-02
Cryptanalysis of the Round-Reduced Kupyna Hash Function
Jian Zou, Le Dong
The Kupyna hash function was selected as the new Ukrainian standard DSTU 7564:2014 in 2015. It is designed to replace the old Commonwealth of Independent States (CIS) standard GOST 34.311-95. The Kupyna hash function is an AES-based primitive, which uses a Merkle-Damgård compression function based on the Even-Mansour design. In this paper, we show the first cryptanalytic attacks on the round-reduced Kupyna hash function. Using the rebound attack, we present a collision attack on 5 rounds of the Kupyna-256 hash function. The complexity of this collision attack is ($2^{120},2^{64}$) (in time and memory). Furthermore, we use a guess-and-determine MitM attack to construct pseudo-preimage attacks on the 6-round Kupyna-256 and Kupyna-512 hash functions, respectively. The complexities of these preimage attacks are ($2^{250.33},2^{250.33}$) and ($2^{498.33},2^{498.33}$) (in time and memory), respectively.
Last updated:  2017-02-15
Building Single-Key Beyond Birthday Bound Message Authentication Code
Nilanjan Datta, Avijit Dutta, Mridul Nandi, Goutam Paul, Liting Zhang
MACs (Message Authentication Codes) are widely adopted in communication systems to ensure data integrity and data origin authentication, e.g. the CBC-MACs in the ISO standard 9797-1. However, all current designs based on block ciphers either suffer from birthday attacks or require long keys. In this paper, we focus on designing a {\em single-keyed block-cipher-based MAC achieving beyond-birthday-bound (BBB) security (in terms of the number of queries) in the standard model}. We develop several tools on sampling distributions which should be quite useful in the analysis of modes of operation. We also show that the sum of two dependent pseudorandom permutations, with some loss of randomness, is still a PRF with BBB security. We then demonstrate a generic composition (including the single-keyed one) achieving BBB security, provided that the underlying internal construction satisfies some variants of cover-freeness (which we call {\em extended cover-free} and {\em pseudo-cover-free}) and block-wise universal properties. By applying this result, we finally provide two concrete single-keyed constructions which achieve BBB security. These two constructions, called \tx{1kf9} and \tx{1k\_PMAC+}, are simple one-key variants of \tx{3kf9} and \tx{PMAC\_Plus} respectively. Thus, we solve a long-standing open problem in designing single-keyed BBB-secure MACs.