All papers in 2012 (Page 7 of 733 results)

Last updated:  2013-01-17
Toward Practical Private Access to Data Centers via Parallel ORAM
Jacob R. Lorch, Bryan Parno, James Mickens, Mariana Raykova, Joshua Schiffman
Recent events have shown online service providers the perils of possessing private information about users. Encrypting data mitigates but does not eliminate this threat: the pattern of data accesses still reveals information. Thus, we present Shroud, a general storage system that hides data access patterns from the servers running it, protecting user privacy. Shroud functions as a virtual disk with a new privacy guarantee: the user can look up a block without revealing the block's address. Such a virtual disk can be used for many purposes, including map lookup, microblog search, and social networking. Shroud aggressively targets hiding accesses among hundreds of terabytes of data. We achieve our goals by adapting oblivious RAM (ORAM) algorithms to enable large-scale parallelization. Specifically, we show, via new techniques such as oblivious aggregation, how to securely use many inexpensive secure coprocessors acting in parallel to improve request latency. Our evaluation combines large-scale emulation with an implementation on secure coprocessors and suggests that these adaptations bring private data access closer to practicality.
Last updated:  2012-03-21
CFS Software Implementation
Gregory Landais, Nicolas Sendrier
CFS is the first practical code-based signature scheme. In this paper, we present the initial scheme and its evolutions, the attacks it had to face, and the countermeasures applied. We show that, all things considered, the system remains practical, and we present a software implementation of the signing primitive. For eighty bits of security, our implementation produces a signature in 1.3 seconds on a single core of an Intel Xeon W3670 at 3.20 GHz. Moreover, the computation is easy to distribute, and we can take full advantage of multicore processors, reducing the signature time to a fraction of a second in software.
Last updated:  2013-06-04
Composition Theorems for CCA Cryptographic Security
Rodolphe Lampe, Jacques Patarin
We present two new theorems to analyze the indistinguishability of the composition of cryptographic permutations and the indistinguishability of the XOR of cryptographic functions. Using the H Coefficients technique of \cite{Patarin-2001}, for any two families of permutations $F$ and $G$ with CCA distinguishability advantage $\leq\alpha_F$ and $\leq\alpha_G$, we prove that the set of permutations $f\circ g, f\in F, g\in G$ has CCA distinguishability advantage $\leq\alpha_F\times\alpha_G$. This simple composition result gives a geometric gain in CCA indistinguishability when composing block ciphers (unlike previously known classical composition theorems). As an example, we apply this new theorem to analyze $4r$- and $6r$-round Feistel schemes with $r\geq 1$, and we improve the previous best known bounds for a certain range of queries. Similarly, for any two families of functions $F$ and $G$ with distinguishability advantage $\leq\alpha_F$ and $\leq\alpha_G$, we prove that the set of functions $f\oplus g, f\in F, g\in G$ has distinguishability advantage $\leq\alpha_F\times\alpha_G$. As an example, we apply this new theorem to analyze the XOR of $2r$ permutations, and we improve the previous best known bounds for a certain range of queries.
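For a sense of the geometric gain claimed above, the multiplicative bound can be contrasted with a classical additive (hybrid-argument) composition bound; the numbers below are an editor's illustration, not figures from the paper:

```latex
% Illustrative numbers, not from the paper: multiplicative composition
% bound versus a classical additive composition bound, for two permutation
% families each with CCA advantage at most 2^{-10}.
\[
  \underbrace{\alpha_F \times \alpha_G = 2^{-10} \times 2^{-10}
    = 2^{-20}}_{\text{new composition theorem}}
  \qquad\text{vs.}\qquad
  \underbrace{\alpha_F + \alpha_G = 2^{-9}}_{\text{classical composition}}
\]
```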
Last updated:  2013-09-16
Broadcast (and Round) Efficient Verifiable Secret Sharing
Juan Garay, Clint Givens, Rafail Ostrovsky, Pavel Raykov
Verifiable secret sharing (VSS) is a fundamental cryptographic primitive, lying at the core of secure multi-party computation (MPC) and, as the distributed analogue of a commitment functionality, used in numerous applications. In this paper we focus on unconditionally secure VSS protocols with honest majority. In this setting it is typically assumed that parties are connected pairwise by authenticated, private channels, and that in addition they have access to a ``broadcast'' channel. Because broadcast {\em cannot} be simulated on a point-to-point network when a third or more of the parties are corrupt, it is impossible to construct VSS (and more generally, MPC) protocols in this setting without using a broadcast channel (or some equivalent addition to the model). A great deal of research has focused on increasing the efficiency of VSS, primarily in terms of round complexity. In this work we consider a refinement of the round complexity of VSS, by adding a measure we term {\em broadcast complexity}. We view the broadcast channel as an expensive resource and seek to minimize the number of rounds in which it is invoked as well. We construct a (linear) VSS protocol which uses the broadcast channel only {\em twice} in the sharing phase, while running in an overall constant number of rounds.
Last updated:  2013-08-24
Outsider-Anonymous Broadcast Encryption with Sublinear Ciphertexts
Nelly Fazio, Irippuge Milinda Perera
In the standard setting of broadcast encryption, information about the receivers is transmitted as part of the ciphertext. In several broadcast scenarios, however, the identities of the users authorized to access the content are often as sensitive as the content itself. In this paper, we propose the first broadcast encryption scheme with sublinear ciphertexts to attain meaningful guarantees of receiver anonymity. We formalize the notion of \emph{outsider-anonymous broadcast encryption} (oABE), and describe generic constructions in the standard model that achieve outsider-anonymity under adaptive corruptions in the chosen-plaintext and chosen-ciphertext settings. We also describe two constructions with enhanced decryption, one under the gap Diffie-Hellman assumption, in the random oracle model, and the other under the decisional Diffie-Hellman assumption, in the standard model.
Last updated:  2013-01-24
Provably Secure Distance-Bounding: an Analysis of Prominent Protocols
Marc Fischlin, Cristina Onete
Distance-bounding protocols prevent man-in-the-middle attacks by measuring response times. Recently, Dürholz et al.~\cite{DueFisKasOne11} formalized the four attacks such protocols typically address: (1) mafia attacks, where the adversary must impersonate the prover to a verifier in the presence of an honest prover; (2) terrorist attacks, where the adversary gets some offline prover support to impersonate; (3) distance attacks, where provers claim to be closer to verifiers than they really are; and (4) impersonation security, where adversaries impersonate provers during lazy phases. Dürholz et al.~\cite{DueFisKasOne11} also formally analyzed the security of (an enhanced version of) the construction of Kim and Avoine~\cite{KimAvo09}. In this paper, we quantify the security of some other well-known distance-bounding protocols, namely: Brands and Chaum~\cite{BrandsChaum93}, Hancke-Kuhn~\cite{HanKuhn05}, Avoine and Tchamkerten~\cite{AvTcham09}, Reid et al.~\cite{ReidGonzTangSen07}, the Swiss-knife protocol~\cite{KimAvoKoeStaPer09}, and the very recent proposal of Yang, Zhuang, and Wong~\cite{YangZhuWong12}.
In particular, our main results show that (1) relating responses to a long-term secret key, as is the case for most protocols aiming to thwart terrorist fraud attacks, may make protocols vulnerable to so-called key-learning mafia fraud attacks, where the adversary learns a key bit by bit, by flipping a single time-critical response; (2) though relating responses can be a bad idea for mafia fraud, it sometimes enforces distance-fraud resistance, in particular by thwarting the attack of Boureanu et al.~\cite{Vau12}; (3) none of the three allegedly terrorist-fraud resistant protocols, i.e.~\cite{KimAvoKoeStaPer09,ReidGonzTangSen07,YangZhuWong12}, is in fact terrorist fraud resistant; for two of these protocols this is a matter of syntax, i.e.~they do not meet the strong security requirements given by Dürholz et al.; the attack against the third protocol, i.e.~\cite{YangZhuWong12}, however, is almost trivial; (4) due to the absence of a second authentication phase, the protocol of Yang, Zhuang, and Wong is vulnerable to denial-of-service attacks. In light of our results, we also review definitions of terrorist fraud, arguing that, while the strong model in~\cite{DueFisKasOne11} may at the moment be more appropriate than the weaker intuition, it may in fact be too strong to capture terrorist fraud resistance.
Last updated:  2012-03-13
Additive autocorrelation of some classes of cubic semi-bent Boolean functions
Deep Singh, Maheshanand Bhaintwal
In this paper, we investigate the relation between the autocorrelation of a cubic Boolean function $f\in \mathcal{B}_n$ at $a \in \mathbb{F}_{2^n}$ and the kernel of the bilinear form associated with $D_{a}f$, the derivative of $f$ at $a$. Further, we apply this technique to obtain tight upper bounds on the absolute indicator and the sum-of-squares indicator for the avalanche characteristics of various classes of highly nonlinear non-bent cubic Boolean functions.
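Both avalanche indicators mentioned above are straightforward to compute by brute force at small sizes; a minimal sketch (the cubic function and the choice $n = 4$ are an editor's illustration, not from the paper):

```python
# Brute-force sketch (function and size are illustrative, not from the paper):
# the absolute indicator and the sum-of-squares indicator of a cubic Boolean
# function, computed from its autocorrelation spectrum.
n = 4
f = lambda x: (x & 1) & ((x >> 1) & 1) & ((x >> 2) & 1)  # cubic monomial x0*x1*x2

def autocorrelation(a):
    """Delta_f(a) = sum over x of (-1)^(f(x) + f(x XOR a))."""
    return sum(1 if f(x) == f(x ^ a) else -1 for x in range(2 ** n))

acf = [autocorrelation(a) for a in range(2 ** n)]
absolute_indicator = max(abs(v) for v in acf[1:])  # max over nonzero shifts a
sum_of_squares = sum(v * v for v in acf)           # sigma_f

print(acf[0], absolute_indicator, sum_of_squares)  # acf[0] is always 2^n
```

Since this toy $f$ ignores one variable, it has a linear structure (its autocorrelation at $a = 8$ equals $2^n$), so its absolute indicator is maximal; highly nonlinear functions are chosen precisely to keep these indicators small.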
Last updated:  2012-03-13
Compact Implementation of Threefish and Skein on FPGA
Nuray At, Jean-Luc Beuchat, Ismail San
The SHA-3 finalist Skein is built from the tweakable Threefish block cipher. In order to have a better understanding of the computational efficiency of Skein (resource sharing, memory access scheme, scheduling, etc.), we design a low-area coprocessor for Threefish and describe how to implement Skein on our architecture. We harness the intrinsic parallelism of Threefish to design a pipelined ALU and interleave several tasks in order to achieve a tight scheduling. From our point of view, the main advantage of Skein over other SHA-3 finalists is that the same coprocessor allows one to encrypt or hash a message.
Last updated:  2012-06-20
Short and Efficient Expressive Attribute-Based Signature in the Standard Model
Aijun Ge, Cheng Chen, Chuangui Ma, Zhenfeng Zhang
Attribute-based signatures allow a signer to announce his endorsement using a signing policy without revealing his identity, and only a signer whose attributes satisfy the signing policy can generate a valid signature. Attribute-based signatures can provide flexible access control policies and have many applications in real-world scenarios requiring both privacy and authentication. In this paper, we present a new construction of efficient attribute-based signature schemes based on Waters' ciphertext-policy attribute-based encryption schemes. Our scheme is existentially unforgeable in the standard model for a selective adversary and achieves perfect privacy. Compared with other attribute-based signature schemes supporting general access structures, our scheme has the shortest signature size and the best computational efficiency.
Last updated:  2012-03-13
On Securing Communication From Profilers
Sandra Diaz-Santiago, Debrup Chakraborty
A profiling adversary is an adversary which aims to classify messages into pre-defined profiles and thus gain useful information regarding the sender or receiver of such messages. Usual chosen-plaintext secure encryption schemes are capable of securing information from profilers, but these schemes provide more security than required for this purpose. In this paper we study the requirements for an encryption algorithm to be secure only against profilers and finally give a precise notion of security for such schemes. We also present a full protocol for secure (against profiling adversaries) communication, which neither requires a key exchange nor a public key infrastructure. Our protocol guarantees security against non-human profilers and is constructed using CAPTCHAs and secret sharing schemes.
Last updated:  2012-03-13
Injection of transient faults using electromagnetic pulses -Practical results on a cryptographic system-
A. Dehbaoui, J. M. Dutertre, B. Robisson, P. Orsatelli, P. Maurine, A. Tria
This article considers the use of magnetic pulses to inject transient faults into the calculations of a RISC micro-controller running the AES algorithm. A magnetic coil is used to generate the pulses. It induces computational faults without any physical contact with the device. The injected faults are proved to be constant (i.e. data independent) under certain experimental conditions. This behaviour, combined with the ability to choose the faulted bytes thanks to timing accuracy in the fault injection process, makes it possible to implement most of the state-of-the-art fault attack schemes.
Last updated:  2012-05-09
Efficient Arithmetic on Elliptic Curves over Fields of Characteristic Three
Reza R. Farashahi, Hongfeng Wu, Chang-An Zhao
This paper presents new explicit formulas for point doubling, tripling, and addition on Hessian curves and their equivalent Weierstrass curves over finite fields of characteristic three. The cost of basic point operations is lower than that of all previously proposed formulas. The new doubling, mixed addition, and tripling formulas in projective coordinates require 3M+2C, 8M+1C+1D, and 4M+4C+1D respectively, where M, C, and D denote the costs of a field multiplication, a cubing, and a multiplication by a constant. Finally, we present several examples of ordinary elliptic curves in characteristic three for high security levels.
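As a quick sanity check on the quoted operation counts, one can total the formulas under assumed relative costs; the 0.8M weight for a cubing and 0.2M for a constant multiplication below are an editor's illustrative assumptions, not measurements from the paper:

```python
# Back-of-the-envelope totals for the quoted operation counts.  The relative
# costs of a cubing (C) and a constant multiplication (D) are assumed values
# for illustration, not figures from the paper.
M, C, D = 1.0, 0.8, 0.2

doubling       = 3 * M + 2 * C            # 3M + 2C
mixed_addition = 8 * M + 1 * C + 1 * D    # 8M + 1C + 1D
tripling       = 4 * M + 4 * C + 1 * D    # 4M + 4C + 1D

for name, cost in [("doubling", doubling),
                   ("mixed addition", mixed_addition),
                   ("tripling", tripling)]:
    print(f"{name}: {cost:.1f} M-equivalents")
```

Under these weights a tripling (7.4M) is much cheaper than a doubling-plus-addition chain, which is what makes base-3 scalar recoding attractive in characteristic three.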
Last updated:  2012-03-13
An Efficient Multistage Secret Sharing Scheme Using Linear One-way Functions and Bilinear Maps
Mitra Fatemi, Taraneh Eghlidos, Mohammadreza Aref
In a Multistage Secret Sharing (MSSS) scheme, authorized subsets of participants can reconstruct a number of secrets in consecutive stages. A One-Stage Multisecret Sharing (OSMSS) scheme is a special case of MSSS schemes in which all secrets are recovered simultaneously. In these schemes, in addition to the individual shares, the dealer should provide the participants with a number of public values related to the secrets. The fewer the public values, the more efficient the scheme. It is desirable that MSSS and OSMSS schemes provide computational security; however, we show in this paper that OSMSS schemes do not fulfill this promise. Furthermore, by introducing a new multi-use MSSS scheme based on linear one-way functions, we show that previous schemes can be improved in the number of public values. Compared to previous MSSS schemes, the proposed scheme has less complexity in the share distribution process. Finally, using bilinear maps, the participants are provided with the ability to verify the shares released by other participants. To the best of our knowledge, this is the first verifiable MSSS scheme in which the number of public values depends linearly on the number of participants and secrets and which does not require secure communication channels.
Last updated:  2012-03-04
Password Protected Smart Card and Memory Stick Authentication Against Off-line Dictionary Attacks
Yongge Wang
We study the security requirements for remote authentication with a password-protected smart card. In recent years, several protocols for password-based authenticated key exchange have been proposed. These protocols are used to protect password-based authentication between a client and a remote server. In this paper, we focus on password-based authentication between a smart card owner and the smart card via an untrusted card reader. In a typical scenario, a smart card owner inserts the smart card into an untrusted card reader and inputs the password via the card reader so that the smart card can carry out the process of authentication with a remote server. In this case, we want to guarantee that the card reader will not be able to impersonate the card owner in the future without the smart card itself. Furthermore, the smart card could be stolen. If this happens, we want the assurance that an adversary could not use the smart card to impersonate the card owner even though the sample space of passwords may be small enough to be enumerated by an off-line adversary.
Last updated:  2012-03-04
Accelerating the Final Exponentiation in the Computation of the Tate Pairings
Taechan Kim, Sungwook Kim, Jung Hee Cheon
Tate pairing computation consists of two parts: the Miller step and the final exponentiation step. In this paper, we investigate how to accelerate the final exponentiation step. Consider an order-$r$ subgroup of an elliptic curve defined over $\mathbb{F}_q$ with embedding degree $k$. The final exponentiation in the Tate pairing is an exponentiation of an element in $\mathbb{F}_{q^k}$ by $(q^k-1)/r$. The hardest part of this computation is raising to the power $\lambda:=\varphi_k(q)/r$. Write it as $\lambda=\lambda_0+\lambda_1q+\cdots+\lambda_{d-1}q^{d-1}$ in the $q$-ary representation. When using multi-exponentiation techniques with precomputation, the final exponentiation cost mostly depends on $\kappa(\lambda)$, the bit size of the maximum of the $|\lambda_i|$. In many parametrized pairing-friendly curves, the value $\kappa$ is about $\left(1-\frac{1}{\rho\varphi(k)}\right)\log q$ where $\rho=\log q/\log r$, while random curves have $\kappa \approx \log q$. We analyze how this small $\kappa$ is obtained for parametrized elliptic curves, and show that $\left(1-\frac{1}{\rho\varphi(k)}\right)\log q$ is almost optimal in the sense that it is the lower bound for all known construction methods of parametrized pairing-friendly curves. This method is useful, but has the limitation that it applies only to parametrized curves and excludes many elliptic curves. In the second part of our paper, we propose a method to obtain a modified Tate pairing with smaller $\kappa$ for {\em any elliptic curve}. More precisely, our method efficiently finds an integer $m$ such that $\kappa(m\lambda)=\left(1-\frac{1}{\rho\varphi(k)}\right)\log q$ using lattice reduction. Using this modified Tate pairing, we can reduce the number of squarings in the final exponentiation to about a fraction $\left(1-\frac{1}{\rho\varphi(k)}\right)$ of that in the usual Tate pairing. We apply our method to several known pairing-friendly curves to verify the expected speedup.
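As a concrete instance of the small-$\kappa$ phenomenon for parametrized curves, the sketch below uses the Barreto-Naehrig family ($k=12$, $\rho=1$, $\varphi(k)=4$), for which $\varphi_{12}(q)=q^4-q^2+1$: the signed base-$q$ digits of $\lambda$ come out roughly $(3/4)\log q$ bits, matching $\left(1-\frac{1}{\rho\varphi(k)}\right)\log q$. The parameter $x$ is a hypothetical choice by the editor, not taken from the paper:

```python
# Sketch with hypothetical parameters (not from the paper): for a
# Barreto-Naehrig curve (k = 12, rho = 1, phi(k) = 4), the hard part of the
# final exponentiation is lambda = varphi_12(q)/r = (q^4 - q^2 + 1)/r.
# Its signed base-q digits are about (3/4) * log2(q) bits.
x = 2**62 + 2**55 + 1                      # illustrative BN parameter
q = 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1  # BN field size q(x)
r = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1  # BN group order r(x)

lam, rem = divmod(q**4 - q**2 + 1, r)
assert rem == 0                            # r divides varphi_12(q) by construction

digits, t = [], lam
while t:
    d = t % q
    if d > q // 2:                         # balanced (signed) digit in (-q/2, q/2]
        d -= q
    digits.append(d)
    t = (t - d) // q

kappa = max(abs(d).bit_length() for d in digits)
print(kappa, q.bit_length())               # kappa is roughly 3/4 of log2(q)
```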
Last updated:  2012-03-04
Stronger Public Key Encryption Schemes Withstanding RAM Scraper Like Attacks
S. Sree Vivek, S. Sharmila Deva Selvi, C. Pandu Rangan
The security of an encryption system is formally established through the properties of an abstract game played between a challenger and an adversary. During the game, the adversary is provided with all the information that he could obtain in the given attack model, so that he is fully empowered to carry out a break. This information is provided to the adversary through the answers to appropriately defined oracle queries. Thus, during the game, the adversary asks various oracle queries, obtains the related responses, and has them at his disposal to effect a break. This kind of interaction between the challenger and the adversary is called training the adversary. For example, in the lunchtime attack model, the adversary may ask encryption as well as decryption oracle queries. Indistinguishability of ciphertexts under this model (the IND-CCA2 model) is considered to offer the strongest security for confidentiality. In the recent past, thanks to the availability of powerful malware, an adversary has been able to obtain additional information beyond what he could normally obtain in the CCA2 model. In order to realistically model the threats posed by such malware, we need to empower the adversary with answers to a few other kinds of oracles. This paper initiates such research to counter malware such as RAM scrapers, and extends the CCA2 model with additional oracles to capture the effect of RAM scrapers precisely. After discussing the new kind of attack/threat and the related oracle, we show that the transformation in \cite{FujisakiO992cry} that yields a CCA2 secure system does not offer security against RAM-scraper-based attacks. We refer to the new decryption oracle as the glass-box decryption oracle. We then propose two new schemes that offer security against glass-box decryption, and establish formal security proofs for the new schemes in the random oracle and standard models.
Last updated:  2012-05-17
Universally Composable Security With Local Adversaries
Ran Canetti, Margarita Vald
The traditional approach to formalizing ideal-model based definitions of security for multi-party protocols models adversaries (both real and ideal) as centralized entities that control all parties that deviate from the protocol. While this centralized-adversary modeling suffices for capturing basic security properties such as secrecy of local inputs and correctness of outputs against coordinated attacks, it turns out to be inadequate for capturing security properties that involve restricting the sharing of information between separate adversarial entities. Indeed, to capture collusion-freeness and game-theoretic solution concepts, Alwen et al. [Crypto, 2012] propose a new ideal-model based definitional framework that involves a decentralized adversary. We propose an alternative framework to that of Alwen et al. We then observe that our framework allows capturing not only collusion-freeness and game-theoretic solution concepts, but also several other properties that involve the restriction of information flow among adversarial entities. These include some natural flavors of anonymity, deniability, timing separation, and information confinement. We also demonstrate the inability of existing formalisms to capture these properties. We then prove strong composition properties for the proposed framework, and use these properties to demonstrate the security, within the new framework, of two very different protocols for securely evaluating any function of the parties' inputs.
Last updated:  2015-02-02
Clash Attacks on the Verifiability of E-Voting Systems
Ralf Kuesters, Tomasz Truderung, Andreas Vogt
Verifiability is a central property of modern e-voting systems. Intuitively, verifiability means that voters can check that their votes were actually counted and that the published result of the election is correct, even if the voting machines/authorities are (partially) untrusted. In this paper, we raise awareness of a simple attack, which we call a clash attack, on the verifiability of e-voting systems. The main idea behind this attack is that voting machines manage to provide different voters with the same receipt. As a result, the voting authorities can safely replace ballots with new ballots, and thereby manipulate the election without being detected. This attack does not seem to have attracted much attention in the literature. Even though the attack is quite simple, we show that, under reasonable trust assumptions, it applies to several e-voting systems that have been designed to provide verifiability. In particular, we show that it applies to the prominent ThreeBallot and VAV voting systems as well as to two e-voting systems that have been deployed in real elections: the Wombat Voting system and a variant of the Helios voting system. We discuss countermeasures for each of these systems and, for (various variants of) Helios, provide a formal analysis based on a rigorous definition of verifiability. More precisely, our analysis of Helios is with respect to the more general notion of accountability, which is often overlooked in the area of e-voting.
Last updated:  2012-03-04
Cryptanalysis of auditing protocol proposed by Wang et al. for data storage security in Cloud Computing
XU Chun-xiang, HE Xiao-hu, Daniel Abraha
Cloud computing, the on-demand and remote provision of computational resources, has long been anticipated as a computing utility. It helps users store their data in the cloud and enjoy high-quality services. However, users do not have physical possession of their own data, so it is indispensable to create mechanisms to protect the security of the stored data. Thus, auditing protocols have been introduced to ensure the authenticity and integrity of the outsourced data. Wang et al. proposed a public auditing protocol in 2010 and argued that it can resist various known attacks. In this paper, we analyze the protocol and find serious security flaws in it. Our analysis shows that the public auditing scheme proposed by Wang et al. cannot resist existential forgery under a known-message attack. Moreover, we show that the protocol is vulnerable to attacks by a malicious cloud server and an outside attacker through four specific attack scenarios. The results show that the protocol cannot provide secure data storage for users.
Last updated:  2012-03-04
On Hardening Leakage Resilience of Random Extractors for Instantiations of Leakage Resilient Cryptographic Primitives
Danyang Chen, Yongbin Zhou, Yang Han, Rui Xue, Qing He
Random extractors are proven to be important building blocks in constructing leakage resilient cryptographic primitives. Nevertheless, recent efforts showed that, in the context of side-channel attacks, they are likely more leaky than other elementary components (e.g. block ciphers) in unprotected implementations of these primitives. In this context, from the adversary's point of view, the extractors themselves could become the point of interest. This paper extends the study of the leakage resilience of random extractors to the case of protected instantiations. Specifically, we investigate the feasibility of applying classical countermeasures to improve the leakage resilience of cryptographic components and/or primitives against side-channel attacks, and then show how to evaluate the physical leakage resilience of these instantiations theoretically and practically. The countermeasures we consider are masking, shuffling, and combinations of the two. Taking one leakage-resilient stream cipher presented at FOCS 2008 as a case study, we not only examine the leakage resilience of the underlying extractor, but also discuss how leakages from the extractor and from the underlying pseudo-random generator respectively impact the leakage resilience of the stream cipher as a whole. On the one hand, our theoretical and experimental results, which are consistent with each other, justify some existing observations. On the other hand, and more importantly, our results reveal some new observations that contrast with the known ones, which indicates that previous observations are (most likely) incomplete. We argue that our work is of both theoretical interest and practical significance, and may help foster further research on the design and implementation of random extractors in leakage-resilient cryptography.
Last updated:  2013-04-02
On the Collision and Preimage Security of MDC-4 in the Ideal Cipher Model
Bart Mennink
We present the first collision and preimage security analysis of MDC-4, a 24-year-old construction for transforming an n-bit block cipher into a 2n-bit hash function. We start with the MDC-4 compression function based on two independent block ciphers, and prove that any adversary with query access to the underlying block ciphers requires at least 2^{5n/8} queries (asymptotically) to find a collision, and at least 2^{5n/4} queries to find a preimage. These results then directly carry over to the MDC-4 hash function design. Next, we consider MDC-4 based on one single block cipher, and confirm that the collision bound carries over to the single block cipher setting. In the case of preimage resistance we present a more negative result: for a target image with identical left and right halves, an MDC-4 preimage in the single block cipher setting can be found in approximately 2^n queries. Yet, restricted to target images with different left and right halves, the preimage security bound of 2^{5n/4} queries is nevertheless retained.
Last updated:  2012-02-29
On The Nonlinearity of Maximum-length NFSR Feedbacks
Meltem Sonmez Turan
Linear Feedback Shift Registers (LFSRs) are the main building block of many classical stream ciphers; however, due to their inherent linearity, most LFSR-based designs do not offer the desired security levels. In the last decade, using Nonlinear Feedback Shift Registers (NFSRs) in stream ciphers has become very popular. However, the theory of NFSRs is not well understood: there is no efficient method for constructing a cryptographically strong feedback function with maximum period, and, given a feedback function, it is hard to predict the period. In this paper, we study maximum-length NFSRs, focusing on the nonlinearity of their feedback functions. First, we provide some upper bounds on the nonlinearity of maximum-length feedback functions, and then we study the feedback functions having nonlinearity 2 in detail. We also show some techniques to improve the nonlinearity of a given feedback function using cross-joining.
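The scarcity of maximum-period feedback functions can be felt even at toy sizes; the sketch below (an editor's illustration, not from the paper) exhaustively tests all feedback functions of a 3-stage NFSR and counts those reaching the maximum period $2^3 = 8$ (binary de Bruijn cycles):

```python
# Toy exhaustive search (illustrative, not from the paper): among all
# 2^(2^3) = 256 feedback functions of a 3-stage NFSR, find those whose
# state cycle reaches the maximum period 2^3 = 8.
n = 3

def period(truth_table, n, start=0):
    """Steps until the NFSR state returns to `start`, or 0 if it never does."""
    limit = 1 << n
    state, steps = start, 0
    while steps < limit:
        fb = (truth_table >> state) & 1               # feedback bit f(state)
        state = ((state << 1) & (limit - 1)) | fb     # shift in the feedback
        steps += 1
        if state == start:
            return steps
    return 0

max_length = [tt for tt in range(2 ** (2 ** n)) if period(tt, n) == 2 ** n]
print(len(max_length), "of", 2 ** (2 ** n), "feedbacks reach the maximum period")
```

Only a tiny fraction of candidate feedbacks are maximum-length, and the fraction shrinks rapidly with n, which gives some intuition for why no efficient construction is known.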
Last updated:  2012-02-29
On the Immunity of Rotation Symmetric Boolean Functions Against Fast Algebraic Attacks
Yin Zhang, Meicheng Liu, Dongdai Lin
In this paper, it is shown that an $n$-variable rotation symmetric Boolean function $f$ with $n$ even but not a power of 2 admits a rotation symmetric function $g$ of degree at most $e\leq n/3$ such that the product $gf$ has degree at most $n-e-1$.
Last updated:  2012-02-29
Finding Optimal Formulae for Bilinear Maps
Razvan Barbulescu, Jérémie Detrey, Nicolas Estibals, Paul Zimmermann
We describe a unified framework to search for optimal formulae evaluating bilinear --- or quadratic --- maps. This framework applies to polynomial multiplication and squaring, finite field arithmetic, matrix multiplication, etc. We then propose a new algorithm to solve problems in this unified framework. With an implementation of this algorithm, we prove the optimality of various published upper bounds, and find improved upper bounds.
Last updated:  2012-03-20
Chosen-Ciphertext Secure Efficiently Searchable Encryption in the Standard Model
Yang Cui, Kirill Morozov
In the standard model, deterministic public-key encryption (PKE) secure against chosen-ciphertext attacks by a privacy adversary (PRIV-CCA) is known to be built only from lossy trapdoor functions, as demonstrated by Boldyreva et al. at Crypto 2008. We show that the method of achieving IND-CCA security via correlated products, recently introduced by Rosen and Segev at TCC 2009, can be used to achieve PRIV-CCA secure PKE of uniform messages from any trapdoor permutation (TDP) in the standard model. Our schemes are {\em not} deterministic as a whole; however, randomness is applied only to a particular part of the ciphertext, a one-time signature used for the validity check. This allows efficient (logarithmic in the database size) search on encrypted data. In a nutshell, our first construction (which is generic) starts from any IND-CPA secure PKE (implied by TDP), builds its k-correlated version, transforms it into k-correlated PRIV-CPA encryption, and finally lifts it up to PRIV-CCA security. In contrast to Rosen and Segev's correlated products method, we do not assume one-wayness under correlated inputs, so any IND-CPA secure PKE can be used in our construction. In addition, we present a second construction, more efficient than the first one, based on assumptions from coding theory and any TDP. Note that, for the price of allowing some limited use of randomness, we achieve PRIV security for multiple messages, which is strictly stronger than the single-message notion PRIV1 achieved by the scheme of Boldyreva et al. at Crypto 2008.
Last updated:  2015-12-18
On the Optimality of Lattices for the Coppersmith Technique
Yoshinori Aono, Manindra Agrawal, Takakazu Satoh, Osamu Watanabe
We investigate a method for finding small integer solutions of a univariate modular equation, introduced by Coppersmith and extended by May; we will refer to this method as the Coppersmith technique. This paper provides a way to analyze general limitations of the lattice construction for the Coppersmith technique. Our analysis upper bounds the possible range of $U$, and this bound is asymptotically equal to the bound given by the original results of Coppersmith and May. This means that they have already given the best lattice construction. In addition, we investigate the optimality for the bivariate equation used to solve the small inverse problem, inspired by Kunihiro's argument. In particular, we show the optimality of the Boneh-Durfee equation used in RSA cryptanalysis. To show our results, we establish a framework for the technique by following the relation of Howgrave-Graham, and then concretely define the conditions under which the technique succeeds and fails. We then provide a way to analyze the range of $U$ that satisfies these conditions. Technically, we show that the original result of Coppersmith achieves the optimal bound for $U$ when constructing a lattice in the standard way. We then provide evidence which indicates that constructing a non-standard lattice is generally difficult.
Last updated:  2012-02-29
Security Analysis of A Single Sign-On Mechanism for Distributed Computer Networks
Guilin Wang, Jiangshan Yu, Qi Xie
Single sign-on (SSO) is an authentication mechanism that enables a legal user with a single credential to be authenticated by multiple service providers in distributed computer networks. Recently, Chang and Lee proposed a new SSO scheme and claimed its security by providing well-organized security arguments. In this paper, however, we show that their scheme is actually insecure, as it fails to meet credential privacy and soundness of authentication. Specifically, we present two impersonation attacks. The first attack allows a malicious service provider, who has successfully communicated with a legal user twice, to recover the user's credential and then to impersonate the user to access resources and services offered by other service providers. In the other attack, an outsider without any credential may be able to enjoy network services freely by impersonating any legal user or a nonexistent user. We identify the flaws in their security arguments to explain why these attacks are possible against their SSO scheme. Our attacks also apply to another SSO scheme proposed by Hsu and Chuang, which inspired the design of the Chang-Lee scheme. We propose the study of the soundness of authentication as an open problem.
Last updated:  2012-02-29
More on Correcting Errors in RSA Private Keys: Breaking CRT-RSA with Low Weight Decryption Exponents
Santanu Sarkar, Subhamoy Maitra
Several schemes have been proposed for fast encryption and decryption in RSA and its variants. One popular idea is to use integers of low Hamming weight as decryption exponents, in order to reduce the number of multiplications in the square-and-multiply exponentiation routine, both in encryption and decryption. In this paper we show that such schemes are insecure in CRT-RSA when the encryption exponent is small (e.g., $e = 2^{16}+1$). In particular, we show that the CRT-RSA schemes presented at SAC 1996 and ACISP 2005 with low-weight decryption exponents can be broken in a few minutes in certain cases. Further, the scheme of CT-RSA 2010, where the decryption exponents are not of low weight but have large low-weight factors, can also be cryptanalysed. To mount the attack, we exploit the heuristic proposed by Henecka et al. (Crypto 2010) that is capable of correcting errors in the secret parameters when the encryption exponent is small. In the process, we identify a few modifications of the error-correction strategy that provide significantly improved experimental outcomes and also beat the theoretical bounds given in the work of Henecka et al.
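The speed-up that motivates low-weight decryption exponents is visible directly in square-and-multiply: the number of squarings is fixed by the bit length of the exponent, while the number of extra multiplications equals its Hamming weight minus one. A minimal Python sketch (function name and example exponents are ours, for illustration only, not the parameters of the attacked schemes):

```python
def square_and_multiply(base, exp, mod):
    """Left-to-right binary exponentiation; assumes exp >= 1.

    Returns (base**exp % mod, #squarings, #multiplications).
    Squarings are fixed by the bit length of exp; the extra
    multiplications number exactly wt(exp) - 1, which is what
    a low-Hamming-weight exponent saves."""
    bits = bin(exp)[2:]
    result = base % mod          # consumes the leading 1-bit
    squarings = mults = 0
    for bit in bits[1:]:
        result = result * result % mod
        squarings += 1
        if bit == "1":
            result = result * base % mod
            mults += 1
    return result, squarings, mults

# Two 1023-bit exponents: same squaring cost, very different multiply cost.
d_low = 2**1022 + 2**511 + 1     # Hamming weight 3    -> 2 multiplications
d_high = (1 << 1023) - 1         # Hamming weight 1023 -> 1022 multiplications
```

Both exponents cost 1022 squarings, so the low-weight exponent cuts the multiplication count from 1022 to 2; it is exactly this sparse structure that the error-correction attack exploits.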
Last updated:  2012-02-29
Generic Construction of Certificate Based Encryption from Certificateless Encryption Revisited
Wei Gao, Guilin Wang, Kefei Chen, Xueli Wang
Certificateless public key encryption (CLE) and certificate-based encryption (CBE) are two public key cryptographic primitives requiring no authenticity verification of the recipient's public key. Both are motivated by the goal of simultaneously solving the heavy certificate management problem inherent in traditional public key encryption (PKE) and the key escrow problem inherent in identity-based encryption (IBE). Formally exploring the relation between CBE and CLE is an attractive cryptographic task. In 2005, Al-Riyami and Paterson proposed a general conversion from CLE to CBE. Shortly afterwards, Kang and Park pointed out a flaw in the security proof of the Al-Riyami-Paterson conversion. In 2012, Wu et al. proposed another generic conversion from CLE to CBE. Compared with the Al-Riyami-Paterson conversion, Wu et al.'s method can be proved secure, but it additionally involves collision-resistant hash functions. It remained an open problem whether the very neat generic conversion due to Al-Riyami and Paterson is provably secure. We aim to solve this open problem. First, we formalize a new security model for CLE, featuring a security property overlooked by previous models. With this new security model as the basic tool, we succeed in proving that the Al-Riyami-Paterson generic conversion from CLE to CBE is secure if the CLE scheme is secure in our new model. A concrete provably secure CBE scheme is presented to demonstrate the application of our result.
Last updated:  2012-04-27
Provably Secure Generic Construction of Certificate Based Signature from Certificateless Signature in Standard Model
Wei Gao, Guilin Wang, Kefei Chen, Xueli Wang
Certificateless cryptography (CLC) and certificate-based cryptography (CBC) are two public key paradigms that combine the merits of traditional public key cryptography (PKC) and identity-based cryptography (IBC): they avoid the key escrow problem of IBC and reduce the public key management overhead of traditional PKC. This paper deals with the generic construction of certificate-based signatures (CBS) from certificateless signatures (CLS). Wu et al. proposed the first generic conversion from CLS to CBS provably secure in the random oracle model. This paper proposes an intuitive, simple and provably secure generic conversion from CLS to CBS, with the security proved in the standard model. To develop the security proof of this conversion, we put forth a novel security model which introduces a previously neglected nontrivial attack and better captures the CLS security notion. Following this generic conversion, a provably secure CBS scheme is constructed as an example.
Last updated:  2012-02-29
FlipIt: The Game of "Stealthy Takeover"
Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest
Recent targeted attacks have increased significantly in sophistication, undermining the fundamental assumptions on which most cryptographic primitives rely for security. For instance, attackers launching an Advanced Persistent Threat (APT) can steal full cryptographic keys, violating the very secrecy of "secret" keys that cryptographers assume in designing secure protocols. In this article, we introduce a game-theoretic framework for modeling various computer security scenarios prevalent today, including targeted attacks. We are particularly interested in situations in which an attacker periodically compromises a system or critical resource completely, learns all its secret information and is not immediately detected by the system owner or defender. We propose a two-player game between an attacker and defender called FlipIt or The Game of "Stealthy Takeover". In FlipIt, players compete to control a shared resource. Unlike most existing games, FlipIt allows players to move at any given time, taking control of the resource. The identity of the player controlling the resource, however, is not revealed until a player actually moves. To move, a player pays a certain move cost. The objective of each player is to control the resource for a large fraction of the time, while minimizing his total move cost. FlipIt provides a simple and elegant framework in which we can formally reason about the interaction between attackers and defenders in practical scenarios. In this article, we restrict ourselves to games in which one of the players (the defender) plays with a renewal strategy, one in which the intervals between consecutive moves are chosen independently at random from a fixed probability distribution. We consider attacker strategies ranging in increasing sophistication from simple periodic strategies (with moves spaced at equal time intervals) to more complex adaptive strategies, in which moves are determined based on feedback received during the game.
For different classes of strategies employed by the attacker, we determine strongly dominant strategies for both players (when they exist), i.e., strategies that achieve higher benefit than all other strategies in a particular class. When strongly dominant strategies do not exist, our goal is to characterize the residual game consisting of strategies that are not strongly dominated by other strategies. We also prove equivalence or strict inclusion of certain classes of strategies under different conditions. Our analysis of different FlipIt variants teaches cryptographers, system designers, and the community at large some valuable lessons:
1. Systems should be designed under the assumption of repeated total compromise, including theft of cryptographic keys. FlipIt provides guidance on how to implement a cost-effective defensive strategy.
2. Aggressive play by one player can motivate the opponent to drop out of the game (essentially not to play at all). Therefore, moving fast is a good defensive strategy, but it can only be implemented if move costs are low. We believe that virtualization has a huge potential in this respect.
3. Close monitoring of one's resources is beneficial in detecting potential attacks faster, gaining insight into attackers' strategies, and scheduling defensive moves more effectively.
Interestingly, FlipIt finds applications in other security realms besides the modeling of targeted attacks. Examples include cryptographic key rotation, password changing policies, refreshing virtual machines, and cloud auditing.
Last updated:  2012-03-07
On the Circular Security of Bit-Encryption
Ron Rothblum
Motivated by recent developments in fully homomorphic encryption, we consider the folklore conjecture that every semantically-secure bit-encryption scheme is circular secure, or in other words, that every bit-encryption scheme remains secure even when the adversary is given encryptions of the individual bits of the private-key. We show the following obstacles to proving this conjecture: 1. We construct a public-key bit-encryption scheme that is plausibly semantically secure, but is not circular secure. The circular security attack manages to fully recover the private-key. The construction is based on an extension of the Symmetric External Diffie-Hellman assumption (SXDH) from bilinear groups, to $\ell$-multilinear groups of order $p$ where $\ell \geq c \cdot \log p$ for some $c>1$. While there do exist $\ell$-multilinear groups (unconditionally), for $\ell \geq 3$ there are no known candidates for which the SXDH problem is believed to be hard. Nevertheless, there is also no evidence that such groups do not exist. Our result shows that in order to prove the folklore conjecture, one must rule out the possibility that there exist $\ell$-multilinear groups for which SXDH is hard. 2. We show that the folklore conjecture cannot be proved using a black-box reduction. That is, there is no reduction of circular security of a bit-encryption scheme to semantic security of that very same scheme that uses both the encryption scheme and the adversary as black-boxes. Both of our negative results extend also to the (seemingly) weaker conjecture that every CCA secure bit-encryption scheme is circular secure. As a final contribution, we show an equivalence between three seemingly distinct notions of circular security for public-key bit-encryption schemes. 
In particular, we give a general search to decision reduction that shows that an adversary that distinguishes between encryptions of the bits of the private-key and encryptions of zeros can be used to actually recover the private-key.
Last updated:  2012-04-23
Unbalanced Elementary Symmetric Boolean Functions with the Degree "d" and "wt(d)>=3"
Zhihui Ou
In this paper, for $d=2^{t}k$, $n=2^{t}(2k+q)+m$ and special $k=2^{w}(2^0+2^1+\cdots+2^s)$, we show that a majority of the functions $X(d,n)$ are not balanced. The results include many cases with $wt(d)\geq 3$ and $n\equiv 0,1,2,3 \pmod 4$. These results also support the conjecture that $X(2^t,2^{t+1}l-1)$ is the only nonlinear balanced elementary symmetric Boolean function. Here $t\geq 2$, $q\geq 1$, $s\geq 0$, $w\geq 0$ and $m\geq -1$ are integers, and $X(d,n)=\bigoplus\limits_{1\le i_{1} <\cdots<i_{d}\le n} x_{i_{1}} \cdots x_{i_{d}}$.
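Since $X(d,n)(x)$ counts the $d$-subsets of 1-coordinates of $x$ modulo 2, its value depends only on the Hamming weight $w$ of $x$ and equals $\binom{w}{d} \bmod 2$; balancedness can therefore be checked with a weight-enumerator sum. A small Python sketch (function names are ours, for illustration):

```python
from math import comb

def elementary_symmetric_weight(d, n):
    """Number of inputs x in {0,1}^n with X(d,n)(x) = 1.

    X(d,n)(x) = C(wt(x), d) mod 2, so group the 2^n inputs by
    Hamming weight and sum C(n, w) over weights where C(w, d) is odd."""
    return sum(comb(n, w) for w in range(n + 1) if comb(w, d) % 2 == 1)

def is_balanced(d, n):
    # Balanced means exactly half of the 2^n inputs map to 1.
    return elementary_symmetric_weight(d, n) == 2 ** (n - 1)

# The conjectured balanced family X(2^t, 2^{t+1}l - 1):
print(is_balanced(4, 7))   # t=2, l=1: X(4,7) is balanced
print(is_balanced(3, 7))   # d=3 is not in the family: unbalanced
```

For example, $X(4,7)$ is 1 exactly on weights $4,5,6,7$, giving $35+21+7+1 = 64 = 2^{6}$ ones, i.e., balanced.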
Last updated:  2012-02-29
Cryptanalysis of a Universally Verifiable Efficient Re-encryption Mixnet
Shahram Khazaei, Björn Terelius, Douglas Wikström
We study the heuristically secure mix-net proposed by Puiggalí and Guasch (EVOTE 2010). We present practical attacks on both correctness and privacy for some sets of parameters of the scheme. Although our attacks only allow us to replace a few inputs, or to break the privacy of a few voters, this shows that the scheme cannot be proven secure.
Last updated:  2015-01-03
Homomorphic Evaluation of the AES Circuit
Craig Gentry, Shai Halevi, Nigel P. Smart
We describe a working implementation of leveled homomorphic encryption (with or without bootstrapping) that can evaluate the AES-128 circuit. This implementation is built on top of the HElib library, whose design was inspired by an early version of the current work. Our main implementation (without bootstrapping) takes about 4 minutes and 3GB of RAM, running on a small laptop, to evaluate an entire AES-128 encryption operation. Using SIMD techniques, we can process up to 120 blocks in each such evaluation, yielding an amortized rate of just over 2 seconds per block. For cases where further processing is needed after the AES computation, we describe a different setting that uses bootstrapping. We describe an implementation that lets us process 180 blocks in just over 18 minutes using 3.7GB of RAM on the same laptop, yielding an amortized cost of 6 seconds per block. We note that a somewhat better amortized per-block cost can be obtained using "byte-slicing" (and maybe also "bit-slicing") implementations, at the cost of significantly slower wall-clock time for a single evaluation.
Last updated:  2012-02-29
Combined Attacks on the AES Key Schedule
François Dassance, Alexandre Venelli
We present new combined attacks on the AES key schedule based on the work of Roche et al. The main drawbacks of the original attack are: the need for high repeatability of the fault, a very particular fault model and a very high complexity of the key recovery algorithm. We consider more practical fault models, we obtain improved key recovery algorithms and we present more attack paths for combined attacks on AES. We propose to inject faults on the different operations of the key schedule instead of the key state of round 9 or the corresponding data state. We also consider fault injections in AES constants such as the RCON or the affine transformation of the SubWord. By corrupting these constants, the attacker can easily deduce the value of the error. The key recovery complexity can then be greatly improved. Notably, we can obtain a complexity identical to a classical differential side-channel attack. Our attacks defeat most AES implementations secure against both high-order side-channel attacks and fault attacks.
Last updated:  2012-02-29
An algorithm for factoring integers
Yingpu Deng, Yanbin Pan
We propose an algorithm for factoring a composite number. The method seems new.
Last updated:  2012-04-21
The Collision Security of MDC-4
Ewan Fleischmann, Christian Forler, Stefan Lucks, Jakob Wenzel
There are four somewhat classical double-length block-cipher-based compression functions known: MDC-2, MDC-4, Abreast-DM, and Tandem-DM. They were all developed over 20 years ago. In recent years, cryptographic research has put a focus on block cipher based hashing and found collision security results for three of them (MDC-2, Abreast-DM, Tandem-DM). In this paper, we add MDC-4, which is part of the IBM CLiC cryptographic module (FIPS 140-2 Security Policy for IBM CryptoLite in C, October 2003), to that list by showing that, when instantiated with an ideal block cipher with 128-bit key/plaintext/ciphertext size, no adversary asking fewer than $2^{74.76}$ queries can find a collision with probability greater than $1/2$. This is the first result on the collision security of the hash function MDC-4. The compression function MDC-4 is created by interconnecting two MDC-2 compression functions but hashing only one message block with them instead of two. The developers' aim for MDC-4 was to offer a higher security margin than MDC-2 while still being fast enough for practical purposes. The MDC-2 collision security proof of Steinberger (EUROCRYPT 2007) cannot be directly applied to MDC-4 due to the structural differences. Although sharing many commonalities, our proof for MDC-4 is much shorter and we claim that our presentation is also easier to grasp.
Last updated:  2012-12-28
Recursive Composition and Bootstrapping for SNARKs and Proof-Carrying Data
Nir Bitansky, Ran Canetti, Alessandro Chiesa, Eran Tromer
\emph{Succinct non-interactive arguments} (SNARGs) enable verifying NP statements with much lower complexity than required for classical NP verification (in fact, with complexity that is \emph{independent} of the NP language at hand). In particular, SNARGs provide strong solutions to the problem of verifiably delegating computation. Despite recent progress in the understanding and construction of SNARGs, there remain unattained goals. First, \emph{publicly-verifiable SNARGs} are only known either in the random oracle model, or in a model that allows expensive offline preprocessing. Second, known SNARGs require from the prover significantly more time or space than required for classical NP verification. We show that, assuming collision-resistant hashing, \emph{any} SNARG having a natural \emph{proof of knowledge} property (i.e., a SNARK) can be ``bootstrapped'' to obtain a \emph{complexity-preserving} SNARK, i.e., one without expensive preprocessing and where the prover's time and space complexity is essentially the same as that required for classical NP verification. By applying our transformation to known publicly-verifiable SNARKs with expensive preprocessing, we obtain the first publicly-verifiable complexity-preserving SNARK in the plain model (and in particular, eliminate the expensive preprocessing), thereby attaining the aforementioned goals. We also show an analogous transformation for privately-verifiable SNARKs, assuming fully-homomorphic encryption. Curiously, our transformations do not rely on PCPs. At the heart of our transformations is \emph{recursive composition} of SNARKs and, more generally, new techniques for constructing and using \emph{proof-carrying data} (PCD) systems, which extend the notion of a SNARK to the distributed setting.
Concretely, to bootstrap a given SNARK, we recursively compose the SNARK to obtain a ``weak'' PCD system for shallow distributed computations, and then use the PCD framework to attain stronger, complexity-preserving SNARKs and PCD systems.
Last updated:  2012-02-24
Algebraic attack on lattice based cryptosystems via solving equations over real numbers.
Jintai Ding, Dieter Schmidt
In this paper we present a new algorithm to attack lattice-based cryptosystems by solving a problem over the real numbers. In the case of the NTRU cryptosystem, if we assume additional information on the modular operations is available, we can break the NTRU cryptosystem completely by recovering the secret key. We believe that this fact was not known before.
Last updated:  2012-04-23
Recent Results on Balanced Symmetric Boolean Functions
Yingming Guo, Guangpu Gao, Yaqun Zhao
In this paper we prove that all balanced symmetric Boolean functions of fixed degree are trivial when the number of variables grows large enough. We also show the nonexistence of trivially balanced elementary symmetric Boolean functions except for $n=l\cdot2^{t+1}-1$ and $d=2^t$, where $t$ and $l$ are any positive integers; this shows that Cusick's conjecture for balanced elementary symmetric Boolean functions is exactly the conjecture that all balanced elementary symmetric Boolean functions are trivially balanced. In addition, we obtain an integer $n_0$, depending only on $d$, such that Cusick's conjecture holds for any $n>n_0$.
Last updated:  2012-02-23
Tolerant Algebraic Side-Channel Analysis of {AES}
Yossef Oren, Avishai Wool
We report on a Tolerant Algebraic Side-Channel Analysis (TASCA) attack on an AES implementation, using an optimizing pseudo-Boolean solver to recover the secret key from a vector of Hamming weights corresponding to a single encryption. We first develop a bound on the maximum error rate that can be tolerated as a function of the set size output by the decoder and the number of measurements. Then, we show that the TASCA approach is capable of recovering the secret key from erroneous traces in a reasonable time for error rates approaching this theoretical bound: specifically, the key was recovered in 10 hours on average from 100 measurements with error rates of up to 20%. We discovered that, perhaps counter-intuitively, there are strong incentives for the attacker to use as few leaks as possible to recover the key. We describe the equation setup and the experiment setup, and discuss the results.
Last updated:  2012-11-05
Hardness of decision (R)LWE for any modulus
Adeline Langlois, Damien Stehle
The decision Learning With Errors (LWE) problem has proven to be an extremely flexible foundation for devising provably secure cryptographic primitives. LWE can be expressed in terms of linear algebra over Z/qZ; this modulus q is the subject of the present work. When q is prime and small, or when it is exponential and composite with small factors, LWE is known to be at least as hard as standard worst-case problems over Euclidean lattices (sometimes via quantum reductions). The Ring Learning With Errors (RLWE) problem is a structured variant of LWE allowing for more compact keys and more efficient primitives. It is known to be at least as hard as standard worst-case problems restricted to so-called ideal lattices, but under even more restrictive arithmetic conditions on q. In this work, we prove that the arithmetic form of the modulus q is irrelevant to the computational hardness of LWE and RLWE. More precisely, we show that these problems are at least as hard as standard worst-case problems on lattices, under the sole condition that q is of polynomial bit-size. This result is most useful for adapting LWE-based cryptographic constructions to the RLWE setting. Among other applications, it allows us to derive the first Identity-Based Encryption scheme of quasi-optimal performance proven secure under standard worst-case lattice assumptions, in the standard model. Other applications include authentication, functional encryption and traitor tracing.
Last updated:  2013-08-15
Worst-Case to Average-Case Reductions for Module Lattices
Adeline Langlois, Damien Stehle
Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. In this work, we define the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. We prove that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves generalize arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
Last updated:  2012-09-07
ECM at Work
Joppe W. Bos, Thorsten Kleinjung
The performance of the elliptic curve method (ECM) for integer factorization plays an important role in the security assessment of RSA-based protocols, as ECM serves as a cofactorization tool inside the number field sieve. The efficient arithmetic for Edwards curves found an application in speeding up ECM. We propose techniques based on generating and combining addition-subtraction chains to optimize Edwards ECM in terms of both performance and memory requirements. This makes our approach very suitable for memory-constrained devices such as graphics processing units (GPUs). For commonly used ECM parameters we are able to lower the required memory by a factor of up to 55 compared to the state-of-the-art Edwards ECM approach. Our ECM implementation on a GTX 580 GPU sets a new throughput record, outperforming the best GPU, CPU and FPGA results reported in the literature.
Last updated:  2013-02-25
A Lattice-Based Traitor Tracing Scheme
San Ling, Damien Stehle
A traitor tracing scheme is a multi-receiver encryption scheme where malicious receiver coalitions aiming at building pirate decryption devices are deterred by the existence of a tracing algorithm: Using the pirate decryption device, the tracing algorithm can recover at least one member of the malicious coalition. All existing traitor tracing schemes rely either on rather inefficient generic constructions from arbitrary encryption schemes and collusion-secure fingerprinting codes, or on algebraic constructions exploiting the assumed hardness of variants of the Discrete Logarithm Problem. In this work, we present the first algebraic construction of a traitor tracing encryption scheme whose security relies on the assumed (quantum) worst-case hardness of standard lattice problems. The scheme is public-key, provably resists Chosen Plaintext Attacks and allows for minimal access black-box tracing (i.e., tracing works even if granted a very limited access to the pirate decryption device). It inherits the standard features of lattice-based cryptography, such as provable security under mild computational assumptions, conjectured resistance to quantum computers, and asymptotic efficiency. For proving the security, we introduce a Learning With Errors variant of the k-SIS problem from Boneh and Freeman [PKC'11], which we prove at least as hard as the standard LWE problem. We also describe a variant of our scheme with security based on the assumed hardness of the Ring Learning With Errors problem which achieves quasi-optimal asymptotic performance with respect to the security parameter.
Last updated:  2012-02-23
Collision Bounds for the Additive Pollard Rho Algorithm for Solving Discrete Logarithms
Joppe W. Bos, Alina Dudeanu, Dimitar Jetchev
We prove collision bounds for the Pollard rho algorithm to solve the discrete logarithm problem in a general cyclic group $G$. Unlike the setting studied by Kim et al., we consider additive walks, the setting used in practice to solve the elliptic curve discrete logarithm problem. Our bounds differ from the birthday bound $O(\sqrt{|G|})$ by a factor of $\sqrt{\log{|G|}}$ and are based on mixing time estimates for random walks on finite abelian groups due to Hildebrand.
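An additive walk here means the standard $r$-adding walk: precompute $r$ random multipliers $m_i = g^{a_i}h^{b_i}$ and step by multiplying the current point by the $m_i$ selected by the current group element; a cycle collision yields a linear relation that reveals the logarithm. A toy Python sketch over a prime-order subgroup of $\mathbb{Z}_p^*$ (parameters and names are ours for illustration; the paper's practical setting is elliptic curve groups):

```python
import random

def pollard_rho_dlog(g, h, p, n, r=16, seed=1):
    """Solve g^x = h (mod p) in a subgroup of prime order n,
    using an r-adding (additive) walk with Floyd cycle detection."""
    rng = random.Random(seed)
    while True:
        # Precompute r random step multipliers m_i = g^{a_i} * h^{b_i} mod p.
        steps = []
        for _ in range(r):
            a, b = rng.randrange(n), rng.randrange(n)
            steps.append((pow(g, a, p) * pow(h, b, p) % p, a, b))

        def step(state):
            y, a, b = state
            m, da, db = steps[y % r]   # step choice depends only on the group element
            return (y * m % p, (a + da) % n, (b + db) % n)

        tortoise = hare = (1, 0, 0)    # start at 1 = g^0 * h^0
        while True:
            tortoise = step(tortoise)
            hare = step(step(hare))
            if tortoise[0] == hare[0]:
                break
        (_, a1, b1), (_, a2, b2) = tortoise, hare
        if (b1 - b2) % n == 0:
            continue                    # degenerate collision: retry with fresh steps
        # g^{a1} h^{b1} = g^{a2} h^{b2}  =>  a1 + x*b1 = a2 + x*b2 (mod n)
        return (a2 - a1) * pow(b1 - b2, -1, n) % n

# Example: g = 4 generates the order-113 subgroup of Z_227^*.
p, n, g = 227, 113, 4
h = pow(g, 57, p)
print(pollard_rho_dlog(g, h, p, n))   # recovers 57
```

The heuristic running time of the collision search is what the paper's bounds make rigorous for additive walks, at the cost of a $\sqrt{\log|G|}$ factor over the birthday bound.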
Last updated:  2012-05-08
Remarks on "An ideal multi-secret sharing scheme based on MSP"
Zhi-hui Li, Jing Li
In 2010, C.-F. Hsu, Q. Cheng, X. M. Tang and B. Zeng proposed an ideal linear multi-secret sharing scheme based on monotone span programs (the HCTZ scheme, for short). This paper analyzes the problems in the HCTZ scheme. We also present an efficient ideal multi-secret sharing scheme based on monotone span programs for a family of access structures. The new scheme effectively overcomes the deficiencies of the HCTZ scheme and has the advantage of low computational cost.
Last updated:  2012-05-15
Study of the invariant coset attack on PRINTcipher: more weak keys with practical key recovery
Stanislav Bulygin, Michael Walter
In this paper we investigate the invariant property of PRINTcipher first discovered by Leander et al. in their CRYPTO 2011 paper. We provide a thorough study and show that there exist 64 families of weak keys for PRINTcipher-48 and many more for PRINTcipher-96. Moreover, we show that searching the weak key space may be substantially sped up by splitting the search into two consecutive steps. We show that for many classes of weak keys, key recovery can be done in a matter of minutes in the chosen/known plaintext scenario. In fact, at least $2^{45}$ weak keys can be recovered in less than 20 minutes per key on a single PC using only a few chosen plaintexts and one known plaintext. We provide a detailed treatment of the methods and put them in a more general context that opens new interesting directions of research for PRESENT-like ciphers.
Last updated:  2012-04-16
Improved Algebraic Side-Channel Attack on AES
Mohamed Saied Emam Mohamed, Stanislav Bulygin, Michael Zohner, Annelie Heuser, Michael Walter
In this paper we present improvements of the algebraic side-channel analysis of the Advanced Encryption Standard (AES) proposed in [9]. In particular, we optimize the algebraic representation of AES and the algebraic representation of the obtained side-channel information in order to speed up the attack and increase the success rate. We study the performance of our improvements in both known and unknown plaintext/ciphertext attack scenarios. Our experiments indicate that in both cases the amount of required side-channel information is less than that required by the attacks introduced in [9]. Furthermore, we introduce a method for error handling, which allows our improved algebraic side-channel attack to escape the assumption of an error-free measurement and thus become applicable in practice. We demonstrate the practical use of our improved algebraic side-channel attack by inserting predictions from a single-trace template attack.
Last updated:  2012-06-22
Optimally Robust Private Information Retrieval
Casey Devet, Ian Goldberg, Nadia Heninger
We give a protocol for multi-server information-theoretic private information retrieval which achieves the theoretical limit for Byzantine robustness. That is, the protocol can allow a client to successfully complete queries and identify server misbehavior in the presence of the maximum possible number of malicious servers. We have implemented our scheme and it is extremely fast in practice: up to thousands of times faster than previous work. We achieve these improvements by using decoding algorithms for error-correcting codes that take advantage of the practical scenario where the client is interested in multiple blocks of the database.
Last updated:  2012-02-23
Semi-Supervised Template Attack
Liran Lerman, Stephane Fernandes Medeiros, Nikita Veshchikov, Cedric Meuter, Gianluca Bontempi, Olivier Markowitch
Side channel attacks take advantage of information leakage in a cryptographic device. A template attack is a family of side channel attacks which is reputed to be extremely effective. This kind of attack assumes that the attacker can fully control a cryptographic device before attacking a similar one. In this paper, we propose a method based on a semi-supervised learning strategy to relax this assumption. The effectiveness of our proposal is confirmed by software simulations as well as by experiments on an 8-bit microcontroller.
Last updated:  2012-02-23
Computational Soundness of Symbolic Zero-knowledge Proofs: Weaker Assumptions and Mechanized Verification
Michael Backes, Fabian Bendun, Dominique Unruh
The abstraction of cryptographic operations by term algebras, called symbolic models, is essential in almost all tool-supported methods for analyzing security protocols. Significant progress was made in proving that symbolic models offering basic cryptographic operations such as encryption and digital signatures can be sound with respect to actual cryptographic realizations and security definitions. Even abstractions of sophisticated modern cryptographic primitives such as zero-knowledge (ZK) proofs were shown to have a computationally sound cryptographic realization, but only in ad-hoc formalisms and at the cost of placing strong assumptions on the underlying cryptography, which leaves only highly inefficient realizations. In this paper, we make two contributions to this problem space. First, we identify weaker cryptographic assumptions that we show to be sufficient for computational soundness of symbolic ZK proofs. These weaker assumptions are fulfilled by existing efficient ZK schemes as well as generic ZK constructions. Second, we conduct all computational soundness proofs in CoSP, a recent framework that allows for casting computational soundness proofs in a modular manner, independent of the underlying symbolic calculi. Moreover, all computational soundness proofs conducted in CoSP automatically come with mechanized proof support through an embedding of the applied $\pi$-calculus.
Last updated:  2013-11-24
Strongly Unforgeable Proxy Re-Signatures in the Standard Model
S. Sree Vivek, S. Sharmila Deva Selvi, Guhan Balasubramanian, C. Pandu Rangan
Proxy re-signatures are generally used for the delegation of signing rights of a user (delegator) to a semi-trusted proxy and a delegatee. The proxy can convert the signature of one user on a message into the signature of another user on the same message by using the delegation information (rekey) provided by the delegator. This is a handy primitive for network security and automated delegations in hierarchical organizations. Though proxy re-signature schemes that are secure in the standard model are available, none of them have addressed the security notion of strong existential unforgeability, where the adversary will not be able to forge even on messages for which signatures are already available. This is an important property for applications which involve the delegation of authentication on sensitive data. In this paper, we define the security model for strong unforgeability of proxy re-signature schemes. We propose two concrete strongly unforgeable proxy re-signature schemes, where we induce the strong unforgeability in the scheme by embedding the transformation techniques carefully in the sign and resign algorithms. The security of both schemes is related to the hardness of solving the Computational Diffie-Hellman (CDH) problem.
Last updated:  2012-02-23
Public Key Cryptosystems Constructed Based on Reed-Solomon Codes, K(XV)SE(2)PKC, Realizing Coding Rate of Exactly 1.0
Masao KASAHARA
In this paper, we present a new class of public-key cryptosystems, K(XV)SE(2)PKC, realizing the coding rate of exactly 1.0, based on Reed-Solomon codes (RS codes). We show that K(XV)SE(2)PKC is secure against various attacks, including attacks based on the Gröbner basis calculation (Gröbner basis attack, GB attack) and a linear transformation attack.
Last updated:  2012-05-18
Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP
Zvika Brakerski
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically ($B \to B^2\cdot\poly(n)$) with every multiplication (before ``refreshing''), our noise only grows linearly ($B \to B\cdot\poly(n)$). We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values. Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be \emph{classically} reduced to the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
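The quantitative difference between the two growth rates can be seen with a rough back-of-the-envelope sketch. The toy model below (illustrative parameters only, not an actual FHE scheme) counts how many multiplications fit before the noise reaches the modulus, under quadratic growth ($B \to B^2\cdot P$) versus linear growth ($B \to B\cdot P$):

```python
# Toy noise-growth model (illustrative only, not a real FHE scheme):
# a multiplication maps the noise B to B^2 * P under quadratic growth,
# or to B * P under linear growth, for some polynomial factor P.
def max_depth(q, B, P, quadratic=False):
    """Number of multiplications supported before the noise reaches the modulus q."""
    noise, depth = B, 0
    while (noise * noise * P if quadratic else noise * P) < q:
        noise = noise * noise * P if quadratic else noise * P
        depth += 1
    return depth

# With q = 2^100, B = 2^5, P = 2^10: the linear regime supports 9
# multiplications, the quadratic one only 2, without any refreshing.
print(max_depth(2**100, 2**5, 2**10, quadratic=False))
print(max_depth(2**100, 2**5, 2**10, quadratic=True))
```

This also illustrates why the scheme is scale-invariant in the linear regime: the supported depth depends only on the ratio between the modulus $q$ and the initial noise $B$, not on their absolute sizes.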
Last updated:  2012-03-12
MAGNITUDE SQUARED COHERENCE BASED SCA
Sebastien Tiran, Amine Dehbaoui, Philippe Maurine
Magnitude Squared Coherence is a signal processing tool that indicates how well two time-domain signals match each other by tracking linear dependencies in their spectral decomposition. This paper introduces different ways of using the Magnitude Squared Coherence for Side Channel Analysis. This distinguisher has several advantages over well-known distinguishers.
Last updated:  2012-06-01
Secure Identity-Based Encryption in the Quantum Random Oracle Model
Mark Zhandry
We give the first proof of security for an identity-based encryption scheme in the quantum random oracle model. This is the first proof of security for any scheme in this model that requires no additional assumptions. Our techniques are quite general and we use them to obtain security proofs for two random oracle hierarchical identity-based encryption schemes and a random oracle signature scheme, all of which have previously resisted quantum security proofs, even using additional assumptions. We also explain how to remove the extra assumptions from prior quantum random oracle model proofs. We accomplish these results by developing new tools for arguing that quantum algorithms cannot distinguish between two oracle distributions. Using a particular class of oracle distributions, so called semi-constant distributions, we argue that the aforementioned cryptosystems are secure against quantum adversaries.
Last updated:  2012-02-26
Efficient identity-based threshold decryption scheme from bilinear pairings
Wei Gao, Guilin Wang, Kefei Chen, Xueli Wang, Guoyan Zhang
Taking advantage of a technique that allows a private key to be safely distributed among decryption servers, we introduce a new identity-based threshold scheme, proven secure in the random oracle model. This new pairing-based scheme features several improvements compared to other schemes in the literature. The two most noticeable ones are efficiency, obtained by reducing the number of pairing computations, and the ability for a user to generate and share a private key without requiring any access to a PKG.
Last updated:  2013-04-24
Another look at HMAC
Neal Koblitz, Alfred Menezes
HMAC is the most widely-deployed cryptographic-hash-function-based message authentication code. First, we describe a security issue that arises because of inconsistencies in the standards and the published literature regarding keylength. We prove a separation result between two versions of HMAC, which we denote HMAC^{std} and HMAC^{Bel}, the former being the real-world version standardized by Bellare et al. in 1997 and the latter being the version described in Bellare's proof of security in his Crypto 2006 paper. Second, we describe how HMAC^{NIST} (the FIPS version standardized by NIST), while provably secure (in the single-user setting), succumbs to a practical attack in the multi-user setting. Third, we describe a fundamental defect from a practice-oriented standpoint in Bellare's 2006 security result for HMAC, and show that because of this defect his proof gives a security guarantee that is of little value in practice. We give a new proof of NMAC security that gives a stronger result for NMAC and HMAC and we discuss why even this stronger result by itself fails to give convincing assurance of HMAC security.
Last updated:  2012-02-23
Efficient identity-based threshold signature scheme from bilinear pairings in the standard model
Wei Gao, Guilin Wang, Xueli Wang, Kefei Chen
We propose a new identity-based threshold signature (IBTHS) scheme from bilinear pairings enjoying the following advantages in efficiency, security and functionality. The round complexity of the threshold signing protocol is optimal, since each party pays no communication cost other than broadcasting one single message. The computational complexity of the threshold signing procedure is considerably low, since no time-consuming pairing is needed other than the two pairings for verifying each signature share. The communication channel requirement of the threshold signing procedure is the lowest, since a broadcast channel among signers is enough. It is proved secure with optimal resilience in the standard model. It is the private key associated with an identity, rather than a master key of the Public Key Generator (PKG), that is shared among signature generation servers. All these properties are due to our new basic technique, by which the private key in the bilinear group is indirectly shared by simply sharing an element in the finite field.
Last updated:  2012-02-23
Particularly Friendly Members of Family Trees
Craig Costello
The last decade has witnessed many clever constructions of parameterized families of pairing-friendly elliptic curves that now enable implementors targeting a particular security level to gather suitable curves in bulk. However, choosing the best curves from a (usually very large) set of candidates belonging to any particular family involves juggling a number of efficiency issues, such as the nature of the binomials used to construct extension fields, the Hamming weight of key pairing parameters and the existence of compact generators in the pairing groups. In light of these issues, two recent works considered the best families for k=12 and k=24 respectively, and detailed subfamilies that offer very efficient pairing instantiations. In this paper we closely investigate the other eight attractive families with 8 \leq k < 50, and systematically sub-divide each family into its family tree, branching off until concrete subfamilies are highlighted that simultaneously provide highly efficient solutions to all of the above computational issues.
Last updated:  2012-08-18
Fast Reductions from RAMs to Delegatable Succinct Constraint Satisfaction Problems
Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer
Succinct arguments for NP are proof systems that allow a weak verifier to retroactively check computation done by a powerful prover. Constructions of such protocols prove membership in languages consisting of \emph{very large yet succinctly-represented constraint satisfaction problems} that, alas, are unnatural in the sense that the problems that arise in practice are not in such form. For general computation tasks, the most natural representation is typically as random-access machine (RAM) algorithms, because such a representation can be obtained very efficiently by applying a compiler to code written in a high-level programming language. Thus, understanding the efficiency of reductions from RAM computations to other NP-complete problem representations for which succinct arguments (or proofs) are known is a prerequisite to a more complete understanding of the applicability of these arguments. Existing succinct argument constructions rely either on circuit satisfiability or (in PCP-based constructions) on algebraic constraint satisfaction problems. In this paper, we present new and more efficient reductions from RAM (and parallel RAM) computations to both problems that (a) preserve succinctness (i.e., do not ``unroll'' the computation of a machine), (b) preserve zero-knowledge and proof-of-knowledge properties, and (c) enjoy fast and highly-parallelizable algorithms for transforming a witness to the RAM computation into a witness for the corresponding problem. These additional properties are typically not considered in ``classical'' complexity theory but are often required or very desirable in the application of succinct arguments. Fulfilling all these efficiency requirements poses significant technical challenges, and we develop a set of tools (both unconditional and leveraging computational assumptions) for generically and efficiently structuring and arithmetizing RAM computations for use in succinct arguments. 
More generally, our results can be applied to proof systems for NP relying on the aforementioned problem representations; these include various zero-knowledge proof constructions.
Last updated:  2012-02-23
Finding ECM-Friendly Curves through a Study of Galois Properties
Razvan Barbulescu, Joppe W. Bos, Cyril Bouvier, Thorsten Kleinjung, Peter L. Montgomery
In this paper we prove some divisibility properties of the cardinality of elliptic curves modulo primes. These proofs explain the good behavior of certain parameters when using Montgomery or Edwards curves in the setting of the elliptic curve method (ECM) for integer factorization. The ideas of the proofs help us to find new families of elliptic curves with good division properties which increase the success probability of ECM.
Last updated:  2012-02-23
Automatic Search of Attacks on round-reduced AES and Applications
Charles Bouillaguet, Patrick Derbez, Pierre-Alain Fouque
In this paper, we describe versatile and powerful algorithms for searching for guess-and-determine and meet-in-the-middle attacks on some byte-oriented symmetric primitives. To demonstrate the strength of these tools, we show that they automatically discover new attacks on round-reduced AES with very low data complexity, and find improved attacks on the AES-based MACs Alpha-MAC and Pelican-MAC, as well as on the AES-based stream cipher LEX. Finally, the tools can be used in the context of fault attacks. These algorithms exploit the algebraically simple byte-oriented structure of the AES. When the attacks found by the tool are practical, they have been implemented and validated experimentally.
Last updated:  2016-11-11
Extended Security Arguments for (Ring) Signature Schemes
Sidi Mohamed El Yousfi Alaoui, Özgür Dagdelen, Pascal Véron, David Galindo, Pierre-Louis Cayrel
The well-known forking lemma by Pointcheval and Stern has been used to prove the security of the so-called generic signature schemes. These signature schemes are obtained via the Fiat-Shamir transform from three-pass identification schemes. A number of five-pass identification protocols have been proposed in the last few years. Extending the forking lemma and the Fiat-Shamir transform would allow us to obtain new signature schemes since, unfortunately, these newly proposed schemes fall outside the original framework. In this paper, we provide an extension of the forking lemma in order to assess the security of what we call n-generic signature schemes. These include signature schemes that are derived from certain (2n+1)-pass identification schemes. We thus obtain a generic methodology for proving the security of a number of signature schemes derived from recently published five-pass identification protocols, and eventually for (2n+1)-pass identification schemes to come. Finally, we propose a similar extension of the forking lemma for ring signatures, originally proposed by Herranz and Sáez.
Last updated:  2012-06-05
Parallelizing message schedules to accelerate the computations of hash functions
Shay Gueron, Vlad Krasnov
This paper describes an algorithm for accelerating the computations of Davies-Meyer based hash functions. It is based on parallelizing the computation of several message schedules for several message blocks of a given message. This parallelization, together with the proper use of vector processor instructions (SIMD) improves the overall algorithm’s performance. Using this method, we obtain a new software implementation of SHA-256 that performs at 12.11 Cycles/Byte on the 2nd and 10.84 Cycles/Byte on the 3rd Generation Intel® Core™ processors. We also show how to extend the method to the soon-to-come AVX2 architecture, which has wider registers. Since processors with AVX2 will be available only in 2013, exact performance reporting is not yet possible. Instead, we show that our resulting SHA-256 and SHA-512 implementations have a reduced number of instructions. Based on our findings, we make some observations on the SHA3 competition. We argue that if the prospective SHA3 standard is expected to be competitive against the performance of SHA-256 or SHA-512, on the high end platforms, then its performance should be well below 10 Cycles/Byte on the current, and certainly on the near future processors. Not all the SHA3 finalists have this performance. Furthermore, even the fastest finalists will probably offer only a small performance advantage over the current SHA-256 and SHA-512 implementations.
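The core idea is independent of any particular SIMD instruction set and can be sketched in a few lines: the 64-entry SHA-256 message schedule is computed for several message blocks at once by updating one "lane" per block at each step, which is exactly the per-word work a SIMD register performs across blocks. This is a scalar toy sketch of that structure, not the authors' vectorized implementation:

```python
# Toy sketch (not the authors' SIMD code): compute the SHA-256 message
# schedule W[16..63] for several independent blocks "in parallel" by
# updating every lane at each step, mimicking one word per block held
# in a SIMD register. Word operations follow FIPS 180-4.

MASK = 0xFFFFFFFF

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK

def sigma0(x):  # small-sigma_0 from FIPS 180-4
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def sigma1(x):  # small-sigma_1 from FIPS 180-4
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def schedules_parallel(blocks):
    """blocks: list of 16-word message blocks; returns a 64-word schedule per block."""
    ws = [list(b) for b in blocks]
    for t in range(16, 64):
        for w in ws:  # one "SIMD lane" per message block
            w.append((sigma1(w[t - 2]) + w[t - 7] + sigma0(w[t - 15]) + w[t - 16]) & MASK)
    return ws
```

In the real implementation the inner loop over lanes disappears: a single SIMD instruction applies each step to all lanes at once, which is where the speedup comes from.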
Last updated:  2012-02-23
Weak Keys of the Full MISTY1 Block Cipher for Related-Key Cryptanalysis
Jiqiang Lu, Wen-She Yap, Yongzhuang Wei
The MISTY1 block cipher has a 64-bit block length, a 128-bit user key and a recommended number of 8 rounds. It is a Japanese CRYPTREC-recommended e-government cipher, a European NESSIE selected cipher, and an ISO international standard. Despite considerable cryptanalytic efforts during the past fifteen years, there has been no published cryptanalytic attack on the full MISTY1 cipher algorithm. In this paper, we present related-key differential and related-key amplified boomerang attacks on the full MISTY1 under certain weak key assumptions: We describe $2^{103.57}$ weak keys and a related-key differential attack on the full MISTY1 with a data complexity of $2^{61}$ chosen ciphertexts and a time complexity of $2^{87.94}$ encryptions; and we also describe $2^{92}$ weak keys and a related-key amplified boomerang attack on the full MISTY1 with a data complexity of $2^{60.5}$ chosen plaintexts and a time complexity of $2^{80.18}$ encryptions. For the very first time, our results exhibit a cryptographic weakness in the full MISTY1 cipher (when used with the recommended 8 rounds), and show that the MISTY1 cipher is distinguishable from a random function and thus cannot be regarded as an ideal cipher.
Last updated:  2012-02-23
Modified version of “Latin Dances Revisited: New Analytic Results of Salsa20 and ChaCha”
Tsukasa Ishiguro
This paper is a modified version of our paper “Latin Dances Revisited: New Analytic Results of Salsa20 and ChaCha” presented at ICICS 2011. The original paper contains incorrect data caused by a software bug in the experimental program; we therefore repeated the experiments with a corrected program. Additionally, we modified the algorithm to improve the precision of the analysis.
Last updated:  2012-02-17
Ron was wrong, Whit is right
Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, Christophe Wachter
We performed a sanity check of public keys collected on the web. Our main goal was to test the validity of the assumption that different random choices are made each time keys are generated. We found that the vast majority of public keys work as intended. A more disconcerting finding is that two out of every one thousand RSA moduli that we collected offer no security. Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for ``multiple-secrets'' cryptosystems such as RSA is significantly riskier than for ``single-secret'' ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.
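The failure mode described here is easy to demonstrate in miniature. The sketch below uses made-up toy "primes" (real RSA moduli use primes of 512 bits or more, and the paper computes gcds across millions of keys, not one pair): if bad randomness causes two moduli to share a prime, a single gcd factors both.

```python
# Toy illustration of shared-prime RSA moduli (hypothetical numbers,
# not the paper's dataset). Two moduli that share a prime factor are
# both fully factored by one gcd computation.
from math import gcd

p, q, r = 101, 103, 107          # toy "primes"; real keys use 512+ bit primes
n1, n2 = p * q, p * r            # bad randomness: both moduli reuse p

g = gcd(n1, n2)
if 1 < g < n1:                   # nontrivial common factor found
    print(g, n1 // g, n2 // g)   # recovers p, q and r, breaking both keys
```

A modulus shares no factor with an independently generated one, so `g == 1` is the healthy outcome; the paper's finding is that for about two in a thousand collected RSA moduli it is not.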
Last updated:  2012-02-14
Randomized Partial Checking Revisited
Shahram Khazaei, Douglas Wikström
We study mix-nets with randomized partial checking (RPC) as proposed by Jakobsson, Juels, and Rivest (2002). RPC is a technique to verify the correctness of an execution both for Chaumian and homomorphic mix-nets. The idea is to relax the correctness and privacy requirements to achieve a more efficient mix-net. We identify serious issues in the original description of mix-nets with RPC and show how to exploit these to break both correctness and privacy, both for Chaumian and homomorphic mix-nets. Our attacks are practical and applicable to real world mix-net implementations, e.g., the Civitas and the Scantegrity voting systems.
Last updated:  2012-03-20
On the Security of Attribute Based Signature Schemes
S Sharmila Deva Selvi, Subhashini Venugopalan, C. Pandu Rangan
Attribute based signatures allow users possessing a set of credentials to sign documents; although the credentials of the signer can be verified, signers still retain a reasonable degree of anonymity. In this work we discuss aspects regarding the security of some attribute based signature schemes. In particular, we show multiple breaks in the existing threshold attribute based signature schemes by Li et al. 2010. We first claim that the scheme is not secure, since it allows a signer possessing keys for some attributes to perform universal forgery and produce a signature that satisfies the threshold for a set of attributes she does not possess. We then show a total break of the system, where the attacker can derive a part of the secret key and act as the key generating authority to generate private keys for other users. We also include examples of the attacks to highlight the flaws of this scheme, and of other ABS schemes in Li and Kim 2008, Shahandashti et al. 2009, and Kumar et al. 2010, all of which employ the same or a similar key construct.
Last updated:  2012-02-10
A Pairing Based Strong Designated Verifier Signature Scheme without Random Oracles
Maryam Rajabzadeh Asaar, Mahmoud Salmasizadeh
In this study, a novel strong designated verifier signature scheme based on bilinear pairings, provably secure in the standard model, is proposed, whereas the existing schemes are only secure in the random oracle model. In 2007 and 2011, two strong designated verifier signature schemes in the standard model were proposed by Huang et al. and Zhang et al., respectively; in the former, the property of privacy of the signer’s identity is not proved, and the security of the latter rests on the security of a pseudorandom function. Our proposal addresses both of these drawbacks of the previous schemes. Furthermore, it satisfies non-delegatability for signature verification.
Last updated:  2012-03-10
Improved Security for Linearly Homomorphic Signatures: A Generic Framework
David Mandell Freeman
We propose a general framework that converts (ordinary) signature schemes having certain properties into linearly homomorphic signature schemes, i.e., schemes that allow authentication of linear functions on signed data. The security of the homomorphic scheme follows from the same computational assumption as is used to prove security of the underlying signature scheme. We show that the following signature schemes have the required properties and thus give rise to secure homomorphic signatures in the standard model: - The scheme of Waters (Eurocrypt 2005), secure under the computational Diffie-Hellman assumption in bilinear groups. - The scheme of Boneh and Boyen (Eurocrypt 2004, J. Cryptology 2008), secure under the $q$-strong Diffie-Hellman assumption in bilinear groups. - The scheme of Gennaro, Halevi, and Rabin (Eurocrypt 1999), secure under the strong RSA assumption. - The scheme of Hohenberger and Waters (Crypto 2009), secure under the RSA assumption. Our systems not only allow weaker security assumptions than were previously available for homomorphic signatures in the standard model, but also are secure in a model that allows a stronger adversary than in other proposed schemes. Our framework also leads to efficient linearly homomorphic signatures that are secure against our stronger adversary under weak assumptions (CDH or RSA) in the random oracle model; all previous proofs of security in the random oracle model break down completely when faced with our stronger adversary.
Last updated:  2012-10-29
Message Authentication, Revisited
Yevgeniy Dodis, Eike Kiltz, Krzysztof Pietrzak, Daniel Wichs
Traditionally, symmetric-key message authentication codes (MACs) are easily built from pseudorandom functions (PRFs). In this work we propose a wide variety of other approaches to building efficient MACs, without going through a PRF first. In particular, unlike deterministic PRF-based MACs, where each message has a unique valid tag, we give a number of probabilistic MAC constructions from various other primitives/assumptions. Our main results are summarized as follows: * We show several new probabilistic MAC constructions from a variety of general assumptions, including CCA-secure encryption, Hash Proof Systems and key-homomorphic weak PRFs. By instantiating these frameworks under concrete number theoretic assumptions, we get several schemes which are more efficient than just using a state-of-the-art PRF instantiation under the corresponding assumption. For example, we obtain elegant DDH-based MACs with much shorter keys than the quadratic-sized key of the Naor-Reingold PRF. We also show that several natural (probabilistic) digital signature schemes, such as those by Boneh-Boyen and Waters, can be significantly optimized when “downgraded” into a MAC, both in terms of their efficiency (e.g., no bilinear pairings) and security assumptions (e.g., standard CDH instead of bilinear CDH). * For probabilistic MACs, unlike deterministic ones, unforgeability against a chosen message attack (uf-cma) alone does not imply security if the adversary can additionally make verification queries (uf-cmva). In fact, a number of elegant constructions, such as recently constructed MACs based on Learning Parity with Noise (LPN) and some of the new MACs constructed in this work, are uf-cma but not uf-cmva secure by themselves. We give an efficient generic transformation from any uf-cma secure MAC which is "message-hiding" into a uf-cmva secure MAC. Applied to LPN-based MACs, this resolves the main open problem of Kiltz et al. from Eurocrypt '11.
* While all our new MAC constructions immediately give efficient actively secure, two-round symmetric-key identification schemes, we also show a very simple, three-round actively secure identification protocol from any weak PRF. In particular, the resulting protocol is much more efficient than the trivial approach of building a regular PRF from a weak PRF.
Last updated:  2012-05-31
Key recycling in authentication
Christopher Portmann
In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a universally composable framework. It turns out that the above argument is insufficient: information about the hash function is in fact leaked in every round to the adversary, and after a bounded finite amount of rounds it is completely known. We show however that this leak is very small, and Wegman and Carter's protocol is still $\epsilon$-secure, if $\epsilon$-almost strongly universal hash functions are used. This implies that the secret key corresponding to the choice of hash function can be recycled for any task without any additional error than this $\epsilon$. We illustrate this by applying it to quantum key distribution (QKD): if the same hash function is recycled to authenticate the classical communication in every round of a QKD protocol, and used $\ell$ times per round, the total error after $r$ rounds is upper bounded by $r(\ell\epsilon+\epsilon')$, where $\epsilon'$ is the error of one round of QKD given an authentic channel.
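The quantitative conclusion is simple enough to evaluate directly. The sketch below just computes the stated bound $r(\ell\epsilon + \epsilon')$; the numeric parameters in the example are illustrative, not taken from the paper.

```python
# The composable error bound stated in the abstract: recycling the same
# eps-almost strongly universal hash function for r rounds of QKD, using
# it l times per round, on top of a per-round QKD error eps_prime.
def total_error(rounds, uses_per_round, eps, eps_prime):
    """Upper bound r * (l * eps + eps') on the total error after r rounds."""
    return rounds * (uses_per_round * eps + eps_prime)

# Illustrative (made-up) parameters: 10 rounds, hash used 3 times per
# round, eps = 1e-9, per-round QKD error 1e-8.
print(total_error(10, 3, 1e-9, 1e-8))
```

The point of the bound is that the leak per use is only $\epsilon$, so the key identifying the hash function can be recycled as long as the accumulated $r\ell\epsilon$ term stays acceptably small.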
Last updated:  2013-07-05
Anonymous Constant-Size Ciphertext HIBE From Asymmetric Pairings
Somindu C. Ramanna, Palash Sarkar
We present a new hierarchical identity based encryption (HIBE) scheme with constant-size ciphertext that can be implemented using the most efficient bilinear pairings, namely, Type-3 pairings. In addition to being fully secure, our scheme is anonymous. The HIBE is obtained by extending an asymmetric pairing based IBE scheme due to Lewko and Waters. The extension uses the approach of Boneh-Boyen-Goh to obtain constant-size ciphertexts and that of Boyen-Waters for anonymity. The security argument is based on the dual-system technique of Waters. The resulting HIBE is the only known scheme using Type-3 pairings achieving constant-size ciphertext, security against adaptive-identity attacks and anonymity under static assumptions without random oracles.
Last updated:  2012-02-06
A New Pseudorandom Generator from Collision-Resistant Hash Functions
Alexandra Boldyreva, Virendra Kumar
We present a new hash-function-based pseudorandom generator (PRG). Our PRG is reminiscent of the classical constructions iterating a function on a random seed and extracting Goldreich-Levin hardcore bits at each iteration step. The latest PRG of this type that relies on reasonable assumptions (regularity and one-wayness) is due to Haitner et al. In addition to a regular one-way function, each iteration in their ``randomized iterate'' scheme uses a new pairwise-independent function, whose descriptions are part of the seed of the PRG. Our construction does not use pairwise-independent functions and is thus more efficient, requiring less computation and a significantly shorter seed. Our scheme's security relies on the standard notions of collision-resistance and regularity of the underlying hash function, where the collision-resistance is required to be {\em exponential}. In particular, any polynomial-time adversary should have less than $2^{-n/2}$ probability of finding collisions, where $n$ is the output size of the hash function. We later show how to relax the regularity assumption by introducing a new notion that we call {\em worst-case regularity}, which lower-bounds the size of preimages of different elements from the range (while the common regularity assumption requires all such sets to be of equal size). Unlike previous results, we provide a concrete security statement.
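The iterate-and-extract shape the abstract alludes to can be sketched in a few lines. This is only a toy illustration of the iteration structure (hash a hidden state, emit a few bits, repeat), not the paper's construction, which comes with a security proof and extracts hardcore bits rather than raw digest bytes:

```python
# Toy sketch of an iterate-and-extract PRG (illustration only; NOT the
# paper's provably secure construction and not suitable for real use).
import hashlib

def toy_prg(seed: bytes, nbytes: int) -> bytes:
    """Iterate SHA-256 on a hidden state, emitting a few bytes per step."""
    state, out = seed, b""
    while len(out) < nbytes:
        state = hashlib.sha256(state).digest()  # advance the hidden state
        out += state[:4]                        # "extract" 4 output bytes
    return out[:nbytes]
```

The security question the paper answers is precisely what must hold of the underlying hash (exponential collision-resistance plus a regularity condition) for this kind of iteration to yield pseudorandom output.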
Last updated:  2012-02-08
Cryptanalysis of Mun et al.'s anonymous authentication scheme for roaming service in global mobility networks
Hongbin Tang, Xinsong Liu
An anonymous user authentication scheme allows the user and the remote server to authenticate each other, and should preserve user anonymity. In 2011, Mun et al. proposed an enhanced secure anonymous user authentication scheme for roaming service in global mobility networks. They claimed that their scheme was more secure and efficient than others. However, we demonstrate that their scheme is vulnerable to insider, impersonation, server spoofing, and denial of service attacks, along with efficiency and password issues. Meanwhile, it cannot provide any user anonymity. Thus it is not feasible for real-life implementation.
Last updated:  2012-04-07
On the performance of certain Private Set Intersection protocols
Emiliano De Cristofaro, Gene Tsudik
Private Set Intersection (PSI) is a useful cryptographic primitive that allows two parties (client and server) to interact based on their respective (private) input sets, in such a way that client obtains nothing other than the set intersection, while server learns nothing beyond client set size. This paper considers one PSI construct from [DT10] and reports on its optimized implementation and performance evaluation. Several key implementation choices that significantly impact real-life performance are identified and a comprehensive experimental analysis (including micro-benchmarking, with various input sizes) is presented. Finally, it is shown that our optimized implementation of this RSA-OPRF-based PSI protocol markedly outperforms the one presented in [HEK12].
Last updated:  2012-02-06
Beating Shannon requires BOTH efficient adversaries AND non-zero advantage
Yevgeniy Dodis
In this note we formally show a "folklore" (but, to the best of our knowledge, undocumented) fact that in order to beat the famous Shannon lower bound on key length for one-time-secure encryption, one must *simultaneously* restrict the attacker to be efficient, and also allow the attacker to break the system with some non-zero (even if negligible) probability. Although the result is "folklore", we were unable to find a clean and simple proof of it, despite asking several experts in the field. We hope that cryptography instructors will find this note useful when justifying the transition from information-theoretic to computational cryptography. We note that our proof cleanly handles *probabilistic* encryption, as well as a small *decryption error*.
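For reference, the Shannon bound in question can be stated as follows (standard textbook form, not quoted from the note):

```latex
% Perfect secrecy against unbounded adversaries with zero advantage:
\Pr[C = c \mid M = m] \;=\; \Pr[C = c \mid M = m'] \quad \forall\, m, m', c
\;\;\Longrightarrow\;\; H(K) \,\ge\, H(M),
% i.e. the key must carry at least as much entropy as the message,
% so relaxing at least one of the two conditions is necessary to use shorter keys.
```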
Last updated:  2012-02-06
Identity-based Encryption with Efficient Revocation
Alexandra Boldyreva, Vipul Goyal, Virendra Kumar
Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However, in the IBE setting there has been little work on revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well -- as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and binary tree data structure, and is provably secure.
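The linear-to-logarithmic improvement rests on binary-tree techniques: instead of updating every non-revoked user individually, the authority publishes key updates only for a small set of subtrees covering them. A minimal sketch of such a cover computation (the complete-subtree layout here is our illustration, not the paper's exact mechanism):

```python
def cover(lo: int, hi: int, revoked: set) -> list:
    """Minimal node cover of non-revoked leaves in [lo, hi) of a complete binary tree.

    Returns subtree ranges containing no revoked leaf; updates are published
    only for these nodes, giving roughly O(r log(N/r)) work instead of O(N).
    """
    if not any(lo <= u < hi for u in revoked):
        return [(lo, hi)]       # whole subtree is clean: one node suffices
    if hi - lo == 1:
        return []               # a single revoked leaf: nothing to publish
    mid = (lo + hi) // 2
    return cover(lo, mid, revoked) + cover(mid, hi, revoked)
```

For example, with 8 users and user 3 revoked, `cover(0, 8, {3})` returns three subtrees rather than seven individual leaf updates.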
Last updated:  2012-02-08
Eavesdropping on Satellite Telecommunication Systems
Benedikt Driessen
While communication infrastructures rapidly intertwine with our daily lives, public understanding of underlying technologies and privacy implications is often limited by their closed-source nature. Lacking the funding and resources of corporations and the intelligence community, developing and expanding this understanding is a sometimes tedious, but nonetheless important process. In this sense, we document how we have decrypted our own communication in the Thuraya satellite network. We have used open-source software to build on recent work which reverse-engineered and cryptanalyzed both stream ciphers currently used in the competing satellite communication standards GMR-1 and GMR-2. To break Thuraya’s encryption (which implements the GMR-1 standard) in a real-world scenario, we have enhanced an existing ciphertext-only attack. We have used common and moderately expensive equipment to capture a live call session and executed the described attack. We show that, after less than an hour of computation on regular PC hardware, we were able to obtain the session key from a handful of speech data frames. This effectively allows decryption of the entire session, thus demonstrating that the Thuraya system (and probably also SkyTerra and TerreStar, who are currently implementing GMR-1) is weak at protecting privacy.
Last updated:  2012-02-06
Investigating the Potential of Custom Instruction Set Extensions for SHA-3 Candidates on a 16-bit Microcontroller Architecture
Jeremy Constantin, Andreas Burg, Frank K. Gurkaynak
In this paper, we investigate the benefit of instruction set extensions for software implementations of all five SHA-3 candidates. To this end, we start from optimized assembly code for a common 16-bit microcontroller instruction set architecture. By themselves, these implementations provide a reference for the complexity of the algorithms on 16-bit architectures, which are commonly used in embedded systems. For each algorithm, we then propose suitable instruction set extensions and implement the modified processor core. We assess the gains in throughput, memory consumption, and the area overhead. Our results show that with less than 10% additional area, it is possible to increase the execution speed on average by almost 40%, while reducing memory requirements on average by more than 40%. In particular, the Grøstl algorithm, which was one of the slowest algorithms in previous reference implementations, ends up being the fastest implementation by some margin, once minor (but dedicated) instruction set extensions are taken into account.
Last updated:  2012-02-06
2-Dimension Sums: Distinguishers Beyond Three Rounds of RIPEMD-128 and RIPEMD-160
Yu Sasaki, Lei Wang
This paper presents differential-based distinguishers against ISO standard hash functions RIPEMD-128 and RIPEMD-160. The compression functions of RIPEMD-128/-160 adopt the double-branch structure, which updates a chaining variable by computing two functions and merging their outputs. Due to the double size of the internal state and the difficulty of controlling two functions simultaneously, only a few results had been published before. In this paper, second-order differential paths are constructed on reduced RIPEMD-128 and -160. This leads to a practical 4-sum attack on 47 steps (out of 64 steps) of RIPEMD-128 and 40 steps (out of 80 steps) of RIPEMD-160. We then extend the distinguishing property from the 4-sum to other properties, which we call \emph{a 2-dimension sum} and \emph{a partial 2-dimension sum}. As a result, the practical partial 2-dimension sum is generated on 48 steps of RIPEMD-128 and 42 steps of RIPEMD-160, with a complexity of $2^{35}$ and $2^{36}$, respectively. Theoretically, $2$-dimension sums are generated faster than the exhaustive search up to 52 steps of RIPEMD-128 and 51 steps of RIPEMD-160, with a complexity of $2^{101}$ and $2^{158}$, respectively. The practical attacks are implemented, and examples of generated (partial) 2-dimension sums are presented.
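Concretely, a 4-sum for a function $F$ is a quartet of inputs that XOR to zero and whose outputs also XOR to zero. A checker for the property, using truncated SHA-256 as a hypothetical stand-in for a compression function (real attacks additionally require the quartet to be non-degenerate, i.e. not two equal pairs):

```python
import hashlib

def F(x: bytes) -> int:
    # stand-in "compression function": truncated SHA-256, illustration only
    return int.from_bytes(hashlib.sha256(x).digest()[:4], "big")

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def is_4sum(x1: bytes, x2: bytes, x3: bytes, x4: bytes) -> bool:
    """A 4-sum: the four inputs XOR to zero and so do the four outputs."""
    inputs_zero = xor_bytes(xor_bytes(x1, x2), xor_bytes(x3, x4)) == bytes(len(x1))
    outputs_zero = F(x1) ^ F(x2) ^ F(x3) ^ F(x4) == 0
    return inputs_zero and outputs_zero
```

Finding a non-degenerate quartet faster than generic bounds is exactly the distinguishing task the paper solves for reduced RIPEMD-128/-160.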
Last updated:  2012-02-01
Designing Integrated Accelerator for Stream Ciphers with Structural Similarities
Sourav Sen Gupta, Anupam Chattopadhyay, Ayesha Khalid
To date, the basic idea for implementing stream ciphers has been confined to individual standalone designs. In this paper, we introduce the notion of integrated implementation of multiple stream ciphers within a single architecture, where the goal is to achieve area and throughput efficiency by exploiting the structural similarities of the ciphers at an algorithmic level. We present two case studies to support our idea. First, we propose the merger of SNOW 3G and ZUC stream ciphers, which constitute a part of the 3GPP LTE-Advanced security suite. We propose HiPAcc-LTE, a high performance integrated design that combines the two ciphers in hardware, based on their structural similarities. The integrated architecture reduces the area overhead significantly compared to two distinct cores, and also provides almost double throughput in terms of keystream generation, compared with the state-of-the-art implementations of the individual ciphers. As our second case study, we present IntAcc-RCHC, an integrated accelerator for the stream ciphers RC4 and HC-128. We show that the integrated accelerator achieves a slight reduction in area without any loss in throughput compared to our standalone implementations. We also achieve at least 1.5 times better throughput compared to general purpose processors. The long-term vision of this hardware-integration approach for cryptographic primitives is to build a flexible core supporting multiple designs with similar algorithmic structures.
Last updated:  2012-02-01
Incremental Deterministic Public-Key Encryption
Ilya Mironov, Omkant Pandey, Omer Reingold, Gil Segev
Motivated by applications in large storage systems, we initiate the study of incremental deterministic public-key encryption. Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O'Neill (CRYPTO '07), provides a realistic alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security for low-entropy plaintext distributions, but Bellare et al. demonstrated that a strong notion of security can in fact be realized for relatively high-entropy plaintext distributions. In order to achieve a meaningful level of security, a deterministic encryption algorithm should typically be used for encrypting rather long plaintexts, to ensure a sufficient amount of entropy. This requirement may be at odds with efficiency constraints, such as communication complexity and computation complexity in the presence of small updates. Thus, a highly desirable property of deterministic encryption algorithms is incrementality: small changes in the plaintext translate into small changes in the corresponding ciphertext. We present a framework for modeling the incrementality of deterministic public-key encryption. Within our framework we propose two schemes, which we prove to enjoy an optimal tradeoff between their security and incrementality up to small polylogarithmic factors. Our first scheme is a generic method which can be based on any deterministic public-key encryption scheme, and in particular, can be instantiated with any semantically-secure (randomized) public-key encryption scheme in the random oracle model. Our second scheme is based on the Decisional Diffie-Hellman assumption in the standard model. The approach underpinning our schemes is inspired by the fundamental ``sample-then-extract'' technique due to Nisan and Zuckerman (JCSS '96) and refined by Vadhan (J. Cryptology '04), and by the closely related notion of ``locally-computable extractors'' due to Vadhan. Most notably, whereas Vadhan used such extractors to construct private-key encryption schemes in the bounded-storage model, we show that techniques along these lines can also be used to construct incremental public-key encryption schemes.
Last updated:  2012-02-01
Modifying Boolean Functions to Ensure Maximum Algebraic Immunity
Konstantinos Limniotis, Nicholas Kolokotronis, Nicholas Kalouptsidis
The algebraic immunity of cryptographic Boolean functions is studied in this paper. Proper modifications of functions achieving maximum algebraic immunity are established, which yield new functions that also attain maximum algebraic immunity. It is shown that the derived results apply to known classes of functions. Moreover, two new efficient algorithms producing functions of guaranteed maximum algebraic immunity are developed, which further extend and generalize known constructions of functions with maximum algebraic immunity.
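Algebraic immunity itself reduces to linear algebra over GF(2): the AI of $f$ is the minimum degree of a nonzero annihilator of $f$ or of $f \oplus 1$. A brute-force sketch for small numbers of variables, given purely as our illustration (the paper's algorithms are efficient constructions, not exhaustive searches):

```python
import itertools

def _has_annihilator(truth, n, d):
    """Is there a nonzero g with deg(g) <= d and g(x) = 0 wherever truth[x] = 1?"""
    mons = [s for k in range(d + 1) for s in itertools.combinations(range(n), k)]
    # one GF(2) constraint per point where f is 1: the chosen monomials must cancel
    rows = [[int(all(x[i] for i in s)) for s in mons]
            for x in itertools.product((0, 1), repeat=n) if truth[x]]
    # Gaussian elimination: a nonzero solution exists iff rank < number of monomials
    rank = 0
    for col in range(len(mons)):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank < len(mons)

def algebraic_immunity(truth, n):
    """Minimum degree of a nonzero annihilator of f or of f + 1."""
    comp = {x: 1 - v for x, v in truth.items()}
    for d in range(n + 1):
        if _has_annihilator(truth, n, d) or _has_annihilator(comp, n, d):
            return d
```

For instance, the 3-variable majority function reaches the maximum possible value $\lceil n/2 \rceil = 2$.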
Last updated:  2015-01-28
Signature Schemes Secure against Hard-to-Invert Leakage
Sebastian Faust, Carmit Hazay, Jesper Buus Nielsen, Peter Sebastian Nordholt, Angela Zottarel
Side-channel attacks allow the adversary to gain partial knowledge of the secret key when cryptographic protocols are implemented in real-world hardware. The goal of leakage resilient cryptography is to design cryptosystems that withstand such attacks. In the auxiliary input model an adversary is allowed to see a computationally hard-to-invert function of the secret key. The auxiliary input model weakens the bounded leakage assumption commonly made in leakage resilient cryptography as the hard-to-invert function may information-theoretically reveal the entire secret key. In this work, we propose the first constructions of digital signature schemes that are secure in the auxiliary input model. Our main contribution is a digital signature scheme that is secure against chosen message attacks when given any exponentially hard-to-invert function of the secret key. As a second contribution, we construct a signature scheme that achieves security for random messages assuming that the adversary is given a polynomial-time hard-to-invert function (where both the challenge as well as the signatures seen prior to that are computed on random messages). Here, polynomial-hardness is required even when given the entire public key. We further show that such signature schemes readily give us auxiliary input secure identification schemes.
Last updated:  2012-01-30
PSCPA: Patient Self-controllable Privacy-preserving Cooperative Authentication in Distributed m-Healthcare Systems
Jun Zhou, Zhenfu Cao
Distributed m-healthcare systems significantly facilitate efficient, high-quality patient treatment, while raising the challenge of simultaneously protecting both the confidentiality of personal health information and the privacy of patients' identities. This renders many existing data access control and anonymous authentication schemes inefficient in distributed m-healthcare systems. To solve the problem, in this paper, a novel authorized accessible privacy model (AAPM) is established. Patients can authorize physicians by setting an access tree supporting flexible threshold predicates. Then, based on it, a patient self-controllable privacy-preserving cooperative authentication scheme (PSCPA) realizing three levels of security and privacy requirements in distributed m-healthcare systems is proposed. The directly authorized physicians can both decipher the personal health information and authenticate patients' identities by satisfying the access tree with their attribute sets. Due to the indistinguishability of the simulated transcripts of the patients and physicians, the indirectly authorized physicians can only decipher the personal health information but cannot authenticate patients' identities. Unauthorized persons can obtain neither. Moreover, PSCPA is extended to emergency cases and to resist denial-of-service (DoS) attacks. Finally, the formal security proof and simulation results show that our scheme far outperforms previous ones in terms of computational, communication and storage overhead.
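The access-tree mechanism can be pictured independently of the cryptography: interior nodes carry thresholds over their children and leaves test attributes. A minimal evaluator, with a hypothetical policy of our own for illustration (the scheme itself enforces the tree cryptographically rather than by plain evaluation):

```python
def satisfied(node, attrs: set) -> bool:
    """Evaluate a threshold access tree against an attribute set.

    A node is either a leaf ("attr", name) or an interior node
    ("thresh", k, children): satisfied when at least k children are satisfied.
    """
    if node[0] == "attr":
        return node[1] in attrs
    _, k, children = node
    return sum(satisfied(c, attrs) for c in children) >= k

# hypothetical policy: "cardiologist AND (hospital-A OR emergency)"
policy = ("thresh", 2, [
    ("attr", "cardiologist"),
    ("thresh", 1, [("attr", "hospital-A"), ("attr", "emergency")]),
])
```

Note that AND and OR are the special cases k = number of children and k = 1, so thresholds subsume both.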
Last updated:  2012-01-30
A novel Group Key Transfer Protocol
Chingfang Hsu, Bing Zeng, Qi Cheng, Guohua Cui
Group key transfer protocols depend on a mutually trusted key generation center (KGC) to transport the group key to all group members secretly. This approach requires that a trusted server be set up, and it incurs communication overhead costs. In addition, the existing group key transfer protocols based on secret sharing all use threshold schemes that need to compute an interpolating polynomial to encrypt and decrypt the secret group key, which increases the computational complexity of the system. In this paper, we first present a novel group key transfer protocol without an online KGC, which is based on DH key agreement and a perfect linear secret sharing scheme (LSSS). The confidentiality of the group key transfer phase of this protocol is information-theoretically secure, which is ensured by this LSSS. Furthermore, this protocol can resist potential attacks and also reduce the overhead of system implementation. The goals and security threats of the proposed group key transfer protocol are analyzed in detail.
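A perfect linear secret sharing scheme in the information-theoretic sense invoked here can be as simple as additive (XOR) sharing, shown below as an illustrative stand-in for the paper's LSSS (the actual protocol's scheme and access structure differ):

```python
import secrets

def share(secret: bytes, n: int) -> list:
    """n-out-of-n additive sharing over GF(2): any n-1 shares are uniform noise."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))  # last share fixes the XOR sum
    return shares + [last]

def reconstruct(shares: list) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Because any proper subset of shares is uniformly distributed, confidentiality holds against unbounded adversaries; this is the "information-theoretically secure" property, with no interpolating polynomial needed.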
Last updated:  2012-06-19
Key Length Estimation of Pairing-based Cryptosystems using $\eta_T$ Pairing
Naoyuki Shinohara, Takeshi Shimoyama, Takuya Hayashi, Tsuyoshi Takagi
The security of pairing-based cryptosystems depends on the difficulty of the discrete logarithm problem (DLP) over certain types of finite fields. One of the most efficient algorithms for computing a pairing is the $\eta_T$ pairing over supersingular curves on finite fields whose characteristic is $3$. Indeed many high-speed implementations of this pairing have been reported, and it is an attractive candidate for practical deployment of pairing-based cryptosystems. The embedding degree of the $\eta_T$ pairing is 6, so we deal with the difficulty of a DLP over the finite field $GF(3^{6n})$, where the function field sieve (FFS) is known as the asymptotically fastest algorithm for solving it. Moreover, several efficient algorithms are employed for implementation of the FFS, such as the large prime variation. In this paper, we estimate the time complexity of solving the DLP for the extension degrees $n=97,163,193,239,313,353,509$, when the improved FFS is used. To accomplish our aim, we present several new computable estimation formulas that give the explicit number of special polynomials used in the improved FFS. Our estimation contributes to the evaluation of the key length of pairing-based cryptosystems using the $\eta_T$ pairing.
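To relate these DLP instances to familiar key-length figures, the bit size of $GF(3^{6n})$ follows directly from the extension degree; the small computation below is ours, not the paper's estimates:

```python
import math

def gf_bits(n: int) -> int:
    # bit length of GF(3^{6n}): 6n symbols over GF(3), log2(3) bits per symbol
    return math.ceil(6 * n * math.log2(3))

# the extension degrees considered in the paper
for n in (97, 163, 193, 239, 313, 353, 509):
    print(n, gf_bits(n))
```

For example, $n = 97$ already yields a field of roughly 923 bits, so the DLP instances range from under a thousand to several thousand bits.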
Last updated:  2012-06-04
A NEW DEDICATED CRYPTOGRAPHIC HASH FUNCTION
Norziana Jamil, Ramlan Mahmood, Muhammad Reza Z'aba, Nur Izura Udzir, Zuriati Ahmad Zukarnaen
Recent progress in cryptanalysis of cryptographic hash functions has shown that most of the hash functions based on the design principles of MD4 are susceptible to differential attacks. This paper describes a new 256-bit hash function, STITCH-256, which is based on parallel branches with a stronger compression function. It is designed to offer higher security than the MD family and its variants. The performance of the new hash function is evaluated and compared with SHA-256 and FORK-256. It is shown that STITCH-256 exhibits the desired cryptographic properties and that its compression function is comparable to those of SHA-256 and FORK-256.
Last updated:  2012-01-29
Single-block collision attack on MD5
Marc Stevens
In 2010, Tao Xie and Dengguo Feng [ePrint 2010/643] constructed the first single-block collision for MD5 consisting of two 64-byte messages that have the same MD5 hash. Details of their attack, developed using what they call an evolutionary approach, have not been disclosed ``for security reasons''. Instead they have posted a challenge to the cryptology community to find a new different single-block collision attack for MD5. This paper answers that challenge by presenting a single-block collision attack based on other message differences together with an example colliding message pair. The attack is based on a new collision finding algorithm that exploits the low number of bitconditions in the first round. It uses a new way to choose message blocks that satisfy bitconditions up to step 22 and additionally uses three known tunnels to correct bitconditions up to step 25. The attack has an average runtime complexity equivalent to $2^{49.8}$ calls to MD5's compression function.
Last updated:  2012-01-29
Security Analysis of a Multi-Factor Authenticated Key Exchange Protocol
Feng Hao, Dylan Clarke
This paper shows several security weaknesses of a Multi-Factor Authenticated Key Exchange (MK-AKE) protocol, proposed by Pointcheval and Zimmer at ACNS'08. The Pointcheval-Zimmer scheme was designed to combine three authentication factors in one system, including a password, a secure token (that stores a private key) and biometrics. In a formal model, Pointcheval and Zimmer formally proved that an attacker had to break all three factors to win. However, the formal model only considers the threat that an attacker may impersonate the client; it does not discuss what happens if the attacker impersonates the server. We fill the gap by analyzing the case of server impersonation, which is a realistic threat in practice. We assume that an attacker has already compromised the password, and we then present two further attacks: in the first attack, an attacker is able to steal a fresh biometric sample from the victim without being noticed; in the second attack, he can recover the victim's private key based on the Chinese Remainder Theorem. Both attacks have been experimentally verified. In summary, an attacker actually only needs to compromise a single password factor in order to break the entire system. We also discuss the deficiencies in the Pointcheval-Zimmer formal model and countermeasures to our attacks.
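The second attack's key-recovery step leans on the Chinese Remainder Theorem. The generic tool (not the paper's specific attack) combines residues modulo coprime moduli:

```python
def crt(r1: int, n1: int, r2: int, n2: int) -> int:
    """Combine x = r1 (mod n1) and x = r2 (mod n2) into x mod n1*n2 (coprime moduli)."""
    # write x = r1 + n1*t and solve n1*t = r2 - r1 (mod n2) for t
    t = ((r2 - r1) * pow(n1, -1, n2)) % n2
    return (r1 + n1 * t) % (n1 * n2)
```

An attacker who can learn a secret's residues modulo several coprime values can stitch them into the full secret this way once the combined modulus exceeds it.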
Last updated:  2012-01-29
Cryptanalysis of the CHES 2009/2010 Random Delay Countermeasure
François Durvaux, Mathieu Renauld, François-Xavier Standaert, Loic van Oldeneel tot Oldenzeel, Nicolas Veyrat-Charvillon
Inserting random delays in cryptographic implementations is often used as a countermeasure against side-channel attacks. Most previous works on the topic focus on improving the statistical distribution of these delays. For example, efficient random delay generation algorithms have been proposed at CHES 2009/2010. These solutions increase security against attacks that solve the lack of synchronization between different leakage traces by integrating them. In this paper, we demonstrate that integration may not be the best tool to evaluate random delay insertions. For this purpose, we first describe different attacks exploiting pattern recognition techniques and Hidden Markov Models. Using these tools, we succeed in cryptanalyzing a (straightforward) implementation of the CHES 2009/2010 proposal in an Atmel microcontroller, with the same data complexity as an unprotected implementation of the AES Rijndael. In other words, we completely cancel the countermeasure in this case. Next, we show that our cryptanalysis tools are remarkably robust when attacking improved variants of the countermeasure, e.g. with additional noise or irregular dummy operations. We also show that the attacks remain applicable in a non-profiled adversarial scenario. Overall, these results suggest that the use of random delays may not be effective for protecting small embedded devices against side-channel leakage. They also confirm the need for worst-case analysis in physical security evaluations.
Last updated:  2012-04-22
Some results on $q$-ary bent functions
Deep Singh, Maheshanand Bhaintwal, Brajesh Kumar Singh
Kumar et al. (1985) extended the notion of classical bent Boolean functions to the generalized setup on $\mathbb{Z}_q^n$. They provided an analogue of classical Maiorana-McFarland type bent functions. In this paper, we study the crosscorrelation of a subclass of such generalized Maiorana-McFarland (\mbox{GMMF}) type bent functions. We provide a construction of quaternary ($q = 4$) bent functions on $n+1$ variables in terms of their subfunctions on $n$ variables. Analogues of the sum-of-squares indicator and the absolute indicator of crosscorrelation of Boolean functions are defined in the generalized setup. Further, $q$-ary functions are studied in terms of these indicators, and some upper bounds on them are obtained. Finally, we provide some constructions of balanced quaternary functions with high nonlinearity under the Lee metric.
Last updated:  2012-01-29
Efficient Leakage-free Authentication of Trees, Graphs and Forests
Ashish Kundu, Mikhail Atallah, Elisa Bertino
Leakage-free authentication of trees and graphs has been studied in the literature. Such schemes have several practical applications, especially in cloud computing. In this paper, we propose an authentication scheme that computes only one signature (optimal). Our scheme is not only highly efficient in the number of signatures it computes and in its runtime, but also highly versatile -- it can be applied not only to trees, but also to graphs and forests (disconnected trees and graphs). While achieving such efficiency and versatility, our scheme also achieves the desired security -- leakage-free authentication of data objects represented as trees, graphs and forests. This is achieved through another novel scheme proposed in this paper -- a secure naming scheme for the nodes of such data structures. The scheme assigns "secure names" to nodes such that these names can be used to verify the order between nodes efficiently, without leaking information about other nodes. As far as we know, ours is the first scheme in the literature that is optimal in its efficiency, supports two important security properties -- authenticity and leakage-freeness (privacy preservation/confidentiality) -- and is versatile in that it applies to trees, graphs and forests alike. We have carried out complexity and experimental analyses of the scheme that corroborate its performance.
Last updated:  2012-01-30
Key-Alternating Ciphers in a Provable Setting: Encryption Using a Small Number of Public Permutations
Andrey Bogdanov, Lars R. Knudsen, Gregor Leander, Francois-Xavier Standaert, John Steinberger, Elmar Tischhauser
This paper considers---for the first time---the concept of key-alternating ciphers in a provable security setting. Key-alternating ciphers can be seen as a generalization of a construction proposed by Even and Mansour in 1991. This construction builds a block cipher $PX$ from an $n$-bit permutation $P$ and two $n$-bit keys $k_0$ and $k_1$, setting $PX_{k_0,k_1}(x)=k_1\oplus P(x\oplus k_0)$. Here we consider a (natural) extension of the Even-Mansour construction with $t$ permutations $P_1,\ldots,P_t$ and $t+1$ keys, $k_0,\ldots, k_t$. We demonstrate in a formal model that such a cipher is secure in the sense that an attacker needs to make at least $2^{2n/3}$ queries to the underlying permutations to be able to distinguish the construction from random. We argue further that the bound is tight for $t=2$ but there is a gap in the bounds for $t>2$, which is left as an open and interesting problem. Additionally, in terms of statistical attacks, we show that the distribution of Fourier coefficients for the cipher over all keys is close to ideal. Lastly, we define a practical instance of the construction with $t=2$ using AES referred to as AES$^2$. Any attack on AES$^2$ with complexity below $2^{85}$ will have to make use of AES with a fixed known key in a non-black box manner. However, we conjecture its security is $2^{128}$.
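The construction is easy to state in code. Below, a toy 16-bit public permutation of our own stands in for the $P_i$ (a real instance such as AES$^2$ would use AES with fixed keys):

```python
def toy_perm(y: int) -> int:
    # a fixed public 16-bit permutation (illustrative stand-in for P_i)
    y = (y * 0x9E37) & 0xFFFF               # odd multiplier: bijective mod 2^16
    return ((y << 3) | (y >> 13)) & 0xFFFF  # 3-bit rotation: bijective

def key_alternating(x: int, keys: list, perms: list) -> int:
    """E(x) = k_t xor P_t( ... P_1(x xor k_0) ... ), with t permutations and t+1 keys."""
    assert len(keys) == len(perms) + 1
    y = x ^ keys[0]
    for P, k in zip(perms, keys[1:]):
        y = P(y) ^ k
    return y
```

With `len(perms) == 1` this is exactly the Even-Mansour cipher $PX_{k_0,k_1}$, and with two permutations it mirrors the shape of the paper's AES$^2$ proposal.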
Last updated:  2012-02-16
Automatic Quantification of Cache Side-Channels
Boris Köpf, Laurent Mauborgne, Martin Ochoa
The latency gap between caches and main memory has been successfully exploited for recovering sensitive input to programs, such as cryptographic keys from implementations of AES and RSA. So far, there are no practical general-purpose countermeasures against this threat. In this paper we propose a novel method for automatically deriving upper bounds on the amount of information about the input that an adversary can extract from a program by observing the CPU's cache behavior. At the heart of our approach is a novel technique for efficient counting of concretizations of abstract cache states that enables us to connect state-of-the-art techniques for static cache analysis and quantitative information-flow. We implement our counting procedure on top of the AbsInt TimingExplorer, one of the most advanced engines for static cache analysis. We use our tool to perform a case study where we derive upper bounds on the cache leakage of a 128-bit AES executable on an ARM processor with a realistic cache configuration. We also analyze this implementation with a commonly suggested (but until now heuristic) countermeasure applied, obtaining a formal account of the corresponding increase in security.