All papers in 2011 (Page 2 of 714 results)

Last updated:  2012-05-27
On Security of RASP Data Perturbation for Secure Half-Space Queries in the Cloud
Keke Chen
Secure data intensive computing in the cloud is challenging, involving a complicated tradeoff among security, performance, extra costs, and cloud economics. Although fully homomorphic encryption is considered the ultimate solution, it is still too expensive to be practical at the current stage. In contrast, methods that preserve special types of data utility, even with weaker security, might be acceptable in practice. The recently proposed RASP perturbation method falls into this category. It can provide practical solutions for specific problems such as secure range queries, statistical analysis, and machine learning. The RASP perturbation embeds multidimensional data into a secret higher-dimensional space, enhanced with random noise addition to protect the confidentiality of the data. It also provides a query perturbation method that transforms half-space queries into a quadratic form while preserving the results of the half-space queries. The utility-preserving property and wide application domains are appealing. However, since the security of this method has not been thoroughly analyzed, the risk of using it is unknown. The purpose of this paper is to investigate the security of the RASP perturbation method under a specific threat model. The threat model defines three levels of adversarial power and the attacks of concern. We show that although the RASP perturbed data and queries are secure at the lowest level of adversarial power, they do not satisfy the strong indistinguishability definition at higher levels of adversarial power. However, the indistinguishability definition might be too strong to be useful in the context of data intensive cloud computation. In addition, the noise component in the perturbation renders it impossible to exactly recover the plain data; thus, all attacks are essentially estimation attacks.
We propose a weaker security definition based on information-theoretic measures to describe the effectiveness of estimation attacks, and then study the security under this weaker definition. This security analysis helps clearly identify the security weaknesses of the RASP perturbation and quantify the expected security under different levels of adversarial power.
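To make the query-transformation idea concrete, here is a minimal sketch in the spirit of RASP: a 2-D point is embedded into a secret 4-D space with a constant slot and a fresh noise coordinate, and a half-space query is mapped into the perturbed space so that its result is preserved. The matrix and all values are illustrative, and the real scheme further hides the transformed query inside a quadratic form, which this linear toy deliberately omits.

```python
import random

# An illustrative secret invertible matrix (unimodular, so its inverse is integral).
A    = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
Ainv = [[1, -1, 1, -1], [0, 1, -1, 1], [0, 0, 1, -1], [0, 0, 0, 1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def perturb(x):
    """Embed a 2-D point x into a secret 4-D space: append a constant-1 slot
    and a fresh random noise coordinate, then mix everything with A."""
    return matvec(A, [x[0], x[1], 1, random.randrange(100)])

def transform_query(w, b):
    """Map the half-space query w.x < b into perturbed space: extend w with
    -b in the constant slot and 0 in the noise slot (so the noise cancels),
    then apply the inverse transpose of A."""
    wp = [w[0], w[1], -b, 0]
    return [sum(Ainv[j][i] * wp[j] for j in range(4)) for i in range(4)]

x, w, b = (3, 7), (2, -1), 1            # query: is 2*3 - 7 < 1 ?
y = perturb(x)
u = transform_query(w, b)
# u.y = w.x - b, whatever the noise, so the half-space result is preserved.
assert (sum(ui * yi for ui, yi in zip(u, y)) < 0) == (w[0]*x[0] + w[1]*x[1] < b)
```

Because the query vector puts a zero in the noise slot, the random coordinate drops out of the inner product, which is exactly why query results survive the perturbation even though exact data recovery does not.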
Last updated:  2012-06-11
Multiparty Computation with Low Communication, Computation and Interaction via Threshold FHE
Gilad Asharov, Abhishek Jain, Daniel Wichs
Fully homomorphic encryption (FHE) provides a simple template for secure computation between two parties (Alice and Bob) where: (I) Alice encrypts her input under her key, (II) Bob homomorphically evaluates the desired function on Alice's ciphertext and his own input, and sends the encrypted output to Alice. Extending this approach to multiple parties raises the problem of which key to encrypt under: if all parties choose keys on their own, then homomorphic evaluation on ciphertexts under different keys will not be possible, and if a single party chooses the key for everyone, then corrupting this party will break privacy for all. In this work, we explore the option of using threshold fully homomorphic encryption (TFHE), allowing many parties to cooperatively generate a common public key whose secret key is shared/distributed among them. Moreover, the parties can cooperatively decrypt a ciphertext without learning anything but the plaintext. We show how to instantiate this approach efficiently using the recent FHE schemes of Brakerski et al. (FOCS '11, ITCS '12) based on the learning with errors (LWE) assumption. Our main tool is to exploit the property that such LWE-based encryption schemes are homomorphic over their keys. Using TFHE, we construct multiparty computation (MPC) protocols that are secure in the fully malicious setting, tolerate any number of corruptions, and provide security in the universal composability framework. Our schemes have several benefits over prior templates for MPC. Interaction: We get protocols with only 3 rounds of interaction in the common random string model, or 2 rounds with a reusable public-key infrastructure, improving on prior known results. Communication: The communication in our protocol is only proportional to the input and output size of the function being evaluated and independent of its circuit size.
Computation: The only computation that depends on the size of the circuit being computed is a homomorphic evaluation over public ciphertexts. This computation can be performed by a single party or can be outsourced to an external server. Novel Approach: Prior approaches to MPC with a dishonest majority rely in part on some combination of the techniques of Yao (FOCS '86) and/or Goldreich, Micali and Wigderson (STOC '87). Our approach is fundamentally different and relies only on the homomorphic properties of LWE-based encryption.
Last updated:  2011-11-15
IBAKE: Identity-Based Authenticated Key Exchange Protocol
Vladimir Kolesnikov, Ganapathy S. Sundaram
The past decade has witnessed a surge in exploration of cryptographic concepts based on pairings over Elliptic Curves. In particular, identity-based cryptographic protocols have received a lot of attention, motivated mainly by the desire to eliminate the need for large-scale public key infrastructure. We follow this trend in this work, by introducing a new Identity-Based Authenticated Key Exchange (IBAKE) protocol, and providing its formal proof of security. IBAKE provides mutually-authenticated Key Exchange (AKE) using identities as public credentials. One identity-based AKE subtlety that we address in this work is resilience to man-in-the-middle attacks by the Key Management Service. For efficiency, we employ two Elliptic Curves with differing properties. Specifically, we use a combination of a super-singular and a non-super-singular curve, where the super-singular curve is used as an identity-based encryption ``wrapper'' to achieve mutual authentication, and the resulting session key is based on a Diffie-Hellman key exchange in the non-super-singular curve. We provide a detailed proof of security of the resulting protocol with respect to (our own natural adaptation and simplification of) the AKE definitions of Kolesnikov and Rackoff.
Last updated:  2011-11-15
Adaptive and Concurrent Secure Computation from New Notions of Non-Malleability
Dana Dachman-Soled, Tal Malkin, Mariana Raykova, Muthuramakrishnan Venkitasubramaniam
We present a unified framework for obtaining general secure computation that achieves adaptive Universally Composable (UC) security. Our framework captures essentially all previous results on adaptive concurrent secure computation, both in relaxed models (e.g., quasi-polynomial time simulation) and in trusted setup models (e.g., the CRS model, the imperfect CRS model). This provides conceptual simplicity and insight into what is required for adaptive and concurrent security, as well as yielding improvements to set-up assumptions and/or computational assumptions. Moreover, using our framework we provide the first constructions of concurrent secure computation protocols that are adaptively secure in the timing model, and in the non-uniform simulation model. Conceptually, our framework can be viewed as an adaptive analogue of the recent work of Lin, Pass and Venkitasubramaniam [STOC '09], who considered only non-adaptive adversaries. Their main insight was that stand-alone non-malleability is sufficient for UC-security. A main conceptual contribution of this work is, quite surprisingly, that this remains the case even when considering adaptive security. A key element in our construction is a commitment scheme that satisfies a new notion of non-malleability. The notion of concurrent equivocal non-malleable commitments, intuitively, guarantees that even when a man-in-the-middle adversary observes concurrent equivocal commitments and decommitments, the binding property continues to hold for commitments made by the adversary. This notion is stronger than standard notions of concurrent non-malleable commitments, which either consider only specific commitment schemes (e.g., statistically binding ones) or specific scenarios (e.g., the commitment phase and the decommitment phase are executed in a non-overlapping manner).
Previously, commitments that satisfy our definition have been constructed in setup models, but they either require the existence of stronger encryption schemes such as CCA-secure encryption or require independent ``trapdoors'' provided by the setup for every pair of parties to ensure non-malleability. We here provide a construction that eliminates these requirements and requires only a single trapdoor.
Last updated:  2011-11-15
An optimal Key Enumeration Algorithm and its Application to Side-Channel Attacks
Nicolas Veyrat-Charvillon, Benoît Gérard, Mathieu Renauld, François-Xavier Standaert
Methods for enumerating cryptographic keys based on partial information obtained on key bytes are important tools in cryptanalysis. This paper discusses two contributions related to the practical application and algorithmic improvement of such tools. On the one hand, we observe that modern computing platforms allow performing very large amounts of cryptanalytic operations, approximately reaching 2^50 to 2^60 block cipher encryptions. As a result, cryptographic key sizes for such ciphers typically range between 80 and 128 bits. By contrast, the evaluation of leaking devices is generally based on distinguishers with very limited computational cost, such as Kocher's Differential Power Analysis. We bridge the gap between these cryptanalytic contexts and show that giving side-channel adversaries some computing power has major consequences for the security of leaking devices. For this purpose, we first propose a Bayesian extension of non-profiled side-channel attacks that allows us to rate key candidates as a function of their respective probabilities. Next, we investigate the impact of key enumeration taking advantage of this Bayesian formulation, and quantify the resulting reduction in the data complexity of the attacks. On the other hand, we observe that statistical cryptanalyses usually try to reduce the number and size of lists corresponding to partial information on key bytes, in order to limit the time and memory complexity of the key enumeration. Quite surprisingly, few solutions exist that allow an efficient merging of large lists of subkey candidates. We provide a new deterministic algorithm that significantly reduces the number of keys to test in a divide-and-conquer attack, at the cost of limited (practically tractable) memory requirements. It allows us to optimally enumerate key candidates from any number of (possibly redundant) lists of any size, given that the subkey information is provided as probabilities.
As an illustration, we finally exhibit side-channel cryptanalysis experiments where the correct key candidate is ranked up to position 2^32, for which our algorithm reduces the number of keys to test offline by an average factor 2^5, and by a factor larger than 2^10 in the worst observed cases, compared to previous solutions. We also suggest examples of statistical attacks in which the new deterministic algorithm would allow improved results.
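The heart of optimal key enumeration can be sketched as a best-first search over ranked subkey lists: a max-heap frontier guarantees that full-key candidates come out in globally decreasing probability order. The two tiny posterior lists below are made up for illustration; the paper's algorithm additionally handles many lists and bounded memory.

```python
import heapq

def enumerate_keys(list_a, list_b):
    """Lazily yield full-key candidates in decreasing probability order,
    merging two subkey lists [(prob, subkey), ...] sorted by decreasing prob."""
    heap = [(-list_a[0][0] * list_b[0][0], 0, 0)]   # max-heap via negated probs
    seen = {(0, 0)}
    while heap:
        negp, i, j = heapq.heappop(heap)
        yield -negp, (list_a[i][1], list_b[j][1])
        # The next-best candidates can only be the two index successors.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(list_a) and nj < len(list_b) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-list_a[ni][0] * list_b[nj][0], ni, nj))

# Hypothetical posterior probabilities for two subkey bytes.
list_a = [(0.5, 0xA1), (0.3, 0xA2), (0.2, 0xA3)]
list_b = [(0.6, 0xB1), (0.4, 0xB2)]
ranked = list(enumerate_keys(list_a, list_b))
probs = [p for p, _ in ranked]
assert probs == sorted(probs, reverse=True)   # globally optimal order
```

Every pair not yet emitted is dominated by some pair already on the heap, which is why popping the heap maximum is always the globally next-best candidate.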
Last updated:  2015-10-29
The PHOTON Family of Lightweight Hash Functions
Jian Guo, Thomas Peyrin, Axel Poschmann
RFID security is currently one of the major challenges cryptography has to face, often solved by protocols assuming that an on-tag hash function is available. In this article we present the PHOTON lightweight hash-function family, available in many different flavors and suitable for extremely constrained devices such as passive RFID tags. Our proposal uses a sponge-like construction as domain extension algorithm and an AES-like primitive as internal unkeyed permutation. This allows us to obtain the most compact hash function known so far (about 1120 GE for 64-bit collision resistance security), reaching areas very close to the theoretical optimum (derived from the minimal internal state memory size). Moreover, the speed achieved by PHOTON also compares quite favorably to its competitors. This is mostly due to the fact that, unlike previously proposed schemes, our proposal is very simple to analyze and one can derive tight AES-like bounds on the number of active Sboxes. This kind of AES-like primitive is usually not well suited for ultra-constrained environments, but we describe in this paper a new method for generating the column mixing layer in a serial way, drastically lowering the required area. Finally, we slightly extend the sponge framework in order to offer interesting trade-offs between speed and preimage security for small messages, the classical use-case in hardware.
Last updated:  2012-09-13
Four-Dimensional Gallant-Lambert-Vanstone Scalar Multiplication
Patrick Longa, Francesco Sica
The GLV method of Gallant, Lambert and Vanstone~(CRYPTO 2001) computes any multiple $kP$ of a point $P$ of prime order $n$ lying on an elliptic curve with a low-degree endomorphism $\Phi$ (called GLV curve) over $\mathbb{F}_p$ as $kP = k_1P + k_2\Phi(P)$, with $\max\{|k_1|,|k_2|\}\leq C_1\sqrt n$ for some explicit constant $C_1>0$. Recently, Galbraith, Lin and Scott (EUROCRYPT 2009) extended this method to all curves over $\mathbb{F}_{p^2}$ which are twists of curves defined over $\mathbb{F}_p$. We show in this work how to merge the two approaches in order to get, for twists of any GLV curve over $\mathbb{F}_{p^2}$, a four-dimensional decomposition together with fast endomorphisms $\Phi, \Psi$ over $\mathbb{F}_{p^2}$ acting on the group generated by a point $P$ of prime order $n$, resulting in a proven decomposition for any scalar $k\in[1,n]$ given by $kP=k_1P+ k_2\Phi(P)+ k_3\Psi(P) + k_4\Psi\Phi(P)$, with $\max_i (|k_i|)< C_2\, n^{1/4}$ for some explicit $C_2>0$. Remarkably, taking the best $C_1, C_2$, we obtain $C_2/C_1<412$, independently of the curve, ensuring in theory an almost constant relative speedup. In practice, our experiments reveal that the use of the merged GLV-GLS approach supports a scalar multiplication that runs up to 50\% faster than the original GLV method. We then improve this performance even further by exploiting the Twisted Edwards model and show that curves originally slower may become extremely efficient on this model. In addition, we analyze the performance of the method on a multicore setting and describe how to efficiently protect GLV-based scalar multiplication against several side-channel attacks. Our implementations improve the state-of-the-art performance of point multiplication for a variety of scenarios including side-channel protected and unprotected cases with sequential and multicore execution.
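The two-dimensional GLV decomposition that this work extends to four dimensions can be sketched with plain integer arithmetic: run the extended Euclidean algorithm on $(n, \lambda)$ to find short lattice vectors, then subtract a nearby lattice point from $(k, 0)$. The modulus, eigenvalue and scalar below are illustrative stand-ins; a production implementation would also pick the shorter of the adjacent Euclidean vectors, as GLV prescribe.

```python
import math

def rnd(num, den):
    # Nearest integer to num/den (floor-based; exact tie-breaking is not
    # needed for the congruence to hold).
    return (num + den // 2) // den

def glv_decompose(k, lam, n):
    """GLV-style scalar decomposition: find (k1, k2) with
    k = k1 + k2*lam (mod n), both components typically around sqrt(n)."""
    rs, ts = [n, lam], [0, 1]          # invariant: rs[i] = s*n + ts[i]*lam
    while rs[-1] >= math.isqrt(n):
        q = rs[-2] // rs[-1]
        rs.append(rs[-2] - q * rs[-1])
        ts.append(ts[-2] - q * ts[-1])
    # Short lattice vectors (a, b) satisfying a + b*lam = 0 (mod n).
    (a1, b1), (a2, b2) = (rs[-2], -ts[-2]), (rs[-1], -ts[-1])
    # Babai round-off: subtract the lattice point near (k, 0).
    c1, c2 = rnd(b2 * k, n), rnd(-b1 * k, n)
    return k - c1 * a1 - c2 * a2, -c1 * b1 - c2 * b2

n = 2**61 - 1                  # a prime, standing in for the group order
lam = 0x123456789ABCDEF % n    # stand-in for the endomorphism eigenvalue
k = 0x6E3B7A9F2C41D5
k1, k2 = glv_decompose(k, lam, n)
assert (k1 + k2 * lam - k) % n == 0   # so kP = k1*P + k2*Phi(P) in the group
```

Since both lattice vectors lie in the kernel of $a + b\lambda \bmod n$, the congruence holds for any choice of rounding; only the shortness of $(k_1, k_2)$ depends on picking good vectors.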
Last updated:  2011-11-10
Improving Additive and Multiplicative Homomorphic Encryption Schemes Based on Worst-Case Hardness Assumptions
Carlos Aguilar Melchor, Slim Bettaieb, Philippe Gaborit, Javier Herranz
In CRYPTO 2010, Aguilar et al. proposed a somewhat homomorphic encryption scheme, i.e. an encryption scheme that allows computing a limited number of sums and products over encrypted data, with a security reduction from LWE over general lattices. General lattices (as opposed to ideal lattices) do not have an inherent multiplicative structure but, using a tensorial product, Aguilar et al. managed to obtain a scheme that can compute products with a polylogarithmic number of operands. In this paper we present an alternative construction that can compute products with polynomially many operands while preserving the security reductions of the initial scheme. Unfortunately, despite this improvement, our construction seems to be incompatible with Gentry's seminal transformation for obtaining fully-homomorphic encryption schemes. Recently, Brakerski et al. used the tensorial product approach introduced by Aguilar et al. in a new way that radically improves the performance of the resulting scheme. Based on this approach, and using two nice optimizations, their scheme is able to evaluate products with exponentially many operands and can be transformed into an efficient fully-homomorphic encryption scheme while being based on general lattice problems. However, even though these results outperform the construction presented here, we believe the modifications we suggest for Aguilar et al.'s schemes are of independent interest.
Last updated:  2011-12-08
$GF(2^{n})$ Subquadratic Polynomial Basis Multipliers for Some Irreducible Trinomials
Xi Xiong, Haining Fan
The $GF(2^{n})$ multiplication operation in polynomial basis can be represented as a matrix-vector product form, and the matrix is often called the Mastrovito matrix. The Toeplitz matrix-vector product approach has been used to design subquadratic shifted polynomial basis multipliers. In order to apply this approach to subquadratic polynomial basis multipliers, this Mastrovito matrix should be transformed into a Toeplitz matrix. In this paper, two transformation methods are proposed for irreducible trinomial $x^{n}+x^{k}+1$, where $2k+1\leq n$.
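The three-multiplication Toeplitz split that makes such multipliers subquadratic can be sketched over GF(2): a $2m \times 2m$ Toeplitz matrix-vector product reduces to three $m \times m$ products instead of four, since the two diagonal blocks coincide. The diagonal-array representation (`T[i][j] = d[i-j+n-1]`) and the test values below are illustrative; `n` must be a power of two.

```python
def tmvp(d, v):
    """Toeplitz matrix-vector product over GF(2) with three recursive
    half-size products instead of four (the subquadratic two-way split).
    `d` holds the 2n-1 diagonals, so that T[i][j] = d[i-j+n-1]."""
    n = len(v)
    if n == 1:
        return [d[0] & v[0]]
    m = n // 2
    v0, v1 = v[:m], v[m:]
    # Diagonal arrays of the top-right (A), diagonal (B), bottom-left (C) blocks.
    A, B, C = d[:2*m - 1], d[m:3*m - 1], d[2*m:]
    xor = lambda x, y: [a ^ b for a, b in zip(x, y)]
    p1 = tmvp(B, xor(v0, v1))          # B(v0 + v1)
    p0 = tmvp(xor(A, B), v1)           # (A + B)v1
    p2 = tmvp(xor(C, B), v0)           # (C + B)v0
    return xor(p1, p0) + xor(p1, p2)   # [B v0 + A v1, C v0 + B v1]

def tmvp_naive(d, v):
    n = len(v)
    return [sum(d[i - j + n - 1] & v[j] for j in range(n)) % 2
            for i in range(n)]

d = [1, 0, 1, 1, 0, 1, 0]   # 2n-1 = 7 diagonals for n = 4
v = [1, 0, 1, 1]
assert tmvp(d, v) == tmvp_naive(d, v)
```

Three half-size products per level give the $O(n^{\log_2 3})$ complexity; the paper's contribution is the transformation that turns the Mastrovito matrix of a trinomial into this Toeplitz form in the first place.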
Last updated:  2011-11-10
Efficient and Secure Delegation of Linear Algebra
Payman Mohassel
We consider secure delegation of linear algebra computation, wherein a client, \emph{privately} and \emph{verifiably}, outsources tasks such as matrix multiplication, matrix inversion, computing the rank and determinant, and solving a linear system to a remote worker. When operating on $n \times n$ matrices, we design non-interactive, and secure protocols for delegating matrix multiplication, based on a number of encryption schemes with limited homomorphic properties where the client only needs to perform $O(n^2)$ work. The main component of these delegation protocols is a mechanism for efficiently verifying the \emph{homomorphic matrix multiplication} performed by the worker. We introduce a general method for performing this verification, for any homomorphic encryption scheme that satisfies two special properties. We then show that most existing homomorphic encryption schemes satisfy these properties and hence can utilize our general verification method. In case of the BGN-style encryption of [Gentry et al., EUROCRYPT 2010], we also show a simpler and more efficient verification method that does not follow our general approach. Finally, we show constant round and efficient constructions for secure delegation of other linear algebra tasks based on our delegation protocol for matrix multiplication. In all of these constructions, the client's work is at most $O(n^2\log n)$. Our constructions can also be efficiently transformed to \emph{server-aided protocols} for secure two-party computation of linear algebra with similar efficiency.
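As a plaintext analogue of the client's $O(n^2)$ verification, Freivalds' probabilistic check (not the paper's homomorphic mechanism) shows how a claimed matrix product can be validated without redoing the $O(n^3)$ multiplication: multiply both sides by a random vector and compare.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify A*B == C using only O(n^2) work per round:
    check A(Br) == Cr for random 0/1 vectors r.  A wrong C survives each
    round with probability at most 1/2."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # caught a cheating worker
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]       # the true product A*B
bad = [[19, 22], [43, 51]]        # one corrupted entry
assert freivalds_check(A, B, good)
assert not freivalds_check(A, B, bad, rounds=40)
```

The delegation protocols in the paper achieve the same asymptotic client cost while additionally keeping the matrices encrypted, which is where the homomorphic verification mechanism comes in.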
Last updated:  2012-05-12
Genus 2 Hyperelliptic Curve Families with Explicit Jacobian Order Evaluation and Pairing-Friendly Constructions
Aurore Guillevic, Damien Vergnaud
The use of (hyper)elliptic curves in cryptography relies on the ability to compute the Jacobian order of a given curve. Recently, Satoh proposed a probabilistic polynomial time algorithm to test whether the Jacobian -- over a finite field $\mathbb{F}_q$ -- of a hyperelliptic curve of the form $Y^2 = X^5 + aX^3 + bX$ (with $a,b \in \mathbb{F}_q^*$) has a large prime factor. His approach is to obtain candidates for the zeta function of the Jacobian over $\mathbb{F}_q$ from its zeta function over an extension field where the Jacobian splits. We extend and generalize Satoh's idea to provide \emph{explicit} formulas for the zeta function of the Jacobian of genus 2 hyperelliptic curves of the form $Y^2 = X^5 + aX^3 + bX$ and $Y^2 = X^6 + aX^3 + b$ (with $a,b \in \mathbb{F}_q^*$). Our results are proved by elementary (but intricate) polynomial root-finding techniques. Hyperelliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairing-based cryptographic systems. Using our closed formulas for the Jacobian order, we present several algorithms to obtain so-called \emph{pairing-friendly} genus 2 hyperelliptic curves. Our method relies on techniques initially proposed to produce pairing-friendly elliptic curves (namely, the Cocks-Pinch method and the Brezing-Weng method). We demonstrate this method by constructing several interesting curves with $\rho$-values around 3. We found for each embedding degree $5 \leqslant k \leqslant 35$ a family of curves of $\rho$-value between $2.25$ and $4$.
Last updated:  2011-11-11
Advanced Zero-Sum Distinguishers for the Permutations of the PHOTON Family
Le Dong, Wenling Wu, Shuang Wu, Jian Zou
PHOTON is a new family of lightweight hash functions which use an extended sponge construction and AES-like permutations. The family has five members, each with a corresponding permutation; the state sizes of these permutations are 100, 144, 196, 256 and 288 bits, respectively. In this paper, we first estimate upper bounds on the algebraic degrees of some round-reduced permutations and use spectral properties to improve them. Then, zero-sum distinguishers are constructed based on these upper bounds. Applying the integral properties and the super-Sbox technique used on AES-like block ciphers, we can extend the previous zero-sum distinguishers by one or two rounds in the middle. On the other hand, tighter upper bounds on the algebraic degrees of these permutations are obtained using recent results of Boura et al. Based on these new bounds, full-round zero-sum distinguishers can be constructed for the first four permutations. These results do not threaten the security of the hash family.
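The basic zero-sum property these distinguishers exploit is easy to demonstrate: for a Boolean function of algebraic degree $d$, the outputs XORed over any coset of a subspace of dimension $d+1$ cancel. The 5-variable quadratic below is a toy stand-in for one output bit of a round-reduced permutation.

```python
def f(x):
    """A 5-variable Boolean function of algebraic degree 2 (toy stand-in
    for an output bit of a round-reduced permutation)."""
    b = [(x >> i) & 1 for i in range(5)]
    return b[0] & b[1] ^ b[2] & b[3] ^ b[4]

# Any coset of a subspace of dimension deg(f)+1 = 3 sums to zero.
basis = [0b00001, 0b00010, 0b00100]       # spans a 3-dimensional subspace
for coset_rep in range(32):
    total = 0
    for mask in range(8):                 # iterate over all 2^3 subspace points
        v = coset_rep
        for i in range(3):
            if (mask >> i) & 1:
                v ^= basis[i]
        total ^= f(v)
    assert total == 0                     # the zero-sum property holds
```

A distinguisher follows because a random permutation would satisfy such zero-sums only with negligible probability, so proving low degree for (round-reduced) PHOTON permutations immediately yields structured input sets with vanishing output sums.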
Last updated:  2012-09-11
Positive Results for Concurrently Secure Computation in the Plain Model
Vipul Goyal
We consider the question of designing concurrently self-composable protocols in the plain model. We first focus on the minimal setting where there is a party $P$ which might interact with several other parties in any unbounded (polynomial) number of concurrent sessions. $P$ holds a single input $x$ which it uses in all the concurrent sessions. An analogy is a server interacting with various clients at the same time. In this ``single input'' setting, we show that many (or even most) functionalities can be securely realized in the plain model. More precisely, we are able to realize all ideal functionalities except those that compute (a weak form of) a cryptographic pseudorandom function. We complement our positive result by showing an impossibility result in this setting for a functionality which evaluates a pseudorandom function. Our security definition follows the standard ideal/real world simulation paradigm (with no superpolynomial simulation, etc.). There is no a priori bound on the number of concurrent executions. We show interesting extensions of our positive results to the more general setting where the honest parties may choose different inputs in different sessions (even adaptively), where the roles that the parties assume in the protocol may be interchangeable, etc. We also put forward a conjecture that we call the bounded pseudoentropy conjecture. Prior to our work, the only positive results known in the plain model in the fully concurrent setting were for zero-knowledge.
Last updated:  2011-11-10
A Multi-Receiver ID-Based Generalized Signcryption Scheme
Caixue Zhou
Generalized signcryption (GSC) can adaptively work as an encryption scheme, a signature scheme or a signcryption scheme with only one algorithm. In this paper, the formal definition and security notions of multi-receiver identity-based generalized signcryption (MID-GSC) are defined. A concrete scheme is also proposed and proved to be confidential under the Bilinear Diffie-Hellman (BDH) assumption and existentially unforgeable under the Computational Diffie-Hellman (CDH) assumption in the random oracle model. Using the randomness re-use technique, the scheme needs only one pairing computation to generalized-signcrypt a single message for n receivers. Compared with other multi-receiver ID-based signcryption schemes, the new scheme is thus highly efficient.
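The randomness re-use idea can be illustrated with plain multiplicative ElGamal (no pairings, toy parameters): a single shared ephemeral value $r$, and hence a single expensive exponentiation $g^r$, serves every receiver, analogous to how the scheme spends only one pairing for n receivers.

```python
import random

p = 2**61 - 1          # prime modulus (toy parameters, not secure sizes)
g = 3

def keygen():
    sk = random.randrange(2, p - 1)
    return sk, pow(g, sk, p)

def encrypt_to_all(m, pks):
    """Randomness re-use: one shared r, so g^r is computed once and the
    per-receiver cost is a single extra exponentiation pk_i^r."""
    r = random.randrange(2, p - 1)
    c1 = pow(g, r, p)                           # shared across all receivers
    return c1, [m * pow(pk, r, p) % p for pk in pks]

def decrypt(sk, c1, c2):
    # c2 / c1^sk = m * g^(sk*r) * g^(-sk*r) = m  (inverse via Fermat).
    return c2 * pow(c1, p - 1 - sk, p) % p

keys = [keygen() for _ in range(3)]
c1, c2s = encrypt_to_all(123456789, [pk for _, pk in keys])
assert all(decrypt(sk, c1, c2) == 123456789 for (sk, _), c2 in zip(keys, c2s))
```

Care is needed when re-using randomness in general (it is safe for schemes proven secure under re-use, as the paper argues for its construction, not a universally valid optimization).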
Last updated:  2012-05-06
A New Class of Hyper-bent Boolean Functions with Multiple Trace Terms
Chunming Tang, Yanfeng Qi, Maozhi Xu, Baocheng Wang, Yixian Yang
Introduced by Rothaus in 1976 as interesting combinatorial objects, bent functions are maximally nonlinear Boolean functions in an even number of variables whose Hamming distance to the set of all affine functions equals $2^{n-1}\pm 2^{\frac{n}{2}-1}$. Bent functions are not only applied in cryptography, for instance in components of S-boxes, block ciphers and stream ciphers, but are also related to coding theory; hence much research has been devoted to them. Youssef and Gong introduced a new class of bent functions, the so-called hyper-bent functions, which have stronger properties and are rarer; hyper-bent functions also seem more difficult to generate. Moreover, (hyper-)bent functions have not been fully classified. Charpin and Gong studied a class of hyper-bent functions $f$ defined on $\mathbb{F}_{2^n}$ by $f=\sum\limits_{r\in R}\mathrm{Tr}_{1}^{n}(a_{r}x^{r(2^m-1)})$, $n=2m$ and $a_{r}\in \mathbb{F}_{2^n}$, where $R$ is a subset of a set of representatives of the cyclotomic cosets modulo $2^m + 1$ for which each coset has the full size $n$. Further, Mesnager contributed to the knowledge of a class of hyper-bent functions $f_{b}$ defined over $\mathbb{F}_{2^n}$ by $f_{b}=\sum\limits_{r\in R}\mathrm{Tr}_{1}^{n}(a_{r}x^{r(2^m-1)})+\mathrm{Tr}_{1}^{2}(bx^{\frac{2^n-1}{3}})$, $b\in \mathbb{F}_{4}$, $n=2m$ and $a_{r}\in \mathbb{F}_{2^m}$. In this paper, we study a new class of hyper-bent functions $f_{b}$ defined over $\mathbb{F}_{2^n}$ by $f_{b}=\sum\limits_{r\in R}\mathrm{Tr}_{1}^{n}(a_{r}x^{r(2^m-1)})+\mathrm{Tr}_{1}^{4}(bx^{\frac{2^n-1}{5}})$, $b\in \mathbb{F}_{16}$, $n=2m$ and $a_{r}\in \mathbb{F}_{2^m}$.
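Bentness is conveniently checked through the Walsh-Hadamard spectrum: $f$ is bent iff every coefficient has absolute value $2^{n/2}$. The sketch below verifies this for the textbook 4-variable quadratic $x_1x_2 \oplus x_3x_4$ and confirms the stated distance $2^{n-1}-2^{\frac{n}{2}-1}$ to the affine functions.

```python
def walsh_spectrum(f, n):
    """Walsh-Hadamard coefficients W_f(a) = sum_x (-1)^(f(x) ^ <a,x>).
    f is bent iff |W_f(a)| = 2^(n/2) for every a."""
    def dot(a, x):
        return bin(a & x).count("1") & 1
    return [sum((-1) ** (f(x) ^ dot(a, x)) for x in range(2 ** n))
            for a in range(2 ** n)]

# The quadratic x1*x2 ^ x3*x4 on the bits of x: the textbook bent function.
f = lambda x: ((x >> 3) & (x >> 2) ^ (x >> 1) & x) & 1

spec = walsh_spectrum(f, 4)
assert all(abs(w) == 4 for w in spec)        # 2^(n/2) = 4, so f is bent
nl = 2**3 - max(map(abs, spec)) // 2         # nonlinearity from the spectrum
assert nl == 6                               # = 2^{n-1} - 2^{n/2 - 1}
```

The same spectral test (extended over cyclotomic cosets) is what makes hyper-bentness computationally checkable for the trace-form functions studied in the paper.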
Last updated:  2013-07-01
Efficient Password-Based Authenticated Key Exchange from Lattices
Yi Ding, Lei Fan
Protocols for password-based authenticated key exchange (PAKE) allow two users who share only a short, low-entropy password to agree on a cryptographically strong session key. One must ensure that protocols are immune to off-line dictionary attacks, in which an adversary exhaustively enumerates all possible passwords in an attempt to determine the correct one. Recently, Katz et al. \cite{GK10} gave a new framework for realizing PAKE without random oracles, in the common reference string model. In this paper, we instantiate the framework of \cite{GK10} under lattice assumptions. Specifically, we modify the lattice-based approximate projective hashing introduced in \cite{KV09} and plug it into the framework of \cite{GK10}, and we prove that our new PAKE is efficient and secure in the standard model, based on the security of the PAKE framework of \cite{GK10}.
Last updated:  2011-11-10
New Subexponential Algorithms for Factoring in $SL(2,\fq)$
Jean-Charles Faugère, Ludovic Perret, Christophe Petit, Guénaël Renault
Cayley hash functions are a particular kind of cryptographic hash functions with very appealing properties. Unfortunately, their security is related to a mathematical problem whose hardness is not very well understood, the \emph{factorization problem in finite groups}. Given a group $G$, a set of generators $\gen$ for this group and an element $g\in G$, the factorization problem asks for a ``short'' representation of $g$ as a product of the generators. In this paper, we provide a new algorithm for solving this problem for the group $G:=\G$. We first reduce the problem to the resolution of a particular kind of multivariate equation over $\fq$. Then, we introduce a dedicated approach to solve this equation with Gröbner bases. We provide a complexity analysis of our approach that is of independent interest from the point of view of Gröbner basis algorithms. Finally, we give the first subexponential time algorithm computing polynomial-length factorizations of any element $g$ with respect to any generator set $\gen$ of $\G$. Previous algorithms only worked for specific generator sets, ran in exponential time or produced factorizations that had at least a subexponential length. In practice, our algorithm beats the birthday-bound complexity of previous attacks for medium and large values of $n$.
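A Cayley hash itself is easy to state: fix two generators of $SL(2, \mathbb{F}_p)$ and multiply one matrix per message bit. The prime and generators below are illustrative toys, far below secure sizes; finding a short preimage of a digest is exactly the factorization problem the paper attacks.

```python
P = 1000003  # a prime modulus (illustrative; real parameters are much larger)

def mat_mul(X, Y, p=P):
    """2x2 matrix product modulo p."""
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g) % p, (a*f + b*h) % p), ((c*e + d*g) % p, (c*f + d*h) % p)

# Two determinant-1 generators (an illustrative choice).
A = ((1, 2), (0, 1))
B = ((1, 0), (2, 1))

def cayley_hash(bits):
    """Hash a bit string by multiplying one generator per bit; a short
    factorization of the digest over {A, B} is a preimage."""
    H = ((1, 0), (0, 1))
    for b in bits:
        H = mat_mul(H, B if b else A)
    return H

h = cayley_hash([1, 0, 1, 1])
det = (h[0][0] * h[1][1] - h[0][1] * h[1][0]) % P
assert det == 1        # the digest stays inside SL(2, F_p)
```

Because matrix multiplication is associative but not commutative, the hash is naturally parallelizable yet order-sensitive, which is a large part of the appeal the abstract mentions.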
Last updated:  2011-11-07
How to Delegate and Verify in Public: Verifiable Computation from Attribute-based Encryption
Bryan Parno, Mariana Raykova, Vinod Vaikuntanathan
The wide variety of small, computationally weak devices and the growing number of computationally intensive tasks make the delegation of computation to large data centers a desirable solution. However, computation outsourcing is useful only when the returned result can be trusted, which makes verifiable computation (VC) a must for such scenarios. In this work we extend the definition of verifiable computation in two important directions: public delegation and public verifiability, which have important applications in many practical delegation scenarios. Yet, existing VC constructions based on standard cryptographic assumptions fail to achieve these properties. As the primary contribution of our work, we establish an important (and somewhat surprising) connection between verifiable computation and attribute-based encryption (ABE), a primitive that has been widely studied. Namely, we show how to construct a VC scheme with public delegation and public verifiability from any ABE scheme. The VC scheme verifies any function in the class of functions covered by the permissible ABE policies. This scheme enjoys a very efficient verification algorithm that depends only on the output size. Strengthening this connection, we show a construction of a multi-function verifiable computation scheme from an ABE scheme with outsourced decryption, a primitive defined recently by Green, Hohenberger and Waters (USENIX Security 2011). A multi-function VC scheme allows the verifiable evaluation of multiple functions on the same preprocessed input. In the other direction, we also explore the construction of an ABE scheme from verifiable computation protocols.
Last updated:  2011-11-07
Parallel Homomorphic Encryption
Seny Kamara, Mariana Raykova
In the problem of private outsourced computation, a client wishes to delegate the evaluation of a function f on a private input x to an untrusted worker without the latter learning anything about x and f(x). This problem occurs in many applications and, most notably, in the setting of cloud computing. In this work, we consider the problem of privately outsourcing computation to a cluster of machines, which typically happens when the computation is to be performed over massive datasets, e.g., to analyze large social networks or train machine learning algorithms on large corpora. At such scales, computation is beyond the capabilities of any single machine so it is performed by large-scale clusters of workers. We address this problem by introducing the notion of parallel homomorphic encryption (PHE) schemes, which are encryption schemes that support computation over encrypted data via evaluation algorithms that can be efficiently executed in parallel. We also consider delegated PHE schemes which, in addition, can hide the function being evaluated. More concretely, we focus on the MapReduce model of parallel computation and show how to construct PHE schemes that can support various MapReduce operations on encrypted datasets including element testing and keyword search. More generally, we construct schemes that can support the evaluation of functions in NC0 with locality 1 and m = polylog(k) (where k is the security parameter). Underlying our PHE schemes are two new constructions of (local) randomized reductions (Beaver and Feigenbaum, STACS '90) for univariate and multivariate polynomials. Unlike previous constructions, our reductions are not based on secret sharing and are fully-hiding in the sense that the privacy of the input is guaranteed even if the adversary sees all the client's queries. 
Our randomized reduction for univariate polynomials is information-theoretically secure and is based on permutation polynomials, whereas our reduction for multivariate polynomials is computationally-secure under the multi-dimensional noisy curve reconstruction assumption (Ishai, Kushilevitz, Ostrovsky, Sahai, FOCS '06).
Last updated:  2014-02-10
Efficient Multi-Query CPIR from Ring-LWE
Helger Lipmaa
We propose an $(n, m)$-computationally-private information retrieval (CPIR) protocol with rate $1 - o (1)$ and highly nontrivial (sublinear and data-dependent) server's computational complexity. For this, we note that an $(n, m)$-CPIR protocol is equivalent to a secure function evaluation protocol that evaluates a secret function $f$ on $m$ different inputs. Thus, we first design an efficient multi-level circuit for $f$, and then use the recent (ring-)LWE based fully-homomorphic encryption scheme by Brakerski, Gentry and Vaikuntanathan~\cite{eprint2011:BrakerskiGV} to evaluate the circuit in a private manner. Apart from the final result itself, several of our techniques may be of independent interest. This includes the construction of the circuit for $f$ and the definition and construction of computational batch codes.
Last updated:  2012-01-20
Receipt Freeness of Prêt à Voter Provably Secure
Dalia Khader, Peter Y. A. Ryan
Prêt à Voter is an end-to-end verifiable voting scheme that is also receipt free. Formal method analysis was used to prove that Prêt à Voter is receipt free. In this paper we use one of the latest versions of Prêt à Voter [XCH+10] to prove receipt freeness of the scheme using computational methods. We use provable security game models for the first time to prove a paper-based voting scheme receipt free. In this paper we propose a game model that defines receipt freeness. We show that in order to simulate the game we require an IND-CCA2 encryption scheme to create the ballots. The schemes usually used in constructing Prêt à Voter are either exponential ElGamal or Paillier because of their homomorphic properties needed for tallying; however, both are only IND-CPA secure. We propose a new verifiable shuffle, the ``D-shuffle'', to be used together with an IND-CPA encryption scheme; it guarantees that the outputs of the shuffle are IND-CCA2-secure ciphertexts, and these are used for constructing the ballots. The idea is based on the Naor-Yung transformation [NY95]. We prove that if there exists an adversary that breaks receipt freeness then there exists an adversary that breaks the IND-CCA2 security of the Naor-Yung encryption scheme. We further show that the ``D-shuffle'' provides us with the option of having multiple authorities create the ballots such that no single authority can break voter privacy.
Last updated:  2011-11-25
CCA Secure IB-KEM from the Computational Bilinear Diffie-Hellman Assumption in the Standard Model
Yu Chen, Liqun Chen, Zongyang Zhang
In this paper, we propose several selective-identity chosen-ciphertext attack secure identity-based key encapsulation (IB-KEM) schemes that are provably secure under the computational bilinear Diffie-Hellman (CBDH) assumption in the standard model. Our schemes compare favorably to previous results in efficiency. With delicate modification, our schemes can easily be strengthened to full-identity CCA security.
Last updated:  2011-11-03
Generic Constructions for Verifiable Signcryption
Laila El Aimani
Signcryption is a primitive which simultaneously performs the functions of both signature and encryption in a way that is more efficient than signing and encrypting separately. We study in this paper constructions of signcryption schemes from basic cryptographic mechanisms; our study concludes that the known constructions require expensive encryption in order to attain confidentiality; however, some adjustments make them rest on cheap encryption without compromising their security. Our constructions further enjoy verifiability, which entitles the sender or the receiver to prove the validity of a signcryption with or without revealing the \emph{signcrypted} message. They also allow the receiver to release some information which allows anyone to publicly verify a signcryption on a given message. Finally, our constructions accept efficient instantiations if the building blocks belong to a wide class of signature/encryption schemes.
Last updated:  2011-11-03
A Unified Framework for Small Secret Exponent Attack on RSA
Noboru Kunihiro, Naoyuki Shinohara, Tetsuya Izu
We address lattice-based methods for the small secret exponent attack on the RSA scheme. Boneh and Durfee reduced the attack to finding small roots of a bivariate modular equation: $x(N+1+y)+1 \equiv 0 \pmod{e}$, where $N$ is an RSA modulus and $e$ is the RSA public key. Boneh and Durfee proposed a lattice-based algorithm for solving the problem. When the secret exponent $d$ is less than $N^{0.292}$, their method breaks the RSA scheme. Since the lattice used in the analysis is not full-rank, the analysis is not easy. Blömer and May gave an alternative algorithm. Although their bound $d \leq N^{0.290}$ is worse than the Boneh--Durfee result, their method uses a full-rank lattice. However, the proof for their bound is still complicated. Herrmann and May gave an elementary proof for the Boneh--Durfee bound: $d \leq N^{0.292}$. In this paper, we first give an elementary proof achieving the Blömer--May bound: $d \leq N^{0.290}$. Our proof employs the unravelled linearization technique introduced by Herrmann and May and is rather simpler than Blömer--May's proof. Then, we provide a unified framework for constructing the lattices used for solving the problem, which includes the two previous methods, Herrmann--May and Blömer--May, as special cases. Furthermore, we prove that the Boneh--Durfee bound $d \leq N^{0.292}$ is still optimal in our unified framework.
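For context, the bivariate equation above comes directly from the RSA key equation; a standard derivation (textbook background, not taken from this abstract) runs:

```latex
\begin{align*}
  ed \equiv 1 \pmod{\varphi(N)}
    &\;\Longrightarrow\; ed = 1 + k\,\varphi(N) \text{ for some integer } k,\\
  \varphi(N) = N + 1 - (p + q)
    &\;\Longrightarrow\; 1 + k\bigl(N + 1 - (p+q)\bigr) \equiv 0 \pmod{e}.
\end{align*}
Setting $x = k$ and $y = -(p+q)$ yields $x(N+1+y) + 1 \equiv 0 \pmod{e}$,
whose root $(x_0, y_0)$ is small whenever $d$ (and hence $k < d$) is small.
```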
Last updated:  2011-11-24
An Efficient Broadcast Attack against NTRU
Jianwei Li, Yanbin Pan, Mingjie Liu, Guizhen Zhu
The NTRU cryptosystem is the most practical scheme known to date. In this paper, we first discuss the ergodic-linearization algorithm against GGH, and then naturally deduce a new and uniform broadcast attack against several variants of NTRU: for every recipient's ciphertext, isolate the blinding value vector, then derandomize directly and entirely using inner products; afterwards, using some properties of circulant matrices together with linearization, we obtain three linear congruence equations of the form $a^T Y \equiv s \pmod{q_0}$ with $N + \lfloor N/2 \rfloor$ variables. Hence, once the number of independent recipients' ciphertext/public-key pairs reaches $N + \lfloor N/2 \rfloor - 2$, we can work out these variables and recover the plaintext in $O(N^3)$ arithmetic operations. The experimental evidence indicates that our algorithm can efficiently mount a broadcast attack against NTRU with the highest security parameters. To the best of our knowledge, this is the most efficient broadcast attack against NTRU. It is an algebraic broadcast attack, based on the special structure of the blinding value space $L_r$.
Last updated:  2012-03-22
Impact of Intel's New Instruction Sets on Software Implementation of $GF(2)[x]$ Multiplication
Chen Su, Haining Fan
PCLMULQDQ, a new instruction that supports $GF(2)[x]$ multiplication, was introduced by Intel in 2010. This instruction brings a dramatic change to software implementation of multiplication in $GF(2^m)$ fields. In this paper, we present improved Karatsuba formulae for multiplying two small binary polynomials, compare different strategies for PCLMULQDQ-based multiplication in the five $GF(2^m)$ fields recommended by NIST, and identify the best design approaches to software implementation of $GF(2)[x]$ multiplication.
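As background (an illustration, not the paper's formulae): a $GF(2)[x]$ product is ordinary multiplication with carries suppressed, and one Karatsuba level trades four half-size products for three. A minimal sketch using Python integers as bit-vector polynomials:

```python
def clmul(a, b):
    """Carry-less (GF(2)[x]) multiplication: shift-and-XOR schoolbook."""
    r = 0
    while b:
        if b & 1:
            r ^= a        # XOR replaces addition (characteristic 2)
        a <<= 1
        b >>= 1
    return r

def karatsuba_clmul(a, b, half=64):
    """One Karatsuba level over GF(2)[x]: 3 half-size products.
    Over GF(2) there are no sign corrections, only XORs."""
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = clmul(a0, b0)
    hi = clmul(a1, b1)
    mid = clmul(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return lo ^ (mid << half) ^ (hi << (2 * half))
```

On real hardware the three `clmul` calls would each be a single PCLMULQDQ instruction on 64-bit halves.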
Last updated:  2012-11-08
Another Look at Symmetric Incoherent Optimal Eavesdropping against BB84
Arpita Maitra, Goutam Paul
The BB84 protocol is used by Alice (the sender) and Bob (the receiver) to settle on a secret classical bit-string by communicating qubits over an insecure quantum channel to which Eve (the eavesdropper) can have access. In this paper, we revisit a well known eavesdropping technique against BB84. We claim that there are certain gaps in the cryptanalytic understanding of the existing eavesdropping strategy, and we try to bridge those gaps in this paper. First we refer to the result showing that in the six-state variant of the BB84 protocol (Bru\ss, Phys. Rev. Lett., 1998), the mutual information between Alice (the sender) and Eve (the eavesdropper) is higher when a two-bit probe is used compared to a one-bit probe, and hence the two-bit probe provides a stronger eavesdropping strategy. However, from a cryptanalytic point of view, we show that Eve has the same success probability in guessing the bit transmitted by Alice in both the two-bit and the one-bit probe cases. Thus, we point out that higher mutual information may not directly lead to a higher probability of guessing the key bit. It is also explained in the work of Bru\ss~that the six-state variant of the BB84 protocol is more secure than the traditional four-state BB84. We look into this point in more detail and identify that this advantage is only achieved at the expense of communicating more qubits in the six-state protocol. In fact, we present different scenarios where, given the same number of qubits communicated, the security comparison of the four- and six-state protocols is evaluated carefully.
Last updated:  2013-01-22
Signatures of Correct Computation
Charalampos Papamanthou, Elaine Shi, Roberto Tamassia
We introduce \textit{Signatures of Correct Computation} (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted \emph{source} outsources a function $f$ to an untrusted \emph{server}, along with a public key for that function (to be used during verification). The server can then produce a succinct signature $\sigma$ vouching for the correctness of the computation of $f$, i.e., that some result $v$ is indeed the correct outcome of the function $f$ evaluated on some point $\vec{a}$. There are two crucial performance properties that we want to guarantee in an SCC construction: (1)~verifying the signature should take asymptotically less time than evaluating the function $f$; and (2)~the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve \emph{optimal} updates, i.e., the function's public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply \emph{Publicly Verifiable Computation} (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, \emph{any client} can verify the signature $\sigma$ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and \textit{without the random oracle model}.
Last updated:  2011-11-02
TweLEX: A Tweaked Version of the LEX Stream Cipher
Mainack Mondal, Avik Chakraborti, Nilanjan Datta, Debdeep Mukhopadhyay
\texttt{LEX} is a stream cipher proposed by Alex Biryukov. It was selected for phase $3$ of the eSTREAM competition. \texttt{LEX} is based on the Advanced Encryption Standard (\texttt{AES}) block cipher and uses a methodology called {\em Leak Extraction}, proposed by Biryukov himself. However, Dunkelman and Keller showed that a key recovery attack exists against \texttt{LEX}. Their attack requires $2^{36.3}$ bytes of keystream produced by the same key and works with a time complexity of $2^{112}$ operations. In this work we explore \texttt{LEX} further and show that under the assumption of a related-key model we can obtain $24$ secret state bytes with a time complexity of $2^{96}$ and a data complexity of $2^{54.3}$. Subsequently, we introduce a tweaked version of \texttt{LEX}, called \texttt{TweLEX}, which is shown to resist all known attacks against \texttt{LEX}. Though the throughput of \texttt{TweLEX} is half that of \texttt{LEX}, it is still $1.25$ times faster than \texttt{AES}, the underlying block cipher. This work attempts to revive the principle of {\em leak extraction} as a simple and elegant method to design stream ciphers.
Last updated:  2012-03-14
Iris: A Scalable Cloud File System with Efficient Integrity Checks
Emil Stefanov, Marten van Dijk, Alina Oprea, Ari Juels
We present Iris, a practical, authenticated file system designed to support workloads from large enterprises storing data in the cloud and be resilient against potentially untrustworthy service providers. As a transparent layer enforcing strong integrity guarantees, Iris lets an enterprise tenant maintain a large file system in the cloud. In Iris, tenants obtain strong assurance not just on data integrity, but also on data freshness, as well as data retrievability in case of accidental or adversarial cloud failures. Iris offers an architecture scalable to many clients (on the order of hundreds or even thousands) issuing operations on the file system in parallel. Iris includes new optimization and enterprise-side caching techniques specifically designed to overcome the high network latency typically experienced when accessing cloud storage. Iris also includes novel erasure coding techniques for efficient support of dynamic Proofs of Retrievability (PoR) protocols over the file system. We describe our architecture and experimental results on a prototype version of Iris. Iris achieves end-to-end throughput of up to 260MB per second for 100 clients issuing simultaneous requests on the file system. (This limit is dictated by the available network bandwidth and maximum hard drive throughput.) We demonstrate that strong integrity protection in the cloud can be achieved with minimal performance degradation.
Last updated:  2011-11-02
A Single-Key Attack on 6-Round KASUMI
Teruo Saito
KASUMI is a block cipher used in the confidentiality and integrity algorithms of the 3GPP (3rd Generation Partnership Project) mobile communications. In 2010, a related-key attack on full KASUMI was reported. The attack was very powerful and worked with practical complexity. However, the attack was not a direct threat to full KASUMI because of the impractical assumptions it relies on. Therefore, this paper concentrates on single-key attacks, which are considered practical attacks. This paper proposes a single-key attack on 6-round KASUMI. The attack, which applies a technique of higher order differential attacks, requires 2^{60.8} data and 2^{65.4} encryption time. To the best of our knowledge, the attack reported in this paper is the most powerful single-key attack against reduced-round KASUMI in terms of time complexity.
Last updated:  2012-04-25
Revocable Identity-Based Encryption from Lattices
Jie Chen, Hoon Wei Lim, San Ling, Huaxiong Wang, Khoa Nguyen
In this paper, we present an identity-based encryption (IBE) scheme from lattices with efficient key revocation. We adopt multiple trapdoors from the Agrawal-Boneh-Boyen and Gentry-Peikert-Vaikuntanathan lattice IBE schemes to realize key revocation, which in turn makes use of a binary-tree data structure. Using our scheme, key update requires complexity logarithmic in the maximal number of users and linear in the number of revoked users for the relevant key authority. We prove that our scheme is selectively secure in the standard model under the LWE assumption, which is as hard as the worst-case problem of approximating short vectors on arbitrary lattices. Moreover, our key revocation techniques from lattices can be applied to obtain revocable functional encryption schemes in a similar setting.
Last updated:  2012-01-05
Randomness Extraction in finite fields $\mathbb{F}_{p^{n}}$
Abdoul Aziz Ciss
Many techniques for randomness extraction over finite fields were proposed by various authors, such as Fouque \emph{et al.} and Canetti \emph{et al.}. At Eurocrypt '09, these previous works were improved by Chevalier \emph{et al.} over a finite field $\mathbb{F}_{p}$, where $p$ is a prime. However, their papers do not study the case where the field is not prime, such as binary fields. In this paper, we present a deterministic extractor for a multiplicative subgroup of $\mathbb{F}^{*}_{p^{n}}$, where $p$ is a prime. In particular, we show that the $k$ first $\mathbb{F}_2$-coefficients of a random element in a subgroup of $\mathbb{F}^{*}_{2^n}$ are indistinguishable from a random bit-string of the same length. Hence, under the Decisional Diffie-Hellman assumption over binary fields, one can deterministically derive a uniformly random bit-string from a Diffie-Hellman key exchange in the standard model. Over $\mathbb{F}_{p}$, Chevalier \emph{et al.} use the Polya-Vinogradov inequality to bound incomplete character sums, whereas over $\mathbb{F}^{*}_{p^{n}}$ we use the Winterhof inequality. Our proposition is a good deterministic extractor even if the length of its output is less than what one can obtain with the leftover hash lemma and universal hash functions. Our extractor can be used in any cryptographic protocol or encryption scheme.
Last updated:  2012-01-18
Standard Security Does Not Imply Security Against Selective-Opening
Mihir Bellare, Rafael Dowsley, Brent Waters, Scott Yilek
We show that no commitment scheme that is hiding and binding according to the standard definition is semantically-secure under selective opening attack (SOA), resolving a long-standing and fundamental open question about the power of SOAs. We also obtain the first examples of IND-CPA encryption schemes that are not secure under SOA, both for sender corruptions where encryption coins are revealed and receiver corruptions where decryption keys are revealed. These results assume only the existence of collision-resistant hash functions.
Last updated:  2011-11-02
On a new generalization of Huff curves
Abdoul Aziz Ciss, Djiby Sow
Recently two kinds of Huff curves were introduced as elliptic curve models and their arithmetic was studied. It was also shown that they are suitable for cryptographic use, like Montgomery curves, Koblitz curves (in Weierstrass form), and Edwards curves. In this work, we introduce the new generalized Huff curves $ax(y^{2} -c) = by(x^{2}-d)$ with $abcd(a^{2}c-b^{2}d)\neq 0$, which contain the generalized Huff model $ax(y^{2}- d) = by(x^{2}-d)$ with $abd(a^{2}-b^{2})\neq 0$ of Joye-Tibouchi-Vergnaud and the generalized Huff curves $x(ay^{2} -1) =y(bx^{2}-1)$ with $ab(a-b)\neq 0$ of Wu-Feng as special cases. The addition law in projective coordinates is as fast as in the previous particular cases. More generally, all good properties of the previous particular Huff curves, including completeness and independence of two of the four curve parameters, extend to the new generalized Huff curves. We verified that the method of Joye-Tibouchi-Vergnaud for computing pairings can be generalized to the new curves.
Last updated:  2011-11-02
Clockwise Collision Analysis -- Overlooked Side-Channel Leakage Inside Your Measurements
Yang Li, Daisuke Nakatsu, Qi Li, Kazuo Ohta, Kazuo Sakiyama
This paper presents a new side-channel attack technique called {\it clockwise collision} analysis. For cryptographic implementations using a synchronous digital circuit with a loop architecture, the signal transitions, and hence the side-channel leakage, relate not only to the input data in the current cycle, but also to the state one cycle before. The clockwise collision utilizes the fact that little computation is required in the second clock cycle when the inputs for two consecutive clock cycles are the same. In contrast, the previously known {\it computational collision} utilizes the fact that the computation of the same input value leads to similar side-channel leakage. By experimentation, we demonstrate the feasibility of this novel clockwise collision analysis both by injecting faults and by analyzing the power consumption.
Last updated:  2011-11-02
Accelerating the Scalar Multiplication on Genus 2 Hyperelliptic Curve Cryptosystems
Balasingham Balamohan
Elliptic Curve Cryptography (ECC) was independently introduced by Koblitz and Miller in the eighties. ECC requires shorter sizes of underlying finite fields in comparison to other public key cryptosystems such as RSA, introduced by Rivest, Shamir and Adleman. Hyperelliptic curves, a generalization of elliptic curves, require decreasing field size as genus increases. Hyperelliptic curves of genus g achieve security equivalent to ECC with field size 1/g times the size of the ECC field for g <= 4. Recently, a lot of research has focused on increasing the efficiency of hyperelliptic curve cryptosystems (HECC). The most time consuming operation in HECC is the scalar multiplication. At present, scalar multiplication on HECC over prime fields underperforms in terms of computational time compared to ECC of equivalent security. This thesis focuses on optimizing HECC scalar multiplication at the point arithmetic level. At the point arithmetic level we obtain more efficient doubling and mixed addition operations to decrease the computational time of scalar multiplication using binary expansions of scalars. In addition, we introduce tripling operations for the Jacobians of hyperelliptic curves to make use of multibase representations of scalars, which are being used effectively in ECC. We also develop double-add operations for semi-affine coordinates and Lange's new coordinates. We use these double-add operations to improve the computational cost of precomputation for semi-affine coordinates and that of the more important main phase of scalar multiplication for semi-affine coordinates and Lange's new coordinates. We derive a special addition to improve the cost of precomputation for Lange's new coordinates and projective coordinates.
Last updated:  2011-11-02
An Efficient Protocol for the Commit-Prove-Fair-Open functionality
Ou Ruan, Cai Fu, Guohua Cui
In TCC 2006, Garay et al. introduced the notion of the "commit-prove-fair-open" functionality in order to achieve what they called "resource fairness" of secure multi-party computation (MPC) with corrupted majority. The protocol realizing this notion of fairness follows the gradual release approach; further, it can be proven secure in the simulation paradigm and enjoys composition properties. In this paper, we show a more efficient resource-fair protocol for the FCPFO functionality, based on a new variant of Garay et al.'s time-lines and the simplified Camenisch-Shoup (sCS) commitment, whose communication and computation complexity are less than 1/5 of those of the Garay et al. construction. In addition, our new protocol allows commitment to the value 0, which is not possible in the plain Garay et al. construction.
Last updated:  2013-07-02
Efficient Multicast Key Distribution Using HOWP-Based Dynamic Group Access Structures
Jing Liu, Qiong Huang, Bo Yang, Yang Zhang
When assigning personal keys, stateful multicast key distribution (MKD) protocols usually rely on some type of dynamic group access structure which helps achieve a better tradeoff among storage, communication, and computation overheads. However, there exist some stateful MKD protocols whose personal key assignments are based on two static group access structures called Dual Hash Chain (DHC) and Binary Hash Tree (BHT). We introduce two new types of group access structures called Dual Homomorphic One-way Function Chain (D-HOFC) and Top-Down Homomorphic One-way Function Tree (TD-HOFT). Both can be regarded as dynamic counterparts of DHC and BHT, respectively. Our research motivation is to investigate what benefits these two new dynamic structures bring to MKD protocols compared with their static counterparts. Using D-HOFC, we propose a time-based MKD protocol that counters the rejoining member attack on a DHC-based protocol, and a stateful user-based MKD protocol that has a lower computational overhead for the Group Controller (GC) than the DHC-based protocol. Using TD-HOFT, we design a stateful user-based MKD protocol that outperforms the original EKT protocol. Performance comparisons and experiment results show that our protocols based on dynamic structures have their own advantages compared with those based on the corresponding static counterparts.
Last updated:  2012-09-06
Exclusive Key Based Group Rekeying Protocols
Jing Liu, Changji Wang
In this paper, we first clarify the meaning of research on 1-resilient group rekeying protocols by showing that they are actually building blocks for constructing hybrid group rekeying protocols with tunable collusion-bandwidth tradeoffs. We then construct secure and efficient 1-resilient group rekeying protocols based on the idea of exclusive keys. Given a group of users, an exclusive key for a user i is a key shared by all users in this group except i, and thus can be used to exclude i from this group effectively. We first present three personal key assignment algorithms based on this idea. The first is based on independent exclusive keys, and thus has a great storage requirement. The other two are based on functionally-dependent exclusive keys, and thus greatly reduce the storage requirement. Employing each personal key assignment algorithm, we propose both a stateful group rekeying protocol and a stateless one. We prove that all six protocols are secure against single-user attacks (i.e., 1-resilient) in a symbolic security model. Performance comparisons between our protocols and related ones show that each of the proposed Protocols III and III' is the best in its own class.
Last updated:  2013-09-10
Towards Efficient Provable Data Possession in Cloud Storage
Jia Xu, Ee-Chien Chang, Jianying Zhou
Provable Data Possession (PDP) allows a data owner to periodically and remotely audit their data stored in cloud storage, without retrieving the file and without keeping a local copy. Ateniese~\emph{et al.} (CCS 07) proposed the first PDP scheme, which is very efficient in communication and storage. However, their scheme requires many group exponentiation operations: in the setup, one group exponentiation is required to generate a tag for each data block; in each verification, (equivalently) $(m + \ell)$ group exponentiations are required to generate a proof, where $m$ is the size of a data block and $\ell$ is the number of blocks accessed during a verification. This paper proposes an efficient PDP scheme. Compared to Ateniese~\emph{et al.} (CCS 07), the proposed scheme has the same communication and storage complexities, but is more efficient in computation: in the setup, no group exponentiations are required; in each verification, only (equivalently) $m$ group exponentiations are required to generate a proof. The security of the proposed scheme is proved under the Knowledge of Exponent Assumption and the Factorization Assumption.
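To illustrate where the per-block exponentiation in the Ateniese et al. tagging step comes from, here is a toy RSA-style sketch (tiny parameters and a stand-in hash, purely illustrative; this is not the paper's improved scheme):

```python
# Toy parameters only; a real deployment uses a full-size RSA modulus
# and a cryptographic hash into Z_N^*.
p, q = 1009, 1013
N, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)          # modular inverse; Python 3.8+
g = 2

def h(i):
    return (i * i + 1) % N   # stand-in for a hash of the block index

def tag(i, m):
    # One group exponentiation (the outer pow) per data block:
    # T_i = (h(i) * g^{m_i})^d mod N
    return pow(h(i) * pow(g, m, N) % N, d, N)

def check(i, m, T):
    # The verifier recomputes h(i) * g^m and checks it against T^e.
    return pow(T, e, N) == h(i) * pow(g, m, N) % N
```

Generating tags for a file of n blocks thus costs n such exponentiations in the setup, which is the cost this paper's scheme eliminates.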
Last updated:  2011-10-25
A New Class of Multivariate Public Key Cryptosystems Constructed Based on Random Pseudo Cyclic Codes, K(XIII)SE(2)PKC, Realizing Coding Rate of Exactly 1.0
Masao Kasahara
In this paper, we present a new class of multivariate public-key cryptosystems, K(XIII)SE(2)PKC, realizing a coding rate of exactly 1.0, based on random pseudo cyclic codes. The K(XIII)SE(2)PKC is constructed on the basis of K(IX)SE(1)PKC, formerly presented by the author. We show that K(XIII)SE(2)PKC is secure against various attacks, including the attack based on Gröbner basis calculation (GB attack) and the rank attack.
Last updated:  2011-10-25
The ElGamal cryptosystem over circulant matrices
Ayan Mahalanobis
Can one use the discrete logarithm problem in matrix groups to build a better and secure cryptosystem? We argue that it is indeed the case. This makes the group of circulant matrices suitable and attractive for lightweight cryptography.
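As background on the underlying structure (an illustration, not the paper's construction): an n-by-n circulant matrix over Z_p is determined by its first row, and the product of two circulants is the circulant whose first row is the cyclic convolution of their first rows, i.e., multiplication in Z_p[x]/(x^n - 1):

```python
def circulant_mul(a, b, p):
    """First row of the product of the circulant matrices with first
    rows a and b over Z_p: cyclic convolution mod p."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % p
    return c

def circulant_pow(a, k, p):
    """Square-and-multiply in the circulant group; the discrete log
    problem here asks to recover k from a and a^k."""
    n = len(a)
    r = [1] + [0] * (n - 1)      # first row of the identity matrix
    while k:
        if k & 1:
            r = circulant_mul(r, a, p)
        a = circulant_mul(a, a, p)
        k >>= 1
    return r
```

The convolution view is what makes the group cheap to compute in, which is the "lightweight" appeal mentioned above.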
Last updated:  2011-12-01
Lower Bound on Covering Radius of Reed-Muller Codes in Set of Balanced Functions
Brajesh Kumar Singh, Sugata Gangopadhyay
In this paper, we derive a general lower bound on the covering radius, $\hat{\rho}(0, 2, n)$, of the Reed-Muller code $RM(2, n)$ in $R_{0, n}$, the set of balanced Boolean functions on $n$ variables, where $n = 2t + 1$ and $t$ is an odd prime satisfying one of the following conditions \begin{enumerate} \item[(i)] $ord_t(2) = t - 1$; \item[(ii)] $t=2s + 1$, $s$ is odd, and $ord_t(2) = s$. \end{enumerate} Further, it is proved that $\hat{\rho}(0, 2, 11) \geq 806$, which improves upon the bound obtained by Kurosawa et al. ({\em IEEE Trans. Inform. Theory}, vol. 50, no. 3, pp. 468-475, 2004).
Last updated:  2011-10-25
Degree of regularity for HFE-
Jintai Ding, Thorsten Kleinjung
In this paper, we prove a closed formula for the degree of regularity of the family of HFE- (HFE Minus) multivariate public key cryptosystems over a finite field of size $q$. The degree of regularity of the polynomial system derived from an HFE- system is less than or equal to \begin{eqnarray*} \frac{(q-1)(\lfloor \log_q(D-1)\rfloor +a)}2 +2 & & \text{if $q$ is even and $r+a$ is odd,} \\ \frac{(q-1)(\lfloor \log_q(D-1)\rfloor+a+1)}2 +2 & & \text{otherwise.} \end{eqnarray*} Here $q$ is the base field size, $D$ the degree of the HFE polynomial, $r=\lfloor \log_q(D-1)\rfloor +1$ and $a$ is the number of removed equations (Minus number). This allows us to present an estimate of the complexity of breaking the HFE Challenge 2: \vskip .1in \begin{itemize} \item the complexity to break the HFE Challenge 2 directly using algebraic solvers is about $2^{96}$. \end{itemize}
Last updated:  2011-10-25
Analysis of the Hamming Weight of the Extended wmbNAF
Ming Li, Ali Miri, Daming Zhu
Scalar multiplication is an important operation in elliptic curve cryptosystems (ECC). Algorithms for computing scalar multiplication are mostly based on binary expansions of scalars, such as the non-adjacent form (NAF) and wNAF (the sliding-window method). Representing scalars using more bases can speed up the scalar multiplication, as in mbNAF, wmbNAF and the extended wmbNAF, which were proposed by Longa and Miri in 2008. In this paper, we give a formal analysis of the Hamming weight of the extended wmbNAF method for scalar multiplication on general elliptic curves over large prime fields. The cost of this method is then compared with NAF and other double-base methods. The analysis shows that the most efficient algorithm is obtained using $(2,3,5)$NAF$_{1,1,0}$, which is 9.0% faster than the NAF method without extra storage requirements. Moreover, the recoding algorithm of the extended wmbNAF method is just as simple and fast as that of the NAF method.
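For reference (standard textbook material, not specific to this paper), the plain NAF recoding that the multibase variants generalize can be written as:

```python
def naf(k):
    """Non-adjacent form of k: signed digits in {-1, 0, 1} with no two
    adjacent nonzeros, least significant digit first."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)   # k = 1 mod 4 -> digit 1; k = 3 mod 4 -> -1
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits
```

On average about one third of NAF digits are nonzero, versus one half in plain binary, which is the source of the speedup in scalar multiplication (each nonzero digit costs a point addition or subtraction).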
Last updated:  2011-10-22
Single Layer Optical-scan Voting with Fully Distributed Trust
Aleksander Essex, Christian Henrich, Urs Hengartner
We present a new approach for cryptographic end-to-end verifiable optical-scan voting. Ours is the first that does not rely on a single point of trust to protect ballot secrecy while simultaneously offering a conventional single layer ballot form and unencrypted paper trail. We present two systems following this approach. The first system uses ballots with randomized confirmation codes and a physical in-person dispute resolution procedure. The second system improves upon the first by offering an informational dispute resolution procedure and a public paper audit trail through the use of self-blanking invisible ink confirmation codes. We then present a security analysis of the improved system.
Last updated:  2011-10-22
On the sparse subset sum problem from Gentry-Halevi's implementation of fully homomorphic encryption
Moon Sung Lee
In Gentry's fully homomorphic cryptosystem, a sparse subset sum problem is used and a large set is included in the public key. In the implementation of a variant of Gentry's scheme, Gentry and Halevi used a specific form of the sparse subset sum problem with geometric progressions to reduce the size of the public key. In this note, we show that their sparse subset sum challenges are rather easy given the aggressive choice of parameters. Our experiments show that even their large instance of the sparse subset sum problem could be solved within two days with probability about $44\%$. A more conservative parameter choice easily avoids our attack.
Last updated:  2012-04-05
Fully Homomorphic Encryption with Polylog Overhead
Craig Gentry, Shai Halevi, Nigel P. Smart
We show that homomorphic evaluation of (wide enough) arithmetic circuits can be accomplished with only polylogarithmic overhead. Namely, we present a construction of fully homomorphic encryption (FHE) schemes that for security parameter $\secparam$ can evaluate any width-$\Omega(\secparam)$ circuit with $t$ gates in time $t\cdot polylog(\secparam)$. To get low overhead, we use the recent batch homomorphic evaluation techniques of Smart-Vercauteren and Brakerski-Gentry-Vaikuntanathan, who showed that homomorphic operations can be applied to "packed" ciphertexts that encrypt vectors of plaintext elements. In this work, we introduce permuting/routing techniques to move plaintext elements across these vectors efficiently. Hence, we are able to implement general arithmetic circuits in a batched fashion without ever needing to "unpack" the plaintext vectors. We also introduce some other optimizations that can speed up homomorphic evaluation in certain cases. For example, we show how to use the Frobenius map to raise plaintext elements to powers of~$p$ at the "cost" of a linear operation.
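The Frobenius trick mentioned at the end relies on the map $x \mapsto x^p$ being linear over the base field. A minimal illustration in the toy field $GF(2^8)$ with the AES reduction polynomial (the paper's plaintext spaces are different; this only demonstrates the linearity):

```python
def gf256_mul(a, b):
    """Carry-less multiplication in GF(2^8) modulo x^8+x^4+x^3+x+1 (0x11b)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return r

def frobenius(a):
    """The Frobenius map x -> x^2 in characteristic 2."""
    return gf256_mul(a, a)

# In characteristic 2, (a + b)^2 = a^2 + b^2 (addition is XOR), so
# frobenius is GF(2)-linear on the field seen as a vector space.
```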
Last updated:  2012-01-06
Cryptographic Hash Functions: Recent Design Trends and Security Notions
Uncategorized
Saif Al-Kuwari, James H. Davenport, Russell J. Bradford
Show abstract
Uncategorized
Recent years have witnessed exceptional research interest in cryptographic hash functions, especially after the popular attacks against MD5 and SHA-1 in 2005. In 2007, the U.S. National Institute of Standards and Technology (NIST) significantly boosted this interest by announcing a public competition to select the next hash function standard, to be named SHA-3. Not surprisingly, the hash function literature has since been growing at an extremely fast pace. In this paper, we provide a comprehensive, up-to-date discussion of the current state of the art of cryptographic hash function security and design. We first discuss the various security properties and notions of hash functions, then give an overview of how (and why) hash functions evolved over the years, giving rise to the current diverse hash function design approaches.
Last updated:  2011-10-22
Private-key Symbolic Encryption
Uncategorized
N. Ahmed, C. D. Jensen, E. Zenner
Show abstract
Uncategorized
Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security analysis aiming at the automated verification of network protocols. The naive use of symbolic encryption, however, may unnecessarily require an expensive construction: an arbitrary-length encryption scheme that is private and non-malleable in an adaptive CCA-CPA setting. Most of the time, such assumptions remain hidden, and symbolic encryption is instead instantiated with a seemingly ``good'' cryptographic encryption scheme, such as AES in CBC mode. As an illustration of this problem, we first report new attacks on ECB- and CBC-based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. We then present a few symbolic encryption schemes along with their cryptographic semantics, and prove the hierarchical relations between the proposed schemes from both cryptographic and formal perspectives. These symbolic schemes can be seamlessly used in many existing formal security models.
Last updated:  2011-10-25
On the Security of RFID Anti Cloning Security Protocol (ACSP)
Masoumeh Safkhani, Nasour Bagheri, Majid Naderi
Recently, Qian et al. have proposed a new attack on RFID systems, called the counting attack, in which the attacker aims only to estimate the number of tagged objects instead of stealing the tags' private information. They state that most existing RFID mutual authentication protocols are vulnerable to this attack. To defend against the counting attack, they propose a novel Anti-Counting Security Protocol called ACSP. The designers of ACSP claim that their protocol is resistant to the counting attack as well as the other known RFID security threats. However, in this paper we present the following efficient attacks against this protocol: 1) a tag impersonation attack, with success probability 1 and a complexity of two runs of the protocol; 2) two single-tag de-synchronization attacks, each with success probability 1 and a complexity of at most two runs of the protocol; 3) a group de-synchronization attack, which de-synchronizes all tags in range at once, with success probability 1 and a complexity of one run of the protocol; 4) a traceability attack, in which the adversary's advantage is almost 0.5, nearly the maximum possible advantage for an adversary in the same model, with a complexity of three runs of the protocol.
Last updated:  2011-10-17
A Group Testing Approach to Improved Corruption Localizing Hashing
Annalisa De Bonis, Giovanni Di Crescenzo
Efficient detection of integrity violations is crucial for the reliability of both data at rest and data in transit. While ideally one would want to find all changes in the input data, in practice this capability may be expensive, and one may be content with localizing, or finding a superset of, any changes. Corruption-localizing hashing \cite{esorics09} is a cryptographic primitive that enhances collision-intractable hash functions, improving the detection property of these functions into a suitable localization property. Corruption-localizing hash schemes output a superset of the locations of the changes, where the accuracy of this superset with respect to the actual changes is a metric of special interest, called the localization factor. In this paper we consider the problem of designing corruption-localizing hash schemes with reduced localization factor. In \cite{cocoon11}, combinatorial group testing techniques were exploited to construct the first corruption-localizing hash scheme with {\em constant} localization factor and sublinear storage and time complexity. In this paper we continue the approach of \cite{cocoon11} and present schemes that improve on previous constructions in that they achieve an arbitrarily small localization factor while ensuring the same time and storage complexity as the schemes in \cite{cocoon11}.
Last updated:  2012-02-15
A Domain-Specific Language for Computing on Encrypted Data
Alex Bain, John Mitchell, Rahul Sharma, Deian Stefan, Joe Zimmerman
In cloud computing, a client may request computation on confidential data that is sent to untrusted servers. While homomorphic encryption and secure multiparty computation provide building blocks for secure computation, software must be properly structured to preserve confidentiality. Using a general definition of \emph{secure execution platform}, we propose a single Haskell-based domain-specific language for cryptographic cloud computing and prove correctness and confidentiality for two representative and distinctly different implementations of the same programming language. The secret sharing execution platform provides information-theoretic security against colluding servers. The homomorphic encryption execution platform requires only one server, but has limited efficiency, and provides secrecy against a computationally-bounded adversary. Experiments with our implementation suggest promising computational feasibility, as cryptography improves, and show how code can be developed uniformly for a variety of secure cloud platforms, without explicitly programming separate clients and servers.
Last updated:  2011-11-15
Randomized Secure Two-Party Computation for Modular Conversion, Zero Test, Comparison, MOD and Exponentiation
Ching-Hua Yu, Bo-Yin Yang
When secure arithmetic is required, computation based on secure multiplication ($\MULT$) is much more efficient than computation based on secure boolean circuits. However, a typical application can also require other building blocks, such as comparison, exponentiation and the modulo (MOD) operation. Secure solutions for these functions proposed in the literature rely on bit-decomposition or other bit-oriented methods, which require $O(\ell)$ $\MULT$s for $\ell$-bit inputs. In the absence of a known bit-length independent solution, the complexity of the whole computation is often dominated by these non-arithmetic functions. To resolve the above problem, we start with a general modular conversion, which converts secret shares over distinct moduli. We propose a probabilistically correct protocol for this conversion with a complexity that is independent of $\ell$. Then, we show that when these non-arithmetic functions are based on secure modular conversions, they can be computed in constant rounds and $O(k)$ $\MULT$s, where $k$ is a parameter for an error rate of $2^{-\Omega(k)}$. To make our protocols actively secure, we apply $O(k)$ basic zero-knowledge proofs, which cost at most $O(k)$ exponentiations, $O(1)$ rounds and $O(k(\ell+\kappa))$ communication bits, where $\kappa$ is the security parameter used in the commitment scheme.
Last updated:  2016-07-04
Instantiability of RSA-OAEP under Chosen-Plaintext Attack
Eike Kiltz, Adam O'Neill, Adam Smith
We show that the widely deployed RSA-OAEP encryption scheme of Bellare and Rogaway (Eurocrypt 1994), which combines RSA with two rounds of an underlying Feistel network whose hash ({\em i.e.}, round) functions are modeled as random oracles, meets indistinguishability under chosen-plaintext attack (IND-CPA) in the {\em standard model} based on simple, non-interactive, and non-interdependent assumptions on RSA and the hash functions. To prove this, we first give a result on a more general notion called ``padding-based'' encryption, saying that such a scheme is IND-CPA if (1) its underlying padding transform satisfies a ``fooling" condition against small-range distinguishers on a class of high-entropy input distributions, and (2) its trapdoor permutation is sufficiently {\em lossy} as defined by Peikert and Waters (STOC 2008). We then show that the first round of OAEP satisfies condition (1) if its hash function is $t$-wise independent for $t$ roughly proportional to the allowed message length. We clarify that this result requires the hash function to be keyed, and for its key to be included in the public-key of RSA-OAEP. We also show that RSA satisfies condition (2) under the $\Phi$-Hiding Assumption of Cachin \emph{et al.}~(Eurocrypt 1999). This is the first {\em positive} result about the instantiability of RSA-OAEP. In particular, it increases confidence that chosen-plaintext attacks are unlikely to be found against the scheme. In contrast, RSA-OAEP's predecessor in PKCS \#1 v1.5 was shown to be vulnerable to such attacks by Coron {\em et al}.~(Eurocrypt 2000).
Last updated:  2011-10-11
Improved Attacks on Full GOST
Itai Dinur, Orr Dunkelman, Adi Shamir
GOST is a well-known block cipher which was developed in the Soviet Union during the 1970's as an alternative to the US-developed DES. In spite of considerable cryptanalytic effort, until very recently there were no published single-key attacks against its full 32-round version which were faster than the $2^{256}$ time complexity of exhaustive search. In February 2011, Isobe used the previously discovered reflection property in a novel way in order to develop the first such attack, which requires $2^{32}$ data, $2^{64}$ memory and $2^{224}$ time. Shortly afterwards, Courtois and Misztal used a different technique to attack the full GOST using $2^{64}$ data, $2^{64}$ memory and $2^{226}$ time. In this paper we introduce a new fixed point property and a better way to attack 8-round GOST in order to find improved attacks on full GOST: given $2^{32}$ data we can reduce the memory complexity from an impractical $2^{64}$ to a practical $2^{36}$ without changing the $2^{224}$ time complexity, and given $2^{64}$ data we can simultaneously reduce the time complexity to $2^{192}$ and the memory complexity to $2^{36}$.
Last updated:  2011-10-11
An Improved Trace Driven Instruction Cache Timing Attack on RSA
Chen Cai-Sen, Wang Tao, Chen Xiao-Cen, Zhou Ping
Previous I-cache timing attacks on RSA, which exploit the instruction path of a cipher, were mostly proofs of concept, and they are harder to put into practice than D-cache timing attacks. We propose a new trace-driven timing attack model based on spying on the whole I-cache. We also advance an improved algorithm for analyzing the exponent that exploits the characteristic of the window size; it further reduces the search space of the key bits compared to earlier work and provides an error detection mechanism to detect some erroneous decisions about the operation sequence. We implemented the attack on the RSA implementation of OpenSSL in a practical environment, demonstrating that the feasibility and effectiveness of I-cache timing attacks can be improved.
Last updated:  2014-09-30
GF(2^n) redundant representation using matrix embedding
Uncategorized
Yongjia Wang, Xi Xiong, Haining Fan
Show abstract
Uncategorized
By embedding a Toeplitz matrix-vector product (MVP) of dimension $n$ into a circulant MVP of dimension $N=2n+\delta -1$, where $\delta$ can be any nonnegative integer, we present a $GF(2^n)$ multiplication algorithm. This algorithm leads to a new redundant representation, and it has two merits: 1. The flexible choice of $\delta$ makes it possible to select a proper $N$ such that the multiplication operation in the ring $GF(2)[x]/(x^N+1)$ can be performed using some asymptotically faster algorithms, e.g., the Fast Fourier Transform (FFT)-based multiplication algorithm; 2. The redundant degrees, defined as $N/n$, are smaller than those of most previous $GF(2^n)$ redundant representations, and in fact they are approximately equal to 2 for all applicable cases.
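The Toeplitz-into-circulant embedding can be sketched concretely for the $\delta = 0$ case, $N = 2n - 1$ (a self-contained illustration over GF(2); the cyclic convolution below is exactly multiplication in $GF(2)[x]/(x^N+1)$, which is where FFT-based algorithms would apply):

```python
import random

n = 4
N = 2 * n - 1   # the delta = 0 case of N = 2n + delta - 1

# Toeplitz matrix T over GF(2), defined by T[i][j] = a[i-j].
a = {k: random.randint(0, 1) for k in range(-(n - 1), n)}
x = [random.randint(0, 1) for _ in range(n)]

# Direct Toeplitz matrix-vector product mod 2.
t = [sum(a[i - j] * x[j] for j in range(n)) % 2 for i in range(n)]

# First column of the circulant: [a_0 .. a_{n-1}, a_{-(n-1)} .. a_{-1}].
c = [a[k] if k < n else a[k - N] for k in range(N)]
v = x + [0] * (N - n)          # zero-pad the input vector to length N

# Circulant MVP = cyclic convolution mod 2 (i.e. product mod x^N + 1).
w = [sum(c[(i - j) % N] * v[j] for j in range(N)) % 2 for i in range(N)]

assert w[:n] == t   # the first n coordinates recover the Toeplitz MVP
```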
Last updated:  2011-10-11
On the Role of Expander Graphs in Key Predistribution Schemes for Wireless Sensor Networks
Michelle Kendall, Keith Martin
Providing security for a wireless sensor network composed of small sensor nodes with limited battery power and memory can be a non-trivial task. A variety of key predistribution schemes have been proposed which allocate symmetric keys to the sensor nodes before deployment. In this paper we examine the role of expander graphs in key predistribution schemes for wireless sensor networks. Roughly speaking, a graph has good expansion if every `small' subset of vertices has a `large' neighbourhood, and intuitively, expansion is a desirable property for graphs of networks. It has been claimed that good expansion in the product graph is necessary for `optimal' networks. We demonstrate flaws in this claim, argue instead that good expansion is desirable in the intersection graph, and discuss how this can be achieved. We then consider key predistribution schemes based on expander graph constructions and compare them to other schemes in the literature. Finally, we propose the use of expansion and other graph-theoretical techniques as metrics for assessing key predistribution schemes and their resulting wireless sensor networks.
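As a concrete illustration of the expansion notion used here, the vertex expansion of a small graph can be computed by brute force (exponential in the number of vertices, so only viable for toy examples; the function name is ours):

```python
from itertools import combinations

def vertex_expansion(vertices, edges):
    """Brute-force vertex expansion: the minimum, over nonempty subsets
    S with |S| <= n/2, of |N(S) \\ S| / |S|, where N(S) is the union of
    the neighbourhoods of the vertices in S."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = float('inf')
    n = len(adj)
    for size in range(1, n // 2 + 1):
        for S in combinations(vertices, size):
            boundary = set().union(*(adj[v] for v in S)) - set(S)
            best = min(best, len(boundary) / len(S))
    return best

# A 6-cycle expands poorly: an arc of 3 vertices has only 2 outside
# neighbours, so its expansion is 2/3.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
```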
Last updated:  2011-10-11
On the security models for certificateless signature schemes achieving level 3 security
Yu-Chi Chen, Gwoboa Horng
Public key cryptography has found many applications in our modern society. To guarantee the authenticity of public keys, we need a trusted third party (TTP). In 1991, Girault defined three trust levels for a TTP. The higher the trust level of the TTP, the higher the security level of the cryptographic scheme. In 2007, Hu et al. proposed a generic construction of a certificateless signature scheme, together with a security model, achieving Girault’s level 3 security. In 2011, Fan et al. presented a certificateless short signature scheme based on pairings. In this paper, we consider in depth the security requirements of certificateless signature schemes and show that previous models are inappropriate for achieving the desired level of security. We also present a new security model for a certificateless signature scheme to achieve level 3 security.
Last updated:  2019-03-31
Publicly Verifiable Proofs of Sequential Work
Mohammad Mahmoody, Tal Moran, Salil Vadhan
We construct a publicly verifiable protocol for proving computational work based on collision-resistant hash functions and a new plausible complexity assumption regarding the existence of "inherently sequential" hash functions. Our protocol is based on a novel construction of time-lock puzzles. Given a sampled "puzzle" $P \gets D_n$, where $n$ is the security parameter and $D_n$ is the distribution of the puzzles, a corresponding "solution" can be generated using $N$ evaluations of the sequential hash function, where $N>n$ is another parameter, while any feasible adversarial strategy for generating valid solutions must take at least as much time as $\Omega(N)$ *sequential* evaluations of the hash function after receiving $P$. Thus, valid solutions constitute a "proof" that $\Omega(N)$ parallel time elapsed since $P$ was received. Solutions can be publicly and efficiently verified in time $\poly(n) \cdot \polylog(N)$. Applications of these "time-lock puzzles" include noninteractive timestamping of documents (when the distribution over the possible documents corresponds to the puzzle distribution $D_n$) and universally verifiable CPU benchmarks. Our construction is secure in the standard model under complexity assumptions (collision-resistant hash functions and inherently sequential hash functions), and makes black-box use of the underlying primitives. Consequently, the corresponding construction in the random oracle model is secure unconditionally. Moreover, as it is a public-coin protocol, it can be made non-interactive in the random oracle model using the Fiat-Shamir Heuristic. Our construction makes a novel use of ``depth-robust'' directed acyclic graphs---ones whose depth remains large even after removing a constant fraction of vertices---which were previously studied for the purpose of complexity lower bounds. 
The construction bypasses a recent negative result of Mahmoody, Moran, and Vadhan (CRYPTO `11) for time-lock puzzles in the random oracle model, which showed that it is impossible to have time-lock puzzles like ours in the random oracle model if the puzzle generator also computes a solution together with the puzzle.
Last updated:  2012-01-16
Recyclable PUFs: Logically Reconfigurable PUFs
Stefan Katzenbeisser, Ünal Kocabas, Vincent van der Leest, Ahmad-Reza Sadeghi, Geert-Jan Schrijen, Heike Schröder, Christian Wachsmann
Physically Unclonable Functions (PUFs) are security primitives that exploit intrinsic random physical variations of hardware components. In recent years, many security solutions based on PUFs have been proposed, including identification/authentication schemes, key storage and hardware-entangled cryptography. Existing PUF instantiations typically exhibit a static challenge/response behavior, while many practical applications would benefit from reconfigurable PUFs. Examples include the revocation or update of “secrets” in PUF-based key storage or cryptographic primitives based on PUFs. In this paper, we present the concept of Logically Reconfigurable PUFs (LR-PUFs) that allow changing the challenge/response behavior without physically replacing or modifying the underlying PUF. We present two efficient LR-PUF constructions and evaluate their performance and security. In this context, we introduce a formal security model for LR-PUFs. Finally, we discuss several practical applications of LR-PUFs focusing on lightweight solutions for resource-constrained embedded devices, in particular RFIDs.
Last updated:  2011-10-11
Security Evaluation against Differential Cryptanalysis for Block Cipher Structures
Uncategorized
Shengbao Wu, Mingsheng Wang
Show abstract
Uncategorized
Estimating immunity against differential and linear cryptanalysis is essential in designing secure block ciphers. A practical measure to achieve it is to find the minimal number of active S-boxes, or a lower bound for this minimal number. In this paper, we provide a general algorithm using integer programming, which not only can estimate a good lower bound on the minimal number of differentially active S-boxes for various block cipher structures, but also provides an efficient way to select new structures with good properties against differential cryptanalysis. Experimental results for the Feistel, CAST256, SMS4, CLEFIA and Generalized Feistel structures indicate that the bounds obtained by our algorithm are the tightest except for a few rounds of the SMS4 structure. Then, for the first time, bounds on the number of differentially active S-boxes for the MISTY1, Skipjack, MARS and Four-cell structures are illustrated with the application of our algorithm. Finally, our algorithm is used to find four new structures with good properties against differential cryptanalysis. Security evaluation against linear cryptanalysis can be performed similarly with our algorithm by considering dual structures.
Last updated:  2011-10-12
A New Distinguisher for CubeHash-8/b and CubeHash-15/b Compression Functions
Uncategorized
Javad Alizadeh, Abdolrasoul Mirghadri
Show abstract
Uncategorized
CubeHash, designed by Bernstein, is one of the round 2 candidates of the public SHA-3 competition hosted by NIST. In this paper we find a new distinguisher that distinguishes the CubeHash compression function from a random function. The distinguisher is based on rotational analysis, which was formally introduced by Khovratovich and Nikolic. In order to use this technique, we need to compute the probability that the four swap functions in the CubeHash round function preserve the rotational property for any input pair. We compute these probabilities and find a new distinguisher that distinguishes the CubeHash-8/b and CubeHash-15/b compression functions from a random function with probability greater than and , respectively. To the best of our knowledge, this is the first distinguisher for the CubeHash compression function with more than 14 rounds.
Last updated:  2011-10-11
1-Resilient Boolean Function with Optimal Algebraic Immunity
Uncategorized
Qingfang Jin, Zhuojun Liu, Baofeng Wu
Show abstract
Uncategorized
In this paper, we propose a class of $2k$-variable Boolean functions which have optimal algebraic degree and high nonlinearity, and are 1-resilient. These functions have optimal algebraic immunity when $k > 2$ and $u = -2^l$, $0 \le l < k$. Based on a general combinatorial conjecture, the algebraic immunity of these functions is optimal when $k > 2$ and $u = 2^l$, $0 \le l < k$. If the general combinatorial conjecture and a new assumption are both true, the algebraic immunity of our functions is also optimal when $k > 2$ for the other values of $u$.
Last updated:  2011-10-11
On the security of MQ_DRBG
V. O. Drelikhov, G. B. Marshalko, A. V. Pokrovskiy
MQ_DRBG is a pseudorandom bit generator proposed for international standardization by the French national organization for standardization (AFNOR). It makes use of a specific instantiation of a one-way function $S:\ftwo^n\rightarrow \ftwo^{n+r}$ based on quadratic multivariate polynomials. We describe two methods for constructing a function $S$ that satisfies the requirements of the proposed draft but has a lower security level.
Last updated:  2011-10-11
The Single Cycle T-functions
Uncategorized
Zhaopeng Dai, Zhuojun Liu
Show abstract
Uncategorized
In this paper the single cycle T-functions are studied. Making use of the explicit formulas for the sum and product of 2-adic integers, we present necessary and sufficient conditions for the generalized polynomial $\widetilde{p(x)} = a_{0} \substack {+ \\ \oplus} a_{1}x \substack {+ \\ \oplus} \cdots \substack {+ \\ \oplus} a_{d}x^{d} (\mbox{mod} \, 2^{n})$ to be a single cycle T-function. Furthermore, for any given generalized polynomial, we can deduce expressions in its coefficients that determine whether or not it is single cycle.
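A classic concrete example of a single cycle T-function, due to Klimov and Shamir (not one of the generalized polynomials studied in this paper), is $f(x) = x + (x^2 \vee 5) \bmod 2^n$; its single-cycle property can be checked directly for small $n$:

```python
def f(x, n):
    """Klimov-Shamir single cycle T-function: x + (x^2 OR 5) mod 2^n."""
    mask = (1 << n) - 1
    return (x + ((x * x | 5) & mask)) & mask

def cycle_length(n, start=0):
    """Length of the orbit of `start` under f modulo 2^n; for a single
    cycle T-function this is exactly 2^n for every starting point."""
    x = f(start, n)
    length = 1
    while x != start:
        x = f(x, n)
        length += 1
    return length
```

For $n=3$ the orbit of 0 is 0, 5, 2, 7, 4, 1, 6, 3 and back to 0, i.e. all $2^3$ residues in one cycle.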
Last updated:  2011-10-11
Hidden Vector Encryption Fully Secure Against Unrestricted Queries
Angelo De Caro, Vincenzo Iovino, Giuseppe Persiano
Predicate encryption is an important cryptographic primitive (see \cite{BDOP04,BoWa07,Goyal06,KaSaWa08}) that enables fine-grained control on the decryption keys. Roughly speaking, in a predicate encryption scheme the owner of the master secret key $\MSK$ can derive secret key $\SK_P$, for any predicate $P$ from a specified class of predicates $\mathbb{P}$. In encrypting a message $M$, the sender can specify an {\em attribute} vector $\x$ and the resulting ciphertext $\tilde X$ can be decrypted only by using keys $\SK_P$ such that $P(\x)=1$. Our main contribution is the {\em first} construction of a predicate encryption scheme that can be proved {\em fully} secure against {\em unrestricted} queries by probabilistic polynomial-time adversaries under non-interactive constant sized (that is, independent of the length $\ell$ of the attribute vectors) hardness assumptions on bilinear groups of composite order. Specifically, we consider {\em hidden vector encryption} (HVE in short), a notable case of predicate encryption introduced by Boneh and Waters \cite{BoWa07} and further developed in \cite{ShWa08, IoPe08, SLNHJ10}. In a HVE scheme, the ciphertext attributes are vectors $\x=\langle x_1,\ldots,x_\ell\rangle$ of length $\ell$ over alphabet $\Sigma$, keys are associated with vectors $\y=\langle y_1,\ldots,y_\ell\rangle$ of length $\ell$ over alphabet $\Sigma\cup\{\star\}$ and we consider the $\Match(\x,\y)$ predicate which is true if and only if, for all $i$, $y_i\ne\star$ implies $x_i=y_i$. Previous constructions restricted the proof of security to adversaries that could ask only {\em non-matching} queries; that is, for challenge attribute vectors $\x_0$ and $\x_1$, the adversary could ask only for keys of vectors $\y$ for which $\Match(\x_0,\y)=\Match(\x_1,\y)=$ false. 
Our proof employs the dual system methodology of Waters \cite{Waters09}, which gave one of the first fully secure constructions in this area, blended with a careful design of intermediate security games that take into account the relationship between challenge ciphertexts and key queries.
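The $\Match$ predicate itself is straightforward; a minimal sketch (the wildcard symbol and names are ours):

```python
STAR = '*'   # the wildcard symbol, standing in for the paper's star

def match(x, y):
    """Match(x, y): true iff every non-wildcard position of the key
    vector y agrees with the attribute vector x."""
    assert len(x) == len(y)
    return all(yi == STAR or xi == yi for xi, yi in zip(x, y))
```

For example, the key vector "a*c" decrypts ciphertexts with attribute vector "abc" or "axc", but not "abd".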
Last updated:  2011-10-11
Public Key Cryptosystems Constructed Based on Random Pseudo Cyclic Codes, K(IX)SE(1)PKC, Realizing Coding Rate of Exactly 1.0
Masao Kasahara
In this paper, we present a new class of public-key cryptosystems, K(IX)SE(1)PKC, realizing a coding rate of exactly 1.0, based on random pseudo cyclic codes. We show that K(IX)SE(1)PKC is secure against various attacks, including the attack based on Gröbner basis calculation (the GB attack).
Last updated:  2012-06-24
Designing Privacy-preserving Smart Meters with Low-cost Microcontrollers
Andres Molina-Markham, George Danezis, Kevin Fu, Prashant Shenoy, David Irwin
Smart meters that track fine-grained electricity usage and implement sophisticated usage-based billing policies, e.g., based on time-of-use, are a key component of recent smart grid initiatives that aim to increase the electric grid's efficiency. A key impediment to widespread smart meter deployment is that fine-grained usage data indirectly reveals detailed information about consumer behavior, such as when occupants are home, when they have guests or their eating and sleeping patterns. Recent research proposes cryptographic solutions that enable sophisticated billing policies without leaking information. However, prior research does not measure the performance constraints of real-world smart meters, which use cheap ultra-low-power microcontrollers to lower deployment costs. In this paper, we explore the feasibility of designing privacy-preserving smart meters using low-cost microcontrollers and provide a general methodology for estimating design costs. We show that it is feasible to produce certified meter readings for use in billing protocols relying on Zero-Knowledge Proofs with microcontrollers such as those inside currently deployed smart meters. Our prototype meter is capable of producing these readings every 10 seconds using a $3.30 USD MSP430 microcontroller, while less powerful microcontrollers deployed in today's smart meters are capable of producing readings every 28 seconds. In addition to our results, our goal is to provide smart meter designers with a general methodology for selecting an appropriate balance between platform performance, power consumption, and monetary cost that accommodates privacy-preserving billing protocols.
Last updated:  2012-01-27
Adaptively Attribute-Hiding (Hierarchical) Inner Product Encryption
Tatsuaki Okamoto, Katsuyuki Takashima
This paper proposes the first inner product encryption (IPE) scheme that is adaptively secure and fully attribute-hiding (attribute-hiding in the sense of the definition by Katz, Sahai and Waters), while the existing IPE schemes are either fully attribute-hiding but selectively secure or adaptively secure but weakly attribute-hiding. The proposed IPE scheme is proven to be adaptively secure and fully attribute-hiding under the decisional linear assumption in the standard model. The IPE scheme is comparably as efficient as the existing attribute-hiding IPE schemes. We also present a variant of the proposed IPE scheme with the same security that achieves shorter public and secret keys. A hierarchical IPE scheme can be constructed that is also adaptively secure and fully attribute-hiding under the same assumption. In this paper, we extend the dual system encryption technique by Waters into a more general manner, in which new forms of ciphertext and secret keys are employed and new types of information theoretical tricks are introduced along with several forms of computational reduction.
Last updated:  2011-10-03
Certificate-Based Signcryption: Security Model and Efficient Construction
Yang Lu, Jiguo Li
Signcryption is an important cryptographic primitive that simultaneously achieves confidentiality and authentication in an efficient manner. In 2008, Luo et al. introduced the notion of certificate-based signcryption and proposed the first construction of certificate-based signcryption. However, their scheme is insecure under the key replacement attack and also does not provide insider security. To overcome these disadvantages, we introduce a strengthened security model of certificate-based signcryption in this paper. The new security model accurately models insider security and the key replacement attacks that might be attempted by an adversary in a real certificate-based signcryption system. We also propose a new certificate-based signcryption scheme that reaches insider security and resists key replacement attacks. We show that this scheme is both chosen-ciphertext secure and existentially unforgeable in the random oracle model. Furthermore, performance analysis shows that the proposed scheme is efficient and practical.
Last updated:  2011-10-03
Minimalism in Cryptography: The Even-Mansour Scheme Revisited
Orr Dunkelman, Nathan Keller, Adi Shamir
In this paper we consider the following fundamental problem: What is the simplest possible construction of a block cipher which is provably secure in some formal sense? This problem motivated Even and Mansour to develop their scheme in 1991, but its exact security remained open for more than 20 years in the sense that the lower bound proof considered known plaintexts, whereas the best published attack (which was based on differential cryptanalysis) required chosen plaintexts. In this paper we solve this long-standing open problem by describing the new Slidex attack which matches the T = \Omega(2^n/D) lower bound on the time T for any number of known plaintexts D. Once we obtain this tight bound, we can show that the original two-key Even-Mansour scheme is not minimal in the sense that it can be simplified into a single-key scheme with half as many key bits which provides exactly the same security, and which can be argued to be the simplest conceivable provably secure block cipher. We then show that there can be no comparable lower bound on the memory requirements of such attacks, by developing a new memoryless attack which can be applied with the same time complexity but only in the special case of D=2^{n/2}. In the last part of the paper we analyze the security of several other variants of the Even-Mansour scheme, showing that some of them provide the same level of security while in others the lower bound proof fails for very delicate reasons.
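The two-key Even-Mansour construction is $E(x) = k_2 \oplus P(x \oplus k_1)$ for a fixed public permutation $P$. A minimal sketch at a toy 8-bit block size (the scheme is only meaningful at realistic block sizes; this merely illustrates the structure):

```python
import random

random.seed(1)
# A fixed public "random" permutation P on 8-bit blocks (toy size).
P = list(range(256))
random.shuffle(P)
P_inv = [0] * 256
for i, y in enumerate(P):
    P_inv[y] = i

def encrypt(x, k1, k2):
    """Two-key Even-Mansour: E(x) = k2 XOR P(x XOR k1)."""
    return P[x ^ k1] ^ k2

def decrypt(c, k1, k2):
    """Inverse: D(c) = k1 XOR P^{-1}(c XOR k2)."""
    return P_inv[c ^ k2] ^ k1

# The single-key variant discussed in the paper sets k1 = k2 = k,
# halving the key material while providing the same security.
```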
Last updated:  2011-10-03
Efficient Implementation of the $\eta_T$ Pairing on GPU
Yosuke Katoh, Yun-Ju Huang, Chen-Mou Cheng, Tsuyoshi Takagi
Recently, efficient implementation of cryptographic algorithms on graphics processing units (GPUs) has attracted a lot of attention in the cryptologic research community. In this paper, we deal with efficient implementation of the $\eta_T$ pairing on supersingular curves over finite fields of characteristic 3. We report the performance results of implementations on NVIDIA GTX 285, GTX 480, Tesla C1060, and Tesla C2050 graphics cards. We have implemented the $\eta_T$ pairing in three different ways, namely, one pairing by one thread (Implementation~\Rmnum{1}), one pairing by multiple threads (Implementation~\Rmnum{2}), and multiple pairings by multiple threads in a bitsliced fashion (Implementation~\Rmnum{3}). The timing for Implementation~\Rmnum{3} on a single GTX 285 is 1.47, 8.15, and 140.7~milliseconds for the $\eta_T$ pairing over $\mathbb{F}_{3^{97}}$, $\mathbb{F}_{3^{193}}$, and $\mathbb{F}_{3^{509}}$, respectively. On a single GTX 480, the throughput performance of Implementation~\Rmnum{3} is 33710, 4970, and 332 $\eta_T$ pairings per second over $\mathbb{F}_{3^{97}}$, $\mathbb{F}_{3^{193}}$, and $\mathbb{F}_{3^{509}}$, respectively. To the best of our knowledge, this is the first implementation of the $\eta_T$ pairing on a GPU. Furthermore, it is currently the software implementation that achieves the highest single-chip throughput for the $\eta_T$ pairing.
Last updated:  2013-02-16
Sign Modules in Secure Arithmetic Circuits
Ching-Hua Yu
In 1994, Feige, Kilian, and Naor suggested a toy protocol of secure comparison, which takes secret input $[x]_7$ and $[y]_7$ between 0 and 2, using the modulo 7 arithmetic circuit. Because 0, 1, and 2 are quadratic residues while 5 and 6 are non-residues modulo 7, the protocol is done by securely evaluating the Legendre symbol of $[x-y]_7$, which can be carried out very efficiently by $O(1)$ secure multiplication gates. However, the extension regarding computation in large fields is undiscussed, and furthermore, whether it is possible to turn a toy comparison into a practically usable protocol is unknown. Motivated by these questions, in this paper, we study secure comparison-related problems using only the secure arithmetic black-box of a finite field, counting the cost by the number of secure multiplications. We observe that a specific type of quadratic patterns exists in all finite fields, and the existence of these patterns can be utilized to explore new solutions with sublinear complexities to several problems. First, we define \emph{sign modules} as partial functions that simulate integer signs in an effective range using a polynomial number of arithmetic operations in a finite field. Let $\ell$ denote the bit-length of a finite field size. We show the existence of $\lfloor \ell/5 \rfloor$-``effective" sign modules in any finite field that has a sufficiently large characteristic. When $\ell$ is decided first, we further show (by a constructive proof) the existence of prime fields that contain an $\Omega(\ell\log \ell)$-``effective" sign module and propose an efficient polynomial-time randomized algorithm that finds concrete instances of sign modules.
Then, based on one effective sign module in an odd prime field $\Z_p$ and a binary-expressed random number in $\Z_p$ prepared in the offline phase, we show that the computation of bitwise less-than can be improved from the best known result of $O(\ell)$ to $O(\sqrt{\frac{\ell}{\log \ell}})$ (with $O(1)$ rounds) in the online phase using only the $\Z_p$-arithmetic black-box. Accompanied by several related improvements, secure computation involving integer comparisons and modular reductions can be improved from the best known result of $O(\ell)$ to $O(\sqrt{\frac{\ell}{\log \ell}})$ (with $O(1)$ rounds), and a (deterministic) zero test can be improved from $O(\ell)$ to $O(1)$ in the online phase. Additionally, a bit-decomposition protocol with linear complexity is also obtained.
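The Feige-Kilian-Naor toy comparison that motivates this work can be sketched in a few lines, here in the clear (without the secret sharing) just to show the quadratic-character trick: for x, y in {0, 1, 2}, the Legendre symbol of (x - y) mod 7 already equals the sign of x - y.

```python
def legendre(a, p):
    """Legendre symbol of a mod p, computed as a^((p-1)/2) mod p
    and mapped to {-1, 0, 1}."""
    s = pow(a % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def toy_compare(x, y):
    """Sign of x - y for x, y in {0, 1, 2}: since 1 and 2 are quadratic
    residues mod 7 while 5 and 6 are not, the quadratic character of
    (x - y) mod 7 encodes the comparison result."""
    return legendre(x - y, 7)
```

In the actual protocol this Legendre-symbol evaluation is done on shared values with O(1) secure multiplications; the sketch only demonstrates the underlying number-theoretic pattern.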
Last updated:  2012-09-12
Leakage-Resilient Client-side Deduplication of Encrypted Data in Cloud Storage
Jia Xu, Ee-Chien Chang, Jianying Zhou
Cloud storage service is gaining popularity in recent years. Client-side deduplication is widely adopted by cloud storage services like Dropbox, MozyHome and Wuala, to save bandwidth and storage. Security flaws in the current client-side deduplication mechanism, which may lead to private data leakage, were recently found by Harnik~\emph{et al.}~(S\&P Magazine, '10) and Halevi~\emph{et al.} (CCS '11). Halevi~\emph{et al.} identified an important security issue in client side deduplication which leads to leakage of users' private files to outside attackers, and addressed this issue by constructing schemes which they called \emph{proofs of ownership} (PoW). In a proof of ownership scheme, any owner of the same file $F$ can prove to the cloud storage that he/she owns file $F$ in a robust and efficient way, even if a certain amount of arbitrary information about file $F$ is leaked. In this paper, we make two main contributions: \begin{itemize} \item We construct an efficient hash function $\mathsf{H}_k: \{ 0,1 \}^{M} \rightarrow \{ 0,1 \}^{L}$ with complexity in $\mathcal{O}(M + L)$, which is provably pairwise-independent in the random oracle model. We apply the constructed hash function to obtain a proof of ownership scheme, which is provably secure w.r.t. \emph{any} distribution of input file with sufficient min-entropy, in the random oracle model. In contrast, the PoW scheme (the last and the most practical construction) in Halevi~\emph{et al.} is provably secure w.r.t. only \emph{a particular type} of distribution (they call it a generalization of ``block-fixing'' distribution) of input file with sufficient min-entropy, in the random oracle model. \item We propose the first (to the best of our knowledge) solution to support cross-user client side deduplication over encrypted data in the \emph{leakage-resilient} model, where a certain amount of arbitrary information about users' files is leaked.
Particularly, we address another important security issue in client side deduplication--- confidentiality of users' sensitive files against the honest-but-curious cloud storage server, by proposing a method to distribute a randomly chosen per-file encryption key to all owners of the same file, in an efficient and secure way. This key distribution method will be seamlessly incorporated into the process of client side deduplication. We emphasize that ``convergent encryption'', which encrypts a file $F$ using hash value $h(F)$ as encryption key, is not leakage-resilient and is thus insecure in the setting of PoW. Therefore, the direct combination of a PoW scheme and convergent encryption is not a solution for client side deduplication over encrypted data. \end{itemize}
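To make concrete why convergent encryption is deterministic (and hence deduplication-friendly but, as the abstract argues, not leakage-resilient), here is a minimal sketch; the SHA-256 counter-mode keystream is an illustrative choice of cipher, not a construction from the paper:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 of key || counter, concatenated."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def convergent_key(f: bytes) -> bytes:
    """The key is derived from the file itself: k = h(F)."""
    return hashlib.sha256(f).digest()

def convergent_encrypt(f: bytes) -> bytes:
    """Identical files always map to identical ciphertexts, which enables
    cross-user deduplication but lets anyone confirm a guessed file."""
    ks = _keystream(convergent_key(f), len(f))
    return bytes(a ^ b for a, b in zip(f, ks))

def convergent_decrypt(key: bytes, c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, _keystream(key, len(c))))
```

The determinism shown here is exactly why a direct combination of PoW and convergent encryption fails in the leakage-resilient setting: partial knowledge of F can be checked against the ciphertext.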
Last updated:  2017-10-18
Lattice Signatures Without Trapdoors
Vadim Lyubashevsky
We provide an alternative method for constructing lattice-based digital signatures which does not use the ``hash-and-sign'' methodology of Gentry, Peikert, and Vaikuntanathan (STOC 2008). Our resulting signature scheme is secure, in the random oracle model, based on the worst-case hardness of the $\tilde{O}(n^{1.5})$-SIVP problem in general lattices. The secret key, public key, and the signature size of our scheme are smaller than in all previous instantiations of the hash-and-sign signature, and our signing algorithm is also much simpler, requiring just a few matrix-vector multiplications and rejection samplings. We then also show that by slightly changing the parameters, one can get even more efficient signatures that are based on the hardness of the Learning With Errors problem. Our construction naturally transfers to the ring setting, where the size of the public and secret keys can be significantly shrunk, which results in the most practical to-date provably secure signature scheme based on lattices.
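The rejection-sampling step mentioned in the abstract (the Fiat-Shamir-with-aborts style of signing) can be illustrated by a one-dimensional toy sketch; all bounds below are hypothetical illustration parameters, not the scheme's:

```python
import random

B = 1000   # bound on the masking term y (toy parameter)
S = 5      # bound on |c * s|, challenge times secret (toy parameter)

def sign_response(s, c, rng):
    """Sample z = y + c*s with y uniform on [-B, B], rejecting until z
    lands in [-(B - S), B - S].  The accepted z is uniform on that
    interval *independently of s*, so the signature leaks nothing
    about the secret key."""
    assert abs(c * s) <= S
    while True:
        y = rng.randint(-B, B)
        z = y + c * s
        if abs(z) <= B - S:
            return z
```

Because [-(B-S), B-S] is contained in the support of y + c*s for every admissible s, the accepted distribution is the same for all keys; that independence is what the rejection step buys.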
Last updated:  2012-11-15
Revisiting Lower and Upper Bounds for Selective Decommitments
Rafail Ostrovsky, Vanishree Rao, Alessandra Scafuro, Ivan Visconti
In [DNRS99, DNRS03], Dwork et al. opened the fundamental question of the existence of commitment schemes that are secure against selective opening attacks (SOA, for short). In [BHY09] Bellare, Hofheinz, and Yilek, and Hofheinz in [Hof11] solved this problem positively by presenting a scheme which is based on non-black-box use of a one-way permutation and which has a super-constant number of rounds. The achieved solution however opened other challenging questions on improvements of round complexity and on the possibility of obtaining fully black-box schemes where access to an underlying primitive and to an adversary are black-box only. Recently, in TCC 2011, Xiao ([Xia11a]) investigated how to achieve (nearly) optimal SOA-secure commitment schemes where optimality is in the sense of both the round complexity and the black-box use of cryptographic primitives. The work of Xiao focuses on a simulation-based security notion of SOA. Moreover, the various results in [Xia11a] focus only on either parallel or concurrent SOA. In this work we first point out various issues in the claims of [Xia11a] that actually re-open several of the questions left open in [BHY09, Hof11]. Then, we provide new lower bounds and concrete constructions that produce a very different state-of-the-art compared to the one given in [Xia11a]. More specifically, denoting by (x, y) the round complexity of a scheme that requires x rounds in the commitment phase and y rounds in the decommitment phase, and by considering only (like in [Xia11a]) the setting of black-box simulation for SOA-security, we show that: 1. There is an issue in the result of [Xia11a] on the existence of (3,1)-round schemes for parallel SOA; in fact, we are able to contradict their impossibility result by presenting a (3,1)-round scheme based on black-box use of trapdoor commitments.
Moreover, we can instantiate such a scheme with a non-black-box use of a one-way function, thus producing a (3,1)-round scheme based on any one-way function that improves the result of [BHY09, Hof11] from logarithmic round complexity to 3 (optimal), also under optimal complexity assumptions. We also show a (3,3)-round scheme based on black-box use of any one-way permutation. 2. There is an issue in the proof of security for parallel composition of the (4,1)-round scheme given in [Xia11a]; thus such a scheme may not be secure. We show instead a (4,1)-round scheme based on black-box use of any weak trapdoor commitment scheme, and a (5,1)-round scheme based on black-box use of any one-way permutation. 3. There is an issue in the proof of security of the concurrent SOA-secure scheme of [Xia11a]. This issue emerges in the case where the simulator cannot itself efficiently sample from the distribution of committed messages. In fact, we contradict the claimed security of this scheme by showing that there cannot exist such a scheme, regardless of its round complexity and of the (black-box or non-black-box) use of cryptographic primitives. All our schemes are secure for parallel SOA composition and also secure for concurrent SOA composition under the restriction that all commitment phases are played before any decommitment phase. Moreover, in all our constructions the simulator does not need to know the distribution of the messages committed to by the sender. In light of our result on the impossibility of a scheme that is SOA-secure under full-fledged concurrent composition (see Item 3 above), the concurrency achieved by our schemes is essentially optimal.
Last updated:  2012-11-06
Multiparty Computation from Somewhat Homomorphic Encryption
I. Damgard, V. Pastro, N. P. Smart, S. Zakarias
We propose a general multiparty computation protocol secure against an active adversary corrupting up to $n-1$ of the $n$ players. The protocol may be used to compute securely arithmetic circuits over any finite field $\F_{p^k}$. Our protocol consists of a preprocessing phase that is both independent of the function to be computed and of the inputs, and a much more efficient online phase where the actual computation takes place. The online phase is unconditionally secure and has total computational (and communication) complexity linear in $n$, the number of players, where earlier work was quadratic in $n$. Hence, the work done by each player in the online phase is independent of $n$ and moreover is only a small constant factor larger than what one would need to compute the circuit in the clear. It is the first protocol in the preprocessing model with these properties. We show a lower bound implying that for computation in large fields, our protocol is optimal. In practice, for 3 players, a secure 64-bit multiplication can be done in 0.05 ms. Our preprocessing is based on a somewhat homomorphic cryptosystem. We extend a scheme by Brakerski et al., so that we can perform distributed decryption and handle many values in parallel in one ciphertext. The computational complexity of our preprocessing phase is dominated by the public-key operations: we need $O(n^2/s)$ operations per secure multiplication where $s$ is a parameter that increases with the security parameter of the cryptosystem. Earlier work in this model needed $\Omega(n^2)$ operations. In practice, the preprocessing prepares a secure 64-bit multiplication for 3 players in about 13 ms, which is 2-3 orders of magnitude faster than the best previous results.
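The preprocessing/online split can be illustrated with Beaver's multiplication-triple technique, which underlies preprocessing-model protocols of this kind; this sketch uses plain additive sharing over a prime field and omits the information-theoretic MACs the actual protocol adds for active security:

```python
import random

P = 2 ** 61 - 1            # prime modulus of the toy field
_rng = random.Random(1)

def share(x, n=3):
    """Additive secret sharing of x over Z_P among n players."""
    parts = [_rng.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reveal(shares):
    return sum(shares) % P

def beaver_mul(xs, ys, triple):
    """Online multiplication of shared x and y, consuming one
    preprocessed triple ([a], [b], [c]) with c = a*b."""
    as_, bs_, cs_ = triple
    d = reveal([(xi - ai) % P for xi, ai in zip(xs, as_)])  # open x - a
    e = reveal([(yi - bi) % P for yi, bi in zip(ys, bs_)])  # open y - b
    # [xy] = [c] + d*[b] + e*[a] + d*e, since (a+d)(b+e) = c + eb + da + de
    zs = [(ci + d * bi + e * ai) % P for ai, bi, ci in zip(as_, bs_, cs_)]
    zs[0] = (zs[0] + d * e) % P   # the public constant d*e is added once
    return zs
```

The online cost per multiplication is just two openings and local arithmetic, matching the abstract's point that online work is close to computing in the clear while the expensive (homomorphic-encryption-based) work is pushed into triple generation.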
Last updated:  2013-04-01
Formal Analysis of the Entropy / Security Trade-off in First-Order Masking Countermeasures against Side-Channel Attacks
Maxime Nassar, Sylvain Guilley, Jean-Luc Danger
Several types of countermeasures against side-channel attacks are known. The one called masking is of great interest since it can be applied to any protocol and/or algorithm, without nonetheless requiring special care at the implementation level. Masking countermeasures are usually studied with the maximal possible entropy for the masks. However, in practice, this requirement can be viewed as too costly. It is thus relevant to study how the security evolves when the number of mask values decreases. In this article, we study a first-order masking scheme that makes use of one $n$-bit mask taking values in a strict subset of $\mathbb{F}_2^n$. For a given entropy budget, we show that the security does depend on the choice of the mask values. More specifically, we explore the space of mask sets that resist first- and second-order correlation analysis (CPA and 2O-CPA), using exhaustive search for word size $n \leqslant 5$~bit and a SAT-solver for $n$ up to $8$~bit. We notably show that it is possible to protect algorithms such as AES against both CPA and 2O-CPA with only $12$ mask values. While the general trend is that more entropy means less leakage, some particular mask subsets can leak less (or, on the contrary, remarkably more). Additionally, we exhibit mask subsets that allow minimal leakage.
Last updated:  2011-10-01
Two-Output Secure Computation with Malicious Adversaries
abhi shelat, Chih-hao Shen
We present a method to compile Yao's two-player garbled circuit protocol into one that is secure against malicious adversaries; the compilation relies on witness indistinguishability. Our approach can enjoy lower communication and computation overhead than methods based on cut-and-choose, and lower overhead than methods based on zero-knowledge proofs (or sigma-protocols). To do so, we develop and analyze new solutions to issues arising with this transformation: - How to guarantee the generator's input consistency - How to support different outputs for each player without adding extra gates to the circuit of the function f being computed - How the evaluator can retrieve input keys but avoid selective failure attacks - Challenging 3/5 of the circuits is near optimal for cut-and-choose (and better than challenging 1/2) Our protocols require the existence of secure-OT and claw-free functions that have a weak malleability property. We discuss an experimental implementation of our protocol to validate our efficiency claims.
Last updated:  2012-06-03
Hash Functions Based on Three Permutations: A Generic Security Analysis
Bart Mennink, Bart Preneel
We consider the family of 2n-to-n-bit compression functions that are solely based on at most three permutation executions and on XOR-operators, and analyze its collision and preimage security. Despite their elegance and simplicity, these designs are not covered by the results of Rogaway and Steinberger (CRYPTO 2008). By defining a carefully chosen equivalence relation on this family of compression functions, we obtain the following results. In the setting where the three permutations pi_1, pi_2, pi_3 are selected independently and uniformly at random, there exist at most four equivalence classes that achieve optimal 2^{n/2} collision resistance. Under a certain extremal graph theory based conjecture, these classes are proven optimally collision secure. Additionally, three of these classes allow for finding preimages in 2^{n/2} queries, and only one achieves optimal 2^{2n/3} preimage resistance (with respect to the bounds of Rogaway and Steinberger, EUROCRYPT 2008). Consequently, a compression function is optimally collision and preimage secure if and only if it is equivalent to F(x_1,x_2) = x_1 xor pi_1(x_1) xor pi_2(x_2) xor pi_3(x_1 xor x_2 xor pi_1(x_1)). If the compression function makes three calls to the same random permutation, there does not exist any compression function attaining 2^{n/2} collision resistance: for any scheme, collisions can be found with 2^{2n/5} queries. This result casts some doubt over the existence of any (larger) secure permutation-based compression function built only on XOR-operators and (multiple invocations of) a single permutation.
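The unique optimally secure equivalence class identified in the abstract can be written out directly; below is a toy 8-bit instantiation, with seeded shuffles standing in for the three independent random permutations (an illustration of the formula, not of the security analysis, which is for generic n-bit permutations):

```python
import random

N = 8  # toy word size

def _toy_perm(seed):
    """A fixed public permutation on N-bit words (seeded shuffle)."""
    rng = random.Random(seed)
    p = list(range(2 ** N))
    rng.shuffle(p)
    return p

pi1, pi2, pi3 = _toy_perm(1), _toy_perm(2), _toy_perm(3)

def F(x1, x2):
    """The optimally collision- and preimage-secure class:
    F(x1, x2) = x1 ^ pi1(x1) ^ pi2(x2) ^ pi3(x1 ^ x2 ^ pi1(x1))."""
    return x1 ^ pi1[x1] ^ pi2[x2] ^ pi3[x1 ^ x2 ^ pi1[x1]]
```

Note the 2n-to-n shape: two N-bit inputs are compressed to one N-bit output using only three permutation calls and XORs, which is the entire design space the paper analyzes.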
Last updated:  2011-10-05
Static Fault Attacks on Hardware DES Registers
Philippe Loubet-Moundi, David Vigilant, Francis Olivier
In the late nineties, Eli Biham and Adi Shamir published the first paper on Differential Fault Analysis on symmetric key algorithms. More specifically, they introduced a fault model where a key bit located in non-volatile memory is forced to $0/1$ with a fault injection. In their scenario the fault was permanent, and could lead the attacker to full key recovery with low complexity. In this paper, another fault model is considered: forcing a key bit to $0/1$ in the register of a hardware block implementing the Data Encryption Standard. Due to the specific location of the fault, the key modification is not permanent in the life of the embedded device, and this makes it possible to apply a powerful safe-error-like attack. This paper reports a practical validation of the fault model on two actual circuits, and discusses limitations and efficient countermeasures against this threat.
Last updated:  2011-10-01
Key-Evolution Schemes Resilient to Space-Bounded Leakage
Stefan Dziembowski, Tomasz Kazana, Daniel Wichs
Much recent work in cryptography attempts to build secure schemes in the presence of \emph{side-channel leakage} or leakage caused by malicious software, like computer viruses. In this setting, the adversary may obtain some additional information (beyond the control of the scheme designer) about the internal secret state of a cryptographic scheme. Here, we consider key-evolution schemes that allow a user to evolve a secret-key $K_1$ via a \emph{deterministic} function $f$, to get updated keys $K_2 = f(K_1), K_3 = f(K_2), \ldots$. Such a scheme is \emph{leakage-resilient} if an adversary that can leak on the first $i$ steps of the evolution process does not get any useful information about any future keys. For such schemes, one must assume some restriction on the \emph{complexity} of the leakage to prevent \emph{pre-computation attacks}, where the leakage on a key $K_i$ simply pre-computes a future key $K_{i+ t}$ and leaks even a single bit on it. We notice that much of the prior work on this problem, and the restrictions made therein, can be divided into two types. Theoretical work offers rigor and provable security, but at the cost of having to make strong restrictions on the type of leakage and designing complicated schemes to make standard reduction-based proof techniques go through (an example of such an assumption is that only the data actually used in computation can leak to the adversary). On the other hand, practical work focuses on simple and efficient schemes, often at the cost of only achieving an intuitive notion of security without formal well-specified guarantees. In this paper, we complement the two tracks via a middle-of-the-road approach. On one hand, we rely on the random-oracle model, acknowledging the usefulness of this methodology in practice despite its theoretical shortcomings. 
On the other hand, we show that even in the random-oracle model, designing secure leakage-resilient schemes with clear and meaningful guarantees requires great care and is susceptible to pitfalls. For example, just assuming that leakage ``cannot evaluate the random oracle'' can be misleading. Instead, we define a new model in which we assume that the ``leakage'' can be any arbitrary \emph{space bounded} computation that can make random oracle calls itself. We connect the space-complexity of a computation in the random-oracle model to the \emph{pebbling complexity} on graphs. Using this connection, we derive meaningful guarantees for relatively simple key-evolution constructions. Our security proofs do not rely on the assumption that only data used in the computation can leak. Our scheme is also secure against a large and natural class of active attacks. This is especially important if the key evolution is performed on a PC that can be attacked by a virus. So far, all results that provided solutions against such attacks were secure under the assumption that the virus can download the data from the machine, but cannot modify any information stored on it (that was called the {\em bounded retrieval model (BRM)}). This paper provides the first scheme where the adversary in the BRM can also modify the data stored on the machine.
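The basic object under study, deterministic key evolution $K_{i+1} = f(K_i)$, can be sketched as a hash chain with $f$ modeled by a random oracle, here instantiated with SHA-256. This is only a sketch of the syntax: the paper's actual constructions are graph-based and engineered to resist space-bounded leakage, which this naive chain is not claimed to do.

```python
import hashlib

def evolve(key: bytes) -> bytes:
    """One evolution step K_{i+1} = f(K_i), with f modeled as a random
    oracle (instantiated by SHA-256 with a domain-separation prefix)."""
    return hashlib.sha256(b"key-evolution" + key).digest()

def key_at(k1: bytes, i: int) -> bytes:
    """Derive K_i from K_1 by i - 1 deterministic evolution steps."""
    k = k1
    for _ in range(i - 1):
        k = evolve(k)
    return k
```

The pre-computation attack the abstract mentions is visible here: leakage on K_1 that is allowed to run `key_at` could leak a bit of a far-future key, which is exactly why the leakage must be complexity-restricted (space-bounded, in this paper's model).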
Last updated:  2011-10-01
Secure and Efficient Proof of Storage with Deduplication
Qingji Zheng, Shouhuai Xu
Both security and efficiency are crucial to the success of cloud storage. So far, security and efficiency of cloud storage have been separately investigated as follows: On one hand, security notions such as Proof of Data Possession (\PDP) and Proof of Retrievability (\POR) have been introduced for detecting the tampering of data stored in the cloud. On the other hand, the notion of Proof of Ownership (\POW) has also been proposed to relieve the cloud server of storing multiple copies of the same data, which could substantially reduce the consumption of both network bandwidth and server storage space. These two aspects are seemingly at odds with each other. In this paper, we show, somewhat surprisingly, that the two aspects can actually co-exist within the same framework. This is possible fundamentally because of the following insight: {\em The public verifiability offered by \PDP/\POR\ schemes can be naturally exploited to achieve \POW}. This ``one stone, two birds'' phenomenon not only inspired us to propose the novel notion of Proof of Storage with Deduplication (\POSD), but also guided us to design a concrete scheme that is provably secure in the Random Oracle model based on the Computational Diffie-Hellman (CDH) assumption.
Last updated:  2011-09-28
Efficient Delegation-Based Authentication Protocol with Strong Mobile Privacy
Jian-Zhu Lu, Hong-Qing Ren, Jipeng Zhou
In 2008, Tang and Wu designed a one-time alias mechanism for protecting the mobile privacy of a user. Recently, Youn and Lim proposed an improved delegation-based authentication protocol to provide private roaming service. In this article, we show that a link between requests may disclose information about the mobile privacy of a sender, and that the aliases of a user fail to achieve unlinkability in Tang-Wu's scheme. We remedy this situation by suggesting an enhanced protocol that utilizes a pseudorandom function. Compared to Youn-Lim's protocol, our design is also more efficient.
Last updated:  2011-10-15
Security Weaknesses of password-only authenticated key establishment protocol without public key cryptography
Mohsen Toorani, Maryam Saeed
In 2005, Laih et al. proposed a password-based authenticated key exchange protocol that is not based on public key cryptography but uses the human ability to extract strings from distorted images. In this letter, it is shown that Laih et al.'s protocol is vulnerable to password compromise impersonation, malicious server, offline password guessing, undetectable online password guessing, stolen-verifier, and Unknown Key-Share (UKS) attacks, and that it provides neither forward secrecy nor key confirmation.
Last updated:  2011-09-26
Universally Composable Security Analysis of OAuth v2.0
Suresh Chari, Charanjit Jutla, Arnab Roy
This paper defines an ideal functionality for delegation of web access to a third-party where the authentication mechanism is password-based. We give a universally-composable (UC) realization of this ideal functionality assuming the availability of an SSL-like ideal functionality. We also show that this implementation can be further refined to give a browser based implementation whenever the browser supports https redirection. This implementation matches the 'Authorization Code' mode of the OAuth Version 2.0 Internet draft proposal, with the additional requirement that the third-party along with the Authorization Server must support an SSL-like functionality. From the universally-composable perspective, our ideal functionality definition is novel in the respect that it does not require the three parties to decide on a session identifier in advance, which is usually assumed in a UC setting. This allows us to realize the ideal functionality without any wrapper code, and thus exactly matching the desired protocol in the OAuth standard.
Last updated:  2011-10-18
A Note on the Density of the Multiple Subset Sum Problems
Yanbin Pan, Feng Zhang
It is well known that the general subset sum problem is NP-complete. However, almost all subset sum problems with density less than $0.9408\ldots$ can be solved in polynomial time with an oracle that can find the shortest vector in a special lattice. In this paper, we give a similar result for the multiple subset sum problem, which consists of $k$ subset sum problems with the same solution. Some extended versions of the multiple subset sum problem are also considered. In addition, a modified lattice is involved to make the analysis much simpler than before.
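The density referred to above is, in the standard convention, $d = n / \log_2(\max_i a_i)$ for an instance with weights $a_1, \ldots, a_n$; a one-line helper makes the $0.9408\ldots$ threshold concrete:

```python
import math

def density(weights):
    """Density d = n / log2(max_i a_i) of a subset sum instance with
    weights a_1, ..., a_n.  Below roughly 0.9408 the lattice-oracle
    attack applies to almost all instances."""
    return len(weights) / math.log2(max(weights))
```

For example, 50 weights of about 100 bits give a (low) density of 0.5, well inside the solvable regime.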
Last updated:  2011-11-14
Security of Reduced-Round Camellia against Impossible Differential Attack
Leibo Li, Jiazhe Chen, Xiaoyun Wang
Camellia is one of the widely used block ciphers, which has been selected as an international standard by ISO/IEC. By using some interesting properties of $FL/FL^{-1}$ functions, we introduce new 7-round impossible differentials of Camellia for weak keys, which can be used to attack reduced-round Camellia under the weak-key setting. The weak keys that work for the impossible differential make up 3/4 of the whole key space; therefore, we can further get rid of the weak-key assumption and extend the attacks to all keys by utilizing a method that is called \emph{the multiplied method}. As a result, for the whole key space, 10-round Camellia-128, 11-round Camellia-192 and 12-round Camellia-256 can be attacked with about $2^{120}$, $2^{184}$ and $2^{240}$ encryptions, respectively. In addition, we are able to extend the attacks to 12-round Camellia-192 and 14-round Camellia-256 which include two $FL/FL^{-1}$ layers, provided that the attacks do not have to start from the first round.
Last updated:  2012-02-20
Security analysis of a fuzzy identity-based encryption scheme
Miaomiao Tian, Liusheng Huang, Wei Yang
A fuzzy identity-based encryption scheme allows a ciphertext encrypted under an identity $ID^*$ to be decrypted using a secret key corresponding to an identity $ID$ which is close to $ID^*$ as measured by some metric. Recently, Ren et al. proposed a new fuzzy identity-based encryption scheme and proved that it is fully chosen-ciphertext secure in the standard model. However, in this paper, we will show that their fuzzy identity-based encryption scheme is not even chosen-plaintext secure.
Last updated:  2011-09-25
A Compact S-Box Design for SMS4 Block Cipher
Imran Abbasi, Mehreen Afzal
This paper proposes a compact design of the SMS4 S-box using combinational logic which is suitable for implementation in area-constrained environments like smart cards. The inversion algorithm of the proposed S-box is based on the composite field GF(((2^2)^2)^2) using normal basis at all levels. In our approach, we examined all possible normal basis combinations having trace equal to one at each subfield level. There are 16 such possible combinations with normal basis, and we have compared the S-box designs based on each case in terms of the logic gates used for implementation. The isomorphism mapping and inverse mapping bit matrices are fully optimized using a greedy algorithm. We prove that our best case reduces the complexity of the existing polynomial-basis SMS4 S-box design by 15% in XOR gates and 42% in AND gates.
Last updated:  2011-09-26
Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions
Daniele Micciancio, Petros Mol
We study under what conditions the conjectured one-wayness of the knapsack function (with polynomially bounded inputs) over an arbitrary finite abelian group implies that the output of the function is pseudorandom, i.e., computationally indistinguishable from a uniformly chosen group element. Previous work of Impagliazzo and Naor (J. Cryptology 9(4):199-216, 1996) considers only specific families of finite abelian groups and uniformly chosen random \emph{binary} inputs. Our work substantially extends previous results and provides a much more general reduction that applies to arbitrary finite abelian groups and input distributions with polynomially bounded coefficients. As an application of the new result, we give \emph{sample preserving} search-to-decision reductions for the Learning With Errors (LWE) problem, introduced by Regev (J. ACM 56(6):34, 2009) and widely used in lattice-based cryptography.
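The knapsack function under study, specialized from a general finite abelian group to $\mathbb{Z}_q$ with bounded inputs, is simply the weighted sum below (a sketch of the object, not of the reduction):

```python
def knapsack(a, x, q):
    """The knapsack function f_a(x) = sum_i a_i * x_i mod q over the
    group Z_q; the paper considers input coordinates x_i drawn from a
    distribution with polynomially bounded support."""
    return sum(ai * xi for ai, xi in zip(a, x)) % q
```

The pseudorandomness question is whether f_a(x), for random public weights a, is computationally indistinguishable from a uniform element of Z_q; the Impagliazzo-Naor setting corresponds to binary x_i.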
Last updated:  2014-02-04
Leakage Squeezing Countermeasure against High-Order Attacks
Houssem Maghrebi, Sylvain Guilley, Claude Carlet, Jean-Luc Danger
This article provides an in-depth study of the high-order (HO) Boolean masking countermeasure against side-channel attacks. We introduce the notion of HO-CPA immunity as a metric to characterize a leakage function. We show that this notion serves to assess both the resistance against HO-CPA attacks and the amount of leakage. Namely, the HO-CPA immunity, denoted $\mathsf{HCI} \in \N^*$, coincides with the lowest order of a successful HO-CPA and captures the dependence of the leakage on the noise variance $\sigma^2$ (according to $\mathcal{O}(1/\sigma^{2 \times \mathsf{HCI}})$ in Landau notation). Then, we introduce the technique of leakage squeezing. It is an optimization of straightforward masking where masks are recoded relevantly by bijections. Our main contribution is to show that the HO-CPA immunity of a masking countermeasure can be incremented by one or even by two at virtually no added cost. Indeed, the bijections (and inverse bijections) can be incorporated in tables that are often found in cryptographic algorithms (e.g. substitution boxes).
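The idea of recoding masks by bijections can be sketched at first order: instead of storing the shares (s XOR m, m), the device stores (s XOR m, B(m)) for a public bijection B. The random bijection below is purely illustrative; the paper's point is that B must be chosen carefully to raise the HO-CPA immunity.

```python
import random

N = 4  # toy 4-bit sensitive values
_rng = random.Random(0)
B = list(range(2 ** N))
_rng.shuffle(B)                                # public bijection recoding the mask
B_inv = [B.index(v) for v in range(2 ** N)]    # its inverse, B_inv[B[m]] = m

def mask(s, m):
    """First-order masking with a leakage-squeezed mask share:
    store (s XOR m, B(m)) instead of (s XOR m, m)."""
    return s ^ m, B[m]

def unmask(share0, share1):
    """Recombine by inverting the bijection before XORing."""
    return share0 ^ B_inv[share1]
```

Functionally nothing changes (recombination still recovers s), which is why the recoding comes at virtually no cost: B can be folded into existing lookup tables such as S-boxes.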
Last updated:  2011-09-22
Leakage-Resilient Cryptography From the Inner-Product Extractor
Stefan Dziembowski, Sebastian Faust
We present a generic method to secure various widely-used cryptosystems against \emph{arbitrary} side-channel leakage, as long as the leakage adheres to three restrictions: first, it is bounded per observation but in total can be arbitrarily large. Second, memory parts leak \emph{independently}, and, third, the randomness that is used for certain operations comes from a simple (non-uniform) distribution. As a fundamental building block, we construct a scheme to store a cryptographic secret such that it remains \emph{information theoretically} hidden, even given arbitrary continuous leakage from the storage. To this end, we use a randomized encoding and develop a method to securely \emph{refresh} these encodings even in the presence of leakage. We then show that our encoding scheme exhibits an efficient additive homomorphism which can be used to protect important cryptographic tasks such as identification, signing and encryption. More precisely, we propose \emph{efficient} implementations of the Okamoto identification scheme, and of an ElGamal-based cryptosystem with security against continuous leakage, as long as the leakage adheres to the above-mentioned restrictions. We prove security of the Okamoto scheme under the DL assumption and \emph{CCA2 security} of our encryption scheme under the DDH assumption.
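A minimal sketch of the underlying inner-product encoding, in a small prime field with short toy vectors: a secret $s$ is stored as two vectors $(L, R)$ with $\langle L, R \rangle = s$, and keeping $L$ fixed, adding the $R$-shares adds the secrets. All parameters here are illustrative, and the scheme's leak-free refreshing procedure is substantially more involved than this sketch:

```python
import random

p = 101   # toy prime field Z_p
n = 4     # toy vector length

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

def encode(s, L):
    """Encode secret s as a vector R with <L, R> = s (L is the other share).
    Pick R[1:] at random and solve for R[0]; assumes L[0] is invertible."""
    R = [0] + [random.randrange(p) for _ in range(n - 1)]
    partial = inner(L, R)
    R[0] = ((s - partial) * pow(L[0], -1, p)) % p
    return R

random.seed(2)
L = [random.randrange(1, p) for _ in range(n)]   # L[0] != 0
R1, R2 = encode(10, L), encode(20, L)
Rsum = [(a + b) % p for a, b in zip(R1, R2)]
print(inner(L, R1), inner(L, R2), inner(L, Rsum))  # 10 20 30
```

The last line exhibits the additive homomorphism: the component-wise sum of the $R$-shares encodes the sum of the secrets under the same $L$.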
Last updated:  2011-09-22
Two 1-Round Protocols for Delegation of Computation
Ran Canetti, Ben Riva, Guy N. Rothblum
Consider a weak client that wishes to delegate computation to an untrusted server and be able to succinctly verify the correctness of the result, all within one round of interaction. We provide solutions for two relaxed variants of this problem. Specifically: \begin{itemize} \item We consider a model where the client delegates the computation to {\em two or more} servers, and is guaranteed to output the correct answer as long as even a {\em single} server is honest. We call this model \emph{Refereed Delegation of Computation (RDoC)}. In this model, we show a 1-round {\sf unconditionally statistically sound} protocol for any log-space uniform $\mathcal{NC}$ circuit. In contrast, all known one-round delegation protocols with a single server are only computationally sound. \item We consider a model with a non-succinct offline stage and {\sf public verifiability.} (Previously, this model was considered only with private verifiability, namely, the client had to maintain some secret local information pertaining to the offline stage [Gennaro et al., CRYPTO 2010].) Public verifiability does away with the secret state, and so allows delegating the offline stage to a ``semi-trusted'' external third party that is potentially used by many clients, even mutually suspicious ones. It also allows for a stronger, more adaptive notion of soundness. In this model we show a 1-round computationally-sound protocol for any circuit $C$, {\em even a non-uniform one}. The client runs in time $\mathrm{poly}(\log(\mathrm{size}(C)), \mathrm{depth}(C))$, and soundness is guaranteed assuming the existence of collision-resistant hashing and poly-logarithmic PIR. Previously, publicly verifiable one-round delegation protocols were known only for functions in log-space uniform $\mathcal{NC}$. \end{itemize}
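The refereed-delegation idea can be illustrated with a toy iterative computation: two servers each submit a full trace, and when the final answers disagree, the referee locates the first divergent step and recomputes only that one step locally to identify the cheater. This sketch scans for the divergence linearly and is in no way succinct; the actual protocol achieves succinctness with interactive techniques:

```python
def step(x):
    """One step of the delegated iterative computation (toy: x -> x*x + 1 mod 9973)."""
    return (x * x + 1) % 9973

def trace(x0, n, cheat_at=None):
    """A server's claimed n-step trace; a dishonest server corrupts one value
    and recomputes consistently from there on."""
    t = [x0]
    for _ in range(n):
        t.append(step(t[-1]))
    if cheat_at is not None:
        t[cheat_at] ^= 1
        for i in range(cheat_at, n):
            t[i + 1] = step(t[i])
    return t

def referee(t_a, t_b):
    """If the final answers differ, find the first index where the traces
    diverge and verify that single step locally; trust the valid server."""
    if t_a[-1] == t_b[-1]:
        return t_a[-1]
    i = next(i for i in range(len(t_a)) if t_a[i] != t_b[i])
    # one local step() evaluation suffices to identify the honest trace
    honest = t_a if step(t_a[i - 1]) == t_a[i] else t_b
    return honest[-1]

t_honest = trace(7, 100)
t_cheat = trace(7, 100, cheat_at=42)
assert referee(t_honest, t_cheat) == t_honest[-1]
```

Note the key asymmetry this model buys: the client never re-runs the computation; it only compares traces and checks a single disputed step.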
Last updated:  2015-02-02
Verifiability, Privacy, and Coercion-Resistance: New Insights from a Case Study
Ralf Kuesters, Tomasz Truderung, Andreas Vogt
In this paper, we present new insights into central properties of voting systems, namely verifiability, privacy, and coercion-resistance. We demonstrate that the combination of the two forms of verifiability considered in the literature---individual and universal verifiability---is, contrary to common belief, insufficient to guarantee overall verifiability. We also demonstrate that the relationship between coercion-resistance and privacy is more subtle than suggested in the literature. Our findings are partly based on a case study of prominent voting systems, ThreeBallot and VAV, for which, among other things, we show that, contrary to common belief, they do not provide any reasonable level of verifiability, even though they satisfy individual and universal verifiability. Also, we show that the original variants of ThreeBallot and VAV provide a better level of coercion-resistance than of privacy.
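For readers unfamiliar with ThreeBallot, its tallying mechanics can be sketched in a few lines: each voter fills three ballots, marking the chosen candidate on exactly two of them and every other candidate on exactly one, so a candidate's vote count is the total number of marks minus the number of voters. This hypothetical mini-election illustrates only the counting rule, not the verifiability and privacy properties the paper analyzes:

```python
import random

def three_ballot(choice, n_cands):
    """A voter's three ballots: the chosen candidate gets exactly two marks,
    every other candidate exactly one (mark positions chosen at random)."""
    ballots = [[0] * n_cands for _ in range(3)]
    for c in range(n_cands):
        k = 2 if c == choice else 1
        for b in random.sample(range(3), k):
            ballots[b][c] = 1
    return ballots

def tally(box, n_voters, n_cands):
    """Each candidate's vote count = total marks - number of voters."""
    marks = [sum(ballot[c] for ballot in box) for c in range(n_cands)]
    return [m - n_voters for m in marks]

random.seed(3)
votes = [0, 1, 0, 2, 0]                  # 5 voters, 3 candidates
box = [b for v in votes for b in three_ballot(v, 3)]
random.shuffle(box)                      # single ballots are cast separately
print(tally(box, len(votes), 3))         # [3, 1, 1]
```

Because each single ballot looks the same regardless of the vote, individual ballots were thought to reveal nothing; the paper's case study shows the actual guarantees are weaker than this intuition suggests.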
Last updated:  2011-09-22
Protecting AES with Shamir's Secret Sharing Scheme
Louis Goubin, Ange Martinelli
Cryptographic algorithms embedded on physical devices are particularly vulnerable to Side Channel Analysis (SCA). The most common countermeasure for block cipher implementations is masking, which randomizes the variables to be protected by combining them with one or several random values. In this paper, we propose an original masking scheme based on Shamir's Secret Sharing scheme~\cite{Sha79} as an alternative to Boolean masking. We detail its implementation for the AES using the same tool as Rivain and Prouff in CHES 2010~\cite{RP10}: multi-party computation. We then conduct a security analysis of our scheme in order to compare it to Boolean masking. Our results show that for a given amount of noise the proposed scheme, implemented at the first order, provides the same security level as $3^{\mathrm{rd}}$- to $4^{\mathrm{th}}$-order Boolean masking, together with better efficiency.
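For reference, the underlying primitive can be sketched as follows: Shamir $(t, n)$-sharing splits a secret via a random degree-$(t-1)$ polynomial and recovers it by Lagrange interpolation at $x = 0$. For simplicity this sketch works in a small prime field; the AES masking scheme in the paper operates in $\mathrm{GF}(2^8)$:

```python
import random

P = 257   # toy prime field (the paper's AES scheme works in GF(2^8))

def share(secret, t, n):
    """Shamir (t, n): evaluate a random degree-(t-1) polynomial
    with f(0) = secret at the points x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

random.seed(4)
shares = share(42, t=2, n=3)
print(reconstruct(shares[:2]))   # 42: any t shares recover the secret
```

Masking a sensitive variable this way means an attacker observing fewer than $t$ shares learns nothing about it, which is what buys the higher-order resistance the abstract quantifies.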
Last updated:  2011-09-22
A general conjecture similar to T-D conjecture and its applications in constructing Boolean functions with optimal algebraic immunity
Qingfang Jin, Zhuojun Liu, Baofeng Wu, Xiaoming Zhang
In this paper, we propose two classes of $2k$-variable Boolean functions, which have optimal algebraic immunity under the assumption that a general combinatorial conjecture is correct. These functions also have high algebraic degree and high nonlinearity. One class contains bent functions, and the other class consists of balanced functions.
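The cryptographic criteria mentioned here are all computable from a function's truth table. As an illustration (not the paper's constructions), the standard fast Walsh-Hadamard transform below computes the nonlinearity of the 4-variable bent function $f = x_1 x_2 \oplus x_3 x_4$:

```python
def walsh(tt, n):
    """Walsh-Hadamard spectrum of a Boolean function given as a truth table."""
    W = [(-1) ** b for b in tt]
    step = 1
    while step < 1 << n:
        for i in range(0, 1 << n, 2 * step):
            for j in range(i, i + step):
                W[j], W[j + step] = W[j] + W[j + step], W[j] - W[j + step]
        step *= 2
    return W

def nonlinearity(tt, n):
    """nl(f) = 2^(n-1) - max |W_f| / 2."""
    return (1 << (n - 1)) - max(abs(w) for w in walsh(tt, n)) // 2

n = 4
# f(x1..x4) = x1*x2 XOR x3*x4, a bent function on four variables
tt = [((x >> 3 & 1) & (x >> 2 & 1)) ^ ((x >> 1 & 1) & (x & 1))
      for x in range(1 << n)]
print(nonlinearity(tt, n))   # 6, the maximum possible for n = 4
```

Bent functions attain the maximal nonlinearity $2^{n-1} - 2^{n/2 - 1}$ but are never balanced, which is why the paper offers the two classes as a trade-off.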