All papers in 2010 (Page 2 of 660 results)
Password-Protected Secret Sharing
We revisit the problem of protecting a user's private data against
adversarial compromise of the user's device(s) that would normally store
this data. We formalize an attractive solution to this problem as
Password-Protected Secret Sharing (PPSS), a protocol
that allows a user to secret-share her data among n trustees in such
a way that (1) the user can retrieve the shared secret upon entering a
correct password into a reconstruction protocol, which succeeds as long
as at least t+1 honest trustees participate, and (2) the shared data
remains secret even against an adversary that corrupts at most t
servers, with the level of protection expected of
password authentication, i.e. the probability that the adversary
learns anything useful about the secret is at most negligibly greater
than q/|D|, where q is the number of reconstruction protocol
instances in which the adversary engages and |D| is the size of the
dictionary from which the password was randomly chosen.
We propose an efficient PPSS protocol in the public key model, i.e.
where the device can remember a trusted public key, provably secure
under the DDH assumption, using non-interactive zero-knowledge proofs
which are efficiently instantiatable in the Random Oracle Model (ROM).
The resulting protocol is robust and practical, with fewer than
exponentiations per party, and with only three messages
exchanged between the user and each server, implying a single round of
interaction in the on-line phase. As a side benefit, our PPSS protocol
yields a new Threshold Password-Authenticated Key Exchange (T-PAKE)
protocol in the public key model which is significantly faster than
existing T-PAKEs provably secure in the public key model in ROM.
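The (t+1)-of-n reconstruction threshold at the heart of PPSS is the standard Shamir secret-sharing property. A minimal sketch of plain Shamir sharing (without the password layer or the PPSS protocol itself):

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    # Share i is the degree-t polynomial evaluated at x = i (i = 1..n).
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789   # any t+1 = 3 shares suffice
```

Fewer than t+1 shares reveal nothing, which is exactly the property PPSS combines with password authentication.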
On CCA-Secure Fully Homomorphic Encryption
It is well known that any encryption scheme which supports any form of homomorphic operation cannot be secure against adaptive chosen-ciphertext attacks. The question then arises as to what is the most stringent security definition which is achievable by homomorphic encryption schemes. Prior work has shown that various schemes which support a single homomorphic operation can be shown to be IND-CCA1, i.e. secure against lunchtime attacks. In this paper we extend this analysis to the recent fully homomorphic encryption scheme proposed by Gentry, as refined by Gentry, Halevi, Smart and Vercauteren. We show that the basic Gentry scheme is not IND-CCA1; indeed a trivial lunchtime attack allows one to recover the secret key. We then show that a minor modification to the variant of the somewhat homomorphic encryption scheme of Smart and Vercauteren will allow one to achieve IND-CCA1, indeed PA-1, in the standard model assuming a lattice-based knowledge assumption. We also examine the security of the scheme against another security notion, namely security in the presence of ciphertext-validity-checking oracles, and show why CCA-like notions are important in applications in which multiple parties submit encrypted data to the ``cloud'' for secure processing.
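The folklore argument that homomorphism rules out IND-CCA2 security is just malleability: shift the challenge ciphertext homomorphically and hand the (now different) result to the decryption oracle. A toy demonstration with textbook Paillier, using tiny insecure parameters for illustration only:

```python
import math, random

# Textbook Paillier with toy parameters (insecure; illustration only).
p, q = 1789, 1867
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid because we use g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: Enc(m1) * Enc(m2) is an encryption of m1 + m2.
assert dec(enc(5) * enc(7) % n2) == 12

# CCA2 attack: given a challenge ciphertext, shift it by Enc(1) and
# query the decryption oracle on the mauled (hence admissible) ciphertext.
challenge = enc(42)                 # adversary does not know 42
c_mauled = challenge * enc(1) % n2  # differs from the challenge
assert dec(c_mauled) - 1 == 42      # oracle answer reveals the plaintext
```

The same shift-and-query pattern applies to any scheme with a usable homomorphic operation, which is the point the abstract opens with.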
Optimal Eta Pairing on Supersingular Genus-2 Binary Hyperelliptic Curves
This article presents a novel pairing algorithm over supersingular genus-2 binary hyperelliptic curves. Starting from Vercauteren's work on optimal pairings, we describe how to exploit the action of the -th power Verschiebung in order to reduce the loop length of Miller's algorithm even further than the genus- approach.
As a proof of concept, we detail an optimized software implementation and an FPGA accelerator for computing the proposed optimal Eta pairing on a genus-2 hyperelliptic curve over , which satisfies the recommended security level of bits. These designs achieve favourable performance in comparison with the best known implementations of -bit-security Type-1 pairings from the literature.
Solving LWE problem with bounded errors in polynomial time
In this paper, we present a new algorithm such that, for
learning with errors (LWE) problems, if the errors are bounded -- that is, the errors do not span the whole prime finite field but a fixed known subset of size ( ) -- which we call the learning with bounded errors (LWBE) problem, we can solve it with complexity .
Last updated: 2011-01-14
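One known route to this kind of result is linearization (in the spirit of Arora-Ge; shown here as an assumed reading of the abstract, since its symbols were lost in extraction, and not necessarily the paper's exact algorithm): with errors in {0,1}, every sample satisfies (b - <a,s>)(b - <a,s> - 1) = 0 mod q, a quadratic constraint in s that becomes linear after introducing one unknown per monomial. A toy sketch:

```python
import random
random.seed(1)

q, n = 13, 2                      # toy parameters
s = [5, 9]                        # secret in Z_q^n
monos = [(0,), (1,), (0, 0), (0, 1), (1, 1)]   # s0, s1, s0^2, s0*s1, s1^2

def sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(2)       # bounded error: e in {0, 1}
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def row(a, b):
    # (b - t)(b - t - 1) = 0 with t = <a,s>  =>  t^2 + (1-2b)t = b - b^2
    co = {(i,): (1 - 2 * b) * a[i] % q for i in range(n)}
    for i in range(n):
        for j in range(i, n):
            c = a[i] * a[j] if i == j else 2 * a[i] * a[j]
            co[(i, j)] = c % q
    return [co[m] for m in monos] + [(b - b * b) % q]

rows = [row(*sample()) for _ in range(40)]

# Gaussian elimination over Z_q (reduced row echelon form).
piv = 0
for col in range(len(monos)):
    r = next((i for i in range(piv, len(rows)) if rows[i][col] % q), None)
    if r is None:
        continue
    rows[piv], rows[r] = rows[r], rows[piv]
    inv = pow(rows[piv][col], q - 2, q)
    rows[piv] = [x * inv % q for x in rows[piv]]
    for i in range(len(rows)):
        if i != piv and rows[i][col] % q:
            f = rows[i][col]
            rows[i] = [(x - f * y) % q for x, y in zip(rows[i], rows[piv])]
    piv += 1

# With enough samples the system has full rank, so row i holds monomial i.
recovered = [rows[0][-1], rows[1][-1]]   # the linear monomials s0, s1
assert recovered == s
```

The cost is polynomial because the number of monomials, and hence the linear system, grows only polynomially when the error set has fixed size.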
A Digital Signature Based on Multivariate Polynomials over Fq
We propose a digital signature scheme based on multivariate polynomials over finite fields. We generate a multivariate polynomial F(X) of high degree and construct the digital signature scheme using F(X). Our system is immune to Gröbner basis attacks because obtaining the secret parameters of F(X) amounts to solving a system of multivariate algebraic equations, which is an NP-complete problem.
Definitional Issues in Functional Encryption
We provide a formalization of the emergent notion of ``functional encryption,'' as well as introduce various security notions for it, and study relations among the latter. In particular, we show that indistinguishability and semantic security based notions of security are {\em inequivalent} for functional encryption in general; in fact, ``adaptive'' indistinguishability does not even imply ``non-adaptive'' semantic security. This is alarming given the large body of work employing (special cases of) the former. We go on to show, however, that in the ``non-adaptive'' case an equivalence does hold between indistinguishability and semantic security for what we call {\em preimage sampleable} schemes. We take this as evidence that for preimage sampleable schemes an indistinguishability based notion may be acceptable in practice. We show that some common functionalities considered in the literature satisfy this requirement.
RNS arithmetic in and application to fast pairing computation
In this work, we are interested in arithmetic in large prime fields and their extensions of small degree.
We explain why it is very interesting to use RNS arithmetic for the base field when computations in have to be done, essentially thanks to lazy reduction. This is for example the case for pairing computations on ordinary curves (such as MNT or BN curves). We prove that using RNS can considerably decrease the number of basic operations required for a pairing computation in many popular situations.
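The core of RNS arithmetic: pick pairwise-coprime moduli, represent a number by its residues, and add or multiply channel-wise with no carries between channels; the Chinese Remainder Theorem maps the result back. A minimal sketch:

```python
from functools import reduce

M = [2**13 - 1, 2**17 - 1, 2**19 - 1]      # pairwise-coprime RNS moduli
prod = reduce(lambda a, b: a * b, M)        # dynamic range of the RNS base

def to_rns(x):
    return [x % m for m in M]

def rns_mul(u, v):
    # Channel-wise: no carry propagation between moduli (the RNS advantage;
    # it is also what enables lazy reduction in pairing arithmetic).
    return [(a * b) % m for a, b, m in zip(u, v, M)]

def from_rns(u):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(u, M):
        Mi = prod // m
        x += r * Mi * pow(Mi, -1, m)
    return x % prod

a, b = 123456, 654321
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b
```

Results are exact as long as they stay below the product of the moduli, which is why RNS bases are sized to the field being targeted.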
Cryptanalysis of a Fast Encryption Scheme for Databases and of its Variant
Protecting confidentiality in large databases without degrading their performance is a challenging problem, especially when encryption and decryption must be performed at the database level or at the application level. We here focus on symmetric ciphers for database encryption, since they are the only type of cipher with acceptable performance for most applications. We point out that stream ciphers are the adequate type of encryption scheme. We present an attack on a dedicated stream cipher proposed by Ge and Zdonik in 2007, and on a variant of this encryption scheme.
Last updated: 2011-08-16
Strongly Secure Certificate-Based Encryption Scheme with Low Communication Bandwidth
Certificate-based encryption (CBE) is a new asymmetric encryption paradigm which was introduced to solve the certificate management problem in traditional public-key encryption (PKE). It combines PKE and identity-based encryption (IBE) while preserving some of their most attractive features. CBE provides an efficient implicit certificate mechanism which eliminates third-party queries and simplifies the certificate revocation problem of traditional PKI. It also solves the key escrow and key distribution problems inherent in IBE. In this paper, we first present a new security model for CBE, which defines the public key replacement attack and strengthens the power of adversaries against CBE. Our model is more elaborate and stronger than other existing ones. We then propose an efficient CBE scheme which is proven secure in the proposed security model, in the random oracle model. When compared with other existing CBE schemes, our scheme enjoys better performance, especially in communication bandwidth.
A Note on Zero-Knowledge Proofs of Knowledge and the ZKPOK Ideal Functionality
In this note, we provide a formal proof of the fact that any protocol that is a zero-knowledge proof of knowledge for a relation is also a secure protocol for the zero-knowledge proof of knowledge functionality, where the latter is defined according to the standard framework of stand-alone secure computation. Although this is a well-known fact, to the best of our knowledge, no full proof of this has been published.
A Note on the Relation between the Definitions of Security for Semi-Honest and Malicious Adversaries
In secure computation, a set of parties wish to jointly compute some function of their private inputs while preserving security properties like privacy, correctness and more. The two main adversary models that have been considered are \emph{semi-honest} adversaries who follow the prescribed protocol but try to glean more information than allowed from the protocol transcript, and \emph{malicious} adversaries who can run any efficient strategy in order to carry out their attack. As such they can deviate at will from the prescribed protocol. One would naturally expect that any protocol that is secure in the presence of malicious adversaries will automatically be secure in the presence of semi-honest adversaries. However, due to a technicality in the definition, this is not necessarily true. In this brief note, we explain why this is the case, and show that a slight modification to the definition of semi-honest adversaries (specifically, allowing a semi-honest adversary to change its received input) suffices for fixing this anomaly.
Our aim in publishing this note is to make this curious fact more known to the wider cryptographic community.
Isogenies and Cryptography
This thesis explores the notion of isogenies and its applications to cryptography. Elliptic curve cryptography (ECC) is an efficient public-key cryptosystem with a short key size. For this reason it is suitable for implementation on memory-constrained devices such as smart cards, mobile devices, etc. However, these devices leak information about their private key through side channels (power consumption, electromagnetic radiation, timing, etc.) during cryptographic processing. In this thesis we have examined countermeasures against a specific side channel attack (power consumption) using isogenies (rational homomorphisms between elliptic curves) and elliptic curve isomorphisms. We found that these methods are an efficient way of securing cryptographic devices using ECC against power analysis attacks. We have also investigated the security and efficiency of implementation of a public key cryptosystem based on isogenies. We found that in order to implement the proposed cryptosystem one has to compute a root of the Hilbert polynomial HD(X) over Fp. Since there is no known efficient way of achieving this calculation, the proposed cryptosystem cannot be used in practice.
A Novel Non-interactive Deniable Authentication Protocol with Designated Verifier on elliptic curve cryptosystem
Recently, many non-interactive deniable authentication (NIDA) protocols have been proposed. They mainly come in two types, signature-based and shared-secrecy-based. After reviewing these schemes, we found that the signature-based approach cannot deny the source of the message and thus cannot achieve full deniability, and that the shared-secrecy-based approach suffers from key-compromise impersonation (KCI) attacks although it achieves full deniability. In addition, both types of schemes lack efficiency, since they are mainly based on DLP, factoring, or bilinear pairings. Due to this observation, in this paper we use the Fiat-Shamir heuristic to propose a new ECC-based NIDA protocol which not only achieves full deniability but also is more efficient than all of the previously proposed schemes, due to the inherent properties of elliptic curve cryptosystems. Further, we prove that full deniability and KCI resistance are conflicting properties for a NIDA protocol. Besides, we deduce that a NIDA protocol is deniable if and only if it is perfect zero-knowledge.
SHA-512/256
With the emergence of pervasive 64-bit computing, we observe that it is more cost effective to compute a SHA-512 than it is to compute a SHA-256 over a given size of data. We propose a standard way to use SHA-512 and truncate its output to 256 bits. For 64-bit architectures, this would yield a more efficient 256-bit hashing algorithm than the current SHA-256. We call this method SHA-512/256. We also provide a method for reducing the size of the SHA-512 constants table that an implementation will need to store.
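The truncation idea is easy to sketch with Python's hashlib. Note that SHA-512/256 as later standardized in FIPS 180-4 additionally uses distinct initial hash values, so that truncated and full SHA-512 digests are domain-separated; the sketch below shows only the naive truncation:

```python
import hashlib

def sha512_truncated_256(data: bytes) -> bytes:
    # Compute SHA-512 (fast on 64-bit CPUs) and keep the first 256 bits.
    # NOTE: plain truncation, not FIPS 180-4 SHA-512/256, which also
    # replaces the initial hash values for domain separation.
    return hashlib.sha512(data).digest()[:32]

d = sha512_truncated_256(b"abc")
assert len(d) == 32
assert d == hashlib.sha512(b"abc").digest()[:32]
```

On 64-bit hardware each SHA-512 compression processes a 128-byte block versus SHA-256's 64-byte block, which is the cost argument the abstract makes.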
Last updated: 2011-10-05
Symmetric-key Searchable keyword Concealment (SSC)
We discuss what is commonly known as “searchable symmetric keyword encryption,” although we prefer to replace “encryption” with “concealment,” since many of these transformations are not (efficiently) reversible (they look more like one-way hashing).
The thrust of this paper is practical approaches to cryptographic solutions for cloud database security and privacy. We limit the scope of this paper to symmetric-key keyword concealment systems, and show the following: (i) for the attacker (with the DB admin as the attacker), any Symmetric-key Searchable keyword Concealment (SSC) with high probability behaves, as far as the attacker is concerned, like a deterministic cipher (therefore, we might as well use deterministic systems and reap their efficiency); (ii) range queries can be handled more securely and more efficiently than in current proposals, such as OPE, at the cost of some extra bandwidth, but good compromises are easy to find; (iii) conjunctions of keywords (and indeed any Boolean predicate) are easy to handle at the cost of some additional information leakage. We do not know of any real example where this extra leak matters.
Timed Encryption and Its Application
In this paper, we propose a new notion of timed encryption, in which the encryption is secure within some time bound while it is totally insecure after some longer time bound. We are interested in the case where both bounds are polynomial. We propose a concrete construction that is provably secure in the random oracle model. We show that it can be generically (although inefficiently) constructed from the timed commitment of Boneh and Naor (CRYPTO'00). Finally, we apply this primitive to construct a deniable secure key exchange protocol, where the deniability and secrecy both hold adaptively and the adversary can conduct session-state reveal attacks and eavesdropping attacks in the non-eraser model. Our protocol is the first to achieve each of the following properties: adaptive deniability admitting eavesdropping attacks, and deniability admitting session-state reveal attacks in the non-eraser model. Our protocol is constructed using a timing restriction (inherited from the timed encryption). However, the requirement is rather weak: it essentially asks a user to respond to a ciphertext as soon as possible and hence does not artificially cause any delay. Our use of timed encryption for deniability is to obtain the plaintext by forceful decryption and hence does not rely on any random oracle assumption (even if the secrecy proof needs this).
Optimal XOR based (2,n)-Visual Cryptography Schemes
A (2,n)-Visual Cryptography Scheme (VCS) is a kind of secret sharing scheme in which n participants share a secret image: any two of them can recover the secret image visually, without any cryptographic knowledge or computation devices, while any single participant can learn nothing about the secret image other than its size. This paper studies the (2,n)-VCS_{XOR}, and shows the smallest (optimal) pixel expansion of such schemes, and the largest possible contrast for the (2,n)-VCS_{XOR} given its optimal pixel expansion. It also shows the largest (optimal) contrast of the (2,n)-VCS_{XOR}, and the smallest possible pixel expansion of such schemes given their optimal contrast. The results of this paper show that the (2,n)-VCS_{XOR} can achieve smaller pixel expansion and larger contrast than the (2,n)-VCS_{OR}. It also shows that the construction of the basis matrix of an optimal contrast (2,n)-VCS_{XOR} is equivalent to the construction of binary codes reaching maximum capability, and gives the construction of a specific class of optimal contrast (2,n)-VCS_{XOR} for n=2^{k}-1.
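For the two-participant special case, an XOR scheme degenerates to a one-time pad per pixel, which already shows why XOR-based schemes can achieve ideal contrast with no pixel expansion there; the general (2,n) constructions in the paper build on binary codes. A toy (2,2) sketch:

```python
import random

def share_22(secret_bits):
    # Share 1 is uniformly random; share 2 = share 1 XOR secret (per pixel).
    s1 = [random.randrange(2) for _ in secret_bits]
    s2 = [a ^ b for a, b in zip(s1, secret_bits)]
    return s1, s2

secret = [1, 0, 1, 1, 0, 0, 1, 0]     # a tiny "image" as a row of pixels
s1, s2 = share_22(secret)

# Either share alone is uniformly random (no information); XORing both
# reconstructs the secret exactly, with no pixel expansion.
assert [a ^ b for a, b in zip(s1, s2)] == secret
```

The contrast/expansion trade-offs the paper optimizes arise only for n > 2, where each pair of the n shares must jointly reconstruct.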
Semantic Security Under Related-Key Attacks and Applications
In a related-key attack (RKA) an adversary attempts to break a cryptographic primitive by invoking the primitive with several secret keys which satisfy some known, or even chosen, relation.
We initiate a formal study of RKA security for \emph{randomized encryption} schemes. We begin by providing general definitions for
semantic security under passive and active RKAs. We then focus on RKAs in which the keys satisfy known linear relations over some Abelian group. We construct simple and efficient schemes which resist such RKAs even when the adversary can choose the linear relation adaptively during the attack.
More concretely, we present two approaches for constructing RKA-secure encryption schemes. The first is based on standard randomized encryption schemes which additionally satisfy a natural ``key-homomorphism'' property. We instantiate this approach under number-theoretic or lattice-based assumptions such as the Decisional Diffie-Hellman (DDH) assumption and the Learning Noisy Linear Equations assumption. Our second approach is based on RKA-secure pseudorandom generators. This approach can yield either {\em deterministic,} {\em one-time use} schemes with optimal ciphertext size or randomized unlimited use schemes. We instantiate this approach by constructing a simple RKA-secure pseudorandom generator under a variant of the DDH assumption.
Finally, we present several applications of RKA-secure encryption by showing that previous protocols which made a specialized use of random oracles in the form of \emph{operation respecting synthesizers} (Naor and Pinkas, Crypto 1999) or \emph{correlation-robust hash functions} (Ishai et al., Crypto 2003) can be instantiated with RKA-secure encryption schemes. This includes the Naor-Pinkas protocol for oblivious transfer (OT) with adaptive queries, the IKNP protocol for batch-OT, the optimized garbled circuit construction of Kolesnikov and Schneider (ICALP 2008), and other results in the area of secure computation. Hence, by plugging in our constructions we get instances of these protocols that are provably secure in the standard model under standard assumptions.
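The ``key-homomorphism'' behind the first approach can be illustrated with plain ElGamal: shifting the secret key by a known delta shifts the public key multiplicatively by g^delta, so behaviour under the related key k + delta can be simulated from public values alone. A toy sketch (small insecure group; this illustrates the property, not the paper's full scheme):

```python
# Toy group: Z_p^* with p = 2q+1 a safe prime; g = 4 generates the
# order-q subgroup (4 is a square and q is prime).
p, q = 1019, 509
g = 4
k = 123                    # secret key
h = pow(g, k, p)           # public key

def enc(pk, m, r):
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def dec(sk, c):
    c1, c2 = c
    return c2 * pow(c1, q - sk, p) % p   # c2 / c1^sk, exponents mod q

# Key-homomorphism: the public key for the related key k + delta is
# publicly computable as h * g^delta -- no secret material needed.
delta = 77
h_shifted = h * pow(g, delta, p) % p
assert h_shifted == pow(g, k + delta, p)

# Ciphertexts under the related key decrypt with k + delta as expected.
m = 42
c = enc(h_shifted, m, r=31)
assert dec((k + delta) % q, c) == m
```

This ability to track linear key shifts publicly is what lets the security reduction answer related-key queries without knowing the key.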
Functional Encryption: Definitions and Challenges
We initiate the formal study of functional encryption by giving
precise definitions of the concept and its security. Roughly
speaking, functional encryption supports restricted secret keys that
enable a key holder to learn a specific function of encrypted data,
but learn nothing else about the data. For example, given an
encrypted program the secret key may enable the key holder to learn
the output of the program on a specific input without learning
anything else about the program.
We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.
Squaring in cyclotomic subgroups
We propose new squaring formulae for cyclotomic subgroups of certain finite fields. Our formulae use a compressed representation of elements having the property that decompression can be performed at a very low cost. The squaring formulae lead to new exponentiation algorithms in cyclotomic subgroups which outperform the fastest previously-known exponentiation algorithms when the exponent has low Hamming weight. Our algorithms can be adapted to accelerate the final exponentiation step of pairing computations.
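The reason low-Hamming-weight exponents matter: square-and-multiply costs one squaring per bit of the exponent but one multiplication only per set bit, so making squarings cheap (as the compressed cyclotomic representation does) dominates the cost. A counting sketch:

```python
def square_and_multiply(base, exponent, modulus):
    """Left-to-right square-and-multiply, counting both operation types."""
    result, squarings, multiplications = 1, 0, 0
    for bit in bin(exponent)[2:]:
        result = result * result % modulus          # one squaring per bit
        squarings += 1
        if bit == "1":
            result = result * base % modulus        # one mult per set bit
            multiplications += 1
    return result, squarings, multiplications

r, s, m = square_and_multiply(3, 0b10000001, 1000003)   # Hamming weight 2
assert r == pow(3, 0b10000001, 1000003)
assert s == 8 and m == 2   # mults scale with Hamming weight, not bit length
```

(The count includes the vacuous initial squaring of 1; conventions that skip it save one squaring but the asymptotics are the same.)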
One-time Computable and Uncomputable Functions
This paper studies the design of cryptographic schemes that are secure even if implemented on untrusted machines, whose internals can be partially observed/controlled by an adversary. For example, this includes machines that are infected with a software virus.
We introduce a new cryptographic notion that we call a {\em one-time computable pseudorandom function (PRF)}, which is a PRF that can be evaluated \emph{at most once} on a machine which stores the (long) key , as long as: (1) the adversary cannot retrieve the key out of the machine completely (this is similar to the assumptions made in the so-called {\em Bounded-Retrieval Model}), and (2) the local read/write memory of the machine is restricted, and not too much larger than the size of . In particular, the \emph{only way} to evaluate on such a device is to overwrite part of the key , preventing all future evaluations of at any other point . We show that this primitive can be used to construct schemes for \emph{password-protected storage} that are secure against dictionary attacks, even by a virus that infects the machine. Our constructions rely on the random-oracle model, and lower bounds for {\em graph pebbling} problems.
We show that our techniques can also be used to construct another primitive, which we call {\em uncomputable hash functions}: hash functions that cannot be computed if the local storage has some restricted size , but {\em can} be computed given slightly more storage than . We show that this tool can be used to improve the communication complexity of {\em proofs-of-erasure} schemes, introduced recently by Perito and Tsudik (ESORICS 2010).
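A cartoon of the one-time evaluation idea (our own toy, not the paper's construction): store the key as a chain of blocks and make each evaluation step destructively overwrite the block it reads, so a second evaluation no longer has the key material it needs:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class OneTimePRF:
    """Toy: evaluating at any point consumes (overwrites) the stored key."""
    def __init__(self, key_blocks):
        self.blocks = list(key_blocks)

    def evaluate(self, x: bytes) -> bytes:
        acc = x
        for i in range(len(self.blocks)):
            acc = H(self.blocks[i], acc)
            self.blocks[i] = H(b"erase", acc)   # destructive overwrite
        return acc

key = [H(bytes([i])) for i in range(4)]
f1, f2 = OneTimePRF(key), OneTimePRF(key)
first = f1.evaluate(b"point")
assert f2.evaluate(b"point") == first    # fresh key copies agree
assert f1.evaluate(b"point") != first    # f1's key material was consumed
```

The paper's actual guarantee is far stronger: graph-pebbling lower bounds show that any memory-restricted strategy, not just the honest one, must destroy the key to evaluate.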
Rational Secret Sharing with Side Information in Point-to-Point Networks via Time-Delayed Encryption
In this paper, we give the first construction of a rational secret
sharing protocol that is strict Nash (or Nash with respect to
trembles) in the computational sense, works in a standard
point-to-point network with loose synchronization (i.e. does not rely
on the availability of a broadcast channel), and can tolerate
players with arbitrary side information about the secret. Since this
has been proved impossible in the plain model, our protocol requires
us to make assumptions about upper and lower bounds on the
computational resources of the participants, and also to assume that
they are out of sync with each other by at most a known quantity
and that their network latency is at most some known quantity
. Specifically, we define and realize (in the random-oracle
model, and assuming so-called standard architecture) time-delayed
encryption, a primitive that allows a sender to create a ciphertext
such that, for some parameter , it will take time for
the recipient to recover the plaintext, where, following the work on
memory-bound functions, time is measured in memory accesses.
Rational parties will prefer to follow our protocol for reconstructing
the secret even given a lot of side information about this
secret. This was not true of any of the previously proposed strict
Nash protocols for this task: In fact access to side information gave
players in those protocols an incentive to deviate. As a result, for
the first time, we have a rational secret reconstruction protocol that
can be used in applications where the secret can be useful in the
outside world (e.g. it can give players access to a valuable resource,
or decrypt a file).
Indifferentiable Deterministic Hashing to Elliptic and Hyperelliptic Curves
At Crypto 2010, Brier et al. proposed the first construction of a hash function into ordinary elliptic curves that was indifferentiable from a random oracle, based on Icart's deterministic encoding from Crypto 2009. Such a hash function can be plugged into any cryptosystem that requires hashing into elliptic curves, while not compromising proofs of security in the random oracle model. However, the proof relied on relatively involved tools from algebraic geometry, and only applied to Icart's deterministic encoding from Crypto 2009.
In this paper, we present a new, simpler technique based on bounds of character sums to prove the indifferentiability of similar hash function constructions based on essentially any deterministic encoding to elliptic curves or curves of higher genus, such as the algorithms by Shallue, van de Woestijne and Ulas, or the Icart-like encodings recently presented by Kammerer, Lercier and Renault. In particular, we get the first constructions of well-behaved hash functions to Jacobians of hyperelliptic curves.
Our technique also provides more precise estimates on the statistical behavior of those deterministic encodings and the hash function constructions based on them. Additionally, we can derive pseudorandomness results for partial bit patterns of such encodings.
Rotational Rebound Attacks on Reduced Skein
In this paper we combine the recent rotational cryptanalysis with the rebound attack, which results in the best cryptanalysis of Skein, a candidate for the SHA-3 competition. The rebound attack approach was so far only applied to AES-like constructions. For the first time, we show that this approach can also be applied to very different constructions. In more detail, we develop a number of techniques that extend the reach of both the inbound and the outbound phase, leading to rotational collisions for about 53/57 out of the 72 rounds of the Skein-256/512 compression function and the Threefish cipher. At this point, the results do not threaten the security of the full-round Skein hash function.
The new techniques include an analytical search for optimal input values in the rotational cryptanalysis, which allows us to extend the outbound phase of the attack with a precomputation phase, an approach never used in any rebound-style attack before. Further, we show how to combine multiple inside-out computations and neutral bits in the inbound phase of the rebound attack, and give well-defined rotational distinguishers as certificates of weaknesses for the compression functions and block ciphers.
Meet-in-the-Middle Attack on 8 Rounds of the AES Block Cipher under 192 Key Bits
The AES block cipher has a 128-bit block length and a user key of 128, 192 or 256 bits; it was released by NIST for data encryption in the USA and became an ISO international standard in 2005. In 2008, Demirci and Selçuk gave a meet-in-the-middle attack on 7-round
AES under 192 key bits. In 2009, Demirci et al. (incorrectly) described a new meet-in-the-middle attack on 7-round AES under 192 key bits. Subsequently, Dunkelman et al. described an attack on 8-round AES under 192 key bits by taking advantage of several advanced techniques, including one about the key schedule. In this paper, we show that by exploiting a simple observation on the key schedule, a meet-in-the-middle attack on 8-round AES under 192 key bits can be obtained from Demirci and Selçuk's and Demirci et al.'s work, and a more efficient attack can be obtained when taking
into account Dunkelman et al.'s observation on the key schedule. In the single-key attack scenario, attacking 8 rounds is the best currently known cryptanalytic result for AES in terms of the number of attacked rounds, and our attack has a dramatically smaller data
complexity than the currently known attacks on 8-round AES under 192 key bits.
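The generic meet-in-the-middle idea behind such attacks, shown on a toy double-encryption cipher (the round function and keys here are hypothetical toys): tabulate the middle values from the plaintext side under all first-half keys, then match from the ciphertext side, replacing a 2^16 key search with about 2 * 2^8 work plus a table:

```python
def f(x, k):                       # toy invertible round: 8-bit values/keys
    return ((x + k) % 256) ^ 0x5A

def f_inv(y, k):
    return ((y ^ 0x5A) - k) % 256

def enc(x, k1, k2):
    return f(f(x, k1), k2)

k1, k2 = 0x3C, 0xA7               # unknown to the attacker
pairs = [(p, enc(p, k1, k2)) for p in (0x00, 0x41, 0x7F)]

# Meet in the middle on the first pair: 2*256 round evaluations + a table,
# instead of 256*256 trial encryptions over all (k1, k2).
p0, c0 = pairs[0]
table = {}
for g1 in range(256):
    table.setdefault(f(p0, g1), []).append(g1)
candidates = [(g1, g2) for g2 in range(256)
              for g1 in table.get(f_inv(c0, g2), [])]

# Filter the surviving candidates against the remaining pairs.
survivors = [(g1, g2) for g1, g2 in candidates
             if all(enc(p, g1, g2) == c for p, c in pairs[1:])]
assert (k1, k2) in survivors
```

The AES attacks follow the same match-in-the-middle pattern, with the key-schedule observations shrinking the sets of key guesses that must be tabulated.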
On The Impact of Target Technology in SHA-3 Hardware Benchmark Rankings
Both FPGAs and ASICs are widely used as target technologies in the SHA-3 hardware benchmarking process. However, the impact of the target technology on SHA-3 hardware benchmark rankings has hardly been considered. A cross-platform comparison between the FPGA and ASIC results of the 14 second-round SHA-3 designs demonstrates the gap between the two sets of benchmarking results. In this paper we describe a systematic approach to analyzing a SHA-3 hardware benchmarking
process for both FPGAs and ASICs, and we present our latest results for FPGA and ASIC evaluation of the 14 second-round SHA-3 candidates.
Linear Analysis of Reduced-Round CubeHash
Recent developments in the field of cryptanalysis of hash functions
have inspired NIST to announce a competition for selecting a new cryptographic hash function to join the SHA family of standards. One of the 14 second-round candidates is CubeHash, designed by Daniel J. Bernstein. CubeHash is a unique hash function in the sense that it does not iterate a common compression function, and offers a structure which resembles a sponge function, even though it is not exactly a sponge function. In this paper we analyze reduced-round variants of CubeHash where the adversary controls the full 1024-bit input to reduced-round CubeHash and can observe its full output. We show that linear approximations with high biases exist in reduced-round variants. For example, we present an 11-round
linear approximation with bias of 2^{-235}, which allows distinguishing 11-round CubeHash using about 2^{470} queries. We also discuss the extension of this distinguisher to 12 rounds using message modification techniques. Finally, we present a linear distinguisher for 14-round CubeHash which uses
about 2^{812} queries.
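The query counts follow the standard rule of thumb that exploiting a bias of epsilon needs on the order of epsilon^{-2} samples (hence 2^{470} queries for a 2^{-235} bias). A toy demonstration: the linear approximation ``output = 0'' for an AND gate holds with probability 3/4 (bias 1/4), and a simple counting distinguisher separates it from a uniform bit:

```python
import random
random.seed(7)

def and_gate_bit():
    # The approximation "output = 0" holds with prob 3/4: bias 1/4.
    return random.randrange(2) & random.randrange(2)

def uniform_bit():
    return random.randrange(2)

def distinguish(sample, n=1024, bias=0.25):
    zeros = sum(1 - sample() for _ in range(n))
    # Threshold halfway between n/2 (uniform) and n*(1/2 + bias) (biased).
    return zeros > n * (0.5 + bias / 2)

assert distinguish(and_gate_bit) is True     # roughly 3n/4 zeros
assert distinguish(uniform_bit) is False     # roughly n/2 zeros
```

With n much larger than bias^{-2}, the zero-counts concentrate many standard deviations apart, which is exactly how the CubeHash distinguishers turn a tiny bias into a detectable signal.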
Balanced Boolean Functions with Optimum Algebraic Immunity and High Nonlinearity
In this paper, three constructions of balanced Boolean functions with optimum algebraic immunity are proposed. Cryptographic properties of the constructed functions, such as algebraic degree and nonlinearity, are also analyzed.
Last updated: 2012-01-05
Deterministic Public-Key Encryption Revisited
This paper revisits the notion of deterministic public-key encryption (DE) --- introduced by Bellare, Boldyreva, and O'Neill~(CRYPTO 2007) and further studied by Bellare et al.~(CRYPTO 2008), Boldyreva et al.~(CRYPTO 2008), and Hemenway et al.~(ePrint 2010) --- which hides all possible partial information about high-entropy messages. Our results serve to unify as well as generalize/strengthen the prior work. First, we propose a general construction of DE that reduces the latter to trapdoor functions admitting certain kinds of hardcore functions. Roughly, the latter should be {\em robust}, meaning they remain hardcore when the input is conditioned on an event that occurs with good probability; for the resulting DE scheme to be multi-message secure, the hardcore function should also be ``correlated-input secure.'' We then show that most known DE schemes, both in the random oracle and standard models, can be viewed as instantiations of our general construction or optimizations thereof, thereby explaining how these different schemes come about. We also give novel instantiations in the standard model, in both the single and multi-message cases:
* In the single-message case, we give an instantiation based on one-way trapdoor functions, whereas previous constructions based on one-wayness due to Bellare et al.~required a trapdoor permutation. In particular, we can use a trapdoor function that is super-polynomially hard to invert on high-entropy distributions but exponentially hard to invert on uniform inputs.
* In the multi-message case, we give an instantiation meeting a new notion of `` -bounded'' multi-message security we introduce, for any polynomial , based on lossy trapdoor functions losing an fraction of their input. Prior work achieved only in the standard model. We also give an optimized version of this instantiation by extending some ideas of Boldyreva et al. Both of these results are based on generalizations of the (Crooked) Leftover Hash Lemma (LHL) that build on that of Kiltz et al.~(EUROCRYPT 2009). Interestingly, the optimized scheme relies on the fact that the Crooked LHL is more tolerant of ``imperfection'' of the hash function than the classical one.
An additional contribution is to give a more precise definitional equivalence (between semantic security and indistinguishability style security notions) for DE as compared to prior work. In particular, this makes our security proofs for instantiations based on one-wayness simpler than in prior work.
A 3-Subset Meet-in-the-Middle Attack: Cryptanalysis of the Lightweight Block Cipher KTANTAN
In this paper we describe a variant of existing meet-in-the-middle attacks on block ciphers. As an application, we propose meet-in-the-middle attacks that are applicable to the full 254-round KTANTAN family of block ciphers accepting an 80-bit key. The attacks are due to weaknesses in its bitwise key schedule. We report an attack of time complexity 2^75.170 encryptions on the full KTANTAN32 cipher with only 3 plaintext/ciphertext pairs, as well as 2^75.044 encryptions on the full KTANTAN48 and 2^75.584 encryptions on the full KTANTAN64 with 2 plaintext/ciphertext pairs. All these attacks work in the classical attack model without any related keys.
In the differential
related-key model, we demonstrate 218- and 174-round differentials holding with probability 1. This shows that a strong related-key property can translate
to a successful attack in the non-related-key setting. Having extremely low data requirements, these attacks are valid even in RFID-like environments where only a very limited amount of text material may be available to an attacker.
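The core meet-in-the-middle principle that such attacks refine can be illustrated on a toy cipher whose two stages depend on disjoint key halves (a hypothetical 16-bit example, not KTANTAN itself): instead of searching all 2^16 keys, we meet at the intermediate value with two sweeps of 2^8 each.

```python
def rotl8(x, r):
    return ((x << r) | (x >> (8 - r))) & 0xFF

def rotr8(x, r):
    return ((x >> r) | (x << (8 - r))) & 0xFF

def enc(k1, k2, p):
    # toy two-stage cipher: stage 1 uses only k1, stage 2 uses only k2
    mid = ((p + k1) & 0xFF) ^ 0x5A
    return rotl8(mid ^ k2, 3)

def mitm(p, c):
    # forward sweep over k1: intermediate value -> candidate k1 values
    fwd = {}
    for k1 in range(256):
        fwd.setdefault(((p + k1) & 0xFF) ^ 0x5A, []).append(k1)
    # backward sweep over k2, meeting at the intermediate value
    cands = []
    for k2 in range(256):
        mid = rotr8(c, 3) ^ k2
        for k1 in fwd.get(mid, []):
            cands.append((k1, k2))
    return cands

k1, k2, p = 0xA7, 0x13, 0x42
c = enc(k1, k2, p)
cands = mitm(p, c)
assert (k1, k2) in cands            # the real key always survives
# a second plaintext/ciphertext pair filters out false positives
p2 = 0x99
c2 = enc(k1, k2, p2)
survivors = [kk for kk in cands if enc(kk[0], kk[1], p2) == c2]
assert (k1, k2) in survivors
```

Two 2^8 sweeps replace a 2^16 exhaustive search; the KTANTAN attacks refine this idea by splitting the key bits into three subsets rather than two disjoint halves.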
Comparison of seven SHA-3 candidates software implementations on smart cards.
In this work, we present and compare implementations of seven SHA-3 second-round candidates on two different architectures used on smart cards: the Intel 8051 and the ARM7TDMI. After presenting the performance of our implementations, we explain, for each candidate, the main differences between our 8-bit and 32-bit implementations. Then, we compare our results to those of two benchmarks published at the second SHA-3 candidates conference this summer, and deduce a ranking according to performance and to what we call 8-bit tolerance.
Generating Pairing-friendly Parameters for the CM Construction of Genus 2 Curves over Prime Fields
We present two contributions in this paper. First, we give a quantitative analysis of the scarcity of pairing-friendly genus 2 curves. This result is an improvement relative to prior work, which estimated the density of pairing-friendly genus 2 curves heuristically. Second, we present a method for generating pairing-friendly parameters for which the ρ-value, a measure of efficiency in pairing-based cryptography, is small. This method works by solving a system of equations given in terms of coefficients of the Frobenius element. The algorithm is easy to understand and implement.
Constant-Round Private Function Evaluation with Linear Complexity
We consider the problem of private function evaluation (PFE) in the
two-party setting. Here, informally, one party holds an input x
while the other holds a circuit describing a function f; the goal
is for one (or both) of the parties to learn f(x) while revealing
nothing more to either party. In contrast to the usual setting of
secure computation --- where the function being computed is known to
both parties --- PFE is useful in settings where the function (i.e.,
algorithm) itself must remain secret, e.g., because it is proprietary
or classified.
It is known that PFE can be reduced to standard secure computation by
having the parties evaluate a universal circuit, and this is the
approach taken in most prior work. Using a universal circuit,
however, introduces additional overhead and results in a more complex
implementation. We show here a completely new technique for PFE that
avoids universal circuits, and results in constant-round protocols
with communication/computational complexity linear in the size of the
circuit computing f. This gives the first constant-round protocol
for PFE with linear complexity (without using fully homomorphic
encryption), even restricted to semi-honest adversaries.
The Digital Signature Scheme MQQ-SIG
This document contains the Intellectual Property Statement and the technical description of MQQ-SIG, a new public-key digital signature scheme. The complete scientific publication covering the design rationale and the security analysis will be given in a separate publication. MQQ-SIG consists of quadratic polynomials with Boolean variables where , , or .
Faster Explicit Formulas for Computing Pairings over Ordinary Curves
We describe efficient formulas for computing pairings on ordinary elliptic curves over prime fields. First, we generalize lazy reduction techniques, previously considered only for arithmetic in quadratic extensions, to the whole pairing computation, including towering and curve arithmetic. Second, we introduce a new compressed squaring formula for cyclotomic subgroups and a new technique to avoid performing an inversion in the final exponentiation when the curve is parameterized by a negative integer. The techniques are illustrated in the context of pairing computation over Barreto-Naehrig curves, where they have a particularly efficient realization, and also combined with
other important developments in the recent literature. The resulting formulas reduce the number of required operations and, consequently, execution time, improving on the state-of-the-art performance of cryptographic pairings by 27%-33% on several popular 64-bit computing platforms. In particular, our techniques allow a pairing to be computed in under 2 million cycles for the first time on such architectures.
Torus-based compression by factor 4 and 6
We extend the torus-based compression technique for cyclotomic subgroups and show how the elements of certain subgroups in characteristic two and three fields can be compressed by a factor of 4 and 6, respectively. Our compression and decompression functions can be computed at a negligible cost. In particular, our techniques lead to very efficient exponentiation algorithms that work with the compressed representations of elements and can be easily incorporated into pairing-based protocols that require exponentiations or products of pairings.
Combining properties of cryptographic hash functions
A ``strong'' cryptographic hash function suitable for practical applications should simultaneously satisfy many security properties, like pseudo-randomness, collision resistance and unforgeability. This paper shows how to combine two hash function families, each satisfying a different security property, into one hash function family which satisfies both properties. In particular, given a pseudo-random hash function family and a collision-resistant hash function family, we construct a combiner which satisfies both pseudo-randomness and collision resistance. We also present a combiner for collision resistance and everywhere preimage resistance. When designing a new hash function family for some particular application, we can use such combiners with existing primitives and thus obtain a hash function family satisfying all needed properties.
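The textbook concatenation combiner conveys the flavor of such constructions (this is the classical collision-resistance combiner, shown here with SHA-256 and SHA3-256 as the two families; it is not the specific combiner proposed in the paper):

```python
import hashlib

def combined_hash(data: bytes) -> bytes:
    # Concatenation combiner: a collision for the combined function
    # implies simultaneous collisions for BOTH underlying functions,
    # so collision resistance survives as long as either one is secure.
    return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()

d1 = combined_hash(b"message A")
d2 = combined_hash(b"message B")
assert len(d1) == 64   # 32 bytes from each component
assert d1 != d2
```

The price of this simple combiner is a doubled output length; the paper's point is to build combiners that preserve several different properties at once.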
Affine Masking against Higher-Order Side Channel Analysis
In the last decade, an effort has been made by the research community to find efficient ways to thwart side channel analysis (SCA) against physical implementations of cryptographic algorithms. A common countermeasure for implementations of block ciphers is Boolean masking, which randomizes the variables to be protected by the bitwise addition of one or several random values. However, advanced techniques called higher-order SCA attacks exist that overcome such a countermeasure. These attacks are greatly favored by the very nature of Boolean masking. In this paper, we revisit the affine masking initially introduced by Von Willich in 2001 as an alternative to Boolean masking. We show how to apply it to AES at the cost of a small timing overhead compared to Boolean masking. We then conduct an in-depth analysis pinpointing the leakage reduction implied by affine masking. Our results clearly show that the proposed scheme provides an excellent performance-security trade-off to protect AES against higher-order SCA.
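A minimal sketch of the affine masking operation over GF(2^8), the AES field, assuming the usual form x -> r1*x XOR r2 with a nonzero multiplicative mask r1 (the sketch covers only masking and unmasking, not the masked S-box computation):

```python
import secrets

def gf_mul(a, b):
    # multiplication in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

def gf_inv(a):
    # brute-force multiplicative inverse (fine for a 256-element demo)
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def mask(x, r1, r2):
    # affine mask: r1 * x + r2 over GF(2^8), with r1 != 0
    return gf_mul(r1, x) ^ r2

def unmask(y, r1, r2):
    return gf_mul(gf_inv(r1), y ^ r2)

r1 = secrets.randbelow(255) + 1     # multiplicative mask must be nonzero
r2 = secrets.randbelow(256)         # additive (Boolean) mask
for x in (0x00, 0x53, 0xFF):
    assert unmask(mask(x, r1, r2), r1, r2) == x
```

The multiplicative component r1 is what breaks the simple relationship between a masked value and its mask that higher-order attacks exploit against plain Boolean masking.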
Signatures Resilient to Continual Leakage on Memory and Computation
Recent breakthrough results by Brakerski et al. and Dodis et al. have shown that signature schemes can be made secure even if the adversary continually obtains information leakage from the secret key of the scheme. However, the schemes currently do not allow leakage on the secret key and randomness during signing, except in the random oracle model. Further, the random oracle based schemes require updates to the secret key in order to maintain security, even when no leakage during computation is present. We present the first signature scheme that is resilient to full continual leakage: memory leakage as well as leakage from processing during signing (both from the secret key and the randomness), during key generation, and during updates. Our scheme can tolerate leakage of a 1 - o(1) fraction of the secret key between updates, and is proven secure in the standard model based on the symmetric external DDH (SXDH) assumption in bilinear groups. The time periods between updates are a function of the amount of leakage in the period (and nothing more). Our construction makes new use of the Groth-Sahai proof systems, and in particular avoids composing proofs, which gives improved efficiency. In addition, we introduce a new tool: independent pre-image resistant hash functions, which may be of independent interest.
Linear Approximations of Addition Modulo 2^n-1
Addition modulo 2^31-1 is a basic arithmetic operation in the stream cipher ZUC. For evaluating ZUC's resistance to linear cryptanalysis, it is necessary to study properties of linear approximations of addition modulo 2^31-1. In this paper we discuss linear approximations of addition modulo 2^n-1 for an arbitrary integer n. As results, an exact formula for the correlations of linear approximations of addition modulo 2^n-1 is given for the case when two inputs are involved, and an iterative formula for the case when more than two inputs are involved. For a class of special linear approximations with all masks being equal to 1, we further discuss the limit of their correlations as n goes to infinity. Let k be the number of inputs of the addition modulo 2^n-1. It is shown that when k is even, the limit is equal to zero, and when k is odd, the limit is bounded by a constant depending on k.
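For small n, correlations of this kind can be checked by exhaustive enumeration. The sketch below computes the correlation of the mask-1 approximation of k-input addition modulo 2^n-1, taking 0..2^n-2 as the residue representatives (the function and symbol names are ours):

```python
import itertools

def correlation(n, k=2, mask=1):
    # correlation of the approximation
    #   mask.(x1+...+xk mod 2^n-1)  ~  mask.x1 ^ ... ^ mask.xk
    # computed by exhaustive enumeration over Z_{2^n-1} = {0,...,2^n-2}
    m = (1 << n) - 1
    total = 0
    for xs in itertools.product(range(m), repeat=k):
        z = sum(xs) % m
        bit = bin(z & mask).count("1") & 1      # parity of masked output
        for x in xs:
            bit ^= bin(x & mask).count("1") & 1 # parity of masked inputs
        total += 1 if bit == 0 else -1
    return total / m**k

# n = 2 (addition modulo 3): the mask-1 approximation has correlation 1/3
assert abs(correlation(2) - 1/3) < 1e-12
```

Such brute-force checks are only feasible for toy n; the point of the paper's exact and iterative formulas is precisely to avoid the 2^{nk} enumeration.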
Implementing Gentry's Fully-Homomorphic Encryption Scheme
We describe a working implementation of a variant of Gentry's fully homomorphic encryption scheme (STOC 2009), similar to the variant used in an earlier implementation effort by Smart and Vercauteren (PKC 2010). Smart and Vercauteren implemented the underlying "somewhat homomorphic" scheme, but were not able to implement the bootstrapping functionality that is needed to get the complete scheme to work. We show a number of optimizations that allow us to implement all aspects of the scheme, including the bootstrapping functionality.
Our main optimization is a key-generation method for the underlying somewhat homomorphic encryption that does not require full polynomial inversion. This reduces the asymptotic complexity from \tilde{O}(n^{2.5}) to \tilde{O}(n^{1.5}) when working with dimension-n lattices (and practically reduces the time from many hours/days to a few seconds/minutes). Other optimizations include a batching technique for encryption, a careful analysis of the degree of the decryption polynomial, and some space/time trade-offs for the fully-homomorphic scheme.
We tested our implementation with lattices of several dimensions, corresponding to several security levels: from a "toy" setting in dimension 512, to "small," "medium," and "large" settings in dimensions 2048, 8192, and 32768, respectively. The public key ranges from 70 Megabytes for the "small" setting to 2.3 Gigabytes for the "large" setting. The time to run one bootstrapping operation (on a 1-CPU 64-bit machine with large memory) ranges from 30 seconds for the "small" setting to 30 minutes for the "large" setting.
Preimage Resistance Beyond the Birthday Bound: Double-Length Hashing Revisited
Security proofs are an essential part of modern cryptography. Often the challenge is not to come up with appropriate schemes but rather to technically prove that these satisfy the desired security properties.
We provide for the first time techniques for proving asymptotically optimal preimage resistance bounds for block cipher based double-length, double-call hash functions. More precisely, we consider compression functions making two calls to an ideal block cipher with an n-bit block size. Optimally, an adversary trying to find a preimage should require 2^{2n} queries to the underlying block cipher. As a matter of fact there have been several attempts to prove the preimage resistance of such compression functions, but no proof went beyond the 2^n barrier, leaving a huge gap compared to the optimal bound.
In this paper, we introduce two new techniques for lifting this bound to 2^{2n}. We demonstrate our new techniques for a simple and natural design: the concatenation of two instances of the well-known Davies-Meyer compression function.
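To make the objects concrete: a Davies-Meyer compression function turns a block cipher E into f(h, m) = E_m(h) XOR h, and a double-length design concatenates two such instances. The toy sketch below uses a small Feistel network as a stand-in for the ideal cipher, and the wiring of the two instances is an illustrative assumption, not the paper's exact design:

```python
def feistel_enc(key, block, rounds=8):
    # toy 16-bit Feistel "block cipher" -- a stand-in for an ideal cipher
    L, R = block >> 8, block & 0xFF
    for i in range(rounds):
        rk = (key >> (8 * (i % 4))) & 0xFF       # cycle through key bytes
        L, R = R, L ^ (((R * 167) + rk + i) & 0xFF)
    return (L << 8) | R

def davies_meyer(h, m):
    # Davies-Meyer compression: f(h, m) = E_m(h) XOR h
    return feistel_enc(m, h) ^ h

def double_length(h1, h2, m):
    # concatenation of two Davies-Meyer instances, each keyed on the
    # message together with the other half of the chaining value
    # (an illustrative wiring, not the paper's exact construction)
    return (davies_meyer(h1, (m << 16) | h2),
            davies_meyer(h2, (m << 16) | h1))

out = double_length(0x0123, 0x4567, 0x89AB)
assert out == double_length(0x0123, 0x4567, 0x89AB)  # deterministic
assert all(0 <= v <= 0xFFFF for v in out)            # 2n-bit output
```

The 2n-bit chaining value is what makes 2^{2n} preimage resistance possible in principle; the paper's contribution is proof techniques that actually reach that bound.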
Boolean functions with all main cryptographic properties
In this paper, we propose a class of 2k-variable Boolean functions which have optimal algebraic degree, very high nonlinearity, and are 1-resilient. Based on our newly proposed conjecture, it can be shown that the algebraic immunity of our functions is at least suboptimal. Moreover, when k is odd, the algebraic immunity is actually optimal, and for even k, we find that the algebraic immunity is optimal at least for .
Cryptanalysis of block EnRUPT
EnRUPT is a cryptographic primitive with a variable block and key length. We show several attacks on it that stem from properties of its keying, including a very fast related-key attack.
Key Agreement Protocols Based on Multivariate Polynomials over Fq
In this paper we propose new key agreement protocols based on multivariate polynomials over a finite field Fq. Concretely, we generate a multivariate polynomial F(X) \in Fq[x1,..,xn] such that F(X) = \sum^m_{i=1} ki[Ai(X)^d + Ai(X)^{d-1} + .. + Ai(X)], where Ai(X) = ai1x1 + ... + ainxn, with coefficients ki, aij \in Fq (i=1,..,m; j=1,..,n) and variables X = (x1,..,xn)^T. The common key K(X) has the form K(X) = \sum^m_{i=1} hi F((bi1x1,...,binxn)^T), where hi, bij \in Fq (i=1,..,m; j=1,..,n) are the temporary secret keys of the partners. Our system is immune to Gröbner basis attacks because recovering the coefficients of F(X), which serve as secret keys, amounts to solving a system of multivariate algebraic equations, an NP-complete problem. Our protocols are also thought to be immune to differential attacks because the equations involved are of high degree.
Last updated: 2010-10-30
--Withdrawn--
Semi-Homomorphic Encryption and Multiparty Computation
An additively-homomorphic encryption scheme enables us to compute
linear functions of an encrypted input by manipulating only the ciphertexts. We define the relaxed notion of a semi-homomorphic
encryption scheme, where the plaintext can be recovered as long as the
computed function does not increase the size of the input "too
much". We show that a number of existing cryptosystems are
captured by our relaxed notion. In particular, we give examples of semi-homomorphic encryption schemes based on lattices, subset sum and factoring.
We then demonstrate how semi-homomorphic encryption schemes allow us
to construct an efficient multiparty computation protocol for arithmetic circuits, UC-secure against a dishonest majority. The protocol consists of a preprocessing phase and an online phase. Neither the inputs nor the function to be computed have to be known during preprocessing.
Moreover, the online phase is extremely efficient as it requires
no cryptographic operations: the parties only need to exchange additive shares and verify information-theoretic MACs.
Our contribution is therefore twofold: from a theoretical point of view, we can base multiparty computation on a variety of different assumptions, while on the practical side we offer a protocol with better efficiency than any previous solution.
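The online phase described above can be sketched with plain modular arithmetic: values are split into additive shares, and each share carries an information-theoretic MAC of the form a*v + b that the other party can check on opening. This is a simplified sketch in the spirit of the protocol, not its exact specification:

```python
import secrets

P = (1 << 61) - 1   # prime field for shares and MACs

def share(x):
    # additive secret sharing: x = x1 + x2 (mod P)
    x1 = secrets.randbelow(P)
    return x1, (x - x1) % P

def mac(key, v):
    # information-theoretic MAC: m = a*v + b (mod P)
    a, b = key
    return (a * v + b) % P

# "preprocessing": shares are dealt and MAC'd under keys held by the
# other party (in the real protocol this uses semi-homomorphic encryption)
x, y = 1234, 5678
x1, x2 = share(x)
y1, y2 = share(y)
key_x2 = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))  # a != 0
m_x2 = mac(key_x2, x2)

# online phase, no cryptography: additions happen locally on the shares
z1, z2 = (x1 + y1) % P, (x2 + y2) % P
assert (z1 + z2) % P == (x + y) % P

# opening x2: party 2 sends (x2, m_x2); party 1 checks the MAC
assert mac(key_x2, x2) == m_x2
# a modified share fails the check (here: always, since a != 0)
assert mac(key_x2, (x2 + 1) % P) != m_x2
```

Because the MAC is linear, MACs on shares of x and y can be combined into a MAC on a share of x + y, which is what keeps the online phase free of cryptographic operations.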
Key-Dependent Message Security: Generic Amplification and Completeness
Key-dependent message (KDM) secure encryption schemes provide secrecy even when the attacker sees encryptions of messages related to the secret key sk. Namely, the scheme should remain secure even when messages of the form f(sk) are encrypted, where f is taken from some function class F.
A KDM \emph{amplification} procedure takes an encryption scheme which satisfies F-KDM security and boosts it into a G-KDM secure scheme, where the function class G should be richer than F.
It was recently shown by Brakerski et al.~(TCC 2011) and
Barak et al.~(EUROCRYPT 2010), that a strong form of amplification is possible, provided that the underlying encryption scheme satisfies some special additional properties.
In this work, we prove the first \emph{generic} KDM amplification theorem which relies solely on the KDM security of the underlying scheme without making any other assumptions. Specifically, we show that an elementary form of KDM security against functions in which each output bit either copies or flips a single bit of the key (aka \emph{projections}) can be amplified into KDM security with respect to any function family that can be computed in arbitrary fixed polynomial time. Furthermore, our amplification theorem and its proof are insensitive to the exact setting of KDM security, and they hold in the presence of multiple keys and in the symmetric-key/public-key and the CPA/CCA cases. As a result, we can amplify the security of all known KDM constructions, including ones that could not be amplified before.
Finally, we study the minimal conditions under which full-KDM security (with respect to all functions) can be achieved. We show that under a strong notion of KDM security, the existence of cyclic-secure fully-homomorphic encryption is not only sufficient for full-KDM security, as shown by Barak et al., but also necessary. On the other hand, we observe that for standard KDM security, this condition can be relaxed by adapting Gentry's bootstrapping technique (STOC 2009) to the KDM setting.
Multi-Party Privacy-Preserving Set Intersection with Quasi-Linear Complexity
Secure computation of the set intersection functionality allows n
parties to find the intersection of their datasets without
revealing anything else about them. An efficient protocol for this
task could have multiple potential applications in commerce,
health-care, and security. However, all currently known secure set
intersection protocols for n parties have computational costs that
are quadratic in the (maximum) number of entries in the dataset
contributed by each party, rendering secure computation of set
intersection impractical on anything but small datasets.
In this paper we describe the first multi-party protocol for securely
computing the set intersection functionality with both the
communication and the computation costs that are quasi-linear in the
size of the datasets. Specifically, our protocols require
bits of communication and group
multiplications per player in the malicious adversary setting, where
is the size of each dataset and is security
parameter. Our protocol follows the basic idea of the protocol
proposed by Kissner and Song, but we gain efficiency by
using different representation of the polynomials associated with
users' datasets, and careful employment of algorithms that interpolate
or evaluate polynomials on multiple points more efficiently.
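The polynomial representation underlying this line of work, going back to Kissner and Song, can be shown without the encryption layer: each dataset is encoded as the monic polynomial whose roots are its elements, and intersection elements are exactly the common roots.

```python
P = 2_147_483_647   # prime modulus for the coefficient field

def poly_mul_linear(c, r):
    # multiply polynomial c (coefficients, lowest degree first) by (X - r)
    out = [0] * (len(c) + 1)
    for i, ci in enumerate(c):
        out[i] = (out[i] - r * ci) % P
        out[i + 1] = (out[i + 1] + ci) % P
    return out

def poly_from_roots(roots):
    # the monic polynomial prod (X - r) encoding a dataset
    c = [1]
    for r in roots:
        c = poly_mul_linear(c, r)
    return c

def poly_eval(c, x):
    # Horner evaluation mod P
    acc = 0
    for ci in reversed(c):
        acc = (acc * x + ci) % P
    return acc

A = {3, 7, 11, 19}
B = {5, 7, 19, 23}
pA, pB = poly_from_roots(A), poly_from_roots(B)
# an element lies in the intersection iff it is a root of both polynomials
inter = {s for s in A | B if poly_eval(pA, s) == 0 and poly_eval(pB, s) == 0}
assert inter == {7, 19}
```

In the actual protocols the coefficients are encrypted and randomized so no party learns the other datasets; the quasi-linear complexity comes from fast multipoint evaluation and interpolation of exactly these polynomials.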
On the complexity of Decomposition Attack
In recent researches, it is discovered that index calculus is
useful for solving the discrete logarithm problems (DLP) of the groups of the Jacobian of curves (including elliptic curve) over finite field, which are widely used to cryptosystems. In these cases, the probability that an element of the group is written by the summation of N elements of large primes and factor bases is O(1) where N is some pre-fixed constant. So the situation is little different to the normal index calculus and it is proposed that it should be called another name, ”decomposition attack”. In decomposition attack, first, some relations are collected and the graph, whose vertexes are the set of large primes and whose edges are the relations, is considered and the elimination of large prime is done by using this graph.
However, in the proposed algorithm, the randomness of the
graph, which is difficult to define, is needed. In this paper, we first formulate the decomposition attack and next propose a new algorithm, which does not require the randomness of the graph and its worst complexity can be estimated.
On Efficient Non-Interactive Oblivious Transfer with Tamper-Proof Hardware
Oblivious transfer (OT, for short) [RAB81] is a fundamental primitive in the foundations of cryptography. While in the standard model OT constructions rely on public-key cryptography, only very recently Kolesnikov [KOL10] showed a truly efficient string OT protocol using tamper-proof hardware tokens. His construction needs only a few evaluations of a block cipher and requires stateless (therefore resettable) tokens, which makes it very efficient for practical applications. However, the protocol needs to be interactive, which can be a hindrance in many client-server settings, and security against a malicious sender is achieved only in a covert sense: a malicious sender can actually obtain the private input of the receiver, but the receiver can detect this malicious behavior with probability 1/2.
Furthermore, the protocol does not enjoy forward security (by breaking a token, one violates the security of all previously played OTs).
In this work, we propose new techniques to achieve efficient non-interactive string OT using tamper-proof hardware tokens. While from one side our tokens need to be stateful, our protocol enjoys several appealing features: 1) it is secure against malicious receivers and the input privacy of honest receivers is guaranteed unconditionally against malicious senders, 2) it is forward secure, 3) it enjoys adaptive input security, therefore tokens can be sent before parties know their private inputs. This gracefully fits a large number of client-server settings (digital TV, e-banking) and thus many practical applications. On the bad side, the output privacy of honest receivers is not satisfied when tokens are reused for more than one execution.
A Fault Analytic Method against HB+
The search for lightweight authentication protocols suitable for low-cost RFID tags constitutes an active and challenging research area. In this context, a family of protocols based on the LPN problem has been proposed: the so-called HB-family. Despite the rich literature regarding the cryptanalysis of these protocols, there are no published results about the impact of fault analysis over them. The purpose of this paper is to fill this gap by presenting a fault analytic method against a prominent member of the HB-family: HB+ protocol. We demonstrate that the fault analysis model can lead to a flexible and effective attack against HB-like protocols, posing a serious threat over them.
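For context, one authentication round of HB+ (without any fault injection) looks as follows; the vector length and the percentage encoding of the LPN noise rate are illustrative choices, not the protocol's mandated parameters:

```python
import secrets

def dot(a, b):
    # inner product of two bit-vectors over GF(2), packed as ints
    return bin(a & b).count("1") & 1

def hbplus_round(x, y, noise_pct, k=32):
    b = secrets.randbits(k)                  # tag's blinding vector
    a = secrets.randbits(k)                  # reader's challenge
    nu = 1 if secrets.randbelow(100) < noise_pct else 0
    z = dot(a, x) ^ dot(b, y) ^ nu           # tag's (noisy) response
    return z == (dot(a, x) ^ dot(b, y))      # does the reader accept?

x, y = secrets.randbits(32), secrets.randbits(32)   # tag's LPN secrets
# without noise every round verifies; with LPN noise a fraction fails,
# which is why the reader accepts if most of many rounds pass
assert all(hbplus_round(x, y, noise_pct=0) for _ in range(50))
```

The security of the scheme rests on the hardness of recovering x and y from many noisy rounds (the LPN problem); a fault attack perturbs this computation on the tag to leak information that passive observation does not.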
On isotopisms of commutative presemifields and CCZ-equivalence of functions
A function F from \textbf{F} to itself is planar if for any nonzero a in \textbf{F} the function x -> F(x+a) - F(x) is a permutation. CCZ-equivalence is the most general known equivalence relation of functions preserving the planar property. This paper considers two possible extensions of CCZ-equivalence for functions over fields of odd characteristic, one proposed by Coulter and Henderson and the other by Budaghyan and Carlet. We show that the second one in fact coincides with CCZ-equivalence, while using the first one we generalize one of the known families of PN functions. In particular, we prove that, for any odd prime and any positive integers and , the indicators of the graphs of two functions from \textbf{F} to \textbf{F} are CCZ-equivalent if and only if the functions themselves are CCZ-equivalent.
We also prove that, for any odd prime , CCZ-equivalence of functions from \textbf{F} to \textbf{F} is strictly more general than EA-equivalence when and is greater than or equal to the smallest positive divisor of different from 1.
Quantum Preimage and Collision Attacks on CubeHash
In this paper we show a quantum preimage attack on CubeHash-512-normal with complexity 2^192. This kind of attack is expected to cost 2^256 for a good 512-bit hash function, and we argue that this violates the expected security of CubeHash. The preimage attack can also be used as a collision attack, given that a generic quantum collision attack on a 512-bit hash function requires 2^256 operations, as explained in the CubeHash submission document.
This attack only uses very simple techniques, most of which are borrowed from previous analysis of CubeHash: we just combine symmetry based attacks [1,8] with Grover's algorithm. However, it is arguably the first attack on a second-round SHA-3 candidate that is more efficient than the attacks considered by the designer.
Termination-Insensitive Computational Indistinguishability (and applications to computational soundness)
We define a new notion of computational indistinguishability:
termination-insensitive computational indistinguishability
(tic-indistinguishability). Tic-indistinguishability models
indistinguishability with respect to distinguishers that cannot
distinguish between termination and non-termination.
We sketch how the new notion allows one to obtain computational soundness
results of symbolic models for equivalence-based security properties
(such as anonymity) for processes that contain loops, solving an open
problem.
Practical Cryptanalysis of the Identification Scheme Based on the Isomorphism of Polynomial with One Secret Problem
This paper presents a practical cryptanalysis of the Identification
Scheme proposed by Patarin at Crypto 1996. This scheme relies on the
hardness of the Isomorphism of Polynomial with One Secret (IP1S),
and enjoys shorter keys than many other schemes based on the hardness
of a combinatorial problem (as opposed to number-theoretic
problems). Patarin proposed concrete parameters that have not been
broken faster than exhaustive search so far. On the theoretical
side, IP1S has been shown to be harder than Graph Isomorphism, which
makes it an interesting target. We present two new deterministic
algorithms to attack the IP1S problem, and we rigorously analyze
their complexity and success probability. We show that they can
solve a (big) constant fraction of all the instances of degree two
in polynomial time. We verified that our algorithms are very
efficient in practice. All the parameters with degree two proposed
by Patarin are now broken in a few seconds. The parameters with
degree three can be broken in less than a CPU-month. The
identification scheme is thus quite badly broken.
BiTR: Built-in Tamper Resilience
The assumption of the availability of tamper-proof hardware tokens has been
used extensively in the design of cryptographic primitives. For example,
Katz (Eurocrypt 2007) suggests them as an alternative to other setup
assumptions, towards achieving general UC-secure multi-party computation.
On the other hand, a lot of recent research has focused on protecting
security of various cryptographic primitives against physical attacks such
as leakage and tampering.
In this paper we put forward the notion of Built-in Tamper Resilience (BiTR)
for cryptographic protocols, capturing the idea that the protocol that is
encapsulated in a hardware token preserves its security properties even
when an adversary may tamper with its secret state.
Our definition is within the UC model, and can be viewed as unifying and
extending several prior related works. We provide a composition theorem for
BiTR security of protocols, as well as several BiTR constructions for
specific cryptographic protocols or tampering function classes. In
particular, relaxing the tamper-proof token assumption of Katz's work,
we achieve UC-secure computation based on a hardware token that may be
susceptible to affine tampering attacks. We also present BiTR proofs for
identification and signature schemes in the same tampering model.
We next observe that non-malleable codes can be used as state encodings to
prove the BiTR property and show new positive results for deterministic
non-malleable encodings (as opposed to the probabilistic ones previously
known) for various classes of tampering functions.
Proving Coercion-Resistance of Scantegrity II
By now, many voting protocols have been proposed that, among others, are designed to achieve coercion-resistance, i.e., resistance to vote buying and voter coercion. Scantegrity II is among the most prominent and successful such protocols in that it has been used in several elections. However, almost none of the modern voting protocols used in practice, including Scantegrity II, has undergone a rigorous cryptographic analysis.
In this paper, we prove that Scantegrity II enjoys an optimal level of coercion-resistance, i.e., the same level of coercion-resistance as an ideal voting protocol (which merely reveals the outcome of the election), except for so-called forced abstention attacks. This result is obtained under the (necessary) assumption that the workstation used in the protocol is honest.
Our analysis is based on a rigorous cryptographic definition of coercion-resistance we recently proposed. We argue that this definition is in fact the only existing cryptographic definition of coercion-resistance suitable for analyzing Scantegrity II. Our case study should encourage and facilitate rigorous cryptographic analysis of coercion-resistance also for other voting protocols used in practice.
Group Homomorphic Encryption: Characterizations, Impossibility Results, and Applications
We give a complete characterization both in terms of security and design of all currently existing group homomorphic encryption schemes, i.e., existing encryption schemes with a group homomorphic decryption function such as ElGamal and Paillier. To this end, we formalize and identify the basic underlying structure of all existing schemes and say that such schemes are of shift-type. Then, we construct an abstract scheme that represents all shift-type schemes (i.e., every scheme occurs as an instantiation of the abstract scheme) and prove its IND-CCA1 (resp. IND-CPA) security equivalent to the hardness of an abstract problem called Splitting Oracle-Assisted Subgroup Membership Problem, SOAP (resp. Subgroup Membership Problem, SMP). Roughly, SOAP asks for solving an SMP instance, i.e., for deciding whether a given ciphertext is an encryption of the neutral element of the ciphertext group, while allowing access to a certain oracle beforehand. Our results allow for contributing to a variety of open problems such as the IND-CCA1 security of Paillier's scheme, or the use of linear codes in group homomorphic encryption.
Furthermore, we design a new cryptosystem which provides features that are unique up to now: Its IND-CPA security is based on the k-linear problem introduced by Shacham, and Hofheinz and Kiltz, while its IND-CCA1 security is based on a new k-problem that we prove to have the same progressive property, namely that if the k-instance is easy in the generic group model, the (k+1)-instance is still hard.
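ElGamal is the standard example of a scheme whose decryption is a group homomorphism: multiplying ciphertexts componentwise multiplies the underlying plaintexts. A sketch with toy parameters (a tiny Schnorr group, for illustration only; real deployments use much larger groups):

```python
import secrets

# ElGamal in a small Schnorr group: p = 2q + 1 with q prime,
# working in the order-q subgroup generated by g
p = 2039                        # safe prime: p = 2*1019 + 1
q = 1019
g = pow(2, (p - 1) // q, p)     # generator of the order-q subgroup

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def enc(m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def dec(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 * c1^(-sk) mod p

m1, m2 = pow(g, 5, p), pow(g, 9, p)   # messages inside the subgroup
a1, b1 = enc(m1)
a2, b2 = enc(m2)
# componentwise product of ciphertexts encrypts the product of plaintexts
product_ct = ((a1 * a2) % p, (b1 * b2) % p)
assert dec(product_ct) == (m1 * m2) % p
```

In the paper's terminology this decryption map is the group homomorphism whose kernel structure (the subgroup membership problem) governs IND-CPA security.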
ATTACKS ON THE AKACP PROTOCOL
We discuss a recently proposed one-pass authenticated key agreement protocol, by Mohammad, Chen, Hsu and Lo, which was “derived” from their corresponding two-pass version and claimed to be secure. We show that this is not the case by demonstrating a number of vulnerabilities.
Secure Computations on Non-Integer Values
In this paper we present, for the first time, a framework that allows secure two-party computations on approximations of real-valued signals. In our solution, we use a quantized logarithmic representation of the signal samples, which makes it possible to represent both very small and very large numbers with bounded relative error. We show that numbers represented in this way can be encrypted using standard homomorphic encryption schemes; furthermore, we give protocols that allow all arithmetic operations to be performed on such encrypted values. Finally, we demonstrate the practicality of our framework by applying it to the problem of filtering encrypted signals.
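The quantized logarithmic representation can be sketched as follows. This is a minimal illustration of the idea on plaintext values only; the base and quantization step are parameters we chose for the example, and the paper's actual encoding and encrypted-domain protocols may differ.

```python
import math

BASE = 2.0       # logarithm base b (assumed parameter)
GAMMA = 1 / 16   # quantization step in the log domain (assumed parameter)

def encode(x):
    """Quantized-log encoding: (sign, integer exponent round(log_b|x| / gamma))."""
    assert x != 0
    sign = 1 if x > 0 else -1
    e = round(math.log(abs(x), BASE) / GAMMA)
    return sign, e

def decode(sign, e):
    return sign * BASE ** (e * GAMMA)

def mul(a, b):
    """Multiplication is exact in this representation: add exponents, multiply signs."""
    (sa, ea), (sb, eb) = a, b
    return sa * sb, ea + eb

x, y = 3.14159, 1.0e-9
ex, ey = encode(x), encode(y)
# relative error of a single encoding is at most b**(gamma/2) - 1
assert abs(decode(*ex) - x) / abs(x) <= BASE ** (GAMMA / 2) - 1
# a product computed entirely on encodings stays within b**gamma - 1 relative error
p = decode(*mul(ex, ey))
assert abs(p - x * y) / abs(x * y) <= BASE ** GAMMA - 1
```

Note how both 3.14 and 1e-9 fit the same small integer range, which is what makes the representation compatible with integer-plaintext homomorphic schemes.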
Co-Z Divisor Addition Formulae in Jacobian of Genus 2 Hyperelliptic Curves over Prime Fields
In this paper we propose a new approach to divisor scalar multiplication in the Jacobian of genus 2 hyperelliptic curves over fields of odd characteristic, without field inversion. It is based on improved addition formulae for weight 2 divisors in projective divisor representation in the most frequent case, which suit scalar multiplication algorithms based on Euclidean addition chains very well.
Last updated: 2010-10-12
Number formula and degree level of ergodic polynomial functions over / and generalized result of linear equation on ergodic power-series T-Function
Jin-Song Wang and Wen-Feng Qi gave a necessary and sufficient condition for a polynomial function with integer coefficients modulo to be a single cycle T-function; that is, generates a single cycle if and only if , are odd, are even, , and , where . They also gave a linear equation over the coordinate sequences of the sequence generated by iterating a polynomial single cycle T-function, namely
\begin{equation}
x_{i+2^{j-1},j}=x_{i,j}+x_{i,j-1}+ajA_{i,2}+a(j-1)+b \pmod{2},\quad 3\leq j\leq
n-1 \label{equ.1}
\end{equation}
where , is the j-th bit of , is a sequence of period , and a, b are constants determined by the coefficients .
In this paper, using Anashin's general theory, some detailed combinatorial results on Stirling numbers, and Larin's result, we give a counting formula for ergodic (single cycle) polynomial T-functions of a given degree. Then, for fixed , we determine the least degree such that all single-cycle polynomial transformations can be expressed as polynomials of degree not exceeding over . We deduce that Jin-Song Wang and Wen-Feng Qi's result is a special case of ours, and that their linear relation on the coordinate sequences generated by single cycle polynomial T-functions can be extended to a more general function class. The equation shows that the sequences generated by these T-functions have potential security problems. (We thank Professor Anashin for the hint that motivated this paper.)
Efficient Attributes for Anonymous Credentials (Extended Version)
We extend the Camenisch-Lysyanskaya anonymous credential system such that
selective disclosure of attributes becomes highly efficient. The resulting system
significantly improves upon existing approaches, which suffer from a linear
complexity in the total number of attributes. This limitation makes them unfit
for many practical applications, such as electronic identity cards. Our system
can incorporate a large number of binary and finite-set attributes without
significant performance impact. Our approach compresses all such attributes into
a single attribute base and, thus, boosts the efficiency of \emph{all} proofs of
possession. The core idea is to encode discrete binary and finite-set
values as prime numbers. We use the divisibility property for efficient
proofs of their presence or absence. In addition, we contribute efficient
methods for conjunctions and disjunctions. The system builds on the
Strong-RSA assumption.
We demonstrate the aptness of our method in
realistic application scenarios, such as electronic identity cards and
complex/structured credentials. Our method has crucial advantages in devices with
restricted computational capabilities, such as smartcards and cell phones.
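The prime-encoding idea can be illustrated in a few lines. The attribute-to-prime table below is our own toy assignment, not the paper's; in the actual system the encoding is a public parameter, and presence or absence is proven in zero knowledge inside a proof of possession rather than checked in the clear.

```python
from math import prod

# Hypothetical attribute-value-to-prime table (toy example).
PRIMES = {
    ("citizenship", "DE"): 2,
    ("citizenship", "FR"): 3,
    ("over18", True): 5,
    ("driver_license", True): 7,
}

def compress(attrs):
    """Pack a set of attribute values into one base: the product of their primes."""
    return prod(PRIMES[a] for a in attrs)

def has(e, attr):
    """Divisibility witnesses presence; the real scheme shows this in zero knowledge."""
    return e % PRIMES[attr] == 0

e = compress([("citizenship", "DE"), ("over18", True)])
assert has(e, ("over18", True))           # presence: 5 divides e
assert not has(e, ("citizenship", "FR"))  # absence: 3 does not divide e
```

Because all encoded attributes live in a single exponent, one proof of possession covers them all, which is the source of the claimed efficiency gain.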
A Practical (Non-interactive) Publicly Verifiable Secret Sharing Scheme
A publicly verifiable secret sharing (PVSS) scheme, proposed by Stadler in \cite{DBLP:conf/eurocrypt/Stadler96}, is a VSS scheme in which anyone, not only the shareholders, can verify that the secret shares are correctly distributed. PVSS can play essential roles in systems using VSS. Achieving the following two features simultaneously for PVSS is a challenging task:
\begin{itemize}
\item Efficient non-interactive public verification.
\item Proving security for the public verifiability in the standard model.
\end{itemize}
In this paper we propose a -threshold PVSS scheme which satisfies both of these properties. The efficiency of the non-interactive public verification step of the proposed scheme is optimal (in terms of computations of bilinear maps (pairings)) compared with the earlier solution of \cite{DBLP:conf/sacrypt/HeidarvandV08}. In the public verification step of \cite{DBLP:conf/sacrypt/HeidarvandV08}, one needs to compute many pairings, where is the number of shareholders, whereas in our scheme the number of pairing computations is only . This count is irrespective of the number of shareholders. We also provide a formal proof of the semantic security (IND) of our scheme based on the hardness of a problem that we call the -multi-sequence of exponents Diffie-Hellman problem (MSE-DDH). This problem falls under the general Diffie-Hellman exponent problem framework \cite{DBLP:conf/eurocrypt/BonehBG05}.
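For context, the plain threshold secret sharing that PVSS makes publicly verifiable can be sketched as follows. This is a toy Shamir (t,n) sharing over a prime field of our own choosing; the PVSS scheme itself additionally publishes encrypted shares and pairing-based correctness proofs, which are not shown here.

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (toy choice)

def share(secret, t, n):
    """Shamir (t,n): sample a random degree-(t-1) polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # den^(P-2) = den^(-1) mod P
    return s

secret = 123456789
shs = share(secret, t=3, n=5)
assert reconstruct(shs[:3]) == secret                 # any 3 shares suffice
assert reconstruct(random.sample(shs, 3)) == secret
```

What PVSS adds on top of this is that any outsider can check, from public values alone, that the dealer's encrypted shares really interpolate to one consistent secret.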
Stronger Security Model of Group Key Agreement
In PKC 2009, Gorantla, Boyd and González Nieto presented a nice result on modelling security for group key agreement (GKA) protocols. They proposed a novel security model (the GBG model) that better supports the adversary's queries than previous models for GKA protocols by considering KCI resilience. However, resistance to ephemeral key leakage attacks was left outside the scope of the GBG model. In this paper, we demonstrate an ephemeral key leakage attack on an existing GKA protocol which has been shown secure in the GBG model. We then extend the GBG model by granting the adversary the greater attack power of leaking ephemeral keys in a GKA protocol session. We also apply the well-known NAXOS trick to propose an improvement to an existing GKA protocol that resists the ephemeral key leakage attack. The security of the improved protocol is argued under our new model.
Last updated: 2010-11-12
A Suite of Identity Based Aggregate Signatures and a Multi-Signature Scheme from RSA
Fully aggregatable identity based signature schemes without prior communication between the signing parties are an interesting goal in identity based cryptography. On this front, we identify that deterministic identity based signature schemes lead to full aggregation of signatures without the aforementioned overhead. Inspired by Shamir's identity based signature scheme, we propose a deterministic identity based signature scheme which is also based on RSA. Based on this newly proposed scheme, we design a suite of four identity based aggregate signature schemes with different properties. The first two are deterministic identity based aggregate signature schemes, supporting full aggregation for general and ordered sequential aggregation respectively. The third and fourth are non-deterministic aggregate signature schemes, supporting full aggregation for general and ordered sequential aggregation respectively. We formally prove the schemes to be existentially unforgeable in the random oracle model. We also propose an efficient identity based multi-signature scheme which achieves aggregation in one round.
Efficient Fully Secure Predicate Encryption for Conjunctions, Disjunctions and k-CNF/DNF formulae
Predicate encryption is an important cryptographic primitive that has found wide applications as it allows for fine-grained key management. In a predicate encryption scheme for a class C of predicates, the owner of the master secret key can derive a secret key Sk_P for any predicate P in C. Similarly, when encrypting a plaintext M, the sender can specify an attribute vector x for the ciphertext Ct. Then, key Sk_P can decrypt all ciphertexts Ct with attribute vector x such that P(x) = 1.
In this paper, we give fully secure implementations of Conjunction, Disjunction and k-CNF/DNF predicates that guarantee the security of both the plaintext and the attribute. Our constructions for Disjunctions and Conjunctions are linear in the number of variables. Previous fully secure constructions for Disjunctions required time exponential in the number of variables, while for Conjunctions the best previous construction was quadratic in the number of variables.
A Collaborative Framework for Privacy Protection in Online Social Networks
With the wide use of online social networks (OSNs), the problem of data privacy has attracted much attention. Several approaches have been proposed to address this issue. One privacy management approach for OSNs leverages a key management technique that lets a user simply post encrypted content so that only users who satisfy the associated security policy can derive the key to access the data. However, the key management policies of existing schemes may grant access to unauthorized users and cannot efficiently determine authorized users. In this paper, we propose a collaborative framework which enforces access control for OSNs through an innovative key management scheme focused on communities. The framework introduces community key management based on a new group-oriented convergence cryptosystem, and provides the efficient privacy preservation needed in a private OSN. To prove the feasibility of our approach, we also discuss a proof-of-concept implementation of our framework. Experimental results show that our construction achieves the identified design goals for OSNs with acceptable performance.
Strong designated verifier signature scheme: new definition and construction
Recently, several strong designated verifier signature schemes have been proposed in the literature. In this paper, we first point out that such so-called strong designated verifier signature schemes are essentially just the message authentication code HMAC. Lacking the key property of signatures, unforgeability, these schemes cannot give signers the complete control over their signatures originally demanded by Chaum and Van Antwerpen. No signer would use such designated verifier signature schemes unless he trusts the designated verifier entirely. We then introduce a new notion of strong designated verifier signature scheme together with its security requirements. We further propose the first strong designated verifier signature scheme based on the Schnorr signature scheme, and provide a formal security proof under the DL assumption and the CDH assumption in the random oracle model. Finally, we discuss general methods for constructing strong designated verifier signature schemes.
Loiss: A Byte-Oriented Stream Cipher
This paper presents a byte-oriented stream cipher, Loiss, which takes a 128-bit initial key and a 128-bit initial vector as inputs and outputs a key stream of bytes. The algorithm is based on a linear feedback shift register and uses a structure called BOMM in the filter generator, which has good properties for resisting algebraic attacks, linear distinguishing attacks and fast correlation attacks. For BOMM to be balanced, the S-boxes in BOMM must be orthomorphic permutations. To further improve resistance against those attacks, the S-boxes in BOMM must also possess good cryptographic properties, for example high algebraic immunity and high nonlinearity. However, current research on orthomorphic permutations pays little attention to their cryptographic properties, and we believe that Loiss not only enriches the applications of orthomorphic permutations in cryptography, but also motivates research on a variety of their cryptographic properties.
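The defining property of the BOMM S-boxes can be checked mechanically: a permutation S is orthomorphic when x ↦ S(x) ⊕ x is again a permutation. A minimal sketch, using a 4-bit toy S-box of our own (multiplication by x in GF(2^4)), not an actual Loiss S-box:

```python
def is_orthomorphic(sbox):
    """True iff sbox is a permutation and x -> sbox[x] ^ x is also a permutation."""
    n = len(sbox)
    if sorted(sbox) != list(range(n)):
        return False                           # not even a permutation
    return len({s ^ x for x, s in enumerate(sbox)}) == n

def gf16_double(v):
    """Multiply by x in GF(2^4) with modulus x^4 + x + 1 (a linear toy S-box)."""
    v <<= 1
    return (v ^ 0b10011) & 0xF if v & 0x10 else v

sbox = [gf16_double(x) for x in range(16)]
assert is_orthomorphic(sbox)                   # S(x) ^ x = 3*x in GF(16): a permutation
assert not is_orthomorphic(list(range(16)))    # identity: S(x) ^ x == 0 for all x
```

A real cipher would of course use a nonlinear orthomorphic S-box; the linear example only illustrates the balancedness condition itself.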
Fully Leakage-Resilient Signatures
A signature scheme is {\em fully leakage resilient} (Katz and Vaikuntanathan, ASIACRYPT '09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on {\em all intermediate values that are used throughout the lifetime of the system}. This is a strong and meaningful notion of security that captures a significantly wide range of side-channel attacks.
One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.
In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length bits, where is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS '10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS '10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
Constant Round Non-Malleable Protocols using One Way Functions
We provide the first constant round constructions of non-malleable commitment and zero-knowledge protocols based only on one-way functions. This improves upon several previous (incomparable) works which required either: (a) super-constant number of rounds, or, (b) non-standard or sub-exponential hardness assumptions, or, (c) non-black-box simulation and collision resistant hash functions. These constructions also allow us to obtain the first constant round multi-party computation protocol relying only on the existence of constant round oblivious transfer protocols. Our primary technique can be seen as a means of implementing the previous ``two-slot simulation" idea in the area of non-malleability with only black-box simulation.
A simple modification of our commitment scheme gives a construction which makes use of the underlying one-way function in a black-box way. The modified construction satisfies the notion of what we call \emph{non-malleability w.r.t. replacement}, a slightly weaker yet natural notion of non-malleability which we believe suffices for many applications of non-malleable commitments. We show that a commitment scheme which is non-malleable only w.r.t. replacement is sufficient to obtain a (fully) black-box multi-party computation protocol. This allows us to obtain a constant round multi-party computation protocol making only black-box use of the standard cryptographic primitives with polynomial-time hardness, thus directly improving upon the recent work of Wee (FOCS '10).
A Note on Semi-Bent Boolean Functions
We show how to construct semi-bent Boolean functions from PSap-like bent functions. We derive infinite classes of semi-bent functions in even dimension having multiple trace terms.
Cryptanalysis of Block Ciphers Using Almost-Impossible Differentials
In this paper, inspired by the notion of impossible differentials, we present a model for using differentials that are less probable than for a random permutation. We introduce such a distinguisher for 2 rounds of Crypton, and present an attack on 6 rounds of this former AES candidate. As a special case of this idea, we embed parts of the additional rounds around the impossible differential into the distinguisher to obtain a probabilistic distinguisher covering more rounds. We show that with this change the data complexity increases while the time complexity may decrease or increase. We then argue that this variant of impossible differential cryptanalysis is advantageous when the data complexity is low and the time complexity is marginal.
Automata Evaluation and Text Search Protocols with Simulation Based Security
This paper presents efficient protocols for securely computing the following two problems:
1) The fundamental problem of pattern matching. This problem is defined in the two-party setting, where party holds a pattern and party holds a text. The goal of is to learn where the pattern appears in the text, without revealing it to or learning anything else about 's text. This problem has been widely studied for decades due to its broad applicability. We present several protocols for several notions of security. We further generalize one of our solutions to solve additional pattern matching related problems of interest.
2) Our construction from above, in the malicious case, is based on a novel protocol for secure oblivious automata evaluation, which is of independent interest. In this problem, party holds an automaton and party holds an input string, and they need to decide if the automaton accepts the input, without learning anything else.
Our protocol obtains full security in the face of malicious adversaries.
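The functionality behind problem 2, computed here in the clear with no security, amounts to evaluating a pattern-matching automaton over the text; the protocol computes the same result obliviously. A sketch of our own (a KMP-style DFA for an example pattern):

```python
def pattern_dfa(pattern, alphabet):
    """Build the DFA whose accepting state means 'pattern just matched'."""
    m = len(pattern)
    delta = {}
    for state in range(m + 1):
        for c in alphabet:
            # longest prefix of pattern that is a suffix of (matched-so-far + c)
            s = pattern[1:] + c if state == m else pattern[:state] + c
            while s and not pattern.startswith(s):
                s = s[1:]
            delta[(state, c)] = len(s)
    return delta, m

def search(text, pattern, alphabet="ab"):
    """Return all positions where pattern occurs in text, via one DFA pass."""
    delta, accept = pattern_dfa(pattern, alphabet)
    state, hits = 0, []
    for i, c in enumerate(text):
        state = delta[(state, c)]
        if state == accept:
            hits.append(i - len(pattern) + 1)
    return hits

assert search("abababa", "aba") == [0, 2, 4]
```

In the secure protocol, one party holds `delta` and the other holds `text`, and the transition at each step is taken without either side learning the other's input.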
Constant-round Non-Malleable Commitments from Any One-Way Function
We show \emph{unconditionally} that the existence of commitment schemes implies the existence of \emph{constant-round} non-malleable commitments; earlier protocols required additional assumptions such as collision resistant hash functions or subexponential one-way functions.
Our protocol also satisfies the stronger notions of concurrent non-malleability and robustness. As a corollary, we establish that constant-round non-malleable zero-knowledge arguments for can be based on one-way functions and constant-round secure multi-party computation can be based on enhanced trapdoor permutations; also here, earlier protocols additionally required either collision-resistant hash functions or subexponential one-way functions.
Last updated: 2011-05-05
On Instantiation of the Random Oracle
In the current methodology of the Random Oracle Model, the random oracle is instantiated by a ``good cryptographic hash function''. However, due to the work of Canetti, Goldreich and Halevi, this methodology has been found problematic: there exists a construction secure in the Random Oracle Model for which any instantiation of the random oracle by any fully specified function, including any ``good cryptographic hash function'', results in an insecure implementation. We investigate the Canetti-Goldreich-Halevi method and propose a new method for instantiating the random oracle, in which the random oracle is instantiated by a floating pseudorandom function. Under this new method, Canetti, Goldreich and Halevi's construction has a secure implementation. Our work puts the methodology of the Random Oracle Model on firm grounds.
A secure email login system using virtual password
In today’s world, password compromise by adversaries is common and serves various purposes. At ICC 2008, Lei et al. proposed a new user authentication system based on a virtual password system, which uses a linear randomized function to be secure against identity theft, phishing, keylogging and shoulder-surfing attacks. At ICC 2010, Li presented a security attack on Lei et al.'s work. This paper modifies Lei et al.'s scheme to prevent Li's attack while reducing the server overhead. We also discuss the problems with current password recovery systems and propose a better approach.
Enhanced STS using Check Equation --Extended Version of the Signature scheme proposed in the PQCrypt2010--
We propose solutions to the problems which have been left open in the Enhanced STS signature scheme, proposed at PQCrypto 2010.
The Enhanced STS signature scheme is defined as the public key with the Complementary STS structure, in which two STS public keys are symmetrically joined together; the Complementary STS is the public key where two STS public keys are simply joined together, without the protection of a Check Equation.
We discuss the following issues left open in the Enhanced STS as presented at PQCrypto 2010:
(i) We implied that there may exist a way to cryptanalyze the Complementary STS structure. Although it had been proposed that the system be protected by Check Equations [35][37] in order to cope with an unknown attack, we did not show a concrete procedure. Here we show an actual procedure to cryptanalyze it and forge a signature.
(ii) We assumed that the Check Equation should be changed every time a document is signed. This practice is not always feasible, and we improve on this matter. The Check Equation proposed at PQCrypto 2010 defined its valid lifetime as a function of the number of documents signed, because the secret key of the Check Equation can be recovered by collecting valid signatures.
We now propose a new method of integrating the Check Equation into the secret key, eliminating the risk of hidden information being drawn from existing signatures.
Side-Channel Attacks on the McEliece and Niederreiter Public-Key Cryptosystems
Research within “post-quantum” cryptography has focused on the development of schemes that resist quantum cryptanalysis. However, if such schemes are to be deployed, practical questions of efficiency and physical security should also be addressed; this is particularly important for embedded systems. To this end, we investigate issues relating to side-channel attacks against the McEliece and Niederreiter public-key cryptosystems, for example improving the attacks presented in [19], and novel countermeasures against such attacks.
Cryptanalysis of the Convex Hull Click Human Identification Protocol
Recently a convex hull based human identification protocol was proposed by Sobrado and Birget, whose steps can be performed by humans without additional aid. The main part of the protocol involves the user mentally forming the convex hull of secret icons in a set of graphical icons and then clicking randomly within this convex hull. While some rudimentary security issues of this protocol have been discussed, a comprehensive security analysis has been lacking. In this paper we analyse the security of this convex hull based protocol. In particular, we show two probabilistic attacks which reveal the user's secret after the observation of only a handful of authentication sessions. These attacks can be efficiently implemented, as their time and space complexities are considerably lower than those of a brute-force attack. We show that while the first attack can be mitigated through appropriately chosen values of system parameters, the second attack succeeds with a non-negligible probability even with large system parameter values which cross the threshold of usability.
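The geometric step the user performs, clicking inside the convex hull of the secret icons, can be sketched as follows. This only illustrates the underlying geometry, not the attacks; the icon coordinates are invented for the example.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside(hull, q):
    """q lies in a CCW convex polygon iff it is left of (or on) every edge."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], q) >= 0 for i in range(n))

icons = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]  # (2, 2) is interior
hull = convex_hull(icons)
assert inside(hull, (1, 3))       # a valid click
assert not inside(hull, (5, 5))   # an invalid click
```

The attacks in the paper exploit exactly this structure: observed clicks constrain which icon subsets could have produced the hull.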
On Compression of Data Encrypted with Block Ciphers
This paper investigates compression of data encrypted with block ciphers, such as the Advanced Encryption Standard (AES). It is shown that such data can be feasibly compressed without knowledge of the secret key. Block ciphers operating in various chaining modes are considered and it is shown how compression can be achieved without compromising security of the encryption scheme. Further, it is shown that there exists a fundamental limitation to the practical compressibility of block ciphers when no chaining is used between blocks. Some performance results for practical code constructions used to compress binary sources are presented.
Predicate Encryption with Partial Public Keys
Predicate encryption is a new powerful cryptographic primitive which allows for fine-grained access control for encrypted data: the owner of the secret key can release partial keys, called tokens, that can decrypt only a specific subset of ciphertexts. More specifically, in a predicate encryption scheme, ciphertexts and tokens have attributes and a token can decrypt a ciphertext if and only if a certain predicate of the two associated attributes holds.
In this paper, ciphertext attributes are vectors of fixed length over an alphabet , and token attributes, called patterns, are vectors of the same length over the alphabet . We consider the predicate introduced by Boneh and Waters (TCC '07), which is true if and only if and agree in all positions for which .
Various security notions are relevant for predicate encryption schemes. First of all, one wants the ciphertexts to hide their attributes (this property is called semantic security). In addition, it makes sense to consider token security, a notion in which the token is required not to reveal any information on the associated pattern. It is easy to see that such predicate privacy is impossible to achieve in a public-key setting. Shi, Shen and Waters (TCC '09) therefore considered predicate encryption in the symmetric-key setting and gave the first construction with token security.
In this paper, we consider the notion of partial public key encryption (as suggested by Shi, Shen and Waters, TCC '09), in which a partial public key allows a user to generate only a subset of the ciphertexts. We give a construction which is semantically secure and in which a token does not reveal any information on the associated pattern except for the locations of the 's. The proofs of security of our construction are based on hardness assumptions in bilinear groups of prime order; this greatly improves the efficiency of the construction compared to the previous constructions of Shi, Shen and Waters, which used groups of composite order. Our security proofs do not use random oracles.
Pairing Computation on Elliptic Curves of Jacobi Quartic Form
This paper proposes explicit formulae for the addition and doubling steps in Miller's algorithm for computing the Tate pairing on Jacobi quartic curves. We present a geometric interpretation of the group law on Jacobi quartic curves, which leads to formulae for Miller's algorithm. The doubling step formula is competitive with those for Weierstrass and Edwards curves. Moreover, by carefully choosing the coefficients, there exist quartic twists of Jacobi quartic curves from which pairing computation can benefit significantly. Finally, we provide some examples of supersingular and ordinary pairing-friendly Jacobi quartic curves.
Limitations on Transformations from Composite-Order to Prime-Order Groups: The Case of Round-Optimal Blind Signatures
Beginning with the work of Groth and Sahai, there has been much interest in transforming pairing-based schemes in composite-order groups to equivalent ones in prime-order groups. A method for achieving such transformations has recently been proposed by Freeman, who identified two properties of pairings using composite-order groups--"cancelling" and "projecting"--on which many schemes rely, and showed how either of these properties can be obtained using prime-order groups.
In this paper, we give evidence for the existence of limits to such transformations. Specifically, we show that a pairing generated in a natural way from the Decision Linear assumption in prime-order groups can be simultaneously cancelling and projecting only with negligible probability. As evidence that these properties can be helpful together as well as individually, we present a cryptosystem whose proof of security makes use of a pairing that is both cancelling and projecting.
Our example cryptosystem is a simple round-optimal blind signature scheme that is secure in the common reference string model, without random oracles, and based on mild assumptions; it is of independent interest.
Last updated: 2010-12-08
Two Attacks on Dutta’s Dynamic Group Key Agreement Protocol
Ratna Dutta and Rana Barua proposed a constant-round dynamic group key agreement protocol, referred to as the DGKA protocol. They claimed that the DGKA protocol is dynamic, efficient and provably secure under the DDH assumption. In this paper, we analyze the security of the DGKA protocol and show that it is vulnerable to two attacks. The first attack relates to the fact that this protocol does not satisfy the key independence property, which is crucial for a dynamic group key agreement protocol. The second is an impersonation attack which demonstrates that the DGKA protocol is vulnerable to replay attacks.
Accusation probabilities in Tardos codes: the Gaussian approximation is better than we thought
We study the probability distribution of user accusations in the q-ary
Tardos fingerprinting system under the Marking Assumption, in the restricted digit model. In particular, we look at the applicability of the so-called Gaussian approximation,
which states that accusation probabilities tend to the normal distribution when the fingerprinting code is
long. We introduce a novel parametrization of the attack strategy which enables a significant speedup of numerical evaluations.
We set up a method, based on power series expansions, to systematically compute the probability of accusing
innocent users. The `small parameter' in the power series is 1/m, where m is the code length. We use our method to semi-analytically study the performance of the Tardos code against majority voting and interleaving attacks. The bias function `shape' parameter kappa
strongly influences the distance between the actual probabilities and the asymptotic Gaussian curve.
The impact on the collusion-resilience of the code is shown.
For some realistic parameter values, the false accusation probability is even lower than the
Gaussian approximation predicts.
Last updated: 2011-03-07
Privacy-preserving Sharing of Sensitive Information
The need for controlled sharing of sensitive information occurs in many realistic everyday scenarios, ranging from critical (e.g., national security) to mundane (e.g., social networks). A typical scenario involves two parties, at least one of which seeks some information from the other. The latter is either willing, or compelled, to share information. This poses two challenges: (1) how to enable this type of sharing such that parties learn no (or minimal) information beyond what they are entitled to, and (2) how to do so efficiently, in real-world practical terms.
In this paper, we discuss the concept of Privacy-preserving Sharing of
Sensitive Information (PSSI) and provide an efficient database system implementation. The PSSI system functions as a privacy shield to protect parties from disclosing their respective sensitive information. Although seemingly simple, the design and deployment of PSSI prompts a number of new and interesting practical challenges, that are addressed in this paper. We present extensive experimental results that attest to the practicality of attained privacy features.
Two identification protocols based on Cayley graphs of Coxeter groups
A challenge-response identification protocol is introduced, based on the intractability of the word problem in some Coxeter groups. A Prover builds his public key as the set of leaves of a tree in the Cayley graph of a Coxeter group, and the tree itself is his private key. Any challenge posed by a Verifier consists of a subset of the public key, and the Prover shows his knowledge of the private key by providing a subtree whose set of leaves is the challenge set. Any third party aiming to impersonate the Prover faces a form of the word problem in the Coxeter group. Although this protocol maintains the secrecy of the whole private key, it discloses some parts of it. A second protocol is introduced which is an adaptation of the already classical zero-knowledge protocol for recognizing pairs of isomorphic graphs.
Linear-Complexity Private Set Intersection Protocols Secure in Malicious Model
Private Set Intersection (PSI) protocols allow one party (“client”) to compute an intersection of its input set with that of another party (“server”), such that the client learns nothing other than the set intersection and the server learns nothing beyond client input size. Prior work yielded a range of PSI protocols secure under different cryptographic assumptions. Protocols operating in the semi-honest model offer better (linear) complexity while those in the malicious model are often significantly more costly. In this paper, we construct PSI and Authorized PSI (APSI) protocols secure in the malicious model under standard cryptographic assumptions, with both linear communication and computational complexities. To the best of our knowledge, our APSI is the first solution to do so. Finally, we show that our linear PSI is appreciably more efficient than the state-of-the-art.
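As a point of reference for how such protocols operate, here is a toy sketch of a classic DDH-style PSI in the semi-honest model (illustrative Python simulating both parties in one function, over Z_P^* with a Mersenne-prime modulus; a real protocol uses a standard prime-order group with a proper hash-to-group function, and this is not the paper's malicious-model construction):

```python
import hashlib
import secrets

# Toy modulus: the Mersenne prime 2^127 - 1.  Illustrative only.
P = (1 << 127) - 1

def h2g(item):
    """Hash a string into Z_P^* (a stand-in for a hash-to-group function)."""
    d = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(d, "big") % P or 1

def ddh_psi(client_set, server_set):
    """One-shot simulation of semi-honest DDH-style PSI: both parties blind
    hashed items with secret exponents; since (H(x)^a)^b == (H(x)^b)^a,
    double-blinded values collide exactly on the intersection."""
    client_items = list(client_set)
    a = secrets.randbelow(P - 2) + 1          # client's secret exponent
    b = secrets.randbelow(P - 2) + 1          # server's secret exponent
    # Client -> server: H(x)^a, in the client's item order.
    c_blind = [pow(h2g(x), a, P) for x in client_items]
    # Server -> client: (H(x)^a)^b aligned with c_blind, plus blinded H(y)^b.
    c_double = [pow(v, b, P) for v in c_blind]
    s_blind = [pow(h2g(y), b, P) for y in server_set]
    # Client: raise the server's values to a and match against its own.
    s_double = {pow(v, a, P) for v in s_blind}
    return {x for x, d in zip(client_items, c_double) if d in s_double}
```

Both communication and computation are linear in the input sizes, which is the complexity class the paper achieves even in the malicious model.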
Generic Constructions of Parallel Key-Insulated Encryption: Stronger Security Model and Novel Schemes
Exposure of a secret key is a significant threat in practice. As a
notion of security against key exposure, Dodis et al. advocated key-insulated security, and proposed concrete key-insulated encryption (KIE) schemes in which secret keys are periodically updated by using a physically ``insulated'' helper key. To significantly reduce the possibility of exposure of the helper key, Hanaoka et al. further proposed the notion of parallel KIE (PKIE), in which multiple helper keys are used in alternate shifts. They also pointed out that, in contrast to the case of standard KIE, PKIE cannot be straightforwardly obtained from identity-based encryption (IBE). In this paper, we first argue that previous security models for PKIE are somewhat weak, and thus re-formalize stronger security models for PKIE. We then clarify that PKIE can be generically constructed (even in the strengthened security models) from a new primitive which we call one-time forward secure public key encryption (OTFS-PKE), and show that OTFS-PKE can be constructed from arbitrary IBE or hierarchical IBE (without degenerating into IBE).
By using our method, we can obtain various new PKIE schemes with desirable properties. For example, we can construct the first PKIE schemes based on lattice or quadratic residuosity problems (without using bilinear maps), and PKIE schemes with short ciphertexts and cheaper computational cost for both encryption and decryption. Interestingly, the resulting schemes can be viewed as partial solutions to the open problem left by Libert, Quisquater and Yung in PKC'07.
Computational Soundness about Formal Encryption in the Presence of Secret Shares and Key Cycles
The computational soundness of formal encryption has been studied extensively following the work of Abadi and Rogaway. Recent work considers the scenario in which secret sharing is needed and, separately, the scenario in which key cycles are present; the key novel technique there is a co-inductive definition of the adversarial knowledge. In this paper, we prove a computational soundness theorem for formal encryption in the presence of both key cycles and secret shares at the same time, which is a non-trivial extension of previous approaches.
PEKSrand: Providing Predicate Privacy in Public-key Encryption with Keyword Search
Recently, Shen, Shi, and Waters introduced the notion of predicate privacy, i.e., the property that a token t(x) reveals no information about the encoded predicate p, and proposed a scheme that achieves predicate privacy in the symmetric-key setting. In this paper, we propose two schemes. In the first scheme, we extend PEKS to support predicate privacy based on the idea of randomization. To the best of our knowledge, this is the first work that ensures predicate privacy in the public-key setting without requiring interaction between the receiver and the potential senders, whose number may be very large. Moreover, we identify a new type of attack against PEKS, namely statistical guessing attacks. Accordingly, we introduce a new notion called statistics privacy, i.e., the property that predicate privacy is preserved even when the statistical distribution of keywords is known, and propose a scheme that trades off statistics privacy against storage efficiency (of the delegate). According to our analysis and experimental results, compared to PEKS, both of our schemes introduce reasonable additional communication and computation overheads and can be smoothly deployed in existing systems.
How to Implement the Public Key Operations in Code-based Cryptography on Memory-constrained Devices
While it is generally believed that, due to their large public key sizes, code-based public key schemes cannot conveniently be used on memory-constrained devices, we propose an approach for Public Key Infrastructure (PKI) scenarios which entirely eliminates the need to store public keys of communication partners. Instead, all the necessary computation steps are performed during the transmission of the key. We show the feasibility of the approach through an example implementation and give arguments that a smart card controller will be able to carry out the associated computations fast enough to sustain the transmission rates of possible future high-speed contactless interfaces.
Last updated: 2010-09-17
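The core idea of computing during key transmission can be illustrated on McEliece-style encryption, where the ciphertext c = mG + e can be accumulated row by row as the generator matrix G streams in, so G is never held in memory in full (a hypothetical sketch; the function and parameter names are illustrative, not the paper's implementation):

```python
import secrets

def stream_mceliece_encrypt(msg_bits, g_rows, n, t):
    """McEliece-style encryption c = m*G + e over GF(2), consuming the public
    generator matrix G one row at a time (e.g. as it arrives over the
    contactless interface) so that G never has to be stored in full."""
    c = [0] * n
    for bit, row in zip(msg_bits, g_rows):    # row i of G streams in
        if bit:                               # XOR the selected rows into c
            c = [ci ^ ri for ci, ri in zip(c, row)]
    # Add a random error vector e of Hamming weight t.
    for pos in secrets.SystemRandom().sample(range(n), t):
        c[pos] ^= 1
    return c
```

Only the n-bit ciphertext accumulator is kept in RAM, which is what makes the approach attractive for smart card controllers with a few kilobytes of memory.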
Weaknesses of SIP Authentication Scheme for Converged VoIP Networks
The Session Initiation Protocol (SIP) is commonly used to establish Voice over IP (VoIP) calls. Most recently, Yoon et al. proposed an efficient SIP authentication scheme for converged VoIP networks based on elliptic curve cryptography (ECC) [E-J. Yoon, K-Y. Yoo, C. Kim, Y-S. Hong, M. Jo, H-H. Chen, A Secure and Efficient SIP Authentication Scheme for Converged VoIP Networks, Computer Communications (2010), doi: 10.1016/j.comcom.2010.03.026]. In this letter, we demonstrate that their scheme is insecure against an off-line password guessing attack.
Passive Cryptanalysis of the UnConditionally Secure Authentication Protocol for RFID Systems
Recently, Alomair et al. proposed the first UnConditionally Secure mutual authentication protocol for low-cost RFID systems (UCS-RFID). The security of UCS-RFID relies on five dynamic secret keys which are updated at every protocol run using a fresh random number (nonce) secretly transmitted from a reader to tags.
In this paper, we propose a three-phase probabilistic approach for recovering such a nonce passively. Our results show that, at the highest security level of the protocol (security parameter = 256), inferring a nonce is feasible with probability 0.99 by eavesdropping (observing) about 90 runs of the protocol. Recovering a nonce then enables a passive attacker to recover all five secret keys of the protocol, and the success probability of our attack increases as more protocol runs are observed. We also show that tracing a tag under this protocol is possible even with fewer runs.
Unconditionally Secure Rational Secret Sharing in Standard Communication Networks
Rational secret sharing protocols in both the two-party and
multi-party settings are proposed. These protocols are built in
standard communication networks and with unconditional security.
Namely, the protocols run over standard point-to-point networks
without requiring physical assumptions or simultaneous channels, and
even a computationally unbounded player cannot gain more than ε by deviating from the protocol. More precisely, for the 2-out-of-2 protocol the ε is a negligible function in the size of the secret, which is caused by the information-theoretic MACs used for authentication. The t-out-of-n protocol is (t-1)-resilient and the ε is exponentially small in the
number of participants. Although secret recovery cannot be
guaranteed in this setting, a participant can at least reduce the
Shannon entropy of the secret to less than 1 after the protocol.
When the secret-domain is large, every rational player has great
incentive to participate in the protocol.
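The role of information-theoretic MACs in authenticating shares can be sketched with plain Shamir sharing over a prime field (an illustrative Python sketch only; in the actual rational protocol the MAC keys would be held by the verifying parties rather than bundled with the share itself):

```python
import secrets

P = (1 << 61) - 1  # a Mersenne prime; the field size is illustrative

def share(secret, t, n):
    """Shamir t-out-of-n sharing; each share y carries a one-time
    information-theoretic MAC tag a*y + b over the field."""
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t - 1)]
    out = []
    for i in range(1, n + 1):
        y = sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
        a, b = secrets.randbelow(P - 1) + 1, secrets.randbelow(P)
        out.append((i, y, (a * y + b) % P, (a, b)))
    return out

def reconstruct(subset):
    """Check every MAC tag, then interpolate the polynomial at 0."""
    for _, y, tag, (a, b) in subset:
        assert (a * y + b) % P == tag, "manipulated share detected"
    s = 0
    for i, y, _, _ in subset:
        num = den = 1
        for j, _, _, _ in subset:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + y * num * pow(den, P - 2, P)) % P
    return s
```

A deviating player who substitutes a fake share is caught except with probability 1/(P-1), which is the source of the negligible ε mentioned in the abstract.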
Identity Based Partial Aggregate Signature Scheme Without Pairing
An identity based signature allows users to sign their documents using their private keys, and the signature can be verified by anyone using the identity of the signer and the public parameters of the system. An aggregate signature scheme is a digital signature scheme which allows aggregation of different signatures by different users on different messages. The primary objective of an aggregate signature scheme is to achieve both computational and communication efficiency. Here, we propose an identity based aggregate signature scheme which uses a variation of a lightweight Schnorr-type identity based signature scheme, wherein the signers need not agree upon a common randomness and the aggregation is done without any interaction among the signers. The scheme does not involve any pairing operations, even for aggregate signature verification. It is computationally efficient since it avoids the costliest operation in elliptic curve groups (bilinear pairings). It should be noted that our signature achieves only partial aggregation, because the private key of each user is generated by a randomized extract algorithm and hence a random value has to be propagated with each individual signature.
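The partial-aggregation mechanism can be illustrated with ordinary (non-identity-based) Schnorr signatures: the s-components sum into a single value, but each commitment R_i must be kept, since the signers share no common randomness (a toy Python sketch over a small safe-prime group; the parameters are illustrative only, and real schemes work in elliptic curve groups):

```python
import hashlib
import secrets

# Tiny safe-prime group for illustration only: P = 2*Q + 1, and G = 4
# generates the subgroup of prime order Q.
P, Q, G = 10007, 5003, 4

def H(R, msg):
    """Hash-to-challenge, reduced into the exponent group Z_Q."""
    data = f"{R}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)                    # (private, public)

def sign(x, msg):
    r = secrets.randbelow(Q - 1) + 1
    R = pow(G, r, P)                          # per-signer commitment
    s = (r + H(R, msg) * x) % Q
    return R, s

def aggregate(sigs):
    """Partial aggregation: the s-components add up, but every R_i is kept."""
    return [R for R, _ in sigs], sum(s for _, s in sigs) % Q

def verify_aggregate(pubkeys, msgs, Rs, S):
    """Check g^S == prod_i R_i * y_i^{H(R_i, m_i)} (mod P)."""
    rhs = 1
    for y, m, R in zip(pubkeys, msgs, Rs):
        rhs = rhs * R % P * pow(y, H(R, m), P) % P
    return pow(G, S, P) == rhs
```

The aggregate is one group element plus one R_i per signer, rather than one full signature per signer, which is exactly the "partial" saving the abstract describes.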