Algorithms to solve massively under-defined systems of multivariate quadratic equations
It is well known that the problem of solving a set of randomly chosen multivariate quadratic equations over a finite field is NP-hard. However, when the number of variables is much larger than the number of equations, solving the equations is not necessarily difficult. In fact, when n>m(m+1) (n and m are the numbers of variables and equations, respectively) and the field is of even characteristic, there is an algorithm that solves the equations in polynomial time (see [Kipnis et al., Eurocrypt'99] and also [Courtois et al., PKC'02]). In the present paper, we give two algorithms for solving quadratic equations; one is for the case of n > (about) m^2-2m^{3/2}+2m and the other is for the case of n > m(m+1)/2+1. The first algorithm solves the equations over any finite field in polynomial time. The second algorithm requires exponential time. However, the number of required variables is much smaller than in the first one, and the complexity is much less than that of exhaustive search.
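As a toy illustration of the problem being solved (not of the paper's algorithms, which avoid exhaustive search), the sketch below builds a small random quadratic system over GF(2) with a planted solution and recovers it by brute force; all sizes and the planting trick are invented for readability:

```python
# Toy multivariate quadratic (MQ) system over GF(2); sizes are illustrative only.
import itertools, random

random.seed(1)
n, m = 6, 2   # toy sizes; the regimes in the paper need n much larger than m

def evaluate(eq, x):
    # evaluate one quadratic equation (quad terms, linear terms, constant) at x
    quad, lin, const = eq
    v = const
    for (i, j), c in quad.items():
        v ^= c & x[i] & x[j]
    for i, c in enumerate(lin):
        v ^= c & x[i]
    return v

def random_equation(n, root):
    # random quadratic over GF(2), with `root` planted as a solution
    quad = {(i, j): random.randrange(2) for i in range(n) for j in range(i, n)}
    lin = [random.randrange(2) for _ in range(n)]
    const = evaluate((quad, lin, 0), root)   # forces eq(root) = 0
    return (quad, lin, const)

secret = tuple(random.randrange(2) for _ in range(n))
eqs = [random_equation(n, secret) for _ in range(m)]

# exhaustive search over all 2^n assignments: feasible only for toy n
sol = next(x for x in itertools.product((0, 1), repeat=n)
           if all(evaluate(eq, x) == 0 for eq in eqs))
```

The planted root guarantees the search succeeds; the point of the paper is that for n sufficiently large relative to m, far better than 2^n is possible.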
A new bound for t-wise almost universal hash functions
Using the pigeon-hole principle, we derive a new bound for the key length in a t-wise almost universal hash function where the multicollision or t-collision probability is bounded above by epsilon in the range [0,1]. The important features of this bound are (1) it decreases very slowly as t increases, and (2) the key length grows at least linearly with the logarithm of the message length. To our knowledge, this is the first almost universal hash bound for any integer t > 1. This work arises from the use of t-wise almost universal hash functions in manual authentication protocols.
Last updated: 2009-09-11
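The notion of a collision probability bounded by epsilon can be made concrete with the classic ((a*x + b) mod p) mod M family, which is 2-wise almost universal; the sketch below (toy parameters, not taken from the paper) computes the exact collision probability of one fixed message pair over all keys:

```python
# Exact 2-collision probability of the ((a*x + b) mod p) mod M hash family
# for one fixed pair of distinct messages; p and M are toy parameters.
p, M = 101, 10          # prime modulus, output range
x, y = 3, 7             # a fixed pair of distinct messages

collisions = sum(
    ((a * x + b) % p) % M == ((a * y + b) % p) % M
    for a in range(1, p) for b in range(p)   # all (p-1)*p keys
)
rate = collisions / ((p - 1) * p)            # epsilon for this pair
```

For these parameters the rate stays below 1/M, the textbook almost-universality bound for this family.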
FaceTrust: Assessing the Credibility of Online Personas via Social Networks
Despite the large volume of societal interactions taking place on the Internet, it is still hard to assess the credibility of statements made by online users. The digital credentials issued by trustworthy certificate authorities partially address this problem, but a tedious registration and verification process as well as its high cost hinder the wide adoption of this solution. This paper presents FaceTrust, a system that leverages online social networks to provide lightweight, flexible and relaxed credentials that enable users to assess the credibility of others and their assertions.
Euclid's Algorithm, Gauss' Elimination and Buchberger's Algorithm
It is known that Euclid's algorithm, Gauss' elimination and Buchberger's algorithm play important roles in algorithmic number theory, symbolic computation and cryptography, and even in science and engineering. The aim of this paper is to revisit the relations among these three algorithms and to simplify Buchberger's algorithm without using the multivariate division algorithm. We obtain an algorithm for computing the greatest common divisor of several positive integers, which can be regarded as a generalization of Euclid's algorithm. This enables us to rediscover Gauss' elimination and to further simplify Buchberger's algorithm for computing Gröbner bases of polynomial ideals in modern computational algebraic geometry.
Efficient group authentication protocols based on human interaction
We re-examine the needs of computer security in pervasive computing from
first principles, specifically the problem of bootstrapping secure networks. We consider the case of systems that may have no shared secret information, and where there is no structure such as a PKI available.
We propose several protocols which achieve a high degree of security based on a combination of human-mediated communication and an ordinary
Dolev-Yao communication medium. In particular they resist combinatorial
attacks on the hash or digest values that have to be compared by
human users, seemingly optimising the amount of security they can achieve for a given amount of human effort. We compare our protocols with recent pairwise protocols proposed by, for example, Hoepman and Vaudenay.
Secure EPC Gen2 compliant Radio Frequency Identification
The increased functionality of EPC Class1 Gen2 (EPCGen2) is making this standard a de facto specification for inexpensive tags in the RFID industry. Recently three EPCGen2 compliant protocols that address security issues were proposed in the literature. In this paper we analyze these protocols and show that they are not secure and subject to replay/impersonation and statistical analysis attacks. We then propose an EPCGen2 compliant RFID protocol that uses the numbers drawn from synchronized pseudorandom number generators (RNG) to provide secure tag identification and session unlinkability. This protocol is optimistic and its security reduces to the (cryptographic) pseudorandomness of the RNGs supported by EPCGen2.
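The synchronized-PRNG idea can be sketched as follows; this is a hypothetical simplification (SHA-256 standing in for the EPCGen2 RNG, a window of three outputs), not the proposed protocol:

```python
# Hypothetical sketch: tag and reader keep synchronized pseudorandom states;
# the reader accepts a value matching one of the next few outputs in its
# window, then resynchronizes. SHA-256 is an illustrative stand-in RNG.
import hashlib

def prng_next(state):
    h = hashlib.sha256(state).digest()
    return h[:16], h[16:]          # (output, new state)

shared = b"tag-seed"               # placeholder shared secret
tag_state = reader_state = shared

def tag_respond():
    global tag_state
    out, tag_state = prng_next(tag_state)
    return out

def reader_identify(value, window=3):
    global reader_state
    state = reader_state
    for _ in range(window):
        out, state = prng_next(state)
        if out == value:
            reader_state = state   # resynchronize on success
            return True
    return False
```

The window lets the reader tolerate a bounded number of missed tag sessions without losing synchronization.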
Secret Handshake: Strong Anonymity Definition and Construction
Secret handshakes allow two members of the same group to authenticate each other secretly. Previous works on secret handshake schemes consider two types of anonymity against the group authority (GA) of a group G: 1) even the GA cannot identify members, i.e., nobody can identify them (No-Traceability); 2) only the GA can identify members (Traceability). In this paper, we first show the necessity of traceability of identities. Second, we separate the abilities of the GA into the ability to identify players and the ability to issue certificates to members, and we introduce two anonymity notions, Co-Traceability and Strong Detector Resistance. When stricter anonymity is required even against the GA, case 2) is unfavorable for members. We therefore introduce Co-Traceability, under which even an adversary A with the GA's ability to identify members or to issue certificates cannot trace members' identities. However, a scheme satisfying Co-Traceability may still allow the GA to judge whether handshake players belong to its own group. We therefore also introduce Strong Detector Resistance, under which even an adversary A with the GA's member-identifying ability cannot judge whether a handshaking player belongs to G. Additionally, we propose a secret handshake scheme which satisfies the previous security requirements as well as our proposed anonymity requirements, using a group signature scheme with message recovery.
Preimage Attack on ARIRANG
The hash function ARIRANG is one of the first-round SHA-3 candidates. In this paper, we present preimage attacks on ARIRANG with step-reduced compression functions. We consider two step-reduced variants of the compression function. The first uses the same feedforward as the original algorithm, and the other has the feedforward working at the output of the half steps. Our attack finds a preimage of the 33-step OFF (Original FeedForward) variants of ARIRANG-256 and ARIRANG-512 from Step 1 to Step 33, and a preimage of the 31-step MFF (Middle FeedForward) variants of ARIRANG-256 and ARIRANG-512 from Step 3 to Step 33.
Transferable Constant-Size Fair E-Cash
We propose an efficient blind certification protocol with interesting properties. It falls within the Groth-Sahai framework for witness-indistinguishable proofs; thus, extended to a certified signature, it immediately yields non-frameable group signatures. We use blind certification to build an efficient (offline) e-cash system that guarantees user anonymity and transferability of coins without increasing their size. As required for fair e-cash, in case of fraud, anonymity can be revoked by an authority, which is also crucial to deter double spending.
Security of the Permutation-based Compression Function lp231
In this paper, we study the security of a certain class of permutation-based compression functions. Denoted lp231 by Rogaway and Steinberger, these are 2n-to-n-bit compression functions using three calls to a single n-bit random permutation. We prove that lp231 is asymptotically preimage resistant up to 2^{2n/3}/n query complexity and collision resistant up to 2^{n/2}/n^{1+e} query complexity for any e>0. Based on a single permutation, lp231 provides both efficiency and almost optimal collision security.
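The shape of such a construction can be sketched generically; the real lp231 fixes particular linear maps chosen by Rogaway and Steinberger, so the permutation and the mixing below are invented stand-ins, not the actual scheme:

```python
# Generic 2n-to-n-bit compression from three calls to one public n-bit
# permutation pi, with placeholder linear mixing between the calls.
n = 64
MASK = (1 << n) - 1

def pi(x):
    # stand-in public permutation: multiplication by an odd constant plus 1
    # is a bijection modulo 2^64
    return (x * 0x9E3779B97F4A7C15 + 1) & MASK

def compress(x1, x2):
    y1 = pi(x1)
    y2 = pi((x2 + 2 * y1) & MASK)          # invented mixing coefficients
    y3 = pi((x1 + y1 + 3 * y2) & MASK)
    return (y2 + y3) & MASK                # n-bit output
```

The security results quoted above are about how many permutation queries an attacker needs against the genuine lp231 maps, not this toy instantiation.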
On the security of Identity Based Ring Signcryption Schemes
Signcryption is a cryptographic primitive which offers authentication and confidentiality simultaneously at a cost lower than signing and encrypting the message independently. Ring signcryption enables a user to signcrypt a message along with the identities of a set of potential senders (that includes him) without revealing which user in the set has actually produced the signcryption. Thus a ring signcrypted message has anonymity in addition to authentication and confidentiality. Ring signcryption schemes have no group managers, no setup procedures, no revocation procedures and no coordination: any user can choose any set of users (a ring) that includes himself, and signcrypt any message by using his private and public keys as well as the other ring members' public keys, without getting any approval or assistance from them. Ring signcryption is useful for leaking trustworthy secrets in an anonymous, authenticated and confidential way.
To the best of our knowledge, seven identity based ring signcryption schemes are reported in the literature. Two of them were already proved to be insecure in \cite{ZBSW08} and \cite{SSP09}. In this paper, we show that four among the remaining five schemes do not provide confidentiality, to be specific, two schemes are not secure against chosen plaintext attack and other two schemes do not provide adaptive chosen ciphertext security. We then propose a new scheme and formally prove the security of the new scheme in the random oracle model. A comparison of our scheme with the only existing correct scheme by Huang et al. shows that our scheme is much more efficient than the scheme by Huang et al.
Multiple and Unlinkable Public Key Encryption without Certificates
We propose a new multiple and unlinkable identity-based public key encryption scheme. Unlike traditional public key encryption and identity-based encryption schemes, our scheme allows the use of a varying number of identity-based public keys in different groups or applications while keeping a single decryption key, so that the decryption key can decrypt every ciphertext encrypted with those public keys. Our scheme also removes the use of certificates as well as the key escrow problem, so it is functional and practical. Since our public keys are unlinkable, the user's privacy can be protected from attackers who collect and trace user information and behavior using the known public keys. Furthermore, we suggest a decryption key renewal protocol to strengthen the security of the single decryption key. Finally, we prove the security of our scheme against adaptive chosen-ciphertext attack in the random oracle model.
Chosen-ciphertext Secure Encryption from Hard Algebraic Set Systems
We put forward the new abstract framework of "hard algebraic set systems", which allows the construction of efficient chosen-ciphertext secure encryption schemes under computational (rather than decisional) intractability assumptions. Our framework can be instantiated with both RSA and Diffie-Hellman type assumptions, but in itself is completely abstract.
Ideal Hierarchical Secret Sharing Schemes
Hierarchical secret sharing is among the most natural generalizations of threshold secret sharing, and it has attracted a lot of attention ever since the invention of secret sharing. Several constructions of ideal hierarchical secret sharing schemes have been proposed, but it was not known which access structures admit such a scheme. We solve this problem by providing a natural definition for the family of hierarchical access structures and, more importantly, by presenting a complete characterization of the ideal hierarchical access structures, that is, the ones admitting an ideal secret sharing scheme. Our characterization is based on the well-known connection between ideal secret sharing schemes and matroids and, more specifically, on the connection between ideal multipartite secret sharing schemes and integer polymatroids. In particular, we prove that every hierarchical matroid port admits an ideal linear secret sharing scheme over every large enough finite field. Finally, we use our results to present a new proof of the existing characterization of the ideal weighted threshold access structures.
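The threshold schemes that hierarchical access structures generalize can be sketched with Shamir's scheme, the prototypical ideal scheme; parameters below are illustrative:

```python
# Shamir (t, n) secret sharing over a prime field: shares are points on a
# random degree-(t-1) polynomial; any t shares reconstruct via Lagrange
# interpolation at 0.
import random

P = 2 ** 61 - 1        # Mersenne prime modulus
random.seed(7)

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Lagrange basis polynomial evaluated at 0, inverse via Fermat
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

"Ideal" means each share is no larger than the secret, as here; the paper characterizes when hierarchical structures admit a scheme with this property.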
The Analysis of Galois Substitution Counter Mode (GSCM)
In~\cite{gscm}, GSCM mode of operation for authenticated encryption was presented. GSCM is based on the Galois/Counter Mode (GCM). GSCM is an enhancement of GCM, which is characterized by its high throughput and low memory consumption in network applications. In this paper, we propose some enhancements to GSCM and compare it with the different implementations of GCM. We present stability, performance, memory and security analyses of different implementations of GSCM and GCM.
Certificateless Group Oriented Signature Secure Against Key Replacement Attack
Since Al-Riyami and Paterson introduced certificateless cryptography, many certificateless schemes have been proposed for different purposes. In this paper, we present a certificateless group oriented signature scheme based on bilinear pairing. In our scheme, only the members in the same group as the signer can independently verify the signature. We prove that the signature scheme is existentially unforgeable under adaptive chosen message attack in the random oracle model.
A Hybrid RFID Protocol against Tracking Attacks
We study the problem of constructing an RFID mutual authentication protocol between tags and readers. Juels (in IEEE Journal on Selected Areas in Communications, 2006) posed an open question asking whether there exists a fully privacy-preserving RFID authentication protocol in which a successful authentication phase runs in time sub-linear in the number of tags participating in the RFID system.
In this work, we answer this question positively in an amortized view. Precisely, we design a fully privacy-preserving protocol in which the amortized cost of a successful authentication phase is sub-linear in the number of tags. In addition, our protocol is a hybrid of the randomized hash lock scheme of Weis et al., the hash chain scheme of Ohkubo et al., and the efficient identification scheme of Dimitriou. We combine them delicately to obtain the desired authentication protocol.
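The hash-chain ingredient borrowed from Ohkubo et al. can be sketched as follows (a simplification, with keyed SHA-256 standing in for the two hash functions); the naive server search below is exactly the linear-time cost that an amortized construction aims to avoid:

```python
# Ohkubo-style hash chain: the tag advances its state with H and emits
# G(state), so emissions are unlinkable while the server can search forward.
import hashlib

H = lambda s: hashlib.sha256(b"H" + s).digest()   # state-update hash
G = lambda s: hashlib.sha256(b"G" + s).digest()   # emission hash

def tag_emit(state):
    return G(state), H(state)        # (emitted value, next state)

def server_lookup(initial_states, value, max_steps=100):
    # exhaustive forward search over all tags: linear in the tag count
    for tag_id, s in initial_states.items():
        for step in range(max_steps):
            if G(s) == value:
                return tag_id, step
            s = H(s)
    return None
```

Separating G from H is what gives unlinkability: seeing G(s) reveals neither s nor the next emission G(H(s)).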
The Dark Side of Security by Obscurity and Cloning MiFare Classic Rail and Building Passes Anywhere, Anytime
MiFare Classic is the most popular contactless smart card, with about 200 million copies in circulation worldwide. At Esorics 2008, Dutch researchers showed that the underlying cipher Crypto-1 can be cracked in as little as 0.1 seconds if the attacker can access or eavesdrop on the RF communications with the (genuine) reader. We discovered that a MiFare Classic card can be cloned in a much more practical card-only scenario, where the attacker only needs to be in the proximity of the card for a number of minutes, therefore making usurpation of identity through pass cloning feasible at any moment and under any circumstances. For example, anybody sitting next to the victim on a train or a plane is now able to clone his/her pass. Other researchers have also (independently of us) discovered this vulnerability; however, our attack requires fewer queries to the card and does not require any precomputation. In addition, we discovered that certain versions or clones of MiFare Classic are even weaker, and can be cloned in 1 second. The main security vulnerability that we need to address with regard to MiFare Classic is not about cryptography, RFID protocols or software vulnerabilities. It is a systemic one: we need to understand how much our economy is vulnerable to sophisticated forms of electronic subversion where potentially one smart card developer can intentionally (or not), but quite easily in fact, compromise the security of governments, businesses and financial institutions worldwide.
How to Extract and Expand Randomness: A Summary and Explanation of Existing Results
We examine the use of randomness extraction and expansion in key agreement (KA) protocols to generate uniformly random keys in the standard model. Although existing works provide the basic theorems necessary, they lack details or examples of appropriate cryptographic primitives and/or parameter sizes. This has led to the large amount of min-entropy needed in the (non-uniform) shared secret being overlooked in proposals and efficiency comparisons of KA protocols. We therefore summarize existing work in the area and examine the security levels achieved with the use of various extractors and expanders for particular parameter sizes.
The tables presented herein show that the shared secret needs a min-entropy of at least 292 bits (and even more with more realistic assumptions) to achieve an overall security level of 80 bits using the extractors and expanders we consider. The tables may be used to find the min-entropy required for various security levels and assumptions. We also find that when using the short exponent theorems of Gennaro et al., the short exponents may need to be much longer than they suggested.
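The extract-then-expand paradigm under discussion can be sketched in the HKDF style of RFC 5869, with HMAC-SHA-256 serving as both extractor and expander; the salt, secret, and info strings below are placeholders:

```python
# HKDF-style extract-then-expand: HMAC extracts a short uniform key from a
# high-min-entropy shared secret, then expands it into key material.
import hmac, hashlib

def extract(salt, shared_secret):
    # condense the non-uniform secret into a pseudorandom key (PRK)
    return hmac.new(salt, shared_secret, hashlib.sha256).digest()

def expand(prk, info, length):
    # stretch the PRK into `length` bytes of output keying material
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

prk = extract(b"public-salt", b"placeholder-shared-secret-with-high-min-entropy")
key = expand(prk, b"session key", 32)
```

The paper's point is quantitative: the extract step only yields a uniform key if the shared secret carries enough min-entropy for the chosen security level.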
Practical Key Recovery Attack against Secret-prefix Edon-R
Edon-R is one of the fastest SHA-3 candidates. In this paper we study the security of Edon-R, and we show that using Edon-R as a MAC with the secret-prefix construction is unsafe. We present a practical attack in the case of Edon-R256, which requires 32 queries, 2^30 computations, negligible memory, and a precomputation of 2^50. This does not directly contradict the security claims of Edon-R or the NIST requirements for SHA-3, but we believe it reveals a strong weakness in the design.
A First Order Recursive Construction of Boolean Function with Optimum Algebraic Immunity
This paper proposes a first-order recursive construction of Boolean functions with optimum algebraic immunity. We also show that the constructed Boolean functions are balanced and have good algebraic degrees.
Last updated: 2009-03-30
Signature Schemes with Bounded Leakage Resilience
A leakage-resilient cryptosystem remains secure even if arbitrary information about the secret key (or possibly other internal state information) is leaked to an adversary. We demonstrate the first constructions of leakage-resilient signature schemes that remain secure as long as a bounded amount of information, depending on the length of the secret key, is leaked. We show efficient schemes in the random oracle model that handle leakage of a bounded number of bits of information about the signer's entire internal state. In the standard model, we show an inefficient scheme that can handle bounded leakage of information about the secret key, and a one-time signature scheme tolerating arbitrary leakage of a bounded number of bits.
Last updated: 2009-03-30
A New Lattice for Implicit Factoring
In PKC 2009, Alexander May and Maike Ritzenhofen \cite{MR} proposed an ingenious lattice-based method to factor RSA moduli with the help of an oracle: queried with one modulus, the oracle outputs another modulus whose corresponding prime factor shares a sufficiently large number of least significant bits with that of the queried modulus. They also showed that by asking more queries, the bound on the number of shared bits can be improved. In this paper, we propose a new lattice for implicit factoring in polynomial time in which the number of required shared bits can be a little smaller than in \cite{MR}. Moreover, we give a method in which the bound can also be improved, but with only one query. We further show that our method significantly reduces the running time of implicit factoring for balanced RSA moduli and makes it practical.
Key Predistribution Schemes in Distributed Wireless Sensor Network using Combinatorial Designs Revisited
A sensor node in a Wireless Sensor Network has very limited resources, such as processing capability, memory capacity, battery power, and communication capability. When the communication between any two sensor nodes is required to be secured, symmetric key cryptography is used for its advantage over public key cryptography in terms of requiring fewer resources. Keys are pre-distributed to each sensor node from a set of keys called a key pool before deployment of the sensor nodes. Combinatorial design greatly helps in determining the way keys are drawn from the key pool for distribution to individual sensor nodes. We study various deterministic key predistribution techniques that are based on combinatorial designs.
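One classic combinatorial predistribution, in the spirit of transversal-design schemes (toy parameters; not necessarily the exact constructions studied in the paper), assigns node (a, b) the keys on the line y = ax + b over GF(p); two nodes with different slopes then share exactly one key:

```python
# Transversal-design-style key predistribution over GF(p): each node is a
# line, each key a point on it; distinct slopes intersect in one point.
p = 7   # toy prime; supports p*p nodes, each holding p keys

def key_ring(a, b):
    # key identifiers held by node (a, b): the points of the line y = ax + b
    return {(x, (a * x + b) % p) for x in range(p)}

r1 = key_ring(2, 3)
r2 = key_ring(5, 1)
shared = r1 & r2     # the unique common key of the two nodes
```

Nodes with the same slope (parallel lines) share no key, so real schemes add structure to guarantee connectivity; the appeal is that key-ring size and sharing probability are exactly computable from the design.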
Constructions of Even-variable Boolean Function with Optimum Algebraic Immunity
This paper proposes an improved construction of even-variable Boolean functions with optimum algebraic immunity. Compared with those in~\cite{Carl06}, our Boolean functions are more balanced; in certain cases the constructed Boolean function is balanced. Furthermore, we generalize it to a class of constructions, yielding many more constructions.
Faster and Timing-Attack Resistant AES-GCM
We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core 2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. In particular, it is the only cache-timing-attack resistant implementation offering competitive speeds for stream as well as for packet encryption: for 576-byte packets, we improve performance over previous bitsliced implementations by more than a factor of 2. We also report more than 30% improved speeds for lookup-table-based Galois/Counter mode authentication, achieving 10.68 cycles/byte for authenticated encryption. Furthermore, we present the first constant-time implementation of AES-GCM that offers a reasonable speed, thus providing a full suite of timing-analysis resistant software for authenticated encryption.
Attacks on a Lightweight Cipher Based on a Multiple Recursive Generator
At IEEE GLOBECOM 2008, a lightweight cipher based on a Multiple Recursive Generator (MRG) was proposed for use in resource-limited environments such as sensor nodes and RFID tags. This paper proposes two efficient attacks on this MRG cipher. A distinguishing attack is first introduced to identify the use of an MRG cipher that has a modulus suggested by its designers. It requires a small number of ciphertext words and the most significant bit of each corresponding plaintext word. Then an efficient known plaintext attack is proposed to reconstruct the cipher's current state and generate the subkeys used for all subsequent encryption. The known plaintext attack, when targeted at the MRG ciphers optimized for efficiency, requires only 2k words of known plaintext and trivial computation, where k is the MRG order. Even the ciphers based on complicated and inefficient MRGs can be attacked with low complexity, e.g., with a modest number of words of known plaintext for all MRG ciphers of order 47, regardless of which MRG modulus is used. These two attacks indicate that the examined MRG cipher structure is seriously flawed.
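The core structural weakness can be illustrated with a toy MRG whose outputs are used directly as keystream (modulus, coefficients, and seed all invented here): k consecutive known words are the entire generator state, so everything after them is predictable:

```python
# Toy MRG keystream: x_n is a linear combination of the previous k words
# modulo m, so any k consecutive outputs reveal the full internal state.
m = 2 ** 31 - 1                # invented modulus
coeffs = [3, 5, 7]             # order k = 3; made-up coefficients
k = len(coeffs)

def mrg_next(state):
    x = sum(a * s for a, s in zip(coeffs, state)) % m
    return x, state[1:] + [x]

state = [123456, 789012, 345678]   # secret seed
stream = []
for _ in range(10):
    x, state = mrg_next(state)
    stream.append(x)

# an attacker who learns any k consecutive keystream words holds the state
pred_state = stream[2:5]
predicted = []
for _ in range(5):
    x, pred_state = mrg_next(pred_state)
    predicted.append(x)
```

The actual cipher post-processes MRG outputs, which is why the paper's attack needs 2k words and some algebra rather than k words; the linearity being exploited is the same.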
Side Channel Cube Attacks on Block Ciphers
In this paper we formalize the notion of {\it leakage attacks} on iterated block ciphers, in which the attacker can find (via physical probing, power measurement, or any other type of side channel) one bit of information about the intermediate state of the encryption after each round. Since bits computed during the early rounds can typically be represented by low-degree multivariate polynomials, cube attacks seem to be an ideal generic key recovery technique in these situations. However, the original cube attack requires extremely clean data, whereas the information provided by side channel attacks can be quite noisy. To address this problem, we develop a new variant of cube attack which can tolerate considerable levels of noise (affecting more than 11\% of the leaked bits in practical scenarios). Finally, we demonstrate our approach by describing efficient leakage attacks on two of the best known block ciphers, AES and SERPENT, achieving full key recovery in each case.
Threshold Attribute-Based Signatures and Their Application to Anonymous Credential Systems
Inspired by the recent developments in attribute-based encryption, in this paper we propose threshold attribute-based signatures (t-ABS). In a t-ABS, signers are associated with a set of attributes, and verification of a signed document against a verification attribute set succeeds if the signer has a threshold number (at least t) of attributes in common with the verification attribute set. A t-ABS scheme enables a signature holder to prove possession of signatures by revealing only the attributes of the signer relevant to the verification attribute set, hence providing signer-attribute privacy for the signature holder. We define t-ABS schemes, formalize their security and propose two t-ABS schemes: a basic scheme that is selectively unforgeable and a second one that is existentially unforgeable, both provable in the standard model, assuming hardness of the computational Diffie-Hellman problem. We show that our basic t-ABS scheme can be augmented with two extra protocols that are used for efficiently issuing and verifying t-ABS signatures on committed values. We call the augmented scheme a threshold attribute-based c-signature scheme (t-ABCS). We show how a t-ABCS scheme can be used to realize a secure threshold attribute-based anonymous credential system (t-ABACS) providing signer-attribute privacy. We propose a security model for t-ABACS and give a concrete scheme using the t-ABCS scheme. Using the simulation paradigm, we prove that the credential system is secure if the t-ABCS scheme is secure.
A Full Key Recovery Attack on HMAC-AURORA-512
In this note, we present a full key recovery attack on HMAC-AURORA-512 when 512-bit secret keys are used and the MAC length is 512 bits. The number of queries and the number of off-line AURORA-512 operations that our attack requires are significantly less than the complexity of exhaustive search for a 512-bit key. The attack can be carried out with a negligible amount of memory. Our attack can also recover the inner key of HMAC-AURORA-384 with almost the same complexity as for HMAC-AURORA-512. This attack does not recover the outer key of HMAC-AURORA-384, but universal forgery is possible by combining the inner-key recovery with 2nd-preimage attacks. Our attack exploits some weaknesses in the mode of operation.
Practical Secure Evaluation of Semi-Private Functions
Two-party Secure Function Evaluation (SFE) is a very useful
cryptographic tool which allows two parties to evaluate a function known to both parties on their private (secret) inputs.
Some applications with sophisticated privacy needs require the function to be known only to one party and kept private (hidden) from the other one. However, existing solutions for SFE of private functions (PF-SFE) deploy Universal Circuits
(UC) and are still very inefficient in practice.
In this paper we bridge the gap between SFE and PF-SFE with SFE of what we call {\em semi-private functions} (SPF-SFE), i.e., one function out of a given class of functions is evaluated without revealing which one.
We present a general framework for SPF-SFE allowing a fine-grained trade-off and tuning between SFE and PF-SFE covering both extremes. In our framework, semi-private functions can be composed from several privately programmable blocks (PPB) which can be programmed with one function out of a class of functions. The framework allows efficient and secure embedding of constants into the resulting circuit to improve performance. To demonstrate practicability of the framework we have implemented a compiler for SPF-SFE based on the Fairplay SFE framework.
SPF-SFE is sufficient for many practically relevant privacy-preserving applications, such as privacy-preserving credit checking which can be implemented using our framework and compiler as described in the paper.
On the Complexity of Integer Factorization
This note presents a deterministic integer factorization algorithm based on a system of polynomial equations. The main result establishes a new deterministic time complexity benchmark.
Hardware Accelerator for the Tate Pairing in Characteristic Three Based on Karatsuba-Ofman Multipliers
This paper is devoted to the design of fast parallel accelerators for the cryptographic Tate pairing in characteristic three over supersingular elliptic curves. We propose a novel hardware implementation of Miller's loop based on a pipelined Karatsuba-Ofman multiplier. Thanks to a careful selection of algorithms for computing the tower field arithmetic associated with the Tate pairing, we manage to keep the pipeline busy. We also describe the strategies we considered to design our parallel multiplier. They are included in a VHDL code generator allowing for the exploration of a wide range of operators. We then outline the architecture of a coprocessor for the Tate pairing. However, a final exponentiation is still needed to obtain a unique value, which is desirable in most cryptographic protocols. We supplement our pairing accelerator with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources. According to our place-and-route results on Xilinx FPGAs, our design improves both the computation time and the area-time trade-off compared to previously published coprocessors.
Last updated: 2009-07-22
Optimized Public Key Infrastructure -- A PKI to Support Efficient Document's Signatures
Optimized Public Key Infrastructures are traditional PKIs in which end users may optimize the signatures of their documents, replacing the signer's validation data with Optimized Certificates (OCs). OCs carry the signer's identification and public key but are issued for a specific time, i.e., the fields notBefore and notAfter have the same value, so there is no reason to revoke them. The OC's certification path is supposed to be shorter and uses Micali's revocation scheme. Furthermore, OCs include the signed document's hash codes, thereby also working as time-stamps. Therefore, OCs are useful for replacing a signed document's validation data with data that is smaller and easier to verify. Finally, when OCs become invalid due to cryptographic algorithm weakness or limits in the validity periods of their certificate chains, they can easily be replaced by new ones; thus this proposal is suitable for efficient long-term archiving.
On the Complexity of Khovratovich et al.'s Preimage Attack on Edon-R
Based on the analysis by van Oorschot and Wiener of the complexity of parallel memoryless collision search, we show that the memoryless meet-in-the-middle attack, which is one part of the whole preimage attack of Khovratovich et al. on the Edon-R hash function, has complexity greater than 2^n.
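Memoryless collision search of the kind analyzed here can be sketched with Floyd's cycle-finding on a truncated hash; this illustrates the O(1)-memory technique itself (seed and truncation are toy choices), not the attack's complexity accounting:

```python
# Memoryless collision search: iterate x -> f(x) and use Floyd's
# tortoise-and-hare to find a collision on a 24-bit truncated hash
# without storing any table of previous values.
import hashlib

def f(x):
    return hashlib.sha256(x).digest()[:3]    # truncate to 24 bits

def find_collision(seed=b"collision-search-seed"):
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:                  # detect the rho cycle
        tortoise = f(tortoise)
        hare = f(f(hare))
    # walk from the start and from the meeting point in lockstep; the step
    # where the two walks merge yields two distinct preimages of one value
    a, b = seed, tortoise
    while f(a) != f(b):
        a, b = f(a), f(b)
    return a, b

a, b = find_collision()
```

A collision appears after roughly 2^12 evaluations here; the paper's point is that for the full attack the constants and parallelization costs of such searches push the total above 2^n.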
A Continuous Fault Countermeasure for AES Providing a Constant Error Detection Rate
Many implementations of cryptographic algorithms have been shown to be susceptible to fault attacks. For some of them, countermeasures against specific fault models have been proposed. However, for symmetric algorithms like AES, the main focus of available countermeasures lies on performance, so their achieved error detection rates are rather low or not determinable at all. Even worse, those error detection rates apply only to specific parts of the cipher. In this paper we present a way to achieve a constantly higher error detection rate throughout the whole algorithm while assuming a much stronger adversary model than previous papers. Furthermore, we propose solutions for two very important, unsolved questions: first, how to do secure and efficient table lookups in redundant algebras; second, how to implement secure correctness checks to verify the result in a scenario where the adversary can manipulate comparisons. Our paper is therefore the first to construct a sound and continuous AES fault countermeasure with an attacker-independent minimum error detection rate.
A2BE: Accountable Attribute-Based Encryption for Abuse Free Access Control
As a recently proposed public key primitive, attribute-based
encryption (ABE) (including Ciphertext-policy ABE (CP-ABE) and
Key-policy ABE (KP-ABE)) is a highly promising tool for secure
access control. In this paper, the issue of key abuse in ABE is
formulated and addressed. Two kinds of key abuse problems are
considered, i) illegal key sharing among colluding users and ii)
misbehavior of the semi-trusted attribute authority including
illegal key (re-)distribution. Both problems are extremely important
as in an ABE-based access control system, the attribute private keys
directly imply users' privileges to the protected resources. To the best of our knowledge, such key abuse problems exist in all current ABE schemes, as the attribute private keys assigned to users are never designed to be linked to any user-specific information except the commonly shared user attributes.
To be concrete, we focus on the prevention of key abuse in CP-ABE in
this paper \footnote{Our technique can easily be extended to KP-ABE
as well.}. The notion of accountable CP-ABE (CP-A2BE, in short) is first proposed to prevent illegal key sharing among colluding users. Accountability for the user is achieved by embedding additional user-specific information in the attribute private key issued to that user. To further obtain accountability for the attribute authority as well, the notion of strong CP-A2BE is proposed, allowing each attribute private key to be linked to the corresponding user's secret, which is unknown to the attribute authority. We show how to construct such a strong CP-A2BE scheme and prove its security based on the computational Diffie-Hellman assumption. Finally, we show how to utilize the new technique to solve some open problems in previous accountable identity-based encryption schemes.
Changing probabilities of differentials and linear sums via isomorphisms of ciphers
Two ciphers are isomorphic if there exists an invertible map between them, computable in both directions. A cipher is vulnerable if and only if an isomorphic cipher is vulnerable. Instead of computing the key of a cipher, it is sufficient to find a suitable isomorphic cipher and compute its key. Conjugation of a round substitution by an arbitrary substitution is a cipher isomorphism. Conjugate substitutions have the same cycle type. Conjugation can be composed with affine maps.
Combining conjugation and affine equivalence, we can sometimes transform a special non-linear S-box into a conjugate affine substitution. Usually, for a given S-box, there are many different auxiliary substitutions. The conjugate diffusion map and XOR operation become non-linear, but by taking an appropriate auxiliary substitution we can obtain large probabilities of differentials and linear sums for the diffusion map and XOR.
For example, the AES substitution (finite field inversion) is approximately conjugate to a bit-changing substitution. That conjugate substitution has differentials and linear sums of probability 1. The corresponding byte substitution defines a non-linear conjugate diffusion map and a non-linear conjugate of the XOR operation with the round key. Probabilities of differentials (biases of linear sums) of the byte substitution of the conjugate diffusion map are 8-12 times greater than the corresponding values of the original S-box. Probabilities of differentials of the conjugate XOR with a round key byte depend on the round key and can be 1 for some key bytes.
Information Theoretically Secure Multi Party Set Intersection Re-Visited
We revisit the problem of secure multiparty set intersection in the information theoretic setting. In \cite{LiSetMPCACNS07}, Li et al. proposed a protocol for the multiparty set intersection problem with parties, that provides information theoretic security when parties are corrupted by an active adversary having {\it unbounded computing power}. The authors claimed that their protocol takes six rounds of communication and communicates field elements, where each party has a set containing field elements. However, we show that the round and communication complexity of the protocol in \cite{LiSetMPCACNS07} is much more than what is claimed. We then propose a {\it novel} information theoretically secure protocol for multiparty set intersection, which significantly improves the "actual" round and communication complexity (as shown in this paper) of the protocol given in \cite{LiSetMPCACNS07}. To design our protocol, we use several tools which are of independent interest.
Scalable Compilers for Group Key Establishment: Two/Three Party to Group
This work presents the first scalable, efficient and generic compilers to construct group key exchange (GKE) protocols from two/three-party key exchange (2-KE/3-KE) protocols. We propose three different compilers: the first is a 2-KE to GKE compiler (2-TGKE) for tree topology, the second is also for tree topology but from 3-KE to GKE (3-TGKE), and the third constructs a GKE from 3-KE for circular topology. Our compilers 2-TGKE and 3-TGKE are the first of their kind and are efficient due to the underlying tree topology. For the circular topology, we design a compiler called 3-CGKE. The 2-TGKE and 3-TGKE compilers require a total of communication, compared to the existing compiler for circular topology, where the communication cost is . By extending the compilers 2-TGKE and 3-TGKE using the techniques in \cite{DLB07}, scalable compilers for tree-based authenticated group key exchange protocols (2-TAGKE/3-TAGKE), secure against active adversaries, can be constructed. As an added advantage, our compilers can be used in a setting where there is an asymmetric distribution of computing power. Finally, we present a constant-round authenticated group key exchange (2-TAGKE) obtained by applying the Diffie-Hellman protocol and the technique in \cite{DLB07} to our compiler 2-TGKE. We prove the security of our compilers in the stronger Real-or-Random model and do not assume the existence of random oracles.
Weakness of Key Predistribution Scheme Proposed by J. Dong et al.
A sensor node in a wireless sensor network has very limited resources, such as processing capability, memory capacity, battery power, and communication capability. When the communication between any two sensor nodes is required to be secured, symmetric key cryptography is used for its advantage over public key cryptography in terms of requiring fewer resources. Keys are pre-distributed to each sensor node from a set of keys called the key pool before deployment of the sensor nodes. Combinatorial design greatly helps determine the way keys are drawn from the key pool for distribution to individual sensor nodes. J. Dong et al. proposed a key predistribution scheme based on orthogonal arrays. We present the weakness of this predistribution scheme.
Attacks on AURORA-512 and the Double-Mix Merkle-Damgaard Transform
We analyse the Double-Mix Merkle-Damgaard construction (DMMD) used in the AURORA family of hash functions. We show that DMMD falls short of providing the expected level of security. Specifically, we are able to find 2nd pre-images for AURORA-512 in time 2^{291}, and collisions in time 2^{234.4}. A limited-memory variant finds collisions in time 2^{249}.
A 2nd-Preimage Attack on AURORA-512
In this note, we present a 2nd-preimage attack on AURORA-512, which is one of the candidates for SHA-3. Our attack can generate a 2nd-preimage of any given message; in particular, the attack complexity becomes optimal when the message length is 9 blocks or more. In such a case, the attack complexity is approximately AURORA-512 operations, which is less than that of the brute force attack on AURORA-512, namely, . Our attack exploits some weakness in the mode of operation.
Short Chosen-Prefix Collisions for MD5 and the Creation of a Rogue CA Certificate
We present a refined chosen-prefix collision construction for MD5 that allowed creation of a rogue Certification Authority (CA) certificate, based on a collision with a regular end-user website certificate provided by a commercial CA. Compared to the previous construction from Eurocrypt 2007, this paper describes a more flexible family of differential paths and a new variable birthdaying search space. Combined with a time-memory trade-off, these improvements lead to just three pairs of near-collision blocks to generate the collision, enabling construction of RSA moduli that are sufficiently short to be accepted by current CAs. The entire construction is fast enough to allow for adequate prediction of certificate serial number and validity period: it can be made to require about 2^{49} MD5 compression function calls. Finally, we improve the complexity of identical-prefix collisions for MD5 to about 2^{16} MD5 compression function calls and use it to derive a practical single-block chosen-prefix collision construction of which an example is given.
On the Security of Stream Cipher CryptMT v3
CryptMT v3 is a stream cipher submitted to the eSTREAM project, and has entered the third evaluation phase. No attack on it has been found so far. In this paper, we mainly discuss the security of the state initialization process of CryptMT v3. For the key and IV setup function, we construct a probabilistic testing algorithm with a distinguishing probability of 1, which indicates that for each key, the function is a non-PRF. However, we have not found any non-randomness in the keystream output.
Cryptanalysis of Stream Cipher Grain Family
Grain v1 is one of the 7 final candidates of the ECRYPT eSTREAM project and uses an 80-bit secret key. Grain-128 is a variant with a 128-bit secret key, and Grain v0 is the original version from the first evaluation phase. First, we describe a distinguishing attack against the Grain family with weak Key-IVs. Utilizing the second Walsh spectra of the nonlinear functions, we show that there are / / weak Key-IVs among the total / / Key-IVs, and that distinguishing a weak Key-IV needs about / / keystream bits and / / operations for Grain v0, Grain v1 and Grain-128 respectively. Second, we apply algebraic attacks to the Grain family with a weak Key-IV, and can recover the secret key in about 2 seconds using 150 keystream bits for Grain v0 and Grain v1, and reveal the key of Grain-128 with about 100 keystream bits and operations. Furthermore, we discuss the period of the keystream with a weak Key-IV for any Grain-like structure, which can lead to a self-sliding attack.
Further Results on Implicit Factoring in Polynomial Time
In PKC 2009, May and Ritzenhofen presented interesting problems related to factoring large integers with some implicit hints. One of the problems is as follows. Consider and , where are large primes. The primes are of the same bit-size with the constraint that a certain number of Least Significant Bits (LSBs) of are the same. Further, the primes are of the same bit-size without any constraint. May and Ritzenhofen proposed a strategy to factorize both in polynomial time ( is an integer of the same bit-size as ) given the implicit information that share a certain number of LSBs. We explore the same problem with a different lattice-based strategy. In a general framework, our method works when implicit information is available on the Least Significant as well as the Most Significant Bits (MSBs). Given , we show that one can factor simultaneously in polynomial time (under an assumption related to Gröbner bases) when share a certain number of MSBs and/or LSBs. We also study the case when share some bits in the middle. Our strategy presents new and encouraging results in this direction. Moreover, some of the observations by May and Ritzenhofen are improved when we apply our ideas to the LSB case.
Compact E-Cash and Simulatable VRFs Revisited
Efficient non-interactive zero-knowledge proofs are a powerful tool for solving many cryptographic problems. We apply the recent Groth-Sahai (GS) proof system for pairing product equations (Eurocrypt 2008) to two related cryptographic problems: compact e-cash (Eurocrypt 2005) and simulatable verifiable random functions (CRYPTO 2007).
We present the first efficient compact e-cash scheme that does not rely on a random oracle in its security proof. To this end we construct efficient GS proofs for signature possession, pseudorandomness and set membership. The GS proofs for pseudorandom functions give rise to a much cleaner and substantially faster construction of simulatable verifiable random functions (sVRFs) under a weaker number-theoretic assumption. We obtain the first efficient fully simulatable sVRF with a polynomial-sized output domain (in the security parameter).
A Collision Attack on AURORA-512
In this note, we present a collision attack on AURORA-512,
which is one of the candidates for SHA-3.
The attack complexity is approximately AURORA-512 operations,
which is less than the birthday bound of AURORA-512, namely, .
Our attack exploits some weakness in the mode of operation.
Public-Key Cryptosystems Resilient to Key Leakage
Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture {\em side-channel attacks}. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the ``cold boot attacks'' of Halderman et al. (USENIX
Security '08), Akavia, Goldwasser and Vaikuntanathan (TCC '09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev's lattice-based scheme (STOC '05) is resilient to any leakage of bits, where is the length of the secret key.
In this paper we revisit the above-mentioned framework and our main results are as follows:
-- We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any {\em universal hash proof system}. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying proof system. Existing constructions of such proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker -Linear variants), the quadratic residuosity assumption, and Paillier's composite residuosity assumption.
-- We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its -Linear variants), and show that the resulting scheme is resilient to any leakage of bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO '08), constructed to be a ``circular-secure'' encryption scheme, fits our generic approach and is also resilient to any leakage of bits.
-- We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of bits, and CCA2-secure with any leakage of bits.
1024 - A High Security Software Oriented Block Cipher
A cryptographer with a weak algorithm gets feedback almost instantly from the open crypto community. But what about government cryptanalysis? Given that there is a considerable amount of cryptanalysis behind closed doors, what is to be done to make COMINT deaf? The NSA, as the most closely examined SIGINT agency, has a workforce of 38,000 [2], among them several thousand cryptologists. The actual wiretapping is done by the Central Security Service with 25,000 women and men. Other industrialised states also have thousands of cryptologists on their payrolls.
The block cipher 1024 is an attempt to make cryptanalysis more difficult, especially differential (DC) and linear cryptanalysis. The assumption is that the increased security will also defeat other cryptanalytic attacks not yet known to the open crypto community.
1024 has a block size of 1024 bits and a key length of 2048 bits.
Constructing pairing-friendly hyperelliptic curves using Weil restriction
A pairing-friendly curve is a curve over a finite field whose Jacobian has small embedding degree with respect to a large prime-order subgroup. In this paper we construct pairing-friendly genus 2 curves over finite fields whose Jacobians are ordinary and simple, but not absolutely simple. We show that constructing such curves is equivalent to constructing elliptic curves over that become pairing-friendly over a finite extension of . Our main proof technique is Weil restriction of elliptic curves. We describe adaptations of the Cocks-Pinch and Brezing-Weng methods that produce genus 2 curves with the desired properties. Our examples include a parametric family of genus 2 curves whose Jacobians have the smallest recorded -value for simple, non-supersingular abelian surfaces.
A Step Towards QC Blind Signatures
In this paper we propose a conversion from signature schemes connected to coding theory into blind signature schemes. We give formal security reductions to combinatorial problems not connected to number theory. This is the first blind signature scheme that cannot be broken by quantum computers via cryptanalyzing the underlying signature scheme using Shor's algorithms. We thus present a step towards diversifying the computational assumptions on which blind signatures can be based.
We achieve blind signatures by a different concept of blinding: Instead of blinding the message, we blind the public key, such that generating a (blind) signature for the blinded key requires the interaction of the holder of the original secret key. To verify the blind signature, the connection between the original and the blinded key is proven by a static ZK proof. The major ingredient for our conversion is the PKP protocol by Shamir.
Encryption Schemes Secure under Selective Opening Attack
We provide the first public key encryption schemes proven secure against selective opening attack
(SOA). This means that if an adversary obtains a number of ciphertexts and then corrupts some
fraction of the senders, obtaining not only the corresponding messages but also the coins under which
they were encrypted, then the security of the other messages is guaranteed. Whether or not schemes with
this property exist has been open for many years. Our schemes are based on a primitive we call lossy
encryption. Our schemes have short keys (public and secret keys of a fixed length suffice for encrypting
an arbitrary number of messages), are stateless, are non-interactive, and security does not rely on
erasures. The schemes are without random oracles, proven secure under standard assumptions (DDH,
Paillier’s DCR, QR, lattices), and even efficient. We are able to meet both an indistinguishability
(IND-SOA-C) and a simulation-style, semantic security (SS-SOA-C) definition.
Computing the endomorphism ring of an ordinary elliptic curve over a finite field
We present two algorithms to compute the endomorphism ring of an ordinary elliptic curve E defined over a finite field F_q. Under suitable heuristic assumptions, both have subexponential complexity. We bound the complexity of the first algorithm in terms of log q, while our bound for the second algorithm depends primarily on log |D_E|, where D_E is the discriminant of the order isomorphic to End(E). As a byproduct, our method yields a short certificate that may be used to verify that the endomorphism ring is as claimed.
A Single Initialization Server for Multi-Party Cryptography
We present information-theoretically secure bit commitment, zero-knowledge and multi-party computation based on the assistance of an initialization server. In the initialization phase, the players interact with the server to gather resources that are later used to
perform useful protocols. This initialization phase does not depend on the input of the protocol it will later enable. Once the initialization is complete, the server’s assistance is no longer
required. This paper improves on previous work as there is only one server and it does not need to be trusted. If the server is honest, the protocols are secure against any coalition of dishonest players. If all players are honest, then there is an exponentially small probability that both the initialization phase succeeds and that later the protocol fails. That is, the server cannot create a situation in the initialization phase that would lead honest players to accuse each other. The protocols are built in a modular fashion and achieve linear complexity for the players in terms of the security parameter, number of players and the size of the circuit.
Attacking Cryptographic Schemes Based on "Perturbation Polynomials"
We show attacks on several cryptographic schemes that have recently been proposed for achieving various security goals in sensor networks. Roughly speaking, these schemes all use "perturbation polynomials" to add "noise" to polynomial-based systems that offer information-theoretic security, in an attempt to increase the resilience threshold while maintaining efficiency. We show that the heuristic security arguments given for these modified schemes do not hold, and that they can be completely broken once we allow even a slight extension of the parameters beyond those achieved by the underlying information-theoretic schemes.
Our attacks apply to the key predistribution scheme of Zhang et al. (MobiHoc~2007), the access-control schemes of Subramanian et al. (PerCom~2007), and the authentication schemes of Zhang et~al. (INFOCOM~2008).
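The polynomial-based systems that these schemes perturb are in the style of Blundo-type key predistribution from a symmetric bivariate polynomial. A minimal sketch, with field size and degree chosen purely for illustration:

```python
import random

# Sketch of the kind of polynomial-based scheme being perturbed: Blundo-style
# key predistribution from a symmetric bivariate polynomial f(x, y) = f(y, x)
# over a prime field. Field size and degree here are illustrative only.
P = 2**31 - 1          # prime field modulus

def make_poly(deg):
    """Random symmetric coefficient matrix: c[i][j] == c[j][i]."""
    c = [[0] * (deg + 1) for _ in range(deg + 1)]
    for i in range(deg + 1):
        for j in range(i, deg + 1):
            c[i][j] = c[j][i] = random.randrange(P)
    return c

def share(c, node_id):
    """A node's share is the univariate polynomial g(y) = f(node_id, y)."""
    deg = len(c) - 1
    return [sum(c[i][j] * pow(node_id, i, P) for i in range(deg + 1)) % P
            for j in range(deg + 1)]

def pairwise_key(share_coeffs, other_id):
    """Evaluate the node's share at the other node's identity."""
    return sum(s * pow(other_id, j, P)
               for j, s in enumerate(share_coeffs)) % P

c = make_poly(3)
a, b = share(c, 17), share(c, 42)
# symmetry of f guarantees both nodes derive the same pairwise key
assert pairwise_key(a, 42) == pairwise_key(b, 17)
```

Such a scheme is information-theoretically secure up to the polynomial degree; the attacked schemes add "noise" to push past that threshold, which is exactly the heuristic step the paper breaks.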
Identification of Multiple Invalid Signatures in Pairing-based Batched Signatures
This paper describes new methods in pairing-based signature schemes for identifying the invalid digital signatures in a batch, after batch verification has failed. These methods efficiently identify non-trivial numbers of invalid signatures in batches of (potentially large) numbers of signatures.
Our methods use "divide-and-conquer" search to identify the invalid signatures within a batch, but prune the search tree to substantially reduce the number of pairing computations required. The methods presented in this paper require computing on average O(w) products of pairings to identify w invalid signatures within a batch of size N, compared with the O(w(log2(N/w)+1)) [for w < N/2] that traditional divide-and-conquer methods require. Our methods avoid the problem of exponential growth in expected computational cost that affects earlier proposals which, on average, require computing O(w) products of pairings.
We compare the expected performance of our batch verification methods with previously published divide-and-conquer and exponential-cost methods for Cha-Cheon identity-based signatures [6]. However, our methods also apply to a number of short signature schemes as well as to other identity-based signature schemes.
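The baseline divide-and-conquer search that these methods improve on can be sketched generically. Here `batch_ok` stands in for a batched pairing verification, and the toy "signatures" are plain booleans rather than real signatures:

```python
# Generic divide-and-conquer identification of invalid items in a batch --
# the baseline the pruned methods improve on. `batch_ok` stands in for a
# batched pairing verification; the toy "signatures" are plain booleans.
def find_invalid(items, batch_ok):
    """Return indices of all invalid items via recursive batch splitting."""
    def rec(lo, hi):
        if batch_ok(items[lo:hi]):
            return []                      # whole sub-batch verifies: prune
        if hi - lo == 1:
            return [lo]                    # single failing item identified
        mid = (lo + hi) // 2
        return rec(lo, mid) + rec(mid, hi)
    return rec(0, len(items))

sigs = [True] * 16
sigs[3] = sigs[11] = False                 # plant two invalid signatures
bad = find_invalid(sigs, lambda chunk: all(chunk))
assert bad == [3, 11]
```

Each recursive call costs one batch verification, which is what yields the O(w(log2(N/w)+1)) count the paper cites for the traditional method.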
A note on the security of MST3
In this paper, we study the recently proposed encryption scheme MST3, focusing on a concrete instantiation using Suzuki 2-groups. In a passive scenario, we argue that the one-wayness of this scheme may not, as claimed, be proven without the assumption that factoring group elements with respect to random covers for a subset of the group is hard. As a result, we conclude that for the proposed Suzuki 2-group instantiation, impractical key sizes would have to be used in order to prevent more or less straightforward factorization attacks.
Enhanced Privacy ID from Bilinear Pairing
Enhanced Privacy ID (EPID) is a cryptographic scheme that enables the
remote authentication of a hardware device while preserving the privacy
of the device. EPID can be seen as a direct anonymous attestation scheme with enhanced revocation capabilities. In EPID, a device can be
revoked if the private key embedded in the hardware device has been
extracted and published widely so that the revocation manager finds the
corrupted private key. In addition, the revocation manager can revoke a
device based on the signatures the device has signed, if the private
key of the device is not known. In this paper, we introduce a new
security notion of EPID including the formal definitions of anonymity
and unforgeability with revocation. We also give a construction of an
EPID scheme from bilinear pairing. Our EPID scheme is efficient and
provably secure in the random oracle model under the strong
Diffie-Hellman assumption and the decisional Diffie-Hellman assumption.
On the Lower Bounds of the Second Order Nonlinearity of some Boolean Functions
The -th order nonlinearity of a Boolean function is an important
cryptographic criterion in analyzing the security of stream as well
as block ciphers. It is also important in coding theory as it is
related to the covering radius of the Reed-Muller code .
In this paper we deduce lower bounds on the second order nonlinearity
of the two classes of Boolean functions of the form
\begin{enumerate}
\item
with
and where .
\item
where and
is an integer such that , .
\end{enumerate}
For some , the first class gives bent functions whereas
Boolean functions of the second class are all bent, i.e., they achieve
optimum first order nonlinearity.
Cascade Encryption Revisited
The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a recent paper, Bellare and Rogaway showed that in the ideal cipher model, triple encryption is significantly more secure than single and double encryption, stating the security of longer cascades as an open question.
In this paper, we propose a new lemma on the indistinguishability of systems extending Maurer's theory of random systems. In addition to being of independent interest, it allows us to compactly rephrase Bellare and Rogaway's proof strategy in this framework, thus making the argument more abstract and hence easier to follow. As a result, this allows us to address the security of longer cascades as well as some errors in their paper. Our result implies that for blockciphers with a smaller key space than message space (e.g. DES), longer cascades improve the security of the encryption up to a certain limit. This partially answers the open question mentioned above.
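The reason double encryption improves security only marginally, which motivates triple encryption as the shortest reasonable cascade, is the classical meet-in-the-middle attack. A toy sketch (the "cipher" below is an arbitrary invertible toy mapping, not DES or AES, and the key sizes are illustrative):

```python
# Toy meet-in-the-middle attack on double encryption: recovering a 2k-bit
# key pair with roughly 2 * 2^k work and 2^k memory instead of 2^(2k) work.
# The "cipher" is an arbitrary invertible toy mapping, NOT a real blockcipher.
def enc(key, block, mod=2**16):
    return (block * (2 * key + 1) + key) % mod   # odd multiplier => invertible

def dec(key, block, mod=2**16):
    inv = pow(2 * key + 1, -1, mod)              # modular inverse (Python 3.8+)
    return ((block - key) * inv) % mod

KEYBITS = 8  # toy per-stage key space of 2^8

def mitm(plain, cipher):
    """Find (k1, k2) with enc(k2, enc(k1, plain)) == cipher."""
    # Forward table: middle value after the first encryption -> candidate k1.
    table = {}
    for k1 in range(2**KEYBITS):
        table.setdefault(enc(k1, plain), []).append(k1)
    # Backward pass: peel off the second encryption and look up the middle.
    for k2 in range(2**KEYBITS):
        for k1 in table.get(dec(k2, cipher), []):
            return k1, k2
    return None

c = enc(201, enc(37, 5))     # double-encrypt plaintext 5 under keys (37, 201)
rk1, rk2 = mitm(5, c)
assert enc(rk2, enc(rk1, 5)) == c   # recovered pair is consistent with c
```

With a single known plaintext the attack may return a different but consistent key pair; additional plaintext/ciphertext pairs filter such false positives.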
Reducing RFID Reader Load with the Meet-in-the-Middle Strategy
In almost any RFID system, a reader needs to identify, and optionally authenticate, a multitude of
tags. If each tag has a unique secret, identification and authentication are trivial; however, the reader (or a back-end server) needs to perform a brute-force search for each tag-reader
interaction. In this paper, we suggest a simple, efficient and secure technique that reduces reader
computation to . Our technique is based on the well-known
``meet-in-the-middle'' strategy used in the past to attack certain symmetric ciphers.
Knapsack Cryptosystem on Elliptic Curves
The LLL algorithm is a strong algorithm that breaks additive knapsack cryptosystems. However, the LLL algorithm is not applicable to the addition in the group of rational points of elliptic curves over finite fields. We therefore consider a knapsack cryptosystem constructed on elliptic curves. By using pairings for decryption, we show that the computational complexity of decryption can be made polynomial time by building the decryption function from pairing values.
A Brief History of Provably-Secure Public-Key Encryption
Public-key encryption schemes are a useful and interesting field of cryptographic study. The ultimate goal for the cryptographer in the field of public-key encryption would be the production of a very efficient encryption scheme with a proof of security in a strong security model using a weak and reasonable computational assumption. This ultimate goal has yet to be reached. In this invited paper, we survey the major results that have been achieved in the quest to find such a scheme.
A Provably Secure And Efficient Countermeasure Against Timing Attacks
We show that the amount of information about the key that an
unknown-message attacker can extract from a deterministic
side-channel is bounded from above by |O| \log_2 (n+1) bits, where
n is the number of side-channel measurements and O is the set of
possible observations. We use this bound to derive a novel
countermeasure against timing attacks, where the strength of the
security guarantee can be freely traded for the resulting
performance penalty. We give algorithms that efficiently and
optimally adjust this trade-off for given constraints on the
side-channel leakage or on the efficiency of the
cryptosystem. Finally, we perform a case-study that shows that
applying our countermeasure leads to implementations with minor
performance overhead and formal security guarantees.
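The bound |O| \log_2 (n+1) is easy to evaluate for concrete parameters; the sketch below uses illustrative numbers (a timing channel with 4 distinguishable durations and 1000 measurements), not figures from the paper's case study:

```python
import math

# Evaluating the paper's upper bound |O| * log2(n+1) on the number of key
# bits an unknown-message attacker can extract from n measurements of a
# deterministic side channel with observation set O. Numbers are illustrative.
def leakage_bound(num_observations, n):
    """num_observations = |O|, n = number of side-channel measurements."""
    return num_observations * math.log2(n + 1)

# e.g. a timing channel with 4 distinguishable durations and 1000 queries
bits = leakage_bound(4, 1000)
assert bits < 40   # about 39.9 bits of the key, at most
```

The bound grows only logarithmically in the number of measurements, which is what makes the measurement-limiting countermeasure trade-off workable.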
Lossy Encryption: Constructions from General Assumptions and Efficient Selective Opening Chosen Ciphertext Security
In this paper, we present new and general constructions of lossy encryption schemes. By applying results from Eurocrypt '09, we obtain new general constructions of cryptosystems secure against selective opening adversaries (SOA). Although it was recognized almost twenty years ago that SOA security was important, it was not until the recent breakthrough works of Hofheinz and of Bellare, Hofheinz and Yilek that any progress was made on this fundamental problem.
The Selective Opening problem is as follows: suppose an adversary receives commitments (or encryptions) of (possibly) correlated messages, and now the adversary can choose of the messages, and receive de-commitments (or decryptions and the randomness used to encrypt them). Do the unopened commitments (encryptions) remain secure? A protocol achieving this type of security is called secure against a selective opening adversary (SOA). This question arises naturally in the context of Byzantine Agreement and Secure Multiparty
Computation, where an active adversary is able to eavesdrop on all the wires, and then choose a subset of players to corrupt. Unfortunately, the traditional definitions of security (IND-CPA, IND-CCA) do not guarantee security in this setting. In this paper:
We formally define re-randomizable encryption and show that every re-randomizable encryption scheme gives rise to efficient encryptions secure against a selective opening adversary. (Very informally, an encryption is re-randomizable, if given any ciphertext, there is an efficient way to map it to an almost uniform re-encryption of the same underlying message).
We define re-randomizable one-way functions and show that every re-randomizable one-way function family gives rise to efficient commitments secure against a selective opening adversary.
We show that statistically-hiding 2-round Oblivious Transfer (OT) implies Lossy Encryption and so do smooth hash proof systems, as defined by Cramer-Shoup. Combining this with known results immediately shows that Private Information Retrieval and homomorphic encryption both imply Lossy Encryption, and thus Selective Opening Secure Public Key Encryption.
Applying our constructions to well-known cryptosystems (such as Elgamal or Paillier), we obtain selective opening secure commitments and encryptions from the Decisional Diffie-Hellman (DDH), Decisional Composite Residuosity (DCR) and Quadratic Residuosity (QR) assumptions, that are either simpler or more efficient than existing constructions of Bellare, Hofheinz and Yilek. By applying our general results to the Paillier cryptosystem, we obtain the first cryptosystem to achieve simulation-based selective opening security from the DCR assumption.
We provide (indistinguishability and simulation-based) definitions of adaptive chosen-ciphertext security (CCA2) in the selective opening setting and describe the first encryption schemes that provide security in the sense of this definition. In the indistinguishability-based model, we notably achieve short ciphertexts under standard number theoretic assumptions. In the simulation-based security chosen-ciphertext attack scenario, we handle non-adaptive (i.e., CCA1) adversaries and describe the first encryption scheme which is simultaneously CCA1 and SOA-secure.
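The re-randomizability invoked above is concrete in the Elgamal scheme cited in the results: multiplying a ciphertext component-wise by a fresh encryption of the identity yields an almost uniform re-encryption of the same message. A minimal sketch over a toy prime-order subgroup (tiny illustrative parameters, not secure):

```python
import random

# Toy subgroup: order-11 subgroup of Z_23^*, generated by g = 4.
# These tiny parameters are for illustration only.
p, q, g = 23, 11, 4

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(ct):
    """Map (c1, c2) to a fresh encryption of the same message by
    multiplying in an encryption of the identity element 1."""
    s = random.randrange(1, q)
    c1, c2 = ct
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def decrypt(ct):
    c1, c2 = ct
    return c2 * pow(c1, (q - sk) % q, p) % p  # c2 / c1^sk

m = pow(g, 3, p)           # messages must lie in the subgroup
ct = encrypt(m)
assert decrypt(rerandomize(ct)) == decrypt(ct) == m
```

The re-randomized ciphertext is distributed like a fresh encryption of m, which is exactly the property the construction above exploits.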
Last updated: 2012-07-11
Unconditionally Secure Asynchronous Multiparty Computation with Quadratic Communication Per Multiplication Gate
Secure multiparty computation (MPC) allows a set of n parties to securely compute an agreed function, even if up to t parties are under the control of an adversary. In this paper, we propose a new {\it asynchronous secure multiparty computation} (AMPC) protocol
that provides information-theoretic security with n = 4t+1, where t out of the n parties can be under the influence of a {\it Byzantine (active)} adversary having {\it unbounded computing power}. Our protocol communicates O(n^2 log|F|) bits per multiplication and involves a negligible error probability of 2^{-κ}, where κ is the error parameter and F is the field over which the computation is carried out. The best known information-theoretically secure AMPC with n = 4t+1 communicates more bits per multiplication and does not involve any error probability in computation. Though a negligible error probability is involved, our AMPC protocol provides the best communication complexity among all known AMPC protocols providing information-theoretic security. Moreover, the communication complexity of our AMPC is the same as that of the best known AMPC protocol with {\it cryptographic assumptions}.
As a tool for our AMPC protocol, we propose a new method of efficiently generating {\it d-sharings} of multiple secrets concurrently in the asynchronous setting, where t <= d <= 2t, which is of independent interest. In the literature, though there are protocols for generating t-sharings and 2t-sharings separately, there is no generic protocol for generating {\it d-sharings} over the whole range t <= d <= 2t. Moreover, our protocol provides better communication complexity than the existing methods for generating such sharings.
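In Shamir-style sharing, a d-sharing of a secret is simply a random degree-d polynomial with the secret as constant term, evaluated at the parties' points; any d+1 shares reconstruct. A toy synchronous sketch (hypothetical parameters; the paper's asynchronous protocol is far more involved):

```python
import random

P = 2**31 - 1  # prime field for the toy example

def share(secret, d, n):
    """d-sharing: random degree-d polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(d)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from d+1 shares (x, f(x))."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

n, t = 7, 2
for d in range(t, 2 * t + 1):   # the whole range t <= d <= 2t
    shares = share(42, d, n)
    pts = list(enumerate(shares, start=1))[: d + 1]
    assert reconstruct(pts) == 42
```

Degree d trades privacy (any d shares reveal nothing) against the number of shares needed for reconstruction, which is why intermediate values of d between t and 2t are useful.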
Point Compression for Koblitz Elliptic Curves
Uncategorized
Uncategorized
Elliptic curves over finite fields have applications in public key cryptography. A Koblitz curve is an elliptic curve defined over the binary field F_2; the group E(F_{2^m}) has convenient features for efficient implementation of elliptic curve cryptography.
Wiener and Zuccherato and Gallant, Lambert and Vanstone showed that one can accelerate the Pollard rho algorithm for the discrete logarithm problem on Koblitz curves. This implies that when using Koblitz curves, one has a lower security per bit than when using general elliptic curves defined over the same field. Hence for a fixed security level, systems using Koblitz curves require slightly more bandwidth.
We present a method to reduce this bandwidth when a normal basis representation for F_{2^m} is used. Our method is appropriate for applications such as Diffie-Hellman key exchange or Elgamal encryption. We show that, with a low probability of failure, our method gives the expected bandwidth for a given security level.
UC-Secure Source Routing Protocol
Uncategorized
Uncategorized
Multi-path routing provides a reliability guarantee for mobile ad hoc networks. We propose a new method for analyzing the security of multi-path routing protocols within the framework of Universally Composable (UC) security. Based on a topological model that includes adversarial nodes, the concept of a plausible route is extended and the notion of a plausible-route set is defined. Plausible-route sets are used to describe multi-path routing in ad hoc networks, and a formal security definition based on UC-RP is given. A provably secure Multiple Node-Disjoint Paths source routing protocol (SMNDP) is proposed, which addresses the security faults of MNDP in the active adversary model. We show that the security of SMNDP reduces to the security of the underlying message authentication code and digital signature scheme. SMNDP ensures the correctness of the route discovery process, the authentication of node identifiers, and the integrity of route information.
Simulation without the Artificial Abort: Simplified Proof and Improved Concrete Security for Waters' IBE Scheme
Waters' variant of the Boneh-Boyen IBE scheme is attractive because of its efficiency, applications, and security attributes, but suffers from a relatively complex proof with poor concrete security. This is due in part to the proof's ``artificial abort'' step, which has since been inherited by numerous derivative works. It has often been asked whether this step is necessary. We show that it is not, providing a new proof
that eliminates this step. The new proof is not only simpler than the original one but offers better concrete security for important ranges of
the parameters. As a result, one can securely use smaller groups, resulting in significant efficiency improvements.
Multi-authority attribute based encryption with honest-but-curious central authority
An attribute based encryption scheme capable of handling multiple authorities was recently proposed by Chase. The scheme is built upon a single-authority attribute based encryption scheme presented earlier
by Sahai and Waters. Chase’s construction uses a trusted central authority that is inherently capable of decrypting arbitrary ciphertexts created within the system. We present a multi-authority attribute based encryption scheme in which only the set of recipients defined by the encrypting party can decrypt a corresponding ciphertext. The central authority is viewed as “honest-but-curious”: on the one hand it honestly follows the protocol, and on the other hand it is curious to decrypt arbitrary ciphertexts thus violating the intent of the encrypting party. The proposed scheme, which like its predecessors relies on the Bilinear Diffie-Hellman assumption, has a complexity comparable to that of Chase’s scheme. We prove that our scheme is secure in the selective ID model and can tolerate an honest-but-curious central authority.
The Case for Quantum Key Distribution
Uncategorized
Uncategorized
Quantum key distribution (QKD) promises secure key agreement by using quantum mechanical systems. We argue that QKD will be an important part of future cryptographic infrastructures. It can provide long-term confidentiality for encrypted information without reliance on computational assumptions. Although QKD still requires authentication to prevent man-in-the-middle attacks, it can make use of either information-theoretically secure symmetric key authentication or computationally secure public key authentication: even when using public key authentication, we argue that QKD still offers stronger security than classical key agreement.
Ensuring Data Storage Security in Cloud Computing
Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
CoSP: A General Framework For Computational Soundness Proofs
Uncategorized
Uncategorized
We describe CoSP, a general framework for conducting computational
soundness proofs of symbolic models and for embedding these proofs
into formal calculi. CoSP considers arbitrary equational theories and
computational implementations, and it abstracts away many details that
are not crucial for proving computational soundness, such as message
scheduling, corruption models, and even the internal structure of a
protocol. CoSP enables soundness results, in the sense of preservation
of trace properties, to be proven in a conceptually modular and
generic way: proving x cryptographic primitives sound for y calculi
only requires x+y proofs (instead of x*y proofs without this
framework), and the process of embedding calculi is conceptually
decoupled from computational soundness proofs of cryptographic
primitives. We exemplify the usefulness of CoSP by proving the first
computational soundness result for the full-fledged applied
pi-calculus under active attacks. Concretely, we embed the applied
pi-calculus into CoSP and give a sound implementation of public-key
encryption and digital signatures.
From Dolev-Yao to Strong Adaptive Corruption: Analyzing Security in the Presence of Compromising Adversaries
We formalize a hierarchy of adversary models for security protocol analysis, ranging from a Dolev-Yao style adversary to more powerful adversaries who can reveal different parts of principals' states during protocol execution. We define our hierarchy by a modular operational semantics describing adversarial capabilities. We use this to formalize various practically-relevant notions of key and state compromise. Our semantics can be used as a basis for protocol analysis tools. As an example, we extend an existing symbolic protocol-verification tool with our adversary models. The result is the first tool that systematically supports notions such as weak perfect forward secrecy, key compromise impersonation, and adversaries capable of so-called strong corruptions and state-reveal queries. As further applications, we use our model hierarchy to relate different adversarial notions, gaining new insights into their relative strengths, and we use our tool to find new attacks on protocols.
Attacks on the DECT authentication mechanisms
Digital Enhanced Cordless Telecommunications (DECT) is a standard for connecting cordless telephones to a fixed telecommunications network over a short range. The cryptographic algorithms used in DECT are not publicly available. In this paper we reveal one of the two algorithms used by DECT, the DECT Standard Authentication Algorithm (DSAA). We give a very detailed security analysis of the DSAA including some very effective attacks on the building blocks used for DSAA as well as a common implementation error that can practically lead to a total break of DECT security. We also present a low cost attack on the DECT protocol, which allows an attacker to impersonate a base station and therefore listen to and reroute all phone calls made by a handset.
On the Security of Iterated Hashing based on Forgery-resistant Compression Functions
In this paper we re-examine the security notions suggested for hash
functions, with an emphasis on the delicate notion of second
preimage resistance. We start by showing that, in the random oracle
model, both Merkle-Damgaard and HAIFA achieve second preimage resistance beyond
the birthday bound, and actually up to the level of known generic
attacks, hence demonstrating the optimality of HAIFA in this respect.
We then try to distill a more elementary requirement out of the
compression function to get some insight on the properties it should
have to guarantee the second preimage resistance of its
iteration. We show that if the (keyed) compression function is a
secure FIL-MAC then the Merkle-Damgaard mode of iteration (or HAIFA) still
maintains the same level of second preimage resistance. We conclude
by showing that this ``new'' assumption (or security notion)
implies the recently introduced
Preimage-Awareness while ensuring all other classical security
notions for hash functions.
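The iteration under discussion is just the chaining of a (here keyed) fixed-input-length compression function over padded message blocks, with MD strengthening. A sketch using a hypothetical SHA-256-based stand-in for the keyed compression function (an illustration, not a secure FIL-MAC):

```python
import hashlib

BLOCK = 64  # bytes per message block in this sketch

def compress(key: bytes, chain: bytes, block: bytes) -> bytes:
    # Stand-in for a keyed FIL compression function (illustration only).
    return hashlib.sha256(key + chain + block).digest()

def md_hash(key: bytes, msg: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    # MD strengthening: pad with 0x80, zeros, then the message length.
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK)
    padded += len(msg).to_bytes(8, "big")
    h = iv
    for i in range(0, len(padded), BLOCK):
        h = compress(key, h, padded[i:i + BLOCK])
    return h

d1 = md_hash(b"k", b"hello")
assert d1 == md_hash(b"k", b"hello")   # deterministic
assert d1 != md_hash(b"k2", b"hello")  # key-dependent
```

HAIFA differs from this plain mode mainly by also feeding a counter (and salt) into each compression call, which is what blocks the generic long-message second preimage attacks mentioned above.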
Construction of large families of pseudorandom subsets using elliptic curves
Recently, Dartyge and Sárközy investigated measures of pseudorandomness (the well-distribution measure and the correlation measure of order k) for subsets of the set {1, 2, ..., N}, and they presented several constructive examples of subsets with strong pseudorandom properties when N is a prime number. In this article, we present a construction of pseudorandom subsets using elliptic curves over finite fields and estimate the pseudorandom measures. Character sums play an important role in the proofs.
Security of Practical Cryptosystems Using Merkle-Damgard Hash Function in the Ideal Cipher Model
Uncategorized
Uncategorized
Since Merkle-Damgård (MD) type hash functions are differentiable from random oracles (ROs) even when their compression functions are modeled by ideal primitives, there is no guarantee as to the security of cryptosystems when ROs are instantiated with such structured hash functions.
In this paper, we study the security of instantiated cryptosystems where the hash functions have the well-known Merkle-Damgård structure with Stam's type-II compression function (denoted MD-TypeII) in the Ideal Cipher Model (ICM).
Note that since the type-II scheme includes the Davies-Meyer compression function, SHA-256 and SHA-1 have the MD-TypeII structure.
We show that OAEP, RSA-KEM, PSEC-KEM, ECIES-KEM and many other encryption schemes are secure when using the MD-TypeII hash function.
In order to show this, we customize the indifferentiability framework of Maurer, Renner and Holenstein; we call the customized framework ``indifferentiability with condition''.
In this framework, for some condition α that a cryptosystem C satisfies, if a hash function H is indifferentiable from an RO under condition α, then C is secure when the RO is instantiated with H.
We note that the above schemes satisfy the condition of ``prefix-freeness'', and we show that the MD-TypeII hash function is indifferentiable from an RO under this condition.
When the output length of the RO is incompatible with that of the hash function, the output size is expanded by a Key Derivation Function (KDF).
Since a KDF is specified as MGF1 in RSA's PKCS #1 v2.1, its security discussion is important in practice.
We show that KDFs using the MD-TypeII hash function (KDF-MD-TypeII) are indifferentiable from ROs under the prefix-free condition.
Therefore, we can conclude that the above practical encryption schemes are secure even when ROs are instantiated with (KDF-)MD-TypeII hash functions.
Dodis, Ristenpart and Shrimpton showed that FDH, PSS, Fiat-Shamir, and so on are secure when the RO is instantiated with the MD-TypeII hash function in the ICM. Their analyses use a different approach from ours, called indifferentiability from a public-use RO (pub-RO): they showed that these cryptosystems are secure in the pub-RO model and that the MD-TypeII hash function is indifferentiable from a pub-RO.
Since their analyses did not consider the structure of KDFs, there might exist an attack exploiting a KDF's structure.
We show that KDFs using a pub-RO (KDF-pub-RO) are differentiable from a pub-RO, so we cannot trivially extend the result of Dodis et al. to indifferentiability for KDF-MD-TypeII hash functions.
We propose a new oracle, called a private-interface-leak RO (privleak-RO), and show that KDF-pub-ROs are indifferentiable from privleak-ROs and that the above cryptosystems are secure in the privleak-RO model.
Therefore, by combining the result of Dodis et al. with ours, we can conclude that these cryptosystems are secure when ROs are instantiated with KDF-MD-TypeII hash functions.
Since OAEP, RSA-KEM, PSEC-KEM, ECIES-KEM and many other encryption schemes are insecure in the pub-RO (privleak-RO) model, the security of these encryption schemes cannot be confirmed by the approach of Dodis et al.; their result can thus be supplemented with ours.
Consequently, the two results together confirm the security of almost all practical cryptosystems when ROs are instantiated with (KDF-)MD-TypeII hash functions.
Computational Oblivious Transfer and Interactive Hashing
Uncategorized
Uncategorized
We use interactive hashing to achieve the most efficient OT protocol to date based solely on the assumption that trapdoor permutations (TDP) exist. Our protocol can be seen as the following (simple) modification of either of the two famous OT constructions: 1) in the one by Even et al. (1985), the receiver must send a random domain element to the sender through IH; 2) in the one by Ostrovsky et al. (1993), the players should use a TDP instead of a one-way permutation. A similar approach is employed to achieve oblivious transfer based on the security of the McEliece cryptosystem.
In this second protocol, the receiver inputs a public key into IH, while privately keeping the corresponding secret key. Two different versions of IH are used: the computationally secure one in the first protocol, and the information-theoretically secure one in the second.
Automatic Approach of Provable Security and its Application for OAEP+
Provable security is an important criterion for analyzing the security of cryptographic protocols. However, writing and verifying proofs by hand is prone to errors. This paper introduces the game-based approach to writing security proofs and its automation. It advocates an automatic security proof approach based on process calculus, and presents the initial game and observational equivalences for OAEP+.
Implementing cryptographic pairings: a magma tutorial
In this paper we show an efficient implementation of the Tate, ate, and R-ate pairings in Magma. This is demonstrated using the KSS curves with embedding degree k=18.
Secret sharing on trees: problem solved
We determine the worst case information rate for all secret sharing schemes based on trees. It is the inverse of 2 - 1/c, where c is the size of the maximal core in the tree. A {\it core} is a connected subset of the vertices such that every vertex in the core has a neighbor outside the core. The upper bound comes from an application of the entropy method, while the lower bound is achieved by a construction using Stinson's decomposition theorem.
It is shown that 2 - 1/c is also the {\it fractional cover number} of the tree, where the edges of the tree are covered by stars and the fractional cover is minimized. We also give an algorithm which finds an optimal cover on any tree, and thus a perfect secret sharing scheme with optimal rate.
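The core notion is easy to explore by brute force on small trees: enumerate connected vertex subsets and keep those in which every vertex has a neighbour outside the subset; the maximal core size c then determines the worst-case information ratio 2 - 1/c. A toy sketch (exponential, for illustration only):

```python
from itertools import combinations

# A small tree as an adjacency list: a path on 6 vertices.
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

def is_connected(vs):
    vs = set(vs)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        for w in tree[stack.pop()]:
            if w in vs and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vs

def is_core(vs):
    """Connected set in which every vertex has a neighbour outside."""
    s = set(vs)
    return is_connected(s) and all(
        any(w not in s for w in tree[v]) for v in s)

c = max(len(vs)
        for r in range(1, len(tree) + 1)
        for vs in combinations(tree, r)
        if is_core(vs))
ratio = 2 - 1 / c  # on a path the largest core has 2 vertices
```

On a path any three consecutive vertices contain a middle vertex with both neighbours inside, so cores have size at most 2 and the ratio is 3/2.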
Low Complexity Cubing and Cube Root Computation over F_{3^m} in Polynomial Basis
Uncategorized
Uncategorized
We present low-complexity formulae for the computation of cubing and cube roots over F_{3^m} constructed using special classes of irreducible trinomials, tetranomials and pentanomials.
We show that for all these special classes of polynomials, the field cubing and field cube root operations have the same computational complexity when implemented on hardware or software platforms.
As one of the main applications of these two field arithmetic operations lies in pairing-based cryptography, we also give a selection of irreducible polynomials that lead to low-cost field cubing and field cube root computations for supersingular elliptic curves defined over F_{3^m}, where m is a prime number in the range of cryptographic interest for pairings.
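Field cubing in characteristic three is linear thanks to the Frobenius map: cubing a polynomial simply spreads its coefficients to triple-index positions, after which one reduces modulo the field polynomial. A toy polynomial-basis sketch with schoolbook reduction, using x^97 + x^12 + 2, a trinomial that appears in the characteristic-three pairing literature (assumed here as the field polynomial):

```python
# Field element of F_{3^97} as a list of 97 coefficients in {0,1,2}.
M = 97
MOD = [2] + [0] * 11 + [1] + [0] * 84 + [1]  # x^97 + x^12 + 2 over GF(3)

def poly_mod(c):
    """Reduce a GF(3)[x] polynomial modulo the field polynomial."""
    c = list(c)
    for i in range(len(c) - 1, M - 1, -1):
        if c[i]:
            for j, fj in enumerate(MOD[:-1]):
                c[i - M + j] = (c[i - M + j] - c[i] * fj) % 3
            c[i] = 0
    return [x % 3 for x in c[:M]]

def cube(a):
    """Frobenius: in characteristic 3, (sum a_i x^i)^3 = sum a_i x^(3i)."""
    c = [0] * (3 * M)
    for i, ai in enumerate(a):
        c[3 * i] = ai
    return poly_mod(c)

def mul(a, b):
    """Schoolbook multiplication in GF(3)[x] followed by reduction."""
    c = [0] * (2 * M)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[i + j] = (c[i + j] + ai * bj) % 3
    return poly_mod(c)

a = [1, 2, 0, 1] + [0] * (M - 4)
assert cube(a) == mul(mul(a, a), a)  # Frobenius cubing equals a*a*a
```

The low-cost formulae in the paper come from the fact that, for suitably chosen trinomials, the reduction after the coefficient spreading collapses into a handful of additions.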
Optimistic Fair Exchange with Multiple Arbiters
Fair exchange is one of the most fundamental problems in secure distributed computation. Alice has something that Bob wants, and Bob has something that Alice wants. A fair exchange protocol would guarantee that, even if one of them maliciously deviates from the protocol, either both of them get the desired content, or neither of them do. It is known that no two-party protocol can guarantee fairness in general; therefore the presence of a trusted arbiter is necessary. In optimistic fair exchange, the arbiter only gets involved in case of faults, but needs to be trusted. To reduce the trust put in the arbiter, it is natural to consider employing multiple arbiters.
Expensive techniques like Byzantine agreement or secure multi-party computation with Omega(n^2) communication can be applied to distribute arbiters in a non-autonomous way. Yet we are interested in efficient protocols that can be achieved by keeping the arbiters autonomous (non-communicating), especially for p2p settings in which the arbiters do not even know each other. Avoine and Vaudenay employ multiple autonomous arbiters in their optimistic fair exchange protocol, which uses global timeout mechanisms; all arbiters have access to loosely synchronized clocks. They left two open questions regarding the use of distributed autonomous arbiters: (1) Can an optimistic fair exchange protocol without timeouts provide fairness (since it is hard to achieve synchronization in a p2p setting) when employing multiple autonomous arbiters? (2) Can any other optimistic fair exchange protocol with timeouts achieve better bounds on the number of honest arbiters required? In this paper, we answer both questions negatively. To answer these questions, we define a general class of optimistic fair exchange protocols with multiple arbiters, called "distributed arbiter fair exchange" (DAFE) protocols. Informally, in a DAFE protocol, if a participant fails to send a correctly formed message, the other party must contact some subset of the arbiters and get correctly formed responses from them. The arbiters do not communicate with each other, but only with Alice and Bob. We prove that no DAFE protocol can meaningfully exist.
Overview of Turbo-Code Reconstruction Techniques
In this paper we analyze different techniques to blindly recover the parameters of turbo-code encoders with only the knowledge of a noisy eavesdropped binary stream. We compare different strategies for reconstructing convolutional encoders, particularly when both constituent encoders are recursive systematic coders. We explain the intrinsic indetermination inherent in these techniques, and also how to overcome it in the case of turbo-code encoders. Moreover, we generalize Burel et al.'s reconstruction of the second (n,1)-encoder to (n,k)-encoders.
On fractional correlation immunity of majority functions
Correlation immunity is known as an important cryptographic measure of a Boolean function with respect to its resistance against correlation attacks. This paper generalizes the concept of correlation immunity to a fractional value, called the fractional correlation immunity, which is a fraction between 0 and 1; a correlation immune function is the extreme case where the fractional correlation immunity is 1. Even when a function is not correlation immune in the traditional sense, it may still have a nonzero fractional correlation immunity, which also indicates the resistance of the function against correlation attacks.
This paper first shows why this generalized concept of fractional correlation immunity is a reasonable measure of resistance against correlation attacks, and then studies the fractional correlation immunity of a special class of Boolean functions, namely the majority functions, of which the subset of symmetric ones has been proved to have the highest algebraic immunity. This paper shows that all majority functions, both symmetric and non-symmetric, fail to be correlation immune. However, their fractional correlation immunity approaches 1 as the number of variables grows. This means that this class of functions also has good resistance against correlation attacks, although its members are not correlation immune in the traditional sense.
Adaptive Preimage Resistance and Permutation-based Hash Functions
Uncategorized
Uncategorized
In this paper, we introduce a new notion of security, called \emph{adaptive preimage resistance}. We prove that a compression function that is collision resistant and adaptive preimage resistant can be combined with a public random function to yield a hash function that is indifferentiable from a random oracle.
Specifically, we analyze the adaptive preimage resistance of 2n-bit to n-bit compression functions that use three calls to n-bit public random permutations. This analysis also provides a simpler proof of their collision resistance and preimage resistance than the one provided by Rogaway and Steinberger. By using such compression functions as building blocks, we obtain permutation-based pseudorandom oracles that outperform the Sponge construction and the MD6 compression function both in terms of security and efficiency.
Foundations of Non-Malleable Hash and One-Way Functions
Non-malleability is an interesting and useful property which ensures
that a cryptographic protocol preserves the independence of the
underlying values: given for example an encryption Enc(m) of some
unknown message m, it should be hard to transform this ciphertext
into some encryption Enc(m*) of a related message m*. This
notion has been studied extensively for primitives like encryption,
commitments and zero-knowledge.
Non-malleability of one-way functions and hash functions has
surfaced as a crucial property in several recent results,
but it has not undergone a comprehensive
treatment so far. In this paper we initiate the study of such
non-malleable functions. We start with the design of an appropriate
security definition. We then show that non-malleability for hash and
one-way functions can be achieved, via a theoretical
construction that uses perfectly one-way hash functions and
simulation-sound non-interactive zero-knowledge proofs of knowledge
(NIZKPoK).
We also discuss the complexity of non-malleable hash
and one-way functions. Specifically, we give a black-box based separation of non-malleable functions from one-way permutations (which our construction bypasses due to the 'non-black-box' NIZKPoK).
We exemplify the usefulness of our definition in
cryptographic applications by showing that non-malleability is
necessary and sufficient to securely replace one of the two random
oracles in the IND-CCA encryption scheme by Bellare and Rogaway, and
to improve the security of client-server puzzles.
On the Data Complexity of Statistical Attacks Against Block Ciphers (full version)
Many attacks on iterated block ciphers rely on statistical considerations using plaintext/ciphertext pairs to distinguish some part of the cipher from a random permutation. We provide here a simple formula for estimating the amount of plaintext/ciphertext pairs which is needed for such distinguishers and which applies to a lot of different scenarios (linear cryptanalysis, differential-linear cryptanalysis, differential/truncated differential/impossible differential cryptanalysis). The asymptotic data complexities of all these attacks are then derived. Moreover, we give an efficient algorithm for computing the data complexity accurately.
CCZ-equivalence and Boolean functions
Uncategorized
Uncategorized
We study further the CCZ-equivalence of (n,m)-functions. We prove that, for Boolean functions (that is, for m = 1), CCZ-equivalence coincides with EA-equivalence. On the contrary, we show that for (n,m)-functions, CCZ-equivalence is strictly more general than EA-equivalence when n >= 5 and m is greater than or equal to the smallest positive divisor of n different from 1.
Our result on Boolean functions allows us to study the natural generalization of CCZ-equivalence corresponding to the CCZ-equivalence of the indicators of the graphs of the functions. We show that it coincides with CCZ-equivalence.
On Deterministic Polynomial-Time Equivalence of Computing the CRT-RSA Secret Keys and Factoring
Let N = pq be the product of two large primes. Consider CRT-RSA with public encryption exponent e and private decryption exponents d_p, d_q. It is well known that given any one of d_p or d_q (or both), one can factorize N in probabilistic polynomial time with success probability almost equal to 1. Though this serves all practical purposes, from a theoretical point of view it is not a deterministic polynomial-time algorithm. In this paper, we present a lattice-based deterministic polynomial-time algorithm that uses both d_p and d_q (in addition to the public information e, N) to factorize N for certain ranges of e. We stress that proving the equivalence for all values of e may be a nontrivial task.
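The probabilistic direction is classical: with e*d_p = 1 (mod p-1), every m satisfies m^(e*d_p) = m (mod p), so a gcd with N reveals p almost surely. A toy sketch (tiny primes for illustration; real moduli are far larger):

```python
import math
import random

# Tiny illustrative primes; real RSA primes are 1024+ bits.
p, q = 104729, 1299709
N, e = p * q, 65537
dp = pow(e, -1, p - 1)  # CRT exponent: e*dp == 1 (mod p-1)

def factor_with_dp(N, e, dp):
    """Given (N, e, dp), factor N probabilistically:
    m**(e*dp) == m (mod p) for every m, so gcd(m**(e*dp) - m, N)
    is p unless the same congruence accidentally holds mod q too."""
    while True:
        m = random.randrange(2, N)
        g = math.gcd(pow(m, e * dp, N) - m, N)
        if 1 < g < N:
            return g

assert factor_with_dp(N, e, dp) == p
```

The point of the paper is precisely that this randomness can be removed (for certain exponent ranges) via lattice techniques when both CRT exponents are known.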
Security Enhancement of Various MPKCs by 2-layer Nonlinear Piece In Hand Method
Following the last proposal of the nonlinear Piece in Hand method, which has a 3-layer structure, a 2-layer nonlinear Piece in Hand method is proposed. Both aim at enhancing the security of existing and future multivariate public key cryptosystems. The new nonlinear Piece in Hand method is compared with the 3-layer method and with PMI+, which was proposed by Ding et al.
Comparing Two Pairing-Based Aggregate Signature Schemes
In 2003, Boneh, Gentry, Lynn and Shacham (BGLS) devised the first
provably-secure aggregate signature scheme. Their scheme uses bilinear
pairings and their security proof is in the random oracle model.
The first pairing-based aggregate signature scheme which has a security
proof that does not make the random oracle assumption was proposed in
2006 by Lu, Ostrovsky, Sahai, Shacham and Waters (LOSSW). In this paper,
we compare the security and efficiency of the BGLS and LOSSW schemes
when asymmetric pairings derived from Barreto-Naehrig (BN) elliptic
curves are employed.
On the impossibility of graph secret sharing
A {\it perfect secret sharing scheme} based on a graph G is a randomized distribution of a secret among the vertices of the graph so that the secret can be recovered from the information assigned to the vertices at the endpoints of any edge, while the total information assigned to any independent set of vertices is independent (in the statistical sense) of the secret itself.
The efficiency of a scheme is measured by the amount of information the most heavily loaded vertex receives divided by the amount of information in the secret itself. The (worst case) {\it information ratio} of G is the infimum of this number over all schemes for G. We calculate the best lower bound on the information ratio, for an infinite family of graphs, that the celebrated entropy method can give.
We argue that all existing constructions for secret sharing schemes are special cases of the generalized vector space construction. We give direct constructions of this type for the first two members of the family, and show that for the other members no construction exists which would match the bound yielded by the entropy method.
On Generalization of Cheon's Algorithm
We consider a generalization of Cheon's algorithm on the strong Diffie-Hellman problem.
More specifically, we consider the circumstance that p^k - 1 has a small divisor for k >= 3, where p is the order of the group on which we consider the strong Diffie-Hellman problem.
It seems that our algorithm is only effective for k = 1, 2, that is, for the original Cheon algorithm.
Anonymity in Shared Symmetric Key Primitives
We provide a stronger definition of anonymity in the context of shared symmetric key primitives, and show that
existing schemes do not provide this level of anonymity. A new scheme is presented to share symmetric key operations
amongst a set of participants according to a (t, n)-threshold access structure.
We quantify the amount of information the output of the shared operation provides about the group of
participants which collaborated to produce it.
Designing an ASIP for Cryptographic Pairings over Barreto-Naehrig Curves
This paper presents a design-space exploration of an application-specific instruction-set
processor (ASIP) for the computation of various cryptographic pairings over Barreto-Naehrig
curves (BN curves). Cryptographic pairings are based on elliptic curves over finite fields--in
the case of BN curves a field Fp of large prime order p. Efficient arithmetic in these fields is
crucial for fast computation of pairings. Moreover, computation of cryptographic pairings is
much more complex than elliptic-curve cryptography (ECC) in general. Therefore, we facilitate
programming of the proposed ASIP by providing a C compiler.
In order to speed up Fp-arithmetic, a RISC core is extended with additional functional units.
The critical path delay of these units is adjusted to the base architecture in order to maintain
the operating frequency. Independently from that adjustment, these units are scalable allowing
for a trade-off between execution time and area consumption. Because the resulting speedup
can be limited by the memory throughput, utilization of multiple data memories is proposed.
However, developing a C compiler for multiple memories is a challenging task. Therefore, we
introduce an enhanced memory system enabling multiple concurrent memory accesses while
remaining totally transparent to the C compiler.
The proposed design needs 15.8 ms for the computation of the Optimal-Ate pairing over a
256-bit BN curve at 338 MHz, implemented with a 130 nm standard-cell library. The processor
core occupies 97 kGates, making it suitable for use in embedded systems.
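A quick arithmetic check of the reported figures: at 338 MHz, a latency of 15.8 ms corresponds to roughly 5.34 million clock cycles per Optimal-Ate pairing.

```python
# 15.8 ms at a 338 MHz clock:
cycles = 15.8e-3 * 338e6
print(f"{cycles:,.0f} cycles")  # 5,340,400 cycles
```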
Universally Composable Symmetric Encryption
For most basic cryptographic tasks, such as public key
encryption, digital signatures, authentication, key
exchange, and many other more sophisticated tasks, ideal
functionalities have been formulated in the
simulation-based security approach, along with their
realizations. Surprisingly, however, no such functionality
exists for symmetric encryption, except for a more abstract
Dolev-Yao style functionality. In this paper, we fill this
gap. We propose two functionalities for symmetric
encryption, an unauthenticated and an authenticated
version, and show that they can be implemented based on
standard cryptographic assumptions for symmetric encryption
schemes, namely IND-CCA security and authenticated
encryption, respectively. We also illustrate the usefulness
of our functionalities in applications, both in
simulation-based and game-based security settings.
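The real-world assumption behind the authenticated functionality, authenticated encryption, is commonly obtained via the generic encrypt-then-MAC construction. The following is a toy stdlib-only illustration of that construction (using HMAC-SHA256 as a PRF in counter mode), not the paper's formalism and not production-grade cryptography.

```python
import hashlib
import hmac
import os
import struct

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream from HMAC-SHA256 used as a PRF (toy only)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hmac.new(key, nonce + struct.pack(">Q", ctr), hashlib.sha256).digest()
        ctr += 1
    return out[:length]

def ae_encrypt(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    """Encrypt-then-MAC: XOR the message with a keystream, then tag
    nonce || ciphertext with an independent MAC key."""
    nonce = os.urandom(16)
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def ae_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the tag in constant time before decrypting."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

ek, mk = os.urandom(32), os.urandom(32)
blob = ae_encrypt(ek, mk, b"hello, UC world")
assert ae_decrypt(ek, mk, blob) == b"hello, UC world"
```

Rejecting any modified ciphertext before decryption is what distinguishes the authenticated variant from plain IND-CCA-style encryption in the unauthenticated setting.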