## All papers in 2009 (638 results)

Last updated: 2010-01-02

Reducing Elliptic Curve Logarithm to Logarithm in a Finite Field $\mathbb{F}_q$ for Some Orders

We reduce the elliptic curve discrete logarithm problem to the discrete logarithm problem in $\mathbb{F}_q$ using elliptic divisibility sequences. The method works when
$\#E(\mathbb{F}_q)=q-1,\ q\pm \sqrt{q},\ q\pm \sqrt{2q-1},\ q\pm \sqrt{3q-2},\ q+\sqrt{4q-3}$.

Efficient Characteristic Set Algorithms for Equation Solving in Finite Fields and Application in Analysis of Stream Ciphers

Efficient characteristic set methods for computing solutions of a
polynomial equation system in a finite field are proposed.
We introduce the concept of proper triangular sets and prove that
proper triangular sets are square-free and have solutions.
We present an improved algorithm which can be used to reduce the
zero set of an equation system in general form to the union of zero
sets of proper triangular sets. Bitsize complexity for the algorithm
is given in the case of Boolean polynomials.
We also give a characteristic set method for Boolean polynomials,
where the size of the polynomials is effectively controlled.
The methods are implemented, and extensive experiments show that they
are quite efficient for solving equations arising in the analysis of
certain classes of stream ciphers.

Obtaining More Karatsuba-Like Formulae over The Binary Field

The aim of this paper is to find more Karatsuba-like formulae for a fixed set of moduli polynomials in GF(2)[x]. To this end, a theoretical framework is established. We first generalize the division algorithm, and then present a generalized definition of the remainder of integer division. Finally, a previously generalized Chinese remainder theorem is used to achieve our initial goal. As a by-product of the generalized remainder of integer division, we rediscover Montgomery’s N-residue and present a systematic interpretation of definitions of Montgomery’s multiplication and addition operations.
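
The simplest instance of such a formula, reproduced here only for illustration (it is the classical base case, not one of the new formulae found in the paper), multiplies two degree-1 polynomials over GF(2) with three coefficient multiplications instead of four:

```python
import itertools

# Classical Karatsuba for two degree-1 polynomials a0 + a1*x and b0 + b1*x
# over GF(2): three bit-multiplications (AND) instead of the schoolbook four.
# Over GF(2), both addition and subtraction are XOR.

def karatsuba_gf2_deg1(a0, a1, b0, b1):
    """Coefficients (c0, c1, c2) of the product, using 3 multiplications."""
    m0 = a0 & b0                    # a0*b0
    m2 = a1 & b1                    # a1*b1
    m1 = (a0 ^ a1) & (b0 ^ b1)      # (a0+a1)*(b0+b1)
    return m0, m1 ^ m0 ^ m2, m2     # middle coefficient: m1 - m0 - m2

def schoolbook_gf2_deg1(a0, a1, b0, b1):
    """Same product with the schoolbook four multiplications, for comparison."""
    return a0 & b0, (a0 & b1) ^ (a1 & b0), a1 & b1

# The two methods agree on all 16 possible inputs.
for a0, a1, b0, b1 in itertools.product((0, 1), repeat=4):
    assert karatsuba_gf2_deg1(a0, a1, b0, b1) == schoolbook_gf2_deg1(a0, a1, b0, b1)
```

The paper's contribution lies in finding analogous multiplication-saving formulae for larger moduli polynomials in GF(2)[x]; this degree-1 case only fixes the idea.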

Flexible Quasi-Dyadic Code-Based Public-Key Encryption and Signature

A drawback of code-based public-key cryptosystems is that their public-key size is large: some hundreds of KB to several MB for typical parameters.
While several attempts have been made to reduce it,
most have failed, the one exception being the Quasi-Dyadic (QD) public key (for large extension degrees).
While an attack has been proposed on the QD public key (for small extension degrees), it can be prevented by making the extension degree $m$ larger, specifically by making $q^{m(m-1)}$ large enough, where $q$ is the size of the base field (for a binary code, $q=2$).
The drawback of QD is, however, that it must hold that $n \ll 2^m - t$ (at least $n \leq 2^{m-1}$), where $n$ and $t$ are the code length and the error-correction capability of the underlying code.
If this condition is not satisfied, key generation fails, since it is performed by trial and error.
This condition also prevents QD from generating parameters for code-based digital signatures, since $2^{mt}/{n \choose t}$ cannot be small without making $n$ close to $2^m - t$.
To overcome these problems, we propose the ``Flexible'' Quasi-Dyadic (FQD) public key, which can even achieve $n=2^m - t$ in one shot.
Advantages of FQD include:
1) it can further reduce the public-key size, and
2) it can also be applied to code-based digital signatures.

Attacks on Hash Functions based on Generalized Feistel - Application to Reduced-Round Lesamnta and SHAvite-3_{512}

In this paper we study the strength of two hash functions which are based on Generalized Feistels. Our proposed attacks themselves are mostly independent of the round function in use, and can be applied to similar hash functions which share the same structure but have different round functions.
We start with a 22-round generic attack on the structure of Lesamnta, and adapt it to the actual round function to attack 24-round Lesamnta. We then show a generic integral attack on 20-round Lesamnta (which can be used against the block cipher itself). We follow with an attack
on 9-round SHAvite-3_{512}, which is the first cryptanalytic result on the hash function (and also applies to the tweaked version of SHAvite-3_{512}).

Traitor-Tracing on Binary Strings

Codes with the \emph{Identifiable Parent Property} (IPP) have been studied in the context of traitor tracing; such codes can be used to enable a data supplier to determine the origin of pirated data. We consider an analogous property for a set of binary strings $S$: if a new string $\tau$ is formed by concatenating substrings of members of $S$, we should be able to identify at least one original string which must have been used to generate $\tau$. We prove upper and lower bounds for the size of sets which satisfy this property.

Cryptanalysis of Secure Message Transmission Protocols with Feedback

In the context of secure point-to-point message transmission in networks with minimal connectivity, previous studies showed that feedback from the receiver to the sender can be used to reduce the requirements on network connectivity. We observe that the way feedback was used in previous work does not guarantee perfect privacy of the transmitted message when the adversary performs a Guessing Attack. In this paper, we describe our new Guessing Attack on some existing protocols (in fact, we are the first to point out a flaw in the protocols of Desmedt-Wang's Eurocrypt'02 paper and of Patra-Shankar-Choudhary-Srinathan-Rangan's CANS'07 paper), and propose a scheme secure against a general adversary structure. In addition, we also show how to achieve almost perfectly secure message transmission with feedback when perfect reliability or perfect privacy is not strictly required.

Improvement of Das's Two-Factor Authentication Protocol in Wireless Sensor Networks

User authentication is essential for customized services
and privileged access control in wireless sensor networks.
In 2009, Das proposed a novel two-factor authentication scheme
for wireless sensor networks, where a user must prove possession of both a password and a smart card.
His scheme is well suited to sensor nodes, which typically have limited resources,
in the sense that its authentication procedure requires no public-key operations,
relying only on a cryptographic hash function.
In this letter, we point out that Das's protocol is vulnerable to an off-line password guessing attack,
and present a countermeasure that overcomes the vulnerability without sacrificing efficiency or usability.
Besides the patch, we suggest a method to protect query response messages
sent from a wireless sensor node to a user, which is necessary to serve the user in a confidential and authentic way.

Information-Theoretically Secure Protocols and Security Under Composition

We investigate the question of whether security of protocols in the information-theoretic setting (where the adversary is computationally unbounded) implies the security of these protocols under concurrent composition. This question is motivated by the folklore that all known protocols that are secure in the information-theoretic setting are indeed secure under concurrent composition. We provide answers to this question for a number of different settings (i.e., considering perfect versus statistical security, and concurrent composition with adaptive versus fixed inputs). Our results enhance the understanding of what is necessary for obtaining security under composition, as well as providing tools (i.e., composition theorems) that can be used for proving the security of protocols under composition while considering only the standard stand-alone definitions of security.

A Comparative Analysis of Delay Based PUF Implementations on FPGA

Physical Unclonable Functions promise cheap, efficient, and secure identification and authentication of devices. In FPGA devices, PUFs may be instantiated directly from FPGA fabric components in order to exploit the propagation delay differences of signals caused by manufacturing process variations. Multiple delay based PUF architectures have been proposed. However, we have observed inconsistent results among them. Ring Oscillator PUF works fine, while other delay based PUFs show a significantly lower quality. Rather than proposing complex system level solutions, we focus on the fundamental building blocks of the PUF. In our effort to compare the various delay based PUF architectures, we have closely examined how each architecture maps into the FPGA fabric. Our conclusions are that arbiter and butterfly PUF architectures are ill suited for FPGAs, because delay skew due to routing asymmetry is over 10 times higher than the random variation due to manufacturing process. On the other hand, ring oscillator PUF does not suffer from the same limitations.

Using Sphinx to Improve Onion Routing Circuit Construction

This paper presents compact message formats for onion routing circuit construction using the Sphinx methodology developed for mixes. We significantly compress the circuit construction messages for three onion routing protocols that have emerged as enhancements to the Tor anonymizing network; namely, Tor with predistributed Diffie-Hellman values, pairing-based onion routing, and certificateless onion routing. Our new circuit constructions are also secure in the universal composability framework, a property that was missing from the original constructions. Further, we compare the performance of our schemes with their older counterparts as well as with each other.

A Unified Method for Finding Impossible Differentials of Block Cipher Structures

In this paper, we propose a systematic method for finding impossible
differentials for block cipher structures, more effective than the
$\mathcal{U}$-method introduced by Kim \textit{et al}~\cite{Kim03}.
We refer to it as the unified impossible differential finding method
(UID-method). We apply the UID-method to some popular block ciphers
such as {\sf Gen-Skipjack}, {\sf Gen-CAST256}, {\sf Gen-MARS}, {\sf
Gen-RC6}, {\sf Four-Cell}, {\sf SMS4} and give the detailed
impossible differentials. By the UID-method, we find a 16-round
impossible differential on {\sf Gen-Skipjack} and a 19-round
impossible differential on {\sf Gen-CAST256}. Thus we disprove the
\textsl{Conjecture 2} proposed in
\textsl{Asiacrypt'00}~\cite{Sung00} and the theorem in
\textsl{FSE'09} rump session presentation~\cite{Pudovkina09}. On
{\sf Gen-MARS} and {\sf SMS4}, the impossible differentials found by
the UID-method are much longer than those found by the
$\mathcal{U}$-method. On the {\sf Four-Cell} block cipher, our
result matches the best result previously obtained by
case-by-case treatment.

Approximate Integer Common Divisor Problem relates to Implicit Factorization

In this paper, we analyse how to calculate the GCD of $k$ $(\geq 2)$
large integers, given their approximations. Two versions of the
approximate common divisor problem, presented by Howgrave-Graham in CaLC 2001, are special cases of our analysis when $k = 2$. We then relate the approximate common divisor problem to the implicit factorization problem, introduced by May and Ritzenhofen in PKC 2009 and studied under the assumption that some of the Least Significant Bits (LSBs) of certain primes are the same. Our strategy can be applied to the implicit factorization problem in a general framework considering the equality of (i) Most Significant Bits (MSBs), (ii) Least Significant Bits (LSBs) and (iii) MSBs and LSBs together. We present new and improved theoretical as well as experimental results in comparison with the state-of-the-art work in this area.
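
To make the problem concrete, here is a brute-force toy for a tiny error bound; the numbers, error bound, and search strategy are illustrative assumptions, and the paper's lattice-based methods are what handle realistic parameter sizes:

```python
from math import gcd

# Toy instance of the approximate common divisor problem: we are given
# near-multiples a_i = p*q_i + r_i of a hidden integer p, with small
# errors 0 <= r_i < R, and must recover p. For a tiny R we can simply
# search the error space; a gcd far larger than R exposes the divisor.

p  = 2**64 + 13                 # the hidden common divisor (toy value)
a1 = p * 3141592653 + 3         # near-multiple with error r1 = 3
a2 = p * 2718281828 + 5         # near-multiple with error r2 = 5
R  = 8                          # known bound on the errors

def recover(a1, a2, R):
    """Brute-force all error pairs; return the first large gcd found."""
    for r1 in range(R):
        for r2 in range(R):
            g = gcd(a1 - r1, a2 - r2)
            if g.bit_length() >= 60:   # far beyond any accidental gcd
                return g
    return None

assert recover(a1, a2, R) == p
```

The search cost grows as $R^k$, which is exactly why lattice techniques are needed once the errors are more than a few bits.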

Cryptographic Accumulators for Authenticated Hash Tables

Hash tables are fundamental data structures that optimally answer
membership queries. Suppose a client stores $n$ elements in a hash
table that is outsourced at a remote server. Authenticating the
hash table functionality, i.e., verifying the correctness of queries
answered by the server and ensuring the integrity of the stored
data, is crucial because the server, lying outside the
administrative control of the client, can be malicious.
We design efficient and secure protocols for optimally
authenticating (non-)membership queries on hash tables, using
cryptographic accumulators as our basic security primitive and
applying them in a novel hierarchical way over the stored data. We
provide the first construction for authenticating a hash table with
\emph{constant query} cost and \emph{sublinear update} cost,
strictly improving upon previous methods.
Our first solution, based on the RSA accumulator, allows the server
to provide a proof of integrity of the answer to a membership query
in \emph{constant} time and supports updates in
$O\left(n^{\epsilon}\log n\right)$ time for any fixed constant
$0<\epsilon<1$, yet keeping the communication and verification costs
constant. It also lends itself to a scheme that achieves different
trade-offs---namely, constant update time and $O(n^{\epsilon})$
query time.
Our second solution uses an accumulator that is based on bilinear
pairings to achieve $O(n^{\epsilon})$ update time at the server
while keeping all other complexities constant. Both schemes apply
to two concrete data authentication models and an experimental
evaluation shows good scalability.
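
The basic membership-witness mechanics of an RSA accumulator can be sketched as follows; the parameters are insecure toy values of my choosing, and the paper's actual contribution (the hierarchical application over a hash table, and the update machinery) is not shown:

```python
from math import prod

# Toy RSA accumulator. In a real scheme N is an RSA modulus of unknown
# factorization and elements are mapped to distinct primes; here we use
# tiny demo values. The accumulator is g raised to the product of the
# elements, and the witness for x is the accumulator recomputed without x.

N = 61 * 53          # insecure toy modulus
g = 2                # public base
elements = [3, 5, 7, 11]

acc = pow(g, prod(elements), N)          # accumulated value for the whole set

def witness(x):
    """Witness for x: g raised to the product of all other elements."""
    return pow(g, prod(e for e in elements if e != x), N)

def verify(w, x):
    """Raising the witness to x must reproduce the accumulator."""
    return pow(w, x, N) == acc

assert verify(witness(7), 7)         # genuine member verifies
assert not verify(witness(7), 13)    # a non-member does not
```

The point of the paper is how to organize many such accumulators hierarchically so that both queries and updates on the outsourced hash table stay cheap.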

Security Analysis of the PACE Key-Agreement Protocol

We analyze the Password Authenticated Connection Establishment (PACE) protocol for authenticated key agreement, recently proposed by the German Federal Office for Information Security (BSI) for the
deployment in machine readable travel documents. We show that the PACE protocol is secure in the real-or-random sense of Abdalla, Fouque and Pointcheval, under a number-theoretic assumption related to the
Diffie-Hellman problem and assuming random oracles and ideal ciphers.

Universally Constructing 12-th Degree Extension Field for Ate Pairing

We need to perform arithmetic in $\Fpt$ to use the Ate pairing on a Barreto-Naehrig (BN) curve, where $p(z)$ is a prime given by $p(z)=36z^4+36z^3+24z^2+6z+1$ with an integer $z$. In many implementations of the Ate pairing, $\Fpt$ has been regarded as the 6-th extension of $\Fpp$ and constructed as $\Fpt=\Fpp[v]/(v^6-\xi)$ for an element $\xi\in \Fpp$ such that $v^6-\xi$ is irreducible in $\Fpp[v]$. Such $\xi$ depends on the value of $p(z)$, and mathematical software may be needed to find it. This paper shows that when $z \equiv 7,11 \pmod{12}$ we can universally construct $\Fpt$ as $\Fpt=\Fpp[v]/(v^6-u-1)$, where $\Fpp=\Fp[u]/(u^2+1)$.
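
One ingredient of this setting can be sanity-checked directly (an illustration, not the paper's proof): for $z \equiv 7, 11 \pmod{12}$, the BN characteristic $p(z)$ satisfies $p \equiv 3 \pmod 4$, so for a genuine BN prime $-1$ is a quadratic non-residue and $u^2+1$ is irreducible over $\F_p$, permitting $\Fpp=\Fp[u]/(u^2+1)$:

```python
# Check that z = 7, 11 (mod 12) forces p(z) = 3 (mod 4). The sample z
# values are arbitrary; p(z) need not be prime for them, since the check
# only concerns the residue of p(z) modulo 4.

def bn_p(z):
    """The BN field-characteristic polynomial evaluated at z."""
    return 36*z**4 + 36*z**3 + 24*z**2 + 6*z + 1

for z in (7, 11, 19, 23, 31, 35):
    assert z % 12 in (7, 11)
    p = bn_p(z)
    # Modulo 4 the terms 36z^4, 36z^3, 24z^2 vanish, leaving 2z + 1,
    # and z is odd, so p = 3 (mod 4).
    assert p % 4 == 3
    # Restated arithmetically: (-1)^((p-1)/2) = -1 because (p-1)/2 is odd.
    assert pow(p - 1, (p - 1) // 2, p) == p - 1
print("p(z) = 3 (mod 4) for all sampled z = 7, 11 (mod 12)")
```

By Euler's criterion this is exactly the condition under which $-1$ has no square root mod a prime $p$, which is what makes $u^2+1$ usable as the quadratic extension polynomial.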

A Strong Blind Signature Scheme over Braid Groups

The rapid development of quantum computing has made public-key cryptosystems not based on commutative algebraic systems a hot topic. Due to its non-commutativity, the braid group with braid index greater than two has become a new candidate for constructing cryptographic protocols. We propose a strong blind signature scheme based on the difficulty of the one-more matching conjugacy problem in braid groups, in which the signer cannot relate the signature of the blinded message to that of the original message. The use of a random factor ensures that blind signatures of the same message are different and avoids the weakness of simultaneous conjugation. The scheme resists the adaptively chosen-message attack in the random oracle model.

On the Analysis of Cryptographic Assumptions in the Generic Ring Model

The generic ring model considers algorithms that operate on elements of an algebraic ring by performing only the ring operations and without exploiting properties of a given representation of ring elements. It is used to analyze the hardness of computational problems defined over rings. For instance, it is known that breaking RSA is equivalent to factoring in the generic ring model (Aggarwal and Maurer, Eurocrypt 2009). Do hardness results in the generic ring model support the conjecture that solving the considered problem is also hard in the standard model, where elements of $\Z_n$ are represented by integers modulo $n$?
We prove in the generic ring model that computing the Jacobi symbol of an integer modulo $n$ is equivalent to factoring. Since there are simple and efficient non-generic algorithms which compute the Jacobi symbol, this provides an example of a natural computational problem which is hard in the generic ring model, but easy to solve if elements of $\Z_n$ are given in their standard representation as integers. Thus, a proof in the generic ring model is unfortunately not a very strong indicator for the hardness of a computational problem in the standard model.
Despite this negative result, generic hardness results still provide a lower complexity bound for a large class of algorithms, namely all algorithms solving a computational problem independent of a given representation of ring elements. Thus, from this point of view results in the generic ring model are still interesting. Motivated by this fact, we show also that solving the quadratic residuosity problem generically is equivalent to factoring.
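
The non-generic algorithm alluded to above is the standard binary quadratic-reciprocity computation of the Jacobi symbol (a textbook method, not something from this paper); it is efficient precisely because it inspects the integer representation, via parities and residues mod 8, which a generic ring algorithm cannot do:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:             # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):       # (2/n) = -1 exactly when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                   # quadratic reciprocity: flip the symbol
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0    # gcd(a, n) > 1 gives symbol 0

# (2/15) = (2/3)(2/5) = (-1)(-1) = 1, computed without factoring 15.
assert jacobi(2, 15) == 1
assert jacobi(2, 3) == -1
```

Note that the algorithm never factors $n$, which is what makes the contrast with the generic-ring result so striking.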

Security of ECQV-Certified ECDSA Against Passive Adversaries

We show that the elliptic curve Qu-Vanstone implicit certificate scheme (ECQV), when composed with the Elliptic Curve Digital Signature Algorithm (ECDSA), is secure against passive adversaries under the combined assumption of the random oracle model and the generic group model, provided the ECQV certificate itself is excluded from the signable message space because of an attack by Kravitz. In contrast, we detail an attack on the composition of another implicit certificate scheme, proposed by Pintsov and Vanstone for Optimal Mail Certificates (OMC), with ECDSA. This composition attack forges an implicitly certified ECDSA
signature, and is passive in the sense of needing no interaction with the signer or the certification authority. (Pintsov and Vanstone did not propose combining OMC with ECDSA.)

A Family of Weak Keys in HFE (and the Corresponding Practical Key-Recovery)

The HFE (Hidden Field Equations) cryptosystem is one of the most
interesting public-key multivariate schemes. It was proposed
more than 10 years ago by Patarin and seems to withstand the attacks
that break many other multivariate schemes, since only
subexponential attacks against it have been proposed. The public key is a system
of quadratic equations in many variables. These equations are
generated from the composition of the secret elements: two linear
mappings and a polynomial of small degree over an extension field.
In this paper we show that there exist weak keys in HFE when the
coefficients of the internal polynomial are defined in the ground
field. In this case, we reduce the secret key recovery problem to an
instance of the Isomorphism of Polynomials (IP) problem between the
equations of the public key and themselves. Even though for schemes
such as SFLASH or $C^*$ the hardness of key-recovery relies on the
hardness of the IP problem, this is normally not the case for HFE,
since the internal polynomial is kept secret. However, when a weak
key is used, we show how to recover all the components of the secret
key in practical time, given a solution to an instance of the IP
problem. This breaks in particular a variant of HFE proposed by
Patarin to reduce the size of the public key and called the
``subfield variant''.

Data-Depend Hash Algorithm

We study some techniques that have been developed to analyse and attack hash algorithms. We find a way to use data-dependent functions to resist differential attacks. We then design a hash algorithm, called the Data-Depend Hash Algorithm (DDHA), which is simple and strong against differential attack.

An efficient ID- based directed signature scheme from bilinear pairings

A directed signature scheme allows a designated verifier to directly verify a signature issued to him, while a third party can check the signature's validity only with the help of the signer or the designated verifier. Directed signatures are applicable where the signed message is sensitive to the signature receiver. Due to these merits, directed signature schemes are suitable for applications such as bills of tax and bills of health. In this paper, we propose an efficient identity-based directed signature scheme from bilinear pairings. Our scheme is more efficient than existing directed signature schemes. In the random oracle model, our scheme is unforgeable under the Computational Diffie-Hellman (CDH) assumption, and invisible under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.

Fully Homomorphic Encryption over the Integers

We describe a very simple ``somewhat homomorphic'' encryption scheme
using only elementary modular arithmetic, and use Gentry's techniques
to convert it into a fully homomorphic scheme. Compared to Gentry's construction, our somewhat homomorphic scheme merely uses addition and multiplication over the integers rather than working with ideal lattices over a polynomial ring. The main appeal of our approach is the conceptual simplicity.
We reduce the security of our somewhat homomorphic scheme to finding
an approximate integer gcd -- i.e., given a list of integers that are near-multiples of a hidden integer, output that hidden integer. We investigate the hardness of this task, building on earlier work of Howgrave-Graham.
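
The somewhat homomorphic part of the scheme is simple enough to sketch end to end; this is a toy symmetric variant with insecure parameters of my choosing, meant only to show why sums and products of ciphertexts decrypt to XOR and AND of the plaintext bits:

```python
import random

# Toy "somewhat homomorphic" encryption over the integers (symmetric
# variant, insecure toy parameters). A bit m is encrypted as
#     c = m + 2r + p*q
# for a secret odd p, small random noise r, and random q. Decryption is
# (c mod p) mod 2, which is correct as long as the noise m + 2r (and its
# growth under addition and multiplication) stays well below p.

p = (1 << 61) | 1                   # secret odd key (toy size)

def enc(m):
    r = random.randrange(1 << 8)    # small noise
    q = random.randrange(1 << 30)
    return m + 2*r + p*q

def dec(c):
    rem = c % p
    if rem > p // 2:                # center the remainder (noise may be
        rem -= p                    # negative in the general scheme)
    return rem % 2

c0, c1 = enc(0), enc(1)
assert dec(c0 + c1) == 0 ^ 1        # ciphertext addition -> plaintext XOR
assert dec(c0 * c1) == 0 & 1        # ciphertext multiplication -> plaintext AND
assert dec(c1 * enc(1)) == 1 & 1
```

Each multiplication roughly squares the noise, which is why the scheme is only "somewhat" homomorphic until Gentry's bootstrapping techniques are applied.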

Faster Pairing Computations on Curves with High-Degree Twists

Research on efficient pairing implementation has focussed on reducing the loop length and on using high-degree twists. Existence of twists of degree larger than $2$ is a very restrictive criterion but luckily constructions for pairing-friendly elliptic curves with such twists exist. In fact, Freeman, Scott and Teske showed in their overview paper that often the best known methods of constructing pairing-friendly elliptic curves over fields of large prime characteristic produce curves that admit twists of degree $3, 4$ or $6$.
A few papers have presented explicit formulas for the doubling and the addition step in Miller's algorithm, but the optimizations were all done for the Tate pairing with degree-$2$ twists, so the main usage of the high-degree twists remained incompatible with more efficient formulas.
In this paper we present efficient formulas for curves with twists of degree $2, 3, 4$ or $6$. These formulas are significantly faster than their predecessors. We show how these faster formulas can be applied to Tate and ate pairing variants, thereby speeding up all practical suggestions for efficient pairing implementations over fields of large characteristic.

Secure Multiparty AES (full paper)

We propose several variants of a secure multiparty computation
protocol for AES encryption. The best variant requires $2200 +
\frac{400}{255}$ expected elementary operations in expected $70 +
\frac{20}{255}$ rounds to encrypt one 128-bit block with a 128-bit
key. We implemented the variants using VIFF, a software framework for
implementing secure multiparty computation (MPC).
Tests with three players (passive security against
at most one corrupted player) in a local network showed that one block
can be encrypted in 2 seconds. We also argue that this result could be
improved by an optimized implementation. The security requirements are
the same as for the underlying MPC scheme.

Classification of Elliptic/hyperelliptic Curves with Weak Coverings against GHS Attack without Isogeny Condition

The GHS attack is known as a method to map the discrete logarithm problem (DLP) in the Jacobian of a curve $C_{0}$ defined over the degree-$d$ extension $k_{d}$ of a finite field $k$ to the DLP in the Jacobian of a new curve $C$ over $k$ which is a covering curve of $C_{0}$.
Recently, classification and density analysis were shown for all elliptic and hyperelliptic curves $C_{0}/k_d$ of genus 2, 3 which possess $(2, \ldots ,2)$ coverings $C/k$ of $\mathbb{P}^{1}$ under the isogeny condition (i.e. when $g(C)=d \cdot g(C_{0})$).
In this paper, we show a complete classification of small genus hyperelliptic curves $C_0/k_d$ which possess $(2,\ldots,2)$ coverings $C$ over $k$ without the isogeny condition. Our main approach is to use the representation of the extension of $Gal(k_{d}/k)$ acting on $cov(C/\mathbb{P}^{1})$. Explicit defining equations of such curves and the existence of a model of $C$ over $k$ are also presented.

On the Impossibility of Batch Update for Cryptographic Accumulators

A cryptographic accumulator is a scheme where a set of elements is represented by a single short value. This value, along with another value called a witness, allows one to prove membership in the set. In their survey on accumulators [FN02], Fazio and Nicolosi noted that Camenisch and Lysyanskaya's construction [CL02] was such that the time to update a witness after $m$ changes to the accumulated value was proportional to $m$. They posed the question of whether batch update was possible, namely whether one could build a cryptographic
accumulator where the time to update witnesses is independent of the number of changes in the accumulated set.
Recently, Wang et al. answered positively by giving a construction for an accumulator with batch update in [WWP07, WWP08]. In this work we show that the construction is not secure by exhibiting an attack. Moreover, we prove it cannot be fixed: if the accumulated value has been updated $m$ times, then the time to update a witness must be at least $\Omega(m)$ in the worst case.

Golden Fish: An Intelligent Stream Cipher Fuse Memory Modules

In this paper, we use a high-order iterated function generated by a block cipher as the nonlinear filter to improve the security of a stream cipher. Moreover, by combining published block cipher round functions in OFB mode as the nonlinear functional mode with an extra memory module, we are able to control the nonlinear complexity of the design. This new approach fuses a block cipher operation mode with two memory modules in one stream cipher. The security of this design is supported by both periodicity and nonlinearity evaluation. The period of this structure is guaranteed by the traditional Linear Feedback Shift Register design, and its nonlinear security is demonstrated by the block cipher algorithm design itself, which is remarkably safer than previous stream cipher designs. A similar design style can also be found in SHA-3 candidates.

Security Analysis of A Remote User Authentication Protocol by Liao and Wang

In Elsevier's journal of Computer Standards & Interfaces, 2007, Liao and Wang proposed an authentication protocol using smart card and claimed that their protocol provides security against replay attacks, active attacks and insider attacks. In addition, they argued that user anonymity is guaranteed. In this paper, we point out that Liao-Wang protocol is vulnerable to an insider attack by presenting a simple method for a malicious server to impersonate any user authenticating to the server. We also demonstrate that user anonymity can be violated as colluding servers can easily track activities of users.

Grouping-Proof Protocol for RFID Tags: Security Definition and Scalable Construction

In this paper, we propose a grouping-proof protocol for RFID tags based on secret sharing. Our proposed protocol addresses the scalability issue of previous protocols by removing the need for an RFID reader to relay messages from one tag to another. We also present a security model for a secure grouping-proof protocol which properly addresses the so-called \emph{mafia fraud attack}. The mafia fraud attack (sometimes called distance fraud) is a simple relay attack suggested by Yvo Desmedt. Any location-based protocol, including RFID protocols, is vulnerable to this attack even if cryptography is used. One practical countermeasure is to employ a distance-bounding protocol within the location-based protocol. However, in light of the work by Chandran et al., the mafia fraud attack cannot be theoretically prevented. Therefore, we need to take this fact into account in order to formulate a meaningful security notion for secure grouping-proof protocols.

Non-Malleable Codes

We introduce the notion of “non-malleable codes”, which relaxes the notions of error correction and error detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message or a completely unrelated value. In contrast to error correction and error detection, non-malleability can be achieved for very rich classes of modifications.
We construct an efficient code that is non-malleable with respect to modifications that affect each bit of the codeword arbitrarily (i.e. leave it untouched, flip it, or set it to either 0 or 1), but independently of the value of the other bits of the codeword. Using the probabilistic method, we also show a very strong and general statement: there exists a non-malleable code for every “small enough” family F of functions via which codewords can be modified. Although this probabilistic method argument does not directly yield efficient constructions, it gives us efficient non-malleable codes in the random-oracle model for very general classes of tampering functions, e.g. functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword.
As an application of non-malleable codes, we show that they provide an elegant algorithmic solution to the task of protecting functionalities implemented in hardware (e.g. signature cards) against “tampering attacks”. In such attacks, the secret state of a physical system is tampered with, in the hope that future interaction with the modified system will reveal some secret information. This problem was previously studied in the work of Gennaro et al. in 2004 under the name “algorithmic tamper proof security” (ATP). We show that non-malleable codes can be used to achieve important improvements over the prior work. In particular, we show that any functionality can be made secure against a large class of tampering attacks, simply by encoding the secret state with a non-malleable code while it is stored in memory.

Last updated: 2010-03-01

Efficient Client Puzzles based on Repeated-Squaring

In this paper, we propose a new, non-parallelizable verification-efficient client puzzle. Our puzzle is based on repeated-squaring and enables efficient verification of the puzzle solution that is reported by the client (prover). Client puzzles based on repeated-squaring were first proposed by Rivest et al. and constitute one of the first examples of non-parallelizable puzzles.
The main drawback of these puzzles was their high verification overhead. In this work, we show how this overhead can be significantly reduced by transferring the puzzle verification burden to the prover that executes the puzzle. Given a 1024-bit modulus, our puzzle reduces the verification overhead of the original repeated-squaring puzzle by a factor of almost 50. We achieve this by embedding a secret
-- only known to the verifier -- within the Euler trapdoor function
that is used in repeated-squaring puzzles. We provide a security proof for this construction. We further show how our puzzle can be integrated in a number of protocols, including those used for efficient protection against DoS attacks and for the remote verification of the computing performance of devices. We validate the performance of our puzzle on a large number of PlanetLab nodes.
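
The underlying repeated-squaring trapdoor, in the spirit of Rivest et al.'s original puzzles, can be sketched as follows; the parameters are insecure toy values, and the paper's actual scheme additionally embeds a verifier-only secret to push the verification burden onto the prover:

```python
# Repeated-squaring puzzle sketch (toy parameters). The prover computes
# x^(2^t) mod N by t sequential, inherently non-parallelizable squarings.
# The verifier, who knows phi(N) = (p-1)(q-1), shortcuts the exponent to
# 2^t mod phi(N) via Euler's theorem (valid since gcd(x, N) = 1) and
# verifies with just two cheap modular exponentiations.

p, q = 104729, 1299709             # secret primes (toy size)
N = p * q
phi = (p - 1) * (q - 1)            # Euler trapdoor, known to the verifier only
t = 10_000                         # puzzle hardness: number of squarings
x = 5                              # public puzzle input, coprime to N

# Prover: t sequential squarings.
y = x
for _ in range(t):
    y = y * y % N

# Verifier: shortcut through the trapdoor.
e = pow(2, t, phi)
assert y == pow(x, e, N)
```

The gap between the prover's t squarings and the verifier's two exponentiations is what makes the construction useful for DoS protection and remote performance verification.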

On a weakness of the Tu-Deng function and its repair

We observe that the function introduced by Z. Tu and Y. Deng in the ePrint Archive paper 2009/272 is weak against fast algebraic attacks. We propose an alternative function sharing all the properties of the Tu-Deng function but without this weakness.

Solving the Shortest Lattice Vector Problem in Time 2^2.465n

The Shortest Lattice Vector Problem is central in lattice-based
cryptography, as well as in many areas of computational mathematics
and computer science. We present an algorithm for solving it in time
2^2.465n and space 2^1.233n, where n is the lattice dimension. This improves on the best previously known algorithm, by Micciancio and Voulgaris [SODA 2010], which runs in time 2^3.199n and space 2^1.325n.
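The sieve above is far beyond a short sketch, but the problem itself is easy to state, and in dimension 2 it can even be solved exactly by Lagrange-Gauss reduction. The following minimal illustration of SVP is an assumption-free textbook routine, not the paper's algorithm:

```python
# Exact SVP in dimension 2 via Lagrange-Gauss reduction -- a minimal
# illustration of the problem that the 2^(2.465n) sieve solves in general
# dimension n (this is NOT that algorithm).

def lagrange_reduce(u, v):
    """Return a shortest nonzero vector of the lattice spanned by u and v."""
    def norm2(w):                     # squared Euclidean length
        return w[0] * w[0] + w[1] * w[1]
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # subtract the integer multiple of u that best shortens v
        m = round(dot(u, v) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u                  # u is now a shortest nonzero vector
        u, v = v, u

basis = ((901, 100), (810, 90))       # a skewed basis of a 2D lattice
print(lagrange_reduce(*basis))
```

In higher dimensions no polynomial-time analogue is known, which is why exponential sieves like the one above are the state of the art.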

Composition of Zero-Knowledge Proofs with Efficient Provers

We revisit the composability of different forms of zero-knowledge proofs when the honest prover strategy is restricted to be polynomial time (given
an appropriate auxiliary input). Our results are:
\begin{enumerate}
\item When restricted to efficient provers, the original Goldwasser--Micali--Rackoff (GMR) definition of zero knowledge (STOC '85), here called \emph{plain zero knowledge}, is closed under a constant number of sequential compositions (on the same input). This contrasts with the case of unbounded provers, where Goldreich and Krawczyk (ICALP '90, SICOMP '96) exhibited a protocol that is zero knowledge under the GMR definition, but for which the sequential composition of 2 copies is not zero knowledge.
\item If we relax the GMR definition to only require that the simulation is indistinguishable from the verifier's view by uniform polynomial-time distinguishers, with no auxiliary input beyond the statement being proven, then again zero knowledge is not closed under sequential composition of 2 copies.
\item We show that auxiliary-input zero knowledge with efficient provers is not closed under {\em parallel} composition of 2 copies under the assumption that there is a secure key agreement protocol (in which it is easy to recognize valid transcripts). Feige and Shamir (STOC '90) gave similar results under the seemingly incomparable assumptions that (a) the discrete logarithm problem is hard, or (b) $\UP\not\subseteq \BPP$ and one-way functions exist.
\end{enumerate}

An FPGA Technologies Area Examination of the SHA-3 Hash Candidate Implementations

This paper presents an examination of the different FPGA architectures used to implement the various hash function candidates in the currently ongoing NIST-organised SHA-3 competition~\cite{Sha3NIST}. It is intended both as a quick reference guide, used in conjunction with the results table~\cite{Sha3zoo}, to aid in finding the “best-fit” FPGA for a particular algorithm, and as a helpful guide explaining the many different terms and measurement units used in the various FPGA packages.

Secure Ranging With Message Temporal Integrity

In this paper, we address the problem of delay attacks on radio frequency time-of-arrival (ToA) secure ranging. In secure ranging, two mutually trusted devices try to determine their distance in the presence of an attacker. A delay attack consists of delaying the ranging messages exchanged between the devices, resulting in an increase of the measured message arrival times and thus in an increase of the measured distance.
In this work, we propose the first secure ranging protocol that enables the detection of delay attacks on ranging. This protocol therefore enables two trusted devices to obtain a secure estimate of their mutual distance; existing solutions enabled the devices only to obtain an upper bound on their mutual distance. We further discuss possible implementations of our secure ranging protocol using Ultra-Wide-Band radio technology. Finally, we introduce and formally define the notion of message temporal integrity, a message security property which relates to message delay and advancement.

Parallel Shortest Lattice Vector Enumeration on Graphics Cards

In this paper we present an algorithm for parallel exhaustive search for short vectors in lattices. This algorithm can be applied to a wide range of parallel computing systems. To illustrate the algorithm, we implemented it on graphics cards using CUDA, a programming framework for NVIDIA graphics cards. We obtain large speedups compared to previous serial CPU implementations; our implementation is almost 5 times faster in high lattice dimensions.
Exhaustive search is one of the main building blocks for lattice basis reduction in cryptanalysis. Our work results in an advance in practical lattice reduction.

Constructing Certificateless Encryption and ID-Based Encryption from ID-Based Key Agreement

We discuss the relationship between ID-based key agreement
protocols, certificateless encryption and ID-based key encapsulation
mechanisms.
In particular we show how in some sense ID-based key agreement
is a primitive from which all others can be derived.
In doing so we focus on distinctions between what we
term pure ID-based schemes and non-pure schemes,
in various security models.
We present security models for ID-based key agreement
which do not ``look natural'' when considered as analogues of
normal key agreement schemes, but which look more natural
when considered in terms of the models used in certificateless
encryption.

Groth--Sahai proofs revisited

Since their introduction in 2008, the non-interactive zero-knowledge (NIZK) and non-interactive witness-indistinguishable (NIWI) proofs designed by Groth and Sahai have been used in numerous applications.
In this paper we offer two contributions to the study of these proof systems. First, we identify and correct some errors, present in the original online manuscript, that occur in two of the three instantiations of the Groth-Sahai NIWI proofs; for these instantiations, the equation checked by the verifier is not valid for honest executions of the protocol. (In particular, implementations of these proofs would not work correctly.) We explain why, perhaps surprisingly, the NIZK proofs that are built from these NIWI proofs do not suffer from a similar problem. Secondly, we study the efficiency of existing instantiations and note that only one of the three has the potential of being practical. We therefore propose a natural extension of an existing assumption from symmetric pairings to asymmetric ones, which in turn enables Groth-Sahai proofs based on new classes of efficient pairings.

On the Design and Implementation of an Efficient DAA Scheme

Direct Anonymous Attestation (DAA) is an anonymous digital signature scheme that aims to provide both signer authentication and privacy. One of the properties that makes DAA an attractive choice in practice is the split signer role. In short, a principal signer (a Trusted Platform Module (TPM)) signs messages in collaboration with an assistant signer (the Host, a standard computing platform into which the TPM is embedded). This split aims to harness the high level of security offered by the TPM, and augment it using the high level of computational and storage ability offered by the Host. Our contribution in this paper is a modification to an existing pairing-based DAA scheme that significantly improves efficiency, and a comparison with the original RSA-based DAA scheme via a concrete implementation.

Twisted Jacobi Intersections Curves

In this paper, the twisted Jacobi intersections form, which contains
Jacobi intersections as a special case, is introduced. We show that
every elliptic curve over a prime field with three points of order
$2$ is isomorphic to a twisted Jacobi intersections curve. Some fast
explicit formulae for twisted Jacobi intersections curves in
projective coordinates are presented. These explicit formulae for
addition and doubling are almost as fast as those for Jacobi
intersections. In addition, scalar multiplication can be more
efficient on twisted Jacobi intersections than on Jacobi
intersections. Moreover, we propose new addition formulae which are
independent of the curve parameters and more efficient in practice
than the previous formulae in the literature.

Could SFLASH be repaired?

The SFLASH signature scheme stood for a decade as the most successful cryptosystem based on multivariate polynomials, before an efficient attack was finally found in 2007. In this paper, we review its recent cryptanalysis and we notice that its weaknesses can all be linked to the fact that the cryptosystem is built on the structure of a large field. As the attack demonstrates, this richer structure can be accessed by an attacker by using the specific symmetry of the core function being used. Then, we investigate the effect of restricting this large field to a purely linear subset and we find that the symmetries exploited by the attack are no longer present. At a purely defensive level, this defines a countermeasure which can be used at a moderate overhead. On the theoretical side, this informs us of limitations of the recent attack and raises interesting remarks about the design itself of multivariate schemes.

Efficiency Limitations for $\Sigma$-Protocols for Group Homomorphisms

Efficient zero-knowledge proofs of knowledge for group homomorphisms are essential for numerous systems in applied cryptography. Especially, $\Sigma$-protocols for proving knowledge of discrete logarithms in known and hidden order groups are of prime importance. Yet, while these proofs can be performed very efficiently within groups of known order, for hidden order groups the respective proofs are far less efficient.
This paper shows strong evidence that this efficiency gap cannot be bridged. Namely, whilst there are efficient protocols allowing a prover to cheat only with negligibly small probability in the case of known order groups, we provide strong evidence that for hidden order groups this probability is bounded below by $1/2$ for all efficient $\Sigma$-protocols not using common reference strings or the like.
We prove our results for a comprehensive class of $\Sigma$-protocols in the generic group model, and further strengthen them by investigating certain instantiations in the plain model.
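The efficient known-order case contrasted above is exemplified by Schnorr's $\Sigma$-protocol for discrete logarithms, where a single run already has negligible knowledge error. A toy sketch with illustrative (insecure) parameters, assumed here only for demonstration:

```python
# Schnorr's Sigma-protocol for knowledge of a discrete log in a group of
# known prime order q -- toy parameters, not a secure instantiation.
import random

q = 1019                      # prime order of the subgroup
p = 2 * q + 1                 # 2039, also prime
g = 4                         # a square mod p, so it generates the order-q subgroup

x = random.randrange(1, q)    # prover's secret: discrete log of h
h = pow(g, x, p)

r = random.randrange(q)
a = pow(g, r, p)              # commitment
c = random.randrange(q)       # verifier's challenge
s = (r + c * x) % q           # response; note the exponent reduces mod q

# Verifier checks g^s == a * h^c (mod p)
assert pow(g, s, p) == a * pow(h, c, p) % p
print("proof accepted")
```

The reduction of the response modulo the (known) group order q is exactly the step that is unavailable in hidden-order groups, which is where the efficiency gap discussed above arises.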

Efficient Set Operations in the Presence of Malicious Adversaries

We revisit the problem of constructing efficient secure two-party protocols for set-intersection and set-union, focusing on the model of malicious parties. Our main results are constant-round protocols that exhibit linear communication and a (practically) linear number of exponentiations with simulation-based security. At the heart of these constructions is a technique based on the combination of a perfectly hiding commitment and an oblivious pseudorandom function evaluation protocol. Our protocols readily transform into protocols that are UC-secure, and we discuss how to perform these transformations.

Enabling Efficient Fuzzy Keyword Search over Encrypted Data in Cloud Computing

As Cloud Computing becomes prevalent, more and more sensitive information is being centralized into the cloud. For the
protection of data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data
utilization a very challenging task. Although traditional searchable encryption schemes allow a user to securely search
over encrypted data through keywords and selectively retrieve files of interest, these techniques support only
\emph{exact} keyword search. That is, there is no tolerance of minor typos and format inconsistencies which, on the
other hand, are typical user searching behavior and happen very frequently. This significant drawback makes existing
techniques unsuitable in Cloud Computing as it greatly affects system usability, rendering user searching experiences
very frustrating and system efficacy very low. In this paper, for the first time we formalize and solve the problem of
effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. Fuzzy keyword search
greatly enhances system usability by returning the matching files when users' searching inputs exactly match the predefined keywords, or the closest possible matching files based on keyword similarity semantics when an exact match fails. In our solution, we exploit edit distance to quantify keyword similarity and develop two advanced
techniques on constructing fuzzy keyword sets, which achieve optimized storage and representation overheads. We further propose a brand-new symbol-based trie-traverse searching scheme, where a multi-way tree structure is built up using symbols transformed from the resulting fuzzy keyword sets. Through rigorous security analysis, we show that our proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search. Extensive
experimental results demonstrate the efficiency of the proposed solution.
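One way to realize edit-distance-1 fuzzy keyword sets is the wildcard idea: replace each position (or gap) in a keyword by a wildcard symbol and test the sets for intersection. The sketch below covers only this combinatorial layer; the trie index and the cryptographic machinery are omitted, and the exact set construction is an illustrative assumption, not necessarily the paper's.

```python
# Wildcard-based fuzzy keyword sets for edit distance 1 (sketch).

def fuzzy_set(word):
    """Wildcard patterns covering all edits at distance <= 1 from `word`."""
    s = {word}
    for i in range(len(word)):
        s.add(word[:i] + '*' + word[i + 1:])   # substitution at i (also covers
                                               # a deletion in the other word)
    for i in range(len(word) + 1):
        s.add(word[:i] + '*' + word[i:])       # insertion at position i
    return s

def fuzzy_match(query, keyword):
    """True exactly when query and keyword are within edit distance 1."""
    return bool(fuzzy_set(query) & fuzzy_set(keyword))

assert fuzzy_match("castle", "castle")       # exact match still matches
assert fuzzy_match("casle", "castle")        # dropped letter
assert fuzzy_match("casxle", "castle")       # one wrong letter
assert not fuzzy_match("caslte", "castle")   # transposition = distance 2
```

The storage gain comes from the set size being linear in the keyword length, instead of the roughly 26x larger set of all concrete distance-1 variants.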

From Passive to Covert Security at Low Cost

Aumann and Lindell defined security against covert attacks, where the adversary is malicious but is only caught cheating with a certain probability. The idea is that in many real-world cases, a large probability of being caught is sufficient to prevent the adversary from trying to cheat. In this paper, we show how to compile a passively secure protocol for honest majority into one that is secure against covert attacks, again for honest majority, and that catches cheating with probability 1/4. The cost of the modified protocol is essentially twice that of the original, plus an overhead that only depends on the number of inputs.

Embedded SFE: Offloading Server and Network using Hardware Tokens

We consider Secure Function Evaluation (SFE) in the client-server setting where the server issues a secure token to the client. The token is not trusted by the client and is not a trusted third party.
We show how to take advantage of the token to drastically reduce the communication complexity of SFE and computation load of the server.
Our main contribution is the detailed consideration of design decisions, optimizations, and trade-offs, associated with the setting and its strict hardware requirements for practical deployment.
In particular, we model the token as a computationally weak device with small constant-size memory, and we limit the communication between client and server.
We consider semi-honest, covert, and malicious adversaries.
We show the feasibility of our protocols based on an FPGA implementation.

More Constructions of Lossy and Correlation-Secure Trapdoor Functions

We propose new and improved instantiations of lossy trapdoor functions (Peikert and Waters, STOC '08), and correlation-secure trapdoor functions (Rosen and Segev, TCC '09). Our constructions widen the set of number-theoretic assumptions upon which these primitives can be based, and are summarized as follows:
- Lossy trapdoor functions based on the quadratic residuosity assumption. Our construction relies on modular squaring, and whereas previous such constructions were based on seemingly stronger assumptions, we present the first construction that is based solely on the quadratic residuosity assumption. We also present a generalization to higher order power residues.
- Lossy trapdoor functions based on the composite residuosity assumption. Our construction guarantees essentially any required amount of lossiness, where at the same time the functions are more efficient than the matrix-based approach of Peikert and Waters.
- Lossy trapdoor functions based on the $d$-Linear assumption. Our construction both simplifies the DDH-based construction of Peikert and Waters, and admits a generalization to the whole family of $d$-Linear assumptions without any loss of efficiency.
- Correlation-secure trapdoor functions related to the hardness of syndrome decoding.

Information-set decoding for linear codes over Fq

A code-based cryptosystem is considered secure if the best known
attack against it is information-set decoding. Stern's algorithm and
its improvements are well optimized and the complexity is reasonably
well understood. However, these algorithms only handle codes over F2.
This paper presents a generalization of Stern's
information-set-decoding algorithm for decoding linear codes
over arbitrary finite fields Fq and analyzes the complexity.
This result makes it possible to compute the security
of recently proposed code-based systems over non-binary fields.
As an illustration, ranges of parameters for generalized
McEliece cryptosystems using classical Goppa codes over
F31 are suggested for which the new information-set-decoding algorithm needs 2^128 bit operations.
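Stern's algorithm refines the basic information-set-decoding idea due to Prange: guess an information set, solve linearly, and hope the error avoids it. A toy Prange-style decoder over F3 is sketched below with illustrative parameters; it shows the non-binary generalization in its simplest form, not the paper's generalized Stern algorithm.

```python
# Toy Prange-style information-set decoding over F_q with q = 3.
import random

q, n, r, w = 3, 12, 6, 2          # field size, length, redundancy, error weight
random.seed(1)

H = [[random.randrange(q) for _ in range(n)] for _ in range(r)]  # parity check
e_true = [0] * n
for pos in random.sample(range(n), w):
    e_true[pos] = random.randrange(1, q)
syndrome = [sum(H[i][j] * e_true[j] for j in range(n)) % q for i in range(r)]

def prange(H, s):
    while True:
        perm = random.sample(range(n), n)             # random column order
        A = [[H[i][perm[j]] for j in range(n)] + [s[i]] for i in range(r)]
        ok = True
        for col in range(r):                          # Gaussian elim. over F_q
            piv = next((i for i in range(col, r) if A[i][col]), None)
            if piv is None:
                ok = False                            # singular: retry
                break
            A[col], A[piv] = A[piv], A[col]
            inv = pow(A[col][col], q - 2, q)          # field inverse, q prime
            A[col] = [x * inv % q for x in A[col]]
            for i in range(r):
                if i != col and A[i][col]:
                    f = A[i][col]
                    A[i] = [(x - f * y) % q for x, y in zip(A[i], A[col])]
        if not ok:
            continue
        # candidate error: transformed syndrome on the first r columns
        cand = [A[i][n] for i in range(r)] + [0] * (n - r)
        if sum(1 for x in cand if x) <= w:
            e = [0] * n
            for j in range(n):
                e[perm[j]] = cand[j]
            return e

e_hat = prange(H, syndrome)
assert [sum(H[i][j] * e_hat[j] for j in range(n)) % q for i in range(r)] == syndrome
assert sum(1 for v in e_hat if v) <= w
```

Stern's improvement splits the information set and meets in the middle; the paper analyzes how the resulting costs scale with q.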

Confidential Signatures and Deterministic Signcryption

Encrypt-and-sign, where one encrypts and signs a message in
parallel, is usually not recommended for confidential message
transmission. The reason is that the signature typically leaks
information about the message. This motivates our investigation of
confidential signature schemes, which hide all information about
(high-entropy) input messages. In this work we provide a formal
treatment of confidentiality for such schemes and a comprehensive
discussion of the relationship of different notions we propose. We
give constructions meeting our notions, both in the random oracle
model and the standard model. As part of this we show that full
domain hash signatures achieve a weaker level of confidentiality
than Fiat-Shamir signatures. We then revisit the connection of
confidential signatures to signcryption schemes. We give formal
security models for deterministic signcryption schemes for
high-entropy and low-entropy messages, and prove encrypt-and-sign to
be secure for confidential signature schemes and high-entropy
messages. Finally, we show that one can derandomize any signcryption
scheme in our model and obtain a secure deterministic scheme.

Poly-Dragon: An efficient Multivariate Public Key Cryptosystem

In this paper we propose an efficient multivariate
public key cryptosystem. The public key of our cryptosystem contains polynomials of total degree three in the plaintext and ciphertext variables: degree two in the plaintext variables and degree one in the ciphertext variables. However, it is possible to reduce the public key size by writing it as two sets of quadratic multivariate polynomials. The complexity of encryption in our public key cryptosystem is $O(n^{3})$, where $n$ is the bit size, which is equivalent to other multivariate
public key cryptosystems. For decryption we need only four exponentiations in the binary field. Our public-key algorithm is bijective and can be used for encryption as well as for signatures.

A mean value formula for elliptic curves

It is proved in this paper that, for any point on an elliptic
curve, the mean value of the $x$-coordinates of its $n$-division points
is the same as its own $x$-coordinate, and the mean value of the $y$-coordinates of its $n$-division points is $n$ times its $y$-coordinate.
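Since the identity is algebraic, it can be sanity-checked over a small prime field wherever the relevant divisions are defined. The sketch below brute-forces the n = 2 case of the x-coordinate statement on an arbitrary toy curve (curve and prime are this author's assumptions, chosen only so that full halving is possible):

```python
# Brute-force check of the x-coordinate mean-value property for n = 2.
from collections import defaultdict

p = 11
a, b = -1, 0                       # y^2 = x^3 - x over F_11: full 2-torsion

points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y * y - x * x * x - a * x - b) % p == 0]

def double(P):
    if P is None:
        return None
    x, y = P
    if y == 0:                     # 2-torsion doubles to the point at infinity
        return None
    lam = (3 * x * x + a) * pow(2 * y, p - 2, p) % p
    x3 = (lam * lam - 2 * x) % p
    return (x3, (lam * (x - x3) - y) % p)

halves = defaultdict(list)         # P -> all Q with 2Q = P
for Q in points:
    halves[double(Q)].append(Q)

checked = 0
for P, Qs in halves.items():
    if P is None or len(Qs) != 4:  # need all four 2-division points rational
        continue
    mean_x = sum(x for x, _ in Qs) * pow(4, p - 2, p) % p
    assert mean_x == P[0]          # mean of the x-coordinates equals x(P)
    checked += 1
assert checked >= 1
print("verified for", checked, "base points")
```

For n = 2 the property also follows directly from the division quartic: its x^3 coefficient is -4x(P), so the four roots sum to 4x(P).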

An Improved Differential Fault Attack on Camellia

The S-box lookup is one of the most important operations in cipher design, and also the most effective part for preventing traditional linear and differential attacks; however, once the physical implementation of an algorithm is considered, it becomes the weakest part of a cryptosystem. This paper studies active fault-based implementation attacks on block ciphers with S-boxes. First, it proposes a basic DFA model and then presents two DFA models for Feistel and SPN block ciphers. Second, based on the Feistel DFA model, it presents several improved attacks on Camellia encryption and proposes new attacks on the Camellia key schedule. By injecting a one-byte random fault into the (r-1)-th round left register or the (r-1)-th round key, and then solving 8 equations to recover 5 or 6 propagated differential faults of the r-th round left register, 5 or 6 bytes of the r-th equivalent subkey can be recovered at a time. Simulation experiments demonstrate that about 16 faulty ciphertexts are enough to obtain the Camellia-128 key, and about 32 and 24 faulty ciphertexts are required to obtain the Camellia-192/256 key with and without the FL/FL-1 layer, respectively. Compared with the previous study by ZHOU Yongbin et al., which injects a one-byte fault into the r-th round left register to recover 1 equivalent subkey byte and obtains the Camellia-128 and Camellia-192/256 keys with 64 and 96 faulty ciphertexts respectively, our attacks not only extend the fault location, but also improve the fault-injection efficiency and decrease the number of faulty ciphertexts; moreover, our DFA model on Camellia encryption can easily be extended to a DFA on the Camellia key schedule, while ZHOU's cannot. The attack model proposed in this paper can be adapted to most block ciphers with S-boxes. Finally, the contradictions between traditional cryptography and implementation attacks are analyzed, and the state of the art and future directions of DFA on block ciphers with S-boxes are discussed.

Scan-based Attacks on Linear Feedback Shift Register Based Stream Ciphers

In this paper, we present an attack on stream cipher implementations by determining the scan chain structure of the linear feedback shift registers in their implementations. Although scan Design-for-Test (DFT) is a powerful testing scheme, we show that it can be used to retrieve the information stored in a crypto chip thus compromising its theoretically proven security.

Differential-Algebraic Algorithms for the Isomorphism of Polynomials Problem

In this paper, we investigate the difficulty of the Isomorphism of
Polynomials (IP) Problem as well as one of its variants, IP1S. The
Isomorphism of Polynomials is a well-known problem studied more particularly in
multivariate cryptography as it is related to the hardness of the key
recovery of such cryptosystems. The problem is the following: given
two families of multivariate polynomials~$\A$ and~$\B$, find two
invertible linear (or affine) mappings $S$ and $T$ such that
$\B=T\circ \A \circ S$. For IP1S, we suppose that $T$ is the
identity. It is known that the difficulty of such problems depends
on the structure of the polynomials (\ie homogeneous, or not) and
the nature of the transformations (affine, or linear). Here, we
analyze the different cases and propose improved algorithms. We
precisely describe the situation in terms of complexity and
sufficient conditions to make the algorithms work. The algorithms
presented here combine linear algebra techniques, including the use
of differentials, together with Gröbner bases and statistical
tools such as the birthday paradox and a totally new object in the IP-context, the so-called \emph{Galton-Watson trees}. We show
that random instances of IP1S with quadratic polynomials can be
broken in time $\bigO{n^6}$, where $n$ is the number of variables,
independently of the number of polynomials. For IP1S with cubic
polynomials, as well as for IP, we propose new algorithms of
complexity $\bigO{n^6}$ if the polynomials of $\A$ are inhomogeneous
and $S, T$ linear. In all the other cases, we propose an algorithm
that requires $\bigO{n^6 \cdot q^{n/2}}$ computation. Finally, we
propose several algorithms for different subcases of the full IP
problem, the best of which has complexity $\bigO{q^{n/2}}$. These
new tools allow us to break the challenges proposed by Patarin in
practice, and also raise some fundamental questions about the general security of multivariate public-key schemes.

A Game-Based Definition of Coercion-Resistance and its Applications

Coercion-resistance is one of the most important and
intricate security requirements for voting protocols.
Several definitions of coercion-resistance have been
proposed in the literature, both in cryptographic
settings and more abstract, symbolic models. However,
unlike symbolic approaches, only very few voting
protocols have been rigorously analyzed within the
cryptographic setting. A major obstacle is that existing
cryptographic definitions of coercion-resistance tend to
be complex and limited in scope: They are often tailored
to specific classes of protocols or are too demanding.
In this paper, we therefore present a simple and
intuitive cryptographic definition of
coercion-resistance, in the style of game-based
definitions. This definition makes it possible to precisely measure
the level of coercion-resistance a protocol provides. As
the main technical contribution of this paper, we apply
our definition to two voting systems, namely, the Bingo
voting system and ThreeBallot. The results we obtain are
out of the scope of existing approaches. We show that the
Bingo voting system provides the same level of
coercion-resistance as an ideal voting system. We also
precisely measure the degradation of coercion-resistance
of ThreeBallot in case the so-called short ballot
assumption is not met and show that the level of
coercion-resistance ThreeBallot provides is significantly
lower than that of an ideal system, even in case of short
ballots.

A Diagonal Fault Attack on the Advanced Encryption Standard

The present paper develops an attack on the AES algorithm,
exploiting multiple byte faults in the state matrix. The
work shows that inducing a random fault anywhere in one of the four diagonals
of the state matrix at the input of the
eighth round of the cipher leads to the deduction of the entire AES key.
We also propose a more generalized fault attack which works if the fault induction does
not stay confined to one diagonal.
To the best of our knowledge, we present for the first time actual chip results
for a fault attack on an iterative AES hardware running on a Xilinx FPGA platform.
We show that when the fault stays within a diagonal, the AES key can be deduced with a brute force complexity of
approximately $2^{32}$, which was successfully performed in about $400$ seconds
on an Intel Xeon Server with $8$ cores. We show further that even if the
fault induction corrupts two or three diagonals, $2$ and $4$ faulty
ciphertexts, respectively, are necessary to
uniquely identify the correct key.

A complete set of addition laws for incomplete Edwards curves

Edwards curves were the first curves shown to have a complete addition law. However, the completeness of the addition law depends on the curve parameters and even a complete Edwards curve becomes incomplete over a quadratic field extension. This paper covers arbitrary Edwards curves and gives a set of two addition laws that for any pair of input points P1, P2 produce the sum P1+P2.
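For intuition, completeness can be checked exhaustively on a toy curve: over F_p with d a non-square, the single Edwards addition law below is defined for every pair of points. The parameters are this author's illustrative choices, and this shows only the classical complete case, not the paper's two-law system for arbitrary curves.

```python
# Exhaustive completeness check of the Edwards addition law over a tiny field.
p, d = 13, 2                      # 2 is a non-square mod 13

def inv(a):
    return pow(a, p - 2, p)

# Curve: x^2 + y^2 = 1 + d*x^2*y^2 over F_p
points = [(x, y) for x in range(p) for y in range(p)
          if (x * x + y * y - 1 - d * x * x * y * y) % p == 0]

def add(P, Q):
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * inv((1 + t) % p) % p
    y3 = (y1 * y2 - x1 * x2) * inv((1 - t) % p) % p
    return (x3, y3)

# Every pair of curve points adds without division by zero, and the sum
# is again a curve point -- the defining property of a complete law.
for P in points:
    for Q in points:
        x1, y1 = P
        x2, y2 = Q
        t = d * x1 * x2 * y1 * y2 % p
        assert t != 1 and t != p - 1          # denominators never vanish
        x3, y3 = add(P, Q)
        assert (x3 * x3 + y3 * y3 - 1 - d * x3 * x3 * y3 * y3) % p == 0
print(len(points), "points; all pairwise sums defined and on the curve")
```

With d a square (or over a quadratic extension where d becomes a square), the denominator assertions above can fail, which is the incompleteness the paper addresses.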

Privacy-Preserving Public Auditing for Secure Cloud Storage

Using Cloud Storage, users can remotely store their data and enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes the data integrity
protection in Cloud Computing a formidable task, especially for users with constrained computing resources. Moreover, users should be able to use the cloud storage as if it were local, without worrying about the need to verify its integrity. Thus, enabling public auditability for cloud storage is of critical importance so that users can resort to a third-party auditor (TPA) to check the integrity of outsourced data and be worry-free. To securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities towards user data privacy, and introduce no additional online burden to the user. In this paper, we propose a secure cloud storage system supporting privacy-preserving public auditing. We further extend our result to enable the TPA to perform audits for multiple users
simultaneously and efficiently. Extensive security and performance analysis show the proposed schemes are provably secure and highly efficient.

Efficient and Provably Secure Certificateless Signcryption from Bilinear Maps

Signcryption is a cryptographic primitive that fulfills both the functions of digital signature and public key encryption simultaneously, at a cost significantly lower than that required by the traditional signature-then-encryption approach. In 2008, Barbosa and Farshim introduced the notion of certificateless signcryption (CLSC) and proposed the first CLSC scheme [2], which, however, requires six pairing operations in the signcrypt and unsigncrypt phases. In this paper, aiming to design an efficient CLSC scheme, we propose a new CLSC scheme from bilinear maps that requires only two pairing operations in the signcrypt and unsigncrypt phases and is more efficient than all previously available schemes.

On the nonlinearity profile of the Dillon function

The nonlinearity profile of a Boolean function is the sequence of its minimum Hamming distances $nl_r(f)$ to all functions of degree at most $r$, for $r\geq 1$. The nonlinearity profile of a vectorial function is the sequence of the minimum Hamming distances between its component functions and functions of degree at most $r$, for $r\geq 1$. The profile of the multiplicative inverse function was lower bounded in a previous paper by the same author. No other example of an infinite class of functions with unbounded algebraic degree, whose nonlinearity profile can be efficiently lower bounded, has been exhibited since then. In this preprint, we lower bound the whole nonlinearity profile of the simplest Dillon bent function $(x,y)\mapsto xy^{2^{n/2}-2}$, $x,y\in F_{2^{n/2}}$.

Public-Key Cryptographic Primitives Provably as Secure as Subset Sum

We propose a semantically-secure public-key encryption scheme whose security is polynomial-time equivalent to the hardness of solving random instances of the subset sum problem. The subset sum assumption required for the security of our scheme is weaker than that of existing subset-sum based encryption schemes, namely the lattice-based schemes of Ajtai and Dwork (STOC '97), Regev (STOC '03, STOC '05), and Peikert (STOC '09). Additionally, our proof of security is simple and direct. We also present a natural variant of our scheme that is secure against key-leakage attacks, as well as an oblivious transfer protocol that is secure against semi-honest adversaries.

Differential Fault Analysis of the Advanced Encryption Standard using a Single Fault

In this paper we present an enhanced Differential Fault Attack that can be applied to the AES using a single fault. We demonstrate that when a single random byte fault is induced that affects the input of the eighth round, the AES key can be deduced using a two-stage algorithm. The first step would be expected to reduce the number of possible key hypotheses to $2^{32}$, and the second step to a mere $2^{8}$. Furthermore, we show that, with certain faults, this can be further reduced to two key hypotheses.
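The structure such attacks exploit comes from the linearity of MixColumns: a single-byte difference entering a column becomes the fixed pattern (2d, d, d, 3d) over GF(2^8). The small demonstration below shows only this propagation step; the key-filtering stages of the attack are not reproduced.

```python
# Propagation of a one-byte fault through AES MixColumns.

def xtime(b):                         # multiply by x modulo the AES polynomial
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def gmul(a, b):                       # multiplication in GF(2^8)
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def mix_column(col):                  # MixColumns applied to one column
    m = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]
    return [gmul(m[i][0], col[0]) ^ gmul(m[i][1], col[1])
            ^ gmul(m[i][2], col[2]) ^ gmul(m[i][3], col[3]) for i in range(4)]

col = [0x10, 0x20, 0x30, 0x40]        # arbitrary column contents
delta = 0x53                          # arbitrary fault value in byte 0
faulty = [col[0] ^ delta] + col[1:]

diff = [a ^ b for a, b in zip(mix_column(col), mix_column(faulty))]
assert diff == [gmul(2, delta), delta, delta, gmul(3, delta)]
print("output difference:", [hex(v) for v in diff])
```

Because only four such difference patterns are possible per column, wrong key guesses rarely produce a consistent pattern, which is what drives the hypothesis reduction described above.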

Voting with unconditional privacy: CFSY for booth voting

In this note we simplify the Cramer, Franklin, Schoenmakers
and Yung internet voting protocol to the booth setting.
In it, objects of the form $g_0^r g_1^{x_1}\cdots g_l^{x_l}$
are used to define an unconditionally hiding commitment scheme.
Because of their homomorphic properties,
they are particularly suited for voting protocols
with unconditional privacy.
In fact, we show that almost all existing
protocols that provide unconditional privacy
use, or could benefit from, these commitments.
Even though we present no novelty from a cryptographic perspective,
the protocol presented is interesting from a voting perspective
because it is simple enough to be understood by non-cryptographers,
yet very powerful.
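The homomorphic property that makes these commitments suited to tallying can be sketched directly: multiplying commitments yields a commitment to the coordinate-wise sum of the committed vectors. The group parameters below are toy values assumed for illustration, not a secure instantiation.

```python
# Multi-base (Pedersen-style) commitments g0^r * g1^x1 * ... * gl^xl and
# their homomorphic tallying property -- toy parameters.
import random

q = 1019                    # prime order of the subgroup of Z_p^*
p = 2 * q + 1               # 2039, also prime
gens = [4, 9, 16, 25]       # g0..g3: squares mod p, hence in the order-q subgroup

def commit(r, xs):
    c = pow(gens[0], r, p)  # r is the hiding randomness
    for g, x in zip(gens[1:], xs):
        c = c * pow(g, x, p) % p
    return c

# Two ballots as 0/1 vectors (one slot per candidate)
r1, v1 = random.randrange(q), [1, 0, 1]
r2, v2 = random.randrange(q), [0, 1, 1]

# Product of commitments == commitment to the element-wise sum of the votes
tallied = commit((r1 + r2) % q, [(a + b) % q for a, b in zip(v1, v2)])
assert commit(r1, v1) * commit(r2, v2) % p == tallied
print("product of commitments opens to the summed votes")
```

The hiding is unconditional because, for any vector, a suitable r makes the commitment equal to any given group element; only the binding rests on a computational assumption.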

New Addition Operation and Its Application for Scalar Multiplication on Hessian Curves over Prime Fields

In this paper, we present a new low-cost addition operation on Hessian curves. It can be applied to resist side-channel attacks on scalar multiplication, and can also be used to compute precomputation points for window-based scalar multiplication on Hessian curves over prime fields. We propose two new precomputation schemes that are shown to achieve the lowest cost among all known methods. Using the fractional $w$NAF and fractional $wmb$NAF, if $n=192$ bits and $1I\approx30M$, scheme 1 can save up to $31M$ and scheme 2 can save up to $28M$ with $w\geq 6$, where $I$ and $M$ denote the costs of a field inversion and a field multiplication, respectively.
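
For readers unfamiliar with the recoding these methods build on, here is a minimal width-$w$ NAF sketch (plain wNAF only; the fractional variants refine the digit set further):

```python
def wnaf(n, w):
    """Width-w NAF: nonzero digits are odd, |d| < 2^(w-1), and any two
    nonzero digits are at least w positions apart (least significant first)."""
    digits = []
    while n > 0:
        if n & 1:
            d = n % (1 << w)
            if d >= 1 << (w - 1):
                d -= 1 << w          # pick the odd digit nearest n mod 2^w
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def from_digits(digits):
    return sum(d << i for i, d in enumerate(digits))

assert wnaf(441, 4) == [-7, 0, 0, 0, 0, 0, 7]
assert from_digits(wnaf(441, 4)) == 441
```

Scalar multiplication then processes these digits with precomputed odd multiples of the point, which is where the precomputation schemes of the paper come in.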

Last updated: 2012-11-11

On the Equivalence of Two Models for Key-Dependent-Message Encryption

In this paper we examine the relationship between the security models for key-dependent-message encryption proposed by Backes \emph{et al.} \cite{Backes:08:OAEP} and Camenisch \emph{et al.} \cite{Camenisch:09:Public}. We show that the two notions are equivalent for certain logical classes of function families when the number of keys $\ell$ in the system is logarithmically small.

Fully Homomorphic Encryption with Relatively Small Key and Ciphertext Sizes

We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
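
As a rough illustration of how homomorphic operations over the integers can work, here is a toy symmetric somewhat-homomorphic sketch in the DGHV style. This is not the paper's construction (which is public-key), and the parameters below are our own insecure toy choices:

```python
import random
random.seed(7)

# Secret key: an odd integer p. Ciphertext: c = q*p + 2*r + m for a bit m,
# random q, and small noise r. Decryption: (c mod p) mod 2. Correct as long
# as the accumulated noise 2r + m stays below p -- hence "somewhat" homomorphic.
p_key = 1000003

def enc(m):
    q = random.randrange(1, 2**20)
    r = random.randrange(0, 16)      # small noise term
    return q * p_key + 2 * r + m

def dec(c):
    return (c % p_key) % 2

a, b = 1, 1
ca, cb = enc(a), enc(b)
# Adding/multiplying ciphertexts adds/multiplies the plaintext bits mod 2.
assert dec(ca + cb) == (a + b) % 2
assert dec(ca * cb) == (a * b) % 2
```

Each multiplication roughly squares the noise, which is why a bootstrapping step is needed to reach full homomorphism.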

Achieving Oblivious Transfer Capacity of Generalized Erasure Channels in the Malicious Model

Uncategorized

Information-theoretically secure string oblivious transfer (OT) can be constructed based on a discrete memoryless channel (DMC). The oblivious transfer capacity of a channel characterizes, similarly to the (standard) information capacity, how efficiently it can be exploited for secure oblivious transfer of strings. The OT capacity of a Generalized Erasure Channel (GEC), which is a combination of a (general) DMC with the erasure channel, was established by Ahlswede and Csiszár at ISIT '07 in the case of passive adversaries. In this paper, we present a protocol that achieves this capacity against malicious adversaries for GECs with erasure probability at least 1/2. Our construction is based on the protocol of Crépeau and Savvides from Eurocrypt '06, which uses interactive hashing (IH). We solve an open question posed in that paper by basing our protocol on a constant-round IH scheme (previously proposed by Ding et al. at TCC '04). As a side result, we show that the IH protocol of Ding et al. can deal with transmission errors.

Secure Network Coding Over the Integers

Network coding has received significant attention in the networking community for its potential to increase throughput and improve robustness without any centralized control. Unfortunately, network coding is highly susceptible to ``pollution attacks'' in which malicious nodes modify packets in a way that prevents the reconstruction of information at recipients; such attacks cannot be prevented using standard end-to-end cryptographic authentication because network coding requires that intermediate nodes modify data packets in transit. Specialized solutions to the problem have been developed in recent years based on homomorphic hashing and homomorphic signatures. The latter are more bandwidth-efficient but require more computation; in particular, the only known construction uses bilinear maps.
We contribute to this area in several ways. We present the first homomorphic signature scheme based solely on the RSA assumption (in the random oracle model), and present a homomorphic hashing scheme based on composite moduli that is computationally more efficient than existing schemes (and which leads to secure network coding signatures based solely on the hardness of factoring in the standard model). Both schemes use shorter public keys than previous schemes. In addition, we show variants of existing schemes that reduce the communication overhead significantly for moderate-size networks, and which improve computational efficiency in some cases quite dramatically (e.g., we achieve a 20-fold speedup in the computation of intermediate nodes). At the core of our techniques is a modified approach to network coding where instead of working in a vector space over a field, we work directly over the integers (with small coefficients).
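
A homomorphic hash over a composite modulus, of the general shape used in such schemes, can be sketched as follows (tiny illustrative primes of our choosing; a real instantiation keeps the factorization secret and uses cryptographic sizes):

```python
import random

random.seed(1)
# Tiny illustrative modulus; a real scheme uses a large composite modulus
# whose factorization stays secret.
N = 1009 * 1013
DIM = 4
g = [pow(random.randrange(2, N), 2, N) for _ in range(DIM)]  # public generators

def hhash(v):
    """H(v) = prod_i g_i^(v_i) mod N for an integer vector v."""
    h = 1
    for gi, vi in zip(g, v):
        h = h * pow(gi, vi, N) % N
    return h

u = [1, 2, 3, 4]
v = [5, 0, 7, 1]
# The hash of an integer combination of vectors is the corresponding
# combination of hashes -- exactly what intermediate nodes need to verify
# recoded packets without end-to-end signatures on each combination.
assert hhash([a + b for a, b in zip(u, v)]) == hhash(u) * hhash(v) % N
assert hhash([3 * a for a in u]) == pow(hhash(u), 3, N)
```

Working with small integer coefficients, rather than a vector space over a field, keeps the exponents bounded and the verification cheap.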

Ephemeral key compromise attack on the IB-KA protocol

Recently, Dario Fiore and Rosario Gennaro proposed the IB-KA protocol, which was inspired by the MQV protocol. They provide a full proof of security of the IB-KA protocol using techniques developed by Krawczyk in the Canetti-Krawczyk model. They designed the IB-KA protocol with security properties such as perfect forward secrecy, reflection attack resilience, and key compromise impersonation resilience. However, they did not consider the ephemeral key compromise problem in the design of the IB-KA protocol, and gave no analysis of whether the IB-KA protocol can resist ephemeral key compromise attacks. In this paper, we present an ephemeral key compromise attack on the IB-KA protocol. Our work shows that the IB-KA protocol is designed without ephemeral key compromise resilience.

Properties of the Discrete Differential with Cryptographic Applications

Recently, the $C^{*-}$ signature scheme has been completely broken by Dubois et al. (Dubois et al., CRYPTO and EUROCRYPT 2007). As a consequence, the security of SFLASH and other multivariate public key systems has been impaired. The attacks presented in (Dubois et al., CRYPTO and EUROCRYPT 2007) rely on a symmetry of the differential of the encryption mapping. In (Ding et al., 2007), Ding et al. experimentally justify the use of projection as a method of avoiding the new attack. In this paper, we derive some properties of the discrete differential, give a theoretical justification for the reparation in (Ding et al., 2007), and establish the exact context in which this attack is applicable.

New Cryptosystems From CSP-Based Self-Distributive Systems

Uncategorized

We propose new cryptosystems based on self-distributive systems that are defined by conjugator searching problems (CSP) in noncommutative groups. Under the newly developed cryptographic assumptions, our basic construction is proven IND-CPA secure in the standard model. Then, we describe two extensions: the first is proven IND-CCA secure in the random oracle model, while the second achieves IND-CCA security in the standard model. Moreover, our proposal is instantiated with braid groups, leading to a new braid-based encryption scheme whose security is directly rooted in the intractability assumption of CSP in braid groups.

Faster Squaring in the Cyclotomic Subgroup of Sixth Degree Extensions

This paper describes an extremely efficient squaring operation in the so-called `cyclotomic subgroup' of $\F_{q^6}^{\times}$, for $q \equiv 1 \bmod{6}$. This result arises from considering the Weil restriction of scalars of this group from $\F_{q^6}$ to $\F_{q^2}$, and provides efficiency improvements for both pairing-based and
torus-based cryptographic protocols.

Optimal pairing revisited

Vercauteren introduced a notion of optimal pairings. Up to now, the only known optimal pairing is the optimal Ate pairing. In this paper, we give some properties of optimal pairings and provide an algorithm for finding an optimal pairing, if one exists, that is defined on a given elliptic curve. Applying the cyclotomic polynomial, we construct some new optimal pairings and provide a construction method for pairing-friendly elliptic curves on which the optimal pairing can be defined. Our algorithm is explicit and works for arbitrary embedding degree $k$ and large prime subgroup order $r$.

A Family of $p$-ary Binomial Bent Functions

For a prime $p$ with $p\equiv 3\,({\rm mod}\, 4)$ and an odd number
$m$, the Bentness of the $p$-ary binomial function $f_{a,b}(x)={\rm
Tr}_{1}^n(ax^{p^m-1})+{\rm Tr}_{1}^2(bx^{\frac{p^n-1}{4}})$ is
characterized, where $n=2m$, $a\in \bF_{p^n}^*$, and $b\in
\bF_{p^2}^*$. The necessary and sufficient conditions of
$f_{a,b}(x)$ being Bent are established respectively by an
exponential sum and two sequences related to $a$ and $b$. For the
special case of $p=3$, we further characterize the Bentness of the
ternary function $f_{a,b}(x)$ by the Hamming weight of a sequence.
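
For the binary case ($p=2$), bentness can be checked directly from the Walsh spectrum; the sketch below verifies the classic four-variable bent function (the $p$-ary case in the paper replaces $(-1)$ with a complex $p$-th root of unity):

```python
from itertools import product

def dot(a, x):
    return sum(ai & xi for ai, xi in zip(a, x)) & 1

def walsh_spectrum(f, nvars):
    """Walsh coefficients W_f(a) = sum_x (-1)^(f(x) XOR a.x) over F_2^n."""
    pts = list(product((0, 1), repeat=nvars))
    return [sum((-1) ** (f(x) ^ dot(a, x)) for x in pts) for a in pts]

# Rothaus' classic bent function f(x) = x1 x2 XOR x3 x4:
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
# f is bent iff every Walsh coefficient has absolute value 2^(n/2) = 4:
assert all(abs(w) == 4 for w in walsh_spectrum(f, 4))
# By contrast, a linear function has a spectrum value of +-2^n:
g = lambda x: x[0]
assert max(abs(w) for w in walsh_spectrum(g, 4)) == 16
```

A flat spectrum means the function is as far from all affine functions as possible, which is what makes bent functions interesting for nonlinearity.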

How to pair with a human

Uncategorized

We introduce a protocol, which we call Human Key Agreement, that allows pairs of humans to establish a key in a (seemingly hopeless) case where no public-key infrastructure is available, the users do not share any common secret, and have never been connected by any physically-secure channel. Our key agreement scheme, while vulnerable to human-in-the-middle attacks, is secure against any malicious machine-in-the-middle. The only assumption we make is that the attacker is a machine that is not able to break Captcha puzzles (introduced by von Ahn et al., EUROCRYPT 2003).
Our main tool is a primitive that we call a Simultaneous Turing Test, a protocol that allows two users to verify that they are both human, in such a way that if one of them is not human, then he does not learn whether the other one is human or not.
To construct this tool we use the Universally-Composable Password Authenticated Key Agreement of Canetti et al. (EUROCRYPT 2005).

Improvements on two password-based authentication protocols

Recently, Liao et al. and Hölbl et al. each proposed a user authentication protocol. Both claimed that their schemes can withstand various attacks. However, Xiang et al. pointed out that Liao et al.'s protocol suffers from three kinds of attacks: the replay attack, the guessing attack, and the denial-of-service (DoS) attack. Moreover, we and Munilla et al. also found that Hölbl et al.'s protocol suffers from the password guessing attack. In this paper, we propose improvements to the two protocols. After analyses and comparisons, we conclude that our improvements are not only more secure but also more efficient in communication cost than all of the previously proposed password-based schemes that we know of.

On the Security Vulnerabilities of a Hash Based Strong Password Authentication Scheme

User authentication is an essential task for network security. To serve this purpose, several strong password authentication schemes have been proposed in the past years, but none of them can provably withstand known security threats. In 2004, W. C. Ku proposed a new hash-based strong password authentication scheme and claimed that the proposed scheme withstands replay, password file compromise, denial of service, and insider attacks. This paper analyzes W. C. Ku's scheme and finds that it supports neither mutual authentication nor a session key generation phase for secure communication. In addition, in W. C. Ku's scheme, the user is not free to change his password. Furthermore, we show that W. C. Ku's scheme is still vulnerable to insider, man-in-the-middle, password guessing, replay, impersonation, stolen verifier, and denial of service attacks.

New Integral Distinguisher for Rijndael-256

The known 3-round distinguisher of Rijndael-256 is byte-oriented, and 2^8 plaintexts are needed to distinguish 3-round Rijndael-256 from a random permutation. In this paper, we consider the influence of the order of the plaintexts and present a new 3-round distinguisher which needs only 32 plaintexts.

Last updated: 2010-01-14

Quantifying Trust

Trust is a central concept in public-key cryptography infrastructure and in security in general. We study its initial quantification and its spread patterns. There is empirical evidence that in trust-based reputation model for virtual communities, it pays to restrict the clusters of agents to small sets with high mutual trust. We propose and motivate a mathematical model, where this phenomenon emerges naturally. In our model, we separate trust values from their weights. We motivate this separation using real examples, and show that in this model, trust converges to the extremes, agreeing with and accentuating the observed phenomenon. Specifically, in our model, cliques of agents of maximal mutual trust are formed, and the trust between any two agents that do not maximally trust each other, converges to zero.
We offer initial practical relaxations to the model that preserve some of the theoretical flavor.

Last updated: 2010-01-14

Towards a Theory of Trust Based Collaborative Search

We developed three new theoretical insights into the art of hierarchical clustering in the context of web search. A notable example where these results may be useful is Trust Based Collaborative Search, where an active user consults agents that in the past performed a similar search. We proceed with this as an example throughout the paper, even though the results are more broadly applicable. The first result is that under plausible conditions, trust converges to the extremes, creating clusters of maximal trust. The trust between any two agents whose initial mutual trust is not maximal eventually vanishes. In practice there is uncertainty about data, hence we have to approximate the first result with less than maximal trust. We allow clustering tolerance equal to the uncertainty at each stage. The second result is that in the context of search, under plausible assumptions, this uncertainty converges exponentially fast as we descend the clustering tree. The third observation is that Shannon's cryptography may help estimate that uncertainty.

Constructing Tower Extensions for the implementation of Pairing-Based Cryptography

Uncategorized

A cryptographic pairing evaluates as an element in an extension field, and the evaluation itself involves a considerable
amount of extension field arithmetic. It is recognised that organising the extension field as a ``tower'' of subfield extensions has many
advantages. Here we consider criteria that apply when choosing the best towering construction, and the associated choice of
irreducible polynomials for the implementation of pairing-based cryptosystems. We introduce a method for automatically constructing efficient towers
for more congruency classes than previous methods, some of which allow faster arithmetic.

Last updated: 2010-02-26

An enhanced password authenticated key agreement protocol for wireless mobile network

Password-based Authenticated Key Agreement (PAKA) protocols are widely used in wireless mobile networks; however, many existing PAKA protocols have security flaws. In the 3GPP2 network, several PAKA protocols have been proposed to enhance the security of the Authentication Key distribution mechanism, which is subject to the man-in-the-middle attack. We point out the security flaws of such protocols in [4,5] and give two practical attacks on them. Moreover, we propose an enhanced PAKA protocol that can resist undetectable on-line and off-line password guessing attacks, and formally analyze its security in the Random Oracle model. In addition, we consider a special version of the Diffie-Hellman problem called the Degenerate Diffie-Hellman problem and propose two assumptions, called the Computational and Decision Degenerate Diffie-Hellman assumptions, which are as difficult as the CDH assumption and the DDH assumption respectively.

ON A COMBINATORIAL CONJECTURE

Uncategorized

Recently, Tu and Deng proposed a combinatorial conjecture on binary strings. On the premise that the conjecture is correct, they obtained two classes of Boolean functions, both with optimal algebraic immunity: the functions in the first class are also bent, while the second class consists of balanced functions with optimal algebraic degree and the best nonlinearity known to date. In this paper, approaching the problem from three different sides, we prove this conjecture is true in many cases using different counting strategies. We also propose some problems about the weight equations related to this conjecture. Because of the scattered distribution, we predict that a general counting argument will be difficult to obtain.

Cryptanalysis of a key exchange scheme based on block matrices

In this paper we describe a cryptanalysis of a key exchange scheme recently proposed by Alvarez, Tortosa, Vicent and Zamora. The scheme is based on exponentiation of block matrices over a finite field of prime order. We present an efficient reduction of the problem of disclosing the shared key to the discrete logarithm problem (DLP) in an extension of the base field.

Preimage Attacks on Reduced DHA-256

Uncategorized

DHA-256 (Double Hash Algorithm) was proposed at the Cryptographic Hash Workshop hosted by NIST in November 2005. DHA-256 is a dedicated hash function with output length of 256 bits and 64 steps of operations designed to enhance SHA-256 security. In this paper, we show two attacks on reduced DHA-256. The first attack finds one-block second preimage and preimage of 26-step DHA-256 with time complexity of 2^{223.82} compression function operations and 2^{32} x 9 words memory. The second attack finds pseudo-preimage and preimage of 35-step DHA-256 with time complexity of 2^{239.63} and 2^{248.82} compression function operations, respectively, and 2^{16} x 11 words memory. To the best of our knowledge, this is the first paper that analyzes second pre-image resistance and preimage resistance of DHA-256.

A Novel Design Method of Stream Ciphers Based on Table-Element Permutation

Uncategorized

In this paper, a new stream cipher design method, named TEP, based on table-element nonlinear permutation is proposed. Words generated by n LFSRs (linear feedback shift registers) are input to a table. In the table, every word is processed by nonlinear transforms, and several words are combined with a nonlinear function to produce keystream words. The algorithm is simple and the key stream is generated rapidly. The results of many simulation experiments show that the keystream generated by the TEP method passes the DIEHARD statistical tests. The approach is an efficient way to design stream ciphers.
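
The LFSR word sources mentioned above can be illustrated with a minimal bit-level Fibonacci LFSR (the tap positions and state here are illustrative, not the TEP parameters):

```python
def lfsr_bits(state, taps, nout):
    """Fibonacci LFSR over GF(2): output the last stage each clock and feed
    back the XOR of the tapped stages into the front."""
    state = list(state)
    out = []
    for _ in range(nout):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# With these taps the feedback recurrence has a primitive characteristic
# polynomial, so any nonzero 4-bit state yields the maximal period 2^4 - 1 = 15.
seq = lfsr_bits([1, 0, 0, 1], (2, 3), 30)
assert seq[:15] == seq[15:]
assert 0 in seq and 1 in seq
```

In a design like TEP, several such registers would be clocked in parallel to form the words fed into the nonlinear table.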

How to Construct Cryptosystems and Hash Functions in Weakened Random Oracle Models

Uncategorized

In this paper, we discuss how to construct secure cryptosystems and secure hash functions in weakened random oracle models.
The weakened random oracle model ($\wrom$), which was introduced by Numayama et al. at PKC 2008, is a random oracle with several weaknesses.
Though the security of cryptosystems in the random oracle model, $\rom$, has been discussed sufficiently,
the same is not true for $\wrom$.
A few cryptosystems have been proven secure in $\wrom$.
In this paper,
we will propose a new conversion that can convert \emph{any} cryptosystem secure in $\rom$ to a new cryptosystem that is secure in
the first preimage tractable random oracle model $\fptrom$ \emph{without re-proof}.
$\fptrom$ is $\rom$ without preimage resistance and so is the weakest of the $\wrom$ models.
Since there are many secure cryptosystems in $\rom$, our conversion can yield many cryptosystems secure in $\fptrom$.
The fixed input length weakened random oracle model, $\filwrom$, introduced by Liskov at SAC 2006,
reflects the known weakness of compression functions.
We will propose new hash functions that are indifferentiable from $\ro$ when the underlying compression function is modeled by a two-way partially-specified preimage-tractable fixed input length random oracle model ($\wfilrom$).
$\wfilrom$ is $\filrom$ without two types of preimage resistance and is the weakest of the $\filwrom$ models.
The proposed hash functions are more efficient than the existing hash functions which are
indifferentiable from $\ro$ when the underlying compression function is modeled by $\wfilrom$.

Making Collusion-Secure Codes (More) Robust against Bit Erasure

A collusion-secure code is called robust if it is secure against erasure of a limited number of undetectable bits, in addition to collusion attacks under Marking Assumption. In this article, we propose the first general conversion method of (non-robust) $c$-secure codes to robust $c$-secure codes. Also, the same method amplifies robustness of given robust $c$-secure codes. By applying our conversion to $c$-secure codes given by Nuida et al. (AAECC 2007), we present robust $c$-secure codes with code length of order $\Theta(c^2 \log^2 c)$ with respect to $c$. This code length improves preceding results by Sirvent (WCC 2007) and by Boneh and Naor (ACM CCS 2008) and is close to the one by Billet and Phan (ICITS 2008), although our construction is based on a weaker assumption than those preceding results. As an application, applying our resulting code to construction by Boneh and Naor also improves their traitor tracing scheme against imperfect decoders in efficiency of both key sizes and pirate tracing procedure.

A NOTE ON YAO'S THEOREM ABOUT PSEUDORANDOM GENERATORS

Yao's theorem gives an equivalence between the indistinguishability of a pseudorandom generator and the unpredictability of the next bit, from an asymptotic point of view. We present in this paper, with detailed proofs, some modified versions of Yao's theorem which can be of interest for the study of practical systems. We study the case of a single pseudorandom generator, then the case of a family of pseudorandom generators having the same fixed length, and finally an asymptotic version of the previous result. We compute in each case the cost of the reduction between the two algorithms.

Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers

Verifiable Computation enables a computationally weak client to "outsource" the computation of a function F on various inputs x_1,...,x_k to one or more workers. The workers return the result of the function evaluation, e.g., y_i=F(x_i), as well as a proof that the computation of F was carried out correctly on the given value x_i. The verification of the proof should require substantially less computational effort than computing F(x_i) from scratch.
We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(m) time, where m is the bit-length of the output of F. The protocol requires a one-time pre-processing stage by the client which takes O(|C|) time, where C is the smallest Boolean circuit computing F. Our scheme also provides input and output privacy for the client, meaning that the workers do not learn any information about the x_i or y_i values.

Construction of A New Class of Linear Multivariate Public Key Cryptosystem, K(I)SE(1)PKC

In this paper, we present a new class of linear multivariate PKC referred to as K(I)SE(1)PKC.
We shall show that K(I)SE(1)PKC,
a linear multivariate PKC, can be sufficiently secure against any linear transformation attack due to the probabilistic structure.
We show that the probabilistic structure can be successfully introduced by the use of the Chinese Remainder Theorem.
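
The Chinese Remainder Theorem step can be sketched generically as follows (a standard CRT reconstruction, not the specific randomization used in K(I)SE(1)PKC):

```python
def crt(residues, moduli):
    """Recover x mod prod(moduli) from its residues modulo pairwise-coprime
    moduli, via the standard Chinese Remainder Theorem formula."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return x % M

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  x = 23 (mod 105)
assert crt([2, 3, 2], [3, 5, 7]) == 23
```

The reconstruction is unique modulo the product of the moduli, which is what lets residues carry hidden structure in such constructions.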

Fuzzy extractors for continuous distributions

We show that there is a direct relation between the maximum length of the keys extracted from biometric data and the error rates of the biometric system. The length of the bio-key depends on the amount of distinguishing information that can be extracted from the source data. This information can be used a priori to evaluate the potential of the biometric data in the context of a specific cryptographic application. We model the biometric data more naturally as a continuous distribution and we give a new definition for fuzzy extractors that works better for this type of data.

Connections between Quaternary and Binary Bent Functions

Boolean bent functions were introduced by Rothaus (1976) as combinatorial objects related to difference sets, and have since enjoyed a great popularity in symmetric cryptography and low correlation sequence design. In this paper direct links between Boolean bent functions, generalized Boolean bent functions (Schmidt,
2006) and quaternary bent functions (Kumar, Scholtz, Welch, 1985)
are explored. We also study Gray images of bent functions and
notions of generalized nonlinearity for functions that are relevant
to generalized linear cryptanalysis.
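
A central ingredient in relating quaternary and binary functions is the Gray map $0 \mapsto 00$, $1 \mapsto 01$, $2 \mapsto 11$, $3 \mapsto 10$, an isometry from $\mathbb{Z}_4$ with the Lee metric to $\mathbb{F}_2^2$ with the Hamming metric; a quick check:

```python
# Gray map from Z4 to bit pairs: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(a):
    """Lee weight on Z4: distance to 0 around the 4-cycle."""
    return min(a % 4, (-a) % 4)

# The Gray map is an isometry: the Lee weight of a Z4 symbol equals the
# Hamming weight of its image, which lets quaternary properties transfer
# to binary ones.
assert all(lee_weight(a) == sum(GRAY[a]) for a in range(4))

# It extends componentwise to vectors over Z4:
word = (0, 1, 2, 3)
image = tuple(b for a in word for b in GRAY[a])
assert image == (0, 0, 0, 1, 1, 1, 1, 0)
assert sum(image) == sum(lee_weight(a) for a in word)
```

Applied to function values, this componentwise map is what produces the Gray images of bent functions studied in the paper.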

Last updated: 2010-02-16

A Formal Framework for Cryptanalyzing RFID Distance Bounding Protocols

Uncategorized

Many distance bounding protocols appropriate for RFID technology have been proposed recently. However, the design and the analysis of these protocols are not based on a formal perspective. Motivated by this need, a formal framework is presented that helps future attempts to cryptanalyze and design new distance bounding protocols. We first formalize the adversary scenarios, the protocol means, and the adversary goals in general. Then, we focus on the formalism for RFID systems by describing and extending the adversary strategies and the prover model. Two recently published distance bounding protocols are cryptanalyzed using our formal framework to demonstrate its relevancy and efficiency. Our formalism thus allows us to prove that the adversary success probabilities are higher than the originally claimed ones.

Analysis of Intermediate Field Systems

We study a new generic trapdoor for public key multivariate cryptosystems, called IFS for Intermediate Field Systems, which can be seen as dual to HFE. This new trapdoor relies on the possibility of inverting a system of quadratic multivariate equations with few (logarithmic with respect to the security parameter) unknowns over an intermediate field thanks to Groebner basis algorithms. We provide a comprehensive study of the security of this trapdoor and show that it is equivalent to the security provided by HFE. Therefore, while insecure in its basic form, this trapdoor may prove quite attractive when used with, e.g., the minus modifier.

Breaking ECC2K-130

Elliptic-curve cryptography is becoming the standard public-key
primitive not only for mobile devices but also for high-security
applications.
Advantages are the higher cryptographic
strength per bit in comparison with RSA and the higher speed in
implementations.
To improve understanding of the exact strength of the elliptic-curve
discrete-logarithm problem, Certicom has published a series of
challenges. This paper describes breaking the ECC2K-130 challenge
using a parallelized version of Pollard's rho method.
This is a major computation bringing together the contributions of
several clusters of conventional computers, PlayStation 3 clusters, computers with powerful graphics cards, and FPGAs. We also give estimates for an ASIC design. In particular we present: our choice and analysis of the iteration function for the rho method; our choice of finite field arithmetic and representation; detailed descriptions of the implementations on a multitude of platforms (CPUs, Cells, GPUs, FPGAs, and ASICs); and details about running the attack.
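
A serial toy version of Pollard's rho for discrete logarithms (in a small multiplicative group rather than the ECC2K-130 curve, and without the parallelized distinguished-point machinery) looks like this:

```python
import random

random.seed(0)
p, g, n = 1019, 4, 509      # g = 4 generates the order-509 subgroup of Z_1019^*
h = pow(g, 123, p)          # target: recover x = 123

def step(x, a, b):
    """One pseudo-random walk step; x = g^a * h^b is maintained throughout,
    with the branch chosen by a crude 3-way partition of the group."""
    if x % 3 == 0:
        return x * x % p, 2 * a % n, 2 * b % n
    if x % 3 == 1:
        return x * g % p, (a + 1) % n, b
    return x * h % p, a, (b + 1) % n

def rho():
    while True:                           # restart until a useful collision
        a0, b0 = random.randrange(n), random.randrange(n)
        tort = hare = (pow(g, a0, p) * pow(h, b0, p) % p, a0, b0)
        while True:                       # Floyd cycle finding
            tort = step(*tort)
            hare = step(*step(*hare))
            if tort[0] == hare[0]:
                break
        # Collision g^a1 h^b1 = g^a2 h^b2 gives x = (a1-a2)/(b2-b1) mod n.
        (_, a1, b1), (_, a2, b2) = tort, hare
        if (b2 - b1) % n:
            return (a1 - a2) * pow(b2 - b1, -1, n) % n

x = rho()
assert pow(g, x, p) == h
```

The parallelized version replaces Floyd's cycle finding with many independent walks that report distinguished points to a central server, which is what makes the computation distributable across heterogeneous hardware.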

Converting Pairing-Based Cryptosystems from Composite-Order Groups to Prime-Order Groups

We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In particular, we define a generalized version of the subgroup decision problem and give explicit constructions of bilinear groups in which the generalized subgroup decision assumption follows from the decision Diffie-Hellman assumption, the decision linear assumption, and/or related assumptions in prime-order groups.
We apply our framework and our prime-order group constructions to create more efficient versions of cryptosystems that originally required composite-order groups. Specifically, we consider the Boneh-Goh-Nissim encryption scheme, the Boneh-Sahai-Waters traitor tracing system, and the Katz-Sahai-Waters attribute-based encryption scheme. We give a security theorem for the prime-order group instantiation of each system, using assumptions of comparable complexity to those used in the composite-order setting. Our conversion of the last two systems to prime-order groups answers a problem posed by Groth and Sahai.

Covering Radius of Two-dimensional Lattices

The covering radius problem is not known to be solvable in nondeterministic polynomial time in arbitrary dimension. In this paper, we give a deterministic polynomial time algorithm for dimension two, by computing a reduced basis using Gauss' algorithm.
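
Gauss' two-dimensional reduction algorithm, on which such an approach relies, can be sketched as follows (the covering radius itself is then derived from the reduced basis; that derivation is omitted here):

```python
def gauss_reduce(u, v):
    """Gauss' 2D lattice basis reduction: returns a basis (u, v) of the same
    lattice with |u| <= |v| and v nearly orthogonal to u."""
    norm2 = lambda w: w[0] * w[0] + w[1] * w[1]   # squared Euclidean length
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract the integer multiple of u that shortens v the most.
        m = round(dot(u, v) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

# A skewed basis of Z^2 reduces to the standard basis:
assert gauss_reduce((1, 0), (7, 1)) == ((1, 0), (0, 1))
```

The loop terminates because the norms strictly decrease, and the shorter vector of the output is a shortest nonzero vector of the lattice, which is what makes the dimension-two case tractable.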