All papers in 2008 (Page 4 of 545 results)
Cryptanalysis of an Authentication Scheme Using Truncated Polynomials
An attack on a recently proposed authentication scheme of Shpilrain and Ushakov is presented. The public information allows the derivation of a system of polynomial equations for the secret key bits. Our attack uses simple elimination techniques to distill linear equations. For the proposed parameter choice, the attack often finds secret keys or alternative secret keys within minutes with moderate resources.
New balanced Boolean functions satisfying all the main cryptographic criteria
We study an infinite class of functions which provably achieve optimal algebraic immunity, optimal algebraic degree, and good nonlinearity. We have also checked that these functions behave well against fast algebraic attacks.
On the economic payoff of forensic systems when used to trace Counterfeited Software and content
We analyze how well forensic systems reduce counterfeiting of software and content. We make a few idealized assumptions and show that if the revenues of the producer before the introduction of forensics (Ro) are non-zero, then the payoff of forensics is independent of the overall market size and declines as the ratio between the penalty and the crime (both monetized) goes up, but that this behavior is reversed if Ro = 0. We also show that the payoff goes up as the ratio between the success probability with and without forensics grows; however, for typical parameters most of the payoff is already reached when this ratio is 5.
Last updated: 2008-11-02
Enumeration of Homogeneous Rotation Symmetric functions over GF(p)
Rotation symmetric functions have been used as components of different cryptosystems. This class of functions is invariant under circular translation of indices. In this paper, we carry out some enumeration of homogeneous rotation symmetric functions over GF(p), and we give a formula counting homogeneous rotation symmetric functions when the greatest common divisor of the number of input variables n and the degree d is a prime power, which solves the open problem in [7].
Practical Attacks on HB and HB+ Protocols
HB and HB+ are shared-key authentication protocols designed for low-cost devices such as RFID tags. They were proposed by Juels and Weis at Crypto 2005. The security of the protocols relies on the ``learning parity with noise'' (LPN) problem, which was proved to be NP-hard.
The best known attack on LPN (by Levieil and Fouque, SCN 2006) requires an exponential number of samples and an exponential number of operations. This makes the attack impractical, because it is infeasible to collect exponentially many observations of protocol executions.
We present a passive attack on the HB protocol which requires only a linear (in the length of the secret key) number of samples. The number of operations performed is still exponential, but the attack is efficient for some real-life parameter values, i.e. noise rate $\frac{1}{8}$ and key length $144$ bits.
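As a toy illustration (not the attack itself), the LPN samples a passive eavesdropper collects from HB executions can be modeled as noisy inner products of random challenges with the secret key; the function and variable names below are our own:

```python
import random

def lpn_sample(s, tau=0.125, rng=random):
    """One noisy HB-style sample: random challenge a, response <a,s> XOR e,
    where the noise bit e is 1 with probability tau (here 1/8)."""
    a = [rng.getrandbits(1) for _ in s]
    b = sum(ai & si for ai, si in zip(a, s)) & 1  # inner product over GF(2)
    if rng.random() < tau:
        b ^= 1  # Bernoulli noise bit
    return a, b

# A passive adversary observing m protocol executions gets m such (a, b)
# pairs; the attack in the paper needs only m = O(|s|) of them for |s| = 144.
key = [random.getrandbits(1) for _ in range(144)]
samples = [lpn_sample(key) for _ in range(288)]
```

Recovering `key` from such samples is exactly the LPN problem; with `tau = 0` it degenerates to plain Gaussian elimination.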
Leakage-Resilient Cryptography in the Standard Model
We construct a stream-cipher $\SC$ whose \emph{implementation} is
secure even if arbitrary (adversely chosen) information on the
internal state of $\SC$ is leaked during computation. This captures
\emph{all} possible side-channel attacks on $\SC$ where the amount
of information leaked in a given period is bounded, but overall can
be arbitrarily large, in particular much larger than the internal
state of $\SC$. The only other assumption we make on the
\emph{implementation} of $\SC$ is that only data that is accessed
during computation leaks information.
The construction can be based on any pseudorandom generator, and the
only computational assumption we make is that this PRG is secure
against non-uniform adversaries in the classical
sense (i.e. when there are no side-channels).
The stream-cipher $\SC$ generates its output in chunks
$K_1,K_2,\ldots$, and arbitrary but bounded information leakage is
modeled by allowing the adversary to adaptively choose a function
$f_\ell:\bin^*\rightarrow\bin^\lambda$ before $K_\ell$ is computed,
she then gets $f_\ell(\tau_\ell)$ where $\tau_\ell$ is the internal
state of $\SC$ that is accessed during the computation of
$K_\ell$.
One notion of security we prove for $\SC$ is that $K_\ell$
is indistinguishable from random when given $K_1,\ldots,K_{\ell-1}$,
$f_1(\tau_1),\ldots, f_{\ell-1}(\tau_{\ell-1})$ and also the complete
internal state of $\SC$ after $K_{\ell}$ has been computed
(i.e. our cipher is forward-secure).
The construction is based on alternating extraction (previously
used in the intrusion-resilient secret-sharing scheme from
FOCS'07). We move this concept to the computational setting by
proving a lemma that states that the output of any PRG has high HILL
pseudoentropy (i.e. is indistinguishable from some distribution with
high min-entropy) even if arbitrary information about the seed is
leaked. The amount of leakage $\leak$ that we can tolerate in each
step depends on the strength of the underlying PRG; it is at least
logarithmic, but can be as large as a constant fraction of the
internal state of $\SC$ if the PRG is exponentially hard.
Recognition in Ad Hoc Pervasive Networks
We examine the problem of message and entity recognition in the context of ad hoc networks. We review the definitions and the security model described in the literature and examine previous recognition protocols described in ABCLMN98, HWGW05, LZWW05, M03, and WW03. We prove that there is a one-to-one correspondence
between non-interactive message recognition protocols and digital
signature schemes. Hence, we concentrate on designing interactive recognition protocols.
We look at LZWW05 in more detail and suggest a variant to overcome a certain shortcoming. In particular, in case of communication failure or adversarial disruption, this protocol is not equipped with a practical resynchronization process and can fail to resume. We propose a variant of this protocol which is equipped with a resynchronization technique that allows users to resynchronize whenever they wish or when they suspect an intrusion.
On the Provable Security of Multi-Receiver Signcryption Schemes
In ATC 2007, an identity-based signcryption scheme for multiple receivers was proposed by Yu et al. In this paper, we first show that Yu et al.'s signcryption scheme is insecure by demonstrating a universal forgeability attack: anyone can generate a valid signcryption on any message on behalf of any legal user for any set of legal receivers without knowing the secret keys of the legal users. We also point out a subtle flaw in the proof of confidentiality given by Yu et al. and show that the scheme does not provide confidentiality. Further, we propose a corrected version of Yu et al.'s scheme and formally prove its security (confidentiality and unforgeability) under the existing security model for signcryption.
In another direction, Fagen Li et al. have proposed a pairing-based multi-recipient signcryption scheme which works in a public key infrastructure (PKI). We show that the scheme proposed by Fagen Li et al. is not adaptive chosen ciphertext secure. We propose a new PKI-based multi-receiver signcryption scheme and formally prove the confidentiality and unforgeability of the scheme. Since all the previously reported schemes are shown to have flaws either in this paper or elsewhere, the schemes reported in this paper are the only correct and efficient ones (both identity-based and PKI-based) for multi-receiver signcryption.
Local Affinity Based Inversion of Filter Generators
We propose a novel efficient cryptanalytic technique that allows an adversary to recover the initial state of a filter generator given its output sequence. The technique is applicable to filter generators possessing the local affinity property.
A Modular Security Analysis of the TLS Handshake Protocol
We study the security of the widely deployed Secure Sockets Layer/Transport Layer Security (SSL/TLS) key agreement protocol.
Our analysis identifies, justifies, and exploits the modularity
present in the design of the protocol:
the {\em application keys} offered to higher level applications are
obtained from a {\em master key}, which in turn is derived, through
interaction, from a {\em pre-master key}.
Our first contribution consists of formal models that clarify the
security level enjoyed by each of these types of keys. The models
that we provide fall under well-established paradigms for defining execution and security notions.
We capture the realistic setting where only one of the two parties
involved in the execution of the protocol (namely the server) has a
certified public key, and where the same master key is used to
generate multiple application keys.
The main contribution of the paper is a modular and generic proof of
security for the application keys established through the TLS
protocol.
We show that the transformation used by TLS to derive master keys
essentially transforms an {\em arbitrary} secure pre-master key
agreement protocol into a secure master-key agreement protocol.
Similarly, the transformation used to derive application keys works
when applied to an arbitrary secure master-key agreement protocol.
These results are in the random oracle model.
The security of the overall protocol then follows from proofs of
security for the basic pre-master key generation protocols employed by
TLS.
Constant-Round Concurrent Non-Malleable Commitments and Decommitments
In this paper we consider commitment schemes that are secure against concurrent poly-time man-in-the-middle (cMiM) attacks. Under such attacks, two possible notions of security for commitment schemes have been proposed in the literature: concurrent non-malleability with respect to commitment and concurrent non-malleability with respect to decommitment (i.e., opening).
After the original notion of non-malleability introduced by [Dolev, Dwork and Naor STOC 91] that is based on the independence of the committed and decommitted message, a new and stronger notion of non-malleability has been given in [Pass and Rosen STOC 05] by requiring that for any man-in-the-middle adversary there is a stand-alone adversary that succeeds with the same probability.
Under this stronger security notion, a constant-round commitment scheme that is concurrent non-malleable only with respect to commitment was given in [Pass and Rosen FOCS 05] for the plain model, thus leaving as an open problem the construction of constant-round concurrent non-malleable commitments with respect to decommitment. In other words, in [Pass and Rosen FOCS 05] security against adversaries that mount concurrent man-in-the-middle attacks is guaranteed only during the commitment phase (under their stronger notion of non-malleability).
The main result of this paper is a commitment scheme that is concurrent non-malleable with respect to both commitment and
decommitment, under the stronger notion of [Pass and Rosen STOC 05].
This property protects against cMiM attacks mounted during both commitments and decommitments, which is a crucial security requirement in several applications, such as some digital auctions, in which players have to perform both commitments and decommitments.
Our scheme uses a constant number of rounds of interaction in the
plain model and is the first scheme that enjoys all these properties
under the definitions of [Pass and Rosen FOCS 05].
We stress that, exactly as in [Pass and Rosen FOCS 05], we assume that commitments and decommitments are performed in two distinct phases that do not overlap in time.
On the CCA1-Security of Elgamal and Damgård's Elgamal
It is known that there exists a reduction from the CCA1-security of
Damgård's Elgamal (DEG) cryptosystem to what we call the $\DDH^{\DSDH}$ assumption. We show that $\DDH^{\DSDH}$ is unnecessary for DEG-CCA1, while DDH is insufficient for DEG-CCA1. We also show that CCA1-security of the Elgamal cryptosystem is equivalent to another assumption $\DDH^{\CSDH}$, while we show that $\DDH^{\DSDH}$ is insufficient for Elgamal's CCA1-security. Finally, we prove a generic-group model lower bound $\Omega (\sqrt[3]{q})$ for the hardest considered assumption $\DDH^{\CSDH}$, where $q$ is the largest prime factor of the group order.
Last updated: 2008-11-09
On Resettably-Sound Resettable Zero Knowledge Arguments
We study the simultaneous resettability problem, namely whether resettably-sound resettable ZK arguments for non-trivial languages exist (posed by Barak et al. [BGGL FOCS'01]), in both the plain model and the bare public-key (BPK for short) model. Under
general hardness assumptions, we show:
1. in the BPK model, there exist constant-round (full-fledged) resettably-sound resettable ZK arguments for NP.
This resolves a main problem in this model that had remained open since Micali and Reyzin's identification of notions of soundness [MR Crypto 2001] in the BPK model.
2. in the plain model, there exist constant-round (unbounded) resettably-sound class-bounded resettable ZK (as defined by Deng and Lin in [DL Eurocrypt 2007]) arguments for NP.
This improves the previous result of Deng and Lin [Eurocrypt 2007] in that the DL construction for class-bounded resettable ZK argument achieves only a weak notion of resettable-soundness.
The crux of these results is a construction of constant-round instance-dependent (full-fledged) resettably-sound resettable WI argument of knowledge (IDWIAOK for short) for any NP statement of the form x_0\in L_0 or x_1\in L_1, a notion also introduced by Deng and Lin [Eurocrypt 2007], whose construction, however, obtains only weak resettable-soundness when x_0\notin L_0.
Our approach to the simultaneous resettability problem in the BPK model is to make a novel use of IDWIAOK, which gives rise to an elegant structure we call \Sigma-puzzles. Given the fact that all previously known resettable ZK arguments in the BPK model can be achieved in the
plain model when ignoring round complexity, we believe this approach will shed light on the simultaneous resettability problem in the plain model.
Perfectly Secure Message Transmission Tolerating Mixed Adversary
In this paper, we study the issues related to the possibility, feasibility and optimality for perfectly secure message transmission (PSMT) in an undirected synchronous network, under the influence of a mixed adversary having unbounded computing power, who can corrupt some of the nodes in the network in Byzantine, fail-stop and passive fashion respectively. Specifically, we answer the following questions: (a) Possibility: Given a network and a mixed adversary, what are the necessary and sufficient conditions for the existence of any PSMT protocol over the network tolerating the adversary?
(b) Feasibility: Once the existence of a protocol is ensured, does there exist an efficient, polynomial-time protocol on the given network? (c) Optimality: Given a message of a specific length, what is the minimum communication complexity (lower bound) needed by any PSMT protocol to transmit the message, and how do we design a polynomial-time protocol whose total communication complexity matches this lower bound? We answer the above questions by considering two different types of mixed adversary, namely the static mixed adversary and the mobile mixed adversary. Intuitively, it is more difficult to tolerate a mobile mixed adversary (who can corrupt different sets of nodes during different stages of the protocol) than its static counterpart (who corrupts the same set of nodes throughout the protocol). However, surprisingly, we show that the connectivity requirement on the network and the lower bound on the communication complexity of PSMT protocols are the same against both static and mobile mixed adversaries. To design our protocols against static and mobile mixed adversaries, we use several new techniques, which are of independent interest.
A Novel Probabilistic Passive Attack on the Protocols HB and HB+
We present a very simple probabilistic, passive attack against the protocols HB and HB+. Our attack has some interesting features: it requires fewer captured transcripts of protocol executions than previous results; it makes it possible to trade the number of required transcripts for computational complexity; and the noise value used in the protocols HB and HB+ need not be known.
A New Collision Differential For MD5 With Its Full Differential Path
Since the first collision differential with its full differential path was presented for the MD5 function by Wang et al. in 2004, renewed interest in collision attacks on the MD family of hash functions has surged in the cryptologic community. To date, however, no cryptanalyst has given a second computationally feasible collision differential for MD5 with its full differential path; not even improved differential paths based on Wang's MD5 collision differential have appeared in the literature. First, this paper defines a new kind of differential cryptanalysis, called signed difference, proposes some principles and recipes for finding collision differentials and designing differential paths, and derives the signed-difference generation and elimination rules that are implicit in the auxiliary functions. Then, based on these newly found properties and rules, the paper presents a new computationally feasible collision differential for MD5 with its full differential path, which is simpler and thus more understandable than Wang's, and a set of sufficient conditions accounting for carries that guarantees a full collision is derived from the full differential path. Finally, a fast collision attack algorithm based on multi-message modification is specialized for the full differential path, resulting in computational complexities of 2^36 and 2^32 MD5 operations for the first and second blocks, respectively. As examples, two collision message pairs with different first blocks are given.
Identification and Privacy: Zero-Knowledge is not Enough
At first glance, privacy and zero-knowledgeness seem to be similar properties. A scheme is private when no information about the prover is revealed, and in a zero-knowledge scheme, communications should not leak the prover's secrets.
Until recently, privacy threats were only partially formalized and some zero-knowledge (ZK) schemes have been proposed so far to ensure privacy. We here explain why the intended goal is not reached. Following the privacy model proposed by Vaudenay at Asiacrypt 2007, we then reconsider the analysis of these schemes and thereafter introduce a general framework to modify identification schemes leading to different levels of privacy. Our new protocols can be useful, for instance, for identity documents, where privacy is a great issue.
Furthermore, we propose efficient implementations of zero-knowledge and private identification schemes based on modifications of the GPS scheme. The security and the privacy are based on a new problem: the Short Exponent Strong Diffie-Hellman (SESDH) problem. The hardness of this problem is related to the hardness of the Strong Diffie-Hellman (SDH) problem and to the hardness of the Discrete Logarithm with Short Exponent (DLSE) problem. The security and privacy of these new schemes are proved in the random oracle paradigm.
Revisiting Wiener's Attack -- New Weak Keys in RSA
In this paper we revisit Wiener's method (IEEE-IT 1990) of continued fractions (CF) to find new weaknesses in RSA. We consider RSA with $N = pq$, $q < p < 2q$, public encryption exponent $e$ and private decryption exponent $d$. Our motivation is to find out when RSA is insecure given $d$ is $O(N^\delta)$, where we are mostly interested in the range $0.3 \leq \delta \leq 0.5$. Given $\rho$ $(1 \leq \rho \leq 2)$ is known to the attacker, we show that the RSA keys are weak when $d = N^\delta$ and $\delta < \frac{1}{2} - \frac{\gamma}{2}$, where $|\rho q - p| \leq \frac{N^\gamma}{16}$. This presents additional results over the work of de Weger (AAECC 2002). We also discuss how the lattice-based idea of Boneh-Durfee (IEEE-IT 2000) works better to find weak keys beyond the bound $\delta < \frac{1}{2} - \frac{\gamma}{2}$. Further, we show that the RSA keys are weak when $d < \frac{1}{2} N^\delta$ and $e$ is $O(N^{\frac{3}{2}-2\delta})$ for $\delta \leq \frac{1}{2}$. Using similar techniques we also present new results over the work of Blömer and May (PKC 2004).
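As background for the CF method being extended here, a minimal sketch of the classic Wiener attack (walking the convergents of $e/N$ and testing each denominator as a candidate for the small private exponent $d$) might look as follows; the variable names are ours:

```python
from math import isqrt

def wiener_attack(e, n):
    """Toy Wiener attack: for each continued-fraction convergent k/d of
    e/n, derive a candidate phi(n) from e*d - 1 = k*phi and check whether
    it factors n; if so, d is the private exponent."""
    h0, h1 = 0, 1   # convergent numerators   (candidate k)
    k0, k1 = 1, 0   # convergent denominators (candidate d)
    num, den = e, n
    while den:
        q, (num, den) = num // den, (den, num % den)
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        k, d = h1, k1
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k          # candidate phi(n)
        s = n - phi + 1                 # = p + q if phi is correct
        disc = s * s - 4 * n            # (p+q)^2 - 4pq = (p-q)^2
        if disc >= 0:
            r = isqrt(disc)
            if r * r == disc and (s + r) % 2 == 0:
                return d                # n = pq factors, so d is correct
    return None
```

For instance, on the textbook instance n = 90581 = 239 * 379 with e = 17993, the attack recovers d = 5 from the second convergent.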
New Impossible Differential Cryptanalysis of ARIA
This paper studies the security of ARIA against impossible differential cryptanalysis. First, an algorithm is given to find many new 4-round impossible differentials of ARIA. Using such impossible differentials, we improve the previous impossible differential attacks on 5/6-round ARIA. We also point out that the existence of such impossible differentials is due to undesirable properties of the binary matrix employed in the diffusion layer.
Proxy Key Re-encapsulation Mechanism for Group Communications
Many practical applications use a hybrid encryption mechanism to deal with large plaintext messages or real-time communication, since the performance of public key encryption schemes is poor. Key encapsulation is a crucial part of the hybrid encryption mechanism; it allows a sender to generate a random session key and distribute it to the recipient. In this paper we present a proxy key re-encapsulation scheme for group communication. The proxy in our scheme is allowed to transform an encapsulated message corresponding to group A's public key into one that can be decapsulated by the members of group B. It can be used in cases where a group's users need to perform a sensitive operation without holding the necessary secret key.
Provably Secure ID-Based Broadcast Signcryption (IBBSC) Scheme
With the advent of mobile and portable devices such as cell phones and PDAs, wireless content distribution has become a major means of communications and entertainment. In such applications, a central authority needs to deliver encrypted data to a large number of recipients in such a way that only a privileged subset of users can decrypt it. A broadcasting news channel may face this problem, for example, when a large number of people subscribe to a daily exclusive news feature. This is exactly the kind of problem that \textit{broadcast encryption} attempts to efficiently solve. On top of this, especially in the current digital era, junk content or spam is a major turn off in almost every Internet application. If all the users who subscribe to the news feed receive meaningless noise or any unwanted content, then the broadcaster is going to lose them. This results in the additional requirement that subscribers have source authentication with respect to their broadcaster. \textit{Broadcast signcryption}, which enables the broadcaster to simultaneously encrypt and sign the content meant for a specific set of users in a single logical step, provides the most efficient solution to the dual problem of confidentiality and authentication. Efficiency is a major concern, because mobile devices have limited memory and computational power and wireless bandwidth is an extremely costly resource. While several alternatives exist in implementing broadcast signcryption schemes, identity-based (ID-based) schemes are arguably the best suited because of the unique advantage that they provide --- any unique, publicly available parameter of a user can be his public key, which eliminates the need for a complex public key infrastructure. In ASIAN 2004, Mu et al. \cite{MSLR04} propose what they call an ID-based authenticated broadcast encryption scheme, which is also a broadcast signcryption scheme, as the security goals are the same. 
They claim that their scheme provides message authentication and confidentiality and formally prove that the broadcaster's secret is not compromised, but in this paper, we demonstrate that even without knowing the broadcaster's secret, it is possible for a legal user to impersonate the broadcaster. We demonstrate this by mounting a universal forgeability attack --- any valid user, on receiving and decrypting a valid ciphertext from a broadcaster, can generate a valid ciphertext on any message on behalf of that broadcaster for the same set of legal receivers to which the broadcaster signcrypted the earlier message, without knowing any secrets. Following this, we propose a new ID-based broadcast signcryption (IBBSC) scheme, and formally prove its security under the strongest existing security models for broadcast signcryption (IND-CCA2 and EUF-CMA2).
An ID-based Authenticated Key Exchange Protocol Based on Bilinear Diffie-Hellman Problem
In this paper, we present a new ID-based two-party authenticated key exchange (AKE) protocol, which makes use of a new technique called the twin Diffie-Hellman problem, proposed by Cash, Kiltz and Shoup. We show that our scheme is secure under the bilinear Diffie-Hellman (BDH) assumption in the enhanced Canetti-Krawczyk (eCK) model, which better supports the adversary's queries than previous AKE models. To the best of our knowledge, our scheme is the \emph{first} ID-based AKE protocol provably secure in the eCK model.
On the Security of a Visual Cryptography Scheme for Color Images
In Pattern Recognition, vol. 36, 2003, Hou proposed a four-share visual cryptography scheme for color images. The scheme splits a secret image into four shares: the black mask and three other shares. It was claimed that without knowing the black mask, no information about the secret image can be obtained even if all the other three shares are known. In this paper, we show that this may be true only for a few specific two-color secret images. In all other cases, however, security cannot be guaranteed. We show that an attacker can compromise a randomly chosen two-color secret image from any two of the other three shares with probability 4/7. The advantage increases to 6/7 if all the other three shares are known. If the secret image has three or four colors, we show that the attacker can compromise it with probability 4/7 and 8/35, respectively. Finally, we show that our technique can be extended to compromising secret images with more than four colors.
Encryption-On-Demand: Practical and Theoretical Considerations
Alice and Bob may develop a spontaneous, yet infrequent, need for online confidential exchange. They may be served by an 'encryption-on-demand' (EoD) service which enables them to communicate securely with no prior preparations and no after-effects. We delineate a possible EoD service and describe some of its theoretical and practical features. The proposed framework is a website which tailor-makes an encryption package to be downloaded by both Alice and Bob for their ad-hoc use. The downloaded package includes the cryptographic algorithm and a unique key, which may be of any size; since Alice and Bob do not have to enter or even see the key per se, they simply use the downloaded page to encrypt and decrypt their data. After their secure exchange, both Alice and Bob may ignore or discard the downloaded software and restart the same procedure with a different tailor-made package exactly when needed. This framework allows for greater flexibility in managing the complexity aspects that ensure security. Alice and Bob do not have to know which encryption scheme they use. The server-based tailoring program could pseudo-randomly pick AES, DES, RSA, or ECC, select a short or long key, and otherwise greatly increase the variability that would have to be negotiated by a cryptanalyst. Encryption-on-demand is offered at http://youdeny.com . Features are described.
Efficient Conversion of Secret-shared Values Between Different Fields
We show how to efficiently convert a secret-shared bit $b$ over a prime field to another field. If initially given a random replicated secret sharing, this conversion can be done at the cost of revealing one secret-shared value. By using a pseudo-random function, it is possible to convert arbitrarily many bit values from one initial random replicated sharing. Furthermore, we generalize the conversion to handle general values of bounded size.
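The general flavor of such a conversion can be illustrated with a simplified sketch (additive shares rather than the paper's replicated sharing; field sizes and names are ours): mask the bit with a random bit r that is pre-shared in both fields, open the masked bit, then unmask locally in the target field.

```python
import secrets

P, Q = 101, 257  # source and target prime fields (illustrative sizes)

def share(x, mod, parties=3):
    """Additively share x among `parties` players modulo `mod`."""
    shares = [secrets.randbelow(mod) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % mod)
    return shares

def reveal(shares, mod):
    return sum(shares) % mod

b = 1                        # the secret bit, shared over F_P
r = secrets.randbelow(2)     # random bit, pre-shared in BOTH fields
b_P, r_P, r_Q = share(b, P), share(r, P), share(r, Q)

# The parties open the masked bit c = b XOR r; over F_P this equals
# b + r - 2br, computable from the shares of b and r (opening cost:
# one secret-shared value, as in the abstract).
c = b ^ r                    # public after opening

# Unmasking is affine in r once c is public: b = c + (1 - 2c) * r,
# so each party converts its F_Q-share of r locally into a share of b.
b_Q = [((c if i == 0 else 0) + (1 - 2 * c) * ri) % Q
       for i, ri in enumerate(r_Q)]
assert reveal(b_Q, Q) == b   # b is now shared over F_Q
```

The key point the sketch shows is that once the mask is opened, no further interaction is needed: the conversion into the new field is a local, affine operation on the shares of r.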
Essentially Optimal Universally Composable Oblivious Transfer
Oblivious transfer is one of the most important cryptographic primitives, for both theoretical and practical reasons, and several protocols have been proposed over the years. We provide the first oblivious transfer protocol which is simultaneously optimal on the following list of parameters:
Security: it has universal composition.
Trust in setup assumptions: only one of the parties needs to trust the setup and some setup is needed for UC security.
Trust in computational assumptions: only one of the parties needs to trust a computational assumption.
Round complexity: it uses only two rounds.
Communication complexity: it communicates O(1) group elements to transfer one out of two group elements.
The Big-O notation hides a constant of 32, meaning that the communication is probably not optimal, but it is essentially optimal in the sense that the overhead is only a constant factor.
Our construction is based on pairings, and we assume the presence of a key registration authority.
Last updated: 2008-09-25
Analysis and Details of the Random Cipher Output Mode Of Operation Primitives
Consider that hardware and software attack technologies seem to be advancing at an exponential pace. Should it be acceptable to believe that all of the current modes of operation (MOOs) will still be 100% safe from technological attacks 5 to 30 years or more into the future? Predictions about DES's security when it was first developed proved to be wrong; with the volume of information and data being protected by current MOOs, the security industry cannot afford to be wrong again. This is not to say that just because the experts were wrong about DES they are wrong now; they never had, and do not have, perfect vision into what will develop in the security-attack technology arena. Suppose some 'brainiac' teenager devises a sophisticated attack technology that no one thought of, and one or more of the accepted MOOs is broken; then we will all be racing to recover. With these potential advances in hardware and attack technology could come a time when none of the currently accepted modes of operation is safe from attack. We ought to consider not designing ciphers that are ever more complex, as this will just escalate the 'leap-frog' race between cipher developers and attackers. The problem isn't the complexity; the mathematical connection between a plaintext/ciphertext pair and a single key needs to be expanded to connections with multiple keys. This mode of operation is presented as one potential solution to be considered against this potential problem, attempting a solution along this path. The proposal does not involve any new cipher engine technology.
Efficient arithmetic on elliptic curves using a mixed Edwards-Montgomery representation
From the viewpoint of x-coordinate-only arithmetic on elliptic curves, switching between the Edwards model and the Montgomery model is essentially cost-free. We use this observation to speed up Montgomery's algorithm, reducing the cost of a doubling step from 2M + 2S to 1M + 3S for suitably chosen curve parameters.
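For reference, the 2M + 2S baseline being improved here is the standard x-only doubling on a Montgomery curve $By^2 = x^3 + Ax^2 + x$ in projective (X:Z) coordinates. A sketch (the multiplication by the small constant a24 = (A+2)/4 is conventionally not counted):

```python
def xdbl(X1, Z1, a24, p):
    """x-only doubling on a Montgomery curve, projective (X:Z) form.
    Cost: 2 squarings + 2 multiplications (plus one mult by small a24)."""
    t1 = (X1 + Z1) * (X1 + Z1) % p      # S: (X1+Z1)^2
    t2 = (X1 - Z1) * (X1 - Z1) % p      # S: (X1-Z1)^2
    X2 = t1 * t2 % p                    # M
    t3 = (t1 - t2) % p                  # = 4*X1*Z1, obtained for free
    Z2 = t3 * (t2 + a24 * t3) % p       # M (+ mult by a24)
    return X2, Z2

# Toy check on y^2 = x^3 + 6x^2 + x over F_101 (so a24 = (6+2)/4 = 2):
# doubling the point with affine x = 3 gives the point with x = 5,
# matching the affine doubling formula.
X2, Z2 = xdbl(3, 1, 2, 101)
assert X2 * pow(Z2, -1, 101) % 101 == 5
```

The paper's observation is that moving this step through the Edwards model trades one of the two multiplications for a squaring, giving 1M + 3S; squarings are typically cheaper than multiplications, so the trade pays off.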
Oracle-Assisted Static Diffie-Hellman Is Easier Than Discrete Logarithms
This paper extends Joux-Naccache-Thomé's $e$-th root algorithm to the static Diffie-Hellman problem ({\sc sdhp}).
The new algorithm can be adapted to diverse finite fields by customizing it with an {\sc nfs}-like core or an {\sc ffs}-like
core.
In both cases, after a number of {\sc sdhp} oracle queries, the attacker builds up the ability to solve new {\sc sdhp} instances {\sl unknown before the query phase}.
While sub-exponential, the algorithm is still significantly faster than all currently known {\sc dlp} and {\sc sdhp} resolution methods.
We explore the applicability of the technique to various cryptosystems.
The attacks were implemented in ${\mathbb F}_{2^{1025}}$ and also in ${\mathbb F}_{p}$, for a $516$-bit $p$.
A New Multi-Linear Universal Hash Family
A new universal hash family is described. Messages are sequences over a finite field $\mathbb{F}_q$ while
keys are sequences over an extension field $\mathbb{F}_{q^n}$. A linear map $\psi$ from $\mathbb{F}_{q^n}$ to itself
is used to compute the output digest. Of special interest is the case $q=2$. For this case, we show
that there is an efficient way to implement $\psi$ using a tower field representation of $\mathbb{F}_{q^n}$.
From a practical point of view, the focus of our constructions is small
hardware and other resource constrained applications. For such platforms, our constructions
compare favourably to previous work.
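The multi-linear construction underlying such hash families is easy to illustrate. The sketch below uses a small prime field as a stand-in for the paper's tower fields over GF(2) (which need binary extension-field arithmetic); the universality argument is the same: for distinct messages, the digests collide for exactly a 1/p fraction of keys.

```python
# Multi-linear universal hash over a prime field (illustrative stand-in for
# the binary tower fields used in the paper): h(m; k) = sum_i m_i * k_i mod p.
from itertools import product

p = 7  # tiny illustrative prime so the full key space can be enumerated

def mlhash(msg, key):
    assert len(msg) == len(key)
    return sum(m * k for m, k in zip(msg, key)) % p

# Universality check: for two fixed distinct messages, count colliding keys.
m1, m2 = (1, 2, 3), (1, 5, 3)
collisions = sum(
    1 for key in product(range(p), repeat=3)
    if mlhash(m1, key) == mlhash(m2, key)
)
# A key collides iff sum_i (m1_i - m2_i) * k_i = 0; since m1 != m2 this is a
# nontrivial linear condition, satisfied by exactly p^(n-1) of the p^n keys.
assert collisions == p ** 2   # collision probability exactly 1/p
```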
On Implementation of GHS Attack against Elliptic Curve Cryptosystems over Cubic Extension Fields of Odd Characteristics
In this paper, we present algorithms for implementing the GHS attack on elliptic curve cryptosystems (ECC). In particular, we consider two large classes of elliptic curves over cubic extension fields of odd characteristic which have weak covering curves against the GHS attack, whose existence has been shown recently.
We give an algorithm to find a defining equation of the covering curve and an algorithm to transfer the DLP on the elliptic curve to the Jacobian of the covering curve. An algorithm to test whether the covering curve is hyperelliptic is also given in the appendix.
Multi-Factor Password-Authenticated Key Exchange
We consider a new form of authenticated key exchange which we call multi-factor password-authenticated key exchange, where session establishment depends on successful authentication of multiple short secrets that are complementary in nature, such as a long-term password and a one-time response, allowing the client and server to be mutually assured of each other's identity without directly disclosing private information to the other party.
Multi-factor authentication can provide an enhanced level of assurance in higher security scenarios such as online banking, virtual private network access, and physical access because a multi-factor protocol is designed to remain secure even if all but one of the factors has been compromised.
We introduce the first formal security model for multi-factor password-authenticated key exchange protocols, propose an efficient and secure protocol called MFPAK, and provide a formal argument to show that our protocol is secure in this model. Our security model is an extension of the Bellare-Pointcheval-Rogaway security model for password-authenticated key exchange and the formal analysis proceeds in the random oracle model.
The Multireceiver Commitment Schemes
Existing commitment schemes were addressed in the classic two-party scenario. However,
the popularity of secure multi-party computation in today's flourishing network communication
motivates more sophisticated commitment schemes. In this paper, we study for
the first time multireceiver commitment in the unconditionally secure setting, i.e., one
committer promises a group of verifiers a common secret value (in the computational setting this is
trivial). We extend the Rivest model for this purpose and present a provably secure generic
construction using multireceiver authentication codes (without secrecy) as a building
block. Two concrete schemes are proposed as immediate implementations, which are
almost as efficient as an optimal MRA-code. Furthermore, to affirmatively answer the open
question of Pinto, Souto, Matos and Antunes, we also present a generic construction (for
the two-party case) using only an A-code with secrecy. Finally, we show the possibility of
constructing multireceiver commitment schemes from other primitives such as verifiable
secret sharing. We leave some problems open and believe this work will open doors for more interesting research.
Reducing the Complexity of the Weil Pairing Computation
In this paper, we present some new variants based on the Weil pairing for efficient pairing computations. The new pairing variants have the short Miller iteration loop and simple final exponentiation. We then show that computing the proposed pairings is more efficient than computing the Weil pairing. Experimental results for these pairings are also given.
Efficient Chosen Ciphertext Secure Public Key Encryption under the Computational Diffie-Hellman Assumption
Recently Cash, Kiltz, and Shoup showed a variant of the Cramer-Shoup (CS) public key encryption (PKE) scheme whose chosen-ciphertext (CCA) security relies on the computational Diffie-Hellman (CDH) assumption. The cost of this high security is that the size of ciphertexts is much longer than in the CS scheme. In this paper, we show how to achieve CCA security under the CDH assumption without increasing the size of ciphertexts. We further show a more efficient scheme under the hashed Diffie-Hellman (HDH) assumption such that the size of ciphertexts is the same as that of the Kurosawa-Desmedt (KD) scheme. Note that the CDH and HDH assumptions are weaker than the decisional Diffie-Hellman assumption on which the CS and KD schemes rely. Both of our schemes are based on a certain broadcast encryption (BE) scheme, while the Cash-Kiltz-Shoup scheme is based on a different paradigm called the Twin DH problem.
As an independent interest, we also show a generic method of constructing CCA-secure PKE schemes from BE schemes such that the existing CCA-secure constructions can be viewed as special cases.
Complexity Analysis of a Fast Modular Multiexponentiation Algorithm
Recently, a fast modular multiexponentiation algorithm for computing A^X B^Y (mod N) was proposed. The authors claimed that on average their algorithm requires only 1.306k modular multiplications (MMs), where k is the bit length of the exponents. This claimed performance is significantly better than that of all comparable algorithms, the best of which requires 1.503k MMs. In this paper, we give a formal complexity analysis and show that the claimed performance does not hold. The actual computational complexity of the algorithm is 1.556k MMs. This means that the best modular multiexponentiation algorithms based on the canonical signed-digit technique are still unable to break the 1.5k barrier.
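For comparison, the textbook baseline for computing A^X B^Y (mod N) is simultaneous square-and-multiply (Shamir's trick), which costs about 1.75k MMs on average with one precomputed product. The sketch below is that standard baseline with an MM counter, not the 1.306k/1.556k algorithm analyzed in the abstract:

```python
# Shamir's trick (simultaneous square-and-multiply) for A^X * B^Y mod N.
# Textbook baseline, about 1.75k modular multiplications (MMs) on average;
# not the algorithm analyzed in the paper.
def multiexp(A, B, X, Y, N):
    AB = A * B % N                     # one precomputed product (1 MM)
    result, mms = 1, 1
    for i in range(max(X.bit_length(), Y.bit_length()) - 1, -1, -1):
        result = result * result % N   # square (1 MM per bit)
        mms += 1
        bits = ((X >> i) & 1, (Y >> i) & 1)
        if bits != (0, 0):             # multiply by A, B, or AB (3/4 of bits)
            factor = {(1, 0): A, (0, 1): B, (1, 1): AB}[bits]
            result = result * factor % N
            mms += 1
    return result, mms

A, B, X, Y, N = 1234, 5678, 0xDEADBEEF, 0xCAFEBABE, 10**9 + 7
r, mms = multiexp(A, B, X, Y, N)
assert r == pow(A, X, N) * pow(B, Y, N) % N
```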
Computing Bilinear Pairings on Elliptic Curves with Automorphisms
In this paper, we present a novel method for constructing a
super-optimal pairing with great efficiency, which we call the omega
pairing. The computation of the omega pairing requires only a simple
final exponentiation and a short loop length in Miller's algorithm,
which leads to a significant improvement over previously known
techniques on certain pairing-friendly curves. Experimental results
show that on such curves, at the AES 80-bit security level, the omega
pairing is about 22% faster than the super-optimal pairing proposed
by Scott in affine coordinates and about 19% faster in projective
coordinates.
Remarks on the Attack of Fouque et al. against the {\ell}IC Scheme
In 2007, $\ell$-Invertible Cycles ($\ell$IC) was proposed by Ding et al.
It is one of the most efficient trapdoors for encryption/signature
schemes, and is of the mixed-field type among multivariate quadratic
public-key cryptosystems.
Such schemes are suitable for implementation on low-cost smart cards or PDAs.
In 2008, Fouque et al. proposed an efficient attack against the $\ell$IC signature scheme using Gröbner basis algorithms.
However, they explicitly dealt only with the odd case, i.e. odd $\ell$, and not the even case; they implemented their attack only in the odd case.
In this paper, we propose another practical attack against the $\ell$IC encryption/signature scheme.
Our attack does not employ Gröbner basis algorithms,
and applies to both the even and odd cases.
We demonstrate the efficiency of the attack with experimental results.
Furthermore, the attack also applies to the $\ell$IC- scheme.
To the best of our knowledge, we are the first to give experimental results for a practical attack against the $\ell$IC- scheme in the even case.
Efficient Receipt-Free Ballot Casting Resistant to Covert Channels
We present an efficient, covert-channel-resistant, receipt-free ballot casting scheme that can be used by humans without trusted hardware. In comparison to the recent Moran-Naor proposal, our scheme produces a significantly shorter ballot, prevents covert channels in the ballot, and opts for statistical soundness rather than everlasting privacy (achieving both seems impossible). The human interface remains the same, based on Neff's MarkPledge scheme, and requires of the voter only short-string operations.
Partial Fairness in Secure Two-Party Computation
A seminal result of Cleve (STOC '86) is that, in general, complete fairness is impossible to achieve in two-party computation. In light of this, various techniques for obtaining partial fairness have been suggested in the literature. We propose a definition of partial fairness within the standard real-/ideal-world paradigm that addresses deficiencies of prior definitions. We also show broad feasibility results with respect to our definition: partial fairness is possible for any (randomized) functionality $f:X \times Y \rightarrow Z^1 \times Z^2$ at least one of whose domains or ranges is polynomial in size. Our protocols are always private, and when one of the domains has polynomial size our protocols also simultaneously achieve the usual notion of security with abort. In contrast to some prior work, we rely on standard assumptions only.
We also show that, as far as general feasibility is concerned, our results are optimal. Specifically, there exist functions with super-polynomial domains and ranges for which it is impossible to achieve our definition.
On Software Parallel Implementation of Cryptographic Pairings
A significant amount of research has focused on methods to improve the
efficiency of cryptographic pairings; in part this work is motivated by the wide range of applications for such primitives. Although numerous hardware accelerators for pairing evaluation have used parallelism within extension field arithmetic to improve efficiency, similar techniques have not been examined in software thus far. In this paper we focus on parallelism within one pairing evaluation (intra-pairing), and parallelism between different pairing evaluations (inter-pairing). We identify several methods for exploiting such parallelism (extending previous results in the context of ECC) and show that it is possible to accelerate pairing evaluation by a significant factor in comparison to a naive approach.
Cryptanalysis of the Cai-Cusick Lattice-based Public-key Cryptosystem
In 1998, Cai and Cusick proposed a lattice-based public-key
cryptosystem based on ideas similar to those of the Ajtai-Dwork
cryptosystem, but with much less data expansion. However, they
did not give any security proof. In this paper, we present an
efficient ciphertext-only attack which runs in polynomial time
against the cryptosystem to recover the message, so the Cai-Cusick
lattice-based public-key cryptosystem is not secure. We also present
two chosen-ciphertext attacks to obtain an equivalent private key
which acts as the real private key.
Privacy-Preserving Matching of DNA Profiles
In recent years, DNA sequencing techniques have advanced to the
point that DNA identification and paternity testing have become almost
a commodity. Due to the critical nature of DNA-related data, this
raises substantial privacy issues. In this paper, we introduce
cryptographic privacy-enhancing protocols that allow one to perform the
most common DNA-based identity, paternity and ancestry tests
and thus implement privacy-enhanced online
genealogy services or research projects. In the semi-honest attacker
model, the protocols guarantee that no sensitive information about the
involved DNA is exposed, and they are resilient against common forms of
measurement errors during DNA sequencing. The protocols are practical
and efficient, both in terms of communication and computation complexity.
Polynomials for Ate Pairing and $\mathbf{Ate}_{i}$ Pairing
The irreducible factor $r(x)$ of $\mathrm{\Phi}_{k}(u(x))$ and $u(x)$
are often used in constructing pairing-friendly curves. $u(x)$ and
$u_{c} \equiv u(x)^{c} \pmod{r(x)}$ are selected as the Miller
loop control polynomials in the Ate pairing and the $\mathrm{Ate}_{i}$
pairing. In this paper we show that when $4 \mid k$, or when the smallest
prime dividing $k$ is larger than $2$, some $u(x)$ and $r(x)$ cannot
be used as curve generation parameters if we want the $\mathrm{Ate}_{i}$
pairing to be efficient. We also show that the Miller loop length
cannot reach the bound $\frac{\log_{2} r}{\varphi(k)}$ when
we use the factorization of $\mathrm{\Phi}_{k}(u(x))$ to generate
elliptic curves.
How To Ensure Forward and Backward Untraceability of RFID Identification Schemes By Using A Robust PRBG
In this paper, we analyze an RFID identification scheme which is designed to provide forward untraceability and backward untraceability. We show that if a standard cryptographic pseudorandom bit generator (PRBG) is used in the scheme, then the scheme may fail to provide
forward untraceability and backward untraceability. To achieve the desired untraceability features, the scheme can use a robust PRBG which provides forward security and backward security. We also note that the backward security is stronger than necessary for the backward untraceability of the scheme.
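The robust PRBG the authors call for can be approximated by a standard hash-chain construction: the state evolves one-way, so compromising the current state reveals nothing about earlier outputs. A minimal, illustrative sketch (a generic construction, not the specific generator from the paper):

```python
# A minimal forward-secure PRBG sketch: the state evolves through a one-way
# hash, so a compromised current state does not reveal earlier outputs.
# Generic illustrative construction, not the specific generator in the paper.
import hashlib

class ForwardSecurePRBG:
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"init" + seed).digest()

    def next_block(self) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        # One-way state update: past states cannot be recomputed from the
        # new state, which protects past outputs after a compromise.
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out

g1 = ForwardSecurePRBG(b"tag-secret")
g2 = ForwardSecurePRBG(b"tag-secret")
a, b = g1.next_block(), g1.next_block()
assert a != b                                          # outputs differ per step
assert (a, b) == (g2.next_block(), g2.next_block())    # deterministic in seed
```

Backward security (future outputs stay unpredictable even after a state leak) needs fresh entropy injected into the state, which this pure hash chain does not provide.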
On The Security of The ElGamal Encryption Scheme and Damgard’s Variant
In this paper, we give security proofs for the ElGamal encryption scheme and its variant by Damgard (DEG). For ElGamal encryption, we show that (1) under the delayed-target discrete log assumption and a variant of the generalized knowledge-of-exponent assumption, ElGamal encryption is one-way under non-adaptive chosen-ciphertext attacks; (2) one-wayness of ElGamal encryption under non-adaptive chosen-ciphertext attacks is equivalent to the hardness of the delayed-target computational Diffie-Hellman problem. For DEG, (1) we give a new proof that DEG is semantically secure against non-adaptive chosen-ciphertext attacks under the delayed-target decisional Diffie-Hellman assumption (although the same result has appeared in the literature before, our proof seems simpler); (2) we show that the DHK1 assumption, which was first proposed for the DEG security proof, is stronger than necessary: a decisional (and thus weaker) version of the DHK1 assumption suffices.
Simultaneous field divisions: an extension of Montgomery's trick
Montgomery's trick is a technique which can be used to compute multiple field inversions simultaneously. We extend this technique to simultaneous field divisions (that is, combinations of field multiplications and field inversions). In some fields, the generalized Montgomery's trick is faster than a simple inversion with Montgomery's trick followed by a field multiplication.
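Montgomery's trick itself is simple: accumulate prefix products, invert once, then peel off individual inverses. The sketch below does the division variant directly, computing all a_i/b_i with a single field inversion; it is one plausible reading of the extension described, and the paper's actual generalization may differ in detail:

```python
# Montgomery's trick: compute all a_i / b_i in F_p with a single inversion.
p = 10**9 + 7  # illustrative prime field

def batch_divide(nums, dens):
    n = len(dens)
    prefix = [1] * (n + 1)
    for i, d in enumerate(dens):           # prefix[i+1] = b_0 * ... * b_i
        prefix[i + 1] = prefix[i] * d % p
    inv = pow(prefix[n], p - 2, p)         # the single field inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):         # peel off one denominator at a time
        # inv currently equals (b_0 * ... * b_i)^(-1), so
        # nums[i] * prefix[i] * inv = a_i / b_i.
        out[i] = nums[i] * prefix[i] % p * inv % p
        inv = inv * dens[i] % p
    return out

nums = [3, 14, 15, 92, 65]
dens = [2, 7, 18, 28, 182]
assert batch_divide(nums, dens) == [a * pow(b, p - 2, p) % p
                                    for a, b in zip(nums, dens)]
```

For n divisions this uses one inversion and about 4n multiplications, versus n inversions for the naive approach; the trade-off favors the trick whenever an inversion costs more than a handful of multiplications.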
Security needs in embedded systems
The paper discusses the hardware and software security requirements
of an embedded device involved in the transfer of secure digital data. It gives an overview of security processes such as encryption/decryption, key agreement, digital signatures and digital certificates that are used to achieve data protection during transfer. It also discusses the security requirements in the device to prevent physical attacks that could expose secure data, such as secret keys, from the device, and touches on the security enforced in a device through proprietary security technology as well as the security measures taken during the production of the device.
Secure Multiparty Computation for Privacy-Preserving Data Mining
In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.
A New Family of Perfect Nonlinear Binomials
We prove that the binomials $x^{p^s+1}-\alpha x^{p^k+p^{2k+s}}$
define perfect nonlinear mappings in $GF(p^{3k})$ for an appropriate choice of the integer $s$ and $\alpha \in GF(p^{3k})$. We show that these binomials are inequivalent to known perfect nonlinear monomials. As a consequence we obtain new commutative semifields for $p\geq 5$ and odd $k$.
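Perfect nonlinearity is directly testable by brute force on small fields: f is PN iff every nonzero derivative x ↦ f(x+a) − f(x) is a bijection. Checking the paper's binomials would require GF(p^{3k}) extension-field arithmetic, so the sketch below illustrates the definition over a prime field with the classical PN monomial x^2 instead:

```python
# Brute-force perfect nonlinearity check: f over GF(p) is PN iff for every
# nonzero a the derivative x -> f(x+a) - f(x) is a bijection of GF(p).
# Illustrated with the classical PN monomial x^2; the paper's binomials live
# in GF(p^{3k}) and would need extension-field arithmetic.
def is_perfect_nonlinear(f, p):
    for a in range(1, p):
        derivative = {(f((x + a) % p) - f(x)) % p for x in range(p)}
        if len(derivative) != p:       # image smaller than GF(p): no bijection
            return False
    return True

p = 7
assert is_perfect_nonlinear(lambda x: x * x % p, p)       # x^2 is PN (p odd)
assert not is_perfect_nonlinear(lambda x: x ** 3 % p, p)  # x^3 fails here
```

For x^2 the derivative is 2ax + a^2, a nonconstant linear map, hence always a bijection; this is why squaring is PN over any odd-characteristic field.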
An Efficient and Provably-Secure Identity-based Signcryption Scheme for Multiple PKGs
In this paper, based on the scheme proposed by Barreto et al. at ASIACRYPT 2005, we propose an identity-based signcryption scheme for the multiple Private Key Generator (PKG) environment, which mitigates the private-key escrow and distribution problems of single-PKG systems. The scheme is proved to satisfy message confidentiality and existential signature-unforgeability, assuming the intractability of the q-Strong Diffie-Hellman problem and the q-Bilinear Diffie-Hellman Inversion problem. Compared with state-of-the-art signcryption schemes of the same kind, our proposal requires fewer pairing computations and is, to date, the most efficient identity-based signcryption scheme for multiple PKGs.
Endomorphisms for faster elliptic curve cryptography on a large class of curves
Efficiently computable homomorphisms allow elliptic curve point
multiplication to be accelerated using the Gallant-Lambert-Vanstone
(GLV) method.
We extend results of Iijima, Matsuo, Chao and Tsujii which give
such homomorphisms
for a large class of elliptic curves by working over quadratic extensions
and demonstrate that these results can be applied to the
GLV method.
Our implementation runs in between 0.70 and 0.84 the time
of the previous best methods for elliptic
curve point multiplication on curves without small class number
complex multiplication. Further speedups are
possible when using more special curves.
Last updated: 2008-11-03
A Tamper-Evident Voting Machine Resistant to Covert Channels
To provide a high level of security guarantee, cryptography is introduced into the design of voting machines. A voting machine based on cryptography is, however, vulnerable to attacks through covert channels: an adversary may inject malicious code into the machine and make it leak vote information unnoticeably by exploiting the randomness used in encryptions and zero-knowledge proofs. In this paper, a voting machine resistant to covert channels is designed. It has the following properties. First, it is tamper-evident: the randomness used by the voting machine is generated by the election authority, and inconsistent use of this randomness can be detected by the voter by examining a destroyable verification code, so that even if malicious code runs in the voting machine, attacks through subliminal channels are thwarted. Second, it is voter-verifiable: the voter can verify that the ballot cast by the machine is consistent with her intent without performing complicated cryptographic computations. Finally, the voting system is receipt-free: vote-buying and coercion are prevented.
Investigating the DPA-Resistance Property of Charge Recovery Logics
The threat of DPA attacks is of crucial importance when designing cryptographic hardware. As a result, several DPA countermeasures at the cell level have been proposed in recent years, but none of them offers perfect protection against DPA attacks. Moreover, all of these DPA-resistant logic styles significantly increase power consumption and area requirements. On the other hand, there are logic styles with lower power dissipation (so-called charge recovery logics) that can be considered as a DPA countermeasure. In this article we examine them from the DPA-resistance point of view, evaluating 2N-2N2P as an example of a charge recovery logic style. We show that using this logic style improves DPA-resistance and at the same time reduces energy consumption, which makes it especially suitable for pervasive devices. In fact, this is the first time that a proposed DPA-resistant logic style consumes less power than the corresponding standard CMOS circuit.
Last updated: 2008-08-04
User-Sure-and-Safe Key Retrieval
In a key retrieval scheme, a human user interacts with a client
computer to retrieve a key. A scheme is user-sure if any adversary
without access to the user cannot distinguish the retrieved key
from a random key. A scheme is user-safe if any adversary without
access to the client's keys, or simultaneous user and client access,
cannot exploit the user to distinguish the retrieved key from a random
key. A multiple-round key retrieval scheme, in which the user is given
informative prompts to which the user responds, is proved to be
user-sure and user-safe.
Remote key retrieval involves a keyless client and a remote, keyed
server. User-sure and user-safe are defined similarly for remote key
retrieval. The scheme is user-anonymous if the server cannot identify
the user. A remote version of the multiple-round key retrieval scheme
is proved to be user-sure, user-safe and user-anonymous.
How to Build a Hash Function from any Collision-Resistant Function
Recent collision-finding attacks against hash functions such as MD5 and SHA-1 motivate the use of provably collision-resistant (CR) functions in their place. Finding a collision in a provably CR function implies the ability to solve some hard problem (e.g., factoring). Unfortunately, existing provably CR functions make poor replacements for hash functions as they fail to deliver behaviors demanded by practical use. In particular, they are easily distinguished from a random oracle. We initiate an investigation into building hash functions from provably CR functions. As a method for achieving this, we present the Mix-Compress-Mix (MCM) construction; it envelopes any provably CR function H (with suitable regularity properties) between two injective ``mixing'' stages. The MCM construction simultaneously enjoys (1) provable collision-resistance in the standard model, and (2) indifferentiability from a monolithic random oracle when the mixing stages themselves are indifferentiable from a random oracle that observes injectivity. We instantiate our new design approach by specifying a blockcipher-based construction that appropriately realizes the mixing stages.
Information Leakage of Flip-Flops in DPA-Resistant Logic Styles
This contribution discusses the information leakage of flip-flops for different DPA-resistant logic styles. We show that many of the proposed side-channel resistant logic styles still employ flip-flops that leak data-dependent information. Furthermore, we apply simple models for the leakage of masked flip-flops to design a new attack on circuits implemented using masked logic styles. Contrary to previous attacks on masked logic styles, our attack does not predict the mask bit and does not need detailed knowledge about the attacked device, e.g., the circuit layout. Moreover, our attack works even if all the load capacitances of the complementary logic signals are perfectly balanced and even if the PRNG is ideally unbiased. Finally, after performing the attack on DRSL, MDPL, and iMDPL circuits we show that single-bit masks do not influence the exploitability of the revealed leakage of the masked flip-flops.
An Efficient and Provably Secure ID-Based Threshold Signcryption Scheme
Signcryption is a cryptographic primitive that performs digital
signature and public key encryption simultaneously, at lower
computational cost and communication overhead than the
signature-then-encryption approach. Recently, two identity-based
threshold signcryption schemes [12],[26] have been
proposed by combining the concepts of identity-based threshold
signature and signcryption. However, formal models and
security proofs for both schemes were not given. In this paper,
we formalize the concept of identity-based threshold signcryption
and give a new scheme based on bilinear pairings. We prove its
confidentiality under the Decisional Bilinear Diffie-Hellman
assumption and its unforgeability under the Computational
Diffie-Hellman assumption in the random oracle model. Our scheme
turns out to be more efficient than the two previously proposed
schemes.
Privacy-Preserving Audit and Extraction of Digital Contents
A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible.
To make storage services accountable for data loss, we present protocols that allow a third-party auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer’s and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts.
A New Approach to Secure Logging
The need for secure logging is well understood by security
professionals, both researchers and practitioners. The
ability to efficiently verify all (or some) log entries is
important to any application employing secure logging techniques.
In this paper, we begin by examining the state of the art in secure
logging and identify some problems inherent to systems based on
trusted third-party servers. We then propose a different approach
to secure logging based upon recently developed Forward-Secure
Sequential Aggregate (FssAgg) authentication techniques. Our
approach offers both space-efficiency and provable security. We
illustrate two concrete schemes -- one private-verifiable and one
public-verifiable -- that offer practical secure logging without
any reliance on on-line trusted third parties or secure hardware.
We also investigate the concept of immutability in the context of
forward-secure sequential aggregate authentication to provide
finer-grained verification. Finally, we report on experience
with a prototype built upon a popular code version control system.
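The flavor of the approach: each log entry is MACed under a key that then evolves one-way, and per-entry tags are folded into a single running aggregate, so a later break-in cannot silently rewrite earlier entries. A simplified private-verifiable sketch in the spirit of FssAgg, not the paper's exact scheme:

```python
# Simplified private-verifiable forward-secure aggregate log, in the spirit
# of FssAgg (not the paper's exact scheme): each entry is MACed under an
# evolving key, and per-entry tags are folded into one aggregate tag.
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    return hashlib.sha256(b"evolve" + key).digest()   # one-way key update

def log_entries(initial_key: bytes, entries):
    key, agg = initial_key, b"\x00" * 32
    for entry in entries:
        tag = hmac.new(key, entry, hashlib.sha256).digest()
        agg = hashlib.sha256(agg + tag).digest()      # fold tag into aggregate
        key = evolve(key)                             # forward security
    return agg

def verify(initial_key: bytes, entries, agg) -> bool:
    # A verifier holding the initial key replays the whole chain.
    return hmac.compare_digest(log_entries(initial_key, entries), agg)

k0 = b"shared-with-trusted-verifier"
log = [b"boot", b"login alice", b"logout alice"]
agg = log_entries(k0, log)
assert verify(k0, log, agg)
assert not verify(k0, [b"boot", b"login mallory", b"logout alice"], agg)
```

Because the logger deletes each key after evolving it, an attacker who compromises the machine at step i holds only the step-i key and cannot forge or alter tags for entries 1..i−1; the single aggregate tag gives the space-efficiency mentioned in the abstract.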
On the Secure Obfuscation of Deterministic Finite Automata
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [B79,G87,GO96,K80] and revise them to the current obfuscation setting of Barak et al. [BGI+01]. Under this model, we introduce an efficient oracle that retains some ``small" secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong ``virtual black box" property originally proposed in [BGI+01] while incorporating the stronger condition of dependent auxiliary inputs in [GTK05]. Additionally, we further show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Preimage Attacks on 3-Pass HAVAL and Step-Reduced MD5
This paper presents preimage attacks on the hash functions 3-pass
HAVAL and step-reduced MD5. Introduced in 1992 and 1991
respectively, these functions have undergone severe collision attacks,
but no preimage attacks so far. We describe two preimage attacks on the
compression function of 3-pass HAVAL. The attacks have a complexity
of about $2^{224}$ compression function evaluations instead of
$2^{256}$. Furthermore, we present several preimage attacks on the
MD5 compression function that invert up to 47 (out of 64) steps
within $2^{96}$ trials instead of $2^{128}$. Though our attacks are
not practical, they show that the security margin of 3-pass HAVAL
and step-reduced MD5 with respect to preimage attacks is not as high
as expected.
Restricted Adaptive Oblivious Transfer
In this work we consider the following primitive, that we call {\it restricted adaptive oblivious transfer}. On the one hand, the owner of a database wants to restrict the access of users to this data according to some policy, in such a way that a user can only obtain information satisfying the restrictions imposed by the owner. On the other hand, a legitimate user wants to privately retrieve allowed parts of the data, in a sequential and adaptive way, without letting the owner know which part of the data is being obtained.
After having formally described the components and required properties of a protocol for restricted adaptive oblivious transfer, we propose two generic ways to realize this primitive. The first one uses a cryptographic tool which has received a lot of attention from the literature in the last years: cryptosystems which are both multiplicatively and additively homomorphic. Our second generic construction is based on secret sharing schemes.
Proofs of Knowledge with Several Challenge Values
In this paper we consider the problem of increasing
the number of possible challenge values from 2 to $s$
in various zero-knowledge cut-and-choose protocols.
First we discuss doing this for the graph isomorphism protocol.
Then we show how increasing this number improves the efficiency
of protocols for the double discrete logarithm
and the $e$-th root of the discrete logarithm, which are potentially very useful
tools for constructing complex cryptographic protocols.
The practical improvement given by our paper is a factor of 2-4 in terms
of both time complexity and transcript size.
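The source of the efficiency gain is easy to quantify: with s challenge values the per-round soundness error drops from 1/2 to 1/s, so the round count for a target error of 2^{-t} shrinks by a factor of log2(s). A back-of-the-envelope helper (the 2-4x figure corresponds to s between 4 and 16):

```python
# Rounds needed for soundness error 2^-t when each round offers s challenge
# values (a cheating prover survives one round with probability 1/s).
import math

def rounds_needed(s: int, t: int = 80) -> int:
    return math.ceil(t / math.log2(s))

assert rounds_needed(2) == 80    # binary challenges: t rounds
assert rounds_needed(4) == 40    # 2x fewer rounds
assert rounds_needed(16) == 20   # 4x fewer rounds
```

Transcript size shrinks roughly proportionally, provided the per-round messages do not grow too much with s.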
Imaginary quadratic orders with given prime factor of class number
The abelian class group Cl(D) of an imaginary quadratic order with odd squarefree discriminant D is used in public-key cryptosystems based on the discrete logarithm problem in the class group, and in cryptosystems based on isogenies of elliptic curves. The discrete logarithm problem in Cl(D) is hard if #Cl(D) is prime or has a large prime divisor, but no algorithms for generating such D are known.
We propose a probabilistic algorithm that yields the discriminant of an imaginary quadratic order O_D with a subgroup of given prime order l. The algorithm is based on properties of the Hilbert class field polynomial H_D for an elliptic curve over a field of p^l elements. Let the trace of the Frobenius endomorphism be T, the discriminant of the Frobenius endomorphism D = T^2 - 4p^l, and suppose j is not in the prime field. Then deg(H_D) = #Cl(O_D) and #Cl(D) = 0 (mod l). If the Diophantine equation D = T^2 - 4p^l, with variables l < O(|D|^(1/4)), prime p and T, has a solution only for l = 1, then the class number is prime.
Last updated: 2008-05-29
An Efficient ID-based Ring Signature Scheme from Pairings
A ring signature allows a user from a set of possible signers to convince the verifier that the author of the signature belongs to the set without disclosing the author's identity. It protects the anonymity of a signer, since the verifier knows only that the signature comes from a member of a ring, but not exactly who the signer is. This paper proposes a new ID-based ring signature scheme based on bilinear pairings. The new scheme provides constant-size signatures, not counting the list of identities included in the ring. When using elliptic curve groups of 160-bit prime order, our ring signature size is only about 61 bytes. No pairing operation is involved in the ring signing procedure, and only three pairing operations are involved in the verification procedure, making our scheme more efficient than previously proposed schemes. The new scheme can be proved secure under the hardness assumption of the k-Bilinear Diffie-Hellman Inverse problem, in the random oracle model.
Optimal Discretization for High-Entropy Graphical Passwords
In click-based graphical password schemes that allow arbitrary click locations on an image, a click should be verified as correct if it is within a predefined distance of the originally chosen location. This condition should hold even when, for security reasons, a hash of the password is stored in the system rather than the password itself. To solve this problem, a robust discretization method has recently been proposed. In this paper, we show that previous work on discretization does not give optimal results with respect to the entropy of the graphical passwords, and we propose a new discretization method to increase the password space. To improve security further, we also present several methods that use multiple hash computations for password verification.
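The basic tension is that the stored hash must match exactly while the click may land anywhere within the tolerance. A minimal single-grid sketch (illustrative only; robust discretization uses multiple offset grids so that every point sits safely inside some cell, and the salt and sizes below are hypothetical):

```python
import hashlib

GRID = 19                      # cell size = allowed tolerance, in pixels

def cell_of(x, y):
    # Quantize a click to its grid cell index.
    return (x // GRID, y // GRID)

def hash_cell(cell, salt=b"demo-salt"):
    # Store only a hash of the cell, never the click location itself.
    return hashlib.sha256(salt + repr(cell).encode()).hexdigest()

# Registration: the user clicks (143, 71); store the hash of its cell.
stored = hash_cell(cell_of(143, 71))

# Login: a nearby click in the same cell verifies ...
assert hash_cell(cell_of(140, 75)) == stored
# ... but a click only 9 pixels away can cross a cell boundary and fail,
# even though it is within tolerance of the original point -- the failure
# mode that robust (and entropy-optimal) discretization must eliminate.
assert hash_cell(cell_of(152, 71)) != stored
```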
Algebraic Techniques in Differential Cryptanalysis
In this paper we propose a new cryptanalytic method against block ciphers, which combines both algebraic and statistical techniques. More specifically, we show how to use algebraic relations arising from differential characteristics to speed up and improve key-recovery differential attacks against block ciphers. To illustrate the new technique, we apply algebraic techniques to mount differential attacks against round-reduced variants of Present-128.
New construction of Boolean functions with maximum algebraic immunity
Because of algebraic attacks, a high
algebraic immunity is now an important criterion for Boolean
functions used in stream ciphers. In this paper, by using the
relationship between certain flats and the support of an n-variable
Boolean function f, we introduce a general method to determine the
algebraic immunity of a Boolean function and finally construct some
balanced functions with optimum algebraic immunity.
Proofs of Retrievability: Theory and Implementation
A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file $F$ is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of $F$ itself, they are an attractive building block for high-assurance remote storage systems.
In this paper, we propose a theoretical framework for the design of PORs. Our framework improves the previously proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the conceptual limitations of previous theoretical models for PORs. It supports a fully Byzantine adversarial model, carrying only the restriction—fundamental to all PORs—that the adversary’s error rate $\epsilon$ be bounded when the client seeks to extract $F$. Our techniques support efficient protocols across
the full possible range of $\epsilon$, up to $\epsilon$ non-negligibly close to 1.
We propose a new variant on the Juels-Kaliski protocol and describe a prototype implementation.
We demonstrate practical encoding even for files $F$ whose size exceeds that of client main memory.
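A minimal spot-checking sketch in the spirit of PORs (illustrative only, not the Juels-Kaliski protocol or the variant proposed here: the client remembers MAC tags of randomly sampled blocks and later challenges the server to reproduce them; all sizes are arbitrary):

```python
import hmac, hashlib, os, random

BLOCK = 64
key = os.urandom(32)
file_blocks = [os.urandom(BLOCK) for _ in range(1000)]   # the outsourced file F

# Client-side setup: remember MAC tags for a random sample of block indices.
sample = random.sample(range(len(file_blocks)), 30)
tags = {i: hmac.new(key, file_blocks[i], hashlib.sha256).digest() for i in sample}

def server_respond(blocks, i):
    return blocks[i]                                     # an honest server

def audit(blocks):
    # Challenge every sampled index; a corrupted sampled block is caught.
    return all(
        hmac.compare_digest(
            hmac.new(key, server_respond(blocks, i), hashlib.sha256).digest(),
            tags[i])
        for i in sample)

assert audit(file_blocks)                 # the intact file passes
corrupted = list(file_blocks)
corrupted[sample[0]] = os.urandom(BLOCK)  # corrupt one audited block
assert not audit(corrupted)
```

Real PORs additionally erasure-code $F$ so that passing audits implies the whole file is recoverable even when a bounded fraction $\epsilon$ of blocks is corrupted, which is exactly the extraction guarantee the framework above formalizes.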
Non-Linear Reduced Round Attacks Against SHA-2 Hash family
Uncategorized
Most of the attacks against (reduced) SHA-2 family in literature
have used local collisions which are valid for linearized version of SHA-2 hash functions. Recently, at FSE '08, an attack against reduced round SHA-256 was presented by Nikolić and Biryukov which used
a local collision which is valid for the actual SHA-256 function. It is a 9-step local collision which starts by introducing a modular difference of 1 in the two messages. It succeeds with probability roughly 1/3. We build on the work of Nikolić and Biryukov and provide a generalized nonlinear local collision which accepts an arbitrary initial message difference. This local collision succeeds with probability 1. Using this local collision we present attacks against 18-step SHA-256 and 18-step SHA-512 with arbitrary initial difference. Both of these attacks succeed with probability 1.
We then present special cases of our local collision and show two different differential paths for attacking 20-step SHA-256 and 20-step SHA-512. One of these paths is the same as presented by Nikolić and Biryukov while the other one is a new differential path. Messages following both these differential paths can be found with probability 1. This improves on the previous result where the success
probability of 20-step attack was 1/3. Finally, we present two differential paths for 21-step collisions for SHA-256 and SHA-512, one of which is a new path. The success probability of these paths for SHA-256 is roughly $2^{-15}$ and $2^{-17}$ which improves on the 21-step attack having probability $2^{-19}$ reported earlier.
We show examples of message pairs following all the presented differential paths for up to 21-step collisions in SHA-256. We also show first real examples of colliding message pairs for up to 20-step reduced SHA-512.
Full Cryptanalysis of LPS and Morgenstern Hash Function
Collisions in the LPS cryptographic hash function of Charles, Goren and Lauter have been found by Zémor and
Tillich, but it was not clear whether computing preimages was also easy for this hash
function. We present a probabilistic polynomial time algorithm solving this problem. Subsequently, we study the
Morgenstern hash, an interesting variant of the LPS hash, and break this function as well. Our attacks build upon the ideas of Zémor and Tillich but are not straightforward extensions of them. Finally, we discuss fixes for the
Morgenstern hash function and other applications of our results.
The Round Complexity of Verifiable Secret Sharing Revisited
Uncategorized
The round complexity of interactive protocols is one of their most
important complexity measures. In this work we prove that existing lower bounds for the round complexity of VSS can be circumvented by introducing a negligible probability of error in the reconstruction phase. Previous results show matching lower and upper bounds of {\em three} rounds for VSS, with $n = 3t+1$, where the reconstruction of the secrets always succeeds, i.e. with probability 1. In contrast we show that with a negligible probability of error in the reconstruction phase:
\begin{enumerate}
\item There exists an efficient 2-round VSS protocol for $n = 3t + 1$.
If we assume that the adversary is non-rushing then we can achieve a
1-round reconstruction phase.
\item There exists an efficient 1-round VSS
for $t = 1$ and $n > 3$.
\item We prove that our results are optimal both in resilience and number
of sharing rounds by showing:
\begin{enumerate}
\item There does not exist a 2-round WSS \footnote{WSS is
a weaker notion of VSS.} (and hence VSS) for
$n \leq 3t$.
\item There does not exist a 1-round VSS protocol for
$t \geq 2$ and $n \geq 4$.
\end{enumerate}
\end{enumerate}
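The sharing layer underneath any VSS is plain Shamir secret sharing; a minimal sketch over a prime field (illustrative: VSS adds the verification machinery whose round complexity the paper analyzes, which this sketch omits entirely):

```python
import random

P = 2**31 - 1                     # a Mersenne prime field

def share(secret, t, n):
    """Degree-t polynomial with constant term = secret; shares for parties 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for x_i, y_i in shares:
        num = den = 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

t, n = 1, 4                       # matches the t = 1, n > 3 setting above
shares = share(42, t, n)
assert reconstruct(shares[:t + 1]) == 42   # any t+1 shares recover the secret
assert reconstruct(shares[1:t + 2]) == 42
```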
Binary Edwards Curves
This paper presents a new shape for ordinary elliptic curves over fields of characteristic 2. Using the new shape, this paper presents the first complete addition formulas for binary elliptic curves, i.e., addition formulas that work for all pairs of input points, with no exceptional cases. If n >= 3 then the complete curves cover all isomorphism classes of ordinary elliptic curves over F_2^n.
This paper also presents dedicated doubling formulas for these curves using 2M + 6S + 3D, where M is the cost of a field multiplication, S is the cost of a field squaring, and D is the cost of multiplying by a curve parameter. These doubling formulas are also the first complete doubling formulas in the literature, with no exceptions for the neutral element, points of order 2, etc.
Finally, this paper presents complete formulas for differential addition, i.e., addition of points with known difference. A differential addition and doubling, the basic step in a Montgomery
ladder, uses 5M + 4S + 2D when the known difference is given in affine form.
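The ladder's step pattern can be seen with scalar arithmetic alone (a structural sketch in which integer multiples stand in for curve points; on a binary Edwards curve each iteration would cost the 5M + 4S + 2D quoted above, with the known difference P in affine form):

```python
def montgomery_ladder(k, P=1):
    """Compute k*P using one 'differential addition' and one 'doubling' per bit.

    Integers replace curve points here; the invariant R1 - R0 == P means every
    addition below is an addition of points with known difference P.
    """
    R0, R1 = 0, P
    for bit in bin(k)[2:]:                  # scan bits most-significant first
        if bit == '1':
            R0 = R0 + R1                    # differential add (difference is P)
            R1 = R1 + R1                    # double
        else:
            R1 = R0 + R1                    # differential add
            R0 = R0 + R0                    # double
    return R0

assert all(montgomery_ladder(k) == k for k in range(1, 200))
```

The uniform one-add-one-double structure per bit, independent of the bit's value, is what makes the ladder attractive against simple side-channel analysis.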
Cryptanalysing the Critical Group: Efficiently Solving Biggs's Discrete Logarithm Problem
Biggs has recently proposed the critical group of a certain class of
finite graphs as a platform group for cryptosystems relying on the
difficulty of the discrete log problem. The paper uses techniques from
the theory of Picard groups on finite graphs to show that the discrete
log problem can be efficiently solved in Biggs's groups. Thus this
class of groups is not suitable as a platform for discrete log based
cryptography.
Understanding Phase Shifting Equivalent Keys and Exhaustive Search
Recent articles~\cite{kucuk,ckp08,isobe,cryptoeprint:2008:128} introduce the concept of phase shifting equivalent keys in stream ciphers, and exploit this concept in order to mount attacks on some specific ciphers. The idea behind phase shifting equivalent keys is that, for many ciphers, each internal state can be considered as the result of an injection of a key and initialization vector. This enables speeding up the standard exhaustive search algorithm among the $2^n$ possible keys by decreasing the constant factor of $2^n$ in the time complexity of the algorithm. However, this has erroneously been stated in~\cite{isobe,cryptoeprint:2008:128} as decreasing the complexity of the algorithm below $2^n$. In this note, we show why this type of attack, which uses phase shifting equivalent keys to improve exhaustive key search, can never reach time complexity below $2^n$, where $2^n$ is the size of the key space.
Possibility and impossibility results for selective decommitments
Uncategorized
The *selective decommitment problem* can be described as follows: assume an adversary receives a number of commitments and then may request openings of, say, half of them. Do the unopened commitments remain secure? Although this question arose more than twenty years ago, no satisfactory answer could be presented so far. We answer the question in several ways:
- If simulation-based security is desired (i.e., if we demand that the adversary's output can be simulated by a machine that does not see the unopened commitments), then security is *not achievable* for non-interactive or perfectly binding commitment schemes via black-box reductions to standard cryptographic assumptions. *However,* we show how to achieve security in this sense with interaction and a non-black-box reduction to one-way permutations.
- If only indistinguishability of the unopened commitments from random commitments is desired, then security is *not achievable* for (interactive or non-interactive) perfectly binding commitment schemes, via black-box reductions to standard cryptographic assumptions. *However,* any statistically hiding scheme *does* achieve security in this sense.
Our results give an almost complete picture when and how security under selective openings can be achieved. Applications of our results include:
- Essentially, an encryption scheme *must* be non-committing in order to achieve provable security against an adaptive adversary.
- When implemented with our commitment scheme, the interactive proof for graph 3-coloring due to Goldreich et al. becomes black-box zero-knowledge under parallel composition.
On the technical side, we develop a technique to show very general impossibility results for black-box proofs.
Non-black-box Techniques Are Not Necessary for Constant Round Non-malleable Protocols
Recently, non-black-box techniques have enjoyed great success in cryptography. In particular, they have led to the construction of \emph{constant round} protocols for two basic cryptographic tasks (in the plain model): non-malleable zero-knowledge (NMZK) arguments
for NP, and non-malleable commitments. Earlier protocols, whose security proofs relied only on black-box techniques, required non-constant (e.g., $O(\log n)$) number of rounds. Given the inefficiency (and complexity) of existing non-black-box techniques, it is natural to ask whether they are \emph{necessary} for achieving constant-round non-malleable cryptographic protocols.
In this paper, we answer this question in the \emph{negative}. Assuming the validity of a recently introduced assumption, namely
the \emph{Gap Discrete Logarithm} (Gap-DL) assumption [MMY06], we construct a constant round \emph{simulation-extractable} argument system for NP, which implies NMZK. The Gap-DL assumption also leads to a very simple and natural construction of \emph{non-interactive non-malleable commitments}. In addition, plugging our simulation-extractable argument in the construction of Katz, Ostrovsky, and
Smith [KOS03] yields the first $O(1)$-round secure multiparty computation with a dishonest majority using only black-box techniques.
Although the Gap-DL assumption is relatively new and non-standard, in
addition to answering some long standing open questions, it brings a
new approach to non-malleability which is simpler and very natural. We also demonstrate that Gap-DL holds unconditionally against \emph{generic} adversaries.
Algebraic Attacks on the Crypto-1 Stream Cipher in MiFare Classic and Oyster Cards
MiFare Crypto 1 is a lightweight stream cipher used in London's Oyster card, the Netherlands' OV-Chipcard, Boston's CharlieCard,
and in numerous wireless access control and ticketing systems worldwide. Recently, researchers were able to recover this algorithm by reverse engineering.
We have examined MiFare from the point of view of so-called "algebraic attacks". We can recover the full 48-bit key of the MiFare algorithm in 200 seconds on a PC, given 1 known IV (from one single encryption).
The security of this cipher is therefore close to zero. This is particularly shocking given that, according to the Dutch press, 1 billion MiFare Classic chips are used worldwide,
including in many governmental security systems.
Improved lower bound on the number of balanced symmetric functions over GF(p)
The lower bound on the number of n-variable balanced symmetric
functions over finite fields GF(p) presented in
{\cite{Cusick}} is improved in this paper.
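For p = 2 the objects being counted can be enumerated directly: an n-variable symmetric Boolean function is determined by its value on each Hamming weight, so balancedness reduces to a subset-sum over binomial coefficients (a small sanity check for the GF(2) case, not the paper's bound):

```python
from itertools import product
from math import comb

def balanced_symmetric_count(n):
    """Count n-variable symmetric Boolean functions that take the value 1 on
    exactly half of the 2^n inputs."""
    target = 2 ** (n - 1)
    return sum(
        1
        for values in product((0, 1), repeat=n + 1)   # value at each weight
        # weight w accounts for C(n, w) of the 2^n inputs
        if sum(v * comb(n, w) for w, v in enumerate(values)) == target)

assert balanced_symmetric_count(3) == 4
```

Brute force is only feasible for tiny n, which is precisely why asymptotic lower bounds like the one improved here are needed.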
On the (Im)Possibility of Key Dependent Encryption
Uncategorized
We study the possibility of constructing encryption schemes secure
under messages that are chosen depending on the key~$k$ of the
encryption scheme itself. We give the following separation results that
hold both in the private and in the public key settings:
\begin{itemize}
\item Let~$\mathcal{H}$ be the family of $\poly(n)$-wise independent
hash-functions. There exists no fully-black-box reduction from an
encryption scheme secure against key-dependent messages to one-way
permutations (and also to families of trapdoor permutations) if the
adversary can obtain encryptions of~$h(k)$ for~$h \in \mathcal{H}$.
\item There exists no reduction from an encryption scheme
secure against key-dependent messages to, essentially, \emph{any} cryptographic
assumption, if the adversary can obtain an encryption of~$g(k)$
for an \emph{arbitrary} $g$, as long as the reduction's proof of
security treats both the adversary and the function $g$ as black
boxes.
\end{itemize}
Universally Composable Adaptive Oblivious Transfer
In an oblivious transfer (OT) protocol, a Sender with messages M_1,...,M_N and a Receiver with indices s_1,...,s_k interact in such a way that at the end the Receiver obtains M_{s_1},...,M_{s_k} without learning anything about the other messages, and the Sender does not learn anything about s_1,...,s_k. In an adaptive protocol, the Receiver may obtain M_{s_{i-1}} before deciding on $s_i$. Efficient adaptive OT protocols are interesting both as a building block for secure multiparty computation and for enabling oblivious searches on medical and patent databases.
Historically, adaptive OT protocols were analyzed with respect to a ``half-simulation'' definition which Naor and Pinkas showed to be flawed. In 2007, Camenisch, Neven, and shelat, and subsequent other works, demonstrated efficient adaptive protocols in the full-simulation model. These protocols, however, all use standard rewinding techniques in their proofs of security and thus are not universally composable. Recently, Peikert, Vaikuntanathan and Waters presented universally composable (UC) non-adaptive OT protocols (for the 1-out-of-2 variant). However, it is not clear how to preserve UC security while extending these protocols to the adaptive k-out-of-N setting. Further, any such attempt would seem to require O(N) computation per transfer for a database of size N. In this work, we present an efficient and UC-secure adaptive k-out-of-N OT protocol, where after an initial commitment to the database, the cost of each transfer is constant. Our construction is secure under bilinear assumptions in the standard model.
Formally Bounding the Side-Channel Leakage in Unknown-Message Attacks
Uncategorized
We propose a novel approach for quantifying a system's resistance to unknown-message side-channel attacks. The approach is based on a measure of the secret information that an attacker can extract from
a system from a given number of side-channel measurements. We
provide an algorithm to compute this measure, and we use it to
analyze the resistance of hardware implementations of cryptographic
algorithms with respect to power and timing attacks. In particular,
we show that message-blinding -- the common countermeasure against
timing attacks -- reduces the rate at which information about the
secret is leaked, but that the complete information is still
eventually revealed. Finally, we compare information measures
corresponding to unknown-message, known-message, and chosen-message
attackers and show that they form a strict hierarchy.
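The underlying measure is the information the side-channel observations carry about the secret; for a finite channel it can be computed directly (a toy sketch with a hypothetical 2-bit secret and a Hamming-weight power model, not the paper's algorithm):

```python
from math import log2

def mutual_information(p_secret, p_obs_given_secret):
    """I(S;O) in bits for discrete S and O, given P[S] and P[O|S]."""
    p_obs = [sum(p_secret[s] * p_obs_given_secret[s][o]
                 for s in range(len(p_secret)))
             for o in range(len(p_obs_given_secret[0]))]
    mi = 0.0
    for s, ps in enumerate(p_secret):
        for o, po_s in enumerate(p_obs_given_secret[s]):
            if po_s > 0:
                mi += ps * po_s * log2(po_s / p_obs[o])
    return mi

# 4 equally likely 2-bit secrets; the observation is the secret's Hamming
# weight (a crude power model): weights 0,1,1,2 for secrets 00,01,10,11.
p_secret = [0.25] * 4
p_obs_given_secret = [
    [1, 0, 0],   # secret 00 -> weight 0
    [0, 1, 0],   # secret 01 -> weight 1
    [0, 1, 0],   # secret 10 -> weight 1
    [0, 0, 1],   # secret 11 -> weight 2
]
leak = mutual_information(p_secret, p_obs_given_secret)
assert abs(leak - 1.5) < 1e-9    # 1.5 of the 2 secret bits leak per trace
```

Countermeasures like blinding correspond to making P[O|S] noisier, which lowers the per-measurement leakage rate without necessarily driving the cumulative leakage to zero, matching the observation above.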
Modular polynomials for genus 2
Modular polynomials are an important tool in many algorithms involving elliptic curves. In this article we generalize this concept to the
genus 2 case. We give the theoretical framework describing the genus 2
modular polynomials and discuss how to explicitly compute them.
A Proxy Signature Scheme over Braid Groups
Proxy signatures, introduced by Mambo, Usuda and Okamoto, allow a designated person to sign on behalf of an original signer. Braid groups have played an important role in cryptography, as they are non-commutative groups. Some digital signature schemes over braid groups have been given, but no proxy signature scheme has been introduced. In this paper we propose a proxy signature scheme using the conjugacy search problem over braid groups. Our scheme is a proxy-protected proxy signature with partial delegation.
A non-interactive deniable authentication scheme based on designated verifier proofs
Uncategorized
A deniable authentication protocol enables a receiver to identify the source of the given messages, but the receiver is unable to prove the identity of the sender to a third party. In recent years, several non-interactive deniable authentication schemes have been proposed in order to enhance efficiency. In this paper, we propose a security model for non-interactive deniable authentication schemes. Then a non-interactive deniable authentication scheme is presented based on designated verifier proofs. Furthermore, we prove the security of our scheme under the DDH assumption.
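The core deniability idea can be illustrated with a shared-key MAC (a toy sketch, not the proposed designated-verifier scheme: because the receiver could have produced the tag itself, the tag convinces no third party of the sender's identity):

```python
import hmac, hashlib, os

# Sender and receiver share k; a tag under k authenticates the message to
# the receiver, yet proves nothing about its origin to outsiders.
k = os.urandom(32)

def authenticate(msg):                 # run by the sender
    return hmac.new(k, msg, hashlib.sha256).digest()

def verify(msg, tag):                  # run by the receiver
    return hmac.compare_digest(hmac.new(k, msg, hashlib.sha256).digest(), tag)

def receiver_forge(msg):               # the receiver can mint identical tags,
    return hmac.new(k, msg, hashlib.sha256).digest()   # hence deniability

tag = authenticate(b"transfer 100")
assert verify(b"transfer 100", tag)
assert receiver_forge(b"transfer 100") == tag   # origin is indistinguishable
assert not verify(b"transfer 999", tag)
```

Designated verifier proofs achieve the same "the verifier could have simulated this" property in the public-key setting, without requiring a pre-shared key.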
DISH: Distributed Self-Healing in Unattended Sensor Networks
Unattended wireless sensor networks (UWSNs) operating in hostile
environments face the risk of compromise.
Unable to off-load collected data to a sink or some other trusted
external entity, sensors must protect themselves by attempting to
mitigate potential compromise and safeguarding their data.
In this paper, we focus on techniques that allow unattended sensors
to recover from intrusions by soliciting help from peer sensors.
We define a realistic adversarial model and show how certain
simple defense methods can result in sensors regaining secrecy and
authenticity of collected data, despite the adversary's efforts to the
contrary. We present an extensive analysis and a set of simulation
results that support our observations and demonstrate the
effectiveness of proposed techniques.
Secure Online Elections in Practice
Current remote e-voting schemes aim at a number of security objectives. However, this is not enough for providing secure online elections in practice. Beyond a secure e-voting protocol, there are many organizational and technical security requirements that have to be satisfied by the operational environment in which the scheme is implemented. We have investigated four state-of-the-art e-voting protocols in order to identify the organizational and technical requirements that must be met for these protocols to work correctly. Satisfying these requirements is a costly task which reduces the potential advantages of e-voting considerably. We introduce the concept of a Voting Service Provider (VSP) which carries out electronic elections as a trusted third party and is responsible
for satisfying the organizational and technical requirements. We show which measures the VSP takes to meet these requirements. To establish trust in the VSP we propose a Common Criteria evaluation and a legal framework. Following this approach, we show that the VSP enables secure, cost-effective, and thus feasible online elections.
On Black-Box Ring Extraction and Integer Factorization
The black-box extraction problem over rings has (at least) two important interpretations in cryptography: An efficient algorithm for this problem implies (i) the equivalence of computing discrete logarithms and solving the Diffie-Hellman problem and (ii) the nonexistence of secure ring-homomorphic encryption schemes.
In the special case of a finite field, Boneh/Lipton and Maurer/Raub showed that there exist algorithms solving the black-box extraction problem in subexponential time. It is unknown whether there exist more efficient algorithms.
In this work we consider the black-box extraction problem over finite rings of characteristic $n$, where $n$ has at least two different prime factors. We provide a polynomial-time reduction from factoring $n$ to the black-box extraction problem for a large class of finite commutative unitary rings. Under the factoring assumption, this implies the nonexistence of certain efficient generic reductions from computing discrete logarithms to the Diffie-Hellman problem on the one side, and might be an indicator that secure ring-homomorphic encryption schemes exist on the other side.
A Generalized Brezing-Weng Algorithm for Constructing Pairing-Friendly Ordinary Abelian Varieties
We give an algorithm that produces families of Weil numbers for ordinary abelian varieties over finite fields with prescribed embedding degree. The algorithm uses the ideas of Freeman, Stevenhagen, and Streng to generalize the Brezing-Weng construction of pairing-friendly elliptic curves. We discuss how CM methods can be used to construct these varieties, and we use our algorithm to give examples of pairing-friendly ordinary abelian varieties of dimension 2 and 3 that are absolutely simple and have smaller $\rho$-values than any previous such example.
Last updated: 2008-08-13
The Walsh Spectrum of a New Family of APN Functions
Uncategorized
The extended Walsh spectrum of a new family of APN functions is
computed. It turns out that the Walsh spectrum of these functions is the same as that of the Gold functions.
Redundant $\tau$-adic Expansions II: Non-Optimality and Chaotic Behaviour
When computing scalar multiples on Koblitz curves, the Frobenius endomorphism can be used to replace the usual doublings on the curve. This involves digital expansions of the scalar to the complex base $\tau=(\pm 1\pm \sqrt{-7})/2$ instead of binary expansions. As in the binary case, this method can be sped up by enlarging the set of valid digits at the cost of precomputing some points on the curve. In the binary case, it is known that a simple syntactical condition (the so-called $w$-NAF-condition) on the expansion ensures that the number of curve additions is minimised. The purpose of this paper is to show that this is no longer true for the base $\tau$ and $w\in\{4,5,6\}$. Even worse, it is shown that there is no longer an online algorithm to compute an optimal expansion from the digits of some standard expansion from the least to the most significant digit, which can be interpreted as chaotic behaviour. The proofs heavily depend on symbolic computations involving transducer automata.
Computational soundness of symbolic zero-knowledge proofs
The abstraction of cryptographic operations by term algebras, called
Dolev-Yao models, is essential in almost all tool-supported methods
for proving security protocols. Recently significant progress was made
in proving that Dolev-Yao models offering the core cryptographic
operations such as encryption and digital signatures can be sound with
respect to actual cryptographic realizations and security
definitions. Recent work, however, has started to extend Dolev-Yao
models to more sophisticated operations with unique security
features. Zero-knowledge proofs arguably constitute the most amazing
such extension.
In this paper, we first identify which additional properties a
cryptographic (non-interactive) zero-knowledge proof needs to fulfill
in order to serve as a computationally sound implementation of
symbolic (Dolev-Yao style) zero-knowledge proofs; this leads to the
novel definition of a symbolically-sound zero-knowledge proof
system. We prove that even in the presence of arbitrary active
adversaries, such proof systems constitute computationally sound
implementations of symbolic zero-knowledge proofs. This yields the
first computational soundness result for symbolic zero-knowledge
proofs and the first such result against fully active adversaries of
Dolev-Yao models that go beyond the core cryptographic operations.
Last updated: 2008-09-23
Impossible Differential Cryptanalysis of CLEFIA
Uncategorized
This paper mainly discusses impossible differential cryptanalysis of CLEFIA, which was proposed at FSE 2007. New 9-round
impossible differentials, different from the previous ones, are discovered. These differentials are then applied to attacks on reduced-round CLEFIA. For the 128-bit case, it is possible to apply an impossible differential attack to 12-round CLEFIA requiring 2^110.93 chosen plaintexts with time complexity 2^111. For the 192/256-bit cases, it is possible to apply an impossible differential attack to 13-round CLEFIA with 2^111.72 chosen plaintexts and time complexity 2^158. For the 256-bit case, attacking 14-round CLEFIA needs 2^112.3 chosen plaintexts and no more than 2^199 encryptions, and attacking 15-round 256-bit CLEFIA needs 2^113 chosen plaintexts with time complexity less than 2^248 encryptions.
Robust Combiners for Software Hardening
Uncategorized
All practical software hardening schemes, like practical encryption schemes such as AES, lack proofs of security. One technique to enhance security is {\em robust combiners}.
An algorithm $C$ is a robust combiner for specification $S$, e.g., privacy, if for any two implementations $X$ and $Y$, of a cryptographic scheme, the combined scheme $C(X,Y)$ satisfies $S$ provided {\em either} $X$ {\em or} $Y$ satisfy $S$.
We present the first robust combiners for software hardening, specifically for obfuscation \cite{barak:obfuscation}, and for White-Box Remote Program Execution (WBRPE) \cite{herzberg2009towards}. WBRPE and obfuscators are software hardening techniques that are employed to protect execution of programs in remote, hostile environments. WBRPE provides a software-only platform allowing secure execution of programs on untrusted, remote hosts, ensuring privacy of the program and of the inputs to the program, as well as privacy and integrity of the result of the computation. Obfuscators protect the code (and secret data) of the program that is sent to the remote host for execution.
Robust combiners are particularly important for software hardening, where there is no standard whose security is established. In addition, robust combiners for software hardening are interesting from software engineering perspective since they introduce new techniques of reductions and code manipulation.
Toy Factoring by Newton's Method
A theoretical approach for integer factoring using complex analysis is
to apply Newton's method on an analytic function whose zeros, within
a certain strip, correspond to factors of a given integer. A few
successful toy experiments of factoring small numbers with this
approach, implemented in a naive manner that amounts to complexified
trial division, show that, at least for small numbers, Newton's method
finds the requisite zeros, at least some of the time (factors of nearby
numbers were also sometimes found). Nevertheless, some heuristic
arguments predict that this approach will be infeasible for factoring
numbers of the size currently used in cryptography.
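One way to see the idea in miniature is to run Newton's method on a real analytic function whose zeros sit at points determined by the divisors of N. The sketch below uses f(x) = sin(pi*N/x), an illustrative stand-in chosen for simplicity, not the complex-strip function of the paper:

```python
from math import sin, cos, pi, gcd

def newton_factor(N, x0, steps=50):
    """Newton iteration on f(x) = sin(pi*N/x), whose real zeros are exactly
    the points where N/x is an integer; converging near such a zero and
    rounding may reveal a nontrivial factor of N."""
    x = x0
    for _ in range(steps):
        f = sin(pi * N / x)
        df = -cos(pi * N / x) * pi * N / (x * x)   # f'(x)
        if abs(df) < 1e-12:
            break                                   # avoid division blow-up
        x -= f / df                                 # Newton step
    cand = round(x)
    g = gcd(cand, N)
    return g if 1 < g < N else None

# Starting near 7.2, the iteration settles on the zero at x = 91/13 = 7.
assert newton_factor(91, 7.2) == 7
```

As the abstract notes, this only works for toy sizes: the zeros crowd together as N grows, so locating the right one becomes as hard as trial division.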
Redundant $\tau$-adic Expansions I: Non-Adjacent Digit Sets and their Applications to Scalar Multiplication
This paper investigates some properties of $\tau$-adic expansions of
scalars. Such expansions are widely used in the design of scalar
multiplication algorithms on Koblitz Curves, but at the same
time they are much less understood than their binary counterparts.
Solinas introduced the width-$w$ $\tau$-adic non-adjacent form for
use with Koblitz curves. This is an expansion of integers
$z=\sum_{i=0}^\ell z_i\tau^i$, where $\tau$ is
a quadratic integer depending on the curve, such that $z_i\ne 0$
implies $z_{w+i-1}=\ldots=z_{i+1}=0$,
like the sliding window binary recodings of integers.
It uses a redundant digit set, i.e., an expansion of an integer using
this digit set need not be uniquely determined if the
syntactical constraints are not enforced.
We show that the digit sets described by Solinas, formed by elements
of minimal norm in their residue classes, are uniquely determined.
Apart from this digit set of minimal norm representatives, other digit sets
can be chosen such that all integers can be represented by a width-$w$
non-adjacent form using those digits. We describe an algorithm
recognizing admissible digit sets.
Results by Solinas and by Blake, Murty, and Xu are generalized.
In particular, we introduce two new useful families of digit sets.
The first set is syntactically defined.
As a consequence of its adoption we can
also present improved and streamlined algorithms to perform the
precomputations in $\tau$-adic scalar multiplication methods.
The latter use an improvement of the computation of sums and differences of
points on elliptic curves with mixed affine and López-Dahab
coordinates.
The second set is suitable for low-memory applications, generalizing an approach
started by Avanzi, Ciet, and Sica. It permits devising a scalar multiplication
algorithm that dispenses with the initial precomputation stage
and its associated memory space. A suitable choice of the parameters of the
method leads to a scalar multiplication algorithm on Koblitz Curves
that achieves sublinear complexity in the number of expensive curve operations.
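The binary analogue of the width-$w$ condition is easy to state in code. The sketch below computes an ordinary binary width-$w$ NAF; the $\tau$-adic case studied here replaces division by 2 with division by $\tau$, which is where the digit-set subtleties arise:

```python
def width_w_naf(z, w):
    """Binary width-w NAF: odd digits d with |d| < 2^(w-1), and any nonzero
    digit followed by at least w-1 zero digits (least significant first)."""
    digits = []
    while z:
        if z % 2:
            d = z % (1 << w)              # residue mod 2^w ...
            if d >= 1 << (w - 1):
                d -= 1 << w               # ... mapped to the minimal-norm digit
            z -= d                        # now z is divisible by 2^w
        else:
            d = 0
        digits.append(d)
        z //= 2
    return digits

digits = width_w_naf(314159, 4)
# The expansion reconstructs the scalar ...
assert sum(d << i for i, d in enumerate(digits)) == 314159
# ... and satisfies z_i != 0  =>  z_{i+1} = ... = z_{i+w-1} = 0.
for i, d in enumerate(digits):
    if d:
        assert all(e == 0 for e in digits[i + 1:i + 4])
```

In the binary case this syntactical condition minimises the number of additions; Part II of this pair of papers shows that the same is no longer true in base $\tau$ for $w \in \{4,5,6\}$.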
A Real-World Attack Breaking A5/1 within Hours
Uncategorized
In this paper we present a real-world hardware-assisted attack on the well-known A5/1 stream cipher which is (still) used to secure GSM communication in most countries all over the world. During the last ten years A5/1 has been intensively analyzed. However, most of the proposed attacks are only of theoretical interest, since they lack practicability (due to strong preconditions, high computational demands and/or huge storage requirements) and have never been fully implemented.
In contrast to these attacks, our attack which is based on the work by Keller and Seitz [KS01] is running on an existing special-purpose hardware device, called COPACOBANA. With the knowledge of only 64 bits of keystream the machine is able to reveal the corresponding internal 64-bit state of the cipher in about 7 hours on average. Besides providing a detailed description of our attack architecture as well as implementation results, we propose and analyze an optimization that leads again to an improvement of about 16% in computation time.
Dynamic SHA-2
Uncategorized
In this paper I describe the construction of the Dynamic SHA-2 family of cryptographic hash functions. They are built with design components from the SHA-2 family, but the bits of the message are used as parameters of the functions G and R and of the ROTR operation in the new hash function. This enables a novel design principle: when the message changes, the computation itself changes. This is intended to make the system resistant against all extant attacks.