All papers in 2010 (Page 5 of 660 results)
On FPGA-based implementations of Grøstl
The National Institute of Standards and Technology (NIST) has started a competition for a new secure hash standard. To allow a meaningful comparison of the submitted candidates, third-party implementations of all proposed hash functions are needed. This is one of the reasons why the SHA-3 candidate Grøstl has been chosen for an FPGA-based implementation.
Our work is mainly motivated by current and future developments in the automotive market (e.g. car-to-car communication systems), which will further increase the need for a suitable cryptographic infrastructure in modern vehicles (cf. the AUTOSAR project). One core component of such an infrastructure is a secure cryptographic hash function, which is used in many applications such as challenge-response authentication and digital signature schemes. Another motivation for evaluating Grøstl is its resemblance to AES.
The automotive market, like any mass market, demands low-budget and therefore compact implementations, so our evaluation of Grøstl focuses on area optimization. We show that, although Grøstl is inherently quite large compared to AES, it is still possible to implement the Grøstl algorithm on small, low-budget FPGAs such as the second-smallest available Spartan-3, while maintaining reasonably high throughput.
Bent functions at the minimal distance and algorithms of constructing linear codes for CDMA
In this paper we study linear codes for CDMA (Code Division Multiple Access).
On lower bounds of second-order nonlinearities of cubic bent functions constructed by concatenating Gold functions
In this paper we consider the cubic bent functions obtained by Leander and McGuire (J. Comb. Th. Series A, 116 (2009) 960-970), which are concatenations of quadratic Gold functions. We obtain a lower bound on the second-order nonlinearities of these functions, and compare it with the lower bounds on second-order nonlinearities obtained for functions belonging to some other recently studied classes.
Feasible Attack on the 13-round AES-256
In this note we present the first attack with feasible complexity on the 13-round AES-256. The attack runs in the related-subkey scenario with four related keys, in 2^{76} time, data, and memory.
Last updated: 2010-05-08
On the Public Key Replacement and Universal Forgery Attacks of Short Certificateless Signature
Certificateless cryptography eliminates the need for certificates in the PKI and solves the inherent key escrow problem of ID-based cryptography. Recently, Du and Wen proposed a short certificateless signature scheme without a MapToPoint hash function, whose signatures are only half the size of DSA signatures. In this paper, after detailing the formal model of certificateless signature schemes, we show that Du and Wen's short certificateless signature scheme is insecure: it can be broken by a type-I adversary with the ability to replace users' public keys and access the signing oracles, and it also cannot resist a universal forgery attack by any third party.
Automorphism group of the set of all bent functions
A Boolean function in an even number of variables is called {\it bent} if it is at the maximal possible Hamming distance from the class of all affine Boolean functions. We prove that every isometric mapping of the set of all Boolean functions into itself that transforms bent functions into bent functions is a combination of an affine transform of coordinates and an affine shift.
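As a concrete illustration (not part of the paper), bentness is easy to test computationally: a function on $n$ (even) variables is bent iff every coefficient of its Walsh-Hadamard spectrum has absolute value $2^{n/2}$. A minimal Python sketch, using the classic bent function $x_1x_2 \oplus x_3x_4$ as an example:

```python
# Sketch: testing bentness via the Walsh-Hadamard transform.
# f is bent iff every Walsh coefficient has absolute value 2^(n/2).

def walsh_transform(truth_table):
    """Walsh-Hadamard transform of (-1)^f, given f's 0/1 truth table."""
    w = [1 - 2 * bit for bit in truth_table]  # map {0,1} -> {+1,-1}
    h, n = 1, len(w)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def is_bent(truth_table):
    n_vars = len(truth_table).bit_length() - 1
    if n_vars % 2:  # bent functions exist only for even n
        return False
    target = 2 ** (n_vars // 2)
    return all(abs(c) == target for c in walsh_transform(truth_table))

# f(x1,x2,x3,x4) = x1*x2 XOR x3*x4 is bent in 4 variables.
tt = [(((x >> 3) & 1) * ((x >> 2) & 1)) ^ (((x >> 1) & 1) * (x & 1))
      for x in range(16)]
print(is_bent(tt))  # True
```

By contrast, any affine function has a single Walsh coefficient of absolute value $2^n$ and fails the test.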
Cryptanalysis of XXTEA
XXTEA, or Corrected Block TEA, is a simple block cipher in Roger Needham and David Wheeler's TEA series of algorithms. We describe a chosen plaintext attack for XXTEA using about $2^{59}$ queries and negligible work.
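For context, XXTEA itself is compact and public. The following Python sketch transcribes the standard Needham-Wheeler reference routine (the attack described above is not implemented here; block and key values are illustrative):

```python
# Minimal sketch of XXTEA (Corrected Block TEA), after Needham and Wheeler.
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF

def _mx(s, y, z, p, e, key):
    return ((((z >> 5) ^ (y << 2)) + ((y >> 3) ^ (z << 4))) ^
            ((s ^ y) + (key[(p & 3) ^ e] ^ z))) & MASK

def xxtea_encrypt(v, key):
    """Encrypt a list of n >= 2 32-bit words in place with a 4-word key."""
    n = len(v)
    rounds = 6 + 52 // n
    s, z = 0, v[-1]
    for _ in range(rounds):
        s = (s + DELTA) & MASK
        e = (s >> 2) & 3
        for p in range(n):
            y = v[(p + 1) % n]
            v[p] = (v[p] + _mx(s, y, z, p, e, key)) & MASK
            z = v[p]
    return v

def xxtea_decrypt(v, key):
    n = len(v)
    rounds = 6 + 52 // n
    s = (rounds * DELTA) & MASK
    y = v[0]
    for _ in range(rounds):
        e = (s >> 2) & 3
        for p in range(n - 1, -1, -1):
            z = v[p - 1]  # wraps to v[n-1] when p == 0, as in the reference code
            v[p] = (v[p] - _mx(s, y, z, p, e, key)) & MASK
            y = v[p]
        s = (s - DELTA) & MASK
    return v

block = [0x01234567, 0x89ABCDEF]
key = [0xDEADBEEF, 0x00C0FFEE, 0x0BADF00D, 0x12345678]
enc = xxtea_encrypt(list(block), key)
print(xxtea_decrypt(list(enc), key) == block)  # True
```

Note that the number of mixing rounds, $6 + 52/n$, shrinks as the block grows, which is part of what a chosen-plaintext attack on large blocks can exploit.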
Separable Hash Functions
We introduce a class of hash functions with the property that messages with the same hash are well separated in terms of their Hamming distance. We provide an example of such a function that uses cyclic codes and an elliptic curve group over a finite field.
A related problem is ensuring that the {\it consecutive distance} between messages with the same hash is as large as possible. We derive bounds on the c.d. separability factor of such hash functions.
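To make the notion of separation concrete, here is a toy Python sketch (using a deliberately weak illustrative hash, not the paper's cyclic-code construction) that measures the minimum Hamming distance between any two distinct messages with the same hash:

```python
# Toy illustration: the "separability" of a hash is the minimum Hamming
# distance between colliding messages. The hash below is illustrative only.
from itertools import combinations

def toy_hash(msg_bits):
    """A weak 4-bit hash: XOR of the message's 4-bit chunks."""
    h = 0
    for i in range(0, len(msg_bits), 4):
        h ^= int(''.join(map(str, msg_bits[i:i + 4])), 2)
    return h

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Enumerate all 8-bit messages and find the closest colliding pair.
msgs = [[(m >> i) & 1 for i in range(8)] for m in range(256)]
min_dist = min(hamming(a, b) for a, b in combinations(msgs, 2)
               if toy_hash(a) == toy_hash(b))
print(min_dist)  # 2
```

For this toy hash, two messages collide exactly when the same difference is applied to both chunks, so the closest colliding pair differs in just 2 bits; a well-separated hash in the paper's sense would force this minimum to be much larger.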
A supplement to Liu et al.'s certificateless signcryption scheme in the standard model
Recently, Liu et al. proposed the first certificateless signcryption scheme without random oracles and proved it semantically secure in the standard model. However, Selvi et al. mounted a fatal attack on its confidentiality by replacing users' public keys, showing that the scheme does not in fact achieve the claimed semantic security. In this paper, we present a rescue scheme based on Liu et al.'s original proposal: a Schnorr-based one-time signature is added to each user's public key to resist Selvi et al.'s attack. In addition, in light of the mistake in Liu et al.'s security proof, we show that our improvement is indeed secure in the standard model under the decisional bilinear Diffie-Hellman assumption.
Modeling Attacks on Physical Unclonable Functions
We show in this paper how several proposed Physical Unclonable Functions (PUFs) can be broken by numerical modeling attacks. Given a set of challenge-response pairs (CRPs) of a PUF, our attacks construct a computer algorithm which behaves indistinguishably from the original PUF on almost all CRPs. This algorithm can subsequently impersonate the PUF, and can be cloned and distributed arbitrarily. This breaks the security of essentially all applications and protocols that are based on the respective PUF.
The PUFs we attacked successfully include standard Arbiter PUFs and Ring Oscillator PUFs of arbitrary sizes, and XOR Arbiter PUFs, Lightweight Secure PUFs, and Feed-Forward Arbiter PUFs of up to a given size and complexity. Our attacks are based upon various machine learning techniques, including Logistic Regression and Evolution Strategies. Our work will be useful to PUF designers and attackers alike.
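To illustrate why such attacks work on the simplest case (a toy sketch, not the authors' code): an Arbiter PUF's response follows a linear additive-delay model $r = \mathrm{sign}(w \cdot \phi(c))$, so logistic regression on simulated CRPs recovers a near-perfect software clone. All parameters below are illustrative.

```python
# Toy illustration: learning a simulated Arbiter PUF with logistic regression.
# The PUF is modeled as r = sign(w . phi(c)), the standard linear delay model.
import math
import random

random.seed(1)
N_STAGES = 16

def features(challenge):
    """Standard parity transform phi(c) for the arbiter delay model."""
    phi = []
    for i in range(N_STAGES):
        prod = 1
        for c in challenge[i:]:
            prod *= 1 - 2 * c      # map bit 0/1 -> +1/-1
        phi.append(prod)
    return phi + [1]               # bias term

true_w = [random.gauss(0, 1) for _ in range(N_STAGES + 1)]  # hidden delays

def puf_response(challenge):
    return 1 if sum(w * x for w, x in zip(true_w, features(challenge))) > 0 else 0

# Collect CRPs, then fit logistic regression by plain stochastic gradient descent.
crps = [[random.randint(0, 1) for _ in range(N_STAGES)] for _ in range(1500)]
train, test = crps[:1000], crps[1000:]
train_x = [features(c) for c in train]
train_y = [puf_response(c) for c in train]

w = [0.0] * (N_STAGES + 1)
for _ in range(50):
    for x, r in zip(train_x, train_y):
        z = max(-30.0, min(30.0, sum(wi * xi for wi, xi in zip(w, x))))
        pred = 1 / (1 + math.exp(-z))
        for i in range(len(w)):
            w[i] += 0.1 * (r - pred) * x[i]

model = lambda c: 1 if sum(wi * xi for wi, xi in zip(w, features(c))) > 0 else 0
accuracy = sum(model(c) == puf_response(c) for c in test) / len(test)
print(round(accuracy, 2))
```

The learned model typically predicts unseen CRPs with well over 90% accuracy from a modest training set, which is the essence of the cloning attack; the structures attacked in the paper (XOR, Lightweight, Feed-Forward variants) require more CRPs and stronger learners.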
Collusion Free Protocol for Rational Secret Sharing
We consider the \textit{rational secret sharing problem} introduced by Halpern and Teague\cite{ht04}, where players prefer to learn the secret rather than not, and secondarily prefer that as few of the other players as possible learn it. Kol and Naor\cite{stoc08} derived some positive results by assuming that players only prefer to learn, proposing an efficient $m$-out-of-$n$ protocol for rational secret sharing that does not use cryptographic primitives. Their solution distinguishes two types of players: one short player and the remaining long players. Their protocol is, however, susceptible to coalitions if the short player colludes with any of the long players. We extend their protocol and propose a completely collusion-free, $\varepsilon$-Nash equilibrium protocol for $n \geq 2m-1$, where $n$ is the number of players and $m$ is the number of shares needed to reconstruct the secret.
Rational Secret Sharing without Broadcast
We consider the concept of rational secret sharing, initially introduced by Halpern and Teague \cite{ht04}, where players prefer to learn the secret rather than not, and moreover prefer that as few others as possible learn it. This paper introduces a rational secret sharing scheme that differs from previous RSS schemes in that it does not rely on broadcast to send messages, using point-to-point transmissions instead. Moreover, the protocol does not rely on any cryptographic primitives and is coalition-resilient, except when the short player colludes with a long player.
Automatic Search for Related-Key Differential Characteristics in Byte-Oriented Block Ciphers: Application to AES, Camellia, Khazad and Others
While the differential behavior of modern ciphers in the single-secret-key scenario is relatively well understood, and simple techniques for computing security lower bounds are readily available, the security of modern block ciphers against related-key attacks is still very ad hoc. In this paper we make a first step towards provable security of block ciphers against related-key attacks by presenting an efficient search tool for finding differential characteristics both in the state and in the key (note that, due to the similarities between block ciphers and hash functions, such a tool will be useful in the analysis of hash functions as well). We use this tool to search for the best possible (in terms of the number of rounds) related-key differential characteristics in AES, byte-Camellia, Khazad, FOX, and Anubis. We show the best related-key differential characteristics for 5, 11, and 14 rounds of AES-128, AES-192, and AES-256, respectively. We use the optimal differential characteristics to design the best related-key and chosen-key attacks on AES-128 (7 out of 10 rounds), AES-192 (full 12 rounds), byte-Camellia (full 18 rounds) and Khazad (7 and 8 out of 8 rounds). We also show that the ciphers FOX and Anubis have no related-key attacks on more than 4-5 rounds.
A New Joint Fingerprinting and Decryption Scheme based on a Lattice Problem
We propose a new encryption scheme that supports joint fingerprinting and decryption. The scheme is remarkably resistant to known-plaintext attacks and to collusion attacks on keys (e.g. the average attack or other linear-combination attacks). Interestingly, the security of our scheme relies on a lattice problem: given a collection of random lattice points generated from a short basis of a lattice, find the short basis. The scheme can be used as a traitor-tracing scheme or a buyer-seller watermarking scheme.
Quantifying Trust
Trust is a central concept in public-key cryptography infrastructure and in security in general. We study its initial quantification and its spread patterns. There is empirical evidence that in trust-based reputation models for virtual communities, it pays to restrict the clusters of agents to small sets with high mutual trust. We propose and motivate a mathematical model in which this phenomenon emerges naturally. In our model, we separate trust values from their weights. We motivate this separation using real examples, and show that in this model trust converges to the extremes, agreeing with and accentuating the observed phenomenon. Specifically, in our model, cliques of agents of maximal mutual trust are formed, and the trust between any two agents that do not maximally trust each other converges to zero. We offer initial practical relaxations of the model that preserve some of its theoretical flavor.
Towards a Theory of Trust Based Collaborative Search
Trust Based Collaborative Search is an interactive metasearch engine that presents the user with clusters of results, based not only on the similarity of content but also on the similarity of the recommending agents. The theory presented here is broad enough to cover search, browsing, recommendations, demographic profiling, and consumer targeting; we use search as our running example. We develop a novel general trust theory and, in this context, as a special case, equate trust between agents with the similarity between their search behaviors. The theory suggests that clusters should be close to maximal similarity within a tolerance dictated by the amount of uncertainty about the vectors of probabilities of attributes representing queries, pages, and agents. In addition, we give a new theoretical analysis of clustering tolerances, enabling more judicious decisions about optimal tolerances. Specifically, we show that tolerances should at least be divided by a constant $>1$ as we descend from one layer in the hierarchical clustering to the next. We also show a promising connection between collaborative search and cryptography: a query plays the role of a cryptogram, the search engine is the cryptanalyst, and the user's intention is the cleartext. Shannon's unicity distance is the length of the search; it is needed to quantify the clustering tolerance.
Authenticating Aggregate Range Queries over Dynamic Multidimensional Dataset
We are interested in the integrity of query results from an outsourced database service provider. Alice passes a set $\set{D}$ of $d$-dimensional points, together with an authentication tag $\set{T}$, to an untrusted service provider Bob. Later, Alice issues a query over $\set{D}$ to Bob, and Bob produces a query result and a proof based on $\set{D}$ and $\set{T}$. Alice wants to verify the integrity of the query result with the help of the proof, using only her private key. Xu J.~\emph{et al.}~\cite{maia-full} proposed an authentication scheme that solves this problem for multidimensional aggregate range queries, including {\SUM, \COUNT, \MIN, \MAX} and {\MEDIAN}, and for multidimensional range selection queries, with $O(d^2)$ communication overhead. However, their scheme applies only to static databases. This paper extends their method to support dynamic operations on the dataset, namely inserting and deleting points. The communication overhead of our scheme is $O(d^2 \log N)$, where $N$ is the number of data points in the dataset.
Construction of 1-Resilient Boolean Functions with Optimal Algebraic Immunity and Good Nonlinearity
This paper presents a construction for a class of 1-resilient Boolean functions with optimal algebraic immunity on an even number of variables by dividing them into two correlation classes, i.e. equivalence classes. From these, a nontrivial pair of functions is found by applying the generating matrix. For small $n$ (e.g. $n=6$), some of these functions achieve almost-optimal nonlinearity. Apart from their good nonlinearity, the functions reach Siegenthaler's \cite{Siegenthaler} upper bound on algebraic degree. Furthermore, a class of 1-resilient functions on any number $n>2$ of variables with at least sub-optimal algebraic immunity is provided.
Efficient Access Control of Sensitive Data Service in Outsourcing Scenarios
With the rapid adoption of service-oriented technologies, service and data outsourcing has become a practical and useful computing paradigm. Many researchers have proposed combining access control and cryptography to protect information in this outsourcing scenario. However, existing approaches often limit dynamic updates of the access control policy, or have security weaknesses in practical use. In this paper, we propose a new solution that realizes efficient access control of sensitive data services in outsourcing scenarios by using a new re-encryption execution model. Our solution provides selective access control, dynamic policy updating, simple key management, and prevention of collusion between the outsourcee and customers. We also provide proofs for our implementation.
Improved Delegation of Computation using Fully Homomorphic Encryption
Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive 2009/547), we use fully homomorphic encryption to design improved schemes for delegating computation. In such schemes, a delegator outsources the computation of a function $F$ on many, dynamically chosen inputs $x_i$ to a worker in such a way that it is infeasible for the worker to make the delegator accept a result other than $F(x_i)$. The "online stage" of the Gennaro et al. scheme is very efficient: the parties exchange two messages, the delegator runs in time $poly(\log T)$, and the worker runs in time $poly(T)$, where $T$ is the time complexity of $F$. However, the "offline stage" (which depends on the function $F$ but not the inputs to be delegated) is inefficient: the delegator runs in time $poly(T)$ and generates a public key of length $poly(T)$ that needs to be accessed by the worker during the online stage.
Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests $poly(T)$ time in the offline stage, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline stage to $poly(\log T)$ at the price of a 4-message (offline) interaction with a $poly(T)$-time worker (which need not be the same as the workers used in the online stage). Finally, we describe a "pipelined" implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).
Weaknesses of a dynamic ID-based remote user authentication scheme
The security of a password authentication scheme using smart cards proposed by Khan et al. is analyzed. Four kinds of attacks are presented in different scenarios. The analyses show that the scheme is insecure for practical application.
One-round and authenticated three-party multiple key exchange protocol from pairings
One-round three-party authenticated key exchange protocols are extremely important for secure communications and are now extensively adopted in network communications. These protocols allow users to communicate securely over public networks simply by using easy-to-remember long-term private keys. In 2001, Harn and Lin proposed an authenticated key exchange protocol in which two parties generate four shared keys in one round, three of which provide perfect forward secrecy. This work, which aims to generalize two-party multiple key agreement to the three-party setting, presents a three-party multiple key exchange protocol based on bilinear pairings. The proposed protocol does not require the server's public key and requires only a single round. Compared with existing protocols, the proposed protocol is more efficient and provides greater security.
Collusion Free Protocol for Correlated Element Selection Problem
A common problem in many markets is that competing firms cannot plan socially beneficial joint business strategies, as each firm has its own preferred strategy that would yield higher profits for itself and lower profits for the others. The problem is complicated by the fact that no firm need stick to its commitment to follow the pre-designated strategy. Game theory suggests a way to enforce this commitment: when every player chooses his actions according to his observation of the value of a common public signal and, assuming the others do not deviate, no player is willing to deviate from his recommended strategy, since following it yields a higher expected pay-off than playing individually. The common public channel can be a trusted external mediator which sends each player his recommended strategy; this mediator can be simulated by a cryptographic protocol that all the players agree to implement. The problem of constructing such a protocol is known as the \textit{Correlated Element Selection Problem}. The first two-player protocol was proposed by Dodis et al.\cite{dhr00} at Crypto 2000. Extending the two-player protocol to $n$ players is highly prone to collusion, as two firms can collude and cheat the rest. The main contribution of this paper is the first $n$-player collusion-free protocol for the \textit{correlated element selection problem} that does not use hardware primitives. We assume that players are honest but curious.
A New Security Model for Authenticated Key Agreement
The Canetti--Krawczyk (CK) and extended Canetti--Krawczyk (eCK) security models are widely used to provide security arguments for key agreement protocols. We discuss security shortcomings in the (e)CK models, and some practical attacks not considered in (e)CK security arguments.
We propose a strong security model which encompasses the eCK one. We also propose a new protocol, called Strengthened MQV (SMQV), which, in addition to providing the same efficiency as the (H)MQV protocols, is particularly suited for distributed implementations wherein a tamper-proof device is used to store long-lived keys while session keys are used on an untrusted host machine. The SMQV protocol meets our security definition under the Gap Diffie--Hellman assumption in the random oracle model.
Accountability: Definition and Relationship to Verifiability
Many cryptographic tasks and protocols, such as non-repudiation, contract-signing, voting, auction, identity-based encryption, and certain forms of secure multi-party computation, involve the use of (semi-)trusted parties, such as notaries and authorities.
It is crucial that such parties can be held accountable in case they misbehave, as this is a strong incentive for them to follow the protocol. Unfortunately, there is no general and convincing definition of accountability that would allow one to assess the level of accountability a protocol provides.
In this paper, we therefore propose a new, widely applicable definition of accountability, with interpretations both in symbolic and computational models. Our definition reveals that accountability is closely related to verifiability, for which we also propose a new definition. We prove that verifiability can be interpreted as a restricted form of accountability. Our findings on verifiability are of independent interest.
As a proof of concept, we apply our definitions to the analysis of protocols for three different tasks: contract-signing, voting, and auctions. Our analysis unveils some subtleties and unexpected weaknesses, showing in one case that the protocol is unusable in practice. However, for this protocol we propose a fix to establish a reasonable level of accountability.
Attribute-based group key establishment
Motivated by the problem of establishing a session key among parties based solely on the possession of certain credentials, we discuss a notion of attribute-based key establishment. A number of new issues arise in this setting that are not present in the usual settings of group key establishment, where unique user identities are assumed to be publicly available.
After detailing the security model, we give a two-round solution in the random oracle model. As our main technical tool we introduce a notion of attribute-based signcryption, which may be of independent interest. We show that the type of signcryption needed can be realized through the encrypt-then-sign paradigm. Further, we discuss additional guarantees of the proposed protocol that can be interpreted in terms of deniability and privacy.
Efficient provable data possession for hybrid clouds
Provable data possession is a technique for ensuring the integrity of data in outsourced storage services. In this paper, we propose a cooperative provable data possession scheme for hybrid clouds that supports service scalability and data migration, in which multiple cloud service providers cooperatively store and maintain the clients' data. Our experiments show that verification in our scheme requires only a small, constant amount of overhead, which minimizes communication complexity.
Commuting Signatures and Verifiable Encryption and an Application to Non-Interactively Delegatable Credentials
Verifiable encryption allows one to encrypt a signature and prove that the plaintext is valid. We introduce a new primitive called commuting signatures that extends verifiable encryption in multiple ways: a signer can encrypt both signature and message and prove validity; more importantly, given a ciphertext, a signer can create a verifiably encrypted signature on the encrypted message; thus signing and encrypting commute. We instantiate commuting signatures using the proof system by Groth and Sahai (EUROCRYPT '08) and the automorphic signatures by Fuchsbauer (ePrint report 2009/320). As an application, we give an instantiation of delegatable anonymous credentials, a powerful primitive introduced by Belenkiy et al. (CRYPTO '09). Our instantiation is arguably simpler than theirs and is the first to provide non-interactive issuing and delegation, which is a standard requirement for non-anonymous credentials. Moreover, the size of our credentials and the cost of verification are less than half of those of the only previous construction, and the efficiency of issuing and delegation is increased even more significantly. All our constructions are proved secure in the standard model.
On Representable Matroids and Ideal Secret Sharing
Uncategorized
Uncategorized
In secret sharing, the exact characterization of ideal access structures is a longstanding open problem. Brickell and Davenport (J. of Cryptology, 1991) proved that ideal access structures are induced by matroids. Subsequently, ideal access structures and access structures induced by matroids have attracted a lot of attention. Due to the difficulty of finding general results, the characterization of ideal access structures has been studied for several particular families of access structures. In all these families, all the matroids that are related to access structures in the family are representable and, then, the matroid-related access structures coincide with the ideal ones.
In this paper, we study the characterization of representable matroids. By using the well known connection between ideal secret sharing and matroids and, in particular, the recent results on ideal multipartite access structures and the connection between multipartite matroids and discrete polymatroids, we obtain a characterization of a family of representable multipartite matroids, which implies a sufficient condition for an access structure to be ideal.
By using this result and further introducing reduced discrete polymatroids, we provide a complete characterization of quadripartite representable matroids, which was until now an open problem; hence, all access structures related to quadripartite representable matroids are ideal. In addition, using our results, we give a new and simple proof that all access structures related to unipartite, bipartite and tripartite matroids coincide with the ideal ones.
Throughput-Optimal Routing in Unreliable Networks
We demonstrate the feasibility of throughput-efficient routing in a highly unreliable network. Modeling a network as a graph with vertices representing nodes and edges representing the links between them, we consider two forms of unreliability: unpredictable edge-failures, and deliberate deviation from protocol specifications by corrupt nodes. The first form of unpredictability represents networks with dynamic topology, whose links may be constantly going up and down; while the second form represents malicious insiders attempting to disrupt communication by deliberately disobeying routing rules, by e.g. introducing junk messages or deleting or altering messages. We present a robust routing protocol for end-to-end communication that is simultaneously resilient to both forms of unreliability, achieving provably optimal throughput performance. Our proof proceeds in three steps: 1) We use competitive-analysis to find a lower-bound on the optimal throughput-rate of a routing protocol in networks susceptible to only edge-failures (i.e. networks with no malicious nodes); 2) We prove a matching upper bound by presenting a routing protocol that achieves this throughput rate (again in networks with no malicious nodes); and 3) We modify the protocol to provide additional protection against malicious nodes, and prove the modified protocol performs (asymptotically) as well as the original.
A calculus for game-based security proofs
The game-based approach to security proofs in cryptography is a widely used methodology for writing proofs rigorously. However, a unifying language for writing games is still missing. In this paper we show how CSLR, a probabilistic lambda-calculus with a type system that guarantees that computations are probabilistic polynomial time, can be equipped with a notion of game indistinguishability. This allows us to define cryptographic constructions, effective adversaries, security notions, computational assumptions, game transformations, and game-based security proofs in the unified framework provided by CSLR. Our code for cryptographic constructions is close to implementation in the sense that we do not assume primitive uniform distributions but use a realistic algorithm to approximate them. We illustrate our calculus on cryptographic constructions for public-key encryption and pseudorandom bit generation.
Concurrent composition in the bounded quantum storage model
We define the BQS-UC model, a variant of the UC model, that deals with protocols in the bounded quantum storage model. We present a statistically secure commitment protocol in the BQS-UC model that composes concurrently with other protocols and an (a-priori) polynomially-bounded number of instances of itself. Our protocol has an efficient simulator which is important if one wishes to compose our protocol with protocols that are only computationally secure. Combining our result with prior results, we get a statistically BQS-UC secure protocol for general two-party computation without the need for any setup assumption. The round complexity of that protocol is linear in the circuit depth.
Practical NFC Peer-to-Peer Relay Attack using Mobile Phones
NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker's proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker's own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using `passive' NFC on mobile phones.
A Security Weakness in Composite-Order Pairing-Based Protocols with Imbedding Degree $k>2$
In this note we describe a security weakness in pairing-based protocols when the group order is composite and the imbedding degree $k$ is greater than $2$.
Circular and Leakage Resilient Public-Key Encryption Under Subgroup Indistinguishability (or: Quadratic Residuosity Strikes Back)
The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier's decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret key leakage and high resilience to the presence of auxiliary input information.
In particular, under what we call the {\it subgroup indistinguishability assumption}, of which the QR and DCR are special
cases, we can construct a scheme that has:
* Key-dependent message (circular) security. Achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t. affine ``key-cycles'' of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions using previous techniques of [BGK, ePrint09] or [BHHI, Eurocrypt10].
* Leakage resiliency. Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret key is given to the adversary. A proper selection of parameters allows for a ``leakage rate'' of $(1-o(1))$ of the length of the secret key.
* Auxiliary-input security. Remains secure even if any sufficiently \emph{hard to invert} (efficiently computable) function of the secret key is given to the adversary.
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate $(1-o(1))$ of the secret key length, under the QR assumption. We note that leakage resilient schemes under the DCR and the QR assumptions, for the restricted case of composite modulus product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate
of $o(1)$ of the secret key length.
A Security Weakness in a Generic Construction of a Group Key Exchange Protocol
Protocols for group key exchange are cryptographic algorithms that allow a group of parties communicating over a public network to come up with a common secret key. One of the interesting results of research on group key exchange is the protocol compiler presented by Abdalla et al.~in TCC '07. Abdalla et al.'s compiler shows how one can transform any authenticated 2-party key exchange protocol into an authenticated group key exchange protocol with 2 more rounds of communication. This compiler is certainly elegant in its generality, symmetry, simplicity and efficiency. However, the situation completely changes when it comes to security. In this work, we reveal a major security weakness in Abdalla et al.'s compiler and show how to address it. The security weakness uncovered here implies that Abdalla et al.'s proof of security for their compiler is invalid.
Efficient Implementation of the Orlandi Protocol (Extended Version)
We present an efficient implementation of the Orlandi protocol, which is the first implementation of a protocol for multiparty computation on arithmetic circuits that is secure against up to $n-1$ static, active adversaries. An efficient implementation of an actively secure self-trust protocol enables a number of multiparty computation applications in which one or more of the parties trust only themselves. Examples include auctions, negotiations, and online gaming. The efficiency of the implementation is largely obtained through an efficient implementation of the Paillier cryptosystem, also described in this paper.
Improved Differential Attacks for ECHO and Grøstl
We present improved cryptanalysis of two second-round SHA-3 candidates: the AES-based hash functions ECHO and Grøstl. We explain methods for building better differential trails for ECHO by increasing the granularity of the truncated differential paths previously considered. In the case of Grøstl, we describe a new technique, the internal differential attack, which shows that when using parallel computations designers should also consider the differential security between the parallel branches. We then exploit the recently introduced start-from-the-middle and Super-Sbox attacks, which proved to be very efficient against AES-like permutations, to make very efficient use of the available degrees of freedom. Finally, we obtain the best attacks known so far for both ECHO and Grøstl. In particular, we are able to mount a distinguishing attack on the full Grøstl-256 compression function.
Some Observations on Indifferentiability
At Crypto 2005, Coron et al. introduced a formalism to study the presence or absence of structural flaws in iterated hash functions: If one cannot differentiate a hash function using ideal
primitives from a random oracle, it is considered structurally sound, while the ability to differentiate it from a random oracle indicates a structural weakness. This model was devised as a tool to detect subtle real-world weaknesses even while working in the random oracle model. In this paper we take a practical point of view.
We show, using well known examples like NMAC and the Mix-Compress-Mix (MCM) construction, how we can prove a hash construction secure and insecure at the same time in the indifferentiability setting.
These constructions do not differ in their implementation but only on an abstract level. Naturally, this gives rise to the question of what to conclude for the implemented hash function.
Our results cast doubt on the notion of “indifferentiability from a random oracle” as a mandatory, practically relevant criterion (as proposed, e.g., by Knudsen [16] for the SHA-3 competition) for separating good hash structures from bad ones.
Solving Generalized Small Inverse Problems
We introduce the ``generalized small inverse problem (GSIP)'' and present an algorithm for solving it. GSIP is formulated as finding small solutions of $f(x_0, x_1, \ldots , x_n)=x_0 h(x_1, \ldots , x_n)+C \equiv 0 \pmod{M}$ for an $n$-variate polynomial $h$ and non-zero integers $C$ and $M$. Our algorithm is based on the lattice-based Coppersmith technique. We provide a strategy for constructing a lattice basis for solving $f=0$, which is systematically transformed from a lattice basis for solving $h=0$. We then derive, in explicit form, an upper bound on the solution size such that the target problem can be solved in time polynomial in $\log M$. Since GSIPs include several RSA-related problems, our algorithm is applicable to them; for example, the small secret exponent attack of Boneh and Durfee is recovered automatically.
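As a concrete illustration of the GSIP template (a standard textbook reduction, not taken verbatim from the paper), the Boneh-Durfee small secret exponent setting can be cast in this form:

```latex
% Boneh--Durfee: ed = 1 + k\varphi(N) with \varphi(N) = N + 1 - (p + q).
% Reducing modulo e gives
\[
  k\bigl(N + 1 - (p + q)\bigr) + 1 \equiv 0 \pmod{e},
\]
% which matches f(x_0, x_1) = x_0 h(x_1) + C \equiv 0 \pmod{M} with
\[
  x_0 = k,\qquad h(x_1) = N + 1 + x_1 \ \text{(where } x_1 = -(p+q)\text{)},\qquad
  C = 1,\qquad M = e.
\]
```

Here $k$ and $x_1$ are small whenever the private exponent $d$ is small, which is exactly the setting in which lattice-based Coppersmith methods apply.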
(If) Size Matters: Size-Hiding Private Set Intersection
Modern society is increasingly dependent on, and fearful of, the availability of electronic information. There are numerous examples of situations where sensitive data must be -- sometimes reluctantly -- shared between two or more entities without mutual trust. As often happens, the research community has foreseen the need for mechanisms to enable limited (privacy-preserving) sharing of sensitive information and a number of effective solutions have been proposed. Among them, Private Set Intersection (PSI) techniques are particularly appealing for scenarios where two parties wish to compute an intersection of their respective sets of items without revealing to each other {\em any other information}. Thus far, "any other information" has been interpreted to mean any information about items not in the intersection.
In this paper, we motivate the need for Private Set Intersection with a stronger privacy property of {\em hiding the size} of the set held by one of the two entities ("client"). We introduce the notion of Size-Hiding Private Set Intersection (SHI-PSI) and propose an efficient construction secure under the RSA assumption in the Random Oracle Model. We also show that input size-hiding is attainable at very low additional cost.
Tracker: Security and Privacy for RFID-based Supply Chains
The counterfeiting of pharmaceutics or luxury objects is a major
threat to supply chains today. As different facilities of a
supply chain are distributed and difficult to monitor, malicious
adversaries can inject fake objects into the supply chain. This
paper presents Tracker, a protocol for object genuineness verification
in RFID-based supply chains. More precisely, Tracker makes it possible to securely
identify which (legitimate) path an object/tag has taken through a
supply chain. Tracker provides privacy: an adversary can neither learn
details about an object's path, nor trace and link objects in the
supply chain. Tracker's security and privacy are based on an extension of
polynomial signature techniques for run-time fault detection using
homomorphic encryption. Contrary to related work, RFID tags in this
paper are not required to perform \emph{any computation}, but only
feature a few bytes of storage such as ordinary EPC Class 1 Gen 2
tags.
Last updated: 2012-01-01
New Montgomery-based Semi-systolic Multiplier for Even-type GNB of GF(2^m)
Efficient finite field multiplication is crucial for implementing public-key cryptosystems. Based on a new Gaussian normal basis Montgomery (GNBM) representation, this paper presents a semi-systolic even-type GNBM multiplier. Compared with the only existing semi-systolic even-type GNB multiplier, the proposed multiplier saves about 57% in space complexity and 50% in time complexity.
Secure Code Update for Embedded Devices via Proofs of Secure Erasure
Remote attestation is the process of verifying the internal state of a remote
embedded device. It is an important component of many security protocols and
applications. Although techniques assisted by specialized secure hardware are
effective, they are not yet viable for low-cost embedded devices. One notable
alternative is software-based attestation which is both less costly and more
efficient. However, recent results identified weaknesses in some proposed
methods, thus showing that security of remote software attestation remains a
challenge.
Inspired by these developments, this paper explores a different approach that
relies neither on secure hardware nor on tight timing constraints. By taking
advantage of the bounded memory/storage model of low-cost embedded devices and
assuming a small amount of read-only memory (ROM), our approach uses a new primitive --
Proofs of Secure Erasure (PoSE-s). We show that, even though our PoSE-based
approach is effective and provably secure, it is not cheap. However, it is
particularly well-suited and practical for two other related tasks: secure code
update and secure memory/storage erasure. We consider several flavors of
PoSE-based protocols and demonstrate their feasibility in the context of
existing commodity embedded devices.
Distinguishing Attacks on MAC/HMAC Based on A New Dedicated Compression Function Framework
A new distinguishing attack on HMAC and NMAC based on a dedicated compression function framework H, proposed in ChinaCrypt 2008, is first presented in this paper; it distinguishes HMAC/NMAC-H from HMAC/NMAC with a random function. The attack needs 2^{17} chosen messages and 2^{23} queries, with a success rate of 0.873. Furthermore, building on the distinguishing attack on SPMAC-H, a key recovery attack on SPMAC-H is presented, which recovers the full 256-bit key with 2^{17} chosen messages, 2^{19} queries, and (t+1)x8 executions of the decryption algorithm.
On the q-Strong Diffie-Hellman Problem
This note is an exposition of reductions among the q-strong Diffie-Hellman problem and related problems.
How to Tell if Your Cloud Files Are Vulnerable to Drive Crashes
This paper presents a new challenge---verifying that a remote server is storing a file in a fault-tolerant manner, i.e., such that it can survive hard-drive failures. We describe an approach called the Remote Assessment of Fault Tolerance (RAFT). The key technique in a RAFT is to measure the time taken for a
server to respond to a read request for a collection of file blocks. The larger the number of hard drives across which a file is distributed, the faster the read-request response. Erasure codes also play an important role in our solution. We describe a theoretical framework for RAFTs and show experimentally that RAFTs can work in practice.
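The timing argument behind RAFT can be sketched numerically. The toy model below is our own illustration, not the paper's protocol: we assume each block read costs a fixed per-drive service time (`SEEK_MS` is an assumed constant), blocks are striped evenly, drives work in parallel, and the verifier inverts the measured latency to estimate the number of drives.

```python
import math

SEEK_MS = 12.0  # assumed average seek + read time per block on one drive

def expected_response_ms(num_blocks: int, num_drives: int) -> float:
    """Blocks striped evenly: each drive serves ceil(num_blocks/num_drives)
    requests sequentially, and the drives work in parallel."""
    return math.ceil(num_blocks / num_drives) * SEEK_MS

def estimate_drives(num_blocks: int, measured_ms: float) -> int:
    """Pick the drive count whose predicted response time best matches
    the measured read latency."""
    candidates = range(1, num_blocks + 1)
    return min(candidates,
               key=lambda d: abs(expected_response_ms(num_blocks, d) - measured_ms))

# A server striping 100 blocks over 4 drives answers in about 25 * 12 = 300 ms,
# while a single-drive server needs about 100 * 12 = 1200 ms.
print(estimate_drives(100, 300.0))   # -> 4
print(estimate_drives(100, 1200.0))  # -> 1
```

The monotone relationship between drive count and latency is what lets the verifier detect a server that claims fault-tolerant striping but actually stores the file on too few drives.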
Composable Security Analysis of OS Services
We provide an analytical framework for analyzing basic integrity properties of file systems, namely the binding of files to filenames and writing capabilities. A salient feature of our modeling and analysis is that it is *composable*: In spite of the fact that we analyze the filesystem in isolation, security is guaranteed even when the file system operates as a component within an arbitrary, and potentially adversarial system. Such secure composability properties seem essential when trying to assert the security of large systems.
Our results are obtained by adapting the *Universally Composable* (UC) security framework to the analysis of software systems. Originally developed for cryptographic protocols, the UC framework allows the analysis of simple components in isolation, and provides assurance that these components maintain their behavior when combined in a large system, potentially under adversarial conditions.
Quantum Proofs of Knowledge
We motivate, define and construct quantum proofs of knowledge, that
is, proofs of knowledge secure against quantum adversaries. Our
constructions are based on a new quantum rewinding technique that
allows us to extract witnesses in many classical proofs of
knowledge. We give criteria under which a classical proof of knowledge
is a quantum proof of knowledge. Combining our results with Watrous'
results on quantum zero-knowledge, we show that there are
zero-knowledge quantum proofs of knowledge for all languages in NP
(assuming quantum 1-1 one-way functions).
Practical-time Attack on the Full MMB Block Cipher
Modular Multiplication based Block Cipher (MMB) is a block cipher
designed by Daemen \emph{et al.} as an alternative to the IDEA block
cipher. In this paper, we give a practical-time attack on the full
MMB with adaptive chosen plaintexts and ciphertexts. Using a constructed
sandwich distinguisher for 5 of the 6 rounds of MMB, which holds with
probability 1, we give a key recovery attack
on the full MMB with data complexity $2^{40}$ and time complexity
$2^{13.4}$ MMB encryptions. A rectangle-like sandwich attack on
the full MMB is then presented, with $2^{66.5}$ chosen plaintexts,
$2^{64}$ MMB encryptions and $2^{70.5}$ memory bytes. In addition, we
show an improved differential attack on the full MMB with data
complexity of $2^{96}$ chosen plaintexts and ciphertexts, time
complexity $2^{64}$ encryptions and $2^{66}$ bytes of memory.
Last updated: 2010-04-28
Fully Secure Identity-Based Encryption Without Random Oracles: A variant of Boneh-Boyen HIBE
We present an Identity-Based Encryption (IBE) scheme that is fully
secure without random oracles and has several advantages over previous such schemes, namely computational efficiency, shorter public parameters, and a simple assumption. The construction is remarkably simple and the security reduction is straightforward. We first give our CPA-secure construction based on the decisional Bilinear Diffie-Hellman (BDH) problem, then achieve CCA security by employing a secure symmetric-key encryption algorithm. Additionally, we transform the CPA construction into a new signature scheme that is secure under the
computational Diffie-Hellman assumption without random oracles.
Identity-Based Authenticated Asymmetric Group Key Agreement Protocol
In identity-based public-key cryptography, an entity's public key can be easily derived from its identity. The direct derivation of public keys in identity-based public-key cryptography eliminates the need for certificates and solves certain public key management problems in traditional public-key cryptosystems. Recently, the notion of asymmetric group key agreement was introduced, in which the group members merely negotiate a common encryption key which is accessible to any entity, but they hold respective secret decryption keys. In this paper, we first propose a security model for identity-based authenticated asymmetric group key agreement (IB-AAGKA) protocols. We then propose an IB-AAGKA protocol which is proven secure under the Bilinear Diffie-Hellman Exponent assumption. Our protocol is also efficient, and readily adaptable to provide broadcast encryption.
Efficient Implementation of Elliptic Curve Point Operations Using Binary Edwards Curves
This paper presents a deterministic algorithm for converting points on an ordinary elliptic curve (defined over a field of characteristic 2) to points on a complete binary Edwards curve. This avoids the problem of choosing curve parameters at random. When implemented on a large (512 bit) hardware multiplier, computation of point multiplication using this algorithm performs significantly better, in terms of code complexity, code coverage and timing, than the standard implementation. In addition, we propose a simple modification to the birational equivalence detailed in the paper by Bernstein et al. which both reduces the number of inversions required in the affine mapping and has fewer exceptional points. Finally, we compare software implementations using this efficient point multiplication for binary Edwards curves with computations on elliptic curves in Weierstrass form.
Increased Resilience in Threshold Cryptography: Sharing a Secret with Devices That Cannot Store Shares
Threshold cryptography has been used to secure data and control access by sharing a private cryptographic key over different devices. This means that a minimum number of these devices, the threshold $t+1$, need to be present to use the key. The benefits are increased security, because an adversary can compromise up to $t$ devices, and resilience, since any subset of $t+1$ devices is sufficient.
Many personal devices are not suitable for threshold schemes, because they do not offer secure storage, which is needed to store shares of the private key. This article presents several protocols in which shares are stored in protected form (possibly externally). This makes them suitable for low-cost devices with a factory-embedded key, e.g., car keys and access cards. All protocols are verifiable through public broadcast, thus without private channels. In addition, distributed key generation does not require all devices to be present.
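The $(t+1)$-out-of-$n$ threshold mechanism in the abstract is classically realized with Shamir secret sharing. The sketch below is a minimal illustration of that standard scheme (not the paper's protected-share protocols): a random degree-$t$ polynomial with the secret as constant term, shares as evaluation points, and Lagrange interpolation at zero for reconstruction. The field prime is an arbitrary choice.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(PRIME)

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares so that any t+1 of them reconstruct it:
    evaluate a random degree-t polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(123456789, t=2, n=5)
print(reconstruct(shares[:3]))  # any 3 = t+1 shares suffice -> 123456789
```

Fewer than $t+1$ shares reveal nothing about the secret, which is exactly why devices holding shares normally need secure storage; the article's contribution is to relax that storage requirement.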
Authentication protocols based on low-bandwidth unspoofable channels: a comparative survey
One of the main challenges in pervasive computing is how we can establish secure communication over an untrusted high-bandwidth network without any initial knowledge or a Public Key Infrastructure. An approach studied by a number of researchers is building security through human work, creating a low-bandwidth empirical (or authentication) channel where the transmitted information is authentic and cannot be faked or modified. In this paper, we give an analytical survey of authentication protocols of this type. We start with non-interactive authentication schemes, and then move on to analyse a number of strategies used to build interactive pair-wise and group protocols that minimise the human work relative to the amount of security obtained, as well as optimising the computational cost. In studying these protocols, we will discover that their security is underlined by the idea of commitment before knowledge, which is refined by two protocol design principles introduced in this survey.
On Protecting Cryptographic Keys Against Continual Leakage
Side-channel attacks have often proven to have a devastating effect on the security of cryptographic schemes. In this paper, we address the problem of storing cryptographic keys and computing on them in a manner that preserves security even when the adversary is able to obtain information leakage during the computation on the key.
Using the recently achieved fully homomorphic encryption, we show how to encapsulate a key and repeatedly evaluate arbitrary functions on it so that no adversary can gain any useful information from a large class of side-channel attacks. We work in the model of Micali and Reyzin, assuming that only the active part of memory during computation leaks information. Similarly to previous works, our construction makes use of a single ``leak-free'' hardware token that samples from a globally-fixed distribution that does not depend on the key.
Our construction is the first general compiler to achieve resilience against polytime leakage functions without performing any leak-free computation on the underlying secret key. Furthermore, the amount of computation our construction must perform does not grow with the amount of leakage the adversary is able to obtain; instead, it suffices to make a stronger assumption about the security of the fully homomorphic encryption.
Certificateless generalized signcryption
Generalized signcryption is a relatively new cryptographic primitive that not only can achieve encryption and signature in a single operation, but also provides encryption or signature alone when needed. This paper gives a formal definition of certificateless generalized signcryption and presents its security model. A concrete certificateless generalized signcryption scheme is also proposed.
Heraclitus: A LFSR-based Stream Cipher with Key Dependent Structure
We describe Heraclitus as an example of a stream cipher
that uses a 128-bit index string to specify the structure of each instance in real time:
each instance of Heraclitus is a stream cipher based on mutually
clocked shift registers.
Ciphers with key-dependent structures have been investigated before, and are
generally based on Feistel networks. Heraclitus, however, is based
on mutually clocked shift registers. Ciphers of this type have been extensively analysed, and published
attacks on them are infeasible against any instance of Heraclitus.
The speed and security of Heraclitus make it suitable as a session cipher,
that is, one where an instance is generated at key exchange and used for a single session.
Robust Combiner for Obfuscators
Practical software hardening schemes are heuristic and are not proven to be secure. One technique to enhance security is {\em robust combiners}.
An algorithm $C$ is a robust combiner for specification $S$, e.g., privacy, if for any two implementations $X$ and $Y$, of a cryptographic scheme, the combined scheme $C(X,Y)$ satisfies $S$ provided {\em either} $X$ {\em or} $Y$ satisfy $S$.
We present the first robust combiner for software hardening, specifically for obfuscation \cite{barak:obfuscation}. Obfuscators are software hardening techniques that are employed to protect execution of programs in remote, hostile environment. Obfuscators protect the code (and secret data) of the program that is sent to the remote host for execution.
Robust combiners are particularly important for software hardening, where there is no standard whose security is established. In addition, robust combiners for software hardening are interesting from software engineering perspective since they introduce new techniques of software only fault tolerance.
Last updated: 2010-09-26
Impossible Differential Cryptanalysis on E2
E2 is a 128-bit block cipher which employs a Feistel structure and a 2-round SPN in its round function. It was an AES candidate designed by NTT. Previous publications supposed that E2 has no impossible differential over more
than 5 rounds. In this paper, we describe some 6-round impossible differentials
of E2. Using a 6-round impossible differential, we present an attack on a 9-round reduced version
of E2-256 without the initial transformation (IT) and final transformation (FT) functions.
Generic Constructions for Verifiably Encrypted Signatures without Random Oracles or NIZKs
Verifiably encrypted signature schemes (VES) allow a signer to encrypt his or her signature under the public key of a trusted third party, while maintaining public signature verifiability.
With our work, we propose two generic constructions based on Merkle authentication trees that do not require non-interactive zero-knowledge proofs (NIZKs) for maintaining verifiability. Both are stateful and secure in the standard model. Furthermore, we extend the specification for VES, bringing it closer to real-world needs. We also argue that statefulness can be a feature in common business scenarios.
Our constructions rely on the assumption that CPA (even slightly weaker) secure encryption, ``maskable'' CMA secure signatures, and collision resistant hash functions exist. ``Maskable'' means that a signature can be hidden in a verifiable way using a secret masking value. Unmasking the signature is hard without knowing the secret masking value.
We show that our constructions can be instantiated with a broad range of efficient signature and encryption schemes, including two lattice-based primitives. Thus, VES schemes can be based on the hardness of worst-case lattice problems, making them secure against subexponential and quantum-computer attacks.
Among others, we provide the first efficient pairing-free instantiation in the standard model.
A Framework for Fully-Simulatable $t$-out-of-$n$ Oblivious Transfer
Oblivious transfer is a fundamental building block for multiparty computation protocols. In this paper, we present a generally realizable framework for fully-simulatable $t$-out-of-$n$ oblivious transfer ($\mbox{OT}^{n}_{t}$) with security against non-adaptive malicious adversaries in the plain model. Our construction relies on a single cryptographic primitive which is a variant of smooth projective hashing (SPH). A direct consequence of our work is that the existence of protocols for $\mbox{OT}^{n}_{t}$ is reduced to the existence of this SPH variant. Before this paper, the only known reductions provided half-simulatable security and every known efficient protocol involved at least two distinct cryptographic primitives. We show how to instantiate this new SPH variant under not only the decisional Diffie-Hellman assumption, the decisional $N$-th residuosity assumption and the decisional quadratic residuosity assumption as currently existing SPH constructions, but also the learning with errors problem. Our framework only needs $4$ communication rounds, which implies that it is more round-efficient than known protocols with identical features.
The Rebound Attack and Subspace Distinguishers: Application to Whirlpool
We introduce the rebound attack as a variant of differential cryptanalysis on
hash functions and apply it to the hash function Whirlpool, standardized by
ISO/IEC. We give attacks on reduced variants of the Whirlpool hash function and
the Whirlpool compression function. Next, we introduce the subspace problems as
generalizations of near-collision resistance. Finally, we present
distinguishers based on the rebound attack, that apply to the full compression
function of Whirlpool and the underlying block cipher $W$.
Fully Secure Anonymous HIBE and Secret-Key Anonymous IBE with Short Ciphertexts
Lewko and Waters [Eurocrypt 2010] presented a fully secure HIBE with short ciphertexts. In this paper we show how to modify their construction to achieve anonymity. We prove the security of our scheme under static (and generically secure) assumptions formulated in composite order bilinear groups. In addition, we present a fully secure Anonymous IBE in the secret-key setting. Secret-Key Anonymous IBE was implied by the work of [Shen-Shi-Waters - TCC 2009] which can be shown secure in the selective-id model. No previous fully secure construction of secret-key Anonymous IBE is known.
Cryptography Against Continuous Memory Attacks
We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR), if it allows users to refresh their secret keys, using only fresh local randomness, such that:
1. The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the “outside world” is neither affected by these key refreshes, nor needs to know about their frequency.
2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.
In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well established assumption on bilinear groups, called the K-Linear assumption (for any constant K >= 1).
Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes.
Prior to our work, no “truly CLR” schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks: (a) restrictions are placed on the type of allowed leakage, such as the axiom that “only computation leaks information”; (b) the overall amount of key leakage is bounded a-priori for the lifetime
of the system and there is no method for refreshing keys; (c) the efficiency of the scheme degrades proportionally with the number of refreshes; (d) the key updates require an additional leak-free “master secret key” to be stored securely; (e) the scheme is only proven secure under a strong non-standard assumption.
On E-Vote Integrity in the Case of Malicious Voter Computers
Norway is to implement e-voting (over the Internet, and by using voters' own computers) within the next few years. The vulnerability of voters' computers was identified as a serious threat to e-voting. In this paper, we study the vote integrity of e-voting when the voter computers cannot be trusted. First, we make a number of assumptions about the available infrastructure. In particular, we assume the existence of two out-of-band channels that do not depend on the voter computers. The first channel is used to transmit integrity check codes to the voters prior to the election, and the second channel is used to transmit a check code, corresponding to the vote, back to the voter just after the e-vote was cast. For this we also introduce a new cryptographic protocol. We present the new protocol in enough detail to facilitate an implementation, and also present timings from an actual implementation.
Identity-Based Online/Offline Key Encapsulation and Encryption
An identity-based online/offline encryption (IBOOE) scheme splits the encryption process into two phases. The first phase performs most of the heavy computations, such as modular exponentiation or pairing over points on elliptic curve. The knowledge of the plaintext or the receiver's identity is not required until the second phase, where the ciphertext is produced by only light computations, such as integer addition/multiplication or hashing. This division of computations makes encryption affordable by devices with limited computation power since the preparation works can be executed ``offline'' or possibly by some powerful devices.
Since efficiency is the main concern, smaller ciphertext size and a lighter computational burden in all phases (i.e., both phases of encryption and the decryption phase) are desirable. In this paper, we propose new schemes with improved efficiency over previous schemes, assuming random oracles. Our first construction is a very efficient scheme which is secure against chosen-plaintext attack (CPA); this scheme is slightly modified from an existing scheme. In particular, the setup and the user private key remain the same. We then proceed to propose the notion of ID-based Online/Offline KEM (IBOOKEM), which allows the key encapsulation process to be split into offline and online stages, in the same way as IBOOE does. We also present a generic transformation to obtain security against chosen-ciphertext attack (CCA) for IBOOE from any IBOOKEM scheme with one-wayness only. Our schemes (both CPA and CCA) are the most efficient in the state of the art in terms of online computation and ciphertext size, which are the two main concerns of online/offline schemes. Our schemes are very well suited to deployment on embedded devices such as smartcards or wireless sensors, which have very limited computation power and for which communication bandwidth is very expensive.
Speeding Up The Widepipe: Secure and Fast Hashing
In this paper we propose a new sequential mode of operation -- the \emph{Fast wide pipe} or FWP for short -- to hash
messages of arbitrary length. The mode is shown to be (1)
\emph{preimage-resistance preserving}, (2)
\emph{collision-resistance-preserving} and, most importantly, (3)
\emph{indifferentiable} from a random oracle up to $\mathcal{O}(2^{n/2})$
compression function invocations. In addition, our rigorous investigation suggests that
any variant of Joux's multi-collision attack, the Kelsey-Schneier 2nd-preimage attack and
the herding attack is also ineffective on this mode. This fact leads us to conjecture that the indifferentiability security bound of FWP can be extended beyond the birthday barrier. From the point of view of efficiency, this new mode is, for example, \textit{always} faster than the Wide-pipe mode when both modes use an identical compression function. In particular, it is nearly twice as fast as the Wide-pipe for a reasonable selection of the input and output sizes of the compression function. We also compare the FWP with several other modes of operation.
Non-Transferable Proxy Re-Encryption Scheme for Data Dissemination Control
Uncategorized
A proxy re-encryption (PRE) scheme allows a proxy to re-encrypt
a ciphertext for Alice (delegator) to a ciphertext for Bob (delegatee) without
seeing the underlying plaintext. With the help of the proxy, Alice can delegate
the decryption right to any delegatee. However, existing PRE schemes generally
suffer from at least one of the following drawbacks. Some schemes fail to provide
the non-transferable property, meaning the proxy and the delegatee can collude to
further delegate the decryption right to anyone; this is the main open problem
left for PRE schemes. Other schemes assume the existence of a fully trusted
private key generator (PKG) to generate the re-encryption key to be used by
the proxy for re-encrypting a given ciphertext for a target delegatee. But this
poses two problems in PRE schemes if the PKG is malicious: the PKG in these
schemes may decrypt both original ciphertexts and re-encrypted ciphertexts
(referred to as the key escrow problem), and the PKG can generate re-encryption
keys for arbitrary delegatees without permission from the delegator (we refer
to this as the PKG despotism problem).
In this paper, we propose the first non-transferable proxy re-encryption
scheme, which successfully achieves the non-transferable property. We also reduce
the full trust placed in the PKG: only a limited amount of trust is placed in the proxy
and the PKG. We show that the new scheme solves the PKG despotism problem
and the key escrow problem as well. Further, we find that the new scheme satisfies
the requirements of data dissemination control, which is also a challenging
goal for data security. We explore the potential of adopting our new scheme
to achieve data dissemination control and implement a non-transferable re-encryption based encrypted PC/USB file system. Performance measurements
of our scheme demonstrate that non-transferable re-encryption is practical and
efficient.
On Designated Verifier Signature Schemes
Designated verifier signature schemes allow a signer to convince only the designated
verifier that a signed message is authentic. We define attack models on the
unforgeability property of such schemes and analyze relationships among the models. We
show that the no-message model, where an adversary is given only public keys, is
equivalent to the model, where an adversary has also oracle access to the verification
algorithm. We also show a separation between the no-message model and the chosen-message model,
where an adversary has access to the signing algorithm. Furthermore, we present a
modification of the Yang-Liao designated verifier signature scheme and prove its
security. The security of the modified scheme is based on the computational
Diffie-Hellman problem, while the original scheme requires the strong Diffie-Hellman
assumption.
J-PAKE: Authenticated Key Exchange Without PKI
Password Authenticated Key Exchange (PAKE) is one of the
important topics in cryptography. It aims to address a
practical security problem: how to establish secure
communication between two parties solely based on a shared
password without requiring a Public Key Infrastructure (PKI).
After more than a decade of extensive research in this field,
there have been several PAKE protocols available. The EKE and
SPEKE schemes are perhaps the two most notable examples. Both
techniques are however patented. In this paper, we review these
techniques in detail and summarize various theoretical and
practical weaknesses. In addition, we present a new PAKE
solution called J-PAKE. Our strategy is to depend on
well-established primitives such as the Zero-Knowledge Proof
(ZKP). So far, almost all past solutions have avoided
using ZKPs out of concern for efficiency. We demonstrate how to
effectively integrate the ZKP into the protocol design and
meanwhile achieve good efficiency. Our protocol has comparable
computational efficiency to the EKE and SPEKE schemes with
clear advantages on security.
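J-PAKE's zero-knowledge proofs are typically instantiated with Schnorr proofs made non-interactive via the Fiat-Shamir heuristic. The following is a minimal sketch with toy parameters (a real deployment uses a large standardized group); the function names are illustrative, not from the paper.

```python
import hashlib
import secrets

# Toy subgroup parameters: p = 2q + 1 with q prime, g generates the
# order-q subgroup. These small values offer no security.
p, q, g = 2039, 1019, 4

def schnorr_prove(x):
    """Prove knowledge of x with X = g^x mod p (Fiat-Shamir Schnorr)."""
    X = pow(g, x, p)
    v = secrets.randbelow(q)           # ephemeral secret
    V = pow(g, v, p)                   # commitment
    c = int(hashlib.sha256(f"{g}|{V}|{X}".encode()).hexdigest(), 16) % q
    r = (v - x * c) % q                # response
    return X, (V, r)

def schnorr_verify(X, proof):
    """Check V == g^r * X^c, recomputing the challenge c from the hash."""
    V, r = proof
    c = int(hashlib.sha256(f"{g}|{V}|{X}".encode()).hexdigest(), 16) % q
    return V == (pow(g, r, p) * pow(X, c, p)) % p

x = secrets.randbelow(q)
X, proof = schnorr_prove(x)
assert schnorr_verify(X, proof)
```

Verification works because g^r * X^c = g^(v - xc) * g^(xc) = g^v = V; the prover never reveals x itself.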
New generic algorithms for hard knapsacks
In this paper, we
study the complexity of solving hard knapsack problems, i.e.,
knapsacks with a density close to $1$ where lattice-based low
density attacks are not an option. For such knapsacks, the current
state of the art is a 31-year-old algorithm by Schroeppel and Shamir,
which is based on birthday paradox techniques and yields a running
time of $\TildeOh(2^{n/2})$ for knapsacks of $n$ elements and uses
$\TildeOh(2^{n/4})$ storage. We propose here two new algorithms
which improve on this bound, finally lowering the running time down
to $\TildeOh (2^{0.3113\, n})$ for almost all knapsacks of density $1$.
We also demonstrate the practicality
of these algorithms with an implementation.
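The birthday-paradox splitting behind Schroeppel-Shamir can be illustrated, in simplified form, by the classic meet-in-the-middle subset-sum algorithm, which runs in O(2^(n/2)) time and space (Schroeppel-Shamir reduces the storage to O(2^(n/4)), and the paper's new algorithms improve the time further; neither refinement is shown here).

```python
from itertools import combinations

def subset_sum_mitm(weights, target):
    """Meet-in-the-middle subset sum: enumerate all sums of each half,
    then look for a left sum s with (target - s) among the right sums."""
    n = len(weights)
    left, right = weights[:n // 2], weights[n // 2:]

    def all_sums(items):
        sums = {}
        for r in range(len(items) + 1):
            for combo in combinations(range(len(items)), r):
                sums.setdefault(sum(items[i] for i in combo), combo)
        return sums

    left_sums = all_sums(left)
    right_sums = all_sums(right)
    for s, combo in left_sums.items():
        if target - s in right_sums:
            right_combo = right_sums[target - s]
            return sorted(combo) + [i + n // 2 for i in right_combo]
    return None  # no subset sums to target

w = [23, 31, 29, 44, 53, 38, 63, 85]
sol = subset_sum_mitm(w, 90)
assert sol is not None and sum(w[i] for i in sol) == 90
```

Each half contributes only 2^(n/2) sums, so the dictionary lookup replaces the naive 2^n enumeration, which is exactly the birthday-style collision search the abstract refers to.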
Cryptographic Role-based Security Mechanisms based on Role-Key Hierarchy
Even though role-based access control (RBAC) can tremendously help us minimize the complexity of administering users, the notion of roles still needs to be realized at the resource level. In this paper, we propose a practical cryptographic RBAC model, called the role-key hierarchy model, to support various security features including signature, identification and encryption based on a role-key hierarchy. With the help of the rich algebraic structure of elliptic curves, we introduce a role-based cryptosystem construction to verify the rationality and validity of our proposed model. Also, a proof-of-concept prototype implementation and performance evaluation are discussed to demonstrate the feasibility and efficiency of our mechanisms.
Certificateless Signcryption without Pairing
Uncategorized
Certificateless public key cryptography is receiving significant attention because it is a new paradigm that simplifies traditional PKC and solves the inherent key escrow problem suffered by ID-PKC. Certificateless signcryption is one of the most important security primitives in CL-PKC. However, to the best of our knowledge, all constructions of certificateless signcryption (CLSC) in the literature are built from bilinear maps, which require costly operations. In this paper, motivated by the certificateless encryption schemes proposed in [3, 21], we present a pairing-free CLSC scheme, which is more efficient than all previous constructions.
New software speed records for cryptographic pairings
This paper presents new software speed records for the computation
of cryptographic pairings. More specifically, we present details of an implementation which computes the optimal ate pairing
on a 256-bit Barreto-Naehrig curve in only 4,379,912 cycles
on one core of an Intel Core 2 Quad Q9550 processor.
This speed is achieved by combining
1.) state-of-the-art high-level optimization techniques,
2.) a new representation of elements
in the underlying finite fields which makes
use of the special modulus arising from
the Barreto-Naehrig curve construction,
and 3.) implementing arithmetic in this
representation using the double-precision floating-point
SIMD instructions of the AMD64 architecture.
New Methods to Construct Golay Complementary Sequences Over the $QAM$ Constellation
In this paper, based on binary Golay complementary
sequences, we propose some methods to construct Golay complementary
sequences of length $2^n$ for integer $n$ over the $M^2$-$QAM$ and
$2M$-$Q$-$PAM$ constellations, where $M=2^m$ for
integer $m$. A method is proposed to judge whether a sequence constructed using
the new general offset pairs over the $QAM$ constellation is a Golay
complementary sequence. Based on this judging rule, we
can construct many new Golay complementary sequences. In particular,
we study Golay complementary sequences over the $16$-$QAM$
and $64$-$QAM$ constellations, and many new Golay complementary sequences
over these constellations have been found.
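The binary base case the abstract builds on can be sketched directly: the standard recursive doubling construction turns a length-N Golay pair into a length-2N pair by concatenation. This sketch covers only the classical binary construction, not the QAM offset-pair machinery of the paper; the function names are illustrative.

```python
def golay_pair(n):
    """Binary (+1/-1) Golay complementary pair of length 2^n via the
    standard recursive doubling (a, b) -> (a || b, a || -b)."""
    a, b = [1], [1]
    for _ in range(n):
        a, b = a + b, a + [-x for x in b]
    return a, b

def aperiodic_autocorr(seq, shift):
    """Aperiodic autocorrelation of seq at the given shift."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

# The defining property: the autocorrelations of the pair cancel
# at every nonzero shift.
a, b = golay_pair(4)  # length-16 pair
for u in range(1, len(a)):
    assert aperiodic_autocorr(a, u) + aperiodic_autocorr(b, u) == 0
```

For example, at n = 1 the pair is ([1, 1], [1, -1]): their shift-1 autocorrelations are +1 and -1, which cancel as required.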
Last updated: 2013-03-21
Rational Secret Sharing as Extensive Games
Some punishments in rational secret sharing schemes turn
out to be empty threats. In this paper, we first model 2-out-of-2 rational secret sharing as an extensive game with imperfect information, and then provide a strategy for achieving secret recovery in this game. Moreover, we prove that the strategy is a sequential equilibrium, which means that after any history of the game no player can benefit from deviating so long as the other players stick to the strategy. Therefore, by considering rational secret sharing as an extensive game, we design a scheme that
eliminates empty threats. Beyond assuming the existence of a simultaneous broadcast channel, our scheme can keep the dealer off-line, extends to t-out-of-n rational secret sharing, and also satisfies computational equilibria in some sense.
Preventing Pollution Attacks in Multi-Source Network Coding
Network coding is a method for achieving channel capacity in networks.
The key idea is to allow network routers to linearly mix packets as
they traverse the network so that recipients receive linear
combinations of packets. Network coded systems are vulnerable to
pollution attacks where a single malicious node floods the network
with bad packets and prevents the receiver from decoding correctly.
Cryptographic defenses to these problems are based on homomorphic
signatures and MACs. These proposals, however, cannot handle mixing of
packets from multiple sources, which is needed to achieve the full
benefits of network coding. In this paper we address integrity of
multi-source mixing. We propose a security model for this setting
and provide a generic construction.
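The linear-mixing idea can be sketched over GF(2), where mixing is XOR and decoding is Gaussian elimination; the homomorphic authentication the paper adds is not shown, and the function names (`mix`, `decode`) are illustrative. Real systems typically mix over GF(2^8) with random coefficients.

```python
def mix(packets, coeffs):
    """A router's linear combination over GF(2): XOR the packets
    selected by the coefficient vector."""
    out = 0
    for c, p in zip(coeffs, packets):
        if c:
            out ^= p
    return out

def decode(coeff_rows, mixed):
    """Gaussian elimination over GF(2): given an invertible coefficient
    matrix and the mixed packets, recover the original packets."""
    rows = [(row[:], m) for row, m in zip(coeff_rows, mixed)]
    n = len(rows)
    for col in range(n):
        pivot = next(i for i in range(col, n) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(n):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [m for _, m in rows]

originals = [0b1011, 0b0110, 0b1101]          # three source packets
rows = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]      # coefficients seen in headers
mixed = [mix(originals, r) for r in rows]
assert decode(rows, mixed) == originals
```

A pollution attack in this setting corresponds to a node injecting a `mixed` value inconsistent with its claimed coefficient vector, which is what the paper's signatures are designed to detect.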
A Simple BGN-type Cryptosystem from LWE
We construct a simple public-key encryption scheme that supports polynomially many additions and one multiplication, similar to the cryptosystem of Boneh, Goh, and Nissim (BGN). Security is based on the hardness of the learning with errors (LWE) problem, which is known to be as hard as certain worst-case lattice problems.
Some features of our cryptosystem include support for large message space, an easy way of achieving formula-privacy, a better message-to-ciphertext expansion ratio than BGN, and an easy way of multiplying two encrypted polynomials. Also, the scheme can be made identity-based and leakage-resilient (at the cost of a higher message-to-ciphertext expansion ratio).
Cryptanalysis of a DoS-resistant ID-based password authentication scheme
Remote authentication is a method to authenticate remote users over an insecure communication channel. Password-based authentication schemes have been widely deployed to verify the legitimacy of remote users. Very recently, Hwang et al. proposed a DoS-resistant ID-based password authentication scheme using smart cards. In the current work, we are concerned with the password security of Hwang et al.'s scheme. We first show that their scheme is vulnerable to a password guessing attack in which an attacker exhaustively enumerates all possible passwords in an off-line manner to determine the correct one. We then figure out how to eliminate the security vulnerability of their scheme.
The World is Not Enough: Another Look on Second-Order DPA
In a recent work, Mangard et al. showed that under certain assumptions, the (so-called) standard univariate side-channel attacks using a distance-of-means test, correlation analysis and Gaussian templates are essentially equivalent. In this paper, we show that in the context of multivariate attacks against masked implementations, this conclusion does not hold anymore. While a single distinguisher can be used to compare the susceptibility of different unprotected devices to first-order DPA, understanding second-order attacks requires to carefully investigate the information leakages and the adversaries exploiting these leakages, separately. Using a framework put forward by Standaert et al. at Eurocrypt 2009, we provide the first analysis that explores these two topics in the case of a masked implementation exhibiting a Hamming weight leakage model. Our results lead to refined intuitions regarding the efficiency of various practically-relevant distinguishers. Further, we also investigate the case of second- and third-order masking (i.e. using three and four shares to represent one value). This evaluation confirms that higher-order masking only leads to significant security improvements if the secret sharing is combined with a sufficient amount of noise. Eventually, we show that an information theoretic analysis allows determining this necessary noise level, for different masking schemes and target security levels, with high accuracy and smaller data complexity than previous~methods.
A Class of 1-Resilient Function with High Nonlinearity and Algebraic Immunity
Uncategorized
In this paper, we propose a class of 1-resilient Boolean functions with optimal algebraic degree and high nonlinearity. Moreover, based on the conjecture proposed in [4], it can be proved that the algebraic immunity of our functions is at least suboptimal.
Identity Based Online/Offline Encryption Scheme
Consider the situation where a low-power device with limited computational power has to perform cryptographic operations in order to communicate securely with a base station whose computational power is not limited. The most obvious way is to split each cryptographic operation into resource-consuming, heavy operations (which are performed when the device is idle) and fast, lightweight operations (which are executed on the fly). This concept is called online/offline cryptography. In this paper, we show a security weakness in an identity-based online/offline encryption scheme proposed at ACNS 09 by Liu et al. \cite{LiuZ09}. The scheme in \cite{LiuZ09} is the first identity-based online/offline encryption scheme in the random oracle model in which the message and recipient are not known during the offline phase. We show that this scheme is not CCA secure. We also show a weakness in the security proof of the CCA-secure online/offline encryption system proposed by Chow et al. in \cite{Chow10}. We propose a new provably secure identity-based online/offline encryption scheme in which the message and receiver are not known during the offline phase. Since all existing CCA-secure identity-based online/offline encryption schemes have been shown to have weaknesses, ours is the first provably secure scheme with the aforementioned properties.
On the Static Diffie-Hellman Problem on Elliptic Curves over Extension Fields
Uncategorized
We show that for any elliptic curve $E(\F_{q^n})$, if an adversary has access to a Static Diffie-Hellman Problem (Static DHP) oracle, then by making $O(q^{1-\frac{1}{n+1}})$ Static DHP oracle queries during an initial learning phase, for fixed $n>1$ and
$q \rightarrow \infty$ the adversary can solve {\em any} further instance of the Static DHP in {\em heuristic} time
$\tilde{O}(q^{1-\frac{1}{n+1}})$. Our proposal also solves the
{\em Delayed Target DHP} as defined by Freeman, and naturally extends to provide algorithms for solving the {\em Delayed Target DLP}, the {\em One-More DHP} and {\em One-More DLP}, as studied by Koblitz and Menezes in the context of Jacobians of hyperelliptic curves of small genus. We also argue that for {\em any} group in which index calculus can be effectively applied, the above problems have a natural relationship, and will {\em always} be easier than the DLP. While practical only for very small $n$, our algorithm reduces the security provided by the elliptic curves defined over $\F_{p^2}$ and $\F_{p^4}$ proposed by Galbraith, Lin and Scott at EUROCRYPT 2009, should they be used in any protocol where a user can be made to act as a proxy Static DHP oracle, or if used in protocols whose security is related to any of the above problems.
A Comparison of Cryptanalytic Tradeoff Algorithms
Three time memory tradeoff algorithms are compared in this paper. Specifically, the classical tradeoff algorithm by Hellman, the distinguished point tradeoff method, and the rainbow table method, in their non-perfect table versions, are treated.
We show that, under parameters and assumptions that are typically considered in theoretic discussions of the tradeoff algorithms, Hellman and distinguished point tradeoffs perform very close to each other and that the rainbow table method performs somewhat better than the other two algorithms. Our method of comparison can easily be applied to other situations, where the conclusions could be different.
The analysis of tradeoff efficiency presented in this paper does not ignore the effects of false alarms and also covers techniques for reducing storage, such as ending point truncations and index tables. Our comparison of algorithms takes the success probabilities and pre-computation efforts fully into account.
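Hellman's classical tradeoff, one of the three algorithms compared, can be sketched in miniature: precompute chains of iterated applications of the function, store only (endpoint, startpoint) pairs, then invert by walking forward to a stored endpoint and replaying the chain. This toy sketch omits the reduction-function families, perfect-table variants, and the storage-reduction techniques (endpoint truncation, index tables) the paper analyzes; all names are illustrative.

```python
import hashlib

N = 2 ** 16  # toy 16-bit search space

def f(x):
    """One-way function on the toy space: truncated SHA-256."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(2, 'big')).digest()[:2],
                          'big')

def build_table(starts, t):
    """Offline phase: chains of length t, storing only end -> start."""
    table = {}
    for s in starts:
        x = s
        for _ in range(t):
            x = f(x)
        table[x] = s
    return table

def invert(table, y, t):
    """Online phase: walk y forward until a stored endpoint is hit,
    then replay that chain from its start to look for a preimage of y.
    A failed replay (false alarm) just continues the walk."""
    x = y
    for _ in range(t + 1):
        if x in table:
            p = table[x]
            for _ in range(t):
                if f(p) == y:
                    return p
                p = f(p)
        x = f(x)
    return None  # false alarm throughout, or point not covered

t = 64
table = build_table(range(0, N, 7), t)
y = f(12345)
pre = invert(table, y, t)
assert pre is None or f(pre) == y
```

The memory holds only one pair per chain rather than the full function table, which is precisely the time/memory tradeoff; the `invert` loop's tolerance of failed replays is the false-alarm handling whose cost the paper's analysis does not ignore.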
Sanitizable signatures with strong transparency in the standard model
Sanitizable signatures provide several security features which
are useful in many scenarios including military and medical applications. Sanitizable signatures allow a semi-trusted party to update some part of a digitally signed document without interacting with the original signer. Such schemes, where the verifier cannot identify whether the message has been sanitized, are said to possess strong transparency. In this paper, we describe the first efficient and provably secure sanitizable signature scheme having strong transparency under the standard
model.
A Reflection on the Security of Two-Party Key Establishment Protocols
Two-party key establishment has been a very fruitful research area
in cryptography, with many security models and numerous protocols
proposed. In this paper, we take another look at the YAK protocol
and the HMQV protocols and present some extended analysis. Motivated
by our analysis, we reflect on the security properties that are
desired by two-party key establishment protocols, and their
formalizations. In particular, we take into account the interface
between a key establishment protocol and the applications which may
invoke it, and emphasize the concept of session and the usage
of session identifier. Moreover, we show how to design a
two-party key establishment protocol to achieve both key
authentication and entity authentication properties in our security
model.
Compact Implementations of BLAKE-32 and BLAKE-64 on FPGA
We propose compact architectures of the SHA-$3$ candidates BLAKE-32 and BLAKE-64 for several FPGA families. We harness the intrinsic parallelism of the algorithm to interleave the computation of four instances of the $G_i$ function. This approach allows us to design an Arithmetic and Logic Unit with four pipeline stages and to achieve high clock frequencies. With careful scheduling, we completely avoid pipeline bubbles. For the time being, the designs presented in this work are the most compact ones for any of the SHA-3 candidates. We show for instance that a fully autonomous implementation of BLAKE-32 on a Xilinx Virtex-5 device requires 56 slices and two memory blocks.
Chosen Ciphertext Secure Encryption over Semi-smooth Subgroup
In this paper we propose two public key encryption schemes over the
semi-smooth subgroup introduced by Groth (2005). Both schemes are
proved secure against chosen-ciphertext attacks under the factoring
assumption. Since the domain of exponents is much smaller, both our
schemes are significantly more efficient than the Hofheinz-Kiltz (2009)
encryption scheme.
On Foundation and Construction of Physical Unclonable Functions
Uncategorized
Physical Unclonable Functions (PUFs) have been introduced as a new cryptographic primitive, and whilst a large number of PUF designs and applications have been proposed, few studies have been undertaken on the theoretical foundation of PUFs. At the same time, several PUF designs have been found to be insecure, raising questions about their design methodology. Moreover, PUFs with efficient implementations are needed to enable many applications in practice.
In this paper, we present novel results on the theoretical foundation and practical construction for PUFs. First, we prove that, for an $\ell$-bit-input and $m$-bit-output PUF containing $n$ silicon components, if $n< \frac{m2^{\ell}}{c}$ where $c$ is a constant, then 1) the PUF cannot be a random function, and 2) confusion and diffusion are necessary for the PUF to be a pseudorandom function. Then, we propose a helper data algorithm (HDA) that is secure against active attacks and significantly reduces PUF implementation overhead compared to previous HDAs. Finally, we integrate PUF construction into block cipher design to implement an efficient physical unclonable pseudorandom permutation (PUPRP); to the best of our knowledge, this is the first practical PUPRP using an integrated approach.
On a conjecture about binary strings distribution
Uncategorized
It is a difficult challenge to find Boolean functions for stream ciphers that achieve all of the necessary criteria, and research on such functions has lagged significantly behind cryptanalysis. Many attacks have led to design criteria for these functions, mainly: balancedness, a high algebraic degree, a high nonlinearity, a good behavior against fast algebraic attacks, and also a high algebraic immunity (which is now an absolutely necessary, but not sufficient, criterion for cryptographic Boolean functions).
Very recently, an infinite class of Boolean functions has been proposed by Tu and Deng having many very nice cryptographic properties under the assumption that the following combinatorial conjecture about binary strings is true:
\begin{cjt}
\label{cjt:original}
Let $\Stk$ be the following set:
\[
\Stk=\set{(a,b) \in \left(\Zk\right)^2 | a + b = t \text{ and } w(a) + w(b) < k} .
\]
Then:
\[
\abs{\Stk} \leq 2^{k-1} .
\]
\end{cjt}
The main contribution of the present paper is the reformulation of the problem in terms of {\em carries} which gives more insight on it than simple counting arguments.
Successful applications of our tools include explicit formulas of $\abs{\Stk}$ for numbers whose binary expansion is made of one block (see theorem \ref{thm:one}), a proof that the conjecture is {\em asymptotically} true (see theorem \ref{thm:asymptotic}) and a proof that a family of numbers (whose binary expansion has a high number of \textttup{1}s and isolated \textttup{0}s) reaches the bound of the conjecture (see theorem \ref{thm:extremal}). We also conjecture that the numbers in that family are the only ones reaching the bound (see conjecture \ref{cjt:extremal}).
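Assuming $\Zk$ denotes $\mathbb{Z}/2^k\mathbb{Z}$ and $w$ the Hamming weight (as the notation suggests), the conjectured bound can be checked by brute force for small $k$; this illustrative sketch is independent of the paper's carry-based reformulation.

```python
def S_size(t, k):
    """Brute-force |S_{t,k}|: pairs (a, b) in (Z/2^k Z)^2 with
    a + b = t (mod 2^k) and w(a) + w(b) < k, w = Hamming weight."""
    count = 0
    for a in range(2 ** k):
        b = (t - a) % (2 ** k)
        if bin(a).count('1') + bin(b).count('1') < k:
            count += 1
    return count

# Exhaustively verify the conjectured bound |S_{t,k}| <= 2^(k-1)
# for all t and small k.
for k in range(1, 9):
    assert all(S_size(t, k) <= 2 ** (k - 1) for t in range(2 ** k))
```

For instance, S_size(0, 3) counts only (0, 0) and (4, 4), giving 2 <= 2^2; the abstract's extremal family consists of the values of t for which the bound is met with equality.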
Dismantling SecureMemory, CryptoMemory and CryptoRF
Uncategorized
The Atmel chip families SecureMemory, CryptoMemory, and CryptoRF use a proprietary stream cipher to guarantee authenticity, confidentiality, and integrity. This paper describes the cipher in detail and points out several weaknesses. One is the fact that the three components of the cipher operate largely independently; another is that the intermediate output generated by two of those components is strongly correlated with the generated keystream. For SecureMemory, a single eavesdropped trace is enough to recover the secret key with probability 0.57 in 2^{39} cipher ticks. This is a factor of 2^{31.5} faster than a brute force attack. On a 2 GHz laptop, this takes around 10 minutes. With more traces, the secret key can be recovered with virtual certainty without significant additional cost in time. For CryptoMemory and CryptoRF, if one has 2640 traces it is possible to recover the key in 2^{52} cipher ticks, which is 2^{19} times faster than brute force. On a 50 machine cluster of 2 GHz quad-core machines this would take less than 2 days.
A Meet-in-the-Middle Attack on ARIA
Uncategorized
In this paper, we study the meet-in-the-middle attack against the
block cipher ARIA. We find some new 3-round and 4-round distinguishing
properties of ARIA. Based on the 3-round distinguishing property,
we can apply the meet-in-the-middle attack to up to 6 rounds for
all versions of ARIA. Based on the 4-round distinguishing property, we can mount a successful attack on 8-round ARIA-256. Furthermore, the 4-round distinguishing property can be improved, which leads to a 7-round attack on ARIA-192. The data and time complexities of the 7-round attack are 2^120 and 2^185.3, respectively. The data and time complexities of the 8-round attack are 2^56 and 2^251.6, respectively. Compared with the existing cryptanalytic results on ARIA, our 5-round attack has the lowest data and time complexities and the 6-round attack has the lowest data complexity. Moreover, it is shown that 8-round ARIA-256 is not immune to the meet-in-the-middle attack.
Evolutionary Cipher against Differential Power Attack
Uncategorized
The DPA attack is one of the most threatening SCA attacks; this paper focuses on DPA resistance. There are two phases in a DPA attack, collection and analysis, and different countermeasures against DPA can be built around each of them; balancing technologies, for instance, target the analysis phase. We propose a new idea, an algorithm with dynamic structure, to resist DPA, and call this measure the evolutionary cipher; it can effectively resist DPA attacks by destroying the differential power computation model proposed by Kocher. Moreover, the evolutionary cipher opens up a new way to design secure cryptographic algorithms, since it can resist both DPA attacks and some mathematical attacks as well. The design principles of the evolutionary cipher can serve as a reference for other dynamic cryptographic algorithms. This paper proves the security and effectiveness of the evolutionary cipher against DPA both theoretically and practically.
Fault Analysis Study of the Block Cipher FOX64
FOX is a family of symmetric block ciphers from MediaCrypt AG that helps to secure digital media, communications, and storage. The high-level structure of FOX is the so-called (extended) Lai-Massey scheme. This paper presents a detailed fault analysis of the block cipher FOX64, the 64-bit version of FOX, based on a differential property of the two-round Lai-Massey scheme in a fault model. A previous fault attack on FOX64 showed that each round-key (resp. the whole set of round-keys) could be recovered through 11.45 (resp. 183.20) faults on average. Our proposed fault attack, however, can deduce any round-key (except the first one) through 4.25 faults on average (4 in the best case), and retrieve the whole set of round-keys through 43.31 faults on average (38 in the best case). This implies that the number of faults needed in a fault attack on FOX64 can be significantly reduced. Furthermore, the technique introduced in this paper can be extended to other members of the block cipher family FOX.
Comment on four two-party authentication protocols
In this paper, we analyze the protocols of Bindu et al., Goriparthi et al., Wang et al. and Hölbl et al. After analysis, we found that Bindu et al.'s protocol suffers from the insider attack if the smart card is lost; both Goriparthi et al.'s and Wang et al.'s protocols cannot withstand a DoS attack on the password change phase, which makes the password invalid after the protocol run; and Hölbl et al.'s protocol is vulnerable to the insider attack, since a malevolent legal user can deduce the KGC's secret key x_s.
Black-Box Constructions of Protocols for Secure Computation
It is well known that secure computation without an honest majority requires computational assumptions. An interesting question that therefore arises relates to the way such computational assumptions are used. Specifically, can the secure protocol use the underlying primitive (e.g., a one-way trapdoor permutation) in a {\em black-box} way, treating it as an oracle, or must it be {\em nonblack-box} (by referring to the code that computes the primitive)? Despite the fact that many general constructions of cryptographic schemes refer to the underlying primitive in a black-box way only, there are some constructions that are inherently nonblack-box. Indeed, all known constructions of protocols for general secure computation that are secure in the presence of a malicious adversary and without an honest majority use the underlying primitive in a nonblack-box way (requiring one to prove, in zero-knowledge, statements that relate to the primitive).
In this paper, we study whether such nonblack-box use is essential. We answer this question in the negative. Concretely, we present a \emph{fully black-box reduction} from oblivious transfer with security against malicious parties to oblivious transfer with security against semi-honest parties. As a corollary, we get the first constructions of general multiparty protocols (with security against malicious adversaries and without an honest majority) which only make a {\em black-box} use of semi-honest oblivious transfer, or alternatively a black-box use of lower-level primitives such as enhanced trapdoor permutations or homomorphic encryption.
Golay Complementary Sequences Over the QAM Constellation
In this paper, we present new constructions for $M^{2}$-$QAM$
and $2M$ $Q$-$PAM$ Golay complementary sequences of length
$2^n$ for integer $n$, where $M=2^{m}$ for integer $m$. New
decision conditions are proposed to judge whether an offset pair
can be used to construct Golay complementary sequences over the
constellation, and with the new decision conditions, we prove
Conjecture 1 proposed by Ying Li~\cite{16}. We describe a new offset
pair and construct new $64$-$QAM$ Golay sequences based on it.
We also study $128$-$QAM$ Golay complementary
sequences, and propose a new decision condition to judge whether
sequences are $128$-$QAM$ Golay complementary.
1024XKS - A High Security Software Oriented Block Cipher Revisited
The block cipher 1024 has a key schedule that somewhat resembles that of IDEA: the user key is cyclically shifted by a fixed amount to form the round keys. In the key schedule of IDEA, this has led to weak keys. The primitive key schedule of 1024 may likewise permit related-key attacks. Although, to the knowledge of the author, weak keys or related-key attacks have not yet been published, there is a need to put things right. The new one-way key schedule of 1024XKS (eXtended Key Schedule) has pseudo-random round keys, which are obtained by using the cipher itself as a randomizer. Apart from that, the user key now comes in two sizes, 2048 bit and 4096 bit. Also, the order of the s-boxes has been changed to thwart attacks based on symmetry.
Stange's Elliptic Nets and Coxeter Group F4
Uncategorized
Stange, generalizing Ward's elliptic divisibility sequences, introduced elliptic nets, and showed an equivalence between elliptic nets and elliptic curves. This note relates Stange's recursion for elliptic nets to the Coxeter group F4.