All papers in 2014 (Page 11 of 1029 results)

Last updated:  2014-01-13
General Impossibility of Group Homomorphic Encryption in the Quantum World
Frederik Armknecht, Tommaso Gagliardoni, Stefan Katzenbeisser, Andreas Peter
Group homomorphic encryption represents one of the most important building blocks in modern cryptography. It forms the basis of widely-used, more sophisticated primitives, such as CCA2-secure encryption or secure multiparty computation. Unfortunately, recent advances in quantum computation show that many of the existing schemes completely break down once quantum computers reach maturity (mainly due to Shor’s algorithm). This leads to the challenge of constructing quantum-resistant group homomorphic cryptosystems. In this work, we prove the general impossibility of (abelian) group homomorphic encryption in the presence of quantum adversaries, when assuming the IND-CPA security notion as the minimal security requirement. To this end, we prove a new result on the probability of sampling generating sets of finite (sub-)groups if sampling is done with respect to an arbitrary, unknown distribution. Finally, we provide a sufficient condition on homomorphic encryption schemes for our quantum attack to work and discuss its satisfiability in non-group homomorphic cases. The impact of our results on recent fully homomorphic encryption schemes poses itself as an open question.
Last updated:  2014-11-24
Channel Equalization for Side Channel Attacks
Colin O'Flynn, Zhizhang (David) Chen
This paper introduces the use of channel equalization as a method of simplifying side channel analysis attacks, by effectively collapsing all points in a power measurement trace into a single random variable. This uses a simple Finite Impulse Response (FIR) linear equalizer, which has been studied extensively in communications systems. In addition, the estimation of a channel model is used in developing the Channel Estimation Analysis (CEA), which is a generic attack requiring similar assumptions to the Correlation Power Analysis (CPA) attack. Both channel equalization and the CEA attack are straightforward to apply to real systems, and Python examples are provided. Results of attacking unprotected AES-128 and protected AES-256RSM on a microcontroller are provided.
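The core signal-processing idea — collapsing a leakage spike that the measurement channel has smeared over several samples, using an FIR linear equalizer — can be sketched in a few lines of plain Python. The channel response, equalizer taps, and toy "trace" below are invented for illustration; this is not the paper's code.

```python
# Minimal FIR linear equalizer sketch (pure Python, no external deps).
# All numeric values below are made up for illustration.

def fir_filter(taps, trace):
    """Convolve a sample sequence with FIR taps (valid region only)."""
    n = len(taps)
    return [sum(taps[j] * trace[i + j] for j in range(n))
            for i in range(len(trace) - n + 1)]

# A toy leakage spike, smeared over several samples by an assumed channel.
channel = [0.5, 0.3, 0.2]                        # assumed impulse response
spike = [0.0] * 4 + [1.0] + [0.0] * 4
trace = fir_filter(channel, spike + [0.0] * 2)   # simulate the smearing

# A crude approximate-inverse equalizer re-concentrates the energy.
equalizer = [2.0, -1.2, 0.22]                    # made-up taps
equalized = fir_filter(equalizer, trace)
```

In an actual attack the equalizer would be estimated from the measured channel model rather than hand-picked, which is the step the paper's CEA builds on.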
Last updated:  2014-01-10
Twisting Edwards curves with isogenies
Mike Hamburg
Edwards’ elliptic curve form is popular in modern cryptographic implementations thanks to its fast, strongly unified addition formulas. Twisted Edwards curves with a = −1 are slightly faster, but their addition formulas are not complete over Fp where p ≡ 3 (mod 4). In this short note, we propose that designers specify Edwards curves, but implement scalar multiplications and the like using an isogenous twisted Edwards curve.
Last updated:  2014-01-10
Studying Potential Side Channel Leakages on an Embedded Biometric Comparison System
Maël Berthier, Yves Bocktaels, Julien Bringer, Hervé Chabanne, Taoufik Chouta, Jean-Luc Danger, Mélanie Favre, Tarik Graba
We study in this work the potential side channel leakages of a hardware biometric comparison system that has been designed for fingerprints. An embedded biometric comparison system aims at comparing a stored biometric datum with a freshly acquired one, without the need to send the stored datum outside the system. Here one may try to retrieve the stored data via side channels, similarly to embedded cryptographic modules, where one may try to exploit side channels to attack the modules. On the one hand, we show that we can find partial information by means of simple Side Channel Analysis that may help to retrieve the stored fingerprint. On the other hand, we illustrate that reconstructing the fingerprint remains non-trivial, and we give some simple countermeasures to further protect the comparison algorithm.
Last updated:  2014-07-08
Side-Channel Leakage through Static Power – Should We Care about in Practice? –
Amir Moradi
With shrinking technology sizes, the static power consumption of CMOS circuits is becoming a major concern. In this paper, we present the first practical results of exploiting the static power consumption of FPGA-based cryptographic devices in order to mount a key-recovery side-channel attack. The experiments presented here are based on three Xilinx FPGAs built on 65nm, 45nm, and 28nm process technologies. By means of a sophisticated measurement setup and methodology, we demonstrate an exploitable information leakage through the static power of the underlying FPGAs. The current work highlights the feasibility of side-channel analysis attacks using static power, which have been known for years but had not yet been performed and investigated in practice. This is a starting point for further research, and may have a significant impact on the efficiency of DPA countermeasures in the near future.
Last updated:  2014-01-08
An Efficient Pseudo-Random Generator with Applications to Public-Key Encryption and Constant-Round Multiparty Computation
Ivan Damgård, Jesper Buus Nielsen
We present a pseudo-random bit generator expanding a uniformly random bit-string r of length k/2, where k is the security parameter, into a pseudo-random bit-string of length 2k − log^2(k) using one modular exponentiation. In contrast to all previous high expansion-rate pseudo-random bit generators, no hashing is necessary. The security of the generator is proved relative to Paillier’s composite degree residuosity assumption. As a first application of our pseudo-random bit generator we exploit its efficiency to optimise Paillier’s crypto-system by a factor of (at least) 2 in both running time and usage of random bits. We then exploit the algebraic properties of the generator to construct an efficient protocol for secure constant-round multiparty function evaluation in the cryptographic setting. This construction gives an improvement in communication complexity over previous protocols in the order of nk^2, where n is the number of participants and k is the security parameter, resulting in a communication complexity of O(nk^2|C|) bits, where C is a Boolean circuit computing the function in question.
Last updated:  2014-01-08
Solving Random Subset Sum Problem by $l_{p}$-norm SVP Oracle
Gengran Hu, Yanbin Pan, Feng Zhang
It is well known that almost all random subset sum instances with density less than 0.6463... can be solved with an $l_{2}$-norm SVP oracle, by a result of Lagarias and Odlyzko. Later, Coster \emph{et al.} improved the bound to 0.9408... by using a different lattice. In this paper, we generalize this classical result to the $l_p$-norm. More precisely, we show that for $p\in \mathbb{Z}^{+}$, an $l_p$-norm SVP oracle can be used to solve almost all random subset sum instances with density bounded by $\delta_p$, where $\delta_1=0.5761$ and $\delta_p = 1/\left(\frac{1}{2^p}\log_2(2^{p+1}-2)+\log_2\left(1+\frac{1}{(2^p-1)\left(1-\left(\frac{1}{2^{p+1}-2}\right)^{2^p-1}\right)}\right)\right)$ for $p\geq 3$ (asymptotically, $\delta_p\approx 2^p/(p+2)$). Since $\delta_p$ increases to infinity as $p$ tends to infinity, an $l_p$-norm SVP oracle with bigger $p$ can solve more subset sum instances. An interesting phenomenon is that an $l_p$-norm SVP oracle with $p\geq 3$ can help solve almost all random subset sum instances with density one, which are believed to be the most difficult instances.
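The density bound $\delta_p$ is straightforward to evaluate numerically. A small Python sketch transcribing the displayed formula (no claims beyond the abstract itself):

```python
from math import log2

def delta_p(p):
    """Density bound delta_p for an l_p-norm SVP oracle (p >= 3),
    transcribed from the formula in the abstract."""
    t = 2 ** (p + 1) - 2          # 2^{p+1} - 2
    m = 2 ** p - 1                # 2^p - 1
    denom = log2(t) / 2 ** p + log2(1 + 1 / (m * (1 - (1 / t) ** m)))
    return 1 / denom

# For p >= 3 the bound exceeds density one, and it grows roughly
# like 2^p / (p + 2), matching the stated asymptotics.
```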
Last updated:  2014-01-08
Ultra-lightweight 8-bit Multiplicative Inverse Based S-box Using LFSR
Sourav Das
Most lightweight block ciphers are nibble-oriented, as the implementation of a 4-bit S-box is much more compact than that of an 8-bit S-box. This paper proposes a novel implementation of the multiplicative inverse for 8-bit S-boxes using an LFSR, requiring only 138 gate equivalents. It can be shown that if such S-boxes are adopted for the AES, it takes fewer than 50 gate equivalents per S-box in a parallel implementation. Canright's \cite{Canright} implementation of the AES S-box is five times more expensive than this method for AES-like S-boxes. With this scheme, a lightweight block cipher can be designed using an 8-bit S-box.
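For reference, the function such an S-box realizes — the multiplicative inverse in GF(2^8) — can be checked in plain software. The sketch below uses the AES reduction polynomial and computes the inverse by exponentiation; it is not the paper's LFSR circuit, and only pins down the input/output behavior.

```python
# Reference computation of the multiplicative inverse in GF(2^8),
# using the AES reduction polynomial. NOT the paper's LFSR design.

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiply modulo the AES polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^(2^8 - 2); gf_inv(0) is defined as 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r
```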
Last updated:  2014-01-08
Online/Offline Attribute-Based Encryption
Susan Hohenberger, Brent Waters
Attribute-based encryption (ABE) is a type of public key encryption that allows users to encrypt and decrypt messages based on user attributes. For instance, one can encrypt a message to any user satisfying the boolean formula (``crypto conference attendee'' AND ``PhD student'') OR ``IACR member''. One drawback is that encryption and key generation computational costs scale with the complexity of the access policy or number of attributes. In practice, this makes encryption and user key generation a possible bottleneck for some applications. To address this problem, we develop new techniques for ABE that split the computation for these algorithms into two phases: a preparation phase that does the vast majority of the work to encrypt a message or create a secret key *before* it knows the message or the attribute list/access control policy that will be used (or even the size of the list or policy). A second phase can then rapidly assemble an ABE ciphertext or key when the specifics become known. This concept is sometimes called ``online/offline'' encryption when only the message is unknown during the preparation phase; we note that the addition of unknown attribute lists and access policies makes ABE significantly more challenging. One motivating application for this technology is mobile devices: the preparation work can be performed while the phone is plugged into a power source, then it can later rapidly perform ABE operations on the move without significantly draining the battery.
Last updated:  2015-04-22
(De-)Constructing TLS
Markulf Kohlweiss, Ueli Maurer, Cristina Onete, Bjoern Tackmann, Daniele Venturi
TLS is one of the most widely deployed cryptographic protocols on the Internet; it is used to protect the confidentiality and integrity of transmitted data in various client-server protocols. Its non-standard use of cryptographic primitives, however, makes it hard to formally assess its security. It is in fact difficult to use traditional (well-understood) security notions for the key-exchange (here: handshake) and the encryption/authentication (here: record layer) parts of the protocol due to the fact that, on the one hand, traditional game-based notions do not easily support composition, and on the other hand, all TLS versions up to and including 1.2 combine the two phases in a non-standard way. In this paper, we provide a modular security analysis of the handshake in TLS version 1.2 and a slightly sanitized version of the handshake in the current draft of TLS version 1.3, following the constructive cryptography approach of Maurer and Renner (ICS 2011). We provide a deconstruction of the handshake into modular sub-protocols and a security proof for each such sub-protocol. We also show how these results can be combined with analyses of the respective record layer protocols, and the overall result is that in all cases the protocol constructs (unilaterally) secure channels between the two parties from insecure channels and a public-key infrastructure. This approach ensures that (1) each sub-protocol is proven in isolation and independently of the other sub-protocols, (2) the overall security statement proven can easily be used in higher-level protocols, and (3) TLS can be used in any composition with other secure protocols. In more detail, for the key-exchange step of TLS 1.2, we analyze the RSA-based and both Diffie-Hellman-based variants (with static and ephemeral server key share) under a non-randomizability assumption for RSA-PKCS and the Gap Diffie-Hellman assumption, respectively; in all cases we make use of random oracles. 
For the respective step of TLS 1.3, we prove security under the Decisional Diffie-Hellman assumption in the standard model. In all statements, we require additional standard computational assumptions on other primitives. In general, since the design of TLS is not modular, the constructive decomposition is less fine-grained than one might wish to have and than it is for a modular design. This paper therefore also suggests new insights into the intrinsic problems incurred by a non-modular protocol design such as that of TLS.
Last updated:  2014-01-08
Lazy Modulus Switching for the BKW Algorithm on LWE
Martin R. Albrecht, Jean-Charles Faugère, Robert Fitzpatrick, Ludovic Perret
Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from {0, 1}∗ or {−1, 0, 1}∗. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.
Last updated:  2014-01-07
Completeness for Symmetric Two-Party Functionalities - Revisited
Yehuda Lindell, Eran Omri, Hila Zarosim
Understanding the minimal assumptions required for carrying out cryptographic tasks is one of the fundamental goals of theoretical cryptography. A rich body of work has been dedicated to understanding the complexity of cryptographic tasks in the context of (semi-honest) secure two-party computation. Much of this work has focused on the characterization of trivial and complete functionalities (resp., functionalities that can be securely implemented unconditionally, and functionalities that can be used to securely compute all functionalities). All previous works define reductions via an ideal implementation of the functionality; i.e., $f$ reduces to $g$ if one can implement $f$ using an ideal box (or oracle) that computes the function $g$ and returns the output to both parties. Such a reduction models the computation of $f$ as an \emph{atomic operation}. However, in the real-world, protocols proceed in rounds, and the output is not learned by the parties simultaneously. In this paper we show that this distinction is significant. Specifically, we show that there exist symmetric functionalities (where both parties receive the same outcome), that are neither trivial nor complete under ``ideal-box reductions'', and yet the existence of a constant-round protocol for securely computing such a functionality implies infinitely-often oblivious transfer (meaning that it is secure for infinitely-many $n$'s). In light of the above, we propose an alternative definitional infrastructure for studying the triviality and completeness of functionalities.
Last updated:  2014-08-18
Two-round password-only authenticated key exchange in the three-party setting
Junghyun Nam, Kim-Kwang Raymond Choo, Juryon Paik, Dongho Won
We present the first provably-secure 3-party password-only authenticated key exchange (PAKE) protocol that can run in only two communication rounds. Our protocol is generic in the sense that it can be constructed from any 2-party PAKE protocol. The protocol is proven secure in a variant of the widely accepted model of Bellare, Pointcheval and Rogaway (2000) without any idealized assumptions on the cryptographic primitives used. We also investigate the security of the 2-round 3-party PAKE protocol of Wang, Hu and Li (2010), and demonstrate that this protocol cannot achieve implicit key authentication in the presence of an active adversary.
Last updated:  2014-05-30
Triple and Quadruple Encryption: Bridging the Gaps
Bart Mennink, Bart Preneel
Triple encryption is a cascade of three block cipher evaluations with independent keys, in order to enlarge its key size. This design is proven secure up to approximately 2^{kappa+min{kappa/2,n/2}} queries (by Bellare and Rogaway, EUROCRYPT 2006, and Gaži and Maurer, ASIACRYPT 2009), where kappa denotes the key size and n the block length of the underlying block cipher. On the other hand, the best known attack requires about 2^{kappa+n/2} queries (by Lucks, FSE 1998, and Gaži, CRYPTO 2013). These bounds are non-tight for kappa <= n. In this work, we close this gap. By strengthening the best known attack as well as tightening the security bound, we prove that triple encryption is tightly secure up to 2^{kappa+min{kappa,n/2}} queries. Additionally, we prove that the same tight security bound holds for quadruple encryption (which consists of four sequentially evaluated block ciphers), and derive improved security and attack bounds for cascades consisting of five or more rounds. This work particularly solves the longstanding open problem of proving tight security of the well-known Triple-DES construction.
Last updated:  2014-02-08
Tight Security Bounds for Triple Encryption
Jooyoung Lee
In this paper, we revisit the old problem asking the exact provable security of triple encryption in the ideal cipher model. For a blockcipher with key length k and block size n, triple encryption is known to be secure up to 2^{k+min{k/2,n/2}} queries, while the best attack requires 2^{k+min{k,n/2}} query complexity. So there is a gap between the upper and lower bounds for the security of triple encryption. We close this gap by proving the security up to 2^{k+min{k,n/2}} query complexity. With the DES parameters, triple encryption is secure up to 2^{82.5} queries, greater than the current bound of 2^{78.3} and comparable to 2^{83.5} for 2-XOR-cascade. We also analyze the security of two-key triple encryption, where the first and the third keys are identical. We prove that two-key triple encryption is secure up to 2^{k+min{k,n/2}} queries to the underlying blockcipher and 2^{min{k,n/2}} queries to the outer permutation. For the DES parameters, this result is interpreted as the security of two-key triple encryption up to 2^{32} plaintext-ciphertext pairs and 2^{81.7} blockcipher encryptions.
Last updated:  2015-06-18
Linkable Message Tagging: Solving the Key Distribution Problem of Signature Schemes
Felix Günther, Bertram Poettering
Digital signatures are one of the most extensively used cryptographic primitives today. It is well-understood that they guarantee practical security only if the corresponding verification keys are distributed authentically; however, arguably, satisfying solutions for the latter haven't been found yet, or at least aren't in large-scale deployment. This paper introduces a novel approach for cryptographic message authentication where this problem does not arise: A linkable message tagging scheme (LMT) identifies pairs of messages and accompanying authentication tags as related if and only if these tags were created using the same secret key. In other words, in contrast to signature schemes, our primitive does not aim at detecting whether individually considered messages originate from an explicitly specified entity, but instead decides whether all messages from a given collection originate from the same (possibly anonymous) source. The appealing consequence is that our primitive fully avoids public keys and hence elegantly sidesteps the key distribution problem of signature schemes. As an interesting application of LMT we envision an email authentication system with minimal user interaction. Email clients could routinely generate a secret LMT key upon their first invocation, and then equip all outgoing messages with corresponding tags. On the receiver's side, client software could automatically verify for incoming messages whether they indeed originate from the same entity as previously or subsequently received messages with identical sender address. Although this form of authentication does not provide as strong guarantees of message's origin as signature schemes would do, we do believe that trading the apparently discouraging obstacles implied by the authentic distribution of signature verification keys for the assumption that an attacker does not forge every message exchanged between parties is quite attractive. 
As technical contributions we formalize the notions of LMT and its (more efficient) variant CMT (classifiable message tagging), including corresponding notions of unforgeability. For both variants we propose a range of provably secure constructions, based on different hardness assumptions, with and without requiring random oracles.
Last updated:  2014-01-07
A Novel Modular Adder for One Thousand Bits and More Using Fast Carry Chains of Modern FPGAs
Marcin Rogawski, Kris Gaj, Ekawat Homsirikamol
In this paper, a novel low-latency family of adders and modular adders is proposed. This family efficiently combines the ideas of high-radix carry-save addition and parallel prefix networks. It also takes advantage of the fast carry chains of modern FPGAs. The implementation results reveal that these hybrid adders have great potential for the efficient implementation of modular addition of long integers used in various public-key cryptography schemes.
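Carry-save addition, one of the ideas the abstract combines, can be illustrated independently of the FPGA mapping. A minimal Python sketch of the bit-level 3:2 compression step (not the paper's high-radix variant):

```python
def carry_save_add(a, b, c):
    """3:2 compressor: reduce three addends to a sum word and a carry word,
    with the invariant a + b + c == s + carry."""
    s = a ^ b ^ c                                  # bitwise sum, no carries
    carry = ((a & b) | (a & c) | (b & c)) << 1     # majority, shifted left
    return s, carry

def csa_sum(values):
    """Add a list of integers using carry-save reduction; only the final
    step needs a real (carry-propagating) addition."""
    a, b = 0, 0
    for v in values:
        a, b = carry_save_add(a, b, v)
    return a + b
```

The point of the trick is that every intermediate step is carry-free, so only the last addition pays the carry-propagation latency — the part that the paper then maps onto fast FPGA carry chains.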
Last updated:  2014-01-07
Maximal Information Coefficient Analysis
Yanis Linge, Cecile Dumas, Sophie Lambert-Lacroix
In the domain of side-channel attacks, various statistical tools have been used successfully to retrieve a secret key, such as the Pearson correlation coefficient or mutual information. In this paper we propose to study the Maximal Information Coefficient (MIC), a non-parametric method introduced by Reshef et al. [13] to compare two random variables. The MIC is based on mutual information but is easier to implement and is robust to noise. We show how to apply this tool in the particular case of side-channel attacks. As is often the case in statistics, the benefits come with a drawback: the computational complexity of the MIC is high. Therefore, we propose a way to compute the MIC efficiently. The resulting attack, called Maximal Information Coefficient Analysis, is compared to the CPA [3] and the MIA [8]. The results show the interest of this approach when the leakage is noisy and badly modeled.
Last updated:  2014-12-09
Construction of New Families of MDS Diffusion Layers
S. M. Dehnavi, A. Mahmoodi Rishakani, M. R. Mirzaee Shamsabad, Hamidreza Maimani, Einollah Pasha
Diffusion layers are crucial components of symmetric ciphers. These components, along with suitable S-boxes, can make symmetric ciphers resistant against statistical attacks like linear and differential cryptanalysis. Conventional MDS diffusion layers, which are defined as matrices over finite fields, have been used in symmetric ciphers such as AES, Twofish and SNOW. In this paper, we study linear, linearized and nonlinear MDS diffusion layers. We investigate linearized diffusion layers, which are a generalization of conventional diffusion layers; these diffusion layers are used in symmetric ciphers like SMS4, Loiss and ZUC. We introduce some new families of linearized MDS diffusion layers and, as a consequence, we present a method for the construction of randomized linear diffusion layers over a finite field. Nonlinear MDS diffusion layers were introduced in Klimov's thesis; we investigate nonlinear MDS diffusion layers theoretically, and we present a new family of nonlinear MDS diffusion layers. We show that these nonlinear diffusion layers can be made randomized with a low implementation cost. An important fact about linearized and nonlinear diffusion layers is that they are more resistant against algebraic attacks in comparison to conventional diffusion layers. A special case of diffusion layers are (0,1)-diffusion layers; this type of diffusion layer is used in symmetric ciphers like ARIA. We examine (0,1)-diffusion layers and prove a theorem about them.
Finally, we study the linearized MDS diffusion layers of the symmetric ciphers Loiss, SMS4 and ZUC from a mathematical viewpoint.
Last updated:  2014-01-05
A Certificate-Based Proxy Signature with Message Recovery without Bilinear Pairing
Ali Mahmoodi, Javad Mohajeri, Mahmoud Salmasizadeh
In this paper, we propose the first provably secure certificate-based proxy signature with message recovery without bilinear pairing. The notion of certificate-based cryptography was initially introduced by Gentry in 2003, in order to simplify certificate management in traditional public key cryptography (PKC) and to solve the key escrow problem in identity-based cryptosystems. To date, a number of certificate-based proxy signature (CBPS) schemes from bilinear pairing have been proposed. Nonetheless, the total computation cost of a pairing is higher than that of scalar multiplication (e.g., over an elliptic curve group). Consequently, schemes without pairings would be more appealing in terms of efficiency. According to the available research in this regard, our scheme is the first provably secure CBPS scheme with message recovery which is based on the elliptic curve discrete logarithm problem. We prove the security of the presented scheme against existential forgery under adaptive chosen message and ID attacks in the random oracle model. Moreover, we also show how it is possible to convert this scheme to a CBPS scheme without message recovery. This scheme has many applications in situations with limited bandwidth and for power-constrained devices.
Last updated:  2014-01-07
Characterization of EME with Linear Mixing
Nilanjan Datta, Mridul Nandi
Encrypt-Mix-Encrypt is a type of SPRP construction in which a masked plaintext is encrypted in ECB mode, a non-linear mixing is performed, and then a second ECB-mode encryption is applied, whose output is masked to produce the ciphertext. Using a property of the binary field, the authors of the original construction proved that it is not SPRP secure if the mixing used is linear. In this paper, we observe that relaxing the mixing operation to some specific efficient linear mixing yields the PRP property of the construction. Moreover, choosing a linear mixing that gives the online property is not a difficult task. We can use this fact to construct an efficient online PRP using an Encrypt-Mix-Encrypt type of construction with the mix operation being a linear online mixing, making the construction efficient and online. We also show that the construction with linear mixing doesn't provide SPRP security even if we perform all the operations in a prime field instead of a binary field. Thus, we fully characterize EME with linear mixing.
Last updated:  2014-06-19
A Theoretical Study of Kolmogorov-Smirnov Distinguishers, Side-Channel Analysis vs. Differential Cryptanalysis
Annelie Heuser, Olivier Rioul, Sylvain Guilley
In this paper, we carry out a detailed mathematical study of two theoretical distinguishers based on the Kolmogorov-Smirnov (KS) distance. This includes a proof of soundness and the derivation of closed-form expressions, which can be split into two factors: one depending only on the noise and the other on the confusion coefficient of Fei, Luo and Ding. This allows one to have a deeper understanding of the relative influences of the signal-to-noise ratio and the confusion coefficient on the distinguisher’s performance. Moreover, one is able to directly compare distinguishers based on their closed-form expressions instead of using an evaluation metric that might obscure the actual performance and favor one distinguisher over the other. Furthermore, we formalize the link between the confusion coefficient and differential cryptanalysis, which shows that the more resistant an S-box is to differential attacks, the weaker it is against side-channel attacks, and vice versa.
Last updated:  2014-04-04
One Weird Trick to Stop Selfish Miners: Fresh Bitcoins, A Solution for the Honest Miner.
Ethan Heilman
A recent result in Bitcoin is the selfish mining strategy, in which a selfish cartel withholds blocks it mines to gain an advantage. This strategy is both incentive-compatible and harmful to Bitcoin. In this paper we introduce a new defense against selfish mining that improves on the previous best result: we raise the threshold of mining power necessary to profitably selfishly mine from 25% to 32% under all propagation advantages. While the security of our system uses unforgeable timestamps, it is robust to their compromise. Additionally, we discuss the difficulty a mining conspiracy would face in attempting to keep the compromise of our scheme secret, and we analyze incentives for getting miners to adopt these changes.
Last updated:  2014-01-06
Efficient Non-Interactive Zero Knowledge Arguments for Set Operations
Prastudy Fauzi, Helger Lipmaa, Bingsheng Zhang
We propose a non-interactive zero knowledge \emph{pairwise multiset sum equality test (PMSET)} argument in the common reference string (CRS) model that allows a prover to show that the given committed multisets $\AAA_j$ for $j \in \set{1, 2, 3, 4}$ satisfy $\AAA_1 \uplus \AAA_2 = \AAA_3 \uplus \AAA_4$, i.e., every element is contained in $\AAA_1$ and $\AAA_2$ exactly as many times as in $\AAA_3$ and $\AAA_4$. As a corollary to the PMSET argument, we present arguments that enable one to efficiently verify the correctness of various (multi)set operations, for example, that one committed set is the intersection or union of two other committed sets. The new arguments have constant communication and verification complexity (in group elements and group operations, respectively), whereas the CRS length and the prover's computational complexity are both proportional to the cardinality of the (multi)sets. We show that one can shorten the CRS length at the cost of a small increase of the communication and the verifier's computation.
Last updated:  2014-01-10
The analysis of the Keccak with the new method called parity
Mostafa Ghanei Yakhdan
The Keccak hash function was chosen as the winner of the SHA-3 competition. The function has an appropriate structure and so far it has been resistant against collision attacks. In the last few years of cryptanalytic effort, collisions for the second round and near collisions for the third round were found first. Afterwards, Shamir et al. succeeded in finding collisions for the fourth round and near collisions for five rounds of Keccak-224 and Keccak-256. In this paper we propose a method that, for messages shorter than the rate r, finds collisions in the first round and near collisions in the second round, deterministically rather than with some success probability. Our attack technique finds a text that has the same parity as the original text in the θ function. It is practical for all Keccak variants, and it finds collisions for the first round and near collisions for the second round in a few minutes. We have named this method "parity analysis".
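The column parity the attack refers to is the quantity computed at the start of Keccak's θ step. A minimal sketch of that definition on a 5×5 state of integer lanes (the attack itself — finding a second message with matching parities — is not shown):

```python
# Column parity as used by Keccak's theta step: P[x] is the XOR of the
# five lanes in column x. state[x][y] are integers representing lanes.

def column_parities(state):
    """Return the five column parities P[0..4] of a 5x5 Keccak state."""
    return [state[x][0] ^ state[x][1] ^ state[x][2] ^ state[x][3] ^ state[x][4]
            for x in range(5)]
```

Two states with identical column parities receive identical θ contributions, which is the structural property the described method exploits.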
Last updated:  2014-01-03
MaxMinMax problem and sparse equations over finite fields
Igor Semaev
The asymptotic complexity of sparse equation systems over the finite field $F_q$ is studied. Let the variable sets belong to a fixed family $\mathcal{X}=\{X_1,\ldots,X_m\}$ while the polynomials $f_i(X_i)$ are taken independently and uniformly at random from the set of all polynomials of degree $\leq q-1$ in each of the variables in $X_i$. In particular, for $|X_i|\le3$, $m=n$, we prove the average complexity of finding all solutions to $f_i(X_i)=0, i=1,\ldots,m$ by the Gluing algorithm (Semaev, Des. Codes Cryptogr., vol. 49 (2008), pp. 47--60) is at most $ q^{\frac{n}{5.7883}+O(\log n)}$ for arbitrary $\mathcal{X}$ and $q$. The proof results from a detailed analysis of the 3-MaxMinMax problem, a novel problem for hyper-graphs.
Last updated:  2014-01-02
$GF(2^n)$ Bit-Parallel Squarer Using Generalized Polynomial Basis For a New Class of Irreducible Pentanomials
Xi Xiong, Haining Fan
We present explicit formulae and complexities of bit-parallel $GF(2^{n})$ squarers for a new class of irreducible pentanomials $x^{n}+x^{n-1}+x^{k}+x+1$, where $n$ is odd and $1<k<(n-1)/2$. The squarer is based on the generalized polynomial basis of $GF(2^{n})$. Its gate delay matches the best results, while its XOR gate complexity is $n+1$, which is only about 2/3 of the current best results.
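The fact underlying all bit-parallel GF(2^n) squarers is that squaring in GF(2)[x] is linear: it merely interleaves zero bits between the coefficients before reduction, so the squarer reduces to a fixed XOR network. A small Python check of this identity (generic, not the paper's pentanomial-specific formulae):

```python
def poly_mul_gf2(a, b):
    """Carry-less (GF(2)[x]) multiplication of bit-encoded polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def spread_bits(a):
    """Insert a zero between consecutive bits: bit i moves to position 2i.
    Over GF(2), this is exactly the square of the polynomial a."""
    r, i = 0, 0
    while a:
        r |= (a & 1) << (2 * i)
        a >>= 1
        i += 1
    return r
```

The remaining cost of a concrete squarer is entirely in reducing the spread-out result modulo the chosen pentanomial, which is where the XOR-count and delay trade-offs of the paper arise.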
Last updated:  2014-01-02
Pseudorandom Generator Based on Hard Lattice Problem
Kuan Cheng
This paper studies how to construct a pseudorandom generator using hard lattice problems. We use a variation of the classical hard lattice problem \emph{Inhomogeneous Small Integer Solution} (ISIS), called \emph{Inhomogeneous Subset Sum Solution} (ISSS). ISSS itself is a hash function. By proving that the preimage sizes of ISSS hash function images are almost the same, we construct a pseudorandom generator using the method in \cite{GKL93}. We also construct a pseudoentropy generator using the method in \cite{HILL99}. Most theoretical PRG constructions are not feasible in practice, as they require rather long random bits as seeds. Our PRG construction only requires the seed length to be $O(n^{2}\log_{2} n)$, which is practically feasible.
Last updated:  2014-01-01
Comments on: EIBAS - an efficient identity broadcast authentication scheme in wireless sensor networks
Yalin Chen, Jue-Sam Chou
Recently, Shim et al. proposed an efficient identity-based broadcast authentication scheme, based on Tso et al.’s IBS scheme with message recovery, to achieve the security requirements of wireless sensor networks (WSNs). They claim that their scheme achieves these security requirements and mitigates DoS attacks by limiting the number of signature verification failures. However, we found that the scheme cannot attain the claimed security level, as we demonstrate in this article.