All papers in 2012 (Page 3 of 733 results)

Last updated:  2012-09-25
Commitments and Efficient Zero-Knowledge Proofs from Learning Parity with Noise
Abhishek Jain, Stephan Krenn, Krzysztof Pietrzak, Aris Tentes
We construct a perfectly binding string commitment scheme whose security is based on the learning parity with noise (LPN) assumption, or equivalently, the hardness of decoding random linear codes. Our scheme not only allows for a simple and efficient zero-knowledge proof of knowledge for committed values (essentially a $\Sigma$-protocol), but also for such proofs showing any kind of relation amongst committed values, i.e., proving that messages $m_0,\ldots,m_u$ are such that $m_0=C(m_1,\ldots,m_u)$ for any circuit $C$. To get soundness which is exponentially small in a security parameter $t$, and when the zero-knowledge property relies on the LPN problem with secrets of length $\ell$, our $3$-round protocol has communication complexity $O(t|C|\ell\log(\ell))$ and computational complexity of $O(t|C|\ell)$ bit operations. The hidden constants are small, and the computation consists mostly of computing inner products of bit-vectors.
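The abstract notes that the computation consists mostly of inner products of bit-vectors; that cost is visible already in how LPN samples are formed. A minimal sketch of the LPN sample distribution (illustrative only, not the paper's commitment scheme; parameter names are hypothetical):

```python
import random

def lpn_samples(secret, num, tau, rng):
    """Generate LPN samples (a, <a,s> XOR e): a is uniform over GF(2)^ell,
    and the noise bit e is 1 with probability tau (Bernoulli noise)."""
    ell = len(secret)
    out = []
    for _ in range(num):
        a = [rng.randrange(2) for _ in range(ell)]
        # the dominant cost: an inner product of bit-vectors over GF(2)
        b = sum(ai & si for ai, si in zip(a, secret)) % 2
        e = 1 if rng.random() < tau else 0
        out.append((a, b ^ e))
    return out

rng = random.Random(1)
s = [rng.randrange(2) for _ in range(16)]     # hypothetical secret length
samples = lpn_samples(s, 10, 0.125, rng)
```

With `tau = 0` the samples are exactly the noiseless linear equations `<a, s>`; the hardness of LPN comes from the noise.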
Last updated:  2013-03-01
Constant-Overhead Secure Computation of Boolean Circuits using Preprocessing
Ivan Damgard, Sarah Zakarias
We present a protocol for securely computing a Boolean circuit $C$ in presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming access to a preprocessing functionality that is not given the inputs to compute on. For a large number of players the work done by each player is the same as the work needed to compute the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods would only give a constant error.
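For context on the matrix-verification component: the classic Freivalds-style check over GF(2) tests a claimed product $A\cdot B = C$ in quadratic time per trial, with each trial missing an error with probability at most 1/2, so $k$ trials give error $2^{-k}$ at cost $O(k n^2)$. This is the baseline the abstract improves upon (a sketch of the baseline, not the paper's algorithm):

```python
import random

def freivalds_gf2(A, B, C, trials, rng):
    """Probabilistically check A @ B == C over GF(2) in O(trials * n^2) time."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] & v[j] for j in range(n)) % 2 for i in range(n)]
    for _ in range(trials):
        r = [rng.randrange(2) for _ in range(n)]
        # compare A(Br) with Cr: two matrix-vector products per trial
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False                    # definitely A @ B != C
    return True                             # false-accept prob <= 2**-trials

rng = random.Random(0)
n = 8
A = [[rng.randrange(2) for _ in range(n)] for _ in range(n)]
B = [[rng.randrange(2) for _ in range(n)] for _ in range(n)]
C = [[sum(A[i][k] & B[k][j] for k in range(n)) % 2 for j in range(n)]
     for i in range(n)]
```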
Last updated:  2016-03-10
Entangled Cloud Storage
Giuseppe Ateniese, Özgür Dagdelen, Ivan Damgard, Daniele Venturi
Entangled cloud storage (Aspnes et al., ESORICS 2004) enables a set of clients to "entangle" their files into a single *clew* to be stored by a (potentially malicious) cloud provider. The entanglement makes it impossible to modify or delete a significant part of the clew without affecting *all* files encoded in the clew. A clew keeps the files in it private but still lets each client recover his own data by interacting with the cloud provider; no cooperation from other clients is needed. At the same time, the cloud provider is discouraged from altering or overwriting any significant part of the clew, as this would imply that none of the clients can recover their files. We put forward the first simulation-based security definition for entangled cloud storage, in the framework of *universal composability* (Canetti, FOCS 2001). We then construct a protocol satisfying our security definition, relying on an *entangled encoding scheme* based on privacy-preserving polynomial interpolation; entangled encodings were originally proposed by Aspnes et al. as useful tools for the purpose of data entanglement. As a contribution of independent interest we revisit the security notions for entangled encodings, putting forward stronger definitions than previous work (which, for instance, did not consider collusion between clients and the cloud provider). Protocols for entangled cloud storage find application in the cloud setting, where clients store their files on a remote server and need to be assured that the cloud provider will not modify or delete their data illegitimately. Current solutions, e.g., based on Provable Data Possession and Proofs of Retrievability, require the server to be challenged regularly to provide evidence that the clients' files are stored *at a given time*.
Entangled cloud storage provides an alternative approach where any single client operates implicitly on behalf of all others, i.e., as long as one client's files are intact, the entire remote database continues to be safe and unblemished.
Last updated:  2012-09-03
Enabling 3-share Threshold Implementations for any 4-bit S-box
Sebastian Kutzner, Phuong Ha Nguyen, Axel Poschmann
Threshold Implementation (TI) is an elegant and widely accepted countermeasure against first-order Differential Power Analysis (DPA) in side-channel attacks. The 3-share TI is the most efficient version of TI, but so far it could only be applied to 50\% of all 4-bit S-boxes. In this paper, we study the limitations of decomposition and introduce factorization to enable the 3-share TI for any optimal 4-bit S-box. We propose an algorithm which can decompose any optimal 4-bit S-box into quadratic vectorial Boolean functions with a time complexity of $2^{19}$. Furthermore, we use our new methodology in combination with decomposition to optimize ciphers utilizing many different S-boxes, and, to highlight the strength of our new methodology, we construct a 3-share Threshold Implementation of SERPENT, which was believed to be impossible until now. Lastly, we show how to implement all SERPENT S-boxes with only one mutual core.
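The decomposition step hinges on checking whether component functions are quadratic, which reduces to computing algebraic degrees via the standard ANF (Möbius) transform. A small sketch of that textbook subroutine (the decomposition search itself is the paper's contribution):

```python
def anf(tt):
    """Moebius transform: truth table of length 2^n -> ANF coefficient vector."""
    a = list(tt)
    n = len(a).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for x in range(len(a)):
            if x & step:
                a[x] ^= a[x ^ step]
    return a

def algebraic_degree(tt):
    """Largest monomial weight with a nonzero ANF coefficient."""
    return max((bin(u).count("1") for u, c in enumerate(anf(tt)) if c), default=0)

def component_degrees(sbox):
    """Degrees of the four coordinate functions of a 4-bit S-box."""
    return [algebraic_degree([(sbox[x] >> j) & 1 for x in range(16)])
            for j in range(4)]

# PRESENT's S-box (public specification): it is cubic, so a direct 3-share TI
# is not possible and a decomposition into quadratic functions is needed.
PRESENT_SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
```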
Last updated:  2012-09-03
On 3-share Threshold Implementations for 4-bit S-boxes
Sebastian Kutzner, Phuong Ha Nguyen, Axel Poschmann, Huaxiong Wang
One of the most promising lightweight hardware countermeasures against SCA attacks is the so-called Threshold Implementation (TI) countermeasure. In this work we resolve many of the remaining open issues regarding its applicability. In particular, our contribution is three-fold: first, we define which optimal (from a cryptographic point of view) S-boxes can be implemented with a 3-share TI. Second, we introduce two methodologies to efficiently implement these S-boxes. Third, as an example, we successfully apply these methodologies to PRESENT and are able to decrease the area requirements of its protected S-box by 57\%.
Last updated:  2016-07-19
On the Implausibility of Constant-Round Public-Coin Zero-Knowledge Proofs
Yi Deng, Juan Garay, San Ling, Huaxiong Wang, Moti Yung
We consider the problem of whether there exist non-trivial constant-round public-coin zero-knowledge (ZK) proofs. To date, in spite of high interest in the above, there is no definite answer to the question. We focus on the type of ZK proofs that admit a universal simulator (which handles all malicious verifiers), and show a connection between the existence of such proof systems and a seemingly unrelated “program understanding” problem: for a natural class of constant-round public-coin ZK proofs (which we call “canonical,” since all known ZK protocols fall into this category), a session prefix output by the universal simulator can actually be used to distinguish a non-trivial property of the next-step functionality of the verifier’s code. Our result can be viewed as extended new evidence against the existence of constant-round public-coin ZK proofs, since the existence of such a proof system will bring about either one of the following: (1) a positive result for the above program-understanding problem, a typical goal in reverse-engineering attempts, commonly believed to be notoriously hard, or (2) a rather unfathomable simulation strategy beyond the only known (straight-line simulation) technique for their argument counterpart, as we also argue. Earlier negative evidence on constant-round public-coin ZK proofs is the result of Barak, Lindell and Vadhan [FOCS ’03], which was based on the incomparable assumption of the existence of certain entropy-preserving hash functions, now (due to further work) known not to be achievable from standard assumptions via black-box reduction. The core of our technical contribution is showing that there exists a single verifier step for constant-round public-coin ZK proofs whose functionality (rather than its code) is crucial for a successful simulation. 
This is proved by combining a careful analysis of the behavior of a set of verifiers in the above protocols and during simulation, with an improved structure-preserving version of the well-known Babai-Moran Speedup (de-randomization) Theorem, a key tool of independent interest.
Last updated:  2012-09-03
Compact Implementation and Performance Evaluation of Hash Functions in ATtiny Devices
Josep Balasch, Bariş Ege, Thomas Eisenbarth, Benoit Gérard, Zheng Gong, Tim Güneysu, Stefan Heyse, Stéphanie Kerckhof, François Koeune, Thomas Plos, Thomas Pöppelmann, Francesco Regazzoni, François-Xavier Standaert, Gilles Van Assche, Ronny Van Keer, Loïc van Oldeneel tot Oldenzeel, Ingo von Maurich
The pervasive diffusion of electronic devices in security- and privacy-sensitive applications has boosted research in cryptography. In this context, the study of lightweight algorithms has been a very active direction over the last years. In general, symmetric cryptographic primitives are good candidates for low-cost implementations. For example, several previous works have investigated the performance of block ciphers on various platforms. Motivated by the recent SHA3 competition, this paper extends these studies to another family of cryptographic primitives, namely hash functions. We implemented different algorithms on an ATMEL AVR ATtiny45 8-bit microcontroller, and provide their performance evaluation using different figures of merit. All the implementations were carried out with the goal of minimizing code size and memory utilization, and were evaluated using a common interface. As part of our contribution, we additionally decided to make all the corresponding source code available on a web page, under an open-source license. We hope that this paper provides a good basis for researchers and embedded system designers who need to include more and more functionalities in next-generation smart devices.
Last updated:  2013-03-03
Succinct Malleable NIZKs and an Application to Compact Shuffles
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
Depending on the application, malleability in cryptography can be viewed as either a flaw or — especially if sufficiently understood and restricted — a feature. In this vein, Chase, Kohlweiss, Lysyanskaya, and Meiklejohn recently defined malleable zero-knowledge proofs, and showed how to control the set of allowable transformations on proofs. As an application, they construct the first compact verifiable shuffle, in which one such controlled-malleable proof suffices to prove the correctness of an entire multi-step shuffle. Despite these initial steps, a number of natural problems remained: (1) their construction of controlled-malleable proofs relies on the inherent malleability of Groth-Sahai proofs and is thus not based on generic primitives; (2) the classes of allowable transformations they can support are somewhat restrictive. In this paper, we address these issues by providing a generic construction of controlled-malleable proofs using succinct non-interactive arguments of knowledge, or SNARGs for short. Our construction can support very general classes of transformations, as we no longer rely on the transformations that Groth-Sahai proofs can support.
Last updated:  2012-09-03
On pseudorandomization of information-theoretically secure schemes without hardness assumptions
Koji Nuida
A recent work by Nuida and Hanaoka (ICITS 2009) provided a proof technique for the security of information-theoretically secure cryptographic schemes in which the random input tape is implemented by a pseudorandom generator (PRG). In this paper, we revisit their proof technique and generalize it by introducing a trade-off factor, which includes the original proof technique as a special case and leaves room for improving the preceding result. Secondly, we consider two issues with the preceding result: one is the requirement of a hardness assumption in their proof; the other is the gap between non-uniform and uniform computational models that appears when transferring from the exact security formulation adopted in the preceding result to the usual asymptotic security. We point out that both issues can be resolved by using a PRG proposed by Impagliazzo, Nisan and Wigderson (STOC 1994) against memory-bounded distinguishers, instead of the usual PRGs against time-bounded distinguishers. We also give a precise formulation of a computational model described by Impagliazzo et al., and, using it, perform a numerical comparison showing that, despite the significant advantage of removing hardness assumptions, our result is still better than, or at least competitive with, the preceding result from a quantitative viewpoint. The results of this paper suggest a new motivation for using PRGs against distinguishers with computational constraints other than time complexity in practical situations rather than just theoretical works.
Last updated:  2012-09-03
Scalable Deniable Group Key Establishment
Kashi Neupane, Rainer Steinwandt, Adriana Suarez Corona
The popular Katz-Yung compiler from CRYPTO 2003 can be used to transform unauthenticated group key establishment protocols into authenticated ones. In this paper we present a modification of Katz and Yung's construction which maintains the round complexity of their compiler, but for "typical" unauthenticated group key establishments adds authentication in such a way that deniability is achieved as well. As an application, a deniable authenticated group key establishment with three rounds of communication can be constructed.
Last updated:  2012-09-03
Hierarchical Identity-Based (Lossy) Trapdoor Functions
Alex Escala, Javier Herranz, Benoit Libert, Carla Rafols
Lossy trapdoor functions, introduced by Peikert and Waters (STOC'08), have received a lot of attention in recent years because of their wide range of applications in theoretical cryptography. The notion has recently been extended to the identity-based setting by Bellare \textit{et al.} (Eurocrypt'12). We provide one more step in this direction, by considering the notion of hierarchical identity-based (lossy) trapdoor functions (HIB-TDFs). Hierarchical identity-based cryptography has proved very useful both for practical applications and to establish theoretical relations with other cryptographic primitives. The notion of security for IB-TDFs put forward by Bellare \textit{et al.} easily extends to the hierarchical scenario, but an (H)IB-TDF secure in this sense is not known to generically imply other related primitives with security against adaptive-id adversaries, not even IND-ID-CPA secure encryption. Our first contribution is to define a new security property for (H)IB-TDFs. We show that functions satisfying this property imply secure cryptographic primitives in the adaptive identity-based setting: these include encryption schemes with semantic security under chosen-plaintext attacks, deterministic encryption schemes, and (non-adaptive) hedged encryption schemes that maintain some security when messages are encrypted using randomness of poor quality. We emphasize that some of these primitives were unrealized in the (H)IB setting prior to this work. As a second contribution, we describe the first pairing-based HIB-TDF realization. This is also the first example of a hierarchical trapdoor function based on traditional number-theoretic assumptions: so far, constructions were only known under lattice assumptions. Our HIB-TDF construction is based on techniques that differ from those of Bellare {\it et al.} in that it uses a hierarchical predicate encryption scheme as a key ingredient. 
The resulting HIB-TDF is proved to satisfy the new security definition, against either selective or, for hierarchies of constant depth, adaptive adversaries. In either case, we only need the underlying predicate encryption system to be selectively secure.
Last updated:  2012-09-03
Are We Compromised? Modelling Security Assessment Games
Viet Pham, Carlos Cid
Security assessments are an integral part of organisations' strategies for protecting their digital assets and critical IT infrastructure. In this paper we propose a game-theoretic modelling of a particular form of security assessment -- one which addresses the question ``are we compromised?''. We do so by extending the recently proposed game ``FlipIt'', which itself can be used to model the interaction between defenders and attackers under the Advanced Persistent Threat (APT) scenario. Our extension gives players the option to ``test'' the state of the game before making a move. This allows one to study the scenario in which organisations have the option to perform periodic security assessments of such nature, and the benefits they may bring.
Last updated:  2014-06-23
Privacy Amplification with Asymptotically Optimal Entropy Loss
Nishanth Chandran, Bhavana Kanukurthi, Rafail Ostrovsky, Leonid Reyzin
We study the problem of ``privacy amplification'': key agreement between two parties who both know a weak secret w, such as a password. (Such a setting is ubiquitous on the internet, where passwords are the most commonly used security device.) We assume that the key agreement protocol is taking place in the presence of an active computationally unbounded adversary Eve. The adversary may have partial knowledge about w, so we assume only that w has some entropy from Eve's point of view. Thus, the goal of the protocol is to convert this non-uniform secret w into a uniformly distributed string $R$ that is fully secret from Eve. R may then be used as a key for running symmetric cryptographic protocols (such as encryption, authentication, etc.). Because we make no computational assumptions, the entropy in R can come only from w. Thus such a protocol must minimize the entropy loss during its execution, so that R is as long as possible. The best previous results have entropy loss of $\Theta(\kappa^2)$, where $\kappa$ is the security parameter, thus requiring the password to be very long even for small values of $\kappa$. In this work, we present the first protocol for information-theoretic key agreement that has entropy loss LINEAR in the security parameter. The result is optimal up to constant factors. We achieve our improvement through a somewhat surprising application of error-correcting codes for the edit distance. The protocol can be extended to provide also ``information reconciliation,'' that is, to work even when the two parties have slightly different versions of w (for example, when biometrics are involved).
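The passive-adversary core of privacy amplification is a 2-universal hash applied to the weak secret: by the leftover hash lemma, the output is close to uniform whenever w has enough min-entropy. The paper's contribution is achieving this against an *active* Eve with only linear entropy loss; the sketch below shows only the classical passive building block (all names hypothetical):

```python
import random

def hash_extract(w_bits, seed_matrix):
    """Apply a random GF(2) matrix to the weak secret: R = M w.
    The family {w -> M w} over random M is 2-universal, so the
    leftover hash lemma applies in the passive setting."""
    return [sum(m & b for m, b in zip(row, w_bits)) % 2 for row in seed_matrix]

rng = random.Random(7)
w = [rng.randrange(2) for _ in range(64)]                       # weak secret w
M = [[rng.randrange(2) for _ in range(64)] for _ in range(16)]  # public seed
R = hash_extract(w, M)                                          # extracted key
```

Note the extractor is GF(2)-linear in w, which is one reason an active adversary who can tamper with the seed must be handled by additional protocol machinery.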
Last updated:  2012-09-03
Constant Ciphertext Length in CP-ABE
Nishant Doshi, Devesh Jinwala
Ciphertext-policy attribute-based encryption (CP-ABE) is a technique in which a user's secret key is associated with a set of attributes, and the user can decrypt a message only if the attributes in the ciphertext's policy match the attributes in the secret key. The existing methods that use reasonably computable decryption policies produce ciphertexts whose size grows at least linearly with the number of attributes, with additional pairing operations during encryption and decryption. In this paper, we propose a scheme in which the ciphertext remains constant in length, irrespective of the number of attributes. Our scheme works for a threshold case: the attributes in a policy must be a subset of the attributes in a secret key. The security of the proposed scheme is based on the Decisional Bilinear Diffie-Hellman (DBDH) problem.
Last updated:  2015-02-27
Authenticity, Integrity and Proof of Existence for Long-Term Archiving: a Survey
Martín A. G. Vigil, Daniel Cabarcas, Alexander Wiesmaier, Johannes Buchmann
Electronic archives are increasingly being used to store information that needs to be available for a long time such as land register information and medical records. In order for the data in such archives to remain useful, their integrity and authenticity must be protected over their entire life span. Also, in many cases it must be possible to prove that the data existed at a certain point in time. In this paper we survey solutions that provide long-term integrity, authenticity, and proof of existence of archived data. We analyze which trust assumptions they require and compare their efficiency. Based on our analysis, we discuss open problems and promising research directions.
Last updated:  2014-01-14
Almost Perfect Algebraic Immune Functions with Good Nonlinearity
Meicheng Liu, Dongdai Lin
In the last decade, algebraic and fast algebraic attacks are regarded as the most successful attacks on LFSR-based stream ciphers. Since the notion of algebraic immunity was introduced, the properties and constructions of Boolean functions with maximum algebraic immunity have been researched in a large number of papers. However, there are few results with respect to Boolean functions with provable good immunity against fast algebraic attacks. In previous literature, only Carlet-Feng function, which is affine equivalent to discrete logarithm function, was proven to be optimal against fast algebraic attacks as well as algebraic attacks. In this paper, it is proven that a family of $2k$-variable Boolean functions, including the function recently constructed by Tang et al. [IEEE TIT 59(1): 653--664, 2013], are almost perfect algebraic immune for any integer $k\geq 3$. More exactly, they achieve optimal algebraic immunity and almost perfect immunity to fast algebraic attacks. The functions of such family are balanced and have optimal algebraic degree. A lower bound on their nonlinearity is obtained based on the work of Tang et al., which is better than that of Carlet-Feng function. It is also checked for $3\leq k\leq 9$ that the exact nonlinearity of such functions is very good, which is slightly smaller than that of Carlet-Feng function, and some functions of this family even have a slightly larger nonlinearity than Tang et al.'s function. To sum up, among the known functions with provable good immunity against fast algebraic attacks, the functions of this family make a trade-off between the exact value and the lower bound of nonlinearity.
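The exact nonlinearity figures mentioned above are computed with the fast Walsh-Hadamard transform via $\mathrm{nl}(f)=2^{n-1}-\frac{1}{2}\max_a|W_f(a)|$. A small sketch of that standard computation (not tied to the paper's specific function family):

```python
def nonlinearity(tt):
    """Nonlinearity of an n-variable Boolean function (0/1 truth table):
    nl(f) = 2^(n-1) - max_a |W_f(a)| / 2, via the fast Walsh-Hadamard
    transform (butterfly, O(n * 2^n))."""
    F = [1 - 2 * b for b in tt]            # map 0/1 to +1/-1
    size = len(F)
    step = 1
    while step < size:
        for i in range(0, size, 2 * step):
            for j in range(i, i + step):
                u, v = F[j], F[j + step]
                F[j], F[j + step] = u + v, u - v
        step *= 2
    return (size - max(abs(w) for w in F)) // 2

# Example: the 4-variable bent function x0 x1 + x2 x3 attains the maximum
# nonlinearity 2^3 - 2^1 = 6, while any affine function has nonlinearity 0.
bent = [((x & 1) & ((x >> 1) & 1)) ^ (((x >> 2) & 1) & ((x >> 3) & 1))
        for x in range(16)]
```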
Last updated:  2013-08-12
The low-call diet: Authenticated Encryption for call counting HSM users
Mike Bond, George French, Nigel P. Smart, Gaven J. Watson
We present a new mode of operation for obtaining authenticated encryption suited for use in banking and government environments where cryptographic services are only available via a Hardware Security Module (HSM) which protects the keys but offers a limited API. The practical problem is that despite the existence of better modes of operation, modern HSMs still provide nothing but a basic (unauthenticated) CBC mode of encryption, and since they mediate all access to the key, solutions must work around this. Our mode of operation makes only a single call to the HSM, yet provides a secure authenticated encryption scheme; authentication is obtained by manipulation of the plaintext being passed to the HSM via a call to an unkeyed hash function. The scheme offers a considerable performance improvement compared to more traditional authenticated encryption techniques which must be implemented using multiple calls to the HSM. Our new mode of operation is provided with a proof of security, on the assumption that the underlying block cipher used in the CBC mode is a strong pseudorandom permutation, and that the hash function is modelled as a random oracle.
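One natural reading of the construction is: hash the message with an unkeyed hash, append the digest, and make a single CBC-encryption call to the HSM; the receiver decrypts and re-checks the digest. The toy sketch below follows that shape only. The stand-in "block cipher" is deliberately trivial and insecure, padding is elided, and the paper's exact scheme and proof may differ:

```python
import hashlib

BLOCK = 16

def toy_block_cipher(key, block):
    """Stand-in for the HSM's block cipher: XOR with a key-derived pad.
    NOT a secure cipher; it merely has the right interface and is its
    own inverse, which keeps the sketch short."""
    pad = hashlib.sha256(b"toy-blk" + key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def cbc_encrypt(key, iv, pt):
    """The single 'HSM call': plain (unauthenticated) CBC encryption."""
    assert len(pt) % BLOCK == 0
    out, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        blk = bytes(a ^ b for a, b in zip(pt[i:i + BLOCK], prev))
        prev = toy_block_cipher(key, blk)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key, iv, ct):
    out, prev = [], iv
    for i in range(0, len(ct), BLOCK):
        blk = toy_block_cipher(key, ct[i:i + BLOCK])
        out.append(bytes(a ^ b for a, b in zip(blk, prev)))
        prev = ct[i:i + BLOCK]
    return b"".join(out)

def ae_encrypt(key, iv, msg):
    """Append an unkeyed hash of the message, then make ONE CBC call."""
    assert len(msg) % BLOCK == 0          # padding elided for brevity
    return cbc_encrypt(key, iv, msg + hashlib.sha256(msg).digest())

def ae_decrypt(key, iv, ct):
    pt = cbc_decrypt(key, iv, ct)
    msg, tag = pt[:-32], pt[-32:]
    return msg if hashlib.sha256(msg).digest() == tag else None
```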
Last updated:  2012-09-03
Updating attribute in CP-ABE: A New Approach
Nishant Doshi, Devesh Jinwala
In Ciphertext-Policy Attribute-Based Encryption (CP-ABE), attributes are attached to the user's secret key and an access policy is attached to the ciphertext. Only if the attributes in a user's secret key satisfy the policy can that user decrypt the ciphertext. However, such a scenario also necessitates periodic updating of the secret key as attributes change. According to our observations, the existing attempts at doing so are not efficient. In this paper, we propose a new approach to add, update or delete the value of a particular attribute efficiently without knowledge of the other attributes.
Last updated:  2012-09-03
"Metaproofs" (and their Cryptographic Applications)
Alfredo De Santis, Moti Yung
We develop a non-interactive proof-system which we call "Metaproof" (mu-NIZK proof system); it provides a proof of "the existence of a proof to a statement". This meta-mathematical notion indeed seems redundant when we deal with proving NP statements, but in the context of zero-knowledge theory and cryptography it has a large variety of applications. Combined with another tool we develop which we call "on-line simulatable NIZK proof system", it is the key tool used to solve the open problem of the existence of a many prover non-interactive zero-knowledge system (MP-NIZK proof system). This problem was presented by Micali when the important notion of non-interactive zero-knowledge proofs (NIZK) was first suggested and implemented for a sole prover. The solution immensely enlarges the domain of applications of the NIZK model. The work also provides a new connection between bounded (single-theorem) non-interactive zero-knowledge proofs and the unbounded (multi-theorem) one. This may help in reducing the complexity assumption upon which to base NIZK systems. Remark: This is a full version (with more details, more material, and with new proofs) of the Crypto 1990 paper on Metaproof. Over the years, the concept has been used and reinvented for specific settings beyond the original ones, by others; (which has made it more useful). Recently, we were asked about this paper and about details, so here they are! For historical reasons, except for this remark, this version is presented as it was in the above mentioned date under the above affiliations, though we did not pursue publication before!
Last updated:  2013-09-30
Protocol Misidentification Made Easy with Format-Transforming Encryption
Kevin P. Dyer, Scott E. Coull, Thomas Ristenpart, Thomas Shrimpton
Deep packet inspection (DPI) technologies provide much-needed visibility and control of network traffic using port-independent protocol identification, where a network flow is labeled with its application-layer protocol based on packet contents. In this paper, we provide the first comprehensive evaluation of a large set of DPI systems from the point of view of protocol misidentification attacks, in which adversaries on the network attempt to force the DPI to mislabel connections. Our approach uses a new cryptographic primitive called format-transforming encryption (FTE), which extends conventional symmetric encryption with the ability to transform the ciphertext into a format of our choosing. We design an FTE-based record layer that can encrypt arbitrary application-layer traffic, and we experimentally show that this forces misidentification for all of the evaluated DPI systems. This set includes a proprietary, enterprise-class DPI system used by large corporations and nation-states. We also show that using FTE as a proxy system incurs no latency overhead and as little as 16\% bandwidth overhead compared to standard SSH tunnels. Finally, we integrate our FTE proxy into the Tor anonymity network and demonstrate that it evades real-world censorship by the Great Firewall of China.
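The key mechanism behind FTE-style format transformation is *unranking*: treating the ciphertext as an integer and mapping it to the string with that index in the target format's language (real FTE does this for regular languages via DFA-based ranking). A toy sketch for the simplest possible format, fixed-length lowercase strings:

```python
import string

ALPHABET = string.ascii_lowercase

def unrank(n, length):
    """Return the n-th string (base-26 order) of the given length over a-z."""
    assert 0 <= n < 26 ** length
    chars = []
    for _ in range(length):
        n, r = divmod(n, 26)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

def rank(s):
    """Inverse of unrank: index of s among lowercase strings of its length."""
    n = 0
    for ch in s:
        n = n * 26 + ALPHABET.index(ch)
    return n

# ciphertext bytes -> integer -> string that conforms to the toy format
ct = b"\x07\x5a"                             # stand-in ciphertext
encoded = unrank(int.from_bytes(ct, "big"), 4)
decoded = rank(encoded).to_bytes(2, "big")   # the receiver inverts the map
```

Because rank and unrank are exact inverses, the transformation loses no ciphertext bits while making the wire format match whatever language a DPI classifier expects.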
Last updated:  2012-09-03
Efficient Query Integrity for Outsourced Dynamic Databases
Qingji Zheng, Shouhuai Xu, Giuseppe Ateniese
As databases are increasingly outsourced to the cloud, data owners require various security assurances. This paper investigates one particular assurance, query integrity, by which a database querier (either the data owner or a third party) can verify that its queries were faithfully executed by the cloud server with respect to the outsourced database. Query integrity is investigated in the setting of dynamic databases, where the outsourced databases can be updated by the data owners as needed. We present a formal security definition of query integrity and a provably secure, efficient construction. Our solution improves upon the state-of-the-art solutions by additionally allowing aggregate queries and more flexible join queries. In addition, we provide better performance by eliminating a linear factor in the extra storage complexity incurred for security purposes. Our solution also achieves a trade-off between computational and communication complexities.
Last updated:  2012-09-03
A Method for Generating Full Cycles by a Composition of NLFSRs
Elena Dubrova
Non-Linear Feedback Shift Registers (NLFSRs) are a generalization of Linear Feedback Shift Registers (LFSRs) in which the current state is a non-linear function of the previous state. The interest in NLFSRs is motivated by their ability to generate pseudo-random sequences which are usually hard to break with existing cryptanalytic methods. However, it is still not known how to construct large $n$-stage NLFSRs which generate full cycles of all $2^n$ possible states. This paper presents a method for generating full cycles by a composition of NLFSRs. First, we show that an $n \cdot k$-stage register with period $O(2^{2n})$ can be constructed from $k$ $n$-stage NLFSRs by adding to their feedback functions a logic block of size $O(n \cdot k)$. This logic block implements Boolean functions representing the set of pairs of states whose successors have to be exchanged in order to join cycles. Then, we show how to join all cycles into one by using one more logic block of size $O(n \cdot k^2)$ and an extra time step. The presented method is feasible for generating very large full cycles.
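For intuition about the cycle structure being joined, a tiny register's state-transition graph can be enumerated exhaustively. The sketch below steps a hypothetical 3-stage Fibonacci NLFSR, with an invertible feedback of the form $f = s_0 \oplus g(s_1,\dots,s_{n-1})$ so the graph is a union of pure cycles, and lists the cycle lengths; it is an illustration, not the paper's composition method:

```python
def nlfsr_step(state, n, feedback):
    """One step of an n-stage (Fibonacci) NLFSR: shift right, insert the
    feedback bit at the top position."""
    return (state >> 1) | (feedback(state) << (n - 1))

def cycles(n, feedback):
    """Cycle lengths of the state-transition graph over all 2^n states.
    Assumes the feedback makes the step invertible (f = s_0 XOR g(rest)),
    so every state lies on a cycle."""
    seen = [False] * (1 << n)
    lengths = []
    for s in range(1 << n):
        if seen[s]:
            continue
        length, x = 0, s
        while not seen[x]:
            seen[x] = True
            x = nlfsr_step(x, n, feedback)
            length += 1
        lengths.append(length)
    return sorted(lengths)

# linear feedback (pure rotation) vs. a nonlinear feedback s0 XOR (s1 AND s2)
linear_cycles = cycles(3, lambda s: s & 1)
nonlinear_cycles = cycles(3, lambda s: (s & 1) ^ (((s >> 1) & 1) & ((s >> 2) & 1)))
```

Joining such disjoint cycles into one full cycle of all $2^n$ states is exactly what the paper's added logic blocks accomplish.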
Last updated:  2012-09-03
On the Multiple Fault Attack on RSA Signatures with LSBs of Messages Unknown
Lidong Han, Wei Wei, Mingjie Liu
In CHES 2009, Coron, Joux, Kizhvatov, Naccache and Paillier (CJKNP) introduced a fault attack on RSA signatures with partially unknown messages. They factored the RSA modulus $N$ using a single faulty signature, and increased the bound on the unknown part of the message via a multiple fault attack; however, the complexity of the multiple fault attack is exponential in the number of faulty signatures. At RSA 2010, the attack was improved to run in time polynomial in the number of faults. Both previous multiple fault attacks deal with the general case in which the unknown part of the message lies in the middle. This paper handles the special situation in which some least significant bits of the message are unknown. First, we describe a sample attack utilizing the technique of solving the simultaneous Diophantine approximation problem, with a bound on the unknown message of $N^{\frac{1}{2}-\frac{1}{2\ell}}$, where $\ell$ is the number of faulty signatures. Our attacks are heuristic but very efficient in practice. Furthermore, the bound can be extended up to $N^{(\frac{1}{2})^{1+\frac{1}{\ell}}}$ by the Cohn-Heninger technique. A comparison between the previous attacks and the new attacks with LSBs of the message unknown is given via simulation tests.
Last updated:  2012-09-06
Desynchronization Attack on RAPP Ultralightweight Authentication Protocol
Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref
RAPP (RFID Authentication Protocol with Permutation) is a recently proposed efficient ultralightweight authentication protocol. The operations used in this protocol are totally different from those of other existing ultralightweight protocols, due to the use of newly introduced data-dependent permutations and the avoidance of modular arithmetic operations and biased logical operations such as AND and OR. The designers of RAPP claimed that the protocol resists desynchronization attacks since the last message of the protocol is sent by the reader and not by the tag. This letter challenges this assumption and shows that RAPP is vulnerable to a desynchronization attack. The attack has a remarkable probability of success and is effective whether Hamming-weight-based or modular-based rotations are used by the protocol.
Last updated:  2012-09-23
Recursive Linear and Differential Cryptanalysis of Ultralightweight Authentication Protocols
Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref
Privacy faces serious challenges in the ubiquitous computing world. To address this problem, research in recent years has focused on the design and analysis of privacy-friendly ultralightweight authentication protocols. In less than a decade, many ultralightweight authentication protocols have been proposed. Although successful cryptanalyses have been published for almost all of them, most of these attacks are based on ad-hoc methods that do not extend to a large class of ultralightweight protocols, so this research area still suffers from a lack of structured cryptanalysis and evaluation methods. In this paper, we introduce new frameworks for full-disclosure attacks on ultralightweight authentication protocols, based on the new concepts of recursive linear and recursive differential cryptanalysis. Both exploit triangular functions in ultralightweight protocols and recover all secret data stored in the tag in a recursive manner. The recursive linear attack is applied to the Yeh et al. and SLMAP authentication protocols. This attack is passive, deterministic (i.e., the attacker retrieves all the secrets with probability one), and requires only a single authentication session. The recursive differential attack is more powerful and can be applied to protocols on which the linear attack may not work. We show the effectiveness of this attack on the LMAP++ and SASI authentication protocols. The differential attack is probabilistic, active in the sense that the attacker only needs to block some specific messages, and requires a few authentication sessions.
Last updated:  2012-08-22
Designated Verifier Threshold Proxy Signature Scheme without Random Oracles
Mohammad Beheshti-Atashgah, Majid Bayat, Mahmoud Gardeshi, Mohammad Reza Aref
In a $(t,n)$ designated verifier threshold proxy signature scheme, an original signer can delegate his/her signing power to $n$ proxy signers such that any $t$ or more of the $n$ proxy signers can sign messages on behalf of the original signer, but $t-1$ or fewer cannot generate a valid proxy signature. The signature is issued for a designated receiver, and therefore only the designated receiver can validate the proxy signature. In this paper, we propose a new designated verifier threshold proxy signature scheme and show that it is provably secure in the standard model. The security of the proposed scheme is based on the GBDH assumption, and the scheme satisfies all the security requirements of threshold proxy signature schemes.
Last updated:  2012-08-22
Short communication: An interpretation of the Linux entropy estimator
Benjamin Pousse
The Linux random number generator (LRNG) aims to produce random numbers, with all the limitations imposed by a deterministic machine. Two recent analyses exist for this generator. These analyses provide strong cryptographic details about the LRNG; however, both fail to give a mathematical explanation of the embedded entropy estimator. In this paper we propose an interpretation using Newton polynomial interpolation.
Last updated:  2012-08-22
Computational Soundness without Protocol Restrictions
Michael Backes, Ankit Malik, Dominique Unruh
The abstraction of cryptographic operations by term algebras, called Dolev-Yao models, is essential in almost all tool-supported methods for verifying security protocols. Recently significant progress was made in establishing computational soundness results: these results prove that Dolev-Yao style models can be sound with respect to actual cryptographic realizations and security definitions. However, these results came at the cost of imposing various constraints on the set of permitted security protocols: e.g., dishonestly generated keys must not be used, key cycles need to be avoided, and many more. In a nutshell, the cryptographic security definitions did not adequately capture these cases, but were considered carved in stone; in contrast, the symbolic abstractions were bent to reflect cryptographic features and idiosyncrasies, thereby requiring adaptations of existing verification tools. In this paper, we pursue the opposite direction: we consider a symbolic abstraction for public-key encryption and identify two cryptographic definitions called PROG-KDM (programmable key-dependent message) security and MKE (malicious-key extractable) security that we jointly prove to be sufficient for obtaining computational soundness without imposing assumptions on the protocols using this abstraction. In particular, dishonestly generated keys obtained from the adversary can be sent, received, and used. The definitions can be met by existing cryptographic schemes in the random oracle model. This yields the first computational soundness result for trace-properties that holds for arbitrary protocols using this abstraction (in particular permitting to send and receive dishonestly generated keys), and that is accessible to all existing tools for reasoning about Dolev-Yao models without further adaptations.
Last updated:  2013-12-19
Exploiting Collisions in Addition Chain-based Exponentiation Algorithms Using a Single Trace
Uncategorized
Neil Hanley, HeeSeok Kim, Michael Tunstall
Show abstract
Uncategorized
Public key cryptographic algorithms are typically based on group exponentiation algorithms where the exponent is private. In a collision attack, an adversary typically seeks to determine whether two operations in an exponentiation have the same input. In this paper we extend this to an adversary who seeks to determine whether the output of one operation is used as the input to another. We describe implementations of these attacks against a 192-bit scalar multiplication over an elliptic curve that require only a single power consumption trace to succeed with high probability. Moreover, our attacks do not require any knowledge of the input to the exponentiation algorithm. These attacks would therefore be applicable to algorithms such as EC-DSA, where the exponent is ephemeral, or to implementations where the exponent is blinded. Moreover, we define attacks against exponentiation algorithms that are considered to be resistant to collision attacks and prove that collision attacks are applicable to all addition chain-based exponentiation algorithms. Hence, we demonstrate that a side-channel resistant implementation of a group exponentiation algorithm will require countermeasures that introduce enough noise that an attack is not practical.
Last updated:  2012-10-02
Cryptanalysis of Two Dynamic ID-based Remote User Authentication Schemes for Multi-Server Architecture
Ding Wang, Chun-guang Ma, De-li Gu, Zhen-shan Cui
Understanding security failures of cryptographic protocols is the key to both patching existing protocols and designing future schemes. In NSS'10, Shao and Chin showed that Hsiang and Shih's dynamic ID-based remote user authentication scheme for multi-server environments is vulnerable to a server spoofing attack and fails to preserve user anonymity, and further proposed an improved version which is claimed to be efficient and secure. In this study, however, we demonstrate that, although Shao-Chin's scheme possesses many attractive features, it still cannot achieve the claimed security goals, and we report the following flaws: (1) It cannot withstand an offline password guessing attack under their non-tamper-resistance assumption on the smart card; (2) It fails to provide user anonymity; (3) It is prone to a user impersonation attack. More recently, Li et al. found that Sood et al.'s dynamic ID-based authentication protocol for multi-server architecture is still vulnerable to several kinds of attacks and presented a new scheme that attempts to overcome the identified weaknesses. Notwithstanding their intentions, Li et al.'s scheme has still been found vulnerable to various known attacks by researchers. In this study, we perform a further cryptanalysis and uncover two other vulnerabilities: (1) It cannot achieve user anonymity, the essential goal of a dynamic ID-based scheme; (2) It is susceptible to an offline password guessing attack. The proposed cryptanalysis discourages any use of the two schemes under investigation in practice and reveals some subtleties and challenges in designing this type of scheme.
Last updated:  2012-08-21
An Efficient Signcryption Scheme from q-Diffie-Hellman Problems
Jayaprakash Kar
Confidentiality and authenticity are two fundamental security requirements of public-key cryptography, achieved by encryption schemes and digital signatures respectively. Here we present a provably secure signcryption scheme in the random oracle model, obtained by modifying the scheme of Libert et al. [2]. Our scheme is more efficient and secure than Libert et al.'s scheme. Tan [1] proved that Libert et al.'s scheme is not secure against non-adaptive chosen-ciphertext attacks, and it has also been proved that the semantically secure symmetric encryption scheme used in Libert et al.'s scheme is not sufficient to guarantee security against adaptive chosen-ciphertext attacks. Here we propose a modified version of Libert et al.'s scheme, whose security is proven under two assumptions, namely the Strong Diffie-Hellman (SDH) and Diffie-Hellman Inversion (DHI) assumptions, in the random oracle model.
Last updated:  2012-08-22
Approaches for the Parallelization of Software Implementation of Integer Multiplication
Vladislav Kovtun, Andrew Okhrimenko
In this paper we consider several approaches for increasing the performance of software implementations of an integer multiplication algorithm on 32-bit and 64-bit platforms via parallelization. The main idea of the parallelization is the delayed-carry mechanism which the authors proposed earlier [11]. The delayed carry removes the dependency between the loop iterations that accumulate sums of products, which allows the iterations to execute in parallel in separate threads. After the accumulation threads complete, the final result must be corrected by assimilating the carries. The first approach optimizes parallelization for two execution threads; the second approach evolves the first and is oriented towards three or more execution threads. The proposed approaches increase the total computational complexity of the algorithm compared with a single execution thread, but decrease the total execution time on a multi-core CPU.
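The delayed-carry idea can be modelled in a few lines (a single-threaded Python sketch of the arithmetic only; the limb size and any thread partitioning are illustrative assumptions, not the code of [11]):

```python
def mul_delayed_carry(a_limbs, b_limbs, base_bits=32):
    """Schoolbook multiplication with delayed carries: every column sum
    is accumulated with no carry propagation inside the loops, so the
    accumulation loops have no cross-iteration dependency and could be
    split across threads; carries are assimilated in a final pass."""
    mask = (1 << base_bits) - 1
    cols = [0] * (len(a_limbs) + len(b_limbs))
    for i, a in enumerate(a_limbs):
        for j, b in enumerate(b_limbs):
            cols[i + j] += a * b      # no carry handling here
    out, carry = [], 0
    for c in cols:                    # sequential carry assimilation
        c += carry
        out.append(c & mask)
        carry = c >> base_bits
    return out
```

In a real multi-threaded version each thread would accumulate a disjoint subset of columns (or use private accumulators merged before assimilation), so that no synchronization is needed until the final carry pass.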
Last updated:  2012-12-07
Improved Security Bounds for Key-Alternating Ciphers via Hellinger Distance
John Steinberger
A $t$-round key alternating cipher can be viewed as an abstraction of AES. It defines a cipher $E$ from $t$ fixed public permutations $P_1, \ldots, P_t : \{0,1\}^n \rightarrow \{0,1\}^n$ and a key $k = k_0\Vert \cdots \Vert k_t \in \{0,1\}^{n(t+1)}$ by setting $E_{k}(x) = k_t \oplus P_t(k_{t-1} \oplus P_{t-1}(\cdots k_1 \oplus P_1(k_0 \oplus x) \cdots))$. The indistinguishability of $E_k$ from a truly random permutation by an adversary who also has oracle access to the (public) random permutations $P_1, \ldots, P_t$ was investigated for $t = 1$ by Even and Mansour and, much later, for $t \geq 2$ by Bogdanov et al. The former proved indistinguishability up to $2^{n/2}$ queries for $t = 1$ while the latter proved indistinguishability up to $2^{2n/3}$ queries for $t \geq 2$ (ignoring low-order terms). Our contribution is to improve the analysis of Bogdanov et al$.$ by showing security up to $2^{3n/4}$ queries for $t \geq 3$. Given that security cannot exceed $2^{\frac{t}{t+1}n}$ queries, this in particular achieves a tight bound for the case $t = 3$, whereas, previously, tight bounds had only been achieved for $t = 1$ (by Even and Mansour) and for $t = 2$ (by Bogdanov et al$.$). Our main technique is an improved analysis of the elegant \emph{sample distinguishability} game introduced by Bogdanov et al. More specifically, we succeed in eliminating adaptivity by considering the Hellinger advantage of an adversary, a notion that we introduce here. To our knowledge, our result constitutes the first time Hellinger distance (a standard measure of ``distance'' between random variables, and a cousin of statistical distance) is used in a cryptographic indistinguishability proof.
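The construction $E_k$ can be written down directly from the formula above (a toy sketch with $n = 8$ and shuffled lookup tables standing in for the public permutations $P_i$; all parameters are illustrative only, far below any secure size):

```python
import random

def key_alternating_encrypt(x, round_keys, perms):
    """E_k(x) = k_t XOR P_t(k_{t-1} XOR ... P_1(k_0 XOR x) ...)."""
    assert len(round_keys) == len(perms) + 1
    state = x
    for k, perm in zip(round_keys[:-1], perms):
        state = perm[state ^ k]       # XOR round key, apply public P_i
    return state ^ round_keys[-1]     # final whitening key k_t

# Toy instantiation: n = 8 bits, t = 3 rounds.
n, t = 8, 3
rng = random.Random(1)
perms = []
for _ in range(t):
    table = list(range(2 ** n))
    rng.shuffle(table)                # a fixed public random permutation
    perms.append(table)
keys = [rng.randrange(2 ** n) for _ in range(t + 1)]
ciphertext = key_alternating_encrypt(0x5A, keys, perms)
```

Since each round is an XOR followed by a bijective table lookup, $E_k$ is itself a permutation of the $n$-bit block space for every key.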
Last updated:  2013-04-01
Short Signatures From Diffie-Hellman: Realizing Short Public Key
Uncategorized
Jae Hong Seo
Show abstract
Uncategorized
Efficient signature schemes whose security relies on well-established assumptions are important. There are few schemes based on standard assumptions such as Diffie-Hellman (DH) in the standard model. We present a new approach for a (hash-and-sign) DH-based signature scheme in the standard model. First, we combine two known techniques, programmable hashes and a tag-based signature scheme, to obtain a short signature scheme with a somewhat short public key of $\Theta(\frac{\lambda}{\log\lambda})$ group elements. Then, we develop a new technique for an {\em asymmetric trade} between the public key and the random tags, which are part of signatures. Roughly speaking, we can dramatically reduce the public key size by adding one field element to each signature. More precisely, our proposal produces a public key of $\Theta(\sqrt{\frac{\lambda}{\log \lambda}})$ group elements, where $\lambda$ is the security parameter. The signature size is still short, requiring two elements in a group of order $p$ and two integers in $\mathbb{Z}_p$. In our approach, we can guarantee security against adversaries that make an a-priori bounded number of queries to the signing oracle (which we call {\em bounded CMA}); i.e., the maximum number $q$ of allowable signing queries is prescribed at parameter-generation time. Note that for polynomial $q$, we limit ourselves to dealing only with polynomial-time reductions in all security proofs.
Last updated:  2012-08-21
Mix-Compress-Mix Revisited: Dispensing with Non-invertible Random Injection Oracles
Mohammad Reza Reyhanitabar, Willy Susilo
We revisit the problem of building dual-model secure (DMS) hash functions that are simultaneously provably collision resistant (CR) in the standard model and provably pseudorandom oracle (PRO) in an idealized model. Designing a DMS hash function was first investigated by Ristenpart and Shrimpton (ASIACRYPT 2007); they put forth a generic approach, called Mix-Compress-Mix (MCM), and showed the feasibility of the MCM approach with a secure (but inefficient) construction. An improved construction was later presented by Lehmann and Tessaro (ASIACRYPT 2009). The construction proposed by Ristenpart and Shrimpton requires a non-invertible (pseudo-) random injection oracle (PRIO), and the Lehmann-Tessaro construction requires a non-invertible random permutation oracle (NIRP). Despite showing the feasibility of realizing PRIO and NIRP objects in theory (using ideal ciphers and (trapdoor) one-way permutations), these constructions suffer from several efficiency and implementation issues, as pointed out by their designers and briefly reviewed in this paper. In contrast to the previous constructions, we show that constructing a DMS hash function does not require any PRIO or NIRP, and hence there is no need for additional (trapdoor) one-way permutations. In fact, Ristenpart and Shrimpton posed the question of whether MCM is secure under easy-to-invert mixing steps as an open problem in their paper. We resolve this question in the affirmative in the fixed-input-length (FIL) hash setting. More precisely, we show that one can sandwich a provably CR function, which is sufficiently compressing, between two random invertible permutations to build a provably DMS compression function. Any multi-property-preserving (MPP) domain extender that preserves CR and PRO can then be used to convert such a DMS compression function to a full-fledged DMS hash function.
Interestingly, there are efficient off-the-shelf candidates for all the three ingredients (provably CR compression functions, random invertible permutations, and MPP domain extenders) from which one can choose to implement such a DMS hash function in practice. Further, we also explain the implementation options as well as a concrete instantiation.
Last updated:  2012-08-21
Cryptanalysis on a novel unconditionally secure oblivious polynomial evaluation protocol
Wang Qinglong, Xu Li
Vanishree et al. proposed a novel unconditionally secure oblivious polynomial evaluation protocol and claimed that it fulfills both the sender's and the receiver's security. In this paper, we cryptanalyze this protocol. We find a fatal flaw: the protocol cannot achieve the receiver's security at all, and we present the detailed analysis.
Last updated:  2012-08-21
Improved Key Recovery Attacks on Reduced-Round AES in the Single-Key Setting
Patrick Derbez, Pierre-Alain Fouque, Jérémy Jean
In this paper, we revisit meet-in-the-middle attacks on AES in the single-key model and improve on the attacks of Dunkelman, Keller and Shamir from Asiacrypt 2010. We present the best attack on 7 rounds of AES-128, where the data/time/memory complexities are below $2^{100}$. Moreover, we are able to extend the number of rounds to reach attacks on 8 rounds for both AES-192 and AES-256. This gives the best attacks on those two versions, with a data complexity of $2^{107}$ chosen plaintexts, a memory complexity of $2^{96}$, and a time complexity of $2^{172}$ for AES-192 and $2^{196}$ for AES-256. Finally, we also describe the best attack on 9 rounds of AES-256, with $2^{120}$ chosen plaintexts and time and memory complexities of $2^{203}$. All these attacks were found by carefully studying the number of reachable multisets in the attacks of Dunkelman et al.
Last updated:  2012-08-21
A j-lanes tree hashing mode and j-lanes SHA-256
Shay Gueron
j-lanes hashing is a tree mode that splits an input message into j slices, computes the digest of each slice independently, and outputs the hash value of the concatenation of these digests. We demonstrate the performance advantage of j-lanes hashing on SIMD architectures by coding a 4-lanes SHA-256 implementation and measuring its performance on the latest 3rd Generation Intel® Core™. For messages ranging from 2KB to 132KB in length, the 4-lanes SHA-256 is between 1.5 and 1.97 times faster than the fastest publicly available implementation (that we are aware of), and between 1.9 and 2.5 times faster than OpenSSL 1.0.1c. For long messages, there is no significant performance difference between different choices of j. We show that the 4-lanes SHA-256 is faster than the two SHA-3 finalists (BLAKE and Keccak) that have a published tree-mode implementation. We explain why j-lanes hashing will be even faster on the future AVX2 architecture with 256-bit registers. This suggests that standardizing a tree mode for hash functions (SHA-256 in particular) would deliver significant performance benefits for a multitude of algorithms and usages.
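As a rough functional model of the mode (the real proposal slices the message in a SIMD-friendly way, e.g. by interleaving words; the contiguous split below is a simplification of ours, not the exact j-lanes specification):

```python
import hashlib

def j_lanes_sha256(message: bytes, j: int = 4) -> bytes:
    """Split the message into j slices, hash each slice independently
    with SHA-256, then hash the concatenation of the j digests."""
    step = (len(message) + j - 1) // j or 1   # ceiling; guard empty input
    digests = b"".join(
        hashlib.sha256(message[i * step:(i + 1) * step]).digest()
        for i in range(j)
    )
    return hashlib.sha256(digests).digest()
```

The j inner SHA-256 computations are independent, which is exactly what a SIMD implementation runs side by side in j lanes; only the short final hash over 32*j bytes is sequential.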
Last updated:  2012-08-18
Efficient Signatures of Knowledge and DAA in the Standard Model
David Bernhard, Georg Fuchsbauer, Essam Ghadafi
Direct Anonymous Attestation (DAA) is one of the most complex cryptographic protocols deployed in practice. It allows an embedded secure processor known as a Trusted Platform Module (TPM) to attest to the configuration of its host computer without violating the owner's privacy. DAA has been standardized by the Trusted Computing Group. The security of the DAA standard and all existing schemes is analyzed in the random oracle model. We provide the first constructions of DAA in the standard model, that is, without relying on random oracles. As a building block for our schemes, we construct the first efficient standard-model signatures of knowledge, which have many applications beyond DAA.
Last updated:  2012-11-25
On the Semantic Security of Functional Encryption Schemes
Uncategorized
Manuel Barbosa, Pooya Farshim
Show abstract
Uncategorized
Functional encryption (FE) is a powerful cryptographic primitive that generalizes many asymmetric encryption systems proposed in recent years. Syntax and security definitions for general FE were recently proposed by Boneh, Sahai, and Waters (BSW) (TCC 2011) and independently by O'Neill (ePrint 2010/556). In this paper we revisit these definitions, identify several shortcomings in them, and propose a new definitional approach that overcomes these limitations. Our definitions display good compositionality properties and allow us to obtain new feasibility and impossibility results for adaptive token-extraction attack scenarios that shed further light on the potential reach of general FE for practical applications. The main contributions of the paper are the following: - We show that the BSW definition of semantic security fails to reject intuitively insecure FE schemes where a ciphertext leaks more about an encrypted message than that which can be recovered from an image under the supported functionality. Our definition (as O'Neill's) does not suffer from this problem. - We introduce an orthogonal notion of \emph{setup security} that rejects all FE schemes where the master secret key may give unwanted power to the TA, allowing the recovery of extra information from images under the supported functionality. We prove FE schemes supporting \emph{all-or-nothing} functionalities are intrinsically setup-secure and further show that many well-known functionalities \emph{are} all-or-nothing. - We extend the equivalence result of O'Neill between indistinguishability and semantic security to restricted \emph{adaptive} token-extraction attacks (the standard notion of security for, e.g., IBE and ABE schemes). We establish that this equivalence holds for the large class of all-or-nothing functionalities. Conversely, we show that the proof technique used to establish this equivalence cannot be applied to schemes supporting a one-way function. 
- We show that the notable \emph{inner-product} functionality introduced by Katz, Sahai, and Waters (EUROCRYPT 2008) can be used to encode a one-way function under the Small Integer Solution (SIS) problem, and hence natural approaches to prove its (restricted) adaptive security fail. This complements the equivalence result of O'Neill for the non-adaptive case, and leaves open the question of proving the semantic security of existing inner-product encryption schemes.
Last updated:  2013-01-28
Sender Equivocable Encryption Schemes Secure against Chosen-Ciphertext Attacks Revisited
Zhengan Huang, Shengli Liu, Baodong Qin
In Eurocrypt 2010, Fehr et al. proposed the first sender equivocable encryption scheme secure against chosen-ciphertext attack (NC-CCA) and proved that NC-CCA security implies security against selective opening chosen-ciphertext attack (SO-CCA). The NC-CCA security proof of the scheme relies on security against substitution attacks of a new primitive, the ``cross-authentication code''. However, the security of the cross-authentication code cannot be guaranteed when all the keys used in the code are exposed. Our key observation is that in the NC-CCA security game, the randomness used in the generation of the challenge ciphertext is exposed to the adversary. This random information can be used to recover all the keys involved in the cross-authentication code, and to forge a ciphertext (as in a substitution attack on the cross-authentication code) that is different from but related to the challenge ciphertext. The response of the decryption oracle on the forged ciphertext leaks information, which an adversary can employ to spoil the NC-CCA security proof of Fehr et al.'s scheme encrypting multi-bit plaintexts. In this paper, we provide a security analysis of Fehr et al.'s scheme, showing that its NC-CCA security proof is flawed, by presenting an attack. We point out that Fehr et al.'s scheme encrypting single-bit plaintexts can be refined to achieve NC-CCA security, free of any cross-authentication code. We introduce a strong notion of cross-authentication code, apply it to Fehr et al.'s scheme, and show that the new version of the scheme achieves NC-CCA security for multi-bit plaintexts.
Last updated:  2013-06-06
On the Simplicity of Converting Leakages from Multivariate to Univariate – Case Study of a Glitch-Resistant Masking Scheme –
Uncategorized
Amir Moradi, Oliver Mischke
Show abstract
Uncategorized
Several masking schemes to protect cryptographic implementations against side-channel attacks have been proposed. A few considered the glitches, and provided security proofs in presence of such inherent phenomena happening in logic circuits. One which is based on multi-party computation protocols and utilizes Shamir’s secret sharing scheme was presented at CHES 2011. It aims at providing security for hardware implementations – mainly of AES – against those sophisticated side-channel attacks that also take glitches into account. One part of this article deals with the practical issues and relevance of the aforementioned masking scheme. Following the recommendations given in the extended version of the mentioned article, we first provide a guideline on how to implement the scheme for the simplest settings. Constructing an exemplary design of the scheme, we provide practical side-channel evaluations based on a Virtex-5 FPGA. Our results demonstrate that the implemented scheme is indeed secure against univariate power analysis attacks given a basic measurement setup. In the second part of this paper we show how using very simple changes in the measurement setup opens the possibility to exploit multivariate leakages while still performing a univariate attack. Using these techniques the scheme under evaluation can be defeated using only a moderate number of measurements. This is applicable not only to the scheme showcased here, but also to most other known masking schemes where the shares of sensitive values are processed in adjacent clock cycles.
Last updated:  2012-08-18
A Quasigroup Based Random Number Generator for Resource Constrained Environments
Matthew Battey, Abhishek Parakh
This paper proposes a pseudo random number generator (PRNG) based on quasigroups. The proposed PRNG has low memory requirements, is autonomous, and the quality of its output stream of random numbers is better than that of other available standard PRNG implementations (commercial and open source) in the majority of tests. Comparisons are done using the benchmark NIST Statistical Test Suite and compression tools. Results are presented for the quality of the raw stream of random numbers and for encryption results using these random numbers.
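To make the quasigroup idea concrete, here is a minimal sketch (entirely our own illustration: a shuffled cyclic Latin square serves as the quasigroup operation table and a simple feedback rule drives the stream; the paper's actual generator may differ in both respects):

```python
import random

def random_latin_square(n, seed=0):
    # Start from the cyclic Latin square L[i][j] = (i + j) mod n and
    # relabel rows and columns by random permutations; the result is
    # still a Latin square, i.e. a quasigroup operation table.
    rng = random.Random(seed)
    rows, cols = list(range(n)), list(range(n))
    rng.shuffle(rows)
    rng.shuffle(cols)
    return [[(rows[i] + cols[j]) % n for j in range(n)] for i in range(n)]

def quasigroup_prng(latin_square, seed, count):
    """Produce `count` symbols by repeatedly applying the quasigroup
    operation to its own output (a feedback-style transformation)."""
    n = len(latin_square)
    state = seed % n
    out = []
    for i in range(count):
        state = latin_square[state][(state + i) % n]
        out.append(state)
    return out
```

The whole state is one symbol plus the n-by-n table, which is consistent with the low-memory claim for small n; statistical quality, of course, depends on the actual construction and parameters, not on this toy.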
Last updated:  2012-09-13
Some Connections Between Primitive Roots and Quadratic Non-Residues Modulo a Prime
Uncategorized
Sorin Iftene
Show abstract
Uncategorized
In this paper we present some interesting connections between primitive roots and quadratic non-residues modulo a prime. Using these connections, we propose polynomial-time deterministic algorithms for generating primitive roots for primes of special forms (for example, for safe primes).
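One concrete instance of such a connection, for the safe-prime case, can be sketched as follows (a standard fact in our own code; whether it coincides with the paper's exact algorithms is an assumption): for a safe prime p = 2q + 1 with q an odd prime, an element g in [2, p-2] is a primitive root modulo p exactly when g is a quadratic non-residue, and Euler's criterion tests that deterministically.

```python
def primitive_root_safe_prime(p):
    """Find a primitive root of a safe prime p = 2q + 1 (q an odd prime).

    The group order is 2q, so element orders are 1, 2, q, or 2q.  A
    quadratic non-residue g satisfies g^q = -1 (mod p), so its order
    does not divide q; in [2, p-2] that forces order 2q, i.e. g is a
    primitive root."""
    q = (p - 1) // 2
    for g in range(2, p - 1):
        if pow(g, q, p) == p - 1:   # Euler's criterion: (g|p) = -1
            return g
    raise ValueError("no non-residue found; is p really a safe prime?")
```

Since about half the candidates are non-residues, the loop terminates almost immediately in practice; only the deterministic non-residue test is needed, with no factorization of the group order beyond knowing q.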
Last updated:  2012-08-18
Perfect Keyword Privacy in PEKS Systems
Mototsugu Nishioka
This paper presents a new security notion, called \emph{perfect keyword privacy (PKP)}, for non-interactive public-key encryption with keyword search (PEKS) \cite{bcop04}. Although the conventional security notion for PEKS guarantees that a searchable ciphertext leaks no information about keywords, it gives no guarantee concerning leakage of a keyword from the trapdoor. PKP is a notion for overcoming this fatal deficiency. Since the trapdoor has verification functionality, the popular concept of ``indistinguishability'' is inadequate for capturing the notion of keyword privacy from the trapdoor. Hence, our formalization of PKP depends on the idea of formalizing a perfectly one-way hash function \cite{can97,cmr98}. We also present \emph{IND-PKP security} as a useful notion for showing that a given PEKS scheme has PKP. Furthermore, we present PKP+ and IND-PKP+ as enhanced notions of PKP and IND-PKP, respectively. Finally, we present several instances of an IND-PKP or IND-PKP+ secure PEKS scheme, in either the random oracle model or the standard model.
Last updated:  2012-10-01
Functional Encryption: New Perspectives and Lower Bounds
Shweta Agrawal, Sergey Gorbunov, Vinod Vaikuntanathan, Hoeteck Wee
Functional encryption is an emerging paradigm for public-key encryption that enables fine-grained control of access to encrypted data. In this work, we present new perspectives on security definitions for functional encryption, as well as new lower bounds on what can be achieved. Our main contributions are as follows: * We show a lower bound for functional encryption that satisfies a weak (non-adaptive) simulation-based security notion, via pseudo-random functions. This is the first lower bound that exploits unbounded collusions in an essential way. * We put forth and discuss a simulation-based notion of security for functional encryption, with an unbounded simulator (called USIM). We show that this notion interpolates indistinguishability and simulation-based security notions, and has strong correlations to results and barriers in the zero-knowledge and multi-party computation literature.
Last updated:  2012-08-18
New results on nonexistence of generalized bent functions
Uncategorized
Yupeng Jiang, Yingpu Deng
Show abstract
Uncategorized
We obtain two kinds of new results on the nonexistence of generalized bent functions. The first is based on Feng's results, using Schmidt's field-descent method. For the second, considering a special property of the field $\mathbb{Q}(\zeta_{23^e})$, we obtain new nonexistence results for generalized bent functions of type $[3,2\cdot23^e]$.
Last updated:  2012-08-18
Computational Entropy and Information Leakage
Benjamin Fuller, Leonid Reyzin
We investigate how information leakage reduces the computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable X is from a variable Z that has true entropy) and quantity (how much true entropy there is in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log 1/p (note that the reduction in quantity and quality is the same, because the quantity of entropy is measured on a logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the \emph{quantity} of entropy was related to its original quality. The use of metric entropy simplifies the analogous result of Reingold et al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of \lambda bits, metric entropy is reduced by a factor of 2^\lambda in quality and by \lambda in quantity.
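In symbols, the two bounds stated in the abstract have the following shape (our notational rendering, with subscripts denoting the (quality, size) parameters; the paper's exact statement may differ in the circuit-size terms):

```latex
H^{\mathrm{Metric}}_{\varepsilon,\,s}(X) \ge k
\;\Longrightarrow\;
H^{\mathrm{Metric}}_{\varepsilon/p,\,s}(X \mid A) \ge k - \log\tfrac{1}{p}
\qquad \text{for an event } A \text{ with } \Pr[A] = p,

H^{\mathrm{Metric}}_{\varepsilon,\,s}(X) \ge k
\;\Longrightarrow\;
H^{\mathrm{Metric}}_{2^{\lambda}\varepsilon,\,s}(X \mid Y) \ge k - \lambda
\qquad \text{for leakage } Y \in \{0,1\}^{\lambda}.
```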
Last updated:  2012-08-18
T-MATCH: Privacy-Preserving Item Matching for Storage-Only RFID Tags
Kaoutar Elkhiyaoui, Erik-Oliver Blass, Refik Molva
RFID-based tag matching allows a reader Rk to determine whether two tags Ti and Tj store some attributes that jointly fulfill a boolean constraint. The challenge in designing a matching mechanism is tag privacy. While cheap tags are unable to perform any computation, matching has to be achieved without revealing the tags' attributes. In this paper, we present T-MATCH, a protocol for secure and privacy-preserving RFID tag matching. T-MATCH involves a pair of tags Ti and Tj, a reader Rk, and a backend server S. To ensure tag privacy against Rk and S, T-MATCH employs a new technique based on secure two-party computation that prevents Rk and S from disclosing tag attributes. For tag privacy against eavesdroppers, each tag Ti in T-MATCH stores an IND-CPA encryption of its attribute. Such an encryption allows Rk to update the state of Ti by merely re-encrypting Ti's ciphertext. T-MATCH targets cheap tags that cannot perform any computation, but are only required to store 150 bytes.
Last updated:  2012-08-18
Finding Lower Bounds on the Complexity of Secret Sharing Schemes by Linear Programming
Carles Padro, Leonor Vazquez, An Yang
Optimizing the maximum, or average, length of the shares in relation to the length of the secret for every given access structure is a difficult and long-standing open problem in cryptology. Most of the known lower bounds on these parameters have been obtained by implicitly or explicitly using that every secret sharing scheme defines a polymatroid related to the access structure. The best bounds that can be obtained by this combinatorial method can be determined by using linear programming, and this can be effectively done for access structures on a small number of participants. By applying this linear programming approach, we improve some of the known lower bounds for the access structures on five participants and the graph access structures on six participants for which these parameters were still undetermined. Nevertheless, the lower bounds that are obtained by this combinatorial method are not tight in general. For some access structures, they can be improved by adding to the linear program non-Shannon information inequalities as new constraints. We obtain in this way new separation results for some graph access structures on eight participants and for some ports of non-representable matroids. Finally, we prove that, for two access structures on five participants, the combinatorial lower bound cannot be attained by any linear secret sharing scheme.
Last updated:  2012-08-14
Deterministic Public Key Encryption and Identity-Based Encryption from Lattices in the Auxiliary-Input Setting
Xiang Xie, Rui Xue, Rui Zhang
Deterministic public key encryption (D-PKE) provides an alternative to randomized public key encryption in various scenarios (e.g. search on encrypted data) where the latter exhibits inherent drawbacks. In CRYPTO'11, Brakerski and Segev formalized a framework for studying the security of deterministic public key encryption schemes with respect to auxiliary inputs. A trivial requirement is that the plaintext should not be efficiently recoverable from the auxiliary inputs. In this paper, we present an efficient deterministic public key encryption scheme in the auxiliary-input setting from lattices. The public key size, ciphertext size and ciphertext expansion factor are improved compared with the scheme proposed by Brakerski and Segev. Our scheme is also secure even in the multi-user setting where related messages may be encrypted under multiple public keys. In addition, the security of our scheme is based on the hardness of the learning with errors (LWE) problem which remains hard even for quantum algorithms. Furthermore, we consider deterministic identity-based public key encryption (D-IBE) in the auxiliary-input setting. The only known D-IBE scheme (without considering auxiliary inputs) in the standard model was proposed by Bellare et al. in EUROCRYPT'12. However, this scheme is only secure in the selective security setting, and Bellare et al. identified it as an open problem to construct adaptively secure D-IBE schemes. The second contribution of this work is to propose a D-IBE scheme from lattices that is adaptively secure.
Last updated:  2012-08-14
Perfect Ambiguous Optimistic Fair Exchange
Yang Wang, Man Ho Au, Willy Susilo
Protocols for fair exchange of digital signatures are essential in many applications, including contract signing, electronic commerce, or even peer-to-peer file sharing. In such a protocol, two parties, Alice and Bob, would like to exchange digital signatures on some messages in a fair way. It is known that a trusted arbitrator is necessary in the realization of such a protocol. We identify that in some scenarios, it is required that prior to the completion of the protocol, no observer should be able to tell whether Alice and Bob are conducting such an exchange. Consider the following scenario in which Apple engages Intel in an exchange protocol to sign a contract that terminates their OEM agreement. The information would be of value to a third party (such as a stock broker or other OEM companies). If the protocol transcript can serve as evidence that such a communication is in progress, any observer of this communication, including the employees of both companies, would be tempted to capture the transcript and sell it to outsiders. We introduce a new notion called \emph{perfect ambiguous optimistic fair exchange} (PAOFE), which is particularly suitable to the above scenario. PAOFE fulfils all traditional requirements of cryptographic fair exchange of digital signatures and, in addition, guarantees that the communication transcript cannot be used as a proof to convince others that the protocol is in progress. Specifically, we formalize the notion of PAOFE and present a rigorous security model in the multi-user setting under the chosen-key attack. We also present a generic construction of PAOFE from existing cryptographic primitives and prove that our proposal is secure with respect to our definition in the standard model.
Last updated:  2012-12-27
Succinct Arguments from Multi-Prover Interactive Proofs and their Efficiency Benefits
Nir Bitansky, Alessandro Chiesa
\emph{Succinct arguments of knowledge} are computationally-sound proofs of knowledge for NP where the verifier's running time is independent of the time complexity $t$ of the nondeterministic NP machine $M$ that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs $\Omega(t)$ space in order to run in quasilinear time (i.e., time $t \poly(k)$), regardless of the space complexity $s$ of the machine $M$. We say that a succinct argument is \emph{complexity preserving} if the prover runs in time $t \poly(k)$ and space $s \poly(k)$ and the verifier runs in time $|x| \poly(k)$ when proving and verifying that a $t$-time $s$-space random-access machine nondeterministically accepts an input $x$. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques: (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time $t \polylog(t)$ and space $s \polylog(t)$ and the verifier runs in time $|x| \polylog(t)$. (2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with a single prover), while preserving the time and space efficiency of the original MIP protocol; using our MIP protocol, this transformation yields a complexity-preserving four-message succinct argument. 
As a main tool for our transformation, we define and construct a \emph{succinct multi-function commitment} that (a) allows the sender to commit to a vector of functions in time and space complexity that are essentially the same as those needed for a single evaluation of the functions, and (b) ensures that the receiver's running time is essentially independent of the function. The scheme is based on fully-homomorphic encryption (and no additional assumptions are needed for our succinct argument). (3) In addition, we revisit the problem of \emph{non-interactive} succinct arguments of knowledge (SNARKs), where known impossibilities prevent solutions based on black-box reductions to standard assumptions. We formulate a natural (but non-standard) variant of homomorphic encryption having a \emph{homomorphism-extraction property}. We show that this primitive essentially allows us to squash our interactive protocol, while again preserving time and space efficiency, thereby obtaining a complexity-preserving SNARK. We further show that this variant is, in fact, implied by the existence of (complexity-preserving) SNARKs.
Last updated:  2015-07-29
Information-Theoretic Timed-Release Security: Key-Agreement, Encryption, and Authentication Codes
Yohei Watanabe, Takenobu Seito, Junji Shikata
In this paper, we study timed-release cryptography with information-theoretic security. As fundamental cryptographic primitives with information-theoretic security, we can consider key-agreement, encryption, and authentication codes. Therefore, in this paper we deal with information-theoretic timed-release security for all of these primitives. Specifically, we propose models and formalizations of security for information-theoretic timed-release key-agreement, encryption, and authentication codes; we also derive tight lower bounds on the entities' memory sizes required for all of these primitives; and we show optimal constructions for all of them. Furthermore, we investigate the relationship between the mechanisms of information-theoretic timed-release key-agreement and information-theoretic key-insulated key-agreement. It turns out that there exists a simple algorithm that converts the former into the latter, and vice versa. In this sense, we conclude that these two mechanisms are essentially close.
Last updated:  2012-12-14
Barriers in Cryptography with Weak, Correlated and Leaky Sources
Daniel Wichs
There has been much recent progress in constructing cryptosystems that maintain their security without requiring uniform randomness and perfect secrecy. These schemes are motivated by a diverse set of problems such as providing resilience to side-channel leakage, using weak physical sources of randomness as secret keys, and allowing deterministic encryption for high-entropy messages. The study of these problems has significantly deepened our understanding of how randomness is used in cryptographic constructions and proofs. Nevertheless, despite this progress, some basic and seemingly achievable security properties have eluded our reach. For example, we are unable to prove the security of basic tools for manipulating weak/leaky random sources, such as pseudo-entropy generators and seed-dependent computational condensers. We also do not know how to prove leakage-resilient security of any cryptosystem whose secret key is uniquely determined by its public key. In the context of deterministic encryption, we do not have a standard-model construction achieving the strongest notion of security proposed by Bellare, Boldyreva and O'Neill (CRYPTO '07), which would allow us to encrypt arbitrarily correlated messages of sufficiently large individual entropy. In this work, we provide broad black-box separation results, showing that the security of such primitives cannot be proven under virtually any standard cryptographic hardness assumption via a reduction that treats the adversary as a black box. We do so by formalizing the intuition that ``the only way that a reduction can simulate the correctly distributed view for an attacker is to know all the secrets, in which case it does not learn anything useful from the attack''. Such claims are often misleading, and clever ways of getting around them allow us to achieve a wealth of positive results with imperfect/leaky randomness. 
However, in this work we show that this intuition can be formalized and that it indeed presents a real barrier for the examples given above.
Last updated:  2012-09-20
Computing small discrete logarithms faster
Daniel J. Bernstein, Tanja Lange
Computations of small discrete logarithms are feasible even in "secure" groups, and are used as subroutines in several cryptographic protocols in the literature. For example, the Boneh--Goh--Nissim degree-2-homomorphic public-key encryption system uses generic square-root discrete-logarithm methods for decryption. This paper shows how to use a small group-specific table to accelerate these subroutines. The cost of setting up the table grows with the table size, but the acceleration also grows with the table size. This paper shows experimentally that computing a discrete logarithm in an interval of order l takes only 1.93*l^{1/3} multiplications on average using a table of size l^{1/3} precomputed with 1.21*l^{2/3} multiplications, and computing a discrete logarithm in a group of order l takes only 1.77*l^{1/3} multiplications on average using a table of size l^{1/3} precomputed with 1.24*l^{2/3} multiplications.
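For context, a hedged sketch of the generic square-root method (baby-step giant-step) that such precomputed tables accelerate; the paper's table-based approach differs in detail, and the parameters below are toy values, not from the paper:

```python
from math import isqrt

def bsgs(g, h, l, p):
    """Find x in [0, l) with g^x = h (mod p), or None if no such x exists.
    Runs in roughly sqrt(l) group multiplications."""
    m = isqrt(l) + 1
    baby = {}
    e = 1
    for j in range(m):                       # baby steps: store g^j -> j
        baby.setdefault(e, j)
        e = (e * g) % p
    inv_gm = pow(pow(g, m, p), p - 2, p)     # g^{-m} mod p (p prime)
    gamma = h % p
    for i in range(m):                       # giant steps: h * g^{-m*i}
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * inv_gm) % p
    return None

# Example: recover the discrete log of 2^77 in the group generated by 2 mod 101.
assert bsgs(2, pow(2, 77, 101), 100, 101) == 77
```

The paper's contribution can be read against this baseline: a one-time table of size l^{1/3} shifts the per-logarithm cost from roughly l^{1/2} down to roughly l^{1/3} multiplications.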
Last updated:  2012-08-13
Hush Functions Extended to Any Size Input versus Any Size Output
Gideon Samid
Traditional hush functions map a large number to a small number such that the reverse-hush has an infinity of solutions, and nonetheless a collision is hard to come by. This primitive is so abundantly useful that one is tempted to extend it such that any number, large or small, may be mapped to any number, larger or smaller, while maintaining the above conditions. This extension would increase the flexibility of the commodity hush primitive, expand its current applications, and likely suggest new ones. Additional generality may be achieved by allowing the input to determine the computational burden, and by involving Turing’s Entscheidungsproblem. We propose an algorithm where a natural number, X, is mapped to another natural number Y, referring to the mapping as a "Crypto Square", and to the reverse as "Crypto Square Root": Y = X**2|c and X = √Y|c. While the crypto-square mapping is a proper function, the square-root equation has infinite solutions. There exists a deterministic solution algorithm to find any desired number of solutions to a square-root equation. This asymmetry proves itself useful, since the mapping is Z+→Z+, and hence the chance of collision for any finite-size set is negligible. Unlike standard one-way functions, crypto-square shields the identity of the input (X) not by the intractability of the reverse function, but by Vernam-like equivocation per the infinity of X candidates. This prospect suggests further examination of this “square” algorithm for possible useful roles in various crypto protocols, especially protocols concerned with privacy, authentication and deniability.
Last updated:  2012-08-13
Crowd-Blending Privacy
Johannes Gehrke, Michael Hay, Edward Lui, Rafael Pass
We introduce a new definition of privacy called \emph{crowd-blending privacy} that strictly relaxes the notion of differential privacy. Roughly speaking, $k$-crowd blending private sanitization of a database requires that each individual $i$ in the database ``blends'' with $k$ other individuals $j$ in the database, in the sense that the output of the sanitizer is ``indistinguishable'' if $i$'s data is replaced by $j$'s. We demonstrate crowd-blending private mechanisms for histograms and for releasing synthetic data points, achieving strictly better utility than what is possible using differentially private mechanisms. Additionally, we demonstrate that if a crowd-blending private mechanism is combined with a ``pre-sampling'' step, where the individuals in the database are randomly drawn from some underlying population (as is often the case during data collection), then the combined mechanism satisfies not only differential privacy, but also the stronger notion of zero-knowledge privacy. This holds even if the pre-sampling is slightly biased and an adversary knows whether certain individuals were sampled or not. Taken together, our results yield a practical approach for collecting and privately releasing data while ensuring higher utility than previous approaches.
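A minimal sketch of a crowd-blending private histogram, assuming the natural suppress-small-bins mechanism (the function name and threshold behavior here are illustrative, not necessarily the paper's exact construction): bins with fewer than $k$ records are dropped, so every individual in a released bin "blends" with at least $k-1$ others.

```python
from collections import Counter

def crowd_blending_histogram(records, k):
    """Release exact counts for bins holding at least k records;
    suppress the rest entirely."""
    counts = Counter(records)
    return {v: c for v, c in counts.items() if c >= k}

data = ["flu"] * 5 + ["rare-disease"] * 2 + ["cold"] * 4
released = crowd_blending_histogram(data, 3)   # "rare-disease" bin suppressed
assert released == {"flu": 5, "cold": 4}
```

Note the contrast with differential privacy: the released counts are exact (no noise), which is the source of the utility gain the abstract mentions.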
Last updated:  2012-08-13
Must you know the code of f to securely compute f?
Mike Rosulek
When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit and then securely evaluate that circuit, gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a {\em non-black-box-way} from $f$ itself. Consequently, secure computation protocols have high overhead (in communication \& computation) that is directly linked to the circuit-description complexity of $f$. In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, {\em must one know the code of $f$ to securely evaluate $f$?} In this work we initiate the theoretical study of this question. We show the following: 1. A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of {\em autoreducibility} from computational complexity theory. From this characterization, we show a class of pseudorandom functions that {\em cannot} be securely evaluated (when one party holds the seed and the other holds the input) without ``knowing'' the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without ``knowing'' the code of the function. 2. Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without ``knowing'' the code of the one-way function.
Last updated:  2012-08-13
A Probabilistic Quantum Key Transfer Protocol
Abhishek Parakh
We propose a protocol to transfer a one-time-pad (in a probabilistic manner) from Alice to Bob, over a public channel. The proposed protocol is unique because Bob merely acts as the receiver of the pad (secret key), i.e. Bob does not need to send any message back to Alice unless he detects eavesdropping. Such a secure transfer of one-time-pad, over public channel, is not possible in classical cryptography and in quantum cryptography all previous protocols require Bob to send almost as many messages back to Alice as she does to Bob, to establish a key.
Last updated:  2014-01-14
New Leakage Resilient CCA-Secure Public Key Encryption
Kaoru Kurosawa, Ryo Nojima, Le Trieu Phong
This paper shows a generic method of constructing CCA-secure public key encryption schemes with leakage resilience on the secret key. It is based on a new kind of universal$_2$ hash proof system which accepts an auxiliary parameter. Specifically, two schemes are presented, basing on the DCR assumption and DLIN assumption respectively.
Last updated:  2014-01-20
EPiC: Efficient Privacy-Preserving Counting for MapReduce
Erik-Oliver Blass, Guevara Noubir, Triet D. Vo-Huu
In the face of an untrusted cloud infrastructure, outsourced data needs to be protected. We present EPiC, a practical protocol for the privacy-preserving evaluation of a fundamental operation on data sets: frequency counting. In an encrypted outsourced data set, a cloud user can specify a pattern, and the cloud will count the number of occurrences of this pattern in an oblivious manner. A pattern is expressed as a Boolean formula on the fields of data records and can specify values counting, value comparison, range counting, and conjunctions/disjunctions of field values. We show how a general pattern, defined by a Boolean formula, is arithmetized into a multivariate polynomial and used in EPiC. To increase the performance of the system, we introduce a new somewhat homomorphic encryption scheme based on a previous work on the Hidden Modular Group assumption. This scheme is highly efficient in our particular counting scenario. Besides a formal analysis where we prove EPiC's privacy, we also present implementation and evaluation results. We specifically target Google's prominent MapReduce paradigm as offered by major cloud providers. Our evaluation performed both locally and in Amazon's public cloud with data set sizes of up to 1 TByte shows only a modest overhead of 20% compared to non-private counting, attesting to EPiC's efficiency.
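The arithmetization step can be sketched as follows (an illustrative encoding, not EPiC's exact construction): Boolean connectives become polynomial operations, so a pattern evaluates to 1 on matching records and 0 otherwise, and counting matches reduces to summing evaluations, which a somewhat homomorphic scheme can perform over ciphertexts.

```python
# Standard arithmetization of Boolean connectives over 0/1 values.
def NOT(x):    return 1 - x
def AND(x, y): return x * y
def OR(x, y):  return x + y - x * y

def pattern(rec):
    # Hypothetical pattern for illustration: field 0 set AND field 1 not set.
    return AND(rec[0], NOT(rec[1]))

records = [(1, 0), (1, 1), (0, 0), (1, 0)]
count = sum(pattern(r) for r in records)   # only records 0 and 3 match
assert count == 2
```

Because the pattern is now a polynomial in the record fields, the cloud can evaluate and sum it on encrypted records without learning which records matched.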
Last updated:  2012-08-08
Stam's Conjecture and Threshold Phenomena in Collision Resistance
John Steinberger, Xiaoming Sun, Zhe Yang
At CRYPTO 2008 Stam conjectured that if an $(m\!+\!s)$-bit to $s$-bit compression function $F$ makes $r$ calls to a primitive $f$ of $n$-bit input, then a collision for $F$ can be obtained (with high probability) using $r2^{(nr-m)/(r+1)}$ queries to $f$, which is sometimes less than the birthday bound. Steinberger (Eurocrypt 2010) proved Stam's conjecture up to a constant multiplicative factor for most cases in which $r = 1$ and for certain other cases that reduce to the case $r = 1$. In this paper we prove the general case of Stam's conjecture (also up to a constant multiplicative factor). Our result is qualitatively different from Steinberger's, moreover, as we show the following novel threshold phenomenon: that exponentially many (more exactly, $2^{s-2(m-n)/(r+1)}$) collisions are obtained with high probability after $O(1)r2^{(nr-m)/(r+1)}$ queries. This in particular shows that threshold phenomena observed in practical compression functions such as JH are, in fact, unavoidable for compression functions with those parameters. (This is the full version of the same-titled article that appeared at CRYPTO 2012.)
Last updated:  2014-02-20
Tweakable Blockciphers with Beyond Birthday-Bound Security
Will Landecker, Thomas Shrimpton, R. Seth Terashima
Liskov, Rivest and Wagner formalized the tweakable blockcipher (TBC) primitive at CRYPTO'02. The typical recipe for instantiating a TBC is to start with a blockcipher, and then build up a construction that admits a tweak. Almost all such constructions enjoy provable security only to the birthday bound, and the one that does achieve security beyond the birthday bound (due to Minematsu) severely restricts the tweak size and requires per-invocation blockcipher rekeying. This paper gives the first TBC construction that simultaneously allows for arbitrarily “wide” tweaks, does not rekey, and delivers provable security beyond the birthday bound. Our construction is built from a blockcipher and an $\eAXU$ hash function. As an application of the TBC primitive, LRW suggest the TBC-MAC construction (similar to CBC-MAC but chaining through the tweak), but leave open the question of its security. We close this question, both for TBC-MAC as a PRF and a MAC. Along the way, we find a nonce-based variant of TBC-MAC that has a tight reduction to the security of the underlying TBC, and also displays graceful security degradation when nonces are misused. This result is interesting on its own, but it also serves as an application of our new TBC construction, ultimately giving a variable input-length PRF with beyond birthday-bound security.
Last updated:  2012-09-13
Long Term Confidentiality: a Survey
Johannes Braun, Johannes Buchmann, Ciaran Mullan, Alex Wiesmaier
Sensitive electronic data may be required to remain confidential for long periods of time. Yet encryption under a computationally secure cryptosystem cannot provide a guarantee of long term confidentiality, due to potential advances in computing power or cryptanalysis. Long term confidentiality is ensured by information theoretically secure ciphers, but at the expense of impractical key agreement and key management. We overview known methods to alleviate these problems, whilst retaining some form of information theoretic security relevant for long term confidentiality.
Last updated:  2012-08-07
On the Impossibility of Constructing Efficient Key Encapsulation and Programmable Hash Functions in Prime Order Groups
Goichiro Hanaoka, Takahiro Matsuda, Jacob C. N. Schuldt
In this paper, we focus on the problem of minimizing ciphertext overhead, and discuss the (im)possibility of constructing key encapsulation mechanisms (KEMs) with low ciphertext overhead. More specifically, we rule out the existence of algebraic black-box reductions from the (bounded) CCA security of a natural class of KEMs to any non-interactive problem. The class of KEMs captures the structure of the currently most efficient KEMs defined in standard prime order groups, but restricts an encapsulation to consist of a single group element and a string. This result suggests that we cannot rely on existing techniques to construct a CCA secure KEM in standard prime order groups with a ciphertext overhead lower than two group elements. Furthermore, we show how the properties of an (algebraic) programmable hash function can be exploited to construct a simple, efficient and CCA secure KEM based on the hardness of the decisional Diffie-Hellman problem with the ciphertext overhead of just a single group element. Since this KEM construction is covered by the above mentioned impossibility result, this enables us to derive a lower bound on the hash key size of an algebraic programmable hash function, and rule out the existence of algebraic $({\sf poly},n)$-programmable hash functions in prime order groups for any integer $n$. The latter result answers an open question posed by Hofheinz and Kiltz (CRYPTO'08) in the case of algebraic programmable hash functions in prime order groups.
Last updated:  2012-11-03
Multi-receiver Homomorphic Authentication Codes for Network Coding
Zhaohui Tang, Hoon Wei Lim
We investigate a new class of authentication codes (A-codes) that support verification by a group of message recipients in the network coding setting. That is, a sender generates an A-code over a message such that any intermediate node or recipient can check the authenticity of the message, typically to detect pollution attacks. We call such an A-code a multi-receiver homomorphic A-code (MRHA-code). In this paper, we first formally define an MRHA-code. We then derive some lower bounds on the security parameters and key sizes associated with our MRHA-codes. Moreover, we give efficient constructions of MRHA-code schemes that can be used to mitigate pollution attacks on network codes. Unlike prior works on computationally secure homomorphic signatures and MACs for network coding, our MRHA-codes achieve unconditional security.
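The homomorphic property at the heart of such codes can be sketched with a simple linear tag (a hedged toy illustration, not the paper's unconditionally secure construction; modulus and vector length are arbitrary): a tag that is linear in the message vector lets an intermediate node authenticate any linear combination of packets from the tags of the inputs.

```python
import random

q = 7919                                   # toy prime modulus (illustrative)
key = [random.randrange(q) for _ in range(4)]   # secret key vector

def tag(vec):
    """Linear tag: inner product of the message vector with the key, mod q."""
    return sum(k * v for k, v in zip(key, vec)) % q

def combine(v1, v2, a, b):
    """Network coding: form the linear combination a*v1 + b*v2 (mod q)."""
    return [(a * x + b * y) % q for x, y in zip(v1, v2)]

m1, m2 = [1, 2, 3, 4], [5, 6, 7, 8]
a, b = 3, 5
# Homomorphism: the tag of a combination equals the same combination of tags.
assert (a * tag(m1) + b * tag(m2)) % q == tag(combine(m1, m2, a, b))
```

An intermediate node that mixes packets can therefore forward a valid tag without knowing the key, while receivers holding (parts of) the key can detect polluted packets.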
Last updated:  2012-08-06
Differential Fault Analysis of AES: Towards Reaching its Limits
Sk Subidh Ali, Debdeep Mukhopadhyay, Michael Tunstall
In this paper we present a theoretical analysis of the limits of Differential Fault Analysis (DFA) of AES by developing an interrelationship between conventional cryptanalysis of AES and DFAs. We show that the existing attacks have not reached these limits, and we present techniques to reach them. More specifically, we propose optimal DFAs on the states of AES-128 and AES-256. We also propose attacks on the key schedules of the three versions of AES, and demonstrate that these are some of the most efficient attacks on AES to date. Our attack on the AES-128 key schedule is optimal, and the attacks on the AES-192 and AES-256 key schedules are very close to optimal. Detailed experimental results are provided for the developed attacks, and the work is compared both to other works and to the optimal limits of Differential Fault Analysis of AES.
Last updated:  2013-03-16
A note on ‘An efficient certificateless aggregate signature with constant pairing computations’
Debiao He, Jianhua Chen, Miaomiao Tian
Recently, Xiong et al. [H. Xiong, Z. Guan, Z. Chen, F. Li, An efficient certificateless aggregate signature with constant pairing computations, Information Sciences, 219, pp. 225–235, 2013] proposed an efficient certificateless signature (CLS) scheme and used it to construct a certificateless aggregate signature (CLAS) scheme with constant pairing computations. They also demonstrated that both of the two schemes are provably secure in the random oracle model under the computational Diffie-Hellman assumption. Unfortunately, by giving concrete attacks, we point out that Xiong et al.’s schemes are not secure in their security model.
Last updated:  2012-08-06
Factorization of a 1061-bit number by the Special Number Field Sieve
Greg Childers
I provide the details of the factorization of the Mersenne number $2^{1061}-1$ by the Special Number Field Sieve. Although this factorization is easier than the completed factorization of RSA-768, it represents a new milestone for factorization using publicly available software.
Last updated:  2013-05-07
Improved CRT Algorithm for Class Polynomials in Genus 2
Kristin Lauter, Damien Robert
We present a generalization to genus~2 of the probabilistic algorithm of Sutherland for computing Hilbert class polynomials. The improvement over the Bröker-Gruenewald-Lauter algorithm for the genus~2 case is that we do not need to find a curve in the isogeny class whose endomorphism ring is the maximal order; rather, we present a probabilistic algorithm for ``going up'' to a maximal curve (a curve with maximal endomorphism ring), once we find any curve in the right isogeny class. Then we use the structure of the Shimura class group and the computation of $(\ell,\ell)$-isogenies to compute all isogenous maximal curves from an initial one. This is an extended version of the article published at ANTS~X.
Last updated:  2012-08-07
Group Signatures with Almost-for-free Revocation
Benoit Libert, Thomas Peters, Moti Yung
Group signatures are a central cryptographic primitive where users can anonymously and accountably sign messages in the name of a group they belong to. Several efficient constructions with security proofs in the standard model ({\it i.e.}, without the random oracle idealization) appeared in the recent years. However, like standard PKIs, group signatures need an efficient revocation system to be practical. Despite years of research, membership revocation remains a non-trivial problem: many existing solutions do not scale well due to either high overhead or constraining operational requirements (like the need for all users to update their keys after each revocation). Only recently, Libert, Peters and Yung (Eurocrypt'12) suggested a new scalable revocation method, based on the Naor-Naor-Lotspiech (NNL) broadcast encryption framework, that interacts nicely with techniques for building group signatures in the standard model. While promising, their mechanism introduces important storage requirements at group members. Namely, membership certificates, which used to have constant size in existing standard model constructions, now have polylog size in the maximal cardinality of the group (NNL, after all, is a tree-based technique and such dependency is naturally expected). In this paper we show how to obtain private keys of {\it constant} size. To this end, we introduce a new technique to leverage the NNL subset cover framework in the context of group signatures but, perhaps surprisingly, without logarithmic relationship between the size of private keys and the group cardinality. Namely, we provide a way for users to efficiently prove their membership of one of the generic subsets in the NNL subset cover framework. This technique makes our revocable group signatures competitive with ordinary group signatures ({\it i.e.}, without revocation) in the standard model. Moreover, unrevoked members (as in PKIs) still do not need to update their keys at each revocation.
Last updated:  2012-08-05
Adaptively Secure Multi-Party Computation with Dishonest Majority
Sanjam Garg, Amit Sahai
Adaptively secure multiparty computation is an essential and fundamental notion in cryptography. In this work we focus on the basic question of constructing a multiparty computation protocol secure against a \emph{malicious}, \emph{adaptive} adversary in the \emph{stand-alone} setting without assuming an honest majority, in the plain model. It has been believed that this question can be resolved by composing known protocols from the literature. We show that in fact, this belief is fundamentally mistaken. In particular, we show: \begin{itemize} \item[-]\textbf{Round inefficiency is unavoidable when using black-box simulation:} There does not exist any $o(\frac{n}{\log{n}})$ round protocol that adaptively securely realizes a (natural) $n$-party functionality with a black-box simulator. Note that most previously known protocols in the adaptive security setting relied on black-box simulators. \item[-]\textbf{A constant round protocol using non-black-box simulation:} We construct a \emph{constant round} adaptively secure multiparty computation protocol in a setting without \emph{honest majority} that makes crucial use of non-black box techniques. \end{itemize} Taken together, these results give the first resolution to the question of adaptively secure multiparty computation protocols with a malicious dishonest majority in the plain model, open since the first formal treatment of adaptive security for multiparty computation in 1996.
Last updated:  2012-08-05
New Preimage Attacks Against Reduced SHA-1
Simon Knellwolf, Dmitry Khovratovich
This paper shows preimage attacks against reduced SHA-1 up to 57 steps. The best previous attack, presented at CRYPTO 2009, applied to 48 steps and found a two-block preimage with incorrect padding at the cost of $2^{159.3}$ evaluations of the compression function. For the same variant, our attacks find a one-block preimage at $2^{150.6}$ and a correctly padded two-block preimage at $2^{151.1}$ evaluations of the compression function. The improved results come out of a differential view on the meet-in-the-middle technique originally developed by Aoki and Sasaki. The new framework closely relates meet-in-the-middle attacks to differential cryptanalysis, which turns out to be particularly useful for hash functions with linear message expansion and weak diffusion properties.
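The generic meet-in-the-middle principle that the paper's framework refines can be illustrated on a toy example. The sketch below is purely illustrative (the 16-bit toy cipher and all names are invented, not the paper's SHA-1 attack): it recovers a double-encryption key pair by meeting in the middle, trading a naive 2^16 key search for two sweeps of 2^8 each.

```python
def rotl16(x, r):
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def enc(k, p):                       # toy 16-bit block cipher, 8-bit key
    x = (p ^ (k * 0x0101)) & 0xFFFF
    x = rotl16(x, 5)
    return (x + k * 0x0101) & 0xFFFF

def dec(k, c):                       # inverse of enc
    x = (c - k * 0x0101) & 0xFFFF
    x = rotl16(x, 11)
    return x ^ (k * 0x0101)

def mitm(pairs):
    """Recover (k1, k2) from c = enc(k2, enc(k1, p)) with ~2*2^8 work
    instead of 2^16 brute force."""
    p0, c0 = pairs[0]
    forward = {}
    for k1 in range(256):            # meet point, computed from the plaintext side
        forward.setdefault(enc(k1, p0), []).append(k1)
    cands = []
    for k2 in range(256):            # meet point, computed from the ciphertext side
        for k1 in forward.get(dec(k2, c0), []):
            cands.append((k1, k2))
    # filter surviving candidates with the remaining known pairs
    return [kk for kk in cands
            if all(enc(kk[1], enc(kk[0], p)) == c for p, c in pairs[1:])]

k1, k2 = 0x3A, 0xC4
pairs = [(p, enc(k2, enc(k1, p))) for p in (0x1234, 0xBEEF)]
keys = mitm(pairs)
assert (k1, k2) in keys
```

The paper's contribution is to view the two half-computations and their matching step differentially, which does not show up in this generic sketch.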
Last updated:  2018-10-09
Robust Smart Card based Password Authentication Scheme against Smart Card Security Breach
Ding Wang, Ping Wang, Chun-guang Ma, Zhong Chen
As the most prevailing two-factor authentication mechanism, smart card based password authentication has been a subject of intensive research in the past decade, and hundreds of schemes of this type have been proposed. However, most of them were found severely flawed, especially prone to the smart card loss problem, shortly after they were first put forward, whether their security was heuristically analyzed or formally proved. In SEC'12, Wang pointed out that the main cause of this issue is the lack of an appropriate security model that fully identifies the practical threats. To address the issue, Wang presented three kinds of security models, namely Type I, II and III, and further proposed four concrete schemes, only two of which, i.e. PSCAV and PSCAb, are claimed to be secure under the harshest model, i.e. the Type III security model. However, in this paper, we demonstrate that PSCAV still cannot achieve the claimed security goals and is vulnerable to an offline password guessing attack and other attacks in the Type III security model, while PSCAb has several practical pitfalls. As our main contribution, a robust scheme is presented to cope with the aforementioned defects, and it is proven secure in the random oracle model. Moreover, the analysis demonstrates that our scheme meets all the proposed criteria and eliminates several hard security threats that are difficult to tackle simultaneously in previous scholarship.
Last updated:  2012-08-05
Breaking and Repairing GCM Security Proofs
Tetsu Iwata, Keisuke Ohashi, Kazuhiko Minematsu
In this paper, we study the security proofs of GCM (Galois/Counter Mode of Operation). We first point out that a lemma, which is related to the upper bound on the probability of a counter collision, is invalid. Both the original privacy and authenticity proofs by the designers are based on the lemma. We further show that the observation can be translated into a distinguishing attack that invalidates the main part of the privacy proof. It turns out that the original security proofs of GCM contain a flaw, and hence the claimed security bounds are not justified. A very natural question is then whether the proofs can be repaired. We give an affirmative answer to the question by presenting new security bounds, both for privacy and authenticity. As a result, although the security bounds are larger than what were previously claimed, GCM maintains its provable security. We also show that, when the nonce length is restricted to 96 bits, GCM has better security bounds than a general case of variable length nonces.
Last updated:  2012-08-05
Dynamic Credentials and Ciphertext Delegation for Attribute-Based Encryption
Amit Sahai, Hakan Seyalioglu, Brent Waters
Motivated by the question of access control in cloud storage, we consider the problem of using Attribute-Based Encryption (ABE) in a setting where users' credentials may change and ciphertexts may be stored by a third party. We find that a comprehensive solution to our problem must simultaneously allow for the revocation of ABE private keys as well as for the ability to update ciphertexts to reflect the most recent revocations. Our main result is obtained by pairing two contributions: - Revocable Storage. We ask how a third party can process a ciphertext to disqualify revoked users from accessing data that was encrypted in the past, while the user still had access. In applications, such storage may be with an untrusted entity and as such, we require that the ciphertext management operations can be done without access to any sensitive data (which rules out decryption and re-encryption). We define the problem of revocable storage and provide a fully secure construction. Our core tool is a new procedure that we call ciphertext delegation. One can apply ciphertext delegation to a ciphertext encrypted under a certain access policy to `re-encrypt' it to a more restrictive policy using only public information. We provide a full analysis of the types of delegation possible in a number of existing ABE schemes. - Protecting Newly Encrypted Data. We consider the problem of ensuring that newly encrypted data is not decryptable by a user's key if that user's access has been revoked. We give the first method for obtaining this revocation property in a fully secure ABE scheme. We provide a new and simpler approach to this problem that requires only minimal modifications to standard ABE. We identify and define a simple property called piecewise key generation which gives rise to efficient revocation. We build such solutions for Key-Policy and Ciphertext-Policy Attribute-Based Encryption by modifying an existing ABE scheme due to Lewko et al. to satisfy our piecewise property, and prove security in the standard model. It is the combination of our two results that gives an approach for revocation. A storage server can update stored ciphertexts to disqualify revoked users from accessing data that was encrypted before the user's access was revoked. This is the full version of the Crypto 2012 paper.
Last updated:  2012-08-05
Secure Database Commitments and Universal Arguments of Quasi Knowledge
Melissa Chase, Ivan Visconti
In this work we focus on a simple database commitment functionality where, besides the standard security properties, one would like to hide the size of the input of the sender. Hiding the size of the input of a player is a critical requirement in some applications, and relatively few works have considered it. Notable exceptions are the work on zero-knowledge sets introduced in~\cite{MRK03}, and recent work on size-hiding private set intersection~\cite{ADT11}. However, neither of these achieves a secure computation (i.e., a reduction of a real-world attack of a malicious adversary into an ideal-world attack) of the proposed functionality. The first result of this submission consists of defining ``secure'' database commitment and of observing that previous constructions do not satisfy this definition. This leaves open the question of whether there is any way this functionality can be achieved. We then provide an affirmative answer to this question by using new techniques that, combined together, achieve ``secure'' database commitment. Our construction is in particular optimized to require only a constant number of rounds, to provide non-interactive proofs on the content of the database, and to rely only on the existence of a family of CRHFs. This is the first result where input-size hiding secure computation is achieved for an interesting functionality, and moreover we obtain this result with standard security (i.e., simulation in expected polynomial time against fully malicious adversaries, without random oracles, non-black-box extraction assumptions, hardness assumptions against super-polynomial time adversaries, or other controversial/strong assumptions). A key building block in our construction is a universal argument enjoying an improved proof of knowledge property, which we call quasi-knowledge. This property is significantly closer to the standard proof of knowledge property than the weak proof of knowledge property satisfied by previous constructions.
Last updated:  2012-08-05
Differential Privacy with Imperfect Randomness
Uncategorized
Yevgeniy Dodis, Adriana Lopez-Alt, Ilya Mironov, Salil Vadhan
Show abstract
Uncategorized
In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC'07) showed that if a source of randomness R is "good enough" to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an "extractable" source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific "non-extractable" sources of randomness, such as the gamma-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias gamma < 1 (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC'06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary "low sensitivity" functions that works even with randomness coming from a gamma-Santha-Vazirani source, for any gamma < 1. This provides a somewhat surprising "separation" between traditional privacy and differential privacy with respect to imperfect randomness. Interestingly, the design of our mechanism is quite different from the traditional "additive-noise" mechanisms (e.g., the Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (accurate and private) "SV-robust" mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.
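For contrast, the traditional additive-noise approach the abstract mentions can be sketched in a few lines. This is a minimal illustration of the standard Laplace mechanism with perfect randomness (function names, data and parameters are illustrative); the paper's point is that with only an SV source one needs a different, consistent-sampling-based design.

```python
import math
import random

def laplace_mechanism(data, query, sensitivity, epsilon, rng):
    """Release query(data) plus Laplace(sensitivity/epsilon) noise."""
    b = sensitivity / epsilon            # noise scale
    u = rng.random() - 0.5               # uniform on [-1/2, 1/2)
    # sample Laplace(b) via the inverse-CDF method
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return query(data) + noise

def count_over_10(d):                    # a "low sensitivity" query (sensitivity 1)
    return sum(1 for x in d if x > 10)

rng = random.Random(0)
data = [17, 4, 23, 8, 42]
noisy = laplace_mechanism(data, count_over_10, sensitivity=1, epsilon=0.5, rng=rng)
```

Averaged over many releases the noisy answers concentrate around the true count, while each individual release is (epsilon)-differentially private, provided the uniform randomness really is uniform.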
Last updated:  2013-02-15
Algebraic (Trapdoor) One Way Functions and their Applications
Uncategorized
Dario Catalano, Dario Fiore, Rosario Gennaro, Konstantinos Vamvourellis
Show abstract
Uncategorized
In this paper we introduce the notion of {\em Algebraic (Trapdoor) One Way Functions}, which, roughly speaking, captures and formalizes many of the properties of number-theoretic one-way functions. Informally, a (trapdoor) one way function $F: X \to Y$ is said to be algebraic if $X$ and $Y$ are (finite) abelian cyclic groups, the function is {\em homomorphic}, i.e. $F(x)\cdot F(y) = F(x \cdot y)$, and is {\em ring-homomorphic}, meaning that it is possible to compute linear operations ``in the exponent'' over some ring (which may be different from $\ZZ_p$, where $p$ is the order of the underlying group $X$) without knowing the bases. Moreover, algebraic OWFs must be {\em flexibly one-way} in the sense that given $y = F(x)$, it must be infeasible to compute $(x', d)$ such that $F(x')=y^{d}$ (for $d \neq 0$). Interestingly, algebraic one way functions can be constructed from a variety of {\em standard} number-theoretic assumptions, such as RSA, Factoring and CDH over bilinear groups. As a second contribution of this paper, we show several applications where algebraic (trapdoor) OWFs turn out to be useful. In particular: - {\em Publicly Verifiable Secure Outsourcing of Polynomials}: We present efficient solutions which work for rings of arbitrary size and characteristic. When instantiating our protocol with the RSA/Factoring based algebraic OWFs we obtain the first solution which supports small field sizes, is efficient, and does not require bilinear maps to obtain public verifiability. - {\em Linearly-Homomorphic Signatures}: We give a direct construction of FDH-like linearly homomorphic signatures from algebraic (trapdoor) one way permutations. Our constructions support messages and homomorphic operations over {\em arbitrary} rings and in particular even small fields such as $\FF_2$. While it was already known how to realize linearly homomorphic signatures over small fields from lattices in the random oracle model (Boneh-Freeman, Eurocrypt 2011), ours are the first schemes achieving this very efficiently from Factoring/RSA. - {\em Batch execution of Sigma protocols}: We construct a simple and efficient Sigma protocol for any algebraic OWP and show a ``batch'' version of it, i.e. a protocol where many statements can be proven at a cost only slightly higher than that of a single execution of the original protocol. Given our RSA/Factoring instantiations of algebraic OWP, this yields, to the best of our knowledge, the first batch verifiable Sigma protocol for groups of unknown order.
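The homomorphic property $F(x)\cdot F(y) = F(x \cdot y)$ in the definition above can be checked concretely for the textbook RSA function $F(x) = x^e \bmod N$, one of the standard instantiations the abstract mentions. A minimal sketch (the tiny modulus is for illustration only and offers no security):

```python
# Toy RSA parameters -- illustrative only, far too small for real use.
p, q, e = 1009, 1013, 65537
N = p * q

def F(x):
    """The RSA one-way function F(x) = x^e mod N."""
    return pow(x, e, N)

x, y = 1234, 5678
# multiplicative homomorphism: F(x) * F(y) = F(x * y) in Z_N
assert F(x) * F(y) % N == F(x * y % N)
```

The flexible one-wayness requirement (infeasibility of finding $(x', d)$ with $F(x') = y^d$, $d \neq 0$) is a hardness property and cannot be demonstrated by such a toy computation.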
Last updated:  2012-09-12
Impossibility Results for Static Input Secure Computation
Sanjam Garg, Abishek Kumarasubramanian, Rafail Ostrovsky, Ivan Visconti
Consider a setting of two mutually distrustful parties Alice and Bob who want to securely evaluate some function on pre-specified inputs. The well-studied notion of two-party secure computation allows them to do so in the stand-alone setting. Consider a deterministic function (e.g., 1-out-of-2 bit OT) that Alice and Bob cannot evaluate trivially and which allows only Bob to receive the output. We show that Alice and Bob cannot securely compute any such function in the concurrent setting even when their inputs are pre-specified. Our impossibility result also extends to all deterministic functions in which both Alice and Bob get the same output. Our results have implications in the bounded-concurrent setting as well.
Last updated:  2012-08-05
TorScan: Tracing Long-lived Connections and Differential Scanning Attacks
Alex Biryukov, Ivan Pustogarov, Ralf-Philipp Weinmann
Tor is a widely used anonymity network providing low-latency communication capabilities. Around 400,000 users per day use Tor to route TCP traffic through a sequence of relays; three hops are selected from a pool of currently almost 3000 volunteer-operated Tor relays to comprise a route through the network for a limited time. In comparison to single-hop proxies, forwarding TCP streams through multiple relays increases the anonymity of the users significantly: each hop along the route only knows its successor and predecessor. The anonymity provided by Tor heavily relies on the hardness of linking a user's entry and exit nodes. If an attacker gains access to the topological information about the Tor network instead of having to consider the network as a fully connected graph, this anonymity may be reduced. In fact, we have found ways to probe the connectivity of a Tor relay. We demonstrate how the resulting leakage of the Tor network topology can be used and present attacks to trace back a user from an exit relay to a small set of potential entry nodes.
Last updated:  2012-08-05
On the Security of Dynamic Group Signatures: Preventing Signature Hijacking
Yusuke Sakai, Jacob C. N. Schuldt, Keita Emura, Goichiro Hanaoka, Kazuo Ohta
We identify a potential weakness in the standard security model for dynamic group signatures which appears to have been overlooked previously. More specifically, we highlight that even if a scheme provably meets the security requirements of the model, a malicious group member can potentially claim ownership of a group signature produced by an honest group member by forging a proof of ownership. This property leads to a number of vulnerabilities in scenarios in which dynamic group signatures are likely to be used. We furthermore show that the dynamic group signature scheme by Groth (ASIACRYPT 2007) does not provide protection against this type of malicious behavior. To address this, we introduce the notion of \emph{opening soundness} for group signatures which essentially requires that it is infeasible to produce a proof of ownership of a valid group signature for any user except the original signer. We then show a relatively simple modification of the scheme by Groth which allows us to prove opening soundness for the modified scheme without introducing any additional assumptions. We believe that opening soundness is an important and natural security requirement for group signatures, and hope that future schemes will adopt this type of security.
Last updated:  2013-04-03
A formal study of two physical countermeasures against side channel attacks
Sébastien Briais, Sylvain Guilley, Jean-Luc Danger
Secure electronic circuits must implement countermeasures against a wide range of attacks. Often, protection against side channel attacks must be tightly integrated with the functionality to be protected, and it is now part of the designer's job to implement it. But this task is known to be error-prone, and with current development processes, countermeasures are often evaluated very late (at circuit fabrication). In order to improve the designer's confidence in the efficiency of a countermeasure, we suggest in this article resorting to formal methods early in the design flow, for two reasons. First, we intend to check that the process of transforming the design from the vulnerable description to the protected one does not alter the functionality. Second, we wish to prove that the security properties (which can derive from a formal security functional specification) are indeed met after the transformation. Our first contribution is to show how such a framework can be set up (in COQ) for netlist-level protections. The second contribution is to illustrate that this framework indeed allows one to detect vulnerabilities in dual-rail logics, with the examples of wave differential dynamic logic (WDDL) and balanced cell-based differential logic (BCDL).
Last updated:  2012-08-05
Simple construction of epsilon-biased distribution
Long Hoang Nguyen, Andrew William Roscoe
Epsilon-biased distributions have many applications in practice, including the computation of universal hashing. In this paper we improve an existing epsilon-biased distribution construction due to Alon et al. that requires uniformly and efficiently sampling irreducible polynomials of a large degree, e.g. between 80 and 160. To remove the need for such sampling, which can be computationally expensive, we replace the irreducible polynomials by random monic polynomials of higher degree, i.e. every monic polynomial of degree r, whether irreducible or reducible, is selected with the same probability 2^{-r}. To analyse the security of the scheme, we need to find the maximum number of degree r polynomials that divide a degree n polynomial, where n > r.
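The divisor-counting question at the end of the abstract can be explored with a short brute-force sketch (illustrative only, not the paper's analysis). Polynomials over GF(2) are encoded as Python integers, with bit i holding the coefficient of x^i.

```python
def gf2_mod(a, b):
    """Remainder of a divided by b in GF(2)[x] (bitmask encoding)."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def count_degree_r_divisors(f, r):
    """Count monic degree-r polynomials dividing f over GF(2)."""
    # monic degree-r polynomials: leading bit r set, lower r bits arbitrary
    return sum(1 for low in range(1 << r)
               if gf2_mod(f, (1 << r) | low) == 0)

# f = x^6 + x^3 = x^3 (x+1)(x^2+x+1) has exactly 4 monic divisors of degree 3:
# x^3, (x+1)(x^2+x+1), x^2(x+1), and x(x^2+x+1).
assert count_degree_r_divisors(0b1001000, 3) == 4
```

For the degrees used in the construction (r around 80-160) such exhaustive counting is of course infeasible; the paper bounds the count analytically.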
Last updated:  2012-08-05
Rational authentication protocols and their use in financial transactions
Long Hoang Nguyen
We use ideas from game theory to improve two families of authentication protocols, namely password-based and manual authentication schemes. The protocols will be transformed so that even if an intruder attacks different protocol runs between honest nodes, its expected payoff will still be lower than when it does not attack. A rational intruder, who always tries to maximise its payoff, therefore has no incentive to attack any protocol run among trustworthy parties. To illustrate the use of our method, we present a case study relating to the password-based authentication stage of on-line banking, where passwords are chosen either randomly or biasedly by, e.g., humans. For the latter we use the publicly available 32 million passwords of the social gaming network website RockYou as the source of human-selected passwords.
Last updated:  2012-08-05
Constructing Pairing-Friendly Genus 2 Curves with Split Jacobian
Robert Drylo
Genus 2 curves with simple but not absolutely simple jacobians can be used to construct pairing-based cryptosystems that are more efficient than for a generic genus 2 curve. We show that there is a full analogy between methods for constructing ordinary pairing-friendly elliptic curves and simple abelian varieties which are isogenous over some extension to a product of elliptic curves. We extend the notions of complete, complete with variable discriminant, and sparse families introduced by Freeman, Scott and Teske for elliptic curves, and we generalize the Cocks-Pinch method and the Brezing-Weng method to construct families of each type. To realize abelian surfaces as jacobians we use genus 2 curves of the form $y^2=x^5+ax^3+bx$ or $y^2=x^6+ax^3+b$, and apply the method of Freeman and Satoh. As applications we find some families of abelian surfaces with record $\rho$-value $\rho=2$ for embedding degrees $k=3,4,6,12$, or $\rho=2.1$ for $k=27,54$. We also give variable-discriminant families with the best $\rho$-values.
Last updated:  2012-08-05
A Generalised Formula for Calculating the Resilience of Random Key Predistribution Schemes
Ed Kendall, Michelle Kendall, Wilfrid S. Kendall
A commonly used metric for comparing the resilience of key predistribution schemes is $\fail_s$, which measures the proportion of network connections which are `broken' by an adversary that has compromised $s$ nodes. In `Random key predistribution schemes for sensor networks', Chan, Perrig and Song present a formula for measuring the resilience in a class of random key predistribution schemes called $q$-composite schemes. We present a correction to this formula for schemes where more than one key may be used to secure a link between a pair of nodes. Our corrected formula features an additional parameter which makes it applicable to a wider variety of random key predistribution schemes, including the original Eschenauer-Gligor scheme. We also present a simplification of the formula for calculating connectivity. We refer to the recent paper by Yum and Lee, which also claims to correct the original formula for the $q$-composite scheme; however, the resulting formula is complicated, computationally demanding, and hard to understand. The formula which we propose and prove is easily computable and can be applied to a wider range of schemes.
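The $\fail_s$ metric itself is easy to estimate empirically, which is one way to sanity-check any proposed closed-form formula. The following Monte Carlo sketch (all parameters and names are illustrative, not from the paper) approximates it for the basic Eschenauer-Gligor scheme, where a link is secured by a single shared key.

```python
import random

def estimate_fail_s(pool, ring, s, trials, rng):
    """Estimate the fraction of links (between uncompromised nodes)
    readable by an adversary holding the key rings of s nodes."""
    broken = total = 0
    for _ in range(trials):
        # adversary's keys: union of s compromised nodes' random key rings
        adv = set()
        for _ in range(s):
            adv |= set(rng.sample(range(pool), ring))
        # a fresh link between two uncompromised nodes, secured by one
        # randomly chosen shared key (if any key is shared at all)
        a = set(rng.sample(range(pool), ring))
        b = set(rng.sample(range(pool), ring))
        shared = a & b
        if shared:
            total += 1
            if rng.choice(sorted(shared)) in adv:
                broken += 1
    return broken / total

est = estimate_fail_s(pool=1000, ring=50, s=5, trials=2000, rng=random.Random(1))
# roughly matches the standard approximation 1 - (1 - ring/pool)**s (~0.23 here)
```

Such a simulation scales poorly compared with a formula, which is why an easily computable closed form, as the abstract argues, matters in practice.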
Last updated:  2015-10-01
The Stream Cipher Core of the 3GPP Encryption Standard 128-EEA3: Timing Attacks and Countermeasures
Gautham Sekar
The core of the 3rd Generation Partnership Project (3GPP) encryption standard 128-EEA3 is a stream cipher called ZUC. It was designed by the Chinese Academy of Sciences and proposed for inclusion in the cellular wireless standards called “Long Term Evolution” or “4G”. The LFSR-based cipher uses a 128-bit key. In this paper, we first show timing attacks on ZUC that can recover, with about 71.43% success rate, (i) one bit of the secret key immediately, and (ii) information involving 6 other key bits. The time, memory and data requirements of the attacks are negligible. While we see potential improvements to the attacks, we also suggest countermeasures.
Last updated:  2012-08-07
Scalable Group Signatures with Revocation
Benoit Libert, Thomas Peters, Moti Yung
Group signatures are a central cryptographic primitive, simultaneously supporting accountability and anonymity. They allow users to anonymously sign messages on behalf of a group they are members of. The recent years saw the appearance of several constructions with security proofs in the standard model ({\it i.e.}, without appealing to the random oracle heuristic). For a digital signature scheme to be adopted, an efficient revocation scheme (as in regular PKI) is absolutely necessary. Despite over a decade of extensive research, membership revocation remains a non-trivial problem in group signatures: all existing solutions are not truly scalable due to either high overhead (e.g., large group public key size), or limiting operational requirement (the need for all users to follow the system's entire history). In the standard model, the situation is even worse as many existing solutions are not readily adaptable. To fill this gap and tackle this challenge, we describe a new revocation approach based, perhaps somewhat unexpectedly, on the Naor-Naor-Lotspiech framework which was introduced for a different problem (namely, that of broadcast encryption). Our mechanism yields efficient and scalable revocable group signatures in the standard model. In particular, the size of signatures and the verification cost are independent of the number of revocations and the maximal cardinality $N$ of the group while other complexities are at most polylogarithmic in $N$. Moreover, the schemes are history-independent: unrevoked group members do not have to update their keys when a revocation occurs.
Last updated:  2012-08-05
Programmable encryption and key-dependent messages
Dominique Unruh
We present the notion of PROG-KDM security for public-key encryption schemes. This security notion captures both KDM security and revealing of secret keys (key corruptions) in a single definition. This is achieved by requiring the existence of a simulator that can program ciphertexts when a secret key is revealed, i.e., the simulator can delay the decision what plaintext is contained in what ciphertext to the moment where the ciphertext is opened. The definition is formulated in the random oracle model. We show that PROG-KDM security can be achieved by showing that a natural and practical construction in the ideal cipher model is PROG-KDM secure (hybrid encryption using authenticated CBC encryption).
Last updated:  2012-08-05
Biclique Cryptanalysis of TWINE
Mustafa Çoban, Ferhat Karakoç, Özkan Boztaş
TWINE is a lightweight block cipher proposed at the ECRYPT Workshop on Lightweight Cryptography 2011, Belgium. The cipher consists of 36 rounds and has two versions, TWINE-80 and TWINE-128, supporting key lengths of 80 and 128 bits, respectively. The block length of both versions is 64 bits. In this paper, we present the first single-key attacks on both versions of the cipher. In these attacks, we use the recently developed biclique technique. The complexities of the attacks on TWINE-80 and TWINE-128 are $2^{79.10}$ and $2^{126.82}$ respectively, and the data requirement for the two attacks is $2^{60}$.
Last updated:  2012-08-02
Security margin evaluation of SHA-3 contest finalists through SAT-based attacks
Ekawat Homsirikamol, Pawel Morawiecki, Marcin Rogawski, Marian Srebrny
In 2007, the U.S. National Institute of Standards and Technology (NIST) announced a public contest aiming at the selection of a new standard for a cryptographic hash function. In this paper, the security margin of the five SHA-3 finalists is evaluated under the assumption that attacks launched on the finalists should be practically verified. The attack method applied is called logical cryptanalysis, where the original task is expressed as a SATisfiability problem instance. A new toolkit is used to simplify the most arduous stages of this type of cryptanalysis and helps to mount the attacks in a uniform way. In the context of SAT-based attacks, it is shown that all the finalists have a substantially bigger security margin than the current standards SHA-256 and SHA-1. Two other metrics, software performance and hardware efficiency, are combined with the security results to provide a more comprehensive picture of the SHA-3 finalists.
Last updated:  2012-08-02
A Publicly-Verifiable Mix-net with Everlasting Privacy Towards Observers
Uncategorized
Denise Demirel, Jeroen van de Graaf
Show abstract
Uncategorized
In this paper we present a novel, publicly verifiable mixing scheme which has everlasting privacy towards observers: all the information published on the bulletin board by the mixes (audit information etc) reveals no information about the identity of any of the messages published. The correctness of the mixing process is statistical: even if all authorities conspire, they cannot change the contents of any message without being detected with overwhelming probability. We accomplish this by encoding the messages submitted using so-called Pedersen commitments. Decoding (opening) these is possible because we create a parallel mix-net run by the same mixes to which the public has no access. This private mix-net uses the same permutations as the public one, but uses homomorphic encryption, which is used to send auxiliary information (messages, decommitment values) through the mix-net to allow decoding.
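The Pedersen commitments used to encode the submitted messages can be sketched in a few lines: C = g^m * h^r mod p is perfectly hiding (the random r masks m) and computationally binding under the discrete logarithm assumption. The toy group below is for illustration only; in practice the parameters are large and h is generated so that its discrete log with respect to g is unknown.

```python
import random

# small safe-prime group for illustration: p = 2q + 1, both prime
p, q = 1019, 509
g = 4                   # generates the order-q subgroup of Z_p^*
h = pow(g, 123, p)      # toy only: a known dlog of h would break binding

def commit(m, rng):
    """Pedersen commitment to m; returns (commitment, decommitment value)."""
    r = rng.randrange(q)
    return pow(g, m, p) * pow(h, r, p) % p, r

def open_commitment(c, m, r):
    """Verify that (m, r) opens the commitment c."""
    return c == pow(g, m, p) * pow(h, r, p) % p

rng = random.Random(7)
c, r = commit(42, rng)
assert open_commitment(c, 42, r)
```

Because the commitments are perfectly hiding, everything posted on the bulletin board is information-theoretically independent of the committed messages, which is the source of the everlasting privacy towards observers.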
Last updated:  2014-02-10
DAC-MACS: Effective Data Access Control for Multi-Authority Cloud Storage Systems
Kan Yang, Xiaohua Jia, Kui Ren
Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is a promising technique for access control of encrypted data; it requires a trusted authority to manage all the attributes and distribute keys in the system. In cloud storage systems, multiple authorities co-exist, and each authority is able to issue attributes independently. However, existing CP-ABE schemes cannot be directly applied to access control for multi-authority cloud storage systems, due to the inefficiency of decryption and revocation. In this paper, we propose DAC-MACS (Data Access Control for Multi-Authority Cloud Storage), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multi-authority CP-ABE scheme with efficient decryption and also design an efficient attribute revocation method that achieves both forward security and backward security. The analysis and the simulation results show that DAC-MACS is highly efficient and provably secure under the security model.
Last updated:  2012-08-01
Weaknesses of an Improvement Authentication Scheme using
Rafael Martínez-Peláez, Francisco Rico-Novella
Recently, Sood-Sarje-Singh proposed an improvement to Liou et al.'s dynamic ID-based remote user authentication scheme using smart cards, intended to prevent impersonation attacks, malicious user attacks, off-line password guessing attacks, and man-in-the-middle attacks. However, we demonstrate that Sood et al.'s scheme is still vulnerable to malicious user attacks, impersonation attacks, and attacks that steal information from the database.
Last updated:  2012-08-01
Efficient Padding Oracle Attacks on Cryptographic Hardware
Romain Bardou, Riccardo Focardi, Yusuke Kawamoto, Lorenzo Simionato, Graham Steel, Joe-Kai Tsay
We show how to exploit the encrypted key import functions of a variety of different cryptographic devices to reveal the imported key. The attacks are padding oracle attacks, where error messages resulting from incorrectly padded plaintexts are used as a side channel. In the asymmetric encryption case, we modify and improve Bleichenbacher's attack on RSA PKCS#1v1.5 padding, giving new cryptanalysis that allows us to carry out the `million message attack' in a mean of 49 000 and median of 14 500 oracle calls in the case of cracking an unknown valid ciphertext under a 1024 bit key (the original algorithm takes a mean of 215 000 and a median of 163 000 in the same case). We show how implementation details of certain devices admit an attack that requires only 9 400 operations on average (3 800 median). For the symmetric case, we adapt Vaudenay's CBC attack, which is already highly efficient. We demonstrate the vulnerabilities on a number of commercially available cryptographic devices, including security tokens, smartcards and the Estonian electronic ID card. The attacks are efficient enough to be practical: we give timing details for all the devices found to be vulnerable, showing how our optimisations make a qualitative difference to the practicality of the attack. We give mathematical analysis of the effectiveness of the attacks, extensive empirical results, and a discussion of countermeasures and manufacturer reaction.
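The symmetric (Vaudenay-style) side of these attacks can be demonstrated end-to-end in a short sketch. Everything below is illustrative, not a specific device's API: the toy 4-round Feistel cipher stands in for a real block cipher, and the attacker recovers a CBC-encrypted plaintext block using only the oracle's valid/invalid padding answer.

```python
import hashlib
import os

KEY = b"demo-key"                        # held by the "device" only

def _f(r, half):                         # toy Feistel round function
    return hashlib.sha256(KEY + bytes([r]) + half).digest()[:8]

def enc_block(b):
    L, R = b[:8], b[8:]
    for r in range(4):
        L, R = R, bytes(x ^ y for x, y in zip(L, _f(r, R)))
    return L + R

def dec_block(b):
    L, R = b[:8], b[8:]
    for r in reversed(range(4)):
        L, R = bytes(x ^ y for x, y in zip(R, _f(r, L))), L
    return L + R

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def padding_oracle(prev, block):
    """The side channel: does CBC decryption of `block`, chained after
    `prev`, end in valid PKCS#7 padding?"""
    pt = xor(dec_block(block), prev)
    n = pt[-1]
    return 1 <= n <= 16 and pt[-n:] == bytes([n]) * n

def recover_block(prev, block):
    """Recover one plaintext block using only the oracle's yes/no answers."""
    inter = bytearray(16)                # the block-cipher output, learned byte by byte
    for pad in range(1, 17):
        fake = bytearray(os.urandom(16))
        for i in range(1, pad):          # force already-known bytes to decrypt to `pad`
            fake[16 - i] = inter[16 - i] ^ pad
        for g in range(256):
            fake[16 - pad] = g
            if not padding_oracle(bytes(fake), block):
                continue
            if pad == 1:                 # rule out an accidental \x02\x02-style hit
                fake2 = bytearray(fake)
                fake2[14] ^= 1
                if not padding_oracle(bytes(fake2), block):
                    continue
            inter[16 - pad] = g ^ pad
            break
    return xor(inter, prev)

iv = os.urandom(16)
pt = b"attack at dawn!" + b"\x01"        # one PKCS#7-padded 16-byte block
ct = enc_block(xor(pt, iv))              # CBC encryption of a single block
assert recover_block(iv, ct) == pt
```

This recovers a full block with at most 16 * 256 oracle calls; the paper's contribution on the symmetric side is adapting and demonstrating this class of attack against real devices, and its optimisations for the asymmetric PKCS#1 v1.5 case are considerably more involved.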
Last updated:  2017-12-08
Beyond eCK: Perfect Forward Secrecy under Actor Compromise and Ephemeral-Key Reveal
Cas Cremers, Michèle Feltz
We show that it is possible to achieve perfect forward secrecy in two-message or one-round key exchange (KE) protocols that satisfy even stronger security properties than provided by the extended Canetti-Krawczyk (eCK) security model. In particular, we consider perfect forward secrecy in the presence of adversaries that can reveal ephemeral secret keys and the long-term secret keys of the actor of a session (similar to Key Compromise Impersonation). We propose two new game-based security models for KE protocols. First, we formalize a slightly stronger variant of the eCK security model that we call eCKw. Second, we integrate perfect forward secrecy into eCKw, which gives rise to the even stronger eCK-PFS model. We propose a security-strengthening transformation (i.e., a compiler) between our new models. Given a two-message Diffie-Hellman type protocol secure in eCKw, our transformation yields a two-message protocol that is secure in eCK-PFS. As an example, we show how our transformation can be applied to the NAXOS protocol.
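The protocols the transformation applies to share the basic two-message Diffie-Hellman shape: each party sends one group element derived from an ephemeral secret. The sketch below shows only that unauthenticated skeleton with toy parameters; it is not NAXOS or the compiled eCK-PFS-secure protocol:

```python
# Minimal two-message Diffie-Hellman skeleton (toy group, no
# authentication) illustrating the class of "two-message DH type"
# protocols the paper's transformation takes as input.
import secrets

p, g = 23, 5  # toy parameters for illustration; real protocols use large groups

def keygen():
    """Pick an ephemeral secret and the single message it determines."""
    x = secrets.randbelow(p - 2) + 1   # secret exponent in [1, p-2]
    return x, pow(g, x, p)             # (ephemeral secret, outgoing message)

a, msg_a = keygen()   # message 1: initiator -> responder
b, msg_b = keygen()   # message 2: responder -> initiator

key_initiator = pow(msg_b, a, p)
key_responder = pow(msg_a, b, p)
assert key_initiator == key_responder  # both derive the same session key
```

Security in eCKw or eCK-PFS additionally requires long-term keys and a key-derivation step that mixes them with the ephemeral values, which this skeleton deliberately omits.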
Last updated:  2016-02-12
Revisiting Key Schedule's Diffusion In Relation With Round Function's Diffusion
Jialin Huang, Xuejia Lai
We study the weakness of key schedules starting from an observation: many existing attacks exploit the fact that key schedules distribute key bits poorly along the diffusion path of the round function. This highlights the importance of the relation between the diffusion of the key schedule and that of the round function. We present new cryptanalysis results obtained by exploring this relation and propose a new necessary criterion for key schedule diffusion. We discuss potential attacks and summarize the causes of key schedules failing to satisfy this criterion. One major cause is that overlap between the diffusion of the key schedule and that of the round function leads to leakage of key-bit information. Finally, we present a measure for estimating our criterion on recursive key schedules. Key schedule design today still lacks practical and necessary principles; for a practical key schedule with limited diffusion, our work adds insight into its requirements and helps to maximize the security level.
Last updated:  2012-08-01
Low complexity bit-parallel $GF(2^m)$ multiplier for all-one polynomials
Yin Li, Gong-liang Chen, Xiao-ning Xie
This paper presents a new bit-parallel multiplier for the finite field $GF(2^m)$ generated with an irreducible all-one polynomial. Redundant representation is used to reduce the time delay of the proposed multiplier, while a three-term Karatsuba-like formula is combined with this representation to decrease the space complexity. As a result, the proposed multiplier requires about 10 percent fewer AND/XOR gates than the most efficient bit-parallel multipliers using an all-one polynomial, while it has almost the same time delay as the previously proposed ones.
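As background for the construction above, the sketch below gives a plain schoolbook multiplication in $GF(2^4)$ defined by the all-one polynomial $p(x)=x^4+x^3+x^2+x+1$. It is a correctness baseline only, not the redundant-representation multiplier of the paper:

```python
# Reference (schoolbook) multiplication in GF(2^4) generated by the
# all-one polynomial (AOP) p(x) = x^4 + x^3 + x^2 + x + 1. Polynomials
# are encoded as bitmasks (bit i = coefficient of x^i). This is a toy
# correctness check, not the paper's low-complexity bit-parallel circuit.

M = 4
AOP = (1 << (M + 1)) - 1   # 0b11111, i.e. x^4 + x^3 + x^2 + x + 1

def clmul(a: int, b: int) -> int:
    """Carry-less polynomial multiplication over GF(2) (XOR arithmetic)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce(c: int) -> int:
    """Reduce a polynomial modulo the all-one polynomial."""
    for i in range(c.bit_length() - 1, M - 1, -1):
        if (c >> i) & 1:
            c ^= AOP << (i - M)
    return c

def gf_mul(a: int, b: int) -> int:
    return reduce(clmul(a, b))
```

A useful consequence of the AOP is that $x^{m+1} \equiv 1 \pmod{p(x)}$, e.g. `gf_mul(0b0100, 0b1000)` (that is, $x^2 \cdot x^3 = x^5$) evaluates to `1`; it is this cyclic structure that the redundant representation in the paper exploits to shorten the critical path.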