All papers in 2012 (733 results)

Last updated:  2012-12-27
Systematic Treatment of Remote Attestation
Aurelien Francillon, Quan Nguyen, Kasper B. Rasmussen, Gene Tsudik
Embedded computing devices (such as actuators, controllers and sensors of various sizes) increasingly permeate many aspects of modern life: from medical to automotive, from building and factory automation to weapons, from critical infrastructures to home entertainment. Despite their specialized nature as well as limited resources and connectivity, these devices are now becoming increasingly popular and attractive targets for various attacks, especially remote malware infestations. There have been a number of research proposals to detect and/or mitigate such attacks. They vary greatly in terms of application generality and underlying assumptions. However, one common theme is the need for Remote Attestation, a distinct security service that allows a trusted party (verifier) to check the internal state of a remote untrusted embedded device (prover). This paper provides a systematic treatment of Remote Attestation, starting with a precise definition of the desired service and proceeding to its systematic deconstruction into necessary and sufficient properties. These properties are, in turn, mapped into a minimal collection of hardware and software components that results in secure Remote Attestation. One distinguishing feature of this line of research is the need to prove (or, at least, argue) architectural minimality; this is rarely encountered in security research. This work also offers some insights into vulnerabilities of certain prior techniques and provides a promising platform for attaining more advanced security services and guarantees.
Last updated:  2012-12-27
On the Security of the Core of PRINCE Against Biclique and Differential Cryptanalysis
Farzaneh Abed, Eik List, Stefan Lucks
PRINCE is a modern involutive lightweight cipher proposed by Rechberger et al. in 2012. PRINCE uses a 64-bit core cipher, which holds the major encryption logic and is wrapped by two key additions. Thus, the security of the cipher depends mainly on the security properties of the core. In this paper, we present an independent-biclique attack on the full version and a differential inside-out cryptanalysis on the round-reduced version of the core of PRINCE.
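To make the core-plus-whitening layout concrete, here is a minimal Python sketch of this kind of FX-style wrapping; toy_core below is a stand-in permutation, not the actual PRINCEcore round function.

# Toy sketch of the outer structure only: a 64-bit keyed core wrapped by two
# whitening-key additions. toy_core is a placeholder, NOT the real PRINCEcore.
MASK64 = (1 << 64) - 1

def toy_core(x, k1):
    for _ in range(12):                      # placeholder keyed mixing rounds
        x = ((x ^ k1) * 0x9E3779B97F4A7C15 + 1) & MASK64
        x = ((x << 13) | (x >> 51)) & MASK64
    return x

def prince_like_encrypt(m, k0, k0_prime, k1):
    # whitening keys k0, k0_prime wrap the core, so the cipher's security
    # rests mainly on the keyed core in the middle
    return toy_core(m ^ k0, k1) ^ k0_prime

print(hex(prince_like_encrypt(0x0123456789ABCDEF, 0x1111, 0x2222, 0x3333)))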
Last updated:  2021-06-16
Unprovable Security of 2-Message Zero Knowledge
Kai-Min Chung, Edward Lui, Mohammad Mahmoody, Rafael Pass
Goldreich and Oren (JoC'94) show that only languages in BPP have 2-message zero-knowledge arguments. In this paper we consider weaker, super-polynomial simulation (SPS), notions of zero-knowledge. We present barriers to using black-box reductions for demonstrating soundness of 2-message protocols with efficient prover strategies satisfying SPS zero-knowledge. More precisely, if $poly(T(n))$-hard one-way functions exist for a super-polynomial $T(n)$, the following holds about 2-message efficient prover arguments over statements of length $n$. 1. Black-box reductions cannot prove soundness of 2-message $T(n)$-simulatable arguments based on any polynomial-time intractability assumption, unless the assumption can be broken in polynomial time. This complements known 2-message quasi-polynomial-time simulatable arguments using a quasi-polynomial-time reduction (Pass'03), and 2-message exponential-time simulatable proofs using a polynomial-time reduction (Dwork-Naor'00, Pass'03). 2. Black-box reductions cannot prove soundness of 2-message strong $T(n)$-simulatable arguments, even if the reduction and the challenger both can run in $poly(T(n))$ time, unless the assumption can be broken in $poly(T(n))$ time. Strong $T(\cdot)$-simulatability means that the output of the simulator is indistinguishable also for $poly(T(\cdot))$-size circuits, with a $negl(T(\cdot))$ indistinguishability gap. This complements known 3-message strong quasi-polynomial-time simulatable proofs (Blum'86, Canetti et al. '00), or 2-message quasi-polynomial-time simulatable arguments (Khurana-Sahai'17, Kalai-Khurana-Sahai'18) satisfying a relaxed notion of strong simulation where the distinguisher's size can be large, but the distinguishing gap is negligible in $n$.
Last updated:  2012-12-19
Non Observability in the Random Oracle Model
Prabhanjan Ananth, Raghav Bhaskar
The Random Oracle Model, introduced by Bellare and Rogaway, provides a method to heuristically argue about the security of cryptographic primitives and protocols. The basis of this heuristic is that secure hash functions are close enough to random functions in their behavior, and so, a primitive that is secure using a random function should continue to remain secure even when the random function is replaced by a real hash function. In the security proof, this setting is realized by modeling the hash function as a random oracle. However, this approach in particular also enables any reduction, reducing a hard problem to the existence of an adversary, to \emph{observe} the queries the adversary makes to its random oracle and to \emph{program} the responses that the oracle provides to these queries. While the issue of programmability of query responses has received a lot of attention in the literature, to the best of our knowledge, observability of the adversary's queries has not been identified as an artefact of the Random Oracle Model. In this work, we study the security of several popular schemes when the security reduction cannot ``observe'' the adversary's queries to the random oracle, but can (possibly) continue to ``program'' the query responses. We first show that RSA-PFDH and Schnorr's signatures continue to remain secure when the security reduction is non observing (NO reductions), which is not surprising as their proofs in the random oracle model rely on programmability. We also provide two example schemes, namely, Fischlin's NIZK-PoK [Fischlin05] and a non-interactive extractable commitment scheme, whose extractor algorithms seem to rely on observability in the random oracle model. While we prove that Fischlin's online extractors cannot exist when they are non observing, our extractable commitment scheme continues to be secure even when the extractors are non observing. We also introduce Non Observing Non Programming reductions which we believe are closest to standard model reductions.
Last updated:  2012-12-19
Further results on the distinctness of binary sequences derived from primitive sequences modulo square-free odd integers
Qun-Xiong Zheng, Wen-Feng Qi
This paper studies the distinctness of primitive sequences over Z/(M) modulo 2, where M is an odd integer that is composite and square-free, and Z/(M) is the integer residue ring modulo M. A new sufficient condition is given for ensuring that primitive sequences generated by a primitive polynomial f(x) over Z/(M) are pairwise distinct modulo 2. This result improves on a recent result from our previous paper [27], and consequently the set of primitive sequences over Z/(M) that can be proven to be distinct modulo 2 is greatly enlarged.
Last updated:  2012-12-18
Calling out Cheaters: Covert Security With Public Verifiability
Gilad Asharov, Claudio Orlandi
We introduce the notion of covert security with public verifiability, building on the covert security model introduced by Aumann and Lindell (TCC 2007). Protocols that satisfy covert security guarantee that the honest parties involved in the protocol will notice any cheating attempt with some constant probability $\epsilon$. The idea behind the model is that the fear of being caught cheating will be enough of a deterrent to prevent any cheating attempt. However, in the basic covert security model, the honest parties are not able to persuade any third party (say, a judge) that cheating occurred. We propose (and formally define) an extension of the model where, when an honest party detects cheating, it also receives a certificate that can be published and used to persuade other parties, without revealing any information about the honest party's input. In addition, malicious parties cannot create fake certificates in an attempt to frame innocent parties. Finally, we construct a secure two-party computation protocol for any functionality $f$ that satisfies our definition, and our protocol is almost as efficient as the one of Aumann and Lindell. We believe that the fear of public humiliation or even legal consequences vastly exceeds the deterrent given by standard covert security. Therefore, even a small value of the deterrent factor $\epsilon$ will suffice in discouraging any cheating attempt. As the overall complexity of covert security and the parameter $\epsilon$ are inversely proportional to each other, we believe that the small price to pay to get the public verifiability property on top of covert security will be dominated by the efficiency gain obtained by using a smaller value of $\epsilon$.
Last updated:  2012-12-18
Cryptanalysis of WIDEA
Gaëtan Leurent
WIDEA is a family of block ciphers designed by Junod and Macchetti in 2009 as an extension of IDEA to larger block sizes (256 and 512 bits for the main instances WIDEA-4 and WIDEA-8) and key sizes (512 and 1024 bits), with a focus on using them to design a hash function. WIDEA is based on the trusted IDEA design, and was expected to inherit its good security properties. WIDEA-w is composed of w parallel copies of the IDEA block cipher, with an MDS matrix to provide diffusion between them. In this paper we present low complexity attacks on WIDEA based on truncated differentials. We show a distinguisher for the full WIDEA with complexity only 2^65, and we use the distinguisher in a key-recovery attack with complexity w·2^68. We also show a collision attack on WIDEA-8 if it is used to build a hash function using the Merkle-Damgård mode of operation. The attacks exploit the parallel structure of WIDEA and the limited diffusion between the IDEA instances, using differential trails where the MDS diffusion layer is never active. In addition, we use structures of plaintext to reduce the data complexity.
Last updated:  2012-12-18
On the (In)security of the Fiat-Shamir Paradigm, Revisited
Dana Dachman-Soled, Abhishek Jain, Yael Tauman Kalai, Adriana Lopez-Alt
The Fiat-Shamir paradigm [CRYPTO'86] is a heuristic for converting 3-round identification schemes into signature schemes, and more generally, for collapsing rounds in public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and many researchers have studied its security (and insecurity). In this work, we continue this study. As our main result, we show that for many well studied interactive *proofs* (and arguments) the soundness of the Fiat-Shamir heuristic cannot be proven via a black-box reduction to any falsifiable assumption. Previously, the insecurity of this paradigm was exemplified only when applied to interactive arguments (as opposed to proofs). Using similar techniques, we also show a black-box impossibility result for Micali's CS-proofs [FOCS'94]. Namely, we prove that there exist PCPs such that for "sufficiently hard'' NP languages, Micali's CS-proof cannot be proven sound via black-box reduction to any falsifiable assumption. These results are obtained by extending the impossibility of two-message zero knowledge protocols due to Goldreich and Oren [J. Cryptology'94].
Last updated:  2012-12-19
Why "Fiat-Shamir for Proofs" Lacks a Proof
Nir Bitansky, Sanjam Garg, Daniel Wichs
The Fiat-Shamir heuristic (CRYPTO '86) is used to convert any 3-message public-coin proof or argument system into a non-interactive argument, by hashing the prover's first message to select the verifier's challenge. It is known that this heuristic is sound when the hash function is modeled as a random oracle. On the other hand, the surprising result of Goldwasser and Kalai (FOCS '03) shows that there exists a computationally sound argument on which the Fiat-Shamir heuristic is never sound, when instantiated with any actual efficient hash function. This leaves us with the following interesting possibility: perhaps there exists a hash function that securely instantiates the Fiat-Shamir heuristic for all 3-message public-coin statistically sound proofs, even if it can fail for some computationally sound arguments. Indeed, the existence of such hash functions has been conjectured by Barak, Lindell and Vadhan (FOCS '03), who also gave a seemingly reasonable and sufficient condition under which such hash functions exist. However, we do not have any provably secure construction of such hash functions, under any standard assumption such as the hardness of DDH, RSA, QR, LWE, etc. In this work we give a broad black-box separation result, showing that the security of such hash functions cannot be proved under virtually any standard cryptographic assumption via a black-box reduction.
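For reference, here is a minimal Python sketch of the heuristic applied to Schnorr's 3-message identification protocol, with SHA-256 standing in for the hash; the group parameters are toy-sized and illustrative only.

import hashlib
import secrets

# Toy Schnorr parameters: p = 2q + 1, g generates the order-q subgroup.
# These sizes are for illustration only -- far too small for real use.
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4                  # a quadratic residue, hence of order q

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x, msg):
    r = secrets.randbelow(q - 1) + 1
    a = pow(g, r, p)          # prover's first message
    c = H(a, msg)             # Fiat-Shamir: hash replaces verifier's challenge
    z = (r + c * x) % q       # prover's response
    return a, z

def verify(y, msg, sig):
    a, z = sig
    c = H(a, msg)
    return pow(g, z, p) == (a * pow(y, c, p)) % p

x, y = keygen()
assert verify(y, "hello", sign(x, "hello"))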
Last updated:  2012-12-18
On the Non-malleability of the Fiat-Shamir Transform
Sebastian Faust, Markulf Kohlweiss, Giorgia Azzurra Marson, Daniele Venturi
The Fiat-Shamir transform is a well-studied paradigm for removing interaction from public-coin protocols. We investigate whether the resulting non-interactive zero-knowledge (NIZK) proof systems also exhibit non-malleability properties that have up to now only been studied for NIZK proof systems in the common reference string model: first, we formally define simulation soundness and a weak form of simulation extraction in the random oracle model (ROM). Second, we show that in the ROM the Fiat-Shamir transform meets these properties under lenient conditions. A consequence of our result is that, in the ROM, we obtain truly efficient non-malleable NIZK proof systems essentially for free. Our definitions are sufficient for instantiating the Naor-Yung paradigm for CCA2-secure encryption, as well as a generic construction for signature schemes from hard relations and simulation-extractable NIZK proof systems. These two constructions are interesting as the former preserves both the leakage resilience and key-dependent message security of the underlying CPA-secure encryption scheme, while the latter lifts the leakage resilience of the hard relation to the leakage resilience of the resulting signature scheme.
Last updated:  2012-12-18
Profiled Model Based Power Simulator for Side Channel Evaluation
Nicolas Debande, Maël Berthier, Yves Bocktaels, Thanh-Ha Le
An embedded cryptographic device performs operations on sensitive data and, as such, is vulnerable to side-channel attacks. This forces smart-card manufacturers to carefully consider the development of security mechanisms. To accelerate this procedure, the use of power and electromagnetic simulators can be relevant and saves non-negligible time. Based on a high-level simulator, we propose to use profiled abstract models to gain accuracy on the simulated traces. These abstract models are obtained by profiling some parts of the target device, which is physically available to the evaluator.
Last updated:  2012-12-18
Cryptanalysis of RAPP, an RFID Authentication Protocol
Nasour Bagheri, Masoumeh Safkhani, Pedro Peris-Lopez, Juan E. Tapiador
Tian et al. proposed a novel ultralightweight RFID mutual authentication protocol [4] that has recently been analyzed in [1], [2], [5]. In this letter, we first propose a desynchronization attack that succeeds with probability almost 1, which improves upon the 0.25 given by the attack in [1]. We also show that the bad properties of the proposed permutation function can be exploited to disclose several bits of the tag’s secret (rather than just one bit as in [2]), which increases the power of a traceability attack. Finally, we show how to extend the above attack to run a full disclosure attack, which requires eavesdropping on fewer protocol runs than the attack described in [5] (i.e., $192 \ll 2^{30}$).
Last updated:  2012-12-18
Recovering RSA Secret Keys from Noisy Key Bits with Erasures and Errors
Noboru Kunihiro, Naoyuki Shinohara, Tetsuya Izu
We discuss how to recover RSA secret keys from noisy key bits with erasures and errors. There are two known algorithms that recover original secret keys from noisy keys. At Crypto 2009, Heninger and Shacham proposed a method for the case where an erroneous version of the secret key contains only erasures. Subsequently, Henecka et al. proposed a method for an erroneous version containing only errors at Crypto 2010. For physical attacks such as side-channel and cold boot attacks, we need to study key recovery from a noisy secret key containing both erasures and errors. In this paper, we propose a method to recover a secret key from such an erroneous version and analyze the condition on the error and erasure rates so that our algorithm succeeds in finding the correct secret key in polynomial time. We also evaluate a theoretical bound for recovering the secret key and discuss to what extent our algorithm achieves this bound.
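For intuition, a minimal Python sketch of the erasure-only branch-and-prune idea, in the spirit of Heninger and Shacham; the paper's algorithm additionally handles errors and can exploit the other secret-key components.

# Toy branch-and-prune recovery of RSA primes from erasure-degraded bit
# strings (erasures marked '?'). Illustrative only: real attacks also use
# d, d_p, d_q and tolerate flipped bits, which this sketch does not.
def recover(N, p_bits, q_bits):
    # p_bits/q_bits: little-endian lists of 0, 1 or '?' (erasure), length n
    n = len(p_bits)
    cands = [(0, 0)]                      # partial (p, q) agreeing on bits < i
    for i in range(n):
        new = []
        for p, q in cands:
            for pb in ([p_bits[i]] if p_bits[i] != '?' else [0, 1]):
                for qb in ([q_bits[i]] if q_bits[i] != '?' else [0, 1]):
                    pp, qq = p | (pb << i), q | (qb << i)
                    # prune: the low i+1 bits of p*q must match those of N
                    if (pp * qq) % (1 << (i + 1)) == N % (1 << (i + 1)):
                        new.append((pp, qq))
        cands = new
    return [(p, q) for p, q in cands if p * q == N]

# usage: p = 251, q = 239, with a few bits of each erased
N = 251 * 239
p_bits = [1, 1, '?', 1, 1, '?', 1, 1]    # 251 = 0b11111011, little-endian
q_bits = ['?', 1, 1, 1, '?', 1, 1, 1]    # 239 = 0b11101111
print(recover(N, p_bits, q_bits))        # [(251, 239)]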
Last updated:  2014-05-14
Efficient, Adaptively Secure, and Composable Oblivious Transfer with a Single, Global CRS
Seung Geol Choi, Jonathan Katz, Hoeteck Wee, Hong-Sheng Zhou
We present a general framework for efficient, universally composable oblivious transfer (OT) protocols in which a single, global, common reference string (CRS) can be used for multiple invocations of oblivious transfer by arbitrary pairs of parties. In addition: - Our framework is round-efficient. E.g., under the DLIN or SXDH assumptions we achieve round-optimal protocols with static security, or 3-round protocols with adaptive security (assuming erasure). - Our resulting protocols are more efficient than any known previously, and in particular yield protocols for string OT using O(1) exponentiations and communicating O(1) group elements. Our result improves on that of Peikert et al. (Crypto 2008), which uses a CRS whose length depends on the number of parties in the network and achieves only static security. Compared to Garay et al. (Crypto 2009), we achieve adaptive security with better round complexity and efficiency.
Last updated:  2014-04-21
How Practical is Public-Key Encryption Based on LPN and Ring-LPN?
Ivan Damgård, Sunoo Park
We conduct a study of public-key cryptosystems based on variants of the Learning Parity with Noise (LPN) problem. The main LPN variant in consideration was introduced by Alekhnovich (FOCS 2003), and we describe several improvements to the originally proposed scheme, inspired by similar existing variants of Regev's LWE-based cryptosystem. To achieve further efficiency, we propose the first public-key cryptosystem based on the ring-LPN problem, which is a more recently introduced LPN variant that makes for substantial improvement in terms of both time and space. We also introduce a variant of this problem called the transposed Ring-LPN problem. Our public-key scheme based on this problem is even more efficient. For all cases, we compute the parameters required for various security levels in practice, given the best currently known attacks. Our conclusion is that the basic LPN-based scheme is in several respects not competitive with existing practical schemes, as the public key, ciphertexts and encryption time become very large already for 80-bit security. On the other hand, the scheme based on transposed Ring-LPN is far better in all these respects. Although the public key and ciphertexts are still larger than for, say, RSA at comparable security levels, they are not prohibitively large; moreover, for decryption, the scheme outperforms RSA for security levels of 112 bits or more. The Ring-LPN based scheme is less efficient, however. Thus, LPN-based public-key cryptography seems to be somewhat more promising for practical use than has been generally assumed so far.
Last updated:  2013-07-01
5PM: Secure Pattern Matching
Joshua Baron, Karim El Defrawy, Kirill Minkovich, Rafail Ostrovsky, Eric Tressler
In this paper we consider the problem of secure pattern matching that allows single-character wildcards and substring matching in the malicious (stand-alone) setting. Our protocol, called 5PM, is executed between two parties: Server, holding a text of length $n$, and Client, holding a pattern of length $m$ to be matched against the text, where our notion of matching is more general and includes non-binary alphabets, non-binary Hamming distance and non-binary substring matching. 5PM is the first secure expressive pattern matching protocol designed to optimize round complexity by carefully specifying the entire protocol round by round. In the malicious model, 5PM requires $O((m+n)k^2)$ bandwidth and $O(m+n)$ encryptions, where $m$ is the pattern length and $n$ is the text length. Further, 5PM can hide pattern size with no asymptotic additional costs in either computation or bandwidth. Finally, 5PM requires only two rounds of communication in the honest-but-curious model and eight rounds in the malicious model. Our techniques reduce pattern matching and generalized Hamming distance problems to a novel linear algebra formulation that allows for generic solutions based on any additively homomorphic encryption. We believe our efficient algebraic techniques are of independent interest.
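To give a flavor of the linear-algebra view of Hamming-distance matching (a plaintext toy only; the actual 5PM protocol evaluates such inner products under additively homomorphic encryption):

# Hamming distance of a pattern against every text alignment, expressed as
# inner products of 0/1 indicator vectors -- the kind of linear-algebra
# formulation that homomorphic encryption can then evaluate. Not 5PM itself.
ALPHA = 'acgt'

def indicators(s):
    return [[1 if ch == a else 0 for a in ALPHA] for ch in s]

def hamming_profile(text, pattern):
    T, P = indicators(text), indicators(pattern)
    m = len(pattern)
    out = []
    for i in range(len(text) - m + 1):
        matches = sum(sum(t * p for t, p in zip(T[i + j], P[j]))
                      for j in range(m))
        out.append(m - matches)          # Hamming distance at alignment i
    return out

print(hamming_profile('acgtacgt', 'cgta'))   # [4, 0, 4, 4, 4]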
Last updated:  2012-12-14
Verifiable Elections That Scale for Free
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
In order to guarantee a fair and transparent voting process, electronic voting schemes must be verifiable. Most of the time, however, it is important that elections also be anonymous. The notion of a verifiable shuffle describes how to satisfy both properties at the same time: ballots are submitted to a public bulletin board in encrypted form, verifiably shuffled by several mix servers (thus guaranteeing anonymity), and then verifiably decrypted by an appropriate threshold decryption mechanism. To guarantee transparency, the intermediate shuffles and decryption results, together with proofs of their correctness, are posted on the bulletin board throughout this process. In this paper, we present a verifiable shuffle and threshold decryption scheme in which, for security parameter k, L voters, M mix servers, and N decryption servers, the proof that the end tally corresponds to the original encrypted ballots is only O(k(L + M + N)) bits long. Previous verifiable shuffle constructions had proofs of size O(kLM + kLN), which, for elections with thousands of voters, mix servers, and decryption servers, meant that verifying an election on an ordinary computer in a reasonable amount of time was out of the question. The linchpin of each construction is a controlled-malleable proof (cm-NIZK), which allows each server, in turn, to take a current set of ciphertexts and a proof that the computation done by other servers has proceeded correctly so far. After shuffling or partially decrypting these ciphertexts, the server can also update the proof of correctness, obtaining as a result a cumulative proof that the computation is correct so far. In order to verify the end result, it is therefore sufficient to verify just the proof produced by the last server.
Last updated:  2012-12-14
Cryptanalysis of RAKAPOSHI Stream Cipher
Lin Ding, Jie Guan
RAKAPOSHI is a hardware-oriented stream cipher designed by Carlos Cid et al. in 2009. The stream cipher is based on Dynamic Linear Feedback Shift Registers, with a simple and potentially scalable design, and is particularly suitable for hardware applications with restricted resources. The RAKAPOSHI stream cipher offers 128-bit security. In this paper, we point out some weaknesses in the cipher. First, we show that there are 2^192 weak (key, IV) pairs in the RAKAPOSHI stream cipher. Second, these weak (key, IV) pairs are vulnerable to linear distinguishing attacks and algebraic attacks. Finally, we propose a real-time related-key chosen-IV attack on RAKAPOSHI. The attack recovers the 128-bit secret key with a computational complexity of 2^37, requiring 47 related keys, 2^8 chosen IVs and 2^14.555 keystream bits. The success probability of this attack is 0.999, which is quite close to 1. The experimental results corroborate our assertion.
Last updated:  2013-07-03
Fully Automated Analysis of Padding-Based Encryption in the Computational Model
Gilles Barthe, Juan Manuel Crespo, Benjamin Grégoire, César Kunz, Yassine Lakhnech, Benedikt Schmidt, Santiago Zanella-Béguelin
Computer-aided verification provides effective means of analyzing the security of cryptographic primitives. However, it has remained a challenge to achieve fully automated analyses yielding guarantees that hold against computational (rather than symbolic) attacks. This paper meets this challenge for public-key encryption schemes built from trapdoor permutations and hash functions. Using a novel combination of techniques from computational and symbolic cryptography, we present proof systems for analyzing the chosen-plaintext and chosen-ciphertext security of such schemes in the random oracle model. Building on these proof systems, we develop a toolset that bundles together fully automated proof and attack finding algorithms. We use this toolset to build a comprehensive database of encryption schemes that records attacks against insecure schemes, and proofs with concrete bounds for secure ones.
Last updated:  2012-12-14
Cryptanalysis of matrix conjugation schemes
A. D. Myasnikov, A. Ushakov
In this paper we cryptanalyze two protocols: the Grigoriev-Shpilrain authentication protocol and the public-key encryption protocols of Wang et al., which rely on the computational hardness of some variations of the conjugacy search problem in noncommutative monoids. We devise a practical heuristic algorithm that solves these problems. We conclude that these protocols are insecure for the proposed parameter values.
Last updated:  2015-02-15
Encoding Functions with Constant Online Rate or How to Compress Garbled Circuits Keys
Benny Applebaum, Yuval Ishai, Eyal Kushilevitz, Brent Waters
\emph{Randomized encodings of functions} can be used to replace a ``complex'' function $f(x)$ by a ``simpler'' randomized mapping $\hat{f}(x;r)$ whose output distribution on an input $x$ encodes the value of $f(x)$ and hides any other information about $x$. One desirable feature of randomized encodings is low \emph{online complexity}. That is, the goal is to obtain a randomized encoding $\hat{f}$ of $f$ in which most of the output can be precomputed and published before seeing the input $x$. When the input $x$ is available, it remains to publish only a short string $\hat{x}$, where the online complexity of computing $\hat{x}$ is independent of (and is typically much smaller than) the complexity of computing $f$. Yao's garbled circuit construction gives rise to such randomized encodings in which the online part $\hat{x}$ consists of $n$ encryption keys of length $\kappa$ each, where $n=|x|$ and $\kappa$ is a security parameter. Thus, the {\em online rate} $|\hat{x}|/|x|$ of this encoding is proportional to the security parameter $\kappa$. In this paper, we show that the online rate can be dramatically improved. Specifically, we show how to encode any polynomial-time computable function $f:\{0,1\}^n\to\{0,1\}^{m(n)}$ with online rate of $1+o(1)$ and with nearly linear online computation. More concretely, the online part $\hat{x}$ consists of an $n$-bit string and a single encryption key. These constructions can be based on the decisional Diffie-Hellman assumption (DDH), the Learning with Errors assumption (LWE), or the RSA assumption. We also present a variant of this result which applies to {\em arithmetic formulas}, where the encoding only makes use of arithmetic operations, as well as several negative results which complement our positive results. Our positive results can lead to efficiency improvements in most contexts where randomized encodings of functions are used. We demonstrate this by presenting several concrete applications. These include protocols for secure multiparty computation and for non-interactive verifiable computation in the preprocessing model which achieve, for the first time, an optimal online communication complexity, as well as non-interactive zero-knowledge proofs which simultaneously minimize the online communication and the prover's online computation.
Last updated:  2014-12-01
Generic Constructions of Integrated PKE and PEKS
Yu Chen, Jiang Zhang, Zhenfeng Zhang, Dongdai Lin
In this paper we investigate the topic of integrated public-key encryption (PKE) and public-key encryption with keyword search (PEKS) schemes (PKE-PEKS for short). We first formalize the strongest security notion to date for PKE-PEKS schemes, named joint CCA-security. We then propose two simple constructions of jointly CCA-secure PKE-PEKS schemes from anonymous (hierarchical) identity-based encryption schemes. We also define the notion of consistency for PKE-PEKS schemes and revisit its related notions (including consistency of PEKS schemes, robustness and collision-freeness of IBE schemes), which may be of independent interest.
Last updated:  2012-12-10
Root Optimization of Polynomials in the Number Field Sieve
Shi Bai, Richard P. Brent, Emmanuel Thomé
The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the chosen polynomials in polynomial selection can be modelled in terms of size and root properties. In this paper, we describe some algorithms for selecting polynomials with very good root properties.
Last updated:  2012-12-11
The Weakness of Integrity Protection for LTE
Teng Wu, Guang Gong
In this paper, we concentrate on the security issues of the integrity protection of LTE and present two different forgery attacks. The first attack, referred to as a {\em linear forgery attack}, shows that EIA1 and EIA3, two integrity protection algorithms of LTE, are insecure if the initial value (IV) can be repeated twice during the life cycle of an integrity key (IK). Because of the linearity of EIA1 and EIA3, given two valid Message Authentication Codes (MACs) our algorithm can forge up to $2^{32}$ valid MACs. Thus, the probability of finding a valid MAC is dramatically increased. Although the combination of IV and IK never repeats in the ordinary case, in our well-designed scenario the attacker can make the same combination occur twice. The duplication provides the opportunity to conduct our linear forgery attack, which may harm the security of communication. To test our linear forgery attack algorithm, we generate two counter check messages and successfully forge the third one. We also examine the attack timing by simulating real communication. The experimental results show that our attack is applicable. The second attack is referred to as a {\em trace extension forgery attack}, which works only in theory. However, this attack is more general than the linear forgery attack. Knowing only one MAC and message pair, we can construct a different message, which has the same MAC as the original one, with probability $\frac{1}{2^{16}}$. In this attack, the trace function is applied to the message to shrink the guessing space.
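A toy Python illustration of the kind of linearity involved (an affine GF(2) MAC, not the real EIA1/EIA3): under a repeated (IV, IK) the key-dependent pad is fixed, so known message/tag pairs combine by XOR into new valid pairs. The sketch forges a fourth tag from three known ones; the actual attack exploits finer structure to forge from just two.

import secrets

# Toy GF(2)-affine MAC: tag(m) = L(m) XOR pad, with L linear over GF(2) and
# pad fixed by the repeated (IV, IK). Mimics only the structural property
# being exploited; it is NOT the real EIA1/EIA3.
N = 32

def L(key, m):                 # linear map: XOR of masks picked by bits of m
    t = 0
    for i in range(N):
        if (m >> i) & 1:
            t ^= key[i]
    return t

def mac(key, pad, m):
    return L(key, m) ^ pad

key = [secrets.randbits(N) for _ in range(N)]
pad = secrets.randbits(N)      # fixed because (IV, IK) repeats

msgs = [secrets.randbits(N) for _ in range(3)]
tags = [mac(key, pad, m) for m in msgs]

# Pads cancel pairwise, so the XOR of an odd number of valid (message, tag)
# pairs is again valid under the same (IV, IK): a forgery without the key.
m4 = msgs[0] ^ msgs[1] ^ msgs[2]
t4 = tags[0] ^ tags[1] ^ tags[2]
assert mac(key, pad, m4) == t4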
Last updated:  2013-01-16
Cryptography Using CAPTCHA Puzzles
Abishek Kumarasubramanian, Rafail Ostrovsky, Omkant Pandey, Akshay Wadia
A CAPTCHA is a puzzle that is easy for humans but hard to solve for computers. A formal framework, modelling CAPTCHA puzzles (as hard AI problems), was introduced by Ahn, Blum, Hopper, and Langford (Eurocrypt 2003). Despite their attractive features and wide adoption in practice, the use of CAPTCHA puzzles for general cryptographic applications has been limited. In this work, we explore various ways to formally model CAPTCHA puzzles and their human component and explore new applications for CAPTCHA. We show that by defining CAPTCHA with additional (strong but realistic) properties, it is possible to broaden CAPTCHA applicability, including using it to learn a machine's ``secret internal state.'' To facilitate this, we introduce the notion of a human-extractable CAPTCHA, which we believe may be of independent interest. We show that this type of CAPTCHA yields a \emph{constant round} protocol for \emph{fully} concurrent non-malleable zero-knowledge. To enable this we also define and construct a CAPTCHA-based commitment scheme which admits ``straight line'' extraction. We also explore CAPTCHA definitions in the setting of Universal Composability (UC). We show that there are two (incomparable) ways to model CAPTCHA within the UC framework that lead to different results. In particular, we show that in the so-called \emph{indirect access model}, for every polynomial time functionality $\mathcal{F}$ there exists a protocol that UC-realizes $\mathcal{F}$ using human-extractable CAPTCHA, while for the so-called \emph{direct access model}, UC is impossible, even with the help of human-extractable CAPTCHA. The security of our constructions using human-extractable CAPTCHA is proven against the (standard) class of all polynomial time adversaries. In contrast, most previous works guarantee security only against a very limited class of adversaries, called the \emph{conservative} adversaries.
Last updated:  2014-07-29
A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem
Jintai Ding, Xiang Xie, Xiaodong Lin
We use the learning with errors (LWE) problem to build a new simple and provably secure key exchange scheme. The basic idea of the construction can be viewed as a certain extension of the Diffie-Hellman problem with errors. The mathematical structure behind it comes from the commutativity of computing a bilinear form in two different ways due to the associativity of matrix multiplication: $$(\mathbf{x}^t \times \mathbf{A})\times \mathbf{y}=\mathbf{x}^t \times (\mathbf{A}\times \mathbf{y}),$$ where $\mathbf{x,y}$ are column vectors and $\mathbf{A}$ is a square matrix. We show that our new schemes are more efficient in terms of communication and computation complexity compared with key exchange schemes or key transport schemes via encryption schemes based on the LWE problem. Furthermore, we extend our scheme to the ring learning with errors (RLWE) problem, resulting in small key sizes and better efficiency.
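A minimal numpy sketch of this associativity-based agreement (noise-free, so both sides agree exactly; the actual scheme adds small LWE error terms, after which the two sides agree only approximately and need a rounding/reconciliation step):

import numpy as np

q, n = 12289, 8                        # toy modulus and dimension
rng = np.random.default_rng(1)

A = rng.integers(0, q, (n, n))         # public square matrix
x = rng.integers(0, 3, n)              # Alice's small secret (errors omitted)
y = rng.integers(0, 3, n)              # Bob's small secret

pA = x @ A % q                         # Alice publishes x^t * A
pB = A @ y % q                         # Bob publishes A * y

kA = pA @ y % q                        # Alice computes (x^t A) y
kB = x @ pB % q                        # Bob computes   x^t (A y)
assert kA == kB                        # identical without noise; with LWE
                                       # noise they differ by a small term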
Last updated:  2012-12-10
The k-BDH Assumption Family: Bilinear Map Cryptography from Progressively Weaker Assumptions
Karyn Benson, Hovav Shacham, Brent Waters
Over the past decade bilinear maps have been used to build a large variety of cryptosystems. In addition to new functionality, we have concurrently seen the emergence of many strong assumptions. In this work, we explore how to build bilinear map cryptosystems under progressively weaker assumptions. We propose $k$-BDH, a new family of progressively weaker assumptions that generalizes the decisional bilinear Diffie-Hellman (DBDH) assumption. We give evidence in the generic group model that each assumption in our family is strictly weaker than the assumptions before it. DBDH has been used for proving many schemes secure, notably identity-based and functional encryption schemes; we expect that our $k$-BDH family will lead to generalizations of many such schemes. To illustrate the usefulness of our $k$-BDH family, we construct a family of selectively secure Identity-Based Encryption (IBE) systems based on it. Our system can be viewed as a generalization of the Boneh-Boyen IBE; however, the construction and proof require new ideas to fit the family. We then extend our methods to produce hierarchical IBEs and CCA security, and give a fully secure variant. In addition, we discuss the opportunities and challenges of building new systems under our weaker assumption family.
Last updated:  2012-12-10
Improved (Pseudo) Preimage Attack and Second Preimage Attack on Round-Reduced Grøstl
Jian Zou, Wenling Wu, Shuang Wu, Le Dong
Grøstl is one of the five finalists in the third round of the SHA-3 competition hosted by NIST. In this paper, we use several techniques, such as the subspace preimage attack and the guess-and-determine technique, to improve the pseudo preimage attack on the Grøstl hash function. We present improved pseudo preimage attacks on 5-round Grøstl-256 and 8-round Grøstl-512. The complexities of these two attacks are ($2^{239.90},2^{240.40}$) (in time and memory) and ($2^{499.50},2^{499}$), respectively. Furthermore, we propose a pseudo preimage attack and a pseudo second preimage attack on 6-round Grøstl-256, with complexities ($2^{253.26},2^{253.67}$) and ($2^{251.0},2^{252.0}$), respectively. As far as we know, these are the best known attacks on the round-reduced Grøstl hash function.
Last updated:  2013-07-18
Square root computation over even extension fields
Gora Adj, Francisco Rodríguez-Henríquez
This paper presents a comprehensive study of the computation of square roots over finite extension fields. We propose two novel algorithms for computing square roots over even field extensions of the form $\mathbb{F}_{q^{2}}$, with $q=p^n,$ $p$ an odd prime and $n\geq 1$. Both algorithms have an associated computational cost roughly equivalent to one exponentiation in $\mathbb{F}_{q^{2}}$. The first algorithm is devoted to the case when $q\equiv 1 \bmod 4$, whereas the second one handles the case when $q\equiv 3 \bmod 4$. Numerical comparisons show that the two algorithms presented in this paper are competitive and in some cases more efficient than the square root methods previously known.
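For context, the classical $q \equiv 3 \bmod 4$ building block over the base field is a single exponentiation; a minimal Python sketch follows (the paper's contribution is lifting such computations to $\mathbb{F}_{q^{2}}$ at roughly the cost of one exponentiation there).

# Square roots in F_q for q = 3 (mod 4): sqrt(a) = a^((q+1)/4), since
# (a^((q+1)/4))^2 = a^((q+1)/2) = a * a^((q-1)/2) = a for quadratic residues.
def sqrt_3mod4(a, q):
    assert q % 4 == 3
    r = pow(a, (q + 1) // 4, q)
    return r if r * r % q == a % q else None   # None if a is a non-residue

q = 10007                      # a prime with q = 3 (mod 4)
a = 1234**2 % q
print(sqrt_3mod4(a, q))        # 1234 or q - 1234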
Last updated:  2013-07-18
Generic Related-key Attacks for HMAC
Thomas Peyrin, Yu Sasaki, Lei Wang
In this article we describe new generic distinguishing and forgery attacks in the related-key scenario (using only a single related key) for the HMAC construction. When HMAC uses a k-bit key, outputs an n-bit MAC, and is instantiated with an l-bit inner iterative hash function processing m-bit message blocks where m=k, our distinguishing-R attack requires about 2^{n/2} queries, which improves over the currently best known generic attack complexity 2^{l/2} as soon as l>n. This means that, contrary to the general belief, using wide-pipe hash functions as the internal primitive will not increase the overall security of HMAC in the related-key model when the key size is equal to the message block size. We also present generic related-key distinguishing-H, internal state recovery and forgery attacks. Our method is new and elegant, and uses a simple cycle-size detection criterion. The issue in the HMAC construction (not present in the NMAC construction) comes from the non-independence of the two inner hash layers, and we provide a simple patch in order to avoid this generic attack. Our work finally shows that the choice of the opad and ipad constant values in HMAC is important.
Last updated:  2014-06-24
Fingerprint Tables: A Generalization of Rainbow Tables
Gildas Avoine, Adrien Bourgeois, Xavier Carpent
Cryptanalytic time-memory trade-offs were introduced by Hellman in 1980 in order to perform key-recovery attacks on cryptosystems. A major advance was presented at Crypto 2003 by Oechslin, with the rainbow table variant that outperforms Hellman's seminal work. This paper introduces the fingerprint tables, which drastically reduce the number of false alarms during the attack compared to the rainbow tables. The key point of our technique consists in storing in the tables the fingerprints of the chains instead of their endpoints. The fingerprint tables provide a time-memory trade-off that is about two times faster than the rainbow tables on usual problem sizes. We experimentally illustrate the performance of our technique, and demonstrate that it is faster than Ophcrack, a Windows LM Hash password cracker considered so far to be the fastest one ever implemented.
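A minimal Python sketch of the chain structure common to these trade-offs (a classic rainbow table storing startpoint/endpoint pairs; the paper's fingerprint tables instead store a fingerprint sampled along the chain to cut false alarms, which this sketch does not implement):

import hashlib

# Toy rainbow table over a 16-bit key space (illustrative sizes only).
SPACE, T = 1 << 16, 64                  # key space size, chain length

def h(x):                               # the one-way function to invert
    return int.from_bytes(hashlib.sha256(x.to_bytes(2, 'big')).digest()[:2], 'big')

def reduce_fn(y, col):                  # column-dependent reduction function
    return (y + col) % SPACE

def chain(start):
    x = start
    for col in range(T):
        x = reduce_fn(h(x), col)
    return x

table = {}                              # endpoint -> startpoint
for s in range(0, SPACE, 16):           # cover part of the key space
    table.setdefault(chain(s), s)

def lookup(target):                     # try to invert h on `target`
    for col in range(T - 1, -1, -1):    # guess which column produced target
        x = reduce_fn(target, col)
        for c in range(col + 1, T):
            x = reduce_fn(h(x), c)
        s = table.get(x)
        if s is not None:               # possible hit; rebuild to check
            x = s                       # (this filters the false alarms that
            for c in range(T):          # fingerprints are designed to reduce)
                y = h(x)
                if y == target:
                    return x
                x = reduce_fn(y, c)
    return None

key = 12345
print(lookup(h(key)))                   # often recovers 12345 (or a collision)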
Last updated:  2013-03-08
Proofs of Retrievability with Public Verifiability and Constant Communication Cost in Cloud
Jiawei Yuan, Shucheng Yu
For data storage outsourcing services, it is important to allow data owners to efficiently and securely verify that the storage server stores their data correctly. To address this issue, several proof-of-retrievability (POR) schemes have been proposed, wherein a storage server must prove to a verifier that all of a client's data are stored correctly. While existing POR schemes offer decent solutions addressing various practical issues, they either have a non-trivial (linear or quadratic) communication complexity, or only support private verification, i.e., only the data owner can verify the remotely stored data. It remains open to design a POR scheme that achieves both public verifiability and constant communication cost simultaneously. In this paper, we solve this open problem and propose the first POR scheme with public verifiability and constant communication cost: in our proposed scheme, the message exchanged between the prover and verifier is composed of a constant number of group elements; different from existing private POR constructions, our scheme allows public verification and releases the data owners from the burden of staying online. We achieve this by tailoring and uniquely combining techniques such as constant-size polynomial commitments and homomorphic linear authenticators. Thorough analysis shows that our proposed scheme is efficient and practical. We prove the security of our scheme based on the Computational Diffie-Hellman problem, the Strong Diffie-Hellman assumption and the Bilinear Strong Diffie-Hellman assumption.
Last updated:  2012-12-10
Resilience to Distinguishing Attacks on WG-7 Cipher and Their Generalizations
Guang Gong, Mark Aagaard, Xinxin Fan
The stream cipher WG-7 is a lightweight variant of the well-known Welch-Gong (WG) stream cipher family, targeting resource-constrained devices like RFID tags, smart cards, and wireless sensor nodes. Recently, a distinguishing attack was discovered against the stream cipher WG-7 by Orumiehchiha, Pieprzyk and Steinfeld. In this paper, we extend their work to a general distinguishing attack and suggest criteria to protect the WG stream cipher family from this attack. Our analysis shows that by properly choosing the minimal polynomial of the linear feedback shift register for a WG stream cipher, the general distinguishing attack can be easily thwarted.
Last updated:  2015-12-10
Natural Generalizations of Threshold Secret Sharing
Oriol Farras, Carles Padro, Chaoping Xing, An Yang
We present new families of access structures that, similarly to the multilevel and compartmented access structures introduced in previous works, are natural generalizations of threshold secret sharing. Namely, they admit ideal linear secret sharing schemes over every large enough finite field, they can be described by a small number of parameters, and they have useful properties for the applications of secret sharing. The use of integer polymatroids makes it possible to find many new such families, and it greatly simplifies the proofs for the existence of ideal secret sharing schemes for them.
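As the baseline these families generalize, here is a minimal Shamir $(t, n)$-threshold sketch over a prime field (illustrative only; the multilevel and compartmented generalizations require richer ideal linear schemes):

import secrets

P = 2**31 - 1                          # prime field (toy size)

def share(secret, t, n):
    # random degree-(t-1) polynomial with constant term = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):               # Lagrange interpolation at x = 0
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares suffice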
Last updated:  2016-04-01
Hiding the Input-Size in Secure Two-Party Computation
Yehuda Lindell, Kobbi Nissim, Claudio Orlandi
In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their inputs, while preserving properties like privacy, correctness, and independence of inputs. One security property that has typically not been considered in the past relates to the length or size of the parties' inputs. This is despite the fact that in many cases the size of a party's input can be confidential. The reason for this omission seems to have been the folklore belief that, as with encryption, it is impossible to carry out non-trivial secure computation while hiding the size of the parties' inputs. However, some recent results (e.g., Ishai and Paskin at TCC 2007, Ateniese, De Cristofaro and Tsudik at PKC 2011) showed that it is possible to hide the input size of one of the parties for some limited class of functions, including secure two-party set intersection. This suggests that the folklore belief may not be fully accurate. In this work, we initiate a theoretical study of input-size hiding secure computation, and focus on the two-party case. We present definitions for this task, and deal with the subtleties that arise in the setting where there is no a priori polynomial bound on the parties' input sizes. Our definitional study yields a multitude of classes of input-size hiding computation, depending on whether a single party's input size remains hidden or both parties' input sizes remain hidden, and depending on who receives output and whether the output size is hidden from a party in the case that it does not receive output. We prove feasibility and impossibility results for input-size hiding secure two-party computation. Some of the highlights are as follows: Under the assumption that fully homomorphic encryption (FHE) exists, there exist non-trivial functions (e.g., the millionaire's problem) that can be securely computed while hiding the input size of both parties. Under the assumption that FHE exists, every function can be securely computed while hiding the input size of one party, when both parties receive output (or when the party not receiving output does learn the size of the output). In the case of functions with fixed output length, this implies that every function can be securely computed while hiding one party's input size. There exist functions that cannot be securely computed while hiding both parties' input sizes. This is the first formal proof that, in general, some information about the size of the parties' inputs must be revealed. Our results are in the semi-honest model. The problem of input-size hiding is already challenging in this scenario. We discuss the additional difficulties that arise in the malicious setting and leave this extension for future work.
Last updated:  2012-12-01
Infective Computation and Dummy Rounds: Fault Protection for Block Ciphers without Check-before-Output
Benedikt Gierlichs, Jorn-Marc Schmidt, Michael Tunstall
Implementation attacks pose a serious threat for the security of cryptographic devices and there are a multitude of countermeasures that are used to prevent them. Two countermeasures used in implementations of block ciphers to increase the complexity of such attacks are the use of dummy rounds and redundant computation with consistency checks to prevent fault attacks. In this paper we present several countermeasures based on the idea of infective computation. Our countermeasures ensure that a fault injected into a cipher, dummy, or redundant round will infect the ciphertext such that an attacker cannot derive any information on the secret key being used. This has one clear advantage: the propagation of faults prevents an attacker from being able to conduct any fault analysis on any corrupted ciphertexts. As a consequence, there is no need for any test at the end of an implementation to determine if a fault has been injected and a ciphertext can always be returned.
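A minimal sketch of the infective principle in its simplest duplicated-computation form (a generic illustration, not the paper's dummy-round constructions):

import secrets

def infect(delta):
    # maps 0 to 0 and any nonzero difference to an unpredictable mask;
    # in a real implementation this must itself be branch-free
    return 0 if delta == 0 else secrets.randbits(128)

def protected_encrypt(encrypt, key, m):
    c1 = encrypt(key, m)               # main computation
    c2 = encrypt(key, m)               # redundant computation
    # fault-free: c1 ^ c2 = 0 and c1 is returned unchanged; a fault in either
    # branch infects the output, so there is no "if faulty: abort" check for
    # an attacker to skip, and no exploitable faulty ciphertext escapes
    return c1 ^ infect(c1 ^ c2)

def toy_encrypt(key, m):               # placeholder cipher for demonstration
    return (m * key + 17) & ((1 << 128) - 1)

print(hex(protected_encrypt(toy_encrypt, 123456789, 42)))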
Last updated:  2012-11-30
What is the Effective Key Length for a Block Cipher: an Attack on Every Block Cipher
Jialin Huang, Xuejia Lai
Recently, several important block ciphers have been considered to be broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search: going over the entire key space but performing less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher with success probability one. The data complexity of this attack is the smallest according to the unicity distance. The time complexity can be written as $2^k(1-\epsilon)$ where $\epsilon > 0$ for all block ciphers. Previously, the commonly accepted security bound was the length k of the given master key. From our result we point out that this k-bit security is in fact always overestimated and can never be reached, due to an inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the problem of the upper bound of effective key bits in block ciphers, and show a more accurate bound. A suggestion about the relation between the key size and block size is given: when the number of rounds is fixed, it is better to take a key size equal to the block size. Moreover, the effective key bits of many well-known block ciphers are calculated and analyzed, which also confirms that their security margin is lower than previously thought.
Last updated:  2012-11-30
Mixed-integer Linear Programming in the Analysis of Trivium and Ktantan
Julia Borghoff
In this paper we present a rather new approach for applying mixed-integer optimization to the cryptanalysis of cryptographic primitives. We focus on the stream cipher Trivium, which has been recommended by the eSTREAM stream cipher project, and on the lightweight block cipher Ktantan. Using these examples we explain how the problem of solving a non-linear multivariate Boolean equation system can be formulated as a mixed-integer linear programming problem. Our main focus is the formulation of the mixed-integer programming model (MIP model), which includes, amongst others, the choice of a conversion method to convert the Boolean equations into equations over the reals, different guessing strategies and the selection of binary variables. We apply the commercial solver Cplex to our problems. The results and further possible features of the approach are discussed.
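As a small, self-contained example of the conversion step, here is one standard way to linearize Boolean AND and XOR gates as 0/1 integer-linear constraints (one of several possible encodings; the paper evaluates different conversion methods), checked by brute force:

from itertools import product

# One standard linearization of Boolean gates over 0/1 integer variables:
#   w = x AND y:  w <= x,  w <= y,  w >= x + y - 1
#   z = x XOR y:  z <= x + y,  z >= x - y,  z >= y - x,  z <= 2 - x - y
def and_ok(x, y, w):
    return w <= x and w <= y and w >= x + y - 1

def xor_ok(x, y, z):
    return z <= x + y and z >= x - y and z >= y - x and z <= 2 - x - y

# brute-force check that each constraint set cuts out exactly the gate's graph
for x, y in product((0, 1), repeat=2):
    assert [w for w in (0, 1) if and_ok(x, y, w)] == [x & y]
    assert [z for z in (0, 1) if xor_ok(x, y, z)] == [x ^ y]
print("linearizations match AND/XOR on all inputs")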
Last updated:  2013-03-04
Minkowski sum based lattice construction for multivariate simultaneous Coppersmith's technique and applications to RSA
Yoshinori Aono
We investigate a lattice construction method for the Coppersmith technique for finding small solutions of a modular equation. We consider its variant for simultaneous equations and propose a method to construct a lattice by combining lattices for solving single equations. As an application, we consider new RSA cryptanalyses. Our algorithm can factor an RSA modulus from $\ell \ge 2$ pairs of RSA public exponents with the common modulus when the corresponding secret exponents are smaller than $N^{(9\ell -5)/(12\ell + 4)}$, which improves on the previously best known result by Sarkar and Maitra. For the partial key exposure situation, we can also factor the modulus if $\beta - \delta/2 + 1/4 < (3\ell-1)/(3\ell + 1)$, where $\beta$ and $\delta$ are the bit-lengths $/ \log N$ of the secret exponent and its exposed LSBs, respectively.
Last updated:  2013-01-17
Lecture Notes in Secret Sharing
Carles Padro
These are basically the lecture notes for the short course "Applications of Combinatorics to Information-Theoretic Cryptography", Central European University, Budapest, May-June 2012. With the objective of covering a full course on secret sharing, additional content will be added in subsequent versions of these lecture notes.
Last updated:  2012-11-29
Robust Encryption, Revisited
Pooya Farshim, Benoît Libert, Kenneth G. Paterson, Elizabeth A. Quaglia
We revisit the notions of robustness introduced by Abdalla, Bellare, and Neven (TCC 2010). One of the main motivations for the introduction of strong robustness for public-key encryption (PKE) by Abdalla et al. was to prevent certain types of attack on Sako's auction protocol. We show, perhaps surprisingly, that Sako's protocol is still vulnerable to attacks exploiting robustness problems in the underlying PKE scheme, even when it is instantiated with a \emph{strongly} robust scheme. This demonstrates that current notions of robustness are insufficient even for one of their most natural applications. To address this and other limitations in existing notions, we introduce a series of new robustness notions for PKE and explore their relationships. In particular, we introduce \emph{complete} robustness, our strongest new notion of robustness, and give a number of constructions for completely robust PKE schemes.
Last updated:  2013-02-12
Collision Attacks on Up to 5 Rounds of SHA-3 Using Generalized Internal Differentials
Itai Dinur, Orr Dunkelman, Adi Shamir
On October 2nd, 2012, NIST announced its selection of the Keccak scheme as the new SHA-3 hash standard. In this paper we present the first published collision-finding attacks on reduced-round versions of Keccak-384 and Keccak-512, providing actual collisions for 3-round versions, and describing an attack which is $2^{45}$ times faster than birthday attacks for 4-round Keccak-384. For Keccak-256, we increase the number of rounds which can be attacked to 5. All these results are based on a generalized {\it internal differential attack} (introduced by Peyrin at Crypto 2010), which we use to map a large number of Keccak inputs into a relatively small subset of possible outputs with a surprisingly large probability. In such a \textit{squeeze attack} it is easier to find random collisions in the reduced target subset by a standard birthday argument.
Last updated:  2012-11-28
Fully Secure Unbounded Inner-Product and Attribute-Based Encryption
Tatsuaki Okamoto, Katsuyuki Takashima
In this paper, we present the first inner-product encryption (IPE) schemes that are unbounded in the sense that the public parameters do not impose additional limitations on the predicates and attributes used for encryption and decryption keys. All previous IPE schemes were bounded, i.e., they have a bound on the size of predicates and attributes given the public parameters fixed at setup. The proposed unbounded IPE schemes are fully (adaptively) secure and fully attribute-hiding in the standard model under a standard assumption, the decisional linear (DLIN) assumption. In our unbounded IPE schemes, the inner-product relation is generalized, so that the two vectors of the inner product can have different sizes; this provides a great improvement in efficiency for many applications. We also present the first fully secure unbounded attribute-based encryption (ABE) schemes, whose security is proven under the DLIN assumption in the standard model. To achieve these results, we develop novel techniques, indexing and consistent randomness amplification, on top of the (extended) dual system encryption technique and dual pairing vector spaces (DPVS).
Last updated:  2014-03-13
Fast Cryptography in Genus 2
Joppe W. Bos, Craig Costello, Huseyin Hisil, Kristin Lauter
In this paper we highlight the benefits of using genus 2 curves in public-key cryptography. Compared to the standardized genus 1 curves, or elliptic curves, arithmetic on genus 2 curves is typically more involved but allows us to work with moduli of half the size. We give a taxonomy of the best known techniques to realize genus 2 based cryptography, which includes fast formulas on the Kummer surface and efficient 4-dimensional GLV decompositions. By studying different modular arithmetic approaches on these curves, we present a range of genus 2 implementations. On a single core of an Intel Core i7-3520M (Ivy Bridge), our implementation on the Kummer surface breaks the 125 thousand cycle barrier which sets a new software speed record at the 128-bit security level for constant-time scalar multiplications compared to all previous genus 1 and genus 2 implementations.
Last updated:  2014-08-27
Blackbox Traceable CP-ABE: How to Catch People Leaking Their Keys by Selling Decryption Devices on eBay
Zhen Liu, Zhenfu Cao, Duncan S. Wong
In the context of Ciphertext-Policy Attribute-Based Encryption (CP-ABE), if a decryption device associated with an attribute set $S_{\cal D}$ appears on eBay, and is alleged to be able to decrypt any ciphertexts with policies satisfied by $S_{\cal D}$, no one, including the CP-ABE authorities, can identify the malicious user(s) who built such a decryption device using their key(s). This has been known as a major practicality concern in CP-ABE applications, for example, providing fine-grained access control on encrypted data. Due to the nature of CP-ABE, users get decryption keys from authorities associated with attribute sets. If there exist two or more users with attribute sets that are supersets of $S_{\cal D}$, existing CP-ABE schemes cannot distinguish which user is the malicious one who built and sold such a decryption device. In this paper, we extend the notion of CP-ABE to support \emph{Blackbox Traceability} and propose a concrete scheme which is able to identify a user whose key has been used in building a decryption device from among multiple users whose keys are associated with attribute sets that are all supersets of $S_{\cal D}$. The scheme is efficient with sub-linear overhead, and when compared with the very recent (non-traceable) CP-ABE scheme due to Lewko and Waters in Crypto 2012, this new scheme can be considered an extension with the property of \emph{fully collusion-resistant blackbox traceability} added, i.e., an adversary can access an arbitrary number of keys when building a decryption device while the new tracing algorithm can still identify at least one particular key which must have been used for building the underlying decryption device. We show that this new scheme is secure against adaptive adversaries in the standard model, and is highly expressive by supporting any monotonic access structures. Its additional traceability property is also proven against adaptive adversaries in the standard model. Of independent interest, we also consider another scenario, which we call ``\emph{found-in-the-wild}''. In this scenario, a decryption device is found, for example, on a black market, and reported to an authority (e.g., a law enforcement agency). The decryption device is found to be able to decrypt ciphertexts with a certain policy, say $\mathbb{A}$, while the associated attribute set $S_{\cal D}$ is \textbf{missing}. In this found-in-the-wild scenario, we show that the Blackbox Traceable CP-ABE scheme proposed in this paper is still able to find the malicious users whose keys have been used for building the decryption device, and our scheme can achieve \emph{selective} traceability in the standard model under this scenario.
Last updated:  2012-11-28
Construction of Differential Characteristics in ARX Designs -- Application to Skein
Gaetan Leurent
In this paper, we study differential attacks against ARX schemes. We build upon the generalized characteristics of de Cannière and Rechberger and the multi-bit constraints of Leurent. We describe a more efficient way to propagate multi-bit constraints, which allows us to use the complete set of $2^{32}$ 2.5-bit constraints instead of the reduced sets used by Leurent. As a result, we are able to build complex non-linear differential characteristics for reduced versions of the hash function Skein. We present several characteristics for use in various attack scenarios; this results in attacks with relatively low complexity in relatively strong settings. In particular, we show practical free-start and semi-free-start collision attacks for 20 rounds and 12 rounds of Skein-256, respectively. To the best of our knowledge, these are the first examples of complex differential trails built for pure ARX designs. We believe this work is important for assessing the security of ARX designs against differential cryptanalysis. Our improved tools will be made publicly available with the final version of this paper.
Last updated:  2012-11-28
False Negative probabilities in Tardos codes
Antonino Simone, Boris Skoric
Forensic watermarking is the application of digital watermarks for the purpose of tracing unauthorized redistribution of content. The most powerful type of attack on watermarks is the collusion attack, in which multiple users compare their differently watermarked versions of the same content. Collusion-resistant codes have been developed against these attacks. One of the most famous such codes is the Tardos code. It has the asymptotically optimal property that it can resist c attackers with a code of length proportional to c^2. Determining error rates for the Tardos code and its various extensions and generalizations turns out to be a nontrivial problem. In recent work we developed an approach called the Convolution and Series Expansion (CSE) method to accurately compute false positive accusation probabilities. In this paper we extend the CSE method in order to make it possible to compute false negative accusation probabilities as well.
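For concreteness, here is a minimal Python sketch (with one common textbook parameter choice, not the paper's CSE machinery) of how a symmetric Tardos code is generated and how a user is scored for accusation:

```python
import math, random

def gen_code(n_users, m, c):
    """Generate biases p_i from the arcsine density (cut off at t) and a
    binary codeword per user; code length m is taken proportional to c^2."""
    t = 1.0 / (300 * c)   # cutoff; one standard (not unique) choice
    p = [math.sin(random.uniform(math.asin(math.sqrt(t)),
                                 math.asin(math.sqrt(1 - t)))) ** 2
         for _ in range(m)]
    X = [[1 if random.random() < p_i else 0 for p_i in p]
         for _ in range(n_users)]
    return X, p

def score(x, y, p):
    """Symmetric accusation score of codeword x against pirate output y."""
    s = 0.0
    for xi, yi, pi in zip(x, y, p):
        g = math.sqrt((1 - pi) / pi) if xi == 1 else -math.sqrt(pi / (1 - pi))
        s += g if yi == 1 else -g
    return s

X, p = gen_code(n_users=10, m=5000, c=3)
print(score(X[0], X[0], p))   # a user compared against itself: large score
print(score(X[1], X[0], p))   # an unrelated innocent user: score near 0
```

A user is accused when the score exceeds a threshold; the false positive and false negative probabilities the paper computes are exactly the tail probabilities of this score for innocent and guilty users.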
Last updated:  2013-02-27
Estimating the Φ(n) of Upper/Lower Bound in its RSA Cryptosystem
Chenglian Liu, Ziwei Ye
The RSA-768 (232 decimal digits) was factored by Kleinjung et al. on December 12, 2009; the RSA-704 (212 decimal digits) was factored by Bai et al. on July 2, 2012; and the RSA-200 (663 bits) was factored by Bahr et al. on May 9, 2005. To date, nobody has succeeded in breaking the RSA-210 (696 bits) challenge. In this paper, we discuss an estimation method to approach the lower/upper bound of Φ(n) in the RSA parameters. Our contribution may help researchers narrow down Φ(n) and tackle the RSA challenges sooner.
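As a generic sanity check (elementary bounds, not the paper's estimation method): for a balanced modulus n = pq with p < q < 2p, AM-GM gives p + q ≥ 2√n, and the balance condition gives p + q < (3/√2)√n, so Φ(n) = n − (p+q) + 1 is already pinned to a window of width about 0.12√n. A short Python illustration with hypothetical small primes:

```python
import math

p, q = 1009, 1013              # hypothetical example primes, p < q < 2p
n = p * q
phi = (p - 1) * (q - 1)        # = n - (p + q) + 1

upper = n - 2 * math.isqrt(n) + 1                           # from p + q >= 2*sqrt(n)
lower = n - math.ceil(3 / math.sqrt(2) * math.sqrt(n)) + 1  # from q < 2p

assert lower < phi <= upper
print(lower, phi, upper)       # 1019973 1020096 1020098
```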
Last updated:  2012-11-28
Uniform Compression Functions Can Fail to Preserve “Full” Entropy
Daniel R. L. Brown
A draft NIST standard defines an output to have “full” entropy if its min-entropy is, proportionally, very close to the min-entropy of a uniform distribution. A function is uniform if all its preimages have the same size. This report proves that the output of any uniform compression function can fail to have full entropy, even when the input has full entropy.
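A toy numeric illustration of the flavour of the result (an invented example, not the report's construction): a uniform 2-bit-to-1-bit function applied to an input whose min-entropy is about 87% of full can produce an output whose min-entropy is only about 74% of full:

```python
import math

f = {0b00: 0, 0b01: 0, 0b10: 1, 0b11: 1}    # uniform: every preimage has size 2
pr_in = {0b00: 0.3, 0b01: 0.3, 0b10: 0.2, 0b11: 0.2}

def min_entropy(dist):
    return -math.log2(max(dist.values()))

pr_out = {}
for x, px in pr_in.items():
    pr_out[f[x]] = pr_out.get(f[x], 0.0) + px

print(min_entropy(pr_in) / 2)    # ~0.868 : input min-entropy / 2 bits
print(min_entropy(pr_out) / 1)   # ~0.737 : output min-entropy / 1 bit
```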
Last updated:  2012-12-03
PRE: Stronger Security Notion and Efficient Construction with New Property
Jiang Zhang, Zhenfeng Zhang, Yu Chen
In a proxy re-encryption (PRE) scheme, a proxy is given a re-encryption key and has the ability to translate a ciphertext under one key into a ciphertext of the same message under a different key, without learning anything about the message encrypted under either key. PREs have been widely used in many exciting applications, such as email forwarding and law enforcement. Observing the applications of PREs, we find that a PRE receiver needs the ability, just as provided by public-key encryption with non-interactive opening, to non-interactively and efficiently convince third parties of what he obtains from a particular (transformed) ciphertext, while still keeping his secret key and other ciphertexts secure. To meet this practical requirement, we first introduce proxy re-encryption with non-interactive opening (PRENO), and formally define the notions of security against \textit{chosen ciphertext attacks} (CCA) and \textit{proof soundness}. Our security model is natural and strong since we allow the CCA adversary to adaptively choose public keys for malicious users (i.e., a chosen key model); a scheme secure in previous models (i.e., knowledge of secret key models) is not necessarily secure in our model. We then present an efficient PRENO scheme which satisfies our security notions based on the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. Compared with two previous PRE schemes, our scheme is competitive in several respects. First, its CCA security is proved in a strong security model under a well-studied assumption in the standard model. Second, it has good overall performance in terms of ciphertext length and computational cost. Third, it is the first to provide non-interactive opening for PRE schemes.
Last updated:  2012-11-28
Virtual isomorphisms of ciphers: is AES secure against differential / linear attack?
Alexander Rostovtsev
In [eprint.iacr.org/2009/117], a method of virtual isomorphisms of ciphers was proposed for cryptanalysis: a cipher is vulnerable to an attack iff an isomorphic cipher is vulnerable to this attack. That method is based on conjugation, and it is not practical because all round operations except one become nonlinear. A new isomorphism of AES is proposed here; its image IAES has only one nonlinear operation, IXOR, the isomorphic image of the XOR of 5 bytes. Maximal probabilities of byte differentials are increased about 10-11 times, and maximal biases of linear sums are increased about 3.6 times compared to the original AES. IAES possesses a computable family of differentials of IXOR with two active input bytes, zero output difference and probability 1. The zero output difference decreases the rate of multiplication of active nonlinearities in a differential characteristic of IAES.
Last updated:  2012-11-26
Asynchronous Physical Unclonable Functions – AsyncPUF
Julian Murphy
Physically Unclonable Functions (PUFs) exploit the physical characteristics of silicon and provide an alternative to storing digital encryption keys in non-volatile memory. A PUF maps a unique set of digital inputs to a corresponding set of digital outputs. In this paper, the use of asynchronous logic and design techniques to implement PUFs is advocated, yielding Asynchronous Physically Unclonable Functions (APUFs). A new method of using asynchronous rings to implement PUFs, called ASYNCPUF, is described, which features inherent field programmability. It is a novel and holistic PUF design compared to the existing state of the art, as it naturally addresses the two challenges that have prevented widespread adoption of PUFs to date: robustness and entropy. Results of electrical simulation in a 90-nanometre lithography process are presented and discussed.
Last updated:  2012-11-26
Breaking Another Quasigroup-Based Cryptographic Scheme
Markus Dichtl, Pascale Böffgen
In their paper ``A Quasigroup Based Random Number Generator for Resource Constrained Environments'', the authors Matthew Battey and Abhishek Parakh propose the pseudo random number generator LOQG PRNG 256. We show several highly efficient attacks on LOQG PRNG 256.
Last updated:  2017-08-18
Design of Secure Image Transmission in MANET using Number Theory Based Image Compression and Quasigroup Encryption (NTICQE) Algorithm
Munivel E, Rajeswari Mukesh
Image compression and image encryption are pivotal to proper storage and transmission of images over MANET. Simultaneous image compression and encryption aims at achieving enhanced bandwidth utilization and security at the same time. The Number Theory based Image Compression and Quasigroup Encryption (NTICQE) algorithm employs the number theoretic paradigm of the Chinese Remainder Theorem, together with quasigroup encryption, to solve congruencies and hence realize the twin ideals of compression and encryption simultaneously. The quasigroup encryptor has very good data-scrambling properties and therefore has potential uses in symmetric cryptography.
Last updated:  2012-11-26
Does Counting Still Count? Revisiting the Security of Counting based User Authentication Protocols against Statistical Attacks
Hassan Jameel Asghar, Shujun Li, Ron Steinfeld, Josef Pieprzyk
At NDSS 2012, Yan et al. analyzed the security of several challenge-response type user authentication protocols against passive observers, and proposed a generic counting based statistical attack to recover the secret of some counting based protocols given a number of observed authentication sessions. Roughly speaking, the attack is based on the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects when the responses are taken into account. Although they mentioned that a protocol susceptible to this attack should minimize this difference, they gave few details as to how this can be achieved, barring a few suggestions. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, which enables us to describe a method to theoretically estimate a lower bound on the number of sessions a protocol can be safely used against the attack. Our results include 1) two proposed fixes to make counting protocols practically safe against the attack at the cost of usability, 2) the observation that the attack can be used on non-counting based protocols too as long as challenge generation is contrived, and 3) two main design principles for user authentication protocols which can be considered as extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting based protocols to determine their susceptibility to this attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
Last updated:  2013-06-12
Digital Signatures with Minimal Overhead from Indifferentiable Random Invertible Functions
Eike Kiltz, Krzysztof Pietrzak, Mario Szegedy
In a digital signature scheme with message recovery, rather than transmitting the message $m$ and its signature $\sigma$, a single enhanced signature $\tau$ is transmitted. The verifier is able to recover $m$ from $\tau$ and at the same time verify its authenticity. The two most important parameters of such a scheme are its security and overhead $|\tau|-|m|$. A simple argument shows that for any scheme with ``$n$ bits security" $|\tau|-|m|\ge n$, i.e., the overhead is lower bounded by the security parameter $n$. Currently, the best known constructions in the random oracle model are far from this lower bound requiring an overhead of $n+\log q_h$, where $q_h$ is the number of queries to the random oracle. In this paper we give a construction which basically matches the $n$ bit lower bound. We propose a simple digital signature scheme with $n+o(\log q_h)$ bits overhead, where $q_h$ denotes the number of random oracle queries. Our construction works in two steps. First, we propose a signature scheme with message recovery having optimal overhead in a new ideal model, the random invertible function model. Second, we show that a four-round Feistel network with random oracles as round functions is tightly "public-indifferentiable'' from a random invertible function. At the core of our indifferentiability proof is an almost tight upper bound for the expected number of edges of the densest "small'' subgraph of a random Cayley graph, which may be of independent interest.
Last updated:  2012-11-26
Fixed Argument Pairing Inversion on Elliptic Curves
Sungwook Kim, Jung Hee Cheon
Let $E$ be an elliptic curve over a finite field ${\mathbb F}_q$, where $q$ is a prime power, let $r$ be a prime dividing $\#E({\mathbb F}_q)$, and let $k$ be the smallest positive integer satisfying $r \mid \Phi_k(q)$, called the embedding degree. Then a bilinear map $t: E({\mathbb F}_q)[r] \times E({\mathbb F}_{q^k})/rE({\mathbb F}_{q^k}) \rightarrow {\mathbb F}_{q^k}^*$ is defined, called the Tate pairing; the Ate pairing and other variants are obtained by reducing the domain of each argument and raising the result to some power. In this paper we consider the {\em Fixed Argument Pairing Inversion (FAPI)} problem for the Tate pairing and its variants. In 2012, considering FAPI for the Ate$_i$ pairing, Kanayama and Okamoto formulated the {\em Exponentiation Inversion (EI)} problem. However, their definition gives a somewhat vague description of the hardness of EI. We point out that EI as described can be easily solved, and hence clarify the description so that the problem does capture the actual hardness for the prescribed domain of the given pairings. Next we show that inverting the Ate pairing (including other variants of the Tate pairing) defined on the smaller domain is neither easier nor harder than inverting the Tate pairing defined on the larger domain. This is very interesting because it is commonly believed that the structure of the Ate pairing is so simple and good (that is, the Miller length is short, and the solution domain is small and has an algebraic structure induced by the Frobenius map) that it may leak some information, so that attackers might find further approaches to solving FAPI for the Ate pairing that differ from the case of the Tate pairing.
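For concreteness, the embedding degree in this setup is simply the multiplicative order of $q$ modulo $r$ (equivalently, the smallest $k$ with $r \mid q^k - 1$); a tiny Python sketch with toy values:

```python
def embedding_degree(q, r):
    """Smallest k with r | q^k - 1, i.e. the order of q mod r.
    Assumes gcd(q, r) = 1; toy values below, not cryptographic sizes."""
    k, t = 1, q % r
    while t != 1:
        t = (t * q) % r
        k += 1
    return k

assert embedding_degree(11, 7) == 3   # indeed 7 | 11^3 - 1 = 1330
```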
Last updated:  2012-11-21
Security Evaluation of Rakaposhi Stream Cipher
Mohammad Ali Orumiehchiha, Josef Pieprzyk, Elham Shakour, Ron Steinfeld
Rakaposhi is a synchronous stream cipher that uses three main components: a non-linear feedback shift register (NLFSR), a dynamic linear feedback shift register (DLFSR) and a non-linear filtering function ($NLF$). The NLFSR consists of 128 bits and is initialised by the secret key $K$. The DLFSR holds 192 bits and is initialised by an initial vector ($IV$). $NLF$ takes 8-bit inputs and returns a single output bit. This work identifies weaknesses and properties of the cipher. The main observation is that the initialisation procedure has the so-called sliding property, which can be used to launch distinguishing and key-recovery attacks. The distinguisher needs four observations of related $(K,IV)$ pairs. In the proposed related-key attack, the number of related $(K,IV)$ pairs is $2^{(128+192)/4}$, and the key-recovery algorithm discovers the secret key $K$ after observing $2^{9}$ related $(K,IV)$ pairs. Further, the cipher is studied when the registers enter short cycles. When the NLFSR is set to all ones, the cipher degenerates into a linear feedback shift register with a non-linear filter; consequently, the initial state (and hence the secret key and $IV$) can be recovered with complexity $2^{63.87}$. If the DLFSR is set to all zeros, $NLF$ reduces to a low-nonlinearity filter function. As a result, the cipher is insecure: the adversary can distinguish it from a random cipher after $2^{17}$ observations of keystream bits, and a key-recovery algorithm finds the secret key with complexity $2^{54}$.
Last updated:  2014-02-26
Privacy Preserving Revocable Predicate Encryption Revisited
Kwangsu Lee, Intae Kim, Seong Oun Hwang
Predicate encryption (PE), which provides both access control and privacy of ciphertexts, is a new paradigm of public-key encryption. An important application of PE is a searchable encryption system in cloud storage, where it enables a client to securely outsource the search of a keyword on encrypted data without revealing the keyword to the cloud server. One practical issue of PE is to devise an efficient revocation method to revoke a user when the secret key of the user is compromised. Privacy preserving revocable PE (RPE) can provide not only revocation, but also the privacy of revoked users. In this paper, we first define two new security models of privacy preserving RPE: strongly full-hiding security and weakly full-hiding security. Strongly full-hiding security provides the full privacy of ciphertexts against outside and inside adversaries, whereas weakly full-hiding security provides the full privacy of ciphertexts against an outside adversary who cannot decrypt the challenge ciphertext. Next, we propose a general RPE construction from any PE scheme, and prove its security in the weakly full-hiding security model. Our generic RPE scheme is efficient since the number of ciphertext elements is not proportional to the number of users in a receiver set. Additionally, our RPE scheme can support polynomial-size circuits if a recently proposed FE scheme for polynomial-size circuits is used as the underlying PE scheme.
Last updated:  2012-11-21
Refine the Concept of Public Key Encryption with Delegated Search
Qiang Tang, Yuanjie Zhao, Xiaofeng Chen, Hua Ma
We revisit public key encryption with delegated keyword search (PKEDS), a concept proposed by Ibraimi et al. A PKEDS scheme allows a receiver to authorize third-party server(s) to search in two ways: either according to a message chosen by the server itself or according to a trapdoor sent by the receiver. We show that the existing formulation has some defects and that the proposed scheme is unnecessarily inefficient. Based on our analysis, we present a refined formulation of the primitive with a new security model. We then propose a new PKEDS scheme, which is proven secure and much more efficient than the original scheme by Ibraimi et al.
Last updated:  2012-11-21
How powerful are the DDH hard groups?
Periklis A. Papakonstantinou, Charles W. Rackoff, Yevgeniy Vahlis
The question of whether Identity-Based Encryption (IBE) can be based on the Decisional Diffie-Hellman (DDH) assumption is one of the most prominent questions in cryptography related to DDH. We study limitations on the use of the DDH assumption in cryptographic constructions, and show that it is impossible to construct a secure Identity-Based Encryption system using, in a black box way, only the DDH (or similar) assumption about a group. Our impossibility result is set in the generic groups model, where we describe an attack on any IBE construction that relies on oracle access to the group operation of randomly labelled group elements -- a model that naturally formalizes DDH hardness. The vast majority of existing separation results typically give separation from general primitives, whereas we separate a primitive from a class of number theoretic hardness assumptions. Accordingly, we face challenges in creating an attack algorithm that will work against constructions which leverage the underlying algebraic structure of the group. In fact, we know that this algebraic structure is powerful enough to provide generic constructions for several powerful primitives, including oblivious transfer and chosen ciphertext secure public-key cryptosystems (note that an IBE generalizes such systems). Technically, we explore statistical properties of the group algebra associated with a DDH oracle, which can be of independent interest.
Last updated:  2013-02-17
Round-Efficient Concurrently Composable Secure Computation via a Robust Extraction Lemma
Vipul Goyal, Huijia Lin, Omkant Pandey, Rafael Pass, Amit Sahai
We consider the problem of constructing protocols for secure computation that achieve strong concurrent and composable notions of security in the plain model. Unfortunately, UC-secure computation protocols are impossible in this setting, but the Angel-Based Composable Security notion offers a promising alternative. Until now, however, under standard (polynomial-time) assumptions, only protocols with polynomially many rounds were known to exist. In this work, we give the first $\tilde{O}(\log n)$-round secure computation protocol in the plain model that achieves angel-based composable security in the concurrent setting, under standard assumptions. We do so by constructing the first $\tilde{O}(\log n)$-round CCA-secure commitment protocol. Our CCA-secure commitment protocol is secure based on the minimal assumption that one-way functions exist. A central tool in obtaining our result is a new \emph{robust concurrent extraction lemma} that we introduce and prove, based on the minimal assumption that one-way functions exist. This robust concurrent extraction lemma shows how to build concurrent extraction procedures that work even in the context of an ``external'' protocol that cannot be rewound by the extractor. We believe this lemma can be used to simplify many existing works on concurrent security, and is of independent interest. In fact, our lemma, when used in conjunction with the concurrent-simulation schedule of Pass and Venkitasubramaniam (TCC'08), also yields a constant-round construction based additionally on the existence of quasi-polynomial time secure one-way functions.
Last updated:  2014-02-10
TAAC: Temporal Attribute-based Access Control for Multi-Authority Cloud Storage Systems
Kan Yang, Zhen Liu, Zhenfu Cao, Xiaohua Jia, Duncan S. Wong, Kui Ren
Data access control is an effective way to ensure data security in the cloud. Due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-based Encryption (CP-ABE), as a promising technique for access control of encrypted data, is very suitable for access control in cloud storage systems due to its high efficiency and expressiveness. However, the existing CP-ABE schemes cannot be directly applied to data access control for cloud storage systems because of the attribute revocation problem. In this paper, we consider the problem of attribute revocation in multi-authority cloud storage systems where the users' attributes come from different domains, each of which is managed by a different authority. We propose TAAC (Temporal Attribute-based Access Control), an efficient data access control scheme for multi-authority cloud storage systems, where the authorities are independent from each other and no central authority is needed. TAAC can efficiently achieve temporal access control on the attribute level rather than on the user level. Moreover, different from the existing schemes with attribute revocation functionality, TAAC does not require re-encryption of any ciphertext when attribute revocation happens, which greatly improves the efficiency of attribute revocation. The analysis results show that TAAC is highly efficient, scalable, and flexible for applications in practice.
Last updated:  2015-03-04
Formal analysis of privacy in Direct Anonymous Attestation schemes
Ben Smyth, Mark D. Ryan, Liqun Chen
This article introduces a definition of privacy for Direct Anonymous Attestation schemes. The definition is expressed as an equivalence property which is suited to automated reasoning using Blanchet's ProVerif. The practicality of the definition is demonstrated by analysing the RSA-based Direct Anonymous Attestation protocol by Brickell, Camenisch & Chen. The analysis discovers a vulnerability in the RSA-based scheme which can be exploited by a passive adversary and, under weaker assumptions, corrupt administrators. A security fix is identified and the revised protocol is shown to satisfy our definition of privacy.
Last updated:  2013-02-25
A Robust and Plaintext-Aware Variant of Signed ElGamal Encryption
Yannick Seurin, Joana Treger
Adding a Schnorr signature to ElGamal encryption is a popular proposal aiming at thwarting chosen-ciphertext attacks by rendering the scheme plaintext-aware. However, there is no known security proof for the resulting scheme, at least not in a weaker model than the one obtained by combining the Random Oracle Model (ROM) and the Generic Group Model (Schnorr and Jakobsson, ASIACRYPT 2000). In this paper, we propose a very simple modification to Schnorr-Signed ElGamal encryption such that the resulting scheme is semantically secure under adaptive chosen-ciphertext attacks (IND-CCA2-secure) in the ROM under the Decisional Diffie-Hellman assumption. In fact, we even prove that our new scheme is plaintext-aware in the ROM as defined by Bellare et al. (CRYPTO'98). Interestingly, we also observe that Schnorr-Signed ElGamal is not plaintext-aware (again, for the definition of Bellare et al.) under the Computational Diffie-Hellman assumption. We show that our new scheme additionally achieves anonymity as well as robustness, a notion formalized by Abdalla et al. (TCC 2010) which captures the fact that it is hard to create a ciphertext that is valid under two different public keys. Finally, we study the hybrid variant of our new proposal, and show that it is IND-CCA2-secure in the ROM under the Computational Diffie-Hellman assumption when used with a symmetric encryption scheme satisfying the weakest security notion, namely ciphertext indistinguishability under one-time attacks (IND-OT-security).
Last updated:  2012-11-21
Search in Encrypted Data: Theoretical Models and Practical Applications
Qiang Tang
Recently, the concept of Search in Encrypted Data (SED) has become a highlight in cryptography. A SED scheme enables a client to have third-party server(s) to perform certain search functionalities on his encrypted data. In this book chapter, we aim at conducting a systematic study on SED schemes. Firstly, we describe three application scenarios and identify the desirable security requirements. Secondly, we provide two orthogonal categorizations and review the related security models for each category of SED schemes. Thirdly, we analyze the practical issues related to SED schemes and identify some future research directions.
Last updated:  2012-11-21
A Measure of Dependence for Cryptographic Primitives Relative to Ideal Functions
Daniel Smith-Tone, Cristina Tone
In this work we present a modification of a well-established measure of dependence appropriate for the analysis of stopping times for adversarial processes on cryptographic primitives. We apply this measure to construct generic criteria for the ideal behavior of fixed functions in both the random oracle and ideal permutation setting. More significantly, we provide a nontrivial extension of the notion of hash function indifferentiability, transporting the theory from the status of providing security arguments for protocols utilizing ideal primitives into the more realistic setting of protocol assurance with fixed functions. The methodology this measure introduces to indifferentiability analysis connects the security of a hash function with an indifferentiable mode to the security of the underlying compression function in a quantitative way; thus, we prove that dependence results on cryptographic primitives provide a direct means of determining the practical resistance or vulnerability of protocols employing such primitives.
Last updated:  2013-10-04
Galindo-Garcia Identity-Based Signature, Revisited
Sanjit Chatterjee, Chethan Kamath, Vikas Kumar
In Africacrypt 2009, Galindo and Garcia proposed a lightweight identity-based signature (IBS) scheme based on the Schnorr signature. The construction is simple and claimed to be the most efficient IBS to date. The security is based on the discrete-log assumption, and the security argument consists of two reductions: B1 and B2, both of which use the multiple-forking lemma to solve the discrete-log problem (DLP). In this work, we revisit the security argument of the original paper. Our contributions are twofold: (i) we identify several problems in the original argument, and (ii) we provide a detailed new security argument which allows significantly tighter reductions. In particular, we show that the reduction B1 fails in the standard security model for IBS, while the reduction B2 is incomplete. To remedy these problems, we adopt a two-pronged approach. First, we sketch ways to fill the gaps by making minimal changes to the structure of the original security argument; then, we provide a new security argument. The new argument consists of three reductions: R1, R2 and R3, and in each of them, solving the DLP is reduced to breaking the IBS. R1 uses the general forking lemma together with the programming of the random oracles and Coron's technique. Reductions R2 and R3, on the other hand, use the multiple-forking lemma along with the programming of the random oracles. We show that the reductions R1 and R2 are significantly tighter than their original counterparts.
Last updated:  2013-03-22
Simple, Efficient and Strongly KI-Secure Hierarchical Key Assignment Schemes
Eduarda S. V. Freire, Kenneth G. Paterson, Bertram Poettering
Hierarchical Key Assignment Schemes can be used to enforce access control policies by cryptographic means. In this paper, we present a new, enhanced security model for such schemes. We also give simple, efficient, and strongly-secure constructions for Hierarchical Key Assignment Schemes for arbitrary hierarchies using pseudorandom functions and forward-secure pseudorandom generators. We compare instantiations of our constructions with state-of-the-art Hierarchical Key Assignment Schemes, demonstrating that our new schemes possess an attractive trade-off between storage requirements and efficiency of key derivation.
Last updated:  2012-11-20
Impossibility Results for Indifferentiability with Resets
Atul Luykx, Elena Andreeva, Bart Mennink, Bart Preneel
The indifferentiability framework of Maurer, Renner, and Holenstein (MRH) has gained immense popularity in recent years and has proved to be a powerful way to argue security of cryptosystems that enjoy proofs in the random oracle model. Recently, however, Ristenpart, Shacham, and Shrimpton (RSS) showed that the composition theorem of MRH has a more limited scope than originally thought, and that extending its scope required the introduction of reset-indifferentiability, a notion which no practical domain extenders satisfy with respect to random oracles. In light of the results of RSS, we set out to rigorously tackle the specifics of indifferentiability and reset-indifferentiability by viewing the notions as special cases of a more general definition. Our contributions are twofold. Firstly, we provide the necessary formalism to refine the notion of indifferentiability regarding composition. By formalizing the definition of stage minimal games we expose new notions lying in between regular indifferentiability (MRH) and reset-indifferentiability (RSS). Secondly, we answer the open problem of RSS by showing that it is impossible to build any domain extender which is reset-indifferentiable from a random oracle. This result formally confirms the intuition that reset-indifferentiability is too strong of a notion to be satisfied by any hash function. As a consequence we look at the weaker notion of single-reset-indifferentiability, yet there as well we demonstrate that there are no ``meaningful'' domain extenders which satisfy this notion. Not all is lost though, as we also view indifferentiability in a more general setting and point out the possibility for different variants of indifferentiability.
Last updated:  2012-11-20
Protocols for Multiparty Coin Toss With Dishonest Majority
Amos Beimel, Eran Omri, Ilan Orlov
Coin-tossing protocols are protocols that generate a random bit with uniform distribution. These protocols are used as a building block in many cryptographic protocols. Cleve [STOC 1986] has shown that if at least half of the parties can be malicious, then, in any $r$-round coin-tossing protocol, the malicious parties can cause a bias of $\Omega(1/r)$ to the bit that the honest parties output. However, for more than two decades the best known protocols had bias $t/\sqrt{r}$, where $t$ is the number of corrupted parties. Recently, in a surprising result, Moran, Naor, and Segev [TCC 2009] have shown that there is an $r$-round two-party coin-tossing protocol with the optimal bias of $O(1/r)$. We extend the results of Moran et al. to the multiparty model when fewer than 2/3 of the parties are malicious. The bias of our protocol is proportional to $1/r$ and depends on the gap between the number of malicious parties and the number of honest parties in the protocol. Specifically, for a constant number of parties or when the number of malicious parties is somewhat larger than half, we present an $r$-round $m$-party coin-tossing protocol with optimal bias of $O(1/r)$.
Last updated:  2013-08-20
Practical Covertly Secure MPC for Dishonest Majority – or: Breaking the SPDZ Limits
Ivan Damgard, Marcel Keller, Enrique Larraia, Valerio Pastro, Peter Scholl, Nigel P. Smart
SPDZ (pronounced “Speedz”) is the nickname of the MPC protocol of Damgård et al. from Crypto 2012. SPDZ provided various efficiency innovations on both the theoretical and practical sides compared to previous work in the preprocessing model. In this paper we both resolve a number of open problems with SPDZ and present several theoretical and practical improvements to the protocol. In detail, we start by designing and implementing a covertly secure key generation protocol for distributed BGV secret keys. In prior work this was assumed to be provided by a given setup functionality. Protocols for distributing BGV secret keys are likely to be of wider applicability than to the SPDZ protocol alone. We then construct both a covertly and an actively secure preprocessing phase, both of which compare favourably with previous work in terms of efficiency and provable security. We also build a new online phase, which solves a major problem of the SPDZ protocol: namely, prior to this work preprocessed data could be used for only one function evaluation and then had to be recomputed from scratch for the next evaluation, while our online phase can support reactive functionalities. This improvement comes mainly from the fact that our construction does not require players to reveal the MAC keys to check correctness of MAC'd values. Since our focus is also on practical instantiations, our implementation offloads as much computation as possible into the preprocessing phase, thus resulting in a faster online phase. Moreover, a better analysis of the parameters of the underlying cryptoscheme and a more specific choice of the field where computation is performed allow us to obtain a better optimized implementation. Improvements are also due to the fact that our construction is in the random oracle model, and the practical implementation is multi-threaded.
Last updated:  2012-11-11
A unidirectional conditional proxy re-encryption scheme based on non-monotonic access structure
Bin Wang
Recently, Fang et al. [6] introduced an interactive (bidirectional) conditional proxy re-encryption (C-PRE) scheme such that a proxy can only convert ciphertexts that satisfy an access policy set by a delegator. Their scheme supports monotonic access policies expressed by “OR” and “AND” gates. In addition, their scheme is called interactive since generation of re-encryption keys requires interaction between the delegator and the delegatee. In this paper, we study the problem of constructing a unidirectional (non-interactive) C-PRE scheme supporting non-monotonic access policies expressed by “NOT”, “OR” and “AND” gates. A security model for unidirectional C-PRE schemes is also proposed in this paper. To obtain a unidirectional C-PRE scheme supporting non-monotonic access policies, we extend the unidirectional PRE scheme presented by Libert et al. [8] using ideas from the non-monotonic attribute-based encryption (ABE) scheme presented by Ostrovsky et al. [9]. Furthermore, the security of our C-PRE scheme is proved under the modified 3-weak Decision Bilinear Diffie-Hellman Inversion assumption in the standard model.
Last updated:  2012-11-11
Preimage and Pseudo-Collision Attacks on Step-Reduced SM3 Hash Function
Gaoli Wang, Yanzhao Shen
SM3~\cite{SM3hf} is the Chinese cryptographic hash standard, announced in 2010 and designed by Wang et al. It is based on the Merkle-Damgård design and its compression function can be seen as a block cipher used in Davies-Meyer mode. It uses message blocks of length 512 bits and outputs hash values of length 256 bits. This paper studies the security of the SM3 hash function against preimage attacks and pseudo-collision attacks. We propose preimage attacks on 29-step and 30-step SM3, and pseudo-preimage attacks on 31-step and 32-step SM3 out of 64 steps. The complexities of these attacks are $2^{245}$ 29-step operations, $2^{251.1}$ 30-step operations, $2^{245}$ 31-step operations and $2^{251.1}$ 32-step operations, respectively. These (pseudo) preimage attacks all start from the first step of the reduced SM3. Meanwhile, they can be converted into pseudo-collision attacks on SM3 reduced to 29, 30, 31 and 32 steps with complexities of $2^{122}$, $2^{125.1}$, $2^{122}$ and $2^{125.1}$, respectively. As far as we know, the previously best known preimage attacks on SM3 cover 28 steps (from the first step) and 30 steps (from the 7th step), and there is no publicly published result on (pseudo) collision attacks on SM3.
Last updated:  2012-11-11
Coarse-grained integers - Smooth? Rough? Both!
Daniel Loebenberger, Michael Nüsken
We count ]B, C]-grained, k-factor integers which are simultaneously B-rough and C-smooth and have a fixed number k of prime factors. Our aim is to exploit explicit versions of the prime number theorem as much as possible to get good explicit bounds for the count of such integers. This analysis was inspired by certain inner procedures in the general number field sieve. The result should at least provide some insight into what happens there. We estimate the given count in terms of some recursively defined functions. Since they are still difficult to handle, only another approximation step reveals their orders. Finally, we use the obtained bounds to perform numerical experiments that show how well the desired count can be approximated for the parameters of the general number field sieve in the inspiring application mentioned above.
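For very small parameters the quantity being estimated can simply be counted by brute force; a Python sketch (counting prime factors with multiplicity, which is an assumption — the paper fixes its own convention):

```python
def prime_factors(n):
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def count_grained(lo, hi, B, C, k):
    """Count n in [lo, hi] that are B-rough, C-smooth, with exactly k
    prime factors, i.e. all prime factors lie in the interval ]B, C]."""
    return sum(1 for n in range(lo, hi + 1)
               if (fs := prime_factors(n)) and len(fs) == k
               and all(B < p <= C for p in fs))

# Example: 2-factor integers up to 10^4 whose prime factors are in ]10, 100].
print(count_grained(2, 10**4, B=10, C=100, k=2))
```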
Last updated:  2012-11-11
Cryptanalysis and Improvement of a Multi-Receiver Generalized Signcryption Scheme
Cai-xue Zhou
A generalized signcryption (GSC) scheme can adaptively work as an encryption scheme, a signature scheme or a signcryption scheme with only one algorithm. It is very suitable for storage-constrained environments. In this paper, we analyze a multi-receiver GSC scheme, and show that it cannot achieve indistinguishability under adaptive chosen ciphertext attacks (IND-CCA2) in the pure encryption mode and hybrid encryption mode. We further propose a revised version of the scheme, which resolves the security issues of the original scheme without sacrificing its high efficiency and simple design. Our improved scheme can be proved IND-CCA2 secure and existentially unforgeable under adaptive chosen message attacks (EUF-CMA) under the computational Diffie-Hellman (CDH) assumption.
Last updated:  2012-11-11
Efficient Methods for Practical Fully Homomorphic Symmetric-key Encryption, Randomization and Verification
Aviad Kipnis, Eliphaz Hibshoosh
We present high-performance non-deterministic fully-homomorphic methods for practical randomization of data (over a commutative ring), and symmetric-key encryption of random mod-N data (over the ring of residues mod N) well suited for crypto applications. These methods secure, for example, the multivariate input or the coefficients of a polynomial function running in an open untrusted environment. We show that random plaintext is a sufficient condition for a proof of security of the homomorphic encryption. The efficient nature of the methods -- one large-number multiplication per encryption and six for the product of two encrypted values -- motivates and enables the use of low-cost collaborative security platforms for crypto applications such as keyed-hash or private key derivation algorithms. Such a platform is comprised of a low-cost and low-performance security element supported by an untrusted high-performance server running the homomorphic algorithms. The methods employed may also provide enhanced protection for some existing crypto algorithms against certain attacks. Specifically, it is shown how to secure the OSS public-key signature against the Pollard attack. Further, we demonstrate how the homomorphic randomization of data can offer protection for an AES key against side-channel attacks. Finally, the methods provide both fault detection and verification of computed-data integrity.
Last updated:  2013-07-12
On the Complexity of the BKW Algorithm on LWE
Martin R. Albrecht, Carlos Cid, Jean-Charles Faugère, Robert Fitzpatrick, Ludovic Perret
This work presents a study of the complexity of the Blum-Kalai-Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, by providing refined estimates for the data and computational effort requirements for solving concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and compare with alternative approaches based on lattice reduction. As a result, we provide new upper bounds for the concrete hardness of these LWE-based schemes. Rather surprisingly, it appears that the BKW algorithm outperforms known estimates for lattice reduction algorithms starting in dimension n ≈ 250 when LWE is reduced to SIS. However, this assumes access to an unbounded number of LWE samples.
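To fix ideas, the core BKW step can be sketched in a few lines of Python: difference pairs of samples that collide on a block of coordinates, zeroing the block while roughly doubling the noise (toy parameters; the paper's contribution is the refined count of how many samples and stages this costs):

```python
import random

q, n, b = 97, 6, 2        # toy modulus, dimension, and block width

def lwe_sample(s):
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])                      # small noise
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

def bkw_step(samples):
    """Zero the first b coordinates by differencing colliding samples."""
    buckets, out = {}, []
    for a, c in samples:
        key = tuple(a[:b])
        if key in buckets:
            a2, c2 = buckets.pop(key)
            out.append(([(x - y) % q for x, y in zip(a, a2)], (c - c2) % q))
        else:
            buckets[key] = (a, c)
    return out

s = [random.randrange(q) for _ in range(n)]
reduced = bkw_step([lwe_sample(s) for _ in range(20000)])
assert all(a[:b] == [0, 0] for a, _ in reduced)        # block eliminated
```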
Last updated:  2013-03-31
Secure Outsourced Attribute-based Encryption
Jin Li, Jingwei Li, Xiaofeng Chen, Chunfu Jia, Duncan S. Wong
Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing (by Attribute Authorities (AAs)) and decryption (by eligible users) are getting prohibitively high. Although the existing outsourced ABE solutions are able to offload some intensive computing tasks to a third party, for example a cloud, so as to relieve the local burden of eligible users during decryption, the high computational complexity of key-issuing at the AAs has yet to be addressed; meanwhile, an ABE system will continue to grow as more users are included, and user revocation, which must be considered in practice, will trigger even more key (re-)issuing. Aiming at tackling the challenges above, for the first time, we propose a Secure Outsourced ABE system, which supports not only secure outsourced decryption, but also secure outsourced key-issuing. Unlike the current outsourced ABE systems, our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the AAs and eligible users to perform locally. Furthermore, we show that both outsourcing processes (to the KGSP and to the DSP) are secure: namely, the KGSP and the DSP would not be able to recover the keys or decrypt the ciphertexts, respectively. In addition, we consider the scenario that a KGSP or DSP may be dishonest and could maliciously return some incorrect values rather than following the outsourced operations. Therefore, in this paper, we also propose another ABE construction which allows the AAs and eligible users to check the correctness of outsourced operations in an efficient way. The security of the construction is analyzed under a recently formalized model called Refereed Delegation of Computation (RDoC).
Last updated:  2012-11-11
Cryptanalysis of Double-Block-Length Hash Mode MJH
Deukjo Hong, Daesung Kwon
MJH, a double-block-length (DBL) hash mode of block ciphers, has been proved collision-resistant in the ideal cipher model up to $2^{2n/3- \log n}$ queries. In this paper we provide the first cryptanalytic results for MJH. We show a collision attack on MJH with time complexity below the birthday bound. When block ciphers with 128-bit blocks are used, it has time complexity around $2^{124}$, which is to be compared to the birthday attack having complexity $2^{128}$. We also give a preimage attack on MJH. It has time complexity $2^{3n/2+1}$ with $n$-bit block ciphers, which is to be compared to the brute force attack having complexity $2^{2n}$.
Last updated:  2012-11-13
New Preimage Attack on MDC-4
Deukjo Hong, Daesung Kwon
In this paper, we provide some cryptanalytic results for MDC-4, a double-block-length (DBL) hash mode of block ciphers. Our preimage attacks follow the framework of Knudsen et al.'s time/memory trade-off preimage attack on MDC-2, and we show how to apply it to MDC-4. When the block length of the underlying block cipher is $n$ bits, the most efficient preimage attack on MDC-4 requires time and space about $2^{3n/2}$, which is to be compared to the previous best known preimage attack having time complexity $2^{7n/4}$. Additionally, we propose MDC-4$^*$, an enhanced version of MDC-4 based on a simple idea. It is secure against our preimage attack and previous attacks, and has the same efficiency as MDC-4.
Last updated:  2012-11-11
Pairings on Generalized Huff Curves
Abdoul Aziz Ciss, Djiby Sow
This paper presents the Tate pairing computation on generalized Huff curves proposed by Wu and Feng in \cite{Wu}. In fact, we extend the results of the Tate pairing computation on the standard Huff elliptic curves done previously by Joye, Tibouchi and Vergnaud in \cite{Joux}. We show that the addition step of the Miller loop can be performed in $1\mathbf{M}+(k+15)\mathbf{m}+2\mathbf{c}$ and the doubling one in $1\mathbf{M} + 1\mathbf{S} + (k + 12) \mathbf{m} + 5\mathbf{s} + 2\mathbf{c}$ on the generalized Huff curve.
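Using the per-step costs above, the total cost of a Miller loop can be tallied mechanically; a quick illustrative Python sketch (loop length and Hamming weight are hypothetical placeholders, not parameters from the paper):

```python
# Per-step costs from the abstract: addition 1M + (k+15)m + 2c,
# doubling 1M + 1S + (k+12)m + 5s + 2c. A Miller loop of length L
# performs ~L doublings and ~h additions (h = Hamming weight of the loop).
def miller_cost(k, L, h):
    dbl = {"M": 1, "S": 1, "m": k + 12, "s": 5, "c": 2}
    add = {"M": 1, "S": 0, "m": k + 15, "s": 0, "c": 2}
    return {op: L * dbl[op] + h * add[op] for op in dbl}

print(miller_cost(k=8, L=160, h=80))   # hypothetical parameters
```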
Last updated:  2013-03-11
Message-Locked Encryption and Secure Deduplication
Mihir Bellare, Sriram Keelveedhi, Thomas Ristenpart
We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side the challenge is standard model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
Last updated:  2013-08-22
On the Security of TLS Renegotiation
Florian Giesen, Florian Kohlar, Douglas Stebila
The Transport Layer Security (TLS) protocol is the most widely used security protocol on the Internet. It supports negotiation of a wide variety of cryptographic primitives through different cipher suites, various modes of client authentication, and additional features such as renegotiation. Despite its widespread use, only recently has the full TLS protocol been proven secure, and only the core cryptographic protocol with no additional features. These additional features have been the cause of several practical attacks on TLS. In 2009, Ray and Dispensa demonstrated how TLS renegotiation allows an attacker to splice together its own session with that of a victim, resulting in a man-in-the-middle attack on TLS-reliant applications such as HTTP. TLS was subsequently patched with two defence mechanisms for protection against this attack. We present the first formal treatment of renegotiation in secure channel establishment protocols. We add optional renegotiation to the authenticated and confidential channel establishment model of Jager et al., an adaptation of the Bellare--Rogaway authenticated key exchange model. We describe the attack of Ray and Dispensa on TLS within our model. We show generically that the proposed fixes for TLS offer good protection against renegotiation attacks, and give a simple new countermeasure that provides renegotiation security for TLS even in the face of stronger adversaries.
Last updated:  2017-01-02
SCAPI: The Secure Computation Application Programming Interface
Yael Ejgenberg, Moriya Farbstein, Meital Levy, Yehuda Lindell
Secure two-party and multiparty computation has long stood at the center of the foundations of theoretical cryptography. Recently, however, interest has grown regarding the efficiency of such protocols and their application in practice. As a result, there has been significant progress on this problem and it is possible to actually carry out secure computation for non-trivial tasks on reasonably large inputs. Part of this research goal of making secure computation practical has also involved \emph{implementations}. Such implementations are of importance for two reasons: first, they demonstrate the real efficiency of known and new protocols; second, they deepen our understanding regarding where the bottlenecks in efficiency lie. However, it is very hard to compare between implementations by different research groups since they are carried out on different platforms and using different infrastructures. In addition, most implementations have been carried out without the goal of code reuse, and so are not helpful to other researchers. The difficulty of beginning implementation projects is further compounded by the fact that existing cryptographic libraries (like openSSL, Bouncy Castle, and others) are tailored for tasks like encryption, authentication and key-exchange, and not for secure computation. We have developed SCAPI in order to address these problems. SCAPI is an \emph{open-source} general library tailored for secure computation implementations. Our aim in developing SCAPI has been to provide a flexible and efficient infrastructure for secure computation implementations, that is both easy to use and robust. Great care has been taken in the design of the library, in writing clean code, and in documentation. We hope that this library will be useful to the community interested in implementations of secure protocols, and will help to promote the goal of making secure computation practical.
Last updated:  2012-11-08
Efficient Group Key Management Schemes for Multicast Dynamic Communication Systems
Muhammad Yasir Malik
Key management in multicast dynamic groups, where users can leave or join at their ease, is one of the most crucial and essential parts of secure communication. Various efficient management strategies have been proposed during the last decade that aim to decrease encryption costs and transmission overheads. In this report, two different types of key management schemes are proposed. The first proposed scheme is based on the one-way function tree (OFT); it closes the security gaps that have been pointed out in recent years. The second proposed scheme is based on the logical key hierarchy (LKH); it provides better performance than the rather inflexible and expensive plain LKH scheme.
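For background on the LKH bookkeeping the second scheme builds on, here is a minimal generic sketch (textbook LKH, not the report's variant): a leaving member invalidates the keys on its leaf-to-root path, and each replacement key is distributed encrypted under the sibling key at that level (plus the fresh key below it), giving O(log n) rekey messages:

```python
def rekey_on_leave(leaf, n_leaves):
    """Heap-ordered complete binary tree: root = 1, leaves = n..2n-1.
    Returns (keys to replace on the path, sibling keys used to wrap them)."""
    node = n_leaves + leaf
    to_replace, wrap_under = [], []
    while node > 1:
        to_replace.append(node // 2)   # parent key must be replaced
        wrap_under.append(node ^ 1)    # encrypted for the sibling subtree
        node //= 2
    return to_replace, wrap_under

# Example: 8 members, member 3 leaves -> 3 keys replaced, 3 rekey messages.
print(rekey_on_leave(3, 8))            # ([5, 2, 1], [10, 4, 3])
```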
Last updated:  2012-11-26
Efficient Group Signatures in the Standard Model
Laila El Aimani, Olivier Sanders
In a group signature scheme, group members are able to sign on behalf of the group. Since the introduction of this cryptographic authentication mechanism, several schemes have been proposed, but only few of them enjoy security in the standard model. Moreover, those in the standard model either resort to non-standard assumptions or suffer from the expensive cost and bandwidth of the resulting signatures. We provide three practical group signature schemes that are provably secure in the standard model under standard assumptions. The three schemes permit dynamic enrollment of new members while keeping a constant size for both keys and group signatures, and they improve the state of the art by several orders of magnitude.
Last updated:  2012-11-08
Bit-Parallel $GF(2^{n})$ Squarer Using Shifted Polynomial Basis
Xi Xiong, Haining Fan
We present explicit formulae and complexities of bit-parallel shifted polynomial basis (SPB) squarers in finite fields $GF(2^{n})$ generated by general irreducible trinomials $x^{n}+x^{k}+1$ ($0< k <n$) and type-II irreducible pentanomials $x^{n}+x^{k+1}+x^{k}+x^{k-1}+1$ ($3<k<(n-3)/2$). The complexities of the proposed squarers match or slightly outperform the previous best results. These formulae can also be used to design polynomial basis Montgomery squarers without any change. Furthermore, we show by examples that the XOR gate counts of SPB squarers differ when different shift factors in the SPB definition, i.e., the parameter $v$ in $\{x^{i-v} \mid 0\leq i\leq n-1\}$, are used. This corrects a previous misinterpretation.
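As background for such formulas, squaring in $GF(2^n)$ is GF(2)-linear: it merely spreads the coefficient bits apart and then reduces modulo the defining polynomial, which is why closed-form gate counts are possible. A bit-level Python sketch for a trinomial, in the plain polynomial basis (a toy field, not the SPB setting of the paper):

```python
def gf2_square(a, n, k):
    """Square a in GF(2^n) = GF(2)[x]/(x^n + x^k + 1); bits of the int a
    are polynomial coefficients. Toy parameters, assumed irreducible."""
    s = 0
    for i in range(n):                      # squaring spreads bit i to bit 2i
        if (a >> i) & 1:
            s |= 1 << (2 * i)
    for d in range(2 * n - 2, n - 1, -1):   # reduce: x^d = x^(d-n+k) + x^(d-n)
        if (s >> d) & 1:
            s ^= (1 << d) | (1 << (d - n + k)) | (1 << (d - n))
    return s

# Example in GF(2^7) with x^7 + x^3 + 1 (a small irreducible trinomial):
print(bin(gf2_square(0b1011001, 7, 3)))
```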
Last updated:  2012-11-05
Order-Preserving Encryption Revisited: Improved Security Analysis and Alternative Solutions
Alexandra Boldyreva, Nathan Chenette, Adam O’Neill
We further the study of order-preserving symmetric encryption (OPE), a primitive for allowing efficient range queries on encrypted data, recently initiated (from a cryptographic perspective) by Boldyreva et al.~(Eurocrypt '09). First, we address the open problem of characterizing what encryption via a random order-preserving function (ROPF) leaks about underlying data (ROPF being the ``ideal object'' in the security definition, POPF, satisfied by their scheme). In particular, we show that, for a database of randomly distributed plaintexts and appropriate choice of parameters, ROPF encryption leaks neither the precise value of any plaintext nor the precise distance between any two of them. The analysis here introduces useful new techniques. On the other hand, we show that ROPF encryption leaks the approximate value of any plaintext as well as the approximate distance between any two plaintexts, each to an accuracy of about the square root of the domain size. We then study schemes that are not order-preserving, but which nevertheless allow efficient range queries and achieve security notions stronger than POPF. In a setting where the entire database is known in advance of key generation (considered in several prior works), we show that recent constructions of ``monotone minimal perfect hash functions'' allow one to efficiently achieve (an adaptation of) the notion of IND-O(rdered) CPA, also considered by Boldyreva et al., which asks that \emph{only} the order relations among the plaintexts are leaked. Finally, we introduce {\em modular} order-preserving encryption (MOPE), in which the scheme of Boldyreva et al. is prepended with a random shift cipher. MOPE improves the security of OPE in a sense, as it does not leak any information about plaintext location. We clarify that our work should not be interpreted as saying that the original scheme of Boldyreva et al., or the variants that we introduce, are ``secure'' or ``insecure.'' Rather, the goal of this line of research is to help practitioners decide whether the options provide a suitable security-functionality tradeoff for a given application.
Last updated:  2012-11-05
Order-Preserving Symmetric Encryption
Alexandra Boldyreva, Nathan Chenette, Younho Lee, Adam O’Neill
We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by Agrawal et al.~(SIGMOD '04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard security notions for encryption such as indistinguishability against chosen-plaintext attack (IND-CPA) is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions (PRFs) and related primitives asking that an OPE scheme look ``as-random-as-possible" subject to the order-preserving constraint. We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying blockcipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution. In particular, it makes black-box use of an efficient sampling algorithm for the latter.
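The relation to the hypergeometric distribution is easiest to see in the eager direction: a random order-preserving function from a domain of size $M$ into a range of size $N$ is exactly a uniformly random $M$-element subset of the range, listed in sorted order (the actual scheme samples it lazily, point by point, via the hypergeometric distribution). A Python sketch:

```python
import random

def random_opf(M, N):
    """Sample a random order-preserving function {1..M} -> {1..N} eagerly:
    a uniform M-subset of {1..N} in sorted order. Illustration only; a real
    scheme derives it lazily and pseudorandomly from the key."""
    return sorted(random.sample(range(1, N + 1), M))

f = random_opf(8, 100)                        # f[i-1] is the "ciphertext" of i
assert all(x < y for x, y in zip(f, f[1:]))   # strictly order-preserving
print(f)
```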
Last updated:  2012-11-20
Impossible plaintext cryptanalysis and probable-plaintext collision attacks of 64-bit block cipher modes
David McGrew
The block cipher modes of operation that are widely used (CBC, CTR, CFB) are secure up to the birthday bound; that is, if $w2^{w}$ or fewer bits of data are encrypted with a $w$-bit block cipher. However, the detailed security properties close to this bound are not widely appreciated, despite the fact that $64$-bit block ciphers are sometimes used in that domain. This work addresses the issue by analyzing plaintext-recovery attacks that are effective close to that bound. We describe probable-plaintext attacks, which can learn unknown plaintext values that are encrypted with CBC, CFB, or OFB. We also introduce \textit{impossible plaintext} cryptanalysis, which can recover information encrypted with CTR, and can improve attacks against the aforementioned modes as well. These attacks work at the birthday bound, or even slightly below that bound, when the target plaintext values are encrypted under a succession of keys.
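The collision mechanics behind such attacks on CBC fit in a few lines: if two ciphertext blocks collide, the XOR of the corresponding plaintext blocks equals the XOR of the two preceding ciphertext blocks, so a known (probable) plaintext block reveals the other. In the sketch below (ours), a toy 16-bit random permutation stands in for a 64-bit block cipher so that a collision appears after only about $2^{8}$ blocks.

```python
# Sketch of the CBC birthday collision: if C_i == C_j then
# P_i XOR P_j == C_{i-1} XOR C_{j-1}. A toy 16-bit random permutation
# stands in for a 64-bit block cipher so collisions come quickly.
import random

random.seed(7)
W = 16                                   # toy block width in bits
perm = list(range(2 ** W))
random.shuffle(perm)                     # stand-in keyed permutation E

def cbc_encrypt(blocks, iv):
    prev, out = iv, []
    for p in blocks:
        prev = perm[p ^ prev]            # C_i = E(P_i XOR C_{i-1})
        out.append(prev)
    return out

pt = [random.getrandbits(W) for _ in range(2 ** 10)]
iv = random.getrandbits(W)
ct = cbc_encrypt(pt, iv)

chain = [iv] + ct                        # chain[i] precedes ct[i]
seen = {}
for i, c in enumerate(ct):
    if c in seen:                        # ciphertext block collision
        j = seen[c]
        xor = chain[i] ^ chain[j]
        assert xor == pt[i] ^ pt[j]      # known P_j now reveals P_i
        print(f"collision at blocks {j}, {i}: P_i ^ P_j = {xor:#06x}")
        break
    seen[c] = i
```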
Last updated:  2013-03-12
Resolving the conflict between generality and plausibility in verified computation
Srinath Setty, Benjamin Braun, Victor Vu, Andrew J. Blumberg, Bryan Parno, Michael Walfish
The area of proof-based verified computation (outsourced computation built atop probabilistically checkable proofs and cryptographic machinery) has lately seen renewed interest. Although recent work has made great strides in reducing the overhead of naive applications of the theory, these schemes still cannot be considered practical. A core issue is that the work for the prover is immense, in general; it is near-practical only for hand-compiled computations that can be expressed in special forms. This paper addresses that problem. Provided one is willing to batch verification, we develop a protocol that achieves the efficiency of the best manually constructed protocols in the literature yet applies to most computations. Our protocol is built on the observation that the recently-proposed QAPs of Gennaro et al. (ePrint 2012/215) yield a linear PCP that works with the efficient argument protocol of Setty et al. (ePrint 2012/598, Security 2012, NDSS 2012), itself based on the proposal of Ishai et al. (CCC 2007). The consequence is a prover whose total work is not much more than \emph{linear} in the running time of the computation. We implement the protocol in the context of a built system that includes a compiler and a GPU implementation. The result, as indicated by an experimental evaluation, is a system that is {\em almost} usable for real problems---without special-purpose tailoring.
Last updated:  2012-11-08
Biclique Cryptanalysis of Lightweight Block Ciphers PRESENT, Piccolo and LED
Kitae Jeong, HyungChul Kang, Changhoon Lee, Jaechul Sung, Seokhie Hong
In this paper, we evaluate the security of lightweight block ciphers PRESENT, Piccolo and LED against biclique cryptanalysis. To recover the secret key of PRESENT-80/128, our attacks require $2^{79.76}$ full PRESENT-80 encryptions and $2^{127.91}$ full PRESENT-128 encryptions, respectively. Our attacks on Piccolo-80/128 require computational complexities of $2^{79.13}$ and $2^{127.35}$, respectively. The attack on a $29$-round reduced LED-64 needs $2^{63.58}$ 29-round reduced LED-64 encryptions. In the cases of LED-80/96/128, we propose attacks on two versions. First, to recover the secret key of $45$-round reduced LED-80/96/128, our attacks require computational complexities of $2^{79.45}, 2^{95.45}$ and $2^{127.45}$, respectively. To attack the full version, we require computational complexities of $2^{79.37}, 2^{95.37}$ and $2^{127.37}$, respectively. However, in these cases, we need the full codebook. These results improve on previously known biclique cryptanalysis results for these ciphers.
Last updated:  2012-11-05
Solving Subset Sum Problems of Density close to 1 by "randomized" BKZ-reduction
Claus P. Schnorr, Taras Shevchenko
Subset sum or knapsack problems of dimension $n$ are known to be hardest for knapsacks of density close to 1. These problems are NP-hard for arbitrary $n$. One can solve such problems either by lattice basis reduction or by optimized birthday algorithms. Recently, Becker, Coron, and Joux [BCJ10] presented a birthday algorithm that follows Schroeppel, Shamir [SS81] and Howgrave-Graham, Joux [HJ10]. This algorithm solves 50 random knapsacks of dimension 80 and density close to 1 in roughly 15 hours on a 2.67 GHz PC. We present an optimized lattice basis reduction algorithm that follows Schnorr, Euchner [SE03], using the pruning of Schnorr, Hörner [SH95], that solves such random knapsacks of dimension 80 on average in less than a minute, and all 50 such problems together about 9.4 times faster and using much less space than [BCJ10] on another 2.67 GHz PC.
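As a point of reference, the standard lattice embedding for subset sum can be set up in a few lines; plain LLL already cracks toy instances, while the paper's randomized BKZ with pruning is what scales the approach to dimension 80 at density close to 1. The sketch below (ours) assumes the fpylll library and uses a CJLOSS-style basis; at this toy size it often, though not always, recovers the solution.

```python
# Sketch: embed a subset sum instance into a CJLOSS-style lattice basis
# and reduce with plain LLL. A vector (±1, ..., ±1, 0) in the reduced
# basis encodes a solution; we verify any candidate against the target.
from fpylll import IntegerMatrix, LLL
import random

random.seed(0)
n = 16
a = [random.getrandbits(16) | 1 for _ in range(n)]      # knapsack weights
x = [random.getrandbits(1) for _ in range(n)]           # hidden solution
s = sum(ai * xi for ai, xi in zip(a, x))                # public target

N = 2 * max(a)                                          # weight scaling
rows = [[2 * (i == j) for j in range(n)] + [N * a[i]] for i in range(n)]
rows.append([1] * n + [N * s])                          # CJLOSS embedding

B = IntegerMatrix.from_matrix(rows)
LLL.reduction(B)

for i in range(B.nrows):
    v = [B[i, j] for j in range(B.ncols)]
    if v[n] == 0 and set(v[:n]) <= {1, -1}:
        for sign in (1, -1):                            # v or -v encodes x
            sol = [(sign * c + 1) // 2 for c in v[:n]]
            if sum(ai * bi for ai, bi in zip(a, sol)) == s:
                print("recovered x =", sol)
```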
Last updated:  2012-11-07
Asynchronous Computational VSS with Reduced Communication Complexity
Michael Backes, Amit Datta, Aniket Kate
Verifiable secret sharing (VSS) is a vital primitive in secure distributed computing. It allows an untrusted dealer to verifiably share a secret among n parties in the presence of an adversary controlling at most t of them. VSS in the synchronous communication model has received tremendous attention in the cryptographic research community. Nevertheless, recent interest in deploying secure distributed computing over the Internet requires going beyond the synchronous communication model and thoroughly investigating VSS in the asynchronous communication model. In this work, we consider the communication complexity of asynchronous VSS in the computational setting for the optimal resilience of n = 3t + 1. The best known asynchronous VSS protocol by Cachin et al. has O(n^2) message complexity and O(kn^3) communication complexity, where k is a security parameter corresponding to the size of the secret. We close the linear complexity gap between these two measures for asynchronous VSS by presenting two protocols with O(n^2) message complexity and O(kn^2) communication complexity. Our first protocol satisfies the standard VSS definition, and can be used in stand-alone VSS scenarios as well as in applications such as Byzantine agreement. Our second and more intricate protocol satisfies a stronger VSS definition, and is useful in all VSS applications including multiparty computation and threshold cryptography.
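For background, the sketch below (ours) shows plain, non-verifiable Shamir sharing over a prime field, the primitive that VSS augments: the paper's protocols additionally make the dealing verifiable, e.g. via commitments, so that a corrupt dealer among n = 3t + 1 asynchronous parties cannot distribute inconsistent shares. The field modulus and parameters are illustrative.

```python
# Background sketch: plain Shamir secret sharing over a prime field.
# VSS adds verifiability on top of this; none of that is shown here.
import random

P = 2 ** 127 - 1                       # a Mersenne prime field modulus

def share(secret, t, n):
    """Degree-t polynomial with f(0) = secret; shares are (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

t, n = 2, 7                            # tolerates t corruptions, n = 3t+1
shares = share(12345, t, n)
assert reconstruct(shares[:t + 1]) == 12345
```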
Last updated:  2014-10-22
An ultra-lightweight ID-based pairwise key establishment scheme aiming at full collusion resistance
Oscar Garcia-Morchon, Ronald Rietman, Ludo Tolhuizen, Domingo Gomez-Perez, Jaime Gutierrez, Santos Merino del Pozo
This paper introduces a new key establishment scheme aiming at fully collusion-resistant identity-based symmetric-key agreement. In an identity-based pairwise key agreement scheme, a Trusted Third Party (TTP) manages the system and securely provides any node, e.g., Alice or Bob, with private keying materials. Alice can generate a pairwise key with Bob given her own secret keying material and Bob's identity. The full collusion resistance property would ensure that the scheme remains secure even if arbitrarily many devices collude or are compromised. Our scheme, the HIMMO algorithm, relies on two design concepts: Hiding Information and Mixing Modular Operations. Hiding information is related to the Noisy Interpolation Problem; the Mixing Modular Operations problem seems to be a new hard problem. We describe our scheme and the security of its underlying design principles, and give order-of-magnitude estimates for secure configuration parameters. For these parameters, we show that our prototype implementation of HIMMO on the 8-bit CPU ATmega128L can generate 128-bit keys in less than 7 ms, based on an algorithm fitting in 428 B and with secret keying material of size 656 B.
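For context, here is a sketch (ours) of the classical symmetric-bivariate-polynomial scheme of Blundo et al., the starting point for identity-based symmetric key predistribution: the TTP picks a symmetric f(x, y), node id receives g(y) = f(id, y) as keying material, and K(A, B) = f(A, B) = f(B, A) is the pairwise key. This baseline is only t-collusion resistant, which is precisely the limitation HIMMO's hiding and mixing of modular operations aims to remove; the code below is not the HIMMO algorithm itself, and all parameters are illustrative.

```python
# Sketch of Blundo-style polynomial key predistribution: pairwise keys
# from a symmetric bivariate polynomial. Only t-collusion resistant.
import random

P = (1 << 61) - 1                  # prime modulus (illustrative)
t = 3                              # polynomial degree (collusion bound)

# TTP: symmetric coefficient matrix c[i][j] == c[j][i]
c = [[0] * (t + 1) for _ in range(t + 1)]
for i in range(t + 1):
    for j in range(i, t + 1):
        c[i][j] = c[j][i] = random.randrange(P)

def keying_material(node_id):
    """Coefficients of g(y) = f(node_id, y) handed to the node."""
    return [sum(c[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(material, peer_id):
    return sum(gj * pow(peer_id, j, P) for j, gj in enumerate(material)) % P

alice, bob = 17, 42
assert pairwise_key(keying_material(alice), bob) == \
       pairwise_key(keying_material(bob), alice)
```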
Last updated:  2012-11-02
Security Analysis of an Open Car Immobilizer Protocol Stack
Stefan Tillich, Marcin Wójcik
An increasing number of embedded security applications---which traditionally have been heavily reliant on secret and/or proprietary solutions---apply the principle of open evaluation. A recent example is the specification of an open security protocol stack for car immobilizer applications by Atmel, which has been presented at ESCAR 2010. This stack is primarily intended to be used in conjunction with automotive transponder chips of this manufacturer, but could in principle be deployed on any suitable type of transponder chip. In this paper we re-evaluate the security of this protocol stack. We were able to uncover a number of security vulnerabilities. We show that an attacker with a cheap standard reader close to such a car key can track it, lock sections of its EEPROM, and even render its immobilizer functionality completely useless. After eavesdropping on a genuine run of the authentication protocol between the car key and the car, an attacker is enabled to read and write the memory of the car key. Furthermore, we point out the threats of relay attacks and session hijacking, which require slightly more elaborate attack setups. For each of the indicated attacks we propose possible fixes and discuss their overhead.
Last updated:  2015-02-13
Balancing Output Length and Query Bound in Hardness Preserving Constructions of Pseudorandom Functions
Nishanth Chandran, Sanjam Garg
We revisit hardness-preserving constructions of a pseudo-random function (PRF) from any length doubling pseudo-random generator (PRG) when there is a non-trivial upper bound $q$ on the number of queries that the adversary can make to the PRF. Very recently, Jain, Pietrzak, and Tentes (TCC 2012) gave a hardness-preserving construction of a PRF that makes only $O(\log q)$ calls to the underlying PRG when $q = 2^{n^\epsilon}$ and $\epsilon \geq \frac{1}{2}$. This dramatically improves upon the efficiency of the construction of Goldreich, Goldwasser, and Micali (FOCS 1984). However, they explicitly left open the question of whether such constructions exist when $\epsilon < \frac{1}{2}$. In this work, we give constructions of PRFs that make only $O(\log q)$ calls to the underlying PRG when $q = 2^{n^\epsilon}$, for $0<\epsilon<1$; our PRF outputs $O(n^{2\epsilon})$ bits (on every input), as opposed to the construction of Jain {\em et al.} that outputs $n$ bits. That is, our PRF is not length preserving; however it outputs more bits than the PRF of Jain {\em et al.} when $\epsilon>\frac{1}{2}$. We obtain our construction through the use of information-theoretic tools such as {\em almost} $\alpha$-wise independent hash functions coupled with a novel proof strategy.
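For orientation, the sketch below (ours) is the plain GGM tree that both works build on: one PRG call per input bit, with SHA-512 standing in for the length-doubling PRG. The point of Jain et al. and of this paper is to cut the number of PRG calls from $|x|$ to $O(\log q)$; this unoptimized baseline only fixes notation.

```python
# Minimal sketch of the GGM construction: a PRF from a length-doubling
# PRG G(s) = (G0(s), G1(s)), walking one tree level per input bit.
import hashlib

def prg(seed: bytes):
    """Toy length-doubling PRG: 32 bytes -> two 32-byte halves."""
    out = hashlib.sha512(seed).digest()
    return out[:32], out[32:]

def ggm_prf(key: bytes, x: str) -> bytes:
    """Evaluate F_key(x) for a bitstring x by descending the GGM tree."""
    s = key
    for bit in x:
        s = prg(s)[int(bit)]
    return s

key = bytes(32)                     # all-zero key, for illustration only
print(ggm_prf(key, "1011").hex())
```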
Last updated:  2013-01-28
Polynomial time solutions of computational problems in noncommutative-algebraic cryptography
Boaz Tsaban
We introduce the \emph{linear centralizer method}, and use it to devise a provable polynomial time solution of the Commutator Key Exchange Problem, the computational problem on which, in the passive adversary model, the security of the Anshel--Anshel--Goldfeld 1999 \emph{Commutator} key exchange protocol is based. We also apply this method to the computational problem underlying the \emph{Centralizer} key exchange protocol, introduced by Shpilrain and Ushakov in 2006. This is the first provable polynomial time cryptanalysis of the Commutator key exchange protocol, hitherto the most important key exchange protocol in the realm of noncommutative-algebraic cryptography, and the first cryptanalysis (of any kind) of the Centralizer key exchange protocol. Unlike earlier cryptanalyses of the Commutator key exchange protocol, our cryptanalyses cannot be foiled by changing the distributions used in the protocol.
Last updated:  2012-11-01
An arithmetic intersection formula for denominators of Igusa class polynomials
Kristin Lauter, Bianca Viray
In this paper we prove an explicit formula for an arithmetic intersection number on the Siegel moduli space of abelian surfaces, generalizing the work of Bruinier-Yang and Yang. These intersection numbers allow one to compute the denominators of Igusa class polynomials, which has important applications to the construction of genus 2 curves for use in cryptography. Bruinier and Yang conjectured a formula for intersection numbers on an arithmetic Hilbert modular surface, and as a consequence obtained a conjectural formula for the intersection number relevant to denominators of Igusa class polynomials under strong assumptions on the ramification of the primitive quartic CM field K. Yang later proved this conjecture assuming that the ring of integers is freely generated by one element over the ring of integers of the real quadratic subfield. In this paper, we prove a formula for the intersection number for more general primitive quartic CM fields, and we use a different method of proof than Yang. We prove a tight bound on this intersection number which holds for all primitive quartic CM fields. As a consequence, we obtain a formula for a multiple of the denominators of the Igusa class polynomials for an arbitrary primitive quartic CM field. Our proof entails studying the Embedding Problem posed by Goren and Lauter and counting solutions using our previous article that generalized work of Gross-Zagier and Dorman to arbitrary discriminants.