All papers in 2013 (Page 2 of 881 results)
How Did Dread Pirate Roberts Acquire and Protect His Bitcoin Wealth?
The Bitcoin scheme is one of the most popular and talked-about alternative payment schemes. One of the most active parts of the Bitcoin ecosystem was the Silk Road marketplace, in which highly illegal substances and services were traded. It was run by a person who called himself Dread Pirate Roberts (DPR), whose bitcoin holdings are estimated to be worth hundreds of millions of dollars at today's exchange rate. On October 1st, 2013, the FBI arrested a 29-year-old man named Ross William Ulbricht, claiming that he is DPR, and seized a small fraction of his bitcoin wealth. In this paper we use the publicly available record to trace the evolution of his holdings in order to find out how he acquired them and how he tried to hide them from the authorities. In particular, we trace the amounts he received and the amounts he transferred out of his accounts, and show that all his Silk Road commissions from the months of May, June and September 2013, along with numerous other amounts, were not seized by the FBI. This analysis demonstrates the power of data mining techniques in analyzing large payment systems, and especially publicly available transaction graphs of the type provided by the Bitcoin scheme.
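The core of such an analysis, aggregating flows over the public transaction graph and following funds through intermediate hops, can be sketched in a few lines. The addresses and amounts below are invented for illustration only; they are not the actual Silk Road or DPR addresses, and the paper works on the real Bitcoin block chain.

```python
from collections import defaultdict

# Toy transaction list: (sender, receiver, amount in BTC).
TXS = [
    ("marketA", "dpr_1", 100.0),
    ("marketA", "dpr_2", 50.0),
    ("dpr_1",   "mix_1", 70.0),
    ("mix_1",   "dpr_3", 70.0),
    ("dpr_2",   "exchange", 20.0),
]

def flows(txs):
    """Total received and total sent per address."""
    inflow, outflow = defaultdict(float), defaultdict(float)
    for src, dst, amount in txs:
        outflow[src] += amount
        inflow[dst] += amount
    return inflow, outflow

def reachable_from(txs, start):
    """Addresses reachable by following outgoing transfers: a crude
    stand-in for tracing funds through intermediate (mixing) hops."""
    adj = defaultdict(set)
    for src, dst, _ in txs:
        adj[src].add(dst)
    seen, stack = {start}, [start]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

inflow, outflow = flows(TXS)
```

Comparing `inflow` minus `outflow` per address against publicly known seizures is, at this toy scale, the same bookkeeping the paper performs on millions of real transactions.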
Indistinguishability Obfuscation from Semantically-Secure Multilinear Encodings
We define a notion of semantic security of multilinear (a.k.a. graded) encoding schemes, which stipulates security of a class of algebraic ``decisional'' assumptions: roughly speaking, we require that for every nuPPT distribution over two \emph{constant-length} sequences and auxiliary elements such that all arithmetic circuits (respecting the multilinear restrictions and ending with a zero-test) are \emph{constant} with overwhelming probability, encodings of the two sequences are computationally indistinguishable. Assuming the existence of semantically secure multilinear encodings and the LWE assumption, we demonstrate the existence of indistinguishability obfuscators for all polynomial-size circuits. We additionally show that if we assume subexponential hardness, then it suffices to consider a \emph{single} (falsifiable) instance of semantic security (i.e., that semantic security holds w.r.t. a particular distribution) to obtain the same result.
We rely on the beautiful candidate obfuscation constructions of Garg et al. (FOCS'13), Brakerski and Rothblum (TCC'14) and Barak et al. (EuroCrypt'14), which were proven secure only in idealized generic multilinear encoding models, and develop new techniques for demonstrating security in the standard model, based only on semantic security of multilinear encodings (which trivially holds in the generic multilinear encoding model).
We also investigate various ways of defining an ``uber assumption'' (i.e., a super-assumption) for multilinear encodings, and show that perhaps the most natural way of formalizing the assumption that ``any algebraic decision assumption that holds in the generic model also holds against nuPPT attackers'' is false.
A Distinguish attack on Rabbit Stream Cipher Based on Multiple Cube Tester
The Rabbit stream cipher is one of the finalists of the eSTREAM project and uses 128-bit secret keys. Prior to our work, attacks on Rabbit all focused on bias analysis, and the best result was a distinguishing attack with complexity $2^{136}$. Our analysis in this paper is based on chosen-IV analysis of a reduced next-state (N-S) round of Rabbit using multiple cube testers. For this purpose we show that, for a mature cube, we can easily identify weak subcubes which increase the probability of distinguishing for an unknown secret key. We also show that, with complexity $2^{25}$, the keystream produced by one iteration of the next-state function is completely distinguishable from random.
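The cube-tester principle behind this attack can be illustrated on a toy output function (the function below is invented and is not Rabbit's next-state function): XOR-summing one output bit over all assignments of a chosen cube of IV bits cancels every monomial that does not contain all the cube variables, so a low-degree cipher yields a constant superpoly where a random function would not.

```python
from itertools import product

def toy_output_bit(key_bits, iv_bits):
    """Stand-in for one keystream bit of a reduced cipher: degree 2 in
    the IV bits (an invented function, not Rabbit's next-state output)."""
    k0, k1 = key_bits
    v0, v1, v2, v3 = iv_bits
    return (v0 & v1) ^ (v2 & k0) ^ (v3 & k1) ^ k0

def cube_sum(key_bits, cube, base_iv):
    """XOR the output bit over all 2^|cube| assignments of the cube's IV
    positions; monomials missing a cube variable cancel in the sum."""
    acc = 0
    for assignment in product([0, 1], repeat=len(cube)):
        iv = list(base_iv)
        for pos, bit in zip(cube, assignment):
            iv[pos] = bit
        acc ^= toy_output_bit(key_bits, iv)
    return acc
```

Here a cube of size 3 over a degree-2 function always sums to 0, for every key, while a truly random function would sum to 0 only half the time; this is exactly the distinguishing event a cube tester checks for.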
Distributed Group Authentication for RFID Supply Management
We investigate an application of Radio Frequency Identification (RFID) referred to in the literature as group scanning, in which an RFID reader device interrogates several RFID tags to establish “simultaneous” presence of a group of tags. Our goal is to study the group scanning problem in strong adversarial settings and show how group scanning can be used in distributed applications for supply chain management.
We present a security framework for group scanning and give a formal description of the attending security requirements. Our model is based on the Universal Composability framework and supports re-usability
(through modularity of security guarantees). We propose two novel protocols that realize group scanning in this security model, based on off-the-shelf components such as low-cost (highly optimized) pseudorandom functions, and show how these can be integrated into RFID supply-chain management systems.
Multi-Stage Fault Attacks on Block Ciphers
This paper introduces Multi-Stage Fault Attacks, which allow Differential Fault Analysis of block ciphers having independent subkeys. Besides the specification of an algorithm implementing the technique, we show concrete applications to LED-128 and PRINCE and demonstrate that in both cases approximately 3 to 4 fault injections are enough to reconstruct the full 128-bit key.
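The filtering step that underlies any Differential Fault Analysis, multi-stage or not, can be sketched on a toy one-round cipher. The S-box, fault model and parameters below are illustrative assumptions, not the LED-128 or PRINCE attack itself: each correct/faulty ciphertext pair restricts the key to the inputs consistent with the observed output difference, and intersecting a few pairs pins down the key.

```python
# PRESENT's 4-bit S-box, used here only as a convenient example table.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def encrypt(p, k):
    """One-round toy cipher: key addition followed by an S-box."""
    return SBOX[p ^ k]

def encrypt_faulty(p, k, delta):
    """Fault model (an assumption): a known difference delta is
    injected at the S-box input."""
    return SBOX[p ^ k ^ delta]

def key_candidates(p, delta, c, c_faulty):
    """All keys consistent with one correct/faulty ciphertext pair."""
    diff = c ^ c_faulty
    return {x ^ p for x in range(16) if SBOX[x] ^ SBOX[x ^ delta] == diff}

def recover_key(k_true):
    """Intersect candidate sets from a few fault injections."""
    cands = set(range(16))
    for p, delta in [(0x3, 0x1), (0xA, 0x2), (0x5, 0x8)]:
        c = encrypt(p, k_true)
        cf = encrypt_faulty(p, k_true, delta)
        cands &= key_candidates(p, delta, c, cf)
    return cands
```

In the paper's setting the same filtering is applied stage by stage to each independent subkey, which is what makes a handful of injections sufficient.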
Construction of Multiplicative Monotone Span Program
The multiplicative monotone span program is one of the important tools for realizing secure multiparty computation, and constructing such programs is essential for secure multiparty computation. For any access structure, Cramer et al. gave a method to construct multiplicative monotone span programs, but it doubles the row size and also increases the column size. In this paper, we propose a new construction which yields a multiplicative monotone span program whose row size is less than double the original, without changing the column size.
Location Leakage in Distance Bounding: Why Location Privacy does not Work
In many cases, we can only have access to a service by proving we are sufficiently close to a particular location (e.g. in automobile or building access control). In these cases, proximity can be guaranteed through signal attenuation. However, by using additional transmitters an attacker can relay signals between the prover and the verifier. Distance-bounding protocols are the main countermeasure against such attacks; however, such protocols may leak information regarding the location of the prover and/or the verifier who run the distance-bounding protocol.
In this paper, we consider a formal model for location privacy in the context of distance-bounding. In particular, our contributions are threefold: we first define a security game for location privacy in distance-bounding; secondly, we define an adversarial model for this game, with two adversary classes; finally, we assess the feasibility of attaining location privacy for distance-bounding protocols. Concretely, we prove that for protocols with a beginning or a termination, it is theoretically impossible to achieve location privacy for either of the two adversary classes, in the sense that there always exists a polynomially bounded adversary that wins the security game. However, for so-called limited adversaries, which cannot see the location of arbitrary provers, carefully chosen parameters do, in practice, enable computational location privacy.
Differential Cryptanalysis and Linear Distinguisher of Full-Round Zorro
Zorro is an AES-like lightweight block cipher proposed at CHES 2013, which uses only 4 S-boxes per round. The designers showed the resistance of the cipher against various attacks and concluded that the cipher has a large security margin. Recently, Guo et al. gave a key recovery attack on full-round Zorro by using internal differential characteristics. However, the attack only works for a fraction of the keys. In this paper, the secret key selected randomly from the whole key space can be recovered with a time complexity of full-round Zorro encryptions and a data complexity of chosen plaintexts. We first observe that the fourth power of the MDS matrix used in Zorro equals the identity matrix. Moreover, several iterated differential characteristics and iterated linear trails are found due to this property. We select three characteristics with the largest probability to give a key recovery attack on Zorro, and a linear trail with the largest correlation to show a linear distinguishing attack with known plaintexts. The results show that the security of Zorro against linear and differential cryptanalysis as evaluated by the designers is insufficient, and the block cipher Zorro is far from a random permutation.
Multi-Input Functional Encryption
\emph{Functional encryption} (FE) is a powerful primitive enabling fine-grained access to encrypted data. In an FE scheme, secret keys (``tokens'') correspond to functions; a user in possession of a ciphertext Enc(x) and a token for the function f can compute f(x) but learn nothing else about x. An active area of research over the past few years has focused on the development of ever more expressive FE schemes.
In this work we introduce the notion of \emph{multi-input} functional encryption. Here, informally, a user in possession of a token for an n-ary function f and \emph{multiple} ciphertexts Enc(x_1), \ldots, Enc(x_n) can compute f(x_1, \ldots, x_n) but nothing else about the x_i.
Besides introducing the notion, we explore the feasibility of multi-input FE in the public-key and symmetric-key settings, with respect to both indistinguishability-based and simulation-based definitions of security.
CBEAM: Efficient Authenticated Encryption from Feebly One-Way Functions
We show how efficient and secure cryptographic mixing functions can be constructed from low-degree rotation-invariant functions rather than conventional S-boxes. These novel functions have surprising properties; many exhibit inherent feeble (Boolean circuit) one-wayness and offer speed/area tradeoffs unobtainable with traditional constructs. Recent theoretical results indicate that even if the inverse is not explicitly computed in an implementation, its degree plays a fundamental role in the security of the iterated composition. To illustrate these properties, we present CBEAM, a cryptographic sponge permutation based on a single Boolean function. This simple nonlinear function is used to construct a 16-bit rotation-invariant function of degree 4 (but with a very complex degree-11 inverse), which in turn is expanded into an efficient 256-bit mixing function. In addition to flexible tradeoffs in hardware, we show that efficient implementation strategies exist for software platforms ranging from low-end microcontrollers to the very latest x86-64 AVX2 instruction set. A rotational bit-sliced software implementation offers not only speeds comparable to AES but also increased security against cache side-channel attacks. Our construction supports sponge-based authenticated encryption, hashing, and PRF/PRNG modes and is highly useful as a compact ``all-in-one'' primitive for pervasive security.
Beyond Modes: Building a Secure Record Protocol from a Cryptographic Sponge Permutation
BLINKER is a light-weight cryptographic suite and record protocol built from a single permutation. Its design is based on the Sponge construction used by the SHA-3 algorithm KECCAK. We examine the SpongeWrap authenticated encryption mode and expand its padding mechanism to offer explicit domain separation and enhanced security for our specific requirements: shared secret half-duplex keying, encryption, and a MAC-and-continue mode. We motivate these enhancements by showing that, unlike legacy protocols, the resulting record protocol is secure against a two-channel synchronization attack while also having a significantly smaller implementation footprint. The design facilitates security proofs directly from a single cryptographic primitive (a single security assumption) rather than via idealization of a multitude of algorithms, paddings and modes of operation. The protocol is also uniquely suitable for an autonomous or semi-autonomous hardware implementation of protocols where the secrets never leave the module, making it attractive for smart card and HSM designs.
TOT, a Fast Multivariate Public Key Cryptosystem with Basic Secure Trapdoor
In this paper, we design a novel one-way trapdoor function, and then propose a new multivariate public key cryptosystem called TOT, which can be used for encryption, signature and authentication. Through analysis, we declare that TOT is secure, because it can resist currently known algebraic attacks if its parameters are properly chosen. Some practical implementations for TOT are also given, whose security level is at least . The comparison shows that TOT is more secure than , and (when and , is still secure), and it can reach almost the same speed of computing the secret map by and (even though was broken, its high speed has been affirmed).
Efficient Template Attacks
Template attacks remain a powerful side-channel technique to
eavesdrop on tamper-resistant hardware. They model the probability
distribution of leaking signals and noise to guide a
search for secret data values. In practice, several numerical
obstacles can arise when implementing such attacks
with multivariate normal distributions.
We propose
efficient methods to avoid these. We also demonstrate how to achieve
significant performance improvements, both in terms of information
extracted and computational cost, by pooling covariance estimates
across all data values. We provide a detailed and systematic
overview of many different options for implementing such
attacks. Our experimental evaluation of all these methods based on
measuring the supply current of a byte-load instruction executed in
an unprotected 8-bit microcontroller leads to practical guidance for
choosing an attack algorithm.
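The pooling idea at the heart of the paper can be sketched with synthetic traces: estimate one covariance matrix from all profiling classes instead of one per class, then classify with Mahalanobis distance. The leakage model, dimensions and parameters below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic leakage model (an assumption for the demo): one mean vector
# per processed value, plus Gaussian noise whose covariance is SHARED
# across values, which is exactly the situation that justifies pooling.
N_VALUES, DIM, N_TRACES = 4, 3, 200
means = rng.normal(0.0, 2.0, size=(N_VALUES, DIM))
noise_cov = np.array([[1.0, 0.3, 0.0],
                      [0.3, 1.0, 0.2],
                      [0.0, 0.2, 1.0]])
chol = np.linalg.cholesky(noise_cov)

def sample_traces(v, n):
    return means[v] + rng.normal(size=(n, DIM)) @ chol.T

# Profiling phase: per-value means, but one single pooled covariance.
profile = [sample_traces(v, N_TRACES) for v in range(N_VALUES)]
est_means = np.array([t.mean(axis=0) for t in profile])
pooled = sum(np.cov(t, rowvar=False) * (N_TRACES - 1) for t in profile)
pooled /= N_VALUES * (N_TRACES - 1)
prec = np.linalg.inv(pooled)

def classify(trace):
    """Maximum-likelihood guess under equal-covariance Gaussian templates
    (smallest Mahalanobis distance to a template mean)."""
    d = est_means - trace
    return int(np.argmin(np.einsum('ij,jk,ik->i', d, prec, d)))
```

With N_VALUES times more traces feeding one covariance estimate, the pooled matrix is far better conditioned than per-value estimates, which is the numerical benefit the abstract refers to.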
Broadcast Amplification
A d-broadcast primitive is a communication primitive that allows a
sender to send a value v from a domain of size d to a set of n parties.
A broadcast protocol emulates the d-broadcast primitive using only
point-to-point channels, even if some of the parties cheat, in
the sense that all correct recipients agree on the same value
(consistency), and if the sender is correct, then v is the value
sent by the sender (validity). A celebrated result by Pease, Shostak
and Lamport states that such a broadcast protocol exists if and only if t < n/3, where n denotes the total number of parties and t denotes
the upper bound on the number of cheaters.
This paper is concerned with broadcast protocols for any number of
cheaters (t < n), which can be possible only if, in addition to
point-to-point channels, another primitive is available. Broadcast
amplification is the problem of achieving d-broadcast when d'-broadcast
can be used once, for d' < d. Let \phi_n(d) denote the minimal such d'
for domain size d.
We show that for parties, broadcast for any domain size is
possible if only a single -broadcast is available, and broadcast of
a single bit ( ) is not sufficient, i.e., for
any . In contrast, for no broadcast amplification
is possible, i.e., for any .
However, if other parties than the sender can also broadcast some
short messages, then broadcast amplification is possible for
\emph{any}~ . Let denote the minimal such that
-broadcast can be constructed from primitives -broadcast,
\ldots, -broadcast, where (i.e., ). Note that .
We show that broadcasting bits in
total suffices, independently of , and that at least parties,
including the sender, must broadcast at least one bit. Hence
.
VMPC-R Cryptographically Secure Pseudo-Random Number Generator Alternative to RC4
We present a new Cryptographically Secure Pseudo-Random Number Generator. It uses permutations as its internal state, similarly to the RC4 stream cipher. We describe a statistical test which revealed non-random patterns in a sample of outputs of a 3-bit RC4. Our new algorithm produced 3-bit outputs indistinguishable from random in the same test. We probed outputs of the algorithm in different statistical tests with different word sizes and found no way of distinguishing the keystream from a random source. The size of the algorithm's internal state is (for an 8-bit implementation). The algorithm is cryptographically secure to the extent we were able to analyse it. Its design is simple and easy to implement. We present the generator along with a key scheduling algorithm processing both keys and initialization vectors.
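A scaled-down RC4 of the kind tested above is easy to reproduce. The sketch below (word size 3, so an 8-element state) follows the standard RC4 KSA/PRGA; the key and sample size are arbitrary, and the paper's specific statistical test is replaced by plain frequency counting.

```python
def rc4_n(key, n_words, wordsize=3):
    """RC4 scaled down to `wordsize`-bit words (state of 2^wordsize
    elements); wordsize=8 gives ordinary RC4."""
    N = 1 << wordsize
    S = list(range(N))
    j = 0
    for i in range(N):                       # key scheduling (KSA)
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n_words):                 # keystream generation (PRGA)
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % N])
    return out

def freq_counts(stream, wordsize=3):
    """Raw symbol frequencies, the input to any distinguishing statistic."""
    counts = [0] * (1 << wordsize)
    for z in stream:
        counts[z] += 1
    return counts
```

Collecting `freq_counts` over many random keys and comparing against the uniform expectation is the simplest form of the bias testing described in the abstract.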
Misuse Resistant Parallel Authenticated Encryptions
The authenticated encryption schemes that resist misuse of the initial value (or nonce) at some desired level of privacy are either two-pass or MAC-then-Encrypt constructions (inherently inefficient but providing full privacy), or online constructions such as McOE, sponge-type authenticated encryption (e.g., duplex, AEGIS) and COPA. Only the last is almost parallelizable, with a bottleneck in processing associated data. In this paper, {\em we design a new online secure authenticated encryption, called \tx{ELmE} or Encrypt-Linear mix-Encrypt, which is completely (two-stage) {\bf parallel} (even in associated data) and {\bf pipeline implementable}}. It also provides full privacy when associated data (which includes the initial value) is not repeated. The basic idea of our construction, like that of COPA, is based on \tx{EME}, an Encrypt-Mix-Encrypt type SPRP construction (secure against chosen plaintext and ciphertext attacks). Unlike \tx{EME}, we consider (as does COPA) online computable {\bf linear mixing}. In addition to removing the bottleneck, our construction optionally supports {\bf intermediate tags}, which can be verified faster with less buffer size. Intermediate tags provide security against block-wise adversaries, which is meaningful in low-end device implementations.
RankSign: an efficient signature algorithm based on the rank metric
In this paper we propose a new approach to code-based signatures that
makes use in particular of rank metric codes. While the classical approach consists of finding the unique
preimage of a syndrome through a decoding algorithm, we propose to introduce the notion
of mixed decoding of erasures and errors for building signature schemes.
In that case the difficult problem becomes, as is the case in lattice-based cryptography,
finding a preimage of weight above the Gilbert-Varshamov bound (case where
many solutions occur) rather than finding a unique preimage of weight below
the Gilbert-Varshamov bound. The paper describes RankSign: a
new signature algorithm for the rank metric
based on a new mixed algorithm for decoding erasures and errors for
the recently introduced Low Rank Parity Check (LRPC) codes.
We explain how it is possible (depending on choices
of parameters) to obtain a full decoding algorithm which is able
to find a preimage of reasonable rank weight for any random syndrome
with very high probability. We study the semantic security
of our signature algorithm and show how it is possible to reduce
the unforgeability to direct attacks on the public matrix, so that
no information leaks through signatures. Finally, we give several examples of parameters
for our scheme, some of which have a public key of size bits and a signature of size bits.
Moreover the scheme can be very fast for small base fields.
Kurosawa-Desmedt Key Encapsulation Mechanism, Revisited and More
While the hybrid public key encryption scheme of Kurosawa and Desmedt (CRYPTO 2004) is provably secure against chosen ciphertext attacks (namely, IND-CCA-secure), its associated key encapsulation mechanism (KEM) is widely known not to be CCA-secure. In this paper, we present a direct proof of IND-CCA security thanks to a simple twist on the Kurosawa-Desmedt KEM. Our KEM beats the standardized version of the Cramer-Shoup KEM in ISO/IEC 18033-2 by margins of
-- at least 20\% in encapsulation speed, and
-- up to 60\% in decapsulation speed,
which are verified by both theoretical comparison and experimental results. The efficiency of decapsulation can be even
-- about 40\% better than the decapsulation of the PSEC-KEM in ISO/IEC 18033-2
-- only slightly worse than the decapsulation of the ECIES-KEM in ISO/IEC 18033-2
which is of independent interest since the security of both PSEC-KEM and ECIES-KEM is argued using the controversial random oracle heuristic, in contrast to ours.
We then generalize the technique into hash proof systems, proposing several KEM schemes with IND-CCA security under decision linear and decisional composite residuosity assumptions respectively. All the KEMs are in the standard model, and use standard, computationally secure symmetric building blocks.
We finally show that, with additional simple yet innovative twists, the KEMs can be proved resilient to a certain amount of leakage on the secret key. Specifically, with the DDH-based scheme, a fraction of the secret key can be leaked, and when conditioned on a fixed leakage rate, we obtain the most efficient leakage-resilient KEMs regarding computation and storage.
Dynamic Countermeasure Against the Zero Power Analysis
Elliptic Curve Cryptography can be vulnerable to Side-Channel Attacks, such as the Zero Power Analysis (ZPA).
This attack takes advantage of the occurrence of special points that bring a zero-value when computing a doubling or an addition of points.
This paper analyses this attack. Some properties of these special points are made explicit. A novel dynamic countermeasure is described, in which the elliptic curve formul\ae{} are updated depending on the elliptic curve and the provided base point.
Predicate- and Attribute-Hiding Inner Product Encryption in a Public Key Setting
In this paper, we propose a reasonable definition of predicate-hiding inner product encryption (IPE) in a public key setting, which we call inner product encryption with ciphertext conversion (IPE-CC), where original ciphertexts are converted to predicate-searchable ones by a helper in possession of a conversion key. We then define a notion of full security for IPE-CC, which comprises three security properties: being adaptively predicate- and attribute-hiding in the public key setting, adaptively (fully) attribute-hiding against the helper, and usefully secure even against the private-key generator (PKG). We then present the first fully secure IPE-CC scheme, and convert it into the first fully secure symmetric-key IPE (SIPE) scheme, where security is defined in the sense of Shen, Shi, and Waters. All the security properties are proven under the decisional linear assumption in the standard model. The IPE-CC scheme is comparably as efficient as existing attribute-hiding (not predicate-hiding) IPE schemes. We also present a variant of the proposed IPE-CC scheme with the same security that achieves shorter public and secret keys. We employ two key techniques: trapdoor basis setup, in which a new trapdoor is embedded in the public key, and a multi-system proof technique, which further generalizes an extended dual system approach recently given by Okamoto and Takashima.
Self-Updatable Encryption: Time Constrained Access Control with Hidden Attributes and Better Efficiency
Revocation and key evolving paradigms are central issues in cryptography, and in PKI in particular. A novel concern related to these areas was raised in the recent work of Sahai, Seyalioglu, and Waters (Crypto 2012), who noticed that revoking past keys should at times (e.g., in the scenario of cloud storage) be accompanied by revocation of past ciphertexts (to prevent unread ciphertexts from being read by revoked users). They introduced revocable-storage attribute-based encryption (RS-ABE) as a good access control mechanism for cloud storage. RS-ABE protects against revoked users not only the future data, by supporting key-revocation, but also the past data, by supporting ciphertext-update, through which a ciphertext at some time can be updated to a new ciphertext at a later time using only the public key.
Motivated by this pioneering work, we ask whether it is possible to have a modular approach, which includes a primitive for time managed ciphertext update as a primitive. We call encryption which supports this primitive a ``self-updatable encryption'' (SUE). We then suggest a modular cryptosystems design methodology based on three sub-components: a primary encryption scheme, a key-revocation mechanism, and a time-evolution mechanism which controls the ciphertext self-updating via an SUE method, coordinated with the revocation (when needed). Our goal in this is to allow the self-updating ciphertext component to take part in the design of new and improved cryptosystems and protocols in a flexible fashion. Specifically, we achieve the following results:
- We first introduce a new cryptographic primitive called self-updatable encryption (SUE), realizing a time-evolution mechanism. In SUE, a ciphertext and a private key are associated with time. A user can decrypt a ciphertext if its time is earlier than that of his private key. Additionally, anyone (e.g., a cloud server) can update the ciphertext to a ciphertext with a newer time. We also construct an SUE scheme and prove its full security under static assumptions.
- Following our modular approach, we present a new RS-ABE scheme with shorter ciphertexts than that of Sahai et al. and prove its security. The length efficiency is mainly due to our SUE scheme and the underlying modularity.
- We apply our approach to predicate encryption (PE) supporting attribute-hiding property, and obtain a revocable-storage PE (RS-PE) scheme that is selectively-secure.
- We further demonstrate that SUE is of independent interest, by showing it can be used for timed-release encryption (and its applications), and for augmenting key-insulated encryption with forward-secure storage.
Multi-user collisions: Applications to Discrete Logarithm, Even-Mansour and PRINCE
Uncategorized
Uncategorized
In this paper, we investigate the multi-user setting both in public-key and in secret-key cryptanalytic applications. In this setting, the adversary tries to recover keys of many users in parallel more efficiently than with classical attacks, \textit{i.e.}, at a cost below the number of recovered keys multiplied by the time complexity of finding a single key, by amortizing the cost among several users. One possible scenario is to recover a single key in a large set of users more efficiently than to recover a key in the classical model. Another possibility is, after some shared precomputation, to be able to learn individual keys very efficiently. This latter model is close to traditional time/memory tradeoff attacks with precomputation. With these goals in mind, we introduce two new algorithmic ideas to improve collision-based attacks in the multi-user setting. Both ideas are derived from the parallelizable collision search as proposed by van~Oorschot and Wiener. This collision search uses precomputed chains obtained by iterating some basic function. In our cryptanalytic application, each pair of merging chains can be used to correlate the keys of two distinct users. The first idea is to construct a graph, whose vertices are keys and whose edges are these correlations. When the graph becomes connected, we simultaneously recover all the keys. Thanks to random graph analysis techniques, we can show that the number of edges needed to make this event occur is small enough to obtain some improved attacks.
The second idea modifies the basic technique of van~Oorschot and Wiener: instead of waiting for two chains to merge, we now require that they become {\it parallel}.
We first show that, using the first idea alone, we can recover the discrete logarithms of users in a group of
size in time . We put these two ideas together and we show that in the multi-user Even-Mansour scheme, \textit{all} the
keys of users can be found with queries for each user (where is the domain
size). Finally, we consider the PRINCE block cipher (with 128-bit keys and 64-bit blocks)
and find the keys of
users among a set of users in time . We also describe a new generic attack in the classical model for PRINCE.
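The van Oorschot-Wiener parallel collision search that both ideas build on can be sketched with distinguished points: chains of iterates are stored only when they reach a distinguished value, and two chains arriving at the same distinguished point are walked forward together to locate the collision. The 20-bit toy function and parameters below are illustrative, not the attacked primitives.

```python
import hashlib

DOMAIN = 0xFFFFF  # 20-bit toy domain

def f(x):
    """A random-looking iteration function on 20-bit values (a stand-in
    for the chain-step function built from the attacked primitive)."""
    h = hashlib.sha256(x.to_bytes(4, 'big')).digest()
    return int.from_bytes(h[:4], 'big') & DOMAIN

def is_dp(x):
    """Distinguished points: low 8 bits are zero (about 1/256 of points)."""
    return x & 0xFF == 0

def run_chain(start, max_len=1 << 16):
    """Iterate f until a distinguished point; return (endpoint, #steps)."""
    x = start
    for steps in range(1, max_len):
        x = f(x)
        if is_dp(x):
            return x, steps
    return None

def find_collision():
    seen = {}  # distinguished endpoint -> (chain start, chain length)
    start = 0
    while True:
        res = run_chain(start)
        if res is not None:
            point, n2 = res
            if point in seen:
                s1, n1 = seen[point]
                s2 = start
                # Align both chains to the same distance from the endpoint,
                # then step them together until their next values agree.
                while n1 > n2:
                    s1, n1 = f(s1), n1 - 1
                while n2 > n1:
                    s2, n2 = f(s2), n2 - 1
                if s1 != s2:
                    while f(s1) != f(s2):
                        s1, s2 = f(s1), f(s2)
                    return s1, s2
            else:
                seen[point] = (start, n2)
        start += 1
```

In the multi-user attacks, each merge like this one correlates two users' keys, producing the graph edges described above.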
On cross joining de Bruijn sequences
We explain the origins of Boolean feedback functions of nonlinear feedback shift registers (NLFSRs) of fixed order n generating de Bruijn binary sequences. They all come into existence by cross-joining operations, starting from one maximum-period feedback shift register, e.g., a linear one, which always exists for any order n. The result obtained yields some constructions of NLFSRs generating maximum-period binary sequences.
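The simplest instance of the construction described above, joining the all-zero cycle into a maximum-period LFSR sequence to obtain a de Bruijn sequence, can be checked directly. The order-4 LFSR below (feedback x^4 + x + 1) is a standard textbook example, not one taken from the paper.

```python
def mseq(n_bits, seed):
    """Maximum-period sequence from the LFSR with primitive feedback
    x^4 + x + 1, i.e. the recurrence s[t+4] = s[t+1] XOR s[t]."""
    w = list(seed)
    out = []
    for _ in range(n_bits):
        out.append(w[0])
        w = w[1:] + [w[0] ^ w[1]]
    return out

def to_de_bruijn(seq, n):
    """Splice the missing all-zero state into the unique run of n-1 zeros
    of a period-(2^n - 1) sequence: the simplest cross-join-style fix-up.
    (Assumes the zero run does not wrap, true for the seed used here.)"""
    for i in range(len(seq)):
        if all(seq[(i + j) % len(seq)] == 0 for j in range(n - 1)):
            return seq[:i] + [0] * n + seq[i + n - 1:]
    raise ValueError("no run of n-1 zeros found")

def is_de_bruijn(seq, n):
    """Check that every n-bit window occurs exactly once (cyclically)."""
    windows = {tuple(seq[(i + j) % len(seq)] for j in range(n))
               for i in range(len(seq))}
    return len(seq) == 2 ** n and len(windows) == 2 ** n
```

The paper's general cross-join operation swaps the successors of a conjugate state pair to merge cycles; inserting the zero state is the special case of joining the trivial all-zero cycle.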
Vectorization of ChaCha Stream Cipher
This paper describes software optimization for the stream cipher ChaCha. We leverage the wide vectorization capabilities of the new AVX2 architecture to speed up ChaCha encryption (and decryption) on the latest x86_64 processors. In addition, we show how to apply vectorization for the future AVX512 architecture and obtain further speedup. This leads to significant performance gains. For example, on the latest Intel Haswell microarchitecture, our AVX2 implementation performs at 1.43 cycles per byte (on a 4KB message), which is ~2x faster than the current implementation in the Chromium project.
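For reference, the scalar ChaCha quarter-round that the vectorized code executes in parallel across SIMD lanes looks as follows (a plain Python sketch, not the AVX2 implementation from the paper):

```python
MASK = 0xFFFFFFFF

def rotl32(v, n):
    """32-bit left rotation."""
    return ((v << n) | (v >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """The ChaCha quarter-round on four 32-bit words; the AVX2/AVX512
    versions run several independent copies of this across SIMD lanes."""
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

Because the four column (and then diagonal) quarter-rounds of a ChaCha round touch disjoint words, they map naturally onto vector lanes, which is what the AVX2 speedup exploits.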
A Revocable Online-Offline Certificateless Signature Scheme without Pairing
Certificateless Public Key Cryptography is a widely studied paradigm due to its advantages of avoiding the key-escrow problem and dispensing with certificates. Online-offline signature schemes are
extremely relevant today because of their great practical applications. In an online-offline signature
scheme all the heavy computation is done on powerful processors and stored securely in the offline
phase, and the online component requires only light computation. Hence, it is widely used in several
low-resource devices like mobile phones, etc. Revocation is another important problem of wide interest
as it helps to keep a check on misbehaving users. Currently, there are very few revocable certificateless
signature schemes in the literature. We have addressed some of the limitations of the previously existing
schemes and designed a new model for the same that involves periodic time generated keys. We present
a revocable online-offline certificateless signature scheme without pairing. Pairing, though a very useful
mathematical function, comes at the cost of heavy computation. Our scheme is proved secure in the
random oracle model using a tight security reduction to the computational Diffie-Hellman problem.
Practical Signatures from the Partial Fourier Recovery Problem
We present PASSSign, a variant of the prior PASS and PASS-2 proposals, as a candidate for a practical post-quantum signature scheme. Its hardness is based on the problem of recovering a ring element with small norm from an incomplete description of its Chinese remainder representation. For our particular instantiation, this corresponds to the recovery of a signal with small infinity norm from a limited set of its Fourier coefficients.
The key improvement over previous versions of PASS is the introduction of a rejection sampling technique from Lyubashevsky (2009) which assures that transcript distributions are completely decoupled from the keys that generate them.
Although the scheme is not supported by a formal security reduction, we present extensive arguments for its security and derive concrete parameters based on the performance of state-of-the-art lattice reduction and enumeration techniques.
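The rejection sampling technique credited to Lyubashevsky can be shown in one dimension with a toy enumeration (all parameters invented): a candidate z = y + s*c is released only when it falls in an interval that every admissible secret can reach, so the distribution of released values is identical for all secrets and the transcript leaks nothing about the key.

```python
def accepted_zs(s, c, B, T):
    """Enumerate every masking value y in [-B, B]; the candidate
    signature value z = y + s*c is released only if it lands inside
    [-(B-T), B-T], an interval reachable for EVERY secret with
    |s*c| <= T. The returned sorted list is the released distribution."""
    assert abs(s * c) <= T
    out = []
    for y in range(-B, B + 1):
        z = y + s * c
        if abs(z) <= B - T:
            out.append(z)
    return sorted(out)
```

Since for each z in the safe interval there is exactly one y in [-B, B] with z = y + s*c, the accepted list is the full interval regardless of s, which is precisely the transcript-decoupling property the abstract describes.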
A Meet-in-the-Middle Attack on Round-Reduced mCrypton Using the Differential Enumeration Technique
Uncategorized
Uncategorized
This paper describes a meet-in-the-middle (MITM) attack against round-reduced versions of the block cipher mCrypton-64/96/128. We construct a 4-round distinguisher and lower the memory requirement from to using the differential enumeration technique. Based on the distinguisher, we launch a MITM attack on 7-round mCrypton-64/96/128 with complexities of 64-bit blocks and encryptions. Then we extend the basic attack to 8 rounds for mCrypton-128 by adding some key-bridging techniques. The 8-round attack on mCrypton-128 requires a time complexity and a memory complexity . Furthermore, we construct a 5-round distinguisher and propose a MITM attack on 9-round mCrypton-128 with a time complexity of encryptions and a memory complexity of 64-bit blocks.
Last updated: 2014-05-15
Improving security and efficiency for multi-authority access control system in cloud storage
Uncategorized
Uncategorized
Multi-Authority Attribute-Based Encryption (MA-ABE) is an emerging cryptographic primitive for enforcing fine-grained attribute-based access control on outsourced data in cloud storage. However, most previous multi-authority attribute-based systems are either proven secure only in a weak model or lack efficient user revocation. In this paper, we propose a novel multi-authority attribute-based data access control system for cloud storage. We construct a new multi-authority CP-ABE scheme with decryption outsourcing, largely eliminating the decryption overhead for users by outsourcing the expensive bilinear pairing operations to the cloud servers. The proposed scheme is proven adaptively secure in the standard model and supports any monotone access policy. We also design an efficient attribute-level user revocation approach with low computation cost. The security analysis and numerical comparisons indicate that the proposed system is secure, efficient and scalable.
Obfuscation-based Non-black-box Simulation and Four Message Concurrent Zero Knowledge for NP
As recent studies show, the notions of *program obfuscation* and *zero
knowledge* are intimately connected. In this work, we explore this connection further, and prove the following general result. If there exists *differing input obfuscation* (diO) for the class of all polynomial time Turing machines, then there exists a *four message, fully concurrent zero-knowledge* proof system for all languages in NP with negligible soundness error. This result is constructive: given diO, our reduction yields an explicit protocol along with an *explicit* simulator that is ``straight line'' and runs in strict
polynomial time.
Our reduction relies on a new non-black-box simulation technique which does not use the PCP theorem. In addition to assuming diO, our reduction also assumes (standard and polynomial time) cryptographic assumptions such as collision-resistant hash functions.
The round complexity of our protocol also sheds new light on the *exact* round complexity of concurrent zero-knowledge. It shows, for the first time, that in the realm of non-black-box simulation, concurrent zero-knowledge may not necessarily require more rounds than *stand alone* zero-knowledge!
Dietary Recommendations for Lightweight Block Ciphers: Power, Energy and Area Analysis of Recently Developed Architectures
In this paper we perform a comprehensive area, power, and energy analysis of some of the most recently-developed lightweight block ciphers and we compare them to the standard AES algorithm. We do this for several different architectures of the considered block ciphers. Our evaluation method consists of estimating the pre-layout power consumption and the derived energy using Cadence Encounter RTL Compiler and ModelSIM simulations.We show that the area is not always correlated to the power and energy consumption, which is of importance for mobile battery-fed devices. As a result, this paper can be used to make a choice of architecture when the algorithm has already been fixed; or it can help deciding which algorithm to choose based on energy and key/block length requirements.
On the Power of Rewinding Simulators in Functional Encryption
Uncategorized
Uncategorized
In a seminal work, Boneh, Sahai and Waters (BSW, for short) [TCC'11] showed that for functional encryption the indistinguishability notion of security (IND-Security) is weaker than simulation-based security (SIM-Security), and that SIM-Security is in general impossible to achieve. This has opened up the door to a plethora of papers showing feasibility and new impossibility results. Nevertheless, the quest for better definitions that (1) overcome the limitations of IND-Security and (2) the known impossibility results, is still open.
In this work, we explore the benefits and the limits of using efficient rewinding black-box simulators to argue security. To do so, we introduce a new simulation-based security definition, which we call rewinding simulation-based security (RSIM-Security). It is weaker than previous simulation-based notions but still strong enough to rule out the pathological schemes that IND-Security admits (IND-Security is implied by RSIM-Security). This is achieved by retaining a strong simulation-based flavour while granting the simulator more rewinding power, taking care to guarantee that it cannot learn more than what the adversary would learn in any run of the experiment. We find that the BSW impossibility result does not hold for RSIM-Security, and that IND-Security is equivalent to RSIM-Security for Attribute-Based Encryption in the standard model. Nevertheless, we prove that there is a setting where rewinding simulators are of no help: the adversary can put in place a strategy that forces the simulator to rewind continuously.
Using Hamiltonian Totems as Passwords
Physical authentication brings extra security to software authentication by adding real-world input to conventional authentication protocols.
Existing solutions such as textual and graphical passwords are subject to brute-force and shoulder-surfing attacks, while users are reluctant to use biometrics for identification due to its intrusiveness.
This paper uses Hamiltonian tokens as an authentication means. The proposed token structure offers many possible configurations ({\sl i.e.}, passwords) and is small enough to be carried on a physical keychain.
After presenting our general idea, we describe an efficient algorithm to produce these tokens. Our procedure was validated by running a recognition campaign on a wide batch of synthetic samples, and experimented on prototypes manufactured using a commercial 3D-printer.
Fully Deniable Mutual Authentication Protocol Based on RSA Signature
Deniable authentication protocols allow a sender to authenticate a receiver, in a way that the receiver cannot convince a third party that such authentication (or any authentication) ever took place. In this study, we construct a fully deniable mutual authentication protocol based on RSA signature, and then a deniable authenticated key exchange protocol is constructed from the proposed protocol.
Efficient CCA-secure Threshold Public-Key Encryption Scheme
In threshold public-key encryption, the decryption key is divided into n
shares, each one of which is given to a different decryption user in order to avoid single points of failure. In this study, we propose a simple and efficient non-interactive threshold public-key encryption scheme by using the hashed Diffie-Hellman assumption in bilinear groups.
Compared with the other related constructions, the proposed scheme is more efficient.
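The division of a decryption key into n shares, any t of which suffice, can be illustrated with Shamir's classic threshold scheme (a generic sketch over a prime field, not the paper's bilinear-group construction):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the toy field for this sketch

def share(secret, t, n):
    # Shamir sharing: a random degree-(t-1) polynomial f with f(0) = secret;
    # share i is the point (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret from any
    # t of the shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t shares reconstruct the key while fewer reveal nothing, which is what removes the single point of failure mentioned above.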
Plaintext Recovery Attacks Against WPA/TKIP
We conduct an analysis of the RC4 algorithm as it is used in the IEEE WPA/TKIP wireless standard. In that standard, RC4 keys are computed on a per-frame basis, with specific key bytes being set to known values that depend on 2 bytes of the WPA frame counter (called the TSC). We observe very large, TSC-dependent biases in the RC4 keystream when the algorithm is keyed according to the WPA specification. These biases permit us to mount an effective statistical, plaintext-recovering attack in the situation where the same plaintext is encrypted in many different frames (the so-called ``broadcast attack'' setting). We assess the practical impact of these attacks on WPA/TKIP.
Authenticated Multiple Key Establishment Protocol for Wireless Sensor Networks
Uncategorized
Uncategorized
This article proposes a provably secure authenticated multiple key establishment protocol for wireless sensor networks. Security of the protocol is based on the computational infeasibility of solving the Elliptic Curve Discrete Logarithm Problem and the Computational Diffie-Hellman Problem on bilinear pairings. User authentication is one of the most challenging security requirements in wireless sensor networks (WSNs): the correct session key must be established between two adjacent nodes of a WSN to achieve this security goal. We prove that the proposed protocol is secure against attacks on data integrity and known-key security attacks on the session key. It also provides perfect forward secrecy.
Asymptotically Efficient Lattice-Based Digital Signatures
We present a general framework that converts certain types of linear collision-resistant hash
functions into one-time signatures. Our generic construction can be instantiated based on both
general and ideal (e.g. cyclic) lattices, and the resulting signature schemes are provably secure
based on the worst-case hardness of approximating the shortest vector (and other standard
lattice problems) in the corresponding class of lattices to within a polynomial factor. When
instantiated with ideal lattices, the time complexity of the signing and verification algorithms,
as well as key and signature size is almost linear (up to poly-logarithmic factors) in the dimension
n of the underlying lattice. Since no sub-exponential (in n) time algorithm is known to solve
lattice problems in the worst case, even when restricted to ideal lattices, our construction gives
a digital signature scheme with an essentially optimal performance/security trade-off.
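The general hash-to-one-time-signature paradigm that the framework above instantiates with lattices can be illustrated, in a much simpler non-lattice form, by Lamport's classic construction from a hash function (illustrative only; this is not the paper's lattice-based scheme):

```python
import hashlib
import os

H = lambda data: hashlib.sha256(data).digest()

def msg_bits(msg):
    # 256 digest bits of the message, one per key pair.
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def keygen():
    # One random preimage pair per digest bit; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg):
    # Reveal one preimage per digest bit. A key pair must be used only ONCE.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    # Hash each revealed preimage and compare against the public key.
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, msg_bits(msg))))
```

Forging a signature on a new message requires inverting the hash on at least one position, which is the same security intuition, though the paper's linear hash functions yield far shorter keys and signatures.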
Asynchronous MPC with a Strict Honest Majority Using Non-equivocation
Multiparty computation (MPC) among n parties can tolerate up to t<n/2 active corruptions in a synchronous communication setting; however, in an asynchronous communication setting, the resiliency bound decreases to only t<n/3 active corruptions. We improve the resiliency bound for asynchronous MPC (AMPC) to match synchronous MPC using non-equivocation.
Non-equivocation is a message authentication mechanism to restrict a corrupted sender from making conflicting statements to different (honest) parties. It can be implemented using an increment-only counter and a digital signature oracle, realizable with trusted hardware modules readily available in commodity computers and smartphone devices. A non-equivocation mechanism can also be transferable and allow a receiver to verifiably transfer the authenticated statement to other parties. In this work, using transferable non-equivocation, we present an AMPC protocol tolerating t<n/2 faults. From a practical point of view, our AMPC protocol requires fewer setup assumptions than the previous AMPC protocol with t<n/2 by Beerliová-Trubíniová, Hirt and Nielsen [PODC 2010]: unlike their AMPC protocol, it does not require any synchronous broadcast round at the beginning of the protocol and avoids the threshold homomorphic encryption setup assumption. Moreover, our AMPC protocol is also efficient and provides a gain of \Theta(n) in the communication complexity per multiplication gate, over the AMPC protocol of Beerliová-Trubíniová et al. In the process, using non-equivocation, we also define the first asynchronous verifiable secret sharing (AVSS) scheme with t<n/2, which is of independent interest to threshold cryptography.
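The counter-plus-signature mechanism can be sketched as follows: because each counter value is attested at most once, a sender cannot produce two valid tags for conflicting statements under the same counter. This is a hypothetical sketch with an HMAC standing in for the digital-signature oracle, not an interface from the paper:

```python
import hashlib
import hmac
import os

class NonEquivocationModule:
    """Toy model of trusted hardware: an increment-only counter bound to
    an authentication oracle (HMAC here, a signature in the real setting)."""

    def __init__(self):
        self._key = os.urandom(32)
        self._ctr = 0

    def attest(self, message: bytes):
        # The counter only ever increases, so a value is never attested twice.
        self._ctr += 1
        tag = hmac.new(self._key,
                       self._ctr.to_bytes(8, "big") + message,
                       hashlib.sha256).digest()
        return self._ctr, tag

    def verify(self, ctr: int, message: bytes, tag: bytes) -> bool:
        # Check that (ctr, message) is exactly what the module attested.
        expected = hmac.new(self._key,
                            ctr.to_bytes(8, "big") + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)
```

A receiver who sees a valid tag for counter value c knows no honest party can hold a different statement validly tagged with the same c, which is exactly the equivocation the mechanism rules out.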
Functional Encryption and Property Preserving Encryption: New Definitions and Positive Results
Uncategorized
Uncategorized
Functional Encryption (FE) is an exciting new paradigm that extends the notion of public key encryption. In this work we explore the security of Inner Product Functional Encryption schemes with the goal of achieving the highest security against practically feasible attacks. In addition, we improve efficiency/ underlying assumptions/ security achieved by existing inner product Functional Encryption and Property Preserving Encryption schemes, in both the private and public key setting. Our results can be summarized as follows:
- We study whether known impossibilities for achieving strong SIM based security imply actual real world attacks. For this, we present a new UC-style SIM based definition of security that captures both data and function hiding, both public key and symmetric key settings and represents the "dream" security of FE. While known impossibilities rule out its achievability in the standard model, we show, surprisingly, that it can be achieved in the generic group model for Inner Product FE (Katz et al., Eurocrypt 2008). This provides evidence that FE implementations may enjoy extremely strong security against a large class of real world attacks, namely generic attacks.
- We provide several improvements to known constructions of Inner Product FE. In the private key setting, the construction by Shen et al. (TCC 2009) was based on non-standard assumptions, used composite order groups, and only achieved selective security. We give the first construction of a symmetric key inner product FE which is built using prime order groups, and is fully secure under the standard DLIN assumption. Our scheme is more efficient in the size of key and ciphertext than Shen et al.'s, when the latter is converted to prime-order groups.
- We give the first construction of a property preserving encryption (PPE) scheme for inner-products. Our scheme is secure under the DLIN assumption and satisfies the strongest definition of security -- Left-or-Right security in the standard model. Note that the only previously known construction for PPE by Pandey et al. (Eurocrypt 2012), which was claimed to be secure in the generic group model, was recently attacked by Chatterjee and Das, making our construction the first candidate for PPE.
Last updated: 2013-11-17
Privacy Preserving Unique Statistics in a Smart Grid
Uncategorized
Uncategorized
Smart meters are widely deployed to provide fine-grained data that correspond to tenant power consumption. These data are analyzed by suppliers for personalized billing, more accurate statistics and energy consumption predictions.
Indirectly, this aggregation of data can reveal personal information about tenants, such as the number of persons in a house, vacation periods and appliance preferences.
To date, work in the area has focused mainly on privacy-preserving aggregate statistical functions such as the computation of sums.
In this paper we propose a novel solution for privacy-preserving unique data collection per smart meter. We consider identifying the maximum consumption of a smart meter as an interesting property for energy suppliers, as it can be employed in energy forecasting to allocate electricity in advance. In our solution we employ an order-preserving encryption scheme, in which the order of numerical data is preserved in the ciphertext space. We enhance the accuracy of maximum consumption by utilizing a delta encoding scheme.
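The delta encoding step can be sketched as follows (a generic sketch of the technique; the paper's exact encoding may differ):

```python
def delta_encode(readings):
    # Keep the first reading, then store successive differences, which are
    # typically small for smoothly varying consumption data.
    return [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]

def delta_decode(deltas):
    # Undo the encoding by running prefix sums.
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

Smaller deltas occupy a smaller numeric range, which is what improves accuracy when the values are subsequently mapped through an order-preserving encryption.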
CODING - Stream Cipher Methods by Varying Components during Ciphering Data
Uncategorized
Uncategorized
The kernel of the symmetric block ciphering methods presented here is the coupling of XOR operations with invertible substitution tables S over all possible 256**t byte groups (with t = 1, 2, 3, ... bytes, fixed at the beginning), derived from keys:
K(block) := S(S(block) * E_u) * E_l, with
- E_u an upper and E_l a lower triangular (byte-group) matrix with (byte-block-length/t)**2 entries, value 1 at all non-zero positions,
- + the byte-group-wise addition without carry ('xor'; 'not xor' is possible too),
- * the (vector) multiplication belonging to that addition.
Variable block lengths (v*t, or (mod t) > 0) are possible. This kernel can be applied n times:
K**n(block) := K(...K(block)...) with n K-operations, in which n can be variable.
Because XOR operations and S-tables only operate in a useful manner if 'block' is not too "homogeneous", and for safety, two further components are determined from keys
- the parameters of two pseudo-random processes, - an operation key
used at the beginning and at the end to get a ciphered block:
cblock := S(ZZ_2 + S(Op_2 + S(K**n(Op_1 + S(ZZ_1 + S(block)))))), with
- ZZ_1 and ZZ_2 the bytes of the first and second pseudo-random number process in block length,
- Op_1 and Op_2 the first (front) and second (back) part (or a multiple) of the operation key.
An initial key is first expanded to t*256**t bytes (all further keys have this size too) and can be modified so that the resulting key does not differ statistically from a random key.
To build an invertible S-table, the value (modulo n) of just enough consecutive key bits to represent the number n-1 determines how the last n S-table elements are shifted cyclically, for n = 2 to 256**t. In this way all 256**t! such tables can be generated from the top bits of all possible keys, and each has a length of t*256**t bytes.
The byte-group value +1 at a position of an S-table determines the byte group in the key from which 2*7 bytes are used to initialize two floating point numbers (IEEE 754) for a pseudo-random process. The floating point numbers are re-initialized if a process would become cyclic.
The idea is to modify (operation) keys similarly to data blocks, in order to generate and use more or less continually new S-tables, new pseudo-random processes, and new operation keys while ciphering data.
Inspections show that, despite knowledge of two of the three components (S-table, pseudo-random parameters, and operation key) as well as knowledge of original and ciphered data, the missing third component cannot be inferred if component modifications are carried out from time to time.
It is likewise shown that from knowledge of the three components generated by a key, the key itself cannot be inferred (because interim operation keys are used). Thus compromise of data, and with it of components, does not affect data ciphered before the component change leading to the compromised components. By additionally using separate components only for the modification of keys, it is guaranteed that data sections ciphered after a component change starting from compromised components are not automatically compromised as well.
For these reasons a safe stream ciphering should be possible, as already constructed for t = 1, 2, 3.
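A greatly simplified sketch of the substitute-and-xor kernel for t = 1 (one key-derived S-table and a fixed pad standing in for the pseudo-random processes and operation keys; all names are illustrative, not the author's implementation):

```python
import random

def make_stable(key_seed, size=256):
    # Key-derived invertible substitution table over single bytes (t = 1),
    # plus its inverse for deciphering.
    rng = random.Random(key_seed)
    table = list(range(size))
    rng.shuffle(table)
    inverse = [0] * size
    for i, v in enumerate(table):
        inverse[v] = i
    return table, inverse

def encipher(block, table, pad):
    # Simplified kernel: substitute each byte, then xor a key-derived pad byte.
    return bytes(table[b] ^ p for b, p in zip(block, pad))

def decipher(cblock, inverse, pad):
    # Undo the xor, then apply the inverse substitution.
    return bytes(inverse[c ^ p] for c, p in zip(cblock, pad))
```

The real scheme layers several such substitutions and xors, mixes in triangular-matrix multiplications, and keeps regenerating tables, pads and keys during ciphering; this sketch only shows why invertibility is preserved at each layer.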
Fast Software Implementation of Binary Elliptic Curve Cryptography
This paper presents an efficient and side channel protected software implementation of point multiplication for the standard NIST and SECG binary elliptic curves. The enhanced performance is achieved by improving the López-Dahab/Montgomery method at the algorithmic level, and by leveraging Intel's AVX architecture and the pclmulqdq processor instruction at the coding level.
The fast carry-less multiplication is further used to speed up the reduction on the newest Haswell platforms.
For the five NIST curves over binary fields GF(2^m) with m in {163, 233, 283, 409, 571}, the resulting point multiplication implementation is about 6 to 12 times faster than that of OpenSSL-1.0.1e, enhancing the ECDHE and ECDSA algorithms significantly.
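The pclmulqdq instruction mentioned above computes a carry-less product, i.e. multiplication of polynomials over GF(2). A bit-level sketch of the same operation (illustrative only, not the paper's optimized code):

```python
def clmul(a: int, b: int) -> int:
    # Carry-less multiplication: shift-and-xor instead of shift-and-add.
    # Equivalent to multiplying the polynomials over GF(2) whose
    # coefficients are the bits of a and b.
    r = 0
    while b:
        if b & 1:
            r ^= a   # xor replaces addition, so carries never propagate
        a <<= 1
        b >>= 1
    return r

# Example: (x + 1)^2 = x^2 + 1 over GF(2), i.e. 0b11 * 0b11 = 0b101.
```

Field multiplication in GF(2^m) is such a carry-less product followed by reduction modulo the field polynomial, which is the step the paper accelerates on Haswell.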
An efficient FHE proposal based on the hardness of solving systems of nonlinear multivariate equations (II)
We propose a general framework to develop fully homomorphic encryption schemes (FHE) without using Gentry's technique. Initially, a private-key cryptosystem is built over Z_n (n being an RSA modulus). An encryption of a plaintext is a randomly chosen vector on which a secret multivariate polynomial evaluates to that plaintext.
This private-key cryptosystem is not homomorphic in the sense that the vector sum is not a homomorphic operator; non-linear homomorphic operators are then developed. The security relies on the difficulty of solving systems of nonlinear equations (an NP-complete problem). While the security of our scheme has not been reduced to a provably hard instance of this problem, its security is globally investigated.
Last updated: 2016-01-20
NEW DIGITAL SIGNATURE SCHEME USING MULTIPLE PRIVATE KEYS OVER NON-COMMUTATIVE DIVISION SEMIRINGS
In this paper, we propose a new signature scheme connecting two private keys and two public keys based on a general non-commutative division semiring. The key idea of our technique involves three core steps. In the first step, we assemble polynomials on the additive structure of the non-commutative division semiring and take them as the underlying work infrastructure. In the second step, we generate the first private and public key pair using the polynomial symmetrical decomposition problem. In the third step, we generate the second private and public key pair using the discrete logarithm; we use the factorization theorem to generate the private key in the discrete logarithm problem. By doing so, we can execute a new signature scheme on the multiplicative structure of the semiring using multiple private keys. The security of the proposed signature scheme is based on the intractability of the Polynomial Symmetrical Decomposition Problem and the discrete logarithm problem over the given non-commutative division semiring. Hence, this signature scheme is strong from a security point of view.
On the Resilience and Uniqueness of CPA for Secure Broadcast
We consider the Secure Broadcast problem in incomplete networks. We study the resilience of the Certified Propagation Algorithm (CPA), which is particularly suitable for ad hoc networks. We address the issue of determining the maximum number of corrupted players that CPA can tolerate under the t-locally bounded adversary model, in which the adversary may corrupt at most t players in each player's neighborhood. For any graph and dealer-node we provide upper and lower bounds on this number that can be efficiently computed in terms of a graph-theoretic parameter that we introduce in this work. Along the way we obtain an efficient 2-approximation algorithm for this parameter. We further introduce two more graph parameters, one of which matches the maximum tolerable number exactly. Our approach allows us to provide an affirmative answer to the open problem of CPA Uniqueness posed by Pelc and Peleg in 2005.
Weakness of F_{3^{6*1429}} and F_{2^{4*3041}} for Discrete Logarithm Cryptography
Uncategorized
Uncategorized
In 2013, Joux and then Barbulescu et al. presented new algorithms for computing discrete logarithms in finite fields of small characteristic. Shortly thereafter, Adj et al. presented a concrete analysis showing that, when combined with some steps from classical algorithms, the new algorithms render the finite field F_{3^{6*509}} weak for pairing-based cryptography. Granger and Zumbrägel then presented a modification of the new algorithms that extends their effectiveness to a wider range of fields.
In this paper, we study the effectiveness of the new algorithms combined with a carefully crafted descent strategy for the fields F_{3^{6*1429}} and F_{2^{4*3041}}. The intractability of the discrete logarithm problem in these fields is necessary for the security of pairings derived from supersingular curves with embedding degree 6 and 4 defined, respectively, over F_{3^{1429}} and F_{2^{3041}}; these curves were believed to enjoy a security level of 192 bits against attacks by Coppersmith's algorithm. Our analysis shows that these pairings offer security levels of at most 96 and 129 bits, respectively, leading us to conclude that they are dead for pairing-based cryptography.
TRS-80 With A Keccak Sponge Cake
The subject of this paper, an improbable implementation of a recently standardized cryptographic hash function on a thirty-five-year-old microcomputer, may strike some as unusual and recreative at best. In the tedious discipline of the process, however, lessons were learned about implementation trade-offs for basic cryptographic primitives, which may prove interesting in the current context of securing (small to nano) machine-to-machine communications. More importantly, that such insights might stem from revisiting how earlier computing platforms relate to the code written on them, casting a distant light on how modern code connects to material, historical and contextual factors, certainly illuminates the joys of retrocomputing.
Masking Tables---An Underestimated Security Risk
The literature on side-channel analysis describes numerous masking schemes designed to protect block ciphers at the implementation level. Such masking schemes typically require the computation of masked tables prior to the execution of an encryption function. In this paper we revisit an attack which directly exploits this computation in such a way as to recover all or some of the masks used. We show that securely implementing masking schemes is only possible where one has access to a significant amount of random numbers.
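The masked-table pre-computation that such attacks target can be sketched as follows; the identity S-box and the particular masks are placeholders, not the schemes analyzed in the paper:

```python
# Identity S-box as a placeholder; a real cipher would use e.g. the AES S-box.
SBOX = list(range(256))

def masked_table(sbox, m_in, m_out):
    # Recompute the S-box so that it maps masked inputs to masked outputs:
    # table[x ^ m_in] = sbox[x] ^ m_out for every x. This loop, executed
    # before each encryption, touches both masks at every iteration, which
    # is the leakage the attack exploits to recover them.
    table = [0] * 256
    for x in range(256):
        table[x ^ m_in] = sbox[x] ^ m_out
    return table
```

Because all 256 iterations involve m_in and m_out, an attacker who can observe this computation gets many leakage samples per mask, which is why the paper concludes that large amounts of fresh randomness are needed.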
Elliptic Curve Cryptography in Practice
In this paper, we perform a review of elliptic curve cryptography (ECC), as it is used in practice today, in order to reveal unique mistakes and vulnerabilities that arise in implementations of ECC. We study four popular protocols that make use of this type of public-key cryptography: Bitcoin, secure shell (SSH), transport layer security (TLS), and the Austrian e-ID card. We are pleased to observe that about 1 in 10 systems support ECC across the TLS and SSH protocols. However, we find that despite the high stakes of money, access and resources protected by ECC, implementations suffer from vulnerabilities similar to those that plague previous cryptographic systems.
Last updated: 2013-11-15
A Key Compromise Impersonation attack against Wang's Provably Secure Identity-based Key Agreement Protocol
In a 2005 IACR report, Wang published an efficient identity-based key agreement protocol (IDAK) suitable for resource-constrained devices.
The author shows that the IDAK key agreement protocol is secure in the Bellare-Rogaway model with random oracles and also provides an ad-hoc security proof claiming that the IDAK protocol is not vulnerable to Key Compromise Impersonation attacks.
In this report, we claim that the IDAK protocol is vulnerable to key-compromise impersonation attacks. Indeed, Wang's results are valid only for a passive adversary that can corrupt parties or reveal certain session-specific data but is not allowed to manipulate protocol transcripts; a model considering this type of adversary is unable to afford KCI resilience.
SSS-V2: Secure Similarity Search
Encrypting information is regarded as one of the most substantial approaches to protecting users' sensitive information in a radically changing internet technology era. In prior research, similarity search over encrypted documents has been considered infeasible, because a single-bit difference in a plaintext results in an enormous difference in the corresponding ciphertext. We propose a novel scheme for Secure Similarity Search (SSS) over encrypted documents by applying character-wise encryption with approximate string matching to keyword index search systems. To this end, we define the security requirements of similarity search over encrypted data, propose two similarity search schemes, and formally prove the security of the schemes. The first scheme is more efficient, while the second achieves perfect similarity search privacy. Surprisingly, the second scheme turns out to be faster than other keyword index search schemes with keyword-wise encryption, while enjoying the same level of security. The SSS schemes support "like" queries ('ab%') and queries with misprints, in that the character-wise encryption preserves the degree of similarity between two plaintexts and renders approximate string matching between the corresponding ciphertexts possible without decryption.
Constructing Differentially 4-uniform Permutations over GF(2^{2k}) from the Inverse Function Revisited
Constructing S-boxes with low differential uniformity and high
nonlinearity is of cardinal significance in cryptography. In the
present paper, we show that numerous differentially 4-uniform
permutations over GF(2^{2k}) can be constructed by composing
the inverse function and cycles over GF(2^{2k}). Two sufficient
conditions are given, which ensure that the differential uniformity
of the corresponding compositions equals 4. A lower bound on
nonlinearity is also given for permutations constructed with the
method in the present paper. Moreover, up to CCZ-equivalence, a new
differentially 4-uniform permutation with the best known
nonlinearity over GF(2^{2k}) with k odd is constructed. For
some special cycles, necessary and sufficient conditions are given
such that the corresponding compositions are differentially
4-uniform.
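Differential uniformity can be checked directly for small fields. A minimal sketch over GF(2^4) with the reduction polynomial x^4 + x + 1 (the paper works over larger GF(2^{2k}); this toy check only verifies that the inverse function itself is differentially 4-uniform for 2k = 4):

```python
def gf16_mul(a, b, poly=0b10011):
    # Multiplication in GF(2^4) modulo x^4 + x + 1.
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= poly
    return r

def gf16_inv(a):
    # The inverse function, with 0 mapped to 0 as is conventional.
    if a == 0:
        return 0
    for x in range(1, 16):
        if gf16_mul(a, x) == 1:
            return x

def diff_uniformity(S, n=16):
    # max over a != 0, b of #{x : S(x) + S(x + a) = b}, addition being xor.
    best = 0
    for a in range(1, n):
        for b in range(n):
            cnt = sum(1 for x in range(n) if S[x] ^ S[x ^ a] == b)
            best = max(best, cnt)
    return best
```

The inverse function over GF(2^n) is 4-uniform for even n, which is the starting point the paper composes with cycles to obtain new 4-uniform permutations.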
Stamp \& Extend -- Instant but Undeniable Timestamping based on Lazy Trees
We present a Stamp\&Extend time-stamping scheme based on linking via modified creation of Schnorr signatures.
The scheme is based on lazy construction of a tree of signatures.
Stamp\&Extend returns a timestamp immediately after the request, unlike the schemes based on the concept of timestamping rounds.
Despite the fact that all timestamps are linearly linked, verification of a timestamp requires a logarithmic number of steps with respect to the chain length.
An extra feature of the scheme is that any attempt to forge a timestamp by the Time Stamping Authority (TSA) results in revealing its secret key, providing an undeniable cryptographic evidence of misbehavior of TSA.
Breaking Stamp\&Extend requires not only breaking Schnorr signatures,
but to some extent also breaking Pedersen commitments.
Functional Encryption for Randomized Functionalities
In this work, we present the first definitions and constructions for functional encryption supporting randomized functionalities. The setting of randomized functionalities require us to revisit functional encryption definitions by, for the first time, explicitly adding security requirements for dishonest encryptors, to ensure that they cannot improperly tamper with the randomness that will be used for computing outputs. Our constructions are built using indistinguishability obfuscation.
Modified Alternating Step Generators
Irregular clocking of feedback shift registers is a popular technique to improve parameters of keystream generators in stream ciphers. Another technique is to implement nonlinear functions. We join these techniques and propose Modified Alternating Step Generators built with linear and nonlinear feedback shift registers. Adequate nonlinear Boolean functions are used as feedbacks or as filtering functions of shift registers in order to increase complexity of sequences produced by individual registers and the whole generator. We investigate basic parameters of proposed keystream generators, such as period, linear complexity and randomness.
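The classic alternating step construction that the proposal modifies can be sketched as follows: a control register decides which of two generating registers is clocked, and the output is the xor of their low bits (a generic sketch with arbitrary illustrative taps, not the paper's modified generators):

```python
class LFSR:
    def __init__(self, state, taps, n):
        # Fibonacci LFSR of length n; taps are bit positions feeding back.
        self.state, self.taps, self.n = state, taps, n

    def clock(self):
        fb = 0
        for t in self.taps:
            fb ^= (self.state >> t) & 1
        self.state = ((self.state << 1) | fb) & ((1 << self.n) - 1)

    def output(self):
        return self.state & 1

def asg_keystream(control, r1, r2, nbits):
    # Alternating step generator: the control register's output bit selects
    # which generating register is clocked; the keystream bit is the xor of
    # the two generating registers' output bits.
    out = []
    for _ in range(nbits):
        control.clock()
        (r1 if control.output() else r2).clock()
        out.append(r1.output() ^ r2.output())
    return out
```

The modified generators in the paper replace some of the linear feedbacks and outputs with nonlinear Boolean functions, keeping this irregular-clocking skeleton.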
Multi-Input Functional Encryption
We introduce the problem of Multi-Input Functional Encryption, where a secret key SK_f can correspond to an n-ary function f that takes multiple ciphertexts as input. Multi-input functional encryption is a general tool for computing on encrypted data which allows for mining aggregate information from several different data sources (rather than just a single source as in single-input functional encryption). We show wide applications of this primitive to running SQL queries over encrypted databases, non-interactive differentially private data release, delegation of computation, etc.
We formulate both indistinguishability-based and simulation-based definitions of security for this notion, and show close connections with indistinguishability and virtual black-box definitions of obfuscation. Assuming indistinguishability obfuscation for circuits, we present constructions achieving indistinguishability security for a large class of settings. We show how to modify this construction to achieve simulation-based security as well, in those settings where simulation security is possible. Assuming differing-inputs obfuscation [Barak et al., FOCS'01], we also provide a construction with similar security guarantees as above, but where the keys and ciphertexts are compact.
Homomorphic Authenticated Encryption Secure Against Chosen-Ciphertext Attack
Uncategorized
Uncategorized
We study homomorphic authenticated encryption, where privacy and authenticity of data are protected simultaneously. We define homomorphic versions of various security notions for privacy and authenticity, and investigate relations between them. In particular, we show that it is possible to give a natural definition of IND-CCA for homomorphic authenticated encryption, unlike the case of homomorphic encryption. Also, we construct a homomorphic authenticated encryption scheme supporting arithmetic circuits, which is chosen-ciphertext secure both for privacy and authenticity. Our scheme is based on the error-free approximate GCD assumption.
Last updated: 2014-01-14
Mobile Transaction over NFC and GSM
Although NFC mobile services have great potential for growth, they have raised a number of issues which are of concern to researchers and are preventing the wide adoption of this technology within society. Dynamic relationships of NFC ecosystem players in an NFC transaction process make them partners in a way that sometimes requires that they share access permission to applications that are running in the service environment. One of the technologies that can be used to ensure secure NFC transactions is cloud computing. This offers a wider range of advantages than the use of a Secure Element (SE) as a single entity in an NFC-enabled mobile phone. In this paper, we propose a protocol for NFC mobile payments based on a cloud wallet model. In our protocol, the SE in the mobile device is used for customer authentication, whereas the customer’s banking credentials are stored in a cloud under the control of the Mobile Network Operator (MNO). The proposed protocol eliminates the requirement for a shared secret between the Point of Sale (PoS) and the MNO before execution of the protocol, a mandatory requirement in the earlier version of this protocol. This makes it more practicable and user-friendly. A detailed analysis of the protocol discusses multiple attack scenarios.
Verifiable Set Operations over Outsourced Databases
We study the problem of verifiable delegation of computation over
outsourced data, whereby a powerful worker maintains a large data
structure for a weak client in a verifiable way. Compared to the
well-studied problem of verifiable computation, this setting imposes
additional difficulties since the verifier needs to verify consistency
of updates succinctly and without maintaining large state. In particular, existing general solutions are far from practical in this setting.
We present a scheme for verifiable evaluation of hierarchical set
operations (unions, intersections and set-differences) applied to a
collection of dynamically changing sets of elements from a given
domain. That is, we consider two types of queries issued by the
client: updates (insertions and deletions) and data queries, which
consist of ``circuits'' of unions, intersections, and set-differences
on the current collection of sets.
This type of query comes up in database queries, keyword search and
numerous other applications, and indeed our scheme can be effectively
used in such scenarios. The verification cost incurred is proportional
only to the size of the final outcome set and to the size of the query, and is independent of the cardinalities of the involved sets. The cost of updates is optimal ( modular operations per update).
Our construction extends that of [Papamanthou et al., Crypto 2011]
and relies on a modified version of the \emph{extractable collision-resistant hash function} (ECRH) construction, introduced in [Bitansky et al., ITCS 2012] that can be used to succinctly hash univariate polynomials.
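As a plaintext illustration of the query type described above (and not of the verifiable scheme itself), the following sketch evaluates a nested "circuit" of unions, intersections, and set-differences over a collection of named sets; all names here are hypothetical:

```python
def eval_query(circuit, sets):
    """Recursively evaluate a nested set-operation query.

    A query is either a set name (str) or a tuple (op, left, right)
    with op in {"union", "intersect", "diff"}.
    """
    if isinstance(circuit, str):
        return sets[circuit]
    op, left, right = circuit
    a, b = eval_query(left, sets), eval_query(right, sets)
    if op == "union":
        return a | b
    if op == "intersect":
        return a & b
    if op == "diff":
        return a - b
    raise ValueError(f"unknown operation: {op}")

collection = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 5}}
# (A union B) minus C
result = eval_query(("diff", ("union", "A", "B"), "C"), collection)
```

In the scheme itself, the server would additionally return a succinct proof whose verification cost depends only on the query size and the output set, not on the cardinalities of A, B, and C.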
Amplifying Privacy in Privacy Amplification
We study the classical problem of privacy amplification, where two parties Alice and Bob share a weak secret of min-entropy , and wish to agree on a secret key of length over a public communication channel completely controlled by a computationally unbounded attacker Eve.
Despite being extensively studied in the literature, the problem of designing ``optimal'' efficient privacy amplification protocols is still open, because there are several optimization goals. The first of them is (1) minimizing the {\em entropy loss} (it is known that the optimal value for , where is the desired security of the protocol). Other important considerations include (2) minimizing the number of communication rounds, (3) maintaining security even after the secret key is used (this is called {\em post-application robustness}), and (4) ensuring that the protocol does not leak some ``useful information'' about the source (this is called {\em source privacy}). Additionally, when dealing with a very long source , as happens in the so-called Bounded Retrieval Model (BRM), extracting as long a key as possible is no longer the goal. Instead, the goals are (5) to touch as little of as possible (for efficiency), and (6) to be able to run the protocol many times on the same , extracting multiple secure keys.
Achieving goals (1)-(4) (or (2)-(6) in BRM) simultaneously has remained open, and, indeed, all known protocols fail to achieve at least two of them.
In this work we improve upon the current state-of-the-art, by designing a variety of new privacy amplification protocols, in several cases achieving {\em optimal parameters for the first time}. Moreover, in most cases we do it by giving relatively {\em general transformations} which convert a given protocol into a ``better'' protocol . In particular, as special cases of these transformations (applied to best known prior protocols), we achieve the following privacy amplification protocols for the first time:
\begin{itemize}
\item -round (resp. -round) {\em source-private} protocol with {\em optimal entropy loss} , whenever (resp. for some universal constant ). Best previous constant round source-private protocols achieved .
\item -round {\em post-application-robust} protocols with {\em optimal entropy loss} , whenever or (the latter is also {\em source-private}). Best previous post-application robust protocols achieved .
\item The first BRM protocol capable of extracting the optimal number of session keys, improving upon the previously best bound . (Additionally, our BRM protocol is post-application-robust, takes rounds, and can be made source-private by increasing the number of rounds to .)
\end{itemize}
The Realm of the Pairings
Bilinear maps, or pairings, initially proposed in a cryptologic context for cryptanalytic purposes, proved afterward to be an amazingly flexible and useful tool for the construction of cryptosystems with unique features. Yet, they are notoriously hard to implement efficiently, so that their effective deployment requires a careful choice of parameters and algorithms. In this paper we review the evolution of pairing-based cryptosystems, the development of efficient algorithms and the state of the art in pairing computation, and the challenges yet to be addressed on the subject, while also presenting some new algorithmic and implementation refinements in affine and projective coordinates.
Deep Attacks of a Certificateless Signature Scheme
Certificateless public key cryptography is an attractive paradigm since it eliminates the use of certificates in traditional public key cryptography and alleviates the inherent key escrow problem in identity-based cryptography. Recently, Xiong et al. proposed a certificateless signature scheme and proved that their scheme is existentially unforgeable against adaptive chosen message attacks in the random oracle model. He et al. pointed out that Xiong et al.’s scheme is insecure against the Type II adversary, but their forged signatures are not random, and their improved scheme has the same security defects as Xiong et al.’s scheme. In this paper, we present two malicious-but-passive KGC attack methods on Xiong et al.’s scheme, and our results show that their scheme is insecure against malicious-but-passive KGC attacks.
Outsourced Symmetric Private Information Retrieval
In the setting of searchable symmetric encryption (SSE), a data owner D outsources a database (or document/file collection) to a remote server E in encrypted form such that D can later search the collection at E while hiding information about the database and queries from E. Leakage to E is to be confined to well-defined forms of data-access and query patterns while preventing disclosure of explicit data and query plaintext values. Recently, Cash et al. presented a protocol, OXT, which can run arbitrary Boolean queries in the SSE setting and which is remarkably efficient even for very large databases.
In this paper we investigate a richer setting in which the data owner
D outsources its data to a server E, but D is now interested in allowing clients (third parties) to search the database such that clients learn the information D authorizes them to learn but nothing else, while E still does not learn about the data or queried values, as in the basic SSE setting. Furthermore, motivated by a wide range of applications, we extend this model and requirements to a setting where, similarly to private information retrieval, the client's queried values need to be hidden also from the data owner D even though the latter still needs to authorize the query. Finally, we consider the scenario in which authorization can be enforced by the data owner D without D learning the policy, a setting that arises in court-issued search warrants.
We extend the OXT protocol of Cash et al. to support arbitrary Boolean queries in all of the above models while withstanding adversarial
non-colluding servers (D and E) and arbitrarily malicious clients,
and while preserving the remarkable performance of the protocol.
Constructing Confidential Channels from Authenticated Channels---Public-Key Encryption Revisited
The security of public-key encryption (PKE), a widely-used cryptographic primitive, has received much attention in the cryptographic literature. Many security notions for PKE have been proposed, including several versions of CPA-security, CCA-security, and non-malleability. These security notions are usually defined in terms of a certain game that an efficient adversary cannot win with non-negligible probability or advantage.
If a PKE scheme is used in a larger protocol, then the security of this protocol is proved by showing a reduction of breaking a certain security property of the PKE scheme to breaking the security of the protocol. A major problem is that each protocol requires in principle its own tailor-made security reduction. Moreover, which security notion of the PKE should be used in a given context is a priori not evident; the employed games model the use of the scheme abstractly through oracle access to its algorithms, and the sufficiency for specific applications is neither explicitly stated nor proven.
In this paper we propose a new approach to investigating the application of PKE, following the constructive cryptography paradigm of Maurer and Renner (ICS~2011). The basic use of PKE is to enable confidential communication from a sender A to a receiver B, assuming A is in possession of B's public key. One can distinguish two relevant cases: The (non-confidential) communication channel from A to B can be authenticated (e.g., because messages are signed) or non-authenticated. The application of PKE is shown to provide the construction of a secure channel from A to B from two (assumed) authenticated channels, one in each direction, or, alternatively, if the channel from A to B is completely insecure, the construction of a confidential channel without authenticity. Composition then means that the assumed channels can either be physically realized or can themselves be constructed cryptographically, and also that the resulting channels can directly be used in any applications that require such a channel. The composition theorem shows that several construction steps can be composed, which guarantees the soundness of this approach and eliminates the need for separate reduction proofs.
We also revisit several popular game-based security notions (and variants thereof) and give them a constructive semantics by demonstrating which type of construction is achieved by a PKE scheme satisfying which notion. In particular, the necessary and sufficient security notions for the above two constructions to work are CPA-security and a variant of CCA-security, respectively.
NTRU-KE: A Lattice-based Public Key Exchange Protocol
Public key exchange protocols are an important application of public-key cryptography. Most existing key exchange schemes are Diffie-Hellman (DH)-type, whose security is based on DH problems over different groups. Since Shor's polynomial-time algorithm solves these DH problems once a quantum computer is available, we are motivated to seek a non-DH-type, quantum-resistant key exchange protocol. To this end, we turn our attention to lattice-based cryptography. The high-level methodology behind our roadmap is that, in analogy to the link between ElGamal, DSA, and DH, one should expect an NTRU lattice-based key exchange primitive related to NTRU-ENCRYPT and NTRU-SIGN. However, this expected key exchange protocol has not yet been presented. In this paper, we present this missing protocol, hereafter referred to as NTRU-KE, and study it with respect to security and key-mismatch failure. In comparison with ECDH (Elliptic Curve-based Diffie-Hellman), NTRU-KE features faster computation, resistance to quantum attack, and higher communication overhead. Accordingly, we conclude that NTRU-KE is currently comparable with ECDH, and that NTRU-KE will gain a decisive advantage when quantum computers become a reality.
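For contrast with the DH-type protocols discussed above, here is textbook Diffie-Hellman over a toy modular group; the parameters are illustrative only and far too small for real use, and this is not the NTRU-KE construction itself:

```python
import secrets

# Textbook Diffie-Hellman over a small modular group, illustrating the
# "DH-type" structure that NTRU-KE departs from. Demo parameters only:
# real deployments use much larger, standardized groups.
p = 2**64 - 59   # a 64-bit prime (demo size, NOT secure)
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

k_alice = pow(B, a, p)   # both sides derive g^(ab) mod p
k_bob = pow(A, b, p)
assert k_alice == k_bob
```

Shor's algorithm recovers a from A = g^a mod p in quantum polynomial time, which is precisely why the abstract seeks a non-DH-type alternative.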
NICV: Normalized Inter-Class Variance for Detection of Side-Channel Leakage
Side-Channel Attacks (SCA) are considered a serious threat against embedded cryptography. Therefore, security-critical chips must be tested for SCA resistance before deployment or certification. SCA are powerful but can require a lot of computation power, especially in the presence of countermeasures. The computational complexity of these attacks can be reduced by selecting a small subset of points where leakage prevails. In this paper, we propose a method to detect relevant leakage points in side-channel traces. The method is based on Normalized Inter-Class Variance (NICV). A key advantage of NICV over the state of the art is that NICV needs neither a clone device nor knowledge of secret parameters of the cryptosystem. NICV has a low computation requirement and detects leakage using only public information such as input plaintexts or output ciphertexts. It can also be used to test the efficiency of leakage models, the quality of traces, and the robustness of countermeasures. A theoretical rationale for NICV and practical applications to real cryptosystems are provided to support our claims.
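The NICV statistic is the standard ratio Var(E[Y|X]) / Var(Y), computed independently at each trace sample; the following NumPy sketch (an illustration, not the authors' code) computes it from a matrix of traces and a vector of public values:

```python
import numpy as np

def nicv(traces, labels):
    """Normalized Inter-Class Variance per trace sample.

    traces: (n_traces, n_samples) array of side-channel measurements.
    labels: (n_traces,) array of public values (e.g. a plaintext byte).
    Returns an (n_samples,) array of Var(E[Y|X]) / Var(Y) in [0, 1];
    samples with values near 1 are candidate leakage points.
    """
    traces = np.asarray(traces, dtype=float)
    labels = np.asarray(labels)
    values, counts = np.unique(labels, return_counts=True)
    overall_mean = traces.mean(axis=0)
    # Variance of the class-conditional means, weighted by class frequency
    between = sum(c * (traces[labels == v].mean(axis=0) - overall_mean) ** 2
                  for v, c in zip(values, counts)) / len(labels)
    return between / traces.var(axis=0)

# Synthetic demo: sample 0 leaks the public value, sample 1 is pure noise
rng = np.random.default_rng(0)
lab = rng.integers(0, 2, 2000)
tr = np.column_stack([lab + 0.1 * rng.standard_normal(2000),
                      rng.standard_normal(2000)])
scores = nicv(tr, lab)   # scores[0] is high, scores[1] is near zero
```

Note that only the public labels and the traces are needed, matching the abstract's claim that no clone device or secret parameter is required.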
A Secure Obfuscator for Encrypted Blind Signature Functionality
This paper introduces a new obfuscation task: obfuscation of encrypted blind signatures. Informally, Alice is the signer and Bob is the user. Bob needs Alice to sign a message, but he does not want Alice to know what the message is; furthermore, Bob does not want any third party to learn about the interactive process. We therefore present a secure obfuscator for encrypted blind signatures which makes the encrypted blind signing process unintelligible to any third party while preserving the original encrypted blind signature functionality. We use Schnorr's blind signature scheme and a linear encryption scheme as building blocks to construct the new obfuscator. Moreover, we propose two new security definitions, blindness w.r.t. the encrypted blind signature (EBS) obfuscator and one-more unforgeability (OMU) w.r.t. the EBS obfuscator, and prove them under the Decision Linear Diffie-Hellman (DL) assumption and the hardness of the discrete logarithm problem, respectively. We also demonstrate that our obfuscator satisfies the Average-Case Virtual Black-Box Property (ACVBP) w.r.t. a dependent oracle, i.e., it is indistinguishability-secure. Our paper opens a new direction for the application of obfuscation.
Practical Forward-Secure Range and Sort Queries with Update-Oblivious Linked Lists
We revisit the problem of privacy-preserving range search and sort
queries on encrypted data in the face of an untrusted data store.
Our new protocol RASP has several advantages over existing work.
First, RASP strengthens privacy by ensuring forward security:
after a query for range , any new record added to the data
store is indistinguishable from random, even if the new record falls
within range . We are able to accomplish this
using only traditional hash and block cipher operations, abstaining
from expensive asymmetric cryptography and bilinear pairings.
Consequently, RASP is highly practical, even for large database
sizes. Additionally, we require only cloud storage and not a
computational cloud as in related works, which can reduce monetary
costs significantly. At the heart of RASP, we develop a new
update-oblivious bucket-based data structure. We allow for
data to be added to buckets without leaking into which bucket it has
been added. As long as a bucket is not explicitly queried, the data
store does not learn anything about bucket contents. Furthermore, no
information is leaked about data additions following a
query. Besides formally proving RASP's privacy, we also present a
practical evaluation of RASP on Amazon Dynamo, demonstrating its
efficiency and real world applicability.
Method to secure data in the cloud while preserving summary statistics
In this paper we propose a method which preserves summary statistics of data organized in a two-way table. We show that fully homomorphic encryption is a powerful solution; however, it has a number of disadvantages which make it impractical. We therefore propose a restricted homomorphic encryption method which uses Paillier encryption and order-preserving encryption. This new method can be used for practical purposes owing to its efficiency in terms of both speed and storage.
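The method builds on Paillier encryption, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is what allows sums (and hence summary statistics) to be computed over encrypted table entries. Below is a toy Paillier sketch with insecure demo primes, illustrative only and not the paper's full scheme:

```python
import math
import secrets

# Toy Paillier cryptosystem with tiny demo primes (NOT secure).
p, q = 1000003, 1000033
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)     # valid because we use the generator g = n + 1

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) / n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts
c = encrypt(20) * encrypt(22) % n2
assert decrypt(c) == 42
```

A server holding only ciphertexts can therefore compute encrypted row and column totals of a two-way table without learning any entry.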
Cryptanalysis of Zorro
A new block cipher called Zorro was presented at CHES 2013. Although it uses only 4 S-boxes per round, the designers showed the resistance of the cipher against various attacks and concluded that it has a large security margin. In this paper, we give a key recovery attack on the full cipher in the single-key model that works for out of keys. Our analysis is based precisely on the fact that the non-linear layer has only 4 S-boxes. We exploit this twice in a two-stage attack: first, we show that Zorro has an equivalent description without constants in the rounds, and then we launch an internal differential attack on the newly described cipher. We confirm the correctness of the analysis with computer verification.
Our attack is the first to use internal differentials for block ciphers; we adapt Daemen's attack on the Even-Mansour construction
to the case of internal differentials (instead of differentials), which allows us to recover the full key.
This work also provides insights into alternative descriptions of general Zorro-type ciphers (with incomplete non-linear layers),
the importance of well-chosen constants, and the advantages of Daemen's attack.
PUF-Based RFID Authentication Secure and Private under Memory Leakage
RFID tags are becoming increasingly prevalent and are expected to become an important tool for e-commerce, logistics, point-of-sale transactions, and so on, representing “things” and “humans holding things” in transactions. Since a huge number of tags will need to be attached to various “objects,” low-cost tag manufacturing is necessary, and it is hard to imagine that tags will implement costly hardware protection mechanisms (such as co-processors or TPMs). In this context, memory leakage (side-channel) attacks therefore become a critical threat. Another well-known threat to RFID devices is tag tracing, which implies a violation of privacy.
We consider physically unclonable functions (PUFs) as tamper-resilient building blocks cheaper than protected hardware, and propose security against a memory-leaking adversary who tries to violate the security and privacy of tags (we emphasize that digitally-oriented PUFs are easy to implement and are more likely than TPMs to be included in RFID chips). We then design the first provably secure and provably private RFID authentication protocol withstanding information leakage from the entire memory of the tag, and show its two properties: (1) security against man-in-the-middle attacks, and (2) privacy protection against tag tracing.
Ambiguous One-Move Nominative Signature Without Random Oracles
Nominative signatures are a useful tool in situations where a signature has to be created jointly by two parties, a nominator (signer) and a nominee (user), while only the user can verify the signature and prove its validity to a third party. In this paper, we study the existing security models of nominative signatures and show that, although these models capture the essential security requirements in a strong sense, especially unforgeability against malicious signers/users and invisibility, they fail to capture a requirement regarding the privacy of the signer and the user, one of the original requirements since the notion of nominative signatures was first introduced. In particular, we show that it is possible to build a highly efficient nominative signature scheme which can be proven secure in the existing security models, while in practice it is obvious from the components of a nominative signature whether a particular signer or user was involved in the signature generation, which may be undesirable in some applications. We therefore propose an enhanced security property, named "Ambiguity", and a new \emph{one-move} nominative signature scheme fulfilling this new requirement without random oracles; among the various types of nominative signatures, one-move is the most efficient. Furthermore, the new scheme is at least 33% more efficient during signature generation and produces signatures 17% shorter than the existing one-move signature schemes without random oracles, even though those existing schemes may not satisfy the new Ambiguity requirement.
An Approach to Reduce Storage for Homomorphic Computations
We introduce a hybrid homomorphic encryption by combining public key encryption (PKE) and somewhat homomorphic encryption (SHE) to reduce storage for most applications of somewhat or fully homomorphic encryption (FHE). In this model, one encrypts messages with a PKE and computes on encrypted data using a SHE or a FHE after homomorphic decryption.
To obtain efficient homomorphic decryption, our hybrid scheme is constructed by combining IND-CPA PKE schemes without complicated message paddings with SHE schemes with a large integer message space. Furthermore, we remark that if the underlying PKE is multiplicative on a domain closed under addition and multiplication, this scheme has an important advantage: one can evaluate a polynomial of arbitrary degree without recryption.
We propose such a scheme by concatenating the ElGamal and Goldwasser-Micali schemes over a ring for a composite integer whose message space is .
Since homomorphic decryption of the base PKE is too expensive for practical applications, we accelerate the homomorphic evaluation of the decryption by introducing a method that reduces the degree of the exponentiation circuit at the cost of additional public keys. Using the same technique, we give a partial solution to the open problem of~\cite{KLYC13}.
As an independent interest, we obtain another generic conversion method from private-key SHE to public-key SHE. Unlike Rothblum~\cite{RothTCC11}, the message space of the SHE can be chosen freely.
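The multiplicative property exploited above is exhibited by ElGamal: componentwise multiplication of two ciphertexts yields an encryption of the product of the plaintexts. A toy sketch with insecure demo parameters (illustrative only, not the paper's combined scheme):

```python
import secrets

# Toy multiplicative ElGamal (tiny demo prime, NOT secure).
p = 1000003
g = 2
x = secrets.randbelow(p - 2) + 1   # secret key
h = pow(g, x, p)                   # public key

def encrypt(m):
    r = secrets.randbelow(p - 2) + 1
    return (pow(g, r, p), m * pow(h, r, p) % p)

def decrypt(ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - x, p) % p   # multiply by c1^(-x)

a, b = encrypt(6), encrypt(7)
prod = (a[0] * b[0] % p, a[1] * b[1] % p)   # componentwise product
assert decrypt(prod) == 42
```

Because the homomorphism is multiplicative, arbitrarily long products of ciphertexts decrypt correctly without any recryption step, which is the advantage the abstract highlights for multiplicative PKEs.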
Efficient Statistical Zero-Knowledge Authentication Protocols for Smart Cards Secure Against Active & Concurrent Attacks
We construct statistical zero-knowledge authentication protocols for smart cards based on general assumptions. The main protocol is only secure against active attacks, but we present a modification based on trapdoor commitments that can resist concurrent attacks as well. Both protocols are instantiated using lattice-based primitives, which are conjectured to be secure against quantum attacks. We illustrate the practicality of our main protocol on smart cards in terms of storage, computation, communication, and round complexities. Furthermore, we compare it to other lattice-based authentication protocols, which are either zero-knowledge or have a similar structure. The comparison shows that our protocol improves on the best previous protocol.
Key Derivation Without Entropy Waste
We revisit the classical problem of converting an imperfect source of randomness into a usable cryptographic key. Assume that we have some cryptographic application that expects a uniformly random -bit key and ensures that the best attack (in some complexity class) against has success probability at most . Our goal is to design a key-derivation function (KDF) that converts any random source of min-entropy into a sufficiently ``good'' key ,
guaranteeing that has comparable security which is `close' to .
Seeded randomness extractors provide a generic way to solve this problem for \emph{all} applications , with resulting security , provided that we start with entropy . By a result of Radhakrishnan and Ta-Shma, this bound on (called the ``RT-bound'') is also known to be tight in general. Unfortunately, in many situations the loss of bits of entropy is unacceptable. This motivates the study of KDFs with less entropy waste by placing some restrictions on the source or the application .
In this work we obtain the following new positive and negative results in this regard:
- Efficient samplability of the source does not help beat the RT-bound for general applications. This resolves the SRT (samplable RT) conjecture of Dachman-Soled et al.~\cite{DGKM12} in the affirmative, and also shows that the existence of computationally-secure extractors beating the RT-bound implies the existence of one-way functions.
- We continue in the line of work initiated by Barak et al. \cite{BDK+11} and construct new information-theoretic KDFs which beat the RT-bound for large but restricted classes of applications. Specifically, we design efficient KDFs that work for all unpredictability applications (e.g., signatures, MACs, one-way functions, etc.) and can either: (1) extract \emph{all} of the entropy with a very modest security loss , or alternatively, (2) achieve essentially optimal security with a very modest entropy loss . In comparison, the best prior results from \cite{BDK+11} for this class of applications would only guarantee when , and would need to get .
- The weaker bounds of \cite{BDK+11} hold for a larger class of so-called ``square-friendly'' applications (which includes all unpredictability, but also some important indistinguishability, applications). Unfortunately, we show that these weaker bounds are tight for the larger class of applications.
- We abstract out a clean, information-theoretic notion of -unpredictability extractors, which guarantee ``induced'' security for any -secure unpredictability application , and characterize the parameters achievable for such unpredictability extractors. Of independent interest, we also relate this notion to the previously-known notion of (min-entropy) condensers, and improve the state-of-the-art parameters for such condensers.
A reduction of Semigroup DLP to classic DLP
We present a polynomial-time reduction of the discrete logarithm problem in any periodic (or torsion)
semigroup (SGDLP) to the classic DLP in a _subgroup_ of the same semigroup.
It follows that SGDLP can be solved in polynomial time by quantum computers, and that
SGDLP has subexponential complexity whenever the classic DLP in the corresponding groups has subexponential complexity. We also consider several natural constructions of nonperiodic semigroups,
and provide polynomial time solutions for the DLP in these semigroups.
How to Certify the Leakage of a Chip?
Evaluating side-channel attacks and countermeasures requires determining the amount of information leaked by a target device. For this purpose, information extraction procedures published so far essentially combine a "leakage model" with a "distinguisher". Fair evaluations ideally require exploiting a perfect leakage model (i.e. exactly corresponding to the true leakage distribution) with a Bayesian distinguisher. But since such perfect models are generally unknown, density estimation techniques have to be used to approximate the leakage distribution. This raises the fundamental problem that all security evaluations are potentially biased by both estimation and assumption errors. Hence, the best that we can hope is to be aware of these errors. In this paper, we provide and implement methodological tools to solve this issue. Namely, we show how sound statistical techniques allow both quantifying the leakage of a chip, and certifying that the amount of information extracted is close to the maximum value that would be obtained with a perfect model.
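One way to picture "certifying" a leakage model is to estimate it on training traces and then score it on held-out traces: a model whose held-out likelihood stops improving with more data is, informally, capturing what it can of the true leakage distribution. The sketch below (synthetic data and hypothetical helper names, not the authors' methodology) fits per-class Gaussian templates and reports the held-out average log-likelihood:

```python
import numpy as np

def fit_templates(traces, labels):
    """Fit a univariate Gaussian template (mean, std) per class."""
    return {v: (traces[labels == v].mean(), traces[labels == v].std())
            for v in np.unique(labels)}

def held_out_loglik(templates, traces, labels):
    """Average Gaussian log-likelihood of held-out traces under the model."""
    ll = 0.0
    for t, v in zip(traces, labels):
        mu, sigma = templates[v]
        ll += -0.5 * np.log(2 * np.pi * sigma ** 2) \
              - (t - mu) ** 2 / (2 * sigma ** 2)
    return ll / len(traces)

# Synthetic leakage: the trace is the intermediate value plus unit noise
rng = np.random.default_rng(1)
labels = rng.integers(0, 4, 4000)
traces = labels.astype(float) + rng.standard_normal(4000)

tmpl = fit_templates(traces[:2000], labels[:2000])
score = held_out_loglik(tmpl, traces[2000:], labels[2000:])
```

Comparing such held-out scores across candidate models separates estimation errors (too little data) from assumption errors (wrong model family), which is the distinction the abstract draws.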
Symmetric Digit Sets for Elliptic Curve Scalar Multiplication without Precomputation
We describe a method to perform scalar multiplication on two classes of
ordinary elliptic curves, namely in prime characteristic
mod 4, and in prime characteristic
mod 3. On these curves, the 4-th and 6-th roots of unity act as (computationally
efficient) endomorphisms. In order to optimise the scalar multiplication, we consider a width- -NAF (non-adjacent form) digit expansion of positive integers to the complex base of , where is a zero of the characteristic polynomial of the Frobenius endomorphism associated to the curve. We provide a precomputationless algorithm by means of a convenient factorisation of the unit group of residue classes modulo in the endomorphism ring, whereby we construct a digit set consisting of powers of subgroup generators, which are chosen as efficient endomorphisms of the curve.
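The width-w NAF expansion used above generalizes the familiar binary non-adjacent form. For intuition, here is the standard binary NAF (the w = 2 case) of an ordinary integer, not the complex-base expansion of the paper:

```python
def naf(k):
    """Non-adjacent form of a positive integer.

    Returns digits in {-1, 0, 1}, least significant first, such that
    no two adjacent digits are both nonzero. Signed digits are useful
    for scalar multiplication because point negation is nearly free.
    """
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)   # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

d = naf(239)
# Reconstruct: sum of d_i * 2^i recovers the integer
assert sum(di * 2 ** i for i, di in enumerate(d)) == 239
# Non-adjacency: no two consecutive nonzero digits
assert all(not (d[i] and d[i + 1]) for i in range(len(d) - 1))
```

The paper replaces the base 2 with a complex algebraic integer (a root of the Frobenius characteristic polynomial) and replaces the digit set {-1, 0, 1} with powers of efficient curve endomorphisms, which is what removes the precomputation.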
Adaptive Witness Encryption and Asymmetric Password-based Cryptography
We show by counter-example that the soundness security requirement for witness encryption given by Garg, Gentry, Sahai and Waters (STOC 2013) does not suffice for the security of their own applications. We introduce adaptively-sound (AS) witness encryption to fill the gap. We then introduce asymmetric password-based encryption (A-PBE). This offers gains over classical, symmetric password-based encryption in the face of attacks that compromise servers to recover hashed passwords. We distinguish between invasive A-PBE schemes (they introduce new password-based key-derivation functions) and non-invasive ones (they can use existing, deployed password-based key-derivation functions). We give simple and efficient invasive A-PBE schemes and use AS-secure witness encryption to give non-invasive A-PBE schemes.
Limits of Extractability Assumptions with Distributional Auxiliary Input
Extractability, or “knowledge,” assumptions have recently gained popularity in the cryptographic community, leading to the study of primitives such as extractable one-way functions, extractable hash functions, succinct non-interactive arguments of knowledge (SNARKs), and (public-coin) differing-inputs obfuscation ((PC-)diO), and spurring the development of a wide spectrum of new applications relying on these primitives. For most of these applications, it is required that the extractability assumption holds even in the presence of attackers receiving some auxiliary information that is sampled from some fixed efficiently computable distribution Z.
We show that, assuming the existence of public-coin collision-resistant hash functions, there exists an efficient distribution Z such that either
- PC-diO for Turing machines does not exist, or
- extractable one-way functions w.r.t. auxiliary input Z do not exist.
A corollary of this result shows that, additionally assuming the existence of fully homomorphic encryption with decryption in NC1, there exists an efficient distribution Z such that either
- SNARKs for NP w.r.t. auxiliary input Z do not exist, or
- PC-diO for NC1 circuits does not exist.
To achieve our results, we develop a “succinct punctured program” technique, mirroring the powerful punctured program technique of Sahai and Waters (STOC’14), and present several other applications of this new technique. In particular, we construct succinct perfect zero-knowledge SNARGs and give a universal instantiation of random oracles in full-domain hash applications, based on PC-diO.
As a final contribution, we demonstrate that even in the absence of auxiliary input, care must be taken when making use of extractability assumptions. We show that (standard) diO w.r.t. any distribution D over programs and bounded-length auxiliary input is directly implied by any obfuscator that satisfies the weaker indistinguishability obfuscation (iO) security notion and diO for a slightly modified distribution D′ of programs (of slightly greater size) and no auxiliary input. As a consequence, we directly obtain negative results for (standard) diO in the absence of auxiliary input.
Efficient Non-Malleable Codes and Key-Derivation for Poly-Size Tampering Circuits
Non-malleable codes, defined by Dziembowski, Pietrzak and Wichs (ICS '10), provide roughly the following guarantee: if a codeword c encoding some message x is tampered to a codeword c' = f(c) with c' != c, then the tampered message x' contained in c' reveals no information about x. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks.
One cannot have an efficient non-malleable code that protects against all efficient tampering functions f. However, in this work we show ``the next best thing'': for any polynomial bound s given a priori, there is an efficient non-malleable code that protects against all tampering functions computable by a circuit of size s. More generally, for any family F of tampering functions of bounded size, there is an efficient non-malleable code that protects against all f in F. The rate of our codes, defined as the ratio of message to codeword size, approaches 1. Our results are information-theoretic and our main proof technique relies on a careful probabilistic method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the ``common reference string'' (CRS) model.
We also introduce a new notion of non-malleable key derivation, which uses randomness x to derive a secret key y in such a way that, even if x is tampered to a different value x' != x, the derived key y' does not reveal any information about y. Our results for non-malleable key derivation are analogous to those for non-malleable codes.
As a useful tool in our analysis, we rely on the notion of ``leakage-resilient storage'' of Davi, Dziembowski and Venturi (SCN '10) and, as a result of independent interest, we also significantly improve on the parameters of such schemes.
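The guarantee above can be made concrete with a small sketch of the tampering experiment (a toy, deliberately malleable code used purely for illustration; `encode`, `decode` and the tampering function are hypothetical, not the paper's construction):

```python
import secrets

# Toy "code": store a random mask followed by (message XOR mask).
# This toy code is NOT non-malleable -- it only illustrates the tampering
# experiment that a real non-malleable code must defend against.

def encode(message: bytes) -> bytes:
    mask = secrets.token_bytes(len(message))
    body = bytes(m ^ r for m, r in zip(message, mask))
    return mask + body

def decode(codeword: bytes) -> bytes:
    half = len(codeword) // 2
    mask, body = codeword[:half], codeword[half:]
    return bytes(b ^ r for b, r in zip(body, mask))

def tamper_experiment(message: bytes, tamper) -> bytes:
    """Encode, apply the adversarial tampering function, decode.
    Non-malleability demands the outcome be either the original message
    ("same") or a value unrelated to it."""
    c = encode(message)
    c2 = tamper(c)
    if c2 == c:
        return message
    return decode(c2)

# A tampering function that flips the last bit of the codeword: for this
# toy code it flips the last bit of the decoded message -- a clearly
# *related* outcome, which is exactly what a non-malleable code rules out.
flip_last = lambda c: c[:-1] + bytes([c[-1] ^ 1])
out = tamper_experiment(b"secret", flip_last)
```

The interesting question, which the paper answers, is how to build codes for which no size-s tampering circuit can produce such a related outcome.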
More on the Impossibility of Virtual-Black-Box Obfuscation with Auxiliary Input
We show that if there exist indistinguishability obfuscators for a certain class C of circuits, then there do not exist independent-auxiliary-input virtual-black-box (VBB) obfuscators for any family of circuits that compute a pseudo-entropic function. A function f_k is pseudo-entropic if it is hard, given oracle access to f_k but without explicitly querying it on a value x, to distinguish f_k(x) from a random variable with some real entropy.
This strengthens the bound of Goldwasser and Kalai [FOCS `05, ePrint `13] that rules out dependent-auxiliary-input VBB obfuscation for the same set of circuit families, assuming indistinguishability obfuscators for another class, C', of circuits. That is, while they only rule out the case where the adversary and the simulator obtain auxiliary information that depends on the actual (secret) obfuscated function, we rule out even the case where the auxiliary input depends only on the (public) family of programs.
Higher Order Masking of Look-up Tables
We describe a new algorithm for masking look-up tables of block-ciphers at any order, as a countermeasure against side-channel attacks. Our technique is a generalization of the classical randomized table countermeasure against first-order attacks. We prove the security of our new algorithm against t-th order attacks in the usual Ishai-Sahai-Wagner model from Crypto 2003; we also improve the bound on the number of shares from n>=4t+1 to n>= 2t+1 for an adversary who can adaptively move its probes between successive executions.
Our algorithm has the same time complexity O(n^2) as the Rivain-Prouff algorithm for AES, and its extension by Carlet et al. to any look-up table. In practice for AES our algorithm is less efficient than Rivain-Prouff, which can take advantage of the special algebraic structure of the AES Sbox; however for DES our algorithm performs slightly better.
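The classical first-order randomized-table countermeasure that this work generalizes can be sketched as follows (a toy 4-bit S-box and illustrative names, not the paper's higher-order algorithm):

```python
import secrets

SBOX = [(x * 7 + 3) % 16 for x in range(16)]  # toy 4-bit S-box, not AES

def masked_sbox_lookup(x_masked: int, r_in: int, r_out: int) -> int:
    """First-order randomized-table lookup.

    The input arrives masked as x_masked = x ^ r_in, and the result is
    returned masked with the fresh output mask r_out, so the unmasked
    values x and SBOX[x] never appear in any variable."""
    # Recompute the randomized table T'[i] = S[i ^ r_in] ^ r_out.
    table = [SBOX[(i ^ r_in) & 0xF] ^ r_out for i in range(16)]
    return table[x_masked & 0xF]

# Usage: mask a sensitive nibble, look it up, then unmask to check.
x = 0xB
r_in, r_out = secrets.randbelow(16), secrets.randbelow(16)
y_masked = masked_sbox_lookup(x ^ r_in, r_in, r_out)
assert y_masked ^ r_out == SBOX[x]
```

Against a first-order adversary this hides x behind the masks; the paper's contribution is extending the same idea to resist t-th order probing.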
Bootstrapping Obfuscators via Fast Pseudorandom Functions
We show that it is possible to upgrade an obfuscator for a weak complexity class into an obfuscator for arbitrary polynomial-size circuits, assuming that the class can compute pseudorandom functions. Specifically, under standard intractability assumptions (e.g., hardness of factoring, Decisional Diffie-Hellman, or Learning with Errors), the existence of obfuscators for NC1 or even TC0 implies the existence of general-purpose obfuscators for all polynomial-size circuits. Previously, such a bootstrapping procedure was known to exist under the assumption that there exists a fully-homomorphic encryption scheme whose decryption algorithm can be computed in NC1. Our reduction works with respect to virtual black-box obfuscators and relativizes to ideal models.
Cryptanalysis and improvement of a dynamic and secure key management model for hierarchical heterogeneous sensor networks
In 2012, Alagheband and Aref presented a dynamic and secure key management model for hierarchical heterogeneous sensor networks. They proposed a signcryption algorithm which is the main building block in their key management model, and proved that the algorithm is as strong as the elliptic curve discrete logarithm problem. In this work, we study the security of their signcryption algorithm. Unfortunately, we find that their algorithm is insecure: the adversary can impersonate the base station by sending forged messages to the cluster leaders after capturing the signcrypted messages. Hence, the key management model proposed by them is insecure. We then propose an improved signcryption algorithm that fixes this weakness.
A More Efficient AES Threshold Implementation
Threshold Implementations provide provable security against first-order power analysis attacks for hardware and software implementations. Like masking, the approach relies on secret sharing but it differs in the implementation of logic functions. At \textsc{Eurocrypt} 2011 Moradi et al. published the to date most compact Threshold Implementation of AES-128 encryption. Their work shows that the number of required random bits may be an additional evaluation criterion, next to area and speed. We present a new Threshold Implementation of AES-128 encryption that is 18\% smaller, 7.5\% faster and that requires 8\% less random bits than the implementation from \textsc{Eurocrypt} 2011. In addition, we provide results of a practical security evaluation based on real power traces in adversary-friendly conditions. They confirm the first-order attack resistance of our implementation and show good resistance against higher-order attacks.
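Both masking and Threshold Implementations build on Boolean secret sharing of every intermediate value; the shared representation can be sketched as follows (generic n-share XOR sharing of an 8-bit value, not the paper's AES circuit):

```python
import secrets

def share(x: int, n: int = 3, bits: int = 8) -> list:
    """Split x into n Boolean shares whose XOR equals x.
    Any n-1 shares are jointly uniform and independent of x."""
    shares = [secrets.randbelow(1 << bits) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s  # last share makes the XOR of all shares equal x
    return shares + [last]

def recombine(shares: list) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

x = 0xA7
sh = share(x)
assert recombine(sh) == x
# A first-order attacker, who probes one share at a time, sees only
# uniform randomness -- the property the paper's evaluation confirms.
```

Where Threshold Implementations differ from plain masking is in how non-linear logic functions are computed on these shares, which the code above does not attempt to show.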
Examination of a New Defense Mechanism: Honeywords
It has become much easier to crack a password hash with the advancements in graphical processing unit (GPU) technology. An adversary can recover a user’s password using a brute-force attack on the password hash. Once the password has been recovered, no server can detect any illegitimate user authentication (if there is no extra mechanism used). In this context, Juels and Rivest recently published a paper on improving the security of hashed passwords. Roughly speaking, they propose an approach for user authentication in which some false passwords, i.e., “honeywords”, are added into a password file in order to detect impersonation. Their solution includes an auxiliary secure server called the “honeychecker”, which can distinguish a user’s real password among her honeywords and immediately sets off an alarm whenever a honeyword is used. In this paper, we analyze the security of the proposal, provide some possible improvements which are easy to implement, and introduce an enhanced model as a solution to an open problem.
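The division of labor between the login server and the honeychecker can be sketched in a few lines (a toy model of the Juels-Rivest idea; the hash, data layout and names are illustrative, and a real deployment would use a slow, salted password hash):

```python
import hashlib
import hmac
import secrets

def h(pw: str) -> str:
    # Toy hash for illustration only; use bcrypt/scrypt/Argon2 in practice.
    return hashlib.sha256(pw.encode()).hexdigest()

# The login server stores, per user, a shuffled list of "sweetword"
# hashes: the real password plus the honeywords.
user = "alice"
sweetwords = ["red7horse", "blue3horse", "green9horse"]  # one is real
secrets.SystemRandom().shuffle(sweetwords)
password_file = {user: [h(w) for w in sweetwords]}

# The honeychecker is a separate hardened server that stores ONLY the
# index of the real password -- never the password or its hash.
honeychecker = {user: sweetwords.index("red7horse")}

def login(user: str, attempt: str) -> str:
    digest = h(attempt)
    matches = [i for i, hh in enumerate(password_file[user])
               if hmac.compare_digest(hh, digest)]
    if not matches:
        return "reject"                  # wrong password
    if matches[0] == honeychecker[user]:
        return "accept"                  # the real password
    return "alarm"                       # a honeyword: the file leaked

assert login("alice", "red7horse") == "accept"
assert login("alice", "blue3horse") == "alarm"
assert login("alice", "nope") == "reject"
```

The point of the split is that an attacker who steals and cracks the password file still cannot tell the real password from the honeywords without also compromising the honeychecker.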
AEGIS: A Fast Authenticated Encryption Algorithm
This paper introduces a dedicated authenticated encryption algorithm AEGIS; AEGIS allows for the protection of associated data, which makes it very suitable for protecting network packets. AEGIS-128L uses eight AES round functions to process a 32-byte message block (one step); AEGIS-128 uses five AES round functions to process a 16-byte message block (one step); and AEGIS-256 uses six AES round functions. The security analysis shows that these algorithms offer a high level of security. On the Intel Sandy Bridge Core i5 processor, the speeds of AEGIS-128L, AEGIS-128 and AEGIS-256 are around 0.48, 0.66 and 0.7 clock cycles/byte (cpb), respectively, for 4096-byte messages. This is substantially faster than the AES CCM, GCM and OCB modes.
Write-Only Oblivious RAM based Privacy-Preserved Access of Outsourced Data
Oblivious RAM (ORAM) has recently attracted a lot of interest since it can be used to protect a data user's access pattern from (honest-but-curious) outsourced storage. This is achieved by simulating each original data read or write operation with some read and write operations on some real and dummy data items. This paper proposes two single-server write-only ORAM schemes and one multi-server write-only ORAM scheme, which simulate only the write operations and protect only the write pattern. This reduction in functionality, however, allows us to build much simpler and more efficient (in terms of communication cost and storage usage) write-only ORAMs. Write-only ORAM can be used in conjunction with Private Information Retrieval (PIR), a technique to protect a data user's read patterns, in order to protect both write and read patterns. Write-only ORAM may also be used alone, when only write patterns need protection. We study two usage scenarios. (i) Data publishing/sharing: a data owner shares the data with others, who only consume the published information. Data consumers should not have write access to the outsourced data, and thus cannot use ORAM to protect their read patterns in this scenario. To hide access patterns from the outsourced storage, the data owner can use ORAM to write data, and data consumers use PIR to read data. Alternatively, for some applications, a data consumer can trivially download all data once or regularly, and neither the data owner nor data consumers mind that the outsourced storage learns such a read pattern. Compared with traditional ORAM, the simpler write-only ORAM here produces much lower communication cost and/or client-side storage usage. Our single-server write-only ORAM scheme produces lower (typically one order lower) communication cost with the same client-side storage usage, or requires much less (typically at least one order less) client-side storage to achieve the same level of communication cost, than the best known single-server full-functional ORAM schemes do. Compared with the best known multi-server ORAM scheme, our write-only ORAM schemes have lower (typically one order lower) communication cost, or achieve the same communication cost with the same client-side storage usage in the single-server setting. (ii) The data owner's personal use: our write-only ORAM schemes combined with PIR can be used as building blocks for some existing full-functional ORAM schemes. This reduces the communication costs of two full-functional ORAM schemes, with savings that grow with the maximum data item count; the resulting communication cost is typically one order lower than that of the previous best known ORAM scheme, and one of the resulting schemes additionally uses several orders less client-side storage than the best known single-server ORAM.
Secure Key Exchange and Sessions Without Credentials
Secure communication is a fundamental cryptographic primitive. Typically, security is achieved by relying on an existing credential infrastructure, such as a PKI or passwords, for identifying the end points to each other. But what can be obtained when no such credential infrastructure is available?
Clearly, when there is no pre-existing credential infrastructure, an adversary can mount successful ``man in the middle'' attacks by modifying the communication between the legitimate endpoints. Still, we show that not all is lost, as long as the adversary's control over the communication is not complete: We present relatively efficient key exchange and secure session protocols that provide the full guarantee of secure communication as long as the adversary fails to intercept even a single message between the legitimate endpoints.
To obtain this guarantee we strengthen the notion of key exchange to require that the keys exchanged in any two sessions are independent of each other as long as each session has at least one honest endpoint, even if both sessions have an adversarial endpoint. We call this notion credential-free key exchange (CFKE). We then strengthen the existing notion of secure session protocols to provide the above guarantee given a CFKE (existing definitions and constructions are insufficient for this purpose). We provide two alternative definitions and constructions of CFKE: a game-based one with a construction in the RO model, and a UC one with a construction in the CRS model.
Faster Compact Diffie-Hellman: Endomorphisms on the x-line
We describe an implementation of fast elliptic curve scalar multiplication, optimized for Diffie–Hellman Key Exchange at the 128-bit security level. The algorithms are compact (using only x-coordinates), run in constant time with uniform execution patterns, and do not distinguish between the curve and its quadratic twist; they thus have a built-in measure of side-channel resistance. (For comparison, we also implement two faster but non-constant-time algorithms.) The core of our construction is a suite of two-dimensional differential addition chains driven by efficient endomorphism decompositions, built on curves selected from a family of Q-curve reductions over F_{p^2} with p = 2^{127}-1. We include state-of-the-art experimental results for twist-secure, constant-time, x-coordinate-only scalar multiplication.
Non-Malleability from Malleability: Simulation-Sound Quasi-Adaptive NIZK Proofs and CCA2-Secure Encryption from Homomorphic Signatures
Verifiability is central to building protocols and systems with integrity. Initially, efficient methods employed the Fiat-Shamir heuristic. Since 2008, the Groth-Sahai techniques have been the most efficient in constructing non-interactive witness-indistinguishable and zero-knowledge proofs for algebraic relations. For the important task of proving membership in linear subspaces, Jutla and Roy (Asiacrypt 2013) gave significantly more efficient proofs in the quasi-adaptive setting (QA-NIZK); for membership of the row space of a matrix, their QA-NIZK proofs save group elements compared to Groth-Sahai. Here, we give QA-NIZK proofs made of a {\it constant} number of group elements -- regardless of the number of equations or the number of variables -- and additionally prove them {\it unbounded} simulation-sound. Unlike previous unbounded simulation-sound Groth-Sahai-based proofs, our construction does not involve quadratic pairing product equations and does not rely on a chosen-ciphertext-secure encryption scheme. Instead, we build on structure-preserving signatures with homomorphic properties. We apply our methods to design new and improved CCA2-secure encryption schemes. In particular, we build the first efficient threshold CCA-secure keyed-homomorphic encryption scheme ({\it i.e.}, one where homomorphic operations can only be carried out using a dedicated evaluation key) with publicly verifiable ciphertexts.
Obfuscation ==> (IND-CPA Security =/=> Circular Security)
Circular security is an important notion for public-key encryption schemes and is needed by several cryptographic protocols. In circular security the adversary is given an extra ``hint'' consisting of a cycle of encryptions of secret keys, i.e., (E_{pk_1}(sk_2), ..., E_{pk_n}(sk_1)). A natural question is whether every IND-CPA encryption scheme is also circular secure. It is trivial to see that this is not the case when n=1. In 2010 a separation for n=2 was shown by [ABBC10, GH10] under standard assumptions in bilinear groups.
In this paper we finally settle the question, showing that for every n there exists an IND-CPA secure scheme which is not n-circular secure. Our result relies on the recent progress in program obfuscation.
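For concreteness, the ``hint'' handed to the adversary in an n-cycle has the following shape (sketched with a toy symmetric-style scheme standing in for the public-key scheme E in the abstract; the hash-based keystream is illustrative and makes no security claim):

```python
import hashlib
import secrets

# Toy randomized encryption: keystream = SHA-256(key || nonce).
# A stand-in for a generic encryption scheme, used only to show the
# *shape* of a key cycle -- it is not claimed to be IND-CPA secure.

def enc(key: bytes, msg: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()[:len(msg)]
    return nonce + bytes(m ^ s for m, s in zip(msg, stream))

def dec(key: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    stream = hashlib.sha256(key + nonce).digest()[:len(body)]
    return bytes(b ^ s for b, s in zip(body, stream))

# An n-cycle of keys: key i encrypts key i+1 (indices mod n).
n = 3
keys = [secrets.token_bytes(32) for _ in range(n)]
cycle = [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]

# n-circular security asks that `cycle` be indistinguishable from
# encryptions of zeros; here we only check the cycle decrypts correctly.
for i in range(n):
    assert dec(keys[i], cycle[i]) == keys[(i + 1) % n]
```

The paper's separation constructs a scheme that remains IND-CPA secure yet lets an adversary recognize exactly such a cycle.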
Differing-Inputs Obfuscation and Applications
In this paper, we study the notion of differing-input obfuscation, introduced by Barak et al. (CRYPTO 2001, JACM 2012). For any two circuits C_0 and C_1, a differing-input obfuscator diO guarantees that the non-existence of an adversary that can find an input on which C_0 and C_1 differ implies that diO(C_0) and diO(C_1) are computationally indistinguishable. We show many applications of this notion:
- We define the notion of a differing-input obfuscator for Turing machines and give a construction for the same (without converting the machine to a circuit), with input-specific running times. More specifically, for each input, our obfuscated Turing machine takes time proportional to the running time of the Turing machine on that specific input rather than the machine's worst-case running time.
- We give a functional encryption scheme that allows for secret-keys to be associated with Turing machines, and thereby achieves input-specific running times. Further, we can equip our functional encryption scheme with delegation properties.
- We construct a multi-party non-interactive key exchange protocol with no trusted setup where all parties post only logarithmic-size messages. It is the first such scheme with such short messages. We similarly obtain a broadcast encryption system where the ciphertext overhead and secret-key size is constant (i.e. independent of the number of users), and the public key is logarithmic in the number of users.
All our constructions make inherent use of the power provided by differing-input obfuscation. It is not currently known how to construct systems with these properties from the weaker notion of indistinguishability obfuscation.
Unbalancing Pairing-Based Key Exchange Protocols
In many pairing-based protocols more than one party is involved, and some or all of them may be required to calculate pairings.
Commonly it is the pairing calculation itself which takes most time.
However some parties may be better equipped than others in terms of computational power. By exploiting the bilinearity
property there are established ways to off-load the pairing calculation to an untrusted third party. Here we observe
that this third party may in fact be one of the other participants in the protocol. In this way a protocol may be ``unbalanced''
by shifting the computational load from one participant to another, which may be an advantage in some circumstances.
In this paper we focus on some simple key exchange protocols.
Surprisingly we find that unbalancing a key exchange protocol can endow it with the property of full forward secrecy, even if it did not originally possess it.
Finally we show that a new condition on the choice of pairing-friendly curve can help to minimize the overall computation.
How to Compress (Reusable) Garbled Circuits
A fundamental question about (reusable) circuit garbling schemes is: how small can the garbled circuit be? Our main result is a reusable garbling scheme which produces garbled circuits that are the same size as the original circuit {\em plus} an additive poly(λ, d) bits, where λ is the security parameter and d is the circuit depth. Save for the additive factor, this is the best one could hope for. In contrast, all previous constructions of even single-use garbled circuits incurred a {\em multiplicative} blowup.
Our techniques result in constructions of attribute-based and (single-key secure) functional encryption schemes where the secret key of a depth-d circuit C consists of C itself, {\em plus} poly(λ, d) additional bits. All of these constructions are based on the subexponential hardness of the learning with errors problem.
We also study the dual question of how short the garbled inputs can be, relative to the original input. We demonstrate a (different) reusable circuit garbling scheme, based on multilinear maps, where the size of the garbled input is the same as that of the original input x, {\em plus} poly(λ, d) additional bits. Similar to the above, this also results in attribute-based and (single-key secure) functional encryption schemes where the size of the ciphertext encrypting an input x is the same as that of x, plus poly(λ, d) additional bits.
New abstractions in applied pi-calculus and automated verification of protected executions
Protocols for the protected execution of programs, like those based on a hardware root of trust, will become of fundamental importance for computer security. In parallel to such protocols, there is therefore a need to develop models and tools that allow formal specification and automated verification of the desired security properties. Still, current protocols lack realistic models and automated proofs of security. This is due to several challenges that we address in this paper.
We consider the classical setting of applied pi-calculus and ProVerif, that we enrich with several generic models that allow verification of protocols designed for a given computing platform. Our contributions include models for specifying platform states and for dynamically loading and executing protected programs. We also propose a new method to make ProVerif terminate on a challenging search space - the one obtained by allowing an unbounded number of extensions and resets for the platform configuration registers of the TPM.
We illustrate our methods with the case study of a protocol for a dynamic root of trust (based on a TPM), which includes dynamic loading, measurement and protected execution of programs. We prove automatically with ProVerif that code integrity and secrecy of sealed data hold for the considered protocol.
Solving shortest and closest vector problems: The decomposition approach
In this paper, we present a heuristic algorithm for solving exact,
as well as approximate, shortest vector and closest vector problems on lattices. The algorithm can be seen
as a modified sieving algorithm for which the vectors of the intermediate
sets lie in overlattices or translated cosets of overlattices.
The key idea is hence to no longer work with a single lattice but to move the problems around in
a tower of related lattices. We initiate the algorithm by sampling
very short vectors in an overlattice of the original lattice that
admits a quasi-orthonormal basis and hence an efficient enumeration
of vectors of bounded norm. Taking sums of vectors in the sample, we
construct short vectors in the next lattice. Finally, we
obtain solution vector(s) in the initial lattice as a sum of vectors
of an overlattice. The complexity analysis relies on
the Gaussian heuristic. This heuristic is backed by experiments in
low and high dimensions that closely reflect these estimates when
solving hard lattice problems in the average case.
This new approach allows us
to solve not only shortest vector problems, but also closest vector
problems, in lattices of dimension n, in single-exponential time and
memory. Moreover, the algorithm is straightforward to
parallelize on most computer architectures.
Fully Bideniable Public-Key Encryption
Bideniable encryption allows both sender and receiver in a public-key setting to simultaneously claim that a different message of their choice was transmitted, and support this claim by a good-looking encryption and key-generation randomness, respectively. A weaker version with two variants of algorithms is called flexible or multi-distributional deniability, a stronger one-algorithm version is called full deniability. Bendlin et al. (ASIACRYPT 2011) showed that certain kinds of fully-deniable schemes are constructible using corresponding flexible schemes. In this paper, we show that their construction in the bideniable case has a flaw, and we provide a fixed scheme. Our construction relies on a deterministic subset matching algorithm that assigns to each nonempty subset of a finite set its proper subset reduced by exactly one element. Although this algorithm is crucial in our construction, it may also be of independent interest.
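The subset-reduction map described above (each nonempty subset sent to a proper subset with exactly one element removed) can be realized in its simplest form by deterministically deleting, say, the minimum element; the paper's actual algorithm may impose further structure (e.g. matching-like properties), so treat this purely as a sketch of the map's contract:

```python
from itertools import combinations

def reduce_subset(s: frozenset) -> frozenset:
    """Map each nonempty subset of a finite set to a proper subset
    obtained by removing exactly one element (here: the minimum).
    Deterministic: the same input always yields the same output."""
    if not s:
        raise ValueError("subset must be nonempty")
    return s - {min(s)}

# Every nonempty subset of {0,...,3} maps to a subset one element smaller.
universe = range(4)
for k in range(1, 5):
    for combo in combinations(universe, k):
        s = frozenset(combo)
        t = reduce_subset(s)
        assert t < s and len(t) == len(s) - 1
```

Determinism is the load-bearing property here: both parties can recompute the same reduced subset without any shared randomness.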
Separations in Circular Security for Arbitrary Length Key Cycles
While standard notions of security suffice to protect any message supplied by an adversary, in some situations stronger notions of security are required. One such notion is n-circular security, where ciphertexts Enc(pk1, sk2), Enc(pk2, sk3), ..., Enc(pkn, sk1) should be indistinguishable from encryptions of zero.
In this work we prove the following results for n-circular security, based upon recent candidate constructions of indistinguishability obfuscation [GGH+ 13b, CLT13]:
- For any n there exists an encryption scheme that is IND-CPA secure but not n-circular secure.
- There exists a bit encryption scheme that is IND-CPA secure, but not 1-circular secure.
- If there exists an encryption system where an attacker can distinguish a key encryption cycle from an encryption of zeroes, then in a transformed cryptosystem there exists an attacker which recovers secret keys from the encryption cycles.
Our last result is generic and applies to any such cryptosystem.