## All papers in 2014 (1029 results)

On the Cryptographic Hardness of Finding a Nash Equilibrium

We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding Nash equilibrium is known to be complete.
Previous proposals for basing PPAD-hardness on program obfuscation considered a strong “virtual black-box” notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.
Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.

Security Weaknesses of an "Anonymous Attribute Based Encryption" appeared in ASIACCS'13

Attribute-based Encryption (ABE) has found enormous application in fine-grained access control of shared data, particularly in the public cloud. In 2013, Zhang et al. proposed a scheme called match-then-decrypt [1], in which, before running the decryption algorithm, the user performs a match operation on attribute(s) that provides the information needed to identify whether that user is the intended recipient of the ciphertext. As shown in [1], the match-then-decrypt operation saves computational cost at the receiver, and the scheme supports receiver anonymity. In this paper, we show that Zhang et al.'s scheme [1] does not support receiver anonymity. Any legitimate user or adversary can successfully check whether an attribute is required in the matching phase and, in turn, can reveal the receiver's identity from the attribute.

Simple Lattice Trapdoor Sampling from a Broad Class of Distributions

At the center of many lattice-based constructions is an algorithm that samples a short vector $s$, satisfying $[A|AR-HG]s=t\bmod q$ where $A, AR, H, G$ are public matrices and $R$ is a trapdoor. Although the algorithm crucially relies on the knowledge of the trapdoor $R$ to perform this sampling efficiently, the distribution it outputs should be independent of $R$ given the public values. We present a new, simple algorithm for performing this task. The main novelty of our sampler is that the distribution of $s$ does not need to be Gaussian, whereas all previous works crucially used the properties of the Gaussian distribution to produce such an $s$. The advantage of using a non-Gaussian distribution is that we are able to avoid the high-precision arithmetic that is inherent in Gaussian sampling over arbitrary lattices. So while the norm of our output vector $s$ is on the order of $\sqrt{n}$- to $n$-times larger (the representation length, though, is only a constant factor larger) than in the samplers of Gentry, Peikert, Vaikuntanathan (STOC 2008) and Micciancio, Peikert (EUROCRYPT 2012), the sampling itself can be done very efficiently. This provides a useful time/output trade-off for devices with constrained computing power. In addition, we believe that the conceptual simplicity and generality of our algorithm may lead to it finding other applications.

Lattices with Symmetry

For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we can construct a provably correct deterministic polynomial-time algorithm to accomplish this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction.

XPIR: Private Information Retrieval for Everyone

A Private Information Retrieval (PIR) scheme is a protocol in which a user retrieves a record from a database while hiding which record it is from the database administrators. PIR can be achieved using mutually-distrustful replicated databases, trusted hardware, or cryptography. In this paper we focus on the latter setting, which is known as single-database computationally-Private Information Retrieval (cPIR). Classic cPIR protocols require the database server to execute an algorithm over all the database content at very low speeds, which impairs their usage. In [1], under certain assumptions that were realistic at the time, Sion and Carbunar showed that cPIR schemes were not practical and most likely never would be. To this day, this conclusion is widely accepted by researchers and practitioners. Using the paradigm shift introduced by lattice-based cryptography, we show that the conclusion of Sion and Carbunar is no longer valid: cPIR is of practical value. This is achieved without compromising security, using standard cryptosystems and conservative parameter choices.
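The folded-database structure of a cPIR query can be sketched with any additively homomorphic cryptosystem. The toy below uses a textbook Paillier instance with small, insecure primes rather than the lattice-based scheme the paper actually builds on, purely to illustrate the client and server roles:

```python
from math import gcd

# Toy Paillier keypair (tiny known primes; illustration only, insecure).
p, q = 104729, 1299709
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                            # valid because g = n + 1

def enc(m, r):
    # E(m) = (1 + n)^m * r^n mod n^2; additively homomorphic in m.
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# A 4-record database of small integers; the client wants record 2.
db = [11, 22, 33, 44]
wanted = 2

# Client -> server: an encrypted selection vector, E(1) at the wanted
# index and E(0) elsewhere (indistinguishable to the server).
query = [enc(1 if i == wanted else 0, 7 + i) for i in range(len(db))]

# Server: folds the whole database into one reply without learning the
# index, since prod_i E(sel_i)^db_i = E(sum_i sel_i * db_i).
reply = 1
for c, rec in zip(query, db):
    reply = reply * pow(c, rec, n2) % n2

assert dec(reply) == db[wanted]                 # client recovers 33
```

The communication and computation costs of this pattern are exactly what lattice-based schemes improve on: the server still touches every record, but each homomorphic operation becomes much cheaper.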

Cryptanalysis of the Co-ACD Assumption

At ACM-CCS 2014, Cheon, Lee and Seo introduced a new number-theoretic assumption, the co-approximate common divisor (Co-ACD) assumption, based on which they constructed several cryptographic primitives, including a particularly fast additively homomorphic encryption scheme. For their proposed parameters, they found that their scheme was the ``most efficient of those that support an additive homomorphic property''.
In this paper, we analyze the security of the Cheon-Lee-Seo (CLS) homomorphic encryption scheme and of the underlying Co-ACD assumption, and present several lattice-based attacks that are effectively devastating for the proposed constructions. First, we prove that a few known plaintexts are sufficient to decrypt any ciphertext in the symmetric-key CLS scheme. This breaks the one-wayness of both the symmetric-key and the public-key variants of CLS encryption as well as the underlying decisional Co-ACD assumption for a very wide range of parameters. Then, we show that this attack can be heuristically extended to decrypt small messages without any known plaintext. And finally, we find that Coppersmith's theorem can even be used to solve the search variant of the Co-ACD problem, and mount a full key recovery on the public-key CLS scheme.
Concretely speaking, the parameters proposed by Cheon et al. and originally aiming at 128-bit security can be broken in a matter of seconds. And while it is possible to select parameters outside of the range in which our attacks run in polynomial time, they have to be so large as to render the proposed constructions severely uncompetitive (e.g. our asymptotic estimates indicate that 128 bits of security against our attacks require a modulus of at least 400,000 bits).

How to Generate Repeatable Keys Using Physical Unclonable Functions: Correcting PUF Errors with Iteratively Broadening and Prioritized Search

We present an algorithm for repeatably generating keys using entropy from a Physical Unclonable Function (PUF). PUFs are logically identical physical constructs with Challenge-Response Pairs (CRPs) unique to each device. Applications include initialization of server keys and encryption of FPGA configuration bitstreams. One problem with PUFs is response errors. Our algorithm corrects PUF errors that inhibit key repeatability.
Our approach uses a PUF to generate an error-free PUF value in three steps. First, we repeatedly sample the PUF to determine the most likely value. Second, we apply an iteratively broadening search over candidate values with up to some number of bit errors (in our experiments, two). Third, we apply exhaustive search until the correct value is found or failure is declared. The searches are prioritized by the known bit error rates in decreasing magnitude. We assume the application includes a test for the correct value (e.g., some matching plaintext-ciphertext pairs).
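The prioritized, iteratively broadening search of the last two steps can be sketched as follows; the bit values, error rates, and `is_correct` predicate are hypothetical stand-ins for a real PUF and an application-level key test:

```python
from itertools import combinations

def correct_puf(measured, error_rates, is_correct, max_flips=2):
    """Search for the true PUF value by flipping the likeliest-wrong bits.

    measured: majority-vote bits from repeated sampling,
    error_rates: estimated per-bit error probability,
    is_correct: application-level test (e.g. does the derived key match
    a known plaintext-ciphertext pair).
    """
    # Prioritize candidate flip positions by decreasing error rate.
    order = sorted(range(len(measured)), key=lambda i: -error_rates[i])
    for k in range(max_flips + 1):       # iteratively broaden: 0, 1, 2 flips
        for positions in combinations(order, k):
            cand = measured[:]
            for i in positions:
                cand[i] ^= 1
            if is_correct(cand):
                return cand
    return None                          # declare failure

# Hypothetical demo: the true value differs from the measurement only in
# the bit with the highest estimated error rate.
true_value = [1, 0, 1, 1, 0, 0, 1, 0]
measured   = [1, 0, 1, 0, 0, 0, 1, 0]   # bit 3 flipped
rates      = [.01, .01, .02, .30, .01, .02, .01, .01]
assert correct_puf(measured, rates, lambda v: v == true_value) == true_value
```

In a real deployment the `is_correct` test would invoke the cryptographic application itself, and a final exhaustive phase over all remaining positions would follow the bounded search shown here.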
Previous algorithms often omit noisy PUF bits or use error-correcting codes and helper data. Our algorithm can use all PUF bits regardless of noise. Our approach is simple and, for appropriate parameter choices, fast. Unlike previous approaches using error-correcting codes, when used for public-key cryptography our method requires storing only the public key and no other helper data in non-volatile storage.
We implemented a latch-based PUF on FPGAs and measured PUF characteristics to analyze the effectiveness of the algorithm. Tests for a 1024-bit PUF show that 351 samples reduce the probability of error to less than 10^-6. The iterative broadening and exhaustive searches further reduce failure rates.

Topology-Hiding Computation

Secure Multi-party Computation (MPC) is one of the foundational achievements of modern cryptography,
allowing multiple, distrusting, parties to jointly compute a function of their inputs, while revealing nothing but the
output of the function. Following the seminal works of Yao and Goldreich, Micali and Wigderson and Ben-Or, Goldwasser and Wigderson,
the study of MPC has expanded to consider a wide variety of questions, including variants in the attack model,
underlying assumptions, complexity and composability of the resulting protocols.
One question that appears to have received very little attention, however, is that of MPC over an
underlying communication network whose structure is, in itself, sensitive information. This question, in addition to being
of pure theoretical interest, arises naturally in many contexts: designing privacy-preserving social-networks, private peer-to-peer computations,
vehicle-to-vehicle networks and the ``internet of things'' are some of the examples.
In this paper, we initiate the study of ``topology-hiding computation'' in the computational setting. We give formal definitions
in both simulation-based and indistinguishability-based flavors. We show that, even for fail-stop adversaries, there are some strong
impossibility results. Despite this, we show that protocols for topology-hiding computation can be constructed in the semi-honest
and fail-stop models, if we somewhat restrict the set of nodes the adversary may corrupt.

Tightly-Secure Signatures from Chameleon Hash Functions

We give a new framework for obtaining signatures with a tight security reduction from standard hardness assumptions. Concretely, we show that any Chameleon Hash function can be transformed into a (binary) tree-based signature scheme with tight security. The transformation is in the standard model, i.e., it does not make use of any random oracle. For specific assumptions (such as RSA, Diffie-Hellman and Short Integer Solution (SIS)) we further manage to obtain a more efficient flat-tree construction. Our framework explains and generalizes most of the existing schemes as well as providing a generic means for constructing tight signature schemes based on arbitrary assumptions, which improves the standard Merkle tree transformation. Moreover, we obtain the first tightly secure signature scheme from the SIS assumption and several schemes based on Diffie-Hellman in the standard model.
Some of our signature schemes can (using known techniques) be combined with Groth-Sahai proof methodology to yield tightly secure and efficient simulation-sound NIZK proofs of knowledge and CCA-secure encryption in the multi-user/-challenge setting under classical assumptions.

Side-Channel Leakage and Trace Compression using Normalized Inter-Class Variance

Security and safety critical devices must undergo penetration testing including Side-Channel Attacks (SCA) before certification.
SCA are powerful and easy to mount but often need huge computation power, especially in the presence of countermeasures.
Few efforts have been made to reduce the computational complexity of SCA by selecting a small subset of points where leakage prevails.
In this paper, we propose a method to detect relevant leakage points in side-channel traces.
The method is based on Normalized Inter-Class Variance (NICV).
A key advantage of NICV over the state of the art is that NICV needs neither a clone device nor knowledge of the secret parameters of the crypto-system.
NICV has a low computation requirement and it detects leakage using public information like input plaintexts or output ciphertexts only.
It is shown that NICV can be related to Pearson correlation and signal to noise ratio (SNR) which are standard metrics.
NICV can be used to theoretically compute the minimum number of traces required to attack an implementation.
A theoretical rationale for NICV and some practical applications to real crypto-systems are provided to support our claims.
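Writing X for the public value (e.g. a plaintext byte) and Y for the leakage at one trace point, NICV is commonly defined as Var(E[Y|X]) / Var(Y). A minimal sketch of that computation, on made-up toy traces, might look like:

```python
from collections import defaultdict
from statistics import mean, pvariance

def nicv(labels, samples):
    """NICV at one trace point: Var_x( E[Y | X = x] ) / Var(Y).

    labels: the public value per trace (e.g. a plaintext byte),
    samples: the measured leakage per trace at this time sample.
    NICV lies in [0, 1]; values near 1 flag a strongly leaking point.
    """
    m, N = mean(samples), len(samples)
    groups = defaultdict(list)
    for x, y in zip(labels, samples):
        groups[x].append(y)
    # Variance "explained" by the public value: variance of the
    # per-class means, weighted by class size.
    explained = sum(len(g) * (mean(g) - m) ** 2 for g in groups.values()) / N
    return explained / pvariance(samples)

# Toy data: the first point is a deterministic function of the label
# (its Hamming weight), the second is unrelated to the label.
assert abs(nicv([0, 1, 2, 3], [0, 1, 1, 2]) - 1.0) < 1e-9
assert nicv([0, 0, 1, 1], [1, 3, 2, 2]) == 0
```

Scanning this ratio across all time samples of a trace set and keeping only the peaks is the trace-compression step: subsequent attacks need only be run on the retained points.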

Last updated: 2015-01-13

Related-Key Differential Cryptanalysis of Reduced-Round ITUBee

ITU{\scriptsize{BEE}} is a software-oriented lightweight block cipher, first proposed at LightSec 2013. The cipher is especially suitable for resource-limited applications, such as sensor nodes in wireless sensor networks. To evaluate the security level of the cipher, we perform differential attacks on ITU{\scriptsize{BEE}} reduced to 10 rounds and 11 rounds, with time complexities ${2^{65.97}}$ and ${2^{79.03}}$, respectively. To the best of our knowledge, our analysis is the first related-key differential cryptanalysis of 10-round ITU{\scriptsize{BEE}}.

Algebraic Algorithms for LWE

The Learning with Errors (LWE) problem, proposed by Regev in 2005, has become an ever-popular cryptographic primitive, due mainly to its simplicity, flexibility and convincing theoretical arguments regarding its hardness. Among the main proposed approaches to solving LWE instances — namely, lattice algorithms, combinatorial algorithms, and algebraic algorithms — the last is the one that has received the least attention in the literature, and is the focus of this paper. We present a detailed and refined complexity analysis of the original Arora-Ge algorithm, which reduced LWE to solving a system of high-degree, error-free polynomials. Moreover, we generalise their method and establish the complexity of applying Gröbner basis techniques from computational commutative algebra to solving LWE. As a result, we show that the use of Gröbner basis algorithms yields an exponential speed-up over the basic Arora-Ge algorithm. On the other hand, our results show that such techniques do not yield a subexponential algorithm for the LWE problem.
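The core idea of the Arora-Ge reduction can be illustrated in miniature: if the error is known to lie in a small set, each LWE sample yields a noise-free polynomial equation in the secret. The sketch below uses toy parameters and brute-force search in place of the linearization or Gröbner-basis techniques the paper analyses:

```python
import random
random.seed(1)

# Miniature Arora-Ge: if the LWE error is known to lie in {-1, 0, 1},
# every sample (a, b = <a, s> + e mod q) yields the *noise-free* equation
#   (b - <a, s>) * (b - <a, s> - 1) * (b - <a, s> + 1) = 0  (mod q),
# because one factor vanishes whichever error occurred.
q, n, m = 17, 3, 12
s = [random.randrange(q) for _ in range(n)]
samples = []
for _ in range(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    samples.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))

def satisfies(cand):
    for a, b in samples:
        d = (b - sum(x * y for x, y in zip(a, cand))) % q
        if d not in (0, 1, q - 1):       # no factor vanishes: reject
            return False
    return True

assert satisfies(s)                      # the true secret always fits
# With toy parameters the noise-free system can be solved by brute
# force; with enough samples the solution is unique w.h.p.  Arora-Ge
# instead solve such systems by linearization (or Groebner bases).
sols = [(x, y, z) for x in range(q) for y in range(q) for z in range(q)
        if satisfies([x, y, z])]
assert tuple(s) in sols
```

The degree of these equations grows with the size of the error support, which is why the method is most effective for narrow error distributions such as the binary-error variant discussed next.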
We also apply our algebraic algorithm to the BinaryError-LWE problem, which was recently introduced by Micciancio and Peikert. We show that BinaryError-LWE in dimension n can be solved in subexponential time given access to a quasi-linear number of samples m under a regularity assumption. We also give precise complexity bounds for BinaryError-LWE given access to linearly many samples. Our approach outperforms the best currently-known generic heuristic exact CVP solver as soon as m/n ≥ 6.6.
The results in this work depend crucially on the assumption that the encountered systems have no special structure. We give experimental evidence that this assumption holds and also prove the assumption in some special cases. Therewith, we also make progress towards proving Fröberg's long-standing conjecture from algebraic geometry.

Sorting and Searching Behind the Curtain: Private Outsourced Sort and Frequency-Based Ranking of Search Results Over Encrypted Data

We study the problem of private outsourced sorting of encrypted
data. We start by proposing a novel sorting protocol that allows a user to outsource his data to a cloud server in an encrypted form and then request the server to perform computations on this data and sort the result. To perform the sorting the server is assisted by a secure coprocessor with minimal computational and memory resources. The server and the coprocessor are assumed to be honest but curious, i.e., they honestly follow the protocol but are interested in learning more about the user data. We refer to the new protocol as ``private outsourced sorting'' since it guarantees that neither the server
nor the coprocessor learn anything about user data as long as they are
non-colluding. We formally define private outsourced sorting and provide an efficient construction that is based on semi-homomorphic encryption.
As an application of our private sort, we present MSRE: the first scheme for outsourced search over encrypted data that efficiently answers multi-term queries with the result ranked using frequency of query terms in the data, while maintaining data privacy. To construct MSRE we use searchable encryption techniques combined with our new private sort framework. Finally, although not discussed in this work, we believe that our private sort framework can turn out to be an important tool for more applications that require outsourced sorting while maintaining data privacy, e.g., database queries.

Last updated: 2015-01-02

Modified SIMON and SPECK: Lightweight Hybrid Design for Embedded Security

Lightweight cryptography is an emerging field that will play a critical role in areas like pervasive computing and the Internet of Things (IoT). In recent years, many lightweight ciphers have been designed that are better suited for small-scale embedded security. Lightweight ciphers like PRESENT, KLEIN, Hummingbird-2, XTEA and CLEFIA are known for compact hardware implementations. Recently the SIMON and SPECK ciphers, which are Feistel-based designs, have been introduced. SIMON and SPECK are flexible, have very low memory requirements, and perform well in both hardware and software. There is always a tradeoff between security and performance, and strengthening the design of these ciphers will increase their acceptability for embedded applications. In this paper, we propose a novel approach that increases the strength and performance of SIMON and SPECK by adding a confusion layer taken from the cipher RECTANGLE. RECTANGLE has a robust S-box compared to other lightweight ciphers, which makes the design fast and efficient. We add this substitution property to SIMON and SPECK after analyzing the cryptanalytic properties of both ciphers. The S-box of RECTANGLE is well suited to SIMON and SPECK because their designs have an asymmetric permutation, which is the basic requirement for RECTANGLE; the combination of the S-box and an asymmetric permutation achieves a robust design. The hybrid design proposed in this paper needs less memory than existing ciphers, and the added non-linear substitution layer makes SIMON and SPECK more resistant to attacks. This robust design will have a positive impact on the field of lightweight cryptosystems.
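For reference, the unmodified SPECK round function and key schedule can be sketched as follows (shown for Speck64/128: 32-bit words, rotations 8 and 3, 27 rounds). This is the base design the paper modifies, not the proposed hybrid with the RECTANGLE S-box:

```python
# Sketch of the standard Speck64/128 round function and key schedule.
MASK, ALPHA, BETA, ROUNDS = 0xFFFFFFFF, 8, 3, 27

def ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def round_enc(x, y, k):
    # ARX round: modular add, rotations, and key XOR -- no S-box.
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def round_dec(x, y, k):
    y = ror(x ^ y, BETA)
    x = rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y

def expand(K):                     # K = [k0, l0, l1, l2] (128-bit key)
    l, k = list(K[1:]), [K[0]]
    for i in range(ROUNDS - 1):    # the schedule reuses the round function
        nl, nk = round_enc(l[i], k[i], i)
        l.append(nl)
        k.append(nk)
    return k

def encrypt(x, y, rk):
    for k in rk:
        x, y = round_enc(x, y, k)
    return x, y

def decrypt(x, y, rk):
    for k in reversed(rk):
        x, y = round_dec(x, y, k)
    return x, y

rk = expand([0x03020100, 0x0B0A0908, 0x13121110, 0x1B1A1918])
pt = (0x74614620, 0x736E6165)
assert decrypt(*encrypt(*pt, rk), rk) == pt   # round-trip sanity check
```

The round function's non-linearity comes only from the modular addition; the paper's hybrid inserts a substitution layer on top of this ARX structure.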

Compact Accumulator using Lattices

An accumulator is a succinct aggregate of a set of values where it is possible to issue short membership proofs for each accumulated value. A party in possession of such a membership proof can then demonstrate that the value is included in the set. In this paper, we present the first lattice-based accumulator scheme that issues compact membership proofs. The security of our scheme is based on the hardness of the Short Integer Solution (SIS) problem.

Double-and-Add with Relative Jacobian Coordinates

One of the most efficient ways to implement a scalar multiplication on elliptic curves with precomputed points is to use mixed coordinates (affine and Jacobian). We show how to relax these preconditions by introducing relative Jacobian coordinates and give an algorithm to compute a scalar multiplication where the precomputed points can be given in Jacobian coordinates. We also show that this new approach is compatible with Meloni’s trick, which was already used in other papers to reduce the number of multiplications needed for a double-and-add step to 18 field multiplications.
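For context, the standard setting the paper relaxes — double-and-add with Jacobian accumulation and mixed (Jacobian plus affine) additions against a precomputed affine point — can be sketched on a toy curve. Exceptional cases (point at infinity, adding a point to itself) are omitted for brevity:

```python
# Toy double-and-add on E: y^2 = x^3 + ax + b over F_p, accumulating in
# Jacobian coordinates (X, Y, Z) ~ (X/Z^2, Y/Z^3), with mixed additions
# against the affine base point.  This is the classic method, not the
# paper's relative-Jacobian variant.
p, a, b = 23, 1, 1
P = (3, 10)                           # on the curve: 10^2 = 3^3 + 3 + 1 mod 23

def jac_double(X, Y, Z):
    S = 4 * X * Y * Y % p
    M = (3 * X * X + a * pow(Z, 4, p)) % p
    X2 = (M * M - 2 * S) % p
    Y2 = (M * (S - X2) - 8 * pow(Y, 4, p)) % p
    return X2, Y2, 2 * Y * Z % p

def jac_add_affine(X, Y, Z, x2, y2):  # mixed Jacobian + affine addition
    U2 = x2 * Z * Z % p
    S2 = y2 * pow(Z, 3, p) % p
    H, R = (U2 - X) % p, (S2 - Y) % p
    X3 = (R * R - pow(H, 3, p) - 2 * X * H * H) % p
    Y3 = (R * (X * H * H - X3) - Y * pow(H, 3, p)) % p
    return X3, Y3, H * Z % p

def scalar_mult(k, pt):               # MSB-first double-and-add
    X, Y, Z = pt[0], pt[1], 1
    for bit in bin(k)[3:]:            # skip the leading 1
        X, Y, Z = jac_double(X, Y, Z)
        if bit == "1":
            X, Y, Z = jac_add_affine(X, Y, Z, *pt)
    zi = pow(Z, -1, p)                # one final inversion back to affine
    return X * zi * zi % p, Y * pow(zi, 3, p) % p

assert scalar_mult(5, P) == (9, 16)   # 5P on this textbook curve
```

Note that only one field inversion is performed, at the very end; the paper's contribution is to keep this property while allowing the precomputed points themselves to be stored in Jacobian rather than affine form.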

Computational Independence

We introduce different notions of independence, especially computational independence (or, more precisely, independence by polynomial-size circuits (PSC)), which is the analogue of computational indistinguishability. We give some first implications and show that an encryption scheme having PSC-independent plaintexts and ciphertexts is equivalent to one having indistinguishable encryptions.

The Boomerang Attacks on BLAKE and BLAKE2

In this paper, we study the security margins of the hash functions BLAKE and BLAKE2 against the boomerang attack. We launch boomerang attacks on all four members of BLAKE and BLAKE2, and compare their complexities. We propose 8.5-round boomerang attacks on both BLAKE-512 and BLAKE2b with complexities $2^{464}$ and $2^{474}$ respectively. We also propose 8-round attacks on BLAKE-256 with complexity $2^{198}$ and 7.5-round attacks on BLAKE2s with complexity $2^{184}$. We verify the correctness of our analysis by giving practical 6.5-round Type I boomerang quartets for each member of BLAKE and BLAKE2.
According to our analysis, some tweaks introduced by BLAKE2 have increased its resistance against boomerang attacks to a certain extent.
But on the whole, BLAKE still has a higher security margin than BLAKE2.

Proof-of-Work as Anonymous Micropayment: Rewarding a Tor Relay

In this paper we propose a new micropayment scheme which can be used to reward Tor relay operators. Tor clients do not pay Tor relays with electronic cash directly but submit proof-of-work shares, which the relays can resubmit to a crypto-currency mining pool. Relays credit users who submit shares with tickets that can later be used to purchase improved service. Both shares and tickets, when sent over Tor circuits, are anonymous. Our analysis of crypto-currency market prices shows that the proposed scheme can compensate for a significant part of a Tor relay operator's expenses.

On Continuous After-the-Fact Leakage-Resilient Key Exchange

Side-channel attacks are a severe type of attack against implementations of cryptographic primitives. Leakage-resilient cryptography is a new theoretical approach to formally address the problem of side-channel attacks. Recently, the Continuous After-the-Fact Leakage (CAFL) security model was introduced for two-party authenticated key exchange (AKE) protocols. In the CAFL model, an adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated. The model supports continuous leakage even when the adversary learns certain ephemeral secrets or session keys. The amount of leakage is limited per query, but there is no bound on the total leakage. A generic leakage-resilient key exchange protocol $\pi$ has also been introduced and formally proved secure in the CAFL model. In this paper, we comment on the CAFL model and show that it does not capture its claimed security. Furthermore, we present an attack and counterproofs for the security of protocol $\pi$, which invalidate its formal security proofs in the CAFL model.

A Preliminary FPGA Implementation and Analysis of Phatak’s Quotient-First Scaling Algorithm in the Reduced-Precision Residue Number System

We built and tested the first hardware implementation of Phatak’s Quotient-First Scaling (QFS) algorithm in the reduced-precision residue number system (RP-RNS). This algorithm is designed to expedite division in the residue number system for the special case when the divisor is known ahead of time (i.e., when the divisor can be considered a constant, as in the modular exponentiation required for RSA encryption/decryption). We implemented the QFS algorithm on an FPGA and tested it for operand lengths up to 1024 bits. The RP-RNS modular exponentiation algorithm is not based on Montgomery’s method, but on quotient estimation derived from the straightforward division algorithm, with a substantial amount of precomputation whose results are read from look-up tables at run time.
Phatak’s preliminary analysis indicates that, under reasonable assumptions about hardware capabilities, the execution time of a single modular multiplication (or QFS) grows logarithmically with the operand word length. We experimentally confirmed this predicted growth rate with our FPGA implementation. Though our implementation did not outperform the most recent implementations, such as that of Gandino et al., we determined that this outcome was solely a consequence of tradeoffs stemming from our decision to store the lookup tables on the FPGA.
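The appeal of the residue number system is that addition and multiplication decompose into independent, carry-free, per-modulus channels; the hard operation is division/scaling, which is exactly what the QFS algorithm targets. A toy sketch with hypothetical moduli, showing only the easy operations and the CRT readback:

```python
from math import prod

# RNS in miniature: a number is represented by its residues modulo
# pairwise-coprime moduli; multiplication is then channel-wise.
moduli = [13, 17, 19, 23]             # dynamic range M = 13*17*19*23 = 96577
M = prod(moduli)

def to_rns(x):
    return [x % m for m in moduli]

def from_rns(res):                    # Chinese Remainder Theorem readback
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # r * Mi * (Mi^-1 mod m)
    return x % M

a, b = 1234, 71
prod_rns = [ra * rb % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli)]
assert from_rns(prod_rns) == a * b    # 87614, within the dynamic range
```

Because each channel is narrow and independent, the channels can be processed in parallel in hardware, which is what makes RNS attractive for large modular exponentiations despite the cost of scaling.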

DTLS-HIMMO: Efficiently Securing a Post-Quantum World with a Fully-Collusion Resistant KPS

The future development of quantum computers could render many key-agreement algorithms used on the Internet today completely insecure, endangering applications such as online banking, e-commerce and e-health. At the same time, the Internet is evolving to enable the Internet of Things (IoT), in which billions of devices deployed in critical applications like healthcare, smart cities and smart energy are being connected to the Internet. The IoT requires not only strong and quantum-secure security, as current Internet applications do, but also efficient operation. The recently introduced HIMMO scheme enables lightweight identity-based key sharing and verification of credentials in a non-interactive way. The collusion-resistance properties of HIMMO enable direct secure communication between any pair of Internet-connected devices. The facts that attacking HIMMO requires lattice techniques and that it is extremely lightweight make HIMMO an ideal approach for key agreement and information verification in a post-quantum world.
Building on the HIMMO scheme, this paper first shows how HIMMO can be efficiently implemented even in resource-constrained devices, enabling combined key agreement and credential verification one order of magnitude more efficiently than ECDH-ECDSA while being quantum secure. We further explain how HIMMO helps to secure the Internet and the IoT by introducing the DTLS-HIMMO operation mode. DTLS, the datagram version of TLS, is becoming the standard security protocol in the IoT, although it is frequently argued that it does not offer the right performance for IoT scenarios. Our design, implementation, and evaluation show that the DTLS-HIMMO operation mode achieves the security properties of the DTLS certificate security suite while being quantum secure and exhibiting the overhead of symmetric-key primitives.

Fair Multiple-bank E-cash in the Standard Model

The multiple-bank e-cash (electronic cash) model allows users and merchants to open their accounts at different banks, which are monitored by a central bank. Several multiple-bank e-cash systems have been proposed in recent years. However, prior implementations of multiple-bank e-cash all require the random-oracle-model idealization in their security analysis, and some schemes that are secure in the random oracle model are trivially insecure under any instantiation of the oracle.
In this paper, based on automorphic blind signatures, the Groth-Sahai proof system and a new group blind signature, we construct a fair multiple-bank e-cash scheme. The new scheme is proved secure in the standard model and provides functionalities such as owner tracing, coin tracing, identification of double spenders and signer tracing. In order to sign two messages at once, we extend Ghadafi's group blind signature to a new group blind signature scheme, which may be of independent interest.

Simple composition theorems of one-way functions -- proofs and presentations

One-way functions are both central to cryptographic theory and a clear example of its complexity as a theory. Motivated by the aim of better understanding theories, proofs, and the communicability of proofs in this area, we study some small theorems on one-way functions, namely composition theorems of the form "if $f$ (or $h$) is well-behaved in some sense and $g$ is a one-way function, then $f \circ g$ (respectively, $g \circ h$) is a one-way function".
We present two basic composition theorems, and generalisations of them which may well be folklore. Then we experiment with different proof presentations, including using the Coq theorem prover, using one of the theorems as a case study.

A pure block chain based decentralized exchange.

A pure peer-to-peer version of an exchange system would allow all parties access to the market without relying on any central organization for market access. This paper proposes a solution to the problem of maintaining an order book and determining the execution rate in a peer-to-peer network. Like crypto-currencies, the network relies on a blockchain of transactions. A digital signature system forms the core of the decentralized marketplace. The paper defines basic ground rules for the working of a decentralized exchange. The major components of the decentralized exchange are the issuing process, the co-existence of the blockchain and order books, and the functions of the miner. Unlike other crypto-currencies, the decentralized exchange would have a trust-based issuing process, which in the long run would be a zero-sum game. The decentralized exchange would have three types of entities: issuers, traders and miners.

CONIKS: Bringing Key Transparency to End Users

We present CONIKS, an end-user key verification service capable of integration in end-to-end encrypted communication systems. CONIKS builds on transparency log proposals for web server certificates but solves several new challenges specific to key verification for end users. CONIKS obviates the need for global third-party monitors and enables users to efficiently monitor their own key bindings for consistency, downloading less than 20 kB per day to do so even for a provider with billions of users. CONIKS users and providers can collectively audit providers for non-equivocation, and this requires downloading a constant 2.5 kB per provider per day. Additionally, CONIKS preserves the level of privacy offered by today’s major communication services, hiding the list of usernames present and even allowing providers to conceal the total number of users in the system.
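The basic mechanics of monitoring one's own binding in a transparency log can be sketched with a plain Merkle hash tree. CONIKS itself uses a Merkle prefix tree with additional privacy features, so this shows only the underlying inclusion-proof idea, with hypothetical bindings:

```python
import hashlib

# A client checks that its name-to-key binding is present under the
# provider's published (and signed) tree root, using only log-many hashes.
def H(data):
    return hashlib.sha256(data).digest()

def build(leaves):                    # power-of-two leaf count for brevity
    levels = [[H(b"leaf:" + l) for l in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(b"node:" + prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels                     # levels[-1][0] is the root

def prove(levels, idx):               # sibling hashes along the leaf's path
    proof = []
    for level in levels[:-1]:
        proof.append(level[idx ^ 1])
        idx //= 2
    return proof

def verify(leaf, idx, proof, root):
    h = H(b"leaf:" + leaf)
    for sib in proof:
        h = H(b"node:" + (sib + h if idx & 1 else h + sib))
        idx //= 2
    return h == root

bindings = [b"alice:pkA", b"bob:pkB", b"carol:pkC", b"dave:pkD"]
levels = build(bindings)
root = levels[-1][0]
assert verify(b"bob:pkB", 1, prove(levels, 1), root)
assert not verify(b"bob:pkEvil", 1, prove(levels, 1), root)
```

Auditing for non-equivocation then amounts to checking that all parties see the same sequence of signed roots, which is why the per-provider cost stays constant.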

COFFE: Ciphertext Output Feedback Faithful Encryption

In this paper we introduce the first authenticated encryption scheme
based on a hash function, called COFFE. This research has been
motivated by the challenge to fit secure cryptography into constrained
devices -- some of these devices have to use a hash function, anyway,
and the challenge is to avoid the usage of an additional block cipher
to provide authenticated encryption. COFFE satisfies the common
security requirements regarding authenticated encryption, i.e., IND-CPA-
and INT-CTXT-security. Beyond that, it provides the following
additional security features: resistance against side-channel attacks
and INT-CTXT security in the nonce-misuse scenario. It also supports
failure-friendly authentication under reasonable assumptions.

Experiments in Encrypted and Searchable Network Audit Logs

We consider the scenario where a consumer securely outsources their network telemetry data to a cloud service provider and enables a third party to audit this telemetry for security forensics. In particular, we consider the use case of privacy-preserving search in network log audits. In this paper we experiment with advances in Identity-Based Encryption and Attribute-Based Encryption schemes for auditing network logs.

Last updated: 2015-05-27

Robustly Secure Two-Party Authenticated Key Exchange from Ring-LWE

Using the hardness assumption of Ring Decision Learning With Errors (Ring-DLWE) on lattices, we propose a new authenticated key exchange (AKE) scheme based on Peikert’s reconciliation technique. The proposed scheme is provably secure in the CK+ model. Compared with traditional Diffie-Hellman (DH) based AKE schemes, the proposed scheme not only has better efficiency and stronger security but also resists quantum attacks, owing to the hardness assumption on lattice problems. A comparison with current Ring-LWE-based AKE schemes shows that the proposed scheme protects the shared session key with a balanced key derivation function (KDF).

Complete Characterization of Fairness in Secure Two-Party Computation of Boolean Functions

Fairness is a desirable property in secure computation; informally it means that if one party gets the output of the function, then all parties get the output.
Alas, an implication of Cleve's result (STOC 86) is that when there is no honest majority, in particular in the important case of the two-party setting, there exist
functions that cannot be computed with fairness. In a surprising result, Gordon et al.\ (JACM 2011) showed that some interesting functions can be computed with fairness in the two-party setting, and re-opened the question of understanding which Boolean functions can be computed with fairness, and which cannot.
Our main result in this work is a complete characterization of the (symmetric)
Boolean functions that can be computed with fairness in the two-party setting; this settles an open problem of Gordon et al. The characterization is quite simple: A function can be computed with fairness \emph{if and only if} the all-one vector or the all-zero vector is in the affine span of either the rows or the columns of the matrix describing the function. This is true for both deterministic and randomized functions. To prove the possibility result, we modify the protocol of Gordon et al.\ (JACM 2011); the resulting protocol computes with full security (and in particular with fairness) all functions that are computable with fairness.
We extend the above result in two directions. First, we completely characterize the Boolean functions that can be computed with fairness in the multiparty case, when the number of parties is constant and at most half of the parties can be malicious.
Second, we consider the two-party setting with asymmetric Boolean functionalities, that is, when the output of each party is one bit, but the outputs are not necessarily the same. We generalize our aforementioned protocol for symmetric functions to handle asymmetric functions, and obtain a sufficient condition for computing such functions with fairness. In addition, we provide a necessary condition for fairness; however, a gap is left between these two conditions.
We then consider a specific asymmetric function in this gap area, and by designing a new protocol, we show that it is computable with fairness. However, we do not give a complete characterization for all functions that lie in this gap, and their classification remains open.
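The span criterion above can be checked mechanically. The following sketch (an illustrative reimplementation over the rationals, not the authors' code; for 0/1 matrices the rational and real affine spans coincide for this test) decides whether the all-one or all-zero vector is an affine combination of the rows or columns of a matrix:

```python
from fractions import Fraction

def _rank(mat):
    """Rank of a rational matrix via Gaussian elimination."""
    m = [row[:] for row in mat]
    rank = 0
    for c in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][c] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        pivot = m[rank][c]
        m[rank] = [x / pivot for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c] != 0:
                f = m[r][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

def in_affine_span(vectors, target):
    """Is target an affine combination (coefficients summing to 1) of vectors?"""
    # Columns of the system are the vectors, each extended with a 1 that
    # enforces the sum-to-one constraint; solvable iff the ranks match.
    A = [[Fraction(v[i]) for v in vectors] for i in range(len(target))]
    A.append([Fraction(1)] * len(vectors))
    b = [Fraction(t) for t in target] + [Fraction(1)]
    return _rank(A) == _rank([row + [bi] for row, bi in zip(A, b)])

def fair_computable(M):
    """The stated characterization: f (a 0/1 matrix) is computable with fairness
    iff the all-one or all-zero vector lies in the affine span of the rows
    or of the columns of M."""
    rows = [list(r) for r in M]
    cols = [list(c) for c in zip(*M)]
    return any(in_affine_span(vs, [v] * k)
               for vs, k in ((rows, len(cols)), (cols, len(rows)))
               for v in (0, 1))

print(fair_computable([[0, 1], [1, 0]]))  # False: consistent with XOR's known unfairness
print(fair_computable([[0, 0], [0, 1]]))  # True: the all-zero vector is in the row span
```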

First Experimental Result of Power Analysis Attacks on a FPGA Implementation of LEA

The lightweight encryption algorithm (LEA) is a 128-bit block cipher introduced in 2013. It is based on addition, rotation, and XOR operations on 32-bit words. Because of this structure, it allows several devices to achieve a high encryption speed and low power consumption. However, side-channel attacks on LEA implementations have not been examined. In this study, we perform a power analysis attack on LEA. We implemented LEA with a 128-bit key on an FPGA in a straightforward manner. Our experimental results show that we can successfully retrieve the 128-bit master key by attacking the first-round encryption.
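The paper's attack targets a real FPGA trace set; as a purely illustrative sketch of correlation power analysis (CPA) in the same spirit, the following simulates Hamming-weight leakage from a toy 8-bit ARX-style step (the constant, word size, and noise model are hypothetical, not the real LEA round) and recovers the key byte by Pearson correlation:

```python
import random

def hw(x):
    """Hamming weight of an 8-bit value."""
    return bin(x).count("1")

def leak(pt, key):
    # Toy leakage: Hamming weight of an ARX-style intermediate
    # ((plaintext XOR key) plus a constant), plus Gaussian noise.
    inter = ((pt ^ key) + 0x5A) & 0xFF   # 0x5A is an arbitrary toy constant
    return hw(inter) + random.gauss(0, 0.3)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

random.seed(1)
SECRET = 0xC3
pts = [random.randrange(256) for _ in range(1500)]
traces = [leak(p, SECRET) for p in pts]

# CPA: correlate measured traces against predicted weights for each key guess;
# the correct guess should yield the highest correlation.
best = max(range(256),
           key=lambda g: abs(pearson(traces,
                                     [hw(((p ^ g) + 0x5A) & 0xFF) for p in pts])))
print(hex(best))  # should match SECRET
```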

Hierarchical deterministic Bitcoin wallets that tolerate key leakage

A Bitcoin wallet is a set of private keys known to a user and which allow that user to spend any Bitcoin associated with those keys. In a hierarchical deterministic (HD) wallet, child private keys are generated pseudorandomly from a master private key, and the corresponding child public keys can be generated by anyone with knowledge of the master public key. These wallets have several interesting applications including Internet retail, trustless audit, and a treasurer allocating funds among departments. A specification of HD wallets has even been accepted as Bitcoin standard BIP32.
Unfortunately, in all existing HD wallets---including BIP32 wallets---an attacker can easily recover the master private key given the master public key and any child private key. This vulnerability precludes use cases such as a combined treasurer-auditor, and some in the Bitcoin community have suspected that this vulnerability cannot be avoided.
We propose a new HD wallet that is not subject to this vulnerability. Our HD wallet can tolerate the leakage of up to m private keys with a master public key size of O(m). We prove that breaking our HD wallet is at least as hard as the so-called ``one more'' discrete logarithm problem.
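The vulnerability in non-hardened derivation can be sketched abstractly. The toy model below (hypothetical group parameters, not secp256k1, and SHA-256 standing in for BIP32's HMAC-based tweak) shows why one leaked child private key plus the master public key yields the master private key:

```python
import hashlib

# Toy discrete-log group (hypothetical parameters, NOT secp256k1):
# private keys live mod n, public keys are g^x mod p.
p = 2**127 - 1
g = 3
n = p - 1

def tweak(pub, index):
    """Public, deterministic key-derivation tweak (stand-in for BIP32's HMAC)."""
    data = pub.to_bytes(16, "big") + index.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def child_priv(master_priv, master_pub, index):
    # Non-hardened derivation: child key = parent key + public tweak.
    return (master_priv + tweak(master_pub, index)) % n

master_priv = 123456789
master_pub = pow(g, master_priv, p)

leaked = child_priv(master_priv, master_pub, 7)

# The attack: the tweak is computable from PUBLIC data alone, so a single
# leaked child private key reveals the master private key by subtraction.
recovered = (leaked - tweak(master_pub, 7)) % n
print(recovered == master_priv)  # True
```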

Constants Count: Practical Improvements to Oblivious RAM

Oblivious RAM (ORAM) is a cryptographic primitive that hides memory access patterns as seen by untrusted storage. This paper proposes Ring ORAM, the most bandwidth-efficient ORAM scheme for the small-client-storage setting in both theory and practice. Ring ORAM is the first tree-based ORAM whose bandwidth is independent of the ORAM bucket size, a property that unlocks multiple performance improvements. First, Ring ORAM's overall bandwidth is 2.3x to 4x better than Path ORAM, the prior-art scheme for small client storage. Second, if memory can perform simple untrusted computation, Ring ORAM achieves constant online bandwidth (~60x improvement over Path ORAM for practical parameters). As a case study, we show Ring ORAM speeds up program completion time in a secure processor by 1.5x relative to Path ORAM. On the theory side, Ring ORAM features a tighter and significantly simpler analysis than Path ORAM.

Some experiments investigating a possible L(1/4) algorithm for the discrete logarithm problem in algebraic curves

The function field sieve, a subexponential algorithm of complexity L(1/3) that computes discrete logarithms in finite fields, has recently been improved to an algorithm of complexity L(1/4) and subsequently to a quasi-polynomial time algorithm. We investigate whether the new ideas also apply to index calculus algorithms for computing discrete logarithms in Jacobians of algebraic curves. While we do not give a final answer to the question, we discuss a number of ideas, experiments, and possible conclusions.

Partial Garbling Schemes and Their Applications

Garbling schemes (aka randomized encodings of functions) represent a function F by a "simpler" randomized function F^ such that F^(x) reveals F(x) and no additional information about x. Garbling schemes have found applications in many areas of cryptography. Motivated by the goal of improving the efficiency of garbling schemes, we make the following contributions:
– We suggest a general new notion of partial garbling which unifies several previous notions from the literature, including standard garbling schemes, secret sharing schemes, and “conditional disclosure of secrets”. This notion considers garbling schemes in which part of the input is public, in the sense that it can be leaked by F^.
– We present constructions of partial garbling schemes for (boolean and arithmetic) formulas and branching programs which take advantage of the public input to gain better efficiency.
– We demonstrate the usefulness of the new notion by presenting applications to efficient attribute-based encryption, delegation, and secure computation. In each of these applications, we obtain either new schemes for larger classes of functions or efficiency improvements from quadratic to linear. In particular, we obtain the first ABE scheme in bilinear groups for arithmetic formulas, as well as more efficient delegation schemes for boolean and arithmetic branching programs.

Key-Policy Multi-authority Attribute-Based Encryption

Bilinear groups are often used to create Attribute-Based Encryption (ABE) algorithms. In particular, they have been used to create ABE systems with multiple authorities, but only in the ciphertext-policy setting. Here, for the first time, we propose a multi-authority key-policy ABE system.
In our proposal, the authorities may be set up at any moment and without any coordination. A party can simply act as an ABE authority by creating its own public parameters and issuing private keys to the users. A user can thus encrypt data choosing both a set of attributes and a set of trusted authorities, maintaining full control unless all of his chosen authorities collude against him.
We prove our system secure under the bilinear Diffie-Hellman assumption.
The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-23021-4_14

How Different Electrical Circuits of ECC Designs Influence the Shape of Power Traces measured on FPGA

Side-channel and fault attacks take advantage of the fact that the behavior of crypto implementations can be observed, providing hints that simplify revealing keys. These attacks use identical devices, either when preparing an attack or when performing measurements. When preparing an attack, the structure and electrical circuit of devices identical to the target are analyzed. In side-channel attacks, the same device is usually measured many times, i.e., measurements on the identical device are made serially in time. Another approach exploits the difference of side-channel leakages; here two identical devices are used in parallel, i.e., at the same time. In this paper we investigate the influence of the electrical circuit of a cryptographic implementation on the shape of the resulting power trace, because individualizing the circuits of cryptographic devices can be a new means to prevent attacks that rely on identical devices. We implemented three different designs that provide exactly the same cryptographic function, i.e., an ECC kP multiplication. For our evaluation we use two different FPGAs. The visualization of the routed designs and our measurement results show clear differences in the resources consumed as well as in the power traces.

Incentivized Outsourced Computation Resistant to Malicious Contractors

With the rise of Internet computing, outsourcing difficult computational tasks became an important need. Yet, once the computation is outsourced, the job owner loses control, and hence it is crucial to provide guarantees against malicious actions of the contractors involved. Cryptographers have an almost perfect solution, called fully homomorphic encryption, to this problem. This solution hides both the job itself and any inputs to it from the contractors, while still enabling them to perform the necessary computation over the encrypted data. This is a very strong security guarantee, but the current constructions are highly impractical.
In this paper, we propose a different approach to outsourcing computational tasks. We are not concerned with hiding the job or the data; our main task is to ensure that the job is computed correctly. We also observe that not all contractors are malicious; rather, the majority are rational. Thus, our approach brings together elements from cryptography, game theory, and mechanism design. We achieve the following results: (1) we incentivize all the rational contractors to perform the outsourced job correctly, (2) we guarantee a high fraction (e.g., 99.9%) of correct results even in the presence of a relatively large fraction (e.g., 33%) of malicious irrational contractors in the system, and (3) we show that our system achieves these goals while being almost as efficient as running the job locally (e.g., with only 3% overhead). Such a high correctness guarantee was not previously known to be achievable with such efficiency.

Constant-Round Concurrent Zero-knowledge from Indistinguishability Obfuscation

We present a constant-round concurrent zero-knowledge protocol for NP. Our protocol relies on the existence of families of collision-resistant hash functions, one-way permutations, and indistinguishability obfuscators for P/poly (with slightly super-polynomial security).

Combining Secret Sharing and Garbled Circuits for Efficient Private IEEE 754 Floating-Point Computations

Two of the major branches of secure multi-party computation research are secret sharing and garbled circuits. This work combines the two, enabling seamless switching to whichever technique is more efficient for the required functionality. As an example, we add garbled-circuit-based IEEE 754 floating-point numbers to a secret sharing environment, achieving very high efficiency and, to our knowledge, the first fully IEEE 754 compliant secure floating-point implementation.

Controlled Homomorphic Encryption: Definition and Construction

In this work we put forth the notion of a Controllable Homomorphic Encryption scheme (CHES), a new primitive that includes features of both FHEs and FunctEs. In a CHES it is possible (similarly to an FHE) to homomorphically evaluate a ciphertext Ct = Enc(m) and a circuit C, thereby obtaining Enc(C(m)), but only if (similarly to a FunctE) a token for C has been received from the owner of the secret key. We discuss difficulties in constructing a CHES and then show a construction based on any FunctE. As a byproduct, our CHES also represents a FunctE supporting the re-encryption functionality, and in that respect it improves existing solutions.

Armadillo: a compilation chain for privacy preserving applications

In this work we present Armadillo, a compilation chain for compiling applications written in a high-level language (C++) to operate on encrypted data. The back-end of the compilation chain is based on homomorphic encryption. The tool-chain also automatically exploits a large amount of parallelism to mitigate the performance overhead of homomorphic encryption.

Cryptanalysis of Full PRIDE Block Cipher

PRIDE is a lightweight block cipher designed by Albrecht et al. that appeared at CRYPTO 2014. The designers claim that the construction of its linear layers aligns nicely with a bit-sliced implementation of the S-box layer while preserving security. In this paper, we find 8 2-round iterative related-key differential characteristics, which can be used to construct 18-round related-key differentials. Then, by analyzing the function $g^{(1)}_r$, we also find 4 2-round iterative related-key differential characteristics with $\Delta g^{(1)}_r(k_{1,2})=0x80$ and 4 2-round iterative characteristics with $\Delta g^{(1)}_r(k_{1,2})=0x20$, which lead to three weak-key classes. Based on the related-key differentials, we launch a related-key differential attack on full PRIDE. The data and time complexities are $2^{39}$ chosen plaintexts and $2^{60}$ encryptions, respectively. Moreover, by using multiple related-key differentials, we improve the cryptanalysis to $2^{41.4}$ chosen plaintexts and $2^{44}$ encryptions. Finally, by using 17-round related-key differentials, the cryptanalysis requires $2^{34}$ plaintexts and $2^{53.7}$ encryptions. These are the first results on full PRIDE.

Related-Key Differential Attack on Round Reduced RECTANGLE-80

RECTANGLE is a newly proposed lightweight block cipher which allows fast implementations on multiple platforms by using bit-slice techniques. It is an iterative 25-round SPN block cipher with a 64-bit block size and an 80-bit or 128-bit key size. Until now, few results on analyzing the cipher have been published; they include an attack on the 18-round reduced version proposed by the designers themselves. In this paper, we find all 15-round differential characteristics with 26-30 active S-boxes for given input, output and round-subkey differences, which have a total probability of $2^{-60.5}$. Based on these differential characteristics, we extend the corresponding distinguisher by 2 rounds backward and forward, respectively, and propose an attack on 19-round reduced RECTANGLE-80 with data complexity of $2^{62}$ plaintexts, time complexity of about $2^{67.42}$ encryptions, and memory complexity of $2^{72}$. These data and time complexities are much lower than those of the designers' attack on the 18-round reduced RECTANGLE-80.

Statistical weakness in Spritz against VMPC-R: in search for the RC4 replacement

We found a statistical weakness in the Spritz algorithm designed by Ronald L. Rivest and Jacob C. N. Schuldt. For N=8: Prob(output(x)=output(x+2)) = 1/N + 0.000498. The bias becomes statistically significant (for N=8) after observing about 2^21.9 outputs. An analogous bias occurs for N=16. We propose an algorithm (VMPC-R) which for N=8 produced 2^46.8 outputs (31 million times more) that remained indistinguishable from random in the same battery of tests. Supported by a series of additional statistical tests and security analyses, we present VMPC-R as an algorithm we hope can be considered a worthwhile replacement for RC4.

Undermining Isolation through Covert Channels in the Fiasco.OC Microkernel

In the new age of cyberwars, system designers have come to recognize the merits of building critical systems on top of small kernels for their ability to provide strong isolation at the system level. This is due to the fact that enforceable isolation is the prerequisite for any reasonable security policy. Towards this goal we examine some internals of Fiasco.OC, a microkernel of the prominent L4 family. Despite its recent success in certain high-security projects for governmental use, we prove that Fiasco.OC is not suited to ensure strict isolation between components meant to be separated.
Unfortunately, in addition to the construction of system-wide denial-of-service attacks, our identified weaknesses of Fiasco.OC also allow covert channels across security perimeters with high bandwidth. We verified our results in a strong affirmative way through many practical experiments. Indeed, for all potential use cases of Fiasco.OC we implemented a full-fledged system on its respective archetypical hardware: desktop server/workstation on an AMD64 x86 CPU, tablet on an Intel Atom CPU, smartphone on an ARM Cortex-A9 CPU. The measured peak channel capacities, ranging from 13500 bits/s (Cortex-A9 device) to 30500 bits/s (desktop system), lay bare how little Fiasco.OC's isolation guarantee means in practice. This proves that Fiasco.OC cannot be used as a separation kernel within high-security areas.

Public Verification of Private Effort

We introduce a new framework for polling responses from a large population. Our framework allows gathering information without violating the responders' anonymity and at the same time enables public verification of the poll's result. In contrast to prior approaches to the problem, we do not require trusting the pollster for faithfully announcing the poll's results, nor do we rely on strong identity verification.
We propose an ``effort based'' polling protocol whose results can be publicly verified by constructing a ``responder certification graph'' whose nodes are labeled by responders' replies to the poll, and whose edges cross-certify that adjacent nodes correspond to honest participants. Cross-certification is achieved using a newly introduced (privately verifiable) ``Private Proof of Effort'' (PPE). In effect, our protocol gives a general method for converting privately-verifiable proofs into a publicly-verifiable protocol. The soundness of the transformation relies on expansion properties of the certification graph.
Our results are applicable to a variety of settings in which crowd-sourced information gathering is required. This includes crypto-currencies, political polling, elections, recommendation systems, viewer voting in TV shows, and prediction markets.

Outlier Privacy

We introduce a generalization of differential privacy called \emph{tailored differential privacy}, where an individual's privacy parameter is ``tailored'' for the individual based on the individual's data and the data set. In this paper, we focus on a natural instance of tailored differential privacy, which we call \emph{outlier privacy}: an individual's privacy parameter is determined by how much of an ``\emph{outlier}'' the individual is. We provide a new definition of an outlier and use it to introduce our notion of outlier privacy. Roughly speaking, \emph{$\eps(\cdot)$-outlier privacy} requires that each individual in the data set is guaranteed ``$\eps(k)$-differential privacy protection'', where $k$ is a number quantifying the ``outlierness'' of the individual. We demonstrate how to release accurate histograms that satisfy $\eps(\cdot)$-outlier privacy for various natural choices of $\eps(\cdot)$. Additionally, we show that $\eps(\cdot)$-outlier privacy with our weakest choice of $\eps(\cdot)$---which offers no explicit privacy protection for ``non-outliers''---already implies a ``distributional'' notion of differential privacy w.r.t.~a large and natural class of distributions.

Publicly Verifiable Non-Interactive Arguments for Delegating Computation

We construct publicly verifiable non-interactive arguments that can be used to delegate polynomial time computations. These computationally sound proof systems are completely non-interactive in the common reference string model. The verifier's running time is nearly-linear in the input length, and poly-logarithmic in the complexity of the delegated computation. Our protocol is based on graded encoding schemes, introduced by Garg, Gentry and Halevi (Eurocrypt 2013). Security is proved under a falsifiable and arguably simple cryptographic assumption about graded encodings. All prior publicly verifiable non-interactive argument systems were based on non-falsifiable knowledge assumptions. Our new result builds on the beautiful recent work of Kalai, Raz and Rothblum (STOC 2014), who constructed privately verifiable 2-message arguments. While building on their techniques, our protocols avoid no-signaling PCPs, and we obtain a simplified and modular analysis.
As a second contribution, we also construct a publicly verifiable non-interactive argument for (logspace-uniform) computations of bounded depth. The verifier's complexity grows with the depth of the circuit. This second protocol is adaptively sound, and its security is based on a falsifiable assumption about the hardness of a search problem on graded encodings (a milder cryptographic assumption). This result builds on the interactive proof of Goldwasser, Kalai and Rothblum (STOC 2008), using graded encodings to construct a non-interactive version of their protocol.

A Fast Phase-Based Enumeration Algorithm for SVP Challenge through y-Sparse Representations of Short Lattice Vectors

In this paper, we propose a new phase-based enumeration algorithm built on two interesting and useful observations about y-sparse representations of short lattice vectors in lattices from the SVP challenge benchmarks. Experimental results show that the phase-based algorithm greatly outperforms other well-known enumeration algorithms, such as the Kannan-Helfrich enumeration algorithm, in running time, and reaches higher dimensions. Therefore, the phase-based algorithm is a practically excellent solver for the shortest vector problem (SVP).

The Chaining Lemma and its application

We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called "chain" of random variables, defined by a source distribution X[0] with high min-entropy and a number (say, t in total) of arbitrary functions (T1,....Tt) which are applied in succession to that source to generate the chain X[0]-->X[1]-->.....-->X[t] such that X[i] = Ti(X[i-1]). Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is "highly random", in that every variable has high min-entropy; or (ii) it is possible to find a point j (1 <= j <= t) in the chain such that, conditioned on the end of the chain, the preceding part remains highly random. We believe this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case-analysis to prove.
We believe that the above lemma will find applications in cryptography. We give an example of this, namely we show an application of the lemma to protect essentially any cryptographic scheme against memory-tampering attacks. We allow several tampering requests, the tampering functions can be arbitrary, however, they must be chosen from a bounded size set of functions that is fixed a priori.

Improved Differential Analysis of Block Cipher PRIDE

At CRYPTO 2014, Albrecht \emph{et al.} introduced PRIDE, a 20-round iterative lightweight block cipher built around a good linear layer that achieves a tradeoff between security and efficiency. A recent analysis was presented by Zhao \emph{et al.} Inspired by their work, we use an automatic search method to find 56 iterative differential characteristics of PRIDE, including 24 1-round iterative characteristics. Based on three of them, we construct a 15-round differential and perform a differential attack on 19-round PRIDE, with data, time and memory complexities of $2^{62}$, $2^{63}$ and $2^{71}$, respectively.

A Survey on Lightweight Entity Authentication with Strong PUFs

Physically unclonable functions (PUFs) exploit the unavoidable manufacturing variations of an integrated circuit (IC). Their input-output behavior serves as a unique IC 'fingerprint'. Therefore, they have been envisioned as an IC authentication mechanism, in particular the subclass of so-called strong PUFs. The protocol proposals are typically accompanied by two PUF promises: being lightweight and offering increased resistance against physical attacks. In this work, we review nineteen proposals in chronological order: from the original strong PUF proposal (2001) to the more complicated noise bifurcation and system-of-PUFs proposals (2014). The assessment is aided by a unified notation and a transparent framework of PUF protocol requirements.

Geppetto: Versatile Verifiable Computation

Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client's cost to verify the correctness of results, but the overhead of producing proofs remains largely impractical.
Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With Multi-QAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful instantiation of cryptographic primitives, Geppetto also brings down the cost of verifying outsourced cryptographic computations (e.g., verifiably computing on signed data); together with Geppetto’s notion of bounded proof bootstrapping, Geppetto improves on prior bootstrapped systems by five orders of magnitude, albeit at some cost in universality. Geppetto also supports qualitatively new properties like verifying the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto’s use of energy-saving circuits brings the prover’s costs more in line with the program’s actual (rather than worst-case) execution time.
Geppetto is implemented in a full-fledged, scalable compiler that consumes LLVM code generated from a variety of apps, as well as a large cryptographic library.

Cryptanalysis of Two Candidate Fixes of Multilinear Maps over the Integers

Shortly following the attack of Cheon, Han, Lee, Ryu and Stehle against the multilinear map of Coron, Lepoint and Tibouchi (CLT), two independent approaches to thwart this attack were proposed on the Cryptology ePrint Archive, due to Garg, Gentry, Halevi and Zhandry on the one hand, and Boneh, Wu and Zimmerman on the other. In this short note, we show that both countermeasures can be defeated in polynomial time using extensions of the Cheon et al. attack.

Last updated: 2015-04-11

Non-Linearity and Affine Equivalence of Permutations

In this paper we consider permutations on n symbols as bijections on Z/nZ. Treating permutations this way gives us access to additional structures, such as the group and ring defined on the set Z/nZ. We explore some of the properties of permutations arising from this treatment.
We propose two properties, affine equivalence and non-linearity, for permutations, along lines similar to their definitions for Boolean functions, and we establish some results quite similar to those known for Boolean functions. We also define the Mode Transform of a permutation and investigate its relationship with non-linearity.
We propose an efficient algorithm using the Mode Transform for computing the non-linearity of a permutation, and show that it runs in O(n^2) time, compared to O(n^3) for the direct approach. At the end we discuss these properties in the context of cryptography.
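The paper's precise definitions (and its Mode Transform) are its own, but under one natural reading, the non-linearity of a permutation on Z/nZ is its minimum Hamming distance to the affine permutations x -> (a*x + b) mod n with gcd(a, n) = 1. A brute-force sketch of the direct (non-Mode-Transform) approach:

```python
from math import gcd

def nonlinearity(perm):
    """Minimum Hamming distance from perm to any affine permutation
    x -> (a*x + b) mod n; a direct exhaustive search over all affine maps."""
    n = len(perm)
    best = n
    for a in range(1, n):
        if gcd(a, n) != 1:
            continue   # a must be a unit mod n for x -> a*x + b to be a bijection
        for b in range(n):
            dist = sum(perm[x] != (a * x + b) % n for x in range(n))
            best = min(best, dist)
    return best

print(nonlinearity(list(range(8))))            # 0: the identity is affine (a=1, b=0)
print(nonlinearity([1, 0, 2, 3, 4, 5, 6, 7]))  # 2: a transposition is not affine
```

Two distinct permutations that agree on n-1 points must agree everywhere, so the minimum nonzero distance in this metric is 2.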

Improved Linear (hull) Cryptanalysis of Round-reduced Versions of SIMON

SIMON is a family of lightweight block ciphers designed by the U.S. National Security Agency (NSA) that has attracted much attention since its publication in 2013.
In this paper, we thoroughly investigate the properties of linear approximations of the bitwise AND operation with dependent input bits. By using a Mixed-Integer Linear Programming based technique presented at Asiacrypt 2014 for the automatic search for characteristics, we obtain improved linear characteristics for several versions of the SIMON family. Moreover, by employing a recently published method for automatic enumeration of differential and linear characteristics by Sun et al., we present an improved linear hull analysis of some versions of the SIMON family, which gives the best results for linear cryptanalysis of SIMON published so far.
Specifically, for SIMON$128$, where the number denotes the block length, a 34-round linear characteristic with correlation $2^{-61}$ is found, which is the longest linear characteristic that can be used in a key-recovery attack for SIMON$128$ published so far. Besides, several linear hulls superior to the best ones known previously are presented as follows: linear hulls for the 13-round SIMON$32$ with potential $2^{-28.99}$ versus previous $2^{-31.69}$, for the 15-round SIMON$48$ with potential $2^{-42.28}$ versus previous $2^{-44.11}$ and linear hulls for the 21-round SIMON$64$ with potential $2^{-60.72}$ versus previous $2^{-62.53}$.
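The starting point, linear approximations of bitwise AND with dependent inputs, can be illustrated exhaustively on a small word size. The sketch below is illustrative only (an 8-bit word and rotation amounts 1 and 2, not SIMON's real parameters); it computes the exact correlation of a masked AND of two rotations of the same word:

```python
def corr(n_bits, f, mask_out, mask_in):
    """Exact correlation of mask_out.f(x) XOR mask_in.x over all n-bit x."""
    total = 0
    for x in range(1 << n_bits):
        parity = (bin(mask_out & f(x)).count("1")
                  + bin(mask_in & x).count("1")) & 1
        total += 1 - 2 * parity
    return total / (1 << n_bits)

def rol(x, r, n=8):
    """Rotate an n-bit word left by r."""
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

# In a SIMON-like round the AND operates on two rotations of the SAME word,
# so its input bits are dependent across output positions.
f = lambda x: rol(x, 1) & rol(x, 2)

# A single output bit of the AND is the product of two distinct input bits,
# so approximating it by the constant 0 holds with probability 3/4:
print(corr(8, f, 0b1, 0))  # 0.5
```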

A Chinese Remainder Theorem Approach to Bit-Parallel GF(2^n) Polynomial Basis Multipliers for Irreducible Trinomials

We show that the step "modulo the degree-n field generating irreducible polynomial" in the classical definition of the GF(2^n) multiplication operation can be avoided. This leads to an alternative representation of the finite field multiplication operation. Combining this representation and the Chinese Remainder Theorem, we design bit-parallel GF(2^n) multipliers for irreducible trinomials u^n + u^k + 1 over GF(2), where 1 < k ≤ n/2. For some values of n, our architectures have the same time complexity as the fastest bit-parallel multipliers, the quadratic multipliers, but with reduced space complexity. Taking the special irreducible trinomial u^(2k) + u^k + 1 as an example, the space complexity of the proposed design is reduced by about 1/8, while the time complexity matches the best result. Our experimental results show that among the 539 values of n such that 4 < n < 1000 and u^n + u^k + 1 is irreducible over GF(2) for some k in the range 1 < k ≤ n/2, the proposed multipliers beat the current fastest parallel multipliers for 290 values of n when (n − 1)/3 ≤ k ≤ n/2: they have the same time complexity, but the space complexities are reduced by 8.4% on average.
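A plain schoolbook multiplier is a useful functional reference for checking such architectures against (the CRT decomposition itself is the paper's hardware contribution and is not reproduced here). A minimal sketch over GF(2)[u]/(u^n + u^k + 1), with field elements encoded as integers:

```python
def gf2n_mul(a, b, n, k):
    """Multiply a and b in GF(2^n) = GF(2)[u]/(u^n + u^k + 1).
    Bit i of an integer is the coefficient of u^i."""
    # Step 1: carry-less (polynomial) multiplication over GF(2).
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # Step 2: reduce modulo the trinomial using u^n = u^k + 1.
    for i in range(2 * n - 2, n - 1, -1):
        if prod >> i & 1:
            prod ^= (1 << i) ^ (1 << (i - n + k)) ^ (1 << (i - n))
    return prod

# Example in GF(2^3) with u^3 + u + 1: u * u^2 = u^3 = u + 1.
print(gf2n_mul(0b010, 0b100, 3, 1))  # 3, i.e. u + 1
```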

Key recovery attacks on Grain family using BSW sampling and certain weaknesses of the filtering function

A novel internal state recovery attack on the whole Grain family of ciphers is proposed in this work. It combines the ideas of BSW sampling with an exploitation of the weak placement of the tap positions of the driving LFSRs. The best currently known complexity trade-offs are obtained, and due to the structure of the Grain family these attacks are also key recovery attacks. It is shown that the internal state of Grain-v1 can be recovered with time complexity of about $2^{66}$ operations using a memory of about $2^{58.91}$ bits, assuming availability of $2^{45}$ keystream sequences, each of length $2^{49}$ bits, generated for different initial values. Moreover, for Grain-128 or Grain-128a, the attack requires about $2^{105}$ operations using a memory of about $2^{82.59}$ bits, assuming availability of $2^{75}$ keystream sequences, each of length $2^{76}$ bits, generated for different initial values. These results further show that the whole Grain family, mainly due to the choice of the tap positions, does not provide enough security margin against internal state recovery attacks. A simple modification of the selection of the tap positions is given as a countermeasure against the attacks described here.

Jackpot Stealing Information From Large Caches via Huge Pages

The cloud computing infrastructure relies on virtualized servers that provide isolation across guest OSes through sandboxing. This isolation was demonstrated to be imperfect in past work, which exploited hardware-level information leakage to gain access to sensitive information across co-located virtual machines (VMs). In response, virtualization companies and cloud services providers have disabled features such as deduplication to prevent such attacks. In this work, we introduce a fine-grained cross-core cache attack that exploits access-time variations on the last-level cache. The attack exploits huge pages to work across VM boundaries without requiring deduplication. No configuration changes on the victim OS are needed, making the attack quite viable. Furthermore, only machine co-location is required, while the attacker and victim can reside on different cores of the machine. Our new attack is a variation of the prime-and-probe cache attack, whose applicability until now was limited to the L1 cache. In contrast, our attack works in the spirit of the flush-and-reload attack, but targets the shared L3 cache instead. Indeed, by adjusting the huge page size our attack can be customized to work at virtually any cache level/size. We demonstrate the viability of the attack by targeting an OpenSSL 1.0.1f implementation of AES. The attack recovers AES keys in the cross-VM setting on Xen 4.1 with deduplication disabled, being only slightly less efficient than the flush-and-reload attack. Given that huge pages are a standard feature enabled in the memory management unit of OSes, and that besides co-location no additional assumptions are needed, the attack we present poses a significant risk to existing cloud servers.

Privacy-Preserving Face Recognition with Outsourced Computation

Face recognition is one of the most important biometric pattern recognition technologies and has been widely applied in a variety of enterprise, civilian, and law enforcement settings. The privacy of biometric data raises important concerns, in particular if computations over biometric data are performed at untrusted servers. In previous work on privacy-preserving face recognition, in order to protect individuals' privacy, face recognition is performed over encrypted face images. However, this increases the computation cost for the client and the face database owners, which may prevent face recognition from being executed efficiently. Consequently, it would be desirable to reduce computation over sensitive biometric data in such environments. Currently, no secure techniques for outsourcing face biometric recognition are readily available. In this paper, we propose, for the first time, a privacy-preserving face recognition protocol with outsourced computation, which efficiently protects individuals' privacy. Our protocol substantially improves on previous works in terms of online computation cost by outsourcing the large computation task to a cloud server with large computing power. In particular, the overall online computation cost of the client and the database owner in our protocol is at most 1/2 of that of the corresponding state-of-the-art protocols. In addition, the client requires only $O(1)$ decryption operations, independent of $M$, where $M$ is the size of the face database. Furthermore, the client can verify the correctness of the recognition result.

Attacks on Secure Ownership Transfer for Multi-Tag Multi-Owner Passive RFID Environments

Sundaresan et al. recently proposed a novel ownership transfer protocol for multi-tag multi-owner RFID environments that complies with the EPC Class-1 Generation-2 standard. The authors claim that it provides individual-owner privacy and prevents tracking attacks. In this paper we show that this protocol falls short of its security objectives. We describe attacks that allow: a) an eavesdropper to trace a tag, b) the previous owner to obtain the private information that the tag shares with the new owner, and c) an adversary with access to the data stored on a tag to link this tag to previous interrogations (violating forward secrecy). We then analyze the security proof and show that, while the first two cases can be solved with a more careful design, strong privacy remains an open problem for lightweight RFID applications.

A Comprehensive Comparison of Shannon Entropy and Smooth Renyi Entropy

We provide a new result that links two crucial entropy notions: Shannon entropy $\mathrm{H}_1$ and collision entropy $\mathrm{H}_2$. Our formula gives the \emph{worst possible} amount of collision entropy in a probability distribution, when its Shannon entropy is fixed.
Our results and techniques used in the proof immediately imply
many quantitatively tight separations between Shannon and smooth Renyi entropy, which were previously known only as qualitative statements or one-sided bounds. In particular, we precisely calculate the number of bits that can be extracted from a Shannon entropy source, and calculate how far from the uniform distribution a distribution with a given amount of Shannon entropy can be. To illustrate our results we provide clear numerical examples.
In the typical situation, when the gap between the Shannon entropy of a distribution and its length is bigger than $1$, the length of the extracted sequence is very small, even if we allow the randomness quality to be poor. In the case of almost full entropy, where the gap is close to $0$, the $\ell_2$-distance to uniform is roughly of the same order as the gap. Therefore, it is actually not possible to certify the strong quality of supposed true randomness, efficiently and at an extremely high confidence level, by means of Shannon entropy estimators, like Maurer's Universal Test or others.
Our approach involves convex optimization techniques, applied to characterize worst-case distributions, and the use of the Lambert $W$ function, by which we resolve equations coming from Shannon entropy constraints. We believe that it may be of independent interest and useful in studying Shannon entropy with constraints elsewhere.
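As a numerical companion to the separation, the two entropy notions being compared are easy to evaluate directly; the distribution below is an arbitrary illustration, not one of the paper's worst-case distributions:

```python
import math

def shannon_entropy(p):
    """H_1(p) = -sum_i p_i log2 p_i."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def collision_entropy(p):
    """H_2(p) = -log2 sum_i p_i^2 (Renyi entropy of order 2)."""
    return -math.log2(sum(x * x for x in p))

# A 16-point distribution with one heavy atom: H_2 lags well behind H_1.
p = [0.5] + [0.5 / 15] * 15
h1, h2 = shannon_entropy(p), collision_entropy(p)
assert h2 <= h1 <= math.log2(len(p))  # Renyi entropies are non-increasing in the order
```

The paper's contribution is to pin down the worst possible H_2 for a fixed H_1, i.e. how small the gap illustrated here can provably get.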

Privacy-Preserving Data Publish-Subscribe Service on Cloud-based Platforms

Data publish-subscribe service is an effective approach to share and filter data. Due to the huge volume and velocity of data generated daily, cloud systems are inevitably becoming the platform for data publication and subscription. However, privacy becomes a challenging issue, as the cloud server cannot be fully trusted by either data publishers or data subscribers. In this paper, we propose a privacy-preserving data publish-subscribe service for cloud-based platforms. Specifically, we first formulate the problem of privacy-preserving data publish-subscribe service by refining its security requirements on cloud-based platforms. Then, we propose a bi-policy attribute-based encryption (BP-ABE) scheme as the underlying technique that enables the encryptor to define access policies and the decryptor to define filtering policies. Based on BP-ABE, we also propose a \underline{P}rivacy-preserving \underline{D}ata \underline{P}ublish-\underline{S}ubscribe (PDPS) scheme on cloud-based platforms, which enables the cloud server to evaluate both subscription policy and access policy in a privacy-preserving way. The security analysis and performance evaluation show that the PDPS scheme is secure in the standard model and efficient in practice.

Predicate Encryption for Multi-Dimensional Range Queries from Lattices

We construct a lattice-based predicate encryption scheme for
multi-dimensional range and multi-dimensional subset queries. Our
scheme is selectively secure and weakly attribute-hiding, and its
security is based on the standard learning with errors (LWE)
assumption. Multi-dimensional range and subset queries capture many interesting
applications pertaining to searching on encrypted data. To the best
of our knowledge, these are the first lattice-based predicate
encryption schemes for functionalities beyond IBE and inner product.

On two windows multivariate cryptosystem depending on random parameters

The concept of a multivariate bijective map of an affine space $K^n$ over a commutative ring $K$ has already been used in cryptography. We consider the idea of a non-bijective multivariate polynomial map $F_n$ of $K^n$ into $K^n$, represented as a ''partially invertible decomposition'' $F^{(1)}_nF^{(2)}_n \dots F^{(k)}_n$, $k=k(n)$, such that knowledge of the decomposition and a given value $u=F(v)$ allows one to recover a special part $v'$ of the preimage $v$.
We combine the idea of the ''oil and vinegar'' signature cryptosystem with the idea of a linguistic-graph-based map with a partially invertible decomposition to introduce a new cryptosystem. The decomposition is induced by a pseudorandom walk on the linguistic graph and its special quotient (homomorphic image). We estimate the complexity of such a general algorithm in the case of a special family of graphs with quotients, where both graphs form known families of Extremal Graph Theory. The map created by the key holder (Alice) corresponds to a pseudorandom sequence of ring elements.
The post-quantum version of the algorithm can be obtained simply by using random strings instead of pseudorandom ones.

Malicious-Client Security in Blind Seer: A Scalable Private DBMS

The Blind Seer system (Oakland 2014) is an efficient and scalable DBMS that affords both client query privacy and server data protection. It also provides the ability to enforce authorization policies on the system, restricting client’s queries while maintaining the privacy of both query and policy. Blind Seer supports a rich query set, including arbitrary boolean formulas, and is provably secure with respect to a controlled amount of search pattern leakage. No other system to date achieves this tradeoff of performance, generality, and provable privacy.
A major shortcoming of Blind Seer is its reliance on semi-honest security, particularly for access control and data protection. A malicious client could easily cheat the query authorization policy and obtain any database records satisfying any query of its choice, thus violating basic security features of any standard DBMS. In sum, Blind Seer offers additional privacy to a client, but sacrifices a basic security tenet of DBMS.
In the present work, we completely resolve the issue of a malicious client. We show how to achieve robust access control and data protection in Blind Seer with virtually no added cost to performance or privacy. Our approach also involves a novel technique for semi-private function secure function evaluation (SPF-SFE) that may have independent applications.
We fully implement our solution and report on its performance.

Solving Polynomial Systems with Noise over F_2: Revisited

Solving polynomial systems with noise over F_2 is a fundamental problem in computer science, especially in cryptanalysis. ISBS is a new method for solving this problem based on the idea of incrementally solving the noisy polynomial systems and backtracking over all the possible noises; it performs better than other methods on some problems generated from cryptanalysis. In this paper, further research on ISBS is presented. The structure and size of the search tree of ISBS are theoretically analyzed. Then two major improvements, an artificial noise-bound strategy and the s-direction approach, are proposed. Based on these improvements, a modified ISBS algorithm is implemented, and experiments on solving the Cold Boot key recovery problem for the block cipher Serpent with symmetric noise show that this modified algorithm is more efficient than the original one.

When are Fuzzy Extractors Possible?

Fuzzy extractors (Dodis et al., SIAM J. Computing 2008) convert repeated noisy
readings of a high-entropy secret into the same uniformly distributed
key. A minimum condition for the security of the key is the hardness
of guessing a value that is similar to the secret, because the fuzzy
extractor converts such a guess to the key. We quantify this property in a new notion called fuzzy min-entropy. We ask: is fuzzy min-entropy sufficient to build fuzzy extractors? We provide two answers for different settings.
1) If the algorithms have precise knowledge of the probability distribution $W$ that defines the noisy source, then fuzzy min-entropy is a sufficient condition for information-theoretic key extraction from $W$.
2) A more ambitious goal is to design a single extractor that works for all possible sources. This more ambitious goal is impossible: there is a family of sources with high fuzzy min-entropy for which no single fuzzy extractor is secure. This is true in three settings:
a) for standard fuzzy extractors,
b) for fuzzy extractors that are allowed to sometimes be wrong,
c) and for secure sketches, which are the main ingredient of most fuzzy extractor constructions.

Non-Interactive Secure Multiparty Computation

We introduce and study the notion of non-interactive secure multiparty computation (NIMPC). An NIMPC protocol for a function $f(x_1,\ldots,x_n)$ is specified by a joint probability distribution $R=(R_1,\ldots,R_n)$ and local encoding
functions $Enc_i(x_i,R_i)$, $1 \le i \le n$. Given correlated
randomness $(R_1,\ldots,R_n)\in_R R$, each party $P_i$, using its input $x_i$ and its randomness $R_i$, computes the message $m_i= Enc_i(x_i,R_i)$. The messages $m_1,\ldots,m_n$ can be used to decode $f(x_1,\ldots,x_n)$. For a set $T\subseteq[n]$, the protocol is said to be $T$-robust if revealing the messages $(Enc_i(x_i,R_i))_{i\not\in T}$ together with the randomness $(R_i)_{i\in T}$ gives the same information about $(x_i)_{i\not\in T}$ as an oracle access to the function $f$ restricted to these input values. Namely, a coalition $T$ can learn no more than the restriction of $f$ fixing the inputs of uncorrupted parties, which, in this non-interactive setting, one cannot hope to hide.
For $0\le t\le n$, the protocol is $t$-robust if it is $T$-robust for every $T$ of size at most $t$ and it is fully robust if it is $n$-robust. A 0-robust NIMPC protocol for $f$ coincides with a protocol in the private simultaneous messages model of Feige et al.~(STOC 1994).
In the setting of computational (indistinguishability-based) security, fully robust NIMPC is implied by multi-input functional encryption, a notion that was recently introduced by Goldwasser et al. (Eurocrypt 2014) and realized using indistinguishability obfuscation. We consider NIMPC in the information-theoretic setting and obtain unconditional positive results for some special cases of interest:
Group products. For every (possibly non-abelian) finite group $G$, the iterated group product function $f(x_1,\ldots,x_n)=x_1x_2\ldots x_n$ admits an efficient, fully robust NIMPC protocol.
Small functions. Every function $f$ admits a fully robust NIMPC protocol whose complexity is polynomial in the size of the input domain (i.e., exponential in the total bit-length of the inputs).
Symmetric functions. Every symmetric function $f:\X^n\to \Y$, where $\X$ is an input domain of constant size, admits a $t$-robust NIMPC protocol of complexity $n^{O(t)}$. For the case where $f$ is a $w$-out-of-$n$ threshold function, we get a fully robust protocol of complexity $n^{O(w)}$.
On the negative side, we show that natural attempts to realize NIMPC using private simultaneous messages protocols and garbling schemes from the literature fail to achieve even 1-robustness.
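For intuition, the group-product protocol specializes in the abelian case to simple additive masking; the sketch below (toy parameters, our own naming) realizes NIMPC for the sum over Z_m. The correlated randomness sums to zero, so the messages jointly reveal exactly the output, and the honest parties' messages are uniform subject to that sum, matching the robustness requirement for this function:

```python
import secrets

def nimpc_sum_setup(n, m):
    """Correlated randomness R = (r_1, ..., r_n): uniform in Z_m, summing to 0."""
    r = [secrets.randbelow(m) for _ in range(n - 1)]
    r.append(-sum(r) % m)
    return r

def nimpc_encode(x_i, r_i, m):
    """Party i's single message: its input masked by its share of R."""
    return (x_i + r_i) % m

def nimpc_decode(messages, m):
    """The masks cancel, leaving f(x_1,...,x_n) = x_1 + ... + x_n mod m."""
    return sum(messages) % m

n, m = 5, 97
inputs = [3, 14, 15, 92, 65]
r = nimpc_sum_setup(n, m)
msgs = [nimpc_encode(x, ri, m) for x, ri in zip(inputs, r)]
assert nimpc_decode(msgs, m) == sum(inputs) % m
```

The paper's non-abelian group-product protocol and the $n^{O(t)}$ constructions for symmetric functions are substantially more involved; this only illustrates the message/randomness interface.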

Attacking Suggest Boxes in Web Applications Over HTTPS Using Side-Channel Stochastic Algorithms

Web applications are subject to several types of attacks. In particular, side-channel attacks consist in performing a statistical analysis of the web traffic to gain sensitive information about a client. In this paper, we investigate how side-channel leaks can be used on search engines such as Google or Bing to retrieve the client's search query. In contrast to previous works, due to payload randomization and compression, it is not always possible to uniquely map a search query to a web traffic signature, and hence stochastic algorithms must be used. They yield, for the French language, an exact recovery of the search word in more than 30% of the cases. Finally, we present some methods to mitigate such side-channel leaks.

Authenticated Encryption: How Reordering can Impact Performance

In this work, we look at authenticated encryption schemes from a new perspective. As opposed to focusing solely on the {\em ``security''} implications of the different methods for constructing authenticated encryption schemes, we investigate the effect of the method used to construct an authenticated encryption scheme on the {\em ``performance''} of the construction. We show that, as opposed to the current NIST standard, by performing the authentication operation before the encryption operation, the computational efficiency of the construction can be increased, without affecting the security of the overall construction. In fact, we show that the proposed construction is even more secure than standard authentication based on universal hashing in the sense that the hashing key is resilient to key recovery attacks.

Black Box Separations for Differentially Private Protocols

We study the maximal achievable accuracy of distributed differentially private protocols for a large natural class of boolean functions, in the computational setting.
In the information theoretic model, McGregor et al. [FOCS 2010] and Goyal et al. [CRYPTO 2013] have demonstrated several functionalities whose differentially private computation results in much lower accuracies in the distributed setting, as compared to the client-server setting.
We explore lower bounds on the computational assumptions under which this particular accuracy gap can possibly be reduced for general two-party boolean output functions. In the distributed setting, it is possible to achieve optimal accuracy, i.e. the maximal achievable accuracy in the client-server setting, for any function, if a semi-honest secure protocol for oblivious transfer exists. However, we show the following strong impossibility results:
1) For any boolean function and fixed level of privacy, the maximal achievable accuracy of any (fully) black-box construction based on existence of key-agreement protocols is at least a constant smaller than optimal achievable accuracy. Since key-agreement protocols imply the existence of one-way functions, this separation also extends to one-way functions.
2) Our results are tight for the AND and XOR functions. For AND, there exists an accuracy threshold such that any accuracy up to the threshold can be information theoretically achieved; while no (fully) black-box construction based on existence of key-agreement can achieve accuracy beyond this threshold. An analogous statement is also true for XOR (albeit with a different accuracy threshold).
Our results build on recent developments in black-box separation techniques for functions with private input; and consequently translate information theoretic impossibilities into black-box separation results.

Tamper Detection and Continuous Non-Malleable Codes

We consider a public and keyless code $(\Enc,\Dec)$ which is used to encode a message $m$ and derive a codeword $c = \Enc(m)$. The codeword can be adversarially tampered via a function $f \in \F$ from some tampering function family $\F$, resulting in a tampered value $c' = f(c)$. We study the different types of security guarantees that can be achieved in this scenario for different families $\F$ of tampering attacks.
Firstly, we initiate the general study of tamper-detection codes, which must detect that tampering occurred and output $\Dec(c') = \bot$. We show that such codes exist for any family of functions $\F$ over $n$ bit codewords, as long as $|\F| < 2^{2^n}$ is sufficiently smaller than the set of all possible functions, and the functions $f \in \F$ are further restricted in two ways: (1) they can only have a few fixed points $x$ such that $f(x)=x$, (2) they must have high entropy of $f(x)$ over a random $x$. Such codes can also be made efficient when $|\F| = 2^{\poly(n)}$. For example, $\F$ can be the family of all low-degree polynomials excluding constant and identity polynomials. Such tamper-detection codes generalize the algebraic manipulation detection (AMD) codes of Cramer et al. (EUROCRYPT '08).
Next, we revisit non-malleable codes, which were introduced by Dziembowski, Pietrzak and Wichs (ICS '10) and require that $\Dec(c')$ either decodes to the original message $m$, or to some unrelated value (possibly $\bot$) that doesn't provide any information about $m$. We give a modular construction of non-malleable codes by combining tamper-detection codes and leakage-resilient codes. This gives an alternate proof of the existence of non-malleable codes with optimal rate for any family $\F$ of size $|\F| < 2^{2^n}$, as well as efficient constructions for families of size $|\F| = 2^{\poly(n)}$.
Finally, we initiate the general study of continuous non-malleable codes, which provide a non-malleability guarantee against an attacker that can tamper a codeword multiple times. We define several variants of the problem depending on: (I) whether tampering is persistent and each successive attack modifies the codeword that has been modified by previous attacks, or whether tampering is non-persistent and is always applied to the original codeword, (II) whether we can self-destruct and stop the experiment if a tampered codeword is ever detected to be invalid or whether the attacker can always tamper more. In the case of persistent tampering and self-destruct (weakest case), we get broad existence results, essentially matching what's known for standard non-malleable codes. In the case of non-persistent tampering and no self-destruct (strongest case), we must further restrict the tampering functions to have few fixed points and high entropy. The two intermediate cases correspond to requiring only one of the above two restrictions.
These results have applications in cryptography to related-key attack (RKA) security and to protecting devices against tampering attacks without requiring state or randomness.
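As a point of comparison, the AMD codes of Cramer et al. that the tamper-detection codes generalize admit a very short sketch; the toy field size and the specific encoding (s, x, x^3 + s·x) below are illustrative choices in the flavor of that construction, not the paper's codes:

```python
import secrets

P = 1009  # toy prime modulus; real instantiations use large fields

def amd_encode(s):
    """Encode s in F_P as (s, x, x^3 + s*x): the tag ties s to a fresh random x."""
    x = secrets.randbelow(P)
    return (s, x, (pow(x, 3, P) + s * x) % P)

def amd_check(c):
    """Accept iff the tag is consistent. A fixed additive offset on the tag
    alone is always caught; offsets on s are caught except with small
    probability over the choice of x."""
    s, x, z = c
    return z == (pow(x, 3, P) + s * x) % P

c = amd_encode(123)
assert amd_check(c)                        # honest codeword verifies
s, x, z = c
assert not amd_check((s, x, (z + 1) % P))  # additive tamper on the tag is caught
```

AMD codes handle exactly the family of additive-offset tampering functions; the paper's tamper-detection codes extend detection to much richer families with few fixed points and high entropy.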

On the Asymptotic Idealness of the Asmuth-Bloom Threshold Secret Sharing Scheme

A necessary and sufficient condition for the asymptotic idealness of the Asmuth-Bloom threshold secret sharing scheme is proposed. Apart from this, a comprehensive analysis of the known variants of the Asmuth-Bloom threshold secret sharing scheme is provided, clarifying the security properties achieved by each of them.
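For readers unfamiliar with the scheme under analysis, the basic Asmuth-Bloom construction is short enough to sketch; the (3,4)-threshold parameters below are toy values chosen only to satisfy the condition that the product of the t smallest share moduli exceeds m0 times the product of the t-1 largest:

```python
import secrets
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Toy (t=3)-out-of-(n=4) parameters: secrets live in Z_m0, and
# 11*13*17 = 2431 > 7*17*19 = 2261 gives the Asmuth-Bloom condition.
m0, moduli, t = 7, [11, 13, 17, 19], 3

def share(secret):
    bound = prod(moduli[:t])   # the lifted secret y must stay below this
    y = secret + m0 * secrets.randbelow((bound - secret) // m0)
    return [y % m for m in moduli]

def reconstruct(pairs):
    """pairs: at least t (index, share) pairs; CRT then reduce mod m0."""
    return crt([s for _, s in pairs], [moduli[i] for i, _ in pairs]) % m0

s = 4
sh = share(s)
assert reconstruct([(0, sh[0]), (2, sh[2]), (3, sh[3])]) == s
```

The idealness question the paper settles concerns how close the share and secret domains can be made; the toy gap between m0 and the share moduli here is exactly what asymptotic idealness asks to shrink.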

Algebraic Fault Analysis of Katan

This paper presents a new and more realistic model for fault attacks, together with statistical and algebraic techniques to improve fault analysis in general. Our algebraic technique is an adapted solver for systems of equations based on ElimLin and XSL.
We use these techniques to introduce two new fault attacks on the hardware-oriented block cipher Katan32 from the Katan family of block ciphers.
We are able to break full-round Katan32 using $4$ faults and $2^{29.04}$ Katan evaluations with a theoretical statistical fault attack, and $7.19$ faults in $2^{27.2}$ Katan evaluations with a tested algebraic one.
This is a great improvement over the existing fault attacks, which need $115$ and $140$ faults respectively.
Furthermore, our algebraic attack can be executed on a normal computer.

The Related-Key Security of Iterated Even-Mansour Ciphers

The simplicity and widespread use of blockciphers based on the iterated Even--Mansour (EM) construction has sparked recent interest in the theoretical study of their security. Previous work has established their strong pseudorandom permutation and indifferentiability properties, with some matching lower bounds presented to demonstrate tightness. In this work we initiate the study of the EM ciphers under related-key attacks which, despite extensive prior work, has received little attention. We show that the simplest one-round EM cipher is strong enough to achieve non-trivial levels of RKA security even under chosen-ciphertext attacks. This class, however, does not include the practically relevant case of offsetting keys by constants. We show that two rounds suffice to reach this level under chosen-plaintext attacks and that three rounds can boost security to resist chosen-ciphertext attacks. We also formalize how indifferentiability relates to RKA security, showing strong positive results despite counterexamples presented for indifferentiability in multi-stage games.

Balanced Encoding to Mitigate Power Analysis: A Case Study

Most side channel countermeasures for software implementations of cryptography either rely on masking or randomize the execution
order of the cryptographic implementation. This work proposes a countermeasure that has constant leakage in common linear leakage models.
Constant leakage is achieved not only for internal state values, but also for
their transitions. The proposed countermeasure provides perfect protection in the theoretical leakage model. To study the practical relevance of
the proposed countermeasure, it is applied to a software implementation
of the block cipher Prince. This case study allows us to give realistic values
for resulting implementation overheads as well as for the resulting side
channel protection levels that can be achieved in realistic implementation
scenarios.

Modified Alternating Step Generators with Non-Linear Scrambler

Pseudorandom generators which produce keystreams for stream ciphers as the exclusive-or sum of output bits from alternately clocked linear feedback shift registers are vulnerable to cryptanalysis. In order to increase their resistance to attacks, we introduce a nonlinear scrambler at the output of these generators, where the role of the scrambler is played by a nonlinear feedback shift register. In addition, we propose the Modified Alternating Step Generator (MASG1S), built from the nonlinear scrambler and regularly or irregularly clocked linear feedback shift registers with nonlinear filtering functions.
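For reference, the underlying (unmodified) alternating step generator can be sketched as follows; the register lengths and taps are toy values, and the paper's nonlinear scrambler and filtering functions are deliberately not modeled:

```python
def lfsr_step(state, taps, n):
    """One step of an n-bit Fibonacci LFSR over GF(2); returns (out_bit, new_state)."""
    out = state & 1
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return out, (state >> 1) | (fb << (n - 1))

def asg_keystream(length, ctrl, r1, r2):
    """Alternating step generator: the control LFSR decides which of r1, r2
    is clocked; the keystream bit is the XOR of their current output bits."""
    bits = []
    for _ in range(length):
        c, ctrl["s"] = lfsr_step(ctrl["s"], ctrl["taps"], ctrl["n"])
        reg = r1 if c else r2
        _, reg["s"] = lfsr_step(reg["s"], reg["taps"], reg["n"])
        bits.append((r1["s"] & 1) ^ (r2["s"] & 1))
    return bits

ctrl = {"s": 0b10101, "taps": (0, 2), "n": 5}
r1 = {"s": 0b1101011, "taps": (0, 1), "n": 7}
r2 = {"s": 0b100110111, "taps": (0, 4), "n": 9}
ks = asg_keystream(16, ctrl, r1, r2)
assert len(ks) == 16 and set(ks) <= {0, 1}
```

The MASG1S proposal feeds output of this kind through an NFSR-based scrambler, which is precisely the stage this reference sketch omits.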

Tree-Structured Composition of Homomorphic Encryption: How to Weaken Underlying Assumptions

Cryptographic primitives based on infinite families of progressively weaker assumptions have been proposed by Hofheinz-Kiltz and by Shacham (the n-Linear assumptions) and by Escala et al. (the Matrix Diffie-Hellman assumptions). All of these assumptions are extensions of the decisional Diffie-Hellman (DDH) assumption. In contrast, in this paper, we construct (additive) homomorphic encryption (HE) schemes based on a new infinite family of assumptions extending the decisional Composite Residuosity (DCR) assumption. This is the first result on a primitive based on an infinite family of progressively weaker assumptions not originating from the DDH assumption. Our assumptions are indexed by rooted trees, and provide a completely different structure compared to the previous extensions of the DDH assumption.
Our construction of a HE scheme is generic; based on a tree structure, we recursively combine copies of building-block HE schemes associated to each leaf of the tree (e.g., the Paillier cryptosystem, for our DCR-based result mentioned above). Our construction for depth-one trees utilizes the "share-then-encrypt" multiple encryption paradigm, modified appropriately to ensure security of the resulting HE schemes. We prove several separations between the CPA security of our HE schemes based on different trees; for example, the existence of an adversary capable of breaking all schemes based on depth-one trees does not imply an adversary against our scheme based on a depth-two tree (within a computational model analogous to the generic group model). Moreover, based on our results, we give an example which reveals a type of "non-monotonicity" in the security of generic constructions of cryptographic schemes and their building-block primitives: if the building-block primitives for a scheme are replaced with other ones secure under stronger assumptions, it may happen that the resulting scheme becomes secure under a weaker assumption than the original.

Simplification/complication of the basis of prime Boolean ideal

A prime Boolean ideal has a basis of the form (x1 + e1, ..., xn + en) consisting of linear binomials; its variety consists of the single point (e1, ..., en). Complication of the basis replaces the simple linear binomials by non-linear polynomials in such a way that the variety of the ideal stays fixed. Simplification of the basis is obtaining a basis consisting of linear binomials from the complicated one while keeping its variety.
Since any ideal is a module over the ring of Boolean polynomials, the change of basis is uniquely determined by an invertible matrix over the ring.
Algorithms for invertibly simplifying and complicating the basis of a Boolean ideal that fix the size of the basis are proposed. The simplification algorithm optimizes the choice of pairs of polynomials during the Groebner basis computation, and eliminates variables without using resultants.

Lattice Point Enumeration on Block Reduced Bases

When analyzing lattice-based cryptosystems, we often need to solve the Shortest Vector Problem (SVP) in some lattice associated to the system under scrutiny. The go-to algorithms in practice to solve SVP are enumeration algorithms, which usually consist of a preprocessing step, followed by an exhaustive search. Obviously, the two steps offer a trade-off and should be balanced in their running time in order to minimize the overall complexity. In practice, the most common approach to control this trade-off is to use block reduction algorithms during the preprocessing. Despite the popularity of this approach, it lacks any well-founded analysis, and all practical approaches seem to use ad hoc parameters. This weakens our confidence in the cryptanalysis of the systems. In this work, we aim to shed light on at least one side of this trade-off and analyze the effect of block reduction on the exhaustive search. For this, we give asymptotic worst-case bounds and present results from both experiments and simulations that show its average-case behavior in practice.

The SIMON and SPECK Block Ciphers on AVR 8-bit Microcontrollers

The last several years have witnessed a surge of activity in
lightweight cryptographic design. Many lightweight block ciphers have
been proposed, targeted mostly at hardware applications. Typically, software performance has not been a priority, and consequently software
performance for many of these algorithms is unexceptional. SIMON and
SPECK are lightweight block cipher families developed by the U.S. National Security Agency for high performance in constrained hardware and software environments. In this paper, we discuss software performance and demonstrate how to achieve high performance implementations of SIMON and SPECK on the AVR family of 8-bit microcontrollers. Both ciphers compare favorably to other lightweight block ciphers on this platform. Indeed, SPECK seems to have better overall performance than any existing block cipher --- lightweight or not.

On a new fast public key cryptosystem

This paper presents a new fast public key cryptosystem, namely: a key exchange algorithm,
a public key encryption algorithm and a digital signature algorithm, based on the difficulty of
inverting the function F(x) = ((a * x) Mod (2 power p)) Div (2 power q),
where Mod is the modulo operation, Div is integer division, and a, p and q are integers with
p > q.
We also evaluate the hardness of this problem by reducing it to SAT.
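The candidate one-way function is simple enough to state directly. A minimal sketch (the parameter values below are illustrative toy choices, not from the paper): F keeps a window of middle bits of the product a*x, discarding the top bits via the modulus and the bottom bits via the division.

```python
def F(x, a, p, q):
    """F(x) = ((a * x) mod 2**p) div 2**q, with p > q.

    The modulus drops bits at position p and above; the right shift
    (integer division by 2**q) drops the q lowest bits, leaving the
    middle bits of the product a*x.
    """
    assert p > q
    return ((a * x) % (1 << p)) >> q

# Toy parameters for illustration (not proposed by the authors):
print(F(13, a=0x9E3779B9, p=32, q=16))  # -> 2257
```

Inverting F amounts to recovering x from only these middle bits, which the paper assesses by encoding the problem as a SAT instance.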

Boomerang Attack on Step-Reduced SHA-512

SHA-2 (SHA-224, SHA-256, SHA-384 and SHA-512) is a hash function family issued by the National Institute of Standards and Technology (NIST) in 2002 and is widely used all over the world. In this work, we analyze the security of SHA-512 with respect to the boomerang attack. A boomerang distinguisher on the SHA-512 compression function reduced to 48 steps is proposed, with a practical complexity of $2^{51}$. A practical example of the distinguisher for 48-step SHA-512 is also given. As far as we know, it is the best practical attack on step-reduced SHA-512.

Structure-Preserving Signatures on Equivalence Classes and Constant-Size Anonymous Credentials

Structure-preserving signatures (SPS) are a powerful building block for cryptographic protocols. We introduce SPS on equivalence classes (SPS-EQ), which allow joint randomization of messages and signatures. Messages are projective equivalence classes defined on group element vectors, so multiplying a vector by a scalar yields a different representative of the same class. Our scheme lets one adapt a signature for one representative to a signature for another representative without knowledge of any secret. Moreover, given a signature, an adapted signature for a different representative is indistinguishable from a fresh signature on a random message. We propose a definitional framework for SPS-EQ and an efficient construction in Type-3 bilinear groups, which we prove secure against generic forgers.
We also introduce set-commitment schemes that let one open subsets of the committed set. From this and SPS-EQ we then build an efficient multi-show attribute-based anonymous credential system for an arbitrary number of attributes. Our ABC system avoids costly zero-knowledge proofs and only requires a short interactive proof to thwart replay attacks. It is the first credential system whose bandwidth required for credential showing is independent of the number of its attributes, i.e., constant-size. We propose strengthened game-based security definitions for ABC and prove our scheme anonymous against malicious organizations in the standard model; finally, we discuss a concurrently secure variant in the CRS model.

Advancing the State-of-the-Art in Hardware Trojans Detection

Over the past decade, the Hardware Trojan (HT) research community has made significant progress towards developing effective countermeasures for various types of HTs, yet these countermeasures have been shown to be circumvented by sophisticated HTs designed subsequently. Therefore, instead of guaranteeing a certain (low) false negative rate for a small \textit{constant} set of publicly known HTs, a rigorous security framework for HTs should provide an effective algorithm to detect any HT from an \textit{exponentially large} class (exponential in the number of wires in the IP core) of HTs with negligible false negative rate.
In this work, we present HaTCh, the first rigorous HT detection algorithm within the paradigm of pre-silicon logic-testing based tools. HaTCh detects any HT from $H_D$, a huge class of deterministic HTs which is orders of magnitude larger than the small subclass (e.g. TrustHub) considered in the current literature.
We prove that HaTCh offers negligible false negative rate and controllable false positive rate for the class $H_D$. Given certain global characteristics regarding the stealthiness of the HT within $H_D$, the computational complexity of HaTCh for practical HTs scales polynomially with the number of wires in the IP core. We implement and test HaTCh on TrustHub and other sophisticated HTs.

Public-Coin Differing-Inputs Obfuscation and Its Applications

Differing inputs obfuscation (diO) is a strengthening of indistinguishability obfuscation (iO)
that has recently found applications to improving the efficiency and generality of obfuscation, functional encryption, and related primitives. Roughly speaking, a diO scheme ensures that the obfuscations of two efficiently generated programs are indistinguishable not only if the two programs are equivalent, but also if it is hard to find an input on which their outputs differ. The above ``indistinguishability'' and ``hardness'' conditions should hold even in the presence of an auxiliary input that is generated together with the programs.
The recent works of Boyle and Pass (ePrint 2013) and Garg et al. (Crypto 2014) cast serious doubt on the plausibility of general-purpose diO with respect to general auxiliary inputs. This leaves open the existence of a variant of diO that is plausible, simple, and useful for applications.
We suggest such a diO variant that we call {\em public-coin} diO. A public-coin diO restricts the original definition of diO by requiring the auxiliary input to be a public random string which is given as input to all relevant algorithms. In contrast to standard diO, we argue that it remains very plausible that current candidate constructions of iO for circuits satisfy the public-coin diO requirement.
We demonstrate the usefulness of the new notion by showing that several applications of diO can be obtained by relying on the public-coin variant instead. These include constructions of {\em succinct} obfuscation and functional encryption schemes for Turing Machines, where the size of the obfuscated code or keys is essentially independent of the running time and space.

Garbled RAM From One-Way Functions

Yao's garbled circuit construction is a fundamental construction in cryptography, and recent efficiency optimizations have brought it much closer to practice. However, these constructions work only for circuits, and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of ``garbled RAM'' as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits such that the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program rather than its circuit size.
Known realizations of this primitive either need to rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014). In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time based only on the minimal and necessary assumption that one-way functions exist. Our scheme allows for garbling multiple programs executed on a persistent database, and has the additional feature that program garbling is decoupled from database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and then later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs themselves.

Fully Secure Self-Updatable Encryption in Prime Order Bilinear Groups

In CRYPTO 2012, Sahai et al. raised the concern that in a cloud control system, revocation of past keys should also be accompanied by updating of previously generated ciphertexts in order to prevent unread ciphertexts from being read by revoked users. Self-updatable encryption (SUE), introduced by Lee et al. in ASIACRYPT 2013, is a newly developed cryptographic primitive that realizes ciphertext update as an inbuilt functionality and thus improves the efficiency of key revocation and time evolution in cloud management. In SUE, a user can decrypt a ciphertext associated with a specific time if and only if the user possesses a private key corresponding to either the
same time as that of the ciphertext or some future time. Furthermore, a ciphertext attached to a certain time can be updated to a new one attached to a future time using only public information. The SUE schemes available in the literature are either (a) fully secure but developed in a composite order bilinear group setting under highly non-standard assumptions, or (b) designed in prime order bilinear groups but only selectively secure. This paper presents the first fully secure SUE scheme in prime order bilinear groups under standard assumptions, namely the Decisional Linear and the Decisional Bilinear Diffie-Hellman assumptions. As pointed out by Freeman (EUROCRYPT 2010) and Lewko (EUROCRYPT 2012), the communication, storage, and computational efficiency of prime order bilinear groups are much higher compared to those of composite order bilinear groups at an equivalent level of security. Consequently, our SUE scheme is significantly more cost-effective than the existing fully secure SUE schemes.

Last updated: 2016-07-21

Security Analysis of an Authentication Scheme Using Smart Cards

In 2010, Sood et al. [3] proposed a secure dynamic identity based authentication scheme using smart cards. They claimed that their scheme is secure against various attacks. In this paper, we show that their scheme is vulnerable to both outsider and insider attacks. To remedy these security flaws, we propose an improved scheme that withstands these attacks.

Trapdoor Computational Fuzzy Extractors and Stateless Cryptographically-Secure Physical Unclonable Functions

We present a fuzzy extractor whose security can be reduced to the hardness of Learning Parity with Noise (LPN) and can efficiently correct a constant fraction of errors in a biometric source with a ``noise-avoiding trapdoor." Using this computational fuzzy extractor,
we present a stateless construction of a cryptographically-secure Physical Unclonable Function. Our construction requires no non-volatile (permanent) storage, secure or otherwise, and its computational security can be reduced to the hardness of an LPN variant in the random oracle model. The construction is ``stateless,'' because there is \emph{no} information stored between subsequent queries, which mitigates attacks against the PUF via tampering. Moreover, our stateless construction corresponds to a PUF whose outputs are free of noise because of internal error-correcting capability, which enables a host of applications beyond authentication. We describe the construction, provide a proof of computational security, analyze the security parameter for system parameter choices, and present experimental evidence that the construction is practical and reliable under a wide environmental range.
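For readers unfamiliar with the underlying assumption, the shape of an LPN instance can be sketched in a few lines (an illustrative toy sketch of standard LPN sample generation, not the paper's extractor or its LPN variant): each sample pairs a uniform vector a with the noisy inner product <a, s> + e mod 2.

```python
import secrets

def lpn_samples(s, m, tau_num=1, tau_den=8):
    """Generate m LPN samples (a, b) with b = <a, s> + e mod 2.

    a is uniform over GF(2)^n; e is Bernoulli(tau_num/tau_den) noise.
    Recovering s from many such samples is the LPN problem.
    """
    n = len(s)
    out = []
    for _ in range(m):
        a = [secrets.randbelow(2) for _ in range(n)]
        e = 1 if secrets.randbelow(tau_den) < tau_num else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out

samples = lpn_samples([1, 0, 1, 1], m=5)
print(len(samples))  # -> 5
```

In the fuzzy-extractor setting, the role of the noise e is played by the noisy biometric or PUF readings, which is what ties the construction's security to LPN.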

Analysis of Lewko-Sahai-Waters Revocation System

In 2010, Lewko, Sahai and Waters proposed an efficient revocation system, but they neglected the security differences between one-to-one encryption and one-to-many encryption. In their system, an authority generates all users' decryption keys once and for all. We remark that this inherent drawback makes the system vulnerable to an attack launched by malicious users. These malicious users could exchange their decryption keys after receiving them from the authority in order to maximize their own interests. Thus, the Lewko-Sahai-Waters revocation system cannot truly revoke a malicious user. From a practical point of view, this flaw greatly discounts the importance of the system.

Outsourcing Secure Two-Party Computation as a Black Box

Secure multiparty computation (SMC) offers a technique to preserve functionality and data privacy in mobile applications. Current protocols that make this costly cryptographic construction feasible on mobile devices securely outsource the bulk of the computation to a cloud provider. However, these outsourcing techniques are built on specific secure computation assumptions and tools, and applying new SMC ideas to the outsourced setting requires the protocols to be completely rebuilt and proven secure. In this work, we develop a generic technique for lifting any secure two-party computation protocol into an outsourced two-party SMC protocol. By augmenting the function being evaluated with auxiliary consistency checks and input values, we can create an outsourced protocol with low overhead cost. Our implementation and evaluation show that in the best case, our outsourcing additions execute within the confidence intervals of two servers running the same computation, and consume approximately the same bandwidth. In addition, the mobile device itself uses minimal bandwidth over a single round of communication. This work demonstrates that efficient outsourcing is possible with any underlying SMC scheme, and provides an outsourcing protocol that is efficient and directly applicable to current and future SMC techniques.

Boosting Higher-Order Correlation Attacks by Dimensionality Reduction

Multi-variate side-channel attacks make it possible to break higher-order masking protections by combining several leakage samples.
But how to optimally extract all the information contained in all possible $d$-tuples of points?
In this article, we introduce preprocessing tools that answer this question.
We first show that maximizing the higher-order CPA coefficient is equivalent to finding the maximum of the covariance.
We apply this equivalence to the problem of trace dimensionality reduction by linear combination of its samples.
Then we establish the link between this problem and Principal Component Analysis. In a second step, we present the optimal solution to the problem of maximizing the covariance.
We also theoretically and empirically compare these methods.
We finally apply them on real measurements, publicly available under the DPA Contest v4, to evaluate how the proposed techniques improve the second-order CPA (2O-CPA).
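The dimensionality-reduction step the abstract describes, projecting each trace onto a single linear combination of its samples, can be sketched with plain PCA (an illustrative sketch on synthetic data; the paper's contribution is the covariance-optimal combination, not PCA itself).

```python
import numpy as np

# Synthetic "traces": 200 acquisitions of 50 time samples each.
rng = np.random.default_rng(0)
traces = rng.normal(size=(200, 50))

# Center the traces, then project onto the first principal component,
# i.e. the linear combination of samples with the largest variance.
centered = traces - traces.mean(axis=0)
cov = centered.T @ centered / (len(traces) - 1)   # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)            # eigh: ascending eigenvalues
pc1 = eigvecs[:, -1]                              # direction of largest variance
reduced = centered @ pc1                          # one scalar per trace

print(reduced.shape)  # -> (200,)
```

Each trace is thus reduced to one value before the higher-order CPA distinguisher is applied, which is what makes combining $d$-tuples of points tractable.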

Efficient Generic Zero-Knowledge Proofs from Commitments

Even though zero-knowledge proofs have existed for more than 30 years, few
generic constructions for zero-knowledge exist. In this paper we
present a new kind of commitment scheme on which we build a novel and
efficient zero-knowledge protocol for circuit satisfiability.
We can prove knowledge of the AES key that maps a particular plaintext to a particular ciphertext
in less than 4 seconds with a soundness error of $2^{-40}$. Our protocol only requires a number
of commitments proportional to the security parameter with a small constant (roughly 5).

Certificateless Proxy Re-Encryption Without Pairing: Revisited

Proxy Re-Encryption was introduced by Blaze, Bleumer and Strauss to efficiently solve the problem of delegation of decryption rights. In proxy re-encryption, a semi-honest proxy transforms a ciphertext intended for Alice to a ciphertext of the same message for Bob without learning anything about the underlying message. Since its introduction, several proxy re-encryption schemes in the Public Key Infrastructure (PKI) and Identity (ID) based settings have been proposed. In practice, systems in the public key infrastructure suffer from the \textit{certificate management problem} and those in the identity based setting suffer from the \textit{key escrow problem}. Certificateless Proxy Re-encryption schemes enjoy the advantages provided by ID-based constructions without suffering from the key escrow problem.
In this work, we construct the \textit{first} unidirectional, single-hop CCA-secure certificateless proxy re-encryption scheme \textit{without} \textit{pairing} by extending the PKI based construction of Chow et al. proposed in 2010. We prove its security in the random oracle model under the Computational Diffie-Hellman (CDH) assumption. Prior to this work, the only secure certificateless proxy re-encryption scheme was due to Guo et al., proposed in 2013 using bilinear pairing. They proved their construction RCCA-secure under the $q$-weak Decisional Bilinear Diffie-Hellman assumption.
The construction proposed in this work is more efficient than that system and its security relies on more standard assumptions. We also show that the recently proposed construction of Yang et al. is insecure with respect to the security model considered in this work.

Bicliques with Minimal Data and Time Complexity for AES (Extended Version)

Biclique cryptanalysis is a recent technique that has been successfully applied to AES, resulting in key recovery faster than brute force. However, a major hurdle in carrying out biclique cryptanalysis on AES is that it requires very high data complexity. This naturally raises questions over the practical feasibility of implementing the biclique attack in the real world. In Crypto'13, Canteaut et al. proposed a biclique attack where the data complexity was reduced to a single plaintext-ciphertext pair. However, no application of the same to AES was suggested.
In this paper, we re-evaluate the security bound of full round AES against the biclique attack. Under some reasonable restrictions, we exhaustively analyze the most promising class of biclique cryptanalysis as applied to AES through a computer-assisted search, and find attacks optimal with respect to computational and data complexities:
- Among attacks with the minimal data complexity of the unicity distance, the ones with computational complexity 2^126.67 (for AES-128), 2^190.9 (for AES-192) and 2^255 (for AES-256) are the fastest. Each attack just requires 2 (for AES-128 and AES-192) or 3 (for AES-256) known plaintexts for success probability 1.
We obtain these results using the improved biclique attack proposed in Crypto'13.
- Among attacks with data complexity less than the full codebook, for AES-128, the ones of computational complexity 2^126.16 are fastest. Within these, the one with data complexity 2^64 requires the smallest amount of data. Thus, the original attack (with data complexity 2^88) did not have the optimal data complexity
for AES-128. Similar findings are observed for AES-192 as well (data complexity 2^48 as against 2^80 in the
original attack). For AES-256, we find an attack that has a lower computational complexity of 2^254.31 as
compared to the original attack complexity of 2^254.42.
- Among all attacks covered, the ones of computational complexity 2^125.56 (for AES-128), 2^189.51 (for AES-192) and 2^253.87 (for AES-256) are fastest, though requiring the full codebook. This can be considered as an indication of the limitations of the independent-biclique attack approach as applied to AES.

Cryptanalysis of JAMBU

In this article, we analyse the security of the authenticated encryption mode JAMBU, a submission to the CAESAR competition that remains currently unbroken. We show that the security claims of this candidate regarding its nonce-misuse resistance can be broken. More precisely, we explain a technique to guess in advance a ciphertext block corresponding to a plaintext that has never been queried before (nor its prefix), thus breaking the confidentiality of the scheme when the attacker can make encryption queries with the same nonce. Our attack is very practical as it requires only about 2^{32} encryption queries and computations (instead of the 2^{128} claimed by the designers). Our cryptanalysis has been fully implemented in order to verify our findings. Moreover, due to the small tag length of JAMBU, we show how this attack can be extended in the nonce-respecting scenario to break confidentiality in the adaptive chosen-ciphertext model (IND-CCA2) with 2^{96} computations, with message prefixes not previously queried.

Immunizing Multilinear Maps Against Zeroizing Attacks

In recent work Cheon, Han, Lee, Ryu, and Stehle presented an attack on the multilinear map of Coron, Lepoint, and Tibouchi (CLT). They show that given many low-level encodings of zero, the CLT multilinear map can be completely broken, recovering the secret factorization of the CLT modulus. The attack is a generalization of the "zeroizing" attack of Garg, Gentry, and Halevi.
We first strengthen the attack of Cheon, Han, Lee, Ryu, and Stehle by showing that CLT can be broken even without low-level encodings of zero. This strengthening is sufficient to show that the subgroup elimination assumption does not hold for the CLT multilinear map.
We then present a generic defense against this type of "zeroizing" attack. For an arbitrary asymmetric composite-order multilinear map (including CLT), we give a functionality-preserving transformation that ensures that no sequence of map operations will produce valid encodings (below the zero-testing level) whose product is zero. We prove security of our transformation in a generic model of composite-order multilinear maps. Our new transformation rules out "zeroizing" attacks, leaving no currently known attacks on the decision linear assumption, the subgroup elimination assumption, and other related problems for the CLT multilinear map. Of course, in time, it is possible that different attacks on CLT will emerge.
Update: Since the publication of this work, Coron, Lepoint, and Tibouchi have further strengthened the original attacks of Cheon et al. With the strengthened attack, the mitigations we describe in this work no longer suffice to secure the original CLT multilinear map. However, we have preserved the original exposition of our zero-immunizing transformation (Section 3), since this transformation is of independent interest. Notably, our transformation still rules out low-level zero encodings (Theorem 3.14), and thus provides robustness in the setting of deterministic encodings.