All papers in 2017 (Page 11 of 1262 results)
When It’s All Just Too Much: Outsourcing MPC-Preprocessing
Most modern actively secure multiparty computation protocols make use of a function and input independent pre-processing phase. This pre-processing phase is tasked with producing some form of correlated randomness and distributing it to the parties. Whilst the “online” phase of such protocols is exceedingly fast, the bottleneck comes in the pre-processing phase. In this paper we examine situations where the computing parties in the online phase may want to outsource the pre-processing phase to another set of parties, or to a sub-committee. We examine how this can be done, and also describe situations where this may be a benefit.
Side-channel Analysis of Lightweight Ciphers: Does Lightweight Equal Easy?
Side-channel attacks represent a powerful category of attacks against cryptographic devices. Still, side-channel analysis for lightweight ciphers is much less investigated than, for instance, for AES. Although intuition may lead to the conclusion that lightweight ciphers are weaker in terms of side-channel resistance, this remains to be confirmed and quantified. In this paper, we consider various side-channel analysis metrics which should provide insight into the resistance of lightweight ciphers against side-channel attacks. In particular, for the non-profiled scenario we use the theoretical confusion coefficient and empirical correlation power analysis. Furthermore, we conduct a profiled side-channel analysis using various machine learning attacks on PRESENT and AES. Our results show that the difference between AES and lightweight ciphers is smaller than one would expect. Interestingly, we observe that the studied 4-bit S-boxes have a different side-channel resilience, while the difference in the 8-bit ones is only theoretically present.
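For concreteness, a non-profiled correlation power analysis step of the kind referred to above can be sketched as follows; this is an illustrative baseline only, and the trace array, plaintext bytes, and placeholder S-box are assumptions rather than material from the paper.

import numpy as np

SBOX = np.arange(256, dtype=np.uint8)  # placeholder S-box; substitute the target cipher's real table

def cpa_rank_key_byte(traces, plaintext_bytes):
    """Rank key-byte guesses by correlation power analysis.
    traces: (n_traces, n_samples) float array; plaintext_bytes: (n_traces,) uint8 array."""
    scores = np.zeros(256)
    tr_c = traces - traces.mean(axis=0)                      # center each time sample
    for guess in range(256):
        # Leakage hypothesis: Hamming weight of the S-box output under this key guess.
        hw = np.array([bin(int(SBOX[p ^ guess])).count("1") for p in plaintext_bytes], dtype=float)
        hw_c = hw - hw.mean()
        denom = np.sqrt((hw_c ** 2).sum() * (tr_c ** 2).sum(axis=0))
        corr = np.abs(hw_c @ tr_c) / np.where(denom == 0, 1.0, denom)
        scores[guess] = corr.max()                           # best correlation over all time samples
    return np.argsort(scores)[::-1]                          # key-byte guesses, most likely first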
Message-Recovery MACs and Verification-Unskippable AE
This paper explores a new type of MACs called message-recovery MACs (MRMACs). MRMACs have an additional input that gets recovered upon verification.
Receivers must execute verification in order to recover this input, making the verification process unskippable. Such a feature helps avoid mis-implementing verification algorithms.
The syntax and security notions of MRMACs are rigorously formulated. In particular, we formalize the notion of unskippability and present a construction of an unskippable MRMAC from a tweakable cipher and a universal hash function.
Our construction is provided with formal security proofs.
We extend the idea of MRMACs to a new type of authenticated encryption called verification-unskippable AE (VUAE).
We propose a generic Enc-then-MRMAC composition which realizes VUAE. The encryption part needs to satisfy a new security notion called one-time undecipherability. We provide three constructions that are one-time undecipherable, and they are proven secure under various security models.
Gaussian Sampling over the Integers: Efficient, Generic, Constant-Time
Sampling integers with Gaussian distribution is a fundamental problem that arises in almost every application of lattice cryptography, and it can be both time consuming and challenging to implement. Most previous work has focused on the optimization and implementation of integer Gaussian sampling in the context of specific applications, with fixed sets of parameters. We present new algorithms for discrete Gaussian sampling that are generic (application independent), efficient, and more easily implemented in constant time without incurring a substantial slow-down, making them more resilient to side-channel (e.g., timing) attacks. As an additional contribution, we present new analytical techniques that can be used to simplify the precision/security evaluation of floating point cryptographic algorithms, and an experimental comparison of our algorithms with previous algorithms from the literature.
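For orientation, a baseline rejection sampler for a discrete Gaussian over the integers is sketched below; it is deliberately simple, not constant-time, and not one of the paper's algorithms, with purely illustrative parameters.

import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail=12):
    """Baseline rejection sampler for the discrete Gaussian D_{Z,sigma,center}; NOT constant time."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)                                    # uniform candidate in a tail-cut range
        rho = math.exp(-((z - center) ** 2) / (2.0 * sigma * sigma))  # unnormalized Gaussian weight
        if random.random() < rho:
            return z

samples = [sample_discrete_gaussian(3.2) for _ in range(1000)]        # e.g. sigma = 3.2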
Pseudorandomness of Ring-LWE for Any Ring and Modulus
We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to the decision version of (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus.
Threshold Fully Homomorphic Encryption
We formally define and give the first construction of (leveled) threshold fully homomorphic encryption for any access structure induced by a monotone Boolean formula and in particular for the threshold access structure. Our construction is based on the learning with errors assumption and can be instantiated with any existing homomorphic encryption scheme that satisfies fairly general conditions, such as Gentry, Sahai, Waters (CRYPTO 2013) and Brakerski, Gentry, Vaikuntanathan (ITCS 2012).
From threshold homomorphic encryption, we construct function secret sharing and distributed pseudorandom functions for the aforementioned access structures. No such constructions were known prior to this work.
A Framework for Universally Composable Diffie-Hellman Key Exchange
The analysis of real-world protocols, in particular key exchange protocols and protocols built on top of them, is a very complex, error-prone, and tedious task. Besides the complexity of the protocols themselves, one important reason for this is that the security of the protocols has to be reduced to the security of the underlying cryptographic primitives for every protocol time and again.
We would therefore like to get rid of reduction proofs for real-world key exchange protocols as much as possible and in many cases altogether, also for higher-level protocols which use the exchanged keys. So far some first steps have been taken in this direction. But existing work is still quite limited, and, for example, does not support Diffie-Hellman (DH) key exchange, a prevalent cryptographic primitive for real-world protocols.
In this paper, building on work by Küsters and Tuengerthal, we provide an ideal functionality in the universal composability setting which supports several common cryptographic primitives, including DH key exchange. This functionality helps to avoid reduction proofs in the analysis of real-world protocols and often eliminates them completely. We also propose a new general ideal key exchange functionality which allows higher-level protocols to use exchanged keys in an ideal way. As a proof of concept, we apply our framework to three practical DH key exchange protocols, namely ISO 9798-3, SIGMA, and OPTLS.
New and Old Limits for AES Known-Key Distinguishers
Known-key distinguishers have been introduced by Knudsen and Rijmen in 2007 to better understand the security of block ciphers in situations where the key cannot be considered to be secret, i.e. the "thing between secret-key model and hash function use-cases".
AES is often considered as a target of such analyses, simply because AES or its building blocks are used in many settings that go beyond classical encryption. The most recent approach of Gilbert (proposed at Asiacrypt 2014) considers 8 core rounds and extends them by one round in each direction. The resulting 10-round approach has a time complexity of , and the best generic approach was shown to beat the proposed method with probability , which is why it is referred to as a "distinguisher". Interestingly, Gilbert's work also for the first time showed that the known-key model may not be weaker than the chosen-key model, as the best chosen-key attacks on AES only cover 9 rounds so far. This current state of affairs is unsatisfying as it contradicts the original intent of the known-key model.
In this paper we pick up the work of Gilbert, further exploring the limits of the known-key model with a focus on the AES, and eventually propose a way to remedy the situation.
In that work, arguments are put forward suggesting that a total of two extension rounds seem to be the limit in the known-key model, and that likely only a distinguisher that exploits the uniform distribution property can be extended in such a way.
We disprove both conjectures and arrive at the following results: We firstly show that the technique proposed by Gilbert can also be used to extend a known-key distinguisher based on truncated differential trails. This allows us to present improved known-key distinguishers for 7 to 10 rounds of AES. In particular, we are able to set up a 9-round known-key distinguisher for AES with a time complexity of and a 10-round known-key distinguisher with a time complexity of .
Secondly, we show that more than two extension rounds are possible. As a result, we describe the first known-key distinguishers on 12 rounds of AES, by extending Gilbert's 8-round known-key distinguisher by two rounds in each direction. The time complexity is , and for this result we do have supporting formal arguments, similar to Gilbert, that the best generic approach to beat the proposed method has probability .
This also shows that the counter-intuitive gap between the known-key and the chosen-key model may be wider than initially thought. To remedy the situation, we propose a refinement of the known-key model which restores its original intent.
Towards Easy Key Enumeration
Key enumeration solutions are post-processing schemes for the output sequences of side-channel distinguishers, whose application is hampered by very large key candidate spaces and computational power requirements. An attacker may spend several days or months enumerating a huge key space. In this paper, we aim at pre-processing and reducing the key candidate space by deleting impossible key candidates before enumeration. A new distinguisher named Group Collision Attack (GCA) is given. Moreover, we introduce key verification into key recovery and propose a new divide-and-conquer strategy named Key Grouping Enumeration (KGE). KGE divides the huge key space into several groups and uses GCA to delete impossible key combinations and output possible ones in each group. KGE then recombines the remaining key candidates in each group using verification. These two impossible-candidate deletion steps leave a much smaller number of key candidates at a small computational cost. Thus, the attacker can use KGE as a pre-processing tool for key enumeration and enumerate the key more easily and quickly in a much smaller candidate space.
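The divide-and-conquer idea behind such pre-processing can be pictured roughly as below; this sketch is illustrative only and does not reproduce the paper's GCA distinguisher or KGE algorithm, and the score dictionaries, threshold, and verification callback are assumptions.

from itertools import product

def prune_group(scores_per_byte, threshold):
    """scores_per_byte: one dict {candidate_byte: distinguisher score} per key byte in the group.
    Keep only candidates whose score reaches the threshold, then form the group's joint survivors."""
    kept = [[c for c, s in scores.items() if s >= threshold] for scores in scores_per_byte]
    return list(product(*kept))

def recombine(groups_survivors, verify):
    """Cross the surviving candidates of all groups and keep only combinations passing verification,
    e.g. a check against a known plaintext/ciphertext pair."""
    for combo in product(*groups_survivors):
        candidate_key = b"".join(bytes(part) for part in combo)
        if verify(candidate_key):
            yield candidate_key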
A Modular Security Analysis of EAP and IEEE 802.11
We conduct a reduction-based security analysis of the Extensible Authentication Protocol (EAP), a widely used three-party authentication framework. EAP is often found in enterprise networks where it allows a client and an authenticator to establish a shared key with the help of a mutually trusted server. Viewing EAP as a three-party authenticated key exchange protocol, we show that the general EAP construction achieves a security notion we call 3P-AKE. The 3P-AKE security notion captures the idea of weak forward secrecy and is a simplified three-party version of the well-known eCK model in the two-pass variant. Our analysis is modular and reflects the compositional nature of EAP.
Additionally, we show that the security of EAP can easily be upgraded to provide full forward secrecy simply by adding a subsequent key-confirmation step between the client and the authenticator. In practice this key-confirmation step is often carried out in the form of a 2P-AKE protocol which uses EAP to bootstrap its authentication. A concrete example is the extremely common IEEE 802.11 protocol used in WLANs. In enterprise settings EAP is often used in conjunction with IEEE 802.11 in order to allow the wireless client to authenticate itself to a wireless access point (the authenticator) through some centrally administrated server. Building on our modular results for EAP, we get as our second major result the first reduction-based security result for IEEE 802.11 combined with EAP.
High-Order Conversion From Boolean to Arithmetic Masking
Masking with random values is an effective countermeasure against side-channel attacks. For cryptographic algorithms combining arithmetic and Boolean masking, it is necessary to switch from arithmetic to Boolean masking and vice versa. Following a recent approach by Hutter and Tunstall, we describe a high-order Boolean to arithmetic conversion algorithm whose complexity is independent of the register size k. Our new algorithm is proven secure in the Ishai, Sahai and Wagner (ISW) framework for private circuits. In practice, for small orders, our new countermeasure is one order of magnitude faster than previous work.
We also describe a 3rd-order attack against the 3rd-order Hutter-Tunstall algorithm, and a constant, 4th-order attack against the t-th order Hutter-Tunstall algorithms, for any t>=4.
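For background, the classic first-order Boolean-to-arithmetic conversion of Goubin (CHES 2001), whose operation count is already independent of the register size k, is sketched below; the paper's contribution is the high-order case, which this illustration does not cover.

import random

def goubin_boolean_to_arithmetic(x_prime, r, k=32):
    """Given a Boolean masking x = x_prime XOR r, return A such that x = (A + r) mod 2^k.
    First-order secure conversion with a constant number of operations, independent of k."""
    mask = (1 << k) - 1
    gamma = random.getrandbits(k)
    t = x_prime ^ gamma
    t = (t - gamma) & mask
    t ^= x_prime
    gamma ^= r
    a = x_prime ^ gamma
    a = (a - gamma) & mask
    a ^= t
    return a

# quick self-check
k = 32
x, r = random.getrandbits(k), random.getrandbits(k)
a = goubin_boolean_to_arithmetic(x ^ r, r, k)
assert (a + r) & ((1 << k) - 1) == x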
A Lattice-Based Universal Thresholdizer for Cryptographic Systems
We develop a general approach to thresholdizing a large class of (non-threshold) cryptographic schemes. We show how to add threshold functionality to CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. To do so, we introduce a general tool, called a universal thresholdizer, from which many threshold systems are possible. The tool builds upon a lattice-based fully-homomorphic encryption (FHE) system. Applying the tool to a (non-threshold) lattice-based signature gives the first single-round threshold signature from the learning with errors problem (LWE). Applying the tool to a (non-threshold) lattice-based CCA-secure PKE gives a single-round lattice-based threshold CCA-secure PKE.
Indistinguishability Obfuscation from Trilinear Maps and Block-Wise Local PRGs
We consider the question of finding the lowest degree for which multilinear maps suffice to obtain IO. The current state of the art (Lin, EUROCRYPT'16, CRYPTO '17; Lin and Vaikuntanathan, FOCS'16; Ananth and Sahai, EUROCRYPT '17) is that multilinear maps of a certain degree (under suitable security assumptions) suffice for IO, assuming the existence of pseudo-random generators (PRGs) with output locality matching that degree. However, these works cannot answer the question of whether a lower degree suffices, as no polynomial-stretch PRG with lower locality exists.
In this work, we present a new approach that relies on the existence of PRGs with block-wise locality, i.e., every output bit depends on at most a bounded number of (disjoint) input blocks, each consisting of up to a bounded number of input bits. We show that the existence of PRGs with small block-wise locality is plausible, and also provide:
* A construction of a general-purpose indistinguishability obfuscator from -linear maps and a subexponentially-secure PRG with block-wise locality and polynomial stretch.
* A construction of general-purpose functional encryption from -linear maps and any slightly super-polynomially secure PRG with block-wise locality and polynomial stretch.
All our constructions are based on the SXDH assumption on the underlying multilinear maps and the subexponential Learning With Errors (LWE) assumption, and follow by instantiating our new generic bootstrapping theorems with Lin's recently proposed FE scheme (CRYPTO '17). Inherited from Lin's work, our security proof requires algebraic multilinear maps (Boneh and Silverberg, Contemporary Mathematics), whereas security when using noisy multilinear maps is based on a family of more complex assumptions that hold in the generic model.
Our candidate PRGs with block-wise locality are based on Goldreich's local functions, and we show that the security of such instantiations is backed by similar validation as constructions with (conventional) locality. We further complement this with hardness amplification techniques that weaken the pseudorandomness requirements.
Proof of Luck: an Efficient Blockchain Consensus Protocol
In this paper, we present designs for multiple blockchain consensus primitives and a novel blockchain system, all based on the use of trusted execution environments (TEEs), such as Intel SGX-enabled CPUs. First, we show how using TEEs for existing proof of work schemes can make mining equitably distributed by preventing the use of ASICs. Next, we extend the design with proof of time and proof of ownership consensus primitives to make mining energy- and time-efficient. Further improving on these designs, we present a blockchain using a proof of luck consensus protocol. Our proof of luck blockchain uses a TEE platform's random number generation to choose a consensus leader, which offers low-latency transaction validation, deterministic confirmation time, negligible energy consumption, and equitably distributed mining. Lastly, we discuss a potential protection against up to a constant number of compromised TEEs.
IPcore implementation susceptibility: A case study of Low latency ciphers
Security evaluation of third-party cryptographic IP (Intellectual Property) cores is often neglected for several reasons, including lack of awareness of the threat, lack of a trust validation methodology, or treating security as a byproduct. In particular, the validation of low-latency cipher IP cores on Internet of Things (IoT) devices is crucial, as such devices may otherwise become vulnerable to information theft. In this paper, we show how an (un)intentional implementation choice can make a cipher IP (hard) core susceptible to side-channel attacks, and how the susceptible implementation can be experimentally exploited to reveal the secret key on an FPGA using power analysis. Our contributions are as follows: First, we present a Look-Up Table (LUT) based unrolled implementation of the PRINCE block cipher with place-and-route constraints on an FPGA. Second, using a power analysis attack, we recover the 128-bit key of PRINCE with complexity 2^9. Finally, we conclude the paper with experimental results.
Efficient Multivariate Ring Signature Schemes
Multivariate Cryptography is one of the main candidates for creating post-quantum cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. However, there is a lack of more advanced schemes, such as schemes for oblivious transfer and signature schemes with special properties. While, in the last years, a number of multivariate ring signature schemes have been proposed, all of these have weaknesses in terms of security or efficiency. In this paper we propose a simple and efficient technique to extend arbitrary multivariate signature schemes to ring signature schemes and illustrate it using the example of Rainbow. The resulting scheme provides perfect anonymity for the signer (as member of a group), as well as shorter ring signatures than all previously proposed post-quantum ring signature schemes.
An Analysis of FV Parameters Impact Towards its Hardware Acceleration
The development of cloud computing services is restrained by privacy concerns. Centralized medical services, for instance, require a guarantee of confidentiality when using outsourced computation platforms. Fully Homomorphic Encryption (FHE) is an intuitive solution to address this issue, but until 2009, existing schemes were only able to evaluate a reduced number of operations (Partially Homomorphic Encryption). In 2009, C. Gentry proposed a blueprint to construct FHE schemes from Somewhat Homomorphic Encryption (SHE) schemes. However, it was not practical due to the huge data size overhead and the exponential noise growth of the initial SHE. Since then, major improvements have been made to SHE schemes and their noise management, and the resulting schemes, like BGV and FV, make small applications foreseeable. Besides scheme improvements, new practical approaches were proposed to bring homomorphic encryption closer to practice. The stream-cipher-based trans-ciphering approach brought by Canteaut et al. in 2015 reduces the on-line latency of the trans-ciphering process to a simple homomorphic addition. The homomorphic evaluation of stream ciphers, which produces the trans-ciphering keystream, can be computed in an off-line phase, resulting in an almost transparent trans-ciphering process from the user's point of view. This approach combined with hardware acceleration could bring homomorphic encryption closer to practice.
This paper deals with the choice of FV parameters for an efficient implementation of this scheme, in light of the common approaches of related works. At first sight, using a large polynomial degree to reduce the coefficient size seemed advantageous, but further observations contradict this. Large polynomial degrees imply larger ciphertexts and more complex implementations, but smaller ones imply more primes to find for the CRT polynomial representation. The result of this preliminary work on the choice of an adequate hardware target motivates the choice of small-degree polynomials rather than small coefficients for the FV scheme.
Cache-Based Application Detection in the Cloud Using Machine Learning
Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware-level leakages on shared physical servers. A machine can readily feel the presence of a co-located instance with a heavy computational load through performance degradation due to contention on shared resources. Shared cache architectures such as the last level cache (LLC) have become a popular leakage source to mount cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine-grain information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across the many versions and numerous target platforms, a task too complicated, error-prone and costly to be handled by human beings.
Here we propose a machine learning based technique to classify applications according to their cache access profiles. We show that, with minimal and simple manual processing steps, feature vectors can be used to train models using support vector machines to classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a successful classification rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2 the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.
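A minimal sketch of the classification step is given below; the feature files, labels, and hyperparameters are assumptions for illustration and do not reproduce the authors' pipeline.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed inputs: one row of cache-access features per profiling run, one application label per run.
X = np.load("cache_features.npy")     # hypothetical file, shape (n_runs, n_features)
y = np.load("labels.npy")             # hypothetical file, shape (n_runs,)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # support vector machine classifier
clf.fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))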
Model-counting Approaches For Nonlinear Numerical Constraints
Model counting is of central importance in quantitative reasoning about systems. Examples include computing the probability that a system successfully accomplishes its task without errors, and measuring, in terms of Shannon entropy, the number of bits leaked by a system to an adversary. Most previous work in those areas demonstrated their analysis on programs with linear constraints, in which case model counting is polynomial time. Model counting for nonlinear constraints is notoriously hard, and thus programs with nonlinear constraints are not well-studied. This paper surveys state-of-the-art techniques and tools for model counting with respect to SMT constraints, modulo the bitvector theory, since this theory is decidable and can express nonlinear constraints that arise from the analysis of computer programs. We integrate these techniques within the Symbolic Pathfinder platform and evaluate them on difficult nonlinear constraints generated from the analysis of cryptographic functions.
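To make the problem concrete, the toy snippet below counts the models of a small nonlinear bitvector constraint by naive enumeration with an SMT solver; it is illustrative only, and the surveyed tools use far more scalable techniques.

from z3 import BitVec, Solver, Or, ULT, sat

x = BitVec("x", 8)
y = BitVec("y", 8)
s = Solver()
s.add(ULT(x * x + y * y, 100))        # nonlinear constraint over 8-bit vectors (unsigned comparison)

count = 0
while s.check() == sat:
    m = s.model()
    count += 1
    s.add(Or(x != m[x], y != m[y]))   # block this model so the next check finds a different one
print("number of models:", count)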
Key Recovery: Inert and Public
We propose a public key infrastructure framework, inspired by modern distributed cryptocurrencies, that allows for tunable key escrow, where the availability of key escrow is only provided under strict conditions and enforced through cryptographic measures. We argue that any key escrow scheme designed for the global scale must be both inert --- requiring considerable effort to recover a key --- and public --- everybody should be aware of all key recovery attempts. To this end, one of the contributions of this work is an abstract design of a proof-of-work scheme that demonstrates the ability to recover a private key for some generic public key scheme. Our framework represents a new direction for key escrow, seeking an acceptable compromise between the demands for control of cryptography on the Internet and the fundamental rights of privacy, which we seek to align by drawing parallels to the physical world.
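As a rough sketch of the "inert" ingredient, a generic hash-based proof of work bound to a public recovery request is shown below; the actual scheme in the paper differs, and the request string and difficulty are illustrative assumptions.

import hashlib
import itertools

def recovery_proof_of_work(request: bytes, difficulty_bits: int):
    """Find a nonce such that SHA-256(request || nonce) has difficulty_bits leading zero bits.
    Binding the request (e.g. the public key targeted for recovery) into the puzzle makes the
    expended effort publicly attributable to that specific recovery attempt."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(request + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest

nonce, digest = recovery_proof_of_work(b"recover-key-request", difficulty_bits=16)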
Full accounting for verifiable outsourcing
Systems for verifiable outsourcing incur costs for a prover, a verifier, and precomputation; outsourcing makes sense when the combination of these costs is cheaper than not outsourcing. Yet, when prior works impose quantitative thresholds to analyze whether outsourcing is justified, they generally ignore prover costs. Verifiable ASICs (VA)---in which the prover is a custom chip---is the other way around: its cost calculations ignore precomputation.
This paper describes a new VA system, called Giraffe; charges Giraffe for all three costs; and identifies regimes where outsourcing is worthwhile. Giraffe's base is an interactive proof geared to data-parallel computation. Giraffe makes this protocol asymptotically optimal for the prover and improves the verifier's main bottleneck by almost 3x, both of which are of independent interest. Giraffe also develops a design template that produces hardware designs automatically for a wide range of parameters, introduces hardware primitives molded to the protocol's data flows, and incorporates program analyses that expand applicability. Giraffe wins even when outsourcing several tens of sub-computations, scales to 500x larger computations than prior work, and can profitably outsource parts of programs that are not worthwhile to outsource in full.
Linear Consistency for Proof-of-Stake Blockchains
Blockchain data structures maintained via the longest-chain rule have emerged as a powerful algorithmic tool for consensus algorithms. The technique---popularized by the Bitcoin protocol---has proven to be remarkably flexible and now supports consensus algorithms in a wide variety of settings. Despite such broad applicability and adoption, current analytic understanding of the technique is highly dependent on details of the protocol's leader election scheme. A particular challenge appears in the proof-of-stake setting, where existing analyses suffer from quadratic dependence on suffix length.
We describe an axiomatic theory of blockchain dynamics that permits rigorous reasoning about the longest-chain rule in quite general circumstances and establish bounds---optimal to within a constant---on the probability of a consistency violation. This settles a critical open question in the proof-of-stake setting where we achieve linear consistency for the first time.
Operationally, blockchain consensus protocols achieve consistency by instructing parties to remove a suffix of a certain length from their local blockchain. While the analysis of Bitcoin guarantees consistency with an error that decreases exponentially in the number of removed blocks, recent work on proof-of-stake (PoS) blockchains has suffered from quadratic dependence: (PoS) blockchain protocols, exemplified by Ouroboros (Crypto 2017), Ouroboros Praos (Eurocrypt 2018) and Sleepy Consensus (Asiacrypt 2017), can only establish that the length of this suffix should grow quadratically in the security parameter. This consistency guarantee is a fundamental design parameter for these systems, as the length of the suffix is a lower bound for the time required to wait for transactions to settle. Whether this gap is an intrinsic limitation of PoS---due to issues such as the "nothing-at-stake" problem---has been an urgent open question, as deployed PoS blockchains further rely on consistency for protocol correctness: in particular, security of the protocol itself relies on this parameter. Our general theory directly improves the required suffix length from quadratic to linear in the security parameter. Thus we show, for the first time, how PoS protocols can match proof-of-work blockchain protocols for exponentially decreasing consistency error.
Our analysis focuses on the articulation of a two-dimensional stochastic process that captures the features of interest, an exact recursive closed form for the critical functional of the process, and tail bounds established for associated generating functions that dominate the failure events. Finally, the analysis provides an explicit polynomial-time algorithm for exactly computing the exponentially-decaying error function which can directly inform practice.
Lattice-Based SNARGs and Their Application to More Efficient Obfuscation
Succinct non-interactive arguments (SNARGs) enable verifying NP computations with substantially lower complexity than that required for classical NP verification. In this work, we first construct a lattice-based SNARG candidate with quasi-optimal succinctness (where the argument size is quasilinear in the security parameter). Further extension of our methods yields the first SNARG (from any assumption) that is quasi-optimal in terms of both prover overhead (polylogarithmic in the security parameter) as well as succinctness. Moreover, because our constructions are lattice-based, they plausibly resist quantum attacks. Central to our construction is a new notion of linear-only vector encryption which is a generalization of the notion of linear-only encryption introduced by Bitansky et al. (TCC 2013). We conjecture that variants of Regev encryption satisfy our new linear-only definition. Then, together with new information-theoretic approaches for building statistically-sound linear PCPs over small finite fields, we obtain the first quasi-optimal SNARGs.
We then show a surprising connection between our new lattice-based SNARGs and the concrete efficiency of program obfuscation. All existing obfuscation candidates currently rely on multilinear maps. Among the constructions that make black-box use of the multilinear map, obfuscating a circuit of even moderate depth (say, 100) requires a multilinear map with multilinearity degree in excess of 2^100. In this work, we show that an ideal obfuscation of both the decryption function in a fully homomorphic encryption scheme and a variant of the verification algorithm of our new lattice-based SNARG yields a general-purpose obfuscator for all circuits. Finally, we give some concrete estimates needed to obfuscate this "obfuscation-complete" primitive. We estimate that at 80-bits of security, a (black-box) multilinear map with approximately 2^12 levels of multilinearity suffices. This is over 2^80 times more efficient than existing candidates, and thus, represents an important milestone towards implementable program obfuscation for all circuits.
Boosting Authenticated Encryption Robustness With Minimal Modifications
Secure and highly efficient authenticated encryption (AE) algorithms which achieve data confidentiality and authenticity in the symmetric-key setting have existed for well over a decade. By all conventional measures, AES-OCB seems to be the AE algorithm of choice on any platform with AES-NI: it has a proof showing it is secure assuming AES is, and it is one of the fastest out of all such algorithms. However, algorithms such as AES-GCM and ChaCha20+Poly1305 have seen more widespread adoption, even though they will likely never outperform AES-OCB on platforms with AES-NI. Given the fact that changing algorithms is a long and costly process, some have set out to maximize the security that can be achieved with the already deployed algorithms, without sacrificing efficiency: ChaCha20+Poly1305 already improves over GCM in how it authenticates, GCM-SIV uses GCM's underlying components to provide nonce misuse resistance, and TLS1.3 introduces a randomized nonce in order to improve GCM's multi-user security. We continue this line of work by looking more closely at GCM and ChaCha20+Poly1305 to see what robustness they already provide over algorithms such as OCB, and whether minor variants of the algorithms can be used for applications where defense in depth is critical. We formalize and illustrate how GCM and ChaCha20+Poly1305 offer varying degrees of resilience to nonce misuse, as they can recover quickly from repeated nonces, as opposed to OCB, which loses all security. More surprisingly, by introducing minor tweaks such as an additional XOR, we can create a GCM variant which provides security even when unverified plaintext is released.
Mixing Confidential Transactions: Comprehensive Transaction Privacy for Bitcoin
The public nature of the blockchain has been shown to be a severe threat for the privacy of Bitcoin users. Even worse, since funds can be tracked and tainted, no two coins are equal, and fungibility, a fundamental property required in every currency, is at risk. With these threats in mind, several privacy-enhancing technologies have been proposed to improve transaction privacy in Bitcoin. However, they either require a deep redesign of the currency, breaking many currently deployed features, or they address only specific privacy issues and consequently provide only very limited guarantees when deployed separately.
The goal of this work is to overcome this trade-off. Building on CoinJoin, we design ValueShuffle, the first coin mixing protocol compatible with Confidential Transactions, a proposed enhancement to the Bitcoin protocol to hide payment values in the blockchain. ValueShuffle ensures the anonymity of mixing participants as well as the confidentiality of their payment values even against other possibly malicious mixing participants. By combining CoinJoin with Confidential Transactions and additionally Stealth Addresses, ValueShuffle provides comprehensive privacy (payer anonymity, payee anonymity, and payment value privacy) without breaking with fundamental design principles or features of the current Bitcoin system. Assuming that Confidential Transactions will be integrated in the Bitcoin protocol, ValueShuffle makes it possible to mix funds of different value as well as to mix and spend funds in the same transaction, which overcomes the two main limitations of previous coin mixing protocols.
Switch Commitments: A Safety Switch for Confidential Transactions
Cryptographic agility is the ability to switch to larger cryptographic parameters or different algorithms in the case of security doubts. This very desirable property of cryptographic systems is inherently difficult to achieve in cryptocurrencies due to their permanent state in the blockchain: for example, if it turns out that the employed signature scheme is insecure, a switch to a different scheme can only protect the outputs of future transactions but cannot fix transaction outputs already recorded in the blockchain, exposing owners of the corresponding money to risk of theft. This situation is even worse with Confidential Transactions, a recent privacy-enhancing proposal to hide transacted monetary amounts in homomorphic commitments. If an attacker manages to break the computational binding property of a commitment, he can create money out of thin air, jeopardizing the security of the entire currency. The obvious solution is to use statistically or perfectly binding commitment schemes but they come with performance drawbacks due to the need for less efficient range proofs.
In this paper, our aim is to overcome this dilemma. We introduce switch commitments, which constitute a cryptographic middle ground between computationally binding and statistically binding commitments. The key property of this novel primitive is the possibility to switch existing commitments, e.g., recorded in the blockchain, from computational bindingness to statistical bindingness if doubts in the underlying hardness assumption arise. This switch trades off efficiency for security. We provide a practical and simple construction of switch commitments by proving that ElGamal commitments with a restricted message space are secure switch commitments. The combination of switch commitments and statistically sound range proofs yields an instantiation of Confidential Transactions that can be switched to be resilient against post-quantum attackers trying to inflate the currency.
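For intuition, a schematic reading of such an ElGamal-style commitment with a toy group is sketched below (the parameters are illustrative, not secure choices, and the details of the paper's construction differ): the first component is a Pedersen-style commitment verified as usual, while additionally checking the second component corresponds to a statistically binding "switched" mode.

# Toy parameters only; a real instantiation uses a prime-order elliptic-curve group.
P = 2**127 - 1          # toy prime modulus (illustrative, NOT a secure choice)
G = 3                   # toy generator
H = 5                   # second generator, discrete log relative to G assumed unknown

def commit(m, r):
    """ElGamal-style commitment: (Pedersen part, auxiliary part)."""
    return (pow(G, m, P) * pow(H, r, P) % P, pow(G, r, P))

def verify_default(com, m, r):
    """Ordinary opening check on the Pedersen part: computationally binding, perfectly hiding."""
    return com[0] == pow(G, m, P) * pow(H, r, P) % P

def verify_switched(com, m, r):
    """'Switched' check: also verifies the auxiliary part, making the binding statistical."""
    return verify_default(com, m, r) and com[1] == pow(G, r, P)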
A new rank metric codes based encryption scheme
We design a new McEliece-like rank metric based encryption scheme from Gabidulin codes. We explain why it is not affected by the invariant subspace attacks, also known as Overbeck's attacks. The idea of the design mixes two existing approaches to designing rank metric based encryption schemes. For a given security level, our public keys are more compact than those of schemes based on the Hamming metric at the same security level.
Efficient Oblivious Transfer from Lossy Threshold Homomorphic Encryption
In this article, a new oblivious transfer (OT) protocol, secure in the presence of erasure-free one-sided active adaptive adversaries, is presented. The new bit OT protocol achieves better communication complexity than the existing bit OT protocol in this setting, and requires fewer public key encryption operations. As a building block, a new two-party lossy threshold homomorphic public key cryptosystem is designed. It is secure in the same adversary model and is of independent interest.
Automatically Detecting the Misuse of Secrets: Foundations, Design Principles, and Applications
We develop foundations and several constructions for security protocols that can automatically detect, without false positives, if a secret (such as a key or password) has been misused. Such constructions can be used, e.g., to automatically shut down compromised services, or to automatically revoke misused secrets to minimize the effects of compromise. Our threat model includes malicious agents, (temporarily or permanently) compromised agents, and clones.
Previous works have studied domain-specific partial solutions to this problem. For example, Google's Certificate Transparency aims to provide infrastructure to detect the misuse of a certificate authority's signing key, logs have been used for detecting endpoint compromise, and protocols have been proposed to detect cloned RFID/smart cards. Contrary to these existing approaches, for which the designs are interwoven with domain-specific considerations and which usually do not enable fully automatic response (i.e., they need human assessment), our approach shows where automatic action is possible. Our results unify, provide design rationales, and suggest improvements for the existing domain-specific solutions.
Based on our analysis, we construct several mechanisms for the detection of misuse. Our mechanisms enable automatic response, such as revoking keys or shutting down services, thereby substantially limiting the impact of a compromise.
In several case studies, we show how our mechanisms can be used to substantially increase the security guarantees of a wide range of systems, such as web logins, payment systems, or electronic door locks. For example, we propose and formally verify an improved version of Cloudflare's Keyless SSL protocol that enables key misuse detection.
Simplifying Design and Analysis of Complex Predicate Encryption Schemes
Wee (TCC'14) and Attrapadung (Eurocrypt'14) introduced predicate and pair encodings, respectively, as a simple way to construct and analyze attribute-based encryption schemes, or more generally predicate encryption. However, many schemes do not satisfy the simple information theoretic property proposed in those works, and thus require much more complicated analysis. In this paper, we propose a new simple property for pair encodings called symbolic security. Proofs that pair encodings satisfy this property are concise and easy to verify. We show that this property is inherently tied to the security of predicate encryption schemes by arguing that any scheme which is not trivially broken must satisfy it. Then we use this property to discuss several ways to convert between pair encodings to obtain encryption schemes with different properties like small ciphertexts or keys. Finally, we show that any pair encoding satisfying our new property can be used to construct a fully secure predicate encryption scheme. The resulting schemes are secure under a new q-type assumption which we show follows from several of the assumptions used to construct such schemes in previous work.
TwinsCoin: A Cryptocurrency via Proof-of-Work and Proof-of-Stake
We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, the Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.
Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain by Duong et al. (ePrint 2016). In order to make our construction practical, we enhance Duong et al.'s design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide an analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.
We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application while leaving the rest of the codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.
EHE: nonce misuse-resistant message authentication
We propose a nonce misuse-resistant message authentication scheme called EHE (Encrypt-Hash-Encrypt). In EHE, a message-dependent polynomial is evaluated at a point which is an encrypted nonce. The resulting polynomial hash value is encrypted again and becomes an authentication tag. We prove the prf-security of the EHE scheme and extend it to two authenticated encryption modes which follow the "encrypt-then-authenticate" paradigm.
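A toy rendering of the Encrypt-Hash-Encrypt flow is sketched below; the prime field and the keyed-hash stand-in for the block cipher are assumptions for illustration and do not match the scheme's actual instantiation.

import hashlib

P = 2**130 - 5        # toy prime field for the polynomial hash

def toy_prf(key: bytes, x: bytes) -> int:
    """Stand-in for a block-cipher call; a real instantiation uses an actual cipher."""
    return int.from_bytes(hashlib.sha256(key + x).digest(), "big") % P

def ehe_tag(k1: bytes, k2: bytes, nonce: bytes, message_blocks):
    point = toy_prf(k1, nonce)                  # Encrypt: the evaluation point is an encrypted nonce
    h = 0
    for block in message_blocks:                # Hash: message-dependent polynomial at that point
        h = (h * point + block) % P
    return toy_prf(k2, h.to_bytes(17, "big"))   # Encrypt: the polynomial hash value is encrypted again

tag = ehe_tag(b"key-one", b"key-two", b"nonce-01", [1, 2, 3])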
Smart Contracts Make Bitcoin Mining Pools Vulnerable
Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin's computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. Smart contracts enforce the malicious party's payments, and therefore miners need neither trust the attacker's intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit).
Multi-Prover Interactive Proofs: Unsound Foundations
Several Multi-Prover Interactive Proofs (MIPs) found in the literature contain proofs of soundness that are lacking. This was first observed by Crépeau, Salvail, Simard and Tapp who defined a notion of Prover isolation to partly address the issue. Furthermore, some existing Zero-Knowledge MIPs suffer from a catastrophic flaw: they outright allow the Provers to communicate via the Verifier. Consequently, their soundness claims are now seriously in doubt, if not plain wrong. This paper outlines the lack of isolation and numerous other issues found in the (ZK)MIP literature. A follow-up paper will resolve most of these issues in detail.
Efficient and Secure Outsourcing of Genomic Data Storage
Cloud computing is becoming the preferred solution for efficiently dealing with the increasing amount of genomic data. Yet, outsourcing storage and processing of sensitive data, such as genomic data, comes with important concerns related to privacy and security. This calls for new sophisticated techniques that ensure data protection from untrusted cloud providers and still enable researchers to obtain useful information. We present a novel privacy-preserving algorithm for fully outsourcing the storage of large genomic data files to a public cloud and enabling researchers to efficiently search for variants of interest. To preserve data and query confidentiality from possible leakage, our solution exploits optimal encoding for genomic variants and combines it with homomorphic encryption and private information retrieval. The proposed algorithm is implemented in C++ and evaluated on real data as part of the 2016 iDash genome privacy-protection challenge. Results show that our solution outperforms the state-of-the-art and enables researchers to search over millions of encrypted variants in a few seconds. As opposed to prior beliefs that sophisticated privacy-enhancing technologies (PETs) are impractical for real operational settings, our solution demonstrates that, in the case of genomic data, PETs can represent very efficient enablers.
Towards Shared Ownership in the Cloud
Cloud storage platforms promise a convenient way for users to share files and engage in collaborations, yet they require all files to have a single owner who unilaterally makes access control decisions. Existing clouds are, thus, agnostic to the notion of shared ownership. This can be a significant limitation in many collaborations because, for example, one owner can delete files and revoke access without consulting the other collaborators.
In this paper, we first formally define a notion of shared ownership within a file access control model. We then propose two possible instantiations of our proposed shared ownership model. Our first solution, called Commune, relies on secure file dispersal and collusion-resistant secret sharing to ensure that all access grants in the cloud require the support of an agreed threshold of owners. As such, Commune can be used in existing clouds without modifications to the platforms. Our second solution, dubbed Comrade, leverages blockchain technology in order to reach consensus on access control decisions. Unlike Commune, Comrade requires that the cloud is able to translate access control decisions that reach consensus in the blockchain into storage access control rules, thus requiring minor modifications to existing clouds. We analyze the security of our proposals and compare/evaluate their performance through implementations integrated with Amazon S3.
JIMU: Faster LEGO-based Secure Computation using Additive Homomorphic Hashes
LEGO-style cut-and-choose is known for its asymptotic efficiency in realizing actively-secure computations. The dominant cost of LEGO protocols is due to wire-soldering — the key technique that enables putting independently generated garbled gates together in a bucket to realize a logical gate. Existing wire-soldering constructions rely on homomorphic commitments and their security requires the majority of the garbled gates in every bucket to be correct.
In this paper, we propose an efficient construction of LEGO protocols that does not use homomorphic commitments but is able to guarantee security as long as at least one of the garbled gates in each bucket is correct. Additionally, the faulty gate detection rate in our protocol doubles that of the state-of-the-art LEGO constructions. With moderate additional cost, our approach can even detect faulty gates with probability 1, which enables us to run cut-and-choose on larger circuit gadgets rather than individual AND gates. We have implemented our protocol and our experiments on several benchmark applications show that the performance of our approach is highly competitive in comparison with existing implementations.
Bandwidth Hard Functions for ASIC Resistance
Cryptographic hash functions have wide applications including password hashing, pricing functions for spam and denial-of-service countermeasures, and proof of work in cryptocurrencies. Recent progress on ASIC (Application Specific Integrated Circuit) hash engines raises concerns about the security of the above applications. This leads to a growing interest in ASIC-resistant hash functions and ASIC-resistant proof of work schemes, i.e., those that do not give ASICs a huge advantage. The standard approach towards ASIC resistance today is through memory hard functions or memory hard proof of work schemes. However, we observe that the memory hardness approach is an incomplete solution. It only attempts to provide resistance to an ASIC's area advantage but overlooks the more important energy advantage. In this paper, we propose the notion of bandwidth hard functions to reduce an ASIC's energy advantage. CPUs cannot compete with ASICs for energy efficiency in computation, but we can rely on memory accesses to reduce an ASIC's energy advantage because energy costs of memory accesses are comparable for ASICs and CPUs. We propose a model for hardware energy cost that has sound foundations in practice. We then analyze the bandwidth hardness property of ASIC resistant candidates. We find scrypt, Catena-BRG and Balloon are bandwidth hard with suitable parameters. Lastly, we observe that a capacity hard function is not necessarily bandwidth hard, with a stacked double butterfly graph being a counterexample.
Simple Encrypted Arithmetic Library - SEAL v2.1
Achieving fully homomorphic encryption was a longstanding open problem in cryptography until it was resolved by Gentry in 2009. Soon after, several homomorphic encryption schemes were proposed. The early homomorphic encryption schemes were extremely impractical, but recently new implementations, new data encoding techniques, and a better understanding of the applications have started to change the situation. In this paper we introduce the most recent version (v2.1) of Simple Encrypted Arithmetic Library - SEAL, a homomorphic encryption library developed by Microsoft Research, and describe some of its core functionality.
0-RTT Key Exchange with Full Forward Secrecy
Reducing latency overhead while maintaining critical security guarantees like forward secrecy has become a major design goal for key exchange (KE) protocols, both in academia and industry. Of particular interest in this regard are 0-RTT protocols, a class of KE protocols which allow a client to send cryptographically protected payload in zero round-trip time (0-RTT) along with the very first KE protocol message, thereby minimizing latency. Prominent examples are Google's QUIC protocol and the upcoming TLS protocol version 1.3.
Intrinsically, the main challenge in a 0-RTT key exchange is to achieve forward secrecy and security against replay attacks for the very first payload message sent in the protocol. According to cryptographic folklore, it is impossible to achieve forward secrecy for this message, because the session key used to protect it must depend on a non-ephemeral secret of the receiver. If this secret is later leaked to an attacker, it should intuitively be possible for the attacker to compute the session key by performing the same computations as the receiver in the actual session.
In this paper we show that this belief is actually false. We construct the first 0-RTT key exchange protocol which provides full forward secrecy for all transmitted payload messages and is automatically resilient to replay attacks. In our construction we leverage a puncturable key encapsulation scheme which permits each ciphertext to only be decrypted once. Fundamentally, this is achieved by evolving the secret key after each decryption operation, but without modifying the corresponding public key or relying on shared state.
Our construction can be seen as an application of the puncturable encryption idea of Green and Miers (S&P 2015). We provide a new generic and standard-model construction of this tool that can be instantiated with any selectively secure hierarchical identity-based key encapsulation scheme.
Last updated: 2019-08-21
A Note on Obtain Confidentiality or/ and Authenticity in Big Data by ID-Based Generalized Signcryption
ID-based generalized signcryption can adaptively work as a signature scheme, an encryption scheme or a signcryption scheme, and avoids the heavy and complicated certificate management of a Public Key Infrastructure. It has applications in the emerging paradigm of big data security. Recently, Wei et al. proposed a new ID-based generalized signcryption scheme to obtain confidentiality or/and authenticity in big data, and claimed that their scheme is provably secure in the standard model. Unfortunately, by giving a substantial attack, we show that Wei et al.'s scheme suffers from compromise of the Private Key Generator (PKG) master secret key, and thus does not achieve indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA) or existential unforgeability against adaptive chosen-message attacks (EUF-CMA).
A Hybrid Lattice Basis Reduction and Quantum Search Attack on LWE
Recently, an increasing number of papers proposing post-quantum schemes also provide concrete parameter sets aiming for concrete post-quantum security levels. Security evaluations of such schemes need to include all possible attacks, in particular those by quantum adversaries. In the case of lattice-based cryptography, currently existing quantum attacks are mainly classical attacks carried out with quantum basis reduction as a subroutine.
In this work, we propose a new quantum attack on the learning with errors (LWE) problem, whose hardness is the foundation for many modern lattice-based cryptographic constructions. Our quantum attack is based on Howgrave-Graham's Classical Hybrid Attack and is suitable for LWE instances in recent cryptographic proposals. We analyze its runtime complexity and optimize it over all possible choices of the attack parameters. In addition, we analyze the concrete post-quantum security levels of the parameter sets proposed for the New Hope and Frodo key exchange schemes, as well as several instances of the Lindner-Peikert encryption scheme. Our results show that - depending on the assumed basis reduction costs - our Quantum Hybrid Attack either significantly outperforms, or is at least comparable to, all other attacks covered by Albrecht, Player and Scott in their work "On the concrete hardness of Learning with Errors". We further show that our Quantum Hybrid Attack improves upon the Classical Hybrid Attack in the case of LWE with binary error.
Cryptanalysis of PMACx, PMAC2x, and SIVx
At CT-RSA 2017, List and Nandi proposed two variable input length pseudorandom functions (VI-PRFs) called PMACx and PMAC2x, and a deterministic authenticated encryption scheme called SIVx. These schemes use a tweakable block cipher (TBC) as the underlying primitive, and are provably secure up to the query complexity of 2^n, where n denotes the block length of the TBC. In this paper, we falsify the provable security claims by presenting concrete attacks. We show that with the query complexity of about 2^(n/2), i.e., with the birthday complexity, PMACx, PMAC2x, and SIVx are all insecure.
Attribute-Based Encryption from Identity-Based Encryption
Ciphertext-policy attribute-based encryption (CP-ABE) is an access control mechanism where a data provider encrypts a secret message and then sends the ciphertext to the receivers according to the access policy which she/he decides. If the attributes of the receivers match the access policy, then they can decrypt the ciphertext. This paper shows a relation between ABE and identity-based encryption (IBE), and presents a bi-directional conversion between an access structure and identities. By the proposed conversion, the ABE scheme constructed from an IBE scheme will inherit the features, such as constant-size ciphertexts and anonymity, from the IBE scheme, and vice versa. It turns out that the proposed conversion also gives the first ABE achieving access structures with wildcard and constant-size ciphertexts/private keys.
Repeated Games for Generating Randomness in Encryption
In encryption schemes, the sender may not generate randomness properly if generating randomness is costly, and the sender is not concerned about the security of a message. The problem was studied by the first author (2016), and was formalized in a game-theoretic framework. In this work, we construct an encryption scheme with an optimal round complexity on the basis of the mechanism of repeated games.
Cryptanalysis of Wang et al’s Certificateless Signature Scheme without Bilinear Pairings
In recent years, the design of certificateless signature (CLS) schemes without bilinear pairings has been thoroughly investigated owing to their effectiveness in solving the key escrow problem in identity-based cryptography. In this paper, we identify that Wang et al.'s certificateless signature scheme cannot fulfil its security claims. We present a series of attack processes to demonstrate that Wang et al.'s scheme is insecure against a super type I adversary.
SCRAPE: Scalable Randomness Attested by Public Entities
Uniform randomness beacons whose output can be publicly attested to be unbiased are required in several cryptographic protocols.
A common approach to building such beacons is having a number of parties run a coin tossing protocol with guaranteed output delivery (so that adversaries cannot simply keep honest parties from obtaining randomness, consequently halting protocols that rely on it). However, current constructions face serious scalability issues due to high computational and communication overheads.
We present a coin tossing protocol for an honest majority that allows for any entity to verify that an output was honestly generated by observing publicly available information (even after the execution is complete), while achieving both guaranteed output delivery and scalability.
The main building block of our construction is the first Publicly Verifiable Secret Sharing scheme for threshold access structures that requires only O(n) exponentiations. Previous schemes required O(nt) exponentiations (where t is the threshold) from each of the parties involved, making them unfit for scalable distributed randomness generation, which requires t=n/2 and thus O(n^2) exponentiations.
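The scalability gap is easy to quantify with the paper's own asymptotics (the concrete numbers below are our own illustration):

```latex
t = \tfrac{n}{2} \;\Rightarrow\; O(nt) = O(n^{2})\ \text{exponentiations per party previously,}
\qquad\text{vs. } O(n)\ \text{here; e.g. for } n = 10^{4}:\ \sim 10^{8}\ \text{vs. } \sim 10^{4}.
```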
Last updated: 2017-09-23
SEVDSI: Secure, Efficient and Verifiable Data Set Intersection
Private set intersection is one of the most well studied and useful secure computation protocols. Many specific two party secure computation protocols have been constructed for such a functionality, but all of them incur large communication between the parties. A cloud assisted protocol was also considered to provide better efficiency, but with the potential risk of leaking more information to the cloud.
In this paper, we achieve the best of the two worlds: we design and analyze SEVDSI, a Secure, Efficient and Verifiable Data Set Intersection protocol, which is non-interactive and cloud based, in a stronger security model. Our protocol assures privacy of the data set inputs in the case of a malicious cloud and enforces authorized-only computations by the users. Moreover, the computation is verifiable, and we achieve an asymptotically lower communication cost for users than the fastest two-party computation protocols in the case of multiparty PSI. SEVDSI is provably secure in the random oracle model.
Low Cost Constant Round MPC Combining BMR and Oblivious Transfer
In this work, we present two new universally composable, actively secure, constant round multi-party protocols for generating BMR garbled circuits with free-XOR and reduced costs.
(1) Our first protocol takes a generic approach using any secret-sharing based MPC protocol for binary circuits, and a correlated oblivious transfer functionality.
(2) Our specialized protocol uses secret-sharing based MPC with information-theoretic MACs. This approach is less general, but requires no additional correlated OTs to compute the garbled circuit.
In both approaches, the underlying secret-sharing based protocol is only used for one secure multiplication per AND gate. An interesting consequence of this is that, with current techniques, constant round MPC for binary circuits is not much more expensive than practical, non-constant round protocols.
We demonstrate the practicality of our second protocol with an implementation, and perform experiments with up to parties securely computing the AES and SHA-256 circuits. Our running times improve upon the best possible performance with previous BMR-based protocols by 60 times.
Quantum Information Set Decoding Algorithms
The security of code-based cryptosystems such as the McEliece cryptosystem relies primarily on the difficulty of decoding random linear codes. The best decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoding techniques. It is also important to assess the security of such cryptosystems against a quantum computer. This research thread started in Overbeck and Sendrier's 2009 survey on code-based cryptography, and the best algorithm to date has been Bernstein's quantising of the simplest information set decoding algorithm, namely Prange's algorithm. It consists in applying Grover's quantum search to obtain a quadratic speed-up of Prange's algorithm. In this paper, we quantise other information set decoding algorithms by using quantum walk techniques which were devised for the subset-sum problem by Bernstein, Jeffery, Lange and Meurer. This results in improving the worst-case complexity of 2^{0.06035n} of Bernstein's algorithm to 2^{0.05869n} with the best algorithm presented here (where n is the codelength).
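The quadratic Grover speed-up referred to above amounts to halving the time exponent; taking the classical worst-case exponent of Prange's algorithm to be twice Bernstein's quantum one (as a quadratic speed-up implies), the comparison reads:

```latex
\sqrt{2^{\,0.1207\,n}} \;=\; 2^{\,0.06035\,n}\ \ \text{(Grover applied to Prange, Bernstein)},
\qquad\text{improved to}\ \ 2^{\,0.05869\,n}\ \ \text{via quantum walks in this paper.}
```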
Montgomery curves and their arithmetic: The case of large characteristic fields
Three decades ago, Montgomery introduced a new elliptic curve model for use in Lenstra's ECM factorization algorithm. Since then, his curves and the algorithms associated with them have become foundational in the implementation of elliptic curve cryptosystems. This article surveys the theory and cryptographic applications of Montgomery curves over non-binary finite fields, including Montgomery's x-only arithmetic and Ladder algorithm, x-only Diffie--Hellman, y-coordinate recovery, and 2-dimensional and Euclidean differential addition chains such as Montgomery's PRAC algorithm.
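As a concrete illustration of the x-only arithmetic surveyed here, the sketch below implements the textbook Montgomery Ladder with the standard xDBL/xADD formulas over a prime field. It is a toy version for readability only (not constant-time, and not code from the article).

```python
def montgomery_ladder_x(k, x_P, A, p):
    """x-coordinate of [k]P on B*y^2 = x^3 + A*x^2 + x over F_p, using only
    projective (X:Z) x-coordinates.  Toy sketch: readable, not constant-time."""
    a24 = (A + 2) * pow(4, -1, p) % p

    def xdbl(X, Z):
        s, d = (X + Z) % p, (X - Z) % p
        s2, d2 = s * s % p, d * d % p
        t = (s2 - d2) % p                      # t = 4*X*Z
        return s2 * d2 % p, t * (d2 + a24 * t) % p

    def xadd(X1, Z1, X2, Z2, x_diff):
        # differential addition: requires the affine x-coordinate of P1 - P2
        u = (X1 - Z1) * (X2 + Z2) % p
        v = (X1 + Z1) * (X2 - Z2) % p
        return (u + v) * (u + v) % p, x_diff * (u - v) * (u - v) % p

    R0, R1 = (1, 0), (x_P % p, 1)              # R0 = infinity, R1 = P; invariant: R1 - R0 = P
    for bit in bin(k)[2:]:
        if bit == '0':
            R1 = xadd(*R0, *R1, x_P)
            R0 = xdbl(*R0)
        else:
            R0 = xadd(*R0, *R1, x_P)
            R1 = xdbl(*R1)
    X, Z = R0
    return X * pow(Z, -1, p) % p               # fails (Z = 0) if [k]P is the point at infinity

# Example call with the Curve25519 parameters (p = 2^255 - 19, A = 486662, base x = 9):
# montgomery_ladder_x(k, 9, 486662, 2**255 - 19)
```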
Multi-level Access in Searchable Symmetric Encryption
Remote storage delivers a cost effective solution for data storage. If data is of a sensitive nature, it should be encrypted prior to outsourcing to ensure confidentiality; however, searching then becomes challenging. Searchable encryption is a well-studied solution to this problem. Many schemes only consider the scenario where users can search over the entirety of the encrypted data.
In practice, sensitive data is likely to be classified according to an access control policy and different users should have different access rights.
It is unlikely that all users have unrestricted access to the entire data set.
Current schemes that consider multi-level access to searchable encryption are predominantly based on asymmetric primitives.
We investigate symmetric solutions to multi-level access in searchable encryption where users have different access privileges to portions of the encrypted data and are not permitted to search over, or learn information about, data for which they are not authorised.
Public Key Cryptosystems with Noisy Secret Keys
Passwords bootstrap symmetric and asymmetric cryptography, tying keys to an individual user. Biometrics are intended to strengthen this tie. Unfortunately, biometrics exhibit noise between repeated readings. Fuzzy extractors (Dodis et al., Eurocrypt 2004) derive stable symmetric keys from noisy sources.
We ask if it is also possible for noisy sources to directly replace private keys in asymmetric cryptosystems. We propose a new primitive called public-key cryptosystems with noisy keys. Such a cryptosystem functions when the private key varies according to some metric. An intuitive solution is to combine a fuzzy extractor with a public-key cryptosystem. Unfortunately, fuzzy extractors need static helper information to account for noise. This helper information creates fundamental limitations on the resulting cryptosystems.
To overcome these limitations, we directly construct public-key encryption and digital signature algorithms with noisy keys. The core of our constructions is a computational version of the fuzzy vault (Juels and Sudan, Designs, Codes, and Cryptography 2006). Security of our schemes is based on graded encoding schemes (Garg et al., Eurocrypt 2013, Garg et al., TCC 2016). Importantly, our public-key encryption algorithm is based on a weaker model of graded encoding. If functional encryption or indistinguishability obfuscation exist in this weaker model, they also exist in the standard model.
In addition, we use the computational fuzzy vault to construct the first reusable fuzzy extractor (Boyen, CCS 2004) supporting a linear fraction of errors.
Exploding Obfuscation: A Framework for Building Applications of Obfuscation From Polynomial Hardness
There is some evidence that indistinguishability obfuscation (iO) requires either exponentially many assumptions or (sub)exponentially hard assumptions, and indeed, all known ways of building obfuscation suffer one of these two limitations. As such, any application built from iO suffers from these limitations as well. However, for most applications, such limitations do not appear to be inherent to the application, just the approach using iO. Indeed, several recent works have shown how to base applications of iO instead on functional encryption (FE), which can in turn be based on the polynomial hardness of just a few assumptions. However, these constructions are quite complicated and recycle a lot of similar techniques.
In this work, we unify the results of previous works in the form of a weakened notion of obfuscation, called Exploding iO. We show (1) how to build exploding iO from functional encryption, and (2) how to build a variety of applications from exploding iO, including all of the applications already known from FE. The construction in (1) hides most of the difficult techniques in the prior work, whereas the constructions in (2) are much closer to the comparatively simple constructions from iO. As such, exploding iO represents a convenient new platform for obtaining more applications from polynomial hardness.
SoK: Security Models for Pseudo-Random Number Generators
Randomness plays an important role in multiple applications in cryptography. It is required in fundamental tasks such as key generation, masking and hiding values, and the generation of nonces and initialization vectors.
Pseudo-random number generators have been studied by numerous authors, either to propose clear security notions and associated constructions or to point out potential vulnerabilities. In this systematization of knowledge paper, we present the three notions of generators that have been successively formalized: standard generators, stateful generators and generators with input. For each notion, we present expected security properties, where adversaries have increasing capabilities (including access to partial information on the internal variables) and we propose secure and efficient constructions, all based on the block cipher AES.
In our description of generators with input, we revisit the notions of accumulator and extractor and we point out that security crucially relies on the independence between the randomness source and the seeds of the accumulator and the extractor. To illustrate this requirement, we identify a potential vulnerability of the NIST standard CTR_DRBG.
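To make the interface of a "generator with input" concrete, here is a minimal toy sketch built from AES in counter mode (assuming the pycryptodome package). It only illustrates the setup/refresh/next shape of such generators; it is neither one of the constructions proposed in the paper nor NIST's CTR_DRBG.

```python
from Crypto.Cipher import AES   # pycryptodome; any AES implementation would do
import hashlib

class ToyGeneratorWithInput:
    """Illustrative generator with input: state = (key K, counter V).
    NOT the paper's constructions and NOT NIST CTR_DRBG."""

    def __init__(self, seed: bytes):
        # The paper's key point: the seed of the extractor/accumulator must be
        # chosen independently of the randomness source being accumulated.
        self.K = hashlib.sha256(b"key" + seed).digest()[:16]
        self.V = int.from_bytes(hashlib.sha256(b"ctr" + seed).digest()[:16], "big")

    def refresh(self, entropy_input: bytes) -> None:
        # Toy extractor: hash the old state together with the new entropy.
        d = hashlib.sha256(self.K + self.V.to_bytes(16, "big") + entropy_input).digest()
        self.K, self.V = d[:16], int.from_bytes(d[16:], "big")

    def next(self, nbytes: int) -> bytes:
        out = b""
        ecb = AES.new(self.K, AES.MODE_ECB)
        while len(out) < nbytes:
            self.V = (self.V + 1) % (1 << 128)
            out += ecb.encrypt(self.V.to_bytes(16, "big"))   # AES-CTR keystream
        # Forward security: derive a fresh key from a counter value never output.
        self.V = (self.V + 1) % (1 << 128)
        self.K = ecb.encrypt(self.V.to_bytes(16, "big"))
        return out[:nbytes]
```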
Private Queries on Encrypted Genomic Data
One of the tasks in the iDASH Secure Genome Analysis Competition in 2016 was to demonstrate the feasibility of privacy-preserving queries on homomorphically encrypted genomic data. More precisely, given a list of up to 100,000 mutations, the task was to encrypt the data using homomorphic encryption in a way that allows it to be stored securely in the cloud, and enables the data owner to query the dataset for the presence of specific mutations, without revealing any information about the dataset or the queries to the cloud. We devise a novel string matching protocol that works particularly nicely with homomorphically encrypted data, and show how it yields an efficient solution to the competition task. The protocol we describe is also of independent interest to the homomorphic encryption community, as it can be applied just as well to any kind of data.
Quantum Resistant Public Key Encryption Scheme RLCE and IND-CCA2 Security for McEliece Schemes
Recently, Wang (2016) introduced a random linear code based quantum resistant public key encryption scheme RLCE which is a variant of the McEliece encryption scheme. In this paper, we introduce a revised version of the RLCE encryption scheme. The revised RLCE schemes are more efficient than the original RLCE scheme. Specifically, it is shown that RLCE schemes have smaller public key sizes compared to binary Goppa code based McEliece encryption schemes for corresponding security levels. The paper further proposes message padding schemes for RLCE to achieve IND-CCA2 security. Practical RLCE parameters for the security levels of 128, 192, and 256 bits and for the quantum security levels of 80, 110, and 144 bits are recommended. The implementation of the RLCE encryption scheme and software packages for analyzing the security strength of RLCE parameters are available at http://quantumca.org/
ZETA: Towards Tagless Authenticated Encryption
Tag-based message authentication is a popular cryptographic technique to digitally sign messages. However, for short messages, it often incurs additional costs due to large tags. In this paper, we propose a new scheme that achieves tagless message authentication. The scheme leverages a trade-off between character support and complexity of forgery to provide information security and authenticity.
Linear Cryptanalysis Using Low-bias Linear Approximations
This paper deals with linear approximations having absolute bias smaller than which were previously believed to be unusable for a linear attack. We show how a series of observations which are individually not statistically significant can be used to create a distinguisher. This is different from previous works which combined a series of significant observations to reduce the data complexity of a linear attack. We test the distinguisher on a real-world cipher and show that it can be used to improve previous results.
Proofs of Useful Work
We give Proofs of Work (PoWs) whose hardness is based on a wide array of computational problems, including Orthogonal Vectors, 3SUM, All-Pairs Shortest Path, and any problem that reduces to them (this includes deciding any graph property that is statable in first-order logic). This results in PoWs whose completion does not waste energy but instead is useful for the solution of computational problems of practical interest.
The PoWs that we propose are based on delegating the evaluation of low-degree polynomials originating from the study of average-case fine-grained complexity. We prove that, beyond being hard on the average (based on worst-case hardness assumptions), the task of evaluating our polynomials cannot be amortized across multiple instances.
For applications such as Bitcoin, which use PoWs on a massive scale, energy is typically wasted in huge proportions. We give a framework that can utilize such otherwise wasteful work.
Note: An updated version of this paper is available at https://eprint.iacr.org/2018/559. The update is to accommodate the fact (pointed out by anonymous reviewers) that the definition of Proof of Useful Work in this paper is already satisfied by a generic naive construction.
Average-Case Fine-Grained Hardness
We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure.
We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems -- namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) -- in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2-o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3-o(1)} time.
Using the same techniques we also obtain a conditional average-case time hierarchy of functions.
Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03).
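For readers unfamiliar with the source problems, Orthogonal Vectors asks whether two lists of Boolean vectors contain a pair with inner product zero; the naive algorithm below runs in O(n^2 d) time, and the OV conjecture (implied by the Strong Exponential Time Hypothesis) is that no n^{2-eps} poly(d) algorithm exists. This toy snippet is only meant to pin down the problem, not any reduction from the paper.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Naive Orthogonal Vectors: A and B are lists of 0/1 tuples of equal length.
    Returns True iff some u in A and v in B satisfy <u, v> = 0 (O(n^2 * d) time)."""
    return any(all(ui * vi == 0 for ui, vi in zip(u, v)) for u, v in product(A, B))

# Example: (1,0,1) and (0,1,0) are orthogonal, so the answer is True.
print(has_orthogonal_pair([(1, 0, 1), (0, 1, 1)], [(1, 1, 0), (0, 1, 0)]))
```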
Giving State to the Stateless: Augmenting Trustworthy Computation with Ledgers
In this work we investigate the problem of achieving secure computation by combining stateless trusted devices with public ledgers. We consider a hybrid paradigm in which a client-side device (such as a co-processor or trusted enclave) performs secure computation, while interacting with a public ledger via a possibly malicious host computer. We explore both the constructive and potentially destructive implications of such systems. We first show that this combination allows for the construction of stateful interactive functionalities (including general computation) even when the device has no persistent storage; this allows us to build sophisticated applications using inexpensive trusted hardware or even pure cryptographic obfuscation techniques. We further show how to use this paradigm to achieve censorship-resistant communication with a network, even when network communications are mediated by a potentially malicious host. Finally we describe a number of practical applications that can be achieved today. These include the synchronization of private smart contracts; rate limited mandatory logging; strong encrypted backups from weak passwords; enforcing fairness in multi-party computation; and destructive applications such as autonomous ransomware, which allows for payments without an online party.
Anonymous Attestation with Subverted TPMs
Various sources have revealed that cryptographic standards and components have been subverted to undermine the security of users, reigniting research on means to achieve security in presence of such subverted components. In this paper we consider direct anonymous attestation (DAA) in this respect. This standardized protocol allows a computer with the help of an embedded TPM chip to remotely attest that it is in a healthy state. Guaranteeing that different attestations by the same computer cannot be linked was an explicit and important design goal of the standard in order to protect the privacy of the user of the computer. Surprisingly, none of the standardized or otherwise proposed DAA protocols achieves privacy when the TPM is subverted, but they all rely on the honesty of the TPM. As the TPM is a piece of hardware, it is hardly possible to tell whether or not a given TPM follows the specified protocol. In this paper we study this setting and provide a new protocol that achieves privacy also in presence of subverted TPMs.
Improved Attacks for Characteristic-2 Parameters of the Cubic ABC Simple Matrix Encryption Scheme
In the last few years multivariate public key cryptography has experienced an infusion of new ideas for encryption. Among these new strategies is the ABC Simple Matrix family of encryption schemes which utilize the structure of a large matrix algebra to construct effectively invertible systems of nonlinear equations hidden by an isomorphism of polynomials. One promising approach to cryptanalyzing these schemes has been structural cryptanalysis, based on applying a strategy similar to MinRank attacks to the discrete differential. These attacks however have been significantly more expensive when applied to parameters using fields of characteristic 2, which have been the most common choice for published parameters. This disparity is especially great for the cubic version of the Simple Matrix Encryption Scheme.
In this work, we demonstrate a technique that can be used to implement a structural attack which is as efficient against parameters of characteristic 2 as are attacks against analogous parameters over higher characteristic fields. This attack demonstrates not only that the cubic Simple Matrix scheme is susceptible to structural attacks, but also that the published parameters claiming 80 bits of security are less secure than claimed (albeit only slightly). Similar techniques can also be applied to improve structural attacks against the original Simple Matrix encryption scheme, but they represent only a modest improvement over previous structural attacks. This work therefore demonstrates that choosing a field of characteristic 2 for the Simple Matrix encryption scheme or its cubic variant will not provide any additional security value.
FHE with Recursive Ciphertext
In this paper I propose fully homomorphic public-key encryption (FHPKE) with recursive ciphertexts. A ciphertext consists of three sub-ciphertexts corresponding to one plaintext. When we execute an addition or multiplication operation, three new sub-ciphertexts are generated recursively from the existing three sub-ciphertexts without revealing the plaintexts. The scheme is based on the discrete logarithm assumption (DLA) and the computational Diffie–Hellman assumption (CDH) of multivariate polynomials on an octonion ring with a composite number modulus. The scheme is immune to the “m and -m attack”.
A Construction of Bent Functions with Optimal Algebraic Degree and Large Symmetric Group
We present a construction of bent functions in a given number of variables, parameterized by a nonzero vector and a suitable subset of indices. We give a simple expression for the dual bent function of the construction. We prove that the constructed function has optimal algebraic degree if and only if a simple condition on the parameters holds. This construction provides a series of bent functions with optimal algebraic degree and large symmetric group if the parameters are chosen properly.
Attribute-based concurrent signatures
This paper introduces the notion of attribute-based concurrent signatures. This primitive can be considered as an interesting extension of concurrent signatures to the attribute-based setting. It allows two parties to fairly exchange their signatures only if each of them has convinced the opposite party that he/she possesses certain attributes satisfying a given signing policy. Due to this new feature, this primitive can find useful applications in online contract signing, electronic transactions and so on. We formalize this notion and present a construction which is secure in the random oracle model under the Strong Diffie-Hellman assumption and the eXternal Diffie-Hellman assumption.
Design of Lightweight Linear Diffusion Layers from Near-MDS Matrices
Near-MDS matrices provide better trade-offs between security and efficiency than MDS matrices and are therefore attractive for hardware-oriented designs. We present new designs of lightweight linear diffusion layers by constructing lightweight near-MDS matrices.
Firstly, generic near-MDS circulant matrices are found for several orders. Secondly, the implementation cost of instantiations of these generic near-MDS matrices is examined. Surprisingly, it turns out that some of the proposed near-MDS circulant matrices have the lowest XOR count among all near-MDS matrices of the same order. Further, we also present near-MDS matrices of larger orders with the lowest XOR count. The proposed matrices, together with previous constructions of order less than five, lead to near-MDS matrices with the lowest XOR count over finite fields for the considered orders. Moreover, we present some involutory near-MDS matrices constructed from Hadamard matrices. Lastly, the security of the proposed linear layers is studied by calculating lower bounds on the number of active S-boxes. It is shown that our linear layers with a well-chosen nonlinear layer can provide sufficient security against differential and linear cryptanalysis.
Improved upper bounds for the expected circuit complexity of dense systems of linear equations over GF(2)
Minimizing the Boolean circuit implementation of a given cryptographic function is an important issue. A number of papers [12,13,11,5] only consider cancellation-free straight-line programs for producing short circuits over GF(2), while [4] does not. The Boyar-Peralta (BP) heuristic [4] is a valuable tool for practical applications such as building fast software and low-power circuits for cryptographic applications, e.g. AES [4], PRESENT [7], and GOST [7]. However, the BP heuristic does not take the matrix density into account. In a dense linear system the rows can be computed by adding or removing a few elements from a "common path" that is "close" to almost all rows. The new heuristic described in this paper merges the ideas of "cancellation" and "common path". Extensive testing has been performed, and the experimental results of the new heuristic and the BP heuristic were compared. They show that the Boyar-Peralta bounds are not tight on dense systems.
SecChisel: Language and Tool for Practical and Scalable Security Verification of Security-Aware Hardware Architectures
Due to the lack of practical and scalable security verification tools and methodologies, very few of the existing hardware-software security architectures have been thoroughly checked at design time. To address this issue, our project develops a security verification methodology that is applicable to different hardware-software security architectures during the design phase. The verification framework aims to prove that a system holds desired properties with respect to not just functionality but also security; we mainly focus on information flow and non-interference properties for verification. Using these properties, confidentiality and integrity of the sensitive data can be checked at design time. The proposed verification framework is built upon the Chisel hardware construction language. By extending the Chisel language and tools, we created SecChisel. Ongoing work is focused on implementing SecChisel on top of Chisel 3 and the realisation of static and dynamic security labels.
Last updated: 2017-04-07
Improved Hybrid Consensus Scheme with Privacy-preserving Property
Proof-of-work-based consensus, adopted in Bitcoin, has already drawn much attention from the cryptocurrency and blockchain community. Despite its nice decentralization property, it has a significant limitation in terms of efficiency, since transactions cannot be confirmed within seconds. In 2016, hybrid consensus was proposed to partially deal with this issue by introducing a committee responsible for validating transactions. However, there still exist some issues with this hybrid consensus, such as selfish mining, fairness of the election of committee members, incentives for the consensus scheme, and so on.
To improve the hybrid consensus further, we first present a substitute for proof-of-work, named fair-proof-of-work (fPoW), to solve the issues related to selfish mining and fair committee election. We also demonstrate the incentives for our improved hybrid consensus. Then, based on this consensus, we build privacy-preserving constructions (including identity and content privacy preserving) to make the consensus more applicable and powerful. Finally, we give a formal security proof for our newly proposed consensus scheme. It is expected that this novel consensus scheme could be adopted in blockchains which require decentralization, high efficiency, as well as privacy preservation.
CoverUp: Privacy Through "Forced" Participation in Anonymous Communication Networks
The privacy guarantees of anonymous communication networks (ACNs) are bounded by the number of participants. As a consequence, an ACN can only achieve strong privacy guarantees if it succeeds in attracting a large number of active users. Vice versa, weak privacy guarantees render an ACN unattractive, leading to a low number of users. In this work, we show how to break this vicious circle. We develop CoverUp, a system that "forces" visitors of highly accessed websites to become involuntary participants of an ACN. CoverUp leverages basic browser functionality to execute server-served JavaScript code and to open remote connections to connect all website visitors to an ACN (which we instantiate by a mix server). We build two applications on top of CoverUp: an anonymous feed and a chat. We show that both achieve practical performance and strong privacy guarantees. Towards a network-level attacker, CoverUp makes voluntary and involuntary participants indistinguishable, thereby providing an anonymity set that includes all voluntary and involuntary participants (i.e., all website visitors). Given this, CoverUp provides even more than mere anonymity: the voluntary participants can hide the very intention to use the ACN. As the concept of forced participation raises ethical and legal concerns, we discuss these concerns and describe how they can be addressed.
The first collision for full SHA-1
SHA-1 is a widely used 1995 NIST cryptographic hash function standard that was officially deprecated by NIST in 2011 due to fundamental security weaknesses demonstrated in various analyses and theoretical attacks. Despite its deprecation, SHA-1 remains widely used in 2017 for document and TLS certificate signatures, and also in much software such as the Git version control system for integrity and backup purposes.
A key reason behind the reluctance of many industry players to replace SHA-1 with a safer alternative is the fact that finding an actual collision has seemed to be impractical for the past eleven years due to the high complexity and computational cost of the attack.
In this paper, we demonstrate that SHA-1 collision attacks have finally become practical by providing the first known instance of a collision.
Furthermore, the prefix of the colliding messages was carefully chosen so that they allow an attacker to forge two distinct PDF documents with the same SHA-1 hash that display different arbitrarily-chosen visual contents.
We were able to find this collision by combining many special cryptanalytic techniques in complex ways and improving upon previous work. In total the computational effort spent is equivalent to 2^{63.1} calls to SHA-1's compression function, and took approximately 6,500 CPU years and 100 GPU years. While the computational power spent on this collision is larger than other public cryptanalytic computations, it is still more than 100,000 times faster than a brute force search.
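The result can be checked directly against the two proof-of-concept PDFs released alongside the paper; a minimal verification script (the file names are placeholders for the two released documents):

```python
import hashlib

def digests(path):
    data = open(path, "rb").read()
    return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

# Placeholders for the two colliding PDF documents released with the paper.
sha1_a, sha256_a = digests("shattered-1.pdf")
sha1_b, sha256_b = digests("shattered-2.pdf")

assert sha1_a == sha1_b        # identical SHA-1 digests ...
assert sha256_a != sha256_b    # ... yet the files differ, as their SHA-256 digests show
print("SHA-1 collision confirmed:", sha1_a)
```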
Global-Scale Secure Multiparty Computation
We propose a new, constant-round protocol for multi-party computation of boolean circuits that is secure against an arbitrary number of malicious corruptions. At a high level, we extend and generalize recent work of Wang et al. in the two-party setting and design an efficient preprocessing phase that allows the parties to generate authenticated information; we then show how to use this information to distributively construct a single ``authenticated'' garbled circuit that is evaluated by one party.
Our resulting protocol improves upon the state-of-the-art both asymptotically and concretely. We validate these claims via several experiments demonstrating both the efficiency and scalability of our protocol:
- Efficiency: For three-party computation over a LAN, our protocol requires only 95 ms to evaluate AES. This is roughly a 700x improvement over the best prior work, and only 2.5x slower than the best known result in the two-party setting. In general, our protocol improves upon prior work (which was never implemented) by a factor that grows with the number of parties, e.g., an improvement of 3 orders of magnitude for 5-party computation.
- Scalability: We successfully executed our protocol with a large number of parties located all over the world, computing (for example) AES with 128 parties across 5 continents in under 3 minutes. Our work represents the largest-scale demonstration of secure computation to date.
Division Cryptanalysis of Block Ciphers with a Binary Diffusion Layer
Uncategorized
Uncategorized
In this paper, we propose an accurate security evaluation methodology for block ciphers with a binary diffusion layer against division cryptanalysis. We illustrate the division property by the independence of variables, and exploit a one-to-one mapping between division trails and invertible sub-matrices. We give a new way to model the propagation of the division property of linear diffusion layers by the smallest number of inequalities, which are generated from linear combinations of row vectors of the diffusion matrix. The solutions of these inequalities are exactly the division trails of the linear transformation. Hence the description is compact and optimal.
As applications of our methodology, we first present a 10-round integral distinguisher for Skinny, proposed at CRYPTO 2016, which is one round longer than that found by using the previous method. For Midori, proposed at ASIACRYPT 2015, the designers had obtained a 3.5-round integral characteristic. Surprisingly, we find 7-round integral distinguishers for both Midori64 and Midori128.
Most importantly, we obtain the longest integral distinguishers for block ciphers with a binary diffusion layer. It seems that any more improvement of this kind of integral distinguishers using the division property is impossible. Therefore, the technique can be used to prove security against division cryptanalysis, and we can hopefully expect it to become a useful technique for designers.
The discrete logarithm problem over prime fields: the safe prime case. The Smart attack, non-canonical lifts and logarithmic derivatives
In this brief note we connect the discrete logarithm problem over prime fields in the safe prime case to the logarithmic derivative.
A Post-Quantum Digital Signature Scheme Based on Supersingular Isogenies
We present the first general-purpose digital signature scheme based on supersingular elliptic curve isogenies secure against quantum adversaries in the quantum random oracle model with small key sizes. This scheme is an application of Unruh’s construction of non-interactive zero-knowledge proofs to an interactive zero-knowledge proof proposed by De Feo, Jao, and Plût. We implement our proposed scheme on an x86-64 PC platform as well as an ARM-powered device. We exploit the state-of-the-art techniques to speed up the computations for general C and assembly. Finally, we provide timing results for real world applications.
A Virtual Wiretap Channel for Secure Message Transmission
Uncategorized
Uncategorized
In the Wyner wiretap channel, a sender is connected to a receiver and an eavesdropper through two noisy channels. It has been shown that if the noise in the eavesdropper's channel is higher than in the receiver's channel, information-theoretically secure communication from Alice to Bob, without requiring a shared key, is possible. The approach is particularly attractive noting the rise of quantum computers and the possibility of the complete collapse of today's cryptographic infrastructure. If the eavesdropper's channel is noise free, however, no secrecy can be obtained. The iJam protocol, proposed by Gollakota and Katabi, is an interactive protocol over noise-free channels that uses friendly jamming by the receiver to establish an information-theoretically secure shared key between the sender and the receiver. The protocol relies on the Basic iJam Transmission Protocol (BiT protocol), which uses properties of OFDM (Orthogonal Frequency-Division Multiplexing) to create uncertainty for Eve (hence a noisy view) in receiving the sent information, and uses this uncertainty to construct a secure key agreement protocol. The protocol has been implemented and evaluated using extensive experiments that examine the best eavesdropper reception strategy. In this paper, we develop an abstract model for the BiT protocol as a wiretap channel and refer to it as a virtual wiretap channel. We estimate the parameters of this virtual wiretap channel, derive the secrecy capacity of this channel and design a secure message transmission protocol with provable semantic security using the channel. Our analysis and protocol give a physical layer security protocol, with provable security, that is implementable in practice (the BiT protocol has already been implemented).
Linking Online Misuse-Resistant Authenticated Encryption and Blockwise Attack Models
Real-world applications of authenticated encryption often require the encryption to be computable online, e.g. to compute the i-th block of ciphertext after having processed the first i blocks of plaintext. A significant line of research was dedicated to identifying security notions for online authenticated encryption schemes that capture various security goals related to real-life scenarios. Fouque, Joux, Martinet and Valette proposed definitions of privacy and integrity against adversaries that can query their oracles in a blockwise-adaptive manner, to model memory-constrained applications. A decade later, Fleischmann, Forler and Lucks proposed the notion of online nonce misuse-resistant authenticated encryption (OAE) to capture the security of online authenticated encryption under nonce reuse.
In this work we investigate the relation between these notions. We first recast the blockwise notions of Fouque et al. to make them compatible with online authenticated encryption schemes that support headers. We then show that OAE and the conjunction of the blockwise notions are "almost" equivalent. We identify the missing property on the side of blockwise notions, and formalize it under the name PRTAG. With PRTAG being just an auxiliary definition, the equivalence we finally show suggests that OAE and the blockwise model for online authenticated encryption capture essentially the same notion of security.
Analysis of Software Countermeasures for Whitebox Encryption
Uncategorized
Uncategorized
Whitebox cryptography aims to ensure the security of cryptographic algorithms in the whitebox model where the adversary has full access to the execution environment. To attain security in this setting is a challenging problem: Indeed, all published whitebox implementations of standard symmetric-key algorithms such as AES to date have been practically broken. However, as far as we know, no whitebox implementation in real-world products has suffered from a key recovery attack. This is due to the fact that commercial products deploy additional software protection mechanisms on top of the whitebox implementation. This makes practical attacks much less feasible in real-world applications.
There are numerous software protection mechanisms which protect against standard whitebox attacks. One such technique is control flow obfuscation, which randomizes the order of table lookups for each execution of the whitebox encryption module. Another technique is randomizing the locations of the various lookup tables (LUTs) in the memory address space. In this paper we investigate the effectiveness of these countermeasures against two attack paradigms. The first, known as the Differential Computational Analysis (DCA) attack, was developed by Bos, Hubain, Michiels and Teuwen at CHES 2016. The attack passively collects software execution traces for several plaintext encryptions and uses the collected data to perform an analysis similar to the well-known differential power attacks (DPA) to recover the secret key. Since the software execution traces contain time-demarcated physical addresses of memory locations being read/written into, they essentially leak the values of the inputs to the various LUTs accessed during the whitebox encryption operation, which, as it turns out, leaks sufficient information to perform the power attack. We found that if, in addition to control flow obfuscation, one were to randomize the locations of the LUTs in memory, then it is very difficult to perform the DCA on the resultant system using such table inputs and extract the secret key in reasonable time. As an alternative, we investigate the version of the DCA attack which uses the outputs of the tables instead of the inputs to mount the power analysis attack. This modified DCA is able to extract the secret key from the flow-obfuscated and location-randomized versions of several whitebox binaries available in the crypto literature.
We develop another attack called the Zero Difference Enumeration (ZDE) attack. The attack records software traces for several pairs of strategically selected plaintexts and performs a simple statistical test on the effective difference of the traces to extract the secret key. We show that ZDE is able to recover the keys of whitebox systems. Finally we propose a new countermeasure for protecting whitebox binaries based on insertion of random delays which aims to make both the ZDE and DCA attacks practically difficult by adding random noise in the information leaked to the attacker.
The Approximate k-List Problem
We study a generalization of the k-list problem, also known as the Generalized Birthday problem. In the k-list problem, one starts with k lists of binary vectors and has to find a set of k vectors -- one from each list -- that sum to the all-zero target vector. In our generalized Approximate k-list problem, one has to find a set of k vectors that sum to a vector of small Hamming weight ω. Thus, we relax the condition on the target vector and allow for some error positions.
This in turn helps us to significantly reduce the size of the starting lists, which determines the memory consumption, and the running time as a function of ω. For ω = 0, our algorithm achieves the original k-list run-time/memory consumption, whereas for ω = n/2 it has polynomial complexity.
As in the k-list case, our Approximate k-list algorithm is defined for all k = 2^m. Surprisingly, we also find an Approximate 3-list algorithm that improves in the run-time exponent compared to its 2-list counterpart for a large range of ω. To the best of our knowledge this is the first such improvement of some variant of the notoriously hard 3-list problem.
As an application of our algorithm we compute small weight multiples of a given polynomial with more flexible degree than with Wagner's algorithm from Crypto 2002 and with smaller memory consumption than with Minder and Sinclair's algorithm from SODA 2009.
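To pin down the problem statement (this is only our illustration of the definition, not the paper's algorithm), here is a naive quadratic-time solver for the approximate 2-list case: given two lists of n-bit vectors, find a pair whose XOR has Hamming weight at most ω.

```python
from itertools import product

def approx_two_list(L1, L2, omega):
    """Naive approximate 2-list matching: return a pair (x, y) from L1 x L2 with
    Hamming weight wt(x ^ y) <= omega, or None.  Vectors are given as integers."""
    for x, y in product(L1, L2):
        if bin(x ^ y).count("1") <= omega:
            return x, y
    return None

# Example with 8-bit vectors and weight threshold omega = 1:
print(approx_two_list([0b10110100, 0b01011001], [0b10110110, 0b11111111], 1))
# -> (180, 182): the two vectors differ in a single bit position
```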
New techniques for trail bounds and application to differential trails in Keccak
We present new techniques to efficiently scan the space of high-probability differential trails in bit-oriented ciphers. Differential trails consist of sequences of state patterns that we represent as ordered lists of basic components in order to arrange them in a tree. The task of generating trails with probability above some threshold starts with the traversal of the tree. Our choice of basic components allows us to efficiently prune the tree, based on the fact that we can tightly bound the probability of all descendants of any node. Then we extend the state patterns resulting from the tree traversal into longer trails using similar bounding techniques.
We apply these techniques to the 4 largest Keccak-f permutations, for which we are able to scan the space of trails with weight per round of 15. This space is orders of magnitude larger than the previously best published result on Keccak-f[1600], which reached 12, which in turn is orders of magnitude larger than any published results achieved with standard tools, which reached at most 9. As a result we provide new and improved bounds for the minimum weight of differential trails on 3, 4, 5 and 6 rounds. We also report on new trails that are, to the best of our knowledge, the ones with the highest known probability.
Robust P2P Primitives Using SGX Enclaves
Peer-to-peer (P2P) systems such as BitTorrent and Bitcoin are susceptible to serious attacks from byzantine nodes that join as peers. Research has explored many adversarial models with additional assumptions, ranging from mild (such as pre-established PKI) to strong (such as the existence of common random coins). One such widely-studied model is the general-omission model, which yields simple protocols with good efficiency, but has been considered impractical or unrealizable since it artificially limits the adversary only to omitting messages.
In this work, we study the setting of a synchronous network wherein peer nodes have CPUs equipped with a recent trusted computing mechanism called Intel SGX. In this model, we observe that the byzantine adversary reduces to the adversary in the general-omission model. As a first result, we show that by leveraging SGX features, we eliminate any source of advantage for a byzantine adversary beyond that gained by omitting messages, making the general-omission model realizable. Second, we present new protocols that improve the communication complexity of two fundamental primitives — reliable broadcast and common random coins (or beacons) — in the synchronous setting, by utilizing SGX features. Our evaluation of 1000 nodes running on 40 DeterLab machines confirms our theoretical efficiency claims.
REM: Resource-Efficient Mining for Blockchains
Blockchains show promise as potential infrastructure for financial transaction systems. The security of blockchains today, however, relies critically on Proof-of-Work (PoW), which forces participants to waste computational resources.
We present REM (Resource-Efficient Mining), a new blockchain mining framework that uses trusted hardware (Intel SGX).
REM achieves security guarantees similar to PoW, but leverages the partially decentralized trust model inherent in SGX to achieve a fraction of the waste of PoW. Its key idea, Proof-of-Useful-Work (PoUW), involves miners providing trustworthy reporting on CPU cycles they devote to inherently useful workloads. REM flexibly allows any entity to create a useful workload. REM ensures the trustworthiness of these workloads by means of a novel scheme of hierarchical attestations that may be of independent interest.
To address the risk of compromised SGX CPUs, we develop a statistics-based formal security framework, also relevant to other trusted-hardware-based approaches such as Intel's Proof of Elapsed Time (PoET).
We show through economic analysis that REM achieves less waste than PoET and variant schemes.
We implement REM and, as an example application, swap it into the consensus layer of Bitcoin core.
The result is the first full implementation of an SGX-based blockchain.
We experiment with four example applications as useful workloads for our implementation of REM, and report a modest computational overhead.
Optimal Differential Trails in SIMON-like Ciphers
In the present paper, we propose an automatic search algorithm for optimal differential trails in SIMON-like ciphers. First, we give a more accurate upper bound on the differential probability of the SIMON-like round function: when the Hamming weight of the input difference is less than one half of the input size, the corresponding maximum differential probability of the SIMON-like round function is bounded accordingly. Based on this, we adapt Matsui's algorithm and propose an efficient algorithm for searching for optimal differential trails. With the proposed algorithm, we find provably optimal differential trails for reduced-round versions of SIMON, and, to the best of our knowledge, it is the first time that provably optimal differential trails are reported for several of these SIMON variants. Provably optimal differential trails for reduced-round SIMECK are also found, which confirm the results given by Kölbl et al. \cite{KolblR15}. Besides the optimal differential trails, we also find differentials covering more rounds for SIMON and SIMECK. As far as we know, these are the best differential distinguishers for SIMON and SIMECK so far. Compared with the approach based on SAT/SMT solvers used by Kölbl et al., our algorithm is more efficient and more practical for evaluating the security against differential cryptanalysis in the design of SIMON-like ciphers.
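For readers unfamiliar with the target, the SIMON round function has the well-known AND-of-rotations Feistel structure sketched below (standard description; "SIMON-like" ciphers such as SIMECK keep the structure but change the rotation amounts).

```python
def rotl(x, r, w):
    """Rotate the w-bit word x left by r positions."""
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1) if r else x

def simon_like_round(left, right, round_key, w=16, rot=(1, 8, 2)):
    """One Feistel round on a pair of w-bit words.
    SIMON uses rotation amounts (1, 8, 2); SIMECK uses (0, 5, 1)."""
    a, b, c = rot
    f = (rotl(left, a, w) & rotl(left, b, w)) ^ rotl(left, c, w)
    return right ^ f ^ round_key, left
```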
Some results on the existence of t-all-or-nothing transforms over arbitrary alphabets
A (t, s, v)-all-or-nothing transform is a bijective mapping defined on s-tuples over an alphabet of size v, which satisfies the condition that the values of any t input co-ordinates are completely undetermined, given only the values of any s-t output co-ordinates. The main question we address in this paper is: for which choices of parameters does a (t, s, v)-all-or-nothing transform (AONT) exist? More specifically, if we fix t and v, we want to determine the maximum integer s such that a (t, s, v)-AONT exists. We mainly concentrate on the case t = 2 for arbitrary values of v, where we obtain various necessary as well as sufficient conditions for the existence of these objects. We consider both linear and general (linear or nonlinear) AONT. We also show some connections between AONTs, orthogonal arrays and resilient functions.
Probabilistically Checkable Proofs of Proximity with Zero-Knowledge
A Probabilistically Checkable Proof (PCP) allows a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form "x ∈ L" by querying only a few bits of the proof. A PCP of proximity (PCPP) has the additional feature of allowing the verifier to query only a few bits of the input, where if the input is accepted then the verifier is guaranteed that (with high probability) the input is close to some x' ∈ L.
Motivated by their usefulness for sublinear-communication cryptography, we initiate the study of a natural zero-knowledge variant of PCPP (ZKPCPP), where the view of any verifier making a bounded number of queries can be efficiently simulated by making the same number of queries to the input oracle alone. This new notion provides a useful extension of the standard notion of zero-knowledge PCPs. We obtain two types of results.
1. Constructions. We obtain the first constructions of query-efficient ZKPCPPs via a general transformation which combines standard query-efficient PCPPs with protocols for secure multiparty computation. As a byproduct, our construction provides a conceptually simpler alternative to a previous construction of honest-verifier zero-knowledge PCPs due to Dwork et al. (Crypto '92).
2. Applications. We motivate the notion of ZKPCPPs by applying it towards sublinear-communication implementations of commit-and-prove functionalities. Concretely, we present the first sublinear-communication commit-and-prove protocols which make a black-box use of a collision-resistant hash function, and the first such multiparty protocols which offer information-theoretic security in the presence of an honest majority.
Analysis of Burn-in period for RC4 State Transition
The internal state of the RC4 stream cipher is a permutation over {0, 1, ..., N-1} and its state transition is effectively a transposition, or swapping, of two elements. How the randomness of the RC4 state evolves due to its state transitions has been studied for many years. As the number of swaps increases, the state comes closer to a uniformly random permutation. We define the burn-in period of the RC4 state transition as the number of swaps required to make the state very close to a uniformly random permutation under some suitably defined distance measure. Earlier, Mantin in his Master's thesis (2001) performed an approximate analysis of the burn-in period. In this paper, we perform a rigorous analysis of the burn-in period and in the process derive the exact distribution of the RC4 state elements at any stage.
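The state transition in question is the familiar one-swap-per-step update of RC4; the minimal sketch below (standard RC4 with N = 256) can be used to observe empirically how the state approaches a uniformly random permutation as the number of swaps grows.

```python
def rc4_state_after(key, num_swaps, N=256):
    """Run RC4's key-scheduling algorithm (KSA) and then num_swaps PRGA steps,
    returning the internal state S.  Every step swaps exactly two elements of S."""
    S = list(range(N))
    j = 0
    for i in range(N):                       # KSA: N swaps
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    i = j = 0
    for _ in range(num_swaps):               # PRGA state transitions
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
    return S

# Example: state after 512 PRGA swaps under a toy key.
S = rc4_state_after(key=[1, 2, 3, 4, 5], num_swaps=512)
```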
Cost-Aware Cut-and-Choose Games with Applications in Cryptography and Prefix-Free Codes
The cost-aware cut-and-choose game is a fundamental technique that has many cryptographic applications. The best existing solutions to this game assumed, for simplicity, that the number of challenges is publicly known. This paper considers an extension of this game where the number of challenges can be picked probabilistically and hidden from the adversary. Although this small change leads to a linear program with infinitely many variables and constraints, we discover a surprisingly efficient solver using only O(n^2) space and O(n^2) time, where n is the upper bound on the number of challenges allowed in the strategy space. We then prove that n is bounded for any fixed cost ratio, implying that the optimality of our solution extends to the strategy space that allows any number of challenges. As two interesting applications of our game solver, we demonstrate its value in constructing an actively secure two-party computation protocol and an optimal prefix-free code for encryptions.
Speeding up detection of SHA-1 collision attacks using unavoidable attack conditions
Counter-cryptanalysis, the concept of using cryptanalytic techniques to detect cryptanalytic attacks, was first introduced by Stevens at CRYPTO 2013 with a hash collision detection algorithm. That is, an algorithm that detects whether a given single message is part of a colliding message pair constructed using a cryptanalytic collision attack on MD5 or SHA-1. The concept's utility was proven when it was used to expose the then-unknown cryptanalytic collision attack exploited by the Flame espionage supermalware.
So far there is a significant cost: to detect collision attacks against SHA-1 (respectively MD5) costs the equivalent of hashing the message 15 (respectively 224) times. In this paper we present a significant performance improvement for collision detection based on the new concept of unavoidable conditions. Unavoidable conditions are conditions that are necessary for all feasible attacks in a certain attack class. As such they can be used to quickly dismiss particular attack classes that may have been used in the construction of the message. To determine an unavoidable condition one must rule out any feasible variant attack where this condition might not be necessary, otherwise adversaries aware of counter-cryptanalysis could easily bypass this improved collision detection with a carefully chosen variant attack. We provide a formal model for unavoidable conditions for collision attacks on MD5-like compression functions.
Furthermore, based on a conjecture solidly supported by the current state of the art, we show how we can determine such unavoidable conditions for SHA-1. We have implemented the improved SHA-1 collision detection using such unavoidable conditions, and it is about 16 times faster than without our unavoidable-condition improvements. We have measured that, overall, our implementation of SHA-1 with collision detection is only a factor 1.96 slower, on average, than SHA-1. Our work is very timely given the recently announced first SHA-1 collision, which proves that SHA-1 is now practically broken.
On The Exact Security of Message Authentication Using Pseudorandom Functions
Traditionally, modes of Message Authentication Codes (MAC) such as Cipher Block Chaining (CBC) are instantiated using block ciphers or keyed Pseudo Random Permutations (PRP). However, one can also use domain-preserving keyed Pseudo Random Functions (PRF) to instantiate MAC modes. The very first security proof of CBC-MAC essentially modeled the PRP as a PRF. Until now very little work has been done to investigate the difference between PRP and PRF instantiations. The only known result is the rather loose folklore PRP-PRF transition of any PRP-based security proof, which loses a factor of σ^2/2^n (the domain of the PRF/PRP is {0,1}^n and the adversary makes σ many PRP/PRF calls in total). This loss is significant, considering the fact that tight security bounds, in terms of the total number q of adversary queries, have been known for the PRP-based EMAC and ECBC constructions.
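The folklore transition mentioned above is the standard PRP-PRF switching lemma (our restatement of a well-known fact, not a bound from this paper): for any distinguisher A making at most σ queries to an n-bit primitive,

```latex
\left|\,\Pr\!\left[\mathcal{A}^{\pi} = 1\right] - \Pr\!\left[\mathcal{A}^{\rho} = 1\right]\right|
\;\le\; \binom{\sigma}{2} \cdot 2^{-n} \;\le\; \frac{\sigma^{2}}{2^{n+1}},
```

where π is a random permutation and ρ a random function on n bits; applying it to a PRP-based proof therefore adds a birthday-type term in the total number of primitive calls σ rather than in the number of MAC queries q.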
In this work, we show that for many variations of encrypted CBC MACs (i.e. EMAC, ECBC, FCBC, XCBC and TCBC), the random function based instantiation has a security bound that significantly improves over the folklore PRP/PRF transition. We also show this bound is optimal by providing an attack against the underlying PRF-based CBC construction. This shows that for EMAC, ECBC and FCBC, PRP instantiations are substantially more secure than PRF instantiations, whereas for XCBC and TMAC, PRP instantiations are at least as secure as PRF instantiations.
Quantum Key Search with Side Channel Advice
Recently, a number of results have been published that show how to combine classical cryptanalysis with quantum algorithms, thereby (potentially) achieving considerable speed-ups. We follow this trend but add a novel twist by considering how to utilise side channel leakage in a quantum setting.
We show how to 'rewrite' an existing algorithm for computing the rank of a key after a side channel attack, such that it results in an enumeration algorithm that produces batches of keys that can be tested using Grover's algorithm. This results in the first quantum key search that benefits from side channel information.
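A minimal sketch of the classical half of such a pipeline, assuming per-subkey scores from some side channel attack: candidates are ordered by a naive additive rank and emitted in fixed-size batches, each of which would then be handed to a (not implemented here) Grover search. Real key-rank and enumeration algorithms are considerably more refined than this toy ordering.

```python
from itertools import product

def batched_candidates(subkey_scores, batch_size):
    """subkey_scores: per-subkey dicts mapping candidate byte -> score
    (e.g. from a template or CPA attack). Yields full-key candidates
    ordered by a simple additive rank, in fixed-size batches that a
    quantum search could then test one batch at a time."""
    ranked = [sorted(s, key=s.get, reverse=True) for s in subkey_scores]
    batch = []
    # Naive enumeration by total rank; this materializes all combinations,
    # so it is only usable for toy-sized key spaces.
    for combo in sorted(product(*ranked),
                        key=lambda c: sum(r.index(x) for r, x in zip(ranked, c))):
        batch.append(bytes(combo))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

scores = [{0x3A: 0.9, 0x10: 0.4}, {0x7F: 0.8, 0x01: 0.1}]
for b in batched_candidates(scores, batch_size=2):
    print([k.hex() for k in b])
```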
Error-free protection of EC point multiplication by modular extension
An implementation of a point multiplication function in an elliptic-curve cryptosystem can be attacked by fault injections in order to reveal the secret multiplier. A special kind of such an attack is the sign-change fault attack, in which the result of a point multiplication is changed in such a way that it is still a point on the curve. A well-known countermeasure against this kind of attack is to perform the point multiplication on a modular extension of the main curve by a small curve, and then to check the result against the result of the same point multiplication recalculated on the small curve. The problem with this countermeasure is that the point at infinity on the small curve may be reached as an intermediate result with non-negligible probability; in this case the comparison with the result on the small curve is either faulty or meaningless. We propose a variant of the modular extension countermeasure in which the point at infinity is never reached as an intermediate result, either on the main or on the small curve.
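The structure of the modular extension check can be illustrated compactly with modular exponentiation in place of EC point multiplication (writing out the elliptic-curve group law would exceed a short sketch, and the paper's specific guarantee about never reaching the point at infinity is inherent to the curve setting). The toy parameters below are illustrative assumptions.

```python
# Toy illustration of the modular-extension countermeasure, shown for modular
# exponentiation rather than EC point multiplication; the control flow of the
# countermeasure (compute over the extended ring, verify modulo a small prime)
# is the same.

def protected_exp(g, k, p, r):
    """Compute g^k mod p, with a consistency check modulo a small prime r."""
    n = p * r
    y = pow(g, k, n)            # single computation over the extended ring Z_{p*r}
    check = pow(g % r, k, r)    # cheap recomputation over the small ring Z_r
    if y % r != check:          # a fault during the main computation is very
        raise RuntimeError("fault detected")  # likely to break this congruence
    return y % p

print(protected_exp(g=5, k=123456789, p=(1 << 31) - 1, r=65_537))
```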
UFace: Your Universal Password That No One Can See
With the advantage of not having to memorize long passwords, people are increasingly interested in adopting face authentication on mobile devices. However, since facial images are widely shared on social networking sites, it becomes challenging to securely employ face authentication for web services while preventing attackers from impersonating legitimate users with those users' online face photos. Moreover, existing face authentication protocols either require users to disclose their unencrypted facial images to the authentication server or require users to execute computationally expensive secure multiparty computation protocols. For mobile devices with limited computational power, this is a problem that cannot be overlooked. In this paper, we present a novel privacy-preserving face authentication system, called UFace, which has users take close-up facial images for authentication in order to prevent impersonation attacks based on users' online facial images. UFace also guarantees that the facial images are only seen by the users and not by any other party (e.g., web service providers and authentication servers). UFace was implemented through two facets: an Android client application to obtain and encrypt the feature vector of the user's facial image, and server code to securely authenticate a feature vector across multiple servers. The experimental results demonstrate that UFace not only can correctly authenticate a user, but can also do so within seconds, which is significantly faster than any existing privacy-preserving authentication protocol.
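UFace's actual protocol is not reproduced here, but the general idea of letting several servers authenticate a biometric feature vector without any single party seeing it can be illustrated with plain additive secret sharing over a prime field; the modulus and vector values below are illustrative assumptions.

```python
import secrets

Q = 2**61 - 1  # toy prime modulus for additive secret sharing

def share(vector, n_servers=2):
    """Split a feature vector into n additive shares; no single server
    learns anything about the vector from its own share."""
    shares = [[secrets.randbelow(Q) for _ in vector] for _ in range(n_servers - 1)]
    last = [(v - sum(col)) % Q for v, col in zip(vector, zip(*shares))]
    return shares + [last]

def reconstruct(shares):
    return [sum(col) % Q for col in zip(*shares)]

feature_vector = [12, 900, 331, 7]          # toy "facial feature" values
s = share(feature_vector, n_servers=3)
assert reconstruct(s) == feature_vector     # all shares together recover the vector
```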
AES-GCM-SIV: Specification and Analysis
In this paper, we describe and analyze the security of the AES-GCM-SIV mode of operation, as defined in the CFRG specification \cite{CFRG}. This mode differs from the original GCM-SIV mode that was designed in \cite{GL2015} in two main aspects. First, the CTR encryption uses a 127-bit pseudo-random counter instead of a 95-bit pseudo-random value concatenated with a 32-bit counter. This construction leads to improved security bounds when encrypting short messages. In addition, a new key derivation function is used for deriving a fresh set of keys for each nonce. This addition allows for encrypting up to $2^{50}$ messages with the same key, compared to the significant limitation of only $2^{32}$ messages that were allowed with GCM-SIV (which inherited this same limit from AES-GCM). As a result, the new construction is well suited for real-world applications that need a nonce-misuse resistant Authenticated Encryption scheme. We explain the limitations of GCM-SIV, which motivate the new construction, prove the security properties of AES-GCM-SIV, and show how these properties support real usages. Implementations are publicly available in \cite{ShayGit}. We remark that AES-GCM-SIV is already integrated into Google's BoringSSL library \cite{BoringSSL}, and its deployment for ticket encryption in QUIC \cite{QUIC} is underway.
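The per-nonce key derivation is the easiest part to illustrate. The sketch below only conveys the idea of deriving a fresh authentication key and encryption key from the long-term key and the nonce; it uses HMAC purely for brevity, whereas the actual specification (RFC 8452) derives the keys with AES in a fixed block layout.

```python
import hmac, hashlib

def derive_per_nonce_keys(master_key: bytes, nonce: bytes):
    """Illustrative only: derive a fresh (auth_key, enc_key) pair per nonce
    with a PRF. AES-GCM-SIV itself uses AES-based derivation with a fixed
    block layout (RFC 8452), not HMAC; the point is that every nonce gets
    its own keys, which is what improves the usage bounds."""
    auth_key = hmac.new(master_key, b"auth" + nonce, hashlib.sha256).digest()[:16]
    enc_key  = hmac.new(master_key, b"enc"  + nonce, hashlib.sha256).digest()[:32]
    return auth_key, enc_key

ak, ek = derive_per_nonce_keys(b"K" * 32, b"N" * 12)
print(ak.hex(), ek.hex())
```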
Cloud Storage File Recoverability
Data loss is perceived as one of the major threats for cloud storage. Consequently, the security community has developed several challenge-response protocols that allow a user to remotely verify whether an outsourced file is still intact. However, two important practical problems have not yet been considered. First, clients commonly outsource multiple files of different sizes, raising the question of how to formalize such a scheme and, in particular, how to ensure that all files can be audited simultaneously. Second, in case auditing of the files fails, existing schemes do not provide the client with any method to prove whether the original files are still recoverable.
We address both problems and describe appropriate solutions. The first problem is tackled by providing a new type of "Proofs of Retrievability" scheme, enabling a client to check all files simultaneously in a compact way. The second problem is solved by defining a novel procedure called "Proofs of Recoverability", enabling a client to obtain assurance of whether a file is recoverable or irreparably damaged. Finally, we present a combination of both schemes, allowing the client to check the recoverability of all her original files, thus ensuring cloud storage file recoverability.
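For orientation, a plain challenge-response spot-check (per-block MACs, random block challenges) looks as follows; this is a generic audit sketch, not the compact Proofs of Retrievability or the Proofs of Recoverability procedure defined in the paper, and all names and parameters are illustrative.

```python
import hmac, hashlib, secrets

BLOCK = 4096

def prepare(key: bytes, data: bytes):
    """Client side: split the file into blocks and tag each block before upload."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags

def audit(key: bytes, stored_blocks, stored_tags, n_challenges=3):
    """Client side: challenge random block indices and verify the returned
    block/tag pairs. A server that lost data fails with probability growing
    in the number of challenges."""
    for _ in range(n_challenges):
        i = secrets.randbelow(len(stored_blocks))
        expected = hmac.new(key, i.to_bytes(8, "big") + stored_blocks[i],
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, stored_tags[i]):
            return False
    return True

k = b"k" * 32
blocks, tags = prepare(k, b"some outsourced file contents" * 1000)
print(audit(k, blocks, tags))  # True while the server stores everything intact
```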
A roadmap to fully homomorphic elections: Stronger security, better verifiability
After the trials of remote internet voting in local elections in 2011 and parliamentary elections in 2013, a number of local referendums have renewed interest in internet voting in Norway.
The voting scheme used in Norway is not quantum-safe and it has limited voter verifiability. In this case study, we consider how we can use fully homomorphic encryption to construct a quantum-safe voting scheme with better voter verifiability.
While fully homomorphic cryptosystems are not yet efficient enough for the system we sketch to be implemented and run today, we expect future improvements in fully homomorphic encryption to eventually make these techniques practical.
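The tally-then-decrypt pattern such a scheme would rely on can be shown in miniature with Paillier encryption, which is additively homomorphic but neither fully homomorphic nor quantum-safe; it stands in here only to illustrate how encrypted ballots are aggregated without decrypting any individual vote. The primes and values are toy assumptions.

```python
from math import gcd
import secrets

# Toy additively homomorphic tally using Paillier (NOT fully homomorphic and
# NOT quantum-safe); it only illustrates the "add ciphertexts, decrypt the sum"
# tallying pattern that an FHE-based scheme would also follow.
p, q = 104729, 1299709               # toy primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add(c1, c2):                      # homomorphic addition of plaintexts
    return (c1 * c2) % n2

ballots = [enc(v) for v in (1, 0, 1, 1, 0)]   # 1 = "yes", 0 = "no"
tally_ct = ballots[0]
for b in ballots[1:]:
    tally_ct = add(tally_ct, b)
print(dec(tally_ct))                  # 3 yes-votes, computed on ciphertexts only
```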
SymSum: Symmetric-Sum Distinguishers Against Round Reduced SHA3
In this work we show the existence of special sets of inputs for which the sum of the images under SHA3 exhibits a symmetric property. We develop an analytical framework which accounts for the existence of these sets. The framework constitutes identification of a generic property of iterated SPN-based functions pertaining to the round-constant addition, and combines it with the notion of fold vectorial derivatives for differentiation over specially selected subspaces. Based on this we propose a new distinguisher called SymSum for the SHA3 family which penetrates up to 9 rounds and outperforms the ZeroSum distinguisher by a factor of four. Interestingly, the current work is the first analysis of SHA3/Keccak that relies on round-constants but is independent of their Hamming weights.
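The generic shape of such a distinguisher, i.e. summing the images over a structured set of inputs and inspecting the result, can be sketched as below. The interesting behaviour only appears for round-reduced Keccak, which hashlib does not expose, so the full-round sha3_256 used here is merely a stand-in for a hypothetical reduced-round implementation.

```python
from functools import reduce
from itertools import combinations
import hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def subspace(base: bytes, basis_masks):
    """All inputs of the form base XOR (any combination of the basis masks)."""
    out = []
    for r in range(len(basis_masks) + 1):
        for combo in combinations(basis_masks, r):
            out.append(reduce(xor_bytes, combo, base))
    return out

def sym_sum(hash_fn, base, basis_masks):
    """XOR of the images over the chosen input subspace; a distinguisher
    would inspect this sum for the structural property described above."""
    return reduce(xor_bytes, (hash_fn(x) for x in subspace(base, basis_masks)))

# Full-round sha3_256 is only a placeholder for a round-reduced variant here.
digest_sum = sym_sum(lambda m: hashlib.sha3_256(m).digest(),
                     base=b"\x00" * 8,
                     basis_masks=[bytes([1 << i] + [0] * 7) for i in range(3)])
print(digest_sum.hex())
```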
Conditional Disclosure of Secrets: Amplification, Closure, Amortization, Lower-bounds, and Separations
In the \emph{conditional disclosure of secrets} problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold inputs $x$ and $y$ respectively, wish to release a common secret $s$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate $f$. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some joint randomness, and the goal is to minimize the communication complexity while providing information-theoretic security.
Following Gay, Kerenidis, and Wee (Crypto 2015), we study the communication complexity of CDS protocols and derive the following positive and negative results.
1. *Closure* A CDS for a predicate $f$ can be turned into a CDS for its complement $\bar{f}$ with only a minor blow-up in complexity. More generally, for a (possibly non-monotone) predicate expressed as a formula over simpler predicates, we obtain a CDS whose cost is essentially linear in the formula size and polynomial in the CDS complexity of the formula's atomic predicates.
2. *Amplification* It is possible to reduce the privacy and correctness error of a CDS from constant to $2^{-k}$ with a multiplicative overhead of $O(k)$. Moreover, this overhead can be amortized over $k$-bit secrets.
3. *Amortization* Every predicate over $n$-bit inputs admits a CDS for multi-bit secrets whose amortized communication complexity per secret bit grows linearly with the input length for sufficiently long secrets. In contrast, the best known upper-bound for single-bit secrets is exponential in $n$.
4. *Lower-bounds* There exists a (non-explicit) predicate over $n$-bit inputs for which any perfect (single-bit) CDS requires communication of at least $\Omega(n)$. This is an exponential improvement over the previously known lower-bound.
5. *Separations* There exists an (explicit) predicate whose CDS complexity is exponentially smaller than its randomized communication complexity. This matches a lower-bound of Gay et al., and, combined with another result of theirs, yields an exponential separation between the communication complexity of linear CDS and non-linear CDS. This is the first provable gap between the communication complexity of linear CDS (which captures most known protocols) and non-linear CDS.
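As a concrete instance of the CDS setting itself (not of any construction from this paper), the textbook one-round linear CDS for the equality predicate can be written in a few lines; the field modulus and the values of $x$, $y$ and the secret are illustrative.

```python
import secrets

P = 2**61 - 1  # prime field for the toy linear CDS

def cds_equality(x, y, secret):
    """Textbook one-round linear CDS for the equality predicate f(x, y) = [x == y].
    Alice and Bob share randomness (r, u); each sends one field element to Carol."""
    r = secrets.randbelow(P)
    u = secrets.randbelow(P)
    msg_alice = (r * x + u + secret) % P   # depends only on x, the randomness, secret
    msg_bob = (r * y + u) % P              # depends only on y and the randomness
    return msg_alice, msg_bob

def carol_recover(msg_alice, msg_bob):
    # Equals the secret iff x == y; otherwise it is masked by r*(x - y),
    # which is uniform over the field, so the secret stays hidden from Carol.
    return (msg_alice - msg_bob) % P

a, b = cds_equality(x=42, y=42, secret=1234)
print(carol_recover(a, b))          # 1234
a, b = cds_equality(x=42, y=43, secret=1234)
print(carol_recover(a, b))          # a uniformly random field element
```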
Homomorphic Encryption without Gaussian Noise
We propose a Somewhat Homomorphic Encryption (SHE) scheme based on the Learning With Rounding (LWR) problem. The LWR problem is similar to the more classical Learning With Errors (LWE) problem, of which it was proposed as a deterministic variant: setting up an LWR instance does not require the generation of Gaussian noise. Thus our SHE scheme can be instantiated without the need for expensive Gaussian noise sampling. Our initial scheme provides lower ciphertext sizes for small plaintext spaces than existing leading schemes such as BGV.
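The contrast with LWE is easy to see in code: generating an LWR sample needs nothing beyond uniform randomness and deterministic rounding. The toy parameters below are illustrative assumptions and nowhere near a secure instantiation of the proposed SHE scheme.

```python
import secrets

n, q, p = 16, 2**13, 2**9     # toy dimension and moduli (illustrative, not secure)

def lwr_sample(secret):
    """One LWR sample (a, b) with b = round((p/q) * <a, s>) mod p.
    No Gaussian sampling anywhere: the rounding itself hides <a, s>."""
    a = [secrets.randbelow(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, secret)) % q
    b = (inner * p + q // 2) // q % p   # deterministic rounding into Z_p
    return a, b

s = [secrets.randbelow(q) for _ in range(n)]
print(lwr_sample(s)[1])
```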