All papers in 2017 (Page 2 of 1262 results)
Kayawood, a Key Agreement Protocol
Public-key solutions based on number theory, including RSA, ECC, and Diffie-Hellman, are subject to various quantum attacks, which makes such solutions less attractive long term. Certain group-theoretic constructs, however, show promise in providing quantum-resistant cryptographic primitives because of the infinite, non-cyclic, non-abelian nature of the underlying mathematics. This paper introduces the Kayawood Key Agreement protocol (Kayawood, or Kayawood KAP), a new group-theoretic key agreement protocol that leverages the known NP-hard shortest word problem (among others) to provide an ElGamal-style, Diffie-Hellman-like method. This paper also (i) discusses the implementation and behavioral aspects of Kayawood, (ii) introduces new methods to obfuscate braids using Stochastic Rewriting, and (iii) analyzes and demonstrates Kayawood's security and resistance to known quantum attacks.
A Review of Existing 4-bit Crypto S-box cryptanalysis Techniques and Two New Techniques with 4-bit Boolean Functions for Cryptanalysis of 4-bit Crypto S-boxes.
4-bit linear relations play an important role in the cryptanalysis of 4-bit bijective crypto S-boxes, and 4-bit finite differences are likewise a major part of the cryptanalysis of 4-bit substitution boxes. Linear cryptanalysis of 4-bit S-boxes counts, over all 16 input and 16 output 4-bit bit patterns of a bijective crypto S-box (referred to simply as an S-box), how many 4-bit linear relations hold. Differential cryptanalysis of S-boxes counts the finite differences from each element of an output S-box to the distant output S-boxes. In this paper, a brief review of these cryptanalytic methods for 4-bit S-boxes is presented in a lucid and conceptual manner. Two new analysis techniques are also introduced. The first searches for the linear approximations that actually exist among the input Boolean functions (BFs) and output BFs of a particular 4-bit S-box; in contrast to counting the number of existing linear relations among all 16 4-bit input and output bit patterns over all possible linear approximations, the search is limited to finding the relations that exist. The second finds the number of balanced 4-bit BFs in the difference output S-boxes: the more balanced BFs, the better the security.
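The counting described above can be made concrete. The sketch below builds the linear approximation table (LAT) and difference distribution table (DDT) of an example 4-bit bijective S-box; the S-box values are an arbitrary illustration, not taken from the paper.

```python
# Toy linear/differential profiling of a 4-bit S-box (hypothetical example).
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]  # an assumed bijective S-box

def parity(x):
    """Parity of the bits of x (i.e., <mask, value> over GF(2))."""
    return bin(x).count("1") & 1

# Linear approximation table: LAT[a][b] counts inputs x with <a,x> = <b,S(x)>.
LAT = [[sum(parity(a & x) == parity(b & SBOX[x]) for x in range(16))
        for b in range(16)] for a in range(16)]

# Difference distribution table: DDT[dx][dy] counts x with
# S(x) XOR S(x XOR dx) = dy.
DDT = [[0] * 16 for _ in range(16)]
for x in range(16):
    for dx in range(16):
        DDT[dx][SBOX[x] ^ SBOX[x ^ dx]] += 1
```

Since the example S-box is bijective, every nonzero output mask is balanced, which shows up as LAT[0][b] = 8 for all nonzero b.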
A Practical Cryptanalysis of WalnutDSA
We present a practical cryptanalysis of WalnutDSA, a digital signature algorithm trademarked by SecureRF. WalnutDSA uses techniques from permutation groups, matrix groups, and braid groups, and is designed to provide post-quantum security in lightweight IoT device contexts. The attack given in this paper bypasses the E-Multiplication and cloaked conjugacy search problems at the heart of the algorithm and forges signatures for arbitrary messages in approximately two minutes. We also discuss potential countermeasures to the attack.
Cryptanalysis of indistinguishability obfuscation using GGH13 without ideals
Recently, Albrecht, Davidson and Larraia described a variant of GGH13 without ideals and presented distinguishing attacks in a simplified branching-program security model. Their result partially demonstrates that there seems to be a structural defect in the GGH13 encoding that is not related to the ideal ⟨g⟩. However, it was not clear whether a variant of the CGH attack described by Chen, Gentry and Halevi can be used to break a branching-program obfuscator instantiated by GGH13 without ideals; consequently, this was left as an open problem by Albrecht, Davidson and Larraia. In this paper, we describe a variant of the CGH attack which breaks the branching-program obfuscator using GGH13 without ideals. To achieve this goal, we introduce matrix approximate eigenvalues and build a relationship between the determinant and the rank of a matrix with noise. Our result further strengthens the conclusion of Albrecht, Davidson and Larraia that there is a structural weakness in `GGH13-type' encodings beyond the presence of ⟨g⟩.
Oblivious Dynamic Searchable Encryption via Distributed PIR and ORAM
Dynamic Searchable Symmetric Encryption (DSSE) allows delegating search/update operations over encrypted data via an encrypted index. However, DSSE is known to be vulnerable to statistical inference attacks, which exploit information leakage from access patterns on the encrypted index and files. Although generic Oblivious Random Access Machine (ORAM) can hide access patterns, it has been shown to be extremely costly when used directly in the DSSE setting.
We develop a series of Oblivious Distributed DSSE schemes that we refer to as \ODSE, which achieve oblivious access on the encrypted index with high security and improved efficiency over the use of generic ORAM. Specifically, \ODSE schemes are 3-57× faster than applying the state-of-the-art generic ORAMs on an encrypted dictionary index in real network settings. One of the proposed \ODSE schemes offers desirable security guarantees such as information-theoretic security with robustness against malicious servers. These properties are achieved by exploiting some of the unique characteristics of searchable encryption and encrypted indexes, which permit us to harness the computation and communication efficiency of multi-server PIR and Write-Only ORAM simultaneously. We fully implemented \ODSE and conducted extensive experiments to assess the performance of our proposed schemes in a real cloud environment.
ARM2GC: Succinct Garbled Processor for Secure Computation
We present ARM2GC, a novel secure computation framework based on Yao's Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop privacy-preserving applications using standard high-level programming languages (e.g., C) and compile them using off-the-shelf ARM compilers (e.g., gcc-arm). The main enabler of this framework is the introduction of SkipGate, an algorithm that dynamically omits the communication and encryption cost of the gates whose outputs are independent of the private data. SkipGate greatly enhances the performance of ARM2GC by omitting the costs of the gates associated with the instructions of the compiled binary, which is known to both parties involved in the computation. Our evaluation on benchmark functions demonstrates that ARM2GC not only outperforms the current GC frameworks that support high-level languages, but also achieves efficiency comparable to the best prior solutions based on hardware description languages. Moreover, in contrast to previous high-level frameworks with domain-specific languages and customized compilers, ARM2GC relies on the standard ARM compiler, which is rigorously verified and supports programs written in standard syntax.
Two-Round Multiparty Secure Computation from Minimal Assumptions
We provide new two-round multiparty secure computation (MPC) protocols
under the minimal assumption that two-round oblivious transfer (OT)
exists. If the assumed two-round OT protocol is secure against semi-honest adversaries
(in the plain model) then so is our two-round MPC protocol. Similarly, if the assumed two-round OT
protocol is secure against malicious adversaries (in the common
random/reference string model) then so is our two-round MPC protocol.
Previously,
two-round MPC protocols were only known under relatively stronger computational assumptions.
Finally, we provide several extensions.
A Survey and Refinement of Repairable Threshold Schemes
We consider repairable threshold schemes (RTSs), which are threshold schemes that enable a player to securely reconstruct a lost share with help from their peers. We summarise and, where possible, refine existing RTSs and introduce a new parameter for analysis, called the repair metric. We then explore using secure regenerating codes as RTSs and find them to be immediately applicable. We compare all RTS constructions considered and conclude by presenting the best candidate solutions for when either communication complexity or information rate is prioritised.
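The repair idea can be illustrated with a minimal Shamir-based (t, n) sketch: t helpers re-share their Lagrange-weighted contributions additively, so the repairing player recovers the lost share while no single message reveals a helper's own share. All names and parameters below are assumptions for illustration, not a construction from the paper.

```python
import random

P = 2**127 - 1  # a prime field modulus (assumed toy parameter)

def share(secret, t, n):
    """Shamir (t, n) sharing: any t shares reconstruct; t-1 reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def lagrange_at(xs, x_target):
    """Lagrange coefficients for evaluating the share polynomial at x_target."""
    coeffs = {}
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (x_target - xj) % P
                den = den * (xi - xj) % P
        coeffs[xi] = num * pow(den, P - 2, P) % P
    return coeffs

def repair(shares, helpers, x_lost, t):
    """Helpers jointly recompute the lost share f(x_lost).

    Each helper additively re-shares lambda_i * y_i among all helpers, so no
    single message reveals an individual share; the partial sums combine to
    f(x_lost) = sum_i lambda_i * y_i."""
    assert len(helpers) >= t
    lam = lagrange_at(helpers, x_lost)
    inbox = {h: [] for h in helpers}
    for i in helpers:
        contrib = lam[i] * shares[i] % P
        pieces = [random.randrange(P) for _ in range(len(helpers) - 1)]
        pieces.append((contrib - sum(pieces)) % P)
        for h, piece in zip(helpers, pieces):
            inbox[h].append(piece)
    # Each helper forwards the sum of its received pieces; the player adds them.
    return sum(sum(v) for v in inbox.values()) % P
```

The communication pattern here (everyone re-shares, then forwards sums) is one simple point in the repair-metric tradeoff space the survey discusses.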
Post-Quantum Zero-Knowledge Proofs for Accumulators with Applications to Ring Signatures from Symmetric-Key Primitives
In this paper we address the construction of privacy-friendly cryptographic primitives for the post-quantum era, in particular accumulators with zero-knowledge membership proofs and ring signatures. This is an important topic, as it helps to protect the privacy of users in online authentication and emerging technologies such as cryptocurrencies. Recently, we have seen the first such constructions, mostly based on assumptions related to codes and lattices. We, however, ask whether it is possible to construct such primitives without relying on structured hardness assumptions, but solely based on symmetric-key primitives such as hash functions or block ciphers. This is interesting because the resistance of the latter primitives to quantum attacks is quite well understood.
In doing so, we choose a modular approach and first construct an accumulator (with one-way domain) that allows one to efficiently prove knowledge of (a pre-image of) an accumulated value in zero-knowledge. We thereby take care that our construction can be instantiated solely from symmetric-key primitives and that our proofs are of sublinear size. The latter is non-trivial to achieve in the symmetric setting due to the absence of the algebraic structures that are typically used in other settings to obtain such efficiency gains. Regarding efficient instantiations of our proof system, we rely on recent results for constructing efficient non-interactive zero-knowledge proofs for general circuits. Based on this building block, we then show how to construct logarithmic-size ring signatures solely from symmetric-key primitives. As constructing more advanced primitives only from symmetric-key primitives is a very recent field, we discuss some interesting open problems and future research directions. Finally, we want to stress that our work also indirectly impacts other fields: for the first time, it raises the requirement for collision-resistant hash functions with particularly low AND count.
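Leaving aside the zero-knowledge layer, the accumulator component can be illustrated by a plain Merkle-tree accumulator built from a hash function, whose membership witnesses are authentication paths. This is a minimal sketch assuming a power-of-two number of leaves, not the paper's exact construction.

```python
import hashlib

def H(b):
    """Hash used both for leaves and internal nodes (illustration only)."""
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    """Merkle-tree accumulator: the root accumulates all leaves.
    Assumes len(leaves) is a power of two."""
    level = [H(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, index):
    """Membership witness: the sibling hash at each level."""
    path = []
    for level in tree[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    """Recompute the root from the leaf and its authentication path."""
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if index % 2 == 0 else H(sib + node)
        index //= 2
    return node == root
```

A witness here has logarithmic size, which matches the abstract's point that proving membership in zero-knowledge with sublinear proofs is the hard part in the symmetric setting.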
Tesseract: Real-Time Cryptocurrency Exchange using Trusted Hardware
We propose Tesseract, a secure real-time cryptocurrency exchange service. Existing centralized exchange designs are vulnerable to theft of funds, while decentralized exchanges cannot offer real-time cross-chain trades. All currently deployed exchanges are also vulnerable to frontrunning attacks. Tesseract overcomes these flaws and achieves a best-of-both-worlds design by using Intel SGX as a trusted execution environment. Furthermore, by running a consensus protocol among SGX-enabled servers, Tesseract mitigates denial-of-service attacks. Tesseract supports not only real-time cross-chain cryptocurrency trades, but also secure tokenization of assets pegged to cryptocurrencies. For instance, Tesseract-tokenized bitcoins can circulate on the Ethereum blockchain for use in smart contracts. We provide a reference implementation of Tesseract that supports Bitcoin, Ethereum, and similar cryptocurrencies.
Symbolic Security Criteria for Blockwise Adaptive Secure Modes of Encryption
Symbolic methods for reasoning about the security of cryptographic systems have for some time concentrated mainly on protocols. More recently, however, we see a rising interest in the use of symbolic methods to reason about the security of algorithms as well, especially algorithms that are built by combining well-defined primitives. For this kind of application two things are generally required: the ability to reason about term algebras obeying equational theories at the symbolic level, and the ability to prove computational soundness and completeness of the symbolic model. It is often challenging to provide both these capabilities, especially for an adaptive adversary that can perform chosen plaintext or ciphertext attacks. In this paper we derive sound and complete symbolic criteria for computational security against adaptive chosen plaintext attacks of a class of modes of encryption. These apply to any scheduling policy used to send the ciphertext, ranging from the messagewise schedule, in which ciphertext blocks are sent to the adversary only after all the plaintext blocks have been received, to the blockwise schedule, in which ciphertext blocks are sent as soon as they are computed. We also discuss how this approach could be extended to larger classes of modes, and how it could be applied to the automatic synthesis of cryptosystems.
Shorter Linear Straight-Line Programs for MDS Matrices
Recently, a lot of attention has been paid to the search for efficiently implementable MDS matrices for lightweight symmetric primitives. Previous work concentrated on locally optimizing the multiplication with single matrix elements. Separate from this line of work, several heuristics were developed to find shortest linear straight-line programs. Solving this problem actually corresponds to globally optimizing multiplications by matrices.
In this work we combine these so far largely independent lines of work. As a result, we achieve implementations of known, locally optimized, and new MDS matrices that significantly outperform all implementations from the literature. Interestingly, almost all previous locally optimized constructions behave very similarly with respect to the globally optimized implementation.
As a side effect, our work reveals the so far best implementation of the AES MixColumns operation with respect to the number of XOR operations needed.
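The local-versus-global XOR-count distinction can be illustrated in miniature. The sketch below, on an arbitrary toy binary matrix (not an MDS matrix from the paper), computes the naive per-row cost and shows how a straight-line program that reuses intermediate sums beats it.

```python
def naive_xor_count(matrix):
    """XORs for row-by-row GF(2) matrix-vector multiplication:
    a row of Hamming weight w costs w - 1 XORs."""
    return sum(sum(row) - 1 for row in matrix if any(row))

# Toy 4x4 binary matrix (an assumed example, not from the paper).
M = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
]
# Naive cost: (2-1) + (3-1) + (4-1) + (2-1) = 7 XORs.
# A straight-line program reusing t = x0^x1 and s = x2^x3 needs only 4:
#   t = x0 ^ x1        # y0 = t
#   u = t ^ x2         # y1 = u
#   s = x2 ^ x3        # y3 = s
#   v = t ^ s          # y2 = v
```

Shortest-linear-program heuristics search for exactly this kind of global reuse across rows, which the per-element (local) metric cannot see.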
SWiM: Secure Wildcard Pattern Matching From OT Extension
Suppose a server holds a long text string and a receiver holds a short pattern string. Secure pattern matching allows the receiver to learn the locations in the long text where the pattern appears, while leaking nothing else to either party besides the lengths of their inputs. In this work we consider secure wildcard pattern matching (WPM), where the receiver's pattern is allowed to contain wildcards that match any character.
We present SWiM, a simple and fast protocol for WPM that is heavily based on oblivious transfer (OT) extension. As such, the protocol requires only a small constant number of public-key operations and otherwise uses only very fast symmetric-key primitives. SWiM is secure against semi-honest adversaries. We implemented a prototype of our protocol to demonstrate its practicality. We can perform WPM on a DNA text (4-character alphabet) of length and pattern of length in just over 2 seconds, which is over two orders of magnitude faster than the state-of-the-art scheme of Baron et al. (SCN 2012).
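For reference, the functionality that SWiM computes securely can be sketched in the clear (a toy plaintext implementation; the actual protocol never reveals these inputs to the other party):

```python
def wpm(text, pattern, wildcard="?"):
    """Plaintext reference for the WPM functionality: return every position
    in `text` where `pattern` matches, with `wildcard` matching any character."""
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        window = text[i:i + len(pattern)]
        if all(p == wildcard or p == t for p, t in zip(pattern, window)):
            hits.append(i)
    return hits
```

In the secure protocol, only the receiver learns this list of positions; the server learns nothing beyond the pattern length.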
Improved Cryptanalysis of HFEv- via Projection
The HFEv- signature scheme is one of the most studied multivariate schemes and one of the major candidates for the upcoming standardization of post-quantum digital signature schemes. In this paper, we propose three new attack strategies against HFEv-, each of them using the idea of projection. Our third attack in particular is very effective and is, for some parameter sets, the most efficient known attack against HFEv-. Furthermore, our attack requires much less memory than direct and rank attacks. Our work therefore gives new insights into the security of the HFEv- signature scheme and restrictions for the parameter choice of a possible future standardized HFEv- instance.
Improvements to the Linear Operations of LowMC: A Faster Picnic
Picnic is a practical approach to digital signatures where the security is primarily based on the existence of a one-way function, and the signature size strongly depends on the number of multiplications in the circuit describing that one-way function. The highly parameterizable block cipher family LowMC has the most competitive properties with respect to this metric and is hence a standard choice. In this paper, we study various options for efficient implementations of LowMC in-depth.
First, we investigate optimizations of the round key computation of LowMC independently of any implementation optimizations. By decomposing the round key computations based on the keys' effect on the S-box layer and general optimizations, we reduce runtime costs by up to a factor of 2 and furthermore reduce the size of the LowMC matrices by around 45% compared to the original Picnic implementation (CCS'17).
Second, we propose two modifications to the remaining matrix multiplication in LowMC's linear layer. The first modification decomposes the multiplication into parts depending on their effect on the S-box layer. While this requires the linear layer matrices to have an invertible submatrix, it reduces the runtime and memory costs significantly, both by up to a factor of 4 for instances used by Picnic and up to a factor of 25 for LowMC instances with only one S-box. The second modification proposes a Feistel structure using smaller matrices, completely replacing the remaining large matrix multiplication in LowMC's linear layer. With this approach, we achieve an operation count logarithmic in the block size and, more importantly, improve over Picnic's matrix multiplication by 60% while retaining a constant-time algorithm. Furthermore, this technique also enables us to reduce the memory requirements for storing LowMC matrices by 60%.
Under Pressure: Security of Caesar Candidates beyond their Guarantees
The Competition for Authenticated Encryption: Security, Applicability and Robustness (CAESAR) has as its official goal to ``identify a portfolio of authenticated ciphers that offer advantages over AES-GCM and are suitable for widespread adoption.'' Each of the 15 candidate schemes competing in the currently ongoing 3rd round of CAESAR must clearly declare its security claims, among other things whether or not it can tolerate nonce misuse, and what the maximal data complexity is for which security is guaranteed. These claims appear to be valid for all 15 candidates.
Interpreting "Robustness" in CAESAR as the ability to mitigate damage even when security guarantees are void, we describe attacks with birthday complexity or beyond, and/or with nonce reuse, for each of the 15 candidates. We then sort the candidates into classes depending on how powerful an attacker needs to be to mount (semi-)universal forgeries, decryption attacks, or key recoveries. Rather than invalidating the security claims of any of the candidates, our results provide an additional criterion for evaluating the security that candidates deliver, which can be useful, e.g., for breaking ties in the final CAESAR discussions.
A Zero-Knowledge Version of vSQL
Zero-knowledge arguments of knowledge are powerful cryptographic primitives that allow a computationally strong prover to convince a weaker verifier of the validity of an NP statement, without revealing anything about the corresponding witness (beyond its existence). Most state-of-the-art implementations of such arguments that achieve succinct communication and verification cost follow the quadratic arithmetic program paradigm. One notable exception is the vSQL system of [Zhang et al., IEEE S&P 2017], which takes an entirely different approach, resulting in significantly fewer cryptographic operations. However, it has the notable downside that it is not zero-knowledge (i.e., it does not hide the witness from the verifier), a property that has proven to be of utmost importance in many applications (e.g., in cryptocurrencies). In this work, we present a zero-knowledge version of the argument upon which vSQL is based. Our construction utilizes two separate techniques: (i) a novel zero-knowledge verifiable polynomial delegation protocol, and (ii) running parts of the argument of vSQL over homomorphic commitments, thus hiding the committed values.
vSQL: Verifying Arbitrary SQL Queries over Dynamic Outsourced Databases
Cloud database systems such as Amazon RDS or Google Cloud SQL enable the outsourcing of a large database to a server that then responds to SQL queries. A natural problem here is to efficiently verify the correctness of responses returned by the (untrusted) server. In this paper we present vSQL, a novel cryptographic protocol for publicly verifiable SQL queries on dynamic databases. At a high level, our construction relies on two extensions of the CMT interactive-proof protocol [Cormode et al., 2012]: (i) supporting outsourced input via the use of a polynomial-delegation protocol with succinct proofs, and (ii) supporting auxiliary input (i.e., non-deterministic computation) efficiently. Compared to previous verifiable-computation systems based on interactive proofs, our construction has verification cost polylogarithmic in the auxiliary input (which for SQL queries can be as large as the database) rather than linear.
In order to evaluate the performance and expressiveness of our scheme, we tested it on SQL queries based on the TPC-H benchmark on a database with rows and columns. The server overhead in our scheme (which is typically the main bottleneck) is up to lower than previous approaches based on succinct arguments of knowledge (SNARKs), and moreover we avoid the need for query-dependent pre-processing which is required by optimized SNARK-based schemes. In our construction, the server/client time and the communication cost are comparable to, and sometimes smaller than, those of existing customized solutions which only support specific queries.
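The interactive-proof machinery underlying CMT and vSQL reduces to repeated invocations of the sum-check protocol. Below is a minimal, non-succinct sum-check sketch over a toy prime field; the field size, the example polynomial, and all names are assumptions for illustration, not vSQL's actual parameters.

```python
import random

P = 2**61 - 1  # toy prime modulus (assumed parameter)

def lagrange_eval(evals, x):
    """Evaluate the polynomial with evals[i] = s(i), i = 0..deg, at point x."""
    total = 0
    for i, yi in enumerate(evals):
        num = den = 1
        for k in range(len(evals)):
            if k != i:
                num = num * (x - k) % P
                den = den * (i - k) % P
        total += yi * num * pow(den, P - 2, P)
    return total % P

def sumcheck(g, nvars, deg):
    """Sum-check for H = sum of g over {0,1}^nvars.

    g: function on a list of nvars field elements, per-variable degree <= deg.
    Returns (claimed_sum, verifier_accepts)."""
    claimed = sum(g([(m >> i) & 1 for i in range(nvars)])
                  for m in range(2 ** nvars)) % P
    rs, current = [], claimed
    for j in range(nvars):
        # Prover: round polynomial s_j(X), sent as deg+1 evaluations.
        def s_j(x):
            total = 0
            for m in range(2 ** (nvars - j - 1)):
                tail = [(m >> i) & 1 for i in range(nvars - j - 1)]
                total += g(rs + [x] + tail)
            return total % P
        evals = [s_j(x) for x in range(deg + 1)]
        # Verifier: s_j(0) + s_j(1) must equal the running claim.
        assert (evals[0] + evals[1]) % P == current
        r = random.randrange(P)
        current = lagrange_eval(evals, r)
        rs.append(r)
    # Final check: one oracle query to g at the random point.
    return claimed, current == g(rs) % P
```

vSQL's extensions replace the final oracle query with a polynomial-delegation proof so the verifier never needs the (outsourced) input itself.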
How Far Can We Reach? Breaking Masked AES Smartcard Implementation Using One Trace
The Rotating S-box Masking (RSM) scheme is a highly efficient masking scheme proposed to protect cryptographic implementations from side-channel attacks. It is a Low Entropy Masking Scheme and has attracted special attention for its low overhead and high performance. The two public targets of the international academic competition DPA Contest v4 are both RSM-masked AES implementations: RSM-AES-256 for v4.1 and RSM-AES-128 for v4.2. The side-channel security of RSM-AES-256 was intensively studied by researchers worldwide under the framework of the DPA Contest, and several flaws were identified, while the security of RSM-AES-128 has not yet been thoroughly studied. In this paper, we analyze the practical security of RSM-AES-128 from a profiling-attack point of view. Specifically, we first present a Multivariate Template Attack (MTA) to maximize the success rate of key recovery. Next, we propose a new Depth-First Key Enumeration Algorithm (DFKEA) that can be applied to find the correct key efficiently after a side-channel attack. By integrating the DFKEA into our MTA, we obtain a novel multivariate profiling attack that recovers the whole secret key of RSM-AES-128 with over 95% probability using only one electromagnetic trace. This is the best attack among all attacks submitted to the DPA Contest to date. Finally, we present a proposal to further improve the practical security of RSM-AES-128 at an acceptable overhead.
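Key enumeration can be illustrated independently of the attack itself: given per-subkey probability lists produced by a template attack, combined-key candidates can be emitted in strictly decreasing probability with a heap. This toy sketch handles two subkeys (a real attack chains this pairwise over all key bytes; it is not the paper's DFKEA, and all values are illustrative).

```python
import heapq

def enumerate_keys(probs_a, probs_b):
    """Yield (subkey_a, subkey_b, probability) in decreasing p_a * p_b order.

    Standard best-first enumeration: keep a frontier of index pairs in a
    max-heap and expand each popped pair to its two successors."""
    order_a = sorted(range(len(probs_a)), key=lambda i: -probs_a[i])
    order_b = sorted(range(len(probs_b)), key=lambda j: -probs_b[j])
    heap = [(-probs_a[order_a[0]] * probs_b[order_b[0]], 0, 0)]
    seen = {(0, 0)}
    while heap:
        neg_p, a, b = heapq.heappop(heap)
        yield order_a[a], order_b[b], -neg_p
        for na, nb in ((a + 1, b), (a, b + 1)):
            if na < len(probs_a) and nb < len(probs_b) and (na, nb) not in seen:
                seen.add((na, nb))
                p = probs_a[order_a[na]] * probs_b[order_b[nb]]
                heapq.heappush(heap, (-p, na, nb))
```

An attacker tests candidates in this order and stops at the first key that verifies, which is what makes a near-correct template attack practical.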
Faster key compression for isogeny-based cryptosystems
Supersingular isogeny-based cryptography is one of the more recent families of post-quantum proposals. An interesting feature is the comparatively low bandwidth occupation in key agreement protocols, which stems from the possibility of key compression. However, compression and decompression introduce a significant overhead to the overall processing cost despite
recent progress. In this paper we address the main processing bottlenecks involved in key compression and decompression, and suggest substantial improvements for each of them. Some of our techniques may be of independent interest for other, more conventional areas of elliptic-curve cryptography as well.
PIR with compressed queries and amortized query processing
Private information retrieval (PIR) is a key building block in many privacy-preserving systems. Unfortunately, existing constructions remain very expensive. This paper introduces two techniques that make the computational variant of PIR (CPIR) more efficient in practice. The first technique targets a recent class of CPU-efficient CPIR protocols where the query sent by the client contains a number of ciphertexts proportional to the size of the database. We show how to compress this query, achieving size reductions of up to 274×.
The second technique is a new data encoding called probabilistic batch codes (PBCs). We use PBCs to build a multi-query PIR scheme that allows the server to amortize its computational cost when processing a batch of requests from the same client. This technique achieves up to 40× speedup over processing queries one at a time, and is significantly more efficient than related encodings. We apply our techniques to the Pung private communication system, which relies on a custom multi-query CPIR protocol for its privacy guarantees. By porting our techniques to Pung, we find that we can simultaneously reduce network costs by 36× and increase throughput by 3×.
Cryptanalysis of Bivium using a Boolean all solution solver
Cryptanalysis of Bivium is presented with the help of a new Boolean
system solver algorithm. This algorithm uses a Boolean equation
model of Bivium for a known keystream. The Boolean solver uses
implicant based computation of satisfying assignments and is distinct
from well known CNF-satisfiability solvers or algebraic cryptanalysis
methods. The solver is also inherently parallel and returns all satisfying assignments of the system of equations in terms of implicants. Cryptanalysis of Bivium is classified into four categories of increasing strength, and it is shown that the solver algorithm is able to complete the key recovery in category 2 in 48 hours with a Python implementation (improved to 3 hours with a C++ implementation). Computational algorithms for the formation of equations and for symbolic operations on Boolean functions are also developed afresh and presented. Limitations of these implementations are determined with respect to the Bivium model and its cryptanalysis, which should be useful for the cryptanalysis of general stream ciphers.
Lattice Klepto: Turning Post-Quantum Crypto Against Itself
This paper studies ways to backdoor lattice-based systems following
Young and Yung's work on backdooring RSA and discrete-log based systems.
For the NTRU encryption scheme we show how to build a backdoor and to
change the system so that each ciphertext leaks information about the
plaintext to the owner of the backdoor.
For signature schemes the
backdoor leaks information about the signing key to the backdoor owner.
As in Young and Yung's work the backdoor uses the freedom that random
selections offer in the protocol to hide a secret message encrypted to
the backdoor owner.
The most interesting, and very different,
part, though, is how the hidden messages are embedded and retrieved.
Decoding Linear Codes with High Error Rate and its Impact for LPN Security
We propose a new algorithm for the decoding of random binary linear codes of dimension that is superior to previous algorithms for high error rates. In the case of Full Distance decoding, the best known bound of is currently achieved via the BJMM-algorithm of Becker, Joux, May and Meurer. Our algorithm significantly improves this bound down to .
Technically, our improvement comes from the heavy use of Nearest Neighbor techniques in all steps of the construction, whereas the BJMM-algorithm can only take advantage of Nearest Neighbor search in the last step.
Since cryptographic instances of LPN usually work in the high error regime, our algorithm has implications for LPN security.
The Parallel Repetition of Non-Signaling Games: Counterexamples and Dichotomy
Non-signaling games are an important object of study in the theory of
computation, for their role both in quantum information and in (classical)
cryptography. In this work, we study the behavior of these games under
parallel repetition.
We show that, unlike the situation both for classical games and for
two-player non-signaling games, there are -player
non-signaling games (for ) whose values do not tend to with
sufficient parallel repetition. In fact, parallel repetition sometimes does
not decrease their value whatsoever.
We show that in general:
1. Every game's non-signaling value under parallel repetition is either lower bounded by a
positive constant or decreases exponentially with the number of repetitions.
2. Exponential decrease occurs if and only if the game's
sub-non-signaling value (Lancien and Winter, CJTCS '16) is less than .
Note on the Robustness of CAESAR Candidates
Authenticated ciphers rely on the uniqueness of the nonces to meet their security goals. In this work, we investigate the implications of reusing nonces for three third-round candidates of the ongoing CAESAR competition, namely Tiaoxin, AEGIS and MORUS. We show that an attacker that is able to force nonces to be reused can reduce the security of the ciphers with results ranging from full key-recovery to forgeries with practical complexity and a very low number of nonce-misuse queries.
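The generic effect of nonce reuse on a stream-based authenticated cipher can be demonstrated with a toy CTR-style keystream, built here from SHA-256 purely for illustration; the CAESAR candidates' actual keystream generation differs, but the leakage principle is the same.

```python
import hashlib

def keystream(key, nonce, length):
    """Toy CTR-style keystream: hash(key || nonce || counter) blocks.
    Illustration only, not a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, plaintext):
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"k" * 16, b"n" * 12
c1 = encrypt(key, nonce, b"attack at dawn!!")
c2 = encrypt(key, nonce, b"retreat at noon!")
# Reusing the nonce cancels the keystream: c1 XOR c2 = p1 XOR p2.
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

With the XOR of two plaintexts in hand, classical crib-dragging recovers both messages; the attacks in the paper go further, up to forgeries and key recovery, by exploiting the specific structures of Tiaoxin, AEGIS and MORUS.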
Clustering Related-Tweak Characteristics: Application to MANTIS-6
The TWEAKEY/STK construction is an increasingly popular approach for designing tweakable block ciphers that notably uses a linear tweakey schedule. Several recent attacks have analyzed the implications of this approach for differential cryptanalysis and other attacks that can take advantage of related tweakeys.
We generalize the clustering approach of a recent differential attack on the tweakable block cipher MANTIS-5 and describe a tool for efficiently finding and evaluating such clusters. More specifically, we consider the set of all differential characteristics compatible with a given truncated characteristic, tweak difference, and optional constraints for the differential. We refer to this set as a semi-truncated characteristic and estimate its probability by analyzing the distribution of compatible differences at each step.
We apply this approach to find a semi-truncated differential characteristic for MANTIS-6 with probability about and derive a key-recovery attack with a complexity of about chosen-plaintext queries and computations. The data-time product is .
On the Complexity of the Hybrid Approach on HFEv-
The HFEv- signature scheme is one of the most promising candidates for post-quantum digital signatures, most notably for its short signature size.
It has long been known that direct attacks against HFEv- systems work more efficiently than against random systems. The reason for this was found by Jintai Ding et al., who proved an upper bound on the degree of regularity of these systems.
However, not much is known about the efficiency of the hybrid approach against the HFEv- scheme. In order to find suitable parameter sets for HFEv- for higher levels of security, this topic has to be studied in more detail.
In this article we consider this question by performing a large number of computer experiments. As our experiments show, guessing variables does not help to speed up direct attacks against HFEv- systems. Therefore, in the parameter selection of these schemes, we do not have to consider the hybrid approach. Furthermore, we develop in this article a simple formula to estimate the degree of regularity of a determined HFEv- system. Together with our results on the behavior of the hybrid approach, this formula gives us an easy way to estimate the complexity of direct attacks against HFEv- systems.
Machine-Learning Attacks on PolyPUFs, OB-PUFs, RPUFs, LHS-PUFs, and PUF–FSMs
A physically unclonable function (PUF) is a circuit of which the input–output behavior is designed to be sensitive to the random variations of its manufacturing process. This building block hence facilitates the authentication of any given device in a population of identically laid-out silicon chips, similar to the biometric authentication of a human. The focus and novelty of this work is the development of efficient impersonation attacks on the following five Arbiter PUF–based authentication protocols: (1) the so-called PolyPUF protocol of Konigsmark, Chen, and Wong, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2016, (2) the so-called OB-PUF protocol of Gao, Li, Ma, Al-Sarawi, Kavehei, Abbott, and Ranasinghe, as presented at the IEEE conference PerCom 2016, (3) the so-called RPUF protocol of Ye, Hu, and Li, as presented at the IEEE conference AsianHOST 2016, (4) the so-called LHS-PUF protocol of Idriss and Bayoumi, as presented at the IEEE conference RFID-TA 2017, and (5) the so-called PUF–FSM protocol of Gao, Ma, Al-Sarawi, Abbott, and Ranasinghe, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2018. The common flaw of all five designs is that the use of lightweight obfuscation logic provides insufficient protection against machine learning attacks.
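The underlying modeling attack can be sketched: an arbiter PUF is well approximated by a linear threshold function over a parity feature map of the challenge, so a simple perceptron learns it from challenge-response pairs (CRPs). Everything below is a simulation with assumed parameters, not an attack on the actual protocols.

```python
import random

random.seed(1)
STAGES = 32  # assumed toy size; real arbiter PUFs typically use 64+ stages

def phi(challenge):
    """Arbiter-PUF feature map: suffix parities of the challenge, plus a bias."""
    feats = [1]
    prod = 1
    for c in reversed(challenge):
        prod *= 1 - 2 * c  # map bit 0/1 -> +1/-1
        feats.append(prod)
    return feats[::-1]

# Simulated device: response = sign(<w_secret, phi(c)>) for hidden delay weights.
w_secret = [random.gauss(0, 1) for _ in range(STAGES + 1)]

def respond(challenge):
    return 1 if sum(a * b for a, b in zip(w_secret, phi(challenge))) > 0 else -1

# Attacker: collect CRPs and fit the linear model with the perceptron rule.
crps = [[random.randrange(2) for _ in range(STAGES)] for _ in range(1500)]
feats = [phi(c) for c in crps]
labels = [respond(c) for c in crps]
w_model = [0.0] * (STAGES + 1)
for _ in range(50):  # epochs
    for f, y in zip(feats, labels):
        if y * sum(a * b for a, b in zip(w_model, f)) <= 0:
            w_model = [wi + y * fi for wi, fi in zip(w_model, f)]

def predict(challenge):
    return 1 if sum(a * b for a, b in zip(w_model, phi(challenge))) > 0 else -1
```

The point of the attacks in the paper is that the lightweight obfuscation layers added by the five protocols do not break this linearity enough to defeat such learners.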
Sentiment Protocol: A Decentralized Protocol Leveraging Crowd Sourced Wisdom
The wisdom of the crowd is a valuable asset in today's society. It is not only important in predicting elections but also plays an essential role in marketing and the financial industry. Having a trustworthy source of opinion can make forecasts more accurate and markets more predictable. Until now, a fundamental problem of surveys has been the lack of incentives for participants to provide accurate information. Classical solutions, like small monetary rewards or the chance of winning a prize, are often not very attractive and do not prevent multiple entries or randomly filled-in answers. In this work, we present a framework that solves both problems through customizable incentivization. Apart from predicting events, the framework can also be used to govern decentralized autonomous organizations.
Doubly-efficient zkSNARKs without trusted setup
We present a zero-knowledge argument for NP with low communication complexity,
low concrete cost for both the prover and the verifier, and no trusted setup,
based on standard cryptographic assumptions. Communication is proportional
to (for the depth and the width of the verifying circuit) plus
the square root of the witness size. When applied to batched or data-parallel
statements, the prover's runtime is linear and the verifier's is sub-linear
in the verifying circuit size, both with good constants. In addition,
witness-related communication can be reduced, at the cost of increased
verifier runtime, by leveraging a new commitment scheme for multilinear
polynomials, which may be of independent interest. These properties represent
a new point in the tradeoffs among setup, complexity assumptions, proof size,
and computational cost.
We apply the Fiat-Shamir heuristic to this argument to produce a zero-knowledge
succinct non-interactive argument of knowledge (zkSNARK) in the random oracle
model, based on the discrete log assumption, which we call Hyrax. We implement
Hyrax and evaluate it against five state-of-the-art baseline systems. Our
evaluation shows that, even for modest problem sizes, Hyrax gives smaller
proofs than all but the most computationally costly baseline, and that its
prover and verifier are each faster than three of the five baselines.
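The Fiat-Shamir step used above, replacing the verifier's random challenge with a hash of the transcript, is easiest to see on a plain Schnorr proof of knowledge of a discrete log. This is only a generic illustration of the heuristic, not Hyrax itself, and the group parameters are toy-sized and insecure.

```python
import hashlib

# Toy Schnorr group (insecure sizes, illustrative only): p = 2q + 1, g of order q
p, q, g = 23, 11, 2   # 2 has order 11 modulo 23

def H(*vals):
    # Fiat-Shamir challenge: hash the public transcript down to Z_q
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = 7                 # prover's secret exponent
y = pow(g, x, p)      # public key

k = 5                 # prover's nonce (fixed here only for reproducibility)
t = pow(g, k, p)      # commitment
e = H(g, y, t)        # hash of the transcript replaces the verifier's random coin
s = (k + e * x) % q   # response

# Verifier recomputes the challenge and checks g^s == t * y^e (mod p)
ok = pow(g, s, p) == (t * pow(y, e, p)) % p
```

The same recipe, hash the transcript to derive each challenge, turns the interactive argument of the paper into the non-interactive Hyrax in the random oracle model.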
A Certain Family of Subgroups of Is Weakly Pseudo-Free under the General Integer Factoring Intractability Assumption
Let be the subgroup of elements of odd order in the group and let be the uniform probability distribution on . In this paper, we establish a probabilistic polynomial-time reduction from finding a nontrivial divisor of a composite number to finding a nontrivial relation between elements chosen independently and uniformly at random from , where is given in unary as a part of the input. Assume that finding a nontrivial divisor of a random number in some set of composite numbers (for a given security parameter) is a computationally hard problem. Then, using the above-mentioned reduction, we prove that the family of computational abelian groups is weakly pseudo-free. The disadvantage of this result is that the probability ensemble is not polynomial-time samplable. To overcome this disadvantage, we construct a polynomial-time computable function (where ) and a polynomial-time samplable probability ensemble (where is a distribution on for each ) such that the family of computational abelian groups is weakly pseudo-free.
Information-Theoretic Secret-Key Agreement: The Asymptotically Tight Relation Between the Secret-Key Rate and the Channel Quality Ratio
Information-theoretically secure secret-key agreement between two parties Alice and Bob is a well-studied problem that is provably impossible in a plain model with public (authenticated) communication, but is known to be possible in a model where the parties also have access to some correlated randomness. One particular type of such correlated randomness is the so-called satellite setting, where a source of uniform random bits (e.g., sent by a satellite) is received by the parties and the adversary Eve over inherently noisy channels. The antenna size determines the error probability, and the antenna is the adversary's limiting resource much as computing power is the limiting resource in traditional complexity-based security. The natural assumption about the adversary is that her antenna is at most times larger than both Alice's and Bob's antenna, where, to be realistic, can be very large.
The goal of this paper is to characterize the secret-key rate per transmitted bit in terms of . Traditional results in this so-called satellite setting are phrased in terms of the error probabilities , , and , of the binary symmetric channels through which the parties receive the bits and, quite surprisingly, the secret-key rate has been shown to be strictly positive unless Eve's channel is perfect ( ) or either Alice's or Bob's channel output is independent of the transmitted bit (i.e., or ). However, the best proven lower bound, if interpreted in terms of the channel quality ratio , is only exponentially small in . The main result of this paper is that the secret-key rate decreases asymptotically only like if the per-bit signal energy, affecting the quality of all channels, is treated as a system parameter that can be optimized. Moreover, this bound is tight if Alice and Bob have the same antenna sizes.
Motivated by considering a fixed sending signal power, in which case the per-bit energy is inversely proportional to the bit-rate, we also propose a definition of the secret-key rate per second (rather than per transmitted bit) and prove that it decreases asymptotically only like .
Probabilistic solution of Yao's millionaires' problem
We offer a probabilistic solution of Yao's millionaires' problem that gives a correct answer with probability (slightly) less than 1; on the positive side, this solution does not use any one-way functions.
Forward Secure Efficient Group Signature in Dynamic Setting using Lattices
Secret keys in today's computing infrastructure are at high risk of exposure due to the increasing use of harmful devices. As a result, forward secrecy is a desirable feature for any cryptosystem: the lifetime of a user is divided into discrete time periods, and forward secrecy preserves the security of past periods even if the secret key is exposed. In this work, we introduce the first lattice-based forward-secure dynamic group signature scheme. Existing forward-secure group signature schemes are secure in the bilinear setting and will become insecure in the era of quantum computers. We employ a complete binary tree whose leaves are associated with discrete time periods, and label the nodes in a unique way that gives each node at the same depth a different Hamming weight; this lets the group manager issue distinct certificates to distinct users. Our scheme withstands framing and mis-identification attacks and preserves anonymity under the learning with errors (LWE) and short integer solution (SIS) assumptions.
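Forward secrecy via key evolution, in its simplest generic form (a one-way hash chain, not this paper's lattice construction), can be sketched in a few lines: each period's key is derived one-way from the previous one, so exposing the current key reveals nothing about past periods.

```python
import hashlib

def evolve(key):
    # One-way update: given the period-i key, derive the period-(i+1) key.
    # Inverting SHA-256 is infeasible, so past-period keys stay protected.
    return hashlib.sha256(b"next-period|" + key).digest()

k0 = bytes(32)   # period-0 secret (all-zero only for illustration)
k1 = evolve(k0)  # period-1 key; after updating, k0 is erased
k2 = evolve(k1)  # period-2 key
```

A tree-based scheme like the paper's replaces the linear chain with a binary tree of keys so that updates and signatures stay logarithmic in the number of periods.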
On the Leakage Resilience of Ring-LWE Based Public Key Encryption
We consider the leakage resilience of the Ring-LWE analogue of the Dual-Regev encryption scheme (R-Dual-Regev for short), originally presented by Lyubashevsky et al. (Eurocrypt '13). Specifically, we would like to determine whether the R-Dual-Regev encryption scheme remains IND-CPA secure even when an attacker leaks information about the secret key.
We consider the setting where is the ring of integers of the -th cyclotomic number field, for which is a power-of-two, and the Ring-LWE modulus is set to . This is the common setting used in practice and is desirable in terms of the efficiency and simplicity of the scheme. Unfortunately, in this setting is very far from being a field so standard techniques for proving leakage resilience in the general lattice setting, which rely on the leftover hash lemma, do not apply. Therefore, new techniques must be developed.
In this work, we put forth a high-level approach for proving the leakage resilience of the R-Dual-Regev scheme by generalizing the original proof of Lyubashevsky et al. (Eurocrypt '13). We then give three instantiations of our approach, proving that R-Dual-Regev remains IND-CPA secure in the presence of three natural, non-adaptive leakage classes.
Privacy Games for Syntactic Privacy Notions
It is well understood that the huge volumes of data captured in recent years have the potential to underpin significant research developments in many fields. But, to realise these benefits, all relevant parties must be comfortable with how this data is shared. At the heart of this is the notion of privacy --- which is recognised as being somewhat difficult to define. Previous authors have shown how privacy notions such as anonymity, unlinkability and pseudonymity might be combined into a single formal framework. In this paper we use and extend this work by defining privacy games for individual and group privacy within distributed environments. More precisely, for each privacy notion, we formulate a game that an adversary has to win in order to break the notion. Via these games, we aim to clarify understanding of, and relationships between, different privacy notions; we also aim to give an unambiguous understanding of adversarial actions. Additionally, we extend previous work via the notion of unobservability.
k-Round MPC from k-Round OT via Garbled Interactive Circuits
We present new constructions of round-efficient, or even round-optimal, Multi-Party Computation (MPC) protocols from Oblivious Transfer (OT) protocols. Our constructions establish a tight connection between MPC and OT: In the setting of semi-honest security, for any k, k-round semi-honest OT is necessary and complete for k-round semi-honest MPC. In the round-optimal case of k = 2, we obtain 2-round semi-honest MPC from 2-round semi-honest OT, resolving the round complexity of semi-honest MPC under this weak and necessary assumption. In comparison, previous 2-round constructions rely on either the heavy machinery of indistinguishability obfuscation or witness encryption, or the algebraic structure of bilinear pairing groups. More generally, for an arbitrary number of rounds k, all previous constructions of k-round semi-honest MPC require at least OT with rounds for .
In the setting of malicious security, we show: For any k, k-round malicious OT is necessary and complete for k-round malicious MPC. In fact, OT satisfying a weaker notion of delayed-semi-malicious security suffices. In the common reference string model, for any k, we obtain k-round malicious Universally Composable (UC) protocols from any k-round semi-malicious OT and non-interactive zero-knowledge. Previous 5-round protocols in the plain model and 2-round protocols in the common reference string model all require algebraic assumptions such as DDH or LWE.
At the core of our constructions is a new framework for garbling interactive circuits. Roughly speaking, it allows garbling interactive machines that participate in interactions of a special form. The garbled machine can emulate the original interactions while receiving messages sent in the clear (without their being encoded using secrets), and it reveals only the transcript of the interactions, provided that the transcript is computationally uniquely defined. We show that garbled interactive circuits sufficient for constructing MPC can be implemented using OT. Along the way, we also propose a new primitive, the witness selector, that strengthens witness encryption, and a new notion of zero-knowledge functional commitments.
A formal model of Bitcoin transactions
We propose a formal model of Bitcoin transactions, which is sufficiently abstract to enable formal reasoning, and at the same time is concrete enough to serve as an alternative documentation to Bitcoin. We use our model to formally prove some well-formedness properties of the Bitcoin blockchain, for instance that each transaction can only be spent once. We release an open-source tool through which programmers can write transactions in our abstract model, and compile them into standard Bitcoin transactions.
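To give a flavor of the kind of well-formedness property the model supports, here is a minimal, hypothetical transaction model (the class and field names are ours, not the paper's syntax) in which the "each output spent at most once" check is a few lines over a UTXO set.

```python
class Tx:
    def __init__(self, txid, inputs, outputs):
        self.txid = txid
        self.inputs = inputs      # references (prev_txid, output_index) being spent
        self.outputs = outputs    # output amounts created by this transaction

def well_formed(txs):
    """Check that every transaction output is spent at most once."""
    utxo = {}                     # currently unspent outputs
    for tx in txs:
        for ref in tx.inputs:
            if ref not in utxo:   # already spent, or never created
                return False
            del utxo[ref]         # mark the referenced output as spent
        for i, amount in enumerate(tx.outputs):
            utxo[(tx.txid, i)] = amount
    return True

coinbase = Tx("cb", [], [50])
t1 = Tx("t1", [("cb", 0)], [30, 20])
t2 = Tx("t2", [("cb", 0)], [50])   # attempts to respend the same output
ok_chain = well_formed([coinbase, t1])
bad_chain = well_formed([coinbase, t1, t2])
```

The paper's model additionally covers scripts, signatures, and timelocks; the sketch only shows the no-double-spending invariant being proved.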
Relaxed Lattice-Based Signatures with Short Zero-Knowledge Proofs
Higher-level cryptographic privacy-enhancing protocols such as anonymous credentials, voting schemes, and e-cash are often constructed by suitably combining signature, commitment, and encryption schemes with zero-knowledge proofs. Indeed, a large body of protocols have been constructed in that manner from Camenisch-Lysyanskaya signatures and generalized Schnorr proofs. In this paper, we build a similar framework for lattice-based schemes by presenting a signature and commitment scheme that are compatible with Lyubashevsky's Fiat-Shamir proofs with abort, currently the most efficient zero-knowledge proofs for lattices. To cope with the relaxed soundness guarantees of these proofs, we define corresponding notions of relaxed signature and commitment schemes. We demonstrate the flexibility and efficiency of our new primitives by constructing a new lattice-based anonymous attribute token scheme and providing concrete parameters to securely instantiate this scheme.
On post-processing in the quantum algorithm for computing short discrete logarithms
We revisit the quantum algorithm for computing short discrete logarithms recently introduced by Ekerå and Håstad. By carefully analyzing the probability distribution induced by the algorithm, we show that its success probability is higher than previously reported. Inspired by our improved understanding of the distribution, we propose an improved post-processing algorithm that is practical, enables better tradeoffs, and requires fewer runs than the original post-processing algorithm. To prove these claims, we construct a classical simulator for the quantum algorithm by sampling the probability distribution it induces for given logarithms. This simulator is in itself a key contribution. We use it to demonstrate that our quantum algorithm achieves an advantage over Shor's algorithms, not only in each individual run, but also overall, when targeting cryptographically relevant instances of RSA and Diffie-Hellman with short exponents.
Differential Attacks on LILLIPUT Cipher
In SAC 2013, Berger et al. defined Extended Generalized Feistel Networks (EGFN) and analyzed their security. Later, they proposed a cipher based on this structure: LILLIPUT. Impossible differential attacks and integral attacks have already been mounted on LILLIPUT. We propose a tool that has found classical, impossible, and improbable differential attacks using the variance method. It has highlighted unusual differential conditions that lead to attacks of efficient complexity. Moreover, this is the first time the generic variance method has been applied to a concrete cipher.
A Ciphertext-Size Lower Bound for Order-Preserving Encryption with Limited Leakage
We consider a recent security definition of Chenette, Lewi, Weis, and Wu for order-revealing encryption (ORE) and order-preserving encryption (OPE) (FSE 2016). Their definition says that the comparison of two ciphertexts should leak only the index of the most significant bit on which they differ. While their work achieves ORE with short ciphertexts that expand the plaintext by a factor of approximately 1.58, it only achieves OPE with longer ciphertexts that expand the plaintext by a security-parameter factor. We give evidence that this gap between ORE and OPE is inherent, by proving that any OPE meeting the information-theoretic version of their security definition (for instance, in the random oracle model) must have ciphertext length close to that of their construction. We extend our result to identify an abstract security property of any OPE that results in the same lower bound.
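The leakage profile in the CLWW definition, the index of the most significant differing bit, is easy to state concretely. The helper below is our illustration of that leakage function, not the paper's construction.

```python
def msdb(x, y, nbits=8):
    """Index, counted from the most significant bit, of the first bit on
    which x and y differ; None if x == y. This is exactly the information
    the CLWW definition allows a comparison of two ciphertexts to leak."""
    for i in range(nbits - 1, -1, -1):
        if ((x >> i) & 1) != ((y >> i) & 1):
            return nbits - 1 - i
    return None
```

For example, 12 = 00001100 and 10 = 00001010 first differ at bit position 5 from the top of an 8-bit encoding, so a comparison may reveal that index (and the order) but nothing else about the plaintexts.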
Detection of cryptographic algorithms with grap
The disassembled code of an executable program can be seen as a graph representing the possible sequence of instructions (Control Flow Graph). grap is a YARA-like tool, completely open-source, and able to detect graph patterns, defined by the analyst, within an executable program.
We used grap to detect cryptographic algorithms: we created patterns for AES and ChaCha20 that are based on parts of the assembly code produced by compiling popular implementations (available in LibreSSL and libsodium). Our approach is thus based on the algorithms and their structure and does not rely on constant detection.
Solida: A Blockchain Protocol Based on Reconfigurable Byzantine Consensus
The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solida, a decentralized blockchain protocol based on reconfigurable Byzantine consensus augmented by proof-of-work. Solida improves on Bitcoin in confirmation time, and provides safety and liveness assuming the adversary controls less than (roughly) one-third of the total mining power.
Risky Traitor Tracing and New Differential Privacy Negative Results
In this work we seek to construct collusion-resistant traitor tracing systems with small ciphertexts
from standard assumptions that also move toward practical efficiency. In our approach we will hold
steadfast to the principle of collusion resistance, but relax the requirement on catching a traitor from
a successful decoding algorithm. We define a -risky traitor tracing system as one where the probability of identifying
a traitor is times the probability that a successful box is produced. We then go on to show
how to build such systems from prime order bilinear groups with assumptions close to those used in prior works.
Our core system achieves, for any , ciphertexts consisting of group elements
and decryption requiring pairing operations.
At first glance the utility of such a system might seem questionable, since the we achieve for short ciphertexts
is relatively small. Indeed, an attacker in such a system can more likely than not get away with producing a decoding box.
However, we believe this approach to be viable for four reasons:
1. A risky traitor tracing system will provide deterrence against risk averse attackers. In some settings the
consequences of being caught might bear a high cost and an attacker will have to weigh his utility of producing a
decryption box against the expected cost of being caught.
2. Consider a broadcast system where we want to support low overhead broadcast encrypted communications, but
will periodically allow for a more expensive key refresh operation. We refer to an adversary produced algorithm that
maintains the ability to decrypt across key refreshes as a persistent decoder. We show that if we employ a risky traitor
tracing system in this setting, even for a small , we can amplify the chance of catching such a ``persistent decoder'' to
be negligibly close to 1.
3. In certain resource-constrained settings, risky traitor tracing provides a best-effort tracing capability where
there are no other collusion-resistant alternatives. For instance, suppose we had to support 100K users
over a radio link that had just 10KB of additional resources for extra ciphertext overhead. None of the
existing bilinear map systems can fit in these constraints. On the other
hand a risky traitor tracing system provides a spectrum of tracing probability versus overhead tradeoffs and can
be configured to at least give some deterrence in this setting.
4. Finally, we can derive impossibility results for differential privacy from -risky traitor tracing. Since our
ciphertexts are short ( ), we get a negative result that matches what one would obtain by plugging
the obfuscation-based tracing solution of Boneh and Zhandry (CRYPTO 2014) into the prior impossibility result
of Dwork et al. (STOC 2009).
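The amplification claimed in point 2 above is simple probability arithmetic: if each key refresh gives an independent chance f of catching a persistent decoder, then after n refreshes the decoder evades with probability only (1 - f)^n. A quick numeric check, with f chosen arbitrarily for illustration:

```python
# Amplifying a small per-refresh tracing probability f over n key refreshes.
f = 0.001                                  # illustrative per-refresh catch probability
catch_prob = lambda n: 1 - (1 - f) ** n    # caught at least once within n refreshes
```

Even for f = 0.001, ten thousand refreshes drive the catch probability above 0.9999, which is the sense in which a small-f system still deters persistent decoders.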
A Constant-Size Signature Scheme with a Tighter Reduction from the CDH Assumption
We present a signature scheme with the tightest security reduction among known constant-size signature schemes secure under the computational Diffie-Hellman (CDH) assumption. Reducing the security-reduction loss of a cryptosystem is important because it enables choosing a smaller security parameter without compromising security, which in turn yields smaller constant-size signatures and faster computation. The tightest security reduction so far from the CDH assumption is , presented by Hofheinz et al., where is the number of signing queries. They also proved that a security loss of is optimal if signature schemes are ``re-randomizable''. In this paper, we revisit the non-re-randomizable signature scheme proposed by Bohl et al. Their signature scheme is the first that is fully secure under the CDH assumption and has a compact public key; however, it was constructed with polynomial-order security-reduction loss. We first construct a new existentially unforgeable against extended random-message attack (EUF-XRMA) secure scheme based on Bohl et al.'s scheme, which has a tighter security reduction of to the CDH assumption, where is the number of group elements in a verification key. We then transform the EUF-XRMA secure signature scheme into an existentially unforgeable against adaptively chosen-message attack (EUF-CMA) secure one using Abe et al.'s technique. This construction uses no pseudorandom function, which would otherwise increase the reduction loss, so the above reduction loss is achieved. Moreover, a tag can be generated more efficiently than in Bohl et al.'s signature scheme, resulting in less computation.
Consequently, our EUF-CMA secure scheme has a tighter security reduction to the CDH assumption than any previous scheme.
Hardware Aspects of Montgomery Modular Multiplication
This chapter compares Peter Montgomery's modular multiplication method
with traditional techniques for suitability on hardware platforms. It also covers systolic array implementations and side channel leakage.
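For readers new to the method, Montgomery reduction replaces the trial division in modular multiplication with shifts and masks by a power-of-two radix, which is what makes it attractive in hardware. A minimal big-integer sketch with a toy modulus (hardware versions operate word by word, often as systolic arrays):

```python
def montgomery_setup(n, r_bits):
    r = 1 << r_bits
    # n' such that n * n' == -1 (mod r); exists because n is odd
    n_inv = pow(-n, -1, r)          # modular inverse (Python 3.8+)
    return r, r_bits, n_inv

def redc(t, n, r, r_bits, n_inv):
    """Montgomery reduction: returns t * r^-1 mod n, for 0 <= t < r*n."""
    m = (t * n_inv) & (r - 1)       # m = t * n' mod r: a mask, not a division
    u = (t + m * n) >> r_bits       # t + m*n is exactly divisible by r
    return u - n if u >= n else u   # single conditional subtraction

def mont_mul(a, b, n, r, r_bits, n_inv):
    """Multiply in Montgomery form: (a*r mod n) * (b*r mod n) -> a*b*r mod n."""
    return redc(a * b, n, r, r_bits, n_inv)

n = 97                                        # odd toy modulus
r, r_bits, n_inv = montgomery_setup(n, 8)     # radix r = 256 > n, gcd(r, n) = 1
to_mont = lambda x: (x * r) % n               # enter Montgomery form
from_mont = lambda x: redc(x, n, r, r_bits, n_inv)

a, b = 55, 73
prod = from_mont(mont_mul(to_mont(a), to_mont(b), n, r, r_bits, n_inv))
```

The one-time conversion cost into and out of Montgomery form amortizes over long exponentiation chains, which is why the method dominates RSA and ECC hardware.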
Fast Homomorphic Evaluation of Deep Discretized Neural Networks
The rise of machine learning as a service multiplies scenarios where one faces a privacy dilemma: either sensitive user data must be revealed to the entity that evaluates the cognitive model (e.g., in the Cloud), or the model itself must be revealed to the user so that the evaluation can take place locally.
Fully Homomorphic Encryption (FHE) offers an elegant way to reconcile these conflicting interests in the Cloud-based scenario and also preserve non-interactivity.
However, due to the inefficiency of existing FHE schemes, most applications prefer to use Somewhat Homomorphic Encryption (SHE), where the complexity of the computation to be performed has to be known in advance, and the efficiency of the scheme depends on this global complexity.
In this paper, we present a new framework for homomorphic evaluation of neural networks, which we call FHE-DiNN, whose complexity is strictly linear in the depth of the network and whose parameters can be set beforehand.
To obtain this scale-invariance property, we rely heavily on the bootstrapping procedure.
We refine the recent FHE construction by Chillotti et al. (ASIACRYPT 2016) in order to increase the message space and apply the sign function (that we use to activate the neurons in the network) during the bootstrapping.
We obtain empirical results, using the TFHE library as a starting point, and classify encrypted images from the MNIST dataset with more than 96% accuracy in less than 1.7 seconds.
Finally, as a side contribution, we analyze and introduce some variations to the bootstrapping technique of Chillotti et al. that offer an improvement in efficiency at the cost of increasing the storage requirements.
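A discretized neural network of the kind FHE-DiNN evaluates is just integer weights with a sign activation after each layer. The plaintext forward pass below (weights and sizes invented for illustration) shows the function that is computed homomorphically, with the sign applied during each bootstrapping:

```python
def sign(v):
    # Sign activation applied after every layer (the function FHE-DiNN
    # evaluates during bootstrapping); sign(0) is mapped to +1 here.
    return 1 if v >= 0 else -1

def dinn_forward(x, layers):
    for W in layers:                     # each layer is an integer weight matrix
        x = [sign(sum(w * xi for w, xi in zip(row, x))) for row in W]
    return x

W1 = [[1, -2, 1], [2, 1, -1]]            # 3 inputs -> 2 hidden units
W2 = [[1, 1], [-1, 2]]                   # 2 hidden units -> 2 outputs
out = dinn_forward([1, -1, 1], [W1, W2])
```

Because every neuron's output is again in {-1, +1}, ciphertext noise is reset by the bootstrapped sign at each layer, which is where the depth-linear, scale-invariant cost comes from.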
The Discrete-Logarithm Problem with Preprocessing
This paper studies discrete-log algorithms that use preprocessing. In our model, an adversary may use a very large amount of precomputation to produce an "advice" string about a specific group (e.g., NIST P-256). In a subsequent online phase, the adversary's task is to use the preprocessed advice to quickly compute discrete logarithms in the group. Motivated by surprising recent preprocessing attacks on the discrete-log problem, we study the power and limits of such algorithms. In particular, we focus on generic algorithms -- these are algorithms that operate in every cyclic group. We show that any generic discrete-log algorithm with preprocessing that uses an -bit advice string, runs in online time , and succeeds with probability , in a group of prime order , must satisfy .
Our lower bound, which is tight up to logarithmic factors, uses a synthesis of incompressibility techniques and classic methods for generic-group lower bounds. We apply our techniques to prove related lower bounds for the CDH, DDH, and multiple-discrete-log problems.
Finally, we demonstrate two new generic preprocessing attacks: one for the multiple-discrete-log problem and one for certain decisional-type problems in groups. This latter result demonstrates that, for generic algorithms with preprocessing, distinguishing tuples of the form from random is much easier than the discrete-log problem.
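To make the preprocessing model concrete, here is a toy attack in the classic baby-step giant-step style: the offline phase stores an advice table of S giant steps for a specific group, and the online phase finds any logarithm in at most T group operations. This simple variant only achieves S * T on the order of the group size N, weaker than the S * T^2 style tradeoff the paper's generic bounds concern, and the group below is an illustrative toy.

```python
# Offline (preprocessing) phase: "advice" about the specific group <2> in Z_101^*.
p, g = 101, 2            # 2 has multiplicative order 100 modulo 101
N = 100                  # group order
T = 10                   # online step budget; advice size S = N // T + 1
advice = {pow(g, i * T, p): i for i in range(N // T + 1)}  # giant steps g^(i*T) -> i

def online_dlog(h):
    # Online phase: take baby steps h, h*g, h*g^2, ... until a stored giant
    # step is hit; then the exponent is recovered by simple arithmetic.
    for j in range(T):
        val = (h * pow(g, j, p)) % p
        if val in advice:
            return (advice[val] * T - j) % N
    return None

x = online_dlog(pow(g, 73, p))
```

The advice depends only on the group and generator, not on the target, which is exactly the separation between offline and online work that the lower bound of the paper constrains.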
Manifold Learning Towards Masking Implementations: A First Study
Linear dimensionality reduction plays a very important role in side-channel attacks, but it is of no help against the non-linear leakage of masking implementations. Increasing the masking order makes the attack complexity grow exponentially, which makes research on non-linear dimensionality reduction very worthwhile; however, related work is scarce. A kernel function was first introduced into Kernel Discriminant Analysis (KDA) at CARDIS 2016 to realize non-linear dimensionality reduction, a milestone for attacking masked implementations. However, KDA is supervised and noise-sensitive, and several parameters and a specialized kernel function need to be set and customized. The kernel function, the parameters, and the training results all greatly influence the attack efficiency. In this paper, the high-dimensional non-linear leakage of a masking implementation is treated as a high-dimensional manifold, and manifold learning is introduced into side-channel attacks for the first time to realize non-linear dimensionality reduction. Several classical and practical manifold learning solutions, such as ISOMAP, Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE), are given. Experiments are performed on simulated unprotected, first-order, and second-order masking implementations. Compared with supervised KDA, the manifold learning schemes introduced here are unsupervised and require fewer parameters, which makes manifold-learning-based non-linear dimensionality reduction simple and efficient for attacking masked implementations.
Fuzzy Password-Authenticated Key Exchange
Consider key agreement by two parties who start out knowing a common secret (which we refer to as “pass-string”, a generalization of “password”), but face two complications: (1) the pass-string may come from a low-entropy distribution, and (2) the two parties’ copies of the pass-string may have some noise, and thus not match exactly. We provide the first efficient and general solutions to this problem that enable, for example, key agreement based on commonly used biometrics such as iris scans.
The problem of key agreement with each of these complications individually has been well studied in the literature. Key agreement from low-entropy shared pass-strings is achieved by password-authenticated key exchange (PAKE), and key agreement from noisy but high-entropy shared pass-strings is achieved by information-reconciliation protocols as long as the two secrets are “close enough.” However, the problem of key agreement from noisy low-entropy pass-strings has never been studied.
We introduce (universally composable) fuzzy password-authenticated key exchange (fPAKE), which solves exactly this problem. fPAKE does not have any entropy requirements for the pass-strings, and enables secure key agreement as long as the two pass-strings are “close” for some notion of closeness. We also give two constructions. The first construction achieves our fPAKE definition for any (efficiently computable) notion of closeness, including those that could not be handled before even in the high-entropy setting. It uses Yao’s garbled circuits in a way that is only two times more costly than their use against semi-honest adversaries, but that guarantees security against malicious adversaries. The second construction is more efficient, but achieves our fPAKE definition only for pass-strings with low Hamming distance. It builds on very simple primitives: robust secret sharing and PAKE.
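One of the "very simple primitives" behind the second construction can be illustrated with plain Shamir sharing over a prime field; robust secret sharing adds integrity on top of this, and the parameters below are only illustrative.

```python
import random

# Shamir t-of-n secret sharing over GF(P): any t shares reconstruct the secret,
# any t-1 shares reveal nothing about it.
P = 2**61 - 1   # a Mersenne prime used as the field modulus (illustrative)

def share(secret, t, n, rng=random.Random(42)):
    # Random degree-(t-1) polynomial with constant term = secret
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(123456789, t=3, n=5)
secret = reconstruct(shares[:3])
```

In the Hamming-distance fPAKE, roughly speaking, agreement on enough positions of the pass-string plays the role of holding enough valid shares.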
A Systematic Evaluation of Profiling Through Focused Feature Selection
Profiled side-channel attacks consist of several steps. An important, but sometimes ignored, step is the selection of points of interest (features) within side-channel measurement traces. The large majority of related works start their analyses with the assumption that the features are preselected. Contrary to this assumption, here we concentrate on the feature selection step. We investigate how advanced feature selection techniques from the machine learning domain can be used to improve attack efficiency, and to this end we provide a systematic evaluation of the methods of interest. The experiments are performed on several real-world data sets containing software and hardware implementations of AES, including the random delay countermeasure. Our results show that wrapper and hybrid feature selection methods perform extremely well over a wide range of test scenarios and numbers of selected features. We highlight L1 regularization (a wrapper approach) and a linear support vector machine (SVM) with recursive feature elimination applied after a chi-square filter (a hybrid approach), both of which perform well in terms of accuracy and guessing entropy. Finally, we show that the use of appropriate feature selection techniques is more important for attacks on high-noise data sets, including those with countermeasures, than on low-noise ones.
EzPC: Programmable, Efficient, and Scalable Secure Two-Party Computation for Machine Learning
We present EzPC: a secure two-party computation (2PC) framework that generates efficient 2PC protocols from high-level, easy-to-write programs. EzPC provides formal correctness and security guarantees while maintaining performance and scalability. Previous language frameworks, such as CBMC-GC, ObliVM, SMCL, and Wysteria, generate protocols that use either arithmetic or boolean circuits exclusively. Our compiler is the first to generate protocols that combine arithmetic sharing and garbled circuits for better performance. We empirically demonstrate that the protocols generated by our framework match or outperform (by up to 19x) recent works that provide hand-crafted protocols for functionalities such as secure prediction and matrix factorization.
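The arithmetic-sharing half of such a combined protocol is particularly simple. A two-party additive-sharing sketch over Z_2^64 (our own minimal illustration, not EzPC's actual protocol) shows why linear operations need no communication at all:

```python
import random

# Additive secret sharing over Z_2^64: x is split into two random-looking
# shares held by the two parties; neither share alone reveals x.
M = 2**64
rng = random.Random(0)

def share(x):
    r = rng.randrange(M)
    return r, (x - r) % M        # party 0 holds r, party 1 holds x - r

def add_shares(a, b):
    # Addition of shared values is local: each party adds its own shares.
    return (a[0] + b[0]) % M, (a[1] + b[1]) % M

def reveal(s):
    return (s[0] + s[1]) % M

a, b = share(20), share(22)
result = reveal(add_shares(a, b))
```

Multiplication of shared values, by contrast, requires interaction (e.g., via Beaver triples or oblivious transfer), which is one reason a compiler benefits from choosing between arithmetic sharing and garbled circuits per operation.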
Cryptographic Pairings
This article appeared as Chapter 9 of the book "Topics in Computational Number Theory inspired by Peter L. Montgomery", edited by Joppe W. Bos and Arjen K. Lenstra and published by Cambridge University Press. See https://www.cambridge.org/9781107109353.
Hardness of Non-Interactive Differential Privacy from One-Way Functions
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset consisting of a small number of elements from some large data universe, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family.
Ignoring computational constraints, this problem can be solved even when the data universe and the query family are exponentially large and the dataset is just a small polynomial in size; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation (iO), no efficient differentially private algorithm exists when the data universe and the query family can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.
In this work we show that, if one-way functions exist, there is no general-purpose differentially private algorithm that works when the data universe and the query family are exponentially large and the dataset is an arbitrary polynomial in size. In fact, we show that this result holds even if the data universe is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey.
Formal Analysis of a TTP-Free Blacklistable Anonymous Credentials System (Full Version)
This paper first introduces a novel security definition for BLAC-like schemes (BLAC stands for TTP-free BLacklistable Anonymous Credentials) in the symbolic model using the applied pi calculus, which is suitable for automated reasoning via formal analysis tools. We model the definitions of some common security properties: authenticity, non-frameability, mis-authentication resistance and privacy (anonymity and unlinkability). We then demonstrate a case study of these security definitions by modelling and analyzing the BLACR (BLAC with Reputation) system. We verify these security properties with Blanchet's ProVerif and a ZKP (Zero-Knowledge Proof) compiler developed by Backes et al. In particular, we model and analyze the express-lane authentication in the BLACR system. The analysis discovers a known attack that can be carried out by any potential user; the attack allows a user to escape revocation at will. We provide a revised variant that can be proved secure by ProVerif, which also indicates that the fix provided by ExBLACR (Extending BLACR) is incorrect.
FFSSE: Flexible Forward Secure Searchable Encryption with Efficient Performance
Searchable Symmetric Encryption (SSE) has been widely applied in the design of encrypted databases for exact queries, or even range queries, in practice. In spite of its efficiency and functionality, it always suffers from information leakage. Some recent attacks point out that forward privacy is the desirable security goal; however, only a very small number of schemes achieve this security. In this paper, we propose a new forward secure SSE scheme, denoted "FFSSE", which has the best performance in the literature, namely fast search, fast token generation and O(1) update complexity. It also supports both add and delete operations in a single instance. Technically, we exploit a novel "key-based blocks chain" technique based on symmetric cryptographic primitives, which can be deployed directly in arbitrary index tree structures or key-value structures to provide forward privacy. In order to reduce storage on the client side, we further propose an efficient permutation technique (with a function similar to a trapdoor permutation) to support the reconstruction of search tokens. Experiments show that our scheme is 4 times, 300 times and 300 times faster than the state-of-the-art forward-private SSE scheme (proposed at CCS 2016) in search, update and token generation, respectively. Security analysis shows that our scheme is secure.
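The forward-privacy idea behind key-based block chaining can be illustrated with a minimal toy index. This is a generic hash-chain sketch, not the paper's construction: each update uses a fresh random key and stores the previous key inside the new encrypted block, so a server holding an old search token cannot compute the addresses of future additions.

```python
import hashlib
import os

def H(*parts):
    """Hash helper used to derive block addresses and one-time pads."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

ZERO = b"\x00" * 32  # sentinel "previous key" marking the first block

class ToyForwardIndex:
    """Toy key-based block chain (illustrative only, not FFSSE itself)."""

    def __init__(self):
        self.server = {}   # server side: address -> ciphertext block
        self.client = {}   # client side: keyword -> latest chain key

    def add(self, keyword, file_id):
        prev = self.client.get(keyword, ZERO)
        key = os.urandom(32)                 # fresh key per update
        block = file_id.encode().ljust(16, b"\x00") + prev   # 48 bytes
        pad = H(key, b"pad-1") + H(key, b"pad-2")[:16]
        self.server[H(key, b"addr")] = xor(block, pad)
        self.client[keyword] = key           # only the newest key is kept

    def search(self, keyword):
        key, out = self.client.get(keyword, ZERO), []
        while key != ZERO:                   # walk the chain backwards
            pad = H(key, b"pad-1") + H(key, b"pad-2")[:16]
            block = xor(self.server[H(key, b"addr")], pad)
            out.append(block[:16].rstrip(b"\x00").decode())
            key = block[16:]                 # previous key, recovered in place
        return out
```

Searching releases the newest key, which unlocks the whole chain back to the first block; additions made after a search use fresh random keys that old tokens cannot reach.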
Efficient provable-secure NTRUEncrypt over any cyclotomic field
NTRUEncrypt is a fast lattice-based cryptosystem and a viable alternative to the existing public key schemes. The existing provable-secure NTRUEncrypts are limited to the cyclotomic fields they work on, namely prime-power cyclotomic fields. This is a concern in light of recently proposed subfield attacks. Moreover, the modulus used in computation and the security parameters depend heavily on the choice of plaintext space. These disadvantages restrict the applications of NTRUEncrypt.
In this paper, we give a new provably secure NTRUEncrypt in the standard model under the canonical embedding over any cyclotomic field. We give a reduction from a simple variant of RLWE, a version of RLWE with discretized error distribution, and hence from worst-case ideal lattice problems, to our NTRUEncrypt. In particular, we obtain a unified bound on the reduction parameters and the modulus for all choices of plaintext space, so that our NTRUEncrypt can encrypt more bits in one encryption with higher efficiency and stronger security. Furthermore, our scheme's decryption algorithm succeeds with higher probability than in previous works, making our scheme more practical in theory.
A new chosen IV statistical distinguishing framework to attack symmetric ciphers, and its application to ACORN-v3 and Grain-128a
We propose a new attack framework based upon cube testers and d-monomial tests. The d-monomial test is a general framework for comparing the ANF of a symmetric cipher's output with the ANF of a random Boolean function. In the d-monomial test, the focus is on the frequency of special monomials in the ANF of Boolean functions; in the proposed framework, the focus is on the truth table.
We attack ACORN-v3 and Grain-128a and demonstrate the efficiency of our framework. We show how it is possible to apply a distinguishing attack for up to 676 initialization rounds of ACORN-v3 and 171 initialization rounds of Grain-128a using our framework. The attack on ACORN-v3 is the best practical attack (and better results can be obtained by using more computing power).
One can apply distinguishing attacks to black box symmetric ciphers by the proposed framework, and we suggest some guidelines to make it possible to improve the attack by analyzing the internal structure of ciphers. The framework is applicable to all symmetric ciphers and hash functions. We discuss how it can reveal weaknesses that are not possible to find by other statistical tests. The attacks were practically implemented and verified.
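The truth-table flavour of the framework can be sketched as follows. This toy distinguisher, a simplifying illustration rather than the attack on ACORN-v3 or Grain-128a, computes a superpoly truth table by XOR-summing a black-box Boolean function over a cube and flags the function as non-random when the table's weight deviates far from balanced.

```python
import hashlib
from itertools import product

def derivative(f, cube, assign):
    """XOR f over all settings of the cube bits (a higher-order derivative);
    the result is one bit of the superpoly's truth table."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube)):
        x = list(assign)
        for i, b in zip(cube, bits):
            x[i] = b
        acc ^= f(tuple(x))
    return acc

def superpoly_ones(f, cube, free, n):
    """Count ones in the superpoly truth table over the free bits."""
    ones = 0
    for bits in product((0, 1), repeat=len(free)):
        assign = [0] * n
        for i, b in zip(free, bits):
            assign[i] = b
        ones += derivative(f, cube, assign)
    return ones

def distinguishes(f, cube, free, n, threshold):
    """Flag f as non-random if the truth-table weight deviates too far
    from the balanced value expected of a random Boolean function."""
    half = 2 ** (len(free) - 1)
    return abs(superpoly_ones(f, cube, free, n) - half) > threshold

def toy_cipher(x):
    # degree-2 toy "cipher": every 3rd-order derivative vanishes
    return (x[0] & x[1]) ^ (x[2] & x[3]) ^ (x[4] & x[5])

def random_oracle(x):
    # stand-in for an ideal random Boolean function
    return hashlib.sha256(bytes(x)).digest()[0] & 1
```

A 3-bit cube kills the degree-2 toy cipher completely (its superpoly truth table is all zeros), while the hash-based stand-in stays close to balanced.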
ID-HABE: Incorporating ID-based Revocation, Delegation, and Authority Hierarchy into Attribute-Based Encryption
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been proposed to implement fine-grained access control. Data owners encrypt data with a certain access policy so that only data users whose attributes satisfy the access policy can decrypt the ciphertext. A user can be automatically assigned an access privilege based on whether his/her attributes satisfy a given access policy described by attributes and their logical relations. In order to provide more flexible policy-based access control, attribute-based revocation approaches have been proposed that provide NOT logic on attributes to allow attribute-based revocation. However, previous solutions increase the attribute management overhead when each user's ID is treated as an attribute to enable more precise revocation at the individual user level. To address this issue, this paper presents an ID-ABE scheme in which a user's ID is incorporated into the key generation procedure to allow user-ID-based revocation. In addition to ID-based revocation, ID-ABE also presents a hierarchical identity structure that builds a delegation framework enabling group-based revocation. Finally, we evaluate the performance of the proposed scheme in terms of computation, storage and communication overhead, which shows the practical value of the solution for secure data sharing applications.
HIR-CP-ABE: Hierarchical Identity Revocable Ciphertext-Policy Attribute-Based Encryption for Secure and Flexible Data Sharing
Ciphertext Policy Attribute-Based Encryption (CP-ABE) has been proposed to implement the attribute-based access control model. In CP-ABE, data owners encrypt the data with a certain access policy such that only data users whose attributes satisfy the access policy can obtain the corresponding private decryption key from a trusted authority. Therefore, CP-ABE is considered a promising fine-grained access control mechanism for data sharing where no centralized trusted third party exists, for example in cloud computing, mobile ad hoc networks (MANET), Peer-to-Peer (P2P) networks, information centric networks (ICN), etc. As promising as it is, user revocation is a cumbersome problem in CP-ABE, thus impeding its application in practice. To solve this problem, we propose a new scheme named HIR-CP-ABE, which implements hierarchical identity-based user revocation from the perspective of encryption. In particular, the revocation is implemented by data owners directly without any help from a third party. Compared with previous attribute-based revocation solutions, our scheme provides the following desirable properties. First, the trusted authority can be offline after system setup and key distribution, making the scheme applicable in mobile ad hoc networks, P2P networks, etc., where nodes are unable to connect to the trusted authority after system deployment. Second, a user does not need to update the private key when user revocation occurs; therefore, key management overhead is much lower in HIR-CP-ABE for both the users and the trusted authority. Third, the revocation mechanism makes it possible to revoke a group of users affiliated with the same organization in a batch without influencing any other users. To the best of our knowledge, HIR-CP-ABE is the first CP-ABE scheme to provide affiliation-based revocation functionality for data owners.
Through security analysis and performance evaluation, we show that the proposed scheme is secure and efficient in terms of computation, communication and storage.
IR-CP-ABE: Identity Revocable Ciphertext-Policy Attribute-Based Encryption for Flexible Secure Group-Based Communication
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is an access control mechanism over encrypted data and well suited for secure group-based communication. However, it also suffers from the following problem, i.e., it is impossible to build all desired groups. For example, if two group members have exactly the same attributes, how does one construct a group including only one of the two members? Obviously, attributes alone cannot distinguish these two members, so existing CP-ABE solutions do not work. To address this issue, in this paper, we present a new CP-ABE scheme (called IR-CP-ABE) that incorporates an Identity-based Revocation capability. With IR-CP-ABE, an access policy is constructed from not only group members' attributes but also their identities. To build a group, first build a candidate group based on all desired group members' attributes; second, remove undesired members by revoking their identities. By evaluating the security and efficiency of a proposed construction, we show that the IR-CP-ABE scheme is secure and efficient for practical applications.
Security Analysis of a Dynamic Threshold Secret Sharing Scheme Using Linear Subspace Method
A dealer-free and non-interactive dynamic threshold secret sharing scheme was proposed by Harn et al. in 2015. In this scheme, a (t, n) secret sharing scheme can turn into an (m, n) scheme in the secret reconstruction phase, where m is the number of participating shareholders. It has been claimed that the secrecy of the shares and of the secret is unconditionally preserved.
This paper provides a security analysis of this scheme in two directions. First, we show that the scheme does not have the dynamic property, i.e., any t + 1 released values are sufficient to reconstruct the secret, even when the agreed updated threshold is larger. Second, we show that any t + 1 released values are sufficient to forge the released value of a non-participating shareholder.
The technique we employed for our analysis is the linear subspace method, which measures the information leaked by the known parameters of the scheme by computing the dimension of the linear subspace spanned by these parameters. This method has been shown to be capable of cryptanalyzing secret-sharing-based schemes whose security relies on keeping the coefficients of the underlying polynomial(s) secret.
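The linear subspace method described above can be illustrated with a small exact-arithmetic rank computation. In this simplified sketch (not the paper's full analysis), each released share of a degree-t polynomial corresponds to the evaluation vector (1, x, ..., x^t); once the span of the known vectors reaches dimension t + 1, the whole coefficient vector, and hence the secret, is determined.

```python
from fractions import Fraction

def rank(rows):
    """Dimension of the span of the given vectors, via exact Gaussian
    elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def share_vector(x, t):
    """A share f(x) of a degree-t polynomial is the dot product of the secret
    coefficient vector with (1, x, ..., x^t)."""
    return [x ** i for i in range(t + 1)]
```

For t = 2, three distinct evaluation points already span the full 3-dimensional coefficient space, so the constant term (the secret) leaks; two points leave one dimension of uncertainty.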
The Strength of Weak Randomization: Efficiently Searchable Encryption with Minimal Leakage
Efficiently searchable and easily deployable encryption schemes enable an untrusted, legacy service such as a relational database engine to perform searches over encrypted data. The ease with which such schemes can be deployed on top of existing services makes them especially appealing in operational environments where encryption is needed but it is not feasible to replace large infrastructure components like databases or document management systems. Unfortunately, all previously known approaches for efficiently searchable encryption are vulnerable to inference attacks, where an adversary can use knowledge of the distribution of the data to recover the plaintext with high probability.
In this paper, we present the first efficiently searchable, easily deployable database encryption scheme that is provably secure against inference attacks even when used with real, low-entropy data. Ours is also the only efficiently searchable construction that provides any provable security for protecting multiple related attributes (columns) in the same database. Using this ESE construction as a building block, we give an efficient construction for performing range queries over encrypted data.
We implemented our constructions in Haskell and used them to query encrypted databases of up to 10 million records. In experiments with a local Postgres database and with a Google Cloud Platform database, the response time for our encrypted queries is not excessively slower than for plaintext queries. With the use of parallel query processing, our encrypted queries can achieve similar and in some cases superior performance to queries on
the plaintext.
Non-malleable Randomness Encoders and their Applications
Non-malleable Codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs (ITCS 2010), serve the purpose of preventing "related tampering" of encoded messages. The most popular tampering model considered is the 2-split-state model, where a codeword consists of two states, each of which can be tampered independently. While NMCs in the 2-split-state model provide the strongest security guarantee, despite much research in the area we only know how to build them with poor rate. However, in many applications of NMCs one only needs to be able to encode randomness, i.e., security is not required to hold for arbitrary, adversarially chosen messages. For example, in applications of NMCs to tamper-resilient security, the messages that are encoded are typically randomly generated secret keys. To exploit this, in this work, we introduce the notion of "Non-malleable Randomness Encoders" (NMREs) as a relaxation of NMCs in the following sense: NMREs output a random message along with its corresponding non-malleable encoding.
Our main result is the construction of a split-state, constant-rate NMRE. While NMREs are interesting in their own right and can be directly used in applications such as the construction of tamper-resilient cryptographic primitives, we also show how to use them, in a black-box manner, to build split-state (standard) NMCs with constant rate. This improves both the number of states and the rate of existing constant-rate NMCs.
IND-CCA-secure Key Encapsulation Mechanism in the Quantum Random Oracle Model, Revisited
With the gradual progress of NIST's post-quantum cryptography standardization, the Round-1 KEM proposals have been posted for the public to discuss and evaluate.
Among the IND-CCA-secure KEM constructions, most first introduce an IND-CPA-secure (or OW-CPA-secure) public-key encryption (PKE) scheme and then apply generic transformations to it.
All these generic transformations are constructed in the random oracle model (ROM).
To fully assess the post-quantum security, security analysis in the quantum random oracle model (QROM) is preferred.
However, current works either lack a QROM security proof or follow Targhi and Unruh's proof technique (TCC-B 2016) and modify the original transformations by adding an additional hash to the ciphertext to achieve QROM security.
In this paper, by using a novel proof technique, we present QROM security reductions for two widely used generic transformations without suffering any ciphertext overhead.
Meanwhile, the security bounds are much tighter than the ones derived by utilizing Targhi and Unruh's proof technique.
Thus, our QROM security proofs not only provide a solid post-quantum security guarantee for NIST Round-1 KEM schemes, but also
simplify the constructions and reduce the ciphertext sizes.
We also provide QROM security reductions for Hofheinz-Hoevelmanns-Kiltz modular transformations (TCC 2017), which can help to obtain a variety of combined transformations with different requirements and properties.
Analysis of the Bitcoin UTXO set
Bitcoin relies on the Unspent Transaction Output (UTXO) set to efficiently verify newly generated transactions. Every unspent output, no matter its type, age, value or length, is stored in every full node. In this paper we introduce a tool to study and analyze the UTXO set, along with a detailed description of the set format and functionality. Our analysis includes a general view of the set and quantifies the differences between the two existing formats to date. We also provide an accurate analysis of the volume of dust and unprofitable outputs included in the set, the distribution of the block heights at which the outputs were included, and the use of non-standard outputs.
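The notion of an unprofitable output can be made concrete with a simple value-versus-spending-fee rule. This is a simplifying sketch, not the paper's exact criteria: an output is counted as dust/unprofitable when its value does not cover the fee needed to spend it, here using the classic 148-byte P2PKH input size as an assumed constant.

```python
def spend_cost(fee_rate_sat_per_byte, input_size_bytes=148):
    """Fee (in satoshi) needed to spend one output; 148 bytes is the
    classic P2PKH input size, an illustrative assumption."""
    return fee_rate_sat_per_byte * input_size_bytes

def utxo_profile(utxo_values_sat, fee_rate_sat_per_byte):
    """Split a UTXO snapshot into profitable outputs and dust/unprofitable
    ones at a given fee rate, and total the dust value."""
    threshold = spend_cost(fee_rate_sat_per_byte)
    dust = [v for v in utxo_values_sat if v <= threshold]
    return {
        "outputs": len(utxo_values_sat),
        "dust_count": len(dust),
        "dust_value": sum(dust),
    }
```

For example, at 10 sat/byte an output worth 100 satoshi costs far more to spend than it is worth, so it is classified as dust.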
Privately Constraining and Programming PRFs, the LWE Way
*Constrained* pseudorandom functions allow for delegating
``constrained'' secret keys that let one compute the function at
certain authorized inputs---as specified by a constraining
predicate---while keeping the function value at unauthorized inputs
pseudorandom. In the *constraint-hiding* variant, the
constrained key hides the predicate. On top of this,
*programmable* variants allow the delegator to explicitly set the
output values yielded by the delegated key for a particular set of
unauthorized inputs.
Recent years have seen rapid progress on applications and
constructions of these objects for progressively richer constraint
classes, resulting most recently in constraint-hiding constrained PRFs
for arbitrary polynomial-time constraints from Learning With
Errors (LWE) [Brakerski, Tsabary, Vaikuntanathan, and Wee, TCC'17],
and privately programmable PRFs from indistinguishability obfuscation
(iO) [Boneh, Lewi, and Wu, PKC'17].
In this work we give a unified approach for constructing both of the
above kinds of PRFs from LWE with subexponential
approximation factors. Our constructions
follow straightforwardly from a new notion we call a
*shift-hiding shiftable function*, which allows for deriving a
key for the sum of the original function and any desired hidden
shift function. In particular, we obtain the first privately
programmable PRFs from non-iO assumptions.
Proposal for Protocol on a Quorum Blockchain with Zero Knowledge
In this paper, we present an implementation scheme for an RTGS (real-time gross settlement system) on Quorum using the Solidity language. It is heavily inspired by the Schnorr signature protocol for verifying the identity of the participants. We have implemented a distributed ledger solution for Delivery versus Payment that promises increased efficiency and resilience. Our architecture mimics current market structure. To this end, we added an extra layer of security that allows our solution to comply with the requirements of the regulator while enabling competing actors to collaborate on the shared registry. It also leaves room for regulation while still running in a decentralised way with coordinating agents.
We use non-interactive zero-knowledge algorithms, cryptographic protocols with numerous applications in the field of cryptocurrencies. They allow an agent to verify that another agent holds a specific piece of information without the latter ever disclosing it.
For the sake of our experiments, we had to use very small integers in our protocols. These integers are far too small to meet current security standards in finance, although the architectural principles transpose readily to better-performing protocols. We present suggestions for improving our proof of concept and our architecture in the last part.
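The Schnorr-style, non-interactive (Fiat-Shamir) identity proof the abstract alludes to can be sketched with deliberately tiny, insecure integers, in the spirit of the proof of concept. The parameters P, Q, G below are illustrative textbook values, not the paper's.

```python
import hashlib

# toy Schnorr parameters: tiny, insecure integers, illustration only
P, Q, G = 23, 11, 2          # G = 2 has prime order Q = 11 in Z_23*

def keygen(x):
    """Public key y = g^x mod p for secret x in Z_q."""
    return pow(G, x, P)

def prove(x, k, msg):
    """Non-interactive Schnorr proof of knowledge of x, nonce k in Z_q.
    Fiat-Shamir: the challenge is a hash of the commitment and message."""
    r = pow(G, k, P)
    e = int.from_bytes(hashlib.sha256(str((r, msg)).encode()).digest(),
                       "big") % Q
    s = (k + e * x) % Q
    return r, s

def verify(y, msg, proof):
    """Check g^s == r * y^e (mod p), recomputing the challenge e."""
    r, s = proof
    e = int.from_bytes(hashlib.sha256(str((r, msg)).encode()).digest(),
                       "big") % Q
    return pow(G, s, P) == (r * pow(y, e, P)) % P
```

Correctness follows from g^s = g^(k + e·x) = r · y^e (mod p); tampering with the response s breaks the equation.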
Universally Composable Secure Computation with Corrupted Tokens
We introduce the \emph{corrupted token model}. This model generalizes the \emph{tamper-proof token model} proposed by Katz (EUROCRYPT '07) relaxing the trust assumption on the honest behavior of tokens. Our model is motivated by the real-world practice of outsourcing hardware production to possibly corrupted manufacturers. We capture the malicious behavior of token manufacturers by allowing the adversary to corrupt the tokens of honest players at the time of their creation.
We show that under minimal complexity assumptions, i.e., the existence of one-way functions, it is possible to UC-securely realize (a variant of) the tamper-proof token functionality of Katz in the corrupted token model with stateless tokens, assuming that the adversary corrupts at most a bounded fraction of them. We then apply this result to existing multi-party protocols in Katz's model to achieve UC-secure MPC in the corrupted token model assuming only the existence of one-way functions.
Finally, we show how to obtain the above results using tokens of small size that take only short inputs. The technique in this result can also be used to improve the assumption of UC-secure hardware obfuscation recently proposed by Nayak et al. (NDSS '17). While their construction requires the existence of collision-resistant hash functions, we can obtain the same result from only one-way functions. Moreover using our main result we can improve the trust assumption on the tokens as well.
Fairness in an Unfair World: Fair Multiparty Computation from Public Bulletin Boards
Secure multiparty computation allows mutually distrusting parties to compute a function on their private inputs such that nothing but the function output is revealed. Achieving fairness, i.e., that all parties learn the output or no one does, is a long-studied problem with known impossibility results in the standard model if a majority of parties are dishonest.
We present a new model for achieving fairness in MPC against dishonest majority by using public bulletin boards implemented via existing infrastructure such as blockchains or Google's certificate transparency logs. We present both theoretical and practical constructions using either witness encryption or trusted hardware (such as Intel SGX).
Unlike previous works that either penalize an aborting party or achieve weaker fairness notions, we achieve complete fairness using existing infrastructure.
Enter the Hydra: Towards Principled Bug Bounties and Exploit-Resistant Smart Contracts
Bug bounties are a popular tool to help prevent software exploits. Yet, they lack rigorous principles for setting bounty amounts and require high payments to attract economically rational hackers. Rather than claim bounties for serious bugs, hackers often sell or exploit them.
We present the *Hydra Framework*, the first general, principled approach to modeling and administering bug bounties that incentivize bug disclosure. Our key idea is an *exploit gap*, a program transformation that enables runtime detection, and rewarding, of critical bugs. Our framework transforms programs via *N-of-N-version programming*, a variant of classical N-version programming that runs multiple independent program instances.
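The N-of-N-version idea can be sketched as a wrapper that runs independent implementations and treats any divergence as a detected critical bug. The two toy transfer routines below are hypothetical stand-ins (one with a planted validation bug), not contracts from the paper.

```python
class BountyClaim(Exception):
    """Raised when independent versions diverge: the exploit gap fires
    at runtime and the escrowed bounty becomes claimable."""

def hydra(*versions):
    """Wrap N independently written versions of the same routine; run all
    of them and compare their outcomes (values or raised errors)."""
    def run(*args):
        outcomes = []
        for v in versions:
            try:
                outcomes.append(("ok", v(*args)))
            except ValueError as exc:
                outcomes.append(("err", type(exc).__name__))
        if any(o != outcomes[0] for o in outcomes[1:]):
            raise BountyClaim("versions diverged; bounty is claimable")
        tag, val = outcomes[0]
        if tag == "err":
            raise ValueError(val)
        return val
    return run

def transfer_a(balance, amount):
    # version A: full input validation
    if amount < 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

def transfer_b(balance, amount):
    # version B: planted bug, forgets to reject negative amounts
    if amount > balance:
        raise ValueError("invalid amount")
    return balance - amount
```

On honest inputs the versions agree and the call proceeds; the exploit input (a negative amount) makes them diverge, which aborts the call instead of letting the buggy version mint funds.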
We apply the Hydra Framework to *smart contracts*, small programs that execute on blockchains. We show how Hydra contracts greatly amplify the power of bounties to incentivize bug disclosure by economically rational adversaries, establishing the first framework for rigorous economic evaluation of smart contract security. We also model powerful adversaries capable of *bug withholding*, exploiting race conditions in blockchains to claim bounties before honest users can. We present *Submarine Commitments*, a countermeasure of independent interest that conceals transactions on blockchains.
We design a simple, automated version of the Hydra Framework for Ethereum (ethereum.org) and implement two Hydra contracts, an ERC20 standard token and a Monty-Hall game. We evaluate our implementation for completeness and soundness with the official Ethereum virtual machine test suite and live blockchain data.
Secure Deduplication of Encrypted Data: Refined Model and New Constructions
Cloud providers tend to save storage via cross-user deduplication,
while users who care about privacy tend to encrypt their files on the client side.
Secure deduplication of encrypted data (SDoE) is an active research topic.
In this paper, we propose a formal security model for this problem.
We also propose two single-server SDoE protocols and prove their security in our model.
We evaluate their deduplication effectiveness via simulations with realistic datasets.
Promise Zero Knowledge and its Applications to Round Optimal MPC
We devise a new partitioned simulation technique for MPC where the simulator uses different strategies for simulating the view of aborting adversaries and non-aborting adversaries. The protagonist of this technique is a new notion of promise zero knowledge (ZK) where the ZK property only holds against non-aborting verifiers. We show how to realize promise ZK in three rounds in the simultaneous-message model assuming polynomially hard DDH (or QR or N^{th}-Residuosity).
We demonstrate the following applications of our new technique:
-> We construct the first round-optimal (i.e., four round) MPC protocol for general functions based on polynomially hard DDH (or QR or N^{th}-Residuosity).
-> We further show how to overcome the four-round barrier for MPC by constructing a three-round protocol for ``list coin-tossing'' -- a slight relaxation of coin-tossing that suffices for most conceivable applications -- based on polynomially hard DDH (or QR or N^{th}-Residuosity). This result generalizes to randomized input-less functionalities.
Previously, four round MPC protocols required sub-exponential-time hardness assumptions and no multi-party three-round protocols were known for any relaxed security notions with polynomial-time simulation against malicious adversaries.
In order to base security on polynomial-time standard assumptions, we also rely upon a leveled rewinding security technique that can be viewed as a polynomial-time alternative to leveled complexity leveraging for achieving ``non-malleability'' across different primitives.
General purpose integer factoring
This chapter describes the developments since 1970 in general purpose integer factoring and highlights the contributions of Peter L. Montgomery.
This article appeared as Chapter 5 of the book "Topics in Computational Number Theory inspired by Peter L. Montgomery", edited by Joppe W. Bos and Arjen K. Lenstra and published by Cambridge University Press. See www.cambridge.org/9781107109353.
Order-Revealing Encryption: File-Injection Attack and Forward Security
Order-preserving encryption (OPE) and order-revealing encryption (ORE) are among the core ingredients for encrypted database (EDB) systems as secure cloud storage. In this work, we study the leakage of OPE and ORE and their forward security.
We propose generic yet powerful file-injection attacks (FIAs) on OPE/ORE, targeting scenarios with order-by and range queries. The FIA schemes only exploit the ideal leakage of OPE/ORE (in particular, they require no data denseness or frequency information). We also improve their efficiency with frequency statistics using a hierarchical idea, such that high-frequency values are recovered more quickly. Compared with other attacks against OPE/ORE proposed in recent years, our FIA attacks rely upon less demanding conditions and are more effective against systems with data sharing or transfer functions, such as encrypted email systems. We ran experiments on real datasets to test the performance, and the results show that our FIA attacks pose a severe threat to most of the existing OPE and ORE schemes, with high efficiency and a 100% recovery rate.
In order to resist the perniciousness of FIA, we propose a practical compilation framework for achieving forward secure ORE. The compilation framework only uses simple cryptographic tools such as pseudo-random functions, hash functions and trapdoor permutations. It can transform most of the existing OPE/ORE schemes into forward secure ORE schemes while minimizing the extra burden incurred on computation and storage. We also present its security proof and run experiments to analyze its performance.
Improving Side-channel Analysis through Semi-supervised Learning
Profiled side-channel analysis represents the most powerful category of side-channel attacks. In this context, the security evaluator (i.e., attacker) gains access to a profiling device to build a precise model that is used to attack another device in the attacking phase. Mostly, it is assumed that the attacker has significant capabilities in the profiling phase, whereas the attacking phase is very restricted. We step away from this assumption and consider an attacker restricted in the profiling phase, while the attacking phase is less limited. We propose the concept of semi-supervised learning for side-channel analysis, where the attacker uses a small number of labeled measurements from the profiling phase as well as the unlabeled measurements from the attacking phase to build a more reliable model. Our results show that the semi-supervised concept significantly helps the template attack (TA) and its pooled version (TAp). More specifically, in the low-noise scenario, the results for machine learning techniques and TA are often improved when only a small number of measurements is available in the profiling phase, while there is no significant difference in scenarios where the supervised set is large enough for reliable classification. For high-noise scenarios, TAp and multilayer perceptron results improve for the majority of inspected dataset sizes, while for the high-noise scenario with added countermeasures, we show a small improvement for the TAp, Naive Bayes, and multilayer perceptron approaches for most inspected dataset sizes. Current results go in favor of using semi-supervised learning, especially the self-training approach, in side-channel attacks.
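The self-training loop favored above can be sketched with a tiny model. This is an illustrative nearest-centroid toy, not the paper's template attack or multilayer perceptron: a model trained on the few labeled traces repeatedly adopts its most confident predictions on unlabeled traces as pseudo-labels, then retrains.

```python
def centroids(points, labels):
    """Per-class mean of 1-D 'traces' (a stand-in for a real profile)."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, margin=1.0, rounds=5):
    """Self-training: in each round, pseudo-label the unlabeled points whose
    distance margin between the two nearest class centroids is large enough,
    add them to the training set, and recompute the centroids."""
    pts = [x for x, _ in labeled]
    lbs = [y for _, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        c = centroids(pts, lbs)
        keep = []
        for x in pool:
            d = sorted((abs(x - c[y]), y) for y in c)
            if len(d) > 1 and d[1][0] - d[0][0] >= margin:
                pts.append(x)
                lbs.append(d[0][1])   # confident pseudo-label adopted
            else:
                keep.append(x)        # ambiguous point stays unlabeled
        pool = keep
    c = centroids(pts, lbs)
    return lambda x: min(c, key=lambda y: abs(x - c[y]))
```

With one labeled trace per class and a pool of unlabeled traces from two well-separated clusters, the final model classifies fresh points correctly while an ambiguous midpoint is never adopted.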
Lightweight MDS Serial-type Matrices with Minimal Fixed XOR Count (Full version)
Many block ciphers and hash functions require the diffusion property of Maximum Distance Separable (MDS) matrices. Serial matrices with the MDS property obtain a trade-off between area requirement and clock cycle performance to meet the needs of lightweight cryptography. In this paper, we propose a new class of serial-type matrices called Diagonal-Serial Invertible (DSI) matrices with the sparse property. These matrices have a fixed XOR count (contributed by the connecting XORs) which is half that of existing matrices. We prove that for matrices of order 4, our construction gives the matrix with the lowest possible fixed XOR cost. We also introduce the Reversible Implementation (RI) property, which allows the inverse matrix to be implemented using similar hardware resources to the forward matrix, even when the two matrices have different finite field entries. This allows us to search for serial-type matrices which are lightweight in both directions by focusing only on the forward direction. We obtain MDS matrices which outperform existing lightweight (involutory) matrices.
CAMFAS: A Compiler Approach to Mitigate Fault Attacks via Enhanced SIMDization
The trend of supporting wide vector units in general purpose microprocessors suggests opportunities for developing a new and elegant compilation approach to mitigate the impact of faults to cryptographic implementations, which we present in this work.
We propose a compilation flow, CAMFAS, to automatically and selectively introduce vectorization in a cryptographic library, translating a vanilla library into a library with vectorized code that is resistant to glitches. Unlike in traditional vectorization, the proposed compilation flow uses the extent of the vectors to introduce spatial redundancy in the intermediate computations. By doing so, without significantly increasing code size and execution time, the compilation flow provides sufficient redundancy in the data to detect errors in the intermediate values of the computation.
Experimental results show that the proposed approach only generates an average of 26% more dynamic instructions over a series of asymmetric cryptographic algorithms in the Libgcrypt library.
Instruction Duplication: Leaky and Not Too Fault-Tolerant!
Fault injection attacks alter the intended behavior of microcontrollers, compromising their security. These attacks can be mitigated using software countermeasures. A widely used software-based solution to deflect fault attacks is instruction duplication and n-plication. We explore two main limitations of these approaches: first, we examine the effect of instruction duplication under fault attacks, demonstrating that as a fault tolerance mechanism, code duplication does not provide strong protection in practice. Second, we show that instruction duplication increases the side-channel leakage of sensitive code regions, using a multivariate exploitation technique both in theory and in practice.
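The duplication countermeasure the abstract examines can be sketched as a duplicate-and-compare load, together with the classic failure mode: if the same fault hits both copies, the comparison passes. This is a toy model, not any specific microcontroller ISA or the paper's exact fault campaign.

```python
# Sketch of the instruction-duplication countermeasure: load twice, compare.
# 'fault' optionally corrupts one or both copies to model fault injection.
def load_duplicated(memory, addr, fault=None):
    a = memory[addr]
    b = memory[addr]
    if fault == "first":
        a ^= 0xFF          # single fault on one copy: detected below
    elif fault == "both":
        a ^= 0xFF          # identical faults on both copies:
        b ^= 0xFF          # the comparison passes, bypassing the countermeasure
    if a != b:
        raise RuntimeError("fault detected")
    return a

mem = {0x20: 0x3C}
assert load_duplicated(mem, 0x20) == 0x3C
try:
    load_duplicated(mem, 0x20, fault="first")
except RuntimeError:
    print("single fault detected")
print(hex(load_duplicated(mem, 0x20, fault="both")))  # → 0xc3, slips through
```

The double-fault case is one reason duplication is weaker in practice than its fault-tolerance model suggests; the added side-channel leakage from executing each sensitive instruction twice is the paper's second point.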
The Montgomery and Joye Powering Ladders are Dual
Hitherto the duality between left-to-right and right-to-left exponentiation algorithms has been a loosely defined concept. Recently, the author made the definition precise by adding requirements on space usage and operation types. Here it is shown that the Montgomery and Joye powering ladders are dual in this sense. Several versions of these algorithms are derived naturally, each with a cost-free built-in blinding mechanism as a side-channel countermeasure.
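The two ladders in question are standard and can be written side by side: Montgomery's scans the exponent left to right, Joye's right to left, and both update two registers every iteration (the regularity that matters for side channels). This sketch only shows the algorithms themselves, not the paper's duality argument or blinding.

```python
# Montgomery ladder (left-to-right) and Joye ladder (right-to-left) for
# modular exponentiation; both perform the same operations on every bit.
def montgomery_ladder(x, k, n):
    """Compute x**k mod n scanning exponent bits from the most significant."""
    r0, r1 = 1, x
    for bit in bin(k)[2:]:
        if bit == '0':
            r1 = (r0 * r1) % n
            r0 = (r0 * r0) % n
        else:
            r0 = (r0 * r1) % n
            r1 = (r1 * r1) % n
    return r0

def joye_ladder(x, k, n):
    """Joye's highly regular variant: scan bits from the least significant,
    updating register r_{1-b} as r_{1-b}^2 * r_b for each bit b."""
    r0, r1 = 1, x
    while k:
        if k & 1:
            r0 = (r0 * r0 * r1) % n
        else:
            r1 = (r1 * r1 * r0) % n
        k >>= 1
    return r0

print(montgomery_ladder(7, 13, 101), joye_ladder(7, 13, 101), pow(7, 13, 101))
```

Both agree with Python's built-in `pow`; the structural symmetry between the two loops is what the paper formalizes as duality.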
Quantum Lightning Never Strikes the Same State Twice
Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, a formalization of "collision-free quantum money" defined by Lutomirski et al. [ICS'10], where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results:
- We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, ranging from generating random strings with a proof of entropy to a completely decentralized cryptocurrency without a blockchain, where transactions are instant and local.
- We give win-win results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this suggests that natural schemes do attain strong security guarantees.
- We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC'12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our win-win result for signatures, giving the first separation between two security notions for signatures from the literature.
- Finally, we give a plausible construction for quantum lightning, which we prove secure under an assumption related to the multi-collision resistance of degree-2 hash functions. Our construction is inspired by our win-win result for hash functions, and yields the first plausible standard model instantiation of a non-collapsing collision resistant hash function. This improves on a result of Unruh [Eurocrypt'16] which is relative to a quantum oracle.
Thus, we provide the first constructions of public key quantum money from several cryptographic assumptions. Along the way, we develop several new techniques including a new precise variant of the no-cloning theorem.
Entropy Reduction for the Correlation-Enhanced Power Analysis Collision Attack
Side Channel Attacks are an important attack vector on secure AES implementations. The Correlation-Enhanced Power Analysis Collision Attack by Moradi et al. is a powerful collision attack that exploits leakage caused by collisions between S-box computations of AES. The attack yields observations from which the AES key can be inferred. Due to noise, an insufficient number of collisions, or errors in the measurement setup, the attack does not find the correct AES key uniquely in practice, and it is unclear how to determine the key in such a scenario. Based on a theoretical analysis of how to quantify the remaining entropy, we derive a practical search algorithm. Both our theoretical analysis and practical experiments show that even in a setting with high noise or few available traces we can either successfully recover the full AES key or reduce its entropy significantly.
The Tao of Inference in Privacy-Protected Databases
To protect database confidentiality even in the face of full compromise while supporting standard functionality, recent academic proposals and commercial products rely on a mix of encryption schemes. The
common recommendation is to apply strong, semantically secure encryption to the “sensitive” columns and protect other columns with property-revealing encryption (PRE) that supports operations such as sorting.
We design, implement, and evaluate a new methodology for inferring data stored in such encrypted databases. The cornerstone is the multinomial attack, a new inference technique that is analytically optimal and empirically outperforms prior heuristic attacks against PRE-encrypted data. We also extend the multinomial attack to take advantage of correlations across multiple columns. These improvements
recover PRE-encrypted data with sufficient accuracy to then apply machine learning and record linkage methods to infer the values of columns protected by semantically secure encryption or redaction.
We evaluate our methodology on medical, census, and union-membership datasets, showing for the first time how to infer full database records. For PRE-encrypted attributes such as demographics and
ZIP codes, our attack outperforms the best prior heuristic by a factor of 16. Unlike any prior technique, we also infer attributes, such as incomes and medical diagnoses, protected by strong encryption. For
example, when we infer that a patient in a hospital-discharge dataset has a mental health or substance abuse condition, this prediction is 97% accurate.
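The classical frequency analysis that the multinomial attack improves on can be sketched in a few lines: deterministic (PRE) encryption preserves value frequencies, so ciphertexts can be matched to an auxiliary distribution by frequency rank. The column values and ciphertext labels below are hypothetical toy data, not the paper's datasets.

```python
# Baseline frequency-analysis attack on deterministically encrypted data:
# match each ciphertext to the auxiliary value of the same frequency rank.
from collections import Counter

def frequency_match(ciphertexts, auxiliary):
    """Map ciphertexts to auxiliary-distribution values by frequency rank."""
    ct_ranked = [c for c, _ in Counter(ciphertexts).most_common()]
    aux_ranked = [v for v, _ in Counter(auxiliary).most_common()]
    return dict(zip(ct_ranked, aux_ranked))

# Deterministic encryption preserves frequencies, so the ranks line up.
plain = ["F", "F", "F", "M", "M", "X"]
enc = {"F": "c1", "M": "c2", "X": "c3"}          # toy deterministic encryption
cts = [enc[p] for p in plain]
aux = ["F"] * 30 + ["M"] * 20 + ["X"] * 5        # public auxiliary distribution
guess = frequency_match(cts, aux)
print(guess)  # → {'c1': 'F', 'c2': 'M', 'c3': 'X'}
```

The multinomial attack in the paper replaces this rank heuristic with an analytically optimal assignment and extends it across correlated columns.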
A New Generalization of the KMOV Cryptosystem
The KMOV scheme is a public key cryptosystem based on an RSA modulus N = pq where p and q are large prime numbers with p ≡ q ≡ 2 (mod 3). It uses the points of an elliptic curve with equation y^2 = x^3 + b (mod N).
In this paper, we propose a generalization of the KMOV cryptosystem with a prime power modulus of the form N = p^r q and study its resistance to the known attacks.
A generalized attack on RSA type cryptosystems
Let N = pq be an RSA modulus with unknown factorization. Some variants of the RSA cryptosystem, such as LUC, RSA with Gaussian primes and RSA type schemes based on singular elliptic curves, use a public key e and a private key d satisfying an equation of the form ed - k(p^2 - 1)(q^2 - 1) = 1. In this paper, we consider the general equation ex - (p^2 - 1)(q^2 - 1)y = z and present a new attack that finds the prime factors p and q in the case that x, y and z satisfy a specific condition. The attack combines the continued fraction algorithm and Coppersmith's technique and can be seen as a generalization of the attacks of Wiener and Blömer-May on RSA.
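The continued-fraction component of such attacks is the classic Wiener attack on standard RSA, which this sketch implements on a small textbook instance (N = 90581, e = 17993, d = 5); the Coppersmith part and the (p^2 − 1)(q^2 − 1) generalization are not reproduced here.

```python
# Wiener's continued-fraction attack on RSA with a small private exponent:
# the convergents of e/N include k/d, from which phi and the factors follow.
from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents p/q of num/den."""
    p0, p1, q0, q1 = 0, 1, 1, 0
    while den:
        a = num // den
        num, den = den, num % den
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        yield p1, q1

def wiener(e, N):
    """Try each convergent k/d of e/N; verify by factoring N from phi."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                  # s = p + q
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            p = (s + isqrt(disc)) // 2
            return d, p, N // p
    return None

print(wiener(17993, 90581))  # → (5, 379, 239)
```

Here 90581 = 379 × 239 and d = 5 is small enough for the convergent 1/5 of e/N to reveal phi = 89964, after which p and q fall out of a quadratic.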
XHX - A Framework for Optimally Secure Tweakable Block Ciphers from Classical Block Ciphers and Universal Hashing
Tweakable block ciphers are important primitives for designing cryptographic schemes with high security. In the absence of a standardized tweakable block cipher, constructions built from classical block ciphers remain an interesting research topic in both theory and practice.
Motivated by Mennink's F[2] publication from 2015, Wang et al. proposed 32 optimally secure constructions at ASIACRYPT'16, all of which employ two calls to a classical block cipher each. Yet, those constructions were still limited to n-bit keys and n-bit tweaks. Thus, applications with more general key or tweak lengths still lack support.
This work proposes the XHX family of tweakable block ciphers built from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that the hash keys can be derived in an efficient manner from the block cipher itself. Finally, we propose efficient instantiations for the hash functions used.
A Practical Fault Attack on ARX-like Ciphers with a Case Study on ChaCha20
This paper presents the first practical fault attack on the ChaCha family of addition-rotation-XOR (ARX)-based stream ciphers. ChaCha has recently been deployed for speeding up and strengthening HTTPS connections for Google Chrome on Android devices. In this paper, we propose differential fault analysis attacks on ChaCha without resorting to nonce misuse. We use the instruction skip and instruction replacement fault models, which are popularly mounted on microcontroller-based cryptographic implementations. We corroborate the attack propositions via practical fault injection experiments using a laser-based setup targeting an Atmel AVR 8-bit microcontroller-based implementation of ChaCha. Each of the proposed attacks can be repeated with accuracy in our fault injection setup, and can recover the entire 256-bit secret key using 5-8 fault injections on average.
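The instruction-skip fault model can be illustrated on the ChaCha quarter-round: skipping a single modular addition changes every output word, and the resulting differential is what a DFA exploits. The specific skipped operation below is an illustrative choice, not necessarily the fault location used in the paper.

```python
# ChaCha quarter-round (RFC 7539/8439) with a simulated instruction-skip
# fault on the first modular addition, as a toy model of the fault campaign.
M = 0xFFFFFFFF

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & M

def quarter_round(a, b, c, d, skip_first_add=False):
    if not skip_first_add:       # instruction-skip fault model:
        a = (a + b) & M          # the faulty run omits this addition
    d = rotl(d ^ a, 16)
    c = (c + d) & M
    b = rotl(b ^ c, 12)
    a = (a + b) & M
    d = rotl(d ^ a, 8)
    c = (c + d) & M
    b = rotl(b ^ c, 7)
    return a, b, c, d

# RFC test vector for the quarter-round.
state = (0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)
good = quarter_round(*state)
faulty = quarter_round(*state, skip_first_add=True)
print([hex(g ^ f) for g, f in zip(good, faulty)])  # nonzero differential
```

The correct run reproduces the RFC quarter-round test vector, and the fault-vs-correct XOR differential is the observable a differential fault analysis works from.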
One Plus One is More than Two: A Practical Combination of Power and Fault Analysis Attacks on PRESENT and PRESENT-like Block Ciphers
We present the first practically realizable side-channel assisted fault attack on PRESENT, that can retrieve the last round key efficiently using single nibble faults. The attack demonstrates how side-channel leakage can allow the adversary to precisely determine the fault mask resulting from a nibble fault injection instance. We first demonstrate the viability of such an attack model via side-channel analysis experiments on top of a laser-based fault injection setup, targeting a PRESENT-80 implementation on an ATmega328P microcontroller. Subsequently, we present a differential fault analysis (DFA) exploiting the knowledge of the output fault mask in the target round to recover multiple last round key nibbles independently and in parallel. Both analytically and through experimental evidence, we show that the combined attack can recover the last round key of PRESENT with 4 random nibble fault injections in the best case, and around 7-8 nibble fault injections in the average case. Our attack sheds light on a hitherto unexplored vulnerability of PRESENT and PRESENT-like block ciphers that use bit-permutations instead of maximum distance separable (MDS) layers for diffusion.
Settling the mystery of Z_r = r in RC4
In this paper, using a probability transition matrix, we first revisit the work of Mantin on finding the probability distribution of the RC4 permutation after the completion of the KSA. After that, we extend the same idea to analyse the probabilities during any iteration of the Pseudo Random Generation Algorithm. Next, we study the bias Pr(Z_r = r) (where Z_r is the r-th output keystream byte), which is one of the significant biases observed in the RC4 output keystream. This bias has played an important role in the plaintext recovery attack proposed by Isobe et al. in FSE 2013. However, an accurate theoretical explanation of the bias of Z_r = r has remained a mystery. Though several attempts have been made to prove this bias, none provides an accurate justification.
Here, using the results found with the help of the probability transition matrix, we justify this bias of Z_r = r accurately and settle this issue. The bias obtained from our proof matches perfectly with the experimental observations.
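For reference, the RC4 keystream in question is produced by the standard KSA + PRGA below; with such an implementation one can empirically estimate events like Pr(Z_r = r) over many random keys and compare against the uniform 1/256 baseline. The paper's transition-matrix proof is analytical and not reproduced here.

```python
# Standard RC4 (KSA + PRGA), usable for empirically checking keystream
# event probabilities such as Pr(Z_r = r) against the 1/256 baseline.
def rc4_keystream(key, length):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Known test vector: key "Key" yields keystream EB 9F 77 81 B7 ...
print([hex(b) for b in rc4_keystream(b"Key", 5)])
```

Averaging the indicator of Z_r = r over, say, a million random keys per position r reproduces the experimentally observed deviation from 1/256 that the paper proves.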
Meet-in-the-Middle Attacks on 3-Line Generalized Feistel Networks
In this paper, we study the security of the 3-line generalized Feistel network, which is a suitable choice for some special needs, such as designing a 96-bit cipher based on a 32-bit round function. We show key recovery attacks on 3-line generic balanced Feistel-2 and Feistel-3 based on the meet-in-the-middle technique in the chosen ciphertext scenario. In our attacks, we consider a key size as large as one-third of the block size. For the first network, we construct a 9-round distinguisher and launch a 10-round key-recovery attack. For the second network, we show a 13-round distinguisher and give a 17-round attack based on some common assumptions.
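A 3-line (3-branch) generalized Feistel round can be sketched as follows: rotate the branches and mix one branch through a keyed round function. The round function here is an invented placeholder, and this is one common variant only; the paper analyses generic Feistel-2/Feistel-3 structures, not this toy instance.

```python
# Toy 3-branch generalized Feistel network: 96-bit block as three 32-bit
# words, with a placeholder 32-bit round function (illustrative only).
def toy_f(k, x):
    return (x * 0x9E3779B1 ^ k) & 0xFFFFFFFF

def encrypt(block, round_keys):
    a, b, c = block
    for k in round_keys:
        a, b, c = b, c, a ^ toy_f(k, b)   # rotate branches, mix one line
    return a, b, c

def decrypt(block, round_keys):
    a, b, c = block
    for k in reversed(round_keys):
        a, b, c = c ^ toy_f(k, a), a, b   # exact inverse of one round
    return a, b, c

keys = [0x11, 0x22, 0x33, 0x44]
blk = (0x01234567, 0x89ABCDEF, 0xDEADBEEF)
print(decrypt(encrypt(blk, keys), keys) == blk)  # → True
```

Because only one 32-bit round function is evaluated per round, diffusion across all three lines is slow, which is exactly what multi-round meet-in-the-middle distinguishers of the kind the paper builds exploit.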
The Transaction Graph for Modeling Blockchain Semantics
The advent of Bitcoin paved the way for a plethora of blockchain systems
supporting diverse applications beyond cryptocurrencies.
Although in-depth studies of the protocols, security, and privacy of
blockchains are available, there is no formal model of the transaction
semantics that a blockchain is supposed to guarantee.
In this work, we fill this gap, motivated by the observation that the
semantics of transactions in blockchain systems can be captured by a
directed acyclic graph. Such a transaction graph, or TDAG, generally
consists of the states and the transactions as transitions between the
states, together with conditions for the consistency and validity of
transactions. We instantiate the TDAG model for three prominent
blockchain systems: Bitcoin, Ethereum, and Hyperledger Fabric. We specify
the states and transactions as well as the validity conditions of the
TDAG for each one. This demonstrates the applicability of the model and
formalizes the transaction-level semantics that these systems aim for.
Non-Malleability vs. CCA-Security: The Case of Commitments
In this work, we settle the relations among a variety of security notions
related to non-malleability and CCA-security that have been proposed for
commitment schemes in the literature. Interestingly, all our separations
follow from two generic transformations. Given two appropriate security
notions X and Y from the class of security notions we compare, these
transformations take a commitment scheme that fulfills notion X and output a
commitment scheme that still fulfills notion X but not notion Y.
Using these transformations, we are able to show that some of the known
relations for public-key encryption do not carry over to commitments. In
particular, we show that, surprisingly, parallel non-malleability and parallel
CCA-security are not equivalent for commitment schemes. This stands in
contrast to the situation for public-key encryption where these two notions
are equivalent as shown by Bellare et al. at CRYPTO '99.
Frequency-smoothing encryption: preventing snapshot attacks on deterministically encrypted data
Statistical analysis of ciphertexts has been recently used to carry out devastating inference attacks on deterministic encryption (Naveed, Kamara, and Wright, CCS 2015), order-preserving/revealing encryption (Grubbs et al., S&P 2017), and searchable encryption (Pouliot and Wright, CCS 2016). At the heart of these inference attacks is classical frequency analysis. In this paper, we propose and evaluate another classical technique, homophonic encoding, as a means to combat these attacks. We introduce and develop the concept of frequency-smoothing encryption (FSE) which provably prevents inference attacks in the snapshot attack model, wherein the adversary obtains a static snapshot of the encrypted data, while preserving the ability to efficiently and privately make point queries. We provide provably secure constructions for FSE schemes, and we empirically assess their security for concrete parameters by evaluating them against real data. We show that frequency analysis attacks (and optimal generalisations of them for the FSE setting) no longer succeed.
Regulating Storage Overhead in Existing PoW-based Blockchains
Proof of Work (PoW) blockchains regulate the frequency and security of extensions to the blockchain in a decentralized
manner by adjusting the difficulty in the network. However, analogous decentralized measures to regulate the replication level of the associated transaction and block data are completely missing so far. We argue that such measures are required as well. On the one hand, the smaller the number of replicas, the higher the vulnerability of the system against compromises and DoS attacks. On the other hand, the larger the number of replicas, the higher the storage overhead and the operational cost of the blockchain.
In this paper, we propose a novel solution, EWoK (Entangled proofs of WOrk and Knowledge), that regulates in a decentralized manner the minimum number of replicas that should be stored by miners in the blockchain. EWoK achieves this by tying replication to the only directly-incentivized process in PoW-blockchains – which is PoW itself. EWoK only incurs small modifications to existing PoW protocols and is fully compliant with the specifications of existing mining hardware. Our implementation results confirm that EWoK can be easily integrated within existing mining pool protocols, such as GetBlockTemplate and Stratum mining, and does not impair the mining efficiency.
Bulletproofs: Short Proofs for Confidential Transactions and More
We propose Bulletproofs, a new non-interactive zero-knowledge proof protocol
with very short proofs and without a trusted setup; the proof size is only logarithmic in the witness size.
Bulletproofs are especially well suited for efficient range proofs on committed values: they enable proving that a committed value is in a range using only 2log2(n) + 9 group and field elements, where n is the bit length of the range.
Proof generation and verification times are linear in n.
Bulletproofs greatly improve on the linear (in n) sized range proofs in existing proposals for confidential transactions in Bitcoin and other cryptocurrencies.
Moreover, Bulletproofs supports aggregation of range proofs, so that a party can prove that m commitments lie in a given range by providing only an additive O(log(m)) group elements over the length of a single proof.
To aggregate proofs from multiple parties, we enable the parties to generate a single proof without revealing their inputs to each other via a simple multi-party computation (MPC) protocol for constructing Bulletproofs.
This MPC protocol uses either a constant number of rounds and linear communication, or a logarithmic number of rounds and logarithmic communication.
We show that verification time, while asymptotically linear, is very efficient in practice. Moreover, the verification of multiple Bulletproofs can be batched for further speed-up. Concretely, the marginal time to verify an aggregation of 16 range proofs is about the same as the time to verify 16 ECDSA signatures.
Bulletproofs build on the techniques of Bootle et al. (EUROCRYPT 2016).
Beyond range proofs, Bulletproofs provide short zero-knowledge proofs for general arithmetic circuits while only relying on the discrete logarithm assumption and without requiring a trusted setup.
We discuss many applications that would benefit from Bulletproofs, primarily in the area of cryptocurrencies.
The efficiency of Bulletproofs is particularly well suited for the distributed and trustless nature of blockchains.
Last updated: 2018-09-22
--Withdrawn--
An Algebraic Approach to Maliciously Secure Private Set Intersection
Private set intersection is an important area of research and has been the focus of many works over the past decades. It describes the problem of finding an intersection between the input sets of at least two parties without revealing anything about the input sets apart from their intersection.
In this paper, we present a new approach to compute the intersection between sets based on a primitive called Oblivious Linear Function Evaluation (OLE). On an abstract level, we use this primitive to efficiently add two polynomials in a randomized way while preserving the roots of the added polynomials. Setting the roots of the input polynomials to be the elements of the input sets, this directly yields an intersection protocol with optimal asymptotic communication complexity O(mκ). We highlight that the protocol is information-theoretically secure assuming OLE.
We also present a natural generalization of the 2-party protocol for the fully malicious multi-party case. Our protocol does away with expensive (homomorphic) threshold encryption and zero-knowledge proofs. Instead, we use simple combinatorial techniques to ensure the security. As a result we get a UC-secure protocol with asymptotically optimal communication complexity O((n^2 + nm)κ), where n is the number of parties, m is the set size and κ is the security parameter. Apart from yielding an asymptotic improvement over previous works, our protocols are also conceptually simple and require only simple field arithmetic.
Along the way we develop tools that might be of independent interest.
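The polynomial trick at the core of the approach can be demonstrated in the clear (with no OLE or malicious-security machinery): encode each set as the polynomial with those roots, form a randomized combination rA·pA + rB·pB over a prime field, and observe that exactly the common roots survive with overwhelming probability. Field size and sets below are toy parameters.

```python
# Polynomial view of set intersection: common roots of pA and pB remain
# roots of the randomized combination rA*pA + rB*pB (w.h.p.), others do not.
import random

P = 2**31 - 1   # a prime field modulus (toy parameter)

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_add(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = c
    for i, c in enumerate(b):
        out[i] = (out[i] + c) % P
    return out

def poly_from_roots(roots):
    poly = [1]
    for r in roots:
        poly = poly_mul(poly, [(-r) % P, 1])   # multiply by (x - r)
    return poly

def poly_eval(poly, x):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % P
    return acc

rng = random.Random(1)
A, B = {3, 7, 11, 19}, {5, 7, 19, 23}
pA, pB = poly_from_roots(sorted(A)), poly_from_roots(sorted(B))
rA = [rng.randrange(1, P) for _ in range(len(pB))]   # random blinding polys
rB = [rng.randrange(1, P) for _ in range(len(pA))]
u = poly_add(poly_mul(rA, pA), poly_mul(rB, pB))
intersection = {s for s in A if poly_eval(u, s) == 0}
print(sorted(intersection))   # [7, 19] with overwhelming probability
```

In the actual protocol the combination is computed obliviously via OLE, so neither party learns the other's polynomial, only the surviving roots.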
Improved Division Property Based Cube Attacks Exploiting Algebraic Properties of Superpoly (Full Version)
The cube attack is an important technique for the cryptanalysis of symmetric key primitives, especially for stream ciphers.
Aiming at recovering some secret key bits, the adversary reconstructs a superpoly with the secret key bits involved, by summing over a set of the plaintexts/IV which is called a cube.
The traditional cube attack only exploits linear/quadratic superpolies. Moreover, for a long time after its proposal, the size of the cubes has been largely confined to an experimental range, e.g., typically 40. These limits were first overcome by the division property based cube attacks proposed by Todo et al. at CRYPTO 2017.
Based on MILP-modelled division property, for a cube (index set) I, they identify the small (index) subset J of the secret key bits involved in the resultant superpoly.
During the precomputation phase, which dominates the complexity of the cube attacks, 2^(|I|+|J|) encryptions are required to recover the superpoly.
Therefore, their attacks are only feasible when the restriction 2^(|I|+|J|) < 2^n is met.
In this paper, we introduce several techniques to improve the division property based cube attacks by exploiting various algebraic properties of the superpoly.
1. We propose the "flag" technique to enhance the preciseness of MILP models, so that proper non-cube IV assignments can be identified to obtain a non-constant superpoly.
2. A degree evaluation algorithm is presented to upper bound the degree of the superpoly. With the knowledge of its degree, the superpoly can be recovered without constructing its whole truth table. This enables us to explore larger cubes I even if 2^(|I|+|J|) exceeds 2^n.
3. We provide a term enumeration algorithm for finding the monomials of the superpoly, so that the complexity of many attacks can be further reduced.
As an illustration, we apply our techniques to attack the initialization of several ciphers. Specifically, our key recovery attacks reach 839-round TRIVIUM, 891-round Kreyvium, 184-round Grain-128a and 750-round ACORN, respectively.
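The basic cube-attack mechanics that these techniques build on can be shown on a toy Boolean function: summing (XOR-ing) the output over all assignments of the cube variables isolates the superpoly of that cube in the key bits. The function f below is invented purely for illustration and is not any cipher from the paper.

```python
# Toy cube attack: XOR-summing f over a cube of IV bits isolates the
# superpoly of that cube. f is an invented Boolean function, not a cipher.
from itertools import product

def f(key, iv):
    k0, k1, k2 = key
    v0, v1, v2 = iv
    # f = v0*v1*(k0 ^ k2) ^ v2*k1 ^ v0 ^ 1
    return (v0 & v1 & (k0 ^ k2)) ^ (v2 & k1) ^ v0 ^ 1

def cube_sum(key, cube_positions, iv_len=3, non_cube=0):
    """XOR f over all assignments of the cube IV bits; fix the rest."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_positions)):
        iv = [non_cube] * iv_len
        for pos, b in zip(cube_positions, bits):
            iv[pos] = b
        acc ^= f(key, iv)
    return acc

# Summing over the cube {v0, v1} recovers the superpoly k0 ^ k2 for any key:
for key in product([0, 1], repeat=3):
    assert cube_sum(key, [0, 1]) == key[0] ^ key[2]
print("superpoly of cube {v0, v1} is k0 ^ k2")
```

In the division property based attacks, MILP modelling replaces this exhaustive view: it identifies which key bits (the set J) can appear in the superpoly before any summation is done, and the paper's flag/degree/term-enumeration techniques then shrink the cost of recovering that superpoly.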