All papers in 2018 (Page 11 of 1249 results)
Analysis of Deutsch-Jozsa Quantum Algorithm
The Deutsch-Jozsa quantum algorithm is of great importance to modern quantum computation, but we find it is flawed. It confuses two unitary transformations: one performed on a pure state, the other on a superposition.
In the past decades, no constructive specification of the unitary operator performed on the involved superposition has been given, and no experimental test of the algorithm has been carried out in practice. We think the algorithm needs a more constructive specification in order to confirm its correctness.
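For reference, a minimal statement of the standard textbook account at issue: the oracle unitary U_f is defined on computational basis states and extended by linearity to the superposed input (with the target register in the |-> state, giving the phase kickback):

    U_f\,|x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle,
    \qquad
    U_f\Big(\tfrac{1}{\sqrt{2^n}}\textstyle\sum_{x}|x\rangle\Big)|-\rangle
    = \tfrac{1}{\sqrt{2^n}}\textstyle\sum_{x}(-1)^{f(x)}|x\rangle|-\rangle.

In the standard account the second identity follows from the first by linearity; whether this extension-by-linearity constitutes a constructive specification of the operator on superpositions is precisely what the abstract calls into question.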
Stake-Bleeding Attacks on Proof-of-Stake Blockchains
We describe a general attack on proof-of-stake (PoS) blockchains
without checkpointing. Our attack leverages transaction fees, the
ability to treat transactions "out of context," and the standard
longest chain rule to completely dominate a blockchain. The attack
grows in power with the number of honest transactions and the stake
held by the adversary, and can be launched by an adversary
controlling any constant fraction of the stake.
With the present statistical profile of blockchain protocols, the
attack can be launched given a few years of prior blockchain
operation; hence it is within the realm of feasibility for PoS
protocols. Most importantly, it demonstrates how closely
transaction fees and rewards are coupled with the security
properties of PoS protocols. More broadly, our attack must be
reflected and countered in any future PoS design that avoids
checkpointing, as well as any effort to remove checkpointing from
existing protocols. We describe several mechanisms for protecting
against the attack that include context-sensitivity of transactions
and chain density statistics.
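As a rough illustration of the growth dynamic described above, the toy simulation below (our own sketch with made-up rates, not the paper's model) tracks a private adversarial chain whose stake share rises as it absorbs honest transaction fees, until it overtakes the public chain under the longest chain rule:

    def stake_bleed_sim(slots=500_000, adv_share=0.10, honest_rate=0.8,
                        fee_growth=5e-6):
        """Toy model: the public chain fills `honest_rate` of slots; the
        adversary's private chain fills a fraction equal to its stake share
        *on that chain*, which grows as replayed ("out of context") honest
        transactions bleed their fees to the adversary."""
        pub_len = priv_len = 0.0
        share = adv_share
        for slot in range(slots):
            pub_len += honest_rate
            priv_len += share
            share = min(1.0, share + fee_growth)  # fees accrue to the attacker
            if priv_len > pub_len:
                return slot  # private chain is now the longest chain
        return None

    print(stake_bleed_sim())  # slots until takeover, or None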
Hardware-Supported ORAM in Effect: Practical Oblivious Search and Update on Very Large Dataset
The ability to query and update over encrypted data is an essential feature to enable breach-resilient cyber-infrastructures. Statistical attacks on searchable encryption (SE) have demonstrated the importance of sealing information leaks in access patterns. In response to such attacks, the community has proposed the Oblivious Random Access Machine (ORAM). However, due to the logarithmic communication overhead of ORAM, the composition of ORAM and SE is known to be costly in the conventional client-server model, which poses a critical barrier toward its practical adoption.
In this paper, we propose a novel hardware-supported privacy-enhancing platform called Practical Oblivious Search and Update Platform (POSUP), which enables oblivious keyword search and update operations on large datasets with high efficiency. We harness Intel SGX to realize efficient oblivious data structures for oblivious search/update purposes. We implemented POSUP and evaluated its performance on a Wikipedia dataset containing ≥ 2^29 keyword-file pairs. Our implementation is highly efficient, taking only 1 ms to access a 3 KB block with Circuit-ORAM. Our experiments have shown that POSUP offers up to 70× less end-to-end delay with 100× reduced network bandwidth consumption compared with the traditional ORAM-SE composition without secure hardware. POSUP is also at least 4.5× faster for up to 99.5% of keywords that can be searched compared with state-of-the-art Intel SGX-assisted search platforms.
Universally Verifiable MPC with Applications to IRV Ballot Counting
We present a very simple universally verifiable MPC protocol. The first component is a threshold somewhat homomorphic cryptosystem that permits an arbitrary number of additions (in the source group), followed by a single multiplication, followed by an arbitrary number of additions in the target group. The second component is a black-box construction of universally verifiable distributed encryption switching between any public key encryption schemes supporting shared setup and key generation phases, as long as the schemes satisfy some natural additive-homomorphic properties. This allows us to switch back from the target group to the source group, and hence perform an arbitrary number of multiplications.
The key generation algorithm of our prototypical cryptosystem, which is based upon concurrent verifiable secret sharing, permits robust reconstruction of powers of a shared secret.
We demonstrate the scalability of distributed encryption switching as a viable approach to secure vote tallying by implementing a private verifiable form of Instant Runoff Voting on real Australian election data comprising 40,000 votes.
Secure Search via Multi-Ring Fully Homomorphic Encryption
Secure search is the problem of securely retrieving from a database table (or any unsorted array) the records matching specified attributes, as in SQL ``SELECT...WHERE...'' queries, but where the database and the query are encrypted. Secure search has been the leading example for practical applications of Fully Homomorphic Encryption (FHE) since Gentry's seminal work in 2009, attaining the desired properties of a single-round low-communication protocol with semantic security for database and query (even during search). Nevertheless, the wide belief was that the high computational overhead of current FHE candidates is too prohibitive in practice for secure search solutions (except for the restricted case of searching for a uniquely identified record, as in SQL UNIQUE constraints and Private Information Retrieval). This is due to the high polynomial degree of existing solutions, which is at least proportional to the number of database records m.
We present the first algorithm for secure search that is realized by a polynomial of logarithmic degree (log m)^c for a small constant c>0. We implemented our algorithm in an open source library based on HElib, and ran experiments on Amazon's EC2 cloud with up to 100 processors. Our experiments show that we can securely search millions of database records in less than an hour on a single machine.
We achieve our result by:
(1) Designing a novel sketch that returns the first strictly-positive entry in a (not necessarily sparse) array of non-negative real numbers; this sketch may be of independent interest.
(2) Suggesting a multi-ring evaluation of FHE -- instead of a single ring as in prior works -- and leveraging this to achieve an exponential reduction in the degree.
Faster Homomorphic Linear Transformations in HElib
HElib is a software library that implements homomorphic encryption (HE), with a focus on effective use of "packed" ciphertexts. An important operation (which is used in bootstrapping, as well as in other applications) is applying a known linear map to a vector of encrypted data. In this paper, we describe several algorithmic improvements that significantly speed up this operation: in our experiments, our new algorithms were 30-75 times faster than those currently implemented in HElib for typical parameters.
Our techniques also reduce the size of the large public evaluation key, often using 33%-50% less space than the previous HElib implementation. We also implemented a new tradeoff that enables a drastic reduction in size, by a factor of 25 or more for some parameters, at the price of a 2-4x factor in runtime (and giving up some parallelization opportunities).
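The operation being sped up is the standard "diagonal method" for applying a known linear map to a packed (slot-encoded) ciphertext, which costs one rotation plus one plaintext multiplication per diagonal. The sketch below shows the method on plain Python lists standing in for ciphertexts (the paper's improvements to the rotation strategy and key material are not modeled):

    def rotate(v, k):
        """Cyclic left rotation of the slot vector v by k (stands in for a
        ciphertext rotation, a key-switching operation in HElib)."""
        k %= len(v)
        return v[k:] + v[:k]

    def matvec_diagonal(M, v):
        """Compute M*v as sum_i diag_i(M) * rot(v, i), where diag_i is the
        i-th generalized diagonal of M; this is the classic method for
        homomorphic matrix-vector products over packed ciphertexts."""
        n = len(v)
        out = [0] * n
        for i in range(n):
            diag = [M[r][(r + i) % n] for r in range(n)]
            rot = rotate(v, i)
            out = [o + d * x for o, d, x in zip(out, diag, rot)]
        return out

    # sanity check against ordinary matrix-vector multiplication
    M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    v = [1, 0, 2]
    assert matvec_diagonal(M, v) == [sum(M[r][c] * v[c] for c in range(3))
                                     for r in range(3)]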
A New Approach to Deanonymization of Unreachable Bitcoin Nodes
Mounting deanonymization attacks on unreachable Bitcoin nodes -- nodes that do not accept incoming connections -- residing behind NAT is a challenging task. Such an attack was first presented by Biryukov, Khovratovich and Pustogarov, based on their observation that a node can be uniquely identified in a single session by its directly-connected neighbouring nodes (ACM CCS'15). However, the BKP15 attack is less effective across multiple sessions. To address this issue, Biryukov and Pustogarov later devised a new strategy exploiting certain properties of address-cookies (IEEE S&P'15). Unfortunately, the BP15 attack is also rendered ineffective by the present modification to the Bitcoin client.
In this paper, we devise an efficient method to link the sessions of unreachable nodes, even if they connect to the Bitcoin network over Tor. We achieve this using a new approach based on organizing the block-requests made by nodes in a Bitcoin session graph. This attack also works against the modified Bitcoin client. We performed experiments on the Bitcoin main network and were able to link consecutive sessions with a precision of 0.90 and a recall of 0.71. We also provide countermeasures to mitigate the attacks.
A New Constant-size Accountable Ring Signature Scheme Without Random Oracles
Accountable ring signature (ARS), introduced by Xu and Yung (CARDIS 2004), combines many useful properties of ring and group signatures. In particular, the signer in an ARS scheme has the flexibility of choosing an ad hoc group of users and signing on their behalf (like a ring signature). Furthermore, the signer can designate an opener who may later reveal his identity, if required (like a group signature). In 2015, Bootle et al. (ESORICS 2015) formalized the notion and gave an efficient construction for ARS with signature size logarithmic in the size of the ring. Their scheme is proven secure in the random oracle model. Recently, Russell et al. (ESORICS 2016) gave a construction with constant signature size that is secure in the standard model. Their scheme is based on q-type assumptions (q-SDH).
In this paper, we give a new construction for ARS having the following properties: the signature is constant-sized, secure in the standard model, and based on indistinguishability obfuscation (iO) and one-way functions. To the best of our knowledge, this is the first iO-based ARS scheme. Independently of this, our work can be viewed as a new application of the puncturable programming and hidden sparse trigger techniques introduced by Sahai and Waters (STOC 2014) to design iO-based deniable encryption.
zkLedger: Privacy-Preserving Auditing for Distributed Ledgers
Distributed ledgers (e.g. blockchains) enable financial institutions to efficiently reconcile cross-organization transactions. For example, banks might use a distributed ledger as a settlement log for digital assets. Unfortunately, these ledgers are either entirely public to all participants, revealing sensitive strategy and trading information, or are private but do not support third-party auditing without revealing the contents of transactions to the auditor. Auditing and financial oversight are critical to proving institutions are complying with regulation.
This paper presents zkLedger, the first system to protect ledger participants' privacy and provide fast, provably correct auditing. Banks create digital asset transactions that are visible only to the organizations party to the transaction, but are publicly verifiable. An auditor sends queries to banks, for example "What is the outstanding amount of a certain digital asset on your balance sheet?" and gets a response and cryptographic assurance that the response is correct.
zkLedger has two important benefits over previous work. First, zkLedger provides fast, rich auditing with a new proof scheme using Schnorr-type non-interactive zero-knowledge proofs. Unlike zkSNARKs, our techniques do not require trusted setup and only rely on widely-used cryptographic assumptions. Second, zkLedger provides *completeness*; it uses a columnar ledger construction so that banks cannot hide transactions from the auditor, and participants can use rolling caches to produce and verify answers quickly.
We implement a distributed version of zkLedger that can produce provably-correct answers to auditor queries on a ledger with a hundred thousand transactions in less than 10 milliseconds.
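The auditing flavor rests on additively homomorphic commitments: each entry in a ledger column commits to a bank's value change, and an auditor can multiply the column's commitments to check a claimed total without learning individual entries. A minimal sketch with toy Pedersen-style commitments (illustrative small parameters, not zkLedger's actual curve, proof system, or protocol):

    import random

    # Toy Schnorr group (far too small for real use): q = 509 divides p - 1.
    p, q = 1019, 509
    g = pow(2, (p - 1) // q, p)  # order-q generator
    h = pow(3, (p - 1) // q, p)  # second order-q generator

    def commit(value, blinding):
        """Pedersen commitment g^value * h^blinding mod p."""
        return (pow(g, value % q, p) * pow(h, blinding % q, p)) % p

    # One column of a toy ledger: each bank's hidden change in some asset.
    values = [50, -20, -30]                      # sums to 0: asset conserved
    blinds = [random.randrange(q) for _ in values]
    column = [commit(v, r) for v, r in zip(values, blinds)]

    # Homomorphism: the product of the commitments commits to the column sum,
    # so an auditor can verify conservation without seeing individual values
    # (zkLedger additionally proves statements about the blinding factors;
    # here we simply reveal their sum for illustration).
    total = 1
    for c in column:
        total = (total * c) % p
    assert total == commit(0, sum(blinds))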
Towards Non-Interactive Zero-Knowledge for NP from LWE
Non-interactive zero-knowledge (NIZK) is a fundamental primitive that is widely used in the construction of cryptographic schemes and protocols. Despite this, general purpose constructions of NIZK proof systems are only known under a rather limited set of assumptions that are either number-theoretic (and can be broken by a quantum computer) or are not sufficiently well understood, such as obfuscation. Thus, a basic question that has drawn much attention is whether it is possible to construct general-purpose NIZK proof systems based on the learning with errors (LWE) assumption.
Our main result is a reduction from constructing NIZK proof systems for all of NP based on LWE, to constructing a NIZK proof system for a particular computational problem on lattices, namely a decisional variant of the Bounded Distance Decoding (BDD) problem. That is, we show that assuming LWE, every language L in NP has a NIZK proof system if (and only if) the decisional BDD problem has a NIZK proof system. This (almost) confirms a conjecture of Peikert and Vaikuntanathan (CRYPTO, 2008).
To construct our NIZK proof system, we introduce a new notion that we call prover-assisted oblivious ciphertext sampling (POCS), which we believe to be of independent interest. This notion extends the idea of oblivious ciphertext sampling, which allows one to sample ciphertexts without knowing the underlying plaintext. Specifically, we augment the oblivious ciphertext sampler with access to an (untrusted) prover to help it accomplish this task. We show that the existence of encryption schemes with a POCS procedure, as well as some additional natural requirements, suffices for obtaining NIZK proofs for NP. We further show that such encryption schemes can be instantiated based on LWE, assuming the existence of a NIZK proof system for the decisional BDD problem.
RepuCoin: Your Reputation is Your Power
Existing proof-of-work cryptocurrencies cannot tolerate attackers controlling more than 50% of the network's computing power at any time, but assume that such a condition is “unlikely”. However, recent sophisticated attacks, e.g., renting mining capacity to obtain a majority of computing power temporarily, render this assumption unrealistic.
This paper proposes RepuCoin, the first system to provide guarantees even when more than 50% of the system's computing power is temporarily dominated by an attacker. RepuCoin physically limits the rate of voting-power growth of the entire system. In particular, RepuCoin defines a miner's power by its 'reputation', as a function of its work integrated over the time of the entire blockchain, rather than through instantaneous computing power, which can be obtained relatively quickly and/or temporarily. As an example, after a single year of operation, RepuCoin can tolerate attacks compromising 51% of the network's computing resources, even if such power stays maliciously seized for almost a whole year. Moreover, RepuCoin provides better resilience to known attacks, compared to existing proof-of-work systems, while achieving a high throughput of 10,000 transactions per second (TPS).
Private Set Intersection with Linear Communication from General Assumptions
This work presents a hashing-based algorithm for Private Set Intersection (PSI) in the honest-but-curious setting. The protocol is generic and modular, and provides both asymptotic and concrete efficiency improvements over existing PSI protocols. Our scheme requires communication linear in the number of elements each player holds (times a security parameter). Our protocol builds on the hashing-based PSI protocol of Pinkas et al. (USENIX 2014, USENIX 2015), but we replace one of the sub-protocols (handling the cuckoo ``stash'') with a special-purpose PSI protocol that is optimized for comparing sets of unbalanced size. This brings the asymptotic communication complexity of the overall protocol down to linear, and provides concrete performance improvements (10-15% reduction in communication costs) over Kolesnikov et al. (CCS 2016) under real-world parameter choices. Our protocol is simple, generic, and benefits from the permutation-hashing optimizations of Pinkas et al. (USENIX 2015) and the Batched, Relaxed Oblivious Pseudo Random Functions of Kolesnikov et al. (CCS 2016).
On Tightly Secure Non-Interactive Key Exchange
We consider the reduction loss of security reductions for non-interactive key exchange (NIKE) schemes. Currently, no tightly secure NIKE schemes exist, and in fact Bader et al. (EUROCRYPT 2016) provide a lower bound (of O(n^2), where n is the number of parties an adversary interacts with) on the reduction loss for a large class of NIKE schemes.
We offer two results: the first NIKE scheme with a reduction loss of n/2 that circumvents the lower bound of Bader et al., but is of course still far from tightly secure. Second, we provide a generalization of Bader et al.'s lower bound to a larger class of NIKE schemes (that also covers our NIKE scheme), with an adapted lower bound of n/2 on the reduction loss. Hence, in that sense, the reduction for our NIKE scheme is optimal.
Low-Resource Eclipse Attacks on Ethereum's Peer-to-Peer Network
We present eclipse attacks on Ethereum nodes that exploit the peer-to-peer network used for neighbor discovery. Our attacks can be launched using only two hosts, each with a single IP address. Our eclipse attacker monopolizes all of the victim's incoming and outgoing connections, thus isolating the victim from the rest of its peers in the network. The attacker can then filter the victim's view of the blockchain, or co-opt the victim's computing power as part of more sophisticated attacks. We argue that these eclipse-attack vulnerabilities result from Ethereum's adoption of the Kademlia peer-to-peer protocol, and present countermeasures that both harden the network against eclipse attacks and cause it to behave differently from the traditional Kademlia protocol. Several of our countermeasures have been incorporated in the Ethereum geth 1.8 client released on February 14, 2018.
Combining Asynchronous and Synchronous Byzantine Agreement: The Best of Both Worlds
In the problem of Byzantine agreement (BA), a set of n parties wishes to agree on a value v by jointly running a distributed protocol. The protocol is deemed secure if it achieves this goal in spite of a malicious adversary that corrupts a certain fraction of the parties and can make them behave in arbitrarily malicious ways. Since its first formalization by Lamport et al. (TOPLAS '82), the problem of BA has been extensively studied in the literature under many different assumptions. One common way to classify protocols for BA is by their synchrony and network assumptions. For example, some protocols offer resilience against up to a one-half fraction of corrupted parties by assuming a synchronized, but possibly slow, network, in which parties share a global clock and messages are guaranteed to arrive within a given time D. By comparison, other protocols achieve much higher efficiency and work without these assumptions, but can tolerate only a one-third fraction of corrupted parties. A natural question is whether it is possible to combine protocols from these two regimes to achieve the ``best of both worlds'': protocols that are both efficient and robust. In this work, we answer this question in the affirmative. Concretely, we make the following contributions:
* We give the first generic compilers that combine BA protocols under different network and synchrony assumptions and preserve both the efficiency and robustness of their building blocks. Our constructions are simple and rely solely on a secure signature scheme.
* We prove that our constructions achieve optimal corruption bounds.
* Finally, we give the first efficient protocol for (binary) asynchronous Byzantine agreement (ABA) which tolerates adaptive corruptions and matches the communication complexity of the best protocols in the static case.
P2KMV: A Privacy-preserving Counting Sketch for Efficient and Accurate Set Intersection Cardinality Estimations
In this paper, we propose P2KMV, a novel privacy-preserving counting sketch based on the k minimum values algorithm. With P2KMV, we offer a versatile privacy-enhancing technology for obtaining statistics, following the principle of data minimization, and aiming for the sweet spot between privacy, accuracy, and computational efficiency. As our main contribution, we develop methods to perform set operations, which facilitate cardinality estimates under strong privacy requirements. Most notably, we propose an efficient, privacy-preserving algorithm to estimate the set intersection cardinality. P2KMV provides plausible deniability for all data items contained in the sketch. We discuss the algorithm's privacy guarantees as well as the accuracy of the obtained estimates. An experimental evaluation confirms our analytical expectations and provides insights regarding parameter choices.
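For background, the underlying (non-private) k-minimum-values idea estimates a set's cardinality from the k smallest values of a uniform hash applied to its elements, and the intersection cardinality from merged sketches. A plain-Python sketch of that baseline (P2KMV's privacy mechanisms are not modeled here):

    import hashlib

    def h(x):
        """Hash an item to a float in [0, 1)."""
        d = hashlib.sha256(str(x).encode()).digest()
        return int.from_bytes(d[:8], "big") / 2**64

    class KMV:
        """k-minimum-values sketch: keep the k smallest distinct hash values;
        if the k-th smallest is m, the set has roughly (k - 1) / m elements."""
        def __init__(self, k):
            self.k, self.mins = k, []

        def add(self, x):
            v = h(x)
            if v not in self.mins:
                self.mins = sorted(self.mins + [v])[: self.k]

        def estimate(self):
            if len(self.mins) < self.k:
                return len(self.mins)       # saw fewer than k distinct items
            return (self.k - 1) / self.mins[-1]

    def intersection_estimate(a, b):
        """|A ∩ B| ≈ (fraction of the union's k smallest hashes seen by both
        sketches) * (union cardinality estimate)."""
        k = min(a.k, b.k)
        union = sorted(set(a.mins) | set(b.mins))[:k]
        shared = sum(1 for v in union if v in a.mins and v in b.mins)
        return shared / k * ((k - 1) / union[-1])

    sa, sb = KMV(64), KMV(64)
    for i in range(10_000):
        sa.add(i)
    for i in range(5_000, 15_000):
        sb.add(i)
    print(round(sa.estimate()), round(sb.estimate()),
          round(intersection_estimate(sa, sb)))  # ~10000 ~10000 ~5000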
Privacy-Preserving Logistic Regression Training
Logistic regression is a popular technique used in machine learning to construct classification models. Since the construction of such models is based on computing with large datasets, it is an appealing idea to outsource this computation to a cloud service. The privacy-sensitive nature of the input data requires appropriate privacy-preserving measures before outsourcing it. Homomorphic encryption enables one to compute on encrypted data directly, without decryption, and can be used to mitigate the privacy concerns raised by using a cloud service. In this paper, we propose an algorithm (and its implementation) to train a logistic regression model on a homomorphically encrypted dataset. The core of our algorithm consists of a new iterative method that can be seen as a simplified form of the fixed Hessian method, but with a much lower multiplicative complexity. We test the new method on two interesting real-life applications: the first application is in medicine and constructs a model to predict the probability for a patient to have cancer, given genomic data as input; the second application is in finance and predicts the probability of a credit card transaction being fraudulent. The method produces accurate results for both applications, comparable to running standard algorithms on plaintext data. The resulting algorithm can be used as a privacy-preserving technique to build a binary classification model in a wide range of problems that can be modelled with logistic regression, and our implementation results show that it can handle the large datasets used in logistic regression training.
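For intuition, the fixed Hessian method that the paper simplifies replaces the per-iteration Hessian of Newton's method with the constant bound H = -(1/4) X^T X (Böhning's bound), so the expensive matrix inversion happens once. A plaintext sketch of that baseline with made-up toy data (the paper's contribution is a further simplification with lower multiplicative depth, evaluated under homomorphic encryption):

    import numpy as np

    def train_fixed_hessian(X, y, iters=25):
        """Fixed-Hessian logistic regression training on plaintext data."""
        n, d = X.shape
        H = -0.25 * X.T @ X                        # fixed Hessian bound
        H_inv = np.linalg.inv(H - 1e-6 * np.eye(d))  # invert once
        beta = np.zeros(d)
        for _ in range(iters):
            mu = 1.0 / (1.0 + np.exp(-X @ beta))   # sigmoid predictions
            grad = X.T @ (y - mu)                  # log-likelihood gradient
            beta -= H_inv @ grad                   # Newton step with fixed H
        return beta

    # toy data: labels depend (noisily) on the first feature
    rng = np.random.default_rng(0)
    X = np.hstack([rng.normal(size=(200, 2)), np.ones((200, 1))])  # bias col
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)
    print(train_fixed_hessian(X, y))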
Improved fully homomorphic public-key encryption with small ciphertext size
A cryptosystem which supports both addition and multiplication (thereby preserving the ring structure of the plaintexts) is known as fully homomorphic encryption (FHE) and is very powerful. Using such a scheme, any circuit can be homomorphically evaluated, effectively allowing the construction of programs which may be run on ciphertexts of their inputs to produce a ciphertext of their output. Since such a program never decrypts its input, it can be run by an untrusted party without revealing its inputs and internal state. The existence of an efficient and fully homomorphic cryptosystem would have great practical implications in the outsourcing of private computations, for instance, in the context of cloud computing. In previous work I proposed a fully homomorphic public-key encryption scheme whose ciphertext size was not small enough. In this paper the ciphertext size is one-eighth of the size in the previously proposed scheme. Because the proposed scheme adopts the medium text with zero norm, it is immune from the “p and -p attack”. As the proposed scheme is based on the computational difficulty of solving multivariate algebraic equations of high degree, it is immune from the Gröbner basis attack, the differential attack, the rank attack and so on.
The Violation of Bell's Inequality Represents Nothing
Bell's mathematical formulation of the EPR paradox, especially Bell's inequality, has interested many physicists and philosophers. We revisit the famous formulation and investigate the related arguments from a purely mathematical point of view. We find: (1) there is a key assumption inconsistent with the general hidden variable theory; (2) the mutual independence between measurements has been thoroughly neglected in the past decades; (3) the inequality involves three pairs of particles, rather than, as generally imagined, measurements on a single pair. These findings could provide a new glimpse into this old and hot issue.
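For reference, the inequality in question is Bell's original 1964 relation between correlation functions E(·,·) for three measurement settings a, b, c:

    |E(a, b) - E(a, c)| <= 1 + E(b, c)

Its three terms are estimated on three distinct ensembles of particle pairs, one per pair of settings, which is what point (3) above refers to.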
Saber: Module-LWR based key exchange, CPA-secure encryption and CCA-secure KEM
In this paper, we introduce Saber, a package of cryptographic primitives whose security relies on the hardness of the Module Learning With Rounding problem (Mod-LWR). We first describe a secure Diffie-Hellman type key exchange protocol, which is then transformed into an IND-CPA encryption scheme and finally into an IND-CCA secure key encapsulation mechanism using a post-quantum version of the Fujisaki-Okamoto transform. The design goals of this package were simplicity, efficiency and flexibility, resulting in the following choices: all integer moduli are powers of 2, avoiding modular reduction and rejection sampling entirely; the use of LWR halves the amount of randomness required compared to LWE-based schemes and reduces bandwidth; the module structure provides flexibility by reusing one core component for multiple security levels. A constant-time AVX2 optimized software implementation of the KEM with parameters providing more than 128 bits of post-quantum security requires only 101K, 125K and 129K cycles for key generation, encapsulation and decapsulation respectively on a Dell laptop with an Intel i7-Haswell processor.
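A toy numeric sketch of the design choice described above (our own illustrative parameters, much smaller than Saber's polynomial ring): with power-of-two moduli, the learning-with-rounding sample is produced by deterministic rounding instead of added noise, and all modular reductions collapse to bit masks and shifts:

    import numpy as np

    rng = np.random.default_rng(1)
    q, p, n = 2**13, 2**10, 16            # power-of-two moduli, toy dimension

    A = rng.integers(0, q, size=(n, n))   # public matrix
    s = rng.integers(-4, 5, size=n)       # small secret

    # LWR sample: deterministic rounding from mod q down to mod p replaces
    # the sampled noise of LWE, halving the randomness required.
    b = ((p * ((A @ s) % q)) // q) % p

    # With q = 2^13 and p = 2^10, reductions are masks and the scaling is a
    # shift; no modular reduction or rejection sampling is needed.
    assert np.all(b == ((((A @ s) & (q - 1)) * p) >> 13) & (p - 1))  # 13 = log2(q)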
Optimizing polynomial convolution for NTRUEncrypt
NTRUEncrypt is one of the most promising candidates for quantum-safe cryptography. In this paper, we focus on the NTRU743 parameter set. We give a report on all known attacks against this parameter set and show that it delivers 256 bits of security against classical attackers and 128 bits of security against quantum attackers. We then present a parameter-dependent optimization using a tailored hierarchy of multiplication algorithms as well as the Intel AVX2 instructions, and show that this optimization is constant-time. Our implementation is two to three times faster than the reference implementation of NTRUEncrypt.
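To convey the flavor of a layered multiplication hierarchy: Karatsuba splits on top (three half-size products instead of four), schoolbook below a cutoff, and the full product finally wrapped modulo x^N - 1. This is a generic sketch, not the paper's tuned hierarchy or its AVX2 kernels:

    def schoolbook(a, b):
        """Quadratic base-case product of coefficient lists."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] += ai * bj
        return out

    def pad(a, n):
        return a + [0] * (n - len(a))

    def add(a, b):
        n = max(len(a), len(b))
        return [x + y for x, y in zip(pad(a, n), pad(b, n))]

    def sub(a, b):
        n = max(len(a), len(b))
        return [x - y for x, y in zip(pad(a, n), pad(b, n))]

    def karatsuba(a, b, cutoff=16):
        """One hierarchy layer: 3 half-size products instead of 4;
        schoolbook below `cutoff`."""
        n = max(len(a), len(b))
        a, b = pad(a, n), pad(b, n)
        if n <= cutoff:
            return schoolbook(a, b)
        m = n // 2
        a0, a1, b0, b1 = a[:m], a[m:], b[:m], b[m:]
        z0 = karatsuba(a0, b0, cutoff)
        z2 = karatsuba(a1, b1, cutoff)
        z1 = sub(sub(karatsuba(add(a0, a1), add(b0, b1), cutoff), z0), z2)
        out = [0] * (2 * n - 1)
        for i, v in enumerate(z0):
            out[i] += v
        for i, v in enumerate(z1):
            out[i + m] += v
        for i, v in enumerate(z2):
            out[i + 2 * m] += v
        return out

    def ntru_convolution(a, b, N, q):
        """Multiply in Z_q[x]/(x^N - 1): full product, then wrap indices mod N."""
        out = [0] * N
        for i, v in enumerate(karatsuba(a, b)):
            out[i % N] = (out[i % N] + v) % q
        return out

    # sanity check: (1 + x + x^2)^2 = 1 + 2x + 3x^2 + 2x^3 + x^4
    assert karatsuba([1, 1, 1], [1, 1, 1], cutoff=1) == [1, 2, 3, 2, 1]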
Non-interactive zaps of knowledge
While non-interactive zero-knowledge (NIZK) proofs require trusted parameters, Groth, Ostrovsky and Sahai constructed non-interactive witness-indistinguishable (NIWI) proofs without any setup; they called their scheme a non-interactive zap. More recently, Bellare, Fuchsbauer and Scafuro investigated the security of NIZK in the face of parameter subversion and observed that NI zaps provide subversion-resistant soundness and witness indistinguishability.
Arguments of knowledge prove not only that the statement is true, but also that the prover knows a witness for it, which is essential for anonymous identification. We present the first NIWI argument of knowledge without parameters, i.e., a NI zap of knowledge. Consequently, our scheme is also the first subversion-resistant knowledge-sound proof system, a notion recently proposed by Fuchsbauer.
Can We Overcome the n log n Barrier for Oblivious Sorting?
It is well-known that non-comparison-based techniques can allow us to sort n elements in o(n log n) time on a Random-Access Machine (RAM). On the other hand, it is a long-standing open question whether (non-comparison-based) circuits can sort n elements from the domain [1..2^k] with o(k n log n) boolean gates. We consider weakened forms of this question: first, we consider a restricted class of sorting where the number of distinct keys is much smaller than the input length; and second, we explore Oblivious RAMs and probabilistic circuit families, i.e., computational models that are somewhat more powerful than circuits but much weaker than RAM. We show that Oblivious RAMs and probabilistic circuit families can sort o(log n)-bit keys in o(n log n) time or circuit complexity, where n is the input length. Our algorithms work in the balls-and-bins model, i.e., not only can they sort an array of numerical keys --- if each key additionally carries an opaque ball, our algorithms can also move the balls into the correct order. We further show that in such a balls-and-bins model, it is impossible to sort Omega(log n)-bit keys in o(n log n) time, and thus the o(log n)-bit-key assumption is necessary for overcoming the n log n barrier. Finally, we optimize the IO efficiency of our oblivious algorithms for RAMs --- we show that even the 1-bit special case of our algorithm can solve open questions regarding whether there exist oblivious algorithms for tight compaction and selection in linear IO.
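The non-comparison-based baseline the opening sentence refers to is counting sort: stable, O(n + 2^k) time for k-bit keys, and naturally stated in the balls-and-bins model, though its access pattern is key-dependent and therefore not oblivious. A short sketch:

    def counting_sort_balls(keys, balls, k_bits):
        """Stable counting sort of k-bit keys, moving each key's opaque
        'ball' along with it (the balls-and-bins model above). Runs in
        O(n + 2^k) time on a RAM; its memory accesses depend on the keys,
        so it is not oblivious."""
        counts = [0] * (1 << k_bits)
        for key in keys:
            counts[key] += 1
        starts, total = [], 0
        for c in counts:                    # prefix sums: first slot per key
            starts.append(total)
            total += c
        out_keys = [0] * len(keys)
        out_balls = [None] * len(keys)
        for key, ball in zip(keys, balls):  # stable placement
            out_keys[starts[key]] = key
            out_balls[starts[key]] = ball
            starts[key] += 1
        return out_keys, out_balls

    print(counting_sort_balls([3, 1, 3, 0, 2], list("abcde"), k_bits=2))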
Non-Uniform Bounds in the Random-Permutation, Ideal-Cipher, and Generic-Group Models
The random-permutation model (RPM) and the ideal-cipher model (ICM) are
idealized models that offer a simple and intuitive way to assess the
conjectured standard-model security of many important symmetric-key and
hash-function constructions. Similarly, the generic-group model (GGM)
captures generic algorithms against assumptions in cyclic groups by modeling
encodings of group elements as random injections, and allows one to derive
simple bounds on the advantage of such algorithms.
Unfortunately, both well-known attacks, e.g., based on rainbow tables
(Hellman, IEEE Transactions on Information Theory '80), and more recent ones,
e.g., against the discrete-logarithm problem (Corrigan-Gibbs and Kogan,
EUROCRYPT '18), suggest that the concrete security bounds one obtains from
such idealized proofs are often completely inaccurate if one considers
non-uniform or preprocessing attacks in the standard model. To remedy this
situation, this work
1) defines the auxiliary-input (AI) RPM/ICM/GGM, which capture both non-uniform
and preprocessing attacks by allowing an attacker to leak an arbitrary
(bounded-output) function of the oracle's function table;
2) derives the first non-uniform bounds for a number of important practical
applications in the AI-RPM/ICM, including constructions based on the
Merkle-Damgård and sponge paradigms, which underlie the SHA hashing standards,
and for AI-RPM/ICM applications with computational security; and
3) using simpler proofs, recovers the AI-GGM security bounds obtained by
Corrigan-Gibbs and Kogan against preprocessing attackers, for a number of
assumptions related to cyclic groups, such as discrete logarithms and
Diffie-Hellman problems, and provides new bounds for two assumptions.
An important step in obtaining these results is to port the tools used in
recent work by Coretti et al. (EUROCRYPT '18) from the ROM to the RPM/ICM/GGM,
resulting in very powerful and easy-to-use tools for proving security bounds
against non-uniform and preprocessing attacks.
A foundation for secret, verifiable elections
Many voting systems rely on art, rather than science, to ensure that
votes are freely made, with equal influence. Such systems build upon
creativity and skill, rather than scientific foundations. These systems
are routinely broken in ways that compromise free-choice or permit
undue influence. Breaks can be avoided by proving that voting systems
satisfy formal notions of voters voting freely and of detecting
undue influence. This manuscript provides a detailed technical
introduction to
a definition of ballot secrecy by Smyth that formalises the former notion and
a definition of verifiability by Smyth, Frink & Clarkson that formalises the latter.
The definitions are presented in the computational model of cryptography:
Ballot secrecy is expressed as the inability to distinguish an
instance of the voting system in which voters cast some votes from another
instance in which the voters cast a permutation of those votes. Verifiability
decomposes into individual verifiability, which is expressed as the inability
to cause a collision between ballots,
and universal verifiability, which is expressed as the inability to cause an incorrect
election outcome to be accepted. The definitions are complemented with simple
examples that demonstrate the essence of these properties and detailed
proofs are constructed to show how secrecy and verifiability can be formally
proved. Finally, the Helios and Helios Mixnet voting systems are presented as case
studies to provide an understanding of state-of-the-art systems that are being
used for binding elections.
Threshold Properties of Prime Power Subgroups with Application to Secure Integer Comparisons
We present a semantically secure somewhat homomorphic public-key cryptosystem working in subgroups of prime power order. Our scheme introduces a novel threshold homomorphic property, which we use to build a two-party protocol for secure integer comparison. In contrast to related work, which encrypts and acts on each bit of the input separately, our protocol compares multiple input bits simultaneously within a single ciphertext. Compared to the related protocol of Damgård et al., we present results showing this approach to be both several times faster in computation and lower in communication complexity.
Shorter double-authentication preventing signatures for small address spaces
A recent paper by Derler, Ramacher, and Slamanig (IEEE EuroS&P 2018) constructs double-authentication preventing signatures ("DAP signatures", a specific self-enforcement enabled variant of signatures where messages consist of an address and a payload) that have---if the supported address space is not too large---keys and signatures that are considerably more compact than those of prior work. We follow their approach of restricting attention to small address spaces and construct novel DAP schemes that beat their signature size by a factor of five and reduce the signing key size from linear to constant (the verification key size remains almost the same). We construct our DAP signatures generically from identification protocols, using a transform similar to but crucially different from that of Fiat and Shamir. We use random oracles. We don't use pairings.
Authentication with weaker trust assumptions for voting systems
Some voting systems are reliant on external authentication services.
Others use cryptography to implement their own. We combine
digital signatures and non-interactive proofs to derive a generic construction
for voting systems with their own authentication mechanisms, from systems
that rely on external authentication services. We prove that our
construction produces systems satisfying ballot secrecy and election
verifiability, assuming the underlying voting system does. Moreover,
we observe that works based on similar ideas provide neither ballot secrecy nor
election verifiability. Finally, we demonstrate applicability of
our results by applying our construction to the Helios voting system.
Bandwidth-Hard Functions: Reductions and Lower Bounds
Memory Hard Functions (MHFs) have been proposed as an answer to the growing inequality between the computational speed of general purpose CPUs and Application Specific Integrated Circuits (ASICs).
MHFs have seen widespread applications including password hashing, key stretching and proofs of work.
Several metrics have been proposed to quantify the ``memory hardness'' of a function. Cumulative memory complexity (CMC) (Alwen and Serbinenko, STOC 2015) (or amortized Area-Time complexity (Alwen et al., CCS 2017)) attempts to quantify the cost to acquire/build the hardware to evaluate the function --- after normalizing the time it takes to evaluate the function repeatedly at a given rate.
By contrast, bandwidth hardness (Ren and Devadas, TCC 2017) attempts to quantify the energy costs of evaluating this function --- which in turn is largely dominated by the number of cache misses.
Ideally, a good MHF would be both bandwidth hard and have high cumulative memory complexity.
While the cumulative memory complexity of leading MHF candidates is well understood, little is known about the bandwidth hardness of many prominent MHF candidates.
Our contributions are as follows:
First, we provide the first reduction proving that, in the parallel random oracle model, the bandwidth hardness of a Data-Independent Memory Hard Function (iMHF) is described by the red-blue pebbling cost of the directed acyclic graph (DAG) associated with that iMHF.
Second, we show that the goals of designing an MHF with high CMC/bandwidth hardness are well aligned. In particular, we prove that any function (data-independent or not) with high CMC also has relatively high bandwidth costs.
Third, we analyze the bandwidth hardness of several prominent iMHF candidates such as Argon2i (Biryukov et al., 2015), winner of the password hashing competition, and aATSample and DRSample (Alwen et al., CCS 2017) --- the first practical iMHFs with essentially asymptotically optimal CMC. We prove that in the parallel random oracle model each of these iMHFs is maximally bandwidth hard.
Fourth, we analyze the bandwidth hardness of a prominent dMHF called Scrypt. We prove the first unconditional tight lower bound on the energy cost of Scrypt in the parallel random oracle model.
Finally, we show that the problem of finding the minimum cost red-blue pebbling of a directed acyclic graph is NP-hard.
PRO-ORAM: Constant Latency Read-Only Oblivious RAM
Oblivious RAM is a well-known cryptographic primitive to hide data access patterns. However, the best known ORAM schemes require logarithmic computation time per access in the general case, which makes them infeasible for use in real-world applications. In practice, hiding data access patterns should incur a constant latency per access.
In this work, we present PRO-ORAM --- an ORAM construction that achieves constant latencies per access in a large class of applications. PRO-ORAM theoretically and empirically guarantees this for read-only data access patterns, wherein data is written once and followed by read requests. It makes hiding data access patterns practical for read-only workloads, incurring sub-second computational latencies per access for data blocks of 256 KB over large (gigabyte-sized) datasets. PRO-ORAM supports throughputs of tens to hundreds of MBps for fetching blocks, which exceeds the network bandwidth available to average users today. Our experiments suggest that the dominant factor in the latency offered by PRO-ORAM is the inherent network throughput of transferring final blocks, rather than the computational latencies of the protocol. At its heart, PRO-ORAM utilizes key observations enabling an aggressively parallelized algorithm for an ORAM construction and a permutation operation, as well as the use of a trusted computing technique (SGX) that not only provides safety but also offers the advantage of lowering communication costs.
On Side-Channel Vulnerabilities of Bit Permutations: Key Recovery and Reverse Engineering
Lightweight block ciphers rely on simple operations to allow compact implementation. Thanks to its efficiency, bit permutation has emerged as an optimal choice for state-wise diffusion. It can be implemented by simple wiring or shifts. However, as recently shown by Spectre and Meltdown attacks, efficiency and security often go against each other.
In this work, we show how bit permutations introduce a side-channel vulnerability that can be exploited to extract the secret key from the cipher. Such vulnerabilities are specific to bit permutations and do not occur in other state-wise diffusion alternatives. We propose the Side-Channel Assisted Differential-Plaintext Attack (SCADPA), which targets this vulnerability in the bit permutation operation. SCADPA is experimentally demonstrated on PRESENT-80 on an 8-bit microcontroller, with best-case key recovery in 17 encryptions. The attack is then extended to the recent bit-permutation-based cipher GIFT, allowing full key recovery in 36 encryptions. We also propose and experimentally verify an automatic threshold method which can easily be applied to SCADPA, allowing automation of the attack. Moreover, SCADPA on bit permutations has other applications: we show how it can be used to reverse-engineer secret S-boxes in PRESENT-like proprietary ciphers. We also highlight a special case where fixing one vulnerability opens another one, by applying SCADPA to some assembly-level fault attack countermeasures, rendering them less secure than unprotected implementations. Lastly, we provide several different attack scenarios, such as targeting different encryption modes.
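Concretely, the diffusion layer exploited in PRESENT is a fixed 64-bit bit permutation: a single-bit difference entering the permutation stays a single bit, and where it lands identifies which S-box output bit was active. A small sketch of the standard pLayer (the attack machinery itself is not modeled):

    # The PRESENT pLayer: bit i of the 64-bit state moves to position P(i),
    # where P(i) = (i mod 4)*16 + i//4, equivalently 16*i mod 63 for i < 63.
    def p_layer(x):
        y = 0
        for i in range(64):
            if (x >> i) & 1:
                j = 63 if i == 63 else (16 * i) % 63
                y |= 1 << j
        return y

    # A single-bit difference propagates to a single, position-revealing bit;
    # this observable, data-dependent movement is what SCADPA exploits.
    for i in (0, 5, 21):
        diff = p_layer(1 << i)
        print(f"input difference at bit {i:2d} -> output bit {diff.bit_length() - 1}")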
On Evaluating Fault Resilient Encoding Schemes in Software
Cryptographic implementations are often vulnerable to physical attacks, fault injection analysis being among the most popular techniques. On par with the development of attacks, the area of countermeasures is advancing rapidly, utilizing both hardware- and software-based approaches. When it comes to software encoding countermeasures for fault protection and their evaluation, there are very few proposals so far, mostly focusing on single operations rather than the cipher as a whole.
In this paper we propose an evaluation framework that can be used for analyzing the effectiveness of software encoding countermeasures against fault attacks. We first formalize encoding schemes in software, helping us to define what properties are required when designing fault protection. Based on these findings, we develop an evaluation metric that can be used universally to determine the robustness of a software encoding scheme against bit-flip faults and instruction skips. We provide a way to select a code according to user criteria, as well as a dynamic code analysis method to estimate the level of protection of assembly implementations using encoding schemes. Finally, we verify our findings by implementing the block cipher PRESENT, protected by an encoding scheme based on anticodes, and provide a detailed evaluation of this implementation using different codes.
Defending Against Key Exfiltration: Efficiency Improvements for Big-Key Cryptography via Large-Alphabet Subkey Prediction
Towards advancing the use of BIG keys as a practical defense against key exfiltration, this paper provides efficiency improvements for cryptographic schemes in the bounded retrieval model (BRM). We identify probe complexity (the number of scheme accesses to the slow storage medium storing the big key) as the dominant cost. Our main technical contribution is what we call the large-alphabet subkey prediction lemma. It gives good bounds on the predictability under leakage of a random sequence of blocks of the big key, as a function of the block size. We use it to significantly reduce the probe complexity required to attain a given level of security. Together with other techniques, this yields security-preserving performance improvements for BRM symmetric encryption schemes and BRM public-key identification schemes.
Secure Computation with Low Communication from Cross-checking
We construct new four-party protocols for secure computation that are secure against a single malicious corruption. Our protocols can perform computations over a binary ring, and require sending just 1.5 ring elements per party, per gate. In the special case of Boolean circuits, this amounts to sending 1.5 bits per party, per gate. One of our protocols is robust, yet requires almost no additional communication. Our key technique can be viewed as a variant of the “dual execution” approach, but, because we rely on four parties instead of two, we can avoid any leakage, achieving the standard notion of security.
Towards everlasting privacy and efficient coercion resistance in remote electronic voting
In this work, we propose a first version of an e-voting scheme that achieves end-to-end verifiability, everlasting privacy and efficient coercion resistance in the JCJ setting. Everlasting privacy is achieved assuming an anonymous channel, without resorting to dedicated channels between the election authorities to exchange private data. In addition, the proposed scheme achieves coercion resistance under standard JCJ assumptions. As a core building block of our scheme, we also propose a new primitive called publicly auditable conditional blind signature (PACBS), where a client receives a token from the signing server after interaction; the token is a valid signature only if a certain condition holds, and the validity of the signature can only be checked by a designated verifier. We utilize this primitive to blindly mark votes under coercion in an auditable manner.
RMAC -- A Lightweight Authentication Protocol for Highly Constrained IoT Devices
Nowadays, highly constrained IoT devices have earned an important place in our everyday lives. These devices mainly comprise RFID (Radio-Frequency IDentification) or WSN (Wireless Sensor Network) components. Their adoption is growing in areas where data security or privacy or both must be guaranteed. Therefore, it is necessary to develop appropriate security solutions for these systems. Many papers have proposed solutions for encryption or authentication, but it turns out that the proposals sometimes have security flaws or are ill-suited for constrained IoT devices (which have very limited processing and storage capacities). In this paper we introduce a new authentication protocol inspired by Mirror-Mac (MM), a generic construction of authentication protocols proposed by Mol et al. Our proposal, named RMAC, is well suited for highly constrained IoT devices since its implementation uses simple and lightweight algorithms. We also prove that RMAC is at least as secure as the MM protocol and thus secure against man-in-the-middle attacks.
Committing to Quantum Resistance: A Slow Defence for Bitcoin against a Fast Quantum Computing Attack
Quantum computers are expected to have a dramatic impact on numerous fields, due to their anticipated ability to solve classes of mathematical problems much more efficiently than their classical counterparts. This particularly applies to domains involving integer factorisation and discrete logarithms, such as public key cryptography.
In this paper we consider the threats a quantum-capable adversary could pose to Bitcoin, which currently uses the Elliptic Curve Digital Signature Algorithm (ECDSA) to sign transactions.
We then propose a simple but slow commit-delay-reveal protocol, which allows users to securely move their funds from old (non-quantum-resistant) outputs to those adhering to a quantum-resistant digital signature scheme.
The transition protocol functions even if ECDSA has already been compromised.
While our scheme requires modifications to the Bitcoin protocol, these can be implemented as a soft fork.
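A schematic of the commit-delay-reveal flow (our own hash-based sketch with a hypothetical field layout, not the paper's exact transaction format):

    import hashlib

    def commitment(old_ecdsa_pubkey: bytes, qr_pubkey: bytes) -> bytes:
        """Phase 1 (commit): publish H(old key || new quantum-resistant key)
        on-chain, revealing neither the link between them nor the new key."""
        return hashlib.sha256(b"commit:" + old_ecdsa_pubkey + qr_pubkey).digest()

    def accept_reveal(chain_commitment: bytes, commit_height: int,
                      reveal_height: int, delay: int,
                      old_ecdsa_pubkey: bytes, qr_pubkey: bytes) -> bool:
        """Phase 3 (reveal): after the mandated delay (phase 2), the spender
        presents both keys; validators accept only a sufficiently aged,
        matching commitment. A quantum attacker who cracks the ECDSA key at
        reveal time cannot retroactively insert an older commitment."""
        aged = reveal_height - commit_height >= delay
        return aged and chain_commitment == commitment(old_ecdsa_pubkey, qr_pubkey)

    c = commitment(b"old-key", b"new-pq-key")
    assert accept_reveal(c, commit_height=100, reveal_height=1100, delay=1000,
                         old_ecdsa_pubkey=b"old-key", qr_pubkey=b"new-pq-key")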
How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts
In this work, we examine the feasibility of secure and undetectable point-to-point communication in a world where governments can read all the encrypted communications of their citizens. We consider a world where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt?
We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or they are using the mandated encryption scheme for subliminal communication.
Our topics may be thought to fall broadly within the realm of steganography: the science of hiding secret communication within innocent-looking messages, or cover objects. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages). Each construction leverages the assumed semantic security of the adversarially chosen encryption scheme, in order to achieve subliminal communication.
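To make the subliminal idea concrete, here is the classic rejection-sampling trick in toy form (a generic illustration, not the paper's constructions, which build the shared key via key exchange with pseudorandom messages; `encrypt` is a stand-in for the mandated scheme, with fresh randomness playing the role that semantic security guarantees):

    import hashlib, os

    def mac_bit(key: bytes, ct: bytes) -> int:
        """One keyed bit of the ciphertext; readable only with the shared key."""
        return hashlib.sha256(key + ct).digest()[0] & 1

    def encrypt(msg: str) -> bytes:
        """Stand-in for the mandated scheme: semantic security means fresh
        encryptions of the same message look like fresh random ciphertexts."""
        return hashlib.sha256(msg.encode() + os.urandom(16)).digest()

    def embed(key: bytes, innocent_msg: str, secret_bit: int) -> bytes:
        # Rejection-sample re-encryptions of the *same* innocent message until
        # the keyed bit matches; an authority decrypting the traffic sees only
        # the innocent conversation either way (2 tries expected per bit).
        while True:
            ct = encrypt(innocent_msg)
            if mac_bit(key, ct) == secret_bit:
                return ct

    key = b"subliminally-agreed-key"
    ct = embed(key, "lovely weather today", secret_bit=1)
    assert mac_bit(key, ct) == 1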
Number "Not Used" Once - Practical fault attack on pqm4 implementations of NIST candidates
In this paper, we demonstrate practical fault attacks on a number of lattice-based schemes, in particular NewHope, Kyber, Frodo and Dilithium, which are based on the hardness of the Learning with Errors (LWE) problem. One common trait of all the considered LWE schemes is the use of nonces as domain separators to sample the secret components of the LWE instance. We show that simple faults targeting the usage of the nonce can result in a nonce-reuse scenario which allows key recovery and message recovery attacks. To the best of our knowledge, we propose the first practical fault attack on lattice-based key encapsulation schemes secure in the CCA model. We perform experimental validation of our attack using electromagnetic fault injection on reference implementations of the aforementioned schemes taken from the pqm4 library, a benchmarking and testing framework for post-quantum cryptographic implementations for the ARM Cortex-M4. We use the instruction-skip fault model, which is very practical and popular for microcontroller-based implementations. Our attack requires injecting very few faults (fewer than 10 for recommended parameter sets) and can be repeated with 100% accuracy with our electromagnetic fault injection setup.
A Simple Obfuscation Scheme for Pattern-Matching with Wildcards
We give a simple and efficient method for obfuscating pattern matching with wildcards. In other words, we construct a way to check an input against a secret pattern, which is described in terms of prescribed values interspersed with unconstrained “wildcard” slots. As long as the support of the pattern is sufficiently sparse and the pattern itself is chosen from an appropriate distribution, we prove that a polynomial-time adversary cannot find a matching input, except with negligible probability. We rely upon the generic group heuristic (in a regular group, with no multilinearity). Previous work provided less efficient constructions based on multilinear maps or LWE.
CALYPSO: Private Data Management for Decentralized Ledgers
Distributed ledger technologies provide high availability and
integrity, making them a key enabler for practical and secure
computation of distributed workloads among mutually distrustful parties.
However, many practical applications also
require confidentiality, the third pillar of the CIA triad. In
this work, we enhance permissioned and permissionless blockchains with the ability to manage confidential data without
forfeiting availability or decentralization. More specifically,
CALYPSO sets out to achieve two orthogonal goals that
challenge modern distributed ledgers: (a) enable blockchains
to auditably manage secrets and (b) protect distributed computations against arbitrage attacks when their results depend
on the ordering and secrecy of inputs.
To this end, CALYPSO proposes on-chain secrets, a novel
abstraction that enforces atomic deposition of an auditable
trace whenever users access confidential data. Furthermore,
CALYPSO provides user-controlled consent management
that ensures revocation atomicity and accountable anonymity.
Finally, to enable the permissionless deployment of CALYPSO, we introduce an incentive scheme and provide users
with the option to select their preferred trustees. We evaluated our CALYPSO prototype with a confidential document sharing application and a decentralized lottery. Our benchmarks show that the latency of processing transactions increases linearly with the added security (number of trustees) and is in the range of 0.2 to 8 seconds for 16 to 128 trustees.
TinyKeys: A New Approach to Efficient Multi-Party Computation
We present a new approach to designing concretely efficient MPC protocols with semi-honest security in the dishonest majority setting. Motivated by the fact that within the dishonest majority setting the efficiency of most practical protocols does not depend on the number of honest parties, we investigate how to construct protocols which improve in efficiency as the number of honest parties increases. Our central idea is to take a protocol which is secure against all-but-one corruptions and modify it to use short symmetric keys, with the aim of basing security on the concatenation of all honest parties' keys. This results in a more efficient protocol tolerating fewer corruptions, whilst also introducing an LPN-style syndrome decoding assumption.
We first apply this technique to a modified version of the semi-honest GMW protocol, using OT extension with short keys, to improve the efficiency of standard GMW with fewer corruptions. We also obtain more efficient constant-round MPC, using BMR-style garbled circuits with short keys, and present an implementation of the online phase of this protocol. Our techniques start to improve upon existing protocols as the numbers of parties and honest parties grow, and we obtain up to a 13 times reduction in communication complexity for our GMW variant, compared with the best-known GMW-based protocol modified to use the same threshold.
Non-Malleable Codes for Small-Depth Circuits
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. AC^0 tampering functions), our codes have codeword length k^{1+o(1)} for a k-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length 2^{O(sqrt(k))}. Our construction remains efficient for circuit depths as large as Theta(log(n)/log log(n)) (indeed, our codeword length remains k^{1+o(1)}), and extending our result beyond this would require separating P from NC^1.
We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction.
Reading in the Dark: Classifying Encrypted Digits with Functional Encryption
As machine learning grows into a ubiquitous technology with many interesting applications, the privacy of data is becoming a major concern. This paper deals with machine learning on encrypted data. Our contribution is twofold: we first build a new Functional Encryption scheme for quadratic multi-variate polynomials, which outperforms previous schemes. It enables the efficient computation of quadratic polynomials on encrypted vectors, so that only the result is revealed in the clear. We then turn to quadratic networks, a class of machine learning models, and show that their design makes them particularly suited to our encryption scheme. This synergy yields a technique for efficiently recovering a plaintext classification of encrypted data. Finally, we prototype our construction and run it on the MNIST dataset to demonstrate practical relevance. We obtain 97.54% accuracy, with decryption and encryption taking a few seconds.
Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time
A line of recent research starting with (Alwen and Serbinenko, STOC 2015) has deepened our understanding of the notion of memory-hardness in cryptography — a useful property of hash functions for deterring large-scale password-cracking attacks — and has shown memory-hardness to have intricate connections with the theory of graph pebbling. However, definitions of memory-hardness are not yet unified in this somewhat nascent field, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows.
First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read/written at runtime), and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function’s runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game (“black-magic pebble game”), and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple “cylinder” graph), providing an initial demonstration of practical feasibility for a limited range of parameter settings.
Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear tradeoffs can in fact cause adversaries to employ different strategies from linear tradeoffs. Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to log log n factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
Short Non-Malleable Codes from Related-Key Secure Block Ciphers
A non-malleable code is an unkeyed randomized encoding scheme that offers the strong guarantee that decoding a tampered codeword either results in the original message, or in an unrelated message. We consider the simplest possible construction in the computational split-state model, which simply encodes a message m as (k, E_k(m)) for a uniformly random key k, where E is a block cipher.
This construction is comparable to, but greatly simplifies over, the one of Kiayias et al. (ACM CCS 2016), who eschewed this simple scheme for fear of related-key attacks on E.
In this work, we prove this construction to be a strong non-malleable code as long as E is: (i) a pseudorandom permutation under leakage and (ii) related-key secure with respect to an arbitrary but fixed key relation. Both properties are believed to hold for "good" block ciphers, such as AES-128, making this non-malleable code very efficient with short codewords of length |m| + 2k (where k is the security parameter, e.g., 128 bits), without significant security penalty.
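As a minimal illustration of this encoding (not the authors' implementation), the sketch below instantiates E with AES-128 via the pycryptodome library; the one-block message length and the library choice are assumptions made for concreteness:

    # Sketch of the split-state encoding (k, E_k(m)) with E = AES-128.
    # Illustration only: non-malleability is argued in the paper's model,
    # not demonstrated by this code.
    from Crypto.Cipher import AES
    import os

    def encode(m: bytes):
        assert len(m) == 16                        # one AES block (assumption)
        k = os.urandom(16)                         # uniformly random key
        c = AES.new(k, AES.MODE_ECB).encrypt(m)    # E_k(m)
        return (k, c)                              # the two split states

    def decode(state):
        k, c = state
        return AES.new(k, AES.MODE_ECB).decrypt(c)

    assert decode(encode(b"sixteen byte msg")) == b"sixteen byte msg"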
Impeccable Circuits
By injecting faults, active physical attacks pose serious threats to cryptographic hardware, where Concurrent Error Detection (CED) schemes are promising countermeasures. They are usually based on an Error-Detecting Code (EDC) which enables detecting certain injected faults depending on the specification of the underlying code. Here, we propose a methodology to enable correct, practical, and robust implementation of code-based CEDs. We show that straightforward hardware implementations of given code-based CEDs can suffer from severe vulnerabilities, not providing the desired protection level. In particular, the propagation of faults into the combinatorial logic is often ignored in the security evaluation of these schemes. First, we formally define this detrimental effect and demonstrate its destructive impact. Second, we introduce an implementation strategy to limit the fault propagation effect. Third, in contrast to many other works where fault coverage is the main focus, we present a detailed implementation strategy which can guarantee the detection of any fault covered by the underlying EDC. This holds at any time during the computation and at any location in the circuit, both in the data processing and in the control unit. In short, we provide practical guidelines on how to construct efficient CED schemes with arbitrary EDCs that achieve the desired protection level. We practically evaluate the efficiency of our methodology through case studies covering different symmetric block ciphers and various linear EDCs.
Doing Real Work with FHE: The Case of Logistic Regression
We describe our recent experience, building a system that uses fully-homomorphic encryption (FHE) to approximate the coefficients of a logistic-regression model, built from genomic data. The aim of this project was to examine the feasibility of a solution that operates "deep within the bootstrapping regime," solving a problem that appears too hard to be addressed just with somewhat-homomorphic encryption.
As part of this project, we implemented optimized versions of many "bread and butter" FHE tools. These tools include binary arithmetic, comparisons, partial sorting, and low-precision approximation of "complicated functions" such as reciprocals and logarithms. Our eventual solution can handle thousands of records and hundreds of fields, and it takes a few hours to run. To achieve this performance we had to be extremely frugal with expensive bootstrapping and data-movement operations.
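To give a flavour of such low-precision routines, a reciprocal can be approximated using only additions and multiplications, which is what keeps the multiplicative depth manageable; the sketch below is a generic Newton iteration, not necessarily the paper's exact method:

    # Newton iteration for 1/a using only additions and multiplications,
    # the kind of routine that maps well onto homomorphic evaluation.
    # Initial guess and iteration count are illustrative assumptions.
    def reciprocal(a, iters=5):
        x = 0.1                      # crude start, assuming a is in (1, 10)
        for _ in range(iters):
            x = x * (2.0 - a * x)    # error roughly squares at every step
        return x

    print(reciprocal(4.0))           # ~0.25 after a few iterations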
We believe that our experience in this project could serve as a guide for what is or is not currently feasible to do with fully-homomorphic encryption.
Efficient Parallel Binary Operations on Homomorphic Encrypted Real Numbers
A number of homomorphic encryption application areas, such as privacy-preserving machine learning analysis in the cloud, could be better enabled if there existed a general solution for combining sufficiently expressive logical and numerical circuit primitives to form higher-level algorithms relevant to the application domain. Logical primitives are more efficient in a binary plaintext message space, whereas numeric primitives favour a word-based message space before encryption. In a step closer to an overall strategy of combining logical and numeric operation types, this paper examines accelerating binary operations on real numbers suitable for somewhat homomorphic encryption. A parallel solution based on SIMD can be used to efficiently perform addition, subtraction and comparison operations in a single step. The result maximises computational efficiency and memory utilisation while minimising multiplicative circuit depth. The performance of these primitives and their application in min-max and sorting operations is demonstrated. In sorting real numbers, a speed-up of 25-30 times is observed.
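The gate-level idea can be sketched in the clear: if slot i of every bit-plane holds one independent value, a single ripple-carry pass adds all slots at once using only the XOR and AND gates that a somewhat homomorphic scheme evaluates. The layout below is hypothetical; the paper's packing may differ:

    # Plain-Python simulation of SIMD bit-sliced addition: bit j of value i
    # sits at bits[j][i], so each list operation acts on all slots at once.
    def simd_add(a_bits, b_bits):
        n_slots = len(a_bits[0])
        carry = [0] * n_slots
        out = []
        for a, b in zip(a_bits, b_bits):          # bit-planes, LSB first
            s = [ai ^ bi ^ ci for ai, bi, ci in zip(a, b, carry)]
            carry = [(ai & bi) ^ (ci & (ai ^ bi))
                     for ai, bi, ci in zip(a, b, carry)]
            out.append(s)
        out.append(carry)                         # final carry-out plane
        return out

    # two slots computed in one pass: 3+1 and 2+3, LSB-first bit planes
    a = [[1, 0], [1, 1]]   # slot0 = 3 (binary 11), slot1 = 2 (binary 10)
    b = [[1, 1], [0, 1]]   # slot0 = 1 (binary 01), slot1 = 3 (binary 11)
    print(simd_add(a, b))  # slot0 = 4 (binary 100), slot1 = 5 (binary 101)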
Hermes: a framework for cryptographically assured access control and data security
This paper presents Hermes – a practical data security scheme with a reference implementation, which enables distributed sharing and collaboration, enforcing access control with the help of cryptographic methods (public key cryptography and traditional symmetric cryptography).
Bloom Filter Encryption and Applications to Efficient Forward-Secret 0-RTT Key Exchange
Forward secrecy is considered an essential design goal of modern key establishment (KE) protocols, such as TLS 1.3, for example. Furthermore, efficiency considerations such as zero round-trip time (0-RTT), where a client is able to send cryptographically protected payload data along with the very first KE message, are motivated by the practical demand for secure low-latency communication.
For a long time, it was unclear whether protocols that simultaneously achieve 0-RTT and full forward secrecy exist. Only recently, the first forward-secret 0-RTT protocol was described by Günther et al. (Eurocrypt 2017). It is based on Puncturable Encryption. Forward secrecy is achieved by "puncturing" the secret key after each decryption operation, such that a given ciphertext can only be decrypted once (cf. also Green and Miers, S&P 2015). Unfortunately, their scheme is completely impractical, since one puncturing operation takes between 30 seconds and several minutes for reasonable security and deployment parameters, such that this solution is only a first feasibility result, but not efficient enough to be deployed in practice.
In this paper, we introduce a new primitive that we term Bloom Filter Encryption (BFE), which is derived from the probabilistic Bloom filter data structure. We describe different constructions of BFE schemes, and show how these yield new puncturable encryption mechanisms with extremely efficient puncturing. Most importantly, a puncturing operation only involves a small number of very efficient computations, plus the deletion of certain parts of the secret key, which outperforms previous constructions by orders of magnitude. This gives rise to the first forward-secret 0-RTT protocols that are efficient enough to be deployed in practice. We believe that BFE will find applications beyond forward-secret 0-RTT protocols.
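The puncturing idea can be conveyed with a toy model; the sketch below (illustrative parameters and hashes, with all actual encryption omitted) keeps one sub-key per Bloom filter position and deletes the sub-keys selected by a punctured tag:

    # Toy model of Bloom-filter puncturing: a ciphertext for tag t stays
    # decryptable while at least one of t's positions still has its sub-key;
    # puncturing t deletes the sub-keys at all of t's positions.
    import hashlib, os

    M, K = 64, 3                                   # filter size, hash count

    def positions(tag: bytes):
        return [int.from_bytes(hashlib.sha256(bytes([i]) + tag).digest(), "big") % M
                for i in range(K)]

    sk = [os.urandom(16) for _ in range(M)]        # one sub-key per position

    def puncture(tag: bytes):
        for p in positions(tag):
            sk[p] = None                           # cheap: just delete key material

    def can_decrypt(tag: bytes) -> bool:
        return any(sk[p] is not None for p in positions(tag))

    t1, t2 = b"session-1", b"session-2"
    puncture(t1)
    print(can_decrypt(t1))   # False: forward secrecy for t1
    print(can_decrypt(t2))   # True, except with small (false-positive) probability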
A Key-recovery Attack on 855-round Trivium
In this paper, we propose a key-recovery attack on Trivium reduced to 855 rounds.
As the output is a complex Boolean polynomial over the secret key and IV bits, and it is hard to solve for the secret keys directly, we propose a novel nullification technique for the Boolean polynomial to reduce the output Boolean polynomial of 855-round Trivium. Then we determine the degree upper bound of the reduced nonlinear Boolean polynomial and detect the right keys. These techniques are applicable to most stream ciphers based on nonlinear feedback shift registers (NFSR). Our attack on 855-round Trivium costs time complexity 2^77. As far as we know, this is the best key-recovery attack on round-reduced Trivium.
To verify our attack, we also give some experimental data on 721-round reduced Trivium.
Green Mining: toward a less energetic impact of cryptocurrencies
While cryptocurrencies continue to gain popularity, their energy cost is increasingly becoming unsustainable. In this paper, we present an innovative scheme which eliminates the burden of the proof of work, the main cause of the energy waste in cryptocurrencies such as Bitcoin. The scheme is based on a green leader election scheme which guarantees a bounded average number of simultaneous miners regardless of the size of the population in competition.
Non-Profiled Deep Learning-Based Side-Channel Attacks
Deep Learning has recently been introduced as a new alternative for performing side-channel analysis. Until now, studies have focused on applying Deep Learning techniques to perform profiled side-channel attacks, where an attacker has full control of a profiling device and is able to collect a large number of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a non-profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method is that it makes it possible to use the power of Deep Learning and neural networks in a non-profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks against de-synchronized traces, and to use data augmentation techniques during non-profiled side-channel attacks as well. Additionally, this work shows that in some conditions the method can outperform classic non-profiled attacks such as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakage-combination pre-processing and with fewer assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and real traces collected from the ChipWhisperer board and from the ASCAD database. The results of our experiments demonstrate the interest of this new method and show that the attack can be performed in practice.
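A schematic rendering of this attack loop is sketched below, under assumed trace and plaintext arrays, a Hamming-weight leakage model and an arbitrary small network; the paper's exact metrics and architectures may differ:

    # For each key-byte guess, label the traces under that guess and train a
    # small network; the correct guess yields the most learnable labels, so
    # the training metric singles it out. S-box table is passed in (e.g. AES).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def ddla(traces, plaintexts, sbox):
        # traces: (n_traces, n_samples) array; plaintexts: n_traces bytes
        scores = []
        for k in range(256):                          # all key-byte guesses
            labels = [bin(sbox[p ^ k]).count("1") for p in plaintexts]
            clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=200)
            clf.fit(traces, labels)
            scores.append(clf.score(traces, labels))  # training accuracy as metric
        return int(np.argmax(scores))                 # best-learned key guess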
Breach-Resistant Structured Encryption
Motivated by the problem of data breaches, we formalize a notion of security for dynamic structured encryption (STE) schemes that guarantees security against a snapshot adversary; that is, an adversary that receives a copy of the encrypted structure at various times but does not see the transcripts related to any queries. In particular, we focus on the construction of dynamic encrypted multi-maps which are used to build efficient searchable symmetric encryption schemes, graph encryption schemes and encrypted relational databases. Interestingly, we show that a form of snapshot security we refer to as breach resistance implies previously-studied notions such as a weaker version of history independence and write-only obliviousness.
Moreover, we initiate the study of dual-secure dynamic STE constructions: schemes that are forward-private against a persistent adversary and breach-resistant against a snapshot adversary. The notion of forward privacy guarantees that updates to the encrypted structure do not reveal their association to any query made in the past. As a concrete instantiation, we propose a new dual-secure dynamic multi-map encryption scheme that outperforms all existing constructions, including schemes that are not dual-secure. Our construction has query complexity that grows with the selectivity of the query and the number of deletes since the client last executed a linear-time rebuild protocol, which can be de-amortized.
We implemented our scheme (with the de-amortized rebuild protocol) and evaluated its concrete efficiency empirically. Our experiments show that it is highly efficient with queries taking less than 1 microsecond per label/value pair.
Proofs of Catalytic Space
Proofs of space (PoS) [DFKP15] are proof systems where a prover can convince a verifier that he "wastes" disk space. PoS were introduced as a more ecological and economical replacement for proofs of work, which are currently used to secure blockchains like Bitcoin. In this work we investigate extensions of PoS which allow the prover to embed useful data into the dedicated space, which later can be recovered.
The first contribution of this paper is a security proof for the PoS from [DFKP15] in the random oracle model (the original proof only applied to a restricted class of adversaries which can store a subset of the data an honest prover would store). When this PoS is instantiated with recent constructions of maximally depth-robust graphs, our proof implies basically optimal security.
As a second contribution we introduce and construct proofs of catalytic space (PoCS), which are defined like classical PoS, but most of the space required by the prover can at the same time be used to store useful data. Our first construction has almost no overhead (i.e., the useful data is almost as large as the dedicated space), whereas our second construction has a slightly larger overhead, but allows for efficient updates of the data. Our constructions are extensions of the [DFKP15] PoS, and our tight proof for the PoS extends (non-trivially) to the PoCS.
As our last contribution we construct a proof of replication (PoR); coming up with such an object was recently stated as an open problem in the Filecoin paper. This construction (and its proof) is also an extension of the [DFKP15] PoS.
A New Family of Pairing-Friendly Elliptic Curves
There have been recent advances in solving the finite extension field discrete logarithm problem as it arises in the context of pairing-friendly elliptic curves. This has led to the abandonment of approaches based on supersingular curves of small characteristic, and to the reconsideration of the field sizes required for implementation based on non-supersingular curves of large characteristic. This has resulted in a revision of recommendations for suitable curves, particularly at higher levels of security. Indeed, for AES-256 levels of security the BLS48 curves have been suggested, and demonstrated to be superior to other candidates. These curves have an embedding degree of 48. The well-known taxonomy of Freeman, Scott and Teske only considered curves with embedding degrees up to 50. Given some uncertainty around the constants that apply to the best discrete logarithm algorithm, it would seem prudent to push a little beyond 50. In this note we announce the discovery of a new family of pairing-friendly elliptic curves which includes a new construction for a curve with an embedding degree of 54.
SoK: unraveling Bitcoin smart contracts
Although the primary usage of Bitcoin is to exchange currency, its blockchain and consensus mechanism can also be exploited to securely execute some forms of smart contracts. These are agreements among mutually distrusting parties, which can be automatically enforced without resorting to a trusted intermediary. Over the last few years a variety of smart contracts for Bitcoin have been proposed, both by the academic community and by the developer community. However, the heterogeneity in their treatment, the informal (often incomplete or imprecise) descriptions, and the use of poorly documented Bitcoin features pose obstacles to research. In this paper we present a comprehensive survey of smart contracts on Bitcoin, in a uniform framework. Our treatment is based on a new formal specification language for smart contracts, which also helps us to highlight some subtleties in existing informal descriptions, making a step towards automatic verification. We discuss some obstacles to the diffusion of smart contracts on Bitcoin, and we identify the most promising open research challenges.
Signatures with Flexible Public Key: Introducing Equivalence Classes for Public Keys
We introduce a new cryptographic primitive called signatures with flexible public key (SFPK). We divide the key space into equivalence classes induced by a relation R. A signer can efficiently change his or her key pair to a different representative of the same class, but without a trapdoor it is hard to distinguish whether two public keys are related. Our primitive is motivated by structure-preserving signatures on equivalence classes (SPS-EQ), where the partitioning is done on the message space. Therefore, the two definitions are complementary and their combination has various applications.
We first show how to efficiently construct static group signatures and self-blindable certificates by combining the two primitives. When properly instantiated, the result is a group signature scheme that has a shorter signature size than the current state-of-the-art scheme by Libert, Peters, and Yung from Crypto'15, but is secure in the same setting.
In its own right, our primitive has stand-alone applications in the cryptocurrency domain, where it can be seen as a straightforward formalization of so-called stealth addresses. Finally, it can be used to build the first efficient ring signature scheme in the plain model without trusted setup, where the signature size depends only sub-linearly on the number of ring members. Thus, we solve an open problem stated by Malavolta and Schröder at ASIACRYPT 2017.
New Lower Bounds on Predicate Entropy for Function Private Public-Key Predicate Encryption
We present function private public-key predicate encryption schemes from standard cryptographic assumptions that achieve new lower bounds on the min-entropy of underlying predicate distributions. Existing function private predicate encryption constructions in the public-key setting can be divided into two broad categories. The first category of constructions are based on standard assumptions, but impose highly stringent requirements on the min-entropy of predicate distributions, thereby limiting their applicability in the context of real-world predicates. For example, the statistically function private constructions of Boneh, Raghunathan and Segev (CRYPTO'13 and ASIACRYPT'13) are inherently restricted to predicate distributions with min-entropy roughly proportional to the security parameter lambda. The second category of constructions mandate more relaxed min-entropy requirements, but are either based on non-standard assumptions (such as indistinguishability obfuscation) or are secure in the generic group model. In this paper, we affirmatively bridge the gap between these categories by presenting new public-key constructions for identity-based encryption, hidden-vector encryption, and subspace-membership encryption (a generalization of inner-product encryption) that are both data and function private under variants of the well-known DBDH, DLIN and matrix DDH assumptions, while relaxing the min-entropy requirement on the predicate distributions to omega(log lambda). In summary, we establish that the minimum predicate entropy necessary for any meaningful notion of function privacy in the public-key setting is, in fact, also sufficient, for a fairly rich class of predicates.
Threshold Implementation in Software - Case Study of PRESENT
Masking is one of the predominantly deployed countermeasures to prevent side-channel analysis (SCA) attacks. Over the years, various masking schemes have been proposed. However, the implementation of Boolean masking schemes has proven to be difficult, in particular for embedded devices, due to undisclosed architecture details and device internals. In this article, we investigate the application of Threshold Implementation (TI) in terms of Boolean masking in software, using the PRESENT cipher as a case study. Since TI has proven to be a proper solution for implementing Boolean masking in hardware circuits, we apply the same concept to software implementations and compare it to classical first- and second-order Boolean masking schemes. Ultimately, our practical security evaluations reveal that among all our considered implementation variants only the TI can provide first-order security, while all others still exhibit detectable first-order leakage.
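For reference, the following is the classical first-order TI sharing of a single AND gate, the style of gadget such a software TI builds on (a standard textbook gadget, shown here only as an illustration):

    # First-order TI of y = a AND b with 3 shares: each output share omits one
    # input share index (non-completeness), and the XOR of the outputs equals
    # the AND of the XORs of the inputs (correctness, checked exhaustively).
    import itertools

    def ti_and(a, b):
        a1, a2, a3 = a              # a = a1 ^ a2 ^ a3
        b1, b2, b3 = b              # b = b1 ^ b2 ^ b3
        y1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # independent of share 1
        y2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)   # independent of share 2
        y3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # independent of share 3
        return (y1, y2, y3)

    for bits in itertools.product((0, 1), repeat=6):
        a, b = bits[:3], bits[3:]
        y = ti_and(a, b)
        assert y[0] ^ y[1] ^ y[2] == (a[0] ^ a[1] ^ a[2]) & (b[0] ^ b[1] ^ b[2])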
Kissing numbers and transference theorems from generalized tail bounds
We generalize Banaszczyk's seminal tail bound for the Gaussian mass of a lattice to a wide class of test functions. From this we obtain quite general transference bounds, as well as bounds on the number of lattice points contained in certain bodies. As applications, we bound the lattice kissing number in l_p norms, and we also give a proof of a new transference bound in the l_1 norm.
Making Groth's zk-SNARK Simulation Extractable in the Random Oracle Model
We describe a variant of Groth's zk-SNARK [Groth, Eurocrypt 2016] that satisfies simulation extractability, which is a strong form of adaptive non-malleability. The proving time is almost identical to [Groth] and requires only two additional group operations. Our proof consists of 5 group elements rather than 3 as in [Groth], and the security proof requires the random oracle model.
RKHD ElGamal signing and 1-way sums
An ECDSA modification with signing equation has the properties that the signer avoids modular inversion and that passive universal forgery is equivalent to inverting a sum of two functions with freely independent inputs.
Let and where is an integer representation of the point . The free sum of and is . A RKHD signature verifies if and only if , where is the hash of the message and is the public key. So RKHD security relies upon, among other things, the assumption that free sum is 1-way (or unforgoable, to be precise).
Other free sums are 1-way under plausible assumptions: elliptic curve discrete logs, integer factoring, and secure small-key Wegman-Carter-Shoup authentication. Yet other free sums of 1-way functions (integer-factoring based) fail to be 1-way. The ease with which these free sums arise hints at the ease of determining RKHD security.
RKHD signatures are very similar to ECGDSA (an elliptic curve version of Agnew-Mullin-Vanstone signatures): forgers of the two schemes are algorithmically equivalent. But ECGDSA requires the signer to do one modular inversion, a small implementation security risk.
A privacy-preserving method for temporarily linking/revoking pseudonym certificates in vehicular networks
Vehicular communication (V2X) technologies are expected to become increasingly common in the future. Although they enable improvements in transportation safety and efficiency, the large-scale deployment of V2X requires addressing some challenges. In particular, to prevent abuse by drivers and by the system itself, V2X architectures must:
(1) ensure the authenticity of messages, which is usually accomplished by means of digital certification; and
(2) preserve the privacy of honest users, so owners of non-revoked certificates cannot be easily identified and tracked by eavesdroppers. A promising design to address these requirements is the Security Credential Management System (SCMS), which is currently among the main candidates for protecting V2X communications in the United States. Even though SCMS provides efficient, scalable and privacy-preserving mechanisms for managing V2X-oriented certificates, in this article we show that its certificate revocation process can be further enhanced. Namely, we present two birthday attacks against SCMS's revocation process, both of which degrade the system's security as time passes and more certificates are revoked. We then describe an alternative design to prevent such security degradation with minimal computational overhead. In complement to these security gains, we also describe a mechanism for improving the flexibility of the revocation procedure, allowing certificates (as well as their owner's privacy) to be temporarily revoked in an efficient manner. This should be useful, for example, to implement suspension mechanisms or to aid in investigations by law-enforcement authorities.
Can you find the one for me? Privacy-Preserving Matchmaking via Threshold PSI
Private set-intersection (PSI) allows a client to learn only the intersection between his/her set and the set of another party, while the latter party learns nothing. We aim to enhance PSI in several dimensions, motivated by the use cases of increasingly popular online matchmaking: meeting "the one" who possesses all desired qualities and is free from any undesirable attributes may be a bit idealistic. In this paper, we realize over- (resp. below-) threshold PSI, such that the client learns the intersection (or other auxiliary private data) only when the size of the intersection is larger (resp. smaller) than a threshold t. The threshold corresponds to tunable criteria for (mis)matching, without marking all possible attributes as desired or not. In other words, the matching criteria are in a succinct form and the matching computation does not exhaust the whole universe of attributes. To the best of our knowledge, our constructions are the very first solution for these two open problems posed by Bradley et al. (SCN '16) and Zhao and Chow (PoPETS '17), without resorting to the asymptotically less efficient generic approach from garbled circuits.
Moreover, we consider an "outsourced" setting in which a service provider coordinates the PSI execution, instead of having two strangers be online simultaneously to run a highly interactive PSI directly with each other. Outsourcing our protocols is arguably optimal: the two users perform only a small number of decryptions, for unlocking the private set and the outcome of matching.
Simple Proofs of Sequential Work
At ITCS 2013, Mahmoody, Moran and Vadhan [MMV'13] introduced and constructed publicly verifiable proofs of sequential work, which are protocols for proving that one has spent sequential computational work related to some statement.
The original motivation for such proofs included non-interactive time-stamping and universally verifiable CPU benchmarks. A more recent application, and our main motivation, are blockchain designs, where proofs of sequential work can be used -- in combination with proofs of space -- as a more ecological and economical substitute for proofs of work which are currently used to secure Bitcoin and other cryptocurrencies.
The construction proposed by [MMV'13] is based on a hash function and can be proven secure in the random oracle model, or assuming inherently sequential hash-functions, which is a new standard model assumption introduced in their work.
In a proof of sequential work, a prover gets a "statement" chi, a time parameter N and access to a hash-function H, which for the security proof is modelled as a random oracle. Correctness requires that an honest prover can make a verifier accept making only N queries to H, while soundness requires that any prover who makes the verifier accept must have made (almost) N sequential queries to H. Thus a solution constitutes a proof that N units of time have passed since chi was received. Solutions must be publicly verifiable in time at most polylogarithmic in N.
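The sequential part of the prover's job is easy to illustrate; the bare sketch below performs the N chained queries only, whereas the real construction additionally commits to the computed labels (e.g. in a Merkle tree) so that verification takes time polylogarithmic in N:

    # Each query's input depends on the previous output, so the N calls
    # cannot be parallelised; this models only the prover's sequential work.
    import hashlib

    def sequential_work(statement: bytes, N: int) -> bytes:
        x = statement
        for _ in range(N):                 # N inherently sequential queries
            x = hashlib.sha256(x).digest()
        return x                           # final label of the chain

    print(sequential_work(b"chi", 1 << 16).hex())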
The construction of [MMV'13] is based on "depth-robust" graphs, and as a consequence has rather poor concrete parameters. But the major drawback is that the prover needs not just N time, but also N space to compute a proof.
In this work we propose a proof of sequential work which is much simpler, more efficient and achieves much better concrete bounds. Most importantly, the space required can be as small as log(N) (but we get better soundness using slightly more memory than that).
An open problem stated by [MMV'13] that our construction does not solve either is achieving a "unique" proof, where even a cheating prover can only generate a single accepting proof. This property would be extremely useful for applications to blockchains.
Truncated Differential Properties of the Diagonal Set of Inputs for 5-round AES
In the last couple of years, a new wave of results appeared, proposing and exploiting new properties of round-reduced AES.
In this paper we survey and combine some of these results (namely, the multiple-of-n property and the mixture differential cryptanalysis) in a systematic way in order to answer more general questions regarding the probability distribution of encrypted diagonal sets.
This allows us to analyze this special set of inputs, and to report on new properties regarding the probability distribution of the number of pairs of corresponding ciphertexts that are equal in certain anti-diagonal(s) after 5 rounds. An immediate corollary of the multiple-of-8 property is that the variance of such a distribution can be shown to be higher than for a random permutation. Surprisingly, the mean of the distribution is also significantly different from random, something which cannot be explained by the multiple-of-8 property. We propose a theoretical explanation of this, assuming an APN-like property for an S-box which closely resembles the AES S-box.
By combining the multiple-of-8 property, the mixture differential approach, and the results just mentioned about the mean and the variance, we are finally able to formulate the probability distribution of the diagonal set after 5-round AES as a sum of independent binomial distributions.
Rasta: A cipher with low ANDdepth and few ANDs per bit
Recent developments in multi-party computation (MPC) and fully homomorphic encryption (FHE) have promoted the design and analysis of symmetric cryptographic schemes that minimize multiplications in one way or another. In this paper, we propose Rasta, a design strategy for symmetric encryption that has ANDdepth d and at the same time needs only d ANDs per encrypted bit. Even for very low values of d between 2 and 6 we can give strong evidence that attacks may not exist. This contributes to a better understanding of the limits of what concrete symmetric-key constructions can theoretically achieve with respect to AND-related metrics, and is to the best of our knowledge the first attempt to minimize both metrics simultaneously. Furthermore, we give evidence that for choices of d between 4 and 6 the resulting implementation properties may well be competitive, by testing our construction in the use-case of removing the large ciphertext expansion when using the BGV scheme.
Two-Round Multiparty Secure Computation Minimizing Public Key Operations
We show new constructions of semi-honest and malicious two-round multiparty secure computation protocols using only (a fixed) poly(n, lambda) invocations of a two-round oblivious transfer protocol (which uses expensive public-key operations) and poly(lambda, |C|) cheaper one-way function calls, where lambda is the security parameter, n is the number of parties, and C is the circuit being computed. All previously known two-round multiparty secure computation protocols required poly(lambda, |C|) expensive public-key operations.
Efficient and Constant-Rounds Secure Comparison through Dynamic Groups and Asymmetric Computations
Within recent years, secure comparison protocols have been proposed using binary decomposition and properties of algebraic fields. These have been repeatedly optimized and increased in efficiency, but have seemingly reached a plateau. We propose a new approach to this problem that takes advantage of dynamic group sizes for intermediate calculations and asymmetric computations among the participating parties. As a consequence, according to our analysis, communication and computation costs are brought to a very low and efficient level. In particular, the communication costs are considerably reduced both in their order and in the order of magnitude of the dominating term. In addition, our proposed protocol requires no invocations of secure multi-party multiplication, in contrast to existing protocols, whose reliance on this primitive leads to inefficient constructions of secure comparison.
On the Use of Independent Component Analysis to Denoise Side-Channel Measurements
Independent Component Analysis (ICA) is a powerful technique for blind source separation. It has been successfully applied to signal-processing problems, such as feature extraction and noise reduction, in many different areas including medical signal processing and telecommunication. In this work, we propose a framework that applies ICA to denoise side-channel measurements and hence to reduce the complexity of key-recovery attacks.
Based on several case studies, we then demonstrate the overwhelming advantages of ICA with respect to commonly used preprocessing techniques such as singular spectrum analysis. We mainly target a masked software implementation of AES and an unprotected hardware one. Our results show a significant Signal-to-Noise Ratio (SNR) gain, which translates into a reduction in the number of traces needed for a successful side-channel attack. This establishes ICA as an important new tool for the security assessment of cryptographic implementations.
Fine-Tuning Decentralized Anonymous Payment Systems based on Arguments for Arithmetic Circuit Satisfiability
Digital currencies like Bitcoin and other blockchain-based systems provide means to record monetary transfers between accounts. In Bitcoin-like systems, transactions are published on a decentralized ledger and reveal the sender, receiver and amount of a transfer; hence such systems give only moderate anonymity guarantees.
Payment systems like ZCash attempt to offer much stronger anonymity by hiding the origin, destination and value of a payment. The ZCash system achieves strong anonymity mainly through its use of Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (ZK-SNARKs) of arithmetic circuit satisfiability. One drawback of ZCash is that the arithmetic circuit is rather large and thus requires a large common reference string and a complex prover for the ZK-SNARK. In fact, the memory and prover complexity is dominated by the ZK-SNARK in use and is mainly determined by the complexity of the circuit.
In this paper we design a Decentralized Anonymous Payment system (DAP), functionally similar to ZCash, but with significantly smaller arithmetic circuits, thus greatly reducing the memory and prover complexity of the system. Our construction is based on algebraic primitives, from the realm of elliptic-curve and lattice-based cryptography, whose satisfiability can be efficiently verified by an arithmetic circuit.
Scalable Key Rank Estimation (and Key Enumeration) Algorithm for Large Keys
Evaluation of security margins after a side-channel attack is an important step of side-channel resistance evaluation. The security margin indicates the brute-force effort needed to recover the key given the leakages. In recent years, several key rank estimation algorithms have been proposed. All these solutions offer an interesting trade-off between the tightness of the result and the time complexity for symmetric keys. Unfortunately, none of them has complexity linear in the number of subkeys, hence these solutions are slow for large (asymmetric) keys. In this paper, we present a key rank estimation algorithm with a reasonable trade-off between efficiency and tightness that is suitable for large keys. Moreover, by applying backtracking we obtain a parallel key enumeration algorithm.
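One common way to realize such an estimator is to convolve per-subkey histograms of log-probabilities; the sketch below follows that general approach (all shapes and the bin count are illustrative assumptions, not this paper's algorithm):

    # Histogram-convolution rank estimation: bin each subkey's candidate
    # log-probabilities, convolve the histograms to get the distribution of
    # full-key scores, then count keys estimated to score above the real key.
    import numpy as np

    def key_rank(logprobs, real_subkeys, bins=512):
        # logprobs: one array of per-candidate log-probabilities per subkey;
        # real_subkeys: the correct candidate index for each subkey.
        lo = min(float(lp.min()) for lp in logprobs)
        hi = max(float(lp.max()) for lp in logprobs)
        width = (hi - lo) / bins or 1.0
        bin_of = lambda v: min(int((v - lo) / width), bins - 1)
        hist = np.array([1.0])                    # distribution of summed bins
        real_bin = 0
        for lp, sk in zip(logprobs, real_subkeys):
            h = np.zeros(bins)
            for v in lp:
                h[bin_of(v)] += 1.0
            hist = np.convolve(hist, h)           # add this subkey's histogram
            real_bin += bin_of(lp[sk])
        return float(hist[real_bin + 1:].sum())   # keys estimated to score higher

    # demo: 16 subkeys with 256 candidates each, random scores
    rng = np.random.default_rng(0)
    lps = [np.log(rng.dirichlet(np.ones(256))) for _ in range(16)]
    print(key_rank(lps, [0] * 16))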
A New Framework for Finding Nonlinear Superpolies in Cube Attacks against Trivium-Like Ciphers
In this paper, we study experimental cube attacks against Trivium-like ciphers, focusing on improving the recovery of nonlinear superpolies. We first present a general framework in cube attacks to test nonlinear superpolies, by exploiting a kind of linearization technique. It is worth noting that, in the new framework, the complexities of testing and recovering nonlinear superpolies are almost the same as those of testing and recovering linear superpolies. To demonstrate the effectiveness of our new attack framework, we perform extensive experiments on Trivium, Kreyvium, and TriviA-SC-v2 respectively. We obtain several linear and quadratic superpolies for 802-round Trivium, which is the best experimental result against Trivium regarding the number of initialization rounds. For Kreyvium, it is shown that the probability of finding a quadratic superpoly using the new framework is twice as large as that of finding a linear superpoly. Hopefully, this new framework will provide some new insights on cube attacks against NFSR-based ciphers, and in particular make nonlinear superpolies potentially useful in future cube attacks.
Vectorizing Higher-Order Masking
The cost of higher-order masking as a countermeasure against side-channel attacks is often considered too high for practical scenarios, as protected implementations become very slow. At Eurocrypt 2017, the bounded moment leakage model was proposed to study the (theoretical) security of parallel implementations of masking schemes. Work at CHES 2017 then brought this to practice by considering an implementation of AES with 32 shares, bitsliced inside 32-bit registers of ARM Cortex-M processors. In this paper we show how the NEON vector instructions of larger ARM Cortex-A processors can be exploited to build much faster masked implementations of AES. Specifically, we present AES with 4 and 8 shares, which in theory provide security against 3rd and 7th-order attacks, respectively. The software is publicly available and optimized for the ARM Cortex-A8. We use refreshing and multiplication algorithms that are proven to be secure in the bounded moment leakage model and to be strongly non-interfering. Additionally, we perform a concrete side-channel evaluation on a BeagleBone Black, using a combination of test vector leakage assessment (TVLA), leakage certification tools and information-theoretic bounds.
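As an illustration of how share-wise computation vectorises, the sketch below runs an ISW-style masked AND on bitsliced words, with numpy standing in for NEON (the share count, word size and choice of gadget are assumptions made for illustration, not the paper's code):

    # ISW multiplication on d shares of bitsliced uint32 words: each word
    # carries 32 parallel circuit instances, so every operation below is
    # already "vectorised" across 32 evaluations.
    import numpy as np

    rng = np.random.default_rng()

    def isw_and(a, b):
        # a, b: lists of d share words; XOR of the shares is the secret word.
        d = len(a)
        c = [a[i] & b[i] for i in range(d)]
        for i in range(d):
            for j in range(i + 1, d):
                r = rng.integers(0, 1 << 32, dtype=np.uint32)   # fresh mask
                c[i] ^= r
                c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
        return c

    # correctness check with 4 shares
    a = [rng.integers(0, 1 << 32, dtype=np.uint32) for _ in range(4)]
    b = [rng.integers(0, 1 << 32, dtype=np.uint32) for _ in range(4)]
    c = isw_and(a, b)
    assert np.bitwise_xor.reduce(c) == np.bitwise_xor.reduce(a) & np.bitwise_xor.reduce(b)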
A First-Order SCA Resistant AES without Fresh Randomness
Since the advent of Differential Power Analysis (DPA) in the late 1990s, protecting embedded devices against Side-Channel Analysis (SCA) attacks has been a major research effort. Even though many different first-order secure masking schemes are available today, when applied to the AES S-box they all require fresh random bits in every evaluation. As the quality criteria for generating random numbers on an embedded device are not well understood, an integrated Random Number Generator (RNG) can be the weak spot of any protected implementation and may invalidate an otherwise secure implementation. We present a new construction based on Threshold Implementations and Changing of the Guards to realize a first-order secure AES with zero per-round randomness. Hence, our design does not need a built-in RNG, thereby enhancing security and reducing the overhead.
On the Complexity of Simulating Auxiliary Input
We construct a simulator for the problem of simulating auxiliary input whose complexity is better than in all previous results, and we prove optimality up to logarithmic factors by establishing a black-box lower bound. Specifically, let l be the length of the auxiliary input and epsilon be the indistinguishability parameter. Our simulator is O(2^l epsilon^(-2)) times more complicated than the distinguisher family (up to logarithmic factors). For the lower bound, we show that the relative complexity to the distinguisher of a simulator is at least Omega(2^l epsilon^(-2)), assuming the simulator is restricted to use the distinguishers in a black-box way and to satisfy a mild restriction.
On the Ring-LWE and Polynomial-LWE problems
The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, consisting in distinguishing from uniform a distribution depending on a secret belonging to the dual O_K^v of the ring of integers O_K of a specified number field K. In primal-RLWE, the secret instead belongs to O_K. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is not defined using a ring of integers O_K of a number field K but a polynomial ring Z[x]/f for a monic irreducible f in Z[x].
We show that there exist reductions between all of these six problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual to primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is non-uniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal RLWE and PLWE that work for a family of polynomials f that is exponentially large as a function of deg f (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search to decision reduction for RLWE for arbitrary number fields. The reductions incur error rate increases that depend on intrinsic quantities related to K and f.
Full Indifferentiable Security of the Xor of Two or More Random Permutations Using the χ² Method
The construction XORP (bitwise-xor of outputs of two independent n-bit random permutations) has gained broad attention over the last two decades due to its high security. Very recently, Dai et al. (CRYPTO'17), by using a method which they term the Chi-squared method (χ² method), have shown n-bit security of XORP when the underlying random permutations are kept secret from the adversary. In this work, we consider the case where the underlying random permutations are publicly available to the adversary. The best known security of XORP in this security game (also known as indifferentiable security) is 2n/3 bits, due to Mennink et al. (ACNS'15). Later, Lee (IEEE-IT'17) proved a better (k-1)n/k-bit security for the general construction XORP[k], which returns the xor of k independent random permutations. However, the security was shown only for the cases where k is an even integer. In this paper, we improve all these known bounds and prove full, i.e., n-bit (indifferentiable) security of XORP as well as XORP[k] for any k. Our main result is n-bit security of XORP, and we use the χ² method to prove it.
Statistical Witness Indistinguishability (and more) in Two Messages
Two-message witness indistinguishable protocols were first constructed by Dwork and Naor (FOCS 00). They have since proven extremely useful in the design of several cryptographic primitives. However, so far no two-message argument for NP has provided statistical privacy against malicious verifiers.
In this paper, we construct the first:
- Two-message statistical witness indistinguishable (SWI) arguments for NP.
- Two-message statistical zero-knowledge arguments for NP with super-polynomial simulation (Statistical SPS-ZK). These were previously believed to be impossible to construct via black-box reductions (Chung et al., ePrint 2012).
- Two-message statistical distributional weak zero-knowledge (SwZK) arguments for NP with polynomial simulation, where the instance is sampled by the prover in the second round.
These protocols are based on quasi-polynomial hardness of two-message oblivious transfer (OT) with game-based security, which can in turn be based on quasi-polynomial hardness of DDH, QR or N-th residuosity. We also demonstrate simple applications of these arguments to constructing more secure forms of oblivious transfer.
Along the way, we show that the Kalai and Raz (Crypto 09) transform compressing interactive proofs to two-message arguments can be generalized to compress certain types of interactive arguments. We introduce and construct a new technical tool, which is a variant of extractable two-message statistically hiding commitments, by extending the work of Khurana and Sahai (FOCS 17). These techniques may be of independent interest.
On the Existence of Three Round Zero-Knowledge Proofs
We study the round complexity of zero-knowledge (ZK) proof systems. While five round ZK proofs for NP are known from standard assumptions [Goldreich-Kahan, J. Cryptology'96], Katz [TCC'08] proved that four rounds are insufficient for this task w.r.t. black-box simulation.
In this work, we study the feasibility of ZK proofs using non-black-box simulation. Our main result is that three round private-coin ZK proofs for NP do not exist (even w.r.t. non-black-box simulation), under certain assumptions on program obfuscation. Our approach builds upon the recent work of Kalai et al. [Crypto'17] who ruled out constant round public-coin ZK proofs under the same assumptions as ours.
Optimal Forgeries Against Polynomial-Based MACs and GCM
Polynomial-based authentication algorithms, such as GCM and Poly1305, have seen widespread adoption in practice. Due to their importance, a significant amount of attention has been given to understanding and improving both proofs and attacks against such schemes. At EUROCRYPT 2005, Bernstein published the best known analysis of the schemes when instantiated with PRPs, thereby establishing the most lenient limits on the amount of data the schemes can process per key. A long line of work, initiated by Handschuh and Preneel at CRYPTO 2008, finds the best known attacks, advancing our understanding of the fragility of the schemes. Yet surprisingly, no known attacks perform as well as the predicted worst-case attacks allowed by Bernstein's analysis, nor has there been any advancement in proofs improving Bernstein's bounds, and the gap between attacks and analysis is significant. We settle the issue by finding a novel attack against polynomial-based authentication algorithms using PRPs, and combine it with new analysis, to show that Bernstein's bound, and our attacks, are optimal.
The Wonderful World of Global Random Oracles
The random-oracle model by Bellare and Rogaway (CCS'93) is an indispensable tool for the security analysis of practical cryptographic protocols. However, the traditional random-oracle model fails to guarantee security when a protocol is composed with arbitrary protocols that use the same random oracle. Canetti, Jain, and Scafuro (CCS'14) put forth a global but non-programmable random oracle in the Generalized UC framework and showed that some basic cryptographic primitives with composable security can be efficiently realized in their model. Because their random-oracle functionality is non-programmable, there are many practical protocols that have no hope of being proved secure using it. In this paper, we study alternative definitions of a global random oracle and, perhaps surprisingly, show that these allow one to prove the GUC-security of existing, very practical realizations of a number of essential cryptographic primitives, including public-key encryption, non-committing encryption, commitments, Schnorr signatures, and hash-and-invert signatures. Some of our results hold generically for any suitable scheme proven secure in the traditional ROM, some hold for specific constructions only. Our results include many highly practical protocols, for example, the folklore commitment scheme H(m|r) (where m is a message and r is the random opening information), which is far more efficient than the construction of Canetti et al.
An Efficiency-Preserving Transformation from Honest-Verifier Statistical Zero-Knowledge to Statistical Zero-Knowledge
We present an unconditional transformation from any honest-verifier statistical zero-knowledge (HVSZK) protocol to standard SZK that preserves round complexity and efficiency of both the verifier and the prover. This improves over currently known transformations, which either rely on some computational assumptions or introduce significant computational overhead. Our main conceptual contribution is the introduction of instance-dependent SZK proofs for NP, which serve as a building block in our transformation. Instance-dependent SZK for NP can be constructed unconditionally based on instance-dependent commitment schemes of Ong and Vadhan (TCC'08).
As an additional contribution, we give a simple constant-round SZK protocol for Statistical Difference resembling the textbook HVSZK proof of Sahai and Vadhan (J. ACM '03). This yields a conceptually simple constant-round protocol for all of SZK.
OPAQUE: An Asymmetric PAKE Protocol Secure Against Pre-Computation Attacks
Password-Authenticated Key Exchange (PAKE) protocols allow two parties that only share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) strengthens this notion for the more common client-server setting where the server stores a mapping of the password and security is required even upon server compromise, that is, the only allowed attack in this case is an (inevitable) offline exhaustive dictionary attack against individual user passwords. Unfortunately, current aPAKE protocols (that dispense with the use of servers' public keys) allow for pre-computation attacks that lead to the instantaneous compromise of user passwords upon server compromise, thus forgoing much of the intended aPAKE security. Indeed, these protocols use - in essential ways - deterministic password mappings or use random "salt" transmitted in the clear from servers to users, and thus are vulnerable to pre-computation attacks.
We initiate the study of "Strong aPAKE" protocols that are secure as aPAKEs but are additionally secure against pre-computation attacks. We formalize this notion in the Universally Composable (UC) setting and present two modular constructions using an Oblivious PRF as a main tool. The first builds a Strong aPAKE from any aPAKE (which in turn can be constructed from any PAKE [GMR'06]), while the second builds a Strong aPAKE from any authenticated key-exchange protocol secure against reverse impersonation (a.k.a. KCI). Using the latter transformation, we show a practical instantiation of a UC-secure Strong aPAKE in the Random Oracle model. The protocol ("OPAQUE") consists of 2 messages (3 with mutual authentication), requires 3 and 4 exponentiations for server and client, respectively (2 to 4 of which can be fixed-base depending on optimizations), provides forward secrecy, is PKI-free, supports user-side hash iterations, has a built-in facility for password-based storage and retrieval of secrets and credentials, and accommodates a user-transparent server-side threshold implementation.
Untagging Tor: A Formal Treatment of Onion Encryption
Tor is a primary tool for maintaining anonymity online. It provides a low-latency, circuit-based, bidirectional secure channel between two parties through a network of onion routers, with the aim of obscuring exactly who is talking to whom, even to adversaries controlling part of the network. Tor relies heavily on cryptographic techniques, yet its onion encryption scheme is susceptible to tagging attacks (Fu and Ling, 2009), which allow an active adversary controlling the first and last node of a circuit to deanonymize with near-certainty. This contrasts with less active traffic correlation attacks, where the same adversary can at best deanonymize with high probability. The Tor project has been actively looking to defend against tagging attacks and its most concrete alternative is proposal 261, which specifies a new onion encryption scheme based on a variable-input-length tweakable cipher.
We provide a formal treatment of low-latency, circuit-based onion encryption, relaxed to the unidirectional setting, by expanding existing secure channel notions to the new setting and introducing circuit hiding to capture the anonymity aspect of Tor. We demonstrate that circuit hiding prevents tagging attacks and show proposal 261's relay protocol is circuit hiding and thus resistant against tagging attacks.
Boomerang Connectivity Table: A New Cryptanalysis Tool
A boomerang attack is a cryptanalysis framework that regards a block cipher E as the composition of two sub-ciphers E1 o E0 and builds a particular characteristic for E with probability p^2 q^2 by combining differential characteristics for E0 and E1 with probability p and q, respectively.
Crucially, the validity of this figure rests on the assumption that the characteristics for E0 and E1 can be chosen independently. Indeed, Murphy has shown that independently chosen characteristics may turn out to be incompatible. On the other hand, several researchers have observed that the probability can be improved to pq around the boundary between E0 and E1 by considering a positive dependency of the two characteristics, e.g. the ladder switch and S-box switch by Biryukov and Khovratovich.
This phenomenon was later formalised by Dunkelman et al. as a sandwich attack that regards E as E1 o Em o E0, where Em satisfies some differential propagation among four texts with probability r, and the entire probability is p^2 q^2 r.
In this paper, we revisit the issue of the dependency of two characteristics in Em, and propose a new tool called the Boomerang Connectivity Table (BCT), which evaluates r in a systematic and easy-to-understand way when Em is composed of a single S-box layer. With the BCT, previous observations on the S-box, including the incompatibility, the ladder switch and the S-box switch, are represented in a unified manner. Moreover, the BCT can detect a new switching effect, which shows that the probability around the boundary may be even higher than p or q.
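Concretely, the BCT entry for an input difference di and output difference do counts the inputs x with S^(-1)(S(x) xor do) xor S^(-1)(S(x xor di) xor do) = di. The sketch below computes the table directly, using the 4-bit PRESENT S-box for brevity (8-bit S-boxes are handled analogously):

    # Boomerang Connectivity Table of an S-box, computed from its definition.
    def bct(sbox):
        n = len(sbox)
        inv = [0] * n
        for x, y in enumerate(sbox):
            inv[y] = x                              # S^(-1)
        table = [[0] * n for _ in range(n)]
        for di in range(n):
            for do in range(n):
                for x in range(n):
                    if inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di:
                        table[di][do] += 1
        return table

    PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
    t = bct(PRESENT)
    print(t[0][0])          # 16: the trivial (0, 0) entry always equals 2^n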
To illustrate the power of the BCT-based analysis, we improve boomerang attacks against Deoxys-BC, and disclose the mechanism behind an unsolved probability amplification for generating a quartet in SKINNY. Lastly, we discuss the issue of searching for S-boxes having good BCT and extending the analysis to modular addition.
DelegaTEE: Brokered Delegation Using Trusted Execution Environments
We introduce a new concept called brokered delegation. Brokered delegation allows users to flexibly delegate credentials and rights for a range of service providers to other users and third parties. We explore how brokered delegation can be implemented using novel trusted execution environments (TEEs). We introduce a system called DelegaTEE that enables users (Delegatees) to log into different online services using the credentials of other users (Owners). Credentials in DelegaTEE are never revealed to Delegatees and Owners can restrict access to their accounts using a range of rich, contextually dependent delegation policies.
DelegaTEE fundamentally shifts existing access control models for centralized online services. It does so by using TEEs to permit access delegation at the user's discretion. DelegaTEE thus effectively reduces mandatory access control (MAC) in this context to discretionary access control (DAC). The system demonstrates the significant potential for TEEs to create new forms of resource sharing around online services without direct support from those services.
We present a full implementation of DelegaTEE using Intel SGX and demonstrate its use in four real-world applications: email access (SMTP/IMAP), restricted website access using an HTTPS proxy, e-banking/credit card, and a third-party payment system (PayPal).
The Missing Difference Problem, and its Applications to Counter Mode Encryption
The counter mode (CTR) is a simple, efficient and widely used encryption mode using a block cipher. It comes with a security proof that guarantees no attacks up to the birthday bound (i.e. as long as the number of encrypted blocks \sigma satisfies \sigma \ll 2^{n/2}), and a matching attack that can distinguish plaintext/ciphertext pairs from random using about 2^{n/2} blocks of data.
The main goal of this paper is to study attacks against the counter mode beyond this simple distinguisher. We focus on message recovery attacks, with realistic assumptions about the capabilities of an adversary, and evaluate the full time complexity of the attacks rather than just the query complexity. Our main result is an attack to recover a block of message with complexity \tilde{O}(2^{n/2}). This shows that the actual security of CTR is similar to that of CBC, where collision attacks are well known to reveal information about the message.
To achieve this result, we study a simple algorithmic problem related to the security of the CTR mode: the missing difference problem. We give efficient algorithms for this problem in two practically relevant cases: where the missing difference is known to be in some linear subspace, and when the amount of data is higher than strictly required.
As a further application, we show that the second algorithm can also be used to break some polynomial MACs such as GMAC and Poly1305, with a universal forgery attack with complexity \tilde{O}(2^{2n/3}).
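The following toy experiment (an 8-bit random permutation standing in for the block cipher; at realistic block sizes the attack needs about 2^{n/2} data and the paper's algorithms) illustrates the structure the missing difference problem exploits: CTR keystream blocks never repeat, so the secret block is exactly the one difference that is never observed.

    # Toy missing-difference recovery of a CTR-encrypted secret block.
    import random

    random.seed(1)
    perm = list(range(256)); random.shuffle(perm)     # toy 8-bit block cipher E_k
    E = lambda ctr: perm[ctr]                         # keystream block at a counter

    S = 0x5A                                          # secret block, sent at counter 0
    c_secret = S ^ E(0)
    known = [(i, i * 17 % 256) for i in range(1, 256)]  # known blocks elsewhere

    # c_secret ^ c_i ^ m_i = S ^ (E(0) ^ E(ctr)), which is never equal to S
    # because keystream blocks are pairwise distinct.
    observed = {c_secret ^ (m ^ E(ctr)) ^ m for ctr, m in known}
    missing = set(range(256)) - observed
    assert missing == {S}                             # the never-seen value is the secret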
Correlation Cube Attacks: From Weak-Key Distinguisher to Key Recovery
In this paper, we describe a new variant of cube attacks called the correlation cube attack. The new attack recovers the secret key of a cryptosystem by exploiting conditional correlation properties between the superpoly of a cube and a specific set of low-degree polynomials that we call a basis, which has the property that the superpoly is a zero constant whenever all the polynomials in the basis are zero. We present a detailed procedure of the correlation cube attack for the general case, including how to find a basis of the superpoly of a given cube. One of the most significant advantages of this new analysis technique over other variants of cube attacks is that it converts a weak-key distinguisher into a key recovery attack.
As an illustration, we apply the attack to round-reduced variants of the stream cipher Trivium. Based on the tool of numeric mapping introduced by Liu at CRYPTO 2017, we develop a specific technique to efficiently find a basis of the superpoly of a given cube as well as a large set of potentially good cubes used in the attack on Trivium variants, and further set up deterministic or probabilistic equations on the key bits according to the conditional correlation properties between the superpolys of the cubes and their bases. For a variant when the number of initialization rounds is reduced from 1152 to 805, we can recover about 7-bit key information on average with time complexity 2^{44}, using 2^{45} keystream bits and preprocessing time 2^{51}. For a variant of Trivium reduced to 835 rounds, we can recover about 5-bit key information on average with the same complexity. All the attacks are practical and fully verified by experiments. To the best of our knowledge, they are thus far the best known key recovery attacks for these variants of Trivium, and this is the first time that a weak-key distinguisher on the Trivium stream cipher has been converted into a key recovery attack.
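As background for readers unfamiliar with cube attacks, the toy sketch below (a hypothetical three-variable output bit, not Trivium) shows the basic step that correlation cube attacks refine: XOR-summing the output over a cube in the public variables recovers the cube's superpoly in the key bits.

    # Toy cube-attack step: the sum over the cube {v0, v1} equals the superpoly k0 ^ k2.
    from itertools import product

    def f(k, v):  # toy output bit: v0*v1*(k0 ^ k2) ^ v2*k1 ^ k0*k1 ^ v0
        return (v[0] & v[1] & (k[0] ^ k[2])) ^ (v[2] & k[1]) ^ (k[0] & k[1]) ^ v[0]

    for key in product((0, 1), repeat=3):
        cube_sum = 0
        for v0, v1 in product((0, 1), repeat=2):   # sum over the cube variables
            cube_sum ^= f(key, (v0, v1, 0))        # non-cube variable v2 fixed to 0
        assert cube_sum == key[0] ^ key[2]         # the superpoly of this cube

All terms not divisible by the full cube monomial v0*v1 cancel in the XOR sum, which is why the sum isolates the superpoly; the correlation cube attack then relates such superpolys to a basis of low-degree polynomials in the key.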
ROYALE: A Framework for Universally Composable Card Games with Financial Rewards and Penalties Enforcement
While many tailor-made card game protocols are known, the vast majority of them suffer from three main issues: lack of mechanisms for distributing financial rewards and punishing cheaters, lack of composability guarantees, and little flexibility, focusing on the specific game of poker. Even though folklore holds that poker protocols can be used to play any card game, this conjecture remains unproven and, in fact, does not hold for a number of protocols (including recent results). We both tackle the problem of constructing protocols for general card games and initiate a treatment of such protocols in the Universal Composability (UC) framework, introducing an ideal functionality that captures general card games constructed from a set of core card operations. Based on this formalism, we introduce Royale, the first UC-secure protocol for general card games that supports enforcement of financial rewards and penalties. We remark that Royale also yields the first UC-secure poker protocol. Interestingly, Royale performs better than most previous works (which do not have composability guarantees), which we highlight through a detailed concrete complexity analysis and benchmarks from a prototype implementation.
A New Approach to Black-Box Concurrent Secure Computation
We consider the task of constructing concurrently composable protocols for general secure computation by making only black-box use of underlying cryptographic primitives. Existing approaches for this task first construct a black-box version of CCA-secure commitments which provide a strong form of concurrent security to the committed value(s). This strong form of security is then crucially used to construct higher level protocols such as concurrently secure OT/coin-tossing (and eventually all functionalities).
This work explores a fresh approach. We first aim to construct a concurrently-secure OT protocol whose concurrent security is proven directly using concurrent simulation techniques; in particular, it does not rely on the usual "non-polynomial oracles" of CCA-secure commitments. The notion of concurrent security we target is super-polynomial simulation (SPS). We show that such an OT protocol can be constructed from polynomial hardness assumptions in a black-box manner, and within a constant number of rounds. In fact, we only require the existence of (constant round) semi-honest OT and standard collision-resistant hash functions.
Next, we show that such an OT protocol is sufficient to obtain SPS-secure (concurrent) multiparty computation (MPC) for general functionalities. This transformation does not require any additional assumptions; it also maintains the black-box nature as well as the constant round feature of the original OT protocol.
Prior to our work, the only known black-box construction of constant-round concurrently composable MPC required stronger assumptions; namely, verifiable perfectly binding homomorphic commitment schemes and PKE with oblivious public-key generation.
Memory Lower Bounds of Reductions Revisited
In Crypto 2017, Auerbach et al. initiated the study of memory-tight reductions and proved two negative results on the memory-tightness of restricted black-box reductions from multi-challenge security to single-challenge security for signatures and an artificial hash function. In this paper, we revisit the results of Auerbach et al. and show that for a large class of reductions treating multi-challenge security, it is impossible to avoid a loss of memory-tightness unless we sacrifice running-time efficiency. Specifically, we show three lower bound results. Firstly, we show a memory lower bound for natural black-box reductions from the multi-challenge unforgeability of unique signatures to any computational assumption. Then we show a lower bound for restricted reductions from multi-challenge security to single-challenge security for a wide class of cryptographic primitives with unique keys in the multi-user setting. Finally, we extend the lower bound result of Auerbach et al. treating a hash function to one treating any hash function with a large domain.
Constrained PRFs for NC1 in Traditional Groups
We propose new constrained pseudorandom functions (CPRFs) in traditional groups. By traditional groups, we mean cyclic multiplicative groups of prime order that were widely used in the 1980s and 1990s (sometimes called "pairing-free" groups). Our main constructions are as follows.
- We propose a selectively single-key secure CPRF for circuits with depth O(log n) (that is, NC^1 circuits) in traditional groups, where n is the input size. It is secure under the L-decisional Diffie-Hellman inversion (L-DDHI) assumption in the group of quadratic residues QR_q and the decisional Diffie-Hellman (DDH) assumption in a traditional group of order q in the standard model.
- We propose a selectively single-key private bit-fixing CPRF in traditional groups. It is secure under the DDH assumption in any prime-order cyclic group in the standard model.
- We propose an adaptively single-key secure CPRF for NC^1 and a private bit-fixing CPRF in the random oracle model.
To achieve the security in the standard model, we develop a new technique using correlated-input secure hash functions.
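For readers new to the CPRF notion, the sketch below shows the classic GGM-style prefix-constrained PRF, where an intermediate tree seed serves as a constrained key; this is background only and works quite differently from the paper's group-based constructions.

    # GGM tree PRF with prefix-constrained keys (background illustration only).
    import hashlib

    def G(seed: bytes, bit: str) -> bytes:      # length-doubling PRG, split by bit
        return hashlib.sha256(seed + bit.encode()).digest()

    def prf(key: bytes, x: str) -> bytes:       # F_key(x) for a bitstring x
        s = key
        for bit in x:
            s = G(s, bit)
        return s

    def constrain(key: bytes, prefix: str) -> bytes:
        return prf(key, prefix)                 # the seed at the node for `prefix`

    def eval_constrained(ckey: bytes, prefix: str, x: str) -> bytes:
        assert x.startswith(prefix)             # only inputs under the prefix
        s = ckey
        for bit in x[len(prefix):]:
            s = G(s, bit)
        return s

    k = b"\x00" * 32
    ck = constrain(k, "10")
    assert eval_constrained(ck, "10", "10110") == prf(k, "10110")

The constrained key reveals nothing about values outside the prefix subtree, which is the security property that the paper achieves for the much richer NC^1 and bit-fixing constraint classes.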
Bootstrapping for Approximate Homomorphic Encryption
This paper extends the leveled homomorphic encryption scheme for approximate arithmetic of Cheon et al. (ASIACRYPT 2017) to a fully homomorphic encryption scheme; that is, we propose a new technique to refresh low-level ciphertexts based on Gentry's bootstrapping procedure.
The modular reduction operation is the main bottleneck in the homomorphic evaluation of the decryption circuit. We exploit a scaled sine function as an approximation of the modular reduction operation and present an efficient evaluation strategy. Our method requires only one homomorphic multiplication for each iteration, so the total computation cost grows linearly with the depth of the decryption circuit.
We also show how to recrypt packed ciphertexts on the RLWE construction with an open-source implementation. For example, it takes 139.8 seconds to refresh a ciphertext that encrypts 128 numbers with 12 bits of precision, yielding an amortized rate of 1.1 seconds per slot.
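The scaled-sine idea can be checked numerically. In the sketch below (plain floats standing in for ciphertexts; the parameters are chosen arbitrarily), the angle is scaled down by 2^K, exp(i*theta) is approximated by a short Taylor series, and K squarings, one multiplication per iteration, recover (q/2pi)*sin(2pi*x/q), which is close to the centered remainder x mod q when the remainder is small relative to q.

    # Numeric sketch of modular reduction via a scaled sine and angle doubling.
    import math

    q, x = 1024.0, 3 * 1024 + 5.0        # true centered remainder is 5
    K = 8
    theta = (2 * math.pi * x / q) / 2**K # scale the angle down by 2^K
    t = complex(0, theta)
    z = 1 + t + t**2 / 2 + t**3 / 6      # short Taylor series for exp(i*theta)
    for _ in range(K):
        z = z * z                        # doubling the angle: one multiplication
    approx = (q / (2 * math.pi)) * z.imag
    print(approx)                        # ~5.0, close to x mod q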
A General Framework for the Related-key Linear Attack against Block Ciphers with Linear Key Schedules
We present a general framework for the related-key linear attack that can be applied to iterative block ciphers with linear key schedules.
The attack utilizes a newly introduced related-key linear approximation that is obtained directly from a linear trail.
The attack makes use of known related-key data consisting of triplets of a plaintext, a ciphertext, and a key difference, such that the ciphertext is the encrypted value of the plaintext under the key that is the XOR of the key to be recovered and the specified key difference.
If such a block cipher has a linear trail with linear correlation \epsilon, it admits attacks with related-key data of size \epsilon^{-2}, just as in the case of Matsui's classical algorithms.
But since the attack makes use of related-key data, the attacker can use a linear trail with squared correlation less than 2^{-n}, where n is the block size, in case the key size is larger than n.
Moreover, the standard key hypotheses seem to be appropriate even when the trail is not dominant, as validated by experiments.
The attack can be applied in two ways.
First, using a linear trail with squared correlation smaller than 2^{-n}, one can get an effective attack covering more rounds than existing attacks against some ciphers, such as Simon48/96, Simon64/128 and Simon128/256.
Secondly, using a trail with large squared correlation, one can use related-key data for key recovery even when the data is not suitable for existing linear attacks.
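The \epsilon^{-2} data requirement above is the usual statistical fact that a correlation \epsilon only stands out from sampling noise after about \epsilon^{-2} samples; a quick toy check (all parameters are assumptions of ours):

    # Sanity check: a parity with correlation eps needs ~eps**-2 samples to detect.
    import random

    random.seed(0)
    eps = 0.05
    N = int(eps ** -2) * 16                     # several multiples of eps^-2
    biased = sum(1 if random.random() < (1 + eps) / 2 else -1 for _ in range(N))
    unbiased = sum(random.choice((1, -1)) for _ in range(N))
    noise = N ** 0.5                            # std. deviation of an unbiased sum
    print(biased / noise, unbiased / noise)     # signal ~ eps*sqrt(N) = 4 sigma vs ~1

Since the signal grows as eps*N while the noise grows as sqrt(N), the two are comparable exactly when N is around eps^-2, matching the data complexity quoted above.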
Adaptively Secure Garbling with Near Optimal Online Complexity
We construct an adaptively secure garbling scheme with an online communication complexity of n + m + poly(log |C|, \lambda), where C : {0,1}^n -> {0,1}^m is the circuit being garbled and \lambda is the security parameter. The security of our scheme can be based on (polynomial hardness of) the Computational Diffie-Hellman (CDH) assumption, the Factoring assumption, or the Learning with Errors assumption. This is nearly the best achievable in the standard model (i.e., without random oracles), as the online communication complexity must be larger than both n and m. The online computational complexity of our scheme is n + m + poly(log |C|, \lambda).
Previously known standard model adaptively secure garbling schemes had asymptotically worse online cost or relied on exponentially hard computational assumptions.
Analysis of Error-Correcting Codes for Lattice-Based Key Exchange
Lattice problems allow the construction of very efficient key exchange and public-key encryption schemes. When using the Learning with Errors (LWE) or Ring-LWE (RLWE) problem such schemes exhibit an interesting trade-off between decryption error rate and security. The reason is that secret and error distributions with a larger standard deviation lead to better security but also increase the chance of decryption failures. As a consequence, various message/key encoding or reconciliation techniques have been proposed that usually encode one payload bit into several coefficients. In this work, we analyze how error-correcting codes can be used to enhance the error resilience of protocols like NewHope, Frodo, or Kyber. For our case study, we focus on the recently introduced NewHope Simple and propose and analyze four different options for error correction: i) BCH code; ii) combination of BCH code and additive threshold encoding; iii) LDPC code; and iv) combination of BCH and LDPC code. We show that lattice-based cryptography can profit from classical and modern codes by combining BCH and LDPC codes. This way we achieve quasi-error-free communication and an increase of the estimated post-quantum bit-security level by 20.39% and a decrease of the communication overhead by 12.8%.
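As a small illustration of the encoding idea discussed above, the sketch below implements an additive threshold decoder over k coefficients (toy parameters of our choosing, not the paper's concrete BCH/LDPC codes): spreading one bit over several coefficients lets independent noise average out, so occasional per-coefficient noise spikes no longer flip the decoded bit.

    # Toy additive threshold encoding: one bit spread over k coefficients mod q.
    import random

    random.seed(0)
    q, k = 12289, 4                                  # toy modulus and spread factor

    def encode(bit):
        return [(bit * (q // 2)) % q for _ in range(k)]

    def add_noise(coeffs, magnitude):
        return [(c + random.randint(-magnitude, magnitude)) % q for c in coeffs]

    def decode(coeffs):
        # Distance of each coefficient from 0 (centered mod q); a large total
        # means the coefficients sit near q/2, i.e. the encoded bit was 1.
        total = sum(min(c, q - c) for c in coeffs)
        return 1 if total > k * q // 4 else 0

    for bit in (0, 1):
        noisy = add_noise(encode(bit), magnitude=q // 5)
        assert decode(noisy) == bit

With k = 1, a single noise sample above q/4 would flip the decision; summing k independent distances makes such failures exponentially rarer, which is the error-resilience the paper then strengthens further with BCH and LDPC codes.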