## All papers in 2017 (1262 results)

A New Index Calculus Algorithm for the Elliptic Curve Discrete Logarithm Problem and Summation Polynomial Evaluation

The introduction of summation polynomials for elliptic curves by Semaev has opened up new avenues of
investigation in index calculus type algorithms for the elliptic curve discrete logarithm problem,
and several recent papers have explored their use.
We propose an index calculus algorithm to solve the Elliptic Curve Discrete Logarithm Problem
that makes use of a technique for fast evaluation of the summation polynomials,
and unlike all other algorithms using summation polynomials, does not
involve a Gröbner basis computation.
We further propose another algorithm that does not involve Gröbner basis computations or summation polynomials.
We give a complexity estimate of our algorithms and provide extensive computational data.

A Comprehensive Performance Analysis of Hardware Implementations of CAESAR Candidates

Authenticated Encryption with Associated Data (AEAD) plays a significant role in cryptography because of its ability to provide integrity, confidentiality and authenticity at the same time. Due to the emergence of security at the edge of the computing fabric, such as sensors and smartphone devices, there is a growing need for lightweight AEAD ciphers. Currently, a worldwide contest, titled CAESAR, is being held to decide on a set of AEAD ciphers, which are distinguished by their security, run-time performance, energy-efficiency and low area budget. For accurate evaluation of CAESAR candidates, it is of utmost importance to have independent and thorough optimization for each of the ciphers, both for their corresponding hardware and software implementations.
In this paper, we have carried out an evaluation of optimized hardware implementations of the AEAD ciphers selected in the CAESAR third round. We specifically focus on manual optimization of the micro-architecture, evaluations for ASIC technology libraries and the effect of the CAESAR APIs on performance. While these aspects have been studied for FPGA platforms and standalone cipher implementations, to the best of our knowledge this is the first detailed ASIC benchmarking of CAESAR candidates including manual optimization. In this regard, we benchmarked all prior reported designs, including the code generated by high-level synthesis flows.
Detailed optimization studies are reported for NORX, CLOC and Deoxys-I. Our pre-layout results using commercial ASIC technology library and synthesis tools show that optimized NORX is 40.81% faster and 18.02% smaller, optimized CLOC is 38.30% more energy efficient and 20.65% faster and optimized Deoxys-I is 35.16% faster, with respect to the best known results. Similar or better performance results are also achieved for FPGA platforms.

Collision Resistant Hashing from Sub-exponential Learning Parity with Noise

The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications such as authentication protocols, pseudorandom generators/functions and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Based on the recent work of Applebaum et al. (ITCS 2017), we introduce a general framework for constructing CRH from LPN for various parameter choices. We show that, just to mention a few notable ones, under any of the following hardness assumptions (for the two most common variants of LPN)
1) constant-noise LPN is $2^{n^{0.5+\epsilon}}$-hard for any constant $\epsilon>0$;
2) constant-noise LPN is $2^{\Omega(n/\log n)}$-hard given $q=poly(n)$ samples;
3) low-noise LPN (of noise rate $1/\sqrt{n}$) is $2^{\Omega(\sqrt{n}/\log n)}$-hard given $q=poly(n)$ samples.
there exist CRH functions with constant (or even poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates. Our technical route LPN$\rightarrow$bSVP$\rightarrow$CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE$\rightarrow$SIS$\rightarrow$CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) and enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem.
Furthermore, under additional (arguably minimal) idealized assumptions such as small-domain random functions or random permutations (which trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender that is (asymptotically) more parallel and efficient than previously known. In particular, assuming $2^{n^{0.5+\epsilon}}$-hard constant-noise LPN or $2^{n^{0.25+\epsilon}}$-hard low-noise LPN, we obtain a polynomially shrinking collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and produces their XOR sum as output.
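The LPN problem underlying these assumptions is easy to state concretely: samples have the form $(a, \langle a, s\rangle + e \bmod 2)$ with $a$ uniform and $e$ Bernoulli-distributed with noise rate $\tau$. The following toy generator is only meant to illustrate the problem statement; parameter names are ours, and the abstract's hardness regimes concern asymptotics, not this sketch.

```python
import random

def lpn_samples(s, q, tau, rng):
    """Generate q LPN samples (a, <a,s> + e mod 2) with noise rate tau."""
    n = len(s)
    out = []
    for _ in range(q):
        a = [rng.randrange(2) for _ in range(n)]           # uniform a in {0,1}^n
        e = 1 if rng.random() < tau else 0                 # Bernoulli(tau) noise bit
        b = (sum(x * y for x, y in zip(a, s)) + e) % 2     # noisy parity of s
        out.append((a, b))
    return out

rng = random.Random(0)
s = [1, 0, 1, 1, 0, 1]                                     # the secret
noiseless = lpn_samples(s, 4, 0.0, rng)                    # tau = 0: exact parities
for a, b in noiseless:
    assert b == sum(x * y for x, y in zip(a, s)) % 2
```

With $\tau = 0$ the samples are exact parities and the secret is recoverable by Gaussian elimination; the hardness regimes in the abstract concern how large the noise rate and sample complexity can be while LPN stays hard.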

Cryptanalysis of HK17

Very recently, a key exchange scheme called HK17 was submitted to NIST as a candidate of the standard of post-quantum cryptography. The HK17 scheme employs some hypercomplex numbers as the basic objects, such as quaternions and octonions. In this paper, we show that HK17 is insecure since a passive adversary can recover the shared key in polynomial time.

Remarks on Quaternions/Octonion Based Diffie-Hellman Key Exchange Protocol Submitted to NIST PQC Project

In November 2017, Juan Pedro Hecht and Jorge Alejandro Kamlofsky submitted a quaternions/octonions based Diffie-Hellman key agreement protocol, HK17, to the NIST post-quantum cryptography project. Daniel J. Bernstein and Tanja Lange showed how to break the scheme in O(p) steps, where p is the modulus used in the scheme. One may wonder whether the scheme could be secure if p were sufficiently large (e.g., 1000 bits long). In this note, we show that the scheme can be broken by solving a homogeneous quadratic equation system of eight equations in four unknowns. Thus, no matter how big p is, the scheme can be trivially broken using Kipnis and Shamir's relinearization techniques.

A first-order chosen-plaintext DPA attack on the third round of DES

DPA attacks usually exhibit a "divide-and-conquer" property: the adversary needs to enumerate only a small space of the key (a key sub-space) when performing the DPA attack. This is achieved trivially in the outer rounds of a cryptographic implementation since intermediates depend on only a few key bits. In the inner rounds, however, intermediates depend on too many key bits to make DPA practical or even to offer an advantage over cryptanalysis. For this reason, DPA countermeasures may be deployed only in outer rounds if performance or efficiency is critical. This paper shows a DPA attack exploiting leakage from the third round of a Feistel cipher, such as DES. We require the ability to fix inputs, but we do not place any special restriction on the leakage model. The complexity of the attack is that of two to three DPA attacks on the first round of DES plus some minimal differential cryptanalysis.
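To make the divide-and-conquer property concrete, here is a toy correlation-based key recovery against a single 4-bit S-box under an idealized, noiseless Hamming-weight leakage model: only 16 key guesses are enumerated, not the full key. The PRESENT S-box and all parameters are illustrative; this sketches a generic first-round attack, not the paper's third-round DES technique.

```python
# Toy CPA illustrating divide-and-conquer: one 4-bit key nibble at a time.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
HW = [bin(x).count("1") for x in range(16)]  # Hamming weights 0..15

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def recover_key(plaintexts, leakage):
    # Enumerate all 16 guesses; the guess whose predicted Hamming-weight
    # model best correlates with the observed leakage wins.
    scores = {g: pearson([HW[SBOX[p ^ g]] for p in plaintexts], leakage)
              for g in range(16)}
    return max(scores, key=scores.get)

key = 0xB
plaintexts = list(range(16))
leakage = [HW[SBOX[p ^ key]] for p in plaintexts]  # idealized noiseless traces
assert recover_key(plaintexts, leakage) == key
```

In an inner round, the intermediate would depend on many key nibbles at once, so this small enumeration would no longer be possible; that is exactly the obstacle the paper's third-round attack works around.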

A Universally Composable Treatment of Network Time

The security of almost any real-world distributed system today depends on the participants having some "reasonably accurate" sense of current real time. Indeed, to name one example, the very authenticity of practically any communication on the Internet today hinges on the ability of the parties to accurately detect revocation of certificates, or expiration of passwords or shared keys.
However, as recent attacks show, the standard protocols for determining time are subvertible, resulting in widespread security loss. Worse yet, we do not have security notions for network time protocols that (a) can be rigorously asserted and (b) rigorously guarantee security of applications that require a sense of real time.
We propose such notions, within the universally composable (UC) security framework. That is, we formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols, and how they can be used to assert security of time-reliant applications --- specifically, certificates with revocation and expiration times. This allows for relatively clear and modular treatment of the use of time in security-sensitive systems.
Our modeling and analysis are done within the existing UC framework, in spite of its asynchronous, event-driven nature. This allows incorporating the use of real time within the existing body of analytical work done in this framework. In particular it allows for rigorous incorporation of real time within cryptographic tools and primitives.

On the Strategy and Behavior of Bitcoin Mining with N-attackers

Selfish mining is a well-known mining attack strategy discovered by Eyal and Sirer in 2014. Since then, the attackers' strategy space has been extended by many works, but these works analyze the strategy and behavior of only a single attacker, under the assumption that there is only one attacker in the blockchain network. However, a proof-of-work blockchain is likely to have several attackers, who may act independently rather than sharing information and attacking the blockchain as a whole. We are the first to analyze miners' behavior in a proof-of-work blockchain with several attackers, by establishing a new model. Based on our model, we extend the attackers' strategy space by proposing a new strategy set, publish-n. Meanwhile, we revisit other attacking strategies, such as selfish mining and stubborn mining, in our model to explore whether these strategies still work when there are several attackers. We compare the performance of different strategies through the attackers' relative stale block rate. In a proof-of-work blockchain model with two attackers, strategy publish-n can beat selfish mining by up to 26.3%.

Practical Applications of Improved Gaussian Sampling for Trapdoor Lattices

Lattice trapdoors are an important primitive used in a wide range of
cryptographic protocols, such as identity-based encryption (IBE),
attribute-based encryption, functional encryption, and program obfuscation. In this paper, we present software implementations of the Gentry-Peikert-Vaikuntanathan (GPV) digital signature, IBE and ciphertext-policy attribute-based
encryption (CP-ABE) schemes based on an efficient Gaussian sampling algorithm for trapdoor lattices, and demonstrate
that these three important cryptographic protocols are practical. One important aspect of our implementation is that it
supports prime moduli, which are required in many cryptographic
schemes. Also, our implementation uses bases larger than two for the gadget matrix
whereas most previous implementations use the binary base. We show that
the use of higher bases significantly decreases execution times and
storage requirements. We adapt IBE and CP-ABE schemes originally
based on learning with errors (LWE) hardness assumptions to a more
efficient Ring LWE (RLWE) construction. To the best
of our knowledge, ours are the first implementations employing Gaussian sampling for non-binary bases of the gadget matrix. The experimental results demonstrate
that our lattice-based signature, IBE and CP-ABE implementations, which are based on standard assumptions with post-quantum security, provide a performance comparable to the recent state-of-the-art implementation works based on stronger/non-post-quantum assumptions.

Micro-Architectural Power Simulator for Leakage Assessment of Cryptographic Software on ARM Cortex-M3 Processors

Masking is a common technique to protect software implementations of symmetric cryptographic algorithms against Differential Power Analysis (DPA) attacks. The development of a properly masked version of a block cipher is an incremental and time-consuming process since each iteration of the development cycle involves a costly leakage assessment. To achieve a high level of DPA resistance, the architecture-specific leakage properties of the target processor need to be taken into account. However, for most embedded processors, a detailed description of these leakage properties is lacking and often not even the HDL model of the micro-architecture is openly available. Recent research has shown that power simulators for leakage assessment can significantly speed up the development process. Unfortunately, few such simulators exist and even fewer take target-specific leakages into account. To fill this gap, we present MAPS, a micro-architectural power simulator for the M3 series of ARM Cortex processors, one of today's most widely-used embedded platforms. MAPS is fast, easy to use, and able to model the Cortex-M3 pipeline leakages, in particular the leakage introduced by the pipeline registers. The M3 leakage properties are inferred from its HDL source code, and therefore MAPS does not need a complicated and expensive profiling phase. Taking first-order masked assembler implementations of the lightweight cipher Simon as an example, we study how the pipeline leakages manifest and discuss some guidelines on how to avoid them.

Breakdown Resilience of Key Exchange Protocols: NewHope, TLS 1.3, and Hybrids

Broken cryptographic algorithms and hardness assumptions are a constant threat to real-world protocols. Prominent examples are hash functions for which collisions become known, or number-theoretic assumptions which are threatened by advances in quantum computing. Especially when it comes to key exchange protocols, the switch to quantum-resistant primitives has begun and aims to protect today's secrets against future developments, moving from common Diffie-Hellman-based solutions to Learning-With-Errors-based approaches, often via intermediate hybrid designs.
To date, there exists no security notion for key exchange protocols that could capture the scenario of breakdowns of arbitrary cryptographic primitives to argue security of prior or even ongoing and future sessions. In this work we extend the common Bellare-Rogaway model to capture breakdown resilience of key exchange protocols. Our extended model allows us to study the security of a protocol even in case of unexpected failure of employed primitives, be it number-theoretic assumptions, hash functions, signature schemes, key derivation functions, etc. We then apply our security model to analyze two real-world protocols, showing that breakdown resilience for certain primitives is achieved both by an authenticated variant of the post-quantum secure key encapsulation mechanism NewHope (Alkim et al.), which is a second round candidate in the Post Quantum Cryptography standardization process by NIST, as well as by TLS 1.3, which has recently been standardized as RFC 8446 by the Internet Engineering Task Force. Finally, we analyze the security of a generic hybrid key exchange protocol, formally showing how such designs ensure resilience against breakdowns of one of their key exchange components.

A toolbox for software optimization of QC-MDPC code-based cryptosystems

The anticipated emergence of quantum computers in the foreseeable future drives the cryptographic community to start considering cryptosystems based on problems that remain intractable even with strong quantum computers. One example is the family of code-based cryptosystems that relies on the Syndrome Decoding Problem
(SDP). Recent work by Misoczki et al. [34] showed a variant of McEliece encryption which is based on Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes, and has significantly smaller keys than the original McEliece encryption. It was followed by the newly proposed QC-MDPC-based cryptosystems CAKE [9] and Ouroboros [13]. These motivate dedicated new software optimizations.
This paper lists the cryptographic primitives that QC-MDPC cryptosystems commonly employ, studies their software optimizations on modern processors, and reports the achieved speedups. It also assesses methods for side channel protection of the implementations, and their performance costs. These optimized primitives offer a useful toolbox that can be used, in various ways, by designers and implementers of QC-MDPC cryptosystems.
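One core primitive such a toolbox must optimize is multiplication of circulant blocks, which for QC-MDPC codes reduces to carry-less polynomial multiplication in GF(2)[x]/(x^r - 1). The schoolbook sketch below (illustrative only, far from the vectorized implementations such a toolbox would actually study) models bit vectors as Python integers, with bit i holding the coefficient of x^i.

```python
def gf2x_mul_mod(a, b, r):
    """Schoolbook carry-less multiply of a and b, reduced mod x^r - 1."""
    acc = 0
    for i in range(r):
        if (a >> i) & 1:
            acc ^= b << i          # carry-less shift-and-XOR (GF(2) arithmetic)
    # Reduction: x^r == 1 mod (x^r - 1), so fold the high half back down.
    return (acc >> r) ^ (acc & ((1 << r) - 1))

r = 11
a, b = 0b1011, 0b0110              # x^3 + x + 1 and x^2 + x
c = gf2x_mul_mod(a, b, r)          # = x^5 + x^4 + x^3 + x = 0b111010
assert c == 0b111010
```

Real implementations replace the bit loop with carry-less multiply instructions (e.g. PCLMULQDQ) and constant-time reduction, which is precisely where the side-channel considerations mentioned above come in.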

Non-Interactive Delegation for Low-Space Non-Deterministic Computation

We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting $n$ denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space $(T(n), S(n))$ with communication complexity $poly(S(n))$, verifier runtime $n \cdot polylog(T(n)) + poly(S(n))$, and prover runtime $poly(T(n))$.
Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs.
Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P.

Quantum cryptanalysis on some Generalized Feistel Schemes

Post-quantum cryptography has attracted much attention from cryptologists worldwide.
In ISIT 2010, Kuwakado and Morii gave a polynomial-time quantum distinguisher against 3-round Feistel networks. However, generalized Feistel schemes (GFS) have not been systematically investigated against quantum attacks.
In this paper, we study quantum distinguishers for some generalized Feistel schemes. For $d$-branch Type-1 GFS (CAST256-like Feistel structure), we introduce ($2d-1$)-round polynomial-time quantum distinguishers. For $2d$-branch Type-2 GFS (RC6/CLEFIA-like Feistel structure), we give ($2d+1$)-round polynomial-time quantum distinguishers. Classically, Moriai and Vaudenay proved that a 7-round $4$-branch Type-1 GFS and a 5-round $4$-branch Type-2 GFS are secure pseudo-random permutations. Obviously, they are no longer secure in the quantum setting.
Using the above quantum distinguishers, we introduce generic quantum key-recovery attacks by applying the combination of Simon's and Grover's algorithms recently proposed by Leander and May. We denote $n$ as the bit length of a branch. For $(d^2-d+2)$-round Type-1 GFS with $d$ branches, the time complexity is $2^{(\frac{1}{2}d^2-\frac{3}{2}d+2)\cdot \frac{n}{2}}$, which is better than the quantum brute force search (Grover search) by a factor $2^{(\frac{1}{4}d^2+\frac{1}{4}d)n}$. For $4d$-round Type-2 GFS with $2d$ branches, the time complexity is $2^{{\frac{d^2 n}{2}}}$, which is better than the quantum brute force search by a factor $2^{{\frac{3d^2 n}{2}}}$.
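The stated complexity bounds can be sanity-checked by simple bookkeeping on the exponents of 2 (as coefficients of $n$). The helper names below are ours, and the Grover baselines assume one independent $n$-bit round key per round, which is consistent with the speedup factors quoted in the abstract.

```python
from fractions import Fraction

def type1_exponents(d):
    """Coefficient of n in the exponent of 2, for (d^2-d+2)-round Type-1 GFS."""
    # Attack time 2^{(d^2/2 - 3d/2 + 2) * n/2}:
    attack = (Fraction(d * d, 2) - Fraction(3 * d, 2) + 2) / 2
    # Grover over all (d^2-d+2) n-bit round keys: 2^{(d^2-d+2) n / 2}.
    grover = Fraction(d * d - d + 2, 2)
    return attack, grover - attack          # attack cost, speedup factor

def type2_exponents(d):
    """Same bookkeeping for 4d-round Type-2 GFS with 2d branches."""
    attack = Fraction(d * d, 2)             # attack time 2^{d^2 n / 2}
    grover = Fraction(2 * d * d)            # Grover baseline 2^{2 d^2 n}
    return attack, grover - attack

a1, f1 = type1_exponents(4)                 # e.g. a CAST256-like 4-branch GFS
assert f1 == Fraction(4 * 4 + 4, 4)         # matches the claimed 2^{(d^2/4 + d/4) n}
a2, f2 = type2_exponents(2)
assert f2 == Fraction(3 * 2 * 2, 2)         # matches the claimed 2^{(3 d^2 / 2) n}
```

The checks confirm that the quoted speedup factors are exactly the gap between the attack exponents and the Grover-search baseline under this round-key assumption.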

Foundations of Homomorphic Secret Sharing

Homomorphic secret sharing (HSS) is the secret sharing analogue of homomorphic encryption. An HSS scheme supports a local evaluation of functions on shares of one or more secret inputs, such that the resulting shares of the output are short. Some applications require the stronger notion of additive HSS, where the shares of the output add up to the output over a finite Abelian group. While strong feasibility results for HSS are known under specific cryptographic assumptions, many natural questions remain open.
We initiate a systematic study of HSS, making the following contributions.
* A definitional framework. We present a general framework for defining HSS schemes that unifies and extends several previous notions from the literature, and cast known results within this framework.
* Limitations. We establish limitations on information-theoretic multi-input HSS with short output shares via a relation with communication complexity. We also show that additive HSS for non-trivial functions, even the AND of two input bits, implies non-interactive key exchange, and is therefore unlikely to be implied by public-key encryption or even oblivious transfer.
* Applications. We present two types of applications of HSS. First, we construct 2-round protocols for secure multiparty computation from a simple constant-size instance of HSS. As a corollary, we obtain 2-round protocols with attractive asymptotic efficiency features under the Decision Diffie-Hellman (DDH) assumption. Second, we use HSS to obtain nearly optimal worst-case to average-case reductions in P. This in turn has applications to fine-grained average-case hardness and verifiable computation.
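The additive-HSS syntax is easy to illustrate in the one case where it is trivial: linear functions over a finite Abelian group, where additive secret sharing lets each party evaluate locally and the output shares sum to $f(x)$. This toy sketch (modulus and names ours) is only meant to make the definition concrete; the interesting cases are precisely the non-linear ones, such as the AND of two bits, where this simple approach fails.

```python
import random

P = 2**61 - 1  # a Mersenne prime; the Abelian group here is Z_P

def share(x, k, rng):
    """Split x into k additive shares mod P (any k-1 shares look uniform)."""
    parts = [rng.randrange(P) for _ in range(k - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def eval_linear(coeffs, share_vec):
    """Local evaluation of f(x) = sum c_i * x_i on one party's share vector."""
    return sum(c * s for c, s in zip(coeffs, share_vec)) % P

rng = random.Random(7)
xs = [3, 14, 15]
coeffs = [2, 7, 1]
# One share vector per party: transpose the per-input share lists.
shares = list(zip(*[share(x, 2, rng) for x in xs]))
out = sum(eval_linear(coeffs, sv) for sv in shares) % P
assert out == sum(c * x for c, x in zip(coeffs, xs)) % P   # output shares add up to f(x)
```

Note that each party's output share is short (one group element), and the shares add up to the output over the group, matching the additive-HSS definition above for this restricted function class.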

Block encryption of quantum messages

In modern cryptography, block encryption is a fundamental cryptographic primitive.
However, it is impossible for block encryption to achieve the same security as the one-time pad.
Quantum mechanics has changed modern cryptography, and much research has shown that quantum cryptography
can overcome the limitations of traditional cryptography.
This article proposes a new constructive mode for private quantum encryption, named $\mathcal{EHE}$, which is a very simple method to construct quantum encryption from classical primitives. Based on the $\mathcal{EHE}$ mode, we construct a quantum block encryption (QBE)
scheme from pseudorandom functions. If the pseudorandom functions are standard secure, our scheme is indistinguishable under chosen-plaintext attack. If the pseudorandom functions are permutations on the key space, our scheme achieves perfect security. In our scheme, the key can be reused while the randomness cannot, so a $2n$-bit key can be used for exponentially many encryptions, with the randomness refreshed in each encryption. Thus a $2n$-bit key can perfectly encrypt $O(n2^n)$ qubits, and perfect secrecy is not broken if the $2n$-bit key is reused at most exponentially many times.
Compared with the quantum one-time pad (QOTP), our scheme is as secure as QOTP, while the secret key can be reused (whether or not eavesdropping occurs). Thus, the limitation of perfectly secure encryption (Shannon's theory) is broken in the quantum setting. Moreover, our scheme can be viewed as a positive answer to an open problem in quantum cryptography: ``how to unconditionally reuse or recycle the whole key of private-key quantum encryption''.
In order to physically implement the QBE scheme, we only need to implement two kinds of single-qubit gates (Pauli $X$ gate and Hadamard gate), so it is within reach of current quantum technology.

Verification of FPGA-augmented trusted computing mechanisms based on Applied Pi Calculus

Trusted computing technologies may play a key role for cloud security as they enable users to relax the trustworthiness assumptions about the provider that operates the physical cloud infrastructure. This work focuses on the possibility of embodying Field-Programmable Gate Array (FPGA) devices in cloud-based infrastructures, where they can accelerate compute-intensive workloads like data compression, machine learning, data encoding, etc. The focus is on the implications for cloud applications with security requirements. We introduce a general architecture model of a CPU+FPGA platform pinpointing key roles and specific assumptions that are relevant for the trusted computing mechanisms and the associated security properties. In addition, we formally verified the proposed solution based on Applied Pi Calculus, a descendant of Pi Calculus, that introduces constructs allowing the symbolic description of cryptographic primitives. The verification phase was automated by means of ProVerif, a tool taking as input a model expressed in Applied Pi Calculus along with some queries and annotations that define security properties to be proved or denied. The results of the analysis confirmed that the properties defined in our work hold under the Dolev-Yao attacker model.

IntegriKey: End-to-End Integrity Protection of User Input

Various safety-critical devices, such as industrial control systems, medical devices, and home automation systems, are configured through web interfaces from remote hosts that are standard PCs. The communication link from the host to the safety-critical device is typically easy to protect, but if the host gets compromised, the adversary can manipulate any user-provided configuration settings with severe consequences including safety violations.
In this paper, we propose IntegriKey, a novel system for user input integrity protection on a compromised host. The user installs a simple plug-and-play device between the input peripheral and the host. This device observes user input events and sends a trace of them to the server, which compares the trace to the application payload received from the untrusted host. To prevent subtle attacks where the adversary exchanges values from interchangeable input fields, we propose a labeling scheme where the user annotates input values. We built a prototype of IntegriKey, using an embedded USB bridge, and our experiments show that such integrity protection adds only minor delay. We also developed a UI analysis tool that helps developers to protect their services and evaluated it on commercial safety-critical systems.

Corrections to ''Further Improving Efficiency of Higher-Order Masking Schemes by Decreasing Randomness Complexity''

Provably secure masking schemes often require a large number of random generations, which significantly increases the implementation cost. Recently, in IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (TIFS) (DOI:10.1109/TIFS.2017.2713323), Zhang, Qiu, and Zhou improved the efficiency of the CPRR scheme by decreasing the number of random generations. Subsequently, Barthe et al. claimed that security flaws exist in both proposals and provided counter-examples. In this paper, we fix these security flaws by changing the addition order. In this way, the two proposals are corrected with no extra random generation.

Augmented Black-Box Simulation and Zero Knowledge Argument for NP

The standard zero knowledge notion is formalized by requiring that for any probabilistic polynomial-time (PPT) verifier $V^*$, there is a PPT algorithm (simulator) $S_{V^*}$ such that the outputs of $S_{V^*}$ are indistinguishable from real protocol views. The simulator is not permitted to access the verifier $V^*$'s private state, so the power of $S_{V^*}$ is, in fact, inferior to that of $V^*$.
In this paper, a new simulation method, called augmented black-box simulation, is presented by permitting the simulator to have access to the verifier's current private state in a special manner. The augmented black-box simulator only has the same computing power as the verifier, although it is given access to the verifier's current private state. Therefore, augmented black-box simulation is a reasonable method to prove the zero knowledge property, and brings results that are hard to obtain with previous simulation techniques. The zero knowledge property proved by means of augmented black-box simulation is called augmented black-box zero-knowledge.
We present a 5-round statistical augmented black-box zero-knowledge argument for the Exact Cover problem under the Decision Multilinear No-Exact-Cover Assumption. In addition, we show a 2-round computational augmented black-box zero-knowledge argument protocol for the Exact Cover problem under the Decision Multilinear No-Exact-Cover Assumption and the assumption of the existence of hash functions. It is well known that 2-round zero knowledge protocols do not exist under the general zero knowledge notion. Besides, following [19], we consider the leakage-resilient property of augmented black-box zero knowledge, and prove that the presented statistical zero-knowledge protocol has an optimal leakage-resilient property.

Designing Proof of Transaction Puzzles for Cryptocurrency

One of Bitcoin's innovations is the Proof of Work puzzle (aka scratch-off puzzle) as a consensus protocol for anonymous networks without a pre-established PKI. Bitcoin's Proof of Work puzzle has been harshly criticized for problems such as wasted energy and poor scalability. In this paper, we construct a novel Proof of Transaction (PoT) puzzle, and prove that the PoT puzzle satisfies the basic construction conditions of a scratch-off puzzle. We also show the construction of PoTcoin as an application. PoTcoin has many advantages, including but not limited to strengthening the network topology, promoting currency circulation, resisting outsourced computation, and being environment-friendly.

A Public-key Encryption Scheme Based on Non-linear Indeterminate Equations (Giophantus)

In this paper, we propose a post-quantum public-key encryption scheme whose security depends on a problem arising from a multivariate non-linear indeterminate equation. The security of lattice cryptosystems, which are considered to be the most promising candidate for a post-quantum cryptosystem, is based on the shortest vector problem or the closest vector problem in the discrete linear solution spaces of simultaneous equations. However, several improved attacks for the underlying problems have recently been developed by using approximation methods, which result in requiring longer key sizes. As a scheme to avoid such attacks, we propose a public-key encryption scheme based on the "smallest" solution problem in the non-linear solution spaces of multivariate indeterminate equations, developed from the algebraic surface cryptosystem. Since no efficient algorithm to find such a smallest solution is currently known, we introduce a new computational assumption under which the proposed scheme is proven to be secure in the sense of IND-CPA. Then, we perform computational experiments based on known attack methods and evaluate the key size of our scheme under the linear condition. This paper is a revised version of SAC2017.

UWB with Pulse Reordering: Securing Ranging against Relay and Physical-Layer Attacks

Physical-layer attacks allow attackers to manipulate (spoof) ranging and positioning. These attacks have had real-world impact, enabling car thefts, execution of unauthorized payments and manipulation of navigation. UWB impulse radio, standardized within 802.15.4a,f, has emerged as a prominent technique for precise ranging that allows high operating distances despite power constraints by transmitting multi-pulse symbols. The security of UWB ranging (in terms of the attacker's ability to manipulate the measured distance) has been discussed in the literature and has recently also been addressed as part of the emerging 802.15.4z standard. However, all research so far, as well as the security enhancements proposed within this emerging standard, face one main limitation: they achieve security through short symbol lengths and sacrifice performance (i.e., limit the maximum measurement distance), or use longer symbol lengths and thereby sacrifice security. We present UWB with pulse reordering (UWB-PR), the first modulation scheme that secures distance measurement between two mutually trusted devices against all physical-layer distance-shortening attacks without sacrificing performance, thereby simultaneously enabling extended range and security. We analyze the security of UWB-PR against an attacker that fully controls the communication channel and show that UWB-PR resists such strong attackers. We evaluate UWB-PR within a UWB system built on top of an IEEE 802.15.4 device and show that it achieves distances of up to 93m with 10cm precision (LoS). UWB-PR is, therefore, a good candidate for the extended mode of the new 802.15.4z Low Rate Pulse standard. Finally, UWB-PR shows that secure distance measurement can be built on top of modulation schemes with longer symbol lengths; so far, this was considered insecure.
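The pulse-reordering idea can be caricatured as interleaving the pulses of consecutive symbols under a secret, per-frame permutation shared by the two ranging devices, so an attacker cannot tell which pulses belong to which symbol and therefore cannot selectively advance them. The key-derivation details below are purely illustrative and are not the UWB-PR or 802.15.4z construction.

```python
import hmac, hashlib, random

def keyed_permutation(key, nonce, n):
    """Derive a pseudorandom permutation of n pulse slots from key and nonce."""
    seed = hmac.new(key, nonce, hashlib.sha256).digest()
    rng = random.Random(seed)          # deterministic given (key, nonce)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def reorder(pulses, perm):
    """Transmitter side: send pulse i in slot perm[i]."""
    out = [None] * len(pulses)
    for i, p in enumerate(pulses):
        out[perm[i]] = p
    return out

def restore(pulses, perm):
    """Receiver side: undo the shuffle using the shared permutation."""
    return [pulses[perm[i]] for i in range(len(perm))]

key, nonce = b"shared-key", b"frame-0"
pulses = list("AAAABBBB")              # pulses of two 4-pulse symbols
perm = keyed_permutation(key, nonce, len(pulses))
tx = reorder(pulses, perm)
assert restore(tx, perm) == pulses     # the legitimate receiver recovers the symbols
```

The point of the sketch is only the structure: without the key, the attacker sees an unstructured pulse train, while the legitimate receiver, knowing the permutation, regroups the pulses into symbols without shortening them.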

An Efficient NIZK Scheme for Privacy-Preserving Transactions over Account-Model Blockchain

We introduce the abstract framework of decentralized smart contracts system with balance and transaction amount hiding property under the ACCOUNT architecture. To build a concrete system with such properties, we utilize a homomorphic public key encryption scheme and construct a highly efficient non-interactive zero knowledge (NIZK) argument based upon the encryption scheme to ensure the validity of the transactions. Our NIZK scheme is perfect zero knowledge in the common reference string model, while its soundness holds in the random oracle model. Compared to previous similar constructions, our proposed NIZK argument dramatically improves the time efficiency in generating a proof, at the cost of relatively longer proof size.

Efficient Oblivious Data Structures for Database Services on the Cloud

Database-as-a-service (DBaaS) allows the client to store and manage structured data on the cloud remotely. Despite its merits, DBaaS also brings significant privacy issues. Existing encryption techniques (e.g., SQL-aware encryption) can mitigate privacy concerns, but they still leak information through access patterns, which are vulnerable to statistical inference attacks. Oblivious Random Access Machine (ORAM) can seal such leakages; however, recent studies have shown significant challenges in integrating ORAM into databases. That is, the direct use of ORAM on databases is not only costly but also permits very limited query functionality. In this paper, we propose new oblivious data structures called Oblivious Matrix Structure (OMAT) and Oblivious Tree Structure (OTREE), which allow tree-based ORAM to be integrated into database systems more efficiently while supporting diverse query functionality. OMAT provides special ORAM packaging strategies for table structures, which not only offer significantly better performance but also enable a broad range of query types that may not be efficient in existing frameworks. OTREE, on the other hand, allows oblivious conditional queries to be performed on tree-indexed databases more efficiently than existing techniques. We implemented our proposed techniques and evaluated their performance on a real cloud database with various metrics, comparing them with state-of-the-art counterparts.

A High-Security Searchable Encryption Framework for Privacy-Critical Cloud Storage Services

Searchable encryption has received significant attention from the research community, with various constructions being proposed, each achieving asymptotically optimal complexity for specific metrics (e.g., search, update). Despite their elegance, recent attacks and deployment efforts have shown that optimal asymptotic complexity might not always imply practical performance, especially if the application demands high privacy. Hence, there is a significant need for searchable encryption frameworks that capture the recent attacks, with actual deployments on cloud infrastructures to assess practicality under realistic settings.
In this article, we introduce a new Dynamic Searchable Symmetric Encryption (DSSE) framework called Incidence Matrix (IM)-DSSE, which achieves a high level of privacy, efficient search/update, and low client storage, with actual deployments on real cloud settings. We harness an incidence matrix along with two hash tables to create an encrypted index, on which both search and update operations can be performed effectively with minimal information leakage. This simple set of data structures surprisingly offers a high level of DSSE security while at the same time achieving practical performance. Specifically, IM-DSSE achieves forward privacy, backward privacy, and size-obliviousness simultaneously. We also create several DSSE variants, each offering different trade-offs (e.g., security, computation) suitable for different cloud applications and infrastructures. Our framework was fully implemented and its performance was rigorously evaluated on a real cloud system (Amazon EC2). Our experimental results confirm that IM-DSSE is highly practical, even when deployed on mobile phones with a large outsourced dataset. Finally, we have released our IM-DSSE framework as an open-source library for wide adoption and development.
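
A minimal sketch of the incidence-matrix idea, under toy assumptions: each keyword row of the matrix is XOR-encrypted under a row pad and looked up via a keyword token, both derived from a client master key. This is an illustration only, not the actual IM-DSSE construction (which also handles updates and uses two hash tables):

```python
import hashlib
import hmac

def _prf(key, label, data):
    """HMAC-SHA256 as a PRF (toy: rows limited to 32 documents)."""
    return hmac.new(key, label + data, hashlib.sha256).digest()

def build_index(key, docs, keywords):
    """docs: {doc_id: set of words}. Returns sorted doc ids plus the
    server-side index mapping keyword token -> encrypted incidence row."""
    doc_ids = sorted(docs)
    index = {}
    for w in keywords:
        row = bytes(1 if w in docs[d] else 0 for d in doc_ids)
        pad = _prf(key, b"row", w.encode())[:len(row)]
        index[_prf(key, b"tok", w.encode())] = bytes(
            a ^ b for a, b in zip(row, pad))
    return doc_ids, index

def search(key, doc_ids, index, w):
    """Client derives the token and pad; the server only ever sees the
    token and the encrypted row, never the plaintext incidence bits."""
    enc_row = index.get(_prf(key, b"tok", w.encode()))
    if enc_row is None:
        return []
    pad = _prf(key, b"row", w.encode())[:len(enc_row)]
    return [d for d, bit, pbit in zip(doc_ids, enc_row, pad) if bit ^ pbit]
```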

Fast Quantum Algorithm for Solving Multivariate Quadratic Equations

In August 2015 the cryptographic world was shaken by a sudden and surprising announcement by the US National Security Agency (NSA) concerning plans to transition to post-quantum algorithms. Since this announcement, post-quantum cryptography has become a topic of primary interest for several standardization bodies. The transition from the currently deployed public-key algorithms to post-quantum algorithms has been found to be challenging in many aspects. In particular, the problem of evaluating the quantum-bit security of such post-quantum cryptosystems remains largely open. Of course, this question is of primary concern in the process of standardizing post-quantum cryptosystems. In this paper we consider the quantum security of the problem of solving a system of $m$ Boolean multivariate quadratic equations in $n$ variables (MQ$_2$); a central problem in post-quantum cryptography. When $n=m$, under a natural algebraic assumption, we present a Las Vegas quantum algorithm solving MQ$_2$ that requires the evaluation of, on average, $O(2^{0.462n})$ quantum gates. To our knowledge this is the fastest algorithm for solving MQ$_2$.
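
For scale, the classical baseline that the $O(2^{0.462n})$ quantum algorithm improves on is exhaustive search over all $2^n$ assignments. A small sketch of that baseline over GF(2), with a planted root so the system is solvable:

```python
import itertools
import random

def random_mq_system(n, m, rng):
    """m random quadratic polynomials over GF(2) in n variables, each
    stored as coefficients for x_i*x_j (i <= j), x_i, and a constant."""
    return [{
        "quad": {(i, j): rng.randrange(2) for i in range(n) for j in range(i, n)},
        "lin": [rng.randrange(2) for _ in range(n)],
        "const": rng.randrange(2),
    } for _ in range(m)]

def evaluate(poly, x):
    v = poly["const"]
    for (i, j), c in poly["quad"].items():
        v ^= c & x[i] & x[j]
    for i, c in enumerate(poly["lin"]):
        v ^= c & x[i]
    return v

def solve_exhaustive(system, n):
    """The classical O(2^n) baseline: try every assignment."""
    for bits in itertools.product((0, 1), repeat=n):
        if all(evaluate(p, bits) == 0 for p in system):
            return bits
    return None

# Plant a known root so the system is solvable, then recover one.
rng = random.Random(7)
n = 6
system = random_mq_system(n, n, rng)
planted = tuple(rng.randrange(2) for _ in range(n))
for p in system:
    p["const"] ^= evaluate(p, planted)   # force `planted` to be a root
solution = solve_exhaustive(system, n)
```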

Practical Quantum-Safe Voting from Lattices

We propose a lattice-based electronic voting scheme, EVOLVE (Electronic Voting from Lattices with Verification), which is conjectured to resist attacks by quantum computers. Our protocol involves a number of voting authorities so that vote privacy is maintained as long as at least one of the authorities is honest, while the integrity of the result is guaranteed even when all authorities collude. Furthermore, the result of the vote can be independently computed by any observer.
At the core of the protocol is the utilization of a homomorphic commitment scheme with strategically orchestrated zero-knowledge proofs: voters use approximate but efficient “Fiat-Shamir with Aborts” proofs to show the validity of their vote, while the authorities use amortized exact proofs to show that the commitments are well-formed. We also present a novel efficient zero-knowledge proof that one of two lattice-based statements is true (so-called OR proof) and a new mechanism to control the size of the randomness when applying the homomorphism to commitments.
We give concrete parameter choices to securely instantiate and evaluate the efficiency of our scheme. Our prototype implementation shows that the voters require 8 milliseconds to submit a vote of size about 20KB to each authority and it takes each authority 0.15 seconds per voter to create a proof that his vote was valid. The size of the vote share that each authority produces is approximately 15KB per voter, which we believe is well within the practical bounds for a large-scale election.
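
The homomorphic tallying idea can be illustrated with a classical Pedersen commitment over a small prime-order group; EVOLVE itself uses a lattice-based commitment, and these toy parameters offer no security (here even the discrete log of h is known, which a real setup must avoid):

```python
import random

# Toy Pedersen parameters: p = 2q + 1 with p, q prime; g and h generate
# the order-q subgroup. Toy only: log_g(h) is known here and the 10-bit
# modulus offers no security.
p, q = 1019, 509
g = 4                # 2^2 mod p: a quadratic residue, hence of order q
h = pow(g, 77, p)

def commit(v, r):
    """Pedersen commitment g^v * h^r mod p."""
    return (pow(g, v, p) * pow(h, r, p)) % p

rng = random.Random(42)
votes = [1, 0, 1, 1, 0]                       # 1 = yes, 0 = no
rands = [rng.randrange(1, q) for _ in votes]
ballots = [commit(v, r) for v, r in zip(votes, rands)]

# Homomorphic tally: multiplying commitments adds the committed values,
# so any observer can compute a commitment to the vote total.
tally = 1
for c in ballots:
    tally = (tally * c) % p
```

Opening the aggregate commitment (with the summed randomness) reveals only the total, not individual votes, which is why the zero-knowledge proofs of vote validity are essential.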

High-Precision Privacy-Preserving Real-Valued Function Evaluation

We propose a novel multi-party computation protocol for evaluating continuous real-valued functions with high numerical precision. Our method is based on approximations with Fourier series and uses at most two rounds of communication during the online phase. For the offline phase, we propose a trusted-dealer solution and a solution aided by an honest-but-curious party. We apply our algorithm to train a logistic regression classifier via a variant of Newton's method (known as IRLS) on unbalanced classification problems that detect rare events and that cannot be solved using previously proposed privacy-preserving optimization algorithms (e.g., those based on piecewise-linear approximations of the sigmoid function). Our protocol is efficient as it can be implemented using standard quadruple-precision floating-point arithmetic. We report multiple experiments and provide a demo application that implements our algorithm for training a logistic regression model.

Provably secure compilation of side-channel countermeasures

Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in the absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed.
We consider the problem of preserving side-channel countermeasures by compilation, and present a general method for proving that compilation preserves software-based side-channel countermeasures. The crux of our method is the notion of 2-simulation, which adapts to our setting the notion of simulation from compiler verification. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations.

Optimal Linear Secret Sharing Schemes for Graph Access Structures on Six Participants

We review the problem of finding the optimal information ratios of graph access structures on six participants. The study of such access structures was initiated by van Dijk [Des. Codes Cryptogr. 15 (1998), 301-321]. Through a sequence of follow-up works, the exact values of the optimal information ratios of nine access structures, out of the 18 initially unsolved non-isomorphic ones, were determined. Very recently [O. Farras et al., Cryptology ePrint Archive: Report 2017/919], for each of the remaining cases, the known lower bound on the optimal information ratio of linear secret sharing schemes was improved, establishing the optimal information ratio of linear secret sharing schemes for two of them. Here, for each of the other seven cases, we provide a new upper bound on the optimal information ratio of linear secret sharing schemes; our improved upper bounds match the corresponding recently presented lower bounds. The improved upper bounds are achieved using decomposition techniques. As an additional contribution, we present a new decomposition technique, called $(\lambda,\omega)$-weighted decomposition, which is a generalization of all known decomposition techniques.

Integer Reconstruction Public-Key Encryption

In [AJPS17], Aggarwal, Joux, Prakash & Santha described an elegant public-key cryptosystem (AJPS-1) mimicking NTRU over the integers. This algorithm relies on the properties of Mersenne primes instead of polynomial rings. A later ePrint [BCGN17] by Beunardeau et al. revised AJPS-1's initial security estimates. While lower than initially thought, the best known attack on AJPS-1 still seems to leave the defender with an exponential advantage over the attacker [dBDJdW17]. However, this reduced exponential advantage implies enlarging AJPS-1's parameters. This, plus the fact that AJPS-1 encodes only a single plaintext bit per ciphertext, made AJPS-1 impractical. In a recent update, Aggarwal et al. overcame this limitation by extending AJPS-1's bandwidth. This variant (AJPS-ECC) modifies the definition of the public key and relies on error-correcting codes.
This paper presents a different high-bandwidth construction. In contrast to AJPS-ECC, we do not modify the public key, avoid using error-correcting codes, and use backtracking to decrypt. The new algorithm is orthogonal to AJPS-ECC, as both mechanisms may be used concurrently in the same ciphertext and combine their bandwidth improvements. Alternatively, we can increase AJPS-ECC's information rate by a factor of 26 for the parameters recommended in [AJPS17]. The obtained bandwidth improvement, and the fact that encryption and decryption are reasonably efficient, make our scheme an interesting post-quantum candidate.

Overdrive: Making SPDZ Great Again

SPDZ denotes a multiparty computation scheme in the preprocessing model based on somewhat homomorphic encryption (SHE) in the form of BGV. At CCS '16, Keller et al. presented MASCOT, a replacement of the preprocessing phase using oblivious transfer instead of SHE, improving by two orders of magnitude on the SPDZ implementation by Damgård et al. (ESORICS '13). In this work, we show that using SHE outperforms MASCOT in many aspects:
- We present a protocol that uses semi-homomorphic (addition-only) encryption. For two parties, our BGV-based implementation is 6 times faster than MASCOT on a LAN and 20 times faster in a WAN setting. The latter factor roughly matches the reduction in communication.
- We show that the proof of knowledge in the original work by Damgård et al. (Crypto '12) is, in practice, about one order of magnitude more efficient than the one used in the implementation mentioned above.
- We present an improvement to the verification of the aforementioned proof of knowledge that increases the performance with a growing number of parties, doubling it for 16 parties.

Quantum Demiric-Selçuk Meet-in-the-Middle Attacks: Applications to 6-Round Generic Feistel Constructions

This paper shows that quantum computers can significantly speed up a type of meet-in-the-middle attack initiated by Demirci and Selçuk (DS-MITM attacks), which is currently one of the most powerful cryptanalytic approaches against symmetric-key schemes in the classical setting. The quantum DS-MITM attacks are demonstrated against 6 rounds of the generic Feistel construction supporting an $n$-bit key and an $n$-bit block, which was attacked by Guo et al. in the classical setting with data, time, and memory complexities of $O(2^{3n/4})$. The complexities of our quantum attacks depend on the adversary's model and the number of qubits available. When the adversary has access to quantum computers for offline computations but online queries are made in a classical manner (the so-called Q1 model), the attack requires $O(2^{n/2})$ classical queries and $O(2^n/q)$ quantum computations using about $q$ qubits. These are balanced at $\tilde{O}(2^{n/2})$, which significantly improves on the classical attack. Technically, we convert the quantum claw-finding algorithm to be suitable for the Q1 model. The attack is then extended to the case where the adversary can make superposition queries (the so-called Q2 model). The attack approach differs drastically from the one in the Q1 model; the attack is based on 3-round distinguishers with Simon's algorithm and then appends 3 rounds for key recovery. This can be solved by applying the combination of Simon's and Grover's algorithms recently proposed by Leander and May.

Speed-ups and time-memory trade-offs for tuple lattice sieving

In this work we study speed-ups and time-space trade-offs for solving the shortest vector problem (SVP) on Euclidean lattices based on tuple lattice sieving.
Our results extend and improve upon previous work of Bai-Laarhoven-Stehlé [ANTS'16] and Herold-Kirshanova [PKC'17], with better complexities for arbitrary tuple sizes and offering tunable time-memory trade-offs. The trade-offs we obtain stem from the generalization and combination of two algorithmic techniques: the configuration framework introduced by Herold-Kirshanova, and the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA'16].
When the available memory scales quasi-linearly with the list size, we show that with triple sieving we can solve SVP in dimension $n$ in time $2^{0.3588n + o(n)}$ and space $2^{0.1887n + o(n)}$, improving upon the previous best triple sieve time complexity of $2^{0.3717n + o(n)}$ of Herold-Kirshanova. Using more memory we obtain better asymptotic time complexities. For instance, we obtain a triple sieve requiring only $2^{0.3300n + o(n)}$ time and $2^{0.2075n + o(n)}$ memory to solve SVP in dimension $n$. This improves upon the best double Gauss sieve of Becker-Ducas-Gama-Laarhoven, which runs in $2^{0.3685n + o(n)}$ time when using the same amount of space.

VerMI: Verification Tool for Masked Implementations

Masking is a widely used countermeasure against Side-Channel Attacks (SCA), but implementing these countermeasures is challenging. Experimental security evaluation requires special equipment, a considerable amount of time, and extensive technical knowledge, so, to automate and speed up this process, a formal verification can be performed to assess the security of a design. Multiple theoretical approaches and verification tools have been proposed in the literature. The majority of them are tailored for software implementations and are not applicable to hardware, since they do not take glitches into account. Existing hardware verification tools are limited either to combinational logic or to small designs, due to the computational resources needed.
In this work we present VerMI, a verification tool in the form of a logic simulator that checks the properties defined in Threshold Implementations to address the security of a hardware implementation for meaningful orders of security. The tool is designed so that any masking scheme can be evaluated. It accepts combinational and sequential logic and is able to analyze an entire cipher in a short time. With the tool we have managed to spot a flaw in the round-based Keccak implementation by Gross et al., published in DSD 2017.

New (and Old) Proof Systems for Lattice Problems

We continue the study of statistical zero-knowledge (SZK) proofs, both interactive and noninteractive, for computational problems on point lattices. We are particularly interested in the problem GapSPP of approximating the $\varepsilon$-smoothing parameter (for some $\varepsilon < 1/2$) of an $n$-dimensional lattice. The smoothing parameter is a key quantity in the study of lattices, and GapSPP has been emerging as a core problem in lattice-based cryptography, e.g., in worst-case to average-case reductions.
We show that GapSPP admits SZK proofs for *remarkably low* approximation factors, improving on prior work by up to roughly $\sqrt{n}$. Specifically:
-- There is a *noninteractive* SZK proof for $O(\log(n) \sqrt{\log (1/\varepsilon)})$-approximate GapSPP. Moreover, for any negligible $\varepsilon$ and a larger approximation factor $\tilde{O}(\sqrt{n \log(1/\varepsilon)})$, there is such a proof with an *efficient prover*.
-- There is an (interactive) SZK proof with an efficient prover for $O(\log n + \sqrt{\log(1/\varepsilon)/\log n})$-approximate coGapSPP. We show this by proving that $O(\log n)$-approximate GapSPP is in coNP.
In addition, we give an (interactive) SZK proof with an efficient prover for approximating the lattice *covering radius* to within an $O(\sqrt{n})$ factor, improving upon the prior best factor of $\omega(\sqrt{n \log n})$.

Fast Garbling of Circuits over 3-Valued Logic

In the setting of secure computation, a set of parties wish to compute a joint function of their private inputs without revealing anything but the output. Garbled circuits, first introduced by Yao, are a central tool in the construction of protocols for secure computation (and other tasks like secure outsourced computation), and are the fastest known method for constant-round protocols. In this paper, we initiate a study of garbling multivalent-logic circuits, which are circuits whose wires may carry values from some finite/infinite set of values (rather than only True and False). In particular, we focus on the three-valued logic system of Kleene, in which the admissible values are True, False, and Unknown. This logic system is used in practice in SQL, where some of the values may be missing. Thus, efficient constant-round secure computation of SQL over a distributed database requires the ability to efficiently garble circuits over 3-valued logic. However, as we show, the two natural (naive) methods of garbling 3-valued logic are very expensive.
In this paper, we present a general approach for garbling three-valued logic, which is based on first encoding the 3-valued logic into Boolean logic, then using standard garbling techniques, and finally decoding back into 3-valued logic. Interestingly, we find that the specific encoding chosen can have a significant impact on efficiency. Accordingly, the aim is to find Boolean encodings of 3-valued logic that enable efficient Boolean garbling (i.e., minimize the number of AND gates). We also show that Boolean AND gates can be garbled at the same cost as garbling XOR gates in the 3-valued logic setting. Thus, it is unlikely that an analogue of free-XOR exists for 3-valued logic garbling (since this would imply free-AND in the Boolean setting).
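
One possible two-bit Boolean encoding (our illustration, not necessarily the encoding chosen in the paper) represents each Kleene value as an interval (lo, hi), under which AND and OR become pairs of Boolean gates and NOT needs no gates at all:

```python
# Kleene values encoded as (lo, hi) bit pairs:
#   False = (0, 0), Unknown = (0, 1), True = (1, 1)
# lo = "definitely true", hi = "possibly true".
F, U, T = (0, 0), (0, 1), (1, 1)

def k_and(a, b):
    return (a[0] & b[0], a[1] & b[1])   # two Boolean AND gates

def k_or(a, b):
    return (a[0] | b[0], a[1] | b[1])   # two Boolean OR gates

def k_not(a):
    return (1 - a[1], 1 - a[0])         # swap and negate: no AND gates
```

Under this encoding a ternary AND costs two Boolean ANDs, illustrating the paper's point that the cost of garbling depends directly on how many AND gates the chosen encoding induces.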

Practical Cryptanalysis of a Public-key Encryption Scheme Based on Non-linear Indeterminate Equations at SAC 2017

We investigate the security of a public-key encryption scheme, the Indeterminate Equation Cryptosystem (IEC), introduced by Akiyama, Goto, Okumura, Takagi, Nuida, and Hanaoka at SAC 2017 as post-quantum cryptography. They gave two parameter sets PS1 (n,p,deg X,q) = (80,3,1,921601) and PS2 (n,p,deg X,q) = (80,3,2,58982400019).
This paper gives practical key-recovery and message-recovery attacks against those parameter sets of IEC through lattice basis-reduction algorithms. We exploit the fact that n = 80 is composite and adapt the idea of Gentry's attack against NTRU-Composite (EUROCRYPT 2001) to this setting. The summary of our attacks follows:
* On PS1, we recover 84 private keys from 100 public keys in 30–40 seconds per key.
* On PS1, we recover partial information of all messages from 100 ciphertexts in a second per ciphertext.
* On PS2, we recover partial information of all messages from 100 ciphertexts in 30 seconds per ciphertext.
Moreover, we also give message-recovery and distinguishing attacks against parameter sets with prime n, say, n = 83. We exploit another subring to reduce the dimension of the lattices in our lattice-based attacks, and our attack succeeds in the case of deg X = 2.
* For PS2' (n,p,deg X,q) = (83,3,2,68339982247), we recover 7 messages from 10 random ciphertexts within 61,000 seconds (about 17 hours) per ciphertext.
* Even for larger n, we can find short vectors in lattices that break the underlying assumption of IEC. In our experiments, we found such a vector within 330,000 seconds (about 4 days) for n = 113.

Generic Low-Latency Masking in Hardware

In this work, we introduce a generalized concept for low-latency masking that is applicable to any implementation and protection order, and (in its most extreme form) does not require on-the-fly randomness. The main idea of our approach is to avoid collisions of shared variables in nonlinear circuit parts and to skip the share compression. We show the feasibility of our approach on a full implementation of a one-round unrolled Ascon variant and on an AES S-box case study. Additionally, we discuss possible trade-offs to make our approach interesting for practical implementations. As a result, we obtain a first-order masked AES S-box that is calculated in a single clock cycle with rather high implementation costs (60.7 kGE), and a two-cycle variant with much lower implementation costs (6.7 kGE). The side-channel resistance of our Ascon S-box designs up to order three is then verified using the formal analysis tool of [BGI+18]. Furthermore, we introduce a taint-checking-based verification approach that works specifically for our low-latency approach and allows us to verify large circuits like our low-latency AES S-box design in reasonable time.
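
For intuition, a standard first-order masked AND gadget in the domain-oriented style can be sketched in software; this is a simplified textbook gadget, not the paper's low-latency construction (which additionally avoids the share compression this gadget relies on):

```python
import random

def share(x, rng):
    """Split bit x into two Boolean shares with x = x0 XOR x1."""
    x0 = rng.randrange(2)
    return x0, x0 ^ x

def masked_and(a0, a1, b0, b1, rng):
    """First-order masked AND: the cross-domain terms a0&b1 and a1&b0
    are refreshed with the same fresh random bit r, so neither output
    share alone reveals the unmasked inputs."""
    r = rng.randrange(2)
    c0 = (a0 & b0) ^ ((a0 & b1) ^ r)
    c1 = (a1 & b1) ^ ((a1 & b0) ^ r)
    return c0, c1
```

XORing the output shares recombines to the plain AND of the unshared bits, which is the correctness property any masked gadget must preserve.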

Forward-Private Dynamic Searchable Symmetric Encryption with Efficient Search

Dynamic Searchable Symmetric Encryption (DSSE) allows a client to delegate keyword searches and file updates over an encrypted database via encrypted indexes, and therefore provides opportunities to mitigate the data privacy versus utilization dilemma in cloud storage platforms. Despite its merits, recent works have shown that efficient DSSE schemes are vulnerable to statistical attacks due to the lack of forward privacy, whereas forward-private DSSE schemes suffer from practicality concerns as a result of their extreme computation overhead. Given the significant practical impact of statistical attacks, there is a critical need for new DSSE schemes that achieve forward privacy in a more practical and efficient manner.
We propose a new DSSE scheme that we refer to as Forward-private Sublinear DSSE (FS-DSSE). FS-DSSE harnesses special secure update strategies and a novel caching strategy to reduce the computation cost of repeated queries. Therefore, it achieves forward privacy, sublinear search complexity, low end-to-end delay, and parallelization capability simultaneously. We fully implemented our proposed method and evaluated its performance on a real cloud platform. Our experimental evaluation shows that the proposed scheme is highly secure and highly efficient compared with state-of-the-art DSSE techniques. Specifically, FS-DSSE is one to three orders of magnitude faster than forward-secure DSSE counterparts.

Weak-Unforgeable Tags for Secure Supply Chain Management

Given the value of imported counterfeit and pirated goods, the need for secure supply chain management is pertinent. Maleki et al. (HOST 2017) propose a new management scheme based on RFID tags (with 2-3K bits NVM) which, compared to other schemes, is competitive on several performance and security metrics. Its main idea is to have each RFID tag store its reader events in its own NVM while moving through the supply chain. In order to bind a tag's identity to each event, such that an adversary is not able to impersonate the tag's identity on another duplicate tag, a function with a weak form of unforgeability is needed. In this paper, we formally define this security property, present three constructions (MULTIPLY-ADD, ADD-XOR, and S-Box-CBC) having this security property, and show how to bound the probability of successful impersonation in concrete parameter settings. Finally, we compare our constructions with the lightweight hash function PHOTON used by Maleki et al. in terms of security and circuit area needed. We conclude that our ADD-XOR and S-Box-CBC constructions need approximately 1/4 to 1/3 of PHOTON's total circuit area (which also includes the control circuitry besides PHOTON itself) while maintaining an appropriate security level that takes care of economically motivated adversaries.
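
As a purely hypothetical sketch of the ADD-XOR flavor of keyed mixing (the paper's exact construction, key schedule, and word width differ; the 64-bit width and single-call key handling below are our assumptions for illustration):

```python
MASK = (1 << 64) - 1    # 64-bit toy width (our assumption)

def add_xor_tag(event, k_add, k_xor):
    """Hypothetical ADD-XOR-style keyed function: mix the event value
    with one key word by modular addition and a second by XOR. The
    interplay of the two incompatible group operations (+ mod 2^64
    and bitwise XOR) is what such designs rely on against forgeries."""
    return ((event + k_add) & MASK) ^ k_xor
```

Mixing addition with XOR is cheap in circuit area, which is consistent with the paper's comparison against the heavier PHOTON hash; the weak-unforgeability analysis and parameter bounds are, of course, the paper's contribution and not captured by this sketch.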

Off-line Digital Cash Schemes Providing Unlinkability, Anonymity and Change

Several ecash systems have been proposed in the last twenty years or so, each offering features similar to real cash. One feature which to date has not been provided is that of a payee giving change to a payer for an e-coin in an off-line setting. In this paper, we indicate how an off-line ecash system can solve the change-giving problem. In addition, our protocol provides the usual expected features of anonymity and unlinkability of the payer, but can reveal the identity of an individual who illegally tries to spend ecash twice.

Correlations Between (Nonlinear) Combiners of Input and Output of Random Functions and Permutations

Linear cryptanalysis considers correlations between linear input and output combiners for block ciphers and stream ciphers. Daemen and Rijmen (2007) obtained the distributions of the correlations between linear input and output combiners of uniform random functions and uniform random permutations. The present work generalises these results to obtain the distributions of the correlations between arbitrary input and output combiners of uniform random functions and uniform random permutations.
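
The correlation in question can be estimated empirically. The sketch below samples uniform random permutations on 4-bit values and computes the correlation between one fixed pair of (here linear) combiners; the paper's contribution is the exact distribution, also for arbitrary nonlinear combiners, which this simulation only hints at:

```python
import random

def dot(a, b):
    """Parity of the bitwise AND, i.e. the GF(2) inner product."""
    return bin(a & b).count("1") & 1

def correlation(perm, u, v, n):
    """Correlation between input combiner u.x and output combiner
    v.P(x) over all n-bit inputs x."""
    N = 1 << n
    agree = sum(dot(u, x) == dot(v, perm[x]) for x in range(N))
    return 2 * agree / N - 1

# Estimate the correlation distribution over uniform random permutations.
rng = random.Random(0)
n = 4
cors = []
for _ in range(2000):
    perm = list(range(1 << n))
    rng.shuffle(perm)
    cors.append(correlation(perm, u=0b0110, v=0b1011, n=n))
mean = sum(cors) / len(cors)
```

For nonzero masks the empirical mean is close to zero, while the spread of the samples reflects the variance that the distributional results quantify exactly.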

TERMinator Suite: Benchmarking Privacy-Preserving Architectures

Security and privacy are fundamental objectives characterizing contemporary cloud computing. Despite the wide adoption of encryption for protecting data in transit and at rest, data in use remains unencrypted inside cloud processors and memories, as computation cannot be performed directly on encrypted values. This limitation introduces security risks, as unencrypted values can be leaked through side-channels or hardware Trojans. To address this problem, encrypted architectures have recently been proposed, which leverage homomorphic encryption to natively process encrypted data using datapaths of thousands of bits. In this case, additional security protections are traded for higher performance penalties, which drives the need for more efficient architectures. In this work, we develop benchmarks specifically tailored to encrypted computers, to enable comparisons across different architectures. Our benchmark suite, dubbed TERMinator, is unique as it avoids 'termination problems' that prohibit making control-flow decisions and evaluating early termination conditions based on encrypted data, as these can leak information. Contrary to generic suites that ignore the fundamental challenges of encrypted computation, our algorithms are tailored to the security primitives of the target encrypted architecture, such as the existence of branching oracles. In our experiments, we compiled our benchmarks for the Cryptoleq architecture and evaluated their performance for a range of security parameters.

Linear Regression Side Channel Attack Applied on Constant XOR

Linear regression side-channel attack (LRA) is known as a robust attack method as it makes use of independent-bit leakage. This leakage assumption is more general than the Hamming weight/Hamming distance model used in correlation power analysis (CPA). However, in practice, the Hamming weight and Hamming distance models suit most devices well. In this paper, we restudy the linear regression attack under the Hamming weight/Hamming distance model and propose novel LRA methods. We find that in many common scenarios LRA is not only an alternative but also a more efficient tool compared with CPA. Two typical cases are recovering keys with XOR operation leakage and chosen-plaintext attacks on block ciphers with leakage from round output. Simulation results are given to compare with traditional CPA in both cases. Our LRA method achieves up to 400% and 300% improvements over CPA in these two cases, respectively. Experiments with AES on a SAKURA-G board also prove the efficiency of our methods in practice: 128 key bits are recovered with 1500 traces using XOR operation leakage, and one key byte is recovered with only 50 chosen-plaintext traces in the other case.

Probabilistic and Considerate Attestation of IoT Devices against Roving Malware

Remote Attestation (RA) is a popular means of detecting malware presence (or verifying its absence) on embedded and IoT devices. It is especially relevant to low-end devices that are incapable of protecting themselves against infection. Malware that is aware of ongoing or impending attestation and aims to avoid detection can relocate itself during computation of the attestation measurement. In order to thwart such behavior, prior RA techniques are either non-interruptible or explicitly forbid modification of storage during measurement computation. However, since the latter can be a time-consuming task, this curtails the availability of the device's other (main) functions, which is especially undesirable, or even dangerous, for devices with time- and/or safety-critical missions. In this paper, we propose SMARM, a lightweight technique, based on shuffled measurements, as a defense against roving malware. In SMARM, memory is measured in a randomized and secret order. This does not impact the device's availability: the measurement process can be interrupted, even by malware, which can relocate itself at will. We analyze various malware behaviors and show that, while malware can escape detection in a single attestation instance, it is highly unlikely to avoid eventual detection.

Lattice-Based Public Key Searchable Encryption from Experimental Perspectives

Public key Encryption with Keyword Search (PEKS) aims at mitigating the data privacy versus utilization dilemma by allowing {\em any user in the system} to send encrypted files to the server to be searched by a receiver. The receiver can retrieve the encrypted files containing specific keywords by providing the corresponding trapdoors for these keywords to the server. Despite their merits, the existing PEKS schemes introduce a high end-to-end delay that may hinder their adoption in practice. Moreover, they do not scale well for large security parameters and offer no post-quantum security promises. In this paper, we propose two novel lattice-based PEKS schemes that offer high computational efficiency along with better security assurances than the existing alternatives. Specifically, our NTRU-PEKS scheme achieves an 18 times lower end-to-end delay than the most efficient pairing-based alternatives. Our LWE-PEKS offers provable security in the standard model with a reduction to worst-case lattice problems. We fully implemented our NTRU-PEKS scheme and benchmarked its performance as deployed on Amazon Web Services cloud infrastructure.

HILA5 Pindakaas: On the CCA security of lattice-based encryption with error correction

We show that the NISTPQC submission HILA5 is not secure against chosen-ciphertext attacks. Specifically, we demonstrate a key-recovery attack on HILA5 using an active attack on reused keys. The attack works around the error correction in HILA5. The attack applies to the HILA5 key-encapsulation mechanism (KEM), and also to the public-key encryption mechanism (PKE) obtained by NIST's procedure for combining the KEM with authenticated encryption. This contradicts the most natural interpretation of the IND-CCA security claim for HILA5.

On hybrid SIDH schemes using Edwards and Montgomery curve arithmetic

Supersingular isogeny Diffie-Hellman (SIDH) is a proposal for a quantum-resistant key exchange. The state-of-the-art implementation works entirely with Montgomery curves and can essentially be divided into elliptic curve arithmetic and isogeny arithmetic. It is well known that twisted Edwards curves can provide more efficient elliptic curve arithmetic. Costello and Hisil therefore hinted that a speedup might be gained by using only Edwards curves for isogeny and curve arithmetic, or by a hybrid scheme that uses Edwards curve arithmetic and switches between the models whenever needed.
Following the latter case, we investigated how to efficiently switch between Montgomery and twisted Edwards curves in SIDH, and how to insert Edwards arithmetic in the current state-of-the-art implementation.
We did not gain a speedup compared to the results of Costello, Longa, and Naehrig, but in some cases the performance of Edwards arithmetic is almost equally fast. Thus, we suppose that a hybrid scheme does not improve the performance of SIDH, but still can be interesting for platforms having special hardware acceleration for Edwards curves. However, a full Edwards SIDH version may give a speedup, if fast Edwards isogeny formulas can be found.
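The switching referred to above rests on the standard birational equivalence between Montgomery and twisted Edwards curves. The following sanity check over a toy field (the prime and curve coefficients are hypothetical examples, not SIDH parameters) verifies the map on every applicable affine point.

```python
# Montgomery B*y^2 = x^3 + A*x^2 + x  <->  twisted Edwards
# a*u^2 + v^2 = 1 + d*u^2*v^2, with a=(A+2)/B, d=(A-2)/B,
# u = x/y, v = (x-1)/(x+1)  (valid when y != 0 and x != -1).
p = 101            # toy prime, far too small for cryptography
A, B = 6, 1        # hypothetical Montgomery coefficients

def inv(x):
    return pow(x, p - 2, p)

a = (A + 2) * inv(B) % p
d = (A - 2) * inv(B) % p

# Brute-force all affine Montgomery points with y != 0 and x != -1.
points = [(x, y) for x in range(p) for y in range(1, p)
          if (B * y * y - (x ** 3 + A * x * x + x)) % p == 0 and x != p - 1]

for x, y in points:
    u = x * inv(y) % p
    v = (x - 1) * inv(x + 1) % p
    # The image must satisfy the twisted Edwards equation.
    assert (a * u * u + v * v - (1 + d * u * u * v * v)) % p == 0
```

Because the map is just a handful of field operations, switching models mid-computation is cheap; the question the paper studies is whether the savings in the Edwards group law outweigh the conversions.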

A New Crypto-Classifier Service for Energy Efficiency in Smart Cities

Smart Cities draw a nice picture of a connected city where useful services and data are ubiquitous, energy is properly used and urban infrastructures are well orchestrated.
Fulfilling this vision in our cities implies unveiling citizens' data and assets. Thus, security and data privacy appear as crucial issues to consider. In this paper, we study a way of offering a secure energy management service for the diagnosis and classification of buildings in a district according to their energy efficiency. Our remote service can benefit both local authorities and householders without revealing private data. Our framework is designed such that the private data is permanently encrypted and the server performing the labeling algorithm has no information about the sensitive data and no capability to decrypt it. The underlying cryptographic technology is homomorphic encryption, which allows calculations to be performed directly on encrypted data. We present the prototype of a crypto-classification service for energy consumption profiles involving different actors of a smart city community, as well as the associated performance results. We assess our proposal on real data taken from an Irish residential district and show that our service achieves acceptable performance in terms of security, execution times and memory requirements.

Zero-Sum Partitions of PHOTON Permutations

We describe an approach to zero-sum partitions using Todo’s division property (EUROCRYPT 2015). It follows the inside-out methodology and includes an MILP-assisted search for the forward and backward trails, together with a subspace approach to connect the two trails that is less restrictive than is common.
As an application we choose PHOTON, a family of sponge-like hash function proposals that was recently standardized by ISO. With respect to the security claims made by the designers, we show for the first time zero-sum partitions for almost all of the full 12-round permutation variants that use a 4-bit S-Box. As with essentially any other zero-sum property in the literature, the gap between a generic attack and the shortcut is small here as well.

Two-Face: New Public Key Multivariate Schemes

We present here new multivariate schemes that can be seen as generalizations of HFE with a property called `Two-Face'.
In particular, we present five such families of algorithms named `Dob', `Simple Pat', `General Pat', `Mac', and `Super Two-Face'. These families are connected: some of them are refinements or generalizations of others. Notably, some of these schemes can be used for public key encryption, and some for public key signatures. We also introduce new multivariate quadratic permutations that may be of interest beyond cryptography.

Improvements for Finding Impossible Differentials of Block Cipher Structures

In this paper we improve Wu and Wang's method for finding impossible differentials of block cipher structures. The improvement is more general than Wu and Wang's method in that it can find more impossible differentials in less time.
We apply it to the Gen-CAST256, Misty, Gen-Skipjack, Four-Cell, Gen-MARS, SMS4, MIBS, Camellia*, LBlock, E2 and SNAKE block ciphers. All impossible differentials discovered by the algorithm for these ciphers are the same as those of Wu and Wang's method. In addition, for the 8-round MIBS block cipher, we find 4 new impossible differentials that are not listed in Wu and Wang's results.
The experimental results show that the improved algorithm can not only find more impossible differentials, but also
largely reduce the search time.

Security notions for cloud storage and deduplication

Cloud storage is in widespread use by individuals and enterprises but introduces a wide array of attack vectors. A basic step for users is to encrypt their data, but it is not obvious what precise security properties are required for encryption. Furthermore, cloud storage providers often use techniques such as data deduplication for improving efficiency which restricts the application of semantically-secure encryption. Generic security goals and attack models have thus far proved elusive: primitives are considered in isolation and protocols are often proved secure under ad hoc models for restricted classes of adversaries.
We provide a generic syntax for storage systems that allows us to formally model natural security notions for cloud storage and deduplication. We define security notions for confidentiality and integrity in encrypted cloud storage and determine relations between these notions. We show how to build cloud storage systems that satisfy our defined security notions using generic cryptographic components.

Unconditionally secure multi-party quantum commitment scheme

A new unconditionally secure multi-party quantum commitment scheme is proposed in this paper by encoding the committed message into the phase of a quantum state. Multi-party means that there is more than one recipient in our scheme. We show that our quantum commitment scheme is unconditionally hiding and binding, and the hiding is perfect. Our technique is based on the interference of phase-encoded coherent states of light.
Its security proof relies on the no-cloning theorem of quantum theory and the properties of quantum information.

Asymptotically faster quantum algorithms to solve multivariate quadratic equations

This paper designs and analyzes a quantum algorithm to solve a system of $m$ quadratic equations in $n$ variables over a finite field ${\bf F}_q$. In the case $m=n$ and $q=2$, under standard assumptions, the algorithm takes time $2^{(t+o(1))n}$ on a mesh-connected computer of area $2^{(a+o(1))n}$, where $t\approx 0.45743$ and $a\approx 0.01467$. The area-time product has asymptotic exponent $t+a\approx 0.47210$.
For comparison, the area-time product of Grover's algorithm has asymptotic exponent $0.50000$. Parallelizing Grover's algorithm to reach asymptotic time exponent $0.45743$ requires asymptotic area exponent $0.08514$, much larger than $0.01467$.
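The exponents quoted above can be checked arithmetically. The parallel-Grover figure follows from the standard trade-off (assumed here, not taken from the paper's text) that spreading the search over $2^{sn}$ processors cuts Grover's time exponent from $0.5$ to $0.5 - s/2$ at area exponent $s$.

```python
# Arithmetic check of the exponents quoted in the abstract.
t, a = 0.45743, 0.01467
assert round(t + a, 5) == 0.47210   # area-time exponent of the new algorithm

# Parallel Grover: with 2^(s*n) processors the time exponent is 0.5 - s/2,
# so matching time exponent t requires area exponent s = 2*(0.5 - t).
s = 2 * (0.5 - t)
assert round(s, 5) == 0.08514       # the area exponent quoted for Grover
assert t + s > t + a                # Grover's area-time product is worse
```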

Connecting Legendre with Kummer and Edwards

Scalar multiplication on Legendre form elliptic curves can be sped up in two ways: one can perform the bulk of the computation either on the associated Kummer line or on an appropriate twisted Edwards form elliptic curve. This paper provides details of the maps between Legendre form elliptic curves and their associated Kummer lines, and between Legendre form elliptic curves and related twisted Edwards form elliptic curves. Further, concrete twisted Edwards form elliptic curves are identified which correspond to known Kummer lines at the 128-bit security level and provide very fast scalar multiplication on modern architectures supporting SIMD operations.

Horizontal Clustering Side-Channel Attacks on Embedded ECC Implementations (Extended Version)

Side-channel attacks are a threat to cryptographic algorithms running on embedded devices. Public-key cryptosystems, including elliptic curve cryptography (ECC), are particularly vulnerable because their private keys are usually long-term. Well known countermeasures like regularity, projective coordinates and scalar randomization, among others, are used to harden implementations against common side-channel attacks like DPA.
Horizontal clustering attacks can theoretically overcome these countermeasures by attacking individual side-channel traces. In practice, horizontal attacks have been applied to protected ECC implementations on FPGAs. However, it has remained unknown whether such attacks can be applied to protected implementations running on embedded devices, especially in a non-profiled setting.
In this paper we mount non-profiled horizontal clustering attacks on two protected implementations of the Montgomery Ladder on Curve25519 available in the µNaCl library targeting electromagnetic (EM) emanations. The first implementation performs the conditional swap (cswap) operation through arithmetic of field elements (cswap-arith), while the second does so by swapping the pointers (cswap-pointer). They run on a 32-bit ARM Cortex-M4F core.
Our best attack has success rates of 97.64% and 99.60% for cswap-arith and cswap-pointer, respectively. This means that at most 6 and 2 bits are incorrectly recovered, and therefore, a subsequent brute-force can fix them in reasonable time. Furthermore, our horizontal clustering framework used for the aforementioned attacks can be applied against other protected implementations.

Short Double- and N-Times-Authentication-Preventing Signatures from ECDSA and More

Double-authentication-preventing signatures (DAPS) are signatures designed so that signing two messages with an identical first part (called the address) but different second parts (called the payload) makes it possible to publicly extract the secret signing key from the two signatures. A prime application for DAPS is disincentivizing and/or penalizing the creation of two signatures on different payloads within the same address, such as penalizing double spending of transactions in Bitcoin by the loss of the double spender's money.
So far, DAPS have been constructed from very specific signature schemes not used in practice, and with existing techniques it has proved elusive to construct DAPS from signatures widely used in practice. This, unfortunately, has prevented practical adoption of this interesting tool. In this paper we ask whether one can construct DAPS from signature schemes used in practice. We affirmatively answer this question by presenting novel techniques to generically construct provably secure DAPS from a large class of discrete logarithm based signatures. This class includes schemes like Schnorr, DSA, EdDSA, and, most interestingly for practical applications, the widely used ECDSA signature scheme. The resulting DAPS are highly efficient and the shortest among all existing DAPS schemes: they are nearly half the size of the most efficient factoring based schemes (IACR PKC'17) and improve by a factor of 100 over the most efficient discrete logarithm based ones (ACM CCS'15). Although this efficiency comes at the cost of a reduced address space, i.e., keys of size linear in the number of addresses, we show that this is not a limitation in practice. Moreover, we generalize DAPS to any N>2, which we denote N-times-authentication-preventing signatures (NAPS). Finally, we provide an integration of our ECDSA-based DAPS into the OpenSSL library and perform an extensive comparison with existing approaches.
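The extraction idea behind DAPS is, in spirit, the classic nonce-reuse key recovery for DSA/ECDSA-style signatures: two signatures that share the same randomness leak the signing key. A toy DSA-style sketch with tiny hypothetical parameters (this illustrates only the extraction algebra, not the paper's DAPS construction, which forces the analogous collision via addresses):

```python
# Toy DSA-style key recovery from two signatures that reuse the nonce k.
p, q, g = 23, 11, 2          # g has order q = 11 in Z_23^*
x = 7                        # private signing key
k = 5                        # reused nonce (the fatal mistake)

def sign(h):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

h1, h2 = 4, 6                # two different message hashes
r, s1 = sign(h1)
_, s2 = sign(h2)

# Extraction: k = (h1 - h2)/(s1 - s2), then x = (s1*k - h1)/r  (mod q).
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
assert (k_rec, x_rec) == (k, x)
```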

Faster Cryptographic Hash Function From Supersingular Isogeny Graphs

We propose a variant of the CGL hash function of Charles et al. (2009) that is significantly
faster than the original algorithm, and prove that it is preimage and collision resistant. For
$n = \log p$ where $p$ is the characteristic of the finite field, the performance ratio between
CGL and the new proposal is $(5.7n + 110) / (13.5\log n + 46.4)$. This gives an exponential
speed up as the size of $p$ increases. Assuming the best quantum preimage attack on the hash
has complexity $O(p^{\frac{1}{4}})$, we attain a concrete speed-up for a 256-bit quantum
preimage security level by a factor 33.5. For a 384-bit quantum preimage security level, the
speed-up is by a factor 47.8.

Collusion Resistant Watermarking Schemes for Cryptographic Functionalities

A cryptographic watermarking scheme embeds a message into a program while preserving its functionality. Recently, a number of watermarking schemes have been proposed, which are proven secure in the sense that given one marked program, any attempt to remove the embedded message will substantially change its functionality.
In this paper, we formally initiate the study of collusion attacks for watermarking schemes, where the attacker’s goal is to remove the embedded messages given multiple copies of the same program, each with a different embedded message. This is motivated by practical scenarios, where a program may be marked multiple times with different messages.
The results of this work are twofold. First, we examine existing cryptographic watermarking schemes and observe that all of them are vulnerable to collusion attacks. Second, we construct collusion resistant watermarking schemes for various cryptographic functionalities (e.g., pseudorandom function evaluation, decryption, etc.). To achieve our second result, we present a new primitive called puncturable functional encryption scheme, which may be of independent interest.

MixColumns Properties and Attacks on (round-reduced) AES with a Single Secret S-Box

In this paper, we present new key-recovery attacks on AES with a single secret S-Box. Several attacks for this model have been proposed in the literature, the most recent ones at Crypto’16 and FSE’17. Both of these attacks exploit a particular property of the MixColumns matrix to
recover the secret key.
In this work, we show that the same attacks work by exploiting a weaker
property of the MixColumns matrix. As a first result, this allows us to (largely) increase the number of MixColumns matrices for which it is possible to set up all these attacks. As a second result, we present new attacks on 5-round AES with a single secret S-Box that exploit the new multiple-of-n property recently proposed at Eurocrypt’17. This property is based on the fact that, for a particular choice of plaintexts, the number of pairs of ciphertexts that lie in a particular subspace is a multiple of n.

Quantum Key-recovery Attack on Feistel Structures

Post-quantum cryptography has drawn considerable attention from cryptologists on a global scale. At Asiacrypt 2017, Leander and May combined Grover's and Simon's quantum algorithms to break the FX-based block ciphers, which were introduced by Kilian and Rogaway to strengthen DES. In this study, we investigate the Feistel constructions using Grover's and Simon's algorithms to generate new quantum key-recovery attacks on different rounds of Feistel constructions. Our attacks
require $2^{nr/4~-~3n/4}$ quantum queries to break an $r$-round Feistel construction.
The time complexity of our attacks is less than that observed for quantum brute-force search by a factor of $2^{0.75n}$. When compared with the best classical attacks, i.e., Dinur \emph{et al.}'s attacks at CRYPTO 2015, the time complexity is reduced by a factor of $2^{0.5n}$ without incurring any memory cost.

Computing isogenies between Montgomery curves using the action of (0,0)

A recent paper by Costello and Hisil at Asiacrypt'17 presents efficient formulas for computing isogenies with odd-degree cyclic kernels on Montgomery curves. We provide a constructive proof of a generalization of this theorem which shows the connection between the shape of the isogeny and the simple action of the point (0,0). This generalization removes the restriction of a cyclic kernel and allows for any separable isogeny whose kernel does not contain (0,0). As a particular case, we provide efficient formulas for 2-isogenies between Montgomery curves and show that these formulas can be used in isogeny-based cryptosystems without expensive square root computations and without knowledge of a special point of order 8. We also consider elliptic curves in triangular form containing an explicit point of order 3.

Reassessing Security of Randomizable Signatures

The Camenisch-Lysyanskaya (CL) signature is a very popular tool in cryptography, especially among privacy-preserving constructions, which benefit from its numerous features, such as randomizability.
Following the evolution of pairing-based cryptography, with the move from symmetric pairings to asymmetric pairings, Pointcheval and Sanders (PS) proposed at CT-RSA '16 an alternative scheme which improves performance while keeping the same properties.
Unfortunately, CL and PS signatures raise concerns in the cryptographic community because they both rely on interactive assumptions that essentially state their EUF-CMA security. This lack of a precise security assessment is obviously a barrier to widespread use of these signatures and a reason for preferring other constructions, such as the ones relying on $q$-type assumptions.
In this paper, we study the security of these signatures more thoroughly and prove that it actually relies, for both constructions, on simple variants of the $\textsf{SDH}$ assumption, assuming a slight modification of the original constructions. Our work thus shows that the CL and PS signature schemes offer security guarantees similar to those provided by several other constructions using bilinear groups, and that one can benefit from their interesting features without jeopardizing security.

Post-Quantum Secure Remote Password Protocol from RLWE Problem

The Secure Remote Password (SRP) protocol is an augmented Password-based Authenticated Key Exchange (PAKE) protocol based on the discrete logarithm problem (DLP) with various attractive security features. Compared with basic PAKE protocols, SRP does not require the server to store the user's password, and the user does not send the password to the server to authenticate. These features are desirable for secure client-server applications. SRP has gained extensive real-world deployment, including Apple iCloud, 1Password, etc. However, with the advent of quantum computers and Shor's algorithm, classic DLP-based public key cryptography algorithms, including SRP, are no longer secure. Motivated by the importance of SRP and the threat from quantum attacks, we propose an RLWE-based SRP protocol (RLWE-SRP) which inherits the advantages of SRP and the elegant design of RLWE key exchange. We also present a parameter choice and an efficient portable C++ implementation of RLWE-SRP. Our implementation of 209-bit secure RLWE-SRP is more than 3x faster than the 112-bit secure original SRP protocol, 5.5x faster than 80-bit secure J-PAKE, and 14x faster than two 184-bit secure RLWE-based PAKE protocols with more desired properties.

CAPA: The Spirit of Beaver against Physical Attacks

In this paper we introduce two things. First, we introduce the Tile-Probe-and-Fault model, a model generalising the wire-probe model of Ishai et al. and extending it to cover both more realistic side-channel leakage scenarios on a chip and fault and combined attacks. Second, we introduce CAPA: a combined Countermeasure Against Physical Attacks. Our countermeasure is motivated by our model and aims to provide security against higher-order SCA, multiple-shot FA and combined attacks. The tile-probe-and-fault model naturally leads one to look (by analogy) at actively secure multi-party computation protocols. Indeed, CAPA draws much inspiration from the MPC protocol SPDZ. To demonstrate that the model and the CAPA countermeasure are not just theoretical constructions, but can also serve to build practical countermeasures, we present initial experiments with proof-of-concept designs using the CAPA methodology: a hardware implementation of the KATAN and AES block ciphers, as well as a software bitsliced AES S-box implementation. We demonstrate experimentally that the design can resist second-order DPA attacks, even when the attacker is presented with many hundreds of thousands of traces. In addition, our proof-of-concept can also detect faults within our model with high probability, in accordance with the methodology.

Improved Differential Cryptanalysis on Generalized Feistel Schemes

Nachef et al. used differential cryptanalysis to study four types of Generalized Feistel Scheme (GFS). They gave a lower bound on the maximum number of rounds that is indistinguishable from a random permutation. In this paper, we study the security of several types of GFS by exploiting the asymmetric property. We show that better lower bounds can be achieved for the Type-1 GFS, Type-3 GFS and Alternating Feistel Scheme. Furthermore, we give the first general results regarding the lower bound for the Unbalanced Feistel Scheme.

Rhythmic Keccak: SCA Security and Low Latency in HW

Glitches pose a serious issue when securing a cryptographic implementation in hardware. Several masking schemes that provide security even in the presence of glitches have been proposed in the literature. The key property that allows this protection was introduced in threshold implementations as non-completeness. We address crucial points to ensure compliance with this property, especially for
low-latency implementations. Specifically, we first discuss the existence of a flaw, in violation of the non-completeness property, in the DSD 2017 Keccak implementation by Gross et al., and propose a solution. We perform a side-channel evaluation of the first-order and second-order implementations of the proposed design, where no leakage is detected with up to 55 million traces. Then, we present a method to ensure a non-complete scheme for an unrolled implementation, applicable to any order of security or algebraic degree of the shared function. Using this method we design a two-round unrolled first-order Keccak-f[200] implementation that completes an encryption in 20.61ns, the fastest implementation in the literature to date.

Efficient Implementation of Password-Based Authenticated Key Exchange from RLWE and Post-Quantum TLS

Two post-quantum password-based authenticated key exchange (PAKE) protocols were proposed at CT-RSA 2017. Following this work, we give a much more efficient and portable C++ implementation of these two protocols. We also choose more compact parameters providing 200-bit security. Compared with the original implementation, we achieve 21.5x and 18.5x speedups for RLWE-PAK and RLWE-PPK respectively. Compared with the quantum-vulnerable J-PAKE protocol, we achieve nearly 8x speedup. We also integrate RLWE-PPK into TLS to construct a post-quantum TLS ciphersuite. This allows simpler key management, mutual authentication and resistance to phishing attacks. Benchmarks show that our ciphersuite is indeed practical.

Data Is a Stream: Security of Stream-Based Channels

The common approach to defining secure channels in the literature is to consider transportation of discrete messages provided via atomic encryption and decryption interfaces. This, however, ignores that many practical protocols (including TLS, SSH, and QUIC) offer streaming interfaces instead, moreover with the complexity that the network (possibly under adversarial control) may deliver arbitrary fragments of ciphertexts to the receiver. To address this deficiency, we initiate the study of stream-based channels and their security. We present notions of confidentiality and integrity for such channels, akin to the notions for atomic channels, but taking the peculiarities of streams into account. We provide a composition result for our setting, saying that combining chosen-plaintext confidentiality with integrity of the transmitted ciphertext stream lifts confidentiality of the channel to chosen-ciphertext security. Notably, for our proof of this theorem in the streaming setting we need an additional property, called error predictability. We give an AEAD-based construction that achieves our notion of a secure stream-based channel. The construction matches rather well the one used in TLS, providing validation of that protocol's design. Finally, we study how applications that actually aim at transporting atomic messages can do so safely over a stream-based channel. We provide corresponding security notions and a generic and secure 'encode-then-stream' paradigm.
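The framing layer that lets atomic messages ride on a byte stream can be illustrated with simple length prefixes. The sketch below shows only that encoding layer of an "encode-then-stream" design (the encrypted, integrity-protected stream channel underneath is omitted, and the 4-byte prefix is our own assumption, not a construction from the paper); note how the receiver copes with arbitrary ciphertext fragments by buffering incomplete frames.

```python
import struct

def encode(messages):
    """Frame each atomic message with a 4-byte big-endian length prefix."""
    out = bytearray()
    for m in messages:
        out += struct.pack(">I", len(m)) + m
    return bytes(out)

def decode(buffered):
    """Decode all complete messages; return (messages, bytes consumed)."""
    msgs, off = [], 0
    while off + 4 <= len(buffered):
        (n,) = struct.unpack_from(">I", buffered, off)
        if off + 4 + n > len(buffered):
            break                          # incomplete frame: await more stream data
        msgs.append(bytes(buffered[off + 4: off + 4 + n]))
        off += 4 + n
    return msgs, off

wire = encode([b"hello", b"world"])
# The network may deliver arbitrary fragments; the receiver just buffers.
assert decode(wire[:7]) == ([], 0)         # first message still incomplete
assert decode(wire) == ([b"hello", b"world"], len(wire))
```

The returned consumed-offset lets the caller retain the unconsumed tail and append the next fragment to it, which is exactly the receiver-side behavior a stream-based channel must get right.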

EPIC: Efficient Private Image Classification (or: Learning from the Masters)

Outsourcing an image classification task raises privacy concerns, both from the image provider's perspective, who wishes to keep their images confidential, and from the classification algorithm provider's perspective, who wishes to protect the intellectual property of their classifier.
We propose EPIC, an efficient private image classification system based on support vector machine (SVM) learning, which is secure against malicious adversaries. The novelty of EPIC is that it builds upon transfer learning techniques known from the Machine Learning (ML) literature and minimizes the load on the privacy-preserving part.
Our solution is based on Secure Multiparty Computation (MPC); it is 34 times faster than Gazelle (USENIX 2018) --the state-of-the-art in private image classification-- and it improves the total communication cost by 50 times, while achieving 7\% higher accuracy on the CIFAR-10 dataset. When benchmarked for performance, while maintaining the same CIFAR-10 accuracy as Gazelle, EPIC is 700 times faster and the communication cost is reduced by 500 times.

Return Of Bleichenbacher's Oracle Threat (ROBOT)

Many web hosts are still vulnerable to one of the oldest attacks against RSA in TLS. We show that Bleichenbacher’s RSA vulnerability from 1998 is still very prevalent on the Internet and affects almost a third of the top 100 domains in the Alexa Top 1 Million list, among them Facebook and Paypal. We identified vulnerable products from at least eight different vendors and open source projects, among them F5, Citrix, Radware, Cisco, Erlang, Bouncy Castle, and WolfSSL. Furthermore, we demonstrated practical exploitation by signing a message with the private key of facebook.com’s HTTPS certificate. Finally, we discuss countermeasures against Bleichenbacher attacks in TLS and recommend deprecating the RSA encryption key exchange in TLS and the PKCS #1 v1.5 standard.
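The oracle exploited by Bleichenbacher-style attacks arises whenever a server observably distinguishes PKCS #1 v1.5 conformant from non-conformant decryptions. A sketch of that conformance check (per the encoding in the PKCS #1 standard; the helper name is ours):

```python
def pkcs1_v15_conforming(em: bytes) -> bool:
    """Check EM = 0x00 || 0x02 || PS (>= 8 nonzero bytes) || 0x00 || M.
    Any observable difference in how a server reacts to True vs. False
    here is the kind of oracle a Bleichenbacher attack exploits."""
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        sep = em.index(0x00, 2)   # first zero byte after the 0x02
    except ValueError:
        return False              # no separator: message block malformed
    return sep >= 10              # at least 8 non-zero padding bytes

good = bytes([0, 2]) + bytes([0xFF] * 8) + bytes([0]) + b"secret"
bad = bytes([0, 1]) + bytes([0xFF] * 8) + bytes([0]) + b"secret"
assert pkcs1_v15_conforming(good) and not pkcs1_v15_conforming(bad)
```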

Signature Schemes with a Fuzzy Private Key

In this paper, we introduce a new concept of digital signature that we call \emph{fuzzy signature}, which is a signature scheme that uses a noisy string such as biometric data as a private key, but \emph{does not require user-specific auxiliary data} (which is also called a helper string in the context of fuzzy extractors), for generating a signature.
Our technical contributions are three-fold:
(1) We first give the formal definition of fuzzy signature, together with a formal definition of a ``setting'' that specifies some necessary information for fuzzy data.
(2) We give a generic construction of a fuzzy signature scheme based on a signature scheme that has certain homomorphic properties regarding keys and satisfies a kind of related key attack security with respect to addition, and a new tool that we call \emph{linear sketch}.
(3) We specify two concrete settings for fuzzy data, and for each of the settings give a concrete instantiation of these building blocks for our generic construction, leading to two concrete fuzzy signature schemes.
We also discuss how fuzzy signature schemes can be used to realize a biometric-based PKI that uses biometric data itself as a cryptographic key, which we call the \emph{public biometric infrastructure (PBI)}.

On the Round Complexity of OT Extension

We show that any OT extension protocol based on one-way functions (or more generally any symmetric-key primitive) either requires an additional round compared to the base OTs or must make a non-black-box use of one-way functions. This result also holds in the semi-honest setting or in the case of certain setup models such as the common random string model. This implies that OT extension in any secure computation protocol must come at the price of an additional round of communication or the non-black-box use of symmetric key primitives. Moreover, we observe that our result is tight in the sense that positive results can indeed be obtained using non-black-box techniques or at the cost of one additional round of communication.

On Multiparty Garbling of Arithmetic Circuits

We initiate a study of garbled circuits that contain both Boolean and arithmetic gates in secure multiparty computation. In particular, we incorporate the garbling gadgets for arithmetic circuits recently presented by Ball, Malkin, and Rosulek (ACM CCS 2016) into the multiparty garbling paradigm initially introduced by Beaver, Micali, and Rogaway (STOC '90). This is the first work that studies arithmetic garbled circuits in the multiparty setting. Using mixed Boolean-arithmetic circuits allows more efficient secure computation of functions that naturally combine Boolean and arithmetic computations. Our garbled circuits are secure in the semi-honest model, under the same hardness assumptions as Ball et al., and can be efficiently and securely computed in constant rounds assuming an honest majority.
We first extend free addition and multiplication by a constant to the multiparty setting.
We then extend efficient garbled multiplication gates to the multiparty setting. The garbled multiplication gate construction we show was previously achieved only in the two-party setting and assuming a random oracle.
We further present a new garbling technique, and show how this technique can improve efficiency in garbling selector gates. Selector gates compute a simple ``if statement" in the arithmetic setting: the gate selects the output value from two input integer values, according to a Boolean selector bit; if the bit is $0$ the output equals the first value, and if the bit is $1$ the output equals the second value. Using our new technique, we show a new, dedicated garbled selector gate that reduces evaluation time by approximately $33\%$, for any number of parties, compared with the best previously known constructions that use existing techniques and are secure under the same hardness assumptions.
On the downside, we find that testing equality and computing exponentiation by a constant are significantly more complex to garble in the multiparty setting than in the two-party setting.
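The plain (ungarbled) arithmetic a selector gate computes can be written as a single identity; the Python sketch below, with hypothetical names, only illustrates the function being garbled, not the garbling itself.

```python
def select(bit: int, x0: int, x1: int) -> int:
    """Plain arithmetic of a selector gate: x0 if bit == 0, x1 if bit == 1.
    One multiplication and two additions -- this is the function that gets
    garbled, shown here in the clear."""
    assert bit in (0, 1)
    return x0 + bit * (x1 - x0)

print(select(0, 7, 42))  # -> 7
print(select(1, 7, 42))  # -> 42
```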

Complete Attack on RLWE Key Exchange with reused keys, without Signal Leakage

Key Exchange (KE) from RLWE (Ring Learning with Errors) is a potential alternative to Diffie-Hellman (DH) in a post-quantum setting. Key leakage from RLWE key exchange protocols in the context of key reuse has already been pointed out in previous work. The initial attack described by Fluhrer is designed in such a way that it only works on Peikert's KE protocol and its variants that derive the shared secret from the most significant bits of the approximately equal keys computed by both parties. It does not work on Ding's key exchange, which uses the least significant bits to derive a shared key. The signal leakage attack relies on changes in the signal sent by a responder who reuses his key, over a sequence of key exchange sessions initiated by an attacker with a malformed key. A possible defense against this attack would be to require the initiator of a key exchange to send the signal, which corresponds to the one-pass case of the KE protocol.
In this work, we describe a new attack on Ding's one-pass case that does not rely on the signal function output, using only the information of whether the final keys of both parties agree. We also use LLL reduction to craft the adversary's keys in such a way that the compromised party cannot identify the attack in trivial ways. This completes the series of attacks on RLWE key exchange with key reuse for all variants, whether the initiator or the responder sends the signal. Moreover, we show that the previous signal leakage attack can be made more efficient with fewer queries, and how it can be extended to Peikert's key exchange, which was used in the BCNS implementation and integrated with TLS, and to a variant used in the New Hope implementation.

EFLASH: A New Multivariate Encryption Scheme

Multivariate Public Key Cryptography is a leading option for security in a post quantum society. In this paper we propose a new encryption scheme, EFLASH, and analyze its efficiency and security.

Round2: KEM and PKE based on GLWR

Cryptographic primitives that are secure against quantum computing are receiving growing attention with recent, steady advances in quantum computing and standardization initiatives in post-quantum cryptography by NIST and ETSI. Lattice-based cryptography is one of the families in post-quantum cryptography, demonstrating desirable features such as well-understood security, efficient performance, and versatility.
In this work, we present Round2, which consists of a key-encapsulation mechanism and a public-key encryption scheme. Round2 is based on the General Learning with Rounding problem, which unifies the Learning with Rounding and Ring Learning with Rounding problems. Round2's construction on this problem allows for a unified description and implementation. The key-encapsulation mechanism and public-key encryption scheme furthermore share common building blocks, simplifying (security and operational) analysis and code review. Round2's reliance on prime cyclotomic rings offers a large design space that allows fine-tuning of parameters to required security levels. The use of rounding reduces bandwidth requirements, and the use of sparse-trinary secrets improves CPU performance and decryption success rates. Finally, Round2 includes various approaches to refreshing the system public parameter A, allowing efficient ways of preventing precomputation and back-door attacks.

Distributed Algorithms Made Secure: A Graph Theoretic Approach

In the area of distributed graph algorithms, a number of network entities with local views solve some computational task by exchanging messages with their neighbors. Unfortunately, an inherent property of most existing distributed algorithms is that throughout the course of their execution, the nodes learn not only their own output but also quite a lot about the inputs or outputs of many other entities. This leakage of information might be a major obstacle in settings where an individual's output (or input) is private information (e.g., distributed networks of selfish agents, or decentralized digital currencies such as Bitcoin).
While quite unfamiliar in the classical distributed setting, the notion of secure multi-party computation (MPC) is one of the main themes of the cryptographic community. Existing secure MPC protocols do not quite fit the framework of classical distributed models, in which only messages of bounded size are sent over graph edges in each round. In this paper, we introduce a new framework for \emph{secure distributed graph algorithms} and provide the first \emph{general compiler} that takes any "natural" non-secure distributed algorithm running in $r$ rounds and turns it into a secure algorithm running in $\widetilde{O}(r \cdot D \cdot poly(\Delta))$ rounds, where $\Delta$ is the maximum degree in the graph and $D$ is its diameter. A "natural" distributed algorithm is one where the local computation at each node can be performed in polynomial time. An interesting advantage of our approach is that it allows one to decouple the price of locality from the price of \emph{security} of a given graph function $f$. The security of the compiled algorithm is information-theoretic, but holds only against a semi-honest adversary that controls a single node in the network.
This compiler is made possible by a new combinatorial structure called \emph{private neighborhood trees}: a collection of $n$ trees $T(u_1),\ldots, T(u_n)$, one for each vertex $u_i \in V(G)$, such that each tree $T(u_i)$ spans the neighbors of $u_i$ {\em without going through $u_i$}. Intuitively, each tree $T(u_i)$ allows all neighbors of $u_i$ to exchange a \emph{secret} that is hidden from $u_i$; this is the basic graph infrastructure of the compiler. In a $(d,c)$-private neighborhood tree collection, each tree $T(u_i)$ has depth at most $d$ and each edge $e \in G$ appears in at most $c$ different trees. We show a construction of private neighborhood trees with $d=\widetilde{O}(\Delta \cdot D)$ and $c=\widetilde{O}(D)$; both bounds are \emph{existentially} optimal.
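The defining property of a private neighborhood tree — a tree inside $G$ that spans the neighbors of $u$ without passing through $u$ — can be illustrated with a plain BFS in $G - u$. The Python sketch below (function name and toy graph invented for illustration) only checks the property; it does not achieve the paper's depth and congestion bounds.

```python
from collections import deque

def neighborhood_tree(adj, u):
    """BFS tree in G - u spanning all neighbors of u, or None if some
    neighbor becomes unreachable once u is removed."""
    nbrs = set(adj[u])
    root = next(iter(nbrs))
    parent, seen = {root: None}, {root}
    q = deque([root])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w != u and w not in seen:
                seen.add(w)
                parent[w] = v
                q.append(w)
    if not nbrs <= seen:
        return None
    return parent          # child -> parent edges, avoiding u entirely

# 5-cycle 0-1-2-3-4-0: the neighbors {1, 4} of 0 are joined by the path 1-2-3-4.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
tree = neighborhood_tree(cycle, 0)
assert tree is not None and 0 not in tree and {1, 4} <= set(tree)
```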

Implementing Joux-Vitse's Crossbred Algorithm for Solving MQ Systems over GF(2) on GPUs

The hardness of solving multivariate quadratic (MQ) systems is the underlying problem for multivariate-based schemes in the field of post-quantum cryptography. The concrete, practical hardness of this problem needs to be measured with state-of-the-art algorithms and high-performance implementations. We describe, implement, and evaluate an adaptation of the Crossbred algorithm of Joux and Vitse from 2017 for solving MQ systems over GF(2). Our adapted algorithm is highly parallelizable and is suitable for solving MQ systems on GPU architectures. Our implementation is able to solve an MQ system of 134 equations in 67 variables in 98.39 hours using a single commercial Nvidia GTX 980 graphics card, while the original Joux-Vitse algorithm requires 6200 CPU-hours for the same problem size. We used our implementation to solve all the Fukuoka Type-I MQ challenges for n = 55, ..., 74. Based on our implementation, we estimate that the expected computation time for solving an MQ system of 80 equations in 84 variables is about one year using a cluster of 3600 GTX 980 graphics cards. These parameters have been proposed for 80-bit security by, e.g., Sakumoto, Shirai, and Hiwatari at Crypto 2011.
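For contrast with the Crossbred approach, the baseline it improves upon is plain exhaustive search over GF(2)^n. A minimal Python sketch, with an invented toy system:

```python
from itertools import product

def eval_mq_gf2(system, x):
    """Evaluate an MQ system over GF(2).  Each equation is a triple
    (quadratic_terms, linear_terms, constant): quadratic_terms lists
    index pairs (i, j) meaning x_i * x_j."""
    values = []
    for quad, lin, const in system:
        v = const
        for i, j in quad:
            v ^= x[i] & x[j]
        for i in lin:
            v ^= x[i]
        values.append(v)
    return values

def brute_force(system, n):
    """Exhaustive search over GF(2)^n -- the baseline Crossbred improves on."""
    for x in product((0, 1), repeat=n):
        if not any(eval_mq_gf2(system, x)):
            return x
    return None

# Toy system: x0*x1 + x2 = 0,  x0 + x1 + 1 = 0,  x1*x2 + x0 = 0
system = [([(0, 1)], [2], 0), ([], [0, 1], 1), ([(1, 2)], [0], 0)]
print(brute_force(system, 3))  # -> (0, 1, 0)
```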

FPGA-based Niederreiter Cryptosystem using Binary Goppa Codes

This paper presents an FPGA implementation of the Niederreiter cryptosystem using binary Goppa codes, including modules for encryption, decryption, and key generation. We improve over previous implementations in terms of efficiency (time-area product and raw performance) and security level. Our implementation is constant time in order to protect against timing side-channel analysis. The design is fully parameterized, using code-generation scripts, in order to support a wide range of parameter choices for security, including binary field size, the degree of the Goppa polynomial, and the code length. The parameterized design allows us to choose design parameters for time-area trade-offs in order to support a wide variety of applications ranging from smart cards to server accelerators. For parameters that are considered to provide "128-bit post-quantum security", our time-optimized implementation requires 966,400 cycles for the generation of both public and private portions of a key and 14,291 cycles to decrypt a ciphertext. The time-optimized design uses only 121,806 ALMs (52% of the available logic) and 961 RAM blocks (38% of the available memory), and results in a design that runs at about 250MHz on a medium-size Stratix V FPGA.

On the exponents of APN power functions and Sidon sets, sum-free sets, and Dickson polynomials

We derive necessary conditions, related to the notions in additive combinatorics of Sidon sets and sum-free sets, on those exponents $d\in {\mathbb Z}/(2^n-1){\mathbb Z}$ for which $F(x)=x^d$ is an APN function over ${\mathbb F}_{2^n}$ (an important cryptographic property). We study to what extent these new conditions may speed up the search for new APN exponents $d$.
We also show a new connection between APN exponents and Dickson polynomials: $F(x)=x^d$ is APN if and only if the reciprocal polynomial of the Dickson polynomial of index $d$ is an injective function from $\{y\in {\Bbb F}_{2^n}^*; tr_n(y)=0\}$ to ${\Bbb F}_{2^n}\setminus \{1\}$. This also leads to a new and simple connection between Reversed Dickson polynomials and reciprocals of Dickson polynomials in characteristic 2 (which generalizes to every characteristic thanks to a small modification): the squared Reversed Dickson polynomial of some index and the reciprocal of the Dickson polynomial of the same index are equal.
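Recall that $F$ is APN when every derivative $x \mapsto F(x+a)+F(x)$, $a \neq 0$, is 2-to-1. A brute-force Python check on a toy field (assuming the irreducible polynomial $x^3+x+1$ for ${\mathbb F}_{2^3}$; the Gold exponent $d=3$ is a classical APN exponent, while $d=1$ is linear and hence not APN):

```python
from collections import Counter

def gf_mul(a, b, n=3, mod=0b1011):     # multiply in GF(2^3), poly x^3 + x + 1
    r = 0
    for _ in range(n):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):               # reduce when degree reaches n
            a ^= mod
    return r

def gf_pow(a, d):
    r = 1
    for _ in range(d):
        r = gf_mul(r, a)
    return r

def is_apn(d, n=3):
    """x^d is APN iff every derivative x -> F(x+a) + F(x), a != 0, is 2-to-1."""
    F = [gf_pow(x, d) for x in range(1 << n)]
    for a in range(1, 1 << n):
        counts = Counter(F[x ^ a] ^ F[x] for x in range(1 << n))
        if any(c != 2 for c in counts.values()):
            return False
    return True

print(is_apn(3))  # True: the Gold exponent 2^1 + 1 is APN over GF(2^3)
print(is_apn(1))  # False: x^1 is linear, each derivative is constant
```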

Comparison analysis and efficient implementation of reconciliation-based RLWE key exchange protocol

Error reconciliation is an important technique for Learning With Errors (LWE) and Ring-LWE (RLWE)-based constructions. In this paper, we present a comparative analysis of two error-reconciliation-based RLWE key exchange protocols: Ding et al. from 2012 (DING12) and Bos et al. from 2015 (BCNS15). We take them as examples to explain the core idea of error reconciliation and of building key exchange on the RLWE problem, discuss implementation and real-world performance, and compare them comprehensively. We also analyse "Frodo", an LWE key exchange that uses an improved version of the error reconciliation mechanism of BCNS15. To the best of our knowledge, our work is the first to present at least 128-bit classical (80-bit quantum) and 256-bit classical (>200-bit quantum) secure parameter choices for DING12, together with efficient portable C/C++ implementations. Benchmarks show that our implementation is 11x faster than BCNS15, and one key exchange execution costs only 0.07 ms on a 4-year-old mid-range CPU. Our error reconciliation is 1.57x faster than that of BCNS15.
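To illustrate the core idea of error reconciliation, here is a toy Peikert-style cross-rounding sketch in Python: Alice publishes a one-bit hint saying which quarter of Z_q her value lies in (mod 2), and Bob recovers her key bit whenever the two values differ by less than q/8. The decode is done by brute force for clarity; this is an illustration only, not the exact DING12 or BCNS15 mechanism.

```python
def msb(x, q):      # key bit: which half of Z_q the value lies in
    return (2 * x) // q

def cross(x, q):    # public hint: which quarter of Z_q, reduced mod 2
    return (4 * x) // q % 2

def mod_dist(a, b, q):
    return min((a - b) % q, (b - a) % q)

def reconcile(w, hint, q):
    """Recover Alice's key bit from Bob's close value w and her hint.
    Brute force over Z_q for clarity; real schemes do this in O(1)."""
    nearest = min((x for x in range(q) if cross(x, q) == hint),
                  key=lambda x: mod_dist(w, x, q))
    return msb(nearest, q)

q = 1024
for v in list(range(0, q, 97)) + [255, 256, 511, 512, 1023]:  # Alice's value
    for e in range(-q // 8 + 1, q // 8, 29):                  # |error| < q/8
        w = (v + e) % q                  # Bob's approximately equal value
        assert reconcile(w, cross(v, q), q) == msb(v, q)
print("all key bits reconciled")
```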

Reusable Authentication from the Iris

Biometrics exhibit noise between repeated readings. Due to the noise, devices store a plaintext template of the biometric. This stored template is an appetizing target for an attacker. Due to this risk, the primary use case for biometrics is mobile device authentication (templates are stored within the mobile device’s secure processor). There has been little adoption in client-server applications.
Fuzzy extractors derive a stable cryptographic key from biometrics (Dodis et al., Eurocrypt 2004). In this work we describe an iris key derivation system with 32 bits of security even when multiple keys are derived from the same iris.
We are fully aware that 32 bits of security is insufficient for a secure system. The goal of this work is to inspire researchers to design multi-factor authentication systems that use our scheme as one component. Our system is based on repeated hashing, which simplifies incorporating multiple factors (such as a password).
Our starting point is a recent fuzzy extractor due to Canetti et al. (Eurocrypt 2016). Achieving satisfactory parameters requires modifying and coupling the image processing and cryptographic algorithms. Our scheme is implemented in C and Python and is open-sourced. On a moderately powerful server, authentication usually completes within 0.30 s.
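The "repeated hashing" idea (the sample-then-lock construction of Canetti et al.) can be sketched as follows. All parameters and helper names here are illustrative toys, far from a secure instantiation: the key is locked under many random subsets of biometric bits, and any subset that misses the noisy bits opens correctly.

```python
import hashlib, random, secrets

rng = random.Random(1)         # fixed seed so the sketch is reproducible

def subset_bits(bio, idxs):
    return bytes(bio[i] for i in idxs)

def gen(bio, key, n_lockers=50, subset_size=8):
    """Lock a 16-byte key under many random subsets of the biometric bits."""
    lockers = []
    for _ in range(n_lockers):
        idxs = rng.sample(range(len(bio)), subset_size)
        pad = hashlib.sha256(b"lock" + subset_bits(bio, idxs)).digest()
        ct = bytes(a ^ b for a, b in zip(pad, key + b"\x00" * 16))
        lockers.append((idxs, ct))         # H(subset) XOR (key || zero check)
    return lockers

def rep(bio, lockers):
    """Reproduce the key from a noisy reading: any locker whose subset missed
    the flipped bits opens correctly (its 16 check bytes come out zero)."""
    for idxs, ct in lockers:
        pad = hashlib.sha256(b"lock" + subset_bits(bio, idxs)).digest()
        pt = bytes(a ^ b for a, b in zip(pad, ct))
        if pt[16:] == b"\x00" * 16:
            return pt[:16]
    return None

bio = bytes(rng.getrandbits(1) for _ in range(64))   # toy 64-bit "iris"
key = secrets.token_bytes(16)
lockers = gen(bio, key)

noisy = bytearray(bio)
noisy[0] ^= 1                                        # one-bit reading error
assert rep(bytes(noisy), lockers) == key
```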

Cyclic Locking and Memristor-based Obfuscation Against CycSAT and Inside Foundry Attacks

The high cost of IC design has made chip protection one of the first priorities of the semiconductor industry. Although there is a common impression that combinational circuits must be designed without any cycles, circuits with cycles can be combinational as well. Such cyclic circuits can be used to reliably lock ICs. Moreover, since memristors are compatible with the CMOS structure, it is possible to efficiently obfuscate cyclic circuits using polymorphic memristor-CMOS gates. In this case, the layouts of circuits with different functionalities look exactly identical, making it impossible even for an inside-foundry attacker to determine the defined functionality of an IC by looking at its layout. In this paper, we propose a comprehensive chip protection method based on cyclic locking and polymorphic memristor-CMOS obfuscation. The robustness against state-of-the-art key-pruning attacks is demonstrated, and the overhead of the polymorphic gates is investigated.

Short Solutions to Nonlinear Systems of Equations

This paper presents a new hard problem for use in cryptography, called Short Solutions to Nonlinear Equations (SSNE). This problem generalizes the Multivariate Quadratic (MQ) problem by requiring that the solution be short, and the Short Integer Solutions (SIS) problem by requiring that the underlying system of equations be nonlinear. The joint requirement causes common solving strategies such as lattice reduction or Gröbner basis algorithms to fail, and as a result SSNE admits shorter representations of equally hard problems. We show that SSNE can be used as the basis for a provably secure hash function. Despite failing to find public-key cryptosystems relying on SSNE, we remain hopeful about that possibility.

Efficient Optimal Ate Pairing at 128-bit Security Level

Following the emergence of Kim and Barbulescu's extended tower number field sieve (exTNFS) algorithm at CRYPTO'16 [21] for solving the discrete logarithm problem (DLP) over finite fields, pairing-based cryptography researchers are seeking new parameters that maintain standard security levels against exTNFS. Recently, Barbulescu and Duquesne suggested new parameters [3] for well-studied pairing-friendly curves, i.e., Barreto-Naehrig (BN) [5], Barreto-Lynn-Scott (BLS-12) [4] and Kachisa-Schaefer-Scott (KSS-16) [19] curves, at the 128-bit security level (twist- and subgroup-attack secure). They also concluded that, in the context of Optimal-Ate pairing with their suggested parameters, BLS-12 and KSS-16 curves are more efficient choices than BN curves.
This paper therefore focuses on the less-studied pairing-friendly curve, KSS-16, which offers a quartic twist, while BN and BLS-12 curves have sextic twists. We optimize Miller's algorithm of the Optimal-Ate pairing for the KSS-16 curve by deriving an efficient sparse multiplication, and we implement it. Furthermore, we concentrate on Miller's algorithm to experimentally verify Barbulescu et al.'s estimation. The results show that Miller's algorithm with the derived pseudo-8-sparse multiplication runs faster for KSS-16 than for the other two curves. This paper therefore supports Barbulescu and Duquesne's conclusion for the 128-bit security level.

Fully Verifiable Secure Delegation of Pairing Computation: Cryptanalysis and An Efficient Construction

We address the problem of secure and verifiable delegation of general pairing computation. We first analyze some recently proposed pairing delegation schemes and present several attacks on their security and/or verifiability properties. In particular, we show that none of these achieve the claimed security and verifiability properties simultaneously. We then provide a fully verifiable secure delegation scheme ${\sf VerPair}$ under one-malicious version of a two-untrusted-program model (OMTUP). ${\sf VerPair}$ not only significantly improves the efficiency of all the previous schemes, such as fully verifiable schemes of Chevallier-Mames et al. and Canard et al. by eliminating the impractical exponentiation- and scalar-multiplication-consuming steps, but also offers for the first time the desired full verifiability property unlike other practical schemes. Furthermore, we give a more efficient and less memory consuming invocation of the subroutine ${\sf Rand}$ for ${\sf VerPair}$ by eliminating the requirement of offline computations of modular exponentiations and scalar-multiplications. In particular, ${\sf Rand}$ includes a fully verifiable partial delegation under the OMTUP assumption. The partial delegation of ${\sf Rand}$ distinguishes ${\sf VerPair}$ as a useful lightweight delegation scheme when the delegator is resource-constrained (e.g. RFID tags, smart cards or sensor nodes).

A Note on Stream Ciphers that Continuously Use the IV

Time-memory-data tradeoff (TMD-TO) attacks limit the security level of many classical stream ciphers (like $E_0$, A5/1, Trivium, Grain) to $n/2$, where $n$ denotes the inner state length of the underlying keystream generator. This implies that to withstand TMD tradeoff attacks, the state size should be at least double the key size. In 2015, Armknecht and Mikhalev introduced a new line of research, which pursues the goal of reducing the inner state size of lightweight stream ciphers below this boundary by deploying a key-dependent state update function in a Grain-like stream cipher. Although their design Sprout was broken soon after publication, it has raised interest in the design principle, and a number of related ciphers have been suggested since, including Plantlet, a follow-up of Sprout, and the cipher Fruit.
In 2017, Hamann et al. showed that the initial hope of achieving full security against TMD-TO attacks by continuously using the secret key has failed. In particular, they demonstrated that there are generic distinguishing attacks against such ciphers with a complexity significantly smaller than that of exhaustive key search. However, by studying the assumptions underlying the applicability of these attacks, they came up with a new design idea for small-state stream ciphers, which is based on also continuously using the public IV as part of the state update. The authors conjectured that this design principle might finally allow achieving full security against TMD-TO attacks.
In this note, we take their idea one step further. While Hamann et al. aimed at improving the resistance of small-state stream ciphers that continuously use the secret key against distinguishing attacks, we explain here that other stream cipher constructions can also benefit from continuously using the IV. In particular, our approach allows for thwarting the well-known TMD-TO inner-state recovery attacks of Babbage and of Biryukov and Shamir without using the secret key more than once.
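For intuition, the Babbage-style TMD-TO state recovery that such designs must resist can be demonstrated on a toy 16-bit filtered generator (generator, filter mask, and parameters invented for illustration): precompute output prefixes for M random states, then scan D observed keystream bits for a match, so success needs only M * D on the order of 2^n.

```python
import random

def parity(x):
    return bin(x).count("1") & 1

def step(s):                       # maximal-length 16-bit LFSR (taps 0,2,3,5)
    return (s >> 1) | (parity(s & 0x002D) << 15)

def out(s):                        # toy filter on the state
    return parity(s & 0x9A31)

def keystream(s, n):
    bits = []
    for _ in range(n):
        bits.append(out(s))
        s = step(s)
    return bits

P, M, D = 24, 2048, 8192           # prefix length, memory, observed data
rng = random.Random(7)

table = {}                         # precomputation: output prefix -> state
for _ in range(M):
    s = rng.randrange(1, 1 << 16)
    table[tuple(keystream(s, P))] = s

secret = rng.randrange(1, 1 << 16)
ks = keystream(secret, D + P)      # keystream observed by the attacker

recovered = None
for i in range(D):                 # online phase: slide a window, look up
    cand = table.get(tuple(ks[i:i + P]))
    if cand is not None and keystream(cand, D + P - i) == ks[i:]:
        recovered = (i, cand)
        break
assert recovered is not None       # expected, since M * D >> 2^16
print("inner state at time", recovered[0], "recovered:", hex(recovered[1]))
```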

Attacks on the AJPS Mersenne-based cryptosystem

Aggarwal, Joux, Prakash and Santha recently introduced a new potentially quantum-safe public-key cryptosystem, and suggested that a brute-force attack is essentially optimal against it. They consider but then dismiss both Meet-in-the-Middle attacks and LLL-based attacks. Very soon after their paper appeared, Beunardeau et al.\ proposed a practical LLL-based technique that seemed to significantly reduce the security of the AJPS system.
In this paper we do two things. First, we show that a Meet-in-the-Middle attack can also be made to work against the AJPS system, using locality-sensitive hashing to overcome the difficulty that Aggarwal et al.\ saw for such attacks. We also present a quantum version of this attack. Second, we give a more precise analysis of the attack of Beunardeau et al., confirming and refining their results.

SAT-based Bit-flipping Attack on Logic Encryptions

Logic encryption is a hardware security technique that uses extra key inputs to prevent unauthorized use of a circuit. With the discovery of the SAT-based attack, new encryption techniques such as SARLock and Anti-SAT were proposed, and further combined with traditional logic encryption techniques, to guarantee both high error rates and resilience to the SAT-based attack. In this paper, the SAT-based bit-flipping attack is presented. It first separates the two groups of keys via SAT-based bit flipping, and then attacks the traditional encryption and the SAT-resilient encryption using the conventional SAT-based attack and a bypassing attack, respectively. The experimental results show that the bit-flipping attack successfully returns a circuit with the correct functionality and significantly reduces the execution time compared with other advanced attacks.

There Goes Your PIN: Exploiting Smartphone Sensor Fusion Under Single and Cross User Setting

A range of zero-permission sensors are found in modern smartphones to enhance user experience. These sensors can lead to unintentional leakage of private user data. In this paper, we combine leakage from a pool of zero-permission sensors to reconstruct a user's secret PIN. By harnessing the power of machine learning algorithms, we show a practical attack on the full four-digit PIN space. Classifying all 10,000 PIN combinations, we achieve up to 83.7% success within 20 tries in a single-user setting. The most recent prior work demonstrated 74% success on a reduced space of 50 chosen PINs, where we report 99.5% success with a single try in a similar setting. Moreover, we extend the PIN recovery attack from a single-user to a cross-user scenario. First, we show that training on several users boosts PIN recovery success when the target user is part of the training pool. On the other hand, PIN recovery is still possible when the training pool excludes the target user, albeit with a lower success rate.

Itsuku: a Memory-Hardened Proof-of-Work Scheme

Proof-of-Work (PoW) schemes are used to limit access to resources or to share rewards in cryptocurrency mining. The MTP-Argon2 PoW by Biryukov and Khovratovich is loosely based on the Argon2 memory-hard password hashing function. Several attacks on it have been published. We introduce a new transposed parallel implementation attack which achieves higher performance by circumventing apparent bandwidth requirements. We then present Itsuku, a new scheme that fixes known issues by changing the MTP-Argon2 parameters and adding new operations to improve memory hardness. Our scheme is built on a simple security criterion: any implementation which requires half the memory or less should incur at least a 64-fold computation cost for difficulty d <= 100. The Itsuku proof size is typically 1/16th of that of the initial scheme, while providing better memory hardness. We also describe high-end hardware designs for MTP-Argon2 and Itsuku.
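For readers unfamiliar with PoW, the basic hash-based scheme that MTP-Argon2 and Itsuku build their memory-hard proofs on top of can be sketched as follows (a generic illustration, not the MTP construction; names invented):

```python
import hashlib
from itertools import count

def pow_solve(challenge: bytes, d: int) -> int:
    """Find a nonce whose hash has d leading zero bits.  MTP-Argon2 and
    Itsuku replace the plain hash with memory-hard computations."""
    for nonce in count():
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >> (256 - d) == 0:
            return nonce

def pow_verify(challenge: bytes, d: int, nonce: int) -> bool:
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - d) == 0

nonce = pow_solve(b"block-header", 12)   # ~2^12 hashing attempts expected
assert pow_verify(b"block-header", 12, nonce)
```

Verification is a single hash, so the asymmetry between proving and verifying is what makes the scheme usable as an access limiter.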

Cryptocurrency Voting Games

This work shows that weighted majority voting games occur in cryptocurrencies. In particular, two such games are highlighted. The first game, which we call the Rule Game, pertains to the scenario where the entities in the system engage in a voting procedure to accept or reject a change of rules. The second game, which we call the Attack Game, refers to the scenario where a group of entities in a cryptocurrency system can form a coalition to engage in double spending. For the Rule Game, we argue that Coleman's preventive power measure is the appropriate tool for measuring a player's influence in the game, while for the Attack Game, we define a notion of stability based on minimal winning coalitions. For both the Rule Game and the Attack Game, we show how to analyse the games based on a snapshot of real-world data for Bitcoin, presently the most popular of all cryptocurrencies.
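Coleman's preventive power — for each player, the fraction of winning coalitions that would lose without that player — can be computed directly for small games. A Python sketch on an invented toy stake distribution:

```python
from itertools import combinations

def coleman_preventive(weights, quota):
    """Coleman's power to prevent action: for each player, the fraction of
    winning coalitions that would lose without that player.  Exhaustive
    enumeration -- fine for small games."""
    n = len(weights)
    winning = [c for r in range(1, n + 1)
               for c in combinations(range(n), r)
               if sum(weights[i] for i in c) >= quota]
    return [sum(1 for c in winning
                if i in c and sum(weights[j] for j in c) - weights[i] < quota)
            / len(winning)
            for i in range(n)]

# Toy stake distribution; quota is a strict majority of the total weight.
powers = coleman_preventive([50, 49, 1], quota=51)
print(powers)  # -> [1.0, 0.3333333333333333, 0.3333333333333333]
```

Note that the players with weights 49 and 1 have equal preventive power despite very different stakes, which is exactly the kind of effect such measures expose.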

SCADPA: Side-Channel Assisted Differential-Plaintext Attack on Bit Permutation Based Ciphers

Bit permutations are a common choice for the diffusion function in lightweight block ciphers, owing to their low implementation footprint. In this paper, we present a novel Side-Channel Assisted Differential-Plaintext Attack (SCADPA) exploiting specific vulnerabilities of bit permutations. SCADPA is a chosen-plaintext attack; knowledge of the ciphertext is not required. Unlike the statistical methods commonly used as distinguishers in standard power analysis, the proposed method is differential in nature. The attack shows that the diffusion layer can play a significant role in distinguishing the internal cipher state. We demonstrate how to practically exploit this vulnerability to extract the secret key. Results on a microcontroller-based implementation of the PRESENT-80 cipher lead to full key retrieval using as few as 17 encryptions.
The attack can be automated using a thresholding method detailed in the paper. Several case studies are presented, using various attacker models and targeting different encryption modes (such as CTR and CBC). We provide a discussion on how to avoid such attacks from the design point of view.

Fast and Universally-Composable Oblivious Transfer and Commitment Scheme with Adaptive Security

Adaptive security embodies one of the strongest notions of security, allowing an adversary to corrupt parties at any point during protocol execution and gain access to their internal state. Since it models real-life situations such as ``hacking", efficient adaptively-secure multiparty computation (MPC) protocols are desirable. Such protocols demand primitives, such as oblivious transfer (OT) and commitment schemes, that are adaptively secure as building blocks. Efficient realizations of these primitives have been found to be challenging, especially in the no-erasure model. We make progress in this direction and provide efficient constructions that are Universally Composable in the random oracle model.
Oblivious Transfer: We present the first round-optimal framework for building adaptively-secure OT in the programmable random oracle (PRO) model, relying upon the framework of Peikert et al. (Crypto 2008). When instantiated with the Decisional Diffie-Hellman assumption, it incurs a minimal communication overhead of one k-bit string and a computational overhead of 5 random oracle queries over its static counterpart, where k is the security parameter. This computational overhead translates to 0.02% and 1% in the LAN and WAN settings, respectively. Additionally, we obtain a construction of adaptively-secure 1-out-of-N OT by extending the result of Naor et al. (Journal of Cryptology 2005) that transforms log N copies of 1-out-of-2 OTs into one 1-out-of-N OT in the PRO model. We complete the picture of efficient OT constructions by presenting the first adaptively-secure OT extension, extending the protocol of Asharov et al. (Eurocrypt 2015) to the adaptive setting using a PRO. Our OT extension enables us to obtain adaptive OTs at an amortized cost of 3 symmetric-key operations and communication of 3 k-bit strings. It incurs a runtime overhead of 2% and 11.95% in the LAN and WAN settings, respectively, and almost no communication overhead over the static OT extension protocol. In concrete terms, the cost is 2 microseconds per OT in the LAN setting and 115 microseconds in the WAN setting.
Commitment Scheme: We present an adaptively-secure commitment scheme in the Global Random Oracle model, relying solely on an observable random oracle (ORO). Our commitment scheme has a one-time offline setup phase, where a common reference string (crs) is generated between the parties using the ORO. In the online phase, the parties use the crs and the ORO to generate commitments in a non-interactive fashion. Our construction incurs communication of 4 k-bit strings and computation of 4 exponentiations and 4 random oracle queries for committing to an arbitrary-length message. Empirically, committing takes around 0.18 ms for 128-bit and 0.2 ms for 2048-bit messages. It finds applications in secure two-party computation (2PC) protocols that adopt the offline-online paradigm, where the crs can be generated in the offline phase and the scheme can be used in the online phase.
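A drastically simplified random-oracle commitment (a plain hash standing in for the oracle, illustrative names, and none of the paper's UC/ORO machinery) looks as follows:

```python
import hashlib
import secrets

def commit(crs: bytes, msg: bytes):
    """Commit = H(crs || r || msg) with fresh 32-byte randomness r."""
    r = secrets.token_bytes(32)
    return hashlib.sha256(crs + r + msg).digest(), r

def verify(crs: bytes, c: bytes, msg: bytes, r: bytes) -> bool:
    return hashlib.sha256(crs + r + msg).digest() == c

crs = b"common-reference-string"    # would come from the offline setup phase
c, r = commit(crs, b"hello")        # c is sent; (msg, r) revealed to open
assert verify(crs, c, b"hello", r)
assert not verify(crs, c, b"goodbye", r)
```

Hiding comes from the random r inside the hash, binding from collision resistance; the paper's construction additionally achieves adaptive security and composability, which this toy does not.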

Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications

We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing. In particular, the framework performs linear operations in the ring $\mathbb{Z}_{2^l}$ using additively secret shared values and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret sharing models where three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in an offline phase which substantially reduces the communication overhead.
Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) on which it is based. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classifying encrypted data with algorithms based upon heavy matrix multiplications. Our evaluation of Chameleon on a 5-layer convolutional deep neural network shows 110x and 3.5x faster execution than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.
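The secret-sharing half of such a hybrid framework can be sketched as follows: two parties hold additive shares modulo 2^l, linear steps are local, and multiplications consume Beaver triples produced by the third node in an offline phase. This is an illustrative toy (names and parameters invented), not Chameleon's actual protocol:

```python
import random

L = 32
MOD = 1 << L
rng = random.Random(0)

def share(x):                       # additive two-party sharing mod 2^L
    a = rng.randrange(MOD)
    return a, (x - a) % MOD

def reveal(s0, s1):
    return (s0 + s1) % MOD

def beaver_triple():
    """Offline correlated randomness from the third node: shares of a, b
    and of c = a * b."""
    a, b = rng.randrange(MOD), rng.randrange(MOD)
    return share(a), share(b), share(a * b % MOD)

def mul(x_sh, y_sh):
    """Online multiplication of shared x, y using one Beaver triple."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    d = reveal((x_sh[0] - a0) % MOD, (x_sh[1] - a1) % MOD)   # opens x - a
    e = reveal((y_sh[0] - b0) % MOD, (y_sh[1] - b1) % MOD)   # opens y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % MOD                # party 0 adds d*e
    z1 = (c1 + d * b1 + e * a1) % MOD
    return z0, z1

def dot(xs, ys):
    """Dot product: multiplications via triples; additions are purely local."""
    z0 = z1 = 0
    for x, y in zip(xs, ys):
        m0, m1 = mul(share(x), share(y))
        z0, z1 = (z0 + m0) % MOD, (z1 + m1) % MOD
    return reveal(z0, z1)

assert dot([3, 5, 7], [2, 4, 6]) == 68   # 3*2 + 5*4 + 7*6
```

The opened values d = x - a and e = y - b are uniformly random, so the online phase leaks nothing about x and y; this is what lets the heavy work move offline.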

MILP-aided Cryptanalysis of Round Reduced ChaCha

The inclusion of ChaCha20 and Poly1305 into the list of supported ciphers in TLS 1.3 necessitates a security evaluation of those ciphers with all the state-of-the-art tools and innovative cryptanalysis methodologies. Mixed Integer Linear Programming (MILP) has been successfully applied to find more accurate characteristics of several ciphers such as SIMON and SPECK. In our research, we use MILP-aided cryptanalysis to search for differential characteristics, linear approximations and integral properties of ChaCha. We are able to find differential trails up to 2 rounds and linear trails up to 1 round. However, no integral distinguisher has been found, even for 1 round.