All papers in 2018 (Page 12 of 1249 results)

Last updated:  2018-03-05
NTRU-LPR IND-CPA: A New Ideal Lattices-based Scheme
Soda Diop, Bernard Ousmane Sané, Nafissatou Diarra, Michel Seck
In this paper, we propose NTRU-LPR IND-CPA, a new secure scheme based on the decisional variant of the Bounded Distance Decoding problem over rings (DR-BDD). This scheme is IND-CPA secure and has two KEM variants that are IND-CCA2 secure in the random oracle model. NTRU-LPR IND-CPA is similar to NTRU LPRime and the LPR cryptosystem, and it does not suffer from decryption failures. Our polynomial ring can be any ring of the form $\mathbb{Z}[x]/(q,f(x))$, where $f$ is a polynomial of degree $n$ and $q$ is an integer. With respect to the DR-BDD problem, we propose to use square-free polynomials; such polynomials include $f(x)=x^n-x-1$ (as in NTRU LPRime) and $f(x)=x^n-1$ (as in NTRU). To avoid some weaknesses of Ring-LWE or NTRU-like schemes (meet-in-the-middle attacks, hybrid attacks, weak keys, etc.), we use neither sparse polynomials nor inversion of polynomials. Furthermore, to avoid backdoors, all polynomials in our scheme can be generated by hash functions. We also give a short comparative analysis between our new scheme and some proposals of the NIST post-quantum call (November 2017).
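As a concrete illustration of the underlying ring arithmetic (ours, not the authors'), here is a minimal Python sketch of multiplication in $\mathbb{Z}[x]/(q,f(x))$ with $f(x)=x^n-x-1$; the parameters are toy values chosen for readability:

```python
q, n = 12289, 8  # hypothetical toy parameters, not the paper's

def poly_mul_mod(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first)
    modulo q and f(x) = x^n - x - 1, i.e. reduce using x^n = x + 1."""
    res = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    for k in range(2 * n - 2, n - 1, -1):  # x^k = x^(k-n+1) + x^(k-n)
        c, res[k] = res[k], 0
        res[k - n + 1] = (res[k - n + 1] + c) % q
        res[k - n] = (res[k - n] + c) % q
    return res[:n]
```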
Last updated:  2018-04-18
Generic Round-Function-Recovery Attacks for Feistel Networks over Small Domains
F. Betül Durak, Serge Vaudenay
Feistel Networks (FN) are now used massively to encrypt credit card numbers through format-preserving encryption. In our work, we focus on FN with two branches, entirely unknown round functions, modular additions (or other group operations), and a small domain size $N$ for each branch. We investigate round-function-recovery attacks. The best attack known so far is an improvement of the Meet-In-The-Middle (MITM) attack by Isobe and Shibutani from ASIACRYPT 2013, with optimal data complexity $q=r \frac{N}{2}$ and time complexity $N^{ \frac{r-4}{2}N + o(N)}$, where $r$ is the number of rounds in the FN. We construct an algorithm, based on partial exhaustive search, with a surprisingly better complexity when $r$ is small. As the data complexity varies from the optimal one to that of a codebook attack $q=N^2$, our time complexity can reach $N^{O \left( N^{1-\frac{1}{r-2}} \right) }$. It crosses the complexity of the improved MITM for $q\sim N\frac{\mathrm{e}^3}{r}2^{r-3}$. We also estimate the lowest secure number of rounds as a function of $N$ and the security goal. We show that the format-preserving-encryption schemes FF1 and FF3, standardized by NIST and ANSI, cannot offer 128-bit security (as they are supposed to) for $N\leq11$ and $N\leq17$, respectively (the NIST standard only requires $N \geq 10$), and we improve the results by Durak and Vaudenay from CRYPTO 2017.
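To make the object of study concrete, here is a small Python model (ours) of such a two-branch Feistel network with modular addition and fully random, unknown round functions; recovering the tables F[i] from input/output pairs is precisely the round-function-recovery problem:

```python
import random

N, r = 16, 5  # branch domain size and round count (toy values)
# unknown random round functions F_0 .. F_{r-1}: Z_N -> Z_N
F = [[random.randrange(N) for _ in range(N)] for _ in range(r)]

def feistel(left, right):
    """One pass through the network: (L, R) -> (R, L + F_i(R) mod N)."""
    for i in range(r):
        left, right = right, (left + F[i][right]) % N
    return left, right
```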
Last updated:  2019-01-30
Towards Practical Lattice-Based One-Time Linkable Ring Signatures
Carsten Baum, Huang Lin, Sabine Oechsner
Ring signatures, as introduced by Rivest, Shamir, and Tauman (Asiacrypt '01), allow one to generate a signature for a message on behalf of an ad-hoc set of parties. To sign a message, only the public keys must be known, and these can be generated independently. It is furthermore not possible to identify the actual signer based on the signature. Ring signatures have recently gained attention due to their applicability in the construction of practical anonymous cryptocurrencies, where they are used to secure transactions while hiding the identity of the actual spender. To be applicable in that setting, ring signatures must make it possible to determine when a party has signed multiple transactions, which is done using a property called linkability. This work presents a linkable ring signature scheme constructed from a lattice-based collision-resistant hash function. We follow the idea of existing schemes that are secure based on the hardness of the discrete logarithm problem, but adapt and optimize ours to the lattice setting. In comparison to other designs for (lattice-based) linkable ring signatures, our approach avoids the standard solution for achieving linkability, which involves proofs about correct evaluation of a pseudorandom function using heavy zero-knowledge machinery.
Last updated:  2018-01-30
On the Gold Standard for Security of Universal Steganography
Sebastian Berndt, Maciej Liśkiewicz
While symmetric-key steganography is quite well understood both in the information-theoretic and in the computational setting, many fundamental questions about its public-key counterpart resist persistent attempts to solve them. The computational model for public-key steganography was proposed by von Ahn and Hopper at EUROCRYPT 2004. At TCC 2005, Backes and Cachin gave the first universal public-key stegosystem - i.e., one that works on all channels - achieving security against replayable chosen-covertext attacks (SS-RCCA) and asked whether security against non-replayable chosen-covertext attacks (SS-CCA) is achievable. Later, Hopper (ICALP 2005) provided such a stegosystem for every efficiently sampleable channel, but did not achieve universality. He posed the question whether universality and SS-CCA-security can be achieved simultaneously. No progress on this question has been made for more than a decade. In our work we solve Hopper's problem in an essentially complete manner: as our main positive result we design an SS-CCA-secure stegosystem that works for every memoryless channel. On the other hand, we prove that this result is the best possible in the context of universal steganography. We provide a family of 0-memoryless channels - where the already sent documents have only marginal influence on the current distribution - and prove that no SS-CCA-secure steganography for this family exists in the standard non-look-ahead model.
Last updated:  2018-06-21
Combining Private Set-Intersection with Secure Two-Party Computation
Michele Ciampi, Claudio Orlandi
Private Set-Intersection (PSI) is one of the most popular and practically relevant secure two-party computation (2PC) tasks. Therefore, designing special-purpose PSI protocols (which are more efficient than generic 2PC solutions) is a very active line of research. In particular, a recent line of work has proposed PSI protocols based on oblivious transfer (OT) which, thanks to recent advances in OT-extension techniques, is nowadays a very cheap cryptographic building block. Unfortunately, these protocols cannot be plugged into larger 2PC applications since in these protocols one party (by design) learns the output of the intersection. Therefore, it is not possible to perform secure post-processing of the output of the PSI protocol. In this paper we propose a novel and efficient OT-based PSI protocol that produces an "encrypted" output that can therefore be later used as an input to other 2PC protocols. In particular, the protocol can be used in combination with all common approaches to 2PC including garbled circuits, secret sharing and homomorphic encryption. Thus, our protocol can be combined with the right 2PC techniques to achieve more efficient protocols for computations of the form $z=f(X\cap Y)$ for arbitrary functions $f$.
Last updated:  2021-11-10
PHANTOM and GHOSTDAG: A Scalable Generalization of Nakamoto Consensus
Yonatan Sompolinsky, Shai Wyborski, Aviv Zohar
In 2008 Satoshi Nakamoto invented the basis for blockchain-based distributed ledgers. The core concept of this system is an open and anonymous network of nodes, or miners, which together maintain a public ledger of transactions. The ledger takes the form of a chain of blocks, the blockchain, where each block is a batch of new transactions collected from users. One primary problem with Satoshi's blockchain is its highly limited scalability. The security of Satoshi's longest chain rule, more generally known as the Bitcoin protocol, requires that all honest nodes be aware of each other's blocks very soon after each block's creation. To ensure this, the throughput of the system is artificially suppressed so that each block fully propagates before the next one is created, and so that very few ``orphan blocks'' that fork the chain are created spontaneously. In this paper we present PHANTOM, a proof-of-work based protocol for a permissionless ledger that generalizes Nakamoto's blockchain to a directed acyclic graph of blocks (blockDAG). PHANTOM includes a parameter $k$ that controls the protocol's level of tolerance to blocks that were created concurrently, which can be set to accommodate higher throughput. It thus avoids the security-scalability tradeoff from which Satoshi's protocol suffers. PHANTOM solves an optimization problem over the blockDAG to distinguish between blocks mined properly by honest nodes and those created by non-cooperating nodes who choose to deviate from the mining protocol. Using this distinction, PHANTOM provides a robust total order on the blockDAG in a way that is eventually agreed upon by all honest nodes. Implementing PHANTOM requires solving an NP-hard problem, and to avoid this prohibitive computation, we devised an efficient greedy algorithm, GHOSTDAG, that captures the essence of PHANTOM. We provide a formal proof of the security of GHOSTDAG, namely, that its ordering of blocks is irreversible up to an exponentially negligible factor. We discuss the properties of GHOSTDAG and how it compares to other DAG-based protocols.
Last updated:  2020-11-02
Decomposition of Permutations in a Finite Field
Svetla Nikova, Ventzislav Nikov, Vincent Rijmen
We describe a method to decompose any power permutation as a sequence of power permutations of lower algebraic degree. As a result we obtain decompositions of the inversion in $\mathrm{GF}(2^n)$ for small $n$, from $3$ up to $16$, as well as for the APN functions when $n=5$. More precisely, we find decompositions into quadratic power permutations for any $n$ not a multiple of $4$, and decompositions into cubic power permutations for $n$ a multiple of $4$. Finally, we use the Theorem of Carlitz to prove that for $3 \leq n \leq 16$ any $n$-bit permutation can be decomposed into quadratic and cubic permutations.
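Since the algebraic degree of a power map $x \mapsto x^k$ over $\mathrm{GF}(2^n)$ equals the Hamming weight of $k$, finding such a decomposition of the inversion (exponent $2^n-2$) amounts to factoring $2^n-2$ modulo $2^n-1$ into low-weight exponents. The following small search (our illustration, not the paper's method) finds quadratic decompositions:

```python
from math import gcd

def quadratic_decomposition(n, max_len=6):
    """Find exponents e_1..e_m, each of Hamming weight <= 2 (so each x^(e_i)
    is a power permutation of algebraic degree <= 2), whose product equals
    the inversion exponent 2^n - 2 modulo 2^n - 1."""
    M = (1 << n) - 1
    target = M - 1  # inversion is x^(2^n - 2)
    quads = [e for e in range(2, M)
             if bin(e).count("1") <= 2 and gcd(e, M) == 1]
    frontier = {1: []}
    for _ in range(max_len):
        nxt = {}
        for prod, seq in frontier.items():
            for e in quads:
                p = prod * e % M
                if p == target:
                    return seq + [e]
                nxt.setdefault(p, seq + [e])
        frontier = nxt
    return None

print(quadratic_decomposition(5))  # [3, 10]: (x^3)^10 = x^30 = x^(-1) in GF(2^5)
```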
Last updated:  2018-04-13
Grafting Trees: a Fault Attack against the SPHINCS framework
Laurent Castelnovi, Ange Martinelli, Thomas Prest
Because they require no assumption besides the preimage or collision resistance of hash functions, hash-based signatures are a unique and very attractive class of post-quantum primitives. Among them, the schemes of the SPHINCS family are arguably the most practical stateless schemes, and can be implemented on embedded devices such as FPGAs or smart cards. This naturally raises the question of their resistance to implementation attacks. In this paper, we propose the first fault attack against the framework underlying SPHINCS, Gravity-SPHINCS and SPHINCS+. Our attack makes it possible to forge the signature of any message at the cost of a single faulted message. Furthermore, the fault model is very reasonable and the faulted signatures remain valid, which renders our attack both stealthy and practical. As the attack involves a non-negligible computational cost, we propose a fine-grained trade-off that lowers this cost by slightly increasing the number of faulted messages. Our attack is generic in the sense that it does not depend on the underlying hash function(s) used.
Last updated:  2018-01-29
A Secure and Privacy-preserving Protocol for Smart Metering Operational Data Collection
Mustafa A. Mustafa, Sara Cleemput, Abdelrahaman Aly, Aysajan Abidin
In this paper we propose a novel protocol that allows suppliers and grid operators to collect users' aggregate metering data in a secure and privacy-preserving manner. We use secure multiparty computation to ensure privacy protection. In addition, we propose three different data aggregation algorithms that offer different balances between privacy protection and performance. Our protocol is designed for a realistic scenario in which the data need to be sent to different parties, such as grid operators and suppliers. Furthermore, it facilitates an accurate calculation of transmission, distribution and grid balancing fees in a privacy-preserving manner. We also present a security analysis and a performance evaluation of our protocol based on well-known multiparty computation algorithms implemented in C++.
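As a toy illustration of the kind of privacy-preserving aggregation involved (our sketch, not the paper's protocol), additive secret sharing lets computation parties learn only the sum of the meter readings:

```python
import random

Q = 2**61 - 1  # modulus for additive secret sharing (illustrative choice)

def share(reading, n_parties):
    """Split one meter reading into n additive shares mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((reading - sum(shares)) % Q)
    return shares

readings = [17, 42, 5]  # three households (toy data)
per_party = zip(*(share(r, 3) for r in readings))   # one share per reading, per party
aggregate = sum(sum(p) % Q for p in per_party) % Q  # parties publish only local sums
assert aggregate == sum(readings) % Q               # the recipient learns just the total
```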
Last updated:  2018-01-29
A Nonstandard Variant of Learning with Rounding with Polynomial Modulus and Unbounded Samples
Hart Montgomery
The learning with rounding problem (LWR) has recently become a popular cryptographic assumption to study due to its determinism and resistance to known quantum attacks. Unfortunately, LWR is only known to be provably hard for instances of the problem where the LWR modulus $q$ is at least as large as some polynomial function of the number of samples given to an adversary, meaning LWR is provably hard only when (1) an adversary can only see a fixed, predetermined amount of samples or (2) the modulus $q$ is superpolynomial in the security parameter, meaning that the hardness reduction is from superpolynomial approximation factors on worst-case lattices. In this work, we show that there exists a (still fully deterministic) variant of the LWR problem that allows for both unbounded queries and a polynomial modulus $q$, breaking an important theoretical barrier. To our knowledge, our new assumption, which we call the "nearby learning with lattice rounding problem" (NLWLR), is the first fully deterministic version of the learning with errors (LWE) problem that allows for both unbounded queries and a polynomial modulus. We note that our assumption is not practical for any kind of use and is mainly intended as a theoretical proof of concept to show that provably hard deterministic forms of LWE can exist with a modulus that does not need to grow with the number of samples.
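For reference, a single LWR sample in its standard form (our sketch with toy parameters, not the NLWLR variant): the noise of LWE is replaced by deterministic rounding from modulus $q$ down to modulus $p$:

```python
import random

n, q, p = 16, 3329, 256  # toy dimensions and moduli, not from the paper

def lwr_sample(s):
    """One LWR sample (a, b) with b = round((p/q) * <a, s> mod q) mod p.
    Fully deterministic once a is fixed: no error term is added."""
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = ((p * inner + q // 2) // q) % p  # integer rounding of (p/q) * inner
    return a, b
```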
Last updated:  2019-03-04
Improved Bounds on the Threshold Gap in Ramp Secret Sharing
Ignacio Cascudo, Jaron Skovsted Gundersen, Diego Ruano
In this paper we consider linear secret sharing schemes over a finite field $\mathbb{F}_q$, where the secret is a vector in $\mathbb{F}_q^\ell$ and each of the $n$ shares is a single element of $\mathbb{F}_q$. We obtain lower bounds on the so-called threshold gap $g$ of such schemes, defined as the quantity $r-t$ where $r$ is the smallest number such that any subset of $r$ shares uniquely determines the secret and $t$ is the largest number such that any subset of $t$ shares provides no information about the secret. Our main result establishes a family of bounds which are tighter than previously known bounds for $\ell\geq 2$. Furthermore, we also provide bounds, in terms of $n$ and $q$, on the partial reconstruction and privacy thresholds, a more fine-grained notion that considers the amount of information about the secret that can be contained in a set of shares of a given size. Finally, we compare our lower bounds with known upper bounds in the asymptotic setting.
Last updated:  2018-01-31
How to Reveal the Secrets of an Obscure White-Box Implementation
Louis Goubin, Pascal Paillier, Matthieu Rivain, Junwei Wang
White-box cryptography aims to prevent key extraction from software implementations of cryptographic primitives. It is widely deployed in DRM and mobile payment applications in which a malicious attacker might control the entire execution environment. So far, no provably secure white-box implementation of AES has been put forward, and all the published practical constructions are vulnerable to differential computation analysis (DCA) and differential fault analysis (DFA). As a consequence, the industry relies on home-made obscure white-box implementations based on secret designs. It is therefore of interest to investigate the achievable resistance of an AES implementation to thwart a white-box adversary in this paradigm. To this purpose, the ECRYPT CSA project organized the WhibOx contest as the capture-the-flag challenge of CHES 2017. Researchers and engineers were invited to participate either as designers, by submitting the source code of an AES-128 white-box implementation with a freely chosen key, or as breakers, by trying to extract the hard-coded keys from the submitted challenges. The participants were not expected to disclose their identities or the underlying design/attack techniques. In the end, all 94 submitted challenges were broken, and only 13 of them survived for more than one day. The strongest (in terms of survival time) implementation, submitted by Biryukov and Udovenko, survived for 28 days (more than twice as long as the second strongest implementation), and it was broken by a single team, namely the authors of the present paper, using reverse engineering and algebraic analysis. In this paper, we give a detailed description of the different steps of our cryptanalysis. We then generalize it into an attack methodology for breaking further obscure white-box implementations. In particular, we formalize and generalize the linear decoding analysis that we use to extract the key from the encoded intermediate variables of the target challenge.
Last updated:  2018-01-28
Exploiting an HMAC-SHA-1 optimization to speed up PBKDF2
Andrea Visconti, Federico Gorla
PBKDF2 [27] is a well-known password-based key derivation function. In order to slow attackers down, PBKDF2 introduces CPU-intensive operations based on an iterated pseudorandom function (in our case HMAC-SHA-1). If we are able to speed up a SHA-1 or an HMAC implementation, we are able to speed up PBKDF2-HMAC-SHA-1. This means that a performance improvement might be exploited by regular users and attackers alike. Interestingly, FIPS 198-1 [31] suggests that it is possible to precompute the first message block of a keyed hash function only once, store this value, and use it each time it is needed [43]. Therefore the computation of the first message block does not contribute to slowing attackers down, making the computation of the second message block crucial. In this paper we focus on the latter, investigating the possibility of avoiding part of the HMAC-SHA-1 operations. We show that some CPU-intensive operations may be replaced with a set of equivalent, but less onerous, instructions. We identify useless XOR operations by exploiting and extending Intel optimizations [26] and applying the Boyar-Peralta heuristic [12]. In addition, we provide an alternative method to compute the SHA-1 message scheduling function and explain why attackers might exploit these findings to speed up a brute-force attack.
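The FIPS 198-1 optimization mentioned above is easy to demonstrate: the two key-dependent 64-byte blocks of HMAC are absorbed once, and every PBKDF2 iteration then only pays for the message blocks. A sketch of the idea in Python (ours, for illustration):

```python
import hashlib

def pbkdf2_hmac_sha1_block(password: bytes, salt: bytes, c: int) -> bytes:
    """First output block of PBKDF2-HMAC-SHA-1, precomputing the first
    (key-dependent) SHA-1 block of the inner and outer HMAC hashes once
    and reusing it across all c iterations."""
    if len(password) > 64:
        password = hashlib.sha1(password).digest()
    password = password.ljust(64, b"\x00")
    inner = hashlib.sha1(bytes(b ^ 0x36 for b in password))  # precomputed once
    outer = hashlib.sha1(bytes(b ^ 0x5C for b in password))  # precomputed once

    def hmac_sha1(msg: bytes) -> bytes:
        h_in = inner.copy(); h_in.update(msg)
        h_out = outer.copy(); h_out.update(h_in.digest())
        return h_out.digest()

    u = hmac_sha1(salt + (1).to_bytes(4, "big"))  # U_1
    t = int.from_bytes(u, "big")
    for _ in range(c - 1):                        # U_2 .. U_c
        u = hmac_sha1(u)
        t ^= int.from_bytes(u, "big")
    return t.to_bytes(20, "big")

# sanity check against the standard library implementation
assert pbkdf2_hmac_sha1_block(b"pw", b"salt", 1000) == \
       hashlib.pbkdf2_hmac("sha1", b"pw", b"salt", 1000, 20)
```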
Last updated:  2019-09-24
Paralysis Proofs: Secure Access-Structure Updates for Cryptocurrencies and More
Fan Zhang, Philip Daian, Gabriel Kaptchuk, Iddo Bentov, Ian Miers, Ari Juels
Conventional (M, N)-threshold signature schemes leave users with a painful choice. Setting M = N offers maximum resistance to key compromise. With this choice, though, loss of a single key renders the signing capability unavailable, creating paralysis in systems that use signatures for access control. Lower M improves availability, but at the expense of security. For example, a (3, 3)-multisig wallet experiences access-control paralysis upon loss of a single key, but a (2, 3)-multisig allows any two players to collude and steal funds from the third. In this paper, we introduce techniques that address this impasse by making general cryptographic access structures dynamic. Our schemes permit, e.g., a (3, 3)-multisig to be downgraded to a (2, 3)-multisig if a player goes missing. This downgrading is secure in the sense that it occurs only if a player is provably unavailable. Our main tool is what we call a Paralysis Proof, evidence that players, i.e., key holders, are unavailable or incapacitated. Using Paralysis Proofs, we show how to construct a Dynamic Access Structure System (DASS), which can securely and flexibly update target access structures without a trusted third party such as a system administrator. We present DASS constructions that combine a trust anchor (a trusted execution environment or smart contract) with a censorship-resistant channel in the form of a blockchain. We offer a formal framework for specifying DASS policies, and define and show how to achieve critical security and usability properties (safety, liveness, and paralysis-freeness) in a DASS. Paralysis Proofs can help address pervasive key-management challenges in many different settings. We present DASS schemes for three important example use cases: recovery of cryptocurrency funds should players become unavailable, returning funds to users when cryptocurrency custodians fail, and remediating critical smart-contract failures such as frozen funds. We report on practical implementations for Bitcoin and Ethereum.
Last updated:  2018-01-28
Towards Fully Automated Analysis of Whiteboxes: Perfect Dimensionality Reduction for Perfect Leakage
Cees-Bart Breunesse, Ilya Kizhvatov, Ruben Muijrers, Albert Spruyt
Differential computation analysis (DCA) is a technique recently introduced by Bos et al. and Sanfelix et al. for key recovery from whitebox implementations of symmetric ciphers. It consists of applying the differential power analysis approach to software execution traces that are obtained by tracing the memory accesses of a whitebox application. While very effective, DCA relies on analyst intuition to be efficient. In particular, memory range selection is needed to prevent software execution traces from becoming prohibitively long. Moreover, an analyst's failure to specify the relevant range can cause a vulnerable whitebox implementation to be evaluated as secure. We present a novel approach for dimensionality reduction of software execution traces that takes a significant part of analyst intuition out of the loop. The approach exploits the lack of measurement noise in the traces and selects only the samples that are relevant for key recovery. Our experiments with the published whitebox implementations show that the length of software execution traces can be automatically reduced to a few dozen bits. This results in an attack speedup of several orders of magnitude, which in turn facilitates the use of more computationally intensive DCA flavours such as the multiple leakage targets proposed by Klemsa. Our approach simplifies the methodology for whitebox analysis down to tracing a large default memory range, letting our dimensionality reduction techniques extract the relevant points for DCA, and running the attack on multiple leakage targets, excluding analyst errors and saving analysis time. It also provides quick insights in case of whitebox implementations with additional protection layers such as encodings, and can be used to identify the range for fault injection in differential fault analysis. We make our techniques available to the community as part of a free/libre open-source side channel analysis toolkit. We believe they are a step forward for fully automated whitebox analysis tools.
Last updated:  2018-01-28
Parameterization of Edwards curves on the rational field Q with given torsion subgroups
Linh Tung Vo
This paper presents the basic concepts of Edwards curves and twisted Edwards curves, together with the point addition laws on these curves. The main result is a parameterization of Edwards curves with a given torsion subgroup over the rational field $\mathbb{Q}$.
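For readers unfamiliar with the addition law in question, here is a small self-contained check over $\mathbb{Q}$ (our illustration; the curve parameter $d$ is an arbitrary choice):

```python
from fractions import Fraction as Fr

d = Fr(-3)  # hypothetical parameter; curve: x^2 + y^2 = 1 + d*x^2*y^2

def edwards_add(P, Q):
    """Edwards addition law on x^2 + y^2 = 1 + d x^2 y^2 over Q,
    with neutral element (0, 1)."""
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2
    return ((x1 * y2 + x2 * y1) / (1 + t), (y1 * y2 - x1 * x2) / (1 - t))

P = (Fr(1), Fr(0))  # a rational point of order 4 on this curve
assert edwards_add(P, (Fr(0), Fr(1))) == P    # (0, 1) is the neutral element
assert edwards_add(P, P) == (Fr(0), Fr(-1))   # doubling gives a point of order 2
```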
Last updated:  2018-01-28
Statistical Attacks on Cookie Masking for RC4
Kenneth G. Paterson, Jacob C. N. Schuldt
Levillain et al. (AsiaCCS 2015) proposed two cookie masking methods, TLS Scramble and MCookies, to counter a class of attacks on SSL/TLS in which the attacker is able to exploit its ability to obtain many encryptions of a target HTTP cookie. In particular, the masking methods potentially make it viable to continue to use the RC4 algorithm in SSL/TLS. In this paper, we provide a detailed analysis of TLS Scramble and MCookies when used in conjunction with RC4 in SSL/TLS. We show that, in fact, both are vulnerable to variants of the known attacks against RC4 in SSL/TLS exploiting the Mantin biases (Mantin, EUROCRYPT 2005): * For the TLS Scramble mechanism, we provide a detailed statistical analysis coupled with extensive simulations that show that about $2^{37}$ encryptions of the cookie are sufficient to enable its recovery. * For the MCookies mechanism, our analysis is made more complex by the presence of a Base64 encoding step in the mechanism, which (unintentionally) acts like a classical block cipher S-box in the masking process. Despite this, we are able to develop a maximum likelihood analysis which provides a rigorous statistical procedure for estimating the unknown cookie. Based on simulations, we estimate that $2^{45}$ encryptions of the cookie are sufficient to enable its recovery. Taken together, our analyses show that the cookie masking mechanisms as proposed by Levillain et al. only moderately increase the security of RC4 in SSL/TLS.
Last updated:  2018-02-12
Constructions of S-boxes with uniform sharing
Kerem Varici, Svetla Nikova, Ventzislav Nikov, Vincent Rijmen
In this paper we focus on S-box constructions. We consider the uniformity property of an S-box which plays an important role in Threshold Implementations (TI). Most papers so far have studied TI sharings for given S-boxes. We proceed in the opposite way: starting from $n$-bit S-boxes with known sharings we construct new $(n+1)$-bit S-boxes from them with the desired sharings. In addition, we investigate the self-equivalency of S-boxes and show some interesting properties.
Last updated:  2018-01-28
Polynomial multiplication over binary finite fields: new upper bounds
Alessandro De Piccoli, Andrea Visconti, Ottavio Giulio Rizzo
When implementing a cryptographic algorithm, efficient operations have high relevance both in hardware and in software. Since a number of operations can be performed via polynomial multiplication, the arithmetic of polynomials over finite fields plays a key role in real-life implementations. One of the most interesting papers that addressed this problem was published in 2009: in [5], Bernstein suggests splitting polynomials into parts and presents a new recursive multiplication technique which is faster than those commonly used. In order to further reduce the number of bit operations [6] required to multiply n-bit polynomials, researchers have adopted different approaches. In [18] a greedy heuristic has been applied to the linear straight-line sequences listed in [6]. In 2013, D'angella, Schiavo and Visconti [20] skip some redundant operations of the multiplication algorithms described in [5]. In 2015, Cenk, Negre and Hasan [12] suggest new multiplication algorithms. In this paper, (a) we present a "k-1"-level recursion algorithm that can be used to reduce the effective number of bit operations required to multiply n-bit polynomials; and (b) we use algebraic extensions of F_2 combined with Lagrange interpolation to improve the asymptotic complexity.
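To fix ideas, recursive (Karatsuba-style) multiplication over GF(2)[x] is especially clean because addition is XOR, so the subtractions of the integer version disappear. A minimal Python sketch (ours, not one of the optimized algorithms discussed above), with polynomials encoded as integers (bit i = coefficient of x^i):

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less multiplication of GF(2)[x] polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a: int, b: int, n: int) -> int:
    """Karatsuba over GF(2): three half-size products per level.
    n is the input bit-length and must be a power of two."""
    if n <= 8:
        return clmul(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1, b0, b1 = a & mask, a >> h, b & mask, b >> h
    lo = karatsuba_gf2(a0, b0, h)
    hi = karatsuba_gf2(a1, b1, h)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, h) ^ lo ^ hi
    return lo ^ (mid << h) ^ (hi << (2 * h))

assert karatsuba_gf2(0xABCD, 0x1234, 16) == clmul(0xABCD, 0x1234)
```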
Last updated:  2018-01-28
Secure and Scalable Multi-User Searchable Encryption
Cédric Van Rompay, Refik Molva, Melek Önen
By allowing a large number of users to behave as readers or writers, Multi-User Searchable Encryption (MUSE) raises new security and performance challenges beyond the typical requirements of Symmetric Searchable Encryption (SSE). In this paper we identify two core mandatory requirements of MUSE protocols, namely privacy in the face of users colluding with the CSP and low complexity for the users, and point out that no existing MUSE protocol satisfies these two requirements at the same time. We then present the first MUSE protocol that satisfies both of them. The design of the protocol also includes new constructions for a secure variant of Bloom Filters (BFs) and multi-query Oblivious Transfer (OT).
Last updated:  2018-11-15
The Unified Butterfly Effect: Efficient Security Credential Management System for Vehicular Communications
Marcos A. Simplicio Jr., Eduardo Lopes Cominetti, Harsh Kupwade Patil, Jefferson E. Ricardini, Marcos Vinicius M. Silva
Security and privacy are important requirements for the broad deployment of intelligent transportation systems (ITS). This has motivated the development of many proposals aimed at creating a Vehicular Public Key Infrastructure (VPKI) to address such needs. Among them, schemes based on pseudonym certificates are considered particularly prominent: they provide data authentication in a privacy-preserving manner while allowing vehicles to be revoked in case of misbehavior. Indeed, this is the approach followed by the Security Credential Management System (SCMS), one of the leading candidate designs for protecting vehicular communications in the United States. Despite SCMS's appealing design, in this article we show that it can still be further improved. Namely, one of the main benefits of SCMS is its so-called butterfly key expansion process, which allows batches of pseudonym certificates to be issued for authorized vehicles by means of a single request. Whereas this procedure requires the vehicle to provide two separate public/private key pairs to registration authorities, we present a modified protocol that uses a single key for the same purpose. As a result, the processing and bandwidth costs of the certificate provisioning protocol drop by as much as 50%. These performance gains come with no negative impact in terms of security, flexibility or scalability when compared to the original SCMS.
Last updated:  2018-01-28
Fully homomorphic public-key encryption with small ciphertext size
Masahiro Yagisawa
In previous work I proposed a fully homomorphic encryption scheme without bootstrapping, which has a large ciphertext size. This time I propose a fully homomorphic public-key encryption scheme on the non-associative octonion ring over a finite field with a small ciphertext size. In this scheme the ciphertext size is one-third of that in the scheme proposed before. Because the proposed scheme adopts a medium text with zero norm, it is immune to the “p and -p attack”. As the proposed scheme is based on the computational difficulty of solving multivariate algebraic equations of high degree, it is immune to the Gröbner basis attack, the differential attack, the rank attack and so on.
Last updated:  2018-04-18
(Short Paper) A Wild Velvet Fork Appears! Inclusive Blockchain Protocol Changes in Practice
Alexei Zamyatin, Nicholas Stifter, Aljosha Judmayer, Philipp Schindler, Edgar Weippl, William J. Knottenbelt
The loosely defined terms hard fork and soft fork have established themselves as descriptors of different classes of upgrade mechanisms for the underlying consensus rules of (proof-of-work) blockchains. Recently, a novel approach termed velvet fork, which expands upon the concept of a soft fork, was outlined. Specifically, velvet forks intend to avoid the possibility of disagreement caused by a change of rules, by rendering modifications to the protocol backward compatible and inclusive to legacy blocks. We present an overview and definitions of these different upgrade mechanisms and outline their relationships. In doing so, we expose examples where velvet forks or similar constructions are already actively employed in Bitcoin and other cryptocurrencies. Furthermore, we expand upon the concept of velvet forks by proposing possible applications and discuss potentially arising security implications.
Last updated:  2018-01-26
Constructing low-weight dth-order correlation-immune Boolean functions through the Fourier-Hadamard transform
Claude Carlet, Xi Chen
The correlation immunity of Boolean functions is a property related to cryptography, to error correcting codes, to orthogonal arrays (in combinatorics, which was also a domain of interest of S. Golomb) and, in a slightly looser way, to sequences. Correlation-immune Boolean functions (in short, CI functions) have the property of keeping the same output distribution when some input variables are fixed. They have been widely used as combiners in stream ciphers to allow resistance to the Siegenthaler correlation attack. Very recently, a new use of CI functions has appeared in the framework of side channel attacks (SCA). To reduce the cost overhead of counter-measures to SCA, CI functions need to have low Hamming weights. This actually poses new challenges, since the known constructions, which are based on properties of the Walsh-Hadamard transform, do not allow building unbalanced CI functions. In this paper, we propose constructions of low-weight dth-order CI functions based on the Fourier-Hadamard transform, whereas the known constructions of resilient functions are based on the Walsh-Hadamard transform. We first prove a simple but powerful result, which implies that one need only consider the case where d is odd in further research. Then we investigate how to construct low Hamming weight CI functions through the Fourier-Hadamard transform (which behaves well with respect to the multiplication of Boolean functions). We use the characterization of CI functions by the Fourier-Hadamard transform and introduce a related general construction of CI functions by multiplication. By using the Kronecker product of vectors, we obtain more constructions of low-weight d-CI Boolean functions. Furthermore, we present a method to construct low-weight d-CI Boolean functions by making additional restrictions on the supports built from the Kronecker product.
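The Fourier-Hadamard characterization used here is simple to state: a 0/1-valued function $f$ is $d$th-order correlation immune iff $\hat{f}(a)=\sum_x f(x)(-1)^{a\cdot x}=0$ for every $a$ with $1\leq wt(a)\leq d$. A brute-force checker (our illustration, feasible only for small n):

```python
from itertools import product

def fourier_hadamard(tt, n):
    """Fourier-Hadamard transform of a 0/1-valued Boolean function:
    fhat(a) = sum over x of f(x) * (-1)^(a.x)."""
    pts = list(product((0, 1), repeat=n))
    return {a: sum(tt[i] * (-1) ** (sum(ai * xi for ai, xi in zip(a, x)) & 1)
                   for i, x in enumerate(pts))
            for a in pts}

def is_d_ci(tt, n, d):
    """d-th order correlation immune iff fhat(a) = 0 for 1 <= wt(a) <= d."""
    fhat = fourier_hadamard(tt, n)
    return all(v == 0 for a, v in fhat.items() if 1 <= sum(a) <= d)

tt = [x1 ^ x2 ^ x3 for x1, x2, x3 in product((0, 1), repeat=3)]
print(is_d_ci(tt, 3, 2))  # True: the parity of three bits is 2nd-order CI
```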
Last updated:  2018-07-30
Protecting Block Ciphers against Differential Fault Attacks without Re-keying (Extended Version)
Anubhab Baksi, Shivam Bhasin, Jakub Breier, Mustafa Khairallah, Thomas Peyrin
In this article, we propose a new method to protect block cipher implementations against Differential Fault Attacks (DFA). Our strategy, so-called ``Tweak-in-Plaintext'', ensures that an uncontrolled value (`tweak-in') is inserted into some part of the block cipher plaintext, thus effectively rendering DFA much harder to perform. Our method is extremely simple yet presents many advantages when compared to previous solutions proposed at AFRICACRYPT 2010 or CARDIS 2015. Firstly, we need neither a tweakable block cipher nor any related-key security assumption (we do not perform any re-keying). Moreover, performance for lightweight applications is improved, and we do not need to send any extra data. Finally, our scheme can be used directly with standard block ciphers such as AES or PRESENT. Experimental results show that the throughput overheads for incorporating our scheme into AES-128 range between $\approx$ 5\% and $\approx$ 26.9\% for software implementations, and between $\approx$ 3.1\% and $\approx$ 25\% for hardware implementations, depending on the tweak-in size.
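A minimal sketch of how we read the Tweak-in-Plaintext idea (ours, with an arbitrary 4-byte tweak-in and PyCryptodome's AES standing in for the cipher): the random tweak-in prevents an attacker from ever getting the same block encrypted twice, which DFA needs in order to pair correct and faulty ciphertexts, and it travels inside the ciphertext, so no extra data is sent:

```python
import os
from Crypto.Cipher import AES  # assumes PyCryptodome is available

TWEAK_LEN = 4  # tweak-in size in bytes: an illustrative, tunable choice

def tweak_in_encrypt(key: bytes, msg: bytes) -> bytes:
    """Insert an uncontrolled random value into the plaintext block."""
    assert len(msg) == 16 - TWEAK_LEN
    tweak_in = os.urandom(TWEAK_LEN)
    return AES.new(key, AES.MODE_ECB).encrypt(tweak_in + msg)

def tweak_in_decrypt(key: bytes, ct: bytes) -> bytes:
    """Decrypt and discard the tweak-in; it is never sent separately."""
    return AES.new(key, AES.MODE_ECB).decrypt(ct)[TWEAK_LEN:]

key = os.urandom(16)
ct = tweak_in_encrypt(key, b"twelve bytes")
assert tweak_in_decrypt(key, ct) == b"twelve bytes"
```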
Last updated:  2018-01-26
Threat-Adjusting Security: BitFlip as an AI-Ready, Post-Quantum cipher
Gideon Samid
Generally ciphers project a fixed measure of security, defined by the complexity of their algorithms. Alas, threat is variable, and should be met with matching security. It is useless to project insufficient security, and it is wasteful and burdensome to over-secure data. BitFlip comes with threat-adjustable flexibility, established via: (i) a smart decoy strategy, (ii) parallel encryption, (iii) uniform letter frequency adjustment – tools which enable the BitFlip user to (a) adjust its ciphertexts to match the appraised threat, and (b) sustain security levels for aging keys. The use of these threat-adjusting tools may be automated to allow (1) AI engines to enhance the security service of the cipher, and (2) remote hard-to-access IoT devices to keep aging keys useful and to preserve precious energy by matching security to the ad-hoc threat level. BitFlip may also be operated in a zero-leakage mode where no attributes of a conversation are disclosed, up to full steganographic levels. BitFlip security is two-dimensional: intractability and equivocation; both may be conveniently increased to meet quantum cryptanalytic attacks.
Last updated:  2018-01-26
Flaws in a Verifiably Multiplicative Secret Sharing Scheme from ICITS 2017
Maki Yoshida, Satoshi Obana
In this paper, we point out flaws in an existing verifiably multiplicative secret sharing (VMSS) scheme. Namely, we show that a scheme proposed by Yoshida and Obana at ICITS 2017 is insecure against an adversary who corrupts a single player. We then show that in the model of ICITS 2017, which restricts the decoder to be additive, error-free verification is impossible. We further show that by allowing a general class of decoders, which includes linear ones, the scheme is error-free.
Last updated:  2018-01-24
Synchronized Aggregate Signatures from the RSA Assumption
Susan Hohenberger, Brent Waters
In this work we construct efficient aggregate signatures from the RSA assumption in the synchronized setting. In this setting, the signing algorithm takes as input a (time) period $t$ as well as the secret key and message. A signer should sign at most once for each $t$. A set of signatures can be aggregated so long as they were all created for the same period $t$. Synchronized aggregate signatures are useful in systems where there is a natural reporting period, such as log and sensor data, or for signatures embedded in a blockchain protocol where the creation of an additional block is a natural synchronization event. We design a synchronized aggregate signature scheme that works for a bounded number of periods $T$ that is given as a parameter to a global system setup. The big technical question is whether we can create solutions that will perform well with the large $T$ values that we might use in practice. For instance, if one wanted signing keys to last up to ten years and be able to issue signatures every second, then we would need to support a period bound of upwards of $2^{28}$. We build our solution in stages, where we start with an initial solution that establishes feasibility but has an impractically large signing time, where the number of exponentiations and prime searches grows linearly with $T$. We prove this scheme secure in the standard model under the RSA assumption with respect to honestly-generated keys. We then provide a tradeoff method where one can trade off the time to create signatures against the space required to store private keys. One point in the tradeoff is where each scales with $\sqrt{T}$. Finally, we reach our main innovation, which is a scheme where both the signing time and storage scale with $\lg{T}$, allowing us to keep both computation and storage costs modest even for large values of $T$. Conveniently, our final scheme uses the same verification algorithm and has the same distribution of public keys and signatures as the first scheme. Thus we are able to recycle the existing security proof for the new scheme. We also show how to extend our results to the identity-based setting in the random oracle model, which can further reduce the overall cryptographic overhead. We conclude with a detailed evaluation of the signing time and storage requirements for various practical settings of the system parameters.
Last updated:  2018-01-24
How to validate the secret of a Ring Learning with Errors (RLWE) key
Jintai Ding, Saraswathy RV, Saed Alsayigh, Crystal Clough
We use the signal function from RLWE key exchange to derive an efficient zero-knowledge authentication protocol to validate an RLWE key $p=as+e$ with secret $s$ and error $e$ in the Random Oracle Model (ROM). With this protocol, a verifier can validate that a key $p$ presented to him by a prover $P$ is of the form $p=as+e$ with $s,e$ small, and that the prover knows $s$. We accompany the description of the protocol with a proof showing that it has negligible soundness and completeness error. The soundness of our protocol relies directly on the hardness of the RLWE problem. The protocol is applicable to both LWE and RLWE, but we focus on the RLWE-based protocol for efficiency and practicality. We also present a variant of the main protocol with a commitment scheme to avoid using the ROM.
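For concreteness, generating a key of the stated form in a toy ring (our sketch; real parameters are much larger and the error distribution is a narrow Gaussian rather than ternary):

```python
import random

n, q = 8, 257  # toy ring R_q = Z_q[x]/(x^n + 1)

def poly_mul(a, b):
    """Multiplication in Z_q[x]/(x^n + 1): negacyclic convolution."""
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

def small():  # ternary coefficients standing in for a narrow Gaussian
    return [random.choice([-1, 0, 1]) for _ in range(n)]

a = [random.randrange(q) for _ in range(n)]  # public uniform polynomial
s, e = small(), small()                      # small secret and error
p = [(x + y) % q for x, y in zip(poly_mul(a, s), e)]  # the key p = a*s + e
```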
Last updated:  2018-01-23
A Cryptographic Analysis of the WireGuard Protocol
Benjamin Dowling, Kenneth G. Paterson
WireGuard (Donenfeld, NDSS 2017) is a recently proposed secure network tunnel operating at layer 3. WireGuard aims to replace existing tunnelling solutions like IPsec and OpenVPN, while requiring less code, being more secure, more performant, and easier to use. The cryptographic design of WireGuard is based on the Noise framework. It makes use of a key exchange component which combines long-term and ephemeral Diffie-Hellman values (along with optional preshared keys). This is followed by the use of the established keys in an AEAD construction to encapsulate IP packets in UDP. To date, WireGuard has received no rigorous security analysis. In this paper, we rectify this. We first observe that, in order to prevent Key Compromise Impersonation (KCI) attacks, any analysis of WireGuard's key exchange component must take into account the first AEAD ciphertext from initiator to responder. This message effectively acts as a key confirmation and makes the key exchange component of WireGuard a 1.5 RTT protocol. However, the fact that this ciphertext is computed using the established session key rules out a proof of session key indistinguishability for WireGuard's key exchange component, limiting the degree of modularity that is achievable when analysing the protocol's security. To overcome this proof barrier, and as an alternative to performing a monolithic analysis of the entire WireGuard protocol, we add an extra message to the protocol. This is done in a minimally invasive way that does not increase the number of round trips needed by the overall WireGuard protocol. This change enables us to prove strong authentication and key indistinguishability properties for the key exchange component of WireGuard under standard cryptographic assumptions.
Last updated:  2018-02-13
Progressive lattice sieving
Thijs Laarhoven, Artur Mariano
Most algorithms for hard lattice problems are based on the principle of rank reduction: to solve a problem in a $d$-dimensional lattice, one first solves one or more problem instances in a sublattice of rank $d - 1$, and then uses this information to find a solution to the original problem. Existing lattice sieving methods, however, tackle lattice problems such as the shortest vector problem (SVP) directly, and work with the full-rank lattice from the start. Lattice sieving further seems to benefit less from starting with reduced bases than other methods, and finding an approximate solution takes almost as long as finding an exact solution. These properties currently set sieving apart from other methods. In this work we consider a progressive approach to lattice sieving, where we gradually introduce new basis vectors only when the sieve has stabilized on the previous basis vectors. This leads to improved (heuristic) guarantees on finding approximate shortest vectors, a bigger practical impact of the quality of the basis on the run-time, better memory management, a smoother and more predictable behavior of the algorithm, and significantly faster convergence - compared to traditional approaches, we save a factor of $20$ to $40$ in the time complexity for SVP.
Last updated:  2018-01-18
A Systematic Approach To Cryptocurrency Fees
Alexander Chepurnoy, Vasily Kharin, Dmitry Meshkov
This paper is devoted to the study of transaction fees in massively replicated open blockchain systems. In such systems, like Bitcoin, a snapshot of the current state required for the validation of transactions is held in memory, which eventually becomes a scarce resource. Uncontrolled state growth can lead to security issues. We propose a modification of the transaction fee scheme based on how much additional space will be needed for the objects created as a result of transaction processing and on how long they will live in the state. We also work out a way to combine fees charged for different resources spent (bandwidth, random-access state memory, processor cycles) into a composite fee, and demonstrate the consistency of the approach by analyzing statistics from the Ethereum network. We show a possible implementation of the state-related fee in the form of regular payments to miners.
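A schematic of such a composite fee (our reading of the idea; all names and prices are purely illustrative):

```python
# Hypothetical per-unit prices; the names and numbers are illustrative.
PRICES = {
    "bandwidth_byte": 1,    # per byte transmitted
    "state_byte_block": 2,  # per byte of new state, per block it stays alive
    "cpu_unit": 3,          # per unit of processor work
}

def composite_fee(tx_bytes, new_state_bytes, state_lifetime_blocks, cpu_units):
    """Combine fees for the different resources a transaction consumes;
    the state component scales with both size and lifetime in the state."""
    return (PRICES["bandwidth_byte"] * tx_bytes
            + PRICES["state_byte_block"] * new_state_bytes * state_lifetime_blocks
            + PRICES["cpu_unit"] * cpu_units)

print(composite_fee(tx_bytes=250, new_state_bytes=40,
                    state_lifetime_blocks=1000, cpu_units=10))
```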
Last updated:  2019-05-27
On the Bit Security of Cryptographic Primitives
Daniele Micciancio, Michael Walter
We introduce a formal quantitative notion of ``bit security'' for a general type of cryptographic games (capturing both decision and search problems), aimed at capturing the intuition that a cryptographic primitive with $k$-bit security is as hard to break as an ideal cryptographic function requiring a brute force attack on a $k$-bit key space. Our new definition matches the notion of bit security commonly used by cryptographers and cryptanalysts when studying search (e.g., key recovery) problems, where the use of the traditional definition is well established. However, it produces a quantitatively different metric in the case of decision (indistinguishability) problems, where the use of (a straightforward generalization of) the traditional definition is more problematic and leads to a number of paradoxical situations or mismatches between theoretical/provable security and practical/common sense intuition. Key to our new definition is to consider adversaries that may explicitly declare failure of the attack. We support and justify the new definition by proving a number of technical results, including tight reductions between several standard cryptographic problems, a new hybrid theorem that preserves bit security, and an application to the security analysis of indistinguishability primitives making use of (approximate) floating point numbers. This is the first result showing that (standard precision) 53-bit floating point numbers can be used to achieve 100-bit security in the context of cryptographic primitives with general indistinguishability-based security definitions. Previous results of this type applied only to search problems, or special types of decision problems.
Last updated:  2018-01-18
EM Analysis in the IoT Context: Lessons Learned from an Attack on Thread
Daniel Dinu, Ilya Kizhvatov
The distinguishing feature of the Internet of Things is that many devices get interconnected. The threat of side-channel attacks in this setting is less understood than the threat of traditional network and software exploitation attacks that are perceived to be more powerful. This work is a case study of Thread, an emerging network and transport level stack designed to facilitate secure communication between heterogeneous IoT devices. We perform the first side-channel vulnerability analysis of the Thread networking stack. We leverage various network mechanisms to trigger manipulations of the security material or to get access to the network credentials. We choose the most feasible attack vector to build a complete attack that combines network specific mechanisms and Differential Electromagnetic Analysis. When successfully applied on a Thread network, the attack gives full network access to the adversary. We evaluate the feasibility of our attack in a TI CC2538 setup running OpenThread, a certified open-source implementation of the stack. The full attack does not succeed in our setting. The root cause for this failure is not any particular security feature of the protocol or the implementation, but a side-effect of a feature not related to security. We summarize the problems that we find in the protocol with respect to side-channel analysis, and suggest a range of countermeasures to prevent our attack and the other attack vectors we identified during the vulnerability analysis. In general, we demonstrate that elaborate security mechanisms of Thread make a side-channel attack not trivial to mount. Similar to a modern software exploit, it requires chaining multiple vulnerabilities. Nevertheless, such attacks are feasible. Being perhaps too expensive for settings like smart homes, they pose a relatively higher threat to the commercial setting. We believe our experience provides a useful lesson to designers of IoT protocols and devices.
Last updated:  2018-07-27
MILP-aided Cube-attack-like Cryptanalysis on Keccak Keyed Modes
Wenquan Bi, Xiaoyang Dong, Zheng Li, Rui Zong, Xiaoyun Wang
Cube-attack-like cryptanalysis was proposed by Dinur et al. at EUROCRYPT 2015; it recovers the key of Keccak keyed modes in a divide-and-conquer manner. In their attack, one selects cube variables manually, which leads to more key bits being involved in the key-recovery attack, making the complexity unnecessarily high. In this paper, we introduce a new MILP model that improves cube attacks on the Keccak keyed modes. Using this new MILP tool, we find the optimal cube variables for Keccak-MAC, Keyak and Ketje, which ensures that a minimum number of key bits is involved in the key-recovery attack. For example, when the capacity is 256, we find a new 32-dimension cube for Keccak-MAC that involves only 18 key bits instead of Dinur et al.'s 64 bits, and the complexity of the 6-round attack is reduced to $2^{42}$ from $2^{66}$. More impressively, using this new tool, we give the very first 7-round key-recovery attack on Keccak-MAC-512. We also obtain 8-round key-recovery attacks on Lake Keyak in the nonce-respecting setting. In addition, we obtain the best attacks on Ketje Major/Minor: for Ketje Major, when the length of the nonce is 9 lanes, we improve the best previous 6-round attack to 7 rounds. Our attacks do not threaten the full-round (12) Keyak/Ketje or the full-round (24) Keccak-MAC. Compared with Huang et al.'s conditional cube attack, the MILP-aided cube-attack-like cryptanalysis has a larger effective range and yields the best results on the Keccak keyed variants with a relatively smaller number of degrees of freedom.
Last updated:  2018-03-26
Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation
Miran Kim, Yongsoo Song, Shuang Wang, Yuhou Xia, Xiaoqian Jiang
Learning a model without accessing raw data has been an intriguing idea for security and machine learning researchers for years. In an ideal setting, we want to encrypt sensitive data to store them on a commercial cloud and run certain analyses without ever decrypting the data, in order to preserve privacy. Homomorphic encryption is a promising candidate for secure data outsourcing, but it is a very challenging task to support real-world machine learning tasks. Existing frameworks can only handle simplified cases with low-degree polynomials such as linear means classifiers and linear discriminative analysis. The goal of this study is to provide practical support for mainstream learning models (e.g., logistic regression). We adapted a novel homomorphic encryption scheme optimized for real-number computation. We devised (1) a least squares approximation of the logistic function for accuracy and efficiency (i.e., to reduce computation cost) and (2) new packing and parallelization techniques. Using real-world datasets, we evaluated the performance of our model and demonstrated its feasibility in speed and memory consumption. For example, it took about 116 minutes to obtain the training model from the homomorphically encrypted Edinburgh dataset. In addition, it gives fairly accurate predictions on the testing dataset. We present the first homomorphically encrypted logistic regression outsourcing model, based on the critical observation that the precision loss of classification models is sufficiently small that the decision plane remains essentially unchanged.
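The least squares step is easy to reproduce (our sketch; the paper's degree and interval may differ): fit a low-degree polynomial to the sigmoid over a bounded interval, so the homomorphic scheme only ever evaluates additions and multiplications:

```python
import numpy as np

xs = np.linspace(-8, 8, 1000)        # illustrative interval
sigmoid = 1 / (1 + np.exp(-xs))
coeffs = np.polyfit(xs, sigmoid, 3)  # degree-3 least-squares fit
max_err = np.max(np.abs(np.polyval(coeffs, xs) - sigmoid))
print(coeffs, max_err)  # a polynomial the encrypted training loop can evaluate
```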
Last updated:  2018-01-18
GAZELLE: A Low Latency Framework for Secure Neural Network Inference
Chiraag Juvekar, Vinod Vaikuntanathan, Anantha Chandrakasan
The growing popularity of cloud-based machine learning raises a natural question about the privacy guarantees that can be provided in such a setting. Our work tackles this problem in the context where a client wishes to classify private images using a convolutional neural network (CNN) trained by a server. Our goal is to build efficient protocols whereby the client can acquire the classification result without revealing their input to the server, while guaranteeing the privacy of the server's neural network. To this end, we design Gazelle, a scalable and low-latency system for secure neural network inference, using an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits). Gazelle makes three contributions. First, we design the Gazelle homomorphic encryption library which provides fast algorithms for basic homomorphic operations such as SIMD (single instruction multiple data) addition, SIMD multiplication and ciphertext permutation. Second, we implement the Gazelle homomorphic linear algebra kernels which map neural network layers to optimized homomorphic matrix-vector multiplication and convolution routines. Third, we design optimized encryption switching protocols which seamlessly convert between homomorphic and garbled circuit encodings to enable implementation of complete neural network inference. We evaluate our protocols on benchmark neural networks trained on the MNIST and CIFAR-10 datasets and show that Gazelle outperforms the best existing systems such as MiniONN (ACM CCS 2017) by 20x and Chameleon (Crypto Eprint 2017/1164) by 30x in online runtime. Similarly when compared with fully homomorphic approaches like CryptoNets (ICML 2016) we demonstrate *three orders of magnitude* faster online run-time.
Last updated:  2018-10-18
Template-based Fault Injection Analysis of Block Ciphers
Ashrujit Ghoshal, Sikhar Patranabis, Debdeep Mukhopadhyay
We present the first template-based fault injection analysis of FPGA-based block cipher implementations. While template attacks have been a popular form of side-channel analysis in the cryptographic literature, the use of templates in the context of fault attacks has not yet been explored to the best of our knowledge. Our approach involves two phases. The first phase is a profiling phase where we build templates of the fault behavior of a cryptographic device for different secret key segments under different fault injection intensities. This is followed by a matching phase where we match the observed fault behavior of an identical but black-box device with the pre-built templates to retrieve the secret key. We present a generic treatment of our template-based fault attack approach for SPN block ciphers, and illustrate the same with case studies on a Xilinx Spartan-6 FPGA-based implementation of AES-128.
Last updated:  2018-09-04
SIFA: Exploiting Ineffective Fault Inductions on Symmetric Cryptography
Christoph Dobraunig, Maria Eichlseder, Thomas Korak, Stefan Mangard, Florian Mendel, Robert Primas
Since the seminal work of Boneh et al., the threat of fault attacks has been widely known and techniques for fault attacks and countermeasures have been studied extensively. The vast majority of the literature on fault attacks focuses on the ability of fault attacks to change an intermediate value to a faulty one, such as differential fault analysis (DFA), collision fault analysis, statistical fault attack (SFA), fault sensitivity analysis, or differential fault intensity analysis (DFIA). The other aspect of faults---that faults can be induced and do not change a value---has been researched far less. In case of symmetric ciphers, ineffective fault attacks (IFA) exploit this aspect. However, IFA relies on the ability of an attacker to reliably induce reproducible deterministic faults like stuck-at faults on parts of small values (e.g., one bit or byte), which is often considered to be impracticable. As a consequence, most countermeasures against fault attacks do not focus on such attacks, but on attacks exploiting changes of intermediate values and usually try to detect such a change (detection-based), or to destroy the exploitable information if a fault happens (infective countermeasures). Such countermeasures implicitly assume that the release of "fault-free" ciphertexts in the presence of a fault-inducing attacker does not reveal any exploitable information. In this work, we show that this assumption is not valid and we present novel fault attacks that work in the presence of detection-based and infective countermeasures. The attacks exploit the fact that intermediate values leading to "fault-free" ciphertexts show a non-uniform distribution, while they should be distributed uniformly. The presented attacks are entirely practical and are demonstrated to work for software implementations of AES and for a hardware co-processor. These practical attacks rely on fault induction by means of clock glitches and hence, are achieved using only low-cost equipment. This is feasible because our attack is very robust under noisy fault induction attempts and does not require the attacker to model or profile the exact fault effect. We target two types of countermeasures as examples: simple time redundancy with comparison and several infective countermeasures. However, our attacks can be applied to a wider range of countermeasures and are not restricted to these two countermeasures.
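The statistical core of the attack is easy to see in a toy simulation (ours): condition a uniformly distributed intermediate value on the fault having been ineffective, and the distribution becomes visibly biased:

```python
import random
from collections import Counter

kept = []
for _ in range(10000):
    v = random.randrange(256)  # intermediate byte, uniform a priori
    faulty = v & 0xFE          # stuck-at-0 fault on the least significant bit
    if faulty == v:            # fault ineffective: the ciphertext is correct
        kept.append(v & 1)     # ... and is released by the countermeasure
print(Counter(kept))           # only LSB = 0 survives: maximal bias
```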
Last updated:  2018-01-18
A Unified Framework for Trapdoor-Permutation-Based Sequential Aggregate Signatures
Craig Gentry, Adam O'Neill, Leonid Reyzin
We give a framework for trapdoor-permutation-based sequential aggregate signatures (SAS) that unifies and simplifies prior work and leads to new results. The framework is based on ideal ciphers over large domains, which have recently been shown to be realizable in the random oracle model. The basic idea is to replace the random oracle in the full-domain-hash signature scheme with an ideal cipher. Each signer in sequence applies the ideal cipher, keyed by the message, to the output of the previous signer, and then inverts the trapdoor permutation on the result. We obtain different variants of the scheme by varying additional keying material in the ideal cipher and making different assumptions on the trapdoor permutation. In particular, we obtain the first scheme with lazy verification and signature size independent of the number of signers that does not rely on bilinear pairings. Since existing proofs that ideal ciphers over large domains can be realized in the random oracle model are lossy, our schemes do not currently permit practical instantiation parameters at a reasonable security level, and thus we view our contribution as mainly conceptual. However, we are optimistic that tighter proofs will be found, at least in our specific application.
Last updated:  2018-01-18
Reusing Nonces in Schnorr Signatures
Marc Beunardeau, Aisling Connolly, Houda Ferradi, Rémi Géraud, David Naccache, Damien Vergnaud
The provably secure Schnorr signature scheme is popular and efficient. However, each signature requires a fresh modular exponentiation, which is typically a costly operation. As the increased uptake in connected devices revives the interest in resource-constrained signature algorithms, we introduce a variant of Schnorr signatures that mutualises exponentiation efforts. Combined with precomputation techniques (which would not yield as interesting results for the original Schnorr algorithm), we can amortise the cost of exponentiation over several signatures: these signatures share the same nonce. Sharing a nonce is a deadly blow to Schnorr signatures, but is not a security concern for our variant. Our scheme is provably secure, asymptotically faster than Schnorr when combined with efficient precomputation techniques, and experimentally $2$ to $6$ times faster than Schnorr for the same number of signatures when using 1 MB of static storage.
Last updated:  2018-05-20
Simple Schnorr Multi-Signatures with Applications to Bitcoin
Gregory Maxwell, Andrew Poelstra, Yannick Seurin, Pieter Wuille
We describe a new Schnorr-based multi-signature scheme (i.e., a protocol which allows a group of signers to produce a short, joint signature on a common message) called MuSig, provably secure in the plain public-key model (meaning that signers are only required to have a public key, but do not have to prove knowledge of the private key corresponding to their public key to some certification authority or to other signers before engaging in the protocol), which improves over the state-of-the-art scheme of Bellare and Neven (ACM-CCS 2006) and its variants by Bagherzandi et al. (ACM-CCS 2008) and Ma et al. (Des. Codes Cryptogr., 2010) in two respects: (i) it is simple and efficient, having the same key and signature size as standard Schnorr signatures; (ii) it allows key aggregation, which informally means that the joint signature can be verified exactly as a standard Schnorr signature with respect to a single ``aggregated'' public key which can be computed from the individual public keys of the signers. To the best of our knowledge, this is the first multi-signature scheme provably secure in the plain public-key model which allows key aggregation. As an application, we explain how our new multi-signature scheme could improve both performance and user privacy in Bitcoin.
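To make the key-aggregation idea concrete, here is a toy sketch (ours, not the paper's code) over a small Schnorr group; real MuSig works over elliptic-curve groups and needs careful interactive nonce handling that this demo omits. The group parameters and hash usage are arbitrary demo choices.

```python
import hashlib
import random

p, q, g = 2039, 1019, 4          # toy Schnorr group: g has order q in Z_p^*

def H(*xs):
    d = hashlib.sha256(",".join(map(str, xs)).encode()).digest()
    return int.from_bytes(d, "big") % q

rng = random.Random(1)
sk = [rng.randrange(1, q) for _ in range(3)]
pk = [pow(g, x, p) for x in sk]

L = tuple(sorted(pk))
a = [H(L, X) for X in pk]                        # per-signer coefficients
X_agg = 1
for X, ai in zip(pk, a):
    X_agg = X_agg * pow(X, ai, p) % p            # aggregated public key

m = "message"
r = [rng.randrange(1, q) for _ in sk]            # nonces (interactive IRL)
R = 1
for ri in r:
    R = R * pow(g, ri, p) % p
c = H(X_agg, R, m)
s = sum(ri + c * ai * xi for ri, ai, xi in zip(r, a, sk)) % q

# The joint signature (R, s) verifies exactly like an ordinary Schnorr
# signature under the single aggregated key X_agg:
print(pow(g, s, p) == R * pow(X_agg, c, p) % p)  # True
```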
Last updated:  2018-02-04
Homomorphic Lower Digits Removal and Improved FHE Bootstrapping
Hao Chen, Kyoohyung Han
Bootstrapping is a crucial operation in Gentry's breakthrough work on fully homomorphic encryption (FHE), where a homomorphic encryption scheme evaluates its own decryption algorithm. There have been a couple of implementations of bootstrapping, among which HElib arguably marks the state of the art in terms of throughput, ciphertext/message size ratio and support for large plaintext moduli. In this work, we apply a family of "lowest digit removal" polynomials to improve the homomorphic digit extraction algorithm, which is a crucial part of bootstrapping for both the FV and BGV schemes. If the secret key has 1-norm $h=l_1(s)$ and the plaintext modulus is $t = p^r$, we achieve bootstrapping depth $\log h + \log( \log_p(ht))$ in the FV scheme. In the case of the BGV scheme, we bring down the depth from $\log h + 2 \log t$ to $\log h + \log t$. We implemented bootstrapping for FV in the SEAL library. Besides the regular mode, we introduce another "slim mode", which restricts the plaintexts to batched vectors in $\mathbb{Z}_{p^r}$. The slim mode has throughput similar to the regular mode, while each individual run is much faster and uses much less memory. For example, bootstrapping takes $6.75$ seconds for a 7-bit plaintext space with 64 slots and $1381$ seconds for a $GF(257^{128})$ plaintext space with 128 slots. We also implemented our improved digit extraction procedure for the BGV scheme in HElib.
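For intuition about the digit extraction step, the following cleartext analogue (our illustration, not the homomorphic procedure) shows the arithmetic being emulated: the message occupies the top $r$ base-$p$ digits of a value modulo $p^e$, the noise the lower ones, and bootstrapping must clear the lower digits. The hard part the paper improves, evaluating this map as a low-depth polynomial on encrypted data, is exactly what this toy skips.

```python
def remove_lower_digits(z, p, v):
    """Drop the v lowest base-p digits of z (cleartext model)."""
    return (z - z % p ** v) // p ** v

p, r, e = 2, 4, 10          # plaintext modulus t = p^r, working modulus p^e
m = 0b1011                  # 4-bit message held in the top r digits
noise = 37                  # noise in the lower e-r digits, < p**(e-r)
z = (m * p ** (e - r) + noise) % p ** e
assert remove_lower_digits(z, p, e - r) == m
print(f"recovered message: {remove_lower_digits(z, p, e - r):#06b}")
```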
Last updated:  2018-01-18
Tweaking Generic OTR to Avoid Forgery Attacks
Hassan Qahur Al Mahri, Leonie Simpson, Harry Bartlett, Ed Dawson, Kenneth Koon-Ho Wong
This paper considers the security of the Offset Two-Round (OTR) authenticated encryption mode \cite{cryptoeprint:2013:628} with respect to forgery attacks. The current version of OTR gives a security proof for specific choices of the block size $(n)$ and the primitive polynomial used to construct the finite field $\mathbb{F}_{2^n}$. Although the OTR construction is generic, the security proof is not: for every choice of finite field, the distinctness of the masking coefficients must be verified to ensure security. In this paper, we show that some primitive polynomials result in collisions among the masking coefficients used in the current instantiation, from which forgeries can be constructed. We propose a new way to instantiate OTR so that the masking coefficients are distinct in every finite field $\mathbb{F}_{2^n}$, thus generalising OTR without reducing its security.
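The per-field distinctness check the abstract calls for is easy to script; the sketch below (ours) enumerates a simplified mask set of the form $2^iL$ and $3\cdot 2^iL$ in a toy field and reports collisions. The field size, polynomial, and mask set are stand-ins, not OTR's actual instantiation.

```python
def gf_double(x, n, poly):
    """Multiply x by 2 in GF(2^n) defined by the polynomial `poly`."""
    x <<= 1
    return x ^ poly if x >> n else x

def mask_coefficients(n, poly, rounds):
    coeffs, L = [], 1
    for i in range(rounds):
        coeffs.append((f"2^{i}", L))
        coeffs.append((f"3*2^{i}", L ^ gf_double(L, n, poly)))  # 3x = x ^ 2x
        L = gf_double(L, n, poly)
    return coeffs

n, poly = 8, 0x11B          # toy field GF(2^8) with the AES polynomial
seen = {}
for name, value in mask_coefficients(n, poly, 20):
    if value in seen:
        print(f"collision: {name} == {seen[value]} (value {value:#04x})")
    seen.setdefault(value, name)
print("distinctness check finished")
```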
Last updated:  2019-03-16
Non-Locality in Interactive Proofs
Claude Crépeau, Nan Yang
In multi-prover interactive proofs (MIPs), the verifier is usually non-adaptive. This stems from an implicit problem which we call ``contamination'' by the verifier. We make the verifier contamination problem explicit, and identify a solution by constructing a generalization of the MIP model. This new model quantifies non-locality as a new dimension in the characterization of MIPs. A new property of zero-knowledge emerges naturally as a result of also quantifying the non-locality of the simulator.
Last updated:  2018-01-18
Systematization Of A 256-Bit Lightweight Block Cipher Marvin
Sukanya Saha, Krishnendu Rarhi, Abhishek Bhattacharya
In a world heavily loaded with information, there is a great need for keeping specific information secure from adversaries. The rapid growth of research in lightweight cryptography can be seen from the number of lightweight stream and block ciphers that have been proposed in recent years. This paper focuses only on lightweight block ciphers. We propose a new 256-bit lightweight block cipher named Marvin that belongs to the family of Extended LS designs.
Last updated:  2018-01-27
The Viability of Post-quantum X.509 Certificates
Panos Kampanakis, Peter Panburana, Ellie Daw, Daniel Van Geest
If quantum computers were built, they would pose concerns for public key cryptography as we know it. Among other cryptographic techniques, they would jeopardize the use of PKI X.509 certificates (RSA, ECDSA) used today for authentication. To overcome the concern, new quantum-secure signature schemes have been proposed in the literature. Most of these schemes have significantly larger public key and signature sizes than the ones used today. Even though post-quantum signatures could work well for some use cases like software signing, there are concerns about the effect their size and processing cost would have on technologies using X.509 certificates. In this work, we investigate the viability of post-quantum signatures in X.509 certificates and protocols that use them (e.g. TLS, IKEv2). We prove that, in spite of common concerns, they could work in today's protocols and could be a viable solution to the emergence of quantum computing. We also quantify the overhead they introduce in protocol connection establishment and show that even though it is significant, it is not detrimental. Finally, we formalize the areas of further testing necessary to conclusively establish that the signature schemes standardized in NIST's PQ Project can work with X.509 certs in a post-quantum Internet.
Last updated:  2018-01-18
Countermeasures against a side-channel attack in a kernel memory
Na-Young Ahn, Dong Hoon Lee
We propose a zero-contention cache policy between the REE and the TEE, with no shared cache lines, to prevent TruSpy attacks in the kernel memory of an embedded system. We also suggest making the delay of the REE data path equal or similar to that of the TEE data path in order to prevent timing side-channel attacks.
Last updated:  2018-02-13
Full-Hiding (Unbounded) Multi-Input Inner Product Functional Encryption from the $k$-Linear Assumption
Pratish Datta, Tatsuaki Okamoto, Junichi Tomida
This paper presents two non-generic and practically efficient private key multi-input functional encryption (MIFE) schemes for the multi-input version of the inner product functionality that are the first to achieve simultaneous message and function privacy, namely, the full-hiding security for a non-trivial multi-input functionality under well-studied cryptographic assumptions. Our MIFE schemes are built in bilinear groups of prime order, and their security is based on the standard $k$-Linear ($k$-LIN) assumption (along with the existence of semantically secure symmetric key encryption and pseudorandom functions). Our constructions support a polynomial number of encryption slots (inputs) without incurring any super-polynomial loss in the security reduction. While the number of encryption slots in our first scheme is a priori bounded, our second scheme can withstand an arbitrary number of encryption slots. Prior to our work, there was no known MIFE scheme for a non-trivial functionality, even without function privacy, that could support an unbounded number of encryption slots without relying on any heavy-duty building block or little-understood cryptographic assumption.
Last updated:  2018-01-16
A Simple Reduction from State Machine Replication to Binary Agreement in Partially Synchronous or Asynchronous Networks
Abhinav Aggarwal, Yue Guo
The recent advent of blockchains has spurred a huge interest in the research and development of numerous cryptocurrencies, as well as in understanding the fundamental concepts that underlie this technology. At the heart of this design is the classic state machine replication protocol in which a group of n machines (out of which f are Byzantine) want to agree on an ever-growing log of transactions. In this paper, we present a simple black-box reduction from state machine replication (SMR) to the classical binary agreement (BA) protocol on top of a fully decentralized network. We consider both synchronous and partially synchronous/asynchronous settings for our reduction. We also present an algorithm for a reduction from BA to SMR, thus establishing an equivalence between the two. In each of these settings, we analyze our algorithms with respect to the required security properties. Although there is prior work that establishes these reductions, our solutions are simpler (at the cost of efficiency) and useful from a pedagogical point of view.
Last updated:  2018-01-16
New Insights into Divide-and-Conquer Attacks on the Round-Reduced Keccak-MAC
Chen-Dong Ye, Tian Tian
Keccak is the final winner of the SHA-3 competition and can also be used to build message authentication codes. The basic and balanced divide-and-conquer attacks on Keccak-MAC were proposed by Dinur et al. at Eurocrypt 2015. Both attacks use the idea of cube attacks to divide key bits into small portions. In this paper, by carefully analysing the mappings used in Keccak-MAC, it is found that some cube variables can divide key bits into smaller portions, yielding better divide-and-conquer attacks. Furthermore, in order to evaluate the resistance of Keccak-MAC against divide-and-conquer attacks based on cubes, we theoretically analyse lower bounds on the complexities of divide-and-conquer attacks. It is shown that these lower bounds are still not better than the complexities of the conditional cube tester proposed by Huang et al. This indicates that Keccak-MAC resists divide-and-conquer attacks better than it resists the conditional cube tester. We hope that these techniques can still provide some new insights for future cryptanalysis of Keccak.
Last updated:  2021-03-23
Leakage-resilient Algebraic Manipulation Detection Codes with Optimal Parameters
Divesh Aggarwal, Tomasz Kazana, Maciej Obremski
Algebraic Manipulation Detection (AMD) codes [CDF+08] are keyless message authentication codes that protect messages against additive tampering by the adversary, assuming that the adversary cannot "see" the codeword. For certain applications, it is unreasonable to assume that the adversary computes the added offset without any knowledge of the codeword $c$. Recently, Ahmadi and Safavi-Naini [AS13], and then Lin, Safavi-Naini, and Wang [LSW16] gave constructions of leakage-resilient AMD codes where the adversary has some partial information about the codeword before choosing the added offset, and the scheme is secure even conditioned on this partial information. In this paper we show bounds on the leakage rate $r$ and the code rate $k$ for leakage-resilient AMD codes. In particular we prove that $2r + k < 1$, and for the weak case (security is averaged over a uniformly random message) $r + k < 1$. These bounds hold even if the adversary is polynomial-time bounded, as long as we allow the leakage function to be arbitrary. We present constructions of AMD codes that (asymptotically) fulfill the above bounds for almost the full range of parameters $r$ and $k$. This shows that the above bounds and constructions are in fact optimal. In the last section we show that if the leakage function is computationally bounded (we use the Ideal Cipher Model), then it is possible to break these bounds.
Last updated:  2019-10-03
Efficient Noninteractive Certification of RSA Moduli and Beyond
Sharon Goldberg, Leonid Reyzin, Omar Sagga, Foteini Baldimtsi
In many applications, it is important to verify that an RSA public key $(N,e)$ specifies a permutation over the entire space $Z_N$, in order to prevent attacks due to adversarially-generated public keys. We design and implement a simple and efficient noninteractive zero-knowledge protocol (in the random oracle model) for this task. Applications concerned about adversarial key generation can just append our proof to the RSA public key without any other modifications to existing code or cryptographic libraries. Users need only perform a one-time verification of the proof to ensure that raising to the power $e$ is a permutation of the integers modulo $N$. For typical parameter settings, the proof consists of nine integers modulo $N$; generating the proof and verifying it both require about nine modular exponentiations. We extend our results beyond RSA keys and also provide efficient noninteractive zero-knowledge proofs for other properties of $N$, which can be used to certify that $N$ is suitable for the Paillier cryptosystem, is a product of two primes, or is a Blum integer. As compared to the recent work of Auerbach and Poettering (PKC 2018), who provide two-message protocols for similar languages, our protocols are more efficient and do not require interaction, which enables a broader class of applications.
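The shape of this protocol suggests a compact sketch (ours, with toy parameters; not the paper's exact proof system): the proof consists of $e$-th roots of hash-derived challenge points, so verification costs one exponentiation per point.

```python
# Toy primes and hash usage are illustrative; real N is 2048+ bits.
import hashlib
from math import gcd

def challenge(N, e, i):
    h = hashlib.sha256(f"{N},{e},{i}".encode()).digest()
    return int.from_bytes(h, "big") % N

def prove(p, q, e, count=9):           # prover knows the factorization
    N = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
    d = pow(e, -1, lam)                # exists iff x -> x^e permutes Z_N
    return [pow(challenge(N, e, i), d, N) for i in range(count)]

def verify(N, e, proof):
    return all(pow(s, e, N) == challenge(N, e, i) for i, s in enumerate(proof))

p, q, e = 1009, 1013, 65537
print(verify(p * q, e, prove(p, q, e)))   # True
```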
Last updated:  2018-07-01
SETLA: Signature and Encryption from Lattices
François Gérard, Keno Merckx
In data security, the main objectives one tries to achieve are privacy, data integrity and authentication. In a public-key setting, privacy is reached through asymmetric encryption, and both data integrity and authentication through signatures. Meeting all the security objectives for data exchange requires a combination of those primitives in an encrypt-then-sign or sign-then-encrypt fashion. Signcryption aims at providing all the security requirements in one single primitive at a lower cost than using encryption and signature together. Most existing signcryption schemes use ElGamal-based or pairing-based techniques and thus rely on the decisional Diffie-Hellman assumption. With the growing quantum threat, we seek post-quantum counterparts to a vast majority of public-key primitives. In this work, we propose a lattice-based signcryption scheme in the random oracle model inspired by a construction of Malone-Lee. It comes in two flavors, one integrating the usual lattice-based key exchange into the signature and the other merging the scheme with an RLWE encryption scheme. Our instantiation is based on a ring version of the scheme of Bai and Galbraith, as was done in ring-TESLA and TESLA$\sharp$. It targets 128 bits of classical security and offers a saving in bandwidth over a naive concatenation of state-of-the-art key exchanges and signatures from the literature. Another lightweight instantiation derived from GLP is feasible but raises long-term security concerns since the base scheme is somewhat outdated.
Last updated:  2018-01-16
High-Resolution EM Attacks Against Leakage-Resilient PRFs Explained - And An Improved Construction
Florian Unterstein, Johann Heyszl, Fabrizio De Santis, Robert Specht, Georg Sigl
Achieving side-channel resistance through Leakage Resilience (LR) is highly relevant for embedded devices where requirements of other countermeasures, such as e.g. high-quality random numbers, are hard to guarantee. The main challenge of LR lies in the initialization of a secret pseudorandom state from a long-term key and public input. Leakage-Resilient Pseudo-Random Functions (LR-PRFs) aim at solving this by bounding side-channel leakage to non-exploitable levels through frequent re-keying. Medwed et al. recently presented an improved construction at ASIACRYPT 2016 which uses 'unknown-inputs' in addition to limited data complexity and correlated algorithmic noise from parallel S-boxes. However, a subsequent investigation uncovered a vulnerability to high-precision EM analysis on FPGA. In this paper, we follow up on the reasons why such attacks succeed on FPGAs. We find that in addition to the high spatial resolution, it is mainly the high temporal resolution which leads to the reduction of algorithmic noise from parallel S-boxes. While spatial resolution is less threatening for smaller technologies than the used FPGA, temporal resolution will likely remain an issue since balancing the timing behavior of signals in the nanosecond range seems infeasible today. Nonetheless, we present an improvement of the ASIACRYPT 2016 construction to effectively protect against EM attacks with such high spatial and high temporal resolution. We carefully introduce additional key entropy into the LR-PRF construction to achieve a high remaining security level even when implemented on FPGAs. With this improvement, we finally achieve side-channel secure LR-PRFs in a practical and simple way under verifiable empirical assumptions.
Last updated:  2018-08-20
More Efficient (Almost) Tightly Secure Structure-Preserving Signatures
Romain Gay, Dennis Hofheinz, Lisa Kohl, Jiaxin Pan
We provide a structure-preserving signature (SPS) scheme with an (almost) tight security reduction to a standard assumption. Compared to the state-of-the-art tightly secure SPS scheme of Abe et al. (CRYPTO 2017), our scheme has smaller signatures and public keys (of about \(56\%\), resp. \(40\%\) of the size of signatures and public keys in Abe et al.'s scheme), and a lower security loss (of \(O(\log Q)\) instead of \(O(\lambda)\), where \(\lambda\) is the security parameter, and \(Q=poly(\lambda)\) is the number of adversarial signature queries). While our scheme is still less compact than structure-preserving signature schemes \emph{without} tight security reduction, it significantly lowers the price to pay for a tight security reduction. In fact, when accounting for a non-tight security reduction with larger key (i.e., group) sizes, the computational efficiency of our scheme becomes at least comparable to that of non-tightly secure SPS schemes. Technically, we combine and refine recent existing works on tightly secure encryption and SPS schemes. Our technical novelties include a modular treatment (that develops an SPS scheme out of a basic message authentication code), and a refined hybrid argument that enables a lower security loss of \(O(\log Q)\) (instead of \(O(\lambda)\)).
Last updated:  2020-06-04
Study of Deep Learning Techniques for Side-Channel Analysis and Introduction to ASCAD Database
Emmanuel Prouff, Remi Strullu, Ryad Benadjila, Eleonora Cagli, Cecile Dumas
To provide assurance of the resistance of a system against side-channel analysis, several national or private schemes today promote an evaluation strategy, common in classical cryptography, which focuses on the most powerful adversary, who may train to learn the dependency between the device behaviour and the sensitive data values. Several works have shown that this kind of analysis, known as Template Attacks in the side-channel domain, can be rephrased as a classical Machine Learning classification problem with a learning phase. Following the current trend in the latter area, recent works have demonstrated that deep learning algorithms are very efficient for conducting security evaluations of embedded systems and have many advantages compared to other methods. Unfortunately, their hyper-parametrization has often been kept secret by the authors, who only discussed the main design principles and the attack efficiencies. This is clearly an important limitation of previous works since (1) such parametrization is known to be a challenging question in Machine Learning and (2) it does not allow for the reproducibility of the presented results. This paper aims to address these limitations in several ways. First, completing recent works, we propose a comprehensive study of deep learning algorithms when applied in the context of side-channel analysis and we clarify the links with classical template attacks. Secondly, we address the question of the choice of hyper-parameters for the class of multi-layer perceptron networks and convolutional neural networks. Several benchmarks and rationales are given in the context of the analysis of a masked implementation of the AES algorithm. To enable perfect reproducibility of our tests, this work also introduces an open platform including all the sources of the target implementation together with the campaign of electromagnetic measurements exploited in our benchmarks. This open database, named ASCAD, has been specified to serve as a common basis for further works on this subject. Our work confirms the conclusions made by Cagli et al. at CHES 2017 about the high potential of convolutional neural networks. Interestingly, it shows that the approach followed to design the VGG-16 algorithm used for image recognition also seems to be sound when it comes to fixing an architecture for side-channel analysis.
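A toy restatement (ours, not ASCAD code) of "template attack as ML classification": simulate traces in which one sample leaks the Hamming weight of an S-box output, profile a small MLP on labeled traces, and check that it classifies fresh traces well above the 9-class prior. The S-box, leakage model, and hyper-parameters are invented for the demo.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
HW = np.array([bin(x).count("1") for x in range(256)])
SBOX = np.arange(256)
rng.shuffle(SBOX)                       # hypothetical stand-in for the S-box

def make_traces(key, n):
    """n traces of 20 samples; sample 10 leaks HW(S(p ^ key)) plus noise."""
    pt = rng.integers(0, 256, n)
    label = HW[SBOX[pt ^ key]]
    traces = rng.normal(0.0, 0.5, (n, 20))
    traces[:, 10] += label
    return traces, label

X, y = make_traces(key=0x2B, n=20_000)            # profiling phase
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=50).fit(X, y)
Xa, ya = make_traces(key=0x2B, n=2_000)           # attack phase
print("HW-class accuracy:", model.score(Xa, ya))  # far above the 9-class prior
```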
Last updated:  2018-01-15
Optimizing Trees for Static Searchable Encryption
Mohammad Etemad, Mohammad Mahmoody, David Evans
Searchable symmetric encryption (SSE) enables data owners to conduct searches over encrypted data stored by an untrusted server, retrieving only those encrypted files that match the search queries. Several recent schemes employ a server-side encrypted index in the form of a search tree where each node stores a bit vector denoting for each keyword whether any file in its subtree contains that keyword. Our work is motivated by the observation that the way data is distributed in such a search tree has a big impact on the cost of searches. For single-keyword queries, it impacts the number of different paths that must be followed to find all the matching files; for multi-keyword queries, the arrangement of the tree also impacts the number of nodes visited during the search on paths that do not lead to any satisfying data elements. We present three algorithms that improve the performance of SSE schemes based on tree indexes and prove that for cases where the search cost is high, the cost of our algorithms converges to the cost of the optimal tree. In our experiments, the resulting search trees outperform the arbitrary search trees used in previous works by a factor of up to two.
Last updated:  2019-02-23
Semantic Security Invariance under Variant Computational Assumptions
Eftychios Theodorakis, John C. Mitchell
A game-based cryptographic proof is a relation that establishes equivalence between probabilistic sequences of actions by real and ideal world players. The author of a proof selects a hardness assumption system for their proof upon which to base their subsequent statements. In this paper, we prove the existence of proof-invariant transformations for varying hardness assumptions. We show that for two systems satisfying certain algebraic properties any proof in one system has an equivalent valid proof in the other. This validates Kurosawa’s remark about the existence of proof similarities. Our result implies a correspondence between the Learning With Errors (LWE) problems and both the Elliptic Curve Discrete Log problem (ECDLP) and the Discrete Logarithm (DLOG) problem. To illustrate this result, we provide a series of example transformations in the appendix. The concrete result of this paper is a prototype proof translation tool.
Last updated:  2018-11-29
A Constructive Perspective on Signcryption Security
Christian Badertscher, Fabio Banfi, Ueli Maurer
Signcryption is a public-key cryptographic primitive, originally introduced by Zheng (Crypto '97), that allows parties to establish secure communication without the need of prior key agreement. Instead, a party registers its public key at a certificate authority (CA), and only needs to retrieve the public key of the intended partner from the CA before being able to protect the communication. Signcryption schemes provide both authenticity and confidentiality of sent messages and can offer a simpler interface to applications and better performance compared to generic compositions of signature and encryption schemes. Although introduced two decades ago, the question of which security notions of signcryption are adequate in which applications has still not reached a fully satisfactory answer. To resolve this question, we conduct a constructive analysis of this public-key primitive. Similar to previous constructive studies for other important primitives, this treatment allows to identify the natural goal that signcryption schemes should achieve and to formalize this goal in a composable framework. More specifically, we capture the goal of signcryption as a gracefully-degrading secure network, which is basically a network of independent parties that allows secure communication between any two parties. However, when a party is compromised, its respective security guarantees are lost, while all guarantees for the remaining users remain unaffected. We show which security notions for signcryption are sufficient to construct this kind of secure network from a certificate authority (or key registration resource) and insecure communication. Our study does not only unveil that it is the so-called insider-security notion that enables this construction, but also that a weaker version thereof would already be sufficient. This may be of interest in the context of practical signcryption schemes that do not achieve the stronger notions. Last but not least, we observe that the graceful-degradation property is actually an essential feature of signcryption that stands out in comparison to alternative and more standard constructions that achieve secure communication from the same assumptions. This underlines the vital importance of the insider security notion for signcryption and strongly supports, in contrast to the initial belief, the recent trend to consider the insider security notion as the standard notion for signcryption.
Last updated:  2021-05-31
Attacks and Countermeasures for White-box Designs
Alex Biryukov, Aleksei Udovenko
In traditional symmetric cryptography, the adversary has access only to the inputs and outputs of a cryptographic primitive. In the white-box model the adversary is given full access to the implementation. He can use both static and dynamic analysis as well as fault analysis in order to break the cryptosystem, e.g. to extract the embedded secret key. Implementations secure in such a model have many applications in industry. However, creating such implementations turns out to be a very challenging if not an impossible task. Recently, Bos et al. proposed a generic attack on white-box primitives called differential computation analysis (DCA). This attack was applied to many white-box implementations both from academia and industry. The attack comes from the area of side-channel analysis and the most common method protecting against such attacks is masking, which in turn is a form of secret sharing. In this paper we present multiple generic attacks against masked white-box implementations. We use the term “masking” in a very broad sense. As a result, we deduce new constraints that any secure white-box implementation must satisfy. Based on the new constraints, we develop a general method for protecting white-box implementations. We split the protection into two independent components: value hiding and structure hiding. Value hiding must provide protection against passive DCA-style attacks that rely on analysis of computation traces. Structure hiding must provide protection against circuit analysis attacks. In this paper we focus on developing the value hiding component. It includes protection against the DCA attack by Bos et al. and protection against a new attack called the algebraic attack. We present a provably secure first-order protection against the new algebraic attack. The protection is based on small gadgets implementing secure masked XOR and AND operations. Furthermore, we give a proof of compositional security allowing to freely combine secure gadgets. We derive concrete security bounds for circuits built using our construction.
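For reference, the classic first-order gadgets that such masking builds on can be stated in a few lines (our sketch of plain ISW-style boolean masking; the paper's gadgets add white-box-specific protections on top):

```python
import itertools
import random

def share(x):                       # split bit x into two random shares
    r = random.randrange(2)
    return (r, x ^ r)

def xor_g(a, b):                    # sharewise XOR is already secure
    return (a[0] ^ b[0], a[1] ^ b[1])

def and_g(a, b):                    # 2-share ISW multiplication
    r = random.randrange(2)
    c0 = (a[0] & b[0]) ^ r
    c1 = (a[1] & b[1]) ^ r ^ (a[0] & b[1]) ^ (a[1] & b[0])
    return (c0, c1)

def unshare(c):
    return c[0] ^ c[1]

# Exhaustive correctness check over all inputs:
for x, y in itertools.product((0, 1), repeat=2):
    assert unshare(xor_g(share(x), share(y))) == x ^ y
    assert unshare(and_g(share(x), share(y))) == x & y
print("gadgets correct")
```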
Last updated:  2018-08-08
Impossible Differential Cryptanalysis on Deoxys-BC-256
Alireza Mehrdad, Farokhlagha Moazami, Hadi Soleimany
Deoxys is a third-round candidate of the CAESAR competition. This paper presents the first impossible differential cryptanalysis of Deoxys-BC-256, which is used in Deoxys as an internal tweakable block cipher. First, we find a 4.5-round impossible differential characteristic by utilizing a miss-in-the-middle approach. We then present several cryptanalyses based upon the 4.5-round distinguisher against round-reduced Deoxys-BC-256 in both single-key and related-key settings. Our contributions include impossible differential attacks on up to 8 rounds of Deoxys-BC-256 in the tweak-key model, which is, to the best of our knowledge, the first independent investigation of the security of Deoxys-BC-256 in the single-key model. Our attack reaches 9 rounds in the related-key related-tweak model; it has a slightly higher data complexity than the best previous result, obtained by a rectangle attack presented at FSE 2018, but requires a lower memory complexity with equal time complexity.
Last updated:  2018-01-15
The distinguishing attack on Speck, Simon, Simeck, HIGHT and LEA
Boris Ryabko, Aleksandr Soskov
The purpose of this work is to estimate the resistance of the lightweight block ciphers Speck, Simon, Simeck, HIGHT and LEA to a distinguishing attack. (This attack is a form of cryptanalysis on data encrypted by a cipher that allows an attacker to distinguish the encrypted data from random data.) Modern lightweight block ciphers must be designed to be immune to such an attack. It turned out that Speck, Simon, HIGHT and LEA showed sufficient resistance to the distinguishing attack, whereas Simeck with a 48-bit block size and 96-bit key size was not immune to this attack.
Last updated:  2018-03-06
Scalable, transparent, and post-quantum secure computational integrity
Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, Michael Riabzev
Human dignity demands that personal information, like medical and forensic data, be hidden from the public. But veils of secrecy designed to preserve privacy may also be abused to cover up lies and deceit by parties entrusted with data, unjustly harming citizens and eroding trust in central institutions. Zero knowledge (ZK) proof systems are an ingenious cryptographic solution to the tension between the ideals of personal privacy and institutional integrity, enforcing the latter in a way that does not compromise the former. Public trust demands transparency from ZK systems, meaning they be set up with no reliance on any trusted party, and have no trapdoors that could be exploited by powerful parties to bear false witness. For ZK systems to be used with Big Data, it is imperative that the public verification process scale sublinearly in data size. Transparent ZK proofs that can be verified exponentially faster than data size were first described in the 1990s, but early constructions were impractical, and no ZK system realized thus far in code (including that used by crypto-currencies like Zcash) has achieved both transparency and exponential verification speedup, simultaneously, for general computations. Here we report the first realization of a transparent ZK system (ZK-STARK) in which verification scales exponentially faster than database size, and moreover, this exponential speedup in verification is observed concretely for meaningful and sequential computations, described next. Our system uses several recent advances on interactive oracle proofs (IOP), such as a “fast” (linear time) IOP system for error correcting codes. Our proof-of-concept system allows the Police to prove to the public that the DNA profile of a Presidential Candidate does not appear in the forensic DNA profile database maintained by the Police. The proof, which is generated by the Police, relies on no external trusted party, and reveals no further information about the contents of the database, nor about the candidate’s profile; in particular, no DNA information is disclosed to any party outside the Police. The proof is shorter than the size of the DNA database, and verified faster than the time needed to examine that database naively.
Last updated:  2018-01-10
Efficient Batch Zero-Knowledge Arguments for Low Degree Polynomials
Jonathan Bootle, Jens Groth
The work of Bootle et al. (EUROCRYPT 2016) constructs an extremely efficient zero-knowledge argument for arithmetic circuit satisfiability in the discrete logarithm setting. However, the argument does not treat relations involving commitments, and furthermore, for simple polynomial relations, the complex machinery employed is unnecessary. In this work, we give a framework for expressing simple relations between commitments and field elements, and present a zero-knowledge argument which is considerably more efficient than Bootle et al. in the case where the polynomials in the relation have low degree. Our method also directly yields a batch protocol, which allows many copies of the same relation to be more efficiently proved and verified in a single argument. We instantiate our protocol with concrete polynomial relations to construct zero-knowledge arguments for membership proofs, polynomial evaluation proofs, and range proofs. Our work can be seen as a unified explanation of the underlying ideas of these protocols. In some of these instantiations we also achieve better efficiency than the state of the art.
Last updated:  2018-01-10
Fast Lattice Basis Reduction Suitable for Massive Parallelization and Its Application to the Shortest Vector Problem
Tadanori Teruya, Kenji Kashiwabara, Goichiro Hanaoka
The hardness of the shortest vector problem for lattices is a fundamental assumption underpinning the security of many lattice-based cryptosystems, and therefore, it is important to evaluate its difficulty. Here, recent advances in studying the hardness of problems in large-scale lattice computing have pointed to the need to study the design and methodology for exploiting the performance of massive parallel computing environments. In this paper, we propose a lattice basis reduction algorithm suitable for massive parallelization. Our parallelization strategy is an extension of the Fukase-Kashiwabara algorithm~(J. Information Processing, Vol. 23, No. 1, 2015). In our algorithm, given a lattice basis as input, variants of the lattice basis are generated, and then each process reduces its lattice basis; at this time, the processes cooperate and share auxiliary information with each other to accelerate lattice basis reduction. In addition, we propose a new strategy based on our evaluation function of a lattice basis in order to decrease the sum of squared lengths of orthogonal basis vectors. We applied our algorithm to problem instances from the SVP Challenge. We solved a 150-dimension problem instance in about 394 days by using large clusters, and we also solved problem instances of dimensions 134, 138, 140, 142, 144, 146, and 148. Since the previous world record is the problem of dimension 132, these results demonstrate the effectiveness of our proposal.
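The quantity driving the evaluation function mentioned above is easy to prototype (our sketch; the paper's function involves more than this): orthogonalize the basis and sum the squared lengths of the Gram-Schmidt vectors, so that reduction progress shows up as a decreasing score.

```python
import numpy as np

def gso_squared_length_sum(B):
    """Rows of B are basis vectors; returns sum of ||b_i*||^2 over the GSO."""
    B = np.array(B, dtype=float)
    Bstar = []
    for b in B:
        v = b - sum(((b @ u) / (u @ u)) * u for u in Bstar)
        Bstar.append(v)
    return sum(float(v @ v) for v in Bstar)

B = [[3, 1, 0], [1, 2, 1], [0, 1, 4]]
print(gso_squared_length_sum(B))   # lower scores indicate a better-reduced basis
```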
Last updated:  2018-01-16
Efficient Adaptively Secure Zero-knowledge from Garbled Circuits
Chaya Ganesh, Yashvanth Kondi, Arpita Patra, Pratik Sarkar
Zero-knowledge (ZK) protocols are undoubtedly among the central primitives in cryptography, lending their power to numerous applications such as secure computation, voting, auctions, and anonymous credentials, to name a few. The study of efficient ZK protocols for non-algebraic statements has seen rapid progress in recent times, relying on techniques from secure computation. The primary contribution of this work lies in constructing efficient UC-secure constant-round ZK protocols from garbled circuits that are secure against \emph{adaptive} corruptions, with communication linear in the size of the statement. We begin by showing that the practically efficient ZK protocol of Jawurek et al. (CCS 2013) is adaptively secure when the underlying oblivious transfer (OT) satisfies a mild adaptive security guarantee. We gain adaptive security with little to no overhead over the static case. A conditional verification technique is then used to obtain a three-round adaptively secure zero-knowledge argument in the non-programmable random oracle model (NPROM). We draw motivation from state-of-the-art non-interactive secure computation protocols and, leveraging specifics of the ZK functionality, show a two-round protocol that achieves static security. It is a proof, while most known efficient ZK protocols and our three-round protocol are only arguments.
Last updated:  2019-01-31
Improved (Almost) Tightly-Secure Structure-Preserving Signatures
Charanjit S. Jutla, Miyako Ohkubo, Arnab Roy
Structure Preserving Signatures (SPS) allow the signatures and the messages signed to be further encrypted while retaining the ability to be proven valid under zero-knowledge. In particular, SPS are tailored to have structure suitable for Groth-Sahai NIZK proofs. More precisely, the messages, signatures, and verification keys are required to be elements of groups that support efficient bilinear-pairings (bilinear groups), and the signature verification consists of just evaluating one or more bilinear-pairing product equations. Since Groth-Sahai NIZK proofs can (with zero-knowledge) prove the validity of such pairing product equations, it leads to interesting applications such as blind signatures, group signatures, traceable signatures, group encryption, and delegatable credential systems. In this paper, we further improve on the SPS scheme of Abe, Hofheinz, Nishimaki, Ohkubo and Pan (CRYPTO 2017) while maintaining only an $O(\lambda)$-factor security reduction loss to the SXDH assumption. In particular, we compress the size of the signatures by almost 40%, and reduce the number of pairing-product equations in the verifier from fifteen to seven. Recall that structure preserving signatures are used in applications by encrypting the messages and/or the signatures, and hence these optimizations are further amplified as proving pairing-product equations in the Groth-Sahai NIZK system is not frugal. While our scheme uses an important novel technique introduced by Hofheinz (EuroCrypt 2017), i.e., structure-preserving adaptive partitioning, our approach to building the signature scheme is different and this leads to the optimizations mentioned. Thus we make progress towards an open problem stated by Abe et al. (CRYPTO 2017) to design more compact SPSes with a smaller number of group elements.
Last updated:  2018-01-09
Related Randomness Security for Public Key Encryption, Revisited
Takahiro Matsuda, Jacob C. N. Schuldt
Motivated by the history of randomness failures in practical systems, Paterson, Schuldt, and Sibborn (PKC 2014) introduced the notion of related randomness security for public key encryption. In this paper, we firstly show an inherent limitation of this notion: if the family of related randomness functions is sufficiently rich to express the encryption function of the considered scheme, then security cannot be achieved. This suggests that achieving security for function families capable of expressing more complex operations, such as those used in random number generation, might be difficult. The current constructions of related randomness secure encryption in the standard model furthermore reflect this; full security is only achieved for function families with a convenient algebraic structure. We additionally revisit the seemingly optimal random oracle model construction by Paterson et al. and highlight its limitations. To overcome this difficulty, we propose a new notion which we denote related refreshable randomness security. This notion captures a scenario in which an adversary has limited time to attack a system before new entropy is added. More specifically, the number of encryption queries with related randomness the adversary can make before the randomness is refreshed, is bounded, but the adversary is allowed to make an unbounded total number of queries. Furthermore, the adversary is allowed to influence how entropy is added to the system. In this setting, we construct an encryption scheme which remains secure in the standard model for arbitrary function families of size $2^p$ (where $p$ is polynomial in the security parameter) that satisfy certain collision-resistant and output-unpredictability properties. This captures a rich class of functions, which includes, as a special case, circuits of polynomial size. Our scheme makes use of a new construction of a (bounded) related-key attack secure pseudorandom function, which in turn is based on a new flavor of the leftover hash lemma. These technical results might be of independent interest.
Last updated:  2018-01-09
An Analysis of Acceptance Policies For Blockchain Transactions
Seb Neumayer, Mayank Varia, Ittay Eyal
The standard acceptance policy for a cryptocurrency transaction at most exchanges is to wait until the transaction is placed in the blockchain and followed by a certain number of blocks. However, as noted by Sompolinsky and Zohar, the amount of time for blocks to arrive should also be taken into account, as it affects the probability of double spending. Specifically, they propose a dynamic policy for transaction acceptance that depends on both the number of confirmations and the amount of time since transaction broadcast. In this work we study the implications of using such a policy compared with the standard option that ignores block timing information. Using an exact expression for the probability of a double spend, we numerically analyze the time to transaction acceptance (performance) as well as the time and cost to perform a double-spend attack (security). We show that while the expected time required for transaction acceptance is improved using a dynamic policy, the time and cost to perform a double-spend attack for a particular transaction are reduced.
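For context, the classic block-race calculation in this design space looks as follows (our sketch of Nakamoto's estimate; the paper works with an exact expression and additionally incorporates block timing):

```python
from math import exp, factorial

def catch_up(z, q):
    """P(attacker with hashrate share q overtakes a z-block lead)."""
    p = 1 - q
    lam = z * q / p
    s = 1.0
    for k in range(z + 1):
        pois = exp(-lam) * lam ** k / factorial(k)
        s -= pois * (1 - (q / p) ** (z - k))
    return s

print(catch_up(6, 0.1))   # ~0.0002 for six confirmations vs. a 10% attacker
```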
Last updated:  2018-01-09
Faster AVX2 optimized NTT multiplication for Ring-LWE lattice cryptography
Gregor Seiler
Constant-time polynomial multiplication is one of the most time-consuming operations in many lattice-based cryptographic constructions. For schemes based on the hardness of Ring-LWE in power-of-two cyclotomic fields with completely splitting primes, the AVX2-optimized implementation of the Number-Theoretic Transform (NTT) from the NewHope key-exchange scheme is the state of the art for fast multiplication; it uses floating-point vector instructions. We show that by using a modification of the Montgomery reduction algorithm that enables a fast approach with integer instructions, we can improve on the polynomial multiplication speeds of NewHope and Kyber by factors of $4.2$ and $6.3$ on Skylake, respectively.
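A scalar sketch of the integer-only reduction idea (ours; the paper's AVX2 code vectorizes across 16-bit lanes and its exact variant differs): signed Montgomery reduction replaces the floating-point step.

```python
Q = 12289                    # NewHope's prime (an assumption for the demo)
R = 1 << 16                  # Montgomery radix matching 16-bit lanes
QINV = pow(Q, -1, R)         # q^{-1} mod 2^16

def montgomery_reduce(a):
    """For |a| < Q*R/2, return r with r == a * R^{-1} (mod Q) and |r| < Q."""
    t = ((a & (R - 1)) * QINV) & (R - 1)    # low 16 bits of a * q^{-1}
    if t >= R // 2:                         # reinterpret as signed 16-bit
        t -= R
    return (a - t * Q) >> 16                # exact: a == t*Q (mod 2^16)

a, b = 1234, 5678
r = montgomery_reduce(a * b)                # = a*b*R^{-1} mod Q, up to sign
print(r % Q == a * b * pow(R, -1, Q) % Q)   # True
```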
Last updated:  2018-01-08
On the Message Complexity of Secure Multiparty Computation
Yuval Ishai, Manika Mittal, Rafail Ostrovsky
We study the minimal number of point-to-point messages required for general secure multiparty computation (MPC) in the setting of computational security against semi-honest, static adversaries who may corrupt an arbitrary number of parties. We show that for functionalities that take inputs from $n$ parties and deliver outputs to $k$ parties, $2n+k-3$ messages are necessary and sufficient. The negative result holds even when given access to an arbitrary correlated randomness setup. The positive result can be based on any 2-round MPC protocol (which in turn can be based on 2-message oblivious transfer), or on a one-way function given a correlated randomness setup.
Last updated:  2018-01-08
Weakly Secure Equivalence-Class Signatures from Standard Assumptions
Georg Fuchsbauer, Romain Gay
Structure-preserving signatures on equivalence classes, or equivalence-class signatures for short (EQS), are signature schemes defined over bilinear groups whose messages are vectors of group elements. Signatures are perfectly randomizable and given a signature on a vector, anyone can derive a signature on any multiple of the vector; EQS thus sign projective equivalence classes. Applications of EQS include the first constant-size anonymous attribute-based credentials, efficient round-optimal blind signatures without random oracles and efficient access-control encryption. To date, the only existing instantiation of EQS is proven secure in the generic-group model. In this work we show that by relaxing the definition of unforgeability, which makes it efficiently verifiable, we can construct EQS from standard assumptions, namely the Matrix-Diffie-Hellman assumptions. We then show that our unforgeability notion is sufficient for most applications.
Last updated:  2018-01-08
Extending Oblivious Transfer with Low Communication via Key-Homomorphic PRFs
Peter Scholl
We present a new approach to extending oblivious transfer with communication complexity that is logarithmic in the security parameter. Our method only makes black-box use of the underlying cryptographic primitives, and can achieve security against an active adversary with almost no overhead on top of passive security. This results in the first oblivious transfer protocol with sublinear communication and active security, which does not require any non-black-box use of cryptographic primitives. Our main technique is a novel twist on the classic OT extension of Ishai et al. (Crypto 2003), using an additively key-homomorphic PRF to reduce interaction. We first use this to construct a protocol for a large batch of 1-out-of-$n$ OTs on random inputs, with amortized $o(1)$ communication. Converting these to 1-out-of-2 OTs on chosen strings requires logarithmic communication. The key-homomorphic PRF used in the protocol can be instantiated under the learning with errors assumption with exponential modulus-to-noise ratio.
Last updated:  2018-01-08
A Linearly Homomorphic Signature Scheme From Weaker Assumptions
Lucas Schabhüser, Johannes Buchmann, Patrick Struck
In delegated computing, prominent in the context of cloud computing, guaranteeing both the correctness and authenticity of computations is of critical importance. Homomorphic signatures can be used as cryptographic solutions to this problem. In this paper we solve the open problem of constructing a linearly homomorphic signature scheme that is secure against an active adversary under standard assumptions. We provide a construction based on the DL and CDH assumptions. Furthermore we show how our scheme can be combined with homomorphic encryption under the framework of Linearly Homomorphic Authenticated Encryption with Public Verifiability. This way we can provide the first such scheme that is context hiding. Furthermore our solution even allows verification in constant time (in an amortized sense).
Last updated:  2018-01-08
Constant-size Group Signatures from Lattices
San Ling, Khoa Nguyen, Huaxiong Wang, Yanhong Xu
Lattice-based group signature is an active research topic in recent years. Since the pioneering work by Gordon, Katz and Vaikuntanathan (Asiacrypt 2010), ten other schemes have been proposed, providing various improvements in terms of security, efficiency and functionality. However, in all known constructions, one has to fix the number $N$ of group users in the setup stage, and as a consequence, the signature sizes are dependent on $N$. In this work, we introduce the first constant-size group signature from lattices, which means that the size of signatures produced by the scheme is independent of $N$ and only depends on the security parameter $\lambda$. More precisely, in our scheme, the sizes of signatures, public key and users' secret keys are all of order $\widetilde{\mathcal{O}}(\lambda)$. The scheme supports dynamic enrollment of users and is proven secure in the random oracle model under the Ring Short Integer Solution (RSIS) and Ring Learning With Errors (RLWE) assumptions. At the heart of our design is a zero-knowledge argument of knowledge of a valid message-signature pair for the Ducas-Micciancio signature scheme (Crypto 2014), that may be of independent interest.
Last updated:  2020-08-31
Two-Factor Password-Authenticated Key Exchange with End-to-End Password Security
Stanislaw Jarecki, Mohammed Jubur, Hugo Krawczyk, Maliheh Shirvanian, Nitesh Saxena
We present a secure two-factor authentication (TFA) scheme based on the possession by the user of a password and a crypto-capable device. Security is ``end-to-end'' in the sense that the attacker can attack all parts of the system, including all communication links and any subset of parties (servers, devices, client terminals), can learn users' passwords, and perform active and passive attacks, online and offline. In all cases the scheme provides the highest attainable security bounds given the set of compromised components. Our solution builds a TFA scheme using any Device-Enhanced PAKE, defined by Jarecki et al., and any Short Authenticated String (SAS) Message Authentication, defined by Vaudenay. We show an efficient instantiation of this modular construction which utilizes any password-based client-server authentication method, with or without reliance on public-key infrastructure. The security of the proposed scheme is proven in a formal model that we formulate as an extension of the traditional PAKE model. We also report on a prototype implementation of our schemes, including TLS-based and PKI-free variants, as well as several instantiations of the SAS mechanism, all demonstrating the practicality of our approach. Finally, we present a usability study evaluating the viability of our protocol contrasted with the traditional PIN-based TFA approach in terms of efficiency, potential for errors, user experience and security perception of the underlying manual process.
Last updated:  2018-01-09
Publicly Verifiable Proofs of Space
Markus Jakobsson
We introduce a simple and practical Proof of Space (PoS) with applicability to ledger-based payment schemes. It has a dramatically simpler structure than previous proposals, and with that, becomes very easy to analyze. A proof can be as short as a few hundred bits, and can be publicly verified using only two hash function computations.
Last updated:  2018-01-08
Secure Remote Attestation
Markus Jakobsson
More than ten years ago, a devastating data substitution attack was shown to successfully compromise all previously proposed remote attestation techniques. In fact, the authors went further than simply attacking previously proposed methods: they called into question whether it is theoretically possible for remote attestation methods to exist in the face of their attack. Subsequently, it has been shown that it is possible, by relying on self-modifying code. We show that it is possible to create remote attestation that is secure against all data substitution attacks, without relying on self-modifying code. Our proposed method relies on a construction of the checksum process that forces frequent L2 cache overflows if any data substitution attack takes place.
Last updated:  2018-01-09
Tightly SIM-SO-CCA Secure Public Key Encryption from Standard Assumptions
Lin Lyu, Shengli Liu, Shuai Han, Dawu Gu
Selective opening security (SO security) is desirable for public key encryption (PKE) in a multi-user setting. In a selective opening attack, an adversary receives a number of ciphertexts for possibly correlated messages, then it opens a subset of them and gets the corresponding messages together with the randomnesses used in the encryptions. SO security aims at providing security for the unopened ciphertexts. Among the existing simulation-based, selective opening, chosen ciphertext secure (SIM-SO-CCA secure) PKEs, only one (Libert et al., Crypto'17) enjoys tight security, which is reduced to the Non-Uniform LWE assumption. However, their public key and ciphertext are not compact. In this work, we focus on constructing PKE with tight SIM-SO-CCA security based on standard assumptions. We formalize security notions needed for key encapsulation mechanisms (KEM) and show how to transform these securities into SIM-SO-CCA security of PKE through a tight security reduction, while the construction of PKE from KEM follows the general framework proposed by Liu and Paterson (PKC'15). We present two KEM constructions with tight securities based on the Matrix Decision Diffie-Hellman assumption. These KEMs in turn lead to two tightly SIM-SO-CCA secure PKE schemes. One of them enjoys not only tight security but also a compact public key.
Last updated:  2018-05-08
Practical, Anonymous, and Publicly Linkable Universally-Composable Reputation Systems
Johannes Blömer, Fabian Eidens, Jakob Juhnke
We consider reputation systems in the Universal Composability Framework where users can anonymously rate each other's products that they purchased previously. To obtain trustworthy, reliable, and honest ratings, users are allowed to rate products only once. Everybody is able to detect users who rate products multiple times. In this paper we present an ideal functionality for such reputation systems and give an efficient realization that is usable in practical applications.
Last updated:  2018-08-25
Compact Energy and Delay-Aware Authentication
Muslum Ozgur Ozmen, Rouzbeh Behnia, Attila A. Yavuz
Authentication and integrity are fundamental security services that are critical for any viable system. However, some of the emerging systems (e.g., smart grids, aerial drones) are delay-sensitive, and therefore their safe and reliable operation requires delay-aware authentication mechanisms. Unfortunately, the current state-of-the-art authentication mechanisms either incur heavy computations or lack scalability for such large and distributed systems. Hence, there is a crucial need for digital signature schemes that can satisfy the requirements of delay-aware applications. In this paper, we propose a new digital signature scheme that we refer to as Compact Energy and Delay-aware Authentication (CEDA). In CEDA, signature generation and verification only require a small constant number of multiplications and Pseudo Random Function (PRF) calls. Therefore, it achieves the lowest end-to-end delay among its counterparts. Our implementation results on an ARM processor and commodity hardware show that CEDA has the most efficient signature generation on both platforms, while offering a fast signature verification. Among its delay-aware counterparts, CEDA has a smaller private key with a constant-size signature. All these advantages are achieved at the cost of a larger public key. This is a highly favorable trade-off for applications wherein the verifier is not memory-limited. We open-sourced our implementation of CEDA to enable its broad testing and adaptation.
Last updated:  2018-01-07
A verifiable shuffle for the GSW cryptosystem
Martin Strand
We provide the first verifiable shuffle specifically for fully homomorphic encryption schemes. A verifiable shuffle is a way to ensure that if a node receives and sends encrypted lists, the content will be the same, even though no adversary can trace individual list items through the node. Shuffles are useful in e-voting, traffic routing, and other applications. We build our shuffle on the ideas and techniques of Groth's 2010 shuffle, but make the modifications necessary for a less ideal setting where the randomness and ciphertexts admit no group structure. The protocol relies heavily on the properties of the so-called gadget matrices, so we have included a detailed introduction to these.
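For readers unfamiliar with the gadget machinery mentioned above, here is a minimal sketch of the standard gadget vector and its bit-decomposition inverse (an illustration of the general tool, not the paper's shuffle protocol; all parameters are made up):

    # Toy gadget vector g = (1, 2, 4, ..., 2^(k-1)) modulo q = 2^k.
    # g_inverse(u) returns a short {0,1}-vector x with <g, x> = u (mod q).
    k = 10
    q = 2 ** k
    g = [2 ** i for i in range(k)]

    def g_inverse(u: int) -> list:
        return [(u >> i) & 1 for i in range(k)]

    u = 723
    x = g_inverse(u)
    assert sum(gi * xi for gi, xi in zip(g, x)) % q == u

GSW works with matrix analogues of g and g_inverse; the crucial property is that g_inverse outputs short vectors, which keeps the noise of homomorphic operations under control.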
Last updated:  2018-01-07
Zero-Knowledge Proof of Decryption for FHE Ciphertexts
Christopher Carr, Anamaria Costache, Gareth T. Davies, Kristian Gjøsteen, Martin Strand
Zero-knowledge proofs of knowledge and fully-homomorphic encryption are two areas that have seen considerable advances in recent years, and these two techniques are used in conjunction in the context of verifiable decryption. Existing solutions for verifiable decryption are aimed at the batch setting; however, there are many applications in which there will only be one ciphertext that requires a proof of decryption. The purpose of this paper is to provide a zero-knowledge proof of correct decryption of an FHE ciphertext, which for instance could hold the result of a cryptographic election. We give two main contributions. Firstly, we present a bootstrapping-like protocol to switch from one FHE scheme to another. The first scheme has efficient homomorphic capabilities; the second admits a simple zero-knowledge protocol. To illustrate this, we use the Brakerski et al. (ITCS, 2012) scheme for the former, and Gentry's original scheme (STOC, 2009) for the latter. Secondly, we present a simple one-shot zero-knowledge protocol for verifiable decryption using Gentry's original FHE scheme.
Last updated:  2018-01-07
Hedged Nonce-Based Public-Key Encryption: Adaptive Security under Randomness Failures
Zhengan Huang, Junzuo Lai, Wenbin Chen, Man Ho Au, Zhen Peng, Jin Li
Nowadays it is well known that randomness may fail due to bugs or deliberate randomness subversion. As a result, the security of traditional public-key encryption (PKE) can no longer be guaranteed. Currently there are three main approaches to dealing with the problem of randomness failures: deterministic PKE, hedged PKE, and nonce-based PKE. However, each of these approaches applies only to its own application scenario. Since the situations in practice are dynamic and very complex, it is almost impossible to predict the situation in which a scheme will be deployed and to determine beforehand which approach should be used. In this paper, we initiate the study of hedged security for nonce-based PKE, which adapts to whichever situation arises when randomness fails and achieves the best-possible security. Specifically, we lift hedged security to the setting of nonce-based PKE and formalize the notion of chosen-ciphertext security against chosen-distribution attacks (IND-CDA2) for nonce-based PKE. By presenting two counterexamples, we show a separation between our IND-CDA2 security for nonce-based PKE and the original NBP1/NBP2 security defined by Bellare and Tackmann (EUROCRYPT 2016). We show two nonce-based PKE constructions meeting IND-CDA2, NBP1, and NBP2 security simultaneously: the first is a concrete construction in the random oracle model, and the second is a generic construction based on a nonce-based PKE scheme and a deterministic PKE scheme.
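The common thread in hedged designs is to derive the encryption coins deterministically from every available input, so that security degrades gracefully when the random-number generator fails. A minimal sketch of this folklore idea (not one of the paper's two constructions; names are illustrative):

    import hashlib
    import os

    def hedged_coins(seed: bytes, pk: bytes, nonce: bytes, msg: bytes) -> bytes:
        # Length-prefix each field so distinct input tuples cannot collide.
        h = hashlib.sha256()
        for part in (seed, pk, nonce, msg):
            h.update(len(part).to_bytes(8, "big") + part)
        return h.digest()

    # If the seed is uniform, the coins are good; if the seed repeats or
    # leaks, the coins remain as unpredictable as (nonce, msg) allow.
    coins = hedged_coins(os.urandom(32), b"pk-bytes", b"nonce-0001", b"msg")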
Last updated:  2018-01-07
KEM Combiners
Federico Giacon, Felix Heuer, Bertram Poettering
Key-encapsulation mechanisms (KEMs) are a common stepping stone for constructing public-key encryption. Secure KEMs can be built from diverse assumptions, including ones related to integer factorization, discrete logarithms, error-correcting codes, or lattices. In light of the recent NIST call for post-quantum secure PKE, the zoo of KEMs that are believed to be secure continues to grow. Yet opinions are divided on which KEM is the most secure. While using the best candidate might not seem necessary to survive everyday situations, placing a wrong bet can be devastating should the employed KEM eventually turn out to be vulnerable. We introduce KEM combiners as a way to garner trust from different KEM constructions rather than relying on a single one: we present efficient black-box constructions that, given any set of `ingredient' KEMs, yield a new KEM that is (CCA) secure as long as at least one of the ingredient KEMs is. As building blocks, our constructions use cryptographic hash functions and blockciphers. Some of the corresponding security proofs require idealized models for these primitives; others get by with standard assumptions.
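One natural combiner in this spirit hashes both session keys together with both ciphertexts. The sketch below illustrates the general idea and is not necessarily one of the exact constructions proven secure in the paper:

    import hashlib

    def combine(k1: bytes, c1: bytes, k2: bytes, c2: bytes) -> bytes:
        # Bind both ingredient keys AND both ciphertexts into the final key,
        # so that breaking one ingredient KEM alone does not determine it.
        h = hashlib.sha256()
        for part in (k1, c1, k2, c2):
            h.update(len(part).to_bytes(8, "big") + part)
        return h.digest()

The sender encapsulates under each ingredient KEM independently and sends (c1, c2); both parties then derive the final key from the two shared secrets and the two ciphertexts.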
Last updated:  2018-01-07
Public-Key Encryption Resistant to Parameter Subversion and its Realization from Efficiently-Embeddable Groups
Benedikt Auerbach, Mihir Bellare, Eike Kiltz
We initiate the study of public-key encryption (PKE) schemes and key-encapsulation mechanisms (KEMs) that retain security even when the public parameters (primes, curves) they use may be untrusted and subverted. We define a strong security goal that we call ciphertext pseudo-randomness under parameter subversion attack (CPR-PSA). We also define indistinguishability (of ciphertexts for PKE, and of encapsulated keys from random ones for KEMs) and public-key hiding (also called anonymity) under parameter subversion attack, and show they are implied by CPR-PSA, for both PKE and KEMs. We show that hybrid encryption continues to work in the parameter subversion setting to reduce the design of CPR-PSA PKE to CPR-PSA KEMs and an appropriate form of symmetric encryption. To obtain efficient, elliptic-curve-based KEMs achieving CPR-PSA, we introduce efficiently-embeddable group families and give several constructions from elliptic curves.
Last updated:  2018-01-07
Attribute-based Signatures for Unbounded Circuits in the ROM and Efficient Instantiations from Lattices
Ali El Kaafarani, Shuichi Katsumata
Attribute-based signatures (ABS), originally introduced by Maji et al. (CT-RSA'11), represent an essential mechanism for fine-grained authentication. A user associated with an attribute $x$ can sign w.r.t. a given public policy $C$ only if his attribute satisfies $C$, i.e., $C(x)=1$. So far, much effort has been devoted to constructing bilinear-map-based ABS schemes, where the state-of-the-art scheme of Sakai et al. (PKC'16) supports the very wide class of unbounded circuits as policies. However, constructions of ABS schemes without bilinear maps are less investigated; it was not until recently that Tsabary (TCC'17) showed a lattice-based ABS scheme supporting bounded circuits as policies, at the cost of weakening the security requirement. In this work, we affirmatively close the gap between ABS schemes based on bilinear maps and on lattices by constructing the first lattice-based ABS scheme for unbounded circuits in the random oracle model. We start by providing a generic construction of ABS schemes for unbounded circuits in the random oracle model, which in turn implies that one-way functions are sufficient to construct ABS schemes. To prove security, we formalize and prove a generalization of the Forking Lemma, which we call the "general multi-forking lemma with oracle access", capturing the situation where the simulator interacts with algorithms it cannot rewind, and also covering many features of recent lattice-based ZKPs. This formalization was in fact lacking in many existing anonymous signatures from lattices (e.g., group signatures), and is therefore believed to be of independent interest. Finally, we provide a concrete instantiation of our generic ABS construction from lattices by introducing a new $\Sigma$-protocol, departing significantly from previously known techniques, for proving possession of a valid signature under the lattice-based signature scheme of Boyen (PKC'10).
Last updated:  2018-11-24
Regular Lossy Functions and Their Applications in Leakage-Resilient Cryptography
Yu Chen, Baodong Qin, Haiyang Xue
In STOC 2008, Peikert and Waters introduced a powerful primitive called lossy trapdoor functions (LTFs). In a nutshell, LTFs are functions that behave in one of two modes. In the normal mode, functions are injective and invertible with a trapdoor. In the lossy mode, functions statistically lose information about their inputs. Moreover, the two modes are computationally indistinguishable. In this work, we put forward a relaxation of LTFs, namely, regular lossy functions (RLFs). Compared to LTFs, the functions in the normal mode are not required to be efficiently invertible, or even to be injective; instead, they may also be lossy, but in a regular manner. We also put forward richer abstractions of RLFs, namely all-but-one regular lossy functions (ABO-RLFs) and one-time regular lossy filters (OT-RLFs). We show that (ABO-)RLFs admit efficient constructions from both a variety of number-theoretic assumptions and hash proof systems (HPS) for subset membership problems satisfying natural algebraic properties. Thanks to the relaxed functionality, the constructions enjoy much more compact key sizes and better computational efficiency than (ABO-)LTFs. We demonstrate the utility of RLFs and their extensions in leakage-resilient cryptography. As a special case of RLFs, lossy functions imply leakage-resilient injective one-way functions with optimal leakage rate $1 - o(1)$. ABO-RLFs (or OT-RLFs) immediately imply leakage-resilient one-time message authentication codes (MACs) with optimal leakage rate $1 - o(1)$. ABO-RLFs together with HPS give rise to leakage-resilient chosen-ciphertext (CCA) secure key encapsulation mechanisms (KEMs), and this approach extends naturally to the identity-based setting. Combined with the construction of ABO-RLFs from HPS, this gives the first leakage-resilient CCA-secure public-key encryption (PKE) with optimal leakage rate based solely on HPS, and thus goes beyond the barrier posed by Dodis et al. (Asiacrypt 2010). Our construction also applies to the identity-based setting, yielding LR-CCA secure IB-KEMs with a higher leakage rate than previous works.
Last updated:  2018-07-08
Ciphertext-Only Attacks against Compact-LWE Submitted to NIST PQC Project
Haoyu Li, Renzhang Liu, Yanbin Pan, Tianyuan Xie
In 2017, Liu, Li, Kim and Nepal submitted a new public-key encryption scheme, Compact-LWE, to NIST as a candidate for the post-quantum cryptography standard. Compact-LWE has a structure similar to LWE, but with a different distribution of errors. Liu, Li, Kim and Nepal believed that the special error distribution they employed would protect Compact-LWE from the known lattice-based attacks. Furthermore, they recommended a set of small parameters to improve the efficiency of Compact-LWE and claimed it offers 192 bits of security. However, in this paper, we show that Compact-LWE is not secure with the recommended parameters, by presenting two efficient ciphertext-only attacks against it. The first attack recovers the equivalent private keys just from the public keys: by exploiting the special structure of Compact-LWE, employing known techniques such as the orthogonal-lattice technique, and developing some new techniques, we recovered the equivalent private keys for more than 80% of the randomly generated instances in our experiments. The second attack recovers the corresponding message given the public keys and a ciphertext: since any short enough solution of the corresponding inhomogeneous linear system can be used to decrypt a ciphertext, we recovered all the messages without knowing the private keys in our experiments.
Last updated:  2018-01-05
Two Sides of the Same Coin: Counting and Enumerating Keys Post Side-Channel Attacks Revisited.
Daniel P. Martin, Luke Mather, Elisabeth Oswald
Motivated by the need to assess the concrete security of a device after a side-channel attack, there has been a flurry of recent work designing both key rank and key enumeration algorithms. Two main competitors for key ranking can be found in the literature: a convolution-based algorithm put forward by Glowacz et al. (FSE 2015), and a path-counting-based algorithm proposed by Martin et al. (Asiacrypt 2015). Both key ranking algorithms can be extended to key enumeration algorithms (Poussier et al., CHES 2016, and Martin et al., Asiacrypt 2015). The two approaches were proposed independently and have so far been treated as fundamentally different techniques with different levels of accuracy. However, we show that both approaches (for ranking) are mathematically equivalent for a suitable choice of their respective discretisation parameters. This settles the question of which one returns more accurate rankings. We then turn our attention to the related enumeration algorithms and determine why and how these algorithms differ in their practical performance.
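To see the convolution idea concretely, the following toy sketch (our illustration with a made-up score model, far less careful about discretisation error than the published algorithms) histograms each subkey's log-likelihood scores on a common grid and convolves the histograms, since adding scores corresponds to convolving their distributions:

    import numpy as np

    def estimate_rank(scores, correct, nbins=256):
        """scores[i][k]: log-likelihood of candidate k for subkey i.
        correct[i]: the true candidate's log-likelihood for subkey i.
        Returns roughly how many full keys outscore the correct key."""
        width = max(s.max() - s.min() for s in scores) / nbins
        dist = np.array([1.0])   # distribution over summed bin indices
        offset = 0.0
        for s in scores:
            hist = np.bincount(np.floor((s - s.min()) / width).astype(int))
            dist = np.convolve(dist, hist)
            offset += s.min()
        correct_bin = int((sum(correct) - offset) / width)
        return dist[correct_bin + 1:].sum()

    rng = np.random.default_rng(1)
    scores = [rng.normal(size=256) for _ in range(16)]   # 16 subkeys
    correct = [s[rng.integers(256)] for s in scores]
    print(estimate_rank(scores, correct))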
Last updated:  2018-05-31
Multi-Key Searchable Encryption, Revisited
Ariel Hamlin, abhi shelat, Mor Weiss, Daniel Wichs
We consider a setting where users store their encrypted documents on a remote server and can selectively share documents with each other. A user should be able to perform keyword searches over all the documents she has access to, including the ones that others shared with her. The contents of the documents, and the search queries, should remain private from the server. This setting was considered by Popa et al. (NSDI '14), who developed a new cryptographic primitive called Multi-Key Searchable Encryption (MKSE), together with an instantiation and an implementation within a system called Mylar, to address this goal. Unfortunately, Grubbs et al. (CCS '16) showed that the proposed MKSE definition fails to provide basic security guarantees, and that the Mylar system is susceptible to simple attacks. Most notably, if a malicious Alice colludes with the server and shares a document with an honest Bob, then the privacy of all of Bob's search queries is lost. In this work we revisit the notion of MKSE and propose a new strengthened definition that rules out the above attacks. We then construct MKSE schemes meeting our definition. We first give a simple and efficient construction using only pseudorandom functions. This construction achieves our strong security definition at the cost of increased server storage overhead relative to Mylar, essentially replicating the document each time it is shared. We also show that high server storage overhead is not inherent, by giving an alternate (albeit impractical) construction that manages to avoid it using obfuscation.
Last updated:  2018-08-31
Verifiability of Helios Mixnet
Ben Smyth
We study game-based definitions of individual and universal verifiability by Smyth, Frink & Clarkson. We prove that building voting systems from ElGamal coupled with proofs of correct key generation suffices for individual verifiability, and that it also suffices for an aspect of universal verifiability. This eliminates the expense of individual-verifiability proofs and simplifies universal-verifiability proofs for a class of encryption-based voting systems. We use the definitions of individual and universal verifiability to analyse the mixnet variant of Helios. Our analysis reveals that universal verifiability is not satisfied by implementations using the weak Fiat-Shamir transformation. Moreover, we prove that individual and universal verifiability are satisfied when statements are included in hashes (i.e., when using the Fiat-Shamir transformation, rather than the weak Fiat-Shamir transformation).
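The distinction the analysis turns on is small enough to show in code. The sketch below is generic (not Helios internals): the weak Fiat-Shamir transformation hashes only the prover's commitment, whereas the Fiat-Shamir transformation also hashes the statement being proven.

    import hashlib

    def challenge_weak(commitment: bytes) -> bytes:
        # Weak Fiat-Shamir: the statement is NOT bound by the challenge.
        return hashlib.sha256(commitment).digest()

    def challenge_strong(statement: bytes, commitment: bytes) -> bytes:
        # Fiat-Shamir: statement and commitment are both hashed.
        h = hashlib.sha256()
        for part in (statement, commitment):
            h.update(len(part).to_bytes(8, "big") + part)
        return h.digest()

With the weak variant, a cheating prover retains the freedom to choose the statement after seeing the challenge, which is the kind of freedom behind the verifiability failure reported above.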
Last updated:  2018-01-04
New Techniques for Public Key Encryption with Sender Recovery
Murali Godi, Roopa Vishwanathan
In this paper, we consider a scenario where a sender transmits ciphertexts to multiple receivers using a public-key encryption scheme and, at a later point in time, wants to retrieve the plaintexts without having to request the receivers' help in decrypting the ciphertexts, and without having to locally store a separate recovery key for every receiver the sender interacts with. This problem, known as public-key encryption with sender recovery, has intuitive solutions based on hybrid key/data encapsulation mechanism (KEM/DEM) schemes. We propose a KEM/DEM-based solution that is CCA2-secure, allows for multiple receivers, and only requires the receivers to be equipped with public/secret key pairs (the sender needs only a single symmetric recovery key); it uses an analysis technique called plaintext randomization that results in greatly simplified, clean, and intuitive proofs compared to prior work in this area. We instantiate our protocol for public-key encryption with sender recovery with the Cramer-Shoup hybrid encryption scheme.
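A minimal sketch of the recovery idea (our illustration of the intuitive approach the abstract mentions, not the paper's CCA2-secure construction): besides the normal encapsulation for each receiver, the sender stores the DEM key wrapped under its single long-term symmetric recovery key.

    import hashlib
    import os

    def wrap(recovery_key: bytes, dem_key: bytes, nonce: bytes) -> bytes:
        # XOR the DEM key with a hash-derived pad; unwrapping is the same
        # operation. A real scheme would use an authenticated cipher here.
        pad = hashlib.sha256(recovery_key + nonce).digest()
        return bytes(a ^ b for a, b in zip(dem_key, pad))

    recovery_key = os.urandom(32)               # the sender's only stored key
    dem_key, nonce = os.urandom(32), os.urandom(16)
    blob = wrap(recovery_key, dem_key, nonce)   # kept alongside the ciphertext
    assert wrap(recovery_key, blob, nonce) == dem_key   # sender recovery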
Last updated:  2021-04-28
On Composable Security for Digital Signatures
Christian Badertscher, Ueli Maurer, Björn Tackmann
A digital signature scheme (DSS), which consists of a key-generation, a signing, and a verification algorithm, is an invaluable tool in cryptography. The first and still most widely used security definition for a DSS, existential unforgeability under chosen-message attack, was introduced by Goldwasser, Micali, and Rivest in 1988. As DSSs serve as a building block in numerous complex cryptographic protocols, a security definition that specifies the guarantees of a DSS under composition is needed. Canetti (FOCS 2001, CSFW 2004) as well as Backes, Pfitzmann, and Waidner (CCS 2003) have described ideal functionalities for signatures in their respective composable-security frameworks. While several variants of these functionalities exist, they all share that the verification key and signature values appear explicitly. In this paper, we describe digital signature schemes from a different, more abstract perspective. Instead of modeling all aspects of a DSS in a monolithic ideal functionality, our approach characterizes a DSS as a construction of a repository for authentically reading values written by a certain party from certain assumed repositories, e.g., for transmitting verification key and signature values. This approach resolves several technical complications of previous simulation-based approaches, captures the security of signature schemes in an abstract way, and allows for modular proofs. We show that our definition is equivalent to existential unforgeability. We then model two example applications: (1) the certification of values via a signature from a specific entity, which with public keys as values is the core functionality of public-key infrastructures, and (2) the authentication of a session between a client and a server with the help of a digitally signed assertion from an identity provider. Single-sign-on mechanisms such as SAML rely on the soundness of the latter approach.
Last updated:  2018-03-18
Ubiquitous Weak-key Classes of BRW-polynomial Function
Kaiyan Zheng, Peng Wang, Dingfeng Ye
The BRW-polynomial function has been suggested as a preferred alternative to the usual polynomial function, owing to its high efficiency and seemingly non-existent weak keys. In this paper we investigate the weak-key issue of the BRW-polynomial function as well as of BRW-instantiated cryptographic schemes. Although, in BRW-polynomial evaluation, the relationship between coefficients and input blocks is indistinct, we give a recursive algorithm that, for any given $(2^{v+1}-1)$-block message, computes another $(2^{v+1}-1)$-block message such that their output differential through BRW-polynomial evaluation equals any given $s$-degree polynomial, where $v\ge\lfloor\log_2(s+1)\rfloor$. With this algorithm, we illustrate that any non-empty key subset is a weak-key class in the BRW-polynomial function. Moreover, any key subset of the BRW-polynomial function consisting of at least $2$ keys is a weak-key class in BRW-instantiated cryptographic schemes such as the Wegman-Carter scheme, the UHF-then-PRF scheme, DCT, etc. In particular, in the AE scheme DCT, both confidentiality and integrity collapse completely when the weak keys of the BRW-polynomial function, which are ubiquitous, are used.
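For reference, the standard recursive definition of BRW evaluation looks as follows in code (a sketch over a toy prime field for readability; actual instantiations evaluate over $GF(2^n)$):

    # Recursive BRW (Bernstein-Rabin-Winograd) polynomial evaluation.
    P = 2 ** 61 - 1   # toy prime modulus; real schemes use GF(2^n)

    def brw(k, m):
        t = len(m)
        if t == 0:
            return 0
        if t == 1:
            return m[0] % P
        if t == 2:
            return (m[0] * k + m[1]) % P
        if t == 3:
            return ((k + m[0]) * (k * k + m[1]) + m[2]) % P
        d = 1 << (t.bit_length() - 1)   # largest power of 2 <= t
        return (brw(k, m[:d - 1]) * (pow(k, d, P) + m[d - 1])
                + brw(k, m[d:])) % P

    # Messages of 2^(v+1) - 1 blocks, as in the abstract; e.g. v = 2 gives 7:
    print(brw(12345, list(range(1, 8))))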
Last updated:  2018-01-03
Hashing solutions instead of generating problems: On the interactive certification of RSA moduli
Benedikt Auerbach, Bertram Poettering
Certain RSA-based protocols, for instance in the domain of group signatures, require a prover to convince a verifier that a set of RSA parameters is well-structured (e.g., that the modulus is the product of two distinct primes and that the exponent is co-prime to the group order). Various corresponding proof systems have been proposed in the past, with different levels of generality, efficiency, and interactivity. This paper proposes two new proof systems for a wide set of properties that RSA and related moduli might have. The protocols are particularly efficient: The necessary computations are simple, the communication is restricted to only one round, and the exchanged messages are short. While the first protocol is based on prior work (improving on it by reducing the number of message passes from four to two), the second protocol is novel. Both protocols require a random oracle.
Last updated:  2018-01-03
An Inside Job: Remote Power Analysis Attacks on FPGAs
Falk Schellenberg, Dennis R. E. Gnad, Amir Moradi, Mehdi B. Tahoori
Hardware Trojans have gained increasing interest during the past few years. Undeniably, the detection of such malicious designs needs a deep understanding of how they can practically be built and developed. In this work we present a design methodology dedicated to FPGAs which allows measuring a fraction of the dynamic power consumption. More precisely, we develop internal sensors which are based on FPGA primitives and transfer the internally measured side-channel leakages outside. These are distributed and calibrated delay sensors which can indirectly measure voltage fluctuations due to power consumption. Using a cryptographic core as a case study, we present different settings and parameters for our employed sensors. Using their side-channel measurements, we further exhibit practical key-recovery attacks, confirming the applicability of the underlying measurement methodology. This opens a new door to integrating hardware Trojans into a) applications where the FPGA is remotely accessible and b) FPGA-based multi-user platforms where the reconfigurable resources are shared among different users. This type of Trojan is highly difficult to detect, since there is no signal connection between the targeted (cryptographic) core and the internally deployed sensors.
Last updated:  2018-04-12
Graded Encoding Schemes from Obfuscation
Pooya Farshim, Julia Hesse, Dennis Hofheinz, Enrique Larraia
We construct a graded encoding scheme (GES), an approximate form of graded multilinear maps. Our construction relies on indistinguishability obfuscation and a pairing-friendly group in which (a suitable variant of) the strong Diffie-Hellman assumption holds. As a result of this abstract approach, our GES has a number of advantages over previous constructions. Most importantly: a) We can prove that the multilinear decisional Diffie-Hellman (MDDH) assumption holds in our setting, assuming the used ingredients are secure (in a well-defined and standard sense). In particular, and in contrast to previous constructions, our GES does not succumb to so-called "zeroizing" attacks. Indeed, our scheme is currently the only GES for which no known cryptanalysis applies. b) Encodings in our GES do not carry any noise. Thus, unlike previous GES constructions, there is no upper bound on the number of operations one can perform with our encodings. Hence, our GES essentially realizes what Garg et al. (EUROCRYPT 2013) call the "dream version" of a GES. Technically, our scheme extends a previous, non-graded approximate multilinear map scheme due to Albrecht et al. (TCC 2016-A). To introduce a graded structure, we develop a new view of encodings at different levels as polynomials of different degrees.
Last updated:  2018-03-20
Interactively Secure Groups from Obfuscation
Thomas Agrikola, Dennis Hofheinz
We construct a mathematical group in which an interactive variant of the very general Uber assumption holds. Our construction uses probabilistic indistinguishability obfuscation, fully homomorphic encryption, and a pairing-friendly group in which a mild and standard computational assumption holds. While our construction is not practical, it constitutes a feasibility result showing that, under one strong but generic assumption and one mild standard assumption, groups exist in which very general computational assumptions hold. We believe that this grants additional credibility to the Uber assumption.