All papers in 2017 (Page 12 of 1262 results)

Last updated:  2017-02-24
Analysis of AES, SKINNY, and Others with Constraint Programming
Siwei Sun, David Gerault, Pascal Lafourcade, Qianqian Yang, Yosuke Todo, Kexin Qiao, Lei Hu
Searching for different types of distinguishers is a common task in symmetric-key cryptanalysis. In this work, we employ the constraint programming (CP) technique to tackle such problems. First, we show that a simple application of the CP approach proposed by Gerault et al. leads to the solution of the open problem of determining the exact lower bound of the number of active S-boxes for 6-round AES-128 in the related-key model. Subsequently, we show that the same approach can be applied to searching for integral distinguishers, impossible differentials, and zero-correlation linear approximations, in both the single-key and related-(twea)key model. We implement the method using the open source constraint solver Choco and apply it to the block ciphers PRESENT, SKINNY, and HIGHT (ARX construction). As a result, we find 16 related-tweakey impossible differentials for 12-round SKINNY-64-128, based on which we construct an 18-round attack on SKINNY-64-128 (one target version for the crypto competition \url{https://sites.google.com/site/skinnycipher} announced at ASK 2016). Moreover, we show that in some cases, when equipped with proper strategies (ordering heuristic, restart and dynamic branching strategy), the CP approach can be very efficient. Therefore, we suggest that the constraint programming technique should become a convenient tool at hand for symmetric-key cryptanalysts.
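To make the CP modelling concrete, the following is a minimal sketch in Python (using the third-party `python-constraint` package) of how a truncated differential model can be phrased as constraints; the two-round toy cipher, its wire permutation, and the enumeration-based minimization are our own illustrative inventions and are far simpler than the paper's AES/SKINNY models.

```python
# Toy CP model: lower-bound the number of active S-boxes over two rounds
# of a hypothetical cipher whose linear layer is just a wire permutation.
from constraint import Problem

P = [2, 0, 3, 1]  # invented permutation linking round-1 outputs to round-2 inputs

problem = Problem()
for i in range(4):
    problem.addVariable(f"a{i}", [0, 1])  # a_i: S-box i active in round 1
    problem.addVariable(f"b{i}", [0, 1])  # b_i: S-box i active in round 2
for i in range(4):
    # activity propagates unchanged through a permutation layer
    problem.addConstraint(lambda a, b: a == b, (f"a{i}", f"b{P[i]}"))
# exclude the trivial all-inactive characteristic
problem.addConstraint(lambda *acts: sum(acts) >= 1, [f"a{i}" for i in range(4)])

best = min(sum(sol.values()) for sol in problem.getSolutions())
print("minimum number of active S-boxes:", best)  # 2 for this toy
```

Real models, as in the paper, additionally encode S-box difference-distribution constraints and key-schedule differences, and lean on solver features such as ordering heuristics and restarts.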
Last updated:  2017-03-30
Security Notions for Bidirectional Channels
Giorgia Azzurra Marson, Bertram Poettering
This paper closes a definitional gap in the context of modeling cryptographic two-party channels. We note that, while most security models for channels consider exclusively unidirectional communication, real-world protocols like TLS and SSH are rather used for bidirectional interaction. The motivational question behind this paper is: Can analyses conducted with the unidirectional setting in mind--including the current ones for TLS and SSH--also vouch for security in the case of bidirectional channel usage? And, in the first place, what does security in the bidirectional setting actually mean? After developing confidentiality and integrity notions for bidirectional channels, we analyze a standard way of combining two unidirectional channels to realize one bidirectional channel. Although it turns out that this construction is, in general, not as secure as commonly believed, we confirm that for many practical schemes security is provided also in the bidirectional sense.
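As a rough illustration of the combiner under analysis, here is a toy Python sketch of building a bidirectional channel from two independently keyed unidirectional ones (integrity only, via HMAC with per-direction counters); this is our simplified rendering of the ``standard way'', not the paper's formal construction.

```python
# Toy unidirectional channel: HMAC over (sequence number || message).
import hmac, hashlib, os

class UnidirectionalChannel:
    def __init__(self, key: bytes):
        self.key, self.seq = key, 0

    def send(self, msg: bytes):
        tag = hmac.new(self.key, self.seq.to_bytes(8, "big") + msg,
                       hashlib.sha256).digest()
        self.seq += 1
        return msg, tag

    def recv(self, msg: bytes, tag: bytes) -> bytes:
        good = hmac.new(self.key, self.seq.to_bytes(8, "big") + msg,
                        hashlib.sha256).digest()
        if not hmac.compare_digest(tag, good):
            raise ValueError("integrity failure")
        self.seq += 1
        return msg

# Bidirectional channel = two unidirectional channels with independent keys.
k_ab, k_ba = os.urandom(32), os.urandom(32)
alice_out, bob_in = UnidirectionalChannel(k_ab), UnidirectionalChannel(k_ab)
bob_out, alice_in = UnidirectionalChannel(k_ba), UnidirectionalChannel(k_ba)
m, t = alice_out.send(b"hello")
assert bob_in.recv(m, t) == b"hello"
```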
Last updated:  2017-02-23
Conditional Cube Attack on Round-Reduced ASCON
Zheng Li, Xiaoyang Dong, Xiaoyun Wang
This paper evaluates the security level of the authenticated encryption scheme Ascon against the cube-like method. Ascon, submitted by Dobraunig et al., is one of the 16 survivors of the 3rd round of the CAESAR competition. The cube-like method was first used by Dinur et al. to analyze Keccak keyed modes. At CT-RSA 2015, Dobraunig et al. applied this method to 5/6-round reduced Ascon, whose structure is similar to Keccak keyed modes. However, for Ascon the non-linear layer is more complex and the state is much smaller, which makes it hard for the attackers to select enough cube variables that do not multiply with each other after the first round. This seems to be the reason why the best previous key-recovery attack reaches only 6-round Ascon, while for Keccak keyed modes (Keccak-MAC and Keyak) no fewer than 7 rounds are attacked. In this paper, we generalize the conditional cube attack proposed by Huang et al., find new cubes depending on some key bit conditions for 5/6-round reduced Ascon, and translate the previous theoretical 6-round attack with $2^{66}$ time complexity to a practical one with $2^{40}$ time complexity. Moreover, we propose the first 7-round key-recovery attack on Ascon. By introducing the cube-like key-subset technique, we divide the full key space into many subsets according to different key conditions. For each key subset, we launch the cube tester to determine if the key falls into it. Finally, we recover the full key space by testing all the key subsets. The total time complexity is about $2^{103.9}$. In addition, for a weak-key subset of size $2^{117}$, the attack is more efficient and costs only $2^{77}$ time complexity. These attacks do not threaten the full-round (12 rounds) Ascon.
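The cube-sum idea itself can be demonstrated on a toy keyed Boolean function (our invention, not Ascon): XOR-summing the output over all assignments of the cube variables isolates a ``superpoly'' in the key bits, whose value leaks key information.

```python
from itertools import product

def toy_output_bit(key, cube):
    # Invented degree-2 keyed function; the coefficient of v0*v1 is k0.
    k0, k1 = key
    v0, v1 = cube
    return (v0 & v1 & k0) ^ (v0 & k1) ^ v1 ^ k0

def cube_sum(key):
    # XOR over the full cube {v0, v1}: all lower-degree terms cancel in
    # pairs, leaving the superpoly k0.
    s = 0
    for v in product([0, 1], repeat=2):
        s ^= toy_output_bit(key, v)
    return s

for key in product([0, 1], repeat=2):
    print("key =", key, "-> cube sum =", cube_sum(key))  # equals k0
```

Conditional cubes refine this by choosing cubes whose superpoly behaviour changes depending on whether certain key-bit conditions hold.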
Last updated:  2017-02-23
Cube-like Attack on Round-Reduced Initialization of Ketje Sr
Xiaoyang Dong, Zheng Li, Xiaoyun Wang, Ling Qin
This paper studies the Keccak-based authenticated encryption (AE) scheme Ketje Sr against cube-like attacks. Ketje is one of the 16 remaining candidates of the third round of the CAESAR competition, and its primary recommendation is Ketje Sr. Although the cube-like method has been successfully applied to Ketje's sister ciphers, including Keccak-MAC and Keyak -- another Keccak-based AE scheme -- similar attacks are missing for Ketje. For Ketje Sr, the state (400-bit) is much smaller than in Keccak-MAC and Keyak (1600-bit), so the 128-bit key and cubes of the same dimension occupy more lanes in Ketje Sr. Hence, the number of key bits independent of the cube sum is very small, which means the divide-and-conquer method (applied to the 7-round attack on Keccak-MAC by Dinur et al.) cannot be translated to Ketje Sr trivially. This property seems to be the barrier to translating the previous cube-like attacks to Ketje Sr. In this paper, we evaluate Ketje Sr against the divide-and-conquer method. Firstly, by applying the linear structure technique, we find some 32/64-dimension cubes of Ketje Sr that do not multiply with each other, nor with some bits of the key, in the first round. In addition, we introduce a new dynamic variable, in place of the auxiliary variable used in Dinur et al.'s divide-and-conquer attack, to reduce the diffusion of the key as well as of the cube variables. Finally, we successfully launch a 6/7-round key recovery attack on Ketje Sr v1 and v2 (v2 was presented for the 3rd round of the CAESAR competition). In the 7-round attack, the complexity of the online phase for Ketje Sr v1 is $2^{113}$, while for Ketje Sr v2 it is $2^{97}$ (the preprocessing complexity is the same). We conclude that 7-round reduced Ketje Sr v2 is weaker than v1 against our attacks. In addition, some results on other Ketje instances and on Ketje Sr with smaller nonces are given. These are the first results on Ketje and bridge the gaps of cryptanalysis between its sister ciphers -- Keyak and the Keccak keyed modes.
Last updated:  2017-02-22
Passphone: Outsourcing Phone-based Web Authentication while Protecting User Privacy
Martin Potthast, Christian Forler, Eik List, Stefan Lucks
This work introduces PassPhone, a new smartphone-based authentication scheme that outsources user verification to a trusted third party without sacrificing privacy: neither can the trusted third party learn the relation between users and service providers, nor can service providers learn their users' relations to other service providers. When employed as a second factor in conjunction with, for instance, passwords as a first factor, our scheme maximizes the deployability of two-factor authentication for service providers while maintaining user privacy. We conduct a twofold formal analysis of our scheme, the first regarding its general security, and the second regarding anonymity and unlinkability of its users. Moreover, we provide an automatic analysis using AVISPA, a comparative evaluation to existing schemes under Bonneau et al.'s framework, and an evaluation of a prototypical implementation.
Last updated:  2018-08-12
Detecting General Algebraic Manipulation Attacks
Kim Ramchen
Algebraic manipulation detection codes are a class of error detecting codes which have found numerous applications in cryptography. In this paper we extend these codes to defeat general algebraic attacks - we call such codes general algebraic manipulation detection (GAMD) codes. Positive results are shown for the existence of GAMDs for the families of tampering functions corresponding to point additions and polynomial functions over a finite field. Compared to non-malleable codes, we demonstrate both positive and negative results regarding the existence of GAMDs for arbitrary families of tampering functions.
Last updated:  2017-02-22
Trust Is Risk: A Decentralized Financial Trust Platform
Orfeas Stefanos Thyfronitis Litos, Dionysis Zindros
Centralized reputation systems use stars and reviews and thus require algorithm secrecy to avoid manipulation. In autonomous open source decentralized systems this luxury is not available. We create a reputation network for decentralized marketplaces where the trust each user gives to the other users is quantifiable and expressed in monetary terms. We introduce a new model for bitcoin wallets in which user coins are split among trusted associates. Direct trust is defined using shared bitcoin accounts via bitcoin's 1-of-2 multisig. Indirect trust is subsequently defined transitively. This enables formal game theoretic arguments pertaining to risk analysis. We prove that risk and maximum flows are equivalent in our model and that our system is Sybil-resilient. Our system allows for concrete financial decisions on the subjective monetary amount a pseudonymous party can be trusted with. Risk remains invariant under a direct trust redistribution operation followed by a purchase.
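The risk/max-flow correspondence can be illustrated in a few lines of Python on a hypothetical trust graph (names and amounts invented), using networkx's max-flow routine.

```python
import networkx as nx

G = nx.DiGraph()
# Direct trust: value each user has placed in shared 1-of-2 multisig outputs.
G.add_edge("Alice", "Bob", capacity=50)
G.add_edge("Alice", "Carol", capacity=30)
G.add_edge("Bob", "Dave", capacity=40)
G.add_edge("Carol", "Dave", capacity=25)

# Indirect trust from Alice to Dave = maximum flow between them.
risk, _ = nx.maximum_flow(G, "Alice", "Dave")
print("Alice can risk up to", risk, "units with Dave")  # 65 here
```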
Last updated:  2017-02-23
Random Sampling Revisited: Lattice Enumeration with Discrete Pruning
Yoshinori Aono, Phong Q. Nguyen
In 2003, Schnorr introduced random sampling to find very short lattice vectors, as an alternative to enumeration. An improved variant has been used in the past few years by Kashiwabara et al. to solve the largest Darmstadt SVP challenges. However, the behaviour of random sampling and its variants is not well understood: all analyses so far rely on a questionable heuristic assumption, namely that the lattice vectors produced by some algorithm are uniformly distributed over certain parallelepipeds. In this paper, we introduce lattice enumeration with discrete pruning, which generalizes random sampling and its variants, and provides a novel geometric description based on partitions of the n-dimensional space. We obtain what is arguably the first sound analysis of random sampling, by showing how discrete pruning can be rigorously analyzed under the well-known Gaussian heuristic, in the same model as the Gama-Nguyen-Regev analysis of pruned enumeration from EUROCRYPT '10, albeit using different tools: we show how to efficiently compute the volume of the intersection of a ball with a box, and to efficiently approximate a large sum of many such volumes, based on statistical inference. Furthermore, we show how to select good parameters for discrete pruning by enumerating integer points in an ellipsoid. Our analysis is backed up by experiments and allows, for the first time, a reasonable estimate of the success probability of random sampling and its variants, and comparisons with previous forms of pruned enumeration. Our work unifies random sampling and pruned enumeration and shows that they are complementary to each other: both have different characteristics and offer different trade-offs to speed up enumeration.
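For intuition, the geometric quantity at the heart of the analysis, the volume of the intersection of a ball with a box, can be estimated naively by Monte Carlo; the sketch below is ours for illustration only, whereas the paper computes these volumes efficiently and with statistical guarantees.

```python
import random

def ball_box_volume(radius, box, samples=200_000):
    """Estimate vol(ball(0, radius) intersected with box) by sampling in the box."""
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= hi - lo
    hits = sum(
        1 for _ in range(samples)
        if sum(random.uniform(lo, hi) ** 2 for lo, hi in box) <= radius ** 2
    )
    return box_vol * hits / samples

# A 4-dimensional example cell, as might arise from a discrete pruning partition.
print(ball_box_volume(1.0, [(0.0, 0.5)] * 4))
```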
Last updated:  2017-02-22
Linear Cryptanalysis: Key Schedules and Tweakable Block Ciphers
Thorsten Kranz, Friedrich Wiemer, Gregor Leander
This paper serves as a systematization of knowledge of linear cryptanalysis and provides novel insights in the areas of key schedule design and tweakable block ciphers. We examine in a step-by-step manner the linear hull theorem in a general and consistent setting. Based on this, we study the influence of the choice of the key schedule on linear cryptanalysis, a -- notoriously difficult -- but important subject. Moreover, we investigate how tweakable block ciphers can be analyzed with respect to linear cryptanalysis, a topic that surprisingly has not been scrutinized until now.
Last updated:  2017-06-28
Storage Efficient Substring Searchable Symmetric Encryption
Iraklis Leontiadis, Ming Li
We address the problem of substring searchable encryption. A single user produces a big stream of data and later on wants to learn the positions in the string where some patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work of substring searchable encryption in order to reduce the storage cost of auxiliary data structures. Our solution entails a suffix-array-based index design, which allows optimal storage cost $O(n)$ with a small hidden factor, where $n$ is the size of the string. We analyze the security of the protocol in the real/ideal framework. Moreover, we implemented our scheme and the state-of-the-art protocol \cite{Chase} to demonstrate the performance advantage of our solution with precise benchmark results. We improved the storage overhead of the encrypted index by a factor of $\mathbf{1.7}$ and the computation time thereof by $\mathbf{4x}$ on a $10^6$ character data stream.
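For background, the plaintext data structure underlying the index can be sketched in a few lines (the scheme itself stores an encrypted form of such a structure; this toy construction is quadratic-time for brevity).

```python
def suffix_array(s: str):
    # Toy O(n^2 log n) construction; production code would use linear-time algorithms.
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_positions(s: str, sa, pattern: str):
    # Suffixes starting with `pattern` form a contiguous block in the
    # suffix array; locate its start by binary search.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    out = []
    while lo < len(sa) and s[sa[lo]:sa[lo] + len(pattern)] == pattern:
        out.append(sa[lo])
        lo += 1
    return sorted(out)

sa = suffix_array("banana")
print(find_positions("banana", sa, "ana"))  # [1, 3]
```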
Last updated:  2017-02-22
Encryptor Combiners: A Unified Approach to Multiparty NIKE, (H)IBE, and Broadcast Encryption
Fermi Ma, Mark Zhandry
We define the concept of an encryptor combiner. Roughly, such a combiner takes as input n public keys for a public key encryption scheme, and produces a new combined public key. Anyone knowing a secret key for one of the input public keys can learn the secret key for the combined public key, but an outsider who just knows the input public keys (who can therefore compute the combined public key for himself) cannot decrypt ciphertexts from the combined public key. We actually think of public keys more generally as encryption procedures, which can correspond to, say, encrypting to a particular identity under an IBE scheme or encrypting to a set of attributes under an ABE scheme. We show that encryptor combiners satisfying certain natural properties can give natural constructions of multi-party non-interactive key exchange, low-overhead broadcast encryption, and hierarchical identity-based encryption. We then show how to construct two different encryptor combiners. Our first is built from universal samplers (which can in turn be built from indistinguishability obfuscation) and is sufficient for each application above, in some cases improving on existing obfuscation-based constructions. Our second is built from lattices, and is sufficient for hierarchical identity-based encryption. Thus, encryptor combiners serve as a new abstraction that (1) is a useful tool for designing cryptosystems, (2) unifies constructing hierarchical IBE from vastly different assumptions, and (3) provides a target for instantiating obfuscation applications from better tools.
Last updated:  2017-06-23
Practical Functional Encryption for Quadratic Functions with Applications to Predicate Encryption
Carmen Elisabetta Zaira Baltico, Dario Catalano, Dario Fiore, Romain Gay
We present two practically efficient functional encryption schemes for a large class of quadratic functionalities. Specifically, our constructions enable the computation of so-called bilinear maps on encrypted vectors. This represents a practically relevant class of functions that includes, for instance, multivariate quadratic polynomials (over the integers). Our realizations work over asymmetric bilinear groups and are surprisingly efficient and easy to implement. For instance, in our most efficient scheme the public key and each ciphertext consist of \(2n + 1\) and \(4n + 2\) group elements respectively, where n is the dimension of the encrypted vectors, while secret keys are only two group elements. Our two schemes build on similar ideas, but develop them in a different way in order to achieve distinct goals. Our first scheme is proved (selectively) secure under standard assumptions, while our second construction is concretely more efficient and is proved (adaptively) secure in the generic group model. As a byproduct of our functional encryption schemes, we show new predicate encryption schemes for degree-two polynomial evaluation, where ciphertexts consist of only \(O(n)\) group elements. This significantly improves the \(O(n^2)\) bound one would get from inner product encryption-based constructions.
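As a quick sanity check on these sizes (our arithmetic, derived only from the counts quoted above): for vectors of dimension \(n = 100\), the most efficient scheme would use
\[ |pk| = 2n + 1 = 201, \qquad |ct| = 4n + 2 = 402 \]
group elements, with secret keys of just two elements, whereas an \(O(n^2)\)-size inner-product-based construction would need on the order of \(n^2 = 10^4\) elements per ciphertext.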
Last updated:  2017-07-07
Group-Based Secure Computation: Optimizing Rounds, Communication, and Computation
Elette Boyle, Niv Gilboa, Yuval Ishai
A recent work of Boyle et al. (Crypto 2016) suggests that ``group-based'' cryptographic protocols, namely ones that only rely on a cryptographically hard (Abelian) group, can be surprisingly powerful. In particular, they present succinct two-party protocols for securely computing branching programs and NC1 circuits under the DDH assumption, providing the first alternative to fully homomorphic encryption. In this work we further explore the power of group-based secure computation protocols, improving both their asymptotic and concrete efficiency. We obtain the following results. - Black-box use of group. We modify the succinct protocols of Boyle et al. so that they only make a black-box use of the underlying group, eliminating an expensive non-black-box setup phase. - Round complexity. For any constant number of parties, we obtain 2-round MPC protocols based on a PKI setup under the DDH assumption. Prior to our work, such protocols were only known using fully homomorphic encryption or indistinguishability obfuscation. - Communication complexity. Under DDH, we present a secure 2-party protocol for any NC1 or log-space computation with $n$ input bits and $m$ output bits using $n + (1+o(1))m + \mathrm{poly}(\lambda)$ bits of communication, where $\lambda$ is a security parameter. In particular, our protocol can generate $n$ instances of bit-oblivious-transfer using $(4+o(1)) \cdot n$ bits of communication. This gives the first constant-rate OT protocol under DDH. - Computation complexity. We present several techniques for improving the computational cost of the share conversion procedure of Boyle et al., improving the concrete efficiency of group-based protocols by several orders of magnitude.
Last updated:  2024-04-05
Bitcoin as a Transaction Ledger: A Composable Treatment
Christian Badertscher, Ueli Maurer, Daniel Tschudi, and Vassilis Zikas
Bitcoin is one of the most prominent examples of a distributed cryptographic protocol that is extensively used in reality. Nonetheless, existing security proofs are property-based, and as such they do not support composition. In this work, we put forth a universally composable treatment of the Bitcoin protocol. We specify the goal that Bitcoin aims to achieve as an instance of a parameterizable ledger functionality and present a UC abstraction of the Bitcoin blockchain protocol. Our ideal functionality is weaker than the first proposed candidate by Kiayias, Zhou, and Zikas [EUROCRYPT’16], but unlike the latter suggestion, which is arguably not implementable by the UC Bitcoin protocol, we prove that the one proposed here is securely UC-realized by the protocol assuming access to a global clock, to model time-based executions, a random oracle, to model hash functions, and an idealized network, to model message dissemination. We further show how known property-based approaches can be cast as special instances of our treatment and how their underlying assumptions can be cast in UC as part of the setup functionalities and without restricting the environment or the adversary.
Last updated:  2018-08-27
Pattern Matching on Encrypted Streams
Nicolas Desmoulins, Pierre-Alain Fouque, Cristina Onete, Olivier Sanders
Pattern matching is essential in applications such as deep-packet inspection (DPI), searching on genomic data, or analyzing medical data. A simple task to do on plaintext data, pattern matching is much harder to do when the privacy of the data must be preserved. Existing solutions involve searchable encryption mechanisms with at least one of these three drawbacks: requiring an exhaustive (and static) list of keywords to be prepared before the data is encrypted (like in symmetric searchable encryption); requiring tokenization, i.e., breaking up the data to search into substrings and encrypting them separately (e.g., like BlindBox); relying on symmetric-key cryptography, thus implying a token-regeneration step for each encrypted-data source (e.g., user). Such approaches are ill-suited for pattern-matching with evolving patterns (e.g., updating virus signatures), variable searchword lengths, or when a single entity must filter ciphertexts from multiple parties. In this work, we introduce Searchable Encryption with Shiftable Trapdoors (SEST): a new primitive that allows for pattern matching with universal tokens (usable by all entities), in which keywords of arbitrary lengths can be matched to arbitrary ciphertexts. Our solution uses public-key encryption and bilinear pairings. It consists of projecting keywords on polynomials of degree equal to the length of the keyword and using a sliding-window-like technique to match the trapdoor to successive fragments of the encrypted data. In addition, very minor modifications to our solution enable it to take into account regular expressions, such as fully- or partly-unknown characters in a keyword (wildcards and interval/subset searches). Our trapdoor size is at most linear in the keyword length (and independent of the plaintext size), and we prove that the leakage to the searcher is only the trivial one: since the searcher learns whether the pattern occurs and where, it can distinguish based on different search results of a single trapdoor on two different plaintexts. To better show the usability of our scheme, we implemented it to run DPI on all the SNORT rules. We show that even for very large plaintexts, our encryption algorithm scales well. The pattern-matching algorithm is slightly slower, but extremely parallelizable, and it can thus be run even on very large data. Although our proofs use a (marginally) interactive assumption, we argue that this is a relatively small price to pay for the flexibility and privacy that we are able to attain.
Last updated:  2017-02-20
Ad Hoc PSM Protocols: Secure Computation Without Coordination
Amos Beimel, Yuval Ishai, Eyal Kushilevitz
We study the notion of {\em ad hoc secure computation}, recently introduced by Beimel et al. (ITCS 2016), in the context of the {\em Private Simultaneous Messages} (PSM) model of Feige et al.\ (STOC 1994). In ad hoc secure computation we have $n$ parties that may potentially participate in a protocol but, at the actual time of execution, only $k$ of them, whose identity is {\em not} known in advance, actually participate. This situation is particularly challenging in the PSM setting, where protocols are non-interactive (a single message from each participating party to a special output party) and where the parties rely on pre-distributed, correlated randomness (that in the ad hoc setting will have to take into account all possible sets of participants). We present several different constructions of ad hoc PSM protocols from standard PSM protocols. These constructions imply, in particular, that efficient information-theoretic ad hoc PSM protocols exist for NC1 and different classes of log-space computation, and that efficient computationally-secure ad hoc PSM protocols for polynomial-time computable functions can be based on a one-way function. As an application, we obtain an information-theoretic implementation of {\em order-revealing encryption} whose security holds for two messages. We also consider the case where the actual number of participating parties $t$ may be larger than the minimal $k$ for which the protocol is designed to work. In this case, it is unavoidable that the output party learns the output corresponding to each subset of $k$ out of the $t$ participants. Therefore, a ``best possible security'' notion, requiring that this will be the {\em only} information that the output party learns, is needed. We present connections between this notion and the previously studied notion of {\em $t$-robust PSM} (also known as ``non-interactive MPC''). We show that constructions in this setting for even simple functions (like AND or threshold) can be translated into non-trivial instances of program obfuscation (such as {\em point function obfuscation} and {\em fuzzy point function obfuscation}, respectively). We view these results as a negative indication that protocols with ``best possible security'' are impossible to realize efficiently in the information-theoretic setting or require strong assumptions in the computational setting.
Last updated:  2017-03-09
Toward Fine-Grained Blackbox Separations Between Semantic and Circular-Security Notions
Mohammad Hajiabadi, Bruce M. Kapron
We address the problems of whether t-circular-secure encryption can be based on (t-1)-circular-secure encryption or on semantic (CPA) security, if t = 1. While for t = 1 a folklore construction, based on CPA-secure encryption, can be used to build a 1-circular-secure encryption with the same secret-key and message space, no such constructions are known for the bit-encryption case, which is of particular importance in fully-homomorphic encryption. Also, for $t \geq 2$, all constructions of t-circular-secure encryption (bitwise or otherwise) are based on specific assumptions. We make progress toward these problems by ruling out all fully-blackbox constructions of -- 1-seed circular-secure public-key bit encryption from CPA-secure public-key encryption; -- t-seed circular-secure public-key encryption from (t-1)-seed circular-secure public-key encryption, for any $t \geq 2$. Informally, seed-circular security is a variant of the circular security notion in which the seed of the key-generation algorithm, instead of the secret key, is encrypted. We also show how to extend our first result to rule out a large and non-trivial class of constructions of 1-circular-secure bit encryption, which we dub key-isolating constructions. Our separation model follows that of Gertner, Malkin and Reingold (FOCS’01), which is a weaker separation model than that of Impagliazzo and Rudich.
Last updated:  2018-11-29
The Multi-User Security of Double Encryption
Viet Tung Hoang, Stefano Tessaro
It is widely known that double encryption does not substantially increase the security of a block cipher. Indeed, the classical meet-in-the-middle attack recovers the $2k$-bit secret key at the cost of roughly $2^k$ off-line enciphering operations, in addition to very few known plaintext-ciphertext pairs; thus, breaking double encryption is essentially as efficient as breaking the underlying cipher with a $k$-bit key. This paper revisits double encryption under the lens of multi-user security. We prove that its security degrades only very mildly with an increasing number of users, as opposed to single encryption, where security drops linearly. More concretely, we give a tight bound for the multi-user security of double encryption as a pseudorandom permutation in the ideal-cipher model, and describe matching attacks. Our contribution is also conceptual: To prove our result, we enhance and generalize the generic technique recently proposed by Hoang and Tessaro for lifting single-user to multi-user security. We believe this technique to be broadly applicable.
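The classical attack referenced above is easy to demonstrate at toy scale; the sketch below uses an invented 8-bit ``block cipher'' purely to exhibit the table-based meeting step (a real attack stores $2^k$ entries and runs in roughly $2^k$ time).

```python
def toy_encrypt(k: int, x: int) -> int:
    # Invented byte-wide keyed permutation; NOT a secure cipher.
    return ((x ^ k) * 167 + k) % 256

def toy_decrypt(k: int, y: int) -> int:
    inv167 = pow(167, -1, 256)  # 167 is odd, hence invertible mod 256
    return (((y - k) * inv167) % 256) ^ k

def meet_in_the_middle(p: int, c: int):
    # Forward table: E_{k1}(p) -> all k1 producing that middle value.
    mid = {}
    for k1 in range(256):
        mid.setdefault(toy_encrypt(k1, p), []).append(k1)
    # Backward pass: D_{k2}(c) must land in the table.
    return [(k1, k2) for k2 in range(256)
            for k1 in mid.get(toy_decrypt(k2, c), [])]

k1, k2, p = 0x3A, 0xC5, 0x42
c = toy_encrypt(k2, toy_encrypt(k1, p))
cands = meet_in_the_middle(p, c)
print((k1, k2) in cands, len(cands))  # True, plus other candidate pairs
```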
Last updated:  2018-06-10
Privacy-Preserving Search of Similar Patients in Genomic Data
Gilad Asharov, Shai Halevi, Yehuda Lindell, Tal Rabin
The growing availability of genomic data holds great promise for advancing medicine and research, but unlocking its full potential requires adequate methods for protecting the privacy of individuals whose genome data we use. One example of this tension is running a Similar Patient Query on remote genomic data: In this setting a doctor who holds the genome of his/her patient may try to find other individuals with ``close'' genomic data, and use the data of these individuals to help diagnose and find effective treatment for that patient's conditions. This is clearly a desirable mode of operation. However, the privacy exposure implications are considerable, and so we would like to carry out the above ``closeness'' computation in a privacy-preserving manner. In this work we put forward a new approach for highly efficient secure computation for computing an approximation of the Similar Patient Query problem. We present contributions on two fronts. First, an approximation method that is designed with the goal of achieving efficient private computation. Second, further optimizations of the two-party protocol. Our tests indicate that the approximation method works well, it returns the exact closest records in 98% of the queries and a very good approximation otherwise. As for speed, our protocol implementation takes just a few seconds to run on databases with thousands of records, each of length thousands of alleles, and it scales almost linearly with both the database size and the length of the sequences in it. As an example, in the datasets of the recent iDASH competition, after a one-time preprocessing of around 12 seconds, it takes around a second to find the nearest five records to a query, in a size-500 dataset of length-3500 sequences. This is 2-3 orders of magnitude faster than using state-of-the-art secure protocols with existing edit distance algorithms.
Last updated:  2019-12-31
Constraint-hiding Constrained PRFs for NC1 from LWE
Ran Canetti, Yilei Chen
Constraint-hiding constrained PRFs (CHCPRFs), initially studied by Boneh, Lewi, and Wu [PKC 2017], are constrained PRFs where the constrained key hides the description of the constraint. Envisioned with powerful applications such as searchable encryption, private-detectable watermarking, and symmetric deniable encryption, the only known candidates of CHCPRFs are based on indistinguishability obfuscation or multilinear maps with strong security properties. In this paper, we construct CHCPRFs for all NC1 circuits from the Learning with Errors assumption. The construction draws heavily from the graph-induced multilinear maps by Gentry, Gorbunov, and Halevi [TCC 2015], as well as the existing lattice-based PRFs. Our construction gives an instance of the GGH15 applications with a security reduction from LWE. We also show how to build from CHCPRFs reusable garbled circuits (RGC), or equivalently private-key function-hiding functional encryptions with 1-key security. This provides a different approach to constructing RGC from that of Goldwasser et al. [STOC 2013].
Last updated:  2018-06-11
Computing generator in cyclotomic integer rings, A subfield algorithm for the Principal Ideal Problem in L(1/2) and application to cryptanalysis of a FHE scheme
Jean-François Biasse, Thomas Espitau, Pierre-Alain Fouque, Alexandre Gélin, Paul Kirchner
The Principal Ideal Problem (resp. Short Principal Ideal Problem), abbreviated PIP (resp. SPIP), consists in finding a generator (resp. short generator) of a principal ideal in the ring of integers of a number field. Several lattice-based cryptosystems rely on the presumed hardness of these two problems. In practice, most of them do not use an arbitrary number field but a power-of-two cyclotomic field. The Smart and Vercauteren fully homomorphic encryption scheme and the multilinear map of Garg, Gentry, and Halevi epitomize this common restriction. Recently, Cramer, Ducas, Peikert, and Regev showed that solving the SPIP in such cyclotomic rings boils down to solving the PIP. In this paper, we present a heuristic algorithm that solves the PIP in prime-power cyclotomic fields in subexponential time L(1/2) in the discriminant of the number field. This is achieved by descending to its totally real subfield. The implementation of our algorithm allows us to recover, in practice, the secret key of the Smart and Vercauteren scheme for the smallest proposed parameters (in dimension 256).
Last updated:  2017-02-20
Partitioned Group Password-Based Authenticated Key Exchange
Dario Fiore, Maria Isabel Gonzalez Vasco, Claudio Soriente
Group Password-Based Authenticated Key Exchange (GPAKE) allows a group of users to establish a secret key, as long as all of them share the same password. However, in existing GPAKE protocols, as soon as one user runs the protocol with a non-matching password, all the others abort and no key is established. In this paper we seek a more flexible, yet secure, GPAKE and put forward the notion of partitioned GPAKE. Partitioned GPAKE tolerates users that run the protocol on different passwords. Through a protocol run, any subgroup of users that indeed share a password establishes a session key, factoring out the ``noise'' of inputs by users holding different passwords. At the same time, any two keys, each established by a different subgroup of users, are pair-wise independent if the corresponding subgroups hold different passwords. We also introduce the notion of password-privacy for partitioned GPAKE, which is a kind of affiliation-hiding property, ensuring that an adversary should not be able to tell whether any given set of users share a password. Finally, we propose an efficient instantiation of partitioned GPAKE building on an unforgeable symmetric encryption scheme and a PAKE by Bellare et al. Our proposal is proven secure in the random oracle/ideal cipher model, and requires only two communication rounds.
Last updated:  2017-07-09
Estimation of the Hardness of the Learning with Errors Problem with a Restricted Number of Samples
Nina Bindel, Johannes Buchmann, Florian Göpfert, Markus Schmidt
The Learning with Errors problem (LWE) is one of the most important hardness assumptions on which lattice-based constructions base their security. Recently, Albrecht et al. (Journal of Mathematical Cryptology, 2015) presented the software tool LWE-Estimator to estimate the hardness of concrete LWE instances, making the choice of parameters for lattice-based primitives easier and better comparable. To give lower bounds on the hardness, it is assumed that each algorithm is given the corresponding optimal number of samples. However, this is not the case for many cryptographic applications. In this work we first analyze the hardness of LWE instances given a restricted number of samples. For this, we describe LWE solvers from the literature and estimate their runtime considering a limited number of samples. Based on our theoretical results we extend the LWE-Estimator. Furthermore, we evaluate LWE instances proposed for cryptographic schemes and show the impact of restricting the number of available samples.
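To make ``number of samples'' concrete, here is a toy LWE instance generator (our illustration; parameters are far below cryptographic size): an attacker limited to the $m$ pairs $(a, b)$ below simply has fewer equations to work with.

```python
import random

def lwe_instance(n=8, m=16, q=97, noise=lambda: random.randint(-2, 2)):
    """Return a secret s and m noisy samples (a, <a, s> + e mod q)."""
    s = [random.randrange(q) for _ in range(n)]
    samples = []
    for _ in range(m):  # m = the number of samples available to the attacker
        a = [random.randrange(q) for _ in range(n)]
        b = (sum(ai * si for ai, si in zip(a, s)) + noise()) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_instance()
print(len(samples), "samples over Z_97 for an 8-dimensional secret")
```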
Last updated:  2018-07-02
Revisiting AES Related-Key Differential Attacks with Constraint Programming
David Gérault, Pascal Lafourcade, Marine Minier, Christine Solnon
The Advanced Encryption Standard (AES) is one of the most studied symmetric encryption schemes. During the last years, several attacks have been discovered in different adversary models. In this paper, we focus on related-key differential attacks, where the adversary may introduce differences in plaintext pairs and also in keys. We show that Constraint Programming (CP) can be used to model these attacks, and that it allows us to efficiently find all optimal related-key differential characteristics for AES-128, AES-192 and AES-256. In particular, we improve the best related-key differential for the whole AES-256 and give the best related-key differential on 10 rounds of AES-192, which is the differential trail with the longest path. Those results allow us to improve existing related-key distinguishers, basic related-key attacks and $q$-multicollisions on AES-256.
Last updated:  2018-10-15
How (not) to Use Welch's T-test in Side-Channel Security Evaluations
François-Xavier Standaert
The Test Vector Leakage Assessment (TVLA) methodology is a qualitative tool relying on Welch's T-test to assess the security of cryptographic implementations against side-channel attacks. Despite known limitations (e.g., risks of false negatives and positives), it is sometimes considered as a pass-fail test to determine whether such implementations are "safe" or not (without clear definition of what is "safe"). In this note, we clarify the limited quantitative meaning of this test when used as a standalone tool. For this purpose, we first show that the straightforward application of this approach to assess the security of a masked implementation is not sufficient. More precisely, we show that even in a simple (more precisely, univariate) case study that seems best suited for the TVLA methodology, detection (or lack thereof) with Welch's T-test can be totally disconnected from the actual security level of an implementation. For this purpose, we put forward the case of a realistic masking scheme that looks very safe from the TVLA point-of-view and is nevertheless easy to break. We then discuss this result in more general terms and argue that this limitation is shared by all "moment-based" security evaluations. We conclude the note positively, by describing how to use moment-based analyses as a useful ingredient of side-channel security evaluations, to determine a "security order".
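For reference, the statistic in question is straightforward to compute; the minimal fixed-vs-random sketch below (one sample point per trace, with the commonly used $|t| > 4.5$ threshold) is exactly the kind of standalone check whose quantitative meaning the note cautions against over-interpreting.

```python
import math

def welch_t(fixed, rand):
    """Welch's t-statistic between fixed-input and random-input traces."""
    n1, n2 = len(fixed), len(rand)
    m1, m2 = sum(fixed) / n1, sum(rand) / n2
    v1 = sum((x - m1) ** 2 for x in fixed) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in rand) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

t = welch_t([1.02, 0.98, 1.01, 1.00], [1.20, 1.18, 1.22, 1.19])
print(t, "leak flagged" if abs(t) > 4.5 else "no first-order leak detected")
```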
Last updated:  2017-07-05
Modifying an Enciphering Scheme after Deployment
Paul Grubbs, Thomas Ristenpart, Yuval Yarom
Assume that a symmetric encryption scheme has been deployed and used with a secret key. We later must change the encryption scheme in a way that preserves the ability to decrypt (a subset of) previously encrypted plaintexts. Frequent real-world examples are migrating from a token-based encryption system for credit-card numbers to a format-preserving encryption (FPE) scheme, or extending the message space of an already deployed FPE. The ciphertexts may be stored in systems for which it is not easy or not efficient to retrieve them (to re-encrypt the plaintext under the new scheme). We introduce methods for functionality-preserving modifications to encryption, focusing particularly on deterministic, length-preserving ciphers such as those used to perform format-preserving encryption. We provide a new technique, that we refer to as the Zig-Zag construction, that allows one to combine two ciphers using different domains in a way that results in a secure cipher on one domain. We explore its use in the two settings above, replacing token-based systems and extending message spaces. We develop appropriate security goals and prove security relative to them assuming the underlying ciphers are themselves secure as strong pseudorandom permutations.
Last updated:  2017-02-20
Dispersed Cryptography and the Quotient Ring Transform
Anna Johnston
This paper describes a radically different privacy, security and integrity solution. Dispersed Cryptography converts the cloud from a security threat into a security asset by combining a standard stream cipher and the Quotient Ring Transform (QRT). The result is an integrated error correction/encryption algorithm. This encoding disperses data, breaking it into many smaller pieces and scattering them to different sites. No single site is critical; any can be lost without losing data. No single site can access data, even if the cryptovariable (secret key) is compromised. The resulting system is more flexible and seamlessly adds both data integrity and security. The underlying codes are linear, and therefore have homomorphic properties and may be used in coding based quantum resistant cryptography.
Last updated:  2019-05-20
Hashing Garbled Circuits for Free
Xiong Fan, Chaya Ganesh, Vladimir Kolesnikov
We introduce {\em Free Hash}, a new approach to generating Garbled Circuit (GC) hash at no extra cost during GC generation. This is in contrast with state-of-the-art approaches, which hash GCs at computational cost of up to $6\times$ of GC generation. GC hashing is at the core of the cut-and-choose technique of GC-based secure function evaluation (SFE). Our main idea is to intertwine hash generation/verification with GC generation and evaluation. While we {\em allow} an adversary to generate a GC $\widehat{\mathsf{GC}}$ whose hash collides with an honestly generated $\mathsf{GC}$, such a $\widehat{\mathsf{GC}}$ w.h.p. will fail evaluation and cheating will be discovered. Our GC hash is simply a (slightly modified) XOR of all the gate table rows of the GC. It is compatible with Free XOR and half-gates garbling, and can be made to work with many cut-and-choose SFE protocols. With today's network speeds being not far behind hardware-assisted fixed-key garbling throughput, eliminating the GC hashing cost will significantly improve SFE performance. Our estimates show substantial cost reduction in typical settings, and up to factor $6$ in specialized applications relying on GC hashes. We implemented our GC hashing algorithm and report on its performance.
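The core idea admits a short sketch (toy 128-bit rows standing in for real garbled gates; the paper's hash is a slightly modified XOR, whose exact modification we do not reproduce): the hash accumulates for free while the gate tables are produced.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_hash(num_gates: int):
    """Emit a toy 'garbled circuit' and its hash in a single pass."""
    gc, h = [], bytes(16)
    for _ in range(num_gates):
        # Stand-in for a real garbled gate: four 128-bit ciphertext rows.
        rows = [secrets.token_bytes(16) for _ in range(4)]
        gc.append(rows)
        for row in rows:
            h = xor_bytes(h, row)  # hashing interleaved with generation
    return gc, h

gc, h = garble_and_hash(1000)
print(h.hex())
```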
Last updated:  2017-02-28
A Provably Secure PKCS\#11 Configuration Without Authenticated Attributes
Ryan Stanley-Oakes
Cryptographic APIs like PKCS#11 are interfaces to trusted hardware where keys are stored; the secret keys should never leave the trusted hardware in plaintext. In PKCS#11 it is possible to give keys conflicting roles, leading to a number of key-recovery attacks. To prevent these attacks, one can authenticate the attributes of keys when wrapping, but this is not standard in PKCS#11. Alternatively, one can configure PKCS#11 to place additional restrictions on the commands permitted by the API. Bortolozzo et al. proposed a configuration of PKCS#11, called the Secure Templates Patch (STP), supporting symmetric encryption and key wrapping. However, the security guarantees for STP given by Bortolozzo et al. are with respect to a weak attacker model. STP has been implemented as a set of filtering rules in Caml Crush, a software filter for PKCS#11 that rejects certain API calls. The filtering rules in Caml Crush extend STP by allowing users to compute and verify MACs and so the previous analysis of STP does not apply to this configuration. We give a rigorous analysis of STP, including the extension used in Caml Crush. Our contribution is as follows: (i) We show that the extension of STP used in Caml Crush is insecure. (ii) We propose a strong, computational security model for configurations of PKCS#11 where the adversary can adaptively corrupt keys and prove that STP is secure in this model. (iii) We prove the security of an extension of STP that adds support for public-key encryption and digital signatures.
Last updated:  2018-09-28
Composable and Robust Outsourced Storage
Christian Badertscher, Ueli Maurer
The security of data outsourcing mechanisms has become a crucial aspect of today's IT infrastructures; such mechanisms are the cryptographic foundations of real-world applications. The fundamental goals are ensuring storage integrity and auditability, confidentiality, and access pattern hiding, as well as combinations of all of them. Despite sharing a common setting, security analyses of these tasks are often performed in a stand-alone fashion expressed in different models, which makes it hard to assess the overall security of a protocol or application involving several security schemes at once. In this work, we fill this gap and propose a composable framework suitable to capture various aspects of outsourced storage security and its applications. We instantiate the basic client-server setting in this model, where the goal of the honest client is to retain security in the presence of a malicious server. Three specific contributions of this paper are: 1.) We present a novel definition for secure and robust outsourcing schemes and underline why this is needed in practice. Our definition is stronger than previous definitions for oblivious RAM or software protection in that it assures strong security guarantees against active attacks. Schemes meeting the definition not only assure that an attacker cannot learn the access pattern, but guarantee resilience to errors and the prevention of targeted attacks against specific locations. Unfortunately, several existing schemes cannot achieve this high level of security. For completeness, we provide a protocol based on Path ORAM that showcases that stronger security is actually achievable. 2.) We present a novel definition for auditable storage, capturing the guarantee that a successful audit implies that the current server state allows the client to retrieve his data. We develop an audit mechanism, based on secure and robust outsourcing schemes, that is similar to the construction by Cash et al. (Eurocrypt 2013), but is universally composable and fault-tolerant. 3.) We revisit the security claim of a widely-used challenge-response audit mechanism, in which the server has to compute a hash $H(F||c)$ on the file $F$ concatenated with a uniformly random challenge $c$ chosen by the client. Being concerned with composable security, we prove that this audit mechanism is not secure, even in the random oracle model, without additional assumptions. The composable security of this basic audit scheme was implicitly assumed in Ristenpart et al. (Eurocrypt 2011). To complete the picture, we state the additional assumptions for this audit mechanism to be provably secure and investigate the (in)applicability of hash-function constructions in this setting.
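For concreteness, the challenge-response audit revisited in contribution 3.) is the following simple protocol (our sketch); the paper's point is precisely that this mechanism, by itself, is not composably secure without further assumptions.

```python
import hashlib, os

def client_challenge() -> bytes:
    return os.urandom(32)  # fresh uniformly random challenge c

def server_respond(stored_file: bytes, c: bytes) -> bytes:
    return hashlib.sha256(stored_file + c).digest()  # H(F || c)

def client_verify(local_file: bytes, c: bytes, response: bytes) -> bool:
    return hashlib.sha256(local_file + c).digest() == response

F = b"outsourced file contents"
c = client_challenge()
print(client_verify(F, c, server_respond(F, c)))  # True
```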
Last updated:  2017-02-16
Attacks on Karlsson and Mitrokotsa's Grouping-Proof-Distance-Bounding Protocol
Roel Peeters, Jens Hermans, Aysajan Abidin
In the recent IEEE communication letter ``Grouping-Proof-Distance-Bounding Protocols: Keep All Your Friends Close'' by Karlsson and Mitrokotsa, a protocol for grouping-proof distance-bounding (GPDB) is proposed. In this letter, we show that the proof that is generated by the proposed GPDB protocol does not actually prove anything. Furthermore, we provide a construction towards a distance-bounding grouping-proof; however, it remains unclear if one can ever truly combine (privacy-preserving) distance-bounding and a grouping-proof.
Last updated:  2017-02-21
A Practical Multivariate Blind Signature Scheme
Albrecht Petzoldt, Alan Szepieniec, Mohamed Saied Emam Mohamed
Multivariate Cryptography is one of the main candidates for creating post-quantum cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. However, there is a lack of multivariate signature schemes with special properties such as blind, ring and group signatures. In this paper, we propose a generic technique to transform multivariate signature schemes into blind signature schemes and show the practicality of the construction on the example of Rainbow. The resulting scheme satisfies the usual blindness criterion and a one-more-unforgeability criterion adapted to MQ signatures, produces short blind signatures and is very efficient.
Last updated:  2017-02-16
Topology-Hiding Computation Beyond Logarithmic Diameter
Adi Akavia, Tal Moran
A distributed computation in which nodes are connected by a partial communication graph is called \emph{topology-hiding} if it does not reveal information about the graph (beyond what is revealed by the output of the function). Previous results [Moran, Orlov, Richelson; TCC'15] have shown that topology-hiding computation protocols exist for graphs of logarithmic diameter (in the number of nodes), but the feasibility question for graphs of larger diameter was open even for very simple graphs such as chains, cycles and trees. In this work, we take a step towards topology-hiding computation protocols for arbitrary graphs by constructing protocols that can be used in a large class of {\em large-diameter networks}, including cycles, trees and graphs with logarithmic \emph{circumference}. Our results use very different methods from [MOR15] and can be based on a standard assumption (such as DDH).
Last updated:  2017-02-16
Sublinear Zero-Knowledge Arguments for RAM Programs
Payman Mohassel, Mike Rosulek, Alessandra Scafuro
We describe a new succinct zero-knowledge argument protocol with the following properties. The prover commits to a large data-set $M$, and can thereafter prove many statements of the form $\exists w : \mathcal{R}_i(M,w)=1$, where $\mathcal{R}_i$ is a public function. The protocol is {\em succinct} in the sense that the cost for the verifier (in computation \& communication) does not depend on $|M|$, not even in any initialization phase. In each proof, the computation/communication cost for {\em both} the prover and the verifier is proportional only to the running time of an oblivious RAM program implementing $\mathcal{R}_i$ (in particular, this can be sublinear in $|M|$). The only costs that scale with $|M|$ are the computational costs of the prover in a one-time initial commitment to $M$. Known sublinear zero-knowledge proofs either require an initialization phase where the work of the verifier is proportional to $|M|$ and are therefore sublinear only in an amortized sense, or require that the computational cost for the prover is proportional to $|M|$ upon {\em each proof}. Our protocol uses efficient crypto primitives in a black-box way and is UC-secure in the {\em global}, non-programmable random oracle, hence it does not rely on any trusted setup assumption.
Last updated:  2017-03-03
New Collision Attacks on Round-Reduced Keccak
Kexin Qiao, Ling Song, Meicheng Liu, Jian Guo
In this paper, we focus on collision attacks against the Keccak hash function family and some of its variants. Following the framework developed by Dinur et al. at FSE~2012, where 4-round collisions were found by combining 3-round differential trails and 1-round connectors, we extend the connectors one round further and hence achieve collision attacks for up to 5 rounds. The extension is possible thanks to the large degree of freedom of the wide internal state. By linearizing all S-boxes of the first round, the problem of finding solutions of 2-round connectors is converted to that of solving a system of linear equations. However, due to the quick freedom reduction from the linearization, the system has a solution only when the 3-round differential trails satisfy some additional conditions. We develop a dedicated differential trail search strategy and find that such special differentials indeed exist. As a result, the first practical collision attack against 5-round SHAKE128 and two 5-round instances of the Keccak collision challenges are found, with real examples. We also give the first results against 5-round Keccak224 and 6-round Keccak collision challenges. It is remarked that the work here is still far from threatening the security of the full 24-round Keccak family.
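Since the linearization reduces connector search to linear algebra over GF(2), the workhorse is an F2 system solver; a minimal Gaussian elimination, at toy scale and with none of the attack-specific structure, might look as follows.

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2); A is a list of 0/1 rows. None if inconsistent."""
    n, m = len(A), len(A[0])
    rows = [row[:] + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    pivots, r = [], 0
    for c in range(m):
        pivot = next((i for i in range(r, n) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] for row in rows[r:]):  # 0 = 1: no solution for this system
        return None
    x = [0] * m  # free variables set to 0
    for i, c in enumerate(pivots):
        x[c] = rows[i][-1]
    return x

print(solve_gf2([[1, 1, 0], [0, 1, 1]], [1, 0]))  # e.g. [1, 0, 0]
```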
Last updated:  2017-10-09
Robust Transforming Combiners from Indistinguishability Obfuscation to Functional Encryption
Prabhanjan Ananth, Aayush Jain, Amit Sahai
Indistinguishability Obfuscation (iO) has enabled an incredible number of new and exciting applications. However, our understanding of how to actually build secure iO remains in its infancy. While many candidate constructions have been published, some have been broken, and it is unclear which of the remaining candidates are secure. This work deals with the following basic question: \emph{Can we hedge our bets when it comes to iO candidates?} In other words, if we have a collection of iO candidates, and we only know that at least one of them is secure, can we still make use of these candidates? This topic was recently studied by Ananth, Jain, Naor, Sahai, and Yogev [CRYPTO 2016], who showed how to construct a robust iO combiner: Specifically, they showed that given the situation above, we can construct a single iO scheme that is secure as long as (A) at least one candidate iO scheme is a subexponentially secure iO, and (B) either the subexponential DDH or LWE assumptions hold. In this work, we make three contributions: \begin{itemize} \item (\textbf{Better robust iO combiners.}) First, we work to improve the assumptions needed to obtain the same result as Ananth et al.: namely we show how to replace the DDH/LWE assumption with the assumption that subexponentially secure one-way functions exist. \item (\textbf{FE and NIKE from iO candidates and minimal assumptions.}) Second, we consider a broader question: what if we start with several iO candidates where only one works, but we don't care about achieving iO itself, rather we want to achieve concrete applications of iO? In this case, we are able to work with the \emph{minimal} assumption of just polynomially secure one-way functions, and where the working iO candidate only achieves polynomial security. Under these circumstances, we show how to achieve both \emph{functional encryption} and \emph{non-interactive multiparty key exchange (NIKE)}. \item (\textbf{Correctness Amplification for iO from polynomial security and one-way functions.}) Finally, along the way, we obtain a result of independent interest: Recently, Bitansky and Vaikuntanathan [TCC 2016] showed how to amplify the correctness of an iO scheme, but they needed subexponential security for the iO scheme and also require subexponentially secure DDH or LWE. We show how to achieve the same correctness amplification result, but requiring only polynomial security from the iO scheme, and assuming only polynomially secure one-way functions. \end{itemize}
Last updated:  2017-09-22
Boolean Searchable Symmetric Encryption with Worst-Case Sub-Linear Complexity
Seny Kamara, Tarik Moataz
Recent work on searchable symmetric encryption (SSE) has focused on increasing its expressiveness. A notable example is the OXT construction (Cash et al., CRYPTO '13), which is the first SSE scheme to support conjunctive keyword queries with sub-linear search complexity. While OXT efficiently supports disjunctive and boolean queries that can be expressed in searchable normal form, it can only handle arbitrary disjunctive and boolean queries in linear time. This motivates the problem of designing expressive SSE schemes with worst-case sub-linear search; that is, schemes that remain highly efficient for any keyword query. In this work, we address this problem and propose non-interactive highly efficient SSE schemes that handle arbitrary disjunctive and boolean queries with worst-case sub-linear search and optimal communication complexity. Our main construction, called IEX, makes black-box use of an underlying single keyword SSE scheme which we can instantiate in various ways. Our first instantiation, IEX-2Lev, makes use of the recent 2Lev construction (Cash et al., NDSS '14) and is optimized for search at the expense of storage overhead. Our second instantiation, IEX-ZMF, relies on a new single keyword SSE scheme we introduce called ZMF and is optimized for storage overhead at the expense of efficiency (while still achieving asymptotically sub-linear search). Our ZMF construction is the first adaptively-secure highly compact SSE scheme and may be of independent interest. At a very high level, it can be viewed as an encrypted version of a new Bloom filter variant we refer to as a Matryoshka filter. In addition, we show how to extend IEX to be dynamic and forward-secure. To evaluate the practicality of our schemes, we designed and implemented a new encrypted search framework called Clusion. Our experimental results demonstrate the practicality of IEX and of its instantiations with respect to both search (for IEX-2Lev) and storage overhead (for IEX-ZMF).
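As background for the filter-based instantiation, a plain Bloom filter can be sketched as below; the paper's Matryoshka filter is a new variant whose nesting structure we do not attempt to reproduce here.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        for i in range(self.k):  # k independent hash positions per item
            h = hashlib.sha256(f"{i}|{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        # May yield false positives, never false negatives.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("keyword")
print("keyword" in bf, "other" in bf)  # True False (w.h.p.)
```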
Last updated:  2017-02-16
Non-Interactive Secure 2PC in the Offline/Online and Batch Settings
Payman Mohassel, Mike Rosulek
In cut-and-choose protocols for two-party secure computation (2PC) the main overhead is the number of garbled circuits that must be sent. Recent work (Lindell, Riva; Huang et al., Crypto 2014) has shown that in a batched setting, when the parties plan to evaluate the same function $N$ times, the number of garbled circuits per execution can be reduced by a $O(\log N)$ factor compared to the single-execution setting. This improvement is significant in practice: an order of magnitude for $N$ as low as one thousand. Besides the number of garbled circuits, communication round trips are another significant performance bottleneck. Afshar et al. (Eurocrypt 2014) proposed an efficient cut-and-choose 2PC that is round-optimal (one message from each party), but in the single-execution setting. In this work we present new malicious-secure 2PC protocols that are round-optimal and also take advantage of batching to reduce cost. Our contributions include: \begin{itemize} \item A 2-message protocol for batch secure computation ($N$ instances of the same function). The number of garbled circuits is reduced by a $O(\log N)$ factor over the single-execution case. However, other aspects of the protocol that depend on the input/output size of the function do not benefit from the same $O(\log N)$-factor savings. \item A 2-message protocol for batch secure computation, in the random oracle model. All aspects of this protocol benefit from the $O(\log N)$-factor improvement, except for small terms that do not depend on the function being evaluated. \item A protocol in the offline/online setting. After an offline preprocessing phase that depends only on the function $f$ and $N$, the parties can securely evaluate $f$, $N$ times (not necessarily all at once). Our protocol's online phase is only 2 messages, and the total online communication is only $\ell + O(\kappa)$ bits, where $\ell$ is the input length of $f$ and $\kappa$ is a computational security parameter. This is only $O(\kappa)$ bits more than the information-theoretic lower bound for malicious 2PC. \end{itemize}
Last updated:  2017-02-16
On the Exact Round Complexity of Self-Composable Two-Party Computation
Sanjam Garg, Susumu Kiyoshima, Omkant Pandey
The round complexity of secure computation has been a fundamental problem in cryptography. Katz and Ostrovsky proved that 5 rounds are both necessary and sufficient for secure computation in the stand-alone setting, thus resolving the exact round complexity of stand-alone secure computation. In contrast, the round complexity of secure computation in the concurrent setting, where several protocols may run simultaneously, is poorly understood. Since standard polynomial time simulation is impossible in the concurrent setting, alternative security notions have been proposed, e.g., super-polynomial simulation (SPS). While SPS security can be achieved in constant rounds, the actual constant (> 20) is far from optimal. In this work, we take the first steps towards studying the exact round complexity of concurrent secure computation. We focus on the two-party case and present a new secure computation protocol that achieves SPS security under concurrent self-composition. Our protocol has 5 rounds assuming quasi-polynomially-hard injective one-way functions (or 7 rounds assuming standard polynomially-hard collision-resistant hash functions). We also require other standard assumptions, specifically trapdoor OWPs and lossy TDFs. This matches the rounds for stand-alone secure computation. More specifically, our security proof presents a polynomial time reduction from SPS security to 3-round public-coin non-malleable commitments with appropriate extractability properties. Such commitments are known based on quasi-polynomially-hard injective OWFs. (The reduction also works with a special 6-round non-malleable commitment to yield the 7-round result under CRHFs.)
Last updated:  2017-02-16
Separating IND-CPA and Circular Security for Unbounded Length Key Cycles
Rishab Goyal, Venkata Koppula, Brent Waters
A public key encryption scheme is said to be n-circular secure if no PPT adversary can distinguish between encryptions of an n-length key cycle and n encryptions of zero. One interesting question is whether circular security comes for free from IND-CPA security. Recent works have addressed this question, showing that for all integers n, there exists an IND-CPA scheme that is not n-circular secure. However, this leaves open the possibility that for every IND-CPA cryptosystem, there exists a cycle length l, dependent on the cryptosystem (and the security parameter), such that the scheme is l-circular secure. If this were true, it would directly lead to many applications; in particular, it would give us a fully homomorphic encryption scheme via Gentry’s bootstrapping. In this work, we show that this is not true. Assuming indistinguishability obfuscation and leveled homomorphic encryption, we construct an IND-CPA scheme such that for all cycle lengths l, the scheme is not l-circular secure.
Last updated:  2018-03-21
One-Shot Verifiable Encryption from Lattices
Vadim Lyubashevsky, Gregory Neven
Verifiable encryption allows one to prove properties about encrypted data and is an important building block in the design of cryptographic protocols, e.g., group signatures, key escrow, fair exchange protocols, etc. Existing lattice-based verifiable encryption schemes, and even just proofs of knowledge of the encrypted data, require parallel composition of proofs to reduce the soundness error, resulting in proof sizes that are only truly practical when amortized over a large number of ciphertexts. In this paper, we present a new construction of a verifiable encryption scheme, based on the hardness of the Ring-LWE problem in the random-oracle model, for short solutions to linear equations over polynomial rings. Our scheme is "one-shot", in the sense that a single instance of the proof already has negligible soundness error, yielding compact proofs even for individual ciphertexts. Whereas verifiable encryption usually guarantees that decryption can recover a witness for the original language, we relax this requirement so that decryption recovers a witness of a related but extended language. This relaxation is sufficient for many applications and we illustrate this with example usages of our scheme in key escrow and verifiably encrypted signatures. One of the interesting aspects of our construction is that the decryption algorithm is probabilistic and uses the proof as input (rather than using only the ciphertext). The decryption time for honestly-generated ciphertexts only depends on the security parameter, while the expected running time for decrypting an adversarially-generated ciphertext is directly related to the number of random-oracle queries of the adversary who created it. This property suffices in most practical scenarios, especially in situations where the ciphertext proof is part of an interactive protocol, where the decryptor is substantially more powerful than the adversary, or where adversaries can be otherwise discouraged from submitting malformed ciphertexts.
Last updated:  2017-02-16
Twisted $\mu_4$-normal form for elliptic curves
David Kohel
We introduce the twisted $\mu_4$-normal form for elliptic curves, deriving in particular addition algorithms with complexity $9M + 2S$ and doubling algorithms with complexity $2M + 5S + 2m$ over a binary field. Every ordinary elliptic curve over a finite field of characteristic 2 is isomorphic to one in this family. This improvement to the addition algorithm is comparable to the $7M + 2S$ achieved for the $\mu_4$-normal form, and replaces the previously best known complexity of $13M + 3S$ on López-Dahab models applicable to these twisted curves. The derived doubling algorithm is essentially optimal, without any assumption of special cases. We show moreover that the Montgomery scalar multiplication with point recovery carries over to the twisted models, giving symmetric scalar multiplication adapted to protect against side channel attacks, with a cost of $4M + 4S + 1m_t + 2m_c$ per bit. In characteristic different from 2, we establish a linear isomorphism with the twisted Edwards model. This work, in complement to the introduction of $\mu_4$-normal form, fills the lacuna in the body of work on efficient arithmetic on elliptic curves over binary fields, explained by this common framework for elliptic curves in $\mu_4$-normal form in any characteristic. The improvements are analogous to those which the Edwards and twisted Edwards models achieved for elliptic curves over finite fields of odd characteristic, and extend $\mu_4$-normal form to cover the binary NIST curves.
Last updated:  2017-02-16
Separating Semantic and Circular Security for Symmetric-Key Bit Encryption from the Learning with Errors Assumption
Rishab Goyal, Venkata Koppula, Brent Waters
In this work we separate private-key semantic security from circular security using the Learning with Errors assumption. Prior works used the less standard assumptions of multilinear maps or indistinguishability obfuscation. To achieve our results we develop new techniques for obliviously evaluating branching programs.
Last updated:  2017-03-29
Quantum Authentication with Key Recycling
Christopher Portmann
We show that a family of quantum authentication protocols introduced in [Barnum et al., FOCS 2002] can be used to construct a secure quantum channel and additionally recycle all of the secret key if the message is successfully authenticated, and recycle part of the key if tampering is detected. We give a full security proof that constructs the secure channel given only insecure noisy channels and a shared secret key. We also prove that the number of recycled key bits is optimal for this family of protocols, i.e., there exists an adversarial strategy to obtain all non-recycled bits. Previous works recycled less key and only gave partial security proofs, since they did not consider all possible distinguishers (environments) that may be used to distinguish the real setting from the ideal secure quantum channel and secret key resource.
Last updated:  2017-02-22
A New Structural-Differential Property of 5-Round AES
Lorenzo Grassi, Christian Rechberger, Sondre Rønjom
AES is probably the most widely studied and used block cipher. Versions with a reduced number of rounds are also used as a building block in many cryptographic schemes, e.g., several candidates of the CAESAR competition are based on it. So far, non-random properties which are independent of the secret key are known for up to 4 rounds of AES. These include differential, impossible differential, and integral properties. In this paper we describe a new structural property for up to 5 rounds of AES, differential in nature and which is independent of the secret key, of the details of the MixColumns matrix (with the exception that the branch number must be maximal) and of the SubBytes operation. It is very simple: By appropriate choices of difference for a number of input pairs it is possible to make sure that the number of times that the difference of the resulting output pairs lie in a particular subspace is always a multiple of 8. We not only observe this property experimentally (using a small-scale version of AES), we also give a detailed proof as to why it has to exist. As a first application of this property, we describe a way to distinguish the 5-round AES permutation (or its inverse) from a random permutation with only $2^{32}$ chosen texts, a computational cost of $2^{35.6}$ look-ups into memory of size $2^{36}$ bytes, and a success probability greater than 99%.
Last updated:  2018-10-30
The SM9 Cryptographic Schemes
Zhaohui Cheng
SM9 is a Chinese official cryptography standard which defines a set of identity-based cryptographic schemes from pairings. This report describes the technical specification of SM9. The security of the schemes is also analyzed.
Last updated:  2018-02-08
Masking Proofs are Tight (and How to Exploit it in Security Evaluations)
Vincent Grosso, François-Xavier Standaert
Evaluating the security level of a leaking implementation against side-channel attacks is a challenging task. This is especially true when countermeasures such as masking are implemented since in this case: (i) the number of measurements to perform a key recovery may become prohibitive for certification laboratories, and (ii) applying optimal (multivariate) attacks may be computationally intensive and technically challenging. In this paper, we show that by taking advantage of the tightness of masking security proofs, we can significantly simplify this evaluation task in a very general manner. More precisely, we show that the evaluation of a masked implementation can essentially be reduced to the one of an unprotected implementation. In addition, we show that although optimal attacks against masking schemes are computationally intensive for a large number of shares, heuristic (soft analytical side-channel) attacks can approach optimality very efficiently. As part of this second contribution, we also improve over the recent multivariate (aka horizontal) side-channel attacks proposed at CHES 2016 by Battistello et al.
Last updated:  2017-07-03
An efficient self-blindable attribute-based credential scheme
Sietse Ringers, Eric Verheul, Jaap-Henk Hoepman
An attribute-based credential scheme allows a user, given a set of attributes, to prove ownership of these attributes to a verifier, voluntarily disclosing some of them while keeping the others secret. A number of such schemes exist, of which some additionally provide unlinkability: that is, when the same attributes were disclosed in two transactions, it is not possible to tell if one and the same or two different credentials were involved. Recently full-fledged implementations of such schemes on smart cards have emerged; however, these need to compromise the security level to achieve reasonable transaction speeds. In this paper we present a new unlinkable attribute-based credential scheme with a full security proof, using a known hardness assumption in the standard model. Defined on elliptic curves, the scheme involves bilinear pairings but only on the verifier's side, making it very efficient both in terms of speed and size on the user's side.
Last updated:  2017-02-14
Zero-Knowledge Proofs of Proximity
Itay Berman, Ron D. Rothblum, Vinod Vaikuntanathan
Interactive proofs of proximity (Ergun, Kumar and Rubinfeld, Information & Computation, 2004 and Rothblum, Vadhan and Wigderson, STOC 2013), or IPPs, are interactive proofs in which the verifier runs in time sub-linear in the input's length. Since the verifier cannot even read the entire input, following the property testing literature, the requirement is that she accepts inputs that are in the language and rejects ones that are far from the language. However, these proofs could (and in many cases, do) betray considerable global information about the input to the verifier. In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier while ensuring that she learns nothing more than a few locations of the input (and the fact that the input is ``close'' to the language). Our main focus is the setting of statistical zero-knowledge where we show that the following hold unconditionally (where $N$ denotes the input size): * Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): We show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires $\Omega(\sqrt{N})$ queries (and hence also runtime) for every property tester. * Statistical ZKPPs can be sub-exponentially less efficient than IPPs: We show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an $N^{o(1)}$ time verifier. * Statistical ZKPPs exist for some graph-based properties, such as promise versions of expansion and bipartiteness. Lastly, we also consider the computational setting where we show that: 1. Assuming the existence of one-way functions, every language computable either in (logspace uniform) NC or in SC has a computational ZKPP with a (roughly) $\sqrt{N}$ time verifier. 2. Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
Last updated:  2017-02-14
Algebraic Fault Analysis of SHA-3
Pei Luo, Konstantinos Athanasiou, Yunsi Fei, Thomas Wahl
This paper presents an efficient algebraic fault analysis on all four modes of SHA-3 under relaxed fault models. This is the first work to apply algebraic techniques to fault analysis of SHA-3. Results show that algebraic fault analysis on SHA-3 is very efficient and effective due to the clear algebraic properties of Keccak operations. Compared with previous work on differential fault analysis of SHA-3, algebraic fault analysis can identify the injected faults with much higher rates, and recover an entire internal state of the penultimate round with far fewer fault injections.
Last updated:  2017-02-14
Zero-Knowledge Proxy Re-Identification Revisited
Xavier Bultel, Pascal Lafourcade
Zero-knowledge proxy re-identification (ZK-PRI) was introduced by Blaze et al. in 1998 together with two other well known primitives of recryptography, namely proxy re-encryption (PRE) and proxy re-signature (PRS). A ZK-PRI allows a proxy to transform an identification protocol for Alice into an identification protocol for Bob using a re-proof key. PRE and PRS have been largely studied in the last decade, but surprisingly, no results about ZK-PRI have been published since the pioneering paper of Blaze et al. We first show the insecurity of this scheme: just by observing the communications, Alice can deduce Bob’s secret key. Then we give (i) definitions of the different families of ZK-PRI (bidirectional/unidirectional and interactive/non-interactive), (ii) a formal security model for these primitives, and (iii) a concrete construction for each family. Moreover, we show that ZK-PRI can be used to manage the access policy to several services that require a public key authentication.
Last updated:  2017-02-14
EC-OPRF: Oblivious Pseudorandom Functions using Elliptic Curves
Jonathan Burns, Daniel Moore, Katrina Ray, Ryan Speers, Brian Vohaska
We introduce a secure elliptic curve oblivious pseudorandom function (EC-OPRF) which operates by hashing strings onto an elliptic curve to provide a simple and efficient mechanism for computing an oblivious pseudorandom function (OPRF). The EC-OPRF protocol enables a semi-trusted server to receive a set of cryptographically masked elliptic curve points from a client, secure those points with a private key, and return the resulting set to the client for unmasking. We also introduce extensions and generalizations to this scheme, including a novel mechanism that provides forward secrecy, and discuss the security and computation complexity for each variant. Benchmark tests for the implementations of the EC-OPRF protocol and one of its variants are provided, along with test vectors for the original protocol.
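To make the mask-and-unmask algebra concrete, here is a minimal sketch of one OPRF evaluation (all names and parameters are ours). A quadratic-residue subgroup of $\mathbb{Z}_p^*$ stands in for the elliptic-curve group, with exponentiation playing the role of scalar multiplication; a real deployment would hash directly onto the curve as the paper describes.

```python
# Toy walk-through of the blind-evaluate-unblind flow of an OPRF
# (parameters ours; a quadratic-residue subgroup of Z_p^* stands in
# for the elliptic-curve group used by EC-OPRF).
import hashlib
import secrets

p, q = 2879, 1439          # safe prime p = 2q + 1; the QR subgroup has order q

def hash_to_group(msg: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % p
    return pow(h, 2, p)    # squaring lands in the order-q subgroup

k = secrets.randbelow(q - 1) + 1          # server's OPRF key

# Client: mask the hashed input with a random exponent r.
P = hash_to_group(b"some input")
r = secrets.randbelow(q - 1) + 1
A = pow(P, r, p)                          # client -> server (P stays hidden)

# Server: apply the key to the masked element.
B = pow(A, k, p)                          # server -> client

# Client: unmask with r^{-1} mod q to obtain the PRF value P^k.
C = pow(B, pow(r, -1, q), p)
assert C == pow(P, k, p)                  # matches an unblinded evaluation
```

The server never sees the hashed input $P$ and the client never learns the key $k$; unmasking works because the masking exponent $r$ is invertible modulo the group order.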
Last updated:  2017-02-14
A Smart Contract for Boardroom Voting with Maximum Voter Privacy
Patrick McCorry, Siamak F. Shahandashti, Feng Hao
We present the first implementation of a decentralised and self-tallying internet voting protocol with maximum voter privacy using the Blockchain. The Open Vote Network is suitable for boardroom elections and is written as a smart contract for Ethereum. Unlike previously proposed Blockchain e-voting protocols, this is the first implementation that does not rely on any trusted authority to compute the tally or to protect the voter’s privacy. Instead, the Open Vote Network is a self-tallying protocol, and each voter is in control of the privacy of their own vote such that it can only be breached by a full collusion involving all other voters. The execution of the protocol is enforced using the consensus mechanism that also secures the Ethereum blockchain. We tested the implementation on Ethereum’s official test network to demonstrate its feasibility. Also, we provide a financial and computational breakdown of its execution cost.
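The self-tallying mechanism is easy to exercise on toy parameters. The sketch below is our own simplification of the underlying Open Vote Network rounds, omitting the zero-knowledge proofs of well-formedness and the Ethereum layer entirely.

```python
# Self-tallying round structure of the Open Vote Network on toy parameters
# (ours). ZK proofs and the smart-contract layer are omitted.
import secrets

p, q, g = 2879, 1439, 4          # g generates the order-q subgroup of Z_p^*
n = 5
votes = [1, 0, 1, 1, 0]          # one yes/no vote per participant

xs = [secrets.randbelow(q - 1) + 1 for _ in range(n)]   # round 1: secrets
Xs = [pow(g, x, p) for x in xs]                         # published g^{x_i}

def masking_base(i: int) -> int:
    """Y_i = prod_{j<i} X_j / prod_{j>i} X_j; the exponents x_i*y_i sum to 0."""
    num = den = 1
    for j in range(i):
        num = num * Xs[j] % p
    for j in range(i + 1, n):
        den = den * Xs[j] % p
    return num * pow(den, -1, p) % p

# Round 2: each voter publishes Y_i^{x_i} * g^{v_i}.
ballots = [pow(masking_base(i), xs[i], p) * pow(g, v, p) % p
           for i, v in enumerate(votes)]

prod = 1
for c in ballots:
    prod = prod * c % p          # masks cancel, leaving g^{sum of votes}
tally = next(t for t in range(n + 1) if pow(g, t, p) == prod)
assert tally == sum(votes)
```

No party learns anything beyond the tally from the transcript alone, yet anyone can compute the tally by multiplying the published ballots.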
Last updated:  2017-02-14
Unilaterally-Authenticated Key Exchange
Yevgeniy Dodis, Dario Fiore
Key Exchange (KE), which enables two parties (e.g., a client and a server) to securely establish a common private key while communicating over an insecure channel, is one of the most fundamental cryptographic primitives. In this work, we address the setting of unilaterally-authenticated key exchange (UAKE), where an unauthenticated (unkeyed) client establishes a key with an authenticated (keyed) server. This setting is highly motivated by many practical uses of KE on the Internet, but received relatively little attention so far. Unlike the prior work, defining UAKE by downgrading a relatively complex definition of mutually authenticated key exchange (MAKE), our definition follows the opposite approach of upgrading existing definitions of public key encryption (PKE) and signatures towards UAKE. As a result, our new definition is short and easy to understand. Nevertheless, we show that it is equivalent to the UAKE definition of Bellare-Rogaway (when downgraded from MAKE), and thus captures a very strong and widely adopted security notion, while looking very similar to the simple ``one-oracle'' definition of traditional PKE/signature schemes. As a benefit of our intuitive framework, we show two exactly-as-you-expect (i.e., having none of the caveats so abundant in the KE literature!) UAKE protocols from (possibly interactive) signature and encryption. By plugging in various one- or two-round signature and encryption schemes, we derive provably-secure variants of various well-known UAKE protocols (such as a unilateral variant of SKEME with and without perfect forward secrecy, and Shoup's A-DHKE-1), as well as new protocols, such as the first $2$-round UAKE protocol which is both (passively) forward deniable and forward-secure. To further clarify the intuitive connections between PKE/Signatures and UAKE, we define and construct stronger forms of (necessarily interactive) PKE/Signature schemes, called confirmed encryption and confidential authentication, which, respectively, allow the sender to obtain confirmation that the (keyed) receiver output the correct message, or to hide the content of the message being authenticated from anybody but the participating (unkeyed) receiver. Using confirmed PKE/confidential authentication, we obtain two concise UAKE protocols of the form: ``send confirmed encryption/confidential authentication of a random key $K$.''
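To visualize the final recipe ("send confirmed encryption of a random key $K$"), here is a minimal two-message skeleton with all names ours: the client encrypts a fresh $K$ under the server's public key, and the server confirms receipt with a MAC under $K$. This shows only the message shape; the paper's confirmed-encryption notion carries security requirements this sketch does not model.

```python
# Skeleton of a "send confirmed encryption of a random key K" flow
# (names ours; message shape only, not the paper's security definitions).
import hashlib
import hmac
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pk = server_sk.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Message 1 (client -> server): encryption of a random session key K.
K = secrets.token_bytes(32)
msg1 = server_pk.encrypt(K, oaep)

# Message 2 (server -> client): confirmation MAC under the decrypted key.
K_server = server_sk.decrypt(msg1, oaep)
msg2 = hmac.new(K_server, b"confirm", hashlib.sha256).digest()

# Client checks the confirmation; only the keyed server could produce it.
expected = hmac.new(K, b"confirm", hashlib.sha256).digest()
assert hmac.compare_digest(msg2, expected)
```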
Last updated:  2017-02-14
Photonic Side Channel Attacks Against RSA
Elad Carmon, Jean-Pierre Seifert, Avishai Wool
This paper describes the first attack utilizing the photonic side channel against a public-key crypto-system. We evaluated three common implementations of RSA modular exponentiation, all using the Karatsuba multiplication method. We discovered that the key length had marginal impact on resilience to the attack: attacking a 2048-bit key required only 9\% more decryption attempts than a 1024-bit key. We found that the most dominant parameter impacting the attacker's effort is the minimal block size at which the Karatsuba method reverts to naive multiplication: even for parameter values as low as 32 or 64 bits our attacks achieve 100\% success rate with under 10,000 decryption operations. Somewhat surprisingly, we discovered that Montgomery's Ladder---commonly perceived as the most resilient of the three implementations to side-channel attacks---was actually the most susceptible: for 2048-bit keys, our attack reveals 100\% of the secret key bits with as few as 4000 decryptions.
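For readers unfamiliar with the parameter the attack keys on, the sketch below is a generic textbook Karatsuba multiplication with an explicit naive-multiplication cutoff. It is our own reimplementation for illustration, not the attacked code; the threshold name and value are ours.

```python
# Generic Karatsuba with the kind of cutoff the paper's attack exploits:
# below NAIVE_BITS the recursion reverts to schoolbook multiplication.
NAIVE_BITS = 64  # the "minimal block size" knob (value ours)

def karatsuba(x: int, y: int) -> int:
    if x.bit_length() <= NAIVE_BITS or y.bit_length() <= NAIVE_BITS:
        return x * y                      # naive multiplication base case
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)   # split operands at m bits
    yh, yl = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(xh, yh)
    z0 = karatsuba(xl, yl)
    z1 = karatsuba(xh + xl, yh + yl) - z2 - z0   # cross term, one multiply
    return (z2 << (2 * m)) + (z1 << m) + z0

x, y = 123456789 ** 9, 987654321 ** 9
assert karatsuba(x, y) == x * y
```

Lowering the cutoff increases the number of small, data-dependent base-case multiplications, which is the regime the paper reports as most exploitable.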
Last updated:  2017-04-29
Secure Logging with Crash Tolerance
Erik-Oliver Blass, Guevara Noubir
Forward-secure logging protects old log entries in a log file against an adversary compromising the log device. However, we show that previous work on forward-secure logging is prone to crash-attacks where the adversary removes log entries and then crashes the log device. As the state of the log after a crash-attack is indistinguishable from the state after a real crash, e.g., power failure, the adversary can hide attack traces. We present SLiC, a new logging protocol that achieves forward-security against crash-attacks. Our main idea is to decouple the time of a log event from the position of its resulting log entry in the log file. Each event is encrypted and written to a pseudo-random position in the log file. Consequently, the adversary can only remove random log events, but not specific ones. Yet, during forensic analysis, the verifier can replay the pseudo-random positions. This allows the verifier to distinguish a real crash (last events missing) from a crash-attack (random events missing). Besides a formal analysis, we also present an evaluation of SLiC as a syslog server to indicate its practicality.
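The core mechanism fits in a few lines; the sketch below is our simplification with all names ours. Event $i$ is written to a slot derived from a keyed PRF, so deletion can only hit pseudo-random slots, and a verifier holding the key can replay the positions. The paper's protocol additionally encrypts entries and deals with further details such as slot collisions.

```python
# Sketch of SLiC's core idea (our simplification): the i-th event goes to
# a PRF-derived slot, so targeted deletion is impossible and a verifier
# can replay the positions during forensic analysis.
import hashlib
import hmac

SLOTS = 16
log = [None] * SLOTS

def position(key: bytes, i: int) -> int:
    tag = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(tag, "big") % SLOTS   # pseudo-random slot for event i

key = b"log-device-secret"
events = [b"login alice", b"sudo bob", b"logout alice"]
for i, e in enumerate(events):
    log[position(key, i)] = e   # the real protocol also encrypts the entry

# Forensic check: replaying positions distinguishes a real crash (a suffix
# of events missing) from a crash-attack (pseudo-random slots wiped).
for i in range(len(events)):
    assert log[position(key, i)] is not None
```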
Last updated:  2020-06-02
$\mu$chain: How to Forget without Hard Forks
Ivan Puddu, Alexandra Dmitrienko, Srdjan Capkun
In this paper, we explore an idea of making (proof-of-work) blockchains mutable. We propose and implement $\mu$chain, a mutable blockchain, that enables modifications of blockchain history. Blockchains are, by common definition, distributed and immutable data structures that store a history of events, such as transactions in a digital currency system. While the very idea of a mutable event history may seem controversial at first glance, we show that $\mu$chain does not undermine the security guarantees provided by immutable blockchains. In particular, all mutations in our system are controlled by fiat, enforced by consensus and are verifiable in the same way as regular transactions. At the same time, $\mu$chain provides a solution to a number of challenging problems, such as the patching of vulnerable smart contracts and removal of abusive content from blockchain history. It also gives rise to new blockchain applications that were not possible with immutable blockchains. For instance, governments and companies could now maintain registers of citizens and customers, while preserving their legislated rights to be forgotten. Banks could consider consolidation of cryptocurrency with traditional payments, which is hard to achieve without the ability to revert transactions. To further illustrate the power of $\mu$chain on more concrete examples, we present two new applications, the collaborative recommendation system with the ability to censor inappropriate content, and a time-lock encryption mechanism that provides a method to decrypt messages after a certain deadline has passed.
Last updated:  2017-02-13
A Secure and Fast Dispersal Storage Scheme Based on the Learning with Errors Problem
Ling Yang, Fuyang Fang, Xianhui Lu, Wen-Tao Zhu, Qiongxiao Wang, Shen Yan, Shiran Pan
Data confidentiality and availability are of primary concern in data storage. Dispersal storage schemes achieve these two security properties by transforming the data into multiple codewords and dispersing them across multiple storage servers. Existing schemes achieve confidentiality and availability by various cryptographic and coding algorithms, but only under the assumption that an adversary cannot obtain more than a certain number of codewords. Meanwhile existing schemes are designed for storing archives. In this paper, we propose a novel dispersal storage scheme based on the learning with errors problem, known as storage with errors (SWE). SWE can resist even more powerful adversaries. Besides, SWE favorably supports dynamic data operations that are both efficient and secure, which is more practical for cloud storage. Furthermore, SWE achieves security at relatively low computational overhead, but the same storage cost compared with the state of the art. We also develop a prototype to validate and evaluate SWE. Analysis and experiments show that with proper configurations, SWE outperforms existing schemes in encoding/decoding speed.
Last updated:  2017-09-23
Implementing BP-Obfuscation Using Graph-Induced Encoding
Shai Halevi, Tzipora Halevi, Victor Shoup, Noah Stephens-Davidowitz
We implemented (a simplified version of) the branching-program obfuscator due to Gentry et al. (GGH15), which is itself a variation of the first obfuscation candidate by Garg et al. (GGHRSW13). To keep within the realm of feasibility, we had to give up on some aspects of the construction, specifically the ``multiplicative bundling'' factors that protect against mixed-input attacks. Hence our implementation can only support read-once branching programs. To be able to handle anything more than just toy problems, we developed a host of algorithmic and code-level optimizations. These include new variants of the discrete Gaussian sampler and the lattice trapdoor sampler, efficient matrix-manipulation routines, and many tradeoffs. We expect that these optimizations will find other uses in lattice-based cryptography beyond just obfuscation. Our implementation is the first obfuscation attempt using the GGH15 graded encoding scheme, offering performance advantages over other graded encoding methods when obfuscating finite-state machines with many states. In our most demanding setting, we were able to obfuscate programs with input length of 20 nibbles (80 bits) and over 100 states, which seems out of reach for prior implementations. Although further optimizations are surely possible, we do not expect any implementation of current schemes to be able to handle much larger parameters.
Last updated:  2017-06-26
Reconciling d+1 Masking in Hardware and Software
Hannes Gross, Stefan Mangard
The continually growing number of security-related autonomous devices requires efficient mechanisms to counteract low-cost side-channel analysis (SCA) attacks like differential power analysis. Masking provides a high resistance against SCA at an adjustable level of security. A high level of security, however, goes hand in hand with an increasing demand for fresh randomness, which also affects other implementation costs. Since software-based masking has different security requirements than masked hardware implementations, research in these fields has been quite separate over the last ten years. One important practical difference is that recently published software-based masking schemes show a lower randomness footprint than hardware masking schemes. In this work we combine existing software- and hardware-based masking schemes into a unified masking approach (UMA). We demonstrate how UMA can be used to protect software and hardware implementations alike, and at lower randomness costs, especially for hardware implementations. Theoretical considerations as well as practical implementation results are then used to compare this unified masking approach to other schemes from different perspectives and at different levels of security.
Last updated:  2017-02-13
Quantum Authentication and Encryption with Key Recycling
Serge Fehr, Louis Salvail
We propose an information-theoretically secure encryption scheme for classical messages with quantum ciphertexts that offers *detection* of eavesdropping attacks, and *re-usability of the key* in case no eavesdropping took place: the entire key can be securely re-used for encrypting new messages as long as no attack is detected. This is known to be impossible for fully classical schemes, where there is no way to detect plain eavesdropping attacks. This particular application of quantum techniques to cryptography was originally proposed by Bennett, Brassard and Breidbart in 1982, even before they proposed quantum key distribution, and a simple candidate scheme was suggested, but no rigorous security analysis was given. The idea was picked up again in 2005, when Damgard, Pedersen and Salvail suggested a new scheme for the same task, but now with a rigorous security analysis. However, their scheme is much more demanding in terms of quantum capabilities: it requires the users to have a *quantum computer*. In contrast, and like the original scheme by Bennett et al., our new scheme merely requires the preparation of BB84 qubits. As such, we not only show a provably-secure scheme that is within reach of current technology, but we also confirm Bennett et al.'s original intuition that a scheme in the spirit of their original construction is indeed secure.
Last updated:  2017-11-27
Optimizing Implementations of Lightweight Building Blocks
Jeremy Jean, Thomas Peyrin, Siang Meng Sim, Jade Tourteaux
We study the synthesis of small functions used as building blocks in lightweight cryptographic designs in terms of hardware implementations. This phase most notably appears during the ASIC implementation of cryptographic primitives. The quality of this step directly affects the output circuit, and while general tools exist to carry out this task, most of them belong to proprietary software suites and apply heuristics to any size of functions. In this work, we focus on small functions (4- and 8-bit mappings) and look for their optimal implementations on a specific weighted instructions set which allows fine tuning of the technology. We propose a tool named LIGHTER, based on two related algorithms, that produce optimized implementations of small functions. To demonstrate the validity and usefulness of our tool, we applied it to two practical cases: first, linear permutations that define diffusion in most of SPN ciphers; second, non-linear 4-bit permutations that are used in nearly all modern lightweight block ciphers. For linear permutations, we exhibit several new MDS diffusion matrices lighter than the state-of-the-art, and we also decrease the implementation cost of several already known MDS matrices. As for non-linear permutations, LIGHTER outperforms the area-optimized synthesis of the state-of-the-art academic tool ABC. Smaller circuits can also be reached when ABC and LIGHTER are used jointly.
Last updated:  2017-02-15
Private Puncturable PRFs From Standard Lattice Assumptions
Dan Boneh, Sam Kim, Hart Montgomery
A puncturable pseudorandom function (PRF) has a master key $k$ that enables one to evaluate the PRF at all points of the domain, and has a punctured key $k_x$ that enables one to evaluate the PRF at all points but one. The punctured key $k_x$ reveals no information about the value of the PRF at the punctured point $x$. Punctured PRFs play an important role in cryptography, especially in applications of indistinguishability obfuscation. However, in previous constructions, the punctured key $k_x$ completely reveals the punctured point $x$: given $k_x$ it is easy to determine $x$. A {\em private} puncturable PRF is one where $k_x$ reveals nothing about~$x$. This concept was defined by Boneh, Lewi, and Wu, who showed the usefulness of private puncturing, and gave constructions based on multilinear maps. The question is whether private puncturing can be built from a standard (weaker) cryptographic assumption. We construct the first privately puncturable PRF from standard lattice assumptions, namely from the hardness of learning with errors (LWE) and 1 dimensional short integer solutions (1D-SIS), which have connections to worst-case hardness of general lattice problems. Our starting point is the (non-private) PRF of Brakerski and Vaikuntanathan. We introduce a number of new techniques to enhance this PRF, from which we obtain a privately puncturable PRF. In addition, we also study the simulation based definition of private constrained PRFs for general circuits, and show that the definition is not satisfiable.
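For context, the classical GGM-style construction below (a standard sketch, not the paper's lattice scheme) shows what a punctured key looks like, and why ordinary puncturing is not private: the punctured key literally contains the bits of the punctured point $x$.

```python
# A textbook GGM-style puncturable PRF (not the paper's construction).
# The punctured key plainly encodes x; hiding x is exactly the "private
# puncturing" property the paper achieves from lattice assumptions.
import hashlib

N = 8  # domain {0,1}^8

def step(seed: bytes, bit: int) -> bytes:
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(k: bytes, x: int) -> bytes:
    s = k
    for i in reversed(range(N)):          # walk the GGM tree, MSB first
        s = step(s, (x >> i) & 1)
    return s

def puncture(k: bytes, x: int):
    sibs, s = [], k
    for i in reversed(range(N)):
        b = (x >> i) & 1
        sibs.append(step(s, 1 - b))       # sibling seed off the path of x
        s = step(s, b)
    return (x, sibs)                      # x is exposed: no privacy here

def eval_punctured(kx, y: int) -> bytes:
    x, sibs = kx
    assert y != x                         # the punctured point is lost
    for i in reversed(range(N)):
        if ((y >> i) & 1) != ((x >> i) & 1):
            s = sibs[N - 1 - i]           # first divergence: stored sibling
            for j in reversed(range(i)):
                s = step(s, (y >> j) & 1)
            return s

k = b"\x00" * 32                          # fixed master key for the demo
kx = puncture(k, 0b10110011)
assert eval_punctured(kx, 7) == eval_prf(k, 7)
```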
Last updated:  2022-08-09
Making NSEC5 Practical for DNSSEC
Dimitrios Papadopoulos, Duane Wessels, Shumon Huque, Moni Naor, Jan Včelák, Leonid Reyzin, Sharon Goldberg
NSEC5 is a proposed modification to DNSSEC that guarantees two security properties: (1) privacy against offline zone enumeration, and (2) integrity of zone contents, even if an adversary compromises the authoritative nameserver responsible for responding to DNS queries for the zone. In this work, we redesign NSEC5 in order to make it practical and performant. Our NSEC5 redesign features a new verifiable random function (VRF) based on elliptic curve cryptography (ECC), along with a cryptographic proof of its security. This VRF is also of independent interest, as it is being standardized by the IETF and being used by several other projects. We show how to integrate NSEC5 using our ECC-based VRF into DNSSEC, leveraging precomputation to improve performance and DNS protocol-level optimizations to shorten responses. Next, we present the first full-fledged implementation of NSEC5 for both nameserver and recursive resolver, and evaluate performance under aggressive DNS query loads. We find that our redesigned NSEC5 can be viable even for high-throughput scenarios.
Last updated:  2017-02-13
Designing Fully Secure Protocols for Secure Two-Party Computation of Constant-Domain Functions
Vanesa Daza, Nikolaos Makriyannis
In a sense, a two-party protocol achieves fairness if the output from the computation is obtained simultaneously by both parties. A seminal result by Cleve (STOC 1986) states that fairness is impossible, in general. Surprisingly, Gordon et al.~(JACM 2011) showed that there exist interesting functions that are computable with fairness. The two results give rise to a distinction between \emph{fair} functions and \emph{unfair} ones. The question of characterizing these functions has been studied in a sequence of works leading to the complete characterization of (symmetric) Boolean functions by Asharov et al.~(TCC 2015). In this paper, we propose a generic construction of a fully secure (fair) protocol, starting with a constant-round protocol satisfying limited security requirements. Our results introduce new conceptual tools for the analysis of fairness and they apply to arbitrary (constant-domain) functions. As a case study, we consider asymmetric Boolean functions. While the characterization remains open, we believe that our results lay the foundation for a deeper understanding of fairness.
Last updated:  2017-02-13
Boolean functions with restricted input and their robustness; application to the FLIP cipher
Claude Carlet, Pierrick Méaux, Yann Rotella
We study the main cryptographic features of Boolean functions (balancedness, nonlinearity, algebraic immunity) when, for a given number $n$ of variables, the input to these functions is restricted to some subset $E$ of $\mathbb{F}_2^n$. We study in particular the case when $E$ equals the set of vectors of fixed Hamming weight, which plays a role in the FLIP stream cipher, and we study the robustness of the Boolean function used in this cipher.
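Restricting the input set changes even the most basic properties; for instance, balancedness has to be re-checked on the fixed-weight slice rather than on all of $\mathbb{F}_2^n$. A toy check (function and parameters ours):

```python
# Toy check of balancedness when inputs are restricted to a fixed-weight
# slice E of F_2^n, as in the FLIP setting (function and parameters ours).
from itertools import combinations

n, w = 6, 3

def f(x):                                   # some Boolean function on 6 bits
    return (x[0] & x[1]) ^ x[2] ^ x[3]

ones = total = 0
for support in combinations(range(n), w):   # all vectors of Hamming weight w
    x = [1 if i in support else 0 for i in range(n)]
    ones += f(x)
    total += 1
print(ones, total - ones)  # balanced on the slice iff the two counts match
```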
Last updated:  2017-06-05
Asymptotically Compact Adaptively Secure Lattice IBEs and Verifiable Random Functions via Generalized Partitioning Techniques
Shota Yamada
In this paper, we focus on the constructions of adaptively secure identity-based encryption (IBE) from lattices and verifiable random function (VRF) with large input spaces. Existing constructions of these primitives suffer from low efficiency, whereas their counterparts with weaker guarantees (IBEs with selective security and VRFs with small input spaces) are reasonably efficient. We try to fill these gaps by developing new partitioning techniques that can be performed with compact parameters and proposing new schemes based on the idea. - We propose new lattice IBEs with poly-logarithmic master public key sizes, where we count the number of the basic matrices to measure the size. Our constructions are proven secure under the LWE assumption with polynomial approximation factors. They achieve the best asymptotic space efficiency among existing schemes that depend on the same assumption and achieve the same level of security. - We also propose several new VRFs on bilinear groups. In our first scheme, the size of the proofs is poly-logarithmic in the security parameter, which is the smallest among all the existing schemes with similar properties. On the other hand, the verification keys are long. In our second scheme, the size of the verification keys is poly-logarithmic, which is the smallest among all the existing schemes. The size of the proofs is sub-linear, which is larger than our first scheme, but still smaller than all the previous schemes.
Last updated:  2017-02-13
Attacks on Secure Logging Schemes
Gunnar Hartung
We present four attacks on three cryptographic schemes intended for securing log files against illicit retroactive modification. Our first two attacks regard the LogFAS scheme by Yavuz et al. (Financial Cryptography 2012), whereas our third and fourth attacks break the BM- and AR-FssAgg schemes by Ma (AsiaCCS 2008). All schemes have an accompanying security proof, seemingly contradicting the existence of attacks. We point out flaws in these proofs, resolving the contradiction.
Last updated:  2017-02-13
Quantum Tokens for Digital Signatures
Shalev Ben-David, Or Sattath
The fisherman caught a quantum fish. "Fisherman, please let me go", begged the fish, "and I will grant you three wishes". The fisherman agreed. The fish gave the fisherman a quantum computer, three quantum signing tokens and his classical public key. The fish explained: "to sign your three wishes, use the tokenized signature scheme on this quantum computer, then show your valid signature to the king, who owes me a favor". The fisherman used one of the signing tokens to sign the document "give me a castle!" and rushed to the palace. The king executed the classical verification algorithm using the fish's public key, and since it was valid, the king complied. The fisherman's wife wanted to sign ten wishes using their two remaining signing tokens. The fisherman did not want to cheat, and secretly sailed to meet the fish. "Fish, my wife wants to sign ten more wishes". But the fish was not worried: "I have learned quantum cryptography following the previous story (The Fisherman and His Wife by the brothers Grimm). The quantum tokens are consumed during the signing. Your polynomial wife cannot even sign four wishes using the three signing tokens I gave you". "How does it work?" wondered the fisherman. "Have you heard of quantum money? These are quantum states which can be easily verified but are hard to copy. This tokenized quantum signature scheme extends Aaronson and Christiano's quantum money scheme, which is why the signing tokens cannot be copied". "Does your scheme have additional fancy properties?" the fisherman asked. "Yes, the scheme has other security guarantees: revocability, testability and everlasting security. Furthermore, if you're at sea and your quantum phone has only classical reception, you can use this scheme to transfer the value of the quantum money to shore", said the fish, and swam away.
Last updated:  2017-02-10
On new multivariate cryptosystems based on hidden Eulerian equations over finite fields
Vasyl Ustimenko
We propose new multivariate cryptosystems over an $n$-dimensional vector space over a finite field $F_q$, based on the idea of a hidden discrete logarithm problem for ${F^*}_q$. These cryptosystems are based on hidden Eulerian equations $x^{\alpha}=a$, $(\alpha, q-1)=1$. The method is based on the idea of Eulerian transformations, which allow us to use asymmetric algorithms based on families of nonlinear multiplicatively injective maps of prescribed polynomial density and flexible degree.
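The trapdoor behind such an equation is elementary: when $(\alpha, q-1)=1$, the map $x \mapsto x^{\alpha}$ permutes ${F^*}_q$ and is inverted by exponentiation with $\beta = \alpha^{-1} \bmod (q-1)$. A toy instance:

```python
# Solving the Eulerian equation x^alpha = a in F_q^* via the trapdoor
# exponent beta = alpha^{-1} mod (q-1). Toy parameters, ours.
from math import gcd

q, alpha = 101, 7
assert gcd(alpha, q - 1) == 1        # x -> x^alpha is a permutation of F_q^*

beta = pow(alpha, -1, q - 1)         # the (hidden) inverse exponent
a = 42
x = pow(a, beta, q)                  # candidate solution
assert pow(x, alpha, q) == a         # indeed x^alpha = a
```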
Last updated:  2017-07-26
Small CRT-Exponent RSA Revisited
Atsushi Takayasu, Yao Lu, Liqiang Peng
Since May (Crypto'02) revealed the vulnerability of the small CRT-exponent RSA using Coppersmith's lattice-based method, several papers have studied the problem and two major improvements have been made. (1) Bleichenbacher and May (PKC'06) proposed an attack for small $d_q$ when the prime factor $p$ is significantly smaller than the other prime factor $q$; the attack works for $p<N^{0.468}$. (2) Jochemsz and May (Crypto'07) proposed an attack for small $d_p$ and $d_q$ when the prime factors $p$ and $q$ are balanced; the attack works for $d_p, d_q<N^{0.073}$. Even though a decade has passed since their proposals, the above two attacks are still considered the state of the art, and no improvements have been made thus far. A novel technique seems to be required for further improvements, since the attacks have already been studied with all the applicable techniques for Coppersmith's methods proposed by Durfee-Nguyen (Asiacrypt'00), Jochemsz-May (Asiacrypt'06), and Herrmann-May (Asiacrypt'09, PKC'10). In this paper, we propose two improved attacks on the small CRT-exponent RSA: a small $d_q$ attack for $p<N^{0.5}$ (an improvement of Bleichenbacher-May's) and a small $d_p$ and $d_q$ attack for $d_p, d_q < N^{0.122}$ (an improvement of Jochemsz-May's). The latter result is also an improvement of our result in the proceedings version (Eurocrypt '17); $d_p, d_q < N^{0.091}$. We use Coppersmith's lattice-based method to solve modular equations and obtain the improvements from a novel lattice construction by exploiting useful algebraic structures of the CRT-RSA key generation equation. We explicitly show proofs of our attacks and verify the validities by computer experiments. In addition to the two main attacks, we also propose small $d_q$ attacks on several variants of RSA.
Last updated:  2017-08-03
Design and Implementation of Low Depth Pairing-based Homomorphic Encryption Scheme
Vincent Herbert, Bhaskar Biswas, Caroline Fontaine
Homomorphic encryption is a promising recent tool in modern cryptography that allows one to carry out operations on encrypted data. In this paper we focus on the design of a scheme based on pairings and elliptic curves, that is able to handle applications where the number of multiplications is not too high, with interesting practical efficiency when compared to lattice-based solutions. The starting point is the Boneh-Goh-Nissim (BGN for short) encryption scheme \cite{BGN05}, which enables the homomorphic evaluation of polynomials of degree at most $2$ on ciphertexts. In our scheme, we use constructions coming from \cite{F10,CF15} to propose a variant of the $\operatorname{BGN}$ scheme that can handle the homomorphic evaluation of polynomials of degree at most $4$. We discuss both the mathematical structure of the scheme and its implementation. We provide simulation results, showing the relevance of this solution for applications requiring a low multiplicative depth, and give a relative comparison with respect to lattice-based homomorphic encryption schemes.
Last updated:  2019-03-26
Crypt-DAC: Cryptographically Enforced Dynamic Access Control in the Cloud
Saiyu Qi, Yuanqing Zheng
Enabling cryptographically enforced access controls for data hosted in an untrusted cloud is attractive for many users and organizations. However, designing an efficient cryptographically enforced dynamic access control system in the cloud is still challenging. In this paper, we propose Crypt-DAC, a system that provides practical cryptographic enforcement of dynamic access control. Crypt-DAC revokes access permissions by delegating the cloud to update encrypted data. In Crypt-DAC, a file is encrypted by a symmetric key list which records a file key and a sequence of revocation keys. In each revocation, a dedicated administrator uploads a new revocation key to the cloud and requests it to encrypt the file with a new layer of encryption and update the encrypted key list accordingly. Crypt-DAC proposes three key techniques to constrain the size of the key list and the number of encryption layers. As a result, Crypt-DAC enforces dynamic access control that provides efficiency, as it does not require expensive decryption/re-encryption and uploading/re-uploading of large data at the administrator side, and security, as it immediately revokes access permissions. We use a formalization framework and a system implementation to demonstrate the security and efficiency of our construction.
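The revocation-by-layering idea can be sketched in a few lines. The sketch below is our own simplification, with Fernet standing in for the symmetric cipher; the paper's techniques for bounding the key-list length and the number of layers are not shown.

```python
# Sketch of Crypt-DAC's revocation idea (our simplification): each
# revocation adds one encryption layer at the cloud; readers holding the
# current key list peel the layers in reverse. Fernet is a stand-in cipher.
from cryptography.fernet import Fernet

file_key = Fernet.generate_key()
key_list = [file_key]                       # file key + revocation keys
blob = Fernet(file_key).encrypt(b"payroll.xlsx contents")

def revoke():
    """Administrator uploads a fresh revocation key; the cloud re-wraps."""
    global blob
    rk = Fernet.generate_key()
    key_list.append(rk)
    blob = Fernet(rk).encrypt(blob)         # new outer layer, no re-upload

def read(blob, key_list):
    for k in reversed(key_list[1:]):        # strip revocation layers
        blob = Fernet(k).decrypt(blob)
    return Fernet(key_list[0]).decrypt(blob)

revoke(); revoke()
assert read(blob, key_list) == b"payroll.xlsx contents"
```

The administrator never touches the (potentially large) file itself; only revocation keys travel, which is the source of the efficiency claim.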
Last updated:  2017-02-10
On a Linear Cryptanalysis of a Family of Modified DES Ciphers with Even Weight S-boxes
Yuri Borissov, Peter Boyvalenkov, Robert Tsenkov
We investigate the effect of inserting extra linearity into the Data Encryption Standard (DES) through appropriate singular linear encodings of the output of the individual S-boxes. More specifically, we examine the general situation when the output of each S-box of the DES is precoded separately into a properly constructed copy of the inherent even-weight code of length 4. The study is focused on finding multi-round linear characteristics for DES ciphers modified in this way having maximal effectiveness. It turns out, depending on the particular encodings, that the effectiveness of interest may be larger, but in most cases is smaller, than that for the original DES with the same number of rounds. The latter means that the complexity of a successful linear cryptanalysis against these ciphers will mainly increase compared to DES itself. The present research extends in a natural way our previous work [Linear Cryptanalysis and Modified DES with Parity Check in the S-boxes, LNCS 9540 (2016), pp. 60 – 78].
Last updated:  2017-02-10
A Differential Fault Attack on Plantlet
Subhamoy Maitra, Akhilesh Siddhanti
Lightweight stream ciphers have received serious attention in the last few years. The present design paradigm considers a very small state (less than twice the key size) and use of the secret key bits during pseudo-random stream generation. One such effort, Sprout, was proposed two years back and it was broken almost immediately. After carefully studying these attacks, a modified version named Plantlet has been designed very recently. While the designers of Plantlet do not provide any analysis on fault attacks, we note that Plantlet is even weaker than Sprout in terms of Differential Fault Attack (DFA). Our investigation, following similar ideas as in the analysis against Sprout, shows that we require only around 4 faults to break Plantlet by DFA in a few hours' time. While fault attacks are indeed difficult to implement and our result does not reveal any weakness of the cipher in normal mode, we believe that these initial results will be useful for further understanding of Plantlet.
Last updated:  2017-02-10
Cryptanalysis of full round Fruit
Sabyasachi Dey, Santanu Sarkar
At FSE 2015, Armknecht et al. proposed a new technique for designing stream ciphers. This technique involves repeated use of key bits in each round of keystream bit generation. This idea showed the possibility of designing stream ciphers where the internal state size is significantly lower than twice the key size. They proposed a new cipher based on this idea, named Sprout. But soon Sprout was proved to be insecure. At Crypto 2015, Lallemand et al. proposed an attack on Sprout, which was $2^{10}$ times faster than exhaustive search. But the new idea used in Sprout showed a new direction in the design of stream ciphers, which led to the proposal of several new ciphers with small internal state. Fruit is another cipher in this direction proposed recently, where both the key size and state size are 80 bits. Prior to this work, no attack on this cipher was known. In this paper, we attack full round Fruit by a divide-and-conquer method. We use several types of sieving to reduce the possible candidates for an internal state. Our attack is equivalent to $2^{74.95}$ Fruit encryptions, which is around $16.95$ times faster than average exhaustive key search. This is the first proposed attack against Fruit.
Last updated:  2017-02-10
Homomorphic Proxy Re-Authenticators and Applications to Verifiable Multi-User Data Aggregation
David Derler, Sebastian Ramacher, Daniel Slamanig
We introduce the notion of homomorphic proxy re-authenticators, a tool that adds security and verifiability guarantees to multi-user data aggregation scenarios. It allows distinct sources to authenticate their data under their own keys, and a proxy can transform these single signatures or message authentication codes (MACs) to a MAC under a receiver's key without having access to it. In addition, the proxy can evaluate arithmetic circuits (functions) on the inputs so that the resulting MAC corresponds to the evaluation of the respective function. As the messages authenticated by the sources may represent sensitive information, we also consider hiding them from the proxy and other parties in the system, except from the receiver. We provide a general model and two modular constructions of our novel primitive, supporting the class of linear functions. On our way, we establish various novel building blocks. Most interestingly, we formally define the notion and present a construction of homomorphic proxy re-encryption, which may be of independent interest. The latter allows users to encrypt messages under their own public keys, and a proxy can re-encrypt them to a receiver's public key (without knowing any secret key), while also being able to evaluate functions on the ciphertexts. The resulting re-encrypted ciphertext then holds an evaluation of the function on the input messages.
Last updated:  2017-02-10
Information Security Applications of Bit-Mixers
Laszlo Hars
A Bit-Mixer is a function of fixed size input and output, which computes uncorrelated output from correlated input values, and whose behavior is altered by parameters, called keys. Several bit-mixer constructions have been published with very fast, power-efficient implementations in electronic hardware, having very little side channel leakage. In this paper a dozen cryptographic applications are discussed, in most of which the output of the employed bit-mixers is hidden from an adversary. In these cases bit-mixers don’t have to satisfy strict cryptographic requirements, but the security of the applications is improved by reducing exploitable correlations among intermediate values, and by diminishing the side channel leakage of electronic implementations.
Last updated:  2017-02-10
Hardware Bit-Mixers
Laszlo Hars
A new concept, the Bit-Mixer, is introduced. It is a function of fixed, but possibly different, input and output sizes, which computes statistically uncorrelated output from correlated input values, and whose behavior is altered by parameters, called keys. Several constructions are presented, with very fast, power-efficient implementations in electronic hardware, having very little side channel leakage. In information security, bit-mixers have many applications, mostly when their output is hidden from an adversary. These include key generators, parallel stream ciphers, hash functions, data-dependent authentication codes, and many more.
Last updated:  2017-02-07
Shor's Algorithm and Factoring: Don't Throw Away the Odd Orders
Anna Johnston
Shor's algorithm factors an integer $N$ in two steps. The quantum step computes the order of $a\bmod{N}$ where $a$ is relatively prime to $N$. The classical step uses this order to factor $N$. Descriptions of the classical step require the order, $s$, to be even and that $a^{s/2}\not\equiv -1\bmod{N}$. If $s$ is odd or $a^{s/2}\equiv -1\bmod{N}$, then the quantum step is repeated. This paper describes how each prime divisor of the order $s$, not just $2$, can be used to factor $N$.
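One way to read this classically, as a sketch of our own rendering: for each prime divisor $p$ of the order $s$, the element $b = a^{s/p} \bmod N$ is a $p$-th root of unity modulo $N$, and whenever $b$ has different orders modulo the two prime factors of $N$, a gcd already splits $N$.

```python
# Classical post-processing in the spirit of the abstract (our sketch):
# try every prime divisor p of the order s, not just p = 2.
from math import gcd

def prime_divisors(s):
    ps, d = [], 2
    while d * d <= s:
        if s % d == 0:
            ps.append(d)
            while s % d == 0:
                s //= d
        d += 1
    if s > 1:
        ps.append(s)
    return ps

def factor_from_order(N, a, s):
    """s is the multiplicative order of a mod N (the quantum step's output)."""
    for p in prime_divisors(s):
        b = pow(a, s // p, N)            # a p-th root of unity mod N
        for g in (gcd(b - 1, N), gcd(b + 1, N)):
            if 1 < g < N:
                return g
    return None                          # no divisor of s helped; rerun

# 21 = 3 * 7 and a = 2 has order 6 mod 21. Here p = 2 gives gcd(7, 21) = 7,
# and the odd divisor p = 3 would also work: gcd(2^2 - 1, 21) = 3.
print(factor_from_order(21, 2, 6))
```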
Last updated:  2017-02-06
Replay Attacks on Zero Round-Trip Time: The Case of the TLS 1.3 Handshake Candidates
Marc Fischlin, Felix Günther
We investigate security of key exchange protocols supporting so-called zero round-trip time (0-RTT), enabling a client to establish a fresh provisional key without interaction, based only on cryptographic material obtained in previous connections. This key can then be already used to protect early application data, transmitted to the server before both parties interact further to switch to fully secure keys. Two recent prominent examples supporting such 0-RTT modes are Google's QUIC protocol and the latest drafts for the upcoming TLS version 1.3. We are especially interested in the question how replay attacks, enabled through the lack of contribution from the server, affect security in the 0-RTT case. Whereas the first proposal of QUIC uses state on the server side to thwart such attacks, the latest version of QUIC and TLS 1.3 rather accept them as inevitable. We analyze what this means for the key secrecy of both the preshared-key-based 0-RTT handshake in draft-14 of TLS 1.3 as well as the Diffie-Hellman-based 0-RTT handshake in TLS 1.3 draft-12. As part of this we extend previous security models to capture such cases, also shedding light on the limitations and options for 0-RTT security under replay attacks.
Last updated:  2017-07-26
Estonian Voting Verification Mechanism Revisited Again
Ivo Kubjas, Tiit Pikma, Jan Willemson
Recently, Mus, Kiraz, Cenk and Sertkaya proposed an improvement over the present Estonian Internet voting vote verification scheme. This paper points out the weaknesses and questionable design choices of the new scheme. We show that the scheme does not fix the vote privacy issue it claims to. It also introduces a way for a malicious voting application to manipulate the vote without being detected by the verification mechanism, hence breaking the cast-as-intended property. As a solution, we propose modifying the protocol of Mus et al. slightly and argue that this improves the security guarantees. However, there is an inherent drop in usability in the protocol as proposed by Mus et al., and this issue also remains in our improved protocol.
Last updated:  2017-02-06
From Minicrypt to Obfustopia via Private-Key Functional Encryption
Ilan Komargodski, Gil Segev
Private-key functional encryption enables fine-grained access to symmetrically-encrypted data. Although private-key functional encryption (supporting an unbounded number of keys and ciphertexts) seems significantly weaker than its public-key variant, its known realizations all rely on public-key functional encryption. At the same time, however, up until recently it was not known to imply any public-key primitive, demonstrating our poor understanding of this extremely-useful primitive. Recently, Bitansky et al. [TCC '16B] showed that sub-exponentially-secure private-key function encryption bridges from nearly-exponential security in Minicrypt to slightly super-polynomial security in Cryptomania, and from sub-exponential security in Cryptomania to Obfustopia. Specifically, given any sub-exponentially-secure private-key functional encryption scheme and a nearly-exponentially-secure one-way function, they constructed a public-key encryption scheme with slightly super-polynomial security. Assuming, in addition, a sub-exponentially-secure public-key encryption scheme, they then constructed an indistinguishability obfuscator. We settle the problem of positioning private-key functional encryption within the hierarchy of cryptographic primitives by placing it in Obfustopia. First, given any quasi-polynomially-secure private-key functional encryption scheme, we construct an indistinguishability obfuscator for circuits with inputs of poly-logarithmic length. Then, we observe that such an obfuscator can be used to instantiate many natural applications of indistinguishability obfuscation. Specifically, relying on sub-exponentially-secure one-way functions, we show that quasi-polynomially-secure private-key functional encryption implies not just public-key encryption but leads all the way to public-key functional encryption for circuits with inputs of poly-logarithmic length. Moreover, relying on sub-exponentially-secure injective one-way functions, we show that quasi-polynomially-secure private-key functional encryption implies a hard-on-average distribution over instances of a PPAD-complete problem. Underlying our constructions is a new transformation from single-input functional encryption to multi-input functional encryption in the private-key setting. The previously known such transformation [Brakerski et al., EUROCRYPT '16] required a sub-exponentially-secure single-input scheme, and obtained a scheme supporting only a slightly super-constant number of inputs. Our transformation both relaxes the underlying assumption and supports more inputs: Given any quasi-polynomially-secure single-input scheme, we obtain a scheme supporting a poly-logarithmic number of inputs.
Last updated:  2017-02-06
Faster Bootstrapping of FHE over the Integers
Jung Hee Cheon, Kyoohyung Han, Duhyeong Kim
Bootstrapping in fully homomorphic encryption (FHE) over the integers is a homomorphic evaluation of the squashed decryption function suggested by van Dijk et al. The typical approach to bootstrapping is to represent the decryption function as a binary circuit with a fixed message space. All bootstrapping methods in FHE over the integers use this approach; however, these methods require too many homomorphic multiplications, slowing down the whole procedure. In this paper, we propose an efficient bootstrapping method using various message spaces. Our bootstrapping method requires only $O(\log^{2}\lambda)$ homomorphic multiplications, significantly fewer than the $\tilde{O}(\lambda^{4})$ of previous methods. We implement our bootstrapping method on the scale-invariant FHE over the integers, the CLT scheme introduced by Coron, Lepoint and Tibouchi. It takes 6 seconds for a 500-bit message space and 72-bit security on a PC. This is the fastest result among the bootstrapping methods on FHEs over the integers. We also apply our bootstrapping method to evaluate an AES-128 circuit homomorphically. As a result, it takes about 8 seconds per 128-bit block, faster than the previous homomorphic evaluation of the AES circuit using FHE over the integers without bootstrapping.
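For readers unfamiliar with FHE over the integers, here is a minimal, deliberately insecure sketch of the van Dijk et al. style encryption that such bootstrapping operates on. The parameter names and toy sizes are illustrative only and are unrelated to the paper's parameters or its actual bootstrapping procedure.

```python
import secrets

# Toy, insecure parameters for illustration only
ETA, GAMMA, RHO = 64, 256, 16   # bit lengths of secret p, ciphertext, noise

def keygen():
    # Odd eta-bit secret integer p
    return secrets.randbits(ETA) | (1 << (ETA - 1)) | 1

def encrypt(p, m):
    # c = q*p + 2r + m, with the message bit m hidden under the noise 2r + m
    q = secrets.randbits(GAMMA - ETA)
    r = secrets.randbits(RHO)
    return q * p + 2 * r + m

def decrypt(p, c):
    # (c mod p) mod 2 recovers m as long as the noise stays below p
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1   # addition acts as XOR on plaintext bits
assert decrypt(p, c0 * c1) == 0   # multiplication acts as AND
```

Each homomorphic multiplication roughly squares the noise term $2r+m$; once it approaches $p$, decryption fails. This is exactly why bootstrapping, and minimizing the number of homomorphic multiplications it needs, matters.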
Last updated:  2017-06-14
LPN Decoded
Andre Esser, Robert Kübler, Alexander May
We propose new algorithms with small memory consumption for the Learning Parity with Noise (LPN) problem, both classically and quantumly. Our goal is to predict the hardness of LPN depending on both parameters, its dimension $k$ and its noise rate $\tau$, as accurately as possible both in theory and practice. Therefore, we analyze our algorithms asymptotically, run experiments on medium-size parameters and provide bit complexity predictions for large parameters. Our new algorithms are modifications and extensions of the simple Gaussian elimination algorithm with recent advanced techniques for decoding random linear codes. Moreover, we enhance our algorithms with the dimension reduction technique of Blum, Kalai and Wasserman. This results in a hybrid algorithm that is capable of achieving the best currently known run time for any fixed amount of memory. On the asymptotic side, we achieve significant improvements for the run time exponents, both classically and quantumly. To the best of our knowledge, we provide the first quantum algorithms for LPN. Due to the small memory consumption of our algorithms, we are able to solve, for the first time, LPN instances of medium size, e.g. with $k=243, \tau = \frac 1 8$ in only 15 days on 64 threads. Our algorithms yield bit-complexity predictions that require relatively large $k$ for small $\tau$. For instance, for small-noise LPN with $\tau= \frac 1 {\sqrt k}$, we predict $80$-bit classical and only $64$-bit quantum security for $k \geq 2048$. For the common cryptographic choice $k=512, \tau = \frac 1 8$, we achieve with limited memory classically $97$-bit and quantumly $70$-bit security.
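As background, the plain Gaussian elimination approach that the paper's algorithms refine can be sketched in a few lines: draw $k$ samples, hope they are all noise-free (probability $(1-\tau)^k$), solve the linear system over GF(2), and test the candidate against fresh samples. The sketch below uses illustrative parameter choices and assumes nothing about the paper's optimized variants.

```python
import random

def lpn_sample(s, tau):
    # One LPN sample: random a, label <a,s> + Ber(tau) over GF(2)
    a = [random.getrandbits(1) for _ in range(len(s))]
    e = 1 if random.random() < tau else 0
    return a, (sum(x & y for x, y in zip(a, s)) + e) & 1

def solve_gf2(samples):
    # Gaussian elimination on k equations <a,s> = b; None if singular
    k = len(samples)
    M = [a[:] + [b] for a, b in samples]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

def gauss_lpn(oracle, k, tau, tests=200):
    while True:  # each trial succeeds with probability about (1 - tau)^k
        cand = solve_gf2([oracle() for _ in range(k)])
        if cand is None:
            continue
        errs = sum(((sum(x & y for x, y in zip(a, cand)) & 1) ^ b)
                   for a, b in (oracle() for _ in range(tests)))
        if errs < tests * (tau + 0.5) / 2:  # a correct key errs near tau*tests
            return cand

k, tau = 16, 0.125
s = [random.getrandbits(1) for _ in range(k)]
assert gauss_lpn(lambda: lpn_sample(s, tau), k, tau) == s
```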
Last updated:  2017-02-06
Quantum algorithms for computing short discrete logarithms and factoring RSA integers
Martin Ekerå, Johan Håstad
In this paper we generalize the quantum algorithm for computing short discrete logarithms previously introduced by Ekerå so as to allow for various tradeoffs between the number of times that the algorithm needs to be executed on the one hand, and the complexity of the algorithm and the requirements it imposes on the quantum computer on the other hand. Furthermore, we describe applications of algorithms for computing short discrete logarithms. In particular, we show how other important problems, such as those of factoring RSA integers and of finding the order of groups under side information, may be recast as short discrete logarithm problems. This immediately gives rise to an algorithm for factoring RSA integers that is less complex than Shor's general factoring algorithm, in the sense that it imposes smaller requirements on the quantum computer. In both our algorithm and Shor's algorithm, the main hurdle is to compute a modular exponentiation in superposition. When factoring an $n$-bit integer, the exponent is of length $2n$ bits in Shor's algorithm, compared to slightly more than $n/2$ bits in our algorithm.
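The recasting of RSA factoring as a short discrete logarithm can be checked classically in a few lines: since $\varphi(N) = N - (p+q) + 1$, one has $g^{N+1} = g^{p+q} \bmod N$, so the logarithm of $x = g^{N+1}$ to the base $g$ is the roughly $(n/2)$-bit value $p+q$, from which $p$ and $q$ follow by solving a quadratic. The toy numbers below are illustrative; the quantum solver itself is of course not simulated.

```python
from math import isqrt

p, q = 1009, 1013
N = p * q
g = 5                      # any base coprime to N
x = pow(g, N + 1, N)       # equals g^(p+q) mod N, since g^phi(N) = 1

d = p + q                  # the short discrete log a quantum solver would find
assert pow(g, d, N) == x

# Recover the factors from d = p + q and N = p*q via z^2 - d*z + N = 0
r = isqrt(d * d - 4 * N)   # r = |p - q|
assert {(d - r) // 2, (d + r) // 2} == {p, q}
```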
Last updated:  2017-02-06
DFA on LS-Designs with a Practical Implementation on SCREAM (extended version)
Benjamin Lac, Anne Canteaut, Jacques Fournier, Renaud Sirdey
LS-Designs are a family of SPN-based block ciphers whose linear layer is based on the so-called interleaved construction. They are dedicated to low-end devices with high performance and low resource constraints, objects which need to be resistant to physical attacks. In this paper we describe a complete Differential Fault Analysis against LS-Designs, as well as against other families of SPN-based block ciphers. First we explain how fault attacks can be used against their implementations depending on the fault model. Then, we validate the DFA with a practical example on a hardware implementation of SCREAM running on an FPGA. The faults were injected using electromagnetic pulses during the execution of SCREAM, and the faulty ciphertexts were used to recover the key bits. Finally, we discuss some countermeasures that could be used to thwart such attacks.
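To illustrate the general DFA principle (on a single PRESENT-style S-box rather than the actual SCREAM attack, whose fault models and positions differ), consider a toy last round consisting of an S-box followed by a key XOR. A known input difference injected by a fault lets the attacker keep only the key guesses consistent with the observed ciphertext pair:

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box
INV = [SBOX.index(i) for i in range(16)]

def last_round(x, k):
    # Toy last round: S-box lookup, then XOR of the round-key nibble
    return SBOX[x] ^ k

key, x, delta = 0x9, 0x4, 0x1
c = last_round(x, key)            # correct ciphertext nibble
cf = last_round(x ^ delta, key)   # faulty one: the fault flips the S-box input

# Keep only key guesses for which the inverted pair shows difference delta
cands = [g for g in range(16) if INV[c ^ g] ^ INV[cf ^ g] == delta]
assert key in cands   # intersecting sets over several faults isolates the key
```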
Last updated:  2017-02-03
A First DFA on PRIDE: from Theory to Practice (extended version)
Benjamin Lac, Marc Beunardeau, Anne Canteaut, Jacques Fournier, Renaud Sirdey
PRIDE is one of the most efficient lightweight block ciphers proposed so far for connected objects with high performance and low resource constraints. In this paper we describe the first ever complete Differential Fault Analysis against PRIDE. We describe how fault attacks can be used against implementations of PRIDE to recover the entire encryption key. Our attack has been validated first through simulations, and then in practice on a software implementation of PRIDE running on a device typical of IoT deployments. Faults were injected using electromagnetic pulses during the execution of PRIDE, and the faulty ciphertexts were used to recover the key bits. We also discuss some countermeasures that could be used to thwart such attacks.
Last updated:  2017-02-04
Honey Chatting: A novel instant messaging system robust to eavesdropping over communication
Joo-Im Kim, Ji Won Yoon
There have been many efforts to strengthen the security of Instant Messaging (IM) systems. One of the typical technologies is conventional message encryption using a secret or private key. However, the key is fundamentally vulnerable to a brute-force attack, which allows an attacker to recover the original message. As a countermeasure, a technique was suggested that generates plausible-looking but fake plaintexts, called Honey Encryption (HE). In this paper, we present an HE-based statistical scheme and design a Honey Chatting application that is robust to eavesdropping. In addition, we verify the effectiveness of Honey Chatting by comparing the entropy of decrypted messages through experiments.
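The core HE idea can be sketched with a toy distribution-transforming encoder over a four-message space. The message list, seed size, and SHA-256-based one-byte pad below are illustrative stand-ins, not the paper's statistical scheme; a real DTE would also weight the seed ranges by the message distribution rather than splitting them evenly.

```python
import hashlib, secrets

MESSAGES = ["hello", "meet at noon", "call me", "ok"]
BLOCK = 256 // len(MESSAGES)   # 8-bit seed space split across the messages

def encode(msg):
    # DTE: map a message to a uniformly random seed inside its range
    return MESSAGES.index(msg) * BLOCK + secrets.randbelow(BLOCK)

def decode(seed):
    # DTE inverse: every seed decodes to some plausible message
    return MESSAGES[seed // BLOCK]

def pad(key):
    return hashlib.sha256(key.encode()).digest()[0]

def he_encrypt(key, msg):
    return encode(msg) ^ pad(key)

def he_decrypt(key, ct):
    return decode(ct ^ pad(key))

ct = he_encrypt("correct horse", "meet at noon")
print(he_decrypt("correct horse", ct))   # the real plaintext
print(he_decrypt("wrong guess", ct))     # a plausible-looking decoy
```

A brute-force attacker sees a valid-looking message under every key guess, which is precisely the property the entropy experiments in the paper measure.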
Last updated:  2017-02-03
Visual Honey Encryption: Application to Steganography
Ji Won Yoon, Hyoungshick Kim, Hyun-Ju Jo, Hyelim Lee, Kwangsu Lee
Honey encryption (HE) is a new technique to overcome the weakness of conventional password-based encryption (PBE). However, conventional honey encryption still has the limitation that it works only for binary bit streams or integer sequences because it uses a fixed distribution-transforming encoder (DTE). In this paper, we propose a variant of honey encryption called visual honey encryption which employs an adaptive DTE in a Bayesian framework so that the proposed approach can be applied to more complex domains including images and videos. We applied this method to create a new steganography scheme which significantly improves the security level of traditional steganography.
Last updated:  2017-02-03
How to Circumvent the Two-Ciphertext Lower Bound for Linear Garbling Schemes
Carmen Kempka, Ryo Kikuchi, Koutarou Suzuki
At EUROCRYPT 2015, Zahur et al. argued that all linear, and thus efficient, garbling schemes need at least two $k$-bit elements to garble an AND gate with security parameter $k$. We show how to circumvent this lower bound, and propose an efficient garbling scheme which requires less than two $k$-bit elements per AND gate for most circuit layouts. Our construction slightly deviates from the linear garbling model, and does not contradict any claims in the lower-bound proof. With our proof-of-concept construction, we hope to spur new ideas for more practical garbling schemes. Our construction can directly be applied to semi-private function evaluation by garbling XOR, XNOR, NAND, OR, NOR and AND gates in the same way, while keeping the evaluator oblivious of the gate function.
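For context, the classical (unoptimized) way to garble an AND gate uses four ciphertexts, one per input combination; the lower bound under discussion concerns how far below that linear schemes can go. A hash-based toy version, with SHA-256 standing in for whatever primitives a real scheme would use:

```python
import hashlib, secrets

def H(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

# One random 16-byte label per wire per truth value
A = [secrets.token_bytes(16) for _ in range(2)]
B = [secrets.token_bytes(16) for _ in range(2)]
C = [secrets.token_bytes(16) for _ in range(2)]

# Four ciphertexts: H(A_a, B_b) keys the output label C[a AND b]
table = {}
for a in (0, 1):
    for b in (0, 1):
        h = H(A[a], B[b])
        table[h[:8]] = bytes(x ^ y for x, y in zip(h[16:], C[a & b]))

def evaluate(la, lb):
    # The evaluator holds two input labels and learns only the output label
    h = H(la, lb)
    return bytes(x ^ y for x, y in zip(h[16:], table[h[:8]]))

assert evaluate(A[1], B[1]) == C[1]   # 1 AND 1 -> label for 1
assert evaluate(A[1], B[0]) == C[0]   # 1 AND 0 -> label for 0
```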
Last updated:  2017-02-03
Efficient Differential Trail Searching Algorithm for ARX Block Ciphers
Seojin Kim, HyungChul Kang, Deukjo Hong, Jaechul Sung, Seokhie Hong
In this paper, we suggest an improved method for searching for differential trails of block ciphers with ARX structure. We use two techniques to optimize the automatic trail search algorithm suggested by Biryukov et al., and obtain results 2–3 times faster than the previous algorithm when applied to the block cipher SPECK.
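A core subroutine in such ARX trail searches is computing the XOR differential probability of modular addition. A sketch of the standard Lipmaa–Moriai formula (the paper's speed-ups concern the search strategy, not this formula) could look as follows:

```python
def xdp_add(alpha, beta, gamma, n):
    # Lipmaa-Moriai: probability that input differences (alpha, beta)
    # propagate to output difference gamma through addition mod 2^n
    mask = (1 << n) - 1

    def eq(x, y, z):  # bit i set iff x, y, z agree in bit i
        return ~(x ^ y) & ~(x ^ z) & mask

    if eq(alpha << 1, beta << 1, gamma << 1) & \
       (alpha ^ beta ^ gamma ^ (beta << 1)) & mask:
        return 0.0  # impossible differential
    weight = bin(~eq(alpha, beta, gamma) & (mask >> 1)).count("1")
    return 2.0 ** -weight

print(xdp_add(1, 1, 0, 16))   # 0.5: the LSB carries cancel half the time
print(xdp_add(0, 0, 1, 16))   # 0.0: zero input difference cannot create one
```

A trail search chains such transitions round by round, maximizing the product of the per-addition probabilities.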
Last updated:  2017-01-31
Symbolic Models for Isolated Execution Environments
Charlie Jacomme, Steve Kremer, Guillaume Scerri
Isolated Execution Environments (IEEs), such as ARM TrustZone and Intel SGX, offer the possibility to execute sensitive code in isolation from other malicious programs running on the same machine, or from a potentially corrupted OS. A key feature of IEEs is the ability to produce reports that cryptographically bind a message to the program that produced it, typically ensuring that this message is the result of the given program running on an IEE. We present a symbolic model for specifying and verifying applications that make use of such features. For this we introduce the S$\ell$APiC process calculus, which allows one to reason about reports issued at given locations. We also provide tool support by extending the SAPiC/Tamarin toolchain, and demonstrate the applicability of our framework on several examples: secure outsourced computation (SOC), a secure licensing protocol, and a one-time password protocol, all of which rely on such IEEs.
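The report feature itself can be illustrated with a toy MAC-based binding. All names here are hypothetical; real IEEs such as SGX derive report keys in hardware and use their own report formats, and the paper models this symbolically rather than computationally.

```python
import hashlib, hmac

DEVICE_KEY = b"device-secret"   # illustrative; real IEEs keep this in hardware

def report(program, message):
    # Bind the message to (a hash of) the program that produced it
    return hmac.new(DEVICE_KEY,
                    hashlib.sha256(program).digest() + message,
                    hashlib.sha256).digest()

def verify(program, message, tag):
    return hmac.compare_digest(report(program, message), tag)

prog, msg = b"def f(x): return x + 1", b"42"
tag = report(prog, msg)
assert verify(prog, msg, tag)
assert not verify(b"other program", msg, tag)   # binding to the program fails
```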
Last updated:  2017-01-31
The Exact Security of PMAC
Peter Gaži, Krzysztof Pietrzak, Michal Rybár
PMAC is a simple and parallel block-cipher mode of operation, which was introduced by Black and Rogaway at Eurocrypt 2002. If instantiated with a (pseudo)random permutation over $n$-bit strings, PMAC constitutes a provably secure variable-input-length (pseudo)random function. For adversaries making $q$ queries, each of length at most $\ell$ (in $n$-bit blocks), and of total length $\sigma \leq q\ell$, the original paper proves an upper bound on the distinguishing advantage of $O(\sigma^2/2^n)$, while the currently best bound is $O(q\sigma/2^n)$. In this work we show that this bound is tight by giving an attack with advantage $\Omega(q^2\ell/2^n)$. In the PMAC construction one initially XORs a mask to every message block, where the mask for the $i$-th block is computed as $\tau_i := \gamma_i \cdot L$, where $L$ is a (secret) random value, and $\gamma_i$ is the $i$-th codeword of the Gray code. Our attack applies more generally to any sequence of $\gamma_{i}$'s which contains a large coset of a subgroup of $GF(2^n)$. We then investigate whether the security of PMAC can be further improved by using $\tau_{i}$'s that are $k$-wise independent, for $k > 1$ (the original distribution is only 1-wise independent). We observe that the security of PMAC will not increase in general, even if the masks are chosen from a 2-wise independent distribution, and then prove that the security increases to $O(q^2/2^n)$ if the $\tau_i$'s are 4-wise independent. Due to simple extension attacks, this is the best bound one can hope for, using any distribution on the masks. Whether 3-wise independence is already sufficient to get this level of security is left as an open problem.
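Concretely, the masks under attack are generated as below. This sketch uses the common $GF(2^{128})$ reduction polynomial $x^{128}+x^7+x^2+x+1$; PMAC's exact field representation and constants may differ, and the value of $L$ here is purely illustrative.

```python
def gray(i):
    # i-th codeword of the standard reflected Gray code
    return i ^ (i >> 1)

def gf128_mul(a, b):
    # Carry-less multiply, reduced mod x^128 + x^7 + x^2 + x + 1
    r = 0
    for bit in range(b.bit_length()):
        if (b >> bit) & 1:
            r ^= a << bit
    for bit in range(r.bit_length() - 1, 127, -1):
        if (r >> bit) & 1:
            r ^= ((1 << 128) | 0x87) << (bit - 128)
    return r

L = 0x0123456789ABCDEF0123456789ABCDEF        # illustrative secret value
taus = [gf128_mul(gray(i), L) for i in range(1, 9)]  # masks for 8 blocks
```

The attack exploits structure in the multiset $\{\gamma_i\}$: when it contains a large coset of a subgroup, many mask differences coincide, which is what drives the $\Omega(q^2\ell/2^n)$ advantage.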
Last updated:  2017-09-13
Authenticated Encryption in the Face of Protocol and Side Channel Leakage
Guy Barwell, Daniel P. Martin, Elisabeth Oswald, Martijn Stam
Authenticated encryption schemes in practice have to be robust against adversaries that have access to various types of leakage, for instance decryption leakage on invalid ciphertexts (protocol leakage), or leakage on the underlying primitives (side channel leakage). This work makes several novel contributions: we augment the notion of nonce-based authenticated encryption with the notion of continuous leakage, and we prove composition results in the face of protocol and side channel leakage. Moreover, we show how to achieve authenticated encryption that is simultaneously both misuse resistant and leakage resilient, based on a sufficiently leakage-resilient PRF, and finally we propose a concrete, pairing-based instantiation of the latter.
Last updated:  2017-03-29
Computation of a 768-bit prime field discrete logarithm
Thorsten Kleinjung, Claus Diem, Arjen K. Lenstra, Christine Priplata, Colin Stahlke
This paper reports on the number field sieve computation of a 768-bit prime field discrete logarithm, describes the different parameter optimizations and resulting algorithmic changes compared to the factorization of a 768-bit RSA modulus, and briefly discusses the cryptologic relevance of the result.
Last updated:  2017-06-07
Subring Homomorphic Encryption
Seiko Arita, Sari Handa
In this paper, we construct a subring homomorphic encryption scheme, that is, a homomorphic encryption scheme built on the decomposition ring, a subring of the cyclotomic ring. In the scheme, each plaintext slot contains an integer in $\mathbb{Z}_{p^l}$, rather than an element of $\mathrm{GF}(p^d)$ as in conventional homomorphic encryption schemes on cyclotomic rings. Our benchmark results indicate that the subring homomorphic encryption scheme is several times faster than HElib for mod-$p^l$ plaintexts, due to the high parallelism of its mod-$p^l$ slot structure. We believe that a plaintext structure composed of mod-$p^l$ slots will be more natural, easier to handle, and significantly more efficient for many applications such as outsourced data mining.
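The benefit of mod-$p^l$ slots can be conveyed with a plain integer-CRT analogy. The actual scheme packs slots via the decomposition ring, not integer CRT, and the moduli below are toy choices; the point is only that one packed value supports slot-wise addition and multiplication.

```python
from math import prod

MODS = [3**2, 5**2, 7**2]   # pairwise coprime "slots", each a Z_{p^l}
M = prod(MODS)

def pack(vals):
    # CRT: one integer mod M encodes one value per slot
    x = 0
    for v, m in zip(vals, MODS):
        Mi = M // m
        x = (x + v * Mi * pow(Mi, -1, m)) % M
    return x

def unpack(x):
    return [x % m for m in MODS]

a, b = pack([1, 2, 3]), pack([4, 5, 6])
assert unpack((a + b) % M) == [5, 7, 9]    # slot-wise addition
assert unpack((a * b) % M) == [4, 10, 18]  # slot-wise multiplication
```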
Last updated:  2017-02-06
FHE Over the Integers: Decomposed and Batched in the Post-Quantum Regime
Daniel Benarroch, Zvika Brakerski, Tancrède Lepoint
Fully homomorphic encryption over the integers (FHE-OI) is currently the only alternative to lattice-based FHE. FHE-OI includes a family of schemes whose security is based on the hardness of different variants of the approximate greatest common divisor (AGCD) problem. The majority of these works are based on the noise-free variant of AGCD, which is potentially weaker than the general one. In particular, the noise-free variant relies on the hardness of factoring and is thus vulnerable to quantum attacks. Much effort has been devoted to porting techniques from second-generation lattice-based FHE (using tensoring) to FHE-OI. Gentry, Sahai and Waters (Crypto 13) showed that third-generation techniques (which were later formalized using the "gadget matrix") can also be ported. In this work, we propose a comprehensive study of applying third-generation FHE techniques to the regime of FHE-OI. We present and analyze a third-generation FHE-OI scheme based on decisional AGCD without the noise-free assumption. We proceed to show a batch version of our scheme where each ciphertext can encode a vector of messages and operations are performed coordinate-wise. We use an AGCD variant similar to that of Cheon et al. (Eurocrypt 13), who suggested the batch approach for second-generation FHE, but we require neither the noise-free component nor a subset-sum assumption. However, like Cheon et al., we do require circular security for our scheme, even for bounded homomorphism. Lastly, we discuss some of the obstacles towards efficient implementation of our schemes and discuss a number of possible optimizations.
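The "gadget matrix" machinery referenced above reduces, in its simplest integer form, to bit decomposition: replacing a large coefficient by its bits keeps noise growth additive rather than multiplicative. A minimal sketch, with the digit count chosen arbitrarily for illustration:

```python
K = 8                                # number of digits (here: bits)
g = [1 << i for i in range(K)]       # gadget vector (1, 2, 4, ..., 2^(K-1))

def g_inverse(c):
    # Decompose c into small (0/1) digits with <g_inverse(c), g> = c
    return [(c >> i) & 1 for i in range(K)]

c = 173
assert sum(d * gi for d, gi in zip(g_inverse(c), g)) == c
# Multiplying ciphertexts by these 0/1 digits, instead of by c itself, is
# what keeps the noise growth of third-generation schemes small.
```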
Last updated:  2017-08-09
Fast Montgomery-like Square Root Computation over $GF(2^m)$ for All Trinomials
Yin Li, Yu Zhang
This letter is concerned with an extension of square root computation over $GF(2^m)$ defined by irreducible trinomials. We introduce a new type of Montgomery-like square root formulae, which are more efficient than the classic square root operation. By choosing a proper Montgomery factor for different types of trinomials, the space and time complexities of our proposal outperform or match the best known results. Furthermore, a practical application of the Montgomery-like square root to exponentiation computation is also presented.
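As a baseline for what the Montgomery-like formulae improve upon, the classic square root in $GF(2^m)$ is the Frobenius computation $\sqrt{a} = a^{2^{m-1}}$, shown here for the toy trinomial field $GF(2^7)$ with $f(x)=x^7+x+1$. The paper's contribution replaces this generic repeated squaring with cheaper closed-form expressions.

```python
M = 7
F = (1 << 7) | (1 << 1) | 1   # reduction trinomial x^7 + x + 1

def gf_mul(a, b):
    # Schoolbook multiply in GF(2^7) with on-the-fly reduction by F
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= F
    return r

def gf_sqrt(a):
    # sqrt(a) = a^(2^(M-1)): square M-1 times, since x -> x^2 is a
    # field automorphism of order M
    for _ in range(M - 1):
        a = gf_mul(a, a)
    return a

x = 0b1011001
s = gf_sqrt(x)
assert gf_mul(s, s) == x
```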
Last updated:  2020-02-25
Optimal Extension Protocols for Byzantine Broadcast and Agreement
Chaya Ganesh, Arpita Patra
The problems of Byzantine Broadcast (BB) and Byzantine Agreement (BA) are of interest to both the distributed computing and cryptography communities. Extension protocols for these primitives have been introduced to handle long messages efficiently at the cost of a small number of single-bit broadcasts, referred to as seed broadcasts. While communication optimality has remained the most sought-after property of an extension protocol in the literature, we prioritize both communication and round optimality in this work. In a setting with $n$ parties and an adversary controlling at most $t$ parties in a Byzantine fashion, we present BB and BA extension protocols for $t<n$, $t < n/2$ and $t<n/3$ that are simultaneously optimal in terms of communication and round complexity. The best communication that an extension protocol can achieve in any setting is $O(\ell n)$ bits for a message of length $\ell$ bits. The best achievable round complexity is $O(n)$ for the setting $t< n$ and $O(1)$ in the other two settings $t < n/2$ and $t<n/3$. The existing constructions are either optimal only in terms of communication complexity, or require more rounds than our protocols, or achieve optimal round complexity at the cost of sub-optimal communication. Specifically, we construct communication-optimal protocols in the three corruption scenarios with the following round complexities:
1. $t<n/3$: $3$ rounds, improving over $O(\sqrt{\ell} + n^2)$
2. $t<n/2$: $5$ rounds, improving over $6$
3. $t<n$: $O(n)$ rounds, improving over $O(n^2)$
A concrete protocol is obtained from an extension protocol by replacing the seed broadcasts with a BB protocol for a single bit. Our extension protocols minimize both the seed-round complexity and the seed-communication complexity. The former refers to the number of rounds in an extension protocol in which seed broadcasts are invoked, and impacts the round complexity of a concrete protocol due to sequential calls to bit broadcast. The latter refers to the number of bits communicated through the seed broadcasts, and impacts the round and communication complexity due to parallel instances of single-bit broadcast. In the settings $t<n/3$, $t<n/2$ and $t<n$, our protocols improve the seed-round complexity from $O(\sqrt{\ell} + n^2)$ to $1$, from $3$ to $2$, and from $O(n^2)$ to $O(n)$, respectively. Our protocols keep the seed-communication complexity independent of the message length $\ell$ and either improve on the existing protocols or keep the complexity in almost the same order.