All papers in 2016 (Page 10 of 1195 results)

Last updated:  2016-03-17
Collision Attack on GRINDAHL
Thomas Peyrin
Hash functions have been among the most scrutinized cryptographic primitives in the previous decade, mainly due to the cryptanalysis breakthroughs on the MD-SHA family and the NIST SHA3 competition that followed. GRINDAHL is a hash function proposed at FSE 2007 that inspired several SHA3 candidates. One of its particularities is that it follows the RIJNDAEL design strategy, with an efficiency comparable to SHA2. This paper provides the first cryptanalytic work on this scheme and we show that the 256-bit version of GRINDAHL is not collision resistant. Our attack uses byte-level truncated differentials and leverages a counterintuitive method (reaching an internal state where all bytes are active) in order to ease the construction of good differential paths. Then, by carefully utilizing the degrees of freedom inserted every round, and with a work effort of approximately $2^{112}$ hash computations, an attacker can generate a collision for the full 256-bit version of GRINDAHL.
Last updated:  2016-03-18
Evaluation and Improvement of Generic-Emulating DPA Attacks
Weijia Wang, Yu Yu, Junrong Liu, Zheng Guo, François-Xavier Standaert, Dawu Gu, Sen Xu, Rong Fu
At CT-RSA 2014, Whitnall, Oswald and Standaert gave the impossibility result that no generic DPA strategy (i.e., one without any \emph{a priori} knowledge about the leakage characteristics) can recover secret information from a physical device when the target function is injective (e.g., the AES and PRESENT S-boxes), and as a remedy, they proposed a slightly relaxed strategy, ``generic-emulating DPAs'', free from the non-injectivity constraint. However, as we show in this paper, the only generic-emulating DPA proposed in their work, namely the SLR-based DPA, suffers from two drawbacks: unstable outcomes in the high-noise regime (i.e., for a small number of traces) and poor performance, especially on real smart cards (compared with traditional DPAs with a specific power model). In order to solve these problems, we introduce two new generic-emulating distinguishers, based on lasso and ridge regression strategies respectively, with more stable and better performance than the SLR-based one. Further, we introduce the cross-validation technique, which improves generic-emulating DPAs in general and might be of independent interest. Finally, we compare the performance of all the aforementioned generic-emulating distinguishers (both with and without cross-validation) on simulated leakage functions of different degrees, and on an AES ASIC implementation. Our experimental results show that our generic-emulating distinguishers are stable and some of them behave even better than (resp., almost the same as) the best Difference-of-Means distinguishers in simulated leakages (resp., on a real implementation), and thus make themselves good alternatives to traditional DPAs.
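To make the lasso/ridge idea concrete, below is a minimal numpy sketch of a ridge-regression-based generic-emulating distinguisher; the array names, the single leakage point per trace, and the degree-1 bit basis are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def bit_basis(z, nbits=8):
        # Degree-1 model: one column per bit of the intermediate value
        # (higher-degree monomials could be appended for higher degrees).
        return np.array([[(v >> i) & 1 for i in range(nbits)] for v in z], float)

    def ridge_r2(X, y, lam=1e-3):
        # Closed-form ridge regression; score = coefficient of determination.
        X = np.column_stack([np.ones(len(X)), X])
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        return 1 - ((y - X @ w) ** 2).mean() / y.var()

    def rank_keys(plaintexts, leakage, sbox):
        # plaintexts: uint8 array; leakage: one point of interest per trace;
        # sbox: np.array of 256 entries.  Best key guess ranked first.
        scores = [ridge_r2(bit_basis(sbox[plaintexts ^ k]), leakage)
                  for k in range(256)]
        return np.argsort(scores)[::-1]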
Last updated:  2016-03-17
A Parametric Family of Attack Models for Proxy Re-Encryption
David Nuñez, Isaac Agudo, Javier Lopez
Proxy Re-Encryption (PRE) is a type of Public-Key Encryption (PKE) that provides an additional re-encryption functionality. Although PRE is inherently more complex than PKE, attack models for PRE have not been developed further than those inherited from PKE. In this paper we address this gap and define a parametric family of attack models for PRE, based on the availability of both the decryption and re-encryption oracles during the security game. This family enables the definition of fine-grained security notions for PRE, ranging from “plain” IND-CPA to “full” IND-CCA. We analyze some relations among these notions of security, and in particular, the separations, which further support the importance of the re-encryption oracle. The identified separations stem from the study of a new property of PRE, called privacy of re-encryption keys, which captures the requirement that re-encryption keys should not be leaked through the re-encryption function. Finally, we show that the scheme by Kirshanova (PKC 2014), which does not satisfy this property, cannot achieve a meaningful security notion for PRE since it is vulnerable to chosen-ciphertext attacks using the re-encryption oracle. This attack emphasizes the fact that PRE schemes that leak re-encryption keys cannot achieve strong security notions.
Last updated:  2016-03-17
New Bounds for Keyed Sponges with Extendable Output: Independence between Capacity and Message Length
Yusuke Naito, Kan Yasuda
We provide new bounds for the pseudo-random function security of keyed sponge constructions. For the case $c\leq b/2$ ($c$ the capacity and $b$ the permutation size), our result improves over all previously-known bounds. A remarkable aspect of our bound is that dependence between capacity and message length is removed, partially solving the open problem posed by Gaži et al. at CRYPTO 2015. Our bound is essentially tight, matching the two types of attacks pointed out by Gaži et al. For the case $c>b/2$, Gaži et al.'s bound remains the best for the case of single-block output, but for keyed sponges with extendable outputs, our result partly (when query complexity is relatively large) provides better security than Mennink et al.'s bound presented at ASIACRYPT 2015.
Last updated:  2016-03-17
Spooky Interaction and its Discontents: Compilers for Succinct Two-Message Argument Systems
Cynthia Dwork, Moni Naor, Guy N. Rothblum
We are interested in constructing short two-message arguments for various languages, where the complexity of the verifier is small (e.g., linear in the input size, or even sublinear if the input is coded properly). Suppose that we have a low-communication public-coin interactive protocol for proving (or arguing) membership in the language. We consider a ``compiler'' from the literature that takes a protocol consisting of several rounds and produces a two-message argument system. The compiler is based on any Fully Homomorphic Encryption (FHE) scheme, or on PIR (under additional conditions on the protocol). This compiler has been used successfully in several proposed protocols. We investigate the question of whether this compiler can be proven to work under standard cryptographic assumptions. We prove: (i) If FHEs or PIR systems exist, then there is a sound interactive proof protocol that, when run through the compiler, results in a protocol that is not sound. (ii) If the verifier in the original protocol runs in logarithmic space and has no ``long-term'' secret memory (a generalization of public coins), then the compiled protocol is sound. This yields a succinct two-message argument system for any language in NC, where the verifier's work is linear (or even polylog if the input is coded appropriately). This is the first (non-trivial) two-message succinct argument system that is based on a standard polynomial-time hardness assumption.
Last updated:  2016-03-17
Nonce-Based Cryptography: Retaining Security when Randomness Fails
Mihir Bellare, Björn Tackmann
We take nonce-based cryptography beyond symmetric encryption, developing it as a broad and practical way to mitigate damage caused by failures in randomness, whether inadvertent (bugs) or malicious (subversion). We focus on definitions and constructions for nonce-based public-key encryption and briefly treat nonce-based signatures. We introduce and construct hedged extractors as a general tool in this domain. Our nonce-based PKE scheme guarantees that if the adversary wants to violate IND-CCA security then it must do both of the following: (1) fully compromise the RNG, and (2) penetrate the sender system to exfiltrate a seed used by the sender.
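As a rough, non-authoritative illustration of the hedged-extractor idea, the toy Python below derives encryption coins from a per-sender seed and a nonce via HMAC-SHA-256; the function names and the choice of HMAC are assumptions for exposition, not the paper's exact construction:

    import hmac, hashlib

    def hedged_coins(seed: bytes, nonce: bytes, pk: bytes, msg: bytes) -> bytes:
        # Coins stay pseudorandom unless the adversary BOTH controls the
        # nonce (e.g. via a broken RNG) AND exfiltrates the seed.
        return hmac.new(seed, nonce + pk + msg, hashlib.sha256).digest()

    def nbpke_encrypt(pke_encrypt, pk, seed, nonce, msg):
        # pke_encrypt(pk, msg, coins) is a hypothetical PKE algorithm whose
        # randomness is supplied explicitly as `coins`.
        return pke_encrypt(pk, msg, coins=hedged_coins(seed, nonce, pk, msg))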
Last updated:  2016-04-18
Obfuscation Combiners
Marc Fischlin, Amir Herzberg, Hod Bin Noon, Haya Shulman
Obfuscation is challenging; we currently have practical candidates with rather vague security guarantees on the one side, and theoretical constructions which have recently experienced jeopardizing attacks against the underlying cryptographic assumptions on the other side. This motivates us to study and present robust combiners for obfuscators, which integrate several candidate obfuscators into a single obfuscator that is secure as long as a quorum of the candidates is indeed secure. We give several results about building obfuscation combiners, with matching upper and lower bounds for the precise quorum of secure candidates. Namely, we show that one can build 3-out-of-4 obfuscation combiners where at least three of the four candidates are secure, whereas 2-out-of-3 structural combiners (which combine the obfuscator candidates in a black-box sense) with only two secure candidates are impossible. Our results generalize to (2c+1)-out-of-(3c+1) combiners for the positive result, and to 2c-out-of-3c results for the negative result, for any integer c. To reduce overhead, we define detecting combiners, where the combined obfuscator may sometimes produce an error-indication instead of the desired output, indicating that one of the component obfuscators is faulty. We present a (c+1)-out-of-(2c+1) detecting combiner for any integer c, bypassing the previous lower bound. We further show that c-out-of-2c structural detecting combiners are again impossible. Since our approach can be used for practical obfuscators, as well as for obfuscators proven secure (based on assumptions), we also briefly report on implementation results for some applied obfuscator programs.
Last updated:  2016-03-17
Optimization of LPN Solving Algorithms
Sonia Bogos, Serge Vaudenay
In this article we focus on constructing an algorithm that automates the generation of LPN solving algorithms from the considered parameters. When searching for an algorithm to solve an LPN instance, we make use of the existing techniques and optimize their use. We formalize an LPN algorithm as a path in a graph $G$ and our algorithm searches for the optimal paths in this graph. The results bring improvements over the existing work by a factor of $2^{8}$ to $2^{10}$, i.e. we improve the results of the covering code from ASIACRYPT'14. Furthermore, we propose concrete practical codes and a method to find good codes.
Last updated:  2016-03-17
Verifiability Notions for E-Voting Protocols
Veronique Cortier, David Galindo, Ralf Kuesters, Johannes Mueller, Tomasz Truderung
There have been intensive research efforts in the last two decades or so to design and deploy electronic voting (e-voting) protocols/systems which allow voters and/or external auditors to check that the votes were counted correctly. This security property, which was motivated not least by numerous problems even in national elections, is called \emph{verifiability}. It is meant to defend against voting devices and servers that have programming errors or are outright malicious. In order to properly evaluate and analyze e-voting protocols w.r.t. verifiability, one fundamental challenge has been to formally capture the meaning of this security property. While the first formal definitions of verifiability were devised in the late 1980s already, new verifiability definitions are still being proposed. The definitions differ in various aspects, including the classes of protocols they capture and even their formulations of the very core of the meaning of verifiability. This is an unsatisfying state of affairs, leaving the research on the verifiability of e-voting protocols in a fuzzy state. In this paper, we review all formal definitions of verifiability proposed in the literature and cast them in a framework proposed by Küsters, Truderung, and Vogt (the KTV framework), yielding a uniform treatment of verifiability. This enables us to provide a detailed comparison of the various definitions of verifiability from the literature. We thoroughly discuss advantages and disadvantages, and point to limitations and problems. Finally, from these discussions and based on the KTV framework, we distill a general definition of verifiability, which can be instantiated in various ways, and provide precise guidelines for its instantiation. The concepts for verifiability we develop should be widely applicable also beyond the framework used here. Altogether, our work offers a well-founded reference point for future research on the verifiability of e-voting systems.
Last updated:  2016-03-15
On a remarkable property of APN Gold functions
Anastasiya Gorodilova
In [13], for a given vectorial Boolean function $F$ from $\mathbb{F}_2^n$ to itself, an associated Boolean function $\gamma_F(a,b)$ in $2n$ variables was defined; it takes value $1$ iff $a\neq{\bf 0}$ and the equation $F(x)+F(x+a)=b$ has solutions. In this paper we introduce the notion of differentially equivalent functions as vectorial functions that have equal associated Boolean functions. It is an interesting open problem to describe the differential equivalence class of a given APN function. We consider the APN Gold function $F(x)=x^{2^k+1}$, where gcd$(k,n)=1$, and prove that there exist exactly $2^{2n+n/2}$ distinct affine functions $A$ such that $F$ and $F+A$ are differentially equivalent if $n=4t$ for some $t$ and $k = n/2 \pm 1$; otherwise the number of such affine functions is equal to $2^{2n}$. This theoretical result, together with our computer calculations, shows that the APN Gold functions for $k=n/2\pm1$ and $n=4t$ are the only functions (except one function in 6 variables) among all known quadratic APN functions in $2,\ldots,8$ variables that have more than $2^{2n}$ trivial affine functions $A^F_{c,d}(x)=F(x)+F(x+c)+d$, where $c,d\in\mathbb{F}_2^n$, preserving the associated Boolean function when added to $F$.
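For small $n$, the associated Boolean function can be computed exhaustively. The plain-Python sketch below (an illustration of the definitions, not the paper's proof machinery) builds $\gamma_F$ for the Gold function $x^3$ over $\mathbb{F}_{2^4}$ and checks that adding a trivial $A^F_{c,d}$ preserves it:

    def gamma(F, n):
        # Support of gamma_F: pairs (a, b) with a != 0 such that
        # F(x) XOR F(x XOR a) = b has a solution.
        return {(a, F[x] ^ F[x ^ a])
                for a in range(1, 1 << n) for x in range(1 << n)}

    def gf16_mul(a, b, poly=0b10011):          # GF(2^4) with x^4 + x + 1
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0b10000:
                a ^= poly
            b >>= 1
        return r

    n, c, d = 4, 3, 5                          # arbitrary c, d in F_2^4
    F = [gf16_mul(x, gf16_mul(x, x)) for x in range(16)]  # Gold x^(2^1+1)
    A = [F[x] ^ F[x ^ c] ^ d for x in range(16)]          # trivial A^F_{c,d}
    G = [F[x] ^ A[x] for x in range(16)]
    assert gamma(F, n) == gamma(G, n)          # differentially equivalent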
Last updated:  2016-03-15
Bit-Based Division Property and Application to Simon Family
Yosuke Todo, Masakatu Morii
Ciphers that do not use S-boxes have been discussed to meet the demand for lightweight cryptosystems; their round functions consist of AND, rotation, and XOR. The Simon family in particular is one of the most famous such ciphers, and there are many cryptanalyses against the Simon family. However, it is very difficult to guarantee its security because we cannot use the techniques that are useful for S-box-based ciphers. Very recently, the division property, a new technique to find integral characteristics, was shown at Eurocrypt 2015. The technique is powerful for S-box-based ciphers, and it was used to break, for the first time, the full MISTY1 at CRYPTO 2015. However, it has not been applied effectively to non-S-box-based ciphers like the Simon family, and only the existence of the 10-round integral characteristic on Simon32 was proven. On the other hand, the experimental characteristic, which possibly does not work for all keys, covers 15 rounds, leaving a 5-round gap. To fill the gap, we introduce a bit-based division property, and we apply it to show that the experimental 15-round integral characteristic always works for all keys. Though the bit-based division property finds more accurate integral characteristics, it requires high time and memory complexity. As a result, we cannot apply it to symmetric-key ciphers whose block length is over 32 bits. Therefore, we alternatively propose a method for designers. The method works for ciphers with large block length, and it shows ``provable security'' against integral cryptanalyses using the division property. We apply this technique to the Simon family and show that Simon48, 64, 96, and 128 probably do not have 17-, 20-, 25-, and 29-round integral characteristics, respectively.
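For reference, the object being analyzed really is built from AND, rotation and XOR alone; a minimal Python sketch of the Simon round (Simon32 parameters: 16-bit words, rotation amounts 1, 8, 2, per the Simon specification) shows the structure that a bit-based division property must trace bit by bit:

    MASK = 0xFFFF                              # Simon32: 16-bit words

    def rol(x, r):
        return ((x << r) | (x >> (16 - r))) & MASK

    def simon_round(x, y, k):
        # Only AND, rotations and XOR -- no S-box, which is why the
        # word-level division property is too coarse here.
        return y ^ (rol(x, 1) & rol(x, 8)) ^ rol(x, 2) ^ k, x

    def encrypt(block, round_keys):
        x, y = block >> 16, block & MASK
        for k in round_keys:
            x, y = simon_round(x, y, k)
        return (x << 16) | y

An integral characteristic is then verified by XORing the outputs of encrypt over a structured set of inputs and checking which output bits are balanced (always sum to zero).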
Last updated:  2016-03-15
Co-location detection on the Cloud
Mehmet Sinan Inci, Berk Gulmezoglu, Thomas Eisenbarth, Berk Sunar
In this work we focus on the problem of co-location as a first step for conducting cross-VM attacks such as Prime+Probe or Flush+Reload in commercial clouds. We demonstrate and compare three co-location detection methods, namely a cooperative Last-Level Cache (LLC) covert channel, software profiling on the LLC, and memory bus locking. We conduct our experiments on three commercial clouds: Amazon EC2, Google Compute Engine and Microsoft Azure. Finally, we show that both cooperative and non-cooperative co-location with specific targets is still possible on major cloud services.
Last updated:  2016-03-15
Secure Audit Logs with Verifiable Excerpts
Gunnar Hartung
Log files are the primary source of information when the past operation of a computing system needs to be determined. Keeping correct and accurate log files is important for after-the-fact forensics, as well as for system administration, maintenance, and auditing. Therefore, a line of research has emerged on how to cryptographically protect the integrity of log files even against intruders who gain control of the logging machine. We contribute to this line of research by devising a scheme where one can verify the integrity not only of the log file as a whole, but also of excerpts. This is helpful in various scenarios, including cloud provider auditing.
Last updated:  2016-03-15
Detecting flawed masking schemes with leakage detection tests
Oscar Reparaz
Masking is a popular countermeasure to thwart side-channel attacks on embedded systems. Many proposed masking schemes, even those carrying ``security proofs'', are eventually broken because they are flawed by design. The security validation process is nowadays a lengthy, tedious and manual process. In this paper, we report on a method to verify the soundness of a masking scheme before implementing it on a device. We show that by instrumenting a high-level implementation of the masking scheme and by applying leakage detection techniques, a system designer can quickly assess at design time whether the masking scheme is flawed or not, and to what extent. Our method requires no more than working high-level source code and is based on simulation. Thus, it can be used already in the very early stages of design. We validate our approach by spotting, in an automated fashion, first-, second- and third-order flaws in recently published state-of-the-art schemes in a matter of seconds with limited computational resources. We also present a new second-order flaw in a table recomputation scheme, and show that the approach is useful when designing a hardware masked implementation.
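A minimal sketch of the method's flavor, with the leakage model, sample counts and the injected flaw all being illustrative assumptions: simulate the masked computation at a high level with noisy Hamming-weight leakage, then run a fixed-vs-random Welch t-test:

    import numpy as np

    rng = np.random.default_rng(1)

    def leak(shares, sigma=0.5):
        # Hamming-weight leakage of every share plus Gaussian noise.
        return sum(bin(int(s)).count("1") for s in shares) + rng.normal(0, sigma)

    def trace(x, flawed):
        m = int(rng.integers(256))             # fresh random mask
        shares = (x ^ m, m)
        if flawed:                             # flaw: unmasked value leaks too
            shares += (x,)
        return leak(shares)

    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))

    fixed = np.array([trace(0x42, True) for _ in range(20000)])
    rand  = np.array([trace(int(rng.integers(256)), True) for _ in range(20000)])
    print(abs(t_stat(fixed, rand)))            # |t| > 4.5 flags a 1st-order flaw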
Last updated:  2016-08-31
Universal Obfuscation and Witness Encryption: Boosting Correctness and Combining Security
Prabhanjan Ananth, Aayush Jain, Moni Naor, Amit Sahai, Eylon Yogev
Over the last few years a new breed of cryptographic primitives has arisen: on one hand they have previously unimagined utility, and on the other hand they are not based on simple-to-state, well-studied assumptions. With the on-going study of these primitives, we are left with several different candidate constructions, each based on a different, not easy to express, mathematical assumption, where some even turn out to be insecure. A {\em combiner} for a cryptographic primitive takes several candidate constructions of the primitive and outputs one construction that is as good as any of the input constructions. Furthermore, this combiner must be efficient: the resulting construction should remain polynomial-time even when combining polynomially many candidates. Combiners are especially important for a primitive where there are several competing constructions whose security is hard to evaluate, as is the case for indistinguishability obfuscation (IO) and witness encryption (WE). One place where the need for combiners appears is in the design of a {\em universal construction}, where one wishes to find ``one construction to rule them all'': an explicit construction that is secure if {\em any} construction of the primitive exists. In a recent paper, Goldwasser and Kalai posed as a challenge finding universal constructions for indistinguishability obfuscation and witness encryption. In this work we resolve this issue: we construct universal schemes for IO, and for witness encryption, and also resolve the existence of combiners for these primitives along the way. For IO, our universal construction and combiners can be built based on \emph{either} assuming DDH, or assuming LWE, with security against subexponential adversaries. For witness encryption, we need only one-way functions secure against polynomial time adversaries.
Last updated:  2016-03-14
Low Power Montgomery Modular Multiplication on Reconfigurable Systems
Pedro Maat C. Massolino, Lejla Batina, Ricardo Chaves, Nele Mentens
This paper presents an area-optimized FPGA architecture of the Montgomery modular multiplication algorithm on a low power reconfigurable IGLOO® 2 FPGA of Microsemi®. Our contributions consist of the mapping of the Montgomery algorithm to the specific architecture of the target FPGA, using the pipelined Math blocks and the embedded memory blocks. We minimize the occupation of these blocks as well as the usage of the regular FPGA cells (LUT4 and Flip Flops) through a dedicated scheduling algorithm. The obtained results suggest that a 224-bit modular multiplication can be computed in 2.42 µs, at a cost of 444 LUT4, 160 Flip Flops, 1 Math Block and 1 64x18 RAM, with a power consumption of 25.35 mW. If more area resources are considered, modular multiplication can be performed in 1.30 µs at a cost of 658 LUT4, 268 Flip Flops, 2 Math Blocks, 2 64x18 RAMs and a power consumption of 36.02 mW.
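For readers less familiar with the algorithm being mapped, a word-serial Montgomery multiplication sketch in Python follows; the 18-bit word size is only an assumption chosen to echo the Math blocks and 64x18 RAMs above, and the scheduling that constitutes the paper's contribution is not reproduced:

    def montgomery_mul(a, b, m, n_words, w=18):
        # Computes a*b*R^{-1} mod m with R = 2^(w*n_words), one w-bit
        # word of `a` per iteration; requires m odd and a, b < m < R.
        m_prime = pow(-m, -1, 1 << w)          # -m^{-1} mod 2^w (Python 3.8+)
        t = 0
        for i in range(n_words):
            ai = (a >> (w * i)) & ((1 << w) - 1)
            t += ai * b
            u = ((t & ((1 << w) - 1)) * m_prime) & ((1 << w) - 1)
            t = (t + u * m) >> w               # exact: the low word cancels
        return t - m if t >= m else t

To multiply x and y mod m one first converts to Montgomery form (x_bar = x*R mod m, y_bar = y*R mod m); montgomery_mul(x_bar, y_bar, m, n_words) then returns x*y*R mod m.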
Last updated:  2016-03-14
Constrained PRFs for Unbounded Inputs with Short Keys
Hamza Abusalah, Georg Fuchsbauer
A constrained pseudorandom function (CPRF) $F \colon {\cal K} \times {\cal X} \to {\cal Y}$ for a family ${\cal T}$ of subsets of $\cal X$ is a function where for any key $k \in {\cal K}$ and set $S \in {\cal T}$ one can efficiently compute a short constrained key $k_S$, which allows one to evaluate $F(k,\cdot)$ on all inputs $x \in S$, while the outputs on all inputs $x \notin S$ look random even given $k_S$. Abusalah et al. recently constructed the first constrained PRF for inputs of arbitrary length whose sets $S$ are decided by Turing machines. They use their CPRF to build broadcast encryption and the first ID-based non-interactive key exchange for an unbounded number of users. Their constrained keys are obfuscated circuits and are therefore large. In this work we drastically reduce the key size and define a constrained key for a Turing machine $M$ as a short signature on $M$. For this, we introduce a new signature primitive with constrained signing keys that let one sign only certain messages, while forging a signature on others is hard even when knowing the coins for key generation.
Last updated:  2017-05-25
Various Proxy Re-Encryption Schemes from Lattices
Xiong Fan, Feng-Hao Liu
Proxy re-encryption (PRE) was introduced by Blaze, Bleumer and Strauss [Eurocrypt '98]. Basically, PRE allows a semi-trusted proxy to transform a ciphertext encrypted under one key into an encryption of the same plaintext under another key, without revealing the underlying plaintext. Since then, many interesting applications have been explored, and many constructions in various settings have been proposed. In 2007, Canetti and Hohenberger [CCS '07] defined a stronger notion -- CCA-security -- and constructed a bi-directional PRE scheme. Later on, several works considered CCA-secure PRE based on bilinear group assumptions. Very recently, Kirshanova [PKC '14] proposed the first single-hop CCA-secure PRE scheme based on the learning with errors (LWE) assumption. In this work, we first point out a subtle but serious mistake in the security proof of the work by Kirshanova. This reopens the direction of lattice-based CCA-secure constructions, even in the single-hop setting. Then we propose a new LWE-based single-hop CCA-secure PRE scheme. Finally, we extend the construction to support multi-hop re-encryptions for different levels of security under different settings.
Last updated:  2016-03-14
Public Key Encryption Supporting Equality Test and Flexible Authorization without Bilinear Pairings
Xi-Jun Lin, Haipeng Qu, Xiaoshuai Zhang
In recent years, public key encryption with equality test (PKEET) has become a hot research topic in the cryptography community due to the advancement of cloud computing. Recently, Ma et al. proposed a new related primitive, called public key encryption with equality test supporting flexible authorization (PKEET-FA), and constructed a concrete scheme. In their proposal, four types of authorization were presented to support different authorization policies. However, their proposal is based on bilinear pairings, which are costly operations compared with modular exponentiations. In this paper, we present a new PKEET-FA scheme without bilinear pairings. Compared with the existing work, our proposal is more efficient.
Last updated:  2017-01-23
Arithmetic coding and blinding countermeasures for lattice signatures
Markku-Juhani O. Saarinen
We describe new arithmetic coding techniques and side-channel blinding countermeasures for lattice-based cryptography. Using these techniques, we develop a practical, compact, and more quantum-resistant variant of the BLISS Ideal Lattice Signature Scheme. We first show how the BLISS parameters and hash-based random oracle can be modified to be more secure against quantum pre-image attacks while optimizing signature size. Arithmetic coding offers an information-theoretically optimal compression for stationary and memoryless sources, such as the discrete Gaussian distributions often present in lattice-based cryptography. We show that this technique gives better signature sizes than the previously proposed advanced Huffman-based signature compressors. We further demonstrate that arithmetic decoding from a uniform source to a target distribution is also an optimal non-uniform sampling method in the sense that a minimal amount of true random bits is required. The performance of this new Binary Arithmetic Coding sampler is comparable to other practical samplers. The same code tables or circuitry can be utilized for both tasks, eliminating the need for separate sampling and compression components. We then describe simple randomized blinding techniques that can be applied to anti-cyclic polynomial multiplication to mask timing and power consumption side-channels in ring arithmetic. We further show that the Gaussian sampling process can also be blinded by a split-and-permute technique as an effective countermeasure against side-channel attacks.
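The ``sampling by arithmetic decoding'' point fits in a few lines of Python: uniform random bits bisect the unit interval until it lies inside one cell of the target CDF, so bits are consumed lazily, at a rate close to the source entropy. This toy uses exact fractions and a small table standing in for a discrete Gaussian; the paper's Binary Arithmetic Coding sampler works on fixed-precision integer tables instead:

    from fractions import Fraction
    import secrets

    def ac_sample(pmf):
        # Bisect [0,1) with uniform bits until the interval sits inside a
        # single CDF cell, then output that symbol (pmf entries positive).
        cdf = [Fraction(0)]
        for p in pmf:
            cdf.append(cdf[-1] + p)
        lo, hi = Fraction(0), Fraction(1)
        while True:
            hits = [i for i in range(len(pmf))
                    if lo < cdf[i + 1] and hi > cdf[i]]
            if len(hits) == 1:
                return hits[0]
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if secrets.randbits(1) else (lo, mid)

    pmf = [Fraction(8, 16), Fraction(4, 16), Fraction(3, 16), Fraction(1, 16)]
    samples = [ac_sample(pmf) for _ in range(10)]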
Last updated:  2016-03-11
Faster Algorithms for Solving LPN
Bin Zhang, Lin Jiao, Mingsheng Wang
The LPN problem, lying at the core of many cryptographic constructions for lightweight and post-quantum cryptography, has received quite a lot of attention recently. The best published algorithm for solving it, presented at Asiacrypt 2014, improves the classical BKW algorithm by using covering codes, and was claimed to marginally compromise the $80$-bit security of HB variants, LPN-C and Lapin. In this paper, we develop faster algorithms for solving LPN based on an optimal precise embedding of cascaded concrete perfect codes, in a similar framework but with many optimizations. Our algorithm outperforms the previous methods for the proposed parameter choices and distinctly breaks the 80-bit security bound of the instances suggested in cryptographic schemes like HB$^+$, HB$^\#$, LPN-C and Lapin.
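For orientation, the core reduction step that both the Asiacrypt 2014 algorithm and this work optimize is BKW's collide-and-XOR; a toy Python sketch (parameters purely illustrative) follows:

    import random

    def lpn_samples(secret, n, noise=0.125, count=1 << 14):
        out = []
        for _ in range(count):
            a = random.getrandbits(n)
            z = bin(a & secret).count("1") & 1          # <a, s> over F_2
            z ^= random.random() < noise                # Bernoulli error
            out.append((a, z))
        return out

    def bkw_step(samples, n, b):
        # Collide samples on their top b bits and XOR colliding pairs:
        # the dimension drops by b, but the error rate of the XORed
        # samples grows -- the trade-off that covering codes improve.
        buckets, reduced = {}, []
        for a, z in samples:
            key = a >> (n - b)
            if key in buckets:
                a2, z2 = buckets.pop(key)
                reduced.append((a ^ a2, z ^ z2))        # top b bits cancel
            else:
                buckets[key] = (a, z)
        return reduced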
Last updated:  2016-03-10
What users should know about Full Disk Encryption based on LUKS
Simone Bossi, Andrea Visconti
Mobile devices, laptops, and USB memory usually store large amounts of sensitive information, frequently unprotected. Unauthorized access to or release of such information could reveal business secrets, users' habits, non-public data or anything else. Full Disk Encryption (FDE) solutions might help users to protect sensitive data in the event that devices are lost or stolen. In this paper we focus on the security of the Linux Unified Key Setup (LUKS) specifications, the most common FDE solution implemented in Linux based operating systems. In particular, we analyze the key management process used to compute and store the encryption key, and the solution adopted to mitigate the problem of brute force attacks based on weak user passwords. Our testing activities show that unwitting users can significantly reduce the security of a LUKS implementation by setting specific hash functions and aggressive power management options.
Last updated:  2016-03-10
On the weaknesses of PBKDF2
Andrea Visconti, Simone Bossi, Hany Ragab, Alexandro Calò
Password-based key derivation functions are of particular interest in cryptography because they (a) input a password/passphrase (which usually is short and lacks enough entropy) and derive a cryptographic key; (b) slow down brute force and dictionary attacks as much as possible. In PKCS#5 [17], RSA Laboratories described a password-based key derivation function called PBKDF2 that has been widely adopted in many security related applications [6, 7, 11]. In order to slow down brute force attacks, PBKDF2 introduces CPU-intensive operations based on an iterated pseudorandom function. Such a pseudorandom function is HMAC-SHA-1 by default. In this paper we show that, if HMAC-SHA-1 is computed in a standard mode without following the performance improvements described in the implementation note of RFC 2104 [13] and FIPS 198-1 [14], an attacker is able to avoid 50% of PBKDF2's CPU-intensive operations, by replacing them with precomputed values. We note that a number of well-known and widely-used crypto libraries are subject to this vulnerability. In addition to such a vulnerability, we describe some other minor optimizations that an attacker can exploit to further reduce the key derivation time.
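The optimization at stake is easy to exhibit with hashlib: the two key-derived HMAC pad blocks can be hashed once per password and their states cloned for every iteration, which is exactly the half of the work that a standard-mode implementation keeps redoing. A sketch for one PBKDF2 block (Python 3; should agree with hashlib.pbkdf2_hmac('sha1', ...) on its first 20 bytes):

    import hashlib

    def hmac_sha1_prepadded(key):
        # Precompute the SHA-1 states for key^ipad and key^opad once
        # (the RFC 2104 / FIPS 198-1 implementation note).
        if len(key) > 64:
            key = hashlib.sha1(key).digest()
        key = key.ljust(64, b"\x00")
        inner = hashlib.sha1(bytes(k ^ 0x36 for k in key))
        outer = hashlib.sha1(bytes(k ^ 0x5C for k in key))
        return inner, outer

    def hmac_once(inner, outer, msg):
        i, o = inner.copy(), outer.copy()      # resume from cached states
        i.update(msg)
        o.update(i.digest())
        return o.digest()

    def pbkdf2_block(password, salt, iters, block_index=1):
        inner, outer = hmac_sha1_prepadded(password)
        u = hmac_once(inner, outer, salt + block_index.to_bytes(4, "big"))
        acc = int.from_bytes(u, "big")
        for _ in range(iters - 1):
            u = hmac_once(inner, outer, u)
            acc ^= int.from_bytes(u, "big")
        return acc.to_bytes(20, "big")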
Last updated:  2016-03-10
Spooky Encryption and its Applications
Yevgeniy Dodis, Shai Halevi, Ron D. Rothblum, Daniel Wichs
Consider a setting where inputs $x_1,\ldots,x_n$ are encrypted under independent public keys. Given the ciphertexts $\{c_i = Enc(pk_i,x_i)\}_i$, Alice outputs ciphertexts $c'_1,\ldots,c'_n$ that decrypt to $y_1,\ldots,y_n$ respectively. What relationships between the $x_i$'s and $y_i$'s can Alice induce? Motivated by applications to delegating computations, Dwork, Langberg, Naor, Nissim and Reingold (unpublished manuscript, 2004) showed that a semantically secure scheme disallows signaling in this setting, meaning that $y_i$ cannot depend on $x_j$ for $j \neq i$. On the other hand if the scheme is homomorphic then any local (component-wise) relationship is achievable, meaning that each $y_i$ can be an arbitrary function of $x_i$. However, there are also relationships which are neither signaling nor local. Dwork et al. asked if it is possible to have encryption schemes that support such ``spooky'' relationships. Answering this question is the focus of our work. Our first result shows that, under the LWE assumption, there exist encryption schemes supporting a large class of ``spooky'' relationships, which we call additive function sharing (AFS) spooky. In particular, for any polynomial-time function $f$, Alice can ensure that $y_1,\ldots,y_n$ are random subject to $\sum_{i=1}^n y_i = f(x_1,\ldots,x_n)$. For this result, the public keys all depend on common public randomness. Our second result shows that, assuming sub-exponentially hard indistinguishability obfuscation (iO) (and additional more standard assumptions), we can remove the common randomness and choose the public keys completely independently. Furthermore, in the case of $n=2$ inputs, we get a scheme that supports an even larger class of spooky relationships. We discuss several implications of AFS-spooky encryption. Firstly, it gives a strong counter-example to a method proposed by Aiello et al. (ICALP, 2000) for building arguments for NP from homomorphic encryption. Secondly, it gives a simple 2-round multi-party computation protocol where, at the end of the first round, the parties can locally compute an additive secret sharing of the output. Lastly, it immediately yields a function secret sharing (FSS) scheme for all functions. We also define a notion of spooky-free encryption, which ensures that no spooky relationship is achievable. We show that any non-malleable encryption scheme is spooky-free. Furthermore, we can construct spooky-free homomorphic encryption schemes from SNARKs, and it remains an open problem whether it is possible to do so from falsifiable assumptions.
Last updated:  2016-03-10
Cryptanalysis of the FLIP Family of Stream Ciphers
Sébastien Duval, Virginie Lallemand, Yann Rotella
At Eurocrypt 2016, Méaux et al. proposed FLIP, a new family of stream ciphers intended for use in Fully Homomorphic Encryption systems. Unlike its competitors which either have a low initial noise that grows at each successive encryption, or a high constant noise, the FLIP family of ciphers achieves a low constant noise thanks to a new construction called filter permutator. In this paper, we present an attack on the early version of FLIP that exploits the structure of the filter function and the constant internal state of the cipher. Applying this attack to the two instantiations proposed by Méaux et al. allows for a key recovery in $2^{54}$ basic operations (resp. $2^{68}$), compared to the claimed security of $2^{80}$ (resp. $2^{128}$).
Last updated:  2016-03-10
Automated Unbounded Analysis of Cryptographic Constructions in the Generic Group Model
Miguel Ambrona, Gilles Barthe, Benedikt Schmidt
We develop a new method to automatically prove security statements in the Generic Group Model as they occur in actual papers. We start by defining (i) a general language to describe security definitions, (ii) a class of logical formulas that characterize how an adversary can win, and (iii) a translation from security definitions to such formulas. We prove a Master Theorem that relates the security of the construction to the existence of a solution for the associated logical formulas. Moreover, we define a constraint solving algorithm that proves the security of a construction by proving the absence of solutions. We implement our approach in a fully automated tool, the $gga^{\infty}$ tool, and use it to verify different examples from the literature. The results improve on the tool by Barthe et al. (CRYPTO'14, PKC'15): for many constructions, $gga^{\infty}$ succeeds in proving standard (unbounded) security, whereas Barthe's tool is only able to prove security for a small number of oracle queries.
Last updated:  2016-03-10
The Adjacency Graphs of Linear Feedback Shift Registers with Primitive-like Characteristic Polynomials
Ming Li, Dongdai Lin
We consider the adjacency graphs of linear feedback shift registers (LFSRs) with characteristic polynomials of the form l(x)p(x), where l(x) is a polynomial of small degree and p(x) is a primitive polynomial. It is shown that their adjacency graphs are closely related to the association graph of l(x) and the cyclotomic numbers over finite fields. By using this connection, we give a unified method to determine their adjacency graphs. As an application of this method, we explicitly calculate the adjacency graphs of LFSRs with characteristic polynomials of the form (1+x+x^3+x^4)p(x), and construct a large class of de Bruijn sequences from them.
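To see the objects involved, the Python toy below enumerates the cycles of a small LFSR and the conjugate pairs sitting between different cycles, which are the edges of the adjacency graph and the places where cycles can be joined into a de Bruijn sequence; for brevity it takes l(x) = 1 and the primitive p(x) = x^4 + x + 1 rather than the (1+x+x^3+x^4)p(x) family treated in the paper:

    from itertools import product

    def lfsr_next(state, taps):
        # Fibonacci LFSR: feedback bit is the XOR of the tapped positions
        # (taps (0, 1) realize s_{t+4} = s_{t+1} + s_t, i.e. x^4 + x + 1).
        fb = 0
        for i in taps:
            fb ^= state[i]
        return state[1:] + (fb,)

    def cycles(n, taps):
        seen, cyc = set(), []
        for start in product((0, 1), repeat=n):
            if start in seen:
                continue
            c, s = [], start
            while s not in seen:
                seen.add(s); c.append(s); s = lfsr_next(s, taps)
            cyc.append(c)
        return cyc

    def conjugate_pairs(cyc):
        # States s and s + (1,0,...,0) on different cycles; swapping their
        # successors joins the two cycles (the de Bruijn construction).
        where = {s: i for i, c in enumerate(cyc) for s in c}
        return [(s, (s[0] ^ 1,) + s[1:]) for s in where
                if where[s] < where[(s[0] ^ 1,) + s[1:]]]

    cyc = cycles(4, taps=(0, 1))               # cycles: {0000} and length 15
    print(len(cyc), conjugate_pairs(cyc))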
Last updated:  2016-03-10
Efficient Lattice-based Authenticated Encryption: A Practice-Oriented Provable Security Approach
Ahmad Boorghany, Siavash Bayat-Sarmadi, Rasool Jalili
Lattice-based cryptography has received significant attention in the past decade. It has attractive properties such as being a major post-quantum cryptography candidate, enjoying worst-case to average-case security reductions, and being supported by efficient implementations. In recent years, lattice-based schemes have achieved enough maturity to become interesting also for the industry. Additionally, authenticated encryption (AE) is another important topic in the community of cryptography. In this paper, considering the two above-mentioned subjects, we propose three lattice-based AEs with acceptable practical efficiency. These schemes are provably secure assuming the hardness of elementary lattice problems. That is in contrast to the other practical provably-secure AEs, which are based on the hardness assumption of another cryptographic primitive, such as AES. Moreover, we analyze the exact security of these schemes in the paradigm of practice-oriented provable security, while the security proofs of almost all previous lattice-based schemes are asymptotic. The implementation results show that one of the proposed schemes becomes even faster than an AES-256-GCM implementation when encrypting messages of length 64 bytes or longer. In particular, for a 1500-byte message, this scheme is 34% faster than AES-256-GCM.
Last updated:  2016-03-10
Improved Meet-in-the-Middle Attacks on Round-Reduced Crypton-256
Yonglin Hao
The meet-in-the-middle (MITM) attack has proven to be efficient in analyzing the AES block cipher, and its efficiency has been increasing with the introduction of various techniques such as differential enumeration, the key-dependent sieve, and super-boxes. The recent MITM attack given by Li and Jin has been successfully mounted on 10-round AES-256. Crypton is an AES-like block cipher. In this paper, we apply the MITM method to the cryptanalysis of Crypton-256. Following Li and Jin's idea, we give the first 6-round distinguisher for Crypton. Based on the distinguisher as well as the properties of Crypton's simple key schedule, we successfully launch MITM attacks on Crypton-256 reduced to 9 and 10 rounds. For 9-round Crypton-256, our MITM attack can recover the 256-bit key with a time complexity of $2^{173.05}$ and a memory complexity of $2^{241.17}$. For the 10-round version, we give two MITM attacks. The basic attack requires a time complexity of $2^{240.01}$ and a memory complexity of $2^{241.59}$. The time/memory complexity of the advanced MITM attack on 10-round Crypton is $2^{245.05}/2^{209.59}$. Our MITM attacks share the same data complexity of $2^{113}$ and their error rates are negligible.
Last updated:  2016-03-10
Exact Error Bound of Cox-Rower Architecture for RNS Arithmetic
Shinichi Kawamura, Tomoko Yonemura, Yuichi Komano, Hideo Shimizu
The Residue Number System (RNS) is a method for representing an integer as an n-tuple of its residues with respect to a given base. Since RNS has inherent parallelism, it has been actively researched for implementing fast public-key cryptography. This paper derives the exact error bound of the approximation on the Cox-Rower architecture, which was proposed for RNS modular multiplication. This is the tightest bound ever found and enables us to find new parameter sets for the Cox-Rower architecture, which cannot be found with the old bounds.
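For readers new to RNS, a small Python sketch of the representation, channel-wise arithmetic, and CRT reconstruction may help; the Cox-Rower architecture approximates precisely this reconstruction (the quotient needed for base extension), and the paper's contribution is the exact bound on that approximation error:

    from math import prod

    BASE = (13, 15, 16, 17)                    # pairwise coprime moduli

    def to_rns(x, base=BASE):
        return tuple(x % m for m in base)

    def rns_mul(a, b, base=BASE):
        # Channel-wise, no carries between moduli: hence the parallelism.
        return tuple((ai * bi) % m for ai, bi, m in zip(a, b, base))

    def from_rns(r, base=BASE):
        # CRT: x = sum_i r_i * (M/m_i) * ((M/m_i)^-1 mod m_i)  mod M
        M = prod(base)
        return sum(ri * (M // m) * pow(M // m, -1, m)
                   for ri, m in zip(r, base)) % M

    assert from_rns(rns_mul(to_rns(123), to_rns(45))) == (123 * 45) % prod(BASE)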
Last updated:  2016-03-08
Multi-prover Proof-of-Retrievability
Maura B. Paterson, Douglas R. Stinson, Jalaj Upadhyay
There has been considerable recent interest in ``cloud storage'' wherein a user asks a server to store a large file. One issue is whether the user can verify that the server is actually storing the file, and typically a challenge-response protocol is employed to convince the user that the file is indeed being stored correctly. The security of these schemes is phrased in terms of an extractor which will recover the file given any ``proving algorithm'' that has a sufficiently high success probability. This forms the basis of {\em proof-of-retrievability} (PoR) systems. In this paper, we study multiple-server PoR systems. Our contributions to multiple-server PoR systems are as follows. 1. We formalize security definitions for two possible scenarios: (i) when a threshold of servers succeeds with high enough probability (worst-case) and (ii) when the average of the success probability of all the servers is above a threshold (average-case). We also motivate the study of confidentiality of the outsourced message. 2. We give multi-server PoR schemes which are secure under both these security definitions and provide reasonable confidentiality guarantees even when there is no restriction on the computational power of the servers. We also show how classical statistical techniques used by Paterson, Stinson and Upadhyay (Journal of Mathematical Cryptology: 7(3)) can be extended to evaluate whether the responses of the provers are accurate enough to permit successful extraction. 3. We also look at one specific instantiation of our construction when instantiated with the unconditionally secure version of the Shacham-Waters scheme (Asiacrypt, 2008). This scheme gives reasonable security and privacy guarantees. We show that, in the multi-server setting with computationally unbounded provers, one can overcome the limitation that the verifier needs to store as much secret information as the provers.
Last updated:  2016-03-08
How Fast Can Higher-Order Masking Be in Software?
Dahmun Goudarzi, Matthieu Rivain
It is widely accepted that higher-order masking is a sound countermeasure to protect implementations of block ciphers against side-channel attacks. The main issue while designing such a countermeasure is to deal with the nonlinear parts of the cipher, \textit{i.e.} the so-called s-boxes. The prevailing approach to tackle this issue consists in applying the Ishai-Sahai-Wagner (ISW) scheme from CRYPTO 2003 to some polynomial representation of the s-box. Several efficient constructions have been proposed that follow this approach, but higher-order masking is still considered a costly (impractical) countermeasure. In this paper, we investigate efficient higher-order masking techniques by conducting a case study on ARM architectures (the most widespread architecture in embedded systems). We follow a bottom-up approach by first investigating the implementation of the base field multiplication at the assembly level. Then we describe optimized low-level implementations of the ISW scheme and its variant (CPRR) due to Coron \textit{et al.} (FSE 2013). Finally we present improved state-of-the-art methods with custom parameters and various implementation-level optimizations. We also investigate an alternative to polynomial methods which is based on bitslicing at the s-box level. We describe new masked bitslice implementations of the AES and PRESENT ciphers. These implementations happen to be significantly faster than (optimized) state-of-the-art polynomial methods. In particular, our bitslice AES masked at order 10 runs in $0.48$ megacycles, which amounts to $8$ milliseconds at a $60$ MHz clock frequency.
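For concreteness, the ISW multiplication gadget at the heart of the polynomial methods discussed above fits in a few lines; the sketch below works over GF(2) (a masked AND) with d+1 shares, and the order-10 sanity check mirrors the masking order quoted above:

    import secrets
    from functools import reduce
    from operator import xor

    def isw_mul(a, b):
        # ISW (CRYPTO 2003) multiplication on (d+1)-share encodings:
        # the XOR of the output shares equals the product of the XORs
        # of the input shares.
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbits(1)
                r[j][i] = r[i][j] ^ (a[i] & b[j]) ^ (a[j] & b[i])
        return [reduce(xor, (r[i][j] for j in range(n) if j != i), a[i] & b[i])
                for i in range(n)]

    def share(x, d):
        s = [secrets.randbits(1) for _ in range(d)]
        return s + [x ^ reduce(xor, s, 0)]

    a, b = share(1, 10), share(1, 10)          # order 10 -> 11 shares
    assert reduce(xor, isw_mul(a, b)) == 1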
Last updated:  2016-03-08
Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting
Jonathan Bootle, Andrea Cerulli, Pyrros Chaidos, Jens Groth, Christophe Petit
We provide a zero-knowledge argument for arithmetic circuit satisfiability with a communication complexity that grows logarithmically in the size of the circuit. The round complexity is also logarithmic and for an arithmetic circuit with fan-in 2 gates the computation of the prover and verifier is linear in the size of the circuit. The soundness of our argument relies solely on the well-established discrete logarithm assumption in prime order groups. At the heart of our new argument system is an efficient zero-knowledge argument of knowledge of openings of two Pedersen multicommitments satisfying an inner product relation, which is of independent interest. The inner product argument requires logarithmic communication, logarithmic interaction and linear computation for both the prover and the verifier. We also develop a scheme to commit to a polynomial and later reveal the evaluation at an arbitrary point, in a verifiable manner. This is used to build an optimized version of the constant round square root complexity argument of Groth (CRYPTO 2009), which reduces both communication and round complexity.
Last updated:  2017-12-11
Collaborative Multi-Authority Key-Policy Attribute-Based Encryption for Shorter Keys and Parameters
Riccardo Longo, Chiara Marcolla, Massimiliano Sala
Architectures relying on a single central authority often offer great efficiency, but suffer from resiliency problems and are quite vulnerable to attacks. In our proposal, a Multi-Authority Key-Policy Attribute-Based Encryption scheme is constructed with a collaboration phase between the authorities, in order to achieve shorter keys and parameters, thus enhancing the efficiency of encryption and decryption. We prove our system secure under a variation of the bilinear Diffie-Hellman assumption, providing a lower bound on its complexity.
Last updated:  2016-03-08
MEMS-based Gyroscopes as Physical Unclonable Functions
Oliver Willers, Christopher Huth, Jorge Guajardo, Helmut Seidel
We are at the dawn of a hyper-connectivity age otherwise known as the Internet of Things (IoT). It is widely accepted that to be able to reap all benefits from the IoT promise, device security will be of paramount importance. A key requirement for most security solutions is the ability to provide secure cryptographic key storage in a way that will easily scale in the IoT age. In this paper, we focus on providing such a solution based on Physical Unclonable Functions (PUFs). To this end, we focus on microelectromechanical systems (MEMS)-based gyroscopes and show via wafer-level measurements and simulations that it is feasible to use the physical and electrical properties of these sensors for cryptographic key generation. After identifying the most promising features, we propose a novel quantization scheme to extract bit strings from the MEMS analog measurements. We provide upper and lower bounds for the minimum entropy of the bit strings derived from the measurements and fully analyze the intra- and inter-class distributions across the operation range of the MEMS device. We complement these measurements via Monte-Carlo simulations based on the distributions of the parameters measured on actual devices. We also propose and evaluate a key derivation procedure based on fuzzy extractors for Hamming distance, using the min-entropy estimates obtained to derive a full-entropy 128-bit key, requiring 1219 bits of helper data with an (authentication) failure probability of $4\times10^{-7}$. Thereby, we present a complete cryptographic key generation chain. In addition, we propose a dedicated MEMS-PUF design, which is superior to our measured sensor in terms of chip area, and quality and quantity of key seed features.
Last updated:  2016-05-31
On the Size of Pairing-based Non-interactive Arguments
Jens Groth
Non-interactive arguments enable a prover to convince a verifier that a statement is true. Recently there has been a lot of progress both in theory and practice on constructing highly efficient non-interactive arguments with small size and low verification complexity, so-called succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs). Many constructions of SNARGs rely on pairing-based cryptography. In these constructions a proof consists of a number of group elements and the verification consists of checking a number of pairing product equations. The question we address in this article is how efficient pairing-based SNARGs can be. Our first contribution is a pairing-based (preprocessing) SNARK for arithmetic circuit satisfiability, which is an NP-complete language. In our SNARK we work with asymmetric pairings for higher efficiency; a proof is only 3 group elements, and verification consists of checking a single pairing product equation using 3 pairings in total. Our SNARK is zero-knowledge and does not reveal anything about the witness the prover uses to make the proof. As our second contribution we answer an open question of Bitansky, Chiesa, Ishai, Ostrovsky and Paneth (TCC 2013) by showing that 2-move linear interactive proofs cannot have a linear decision procedure. It follows from this that SNARGs where the prover and verifier use generic asymmetric bilinear group operations cannot consist of a single group element. This gives the first lower bound for pairing-based SNARGs. It remains an intriguing open problem whether this lower bound can be extended to rule out 2 group element SNARGs, which would prove the optimality of our 3 element construction.
Last updated:  2016-03-08
Adaptive Oblivious Transfer and Generalizations
Olivier Blazy, Céline Chevalier, Paul Germouty
Oblivious Transfer (OT) protocols were introduced in the seminal paper of Rabin, and allow a user to retrieve a given number of lines (usually one) in a database, without revealing which ones to the server. The server is ensured that only this given number of lines can be accessed per interaction, so the others are protected, while the user is ensured that the server does not learn the numbers of the lines required. This primitive has a huge interest in practice, for example in secure multi-party computation, and directly echoes Symmetrically Private Information Retrieval (SPIR). Recent Oblivious Transfer instantiations secure in the UC framework suffer from a drastic fallback: after the first query, there is no improvement on the global scheme complexity, so subsequent queries each have a global complexity of O(|DB|), meaning that there is no gain compared to running completely independent queries. In this paper, we propose a new protocol solving this issue, allowing subsequent queries with a complexity of O(log(|DB|)), and prove the protocol security in the UC framework with adaptive corruptions and reliable erasures. As a second contribution, we show that the techniques we use for Oblivious Transfer can be generalized to a new framework we call Oblivious Language-Based Envelope (OLBE). It is of practical interest since it seems more and more unrealistic to consider a database with uncontrolled access in access control scenarios. Our approach generalizes Oblivious Signature-Based Envelope, to handle more expressive credentials and requests from the user. Naturally, OLBE encompasses both OT and OSBE, but it also allows to achieve Oblivious Transfer with fine-grained access over each line. For example, a user can access a line if and only if he possesses a certificate granting him access to such a line. We show how to generically and efficiently instantiate such a primitive, and prove it secure in the Universal Composability framework, with adaptive corruptions assuming reliable erasures. We provide the new UC ideal functionalities when needed, or we show that the existing ones fit in our new framework. The security of such designs allows to preserve both the secrecy of the database values and the user credentials. This symmetry allows to view our new approach as a generalization of the notion of Symmetrically PIR.
Last updated:  2016-03-08
Structure-Preserving Smooth Projective Hashing
Olivier Blazy, Céline Chevalier
Smooth projective hashing has proven to be an extremely useful primitive, in particular when used in conjunction with commitments to provide implicit decommitment. This has led to applications proven secure in the UC framework, even in the presence of an adversary which can do adaptive corruptions, like for example Password Authenticated Key Exchange (PAKE), and 1-out-of-m Oblivious Transfer (OT). However such solutions still lack efficiency, since they scale heavily with the underlying message length. Structure-preserving cryptography aims at providing elegant and efficient schemes based on classical assumptions and standard group operations on group elements. A recent trend focuses on constructions of structure-preserving signatures, which require message, signature and verification keys to lie in the base group, while the verification equations only consist of pairing-product equations. Classical constructions of Smooth Projective Hash Functions suffer from the same limitation as classical signatures: at least one part of the computation (messages for signatures, witnesses for SPHFs) is a scalar. In this work, we introduce and instantiate the concept of Structure-Preserving Smooth Projective Hash Functions, and give as applications more efficient instantiations for one-round PAKE and three-round OT, as well as information retrieval thanks to Anonymous Credentials, all UC-secure against adaptive adversaries.
Last updated:  2016-03-08
Indistinguishability Obfuscation from Constant-Degree Graded Encoding Schemes
Huijia Lin
We construct a general-purpose indistinguishability obfuscation (IO) scheme for all polynomial-size circuits from {\em constant-degree} graded encoding schemes in the plain model, assuming the existence of a subexponentially secure Pseudo-Random Generator (PRG) computable by constant-degree arithmetic circuits (or equivalently in $NC^0$), and the subexponential hardness of the Learning With Errors (LWE) problem. In contrast, previous general-purpose IO schemes all rely on polynomial-degree graded encodings. Our general-purpose IO scheme is built upon two key components: (i) a new bootstrapping theorem stating that subexponentially secure IO for a subclass of {\em constant-degree arithmetic circuits} implies IO for all polynomial-size circuits (assuming PRG and LWE as described above), and (ii) a new construction of an IO scheme for any generic class of circuits in the ideal graded encoding model, in which the degree of the graded encodings is bounded by a variant of the degree, called the type degree, of the obfuscated circuits. In comparison, previous bootstrapping theorems start with IO for $NC^1$, and previous constructions of IO schemes require the degree of graded encodings to grow polynomially in the size of the obfuscated circuits.
Last updated:  2016-03-08
SE-ORAM: A Storage-Efficient Oblivious RAM for Privacy-Preserving Access to Cloud Storage
Qiumao Ma, Jinsheng Zhang, Wensheng Zhang, Daji Qiao
Oblivious RAM (ORAM) is a security-provable approach for protecting clients' access patterns to remote cloud storage. Recently, numerous ORAM constructions have been proposed to improve the communication efficiency of the ORAM model, but little attention has been paid to storage efficiency. The state-of-the-art ORAM constructions have a storage overhead of $O(N)$ or $O(N\log N)$ blocks at the server when $N$ data blocks are hosted. To fill this gap, this paper proposes a storage-efficient ORAM (SE-ORAM) construction with a configurable security parameter $\lambda$ and zero storage overhead at the server. Extensive analysis has also been conducted and the results show that SE-ORAM achieves the configured level of security, introduces zero storage overhead to the storage server (i.e., the storage server only stores $N$ data blocks), and incurs $O(\log N)$ blocks of storage overhead at the client, as long as $\lambda\geq 2$ and each node on the storage tree stores $4\log N$ or more data blocks.
Last updated:  2017-08-01
More Efficient Structure-Preserving Signatures - Or: Bypassing the Type-III Lower Bounds
Essam Ghadafi
Structure-preserving signatures are an important cryptographic primitive that is useful for the design of modular cryptographic protocols. It has been proven that structure-preserving signatures (in the most efficient Type-III bilinear group setting) have a lower bound of 3 group elements in the signature (which must include elements from both source groups) and require at least 2 pairing-product equations for verification. In this paper, we show that such lower bounds can be circumvented. In particular, we define the notion of Unilateral Structure-Preserving Signatures on Diffie-Hellman pairs (USPSDH), which are structure-preserving signatures in the efficient Type-III bilinear group setting with the message space being the set of Diffie-Hellman pairs, in the terminology of Abe et al. (Crypto 2010). The signatures in these schemes are elements of one of the source groups, i.e. unilateral, whereas the verification key elements are from the other source group. We construct a number of new structure-preserving signature schemes which bypass the Type-III lower bounds and hence are much more efficient than all existing structure-preserving signature schemes. We also prove the optimality of our constructions by proving lower bounds and giving some impossibility results. Our contribution can be summarized as follows:
- We construct two optimal randomizable CMA-secure schemes with signatures consisting of only 2 group elements from the first (shorter) source group, and therefore our signatures are at least half the size of those of the best existing structure-preserving scheme for unilateral messages in the (most efficient) Type-III setting. Verifying signatures in our schemes requires, besides checking the well-formedness of the message, the evaluation of a single Pairing-Product Equation (PPE), and needs fewer pairing evaluations than all existing structure-preserving signature schemes in the Type-III setting. Our first scheme has a feature that permits controlled randomizability (combined unforgeability) where the signer can restrict some messages such that signatures on those cannot be re-randomized, which might be useful for some applications.
- We construct optimal strongly unforgeable CMA-secure one-time schemes with signatures consisting of 1 group element, which can also sign a vector of messages while maintaining the same signature size.
- We give a one-time strongly unforgeable CMA-secure structure-preserving scheme that signs unilateral messages, i.e. messages in one of the source groups, whose efficiency matches the best existing optimal one-time scheme in every respect.
- We investigate some lower bounds and prove some impossibility results regarding this variant of structure-preserving signatures.
- We give an optimal (with signatures consisting of 2 group elements and verification requiring 1 pairing-product equation) fully randomizable CMA-secure partially structure-preserving scheme that simultaneously signs a Diffie-Hellman pair and a vector in $\mathbb{Z}_p^k$.
- As an example application of one of our schemes, we obtain efficient instantiations of randomizable weakly blind signatures which do not rely on random oracles. The latter is a building block that is used, for instance, in constructing Direct Anonymous Attestation (DAA) protocols, which are protocols deployed in practice.
Our results offer value along two fronts: on the practical side, our constructions are more efficient than existing ones and thus could lead to more efficient instantiations of many cryptographic protocols. On the theoretical side, our results serve as a proof that many of the lower bounds for the Type-III setting can be circumvented.
Last updated:  2016-03-08
Towards Stream Ciphers for Efficient FHE with Low-Noise Ciphertexts
Pierrick Méaux, Anthony Journault, François-Xavier Standaert, Claude Carlet
Symmetric ciphers purposed for Fully Homomorphic Encryption (FHE) have recently been proposed for two main reasons. First, minimizing the implementation (time and memory) overheads that are inherent to current FHE schemes. Second, improving the homomorphic capacity, \textit{i.e.} the number of operations that one can perform on homomorphic ciphertexts before bootstrapping, which amounts to limiting their level of noise. Existing solutions for this purpose suggest a gap between block ciphers and stream ciphers. The former typically allow a constant but small homomorphic capacity, due to the iteration of rounds eventually leading to complex Boolean functions (hence large noise). The latter typically allow a larger homomorphic capacity for the first ciphertext blocks, which decreases with the number of ciphertext blocks (due to the increasing Boolean complexity of the stream ciphers' output). In this paper, we aim to combine the best of these two worlds, and propose a new stream cipher construction that allows constant and small(er) noise. Its main idea is to apply a Boolean (filter) function to a public bit permutation of a constant key register, so that the Boolean complexity of the stream cipher outputs is constant. We also propose an instantiation of the filter function designed to exploit recent (3rd-generation) FHE schemes, where the error growth is quasi-additive when adequately multiplying ciphertexts with the same amount of noise. In order to stimulate further investigation, we then specify a few instances of this stream cipher, for which we provide a preliminary security analysis. We finally highlight the good properties of our stream cipher regarding the other goal of minimizing the time and memory complexity of calculus delegation (for 2nd-generation FHE~schemes). We conclude the paper with open problems related to the large design space opened by these new constructions.
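The filter-permutator idea is concrete enough to sketch. The following toy Python sketch (with a purely illustrative filter function and toy sizes, not the instantiation proposed in the paper) shows why every keystream bit has the same, constant Boolean complexity: each output bit is the same fixed function applied to a publicly permuted copy of a constant key register.

    import random

    def toy_filter(x):
        # placeholder filter: a linear part XORed with a majority-like part
        # (NOT the carefully chosen low-noise filter from the paper)
        lin = 0
        for b in x[:32]:
            lin ^= b
        maj = 1 if sum(x[32:64]) > 16 else 0
        return lin ^ maj

    def keystream(key_bits, nbits, seed=0):
        # each output bit = the fixed filter applied to a *public*
        # permutation of the constant key register
        rng = random.Random(seed)      # public randomness, known to everyone
        idx = list(range(len(key_bits)))
        for _ in range(nbits):
            rng.shuffle(idx)           # the public bit permutation
            yield toy_filter([key_bits[i] for i in idx])

    keyrng = random.Random(1)
    key = [keyrng.randrange(2) for _ in range(128)]   # constant key register
    print(list(keystream(key, 16)))

Since the filter never changes, the noise of a homomorphic evaluation of one keystream bit is the same for the first bit and the millionth, which is exactly the constant-capacity property claimed above.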
Last updated:  2016-07-22
Run-time Accessible DRAM PUFs in Commodity Devices
Wenjie Xiong, André Schaller, Nikolaos A. Anagnostopoulos, Muhammad Umair Saleem, Sebastian Gabmeyer, Stefan Katzenbeisser, Jakub Szefer
A Physically Unclonable Function (PUF) is a unique and stable physical characteristic of a piece of hardware which emerges due to variations in the fabrication processes. Prior works have demonstrated that PUFs are a promising cryptographic primitive for secure key storage, hardware-based device authentication and identification. So far, most PUF constructions have required the addition of new hardware or FPGA implementations for their operation. Recently, intrinsic PUFs, which can be found in commodity devices, have been investigated. Unfortunately, most of them suffer from the drawback that they can only be accessed at boot time. This paper is the first to enable run-time access of decay-based intrinsic DRAM PUFs in commercial off-the-shelf systems, requiring no additional hardware or FPGAs. A key advantage of our PUF construction is that it can be queried during run-time of a Linux system. Furthermore, by exploiting different decay times of individual DRAM cells, the challenge-response space is increased. Finally, we introduce lightweight protocols for device authentication and secure channel establishment that leverage the DRAM PUFs at run-time.
Last updated:  2017-07-25
The Exact Round Complexity of Secure Computation
Sanjam Garg, Pratyay Mukherjee, Omkant Pandey, Antigoni Polychroniadou
We revisit the exact round complexity of secure computation in the multi-party and two-party settings. For the special case of two parties without a simultaneous message exchange channel, this question has been extensively studied and resolved. In particular, Katz and Ostrovsky (CRYPTO '04) proved that 5 rounds are necessary and sufficient for securely realizing every two-party functionality where both parties receive the output. However, the exact round complexity of general multi-party computation, as well as two-party computation with a simultaneous message exchange channel, is not very well understood. These questions are intimately connected to the round complexity of non-malleable commitments. Indeed, the exact relationship between the round complexities of non-malleable commitments and secure multi-party computation has also not been explored. In this work, we revisit these questions and obtain several new results. First, we establish the following main results. Suppose that there exists a $k$-round non-malleable commitment scheme, and let $k' = \max(4, k + 1)$; then, – (Two-party setting with simultaneous message transmission): there exists a $k'$-round protocol for securely realizing every two-party functionality; – (Multi-party setting): there exists a $k'$-round protocol for securely realizing the multi-party coin-flipping functionality. As a corollary of the above results, by instantiating them with existing non-malleable commitment protocols (from the literature), we establish that four rounds are both necessary and sufficient for both the results above. Furthermore, we establish that five rounds are sufficient for every multi-party functionality. We actually obtain a variety of results offering trade-offs between rounds and the cryptographic assumptions used, depending upon the particular instantiations of underlying protocols.
Last updated:  2020-03-04
Searchable Symmetric Encryption: Optimal Locality in Linear Space via Two-Dimensional Balanced Allocations
Gilad Asharov, Moni Naor, Gil Segev, Ido Shahaf
Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer). We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size $N$, under the modest assumption that no keyword appears in more than $N^{1 - 1/\log\log N}$ documents, we construct a scheme with read efficiency $\tilde{O}(\log \log N)$. This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT '14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency $\tilde{O}(\log N)$. Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations (``balls and bins'') problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.
Last updated:  2016-09-25
Fixed Point Arithmetic in SHE Scheme
A. Costache, N. P. Smart, S. Vivek, A. Waller
The purpose of this paper is to investigate fixed-point arithmetic in ring-based Somewhat Homomorphic Encryption (SHE) schemes. We provide three main contributions: firstly, we investigate the representation of fixed-point numbers. We analyse the two representations from Dowlin et al., representing a fixed-point number as a large integer (encoded as a scaled polynomial) versus a polynomial-based fractional representation. We show that these two are, in fact, isomorphic by presenting an explicit isomorphism between the two that enables us to map the parameters from one representation to the other. Secondly, given a computation and a bound on the fixed-point numbers used as inputs and scalars within the computation, we present a method for deriving lower bounds on the plaintext modulus $p$ and the ring degree $d$ needed to support complex homomorphic operations. Finally, as an application of these bounds, we investigate homomorphic image processing.
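The scaled-integer view of fixed-point numbers that underlies this analysis can be sketched in a few lines of Python (plain integers stand in for plaintext encodings; the scale is a toy choice, not one of the paper's parameter sets). The point is that multiplication doubles the scale, so the encoded integers grow with the depth of the computation, which is what drives the lower bounds on $p$ and $d$.

    # A real x is encoded as round(x * 2^F); multiplying encodings adds the
    # scales, so parameters must be large enough that values never wrap.
    F = 8                                # fractional bits (illustrative)

    def encode(x, f=F):
        return round(x * 2**f)

    def decode(v, f=F):
        return v / 2**f

    a, b = encode(1.25), encode(-0.5)
    prod = a * b                         # scale is now 2^(2F): track it!
    assert decode(prod, 2 * F) == 1.25 * -0.5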
Last updated:  2017-04-01
Improved Side-Channel Analysis Attacks on Xilinx Bitstream Encryption of 5, 6, and 7 Series
Amir Moradi, Tobias Schneider
Since 2012, it has been publicly known that the bitstream encryption feature of modern Xilinx FPGAs can be broken by side-channel analysis. In work presented at CT-RSA 2012, the authors used graphics processing units (GPUs) to demonstrate power analysis attacks mounted on side-channel evaluation boards optimized for power measurements. In this work, we extend such attacks by moving to the EM side channel to examine their practical relevance in real-world scenarios. Furthermore, by following a certain measurement procedure we reduce the search space of each part of the attack from $2^{32}$ to $2^{8}$, which allows mounting the attacks on ordinary workstations. Several Xilinx FPGAs from different families - including the 7 series devices - are susceptible to the attacks presented here.
Last updated:  2016-03-06
Invariant subspaces in Simpira
Sondre Rønjom
In this short note we report on invariant subspaces in Simpira in the case of four registers. In particular, we show that the whole input space (respectively output space) can be partitioned into invariant cosets of dimension $56$ over $\mathbb{F}_{2^8}^{64}$. These invariant subspaces are found by exploiting the \emph{non-invariant} subspace properties of AES together with the particular choice of Feistel configuration. Though we give the invariant subspaces for $b=4$ in this paper, we remark that there are invariant subspaces in several of the Simpira instances; these can be determined with only minor adjustments to the analysis in this paper.
Last updated:  2016-11-28
Public Verifiable Function Secret Sharing
Wang Qiang, Zhou Fucai, Chen Chunyu, Li Fuxiang, Xu Zifeng
Function secret sharing (FSS) was introduced by Boyle et al. at Eurocrypt 2015. It allows a dealer to split a secret function f into n separate pieces such that each piece enables the server who owns it to generate a secret share of the evaluation f(x). However, if even a single collusive server returns a wrong result, reconstructing the secret will fail. Therefore, an applicable approach is needed to check the correctness of the result returned by an untrusted server. To address this issue, we first introduce a primitive called Public Verifiable Function Secret Sharing (PVFSS) and define three new important properties: public delegation, public verification and high efficiency. We then initiate a systematic study of PVFSS and construct a PVFSS scheme for point functions. Our scheme not only captures these three properties, but also allows the client to verify the outcome in constant time, i.e., in substantially less time than performing the computation locally. We conducted a performance analysis, which shows that our scheme can be applied in practice, e.g., in cloud/outsourced computing.
Last updated:  2016-03-06
Linguistic Cracking of Passphrases Using Markov Chains
Peder Sparell, Mikael Simovits
In order to remember long passwords, users are often recommended to create a sentence which is then assembled into a long password, a passphrase. However, a language is theoretically very limited and predictable, which is why a linguistically correct passphrase, by Shannon's definition of information theory, should be relatively easy to crack compared to a brute-force attack. This work focuses on cracking linguistically correct passphrases, partly to determine to what extent it is advisable to base a password policy on such phrases for the protection of data, and partly because widely available, effective methods to crack passwords based on phrases are missing today. Within this work, phrases were generated for further processing by available cracking applications, and the language of the phrases was modeled using a Markov process. In this process, phrases are built up using the number of observed instances of subsequent characters or words in a source text, known as n-grams, to determine the possible/probable next character or word in the phrases. The work shows that by creating models of language, linguistically correct passphrases can be broken in a practical way compared to an exhaustive search. In the tests, passphrases consisting of up to 20 characters were broken.
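The n-gram Markov process described above is easy to sketch. The minimal Python below uses word bigrams over a toy corpus; a real cracker would enumerate candidate phrases in decreasing probability order rather than sample them, but the underlying model is the same.

    import random
    from collections import Counter, defaultdict

    def build_bigrams(corpus):
        # count word bigrams (n = 2 n-grams) in a source text
        words = corpus.lower().split()
        model = defaultdict(Counter)
        for w1, w2 in zip(words, words[1:]):
            model[w1][w2] += 1
        return model

    def generate_phrase(model, start, max_chars=20, rng=random):
        # walk the chain: each next word is drawn in proportion to its count
        phrase = [start]
        while len(''.join(phrase)) < max_chars and model[phrase[-1]]:
            nxt = model[phrase[-1]]
            phrase.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
        return ''.join(phrase)

    model = build_bigrams('the cat sat on the mat and the dog sat on the cat')
    print(generate_phrase(model, 'the'))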
Last updated:  2016-04-01
DEcryption Contract ENforcement Tool (DECENT): A Practical Alternative to Government Decryption Backdoors
Peter Linder
A cryptographic contract and enforcement technology would guarantee release of a data decryption key to an authorized party if and only if predetermined contract requirements are satisfied. Threshold secret sharing can be used to eliminate the need for access to the hidden key under normal circumstances. It can also eliminate the liability and burden normally carried by device manufacturers or service providers when they store the decryption keys of their customers. Blockchain technology provides a mechanism for a public audit trail of the creation and release of the hidden key. The use of peer-to-peer mix-net network overlay technology can be added to ensure that the blockchain audit trail documents the release of the key even if an all-powerful entity forces actors to act under duress.
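The threshold secret sharing ingredient can be illustrated with a textbook Shamir k-of-n split; the sketch below is generic background rather than the DECENT construction itself, and the field prime and key are arbitrary toy choices.

    import random

    P = 2**127 - 1                      # toy field: a Mersenne prime

    def share(secret, k, n, rng=random.SystemRandom()):
        # Shamir k-of-n: the secret is the constant term of a random
        # degree-(k-1) polynomial over GF(P); share i is its value at x = i
        coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret from k shares
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (P - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    key = 0xdeadbeefcafebabe            # stand-in for the hidden decryption key
    shares = share(key, k=3, n=5)
    assert reconstruct(shares[:3]) == key   # any 3 of the 5 custodians suffice

No individual custodian, nor any coalition below the threshold, learns anything about the key, which is what removes the single-point liability described above.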
Last updated:  2016-08-25
Cryptanalysis of Simpira v1
Christoph Dobraunig, Maria Eichlseder, Florian Mendel
Simpira v1 is a recently proposed family of permutations, based on the AES round function. The design includes recommendations for using the Simpira permutations in block ciphers, hash functions, or authenticated ciphers. The designers' security analysis is based on computer-aided bounds for the minimum number of active S-boxes. We show that the underlying assumptions of independence, and thus the derived bounds, are incorrect. For family member Simpira-4, we provide differential trails with only 40 (instead of 75) active S-boxes for the recommended 15 rounds. Based on these trails, we propose full-round collision attacks on the proposed Simpira-4 Davies-Meyer hash construction, with complexity $2^{82.62}$ for the recommended full 15 rounds and a truncated 256-bit hash value, and complexity $2^{110.16}$ for 16 rounds and the full 512-bit hash value. These attacks violate the designers' security claims that there are no structural distinguishers with complexity below $2^{128}$.
Last updated:  2016-03-05
On the Key Dependent Message Security of the Fujisaki-Okamoto Constructions
Fuyuki Kitagawa, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
In PKC 1999, Fujisaki and Okamoto showed how to convert any public key encryption (PKE) scheme secure against chosen plaintext attacks (CPA) into a PKE scheme which is secure against chosen ciphertext attacks (CCA) in the random oracle model. Surprisingly, the resulting CCA secure scheme has almost the same efficiency as the underlying CPA secure scheme. Moreover, in J. Cryptology 2013, they proposed a more efficient conversion using the hybrid encryption framework. In this work, we clarify whether these two constructions are also secure in the sense of key dependent message security against chosen ciphertext attacks (KDM-CCA security), under exactly the same assumptions on the building blocks as those used by Fujisaki and Okamoto. Specifically, we show two results: First, we show that the construction proposed in PKC 1999 does not, in general, satisfy KDM-CCA security. Second, we show that the construction proposed in J. Cryptology 2013 does satisfy KDM-CCA security.
Last updated:  2016-03-04
Attribute-Based Signatures for Circuits from Bilinear Map
Yusuke Sakai, Nuttapong Attrapadung, Goichiro Hanaoka
In attribute-based signatures, each signer receives a signing key from the authority, which is associated with the signer's attribute, and using the signing key, the signer can issue a signature on any message under a predicate, if his attribute satisfies the predicate. One of the ultimate goals in this area is to support a wide class of predicates, such as the class of \emph{arbitrary circuits}, with \emph{practical efficiency} from \emph{a simple assumption}, since these three aspects determine the usefulness of the scheme. We present an attribute-based signature scheme which allows us to use an arbitrary circuit as the predicate with practical efficiency from the symmetric external Diffie-Hellman assumption. We achieve this by combining the efficiency of Groth-Sahai proofs, which allow us to prove algebraic equations efficiently, and the expressiveness of Groth-Ostrovsky-Sahai proofs, which allow us to prove any NP relation via circuit satisfiability.
Last updated:  2018-02-18
A trivial debiasing scheme for Helper Data Systems
Boris Skoric
We introduce a debiasing scheme that solves the more-noise-than-entropy problem which can occur in Helper Data Systems when the source is very biased. We perform a condensing step, similar to Index Based Syndrome coding, that reduces the size of the source space in such a way that some source entropy is lost while the noise entropy is greatly reduced. In addition, our method allows for even more entropy extraction by means of a `spamming' technique. Our method outperforms solutions based on the one-pass and two-pass von Neumann algorithms.
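For reference, the one-pass von Neumann algorithm used as a baseline here fits in a few lines; the sketch below also shows the throughput loss on a strongly biased source, which is what motivates the condensing approach.

    import random

    def von_neumann(bits):
        # classic debiaser: per disjoint pair, 01 -> 0, 10 -> 1, 00/11 -> drop;
        # the output is unbiased for i.i.d. input bits, but a source with
        # bias p keeps only a fraction 2p(1-p) of its pairs
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    src = [1 if random.random() < 0.9 else 0 for _ in range(10_000)]
    out = von_neumann(src)
    print(len(out), sum(out) / len(out))   # roughly 900 bits, mean near 0.5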
Last updated:  2016-05-31
On Error Distributions in Ring-based LWE
Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
Since its introduction in 2010 by Lyubashevsky, Peikert and Regev, the Ring Learning With Errors problem (Ring-LWE) has become a popular building block for cryptographic primitives, due to its great versatility and its hardness proof consisting of a (quantum) reduction from ideal lattice problems. But for a given modulus $q$ and degree $n$ number field $K$, generating Ring-LWE samples can be perceived as cumbersome, because the secret keys have to be taken from the reduction mod $q$ of a certain fractional ideal $\mathcal{O}_K^\vee \subset K$ called the codifferent or `dual', rather than from the ring of integers $\mathcal{O}_K$ itself. This has led to various non-dual variants of Ring-LWE, in which one compensates for the non-duality by scaling up the errors. We give a comparison of these versions, and revisit some unfortunate choices that have been made in the recent literature, one of which is scaling up by $|\Delta_K|^{1/2n}$ with $\Delta_K$ the discriminant of $K$. As a main result, we provide for any $\varepsilon > 0$ a family of number fields $K$ for which this variant of Ring-LWE can be broken easily as soon as the errors are scaled up by $|\Delta_K|^{(1-\varepsilon)/n}$.
Last updated:  2016-03-04
Provably Weak Instances of Ring-LWE Revisited
Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
In CRYPTO 2015, Elias, Lauter, Ozman and Stange described an attack on the non-dual decision version of the ring learning with errors problem (RLWE) for two special families of defining polynomials, whose construction depends on the modulus q that is being used. For particularly chosen error parameters, they managed to solve non-dual decision RLWE given 20 samples, with a success rate ranging from 10% to 80%. In this paper we show how to solve the search version for the same families and error parameters, using only 7 samples with a success rate of 100%. Moreover our attack works for every modulus q instead of the q that was used to construct the defining polynomial. The attack is based on the observation that the RLWE error distribution for these families of polynomials is very skewed in the directions of the polynomial basis. For the parameters chosen by Elias et al. the smallest errors are negligible and simple linear algebra suffices to recover the secret. But enlarging the error parameters makes the largest errors wrap around, thereby rendering the RLWE problem unsuitable for cryptographic applications. These observations also apply to dual RLWE, but do not contradict the seminal work by Lyubashevsky, Peikert and Regev.
Last updated:  2016-03-03
Algorithmic Countermeasures Against Fault Attacks and Power Analysis for RSA-CRT
Ágnes Kiss, Juliane Krämer, Pablo Rauzy, Jean-Pierre Seifert
In this work, we analyze all existing RSA-CRT countermeasures against the Bellcore attack that use binary self-secure exponentiation algorithms. We test their security against a powerful adversary by simulating fault injections in a fault model that includes random, zeroing, and skipping faults at all possible fault locations. We find that most of the countermeasures are vulnerable and do not provide sufficient security against all attacks in this fault model. After investigating how additional measures can be included to counter all possible fault injections, we present three countermeasures which prevent both power analysis and many kinds of fault attacks.
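For context, the Bellcore attack that all of these countermeasures target can be demonstrated in a few lines. The Python sketch below uses toy primes and simulates a single fault in one CRT half-exponentiation; one correct and one faulty signature on the same message then factor the modulus.

    from math import gcd

    # unprotected RSA-CRT with toy parameters (both moduli are prime)
    p, q, e = 1000003, 1000033, 65537
    N = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    m = 42

    def crt(a_p, a_q):
        # Garner recombination of the two half-exponentiations
        return (a_p + p * ((a_q - a_p) * pow(p, -1, q) % q)) % N

    s_p = pow(m, d % (p - 1), p)                 # correct half mod p
    s_q = pow(m, d % (q - 1), q)                 # correct half mod q
    s = crt(s_p, s_q)                            # correct signature
    s_faulty = crt(s_p, (s_q + 1) % q)           # simulated fault mod q

    # the two signatures agree mod p but not mod q, so their difference
    # is a multiple of p only:
    assert gcd(s - s_faulty, N) == p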
Last updated:  2016-06-20
May-Ozerov Algorithm for Nearest-Neighbor Problem over $\mathbb{F}_{q}$ and Its Application to Information Set Decoding
Shoichi Hirose
May and Ozerov proposed an algorithm for the nearest-neighbor problem of vectors over the binary field at EUROCRYPT 2015. They applied their algorithm to the decoding problem of random linear codes over the binary field and confirmed the performance improvement. We describe their algorithm generalized to work for vectors over the finite field $\mathbb{F}_{q}$ with arbitrary prime power $q$. We also apply the generalized algorithm to the decoding problem of random linear codes over $\mathbb{F}_{q}$. It is observed by our numerical analysis of asymptotic time complexity that the May-Ozerov nearest-neighbor algorithm may not contribute to the performance improvement of the Stern information set decoding over $\mathbb{F}_{q}$ with $q\geq 3$.
Last updated:  2016-03-04
A Distinguisher on PRESENT-Like Permutations with Application to SPONGENT
Guoyan Zhang, Meicheng Liu
At Crypto 2015, Blondeau et al. presented a known-key analysis of the full PRESENT lightweight block cipher. Based on some of the best differential distinguishers, they introduced a meet-in-the-middle (MitM) layer prepended to the differential distinguisher, which extends the number of attacked rounds on PRESENT from 26 rounds to the full number of rounds without reducing the differential probability. In this paper, we generalize their method and present a distinguisher on a class of permutations we call PRESENT-like permutations. This generic distinguisher is divided into two phases. The first phase is a truncated differential distinguisher with strong bias, which captures the unbalancedness of output collisions on some fixed bits given inputs fixed on some bits, and we take advantage of the strong relation between truncated differential probability and the capacity of multidimensional linear approximations to derive the best differential distinguishers. The second phase is the meet-in-the-middle layer, which is prepended to the truncated differential to propagate the differential properties as far as possible. In contrast to Blondeau et al.'s work, we extend the MitM layers from a 64-bit internal state to states of any size, and we also give a concrete bound to estimate the attacked rounds of the MitM layer. As an illustration, we apply our technique to all versions of the SPONGENT permutations. In the truncated differential phase, we reach one, two or three rounds more than the results shown by the designers. In the meet-in-the-middle phase, we get up to 11 rounds to prepend to the differential distinguishers. In total, we improve the previous distinguishers on all versions of the SPONGENT permutations by up to 13 rounds.
Last updated:  2016-03-03
Trading Plaintext-Awareness for Simulatability to Achieve Chosen Ciphertext Security
Takahiro Matsuda, Goichiro Hanaoka
In PKC 2014, Dachman-Soled showed a construction of a chosen ciphertext (CCA) secure public key encryption (PKE) scheme based on a PKE scheme which simultaneously satisfies a security property called weak simulatability and (standard model) plaintext awareness (sPA1) in the presence of multiple public keys. It is not known whether plaintext awareness in the multiple-key setting is equivalent to the more familiar notion in the single-key setting, and plaintext awareness is typically considered a strong security assumption (because achieving it requires a "knowledge"-type assumption). In Dachman-Soled's construction, the underlying PKE scheme needs to be plaintext aware in the presence of $2k+2$ public keys. The main result of this work is to show that the strength of plaintext awareness required in the Dachman-Soled construction can be "traded" for the strength of a "simulatability" property of other building blocks. Furthermore, we also show that we can "separate" the assumption that a single PKE scheme needs to be both weakly simulatable and plaintext aware in her construction. Specifically, in this paper we show two new constructions of CCA secure key encapsulation mechanisms (KEMs): Our first scheme is based on a KEM which is chosen plaintext (CPA) secure and plaintext aware only in the $2$-key setting, and a PKE scheme satisfying a "slightly stronger" simulatability than weak simulatability, called \emph{trapdoor simulatability} (introduced by Choi et al., ASIACRYPT 2009). Our second scheme is based on a KEM which is $1$-bounded CCA secure (Cramer et al., ASIACRYPT 2007) and plaintext aware only in the \emph{single}-key setting, and a trapdoor simulatable PKE scheme. Our results add new recipes for constructing CCA secure PKE/KEM from general assumptions (that are incomparable to those used by Dachman-Soled), and in particular show interesting trade-offs among building blocks compared with those used in Dachman-Soled's construction.
Last updated:  2017-01-25
Trick or Tweak: On the (In)security of OTR’s Tweaks
Raphael Bost, Olivier Sanders
Tweakable blockciphers (TBCs) are a powerful tool for designing authenticated encryption schemes, as illustrated by Minematsu's Offset Two Rounds (OTR) construction. A TBC takes an additional input, called a tweak, which adds some variability to a standard blockcipher. More specifically, each tweak is expected to define a different, independent pseudo-random permutation. In this work we focus on OTR's way of instantiating a TBC and show that it does not achieve such a property for a large set of parameters. We indeed describe collisions between the input masks derived from the tweaks and explain how they result in practical attacks against this scheme, breaking privacy, authenticity, or both, using a single encryption query, with advantage at least 1/4. We stress however that our results do not invalidate the OTR construction as a whole but simply prove that the TBC's input masks should be designed differently.
Last updated:  2017-03-29
Smooth NIZK Arguments with Applications to Asymmetric UC-PAKE and Threshold-IBE
Charanjit S. Jutla, Arnab Roy
We introduce a novel notion of smooth (-verifier) non-interactive zero-knowledge proofs (NIZK) which parallels the familiar notion of smooth projective hash functions (SPHF). We also show that the recent single group element quasi-adaptive NIZK (QA-NIZK) of Jutla and Roy (CRYPTO 2014) for linear subspaces can be easily extended to be computationally smooth. One important distinction of the new notion from SPHFs is that in a smooth NIZK the public evaluation of the hash on a language member using the projection key does not require the witness of the language member, but instead just requires its NIZK proof. This has the remarkable consequence that in the Gennaro-Lindell paradigm of designing universally-composable password-authenticated key-exchange (UC-PAKE) protocols, if one replaces the traditionally employed SPHFs with the novel smooth QA-NIZK, one gets highly efficient UC-PAKE protocols that are secure even under dynamic corruption. This simpler and modular design methodology allows us to give the first single-round asymmetric UC-PAKE protocol, which is also secure under dynamic corruption in the erasure model. We also define a related concept of smooth signatures, which we show is black-box equivalent to identity-based encryption (IBE). The novel abstraction allows us to give the first threshold (private-key generation) fully-secure IBE in the standard model.
Last updated:  2016-03-03
Efficient Privacy-Preserving Matrix Factorization via Fully Homomorphic Encryption
Sungwook Kim, Jinsu Kim, Dongyoung Koo, Yuna Kim, Hyunsoo Yoon, Junbum Shin
Recommendation systems have become popular in our daily lives. It is well known that the more personal data users release, the better the quality of the recommendations. However, such services raise serious privacy concerns for users. In this paper, focusing on matrix factorization-based recommendation systems, we propose the first privacy-preserving matrix factorization using fully homomorphic encryption. On input of encrypted users' ratings, our protocol performs matrix factorization over the encrypted data and returns encrypted outputs, so that the recommendation system learns nothing about rating values and the resulting user/item profiles. It provides a way to obfuscate the number and list of items a user rated without harming the accuracy of recommendation, and it additionally protects the recommender's tuning parameters for business benefit and allows the recommender to optimize the parameters for quality of service. To overcome the performance degradation caused by the use of fully homomorphic encryption, we introduce a novel data structure to perform computations over encrypted vectors, which are essential operations for matrix factorization, partially through secure 2-party computation. With this data structure, the proposed protocol requires dozens of times less computation than previous works. Our experiments on a personal computer with a 3.4 GHz 6-core CPU and 64 GB RAM show that the proposed protocol runs in 1.5 minutes per iteration. It is more efficient than the work of Nikolaenko et al. presented at CCS 2013, which took about 170 minutes on two servers with 1.9 GHz 16-core CPUs and 128 GB RAM.
Last updated:  2016-03-02
Side-Channel Analysis of Weierstrass and Koblitz Curve ECDSA on Android Smartphones
Pierre Belgarric, Pierre-Alain Fouque, Gilles Macario-Rat, Mehdi Tibouchi
In this paper, we study the side-channel resistance of the implementation of the ECDSA signature scheme in Android's standard cryptographic library. We show that, for elliptic curves over prime fields, one can recover the secret key very efficiently on smartphones using electromagnetic side channels and well-known lattice reduction techniques. We experimentally show that elliptic curve operations (doublings and additions) can be distinguished on a multi-core CPU clocked above one gigahertz. We then extend the standard lattice attack on ECDSA over prime fields to binary Koblitz curves. This is the first time that such an attack has been described on Koblitz curves. These curves, which are also available in Bouncy Castle, allow very efficient implementations using the Frobenius operation. This leads to signal processing challenges, since the number of available points is reduced. We investigate the practical side channel, showing the concrete vulnerability of such implementations. In comparison to previous works targeting smartphones, the attacks presented in this paper benefit from discernible architectural features, like specific instruction computations or memory accesses.
Last updated:  2016-08-19
ECDSA Key Extraction from Mobile Devices via Nonintrusive Physical Side Channels
Daniel Genkin, Lev Pachmanov, Itamar Pipman, Eran Tromer, Yuval Yarom
We show that elliptic-curve cryptography implementations on mobile devices are vulnerable to electromagnetic and power side-channel attacks. We demonstrate full extraction of ECDSA secret signing keys from OpenSSL and CoreBitcoin running on iOS devices, and partial key leakage from OpenSSL running on Android and from iOS's CommonCrypto. These non-intrusive attacks use a simple magnetic probe placed in proximity to the device, or a power probe on the phone's USB cable. They use a bandwidth of merely a few hundred kHz, and can be performed cheaply using an audio card and an improvised magnetic probe.
Last updated:  2016-04-15
Key Compression for Isogeny-Based Cryptosystems
Reza Azarderakhsh, David Jao, Kassem Kalach, Brian Koziel, Christopher Leonardi
We present a method for key compression in quantum-resistant isogeny-based cryptosystems, which allows a reduction in the size and transmission costs of per-party public information by a factor of two, with no effect on security. We achieve this reduction by associating a canonical choice of elliptic curve to each $j$-invariant, and representing elements on the curve as linear combinations with respect to a canonical choice of basis. This method of compressing public information can be applied to numerous isogeny-based protocols, such as key exchange, zero-knowledge identification, and public-key encryption. We implemented the key exchange with compression and decompression in C on personal computer and ARM platforms and provide timing results, showing the computational cost of key compression and decompression at various security levels. Our results show that isogeny-based cryptosystems achieve by far the smallest possible key sizes among all existing families of post-quantum cryptosystems at practical security levels; e.g. 3073-bit public keys at the quantum 128-bit security level, comparable to (non-quantum) RSA key sizes.
Last updated:  2017-03-19
On a decentralized trustless pseudo-random number generation algorithm
Serguei Popov
We construct an algorithm that permits a large group of individuals to reach consensus on a random number, without having to rely on any third parties. The algorithm works with high probability if less than 50% of the parties collude. We also describe some modifications and generalizations of the algorithm.
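The basic trustless pattern behind such algorithms is commit-then-reveal. The Python sketch below shows only this textbook building block, not Popov's algorithm itself, which additionally has to handle parties who collude or refuse to reveal.

    import hashlib, secrets

    def commit(value: bytes, nonce: bytes) -> bytes:
        return hashlib.sha256(nonce + value).digest()

    # Phase 1: each party publishes a commitment to a private random value.
    parties = []
    for _ in range(5):
        v, r = secrets.token_bytes(16), secrets.token_bytes(16)
        parties.append((v, r, commit(v, r)))

    # Phase 2: everyone reveals (v, r); the others verify the openings.
    assert all(commit(v, r) == c for v, r, c in parties)

    # The agreed number is the XOR of all contributions: it is uniform as
    # long as at least one contributor is honest.
    result = 0
    for v, _, _ in parties:
        result ^= int.from_bytes(v, 'big')
    print(hex(result))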
Last updated:  2016-03-02
Process Table Covert Channels: Exploitation and Countermeasures
Jean-Michel Cioranesco, Houda Ferradi, Rémi Géraud, David Naccache
How does one securely run untrusted software? A typical answer is to try to isolate the actual effects this software might have. Such countermeasures can take the form of memory segmentation, sandboxing or virtualisation. Besides controlling the potential damage this software might do, such methods try to prevent programs from peering into other running programs' operation and memory. But programs, no matter how many layers of indirection are in place, are really being run, and so they consume resources. Should this resource usage be precisely monitored, malicious programs might be able to communicate in spite of software protections. We demonstrate the existence of such a covert channel bypassing isolation techniques and IPC policies. This covert channel works on all major consumer OSes (Windows, Linux, macOS) and relies on exploitation of the process table. We measure the bandwidth of this channel and suggest countermeasures.
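To make the idea concrete, here is a hypothetical, Linux-only illustration of a process-table covert channel; the encoding is ours for illustration and is not necessarily the exact channel constructed in the paper. Sender and receiver are meant to run as two unrelated, mutually isolated processes.

    import os, subprocess, sys, time

    SLOT, BURST, THRESH = 0.5, 15, 8   # illustrative tuning, not the paper's

    def process_count():
        # the process table is globally observable, here via /proc
        return sum(1 for e in os.listdir('/proc') if e.isdigit())

    def send(bits):
        # '1' = spawn a burst of short-lived processes, '0' = stay quiet
        for b in bits:
            kids = []
            if b:
                kids = [subprocess.Popen([sys.executable, '-c',
                                          'import time; time.sleep(%f)' % SLOT])
                        for _ in range(BURST)]
            time.sleep(SLOT)
            for k in kids:
                k.wait()

    def receive(nbits):
        # sample the process count once per slot and threshold it
        base = process_count()
        bits = []
        for _ in range(nbits):
            time.sleep(SLOT / 2)       # sample mid-slot
            bits.append(1 if process_count() - base > THRESH else 0)
            time.sleep(SLOT / 2)
        return bits

Even this naive encoding transfers a couple of bits per second, and it uses no IPC mechanism that a sandbox policy could block.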
Last updated:  2020-03-20
On Statistically Secure Obfuscation with Approximate Correctness
Zvika Brakerski, Chris Brzuska, Nils Fleischhacker
Goldwasser and Rothblum (TCC '07) prove that statistical indistinguishability obfuscation (iO) cannot exist if the obfuscator must maintain perfect correctness (under a widely believed complexity theoretic assumption: $\mathcal{NP} \not\subseteq \mathcal{SZK}\subseteq\mathcal{AM}\cap\mathbf{co}\mathcal{AM}$). However, for many applications of iO, such as constructing public-key encryption from one-way functions (one of the main open problems in theoretical cryptography), approximate correctness is sufficient. It had been unknown thus far whether statistical approximate iO (saiO) can exist. We show that saiO does not exist, even for a minimal correctness requirement, if $\mathcal{NP} \not\subseteq \mathcal{AM}\cap\mathbf{co}\mathcal{AM}$ and one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a "baseline" for the obfuscated program. We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small fraction of parameters that still allows building public-key encryption from one-way functions and thus deserves further investigation.
Last updated:  2016-03-01
A New Birthday-Type Algorithm for Attacking the Fresh Re-Keying Countermeasure
Qian Guo, Thomas Johansson
The fresh re-keying scheme is a countermeasure designed to protect low-cost devices against side-channel attacks. In this paper, we present a new birthday-type attack based on a refined reduction to Ring-LPN with a reducible polynomial. Compared with previous research, our algorithm significantly reduces the time complexity in the 128-bit leakage model: with an SNR equal to 8 and at most $2^{20}$ traces, for instance, the key can be recovered using $2^{41.99}$ bit-operations.
Last updated:  2016-03-01
CacheBleed: A Timing Attack on OpenSSL Constant Time RSA
Yuval Yarom, Daniel Genkin, Nadia Heninger
The scatter-gather technique is a commonly-implemented approach to prevent cache-based timing attacks. In this paper we show that scatter-gather is not constant-time. We implement a cache timing attack against the scatter-gather implementation used in the modular exponentiation routine in OpenSSL version 1.0.2f. Our attack exploits cache-bank conflicts on the Sandy Bridge microarchitecture. We have tested the attack on an Intel Xeon E5-2430 processor. For 4096-bit RSA our attack can fully recover the private key after observing 16,000 decryptions.
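A simplified model of the scatter-gather layout (our illustration, not OpenSSL's actual code) makes the weakness visible: interleaving the multipliers equalizes which cache lines are touched, but the secret index still selects the offset within each line, i.e. the cache bank.

    K, L, LINE = 32, 64, 64       # multipliers, bytes per multiplier, line size

    def scatter(multipliers):
        # byte j of multiplier i is stored at offset j*K + i
        table = bytearray(K * L)
        for i, m in enumerate(multipliers):
            for j, byte in enumerate(m):
                table[j * K + i] = byte
        return table

    def gather(table, i):
        # every read touches line (j*K + i) // LINE, which is the same for
        # all i (since K divides LINE): line-granular attacks see nothing,
        # but the offset within each line still depends on the secret i
        return bytes(table[j * K + i] for j in range(L))

    ms = [bytes([i]) * L for i in range(K)]
    t = scatter(ms)
    assert gather(t, 7) == ms[7]
    lines = lambda i: {(j * K + i) // LINE for j in range(L)}
    assert lines(7) == lines(21)   # identical line footprint for any index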
Last updated:  2016-11-23
Still Wrong Use of Pairings in Cryptography
Mehmet Sabır Kiraz, Osmanbey Uzunkol
Several pairing-based cryptographic protocols have recently been proposed with a wide variety of novel applications, including ones in emerging technologies like cloud computing, the internet of things (IoT), e-health systems and wearable technologies. There has, however, been a wide range of incorrect uses of these primitives. The paper of Galbraith, Paterson, and Smart (2006) pointed out most of the issues related to the incorrect use of pairing-based cryptography. However, we noticed that some recently proposed applications still do not use these primitives correctly. This leads to unrealizable, insecure or overly inefficient designs of pairing-based protocols. We observed that one reason is a lack of awareness of recent advancements in solving the discrete logarithm problem in some groups. The main purpose of this article is to give understandable, informative, and up-to-date criteria for the correct use of pairing-based cryptography. We thereby deliberately avoid most of the technical details and instead give special emphasis to the importance of the correct use of bilinear maps in realizing secure cryptographic protocols. We list a collection of recent papers with wrong security assumptions or realizability/efficiency issues. Finally, we give a compact and up-to-date recipe for the correct use of pairings.
Last updated:  2016-02-29
Time-Memory Trade-Off for Lattice Enumeration in a Ball
Paul Kirchner, Pierre-Alain Fouque
Enumeration algorithms in lattices are a well-known technique for solving the Short Vector Problem (SVP) and improving blockwise lattice reduction algorithms. Here, we propose a new algorithm for enumerating lattice points in a ball of radius $1.156\lambda_1(\Lambda)$ in time $3^{n+o(n)}$, where $\lambda_1(\Lambda)$ is the length of the shortest vector in the lattice $\Lambda$. We then show how this method can be used for solving SVP and the Closest Vector Problem (CVP) with approximation factor $\gamma=1.993$ in an $n$-dimensional lattice in time $3^{n+o(n)}$. Previous enumeration algorithms take super-exponential running time with polynomial memory. For instance, Kannan's algorithm takes time $n^{n/(2e)+o(n)}$; ours, however, also requires exponential memory, and we propose different time/memory tradeoffs. Recently, Aggarwal, Dadush, Regev and Stephens-Davidowitz described a randomized algorithm with running time $2^{n+o(n)}$ for solving SVP at STOC'15, and for the approximation versions of SVP and CVP at FOCS'15. However, it is not possible to use a time/memory tradeoff for their algorithms. Their main result presents an algorithm that samples an exponential number of random vectors from a Discrete Gaussian distribution with width below the smoothing parameter of the lattice. Our algorithm is related to the hill climbing of Liu, Lyubashevsky and Micciancio from RANDOM'06 for solving the bounded distance decoding problem with preprocessing. It was later improved by Dadush, Regev and Stephens-Davidowitz for solving the CVP with preprocessing problem at CCC'14. However, the latter algorithm only looks for one lattice vector, while we show that we can enumerate all lattice vectors in a ball. Finally, in these papers a preprocessing step is used to obtain a succinct representation of some lattice function. We show as a first step that we can obtain the same information using an exponential-time algorithm based on a collision search algorithm similar to the reduction of Micciancio and Peikert for the SIS problem with small modulus at CRYPTO'13.
Last updated:  2019-10-16
Post-Compromise Security
Katriel Cohn-Gordon, Cas Cremers, Luke Garratt
In this work we study communication with a party whose secrets have already been compromised. At first sight, it may seem impossible to provide any type of security in this scenario. However, under some conditions, practically relevant guarantees can still be achieved. We call such guarantees ``post-compromise security''. We provide the first informal and formal definitions for post-compromise security, and show that it can be achieved in several scenarios. At a technical level, we instantiate our informal definitions in the setting of authenticated key exchange (AKE) protocols, and develop two new strong security models for two different threat models. We show that both of these security models can be satisfied, by proposing two concrete protocol constructions and proving they are secure in the models. Our work leads to crucial insights on how post-compromise security can (and cannot) be achieved, paving the way for applications in other domains.
Last updated:  2016-04-06
Algorithms on Ideal over Complex Multiplication order
Paul Kirchner
We show in this paper that the Gentry-Szydlo algorithm for cyclotomic orders, previously revisited by Lenstra and Silverberg, can be extended to complex-multiplication (CM) orders, and even to a more general structure. This algorithm allows one to test equality over the polarized ideal class group, and finds a generator of the polarized ideal in polynomial time. The algorithm also allows one to solve the norm equation over CM orders, and the recent reduction of principal ideals to the real suborder can likewise be performed in polynomial time. Furthermore, we can compute in polynomial time a unit of an order of any number field given a (not very precise) approximation of it. Our description of the Gentry-Szydlo algorithm differs from the original and from Lenstra and Silverberg's variant, and we hope the simplifications made will allow a deeper understanding. Finally, we show that the well-known speed-up of enumeration and sieve algorithms for ideal lattices over power-of-two cyclotomics can be generalized to any number field with many roots of unity.
Last updated:  2016-02-29
Nonce-based Kerberos is a Secure Delegated AKE Protocol
Jörg Schwenk
Kerberos is one of the most important cryptographic protocols, first because it is the basic authentication protocol in Microsoft's Active Directory and is shipped with every major operating system, and second because it served as a model for all Single-Sign-On protocols (e.g. SAML, OpenID, MS Cardspace, OpenID Connect). Its security has been confirmed with several Dolev-Yao style proofs, and attacks on certain versions of the protocol have been described. However, despite its importance, despite its longevity, and despite the wealth of Dolev-Yao-style security proofs, no reduction-based security proof has been published until now. This has two reasons: (1) All widely accepted formal models either deal with two-party protocols, or group key agreement protocols (where all entities have the same role), but not with 3-party protocols where each party has a different role. (2) Kerberos uses timestamps and nonces, and formal security models for timestamps are not well understood up to now. As a step towards a full security proof of Kerberos, we target problem (1) here: We propose a variant of the Kerberos protocol, where nonces are used instead of timestamps. This requires one additional protocol message, but enables a proof in the standard Bellare-Rogaway (BR) model. The key setup and the roles of the different parties are identical to the original Kerberos protocol. For our proof, we only require that the authenticated encryption and the message authentication code (MAC) schemes are secure. Under these assumptions we show that the probability that a client or server process oracle accepts maliciously, and the advantage of an adversary trying to distinguish a real Kerberos session key from a random value, are both negligible. One main idea in the proof is to model the Kerberos server as a public oracle, so that we do not have to consider the security of the client-Kerberos connection. This idea is only applicable to the communication pattern adopted by Kerberos, and not to other 3-party patterns (e.g. EAP protocols).
Last updated:  2016-03-18
Semantic Security and Key-Privacy With Random Split of St-Gen Codes
Danilo Gligoroski, Simona Samardjiska
Recently we have defined Staircase-Generator codes (St-Gen codes) and their variant with a random split of the generator matrix of the codes. One unique property of these codes is that they work with arbitrary error sets. In this paper we give a brief overview of St-Gen codes and the list decoding algorithm for their decoding. We also analyze the semantic security against chosen plaintext attack (IND-CPA) and key-privacy i.e. indistinguishability of public keys under chosen plaintext attack (IK-CPA) of the encryption scheme with random split of St-Gen codes. In a similar manner as it was done by Nojima et al., and later by Yamakawa et al., we show that padding the plaintext with a random bit-string provides IND-CPA and IK-CPA in the standard model. The difference with McEliece scheme is that with our scheme the length of the padded random string is significantly shorter.
Last updated:  2016-02-29
Practical backward unlinkable revocation in FIDO, German e-ID, Idemix and U-Prove
Eric R. Verheul
FIDO, German e-ID, Idemix and U-Prove constitute privacy-enhanced public-key infrastructures allowing users to authenticate in an anonymous way. This, however, hampers timely revocation in a privacy-friendly way. From a legal perspective, revocation typically should be effective within 24 hours after user reporting. It should also be backward unlinkable, i.e. user anonymity cannot be removed after revocation. We describe a new, generic revocation mechanism based on pairing-based encryption and apply it to supplement the systems mentioned. This allows for both flexible and privacy-friendly revocation. Protocol execution takes less than a quarter of a second on modern smartcards. An additional property is that usage after revocation is linkable, allowing users to identify fraudulent usage after revocation. Our technique is the first Verifier Local Revocation scheme with backward unlinkable revocation for the systems mentioned. It also allows for a setup resembling the well-known Online Certificate Status Protocol (OCSP), in which the service provider sends a pseudonym to a revocation provider that returns its status. As the information required for this is not secret, the status service can be distributed over many cloud services. In addition to the status service, our technique also supports the publication of a central revocation list.
Last updated:  2016-02-29
Fair mPSI and mPSI-CA: Efficient Constructions in Prime Order Groups with Security in the Standard Model against Malicious Adversary
Sumit Kumar Debnath, Ratna Dutta
In this paper, we propose a construction of fair and efficient mutual Private Set Intersection (mPSI) with linear communication and computation complexities, where the underlying group is of prime order. The main tools in our approach include: (i) ElGamal and Distributed ElGamal Cryptosystems as multiplicatively Homomorphic encryptions, (ii) Cramer-Shoup Cryptosystem as Verifiable encryption. Our mPSI is secure in standard model against malicious parties under Decisional Diffie-Hellman (DDH) assumption. Fairness is achieved using an off-line semi-trusted arbiter. Further, we extend our mPSI to mutual Private Set Intersection Cardinality (mPSI-CA) retaining all the security properties of mPSI. More interestingly, our mPSI-CA is the first fair mPSI-CA with linear complexity.
Last updated:  2016-06-01
Algorithms for the Approximate Common Divisor Problem
Steven D. Galbraith, Shishay W. Gebregiyorgis, Sean Murphy
The security of several homomorphic encryption schemes depends on the hardness of the Approximate Common Divisor (ACD) problem. In this paper we review and compare existing algorithms to solve the ACD problem using lattices. In particular we consider the simultaneous Diophantine approximation method, the orthogonal lattice method, and a method based on multivariate polynomials and Coppersmith's algorithm that was studied in detail by Cohn and Heninger. One of our main goals is to compare the multivariate polynomial approach with other methods. We find that the multivariate polynomial approach is not better than the orthogonal lattice algorithm for practical cryptanalysis. Another contribution is to consider a sample-amplification technique for ACD samples, and to consider a pre-processing algorithm similar to the Blum-Kalai-Wasserman (BKW) algorithm for learning parity with noise. We explain why, unlike in other settings, the BKW algorithm does not give an improvement over the lattice algorithms. This is the full version of a paper published at ANTS-XII in 2016.
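The simultaneous Diophantine approximation method reviewed in the paper can be demonstrated in its smallest case with exact integer arithmetic. The toy parameters below are far too small for security and are chosen so that a 2-dimensional lattice, reduced with Lagrange-Gauss instead of LLL, already recovers the secret.

    import random

    rho, eta, gamma = 8, 64, 100          # toy noise / secret / sample sizes
    rng = random.Random(2016)
    p = rng.getrandbits(eta) | 1 | (1 << (eta - 1))      # the ACD secret

    def sample():
        q = rng.getrandbits(gamma - eta) | (1 << (gamma - eta - 1))
        return p * q + rng.getrandbits(rho)              # x = p*q + r

    x0, x1 = sample(), sample()

    def gauss_reduce(u, v):
        # Lagrange-Gauss reduction: shortest nonzero vector of the rank-2
        # lattice spanned by u and v, in exact integer arithmetic
        while True:
            if u[0]**2 + u[1]**2 > v[0]**2 + v[1]**2:
                u, v = v, u
            num, den = u[0]*v[0] + u[1]*v[1], u[0]**2 + u[1]**2
            m = (2*num + den) // (2*den)   # round(num/den) without floats
            if m == 0:
                return u
            v = (v[0] - m*u[0], v[1] - m*u[1])

    # SDA lattice basis: (2^rho, x1) and (0, -x0). The unusually short
    # vector (q0*2^rho, q0*r1 - q1*r0) reveals q0 = x0 div p, up to
    # gcd(q0, q1).
    u = gauss_reduce((2**rho, x1), (0, -x0))
    q0 = abs(u[0]) >> rho
    assert (x0 // q0) % p == 0    # p recovered up to that small gcd factor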
Last updated:  2016-02-29
An Improvement of Both Security and Reliability for Keccak Implementations on Smart Card
Pei Luo, Liwei Zhang, Yunsi Fei, A. Adam Ding
As the new SHA-3 standard, the security and reliability of Keccak have attracted a lot of attention. Previous works have already shown that both software and hardware implementations of Keccak have strong side-channel power (electromagnetic) leakages, and that these leakages can easily be used by attackers to recover secret key bits. Meanwhile, Keccak is vulnerable to random errors and injected faults, which cause errors in the computation results. In this paper, we introduce a scheme based on the round rotation invariance property of Keccak to reduce its side-channel leakages while improving its reliability. The proposed scheme is resource-friendly. Side-channel analysis results show that this method can efficiently reduce the side-channel leakages of Keccak implementations. Meanwhile, fault injection simulation results show that the proposed scheme can effectively improve the reliability of Keccak implementations, with error coverage of almost 100%.
Last updated:  2016-09-19
3-Message Zero Knowledge Against Human Ignorance
Nir Bitansky, Zvika Brakerski, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan
The notion of Zero Knowledge has driven the field of cryptography since its conception over thirty years ago. It is well established that two-message zero-knowledge protocols for NP do not exist, and that four-message zero-knowledge arguments exist under the minimal assumption of one-way functions. Resolving the precise round complexity of zero-knowledge has been an outstanding open problem for far too long. In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016-B, and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on {\em key-less} collision-resistant hash functions secure against uniform adversaries, as well as other standard primitives. More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway's ``human-ignorance" approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
Last updated:  2016-02-29
Low Linear Complexity Estimates for Coordinate Sequences of Linear Recurrences of Maximal Period over Galois Ring
Vadim N. Tsypyschev
In this work we provide low estimates for the rank (linear complexity) of coordinate sequences of linear recurrent sequences (LRS) of maximal period (MP) over the Galois ring $R=GR(p^n,r)$, $p\ge 5$, $r\ge2$, for coordinate numbers $s$ such that $s=kr+2$, $k\in \mathbb{N}_0$.
Last updated:  2016-02-29
Randomness Complexity of Private Circuits for Multiplication
Sonia Belaïd, Fabrice Benhamouda, Alain Passelègue, Emmanuel Prouff, Adrian Thillard, Damien Vergnaud
Many cryptographic algorithms are vulnerable to side-channel analysis, and several leakage models have been introduced to better understand these flaws. In 2003, Ishai, Sahai and Wagner introduced the $d$-probing security model, in which an attacker can observe at most $d$ intermediate values during a computation. They also proposed an algorithm that securely performs the multiplication of 2 bits in this model, using only $d(d+1)/2$ random bits to protect the computation. We study the randomness complexity of multiplication algorithms secure in the $d$-probing model. We propose several contributions: we provide new theoretical characterizations and constructions, new practical constructions, and a new efficient algorithmic tool to analyze the security of such schemes. We start with a theoretical treatment of the subject: we propose an algebraic model for multiplication algorithms and exhibit an algebraic characterization of security in the $d$-probing model. Using this characterization, we prove a linear (in $d$) lower bound and a quasi-linear (non-constructive) upper bound for this randomness cost. Then, we construct a new generic algorithm to perform secure multiplication in the $d$-probing model that only uses $d + d^2/4$ random bits. From a practical point of view, we consider the important cases $d \leq 4$ that are actually used in current real-life implementations, and we build algorithms with a randomness complexity matching our theoretical lower bound for these small-order cases. Finally, still using our algebraic characterization, we provide a new dedicated verification tool, based on information set decoding, which aims at finding attacks on algorithms for fixed order $d$ at a very low computational cost.
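For reference, the Ishai-Sahai-Wagner multiplication discussed above is short enough to sketch. The Python below shares each bit into $d+1$ shares and draws exactly one fresh random bit per pair $i<j$, i.e. $d(d+1)/2$ bits, matching the count in the abstract.

    import secrets

    def share(x, n):
        # split bit x into n random shares whose XOR equals x
        s = [secrets.randbelow(2) for _ in range(n - 1)]
        parity = 0
        for b in s:
            parity ^= b
        return s + [x ^ parity]

    def isw_and(a, b):
        # ISW multiplication of two (d+1)-share sharings; uses d(d+1)/2
        # fresh random bits r[i][j], one per pair i < j
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbelow(2)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        c = []
        for i in range(n):
            ci = a[i] & b[i]
            for j in range(n):
                if j != i:
                    ci ^= r[i][j]
            c.append(ci)
        return c

    d = 3                                 # probing order
    for x in (0, 1):
        for y in (0, 1):
            out = isw_and(share(x, d + 1), share(y, d + 1))
            parity = 0
            for b in out:
                parity ^= b
            assert parity == (x & y)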
Last updated:  2016-02-29
Hopes, Fears and Software Obfuscation: A Survey
Boaz Barak
I survey some of the recent progress on software obfuscation spurred by the exciting paper of Garg, Gentry, Halevi, Raykova, Sahai and Waters (FOCS 2013). This is a preprint version of a review article appearing in the Communications of the ACM. That article was written for a general computing audience, and is not up to date on the latest research in this fast-moving field, since it was mostly written in 2014. Nevertheless, I thought it might still be of interest to cryptographers.
Last updated:  2017-07-04
Automatic Differential Analysis of ARX Block Ciphers with Application to SPECK and LEA
Ling Song, Zhangjie Huang, Qianqian Yang
In this paper, we focus on the automatic differential cryptanalysis of ARX block ciphers with respect to XOR differences, and extend Mouha et al.'s framework for finding differential characteristics by adding a new method to construct long characteristics from short ones. The new method substantially reduces the search time and makes it possible to search for differential characteristics of ARX block ciphers with large word sizes such as $n=48,64$. Moreover, we take the differential effect into consideration and find that the differential probability increases by a factor of $1.4\sim 8$ for SPECK and about $2^{10}$ for LEA when multiple characteristics are taken into account. The efficiency of our method is demonstrated by improved attacks on SPECK and LEA, which cover 1, 1, 4 and 6 more rounds of SPECK48, SPECK64, SPECK96 and SPECK128, respectively, and 2 more rounds of LEA, than previous works.
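For background, the probability that an XOR difference propagates through modular addition (the only nonlinear ARX operation) has a closed form due to Lipmaa and Moriai (FSE 2001). The sketch below implements it with a brute-force sanity check; the word size n = 4 is chosen only to keep the check fast:

    def xdp_add(alpha, beta, gamma, n):
        # Exact XOR-differential probability of addition mod 2^n
        # (Lipmaa-Moriai): Pr[((x^alpha)+(y^beta)) ^ (x+y) == gamma].
        mask = (1 << n) - 1
        def eq(x, y, z):                 # bit i set iff x_i == y_i == z_i
            return (~x ^ y) & (~x ^ z) & mask
        if eq(alpha << 1, beta << 1, gamma << 1) \
                & (alpha ^ beta ^ gamma ^ (beta << 1)) & mask:
            return 0.0                   # impossible differential
        w = bin(~eq(alpha, beta, gamma) & (mask >> 1)).count("1")
        return 2.0 ** -w

    # Brute-force check over all differentials for n = 4:
    n = 4
    for a in range(16):
        for b in range(16):
            for g in range(16):
                cnt = sum((((x ^ a) + (y ^ b)) % 16) ^ ((x + y) % 16) == g
                          for x in range(16) for y in range(16))
                assert abs(cnt / 256 - xdp_add(a, b, g, n)) < 1e-12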
Last updated:  2016-11-23
Constant-Round Asynchronous Multi-Party Computation Based on One-Way Functions
Sandro Coretti, Juan Garay, Martin Hirt, Vassilis Zikas
Secure multi-party computation (MPC) allows several mutually distrustful parties to securely compute a joint function of their inputs and exists in two main variants: in *synchronous* MPC parties are connected by a synchronous network with a global clock, and protocols proceed in *rounds* with strong delivery guarantees, whereas *asynchronous* MPC protocols can be deployed even in networks that deliver messages in an arbitrary order and impose arbitrary delays on them. The two models---synchronous and asynchronous---have to a large extent developed in parallel with results on both feasibility and asymptotic efficiency improvements in either track. The most notable gap in this parallel development is with respect to round complexity. In particular, although under standard assumptions on a synchronous communication network (availability of secure channels and broadcast), synchronous MPC protocols with (exact) constant rounds have been constructed, to the best of our knowledge, thus far no constant-round asynchronous MPC protocols are known, with the best protocols requiring a number of rounds that is linear in the multiplicative depth of the arithmetic circuit computing the desired function. In this work we close this gap by providing the first constant-round asynchronous MPC protocol. Our protocol is optimally resilient (i.e., it tolerates up to $t<n/3$ corrupted parties), adaptively secure, and makes black-box use of a pseudo-random function. It works under the standard network assumptions for protocols in the asynchronous MPC setting, namely, a complete network of point-to-point (secure) asynchronous channels with eventual delivery and asynchronous Byzantine agreement (aka consensus). We provide formal definitions of these primitives and a proof of security in the Universal Composability framework.
Last updated:  2016-10-20
Fault analysis and weak key-IV attack on Sprout
Dibyendu Roy, Sourav Mukhopadhyay
Armknecht and Mikhalev proposed a new stream cipher `Sprout' based on the design specification of the stream cipher Grain-128a. Sprout has a shorter state size than the Grain family and uses a round key function, whose output is XOR'ed with the feedback bit of the cipher's NFSR. In this paper, we propose a new fault attack on Sprout: by injecting a single-bit fault after the key initialization phase at an arbitrary position of the NFSR, we recover bits of the secret key by observing the normal and faulty keystream bits at certain clockings of the cipher. We verify our result by implementing the attack for one particular case. We also show that Sprout generates the same states for several rounds of the key initialization phase for two different key-IV pairs, which shows that the key initialization has a very short period.
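The attack principle is easy to demonstrate at toy scale: flip one state bit and observe where the keystreams diverge. The 16-bit register, taps, and output function below are invented for illustration and bear no relation to Sprout's actual specification:

    import secrets

    N = 16   # toy register size

    def clock(state):
        # Invented feedback and output functions (linear taps + one AND each)
        fb = state[0] ^ state[3] ^ (state[5] & state[9]) ^ state[12]
        out = state[1] ^ (state[6] & state[13])
        return state[1:] + [fb], out

    def keystream(state, nbits):
        bits = []
        for _ in range(nbits):
            state, z = clock(state)
            bits.append(z)
        return bits

    state = [secrets.randbits(1) for _ in range(N)]
    pos = secrets.randbelow(N)            # fault position, unknown to attacker
    faulty = state[:]
    faulty[pos] ^= 1                      # single-bit fault injection

    diff = [a ^ b for a, b in zip(keystream(state, 48), keystream(faulty, 48))]
    first = diff.index(1) if 1 in diff else None
    print("fault at", pos, "-> keystreams first differ at clock", first)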
Last updated:  2016-03-24
Construction of Fully CCA-Secure Predicate Encryptions from Pair Encoding Schemes
Johannes Blömer, Gennadij Liske
This paper presents a new framework for constructing fully CCA-secure predicate encryption schemes from pair encoding schemes. Our construction is the first in the context of predicate encryption which uses the technique of well-formedness proofs known from public key encryption. The resulting constructions are simpler and more efficient compared to the schemes achieved using known generic transformations from CPA-secure to CCA-secure schemes. The reduction costs of our framework are comparable to the reduction costs of the underlying CPA-secure framework. We achieve this last result by applying the dual system encryption methodology in a novel way.
Last updated:  2016-02-25
Addressing the Algebraic Eraser Diffie--Hellman Over-the-Air Protocol
Derek Atkins, Dorian Goldfeld
The Algebraic Eraser Diffie-Hellman (AEDH) protocol, first introduced in 2005 as a key agreement and authentication protocol, has been proposed as a standard in ISO JTC-1/SC-31 (29167-20) to protect various communication protocols like RFID, NFC, or Bluetooth for devices associated with ISO-18000 and the Internet of Things. A recent paper by M.J.B. Robshaw and Simon R. Blackburn claims to recover sufficient data to impersonate a device or, with a bit more work, recover the private keys of a device, if an attacker uses the draft 29167-20 protocol and gains direct access to the resulting shared-secret computation. This paper shows that simply adding a hash or a Message Authentication Code (MAC) to the proposed authentication protocol overcomes the purported attacks. These simple, standard enhancements thwart attacks of this nature. As the 29167-20 draft is currently a work item under active development within the ISO process, all these attacks would normally have been addressed in the working group, and no AEDH protocol in the public domain currently transmits the computed shared secret. Therefore, contrary to the conclusion of Robshaw and Blackburn, a simple addition to the draft protocol, similar in nature to the protections in other protocols like TLS, makes the AEDH protocol perfectly suitable for authentication of passive tags and other low-power, constrained devices.
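The proposed fix is straightforward to sketch: instead of ever exposing the raw shared secret, each side sends a MAC over the session transcript keyed by (a hash of) that secret. The key-derivation label and transcript layout below are illustrative assumptions, not the draft's wire format:

    import hashlib, hmac, secrets

    def confirmation_tag(shared_secret: bytes, transcript: bytes, role: bytes) -> bytes:
        # Derive a MAC key from the shared secret, then authenticate the
        # transcript; the secret itself is never transmitted.
        mac_key = hashlib.sha256(b"mac-key" + shared_secret).digest()
        return hmac.new(mac_key, role + transcript, hashlib.sha256).digest()

    # Both sides hold the same shared_secret after the key agreement.
    shared_secret = secrets.token_bytes(32)
    transcript = b"reader-nonce|tag-nonce|public-keys"   # placeholder layout

    tag = confirmation_tag(shared_secret, transcript, b"tag-side")
    # The verifier recomputes the tag and compares in constant time.
    assert hmac.compare_digest(tag, confirmation_tag(shared_secret, transcript, b"tag-side"))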
Last updated:  2016-02-25
A Memory Encryption Engine Suitable for General Purpose Processors
Shay Gueron
Cryptographic protection of memory is an essential ingredient for any technology that allows a closed computing system to run software in a trustworthy manner and handle secrets, while its external memory is susceptible to eavesdropping and tampering. An example of such a technology is Intel's emerging Software Guard Extensions technology (Intel SGX) that appears in the latest processor generation, Architecture Codename Skylake. This technology operates under the assumption that the security perimeter includes only the internals of the CPU package, and in particular, leaves the DRAM untrusted. It is supported by an autonomous hardware unit called the Memory Encryption Engine (MEE), whose role is to protect the confidentiality, integrity, and freshness of the CPU-DRAM traffic over some memory range. To succeed in adding this unit to the microarchitecture of a general-purpose processor product, it must be designed under very strict engineering constraints. This requires a careful combination of cryptographic primitives operating over a customized integrity tree that mostly resides on the DRAM while relying only on a small internally stored root. The purpose of this paper is to explain how this hardware component of SGX works, and the rationale behind some of its design choices. To this end, we formalize the MEE threat model and security objectives, describe the MEE design, its cryptographic properties and security margins, and report some concrete performance results.
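As a rough illustration of the integrity-tree idea (plain hashes in a binary Merkle tree, rather than the MEE's actual counter tree with stateful MACs), the following sketch keeps only the root inside the trusted perimeter:

    import hashlib

    def build_tree(blocks):
        # Leaf level: hash of each protected memory block; internal levels
        # pairwise-hash upward. All levels may live in untrusted DRAM.
        level = [hashlib.sha256(b).digest() for b in blocks]
        tree = [level]
        while len(level) > 1:
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
            tree.append(level)
        return tree          # tree[-1][0] is the root, stored on-die

    def verify_read(block, tree, idx, trusted_root):
        # Recompute the path from the block to the root using siblings
        # fetched from untrusted storage; compare against the on-die root.
        h = hashlib.sha256(block).digest()
        for level in tree[:-1]:
            sibling = level[idx ^ 1]
            h = hashlib.sha256(h + sibling if idx % 2 == 0 else sibling + h).digest()
            idx //= 2
        return h == trusted_root

    blocks = [bytes([i]) * 64 for i in range(8)]     # 8 blocks (power of two)
    tree = build_tree(blocks)
    root = tree[-1][0]
    assert verify_read(blocks[5], tree, 5, root)
    assert not verify_read(b"\x00" * 64, tree, 5, root)   # tampering detected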
Last updated:  2016-06-24
White-Box Cryptography in the Gray Box - A Hardware Implementation and its Side Channels
Pascal Sasdrich, Amir Moradi, Tim Güneysu
Implementations of white-box cryptography aim to protect a secret key in a white-box environment, in which an adversary has full control over the execution process and the entire environment. Its fundamental principle is to map the cryptographic architecture, including the secret key, to a number of encoded tables that resist inspection and decomposition by an attacker. In a gray-box scenario, however, the property of hiding implementation details from the attacker could serve as a promising mitigation strategy against side-channel attacks (SCA). In this work, we present the first white-box implementation of AES on reconfigurable hardware, for which we evaluate this approach assuming a gray-box attacker. We show that - unfortunately - such an implementation does not provide sufficient protection against an SCA attacker. We continue our evaluation with a thorough analysis of the source of the observed leakage, and present additional results that can be used to build stronger white-box designs.
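The encoded-table principle can be illustrated at toy scale with a single 8-bit table wrapped in random input/output encodings; this is nothing like a full white-box AES, but it shows how the key disappears into the table:

    import secrets

    def random_bijection():
        # Random permutation of bytes plus its inverse (an "encoding").
        perm = list(range(256))
        for i in range(255, 0, -1):                  # Fisher-Yates shuffle
            j = secrets.randbelow(i + 1)
            perm[i], perm[j] = perm[j], perm[i]
        inv = [0] * 256
        for i, p in enumerate(perm):
            inv[p] = i
        return perm, inv

    SBOX = [(7 * x + 3) % 256 for x in range(256)]   # stand-in for a real S-box
    key_byte = 0x2A

    f, f_inv = random_bijection()                    # input encoding
    g, g_inv = random_bijection()                    # output encoding

    # Precomputed encoded table: T = g o SBOX o (xor key) o f^{-1}
    T = [g[SBOX[f_inv[u] ^ key_byte]] for u in range(256)]

    # Evaluating via T computes the keyed S-box without exposing key_byte;
    # in a real design the encodings cancel between adjacent tables.
    x = 0x5C
    assert g_inv[T[f[x]]] == SBOX[x ^ key_byte]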
Last updated:  2016-03-04
An Encryption Scheme based on Random Split of St-Gen Codes
Simona Samardjiska, Danilo Gligoroski
Staircase-Generator codes (St-Gen codes) have recently been introduced in the design of code-based public key schemes and of steganographic matrix embedding schemes. In this paper we propose a method for randomly splitting St-Gen codes and use it to design a new code-based public key encryption scheme. The scheme uses the known list decoding method for St-Gen codes, but introduces a novelty in the creation of the public and private keys. We modify the classical approach for hiding the structure of the generator matrix by introducing a technique for splitting it into random parts. This approach counters the weaknesses found in previous constructions of public key schemes using St-Gen codes. Our initial software implementation shows that encryption with randomly split St-Gen codes is slower than with the original St-Gen codes by a factor linear in the number of random splits, while the decryption complexity remains the same.
Last updated:  2016-02-25
From Stateful Hardware to Resettable Hardware Using Symmetric Assumptions
Nico Doettling, Daniel Kraschewski, Joern Mueller-Quade, Tobias Nilges
Universally composable multi-party computation is impossible without setup assumptions. Motivated by the ubiquitous use of secure hardware in many real world security applications, Katz (EUROCRYPT 2007) proposed a model of tamper-proof hardware as a UC-setup assumption. An important aspect of this model is whether the hardware token is allowed to hold a state or not. Real world examples of tamper-proof hardware that can hold a state are expensive hardware security modules commonly used in mainframes. Stateless, or resettable, hardware tokens model cheaper devices such as smartcards, where an adversarial user can cut off the power supply, thus resetting the card's internal state. A natural question is how the stateful and the resettable hardware model compare in their cryptographic power, given that either the receiver or the sender of the token (and thus the token itself) might be malicious. In this work we show that any UC-functionality that can be implemented by a protocol using a single untrusted stateful hardware token can likewise be implemented using a single untrusted resettable hardware token, assuming only the existence of one-way functions. We present two compilers that transform UC-secure protocols in the stateful hardware model into UC-secure protocols in the resettable hardware model. The first compiler can be proven secure assuming merely the existence of one-way functions. However, it (necessarily) makes use of computationally rather expensive non-black-box techniques. We provide an alternative second compiler that replaces the expensive non-black-box component of the first compiler by a few additional seed OTs. While this second compiler introduces the seed OTs as additional setup assumptions, it is computationally very efficient.
Last updated:  2016-03-01
An Alternative View of the Graph-Induced Multilinear Maps
Yilei Chen
In this paper, we view multilinear maps through the lens of ``homomorphic obfuscation''. Specifically, we show how to homomorphically obfuscate the kernel-test and affine subspace-test functionalities of high-dimensional matrices. Namely, the evaluator is able to perform additions and multiplications over the obfuscated matrices, and test subspace memberships on the result. The homomorphic operations are constrained by the prescribed data structure, e.g. a tree or a graph, where the matrices are stored. The security properties of all the constructions are based on the hardness of the learning with errors (LWE) problem. The technical heart is to ``control'' the ``chain reactions'' over a sequence of LWE instances. Viewed from a different angle, the homomorphic obfuscation scheme coincides with the graph-induced multilinear maps proposed by Gentry, Gorbunov and Halevi (GGH15). Our proof technique recognizes several ``safe modes'' of GGH15 that were not known before, including a simple special case: if the graph is acyclic and the matrices are sampled independently from binary or error distributions, then the encodings of the matrices are pseudorandom.
Last updated:  2016-10-24
The Honey Badger of BFT Protocols
Andrew Miller, Yu Xia, Kyle Croman, Elaine Shi, Dawn Song
The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network.
Last updated:  2017-07-13
Optimizing S-box Implementations for Several Criteria using SAT Solvers
Ko Stoffelen
We explore the feasibility of applying SAT solvers to optimizing implementations of small functions such as S-boxes for multiple optimization criteria, e.g., the number of nonlinear gates and the number of gates. We provide optimized implementations for the S-boxes used in Ascon, ICEPOLE, Joltik/Piccolo, Keccak/Ketje/Keyak, LAC, Minalpher, PRIMATEs, Pr\o st, and RECTANGLE, most of which are candidates in the second round of the CAESAR competition. We then suggest a new method to optimize for circuit depth and we make tooling publicly available to find efficient implementations for several criteria. Furthermore, we illustrate with the 5-bit S-box of PRIMATEs how multiple optimization criteria can be combined.
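To make the criteria concrete, the sketch below evaluates gate count, nonlinear gate count, and circuit depth for a straight-line gate program, the representation such solvers search over; the 4-bit circuit is invented for illustration, not one of the paper's optimized results:

    # (output, op, in1, in2) straight-line program for a toy 4-bit mapping
    GATES = [
        ("t0", "AND", "x0", "x1"),
        ("t1", "XOR", "t0", "x2"),
        ("t2", "OR",  "x1", "x3"),
        ("y0", "XOR", "t1", "t2"),
        ("y1", "XOR", "t0", "x3"),
    ]
    NONLINEAR = {"AND", "OR"}   # the gates that dominate masking/MPC cost

    gate_count = len(GATES)
    nonlinear_count = sum(op in NONLINEAR for _, op, _, _ in GATES)

    # Circuit depth = longest input-to-output path
    depth = {f"x{i}": 0 for i in range(4)}
    for out, _, a, b in GATES:
        depth[out] = 1 + max(depth[a], depth[b])

    print(gate_count, nonlinear_count, max(depth.values()))   # -> 5 2 3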
Last updated:  2016-02-24
Post-quantum Security of the CBC, CFB, OFB, CTR, and XTS Modes of Operation
Mayuresh Vivekanand Anand, Ehsan Ebrahimi Targhi, Gelo Noel Tabia, Dominique Unruh
We examine the IND-qCPA security (i.e., security against quantum adversaries making queries in superposition) of the widespread block cipher modes of operation CBC, CFB, OFB, CTR, and XTS. We show that OFB and CTR are secure assuming that the underlying block cipher is a standard secure PRF (a pseudorandom function secure under classical queries). We give counterexamples that show that CBC, CFB, and XTS are not secure under the same assumption. And we give proofs that CBC and CFB mode are secure if we assume a quantum-secure PRF (secure under queries in superposition).
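For reference, CTR mode has exactly the PRF-based stream structure the positive results rely on. The sketch below uses keyed BLAKE2b as a stand-in PRF, an illustrative choice rather than the paper's formal model:

    import hashlib, secrets

    BLOCK = 16  # bytes of keystream per counter value

    def prf(key: bytes, nonce: bytes, counter: int) -> bytes:
        # Keyed BLAKE2b as a stand-in PRF over (nonce, counter).
        msg = nonce + counter.to_bytes(8, "big")
        return hashlib.blake2b(msg, key=key, digest_size=BLOCK).digest()

    def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
        # CTR turns the PRF into a stream cipher; encryption and
        # decryption are the same XOR operation.
        out = bytearray()
        for i in range(0, len(data), BLOCK):
            ks = prf(key, nonce, i // BLOCK)
            out += bytes(d ^ k for d, k in zip(data[i:i + BLOCK], ks))
        return bytes(out)

    key, nonce = secrets.token_bytes(32), secrets.token_bytes(8)
    msg = b"counter mode from any PRF"
    assert ctr_xcrypt(key, nonce, ctr_xcrypt(key, nonce, msg)) == msg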
Last updated:  2016-08-24
Multi-Key FHE from LWE, Revisited
Chris Peikert, Sina Shiehian
Traditional fully homomorphic encryption (FHE) schemes only allow computation on data encrypted under a \emph{single} key. López-Alt, Tromer, and Vaikuntanathan (STOC 2012) proposed the notion of \emph{multi-key} FHE, which allows homomorphic computation on ciphertexts encrypted under different keys, and also gave a construction based on a (somewhat nonstandard) assumption related to NTRU. More recently, Clear and McGoldrick (CRYPTO 2015), followed by Mukherjee and Wichs (EUROCRYPT 2016), proposed a multi-key FHE that builds upon the LWE-based FHE of Gentry, Sahai, and Waters (CRYPTO 2013). However, unlike the original construction of López-Alt et al., these later LWE-based schemes have the somewhat undesirable property of being ``single-hop for keys'': all relevant keys must be known at the start of the homomorphic computation, and the output cannot be usefully combined with ciphertexts encrypted under other keys (unless an expensive ``bootstrapping'' step is performed). In this work we construct two multi-key FHE schemes, based on LWE assumptions, which are \emph{multi-hop for keys}: the output of a homomorphic computation on ciphertexts encrypted under a set of keys can be used in further homomorphic computation involving \emph{additional} keys, and so on. Moreover, incorporating ciphertexts associated with new keys is a relatively efficient ``native'' operation akin to homomorphic multiplication, and does not require bootstrapping (in contrast with all other LWE-based solutions). Our systems also have smaller ciphertexts than the previous LWE-based ones; in fact, ciphertexts in our second construction are simply GSW ciphertexts with no auxiliary data.
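For readers unfamiliar with GSW, the gadget matrix G and bit-decomposition operator G^{-1} at the core of such ciphertexts can be sketched in a few lines (toy parameters; numpy assumed):

    import numpy as np

    q = 2 ** 10                      # toy ciphertext modulus
    n = 4                            # toy dimension
    ell = 10                         # log2(q)

    # Gadget matrix G = I_n kron (1, 2, ..., 2^(ell-1)), shape n x (n*ell)
    g = 2 ** np.arange(ell, dtype=np.int64)
    G = np.kron(np.eye(n, dtype=np.int64), g)

    def g_inverse(C):
        # Bit-decompose an n x m matrix into a binary (n*ell) x m matrix
        # D such that G @ D == C (entrywise, for entries in [0, q)).
        C = np.asarray(C, dtype=np.int64)
        return np.vstack([(row >> k) & 1 for row in C for k in range(ell)])

    C = np.random.randint(0, q, size=(n, n), dtype=np.int64)
    assert np.array_equal(G @ g_inverse(C), C)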