All papers in 2015 (Page 2 of 1255 results)

Last updated:  2016-09-04
From Identification to Signatures, Tightly: A Framework and Generic Transforms
Mihir Bellare, Bertram Poettering, Douglas Stebila
This paper provides a framework to treat the problem of building signature schemes from identification schemes in a unified and systematic way. The outcomes are (1) alternatives to the Fiat-Shamir transform that, applied to trapdoor identification schemes, yield signature schemes with tight security reductions to standard assumptions, and (2) an understanding and characterization of existing transforms in the literature. One of our transforms has the added advantage of producing signatures shorter than those produced by the Fiat-Shamir transform. Reduction tightness is important because it allows the implemented scheme to use small parameters (thereby being as efficient as possible) while retaining provable security.
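As background for the transforms discussed above, the classical Fiat-Shamir transform derives the verifier's challenge by hashing the prover's commitment together with the message. The minimal Python sketch below applies it to the Schnorr identification scheme over a toy group; the group parameters and function names are our own illustrative choices, and this is not one of the paper's new transforms.

```python
import hashlib, secrets

# Toy safe-prime group (p = 2q + 1), far too small for real use.
p, q = 2039, 1019
g = 4  # a quadratic residue mod p, so it generates the order-q subgroup

def H(R, msg):
    """Fiat-Shamir challenge: hash of commitment and message, reduced mod q."""
    data = R.to_bytes(8, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)          # identification secret key
    return x, pow(g, x, p)            # (sk, pk)

def sign(x, msg):
    k = secrets.randbelow(q)          # prover's commitment randomness
    R = pow(g, k, p)                  # first message of the ID scheme
    c = H(R, msg)                     # challenge derived by hashing (Fiat-Shamir)
    s = (k + c * x) % q               # prover's response
    return c, s

def verify(y, msg, sig):
    c, s = sig
    R = (pow(g, s, p) * pow(y, -c, p)) % p   # recompute the commitment
    return H(R, msg) == c

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))
```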
Last updated:  2017-05-28
An Identity Based Encryption Scheme Resilient to RAM Scraper Like Malware Attacks
Dipanjan Das, Priyanka Bose, S. Sree Vivek, S. Sharmila Deva Selvi, C. Pandu Rangan
The modern software ecosystem is data-centric. Data exfiltration by memory-scraper malware is an emerging threat. In this paper, we set up a mathematical model capturing the threat such attacks pose to identity-based cryptosystems (IBE). Following the formalism, we demonstrate an attack on the popular Boneh-Franklin CCA2-secure IBE construction that compels us to revisit the view of CCA2 as the de-facto standard of security. We offer two constructions, one identity-based and one public-key (PKE) encryption scheme, capable of withstanding RAM scraper attacks. Our design assumes a hybrid system equipped with a bare-minimum 'Trusted Platform Module' (TPM) that can only perform group exponentiation. Building systems that implement our IBE/PKE protocols should be both feasible and efficient from a practical standpoint.
Last updated:  2016-03-11
Cross Processor Cache Attacks
Gorka Irazoqui, Thomas Eisenbarth, Berk Sunar
Multi-processor systems are becoming the de-facto standard across different computing domains, ranging from high-end multi-tenant cloud servers to low-power mobile platforms. The denser integration of CPUs creates an opportunity for great economic savings achieved by packing processes of multiple tenants or by bundling all kinds of tasks at various privilege levels to share the same platform. This level of sharing carries with it a serious risk of leaking sensitive information through the shared microarchitectural components. Microarchitectural attacks initially only exploited core-private resources, but were quickly generalized to resources shared within the CPU. We present the first fine-grain side channel attack that works across processors. The attack does not require CPU co-location of the attacker and the victim. The novelty of the proposed work is that, for the first time, the directory protocol of high-efficiency CPU interconnects is targeted. The directory protocol is common to all modern multi-CPU systems. Examples include AMD's HyperTransport, Intel's Quickpath, and ARM's AMBA Coherent Interconnect. The proposed attack does not rely on any specific characteristic of the cache hierarchy, e.g. inclusiveness. Note that inclusiveness was assumed in all earlier works. Furthermore, the viability of the proposed covert channel is demonstrated with two new attacks: by recovering a full AES key in OpenSSL, and a full ElGamal key in libgcrypt, within the range of seconds on a shared AMD Opteron server.
Last updated:  2015-11-30
NORX8 and NORX16: Authenticated Encryption for Low-End Systems
Jean-Philippe Aumasson, Philipp Jovanovic, Samuel Neves
This paper presents NORX8 and NORX16, the 8-bit and 16-bit versions of the authenticated cipher NORX, one of the CAESAR candidates. These new versions are better suited for low-end systems---such as ``internet of things'' devices---than the original 32-bit and 64-bit versions: whereas 32-bit NORX requires 64 bytes of RAM or cache memory, NORX8 and NORX16 require just 16 and 32 bytes, respectively. Both of the low-end variants were designed to retain the security properties of the initial NORX and be fast on small CPUs.
Last updated:  2015-11-30
Obliv-C: A Language for Extensible Data-Oblivious Computation
Samee Zahur, David Evans
Many techniques for secure or private execution depend on executing programs in a data-oblivious way, where the same instructions execute independent of the private inputs which are kept in encrypted form throughout the computation. Designers of such computations today must either put substantial effort into constructing a circuit representation of their algorithm, or use a high-level language and lose the opportunity to make important optimizations or experiment with protocol variations. We show how extensibility can be improved by judiciously exposing the nature of data-oblivious computation. We introduce a new language that allows application developers to program secure computations without being experts in cryptography, while enabling programmers to create abstractions such as oblivious RAM and width-limited integers, or even new protocols without needing to modify the compiler. This paper explains the key language features that safely enable such extensibility and describes the simple implementation approach we use to ensure security properties are preserved.
Last updated:  2016-11-18
Privacy-preserving Friendship-based Recommender Systems
Qiang Tang, Jun Wang
Privacy-preserving recommender systems have been an active research topic for many years. However, until today, it is still a challenge to design an efficient solution without involving a fully trusted third party or multiple semi-trusted third parties. The key obstacle is the large underlying user population (i.e. huge input size) in these systems. In this paper, we revisit the concept of friendship-based recommender systems, proposed by Jeckmans et al. and by Tang and Wang. These solutions are very promising because recommendations are computed based on inputs from a very small subset of the overall user population (precisely, a user's friends and some randomly chosen strangers). We first clarify the single prediction protocol and Top-n protocol by Tang and Wang, by correcting some flaws and improving the efficiency of the single prediction protocol. We then design a decentralized single prediction protocol by getting rid of the semi-honest service provider. In order to validate the designed protocols, we crawl Twitter and construct two datasets (FMT and 10-FMT) which are equipped with auxiliary friendship information. Based on 10-FMT and the MovieLens 100k dataset with simulated friendships, we show that even if our protocols use a very small subset of the datasets, their accuracy can still be equal to or better than some baseline algorithms. Based on these datasets, we further demonstrate that the outputs of our protocols leak only a very small amount of information about the inputs, and the leakage decreases as the input size increases. We finally show that the single prediction protocol is quite efficient but the Top-n protocol is not. However, we observe that the efficiency of the Top-n protocol can be dramatically improved if we slightly relax the desired security guarantee.
Last updated:  2016-03-29
Fully Leakage-Resilient Codes
Antonio Faonio, Jesper Buus Nielsen
Leakage-resilient codes (LRCs) are probabilistic encoding schemes that guarantee message hiding even under some bounded leakage on the codeword. We introduce the notion of \emph{fully} leakage-resilient codes (FLRCs), where the adversary can leak some $\lambda_0$ bits from the encoding process, i.e., the message and the randomness involved during the encoding process. In addition, the adversary can as usual leak from the codeword. We give a simulation-based definition requiring that the adversary's leakage from the encoding process and the codeword can be simulated given just $\lambda_0$ bits of leakage from the message. For $\lambda_0 = 0$ our new simulation-based notion is equivalent to the usual game-based definition. An FLRC would be interesting in its own right and would be useful in building other leakage-resilient primitives in a composable manner. We give a fairly general impossibility result for FLRCs in the popular split-state model, where the codeword is broken into independent parts and where the leakage occurs independently on the parts. We show that if the leakage is allowed to be any poly-time function of the secret and if collision-resistant hash functions exist, then there is no FLRC for the split-state model. The result holds only when the message length can be linear in the security parameter. However, we can extend the impossibility result to FLRCs for constant-length messages under assumptions related to differing-input obfuscation. These results show that it is highly unlikely that we can build FLRCs for the split-state model when the leakage can be any poly-time function of the secret state. We then give two feasibility results for weaker models. First, we show that for $\mathsf{NC}^0$-bounded leakage from the randomness and arbitrary poly-time leakage from the parts of the codeword, the inner-product construction proposed by Daví et al. (SCN'10) and subsequently improved by Dziembowski and Faust (ASIACRYPT'11) is an FLRC for the split-state model. Second, we provide a compiler from any LRC to an FLRC in the common reference string model for any fixed leakage family of small cardinality. In particular, this compiler applies to the split-state model but also to many other models.
Last updated:  2016-09-19
From Stateless to Stateful: Generic Authentication and Authenticated Encryption Constructions with Application to TLS
Colin Boyd, Britta Hale, Stig Frode Mjølsnes, Douglas Stebila
Authentication and authenticated encryption with associated data (AEAD) are applied in cryptographic protocols to provide message integrity. The definitions in the literature and the constructions used in practice all protect against forgeries, but offer varying levels of protection against replays, reordering, and drops. As a result of the lack of a systematic hierarchy of authentication and AEAD security notions, gaps have arisen in the literature, specifically in the provable security analysis of the Transport Layer Security (TLS) protocol. We present a hierarchy of authentication and AEAD security notions, interpolating between the lowest level of protection (against forgeries) and the highest level (against forgeries, replays, reordering, and drops). We show generically how to construct higher level schemes from a basic scheme and appropriate use of sequence numbers, and apply that to close the gap in the analysis of TLS record layer encryption.
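To illustrate the generic idea of lifting a basic scheme up the hierarchy via sequence numbers, here is a minimal Python sketch of a stateful MAC in which the receiver accepts only the next expected sequence number, thereby rejecting forgeries, replays, reordering and drops. It is an informal illustration, not the paper's formal constructions; the class names and parameters are our own.

```python
import hmac, hashlib

class StatefulAuthSender:
    """Wraps a basic MAC with a sequence number bound into the MAC input."""
    def __init__(self, key):
        self.key, self.seq = key, 0

    def auth(self, msg):
        tag = hmac.new(self.key, self.seq.to_bytes(8, "big") + msg,
                       hashlib.sha256).digest()
        self.seq += 1
        return msg, tag

class StatefulAuthReceiver:
    """Accepts only the next expected sequence number: rejects forgeries,
    replays, reordering and drops (the top of the hierarchy)."""
    def __init__(self, key):
        self.key, self.seq = key, 0

    def verify(self, msg, tag):
        expected = hmac.new(self.key, self.seq.to_bytes(8, "big") + msg,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        self.seq += 1
        return True

key = b"\x00" * 32
s, r = StatefulAuthSender(key), StatefulAuthReceiver(key)
m, t = s.auth(b"record 0")
assert r.verify(m, t)
assert not r.verify(m, t)   # replay of the same record is rejected
```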
Last updated:  2015-11-27
An Asymptotically Optimal Method for Converting Bit Encryption to Multi-Bit Encryption
Takahiro Matsuda, Goichiro Hanaoka
Myers and Shelat (FOCS 2009) showed how to convert a chosen ciphertext secure (CCA secure) PKE scheme that can encrypt only $1$-bit plaintexts into a CCA secure scheme that can encrypt arbitrarily long plaintexts (via the notion of key encapsulation mechanism (KEM) and hybrid encryption), and subsequent works improved efficiency and simplicity. In terms of efficiency, the best known construction of a CCA secure KEM from a CCA secure 1-bit PKE scheme, has the public key size $\Omega(k) \cdot |pk|$ and the ciphertext size $\Omega(k^2) \cdot |c|$, where $k$ is a security parameter, and $|pk|$ and $|c|$ denote the public key size and the ciphertext size of the underlying $1$-bit scheme, respectively. In this paper, we show a new CCA secure KEM based on a CCA secure $1$-bit PKE scheme which achieves the public key size $2 \cdot |pk|$ and the ciphertext size $(2k + o(k)) \cdot |c|$. These sizes are asymptotically optimal in the sense that they are (except for a constant factor) the same as those of the simplest ``bitwise-encrypt'' construction (seen as a KEM by encrypting a $k$-bit random session-key) that works for the chosen plaintext attack and non-adaptive chosen ciphertext attack settings. We achieve our main result by developing several new techniques and results on the ``double-layered'' construction (which builds a KEM from an inner PKE/KEM and an outer PKE scheme) by Myers and Shelat and on the notion of detectable PKE/KEM by Hohenberger, Lewko, and Waters (EUROCRYPT 2012).
Last updated:  2015-11-27
An Inverse-free Single-Keyed Tweakable Enciphering Scheme
Ritam Bhaumik, Mridul Nandi
In CRYPTO 2003, Halevi and Rogaway proposed CMC, a tweakable enciphering scheme (TES) based on a blockcipher. It requires two blockcipher keys and it is not inverse-free (i.e., the decryption algorithm uses the inverse (decryption) of the underlying blockcipher). We present here a new inverse-free, single-keyed TES. Our construction is a tweakable strong pseudorandom permutation (tsprp), i.e., it is secure against chosen-plaintext-ciphertext adversaries, assuming that the underlying blockcipher is a pseudorandom permutation (prp), i.e., secure against chosen-plaintext adversaries. In comparison, the sprp assumption on the blockcipher is required for the sprp security of CMC. Our scheme can be viewed as a mixture of type-1 and type-3 Feistel ciphers, and so we call it FMix, or the mixed-type Feistel cipher.
Last updated:  2016-07-11
Collusion Resistant Aggregation from Convertible Tags
Iraklis Leontiadis, Ming Li
The progress in communication and hardware technology increases the computational capabilities of personal devices. Aggregators, acting as third parties, are interested in learning a statistical function, such as the sum, over a census of data. Users are reluctant to reveal their information in cleartext, since it is treated as personal sensitive information. The paradoxical paradigm of preserving the privacy of individual data while allowing an untrusted third party to learn a function thereof in cleartext is only partially addressed by current privacy-preserving aggregation protocols. Current solutions either focus on an honest-but-curious Aggregator who is trusted to follow the rules of the protocol, or they model a malicious Aggregator with trustworthy users. In this paper we are the first to propose a protocol with fully malicious users who collude with a malicious Aggregator in order to forge a message of a trusted user. We introduce the new cryptographic primitive of a \emph{convertible tag}, which consists of a two-layer authentication tag. Users first tag their data with their secret key and then an untrusted \emph{Converter} converts the first-layer tags into second-layer tags. The final tags allow the Aggregator to produce a proof for the correctness of a computation over users' data. Security and privacy of the scheme are preserved against the \emph{Converter} and the Aggregator, under the \emph{Aggregator obliviousness} and \emph{Aggregate unforgeability} security definitions, augmented with malicious users. Our protocol is provably secure and experimental evaluations demonstrate its practicality.
Last updated:  2015-11-27
libgroupsig: An extensible C library for group signatures
Jesus Diaz, David Arroyo, Francisco B. Rodriguez
One major need in the context of Privacy Enhancing Technologies (PETs) is to bridge theoretical proposals and practical implementations. In order to foster easy deployment of PETs, the crux lies in proposing standard and well-defined programming interfaces. This need is not completely fulfilled in the case of group signatures. Group signatures are key cryptographic primitives for building privacy-respectful protocols and endorsing fair management of anonymity. To the best of our knowledge, there currently exists no abstract and unified programming interface definition for group signatures. In this work we address this matter and propose a programming interface definition enclosing the functionality of current group signature schemes. Furthermore, for the sake of abstraction and generalization, we have also endowed our interface with the means to include new group signature schemes. Finally, we have considered three well-known group signature schemes to implement an open source library of the interface using the C programming language. We have also performed an analysis of the software implementation with respect to different values of the key size and other parameters of the group signatures interface.
Last updated:  2015-11-27
Lattice Attacks on the DGHV Homomorphic Encryption Scheme
Abderrahmane Nitaj, Tajjeeddine Rachidi
In 2010, van Dijk, Gentry, Halevi, and Vaikuntanathan described the first fully homomorphic encryption scheme over the integers, called DGHV. The scheme is based on a set of $m$ public integers $c_i=pq_i+r_i$, $i=1,\cdots,m$, where the integers $p$, $q_i$ and $r_i$ are secret. In this paper, we describe two lattice-based attacks on DGHV. The first attack is applicable when $r_1=0$ and the public integers $c_i$ satisfy a linear equation $a_2c_2+\ldots+a_mc_m=a_1q_1$ for suitably small integers $a_i$, $i=2,\ldots,m$. The second attack works when the positive integers $q_i$ satisfy a linear equation $a_1q_1+\ldots+a_mq_m=0$ for suitably small integers $a_i$, $i=1,\ldots,m$. We further apply our methods to the DGHV recommended parameters as specified in the original work of van Dijk, Gentry, Halevi, and Vaikuntanathan.
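For readers unfamiliar with DGHV, the toy Python snippet below generates public integers of the stated form $c_i = pq_i + r_i$ with parameters of our own choosing, far below the recommended sizes; it only illustrates the structure that the lattice attacks exploit, not the attacks themselves.

```python
import secrets

# Toy DGHV-style public integers c_i = p*q_i + r_i (illustrative sizes only).
p = secrets.randbits(64) | (1 << 63) | 1          # secret odd "key", >= 2^63

def sample(m=5, q_bits=96, r_bits=16):
    qs = [secrets.randbits(q_bits) for _ in range(m)]
    rs = [secrets.randbits(r_bits) for _ in range(m)]
    return [p * qi + ri for qi, ri in zip(qs, rs)], qs, rs

cs, qs, rs = sample()
# The noise r_i is exactly the residue of c_i modulo the secret p; the attacks
# above exploit small linear relations among the c_i (resp. the q_i).
assert all(ci % p == ri for ci, ri in zip(cs, rs))
```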
Last updated:  2015-11-27
Mitigating Server Breaches in Password-Based Authentication: Secure and Efficient Solutions
Olivier Blazy, Céline Chevalier, Damien Vergnaud
Password-Authenticated Key Exchange allows users to generate a strong cryptographic key based on a shared ``human-memorable'' password without requiring a public-key infrastructure. It is one of the most widely used and fundamental cryptographic primitives. Unfortunately, mass password theft from organizations is continually in the news and, even if passwords are salted and hashed, brute-force breaking of password hashing is usually very successful in practice. In this paper, we propose two efficient protocols where the password database is somehow shared among two servers (or more), and authentication requires a distributed computation involving the client and the servers. In this scenario, even if a server is compromised, the exposed secret is of little value to the adversary, since it reveals only a share of the password database and does not permit brute-force guessing of a password without further interactions with the parties for each guess. Our protocols rely on smooth projective hash functions and are proven secure under classical assumptions in the standard model (i.e. they do not require idealized assumptions such as random oracles).
Last updated:  2018-02-12
A Multi-Bit Fully Homomorphic Encryption with Shorter Public Key from LWE
Zhigang Chen, Xinxia Song
The efficiency of fully homomorphic encryption remains a major open question. To improve it, we use the technique of packed ciphertexts to construct a multi-bit fully homomorphic encryption scheme based on the Learning with Errors (LWE) problem. Our scheme has a short public key: since it builds on a basic encryption scheme that chooses LWE samples from a Gaussian distribution and adds Gaussian error to them, the number of LWE samples decreases from 2n log q to n+1. We prove that our fully homomorphic encryption scheme is feasible and that its security relies on the hardness of the LWE problem. In addition, we adapt the optimization of the key-switching process from GHS13 and formalize this new key-switching process for multi-bit fully homomorphic encryption. Finally, we analyze concrete parameters and compare them between our scheme and the GHS13 scheme. The data show that our public key is smaller by a factor of about log q than that of the GHS13 scheme.
Last updated:  2015-11-27
Midori: A Block Cipher for Low Energy (Extended Version)
Subhadeep Banik, Andrey Bogdanov, Takanori Isobe, Kyoji Shibutani, Harunaga Hiwatari, Toru Akishita, Francesco Regazzoni
In the past few years, lightweight cryptography has become a popular research discipline, with a number of ciphers and hash functions proposed. The designers' focus has been predominantly on minimizing the hardware area, while other goals such as low latency have only been addressed rather recently. However, the optimization goal of low energy for block cipher design has not been explicitly addressed so far. At the same time, it is a crucial measure of goodness for an algorithm. Indeed, a cipher optimized with respect to energy has wide applications, especially in constrained environments running on a tight power/energy budget such as medical implants. This paper presents the block cipher Midori, which is optimized with respect to the energy consumed by the circuit per bit in an encryption or decryption operation. We deliberate on the design choices that lead to low energy consumption in an electrical circuit, and try to optimize each component of the circuit as well as its entire architecture for energy. An added motivation is to make both encryption and decryption functionalities available by a small tweak in the circuit that does not incur significant area or energy overheads. We propose two energy-efficient block ciphers, Midori128 and Midori64, with block sizes equal to 128 and 64 bits respectively. These ciphers have the added property that a circuit that provides both the functionalities of encryption and decryption can be designed with very little overhead in terms of area and energy. We compare our results with other ciphers with similar characteristics: the energy consumption of Midori64 and Midori128 is by far better than that of comparable ciphers like PRINCE and NOEKEON.
Last updated:  2016-09-24
Amplifying Side Channels Through Performance Degradation
Thomas Allan, Billy Bob Brumley, Katrina Falkner, Joop van de Pol, Yuval Yarom
Interference between processes executing on shared hardware can be used to mount performance-degradation attacks. However, in most cases, such attacks offer little benefit for the adversary. In this paper, we demonstrate that software-based performance-degradation attacks can be used to amplify side-channel leaks, enabling the adversary to increase both the amount and the quality of information captured. We identify a new information leak in the OpenSSL implementation of the ECDSA digital signature algorithm, albeit seemingly unexploitable due to the limited granularity of previous trace procurement techniques. To overcome this imposing hurdle, we combine the information leak with a microarchitectural performance-degradation attack that can slow victims down by a factor of over 150. We demonstrate how this combination amplifies the side channel sufficiently to exploit the new information leak. Using the combined attack, an adversary can break a private key of the secp256k1 curve, used in the Bitcoin protocol, after observing only 6 signatures, a four-fold improvement over all previously described attacks.
Last updated:  2015-11-26
Modular Inversion Hidden Number Problem - A Lattice Approach
Pranjal Dutta
The Modular Inversion Hidden Number Problem (MIHNP) was introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001 (BHH'01). They provided two heuristics: in Method I, two-thirds of the output bits are required to solve the problem, whereas the more efficient heuristic (Method II) requires only one-third of the output bits. More than a decade later, Sarkar [28] identified that the claim for Method II is actually not correct, and a detailed calculation justified that this method too requires two-thirds of the output bits, contrary to the claim in BHH'01. He reconstructed the lattice and gave a bound which heuristically solves the problem with half of the output bits. Xu et al. [29] solved it asymptotically with only one-third of the output bits, but their technique is difficult to understand and implement. Here we use essentially the same idea as [28], but in a cleverer way that yields a better bound, although we still solve the problem heuristically with only half of the output bits in the asymptotic sense. Our approach is much easier to understand and implement, and experimental results support the claims corresponding to our heuristics. In the last section we discuss a variant of the problem which seems hard to solve using lattice attacks.
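For reference, the MIHNP instance can be stated roughly as follows (our paraphrase of the BHH'01 formulation):

```latex
% MIHNP, roughly: for a hidden \alpha \in \mathbb{Z}_p and known random
% t_1, \dots, t_n, the attacker is given the \delta most significant bits
% of each modular inverse and must recover \alpha.
\[
  \text{Given } \Big( t_i,\ \mathrm{MSB}_{\delta}\big( (\alpha + t_i)^{-1} \bmod p \big) \Big)_{i=1}^{n},
  \quad \text{find } \alpha \in \mathbb{Z}_p .
\]
```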
Last updated:  2015-11-26
Secret Sharing Schemes with General Access Structures (Full version)
Jian Liu, Sihem Mesnager, Lusheng Chen
Secret sharing schemes with general monotone access structures have been widely discussed in the literature. But in some scenarios, non-monotone access structures may have more practical significance. In this paper, we shed new light on secret sharing schemes realizing general (not necessarily monotone) access structures. Based on an attack model for secret sharing schemes with general access structures, we redefine perfect secret sharing schemes, which is a generalization of the known concept of perfect secret sharing schemes with monotone access structures. Then, we provide for the first time two constructions of perfect secret sharing schemes with general access structures. The first construction can be seen as a democratic scheme in the sense that the shares are generated by the players themselves. Our second construction significantly enhances the efficiency of the system, where the shares are distributed by the trusted center (TC).
Last updated:  2015-12-08
Lightweight CRC-based Authentication
Elena Dubrova, Mats Näslund, Göran Selander, Fredrik Lindqvist
Low-cost resource-constrained devices can allocate very limited resources for implementing security. At the same time, they still require some level of protection. In this paper, we present a lightweight message authentication scheme based on Cyclic Redundancy Check (CRC). The presented CRC inherits the implementation simplicity of the conventional CRC checksum except that the LFSR implementing its encoding and decoding is made re-programmable. Similarly to previously proposed cryptographic CRCs, it detects both random and malicious errors without increasing bandwidth. The main difference from previous approaches is that we use arbitrary instead of irreducible generator polynomials. This eliminates the need for irreducibility tests. We provide a detailed quantitative analysis of the achieved security as a function of message and CRC sizes. The results show that the presented scheme is particularly suitable for the authentication of short messages.
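As a reminder of the underlying primitive, a CRC is the remainder of the message polynomial modulo a generator polynomial over GF(2). The Python sketch below is a textbook bitwise CRC with an arbitrary generator polynomial, not the keyed, re-programmable construction analyzed in the paper; the function name and example values are ours.

```python
def crc(message_bits, generator_bits):
    """Remainder of the message polynomial modulo an arbitrary generator
    polynomial over GF(2): long division by repeated XOR."""
    deg = len(generator_bits) - 1
    reg = list(message_bits) + [0] * deg          # append deg zero bits
    for i in range(len(message_bits)):
        if reg[i]:
            for j, gbit in enumerate(generator_bits):
                reg[i + j] ^= gbit
    return reg[-deg:]                              # the CRC value

# Example: 8-bit message, degree-4 generator x^4 + x + 1 -> bits 1 0 0 1 1.
msg = [1, 0, 1, 1, 0, 0, 1, 0]
gen = [1, 0, 0, 1, 1]
print(crc(msg, gen))
```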
Last updated:  2018-03-18
Improved Factoring Attacks on Multi-Prime RSA with Small Prime Difference
Mengce Zheng, Noboru Kunihiro, Honggang Hu
In this paper, we study the security of multi-prime RSA with small prime difference and propose two improved factoring attacks. The modulus involved in this variant is the product of r distinct prime factors of the same bit-size. Zhang and Takagi (ACISP 2013) showed a Fermat-like factoring attack on multi-prime RSA. In order to improve the previous result, we gather more information about the prime factors to derive r simultaneous modular equations. The first attack is to combine all the equations and solve one multivariate equation by generic lattice approaches. Since the equation form is similar to the multi-prime Phi-hiding problem, we propose the second attack by applying the optimal linearization technique. We also show that our attacks can achieve better bounds in the experiments.
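The "Fermat-like" starting point mentioned above is classic Fermat factorization, which succeeds quickly exactly when the factors are close together. The Python sketch below shows plain Fermat factorization (not the paper's attacks), with toy primes of our own choosing.

```python
from math import isqrt

def fermat_factor(n, max_steps=1_000_000):
    """Classic Fermat factorization: write n = a^2 - b^2 = (a-b)(a+b).
    It terminates quickly when the two factors are close, which is the
    'small prime difference' setting generalized above."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(max_steps):
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
    return None

# Two primes that are close together factor almost immediately.
p, q = 10007, 10009
print(fermat_factor(p * q))   # (10007, 10009)
```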
Last updated:  2015-11-26
Multi-Input Functional Encryption for Unbounded Arity Functions
Saikrishna Badrinarayanan, Divya Gupta, Abhishek Jain, Amit Sahai
The notion of multi-input functional encryption (MI-FE) was recently introduced by Goldwasser et al. [EUROCRYPT’14] as a means to non-interactively compute aggregate information on the joint private data of multiple users. A fundamental limitation of their work, however, is that the total number of users (which corresponds to the arity of the functions supported by the MI-FE scheme) must be a priori bounded and fixed at the system setup time. In this work, we overcome this limitation by introducing the notion of unbounded input MI-FE that supports the computation of functions with unbounded arity. We construct such an MI-FE scheme with indistinguishability security in the selective model based on the existence of public-coin differing-inputs obfuscation for Turing machines and collision-resistant hash functions. Our result enables several new exciting applications, including a new paradigm of on-the-fly secure multiparty computation where new users can join the system dynamically.
Last updated:  2015-11-26
On the Security of the Schnorr Signature Scheme and DSA against Related-Key Attacks
Hiraku Morita, Jacob C. N. Schuldt, Takahiro Matsuda, Goichiro Hanaoka, Tetsu Iwata
In the ordinary security model for signature schemes, we consider an adversary that may forge a signature on a new message using only his knowledge of other valid message and signature pairs. To take into account side channel attacks such as tampering or fault-injection attacks, Bellare and Kohno (Eurocrypt 2003) formalized related-key attacks (RKA), where stronger adversaries are considered. In RKA for signature schemes, the adversary can also manipulate the signing key and obtain signatures for the modified key. This paper considers RKA security of two established signature schemes: the Schnorr signature scheme and (a well-known variant of) DSA. First, we show that these signature schemes are secure against a weak notion of RKA. Second, we demonstrate that, on the other hand, neither the Schnorr signature scheme nor DSA achieves the standard notion of RKA security, by showing concrete attacks on these. Lastly, we show that a slight modification of both the Schnorr signature scheme and (the considered variant of) DSA yields fully RKA secure schemes.
Last updated:  2016-08-17
$\Lambda \circ \lambda$: Functional Lattice Cryptography
Eric Crockett, Chris Peikert
This work describes the design, implementation, and evaluation of $\Lambda \circ \lambda$, a general-purpose software framework for lattice-based cryptography. The $\Lambda \circ \lambda$ framework has several novel properties that distinguish it from prior implementations of lattice cryptosystems, including the following. \emph{Generality, modularity, concision:} $\Lambda \circ \lambda$ defines a collection of general, highly composable interfaces for mathematical operations used across lattice cryptography, allowing for a wide variety of schemes to be expressed very naturally and at a high level of abstraction. For example, we implement an advanced fully homomorphic encryption (FHE) scheme in as few as 2--5 lines of code per feature, via code that very closely matches the scheme's mathematical definition. \emph{Theory affinity:} $\Lambda \circ \lambda$ is designed from the ground up around the specialized ring representations, fast algorithms, and worst-case hardness proofs that have been developed for the Ring-LWE problem and its cryptographic applications. In particular, it implements fast algorithms for sampling from \emph{theory-recommended} error distributions over \emph{arbitrary} cyclotomic rings, and provides tools for maintaining tight control of error growth in cryptographic schemes. \emph{Safety:} $\Lambda \circ \lambda$ has several facilities for reducing code complexity and programming errors, thereby aiding the \emph{correct} implementation of lattice cryptosystems. In particular, it uses strong typing to \emph{statically enforce}---i.e., at compile time---a wide variety of constraints among the various parameters. \emph{Advanced features:} $\Lambda \circ \lambda$ exposes the rich \emph{hierarchy} of cyclotomic rings to cryptographic applications. We use this to give the first-ever implementation of a collection of FHE operations known as ``ring switching,'' and also define and analyze a more efficient variant that we call ``ring tunneling.'' Lastly, this work defines and analyzes a variety of mathematical objects and algorithms for the recommended usage of Ring-LWE in cyclotomic rings, which we believe will serve as a useful knowledge base for future implementations.
Last updated:  2015-11-26
Comment on ``Realization of a scalable Shor algorithm''
Zhengjun Cao, Lihua Liu
Recently, Monz et al. [arXiv:1507.08852] have reported the demonstration of factoring 15 using a scalable Shor algorithm with an ion-trap quantum computer. We remark that the demonstration is flawed: there are three flaws in the proposed circuit diagram of the Shor algorithm. We also remark that the principles behind the demonstration, including its correctness and complexity, have not been explained properly.
Last updated:  2015-11-27
Tighter Security for Efficient Lattice Cryptography via the Rényi Divergence of Optimized Orders
Katsuyuki Takashima, Atsushi Takayasu
In security proofs of lattice based cryptography, bounding the closeness of two probability distributions is an important procedure. To measure the closeness, the Rényi divergence has been used instead of the classical statistical distance. Recent results have shown that the Rényi divergence offers security reductions with better parameters, e.g. smaller deviations for discrete Gaussian distributions. However, since previous analyses used a fixed order Rényi divergence, i.e., order two, they lost tightness of reductions. To overcome the deficiency, we adaptively optimize the orders based on the advantages of the adversary for several lattice-based schemes. The optimizations enable us to prove the security with both improved efficiency and tighter reductions. Indeed, our analysis offers security reductions with smaller parameters than the statistical distance based analysis and the reductions are tighter than those of previous Rényi divergence based analyses. As applications, we show tighter security reductions for sampling discrete Gaussian distributions with smaller precomputed tables for Bimodal Lattice Signature Scheme (BLISS), and the variants of learning with errors (LWE) problem and the small integer solution (SIS) problem called k-LWE and k-SIS, respectively.
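For reference, the Rényi divergence of order $a$ whose order is being optimized above is, in our paraphrase of the standard definition used in this line of work:

```latex
% Renyi divergence of order a \in (1, \infty) between discrete distributions
% P and Q with \mathrm{Supp}(P) \subseteq \mathrm{Supp}(Q):
\[
  R_a(P \,\|\, Q) \;=\; \Bigg( \sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}} \Bigg)^{\!\frac{1}{a-1}} .
\]
```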
Last updated:  2015-11-26
On the Usability of Two-Factor Authentication
Ding Wang, Ping Wang
Smart-card-based password authentication, known as two-factor authentication, is one of the most widely used security mechanisms to validate the legitimacy of a remote client, who must hold a valid smart card and the correct password in order to successfully log in to the server. So far the research in this domain has mainly focused on developing more secure, privacy-preserving and efficient protocols, which has led to numerous efficient proposals with a diversity of security provisions, yet little attention has been directed towards another important aspect, i.e. the usability of a scheme. This paper focuses on the study of two specific security threats to usability in two-factor authentication. Using two representative protocols as case studies, we demonstrate two types of security threats to usability: (1) a password change attack, which may easily render the smart card completely unusable by changing the password to a random value; and (2) a de-synchronization attack, which breaks the consistency of the pseudo-identities between the user and the server. These threats, though realistic in practice, have received little attention in the literature. In addition to revealing the vulnerabilities, we discuss how to thwart these security threats and secure the protocols.
Last updated:  2017-02-14
A Note on Perfect Correctness by Derandomization
Nir Bitansky, Vinod Vaikuntanathan
In this note, we show how to transform a large class of erroneous cryptographic schemes into perfectly correct ones. The transformation works for schemes that are correct on every input with probability noticeably larger than half, and are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity $2^{O(n)}$ and non-deterministic circuit complexity $2^{\Omega(n)}$. The transformation complements previous results showing that public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that are often correct {\em for all inputs}. The technique relies on the idea of ``reverse randomization'' [Naor, Crypto 1989] and on Nisan-Wigderson style derandomization, which was previously used in cryptography to obtain non-interactive witness-indistinguishable proofs and commitment schemes [Barak, Ong and Vadhan, Crypto 2003].
Last updated:  2016-02-17
Lucky Microseconds: A Timing Attack on Amazon's s2n Implementation of TLS
Martin R. Albrecht, Kenneth G. Paterson
s2n is an implementation of the TLS protocol that was released in late June 2015 by Amazon. It is implemented in around 6,000 lines of C99 code. By comparison, OpenSSL needs around 70,000 lines of code to implement the protocol. At the time of its release, Amazon announced that s2n had undergone three external security evaluations and penetration tests. We show that, despite this, s2n - as initially released - was vulnerable to a timing attack in the case of CBC-mode ciphersuites, which could be extended to complete plaintext recovery in some settings. Our attack has two components. The first part is a novel variant of the Lucky 13 attack that works even though protections against Lucky 13 were implemented in s2n. The second part deals with the randomised delays that were put in place in s2n as an additional countermeasure to Lucky 13. Our work highlights the challenges of protecting implementations against sophisticated timing attacks. It also illustrates that standard code audits are insufficient to uncover all cryptographic attack vectors.
Last updated:  2016-10-05
New directions in nearest neighbor searching with applications to lattice sieving
Anja Becker, Léo Ducas, Nicolas Gama, Thijs Laarhoven
To solve the approximate nearest neighbor search problem (NNS) on the sphere, we propose a method using locality-sensitive filters (LSF), with the property that nearby vectors have a higher probability of surviving the same filter than vectors which are far apart. We instantiate the filters using spherical caps of height $1 - \alpha$, where a vector survives a filter if it is contained in the corresponding spherical cap, and where ideally each filter has an independent, uniformly random direction. For small $\alpha$, these filters are very similar to the spherical locality-sensitive hash (LSH) family previously studied by Andoni et al. For larger $\alpha$ bounded away from $0$, these filters potentially achieve a superior performance, provided we have access to an efficient oracle for finding relevant filters. Whereas existing LSH schemes are limited by a performance parameter of $\rho \geq 1/(2c^2 - 1)$ to solve approximate NNS with approximation factor $c$, with spherical LSF we potentially achieve smaller asymptotic values of $\rho$, depending on the density of the data set. For sparse data sets where the dimension is super-logarithmic in the size of the data set, we asymptotically obtain $\rho = 1/(2c^2 - 1)$, while for a logarithmic dimensionality with density constant $\kappa$ we obtain asymptotics of $\rho \sim 1/(4 \kappa c^2)$. To instantiate the filters and prove the existence of an efficient decoding oracle, we replace the independent filters by filters taken from certain structured random product codes. We show that the additional structure in these concatenation codes allows us to decode efficiently using techniques similar to lattice enumeration, and we can find the relevant filters with low overhead, while at the same time not significantly changing the collision probabilities of the filters. We finally apply spherical LSF to sieving algorithms for solving the shortest vector problem (SVP) on lattices, and show that this leads to a heuristic time complexity for solving SVP in dimension $n$ of $(3/2)^{n/2 + o(n)} \approx 2^{0.292 n + o(n)}$. This asymptotically improves upon the previous best algorithms for solving SVP, which use spherical LSH and cross-polytope LSH and run in time $2^{0.298 n + o(n)}$. Experiments with the GaussSieve validate the claimed speedup and show that this method may be practical as well, as the polynomial overhead is small. Our implementation is available under an open-source license.
Last updated:  2017-05-07
Pseudo-Free Families of Finite Computational Elementary Abelian $p$-Groups
Mikhail Anokhin
Loosely speaking, a family of computational groups is a family $(G_d)_{d\in D}$ of groups (where $D$ is a set of bit strings) whose elements are represented by bit strings in such a way that equality testing, multiplication, inversion, computing the identity element, and sampling random elements in $G_d$ can be performed efficiently when $d$ is given. A family $(G_d)_{d\in D}$ of computational groups is called pseudo-free if, given a random index $d$ (for an arbitrary value of the security parameter) and random elements $g_1,\ldots,g_m\in G_d$, it is computationally hard to find a system of group equations $v_i(a_1,\ldots,a_m;x_1,\ldots,x_n)=w_i(a_1,\ldots,a_m;x_1,\ldots,x_n)$, $i=1,\ldots,s$, and elements $h_1,\ldots,h_n\in G_d$ such that this system of equations is unsatisfiable in the free group freely generated by $a_1,\ldots,a_m$ (over variables $x_1,\ldots,x_n$), but $v_i(g_1,\ldots,g_m;h_1,\ldots,h_n)=w_i(g_1,\ldots,g_m;h_1,\ldots,h_n)$ in $G_d$ for all $i\in\{1,\ldots,s\}$. If a family of computational groups satisfies this definition with the additional requirement that $n=0$, then this family is said to be weakly pseudo-free. The definition of a (weakly) pseudo-free family of computational groups can be easily generalized to the case when all groups in the family belong to a fixed variety of groups. In this paper, we initiate the study of (weakly) pseudo-free families of computational elementary abelian $p$-groups, where $p$ is an arbitrary fixed prime. We restrict ourselves to families $(G_d)_{d\in D}$ of computational elementary abelian $p$-groups such that for every index $d$, each element of $G_d$ is represented by a single bit string of length polynomial in the length of $d$. First, we prove that pseudo-freeness and weak pseudo-freeness for families of computational elementary abelian $p$-groups are equivalent. Second, we give some necessary and sufficient conditions for a family of computational elementary abelian $p$-groups to be pseudo-free (provided that at least one of two additional conditions holds). These necessary and sufficient conditions are formulated in terms of collision-intractability or one-wayness of certain homomorphic families of knapsack functions. Third, we establish some necessary and sufficient conditions for the existence of pseudo-free families of computational elementary abelian $p$-groups. With one exception, these conditions are the existence of certain homomorphic collision-intractable families of $p$-ary hash functions or certain homomorphic one-way families of functions. As an example, we construct a Diffie-Hellman-like key agreement protocol from an arbitrary family of computational elementary abelian $p$-groups. Unfortunately, we do not know whether this protocol is secure under reasonable assumptions.
Last updated:  2015-11-22
A Practical Oblivious Map Data Structure with Secure Deletion and History Independence
Daniel S. Roche, Adam J. Aviv, Seung Geol Choi
We present a new oblivious RAM that supports variable-sized storage blocks (vORAM), which is the first ORAM to allow varying block sizes without trivial padding. We also present a new history-independent data structure (a HIRB tree) that can be stored within a vORAM. Together, this construction provides an efficient and practical oblivious data structure (ODS) for a key/value map, and goes further to provide an additional privacy guarantee as compared to prior ODS maps: even upon client compromise, deleted data and the history of old operations remain hidden to the attacker. We implement and measure the performance of our system using Amazon Web Services, and the single-operation time for a realistic database (up to $2^{18}$ entries) is less than 1 second. This represents a 100x speed-up compared to the current best oblivious map data structure (which provides neither secure deletion nor history independence) by Wang et al. (CCS 14).
Last updated:  2018-08-07
Practical Order-Revealing Encryption with Limited Leakage
Nathan Chenette, Kevin Lewi, Stephen A. Weis, David J. Wu
In an order-preserving encryption scheme, the encryption algorithm produces ciphertexts that preserve the order of their plaintexts. Order-preserving encryption schemes have been studied intensely in the last decade, and yet not much is known about the security of these schemes. Very recently, Boneh et al. (Eurocrypt 2015) introduced a generalization of order-preserving encryption, called order-revealing encryption, and presented a construction which achieves this notion with best-possible security. Because their construction relies on multilinear maps, it is too impractical for most applications and therefore remains a theoretical result. In this work, we build efficiently implementable order-revealing encryption from pseudorandom functions. We present the first efficient order-revealing encryption scheme which achieves a simulation-based security notion with respect to a leakage function that precisely quantifies what is leaked by the scheme. In fact, ciphertexts in our scheme are only about 1.6 times longer than their plaintexts. Moreover, we show how composing our construction with existing order-preserving encryption schemes results in order-revealing encryption that is strictly more secure than all preceding order-preserving encryption schemes.
Last updated:  2018-03-23
Secret Sharing Schemes Based on Resilient Boolean Maps
Juan Carlos Ku-Cauich, Guillermo Morales-Luna
We introduce a linear code based on resilient maps on vector spaces over finite fields, we give a basis of this code and upper and lower bounds for its minimal distance. Then the use of the introduced code for building vector space secret sharing schemes is explained and an estimation of the robustness of the schemes against cheaters is provided.
Last updated:  2022-08-22
Practical, Predictable Lattice Basis Reduction
Daniele Micciancio, Michael Walter
Lattice reduction algorithms are notoriously hard to predict, both in terms of running time and output quality, which poses a major problem for cryptanalysis. While easy-to-analyze algorithms with good worst-case behavior exist, previous experimental evidence suggests that they are outperformed in practice by algorithms whose behavior is still not well understood, despite more than 30 years of intensive research. This has led to a situation where a rather complex simulation procedure seems to be the most common way to predict the result of their application to an instance. In this work we present new algorithmic ideas towards bridging this gap between theory and practice. We report on an extensive experimental study of several lattice reduction algorithms, both novel and from the literature, that shows that theoretical algorithms are in fact surprisingly practical and competitive. In light of our results we come to the conclusion that in order to predict lattice reduction, simulation is superfluous and can be replaced by a closed formula using weaker assumptions. One key technique to achieving this goal is a novel algorithm to solve the Shortest Vector Problem (SVP) in the dual without computing the dual basis. Our algorithm enjoys the same practical efficiency as the corresponding primal algorithm and can be easily added to an existing implementation of it.
Last updated:  2016-03-21
Schnorr Signatures in the Multi-User Setting
Eike Kiltz, Daniel Masny, Jiaxin Pan
A theorem by Galbraith, Malone-Lee, and Smart (GMLS) from 2002 showed that, for Schnorr signatures, single-user security tightly implies multi-user security. Recently, Bernstein pointed to an error in the above theorem and promoted a key-prefixing variant of Schnorr signatures for which he proved a tight implication from single to multi-user security. Even worse, he identified an “apparently insurmountable obstacle to the claimed [GMLS] theorem”. This paper shows that, without key prefixing, single-user security of Schnorr signatures tightly implies multi-user security of the same scheme.
Last updated:  2019-06-13
On the First Fall Degree of Summation Polynomials
Stavros Kousidis, Andreas Wiemers
We improve on the first fall degree bound of polynomial systems that arise from a Weil descent along Semaev's summation polynomials relevant to the solution of the Elliptic Curve Discrete Logarithm Problem via Gröbner basis algorithms.
Last updated:  2015-12-22
Even More Practical Key Exchanges for the Internet using Lattice Cryptography
Vikram Singh, Arjun Chopra
In 2014, Peikert described the first practical lattice-based key exchange that is provably secure and provides perfect forward security. However, his presentation lacks concrete proposals for parameters. We aim to provide a clear description of how the algorithm can be implemented along with some analysis of potential parameters. Previously in 2015, Singh considered the simpler case, as chosen by Bos, Costello, Naehrig and Stebila in 2014, of cyclotomic rings with power-of-two degree. In this work we focus on the case of cyclotomic rings with degree p-1 for prime p. This allows for a greater degree of flexibility in choosing the lattice dimension, which determines the security level and efficiency of the scheme. We describe the necessary arithmetic setup and then present Peikert's Diffie-Hellman-like key exchange along with security, correctness and implementation analysis.
Last updated:  2015-12-16
On the Possibility of Non-Interactive E-Voting in the Public-key Setting
Rosario Giustolisi, Vincenzo Iovino, Peter B. Rønne
In 2010 Hao, Ryan and Zielinski proposed a simple decentralized e-voting protocol that only requires 2 rounds of communication. Thus, for k elections their protocol needs 2k rounds of communication. Observing that the first round of their protocol is aimed at establishing the public keys of the voters, we propose an extension of the protocol as a non-interactive e-voting scheme in the public-key setting (NIVS) in which the voters, after having published their public keys, can use the corresponding secret keys to participate in an arbitrary number of one-round elections. We first construct a NIVS with a standard tally function where the number of votes for each candidate is counted. Further, we present constructions for two alternative types of elections. Specifically, in the first type (dead or alive elections) the tally shows if at least one voter cast a vote for the candidate. In the second one (elections by unanimity), the tally shows if all voters cast a vote for the candidate. Our constructions are based on bilinear maps of prime order. As a definitional contribution we provide formal computational definitions for privacy and verifiability of NIVSs. We conclude by showing intriguing relations between our results, secure computation, electronic exams and conference management systems.
Last updated:  2015-11-29
Bitsliced Implementations of the PRINCE, LED and RECTANGLE Block Ciphers on AVR 8-bit Microcontrollers
Zhenzhen Bao, Peng Luo, Dongdai Lin
Due to the demand for low-cost cryptosystems from industry, a number of lightweight block ciphers have sprung up, each excelling at different implementation features. An innovative design is the block cipher PRINCE: to meet the requirement for low-latency, instantaneous encryption, NXP Semiconductors and its academic partners cooperated to design this low-latency block cipher. Another good example is the block cipher LED, which is very compact in hardware and whose designers also aim to maintain reasonable software performance. In this paper, we demonstrate how to achieve high software performance for these two ciphers on AVR 8-bit microcontrollers using the bitslice technique. Our bitsliced implementations speed up the execution of these two ciphers several times with less memory usage than previous work. In addition to these two nibble-oriented ciphers, we also evaluate the software performance of the newly proposed lightweight block cipher RECTANGLE, whose design takes bitslicing into consideration. Our results show that RECTANGLE ranks among the highest-performing existing block ciphers on 8-bit microcontrollers in real-world usage scenarios.
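To give a flavor of the bitslice technique referred to above, the toy Python snippet below stores bit j of eight independent nibbles in one byte ("slice j"), so a single bitwise operation acts on all eight blocks at once. It is a generic illustration only, not the ciphers' actual S-box code; the function names and nibble values are ours.

```python
def to_slices(nibbles):
    """Pack 8 nibbles into 4 slice bytes: bit i of slice j = bit j of nibble i."""
    return [sum(((x >> j) & 1) << i for i, x in enumerate(nibbles))
            for j in range(4)]

def from_slices(slices):
    """Inverse of to_slices: recover the 8 nibbles from the 4 slices."""
    return [sum(((slices[j] >> i) & 1) << j for j in range(4))
            for i in range(8)]

nibbles = [0x0, 0x3, 0x5, 0x9, 0xA, 0xC, 0xE, 0xF]
s = to_slices(nibbles)
s = [b ^ 0xFF for b in s]                    # bitwise NOT of all 8 nibbles at once
assert from_slices(s) == [(~x) & 0xF for x in nibbles]
```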
Last updated:  2015-12-03
Efficient implementation of AND, OR and NOT operators for ABCs
Antonio de la Piedra
In the last few years several practitioners have proposed different strategies for implementing Attribute-based credentials (ABCs) on smart cards. ABCs allow citizens to prove certain properties about themselves without necessarily revealing their full identity. The Idemix ABC is the most versatile ABC system proposed in the literature, supporting pseudonyms, equality proofs of representation, verifiable encryption of attributes and proving properties of attributes via AND, NOT and OR operators. Recently, Vullers et al. and De La Piedra et al. addressed the implementation of the selective disclosure operations, pseudonyms and multi-credential proofs such as equality proofs of representation. In this manuscript, we present implementation strategies for proving properties of user attributes via these operators and show how to combine them via external and internal commitment reordering.
Last updated:  2016-05-25
CHf-ORAM: A Constant Communication ORAM without Homomorphic Encryption
Tarik Moataz, Erik-Oliver Blass, Travis Mayberry
Recent techniques reduce ORAM communication complexity down to constant in the number of blocks $N$. However, they induce expensive homomorphic encryption on both the server and the client. In this paper, we present an alternative approach, CHf-ORAM. This ORAM features constant communication complexity without homomorphic encryption, in exchange for expanding the traditional ORAM setting from single-server to multiple non-colluding servers. We show that adding as few as 4 servers allows for substantially reduced client and server computation compared to existing single-server alternatives. Our approach uses techniques from information-theoretically secure Private Information Retrieval to replace homomorphic encryption with simple XOR operations. Besides $O(1)$ communication complexity, our construction also features $O(1)$ client memory and a block size of only $\Omega(\log^3 N)$. This leads to an ORAM which is extremely lightweight and suitable for deployment even on memory- and compute-constrained devices. Finally, CHf-ORAM features a circuit size which is constant in the block size, making it especially attractive for secure RAM computations.
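The information-theoretic PIR idea referred to above can be illustrated with a minimal two-server XOR sketch; this is our own toy example, not the CHf-ORAM protocol itself. Each (non-colluding) server XORs the blocks in a random-looking subset, and the two subsets differ only at the queried index, so the XOR of the answers is the requested block.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pir_query(n, index):
    """2-server XOR-based PIR: each server sees only a random-looking subset."""
    s0 = set(i for i in range(n) if secrets.randbits(1))
    s1 = s0 ^ {index}            # symmetric difference: differs only at `index`
    return s0, s1

def pir_answer(database, subset):
    """Each server XORs together the requested blocks."""
    acc = bytes(len(database[0]))
    for i in subset:
        acc = xor_bytes(acc, database[i])
    return acc

db = [bytes([i]) * 4 for i in range(8)]        # 8 blocks of 4 bytes each
s0, s1 = pir_query(len(db), index=5)
block = xor_bytes(pir_answer(db, s0), pir_answer(db, s1))
assert block == db[5]
```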
Last updated:  2015-11-18
Efficient Threshold Secret Sharing Schemes Secure against Rushing Cheaters
Avishek Adhikari, Kirill Morozov, Satoshi Obana, Partha Sarathi Roy, Kouichi Sakurai, Rui Xu
In this paper, we consider three very important issues, namely detection, identification and robustness, for $k$-out-of-$n$ secret sharing schemes against rushing cheaters who are allowed to submit (possibly forged) shares {\em after} observing the shares of the honest users in the reconstruction phase. Towards this, we present five different schemes. Among these, we first present two $k$-out-of-$n$ secret sharing schemes, the first one being capable of detecting $(k-1)/3$ cheaters such that $|V_i|=|S|/\epsilon^3$ and the second one being capable of detecting $n-1$ cheaters such that $|V_i|=|S|/\epsilon^{k+1}$, where $S$ denotes the set of all possible secrets, $\epsilon$ denotes the successful cheating probability of the cheaters and $V_i$ denotes the set of all possible shares. Next we present two $k$-out-of-$n$ secret sharing schemes, the first one being capable of identifying $(k-1)/3$ rushing cheaters with share size $|V_i|$ that satisfies $|V_i|=|S|/\epsilon^k$. This is the first scheme whose share size does not grow linearly with $n$ but only with $k$, where $n$ is the number of participants. For the second one, in the setting of public cheater identification, we present an efficient optimal-cheater-resilient $k$-out-of-$n$ secret sharing scheme against rushing cheaters with share size $|V_i|= (n-t)^{n+2t}|S|/\epsilon^{n+2t}$. The proposed scheme achieves {\em flexibility} in the sense that the security level (i.e. the cheaters' success probability) is independent of the secret size. Finally, we design an efficient $(k, \delta)$ robust secret sharing scheme secure against a rushing adversary with optimal cheater resiliency. Each of the five proposed schemes has the smallest share size among the existing schemes with the mentioned properties in the respective fields.
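For context, the underlying threshold primitive is a plain $k$-out-of-$n$ Shamir-style sharing. The Python sketch below shows that base primitive only, over a prime field of our own choosing, and omits all of the cheater detection and identification machinery that is the paper's contribution.

```python
import secrets

P = 2**61 - 1   # a Mersenne prime, large enough for the toy secret below

def share(secret, k, n):
    """Plain (k, n) Shamir sharing: evaluate a random degree-(k-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any k shares (no cheater checks here)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```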
Last updated:  2015-11-18
Faster arithmetic on elliptic curves using Fp2. Application to GLV-GLS and NIST elliptic curves over Fp isomorphic to twisted Hessian curves over field extensions
Michał Wroński
In this article we present how fast F_{p²} multiplication can be used to speed up arithmetic on elliptic curves. We use parallel computations for multiplication in F_{p²}, which is not much slower than multiplication in F_{p}. We show two applications of this method. In the first we show that using twisted Edwards curves over F_{p²} with a fast computable endomorphism (the GLV-GLS method) may nowadays be one of the fastest (or even the fastest) solutions in hardware applications. In the second we show how to speed up point scalar multiplication on the NIST P-224 and NIST P-256 curves. We use the field extension F_{p²} to find twisted Hessian curves over F_{p²} that are isomorphic to these curves. Our solution is faster than classic solutions by up to 28.5% for NIST P-256 and up to 27.2% for NIST P-224 if we consider solutions resistant to side-channel attacks. We can also use different formulas for point doubling and point addition, and then our solution is faster by up to 21.4% for NIST P-256 and up to 19.9% for NIST P-224 compared to classic solutions.
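To see why F_{p²} multiplication costs little more than F_p multiplication, here is a minimal Python sketch of arithmetic in F_{p²} = F_p[i]/(i² + 1) using Karatsuba, i.e. three base-field multiplications per product; it assumes p ≡ 3 (mod 4) so that −1 is a non-residue, and the NIST P-256 prime is used purely as an example (this is not the paper's parallel hardware implementation).

```python
# F_{p^2} = F_p[i]/(i^2 + 1); requires -1 to be a non-residue mod p (p = 3 mod 4)
p = 2**256 - 2**224 + 2**192 + 2**96 - 1      # NIST P-256 prime, p = 3 mod 4

def fp2_mul(a, b):
    """(a0 + a1*i) * (b0 + b1*i) with 3 base-field multiplications (Karatsuba)."""
    a0, a1 = a
    b0, b1 = b
    t0 = a0 * b0 % p
    t1 = a1 * b1 % p
    t2 = (a0 + a1) * (b0 + b1) % p
    return ((t0 - t1) % p, (t2 - t0 - t1) % p)

def fp2_sqr(a):
    """(a0 + a1*i)^2 with only 2 base-field multiplications."""
    a0, a1 = a
    return ((a0 + a1) * (a0 - a1) % p, 2 * a0 * a1 % p)

# sanity check against the schoolbook formula
a0, a1, b0, b1 = 1234567, 7654321, 42, 99
assert fp2_mul((a0, a1), (b0, b1)) == ((a0*b0 - a1*b1) % p, (a0*b1 + a1*b0) % p)
assert fp2_sqr((a0, a1)) == fp2_mul((a0, a1), (a0, a1))
```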
Last updated:  2015-11-18
Multi-Input Functional Encryption with Unbounded-Message Security
Vipul Goyal, Aayush Jain, Adam O' Neill
Multi-input functional encryption (MIFE) was introduced by Goldwasser \emph{et al.} (EUROCRYPT 2014) as a compelling extension of functional encryption. In MIFE, a receiver is able to compute a joint function of multiple, independently encrypted plaintexts. Goldwasser \emph{et al.} (EUROCRYPT 2014) show various applications of MIFE to running SQL queries over encrypted databases, computing over encrypted data streams, etc. The previous construction of MIFE due to Goldwasser \emph{et al.} (EUROCRYPT 2014) based on indistinguishability obfuscation had a major shortcoming: it could only support encrypting an \emph{a priori bounded} number of messages. Once that bound is exceeded, security is no longer guaranteed to hold. In addition, it could only support \emph{selective security}, meaning that the challenge messages and the set of ``corrupted'' encryption keys had to be declared by the adversary up-front. In this work, we show how to remove these restrictions by relying instead on \emph{sub-exponentially secure} indistinguishability obfuscation. This is done by carefully adapting an alternative MIFE scheme of Goldwasser \emph{et al.} that previously overcame these shortcomings (except for selective security w.r.t.~the set of ``corrupted'' encryption keys) by relying on differing-inputs obfuscation, which is now seen as an implausible assumption. Our techniques are rather generic, and we hope they are useful in converting other constructions using differing-inputs obfuscation to ones using sub-exponentially secure indistinguishability obfuscation instead.
Last updated:  2015-11-25
Efficient Culpably Sound NIZK Shuffle Argument without Random Oracles
Prastudy Fauzi, Helger Lipmaa
One way to guarantee security against malicious voting servers is to use NIZK shuffle arguments. Up to now, only two NIZK shuffle arguments in the CRS model have been proposed. Both arguments are relatively inefficient compared to known random oracle based arguments. We propose a new, more efficient, shuffle argument in the CRS model. Importantly, its online prover's computational complexity is dominated by only two $(n + 1)$-wide multi-exponentiations, where $n$ is the number of ciphertexts. Compared to the previously fastest argument by Lipmaa and Zhang, it satisfies a stronger notion of soundness.
Last updated:  2016-02-03
Comparison of TERO-cell implementations and characterisation on SRAM FPGAs
Cedric Marchand, Lilian Bossuet, AbdelKarim Cherkaoui
Physical unclonable functions (PUFs) are a promising approach in design for trust and security. A PUF derives a unique identifier for different, otherwise similar dies using some of their physical characteristics, so it can be used to authenticate chips and to fight against counterfeiting and theft of devices. The transient effect ring oscillator (TERO) PUF is based on extracting the entropy of process variations by comparing the characteristics of TERO cells. These TERO cells need to be carefully designed in order to construct a PUF. This task needs to be done with precision, especially regarding the size of the gates used and the delays of all connections inside the element, which is particularly challenging on FPGAs. This paper presents the design of TERO cells in two FPGA families: Xilinx Spartan 6 and Altera Cyclone V. In addition, the results of the characterisation of the TERO PUF are compared for the two technologies.
Last updated:  2015-11-18
Privacy-Aware Authentication in the Internet of Things
Hannes Gross, Marko Hölbl, Daniel Slamanig, Raphael Spreitzer
Besides the opportunities offered by the all-embracing Internet of Things (IoT) technology, it also poses a tremendous threat to the privacy of the carriers of these devices. In this work, we build upon the idea of an RFID-based IoT realized by means of standardized and well-established Internet protocols. In particular, we demonstrate how the Internet Protocol Security protocol suite (IPsec) can be applied in a privacy-aware manner. Therefore, we introduce a privacy-aware mutual authentication protocol compatible with restrictions imposed by the IPsec standard and analyze its privacy and security properties. In order to do so, we revisit and adapt the RFID privacy model (HPVP) of Hermans et al. (ESORICS'11). With this work, we show that privacy in the IoT can be achieved without relying on proprietary protocols and on the basis of existing Internet standards.
Last updated:  2015-11-18
Efficient and Low-complexity Hardware Architecture of Gaussian Normal Basis Multiplication over GF(2^m) for Elliptic Curve Cryptosystems
Bahram Rashidi, Sayed Masoud Sayedi, Reza Rezaeian Farashahi
In this paper an efficient high-speed architecture for Gaussian normal basis multiplication over the binary finite field GF(2^m) is presented. The structure is constructed by using regular modules for computing exponentiation by powers of 2 and low-cost blocks for multiplication by normal elements of the binary field. Since the exponents are powers of 2, the modules are implemented by simple cyclic shifts in the normal basis representation. As a result, the multiplier has a simple structure with a low critical path delay. The efficiency of the proposed structure is studied in terms of area and time complexity by using its implementation on the Virtex-4 FPGA family and also its ASIC design in 180nm CMOS technology. Comparison results with other structures of the Gaussian normal basis multiplier verify that the proposed architecture has better performance in terms of speed and hardware utilization.
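The central trick, namely that raising to a power of two in a normal basis of GF(2^m) is just a cyclic shift of the coordinate vector, can be shown with a tiny Python model; this is only a bit-level illustration of why the exponentiation modules reduce to wiring, not the proposed multiplier architecture.

```python
def nb_square(coords):
    """Squaring w.r.t. a normal basis {b, b^2, b^4, ...} of GF(2^m):
    a right cyclic shift of the coordinate vector."""
    return coords[-1:] + coords[:-1]

def nb_pow2k(coords, k):
    """Exponentiation by 2^k: k cyclic shifts, i.e. pure rewiring in hardware."""
    m = len(coords)
    k %= m
    return coords[-k:] + coords[:-k] if k else list(coords)

a = [1, 0, 1, 1, 0]                    # coordinates of some element of GF(2^5)
assert nb_pow2k(a, 2) == nb_square(nb_square(a))
```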
Last updated:  2015-11-20
Recommender Systems and their Security Concerns
Jun Wang, Qiang Tang
Instead of simply using two-dimensional $User \times Item$ features, advanced recommender systems rely on additional dimensions (e.g. time, location, social network) in order to provide better recommendation services. In the first part of this paper, we survey a variety of dimension features and show how they are integrated into the recommendation process. As the service providers collect more and more personal information, this raises great privacy concerns among the public. On the other side, the service providers can also suffer from attacks launched by malicious users who want to bias the recommendations. In the second part of this paper, we survey attacks from and against recommender service providers, and existing solutions.
Last updated:  2015-11-18
Concurrent Secure Computation via Non-Black Box Simulation
Vipul Goyal, Divya Gupta, Amit Sahai
Recently, Goyal (STOC'13) proposed a new non-black-box simulation technique for fully concurrent zero knowledge with straight-line simulation. Unfortunately, so far this technique is limited to the setting of concurrent zero knowledge. The goal of this paper is to study what can be achieved in the setting of concurrent secure computation using non-black-box simulation techniques, building upon the work of Goyal. The main contribution of our work is a secure computation protocol in the fully concurrent setting with a straight-line simulator, which allows us to achieve several new results: \begin{itemize} \item We give the first positive results for concurrent blind signatures and verifiable random functions in the plain model \emph{as per the ideal/real world security definition}. Our positive result is somewhat surprising in light of the impossibility result of Lindell (STOC'03) for black-box simulation. We circumvent this impossibility using non-black-box simulation. This gives us a quite natural example of a functionality in the concurrent setting which is impossible to realize using black-box simulation but can be securely realized using non-black-box simulation. \item Moreover, we expand the class of realizable functionalities in the concurrent setting. Our main theorem is a positive result for concurrent secure computation as long as the ideal world satisfies the \emph{bounded pseudo-entropy condition} (BPC) of Goyal (FOCS'12). The BPC requires that in the ideal world experiment, the total amount of information learnt by the adversary (via calls to the ideal functionality) should have ``bounded pseudoentropy". \item We also improve the round complexity of protocols in the single-input setting of Goyal (FOCS'12) both qualitatively and quantitatively. In Goyal's work, the number of rounds depended on the length of the honest parties' inputs. In our protocol, the round complexity depends only on the security parameter, and is completely independent of the length of the honest parties' inputs. \end{itemize} Our results are based on a non-black-box simulation technique using a new language (which allows the simulator to commit to an oracle program that can access information with bounded pseudoentropy), and a simulation-sound version of the concurrent zero-knowledge protocol of Goyal (STOC'13). We assume the existence of collision-resistant hash functions and constant-round semi-honest oblivious transfer.
Last updated:  2016-10-13
POPE: Partial Order Preserving Encoding
Daniel S. Roche, Daniel Apon, Seung Geol Choi, Arkady Yerukhimovich
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, Popa et al. (S&P 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as Naveed et al. (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and $O(n^\epsilon)$ non-persistent client storage for $0 < \epsilon < 1$, our POPE scheme provides extremely fast batch insertion consisting of a single round, and efficient search with O(1) amortized cost for up to $O(n^{1-\epsilon})$ search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases.
Last updated:  2015-11-14
Selene: Voting with Transparent Verifiability and Coercion-Mitigation
Peter Y A Ryan, Peter B Roenne, Vincenzo Iovino
End-to-end verifiable voting schemes typically involve voters handling an encrypted ballot in order to confirm that their vote is accurately included in the tally. While this may be technically valid, from a public acceptance standpoint it may be problematic: many voters may not really understand the purpose of the encrypted ballot and the various checks that they can perform. In this paper we take a different approach and revisit an old idea: to provide each voter with a private tracking number. Votes are posted on a bulletin board in the clear along with their associated tracking number. This is appealing in that it provides voters with a very simple, intuitive way to verify their vote, in the clear. However, there are obvious drawbacks: we must ensure that no two voters are assigned the same tracker and we need to keep the trackers private. In this paper, we propose a scheme that addresses both of these problems: we ensure that voters get unique trackers and we close off the coercer's window of opportunity by ensuring that the voters only learn their tracking numbers after votes have been posted. The resulting scheme provides receipt-freeness, and indeed a good level of coercion-resistance, while also providing a more immediately understandable form of verifiability. The cryptography is under the bonnet as far as the voter is concerned. The basic scheme still has a problem in some contexts: if the coercer is himself a voter there is a chance that the coerced voter might light on the coercer's tracker, or the coercer simply claims that it is his. We argue that in many contexts this may be an acceptable threat when weighed against the more transparent verification provided by the scheme. Nonetheless, we describe some elaborations of the basic scheme to mitigate such threats.
Last updated:  2016-06-02
Computing Jacobi's \theta in quasi-linear time
Hugo Labrande
Jacobi's \theta function has numerous applications in mathematics and computer science; a naive algorithm allows the computation of \theta(z, \tau), for z, \tau satisfying certain conditions, with precision P in O(M(P) \sqrt{P}) bit operations, where M(P) denotes the number of operations needed to multiply two complex P-bit numbers. We generalize an algorithm which computes specific values of the \theta function (the theta-constants) in asymptotically faster time; this gives us an algorithm to compute \theta(z, \tau) with precision P in O(M(P) log P) bit operations, for any \tau \in F and z reduced using the quasi-periodicity of \theta.
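For orientation, the naive O(M(P) \sqrt{P}) method referred to above is direct summation of the defining series; a short Python sketch (double precision only, with an arbitrary truncation parameter) is given below; the paper's contribution is the asymptotically faster algorithm, not this baseline.

```python
import cmath

def theta_naive(z, tau, terms=64):
    """Truncated series theta(z, tau) = sum_n exp(i*pi*tau*n^2 + 2*i*pi*n*z);
    requires Im(tau) > 0 so that the nome q has |q| < 1."""
    q = cmath.exp(1j * cmath.pi * tau)          # the nome
    w = cmath.exp(2j * cmath.pi * z)
    total = 1 + 0j
    for n in range(1, terms + 1):
        total += q ** (n * n) * (w ** n + w ** (-n))
    return total

# example: theta(0.1 + 0.2i, i); convergence is very fast since |q| = e^{-pi}
print(theta_naive(0.1 + 0.2j, 1j))
```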
Last updated:  2015-12-11
Linear codes with few weights from weakly regular bent functions based on a generic construction
Sihem Mesnager
We contribute to the knowledge of linear codes with few weights from special polynomials and functions. Substantial efforts (especially due to C. Ding) have been directed towards their study in the past few years. Such codes have several applications in secret sharing, authentication codes, association schemes and strongly regular graphs. Based on a generic construction of linear codes from mappings and by employing weakly regular bent functions, we provide a new class of linear p-ary codes with three weights, given together with its weight distribution. The class of codes presented in this paper is different from those known in the literature. It also contains some optimal codes meeting a certain bound on linear codes.
Last updated:  2016-06-02
A Practical Cryptanalysis of the Algebraic Eraser
Adi Ben-Zvi, Simon R. Blackburn, Boaz Tsaban
Anshel, Anshel, Goldfeld and Lemieux introduced the Colored Burau Key Agreement Protocol (CBKAP) as the concrete instantiation of their Algebraic Eraser scheme. This scheme, based on techniques from permutation groups, matrix groups and braid groups, is designed for lightweight environments such as RFID tags and other IoT applications. It is proposed as an underlying technology for ISO/IEC 29167-20. SecureRF, the company owning the trademark Algebraic Eraser, has presented the scheme to the IRTF with a view towards standardisation. We present a novel cryptanalysis of this scheme. For parameter sizes corresponding to claimed 128-bit security, our implementation recovers the shared key using less than 8 CPU hours, and less than 64MB of memory.
Last updated:  2015-11-14
Virtual Smart Cards: How to Sign with a Password and a Server
Jan Camenisch, Anja Lehmann, Gregory Neven, Kai Samelin
An important shortcoming of client-side cryptography on consumer devices is the poor protection of secret keys. Encrypting the keys under a human-memorizable password hardly offers any protection when the device is stolen. Trusted hardware tokens such as smart cards can provide strong protection of keys but are cumbersome to use. We consider the case where secret keys are used for digital signatures and propose a password-authenticated server-aided signature Pass2Sign protocol, where signatures are collaboratively generated by a device and a server, while the user authenticates to the server with a (low-entropy) password. Neither the server nor the device store enough information to create a signature by itself or to perform an offline attack on the password. The signed message remains hidden from the server. We argue that our protocol offers comparable security to trusted hardware, but without its inconveniences. We prove it secure in the universal composability (UC) framework in a very strong adaptive corruption model where, unlike standard UC, the adversary does not obtain past inputs and outputs upon corrupting a party. This is crucial to hide previously entered passwords and messages from the adversary when the device gets corrupted. The protocol itself is surprisingly simple: it is round-optimal, efficient, and relies exclusively on standard primitives such as hash functions and RSA. The security proof involves a novel random-oracle programming technique that may be of independent interest.
Last updated:  2016-01-25
Area-Efficient Hardware Implementation of the Optimal Ate Pairing over BN curves.
Anissa Sghaier, Loubna Ghammam, Medyen Zeghid, Sylvain Duquesne, Mohsen Machhout
To obtain an efficient asymmetric encryption scheme based on elliptic curves, hyperelliptic curves, pairings, etc., we have to go through an arithmetic optimization and then a hardware one. Taking into consideration the compromises of restricted environments, we should strike a balance between efficiency and memory resources. For this reason, we studied the mathematical aspect of pairing computation and gave new developments of the methods that compute the hard part of the final exponentiation in [2]. These new methods save an important number of temporary variables, and they are certainly faster than the existing ones. In this paper, we also present a new way of computing the Miller loop, more precisely in the doubling algorithm. We use this result together with the arithmetic optimization presented in [2]. Then, we apply hardware optimization to find a satisfactory design which gives the best compromise between area occupation and execution time. Our hardware implementation on a Virtex-6 FPGA (XC6VHX250T) uses only 5976 slices and 30 DSPs, which is fewer resources than state-of-the-art hardware implementations, so we can say that our approach copes with the limited resources of restricted environments.
Last updated:  2017-03-30
Device-Enhanced Password Protocols with Optimal Online-Offline Protection
Stanislaw Jarecki, Hugo Krawczyk, Maliheh Shirvanian, Nitesh Saxena
We introduce a setting that we call Device-Enhanced PAKE (DE-PAKE), where PAKE (password-authenticated key exchange) protocols are strengthened against online and offline attacks through the use of an auxiliary device that aids the user in the authentication process. We build such schemes and show that their security, properly formalized, achieves maximal-attainable resistance to online and offline attacks in both PKI and PKI-free settings. In particular, an online attacker must guess the user's password and also corrupt the user's auxiliary device to authenticate, while an attacker who corrupts the server cannot learn the users' passwords via an offline dictionary attack. Our solutions do not require secure channels, and nothing (in an information-theoretic sense) is learned about the password by the device (or a malicious software running on the device) or over the device-client channel, even without any external protection of this channel. An attacker taking over the device still requires a full online attack to impersonate the user. Importantly, our DE-PAKE scheme can be deployed at the user end without the need to modify the server and without the server having to be aware that the user is using a DE-PAKE scheme. In particular, the schemes can work with standard servers running the usual password-over-TLS authentication.
Last updated:  2015-12-17
Ring Signature Confidential Transactions for Monero
Shen Noether
This article introduces a method of hiding transaction amounts in the strongly decentralized anonymous cryptocurrency Monero. Similar to Bitcoin, Monero is a cryptocurrency which is distributed through a proof-of-work ``mining'' process. The original Monero protocol was based on CryptoNote, which uses ring signatures and one-time keys to hide the destination and origin of transactions. Recently the technique of using a commitment scheme to hide the amount of a transaction has been discussed and implemented by Bitcoin Core developer Gregory Maxwell. In this article, a new type of ring signature, a Multi-layered Linkable Spontaneous Anonymous Group signature, is described which allows for hidden amounts, origins and destinations of transactions with reasonable efficiency and verifiable, trustless coin generation. The author would like to note that early drafts of this were publicized in the Monero community and on the Bitcoin research IRC channel. Blockchain-hashed drafts are available in \cite{Snoe}.
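To give intuition for how a commitment scheme can hide amounts while still letting anyone verify that inputs equal outputs, here is a toy Python sketch of Pedersen commitments in a small prime-order subgroup of Z_p^*; Monero's construction works over an elliptic-curve group combined with the ring signature described above, so the group, generators and numbers below are purely illustrative.

```python
# Toy Pedersen commitments: C(a, r) = g^r * h^a in the order-q subgroup of Z_p^*.
# In practice p, q are ~256-bit and h must be chosen so its discrete log w.r.t. g
# is unknown (e.g. by hashing to the group); here everything is tiny and unsafe.
p, q = 1019, 509            # safe prime p = 2q + 1
g, h = 4, 9                 # squares mod p, hence of order q

def commit(amount, blinding):
    return pow(g, blinding, p) * pow(h, amount, p) % p

amt_in, r_in = 10, 123
amt_out1, r_out1 = 7, 45
amt_out2, r_out2 = 3, 50    # 10 = 7 + 3; the amounts themselves stay hidden

c_in = commit(amt_in, r_in)
c_out = commit(amt_out1, r_out1) * commit(amt_out2, r_out2) % p

# inputs divided by outputs is a commitment to zero, i.e. a known power of g
diff = c_in * pow(c_out, -1, p) % p
assert diff == pow(g, (r_in - r_out1 - r_out2) % q, p)
```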
Last updated:  2016-06-07
On the Communication required for Unconditionally Secure Multiplication
Ivan Damgård, Jesper Buus Nielsen, Antigoni Polychroniadou, Michael Raskin
Many information-theoretic secure protocols are known for general secure multi-party computation, in the honest majority setting, and in the dishonest majority setting with preprocessing. All known protocols that are efficient in the circuit size of the evaluated function follow the same ''gate-by-gate'' design pattern: we work through an arithmetic (boolean) circuit on secret-shared inputs, such that after we process a gate, the output of the gate is represented as a random secret sharing among the players. This approach usually allows non-interactive processing of addition gates but requires communication for every multiplication gate. Thus, while information-theoretic secure protocols are very efficient in terms of computational work, they (seem to) require more communication and more rounds than computationally secure protocols. Whether this is inherent is an open and probably very hard problem. However, in this work we show that it is indeed inherent for protocols that follow the ''gate-by-gate'' design pattern. We present the following results: -In the honest majority setting, as well as for dishonest majority with preprocessing, any gate-by-gate protocol must communicate \Omega(n) bits for every multiplication gate, where n is the number of players. -In the honest majority setting, we show that one cannot obtain a bound that also grows with the field size. Moreover, for a constant number of players, amortizing over several multiplication gates does not allow us to save on the computational work, and - in a restricted setting - we show that this also holds for communication. All our lower bounds are met up to a constant factor by known protocols that follow the typical gate-by-gate paradigm. Our results imply that a fundamentally new approach must be found in order to improve the communication complexity of known protocols, such as BGW, GMW, SPDZ etc.
Last updated:  2016-02-08
Watermarking Cryptographic Capabilities
Aloni Cohen, Justin Holmgren, Ryo Nishimaki, Vinod Vaikuntanathan, Daniel Wichs
A watermarking scheme for programs embeds some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. In this work, we study the problem of watermarking various cryptographic programs such as pseudorandom function (PRF) evaluation, decryption, and signing. For example, given a PRF F, we create a marked program C~ that evaluates F(·). An adversary that gets C~ cannot come up with any program C* in which the mark is removed but which still evaluates the PRF correctly on even a small fraction of the inputs. The work of Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang (CRYPTO'01 and Journal of ACM 59(2)) shows that, assuming indistinguishability obfuscation (iO), such watermarking is impossible if the marked program C~ evaluates the original program with perfect correctness. In this work we show that, assuming iO, such watermarking is possible if the marked program C~ is allowed to err with even a negligible probability, which would be undetectable to the user. Our watermarking schemes are public key, meaning that we use a secret marking key to embed marks in programs, and a public detection key that allows anyone to detect marks in programs. Our schemes are secure against chosen program attacks where the adversary is given oracle access to the marking functionality. We emphasize that our security notion of watermark non-removability considers arbitrary adversarial strategies to modify the marked program, in contrast to prior work (Nishimaki, EUROCRYPT '13).
Last updated:  2016-03-21
Non-Malleable Multi-Prover Interactive Proofs and Witness Signatures
Vipul Goyal, Aayush Jain, Dakshita Khurana
We explore a new man-in-the-middle adversarial model for multi-prover interactive proofs (MIPs), and construct round-optimal, unconditionally secure, non-malleable MIPs. We compile from a large sub-class of Sigma protocols to a non-malleable MIP, avoiding the use of expensive NP-reductions to Graph Hamiltonicity or other NP-complete problems. Our compiler makes novel use of non-malleable codes - in particular, we rely on many-many non-malleable codes constructed recently by Chattopadhyay, Goyal and Li (STOC 2016). We introduce another (seemingly unrelated) primitive - witness signatures - motivated by the goal of removing central trust assumptions from cryptography. Witness signatures allow any party with a valid witness to an NP statement to sign a message on behalf of that statement. These signatures must be unforgeable - that is, signing a new message, even given several signatures, should be as hard as computing a witness to the NP statement itself. We first observe that most natural notions of witness signatures are impossible to achieve in the plain model. While still wanting to avoid a central trusted setup, we turn to the tamper proof hardware token model of Katz (Eurocrypt 2007). We show that non-malleable MIPs yield efficient, unconditional witness signatures in the hardware token model. However, our construction of unconditional witness signatures only supports bounded verification. We also obtain unbounded polynomial verification assuming the existence of one-way functions. Finally, we give a matching lower bound - obtaining unconditional unbounded-verifiable witness signatures with black-box extraction, is impossible even with access to an unbounded number of stateful tamper-proof hardware tokens.
Last updated:  2015-11-10
Affine-malleable Extractors, Spectrum Doubling, and Application to Privacy Amplification
Divesh Aggarwal, Kaave Hosseini, Shachar Lovett
The study of seeded randomness extractors is a major line of research in theoretical computer science. The goal is to construct deterministic algorithms which can take a ``weak" random source $X$ with min-entropy $k$ and a uniformly random seed $Y$ of length $d$, and output a string of length close to $k$ that is close to uniform and independent of $Y$. Dodis and Wichs~\cite{DW09} introduced a generalization of randomness extractors called non-malleable extractors ($\nmExt$) where $\nmExt(X,Y)$ is close to uniform and independent of $Y$ and $\nmExt(X,f(Y))$ for any function $f$ with no fixed points. We relax the notion of a non-malleable extractor and introduce what we call an affine-malleable extractor ($\AmExt: \F^n \times \F^d \mapsto \F$) where $\AmExt(X,Y)$ is close to uniform and independent of $Y$ and has some limited dependence on $\AmExt(X,f(Y))$ - that is, conditioned on $Y$, $(\AmExt(X,Y), \AmExt(X,f(Y)))$ is close to $(U, A \cdot U + B)$ where $U$ is uniformly distributed in $\F$ and $A, B \in \F$ are random variables independent of $U$. We show under a plausible conjecture in additive combinatorics (called the Spectrum Doubling Conjecture) that the inner-product function $\IP{\cdot,\cdot}:\F^n \times \F^n \mapsto \F$ is an affine-malleable extractor. As a modest justification of the conjecture, we show that a weaker version of the conjecture is implied by the widely believed Polynomial Freiman-Ruzsa conjecture. We also study the classical problem of privacy amplification, where two parties Alice and Bob share a weak secret $X$ of min-entropy $k$, and wish to agree on a secret key $R$ of length $m$ over a public communication channel completely controlled by a computationally unbounded attacker Eve. The main application of non-malleable extractors and their many variants has been in constructing secure privacy amplification protocols. We show that affine-malleable extractors along with affine-evasive sets can also be used to construct efficient privacy amplification protocols. We show that our protocol, under the Spectrum Doubling Conjecture, achieves near-optimal parameters and achieves additional security properties like source privacy that have been the focus of some recent results in privacy amplification.
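The candidate affine-malleable extractor is simply the inner product over a finite field; the short Python snippet below evaluates it for an arbitrary Mersenne-prime field (an illustrative choice, not a parameter from the paper), just to make the object concrete.

```python
import secrets

p = 2**61 - 1    # the field F (illustrative Mersenne prime)

def inner_product_extract(x, y):
    """<x, y> over F_p: X plays the weak source, Y the uniform seed."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % p

n = 8
x = [secrets.randbelow(p) for _ in range(n)]    # stand-in for the weak source
y = [secrets.randbelow(p) for _ in range(n)]    # uniformly random seed
print(inner_product_extract(x, y))
```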
Last updated:  2017-04-09
C$\emptyset$C$\emptyset$: A Framework for Building Composable Zero-Knowledge Proofs
Ahmed Kosba, Zhichao Zhao, Andrew Miller, Yi Qian, Hubert Chan, Charalampos Papamanthou, Rafael Pass, abhi shelat, Elaine Shi
Non-interactive zero-knowledge proofs are a powerful cryptographic primitive used in privacy-preserving protocols. We design and build C$\emptyset$C$\emptyset$, the first system enabling developers to build efficient, composable, non-interactive zero-knowledge proofs for generic, user-defined statements. C$\emptyset$C$\emptyset$ extends state-of-the-art SNARK constructions by applying known strengthening transformations to yield UC-composable zero-knowledge proofs suitable for modular use in larger cryptographic protocols. To attain fast practical performance, C$\emptyset$C$\emptyset$ includes a library of several ``SNARK-friendly'' cryptographic primitives. These primitives are used in the strengthening transformations in order to reduce the overhead of achieving composable security. Our open-source library of optimized arithmetic circuits for these functions is up to 40$\times$ more efficient than standard implementations and is thus of independent interest for use in other NIZK projects. Finally, we evaluate C$\emptyset$C$\emptyset$ on applications such as anonymous credentials, private smart contracts, and non-outsourceable proof-of-work puzzles and demonstrate 5$\times$ to 8$\times$ speedup in these application settings compared to naive implementations.
Last updated:  2019-07-10
Post-quantum key exchange - a new hope
Erdem Alkim, Léo Ducas, Thomas Pöppelmann, Peter Schwabe
In 2015, Bos, Costello, Naehrig, and Stebila (IEEE Security & Privacy 2015) proposed an instantiation of Ding's ring-learning-with-errors (Ring-LWE) based key-exchange protocol (also including the tweaks proposed by Peikert from PQCrypto 2014), together with an implementation integrated into OpenSSL, with the affirmed goal of providing post-quantum security for TLS. In this work we revisit their instantiation and stand-alone implementation. Specifically, we propose new parameters and a better suited error distribution, analyze the scheme's hardness against attacks by quantum computers in a conservative way, introduce a new and more efficient error-reconciliation mechanism, and propose a defense against backdoors and all-for-the-price-of-one attacks. By these measures and for the same lattice dimension, we more than double the security parameter, halve the communication overhead, and speed up computation by more than a factor of 8 in a portable C implementation and by more than a factor of 27 in an optimized implementation targeting current Intel CPUs. These speedups are achieved with comprehensive protection against timing attacks.
Last updated:  2016-04-28
Construction for de Bruijn Sequences with Large Orders
Junwu Dong, Dingyi Pei
Sequences generated by maximum-period nonlinear feedback shift registers are known as de Bruijn sequences. The problem of designing de Bruijn sequences has received considerable attention. There is only one full cycle in the state graph of a de Bruijn sequence. Most popular algorithms for generating de Bruijn sequences start from a nonsingular linear feedback shift register producing several shorter cycles in its state graph, then join them into one cycle. Unfortunately, the order $n$ of de Bruijn sequences obtained with this kind of algorithm has so far been small (usually $n \le 40$). We introduce a new concept of correlated cycles between the cycles in the state graph of an LFSR. Based on this concept we present an algorithm for constructing de Bruijn sequences with large orders (such as $n=128$). This is the first publication on designing de Bruijn sequences with such large orders.
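For very small orders, a de Bruijn sequence can be generated greedily with no cycle joining at all; the Python sketch below implements the classical 'prefer-one' construction as a baseline to fix terminology; it does not scale to the large orders targeted by the correlated-cycle construction above.

```python
def debruijn_prefer_one(n):
    """Classical greedy 'prefer-one' binary de Bruijn sequence of order n."""
    state = (0,) * n
    seen = {state}
    bits = [0] * n                              # start by writing n zeros
    while True:
        for bit in (1, 0):                      # try to append a 1 before a 0
            nxt = state[1:] + (bit,)
            if nxt not in seen:
                seen.add(nxt)
                bits.append(bit)
                state = nxt
                break
        else:                                   # both successors used: finished
            break
    return bits[:2 ** n]                        # cyclic sequence of length 2^n

s = debruijn_prefer_one(4)
windows = {tuple((s + s[:3])[i:i + 4]) for i in range(len(s))}
assert len(s) == 16 and len(windows) == 16      # every 4-bit window appears once
```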
Last updated:  2015-11-10
Do Distributed Differentially-Private Protocols Require Oblivious Transfer?
Vipul Goyal, Dakshita Khurana, Ilya Mironov, Omkant Pandey, Amit Sahai
We study the cryptographic complexity of two-party differentially-private protocols for a large natural class of boolean functionalities. Information theoretically, McGregor et al. [FOCS 2010] and Goyal et al. [Crypto 2013] demonstrated several functionalities for which the maximal possible accuracy in the distributed setting is significantly lower than that in the client-server setting. Goyal et al. [Crypto 2013] further showed that ``highly accurate'' protocols in the distributed setting for any non-trivial functionality in fact imply the existence of one-way functions. However, it has remained an open problem to characterize the exact cryptographic complexity of this class. In particular, we know that semi-honest oblivious transfer helps obtain optimally accurate distributed differential privacy. But we do not know whether the reverse is true. We study the following question: Does the existence of optimally accurate distributed differentially private protocols for any class of functionalities imply the existence of oblivious transfer? We resolve this question in the affirmative for the class of boolean functionalities that contain an XOR embedded on adjacent inputs. - We construct a protocol implementing oblivious transfer from any optimally accurate, distributed differentially private protocol for any functionality with a boolean XOR embedded on adjacent inputs. - While the previous result holds for optimally accurate protocols for any privacy parameter \epsilon > 0, we also give a reduction from oblivious transfer to distributed differentially private protocols computing XOR, for a constant small range of non-optimal accuracies and a constant small range of values of privacy parameter \epsilon. At the heart of our techniques is an interesting connection between optimally-accurate two-party protocols for the XOR functionality and noisy channels, which were shown by Crepeau and Kilian [FOCS 1988] to be sufficient for oblivious transfer.
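To make the distributed setting concrete, the sketch below shows the simplest (and far from optimally accurate) two-party differentially-private protocol for XOR: each party applies randomized response locally and the noisy bits are XORed. It is only a toy baseline illustrating the accuracy/privacy trade-off the paper studies, not one of its protocols or reductions.

```python
import math, random

def randomized_response(bit, eps):
    """Keep the bit with probability e^eps / (1 + e^eps), else flip it;
    this gives eps-differential privacy for that party's input."""
    keep = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < keep else 1 - bit

def noisy_xor(a_bit, b_bit, eps):
    """Two-party protocol: exchange locally randomized bits and XOR them.
    The answer is correct with probability p^2 + (1-p)^2 < 1, so accuracy
    is traded for privacy (and is far from the optimal achievable)."""
    return randomized_response(a_bit, eps) ^ randomized_response(b_bit, eps)

# rough empirical accuracy at eps = 1: about 0.61
trials = 10000
correct = sum(noisy_xor(1, 0, 1.0) == 1 for _ in range(trials))
print(correct / trials)
```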
Last updated:  2015-11-09
Linear Secret Sharing Schemes from Error Correcting Codes and Universal Hash Functions
Ronald Cramer, Ivan Bjerre Damgård, Nico Döttling, Serge Fehr, Gabriele Spini
We present a novel method for constructing linear secret sharing schemes (LSSS) from linear error correcting codes and linear universal hash functions in a blackbox way. The main advantage of this new construction is that the privacy property of the resulting secret sharing scheme essentially becomes independent of the code we use, only depending on its rate. This allows us to fully harness the algorithmic properties of recent code constructions such as efficient encoding and decoding or efficient list-decoding. Choosing the error correcting codes and universal hash functions involved carefully, we obtain solutions to the following open problems: - A linear near-threshold secret sharing scheme with both linear time sharing and reconstruction algorithms and large secrets (i.e. secrets of size Omega(n)). Thus, the computational overhead per shared bit in this scheme is constant. - An efficiently reconstructible robust secret sharing scheme for n/3 <= t < (1 - epsilon) n/2 corrupted players (for any constant epsilon > 0) with shares of optimal size O(1 + sec / n) and secrets of size Omega(n + sec), where sec is the security parameter.
Last updated:  2015-12-24
Note on the RKA security of Continuously Non-Malleable Key-Derivation Function from PKC 2015
Eiichiro Fujisaki, Keita Xagawa
Qin, Liu, Yuen, Deng, and Chen (PKC 2015) gave a new security notion of key-derivation function (KDF), continuous non-malleability with respect to $\Phi$-related-key attacks ($\Phi$-CNM), and its application to RKA-secure public-key cryptographic primitives. They constructed a KDF from cryptographic primitives and showed that the obtained KDF is $\Phi_{hoe\&iocr}$-CNM, where $\Phi_{hoe\&iocr}$ contains the identity function, the constant functions, and functions that have high output-entropy (HOE) and input-output collision-resistance (IOCR) simultaneously. This short note disproves the security of their KDF by giving $\Phi_{hoe\&iocr}$-RKAs by exploiting the components of their KDF. We note that their proof is still correct for $\Phi$-CNM for a subset of $\Phi_{hoe\&iocr}$; for example the KDF satisfies $\Phi_{poly(d)}$-CNM, in which an adversary can tamper with a secret by using polynomials of degree at most $d$.
Last updated:  2016-04-26
Efficient Signature Schemes from R-LWE
Ting Wang, Jianping Yu, Guoqiang Han, Peng Zhang
Compared to classical cryptography, lattice-based cryptography is more secure, flexible and simple, and it is believed to be secure against quantum computers. In this paper, an efficient signature scheme is proposed from the ring learning with errors (R-LWE) problem, which avoids sampling from discrete Gaussians and has the characteristic of a much simpler description, among others. The scheme is then implemented in C/C++ and compared with the RSA signature scheme in detail. Additionally, a linearly homomorphic signature scheme without trapdoor is proposed from the R-LWE assumption. The security of the above two schemes is reducible to the worst-case hardness of shortest vectors on ideal lattices. The security analyses indicate that the proposed schemes are unforgeable under the chosen-message attack model, and the efficiency analyses also show that the above schemes are much more efficient than other related signature schemes.
Last updated:  2016-03-01
Chicken or the Egg - Computational Data Attacks or Physical Attacks
Julien Allibert, Benoit Feix, Georges Gagnerot, Ismael Kane, Hugues Thiebeauld, Tiana Razafindralambo
Side-channel and fault injection analyses are well-known domains that have been used for years to evaluate the resistance of hardware-based products. These techniques remain a threat to the secret assets embedded in products like smart cards or Systems on Chip. But most of these products nowadays contain several strong protections rendering side-channel and fault attacks difficult or inefficient. For two decades embedded cryptography for the payment, pay TV and identity areas has relied heavily on secure elements. Nowadays more alternative solutions on mobile phones appear, with the aim of offering software-based security services, including payment and security solutions such as HCE and DRM products. Cryptographic operations running in such applications are then most often executed on unprotected hardware. Therefore the binary code is often accessible to attackers, who can use static and dynamic reverse engineering techniques to extract and analyse operations, including data modifications acting as faults. Hence, hiding or obfuscating secrets and/or whitebox cryptography become strong alternatives to secure element storage for assets. We explain in this paper how, directly from the binary or with the extracted source code, we can perform statistical and fault analyses using techniques similar to those used in hardware-based security, in particular side-channel and fault injection techniques. Using our tool and virtualization technique, an attacker can emulate, trace and modify any chosen computational data (memory or register manipulation, any machine-language operation) executed in the mobile application. This means the attacker is no longer restricted by physical limitations imposing a leakage model (and additional noise) or making fault injection subject to physical constraints. Hence statistical and fault attacks can potentially go further on software-based implementations than on hardware-based devices. As a consequence, complex techniques like high-order, collision and horizontal statistical attacks become very efficient and can be easily performed on the computational data execution traces. A similar consequence applies to fault injection attacks. Hence the term statistical and fault analysis on computational data becomes more appropriate, and one can wonder which came first: computational data or physical attack techniques? Chicken or the egg?
Last updated:  2015-11-09
Malicious Keccak
Pawel Morawiecki
In this paper, we investigate Keccak --- the cryptographic hash function adopted as the SHA-3 standard. We propose a malicious variant of the function, where new round constants are introduced. We show that for such a variant, collision and preimage attacks are possible. We also identify a class of weak keys for the malicious Keccak working in the MAC mode. The ideas presented in the paper were verified by implementing the attacks on the function with a 128-bit hash.
Last updated:  2017-08-15
Patchable Indistinguishability Obfuscation: iO for Evolving Software
Prabhanjan Ananth, Abhishek Jain, Amit Sahai
In this work, we introduce patchable indistinguishability obfuscation: our notion adapts the notion of indistinguishability obfuscation (iO) to a very general setting where obfuscated software evolves over time. We model this broadly by considering software patches P as arbitrary Turing Machines that take as input the description of a Turing Machine M, and output a new Turing Machine description M' = P(M). Thus, a short patch P can cause changes everywhere in the description of M and can even cause the description length of the machine to increase by an arbitrary polynomial amount. We further consider multi-program patchable indistinguishability obfuscation where a patch is applied not just to a single machine M, but to an unbounded set of machines M_1,..., M_n to yield P(M_1),.., P(M_n). We consider both single-program and multi-program patchable indistinguishability obfuscation in a setting where there are an unbounded number of patches that can be adaptively chosen by an adversary. We show that sub-exponentially secure iO for circuits and sub-exponentially secure re-randomizable encryption schemes imply single-program patchable indistinguishability obfuscation; and we show that sub-exponentially secure iO for circuits and sub-exponentially secure DDH imply multi-program patchable indistinguishability obfuscation. At the heart of our results is a new notion of splittable iO that allows us to transform any iO scheme into a patchable one. Finally, we exhibit some simple applications of patchable indistinguishability obfuscation, to demonstrate how these concepts can be applied.
Last updated:  2015-11-09
Implementation Attacks on Post-Quantum Cryptographic Schemes
Mostafa Taha, Thomas Eisenbarth
Post-quantum cryptographic schemes have been developed in the last decade in response to the rise of quantum computers. Fortunately, several schemes have been developed with quantum resistance. However, there is very little effort in evaluating and comparing these schemes in embedded settings. Low-cost embedded devices represent a highly constrained environment that challenges all post-quantum cryptographic schemes. Moreover, there are even fewer efforts in evaluating the security of these schemes against implementation attacks, including side-channel and fault attacks. It is commonly accepted that any embedded cryptographic module that is built without a sound countermeasure can be easily broken. Therefore, we investigate the question: are we ready to implement post-quantum cryptographic schemes on embedded systems? We present an exhaustive survey of research efforts in designing embedded modules of post-quantum cryptographic schemes and the efforts in securing these modules against implementation attacks. Unfortunately, the study shows that we are not ready yet to implement any post-quantum cryptographic scheme in practical embedded systems. There is still a considerable amount of research that needs to be conducted before reaching a satisfactory level of security.
Last updated:  2016-10-18
Delegating RAM Computations with Adaptive Soundness and Privacy
Prabhanjan Ananth, Yu-Chi Chen, Kai-Min Chung, Huijia Lin, Wei-Kai Lin
We consider the problem of delegating RAM computations over persistent databases. A user wishes to delegate a sequence of computations over a database to a server, where each computation may read and modify the database and the modifications persist between computations. Delegating RAM computations is important as it has the distinct feature that the run-time of a computation may be sub-linear in the size of the database. We present the first RAM delegation scheme that provides both soundness and privacy guarantees in the adaptive setting, where the sequence of delegated RAM programs is chosen adaptively, depending potentially on the encodings of the database and previously chosen programs. Prior works either achieved only adaptive soundness without privacy [Kalai and Paneth, ePrint'15], or only security in the selective setting where all RAM programs are chosen statically [Chen et al. ITCS'16, Canetti and Holmgren ITCS'16]. Our scheme assumes the existence of indistinguishability obfuscation ($\iO$) for circuits and the decisional Diffie-Hellman (DDH) assumption. However, our techniques are quite general and, in particular, might be applicable even in settings where iO is not used. We provide a "security lifting technique" that "lifts" any proof of selective security satisfying certain special properties into a proof of adaptive security, for arbitrary cryptographic schemes. We then apply this technique to the delegation scheme of Chen et al. and its selective security proof, obtaining that their scheme is essentially already adaptively secure. Because of the general approach, we can also easily extend to delegating parallel RAM (PRAM) computations. We believe that the security lifting technique can potentially find other applications and is of independent interest.
Last updated:  2015-11-09
NEON PQCrypto: Fast and Parallel Ring-LWE Encryption on ARM NEON Architecture
Reza Azarderakhsh, Zhe Liu, Hwajeong Seo, Howon Kim
Recently, the ARM NEON architecture has occupied a significant share of tablet and smartphone markets due to its low cost and high performance. This paper studies efficient techniques for lattice-based cryptography on ARM processors and presents the first implementation of ring-LWE encryption on the ARM NEON architecture. In particular, we propose a vectorized version of the Iterative Number Theoretic Transform (NTT) for high-speed computation. We present a 32-bit variant of the SAMS2 technique, originally proposed at CHES'15, for fast reduction. A combination of the proposed and previous optimizations results in a very efficient implementation. For the 128-bit security level, our ring-LWE implementation requires only 145,200 clock cycles for encryption and 32,800 cycles for decryption. These results are more than 17.6 times faster than the fastest ECC implementation on ARM NEON with the same security level.
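The workhorse of such implementations is the iterative NTT over Z_q. Below is a minimal, unvectorized Python sketch of a radix-2 Cooley-Tukey NTT and its inverse, with ring-LWE-style toy parameters (n = 256, q = 7681, q = 1 mod 2n) and an on-the-fly root-of-unity search; none of this reproduces the paper's NEON code or its SAMS2 reduction.

```python
def bit_reverse(a):
    """In-place bit-reversal permutation (len(a) must be a power of two)."""
    n, j = len(a), 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]

def ntt(a, q, root):
    """In-place iterative Cooley-Tukey NTT over Z_q; `root` is a primitive
    n-th root of unity mod q, where n = len(a) is a power of two."""
    n = len(a)
    bit_reverse(a)
    length = 2
    while length <= n:
        w_len = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for i in range(start, start + length // 2):
                u, v = a[i], a[i + length // 2] * w % q
                a[i], a[i + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_len % q
        length <<= 1

def intt(a, q, root):
    """Inverse NTT: forward transform with root^{-1}, then scale by n^{-1}."""
    n = len(a)
    ntt(a, q, pow(root, q - 2, q))
    n_inv = pow(n, q - 2, q)
    for i in range(n):
        a[i] = a[i] * n_inv % q

def find_root(q, n, q1_factors):
    """Derive a primitive n-th root of unity from a generator of Z_q^*."""
    for g in range(2, q):
        if all(pow(g, (q - 1) // f, q) != 1 for f in q1_factors):
            return pow(g, (q - 1) // n, q)

q, n = 7681, 256                     # toy parameters; q - 1 = 2^9 * 3 * 5
root = find_root(q, n, (2, 3, 5))
a = list(range(n))
b = list(a)
ntt(b, q, root)
intt(b, q, root)
assert b == a                        # round trip recovers the input
```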
Last updated:  2016-03-28
Variations to the cryptographic algorithms AES and TWOFISH
P. Freyre, N. Díaz, O. Cuellar
The cryptographic algorithms AES and Twofish guarantee a high diffusion by the use of fixed 4x4 MDS matrices. In this article variations to the algorithms AES and Twofish are presented. They allow the encryption and decryption process to be carried out with MDS matrices selected randomly from the set of all MDS matrices by means of the proposed algorithm. A new key schedule with high diffusion is designed for the algorithm AES. In addition, the proposed algorithm generates a new S-box that depends on the key.
Last updated:  2015-11-06
De Bruijn Sequences from Symmetric Shift Registers
Ming Li, Mingxing Wang, Dongdai Lin
We consider symmetric Feedback Shift Registers (FSRs), especially a special class of symmetric FSRs (we call them scattered symmetric FSRs), and construct a large class of De Bruijn sequences from them. It is shown that at least O(2^((n-6)(log n)/2)) De Bruijn sequences of order n can be constructed from just one n-stage scattered symmetric FSR. To generate the next bit in the De Bruijn sequence from the current state, it requires no more than 2n comparisons and n+1 FSR shifts. By further analysing the cycle structure of the scattered symmetric FSRs, other methods for constructing De Bruijn sequences are suggested.
Last updated:  2016-06-04
Revisiting the Cryptographic Hardness of Finding a Nash Equilibrium
Sanjam Garg, Omkant Pandey, Akshayaram Srinivasan
The exact hardness of computing a Nash equilibrium is a fundamental open question in algorithmic game theory. This problem is complete for the complexity class \ppad. It is well known that problems in \ppad\ cannot be \np-complete unless $\np=\conp$. Therefore, a natural direction is to reduce the hardness of \ppad\ to the hardness of problems used in cryptography. Bitansky, Paneth, and Rosen [FOCS 2015] prove the hardness of \ppad\ assuming the existence of quasi-polynomially hard indistinguishability obfuscation and sub-exponentially hard one-way functions. This leaves open the possibility of basing \ppad\ hardness on simpler, polynomially hard, computational assumptions. We make further progress in this direction and reduce \ppad\ hardness directly to polynomially hard assumptions. Our first result proves hardness of \ppad\ assuming the existence of {\em polynomially hard} indistinguishability obfuscation (\io) and one-way permutations. While this improves upon Bitansky et al.'s work, it does not give us a reduction to simpler, polynomially hard computational assumptions because constructions of \io\ inherently seem to require assumptions with sub-exponential hardness. In contrast, {\em public key functional encryption} is a much simpler primitive and does not suffer from this drawback. Our second result shows that $\ppad$ hardness can be based on {\em polynomially hard} compact public key functional encryption and one-way permutations. Our results further demonstrate the power of polynomially hard compact public key functional encryption, which is believed to be weaker than indistinguishability obfuscation. Our techniques are general and we expect them to have various applications.
Last updated:  2015-11-05
Fault Analysis on the Stream Ciphers LILI-128 and Achterbahn
Dibyendu Roy, Sourav Mukhopadhyay
LILI-128 is a clock-controlled stream cipher based on two LFSRs, with one clock control function and one non-linear filter function. The clocking of the second LFSR is controlled by the first LFSR. In this paper we propose a fault algebraic attack on the LILI-128 stream cipher. We first recover the state bits of the first LFSR by injecting a single bit fault into the first LFSR. After that we recover the second LFSR state bits by following an algebraic cryptanalysis technique. We also propose a fault attack on the Achterbahn stream cipher, which is based on 8 NLFSRs, 8 LFSRs and one non-linear combining function. We first inject a single bit fault into NLFSR-A and then observe the normal and faulty keystream bits to recover almost all the state bits of NLFSR-A after the key initialization phase. One can apply our technique to the other NLFSRs B, C, D to recover their state bits as well.
Last updated:  2016-02-17
An appendix for a recent paper of Kim
Razvan Barbulescu
This appendix proposes an improvement of Kim's exTNFS. It has been merged into the original paper of Taechan Kim, ePrint 2015/1027 (version 1).
Last updated:  2015-11-05
Cybersecurity in an era with quantum computers: will we be ready?
Michele Mosca
Quantum computers will break currently deployed public-key cryptography, and significantly weaken symmetric key cryptography, which are pillars of modern-day cybersecurity. Thus, before large-scale quantum computers are built, we need to migrate our systems and practices to ones that cannot be broken by quantum computers. For systems that aim to provide long-term confidentiality, this migration should happen even sooner. There are viable options for quantum-proofing our cryptographic infrastructure, but the road ahead is neither easy nor fast. Impressive progress in developing the building blocks of a fault-tolerant scalable quantum computer indicates that the prospect of a large-scale quantum computer is a medium-term threat. For example, I estimate a $1/2$ chance of breaking RSA-2048 by $2031$. In this note, I briefly overview the problem, the solutions and some of the next steps.
Last updated:  2015-11-05
Succinct Adaptive Garbled RAM
Ran Canetti, Yilei Chen, Justin Holmgren, Mariana Raykova
We show how to garble a large persistent database and then garble, one by one, a sequence of adaptively and adversarially chosen RAM programs that query and modify the database in arbitrary ways. Still, it is guaranteed that the garbled database and programs reveal only the outputs of the programs when run in sequence on the database. The runtime, space requirements and description size of the garbled programs are proportional only to those of the plaintext programs and the security parameter. We assume indistinguishability obfuscation for circuits and poly-to-one collision-resistant hash functions. The latter can be constructed based on standard algebraic assumptions such as the hardness of discrete log or factoring. In contrast, all previous garbling schemes with persistent data were shown secure only in the static setting where all the programs are known in advance. As an immediate application, our scheme is the first to provide a way to outsource large databases to untrusted servers, and later query and update the database over time in a private and verifiable way, with complexity and description size proportional to those of the unprotected queries. Our scheme extends the non-adaptive RAM garbling scheme of Canetti and Holmgren [ITCS 2016]. We also define and use a new primitive, called adaptive accumulators, which is an adaptive alternative to the positional accumulators of Koppula et al [STOC 2015] and somewhere statistical binding hashing of Hubacek and Wichs [ITCS 2015]. This primitive might well be useful elsewhere.
Last updated:  2018-01-09
Practical Witness Encryption for Algebraic Languages Or How to Encrypt Under Groth-Sahai Proofs
David Derler, Daniel Slamanig
Witness encryption (WE) is a recent powerful encryption paradigm, which allows one to encrypt a message using the description of a hard problem (a word in an NP language), so that anyone who knows a solution to this problem (a witness) can efficiently decrypt the ciphertext. Recent work focuses on constructing WE for NP-complete languages (and thus all of NP). While this rich expressiveness allows flexibility w.r.t. applications, it makes existing instantiations impractical. It is therefore interesting to study practical variants of WE schemes for subsets of NP that are still expressive enough for many cryptographic applications. We show that such WE schemes can be generically constructed from smooth projective hash functions (SPHFs). In terms of concrete instantiations of SPHFs (and thus WE), we target languages of statements proven in the popular Groth-Sahai (GS) non-interactive witness-indistinguishable/zero-knowledge proof framework. This allows us to provide a novel way to encrypt: encryption is with respect to a GS proof, and efficient decryption can only be performed by the respective prover. The constructions obtained in this way are entirely practical. To illustrate our techniques, we apply them in the context of privacy-preserving exchange of information.
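For intuition about the SPHF-based approach (independent of the Groth-Sahai instantiation targeted in the paper), the sketch below builds a witness encryption scheme for the toy DDH language L = {(u, v) : u = g^r, v = h^r}: the encryptor computes the SPHF hash of the word without any witness, while anyone who knows the witness r recovers the same value from the projection key alone. The group parameters, the single safe-prime group and the SHA-256 key derivation are illustrative assumptions, not choices made in the paper.

```python
# A minimal SPHF-style witness encryption for the toy DDH language
# L = {(u, v) : u = g^r, v = h^r} over a small safe-prime group.
# Toy parameters for illustration only; nowhere near a secure group size.
import secrets, hashlib

p = 2039                      # safe prime, p = 2q + 1
q = 1019
g = 4                         # generates the order-q subgroup
h = pow(g, 123, p)            # in a real setup, log_g(h) would be unknown

def sphf_keygen():
    # hashing key (alpha, beta); projection key hp = g^alpha * h^beta
    alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
    hp = (pow(g, alpha, p) * pow(h, beta, p)) % p
    return (alpha, beta), hp

def encrypt(word, message):
    """Encrypt w.r.t. a word (u, v) claimed to lie in L."""
    u, v = word
    (alpha, beta), hp = sphf_keygen()
    hash_val = (pow(u, alpha, p) * pow(v, beta, p)) % p      # SPHF hash, no witness needed
    pad = hashlib.sha256(hash_val.to_bytes(32, "big")).digest()
    return hp, bytes(a ^ b for a, b in zip(message, pad))

def decrypt(hp, ct, r):
    """Anyone who knows the witness r with (u, v) = (g^r, h^r) can decrypt."""
    hash_val = pow(hp, r, p)                                  # projected hash = (g^a h^b)^r
    pad = hashlib.sha256(hash_val.to_bytes(32, "big")).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

r = secrets.randbelow(q)                                      # the witness
word = (pow(g, r, p), pow(h, r, p))                           # a word in L
hp, ct = encrypt(word, b"hello witness encryption")
assert decrypt(hp, ct, r) == b"hello witness encryption"
```

Correctness follows from the SPHF identity u^alpha * v^beta = (g^alpha * h^beta)^r; smoothness (hiding the message when the word is outside L) is what gives the scheme its security in the SPHF framework.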
Last updated:  2018-10-09
Quantum One-Time Memories from Stateless Hardware
Anne Broadbent, Sevag Gharibian, Hong-Sheng Zhou
A central tenet of theoretical cryptography is the study of the minimal assumptions required to implement a given cryptographic primitive. One such primitive is the one-time memory (OTM), introduced by Goldwasser, Kalai, and Rothblum [CRYPTO 2008], which is a classical functionality modeled after a non-interactive 1-out-of-2 oblivious transfer, and which is complete for one-time classical and quantum programs. It is known that secure OTMs do not exist in the standard model in both the classical and quantum settings. Here, we show how to use quantum information, together with the assumption of stateless (i.e., reusable) hardware tokens, to build statistically secure OTMs. This is in sharp contrast with the classical case, where stateless hardware tokens alone cannot yield OTMs. In addition, our scheme is technologically simple. We prove security in the quantum universal composability framework, employing semi-definite programming results of Molina, Vidick and Watrous [TQC 2013] and combinatorial techniques of Pastawski et al. [Proc. Natl. Acad. Sci. 2012].
Last updated:  2016-05-24
Revisiting Secure Two-Party Computation with Rational Players
Uncategorized
Arpita Maitra, Goutam Paul, Asim K. Pal
Show abstract
Uncategorized
A seminal result of Cleve (STOC 1986) showed that fairness, in general, is impossible to achieve in two-party computation if one of the parties is malicious. Later, Gordon et al. (STOC 2008, JACM 2011) observed that there exist two distinct classes of functions for which fairness can be achieved: any function without an embedded XOR, and a particular function containing an embedded XOR. In this paper, we revisit both classes of functions in two-party computation with rational players for the first time. We observe that the protocols proposed by Gordon et al. achieve fairness in the non-rational setting only. In this direction, we design two protocols, one for the millionaires' problem, i.e., the greater-than function (any function without an embedded XOR can be converted to this function), and the other for the particular embedded-XOR function of Gordon et al., and show that with rational players our protocols achieve fairness, correctness and strict Nash equilibrium under a suitable choice of parameters in the complete-information game setting. The dealer is offline in both of our protocols, in contrast with the work of Groce et al. (Eurocrypt 2012), which shows fairness and Bayesian Nash equilibrium in two-party computation with rational players for arbitrary functions in an incomplete-information game setting.
Last updated:  2015-11-04
Barriers to Black-Box Constructions of Traitor Tracing Systems
Bo Tang, Jiapeng Zhang
Reducibility between different cryptographic primitives is a fundamental problem in modern cryptography. As one such primitive, traitor tracing systems help content distributors recover the identities of users who colluded in building a pirate decryption box by tracing that box. We present the first negative result on designing efficient traitor tracing systems via black-box constructions from symmetric cryptographic primitives, e.g. one-way functions. More specifically, we show that any secure traitor tracing scheme in the random oracle model must satisfy $\ell_k\cdot\ell_c^2\ge\widetilde{\Omega}(n)$, where $\ell_k$ is the length of a user key, $\ell_c$ is the length of a ciphertext and $n$ is the number of users, under the assumption that the scheme does not access the oracle to generate user keys. To the best of our knowledge, almost all practical (non-artificial) cryptographic schemes (not limited to traitor tracing systems) obtained via black-box constructions satisfy this assumption. Thus, our negative results indicate that most of the standard black-box reductions in cryptography cannot help construct a more efficient traitor tracing system. We prove our results by extending the connection between traitor tracing systems and differentially private database sanitizers to the setting with random oracle access. We then prove the lower bound for traitor tracing schemes by constructing a differentially private sanitizer that queries the random oracle only polynomially many times. In order to reduce the query complexity of the sanitizer, we prove a large deviation bound for decision forests, which might be of independent interest.
Last updated:  2018-03-12
Indifferentiability of 8-Round Feistel Networks
Uncategorized
Yuanxi Dai, John Steinberger
Show abstract
Uncategorized
We prove that a balanced 8-round Feistel network is indifferentiable from a random permutation. This result comes on the heels of (and is part of the same body of work as) a 10-round indifferentiability result for Feistel networks recently announced by the same team of authors. The current 8-round simulator achieves security, query complexity and runtime similar to those of the 10-round simulator and is not significantly more involved. The security of our simulator is also slightly better than that of the 14-round simulator of Holenstein et al., for essentially the same runtime and query complexity.
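For readers who want to see the object being analyzed, the sketch below builds a balanced 8-round Feistel permutation from independent public round functions; the indifferentiability statement concerns exactly this construction when the round functions are modelled as random oracles. Instantiating the round functions with SHA-256 here is purely illustrative.

```python
# Balanced Feistel network over 2n-bit blocks built from r independent round
# functions; the 8-round instantiation is the object shown indifferentiable
# from a random permutation. Round functions are modelled with SHA-256 here
# for illustration only.
import hashlib

N = 8          # half-block size in bytes
ROUNDS = 8

def round_fn(i, x):
    """i-th public round function F_i: {0,1}^{8N} -> {0,1}^{8N}."""
    return hashlib.sha256(bytes([i]) + x).digest()[:N]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block):
    L, R = block[:N], block[N:]
    for i in range(ROUNDS):
        L, R = R, xor(L, round_fn(i, R))   # (L, R) -> (R, L xor F_i(R))
    return L + R

def feistel_decrypt(block):
    L, R = block[:N], block[N:]
    for i in reversed(range(ROUNDS)):
        L, R = xor(R, round_fn(i, L)), L   # invert one round
    return L + R

msg = bytes(range(2 * N))
ct = feistel_encrypt(msg)
assert feistel_decrypt(ct) == msg
```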
Last updated:  2015-11-03
Black-Box Parallel Garbled RAM
Steve Lu, Rafail Ostrovsky
In 1982, Yao introduced the fundamental technique of ``circuit garbling'', which became a central building block in cryptography. Recently, the question of garbling general random-access memory (RAM) programs has received a lot of attention in the literature, where garbling encrypted data can be done separately from garbling the program(s) that execute on this (garbled) RAM. The most recent results of Garg, Lu, and Ostrovsky (FOCS 2015) achieve garbled RAM with black-box use of any one-way function and poly-logarithmic overhead for data and program garbling in all the relevant parameters, including program run-time. The advantage of their solution is that large data can be garbled first and act as persistent garbled storage (e.g., in the cloud), and programs can later be garbled and sent to be executed on this garbled database in a non-interactive manner. One of the main advantages of cloud computing is not only its large storage but also its large number of parallel processors. Despite multiple successful efforts on parallelizing (interactive) Oblivious RAM, the non-interactive garbling of parallel programs remained open until very recently. Specifically, Boyle, Chung and Pass, in their upcoming TCC 2016 paper (see their recently updated eprint version), have shown how to garble PRAM programs with poly-logarithmic (parallel) overhead assuming non-black-box use of identity-based encryption. Whether such a strong assumption, and its non-black-box use, are actually needed remained an open question. In this paper, we resolve this important open question and show how to garble parallel programs with only black-box use of one-way functions and only poly-logarithmic overhead in the (parallel) running time. Our result works for any number of parallel processors.
Last updated:  2015-11-03
Public Verifiability in the Covert Model (Almost) for Free
Vladimir Kolesnikov, Alex J. Malozemoff
The covert security model (Aumann and Lindell, TCC 2007) offers an important security/efficiency trade-off: a covert player may arbitrarily cheat, but is caught with a certain fixed probability. This permits more efficient protocols than the malicious setting while still giving meaningful security guarantees. However, one drawback is that cheating cannot be proven to a third party, which prevents the use of covert protocols in many practical settings. Recently, Asharov and Orlandi (ASIACRYPT 2012) enhanced the covert model by allowing the honest player to generate a \emph{proof of cheating}, checkable by any third party. Their model, which we call the PVC (\emph{publicly verifiable covert}) model, offers a very compelling trade-off. Asharov and Orlandi (AO) propose a practical protocol in the PVC model, which, however, relies on a specific expensive oblivious transfer (OT) protocol incompatible with OT extension. In this work, we improve the performance of the PVC model by constructing a PVC-compatible OT extension as well as making several practical improvements to the AO protocol. As compared to the state-of-the-art OT extension-based two-party covert protocol, our PVC protocol adds relatively little: four signatures and an $\approx 67\%$ wider OT extension matrix. This is a significant improvement over the AO protocol, which requires public-key-based OTs per input bit. We present detailed estimates showing (up to orders of magnitude) concrete performance improvements over the AO protocol and a recent malicious protocol.
Last updated:  2015-11-02
Cryptanalysis of A Privacy-Preserving Smart Metering Scheme Using Linkable Anonymous Credential
Haipeng Qu, Peng Shang, Xi-Jun Lin, Lin Sun
To achieve effective privacy protection in smart grid systems, various approaches have been proposed that combine information security technology with the smart grid's new features. Diao et al. proposed a privacy-preserving scheme using a linkable anonymous credential based on the CL signature, and claimed identity anonymity, message authentication and traceability of broken smart meters. In this paper, we present a forgery attack showing that the protocol fails to provide message authentication and unforgeability. We conclude that the scheme does not meet basic security requirements and has little practical value.
Last updated:  2015-11-04
Bucket ORAM: Single Online Roundtrip, Constant Bandwidth Oblivious RAM
Christopher Fletcher, Muhammad Naveed, Ling Ren, Elaine Shi, Emil Stefanov
Known Oblivious RAM (ORAM) constructions achieve either optimal bandwidth blowup or optimal latency (as measured by online roundtrips), but not both. We are the first to demonstrate an ORAM scheme, called Bucket ORAM, which attains the best of both worlds. Bucket ORAM simultaneously achieves a single online roundtrip as well as constant overall bandwidth blowup.
Last updated:  2015-11-02
Déjà Q: Encore! Un Petit IBE
Uncategorized
Hoeteck Wee
Show abstract
Uncategorized
We present an identity-based encryption (IBE) scheme in composite-order bilinear groups with essentially optimal parameters: the ciphertext overhead and the secret key are one group element each and decryption requires only one pairing. Our scheme achieves adaptive security and anonymity under standard decisional subgroup assumptions as used in Lewko and Waters (TCC '10). Our construction relies on a novel extension to the Deja Q framework of Chase and Meiklejohn (Eurocrypt '14).
Last updated:  2015-10-30
Optimal Computational Split-state Non-malleable Codes
Uncategorized
Divesh Aggarwal, Shashank Agrawal, Divya Gupta, Hemanta K. Maji, Omkant Pandey, Manoj Prabhakaran
Show abstract
Uncategorized
Non-malleable codes are a generalization of classical error-correcting codes where the act of ``corrupting'' a codeword is replaced by a ``tampering'' adversary. Non-malleable codes guarantee that the message contained in the tampered codeword is either the original message m or a completely unrelated one. In the common split-state model, the codeword consists of multiple blocks (or states), and each block is tampered with independently. The central goal in the split-state model is to construct high-rate non-malleable codes against all functions with only two states (which are necessary). Following a long and impressive line of work, constant-rate, two-state non-malleable codes against all functions were recently achieved by Aggarwal et al. (STOC 2015). Though constant, the rate of all known constructions in the split-state model is very far from optimal (even with more than two states). In this work, we consider the question of improving the rate of split-state non-malleable codes. In the ``information theoretic'' setting, it is not possible to go beyond rate 1/2. We therefore focus on the standard computational setting. In this setting, each tampering function is required to be efficiently computable, and the message in the tampered codeword is required to be either the original message m or a ``computationally'' independent one. In this setting, assuming only the existence of one-way functions, we present a compiler which converts any poor-rate, two-state (sufficiently strong) non-malleable code into a rate-1, two-state computational non-malleable code. These parameters are asymptotically optimal. Furthermore, for the qualitative optimality of our result, we generalize the result of Cheraghchi and Guruswami (ITCS 2014) to show that the existence of one-way functions is necessary to achieve rate >1/2 for such codes. Our compiler requires a stronger form of non-malleability, called augmented non-malleability. This notion requires a stronger simulation guarantee for non-malleable codes and simplifies their modular usage in cryptographic settings where composition occurs. Unfortunately, this form of non-malleability is neither straightforward to achieve nor generally guaranteed by known results. Nevertheless, we prove this stronger form of non-malleability for the two-state construction of Aggarwal, Dodis, and Lovett (STOC 14). This result is of independent interest.
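To convey the flavour of such a rate compiler, the schematic sketch below shows one natural shape these constructions take (it is not claimed to be the exact compiler of this paper): only a short fresh key is pushed through the underlying poor-rate non-malleable code, while the long message is carried by an authenticated encryption under that key. The additive-sharing `base_encode`/`base_decode` pair is a stand-in for the assumed underlying two-state non-malleable code.

```python
# Schematic rate-boosting compiler for split-state codes: the short key pays
# the poor rate of the underlying code, the long message only costs |m| + O(1)
# via authenticated encryption. Shape sketch only, not the paper's construction.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def base_encode(key_bytes):
    # Stand-in only: additive two-out-of-two sharing. A real instantiation
    # must be a (sufficiently strong, augmented) two-state non-malleable code.
    r = os.urandom(len(key_bytes))
    return r, bytes(a ^ b for a, b in zip(r, key_bytes))

def base_decode(s1, s2):
    return bytes(a ^ b for a, b in zip(s1, s2))

def encode(message):
    k = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ct = AESGCM(k).encrypt(nonce, message, None)     # rate-1 part: |ct| = |m| + 16
    s1, s2 = base_encode(k)                          # only the short key goes through the NMC
    return s1, (s2, nonce, ct)                       # long ciphertext rides along with one state

def decode(left, right):
    s1, (s2, nonce, ct) = left, right
    k = base_decode(s1, s2)
    return AESGCM(k).decrypt(nonce, ct, None)        # mauled ciphertexts fail the AEAD tag

left, right = encode(b"a long message whose length dominates the encoding")
assert decode(left, right) == b"a long message whose length dominates the encoding"
```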
Last updated:  2015-10-31
Lower Bounds on Assumptions behind Indistinguishability Obfuscation
Uncategorized
Mohammad Mahmoody, Ameer Mohammed, Soheil Nematihaji, Rafael Pass, abhi shelat
Show abstract
Uncategorized
Since the seminal work of Garg et al. (FOCS'13), in which they proposed the first candidate construction for indistinguishability obfuscation (iO for short), iO has become a central cryptographic primitive with numerous applications. The security of the proposed construction of Garg et al. and its variants is proved based on multi-linear maps (Garg et al. Eurocrypt'13) and their idealized model called the graded encoding model (Brakerski and Rothblum TCC'14 and Barak et al. Eurocrypt'14). Whether or not iO can be based on standard and well-studied hardness assumptions has remained an elusive open question. In this work we prove \emph{lower bounds} on the assumptions that imply iO in a black-box way, based on computational assumptions. Note that any lower bound for iO needs to somehow rely on computational assumptions, because if P = NP then statistically secure iO does exist. Our results are twofold: 1. There is no fully black-box construction of iO from (exponentially secure) collision-resistant hash functions unless the polynomial hierarchy collapses. Our lower bound extends to (separating iO from) any primitive implied by a random oracle in a black-box way. 2. Let P be any primitive that exists relative to random trapdoor permutations, the generic group model for any finite abelian group, or the degree-$O(1)$ graded encoding model for any finite ring. We show that achieving a black-box construction of iO from P is \emph{as hard as} basing public-key cryptography on one-way functions. In particular, for any such primitive P we present a constructive procedure that takes any black-box construction of iO from P and turns it into a construction of semantically secure public-key encryption from any one-way function. Our separations hold even if the construction of iO from P is \emph{semi-}black-box (Reingold, Trevisan, and Vadhan, TCC'04) and the security reduction could access the adversary in a non-black-box way.
Last updated:  2015-11-03
On Basing Private Information Retrieval on NP-Hardness
Uncategorized
Tianren Liu, Vinod Vaikuntanathan
Show abstract
Uncategorized
The possibility of basing the security of cryptographic objects on the (minimal) assumption that $\mathsf{NP} \not\subseteq \mathsf{BPP}$ is at the very heart of complexity-theoretic cryptography. Most known results along these lines are negative, showing that assuming widely believed complexity-theoretic conjectures, there are no reductions from an $\mathsf{NP}$-hard problem to the task of breaking certain cryptographic schemes. We make progress along this line of inquiry by showing that the security of single-server single-round private information retrieval schemes cannot be based on $\mathsf{NP}$-hardness, unless the polynomial hierarchy collapses. Our main technical contribution is in showing how to break the security of a PIR protocol given an $\mathsf{SZK}$ oracle. Our result is tight in terms of both the correctness and the privacy parameter of the PIR scheme.
Last updated:  2016-03-08
Complete addition formulas for prime order elliptic curves
Joost Renes, Craig Costello, Lejla Batina
An elliptic curve addition law is said to be complete if it correctly computes the sum of any two points in the elliptic curve group. One of the main reasons for the increased popularity of Edwards curves in the ECC community is that they can allow a complete group law that is also relatively efficient (e.g., when compared to all known addition laws on Edwards curves). Such complete addition formulas can simplify the task of an ECC implementer and, at the same time, can greatly reduce the potential vulnerabilities of a cryptosystem. Unfortunately, until now, complete addition laws that are relatively efficient have only been proposed on curves of composite order and have thus been incompatible with all of the currently standardized prime order curves. In this paper we present optimized addition formulas that are complete on every prime order short Weierstrass curve defined over a field k with char(k) not 2 or 3. Compared to their incomplete counterparts, these formulas require a larger number of field additions, but interestingly require fewer field multiplications. We discuss how these formulas can be used to achieve secure, exception-free implementations on all of the prime order curves in the NIST (and many other) standards.
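To see concretely what completeness buys an implementer, the toy affine addition below (over a small hypothetical curve, not a standardized one) has to branch on the point at infinity, on doubling and on inverse points; each branch is an exception case, and a potential side channel, that a single complete formula removes. This illustrates the problem the paper solves; it is not the paper's projective formulas.

```python
# Incomplete affine short Weierstrass addition over a toy curve
# y^2 = x^3 + a*x + b mod p. The explicit case analysis below is exactly what
# complete addition formulas eliminate: one formula for all inputs, hence no
# input-dependent branches. Toy parameters, not a standard curve.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # doubling branch
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # generic branch
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

P = (3, 6)   # a point on y^2 = x^3 + 2x + 3 mod 97
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0
print(add(P, P), add(P, O), add(P, (P[0], (-P[1]) % p)))
```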
Last updated:  2016-03-09
A General Framework for Redactable Signatures and New Constructions
David Derler, Henrich C. Pöhls, Kai Samelin, Daniel Slamanig
A redactable signature scheme (RSS) allows any party to remove parts of a signed message without invalidating the respective signature. State-of-the-art constructions focus on messages represented by one specific data structure, e.g., lists, sets or trees, and adjust the security model accordingly. To overcome the need for this myriad of models, we present a general framework covering arbitrary data structures and even more sophisticated possibilities. For example, we cover fixed elements which must not be redactable and dependencies between elements. Moreover, we introduce the notion of designated redactors, i.e., the signer can give some extra information to selected entities which then become redactors. In practice, this often allows one to obtain more efficient schemes. We then present two RSSs, one for sets and one for lists, both constructed in a black-box way from any EUF-CMA secure signature scheme and indistinguishable cryptographic accumulators, and show how the concept of designated redactors can be used to increase the efficiency of these schemes. Finally, we present a black-box construction of a designated-redactor RSS by combining an RSS for sets with non-interactive zero-knowledge proof systems. All three constructions presented in this paper provide transparency, an important but quite hard-to-achieve property: we also conceal the length of the original message and the positions of the redactions.
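As a naive warm-up for what a redactable signature for sets must provide, one can sign each element together with a fresh per-document tag and let redaction simply drop element/signature pairs, as in the sketch below. Unlike the accumulator-based constructions in the paper, this toy scheme is not transparent: the number of remaining signatures reveals how much was redacted. Ed25519 merely stands in for an arbitrary EUF-CMA-secure signature scheme.

```python
# Naive redactable signature for sets (illustration only): sign each element
# together with a per-document tag; redacting means dropping element/signature
# pairs. NOT transparent, unlike the schemes in the paper.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_set(sk, elements):
    tag = os.urandom(16)                              # binds all parts to one document
    sigs = {e: sk.sign(tag + e) for e in elements}
    return tag, sigs

def redact(signed, remove):
    tag, sigs = signed
    return tag, {e: s for e, s in sigs.items() if e not in remove}

def verify_set(pk, signed):
    tag, sigs = signed
    for e, s in sigs.items():
        pk.verify(s, tag + e)                         # raises InvalidSignature on failure
    return set(sigs)

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
signed = sign_set(sk, {b"alice", b"bob", b"carol"})
print(verify_set(pk, redact(signed, {b"bob"})))
```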
Last updated:  2015-10-30
Rational Sumchecks
Uncategorized
Siyao Guo, Pavel Hubacek, Alon Rosen, Margarita Vald
Show abstract
Uncategorized
Rational proofs, introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is neither honest nor malicious, but rather rational. The advantage of rational proofs over their classical counterparts is that they allow for extremely low communication and verification time. In recent work, Guo et al. (ITCS 2014) demonstrated their relevance to delegation of computation by showing that, if the rational prover is additionally restricted to being computationally bounded, then every language in NC1 admits a single-round delegation scheme that can be verified in sublinear time. We extend the Guo et al. result by constructing a single-round delegation scheme with sublinear verification for all languages in P. Our main contribution is the introduction of {\em rational sumcheck protocols}, which are a relaxation of classical sumchecks, a crucial building block for interactive proofs. Unlike their classical counterparts, rational sumchecks retain their (rational) soundness properties {\em even if the polynomial being verified is of high degree} (in particular, they do not rely on the Schwartz-Zippel lemma). This enables us to bypass the main efficiency bottleneck in classical delegation schemes, which results from sumcheck protocols being inapplicable to the verification of the computation's input level. As an additional contribution, we study the possibility of using rational proofs as efficient building blocks within classical interactive proofs. Specifically, we show a composition theorem for substituting oracle calls in an interactive proof by a rational protocol.
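For context on the building block being relaxed, the sketch below runs the classical sumcheck protocol for a toy three-variate polynomial over a prime field; rational sumchecks keep this round structure but replace the soundness argument (and, per the abstract, avoid relying on the Schwartz-Zippel lemma). The polynomial and the field are arbitrary toy choices.

```python
# Classical sumcheck for a toy 3-variate polynomial over F_P. The prover
# claims the sum of f over {0,1}^3; in each round it commits to the univariate
# polynomial obtained by summing out the remaining variables, and the verifier
# checks consistency at 0 and 1 before sending a random challenge.
import random

P = 2**61 - 1                       # toy prime field
f = lambda x, y, z: (x * y + 2 * y * z + 3 * x * z * z) % P   # toy polynomial

def sum_over_cube(func, fixed):
    """Sum func over {0,1}^k for the k variables that are not yet fixed."""
    k = 3 - len(fixed)
    total = 0
    for mask in range(2 ** k):
        rest = [(mask >> i) & 1 for i in range(k)]
        total = (total + func(*(fixed + rest))) % P
    return total

claim = sum_over_cube(f, [])        # prover's claimed total sum
fixed = []
for rnd in range(3):
    # prover's univariate polynomial in the current variable (given here as a
    # callable; a real prover would send its coefficients)
    g = lambda t, fixed=fixed: sum_over_cube(f, fixed + [t])
    assert (g(0) + g(1)) % P == claim   # verifier's consistency check
    r = random.randrange(P)             # verifier's random challenge
    claim = g(r)
    fixed = fixed + [r]
assert f(*fixed) % P == claim           # final check: evaluate f at the random point
print("sumcheck accepted, total sum =", sum_over_cube(f, []))
```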