All papers in 2017 (Page 10 of 1262 results)

Last updated:  2017-04-26
Universally Composable Zero-Knowledge Proof of Membership
Jesper Buus Nielsen
Since its introduction, the UC framework by Canetti has received a lot of attention. A contributing factor to its popularity is that it allows one to capture a large number of common cryptographic primitives using ideal functionalities, and thus can be used to give modular proofs for many cryptographic protocols. However, an important member of the cryptographic family has not yet been captured by an ideal functionality, namely the zero-knowledge proof of membership. We give the first formulation of a UC zero-knowledge proof of membership and show that it is closely related to the notions of straight-line zero-knowledge and simulation soundness.
Last updated:  2018-02-19
Indistinguishability Obfuscation for All Circuits from Secret-Key Functional Encryption
Fuyuki Kitagawa, Ryo Nishimaki, Keisuke Tanaka
We show that indistinguishability obfuscation (IO) for all circuits can be constructed solely from secret-key functional encryption (SKFE). In the construction, the SKFE scheme needs to be able to issue an a-priori unbounded number of functional keys, that is, it must be collusion-resistant. Our strategy is to replace public-key functional encryption (PKFE) in the construction of IO proposed by Bitansky and Vaikuntanathan (FOCS 2015) with puncturable SKFE. Bitansky and Vaikuntanathan introduced the notion of puncturable SKFE and observed that this strategy works. However, it has not been clear whether we can construct puncturable SKFE without assuming PKFE. In particular, it has not been known whether puncturable SKFE can be constructed from ordinary SKFE. In this work, we show that a relaxed variant of puncturable SKFE can be constructed from collusion-resistant SKFE. Moreover, we show that this relaxed variant of puncturable SKFE is sufficient for constructing IO.
Last updated:  2017-04-26
Provably Secure Three-party Password Authenticated Key Exchange Protocol Based On Ring Learning With Error
Dongqing Xu, Debiao He, Kim-Kwang Raymond Choo, Jianhua Chen
The Three-party Password Authenticated Key Exchange (3PAKE) protocol is an important cryptographic primitive through which clients can establish a session key using easy-to-remember passwords. A number of 3PAKE protocols based on traditional mathematical problems have been presented in the literature, but these protocols are not able to resist attacks using quantum computers. In this paper, we construct the first 3PAKE protocol from lattices. Lattice-based cryptography is a promising post-quantum cryptography approach. We then prove its security in the random oracle model, and implement the proposed protocol using LatticeCrypto. The implementation results show that our protocol is very efficient in practice.
Last updated:  2017-06-12
Conditional Disclosure of Secrets via Non-Linear Reconstruction
Tianren Liu, Vinod Vaikuntanathan, Hoeteck Wee
We present new protocols for conditional disclosure of secrets (CDS), where two parties want to disclose a secret to a third party if and only if their respective inputs satisfy some predicate.
- For general predicates, we present two protocols: the first achieves communication and the second achieves sub-polynomial communication.
- As a corollary, we obtain improved share complexity for forbidden graph access structures. Namely, for every graph on vertices, there is a secret-sharing scheme for parties in which each pair of parties can reconstruct the secret if and only if the corresponding vertices in are connected, and where each party gets a share of size .
Prior to this work, the best protocols for both primitives required communication complexity . Indeed, this is essentially the best that all prior techniques could hope to achieve, as they were limited to so-called "linear reconstruction". This is the first work to break this "linear reconstruction" barrier in settings related to secret sharing. To obtain these results, we draw upon techniques for non-linear reconstruction developed in the context of information-theoretic private information retrieval. We further extend our results to the setting of private simultaneous messages (PSM), and provide applications such as an improved attribute-based encryption (ABE) scheme for quadratic polynomials.
Last updated:  2017-04-26
Almost Optimal Oblivious Transfer from QA-NIZK
Olivier Blazy, Céline Chevalier, Paul Germouty
We show how to build a UC-secure Oblivious Transfer in the presence of adaptive corruptions from Quasi-Adaptive Non-Interactive Zero-Knowledge proofs. Our result is based on the work of Jutla and Roy at Asiacrypt 2015, where the authors proposed a constant-size, very efficient PAKE scheme. As a stepping stone, we first show how a two-flow PAKE scheme can be generically transformed, in an optimized way, into an efficient three-flow Oblivious Transfer scheme. We then compare our generic transformations to existing OT constructions and see that we manage to gain at least a factor of 2 over the best known constructions. To the best of our knowledge, our scheme is the first UC-secure Oblivious Transfer with a constant-size flow from the receiver, and nearly optimal size for the server.
Last updated:  2018-04-09
Continuous Non-Malleable Codes in the 8-Split-State Model
Divesh Aggarwal, Nico Dottling, Jesper Buus Nielsen, Maciej Obremski, Erick Purwanto
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs [DPW10], provide a useful message integrity guarantee in situations where traditional error correction (and even error detection) is impossible; for example, when the attacker can completely overwrite the encoded message. NMCs have emerged as a fundamental object at the intersection of coding theory and cryptography. In particular, progress in the study of non-malleable codes and the related notion of non-malleable extractors has led to new insights and progress on even more fundamental problems like the construction of multi-source randomness extractors. A large body of recent work has focused on various constructions of non-malleable codes in the split-state model. Many variants of NMCs have been introduced in the literature, e.g., strong NMCs, super-strong NMCs and continuous NMCs. The most general, and hence also the most useful, notion among these is that of continuous non-malleable codes, which allows for continuous tampering by the adversary. We present the first efficient information-theoretically secure continuously non-malleable code in the constant split-state model, where there is a self-destruct mechanism which ensures that the adversary loses access to tampering after the first failed decoding. We believe that our main technical result could be of independent interest and that some of the ideas could in future be used to make progress on other related questions.
Last updated:  2017-04-26
XOR of PRPs in a Quantum World
Bart Mennink, Alan Szepieniec
In the classical world, the XOR of pseudorandom permutations is a well-established way to design a pseudorandom function with "optimal" security: security up to approximately queries, where and are the key and state space of the block cipher. We investigate the security of this construction against adversaries who have access to quantum computers. We first present a key recovery attack in complexity. The attack relies on a clever application of a claw-finding algorithm and testifies to a significant gap with the classical setting, where pseudorandom permutations already yield optimal security. Next, we perform a quantum security analysis of the construction, and prove that it achieves security up to queries. The analysis relies on a generic characterization of classical and quantum distinguishers and a universal transformation of classical security proofs to the quantum setting that is of general interest.
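The construction under study is F(x) = P1(x) XOR P2(x) for independently keyed permutations P1, P2. A toy Python sketch on an 8-bit domain (the permutation derivation and domain size are illustrative stand-ins, not the block cipher setting of the paper):

```python
import random

DOMAIN = 256  # toy 8-bit "blocks" (for illustration only)

def keyed_permutation(seed):
    """Derive a pseudorandom permutation table from a key.
    Stands in for a block cipher E_k keyed with `seed`."""
    rng = random.Random(seed)
    table = list(range(DOMAIN))
    rng.shuffle(table)
    return table

def xor_of_prps(k1, k2):
    """PRF candidate F(x) = P_{k1}(x) XOR P_{k2}(x)."""
    p1, p2 = keyed_permutation(k1), keyed_permutation(k2)
    return lambda x: p1[x] ^ p2[x]

f = xor_of_prps("key-1", "key-2")
outputs = [f(x) for x in range(DOMAIN)]
# Each P_i is a bijection, but their XOR typically is not:
# F behaves like a random function, collisions included.
print(all(0 <= y < DOMAIN for y in outputs))  # True
```

The point of the XOR is precisely to destroy the permutation structure, which is what allows the sum to approach random-function behaviour in the classical setting.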
Last updated:  2019-02-01
White-Box Cryptography: Don't Forget About Grey Box Attacks
Estuardo Alpirez Bock, Joppe W. Bos, Chris Brzuska, Charles Hubain, Wil Michiels, Cristofaro Mune, Eloi Sanfelix Gonzalez, Philippe Teuwen, Alexander Treff
Although all current scientific white-box implementations of standardized cryptographic primitives have been publicly broken, the known attacks require knowledge of the internal data representation used by the implementation. In practice, the level of implementation knowledge required is only attainable through significant reverse engineering efforts. In this paper we describe new approaches to assess the security of white-box implementations which require neither knowledge about the look-up tables used nor expensive reverse engineering efforts. We introduce the differential computation analysis (DCA) attack, which is the software counterpart of the differential power analysis attack as applied by the cryptographic hardware community. Similarly, the differential fault analysis (DFA) attack is the software counterpart of fault-injection attacks on cryptographic hardware. For DCA, we developed plugins for widely available dynamic binary instrumentation (DBI) frameworks to produce software execution traces which contain information about the memory addresses being accessed. For the DFA attack, we developed modified emulators and plugins for DBI frameworks that allow injecting faults at selected moments within the execution of the encryption or decryption process, as well as a framework to automate static fault injection. To illustrate their effectiveness, we show how DCA and DFA can extract the secret key from numerous publicly available non-commercial white-box implementations of standardized cryptographic algorithms. These approaches allow one to extract the secret key material from white-box implementations significantly faster, without specific knowledge of the white-box design, and in an automated or semi-automated manner.
Last updated:  2017-06-27
Tightly Secure Ring-LWE Based Key Encapsulation with Short Ciphertexts
Martin R. Albrecht, Emmanuela Orsini, Kenneth G. Paterson, Guy Peer, Nigel P. Smart
We provide a tight security proof for an IND-CCA Ring-LWE based Key Encapsulation Mechanism that is derived from a generic construction of Dent (IMA Cryptography and Coding, 2003). Such a tight reduction is not known for the generic construction. The resulting scheme has shorter ciphertexts than can be achieved with other generic constructions of Dent or by using the well-known Fujisaki-Okamoto constructions (PKC 1999, Crypto 1999). Our tight security proof is obtained by reducing to the security of the underlying Ring-LWE problem, avoiding an intermediate reduction to a CPA-secure encryption scheme. The proof technique may be of interest for other schemes based on LWE and Ring-LWE.
Last updated:  2017-04-26
Lattice-Based Group Signatures: Achieving Full Dynamicity with Ease
San Ling, Khoa Nguyen, Huaxiong Wang, Yanhong Xu
Lattice-based group signatures have been an active research topic in recent years. Since the pioneering work by Gordon, Katz and Vaikuntanathan (Asiacrypt 2010), eight other schemes have been proposed, providing various improvements in terms of security, efficiency and functionality. However, most of the existing constructions work only in the static setting, where the group population is fixed at the setup phase. The only two exceptions are the scheme by Langlois et al. (PKC 2014), which handles user revocations (but new users cannot join), and that by Libert et al. (Asiacrypt 2016), which addresses the orthogonal problem of dynamic user enrollments (but users cannot be revoked). In this work, we provide the first lattice-based group signature that offers full dynamicity (i.e., users have the flexibility of joining and leaving the group), and thus resolve a prominent open problem posed by previous works. Moreover, we achieve this non-trivial feat in a relatively simple manner. Starting with Libert et al.'s fully static construction (Eurocrypt 2016), which is arguably the most efficient lattice-based group signature to date, we introduce simple-but-insightful tweaks that allow us to upgrade it directly into the fully dynamic setting. More startlingly, our scheme even produces slightly shorter signatures than the former. The scheme satisfies the strong security requirements of Bootle et al.'s model (ACNS 2016), under the Short Integer Solution (SIS) and the Learning With Errors (LWE) assumptions.
Last updated:  2017-04-26
A low-resource quantum factoring algorithm
Daniel J. Bernstein, Jean-François Biasse, Michele Mosca
In this paper, we present a factoring algorithm that, assuming standard heuristics, uses just qubits to factor an integer in time where and . For comparison, the lowest asymptotic time complexity for known pre-quantum factoring algorithms, assuming standard heuristics, is where . The new time complexity is asymptotically worse than Shor's algorithm, but the qubit requirements are asymptotically better, so it may be possible to physically implement it sooner.
Last updated:  2017-04-26
Post-quantum RSA
Daniel J. Bernstein, Nadia Heninger, Paul Lou, Luke Valenta
This paper proposes RSA parameters for which (1) key generation, encryption, decryption, signing, and verification are feasible on today's computers while (2) all known attacks are infeasible, even assuming highly scalable quantum computers. As part of the performance analysis, this paper introduces a new algorithm to generate a batch of primes. As part of the attack analysis, this paper introduces a new quantum factorization algorithm that is often much faster than Shor's algorithm and much faster than pre-quantum factorization algorithms. Initial pqRSA implementation results are provided.
Last updated:  2017-07-03
The Montgomery ladder on binary elliptic curves
Thomaz Oliveira, Julio López, Francisco Rodríguez-Henríquez
In this survey paper we present a careful analysis of the Montgomery ladder procedure applied to the computation of the constant-time point multiplication operation on elliptic curves defined over binary extension fields. We give a general view of the main improvements and formula derivations that several researchers have contributed across the years, since the publication of Peter Lawrence Montgomery's seminal work in 1987. We also report a fast software implementation of the Montgomery ladder applied to a Galbraith-Lin-Scott (GLS) binary elliptic curve that offers a security level close to 128 bits. Using our software we can execute the ephemeral Diffie-Hellman protocol in just 95,702 clock cycles when implemented on an Intel Skylake machine running at 4 GHz.
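The ladder's defining feature is a fixed pattern of operations regardless of the key bits, which is what makes constant-time implementations natural. The structure can be illustrated with modular exponentiation, a simpler group than the binary elliptic curves of the survey (this sketch is ours, not the paper's implementation):

```python
def montgomery_ladder_pow(g, k, p):
    """Compute g^k mod p with the Montgomery ladder.
    Invariant after each step: r1 == r0 * g (mod p), and the same
    pair of multiply/square operations runs whatever the key bit
    is -- the property exploited for constant-time code."""
    r0, r1 = 1, g % p
    for bit in bin(k)[2:]:           # scan key bits MSB -> LSB
        if bit == '1':
            r0, r1 = (r0 * r1) % p, (r1 * r1) % p
        else:
            r0, r1 = (r0 * r0) % p, (r0 * r1) % p
    return r0

print(montgomery_ladder_pow(3, 13, 101) == pow(3, 13, 101))  # True
```

On elliptic curves the same two-register structure is used with point addition and doubling in place of multiplication and squaring, and the x-coordinate-only formulas surveyed in the paper avoid ever computing y.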
Last updated:  2017-11-23
LMS vs XMSS: Comparison of two Hash-Based Signature Standards
Panos Kampanakis, Scott Fluhrer
Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash-based signature schemes that have been proposed in the IETF as quantum-secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. Our goal is to provide a clear understanding of the schemes' similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to choose.
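Both LMS and XMSS authenticate many one-time public keys under a single Merkle tree root; a signature carries the sibling hashes ("authentication path") needed to recompute the root. A minimal sketch of that shared mechanism (leaf labels and tree size are illustrative; the real standards add indices, prefixes and, in XMSS, bitmasks):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns levels, leaves first."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to verify one leaf against the root."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])   # sibling at this level
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = h(leaf)
    for sibling in path:
        node = h(sibling + node) if index & 1 else h(node + sibling)
        index //= 2
    return node == root

leaves = [b"ots-pubkey-%d" % i for i in range(8)]
levels = build_tree(leaves)
root = levels[-1][0]
print(verify(leaves[5], 5, auth_path(levels, 5), root))  # True
```

The public key is just the root; signature size grows with the tree height, which is one of the size trade-offs the comparison quantifies.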
Last updated:  2017-08-26
Removal Attacks on Logic Locking and Camouflaging Techniques
Muhammad Yasin, Bodhisatwa Mazumdar, Ozgur Sinanoglu, Jeyavijayan Rajendran
With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. Logic obfuscation techniques, including logic locking and IC camouflaging, have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT-attack-resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. We present three attacks, namely the "signal probability skew" (SPS) attack, the "AppSAT guided removal" (AGR) attack, and the "sensitization guided SAT" (SGS) attack, that can break Anti-SAT and ATI within minutes.
Last updated:  2017-04-21
Predictive Aging of Reliability of two Delay PUFs
Naghmeh Karimi, Jean-Luc Danger, Florent Lozac'h, Sylvain Guilley
To protect integrated circuits against IP piracy, Physically Unclonable Functions (PUFs) are deployed. PUFs provide a specific signature for each integrated circuit. However, environmental variations (e.g., temperature changes), power supply noise and, most influentially, IC aging affect the functionality of PUFs. It is therefore important to evaluate aging effects as early as possible, preferably at design time. In this paper we investigate the effect of aging on the stability of two delay PUFs, arbiter-PUFs and loop-PUFs, and analyze how the architecture of these PUFs influences the reliability decrease due to aging. We observe that the reliability of the arbiter-PUF gets worse over time, whereas the reliability of the loop-PUF remains constant. We attribute this phenomenon to the asymmetric aging of the arbiter: one half is active (hence ages fast) while the other is not (hence ages slowly). Besides, we notice that the aging of the delay chain in the arbiter-PUF and in the loop-PUF has no impact on their reliability, since these PUFs operate differentially.
Last updated:  2017-04-21
Some cryptanalytic results on Lizard
Subhadeep Banik, Takanori Isobe
Lizard is a lightweight stream cipher proposed by Hamann, Krause and Meier in IACR ToSC 2017. It has a Grain-like structure with two state registers of size 90 and 31 bits. The cipher uses a 120-bit secret key and a 64-bit IV. The authors claim that Lizard provides 80-bit security against key recovery attacks and 60-bit security against distinguishing attacks. In this paper, we present an assortment of results and observations on Lizard. First, we show that by doing random trials it is possible to find a set of triplets such that the Key-IV pairs and produce identical keystream bits. Second, we show that by performing only around random trials it is possible to obtain Key-IV pairs and that produce identical keystream bits. Thereafter, we show that one can construct a distinguisher for Lizard based on IVs that produce shifted keystream sequences. The process takes around random IV encryptions and around bits of memory. Finally, we propose a key recovery attack on a version of Lizard with the number of initialization rounds reduced to 223 (out of 256), based on IV collisions.
Last updated:  2017-04-21
Mind the Gap: Towards Secure 1st-order Masking in Software
Kostas Papagiannopoulos, Nikita Veshchikov
Cryptographic implementations are vulnerable to side-channel analysis. Implementers often opt for masking countermeasures to protect against these types of attacks. Masking countermeasures can ensure theoretical protection against value-based leakages. However, the practical effectiveness of masking is often halted by physical effects such as glitches, couplings and distance-based leakages, which violate the independent leakage assumption (ILA) and result in security order reductions. This paper aims to address this gap between masking theory and practice in the following threefold manner. First, we perform an in-depth investigation of the device-specific effects that invalidate ILA in the AVR microcontroller ATMega163. Second, we provide an automated tool, capable of detecting ILA violations in AVR assembly code. Last, we craft the first (to our knowledge) "hardened" 1st-order ISW-based, masked Sbox implementation, which is capable of resisting 1st-order, univariate side-channel attacks. Enforcing the ILA in the masked RECTANGLE Sbox requires 1319 clock cycles, i.e. a 15-fold increase compared to a naive 1st-order ISW-based implementation.
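The value-based protection that masking targets can be sketched in a few lines: each sensitive byte is split into random shares, and linear operations such as XOR are computed share-wise, so the secret never appears in any single intermediate value (a toy 1st-order Boolean masking sketch, not the paper's hardened ISW Sbox):

```python
import secrets

def mask(x):
    """Split byte x into two Boolean shares; each share alone
    is uniformly random, so leaking either reveals nothing."""
    r = secrets.randbits(8)
    return r, x ^ r

def masked_xor(shares_a, shares_b):
    """XOR is linear, so it can be computed share-wise without
    ever recombining the secret values."""
    (a0, a1), (b0, b1) = shares_a, shares_b
    return a0 ^ b0, a1 ^ b1

def unmask(shares):
    s0, s1 = shares
    return s0 ^ s1

x, y = 0x3A, 0xC5
print(unmask(masked_xor(mask(x), mask(y))) == (x ^ y))  # True
```

Non-linear operations (like an Sbox) require more elaborate share-wise gadgets such as ISW, and it is exactly there that physical effects like glitches and couplings can recombine shares and break the first-order guarantee, the gap this paper studies.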
Last updated:  2017-06-22
DUPLO: Unifying Cut-and-Choose for Garbled Circuits
Vladimir Kolesnikov, Jesper Buus Nielsen, Mike Rosulek, Ni Trieu, Roberto Trifiletti
Cut-and-choose (C&C) is the standard approach to making Yao’s garbled circuit two-party computation (2PC) protocol secure against malicious adversaries. Traditional cut-and-choose operates at the level of entire circuits, whereas the LEGO paradigm (Nielsen & Orlandi, TCC 2009) achieves asymptotic improvements by performing cut-and-choose at the level of individual gates. In this work we propose a unified approach called DUPLO that spans the entire continuum between these two extremes. The cut-and-choose step in our protocol operates on the level of arbitrary circuit “components,” which can range in size from a single gate to the entire circuit itself. With this entire continuum of parameter values at our disposal, we find that the best way to scale 2PC to computations of realistic size is to use C&C components of intermediate size, and not at the extremes. On computations requiring several millions of gates or more, our more general approach to C&C gives between 4-7x improvement over existing approaches. In addition to our technical contributions of modifying and optimizing previous protocol techniques to work with general C&C components, we also provide an extension of the recent Frigate circuit compiler (Mood et al., EuroS&P 2016) to effectively express any C-style program in terms of components which can be processed efficiently using our protocol.
Last updated:  2017-04-21
Towards a Classification of Non-interactive Computational Assumptions in Cyclic Groups
Essam Ghadafi, Jens Groth
We study non-interactive computational intractability assumptions in prime-order cyclic groups. We focus on the broad class of computational assumptions, which we call target assumptions, where the adversary's goal is to compute a concrete group element, and investigate the structure of this class. Our analysis identifies two families of intractability assumptions, the -Generalized Diffie-Hellman Exponent assumptions and the -Simple Fractional assumptions, that imply all other target assumptions. These two assumptions therefore serve as Uber assumptions that can underpin all the target assumptions where the adversary has to compute specific group elements. We also study the internal hierarchy among members of these two assumption families. We provide heuristic evidence that both families are necessary to cover the full class of target assumptions, and we show that the lowest level in the -GDHE hierarchy (the -GDHE assumption) is equivalent to the computational Diffie-Hellman assumption. We generalize our results to the bilinear group setting. For the base groups our results translate nicely and a similar structure of non-interactive computational assumptions emerges. We also identify Uber assumptions in the target group, but this requires replacing the -GDHE assumption with a more complicated assumption, which we call the Bilinear Gap Assumption. Our analysis can assist both cryptanalysts and cryptographers. For cryptanalysts, we propose that the -GDHE and the -SDH assumptions are the most natural and important targets for cryptanalysis in prime-order groups. For cryptographers, we believe our classification can aid the choice of assumptions underpinning cryptographic schemes and be used as a guide to minimize the overall attack surface that different assumptions expose.
Last updated:  2017-09-28
Multilinear Maps Using a Variant of Ring-LWE
Gu Chunsheng
The GGH13, CLT13 and GGH15 multilinear map constructions suffer from zeroizing attacks. In this paper, we present a new construction of multilinear maps using a variant of ring-LWE (vRLWE). Furthermore, we also present two new variants of vRLWE, which respectively support the applications of multipartite key exchange and witness encryption. At the same time, we also present a new variant of GGH13 using matrix form. The security of our construction depends upon new hardness assumptions.
Last updated:  2017-06-23
Steganography techniques
Dragoş Dumitrescu, Ioan-Mihail Stan, Emil Simion
As cryptography is the standard way of ensuring privacy, integrity and confidentiality of a public channel, steganography steps in to provide even stronger guarantees. Thus, in the case of cryptography, an attacker cannot obtain information about the payload while inspecting its encrypted content. In the case of steganography, one cannot even prove the existence of the covert communication itself. The main purpose of the current paper is to provide insights into some of the existing techniques in steganography. A comparison between the performances of several steganography algorithms is carried out, with focus on the metrics that characterize a steganography technique.
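One of the classic techniques such surveys cover is least-significant-bit (LSB) embedding, which hides one message bit in the low bit of each cover byte; a minimal sketch (the cover data here is synthetic, standing in for image pixels):

```python
def embed(cover, message_bits):
    """Hide bits in the least-significant bit of each cover byte;
    the visual change per pixel is at most 1."""
    assert len(message_bits) <= len(cover)
    stego = bytearray(cover)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return bytes(stego)

def extract(stego, n_bits):
    """Recover the hidden bits from the low bit of each byte."""
    return [stego[i] & 1 for i in range(n_bits)]

cover = bytes(range(16))           # stand-in for image pixel data
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)
print(extract(stego, 8) == secret)                           # True
print(max(abs(a - b) for a, b in zip(cover, stego)) <= 1)    # True
```

The metrics the paper compares (capacity, distortion, detectability) are visible even here: capacity is one bit per byte, distortion is bounded by 1 per byte, and the statistical bias of the low bits is what steganalysis exploits.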
Last updated:  2017-06-16
Enhancing Security by Combining Biometrics and Cryptography
Diana Popa, Emil Simion
The impressive number of recent technological advancements in the area of information systems has brought along, besides the multitude of positive aspects, some negative aspects too. The most obvious one is that technological innovations are prone to various categories of threats. Making sure that information stays safe, unaltered and secret is an integral part of providing technology that behaves the way it is supposed to. Along with researching techniques for effectively securing communication, other aspects that deserve attention are authentication, authorization and accounting. Of these three, the security aspect that has been most intensely researched in the past is authentication. Among the various methods used to verify the identity of users, a more recent one is biometrics, which can significantly heighten the safety and security of a system. However, authenticating the user into an information system may not be entirely safe all the time. The aim of this paper is to present an overview of the different plausible methods of combining cryptography-related concepts and biometric techniques. This additional attachment of cryptography has proven to make the process of gaining access to an information system much more secure. The problem tackled in this paper is presented from a two-way perspective: on one hand, we discuss how biometrics can make use of cryptography-specific solutions in order to enhance its powers and, on the other hand, we present how cryptography can use the specific biometric data of a user to generate encryption keys that are much harder to decipher or to obtain. After these methods are presented, the theoretical basis for measuring the performance of a biometric system is presented, and a survey of current performance results on fuzzy vault techniques is enumerated and described.
Last updated:  2017-04-21
ElsieFour: A Low-Tech Authenticated Encryption Algorithm For Human-to-Human Communication
Alan Kaminsky
ElsieFour (LC4) is a low-tech cipher that can be computed by hand; but unlike many historical ciphers, LC4 is designed to be hard to break. LC4 is intended for encrypted communication between humans only, and therefore it encrypts and decrypts plaintexts and ciphertexts consisting only of the English letters A through Z plus a few other characters. LC4 uses a nonce in addition to the secret key, and requires that different messages use unique nonces. LC4 performs authenticated encryption, and optional header data can be included in the authentication. This paper defines the LC4 encryption and decryption algorithms, analyzes LC4's security, and describes a simple appliance for computing LC4 by hand.
Last updated:  2017-04-18
A Traceability Analysis of Monero's Blockchain
Amrit Kumar, Clément Fischer, Shruti Tople, Prateek Saxena
Monero is a cryptocurrency that has rapidly gained popularity since its launch in April 2014. The source of its growth can be mainly attributed to its unique privacy properties that go well beyond the pseudonymity property of cryptocurrencies such as Bitcoin. In this work, we conduct a forensic analysis of the Monero blockchain. Our main goal is to investigate Monero’s untraceability guarantee, which essentially means that given a transaction input, the real output being redeemed in it should be anonymous among a set of other outputs. To this end, we develop three heuristics that lead to simple-to-implement attack routines. We evaluate our attacks on the Monero blockchain and show that in 87% of cases, the real output being redeemed can be easily identified with certainty. Moreover, we have compelling evidence that two of our attacks also extend to Monero RingCTs — the second generation Monero that even hides the transaction value. Furthermore, we observe that for over 98% of the inputs that we have been able to trace, the real output being redeemed in it is the one that has been on the blockchain for the shortest period of time. This result shows that the mitigation measures currently employed in Monero fall short of preventing temporal analysis. Motivated by our findings, we also propose a new mitigation strategy against temporal analysis. Our mitigation strategy leverages the real spending habit of Monero users.
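The temporal analysis described above amounts to a simple rule: among the ring members of an input, guess that the one most recently added to the blockchain is the output really being spent. A schematic sketch (the data model is deliberately simplified; the field names are illustrative, not Monero's actual format):

```python
def guess_real_output(ring):
    """Temporal ("newest output") heuristic: given a list of
    (output_id, block_height) pairs forming an input's ring,
    guess that the member with the greatest block height --
    i.e. the most recently created output -- is the real spend."""
    return max(ring, key=lambda member: member[1])[0]

# A toy ring of three candidate outputs at different heights:
ring = [("out-a", 90_000), ("out-b", 140_250), ("out-c", 121_700)]
print(guess_real_output(ring))  # out-b, the newest member
```

The paper's finding that this guess is right for over 98% of traceable inputs is what shows the mix-in sampling then used by Monero failed to mimic real spending behaviour.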
Last updated:  2017-04-18
Authentication of Outsourced Linear Function Query with Efficient Updates
Gang Sheng, Chunming Tang, Wei Gao, Yunlu Cai, Xing Hu
Storing large-scale data on the cloud server side is nowadays an alternative for the data owner, given the popularity and maturity of cloud computing techniques: the data owner can manage the data with limited resources, and the user issues query requests to the cloud server instead of the data owner. As the server is not completely trusted, it is necessary for the user to authenticate the results, i.e., to check whether or not the results returned from the cloud server are correct. We investigate in this paper how to perform efficient data updates for result authentication of outsourced univariate linear function queries. We seek to outsource almost all the data and computing to the server, so that as few data and computations as possible are stored and performed on the data owner side. We present a novel scheme to achieve this security goal, which is divided into two parts. The first part is a verification algorithm for the outsourced computation of line intersections, which enables the data owner to store most of the data on the server side and to execute less of the computation of the line intersections. The second part is an authentication data structure, the Two-Level Merkle B Tree, for the outsourced univariate linear function query, where the top level is used to index the user input and authenticate the query results, and the bottom level is used to index the query condition and authenticate the query results. The authentication data structure enables the data owner to update the data efficiently and to implement the query on the server side. The theoretical analysis shows that our proposed scheme achieves higher efficiency.
Last updated:  2017-04-18
NIST RANDOMNESS TESTS (IN)DEPENDENCE
Carmina GEORGESCU, Alina PETRESCU-NITA, Emil SIMION, Antonela TOMA
In this paper we focus on three open questions regarding the NIST SP 800-22 randomness tests: the probability of false acceptance, the minimum sample size needed to achieve a given error probability, and the independence of the tests. We point out statistical testing assumptions, sources of error, sample constructions, and a computational method for determining the probability of false acceptance and estimating the correlation between the statistical tests.
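Under the independence assumption that the paper questions, the probability that a truly random sequence fails at least one of k tests run at significance level alpha is 1 - (1 - alpha)^k; a quick sketch of why the question matters:

```python
def prob_reject_somewhere(alpha, k):
    """Probability that an ideal random sequence is rejected by at
    least one of k tests at significance level alpha, assuming
    (as the paper examines) that the tests are independent."""
    return 1 - (1 - alpha) ** k

# The NIST SP 800-22 suite comprises 15 core tests; at the usual
# alpha = 0.01, even an ideal source fails somewhere fairly often:
p = prob_reject_somewhere(0.01, 15)
print(round(p, 4))  # 0.1399 -- about 14% of good sequences
```

If the tests are correlated rather than independent, this figure is wrong, which is exactly why estimating inter-test correlation is one of the open questions the paper addresses.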
Last updated:  2017-05-22
Privacy-Preserving Linear Regression on Distributed Data
Irene Giacomelli, Somesh Jha, C. David Page
Linear regression is an important statistical tool that models the relationship between some explanatory values and an outcome value using a linear function. In many current applications (e.g. predictive modelling in personalized healthcare), these values represent sensitive data owned by several different parties that are unwilling to share them. In this setting, training a linear regression model becomes challenging and requires specific cryptographic solutions. In this work, we propose a new system that can train two different variants of linear regression (i.e. ridge regression and lasso regression) on a dataset obtained by merging a finite number of private datasets. Our system ensures that no extra information on any single private dataset is revealed to the entities performing the learning algorithm. Moreover, our solution is based on efficient cryptographic tools (e.g. Paillier's scheme and pseudorandom generators).
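Paillier's scheme, one of the tools named above, is additively homomorphic: multiplying ciphertexts adds the underlying plaintexts, which is what lets parties aggregate regression statistics without revealing their data. A toy sketch of that property, with tiny insecure parameters chosen purely for illustration:

```python
import random
from math import gcd

# Toy Paillier keypair -- tiny primes for illustration only, not secure.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # L(g^lam mod n^2)^(-1) mod n

def enc(m):
    """Encrypt m in Z_n; the randomness r makes encryption probabilistic."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n
```

Multiplying two ciphertexts modulo n^2 yields an encryption of the sum of the plaintexts, so an untrusted aggregator can sum contributions it cannot read.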
Last updated:  2018-02-01
Updating key size estimations for pairings
Razvan Barbulescu, Sylvain Duquesne
Recent progress on NFS has forced a new estimation of the security of pairings. In this work we study the best attacks against some of the most popular pairings and propose new key sizes using an analysis that is more precise than the one in a recent article of Menezes, Sarkar and Singh. We also select pairing-friendly curves for standard security levels.
Last updated:  2017-04-18
Faster Homomorphic Function Evaluation using Non-Integral Base Encoding
Charlotte Bonte, Carl Bootland, Joppe W. Bos, Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
In this paper we present an encoding method for fixed-point numbers tailored for homomorphic function evaluation. The choice of the degree of the polynomial modulus used in all popular somewhat homomorphic encryption schemes is dominated by security considerations, while with the current encoding techniques the correctness requirement allows for much smaller values. We introduce a generic encoding method using expansions with respect to a non-integral base, which exploits this large degree to reduce the growth of the coefficients when performing homomorphic operations. In practice this allows one to choose a smaller plaintext coefficient modulus, which results in a significant reduction of the running time. We illustrate our approach by applying this encoding in the setting of homomorphic electricity load forecasting for the smart grid, which results in a speed-up by a factor 13 compared to previous work, where encoding was done using balanced ternary expansions.
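The balanced ternary baseline mentioned at the end writes an integer with digits in {-1, 0, 1}, keeping coefficients small; the paper's non-integral base expansions generalize this idea. A sketch of the baseline encoding and its polynomial-style decoding:

```python
def balanced_ternary(m):
    """Digits d_i in {-1, 0, 1}, least significant first, with sum(d_i * 3^i) == m."""
    digits = []
    while m != 0:
        r = m % 3
        if r == 2:
            r = -1                # use digit -1 and carry into the next position
        digits.append(r)
        m = (m - r) // 3
    return digits

def evaluate(digits, base=3):
    """Decode by evaluating the digit polynomial at the base."""
    return sum(d * base ** i for i, d in enumerate(digits))
```

In the homomorphic setting the digits become plaintext polynomial coefficients, so smaller digits directly translate into slower coefficient growth under homomorphic operations.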
Last updated:  2017-04-18
Reforgeability of Authenticated Encryption Schemes
Christian Forler, Eik List, Stefan Lucks, Jakob Wenzel
This work pursues the idea of multi-forgery attacks as introduced by Ferguson in 2002. We recoin reforgeability for the complexity of obtaining further forgeries once a first forgery has succeeded. First, we introduce a security notion for the integrity (in terms of reforgeability) of authenticated encryption schemes: j-Int-CTXT, which is derived from the notion INT-CTXT. Second, we define an attack scenario called j-IV-Collision Attack (j-IV-CA), wherein an adversary tries to construct j forgeries provided a first forgery. The term collision in the name stems from the fact that we assume the first forgery to be the result from an internal collision within the processing of the associated data and/or the nonce. Next, we analyze the resistance to j-IV-CAs of classical nonce-based AE schemes (CCM, CWC, EAX, GCM) as well as all 3rd-round candidates of the CAESAR competition. The analysis is done in the nonce-respecting and the nonce-ignoring setting. We find that none of the considered AE schemes provides full built-in resistance to j-IV-CAs. Based on this insight, we briefly discuss two alternative design strategies to resist j-IV-CAs.
Last updated:  2017-04-17
Optimal attacks on qubit-based Quantum Key Recycling
Daan Leermakers, Boris Skoric
Quantum Key Recycling (QKR) is a quantum-cryptographic primitive that allows one to re-use keys in an unconditionally secure way. By removing the need to repeatedly generate new keys it improves communication efficiency. Škorić and de Vries recently proposed a QKR scheme based on 8-state encoding (four bases). It does not require quantum computers for encryption/decryption but only single-qubit operations. We provide a missing ingredient in the security analysis of this scheme in the case of noisy channels: accurate bounds on the privacy amplification. We determine optimal attacks against the message and against the key, for 8-state encoding as well as 4-state and 6-state conjugate coding. We show that the Shannon entropy analysis for 8-state encoding reduces to the analysis of Quantum Key Distribution, whereas 4-state and 6-state suffer from additional leaks that make them less effective. We also provide results in terms of the min-entropy. Overall, 8-state encoding yields the highest capacity.
Last updated:  2017-12-01
Distinguisher-Dependent Simulation in Two Rounds and its Applications
Abhishek Jain, Yael Tauman Kalai, Dakshita Khurana, Ron Rothblum
We devise a novel simulation technique that makes black-box use of the adversary as well as the distinguisher. Using this technique we construct several round-optimal protocols, many of which were previously unknown even using non-black-box simulation techniques: - Two-round witness indistinguishable (WI) arguments for NP from different assumptions than previously known. - Two-round arguments and three-round arguments of knowledge for NP that achieve strong WI, witness hiding (WH) and distributional weak zero knowledge (WZK) properties in a setting where the instance is only determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers. - Three-round two-party computation satisfying input-indistinguishable security as well as a weaker notion of simulation security against malicious adversaries. - Three-round extractable commitments with guaranteed correctness of extraction from polynomial hardness assumptions. Our three-round protocols can be based on DDH or QR or N-th residuosity and our two-round protocols require quasi-polynomial hardness of the same assumptions. In particular, prior to this work, two-round WI arguments for NP were only known based on assumptions such as the existence of trapdoor permutations, hardness assumptions on bilinear maps, or the existence of program obfuscation; we give the first construction based on (quasi-polynomial) DDH. Our simulation technique bypasses known lower bounds on black-box simulation [Goldreich-Krawczyk'96] by using the distinguisher's output in a meaningful way. We believe that this technique is likely to find more applications in the future.
Last updated:  2017-04-17
Maliciously Secure Multi-Client ORAM
Matteo Maffei, Giulio Malavolta, Manuel Reinert, Dominique Schröder
Oblivious RAM (ORAM) has emerged as an enabling technology to secure cloud-based storage services. The goal of this cryptographic primitive is to conceal not only the data but also the access patterns from the server. While the early constructions focused on a single client scenario, a few recent works have focused on a setting where multiple clients may access the same data, which is crucial to support data sharing applications. All these works, however, either do not consider malicious clients or they significantly constrain the definition of obliviousness and the system's practicality. It is thus an open question whether a natural definition of obliviousness can be enforced in a malicious multi-client setting and, if so, what the communication and computational lower bounds are. In this work, we formalize the notion of maliciously secure multi-client ORAM, we prove that the server-side computational complexity of any secure realization has to be , and we present a cryptographic instantiation of this primitive based on private information retrieval techniques, which achieves an communication complexity. We further devise an efficient access control mechanism, built upon a novel and generally applicable realization of plaintext equivalence proofs for ciphertext vectors. Finally, we demonstrate how our lower bound can be bypassed by leveraging a trusted proxy, obtaining logarithmic communication and server-side computational complexity. We implemented our scheme and conducted an experimental evaluation, demonstrating the feasibility of our approach.
Last updated:  2017-11-02
Evaluating Bernstein-Rabin-Winograd Polynomials
Sebati Ghosh, Palash Sarkar
We describe an algorithm which can efficiently evaluate Bernstein-Rabin-Winograd (BRW) polynomials. The presently best known complexity of evaluating a BRW polynomial on field elements is field multiplications. Typically, a field multiplication consists of a basic multiplication followed by a reduction. The new algorithm requires basic multiplications and reductions. Based on the new algorithm for evaluating BRW polynomials, we propose two new hash functions and with digest sizes bits and bits respectively. The practicability of these hash functions is demonstrated by implementing them using instructions available on modern Intel processors. Timing results obtained from the implementations suggest that based hashing compares favourably to the highly optimised implementation by Gueron of Horner's rule based hash function.
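As a reference point, the usual recursive definition of a BRW polynomial over a field can be sketched as follows (illustrative only, and following our reading of the standard definition; the paper's contribution is an algorithm that evaluates the same polynomial with fewer reductions):

```python
# Recursive BRW polynomial evaluation over GF(P) -- the textbook definition,
# not the paper's faster algorithm. P is an illustrative Mersenne prime.
P = 2**61 - 1

def brw(key, msg):
    t = len(msg)
    if t == 0:
        return 0
    if t == 1:
        return msg[0] % P
    if t == 2:
        return (msg[0] * key + msg[1]) % P
    if t == 3:
        return ((key + msg[0]) * (key * key % P + msg[1]) + msg[2]) % P
    d = 4
    while 2 * d <= t:             # d = largest power of two with 4 <= d <= t
        d *= 2
    left = brw(key, msg[:d - 1])
    right = brw(key, msg[d:])
    return (left * (pow(key, d, P) + msg[d - 1]) + right) % P
```

The split at the largest power of two is what keeps the multiplication count low relative to Horner's rule, at the price of the irregular recursion that the paper's algorithm tames.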
Last updated:  2017-04-17
MQ Signatures for PKI
Alan Szepieniec, Ward Beullens, Bart Preneel
It is well known that multivariate quadratic (MQ) digital signature schemes have small signatures but huge public keys. However, in some settings, such as public key infrastructure (PKI), both variables are important. This paper explains how to transform any MQ signature scheme into one with a much smaller public key at the cost of a larger signature. The transformation aims to reduce the combined size of the public key and signature and this metric is improved significantly. The security of our transformation reduces to that of the underlying MQ signature scheme in the random oracle model. It is possible to decrease signature sizes even further but then its security is related to the conjectured hardness of a new problem, the Approximate MQ Problem (AMQ).
Last updated:  2017-04-17
Labeled Homomorphic Encryption: Scalable and Privacy-Preserving Processing of Outsourced Data
Manuel Barbosa, Dario Catalano, Dario Fiore
We consider the problem of privacy-preserving processing of outsourced data, where a Cloud server stores data provided by one or multiple data providers and then is asked to compute several functions over it. We propose an efficient methodology that solves this problem with the guarantee that an honest-but-curious Cloud learns no information about the data and the receiver learns nothing more than the results. Our main contribution is the proposal and efficient instantiation of a new cryptographic primitive called Labeled Homomorphic Encryption (labHE). The fundamental insight underlying this new primitive is that homomorphic computation can be significantly accelerated whenever the program that is being computed over the encrypted data is known to the decrypter and is not secret---previous approaches to homomorphic encryption do not allow for such a trade-off. Our realization and implementation of labHE targets computations that can be described by degree-two multivariate polynomials, which capture an important range of statistical functions. As a specific application, we consider the problem of privacy-preserving Genetic Association Studies (GAS), which require computing risk estimates for given traits from statistically relevant features in the human genome. Our approach allows performing GAS efficiently, non-interactively, and without compromising either the privacy of patients or potential intellectual property that test laboratories may want to protect.
Last updated:  2024-12-13
CHVote Protocol Specification
Rolf Haenni, Reto E. Koenig, Philipp Locher, and Eric Dubuis
This document provides a self-contained, comprehensive, and fully-detailed specification of a new cryptographic voting protocol designed for political elections in Switzerland. The document describes every relevant aspect and every necessary technical detail of the computations and communications performed by the participants during the protocol execution. To support the general understanding of the cryptographic protocol, the document accommodates the necessary mathematical and cryptographic background information. By providing this information to the maximal possible extent, it serves as an ultimate companion document for the developers in charge of implementing this system. It may also serve as a manual for developers trying to implement independent election verification software. The decision to make this document public even enables implementations by third parties, for example by students trying to develop a clone of the system for scientific evaluations or to implement protocol extensions to achieve additional security properties. In any case, the target audience of this document consists of system designers, software developers, and cryptographic experts.
Last updated:  2018-07-22
Family of PRGs based on Collections of Arithmetic Progressions
Ch. Srikanth, C. E. Veni Madhavan
We consider the following mathematical object: a collection of arithmetic progressions with elements satisfying the property that terms of and progressions of the collection are multiplicative inverses of each other modulo the term of progression. Under a certain condition on the common differences of the progressions, such a collection is uniquely generated from a pair of co-prime seed integers. The object is closely connected to the standard Euclidean gcd algorithm. In this work, we present one application of this object: a novel construction of a new family of pseudorandom number generators (PRG) or symmetric key ciphers. We present an authenticated encryption scheme as another application of the defined object. In this paper, we pay attention to a basic symmetric key method of the new family. The security of the method is based on a well-defined hard problem. Interestingly, a special case of the hard problem (defined as Problem A) is shown to be computationally equivalent to the problem of factoring integers. The work leaves some open issues, which are being addressed in our ongoing work.
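The multiplicative-inverse property and the link to the Euclidean gcd algorithm both come down to the extended Euclidean algorithm, which for co-prime a and m produces a^(-1) mod m; a standard sketch (not the paper's construction):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1      # update Bezout coefficients alongside gcd
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def modinv(a, m):
    """Multiplicative inverse of a modulo m, when gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no inverse")
    return x % m
```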
Last updated:  2018-08-23
Revocable Identity-based Encryption with Bounded Decryption Key Exposure Resistance: Lattice-based Construction and More
Atsushi Takayasu, Yohei Watanabe
In general, identity-based encryption (IBE) does not support an efficient revocation procedure. In ACM CCS'08, Boldyreva et al. proposed revocable identity-based encryption (RIBE), which enables us to efficiently revoke (malicious) users in IBE. In PKC 2013, Seo and Emura introduced an additional security notion for RIBE, called decryption key exposure resistance (DKER). Roughly speaking, RIBE with DKER guarantees that the security is not compromised even if an adversary gets (a number of) short-term decryption keys. Therefore, DKER captures realistic scenarios and is an important notion. In this paper, we introduce bounded decryption key exposure resistance (B-DKER), where an adversary is allowed to get a-priori bounded number of short-term decryption keys in the security game.B-DKER is a weak version of DKER, but it seems to be sufficient for practical use. We obtain the following results: (1) We propose a lattice-based (anonymous) RIBE scheme with B-DKER, which is the first lattice-based construction resilient to decryption key exposure. Our lattice-based construction is secure under the LWE assumption. A previous lattice-based construction satisfies anonymity but is vulnerable even with a single decryption key exposure. (2) We propose the first pairing-based RIBE scheme that simultaneously realizes anonymity and B-DKER. Our pairing-based construction is secure under the SXDH assumption. Our two constructions rely on cover free families to satisfy B-DKER, whereas all the existing works rely on the key re-randomization property to achieve DKER.
Last updated:  2019-10-29
Approximate Polynomial Common Divisor Problem Relates to Noisy Multipolynomial Reconstruction
Jun Xu, Santanu Sarkar, Lei Hu
In this paper, we investigate the hardness of the approximate polynomial common divisor problem, which is regarded as a polynomial analogy of the approximate integer common divisor problem. In order to solve this problem, we present a simple method by using the polynomial lattice reduction algorithm and contain complete theoretical analyses. Further, we propose an improved lattice attack to reduce both space and time costs. Moreover, these two attacking methods can be directly applied to solving the noisy multipolynomial reconstruction problem in the field of error-correcting codes. On the basis of the above situations, our improved lattice attack performs fastest.
Last updated:  2018-05-11
How Fast Can We Obfuscate Using Ideal Graded Encoding Schemes
Dingfeng Ye, Peng Liu, Jun Xu
In this work, we present a new obfuscator using a Graded Encoding Scheme (GES) with a binary slot. We characterize a class of circuits taking locally keyed input (each input bit of the circuit is a keyed function over c>1 bits of a binary-variable vector X of length n, where c is called the locality), called ideal functions, such that any function of algebraic degree d (called a d-function) over them can be obfuscated with multilinearity \mu=(d+1)n/c. Next we show that obfuscation of a general circuit C can be bootstrapped by O(n)-functions (the circuit (called RE) composing a garbled circuit (GC) with a pseudorandom function (PRF)), following an approach similar to that of Zimmerman and Applebaum et al., assuming a PRF (or more precisely an RE) exists among d-functions with constant d. To instantiate the above scheme, we achieve the following: 1. A concrete GC of algebraic degree 3 over its random bits, which has output size no more than 20\lambda|C| and random tape length about 10\lambda|C|, where \lambda is the security parameter and |C| denotes the number of gates of the circuit C. 2. A candidate d-function construction, where we argue that d=1 suffices to stop linear distinguishing attacks and d=2 seems enough for a fully secure PRF. 3. Instantiation of the GES with a simplified version of the CLT multilinear map, and various techniques that further reduce \mu of the core obfuscator, cost-equivalently, to dn/(2c)+1 in cases of our interest. If we replace the PRF with d-functions, then we get various heuristic obfuscation-friendly REs, and thus general obfuscators with explicit complexities. For the most optimistic choice, we have \mu=1.5n'/c+2.5, n'\approx n+\log|C|+\log\lambda, where n is the number of input bits of C and c is a selectable constant which results in a 2^c/c times increase of the key size of the RE.
Our general obfuscator is VBB secure assuming that our RE is secure and our simplified CLT map is a secure instantiation of our GES (defined relative to known attacks). We leave these assumptions with concrete parameter sets as open challenges. We illustrate the efficiency of our methods with some examples: 1. Our obfuscated AES (c=13, \mu=20.5) has code size < 1.5\times 10^{17} bits, whereas no implementable solution was known prior to this work. 2. We can practically obfuscate conjunction functions for n=64, while the latest implementation can only handle n=32 with comparable resources. We also verify the security against algebraic attacks in this example.
Last updated:  2017-04-19
Speeding up Huff Form of Elliptic Curves
Neriman Gamze Orhon, Huseyin Hisil
This paper presents faster inversion-free point addition formulas for the curve y*(1+a*x^2)=c*x*(1+d*y^2). The proposed formulas improve the point doubling operation count record from 6M+5S to 8M and mixed-addition operation count record from 10M to 8M. Both sets of formulas are shown to be 4-way parallel, leading to an effective cost of 2M per either of the group operations.
Last updated:  2017-09-18
Embed-Augment-Recover: Function Private Predicate Encryption from Minimal Assumptions in the Public-Key Setting
Sikhar Patranabis, Debdeep Mukhopadhyay
We present a new class of public-key predicate encryption schemes that are provably function private in the standard model under well-known cryptographic assumptions, and assume predicate distributions satisfying realistic min-entropy requirements. More concretely, we present public-key constructions for identity-based encryption (IBE) and inner-product encryption (IPE) that are computationally function private in the standard model under a family of weaker variants of the DLIN assumption. Existing function private constructions in the public-key setting impose highly stringent requirements on the min-entropy of predicate distributions, thereby limiting their applicability in the context of real-world predicates. For example, the statistically function private constructions of Boneh, Raghunathan and Segev (CRYPTO'13 and ASIACRYPT'13) are inherently restricted to predicate distributions with min-entropy roughly proportional to \lambda, where \lambda is the security parameter. Our constructions allow relaxing this min-entropy requirement to , while achieving a computational notion of function privacy against probabilistic polynomial-time adversaries, which suffices for most real-world applications. Our constructions also avoid the need for strong assumptions such as indistinguishability obfuscation.
Last updated:  2017-04-20
Key-Aggregate Searchable Encryption with Constant-Size Trapdoors for Fine-Grained Access Control in the Cloud
Sikhar Patranabis, Debdeep Mukhopadhyay
Fine-grained access control, especially in shared data environments such as the cloud, is a common scenario. Suppose a data owner encrypts and stores a collection of documents in the cloud, and wishes to grant access to a subset of these to a data user. The data user must therefore be able to perform search operations only on that subset of documents, and nothing else. For example, the user may want to store the documents in separate folders pertaining to work, family, personal matters etc., and selectively grant search access to one or more folders among different users. This is commonly encountered in document sharing environments such as Dropbox and Google Drive. Most existing searchable encryption schemes today assume that a user has search permissions over the entire document collection, and are hence not designed explicitly to address such a controlled-search scenario. The only previous work in this direction, by Cui et al., possesses inherent security weaknesses that can be exploited to trivially break the privacy of their system. In this paper, we provide a novel, efficient and secure solution to this problem by introducing a new public-key searchable encryption primitive referred to as key-aggregate searchable encryption (KASE). KASE is secure under well-defined cryptographic assumptions, and requires optimal network communication between the server and the data owners/data users. KASE is the first public-key searchable encryption scheme to achieve performance and security on par with the most efficient symmetric-key searchable encryption constructions. Additionally, while all existing schemes produce trapdoors that grow linearly with the size of the document collection, KASE produces constant-size trapdoors that are independent of the document collection size. KASE is also efficient and scalable with respect to data updates, making it ideal for use in dynamic cloud-based search applications.
Last updated:  2017-08-31
Solidus: Confidential Distributed Ledger Transactions via PVORM
Ethan Cecchetti, Fan Zhang, Yan Ji, Ahmed Kosba, Ari Juels, Elaine Shi
Blockchains and more general distributed ledgers are becoming increasingly popular as efficient, reliable, and persistent records of data and transactions. Unfortunately, they ensure reliability and correctness by making all data public, raising confidentiality concerns that eliminate many potential uses. In this paper we present Solidus, a protocol for confidential transactions on public blockchains, such as those required for asset transfers with on-chain settlement. Solidus operates in a framework based on real-world financial institutions: a modest number of banks each maintain a large number of user accounts. Within this framework, Solidus hides both transaction values and the transaction graph (i.e., the identities of transacting entities) while maintaining the public verifiability that makes blockchains so appealing. To achieve strong confidentiality of this kind, we introduce the concept of a Publicly-Verifiable Oblivious RAM Machine (PVORM). We present a set of formal security definitions for both PVORM and Solidus and show that our constructions are secure. Finally, we implement Solidus and present a set of benchmarks indicating that the system is efficient in practice.
Last updated:  2018-06-13
Exploring Potential 6LoWPAN Traffic Side Channels
Yan Yan, Elisabeth Oswald, Theo Tryfonas
The Internet of Things (IoT) has become a reality: small connected devices feature in everyday objects including children's toys, TVs, fridges, heating control units, etc. Supply chains feature sensors throughout, and significant investments go into researching next-generation healthcare, where sensors monitor wellbeing. A future in which sensors and other (small) devices interact to create sophisticated applications seems just around the corner. All of these applications have a fundamental need for security and privacy, and thus cryptography is deployed as part of an attempt to secure them. In this paper we explore a particular type of flaw, namely side channel information at the protocol level, that can exist despite the use of cryptography. Our research investigates the potential for utilising packet length and timing information (both are easily obtained) to extract interesting information from a system. We find that using these side channels we can distinguish between devices and between different programs running on the same device, including which sensor is accessed. We also find it is possible to distinguish between different types of ICMP messages despite the use of encryption. Based on our findings, we provide a set of recommendations to efficiently mitigate these side channels in the IoT context.
Last updated:  2017-04-14
Multimodal Indexable Encryption for Mobile Cloud-based Applications (Extended Version)
Bernardo Ferreira, João Leitão, Henrique Domingos
In this paper we propose MIE, a Multimodal Indexable Encryption framework that for the first time allows mobile applications to securely outsource the storage and search of their multimodal data (i.e. data containing multiple media formats) to public clouds with privacy guarantees. MIE is designed as a distributed framework architecture, leveraging shared cloud repositories that can be accessed simultaneously by multiple users. At its core, MIE relies on Distance Preserving Encodings (DPE), a novel family of encoding algorithms with cryptographic properties that we also propose. By applying DPE to multimodal data features, MIE enables high-cost clustering and indexing operations to be handled by cloud servers in a privacy-preserving way. Experiments show that MIE achieves better performance and scalability when compared with the state of the art, with measurable impact on mobile resources and battery life.
Last updated:  2017-04-14
Post-quantum cryptography---dealing with the fallout of physics success
Daniel J. Bernstein, Tanja Lange
Cryptography is essential for the security of Internet communication, cars, and implanted medical devices. However, many commonly used cryptosystems will be completely broken once big quantum computers exist. Post-quantum cryptography is cryptography under the assumption that the attacker has a large quantum computer; post-quantum cryptosystems strive to remain secure even in this scenario. This relatively young research area has seen some successes in identifying mathematical operations for which quantum algorithms offer little speedup, and then building cryptographic systems around those. The central challenge in post-quantum cryptography is to meet demands for cryptographic usability and flexibility without sacrificing trust.
Last updated:  2017-04-23
A Generic Approach to Identity-based Sequential Aggregate Signatures: New constructions from 2-level HIBE Schemes
Yanqing Yao, Hua Guo, Zhoujun Li
Identity-based sequential aggregate signature (IBSAS) schemes are usually applied to secure network routing and sensor networks, since they allow multiple signers to sequentially produce a short signature of different messages to reduce bandwidth overhead and storage space for signatures, and allow signers to attest to these messages as well as the order in which they were signed using their identities. In CCS'07, Boldyreva et al. introduced this concept and constructed the first IBSAS scheme in the random oracle model. After that, a couple of IBSAS schemes were proposed and proved secure. Unfortunately, none of them is based on a standard computational problem and secure in the standard model (i.e., without random oracles). How to construct such a scheme has remained an open problem. In this paper, we propose a generic construction of IBSAS schemes by employing 2-level Hierarchical Identity-based Encryption schemes, and then prove its security in the security model proposed by Boldyreva et al. in CCS'07. Afterwards, we instantiate the generic construction to obtain a concrete IBSAS scheme secure under the Computational Diffie-Hellman (CDH) assumption in the standard model, thus solving the above open problem. An extra fruit of our generic construction is that it can be used to build the first lattice-based IBSAS scheme, which is secure in the random oracle model. Finally, we show performance comparisons between our schemes and previous ones.
Last updated:  2019-05-23
Limits on Low-Degree Pseudorandom Generators (Or: Sum-of-Squares Meets Program Obfuscation)
Boaz Barak, Zvika Brakerski, Ilan Komargodski, Pravesh K. Kothari
Consider a pseudorandom generator with outputs, whose seed contains blocks of bits each. Further, assume that this PRG has block-locality , i.e. each output bit depends on at most out of the blocks. The question of the maximum stretch that such PRGs can have, as a function of recently emerged in the context of constructing provably secure program obfuscation. It also relates to the question of refuting constraint satisfaction problems on predicates with large alphabets in complexity theory. We show that such -block local PRGs can have output length at most , by presenting a polynomial time algorithm that distinguishes inputs of the form (for any ) from inputs where each coordinate is sampled independently according to the marginal distributions of the coordinates of . As a corollary, we refute some conjectures recently made in the context of constructing provably secure indistinguishability obfuscation (iO). This includes refuting the assumptions underlying Lin and Tessaro's \cite{LinT17} recently proposed candidate iO from bilinear maps. Specifically, they assumed the existence of a secure pseudorandom generator as above for large enough with . (Following this work, and an independent work of Lombardi and Vaikuntanthan \cite{LombardiV17a}, Lin and Tessaro retracted the bilinear maps based candidate from their manuscript.) Our results follow from a general framework that handles more general class of pseudorandom generators. Namely they work even if the outputs are not binary valued and are computed using low-degree polynomial over (instead of the more restrictive local/block-local assumption). Specifically, we prove that for every function ( = reals), if every output of is a polynomial (over the real numbers ) of degree at most of at most monomials and , then there is a polynomial time algorithm for the distinguishing task above. This implies that any such map cannot be a pseudorandom generator. 
Our results yield, in particular, that natural modifications to the notion of generators that are still sufficient for obtaining indistinguishability obfuscation from bilinear maps run into similar barriers. Our algorithms follow the Sum of Squares (SoS) paradigm, and in most cases can even be defined more simply using a semidefinite program. We complement our algorithm by presenting a class of candidate generators with block-wise locality and constant block size that resists both Gaussian elimination and sum of squares (SoS) algorithms whenever . This class is extremely easy to describe: Let be any simple non-abelian group, and interpret the blocks of the seed as elements in . The description of the pseudorandom generator is a sequence of triples of indices chosen at random, and each output of the generator is of the form .
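The candidate is simple enough to sketch in code. Below is a minimal, illustrative rendering; our own choice of the alternating group A5 as the simple non-abelian group and of all sizes, since the abstract leaves the exact parameters to the paper:

```python
import itertools
import random

# Blocks live in the alternating group A5, a simple non-abelian group;
# the group choice and all sizes below are illustrative assumptions.
def parity(p):
    return sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2

A5 = [p for p in itertools.permutations(range(5)) if parity(p) == 0]

def compose(p, q):
    # the group operation: composition of permutations
    return tuple(p[q[i]] for i in range(5))

def describe(n, m, rng):
    # public description: m random index triples (i, j, k)
    return [(rng.randrange(n), rng.randrange(n), rng.randrange(n))
            for _ in range(m)]

def prg(seed, triples):
    # seed: n blocks (group elements); each output is x_i * x_j * x_k
    return [compose(compose(seed[i], seed[j]), seed[k])
            for (i, j, k) in triples]

rng = random.Random(0)
triples = describe(n=16, m=64, rng=rng)
seed = [rng.choice(A5) for _ in range(16)]
out = prg(seed, triples)
```

By closure of the group operation, every output is again an element of A5; the stretch here (16 blocks in, 64 elements out) is arbitrary.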
Last updated:  2017-04-11
Constructing Multidimensional Differential Addition Chains and their Applications
Aaron Hutchinson, Koray Karabina
We propose new algorithms for constructing multidimensional differential addition chains and for performing multidimensional scalar point multiplication based on these chains. Our algorithms work in any dimension and offer some key efficiency and security features. In particular, our scalar point multiplication algorithm is uniform, it has high potential for constant time implementation, and it can be parallelized. It also allows trading speed for precomputation cost and storage requirements. These key features and our theoretical estimates indicate that this new algorithm may offer significant performance advantages over the existing point multiplication algorithms in practice. We also report some experimental results and verify some of our theoretical findings.
Last updated:  2017-04-20
KDM-Secure Public-Key Encryption from Constant-Noise LPN
Shuai Han, Shengli Liu
The Learning Parity with Noise (LPN) problem has found many applications in cryptography due to its conjectured post-quantum hardness and simple algebraic structure. Over the years, constructions of different public-key primitives were proposed from LPN, but most of them are based on the LPN assumption with _low noise_ rate rather than _constant noise_ rate. A recent breakthrough was made by Yu and Zhang (Crypto'16), who constructed the first Public-Key Encryption (PKE) from constant-noise LPN. However, the problem of designing a PKE with _Key-Dependent Message_ (KDM) security from constant-noise LPN is still open. In this paper, we present the first PKE with KDM-security assuming certain sub-exponential hardness of constant-noise LPN, where the number of users is predefined. The technical tool is two types of _multi-fold LPN on squared-log entropy_, one having _independent secrets_ and the other _independent sample subspaces_. We establish the hardness of the multi-fold LPN variants on constant-noise LPN. Two squared-logarithmic entropy sources for multi-fold LPN are carefully chosen, so that our PKE is able to achieve correctness and KDM-security simultaneously.
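For readers unfamiliar with the assumption, the following minimal sketch generates constant-noise LPN samples; the dimension, sample count and noise rate are illustrative choices, not the paper's parameters:

```python
import random

def lpn_samples(s, num, noise_rate, rng):
    # samples (a, <a, s> + e mod 2) with constant Bernoulli noise rate;
    # "constant" means the rate does not shrink as the dimension grows
    n = len(s)
    out = []
    for _ in range(num):
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < noise_rate else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out

rng = random.Random(1)
secret = [rng.randrange(2) for _ in range(32)]
samples = lpn_samples(secret, num=100, noise_rate=0.125, rng=rng)
```

The LPN assumption is that such samples are indistinguishable from uniform pairs; this snippet only shows the sample distribution itself.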
Last updated:  2017-04-11
Perfectly Secure Message Transmission Scheme against Rational Adversaries
Maiki Fujita, Takeshi Koshiba
Secure Message Transmission (SMT) is a two-party cryptographic scheme by which a sender securely and reliably sends messages to a receiver using channels. Suppose that an adversary corrupts at most out of channels and eavesdrops on or tampers with the corrupted channels. It is known that if then perfect SMT (PSMT) in the information-theoretic sense is achievable, and if then no PSMT scheme can be constructed. If we are allowed to use a public channel in addition to the normal channels, we can achieve almost reliable SMT (ARSMT), which admits transmission failures of small probability, against corruptions. In the standard setting in cryptography, the participants are classified into honest ones and corrupted ones: every honest participant follows the protocol, but corrupted ones are controlled by the adversary and behave maliciously. To model more realistic settings, the notion of rationality from game theory is often incorporated into cryptography. In this paper, we first consider a ``rational adversary'' who behaves according to his own preference in SMT. We show that it is possible to achieve PSMT even against any corruptions under some reasonable settings for rational adversaries.
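The basic privacy mechanism behind such schemes can be illustrated by splitting a message into additive XOR shares, one per channel, so that an eavesdropper on any proper subset of channels learns nothing; this toy sketch (our own illustration, ignoring the reliability and tampering side entirely) shows only the sharing step:

```python
import random

def share(message, n, rng):
    # n-out-of-n additive (XOR) sharing: one share per channel;
    # any n-1 shares are uniformly random and reveal nothing
    shares = [bytes(rng.randrange(256) for _ in message) for _ in range(n - 1)]
    last = bytearray(message)
    for sh in shares:
        for i, byte in enumerate(sh):
            last[i] ^= byte
    return shares + [bytes(last)]

def reconstruct(shares):
    # XOR of all n shares recovers the message
    out = bytearray(len(shares[0]))
    for sh in shares:
        for i, byte in enumerate(sh):
            out[i] ^= byte
    return bytes(out)

rng = random.Random(0)
shares = share(b"attack at dawn", 5, rng)
```

A full (P)SMT protocol additionally needs error correction or authentication across channels to handle tampering, which this sketch omits.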
Last updated:  2022-04-25
Faster Gaussian Sampling for Trapdoor Lattices with Arbitrary Modulus
Nicholas Genise, Daniele Micciancio
We present improved algorithms for Gaussian preimage sampling using the lattice trapdoors of (Micciancio and Peikert, CRYPTO 2012). The MP12 work only offered a highly optimized algorithm for the on-line stage of the computation in the special case when the lattice modulus is a power of two. For arbitrary modulus , the MP12 preimage sampling procedure resorted to general lattice algorithms with complexity cubic in the bitsize of the modulus (or quadratic, but with substantial preprocessing and storage overheads). Our new preimage sampling algorithm (for any modulus ) achieves linear complexity, and has very modest storage requirements. As an additional contribution, we give a new off-line quasi-linear time perturbation sampling algorithm, with performance similar to the (expected) running time of an efficient method proposed by (Ducas and Nguyen, Asiacrypt 2012) for power-of-two cyclotomics, but without the (matrix factorization) preprocessing and (lattice rounding) postprocessing required by that algorithm. All our algorithms are fairly simple, with small hidden constants, and offer a practical alternative for using the MP12 trapdoor lattices in a broad range of cryptographic applications.
Last updated:  2017-09-12
Efficient Synchronous Byzantine Consensus
Ittai Abraham, Srinivas Devadas, Danny Dolev, Kartik Nayak, Ling Ren
We present new protocols for Byzantine state machine replication and Byzantine agreement in the synchronous and authenticated setting. The celebrated PBFT state machine replication protocol tolerates f Byzantine faults in an asynchronous setting using 3f + 1 replicas, and has since been studied or deployed by numerous works. In this work, we improve the Byzantine fault tolerance threshold to n = 2f + 1 by utilizing a relaxed synchrony assumption. We present a synchronous state machine replication protocol that commits a decision every 3 rounds in the common case. The key challenge is to ensure quorum intersection at one honest replica. Our solution is to rely on the synchrony assumption to form a post-commit quorum of size 2f + 1, which intersects any pre-commit quorum of size f + 1 in at least f + 1 replicas. Our protocol also solves synchronous authenticated Byzantine agreement in expected 8 rounds. The best previous solution (Katz and Koo, 2006) requires expected 24 rounds. Our protocols may be applied to build Byzantine fault tolerant systems or improve cryptographic protocols such as cryptocurrencies when synchrony can be assumed.
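The quorum-intersection step can be verified by a brute-force counting argument; a minimal sketch for f = 2 (the parameter choice is illustrative):

```python
from itertools import combinations

# With n = 2f + 1 replicas, any post-commit quorum of size 2f + 1
# intersects any pre-commit quorum of size f + 1 in at least
# (2f+1) + (f+1) - n = f + 1 replicas, hence in at least one honest one.
f = 2
n = 2 * f + 1
replicas = range(n)
for post in combinations(replicas, 2 * f + 1):   # post-commit quorums
    for pre in combinations(replicas, f + 1):    # pre-commit quorums
        inter = set(post) & set(pre)
        assert len(inter) >= f + 1               # quorum intersection
        assert len(inter) - f >= 1               # at least one honest member
```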
Last updated:  2017-10-25
Cube Attacks on Non-Blackbox Polynomials Based on Division Property (Full Version)
Yosuke Todo, Takanori Isobe, Yonglin Hao, Willi Meier
The cube attack is a powerful cryptanalytic technique, especially against stream ciphers. Since analyzing the complicated internal structure of a stream cipher is difficult, the cube attack basically treats the cipher as a blackbox. Therefore, the cube attack is an experimental attack, and we cannot evaluate security when the size of the cube exceeds the experimental range, e.g., 40. In this paper, we propose cube attacks on non-blackbox polynomials. Our attacks are developed by using the division property, which has recently been applied to various block ciphers. The clear advantage is that we can exploit large cube sizes because the cipher is never regarded as a blackbox. We apply the new cube attack to Trivium, Grain128a, ACORN and Kreyvium. As a result, the secret keys of 832-round Trivium, 183-round Grain128a, 704-round ACORN and 872-round Kreyvium are recovered. These are the current best key-recovery attacks against these ciphers.
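The principle behind cube attacks, summing a keyed Boolean function over a cube of public variables to isolate a low-degree superpoly in the key, can be shown on a toy polynomial (our own illustrative example, not a real cipher):

```python
from itertools import product

def f(x, k):
    # toy keyed Boolean function: f = x0*x1*k0 ^ x0*k1 ^ x2 ^ k1
    x0, x1, x2 = x
    k0, k1 = k
    return (x0 & x1 & k0) ^ (x0 & k1) ^ x2 ^ k1

def cube_sum(key, cube_vars=(0, 1), fixed_x2=0):
    # XOR f over all assignments of the cube variables, holding x2 fixed;
    # every term lacking the full monomial x0*x1 cancels, leaving the
    # superpoly k0
    acc = 0
    for bits in product((0, 1), repeat=len(cube_vars)):
        x = [0, 0, fixed_x2]
        for var, bit in zip(cube_vars, bits):
            x[var] = bit
        acc ^= f(x, key)
    return acc
```

In a real attack the cipher is queried as an oracle; the division-property technique of this paper instead bounds the superpoly analytically, enabling cubes far larger than experiment allows.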
Last updated:  2017-04-10
A Zero Knowledge Sumcheck and its Applications
Alessandro Chiesa, Michael A. Forbes, Nicholas Spooner
Many seminal results in Interactive Proofs (IPs) use algebraic techniques based on low-degree polynomials, the study of which is pervasive in theoretical computer science. Unfortunately, known methods for endowing such proofs with zero knowledge guarantees do not retain this rich algebraic structure. In this work, we develop algebraic techniques for obtaining zero knowledge variants of proof protocols in a way that leverages and preserves their algebraic structure. Our constructions achieve unconditional (perfect) zero knowledge in the Interactive Probabilistically Checkable Proof (IPCP) model of Kalai and Raz [KR08] (the prover first sends a PCP oracle, then the prover and verifier engage in an Interactive Proof in which the verifier may query the PCP). Our main result is a zero knowledge variant of the sumcheck protocol [LFKN92] in the IPCP model. The sumcheck protocol is a key building block in many IPs, including the protocol for polynomial-space computation due to Shamir [Sha92], and the protocol for parallel computation due to Goldwasser, Kalai, and Rothblum [GKR15]. A core component of our result is an algebraic commitment scheme, whose hiding property is guaranteed by algebraic query complexity lower bounds [AW09,JKRS09]. This commitment scheme can then be used to considerably strengthen our previous work [BCFGRS16] that gives a sumcheck protocol with much weaker zero knowledge guarantees, itself using algebraic techniques based on algorithms for polynomial identity testing [RS05,BW04]. We demonstrate the applicability of our techniques by deriving zero knowledge variants of well-known protocols based on algebraic techniques. First, we construct zero knowledge IPCPs for NEXP starting with the Multi-prover Interactive Proofs of Babai, Fortnow, and Lund [BFL91]. This result is a direct application of our zero knowledge sumcheck and our algebraic commitment scheme, augmented with the use of `randomized' low-degree extensions. 
We also construct protocols in a more restricted model where the prover and verifier engage in a standard Interactive Proof with oracle access to a uniformly random low-degree polynomial (soundness holds with respect to any oracle). In this setting we achieve zero knowledge variants of the protocols of Shamir and of Goldwasser, Kalai, and Rothblum.
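For intuition about the building block, here is a minimal (non-zero-knowledge) interactive sumcheck for a three-variable multilinear polynomial over a small prime field; the polynomial, field and all names are illustrative assumptions:

```python
import random

P = 97  # small prime field, chosen only for illustration

def f(x1, x2, x3):
    # example multilinear polynomial over F_P
    return (2 * x1 * x2 + x2 * x3 + x1 + 3) % P

DEG = 1  # degree of f in each individual variable

def true_sum():
    # the value the prover claims: sum of f over the Boolean cube
    return sum(f(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P

def lagrange_eval(points, r):
    # evaluate at r the unique polynomial through (0, points[0]), (1, points[1]), ...
    total = 0
    for i, yi in enumerate(points):
        num, den = 1, 1
        for j in range(len(points)):
            if j != i:
                num = num * (r - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(claim, rng):
    rs = []
    for i in range(3):                       # one round per variable
        evals = []
        for t in range(DEG + 1):             # prover: g_i evaluated at 0..DEG
            s = 0
            for rest in range(2 ** (2 - i)): # sum over the remaining variables
                free = [(rest >> k) & 1 for k in range(2 - i)]
                s = (s + f(*(rs + [t] + free))) % P
            evals.append(s)
        # verifier: g_i(0) + g_i(1) must equal the running claim
        assert (evals[0] + evals[1]) % P == claim
        r = rng.randrange(P)                 # verifier's random challenge
        claim = lagrange_eval(evals, r)
        rs.append(r)
    assert claim == f(*rs)                   # final check via one oracle query
    return True
```

The zero-knowledge variants in this paper hide the partial sums that this plain protocol leaks in every round.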
Last updated:  2018-07-25
Provably Secure NTRUEncrypt over More General Cyclotomic Rings
Yang Yu, Guangwu Xu, Xiaoyun Wang
NTRUEncrypt is a fast and standardized lattice-based public key encryption scheme, but it lacks a proof of security. Stehle and Steinfeld (EUROCRYPT 2011) first gave a variant of NTRUEncrypt, denoted by pNE, over power-of-2 cyclotomic rings. The pNE scheme is provably secure assuming the hardness of worst-case problems over ideal lattices. Recently, Yu, Xu and Wang (PKC 2017) proposed a pNE variant over prime cyclotomic rings, but it requires the parameters to be of rather larger sizes. In this paper, working with the canonical embedding, we modify the key generation algorithm of the pNE scheme to make it applicable to general cyclotomic rings. Through an improved analysis, we provide tighter parameters of pNE over prime power cyclotomic rings. To be more specific, even for the general case, our parameters are as good as those obtained by Stehle and Steinfeld for the power-of-2 case; compared to those of Yu, Xu and Wang (PKC 2017), the sizes of our parameters are significantly reduced. Thus our result not only applies to a larger class of rings but also enjoys greater efficiency. In proving our results, we have developed some technical tools which may be of general interest. Some remarks on further extension of the work (e.g., for more general polynomial rings) have also been made.
Last updated:  2017-05-18
Locally Decodable and Updatable Non-Malleable Codes in the Bounded Retrieval Model
Dana Dachman-Soled, Mukul Kulkarni, Aria Shahverdi
In a recent result, Dachman-Soled et al. (TCC '15) proposed a new notion called locally decodable and updatable non-malleable codes, which, informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access. They also considered locally decodable and updatable non-malleable codes that are leakage-resilient, allowing for adversaries who continually leak information in addition to tampering. The bounded retrieval model (BRM) (cf. [Alwen et al., CRYPTO '09] and [Alwen et al., EUROCRYPT '10]) has been studied extensively in the setting of leakage resilience for cryptographic primitives. This threat model assumes that an attacker can learn information about the secret key, subject only to the constraint that the overall amount of leaked information is upper bounded by some value. The goal is then to construct cryptosystems whose secret key length grows with the amount of leakage, but whose runtime (assuming random access to the secret key) is independent of the leakage amount. In this work, we combine the above two notions and construct locally decodable and updatable non-malleable codes in the split-state model, that are secure against bounded retrieval adversaries. Specifically, given leakage parameter l, we show how to construct an efficient, 3-split-state, locally decodable and updatable code (with CRS) that is secure against one-time leakage of any polynomial time, 3-split-state leakage function whose output length is at most l, and one-time tampering via any polynomial-time 3-split-state tampering function. The locality we achieve is polylogarithmic in the security parameter.
Last updated:  2017-08-15
Quantum preimage, 2nd-preimage, and collision resistance of SHA3
Jan Czajkowski, Leon Groot Bruinderink, Andreas Hülsing, Christian Schaffner
SHA3 and its extendable output variant SHAKE belong to the family of sponge functions. In this work, we present formal security arguments for the quantum preimage, 2nd-preimage, and collision resistance of any sponge function. We only assume that the internally used transformation behaves like a random transformation. These are the first formal arguments that sponge functions (incl. SHA3 and SHAKE) are secure in the post-quantum setting. We even go one step further and prove that sponges are collapsing (Unruh, EUROCRYPT'16). Thereby, we can also derive the applicability of sponge functions for collapse-binding commitments. In addition to the security arguments, we also present a quantum collision attack against sponges. The complexity of our attack asymptotically matches the proven lower bound up to a square root.
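The sponge construction itself is easy to sketch. The toy below uses SHA-256 as a stand-in for the internal transformation (in keeping with modeling it as a random transformation); all parameters are illustrative assumptions and this is not SHA3:

```python
import hashlib

RATE, CAPACITY = 8, 24          # toy byte-level parameters (illustrative)
WIDTH = RATE + CAPACITY         # 32-byte state

def transform(state):
    # stand-in for the internal transformation; a real sponge like SHA3
    # uses the Keccak-f permutation here
    return hashlib.sha256(state).digest()[:WIDTH]

def sponge_hash(message, out_len):
    state = bytes(WIDTH)
    # simplified padding: append 0x01, then zeros up to a RATE multiple
    padded = message + b"\x01" + bytes(-(len(message) + 1) % RATE)
    # absorb: XOR each RATE-byte block into the outer part, then transform
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE]
        outer = bytes(a ^ b for a, b in zip(state[:RATE], block))
        state = transform(outer + state[RATE:])
    # squeeze: output RATE bytes at a time, transforming in between
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = transform(state)
    return out[:out_len]
```

The squeeze loop makes the construction extendable-output by design: shorter outputs are prefixes of longer ones.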
Last updated:  2017-10-08
Limits on the Locality of Pseudorandom Generators and Applications to Indistinguishability Obfuscation
Alex Lombardi, Vinod Vaikuntanathan
Lin and Tessaro (ePrint 2017) recently proposed indistinguishability obfuscation (IO) and functional encryption (FE) candidates and proved their security based on two assumptions: a standard assumption on bilinear maps and a non-standard assumption on ``Goldreich-like'' pseudorandom generators. In a nutshell, their second assumption requires the existence of pseudorandom generators for some -size alphabet , each of whose output bits depends on at most two input alphabet symbols, and which achieve sufficiently large stretch. We show polynomial-time attacks against such generators, invalidating the security of the IO and FE candidates. Our attack uses tools from the literature on two-source extractors (Chor and Goldreich, SICOMP 1988) and efficient refutation of random - instances (Charikar and Wirth, FOCS 2004).
Last updated:  2017-04-07
Tortoise and Hares Consensus: the Meshcash Framework for Incentive-Compatible, Scalable Cryptocurrencies
Iddo Bentov, Pavel Hubáček, Tal Moran, Asaf Nadler
We propose Meshcash, a new framework for cryptocurrency protocols that combines a novel, proof-of-work based, permissionless byzantine consensus protocol (the tortoise) that guarantees eventual consensus and irreversibility, with a possibly-faulty but quick consensus protocol (the hare). The construction is modular, allowing any suitable ``hare'' protocol to be plugged in. The combined protocol enjoys best-of-both-worlds properties: consensus is quick if the hare protocol succeeds, but guaranteed even if it is faulty. Unlike most existing proof-of-work based consensus protocols, our tortoise protocol does not rely on leader-election (e.g., the single miner who managed to extend the longest chain). Rather, we use ideas from asynchronous byzantine agreement protocols to gradually converge to a consensus. Meshcash is designed to be race-free: there is no ``race'' to generate the next block, hence honestly-generated blocks are always rewarded. This property, which we define formally as a game-theoretic notion, turns out to be useful in analyzing rational miners' behavior: we prove (using a generalization of the blockchain mining games of Kiayias et al.) that race-free blockchain protocols are incentive-compatible and satisfy linearity of rewards (i.e., a party receives rewards proportional to its computational power). Because Meshcash can tolerate a high block rate regardless of network propagation delays (which will only affect latency), it allows us to lower both the variance and the expected time between blocks for honest miners; together with linearity of rewards, this makes pooled mining far less attractive. Moreover, race-free protocols scale more easily (in terms of transaction rate). This is because the race-free property implies that the network propagation delays are not a factor in terms of rewards, which removes the main impediment to accommodating a larger volume of transactions.
We formally prove that all of our guarantees hold in the asynchronous communication model of Pass, Seeman and shelat, and against a constant fraction of byzantine (malicious) miners; not just rational ones.
Last updated:  2017-09-06
Fast Private Set Intersection from Homomorphic Encryption
Hao Chen, Kim Laine, Peter Rindal
Private Set Intersection (PSI) is a cryptographic technique that allows two parties to compute the intersection of their sets without revealing anything except the intersection. We use fully homomorphic encryption to construct a fast PSI protocol with a small communication overhead that works particularly well when one of the two sets is much smaller than the other, and is secure against semi-honest adversaries. The most computationally efficient PSI protocols have been constructed using tools such as hash functions and oblivious transfer, but a potential limitation with these approaches is the communication complexity, which scales linearly with the size of the larger set. This is of particular concern when performing PSI between a constrained device (cellphone) holding a small set, and a large service provider (e.g. \emph{WhatsApp}), such as in the Private Contact Discovery application. Our protocol has communication complexity linear in the size of the smaller set, and logarithmic in the larger set. More precisely, if the set sizes are , we achieve a communication overhead of . Our running-time-optimized benchmarks show that it takes seconds of online computation, seconds of non-interactive (receiver-independent) pre-processing, and only MB of round trip communication to intersect five thousand -bit strings with million -bit strings. Compared to prior works, this is roughly a -- reduction in communication with minimal difference in computational overhead.
Last updated:  2017-04-25
An Investigation of Sources of Randomness Within Discrete Gaussian Sampling
Séamus Brannigan, Neil Smyth, Tobias Oder, Felipe Valencia, Elizabeth O’Sullivan, Tim Güneysu, Francesco Regazzoni
This paper presents a performance and statistical analysis of random number generators and discrete Gaussian samplers implemented in software. Most lattice-based cryptographic schemes utilise discrete Gaussian sampling and will require a quality random source. We examine a range of candidates for this purpose, including NIST DRBGs, stream ciphers and well-known PRNGs. The performance of these random sources is analysed within 64-bit implementations of Bernoulli, CDT and Ziggurat sampling. In addition, we perform initial statistical testing of these samplers and include an investigation into improper seeding issues and their effect on the Gaussian samplers. Of the NIST-approved Deterministic Random Bit Generators (DRBGs), the AES-based CTR-DRBG produced the best balanced performance in our tests.
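A CDT (cumulative distribution table) sampler, one of the sampler families analysed here, can be sketched as follows; sigma, the tail cut and the use of Python's `random` as the random source are illustrative assumptions:

```python
import math
import random
from bisect import bisect_left

SIGMA, TAIL = 3.0, 36   # Gaussian parameter and tail cut (illustrative)

def build_cdt(sigma, tail):
    # cumulative table of the discrete Gaussian over [-tail, tail],
    # with weights rho(x) = exp(-x^2 / (2 sigma^2))
    weights = [math.exp(-x * x / (2 * sigma * sigma))
               for x in range(-tail, tail + 1)]
    total = sum(weights)
    cdt, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdt.append(acc)
    return cdt

CDT = build_cdt(SIGMA, TAIL)

def sample(rng):
    # draw a uniform value from the random source and binary-search the
    # table; the quality of rng is exactly what the paper investigates
    u = rng.random()
    return min(bisect_left(CDT, u), len(CDT) - 1) - TAIL
```

A production sampler would use fixed-point probabilities and a cryptographic random source rather than floats and `random`.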
Last updated:  2017-04-07
A Terrorist-fraud Resistant and Extractor-free Anonymous Distance-bounding Protocol
Gildas Avoine, Xavier Bultel, Sébastien Gambs, David Gérault, Pascal Lafourcade, Cristina Onete, Jean-Marc Robert
Distance-bounding protocols have been introduced to thwart relay attacks against contactless authentication protocols. In this context, verifiers have to authenticate the credentials of untrusted provers. Unfortunately, these protocols are themselves subject to complex threats such as terrorist-fraud attacks, in which a malicious prover helps an accomplice to authenticate. Provably guaranteeing the resistance of distance-bounding protocols to these attacks is a complex task. The classical countermeasures usually assume that rational provers want to protect their long-term authentication credentials, even with respect to their accomplices. Thus, terrorist-fraud resistant protocols generally rely on artificial extraction mechanisms, ensuring that an accomplice can retrieve the credential of his partnering prover. In this paper, we propose a novel approach to obtain provable terrorist-fraud resistant protocols without assuming that provers have any long-term secret key. Instead, the attacker simply has to replay the information that he has received from his accomplice. Based on this, we present a generic construction for provably secure distance-bounding protocols, and give three instances: (1) an efficient symmetric-key protocol, (2) a public-key protocol protecting the identities of the provers against external eavesdroppers, and finally (3) a fully anonymous protocol protecting the identities of the provers even against malicious verifiers trying to profile them.
Last updated:  2018-01-24
Topology-Hiding Computation on all Graphs
Adi Akavia, Rio LaVigne, Tal Moran
A distributed computation in which nodes are connected by a partial communication graph is called topology-hiding if it does not reveal information about the graph beyond what is revealed by the output of the function. Previous results have shown that topology-hiding computation protocols exist for graphs of constant degree and logarithmic diameter in the number of nodes [Moran-Orlov-Richelson, TCC'15; Hirt et al., Crypto'16] as well as for other graph families, such as cycles, trees, and low circumference graphs [Akavia-Moran, Eurocrypt'17], but the feasibility question for general graphs was open. In this work we positively resolve the above open problem: we prove that topology-hiding computation is feasible for all graphs under either the Decisional Diffie-Hellman or Quadratic-Residuosity assumption. Our techniques employ random walks to generate paths covering the graph, upon which we apply the Akavia-Moran topology-hiding broadcast for chain-graphs (paths). To prevent topology information from being revealed by the random walk, we design multiple random walks that, together, are locally identical to receiving at each round a message from each neighbor and sending back processed messages in a randomly permuted order.
Last updated:  2017-05-02
Improved key-reconciliation method
Ludo Tolhuizen, Ronald Rietman, Oscar Garcia-Morchon
At PQ Crypto 2014, Peikert proposed efficient and practical lattice-based protocols for key transport, encryption and authenticated key exchange. One of the main technical innovations of this work is a reconciliation technique that allows two parties who "approximately agree" on a secret value to reach exact agreement, a setting common to essentially all lattice-based encryption schemes. Peikert's reconciliation technique has been extended in the Frodo key exchange scheme, allowing for agreement on more than one bit. In both cases, only one reconciliation bit is required to reach exact agreement. As symmetric keys typically require many bits, say 128 or more, the parties compute multiple secret values, and reach exact agreement on each of those values individually. In this paper, we propose a reconciliation method that sends more than one reconciliation bit. In this way, the parties can agree on the same number of bits as with Peikert's method under less stringent conditions on "how approximate" the approximate agreement must be. An instance of our method allows the two parties to agree on a secret value that is one bit longer than with the previous methods, with virtually the same approximation requirements (i.e., with virtually the same security guarantees) as before. We numerically illustrate the advantages of our method with its impact on instantiations of the Frodo scheme.
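For intuition, here is a one-bit reconciliation sketch in the spirit of Peikert's technique (our own simplified rendering, not the multi-bit method proposed here): Alice sends a single cross-rounding bit about her value v, and Bob, holding any nearby w, recovers Alice's rounded key bit exactly:

```python
Q = 256  # modulus divisible by 8 (illustrative size)

def round2(v):
    # Alice's key bit: round((2/Q) * v) mod 2
    return ((2 * v + Q // 2) // Q) % 2

def cross2(v):
    # the reconciliation bit Alice sends: floor((4/Q) * v) mod 2
    return (4 * v // Q) % 2

def rec(w, b):
    # Bob's side: given b = cross2(v) and w close to v, output round2(v);
    # the bit b selects which half of Z_Q is consistent with v
    lo = -Q // 8 if b == 0 else 5 * Q // 8
    return 0 if (w - lo) % Q < Q // 2 else 1

# exact agreement whenever the approximation error lies in [-Q/8, Q/8)
for v in range(Q):
    for e in range(-Q // 8, Q // 8):
        assert rec((v + e) % Q, cross2(v)) == round2(v)
```

The method of this paper trades more reconciliation bits for a weaker bound on the error e, extracting more key bits per secret value.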
Last updated:  2018-10-27
Secure searching of biomarkers through hybrid homomorphic encryption scheme
Miran Kim, Yongsoo Song, Jung Hee Cheon
As genome sequencing technology develops rapidly, there has lately been an increasing need to keep genomic data secure even when stored in the cloud and still used for research. In this paper, we are interested in designing a protocol for the secure outsourcing of the matching problem on encrypted data. We propose an efficient method to securely search a matching position with the query data and extract some information at the position. After decryption, we only perform a small number of comparisons with the query information in plaintext. We apply this method to find a set of biomarkers in encrypted genomes. The important feature of our method is to encode a genomic database as a single element of a polynomial ring. It also requires only a single homomorphic multiplication for query computation. Thus this method has the advantage over the previous methods in parameter size, computational complexity, and communication cost. We evaluate the performance of our method and verify that computation on large-scale personal data can be securely and practically outsourced to a cloud environment during data analysis. It takes about 3.9 seconds to search-and-extract the reference and alternate sequences of the queried position in a database of size 4M.
Last updated:  2017-04-03
Montgomery curves and the Montgomery ladder
Daniel J. Bernstein, Tanja Lange
The Montgomery ladder is a remarkably simple method of computing scalar multiples of points on a broad class of elliptic curves. This article surveys a wide range of topics related to the Montgomery ladder, both from the historical perspective of Weierstrass curves and from the modern perspective of Edwards curves. New material includes a full proof of a complete constant-time ladder algorithm suitable for cryptographic applications.
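To make the ladder concrete, here is a minimal x-only ladder in the style of RFC 7748 / Curve25519; an illustrative sketch in which a Python branch stands in for the constant-time conditional swap:

```python
P = 2**255 - 19      # field prime
A24 = 121665         # (A - 2) / 4 for the curve coefficient A = 486662

def x25519_ladder(k, u, bits=255):
    x1 = u % P
    x2, z2 = 1, 0            # R0 = point at infinity (projective x-only)
    x3, z3 = x1, 1           # R1 = input point
    swap = 0
    for t in reversed(range(bits)):
        k_t = (k >> t) & 1
        swap ^= k_t
        if swap:             # conditional swap (a branch, for clarity only)
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = k_t
        # combined doubling of R0 and differential addition R0 + R1
        a = (x2 + z2) % P; aa = a * a % P
        b = (x2 - z2) % P; bb = b * b % P
        e = (aa - bb) % P
        c = (x3 + z3) % P
        d = (x3 - z3) % P
        da = d * a % P
        cb = c * b % P
        x3 = (da + cb) % P; x3 = x3 * x3 % P
        z3 = (da - cb) % P; z3 = z3 * z3 % P * x1 % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P - 2, P) % P   # normalize X/Z
```

As a consistency check, composing the ladder with scalars 3 and 5 agrees with a single scalar 15, reflecting that the x-only ladder computes scalar multiplication; a real implementation would also make the swap and field arithmetic constant-time.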
Last updated:  2017-04-05
Involutory Differentially 4-Uniform Permutations from Known Constructions
Shihui Fu, Xiutao Feng
Substitution box (S-box) is an important component of block ciphers for providing confusion into the cryptosystems. The functions used as S-boxes should have low differential uniformity, high nonlinearity and high algebraic degree. Due to the lack of knowledge on the existence of APN permutations over , which have the lowest differential uniformity, when , they are often constructed from differentially 4-uniform permutations. Up to now, many infinite families of such functions have been constructed. Besides, low hardware implementation cost of S-boxes is also an important criterion in the design of block ciphers. If the S-box is an involution, which means that the compositional inverse of the permutation is the permutation itself, then the implementation cost for its inverse is saved. The same hardware circuit can be used for both encryption and decryption, which is an advantage in hardware implementation. In this paper, we investigate all the differentially 4-uniform permutations that are known in the literature and determine whether they can be involutory. We find that some involutory differentially 4-uniform permutations with high nonlinearity and algebraic degree can be obtained from these known constructions.
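Both properties discussed here, being an involution and being differentially 4-uniform, are mechanically checkable. The sketch below uses the inversion S-box over GF(2^4) as an illustrative example known to have both properties (the paper's constructions are more general):

```python
# GF(2^4) with reduction modulus x^4 + x + 1; the inversion S-box
# x -> x^(-1) (with 0 -> 0) is an involution, and for even extension
# degree it is differentially 4-uniform.
MOD, N = 0b10011, 16

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def gf_inv(a):
    if a == 0:
        return 0
    return next(x for x in range(1, N) if gf_mul(a, x) == 1)

SBOX = [gf_inv(x) for x in range(N)]

def differential_uniformity(s):
    # max over nonzero input differences a of the largest DDT entry
    best = 0
    for a in range(1, N):
        counts = [0] * N
        for x in range(N):
            counts[s[x ^ a] ^ s[x]] += 1
        best = max(best, max(counts))
    return best
```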
Last updated:  2017-05-19
How to Achieve Non-Malleability in One or Two Rounds
Dakshita Khurana, Amit Sahai
Despite over 25 years of research on non-malleable commitments in the plain model, their round complexity has remained open. The goal of achieving non-malleable commitment protocols with only one or two rounds has been especially elusive. Pass (TCC 2013, CC 2016) captured this difficulty by proving important impossibility results regarding two-round non-malleable commitments. This led to the widespread belief that achieving two-round non-malleable commitments was impossible from standard sub-exponential assumptions. We show that this belief was false. Indeed, we obtain the following positive results: - We construct the first two-message non-malleable commitments satisfying the strong definition of non-malleability with respect to commitment, assuming standard sub-exponential assumptions, namely: sub-exponentially hard one-way permutations, sub-exponential ZAPs, and sub-exponential DDH. Furthermore, our protocol is public-coin. - We also obtain two-message private-coin non-malleable commitments with respect to commitment, assuming only sub-exponentially hard DDH or QR or N^{th}-residuosity. - We bootstrap the above protocols (under the same assumptions) to obtain constant bounded-concurrent non-malleability while preserving round complexity. - We compile the above protocols to obtain, in the simultaneous messages model, the first {\em one-round} non-malleable commitments, with full concurrent security with respect to opening, under standard sub-exponential assumptions. 1. This implies non-interactive non-malleable commitments with respect to opening, in a restricted model with a broadcast channel, and a-priori bounded polynomially many parties such that every party is aware of every other party in the system. To the best of our knowledge, this is the first protocol to achieve completely non-interactive non-malleability in any plain model setting from standard assumptions. 2.
As an application of this result, in the simultaneous exchange model, we obtain the first two-round multi-party pseudorandom coin-flipping. We believe that our protocols are likely to find several additional applications. - In order to obtain our results, we develop several tools and techniques that may be of independent interest. 1. We give the first two-round black-box rewinding strategy based on standard sub-exponential assumptions, in the plain model. 2. We also develop the first two-message zero-knowledge arguments with strong super-polynomial simulation. 3. Finally, we give a two-round tag amplification technique for non-malleable commitments, that amplifies a 4-tag scheme to a scheme for all tags, while only relying on sub-exponential DDH. This includes a more efficient alternative to the DDN encoding.
Last updated:  2017-04-03
Double DIP: Re-Evaluating Security of Logic Encryption Algorithms
Yuanqi Shen, Hai Zhou
Logic encryption is a hardware security technique that uses extra key inputs to lock a given combinational circuit. A recent study by Subramanyan et al. shows that all existing logic encryption techniques can be successfully attacked. As a countermeasure, SARLock was proposed to enhance the security of existing logic encryptions. In this paper, we re-evaluate the security of these approaches. A SAT-based attack called Double DIP is proposed and shown to successfully defeat SARLock-enhanced encryptions.
Last updated:  2017-04-03
On the Hardness of Trivium and Grain with respect to Generic Time-Memory-Data Tradeoff Attacks
Matthias Krause
Time-Memory-Data tradeoff attacks (TMD-attacks) like those of Babbage, Biryukov and Shamir, and Dunkelman and Keller reduce the security level of keystream-generator-based stream ciphers to 2^{n/2}, where n denotes the inner state length. This is one of the reasons why stream ciphers like Trivium and Grain use a session key length of at most n/2. In this paper, we deal with the question of whether this small-key-large-state design principle really provides the maximal possible security of 2^{n/2} w.r.t. TMD-attacks, or if it opens the door for new TMD-attack approaches, which possibly allow to discover a secret inner state and/or the secret session key with cost essentially lower than 2^{n/2}. We give an answer by analyzing the security of stream ciphers like Trivium and Grain w.r.t. generic TMD-attacks in an appropriate random oracle model. We show that each TMD-attacker with TMD-costs bounded by 2^{n/2} can only achieve a negligible advantage for distinguishing a truly random bitstream from a pseudorandom bitstream generated by the stream cipher. This lower bound matches the security upper bounds defined by exhaustive key search and the classical TMD-attacks mentioned above.
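Babbage-style tradeoffs of the kind analyzed here can be sketched on a toy keystream generator. Everything below (state size, update function, output rule) is a made-up stand-in; the point is only the offline-table/online-lookup structure, which succeeds once memory times data reaches roughly the state-space size:

```python
import random

N = 16                                   # toy inner-state length in bits
MASK = (1 << N) - 1
PREFIX = N + 4                           # keystream bits used as a fingerprint

def step(s):
    # Full-period affine toy update (a stand-in for a real cipher's
    # nonlinear state transition; the constants are arbitrary).
    return (0x915 * s + 0x3C6F) & MASK

def out(s):                              # one keystream bit per clock: parity
    return bin(s).count("1") & 1

def prefix(s, length=PREFIX):            # next `length` keystream bits from s
    bits = []
    for _ in range(length):
        s = step(s)
        bits.append(out(s))
    return tuple(bits)

rng = random.Random(7)

# Offline phase (memory M): index M random states by their output fingerprint.
M = 1 << 12
table = {}
for _ in range(M):
    s = rng.randrange(1 << N)
    table[prefix(s)] = s

# Online phase (data D): slide over the observed keystream; a table hit
# recovers an internal state. Success needs roughly M * D on the order of 2^N.
secret = rng.randrange(1 << N)
stream = prefix(secret, 4096)
recovered, hit = None, None
for i in range(len(stream) - PREFIX):
    if stream[i:i + PREFIX] in table:
        recovered, hit = table[stream[i:i + PREFIX]], i
        break
```

The recovered state regenerates the keystream from the hit position onward, which is exactly the break the classical tradeoffs achieve at cost 2^{n/2}.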
Last updated:  2017-04-03
Security of Symmetric Primitives under Incorrect Usage of Keys
Pooya Farshim, Claudio Orlandi, Răzvan Roşie
We study the security of symmetric primitives under the incorrect usage of keys. Roughly speaking, a key-robust scheme does not output ciphertexts/tags that are valid with respect to distinct keys. Key-robustness is a notion that is often tacitly expected/assumed in protocol design, as is the case with anonymous auctions, oblivious transfer, or public-key encryption. We formalize simple, yet strong definitions of key robustness for authenticated encryption, message-authentication codes and PRFs. We show that standard notions (such as AE or PRF security) guarantee a basic level of key-robustness under honestly generated keys, but fail to imply key-robustness under adversarially generated (or known) keys. We show that robust encryption and MACs compose well through generic composition, and identify robust PRFs as the main primitive used in building robust schemes. Standard hash functions are expected to satisfy key-robustness and PRF security, and hence suffice for practical instantiations. We however provide further theoretical justifications (in the standard model) by constructing robust PRFs from (left-and-right) collision-resistant PRGs.
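A minimal sketch of why standard confidentiality gives no key-robustness, and of the generic "commit to the key" fix. The XOR cipher and the commitment format below are hypothetical illustrations, not schemes from the paper:

```python
import hashlib, hmac

def keystream(key, n):                    # toy hash-based XOR cipher (hypothetical)
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def enc(key, msg):
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

dec = enc                                 # XOR cipher: decryption = encryption

k1, k2 = b"A" * 16, b"B" * 16
c = enc(k1, b"attack at dawn")
garbage = dec(k2, c)                      # "decrypts" under the wrong key too:
                                          # nothing flags the key mismatch

# Key-committing wrapper: attach a commitment to the key and reject on
# mismatch, so a ciphertext is valid under at most one key. (A plain hash
# of the key is fine for this toy, but it leaks a key-testing oracle.)
def enc_robust(key, msg):
    return hashlib.sha256(b"commit" + key).digest() + enc(key, msg)

def dec_robust(key, ct):
    tag, body = ct[:32], ct[32:]
    if not hmac.compare_digest(tag, hashlib.sha256(b"commit" + key).digest()):
        return None                       # reject: ciphertext not valid for this key
    return dec(key, body)
```

The wrapper mirrors the paper's theme that robustness must be added explicitly on top of standard AE/PRF security.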
Last updated:  2018-04-26
Towards Sound and Optimal Leakage Detection Procedure
Liwei Zhang, A. Adam Ding, Francois Durvaux, Francois-Xavier Standaert, Yunsi Fei
Evaluation of side-channel leakage for embedded crypto systems requires sound leakage detection procedures. We relate the test vector leakage assessment (TVLA) procedure to the statistical minimum p-value (mini-p) procedure, and propose a sound method of deciding leakage existence in the statistical hypothesis setting. To improve detection, an advanced statistical procedure, Higher Criticism (HC), is applied. The detection of leakage existence and the identification of exploitable leakage are separated when there are multiple leakage points. For leakage detection, the HC-based procedure is shown to be optimal in that, for a given number of traces of given length, it detects the existence of leakage at signal levels as low as detectable by any statistical procedure. We provide a theoretical proof of the optimality of the HC procedure. Numerical studies show that the HC-based procedure performs as well as the mini-p-based procedure when leakage signals are very sparse, and can improve leakage detection significantly when there are multiple leakages.
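The fixed-vs-random TVLA test that this work relates to the mini-p procedure boils down to Welch's t-statistic per sample point. The self-contained simulation below is illustrative only: the leakage model, noise level, and the classical |t| > 4.5 decision threshold are assumptions, not results from the paper:

```python
import math, random

def welch_t(a, b):
    """Welch's t-statistic: the core of the TVLA fixed-vs-random test."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def hw(v):                                # Hamming-weight leakage model
    return bin(v).count("1")

rng = random.Random(0)
N, NOISE = 5000, 2.0

# Leaky sample point: the fixed class processes a constant intermediate
# (HW = 3), the random class a fresh random byte (mean HW = 4).
leaky_fixed  = [3 + rng.gauss(0, NOISE) for _ in range(N)]
leaky_random = [hw(rng.randrange(256)) + rng.gauss(0, NOISE) for _ in range(N)]

# Quiet sample point: both classes see only noise around the same mean.
quiet_fixed  = [rng.gauss(0, NOISE) for _ in range(N)]
quiet_random = [rng.gauss(0, NOISE) for _ in range(N)]

t_leaky = welch_t(leaky_fixed, leaky_random)
t_quiet = welch_t(quiet_fixed, quiet_random)
# Classical TVLA decision rule: flag leakage when |t| exceeds 4.5.
```

Applying one such test per sample point is exactly the multiple-testing situation that the mini-p and Higher Criticism procedures address.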
Last updated:  2017-11-15
Impossible Differential Attack on Midori128 Using Rebound-like Technique
Wenquan Bi, Zheng Li, Xiaoyang Dong, Xiaoyun Wang
Midori is a family of lightweight block ciphers proposed by Banik et al. at ASIACRYPT 2015; it is optimized with respect to the energy consumed by the circuit per bit in encryption or decryption operations. Midori is based on the Substitution-Permutation Network and has two variants according to state size, i.e. Midori64 and Midori128. It has attracted a lot of attention from cryptanalysts since its release. For Midori64, the first meet-in-the-middle attack was proposed by Lin and Wu and was recently published in ToSC 2017. The first impossible differential attack on Midori64 was presented by Chen et al., and Dong gave the first related-key differential attack. Guo et al. introduced an invariant space attack against full-round Midori64 in the weak-key setting, which was also recently published in ToSC 2017. However, for Midori128 there is only one impossible differential cryptanalysis result, proposed by Chen et al. against 10-round reduced Midori128, and one related-key result by Gerault et al. at INDOCRYPT 2016. In this paper, we present a new impossible differential attack on Midori128 using a new impossible differential proposed by Sasaki et al.; we achieve a 10-round impossible differential attack and, finally, an 11-round impossible differential attack. This is the best single-key cryptanalytic result on Midori128 as far as we know. We should point out that our attacks do not threaten the security of full-round Midori128.
Last updated:  2018-10-03
Implementation and Evaluation of Improved Gaussian Sampling for Lattice Trapdoors
Kamil Doruk Gür, Yuriy Polyakov, Kurt Rohloff, Gerard W. Ryan, Erkay Savaş
We report on our implementation of a new Gaussian sampling algorithm for lattice trapdoors. Lattice trapdoors are used in a wide array of lattice-based cryptographic schemes including digital signatures, attribute-based encryption, program obfuscation and others. Our implementation provides Gaussian sampling for trapdoor lattices with prime moduli, and supports both single- and multi-threaded execution. We experimentally evaluate our implementation through its use in the GPV hash-and-sign digital signature scheme as a benchmark. We compare our design and implementation with prior work reported in the literature. The evaluation shows that our implementation 1) has smaller space requirements and faster runtime, 2) does not require multi-precision floating-point arithmetic, and 3) can be used for a broader range of cryptographic primitives than previous implementations.
Last updated:  2017-03-30
SafeDRP: Yet Another Way Toward Power-Equalized Designs in FPGA
Maik Ender, Alexander Wild, Amir Moradi
Side-channel analysis attacks, particularly power analysis attacks, have become one of the major threats that hardware designers have to deal with. To defeat them, the majority of the known concepts are based on either masking, hiding, or rekeying (or a combination of them). This work deals with a hiding scheme, more precisely a power-equalization technique, which is ideally supposed to make the amount of power consumption of the device independent of its processed data. We propose and practically evaluate a novel construction dedicated to Xilinx FPGAs which outperforms the state of the art with respect to the achieved security level and the resource overhead.
Last updated:  2017-03-30
On the Easiness of Turning Higher-Order Leakages into First-Order
Thorben Moos, Amir Moradi
Applying random and uniform masks to the processed intermediate values of cryptographic algorithms is arguably the most common countermeasure to thwart side-channel analysis attacks. So-called masking schemes exist in various shapes but are mostly used to prevent side-channel leakages up to a certain statistical order. Thus, to learn any information about the key-involving computations a side-channel adversary has to estimate the higher-order statistical moments of the leakage distributions. However, the complexity of this approach increases exponentially with the statistical order to be estimated and the precision of the estimation suffers from an enormous sensitivity to the noise level. In this work we present an alternative procedure to exploit higher-order leakages which captivates by its simplicity and effectiveness. Our approach, which focuses on (but is not limited to) univariate leakages of hardware masking schemes, is based on categorizing the power traces according to the distribution of leakage points. In particular, at each sample point an individual subset of traces is considered to mount ordinary first-order attacks. We present the theoretical concept of our approach based on simulation traces and examine its efficiency on noisy real-world measurements taken from a first-order secure threshold implementation of the block cipher PRESENT-80, implemented on a 150nm CMOS ASIC prototype chip. Our analyses verify that the proposed technique is indeed a worthy alternative to conventional higher-order attacks and suggest that it might be able to relax the sensitivity of higher-order evaluations to the noise level.
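A quick simulation shows why a univariate first-order masked leakage hides its signal in higher statistical moments, which is the situation the trace-categorization technique above exploits. The Hamming-weight leakage model and all parameters below are assumptions for illustration; the paper's actual procedure (per-sample partitioning of traces followed by ordinary first-order attacks) is not reproduced here:

```python
import random, statistics

def hw(v):
    return bin(v).count("1")

rng = random.Random(1)

def masked_leakage(x, trials, noise=0.5):
    """Univariate leakage of a first-order masked byte x: both shares
    (m and x XOR m) leak their Hamming weight into the same sample."""
    traces = []
    for _ in range(trials):
        m = rng.randrange(256)            # fresh uniform mask per trace
        traces.append(hw(m) + hw(x ^ m) + rng.gauss(0, noise))
    return traces

L0 = masked_leakage(0x00, 20000)
L1 = masked_leakage(0xFF, 20000)

# First moment carries no signal (the mean is 8 regardless of x)...
mean_gap = abs(statistics.mean(L0) - statistics.mean(L1))
# ...but the second moment does: here Var = (8 - HW(x)) + noise variance.
var_gap = abs(statistics.variance(L0) - statistics.variance(L1))
```

The mean gap stays at the noise floor while the variance gap is large, which is why estimating higher moments, or equivalently partitioning the traces, recovers the signal a first-order mean-based attack misses.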
Last updated:  2017-03-30
Collapsing sponges: Post-quantum security of the sponge construction
Dominique Unruh
We investigate the post-quantum security of hash functions based on the sponge construction. A crucial property for hash functions in the post-quantum setting is the collapsing property (a strengthening of collision-resistance). We show that the sponge construction is collapsing (and in consequence quantum collision-resistant) under suitable assumptions about the underlying block function. In particular, if the block function is a random function or a (non-invertible) random permutation, the sponge construction is collapsing.
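A toy sponge hash makes the absorb/squeeze structure under study concrete. The 16-bit permutation and the rate/capacity split below are arbitrary stand-ins for a real block function, with no security claimed:

```python
RATE, CAP = 8, 8                          # toy sizes in bits; not secure

def perm(s):
    """Toy invertible 16-bit permutation (stands in for the sponge's
    block function; round count and constants are arbitrary)."""
    for _ in range(7):
        s = (s * 0x2D) & 0xFFFF           # odd multiplier: invertible mod 2^16
        s ^= s >> 7                       # xorshift: invertible
        s = ((s << 3) | (s >> 13)) & 0xFFFF   # 16-bit rotation
    return s

def sponge_hash(msg: bytes, outlen: int = 4) -> bytes:
    s = 0
    # Absorb: XOR each rate-sized block into the outer state, then permute.
    for b in msg + b"\x01":               # minimal padding (toy)
        s = perm(s ^ (b << CAP))          # outer part = top RATE bits
    # Squeeze: read the outer part, permuting between output blocks.
    out = []
    for _ in range(outlen):
        out.append((s >> CAP) & 0xFF)
        s = perm(s)
    return bytes(out)

h = sponge_hash(b"sponge")
```

The collapsing result concerns exactly this mode: properties of the block function `perm` (random function or non-invertible random permutation) lift to the whole construction.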
Last updated:  2018-03-16
Practical Secure Aggregation for Privacy Preserving Machine Learning
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth
We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and malicious settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors, over sending data in the clear.
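The cancellation idea at the core of such secure-aggregation protocols can be sketched in a few lines: each pair of users shares a seed, one adds the expanded mask and the other subtracts it, so the server's sum is exact while each individual upload looks random. The hash-based PRG and hard-coded seeds below are stand-ins; the real protocol derives seeds via key agreement and secret-shares them to survive dropouts:

```python
import hashlib

MOD, DIM = 1 << 16, 4

def prg(seed: bytes, dim: int):
    """Expand a shared seed into a mask vector (toy PRG via hashing)."""
    return [int.from_bytes(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:4],
                           "big") % MOD for i in range(dim)]

def masked_upload(uid, x, users, seeds):
    """User uid adds +PRG(s_uv) for each peer v with larger id and
    -PRG(s_uv) for each peer with smaller id; masks cancel in the sum."""
    y = list(x)
    for v in users:
        if v == uid:
            continue
        mask = prg(seeds[frozenset((uid, v))], len(x))
        sign = 1 if uid < v else -1
        y = [(yi + sign * mi) % MOD for yi, mi in zip(y, mask)]
    return y

users = [1, 2, 3]
# Stand-in for the pairwise key agreement of the real protocol.
seeds = {frozenset((u, v)): f"{u}-{v}".encode() for u in users for v in users if u < v}
data = {1: [1, 2, 3, 4], 2: [5, 6, 7, 8], 3: [9, 10, 11, 12]}

uploads = {u: masked_upload(u, data[u], users, seeds) for u in users}
total = [sum(uploads[u][i] for u in users) % MOD for i in range(DIM)]
```

The server sees only the masked uploads yet recovers the exact coordinate-wise sum.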
Last updated:  2017-08-02
Amortization with Fewer Equations for Proving Knowledge of Small Secrets
Rafael del Pino, Vadim Lyubashevsky
For a linear function f, a vector x with small coefficients, and a vector y, we would like to be able to give a zero-knowledge proof for the knowledge of an x' with small coefficients that satisfies f(x') = y. This is a common scenario in lattice-based cryptography, and there is currently no satisfactory solution for this problem. All known protocols are built via the repetition of a basic protocol that only has constant soundness error. This implies that the communication complexity of the final protocol will be larger than that of the basic one by at least a factor of the security parameter. One can do better if one considers simultaneously proving the knowledge of many instances of the above linear equation. The protocol that has the smallest amortized communication complexity while achieving close-to-optimal slack (i.e. the ratio between the coefficients in the secret and those that can be extracted from the proof) is due to Cramer et al. (Eurocrypt '17), which builds on an earlier work of Baum et al. (Crypto '16). The main downside of this protocol is that the amortization only kicks in when the number of equations is rather large. The aforementioned work of Cramer et al. also shows how to reduce the required number of samples, but that variant is only applicable for much larger parameters. The main result of our work is reducing the concrete minimal number of equations required for the amortization, while keeping the communication complexity almost unchanged. The cost of this is an increase in the running time of the zero-knowledge proof: the required number of equations can be traded off against the running time of the prover and verifier.
As a side benefit, the slack of our protocol decreases as well. We also show that in the case that f is a function over a polynomial ring, and we would like to give a proof of knowledge of an x' with small coefficients such that f(x') = y, the number of samples needed for amortization is even lower, both with and without the running-time trade-off.
Last updated:  2018-03-22
Post-Quantum Zero-Knowledge and Signatures from Symmetric-Key Primitives
Melissa Chase, David Derler, Steven Goldfeder, Claudio Orlandi, Sebastian Ramacher, Christian Rechberger, Daniel Slamanig, Greg Zaverucha
We propose a new class of post-quantum digital signature schemes that: (a) derive their security entirely from the security of symmetric-key primitives, believed to be quantum-secure, (b) have extremely small keypairs, and (c) are highly parameterizable. In our signature constructions, the public key is an image y = f(x) of a one-way function f and secret key x. A signature is a non-interactive zero-knowledge proof of knowledge of x that incorporates a message to be signed. For this proof, we leverage recent progress by Giacomelli et al. (USENIX'16) in constructing an efficient Sigma-protocol for statements over general circuits. We improve this Sigma-protocol to reduce proof sizes by a factor of two, at no additional computational cost. While this is of independent interest as it yields more compact proofs for any circuit, it also decreases our signature sizes. We consider two possibilities to make the proof non-interactive: the Fiat-Shamir transform and Unruh's transform (EUROCRYPT'12, '15, '16). The former has smaller signatures, while the latter has a security analysis in the quantum-accessible random oracle model. By customizing Unruh's transform to our application, the overhead is reduced to 1.6x when compared to the Fiat-Shamir transform, which does not have a rigorous post-quantum security analysis. We implement and benchmark both approaches and explore the possible choices of f, taking advantage of the recent trend to strive for practical symmetric ciphers with a particularly low number of multiplications, and end up using LowMC (EUROCRYPT'15).
Last updated:  2017-03-27
New Observations on Invariant Subspace Attack
Yunwen Liu, Vincent Rijmen
The invariant subspace attack is a novel cryptanalytic technique that breaks several recently proposed lightweight block ciphers. In this paper, we propose a new method to bound the dimension of some invariant subspaces in a class of lightweight block ciphers which have a similar structure to the AES but with 4-bit Sboxes. Under assumptions on the diffusion layer, the dimension of any invariant subspace is at most 32 when the inputs into each Sbox are linearly independent. This observation brings new insights into the invariant subspace attack, as well as lightweight countermeasures to enhance the resistance against it.
Last updated:  2017-03-27
Minimizing the Complexity of Goldreich's Pseudorandom Generator
Alex Lombardi, Vinod Vaikuntanathan
In the study of cryptography in NC0, it was previously known that Goldreich's candidate pseudorandom generator (PRG) is insecure when instantiated with a predicate in four or fewer variables, if one wants to achieve polynomial stretch (that is, stretching n bits to n^{1+eps} bits for some constant eps > 0). The current standard candidate predicate for this setting is the ``tri-sum-and'' predicate TSA(x) = x1 ⊕ x2 ⊕ x3 ⊕ (x4 ∧ x5), yielding a candidate PRG of locality 5. Moreover, Goldreich's PRG, when instantiated with TSA as the predicate, is known to be secure against several families of attacks, including F2-linear attacks and attacks using SDP hierarchies such as the Lasserre/Parrilo sum-of-squares hierarchy. However, it was previously unknown if TSA is an ``optimal'' predicate according to other complexity measures: in particular, decision tree (DT-)complexity (i.e., the smallest depth of a binary decision tree computing the predicate) and F2-degree (i.e., the degree of the predicate as a polynomial over F2), which are important measures of complexity in cryptographic applications such as the construction of an indistinguishability obfuscation scheme. In this work, we ask: Can Goldreich's PRG be instantiated with a predicate with DT-complexity or F2-degree less than that of TSA? We show that this is indeed possible: we give a candidate predicate for Goldreich's PRG with DT-complexity 4 and F2-degree 3; in particular, this candidate PRG therefore has the property that every output bit is a degree 3 polynomial in its input. Moreover, Goldreich's PRG instantiated with our predicate has security properties similar to what is known for TSA, namely security against F2-linear attacks and security against attacks from SDP hierarchies such as the Lasserre/Parrilo sum-of-squares hierarchy. We also show that all predicates with either DT-complexity less than 4 or F2-degree less than 3 yield insecure PRGs, so our candidate predicate simultaneously achieves the best possible locality, DT-complexity, and F2-degree according to all known attacks.
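Goldreich's PRG itself is easy to state: fix a random 5-uniform hypergraph and apply the predicate to the seed bits of each hyperedge. The sketch below uses the tri-sum-and form x1 ⊕ x2 ⊕ x3 ⊕ (x4 ∧ x5); treat that exact formula, and all the sizes, as assumptions for illustration:

```python
import random

def tsa(bits):
    """Tri-sum-and predicate: XOR of three bits plus one AND of two
    (the standard locality-5 candidate; exact form assumed here)."""
    x1, x2, x3, x4, x5 = bits
    return x1 ^ x2 ^ x3 ^ (x4 & x5)

def goldreich_prg(seed, m, hypergraph):
    """Output bit i applies the predicate to the seed positions listed
    in the i-th hyperedge of the (public, random) hypergraph."""
    return [tsa([seed[j] for j in hypergraph[i]]) for i in range(m)]

rng = random.Random(42)
n, m = 64, 128                       # m > n: expansion (toy sizes)
hypergraph = [rng.sample(range(n), 5) for _ in range(m)]
seed = [rng.randrange(2) for _ in range(n)]
output = goldreich_prg(seed, m, hypergraph)
```

Swapping in a different predicate, as this paper does, changes only `tsa`; the locality, DT-complexity, and F2-degree of that one function determine the measures discussed above for every output bit.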
Last updated:  2017-08-16
Obfuscating Compute-and-Compare Programs under LWE
Daniel Wichs, Giorgos Zirdelis
We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program CC[f, y] is parametrized by an arbitrary polynomial-time computable function f along with a target value y, and we define CC[f, y](x) to output 1 if f(x) = y and 0 otherwise. In other words, the program performs an arbitrary computation f and then compares its output against a target y. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function f or the target value y, as long as they are chosen from some distribution where y has sufficient pseudo-entropy given f. We also extend our result to multi-bit compute-and-compare programs which output a message z if f(x) = y. Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS '16), which constructed a conjunction obfuscator under a non-standard "entropic" ring-LWE assumption, while here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take any encryption scheme and publish an obfuscated plaintext equality tester that allows users to check whether an arbitrary ciphertext encrypts some target value y; as long as y has sufficient pseudo-entropy, this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption with one-sided attribute-hiding security, as well as witness encryption to indistinguishability obfuscation which is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded length key cycles.
Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov and Halevi (TCC '15), but only in a carefully restricted manner which is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula and Waters (EUROCRYPT '17) in a seemingly unrelated context.
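For intuition only, the compute-and-compare functionality can be mimicked in a random-oracle style when the target has high entropy: publish a hash of the target plus the message blinded by a key derived from it. This heuristic sketch is not the paper's LWE construction, and the function f and all formats below are hypothetical:

```python
import hashlib, os

def obfuscate(f, target: bytes, msg: bytes):
    """Hash-based sketch of obfuscating CC[f, target] with payload msg.
    Only meaningful when `target` is unpredictable to the evaluator."""
    chk = hashlib.sha256(b"chk" + target).digest()
    pad = hashlib.sha256(b"pad" + target).digest()[:len(msg)]
    ct = bytes(a ^ b for a, b in zip(msg, pad))

    def evaluate(x):                      # the "obfuscated" program
        y = f(x)
        if hashlib.sha256(b"chk" + y).digest() != chk:
            return None                   # comparison failed: rejecting output
        p = hashlib.sha256(b"pad" + y).digest()[:len(ct)]
        return bytes(a ^ b for a, b in zip(ct, p))

    return evaluate

f = lambda x: hashlib.sha256(b"eval" + x).digest()   # hypothetical function f
secret_input = os.urandom(16)
prog = obfuscate(f, f(secret_input), b"payload")
```

Evaluating on an input with f(x) = target releases the message; everything else yields the rejecting output, which is the multi-bit compute-and-compare behavior described above.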
Last updated:  2018-02-16
Simple and Generic Constructions of Succinct Functional Encryption
Fuyuki Kitagawa, Ryo Nishimaki, Keisuke Tanaka
We propose simple generic constructions of succinct functional encryption. Our key tool is an exponentially-efficient indistinguishability obfuscator (XIO), which is the same as an indistinguishability obfuscator (IO) except that the size of an obfuscated circuit (or the running time of the obfuscator) is slightly smaller than that of a brute-force canonicalizer that outputs the entire truth table of the circuit to be obfuscated. A ``compression factor'' of XIO indicates how much XIO compresses the brute-force canonicalizer. In this study, we propose a significantly simple framework to construct succinct functional encryption via XIO and show that XIO is powerful enough to achieve cutting-edge cryptography. In particular, we propose the following constructions: single-key weakly succinct secret-key functional encryption (SKFE) is constructed from XIO (even with a bad compression factor) and one-way functions; single-key weakly succinct public-key functional encryption (PKFE) is constructed from XIO with a good compression factor and public-key encryption; single-key weakly succinct PKFE is constructed from XIO (even with a bad compression factor) and identity-based encryption. Our new framework has side benefits. Our constructions do not rely on any number-theoretic or lattice assumptions such as the decisional Diffie-Hellman and learning with errors assumptions. Moreover, all security reductions incur only a polynomial security loss. Known constructions of weakly succinct SKFE or PKFE from XIO with polynomial security loss rely on number-theoretic or lattice assumptions.
Last updated:  2017-07-16
Lockable Obfuscation
Rishab Goyal, Venkata Koppula, Brent Waters
In this paper we introduce the notion of lockable obfuscation. In a lockable obfuscation scheme there exists an obfuscation algorithm that takes as input a security parameter, a program P, a message msg and a lock value alpha, and outputs an obfuscated program. One can evaluate the obfuscated program on any input x, where the output of evaluation is the message msg if P(x) = alpha, and otherwise the evaluator receives a rejecting symbol. We proceed to provide a construction of lockable obfuscation and prove it secure under the Learning with Errors (LWE) assumption. Notably, our proof only requires LWE with polynomial hardness and does not require complexity leveraging. We follow this by describing multiple applications of lockable obfuscation. First, we show how to transform any attribute-based encryption (ABE) scheme into one in which the attributes used to encrypt the message are hidden from any user that is not authorized to decrypt the message. (Such a system is also known as predicate encryption with one-sided security.) The only previous construction, due to Gorbunov, Vaikuntanathan and Wee, is based on a specific ABE scheme of Boneh et al. By enabling the transformation of any ABE scheme we can inherit different forms and features of the underlying scheme, such as: multi-authority, adaptive security from polynomial hardness, regular language policies, etc. We also show applications of lockable obfuscation to separation and uninstantiability results. We first show how to create new separation results in circular encryption that were previously based on indistinguishability obfuscation. This results in new separations from learning with errors, including a public-key bit encryption scheme that is IND-CPA secure but not circular secure. The tool of lockable obfuscation allows these constructions to be almost immediately realized by translation from previous indistinguishability-obfuscation-based constructions.
In a similar vein, we provide random oracle uninstantiability results for the Fujisaki-Okamoto transformation (and related transformations) from lockable obfuscation combined with fully homomorphic encryption. Again, we take advantage of the fact that previous work used indistinguishability obfuscation to obfuscate programs in a form that can easily be translated to lockable obfuscation.
Last updated:  2019-05-21
Two-Round and Non-Interactive Concurrent Non-Malleable Commitments from Time-Lock Puzzles
Huijia Lin, Rafael Pass, Pratik Soni
Non-malleable commitments are a fundamental cryptographic tool for preventing (concurrent) man-in-the-middle attacks. Since their invention by Dolev, Dwork, and Naor in 1991, the round complexity of non-malleable commitments has been extensively studied, leading up to constant-round concurrent non-malleable commitments based only on one-way functions, and even 3-round concurrent non-malleable commitments based on subexponential one-way functions, or standard polynomial-time hardness assumptions such as DDH and ZAPs. But constructions of two-round, or non-interactive, non-malleable commitments have so far remained elusive; the only known construction relied on a strong and non-falsifiable assumption with a non-malleability flavor. Additionally, a recent result by Pass shows the impossibility of basing two-round non-malleable commitments on falsifiable assumptions using a polynomial-time black-box security reduction. In this work, we show how to overcome this impossibility, using super-polynomial-time hardness assumptions. Our main result demonstrates the existence of a two-round concurrent non-malleable commitment based on sub-exponential "standard-type" assumptions: notably, assuming the existence of all four of the following primitives (all with subexponential security): (1) non-interactive commitments, (2) ZAPs (i.e., 2-round witness-indistinguishable proofs), (3) collision-resistant hash functions, and (4) a "weak" time-lock puzzle. Primitives (1), (2), (3) can be based on, e.g., the discrete log assumption and the RSA assumption. Time-lock puzzles (puzzles that can be solved by "brute force" in time T, but cannot be solved significantly faster even using parallel computers) were proposed by Rivest, Shamir, and Wagner in 1996, and have been quite extensively studied since; the most popular instantiation relies on the assumption that repeated squarings modulo a composite N require "roughly" T parallel time.
Our notion of a "weak" time-lock puzzle requires only that the puzzle cannot be solved in parallel time much smaller than T (and thus we only need to rely on the relatively mild assumption that there are no huge improvements in the parallel complexity of repeated squaring algorithms). We additionally show that if we replace assumption (2) with a non-interactive witness-indistinguishable proof (NIWI), and (3) with a uniform collision-resistant hash function, then a non-interactive (i.e., one-message) version of our protocol satisfies concurrent non-malleability w.r.t. uniform attackers. Finally, we show that our two-round (and non-interactive) non-malleable commitments in fact satisfy an even stronger notion of Chosen Commitment Attack (CCA) security (w.r.t. uniform attackers).
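The repeated-squaring puzzle of Rivest, Shamir, and Wagner that instantiates these assumptions is short to sketch. Toy parameters only (40-bit primes, small t); `pow(y, -1, n)` needs Python 3.8+:

```python
def make_puzzle(t, msg_int):
    """RSW-style time-lock puzzle: the generator shortcuts the 2^t
    squarings via phi(n); a solver without the factorization must square
    t times sequentially. Toy primes; real puzzles use ~1024-bit primes."""
    p, q = 999983, 1000003
    n, phi = p * q, (p - 1) * (q - 1)
    x = 2
    key = pow(x, pow(2, t, phi), n)       # x^(2^t) mod n, computed fast
    return n, x, (msg_int * key) % n      # blind the message with the key

def solve(n, x, blinded, t):
    y = x
    for _ in range(t):                    # t inherently sequential squarings
        y = (y * y) % n
    return (blinded * pow(y, -1, n)) % n  # unblind with the modular inverse

n, x, blinded = make_puzzle(4096, 123456789)
opened = solve(n, x, blinded, 4096)
```

The "weak" assumption above corresponds to the solver loop admitting no large parallel speedup: each squaring depends on the previous one.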
Last updated:  2017-12-12
Dissecting Leakage Resilient PRFs with Multivariate Localized EM Attacks - A Practical Security Evaluation on FPGA
Florian Unterstein, Johann Heyszl, Fabrizio De Santis, Robert Specht
In leakage-resilient symmetric cryptography, two important concepts have been proposed in order to decrease the success rate of differential side-channel attacks. The first one is to limit the attacker’s data complexity by restricting the number of observable inputs; the second one is to create correlated algorithmic noise by using parallel S-boxes with equal inputs. The latter hinders the typical divide-and-conquer approach of differential side-channel attacks and makes key recovery much more difficult in practice. The use of localized electromagnetic (EM) measurements has already been shown to limit the effectiveness of such measures in previous works based on PRESENT S-boxes and 90nm FPGAs. However, it has been left for future investigation in recent publications based on AES S-boxes. We aim at providing helpful results and insights from LDA-preprocessed, multivariate, localized EM attacks against a 45nm FPGA implementation using AES S-boxes. We show that, even in the case of densely placed S-boxes (with identical routing constraints), and even when limiting the data complexity to the minimum of only two inputs, the guessing entropy of the key is reduced to only 2^48, which remains well within the key enumeration capabilities of today’s adversaries. Relaxing the S-box placement constraints further reduces the guessing entropy. Also, increasing the data complexity for efficiency decreases it down to a direct key recovery. While our results are empirical and reflective of one device and implementation, they emphasize the threat of multivariate localized EM attacks to such AES-based leakage-resilient constructions, more than currently believed.
Last updated:  2018-01-10
High Order Masking of Look-up Tables with Common Shares
Jean-Sebastien Coron, Franck Rondepierre, Rina Zeitoun
Masking is an effective countermeasure against side-channel attacks. In this paper, we improve the efficiency of the high-order masking of look-up tables countermeasure introduced at Eurocrypt 2014, based on a combination of three techniques, and still with a proof of security in the Ishai-Sahai-Wagner (ISW) probing model. The first technique consists in proving security under the stronger t-SNI definition, which enables the use of n=t+1 shares instead of n=2t+1 against t-th order attacks. The second technique consists in progressively incrementing the number of shares within the countermeasure, from a single share to n, thereby reducing the complexity of the countermeasure. The third technique consists in adapting the common shares approach introduced by Coron et al. at CHES 2016, so that half of a randomized look-up table can be pre-computed for multiple SBoxes. When combined, our three techniques lead to a factor 10.7 improvement in efficiency, asymptotically for a large number of shares n. For a practical implementation with a reasonable number of shares, we get a 4.8 speed-up factor compared to the initial countermeasure for AES.
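The base mechanism being optimized here, a randomized look-up table, is easy to sketch at first order: recompute the table under an input mask and a fresh output mask, then perform a single lookup on the masked value. The 4-bit S-box below is hypothetical, and none of the paper's high-order or common-shares techniques are shown:

```python
import random

SBOX = [(7 * v + 3) % 16 for v in range(16)]   # toy 4-bit S-box (hypothetical)

def masked_sbox_lookup(x_masked, m_in, rng):
    """First-order randomized-table lookup: precompute
    T[a] = S[a ^ m_in] ^ m_out, then a single lookup on the masked input,
    so no unmasked intermediate value is ever touched."""
    m_out = rng.randrange(16)                  # fresh output mask
    table = [SBOX[a ^ m_in] ^ m_out for a in range(16)]
    return table[x_masked], m_out

rng = random.Random(3)
x = 9                                          # sensitive intermediate
m_in = rng.randrange(16)
y_masked, m_out = masked_sbox_lookup(x ^ m_in, m_in, rng)
result = y_masked ^ m_out                      # recombining shares gives SBOX[x]
```

The high-order countermeasure generalizes this to n shares; the paper's three techniques reduce how much of this table recomputation must be done per S-box.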
Last updated:  2017-03-25
Rational Proofs against Rational Verifiers
Keita Inasawa, Kenji Yasunaga
Rational proofs, introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is rational, and may deviate from the protocol for increasing his reward. Guo et al. (ITCS 2014) demonstrated that rational proofs are relevant to delegation of computation. By restricting the prover to be computationally bounded, they presented a one-round delegation scheme with sublinear verification for functions computable by log-space uniform circuits with logarithmic depth. In this work, we study rational proofs in which the verifier is also rational, and may deviate from the protocol for decreasing the prover's reward. We construct a three-message delegation scheme with sublinear verification for functions computable by log-space uniform circuits with polylogarithmic depth in the random oracle model.
Last updated:  2017-03-25
Extending Glitch-Free Multiparty Protocols to Resist Fault Injection Attacks
Okan Seker, Thomas Eisenbarth, Rainer Steinwandt
Side channel analysis and fault attacks are two powerful methods to analyze and break cryptographic implementations. Recently, secure multiparty computation has been applied to prevent side channel attacks. While multiparty computation is known to be fault resistant as well, the particular schemes popular for side channel protection do not currently offer this feature. In this paper we introduce a new secure multiparty circuit to prevent both fault attacks and side channel analysis. The new scheme builds on an existing side channel countermeasure and extends it to preserve errors and propagate them until the end of the circuit. A new recombination operation ensures randomization of the output in the case of an error, ensuring that nothing can be learned from the faulty output. After introducing the new secure multiparty circuit, we show how it can be applied to AES and present the performance and security analysis.
Last updated:  2017-03-25
Efficient Sanitizable Signatures without Random Oracles
Russell W. F. Lai, Tao Zhang, Sherman S. M. Chow, Dominique Schröder
Sanitizable signatures, introduced by Ateniese et al. (ESORICS '05), allow the signer to delegate the sanitization right of signed messages. The sanitizer can modify the message and update the signature accordingly, so that the sanitized part of the message is kept private. For a stronger protection of sensitive information, it is desirable that no one can link sanitized message-signature pairs of the same document. This idea was formalized by Brzuska et al. (PKC '10) as unlinkability, which was followed up recently by Fleischhacker et al. (PKC '16). Unfortunately, the existing generic constructions of sanitizable signatures, unlinkable or not, are based on building blocks with specially crafted features for which efficient (standard-model) instantiations are absent. Basing constructions on existing primitives, or on a conceptually simple new one, is more desirable. In this work, we present two such generic constructions, leading to efficient instantiations in the standard model. The first one is based on rerandomizable tagging, a new primitive which may be of independent interest. It captures the core accountability mechanism of sanitizable signatures. The second one is based on accountable ring signatures (CARDIS '04, ESORICS '15). As an intermediate result, we propose the first accountable ring signature scheme in the standard model.
Last updated:  2017-12-12
A Masked White-box Cryptographic Implementation for Protecting against Differential Computation Analysis
Seungkwang Lee
Recently, gray-box attacks on white-box cryptographic implementations have succeeded. These attacks are more efficient than white-box attacks because they can be performed without detailed knowledge of the target implementation. The success of the gray-box attack is reportedly due to the unbalanced encoding used to generate the white-box lookup table. In this paper, we propose a method to protect white-box implementations against gray-box attacks. The basic idea is to apply a masking technique before encoding intermediate values during white-box lookup table generation. Because our method requires no source of randomness at runtime, it permits efficient encryption and decryption. Our security and performance analysis shows that the proposed method can serve as a reliable and efficient countermeasure.
Last updated:  2017-03-25
From Higher-Order Differentials to Polytopic Cryptanalysis
Tyge Tiessen
Polytopic cryptanalysis was introduced at EUROCRYPT 2016 as a cryptanalytic technique for low-data-complexity attacks on block ciphers. In this paper, we give an account of how the technique was developed, quickly go over the basic ideas and techniques of polytopic cryptanalysis, look into how the technique differs from previously existing cryptographic techniques, and discuss whether the attack angle can be useful for developing improved cryptanalytic techniques.
Last updated:  2017-03-25
Enhanced Outsider-anonymous Broadcast Encryption with Subset Difference Revocation
Kamalesh Acharya, Ratna Dutta
This paper puts forward an efficient broadcast encryption scheme in the public-key setting, employing a ternary-tree subset difference method for revocation. It provides outsider anonymity, preventing revoked users from learning any information about the message and concealing the set of subscribed users from them. Our approach uses a composite-order bilinear group setting and exhibits a significant improvement in broadcast efficiency. The proposed scheme compares favourably with existing similar schemes in the standard model. The public key and secret key sizes are poly-logarithmic, while the ciphertext size is sublinear in the total number of users. Our scheme achieves selective security against chosen-plaintext attacks in the standard model under reasonable assumptions.
Last updated:  2017-10-01
A note on how to (pre-)compute a ladder
Thomaz Oliveira, Julio López, Hüseyin Hışıl, Armando Faz-Hernández, Francisco Rodríguez-Henríquez
In the RFC 7748 memorandum, the Internet Research Task Force specified a Montgomery-ladder scalar multiplication function based on two recently adopted elliptic curves, "curve25519" and "curve448". The purpose of this function is to support the Diffie-Hellman key exchange algorithm that will be included in the forthcoming version of the Transport Layer Security cryptographic protocol. In this paper, we describe a ladder variant that accelerates the fixed-point multiplication function inherent to the Diffie-Hellman key pair generation phase. Our proposal combines a right-to-left version of the Montgomery ladder with the pre-computation of constant values derived directly from the base point and its multiples. To our knowledge, this is the first proposal of a Montgomery ladder procedure for prime elliptic curves that admits extensive use of pre-computation. In exchange for very modest memory resources and a small extra programming effort, the proposed ladder obtains significant speedups for software implementations. Moreover, our proposal fully complies with the RFC 7748 specification. Our estimates suggest that a full implementation of our pre-computable ladder should outperform state-of-the-art software implementations of the X25519 and X448 functions by roughly 40% in the fixed-point scenario.
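For reference, the standard left-to-right Montgomery ladder that RFC 7748 specifies for curve25519 (the baseline the paper's precomputable right-to-left variant improves on) can be written compactly; this is the plain X25519 ladder over the differential-addition formulas of the RFC, not the paper's variant:

```python
P = 2**255 - 19
A24 = 121665  # (486662 - 2) / 4 for curve25519

def x25519(k: int, u: int) -> int:
    """RFC 7748 Montgomery ladder: u-coordinate of [k]P from that of P."""
    x1, x2, z2, x3, z3, swap = u, 1, 0, u, 1, 0
    for t in reversed(range(255)):
        kt = (k >> t) & 1
        swap ^= kt
        if swap:  # conditional swap (constant-time cswap in real code)
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3 = (da + cb) ** 2 % P
        z3 = x1 * (da - cb) ** 2 % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    if swap:
        x2, z2 = x3, z3
    return x2 * pow(z2, P - 2, P) % P  # x2/z2 mod P

def clamp(k: int) -> int:
    """RFC 7748 scalar clamping: clear low 3 bits, clear bit 255, set bit 254."""
    return (k & ((1 << 255) - 8)) | (1 << 254)
```

With the RFC 7748 section 6.1 test keys (scalars decoded little-endian and clamped), `x25519(clamp(sk), 9)` reproduces the published public keys, and the two Diffie-Hellman directions agree on the shared secret.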
Last updated:  2017-03-25
Bivariate attacks and confusion coefficients
Sylvain Guilley, Liran Lerman
We solve the problem of finding the success rate of an optimal side-channel attack that targets the first and last rounds of a block cipher at once. We relate the results to properties of the direct and inverse substitution boxes (when they are bijective), in terms of confusion coefficients.
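The confusion coefficient the abstract refers to is, in its standard single-bit form, kappa(k_a, k_b) = Pr_x[V(x, k_a) != V(x, k_b)], where V is the attack's selection function under a candidate key. A minimal sketch of computing it for an S-box under a single-bit leakage model (PRESENT's S-box is used only as an example; the paper's bivariate setting is more involved):

```python
# PRESENT's 4-bit S-box, as an example target.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def confusion_coefficient(sbox, ka, kb, bit=0):
    """kappa(ka, kb): probability over inputs x that the modeled leakage
    bit of sbox(x ^ ka) differs from that of sbox(x ^ kb)."""
    n = len(sbox)
    v = lambda x, k: (sbox[x ^ k] >> bit) & 1
    return sum(v(x, ka) != v(x, kb) for x in range(n)) / n
```

By construction, kappa is zero on the diagonal, symmetric in its two key arguments, and bounded by 1; a cipher's inverse S-box yields its own coefficient matrix, which is what lets the bivariate analysis treat the first and last rounds jointly.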