All papers in 2017 (Page 13 of 1262 results)

Last updated:  2017-01-31
Efficient Maliciously Secure Two Party Computation for Mixed Programs
Arash Afshar, Payman Mohassel, Mike Rosulek
We propose a new approach for practical secure two-party computation (2PC) achieving security in the presence of malicious adversaries. Given a program to compute, the idea is to identify subcomputations that depend on only one or neither of the parties’ private inputs. Such computations can be secured at significantly lower cost, using different protocol paradigms for each case. We then show how to securely connect these subprotocols to each other and to standard 2PC, yielding our new approach to 2PC for mixed programs. Our empirical evaluations confirm that the mixed-2PC approach outperforms state-of-the-art monolithic 2PC protocols for most computations.
Last updated:  2018-09-03
Short Digital Signatures and ID-KEMs via Truncation Collision Resistance
Tibor Jager, Rafael Kurek
Truncation collision resistance is a simple non-interactive complexity assumption that seems very plausible for standard cryptographic hash functions like SHA-3. We describe how this assumption can be leveraged to obtain standard-model constructions of public-key cryptosystems that previously seemed to require a programmable random oracle. This includes the first constructions of identity-based key encapsulation mechanisms (ID-KEMs) and digital signatures over bilinear groups with full adaptive security and without random oracles, where a ciphertext or signature consists of only a single element of a prime-order group. We also describe a generic construction of ID-KEMs with full adaptive security from a scheme with very weak security ("selective and non-adaptive chosen-ID security"), and a similar generic construction for digital signatures.
Last updated:  2017-06-22
Zero Round-Trip Time for the Extended Access Control Protocol
Jacqueline Brendel, Marc Fischlin
The Extended Access Control (EAC) protocol allows a client and a server to establish a shared cryptographic key. It is, for instance, referenced by the International Civil Aviation Organization for securing the communication between machine readable travel documents and terminals, and is also deployed on current German identity cards. Here we discuss how to enhance the EAC protocol with a so-called zero round-trip time (0RTT) mode. Through this mode the client can, without further interaction, immediately derive a new key from cryptographic material exchanged in previous executions. This makes the 0RTT mode attractive from an efficiency viewpoint, which is why the upcoming TLS 1.3 standard, for instance, will include its own 0RTT mode. We show that the EAC protocol can be augmented to support a 0RTT mode, too. Our proposed EAC+0RTT protocol is compliant with the basic EAC protocol and adds the 0RTT mode smoothly on top. We also prove the security of our proposal according to the common security model of Bellare and Rogaway in the multi-stage setting.
Last updated:  2017-01-31
Adaptively Secure Recipient Revocable Broadcast Encryption with Constant size Ciphertext
Kamalesh Acharya, Ratna Dutta
In this paper, we put forward the first adaptively secure recipient revocable broadcast encryption (RR-BE) scheme in the standard model. The scheme is adaptively secure against chosen plaintext attack (CPA) under the q-weaker Decisional Augmented Bilinear Diffie-Hellman Exponent (q-wDABDHE) assumption. Our scheme compares well with the only existing RR-BE scheme of Susilo et al. which is selectively secure in the random oracle model. More interestingly, achieving adaptive security in the standard model does not blow up the communication cost in our construction. To be more precise, the size of the ciphertext which is broadcasted by the broadcaster is constant.
Last updated:  2017-11-30
WalnutDSA(TM): A Quantum-Resistant Digital Signature Algorithm
Iris Anshel, Derek Atkins, Dorian Goldfeld, Paul E Gunnells
In 2005 I. Anshel, M. Anshel, D. Goldfeld, and S. Lemieux introduced E-Multiplication(TM), a quantum-resistant, group-theoretic, one-way function which can be used as a basis for many different cryptographic applications. This one-way function was specifically designed for constrained devices, running extremely quickly and requiring very little code. This paper introduces WalnutDSA, a new E-Multiplication-based public-key method which provides efficient verification, allowing low-power and constrained devices to quickly and inexpensively validate digital signatures (e.g., a certificate or authentication). It presents an in-depth discussion of the construction of the digital signature algorithm, analyzes the security of the scheme, provides a proof of security under EUF-CMA, and discusses the practical results from implementations on several constrained devices.
Last updated:  2017-01-31
Single-Trace Template Attack on the DES Round Keys of a Recent Smart Card
Mathias Wagner, Stefan Heyse
A new template attack on the DES key scheduling is demonstrated that allows recovery of a sufficiently large portion of the DES key of a recent and widely deployed smart card chip with a single EM (electromagnetic) trace during the Exploitation Phase. Depending on the use case, the remainder of the key may then be found with reasonable brute-force effort on a PC. Residual entropies as low as $\approx 19$ bits have been found for some single-trace attacks, meaning that effectively 37 of the 56 DES key bits were recovered in a single trace. By the very nature of single-trace attacks, conventional software countermeasures are rendered useless, and thus the only remaining remedy is a hardware redesign.
Last updated:  2017-06-23
Verifiable Classroom Voting in Practice
Feng Hao, Dylan Clarke, Brian Randell, Siamak F. Shahandashti
Classroom voting is an important pedagogical technique in which students learn by voting on the answers to questions. The same voting platform is also often used for exercises such as rating lecturer performance and voting for prizes. In this paper, we present VCV, an end-to-end (E2E) verifiable classroom voting system built on the DRE-i protocol. Our system provides E2E verifiability without tallying authorities; it supports voting through mobile phones with constrained computing resources; and it reports the tallying results instantly after voting has finished, along with cryptographic proofs that enable the public to verify the tallying integrity. Since 2013, the VCV system has been used regularly in real classroom teaching, as well as in academic prize competitions, at Newcastle University, with positive user feedback. Our experience suggests that E2E verifiable voting through the internet and using mobile phones is feasible for daily routine activities such as classroom voting.
Last updated:  2017-01-31
A Probabilistic Baby-Step Giant-Step Algorithm
Prabhat Kushwaha, Ayan Mahalanobis
In this paper we present a new algorithm for solving the discrete logarithm problem that is similar to the usual baby-step giant-step algorithm. Our algorithm exploits the order of the discrete logarithm in the multiplicative group of a finite field. Using randomization with parallelized collision search, our algorithm points to a weakness in the NIST curves over prime fields, which are considered the most conservative and safest of all NIST curves.
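For background, the sketch below shows the classical deterministic baby-step giant-step algorithm for a group of known order; it is only a reference point, not the probabilistic, order-exploiting variant with parallelized collision search proposed in the paper, and the group and parameters in the example are illustrative.

```python
# Background sketch: classical baby-step giant-step for g^x = h (mod p)
# in a cyclic group of known order n.  This is the textbook deterministic
# algorithm, NOT the probabilistic variant described in the abstract.
from math import isqrt

def bsgs(g, h, p, n):
    """Return x with pow(g, x, p) == h and 0 <= x < n, or None."""
    m = isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    giant = pow(g, -m, p)                        # giant-step factor g^{-m}
    gamma = h
    for i in range(m):
        if gamma in baby:                        # h * g^{-i*m} == g^j
            return i * m + baby[gamma]
        gamma = (gamma * giant) % p
    return None

# toy example in the group generated by 2 modulo the prime 101 (order 100)
assert bsgs(2, pow(2, 57, 101), 101, 100) == 57
```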
Last updated:  2017-11-28
Attribute-Based Encryption Implies Identity-Based Encryption
Javier Herranz
In this short paper we formally prove that designing attribute-based encryption schemes cannot be easier than designing identity-based encryption schemes. In more detail, we show how an attribute-based encryption scheme which admits, at least, AND policies can be combined with a collision-resistant hash function to obtain an identity-based encryption scheme. Even though this result may seem natural and not at all surprising, it has not, as far as we know, been explicitly written down anywhere. Furthermore, it may be an unknown result for some people: Odelu et al. \cite{OdKu16,J+17} have proposed both an attribute-based encryption scheme in the Discrete Logarithm setting, without bilinear pairings, and an attribute-based encryption scheme in the RSA setting, both admitting AND policies. If these schemes were secure, then by using the implication that we prove in this paper, we would obtain secure identity-based encryption schemes in both the RSA and the Discrete Logarithm settings, without bilinear pairings, which would be a breakthrough in the area. Unfortunately, we present here complete attacks on the two schemes proposed by Odelu et al. in \cite{OdKu16,J+17}.
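One plausible reading of the generic transformation, shown as a sketch under assumptions rather than the paper's construction: hash the identity with a collision-resistant hash and use the resulting bits as attributes under an AND policy. The "ABE" in the sketch is a deliberately insecure mock whose only purpose is to show the wiring; any real ABE supporting AND policies would be plugged in instead.

```python
# Hedged sketch of the ABE => IBE implication described in the abstract.
# The hash-to-attributes step is the essential idea; the abe_* functions
# below are an INSECURE mock (policies checked in the clear) used only to
# make the example self-contained and runnable.
import hashlib

ELL = 256  # hash output length in bits (SHA-256 here)

def identity_attributes(identity):
    """Map an identity to ELL attributes "bit_i = b" via a CR hash."""
    digest = hashlib.sha256(identity.encode()).digest()
    return {f"bit{i}={(digest[i // 8] >> (7 - i % 8)) & 1}" for i in range(ELL)}

# --- insecure mock ABE with AND policies, for illustration only -------------
def abe_keygen(attributes):                  # key "is" the attribute set
    return frozenset(attributes)

def abe_encrypt(required_attributes, message):
    return (frozenset(required_attributes), message)   # AND policy + payload

def abe_decrypt(key, ciphertext):
    required, message = ciphertext
    return message if required <= key else None
# -----------------------------------------------------------------------------

def ibe_keygen(identity):
    return abe_keygen(identity_attributes(identity))

def ibe_encrypt(identity, message):
    return abe_encrypt(identity_attributes(identity), message)

# Collision resistance of the hash prevents a key for one identity from
# satisfying the AND policy derived from a different identity.
key = ibe_keygen("alice@example.com")
assert abe_decrypt(key, ibe_encrypt("alice@example.com", b"hi")) == b"hi"
assert abe_decrypt(key, ibe_encrypt("bob@example.com", b"hi")) is None
```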
Last updated:  2017-01-31
Horizontal isogeny graphs of ordinary abelian varieties and the discrete logarithm problem
Dimitar Jetchev, Benjamin Wesolowski
Fix an ordinary abelian variety defined over a finite field. The ideal class group of its endomorphism ring acts freely on the set of isogenous varieties with the same endomorphism ring, by complex multiplication. Any subgroup of the class group, and generating set thereof, induces an isogeny graph on the orbit of the variety for this subgroup. We compute (under the Generalized Riemann Hypothesis) some bounds on the norms of prime ideals generating it, such that the associated graph has good expansion properties. We use these graphs, together with a recent algorithm of Dudeanu, Jetchev and Robert for computing explicit isogenies in genus 2, to prove random self-reducibility of the discrete logarithm problem within the subclasses of principally polarizable ordinary abelian surfaces with fixed endomorphism ring. In addition, we remove the heuristics in the complexity analysis of an algorithm of Galbraith for explicitly computing isogenies between two elliptic curves in the same isogeny class, and extend it to a more general setting including genus 2.
Last updated:  2017-01-31
A short note on the security of Round-Robin Differential Phase-Shift QKD
Boris Skoric
Round-Robin Differential Phase-Shift (RRDPS) is a Quantum Key Distribution (QKD) scheme proposed by Sasaki, Yamamoto and Koashi in 2014. It works with high-dimensional quantum digits (qudits). Its main advantage is that it tolerates more noise than qubit-based schemes while being easy to implement. The security of RRDPS has been discussed in several papers. However, these analyses do not have the mathematical rigor that is customary in cryptology. In this short note we prove a simple result regarding the min-entropy of the distributed key; this may serve as a step towards a full security proof.
Last updated:  2017-01-31
A note on VRFs from Verifiable Functional Encryption
Saikrishna Badrinarayanan, Vipul Goyal, Aayush Jain, Amit Sahai
Recently, Bitansky [Bit17] and Goyal et al. [GHKW17] gave generic constructions of selectively secure verifiable random functions (VRFs) from non-interactive witness indistinguishable proofs (NIWI) and injective one way functions. In this short note, we give an alternate construction of selectively secure VRFs based on the same assumptions as an application of the recently introduced notion of verifiable functional encryption [BGJS16]. Our construction and proof are much simpler than the ones in [Bit17, GHKW17], given previous work (most notably given the constructions of verifiable functional encryption in [BGJS16]).
Last updated:  2017-10-18
An Obfuscating Compiler
Peter T. Breuer
Privacy for arbitrary encrypted remote computation in the cloud depends on the code running on the server being obfuscated from the standpoint of the operator in the computer room. This paper shows, formally as well as practically, that this may be arranged on a platform with the appropriate machine-code architecture, given the obfuscating compiler described here.
Last updated:  2017-01-30
LARA - A Design Concept for Lattice-based Encryption
Rachid El Bansarkhani
Lattice-based encryption schemes still suffer from a low message throughput per ciphertext and inefficient solutions towards realizing enhanced security characteristics such as CCA1- or CCA2-security. This is mainly due to the fact that the underlying schemes still follow a traditional design concept and do not tap the full potential of LWE. In particular, many constructions still encrypt data in a one-time-pad manner, considering LWE instances as random vectors added to a message, most often an encoded bit vector. The desired security features are also often achieved by costly approaches or less efficient generic transformations. Recently, a novel encryption scheme based on the A-LWE assumption (relying on the hardness of LWE) has been proposed, where data is embedded into the error term without changing its target distributions. By this novelty it is possible to encrypt much more data than with the classical approach. Combinations of both concepts are also possible. In this paper we revisit this approach and propose, amongst others, a standard-model variant of the scheme as well as several techniques to improve the message throughput per ciphertext. Furthermore, we introduce a new discrete Gaussian sampler, which is inherently induced by the encryption scheme itself, and present a very efficient trapdoor construction of reduced storage size. More precisely, the secret and public key sizes are reduced to just 1 polynomial, as opposed to $O( \log q)$ polynomials in previous constructions. Finally, we give a security analysis as well as an efficient implementation of the scheme instantiated with the new trapdoor construction. In particular, we observe high message throughputs (message expansion factors close to 1-2) at running times comparable to the CPA-secure encryption scheme of Lindner and Peikert (CT-RSA 2011). Our scheme even ensures CCA (or RCCA) security, while entailing a great deal of flexibility to encrypt arbitrarily large messages or signatures using the same secret key. This feature is naturally induced by the characteristics of LWE.
Last updated:  2017-06-29
ROTE: Rollback Protection for Trusted Execution
Sinisa Matetic, Mansoor Ahmed, Kari Kostiainen, Aritra Dhar, David Sommer, Arthur Gervais, Ari Juels, Srdjan Capkun
Security architectures such as Intel SGX need protection against rollback attacks, where the adversary violates the integrity of a protected application state by replaying old persistently stored data or by starting multiple application instances. Successful rollback attacks have serious consequences on applications such as financial services. In this paper, we propose a new approach for rollback protection on SGX. The intuition behind our approach is simple. A single platform cannot efficiently prevent rollback, but in many practical scenarios, multiple processors can be enrolled to assist each other. We design and implement a rollback protection system called ROTE that realizes integrity protection as a distributed system. We construct a model that captures adversarial ability to schedule enclave execution and show that our solution achieves a strong security property: the only way to violate integrity is to reset all participating platforms to their initial state. We implement ROTE and demonstrate that distributed rollback protection can provide significantly better performance than previously known solutions based on local non-volatile memory.
Last updated:  2017-05-06
On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL
Martin R. Albrecht
We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL v2.0 yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of \((2\,L)/(2\,L+1)\) when \(\log q = \Theta{\left(L \log n\right)}\), when the secret has constant Hamming weight \(h\) and where \(L\) is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of \(2^{h}\) operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with $n=1024$ and $\log_2 q \approx {47}$, while the techniques described in this work lead to estimated costs of 68 bits (SEAL v2.0) and 62 bits (HElib).
Last updated:  2017-06-08
Practical Passive Leakage-Abuse Attacks Against Symmetric Searchable Encryption
Matthieu Giraud, Alexandre Anzala-Yamajako, Olivier Bernard, Pascal Lafourcade
Symmetric Searchable Encryption (SSE) schemes efficiently solve the problem of securely outsourcing client data with search functionality. These schemes are provably secure with respect to an explicit leakage profile; however, determining how much information can be inferred in practice from this leakage remains difficult. First, we recall the leakage hierarchy introduced in 2015 by Cash et al. Second, we present complete practical attacks on SSE schemes of the L4, L3 and L2 leakage profiles which are deployed in commercial cloud solutions. Our attacks are passive and only assume the knowledge of a small sample of plaintexts. Moreover, we show their devastating effect on real-world data sets: regardless of the leakage profile, an adversary knowing a mere 1% of the document set is able to retrieve 90% of the documents with over 70% of their content revealed. We then further extend the analysis of existing attacks to highlight the security gap that exists between L2- and L1-SSE and give some simple countermeasures to prevent our attacks.
Last updated:  2017-06-12
Efficient Round-Optimal Blind Signatures in the Standard Model
Essam Ghadafi
Blind signatures are at the core of e-cash systems and have numerous other applications. In this work we construct efficient blind and partially blind signature schemes over bilinear groups in the standard model. Our schemes yield short signatures consisting of only a couple of elements from the shorter source group and have very short communication overhead consisting of $1$ group element on the user side and $3$ group elements on the signer side. At $80$-bit security, our schemes yield signatures consisting of only $40$ bytes, which is $67\%$ shorter than the most efficient existing scheme with the same security in the standard model. Verification in our schemes requires only a couple of pairings. Our schemes compare favorably in every efficiency measure to all existing counterparts offering the same security in the standard model. In fact, the efficiency of our signing protocol as well as the signature size compare favorably even to many existing schemes in the random oracle model. For instance, our signatures are shorter than those of Brands' scheme which is at the heart of the U-Prove anonymous credential system used in practice. The unforgeability of our schemes is based on new intractability assumptions of a ``one-more'' type which we show are intractable in the generic group model, whereas their blindness holds w.r.t. malicious signing keys in the information-theoretic sense. We also give variants of our schemes for a vector of messages.
Last updated:  2017-01-20
Anonymous contribution of data
Matthew McKague, David Eyers
Many application domains depend on the collection of aggregate statistics from a large number of participants. In such situations, often the individual data points are not required. Indeed participants may wish to preserve the privacy of their specific data despite being willing to contribute to the aggregate statistics. We propose a protocol that allows a server to gather aggregate statistics, while providing anonymity to participants. Our protocol is information theoretically secure so that the server gains no information about participants’ data other than what is revealed by the aggregate statistics themselves.
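As a generic illustration of how aggregate statistics can be gathered with information-theoretic privacy, the sketch below uses plain additive secret sharing over a modulus; this is a standard building block, not necessarily the exact protocol of the paper, and the modulus and number of aggregators are assumptions made for the example.

```python
# Generic illustration (not necessarily the paper's protocol): each participant
# splits its private value into random additive shares modulo M and sends one
# share to each of several non-colluding aggregators.  Any proper subset of the
# shares reveals nothing about the value, yet the partial sums reconstruct the
# aggregate statistic.
import secrets

M = 2**64  # modulus large enough to hold the true aggregate

def share(value, n_aggregators):
    """Split value into n additive shares that sum to value mod M."""
    shares = [secrets.randbelow(M) for _ in range(n_aggregators - 1)]
    shares.append((value - sum(shares)) % M)
    return shares

def aggregate(per_aggregator_shares):
    """Each aggregator sums what it received; adding the partial sums mod M
    yields the total without revealing individual contributions."""
    partials = [sum(col) % M for col in per_aggregator_shares]
    return sum(partials) % M

values = [12, 7, 30]                       # participants' private data points
n_agg = 3
shares = [share(v, n_agg) for v in values]
per_aggregator = list(zip(*shares))        # aggregator j gets shares[i][j]
assert aggregate(per_aggregator) == sum(values) % M
```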
Last updated:  2017-01-24
Accumulators with Applications to Anonymity-Preserving Revocation
Foteini Baldimtsi, Jan Camenisch, Maria Dubovitskaya, Anna Lysyanskaya, Leonid Reyzin, Kai Samelin, Sophia Yakoubov
Membership revocation is essential for cryptographic applications, from traditional PKIs to group signatures and anonymous credentials. Of the various solutions for the revocation problem that have been explored, dynamic accumulators are one of the most promising. We propose Braavos, a new, RSA-based, dynamic accumulator. It has optimal communication complexity and, when combined with efficient zero-knowledge proofs, provides an ideal solution for anonymous revocation. For the construction of Braavos we use a modular approach: we show how to build an accumulator with better functionality and security from accumulators with fewer features and weaker security guarantees. We then describe an anonymous revocation component (ARC) that can be instantiated using any dynamic accumulator. ARC can be added to any anonymous system, such as anonymous credentials or group signatures, in order to equip it with a revocation functionality. Finally, we implement ARC with Braavos and plug it into Idemix, the leading implementation of anonymous credentials. This work resolves, for the first time, the problem of practical revocation for anonymous credential systems.
Last updated:  2017-06-10
Indifferentiability of Iterated Even-Mansour Ciphers with Non-Idealized Key-Schedules: Five Rounds are Necessary and Sufficient
Yuanxi Dai, Yannick Seurin, John Steinberger, Aishwarya Thiruvengadam
We prove that the 5-round iterated Even-Mansour (IEM) construction (which captures the high-level structure of the class of key-alternating ciphers) with a non-idealized key-schedule (such as the trivial key-schedule, where all round keys are equal) is indifferentiable from an ideal cipher. In a separate result, we also prove that five rounds are necessary by describing an attack against the corresponding 4-round construction. This closes the gap regarding the exact number of rounds for which the IEM construction with a non-idealized key-schedule is indifferentiable from an ideal cipher, which was previously only known to lie between four and twelve.
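For reference, the toy sketch below instantiates the iterated Even-Mansour construction with the trivial key-schedule (all round keys equal), the object analyzed in the paper; the 8-bit block size and the concretely sampled permutations are illustrative stand-ins for the ideal public round permutations.

```python
# Toy sketch of the r-round iterated Even-Mansour (IEM) construction with the
# trivial key-schedule (all round keys equal):
#     E_k(x) = k xor P_r( k xor P_{r-1}( ... k xor P_1( k xor x ) ... ) )
# The public permutations P_1..P_5 are modeled as concrete random permutations
# on a toy 8-bit domain purely for illustration.
import random

DOMAIN = 256      # 8-bit toy block
ROUNDS = 5        # the paper: five rounds are necessary and sufficient
rng = random.Random(0)

perms = [rng.sample(range(DOMAIN), DOMAIN) for _ in range(ROUNDS)]
inv_perms = [[0] * DOMAIN for _ in range(ROUNDS)]
for r, perm in enumerate(perms):
    for x, y in enumerate(perm):
        inv_perms[r][y] = x

def iem_encrypt(key, block):
    for perm in perms:
        block = perm[block ^ key]   # key addition, then public permutation
    return block ^ key              # final key addition

def iem_decrypt(key, block):
    state = block ^ key
    for inv in reversed(inv_perms):
        state = inv[state] ^ key
    return state

# round-trip sanity check for one toy key
assert all(iem_decrypt(0x3A, iem_encrypt(0x3A, x)) == x for x in range(DOMAIN))
```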
Last updated:  2017-02-21
Reducing Garbled Circuit Size While Preserving Circuit Gate Privacy
Yongge Wang, Qutaibah m. Malluhi
Yao's garbled circuits have been extensively used in Secure Function Evaluation (SFE). Several improvements have been proposed to improve the efficiency of garbled circuits. Kolesnikov and Schneider (2008) proposed the free-XOR technique. Naor, Pinkas, and Sumner (1999) introduced the garbled row-reduction technique GRR3 to reduce each garbled gate to three ciphertexts, Pinkas et al. (2009) proposed GRR2, and Zahur, Rosulek, and Evans (2015) introduced a half-gates technique to design free-XOR compatible GRR2. The GRR2, half-gates, and free-XOR garbling schemes improve efficiency by leaking the locations of XOR gates in the circuit. This kind of information leakage is acceptable for SFE protocols, though it may be unacceptable for other protocols such as Private Function Evaluation (PFE). For example, these techniques could not be used to improve the efficiency of existing non-universal-circuit-based constant-round PFE protocols. The first result of this paper is a Gate Privacy preserving Garbled Row Reduction technique GPGRR2 for designing garbled circuits with at most two ciphertexts for each garbled gate. Compared with the state-of-the-art gate-privacy-preserving garbling scheme GRR3, the scheme GPGRR2 reduces the garbled circuit size by a factor of at least 33%. The second result is the design of a linear (over the integers) garbling scheme that garbles a single odd gate to one ciphertext. Zahur, Rosulek, and Evans (2015) proved that a linear garbling scheme garbles each odd gate to at least $2$ ciphertexts. Our result shows that Zahur et al.'s result should be formulated more appropriately by restricting the ``linear concept'' explicitly to the field $GF({2^t})$. The third result is to use the GPGRR2 scheme to reduce the number of ciphertexts in non-universal-circuit-based PFE protocols by a factor of 25%.
Last updated:  2022-12-19
Practical Non-Malleable Codes from $\ell$-more Extractable Hash Functions
Aggelos Kiayias, Feng-Hao Liu, Yiannis Tselekounis
In this work, we significantly improve the efficiency of non-malleable codes in the split state model, by constructing a code with codeword length $|s|+O(k)$, where $|s|$ is the length of the message, and $k$ is the security parameter. This is a substantial improvement over previous constructions, both asymptotically and concretely. Our construction relies on a new primitive which we define and study, called $\ell$-more extractable hash functions. This notion, which may be of independent interest, guarantees that if any adversary that is given access to $\ell \in \mathbb{N}$ precomputed hash values $v_{1},\dots, v_{\ell}$ produces a new valid hash value $\tilde v$, then it must know a pre-image of $\tilde v$. This is a stronger notion than the one by Bitansky et al. (Eprint '11) and Goldwasser et al. (ITCS '12, Eprint '14), which considers adversaries that get no access to precomputed hash values prior to producing their own value. By appropriately relaxing the extractability requirement (without hurting the applicability of the primitive) we instantiate $\ell$-more extractable hash functions under the same assumptions used for the previous extractable hash functions by Bitansky et al. and Goldwasser et al. (a variant of the Knowledge of Exponent Assumption).
Last updated:  2017-06-28
SePCAR: A Secure and Privacy-Enhancing Protocol for Car Access Provision (Full Version)
Iraklis Symeonidis, Abdelrahaman Aly, Mustafa A. Mustafa, Bart Mennink, Siemen Dhooghe, Bart Preneel
We present an efficient, secure and privacy-enhancing protocol for car access provision, named SePCAR. The protocol is fully decentralised and allows users to share their cars conveniently in such a way that the security and privacy of the users are not sacrificed. It provides generation, update, revocation, and distribution mechanisms for access tokens to shared cars, as well as procedures to solve disputes and to deal with law enforcement requests, for instance in the case of car incidents. We prove that SePCAR meets its appropriate security and privacy requirements and that it is efficient: our practical efficiency analysis through a proof-of-concept implementation shows that SePCAR takes only 1.55 seconds for a car access provision.
Last updated:  2017-01-15
CCA-Secure Inner-Product Functional Encryption from Projective Hash Functions
Fabrice Benhamouda, Florian Bourse, Helger Lipmaa
In an inner-product functional encryption scheme, the plaintexts are vectors and the owner of the secret key can delegate the ability to compute weighted sums of the coefficients of the plaintext of any ciphertext. Recently, many inner-product functional encryption schemes were proposed. However, none of the known schemes are secure against chosen ciphertext attacks (IND-FE-CCA). We present a generic construction of IND-FE-CCA inner-product functional encryption from projective hash functions with homomorphic properties. We show concrete instantiations based on the DCR assumption, the DDH assumption, and more generally, any Matrix DDH assumption.
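To make the inner-product functionality concrete, the toy sketch below implements a basic DDH-style inner-product FE in the spirit of Abdalla et al., with tiny insecure parameters: a key for a vector y decrypts an encryption of x to the inner product of x and y and nothing more. This only illustrates the functionality; the paper's contribution, a CCA-secure generic construction from projective hash functions, is not what is implemented here, and the group parameters are illustrative assumptions.

```python
# Toy illustration of the inner-product FE *functionality*: a key for y
# decrypts Enc(x) to <x, y>.  Basic CPA-style DDH scheme with tiny insecure
# parameters, NOT the CCA-secure construction of the paper.
import random

p, q, g = 1019, 509, 4     # toy group: order-q subgroup of Z_p^* (p = 2q + 1)
n = 3                      # length of the plaintext vectors

def setup():
    msk = [random.randrange(q) for _ in range(n)]        # s_1, ..., s_n
    mpk = [pow(g, s, p) for s in msk]                     # h_i = g^{s_i}
    return mpk, msk

def encrypt(mpk, x):
    r = random.randrange(q)
    ct0 = pow(g, r, p)                                    # g^r
    cts = [(pow(h, r, p) * pow(g, xi, p)) % p for h, xi in zip(mpk, x)]
    return ct0, cts                                       # ct_i = h_i^r * g^{x_i}

def keygen(msk, y):
    return sum(s * yi for s, yi in zip(msk, y)) % q, y    # sk_y = <s, y> mod q

def decrypt(sk, ct):
    sk_y, y = sk
    ct0, cts = ct
    d = 1
    for c, yi in zip(cts, y):
        d = (d * pow(c, yi, p)) % p
    d = (d * pow(ct0, -sk_y, p)) % p                      # leaves g^{<x, y>}
    for e in range(q):                                    # brute-force small dlog
        if pow(g, e, p) == d:
            return e
    return None

mpk, msk = setup()
x, y = [1, 2, 3], [4, 5, 6]
assert decrypt(keygen(msk, y), encrypt(mpk, x)) == 32     # <x, y> = 32
```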
Last updated:  2017-01-15
Double-base scalar multiplication revisited
Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange
This paper reduces the number of field multiplications required for scalar multiplication on conservative elliptic curves. For an average 256-bit integer n, this paper's multiply-by-n algorithm takes just 7.47M per bit on twisted Edwards curves $-x^2+y^2=1+dx^2y^2$ with small $d$. The previous record, 7.62M per bit, was unbeaten for seven years. Unlike previous record-setting algorithms, this paper's multiply-by-n algorithm uses double-base chains. The new speeds rely on advances in tripling speeds and on advances in constructing double-base chains. This paper's new tripling formula for twisted Edwards curves takes just 11.4M, and its new algorithm for constructing an optimal double-base chain for n takes just $(\log n)^{2.5+o(1)}$ bit operations. Extending this double-base algorithm to double-scalar multiplications, as in signature verification, takes 8.80M per bit to compute $n_1P+n_2Q$. Previous techniques used 9.34M per bit.
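To illustrate what a double-base representation looks like, the sketch below uses a simple greedy {2,3} expansion; this is shown only for intuition and is not the paper's algorithm, which constructs optimal double-base chains with a much more careful procedure.

```python
# Illustration of a double-base {2,3} representation: write n as a sum of
# terms 2^a * 3^b using a greedy rule (largest such term <= remainder).
# This illustrates the concept only; the paper constructs *optimal*
# double-base chains with a (log n)^{2.5+o(1)}-bit-operation algorithm.
def greedy_double_base(n):
    terms = []
    while n > 0:
        best = 1
        power3 = 1
        while power3 <= n:              # enumerate 3^b <= n
            t = power3
            while t * 2 <= n:           # multiply by 2 while it still fits
                t *= 2
            best = max(best, t)
            power3 *= 3
        terms.append(best)              # best = largest 2^a * 3^b <= n
        n -= best
    return terms

terms = greedy_double_base(314159)
assert sum(terms) == 314159
print(terms)    # a short list of 2^a * 3^b terms summing to n
```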
Last updated:  2017-01-14
Low-Complexity Cryptographic Hash Functions
Benny Applebaum, Naama Haramaty, Yuval Ishai, Eyal Kushilevitz, Vinod Vaikuntanathan
Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash functions is {\em collision resistant} hash functions (CRH), which prevent an efficient attacker from finding a pair of inputs on which the function has the same output. Despite the ubiquitous role of hash functions in cryptography, several of the most basic questions regarding their computational and algebraic complexity remained open. In this work we settle most of these questions under new, but arguably quite conservative, cryptographic assumptions, whose study may be of independent interest. Concretely, we obtain the following results: (1) Low-complexity CRH. Assuming the intractability of finding short codewords in natural families of linear error-correcting codes, there are CRH that shrink the input by a constant factor and have a {\em constant algebraic degree} over $Z_2$ (as low as 3), or even {\em constant output locality and input locality}. Alternatively, CRH with an arbitrary polynomial shrinkage can be computed by {\em linear-size} circuits. (2) Win-win results. If low-degree CRH with good shrinkage {\em do not} exist, this has useful consequences for learning algorithms and data structures. (3) Degree-2 hash functions. Assuming the conjectured intractability of solving a random system of quadratic equations over $Z_2$, a uniformly random degree-2 mapping is a {\em universal one-way hash function} (UOWHF). UOWHF relaxes CRH by forcing the attacker to find a collision with a random input picked by a challenger. On the other hand, a uniformly random degree-2 mapping is {\em not} a CRH. We leave the existence of degree-2 CRH open, and relate it to open questions on the existence of degree-2 randomized encodings of functions.
Last updated:  2017-03-24
Privacy-Preserving Classification on Deep Neural Network
Hervé Chabanne, Amaury de Wargny, Jonathan Milgram, Constance Morel, Emmanuel Prouff
Neural Networks (NN) are today increasingly used in Machine Learning, where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development however also gives rise to important data privacy risks. This observation motivated Microsoft researchers to propose a framework, called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryption (FHE) techniques to get both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (e.g., $2$), Cryptonets unfortunately becomes ineffective for deeper NNs, which leaves the problem of privacy-preserving matching open in these contexts. This work successfully addresses this problem by combining the original ideas of Cryptonets' solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with $6$ non-linear layers. When applied to the MNIST database, it is competitive with the accuracy of the best non-secure versions, thus significantly improving Cryptonets.
Last updated:  2017-01-13
Analysis of the NORX Core Permutation
Alex Biryukov, Aleksei Udovenko, Vesselin Velichkov
NORX is one of the fifteen authenticated encryption algorithms that have reached the third round of the CAESAR competition. NORX is built using the sponge-based Monkey Duplex construction. In this note we analyze the core permutation $F$. We show that it has rotational symmetries on different structure levels. This yields simple distinguishing properties for the permutation, which propagate with very high probability or even probability one. We also investigate differential symmetries in NORX at the word level. A new type of truncated differentials called symmetric truncated differentials (STD) is proposed. It is shown that, under the Markov assumption, up to $2.125$ rounds of the $F$ function of NORX32 and NORX64 can be distinguished using STD. Finally, we note that our analysis covers only the permutation $F$ and does not immediately threaten the security claims of the designers.
Last updated:  2017-01-13
Analyzing the Shuffling Side-Channel Countermeasure for Lattice-Based Signatures
Peter Pessl
Implementation security for lattice-based cryptography is still a vastly unexplored field. At CHES 2016, the very first side-channel attack on a lattice-based signature scheme was presented. Later, shuffling was proposed as an inexpensive means to protect the Gaussian sampling component against such attacks. However, the concrete effectiveness of this countermeasure has never been evaluated. We change that by presenting an in-depth analysis of the shuffling countermeasure. Our analysis consists of two main parts. First, we perform a side-channel attack on a Gaussian sampler implementation. We combine templates with a recovery of data-dependent branches, which are inherent to samplers. We show that an adversary can realistically recover some samples with very high confidence. Second, we present a new attack against the shuffling countermeasure in the context of Gaussian sampling and lattice-based signatures. We do not attack the shuffling algorithm as such, but exploit differing distributions of certain variables. We give a broad analysis of our attack by considering multiple modeled SCA adversaries. We show that a simple version of shuffling is not an effective countermeasure. With our attack, a profiled SCA adversary can recover the key by observing only 7000 signatures. A second version of this countermeasure, which uses Gaussian convolution in conjunction with shuffling twice, can increase side-channel security and the number of required signatures significantly. Here, roughly 285000 observations are needed for a successful attack. Yet, this number is still practical.
Last updated:  2017-01-13
Cryptanalysis of GlobalPlatform Secure Channel Protocols
Mohamed Sabt, Jacques Traoré
GlobalPlatform (GP) card specifications are the de facto standards for the industry of smart cards. Being highly sensitive, GP specifications were defined regarding stringent security requirements. In this paper, we analyze the cryptographic core of these requirements; i.e. the family of Secure Channel Protocols (SCP). Our main results are twofold. First, we demonstrate a theoretical attack against SCP02, which is the most popular protocol in the SCP family. We discuss the scope of our attack by presenting an actual scenario in which a malicious entity can exploit it in order to recover encrypted messages. Second, we investigate the security of SCP03 that was introduced as an amendment in 2009. We find that it provably satisfies strong notions of security. Of particular interest, we prove that SCP03 withstands algorithm substitution attacks (ASAs) defined by Bellare et al. that may lead to secret mass surveillance. Our findings highlight the great value of the paradigm of provable security for standards and certification, since unlike extensive evaluation, it formally guarantees the absence of security flaws.
Last updated:  2017-01-13
Honey Encryption for Language
Marc Beunardeau, Houda Ferradi, Rémi Géraud, David Naccache
Honey Encryption (HE), introduced by Juels and Ristenpart (Eurocrypt 2014), is an encryption paradigm designed to produce ciphertexts yielding plausible-looking but bogus plaintexts upon decryption with wrong keys. Thus brute-force attackers need to use additional information to determine whether they indeed found the correct key. At the end of their paper, Juels and Ristenpart leave as an open question the adaptation of honey encryption to natural language messages. A recent paper by Chatterjee et al. takes a mild attempt at the challenge and constructs a natural language honey encryption scheme relying on simple models for passwords. In this position paper we explain why this approach cannot be extended to reasonable-size human-written documents e.g. e-mails. We propose an alternative approach and evaluate its security.
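To make the Honey Encryption idea concrete, here is a minimal sketch of a distribution-transforming encoder (DTE) over a two-message toy prior; the message prior, seed size and one-time-pad-style "cipher" are all assumptions made for the example, and this is not the natural-language scheme the paper discusses. Decrypting with a wrong key yields a plausible message sampled from the prior rather than obvious garbage.

```python
# Toy illustration of Honey Encryption via a distribution-transforming
# encoder (DTE): messages map to seed ranges proportional to their prior
# probability, so decryption under a wrong key still decodes to a
# plausible-looking message.  Parameters are illustrative only.
import secrets

PRIOR = {"transfer $10": 0.70, "transfer $1000000": 0.30}   # toy message prior
SEED_BITS = 16
SPACE = 1 << SEED_BITS

def encode(msg):                 # DTE: uniform seed within the message's range
    lo = 0.0
    for m, pr in PRIOR.items():
        if m == msg:
            return int(lo * SPACE) + secrets.randbelow(int(pr * SPACE))
        lo += pr
    raise KeyError(msg)

def decode(seed):                # inverse DTE: which message owns this seed?
    lo = 0.0
    for m, pr in PRIOR.items():
        lo += pr
        if seed < int(lo * SPACE):
            return m
    raise ValueError("seed out of range")

def encrypt(key, msg):           # toy one-time-pad-style encryption of the seed
    return encode(msg) ^ key

def decrypt(key, ct):
    return decode((ct ^ key) % SPACE)

ct = encrypt(0xBEEF, "transfer $10")
assert decrypt(0xBEEF, ct) == "transfer $10"
print(decrypt(0x1234, ct))       # wrong key: still a plausible message
```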
Last updated:  2017-05-22
Authenticated Garbling and Efficient Maliciously Secure Two-Party Computation
Xiao Wang, Samuel Ranellucci, Jonathan Katz
We propose a simple and efficient framework for obtaining efficient constant-round protocols for maliciously secure two-party computation. Our framework uses a function-independent preprocessing phase to generate authenticated information for the two parties; this information is then used to construct a single ``authenticated'' garbled circuit which is transmitted and evaluated. We also show how to efficiently instantiate the preprocessing phase by designing a highly optimized version of the TinyOT protocol by Nielsen et al. Our overall protocol outperforms existing work in both the single-execution and amortized settings, with or without preprocessing: - In the single-execution setting, our protocol evaluates an AES circuit with malicious security in 37 ms with an online time of just 1 ms. Previous work with the best online time (also 1 ms) requires 124 ms in total; previous work with the best total time requires 62 ms (with 14 ms online time). - If we amortize the computation over 1024 executions, each AES computation requires just 6.7 ms with roughly the same online time as above. The best previous work in the amortized setting has roughly the same total time but does not support function-independent preprocessing. Our work shows that the performance penalty for maliciously secure two-party computation (as compared to semi-honest security) is much smaller than previously believed.
Last updated:  2019-05-30
Bounded-Collusion Attribute-Based Encryption from Minimal Assumptions
Gene Itkis, Emily Shen, Mayank Varia, David Wilson, Arkady Yerukhimovich
Attribute-based encryption (ABE) enables encryption of messages under access policies so that only users with attributes satisfying the policy can decrypt the ciphertext. In standard ABE, an arbitrary number of colluding users, each without an authorized attribute set, cannot decrypt the ciphertext. However, all existing ABE schemes rely on concrete cryptographic assumptions such as the hardness of certain problems over bilinear maps or integer lattices. Furthermore, it is known that ABE cannot be constructed from generic assumptions such as public-key encryption using black-box techniques. In this work, we revisit the problem of constructing ABE that tolerates collusions of arbitrary but a priori bounded size. We present an ABE scheme secure against bounded collusions that requires only semantically secure public-key encryption. Our scheme achieves significant improvement in the size of the public parameters, secret keys, and ciphertexts over the previous construction of bounded-collusion ABE from minimal assumptions by Gorbunov et al. (CRYPTO 2012). We also obtain bounded-collusion symmetric-key ABE (which requires the secret key for encryption) by replacing the public-key encryption with symmetric-key encryption, which can be built from the minimal assumption of one-way functions.
Last updated:  2017-07-31
A Decentralized PKI In A Mobile Ecosystem
Varun Chandrasekaran, Lakshminarayanan Subramanian
A public key infrastructure (PKI) supports the authentication and distribution of public encryption keys, enabling secure communication among other uses. However, PKIs rely heavily on a centralized trusted party called the certificate authority (CA), which acts as the root of trust and is also a single point of compromise. We describe the design of a decentralized PKI utilizing the intrinsic trust of the cellular (GSM) network. Secure Mobile Identities (SMI) is a repetitive key-exchange protocol that utilizes cryptographic proofs to prove the unique identities of mobile users. In this paper, we discuss how one can create strong reputations for an identity despite the presence of an adversary who can exploit the probabilistic one-way trust property of GSM networks. Our evaluation shows minimal computational overhead on existing devices, and quick bootstrap times of under 10 minutes of mobile interaction despite the minimal trust assumptions placed, suggesting easy adoption in today's operational ecosystem.
Last updated:  2018-08-18
Scalable Multi-Party Private Set-Intersection
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
In this work we study the problem of private set-intersection in the multi-party setting and design two protocols with the following improvements compared to prior work. First, our protocols are designed in the so-called star network topology, where a designated party communicates with everyone else, and take a new approach of leveraging the 2PC protocol of [FreedmanNP04]. This approach minimizes the usage of a broadcast channel, where our semi-honest protocol does not make any use of such a channel and all communication is via point-to-point channels. In addition, the communication complexity of our protocols scales with the number of parties. More concretely, (1) our first semi-honest secure protocol implies communication complexity that is linear in the input sizes, namely $O((\sum_{i=1}^n m_i)\cdot\kappa)$ bits of communication where $\kappa$ is the security parameter and $m_i$ is the size of $P_i$'s input set, whereas the overall computational overhead is quadratic in the input sizes only for a designated party, and linear for the rest. We further reduce this overhead by employing two types of hashing schemes. (2) Our second protocol is proven secure in the malicious setting. This protocol induces communication complexity $O((n^2 + nm_{\max} + nm_{\min}\log m_{\max})\kappa)$ bits of communication where $m_{\min}$ (resp. $m_{\max}$) is the minimum (resp. maximum) over all input set sizes and $n$ is the number of parties.
Last updated:  2017-01-13
Constant Round Adaptively Secure Protocols in the Tamper-Proof Hardware Model
Carmit Hazay, Antigoni Polychroniadou, Muthuramakrishnan Venkitasubramaniam
Achieving constant-round adaptively secure protocols (where all parties can be corrupted) in the plain model is a notoriously hard problem. Very recently, three works published in TCC 2015 (Dachman-Soled et al., Garg and Polychroniadou, Canetti et al.) solved the problem in the Common Reference String (CRS) model. In this work, we present a constant-round adaptive UC-secure computation protocol for all well-formed functionalities in the tamper-proof hardware model using stateless tokens from only one-way functions. In contrast, all prior works in the CRS model require very strong assumptions, in particular, the existence of indistinguishability obfuscation. As a corollary to our techniques, we present the first adaptively secure protocols in the Random Oracle Model (ROM) with round complexity proportional to the depth of the circuit implementing the functionality. Our protocols are secure in the Global Random Oracle Model introduced recently by Canetti, Jain and Scafuro in CCS 2014, which provides strong compositional guarantees. More precisely, we obtain an adaptively secure UC-commitment scheme in the global ROM assuming only one-way functions. In comparison, the protocol of Canetti, Jain and Scafuro achieves only static security and relies specifically on the Decisional Diffie-Hellman (DDH) assumption.
Last updated:  2017-01-13
Improved Structure Preserving Signatures under Standard Bilinear Assumptions
Charanjit S. Jutla, Arnab Roy
We show that the recent structure-preserving signature (SPS) scheme of Kiltz, Pan and Wee [CRYPTO 2015], provably secure under the standard bilinear pairings group assumption SXDH, can be improved to have one less group element and one less pairing product equation in the signature verification step. Our improved SPS scheme only requires six group elements (five in one group, and one in the other), and two pairing product equations for verification. The number of pairing product equations is optimal, as it matches a known lower bound of Abe et al. [CRYPTO 2011]. The number of group elements in the signature also approaches the known lower bound of four for the SXDH assumption. Further, while the earlier scheme had a security reduction which incurred a security loss that is quadratic in the number of queries $Q$, our novel security reduction incurs only a $Q \log{Q}$ factor loss in security. Structure-preserving signatures are used pervasively in group signatures, group encryptions, blind signatures, proxy signatures and many other anonymous credential applications. Our work directly leads to improvements in these schemes. Moreover, the improvements are usually of a higher multiplicative factor order, as these constructions use Groth-Sahai NIZK proofs for zero-knowledge verification of pairing-product equations. We also give our construction under the more general and standard $\mathcal{D}_k$-MDDH (Matrix-DDH) assumption. The signature size in our scheme is $3k+2$ elements in one group, and one element in the other. The number of pairing product equations required for verification is only $2k$, whereas the earlier schemes required at least $2k+1$ equations.
Last updated:  2018-05-01
Searchable Encrypted Relational Databases: Risks and Countermeasures
Mohamed Ahmed Abdelraheem, Tobias Andersson, Christian Gehrmann
We point out the risks of protecting relational databases via Searchable Symmetric Encryption (SSE) schemes by proposing an inference attack exploiting the structural properties of relational databases. We show that record-injection attacks mounted on relational databases have worse consequences than their file-injection counterparts on unstructured databases. Moreover, we discuss some techniques to reduce the effectiveness of inference attacks exploiting the access pattern leakage existing in SSE schemes. To the best of our knowledge, this is the first work that investigates the security of relational databases protected by SSE schemes.
Last updated:  2017-01-13
Dual System Framework in Multilinear Settings and Applications to Fully Secure (Compact) ABE for Unbounded-Size Circuits
Nuttapong Attrapadung
We propose a new generic framework for constructing fully secure attribute based encryption (ABE) in multilinear settings. It is applicable in a generic manner to any predicates. Previous generic frameworks of this kind are given only in bilinear group settings, where applicable predicate classes are limited. Our framework provides an abstraction of dual system paradigms over composite-order graded multilinear encoding schemes in a black-box manner. As applications, we propose new fully secure ABE systems for general predicates, namely, ABE for circuits. We obtain two schemes for each of the key-policy (KP) and ciphertext-policy (CP) variants of ABE. All of our four fully secure schemes can deal with unbounded-size circuits, while enjoying succinctness, meaning that the key and ciphertext sizes are (less than or) proportional to the corresponding circuit sizes. In the CP-ABE case, no previous scheme achieves such properties, even among selectively secure systems. Furthermore, our second KP-ABE scheme achieves constant-size ciphertexts, whereas our second CP-ABE scheme achieves constant-size keys. Previous ABE systems for circuits are either selectively secure (Gorbunov et al. STOC'13, Garg et al. Crypto'13, and subsequent works), or semi-adaptively secure (Brakerski and Vaikuntanathan Crypto'16), or fully secure but not succinct and restricted to bounded-size circuits (Garg et al. ePrint 2014/622, and Garg et al. TCC'16-A).
Last updated:  2017-01-19
Privacy for Distributed Databases via (Un)linkable Pseudonyms
Jan Camenisch, Anja Lehmann
When data maintained in a decentralized fashion needs to be synchronized or exchanged between different databases, related data sets usually get associated with a unique identifier. While this approach facilitates cross-domain data exchange, it also comes with inherent drawbacks in terms of controllability. As data records can easily be linked, no central authority can limit or control the information flow. Worse, when records contain sensitive personal data, as is for instance the case in national social security systems, such linkability poses a massive security and privacy threat. An alternative approach is to use domain-specific pseudonyms, where only a central authority knows the cross-domain relation between the pseudonyms. However, current solutions require the central authority to be a fully trusted party, as otherwise it can provide false conversions and exploit the data it learns from the requests. We propose an (un)linkable pseudonym system that overcomes those limitations, and enables controlled yet privacy-friendly exchange of distributed data. We prove our protocol secure in the UC framework and provide an efficient instantiation based on discrete-logarithm related assumptions.
Last updated:  2017-10-08
A Generic Approach to Constructing and Proving Verifiable Random Functions
Rishab Goyal, Susan Hohenberger, Venkata Koppula, Brent Waters
Verifiable Random Functions (VRFs), as introduced by Micali, Rabin and Vadhan, are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate under a specific number-theoretic setting --- mostly in bilinear groups --- and then grappling with the challenges of proving security in the VRF environments. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions. In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudo random function for a functionality that is ``admissible hash friendly'', (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness indistinguishable proofs (NIWIs). The first primitive can be replaced with a more basic puncturable PRF constraint if one is willing to settle for selective security or assume sub-exponential hardness of assumptions. In the second half of our work we support our generic approach by giving new constructions of the underlying primitives. We first provide new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. Second, we give two new constructions of 1-bounded constrained PRFs for admissible hash friendly constructions. Our first construction is from the $n$-powerDDH assumption. The next is from the $\Phi$-hiding assumption.
Last updated:  2017-01-11
concerto: A Methodology Towards Reproducible Analyses of TLS Datasets
Olivier Levillain, Maxence Tury, Nicolas Vivet
Over the years, SSL/TLS has become an essential part of Internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its first application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS and other TLS deployments at large, several studies have been conducted to grasp the state of the ecosystem, and to characterize the quality of certificate chains in particular. In this paper, we propose to analyse some of the existing data concerning TLS measures on the Internet. We studied several datasets, from the first public ones in 2010 to more recent scans. Even though the collection methodology and the tools used vary between campaigns, we propose a unified and reproducible way to analyse the TLS ecosystem through different datasets. Our approach is based on a set of open-source tools, concerto. Our contribution is therefore threefold: an analysis of existing datasets to propose a unified methodology, the implementation of our approach with concerto, and the presentation of some results to validate our toolset.
Last updated:  2017-06-29
SmartPool: Practical Decentralized Pooled Mining
Loi Luu, Yaron Velner, Jason Teutsch, Prateek Saxena
Cryptocurrencies such as Bitcoin and Ethereum are operated by a handful of mining pools. Nearly $95\%$ of Bitcoin's and $80\%$ of Ethereum's mining power resides with less than ten and six mining pools respectively. Although miners benefit from low payout variance in pooled mining, centralized mining pools require members to trust that pool operators will remunerate them fairly. Furthermore, centralized pools pose the risk of transaction censorship from pool operators, and open up possibilities for collusion between pools for perpetrating severe attacks. In this work, we propose SmartPool, a novel protocol design for a decentralized mining pool. Our protocol shows how one can leverage {\em smart contracts}, autonomous blockchain programs, to decentralize cryptocurrency mining. SmartPool gives transaction selection control back to miners while yielding low-variance payouts. SmartPool incurs mining fees lower than centralized mining pools and is designed to scale to a large number of miners. We implemented and deployed a robust SmartPool implementation on the Ethereum and Ethereum Classic networks. To date, our deployed pools have handled a peak hashrate of 30 GH/s from Ethereum miners, resulting in $105$ blocks, costing miners a mere $0.6\%$ of block rewards in transaction fees.
Last updated:  2017-09-14
Verifiable Random Functions from Non-Interactive Witness-Indistinguishable Proofs
Nir Bitansky
{\em Verifiable random functions} (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function's value $y$ at any point $x$, can also generate a non-interactive proof $\pi$ that $y$ is correct, without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed towards the construction of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood. We present new constructions of VRFs from general primitives, the main one being {\em non-interactive witness-indistinguishable proofs} (NIWIs). This includes: (1) a selectively-secure VRF assuming NIWIs and non-interactive commitments (as usual, the VRF can be made adaptively-secure assuming subexponential hardness of the underlying primitives); and (2) an adaptively-secure VRF assuming (polynomially-hard) NIWIs, non-interactive commitments, and {\em (single-key) constrained pseudorandom functions} for a restricted class of constraints. The above primitives can be instantiated under various standard assumptions, which yields corresponding VRF instantiations, under different assumptions than were known so far. One notable example is a non-uniform construction of VRFs from subexponentially-hard trapdoor permutations, or more generally, from {\em verifiable pseudorandom generators} (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question by Dwork and Naor (FOCS '00). The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of {\em indistinguishability obfuscation}.
Last updated:  2017-01-26
Improved Algorithms for the Approximate k-List Problem in Euclidean Norm
Gottfried Herold, Elena Kirshanova
We present an algorithm for the approximate $k$-List problem for the Euclidean distance that improves upon the Bai-Laarhoven-Stehle (BLS) algorithm from ANTS'16. The improvement stems from the observation that almost all the solutions to the approximate $k$-List problem form a particular configuration in $n$-dimensional space. Due to special properties of configurations, it is much easier to verify whether a $k$-tuple forms a configuration rather than checking whether it gives a solution to the $k$-List problem. Thus, phrasing the $k$-List problem as a problem of finding such configurations immediately gives a better algorithm. Furthermore, the search for configurations can be sped up using techniques from Locality-Sensitive Hashing (LSH). Stated in terms of configuration search, our LSH-like algorithm offers a broader perspective on previous LSH algorithms. For the Shortest Vector Problem, our configuration-search algorithm results in an exponential improvement for memory-efficient sieving algorithms. For $k=3$, it allows us to bring down the complexity of the BLS sieve algorithm on an $n$-dimensional lattice from $2^{0.4812n+o(n)}$ to $2^{0.3962n + o(n)}$ with the same space requirement $2^{0.1887n + o(n)}$. Note that our algorithm beats the Gauss Sieve algorithm with time resp. space requirements of $2^{0.415n+o(n)}$ resp. $2^{0.208n + o(n)}$, while being easy to implement. Using LSH techniques, we can further reduce the time complexity down to $2^{0.3717n + o(n)}$ while retaining a memory complexity of $2^{0.1887n+o(n)}$.
Last updated:  2017-09-27
Provable Security of Substitution-Permutation Networks
Yevgeniy Dodis, Jonathan Katz, John Steinberger, Aishwarya Thiruvengadam, Zhe Zhang
Many modern block ciphers are constructed based on the paradigm of substitution-permutation networks (SPNs). But, somewhat surprisingly---especially in comparison with Feistel networks, which have been analyzed by dozens of papers going back to the seminal work of Luby and Rackoff---there are essentially no provable-security results about SPNs. In this work, we initiate a comprehensive study of the security of SPNs as strong pseudorandom permutations when the underlying "$S$-box" is modeled as a public random permutation. We show that 3~rounds of S-boxes are necessary and sufficient for secure linear SPNs, but that even 1-round SPNs can be secure when non-linearity is allowed. Additionally, our results imply security in settings where an SPN structure is used for domain extension of a block cipher, even when the attacker has direct access to the small-domain block cipher.
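To make the object of study concrete, here is a toy linear SPN in Python, with the S-box modeled as a single public random permutation on 8-bit cells and the linear layer taken to be a simple cell rotation. This is only a structural sketch of the SPN paradigm (the names and parameters are illustrative), not the paper's construction and certainly not a secure cipher.

    import random

    random.seed(1)                                    # fixed seed: a fixed, public "random" S-box
    SBOX = list(range(256)); random.shuffle(SBOX)     # public random permutation on 8-bit values

    def spn_round(state, round_key):
        """One SPN round on a list of 8-bit cells: key mixing, S-box layer, linear layer."""
        state = [s ^ k for s, k in zip(state, round_key)]   # key addition (XOR)
        state = [SBOX[s] for s in state]                     # substitution layer
        return state[1:] + state[:1]                         # toy linear layer: rotate the cells

    def spn(state, round_keys):
        for rk in round_keys:
            state = spn_round(state, rk)
        return state

    # example: 3 rounds on a 4-cell (32-bit) state
    keys = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
    print(spn([0x41, 0x42, 0x43, 0x44], keys))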
Last updated:  2017-01-11
Tight Upper and Lower Bounds for Leakage-Resilient, Locally Decodable and Updatable Non-Malleable Codes
Dana Dachman-Soled, Mukul Kulkarni, Aria Shahverdi
In a recent result, Dachman-Soled et al.~(TCC '15) proposed a new notion called locally decodable and updatable non-malleable codes, which, informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access. They also considered locally decodable and updatable non-malleable codes that are leakage-resilient, allowing for adversaries who continually leak information in addition to tampering. Unfortunately, the locality of their construction in the continual setting was $\Omega(\log n)$, meaning that if the original message size was $n$, then $\Omega(\log n)$ positions of the codeword had to be accessed upon each decode and update instruction. In this work, we ask whether super-constant locality is inherent in this setting. We answer the question affirmatively by showing tight upper and lower bounds. Specifically, in any threat model which allows for a rewind attack (wherein the attacker leaks a small amount of data, waits for the data to be overwritten, and then writes the original data back), we show that a locally decodable and updatable non-malleable code with block size $\chi \in \mathrm{poly}(\lambda)$ bits requires locality $\delta(n) \in \omega(1)$, where $n = \mathrm{poly}(\lambda)$ is the message length and $\lambda$ is the security parameter. On the other hand, we revisit the threat model of Dachman-Soled et al.~(TCC '15), which indeed allows the adversary to launch a rewind attack, and present a construction of a locally decodable and updatable non-malleable code with block size $\chi \in \Omega(\lambda^{1/\mu})$ bits (for constant $0 < \mu < 1$) and locality $\delta(n)$, for any $\delta(n) \in \omega(1)$ and $n = \mathrm{poly}(\lambda)$.
Last updated:  2017-02-27
ORAMs in a Quantum World
Tommaso Gagliardoni, Nikolaos P. Karvelas, Stefan Katzenbeisser
We study the security of {\em Oblivious Random Access Machines (ORAM)} in the quantum world. First we introduce a new formal treatment of ORAMs, which is at the same time elegant, and simpler than the known formalization by Goldreich and Ostrovsky. Then we define and analyze the notion of post-quantum security for ORAMs, i.e., classical ORAMs resistant against quantum adversaries. We show that merely switching from classically secure to post-quantum secure encryption in a classically secure ORAM construction does not generally yield a post-quantum secure ORAM construction. On the other hand, we provide a post-quantum secure construction based on a modification of Path-ORAM, the most efficient general ORAM construction, introduced by Stefanov et al. Furthermore, we initiate the study of {\em Quantum ORAMs (QORAMs)}, that is, ORAM constructions meant to be executed between quantum parties acting on arbitrary quantum data. We address many problems arising when formalizing Quantum ORAMs, and we provide a secure construction (based on Path-ORAM and a quantum encryption scheme introduced by Alagic et al.) which has the interesting property of making read and write operations {\em inherently equivalent}. In so doing, we develop a novel technique of quantum extractability which is of independent interest. We believe that QORAMs represent a natural and interesting step in the direction of achieving privacy in future scenarios where quantum computing is ubiquitous.
Last updated:  2017-06-21
Pinocchio-Based Adaptive zk-SNARKs and Secure/Correct Adaptive Function Evaluation
Meilof Veeningen
Pinocchio is a practical zk-SNARK that allows a prover to perform cryptographically verifiable computations with verification effort sometimes less than performing the computation itself. A recent proposal showed how to make Pinocchio adaptive (or ``hash-and-prove''), i.e., to enable proofs with respect to computation-independent commitments. This enables computations to be chosen after the commitments have been produced, and data to be shared between different computations in a flexible way. Unfortunately, this proposal is not zero-knowledge. In particular, it cannot be combined with Trinocchio, a system in which Pinocchio is outsourced to three workers that do not learn the inputs thanks to multi-party computation (MPC). In this paper, we show how to make Pinocchio adaptive in a zero-knowledge way; apply it to make Trinocchio work on computation-independent commitments; present tooling to easily program flexible verifiable computations (with or without MPC); and use it to build a prototype in a medical research case study.
Last updated:  2017-01-11
Universal Samplers with Fast Verification
Venkata Koppula, Andrew Poelstra, Brent Waters
Recently, Hofheinz, Jager, Khurana, Sahai, Waters and Zhandry proposed a new primitive called universal samplers that allows oblivious sampling from arbitrary distributions, and showed how to construct universal samplers using indistinguishability obfuscation (iO) in the ROM. One important limitation for applying universal samplers in practice is that the constructions are built upon indistinguishability obfuscation, and the cost of current iO constructions is prohibitively large. We ask whether the cost of (universal) sampling could be paid by one party and then shared (soundly) with all other users. We address this question by introducing the notion of universal samplers with verification. Our notion follows the general path of Hofheinz et al., but has additional semantics that allows for validation of a sample. In this work we define and give a construction for universal samplers with verification. Our verification procedure is simple and built upon one-time signatures, making verification of a sample much faster than computing it. Security is proved under the subexponential hardness of indistinguishability obfuscation, puncturable pseudorandom functions, and one-time signatures.
Last updated:  2017-12-13
Chameleon-Hashes with Ephemeral Trapdoors And Applications to Invisible Sanitizable Signatures
Jan Camenisch, David Derler, Stephan Krenn, Henrich C. Pöhls, Kai Samelin, Daniel Slamanig
A chameleon-hash function is a hash function involving a trapdoor, knowledge of which allows one to find arbitrary collisions in the domain of the function. In this paper, we introduce the notion of chameleon-hash functions with ephemeral trapdoors. Such hash functions feature additional, i.e., ephemeral, trapdoors which are chosen by the party computing a hash value. The holder of the main trapdoor is then unable to find a second pre-image of a hash value unless also provided with the ephemeral trapdoor used to compute the hash value. We present a formal security model for this new primitive as well as provably secure instantiations. The first instantiation is a generic black-box construction from any secure chameleon-hash function. We further provide three direct constructions based on standard assumptions. Our new primitive has some appealing use cases, including a solution to the long-standing open problem of invisible sanitizable signatures, which we also present.
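For intuition about the underlying primitive, here is a toy chameleon-hash in Python in the style of the classical discrete-log construction, with deliberately tiny (insecure) parameters. The ephemeral-trapdoor notion introduced in this paper strengthens this: a second, per-hash trapdoor is additionally required before such collisions can be computed.

    # toy chameleon hash: CH(m, r) = g^m * h^r mod p, with trapdoor x such that h = g^x
    p, q, g = 467, 233, 4          # tiny toy group: g has prime order q modulo p (insecure sizes)
    x = 57                         # long-term trapdoor
    h = pow(g, x, p)

    def ch(m, r):
        return (pow(g, m, p) * pow(h, r, p)) % p

    def collide(m, r, m_new):
        # the trapdoor holder solves m + x*r = m_new + x*r' (mod q) for r'
        return (r + (m - m_new) * pow(x, -1, q)) % q

    m, r = 42, 99
    m2 = 7
    r2 = collide(m, r, m2)
    assert ch(m, r) == ch(m2, r2)   # same hash value, different message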
Last updated:  2017-01-11
Circuit-Private Multi-Key FHE
Wutichai Chongchitmate, Rafail Ostrovsky
Multi-key fully homomorphic encryption (MFHE) schemes allow polynomially many users, without trusted setup assumptions, to send their data (encrypted under different FHE keys chosen by the users independently of each other) to an honest-but-curious server that can compute the output of an arbitrary polynomial-time computable function on this joint data and issue it back to all participating users for decryption. One of the main open problems left in MFHE was dealing with malicious users without trusted setup assumptions. We show how this can be done, generalizing previous results on circuit-private FHE. Just like standard circuit-private FHE, our security model shows that even if both ciphertexts and public keys of individual users are not well-formed, no information is revealed regarding the server's computation, other than what can be gained from the output on some well-formed inputs of all users. MFHE schemes have direct applications to server-assisted multiparty computation (MPC), called on-the-fly MPC, introduced by López-Alt et al. (STOC '12), where the number of users is not known in advance. In this setting, a poly-time server wants to evaluate a circuit $C$ on data uploaded by multiple clients and encrypted under different keys. Circuit privacy requires that the users' work be independent of $|C|$, which is held by the server, while each client learns nothing about $C$ other than its output. We present a framework for transforming MFHE schemes with no circuit privacy into maliciously circuit-private schemes. We then construct 3-round on-the-fly MPC with circuit privacy against malicious clients in the plain model.
Last updated:  2017-01-11
Access Control Encryption for Equality, Comparison, and More
Georg Fuchsbauer, Romain Gay, Lucas Kowalczyk, Claudio Orlandi
Access Control Encryption (ACE) is a novel paradigm for encryption which allows one to control not only what users in the system are allowed to \emph{read} but also what they are allowed to \emph{write}. The original work of Damgård et al. (ePrint 2016/106) introducing this notion left several open questions, in particular whether it is possible to construct ACE schemes with polylogarithmic complexity (in the number of possible identities in the system) from standard cryptographic assumptions. In this work we answer the question in the affirmative by giving (efficient) constructions of ACE for an interesting class of predicates which includes equality, comparison, interval membership, and more. We instantiate our constructions either from standard pairing assumptions (SXDH) or, more efficiently, in the generic group model.
Last updated:  2017-01-11
Externally Verifiable Oblivious RAM
Joshua Gancher, Adam Groce, Alex Ledger
We present the idea of externally verifiable oblivious RAM (ORAM). Our goal is to allow a client and server carrying out an ORAM protocol to have disputes adjudicated by a third party, allowing for the enforcement of penalties against an unreliable or malicious server. We give a security definition that guarantees protection not only against a malicious server but also against a client making false accusations. We then give modifications of the Path ORAM and Ring ORAM protocols that meet this security definition. These protocols both have the same asymptotic runtimes as the semi-honest original versions and require the external verifier to be involved only when the client or server deviates from the protocol. Finally, we implement externally verified ORAM, along with an automated cryptocurrency contract to use as the external verifier.
Last updated:  2018-01-06
Algebraic Attack Efficiency versus S-box Representation
Hossein Arabnezhad-Khanoki, Babak Sadeghiyan, Josef Pieprzyk
Algebraic analysis of block ciphers aims at finding the secret key by solving a collection of polynomial equations that describe the internal structure of a cipher for chosen observations of plaintext/ciphertext pairs. Although algebraic attacks have been applied to the cryptanalysis of block and stream ciphers, there is a lack of understanding of how the algebraic representation of a cipher affects the efficiency of solving the resulting collection of equations. This work investigates different S-box representations and their effect on the complexity of algebraic attacks. In particular, we observe that an S-box representation defined in the work as \textit{Forward-Backward} (FWBW) leads to a collection of equations that can be solved efficiently. We show that the $SR(10,2,1,4)$ cipher can be broken using the standard algebra software \textsc{Singular} and FGb. This is the best result achieved so far. We also investigate the effect of the S-box description for some lightweight block ciphers; as a by-product, we obtain improvements in the algebraic cryptanalysis of the LBlock, PRESENT and MIBS lightweight block ciphers. Our study and experiments confirm the counter-intuitive conclusion that algebraic attacks work best for the FWBW S-box representation. This contradicts a common belief that algebraic attacks are more efficient for quadratic S-box representations.
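To illustrate what is meant by choosing an S-box representation, the Python sketch below derives algebraic normal form equations for each output bit of a 4-bit S-box and of its inverse. Reading "forward-backward" as combining equations in both directions is our interpretation of the name; the paper's precise FWBW definition may differ.

    # ANF of each coordinate of a 4-bit S-box and of its inverse
    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # the PRESENT S-box (any 4-bit bijection works)
    INV = [SBOX.index(v) for v in range(16)]
    N = 4

    def anf(truth):
        """Fast Moebius transform: truth table of a Boolean function -> ANF coefficients."""
        c = truth[:]
        for i in range(N):
            for j in range(len(c)):
                if j & (1 << i):
                    c[j] ^= c[j ^ (1 << i)]
        return c

    def equations(box, invar, outvar):
        """Return one ANF equation (as a string) per output bit of `box`."""
        eqs = []
        for bit in range(N):
            coeffs = anf([(box[v] >> bit) & 1 for v in range(16)])
            monos = ['*'.join(f'{invar}{i}' for i in range(N) if m & (1 << i)) or '1'
                     for m, c in enumerate(coeffs) if c]
            eqs.append(f'{outvar}{bit} = ' + (' + '.join(monos) if monos else '0'))
        return eqs

    print('\n'.join(equations(SBOX, 'x', 'y')))   # forward equations  y_i = f_i(x)
    print('\n'.join(equations(INV, 'y', 'x')))    # backward equations x_i = g_i(y)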
Last updated:  2017-01-11
Reduced Mumford divisors of a genus 2 curve through its jacobian function field
Eduardo Ruiz Duarte
We explore the function field of the Jacobian $J_H$ of a hyperelliptic curve $H$ of genus 2 in order to find reduced coordinates to represent points of $J_H$ and to do arithmetic with them. We show how this relates to the usual Mumford representation of points of $J_H$. Moreover, we identify the open subsets of $J_H$ where our reduced coordinates are defined, characterize the elements that can be reduced, and discuss the group operation in these coordinates.
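For reference, the usual Mumford representation mentioned above: in the standard imaginary model $H: y^2 = f(x)$ with $\deg f = 5$, a reduced divisor class on $J_H$ is written as
\[
[u(x), v(x)], \qquad u \text{ monic}, \quad \deg v < \deg u \le 2, \quad u \mid v^2 - f,
\]
with the neutral element represented by $[1, 0]$.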
Last updated:  2017-01-11
High-speed Hardware Implementations of Point Multiplication for Binary Edwards and Generalized Hessian Curves
Bahram Rashidi, Reza Rezaeian Farashahi, Sayed Masoud Sayedi
In this paper, high-speed hardware architectures of point multiplication based on the Montgomery ladder algorithm for binary Edwards and generalized Hessian curves in Gaussian normal basis are presented. Computations of the point addition and point doubling in the proposed architecture are performed concurrently by pipelined digit-serial finite field multipliers. The multipliers in parallel form are scheduled for a lower number of clock cycles. The structure of the proposed digit-serial Gaussian normal basis multiplier is constructed from regular and low-cost modules for exponentiation by powers of two and multiplication by normal elements. Therefore, the structures are area efficient and have low critical path delay. Implementation results of the proposed architectures on a Virtex-5 XC5VLX110 FPGA show that the execution time of point multiplication for binary Edwards and generalized Hessian curves over $GF(2^{163})$ and $GF(2^{233})$ is 8.62µs and 11.03µs, respectively. The proposed architectures achieve higher performance and speed compared to other works.
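For orientation, the Montgomery ladder referred to above performs one point addition and one point doubling per scalar bit, which is what the architecture executes concurrently. A minimal Python sketch over an abstract group follows; the add/double/identity arguments are placeholders standing in for the binary Edwards or generalized Hessian curve formulas.

    def montgomery_ladder(k, P, add, double, identity):
        """Compute k*P with one add and one double per scalar bit (uniform work per bit)."""
        R0, R1 = identity, P                 # invariant maintained throughout: R1 = R0 + P
        for bit in bin(k)[2:]:               # most-significant bit first
            if bit == '1':
                R0, R1 = add(R0, R1), double(R1)
            else:
                R1, R0 = add(R0, R1), double(R0)
        return R0

    # toy check with integer addition standing in for the group law
    print(montgomery_ladder(201, 1, lambda a, b: a + b, lambda a: 2 * a, 0))   # -> 201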
Last updated:  2017-01-06
A New Approach for Practical Function-Private Inner Product Encryption
Sungwook Kim, Jinsu Kim, Jae Hong Seo
Functional Encryption (FE) is a new paradigm supporting restricted decryption keys for a function $f$ that allow one to learn $f(x_j)$ from encryptions of messages $x_j$. A natural and practical security requirement for FE is to keep not only the messages $x_1,\ldots,x_q$ but also the functions $f_1,\ldots,f_q$ confidential from encryptions and decryption keys, except for the inevitable information $\{f_i(x_j)\}_{i,j\in[q]}$, for any polynomially bounded, a-priori unknown number $q$, where the $f_i$'s and $x_j$'s are adaptively chosen by adversaries. This security requirement is called {\em full function privacy}. In this paper, we particularly focus on function-private FE for the inner product functionality in the {\em private key setting} (simply called Inner Product Encryption (IPE)). To the best of our knowledge, there are two approaches to fully function-private IPE schemes in the private key setting. One is to employ a general transformation from (non-function-private) FE for general circuits (Brakerski and Segev, TCC 2015). This approach requires heavy crypto tools such as indistinguishability obfuscation (for non-function-private FE for general circuits) and is therefore inefficient. The other approach is relatively practical; it directly constructs IPE schemes by using {\em dual pairing vector spaces (DPVS)} (Bishop et al. ASIACRYPT 2015, Datta et al. PKC 2016, and Tomida et al. ISC 2016). We present a new approach for practical function-private IPE schemes that does not employ DPVS but instead generalizes the Brakerski-Segev transformation. Our generalizations of the Brakerski-Segev transformation are easily combinable with existing (non-function-private) IPE schemes as well as (non-function-private) FE schemes for general circuits at several levels of security. Our resulting IPE schemes achieve better performance than the Bishop et al. IPE scheme as well as the Datta et al. IPE scheme while preserving the same security notion under the same complexity assumption. In comparison with the Tomida et al. IPE scheme, ours has comparable performance in the size of both the ciphertext and the decryption key, but better performance in the size of the master key.
Last updated:  2019-11-14
The STROBE protocol framework
Mike Hamburg
The “Internet of Things” (IoT) promises ubiquitous, cheap, connected devices. Unfortunately, most of these devices are hastily developed and will never receive code updates. Part of the IoT’s security problem is cryptographic, but established cryptographic solutions seem too heavy or too inflexible to adapt to new use cases. Here we describe Strobe, a new lightweight framework for building both cryptographic primitives and network protocols. Strobe is a sponge construction in the same family as Markku Saarinen’s BLINKER framework. The Strobe framework is simple and extensible. It is suitable for use as a hash, authenticated cipher, pseudorandom generator, and as the symmetric component of a network protocol engine. With an elliptic curve or other group primitive, it also provides a flexible Schnorr signature variant. Strobe can be instantiated with different sponge functions for different purposes. We show how to instantiate Strobe as an instance of NIST’s draft cSHAKE algorithm. We also show a lightweight implementation which is especially suitable for 16- and 32-bit microcontrollers, and also for small but high-speed hardware.
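To convey the flavour of such a framework, here is a toy transcript object in Python in which every output is bound to everything absorbed before it. This is only an analogy built on a plain hash call, not Strobe's actual duplex construction or API; real Strobe maintains a running sponge state and never re-hashes the whole transcript.

    import hashlib

    class Transcript:
        """Toy running-hash transcript: each output depends on every labelled input absorbed so far."""
        def __init__(self, proto_label: bytes):
            self.log = bytearray(proto_label)

        def absorb(self, label: bytes, data: bytes):
            # length-prefix label and data so the transcript parses unambiguously
            self.log += len(label).to_bytes(4, 'big') + label
            self.log += len(data).to_bytes(4, 'big') + data

        def squeeze(self, label: bytes, n: int) -> bytes:
            out = hashlib.shake_256(bytes(self.log) + label).digest(n)
            self.absorb(b'squeezed:' + label, out)     # feed the output back into the transcript
            return out

    t = Transcript(b'example-protocol v1')
    t.absorb(b'key', b'shared secret')
    t.absorb(b'msg', b'hello')
    print(t.squeeze(b'mac', 16).hex())                 # a 16-byte tag bound to the whole transcript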
Last updated:  2017-04-05
Generalized Tweakable Even-Mansour Cipher with Strong Security Guarantee and Its Application to Authenticated Encryption
Ping Zhang, Honggang Hu, Peng Wang
We present a generalized tweakable blockcipher HPH, which is constructed from a public random permutation $P$ and an almost-XOR-universal (AXU) hash function $H$ with a tweak and key schedule $(t_1,t_2,K)\in \mathcal{T}\times \mathcal{K}$, and defined as $y=HPH_K((t_1,t_2),x)=P(x\oplus H_K(t_1))\oplus H_K(t_2)$, where the key $K$ is chosen from a key space $\mathcal{K}$, the tweak $(t_1,t_2)$ is chosen from a tweak space $\mathcal{T}$, $x$ is a plaintext, and $y$ is a ciphertext. We prove that HPH is a secure strong tweakable pseudorandom permutation (STPRP) by using the H-coefficients technique. Then we focus on the security of HPH against multi-key and related-key attacks. We prove that HPH achieves multi-key-STPRP (MK-STPRP) security and that HPH with related-key-AXU hash functions achieves related-key-STPRP (RK-STPRP) security, and we derive tight bounds for each. HPH lends itself to a wide range of applications; it can be directly applied to authentication and authenticated-encryption modes. We apply HPH to PMAC1 and OPP, obtaining two improved modes, HPMAC and OPH, and prove that they are single-key-secure, multi-key-secure, and related-key-secure.
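A minimal sketch of the HPH data flow $y = P(x \oplus H_K(t_1)) \oplus H_K(t_2)$ in Python, with a toy 16-bit public permutation and a keyed hash standing in for the AXU family; the stand-in is not actually AXU and the parameters are illustrative only.

    import hashlib, random

    W = 16                                             # toy block width in bits
    random.seed(0)
    PERM = list(range(1 << W)); random.shuffle(PERM)   # fixed public random permutation P

    def P(x):
        return PERM[x]

    def H(key: bytes, tweak: bytes) -> int:
        # stand-in for an AXU hash family (a real instantiation could use a polynomial hash)
        return int.from_bytes(hashlib.blake2s(tweak, key=key, digest_size=2).digest(), 'big')

    def hph_encrypt(key: bytes, t1: bytes, t2: bytes, x: int) -> int:
        return P(x ^ H(key, t1)) ^ H(key, t2)

    def hph_decrypt(key: bytes, t1: bytes, t2: bytes, y: int) -> int:
        return PERM.index(y ^ H(key, t2)) ^ H(key, t1)

    k = b'sixteen byte key'
    c = hph_encrypt(k, b'tweak-1', b'tweak-2', 0x1234)
    assert hph_decrypt(k, b'tweak-1', b'tweak-2', c) == 0x1234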
Last updated:  2017-01-03
Equivalences and Black-Box Separations of Matrix Diffie-Hellman Problems
Jorge Luis Villar
In this paper we provide new algebraic tools to study the relationship between different Matrix Diffie-Hellman (MDDH) problems, which were recently introduced as a natural generalization of the so-called Linear Problem. Namely, we provide an algebraic criterion to decide whether there exists a generic black-box reduction, and in many cases, when the answer is positive, we also build an explicit reduction with the following properties: it makes only a single oracle call, it is tight, and it uses only operations in the base group. It is well known that two MDDH problems described by matrices with different numbers of rows are separated by an oracle computing a certain multilinear map. Thus, we put the focus on MDDH problems of the same size. Then, we show that MDDH problems described with a different number of parameters are also separated (meaning that a successful reduction cannot decrease the amount of randomness used in the problem instance description). When comparing MDDH problems of the same size and number of parameters, we show that they are either equivalent or incomparable. This suggests that a complete classification into equivalence classes could be done in the future. In this paper we give some positive and negative partial results about equivalence, in particular solving the open problem of whether the Linear and the Cascade MDDH problems are reducible to each other. The results given in the paper are limited by some technical restrictions on the shape of the matrices and on the degree of the polynomials defining them. However, these restrictions are also present in most of the work dealing with MDDH problems. Therefore, our results apply to all known instances of practical interest.
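For readers outside this line of work, the Matrix DDH problem in question is usually stated as follows (in implicit notation $[a] = g^a$): for a matrix distribution $\mathcal{D}_{\ell,k}$ over $\mathbb{Z}_q^{\ell \times k}$ with $\ell > k$, distinguish
\[
([\mathbf{A}], [\mathbf{A}\mathbf{w}]) \quad \text{from} \quad ([\mathbf{A}], [\mathbf{u}]), \qquad \mathbf{A} \leftarrow \mathcal{D}_{\ell,k},\ \mathbf{w} \leftarrow \mathbb{Z}_q^{k},\ \mathbf{u} \leftarrow \mathbb{Z}_q^{\ell};
\]
the number of rows, the matrix size, and the number of random parameters describing $\mathbf{A}$ are the quantities compared in the reductions and separations above.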