All papers in 2015 (Page 4 of 1255 results)

Last updated:  2017-05-18
Delegating RAM Computations
Yael Tauman Kalai, Omer Paneth
In the setting of cloud computing a user wishes to delegate its data, as well as computations over this data, to a cloud provider. Each computation may read and modify the data, and these modifications should persist between computations. Minding the computational resources of the cloud, delegated computations are modeled as RAM programs. In particular, the delegated computations' running time may be sub-linear, or even exponentially smaller than the memory size. We construct a two-message protocol for delegating RAM computations to an untrusted cloud. In our protocol, the user saves a short digest of the delegated data. For every delegated computation, the cloud returns, in addition to the computation's output, the digest of the modified data, and a proof that the output and digest were computed correctly. When delegating a T-time RAM computation M with security parameter k, the cloud runs in time Poly(T,k) and the user in time Poly(|M|, log T, k). Our protocol is secure assuming the super-polynomial hardness of the Learning with Errors (LWE) problem. Security holds even when the delegated computations are chosen adaptively as a function of the data and outputs of previous computations. We note that RAM delegation schemes are an improved variant of memory delegation schemes [Chung et al. CRYPTO 2011]. In memory delegation, computations are modeled as Turing machines, and therefore, the cloud's work always grows with the size of the delegated data.
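The abstract leaves the digest abstract; a standard way to realize a short digest of large delegated memory (not necessarily this paper's construction) is a Merkle hash tree, sketched below. The `merkle_root` helper and the cell values are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Digest of a memory array: hash leaves, then hash pairwise up to a root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The user stores only the 32-byte root; after the cloud updates a cell, it
# returns the new root plus a sibling path proving the update was applied.
memory = [b"cell0", b"cell1", b"cell2", b"cell3"]
digest = merkle_root(memory)
memory[2] = b"modified"
new_digest = merkle_root(memory)
assert digest != new_digest and len(digest) == 32
```

The point of the sketch is that the digest is constant-size regardless of how large the memory grows, matching the sub-linear user time in the abstract.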
Last updated:  2016-09-19
Analysis of the Kupyna-256 Hash Function
Christoph Dobraunig, Maria Eichlseder, Florian Mendel
The hash function Kupyna was recently published as the Ukrainian standard DSTU 7564:2014. It is structurally very similar to the SHA-3 finalist Grøstl, but differs in details of the round transformations. Most notably, some of the round constants are added with a modular addition, rather than bitwise xor. This change prevents a straightforward application of some recent attacks, in particular of the rebound attacks on the compression function of similar AES-like hash constructions. However, we show that it is actually possible to mount rebound attacks, despite the presence of modular constant additions. More specifically, we describe collision attacks on the compression function for 6 (out of 10) rounds of Kupyna-256 with an attack complexity of 2^{70}, and for 7 rounds with complexity 2^{125.8}. In addition, we have been able to use the rebound attack for creating collisions for the round-reduced hash function itself. This is possible for 4 rounds of Kupyna-256 with complexity 2^{67} and for 5 rounds with complexity 2^{120}.
Last updated:  2017-02-18
On the Power of Pair Encodings: Frameworks for Predicate Cryptographic Primitives
Mridul Nandi, Tapas Pandit
Recently Attrapadung (Eurocrypt 2014) proposed a generic framework for fully (adaptively) secure predicate encryption (PE) based on a new primitive, called pair encodings. The author shows that if the underlying pair encoding scheme is either perfectly secure or computationally (doubly-selectively) secure, then the PE scheme will be fully secure. Although pair encodings were introduced solely for PE, we show that they can also be used to construct predicate signatures, a signature analogue of PE. More precisely, we propose a generic construction of predicate signatures (PS) from pair encoding schemes. Our construction provides signer privacy and unforgeability in the adaptive-predicate model. Thereafter, we instantiate many PS schemes with new results, e.g., the first practical PS schemes for regular languages, the first attribute-based signature (ABS) scheme with constant-size signatures in the adaptive-predicate model, unbounded ABS with large universes in the key-policy flavor, etc. Following the CCA conversions of Yamada et al. (PKC 2011, 2012) and Nandi et al. (ePrint Archive: 2015/457), one can obtain CCA-secure PE from CPA-secure PE if the primitive PE has either verifiability or delegation. We show that the fully secure CPA construction of Attrapadung possesses verifiability. This approach, however, degrades the performance of the resultant CCA-secure PE scheme. As an alternative, we provide a direct fully secure CCA construction for PE from pair encoding schemes. Compared to the CPA construction of Attrapadung, this costs one extra group-element computation in encryption and three extra pairing computations in decryption. Predicate signcryption (PSC) is a superclass of the existing class of attribute-based signcryption (ABSC), in which confidentiality, unforgeability and signer privacy are all preserved. By combining the proposed frameworks for PS and PE, we provide a generic construction of PSC from pair encodings.
It achieves perfect privacy, as well as strong unforgeability and CCA security in the adaptive-predicates model. The construction supports a combined setup, where the distributions of public parameters and keys in the underlying signature and encryption schemes are identical. The proposed PSC yields many new results, e.g., the first PSC schemes for regular languages, the first ABSC with constant-size signcryptions and constant-size keys respectively, and unbounded ABSC with large universes in the adaptive-predicates model.
Last updated:  2017-06-12
Online-Offline Homomorphic Signatures for Polynomial Functions
Kaoutar Elkhiyaoui, Melek Önen, Refik Molva
The advent of cloud computing has given rise to a plethora of work on verifiable delegation of computation. Homomorphic signatures are powerful tools that can be tailored for verifiable computation, as long as they are efficiently verifiable. The main advantages of homomorphic signatures for verifiable computation are twofold: (i) any third party can verify the correctness of the delegated computation, and (ii) this third party is not required to have access to the dataset on which the computation was performed. In this paper, we design a homomorphic signature suitable for multivariate polynomials of bounded degree, which draws upon the algebraic properties of \emph{eigenvectors} and \emph{leveled multilinear maps}. The proposed signature yields an efficient verification process (in an amortized sense) and supports online-offline signing. Furthermore, our signature is provably secure and its size grows only linearly with the degree of the evaluated polynomial.
Last updated:  2015-12-08
Gaussian Sampling Precision in Lattice Cryptography
Markku-Juhani O. Saarinen
Security parameters and attack countermeasures for lattice-based cryptosystems have not yet matured to the level that we now expect from RSA and elliptic curve implementations. Many modern Ring-LWE and other lattice-based public key algorithms require high precision random sampling from the Discrete Gaussian distribution. The sampling procedure often represents the biggest implementation bottleneck due to its memory and computational requirements. We examine the stated requirements of precision for Gaussian samplers, where the statistical distance to the theoretical distribution is typically expected to be below $2^{-90}$ or $2^{-128}$ for the 90- or 128-``bit'' security level. We argue that such precision is excessive and give precise theoretical arguments for why half of the precision of the security parameter is almost always sufficient. This leads to faster and more compact implementations, almost halving implementation size in both hardware and software. We further propose new experimental parameters for practical Gaussian samplers for use in lattice cryptography.
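To see how sampler precision translates into statistical distance, the sketch below tabulates a small discrete Gaussian and measures the distance incurred by rounding its probabilities to a given number of bits. The parameters (sigma = 3.2, tail cut tau = 10) are illustrative choices, not the paper's.

```python
from math import exp

def dgauss_table(sigma, tau=10):
    """Normalized discrete Gaussian probabilities on [-tau*sigma, tau*sigma]."""
    lim = int(tau * sigma)
    w = [exp(-x * x / (2 * sigma * sigma)) for x in range(-lim, lim + 1)]
    s = sum(w)
    return [p / s for p in w]

def truncate(probs, bits):
    """Round each probability down to `bits` fractional bits, then renormalize."""
    q = [int(p * 2**bits) / 2**bits for p in probs]
    s = sum(q)
    return [p / s for p in q]

def stat_dist(p, q):
    """Total variation distance between two distributions on the same support."""
    return sum(abs(a - b) for a, b in zip(p, q)) / 2

p = dgauss_table(sigma=3.2)
for bits in (16, 32, 48):
    print(bits, stat_dist(p, truncate(p, bits)))
```

Each extra bit of table precision roughly halves the achievable statistical distance, which is the quantity the precision requirements in the abstract are stated in.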
Last updated:  2017-10-24
Commitment and Oblivious Transfer in the Bounded Storage Model with Errors
Rafael Dowsley, Felipe Lacerda, Anderson C. A. Nascimento
In the bounded storage model the memory of the adversary is restricted, instead of its computational power. With this different restriction it is possible to design protocols with information-theoretical (instead of only computational) security. We present the first protocols for commitment and oblivious transfer in the bounded storage model with errors, i.e., the model where the public random sources available to the two parties are not exactly the same, but instead are only required to have a small Hamming distance between themselves. Commitment and oblivious transfer protocols were known previously only for the error-free variant of the bounded storage model, which is harder to realize.
Last updated:  2015-10-05
Nearly Optimal Robust Secret Sharing
Mahdi Cheraghchi
We prove that a known approach to improve Shamir's celebrated secret sharing scheme, namely adding an information-theoretic authentication tag to the secret, can make it robust for $n$ parties against any collusion of size $\delta n$, for any constant $\delta \in (0, 1/2)$. This result holds in the so-called "non-rushing" model in which the $n$ shares are submitted simultaneously for reconstruction. We thus obtain an efficient and robust secret sharing scheme in this model that is essentially optimal in all parameters including the share size which is $k(1+o(1)) + O(\kappa)$, where $k$ is the secret length and $\kappa$ is the security parameter. Like Shamir's scheme, in this modified scheme any set of more than $\delta n$ honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed-Solomon codes, we decrease the share length to a constant (only depending on $\delta$) while the number of shares $n$ can grow independently. In this case, when $n$ is large enough, the scheme satisfies the "threshold" requirement in an approximate sense; i.e., any set of $\delta n(1+\rho)$ honest parties, for arbitrarily small $\rho > 0$, can efficiently reconstruct the secret.
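The approach described, Shamir sharing plus an information-theoretic tag, can be sketched in toy form as follows. The tag $t = a \cdot s + b$ over a Mersenne prime field is a standard one-time MAC chosen here purely for illustration, not the paper's exact construction or parameters.

```python
import random

P = 2**31 - 1  # a Mersenne prime; field size chosen for illustration only

def share(secret, t, n):
    """Shamir: evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

# Authenticated variant (illustrative): share the secret s together with a
# one-time MAC key (a, b) and the tag t = a*s + b mod P; reconstruction
# re-checks the tag, so a substituted secret is caught except with prob 1/P.
s = 123456789
a, b = random.randrange(1, P), random.randrange(P)
tag = (a * s + b) % P
shares = [share(v, t=3, n=5) for v in (s, a, b, tag)]
picked = [[sh[i] for sh in shares] for i in range(3)]  # any 3 of the 5 parties
s2, a2, b2, t2 = (reconstruct([row[k] for row in picked]) for k in range(4))
assert s2 == s and (a2 * s2 + b2) % P == t2
```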
Last updated:  2015-09-30
A Compiler of Two-Party Protocols for Composable and Game-Theoretic Security, and Its Application to Oblivious Transfer
Shota Goto, Junji Shikata
In this paper, we consider the following question: Does composing protocols having game-theoretic security result in a secure protocol in the sense of game-theoretic security? In order to discuss the composability of game-theoretic properties, we study security of cryptographic protocols in terms of the universal composability (UC) and game theory simultaneously. The contribution of this paper is the following: (i) We propose a compiler of two-party protocols in the local universal composability (LUC) framework such that it transforms any two-party protocol secure against semi-honest adversaries into a protocol secure against malicious adversaries in the LUC framework; (ii) We consider the application of our compiler to oblivious transfer (OT) protocols, by which we obtain a construction of OT meeting both UC security and game-theoretic security.
Last updated:  2016-01-25
Private Processing of Outsourced Network Functions: Feasibility and Constructions
Luca Melis, Hassan Jameel Asghar, Emiliano De Cristofaro, Mohamed Ali Kaafar
Aiming to reduce the cost and complexity of maintaining networking infrastructures, organizations are increasingly outsourcing their network functions (e.g., firewalls, traffic shapers and intrusion detection systems) to the cloud, and a number of industrial players have started to offer network function virtualization (NFV)-based solutions. Alas, outsourcing network functions in its current setting implies that sensitive network policies, such as firewall rules, are revealed to the cloud provider. In this paper, we investigate the use of cryptographic primitives for processing outsourced network functions, so that the provider does not learn any sensitive information. More specifically, we present a cryptographic treatment of privacy-preserving outsourcing of network functions, introducing security definitions as well as an abstract model of generic network functions, and then propose a few instantiations using partial homomorphic encryption and public-key encryption with keyword search. We include a proof-of-concept implementation of our constructions and show that network functions can be privately processed by an untrusted cloud provider in a few milliseconds.
Last updated:  2015-09-28
A Provably Secure Short Signature Scheme from Coding Theory
Maryam Rajabzadeh Asaar, Mahmoud Salmasizadeh, Mohammad Reza Aref
Signatures with partial message recovery, in which some parts of the message are not transmitted with the signature in order to shorten it, are useful where bandwidth is a crucial concern, especially for signing short messages in applications such as time stamping, certified email services and identity-based cryptosystems. In this paper, to obtain quantum-attack-resistant short signatures, a signature scheme with partial message recovery based on coding theory is proposed. The security of the proposed scheme is proved under the Goppa Parametrized Bounded Decoding and the Goppa Code Distinguishing assumptions in the random oracle model. Thanks to the partial message recovery property, the proposal is shorter than the Dallot signature scheme, the only provably secure and practical code-based signature scheme. We should highlight that our scheme can serve as a building block for code-based signature schemes with additional properties, since compared to the Dallot signature scheme it not only improves the communication overhead but also preserves the signature efficiency.
Last updated:  2015-09-28
Weave ElGamal Encryption for Secure Outsourcing Algebraic Computations over Zp
Yi-Ruei Chen, Shiuan-Tzuo Shen, Wen-Guey Tzeng
This paper addresses the secure outsourcing problem for large-scale matrix computation to a public cloud. We propose a novel public-key weave ElGamal encryption (WEE) scheme for encrypting a matrix over the field Zp. The scheme has the echelon transformation property: we can apply a series of elementary row/column operations to transform an encrypted matrix under our WEE scheme into row/column echelon form, and the decrypted result matches the result of the corresponding operations performed on the original matrix. For security, our WEE scheme is shown to be entry irrecoverable for non-zero entries under the computational Diffie-Hellman assumption. Using our WEE scheme, we propose five secure outsourcing protocols: Gaussian elimination, Gauss-Jordan elimination, matrix determinant, linear system solving, and matrix inversion. Each of these protocols preserves data privacy for clients (data owners). Furthermore, the linear system solver and matrix inversion protocols provide a cheating-resistance mechanism to verify the correctness of computation results. Our experimental results show that our protocols gain significant efficiency for the outsourcer. Our outsourcing protocol solves a linear system of n = 1,000 equations in m = 1,000 unknown variables about 472 times faster than a non-outsourced version, and the efficiency gain grows with (n, m): for example, when n = 10,000 and m = 10,000, the protocol solves it about 56,274 times faster. Our protocols can also be easily implemented on parallel computation architectures for further efficiency improvement.
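The WEE scheme itself is beyond a short sketch, but the underlying idea of outsourcing a linear system under a transformation can be illustrated as follows: the client blinds A and b with random diagonal matrices P and Q, the server solves the blinded system, and the client unblinds. Diagonal masking is far weaker than the paper's encryption (it leaks the zero pattern of A); it is shown only to illustrate how elementary operations commute with the blinding.

```python
from fractions import Fraction
import random

def solve(A, b):
    """Plain Gauss-Jordan elimination over exact rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def outsource_solve(A, b):
    """Client blinds A, b with random diagonal P, Q; the 'server' solves
    (P A Q) y = P b; the client recovers x = Q y."""
    n = len(A)
    P = [Fraction(random.randrange(1, 100)) for _ in range(n)]
    Q = [Fraction(random.randrange(1, 100)) for _ in range(n)]
    A_masked = [[P[i] * A[i][j] * Q[j] for j in range(n)] for i in range(n)]
    b_masked = [P[i] * b[i] for i in range(n)]
    y = solve(A_masked, b_masked)           # done by the untrusted server
    return [Q[j] * y[j] for j in range(n)]  # unblinding, done by the client

A = [[Fraction(v) for v in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 4]]]
b = [Fraction(v) for v in (3, 5, 9)]
assert outsource_solve(A, b) == solve(A, b)
```

The client's work is linear in the matrix size while the server carries the cubic elimination cost, which is the source of the speedups reported above.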
Last updated:  2016-10-27
Equihash: Asymmetric Proof-of-Work Based on the Generalized Birthday Problem
Alex Biryukov, Dmitry Khovratovich
Proof-of-work is a central concept in modern cryptocurrencies and denial-of-service protection tools, but the requirement for fast verification has so far made it easy prey for GPU-, ASIC-, and botnet-equipped users. The attempts to rely on memory-intensive computations in order to remedy the disparity between architectures have resulted in slow or broken schemes. In this paper we solve this open problem and show how to construct an asymmetric proof-of-work (PoW) based on a computationally hard problem, which requires a lot of memory to generate a proof (the "memory-hardness" feature) but is instant to verify. Our primary proposal, Equihash, is a PoW based on the generalized birthday problem and an enhanced version of Wagner's algorithm for it. We introduce the new technique of algorithm binding to prevent cost amortization and demonstrate that possible parallel implementations are constrained by memory bandwidth. Our scheme has tunable and steep time-space tradeoffs, which impose large computational penalties if less memory is used. Our solution is practical and ready to deploy: a reference implementation of a proof-of-work requiring 700 MB of RAM runs in 15 seconds on a 2.1 GHz CPU, increases the computations by a factor of 1000 if memory is halved, and produces a proof just 120 bytes long.
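The generalized birthday problem at Equihash's core can be illustrated with a toy 4-list instance of Wagner's algorithm: merge pairs of lists on the low-order bits, then match the partial XORs. The parameters below are illustrative and far smaller than Equihash's; note how the list tables are what forces memory usage.

```python
import random

def wagner_4xor(bits=20, seed=1):
    """Toy 4-list generalized birthday: find one element from each of four
    random lists whose XOR is zero, merging pairs on the low bits//2 bits."""
    rng = random.Random(seed)
    n = 1 << (bits // 2 + 1)               # list size ~ 2^{bits/2}
    lists = [[rng.getrandbits(bits) for _ in range(n)] for _ in range(4)]

    def merge(L1, L2, mask):
        """All pairs (a, b) with (a ^ b) & mask == 0."""
        table = {}
        for a in L1:
            table.setdefault(a & mask, []).append(a)
        return [(a, b) for b in L2 for a in table.get(b & mask, [])]

    low = (1 << (bits // 2)) - 1
    L12 = merge(lists[0], lists[1], low)    # pairs colliding on the low bits
    L34 = merge(lists[2], lists[3], low)
    xors = {}
    for a, b in L12:
        xors.setdefault(a ^ b, (a, b))
    for c, d in L34:
        if (c ^ d) in xors:
            a, b = xors[c ^ d]
            return a, b, c, d               # a ^ b ^ c ^ d == 0
    return None

sol = wagner_4xor()
if sol:
    a, b, c, d = sol
    assert a ^ b ^ c ^ d == 0
```

Each merge step halves the number of unmatched bits while keeping list sizes roughly constant, which is why the full problem is solvable in time and memory about $2^{bits/2}$ rather than $2^{bits}$.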
Last updated:  2015-09-28
Secure Set-based Policy Checking and Its Application to Password Registration
Changyu Dong, Franziskus Kiefer
Policies are the cornerstones of today's computer systems. They define secure states and safe operations. A common problem with policies is that their enforcement is often in conflict with user privacy. In order to check the satisfiability of a policy, a server usually needs to collect from a client some information which may be private. In this work we introduce the notion of secure set-based policy checking (SPC) that allows the server to verify policies while preserving the client's privacy. SPC is a generic protocol that can be applied in many policy-based systems. As an example, we show how to use SPC to build a password registration protocol so that a server can check whether a client's password is compliant with its password policy without seeing the password. We also analyse SPC and the password registration protocol and provide security proofs. To demonstrate the practicality of the proposed primitives, we report performance evaluation results based on a prototype implementation of the password registration protocol.
Last updated:  2016-04-21
New Complexity Trade-Offs for the (Multiple) Number Field Sieve Algorithm in Non-Prime Fields
Palash Sarkar, Shashank Singh
The selection of polynomials to represent number fields crucially determines the efficiency of the Number Field Sieve (NFS) algorithm for solving the discrete logarithm in a finite field. An important recent work due to Barbulescu et al. builds upon existing works to propose two new methods for polynomial selection when the target field is a non-prime field. These methods are called the generalised Joux-Lercier (GJL) and the Conjugation methods. In this work, we propose a new method (which we denote as $\mathcal{A}$) for polynomial selection for the NFS algorithm in fields $\mathbb{F}_{Q}$, with $Q=p^n$ and $n>1$. The new method both subsumes and generalises the GJL and the Conjugation methods and provides new trade-offs for both $n$ composite and $n$ prime. Let us denote the variant of the (multiple) NFS algorithm using the polynomial selection method ``{X}" by (M)NFS-{X}. Asymptotic analysis is performed for both the NFS-$\mathcal{A}$ and the MNFS-$\mathcal{A}$ algorithms. In particular, when $p=L_Q(2/3,c_p)$, for $c_p\in [3.39,20.91]$, the complexity of NFS-$\mathcal{A}$ is better than the complexities of all previous algorithms whether classical or MNFS. The MNFS-$\mathcal{A}$ algorithm provides lower complexity compared to NFS-$\mathcal{A}$ algorithm; for $c_p\in (0, 1.12] \cup [1.45,3.15]$, the complexity of MNFS-$\mathcal{A}$ is the same as that of the MNFS-Conjugation and for $c_p\notin (0, 1.12] \cup [1.45,3.15]$, the complexity of MNFS-$\mathcal{A}$ is lower than that of all previous methods.
Last updated:  2015-09-30
Sandy2x: New Curve25519 Speed Records
Tung Chou
This paper sets speed records on well-known Intel chips for the Curve25519 elliptic-curve Diffie-Hellman scheme and the Ed25519 digital signature scheme. In particular, it takes only 159 128 Sandy Bridge cycles or 156 995 Ivy Bridge cycles to compute a Diffie-Hellman shared secret, while the previous records are 194 036 Sandy Bridge cycles or 182 708 Ivy Bridge cycles. There have been many papers analyzing elliptic-curve speeds on Intel chips, and they all use Intel’s serial 64 x 64 -> 128-bit multiplier for field arithmetic. These papers have ignored the 2-way vectorized 32 x 32 -> 64-bit multiplier on Sandy Bridge and Ivy Bridge: it seems obvious that the serial multiplier is faster. However, this paper uses the vectorized multiplier. This is the first speed record set for elliptic-curve cryptography using a vectorized multiplier on Sandy Bridge and Ivy Bridge. Our work suggests that the vectorized multiplier might be a better choice for elliptic-curve computation, or even other types of computation that involve prime-field arithmetic, even when the computation does not exhibit very nice internal parallelism.
Last updated:  2021-07-10
Ballot secrecy: Security definition, sufficient conditions, and analysis of Helios
Ben Smyth
We propose a definition of ballot secrecy as an indistinguishability game in the computational model of cryptography. Our definition improves upon earlier definitions to ensure ballot secrecy is preserved in the presence of an adversary that controls ballot collection. We also propose a definition of ballot independence as an adaptation of an indistinguishability game for asymmetric encryption. We prove relations between our definitions. In particular, we prove ballot independence is sufficient for ballot secrecy in voting systems with zero-knowledge tallying proofs. Moreover, we prove that building systems from non-malleable asymmetric encryption schemes suffices for ballot secrecy, thereby eliminating the expense of ballot-secrecy proofs for a class of encryption-based voting systems. We demonstrate applicability of our results by analysing the Helios voting system and its mixnet variant. Our analysis reveals that Helios does not satisfy ballot secrecy in the presence of an adversary that controls ballot collection. The vulnerability cannot be detected by earlier definitions of ballot secrecy, because they do not consider such adversaries. We adopt non-malleable ballots as a fix and prove that the fixed system satisfies ballot secrecy.
Last updated:  2015-10-08
Cryptanalysis of the New Multilinear Map over the Integers
Brice Minaud, Pierre-Alain Fouque
This article describes a polynomial attack on the new multilinear map over the integers presented by Coron, Lepoint and Tibouchi at CRYPTO 2015 (CLT15). This version is a fix of the first multilinear map over the integers presented by the same authors at CRYPTO 2013 (CLT13) and broken by Cheon et al. at EUROCRYPT 2015. The attack essentially downgrades CLT15 to its original version CLT13, and leads to a full break of the multilinear map for virtually all applications. In addition to the main attack, we present an alternate probabilistic attack underpinned by a different technique, as well as an instant-time attack on the optimized variant of the scheme.
Last updated:  2015-09-28
Secure Association for the Internet of Things
Almog Benin, Sivan Toledo, Eran Tromer
Existing standards (ZigBee and Bluetooth Low Energy) for networked low-power wireless devices do not support secure association (or pairing) of new devices into a network: their association process is vulnerable to man-in-the-middle attacks. This paper addresses three essential aspects in attaining secure association for such devices. First, we define a user-interface primitive, oblivious comparison (OC), that allows users to approve authentic associations and abort compromised ones. This distills and generalizes several existing approve/abort mechanisms, and moreover we experimentally show that OC can be implemented using very little hardware: one LED and one switch. Second, we provide a new Message Recognition Protocol (MRP) that allows devices associated using oblivious comparison to exchange authenticated messages without the use of public-key cryptography (which exceeds the capabilities of many IoT devices). This protocol improves upon previously proposed MRPs in several respects. Third, we propose a robust definition of security for MRPs that is based on universal composability, and show that our MRP satisfies this definition.
Last updated:  2016-02-18
A Decade of Lattice Cryptography
Chris Peikert
\emph{Lattice-based cryptography} is the use of conjectured hard problems on point lattices in~$\mathbb{R}^{n}$ as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to \emph{quantum} attacks (in contrast with most number-theoretic cryptography), high asymptotic efficiency and parallelism, security under \emph{worst-case} intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational \emph{short integer solution}~(SIS) and \emph{learning with errors}~(LWE) problems (and their more efficient ring-based variants), their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications.
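A minimal sketch of the LWE problem the survey is built around: samples $(a_i, \langle a_i, s \rangle + e_i \bmod q)$ hide the secret $s$ only because of the small error terms. The modulus and dimensions below are toy values chosen for illustration.

```python
import random

def lwe_samples(n=8, q=3329, m=16, noise=2, seed=0):
    """Toy LWE instance: secret s in Z_q^n and m samples (a_i, <a_i, s> + e_i)."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randint(-noise, noise)          # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_samples()
# Without the error e, each b would give a clean linear equation in s, and
# Gaussian elimination would recover it; the noise is what makes LWE hard.
for a, b in samples:
    err = (b - sum(ai * si for ai, si in zip(a, s))) % 3329
    assert min(err, 3329 - err) <= 2
```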
Last updated:  2015-12-21
Lattice Based Cryptography for Beginners
Dong Pyo Chi, Jeong Woon Choi, Jeong San Kim, Taewan Kim
The purpose of this lecture note is to introduce lattice-based cryptography, which is thought to be a cryptosystem of the post-quantum age. We have tried to give as many details as possible, especially for novices on the subject. Something may be trivial to an expert but not to a novice. Many fundamental problems about lattices are thought to be hard even for quantum computers, in contrast to the factorization problem, which can be solved easily on a quantum computer via the celebrated Shor factorization quantum algorithm. The first part of our presentation is based on the slides of Chris Peikert's 2013 Bonn lecture (crypt@b-it2013). We give a more or less detailed explanation of Professor Peikert's lecture slides; we unfortunately could not attend his Bonn class. We are afraid that there are many mistakes in this note; if any, they are due to our misunderstanding of the material. Part II of our lecture note is on ring LWE, based on the paper ``A Toolkit for Ring-LWE Cryptography" by Lyubashevsky, Peikert and Regev. Part III is about multilinear maps, together with the cryptanalysis of the GGH map due to Hu and Jia. Our presentation follows Professor Steinfeld's lecture slides on GGHLite and the paper by Yupu Hu and Huiwen Jia. When you read this lecture note, the corresponding original papers should be at hand. We thank Professor Jung Hee Cheon for introducing the subject and asking Dong Pyo Chi to give a lecture on it at the department of mathematics at Seoul National University. We also thank Hyeongkwan Kim for much help, especially many corrections and improvements to the manuscript during the 2015 summer session at UNIST. We also thank the students who took the classes at SNU and UNIST. The lecture was given by novices for novices, so many mistakes are unavoidable. If the reader lets us know of any errors, we will very much appreciate it.
Last updated:  2015-11-11
End-to-end Design of a PUF-based Privacy Preserving Authentication Protocol
Aydin Aysu, Ege Gulcan, Daisuke Moriyama, Patrick Schaumont, Moti Yung
We demonstrate a prototype implementation of a provably secure protocol that supports privacy-preserving mutual authentication between a server and a constrained device. Our proposed protocol is based on a physically unclonable function (PUF) and it is optimized for resource-constrained platforms. The reported results include a full protocol analysis, the design of its building blocks, their integration into a constrained device, and finally its performance evaluation. We show how to obtain efficient implementations for each of the building blocks of the protocol, including a fuzzy extractor with a novel helper-data construction technique, a truly random number generator (TRNG), and a pseudo-random function (PRF). The prototype is implemented on a SASEBO-GII board, using the on-board SRAM as the source of entropy for the PUF and the TRNG. We present three different implementations. The first two execute on a MSP430 soft-core processor and have a security level of 64-bit and 128-bit respectively. The third uses a hardware accelerator and has 128-bit security level. To our best knowledge, this work is the first effort to describe the end-to-end design and evaluation of a privacy-preserving PUF-based authentication protocol.
Last updated:  2015-10-09
A Privacy Preserving Improvement for SRTA in Telecare Medicine Information System
Seyed salman Sajjadi GhaemMaghami, Mahtab Mirmohseni, Afrooz Haghbin
Radio Frequency Identification (RFID) is a modern communication technology that provides authentication and identification through nonphysical contact. Recently, the use of this technology has grown in healthcare environments. Although RFID technology can bring benefits to such systems, its privacy and security issues ought to be considered first. Recently, in 2015, Li et al. proposed a hash-based RFID authentication protocol for medication verification in healthcare. In this paper, we study this protocol and show that Li et al.'s protocol is vulnerable to traceability, impersonation and DoS attacks, and thus does not provide the privacy and security of RFID end-users. Therefore, we propose an improved secure and efficient RFID authentication protocol that enhances the performance of Li et al.'s method. Our analysis shows that the existing weaknesses of Li et al.'s protocol are eliminated in our proposed protocol.
Last updated:  2017-01-15
Joint Data and Key Distribution of Simple, Multiple, and Multidimensional Linear Cryptanalysis Test Statistic and Its Impact to Data Complexity
Céline Blondeau, Kaisa Nyberg
The power of a statistical attack is inversely proportional to the number of plaintexts needed to recover information on the encryption key. By analyzing the distribution of the random variables involved in the attack, cryptographers aim to provide a good estimate of the data complexity of the attack. In this paper, we analyze the hypotheses made in simple, multiple, and multidimensional linear attacks that use either non-zero or zero correlations, and provide more accurate estimates of the data complexity of these attacks. This is achieved by taking into consideration, for the first time, the key variance of the statistic for both the right and wrong keys. For the family of linear attacks considered in this paper, we differentiate between the attacks which are performed in the known-plaintext and those in the distinct-known-plaintext model.
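As a point of comparison for the estimates discussed above, a commonly used rule of thumb (a Selçuk-style, advantage-based estimate; exact constants vary across statistical models, which is precisely the kind of gap this paper refines) relates data complexity to correlation as follows:

```python
from statistics import NormalDist

def linear_attack_data_complexity(correlation, success_prob=0.95, advantage_bits=8):
    """Rough rule-of-thumb estimate: N ~ ((z_PS + z_a) / c)^2 known plaintexts
    for success probability PS and an a-bit advantage, given correlation c.
    Exact constants differ across statistical models."""
    z = NormalDist().inv_cdf
    return ((z(success_prob) + z(1 - 2.0 ** -advantage_bits)) / correlation) ** 2

# Halving the correlation quadruples the required data:
n1 = linear_attack_data_complexity(2 ** -10)
n2 = linear_attack_data_complexity(2 ** -11)
assert abs(n2 / n1 - 4) < 1e-9
```

The inverse-square dependence on the correlation is the "inversely proportional" power the abstract refers to; the refinements in the paper concern the constant factors hidden in such estimates.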
Last updated:  2016-01-15
Cryptanalysis of the New CLT Multilinear Maps
Jung Hee Cheon, Changmin Lee, Hansol Ryu
Multilinear maps have many cryptographic applications. The first candidate construction of multilinear maps was proposed by Garg, Gentry, and Halevi (GGH13) in 2013, and soon afterwards, another candidate was suggested by Coron, Lepoint, and Tibouchi (CLT13) that works over the integers. However, both of these were found to be insecure in the face of a so-called zeroizing attack (HJ15, CHL+15). To improve on CLT13, Coron, Lepoint, and Tibouchi proposed another candidate of new multilinear maps over the integers (CLT15). In this paper, we describe an attack against CLT15. Our attack shares the essence of the cryptanalysis of CLT13 and exploits low level encodings of zero, as well as other public parameters. As in CHL+15, this leads to finding all the secret parameters of the $\kappa$-multilinear maps in time polynomial in the security parameter.
Last updated:  2015-09-27
Cryptanalysis of Provably Secure Certificateless Short Signature Scheme
Jayaprakash Kar
Recently, Choi et al. proposed a certificateless short signature scheme in the random oracle model and claimed that it is provably secure. Certificateless public key cryptography is a new paradigm that resolves the inherent key escrow and key management problems. Attacks on certificateless signature schemes come in two types: a Type-I adversary can replace the public key of a user but cannot retrieve the master secret key from the Key Generation Center (KGC), while a Type-II adversary can obtain the master secret key but cannot replace users' public keys. In this paper we prove that the proposed scheme is not secure against a Type-I adversary: by solving a linear Diophantine equation, we obtain the partial private key of the user.
Last updated:  2015-09-27
Using Tweaks To Design Fault Resistant Ciphers
Sikhar Patranabis, Debapriya Basu Roy, Debdeep Mukhopadhyay
Side channel analysis and active fault analysis are now major threats to even mathematically robust cryptographic algorithms that are otherwise resistant to classical cryptanalysis. It is necessary to design suitable countermeasures to protect cryptographic primitives against such attacks. This paper focuses on designing encryption schemes that are innately secure against fault analysis. The paper formally proves that one such design strategy, namely the use of key-dependent S-Boxes, is only partially secure against DFA. The paper then examines the fault tolerance of encryption schemes that use a key-independent secret tweak value for randomization. In particular, the paper focuses on a linear tweak based and a non-linear tweak based version of a recently proposed block cipher DRECON. The paper demonstrates that while both versions are secure against classical DFA, the non-linear tweak based version provides greater fault coverage against stronger fault models. This fact, together with the DPA resistance provided by the use of variable S-Boxes, makes DRECON a strong candidate for the design of secure cryptographic primitives. All claims have been validated by experimental results on a SASEBO GII platform.
Last updated:  2015-09-27
Fast and Secure Three-party Computation: The Garbled Circuit Approach
Payman Mohassel, Mike Rosulek, Ye Zhang
Many deployments of secure multi-party computation (MPC) in practice have used information-theoretic three-party protocols that tolerate a single, semi-honest corrupt party, since these protocols enjoy very high efficiency. We propose a new approach for secure three-party computation (3PC) that improves security while maintaining practical efficiency that is competitive with traditional information-theoretic protocols. Our protocol is based on garbled circuits and provides security against a single, malicious corrupt party. Unlike information-theoretic 3PC protocols, ours uses a constant number of rounds. Our protocol only uses inexpensive symmetric-key cryptography: hash functions, block ciphers, pseudorandom generators (in particular, no oblivious transfers) and has performance that is comparable to that of Yao's (semi-honest) 2PC protocol. We demonstrate the practicality of our protocol with an implementation based on the JustGarble framework of Bellare et al. (S&P 2013). The implementation incorporates various optimizations including the most recent techniques for efficient circuit garbling. We perform experiments on several benchmarking circuits, in different setups. Our experiments confirm that, despite providing a more demanding security guarantee, our protocol has performance comparable to existing information-theoretic 3PC.
Last updated:  2016-04-07
Nearly Sparse Linear Algebra and application to Discrete Logarithms Computations
Antoine Joux, Cécile Pierrot
In this article, we propose a method to perform linear algebra on a matrix with nearly sparse properties. More precisely, although we require the main part of the matrix to be sparse, we allow some dense columns with possibly large coefficients. This is achieved by modifying the Block Wiedemann algorithm. Under some precisely stated conditions on the choices of initial vectors in the algorithm, we show that our variation not only produces a random solution of a linear system but gives a full basis of the set of solutions. Moreover, when the number of heavy columns is small, the cost of dealing with them becomes negligible. In particular, this eases the computation of discrete logarithms in medium and high characteristic finite fields, where nearly sparse matrices naturally occur.
Last updated:  2015-09-27
Are you The One to Share? Secret Transfer with Access Structure
Yongjun Zhao, Sherman S. M. Chow
Sharing information with others is common nowadays, but the question is with whom to share it. To address this problem, we propose the notion of secret transfer with access structure (STAS). STAS is a two-party computation protocol that enables the server to transfer a secret to a client who satisfies the prescribed access structure. In this paper, we focus on the case of STAS for threshold access structures, i.e. threshold secret transfer (TST). We also discuss how to replace the threshold structure with linear secret sharing to make the access structure more expressive. Our proposed TST scheme enables a number of applications, including a simple construction of oblivious transfer with threshold access control and (a variant of) threshold private set intersection (t-PSI), which are the first of their kind in the literature to the best of our knowledge. Moreover, we show that TST is useful in a number of applications such as privacy-preserving matchmaking with interesting features. The underlying primitive of STAS is a variant of oblivious transfer (OT) which we call OT for sparse arrays. We provide two constructions inspired by state-of-the-art PSI techniques, including oblivious polynomial evaluation and garbled Bloom filter (GBF). We implemented the more efficient construction and provide its performance evaluation.
Last updated:  2015-09-27
HLDCA-WSN: Homomorphic Lightweight Data Confidentiality Algorithm for Wireless Sensor Network
Uncategorized
Hassan Noura, Damien Couroussé
Show abstract
Uncategorized
Wireless Sensor Networks (WSNs) have become more and more important in many applications, especially those requiring a high level of security, such as commercial, military and telemedicine applications. However, WSN security suffers from several kinds of attacks (ranging from passive to active). The eavesdropping attack remains the most powerful, since it can compromise the confidentiality of the entire packet content. In this context, several solutions and techniques have been presented in the literature to ensure the secure transmission of packets in a large-scale WSN. Unfortunately, many of these solutions fail to meet the main characteristics of WSNs (limited energy consumption, low power, large bandwidth) and are not efficient candidates for tiny devices. For this reason, a novel homomorphic lightweight security scheme, HLDCA-WSN, based on a dynamic permutation layer performed on a set of packets (denoted a generation), is proposed and discussed in this paper. The HLDCA-WSN scheme overcomes passive attacks and ensures a significant reduction in computational complexity, energy cost, and communication overhead. Moreover, the dynamic property of the proposed scheme adds robustness against traditional and physical attacks. The efficiency of the HLDCA ciphering scheme is demonstrated by an extensive security analysis and simulation results.
Last updated:  2015-09-26
Rich Queries on Encrypted Data: Beyond Exact Matches
Sky Faber, Stanislaw Jarecki, Hugo Krawczyk, Quan Nguyen, Marcel Rosu, Michael Steiner
We extend the searchable symmetric encryption (SSE) protocol of [Cash et al., Crypto'13] adding support for range, substring, wildcard, and phrase queries, in addition to the Boolean queries supported in the original protocol. Our techniques apply to the basic single-client scenario underlying the common SSE setting as well as to the more complex Multi-Client and Outsourced Symmetric PIR extensions of [Jarecki et al., CCS'13]. We provide performance information based on our prototype implementation, showing the practicality and scalability of our techniques to very large databases, thus extending the performance results of [Cash et al., NDSS'14] to these rich and comprehensive query types.
Last updated:  2015-09-25
CRITERION OF MAXIMAL PERIOD OF A TRINOMIAL OVER NONTRIVIAL GALOIS RING OF ODD CHARACTERISTIC
Vadim N. Tsypyschev, Julia S. Vinogradova
In the early eighties of the XX century, A. A. Nechaev obtained a criterion for the full period of a Galois polynomial over the primary residue ring modulo a power of 2. He also obtained necessary conditions for the maximal period of a Galois polynomial over such a ring in terms of the coefficients of this polynomial. A. S. Kuzmin later obtained analogous results for Galois polynomials over primary residue rings of odd characteristic. The first author of this article then carried the full-period criterion of Kuzmin over to Galois polynomials over nontrivial Galois rings of odd characteristic. Using that criterion as a basis, we obtain the criterion presented here. This result is an example of how to apply the results of previous work of V. N. Tsypyschev in order to construct polynomials of maximal period over a nontrivial Galois ring of odd characteristic. It is assumed throughout that the period of the polynomial modulo the prime ideal is known and maximal.
Last updated:  2015-12-10
Exploiting the Order of Multiplier Operands: A Low Cost Approach for HCCA Resistance
Uncategorized
Poulami Das, Debapriya Basu Roy, Debdeep Mukhopadhyay
Show abstract
Uncategorized
Horizontal collision correlation analysis (HCCA) poses a serious threat to simple power analysis resistant elliptic curve cryptosystems involving unified algorithms, e.g. the Edwards curve unified formula. This attack can be mounted even in the presence of differential power analysis resistant randomization schemes. In this paper we design an effective countermeasure for HCCA protection, investigating how the side-channel leakage of a school-book multiplication depends on the underlying multiplier operands. We show how changing the sequence in which the operands are passed to the multiplication algorithm introduces dissimilarity in the information leakage. This disparity has been utilized in constructing a zero-cost countermeasure against HCCA. This countermeasure, integrated with an effective randomization method, has been shown to successfully thwart HCCA. Additionally we provide experimental validation of our proposed countermeasure technique on a SASEBO platform. To the best of our knowledge, this is the first time that asymmetry in information leakage has been utilized in designing a side channel countermeasure.
Last updated:  2015-09-22
Masking Large Keys in Hardware: A Masked Implementation of McEliece
Cong Chen, Thomas Eisenbarth, Ingo von Maurich, Rainer Steinwandt
Instantiations of the McEliece cryptosystem which are considered computationally secure even in a post-quantum era still require hardening against side channel attacks for practical applications. Recently, the first differential power analysis attack on a McEliece cryptosystem successfully recovered the full secret key of a state-of-the-art FPGA implementation of QC-MDPC McEliece. In this work we show how to apply masking countermeasures to the scheme and present the first masked FPGA implementation that includes these countermeasures. We validate the side channel resistance of our design by practical DPA attacks and statistical tests for leakage detection.
Last updated:  2015-09-22
DYNAMIC KEY-AGGREGATE CRYPTOSYSTEM ON ELLIPTIC CURVES FOR ONLINE DATA SHARING
Sikhar Patranabis, Yash Shrivastava, Debdeep Mukhopadhyay
The recent advent of cloud computing and the IoT has made it imperative to have efficient and secure cryptographic schemes for online data sharing. Data owners would ideally want to store their data/files online in an encrypted manner, and delegate decryption rights for some of these to users with appropriate credentials. An efficient and recently proposed solution in this regard is to use the concept of aggregation that allows users to decrypt multiple classes of data using a single key of constant size. In this paper, we propose a secure and dynamic key aggregate encryption scheme for online data sharing that operates on elliptic curve subgroups while allowing dynamic revocation of user access rights. We augment this basic construction to a generalized two-level hierarchical structure that achieves optimal space and time complexities, and also efficiently accommodates extension of data classes. Finally, we propose an extension to the generalized scheme that allows use of efficiently computable bilinear pairings for encryption and decryption operations. Each scheme is formally proven to be semantically secure. Practical experiments have been conducted to validate all claims made in the paper.
Last updated:  2015-09-22
Localised Multisecret Sharing
Uncategorized
Thalia M. Laing, Keith M. Martin, Maura B. Paterson, Douglas R. Stinson
Show abstract
Uncategorized
A localised multisecret sharing scheme is a multisecret sharing scheme for an ordered set of players in which players in the smallest sets who are authorised to access secrets are close together in the underlying ordering. We define threshold versions of localised multisecret sharing schemes, we provide lower bounds on the share size of perfect localised multisecret sharing schemes in an information theoretic setting, and we give explicit constructions of schemes to show that these bounds are tight. We then analyse a range of approaches to relaxing the model that provide trade-offs between the share size and the level of security guarantees provided by the scheme, in order to permit the construction of schemes with smaller shares. We show how these techniques can be used in the context of an application to key distribution for RFID-based supply-chain management motivated by the proposal of Juels, Pappu and Parno from USENIX 2008.
Last updated:  2019-05-07
Identity-Based Revocation from Subset Difference Methods under Simple Assumptions
Kwangsu Lee, Jong Hwan Park
Identity-based revocation (IBR) is a specific kind of broadcast encryption that can effectively send a ciphertext to a set of receivers. In IBR, a ciphertext is associated with a set of revoked users instead of a set of receivers and the maximum number of users in the system can be an exponential value in the security parameter. In this paper, we reconsider the general method of Lee, Koo, Lee, and Park (ESORICS 2014) that constructs a public-key revocation (PKR) scheme by combining the subset difference (SD) method of Naor, Naor, and Lotspiech (CRYPTO 2001) and a single revocation encryption (SRE) scheme. Lee et al. left it as an open problem to construct an SRE scheme under the standard assumption without random oracles. In this work, we first propose a selectively secure SRE scheme under the standard assumption without random oracles. We also propose a fully secure SRE scheme under simple static assumptions without random oracles. Next, we present an efficient IBR scheme derived from the SD method and our SRE scheme. The security of our IBR scheme depends on that of the underlying SRE scheme. Finally, we implemented our SRE and IBR schemes and measured the performance.
Last updated:  2015-09-23
Leakage-Resilient Identification Schemes from Zero-Knowledge Proofs of Storage
Giuseppe Ateniese, Antonio Faonio, Seny Kamara
We provide a framework for constructing leakage-resilient identification(ID) protocols in the bounded retrieval model (BRM) from proofs of storage(PoS) that hide partial information about the file. More precisely, we describe a generic transformation from any zero-knowledge PoS to a leakage-resilient ID protocol in the BRM. We then describe a ZK-PoS based on RSA which, under our transformation, yields the first ID protocol in the BRM based on RSA (in the ROM). The resulting protocol relies on a different computational assumption and is more efficient than previously-known constructions.
Last updated:  2017-02-20
Privacy-preserving computation with trusted computing via Scramble-then-Compute
Uncategorized
Hung Dang, Anh Dinh, Ee-Chien Chang, Beng Chin Ooi
Uncategorized
We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using a generic solution such as ORAM [43], designing efficient privacy-preserving algorithms is challenging. Besides computation efficiency, it is critical to keep trusted code bases lean, for large ones are unwieldy to vet and verify. In this paper, we advocate a simple approach wherein many basic algorithms (e.g., sorting) can be made privacy-preserving by adding a step that securely scrambles the data before feeding it to the original algorithms. We call this approach Scramble-then-Compute (StC), and give a sufficient condition whereby existing external memory algorithms can be made privacy-preserving via StC. This approach facilitates code-reuse, and its simplicity contributes to a smaller trusted code base. It is also general, allowing algorithm designers to leverage an extensive body of known efficient algorithms for better performance. Our experiments show that StC could offer up to 4.1× speedups over known, application-specific alternatives.
Last updated:  2015-09-22
Finding State Collisions in the Authenticated Encryption Stream Cipher ACORN
Md Iftekhar Salam, Kenneth Koon-Ho Wong, Harry Bartlett, Leonie Simpson, Ed Dawson, Josef Pieprzyk
This paper analyzes the authenticated encryption algorithm ACORN, a candidate in the CAESAR cryptographic competition. We identify weaknesses in the state update function of ACORN which result in collisions in the internal state of ACORN. This paper shows that for a given set of key and initialization vector values we can construct two distinct input messages which result in a collision in the ACORN internal state. Using a standard PC the collision can be found almost instantly when the secret key is known.
Last updated:  2015-09-22
Private Proximity Testing on Steroids: An NTRU-based Protocol
Constantinos Patsakis, Panayiotis Kotzanikolaou, Mélanie Bouroche
Nowadays, most smartphones come pre-equipped with location (GPS) sensing capabilities, allowing developers to create a wide variety of location-aware applications and services. While location awareness provides novel features and functionality, it opens the door to many privacy nightmares. On many occasions, however, users do not need to share their actual location, but only to determine whether they are in proximity to others, which is practically one bit of information. Private proximity protocols allow this functionality without any further information leakage. In this work we introduce a novel protocol which is far more efficient than the current state of the art and bases its security on lattice-based cryptography.
Last updated:  2016-06-14
Rigorous Upper Bounds on Data Complexities of Block Cipher Cryptanalysis
Uncategorized
Subhabrata Samajder, Palash Sarkar
Show abstract
Uncategorized
Statistical analysis of symmetric key attacks aims to obtain an expression for the data complexity which is the number of plaintext-ciphertext pairs needed to achieve the parameters of the attack. Existing statistical analyses invariably use some kind of approximation, the most common being the approximation of the distribution of a sum of random variables by a normal distribution. Such an approach leads to expressions for data complexities which are {\em inherently approximate}. Prior works do not provide any analysis of the error involved in such approximations. In contrast, this paper takes a rigorous approach to analysing attacks on block ciphers. In particular, no approximations are used. Expressions for upper bounds on the data complexities of several basic and advanced attacks are obtained. The analysis is based on the hypothesis testing framework. Probabilities of Type-I and Type-II errors are upper bounded using standard tail inequalities. In the cases of single linear and differential cryptanalysis, we use the Chernoff bound. For the cases of multiple linear and multiple differential cryptanalysis, Hoeffding bounds are used. This allows bounding the error probabilities and obtaining expressions for data complexities. We believe that our method provides important results for the attacks considered here and more generally, the techniques that we develop should have much wider applicability.
Last updated:  2015-09-22
A Generic Construction for Verifiable Attribute-based Keyword Search Schemes
Mohammad Hassan Ameri, Maryam Rajabzadeh Assar, Javad Mohajeri, Mahmoud Salmasizadeh
Cloud data owners encrypt their documents before outsourcing to preserve their privacy. They can determine a search control policy and delegate the ability of search token generation to users whose attributes satisfy the search control policy. Verifiable attribute-based keyword search (VABKS), in which users can also verify the accuracy of the cloud's functionality, is one such scheme. In this paper, the first generic construction for VABKS is proposed. To this end, the notion of hierarchical identity-based multi-designated verifier signature (HIB-MDVS) is introduced, and existential forgery under chosen message attack (EF-CMA) is formally defined for its unforgeability. Furthermore, anonymity against chosen identity vector set and chosen plaintext attack (Anon-CIVS-CPA) is formally defined as the security notion for hierarchical identity-based broadcast encryption (HIBBE). The proposed construction is built in a modular fashion using HIBBE, HIB-MDVS, and Bloom filters as building blocks. We prove that the security of the proposed construction rests on the unforgeability of HIB-MDVS and the anonymity of HIBBE. Finally, the concept of verifiable ranked keyword search is introduced, and a construction of this primitive based on the proposed VABKS is presented.
Last updated:  2017-01-31
A Cryptographic Analysis of the TLS 1.3 Handshake Protocol Candidates
Benjamin Dowling, Marc Fischlin, Felix Günther, Douglas Stebila
The Internet Engineering Task Force (IETF) is currently developing the next version of the Transport Layer Security (TLS) protocol, version 1.3. The transparency of this standardization process allows comprehensive cryptographic analysis of the protocols prior to adoption, whereas previous TLS versions have been scrutinized in the cryptographic literature only after standardization. Here we look at two related, yet slightly different candidates which were in discussion for TLS 1.3 at the point of writing of the main part of the paper in May 2015, called draft-ietf-tls-tls13-05 and draft-ietf-tls-tls13-dh-based. We give a cryptographic analysis of the primary ephemeral Diffie-Hellman-based handshake protocol, which authenticates parties and establishes encryption keys, of both TLS 1.3 candidates. We show that both candidate handshakes achieve the main goal of providing secure authenticated key exchange according to an augmented multi-stage version of the Bellare-Rogaway model. Such a multi-stage approach is convenient for analyzing the design of the candidates, as they establish multiple session keys during the exchange. An important step in our analysis is to consider compositional security guarantees. We show that, since our multi-stage key exchange security notion is composable with arbitrary symmetric-key protocols, the use of session keys in the record layer protocol is safe. Moreover, since we can view the abbreviated TLS resumption procedure also as a symmetric-key protocol, our compositional analysis allows us to directly conclude security of the combined handshake with session resumption. We include a discussion on several design characteristics of the TLS 1.3 drafts based on the observations in our analysis.
Last updated:  2015-09-22
Functional Signcryption: Notion, Construction, and Applications
Pratish Datta, Ratna Dutta, Sourav Mukhopadhyay
Functional encryption (FE) enables sophisticated control over decryption rights in a multi-user scenario, while functional signature (FS) allows to enforce complex constraints on signing capabilities. This paper introduces the concept of functional signcryption (FSC) that aims to provide the functionalities of both FE and FS in a unified cost-effective primitive. FSC provides a solution to the problem of achieving confidentiality and authenticity simultaneously in digital communication and storage systems involving multiple users with better efficiency compared to a sequential implementation of FE and FS. We begin by providing a formal definition of FSC and formulating its security requirements. Next, we present a generic construction of this challenging primitive that supports arbitrary polynomial-size signing and decryption functions from known cryptographic building blocks, namely, indistinguishability obfuscation (IO) and statistically simulation-sound noninteractive zero-knowledge proof of knowledge (SSS-NIZKPoK). Finally, we exhibit a number of representative applications of FSC: (I) We develop the first construction of attribute-based signcryption (ABSC) supporting signing and decryption policies representable by general polynomial-size circuits from FSC. (II) We show how FSC can serve as a tool for building SSS-NIZKPoK systems and IO, a result which in conjunction with our generic FSC construction can also be interpreted as establishing an equivalence between FSC and the other two fundamental cryptographic primitives.
Last updated:  2015-09-22
Extraction of intrinsic structure for Hardware Trojan detection
Maxime Lecomte, Jacques Fournier, Philippe Maurine
In this paper we present a novel solution to the problem of potential malicious circuitry on FPGAs. The method is based on a structure-extraction technique which considers the infection of a whole lot. This structure is related to the design (place and route, power grid, etc.) of the integrated circuits which compose the lot. In the case of additional circuitry, this design will be modified and the extracted structure will be affected. After developing the extraction technique, we present a methodology for inserting hardware Trojan and counterfeit detection into different IC manufacturing steps. An application example using 30 FPGA boards validates our extraction method. Finally, statistical tools are applied to the experimental results to distinguish a genuine lot from an infected one and to confirm the detection potential of the extracted structure.
Last updated:  2015-09-23
Security Evaluation on Simeck against Zero Correlation Linear Cryptanalysis
Uncategorized
Kai Zhang, Jie Guan, Bin Hu, Dongdai Lin
Show abstract
Uncategorized
The SIMON and SPECK family ciphers have attracted the attention of cryptographers all over the world since they were proposed by the NSA in June 2013. At CHES 2015, Simeck, a new block cipher inspired by both SIMON and SPECK, was proposed; it is more compact and efficient. However, a security evaluation of Simeck against zero-correlation linear cryptanalysis is missing from the specification. The main focus of this paper is to fill this gap. We propose 11/13/15-round zero-correlation linear distinguishers on Simeck32/48/64, respectively, and then present the first zero-correlation linear cryptanalysis of 20/24/27-round Simeck32/48/64. As far as we know, for Simeck32, our result is the best to date.
Last updated:  2016-02-22
QA-NIZK Arguments in Asymmetric Groups: New Tools and New Constructions
Uncategorized
Alonso González, Alejandro Hevia, Carla Ràfols
Show abstract
Uncategorized
A sequence of recent works have constructed constant-size quasi-adaptive (QA) NIZK arguments of membership in linear subspaces of $\mathbb{G}^m$, where $\mathbb{G}$ is a group equipped with a bilinear map $e : G \times H \to T$. Although applicable to any bilinear group, these techniques are less useful in the asymmetric case. For example, Jutla and Roy (Crypto 2014) show how to do QA aggregation of Groth- Sahai proofs, but the types of equations which can be aggregated are more restricted in the asymmetric setting. Furthermore, there are natural statements which cannot be expressed as membership in linear subspaces, for example the satisfiability of quadratic equations. In this paper we develop specific techniques for asymmetric groups. We introduce a new computational assumption, under which we can recover all the aggregation results of Groth-Sahai proofs known in the symmetric setting. We adapt the arguments of membership in linear spaces of $\mathbb{G}^m$ to linear subspaces of $\mathbb{G}^m \times \mathbb{H}^n$. In particular, we give a constant-size argument that two sets of Groth-Sahai commitments, defined over different groups $\mathbb{G},\mathbb{H}$, open to the same scalars in $\mathbb{Z}_q$, a useful tool to prove satisfiability of quadratic equations in $\mathbb{Z}_q$. We then use one of the arguments for subspaces in $\mathbb{G}^m \times\mathbb{H}^n$ and develop new techniques to give constant-size QA-NIZK proofs that a commitment opens to a bit-string. To the best of our knowledge, these are the first constant-size proofs for quadratic equations in $\mathbb{Z}_q$ under standard and falsifiable assumptions. As a result, we obtain improved threshold Groth-Sahai proofs for pairing product equations, ring signatures, proofs of membership in a list, and various types of signature schemes.
Last updated:  2015-11-25
On the Impact of Known-Key Attacks on Hash Functions
Bart Mennink, Bart Preneel
Hash functions are often constructed based on permutations or blockciphers, and security proofs are typically done in the ideal permutation or cipher model. However, once these random primitives are instantiated, vulnerabilities of these instantiations may nullify the security. At ASIACRYPT 2007, Knudsen and Rijmen introduced known-key security of blockciphers, which gave rise to many distinguishing attacks on existing blockcipher constructions. In this work, we analyze the impact of such attacks on primitive-based hash functions. We present and formalize the weak cipher model, which captures the case where a blockcipher has a certain weakness but is perfectly random otherwise. A specific instance of this model, considering the existence of sets of B queries whose XOR equals 0 at bit-positions C, where C is an index set, covers a wide range of known-key attacks in the literature. We apply this instance to the PGV compression functions, as well as to the Groestl (based on two permutations) and Shrimpton-Stam (based on three permutations) compression functions, and show that these designs do not seriously succumb to any differential known-key attack known to date.
Last updated:  2015-09-17
Linear Distinguishers in the Key-less Setting: Application to PRESENT
Martin M. Lauridsen, Christian Rechberger
The application of the concept of linear cryptanalysis to the domain of key-less primitives is largely an open problem. In this paper we, for the first time, propose a model in which its application is meaningful for distinguishing block ciphers. Combining our model with ideas from message modification and rebound-like approaches, we initiate a study of cryptographic primitives with respect to this new attack vector and choose the lightweight block cipher PRESENT as an example target. This leads to known-key distinguishers over up to 27 rounds, whereas the best previous result is up to 18 rounds in the chosen-key model.
Last updated:  2015-10-27
Cryptographic Assumptions: A Position Paper
Shafi Goldwasser, Yael Tauman Kalai
The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yard stick for the value of our proposals. Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions which utilize them. We believe that the lack of standards of what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for {\em measures} according to which we classify and compare assumptions, as to which are {\it safe} and which are not. In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows the footsteps of Naor (Crypto 2003). Our governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.
Last updated:  2015-09-17
RoadRunneR: A Small And Fast Bitslice Block Cipher For Low Cost 8-bit Processors
Adnan Baysal, Suhap Sahin
Designing block ciphers targeting resource-constrained 8-bit CPUs is a challenging problem. There are many recent lightweight ciphers designed for better performance in hardware. On the other hand, most software-efficient lightweight ciphers either lack a security proof or have a low security margin. To fill the gap, we present RoadRunneR, an efficient block cipher in 8-bit software whose security is provable against differential and linear attacks. RoadRunneR has the lowest code size on Atmel's ATtiny45, except for NSA's design SPECK, which has no security proof. Moreover, we propose a new metric for the fair comparison of block ciphers. This metric, called ST/A, is the first metric to use key length as a parameter to rank ciphers of different key lengths in a fair way. Using ST/A and other metrics in the literature, we show that RoadRunneR is competitive among existing ciphers on ATtiny45.
Last updated:  2015-09-17
Mapping the Intel Last-Level Cache
Yuval Yarom, Qian Ge, Fangfei Liu, Ruby B. Lee, Gernot Heiser
Modern Intel processors use an undisclosed hash function to map memory lines into last-level cache slices. In this work we develop a technique for reverse-engineering the hash function. We apply the technique to a 6-core Intel processor and demonstrate that knowledge of this hash function can facilitate cache-based side channel attacks, reducing the amount of work required for profiling the cache by three orders of magnitude. We also show how using the hash function we can double the number of colours used for page-colouring techniques.
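For context, the slice hashes reverse-engineered in follow-up literature for power-of-two slice counts take the form of XORs of selected physical-address bits. The sketch below illustrates that general shape; the masks are made up for illustration and are not Intel's real function (the paper's 6-core target, with a non-power-of-two slice count, requires a more complex non-linear hash that is not reproduced here):

```python
def slice_index(phys_addr, masks):
    """Toy model of an XOR-based LLC slice hash: bit i of the slice
    index is the parity of the physical-address bits selected by masks[i]."""
    idx = 0
    for i, m in enumerate(masks):
        parity = bin(phys_addr & m).count("1") & 1
        idx |= parity << i
    return idx

# hypothetical masks for a 4-slice (2-bit) hash; NOT a real Intel function.
# Bits below 6 are unused because a 64-byte cache line fixes them.
MASKS = [0x1B5F575440, 0x2EB5FAA880]
```

Reverse-engineering then amounts to probing which addresses collide in the same slice and solving for the masks, which is feasible precisely because the map is linear over GF(2) in the power-of-two case.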
Last updated:  2015-10-10
Almost-tight Identity Based Encryption against Selective Opening Attack
Junqing Gong, Xiaolei Dong, Zhenfu Cao, Jie Chen
This paper presents an identity-based encryption (IBE) scheme secure under selective opening attack (SOA) whose security is almost-tightly related to a set of computational assumptions. Our result is a combination of Bellare, Waters, and Yilek's method [TCC, 2011] for constructing (not tightly) SOA-secure IBE and Hofheinz, Koch, and Striecks' technique [PKC, 2015] for building almost-tightly secure IBE in the multi-ciphertext setting. In particular, we first tune Bellare et al.'s generic construction for SOA-secure IBE to show that a one-bit IBE achieving ciphertext indistinguishability under chosen plaintext attack in the multi-ciphertext setting (with one-sided public openability) tightly implies a multi-bit IBE secure under selective opening attack. Next, we almost-tightly reduce such a one-bit IBE to static assumptions in composite-order bilinear groups, employing the technique of Hofheinz et al. This yields the first SOA-secure IBE with an almost-tight reduction.
Last updated:  2015-09-17
A Note on the Indifferentiability of the 10-Round Feistel Construction
Yannick Seurin
Holenstein et al. (STOC 2011) have shown that the Feistel construction with fourteen rounds and public random round functions is indifferentiable from a random permutation. In the same paper, they pointed out that a previous proof for the 10-round Feistel construction by Seurin (PhD thesis) was flawed. However, they left open the question of whether the proof could be patched (leaving hope that the simulator described by Seurin could still be used to prove indifferentiability of the 10-round Feistel construction). In this note, we show that the proof cannot be patched (and hence that the simulator described by Seurin cannot be used to prove the indifferentiability of the 10-round Feistel construction) by describing a distinguishing attack that succeeds with probability close to one against this simulator. We stress that this does not imply that the 10-round Feistel construction is not indifferentiable from a random permutation (since our attack does not exclude the existence of a different simulator that would work).
Last updated:  2016-04-14
Differential Analysis on Simeck and SIMON with Dynamic Key-guessing Techniques
Kexin Qiao, Lei Hu, Siwei Sun
The Simeck family of lightweight block ciphers was proposed at CHES 2015; it combines good design components from the NSA-designed ciphers SIMON and SPECK. Dynamic key-guessing techniques were proposed by Wang {\it et al.} to greatly reduce the key space guessed in differential cryptanalysis, and they work well on SIMON. In this paper, we implement the dynamic key-guessing techniques in a program that automatically produces the data needed in the dynamic key-guessing procedure, thus simplifying the security evaluation of SIMON- and Simeck-like block ciphers against differential attacks. We use the differentials from Kölbl {\it et al.}'s work, as well as a differential of lower Hamming weight that we find using a Mixed Integer Linear Programming method, to attack 22-round Simeck32, 28-round Simeck48 and 35-round Simeck64. Besides, we launch the same attack procedure on four members of the SIMON family using the differentials newly proposed at CRYPTO 2015 and obtain new attack results on 22-round SIMON32/64, 24-round SIMON48/96, 28/29-round SIMON64/96 and 29/30-round SIMON64/128. To the best of our knowledge, our results on SIMON64 are currently the best.
Last updated:  2015-09-16
A Unified Approach to MPC with Preprocessing using OT
Tore Kasper Frederiksen, Marcel Keller, Emmanuela Orsini, Peter Scholl
SPDZ, TinyOT and MiniMAC are a family of MPC protocols based on secret sharing with MACs, where a preprocessing stage produces multiplication triples in a finite field. This work describes new protocols for generating multiplication triples in fields of characteristic two using OT extensions. Before this work, TinyOT, which works on binary circuits, was the only protocol in this family using OT extensions. Previous SPDZ protocols for triples in large finite fields require somewhat homomorphic encryption, which leads to very inefficient runtimes in practice, while no dedicated preprocessing protocol for MiniMAC (which operates on vectors of small field elements) was previously known. Since actively secure OT extensions can be performed very efficiently using only symmetric primitives, it is highly desirable to base MPC protocols on these rather than expensive public key primitives. We analyze the practical efficiency of our protocols, showing that they should all perform favorably compared with previous works; we estimate our protocol for SPDZ triples in $\mathbb{F}_{2^{40}}$ will perform around 2 orders of magnitude faster than the best known previous protocol.
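For context, the multiplication triples produced by such preprocessing are consumed in the online phase in the standard Beaver fashion. The sketch below shows two-party additive secret sharing over a toy prime field; the protocols in the paper work over fields of characteristic two and authenticate every share with a MAC, both of which are omitted here:

```python
import random

P = 2**61 - 1  # toy prime field; illustrative, not the paper's GF(2^k) setting

def share(x):
    """Additively share x between two parties."""
    r = random.randrange(P)
    return [r, (x - r) % P]

def reconstruct(sh):
    return sum(sh) % P

def beaver_mul(x_sh, y_sh, triple):
    """Multiply two shared values using one preprocessed triple (a, b, c=a*b)."""
    a_sh, b_sh, c_sh = triple
    # both parties open d = x - a and e = y - b (this leaks nothing:
    # a and b are uniform masks)
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    # local share of x*y: c_i + d*b_i + e*a_i, plus d*e for one party
    z_sh = [(ci + d * bi + e * ai) % P for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

# preprocessing: sample a random triple (what the OT-based protocols produce)
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))
```

The cost model in the paper is dominated by producing these triples; the online multiplication above is two openings and local arithmetic.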
Last updated:  2015-09-16
New Results on Identity-based Encryption from Quadratic Residuosity
Ferucio Laurentiu Tiplea, Emil Simion
This paper surveys the results obtained so far in designing identity-based encryption (IBE) schemes based on the quadratic residuosity assumption (QRA). We begin by describing the first such scheme, due to Cocks, and then advance to the novel idea of Boneh, Gentry and Hamburg. Major improvements of the Boneh-Gentry-Hamburg scheme are then recalled. The recently revealed algebraic torus structure of the Cocks scheme allows for a better understanding of the scheme, as well as for new applications such as homomorphic and anonymous variants.
Last updated:  2015-09-16
Privacy-preserving Attribute Based Searchable Encryption
Payal Chaudhari, Maniklal Das
Attribute-Based Encryption (ABE) is a promising public-key cryptographic primitive that can be used for cryptographically enforced access control in untrusted storage. Storing data on untrusted storage requires not only data security for data owners but also data protection from the untrusted storage server. To address this important requirement, Anonymous Attribute-Based Encryption (AABE) is a suitable primitive that allows users to access data from untrusted storage without revealing their identities, while user data is stored in untrusted storage in encrypted form. With data stored in encrypted form, keyword-based query search (and data retrieval) is a challenging research problem. In this paper we present an anonymous attribute-based searchable encryption (A2SBE) scheme which allows a user to retrieve only the subset of documents pertaining to his chosen keyword(s). A user can upload documents to the public cloud in encrypted form, search documents based on keyword(s), and retrieve documents without revealing his identity. The scheme is proven secure under the selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA) model and the selective ciphertext-policy and chosen keyword attack (IND-sCP-CKA) model. The scheme requires less storage for the user's decryption key and less computation for decryption in comparison to other schemes.
Last updated:  2015-09-22
Seriously, get off my cloud! Cross-VM RSA Key Recovery in a Public Cloud
Mehmet Sinan Inci, Berk Gulmezoglu, Gorka Irazoqui, Thomas Eisenbarth, Berk Sunar
It has been six years since Ristenpart et al. demonstrated the viability of co-location and provided the first concrete evidence for sensitive information leakage on a commercial cloud. We show that co-location can be achieved and detected by monitoring the last level cache in public clouds. More significantly, we present a full-fledged attack that exploits subtle leakages to recover RSA decryption keys from a co-located instance. We target a recently patched Libgcrypt RSA implementation by mounting Cross-VM Prime and Probe cache attacks in combination with other tests to detect co-location in Amazon EC2. In a preparatory step, we reverse engineer the unpublished nonlinear slice selection function for the 10 core Intel Xeon processor which significantly accelerates our attack (this chipset is used in Amazon EC2). After co-location is detected and verified, we perform the Prime and Probe attack to recover noisy keys from a carefully monitored Amazon EC2 VM running the aforementioned vulnerable libgcrypt library. We subsequently process the noisy data and obtain the complete 2048-bit RSA key used during encryption. This work reaffirms the privacy concerns and underlines the need for deploying stronger isolation techniques in public clouds.
Last updated:  2015-09-15
Integrity-Aware Parallelizable Cipher Feedback Mode for Real-time Cryptography
Prosanta Gope
Conventional Cipher Feedback Mode (CFB) allows the transmission unit to be shorter than the block-cipher length, so unlike the ECB and CBC modes of operation, where encryption cannot begin until a complete block of full-length (say 64 bits) plaintext is available, it introduces no delay and no message expansion. However, because of stalling during the block encryption, CFB cannot provide low latency and low jitter, two imperative properties for real-time cryptography. For these, the input stream must not wait for the keystream to be generated; that is, keystreams must be arranged in advance, which is not possible in conventional CFB mode. Besides, conventional CFB is also unsuitable for real-time cryptosystems where message integrity is desired along with privacy. In this article, we propose a variant of CFB, called Integrity-Aware Parallelizable Cipher Feedback Mode (IAP-CFB), which guarantees all the aforesaid requirements: low latency, low jitter, privacy, and integrity assurance.
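The stalling issue is visible in a plain CFB sketch: each keystream segment is derived from the previous ciphertext segment, so it cannot be precomputed. The toy below uses a SHA-256-based stand-in for a block cipher's forward direction (CFB never needs the inverse); it illustrates conventional s-byte-segment CFB only, not the paper's IAP-CFB:

```python
import hashlib

def block_enc(key, block):
    # stand-in for a 16-byte block cipher's forward direction;
    # CFB only ever uses the encryption direction
    return hashlib.sha256(key + block).digest()[:16]

def cfb_encrypt(key, iv, plaintext, s=1):
    reg, out = iv, bytearray()          # shift register starts as the IV
    for i in range(0, len(plaintext), s):
        ks = block_enc(key, reg)[:s]    # stalls: needs previous ciphertext
        ct = bytes(p ^ k for p, k in zip(plaintext[i:i + s], ks))
        out += ct
        reg = (reg + ct)[-16:]          # shift ciphertext into the register
    return bytes(out)

def cfb_decrypt(key, iv, ciphertext, s=1):
    reg, out = iv, bytearray()
    for i in range(0, len(ciphertext), s):
        ks = block_enc(key, reg)[:s]
        seg = ciphertext[i:i + s]
        out += bytes(c ^ k for c, k in zip(seg, ks))
        reg = (reg + seg)[-16:]
    return bytes(out)
```

Note that the call to `block_enc` inside the loop depends on ciphertext produced in the previous iteration, which is exactly the latency/jitter bottleneck the abstract describes.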
Last updated:  2015-09-15
Improved Attacks on Reduced-Round Camellia-128/192/256
Xiaoyang Dong, Leibo Li, Keting Jia, Xiaoyun Wang
Camellia is a widely used block cipher, which has been selected as an international standard by ISO/IEC. In this paper, we consider a new family of differentials of round-reduced Camellia-128 depending on different key subsets. There are in total 224 key subsets corresponding to 224 types of 8-round differentials, which cover a fraction of 1 - 1/2^{15} of the keyspace, and each type of 8-round differential consists of 2^{43} differentials. Combining these with multiple differential attack techniques, we give a key-dependent multiple differential attack on 10-round Camellia-128 with data complexity 2^{91} and time complexity 2^{113}. Furthermore, we propose a 7-round property for Camellia-192 and an 8-round property for Camellia-256, and then mount meet-in-the-middle attacks on 12-round Camellia-192 and 13-round Camellia-256, with complexities of 2^{180} encryptions and 2^{232.7} encryptions, respectively. All these attacks start from the first round in a single key setting.
Last updated:  2016-05-07
Rogue Decryption Failures: Reconciling AE Robustness Notions
Guy Barwell, Dan Page, Martijn Stam
An authenticated encryption scheme is deemed secure (AE) if ciphertexts both look like random bitstrings and are unforgeable. AE is a much stronger notion than the traditional IND-CCA. One shortcoming of AE as commonly understood is its idealized, all-or-nothing decryption: if decryption fails, it will always provide the \emph{same single} error message \emph{and nothing more}. Reality often turns out differently: encode-then-encipher schemes often output decrypted ciphertext before verification has taken place whereas pad-then-MAC-then-encrypt schemes are prone to distinguishable verification failures due to the subtle interaction between padding and the MAC-then-encrypt concept. Three recent papers provided what appeared to be independent and radically different definitions to model this type of decryption leakage. We reconcile these three works by providing a reference model of security for authenticated encryption in the face of decryption leakage from invalid queries. Having tracked the development of AE security games, we provide a single expressive framework allowing us to compare and contrast the previous notions. We find that at their core, the notions are essentially equivalent, with their key differences stemming from definitional choices independent of the desire to capture real world behaviour.
Last updated:  2015-09-15
Comparison of cube attacks over different vector spaces
Richard Winter, Ana Salagean, Raphael C. -W. Phan
We generalise the cube attack of Dinur and Shamir (and the similar AIDA attack of Vielhaber) to a more general higher-order differentiation attack, by summing over an arbitrary subspace of the space of initialisation vectors. The Moebius transform can be used for efficiently examining all the subspaces of a big space, similar to the method used by Fouque and Vannet for the usual cube attack. Secondly, we propose replacing the Generalised Linearity Test proposed by Dinur and Shamir with a test based on higher-order differentiation via the Moebius transform. We show that the proposed test provides all the information provided by the Generalised Linearity Test, at the same computational cost. In addition, for functions that do not pass the linearity test it also provides, at no extra cost, an estimate of the degree of the function. This is useful for guiding the heuristics for the cube/AIDA attacks. Finally, we implement our ideas and test them on the stream cipher Trivium.
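A minimal example of the classical cube attack that the paper generalises: summing the output bit over a coordinate subspace (a "cube") of the IV space cancels every term except the superpoly of that cube, which is then a simple function of the key bits. The toy polynomial f below is made up for illustration; the paper's contribution is to sum over arbitrary subspaces rather than coordinate-aligned cubes:

```python
def f(v, k):
    # toy "cipher output" bit as an ANF polynomial in IV bits v and key bits k
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2]) ^ (v[1] & k[1]) ^ (k[0] & k[1])

def cube_sum(func, cube_positions, fixed_v, k):
    """XOR func over all assignments of the cube variables, i.e. over a
    coordinate subspace of the IV space, with the other IV bits fixed."""
    acc = 0
    for m in range(1 << len(cube_positions)):
        v = list(fixed_v)
        for j, pos in enumerate(cube_positions):
            v[pos] = (m >> j) & 1
        acc ^= func(v, k)
    return acc
```

Summing f over the cube {v0, v1} with v2 = 0 isolates the superpoly k0: the term v0v1k0 survives (only the all-ones assignment contributes), v0v1v2 vanishes because v2 is fixed to 0, and the lower-degree terms cancel pairwise over the sum.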
Last updated:  2018-09-28
Robust Authenticated Encryption and the Limits of Symmetric Cryptography
Christian Badertscher, Christian Matt, Ueli Maurer, Phillip Rogaway, Björn Tackmann
Robust authenticated encryption (RAE) is a primitive for symmetric encryption that allows to flexibly specify the ciphertext expansion, i.e., how much longer the ciphertext is compared to the plaintext. For every ciphertext expansion, RAE aims at providing the best-possible authenticity and confidentiality. To investigate whether this is actually achieved, we characterize exactly the guarantees symmetric cryptography can provide for any given ciphertext expansion. Our characterization reveals not only that RAE reaches the claimed goal, but also, contrary to prior belief, that one cannot achieve full confidentiality without ciphertext expansion. This provides new insights into the limits of symmetric cryptography. Moreover, we provide a rigorous treatment of two previously only informally stated additional features of RAE; namely, we show how redundancy in the message space can be exploited to improve the security and we analyze the exact security loss if multiple messages are encrypted with the same nonce.
Last updated:  2015-09-15
Security Against Related Randomness Attacks via Reconstructive Extractors
Kenneth G. Paterson, Jacob C. N. Schuldt, Dale L. Sibborn, Hoeteck Wee
This paper revisits related randomness attacks against public key encryption schemes as introduced by Paterson, Schuldt and Sibborn (PKC 2014). We present a general transform achieving security for public key encryption in the related randomness setting using as input any secure public key encryption scheme in combination with an auxiliary-input reconstructive extractor. Specifically, we achieve security in the function-vector model introduced by Paterson et al., obtaining the first constructions providing CCA security in this setting. We consider instantiations of our transform using the Goldreich-Levin extractor; these outperform the previous constructions in terms of public-key size and reduction tightness, as well as enjoying CCA security. Finally, we also point out that our approach leads to an elegant construction for Correlation Input Secure hash functions, which have proven to be a versatile tool in diverse areas of cryptography.
Last updated:  2015-09-15
Private Ciphertext-Policy Attribute-based Encryption Schemes With Constant-Size Ciphertext Supporting CNF Access Policy
Sébastien Canard, Viet Cuong Trinh
Attribute-based encryption (ABE) is an extension of traditional public key encryption in which the encryption and decryption phases are based on users' attributes. More precisely, we focus on ciphertext-policy ABE (CP-ABE), where the secret key is associated with a set of attributes and the ciphertext is generated with an access policy. It then becomes feasible to decrypt a ciphertext only if one's attributes satisfy the used access policy. In this paper, we give the first private CP-ABE constructions with a constant-size ciphertext, supporting CNF (Conjunctive Normal Form) access policies, with the simple restriction that each attribute can only appear $k_{max}$ times in the access formula. Our two constructions are based on the BGW scheme at Crypto'05. The first scheme is selectively secure (in the standard model) while our second one reaches selective CCA security (in the random oracle model).
Last updated:  2015-09-15
MI-T-HFE, a New Multivariate Signature Scheme
Wenbin Zhang, Chik How Tan
In this paper, we propose a new multivariate signature scheme named MI-T-HFE as a competitor of QUARTZ. The core map of MI-T-HFE is of an HFEv type but more importantly has a specially designed trapdoor. This special trapdoor makes MI-T-HFE have several attractive advantages over QUARTZ. First of all, the core map and the public map of MI-T-HFE are both surjective. This surjectivity property is important for signature schemes because any message should always have valid signatures; otherwise it may be troublesome to exclude those messages without valid signatures. However this property is missing for a few major signature schemes, including QUARTZ. A practical parameter set is proposed for MI-T-HFE with the same length of message and same level of security as QUARTZ, but it has smaller public key size, and is more efficient than (the underlying HFEv- of) QUARTZ with the only cost that its signature length is twice that of QUARTZ.
Last updated:  2015-11-23
Which Ring Based Somewhat Homomorphic Encryption Scheme is Best?
Anamaria Costache, Nigel P. Smart
The purpose of this paper is to compare side-by-side the NTRU and BGV schemes in their non-scale invariant (messages in the lower bits), and their scale invariant (message in the upper bits) forms. The scale invariant versions are often called the FV and YASHE schemes. As an additional optimization, we also investigate the effect of modulus reduction on the scale-invariant schemes. We compare the schemes using the ``average case'' noise analysis presented by Gentry et al. In addition we unify notation and techniques so as to show commonalities between the schemes. We find that the BGV scheme appears to be more efficient for large plaintext moduli, whilst YASHE seems more efficient for small plaintext moduli (although the benefit is not as great as one would have expected).
Last updated:  2015-09-14
Tweak-Length Extension for Tweakable Blockciphers
Kazuhiko Minematsu, Tetsu Iwata
Tweakable blockcipher (TBC) is an extension of the standard blockcipher introduced by Liskov, Rivest and Wagner in 2002. TBC is a versatile building block for efficient symmetric-key cryptographic functions, such as authenticated encryption. In this paper we study the problem of extending the tweak of a given TBC of fixed tweak length, which is a variant of the popular problem of converting a blockcipher into a TBC, i.e., a blockcipher mode of operation. The problem is particularly important for known dedicated TBCs since they have relatively short tweaks. We propose a simple and efficient solution, called XTX, for this problem. XTX converts a TBC of fixed tweak length into another TBC of arbitrarily long tweak, by extending the scheme of Liskov, Rivest and Wagner that converts a blockcipher into a TBC. Given a TBC of $n$-bit block and $m$-bit tweak, XTX provides $(n+m)/2$-bit security while conventional methods provide $n/2$ or $m/2$-bit security. We also show that XTX is even useful when combined with some blockcipher modes for building a TBC having security beyond the birthday bound.
Last updated:  2016-09-01
Composable Security in the Tamper Proof Hardware Model under Minimal Complexity
Carmit Hazay, Antigoni Polychroniadou, Muthuramakrishnan Venkitasubramaniam
We put forth a new formulation of tamper-proof hardware in the Global Universal Composable (GUC) framework introduced by Canetti et al. in TCC 2007. Almost all of the previous works rely on the formulation by Katz in Eurocrypt 2007 and this formulation does not fully capture tokens in a concurrent setting. We address these shortcomings by relying on the GUC framework where we make the following contributions: (1) We construct secure Two-Party Computation (2PC) protocols for general functionalities with optimal round complexity and computational assumptions using stateless tokens. More precisely, we show how to realize arbitrary functionalities with GUC security in two rounds under the minimal assumption of One-Way Functions (OWFs). Moreover, our construction relies on the underlying function in a black-box way. As a corollary, we obtain feasibility of Multi-Party Computation (MPC) with GUC-security under the minimal assumption of OWFs. As an independent contribution, we identify an issue with a claim in a previous work by Goyal, Ishai, Sahai, Venkatesan and Wadia in TCC 2010 regarding the feasibility of UC-secure computation with stateless tokens assuming collision-resistant hash-functions (and the extension based only on one-way functions). (2) We then construct a 3-round MPC protocol to securely realize arbitrary functionalities with GUC-security starting from any semi-honest secure MPC protocol. For this construction, we require the so-called one-many commit-and-prove primitive introduced in the original work of Canetti, Lindell, Ostrovsky and Sahai in STOC 2002 that is round-efficient and black-box in the underlying commitment. Using specially designed ``input-delayed'' protocols we realize this primitive (with a 3-round protocol in our framework) using stateless tokens and one-way functions (where the underlying one-way function is used in a black-box way).
Last updated:  2015-09-13
Applying Cryptographic Acceleration Techniques to Error Correction
Rémi Géraud, Diana-Stefania Maimut, David Naccache, Rodrigo Portella do Canto, Emil Simion
Modular reduction is the basic building block of many public-key cryptosystems. BCH codes require repeated polynomial reductions modulo the same constant polynomial. This is conceptually very similar to the implementation of public-key cryptography where repeated modular reduction in $\mathbb{Z}_n$ or $\mathbb{Z}_p$ are required for some fixed $n$ or $p$. It is hence natural to try and transfer the modular reduction expertise developed by cryptographers during the past decades to obtain new BCH speed-up strategies. Error correction codes (ECCs) are deployed in digital communication systems to enforce transmission accuracy. BCH codes are a particularly popular ECC family. This paper generalizes Barrett's modular reduction to polynomials to speed-up BCH ECCs. A BCH$(15,7,2)$ encoder was implemented in Verilog and synthesized. Results show substantial improvements when compared to traditional polynomial reduction implementations. We present two BCH code implementations (regular and pipelined) using Barrett polynomial reduction. These implementations, are respectively 4.3 and 6.7 faster than an improved BCH LFSR design. The regular Barrett design consumes around 53$\%$ less power than the BCH LFSR design, while the faster pipelined version consumes 2.3 times more power than the BCH LFSR design.
Last updated:  2015-09-13
A New Standard of Ukraine: The Kupyna Hash Function
Roman Oliynykov, Ivan Gorbenko, Oleksandr Kazymyrov, Victor Ruzhentsev, Oleksandr Kuznetsov, Yurii Gorbenko, Artem Boiko, Oleksandr Dyrda, Viktor Dolgov, Andrii Pushkaryov
The Kupyna hash function was approved as the new Ukrainian standard DSTU 7564:2014 in 2015. Main requirements for it were both high security level and good performance of software implementation on general-purpose 64-bit CPUs. The new hash function uses Davies-Meyer compression function based on Even-Mansour cipher construction. Kupyna is built on the transformations of the Kalyna block cipher (Ukrainian standard DSTU 7624:2014). In this paper we present the adapted English translated specification of the Kupyna hash function as it is described in the national standard of Ukraine.
Last updated:  2015-09-13
General Circuit Realizing Compact Revocable Attribute-Based Encryption from Multilinear Maps
Pratish Datta, Ratna Dutta, Sourav Mukhopadhyay
This paper demonstrates a new technique for managing revocation in the context of attribute-based encryption (ABE) and presents two selectively secure directly revocable ABE (RABE) constructions that support decryption policies realizable by polynomial-size Boolean circuits of arbitrary fan-out, and that feature compactness in the sense that the number of revocation-controlling components in ciphertexts and decryption keys is constant. In fact, our RABE schemes are the first to achieve these parameters. Both our constructions utilize multilinear maps. The size of the public parameters in our first construction is linear in the maximum number of users supported by the system, while in the second construction we reduce it to logarithmic.
Last updated:  2015-09-14
Revisiting Sum of CBC-MACs and Extending NI2-MAC to Achieve Beyond-Birthday Security
Avijit Dutta, Goutam Paul
In CT-RSA 2010, Kan Yasuda showed that the sum of two independent Encrypted CBC (ECBC) MACs is a secure PRF with security beyond the birthday bound. It was mentioned in the abstract of the paper that ``no proof of security above the birthday bound $(2^{n/2})$ has been known for the sum of CBC MACs" (where $n$ is the tag size in bits). Kan Yasuda's paper did not consider the sum of actual CBC outputs, and hence the PRF-security of the same has been left open. In this paper, we solve this problem by proving the beyond-birthday security of the sum of two CBC MACs. As a tool to prove this result, we develop a generalization of the result of S. Lucks from EUROCRYPT 2000 that the sum of two secure PRPs is a secure PRF. We extend this to the case when the domain and the range of the permutations may have some restrictions. Finally, we also lift the birthday bound of the NI2 MAC construction (the bound was proven in CRYPTO 2014 by Gazi et al.) to beyond birthday by a small change in the existing construction.
Last updated:  2016-08-14
Using Modular Extension to Provably Protect Edwards Curves Against Fault Attacks
Margaux Dugardin, Sylvain Guilley, Martin Moreau, Zakaria Najm, Pablo Rauzy
Fault injection attacks are a real-world threat to cryptosystems, in particular asymmetric cryptography. In this paper, we focus on countermeasures which guarantee the integrity of the computation result, hence covering most existing and future fault attacks. Namely, we study the modular extension protection scheme in previously existing and newly contributed variants of the countermeasure on elliptic curve scalar multiplication (ECSM) algorithms. We find that an existing countermeasure is incorrect, and we propose a new “test-free” variant of the modular extension scheme that fixes it. We then formally prove the correctness and security of modular extension: specifically, the fault non-detection probability is inversely proportional to the security parameter. Finally, we implement an ECSM protected with test-free modular extension on Edwards and twisted Edwards curves to evaluate the efficiency of this method.
Last updated:  2015-11-13
Generic Security of NMAC and HMAC with Input Whitening
Peter Gaži, Krzysztof Pietrzak, Stefano Tessaro
HMAC and its variant NMAC are the most popular approaches to deriving a MAC (and more generally, a PRF) from a cryptographic hash function. Despite nearly two decades of research, their exact security still remains far from understood in many different contexts. Indeed, recent works have re-surfaced interest in {\em generic} attacks, i.e., attacks that treat the compression function of the underlying hash function as a black box. Generic security can be proved in a model where the underlying compression function is modeled as a random function; yet, to date, the question of proving tight, non-trivial bounds on the generic security of HMAC/NMAC even as a PRF remains a challenging open question. In this paper, we ask whether a small modification to HMAC and NMAC can allow us to exactly characterize the security of the resulting constructions, while only incurring a little penalty with respect to efficiency. To this end, we present simple variants of NMAC and HMAC, for which we prove tight bounds on the generic PRF security, expressed in terms of the numbers of construction and compression function queries necessary to break the construction. All of our constructions are obtained via a (near) {\em black-box} modification of NMAC and HMAC, which can be interpreted as an initial step of key-dependent message pre-processing. While our focus is on PRF security, a further attractive feature of our new constructions is that they clearly defeat all recent generic attacks against properties such as state recovery and universal forgery. These exploit properties of the so-called ``functional graph'' which are not directly accessible in our new constructions.
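For reference, the unmodified HMAC the paper starts from is the textbook two-layer construction of FIPS 198: the key is padded to the hash's block size, XORed with the ipad/opad constants, and the inner digest is re-hashed under the outer key. The sketch below checks itself against Python's standard hmac module; the paper's whitened variants additionally pre-process the message with key-dependent values, which is not shown here:

```python
import hashlib
import hmac as hmac_std

def textbook_hmac(key, msg, hashfunc=hashlib.sha256):
    """HMAC(K, m) = H((K' xor opad) || H((K' xor ipad) || m)),
    where K' is the key hashed (if overlong) then zero-padded to
    the block size B of the underlying hash."""
    B = hashfunc().block_size            # 64 bytes for SHA-256
    if len(key) > B:
        key = hashfunc(key).digest()
    key = key.ljust(B, b"\x00")
    ipad = bytes(k ^ 0x36 for k in key)
    opad = bytes(k ^ 0x5C for k in key)
    inner = hashfunc(ipad + msg).digest()
    return hashfunc(opad + inner).digest()
```

The two constant pads are what make HMAC a keyed cascade of the compression function; the whitening studied in the paper can be viewed as extending this key-dependent pre-processing from the key block to the message blocks.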
Last updated:  2015-09-13
Generic Efficient Dynamic Proofs of Retrievability
Mohammad Etemad, Alptekin Küpçü
Together with its great advantages, cloud storage brought many interesting security issues to our attention. Since 2007, with the first efficient storage integrity protocols Proofs of Retrievability (PoR) of Juels and Kaliski, and Provable Data Possession (PDP) of Ateniese et al., many researchers have worked on such protocols. The first proposals worked for static or limited dynamic data, whereas later proposals enabled fully dynamic data integrity and retrievability. Since the beginning, the difference between the PDP and PoR models was greatly debated. Most notably, it was thought that dynamic PoR (DPoR) is harder than dynamic PDP (DPDP). Historically this was true: The first DPDP scheme was shown by Erway et al. in 2009, whereas the first DPoR scheme was created by Cash et al. in 2013. We show how to obtain DPoR from DPDP and PDP, together with erasure codes, showing that, without realizing it, we could already have had a DPoR solution in 2009. We propose a general framework for constructing DPoR schemes. Our framework encapsulates all known DPoR schemes as its special cases. We further show practical and interesting optimizations that enable even better performance than the constructions of Chandran et al. and Shi et al. For the first time, we show how to obtain audit bandwidth for DPoR that is independent of the data size, and how the client can greatly speed up updates with O(λ√n) local storage (where n is the number of blocks, and λ is the security parameter), which corresponds to less than 3 MB for 10 GB outsourced data, and can easily be obtained in today’s smartphones, let alone computers.
Last updated:  2015-09-13
Computing information on domain parameters from public keys selected uniformly at random
Martin Ekerå
The security of many cryptographic schemes and protocols rests on the conjectured computational intractability of the discrete logarithm problem in some group <g> of prime order. Such schemes and protocols require domain parameters that specify <g> and a specific generator g. In this paper we consider the problem of computing information on the domain parameters from public keys selected uniformly at random from <g>. We show that it is not possible to compute any information on the generator g regardless of the number of public keys observed. In the case of elliptic curves E(GF(p)) or E(GF(2^n)) in short Weierstrass form, or E(K) in Edwards, twisted Edwards or Montgomery form, where K is a non-binary field, we show how to compute the domain parameters excluding the generator from four keys in affine form. Hence, if the domain parameters excluding the generator are to be kept private, points may not be transmitted in affine form. It is an open question whether point compression is a sufficient requirement. Regardless of whether points are transmitted in affine or compressed form, it is in general possible to create a distinguisher for the domain parameters, excluding the generator, both in the case of the elliptic curve groups previously mentioned, and in the case of multiplicative subgroups of GF(p). We propose that a good method for preventing all of the above attacks may be to use blinding schemes, and suggest new applications for existing blinding schemes originally designed for steganographic applications.
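As an illustration of why affine points leak the domain parameters, here is a hedged sketch for short Weierstrass curves y^2 = x^3 + ax + b over GF(p). The toy curve, point-sampling routine, and elimination steps are illustrative assumptions; the paper's exact procedure may differ.

```python
import math

# Hypothetical toy domain parameters the observer wants to recover.
p, a, b = 10007, 3, 7    # curve y^2 = x^3 + a*x + b over GF(p)

def sample_points(count):
    """Collect affine points on the curve. Since p = 3 (mod 4), square
    roots mod p are computed as rhs^((p+1)/4)."""
    pts, x = [], 1
    while len(pts) < count:
        rhs = (x**3 + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)
        if (y * y - rhs) % p == 0:
            pts.append((x, y))
        x += 1
    return pts

def recover(pts):
    """Each point satisfies y^2 - x^3 = a*x + b (mod p). Eliminating a
    and b over the integers yields multiples of p; their gcd reveals
    (a multiple of) the modulus, and a, b follow by linear algebra."""
    x = [P[0] for P in pts]
    t = [Y * Y - X**3 for X, Y in pts]           # t_i = a*x_i + b (mod p)
    d1 = (t[0] - t[1]) * (x[1] - x[2]) - (t[1] - t[2]) * (x[0] - x[1])
    d2 = (t[1] - t[2]) * (x[2] - x[3]) - (t[2] - t[3]) * (x[1] - x[2])
    pr = math.gcd(abs(d1), abs(d2))
    for f in range(2, 1000):                     # strip small accidental cofactors
        while pr % f == 0 and pr > f:
            pr //= f
    ar = (t[0] - t[1]) * pow(x[0] - x[1], -1, pr) % pr
    br = (t[0] - ar * x[0]) % pr
    return pr, ar, br
```

With four sampled points, `recover` returns a multiple of p together with values congruent to a and b, which is exactly the leakage the abstract warns about for affine transmission.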
Last updated:  2015-09-13
Making Existential-Unforgeable Signatures Strongly Unforgeable in the Quantum Random-Oracle Model
Edward Eaton, Fang Song
Strongly unforgeable signature schemes provide a more stringent security guarantee than the standard existential unforgeability. It requires not only that forging a signature on a new message is hard, but also that it is infeasible to produce a new signature on a message for which the adversary has already seen valid signatures. Strongly unforgeable signatures are useful both in practice and as a building block in many cryptographic constructions. This work investigates a generic transformation that compiles any existential-unforgeable scheme into a strongly unforgeable one, which was proposed by Teranishi et al. and was proven in the classical random-oracle model. Our main contribution is showing that the transformation also works against quantum adversaries in the quantum random-oracle model. We develop proof techniques such as adaptively programming a quantum random-oracle in a new setting, which could be of independent interest. Applying the transformation to an existential-unforgeable signature scheme due to Cash et al., which can be shown to be quantum-secure assuming certain lattice problems are hard for quantum computers, we get an efficient quantum-secure strongly unforgeable signature scheme in the quantum random-oracle model.
Last updated:  2015-09-13
Study of a Parity Check Based Fault-Detection Countermeasure for the AES Key Schedule
Uncategorized
Christophe Clavier, Julien Francq, Antoine Wurcker
Show abstract
Uncategorized
In this paper we study a parity check based countermeasure proposed by Chen et al. that thwarts their attack by detecting byte fault injection during the AES key schedule process. We provide a generalization of their approach that allows parity equations to be derived for all AES key sizes, which were not given by the authors. We analyze why the countermeasure of Chen et al. does not work properly. In doing so, we are able to extend the coverage of the fault detection to the full expanded key. Finally, we suggest optimizations that reduce memory and computation costs, and propose an adaptation to a more general fault model.
Last updated:  2015-09-13
10-Round Feistel is Indifferentiable from an Ideal Cipher
Uncategorized
Dana Dachman-Soled, Jonathan Katz, Aishwarya Thiruvengadam
Show abstract
Uncategorized
We revisit the question of constructing an ideal cipher from a random oracle. Coron et al.~(Journal of Cryptology, 2014) proved that a 14-round Feistel network using random, independent, keyed round functions is indifferentiable from an ideal cipher, thus demonstrating the feasibility of such a construction. Left unresolved is the best possible efficiency of the transformation. We improve upon the result of Coron et al.\ and show that 10 rounds suffice.
Last updated:  2015-09-13
Bent and Semi-bent Functions via Linear Translators
Uncategorized
Neşe Koçak, Sihem Mesnager, Ferruh Özbudak
Show abstract
Uncategorized
The paper is dealing with two important subclasses of plateaued functions: bent and semi-bent functions. In the first part of the paper, we construct mainly bent and semi-bent functions in the Maiorana-McFarland class using Boolean functions having linear structures (linear translators) systematically. Although most of these results are rather direct applications of some recent results, using linear structures (linear translators) allows us to have certain flexibilities to control extra properties of these plateaued functions. In the second part of the paper, using the results of the first part and exploiting these flexibilities, we modify many secondary constructions. Therefore, we obtain new secondary constructions of bent and semi-bent functions not belonging to the Maiorana-McFarland class. Instead of using bent (semi-bent) functions as ingredients, our secondary constructions use only Boolean (vectorial Boolean) functions with linear structures (linear translators) which are very easy to choose. Moreover, all of them are very explicit and we also determine the duals of the bent functions in our constructions. We show how these linear structures should be chosen in order to satisfy the corresponding conditions coming from using derivatives and quadratic/cubic functions in our secondary constructions.
Last updated:  2015-12-17
Indifferentiability of 10-Round Feistel Networks
Yuanxi Dai, John Steinberger
We prove that a (balanced) 10-round Feistel network is indifferentiable from a random permutation. In a previous seminal result, Holenstein et al. had established indifferentiability of Feistel at 14 rounds. Our simulator achieves security $O(q^8/2^n)$ and query complexity $O(q^4)$, where $n$ is half the block length, similarly to the 14-round simulator of Holenstein et al., so that our result is a strict (and also the first) improvement of that work. Our simulator is very similar to a 10-round simulator of Seurin that was subsequently found to be flawed. Indeed, the main change of our simulator is to switch to "FIFO" path completion from "LIFO" path completion. This relatively minor change results in an overall significant paradigm shift, including a conceptually simpler proof.
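For reference, the balanced Feistel network analyzed above can be sketched as follows. The indifferentiability results model the round functions as independent random functions; SHA-256 keyed per round is only an illustrative stand-in, and the key values and block size are hypothetical.

```python
import hashlib

def _round_fn(key, half, n):
    """Keyed round function: SHA-256 truncated to n bits (an
    illustrative stand-in for an independent random function)."""
    digest = hashlib.sha256(key + half.to_bytes(n // 8, "big")).digest()
    return int.from_bytes(digest, "big") & ((1 << n) - 1)

def feistel_encrypt(msg, keys, n=32):
    """Balanced Feistel over 2n-bit blocks: (L, R) -> (R, L xor F_i(R)),
    one round per key in `keys` (10 rounds for the result above)."""
    mask = (1 << n) - 1
    L, R = msg >> n, msg & mask
    for k in keys:
        L, R = R, L ^ _round_fn(k, R, n)
    return (L << n) | R

def feistel_decrypt(ct, keys, n=32):
    """Invert by running the rounds backwards: (L, R) -> (R xor F_i(L), L)."""
    mask = (1 << n) - 1
    L, R = ct >> n, ct & mask
    for k in reversed(keys):
        L, R = R ^ _round_fn(k, L, n), L
    return (L << n) | R
```

Each round only swaps halves and xors in a function of one half, so the network is a permutation for any choice of round functions; that structural fact, not the SHA-256 instantiation, is what the indifferentiability analysis relies on.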
Last updated:  2015-09-14
On the Diffusion Property of Iterated Functions
Uncategorized
Jian Liu, Sihem Mesnager, Lusheng Chen
Show abstract
Uncategorized
For vectorial Boolean functions, the behavior of iteration has consequence in the diffusion property of the system. We present a study on the diffusion property of iterated vectorial Boolean functions. The measure that will be of main interest here is the notion of the degree of completeness, which has been suggested by the NESSIE project. We provide the first (to the best of our knowledge) two constructions of $(n,n)$-functions having perfect diffusion property and optimal algebraic degree. We also obtain the complete enumeration results for the constructed functions.
Last updated:  2015-09-19
Traceability Improvements of a New RFID Protocol Based On EPC C1G2
Uncategorized
Seyed Salman Sajjadi GhaemMaghami, Afrooz Haghbin, Mahtab Mirmohseni
Show abstract
Uncategorized
Radio Frequency Identification (RFID) applications have spread all over the world and, in order to provide their security and privacy, researchers have proposed different kinds of protocols. In this paper, we analyze the privacy of a new protocol, proposed by Yu-Jehn in 2015, which is based on the Electronic Product Code Class 1 Generation 2 (EPC C1 G2) standard. By applying the Ouafi-Phan privacy model, we show that the Yu-Jehn protocol is vulnerable to traceability and forward traceability attacks and that it does not provide the privacy of RFID users. Then, to enhance the privacy of the analyzed protocol, an improved version of the protocol is proposed which eliminates the existing weaknesses of the Yu-Jehn protocol.
Last updated:  2016-02-21
Photonic Side Channel Analysis of Arbiter PUFs
Uncategorized
Shahin Tajik, Enrico Dietz, Sven Frohmann, Helmar Dittrich, Dmitry Nedospasov, Clemens Helfmeier, Jean-Pierre Seifert, Christian Boit, Heinz-Wilhelm Hübers
Show abstract
Uncategorized
As their name suggests, Physically Unclonable Functions (PUFs) are considered an ultimate solution to deal with insecure storage, hardware counterfeiting, and many other security problems. However, many different successful attacks have already revealed vulnerabilities of certain digital intrinsic PUFs. This paper demonstrates that the legacy arbiter PUF and its popular extended versions (i.e., Feed-forward and XOR-enhanced) can be completely and linearly characterized by means of photonic emission analysis. Our experimental setup is capable of measuring every PUF-internal delay with a resolution of 6 picoseconds. Due to this resolution we indeed require only the theoretical minimum number of linearly independent equations (i.e., physical measurements) to directly solve the underlying inhomogeneous linear system. Moreover, it is not required to know the actual PUF responses for our physical delay extraction. We present our practical results for an arbiter PUF implementation on a Complex Programmable Logic Device (CPLD) manufactured with a 180 nanometer process. Finally, we give an insight into photonic emission analysis of arbiter PUFs on smaller chip architectures by performing experiments on a Field Programmable Gate Array (FPGA) manufactured with a 60 nanometer process.
Last updated:  2015-09-08
Gambling, Computational Information and Encryption Security
Mohammad Hajiabadi, Bruce M. Kapron
We revisit the question, originally posed by Yao (1982), of whether encryption security may be characterized using computational information. Yao provided an affirmative answer, using a compression-based notion of computational information to give a characterization equivalent to the standard computational notion of semantic security. We give two other equivalent characterizations. The first uses a computational formulation of Kelly's (1957) model for "gambling with inside information", leading to an encryption notion which is similar to Yao's but where encrypted data is used by an adversary to place bets maximizing the rate of growth of total wealth over a sequence of independent, identically distributed events. The difficulty of this gambling task is closely related to Vadhan and Zheng's (2011) notion of KL-hardness, which in certain cases is equivalent to a conditional form of the pseudoentropy introduced by Håstad et al. (1999). Using techniques introduced to prove this equivalence, we are also able to give a characterization of encryption security in terms of conditional pseudoentropy. Finally, we reconsider the gambling model with respect to "risk neutral" adversaries in an attempt to understand whether assumptions about the rationality of adversaries may impact the level of security achieved by an encryption scheme.
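The Kelly growth-rate objective underlying the gambling model can be sketched as follows, for repeated even-odds binary bets only; the paper's contribution, relating this objective to encryption security, is not reproduced here, and the function name is ours.

```python
import math

def kelly_growth(p):
    """Expected log-wealth growth rate per round for even-odds binary
    bets when the bettor believes the win probability is p. The Kelly
    fraction f = 2p - 1 (bet nothing when p <= 1/2) maximizes
    E[log wealth]; side information that sharpens p increases the
    achievable growth rate."""
    f = max(0.0, 2 * p - 1)
    if f == 0.0:
        return 0.0
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)
```

In the paper's setting, an adversary who extracts information from a ciphertext can shift its belief p away from 1/2 and thereby achieve positive growth, which is what the gambling-based security notion rules out.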
Last updated:  2015-09-08
New Realizations of Somewhere Statistically Binding Hashing and Positional Accumulators
Tatsuaki Okamoto, Krzysztof Pietrzak, Brent Waters, Daniel Wichs
A somewhere statistically binding (SSB) hash, introduced by Hubacek and Wichs (ITCS '15), can be used to hash a long string $x$ to a short digest $y = H_{\hk}(x)$ using a public hashing-key $\hk$. Furthermore, there is a way to set up the hash key $\hk$ to make it statistically binding on some arbitrary hidden position $i$, meaning that: (1) the digest $y$ completely determines the $i$'th bit (or symbol) of $x$ so that all pre-images of $y$ have the same value in the $i$'th position, (2) it is computationally infeasible to distinguish the position $i$ on which $\hk$ is statistically binding from any other position $i'$. Lastly, the hash should have a local opening property analogous to Merkle-Tree hashing, meaning that given $x$ and $y = H_{\hk}(x)$ it should be possible to create a short proof $\pi$ that certifies the value of the $i$'th bit (or symbol) of $x$ without having to provide the entire input $x$. A similar primitive called a positional accumulator, introduced by Koppula, Lewko and Waters (STOC '15) further supports dynamic updates of the hashed value. These tools, which are interesting in their own right, also serve as one of the main technical components in several recent works building advanced applications from indistinguishability obfuscation (iO). The prior constructions of SSB hashing and positional accumulators required fully homomorphic encryption (FHE) and iO respectively. In this work, we give new constructions of these tools based on well studied number-theoretic assumptions such as DDH, Phi-Hiding and DCR, as well as a general construction from lossy/injective functions.
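The local opening property mentioned above is the familiar Merkle-tree mechanism. Here is a minimal sketch of that mechanism alone; note this is ordinary Merkle hashing, not an SSB hash (it lacks the hidden statistically binding position), and a power-of-two leaf count is assumed.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Digest of the whole input (leaf count must be a power of two)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0]

def open_leaf(leaves, i):
    """Short proof pi certifying leaf i: the sibling hash on each level."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[i ^ 1])
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(root, leaf, i, proof):
    """Recompute the root from the claimed leaf and the sibling path."""
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if i % 2 == 0 else h(sib + node)
        i //= 2
    return node == root
```

The proof size is logarithmic in the number of leaves, which is the "short proof pi" property the abstract requires; an SSB hash additionally makes one (computationally hidden) position statistically binding.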
Last updated:  2015-09-08
Optimally Secure Block Ciphers from Ideal Primitives
Stefano Tessaro
Recent advances in block-cipher theory deliver security analyses in models where one or more underlying components (e.g., a function or a permutation) are {\em ideal} (i.e., randomly chosen). This paper addresses the question of finding {\em new} constructions achieving the highest possible security level under minimal assumptions in such ideal models. We present a new block-cipher construction, derived from the Swap-or-Not construction by Hoang et al. (CRYPTO '12). With $n$-bit block length, our construction is a secure pseudorandom permutation (PRP) against attackers making $2^{n - O(\log n)}$ block-cipher queries, and $2^{n - O(1)}$ queries to the underlying component (which has itself domain size roughly $n$). This security level is nearly optimal. So far, only key-alternating ciphers have been known to achieve comparable security levels using $O(n)$ independent random permutations. In contrast, here we only assume that a {\em single} {\em function} or {\em permutation} is available, while achieving similar efficiency. Our second contribution is a generic method to enhance a block cipher, initially only secure as a PRP, to achieve related-key security with comparable quantitative security.
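The Swap-or-Not round structure the construction builds on can be sketched as follows. The round functions are ideal (random) in the analysis; SHA-256 here is an illustrative stand-in, and the key/seed values are hypothetical.

```python
import hashlib

def bit_fn(round_idx, seed, xhat, n):
    """Round predicate F_i: {0,1}^n -> {0,1}, modeled as random in the
    analysis; instantiated here with SHA-256 for illustration only."""
    d = hashlib.sha256(bytes([round_idx]) + seed
                       + xhat.to_bytes((n + 7) // 8, "big")).digest()
    return d[0] & 1

def swap_or_not(x, round_keys, seed, n):
    """Swap-or-Not (Hoang et al.): in round i, replace x by x xor K_i
    iff F_i evaluates to 1 on the canonical representative
    max(x, x xor K_i). Since max(x, x ^ K_i) is the same for x and
    x ^ K_i, every round is an involution, hence a permutation."""
    for i, k in enumerate(round_keys):
        if bit_fn(i, seed, max(x, x ^ k), n):
            x ^= k
    return x

def swap_or_not_inv(x, round_keys, seed, n):
    """Invert by applying the (involutive) rounds in reverse order."""
    for i, k in reversed(list(enumerate(round_keys))):
        if bit_fn(i, seed, max(x, x ^ k), n):
            x ^= k
    return x
```

The sketch shows why only a single function (here `bit_fn`) is needed per round, which is the minimal-assumption flavor the paper pursues.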
Last updated:  2015-11-25
Multilinear and Aggregate Pseudorandom Functions: New Constructions and Improved Security
Uncategorized
Michel Abdalla, Fabrice Benhamouda, Alain Passelègue
Show abstract
Uncategorized
Since its introduction, pseudorandom functions (PRFs) have become one of the main building blocks of cryptographic protocols. In this work, we revisit two recent extensions of standard PRFs, namely multilinear and aggregate PRFs, and provide several new results for these primitives. In the case of aggregate PRFs, one of our main results is a proof of security for the Naor-Reingold PRF with respect to read-once boolean aggregate queries under the standard Decision Diffie-Hellman problem, which was an open problem. In the case of multilinear PRFs, one of our main contributions is the construction of new multilinear PRFs achieving indistinguishability from random symmetric and skew-symmetric multilinear functions, which was also left as an open problem. In order to achieve these results, our main technical tool is a simple and natural generalization of the recent linear independent polynomial framework for PRFs proposed by Abdalla, Benhamouda, and Passelègue in Crypto 2015, that can handle larger classes of PRF constructions. In addition to simplifying and unifying proofs for multilinear and aggregate PRFs, our new framework also yields new constructions which are secure under weaker assumptions, such as the decisional $k$-linear assumption.
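The Naor-Reingold PRF at the center of the aggregate-PRF result can be sketched as follows; the group parameters are toy values chosen for illustration (a real instantiation would use a cryptographically large prime-order group).

```python
import random

# Toy group parameters (illustrative): order-q subgroup of Z_p^*.
p = 1019                     # prime with p - 1 = 2 * 509
q = 509                      # prime subgroup order
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

n = 8  # PRF input length in bits
key = [random.randrange(1, q) for _ in range(n + 1)]  # (a_0, ..., a_n)

def nr_prf(key, x_bits):
    """Naor-Reingold PRF: F(key, x) = g^(a_0 * prod_{i: x_i = 1} a_i) mod p."""
    e = key[0]
    for bit, a in zip(x_bits, key[1:]):
        if bit:
            e = e * a % q
    return pow(g, e, p)
```

The aggregate result referenced above concerns answering aggregate queries (e.g., products of many `nr_prf` values over read-once boolean predicates) without the straightforward exponential blow-up in the security reduction.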
Last updated:  2015-10-30
Graded Encoding, Variations on a Scheme
Shai Halevi
In this note we provide a more-or-less unified framework to talk about the functionality and security of graded encoding schemes, describe some variations of recent schemes, and discuss their security. In particular we describe schemes that combine elements from both the GGH13 scheme of Garg, Gentry and Halevi (EUROCRYPT 2013) and the GGH15 scheme of Gentry, Gorbunov and Halevi (TCC 2015). On one hand, we show how to use techniques from GGH13 in the GGH15 construction to enable encoding of arbitrary plaintext elements (as opposed to only small ones) and to introduce "levels/subsets" (e.g., as needed to implement straddling sets). On the other hand, we show how to modify the GGH13 scheme to support graph-induced constraints (either instead of, or in addition to, the levels from GGH13). Turning to security, we describe zeroizing attacks on the GGH15 scheme, similar to those described by Cheon et al. (EUROCRYPT 2015) and Coron et al. (CRYPTO 2015) on the CLT13 and GGH13 constructions. As far as we know, however, these attacks do not break the GGH15 multi-partite key-agreement protocol. We also describe a new multi-partite key-agreement protocol using the GGH13 scheme, which also seems to resist known attacks. That protocol suggests a relatively simple hardness assumption for the GGH13 scheme, that we put forward as a target for cryptanalysis.
Last updated:  2015-11-02
Card-based Cryptographic Protocols Using a Minimal Number of Cards
Uncategorized
Alexander Koch, Stefan Walzer, Kevin Härtel
Show abstract
Uncategorized
Secure multiparty computation can be done with a deck of playing cards. For example, den Boer (EUROCRYPT ’89) devised his famous “five-card trick”, which is a secure two-party AND protocol using five cards. However, the output of the protocol is revealed in the process and it is therefore not suitable for general circuits with hidden intermediate results. To overcome this limitation, protocols in committed format, i.e., with concealed output, have been introduced, among them the six-card AND protocol of (Mizuki and Sone, FAW 2009). In their paper, the authors ask whether six cards are minimal for committed format AND protocols. We give a comprehensive answer to this problem: there is a four-card AND protocol with a runtime that is finite in expectation (i.e., a Las Vegas protocol), but no protocol with finite runtime. Moreover, we show that five cards are sufficient for finite runtime. In other words, improving on (Mizuki, Kumamoto, and Sone, ASIACRYPT 2012) “The Five-Card Trick can be done with four cards”, our results can be stated as “The Five-Card Trick can be done in committed format” and furthermore it “can be done with four cards in Las Vegas committed format”. By devising a Las Vegas protocol for any $k$-ary boolean function using $2k$ cards, we address the open question posed by (Nishida et al., TAMC 2015) on whether $2k+6$ cards are necessary for computing any $k$-ary boolean function. For this we use the shuffle abstraction as introduced in the computational model of card-based protocols in (Mizuki and Shizuya, Int. J. Inf. Secur., 2014). We augment this result by a discussion on implementing such general shuffle operations.
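Den Boer's five-card trick can be simulated in a few lines. The encoding below follows one common presentation (club-heart for 0, heart-club for 1, with Bob's commitment reversed); conventions in the literature vary, so treat the exact layout as illustrative.

```python
import random

def commit(bit):
    """Encode a bit as two face-down cards: CH = 0, HC = 1."""
    return ["H", "C"] if bit else ["C", "H"]

def five_card_and(a, b):
    """Five-card trick: Alice's commitment, one extra club, Bob's
    commitment reversed; after a random cut, the two hearts are
    cyclically adjacent exactly when a AND b = 1."""
    seq = commit(a) + ["C"] + commit(b)[::-1]
    cut = random.randrange(5)            # the random cut hides all else
    seq = seq[cut:] + seq[:cut]
    i, j = sorted(k for k, c in enumerate(seq) if c == "H")
    return (j - i) % 5 in (1, 4)         # cyclic adjacency test
```

Note that this protocol reveals the AND value in the clear, which is exactly the limitation the committed-format protocols discussed above (four, five, or six cards) are designed to overcome.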
Last updated:  2015-09-08
Encryption Performance Improvements of the Paillier Cryptosystem
Christine Jost, Ha Lam, Alexander Maximov, Ben Smeets
Homomorphic encryption methods provide a way to outsource computations to the cloud while protecting the confidentiality of the data. In order to deal with the large and growing data sets that are being processed nowadays, good encryption performance is an important step for practicality of homomorphic encryption methods. In this article, we study the encryption performance of the Paillier cryptosystem, a partially homomorphic cryptosystem that allows sums to be computed on encrypted data without decrypting first. With a combination of both new and known methods, we increase the encryption performance by orders of magnitude compared to a naïve implementation. The new methods reduce the bottleneck of noise calculation by using pre-computed noise to generate new noise in a much faster way than by using standard methods.
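As a rough illustration of the pre-computed-noise idea, here is a toy Paillier sketch in Python. The parameters, pool size, and pooling strategy (multiplying two cached noise terms, which again yields a valid noise term (r1*r2)^n) are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Toy parameters for illustration only -- real Paillier needs n of
# 2048 bits or more.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1          # standard choice, since (1+n)^m = 1 + m*n (mod n^2)

def fresh_noise():
    """A standard Paillier noise term r^n mod n^2, r random in Z_n^*.
    This modular exponentiation dominates encryption cost."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            return pow(r, n, n2)

# Pre-computed pool: new noise is derived cheaply by multiplying two
# cached terms, since (r1^n)(r2^n) = (r1*r2)^n mod n^2.
pool = [fresh_noise() for _ in range(16)]

def cheap_noise():
    u, v = random.sample(pool, 2)
    return u * v % n2

def encrypt(m, noise=None):
    return (1 + m * n) * (noise if noise is not None else cheap_noise()) % n2

lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n
```

The additive homomorphism is what makes Paillier useful here: multiplying ciphertexts adds plaintexts, e.g. `decrypt(encrypt(3) * encrypt(4) % n2)` yields 7.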
Last updated:  2015-09-09
Is There an Oblivious RAM Lower Bound?
Uncategorized
Elette Boyle, Moni Naor
Show abstract
Uncategorized
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e. for every input the observed locations accessed are similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible. We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, including the offline case, when the simulator is given the reads and writes ahead of time, it does assume that the simulator behaves in a “balls and bins” fashion. That is, the simulator must act by shuffling data items around, and is not allowed to have sophisticated encoding of the data. We prove that for the offline case, showing a lower bound without the above restriction is related to the size of the circuits for sorting. Our proof is constructive, and uses a bit-slicing approach which manipulates the bit representations of data in the simulation. This implies that without obtaining yet unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.
Last updated:  2015-09-08
Idealizing Identity-Based Encryption
Dennis Hofheinz, Christian Matt, Ueli Maurer
We formalize the standard application of identity-based encryption (IBE), namely non-interactive secure communication, as realizing an ideal system which we call delivery controlled channel (DCC). This system allows users to be registered (by a central authority) for an identity and to send messages securely to other users only known by their identity. Quite surprisingly, we show that existing security definitions for IBE are not sufficient to realize DCC. In fact, it is impossible to do so in the standard model. We show, however, how to adjust any IBE scheme that satisfies the standard security definition IND-ID-CPA to achieve this goal in the random oracle model. We also show that the impossibility result can be avoided in the standard model by considering a weaker ideal system that requires all users to be registered in an initial phase before any messages are sent. To achieve this, a weaker security notion, which we introduce and call IND-ID1-CPA, is actually sufficient. This justifies our new security definition and might open the door for more efficient schemes. We further investigate which ideal systems can be realized with schemes satisfying the standard notion and variants of selective security. As a contribution of independent interest, we show how to model features of an ideal system that are potentially available to dishonest parties but not guaranteed, and identify which such features arise when using IBE.
Last updated:  2015-09-06
A Synthetic Indifferentiability Analysis of Interleaved Double-Key Even-Mansour Ciphers
Uncategorized
Chun Guo, Dongdai Lin
Show abstract
Uncategorized
Iterated Even-Mansour scheme (IEM) is a generalization of the basic 1-round proposal (ASIACRYPT '91). The scheme can use one key, two keys, or completely independent keys. Most of the published security proofs for IEM against relate-key and chosen-key attacks focus on the case where all the round-keys are derived from a single master key. Whereas results beyond this barrier are relevant to the cryptographic problem whether a secure blockcipher with key-size twice the block-size can be built by mixing two \emph{relatively independent} keys into IEM and iterating sufficiently many rounds, and this strategy actually has been used in designing blockciphers for a long-time. This work makes the first step towards breaking this barrier and considers IEM with Interleaved Double \emph{independent} round-keys: $$\text{IDEM}_r((k_1,k_2),m)=k_i\oplus (P_r(\ldots k_1\oplus P_2(k_2\oplus P_1(k_1\oplus m))\ldots)),$$ where $i=2$ when $r$ is odd, and $i=1$ when $r$ is even. As results, this work proves that 15 rounds can achieve (full) indifferentiability from an ideal cipher with $O({q^{8}}/{2^n})$ security bound. This work also proves that 7 rounds is sufficient and necessary to achieve sequential-indifferentiability (a notion introduced at TCC 2012) with $O({q^{6}}/{2^n})$ security bound, so that $\text{IDEM}_{7}$ is already correlation intractable and secure against any attack that exploits evasive relations between its input-output pairs.
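The IDEM formula can be sanity-checked with a toy implementation; the block size, key values, and table-based permutations are illustrative (the P_i are modeled as public random permutations in the proof).

```python
import random

random.seed(7)
n = 8               # toy block size in bits
N = 1 << n

def rand_perm():
    """A public permutation P_i, modeled as ideal in the analysis."""
    perm = list(range(N))
    random.shuffle(perm)
    return perm

def idem(perms, k1, k2, m):
    """IDEM_r: xor k1, apply P_1, xor k2, apply P_2, xor k1, ...,
    finishing with the key k_i where i = 2 if r is odd and i = 1 if r
    is even, matching the formula in the abstract."""
    keys = (k1, k2)
    x = m
    for i, P in enumerate(perms):
        x = P[x ^ keys[i % 2]]
    return x ^ keys[len(perms) % 2]
```

Each step is a bijection, so IDEM_r is a permutation of the block space for any round count; the abstract's results concern how many rounds are needed before it becomes indistinguishable (in the indifferentiability sense) from an ideal cipher.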
Last updated:  2015-09-15
Selective Opening Security for Receivers
Carmit Hazay, Arpita Patra, Bogdan Warinschi
In a selective opening (SO) attack an adversary breaks into a subset of honestly created ciphertexts and tries to learn information on the plaintexts of some untouched (but potentially related) ciphertexts. Contrary to intuition, standard security notions do not always imply security against this type of adversary, making SO security an important standalone goal. In this paper we study {\em receiver security}, where the attacker is allowed to obtain the decryption keys corresponding to some of the ciphertexts. First we study the relation between two existing security definitions, one based on simulation and the other based on indistinguishability, and show that the former is strictly stronger. We continue with feasibility results for both notions which we show can be achieved from (variants of) non-committing encryption schemes. In particular, we show that indistinguishability-based SO security can be achieved from a tweaked variant of non-committing encryption which, in turn, can be instantiated from a variety of basic, well-established, assumptions. We conclude our study by showing that SO security is however strictly weaker than all variants of non-committing encryption that we consider, leaving potentially more efficient constructions as an interesting open problem.
Last updated:  2017-02-27
Factor Base Discrete Logarithms in Kummer Extensions
Uncategorized
Dianyan Xiao, Jincheng Zhuang, Qi Cheng
Show abstract
Uncategorized
The discrete logarithm over finite fields of small characteristic can be solved much more efficiently than previously thought. This algorithmic breakthrough is based on pinpointing relations among the factor base discrete logarithms. In this paper, we concentrate on the Kummer extension $ \F_{q^{2(q-1)}}=\F_{q^2}[x]/(x^{q-1}-A). $ It has been suggested that in this case, a small number of degenerate relations (from the Borel subgroup) are enough to solve the factor base discrete logarithms. We disprove the conjecture, and design a new heuristic algorithm with an improved bit complexity $ \tilde{O}(q^{1+ \theta} ) $ (or algebraic complexity $\tilde{O}(q^{\theta} )$) to compute discrete logarithms of all the elements in the factor base $\{ x+\alpha | \alpha \in \F_{q^2} \} $, where $ \theta<2.38 $ is the matrix multiplication exponent over rings. Given additional time $ \tilde{O} (q^4), $ we can compute discrete logarithms of at least $ \Omega(q^3) $ many monic irreducible quadratic polynomials. We reduce the correctness of the algorithm to a conjecture concerning the determinant of a simple $ (q+1)$-dimensional lattice, rather than to elusive smoothness assumptions. We verify the conjecture numerically for all prime powers $ q $ such that $ \log_2(q^{2(q-1)}) \leq 5134 $, and provide supporting theoretical evidence.
Last updated:  2020-10-26
Skipping the $q$ in Group Signatures
Olivier Blazy, Saqib A. Kakvi
The notion of group signatures was introduced to allow group members to sign anonymously on behalf of a group. A group manager allows a user to join a group, and another manager is able to open a signature to revoke its anonymity. Several schemes have already been proposed to fulfil these properties; however, very few of them are proven in the standard model. Of those proven in the standard model, most rely on a so-called $q$-assumption. The underlying idea of a $q$-assumption is that, to prove the security of the scheme, we are given a challenge long enough to allow the simulator to answer queries. Another common solution is to rely on interactive hypotheses. We provide one of the first schemes proven in the standard model that requires only a constant-size non-interactive hypothesis. We then compare its efficiency to existing schemes, and show that this trade-off is acceptable, as most schemes with better efficiency rely on either an interactive or a $q$-hypothesis. The exception to this is the recent work of Libert, Peters and Yung (CRYPTO 2015), independent of ours, which presents an efficient group signature scheme in the standard model relying on standard assumptions.