All papers in 2016 (Page 4 of 1195 results)

Last updated:  2016-09-14
Security Analysis of Anti-SAT
Muhammad Yasin, Bodhisatwa Mazumdar, Ozgur Sinanoglu, Jeyavijayan Rajendran
Logic encryption protects integrated circuits (ICs) against intellectual property (IP) piracy and overbuilding attacks by encrypting the IC with a key. A Boolean satisfiability (SAT) based attack breaks all existing logic encryption techniques within a few hours. Recently, a defense mechanism known as Anti-SAT was presented that protects against the SAT attack by rendering the attack effort exponential in the number of key gates. In this paper, we highlight the vulnerabilities of Anti-SAT and propose a signal probability skew (SPS) attack against the Anti-SAT block. The SPS attack leverages structural traces in the Anti-SAT block to identify and isolate it. The attack is 100% successful on all variants of the Anti-SAT block. The SPS attack is scalable to large circuits, as it breaks circuits with up to 22K gates within two minutes.
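A minimal sketch of the structural signal such an attack exploits (illustrative only, not the authors' tool; the toy netlist and wire names are hypothetical): propagate signal probabilities through a gate-level netlist under an input-independence assumption and measure the skew. The AND trees inside an Anti-SAT block produce wires whose skew is far larger than that of typical wires.

```python
# Illustrative sketch: estimating signal probability skew in a netlist,
# assuming independent gate inputs. An n-input AND tree outputs 1 with
# probability 2^-n, i.e. with skew close to 1/2, which is the kind of
# structural trace a skew-based attack looks for.

def signal_prob(gate, inputs):
    """Probability that a 2-input gate outputs 1, given input probabilities."""
    a, b = inputs
    if gate == "AND":
        return a * b
    if gate == "OR":
        return a + b - a * b
    if gate == "XOR":
        return a + b - 2 * a * b
    if gate == "NAND":
        return 1 - a * b
    raise ValueError(gate)

def skew(p):
    # Skew of a signal: distance of Pr[signal = 1] from 1/2.
    return abs(p - 0.5)

# Toy netlist: a 4-input AND tree; primary inputs are unbiased.
netlist = {"g1": ("AND", ["x1", "x2"]),
           "g2": ("AND", ["x3", "x4"]),
           "g3": ("AND", ["g1", "g2"])}
probs = {w: 0.5 for w in ("x1", "x2", "x3", "x4")}
for wire in ("g1", "g2", "g3"):  # wires listed in topological order
    gate, ins = netlist[wire]
    probs[wire] = signal_prob(gate, [probs[i] for i in ins])

print(skew(probs["g3"]))  # 0.4375, far more skewed than a random wire
```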
Last updated:  2017-05-24
Leakage-Abuse Attacks against Order-Revealing Encryption
Paul Grubbs, Kevin Sekniqi, Vincent Bindschaedler, Muhammad Naveed, Thomas Ristenpart
Order-preserving encryption and its generalization order-revealing encryption (OPE/ORE) are used in a variety of settings in practice in order to allow sorting, performing range queries, and filtering data — all while only having access to ciphertexts. But OPE and ORE ciphertexts necessarily leak information about plaintexts, and what level of security they provide has been unclear. In this work, we introduce new leakage-abuse attacks that show how to recover plaintexts from OPE/ORE-encrypted databases. Underlying our new attacks against practically-used schemes is a framework in which we cast the adversary’s challenge as a non-crossing bipartite matching problem. This allows easy tailoring of attacks to a specific scheme’s leakage profile. In a case study of customer records, we show attacks that recover 99% of first names, 97% of last names, and 90% of birthdates held in a database, despite all values being encrypted with the OPE scheme most widely used in practice. We also show the first attack against the recent frequency-hiding Kerschbaum scheme, to which no prior attacks have been demonstrated. Our attack recovers frequently occurring plaintexts most of the time.
Last updated:  2017-01-13
Indifferentiability of 3-Round Even-Mansour with Random Oracle Key Derivation
Chun Guo, Dongdai Lin
We revisit the Even-Mansour (EM) scheme with random oracle key derivation previously considered by Andreeva et al. (CRYPTO 2013). For this scheme, Andreeva et al. provided an indifferentiability (from an ideal $(k,n)$-cipher) proof for 5 rounds, while they exhibited an attack for 2 rounds. Left open was the (in)differentiability of 3 and 4 rounds. We present a proof of the indifferentiability of 3 rounds, thus closing the aforementioned gap. This also separates EM ciphers with non-invertible key derivations from those with invertible ones in the full indifferentiability setting. Prior work only established such a separation in the weaker sequential-indifferentiability setting (ours, DCC, 2015). Our results also imply that 3-round EM is indifferentiable under multiple random known keys, partially settling a problem left by Cogliati and Seurin (FSE 2016). The key point of our indifferentiability simulator is to pre-emptively obtain some chains of ideal-cipher queries in order to simulate the structures arising from the related-key boomerang property in the 3-round case. The length of such chains has to be as large as the number of queries issued by the distinguisher, so the situation somewhat resembles the context of the hash-of-hash construction $H^2$ considered by Dodis et al. (CRYPTO 2012). A further technical novelty of our proof is the absence of the so-called distinguisher that completes all chains.
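For readers unfamiliar with the object under study, here is a toy sketch of the 3-round Even-Mansour construction with random-oracle key derivation (the 16-bit domain, the SHA-256 stand-in for the random oracle, and the shuffled-table stand-ins for the public permutations are all illustrative assumptions, not the paper's formalism):

```python
import hashlib
import random

N = 2 ** 16  # toy domain size (16-bit blocks)

def make_perm(seed):
    # Stand-in for a public permutation P_i, realized as a fixed shuffled
    # lookup table; the indifferentiability proof models these as ideal.
    rng = random.Random(seed)
    table = list(range(N))
    rng.shuffle(table)
    return table

P = [make_perm(i) for i in range(3)]  # the three public permutations

def round_key(key, i):
    # Random-oracle key derivation: k_i = H(key || i), truncated to 16 bits.
    return int.from_bytes(hashlib.sha256(key + bytes([i])).digest()[:2], "big")

def em3_encrypt(key, m):
    # 3-round EM: c = P3(P2(P1(m ^ k0) ^ k1) ^ k2) ^ k3
    x = m
    for i in range(3):
        x = P[i][x ^ round_key(key, i)]
    return x ^ round_key(key, 3)

print(hex(em3_encrypt(b"master key", 0x1234)))
```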
Last updated:  2016-09-14
Building web applications on top of encrypted data using Mylar
Raluca Ada Popa, Emily Stark, Jonas Helfer, Steven Valdez, Nickolai Zeldovich, M. Frans Kaashoek, Hari Balakrishnan
Web applications rely on servers to store and process confidential information. However, anyone who gains access to the server (e.g., an attacker, a curious administrator, or a government) can obtain all of the data stored there. This paper presents Mylar, a platform that provides end-to-end encryption to web applications. Mylar protects the confidentiality of sensitive data fields against attackers that have gained access to servers. Mylar stores sensitive data encrypted on the server, and decrypts that data only in users’ browsers. Mylar addresses three challenges in making this approach work. First, Mylar allows the server to perform keyword search over encrypted documents, even if the documents are encrypted with different keys. Second, Mylar allows users to share keys securely in the presence of an active adversary. Finally, Mylar ensures that client-side application code is authentic, even if the server is malicious. Results with a prototype of Mylar built on top of the Meteor framework are promising: porting 6 applications required changing just 36 lines of code on average, and the performance overheads are modest, amounting to a 17% throughput loss and a 50 ms latency increase for sending a message in a chat application.
Last updated:  2017-10-17
Privacy-Preserving Distributed Linear Regression on High-Dimensional Data
Adrià Gascón, Phillipp Schoppmann, Borja Balle, Mariana Raykova, Jack Doerner, Samee Zahur, David Evans
We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao's garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.'s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
Last updated:  2017-02-01
Tightly Secure IBE under Constant-size Master Public Key
Jie Chen, Junqing Gong, Jian Weng
Chen and Wee [CRYPTO, 2013] proposed the first almost tightly and adaptively secure IBE in the standard model and left two open problems, which called for a tightly secure IBE with (1) constant-size master public key and/or (2) constant security loss. In this paper, we propose an IBE scheme with constant-size master public key and tighter security reduction. This (partially) solves Chen and Wee's first open problem and makes progress on the second one. Technically, our IBE scheme is built on Wee's petit IBE scheme [TCC, 2016] in a composite-order bilinear group whose order is the product of four primes. The sizes of the master public key, ciphertexts, and secret keys are not only constant but also nearly optimal, as in Wee's petit IBE. We can prove its adaptive security in the multi-instance, multi-ciphertext setting [PKC, 2015] based on the decisional subgroup assumption and a subgroup variant of the DBDH assumption. The security loss is $O(\log q)$, where $q$ is an upper bound on the total number of secret keys and challenge ciphertexts revealed to the adversary in each single IBE instance. This is much smaller, in a concrete sense, than the loss for all known adaptively secure IBE schemes.
Last updated:  2018-02-23
A Parallel Variant of LDSieve for the SVP on Lattices
Artur Mariano, Thijs Laarhoven, Christian Bischof
In this paper, we propose a parallel implementation of LDSieve, a recently published sieving algorithm for the SVP that achieves the best theoretical complexity to date, on parallel shared-memory systems. In particular, we propose a scalable parallel variant of LDSieve that is probabilistically lock-free and relaxes the properties of the algorithm to favour parallelism. We use our parallel variant of LDSieve to answer a number of important questions pertaining to the algorithm. In particular, we show that LDSieve scales fairly well on shared-memory systems and uses much less memory than HashSieve on random lattices, for the same or even less execution time.
Last updated:  2019-07-20
Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol
Aggelos Kiayias, Alexander Russell, Bernardo David, Roman Oliynykov
We present ``Ouroboros,'' the first blockchain protocol based on proof of stake with rigorous security guarantees. We establish security properties for the protocol comparable to those achieved by the bitcoin blockchain protocol. As the protocol provides a ``proof of stake'' blockchain discipline, it offers qualitative efficiency advantages over blockchains based on proof of physical resources (e.g., proof of work). We also present a novel reward mechanism for incentivizing proof of stake protocols and we prove that, given this mechanism, honest behavior is an approximate Nash equilibrium, thus neutralizing attacks such as selfish mining. We also present initial evidence of the practicality of our protocol in real world settings by providing experimental results on transaction confirmation and processing.
Last updated:  2019-02-16
Finding closest lattice vectors using approximate Voronoi cells
Emmanouil Doulgerakis, Thijs Laarhoven, Benne de Weger
The two traditional hard problems underlying the security of lattice-based cryptography are the shortest vector problem (SVP) and the closest vector problem (CVP). For a long time, lattice enumeration was considered the fastest method for solving these problems in high dimensions, but recent work on memory-intensive methods has resulted in lattice sieving overtaking enumeration both in theory and in practice. Some of the recent improvements [Ducas, Eurocrypt 2018; Laarhoven-Mariano, PQCrypto 2018; Albrecht-Ducas-Herold-Kirshanova-Postlethwaite-Stevens, Eurocrypt 2019] are based on the fact that these methods find more than just one short lattice vector, and this additional data can be reused effectively later on to solve other, closely related problems faster. Similarly, results for the preprocessing version of CVP (CVPP) have demonstrated that once this initial data has been generated, instances of CVP can be solved faster than when solving them directly, albeit with worse memory complexities [Laarhoven, SAC 2016]. In this work we study CVPP in terms of approximate Voronoi cells, and obtain better time and space complexities using randomized slicing, which is similar in spirit to using randomized bases in lattice enumeration [Gama-Nguyen-Regev, Eurocrypt 2010]. With this approach, we improve upon the state-of-the-art complexities for CVPP, both theoretically and experimentally, with a practical speedup of several orders of magnitude compared to non-preprocessed SVP or CVP. Such a fast CVPP solver may give rise to faster enumeration methods, where the CVPP solver is used to replace the bottom part of the enumeration tree, consisting of a batch of CVP instances in the same lattice. Asymptotically, we further show that we can solve an exponential number of instances of CVP in a lattice in essentially the same amount of time and space as the fastest method for solving just one CVP instance. This is in line with various recent results, showing that perhaps the biggest strength of memory-intensive methods lies in being able to reuse the generated data several times. Similar to [Ducas, Eurocrypt 2018], this further means that we can achieve a ``few dimensions for free'' for sieving for SVP or CVP, by doing $\Theta(d/\log d)$ levels of enumeration on top of a CVPP solver based on approximate Voronoi cells.
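The reduction step at the heart of the approximate-Voronoi approach can be sketched compactly (a simplified sketch under toy parameters; the paper's randomized slicing reruns this reduction from rerandomized targets):

```python
# Minimal sketch of the iterative slicer: given a preprocessed list L of
# short lattice vectors, keep reducing the target t by any list vector that
# shortens it; target - t is then the guessed closest lattice vector.

import numpy as np

def iterative_slicer(target, short_vectors):
    t = np.array(target, dtype=float)
    improved = True
    while improved:
        improved = False
        for v in short_vectors:
            if np.linalg.norm(t - v) < np.linalg.norm(t):
                t = t - v
                improved = True
    return np.array(target) - t  # closest-vector guess

# Toy example in Z^2: the list contains the short basis vectors and negations.
L = [np.array(v, dtype=float) for v in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
print(iterative_slicer((2.3, -1.7), L))  # [ 2. -2.]
```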
Last updated:  2017-02-28
A generalisation of Dillon's APN permutation with the best known differential and nonlinear properties for all fields of size $2^{4k+2}$
Anne Canteaut, Sébastien Duval, Léo Perrin
The existence of Almost Perfect Nonlinear (APN) permutations operating on an even number of variables was a long-standing open problem, until an example with six variables was exhibited by Dillon et al. in 2009. However, it is still unknown whether this example can be generalised to any even number of inputs. In a recent work, Perrin et al. described an infinite family of permutations, named butterflies, operating on (4k+2) variables and with differential uniformity at most 4, which contains the Dillon APN permutation. In this paper, we generalise this family, and we completely solve the two open problems raised by Perrin et al. Indeed, we prove that all functions in this larger family have the best known nonlinearity. We also show that this family does not contain any APN permutation besides the Dillon permutation, implying that all other functions have differential uniformity exactly four.
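The central quantity is straightforward to compute for small S-boxes. A minimal sketch follows; the 3-bit inversion S-box is a classical APN example, used here purely for illustration, while butterflies achieve uniformity 4 on 4k+2 variables:

```python
# Differential uniformity of an S-box over GF(2)^n:
#   max over a != 0 and b of #{x : S(x ^ a) ^ S(x) = b}.
# APN means this maximum equals 2.

def differential_uniformity(sbox):
    n = len(sbox)
    best = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        best = max(best, max(counts))
    return best

# Inversion in GF(2^3) with modulus x^3 + x + 1 (0 maps to 0), given as a
# lookup table; a classical APN permutation on 3 bits.
inv_gf8 = [0, 1, 5, 6, 7, 2, 3, 4]
print(differential_uniformity(inv_gf8))  # 2, i.e. APN
```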
Last updated:  2016-09-14
A Robust and Sponge-Like PRNG with Improved Efficiency
Daniel Hutchinson
Ever since Keccak won the SHA3 competition, sponge-based constructions have been suggested for many different applications, including pseudo-random number generators (PRNGs). Sponges are very desirable, being well studied, increasingly efficient to implement and simple in their design. The initial construction of a sponge-based PRNG (Bertoni et al. CHES 2010) based its security on the well-known sponge indifferentiability proof in the random permutation model and provided no forward security. Since then, another improved sponge-based PRNG has been put forward by Gaži and Tessaro (Eurocrypt 2016), who point out the necessity of a public seed to prevent an adversarial sampler from gaining non-negligible advantage. The authors further update the security model of Dodis et al. (CCS 2013) to accommodate a public random permutation, modelled in the ideal cipher model, and analyse how this affects the notions of security. In this paper we introduce Reverie, an improved and practical sponge-like pseudo-random number generator, together with a formal security analysis in the PRNG-with-input security model of Dodis et al. with the modifications of the Gaži and Tessaro paper. We prove that Reverie is robust when used with a public random permutation; robustness is the strongest notion of security in the chosen security model. Robustness is proved by establishing two weaker notions of security, preserving and recovering security, which together can be shown to imply the robustness result. The proofs utilise the H-coefficient technique that has found recent popularity in this area, providing a very useful tool for proving that the generator meets the necessary security notions.
Last updated:  2017-03-28
Short Stickelberger Class Relations and application to Ideal-SVP
Ronald Cramer, Léo Ducas, Benjamin Wesolowski
The worst-case hardness of finding short vectors in ideals of cyclotomic number fields (Ideal-SVP) is a central matter in lattice-based cryptography. Assuming the worst-case hardness of Ideal-SVP allows one to prove the Ring-LWE and Ring-SIS assumptions, and therefore to prove the security of numerous cryptographic schemes and protocols --- including key exchange, digital signatures, public-key encryption and fully-homomorphic encryption. A series of recent works has shown that Principal Ideal-SVP is not always as hard as finding short vectors in general lattices, and some schemes were broken using quantum algorithms --- the Soliloquy encryption scheme, the Smart-Vercauteren fully homomorphic encryption scheme from PKC 2010, and the Garg-Gentry-Halevi cryptographic multilinear maps from Eurocrypt 2013. Those broken schemes were using a special class of principal ideals, but these works also showed how to solve SVP for principal ideals in the worst case in quantum polynomial time for an approximation factor of $\exp(\tilde O(\sqrt n))$. This exposed an unexpected hardness gap between general lattices and some structured ones, and called into question the hardness of various problems over structured lattices, such as Ideal-SVP and Ring-LWE. In this work, we generalize the previous result to general ideals. Precisely, we show how to solve the close principal multiple problem (CPM) by exploiting the classical theorem that the class group is annihilated by (the Galois-module action of) the so-called Stickelberger ideal. Under some plausible number-theoretic hypothesis, our approach provides a close principal multiple in quantum polynomial time. Combined with the previous results, this solves Ideal-SVP in the worst case in quantum polynomial time for an approximation factor of $\exp(\tilde O(\sqrt n))$. Although the security of Ring-LWE based cryptosystems does not seem to be directly affected, we contribute novel ideas to the cryptanalysis of schemes based on structured lattices. Moreover, our result shows a deepening of the gap between general lattices and structured ones.
Last updated:  2016-10-10
Robust, low-cost, auditable random number generation for embedded system security
Ben Lampert, Riad S. Wahby, Shane Leonard, Philip Levis
This paper presents an architecture for a discrete, high-entropy hardware random number generator. Because it is constructed out of simple hardware components, its operation is transparent and auditable. Using avalanche noise, a nondeterministic physical phenomenon, the circuit is inherently probabilistic and resists adversarial control. Furthermore, because it compares the outputs from two matched noise sources, it rejects environmental disturbances like power supply ripple. The resulting hardware produces more than 0.98 bits of entropy per sample, is inexpensive, has a small footprint, and can be disabled to conserve power when not in use.
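Two small post-processing sketches that are useful when auditing such a generator follow (the sample data is hypothetical, and the circuit described above already self-debiases by comparing two matched noise sources; this is not the authors' pipeline): von Neumann debiasing of a slightly biased bit stream, and a naive per-sample Shannon entropy estimate.

```python
from collections import Counter
from math import log2

def von_neumann(bits):
    """Keep the first bit of each unequal pair; drop equal pairs."""
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

def entropy_per_sample(samples):
    # Naive plug-in Shannon entropy estimate over the observed samples.
    counts = Counter(samples)
    total = len(samples)
    return -sum(c / total * log2(c / total) for c in counts.values())

raw = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]      # hypothetical comparator output
print(von_neumann(raw))                    # [0, 1, 1]
print(round(entropy_per_sample(raw), 3))   # 1.0 bit/sample on this toy data
```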
Last updated:  2016-09-14
DEMO: Integrating MPC in Big Data Workflows
Nikolaj Volgushev, Malte Schwarzkopf, Andrei Lapets, Mayank Varia, Azer Bestavros
Secure multi-party computation (MPC) allows multiple parties to perform a joint computation without disclosing their private inputs. Many real-world joint computation use cases, however, involve data analyses on very large data sets, and are implemented by software engineers who lack MPC knowledge. Moreover, the collaborating parties -- e.g., several companies -- often deploy different data analytics stacks internally. These restrictions hamper the real-world usability of MPC. To address these challenges, we combine existing MPC frameworks with data-parallel analytics frameworks by extending the Musketeer big data workflow manager. Musketeer automatically generates code for both the sensitive parts of a workflow, which are executed in MPC, and the remaining portions of the computation, which run on scalable, widely-deployed analytics systems. In a prototype use case, we compute the Herfindahl-Hirschman Index (HHI), an index of market concentration used in antitrust regulation, on an aggregate 156GB of taxi trip data over five transportation companies. Our implementation computes the HHI in about 20 minutes using a combination of Hadoop and VIFF, while even ``mixed mode'' MPC with VIFF alone would have taken many hours. Finally, we discuss future research questions that we seek to address using our approach.
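The statistic computed in the demo is simple in the clear, which is what makes it a good MPC showcase: only each party's private total is sensitive. A minimal cleartext sketch with hypothetical revenue figures:

```python
# Herfindahl-Hirschman Index: the sum of squared market shares, conventionally
# scaled by 10,000 when shares are expressed in percent. MPC lets the parties
# compute this without revealing their individual revenue totals.

def hhi(revenues):
    total = sum(revenues)
    shares = [r / total for r in revenues]
    return sum(s * s for s in shares) * 10_000

# Hypothetical per-company totals (e.g., taxi trip revenue per company).
print(round(hhi([50, 30, 10, 5, 5])))  # 3550 -> a highly concentrated market
```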
Last updated:  2017-01-03
MSKT-ORAM: A Constant Bandwidth ORAM without Homomorphic Encryption
Jinsheng Zhang, Qiumao Ma, Wensheng Zhang, Daji Qiao
This paper proposes MSKT-ORAM, an efficient multiple-server ORAM construction, to protect a client’s access pattern to outsourced data. MSKT-ORAM organizes each server’s storage as a $k$-ary tree and adopts XOR-based PIR and a novel delayed eviction technique to optimize both the data query and data eviction processes. MSKT-ORAM is proved to protect the data access pattern privacy with a failure probability of $2^{-80}$ when $k\geq 128$. Meanwhile, given constant local storage, when $N$ (i.e., the total number of outsourced data blocks) ranges from $2^{16}$ to $2^{34}$ and the data block size $B\geq 20$KB, the communication cost of MSKT-ORAM is only $22$ to $46$ data blocks. Asymptotic analysis and detailed implementation comparisons are conducted to show that MSKT-ORAM achieves better communication, storage and access delay in practical scenarios than the compared state-of-the-art ORAM schemes.
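A minimal sketch of the XOR-based PIR building block mentioned above, in its simplest two-server form (illustrative; MSKT-ORAM's actual query structure over $k$-ary trees is more involved):

```python
# Two-server XOR PIR: the client sends each server a random-looking subset of
# block indices; the subsets differ only in the desired index i, so XORing
# the two answers yields block i while each server alone sees a uniformly
# random query.

import secrets

def xor_blocks(blocks):
    acc = bytes(len(blocks[0])) if blocks else b""
    for b in blocks:
        acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc

def pir_query(n, i):
    s0 = {j for j in range(n) if secrets.randbits(1)}
    s1 = s0 ^ {i}  # symmetric difference: flip membership of index i
    return s0, s1

def pir_answer(db, subset):
    return xor_blocks([db[j] for j in subset]) if subset else bytes(len(db[0]))

db = [bytes([j] * 4) for j in range(8)]   # toy database of 4-byte blocks
q0, q1 = pir_query(len(db), i=5)
recovered = xor_blocks([pir_answer(db, q0), pir_answer(db, q1)])
print(recovered)  # b'\x05\x05\x05\x05'
```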
Last updated:  2016-09-14
Near Collisions in the RC4 Stream Cipher
Anindya Shankar Bhandari
In this paper we explore the factors behind the non-one-to-one nature of RC4, presenting new techniques and interesting findings regarding the same. The first part of this paper studies near-colliding keys of RC4 and discusses how these keys are localized into clusters in the key space. The second part proposes a new collision search algorithm specifically for 16-byte keys. It is generally the practice to choose the byte that differs between two keys to be near the end of the key. However, this is not necessary for 16-byte keys, and the second part of this paper discusses how this may be used to grant us an additional degree of control.
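Experiments of this kind are easy to reproduce at the state level. A minimal sketch that runs the standard RC4 key-scheduling algorithm (KSA) on two 16-byte keys differing in one middle byte and counts agreeing state bytes (hypothetical keys; this is not the paper's search algorithm):

```python
def rc4_ksa(key):
    # Standard RC4 key scheduling: produce the initial 256-byte state.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

k1 = bytes(range(16))
k2 = bytearray(k1)
k2[7] ^= 0x01          # flip one bit in the middle of the key,
k2 = bytes(k2)         # not only near the end as in prior collision searches

agree = sum(a == b for a, b in zip(rc4_ksa(k1), rc4_ksa(k2)))
print(f"{agree}/256 state bytes agree")
```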
Last updated:  2017-06-25
Naor-Yung Paradigm with Shared Randomness and Applications
Silvio Biagioni, Daniel Masny, Daniele Venturi
The Naor-Yung paradigm (Naor and Yung, STOC '90) allows one to generically boost security under chosen-plaintext attacks (CPA) to security against chosen-ciphertext attacks (CCA) for public-key encryption (PKE) schemes. The main idea is to encrypt the plaintext twice (under independent public keys), and to append a non-interactive zero-knowledge (NIZK) proof that the two ciphertexts indeed encrypt the same message. Later work by Camenisch, Chandran, and Shoup (Eurocrypt '09) and Naor and Segev (Crypto '09 and SIAM J. Comput. '12) established that the very same techniques can also be used in the settings of key-dependent message (KDM) and key-leakage attacks (respectively). In this paper we study the conditions under which the two ciphertexts in the Naor-Yung construction can share the same random coins. We find that this is possible, provided that the underlying PKE scheme meets an additional simple property. The motivation for re-using the same random coins is that this allows one to design much more efficient NIZK proofs. We showcase such an improvement in the random oracle model, under standard complexity assumptions including Decisional Diffie-Hellman, Quadratic Residuosity, and Subset Sum. The length of the resulting ciphertexts is reduced by 50%, yielding truly efficient PKE schemes achieving CCA security under KDM and key-leakage attacks. As an additional contribution, we design the first PKE scheme whose CPA security under KDM attacks can be directly reduced to (low-density instances of) the Subset Sum assumption. The scheme supports key-dependent messages computed via any affine function of the secret key.
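A minimal sketch of the coin-sharing idea using plain ElGamal over a toy group (the Mersenne-prime modulus and the omission of the NIZK equality proof are illustrative simplifications; the paper's instantiations and assumptions differ). Sharing $g^r$ across the two ciphertexts already shrinks them from four group elements to three:

```python
import secrets

p = 2 ** 127 - 1          # toy Mersenne prime; NOT a secure group choice
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1
    return pow(g, x, p), x    # (public key, secret key)

pk1, sk1 = keygen()
pk2, sk2 = keygen()

def ny_encrypt_shared_coins(m):
    r = secrets.randbelow(p - 2) + 1   # ONE set of coins for both ciphertexts
    c0 = pow(g, r, p)                  # g^r is shared between the two halves
    return c0, (m * pow(pk1, r, p)) % p, (m * pow(pk2, r, p)) % p

c0, c1, c2 = ny_encrypt_shared_coins(m=42)
# Either secret key decrypts its half (inverse via Fermat's little theorem).
assert (c1 * pow(pow(c0, sk1, p), p - 2, p)) % p == 42
assert (c2 * pow(pow(c0, sk2, p), p - 2, p)) % p == 42
```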
Last updated:  2016-09-14
Zero-Knowledge Arguments for Matrix-Vector Relations and Lattice-Based Group Encryption
Benoît Libert, San Ling, Fabrice Mouhartem, Khoa Nguyen, Huaxiong Wang
Group encryption (GE) is the natural encryption analogue of group signatures in that it allows verifiably encrypting messages for some anonymous member of a group while providing evidence that the receiver is a properly certified group member. Should the need arise, an opening authority is capable of identifying the receiver of any ciphertext. As introduced by Kiayias, Tsiounis and Yung (Asiacrypt'07), GE is motivated by applications in the context of oblivious retriever storage systems, anonymous third parties and hierarchical group signatures. This paper provides the first realization of group encryption under lattice assumptions. Our construction is proved secure in the standard model (assuming interaction in the proving phase) under the Learning-With-Errors (LWE) and Short-Integer-Solution (SIS) assumptions. As a crucial component of our system, we describe a new zero-knowledge argument system that allows demonstrating that a given ciphertext is a valid encryption under some hidden but certified public key, which requires proving quadratic statements about LWE relations. Specifically, our protocol allows arguing knowledge of witnesses consisting of $\mathbf{X} \in \mathbb{Z}_q^{m \times n}$, $\mathbf{s} \in \mathbb{Z}_q^n$ and a small-norm $\mathbf{e} \in \mathbb{Z}^m$ which underlie a public vector $\mathbf{b}=\mathbf{X} \cdot \mathbf{s} + \mathbf{e} \in \mathbb{Z}_q^m$ while simultaneously proving that the matrix $\mathbf{X} \in \mathbb{Z}_q^{m \times n}$ has been correctly certified. We believe our proof system to be useful in other applications involving zero-knowledge proofs in the lattice setting.
Last updated:  2016-09-14
Linear Structures: Applications to Cryptanalysis of Round-Reduced Keccak
Jian Guo, Meicheng Liu, Ling Song
In this paper, we analyze the security of round-reduced versions of the Keccak hash function family. Based on the work pioneered by Aumasson and Meier, and Dinur et al., we formalize and develop a technique named linear structure, which allows linearization of the underlying permutation of Keccak for up to 3 rounds with a large number of variables kept free. As a direct application, it extends the best zero-sum distinguishers by 2 rounds without increasing the complexities. We also apply linear structures to preimage attacks against Keccak. By carefully studying the properties of the underlying Sbox, we show bilinear structures and find ways to convert the information on the output bits to linear functions on the input bits. These findings, combined with linear structures, lead us to preimage attacks against up to 4-round Keccak with reduced complexities. An interesting feature of such preimage attacks is their low complexity for small variants. As extreme examples, we can now find preimages of 3-round SHAKE128 with complexity 1, as well as the first practical solutions to two 3-round instances of the Keccak challenge. Both the zero-sum distinguishers and the preimage attacks are verified by implementations. It is noted that the attacks here are still far from threatening the security of the full 24-round Keccak.
Last updated:  2016-12-04
How to Obtain Fully Structure-Preserving (Automorphic) Signatures from Structure-Preserving Ones
Yuyu Wang, Zongyang Zhang, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
In this paper, we bridge the gap between structure-preserving signatures (SPSs) and fully structure-preserving signatures (FSPSs). In SPSs, all the messages, signatures, and verification keys consist only of group elements, while in FSPSs, even the signing keys are required to be a collection of group elements. To achieve our goal, we introduce two new primitives called trapdoor signature and signature with auxiliary key, both of which can be derived from SPSs. By carefully combining both primitives, we obtain generic constructions of FSPSs from SPSs. Upon instantiating the above two primitives, we get many instantiations of FSPS with unilateral and bilateral message spaces. Different from previously proposed FSPSs, many of our instantiations also have the automorphic property, i.e., a signer can sign his own verification key. As by-products, one of our instantiations has the shortest verification key size and signature size and the lowest verification cost among all previous constructions based on standard assumptions, and one of them is the first FSPS scheme in type I bilinear groups.
Last updated:  2016-09-14
How to Build Fully Secure Tweakable Blockciphers from Classical Blockciphers
Lei Wang, Jian Guo, Guoyan Zhang, Jingyuan Zhao, Dawu Gu
This paper focuses on building a tweakable blockcipher from a classical blockcipher whose input and output wires all have a size of $n$ bits. The main goal is to achieve full $2^n$ security. Such a tweakable blockcipher was proposed by Mennink at FSE'15, and to the best of our knowledge it is the only tweakable blockcipher so far that claimed full $2^n$ security. However, we find a key-recovery attack on Mennink's proposal (in the proceedings version) with a complexity of about $2^{n/2}$ adversarial queries. The attack demonstrates that Mennink's proposal has at most $2^{n/2}$ security, and therefore invalidates its security claim. In this paper, we study a construction of tweakable blockciphers denoted $\tilde{\mathbb E}[s]$ that is built on $s$ invocations of a blockcipher and additional simple XOR operations. Since, as proven in previous work, at least two invocations of the blockcipher with linear mixing are necessary to possibly bypass the birthday-bound barrier of $2^{n/2}$ security, we carry out an investigation of the instances of $\tilde{\mathbb E}[s]$ with $s \ge 2$, and find $32$ highly efficient tweakable blockciphers $\widetilde{E1}$, $\widetilde{E2}$, $\ldots$, $\widetilde{E32}$ that achieve $2^n$ provable security. Each of these tweakable blockciphers uses two invocations of a blockcipher, one of which uses a tweak-dependent key generated by XORing the tweak into the key (or into a secret subkey derived from the key). We point out that the provable security of these tweakable blockciphers is obtained in the ideal blockcipher model due to the usage of the tweak-dependent key.
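An illustrative sketch of the construction shape studied here, with AES-128 standing in for the underlying blockcipher $E$ (this is not one of the 32 proven instances, only the general two-call pattern with a tweak-dependent key $K \oplus T$):

```python
from Crypto.Cipher import AES  # pip install pycryptodome

def E(key, block):
    # The underlying classical blockcipher, here AES-128 in raw (ECB) mode.
    return AES.new(key, AES.MODE_ECB).encrypt(block)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def tweakable_encrypt(key, tweak, m):
    """~E(K, T, M): one call under K, one under the tweak-dependent key K^T."""
    u = E(key, xor(m, tweak))          # first invocation, tweak mixed in
    return E(xor(key, tweak), u)       # second invocation, key depends on T

k = bytes(16)
print(tweakable_encrypt(k, b"T" * 16, b"M" * 16).hex())
```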
Last updated:  2017-02-13
Depth-Robust Graphs and Their Cumulative Memory Complexity
Joël Alwen, Jeremiah Blocki, Krzysztof Pietrzak
Data-independent Memory Hard Functions (iMHFs) are finding a growing number of applications in security, especially in the domain of password hashing. An important property of a concrete iMHF is specified by fixing a directed acyclic graph (DAG) $G_n$ on $n$ nodes. The quality of that iMHF is then captured by the following two pebbling complexities of $G_n$: (i) the parallel cumulative pebbling complexity $\Pi^{\parallel}_{cc}(G_n)$ must be as high as possible (to ensure that the amortized cost of computing the function on dedicated hardware is dominated by the cost of memory); and (ii) the sequential space-time pebbling complexity $\Pi_{st}(G_n)$ should be as close as possible to $\Pi^{\parallel}_{cc}(G_n)$ (to ensure that using many cores in parallel and amortizing over many instances does not give much of an advantage). In this paper we construct a family of DAGs with best possible parameters in an asymptotic sense, i.e., where $\Pi^{\parallel}_{cc}(G_n)=\Omega(n^2/\log(n))$ (which matches a known upper bound) and $\Pi_{st}(G_n)$ is within a constant factor of $\Pi^{\parallel}_{cc}(G_n)$. Our analysis relies on a new connection between the pebbling complexity of a DAG and its depth-robustness (DR) -- a well-studied combinatorial property. We show that high DR is {\em sufficient} for high $\Pi^{\parallel}_{cc}$. Alwen and Blocki (CRYPTO'16) showed that high DR is {\em necessary}, and so, together, these results fully characterize DAGs with high $\Pi^{\parallel}_{cc}$ in terms of DR. Complementing these results, we provide new upper and lower bounds on the $\Pi^{\parallel}_{cc}$ of several important candidate iMHFs from the literature. We give the first lower bounds on the memory hardness of the Catena and Balloon Hashing functions in a parallel model of computation, and we give the first lower bounds of any kind for (a version of) Argon2i. Finally, we describe a new class of pebbling attacks improving on those of Alwen and Blocki (CRYPTO'16). By instantiating these attacks we upper-bound the $\Pi^{\parallel}_{cc}$ of the Password Hashing Competition winner Argon2i and one of the Balloon Hashing functions by $O\left(n^{1.71}\right)$. We also show an upper bound of $O(n^{1.625})$ for the Catena functions and the two remaining Balloon Hashing functions.
Last updated:  2016-09-11
Cryptographic Reverse Firewall via Malleable Smooth Projective Hash Functions
Rongmao Chen, Yi Mu, Guomin Yang, Willy Susilo, Fuchun Guo, Mingwu Zhang
Motivated by the revelations of Edward Snowden, post-Snowden cryptography has become a prominent research direction in recent years. In Eurocrypt 2015, Mironov and Stephens-Davidowitz proposed a novel concept named cryptographic reverse firewall (CRF) which can resist exfiltration of secret information from an arbitrarily compromised machine. In this work, we continue this line of research and present generic CRF constructions for several widely used cryptographic protocols based on a new notion named malleable smooth projective hash function. Our contributions can be summarized as follows. We introduce the notion of malleable smooth projective hash function, which is an extension of the smooth projective hash function (SPHF) introduced by Cramer and Shoup (Eurocrypt'02) with the new properties of key malleability and element rerandomizability. We demonstrate the feasibility of our new notion using graded rings proposed by Benhamouda et al. (Crypto'13), and present an instantiation from the k-linear assumption. Moreover, we show that the malleable SPHF can also be based on other standard assumptions. We show how to generically construct CRFs via malleable SPHFs in a modular way for some widely used cryptographic protocols. Specifically, we propose generic constructions of CRFs for the unkeyed message-transmission protocol and the oblivious signature-based envelope (OSBE) protocol of Blazy, Pointcheval and Vergnaud (TCC'12). We also present a new malleable SPHF from the linear encryption of valid signatures for instantiating the OSBE protocol with CRFs. We further study the two-pass oblivious transfer (OT) protocol and show that the malleable SPHF does not suffice for its CRF constructions. We then develop a new OT framework from graded rings and show how to construct OT-CRFs by modifying the malleable SPHF framework. This new framework encompasses the DDH-based OT-CRF constructions proposed by Mironov and Stephens-Davidowitz (Eurocrypt'15), and yields a new construction under the $k$-linear assumption.
Last updated:  2016-09-10
Iterated Random Oracle: A Universal Approach for Finding Loss in Security Reduction
Fuchun Guo, Willy Susilo, Yi Mu, Rongmao Chen, Jianchang Lai, Guomin Yang
The indistinguishability security of a public-key cryptosystem can be reduced to a computationally hard assumption in the random oracle model, where the solution to a computationally hard problem is hidden in one of the adversary's queries to the random oracle. Usually, there is a finding loss in extracting the correct solution from the query set, especially when the decisional variant of the computational problem is also hard. The finding-loss problem must be addressed in order to obtain tight(er) reductions of this type. In EUROCRYPT 2008, Cash, Kiltz and Shoup proposed a novel approach using a trapdoor test that solves the finding-loss problem: the simulator can find the correct solution with probability 1 if there exists a trapdoor test for the adopted hard problem. The proposed approach is efficient and can be used for many Diffie-Hellman computational assumptions. Its only limitation is the requirement of a trapdoor test, which must be found for each adopted computational assumption. In this paper, we introduce a universal approach to finding loss, namely the Iterated Random Oracle, which can be applied to all computational assumptions. The finding loss in our proposed approach is very small. For $2^{60}$ queries to the random oracle, the success probability of finding the correct solution from the query set is as large as $1/64$, compared to $1/2^{60}$ for a random pick. We show how to apply the iterated random oracle for a security transformation from a key encapsulation mechanism with one-way security to normal encryption with indistinguishability security. The security reduction is very tight due to the small finding loss. The transformation does not expand the ciphertext size. We also give an application of the iterated random oracle to key exchange.
Last updated:  2017-03-13
Blockchain-Free Cryptocurrencies: A Framework for Truly Decentralised Fast Transactions
Xavier Boyen, Christopher Carr, Thomas Haines
The "blockchain" distributed ledger pioneered by Bitcoin is effective at preventing double-spending, but inherently attracts (1) "user cartels" and (2) incompressible delays, as a result of linear verification and a winner-takes-all incentive lottery. We propose to forgo the blocks and chain entirely, and build a truly distributed ledger system based on a lean graph of cross-verifying transactions, which now become the main and only objects in the system. A fully distributed consensus mechanism, based on progressive proofs of work with predictable incentives, ensures rapid convergence even across a large network of unequal participants, who all get rewards working at their own pace. Graph-based affirmation fosters snappy response through automatic scaling, while application-agnostic design supports all modern cryptocurrency features such as multiple denominations, swaps, securitisation, scripting, smart contracts, etc. We prove theoretically, and experimentally verify, our proposal to show it achieves a crucial "convergence" property, meaning that any valid transaction entering the system will quickly become enshrined into the ancestry upon which all future transactions will rest.
Last updated:  2017-02-20
Faster Fully Homomorphic Encryption: Bootstrapping in less than 0.1 Seconds
Ilaria Chillotti, Nicolas Gama, Mariya Georgieva, Malika Izabachène
In this paper, we revisit fully homomorphic encryption (FHE) based on GSW and its ring variants. We notice that the internal product of GSW can be replaced by a simpler external product between a GSW and an LWE ciphertext. We show that the bootstrapping scheme FHEW of Ducas and Micciancio (Eurocrypt 2015) can be expressed only in terms of this external product. As a result, we obtain a speed-up from less than 1 second to less than 0.1 seconds. We also reduce the 1GB bootstrapping key size to 24MB, preserving the same security levels, and we improve the noise propagation overhead by replacing exact decomposition algorithms with approximate ones. Moreover, our external product allows us to explain the unique asymmetry in the noise propagation of GSW samples and makes it possible to evaluate deterministic automata homomorphically, as in (ePrint 2014/283), in an efficient way with a noise overhead only linear in the length of the tested word. Finally, we provide an alternative practical analysis of LWE-based schemes, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key.
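The approximate decomposition mentioned above can be sketched in isolation (the toy parameters $q = 2^{32}$, $B_g = 1024$, $\ell = 3$ are illustrative; real parameter choices for such schemes differ):

```python
# Signed base-Bg gadget decomposition: write a scaled modulus-q element as
# l signed digits in [-Bg/2, Bg/2), discarding the least-significant
# remainder. Approximate (rather than exact) decomposition is what reduces
# the noise-propagation overhead.

def gadget_decompose(x, Bg=1024, l=3, q=2 ** 32):
    """Signed digits d_i with sum_i d_i * q / Bg^(i+1) ~ x (mod q)."""
    scale = q // Bg ** l
    v = (x + scale // 2) // scale    # round away the discarded precision
    digits = []
    for _ in range(l):
        d = v % Bg
        if d >= Bg // 2:             # centre the digit (signed form)
            d -= Bg
        v = (v - d) // Bg
        digits.append(d)
    return digits[::-1]              # most-significant digit first

x = 123456789
d = gadget_decompose(x)
approx = sum(di * (2 ** 32 // 1024 ** (i + 1)) for i, di in enumerate(d))
print(d, abs(approx - x))            # error below q / (2 * Bg^l), here <= 2
```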
Last updated:  2016-09-10
Cryptographic applications of capacity theory: On the optimality of Coppersmith's method for univariate polynomials
Ted Chinburg, Brett Hemenway, Nadia Heninger, Zachary Scherr
We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are no auxiliary polynomials of the type he used that would allow finding roots of size $N^{1/d+\epsilon}$ for any monic degree-$d$ polynomial modulo $N$. Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of improvements for special cases or even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size $N^{1/d+\epsilon}$ unless $N$ has a very small prime factor.
Last updated:  2016-09-10
Selective-Opening Security in the Presence of Randomness Failures
Viet Tung Hoang, Jonathan Katz, Adam O’Neill, Mohammad Zaheri
We initiate the study of public-key encryption (PKE) secure against selective-opening attacks (SOA) in the presence of randomness failures, i.e., when the sender may (inadvertently) use low-quality randomness. In the SOA setting, an adversary can adaptively corrupt senders; this notion is natural to consider in tandem with randomness failures since an adversary may target senders by multiple means. Concretely, we first treat SOA security of nonce-based PKE. After formulating an appropriate definition of SOA-secure nonce-based PKE, we provide efficient constructions in the non-programmable random-oracle model, based on lossy trapdoor functions. We then lift our notion of security to the setting of "hedged" PKE, which ensures security as long as the sender's seed, message, and nonce jointly have high entropy. This unifies the notions and strengthens the protection that nonce-based PKE provides against randomness failures even in the non-SOA setting. We lift our definitions and constructions of SOA-secure nonce-based PKE to the hedged setting as well.
Last updated:  2016-09-10
A survey on physiological-signal-based security for medical devices
Eduard Marin, Enrique Argones Rúa, Dave Singelée, Bart Preneel
Implantable Medical Devices (IMDs) are used to monitor and control patients with chronic diseases. A growing number of IMDs are equipped with a wireless interface that allows non-invasive monitoring and reprogramming through an external device, also known as a device programmer. However, this wireless interface also brings important security and privacy risks that may lead to remote attacks. In this domain, the use of cryptography is challenging due to the inherent tensions between security vs accessibility and security vs energy cost. A well-studied yet unsolved problem is how to establish (and manage) cryptographic keys between the device programmer and the IMD. Recent work has investigated how Physiological Signals (PS) extracted from the patient can be used for key agreement or authentication between the devices. This paper surveys some of the proposed countermeasures in the field of medical device security, with a special focus on those that use the patient's physiological signals for key establishment or authentication between the devices. We point out that most of the existing solutions, including those relying on PS, rely on assumptions that do not necessarily hold in practical scenarios. Furthermore, we show that the H2H protocol and the Biosec protocol have serious security weaknesses and design flaws which make them vulnerable to attacks. Based on our analysis, we define some of the challenges that need to be addressed before adopting these solutions. Furthermore, we investigate how to use physiological-signal-based protocols in cryptography, possibly in combination with other solutions, such as pre-installed factory keys, to achieve higher security protection.
Last updated:  2016-09-10
A Shuffle Argument Secure in the Generic Model
Prastudy Fauzi, Helger Lipmaa, Michał Zając
We propose a new random oracle-less NIZK shuffle argument. It has a simple structure, where the first verification equation ascertains that the prover has committed to a permutation matrix, the second verification equation ascertains that the same permutation was used to permute the ciphertexts, and the third verification equation ascertains that input ciphertexts were ``correctly'' formed. The new argument has $3.5$ times more efficient verification than the up-to-now most efficient shuffle argument by Fauzi and Lipmaa (CT-RSA 2016). Compared to the Fauzi-Lipmaa shuffle argument, we (i) remove the use of knowledge assumptions and prove our scheme is sound in the generic bilinear group model, and (ii) prove standard soundness, instead of culpable soundness.
Last updated:  2016-09-10
Reverse Cycle Walking and Its Applications
Sarah Miracle, Scott Yilek
We study the problem of constructing a block-cipher on a "possibly-strange" set $\mathcal S$ using a block-cipher on a larger set $\mathcal T$. Such constructions are useful in format-preserving encryption, where for example the set $\mathcal S$ might contain "valid 9-digit social security numbers" while $\mathcal T$ might be the set of 30-bit strings. Previous work has solved this problem using a technique called cycle walking, first formally analyzed by Black and Rogaway. Assuming the size of $\mathcal S$ is a constant fraction of the size of $\mathcal T$, cycle walking allows one to encipher a point $x \in \mathcal S$ by applying the block-cipher on $\mathcal T$ a small expected number of times, and $O(N)$ times in the worst case, where $N = |\mathcal T|$, without any degradation in security. We introduce an alternative to cycle walking that we call reverse cycle walking, which lowers the worst-case number of times we must apply the block-cipher on $\mathcal T$ from $O(N)$ to $O(\log N)$. Additionally, when the underlying block-cipher on $\mathcal T$ is secure against $q = (1-\epsilon)N$ adversarial queries, we show that applying reverse cycle walking gives us a cipher on $\mathcal S$ secure even if the adversary is allowed to query all of the domain points. Such fully-secure ciphers have been the target of numerous recent papers.
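For contrast with the paper's contribution, here is classic cycle walking in a few lines (toy domain and permutation; reverse cycle walking replaces this loop to cut the worst case from $O(N)$ to $O(\log N)$):

```python
import random

def cycle_walk(E, x, in_S):
    # Classic cycle walking: apply E until the value lands back in S.
    # Expected cost is |T|/|S| applications; worst case walks almost all of T.
    y = E(x)
    while not in_S(y):
        y = E(y)
    return y

# Toy instance: T = [0, 1024) with a fixed random permutation table; S is
# the first 1000 points (think "valid values" inside a larger binary domain).
rng = random.Random(1)
table = list(range(1024))
rng.shuffle(table)
E = lambda x: table[x]
in_S = lambda y: y < 1000

print(cycle_walk(E, 123, in_S))
```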
Last updated:  2016-09-10
Salvaging Weak Security Bounds for Blockcipher-Based Constructions
Thomas Shrimpton, R. Seth Terashima
The concrete security bounds for some blockcipher-based constructions sometimes become worrisome or even vacuous; for example, when a light-weight blockcipher is used, when large amounts of data are processed, or when a large number of connections need to be kept secure. Rotating keys helps, but introduces a ``hybrid factor'' $m$ equal to the number of keys used. In such instances, analysis in the ideal-cipher model (ICM) can give a sharper picture of security, but this heuristic is called into question when cryptanalysis of the real-world blockcipher reveals weak keys, related-key attacks, etc. To address both concerns, we introduce a new analysis model, the ideal-cipher model under key-oblivious access (ICM-KOA). Like the ICM, the ICM-KOA can give sharp security bounds when standard-model bounds do not. Unlike the ICM, results in the ICM-KOA are less brittle to current and future cryptanalytic results on the blockcipher used to instantiate the ideal cipher. Also, results in the ICM-KOA immediately imply results in the ICM and the standard model, giving multiple viewpoints on a construction with a single effort. The ICM-KOA provides a conceptual bridge between ideal ciphers and tweakable blockciphers (TBC): blockcipher-based constructions secure in the ICM-KOA have TBC-based analogs that are secure under standard-model TBC security assumptions. Finally, the ICM-KOA provides a natural framework for analyzing blockcipher key-update strategies that use the blockcipher to derive the new key. This is done, for example, in the NIST CTR-DRBG and in the hardware RNG that ships on Intel chips.
Last updated:  2016-09-10
More Powerful and Reliable Second-level Statistical Randomness Tests for NIST SP 800-22
Shuangyi Zhu, Yuan Ma, Jingqiang Lin, Jia Zhuang, Jiwu Jing
Random number generators (RNGs) are essential for cryptographic systems, and statistical tests are usually employed to assess the randomness of their outputs. As the most commonly used statistical test suite, the NIST SP 800-22 suite includes 15 test items, each of which contains two-level tests. For the test items based on the binomial distribution, we find that their second-level tests are flawed due to the inconsistency between the assessed distribution and the assumed one. That is, the sequence that passes the test could still have statistical flaws in the assessed aspect. For this reason, we propose Q-value as the metric for these second-level tests to replace the original P-value without any extra modification, and the first-level tests are kept unchanged. We provide the correctness proof of the proposed Q-value based second-level tests. We perform the theoretical analysis to demonstrate that the modification improves not only the detectability, but also the reliability. That is, the tested sequence that dissatisfies the randomness hypothesis has a higher probability to be rejected by the improved test, and the sequence that satisfies the hypothesis has a higher probability to pass it. The experimental results on several deterministic RNGs indicate that, the Q-value based method is able to detect some statistical flaws that the original SP 800-22 suite cannot realize under the same test parameters.
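A minimal sketch of a second-level test of the kind analyzed here: bin the first-level P-values and chi-square-test them for uniformity (this mirrors the SP 800-22 uniformity check in spirit; the paper's fix is to feed Q-values rather than P-values into such a statistic for the binomial-based test items). Requires numpy and scipy:

```python
import numpy as np
from scipy.stats import chisquare

def second_level_uniformity(p_values, bins=10):
    # Bin first-level P-values into equiprobable bins over [0, 1] and run a
    # chi-square goodness-of-fit test against the uniform distribution.
    observed, _ = np.histogram(p_values, bins=bins, range=(0.0, 1.0))
    expected = np.full(bins, len(p_values) / bins)
    stat, p = chisquare(observed, expected)
    return p  # second-level P-value; reject randomness if it is tiny

rng = np.random.default_rng(0)
print(second_level_uniformity(rng.uniform(size=1000)))       # large: uniform
print(second_level_uniformity(rng.uniform(size=1000) ** 3))  # tiny: rejected
```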
Last updated:  2016-12-23
Flaw in the Security Analysis of Leakage-resilient Authenticated Key Exchange Protocol from CT-RSA 2016 and Restoring the Security Proof
Suvradip Chakraborty, Goutam Paul, C. Pandu Rangan
In this paper, we revisit the security result of an authenticated key exchange (AKE) protocol recently proposed in CT-RSA 2016 by Chen, Mu, Yang, Susilo and Guo (we refer to this scheme as the CMYSG scheme). The security of the CMYSG scheme is shown in a new (stronger) challenge-dependent leakage-resilient eCK (CLR-eCK) model that captures (bounded) leakage from both the long-term secret keys of the parties as well as the (per-session) randomness of the parties involved in an AKE protocol, even after the challenge/test session. In this model, they proposed a generic framework for constructing one-round AKE protocols. The main tool employed in their construction is an (extended) 2-smooth projective hash proof system. The security of their protocol is reduced to the security of the underlying hash proof system, the existence of pseudo-random functions (PRFs) and $\pi$-PRFs, collision-resistant hash functions, and the Decisional Diffie-Hellman (DDH) hardness assumption. However, we disprove their security result and show that the security of the CMYSG protocol is incorrectly reduced to that of the DDH assumption. We then re-prove the security of the CMYSG scheme in the CLR-eCK model under the Gap Diffie-Hellman (GDH) hardness assumption in the random oracle model. Our security analysis continues the troubled past of the make-and-break efforts of constructing leakage-resilient AKE protocols, and it leaves open the construction of a CLR-eCK-secure AKE protocol in the standard model.
Last updated:  2018-06-19
Secure Stable Matching at Scale
Jack Doerner, David Evans, abhi shelat
When a group of individuals and organizations wish to compute a stable matching---for example, when medical students are matched to medical residency programs---they often outsource the computation to a trusted arbiter in order to preserve the privacy of participants' preferences. Secure multi-party computation offers the possibility of private matching processes that do not rely on any common trusted third party. However, stable matching algorithms have previously been considered infeasible for execution in a secure multi-party context on non-trivial inputs because they are computationally intensive and involve complex data-dependent memory access patterns. We adapt the classic Gale-Shapley algorithm for use in such a context, and show experimentally that our modifications yield a lower asymptotic complexity and more than an order of magnitude in practical cost improvement over previous techniques. Our main improvements stem from designing new oblivious data structures that exploit the properties of the matching algorithms. We apply a similar strategy to scale the Roth-Peranson instability chaining algorithm, currently in use by the National Resident Matching Program. The resulting protocol is efficient enough to be useful at the scale required for matching medical residents nationwide, taking just over 18 hours to complete an execution simulating the 2016 national resident match with more than 35,000 participants and 30,000 residency slots.
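The cleartext baseline being computed obliviously is textbook Gale-Shapley; the paper's contribution is executing it (and Roth-Peranson) under MPC with new oblivious data structures, not the algorithm itself. A minimal cleartext sketch on a hypothetical 2-by-2 instance:

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Proposer-optimal stable matching; prefs are lists of indices."""
    n = len(proposer_prefs)
    rank = [{p: r for r, p in enumerate(prefs)} for prefs in reviewer_prefs]
    next_choice = [0] * n      # next reviewer index each proposer will try
    match_of = [None] * n      # reviewer -> currently held proposer
    free = list(range(n))
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        q = match_of[r]
        if q is None:
            match_of[r] = p                 # reviewer was unmatched
        elif rank[r][p] < rank[r][q]:
            match_of[r] = p                 # reviewer trades up; q is freed
            free.append(q)
        else:
            free.append(p)                  # rejected; p proposes again later
    return {match_of[r]: r for r in range(n)}

students = [[0, 1], [0, 1]]                 # both students prefer program 0
programs = [[1, 0], [0, 1]]                 # program 0 prefers student 1
print(gale_shapley(students, programs))     # {1: 0, 0: 1}
```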
Last updated:  2016-09-10
Efficient IBE with Tight Reduction to Standard Assumption in the Multi-challenge Setting
Junqing Gong, Xiaolei Dong, Jie Chen, Zhenfu Cao
In 2015, Hofheinz et al. [PKC, 2015] extended Chen and Wee's almost-tight reduction technique for identity-based encryption (IBE) [CRYPTO, 2013] to the multi-instance, multi-ciphertext (MIMC, or multi-challenge) setting, where the adversary is allowed to obtain multiple challenge ciphertexts from multiple IBE instances, and gave the first almost-tightly secure IBE in this setting using composite-order bilinear groups. Several prime-order realizations have been proposed recently. However there seems to be a dilemma between high system performance (involving ciphertext/key size and encryption/decryption cost) and weak/standard security assumptions. A natural question is: can we achieve high performance without relying on stronger/non-standard assumptions? In this paper, we answer the question in the affirmative by describing a prime-order IBE scheme with the same performance as the most efficient solutions so far but whose security still relies on the standard k-linear (k-Lin) assumption. Our technical starting point is Blazy et al.'s almost-tightly secure IBE [CRYPTO, 2014]. We revisit their concrete IBE scheme and associate it with the framework of nested dual system groups. This allows us to extend Blazy et al.'s almost-tightly secure IBE to the MIMC setting using Gong et al.'s method [PKC, 2016]. We emphasize that, when instantiating our construction with the Symmetric eXternal Diffie-Hellman assumption (SXDH = 1-Lin), we obtain the most efficient concrete IBE scheme with almost-tight reduction in the MIMC setting, whose performance is even comparable to the most efficient IBE in the classical model (i.e., the single-instance, single-ciphertext setting). Besides pursuing high performance, our IBE scheme also achieves a weaker form of anonymity pointed out by Attrapadung et al. [AsiaCrypt, 2015].
Last updated:  2017-01-31
On the Security of Supersingular Isogeny Cryptosystems
Steven D. Galbraith, Christophe Petit, Barak Shani, Yan Bo Ti
We study cryptosystems based on supersingular isogenies. This is an active area of research in post-quantum cryptography. Our first contribution is to give a very powerful active attack on the supersingular isogeny encryption scheme. This attack can only be prevented by using a (relatively expensive) countermeasure. Our second contribution is to show that the security of all schemes of this type depends on the difficulty of computing the endomorphism ring of a supersingular elliptic curve. This result gives significant insight into the difficulty of the isogeny problem that underlies the security of these schemes. Our third contribution is to give a reduction that uses partial knowledge of shared keys to determine an entire shared key. This can be used to retrieve the secret key, given information leaked from a side-channel attack on the key exchange protocol. A corollary of this work is the first bit security result for the supersingular isogeny key exchange: Computing any component of the $j$-invariant is as hard as computing the whole $j$-invariant. Our paper therefore provides an improved understanding of the security of these cryptosystems. We stress that our work does not imply that these systems are insecure, or that they should not be used. However, it highlights that implementations of these schemes will need to take account of the risks associated with various active and side-channel attacks. This is the full version of the paper: Steven D. Galbraith, Christophe Petit, Barak Shani and Yan Bo Ti, On the Security of Supersingular Isogeny Cryptosystems, in J. H. Cheon and T. Takagi (eds.), Proceedings of ASIACRYPT 2016, Part I, Springer Lecture Notes in Computer Science 10031 (2016) 63--91.
Last updated:  2016-09-08
A Key Recovery Attack on MDPC with CCA Security Using Decoding Errors
Qian Guo, Thomas Johansson, Paul Stankovski
Algorithms for secure encryption in a post-quantum world are currently receiving a lot of attention in the research community, including several larger projects and a standardization effort from NIST. One of the most promising algorithms is the code-based scheme called QC-MDPC, which has excellent performance and a small public key size. In this work we present a very efficient key recovery attack on the QC-MDPC scheme, using the fact that decryption employs an iterative decoding step that can fail with some small probability. We identify a dependence between the secret key and the failure in decoding. This can be used to build what we refer to as a distance spectrum for the secret key, which is the set of all distances between any two ones in the secret key. In a reconstruction step we then determine the secret key from the distance spectrum. The attack has been implemented and tested on a proposed instance of QC-MDPC for 80-bit security. It successfully recovers the secret key in minutes. A slightly modified version of the attack can be applied to proposed versions of the QC-MDPC scheme that provide IND-CCA security. The attack is a bit more complex in this case, but its complexity is still very much below the security level. The reason why we can break schemes with proven CCA security is that the model for these proofs typically does not include the possibility of decoding errors.
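Since the distance spectrum is the attack's central object, a small sketch may help; the following function (our illustration with toy parameters, not the authors' code) computes the cyclic distance spectrum of one circulant block, given the positions of its ones:

    def distance_spectrum(support, r):
        """Cyclic distance spectrum of one circulant block of length r,
        given the positions of its ones ('support'): the set of distances
        between any two ones, taken modulo r."""
        spectrum = set()
        for i in support:
            for j in support:
                if i != j:
                    d = (i - j) % r
                    spectrum.add(min(d, r - d))
        return spectrum

    # Toy example: ones at positions 0, 3 and 7 in a block of length 12.
    print(sorted(distance_spectrum([0, 3, 7], 12)))   # -> [3, 4, 5]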
Last updated:  2016-09-08
Applying MILP Method to Searching Integral Distinguishers Based on Division Property for 6 Lightweight Block Ciphers
Zejun Xiang, Wentao Zhang, Zhenzhen Bao, Dongdai Lin
Division property is a generalized integral property proposed by Todo at EUROCRYPT 2015; very recently, Todo et al. proposed bit-based division property and applied it to SIMON32 at FSE 2016. However, that technique can only be applied to block ciphers with block size no larger than 32 bits due to its high time and memory complexity. In this paper, we extend the Mixed Integer Linear Programming (MILP) method, previously used to search for differential characteristics and linear trails of block ciphers, to search for integral distinguishers based on division property in block ciphers with block size larger than 32 bits. Firstly, we study how to model the division property propagation of three basic operations (copy, bitwise AND, XOR) and of an Sbox operation by linear inequalities, based on which we can construct a linear inequality system that accurately describes the division property propagation of a block cipher given an initial division property. Secondly, by choosing an appropriate objective function, we convert a search algorithm under Todo's framework into an MILP problem, which we then solve to search for integral distinguishers. As an application of our technique, we have searched for integral distinguishers of SIMON, SIMECK, PRESENT, RECTANGLE, LBlock and TWINE. Our results show that we can find 14-, 16-, 18-, 22- and 26-round integral distinguishers for SIMON32, 48, 64, 96 and 128, respectively. Moreover, for the two SP-network lightweight block ciphers PRESENT and RECTANGLE, we found 9-round integral distinguishers for both ciphers, which are two rounds longer than the best integral distinguishers in the literature. For LBlock and TWINE, our results are consistent with the best known ones with respect to the longest distinguishers.
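To make the first modeling step concrete, the sketch below (our illustration, not the authors' code; it assumes the open-source PuLP modeling library and uses the copy/XOR/AND inequalities that are standard in this line of work) encodes the three basic division-property propagations as linear constraints over binary variables:

    import pulp

    prob = pulp.LpProblem("division_property_toy", pulp.LpMinimize)
    bit = lambda name: pulp.LpVariable(name, cat="Binary")

    # COPY: a bit with division property a is copied to (b1, b2).
    a, b1, b2 = bit("a"), bit("b1"), bit("b2")
    prob += a - b1 - b2 == 0

    # XOR: (x1, x2) -> y; the case x1 = x2 = 1 has no valid trail,
    # which the binary variable y rules out automatically.
    x1, x2, y = bit("x1"), bit("x2"), bit("y")
    prob += x1 + x2 - y == 0

    # AND: (u1, u2) -> v, i.e. v = u1 OR u2 on division-property bits.
    u1, u2, v = bit("u1"), bit("u2"), bit("v")
    prob += v - u1 >= 0
    prob += v - u2 >= 0
    prob += v - u1 - u2 <= 0

A full search chains such constraints round by round, fixes the initial division property, and lets the objective function pick out the distinguisher, as described in the abstract.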
Last updated:  2016-09-07
Spritz---a spongy RC4-like stream cipher and hash function.
Ronald L. Rivest, Jacob C. N. Schuldt
This paper reconsiders the design of the stream cipher RC4, and proposes an improved variant, which we call ``Spritz'' (since the output comes in fine drops rather than big blocks). Our work leverages the considerable cryptanalytic work done on the original RC4 and its proposed variants. It also uses simulations extensively to search for biases and to guide the selection of intermediate expressions. We estimate that Spritz can produce output with about 24 cycles/byte of computation. Furthermore, our statistical tests suggest that about $2^{81}$ bytes of output are needed before one can reasonably distinguish Spritz output from random output; this is a marked improvement over RC4. [Footnote: However, see Appendix F for references to more recent work suggesting that our estimates of the work required to break Spritz may be optimistic.] In addition, we formulate Spritz as a ``sponge (or sponge-like) function'' (see Bertoni et al.), which can ``Absorb'' new data at any time, and from which one can ``Squeeze'' pseudorandom output sequences of arbitrary length. Spritz can thus be easily adapted for use as a cryptographic hash function, an encryption algorithm, or a message-authentication code generator. (However, in hash-function mode, Spritz is rather slow.)
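Since the paper is about a concrete design, a compact Python transcription of the cipher's core may help; this is a from-memory sketch of the published pseudocode (with N = 256), and the paper remains the authoritative definition:

    import math

    N = 256

    class Spritz:
        def __init__(self):
            self.i = self.j = self.k = self.z = self.a = 0
            self.w = 1
            self.S = list(range(N))

        def update(self):
            self.i = (self.i + self.w) % N
            self.j = (self.k + self.S[(self.j + self.S[self.i]) % N]) % N
            self.k = (self.i + self.k + self.S[self.j]) % N
            self.S[self.i], self.S[self.j] = self.S[self.j], self.S[self.i]

        def whip(self, r):
            for _ in range(r):
                self.update()
            self.w = (self.w + 1) % N
            while math.gcd(self.w, N) != 1:   # keep w coprime to N
                self.w = (self.w + 1) % N

        def crush(self):                      # non-invertible smashing step
            for v in range(N // 2):
                if self.S[v] > self.S[N - 1 - v]:
                    self.S[v], self.S[N - 1 - v] = self.S[N - 1 - v], self.S[v]

        def shuffle(self):
            self.whip(2 * N); self.crush()
            self.whip(2 * N); self.crush()
            self.whip(2 * N); self.a = 0

        def absorb_nibble(self, x):
            if self.a == N // 2:
                self.shuffle()
            mid = N // 2 + x
            self.S[self.a], self.S[mid] = self.S[mid], self.S[self.a]
            self.a += 1

        def absorb(self, data):               # absorb key/message bytes
            for byte in data:
                self.absorb_nibble(byte % 16)
                self.absorb_nibble(byte // 16)

        def drip(self):                       # squeeze one pseudorandom byte
            if self.a > 0:
                self.shuffle()
            self.update()
            self.z = self.S[(self.j + self.S[(self.i + self.S[(self.z + self.k) % N]) % N]) % N]
            return self.z

    # Example: absorb a key, then squeeze a short keystream.
    # c = Spritz(); c.absorb(b"key"); ks = [c.drip() for _ in range(16)]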
Last updated:  2016-09-07
Combinatorial Repairability for Threshold Schemes
Douglas R. Stinson, Ruizhong Wei
In this paper, we consider methods whereby a subset of players in a $(k,n)$-threshold scheme can ``repair'' another player's share in the event that it has been lost or corrupted. This takes place without the participation of the dealer who set up the scheme. The repairing protocol should not compromise the (unconditional) security of the threshold scheme, and it should be efficient, where efficiency is measured in terms of the amount of information exchanged during the repairing process. We study two approaches to repairing. The first method is based on the ``enrollment protocol'' from (Nojoumian-Stinson-Grainger, 2010), which was originally developed to add a new player to a threshold scheme (without the participation of the dealer) after the scheme was set up. The second method distributes ``multiple shares'' to each player, as defined by a suitable combinatorial design. This method results in larger shares, but lower communication complexity, as compared to the first method.
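To illustrate the enrollment-based approach in the simplest possible terms, here is a toy sketch (our illustration, not the paper's protocol; the field size, threshold, and masking are simplifications) in which three helpers repair a lost Shamir share without the dealer and without any helper revealing its own share:

    import random

    P = 2**61 - 1          # toy prime field
    k, n = 3, 5            # threshold and number of players

    def f(x, coeffs):
        """Evaluate the dealer's secret polynomial at x (mod P)."""
        return sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P

    def lagrange_at(xs, i, x):
        """Lagrange coefficient of point xs[i] for evaluation point x."""
        num = den = 1
        for m, xm in enumerate(xs):
            if m != i:
                num = num * (x - xm) % P
                den = den * (xs[i] - xm) % P
        return num * pow(den, P - 2, P) % P

    coeffs = [1234567] + [random.randrange(P) for _ in range(k - 1)]
    shares = {x: f(x, coeffs) for x in range(1, n + 1)}

    helpers, r = [1, 2, 3], 5          # players 1-3 repair player 5's share
    received = {h: [] for h in helpers}
    for i, j in enumerate(helpers):
        # Helper j's contribution to f(r), split into random summands so
        # that no single message leaks its share.
        c = shares[j] * lagrange_at(helpers, i, r) % P
        parts = [random.randrange(P) for _ in range(len(helpers) - 1)]
        parts.append((c - sum(parts)) % P)
        for h, part in zip(helpers, parts):
            received[h].append(part)

    # Each helper forwards only the sum of what it received.
    repaired = sum(sum(v) % P for v in received.values()) % P
    assert repaired == f(r, coeffs)    # player 5 recovers its original share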
Last updated:  2016-09-07
Algebraic Security Analysis of Key Generation with Physical Unclonable Functions
Matthias Hiller, Michael Pehl, Gerhard Kramer, Georg Sigl
Physical Unclonable Functions (PUFs) provide cryptographic keys for embedded systems without secure non-volatile key storage. Several error correction schemes for key generation with PUFs were introduced, analyzed and implemented over the last years. This work abstracts from the typical algorithmic level and provides an algebraic view to reveal fundamental similarities and differences in the security of these error correction schemes. An algebraic core is introduced for key generation with PUFs: it computes the secret key and the helper data from the input PUF response and an optional random number. For nearly uniformly distributed PUF responses, the leakage of the secret key through the helper data can be brought to zero if and only if the rank of the algebraic core is equal to the sum of the ranks of its key-generating part and its helper-data-generating part. This rank criterion has the practical advantage that a security check can be performed for linear codes at an early design stage of an algorithm. The criterion is applied to state-of-the-art approaches to show that fuzzy commitment and systematic low leakage coding are the only analyzed schemes that achieve zero leakage.
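For orientation, fuzzy commitment, one of the two schemes found to achieve zero leakage, can be sketched in a few lines; this is our toy illustration with a repetition code, not the paper's construction, and real instantiations use stronger codes and longer responses:

    import secrets

    T = 5                                    # repetition factor (toy choice)

    def encode_rep(bits):
        """Toy repetition code: each key bit becomes T identical bits."""
        return [b for bit in bits for b in [bit] * T]

    def decode_rep(bits):
        """Majority-decode each length-T chunk back to one key bit."""
        return [1 if sum(bits[i:i + T]) > T // 2 else 0
                for i in range(0, len(bits), T)]

    def xor(a, b):
        return [x ^ y for x, y in zip(a, b)]

    key = [secrets.randbelow(2) for _ in range(8)]
    response = [secrets.randbelow(2) for _ in range(8 * T)]  # enrollment read

    helper = xor(response, encode_rep(key))  # public helper data W = R xor C(K)

    reread = list(response)                  # noisy re-measurement of the PUF
    reread[3] ^= 1
    reread[17] ^= 1
    assert decode_rep(xor(reread, helper)) == key  # key survives the errors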
Last updated:  2016-09-07
Stronger Security Variants of GCM-SIV
Tetsu Iwata, Kazuhiko Minematsu
At CCS 2015, Gueron and Lindell proposed GCM-SIV, a provably secure authenticated encryption scheme that remains secure even if the nonce is repeated. While this is an advantage over the original GCM, we first point out that GCM-SIV allows a trivial distinguishing attack with about $2^{48}$ queries, where each query has one plaintext block. This shows the tightness of the security claim and does not contradict the provable security result. However, the original GCM resists this attack, which poses the question of designing a variant of GCM-SIV that is secure against it. We present a minor variant of GCM-SIV, which we call GCM-SIV1, and show that GCM-SIV1 resists the attack and offers a security trade-off compared to GCM-SIV. As the main contribution of the paper, we explore a scheme with a stronger security bound. We present GCM-SIV2, which is obtained by running two instances of GCM-SIV1 in parallel and mixing them in a simple way. We show that it is secure up to $2^{85.3}$ query complexity, where the query complexity is measured in terms of the total number of blocks of the queries. Finally, we generalize this to GCM-SIV$r$, obtained by running $r\ge 3$ instances of GCM-SIV1 in parallel, and show that the scheme is secure up to $2^{128r/(r+1)}$ query complexity. The provable security results are obtained under the standard assumption that the blockcipher is a pseudorandom permutation.
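For concreteness, the general bound $2^{128r/(r+1)}$ specializes as follows: $r = 1$ gives $2^{64}$ (the birthday-type bound for GCM-SIV1), $r = 2$ gives $2^{256/3} \approx 2^{85.3}$, matching the GCM-SIV2 bound stated above, and $r = 3$ gives $2^{96}$.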
Last updated:  2016-09-07
Faster LLL-type Reduction of Lattice Bases
Arnold Neumaier, Damien Stehle
We describe an asymptotically fast variant of the LLL lattice reduction algorithm. It takes as input a basis $\mathbf B \in \mathbb Z^{n \times n}$ and returns a (reduced) basis $\mathbf C$ of the Euclidean lattice $L$ spanned by $\mathbf B$, whose first vector satisfies $||\mathbf c_1|| \leq (1+c)(4/3)^{(n-1)/4} \cdot (\det L)^{1/n}$ for any fixed $c>0$. It terminates within $O(n^{4+\epsilon} \beta^{1+\epsilon})$ bit operations for any $\epsilon >0$, with $\beta = \log \max_i ||\mathbf b_i||$. It does rely on fast integer arithmetic but does not make use of fast matrix multiplication.
Last updated:  2016-09-07
A New Algorithm for the Unbalanced Meet-in-the-Middle Problem
Ivica Nikolic, Yu Sasaki
A collision search for a pair of $n$-bit unbalanced functions (one is $R$ times more expensive to evaluate than the other) is an instance of the meet-in-the-middle problem, solved with the familiar standard algorithm that follows the tradeoff $TM=N$, where $T$ and $M$ are the time and memory complexities and $N=2^n$. By combining two ideas, unbalanced interleaving and van Oorschot-Wiener parallel collision search, we construct an alternative algorithm that follows the tradeoff $T^2 M = R^2 N$, where $M\le R$. Among other things, the algorithm solves the well-known open problem of how to reduce the memory of unbalanced collision search.
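A numerical illustration (ours, not the paper's): take $N = 2^{128}$, imbalance $R = 2^{32}$, and memory $M = 2^{32}$. The standard tradeoff $TM = N$ gives $T = 2^{96}$, while the new tradeoff $T^2 M = R^2 N$ gives $T = \sqrt{2^{64} \cdot 2^{128} / 2^{32}} = 2^{80}$, a $2^{16}$-fold speed-up with the same memory.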
Last updated:  2016-09-07
Lightweight Fault Attack Resistance in Software Using Intra-Instruction Redundancy
Conor Patrick, Bilgiday Yuce, Nahid Farhady Ghalaty, Patrick Schaumont
Fault attack countermeasures can be implemented by storing or computing sensitive data in redundant form, such that the faulty data can be detected and restored. We present a class of lightweight, portable software countermeasures for block ciphers. Our technique is based on redundant bit-slicing, and it is able to detect faults in the execution of a single instruction. In comparison to earlier techniques, we are able to intercept data faults as well as instruction sequence faults using a uniform technique. Our countermeasure thwarts precise bit-fault injections through pseudo-random shifts in the allocation of data bit-slices. We demonstrate our solution on a full AES design and confirm the claimed security protection through a detailed fault simulation for a 32-bit embedded processor. We also quantify the overhead of the proposed fault countermeasure, finding a minimal increase in footprint (14%) and a moderate performance overhead between 125% and 317%, depending on the desired level of fault-attack resistance.
Last updated:  2017-01-17
Asymptotically Tight Bounds for Composing ORAM with PIR
Ittai Abraham, Christopher W. Fletcher, Kartik Nayak, Benny Pinkas, Ling Ren
Oblivious RAM (ORAM) is a cryptographic primitive that allows a trusted client to outsource storage to an untrusted server while hiding the client's memory access patterns from the server. The last three decades of research on ORAMs have reduced the bandwidth blowup of ORAM schemes from $O(\sqrt{N})$ to $O(1)$. However, all schemes that achieve a bandwidth blowup smaller than $O(\log N)$ use expensive computations such as homomorphic encryption. In this paper, we achieve a sub-logarithmic bandwidth blowup of $O(\log_d N)$ (where $d$ is a free parameter) without using expensive computation. We do so by using a $d$-ary tree and a two-server private information retrieval (PIR) protocol based on inexpensive XOR operations at the servers. We also show an $\Omega(\log_{cD} N)$ lower bound on bandwidth blowup in the modified model involving PIR operations, where $c$ is the number of blocks stored by the client and $D$ is the number of blocks on which PIR operations are performed. Our construction matches this lower bound, implying that the lower bound is tight for certain parameter ranges. Finally, we show that C-ORAM (CCS'15) and CHf-ORAM violate the lower bound. Combined with concrete attacks on C-ORAM/CHf-ORAM, we claim that there exist security flaws in these constructions.
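As a quick sanity check on the parameters (illustrative numbers, not from the paper): for $N = 2^{20}$ blocks and branching factor $d = 32$, the blowup is $\log_d N = 4$, compared to $\log_2 N = 20$ for a binary tree, at the price of running a PIR protocol over the $d$ children at each level.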
Last updated:  2017-02-18
From Weakly Selective to Selective Security in Compact Functional Encryption, Revisited
Linfeng Zhou
We provide a generic transformation from weakly selectively secure \(\mathsf{FE}\) to selectively secure \(\mathsf{FE}\) through an approach called \textit{hybrid functional key generation}. Furthermore, our transformation preserves the compactness of the \(\mathsf{FE}\) scheme. Additionally, we note that this transformation is much simpler than the prior work \cite{garg2016single}. We consider the simplicity of the construction to be a positive feature of this work, and the hybrid functional key generation approach to be a new method that can be applied in functional encryption schemes. Furthermore, we weaken the requirement on the input \(\mathsf{FE}\) scheme of our transformation from a fully-compact scheme to a non-compact one, by additionally assuming the hardness of LWE (or Ring-LWE). We achieve this result by utilizing the \(\mathsf{FE}\) scheme for bounded collusions with the \textit{decomposable and succinct ciphertext property}, which can be based solely on the LWE (or Ring-LWE) assumption. Finally, we present the implications of our result, which improves previous results, for building general-purpose indistinguishability obfuscators from (well-expressed) functional encryption.
Last updated:  2022-03-18
On the smallest ratio problem of lattice bases
Jianwei Li
Let $(\mathbf{b}_1, \ldots, \mathbf{b}_{n})$ be a lattice basis with Gram-Schmidt orthogonalization $(\mathbf{b}_1^{\ast}, \ldots, \mathbf{b}_{n}^{\ast})$; the quantities $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\|$ for $i = 1, \ldots, n$ play important roles in analyzing lattice reduction algorithms and lattice enumeration algorithms. In this paper, we study the problem of minimizing the quantity $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ over all bases $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of a given $n$-dimensional lattice. We first prove that there exists a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ for any lattice $L$ of dimension $n$ such that $\|\mathbf{b}_1\| = \min_{\mathbf{v} \in L\backslash\{\mathbf{0}\}} \|\mathbf{v}\|$, $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i$ and $\|\mathbf{b}_{i}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i^{1.5}$ for $1 \leq i \leq n$. This leads us to introduce a new NP-hard computational problem, the smallest ratio problem (SRP): given an $n$-dimensional lattice $L$, find a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of $L$ such that $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ is minimal. The problem inspires the new lattice invariant $\mu_{n}(L) = \min\{\|\mathbf{b}_1\|/\|\mathbf{b}_n^{\ast}\| : (\mathbf{b}_1, \ldots, \mathbf{b}_n) \textrm{ is a basis of } L\}$ and the new lattice constant $\mu_{n} = \max \mu_{n}(L)$ over all $n$-dimensional lattices $L$; both the minimum and the maximum are justified. The properties of $\mu_{n}(L)$ and $\mu_{n}$ are discussed. We also present an exact algorithm and an approximation algorithm for SRP. This is the first sound study of SRP. Our work is a tiny step towards solving an open problem proposed by Dadush-Regev-Stephens-Davidowitz (CCC '14) for tackling the closest vector problem with preprocessing, namely, whether there exists a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ for any $n$-rank lattice such that $\max_{1 \le i \le j \le n} \|\mathbf{b}_{i}^{\ast}\|/\|\mathbf{b}_{j}^{\ast}\| \le \textrm{poly}(n)$.
Last updated:  2022-10-06
Survey of Approaches and Techniques for Security Verification of Computer Systems
Ferhat Erata, Shuwen Deng, Faisal Zaghloul, Wenjie Xiong, Onur Demir, Jakub Szefer
This paper surveys the landscape of security verification approaches and techniques for computer systems at various levels: from a software-application level all the way to the physical hardware level. Different existing projects are compared, based on the tools used and security aspects being examined. Since many systems require both hardware and software components to work together to provide the system's promised security protections, it is not sufficient to verify just the software levels or just the hardware levels in a mutually exclusive fashion. This survey especially highlights system levels that are verified by the different existing projects and presents to the readers the state of the art in hardware and software system security verification. Few approaches come close to providing full-system verification, and there is still much room for improvement.
Last updated:  2016-09-06
Selective Opening Security from Simulatable Data Encapsulation
Felix Heuer, Bertram Poettering
The confidentiality notion of security against selective opening attacks considers adversaries that obtain challenge ciphertexts and are allowed to adaptively open them, thereby revealing the encrypted message and the randomness used to encrypt. The SO notion is stronger than that of CCA security and is often required when formally arguing towards the security of multi-user applications. While different ways of achieving correspondingly secure schemes are known, they generally employ expensive asymmetric building blocks like lossy trapdoor functions or lossy encryption, and such constructions are routinely left aside by practitioners and standardization bodies. So far, formal arguments towards the SO security of schemes used in practice (e.g., for email encryption) are not known. In this work we shift the focus from the asymmetric to the symmetric building blocks of PKE and prove the following statement: if a PKE scheme is composed of a key encapsulation mechanism (KEM) and a blockcipher-based data encapsulation mechanism (DEM), and the DEM meets specific combinatorial properties, then the PKE scheme offers SO security in the ideal cipher model. Fortunately, as we show, the required properties hold for popular modes of operation like CTR, CBC, CCM, and GCM. This paper not only establishes the corresponding theoretical framework of analysis, but also contributes very concretely to practical cryptography by concluding that selective opening security is given for many real-world schemes.
Last updated:  2016-09-06
Secure and Efficient Construction of Broadcast Encryption with Dealership
Kamalesh Acharya, Ratna Dutta
Broadcast encryption with dealership (BED) has been proposed to enable more innovative and scalable business models for broadcast services, and it has broad potential applications. However, designing a secure BED scheme is a challenging task, and the only known BED construction so far is by Gritti et al. We aim to raise the profile of BED primitives, which have not received much attention despite their importance. This paper presents a selectively chosen-plaintext attack (CPA) secure BED scheme supporting maximum-number accountability and privacy (hiding the group of users from the broadcaster). Our scheme is a key encapsulation mechanism and is more efficient in practice: it reduces the parameter sizes and computation cost compared to Gritti et al. More interestingly, the broadcaster does not need to rely on users to detect a dishonest dealer. We provide a concrete security analysis of our design under reasonable assumptions.
Last updated:  2017-01-12
Partitioning via Non-Linear Polynomial Functions: More Compact IBEs from Ideal Lattices and Bilinear Maps
Shuichi Katsumata, Shota Yamada
In this paper, we present new adaptively secure identity-based encryption (IBE) schemes. One distinguishing property of our schemes is that they achieve shorter public parameters than previous schemes. Both of our schemes follow the general framework presented in the recent IBE scheme of Yamada (Eurocrypt 2016), combined with novel techniques tailored to the underlying algebraic structure to overcome the difficulties arising in our specific settings. Specifically, we obtain the following: - Our first scheme is proven secure under the ring learning with errors (RLWE) assumption and achieves the best asymptotic space efficiency among existing schemes from the same assumption. The main technical contribution is our new security proof that exploits the ring structure in a crucial way. Our technique allows us to greatly weaken the underlying hardness assumption (e.g., we assume the hardness of RLWE with a fixed polynomial approximation factor, whereas Yamada's scheme requires a super-polynomial approximation factor) while improving the overall efficiency. - Our second IBE scheme is constructed over bilinear maps and is secure under the $3$-computational bilinear Diffie-Hellman exponent assumption. This is the first IBE scheme based on the hardness of a computational/search problem, rather than a decisional problem such as DDH and DLIN, over bilinear maps with sub-linear public parameter size.
Last updated:  2017-03-16
Improved, Black-Box, Non-Malleable Encryption from Semantic Security
Seung Geol Choi, Dana Dachman-Soled, Tal Malkin, Hoeteck Wee
We give a new black-box transformation from any semantically secure encryption scheme into a non-malleable one which has a better rate than the best previous work of Coretti et al. (TCC 2016-A). We achieve a better rate by departing from the “matrix encoding” methodology used by previous constructions, and working directly with a single codeword. We also use a Shamir secret-share packing technique to improve the rate of the underlying error-correcting code.
Last updated:  2016-09-29
A Methodology for the Characterisation of Leakages in Combinatorial Logic
Guido Bertoni, Marco Martinoli
Glitches represent a great danger for hardware implementations of cryptographic schemes. Their intrinsic random nature makes them difficult to tackle and their occurrence threatens side-channel protections. Although countermeasures aiming at structurally solving the problem already exist, they usually require some effort to be applied or introduce non-negligible overhead in the design. Our work addresses the gap between such countermeasures and the naïve implementation of schemes being vulnerable in the presence of glitches. Our contribution is twofold: (1) we expand the mathematical framework proposed by Brzozowski and Ésik (FMSD 2003) by meaningfully adding the notion of information leakage, (2) thanks to which we define a formal methodology for the analysis of vulnerabilities in combinatorial circuits when glitches are taken into account.
Last updated:  2016-09-06
Deja Q All Over Again: Tighter and Broader Reductions of q-Type Assumptions
Melissa Chase, Mary Maller, Sarah Meiklejohn
In this paper, we demonstrate that various cryptographic constructions--including ones for broadcast, attribute-based, and hierarchical identity-based encryption--can rely for security on only the static subgroup hiding assumption when instantiated in composite-order bilinear groups, as opposed to the dynamic q-type assumptions on which their security previously was based. This specific goal is accomplished by more generally extending the recent Deja Q framework (Chase and Meiklejohn, Eurocrypt 2014) in two main directions. First, by teasing out common properties of existing reductions, we expand the q-type assumptions that can be covered by the framework; i.e., we demonstrate broader classes of assumptions that can be reduced to subgroup hiding. Second, while the original framework applied only to asymmetric composite-order bilinear groups, we provide a reduction to subgroup hiding that works in symmetric (as well as asymmetric) composite-order groups. As a bonus, our new reduction achieves a tightness of log(q) rather than q.
Last updated:  2016-09-06
On the Division Property of SIMON48 and SIMON64
Zejun Xiang, Wentao Zhang, Dongdai Lin
{\sc Simon} is a family of lightweight block ciphers published by the U.S. National Security Agency (NSA) in 2013. Due to its novel bit-based design, integral cryptanalysis of {\sc Simon} seems a tough job. At EUROCRYPT 2015 Todo proposed division property, a generalized integral property, and applied this technique to the search for integral distinguishers of {\sc Simon} block ciphers by considering the left and right halves of {\sc Simon} independently. As a result, he found 11-round integral distinguishers for both {\sc Simon}48 and {\sc Simon}64. Recently, at FSE 2016, Todo \emph{et al.} proposed bit-based division property, which considers each bit independently. This technique can find more accurate distinguishers; however, as pointed out by Todo \emph{et al.}, its time and memory complexity is bounded by $2^n$ for an $n$-bit block cipher, so bit-based division property is only applicable to {\sc Simon}32. In this paper we propose a new technique that achieves a trade-off between considering each bit independently and considering the left and right halves as a whole, which is effectively a trade-off between time-memory and the accuracy of the distinguishers. We proceed by splitting the state of {\sc Simon} into small pieces and studying the division property propagation of the circular shift and bitwise AND operations under the state partition. Moreover, we propose two different state partitions and study the influence of the partition on the propagation of division property. We find that different partitions greatly impact the division property propagation of circular shift, which finally results in a big difference in the length of the integral distinguishers. By using a tailored search algorithm for {\sc Simon}, we find 12-round integral distinguishers for {\sc Simon}48 and {\sc Simon}64 respectively, improving Todo's results by one round for both variants.
Last updated:  2016-09-03
Passive Secret Disclosure Attack on an Ultralightweight Authentication Protocol for Internet of Things
Masoumeh Safkhani, Nasour Bagheri
Recently, Tewari and Gupta proposed an ultralightweight RFID authentication protocol. In this paper, we consider the security of the proposed protocol and present a passive secret disclosure attack against it. The success probability of the attack is 1, and it requires only eavesdropping on one session of the protocol, so its complexity is negligible. We simulated our attack and verified its correctness.
Last updated:  2016-09-04
Fully Homomorphic Encryption over the Integers Revisited
Jung Hee Cheon, Damien Stehle
Two main computational problems serve as security foundations of current fully homomorphic encryption schemes: Regev's Learning With Errors problem (LWE) and Howgrave-Graham's Approximate Greatest Common Divisor problem (AGCD). Our first contribution is a reduction from LWE to AGCD. As a second contribution, we describe a new AGCD-based fully homomorphic encryption scheme, which outperforms all prior AGCD-based proposals: its security does not rely on the presumed hardness of the so-called Sparse Subset Sum problem, and the bit-length of a ciphertext is only $\widetilde{O}(\lambda)$, where $\lambda$ refers to the security parameter.
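For orientation (a standard statement of the problem, not quoted from this paper): in the AGCD problem one is given arbitrarily many samples of the form $x_i = q_i p + r_i$, where $p$ is a secret $\eta$-bit integer, the $q_i$ are large random multipliers, and the $r_i$ are small noise terms (say $|r_i| \le 2^{\rho}$ with $\rho \ll \eta$), and the task is to recover $p$.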
Last updated:  2016-09-03
The Discrete Logarithm Problem over Prime Fields can be transformed to a Linear Multivariable Chinese Remainder Theorem
H. Gopalakrishna Gadiyar, R. Padma
The Discrete Logarithm Problem over Prime Fields can be transformed to a Linear Multivariable Chinese Remainder Theorem.
Last updated:  2016-09-30
Lightweight Diffusion Layer: Importance of Toeplitz Matrices
Sumanta Sarkar, Habeeb Syed
MDS matrices are used as building blocks of diffusion layers in block ciphers, and XOR count is a metric that estimates the hardware implementation cost. In this paper we report the minimum value of XOR counts of $4 \times 4$ MDS matrices over $\mathbb{F}_{2^4}$ and $\mathbb{F}_{2^8}$, respectively. We give theoretical constructions of Toeplitz MDS matrices and show that they achieve the minimum XOR count. We also prove that Toeplitz matrices cannot be both MDS and involutory. Further we give theoretical constructions of $4 \times 4$ involutory MDS matrices over $\mathbb{F}_{2^4}$ and $\mathbb{F}_{2^8}$ that have the best known XOR counts so far: for $\mathbb{F}_{2^4}$ our construction gives an involutory MDS matrix that actually improves the existing lower bound of XOR count, whereas for $\mathbb{F}_{2^8}$, it meets the known lower bound.
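Recall that a $4 \times 4$ Toeplitz matrix is constant along each diagonal, i.e., it has the shape $\begin{pmatrix} a & b & c & d \\ e & a & b & c \\ f & e & a & b \\ g & f & e & a \end{pmatrix}$, so it is determined by $7$ entries rather than $16$; this repetition of entries is what makes Toeplitz matrices attractive candidates for low-XOR-count diffusion layers.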
Last updated:  2018-11-02
Multi-Key Homomorphic Signatures Unforgeable under Insider Corruption
Russell W. F. Lai, Raymond K. H. Tai, Harry W. H. Wong, Sherman S. M. Chow
Homomorphic signatures (HS) allow the derivation of the signature of the message-function pair $(m, g)$, where $m = g(m_1, \ldots, m_K)$, given the signatures of each of the input messages $m_k$ signed under the same key. Multi-key HS (M-HS), introduced by Fiore et al. (ASIACRYPT'16), further enhances the utility by allowing the evaluation of signatures under different keys. While the unforgeability of existing M-HS notions unrealistically assumes that all signers are honest, we consider the setting where an arbitrary number of signers can be corrupted, as is typical in natural applications (e.g., verifiable multi-party computation) of M-HS. Surprisingly, there is a huge gap between M-HS with and without unforgeability under corruption: while the latter can be constructed from standard lattice assumptions (ASIACRYPT'16), we show that the former must rely on non-falsifiable assumptions. Specifically, we propose a generic construction of M-HS with unforgeability under corruption from adaptive zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs) (and other standard assumptions), and then show that such an M-HS implies adaptive zero-knowledge succinct non-interactive arguments (ZK-SNARGs). Our results leave open the pressing question of what level of authenticity can be guaranteed in the multi-key setting under standard assumptions.
Last updated:  2018-01-15
Multi-Cast Key Distribution: Scalable, Dynamic and Provably Secure Construction
Kazuki Yoneyama, Reo Yoshida, Yuto Kawahara, Tetsutaro Kobayashi, Hitoshi Fuji, Tomohide Yamamoto
In this paper, we propose a two-round dynamic multi-cast key distribution (DMKD) protocol under the star topology with a central authentication server. Users can share a common session key without revealing any information about it to the server, and can join or leave the group at any time, even after establishing the session key. Our protocol is scalable because the communication and computation costs of each user are independent of the number of users. Also, our protocol remains secure if either the private key or the session-specific randomness of a user is exposed. Furthermore, time-based backward secrecy is guaranteed by renewing the session key for every time period, even if the session key is exposed. We introduce the first formal security definition for DMKD under the star topology in order to capture such strong exposure resilience and time-based backward secrecy. We prove that our protocol is secure under our security model in the standard model.
Last updated:  2016-08-30
Is AEZ v4.1 Sufficiently Resilient Against Key-Recovery Attacks?
Colin Chaigneau, Henri Gilbert
AEZ is a parallelizable, AES-based authenticated encryption algorithm that is well suited for software implementations on processors equipped with the AES-NI instruction set. It aims at offering exceptionally strong security properties such as nonce and decryption-misuse resistance and optimal security given the selected ciphertext expansion. AEZ was submitted to the authenticated ciphers competition CAESAR and was selected in 2015 for the second round of the competition. In this paper, we analyse the resilience of the latest algorithm version, AEZ v4.1 (October 2015), against key-recovery attacks. While AEZ modifications introduced in 2015 were partly motivated by thwarting a key-recovery attack of birthday complexity against AEZ v3 published at Asiacrypt 2015 by Fuhr, Leurent and Suder, we show that AEZ v4.1 remains vulnerable to a key-recovery attack of similar complexity and security impact. Our attack leverages the use, in AEZ, of an underlying tweakable block cipher based on a 4-round version of AES. Although the presented key-recovery attack does not violate the security claims of AEZ since the designers made no claim for beyond-birthday security, it may be interpreted as an indication that AEZ does not meet in all respects the objective of being a highly conservative and misuse-resilient algorithm.
Last updated:  2016-08-30
Reducing the Number of Non-linear Multiplications in Masking Schemes
Jürgen Pulkus, Srinivas Vivek
In recent years, methods to securely mask S-boxes against side-channel attacks by representing them as polynomials over finite binary fields have become quite efficient. A good cost model for this is to count how many non-linear multiplications are needed. In this work we improve on the current state-of-the-art generic method, published by Coron-Roy-Vivek at CHES 2014, by working over slightly larger fields than strictly needed. This leads us, for example, to evaluate DES S-boxes with only 3 non-linear multiplications and, as a result, obtain a \(25\%\) improvement in the running time of secure software implementations of DES when using three or more shares. On the theoretical side, we prove a logarithmic upper bound on the number of non-linear multiplications required to evaluate any \(d\)-bit S-box, when ignoring the cost of working in unreasonably large fields. This upper bound is lower than the previous lower bounds proved under the assumption of working over the field \(\mathbb{F}_{2^d}\), and we show this bound to be sharp. We also show how to evaluate the AES S-box using only 3 non-linear multiplications over \(\mathbb{F}_{2^{16}}\).
Last updated:  2016-09-04
IO-DSSE: Scaling Dynamic Searchable Encryption to Millions of Indexes By Improving Locality
Ian Miers, Payman Mohassel
Free cloud-based services are powerful candidates for deploying ubiquitous encryption for messaging. In the case of email and increasingly chat, users expect the ability to store and search their messages persistently. Using data from one of the top three mail providers, we confirm that for a searchable encryption scheme to scale to millions of users, it should be highly IO-efficient (locality) and handle very dynamic message corpora. We observe that existing solutions fail to achieve both properties simultaneously. We then design, build, and evaluate a provably secure Dynamic Searchable Symmetric Encryption (DSSE) scheme with a significant reduction in IO cost compared to preceding works when used for email or other highly dynamic message corpora.
Last updated:  2016-09-09
Efficient KDM-CCA Secure Public-Key Encryption for Polynomial Functions
Shuai Han, Shengli Liu, Lin Lyu
KDM$[\mathcal{F}]$-CCA secure public-key encryption (PKE) protects the security of message $f(sk)$, with $f \in \mathcal{F}$, that is computed directly from the secret key, even if the adversary has access to a decryption oracle. An efficient KDM$[\mathcal{F}_{\text{aff}}]$-CCA secure PKE scheme for affine functions was proposed by Lu, Li and Jia (LLJ, EuroCrypt2015). We point out that their security proof cannot go through based on the DDH assumption. In this paper, we introduce a new concept _Authenticated Encryption with Auxiliary-Input_ $\mathsf{AIAE}$ and define for it new security notions dealing with related-key attacks, namely _IND-RKA security_ and _weak INT-RKA security_. We also construct such an $\mathsf{AIAE}$ w.r.t. a set of restricted affine functions from the DDH assumption. With our $\mathsf{AIAE}$, -- we construct the first efficient KDM$[\mathcal{F}_{\text{aff}}]$-CCA secure PKE w.r.t. affine functions with compact ciphertexts, which consist only of a constant number of group elements; -- we construct the first efficient KDM$[\mathcal{F}_{\text{poly}}^d]$-CCA secure PKE w.r.t. polynomial functions of bounded degree $d$ with almost compact ciphertexts, and the number of group elements in a ciphertext is polynomial in $d$, independent of the security parameter. Our PKEs are both based on the DDH & DCR assumptions, free of NIZK and free of pairing.
Last updated:  2016-08-30
Faster Key Recovery Attack on Round-Reduced PRINCE
Shahram Rasoolzadeh, Håvard Raddum
We introduce a new technique for performing the key recovery part of an integral or higher-order differential attack. This technique speeds up the key recovery phase significantly and can be applied to any block cipher with S-boxes. We show several properties of this technique, then apply it to PRINCE and report on the improvements in complexity over earlier integral and higher-order differential attacks on this cipher. Our attacks on 4 and 6 rounds were the fastest, and won the last round of the PRINCE Challenge in the chosen-plaintext attack category.
Last updated:  2016-08-31
Security Analysis of BLAKE2's Modes of Operation
Atul Luykx, Bart Mennink, Samuel Neves
BLAKE2 is a hash function introduced at ACNS 2013, which has been adopted in many constructions and applications. It is a successor to the SHA-3 finalist BLAKE, which received a significant amount of security analysis. Nevertheless, BLAKE2 introduces sufficient changes so that not all results from BLAKE carry over, meaning new analysis is necessary. To date, all known cryptanalysis done on BLAKE2 has focused on its underlying building blocks, with little focus placed on understanding BLAKE2's generic security. We prove that BLAKE2's compression function is indifferentiable from a random function in a weakly ideal cipher model, which was not the case for BLAKE. This implies that there are no generic attacks against any of the modes that BLAKE2 uses.
Last updated:  2016-09-19
Rotational Cryptanalysis in the Presence of Constants
Tomer Ashur, Yunwen Liu
Rotational cryptanalysis is a statistical method for attacking ARX constructions. It was previously shown that ARX-C, i.e., ARX with the injection of constants, can be used to implement any function. In this paper we investigate how rotational cryptanalysis is affected when constants are injected into the state. We introduce the notion of an RX-difference, generalizing the idea of a rotational difference. We show how RX-differences behave around modular addition, and give a formula to calculate their transition probability. We experimentally verify the formula using Speck32/64, and present a 7-round distinguisher based on RX-differences. We then discuss two types of constants: round constants, and constants which are the result of using a fixed key, and provide recommendations to designers for the optimal choice of parameters.
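To mirror the kind of experimental verification mentioned above, here is a toy Monte Carlo sketch (ours, not the authors' code; the 16-bit word size and rotation amount 1 are arbitrary choices) that estimates the RX-transition probability through a single modular addition:

    import random

    W, GAMMA = 16, 1                 # word size and rotation amount (toy values)
    MASK = (1 << W) - 1

    def rol(x, r):
        return ((x << r) | (x >> (W - r))) & MASK

    def rx_transition_prob(d1, d2, d3, trials=1 << 18):
        """Estimate Pr[output RX-difference d3 | input RX-differences d1, d2]
        for z = x + y mod 2^W, where the RX-difference of a pair (x, x') is
        x' xor rol(x, GAMMA)."""
        hits = 0
        for _ in range(trials):
            x, y = random.getrandbits(W), random.getrandbits(W)
            xp, yp = rol(x, GAMMA) ^ d1, rol(y, GAMMA) ^ d2
            z, zp = (x + y) & MASK, (xp + yp) & MASK
            hits += (zp ^ rol(z, GAMMA)) == d3
        return hits / trials

    # For the all-zero RX-difference this should come out near 0.375, i.e.
    # 2^-1.415, the classical rotational probability for rotation amount 1.
    print(rx_transition_prob(0, 0, 0))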
Last updated:  2017-05-23
Revisiting Cascade Ciphers in Indifferentiability Setting
Chun Guo, Dongdai Lin, Meicheng Liu
Shannon defined an ideal $(\kappa,n)$-blockcipher as a secrecy system consisting of $2^{\kappa}$ independent $n$-bit random permutations. In this paper, we revisit the following question: in the ideal cipher model, can a cascade of several ideal $(\kappa,n)$-blockciphers realize an ideal $(2\kappa,n)$-blockcipher? The motivation goes back to Shannon's theory of product secrecy systems, and a similar question was considered by Even and Goldreich (CRYPTO '83) in different settings. We give the first positive answer: for the cascade of independent ideal $(\kappa,n)$-blockciphers with two alternated independent keys, four stages are necessary and sufficient to realize an ideal $(2\kappa,n)$-blockcipher, in the sense of indifferentiability of Maurer et al. (TCC 2004). This shows that cascading is capable of achieving key-length extension in settings where keys are \emph{not necessarily secret}.
Last updated:  2016-08-30
P2P Mixing and Unlinkable Bitcoin Transactions
Tim Ruffing, Pedro Moreno-Sanchez, Aniket Kate
Starting with Dining Cryptographers networks (DC-nets), several peer-to-peer (P2P) anonymous communication protocols have been proposed. Despite their strong anonymity guarantees, none of them has been employed in practice so far: most fail to simultaneously handle the crucial problems of slot collisions and malicious peers, while the remaining ones handle those at the cost of significantly increased latency (communication rounds), linear in the number of participating peers in the best case and quadratic in the worst case. We conceptualize these P2P anonymous communication protocols as P2P mixing, and present a novel P2P mixing protocol, DiceMix, that requires only constant (i.e., four) communication rounds in the best case, and $4+2f$ rounds in the worst case of $f$ malicious peers. As every individual malicious peer can prevent a protocol run from succeeding by omitting his messages, we find DiceMix, with its worst-case linear-round complexity, to be an optimal P2P mixing solution. On the application side, we find DiceMix to be an ideal privacy-enhancing primitive for crypto-currencies such as Bitcoin. The public verifiability of their pseudonymous transactions through publicly available ledgers (or blockchains) makes these systems highly vulnerable to a variety of linkability and deanonymization attacks. DiceMix allows pseudonymous users to make their transactions unlinkable to each other in a manner fully compatible with the existing systems. We demonstrate the efficiency of DiceMix with a proof-of-concept implementation. In our evaluation, DiceMix requires less than 8 seconds to mix 50 messages (160 bits, i.e., Bitcoin addresses), while the best protocol in the literature requires almost 3 minutes in a very similar setting. As a representative example, we apply DiceMix to define a protocol for creating unlinkable Bitcoin transactions. Finally, we discover a generic attack on P2P mixing protocols that exploits the implicit unfairness of a protocol with a dishonest majority to break anonymity. Our attack uses the attacker's real-world ability to omit some communication from an honest peer to deanonymize her input message. We also discuss how this attack is resolved in our application to crypto-currencies by employing uncorrelated input messages across different protocol runs.
Last updated:  2017-04-23
Post-Quantum Attribute-Based Signatures from Lattice Assumptions
Rachid El Bansarkhani, Ali El Kaafarani
Attribute-based signature (ABS) schemes constitute important and powerful primitives when it comes to protecting the privacy of the user's identity and signing information. More specifically, ABS schemes provide the advantage of anonymously signing a message once a given policy is satisfied. As opposed to other related privacy-preserving signatures, the verifier is not able to deduce from the signature which attributes have been used to satisfy the (public) signing policy. In this work, we propose the first lattice-based ABS scheme for expressive policies. More precisely, the scheme that we propose does not follow the traditional approach to building ABS schemes for expressive policies, i.e., using span programs or secret sharing schemes as in classical schemes. In fact, our approach is simpler and does not require such complex subroutines. We first construct a new $(t,B)$-threshold ABS scheme that allows signatures to be generated anonymously only if $t$ out of $p=|B|$ attributes are covered by valid credentials. Based on this scheme, we propose a lattice-based ABS scheme for expressive (AND, OR)-policies. For this, we construct a new credential aggregation system that is built on top of a modified variant of Boyen's signature scheme, which could be of independent interest. Our ABS scheme for expressive policies yields signature sizes that are linear in the number of attributes, similar to state-of-the-art classical ABS schemes.
Last updated:  2017-06-02
A Secure and Efficient Authentication Technique for Vehicular Ad-Hoc Networks
Maryam Rajabzadeh Asaar, Mahmoud Salmasizadeh, Willy Susilo, Akbar Majidi
Vehicular ad-hoc networks (VANETs) have been emerging thanks to recent technologies in wireless and network communications. The most fundamental task in VANETs is to enable message authentication between vehicles and roadside units. Message authentication using proxy vehicles has been proposed to significantly reduce the computational overhead of roadside units. In this type of message authentication scheme, proxy vehicles that verify multiple messages at the same time improve the computational efficiency of roadside units when there are a large number of vehicles in their coverage areas. In this paper, we first show that the only proxy-based authentication scheme (PBAS) presented for this goal, by Liu et al., cannot achieve authenticity of messages, and that it is also not resistant to impersonation and modification attacks or to the false acceptance of batches of invalid signatures. Next, we propose a new identity-based message authentication scheme using proxy vehicles (ID-MAP). To guarantee that it satisfies the message authentication requirement, the existential unforgeability of the underlying signature against adaptively chosen-message and identity attacks is proved under the Elliptic Curve Discrete Logarithm Problem in the random oracle model. It should be highlighted that ID-MAP is not only more efficient than PBAS, since it is pairing-free and does not use map-to-point hash functions, but also satisfies the security and privacy requirements of vehicular ad-hoc networks. Furthermore, analysis shows that the time required to verify 3000 messages with ID-MAP is reduced by 76% compared to that of PBAS.
Last updated:  2019-07-15
Multivariate Cryptography with Mappings of Discrete Logarithms and Polynomials
Duggirala Meher Krishna, Duggirala Ravi
In this paper, algorithms for multivariate public key cryptography and digital signatures are described. Plain messages and encrypted messages are arrays consisting of elements from a fixed finite ring or field. The encryption and decryption algorithms are based on multivariate mappings. The security of the private key depends on the difficulty of solving a system of parametric simultaneous multivariate equations involving polynomial or exponential mappings. The method is a general-purpose utility for most data encryption, digital certificate, or digital signature applications, and is suitable for security protocols at the application layer of the OSI model.
Last updated:  2016-08-28
Separating Computational and Statistical Differential Privacy in the Client-Server Model
Mark Bun, Yi-Hsiu Chen, Salil Vadhan
Differential privacy is a mathematical definition of privacy for statistical data analysis. It guarantees that any (possibly adversarial) data analyst is unable to learn too much information that is specific to an individual. Mironov et al.~(CRYPTO 2009) proposed several computational relaxations of differential privacy (CDP), which relax this guarantee to hold only against computationally bounded adversaries. Their work and subsequent work showed that CDP can yield substantial accuracy improvements in various multiparty privacy problems. However, these works left open whether such improvements are possible in the traditional client-server model of data analysis. In fact, Groce, Katz and Yerukhimovich~(TCC 2011) showed that, in this setting, it is impossible to take advantage of CDP for many natural statistical tasks. Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness indistinguishable proofs (zaps) for NP, there is in fact a computational task in the client-server model that can be efficiently performed with CDP, but is infeasible to perform with information-theoretic differential privacy.
Last updated:  2016-08-26
Virtual Grey-Boxes Beyond Obfuscation: A Statistical Security Notion for Cryptographic Agents
Shashank Agrawal, Manoj Prabhakaran, Ching-Hua Yu
We extend the simulation-based definition of Virtual Grey Box (VGB) security -- originally proposed for obfuscation (Bitansky and Canetti, 2010) -- to a broad class of cryptographic primitives. These include functional encryption, graded encoding schemes, bilinear maps (with uber assumptions), as well as unexplored ones like homomorphic functional encryption. Our main result is a characterization of VGB security, in all these cases, in terms of an indistinguishability-preserving notion of security, called $\Gamma^*-s-\textsf{IND}-\textsf{PRE}$ security, formulated using an extension of the recently proposed Cryptographic Agents framework (Agrawal et al., 2015). We further show that this definition is equivalent to an indistinguishability-based security definition that is restricted to 'concentrated' distributions (wherein the outcome of any computation on encrypted data is essentially known ahead of the computation). A result of Bitansky et al. (2014), who showed that VGB obfuscation is equivalent to strong indistinguishability obfuscation (SIO), is obtained by specializing our result to obfuscation. Our proof, while sharing various elements with the proof of Bitansky et al., is simpler and significantly more general, as it uses $\Gamma^*-s-\textsf{IND}-\textsf{PRE}$ security as an intermediate notion. Our characterization also shows that the semantic security for graded encoding schemes (Pass et al., 2014) is in fact an instance of this same definition. We also present a composition theorem for $\Gamma^*-s-\textsf{IND}-\textsf{PRE}$ security. We can then recover the result of Bitansky et al. (2014) regarding the existence of VGB obfuscation for all NC1 circuits, simply by instantiating this composition theorem with a reduction from obfuscation of NC1 circuits to graded encoding schemes (Barak et al., 2014) and the assumption that there exists a $\Gamma^*-s-\textsf{IND}-\textsf{PRE}$ secure scheme for graded encodings (Pass et al., 2014).
Last updated:  2016-08-29
Composable Adaptive Secure Protocols without Setup under Polytime Assumptions
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
All previous constructions of general multiparty computation protocols that are secure against adaptive corruptions in the concurrent setting either require some form of setup or rely on non-standard assumptions. In this paper we provide the first general construction of a secure multi-party computation protocol without any setup that guarantees composable security in the presence of an adaptive adversary based on standard polynomial-time assumptions. We prove security under the notion of ``UC with super-polynomial helpers'' introduced by Canetti et al. (FOCS 2010), which is closed under universal composition and implies ``super-polynomial-time simulation''. Moreover, our construction relies on the underlying cryptographic primitives in a black-box manner. Next, we revisit the zero-one law for two-party secure function evaluation initiated by the work of Maji, Prabhakaran and Rosulek (CRYPTO 2010). According to this law, every two-party functionality is either trivial (meaning that it can be reduced to any other functionality) or complete (meaning that any other functionality can be reduced to it) in the Universal Composability (UC) framework. As our second contribution, assuming the existence of a simulatable public-key encryption scheme, we establish a zero-one law in the adaptive setting. Our result implies that every two-party non-reactive functionality is either trivial or complete in the UC framework in the presence of adaptive, malicious adversaries.
Last updated:  2016-08-26
Secure Obfuscation in a Weak Multilinear Map Model
Sanjam Garg, Eric Miles, Pratyay Mukherjee, Amit Sahai, Akshayaram Srinivasan, Mark Zhandry
All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security available for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure only reveals whether an encoded element is 0, and nothing more. However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai and Zhandry [Crypto'16] recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi [Eurocrypt'13], and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13. In this work, we give a new iO candidate which can be seen as a small modification or generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters [FOCS'13]. We prove its security in the weak multilinear map model, thus giving the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13. The proof of security relies on a new assumption about the hardness of computing annihilating polynomials, and we show that this assumption is implied by the existence of pseudorandom functions in $\text{NC}^1$.
Last updated:  2016-08-26
Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
Mark Bun, Thomas Steinke
"Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy."
Last updated:  2016-08-26
Secure Multiparty RAM Computation in Constant Rounds
Sanjam Garg, Divya Gupta, Peihan Miao, Omkant Pandey
Securing computation of a random access machine (RAM) program typically entails that it be first converted into a circuit. This conversion is unimaginable in the context of big-data applications where the size of the circuit can be exponential in the running time of the original RAM program. Realizing these constructions, without relinquishing the efficiency of RAM programs, often poses considerable technical hurdles. Our understanding of these techniques in the multi-party setting is largely limited. Specifically, the round complexity of all known protocols grows linearly in the running time of the program being computed. In this work, we consider the multi-party case and obtain the following results: 1. Semi-honest model: We present a constant-round black-box secure computation protocol for RAM programs. This protocol is obtained by building on the new black-box garbled RAM construction by Garg, Lu, and Ostrovsky [FOCS 2015], and constant-round secure computation protocol for circuits of Beaver, Micali, and Rogaway [STOC 1990]. This construction allows execution of multiple programs on the same persistent database. 2. Malicious model: Next, we show how to extend our semi-honest results to the malicious setting, while ensuring that the new protocol is still constant-round and black-box in nature.
Last updated:  2016-08-25
Adaptive Security of Yao's Garbled Circuits
Zahra Jafargholi, Daniel Wichs
A garbling scheme is used to garble a circuit $C$ and an input $x$ in a way that reveals the output $C(x)$ but hides everything else. Yao's construction from the 80's is known to achieve selective security, where the adversary chooses the circuit $C$ and the input $x$ in one shot. It has remained an open problem whether the construction also achieves adaptive security, where the adversary can choose the input $x$ after seeing the garbled version of the circuit $C$. A recent work of Hemenway et al. (CRYPTO '16) modifies Yao's construction and shows that the resulting scheme is adaptively secure. This is done by encrypting the garbled circuit from Yao's construction with a special type of ``somewhere equivocal encryption'' and giving the key together with the garbled input. The efficiency of the scheme and the security loss of the reduction are captured by a certain pebbling game over the circuit. In this work we prove that Yao's construction itself is already adaptively secure, where the security loss can be captured by the same pebbling game. For example, we show that for circuits of depth $d$, the security loss of our reduction is $2^{O(d)}$, meaning that Yao's construction is adaptively secure for $\text{NC}^1$ circuits without requiring complexity leveraging. Our technique is inspired by the ``nested hybrids'' of Fuchsbauer et al. (Asiacrypt '14, CRYPTO '15) and relies on a careful sequence of hybrids where each hybrid involves some limited guessing about the adversary's adaptive choices. Although it doesn't match the parameters achieved by Hemenway et al. in their full generality, the main advantage of our work is to prove the security of Yao's construction as is, without any additional encryption layer.
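To spell out the parameter claim: $\text{NC}^1$ circuits have depth $d = O(\log n)$, so the security loss is
\[ 2^{O(d)} = 2^{O(\log n)} = n^{O(1)}, \]
i.e., only a polynomial factor, which is why adaptive security for $\text{NC}^1$ follows without complexity leveraging.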
Last updated:  2016-08-25
Fast Pseudorandom Functions Based on Expander Graphs
Benny Applebaum, Pavel Raykov
We present direct constructions of pseudorandom function (PRF) families based on Goldreich's one-way function. Roughly speaking, we assume that non-trivial local mappings $f:\{0,1\}^n\rightarrow \{0,1\}^m$ whose input-output dependency graph forms an expander are hard to invert. We show that this one-wayness assumption yields PRFs with relatively low complexity. This includes weak PRFs which can be computed in linear time $O(n)$ on a RAM machine with $O(\log n)$ word size, or by a depth-3 circuit with unbounded fan-in AND and OR gates ($\text{AC}^0$ circuit), and standard PRFs that can be computed by a quasilinear size circuit or by a constant-depth circuit with unbounded fan-in AND, OR and Majority gates ($\text{TC}^0$). Our proofs are based on a new search-to-decision reduction for expander-based functions. This extends a previous reduction of the first author (STOC 2012) which was applicable for the special case of \emph{random} local functions. Additionally, we present a new family of highly efficient hash functions whose output on exponentially many inputs jointly forms (with high probability) a good expander graph. These hash functions are based on the techniques of Miles and Viola (Crypto 2012). Although some of our reductions provide only relatively weak security guarantees, we believe that they yield a novel approach for constructing PRFs, and therefore enrich the study of pseudorandomness.
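A minimal Python sketch of the underlying object may help: each output bit of a Goldreich-style local function applies a fixed $d$-local predicate to $d$ input bits selected by a dependency graph. The randomly sampled graph and the common XOR-AND predicate below are illustrative stand-ins only; the paper's assumption requires the graph to be a good expander.

    import random

    def goldreich(x, graph, predicate):
        # One output bit per hyperedge: apply the predicate to the selected inputs.
        return [predicate([x[i] for i in edge]) for edge in graph]

    def xor_and(z):
        # The 5-local XOR-AND predicate z1 + z2 + z3 + z4*z5 (mod 2).
        return z[0] ^ z[1] ^ z[2] ^ (z[3] & z[4])

    random.seed(1)
    n, m, d = 64, 128, 5
    graph = [random.sample(range(n), d) for _ in range(m)]  # random stand-in for an expander
    x = [random.randint(0, 1) for _ in range(n)]
    y = goldreich(x, graph, xor_and)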
Last updated:  2016-08-25
Towards Non-Black-Box Separations of Public Key Encryption and One Way Function
Dana Dachman-Soled
Separating public key encryption from one way functions is one of the fundamental goals of complexity-based cryptography. Beginning with the seminal work of Impagliazzo and Rudich (STOC, 1989), a sequence of works have ruled out certain classes of reductions from public key encryption (PKE)---or even key agreement---to one way function. Unfortunately, known results---so called black-box separations---do not apply to settings where the construction and/or reduction are allowed to directly access the code, or circuit, of the one way function. In this work, we present a meaningful, non-black-box separation between public key encryption (PKE) and one way function. Specifically, we introduce the notion of $\textsf{BBN}^-$ reductions (similar to the $\textsf{BBN}\text{p}$ reductions of Baecher et al. (ASIACRYPT, 2013)), in which the construction $E$ accesses the underlying primitive in a black-box way, but wherein the universal reduction $R$ receives the efficient code/circuit of the underlying primitive as input and is allowed oracle access to the adversary $\textsf{Adv}$. We additionally require that the number of oracle queries made to $\textsf{Adv}$, and the success probability of $R$ are independent of the run-time/circuit size of the underlying primitive. We prove that there is no non-adaptive, $\textsf{BBN}^-$ reduction from PKE to one way function, under the assumption that certain types of strong one way functions exist. Specifically, we assume that there exists a regular one way function $f$ such that there is no Arthur-Merlin protocol proving that ``$z \not\in \textsf{Range}(f)$'', where soundness holds with high probability over ``no instances,'' $y \sim f(U_n)$, and Arthur may receive polynomial-sized, non-uniform advice. This assumption is related to the average-case analogue of the widely believed assumption $\textbf{coNP} \not\subseteq \textbf{NP}/\textbf{poly}$.
Last updated:  2019-04-25
MILP-Aided Bit-Based Division Property for Primitives with Non-Bit-Permutation Linear Layers
Ling Sun, Wei Wang, Meiqin Wang
Division property is a general integral property introduced by Todo at EUROCRYPT 2015. Recently, at ASIACRYPT 2016, Xiang et al. applied the Mixed Integer Linear Programming (MILP) method to search bit-based division property, and handled the complexity which restricted the application of bit-based division property proposed by Todo and Morii at FSE 2016. However, their MILP-aided search was only applied to some lightweight block ciphers whose linear layers were limited to bit-permutations, and the feasibility of MILP-aided bit-based division property for ciphers with non-bit-permutation linear layers was an open problem. This paper answers this question in the affirmative. First, we transform the complicated linear layers to their primitive representations, which only involve Copy and XOR operations. Then, the original Copy and XOR models are respectively generalized to deal with more output branches and input elements, and these generalized models are adopted to depict the primitive representations. Accordingly, the MILP-aided bit-based division property can be applied to many more primitives with complicated linear layers. As an illustration, we first evaluate the bit-based division properties of some word-oriented block ciphers including Midori64, LED, Joltik-BC, and AES. For Midori64, we obtain a 7-round integral distinguisher, which achieves one more round than the previous results. At the same time, the data requirements of some existing distinguishers are also reduced. We decrease the number of required chosen plaintexts of 4-round and 5-round integral distinguishers for LED and Joltik-BC by half. As to AES, our searching experiments show that integral distinguishers, which are based on the bit-based division property, covering more than four rounds probably do not exist. Then, the bit-based division properties of some bit-oriented block ciphers, such as Serpent and Noekeon, are considered. The data complexities of their distinguishers for short rounds are improved. Moreover, we evaluate the bit-based division properties of the internal permutations involved in some hash functions, e.g., SPONGENT and PHOTON. An 18-round zero-sum distinguisher for SPONGENT-88 is proposed, which achieves four more rounds than the previous ones. We also provide 20-round and 21-round zero-sum distinguishers for SPONGENT-128 and SPONGENT-160, respectively. For most PHOTON permutations $P_{t}$ with 4-bit cells, the data requirements for the 4-round distinguishers are reduced by half. Besides, the length of $P_{256}$'s distinguisher is extended by one round. Furthermore, for $P_{288}$, which uses 8-bit S-boxes, we improve the data complexities of its integral distinguishers significantly.
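In rough terms (our notation), the MILP models in question encode division-trail propagation with linear constraints over binary variables: for COPY, $a \rightarrow (b_1, b_2)$ is modeled by $a = b_1 + b_2$, and for XOR, $(a_1, a_2) \rightarrow b$ is modeled by $b = a_1 + a_2$. The generalizations used here extend these to $m$ branches and $m$ inputs,
\[ a = b_1 + b_2 + \cdots + b_m \quad \text{and} \quad b = a_1 + a_2 + \cdots + a_m, \]
which is exactly what is needed to depict the Copy/XOR primitive representation of a complicated linear layer.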
Last updated:  2016-08-25
Fault Injection using Crowbars on Embedded Systems
Colin O'Flynn
Causing a device to incorrectly execute an instruction or store faulty data is a well-known strategy for attacking cryptographic implementations on embedded systems. One technique to generate such faults is to manipulate the supply voltage of the device. This paper introduces a novel technique to introduce those supply voltage manipulations onto existing digital systems, requiring minimal modifications to the device being attacked. The technique uses a crowbar to short the power supply for controlled periods of time. High-accuracy faults are demonstrated on an 8-bit AVR microcontroller; the technique can generate both single- and multi-bit faults with high repeatability. Additionally, this technique is demonstrated on an FPGA, where it is capable of generating faults in both internal registers and the configuration fabric.
Last updated:  2016-08-25
Binary AMD Circuits from Secure Multiparty Computation
Daniel Genkin, Yuval Ishai, Mor Weiss
An AMD circuit over a finite field $\mathbb F$ is a randomized arithmetic circuit that offers the ``best possible protection'' against additive attacks. That is, the effect of every additive attack that may blindly add a (possibly different) element of $\mathbb F$ to every internal wire of the circuit can be simulated by an ideal attack that applies only to the inputs and outputs. Genkin et al. (STOC 2014, Crypto 2015) introduced AMD circuits as a means for protecting MPC protocols against active attacks, and showed that every arithmetic circuit $C$ over $\mathbb F$ can be transformed into an equivalent AMD circuit of size $O(|C|)$ with $O(1/|\mathbb F|)$ simulation error. However, for the case of the binary field $\mathbb F=\mathbb F_2$, their constructions relied on a tamper-proof output decoder and could only realize a weaker notion of security. We obtain the first constructions of fully secure binary AMD circuits. Given a boolean circuit $C$ and a statistical security parameter $s$, we construct an equivalent binary AMD circuit $C'$ of size $|C|\cdot \mathrm{polylog}(|C|,s)$ (ignoring lower order additive terms) with $2^{-s}$ simulation error. That is, the effect of toggling an arbitrary subset of wires can be simulated by toggling only input and output wires. Our construction combines in a general way two types of ``simple'' honest-majority MPC protocols: protocols that only offer security against passive adversaries, and protocols that only offer correctness against active adversaries. As a corollary, we get a conceptually new technique for constructing active-secure two-party protocols in the OT-hybrid model, and reduce the open question of obtaining such protocols with constant computational overhead to a similar question in these simpler MPC models.
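To illustrate the flavor of additive-attack protection, consider a classical AMD-code gadget (not this paper's construction): encode $x \in \mathbb F$ as $(x, r, x\cdot r)$ for uniform $r$, and verify that the third component equals the product of the first two. An additive attack shifts the encoding to $(x+a, r+b, xr+c)$, and the check $(x+a)(r+b) = xr + c$ becomes
\[ a\cdot r + b\cdot x + ab - c = 0, \]
which, for $a \neq 0$, holds with probability at most $1/|\mathbb F|$ over the choice of $r$. The difficulty addressed in this paper is that over $\mathbb F_2$ this detection probability is only $1/2$.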
Last updated:  2016-08-25
Simulating Auxiliary Inputs, Revisited
Maciej Skorski
For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. If the domain of $Z$ is small, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}. This idea of simulating auxiliary information turns out to be a very useful tool, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) We present a novel boosting algorithm for constructing the simulator. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures by shifting distinguishers in descent algorithms. Our technique essentially fixes the flaw in the TCC'14 paper "How to Fake Auxiliary Inputs". (b) The complexity of our simulator is better than in previous works, including results derived from the uniform min-max theorem due to Vadhan and Zheng. To achieve $(s,\epsilon)$-indistinguishability we need complexity $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which improves previous bounds by a factor of $\epsilon^{-2}$. In particular, we get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like $\mathsf{AES256}$. Our boosting technique utilizes a two-step approach. In the first step we shift the current result (as in gradient or sub-gradient descent algorithms) and in a separate step we fix the biggest non-negativity constraint violation (if applicable).
Last updated:  2016-09-02
Multilateral White-Box Cryptanalysis: Case study on WB-AES of CHES Challenge 2016
Hyunjin Ahn, Dong-Guk Han
The security requirement of white-box cryptography (WBC) is that it should protect the secret key in a white-box security model that permits an adversary who is able to entirely control the execution of the cryptographic algorithm and its environment. It has already been demonstrated that most WBCs are vulnerable to algebraic attacks from a white-box security perspective. Recently, a new differential computation analysis (DCA) attack has been proposed that thwarts the white-box implementation of the block cipher AES (WB-AES) by monitoring the memory information accessed during the execution of the algorithm. Although the attack requires the ability to estimate the internal information of the memory pattern, it retrieves the secret key after a few attempts. In addition, it has been proposed that hardware implementations of WB-AES are vulnerable to differential power analysis (DPA) attacks. In this paper, we propose a DPA-based attack that directly exploits the intermediate values of the WB-AES computation without requiring memory data. We also demonstrate its practicability with respect to a public software implementation of WB-AES. Additionally, we investigate the vulnerability of our target primitive to DPA by acquiring actual power consumption traces of the software implementation.
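The flavor of such an analysis can be illustrated with a toy difference-of-means DPA on synthetic traces. This is a minimal sketch: the S-box, leakage model, and parameters below are invented stand-ins, not the paper's WB-AES target.

    import numpy as np

    def toy_sbox(v):
        # Stand-in nonlinear S-box (not AES's): cubing modulo the prime 257.
        return pow(int(v), 3, 257) & 0xFF

    def dom_statistic(traces, plaintexts, guess, bit=0):
        # Partition traces by a predicted S-box output bit; peak difference of means.
        pred = np.array([(toy_sbox(int(p) ^ guess) >> bit) & 1 for p in plaintexts], dtype=bool)
        return np.abs(traces[pred].mean(axis=0) - traces[~pred].mean(axis=0)).max()

    rng = np.random.default_rng(0)
    key, n_traces = 0x3A, 5000
    plaintexts = rng.integers(0, 256, n_traces)
    # Simulated leakage: Hamming weight of the S-box output plus Gaussian noise.
    hw = np.array([bin(toy_sbox(int(p) ^ key)).count("1") for p in plaintexts])
    traces = hw[:, None] + rng.normal(0, 2, (n_traces, 20))
    best_guess = max(range(256), key=lambda k: dom_statistic(traces, plaintexts, k))
    # With enough traces, best_guess recovers key with high probability.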
Last updated:  2016-08-25
Healing the Hill Cipher, Improved Approach to Secure Modified Hill against Zero-plaintext Attack
Mohammad Hadi Valizadeh
Hill Cipher is a symmetric cryptosystem that has long been known to be vulnerable to known-plaintext attacks. Different methods have been proposed to make this cipher more secure against known attacks. The modified Hill cipher introduced by Tourani and Falahati in 2011, devised in two variants and based upon an affine transformation, was considered more secure against known attacks. Recently, this modified Hill cipher was claimed to be vulnerable to a zero-plaintext attack. In this paper, by using a chaotic map and scrambling methods, a novel cryptosystem based on the Tourani and Falahati Hill cipher is presented which overcomes the zero-plaintext attack. The proposed Hill cipher is more reliable and faster.
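For context, recall the textbook scheme: plaintext blocks are column vectors $P$ over $\mathbb Z_{26}$ and the key is an invertible matrix $K$, with
\[ C = K P \bmod 26, \qquad P = K^{-1} C \bmod 26. \]
The classical known-plaintext attack collects $n$ plaintext/ciphertext block pairs into matrices $P$ and $C$ and recovers $K = C P^{-1} \bmod 26$ whenever $P$ is invertible, which is the weakness the modified variants aim to eliminate.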
Last updated:  2019-04-03
Constant-Round Maliciously Secure Two-Party Computation in the RAM Model
Carmit Hazay, Avishay Yanai
The random-access memory (RAM) model of computation allows programs constant-time memory lookups and is more applicable in practice today, covering many important algorithms. This is in contrast to the classic setting of secure two-party computation (2PC), which mostly follows the approach in which the desired functionality must be represented as a boolean circuit. In this work we design the first constant-round maliciously secure two-party protocol in the RAM model. Our starting point is the garbled RAM construction of Gentry et al. (EUROCRYPT 2014) that readily induces a constant-round semi-honest two-party protocol for any RAM program, assuming identity-based encryption schemes. We show how to enhance the security of their construction into the malicious setting while facing several challenges that stem from handling the data memory. Next, we show how to apply our techniques to a more recent garbled RAM construction by Garg et al. (STOC 2015) that is based on one-way functions.
Last updated:  2016-08-24
Multi-Key Homomorphic Authenticators
Dario Fiore, Aikaterini Mitrokotsa, Luca Nizzardo, Elena Pagnin
Homomorphic authenticators (HAs) enable a client to authenticate a large collection of data elements $m_1, \dots, m_t$ and outsource them, along with the corresponding authenticators, to an untrusted server. At any later point, the server can generate a {\em short} authenticator $\sigma_{f, y}$ vouching for the correctness of the output $y$ of a function $f$ computed on the outsourced data, i.e., $y = f(m_1,\dots,m_t)$. Recently, researchers have focused on HAs as a solution, with minimal communication and interaction, to the problem of delegating computation on outsourced data. The notion of HAs studied so far, however, only supports executions (and proofs of correctness) of computations over data authenticated by a single user. Motivated by realistic scenarios (ubiquitous computing, sensor networks, etc.) in which large datasets include data provided by multiple users, we study the concept of {\em multi-key homomorphic authenticators}. In a nutshell, multi-key HAs are like HAs with the extra feature of allowing the holder of public evaluation keys to compute on data authenticated under different secret keys. First, we introduce and formally define multi-key HAs. Second, we propose a construction of a multi-key homomorphic signature based on standard lattices and supporting the evaluation of circuits of bounded polynomial depth. Third, we provide a construction of multi-key homomorphic MACs based only on pseudorandom functions and supporting the evaluation of low-degree arithmetic circuits. Albeit less expressive and only secretly verifiable, the latter construction presents interesting efficiency properties.
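One natural way to render the syntax (our sketch, not the paper's exact formalization): each user $id$ holds a key pair $(sk_{id}, vk_{id})$, and authentication, evaluation, and verification take the shape
\[ \sigma_i \leftarrow \mathrm{Auth}(sk_{id_i}, m_i), \qquad \sigma_{f,y} \leftarrow \mathrm{Eval}\big(f, (\sigma_1,\dots,\sigma_t)\big), \qquad \mathrm{Ver}\big(\{vk_{id}\}_{id}, f, y, \sigma_{f,y}\big) \in \{0,1\}, \]
where evaluation uses only public keys and the verifier accepts only if $y = f(m_1,\dots,m_t)$ for data authenticated under the respective secret keys.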
Last updated:  2016-08-24
Biometric Based Network Security Using MIPS Cryptography Processor
Kirat Pal Singh
Advances in network-on-chip (NoC) and system-on-chip (SoC) microelectronics and sensors have enabled a variety of wireless communication network technologies. In the past few years, many researchers have focused on building network-monitoring system architectures to meet the technical requirements of network security. Less research has addressed strong biometric-based network security systems that provide bulletproof security. The popular MIPS-based cryptography processor is used in hardware and software products, and standards require large cryptographic key lengths for higher security levels. A major weakness of conventional cryptosystems based on asymmetric algorithms is the need to store secret keys: stored keys are often protected by poorly selected user passwords that can either be guessed or obtained through brute-force attacks. Combining biometrics with a MIPS cryptography processor is a possible solution. In this paper I propose a new approach to network security using a MIPS-based crypto processor combined with a contactless palm-vein biometric system. This approach takes NoC constraints and topology into account, and it provides more security with a shorter key length and without the need to store any private key anywhere.
Last updated:  2017-02-20
Proofs of Data Residency: Checking whether Your Cloud Files Have Been Relocated
Hung Dang, Erick Purwanto, Ee-Chien Chang
While cloud storage services offer manifold benefits such as cost-effectiveness or elasticity, there also exist various security and privacy concerns. Among such concerns, we pay primary attention to data residency, a notion that requires outsourced data to be retrievable in its entirety from the local drives of the storage server in question. We formulate this notion in a security model called Proofs of Data Residency (PoDR). PoDR can be employed to check whether the data is replicated across different storage servers, or combined with storage server geolocation to ``locate'' the data in the cloud. We make the key observations that the data residency checking protocol should exclude all server-side computation and that each challenge should ask for no more than a single atomic fetching operation. We illustrate challenges and subtleties in protocol design by showing potential attacks on naive constructions. Next, we present a secure PoDR scheme structured as a timed challenge-response protocol. Two implementation variants of the proposed solution, namely N-ResCheck and E-ResCheck, describe an interesting use-case of trusted computing, in particular the use of Intel SGX, in cryptographic timed challenge-response protocols, whereby co-locating the verifier with the prover offers a security enhancement. Finally, we conduct extensive experiments to demonstrate potential attacks on insecure constructions and to validate the performance as well as the security of our solution.
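A toy rendition of the timed challenge-response structure may be useful. This is a minimal local simulation under invented parameters (digests, block_size, t_max); the actual protocols additionally exclude server-side computation and, in one variant, use Intel SGX on the verifier side.

    import hashlib, random, time

    def residency_check(path, digests, block_size, n_challenges, t_max):
        # Verifier: challenge random blocks; each response must be correct and
        # arrive fast enough that a fetch from remote storage would be detected.
        n_blocks = len(digests)
        with open(path, "rb") as f:
            for _ in range(n_challenges):
                i = random.randrange(n_blocks)
                start = time.perf_counter()
                f.seek(i * block_size)
                block = f.read(block_size)            # single atomic fetch
                elapsed = time.perf_counter() - start
                if elapsed > t_max:
                    return False                      # too slow: likely relocated
                if hashlib.sha256(block).hexdigest() != digests[i]:
                    return False                      # wrong data
        return True

Here digests holds per-block SHA-256 values computed at outsourcing time, and t_max would be calibrated to local-drive latency.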
Last updated:  2016-08-24
Blind Web Search: How far are we from a privacy preserving search engine?
Gizem S. Çetin, Wei Dai, Yarkın Doröz, William J. Martin, Berk Sunar
Recent rapid progress in fully homomorphic encryption (FHE) and somewhat homomorphic encryption (SHE) has catalyzed renewed efforts to develop efficient privacy preserving protocols. Several works have already appeared in the literature that provide solutions to these problems by employing FHE or SHE techniques. In this work, we focus on a natural application where privacy is a major concern: web search. An estimated 5 billion web queries are processed by the world's leading search engines each day. It is no surprise, then, that privacy-preserving web search was proposed as the paragon FHE application in Gentry's seminal FHE paper. Indeed, numerous proposals have emerged in the intervening years that attack various privatized search problems over encrypted user data, e.g. private information retrieval (PIR). Yet, there is no known work that focuses on implementing a completely blind web search engine using an FHE/SHE construction. In this work, we focus first on single keyword queries with exact matches, aiming toward real-world viability. We then discuss multiple-keyword searches and tackle a number of issues currently hindering practical implementation, such as communication and computational efficiency.
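As one concrete ingredient for exact single-keyword matching (the standard encrypted-equality gadget, not a description of the paper's full pipeline): if a query word $q$ and a document word $w$ of $\ell$ bits each are encrypted bit by bit, their equality can be computed homomorphically over $\mathbb F_2$ as
\[ \mathrm{eq}(q, w) = \prod_{i=1}^{\ell} \big(1 \oplus q_i \oplus w_i\big), \]
a circuit of multiplicative depth $\lceil \log_2 \ell \rceil$ when the product is evaluated as a balanced tree.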
Last updated:  2016-08-24
Almost-Optimally Fair Multiparty Coin-Tossing with Nearly Three-Quarters Malicious
Bar Alon, Eran Omri
An $\alpha$-fair coin-tossing protocol allows a set of mutually distrustful parties to generate a uniform bit, such that no efficient adversary can bias the output bit by more than $\alpha$. Cleve [STOC 1986] has shown that if half of the parties can be corrupted, then no $r$-round coin-tossing protocol is $o(1/r)$-fair. For over two decades the best known $m$-party protocols, tolerating up to $t\geq m/2$ corrupted parties, were only $O(t/\sqrt{r})$-fair. In a surprising result, Moran, Naor, and Segev [TCC 2009] constructed an $r$-round two-party $O(1/r)$-fair coin-tossing protocol, i.e., an optimally fair protocol. Beimel, Omri, and Orlov [Crypto 2010] extended the results of Moran et al.~to the {\em multiparty setting} where strictly fewer than 2/3 of the parties are corrupted. They constructed a $2^{2^k}/r$-fair $r$-round $m$-party protocol, tolerating up to $t=\frac{m+k}{2}$ corrupted parties. Recently, in a breakthrough result, Haitner and Tsfadia [STOC 2014] constructed an $O(\log^3(r)/r)$-fair (almost optimal) three-party coin-tossing protocol. Their work brings forth a combination of novel techniques for coping with the difficulties of constructing fair coin-tossing protocols. Still, the best coin-tossing protocols for the case where more than 2/3 of the parties may be corrupted (and even when $t=2m/3$, where $m>3$) were $\Theta(1/\sqrt{r})$-fair. We construct an $O(\log^3(r)/r)$-fair $m$-party coin-tossing protocol, tolerating up to $t$ corrupted parties, whenever $m$ is constant and $t<3m/4$.
Last updated:  2016-08-24
Efficient Batched Oblivious PRF with Applications to Private Set Intersection
Vladimir Kolesnikov, Ranjit Kumaresan, Mike Rosulek, Ni Trieu
We describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semi-honest adversaries. In an OPRF protocol a receiver has an input $r$; the sender gets output $s$ and the receiver gets output $F(s,r)$, where $F$ is a pseudorandom function and $s$ is a random seed. Our protocol uses a novel adaptation of 1-out-of-2 OT-extension protocols, and is particularly efficient when used to generate a large batch of OPRF instances. The cost to realize $m$ OPRF instances is roughly the cost to realize $3.5 m$ instances of standard 1-out-of-2 OTs (using state-of-the-art OT extension). We explore in detail our protocol's application to semi-honest secure private set intersection (PSI). The fastest state-of-the-art PSI protocol (Pinkas et al., Usenix 2015) is based on efficient OT extension. We observe that our OPRF can be used to remove their PSI protocol's dependence on the bit-length of the parties' items. We implemented both PSI protocol variants and found ours to be 3.1--3.6$\times$ faster than Pinkas et al.\ for PSI of 128-bit strings and sufficiently large sets. Concretely, ours requires only 3.8 seconds to securely compute the intersection of $2^{20}$-size sets, regardless of the bitlength of the items. For very large sets, our protocol is only $4.3\times$ slower than the {\em insecure} naïve hashing approach for PSI.
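A clear-text sketch of how an OPRF yields PSI may help. This is an ideal-functionality view in Python, with HMAC-SHA256 as an illustrative stand-in for $F$; in the real protocol the receiver evaluates $F$ obliviously, without learning the seed.

    import hmac, hashlib

    def F(seed, item):
        # Stand-in PRF; the receiver obtains F(s, r) via the OPRF without learning s.
        return hmac.new(seed, item.encode(), hashlib.sha256).digest()

    seed = b"sender's random seed s"
    sender_set = {"alice@x.com", "bob@y.com", "carol@z.com"}
    receiver_set = {"bob@y.com", "dave@w.com"}

    # Sender, knowing s, publishes F(s, x) for each of its items (in random order);
    # the receiver matches these against the OPRF outputs on its own items.
    sender_msgs = {F(seed, x) for x in sender_set}
    intersection = {r for r in receiver_set if F(seed, r) in sender_msgs}
    print(intersection)   # {'bob@y.com'}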
Last updated:  2018-12-08
On the Practical (In-)Security of 64-bit Block Ciphers: Collision Attacks on HTTP over TLS and OpenVPN
Karthikeyan Bhargavan, Gaëtan Leurent
While modern block ciphers, such as AES, have a block size of at least 128 bits, there are many 64-bit block ciphers, such as 3DES and Blowfish, that are still widely supported in Internet security protocols such as TLS, SSH, and IPsec. When used in CBC mode, these ciphers are known to be susceptible to collision attacks when they are used to encrypt around $2^{32}$ blocks of data (the so-called birthday bound). This threat has traditionally been dismissed as impractical since it requires some prior knowledge of the plaintext and even then, it only leaks a few secret bits per gigabyte. Indeed, practical collision attacks have never been demonstrated against any mainstream security protocol, leading to the continued use of 64-bit ciphers on the Internet. In this work, we demonstrate two concrete attacks that exploit collisions on short block ciphers. First, we present an attack on the use of 3DES in HTTPS that can be used to recover a secret session cookie. Second, we show how a similar attack on Blowfish can be used to recover HTTP BasicAuth credentials sent over OpenVPN connections. In our proof-of-concept demos, the attacker needs to capture about 785GB of data, which takes between 19 and 38 hours in our setting. This complexity is comparable to the recent RC4 attacks on TLS: the only fully implemented attack takes 75 hours. We evaluate the impact of our attacks by measuring the use of 64-bit block ciphers in real-world protocols. We discuss mitigations, such as disabling all 64-bit block ciphers, and report on the response of various software vendors to our responsible disclosure of these attacks.
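The birthday-bound arithmetic is worth spelling out: in CBC mode, two equal ciphertext blocks $C_i = C_j$ imply $P_i \oplus P_j = C_{i-1} \oplus C_{j-1}$, so a collision reveals the XOR of two plaintext blocks, and a collision is expected after about $2^{n/2}$ encrypted blocks. For $n = 64$,
\[ 2^{32} \text{ blocks} \times 8 \text{ bytes/block} = 2^{35} \text{ bytes} \approx 34\,\text{GB}, \]
while the quoted 785 GB corresponds to roughly $2^{36.5}$ blocks: a multiple of the birthday bound is needed to obtain, with good probability, collisions that actually involve the targeted secret blocks.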
Last updated:  2016-08-25
An MPC-based Privacy-Preserving Protocol for a Local Electricity Trading Market
Aysajan Abidin, Abdelrahaman Aly, Sara Cleemput, Mustafa A. Mustafa
This paper proposes a decentralised and privacy-preserving local electricity trading market. The proposed market employs a bidding protocol based upon secure multiparty computations and allows users to trade their excess electricity among themselves. The bid selection and calculation of the clearance price at which the electricity is traded are performed by the market in a decentralised and privacy-preserving manner. We implemented the market in C++ and tested its performance with realistic data sets. Our simulation results show that the market tasks can be performed for 2500 bids in less than five minutes in the online phase, showing its feasibility for a typical electricity trading period of, for example, 30 minutes.
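A clear-text sketch of one common market-clearing rule may be useful. The paper performs this computation under MPC so that bids stay private; the greedy matching and midpoint pricing below are illustrative conventions, not necessarily the paper's exact rule.

    def clearing(buy_bids, sell_bids):
        # Bids are (price, quantity). Match the highest bidder with the cheapest
        # seller while the bid price covers the ask price.
        buys = sorted(buy_bids, reverse=True)
        sells = sorted(sell_bids)
        price, traded = None, 0
        while buys and sells and buys[0][0] >= sells[0][0]:
            q = min(buys[0][1], sells[0][1])
            traded += q
            price = (buys[0][0] + sells[0][0]) / 2   # midpoint pricing convention
            buys[0] = (buys[0][0], buys[0][1] - q)
            sells[0] = (sells[0][0], sells[0][1] - q)
            if buys[0][1] == 0:
                buys.pop(0)
            if sells[0][1] == 0:
                sells.pop(0)
        return price, traded

    # Example: households selling excess solar power to neighbors.
    print(clearing([(0.25, 3), (0.20, 2)], [(0.15, 2), (0.18, 4)]))  # -> (0.19, 5)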
Last updated:  2016-08-20
Digital Signatures Based on the Hardness of Ideal Lattice Problems in all Rings
Vadim Lyubashevsky
Many practical lattice-based schemes are built upon the Ring-SIS or Ring-LWE problems, which are based on the presumed difficulty of finding low-weight solutions to linear equations over polynomial rings $Z_q[x]/\langle f(x) \rangle$. Our belief in the asymptotic computational hardness of these problems rests in part on the fact that there are reductions showing that solving them is as hard as finding short vectors in all lattices that correspond to ideals of the polynomial ring $Z[x]/\langle f(x) \rangle$. These reductions, however, do not give us an indication as to the effect that the polynomial $f(x)$, which defines the ring, has on the average-case or worst-case problems. As of today, no weaknesses have been found in the Ring-SIS or Ring-LWE problems when one uses an $f(x)$ which leads to a meaningful worst-case to average-case reduction, but there have been some recent algorithms for related problems that heavily use the algebraic structure of the underlying rings. It is thus conceivable that some rings could give rise to more difficult instances of Ring-SIS and Ring-LWE than other rings. A more ideal scenario would therefore be an average-case problem, allowing for efficient cryptographic constructions, that is based on the hardness of finding short vectors in ideals of $Z[x]/\langle f(x)\rangle$ for \emph{every} $f(x)$. In this work, we show that the above may actually be possible. We construct a digital signature scheme based (in the random oracle model) on a simple adaptation of the Ring-SIS problem which is as hard to break as worst-case problems in every $f(x)$ whose degree is bounded by the parameters of the scheme. Up to constant factors, our scheme is as efficient as the highly practical schemes that work over the ring $Z[x]/\langle x^n+1\rangle$.
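For reference, the underlying average-case problem has the following shape: over $R_q = Z_q[x]/\langle f(x) \rangle$, Ring-SIS asks, given uniformly random $a_1, \dots, a_k \in R_q$, to find ring elements $z_1, \dots, z_k$ with small coefficients, not all zero, such that
\[ \sum_{i=1}^{k} a_i z_i \equiv 0 \pmod{q}. \]
The contribution here is an adaptation of this problem whose hardness does not hinge on a particular choice of $f(x)$.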