All papers in 2014 (Page 4 of 1029 results)

Last updated:  2020-12-28
Faster Binary-Field Multiplication and Faster Binary-Field MACs
Daniel J. Bernstein, Tung Chou
This paper shows how to securely authenticate messages using just 29 bit operations per authenticated bit, plus a constant overhead per message. The authenticator is a standard type of "universal" hash function providing information-theoretic security; what is new is computing this type of hash function at very high speed. At a lower level, this paper shows how to multiply two elements of a field of size 2^128 using just 9062 ≈ 71 × 128 bit operations, and how to multiply two elements of a field of size 2^256 using just 22164 ≈ 87 × 256 bit operations. This performance relies on a new representation of field elements and new FFT-based multiplication techniques. This paper's constant-time software uses just 1.89 Core 2 cycles per byte to authenticate very long messages. On a Sandy Bridge it takes 1.43 cycles per byte, without using Intel's PCLMULQDQ polynomial-multiplication hardware. This is much faster than the speed records for constant-time implementations of GHASH without PCLMULQDQ (over 10 cycles/byte), even faster than Intel's best Sandy Bridge implementation of GHASH with PCLMULQDQ (1.79 cycles/byte), and almost as fast as state-of-the-art 128-bit prime-field MACs using Intel's integer-multiplication hardware (around 1 cycle/byte).
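The per-bit figures quoted in the abstract can be sanity-checked with a few lines of arithmetic (the operation counts 9062 and 22164 are taken from the abstract above; nothing else about the construction is implied):

```python
# Sanity-check the per-bit multiplication costs quoted in the abstract.
counts = {128: 9062, 256: 22164}  # field size in bits -> total bit operations

for bits, total_ops in counts.items():
    per_bit = total_ops / bits
    print(f"GF(2^{bits}): {total_ops} bit operations, about {per_bit:.1f} per bit")

assert round(counts[128] / 128) == 71   # matches 9062 ≈ 71 × 128
assert round(counts[256] / 256) == 87   # matches 22164 ≈ 87 × 256
```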
Last updated:  2015-10-27
Unpicking PLAID - A Cryptographic Analysis of an ISO-standards-track Authentication Protocol
Jean Paul Degabriele, Victoria Fehr, Marc Fischlin, Tommaso Gagliardoni, Felix Günther, Giorgia Azzurra Marson, Arno Mittelbach, Kenneth G. Paterson
The Protocol for Lightweight Authentication of Identity (PLAID) aims at secure and private authentication between a smart card and a terminal. Originally developed by a unit of the Australian Department of Human Services for physical and logical access control, PLAID has now been standardized as an Australian standard AS-5185-2010 and is currently in the fast-track standardization process for ISO/IEC 25185-1. We present a cryptographic evaluation of PLAID. As well as reporting a number of undesirable cryptographic features of the protocol, we show that the privacy properties of PLAID are significantly weaker than claimed: using a variety of techniques we can fingerprint and then later identify cards. These techniques involve a novel application of standard statistical and data analysis techniques in cryptography. We discuss potential countermeasures to our attacks and comment on our experiences with the standardization process of PLAID.
Last updated:  2014-09-19
The Q-curve Construction for Endomorphism-Accelerated Elliptic Curves
Benjamin Smith
We give a detailed account of the use of \(\mathbb{Q}\)-curve reductions to construct elliptic curves over \(\mathbb{F}_{p^2}\) with efficiently computable endomorphisms, which can be used to accelerate elliptic curve-based cryptosystems in the same way as Gallant--Lambert--Vanstone (GLV) and Galbraith--Lin--Scott (GLS) endomorphisms. Like GLS (which is a degenerate case of our construction), we offer the advantage over GLV of selecting from a much wider range of curves, and thus finding secure group orders when \(p\) is fixed for efficient implementation. Unlike GLS, we also offer the possibility of constructing twist-secure curves. We construct several one-parameter families of elliptic curves over \(\mathbb{F}_{p^2}\) equipped with efficient endomorphisms for every \(p > 3\), and exhibit examples of twist-secure curves over \(\mathbb{F}_{p^2}\) for the efficient Mersenne prime \(p = 2^{127}-1\).
Last updated:  2014-09-24
CIARP: A RISC Processor For Cryptography Applications
Nima Karimpour Darav, Reza Ebrahimi Atani, Erfan Aghaei, Ahmad Tahmasivand, Mahsa Rahmani, Mina Moazzam Jazi
Security is one of the most important features of industrial products. Cryptographic algorithms are mainly used for this purpose, to obtain confidentiality and integrity of data in industry. One of the main concerns of researchers in designing cryptographic algorithms is efficiency, in either software or hardware implementation. However, the efficiency of some well-known algorithms is highly questionable. The main goal of this paper is to present a novel processor architecture called CIARP (Crypto Instruction-Aware RISC Processor), suited to high-speed implementation of low-throughput cryptographic algorithms. CIARP is designed around a proposed instruction set named the Crypto Specific Instruction Set (CSIS), which can speed up encryption and decryption of data.
Last updated:  2015-01-13
Efficient Software Implementation of Ring-LWE Encryption
Ruan de Clercq, Sujoy Sinha Roy, Frederik Vercauteren, Ingrid Verbauwhede
Present-day public-key cryptosystems such as RSA and Elliptic Curve Cryptography (ECC) will become insecure when quantum computers become a reality. This paper presents the new state of the art in efficient software implementations of a post-quantum secure public-key encryption scheme based on the ring-LWE problem. We use a 32-bit ARM Cortex-M4F microcontroller as the target platform. Our contribution includes optimization techniques for fast discrete Gaussian sampling and efficient polynomial multiplication. This implementation beats all known software implementations, on any architecture, by at least one order of magnitude. We further show that our scheme beats all ECC-based public-key encryption schemes by at least one order of magnitude. At 128-bit security we require 121166 cycles per encryption and 43324 cycles per decryption, while at 256-bit security we require 261939 cycles per encryption and 96520 cycles per decryption. Gaussian sampling is done at an average of 28.5 cycles per sample.
Last updated:  2014-12-30
Protecting Encrypted Cookies from Compression Side-Channel Attacks
Janaka Alawatugoda, Douglas Stebila, Colin Boyd
Compression is desirable for network applications as it saves bandwidth; however, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to successful CRIME and BREACH attacks on web traffic protected by the Transport Layer Security (TLS) protocol. The general guidance in light of these attacks has been to disable compression, preserving confidentiality but sacrificing bandwidth. In this paper, we examine two techniques---heuristic separation of secrets and fixed-dictionary compression---for enabling compression while protecting high-value secrets, such as cookies, from attack. We model the security offered by these techniques and report on the amount of compressibility that they can achieve.
Last updated:  2014-09-16
Crypto-analyses on “secure and efficient privacy-preserving public auditing scheme for cloud storage”
Yalin Chen, Jue-Sam Chou
Recently, Worku et al. pointed out that the work “privacy-preserving public auditing for data storage security in cloud computing” proposed by Wang et al. is insecure, and that their second work, “privacy-preserving public auditing for secure cloud storage”, is inefficient. Thus, they offered a secure and efficient privacy-preserving public auditing scheme for cloud storage. They claimed that their system is provably secure in the random oracle model and operates efficiently. However, after cryptanalysis, we found that the scheme cannot reach its security goal: it is vulnerable to an existential forgery attack. We therefore modify it to incorporate the desired privacy-preserving requirement, which is very significant in a privacy-preserving public auditing protocol for cloud storage.
Last updated:  2014-09-16
Differential Cryptanalysis of SipHash
Christoph Dobraunig, Florian Mendel, Martin Schläffer
SipHash is an ARX-based message authentication code developed by Aumasson and Bernstein. SipHash was designed to be fast on short messages. Many implementations and applications of SipHash already exist, whereas its cryptanalysis lags behind. In this paper, we provide the first published third-party differential cryptanalysis of SipHash. We use existing automatic tools to find differential characteristics for SipHash. To improve the quality of the results, we propose several extensions for these tools. For instance, to get a good probability estimate for differential characteristics in SipHash, we generalize the concepts presented by Mouha et al. and Velichkov et al. for calculating the probability of ARX functions. Our results are a characteristic for SipHash-2-4 with a probability of $2^{-236.3}$ and a distinguisher for the Finalization of SipHash-2-4 with practical complexity. Even though our results do not pose any threat to the security of SipHash-2-4, they significantly improve on the results of the designers and give new insights into the security of SipHash-2-4.
Last updated:  2014-09-16
On Shor's Factoring Algorithm with More Registers and the Problem to Certify Quantum Computers
Zhengjun Cao, Zhenfu Cao
Shor's factoring algorithm uses two quantum registers. By introducing more registers, we show that the measured numbers in these registers, which are of the same pre-measurement state, should be equal if the original Shor complexity argument is sound. This contradicts the argument that the second register has $r$ possible measured values. An anonymous comment argues that the states in these registers are entangled. If so, entanglement involving many quantum registers cannot be interpreted by the mechanism of EPR pairs and the like. Given that this peculiar form of entanglement has not yet been mentioned or investigated, we think the claim that Shor's algorithm runs in polynomial time needs more physical verification. We also discuss the problem of certifying quantum computers.
Last updated:  2016-12-26
Adaptively Secure Constrained Pseudorandom Functions
Dennis Hofheinz, Akshay Kamath, Venkata Koppula, Brent Waters
A constrained pseudorandom function (PRF) behaves like a standard PRF, but with the added feature that the (master) secret key holder, having secret key K, can produce a constrained key, K_f, that allows evaluation of the PRF on the subset of the domain determined by a predicate function f within some family F. While previous constructions gave constrained PRFs for poly-sized circuits, all reductions for such functionality were in the selective model of security, where an attacker declares which point he is attacking before seeing any constrained keys. In this paper we give new constrained PRF constructions for circuits that have polynomial reductions to indistinguishability obfuscation in the random oracle model. Our solution is constructed from two recently emerged primitives: adaptively secure Attribute-Based Encryption (ABE) for circuits and universal parameters, as introduced by Hofheinz et al. Both primitives are constructible from indistinguishability obfuscation (iO) (and injective pseudorandom generators) with only polynomial loss.
Last updated:  2015-03-08
Bivariate Polynomials Modulo Composites and their Applications
Dan Boneh, Henry Corrigan-Gibbs
We investigate the hardness of finding solutions to bivariate polynomial congruences modulo RSA composites. We establish necessary conditions for a bivariate polynomial to be one-way, second preimage resistant, and collision resistant based on arithmetic properties of the polynomial. From these conditions we deduce a new computational assumption that implies an efficient algebraic collision-resistant hash function. We explore the assumption and relate it to known computational problems. The assumption leads to (i) a new statistically hiding commitment scheme that composes well with Pedersen commitments, (ii) a conceptually simple cryptographic accumulator, and (iii) an efficient chameleon hash function.
Last updated:  2014-09-16
Square Span Programs with Applications to Succinct NIZK Arguments
George Danezis, Cedric Fournet, Jens Groth, Markulf Kohlweiss
We propose a new characterization of NP using square span programs (SSPs). We first characterize NP as affine map constraints on small vectors. We then relate this characterization to SSPs, which are similar to, but simpler than, Quadratic Span Programs (QSPs) and Quadratic Arithmetic Programs (QAPs), since they use a single series of polynomials rather than two or three. We use SSPs to construct succinct non-interactive zero-knowledge arguments of knowledge. For performance, our proof system is defined over Type III bilinear groups; proofs consist of just 4 group elements, verified in just 6 pairings. Concretely, using the Pinocchio libraries, we estimate that proofs will consist of 160 bytes verified in less than 6 ms.
Last updated:  2015-09-23
How to Split a Secret into Unknown Shares
Ruxandra F. Olimid
Grigoriev and Shpilrain recently considered secret sharing systems for which nobody (including the dealer) knows the share of a particular party and introduced a construction for the special case of all-or-nothing schemes. We extend their work and propose two threshold secret sharing schemes that satisfy this property.
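For background, standard Shamir threshold sharing — in which the dealer does know every share — can be sketched as follows; the schemes above additionally arrange that nobody, the dealer included, learns any individual share. The prime modulus and parameters below are illustrative assumptions, not taken from the paper:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, chosen here only for illustration

def share(secret, t, n):
    """Standard (t, n) Shamir sharing: evaluate a random degree-(t-1)
    polynomial with constant term `secret` at points x = 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

shares = share(12345, 3, 5)
assert reconstruct(shares[:3]) == 12345  # any 3 of the 5 shares suffice
```

The modular inverse via `pow(den, -1, P)` requires Python 3.8 or later.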
Last updated:  2014-09-16
Wire-Tap Codes as Side-Channel Countermeasure - an FPGA-based experiment
Amir Moradi
In order to provide security against side-channel attacks, a masking scheme which makes use of wire-tap codes has recently been proposed. The scheme benefits from the features of binary linear codes, and its application to AES was presented in the seminal article. In this work – with respect to the underlying scheme – we re-iterate the fundamental operations of the AES cipher in a hopefully more understandable terminology. Considering an FPGA platform, we address the challenges each AES operation incurs in terms of implementation complexity. We show different scenarios for realizing the SubBytes operation, as the most critical issue is dealing with the large S-boxes encoded by the underlying scheme. Constructing various designs that realize a full AES-128 encryption engine of the scheme, we provide practical side-channel evaluations based on traces collected from a Spartan-6 FPGA platform. As a result, we show that – despite the nice features of the scheme – its advantages with respect to area and power overhead are very marginal unless its fault-detection ability is also employed.
Last updated:  2014-09-16
Cryptanalysis on `Robust Biometrics-Based Authentication Scheme for Multi-server Environment'
Vanga Odelu, Ashok Kumar Das, Adrijit Goswami
Authentication plays an important role in an open network environment, enabling two communicating parties to authenticate each other. Authentication protocols should protect sensitive information against a malicious adversary by providing a variety of services, such as authentication, privacy of user credentials, and user revocation and re-registration when the smart card is lost/stolen or the private key of a user or a server is revealed. Unfortunately, most of the existing multi-server authentication schemes proposed in the literature do not support fundamental security properties such as revocation and re-registration with the same identity. Recently, in 2014, He and Wang proposed a robust and efficient multi-server authentication scheme using biometrics-based smart cards and elliptic curve cryptography (ECC). In this paper, we analyze He-Wang's scheme and show that it is vulnerable to a known session-specific temporary information attack and an impersonation attack. In addition, we show that their scheme does not provide strong user anonymity. Furthermore, He-Wang's scheme cannot support the revocation and re-registration property. Apart from these, He-Wang's scheme has some design flaws, such as wrong-password login and its consequences, and wrong password update during the password change phase.
Last updated:  2014-09-16
A comprehensive empirical comparison of parallel ListSieve and GaussSieve
Artur Mariano, Ozgur Dagdelen, Christian Bischof
The security of lattice-based cryptosystems is determined by the performance of practical implementations of, among others, algorithms for the Shortest Vector Problem (SVP). In this paper, we conduct a comprehensive, empirical comparison of two SVP solvers: ListSieve and GaussSieve. We also propose a practical parallel implementation of ListSieve, which achieves super-linear speedups on multi-core CPUs, with efficiency levels as high as 183%. By comparing our implementation with a parallel implementation of GaussSieve, we show that ListSieve can, in fact, outperform GaussSieve for a large number of threads, thus answering a question that was open until now.
Last updated:  2014-09-12
Co-Location-Resistant Clouds
Yossi Azar, Seny Kamara, Ishai Menache, Mariana Raykova, Bruce Shepherd
We consider the problem of designing multi-tenant public infrastructure clouds resistant to cross-VM attacks without relying on single-tenancy or on assumptions about the cloud's servers. In a cross-VM attack (which has been demonstrated recently in Amazon EC2), an adversary launches malicious virtual machines (VMs) that perform side-channel attacks against co-located VMs in order to recover their contents. We propose a formal model in which to design and analyze \emph{secure} VM placement algorithms, which are online vector bin packing algorithms that simultaneously satisfy certain optimization constraints and notions of security. We introduce and formalize several notions of security, establishing formal connections between them. We also introduce a new notion of efficiency for online bin packing algorithms that better captures their cost in the setting of cloud computing. Finally, we propose a secure placement algorithm that achieves our strong notions of security when used with a new cryptographic mechanism we refer to as a shared deployment scheme.
Last updated:  2014-09-12
Hybrid Anomaly Detection using K-Means Clustering in Wireless Sensor Networks
Mohammad Wazid
Security is the biggest concern in Wireless Sensor Networks (WSNs), especially for those deployed for military applications and monitoring. They are prone to various attacks which degrade network performance very rapidly. Sometimes multiple attacks are launched in the network as a hybrid anomaly, in which case it is very difficult to determine which kind of anomaly is active. In this paper, we propose a hybrid anomaly detection technique based on k-means clustering. We analyze a network data set consisting of traffic data and end-to-end delay data, clustered using Weka 3.6.10. After clustering, we obtain threshold values for the network performance parameters (traffic and delay), which the hybrid anomaly detection technique uses to detect anomalies. During experimentation, we observed two types of anomalies activated in the network, causing misdirection and blackhole attacks.
Last updated:  2014-09-11
New Class of Multivariate Public Key Cryptosystem, K(XI)RSE(2)PKC, Constructed based on Reed-Solomon Code Along with K(X)RSE(2)PKC over $\mathbb{F}_2$
Masao KASAHARA
Extensive studies have been made of public key cryptosystems based on multivariate polynomials (multivariate PKC, MPKC) over $\mathbb{F}_2$ and $\mathbb{F}_{2^m}$. However, most of the proposed MPKCs have been shown to be insecure. In this paper, we propose a new class of MPKC based on the Reed-Solomon code, referred to as K(XI)RSE(2)PKC. In the Appendix, we present another class of MPKC referred to as K(X)RSE(2)PKC over $\mathbb{F}_2$. Both K(X)RSE(2)PKC and K(XI)RSE(2)PKC yield a coding rate of 1.0. We show that the proposed schemes can be sufficiently secure against various attacks, including the Gröbner basis attack.
Last updated:  2015-09-06
An Efficient Transform from Sigma Protocols to NIZK with a CRS and Non-Programmable Random Oracle
Yehuda Lindell
In this short paper, we present a Fiat-Shamir type transform that takes any Sigma protocol for a relation $R$ and outputs a non-interactive zero-knowledge proof (not of knowledge) for the associated language $L_R$, in the common reference string model. As in the Fiat-Shamir transform, we use a hash function $H$. However, zero-knowledge is achieved under standard assumptions in the common reference string model (without any random oracle), and soundness is achieved in the \emph{non-programmable} random oracle model. The concrete computational complexity of the transform is only slightly higher than the original Fiat-Shamir transform.
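For orientation, the classic Fiat-Shamir transform that this work modifies can be sketched on Schnorr's Sigma protocol for knowledge of a discrete logarithm. This is the standard random-oracle version, not the CRS / non-programmable-random-oracle variant of the paper, and the toy group parameters are far too small to be secure:

```python
import hashlib
import secrets

# Toy parameters (insecure sizes): p = 2q + 1 is a safe prime and
# g = 4 generates the order-q subgroup of Z_p^*.
p, q, g = 167, 83, 4

def H(*vals):
    """Hash function standing in for the random oracle; challenge in Z_q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Non-interactive proof of knowledge of x with h = g^x mod p."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)      # Sigma-protocol commitment
    e = H(g, h, a)        # challenge computed by hashing (Fiat-Shamir)
    z = (r + e * x) % q   # response
    return h, (a, e, z)

def verify(h, proof):
    a, e, z = proof
    return e == H(g, h, a) and pow(g, z, p) == (a * pow(h, e, p)) % p

h, pi = prove(17)
assert verify(h, pi)
```

The paper's transform keeps this overall shape but derives zero-knowledge from a common reference string rather than from programming the oracle.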
Last updated:  2014-09-09
A Note on Quantum Security for Post-Quantum Cryptography
Fang Song
Shor's quantum factoring algorithm and a few other efficient quantum algorithms break many classical crypto-systems. In response, people proposed post-quantum cryptography based on computational problems that are believed hard even for quantum computers. However, security of these schemes against \emph{quantum} attacks is elusive. This is because existing security analysis (almost) only deals with classical attackers, and arguing security in the presence of quantum adversaries is challenging due to unique quantum features such as no-cloning. This work proposes a general framework to study which classical security proofs can be restored in the quantum setting. Basically, we split a security proof into (a sequence of) classical security reductions, and investigate what security reductions are ``quantum-friendly''. We characterize sufficient conditions such that a classical reduction can be ``lifted'' to the quantum setting. We then apply our lifting theorems to post-quantum signature schemes. We are able to show that the classical generic construction of hash-tree based signatures from one-way functions and a more efficient variant proposed in~\cite{BDH11} carry over to the quantum setting. Namely, assuming existence of (classical) one-way functions that are resistant to efficient quantum inversion algorithms, there exists a quantum-secure signature scheme. We note that the scheme in~\cite{BDH11} is a promising (post-quantum) candidate to be implemented in practice, and our result further justifies it. Actually, to obtain these results, we formalize a simple criterion, which is motivated by many classical proofs in the literature and is straightforward to check. This makes our lifting theorem easier to apply, and it should be useful elsewhere to prove quantum security of proposed post-quantum cryptographic schemes.
Finally we demonstrate the generality of our framework by showing that several existing works (Full-Domain hash in the quantum random-oracle model~\cite{Zha12ibe} and the simple hybrid arguments framework in~\cite{HSS11}) can be reformulated under our unified framework.
Last updated:  2014-09-09
Formal Treatment of Privacy-Enhancing Credential Systems
Jan Camenisch, Stephan Krenn, Anja Lehmann, Gert Læssøe Mikkelsen, Gregory Neven, Michael Østergaard Pedersen
Privacy-enhancing attribute-based credentials (PABCs) are the core ingredient to privacy-friendly authentication systems, allowing users to obtain credentials on attributes and prove possession of these credentials in an unlinkable fashion while revealing only a subset of the attributes. To be useful in practice, however, PABCs typically need additional features such as i) revocation, ii) pooling prevention by binding credentials to users' secret keys, iii) pseudonyms as privacy-friendly user public keys, iv) proving equality of attributes without revealing their values, v) or advanced issuance where attributes can be "blindly" carried over into new credentials. Provably secure solutions exist for most of these features in isolation, but it is unclear how they can be securely combined into a full-fledged PABC system, or even which properties such a system would aim to fulfill. We provide a formal treatment of PABC systems supporting the mentioned features by defining their syntax and security properties, resulting in the most comprehensive definitional framework for PABCs so far. Unlike previous efforts, our definitions are not targeted at one specific use-case; rather, we try to capture generic properties that can be useful in a variety of scenarios. We believe that our definitions can also be used as a starting point for diverse application-dependent extensions and variations of PABCs. We present and prove secure a generic and modular construction of a PABC system from simpler building blocks, allowing for a "plug-and-play" composition based on different instantiations of the building blocks. Finally, we give secure instantiations for each of the building blocks, including in particular instantiations based on CL- and Brands-signatures which are the core of the Idemix and U-Prove protocols.
Last updated:  2014-09-09
Analysis Of Variance and CPA in SCA
Sebastien Tiran, Guillaume Reymond, Jean-Baptiste Rigaud, Driss Aboulkassimi, Benedikt Gierlichs, Mathieu Carbone, Gilles Ducharme, Philippe Maurine
This paper introduces side-channel analysis results obtained on an unprotected circuit characterized by a surprisingly non-linear leakage. While Correlation Power Analysis (CPA) is not well suited to such a case, we show that a more generic attack, based on the Analysis Of Variance (AOV), outperforms CPA. It has the advantage of detecting non-linear leakage, unlike CPA, and of providing similar or much better results in all cases, with a similar computation time.
Last updated:  2016-06-29
The Feasibility of Outsourced Database Search in the Plain Model
Carmit Hazay, Hila Zarosim
The problem of securely outsourcing computation to an untrusted server gained momentum with the recent penetration of cloud computing services. The ultimate goal in this setting is to design efficient protocols that minimize the computational overhead of the clients and instead rely on the extended resources of the server. In this paper, we focus on the outsourced database search problem, which is highly motivated in the context of delegatable computing since it offers storage alternatives for massive databases that may contain confidential data. This functionality is described in two phases: (1) a setup phase and (2) a query phase. The main goal is to minimize the parties' workload in the query phase so that it is proportional to the query size and its corresponding response. We study whether a trusted setup or a random oracle is necessary for protocols with minimal interaction that meet the optimal communication and computation bounds in the query phase. We answer this question positively and demonstrate a lower bound on the communication or the computational overhead in this phase.
Last updated:  2016-03-20
Structure-Preserving Signatures on Equivalence Classes and their Application to Anonymous Credentials
Christian Hanser, Daniel Slamanig
Structure-preserving signatures are a quite recent but important building block for many cryptographic protocols. In this paper, we introduce a new type of structure-preserving signatures, which allows signing vectors of group elements and consistently randomizing signatures and messages without knowledge of any secret. More precisely, we consider messages to be (representatives of) equivalence classes on vectors of group elements (coming from a single prime order group), which are determined by the mutual ratios of the discrete logarithms of the representative's vector components. By multiplying each component with the same scalar, a different representative of the same equivalence class is obtained. We propose a definition of such a signature scheme, a security model and give an efficient construction, which we prove secure in the SXDH setting, where EUF-CMA security is proven against generic forgers in the generic group model and the so-called class hiding property is proven under the DDH assumption. As a second contribution, we use the proposed signature scheme to build an efficient multi-show attribute-based anonymous credential (ABC) system that can encode an arbitrary number of attributes. This is -- to the best of our knowledge -- the first ABC system that provides constant-size credentials and constant-size showings. To allow an efficient construction in combination with the proposed signature scheme, we also introduce a new, efficient, randomizable polynomial commitment scheme. Aside from these two building blocks, the credential system requires a very short and constant-size proof of knowledge to provide freshness in the showing protocol. We present our ABC system along with a suitable security model and rigorously prove its security.
Last updated:  2017-04-30
A 128-bit Block Cipher Based on Three Group Arithmetics
Shenghui Su, Shuwang Lü, Daqiang Dong
Inspired by the IDEA block cipher, the authors put forward a symmetric key cryptosystem called REESSE3+ based on three group arithmetics: addition modulo 2 (bit XOR), addition modulo 2^16, and multiplication modulo 2^16 + 1. Different from IDEA, REESSE3+ uses a 128-bit block, a 256-bit key, and a renovated round function. The authors describe the REESSE3+ cipher algorithm graphically, and expound the encryption subkeys, encryption operation, decryption subkeys, and decryption operation. Further, they demonstrate the correctness of the REESSE3+ cipher algorithm, and analyze the security of REESSE3+ from four aspects. The measures for assuring the security of REESSE3+ cover those for assuring the security of IDEA, which indicates that the ability of REESSE3+ to resist differential cryptanalysis should be at least equivalent to that of IDEA. Moreover, experiments show that a mini-version of REESSE3+ is immune to differential cryptanalysis, so it may be expected that REESSE3+ is secure against differential attack after 8 rounds.
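The three group operations named above are the classic IDEA word operations; a minimal sketch of them on 16-bit words follows (this shows only the shared arithmetic, not the REESSE3+ round function or key schedule, which the abstract does not specify):

```python
MOD_ADD = 1 << 16          # modulus for addition: 2^16
MOD_MUL = (1 << 16) + 1    # modulus for multiplication: 2^16 + 1 (prime)

def xor16(a, b):
    """Addition modulo 2 on each bit: plain XOR of 16-bit words."""
    return a ^ b

def add16(a, b):
    """Addition modulo 2^16."""
    return (a + b) % MOD_ADD

def mul16(a, b):
    """Multiplication modulo 2^16 + 1, with IDEA's convention that
    the all-zero word represents 2^16 (so every word is invertible)."""
    a = MOD_ADD if a == 0 else a
    b = MOD_ADD if b == 0 else b
    r = (a * b) % MOD_MUL
    return 0 if r == MOD_ADD else r

assert add16(0xFFFF, 1) == 0          # wraps modulo 2^16
assert mul16(0, 0) == 1               # 2^16 * 2^16 ≡ (-1)(-1) ≡ 1 (mod 2^16 + 1)
```

Because 2^16 + 1 = 65537 is prime and 0 stands for 2^16, every 16-bit word has a multiplicative inverse, which is what makes the round function invertible for decryption.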
Last updated:  2014-09-05
Linearity Measures for MQ Cryptography
Simona Samardjiska, Danilo Gligoroski
We propose a new general framework for the security of multivariate quadratic ($\mathcal{MQ}$) schemes with respect to attacks that exploit the existence of linear subspaces. We adopt linearity measures that have traditionally been used to estimate the security of symmetric cryptographic primitives, namely the nonlinearity measure for vectorial functions introduced by Nyberg at Eurocrypt '92, and the $(s, t)$-linearity measure introduced recently by Boura and Canteaut at FSE '13. We redefine some properties of $\mathcal{MQ}$ cryptosystems in terms of these known symmetric cryptography notions, and show that our new framework is a compact generalization of several known attacks in $\mathcal{MQ}$ cryptography against single-field schemes. We use the framework to explain various pitfalls regarding the success of these attacks. Finally, we argue that linearity can be used as a solid measure of the susceptibility of $\mathcal{MQ}$ schemes to these attacks, and also as a necessary tool for prudent design practice in $\mathcal{MQ}$ cryptography.
Last updated:  2014-09-05
Defeating ISO9797-1 MAC Algo 3 by Combining Side-Channel and Brute Force Techniques
Benoit Feix, Hugues Thiebeauld
Side-channel analysis is a well-known and efficient hardware technique for recovering embedded secrets in microprocessors. Over the past years, the state of the art in side-channel attacks has advanced significantly, leading to a myriad of vulnerability paths that secure code must withstand. Nowadays most attacks target the cryptographic algorithms, but very few exploit the cryptographic protocol. In this paper, we present a new attack that exploits the information exchanged at the cryptographic protocol level in order to disclose the secret key. This attack is applicable to the MAC calculations standardized in ISO/IEC 9797-1, especially MAC algorithm 3 with the DES function. This protocol is widespread in secure products today, typically in some EMV implementations. By using a side-channel technique combined with a reasonable brute-force effort, we show that the secret key can be fully retrieved even though the DES implementation seems to be well protected against side-channel attacks.
Last updated:  2014-09-24
A Practical Iterative Side Channel Cube Attack on AES-128/256
Erfan Aghaee, Majid Rahimi, Hamed Yusefi
The Side Channel Cube Attack (SCCA) is a kind of Algebraic Side Channel Attack (ASCA) with both theoretical and practical aspects. This paper presents a general framework for the SCCA (called an Iterative SCCA (ISCCA)) on block ciphers in which these aspects are explained and the requirements are listed. On the theoretical side, we use the extraction of quadratic equations, the recognition of iterated chosen plaintexts, and cube iteration to improve the SCCA on block ciphers. On the experimental side, we define a feasible scenario in which the ISCCA can be applied to block ciphers. Then, we implement the ISCCA on AES and verify the results on an ARM microcontroller. Finally, we compare the proposed SCCA (ISCCA) with Simple Power Analysis, previous SCCAs, and previous attacks on AES. This comparison is based on template building and on data, time, and memory complexity. We show that the SCCA can recover the 128 and 256 key bits of AES-128/256 with data complexity 2^{7.3}, time complexity 2^{15.74}, and memory complexity 2^{7.89} on AES-128, and data complexity 2^{7.75}, time complexity 2^{16.2}, and memory complexity 2^{8.21} on AES-256. We show that only nine interesting points are needed for the template-matching phase. This is the most efficient SCCA on AES-128/256.
Last updated:  2014-09-05
Bounded Pre-Image Awareness and the Security of Hash-Tree Keyless Signatures
Ahto Buldas, Risto Laanoja, Peeter Laud, Ahto Truu
We present a new tighter security proof for unbounded hash tree keyless signature (time-stamping) schemes that use Merkle-Damg\aa rd (MD) hash functions with Preimage Aware (PrA) compression functions. It is known that the PrA assumption alone is insufficient for proving the security of unbounded hash tree schemes against back-dating attacks. We show that many known PrA constructions satisfy a stronger \emph{Bounded Pre-Image Awareness (BPrA)} condition that assumes the existence of an extractor $\EXT$ that is bounded in the sense that for any efficiently computable query string $\alpha$, the number of outputs $y$ for which $\EXT(y,\alpha)$ succeeds does not exceed the number of queries in $\alpha$. We show that blockcipher based MD-hash functions with rate-1 compression functions (such as Davies-Meyer and Miyaguchi-Preneel) of both type I and type II are BPrA. We also show that the compression function of Shrimpton-Stam that uses non-compressing components is BPrA. The security proof for unbounded hash-tree schemes is very tight under the BPrA assumption. In order to have $2^s$-security against back-dating, the hash function must have $n=2s + 4$ output bits, assuming that the security of the hash function is close to the birthday barrier, i.e. that there are no structural weaknesses in the hash function itself. Note that the previous proofs that assume PrA gave the estimation $n=2s + 2 \log_2 C + 2$, where $C$ is the maximum allowed size of the hash tree. For example, if $s=100$ ($2^{100}$-security) and $C=2^{50}$, the previous proofs require $n=302$ output bits, while the new proof requires $n=204$ output bits.
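The output-length comparison at the end of the abstract is direct arithmetic; a quick sanity check of the two bounds, using the example values given above ($s=100$, $C=2^{50}$):

```python
# Required hash output length n for 2^s-security against back-dating.
s = 100        # target security level in bits
log2_C = 50    # log2 of the maximum allowed hash-tree size C = 2^50

n_pra = 2 * s + 2 * log2_C + 2   # previous bound, plain PrA assumption
n_bpra = 2 * s + 4               # new bound under BPrA

print(n_pra, n_bpra)  # 302 204
```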
Last updated:  2014-09-09
Scrutinizing and Improving Impossible Differential Attacks: Applications to CLEFIA, Camellia, LBlock and Simon (Full Version)
Christina Boura, María Naya-Plasencia, Valentin Suder
Impossible differential cryptanalysis has been shown to be a very powerful form of cryptanalysis against block ciphers. Although extensively used, these attacks remain not fully understood because of their high technicality: in numerous applications, mistakes have been discovered or the attacks lack optimality. This paper aims, in a first step, at formalizing and improving this type of attack and, in a second step, at applying our work to block ciphers based on the Feistel construction. In this context, we derive generic complexity-analysis formulas for mounting such attacks and develop new ideas for optimizing impossible differential cryptanalysis. These ideas include, for example, testing parts of the internal state to reduce the number of involved key bits. We also develop in a more general way the concept of using multiple differential paths, an idea introduced before in a more restricted context. These advances lead to improvements of previous attacks against well-known ciphers such as CLEFIA-128 and Camellia, as well as to new attacks against 23-round LBlock and all members of the Simon family.
Last updated:  2015-08-18
HIMMO - A lightweight collusion-resistant key predistribution scheme
Oscar Garcia-Morchon, Domingo Gomez-Perez, Jaime Gutierrez, Ronald Rietman, Berry Schoenmakers, Ludo Tolhuizen
In this paper we introduce HIMMO as a truly practical and lightweight collusion-resistant key predistribution scheme. The scheme is reminiscent of Blundo et al.'s elegant key predistribution scheme, in which the master key is a symmetric bivariate polynomial over a finite field, and a unique common key is defined for every pair of nodes as the evaluation of the polynomial at the finite field elements associated with the nodes. Unlike Blundo et al.'s scheme, however, which completely breaks down once the number of colluding nodes exceeds the degree of the polynomial, the new scheme is designed to tolerate any number of colluding nodes. Key establishment in HIMMO amounts to the evaluation of a single low-degree univariate polynomial involving reasonably sized numbers, thus exhibiting excellent performance even for constrained devices such as 8-bit CPUs, as we demonstrate. On top of this, the scheme is very versatile, as it not only supports implicit authentication of the nodes like any key predistribution scheme, but also supports identity-based key predistribution in a natural and efficient way. The latter property derives from the fact that HIMMO supports long node identifiers at a reasonable cost, allowing outputs of a collision-resistant hash function to be used as node identifiers. Moreover, HIMMO allows for a transparent way to split the master key between multiple parties. The new scheme is superior to any of the existing alternatives due to the intricate way it combines the use of multiple symmetric bivariate polynomials evaluated over ``different'' finite rings. We have extensively analyzed the security of HIMMO against two attacks. For these attacks, we have identified the Hiding Information (HI) problem and the Mixing Modular Operations (MMO) problem as the underlying problems. These problems are closely related to some well-defined lattice problems, and therefore the best attacks on HIMMO are dependent on lattice-basis reduction.
Based on these connections, we propose concrete values for all relevant parameters, for which we conjecture that the scheme is secure.
Last updated:  2014-09-05
A Fully Homomorphic Encryption Scheme with Better Key Size
Zhigang Chen, Jian Wang, ZengNian Zhang, Xinxia Song
Fully homomorphic encryption currently faces two problems: candidate fully homomorphic encryption schemes are few, and their efficiency remains a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE with better key sizes. Our main contributions are: (1) Following the recent binary-LWE approach, we choose the secret key from a binary set and modify the basic encryption scheme proposed by Lindner and Peikert in 2010. We propose a fully homomorphic encryption scheme based on this new basic encryption scheme, analyze its correctness, and give a proof of its security. The public key, evaluation keys and tensored ciphertexts have better sizes in our scheme. (2) Estimating parameters for a fully homomorphic encryption scheme is an important task. We estimate concrete parameters for our scheme and compare them with those of the Bra12 scheme. Our scheme has public and private keys that are smaller by a factor of about log q than in the Bra12 scheme, tensored ciphertexts that are smaller by a factor of about log^2 q, and a key-switching matrix that is smaller by a factor of about log^3 q.
Last updated:  2014-09-04
Security Proofs for the BLT Signature Scheme
Ahto Buldas, Risto Laanoja, Ahto Truu
We present security proofs for the BLT signature scheme in the model, where hash functions are built from ideal components (random oracles, ideal ciphers, etc.). We show that certain strengthening of the Pre-image Awareness (PrA) conditions like boundedness of the extractor, and certain natural properties (balancedness and the so-called output one-wayness) of the hash function are sufficient for existential unforgeability of the BLT signature scheme.
Last updated:  2015-01-16
Proof of Proximity of Knowledge
Serge Vaudenay
Public-key distance bounding schemes are needed to defeat relay attacks in payment systems. So far, only two such schemes exist, but fail to fully protect against malicious provers. In this paper, we solve this problem. We provide a full formalism to define the proof of proximity of knowledge (PoPoK). Protocols should succeed if and only if a prover holding a secret is within the proximity of the verifier. Like proofs of knowledge, these protocols must satisfy completeness, soundness (protection for the honest verifier), and security (protection for the honest prover). We construct ProProx, the very first fully secure PoPoK.
Last updated:  2024-06-07
Malicious Hashing: Eve's Variant of SHA-1
Ange Albertini, Jean-Philippe Aumasson, Maria Eichlseder, Florian Mendel, and Martin Schläffer
We present collisions for a version of SHA-1 with modified constants, where the colliding payloads are valid binary files. Examples are given of colliding executables, archives, and images. Our malicious SHA-1 instances have round constants that differ from the original ones in only 40 bits (on average). Modified versions of cryptographic standards are typically used on closed systems (e.g., in pay-TV, media and gaming platforms) and aim to differentiate cryptographic components across customers or services. Our proof-of-concept thus demonstrates the exploitability of custom SHA-1 versions for malicious purposes, such as the injection of user surveillance features. To encourage further research on such malicious hash functions, we propose definitions of malicious hash functions and of associated security notions.
Last updated:  2015-01-16
Optimal Proximity Proofs
Ioana Boureanu, Serge Vaudenay
Provably secure distance-bounding is a rising subject, yet an unsettled one; indeed, very few distance-bounding protocols, with formal security proofs, have been proposed. In fact, so far only two protocols, namely SKI (by Boureanu et al.) and FO (by Fischlin and Onete), offer all-encompassing security guaranties, i.e., resistance to distance-fraud, mafia-fraud, and terrorist-fraud. Matters like security, alongside with soundness, or added tolerance to noise do not always coexist in the (new) distance-bounding designs. Moreover, as we will show in this paper, efficiency and simultaneous protection against all frauds seem to be rather conflicting matters, leading to proposed solutions which were/are sub-optimal. In fact, in this recent quest for provable security, efficiency has been left in the shadow. Notably, the tradeoffs between the security and efficiency have not been studied. In this paper, we will address these limitations, setting the "security vs. efficiency" record straight. Concretely, by combining ideas from SKI and FO, we propose symmetric protocols that are efficient, noise-tolerant and-at the same time-provably secure against all known frauds. Indeed, our new distance-bounding solutions outperform the two aforementioned provably secure distance-bounding protocols. For instance, with a noise level of 5%, we obtain the same level of security as those of the pre-existent protocols, but we reduce the number of rounds needed from 181 to 54.
Last updated:  2014-09-04
Extending Oblivious Transfer Efficiently, or - How to get active security with constant cryptographic overhead
Enrique Larraia
On top of the passively secure extension protocol of [IKNP03] we build a new construction secure against active adversaries. We can replace the invocation of the hash function that is used to check that the receiver is well-behaved with the XOR of bit strings. This is possible by applying a cut-and-choose technique to the length of the bit strings that the receiver sends in the reversed OT. We also improve on the number of seeds required for the extension, both asymptotically and practically. Moreover, the protocol used to test the receiver's behaviour enjoys unconditional security.
Last updated:  2014-09-04
Integration of hardware tokens in the Idemix library
Antonio de la Piedra
The Idemix library provides an implementation of the Camenisch-Lysyanskaya (CL) Attribute-based Credential System (ABC), its protocol extensions and the U-Prove ABC. In the case of the CL ABC, the library can delegate some cryptographic operations to a hardware token (e.g. a smart card). In the last few years several practitioners have proposed different implementations of ABCs on smart cards. The IRMA card provides, at the time of writing, optimal performance for practical applications. In this report, we address the integration of this implementation into the Idemix library. We opted for implementing the key binding use case together with the generation of exclusive scope pseudonyms and public key commitments on card. The integration requires two additional classes (one that parses system parameters, credential specifications and issuer public keys, and another that interfaces the card and its functionalities with the CL building block) together with one modification in the code if the signature randomization is delegated to the card (only required in one of the proposed alternatives). The integration of the key binding use case requires 540 extra bytes on the smart card. We can perform all the involved cryptographic operations in only 206.75 ms, including the computation of exclusive scope pseudonyms (55.19 ms).
Last updated:  2014-09-04
Efficient Interval Check in the Presence of Malicious Adversaries
Genqiang Wu, Yeping He, Yi Lu, Liping Ding
We consider the following problem: Assuming that Alice and Bob have an integer interval $[a, e]$ and an integer $b$ respectively, for a commitment $c$ to $b$, Alice and Bob jointly check whether $b$ is within $[a, e]$ without revealing their inputs, where either party may behave maliciously. A special case of the problem is secure integer comparison in the malicious model. This problem mainly arises from location-based access control systems where one party needs to assure the other party that its location is within some definite area. Our main result is a constant-round protocol that exhibits $O(\log^2 e)$ communication and $O(\log^2 e)$ exponentiations with simulation-based security. At the heart of the construction are a perfect $k$-ary index and corresponding zero-knowledge proof techniques. We also consider a more general case of the problem where the interval is replaced by a union of intervals.
Last updated:  2014-09-04
Efficient Implementation of Keyless Signatures with Hash Sequence Authentication
Ahto Buldas, Risto Laanoja, Ahto Truu
We present new ideas for decreasing the size of secure memory needed for hardware implementations of hash-sequence based signatures proposed recently by Buldas, Laanoja and Truu (in the following referred to as BLT). In their scheme, a message $m$ is signed by time-stamping a concatenation $m\| z_t$ of the message and the one-time pseudo-random password $z_t$ intended to sign messages at a particular time $t$. The signature is valid only if the time-stamp points to the same time $t$. Hence, the one-time passwords cannot be abused after their use. To efficiently and securely implement such a scheme at the client side, dedicated hardware is needed, and therefore solutions that save (secure) memory and computation time are important. For such schemes, the memory consumption directly depends on the efficiency of the \emph{hash sequence reversal algorithms}. The best known reversal algorithm for the BLT scheme uses $O(\log^2 \ell)$ memory. This means that for a signing key that is valid for one year (i.e. $\ell\approx 2^{25}$ with one-second time resolution), the device needs to store about $25^2=625$ hash values, which for the SHA-256 hash function means about $20$ KB of secure memory. Another problem with hash sequence reversal algorithms is that they mostly assume that the signature device is always connected to the computer or has an independent power supply. This is a serious limitation for smart-card implementations of the scheme. We show first that a mini Public Key Infrastructure in the signature device can be used to halve the memory consumption. There is a master key (i.e. a hash sequence) that is used to certify short-term (about five minutes) signing keys, so that a signature consists of a short-term certificate, which is a hash chain in the master hash tree (used to authenticate the master hash sequence), and a hash chain that is used to authenticate a particular hash value $z_t$ in the sequence.
We also discuss how to implement hash sequence signatures in devices that have no power supply and are not regularly connected to computers, such as smart cards, which are often used as personal digital signature devices. General-purpose cryptographic smart cards also have many restrictions that limit the use of hash sequence signatures: their hashing speed is relatively low (up to 500 hashing steps per second), their secure memory is of limited size, etc. All this, combined with irregular usage patterns, makes the use of hash sequence signatures questionable. We show why the hash sequence signature (in its original form) cannot be used as the CA signature in the mini PKI solution. Finally, we propose a new type of hash sequence signature that is more suitable for smart-card implementations.
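The hash sequence underlying such schemes can be sketched in a few lines (illustrative only: SHA-256 and the names below are our choices, not the scheme's specification). The signer releases $z_t$ at time $t$; a verifier holding the public anchor $z_0$ re-hashes $t$ times:

```python
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def make_chain(seed, length):
    # z_i = H^(length - i)(seed); z_0 is the public anchor, and the
    # one-time passwords z_1, z_2, ... are released in increasing order of i.
    chain = [seed]
    for _ in range(length):
        chain.append(H(chain[-1]))
    chain.reverse()
    return chain

def verify(z_t, t, z0):
    # Anyone holding the anchor z_0 checks a released z_t by hashing t times.
    x = z_t
    for _ in range(t):
        x = H(x)
    return x == z0

chain = make_chain(b"device secret seed", 8)
assert verify(chain[3], 3, chain[0])        # correct password for t = 3
assert not verify(chain[3], 2, chain[0])    # wrong time index is rejected
```

Storing the whole chain is exactly what the reversal algorithms avoid; they trade recomputation for the $O(\log^2 \ell)$ memory mentioned above.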
Last updated:  2016-05-20
White-Box AES Implementation Revisited
Chung Hun Baek, Jung Hee Cheon, Hyunsook Hong
White-box cryptography is an obfuscation technique for protecting secret keys in software implementations even if an adversary has full access to the implementation of the encryption algorithm and full control over its execution platforms. This concept was presented by Chow et al. with white-box implementations of DES and AES in 2002. The strategy used in those implementations has become a design principle for subsequent white-box implementations. However, despite its practical importance, progress has not been substantial. In fact, it has repeatedly happened that soon after a white-box implementation is proposed, an attack of lower complexity is announced. This is mainly because most cryptanalytic methods target specific implementations, and there is no general attack tool for white-box cryptography. In this paper, we present an analytic toolbox for white-box implementations in this design framework and show how to use it to reveal the secret information obfuscated in the implementation. For a substitution-linear transformation cipher on $n$ bits with S-boxes on $m$ bits, if $m_Q$-bit nonlinear encodings are used to obfuscate output values in the implementation, our attack tool can remove the nonlinear encodings with complexity $O(\frac{n}{m_Q}2^{3m_Q})$. One should increase $m_Q$ to obtain higher security, but this yields an exponential blow-up in storage, so there are limits to increasing security via nonlinear encodings. If the inverse of the encoded round function $F$ on $n$ bits is given, the affine encoding $A$ can be recovered in $O(\frac{n}{m}\cdot{m_A}^32^{3m})$ time using our specialized affine equivalence algorithm, where $m_A$ is the smallest integer $p$ such that $A$ (or its similar matrix obtained by permuting rows and columns) is a block-diagonal matrix with $p\times p$ matrix blocks.
According to our toolbox, a white-box implementation in the Chow et al.'s framework has complexity at most $O\left(\min\left\{ \tfrac{2^{2m}}{m}\cdot n^{m+4}, n\log n \cdot 2^{n/2}\right\}\right)$ within reasonable storage, which is much less than $2^n$. To overcome this, we introduce an idea that obfuscates two AES-128 ciphers at once with input/output encoding on 256 bits. To reduce storage, we use a sparse unsplit input encoding. As a result, our white-box AES implementation has up to 110-bit security against our toolbox, close to that of the original cipher. More generally, we may consider a white-box implementation on the concatenation of $t$ ciphertexts to increase security.
Last updated:  2014-09-04
Reducing the Complexity of Normal Basis Multiplication
Omer Egecioglu, Cetin Kaya Koc
In this paper we introduce a new transformation method and a multiplication algorithm for multiplying the elements of the field GF$(2^k)$ expressed in a normal basis. The number of XOR gates for the proposed multiplication algorithm is smaller than that of the optimal normal basis multiplication, not counting the cost of the forward and backward transformations. The algorithm is best suited for applications in which tens or hundreds of field multiplications are performed before the results need to be transformed back.
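For contrast with the normal-basis setting, a plain polynomial-basis GF$(2^k)$ multiplication, the kind of baseline such gate-count comparisons are made against, looks as follows. This routine and the example polynomial are generic illustrations of field multiplication, not the paper's transformation or algorithm:

```python
def gf2k_mul(a, b, poly, k):
    # Polynomial-basis multiplication in GF(2^k) modulo the irreducible
    # polynomial 'poly' (bitmask that includes the x^k term).
    r = 0
    for i in range(k):          # schoolbook carry-less multiply
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * k - 2, k - 1, -1):   # reduce degrees >= k
        if (r >> i) & 1:
            r ^= poly << (i - k)
    return r

# GF(2^4) with x^4 + x + 1 (0b10011): x * x^3 = x^4 = x + 1.
assert gf2k_mul(0b0010, 0b1000, 0b10011, 4) == 0b0011
```

Each conditional XOR in the two loops corresponds to a row of XOR gates in a combinational implementation, which is the quantity the paper's algorithm reduces.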
Last updated:  2015-11-05
A Recursive Relation Between The Adjacency Graph of Some LFSRs and Its Applications
Ming Li, Dongdai Lin
In this paper, a general way to determine the adjacency graph of linear feedback shift registers (LFSRs) with characteristic polynomial (1+x)c(x) from the adjacency graph of the LFSR with characteristic polynomial c(x) is discussed, where c(x) can be any polynomial. As an application, the adjacency graph of LFSRs with characteristic polynomial (1+x)^4p(x) is determined, where p(x) is a primitive polynomial. Besides, some properties of the cycles in LFSRs are presented. The adjacency graph of LFSRs with characteristic polynomial (1+x)^mp(x) is also discussed.
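For small polynomials the cycle structure in question can be explored by brute force. The sketch below is our own illustration, not the paper's recursive method; for the primitive p(x) = x^4 + x + 1 and for (1+x)p(x) it reproduces the expected cycle decomposition:

```python
def lfsr_cycles(coeffs):
    # Cycle lengths of the state graph of a Fibonacci LFSR with
    # characteristic polynomial x^n + c_{n-1}x^{n-1} + ... + c_0,
    # given as coeffs = [c_0, ..., c_{n-1}].  Assumes c_0 = 1, so the
    # state-update map is a permutation and every state lies on a cycle.
    n = len(coeffs)

    def step(state):
        fb = 0
        for c, b in zip(coeffs, state):
            fb ^= c & b
        return state[1:] + (fb,)

    seen, lengths = set(), []
    for start in range(2 ** n):
        s = tuple((start >> i) & 1 for i in range(n))
        if s in seen:
            continue
        count = 0
        while s not in seen:        # walk this cycle once
            seen.add(s)
            count += 1
            s = step(s)
        lengths.append(count)
    return sorted(lengths)

# p(x) = x^4 + x + 1 is primitive: the zero state plus one 15-cycle.
assert lfsr_cycles([1, 1, 0, 0]) == [1, 15]
# (1+x)p(x) = x^5 + x^4 + x^2 + 1: each p-cycle appears twice, since each
# solution sequence can be shifted by the constant all-ones sequence.
assert lfsr_cycles([1, 0, 1, 0, 1]) == [1, 1, 15, 15]
```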
Last updated:  2015-05-25
Bit Security of the CDH Problems over Finite Field
Mingqiang Wang, Tao Zhan, Haibin Zhang
It is a long-standing open problem to prove the existence of (deterministic) hard-core predicates for the Computational Diffie-Hellman (CDH) problem over finite fields, without resorting to the generic approaches for any one-way functions (e.g., the Goldreich-Levin hard-core predicates). Fazio et al. (FGPS, Crypto '13) make important progress on this problem by defining a weaker Computational Diffie-Hellman problem over $\mathbb{F}_{p^2}$, i.e., Partial-CDH problem, and proving, when allowing changing field representations, the unpredictability of every single bit of one of the coordinates of the secret Diffie-Hellman value.
Last updated:  2014-09-03
Towards a Full-Featured Implementation of Attribute Based Credentials on Smart Cards
Antonio de la Piedra, Jaap-Henk Hoepman, Pim Vullers
Attribute-based Credentials (ABCs) allow citizens to prove certain properties about themselves without necessarily revealing their full identity. Smart cards are an attractive container for such credentials, for security and privacy reasons. But their limited processing power and random access storage capacity pose a severe challenge. Recently, we, the IRMA team, managed to fully implement a limited subset of the Idemix ABC system on a smart card, with acceptable running times. In this paper we extend this functionality by overcoming the main hurdle: limited RAM. We implement an efficient extended Pseudo-Random Number Generator (PRNG) for recomputing pseudorandomness and reconstructing variables. Using this we implement Idemix standard and domain pseudonyms, AND proofs based on prime-encoded attributes, and equality proofs of representation modulo a composite, together with terminal verification and secure messaging. In contrast to prior work that only addressed the verification of one credential with only one attribute (particularly, the master secret), we can now perform multi-credential proofs on credentials of 5 attributes and complex proofs in reasonable time. We provide a detailed performance analysis and compare our results to other approaches.
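The space-for-time idea behind an extended PRNG can be sketched generically (a hypothetical derivation for illustration, not Idemix's or the IRMA card's actual construction): each pseudorandom value is a deterministic function of a short seed and an index, so it can be recomputed on demand instead of being held in scarce RAM:

```python
import hashlib

def prf(seed, index):
    # Deterministic derivation: the i-th pseudorandom value depends only
    # on (seed, i), so it never has to be stored between protocol passes.
    return hashlib.sha256(seed + index.to_bytes(4, "big")).digest()

seed = b"short on-card seed"
committed = [prf(seed, i) for i in range(4)]   # first pass of the proof
# Later in the protocol the same values are reconstructed, not recalled.
assert all(prf(seed, i) == committed[i] for i in range(4))
```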
Last updated:  2014-09-01
Error-Tolerant Algebraic Side-Channel Attacks Using BEE
Ling Song, Lei Hu, Siwei Sun, Zhang Zhang, Danping Shi, Ronglin Hao
Algebraic side-channel attacks are a type of side-channel analysis which can recover the secret information with a small number of samples (e.g., power traces). However, this type of side-channel analysis is sensitive to measurement errors which may make the attacks fail. In this paper, we propose a new method of algebraic side-channel attacks which treats noisy leakages as integers restricted to intervals and finds the secret information with a constraint programming solver named BEE. To demonstrate the efficiency of this new method in algebraic side-channel attacks, we analyze some popular implementations of block ciphers---PRESENT, AES, and SIMON---under the Hamming weight or Hamming distance leakage model. For AES, our method requires the fewest leakages compared with existing works under the same error model. For both PRESENT and SIMON, we provide the first analytical results under algebraic side-channel attacks in the presence of errors. To further demonstrate the wide applicability of this new method, we also extend it to cold boot attacks. In the cold boot attacks against AES, our method increases the success rate by over $25\%$ compared with previous works.
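The core modeling step, treating a noisy Hamming-weight leakage as an integer confined to an interval, can be illustrated with a toy brute-force filter (plain Python standing in for the BEE constraint encoding; the secret, plaintexts and error bound are made up):

```python
def hw(x):
    return bin(x).count("1")

def consistent_keys(plaintexts, intervals):
    # intervals[i] = (lo, hi) bounds the leaked Hamming weight of the
    # intermediate value k XOR plaintexts[i]; keep only consistent keys.
    keys = range(256)
    for p, (lo, hi) in zip(plaintexts, intervals):
        keys = [k for k in keys if lo <= hw(k ^ p) <= hi]
    return keys

secret = 0x5A
pts = [0x00, 0xFF, 0x0F, 0xF0, 0x37, 0xC3]
err = 1  # each Hamming-weight measurement may be off by +/- 1
ivals = [(hw(secret ^ p) - err, hw(secret ^ p) + err) for p in pts]
cands = consistent_keys(pts, ivals)
assert secret in cands        # the true key always survives the filter
print(len(cands))             # the candidate set shrinks with each trace
```

A real attack replaces this exhaustive filter with a solver, but the interval constraints play the same role: correctness is preserved even when measurements are noisy.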
Last updated:  2014-09-03
A Unified Formalism for Physical Attacks
Hélène Le Bouder, Ronan Lashermes, Yanis Linge, Bruno Robisson, Assia Tria
The security of cryptographic algorithms can be considered in two contexts. On the one hand, these algorithms can be proven secure mathematically. On the other hand, physical attacks can weaken the implementation of an algorithm even if it is proven secure. Under the common name of physical attacks, different attacks are grouped together: side-channel attacks and fault injection attacks. This paper presents a common formalism for these attacks and highlights their underlying principles. All physical attacks on symmetric algorithms can be described with a 3-step process. Moreover, it is possible to compare different physical attacks by separating the theoretical attack path and the experimental parts of the attacks.
Last updated:  2014-12-02
Improved Linear Cryptanalysis of Reduced-round SIMON
Mohamed Ahmed Abdelraheem, Javad Alizadeh, Hoda A. Alkhzaimi, Mohammad Reza Aref, Nasour Bagheri, Praveen Gauravaram, Martin M. Lauridsen
SIMON is a family of ten lightweight block ciphers published by Beaulieu et al.\ of the U.S. National Security Agency (NSA). In this paper we investigate the security of SIMON against different variants of linear cryptanalysis techniques, i.e.\ classical and multiple linear cryptanalysis and linear hulls. We present a connection between linear and differential characteristics, as well as between differentials and linear hulls, in SIMON. We employ it to adapt the currently known results on differential cryptanalysis of SIMON to the linear setting. In addition to finding a linear approximation with a single characteristic, we show the effect of linear hulls in SIMON by finding better approximations that enable us to improve the previous results. Our best linear cryptanalysis employs the average squared correlation of the linear hull of SIMON based on correlation matrices. The result covers 21 out of 32 rounds of SIMON32/64 with time and data complexities $2^{54.56}$ and $2^{30.56}$, respectively. We have implemented our attacks for small-scale variants of SIMON and our experiments confirm the theoretical biases and correlations presented in this work. So far, our results are the best known with respect to linear cryptanalysis for any variant of SIMON.
Last updated:  2014-08-31
Remarks on the Cryptographic Primitive of Attribute-based Encryption
Zhengjun Cao, Lihua Liu
Attribute-based encryption (ABE), which allows users to encrypt and decrypt messages based on user attributes, is a type of one-to-many encryption. Unlike conventional one-to-one encryption, which has no intention of excluding any partners of the intended receiver from obtaining the plaintext, an ABE system tries to exclude some unintended recipients from obtaining the plaintext even if they are partners of some intended recipients. We remark that this requirement for ABE is very hard to meet. An ABE system cannot truly exclude some unintended recipients from decryption because some users can exchange their decryption keys in order to maximize their own interests. This flaw discounts the importance of the cryptographic primitive.
Last updated:  2014-08-31
A Note on the Bellare-Rivest Protocol for Translucent Cryptography
Zhengjun Cao, Lihua Liu
We remark that the Bellare-Rivest protocol for translucent cryptography [J. Cryptology (1999) 12: 117-139] cannot truly enable the government to decrypt partially encrypted communications.
Last updated:  2014-08-31
A Counterexample to the Chain Rule for Conditional HILL Entropy
Stephan Krenn, Krzysztof Pietrzak, Akshay Wadia, Daniel Wichs
Most entropy notions $H(.)$ like Shannon or min-entropy satisfy a chain rule stating that for random variables $X,Z$ and $A$ we have $H(X|Z,A)\ge H(X|Z)-|A|$. That is, by conditioning on $A$ the entropy of $X$ can decrease by at most the bitlength $|A|$ of $A$. Such chain rules are known to hold for some computational entropy notions like Yao's and unpredictability-entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem for which we give a strong negative answer: We construct joint distributions $(X,Z,A)$, where $A$ is a distribution over a \emph{single} bit, such that the HILL entropy $H_\infty(X|Z)$ is large but $H_\infty(X|Z,A)$ is basically zero. Our counterexample just makes the minimal assumption that ${\bf NP}\nsubseteq{\bf P/poly}$. Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects like lossy functions can be used to sample a distribution constituting a counterexample to the chain rule making only a single invocation to the underlying object.
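For Shannon entropy the chain rule in question does hold: conditioning on a single bit $A$ lowers $H(X|Z)$ by at most one bit. A numerical check on a random small joint distribution, for contrast with the HILL counterexample above:

```python
import random
from collections import defaultdict
from itertools import product
from math import log2

random.seed(7)
# Random strictly positive joint pmf over (X, Z, A), with A a single bit.
support = list(product(range(4), range(4), range(2)))
w = [random.random() + 0.01 for _ in support]
p = {s: wi / sum(w) for s, wi in zip(support, w)}

def cond_entropy(p, x_idx, cond_idx):
    # H(coordinate x_idx | coordinates cond_idx) for a joint pmf on tuples.
    groups = defaultdict(lambda: defaultdict(float))
    for s, ps in p.items():
        groups[tuple(s[i] for i in cond_idx)][s[x_idx]] += ps
    h = 0.0
    for dist in groups.values():
        pc = sum(dist.values())
        for px in dist.values():
            h -= px * log2(px / pc)
    return h

h_xz = cond_entropy(p, 0, (1,))      # H(X | Z)
h_xza = cond_entropy(p, 0, (1, 2))   # H(X | Z, A)
assert h_xz - h_xza <= 1 + 1e-9      # drop bounded by |A| = 1 bit
assert h_xza <= h_xz + 1e-9          # conditioning never increases entropy
```

The gap $H(X|Z) - H(X|Z,A) = I(X;A|Z) \le H(A) \le 1$ holds for every distribution; the paper's result is that no such bound survives for conditional HILL entropy.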
Last updated:  2014-08-30
Attacks in Stream Ciphers: A Survey
Gustavo Banegas
Nowadays there are different types of attacks on block and stream ciphers. In this work we present some of the most common attacks on stream ciphers. We present the newest techniques, each with an example of its usage against a cipher, and explain and comment on them. Beforehand, we explain the difference between block ciphers and stream ciphers.
Last updated:  2014-08-30
Fully Collusion-Resistant Traceable Key-Policy Attribute-Based Encryption with Sub-linear Size Ciphertexts
Zhen Liu, Zhenfu Cao, Duncan S. Wong
Recently a series of expressive, secure and efficient Attribute-Based Encryption (ABE) schemes, both in the key-policy flavor and the ciphertext-policy flavor, have been proposed. However, before being applied in practice, these systems must attain traceability of malicious users. As the decryption privilege of a decryption key in Key-Policy ABE (resp. Ciphertext-Policy ABE) may be shared by multiple users who own the same access policy (resp. attribute set), malicious users might be tempted to leak their decryption privileges to third parties, for financial gain as an example, if there is no tracing mechanism for tracking them down. In this work we study the traceability notion in the setting of Key-Policy ABE, and formalize Key-Policy ABE supporting fully collusion-resistant blackbox traceability. An adversary is allowed to access an arbitrary number of keys of its own choice when building a decryption device, and given such a device, even if the underlying decryption algorithm or key is not given, a blackbox tracing algorithm can find out at least one of the malicious users whose keys have been used for building it. We propose a construction which supports both fully collusion-resistant blackbox traceability and high expressiveness (i.e. supporting any monotonic access structure). The construction is fully secure in the standard model (i.e. it achieves the best security level that conventional non-traceable ABE systems do to date), and is efficient in that the fully collusion-resistant blackbox traceability is attained at the price of making ciphertexts grow only sub-linearly in the number of users in the system, which is the most efficient level to date.
Last updated:  2014-08-30
The Usage of Counter Revisited: Second-Preimage Attack on New Russian Standardized Hash Function
Jian Guo, Jérémy Jean, Gaëtan Leurent, Thomas Peyrin, Lei Wang
Streebog is a new Russian hash function standard. It follows the HAIFA framework as its domain extension algorithm and claims to resist recent generic second-preimage attacks with long messages. However, we demonstrate in this article that the specific instantiation of the HAIFA framework used in Streebog makes it weak against such attacks. More precisely, we observe that Streebog makes rather poor use of the HAIFA counter input in the compression function, which allows us to construct second-preimages on the full Streebog-512 with a complexity as low as 2^{266} compression function evaluations for long messages. This complexity has to be compared with the expected bound of 2^{512} computations that an ideal hash function should provide. Our work is a good example that one must be careful when using a design framework for which not all instances are secure. HAIFA helps designers to build a secure hash function, but one should pay attention to the way the counter is handled inside the compression function.
Last updated:  2015-07-24
Efficient RAM and control flow in verifiable outsourced computation
Riad S. Wahby, Srinath Setty, Max Howald, Zuocheng Ren, Andrew J. Blumberg, Michael Walfish
Recent work on proof-based verifiable computation has resulted in built systems that employ tools from complexity theory and cryptography to address a basic problem in systems security: allowing a local computer to outsource the execution of a program while providing the local computer with a guarantee of integrity and the remote computer with a guarantee of privacy. However, support for programs that use RAM and control flow has been problematic. State-of-the-art systems either restrict the use of these constructs (e.g., requiring static loop bounds), incur sizeable overhead on every step, or pay tremendous costs when the constructs are invoked. This paper describes Buffet, a built system that solves these problems by providing inexpensive "a la carte" RAM and dynamic control flow. Buffet composes an elegant prior approach to RAM with a novel adaptation of techniques from the compilers literature. Buffet allows the programmer to express programs in an expansive subset of C (disallowing only "goto" and function pointers), can handle essentially any example in the verifiable computation literature, and achieves the best performance in the area by multiple orders of magnitude.
Last updated:  2014-09-11
How to Estimate the Success Rate of Higher-Order Side-Channel Attacks
Victor Lomné, Emmanuel Prouff, Matthieu Rivain, Thomas Roche, Adrian Thillard
The resistance of a cryptographic implementation with regard to side-channel analysis is often quantified by measuring the success rate of a given attack. This approach cannot always be followed in practice, especially when the implementation includes countermeasures that may render the attack too costly for evaluation purposes, but not costly enough from a security point of view. An evaluator then faces the issue of estimating the success rate of an attack they cannot mount. The present paper addresses this issue by presenting a methodology to estimate the success rate of higher-order side-channel attacks targeting implementations protected by masking. Specifically, we generalize the approach initially proposed at SAC 2008 in the context of first-order side-channel attacks. The principle is to approximate the distribution of an attack's score vector by a multivariate Gaussian distribution, whose parameters are derived by profiling the leakage. One can then accurately compute the expected attack success rate with respect to the number of leakage measurements. We apply this methodology to higher-order side-channel attacks based on the widely used correlation and likelihood distinguishers. Moreover, we validate our approach with simulations and practical attack experiments against masked AES implementations running on two different microcontrollers.
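The core idea of the Gaussian-approximation methodology can be sketched in a few lines: model the attack's score vector over key candidates as a (structured) multivariate Gaussian and estimate the success rate as the probability that the correct key ranks first. Everything below (number of candidates, means, noise levels) is invented for the sketch and is not taken from the paper.

```python
import random

random.seed(0)
n_candidates = 16
mean = [0.0] * n_candidates
mean[0] = 1.5                        # index 0 plays the correct key

def sample_scores():
    # Covariance of the form sigma_c^2 * J + sigma_i^2 * I, realized as a
    # shared noise term plus per-candidate noise -- a simple structured
    # multivariate Gaussian model of the attack's score vector.
    common = random.gauss(0.0, 0.25)
    return [m + common + random.gauss(0.0, 0.45) for m in mean]

# Monte Carlo estimate of the success rate: how often does the correct
# key obtain the highest score?
trials = 20_000
hits = 0
for _ in range(trials):
    s = sample_scores()
    hits += s.index(max(s)) == 0
success_rate = hits / trials
print(f"estimated success rate: {success_rate:.3f}")
```

In the paper's setting, the mean and covariance would come from profiling the leakage rather than being fixed by hand, and the success rate can also be computed analytically from the Gaussian parameters instead of by sampling.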
Last updated:  2016-11-28
Circuit ORAM: On Tightness of the Goldreich-Ostrovsky Lower Bound
Xiao Wang, Hubert Chan, Elaine Shi
We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to answer a theoretical challenge that has remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.
Last updated:  2014-09-15
SCORAM: Oblivious RAM for Secure Computation
Xiao Shaun Wang, Yan Huang, T-H. Hubert Chan, abhi shelat, Elaine Shi
Oblivious RAMs (ORAMs) have traditionally been measured by their \emph{bandwidth overhead} and \emph{client storage}. We observe that when using ORAMs to build secure computation protocols for RAM programs, the \emph{size} of the ORAM circuits is more relevant to the performance. We therefore embark on a study of the \emph{circuit-complexity} of several recently proposed ORAM constructions. Our careful implementation and experiments show that asymptotic analysis is not indicative of the true performance of ORAM in secure computation protocols with practical data sizes. We then present SCORAM, a heuristic \emph{compact} ORAM design optimized for secure computation protocols. Our new design is almost 10x smaller in circuit size and also faster than all other designs we have tested for realistic settings (i.e., memory sizes between 4MB and 2GB, constrained by $2^{-80}$ failure probability). SCORAM makes it feasible to perform secure computations on gigabyte-sized data sets.
Last updated:  2014-08-28
DoubleMod and SingleMod: Simple Randomized Secret-Key Encryption with Bounded Homomorphicity
Dhananjay S. Phatak, Qiang Tang, Alan T. Sherman, Warren D. Smith, Peter Ryan, Kostas Kalpakis
An encryption relation $f \subseteq {\mathbb Z} \times {\mathbb Z}$ with decryption function $f^{-1}$ is {\it ``group-homomorphic''} if, for any suitable plaintexts $x_1$ and $x_2$, $x_1+x_2 = f^{-1} ( f(x_1) + f(x_2) )$. It is {\it ``ring-homomorphic''} if furthermore $x_1 x_2 = f^{-1} ( f(x_1) f(x_2) )$; it is {\it ``field-homomorphic''} if furthermore $1/x_1 = f^{-1} ( f(1/x_1) )$. Such relations would support oblivious processing of encrypted data. We propose a simple randomized encryption relation~$f$ over the integers, called {\it DoubleMod}, which is ``bounded ring-homomorphic'' or what some call ``somewhat homomorphic.'' Here, ``bounded'' means that the number of additions and multiplications that can be performed, while not allowing the encrypted values to go out of range, is limited (any pre-specified bound on the operation count can be accommodated). Let $R$ be any large integer. For any plaintext $x \in {\mathbb Z}_R$, DoubleMod encrypts $x$ as $f(x) = x + au + bv$, where $a$ and $b$ are randomly chosen integers in some appropriate interval, while $(u,v)$ is the secret key. Here $u>R^2$ is a large prime and the smallest prime factor of $v$ exceeds $u$. With knowledge of the key, but not of $a$ and $b$, the receiver decrypts the ciphertext by computing $f^{-1}(y) =(y \bmod v) \bmod u$. DoubleMod generalizes an independent idea of van Dijk {\it et al.} 2010. We present and refine a new CCA1 chosen-ciphertext attack that finds the secret key of both systems (ours and van Dijk {\it et al.}'s) in time linear in the bit length of the security parameter. Under a known-plaintext attack, breaking DoubleMod is at most as hard as solving the {\it Approximate GCD (AGCD)} problem. The complexity of AGCD is not known. We also introduce the {\it SingleMod} field-homomorphic cryptosystems. The simplest SingleMod system, based on the integers, can be broken trivially.
We had hoped that, if SingleMod were implemented inside non-Euclidean quadratic or higher-order fields with large discriminants, where GCD computations appear difficult, it might be feasible to achieve a desired level of security. We show, however, that a variation of our chosen-ciphertext attack works against SingleMod even in non-Euclidean fields.
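The DoubleMod scheme is concrete enough to sketch directly from the abstract. A toy Python illustration with deliberately tiny, insecure parameters (the particular bounds on $a$ and $b$ below are arbitrary choices made for the sketch, not taken from the paper):

```python
import random

def is_prime(m):
    """Trial division; adequate for the toy sizes used here."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def next_prime(n):
    n += 1
    while not is_prime(n):
        n += 1
    return n

# Toy parameters for illustration only; real instances need far larger sizes.
R = 100                     # plaintext space Z_R
u = next_prime(R * R)       # large prime u > R^2
# Both prime factors of v exceed u, as the scheme requires.
v = next_prime(10**7) * next_prime(10**7 + 100)

def encrypt(x, a_bound=100):
    """DoubleMod: f(x) = x + a*u + b*v with fresh random a, b."""
    a = random.randrange(1, a_bound)
    b = random.randrange(1, a_bound)
    return x + a * u + b * v

def decrypt(y):
    """f^{-1}(y) = (y mod v) mod u, using the secret key (u, v)."""
    return (y % v) % u

# Bounded ("somewhat") homomorphism: sums and products of ciphertexts
# decrypt correctly as long as intermediate values stay within range.
x1, x2 = 12, 34
c1, c2 = encrypt(x1), encrypt(x2)
assert decrypt(c1 + c2) == x1 + x2
assert decrypt(c1 * c2) == x1 * x2
```

The bound on the number of operations shows up directly here: each addition grows the hidden $a$-part and each multiplication roughly squares it, so once $x + au$ (or a product of such terms) exceeds $v$, decryption fails.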
Last updated:  2014-08-28
On the Communication Complexity of Secure Function Evaluation with Long Output
Pavel Hubacek, Daniel Wichs
We study the communication complexity of secure function evaluation (SFE). Consider a setting where Alice has a short input $x_A$, Bob has an input $x_B$ and we want Bob to learn some function $y = f(x_A, x_B)$ with large output size. For example, Alice has a small secret decryption key, Bob has a large encrypted database and we want Bob to learn the decrypted data without learning anything else about Alice's key. In a trivial insecure protocol, Alice can just send her short input $x_A$ to Bob. However, all known SFE protocols have communication complexity that scales with size of the output $y$, which can potentially be much larger. Is such ``output-size dependence'' inherent in SFE? Surprisingly, we show that output-size dependence can be avoided in the \emph{honest-but-curious} setting. In particular, using indistinguishability obfuscation (iO) and fully homomorphic encryption (FHE), we construct the first honest-but-curious SFE protocol whose communication complexity only scales with that of the best insecure protocol for evaluating the desired function, independent of the output size. Our construction relies on a novel way of using iO via a new tool that we call a ``somewhere statistically binding (SSB) hash'', and which may be of independent interest. On the negative side, we show that output-size dependence is inherent in the \emph{fully malicious} setting, or even already in an \emph{honest-but-deterministic} setting, where the corrupted party follows the protocol as specified but fixes its random tape to some deterministic value. Moreover, we show that even in an offline/online protocol, the communication of the online phase must have output-size dependence. This negative result uses an incompressibility argument and it generalizes several recent lower bounds for functional encryption and (reusable) garbled circuits, which follow as simple corollaries of our general theorem.
Last updated:  2016-09-26
Fairness versus Guaranteed Output Delivery in Secure Multiparty Computation
Ran Cohen, Yehuda Lindell
In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their private inputs. The computation should preserve security properties such as privacy, correctness, independence of inputs, fairness and guaranteed output delivery. In the case of no honest majority, fairness and guaranteed output delivery cannot always be obtained. Thus, protocols for secure multiparty computation are typically of two disparate types: protocols that assume an honest majority (and achieve all properties \emph{including} fairness and guaranteed output delivery), and protocols that do not assume an honest majority (and achieve all properties \emph{except for} fairness and guaranteed output delivery). In addition, in the two-party case, fairness and guaranteed output delivery are equivalent. As a result, the properties of fairness (which means that if corrupted parties receive output then so do the honest parties) and guaranteed output delivery (which means that corrupted parties cannot prevent the honest parties from receiving output in any case) have typically been considered to be the same. In this paper, we initiate a study of the relation between fairness and guaranteed output delivery in secure multiparty computation. We show that in the multiparty setting these properties are distinct and proceed to study under what conditions fairness implies guaranteed output delivery (the opposite direction always holds). We also show the existence of non-trivial functions for which complete fairness is achievable (without an honest majority) but guaranteed output delivery is not, and the existence of non-trivial functions for which complete fairness and guaranteed output delivery are achievable. Our study sheds light on the role of broadcast in fairness and guaranteed output delivery, and shows that these properties should sometimes be considered separately.
Last updated:  2014-08-28
Cut-and-Choose Based Two-Party Computation in the Online/Offline and Batch Settings
Yehuda Lindell, Ben Riva
Protocols for secure two-party computation enable a pair of mistrusting parties to compute a joint function of their private inputs without revealing anything but the output. One of the fundamental techniques for obtaining secure computation is that of Yao's garbled circuits. In the setting of malicious adversaries, where the corrupted party can follow any arbitrary (polynomial-time) strategy in an attempt to breach security, the cut-and-choose technique is used to ensure that the garbled circuit is constructed correctly. The cost of this technique is the construction and transmission of multiple circuits; specifically, $s$ garbled circuits are used in order to obtain a maximum cheating probability of $2^{-s}$. In this paper, we show how to reduce the amortized cost of cut-and-choose based secure two-party computation to ${\cal O}(\frac{s}{\log N})$ garbled circuits when $N$ secure computations are run. We use this method to construct a secure protocol in the batch setting. Next, we show how the cut-and-choose method on garbled circuits can be used in an online/offline setting in order to obtain a very fast online phase with very few exponentiations, and we apply our amortization method to this setting as well. Our online/offline protocols are competitive with the TinyOT and SPDZ protocols due to the minimal interaction in the online phase (previous protocols require only information-theoretic operations in the online phase and are therefore very efficient; however, they also require many rounds of communication which increases latency). Although ${\cal O}(\frac{s}{\log N})$ may seem to be a mild efficiency improvement asymptotically, it is a \emph{dramatic improvement} for concrete parameters since $s$ is a statistical security parameter and so is typically small.
Specifically, instead of $40$ circuits to obtain an error of $2^{-40}$, when running $2^{10}$ executions we need only $7.06$ circuits on average per secure computation, and when running $2^{20}$ executions this is reduced to an average of just $4.08$. In addition, in the online/offline setting, the online phase per secure computation consists of evaluating only $6$ garbled circuits for $2^{10}$ executions and $4$ garbled circuits for $2^{20}$ executions (plus some small additional overhead). In practice, when using fast implementations (like the JustGarble framework of Bellare et al.), the resulting protocol is remarkably fast. We present a number of variants of our protocols with different assumptions and efficiency levels. Our basic protocols rely on the DDH assumption alone, while our most efficient variants are proven secure in the random-oracle model. Interestingly, the variant in the random-oracle model of our protocol for the online/offline setting has online communication that is independent of the size of the circuit in use. None of the previous protocols in the online/offline setting achieves this property, which is very significant since communication is usually a dominant cost in practice.
Last updated:  2015-12-03
Functional Encryption Without Obfuscation
Sanjam Garg, Craig Gentry, Shai Halevi, Mark Zhandry
Previously known functional encryption (FE) schemes for general circuits relied on indistinguishability obfuscation, which in turn either relies on an exponential number of assumptions (basically, one per circuit), or on a polynomial set of assumptions but with an exponential loss in the security reduction. Additionally, these schemes are proved in the weaker selective security model, where the adversary is forced to specify its target before seeing the public parameters. For these constructions, full security can be obtained, but at the cost of an exponential loss in the security reduction. In this work, we overcome the above limitations and realize a fully secure functional encryption scheme without using indistinguishability obfuscation. Specifically, the security of our scheme relies only on the polynomial hardness of simple assumptions on multilinear maps. As a separate technical contribution of independent interest, we show how to add to existing graded encoding schemes a new \emph{extension function}, which can be thought of as dynamically introducing new encoding levels.
Last updated:  2016-08-07
Orthogonal Direct Sum Masking: A Smartcard Friendly Computation Paradigm in a Code, with Builtin Protection against Side-Channel and Fault Attacks
Julien Bringer, Claude Carlet, Hervé Chabanne, Sylvain Guilley, Houssem Maghrebi
Secure elements, such as smartcards or trusted platform modules (TPMs), must be protected against implementation-level attacks. Those include side-channel and fault injection attacks. We introduce ODSM, Orthogonal Direct Sum Masking, a new computation paradigm that achieves protection against those two kinds of attacks. A large vector space is structured as two supplementary orthogonal subspaces. One subspace (called a code $\mathcal{C}$) is used for the functional computation, while the second subspace carries random numbers. As the random numbers are entangled with the sensitive data, ODSM ensures a protection against (monovariate) side-channel attacks. The random numbers can be checked either occasionally, or globally, thereby ensuring a fine or coarse detection capability. The security level can be formally detailed: it is proved that monovariate side-channel attacks of order up to $d_\mathcal{C}-1$, where $d_\mathcal{C}$ is the minimal distance of $\mathcal{C}$, are impossible, and that any fault of Hamming weight strictly less than $d_\mathcal{C}$ is detected. A complete instantiation of ODSM is given for AES. In this case, all monovariate side-channel attacks of order strictly less than $5$ are impossible, and all fault injections perturbing strictly less than $5$ bits are detected.
Last updated:  2014-08-28
On the Optimal Pre-Computation of Window $\tau$NAF for Koblitz Curves
William R. Trost, Guangwu Xu
Koblitz curves have long attracted both theoretical and practical interest. The window $\tau$-adic algorithm of Solinas (window $\tau$NAF) is the most powerful method for computing point multiplication for Koblitz curves. Pre-computation plays an important role in improving the performance of point multiplication. In this paper, the concept of optimal pre-computation for window $\tau$NAF is formulated. In this setting, an optimal pre-computation has some mathematically natural and clean forms, and requires $2^{w-2}-1$ point additions and two evaluations of the Frobenius map $\tau$, where $w$ is the window width. One of the main results of this paper is to construct an optimal pre-computation scheme for each window width $w$ from $4$ to $15$ (more than practical needs require). These pre-computations can be easily incorporated into implementations of window $\tau$NAF. The ideas in the paper can also be used to construct other suitable pre-computations. This paper also includes a discussion of coefficient sets for window $\tau$NAF and of divisibility by powers of $\tau$, approached in different ways.
Last updated:  2014-08-28
Locally Decodable and Updatable Non-Malleable Codes and Their Applications
Dana Dachman-Soled, Feng-Hao Liu, Elaine Shi, Hong-Sheng Zhou
Non-malleable codes, introduced as a relaxation of error-correcting codes by Dziembowski, Pietrzak and Wichs (ICS '10), provide the security guarantee that the message contained in a tampered codeword is either the same as the original message or is set to an unrelated value. Various applications of non-malleable codes have been discovered, and one of the most significant applications among these is the connection with tamper-resilient cryptography. There is a large body of work considering security against various classes of tampering functions, as well as non-malleable codes with enhanced features such as leakage resilience. In this work, we propose combining the concepts of non-malleability, leakage resilience, and locality in a coding scheme. The contribution of this work is three-fold: 1. As a conceptual contribution, we define a new notion of locally decodable and updatable non-malleable code that combines the above properties. 2. We present two simple and efficient constructions achieving our new notion with different levels of security. 3. We present an important application of our new tool--securing RAM computation against memory tampering and leakage attacks. This is analogous to the usage of traditional non-malleable codes to secure implementations in the circuit model against memory tampering and leakage attacks.
Last updated:  2015-04-30
Outsourced Pattern Matching
Sebastian Faust, Carmit Hazay, Daniele Venturi
In secure delegatable computation, computationally weak devices (or clients) wish to outsource their computation and data to an untrusted server in the cloud. While most earlier work considers the general question of how to securely outsource any computation to the cloud server, we focus on concrete and important functionalities and give the first protocol for the pattern matching problem in the cloud. Loosely speaking, this problem considers a text $T$ that is outsourced to the cloud server $S$ by a sender $P$. In a query phase, receivers $R_1, \ldots, R_l$ run an efficient protocol with the server $S$ and the sender $P$ in order to learn the positions at which a pattern of length $m$ matches the text (and nothing beyond that). This is called the outsourced pattern matching problem, which is highly motivated in the context of delegatable computing since it offers storage alternatives for massive databases that contain confidential data (e.g., health-related data about patient history). Our constructions are simulation-based secure in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to $O(m)$ bits plus the number of occurrences---which is optimal. In contrast to generic solutions for delegatable computation, our schemes do not rely on fully homomorphic encryption but instead use novel ideas for solving pattern matching, based on a reduction to the subset sum problem. Interestingly, we do not rely on the hardness of the problem; rather, we exploit instances that are solvable in polynomial time. A follow-up result demonstrates that the random oracle is essential in order to meet our communication bound.
Last updated:  2014-08-28
One-Round Deniable Key Exchange with Perfect Forward Security
Weiqiang Wen, Libin Wang, Min Xie
In response to the need for secure one-round authenticated key exchange protocols providing both perfect forward secrecy and full deniability, we put forward a new paradigm for constructing protocols from a Diffie-Hellman type protocol plus a non-interactive designated verifier proof of knowledge (DV-PoK) scheme. We define the notion of DV-PoK, a variant of non-interactive zero-knowledge proof of knowledge, and provide an efficient DV-PoK scheme as the central technical building block of our protocol. The DV-PoK scheme possesses nice properties such as unforgeability and symmetry, which help our protocol achieve perfect forward secrecy and full deniability respectively. Moreover, the security properties are formally proved in the Canetti-Krawczyk model under the Gap Diffie-Hellman assumption. In sum, our protocol offers a remarkable combination of salient security properties and efficiency, and the notion of DV-PoK is of independent interest.
Last updated:  2014-08-27
Interactive Proofs under Continual Memory Leakage
Prabhanjan Ananth, Vipul Goyal, Omkant Pandey
We consider the task of constructing interactive proofs for NP which can provide meaningful security for a prover even in the presence of continual memory leakage. We imagine a setting where an adversarial verifier participates in multiple sequential interactive proof executions for a fixed NP statement x. In every execution, the adversarial verifier is additionally allowed to leak a fraction of the (secret) memory of the prover. This is in contrast to the recently introduced notion of leakage-resilient zero-knowledge (Garg-Jain-Sahai'11) where there is only a single execution. Under multiple executions, in fact the entire prover witness might end up getting leaked thus leading to a complete compromise of prover security. Towards that end, we define the notion of non-transferable proofs for all languages in NP. In such proofs, instead of receiving w as input, the prover will receive an "encoding'' of the witness w such that the encoding is sufficient to prove the validity of x; further, this encoding can be "updated'' to a fresh new encoding for the next execution. We then require that if (x,w) are sampled from a "hard'' distribution, then no PPT adversary A* can gain the ability to prove x (on its own) to an honest verifier, even if A* has participated in polynomially many interactive proof executions (with leakage) with an honest prover whose input is (x,w). Non-transferability is a strong security guarantee which suffices for many cryptographic applications (and in particular, implies witness hiding). We show how to construct non-transferable proofs for all languages in NP which can tolerate leaking a constant fraction of prover's secret-state during each execution. Our construction is in the common reference string (CRS) model. 
To obtain our results, we build a witness-encoding scheme which satisfies the following continual-leakage-resilient (CLR) properties: - The encodings can be randomized to yield a fresh new encoding, - There does not exist any efficient adversary, who receiving only a constant fraction of leakage on polynomially many fresh encodings of the same witness w, can output a valid encoding provided that the witness w along with its corresponding input instance x were sampled from a hard distribution. Our encoding schemes are essentially re-randomizable non-interactive zero-knowledge (NIZK) proofs for circuit satisfiability, with the aforementioned CLR properties. We believe that our CLR-encodings, as well as our techniques to build them, may be of independent interest.
Last updated:  2014-08-27
On the Primitivity of Trinomials over Small Finite Fields
Yujuan Li, Jinhua Zhao, Huaifu Wang
In this paper, we explore the primitivity of trinomials over small finite fields. We extend the results on the primitivity of trinomials $x^{n}+ax+b$ over ${\mathbb{F}}_{4}$ \cite{Li} to the general form $x^{n}+ax^{k}+b$. We prove that, for given $n$ and $k$, one of the trinomials $x^{n}+ax^{k}+b$ with $b$ a primitive element of ${\mathbb{F}}_{4}$ and $a+b\neq1$ is primitive over ${\mathbb{F}}_{4}$ if and only if all the others are. Consequently, once we find one primitive trinomial over ${\mathbb{F}}_{4}$, there are in fact at least four primitive trinomials of the same degree. We give necessary conditions for the existence of primitive trinomials over ${\mathbb{F}}_{4}$. We study the trinomials with degrees $n=4^{m}+1$ and $n=21\cdot4^{m}+29$, where $m$ is a positive integer, and prove that the trinomials $x^{n}+ax+b$ with these degrees are always reducible if $m>1$. Results that carry over directly to ${\mathbb{F}}_{3}$ are stated as well.
Last updated:  2015-11-05
The Adjacency Graphs of Some Feedback Shift Registers
Ming Li, Yupeng Jiang, Dongdai Lin
The adjacency graphs of feedback shift registers (FSRs) with characteristic function of the form g=(x_0+x_1)*f are considered in this paper. Some properties of these FSRs are given. It is proved that these FSRs contain only prime cycles and that these cycles can be divided into two sets such that each set contains no adjacent cycles. When f is a linear function, more properties of these FSRs are derived. It is shown that, when f is a linear function containing an odd number of terms, the adjacency graph of \mathrm{FSR}((x_0+x_1)*f) can be determined directly from the adjacency graph of \mathrm{FSR}(f). As an application of these results, we determine the adjacency graphs of \mathrm{FSR}((1+x)^4p(x)) and \mathrm{FSR}((1+x)^5p(x)), where p(x) is a primitive polynomial, and construct a large class of de Bruijn sequences from them.
Last updated:  2014-08-27
On the cycle decomposition of the WG-NLFSR
Yujuan Li, Wenhua Shen, Huaifu Wang, Peipei Zhou
Recently, Kalikinkar Mandal and Guang Gong presented a family of nonlinear pseudorandom number generators using Welch-Gong transformations in their paper [6]. They also performed the cycle decomposition of the WG-NLFSR recurrence relations over different finite fields by computer simulation, where the nonlinear recurrence relation is composed of a characteristic polynomial and a WG permutation. In this paper, we prove that the state transition transformation of the WG-NLFSR is an even permutation, and that the number of cycles in the cycle decomposition of the WG-NLFSR is even. We apply our results to the filtering WG7-NLFSR to prove that the period of the sequences generated by WG7-NLFSR cannot be maximal.
Last updated:  2015-02-18
Cryptanalytic Time-Memory-Data Tradeoffs for FX-Constructions with Applications to PRINCE and PRIDE
Itai Dinur
The FX-construction was proposed in 1996 by Kilian and Rogaway as a generalization of the DESX scheme. The construction increases the security of an $n$-bit core block cipher with a $\kappa$-bit key by using two additional $n$-bit masking keys. Recently, several concrete instances of the FX-construction were proposed, including PRINCE (proposed at Asiacrypt 2012) and PRIDE (proposed at CRYPTO 2014). These ciphers have $n=\kappa=64$, and are proven to guarantee about $127-d$ bits of security, assuming that their core ciphers are ideal, and the adversary can obtain at most $2^d$ data. In this paper, we devise new cryptanalytic time-memory-data tradeoff attacks on FX-constructions. While our attacks do not contradict the security proof of PRINCE and PRIDE, nor pose an immediate threat to their users, some specific choices of tradeoff parameters demonstrate that the security margin of the ciphers against practical attacks is smaller than expected. Our techniques combine a special form of time-memory-data tradeoffs, typically applied to stream ciphers, with recent analysis of FX-constructions by Fouque, Joux and Mavromati.
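The FX-construction described above is simple to state: $FX_{k,k_1,k_2}(x) = E_k(x \oplus k_1) \oplus k_2$, where $k_1, k_2$ are the $n$-bit masking keys. A toy sketch follows; the 16-bit keyed permutation standing in for the core cipher is an insecure invention for illustration, not PRINCE or PRIDE.

```python
# FX-construction sketch: FX_{k, k1, k2}(x) = E_k(x XOR k1) XOR k2.
N_BITS = 16
MASK = (1 << N_BITS) - 1

def toy_core(k, x):
    """Insecure stand-in core cipher: a keyed 16-bit permutation built
    from invertible steps (modular add, odd multiply mod 2^16, xorshift)."""
    y = x & MASK
    for r in range(3):
        y = (y + k + r) & MASK
        y = (y * 40503) & MASK   # odd multiplier: invertible mod 2^16
        y ^= y >> 7              # xorshift: invertible on 16 bits
    return y

def fx_encrypt(k, k1, k2, x):
    """Whiten the input with k1, apply the core cipher, whiten with k2."""
    return toy_core(k, (x ^ k1) & MASK) ^ k2

# Every step of toy_core is invertible, so FX is a permutation for fixed
# keys: all 2^16 inputs map to distinct outputs.
outputs = {fx_encrypt(0xBEEF, 0x1234, 0x5678, x) for x in range(1 << N_BITS)}
assert len(outputs) == 1 << N_BITS
```

The point of the construction is that the two masking keys add security beyond the $\kappa$-bit core key essentially for free; the paper's tradeoff attacks target exactly this whitening layer.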
Last updated:  2014-09-12
Pleco and Plectron -- Two Provably Secure Password Hashing Algorithms
Bo Zhu, Xinxin Fan, Guang Gong
Password-based authentication has been widely deployed in practice due to its simplicity and efficiency. Storing passwords and deriving cryptographic keys from passwords in a secure manner are crucial for many security systems and services. However, choices of well-studied password hashing algorithms are extremely limited, as their security requirements and design principles differ from those of common cryptographic algorithms. In this paper, we propose two practical password hashing algorithms, Pleco and Plectron. They are built upon well-understood cryptographic algorithms, and combine advantages of symmetric and asymmetric primitives. By employing the Rabin cryptosystem, we prove that the one-wayness of Pleco is at least as strong as the hard problem of integer factorization. In addition, both password hashing algorithms are designed to be sequential memory-hard, in order to thwart large-scale password cracking by parallel hardware, such as GPUs, FPGAs, and ASICs. Moreover, the total computation and memory consumption of Pleco and Plectron are tunable through their cost parameters.
Last updated:  2014-08-27
Multi-Bit Differential Fault Analysis of Grain-128 with Very Weak Assumptions
Prakash Dey, Abhishek Chakraborty, Avishek Adhikari, Debdeep Mukhopadhyay
Very few differential fault attacks (DFA) have been reported on {\em Grain-128} so far. In this paper we present a generic attack strategy that allows the adversary to challenge the cipher under different multi-bit fault models, with faults at a targeted keystream generation round, even if the bit arrangement of the actual cipher device is unknown. Unique identification of the fault locations is also not necessary. To the best of our knowledge, this paper assumes the weakest adversarial power ever considered in the open literature for DFA on {\em Grain-128} and develops the most realistic attack strategy so far on {\em Grain-128}. In particular, when a single fault injection at the first keystream generation round can only disturb a random area within $k \in \{1,2,3,4,5\}$ neighbourhood bits ($k$-neighbourhood bit fault), without the adversary knowing the locations or the exact number of bits the injected fault has altered, our attack strategy always breaks the cipher with $5$ faults. In a weaker setup, even if the bit arrangement of the cipher device is unknown, bad faults (at the first keystream generation round) are rejected with probabilities $0.999993$, $0.999979$, $0.999963$, $0.999946$ and $0.999921$, assuming that the adversary uses only 1, 2, 3, 4 and 5 neighbourhood bit faults respectively for {\em key-IV} recovery.
Last updated:  2015-01-12
Mersenne factorization factory
Thorsten Kleinjung, Joppe W. Bos, Arjen K. Lenstra
We present work in progress to fully factor seventeen Mersenne numbers using a variant of the special number field sieve in which sieving on the algebraic side is shared among the numbers. We expect this to reduce the overall factoring effort by more than 50\%. As far as we know, this is the first practical application of Coppersmith's ``factorization factory'' idea. Most factorizations used a new double-product approach that led to additional savings in the matrix step.
Last updated:  2014-11-26
A Dynamic Cube Attack on $105$ round Grain v1
Subhadeep Banik
As far as differential cryptanalysis of reduced-round Grain v1 is concerned, the best results were those published by Knellwolf et al. at Asiacrypt $2011$. In an extended version of the paper, it was shown that it is possible to retrieve {\bf (i)} $5$ expressions in the Secret Key bits for a variant of Grain v1 that employs $97$ rounds (in place of $160$) in its Key Scheduling process using $2^{27}$ chosen IVs and {\bf (ii)} $1$ expression in Secret Key bits for a variant that employs $104$ rounds in its Key Scheduling using $2^{35}$ chosen IVs. However, the second attack, on $104$ rounds, had a success probability of around $50$\%, which is to say that the attack worked for only around one half of the Secret Keys. In this paper we propose a dynamic cube attack on $105$-round Grain v1 that has a success probability of $100$\%, and thus we report an improvement of $8$ rounds over the previous best attack on Grain v1 that attacks the entire Keyspace. We take the help of the tool $\Delta${\sf Grain}$_{\sf KSA}$, proposed by Banik at ACISP 2014, to track the differential trails induced in the internal state of Grain v1 by any difference in the IV bits, and we prove that a suitably introduced difference in the IV leads to a distinguisher for the output bit produced in the $105^{th}$ round. This, in turn, helps determine the values of $6$ expressions in the Secret Key bits.
Last updated:  2014-08-27
A note on CCA2-protected McEliece Cryptosystem with a systematic public key
Pavol Zajac
We show that the plaintext of some of the proposed CCA2 conversions of the McEliece cryptosystem with a public key in systematic form can be recovered faster than with general linear decoding. This is due to the fact that an attacker only needs to recover a part of the cleartext to decrypt the relevant plaintext.
Last updated:  2014-08-27
Round-Optimal Password-Protected Secret Sharing and T-PAKE in the Password-Only Model
Stanislaw Jarecki, Aggelos Kiayias, Hugo Krawczyk
In a Password-Protected Secret Sharing (PPSS) scheme with parameters (t,n) (formalized by Bagherzandi et al.), a user Alice stores secret information s among n servers so that she can later recover the information solely on the basis of her password. The security requirement is similar to a (t,n)-threshold secret sharing, i.e., Alice can recover her secret as long as she can communicate with t + 1 honest servers but an attacker gaining access to t servers cannot learn information about the secret. In particular, the system is secure against off-line attacks by an attacker controlling up to t servers. On the other hand, accounting for inevitable on-line attacks one allows the attacker an advantage proportional to the fraction of dictionary passwords tested in on-line interactions with the user and servers. We present the first round-optimal PPSS scheme, requiring just one message from user to server and one from server to user, which works in the password-only setting where users do not have access to an authenticated public key. The scheme uses an Oblivious PRF whose security we define using a UC-style ideal functionality and denote as V-OPRF due to its verifiability, and for which we show concrete, very practical realizations in the random oracle model, as well as standard-model instantiations. As an important application we use this scheme to build the first single-round password-only Threshold-PAKE protocol in the CRS and ROM models for arbitrary (t,n) parameters with no PKI requirements for any party (clients or servers) and no inter-server communication. Our T-PAKE protocols are built by combining suitable key exchange protocols on top of our V-OPRF-based PPSS schemes. We prove T-PAKE security via a generic composition theorem showing the security of any such composed protocol.
Last updated:  2014-12-18
FPGA Trojans through Detecting and Weakening of Cryptographic Primitives
Pawel Swierczynski, Marc Fyrbiak, Philipp Koppe, Christof Paar
This paper investigates a novel attack vector against cryptography realized on FPGAs, which poses a serious threat to real-world applications. We demonstrate how a targeted bitstream modification can seriously weaken cryptographic algorithms, which we show with the examples of AES and 3DES. The attack is performed by modifying the FPGA bitstream that configures the hardware elements during initialization. Recently, it has been shown that cloning of FPGA designs is feasible, even if the bitstream is encrypted. However, due to its proprietary file format, a meaningful modification is very challenging. While some previous work addressed bitstream reverse-engineering, so far it has not been evaluated how difficult it is to detect and modify cryptographic elements. We outline two possible practical attacks that have serious security implications. We target the S-boxes of block ciphers that can be implemented in look-up tables or stored as a precomputed set of values in the memory of the FPGA. We demonstrate that it is possible to detect and apply meaningful changes to cryptographic elements inside an unknown, proprietary and undocumented bitstream. Our proposed attack does not require any knowledge of the internal routing. Furthermore, we show how an AES key can be revealed within seconds. Finally, we discuss countermeasures that can raise the bar for an adversary to successfully perform this kind of attack.
Last updated:  2016-02-15
An Equivalent Condition on the Switching Construction of Differentially $4$-uniform Permutations on $\gf_{2^{2k}}$ from the Inverse Function
Xi Chen, Yazhi Deng, Min Zhu, Longjiang Qu
Differentially $4$-uniform permutations on $\gf_{2^{2k}}$ with high nonlinearity are often chosen as substitution boxes in block ciphers. Recently, Qu et al. used the powerful switching method to construct permutations with low differential uniformity from the inverse function \cite{QTTL, QTLG} and proposed a sufficient but not necessary condition for these permutations to be differentially $4$-uniform. In this paper, a sufficient and necessary condition is presented. We also give a compact estimation for the number of constructed differentially $4$-uniform permutations. Comparing with those constructions in \cite{QTTL, QTLG}, the number of functions constructed here is much bigger. As an application, a new class of differentially $4$-uniform permutations is constructed. The obtained functions in this paper may provide more choices for the design of substitution boxes.
Last updated:  2014-08-28
Universally Composable Secure Group Communication
Youliang Tian, Changgen Peng
This paper analyzes group communication within the universally composable framework. We first propose the group communication model, identity-based signcryption model and group key distribution model in the UC framework by designing the ideal functionalities $\mathcal {F}_{SAGCOM}$, $\mathcal {F}_{IDSC}$ and $\mathcal {F}_{GKD}$, respectively. Then, we construct a UC-secure identity-based signcryption protocol $\pi_{IDSC}$. Moreover, we show that the identity-based signcryption protocol $\pi_{IDSC}$ securely realizes the ideal functionality $\mathcal {F}_{IDSC}$ if and only if the corresponding protocol IDSC is secure. Finally, based on the identity-based protocol, we propose a group communication scheme $\pi_{SAGCOM}$, which securely realizes the ideal functionality $\mathcal {F}_{SAGCOM}$ in the $(\mathcal {F}_{IDSC},\mathcal {F}_{GKD})$-hybrid model.
Last updated:  2014-08-27
High-speed Polynomial Multiplication Architecture for Ring-LWE and SHE Cryptosystems
Donald Donglong Chen, Nele Mentens, Frederik Vercauteren, Sujoy Sinha Roy, Ray C. C. Cheung, Derek Pao, Ingrid Verbauwhede
Polynomial multiplication is the basic and most computationally intensive operation in ring-Learning With Errors (ring-LWE) encryption and ``Somewhat'' Homomorphic Encryption (SHE) cryptosystems. In this paper, the Fast Fourier Transform (FFT), with a linearithmic complexity of $O(n\log n)$, is exploited in the design of a high-speed polynomial multiplier. A constant geometry FFT datapath is used in the computation to simplify the control of the architecture. The contribution of this work is three-fold. First, parameter sets which support both an efficient modular reduction design and the security requirements for ring-LWE encryption and SHE are provided. Second, a versatile pipelined architecture accompanied by an improved dataflow is proposed to obtain a high-speed polynomial multiplier. Third, the proposed architecture supports polynomial multiplications for different lengths $n$ and moduli $p$. The experimental results on a Spartan-6 FPGA show that the proposed design results in a speedup of 3.5 times on average when compared with the state of the art. It performs a polynomial multiplication in the ring-LWE scheme ($n = 256, p = 1049089$) and the SHE scheme ($n = 1024, p = 536903681$) in only 6.3$\mu$s and 33.1$\mu$s, respectively.
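For concreteness, the ring arithmetic being accelerated can be sketched with a schoolbook multiplier in $\mathbb{Z}_p[x]/(x^n+1)$ (our quadratic-time illustration, not the paper's FFT datapath, which computes the same product in $O(n\log n)$ operations):

```python
# Schoolbook multiplication in R_p = Z_p[x]/(x^n + 1), the polynomial ring
# underlying ring-LWE and SHE. The rule x^n = -1 gives the negacyclic
# wrap-around; an NTT/FFT-based multiplier computes the same result faster.
def negacyclic_mul(a, b, n, p):
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % p
            else:
                c[k - n] = (c[k - n] - a[i] * b[j]) % p  # x^n = -1
    return c
```

For example, in $\mathbb{Z}_{17}[x]/(x^4+1)$ the product $x^3 \cdot x = x^4$ reduces to $-1 \equiv 16$.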
Last updated:  2014-11-12
Graph-Induced Multilinear Maps from Lattices
Craig Gentry, Sergey Gorbunov, Shai Halevi
Graded multilinear encodings have found extensive applications in cryptography ranging from non-interactive key exchange protocols, to broadcast and attribute-based encryption, and even to software obfuscation. Despite seemingly unlimited applicability, essentially only two candidate constructions are known (GGH and CLT). In this work, we describe a new graph-induced multilinear encoding scheme from lattices. In a graph-induced multilinear encoding scheme the arithmetic operations that are allowed are restricted through an explicitly defined directed graph (somewhat similar to the "asymmetric variant" of previous schemes). Our construction encodes Learning With Errors (LWE) samples in short square matrices of higher dimensions. Addition and multiplication of the encodings corresponds naturally to addition and multiplication of the LWE secrets.
Last updated:  2014-08-28
Side Channel Attacks: Vulnerability Analysis of PRINCE and RECTANGLE using DPA
Ravikumar Selvam, Dillibabu Shanmugam, Suganya Annadurai
For over a decade, cryptographers have paid close attention to designing lightweight ciphers for compact cryptographic devices. More often than not, the security of these algorithms is defined in terms of their resistance to mathematical cryptanalysis methods. Nevertheless, designers are well aware of implementation attacks and concentrate on new design strategies to improve resistance against them. The PRINCE~\cite{Julia2012} and RECTANGLE~\cite{cryptoeprint:2014:084} lightweight block ciphers were designed using new design strategies for efficiency and security. In this paper we analyse the security of PRINCE and RECTANGLE against a type of implementation attack called the Differential Power Analysis (DPA) attack. Our attack reduces the key search space from $2^{128}$ to $33008$ for PRINCE and from $2^{80}$ to $288$ for RECTANGLE.
Last updated:  2014-08-27
On the Security of `An Efficient Biometric Authentication Protocol for Wireless Sensor Networks'
Ashok Kumar Das
In 2013, Althobaiti et al. proposed an efficient biometric-based user authentication scheme for wireless sensor networks. We analyze the security of their scheme against known attacks. Though their scheme is efficient in computation, in this paper we show that it has some security pitfalls: (1) it is not resilient against node capture attack, (2) it is insecure against impersonation attack and (3) it is insecure against man-in-the-middle attack. Finally, we give some pointers for improving their scheme so that the resulting scheme is secure against various known attacks.
Last updated:  2014-08-27
Balanced permutations Even-Mansour ciphers
Shoni Gilboa, Shay Gueron
The $r$-round Even-Mansour block cipher uses $r$ public permutations of $\{0, 1\}^n$ and $r+1$ secret keys. An attack on this construction was described in \cite{DDKS}, for $r = 2, 3$. Although this attack is only marginally better than brute force, it is based on an interesting observation (due to \cite{NWW}): for a ``typical'' permutation $P$, the distribution of $P(x) \oplus x$ is not uniform. To address this, and other potential threats that might stem from this observation in this (or other) context, we introduce the notion of a ``balanced permutation'' for which the distribution of $P(x) \oplus x$ is uniform, and show how to generate families of balanced permutations from the Feistel construction. This allows us to define a $2n$-bit block cipher from the $2$-round Even-Mansour scheme. The cipher uses public balanced permutations of $\{0, 1\}^{2n}$, which are based on two public permutations of $\{0, 1\}^{n}$. By construction, this cipher is immune against attacks that rely on the non-uniform behavior of $P(x) \oplus x$. We prove that this cipher is indistinguishable from a random permutation of $\{0, 1\}^{2n}$, for any adversary who has oracle access to the public permutations and to an encryption/decryption oracle, as long as the number of queries is $o (2^{n/2})$. As a practical example, we discuss the properties and the performance of a $256$-bit block cipher that is based on AES.
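The defining property is easy to check on small examples. The sketch below (our illustration, not the paper's Feistel-based construction) contrasts a random 8-bit permutation with the $\mathrm{GF}(2^8)$ linear map $P(x) = 3 \cdot x$, which is balanced because $P(x) \oplus x = 2 \cdot x$ is itself a bijection:

```python
# A permutation P of {0,1}^n is "balanced" when x -> P(x) XOR x is itself a
# permutation, i.e. P(x) XOR x takes every value exactly once. A random
# permutation almost never satisfies this; the GF(2^8) map P(x) = 3*x does,
# since P(x) XOR x = 2*x is a bijection. Illustration only.
import random

def gf_mul(a, b):
    # Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1.
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def is_balanced(P):
    return len({P[x] ^ x for x in range(256)}) == 256

linear = [gf_mul(3, x) for x in range(256)]     # balanced
rand_perm = list(range(256))                    # "typical": not balanced
random.Random(0).shuffle(rand_perm)
```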
Last updated:  2015-09-11
The Multiple Number Field Sieve with Conjugation Method
Cécile Pierrot
In this short paper, we propose a variant of the Number Field Sieve to compute discrete logarithms in medium-characteristic finite fields. Our algorithm combines two recent ideas: the Multiple variant of the Number Field Sieve, which takes advantage of a large number of number fields in the sieving phase, and the Conjugation Method, which gives a new polynomial selection for the classical Number Field Sieve. The asymptotic complexity of our improved algorithm is L_Q(1/3, (8(9+4\sqrt{6})/15)^{1/3}), where Q is the cardinality of the target finite field F_Q and (8(9+4\sqrt{6})/15)^{1/3} is approximately equal to 2.156. This has to be compared with the complexity of the previous state-of-the-art algorithm for medium-characteristic finite fields, the Number Field Sieve with Conjugation Method, which has a complexity of approximately L_Q(1/3, 2.201).
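For reference, the $L$-notation used in these complexity statements is the standard subexponential scale:

```latex
L_Q(\alpha, c) \;=\; \exp\!\Big( (c + o(1))\,(\ln Q)^{\alpha}\,(\ln\ln Q)^{1-\alpha} \Big),
\qquad
\left( \frac{8\,(9 + 4\sqrt{6})}{15} \right)^{1/3} \approx 2.156 .
```

Both the new algorithm and the previous state of the art thus sit at $\alpha = 1/3$ and differ only in the constant $c$ (2.156 versus 2.201).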
Last updated:  2015-01-15
Revocation in Publicly Verifiable Outsourced Computation
James Alderman, Christian Janson, Carlos Cid, Jason Crampton
The combination of software-as-a-service and the increasing use of mobile devices gives rise to a considerable difference in computational power between servers and clients. Thus, there is a desire for clients to outsource the evaluation of complex functions to an external server. Servers providing such a service may be rewarded per computation, and as such have an incentive to cheat by returning garbage rather than devoting resources and time to compute a valid result. In this work, we introduce the notion of Revocable Publicly Verifiable Computation (RPVC), where a cheating server is revoked and may not perform future computations (thus incurring a financial penalty). We introduce a Key Distribution Center (KDC) to efficiently handle the generation and distribution of the keys required to support RPVC. The KDC is an authority over entities in the system and enables revocation. We also introduce a notion of blind verification such that results are verifiable (and hence servers can be rewarded or punished) without learning the value. We present a rigorous definitional framework, define a number of new security models and present a construction of such a scheme built upon Key-Policy Attribute-based Encryption.
Last updated:  2014-08-21
Automated Design, Implementation, and Evaluation of Arbiter-based PUF on FPGA using Programmable Delay Lines
Mehrdad Majzoobi, Akshat Kharaya, Farinaz Koushanfar, Srinivas Devadas
This paper proposes a novel approach for automated implementation of an arbiter-based physical unclonable function (PUF) on field programmable gate arrays (FPGAs). We introduce a high-resolution programmable delay logic (PDL) that is implemented by harnessing the FPGA lookup-table (LUT) internal structure. PDL allows automatic fine tuning of delays that can mitigate the timing skews caused by asymmetries in interconnect routing and systematic variations. To thwart the arbiter metastability problem, we present and analyze methods for majority voting of responses. A method to classify and group challenges into different robustness sets is introduced that enhances the corresponding responses’ stability in the face of operational variations. The trade-off between response stability and response entropy (uniqueness) is investigated through comprehensive measurements. We exploit the correlation between the impact of temperature and power supply on responses and perform less costly power measurements to predict the temperature impact on the PUF. The measurements are performed on 12 identical Virtex 5 FPGAs across 9 different accurately controlled operating temperature and voltage supply points. A database of challenge-response pairs (CRPs) is collected and made openly available for the research community.
Last updated:  2015-06-18
Substring-Searchable Symmetric Encryption
Melissa Chase, Emily Shen
In this paper, we consider a setting where a client wants to outsource storage of a large amount of private data and then perform substring search queries on the data -- given a data string s and a search string p, find all occurrences of p as a substring of s. First, we formalize an encryption paradigm that we call queryable encryption, which generalizes searchable symmetric encryption (SSE) and structured encryption. Then, we construct a queryable encryption scheme for substring queries. Our construction uses suffix trees and achieves asymptotic efficiency comparable to that of unencrypted suffix trees. Encryption of a string of length n takes O(kn) time and produces a ciphertext of size O(kn), and querying for a substring of length m that occurs z times takes O(km+z) time and three rounds of communication, where k is the security parameter. Our security definition guarantees correctness of query results and privacy of data and queries against a malicious, adaptive adversary. Following the line of work started by Curtmola et al. (ACM CCS 2006), in order to construct more efficient schemes we allow the query protocol to leak some limited information that is captured precisely in the definition. We prove security of our substring-searchable encryption scheme against malicious adversaries, where the query protocol leaks limited information about memory access patterns through the suffix tree of the encrypted string.
Last updated:  2015-01-23
Generic Hardness of the Multiple Discrete Logarithm Problem
Aaram Yun
We study generic hardness of the multiple discrete logarithm problem, where the solver has to solve $n$ instances of the discrete logarithm problem simultaneously. There are known generic algorithms which perform $O(\sqrt{n p})$ group operations, where $p$ is the group order, but no generic lower bound was known other than the trivial bound. In this paper we prove the tight generic lower bound, showing that the previously known algorithms are asymptotically optimal. We establish the lower bound by studying hardness of a related computational problem which we call the search-by-hyperplane-queries problem, which may be of independent interest.
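As context for the $O(\sqrt{np})$ upper bound: a single instance can be solved generically in $O(\sqrt{p})$ group operations with baby-step giant-step, and the multi-instance algorithms referenced above amortize the precomputed table across the $n$ instances. A minimal single-instance sketch over $\mathbb{Z}_p^*$ (our illustration, not from the paper):

```python
# Baby-step giant-step: solve g^x = h (mod p) in about sqrt(p) multiplications
# by tabulating g^j for j < m (baby steps) and stepping h by g^{-m} (giant steps).
import math

def bsgs(g, h, p):
    """Return x with g^x = h (mod p), for g a generator mod prime p."""
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby-step table
    inv_gm = pow(g, -m, p)                        # g^{-m} mod p (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_gm % p
    return None
```

In the multi-instance setting the `baby` table is built once and reused for all $n$ targets, which is the kind of amortization the $O(\sqrt{np})$ algorithms exploit.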
Last updated:  2014-08-21
Improved Timing Attacks on ECDSA
Vikram Singh
We improve the timing attack on ECDSA in [1] by Brumley and Tuveri. We use the Gaussian heuristic to analyse the length of error vectors in the lattice Close Vector Problem in order to determine the problems which are theoretically solvable. Then we cost each solution using a strengthened lattice reduction algorithm and Schnorr-Euchner enumeration to determine which problems are practically solvable. The original work by Brumley and Tuveri resulted in OpenSSL's ECDSA being updated to remove the timing information they exploited, so that application is not vulnerable to our improvements. However we publish this work as a general advance in side-channel recovery techniques which may be applicable in related scenarios.
Last updated:  2015-09-10
Type 2 Structure-Preserving Signature Schemes Revisited
Sanjit Chatterjee, Alfred Menezes
At CRYPTO 2014, Abe et al. presented generic-signer structure-preserving signature schemes using Type 2 pairings. According to the authors, the proposed constructions are optimal with only two group elements in each signature and just one verification equation. The schemes beat the known lower bounds in the Type 3 setting and thereby establish that the Type 2 setting permits construction of cryptographic schemes with unique properties not achievable in Type 3. In this paper we undertake a concrete analysis of the Abe et al. claims. By properly accounting for the actual structure of the underlying groups and subgroup membership testing of group elements in signatures, we show that the schemes are not as efficient as claimed. We present natural Type 3 analogues of the Type 2 schemes, and show that the Type 3 schemes are superior to their Type 2 counterparts in every aspect. We also formally establish that in the concrete mathematical structure of asymmetric pairing, all Type 2 structure-preserving signature schemes can be converted to the Type 3 setting without any penalty in security or efficiency, and show that the converse is false. Furthermore, we prove that the Type 2 setting does not allow one to circumvent the known lower bound result for the Type 3 setting. Our analysis puts the optimality claims for Type 2 structure-preserving signature in a concrete perspective and indicates an incompleteness in the definition of a generic bilinear group in the Type 2 setting.
Last updated:  2014-08-21
Constant-Round Leakage-Resilient Zero-Knowledge Arguments of Knowledge for NP
Hongda Li, Qihua Niu, Guifang Huang
Garg, Jain, and Sahai were the first to consider zero-knowledge proofs in the presence of leakage on the local state of the prover, and presented a leakage-resilient zero-knowledge proof system for the HC (Hamiltonian Cycle) problem. Their construction is called $(1+\varepsilon)$-leakage-resilient zero-knowledge, for any constant $\varepsilon>0$, because the total length of the leakage the simulator needs is $(1+\varepsilon)$ times as large as that of the leakage received by the verifier. Recently, Pandey provided a constant-round leakage-resilient zero-knowledge argument satisfying the ideal requirement of $\varepsilon=0$. Whether there exist constant-round leakage-resilient zero-knowledge arguments of knowledge for all NP languages is an interesting problem. This paper focuses on this problem and presents a constant-round construction of leakage-resilient zero-knowledge arguments of knowledge for the HC problem.
Last updated:  2014-08-21
Client-Server Concurrent Zero Knowledge with Constant Rounds and Guaranteed Complexity
Ran Canetti, Abhishek Jain, Omer Paneth
The traditional setting for concurrent zero knowledge considers a server that proves a statement in zero-knowledge to multiple clients in multiple concurrent sessions, where the server's actions in a session are independent of all other sessions. Persiano and Visconti [ICALP 05] show how keeping a limited amount of global state across sessions allows the server to significantly reduce the overall complexity while retaining the ability to interact concurrently with an unbounded number of clients. Specifically, they show a protocol that has only a slightly super-constant number of rounds; however, the communication complexity in each session of their protocol depends on the number of other sessions and has no a priori bound. This has the drawback that the client has no way to know in advance the amount of resources required for completing a session of the protocol up to the moment where the session is completed. We show a protocol that does not have this drawback. Specifically, in our protocol the client obtains a bound on the communication complexity of each session at the start of the session. Additionally, the protocol is constant-round. Our protocol is fully concurrent, and assumes only collision-resistant hash functions. The proof requires considerably different techniques than those of Persiano and Visconti. Our main technical tool is an adaptation of the "committed-simulator" technique of Deng et al. [FOCS 09].
Last updated:  2016-03-11
Verifiable Order Queries and Order Statistics on a List in Zero-Knowledge
Esha Ghosh, Olga Ohrimenko, Roberto Tamassia
Given a list L with n elements, an order query on L asks whether a given element x in L precedes or follows another element y in L. More generally, given a set of m elements from L, an order query asks for the set ordered according to the positions of the elements in L. We introduce two formal models for answering order queries on a list in a verifiable manner and in zero-knowledge. We also present efficient constructions for these models. Our first model, called \emph{zero-knowledge list} (ZKL), generalizes membership queries on a set to order queries on a list in zero-knowledge. We present a construction of ZKL based on zero-knowledge sets and a homomorphic integer commitment scheme. Our second model, \emph{privacy-preserving authenticated list} (PPAL), extends authenticated data structures by adding a zero-knowledge privacy requirement. In this model, a list is outsourced by a trusted owner to an untrusted cloud server, which answers order queries issued by clients. The server also returns a proof of the answer, which is verified by the client using a digest of the list obtained from the owner. PPAL supports the security properties of data integrity against a malicious server and privacy protection against a malicious client. Though PPAL can be implemented using our ZKL construction, this construction is not as efficient as desired in cloud applications. To this end, we present an efficient PPAL construction based on blinded bilinear accumulators and bilinear maps, which is provably secure and zero-knowledge (e.g., hiding even the size of the list). Our PPAL construction uses proofs of $O(m)$ size and allows the client to verify a proof in $O(m)$ time. The owner executes the setup in $O(n)$ time and space. The server uses $O(n)$ space to store the list and related authentication information, and takes $O(\min(m\log n, n))$ time to answer a query and generate a proof.
Both our ZKL and PPAL constructions have one round of communication and are secure in the random oracle model. Finally, we show that our ZKL and PPAL frameworks can be extended to support fundamental statistical queries (including maximum, minimum, median, threshold and top-t elements) efficiently and in zero-knowledge.
Last updated:  2018-01-03
Zipf’s Law in Passwords
Ding Wang, Gaopeng Jian, Xinyi Huang, Ping Wang
Despite more than thirty years of research efforts, textual passwords are still enveloped in mysterious veils. In this work, we make a substantial step forward in understanding the distributions of passwords and measuring the strength of password datasets by using a statistical approach. We first show that Zipf's law holds in real-life passwords by conducting linear regressions on a corpus of 56 million passwords. As one specific application of this observation, we propose the number of unique passwords used in the regression and the slope of the regression line together as a metric for assessing the strength of password datasets, and prove it in a mathematically rigorous manner. Furthermore, extensive experiments (including optimal attacks, simulated optimal attacks and state-of-the-art cracking sessions) are performed to demonstrate the practical effectiveness of our metric. To the best of our knowledge, our new metric is the first one that is both easy to approximate and accurate enough to facilitate comparisons, providing a useful tool for system administrators to gain a precise grasp of the strength of their password datasets and to adjust their password policies more reasonably.
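The regression at the heart of this approach can be sketched as follows (our illustration on synthetic data, not the paper's corpus): under Zipf's law the frequency of the $r$-th most popular password is $f_r \approx C/r^s$, so $\log f_r$ is linear in $\log r$ and ordinary least squares recovers the slope $-s$:

```python
# Log-log least-squares fit behind a Zipf analysis: model f_r = C / r^s,
# i.e. log f_r = log C - s * log r, and regress log f against log rank.
import math

def zipf_fit(frequencies):
    """Fit log f against log rank; return (slope, intercept) = (-s, log C)."""
    xs = [math.log(r) for r in range(1, len(frequencies) + 1)]
    ys = [math.log(f) for f in frequencies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic rank-frequency data drawn exactly from a Zipf law with s = 0.9:
freqs = [1000.0 / r ** 0.9 for r in range(1, 101)]
slope, intercept = zipf_fit(freqs)
```

On real password counts the fit is of course only approximate, and the paper's metric combines the fitted slope with the number of distinct passwords used in the regression.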
Last updated:  2014-11-25
Privacy-Preserving Minimum Spanning Trees through Oblivious Parallel RAM for Secure Multiparty Computation
Peeter Laud
In this paper, we describe efficient protocols to perform in parallel many reads and writes in private arrays according to private indices. The protocol is implemented on top of the Arithmetic Black Box (ABB) and can be freely composed to build larger privacy-preserving applications. For a large class of secure multiparty computation (SMC) protocols, we believe our technique has better practical and asymptotic performance than any previous ORAM technique that has been adapted for use in SMC. Our ORAM technique opens up a large class of parallel algorithms for adaptation to SMC platforms. In this paper, we demonstrate how the minimum spanning tree (MST) algorithm by Awerbuch and Shiloach can be executed without revealing any details about the underlying graph (besides its size). The data accesses of this algorithm depend heavily on the location and weight of edges (which are private), and our ORAM technique is instrumental in their execution. Our implementation is probably the first-ever realization of a privacy-preserving MST algorithm.