All papers in 2016 (Page 12 of 1195 results)

Last updated:  2016-02-05
Obfuscation without Multilinear Maps
Dingfeng Ye, Peng Liu
Known methods for obfuscating a circuit represent the circuit as a branching program and then use a multilinear map to encrypt that branching program. Multilinear maps are, however, too inefficient for this purpose. We found a dynamic encoding method which effectively singles out different inputs in the context of the matrix randomization technique of Kilian and Gentry et al., so that multilinear maps are no longer needed. To make the method work, we need the branching programs to be regular. For such branching programs, we also give the most efficient constructions for NC1 circuits. This results in a much more efficient core obfuscator for NC1 circuits.
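The abstract builds on evaluating branching programs as matrix products and on Kilian's matrix randomization. As a hedged sketch (not the paper's dynamic encoding), here is a toy width-2 matrix branching program for XOR, re-randomized Kilian-style with random invertible matrices over a small illustrative prime:

```python
# Toy matrix branching program (BP) evaluation plus Kilian-style
# re-randomization, sketching the technique the abstract refers to.
# The BP computes f(x) = x0 XOR x1; the prime and program are illustrative.
import random

P = 1009  # small illustrative prime modulus
I2 = [[1, 0], [0, 1]]
SW = [[0, 1], [1, 0]]  # swap permutation matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

def inv2(M):
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % P
    d = pow(det, P - 2, P)  # Fermat inverse of the determinant
    return [[M[1][1] * d % P, (-M[0][1] * d) % P],
            [(-M[1][0] * d) % P, M[0][0] * d % P]]

def rand_invertible():
    while True:
        M = [[random.randrange(P) for _ in range(2)] for _ in range(2)]
        if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % P:
            return M

# Each step reads one input bit and selects one of two matrices.
bp = [(0, I2, SW), (1, I2, SW)]  # product is SW^(x0+x1): identity iff x0 == x1

def evaluate(prog, x):
    acc = I2
    for var, M0, M1 in prog:
        acc = matmul(acc, M1 if x[var] else M0)
    return 0 if acc == I2 else 1

# Kilian randomization: conjugate step i by R_i^{-1} ... R_{i+1}, with the
# end matrices fixed to the identity so the full product is unchanged.
def kilian(prog):
    R = [I2] + [rand_invertible() for _ in range(len(prog) - 1)] + [I2]
    return [(v, matmul(matmul(inv2(R[i]), M0), R[i + 1]),
                matmul(matmul(inv2(R[i]), M1), R[i + 1]))
            for i, (v, M0, M1) in enumerate(prog)]
```

The randomized program computes the same function because the random factors telescope away in the product.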
Last updated:  2016-05-02
Tightly CCA-Secure Encryption without Pairings
Romain Gay, Dennis Hofheinz, Eike Kiltz, Hoeteck Wee
We present the first CCA-secure public-key encryption scheme based on DDH where the security loss is independent of the number of challenge ciphertexts and the number of decryption queries. Our construction extends also to the standard k-Lin assumption in pairing-free groups, whereas all prior constructions starting with Hofheinz and Jager (Crypto ’12) rely on the use of pairings. Moreover, our construction improves upon the concrete efficiency of existing schemes, reducing the ciphertext overhead by about half (to only 3 group elements under DDH), in addition to eliminating the use of pairings. We also show how to use our techniques in the NIZK setting. Specifically, we construct the first tightly simulation-sound designated-verifier NIZK for linear languages without pairings. Using pairings, we can turn our construction into a highly optimized publicly verifiable NIZK with tight simulation-soundness.
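For context only, here is a minimal sketch (with toy parameters) of plain ElGamal, the classical DDH-based encryption that work in this line starts from. It is only CPA-secure; the paper's tightly CCA-secure construction is considerably more involved and is not reproduced here.

```python
# Minimal ElGamal over a prime-order subgroup of Z_p^*: the classical
# DDH-based baseline (CPA-secure only; NOT the paper's CCA-secure scheme).
# Toy parameters: p = 2q + 1 with p = 467, q = 233, both prime.
import random

p, q = 467, 233
g = 4  # 4 = 2^2 generates the order-q subgroup of squares mod 467

def keygen():
    x = random.randrange(1, q)          # secret key
    return x, pow(g, x, p)              # (sk, pk = g^x)

def encrypt(pk, m):                     # m must be a subgroup element
    r = random.randrange(1, q)
    return pow(g, r, p), m * pow(pk, r, p) % p

def decrypt(sk, c):
    c1, c2 = c
    return c2 * pow(c1, q - sk, p) % p  # c2 * c1^{-x}, since c1^q = 1
```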
Last updated:  2016-02-19
Valiant's Universal Circuit is Practical
Ágnes Kiss, Thomas Schneider
Universal circuits (UCs) can be programmed to evaluate any circuit of a given size $k$. They provide elegant solutions in various application scenarios, e.g., for private function evaluation (PFE) and for improving the flexibility of attribute-based encryption (ABE) schemes. The optimal size of a universal circuit is proven to be $\Omega(k\log k)$. Valiant (STOC'76) proposed a size-optimized UC construction, which has never been put into practice since. The only implementation of universal circuits was provided by Kolesnikov and Schneider (FC'08), with size $\mathcal{O}(k\log^2 k)$. In this paper, we refine the size of Valiant's UC and further improve the construction by (at least) $2k$. We show that, due to recent optimizations and our improvements, it is the best solution for circuits with a constant number of inputs and outputs. When the number of inputs or outputs is linear in the number of gates, we propose a more efficient hybrid solution based on the two existing constructions. We validate the practicality of Valiant's UC by giving an example implementation for PFE using these size-optimized UCs.
Last updated:  2016-02-03
Cryptanalysis of the Full Spritz Stream Cipher
Subhadeep Banik, Takanori Isobe
Spritz is a stream cipher proposed by Rivest and Schuldt at the rump session of CRYPTO 2014. It is intended to be a replacement of the popular RC4 stream cipher. In this paper we propose distinguishing attacks on the full Spritz, based on {\it a short-term bias} in the first two bytes of a keystream and {\it a long-term bias} in the first two bytes of every cycle of $N$ keystream bytes, where $N$ is the size of the internal permutation. Our attacks are able to distinguish a keystream of the {\it full} Spritz from a random sequence with samples of first two bytes produced by $2^{44.8}$ multiple key-IV pairs or $2^{60.8}$ keystream bytes produced by a single key-IV pair. These biases are also useful in the event of plaintext recovery in a broadcast attack. In the second part of the paper, we look at a state recovery attack on Spritz, in a special situation when the cipher enters a class of weak states. We determine the probability of encountering such a state, and demonstrate a state recovery algorithm that betters the $2^{1400}$ step algorithm of Ankele et al. at Latincrypt 2015.
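Keystream-bias distinguishers like the ones in this abstract can be illustrated empirically. The sketch below uses RC4 (the cipher Spritz is meant to replace), not Spritz itself: RC4's second output byte equals 0 with probability roughly 2/256 (the well-known Mantin-Shamir bias) instead of the uniform 1/256, which a simple frequency count over random keys detects.

```python
# Empirically detecting a keystream bias, in the spirit of the abstract's
# distinguisher. This sketch uses RC4, NOT Spritz: RC4's second output byte
# is 0 with probability ~2/256 (Mantin-Shamir) instead of 1/256.
import random

def rc4_second_byte(key):
    S = list(range(256))
    j = 0
    for i in range(256):                        # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = 0
    for _ in range(2):                          # two PRGA output steps
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
    return out                                  # keystream byte Z_2

random.seed(1)
trials = 20000
hits = sum(rc4_second_byte(bytes(random.randrange(256) for _ in range(16))) == 0
           for _ in range(trials))
bias = hits / trials  # noticeably above 1/256, close to 2/256
```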
Last updated:  2016-06-02
On the Security of the Algebraic Eraser Tag Authentication Protocol
Simon R. Blackburn, M. J. B. Robshaw
The Algebraic Eraser has been gaining prominence as SecureRF, the company commercializing the algorithm, increases its marketing reach. The scheme is claimed to be well-suited to IoT applications, but a lack of detail in available documentation has hampered peer review. Recently more details of the system have emerged after a tag authentication protocol built using the Algebraic Eraser was proposed for standardization in ISO/IEC SC31 and SecureRF provided an open public description of the protocol. In this paper we describe a range of attacks on this protocol that include very efficient and practical tag impersonation as well as partial, and total, tag secret key recovery. Most of these results have been practically verified; they contrast with the 80-bit security that is claimed for the protocol, and they emphasize the importance of independent public review for any cryptographic proposal.
Last updated:  2016-02-02
Spectral characterization of iterating lossy mappings
Joan Daemen
In this paper we study what happens to sets when we iteratively apply lossy (round) mappings to them. We describe the information loss as imbalances of parities of intermediate distributions and show that their evolution is governed by the correlation matrices of the mappings. At the macroscopic level we show that iterating lossy mappings results in an increase of a quantity we call "total imbalance". We quantify the increase in total imbalance as a function of the number of iterations and of round mapping characteristics. At the microscopic level we show that the imbalance of a parity located in some round, dubbed "final", is the sum of distinct terms. Each of these terms consists of the imbalance of a parity located at the output of a round, multiplied by the sum of the correlation contributions of all linear trails between that parity and the final parity. We illustrate our theory with experimental data. The developed theory can be applied whenever lossy mappings are repeatedly applied to a state. This is the case in many modes of block ciphers and permutations for, e.g., iterated hashing or self-synchronizing stream encryption. The main reason we developed it, however, is to apply it to the study of the security implications of using non-uniform threshold schemes as a countermeasure against differential power and electromagnetic analysis.
Last updated:  2016-04-13
On the Hardness of LWE with Binary Error: Revisiting the Hybrid Lattice-Reduction and Meet-in-the-Middle Attack
Johannes Buchmann, Florian Göpfert, Rachel Player, Thomas Wunderer
The security of many cryptographic schemes has been based on special instances of the Learning with Errors (LWE) problem, e.g., Ring-LWE, LWE with binary secret, or LWE with ternary error. However, recent results show that some subclasses are weaker than expected. In this work we show that LWE with binary error, introduced by Micciancio and Peikert, is one such subclass. We achieve this by applying the Howgrave-Graham attack on NTRU, which is a combination of lattice techniques and a Meet-in-the-Middle approach, to this setting. We show that the attack outperforms all other currently existing algorithms for several natural parameter sets. For instance, for the parameter set n = 256, m = 512, q = 256, this attack on LWE with binary error only requires 2^85 operations, while the previously best attack requires 2^117 operations. We additionally present a complete and improved analysis of the attack, using analytic techniques. Finally, based on the attack, we give concrete hardness estimates that can be used to select secure parameters for schemes based on LWE with binary error.
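The problem the abstract studies is easy to instantiate. Below is a toy generator for an LWE-with-binary-error instance, following the abstract's parameter names (n, m, q) at tiny illustrative sizes; the hybrid attack itself is not reproduced.

```python
# Toy generator for an LWE-with-binary-error instance: b = A*s + e mod q
# with e uniform in {0,1}^m. Parameter names follow the abstract (n, m, q);
# the values below are far too small to be secure and purely illustrative.
import random

def lwe_binary_error_instance(n=8, m=16, q=257):
    s = [random.randrange(q) for _ in range(n)]                   # secret
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randrange(2) for _ in range(m)]                   # binary error
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return A, b, s, e

A, b, s, e = lwe_binary_error_instance()
```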
Last updated:  2016-12-08
On Linear Hulls and Trails
Tomer Ashur, Vincent Rijmen
This paper improves the understanding of linear cryptanalysis by highlighting some previously overlooked aspects. It shows that linear hulls are sometimes formed already in a single round, and that overlooking such hulls may lead to a wrong estimation of the linear correlation, and thus of the data complexity. It shows how correlation matrices can be used to avoid this, and provides a tutorial on how to use them properly. By separating the input and output masks from the key mask it refines the formulas for computing the expected correlation and the expected linear potential. Finally, it shows that when the correlation of a hull is not properly estimated (e.g., by using the correlation of a single trail as the correlation of the hull), the success probability of Matsui's Algorithm 1 drops, sometimes drastically. It also shows that when the trails composing the hull are properly accounted for, more than a single key bit can be recovered using Algorithm 1. All the ideas presented in this paper are followed by examples comparing previous methods to the corrected ones, and verified experimentally with reduced-round versions of Simon32/64.
Last updated:  2016-05-10
Safely Exporting Keys from Secure Channels: On the Security of EAP-TLS and TLS Key Exporters
Chris Brzuska, Håkon Jacobsen, Douglas Stebila
We investigate how to safely export additional cryptographic keys from secure channel protocols, modeled with the authenticated and confidential channel establishment (ACCE) security notion. For example, the EAP-TLS protocol uses the Transport Layer Security (TLS) handshake to output an additional shared secret which can be used for purposes outside of TLS, and the RFC 5705 standard specifies a general mechanism for exporting keying material from TLS. We show that, for a class of ACCE protocols we call “TLS-like” protocols, the EAP-TLS transformation can be used to export an additional key, and that the result is a secure AKE protocol in the Bellare–Rogaway model. Interestingly, we are able to carry out the proof without looking at the specifics of the TLS protocol itself (beyond the notion that it is “TLS-like”), but rather are able to use the ACCE property in a semi black-box way. To facilitate our modular proof, we develop a novel technique, notably an encryption-based key checking mechanism that is used by the security reduction. Our results imply that EAP-TLS using secure TLS 1.2 cipher-suites is a secure authenticated key exchange protocol.
Last updated:  2017-02-21
Intel SGX Explained
Victor Costan, Srinivas Devadas
Intel's Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and privacy guarantees to security-sensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc.) is potentially malicious. This paper analyzes Intel SGX, based on the 3 papers that introduced it, on the Intel Software Developer's Manual (which supersedes the SGX manuals), on an ISCA 2015 tutorial, and on two patents. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information. This paper's contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX's security properties.
Last updated:  2016-01-31
Cryptanalysis of ring-LWE based key exchange with key share reuse
Scott Fluhrer
This paper shows how several ring-LWE based key exchange protocols can be broken, under the assumption that the same key share is used for multiple exchanges. This indicates that, if these key exchange protocols are used, then a fresh key share must be generated for each exchange, and that these key exchange protocols cannot be used as a drop-in replacement for designs which use static Diffie-Hellman key shares.
Last updated:  2016-01-31
Truncated Differential Analysis of Round-Reduced RoadRunneR Block Cipher
Qianqian Yang, Lei Hu, Siwei Sun, Ling Song
RoadRunneR is a small and fast bitslice lightweight block cipher for low-cost 8-bit processors, proposed by Adnan Baysal and Şahap Şahin at the LightSec 2015 conference. While most software-efficient lightweight block ciphers lack a security proof, RoadRunneR's security is provable against differential and linear attacks. RoadRunneR is a Feistel-structure block cipher with a 64-bit block size. RoadRunneR-80 is a version with an 80-bit key and 10 rounds, and RoadRunneR-128 is a version with a 128-bit key and 12 rounds. In this paper, we obtain 5-round truncated differentials of RoadRunneR-80 and RoadRunneR-128 with probability 2^{−56}. Using these truncated differentials, we give a truncated differential attack on 7-round RoadRunneR-128 without whitening keys, with data complexity of 2^{55} chosen plaintexts, time complexity of 2^{121} encryptions, and memory complexity of 2^{68}. This is the first known attack on the RoadRunneR block cipher.
Last updated:  2016-03-14
NSEC5 from Elliptic Curves: Provably Preventing DNSSEC Zone Enumeration with Shorter Responses
Sharon Goldberg, Moni Naor, Dimitrios Papadopoulos, Leonid Reyzin
While DNSSEC securely provides authenticity and integrity to the domain name system (DNS), it also creates a new security vulnerability called zone enumeration that allows an adversary that asks a small number of targeted DNS queries to learn the IP addresses of all domain names in a zone. An enumerated zone can be used as ''a source of probable e-mail addresses for spam, or as a key for multiple WHOIS queries to reveal registrant data that many registries may have legal obligations to protect'' [RFC 5155] (e.g., per EU data protection laws), or to create a toehold for more complex attacks. As the Internet of things becomes increasingly ubiquitous, it also becomes increasingly important to keep the names and addresses of these ''things'' (e.g., thermostats, fridges, baby monitors) away from remote attackers. In previous work we solved DNSSEC's zone enumeration problem by introducing NSEC5, a cryptographic construction based on RSA digital signatures. NSEC5 provides authenticated denial of existence, i.e., it is used to answer DNS queries that have negative responses (e.g., NXDOMAIN). RSA-based NSEC5 was recently submitted for specification in an Internet draft [draft-vcelak-nsec5-01], and a working implementation of a nameserver that supports RSA-based NSEC5 is also available [https://github.com/dipapado/nsec5-implementation]. However, recent years have seen the DNSSEC community aiming to replace RSA with elliptic curve cryptography (EC), in order to shorten the length of DNSSEC responses. Therefore, in this paper we present a new variant of NSEC5 that uses elliptic curve cryptography (ECC) to produce shorter NSEC5 responses. If a zone is signed with ECDSA at the 128-bit security level and also uses our new ECC-based NSEC5 scheme, its denial-of-existence responses (response code NXDOMAIN) will be about half as long as those of a zone signed with 2048-bit RSA and RSA-based NSEC5.
Moreover, our ECC-based NSEC5 has response lengths comparable to NSEC3, DNSSEC's current authenticated-denial-of-existence mechanism, which is vulnerable to zone enumeration via offline dictionary attacks. In fact, if a zone signed with ECDSA at the 128-bit security level also uses our new ECC-based NSEC5 scheme, its responses will be shorter than those of a zone using NSEC3 with 1024-bit RSA and SHA1 (for an 80-bit security level), which is today's dominant deployment configuration.
Last updated:  2016-01-29
Non-Interactive Plaintext (In-)Equality Proofs and Group Signatures with Verifiable Controllable Linkability
Olivier Blazy, David Derler, Daniel Slamanig, Raphael Spreitzer
Group signatures are an important privacy-enhancing tool that allows messages to be signed anonymously on behalf of a group. A recent feature for group signatures is controllable linkability, where a dedicated linking authority (LA) can determine whether two given signatures stem from the same signer without being able to identify the signer(s). Currently the linking authority is fully trusted, which is often not desirable. In this paper, we first introduce a generic technique for non-interactive zero-knowledge plaintext equality and inequality proofs. In our setting, the prover is given two ciphertexts and some trapdoor information, but has access to neither the decryption key nor the randomness used to produce the respective ciphertexts. Thus, the prover performs these proofs on unknown plaintexts. Besides a generic technique, we also propose an efficient instantiation that adapts recent results from Blazy et al. (CT-RSA'15), and in particular a combination of Groth-Sahai (GS) proofs (or sigma proofs) and smooth projective hash functions (SPHFs). While this result may be of independent interest, we use it to realize verifiable controllable linkability for group signatures. Here, the LA is required to non-interactively prove whether or not two signatures link (while it is not able to identify the signers). This significantly reduces the required trust in the linking authority. Moreover, we extend the model of group signatures to cover the feature of verifiable controllable linkability.
Last updated:  2017-01-31
A Cryptographic Analysis of the TLS 1.3 draft-10 Full and Pre-shared Key Handshake Protocol
Benjamin Dowling, Marc Fischlin, Felix Günther, Douglas Stebila
We analyze the handshake protocol of TLS 1.3 draft-ietf-tls-tls13-10 (published October 2015). This continues and extends our previous analysis (CCS 2015, Cryptology ePrint Archive 2015) of former TLS 1.3 drafts (draft-ietf-tls-tls13-05 and draft-ietf-tls-tls13-dh-based). Here we show that the full (EC)DHE Diffie-Hellman-based handshake of draft-10 is also secure in the multi-stage key exchange framework of Fischlin and Günther which captures classical Bellare-Rogaway key secrecy for key exchange protocols that derive multiple keys. We also note that a recent protocol change---the introduction of a NewSessionTicket message for resumption, encrypted under the application traffic key---impairs the protocol modularity and hence our compositional guarantees that ideally would allow an independent analysis of the record protocol. We additionally analyze the pre-shared key modes (with and without ephemeral Diffie-Hellman key), and fit them into the composability framework, addressing composability with the input resumption secret from a previous handshake and of the output session keys.
Last updated:  2016-08-20
Cryptanalysis of PRINCE with Minimal Data
Shahram Rasoolzadeh, Håvard Raddum
We investigate two attacks on the PRINCE block cipher in the most realistic scenario, when the attacker only has a minimal amount of known plaintext available. The first attack is called Accelerated Exhaustive Search, and is able to recover the key for up to the full 12-round PRINCE with a complexity slightly lower than the security claim given by the designers. The second attack is a meet-in-the-middle attack, where we show how to successfully attack 8- and 10-round PRINCE with only two known plaintext/ciphertext pairs. Both attacks take advantage of the fact that the two middle rounds in PRINCE are unkeyed, so guessing the state before the first middle round gives the state after the second round practically for free. These attacks are the fastest known in the known-plaintext scenario for the 8- and 10-round reduced versions and for the full 12-round PRINCE.
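The generic meet-in-the-middle idea behind the second attack can be sketched on a toy target: a double "encryption" under two 8-bit keys, recovered from two known plaintext/ciphertext pairs. The round function below is a made-up invertible byte map, not PRINCE; only the time-memory tradeoff structure is illustrated.

```python
# Generic meet-in-the-middle (MITM) key recovery on a toy double encryption
# E_{k2}(E_{k1}(p)) with 8-bit keys and two known plaintext/ciphertext pairs
# (mirroring the abstract's data requirement). The cipher is a made-up
# invertible byte map, NOT PRINCE.
def E(k, x):
    return (((x ^ k) * 5) + k) % 256      # invertible since gcd(5, 256) = 1

def D(k, y):
    return (((y - k) * 205) % 256) ^ k    # 205 = 5^{-1} mod 256

k1, k2 = 0x3A, 0xC5                        # "unknown" keys to recover
pairs = [(17, E(k2, E(k1, 17))), (200, E(k2, E(k1, 200)))]
(p0, c0), (p1, c1) = pairs

# Forward table: middle value after E_{k1} on the first plaintext.
table = {}
for a in range(256):
    table.setdefault(E(a, p0), []).append(a)

# Meet in the middle on pair 0, then filter surviving candidates with pair 1.
cands = [(a, b) for b in range(256) for a in table.get(D(b, c0), [])
         if E(b, E(a, p1)) == c1]
```

The table costs 2^8 memory and the match costs 2^8 time, instead of the 2^16 of naive exhaustive search over both keys.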
Last updated:  2016-01-28
Protect both Integrity and Confidentiality in Outsourcing Collaborative Filtering Computations
Qiang Tang, Balazs Pejo, Husen Wang
In the cloud computing era, to avoid the computational burden, many recommendation service providers tend to outsource their collaborative filtering computations to third-party cloud servers. To protect service quality and privacy for end users, both the integrity of computation results and the confidentiality of the original dataset need to be guaranteed. In this paper, we analyze two integrity verification approaches by Vaidya et al. and evaluate their performance. In particular, we analyze the verification-via-auxiliary-data approach, which is only briefly mentioned in the original paper, and report experimental results (with better performance). We then propose a new solution to outsource all computations of the weighted Slope One algorithm in the multi-server setting and provide experimental results. We finally discuss the possibility of using homomorphic encryption to achieve both integrity and confidentiality guarantees.
Last updated:  2016-01-28
Non-Interactive Verifiable Secret Sharing For Monotone Circuits
Ge Bai, Ivan Damgård, Claudio Orlandi, Yu Xia
We propose a computationally secure and non-interactive verifiable secret sharing scheme that can be efficiently constructed from any monotone Boolean circuit. By non-interactive we mean that the dealer needs to be active only once, where he posts a public message as well as a private message to each shareholder. In the random oracle model, we can even avoid interaction between shareholders. By efficient, we mean that we avoid generic zero-knowledge techniques. Such efficient constructions were previously only known from linear secret sharing schemes (LSSS). It is believed that the class of access structures that can be handled with polynomial size LSSS is incomparable to the class that can be recognized by polynomial size monotone circuits, so in this sense we extend the class of access structures with efficient and non-interactive VSS.
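For contrast with the abstract's monotone-circuit VSS, here is a minimal sketch of the linear secret sharing (LSSS) baseline it mentions: (t, n) Shamir sharing over a small prime field. This sketch is not verifiable and is not the paper's scheme; the field size is an illustrative choice.

```python
# Minimal (t, n) Shamir secret sharing over GF(p): an instance of the LSSS
# baseline the abstract contrasts with. NOT verifiable, purely illustrative.
import random

p = 2087  # small prime field modulus (illustrative)

def share(secret, t, n):
    # Random degree-(t-1) polynomial with f(0) = secret; shares are (x, f(x)).
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 from any t shares.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total += yi * num * pow(den, p - 2, p)
    return total % p
```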
Last updated:  2016-08-13
Improved Multi-Dimensional Meet-in-the-Middle Cryptanalysis of KATAN
Shahram Rasoolzadeh, Håvard Raddum
We study multidimensional meet-in-the-middle attacks on the KATAN block cipher family. Several improvements to the basic attacks are introduced and explained. The most noteworthy of these is the technique of guessing only non-linearly involved key bits, which reduces the search space by a significant factor. The optimizations decrease the complexity of multidimensional meet-in-the-middle attacks, allowing more rounds of KATAN to be efficiently attacked than previously reported.
Last updated:  2016-01-28
New Efficient and Flexible Algorithms for Secure Outsourcing of Bilinear Pairings
Xi-Jun Lin, Haipeng Qu, Xiaoshuai Zhang
The outsourcing paradigm has become a hot research topic in the cryptography community: computation workloads can be outsourced to cloud servers by resource-constrained devices, such as RFID tags. The computation of bilinear pairings is the most expensive operation in pairing-based cryptographic primitives. In this paper, we present two new algorithms for securely outsourcing the computation of bilinear pairings. One is secure in the OMTUP model; the other, which provides flexible checkability, is secure in the TUP model. Compared with the state-of-the-art algorithms, our proposals are more efficient.
Last updated:  2016-01-28
Weaknesses in Hadamard Based Symmetric Key Encryption Schemes
Gajraj Kuldeep, Devendra Kumar Yadav, A. K. Sharma
In this paper, security aspects of existing symmetric key encryption schemes based on Hadamard matrices are examined. Hadamard matrices themselves have symmetries, such as a one-circulant or two-circulant core. Here, we exploit these inherent symmetries of Hadamard matrices to attack the encryption schemes. It is found that the entire key can be obtained by observing the ciphertext.
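To recall the object being exploited, here is a sketch that builds a Hadamard matrix by Sylvester's doubling construction and checks the defining property H·Hᵀ = n·I. Note the attacked schemes use circulant-core Hadamard matrices, a different construction not built here.

```python
# Building a Hadamard matrix via Sylvester's doubling construction and
# checking the defining property H * H^T = n * I. (The attacked schemes use
# circulant-core Hadamard matrices, a different construction not built here.)
def sylvester(n):                     # n must be a power of two
    H = [[1]]
    while len(H) < n:
        # H_{2m} = [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = sylvester(8)
gram = [[sum(H[i][k] * H[j][k] for k in range(8)) for j in range(8)]
        for i in range(8)]            # should equal 8 * identity
```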
Last updated:  2019-01-27
On the Power of Secure Two-Party Computation
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
Ishai, Kushilevitz, Ostrovsky and Sahai (STOC 2007, SIAM JoC 2009) introduced the powerful ``MPC-in-the-head'' technique that provided a general transformation of information-theoretic MPC protocols secure against passive adversaries to a ZK proof in a ``black-box'' way. In this work, we extend this technique and provide a generic transformation of any semi-honest secure two-party computation (2PC) protocol (with mild adaptive security guarantees) in the so called oblivious-transfer hybrid model to an adaptive ZK proof for any NP language, in a ``black-box'' way assuming only one-way functions. Our basic construction based on Goldreich-Micali-Wigderson's 2PC protocol yields an adaptive ZK proof with communication complexity quadratic in the size of the circuit implementing the NP relation. Previously such proofs relied on an expensive Karp reduction of the NP language to Graph Hamiltonicity (Lindell and Zarosim (TCC 2009, Journal of Cryptology 2011)). As an application of our techniques, we show how to obtain a ZK proof with an ``input-delayed'' property for any NP language without relying on expensive Karp reductions that is black-box in the underlying one-way function. Namely, the input-delayed property allows the honest prover's algorithm to receive the actual statement to be proved only in the final round. We further generalize this to obtain a ``commit and prove'' protocol with the same property where the prover commits to a witness w in the second message and proves a statement x regarding the witness w in zero-knowledge where the statement is determined only in the last round. This improves on a previous construction of Lapidot and Shamir (Crypto 1990) that was designed specifically for the Graph Hamiltonicity problem and relied on the underlying primitives in a non-black-box way.
Additionally, we provide a general transformation to construct a randomized encoding of a function f from any 2PC protocol that securely computes a related functionality (in a black-box way) from one-way functions. We show that if the 2PC protocol has mild adaptive security guarantees (which are satisfied by both the Yao's and GMW's protocol) then the resulting randomized encoding (RE) can be decomposed to an offline/online encoding.
Last updated:  2016-01-27
MU-ORAM: Dealing with Stealthy Privacy Attacks in Multi-User Data Outsourcing Services
Jinsheng Zhang, Wensheng Zhang, Daji Qiao
Outsourcing data to remote storage servers has become more and more popular, but the related security and privacy concerns have also been raised. To protect the pattern in which a user accesses the outsourced data, various oblivious RAM (ORAM) constructions have been designed. However, when existing ORAM designs are extended to support multi-user scenarios, they become vulnerable to stealthy privacy attacks targeted at revealing the data access patterns of innocent users, even if only one curious or compromised user colludes with the storage server. To study the feasibility and costs of overcoming the above limitation, this paper proposes a new ORAM construction called Multi-User ORAM (MU-ORAM), which is resilient to stealthy privacy attacks. The key ideas in the design are (i) introduce a chain of proxies to act as a common interface between users and the storage server, (ii) distribute the shares of the system secrets delicately to the proxies and users, and (iii) enable a user and/or the proxies to collaboratively query and shuffle data. Through extensive security analysis, we quantify the strength of MU-ORAM in protecting the data access patterns of innocent users from attacks, under the assumption that the server, users, and some but not all proxies can be curious but honest, compromised and colluding. Cost analysis has been conducted to quantify the extra overhead incurred by the MU-ORAM design.
Last updated:  2016-04-20
Downgrade Resilience in Key-Exchange Protocols
Karthikeyan Bhargavan, Chris Brzuska, Cédric Fournet, Matthew Green, Markulf Kohlweiss, Santiago Zanella-Béguelin
Key-exchange protocols such as TLS, SSH, IPsec, and ZRTP are highly configurable, with typical deployments supporting multiple protocol versions, cryptographic algorithms and parameters. In the first messages of the protocol, the peers negotiate one specific combination, the protocol mode, based on their local configurations. With few notable exceptions, most cryptographic analyses of configurable protocols consider a single mode at a time. In contrast, downgrade attacks, where a network adversary forces peers to use a mode weaker than the one they would normally negotiate, are a recurrent problem in practice. How can configurability be supported while at the same time guaranteeing that the preferred mode is negotiated? We set out to answer this question by designing a formal framework to study downgrade resilience and its relation to other security properties of key-exchange protocols. First, we study the causes of downgrade attacks by dissecting and classifying known and novel attacks against widely used protocols. Second, we survey what is known about the downgrade resilience of existing standards. Third, we combine these findings to define downgrade security, and analyze the conditions under which several protocols achieve it. Finally, we discuss patterns that guarantee downgrade security by design, and explain how to use them to strengthen the security of existing protocols, including a newly proposed draft of TLS 1.3.
Last updated:  2016-02-18
Reverse-Engineering the S-Box of Streebog, Kuznyechik and STRIBOBr1 (Full Version)
Alex Biryukov, Léo Perrin, Aleksei Udovenko
The Russian Federation's standardization agency has recently published a hash function called Streebog and a 128-bit block cipher called Kuznyechik. Both of these algorithms use the same 8-bit S-Box but its design rationale was never made public. In this paper, we reverse-engineer this S-Box and reveal its hidden structure. It is based on a sort of 2-round Feistel Network where exclusive-or is replaced by a finite field multiplication. This structure is hidden by two different linear layers applied before and after. In total, five different 4-bit S-Boxes, a multiplexer, two 8-bit linear permutations and two finite field multiplications in a field of size $2^{4}$ are needed to compute the S-Box. The knowledge of this decomposition allows a much more efficient hardware implementation by dividing the area and the delay by 2.5 and 8 respectively. However, the small 4-bit S-Boxes do not have very good cryptographic properties. In fact, one of them has a probability 1 differential. We then generalize the method we used to partially recover the linear layers used to whiten the core of this S-Box and illustrate it with a generic decomposition attack against 4-round Feistel Networks whitened with unknown linear layers. Our attack exploits a particular pattern arising in the Linear Approximations Table of such functions.
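The field arithmetic the decomposition relies on can be sketched directly: carry-less multiplication in GF(2^4). The reduction polynomial x^4 + x + 1 below is a common illustrative choice, not necessarily the representation used in the actual decomposition. Every nonzero element is invertible while zero is not, which is exactly why a multiplexer is needed alongside the multiplications.

```python
# Multiplication in GF(2^4), the operation that replaces XOR inside the
# recovered Feistel-like structure. The reduction polynomial x^4 + x + 1 is
# an illustrative choice, not necessarily the representation used in the
# actual decomposition of the Streebog/Kuznyechik S-Box.
def gf16_mul(a, b):
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0b10011          # reduce modulo x^4 + x + 1
    return r

# Every nonzero element has a multiplicative inverse; zero does not, which
# is why the decomposition also needs a multiplexer to bypass the zero case.
inverses = {a: next(b for b in range(16) if gf16_mul(a, b) == 1)
            for a in range(1, 16)}
```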
Last updated:  2016-02-11
Domain-Specific Pseudonymous Signatures Revisited
Kamil Kluczniak
Domain-Specific Pseudonymous Signature schemes were recently proposed for privacy preserving authentication of digital identity documents by the BSI, German Federal Office for Information Security. The crucial property of domain-specific pseudonymous signatures is that a signer may derive unique pseudonyms within a so called domain. Now, the signer's true identity is hidden behind his domain pseudonyms and these pseudonyms are unlinkable, i.e. it is infeasible to correlate two pseudonyms from distinct domains with the identity of a single signer. In this paper we take a critical look at the security definitions and constructions of domain-specific pseudonymous signatures proposed so far. We review two articles which propose ``sound and clean'' security definitions and point out some issues present in these models. Some of the issues we present may have a strong practical impact on constructions ``provably secure'' in these models. Additionally, we point out some worrisome facts about the proposed schemes and their security analysis.
Last updated:  2016-01-26
Verification Methods for the Computationally Complete Symbolic Attacker Based on Indistinguishability
Gergei Bana, Rohit Chadha
In recent years, a new approach has been developed for verifying security protocols with the aim of combining the benefits of symbolic attackers and the benefits of unconditional soundness: the technique of the computationally complete symbolic attacker of Bana and Comon (BC). In this paper we argue that the real breakthrough of this technique is the recent introduction of its version for indistinguishability because, with the extensions we introduce here, for the first time, there is a computationally sound symbolic technique that is syntactically strikingly simple, to which translating standard computational security notions is a straightforward matter, and that can be effectively used for verification of not only equivalence properties, but trace properties of protocols as well. We first fully develop the core elements of this newer version by introducing several new axioms. We illustrate the power and the diverse use of the introduced axioms on simple examples first. We introduce an axiom expressing the Decisional Diffie-Hellman property. We analyze the Diffie-Hellman key exchange, both in its simplest form and an authenticated version as well. We provide computationally sound verification of real-or-random secrecy of the Diffie-Hellman key exchange protocol for multiple sessions, without any restrictions on the computational implementation other than the DDH assumption. We also show authentication for a simplified version of the station-to-station protocol using UF-CMA assumption for digital signatures. Finally, we axiomatize IND-CPA, IND-CCA1 and IND-CCA2 security properties and illustrate their usage.
Last updated:  2016-03-17
Octonion Algebra and Noise-Free Fully Homomorphic Encryption (FHE) Schemes
Yongge Wang
Brakerski showed that linearly decryptable fully homomorphic encryption (FHE) schemes cannot be secure in the chosen plaintext attack (CPA) model. In this paper, we show that linearly decryptable FHE schemes cannot be secure even in the ciphertext only security model. Then we consider the maximum security that a linearly decryptable FHE scheme could achieve. This paper designs fully homomorphic symmetric key encryption (FHE) schemes without bootstrapping (that is, noise-free FHE schemes). The proposed FHE schemes are based on quaternion/octonion algebra and Jordan algebra over finite rings Z_n and are secure in the weak ciphertext-only security model assuming the hardness of solving multivariate quadratic equation systems and solving univariate high degree polynomial equation systems in Z_n. To the best of our knowledge, this is the first noise-free FHE scheme that has ever been designed with a security proof (even in the weak ciphertext-only security model). It is argued that the weak ciphertext-only security model is sufficient for various applications such as privacy preserving computation in cloud. As an example, the proposed FHE schemes are used to construct obfuscated programs. This example could be further used to show that the scheme presented in this paper could be combined with existing FHE schemes with bootstrapping to obtain more efficient FHE schemes with bootstrapping in the fully CPA model. At the end of the paper, we point out the insecurity of several recently proposed noise-free FHE schemes.
Last updated:  2019-11-01
OPFE: Outsourcing Computation for Private Function Evaluation
Henry Carter, Patrick Traynor
Outsourcing secure multiparty computation (SMC) protocols has allowed resource-constrained devices to take advantage of these developing cryptographic primitives with great efficiency. While the existing constructions for outsourced SMC guarantee input and output privacy, they require that all parties know the function being evaluated. Thus, stronger security guarantees are necessary in applications where the function itself needs to be kept private. We develop the first linear-complexity protocols for outsourcing private function evaluation (PFE), a subset of SMC protocols that provide both input and function privacy. Assuming a semi-honest function holder, we build on the most efficient two-party PFE constructions to develop outsourced protocols that are secure against a semi-honest, covert, or malicious Cloud server and malicious mobile devices providing input to the function. Our protocols require minimal symmetric key operations and only two rounds of communication from the mobile participants. As a secondary contribution, we develop a technique for combining public and private sub-circuits in a single computation called partially-circuit private (PCP) garbling. This novel garbling technique allows us to apply auxiliary circuits to check for malicious behavior using only free-XOR overhead gates rather than the significantly more costly PFE gate construction. These protocols demonstrate the feasibility of outsourced PFE and provide a first step towards developing privacy-preserving applications for use in Cloud computing.
Last updated:  2016-04-27
Linear Hull Attack on Round-Reduced Simeck with Dynamic Key-guessing Techniques
Uncategorized
Lingyue Qin, Huaifeng Chen, Xiaoyun Wang
Show abstract
Uncategorized
Simeck is a new family of lightweight block ciphers proposed by Yang $et\ al.$ in CHES'15, which has efficient hardware implementations. In this paper, we find differentials with low Hamming weight and high probability for Simeck using Kölbl's tool, then we consider the links between differential and linear characteristics to construct linear hulls for Simeck. We give improved linear hull attacks with dynamic key-guessing techniques on Simeck according to the property of the AND operation. Our best results cover Simeck32/64 reduced to 23 rounds, Simeck48/96 reduced to 30 rounds, and Simeck64/128 reduced to 37 rounds. These results are the best known so far for any variant of Simeck.
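For context, the Simeck round structure itself is public from the CHES'15 design: with rotation amounts (0, 5, 1), one round maps (l, r) to (r XOR f(l) XOR k, l), where f(x) = (x AND (x <<< 5)) XOR (x <<< 1). A minimal sketch for 16-bit words (i.e. Simeck32/64), with round keys taken as given and the key schedule omitted:

```python
# Public Simeck round structure (CHES'15): f(x) = (x AND (x <<< 5)) XOR (x <<< 1),
# one round maps (l, r) -> (r XOR f(l) XOR k, l). Word size 16 = Simeck32/64.
# Round keys are taken as given; the key schedule is omitted.

MASK = 0xFFFF  # 16-bit words

def rol(x, n):
    """Rotate a 16-bit word left by n positions."""
    return ((x << n) | (x >> (16 - n))) & MASK

def f(x):
    return (x & rol(x, 5)) ^ rol(x, 1)

def enc_round(l, r, k):
    return r ^ f(l) ^ k, l

def dec_round(l, r, k):
    """Inverse round, useful for sanity-checking the structure."""
    return r, l ^ f(r) ^ k
```

The AND operation inside f is exactly what the dynamic key-guessing technique of the attack exploits: when one input bit of the AND is zero, the other input bit (and the key bits behind it) need not be guessed.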
Last updated:  2016-01-25
A note on Tensor Simple Matrix Encryption Scheme
Yasufumi Hashimoto
The simple matrix encryption scheme (Tao-Diene-Tang-Ding, PQCrypto 2013) has a problem of decryption failures. Quite recently, Petzoldt-Ding-Wang (http://eprint.iacr.org/2016/010) proposed a new version of this scheme called the tensor simple matrix encryption scheme to remove decryption failures by using a tensor product of two small matrices as its secret key. However, it is much weaker than the original scheme. In this note, we show that the tensor simple matrix encryption scheme is equivalent to a weak version of the original simple matrix encryption scheme.
Last updated:  2016-04-25
Unconditionally Secure Revocable Storage: Tight Bounds, Optimal Construction, and Robustness
Yohei Watanabe, Goichiro Hanaoka, Junji Shikata
Data stored in cloud storage sometimes requires long-term security due to its sensitivity (e.g., genome data), and therefore, it also requires flexible access control for handling entities who can use the data. Broadcast encryption can partially provide such flexibility by specifying privileged receivers so that only they can decrypt a ciphertext. However, once privileged receivers are specified, they can no longer be dynamically added and/or removed. In this paper, we propose a new type of broadcast encryption which provides long-term security and appropriate access control, which we call unconditionally secure revocable-storage broadcast encryption (RS-BE). In RS-BE, privileged receivers of a ciphertext can be dynamically updated without revealing any information on the underlying plaintext. Specifically, we define a model and security of RS-BE, derive tight lower bounds on the sizes of secret keys required for secure RS-BE, and propose a construction of RS-BE which meets all of these bounds. Our lower bounds can be applied to traditional broadcast encryption. Furthermore, to detect an improper update, we consider security against modification attacks on a ciphertext, and present a concrete construction secure against this type of attack.
Last updated:  2016-02-22
Analysing and Exploiting the Mantin Biases in RC4
Remi Bricout, Sean Murphy, Kenneth G. Paterson, Thyla van der Merwe
We explore the use of the Mantin biases (Mantin, Eurocrypt 2005) to recover plaintexts from RC4-encrypted traffic. We provide a more fine-grained analysis of these biases than in Mantin's original work. We show that, in fact, the original analysis was incorrect in certain cases: the Mantin biases are sometimes non-existent, and sometimes stronger than originally predicted. We then show how to use these biases in a plaintext recovery attack. Our attack targets two unknown bytes of plaintext that are located close to sequences of known plaintext bytes, a situation that arises in practice when RC4 is used in, for example, TLS. We provide a statistical framework that enables us to make predictions about the performance of this attack and its variants. We then extend the attack using standard dynamic programming techniques to tackle the problem of recovering longer plaintexts, a setting of practical interest in recovering HTTP session cookies and user passwords that are protected by RC4 in TLS. We perform experiments showing that we can successfully recover 16-byte plaintexts with 80% success rate using $2^{31}$ ciphertexts, an improvement over previous attacks.
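For reference, the keystream these biases live in is produced by plain RC4 (key scheduling followed by pseudo-random generation); the Mantin biases concern digraph patterns of the form ABSAB recurring in this keystream slightly more often than in a uniformly random stream. A minimal sketch:

```python
# Minimal RC4: key-scheduling algorithm (KSA) followed by the
# pseudo-random generation algorithm (PRGA). The Mantin biases are
# statistical irregularities in the byte stream this code produces.

def rc4(key, n):
    # KSA: key-dependent permutation of 0..255
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: emit n keystream bytes
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Plaintext recovery in the attack described above amounts to ranking candidate byte pairs by how well the observed ciphertexts match these keystream biases across many encryptions of the same plaintext.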
Last updated:  2016-01-28
Verifiable Dynamic Symmetric Searchable Encryption: Optimality and Forward Security
Raphael Bost, Pierre-Alain Fouque, David Pointcheval
Symmetric Searchable Encryption (SSE) is a very efficient and practical way for data owners to outsource storage of a database to a server while providing privacy guarantees. Such SSE schemes enable clients to encrypt their database while still performing queries for retrieving documents matching some keyword. This functionality is interesting to secure cloud storage, and efficient schemes have been designed in the past. However, security against malicious servers has been overlooked in most previous constructions and these only addressed security against honest-but-curious servers. In this paper, we study and design the first efficient SSE schemes provably secure against malicious servers. First, we give lower bounds on the complexity of such verifiable SSE schemes. Then, we construct generic solutions matching these bounds using efficient verifiable data structures. Finally, we modify an existing SSE scheme that also provides forward secrecy of search queries, and make it provably secure against active adversaries, without increasing the computational complexity of the original scheme.
Last updated:  2016-06-27
Accountable Privacy for Decentralized Anonymous Payments
Uncategorized
Christina Garman, Matthew Green, Ian Miers
Show abstract
Uncategorized
Decentralized ledger-based currencies such as Bitcoin provide a means to construct payment systems without requiring a trusted bank. Removing this trust assumption comes at the significant cost of transaction privacy. A number of academic works have sought to improve the privacy offered by ledger-based currencies using anonymous electronic cash (e-cash) techniques. Unfortunately, this strong degree of privacy creates new regulatory concerns, since the new private transactions cannot be subject to the same controls used to prevent individuals from conducting illegal transactions such as money laundering. We propose an initial approach to addressing this issue by adding privacy preserving policy-enforcement mechanisms that guarantee regulatory compliance, allow selective user tracing, and admit tracing of tainted coins (e.g., ransom payments). To accomplish this new functionality we also provide improved definitions for Zerocash and, of independent interest, an efficient construction for simulation sound zk-SNARKs.
Last updated:  2019-07-29
Automated key setup and recovery from key exposure for power networks
Uncategorized
Amir Herzberg, Yehonatan Kfir
Show abstract
Uncategorized
Power networks are built with inherent redundancy, including in the communication channels. In this work, we show that this redundancy can increase the cyber security resiliency to key exposure. Specifically, we present CrypTop, a novel protocol for the {\em key setup and refresh} problem in power networks. CrypTop uses multipath communication and already-keyed devices to setup keys in other devices and to recover from key exposures. We evaluate CrypTop in realistic power network topologies. Our results show a significant to dramatic improvement compared to previous works, in both recovery from key exposure and ability to setup keys automatically. For example, in the IEEE 300 network, and for a single corruption, CrypTop can recover security in 100\% of the devices compared to 1\% for previous works.
Last updated:  2016-06-14
Secure positioning and quantum non-local correlations
Uncategorized
Muhammad Nadeem
Show abstract
Uncategorized
Recently, the problem of quantum position-verification has been extensively analyzed in the formal notion but all existing ceremonial single-round position-verification schemes are insecure. We call here the quantum position-verification schemes formal if verifiers initiate the scheme at time t = ti and later verify the received outcome at time t = tf while they perform no other local unitary transformations in the time interval ti < t < tf. We propose here a different notion for quantum position-verification where, instead of sending a challenge encoded over flying qubits at time t = ti, one of the verifiers teleports the challenge to the prover, while the prover is required to measure the encoded challenge in a specified basis as well as teleport it to another verifier while being on the same time slice t = (tf - ti)/2. After receiving the outcomes of single qubit measurements as well as Bell state measurements from the prover at time t = tf, the scheme enables verifiers to trace the origin of the received outcome and hence identify dishonest provers with very high probability ρ ≥ 1 - 1/2^n, where n is the number of entangled pairs used. The no-signaling principle assures that any group of dishonest provers, not at the position to be verified, cannot simulate their actions with the prover who is supposed to be at the specified position.
Last updated:  2016-01-25
New Lattice Attacks on DSA Schemes
Dimitrios Poulakis
We prove that a system of linear congruences of a particular form has at most a unique solution below a certain bound which can be computed efficiently. Using this result we develop attacks against the DSA schemes which, under some assumptions, can provide the secret key in the case where one or several signed messages are available.
Last updated:  2018-06-23
On the Architectural Analysis of Arbiter Delay PUF Variants
Uncategorized
Durga Prasad Sahoo, Phuong Ha Nguyen, Rajat Subhra Chakraborty, Debdeep Mukhopadhyay
Show abstract
Uncategorized
The Arbiter Physically Unclonable Function (APUF) is a widely used strong delay PUF design. There are two FPGA variants of this design, namely, Programmable Delay Line APUF (PAPUF) and Double APUF (DAPUF) to mitigate the FPGA platform specific implementation issues. In this paper, we introduce the idea of Architectural Bias to compare the impact of the architecture of these APUF designs on their design bias. The biased challenge-response behavior of a delay PUF implies the non-uniform distributions of 0’s and 1’s in its response, and if this bias is due to the architectural issue of the PUF design, then it is called “Architectural Bias”. Another important source of bias is the implementation issue specific to an implementation platform. According to our study, a good PUF architecture results in PUF instances with a small amount of architectural bias. In this paper, we provide a comparison of APUF, PAPUF, and DAPUF based on their architectural bias values. In addition, we also compare these APUF architectures with respect to fulfilling the Strict Avalanche Criterion (SAC) and robustness against the machine learning (ML) based modeling attack. We validate our theoretical findings with Matlab based simulations, and the results reveal that the classic APUF has the least architectural bias, followed by DAPUF and PAPUF, respectively. We also conclude from the experimental results that the SAC property of DAPUF is better than that of APUF and PAPUF, and PAPUF’s SAC property is significantly poor. However, our analyses indicate that these APUF variants are vulnerable to ML-based modeling attack.
Last updated:  2016-03-02
Blindly Signed Contracts: Anonymous On-Blockchain and Off-Blockchain Bitcoin Transactions
Ethan Heilman, Foteini Baldimtsi, Sharon Goldberg
Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user's Bitcoin transactions can be linked to compromise the user's anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin's blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin $\leftrightarrow$ voucher exchange. Our schemes are practical, secure and anonymous.
Last updated:  2016-02-26
Attacking NTP's Authenticated Broadcast Mode
Aanchal Malhotra, Sharon Goldberg
We identify two attacks on the Network Time Protocol (NTP)'s cryptographically-authenticated broadcast mode. First, we present a replay attack that allows an on-path attacker to indefinitely stick a broadcast client to a specific time. Second, we present a denial-of-service (DoS) attack that allows an off-path attacker to prevent a broadcast client from ever updating its system clock; to do this, the attacker sends the client a single malformed broadcast packet per query interval. Our DoS attack also applies to all other NTP modes that are `ephemeral' or `preemptable' (including manycast, pool, etc). We then use network measurements to give evidence that NTP's broadcast and other ephemeral/preemptable modes are being used in the wild. We conclude by discussing why NTP's current implementation of symmetric-key cryptographic authentication does not provide security in broadcast mode, and make some recommendations to improve the current state of affairs.
Last updated:  2016-11-30
Fully Homomorphic Public-Key Encryption with Two Ciphertexts based on Discrete Logarithm Problem
Uncategorized
Masahiro Yagisawa
Show abstract
Uncategorized
In a previous paper I proposed a fully homomorphic public-key encryption scheme based on the discrete logarithm problem which may be vulnerable to the “m and -m attack”. In this paper I propose an improved fully homomorphic public-key encryption (FHPKE) scheme with a composite number modulus based on the discrete logarithm assumption (DLA) and the computational Diffie–Hellman assumption (CDH) of multivariate polynomials on the octonion ring, which is immune to the “m and -m attack”. The scheme has two ciphertexts corresponding to one plaintext.
Last updated:  2016-01-22
Speed and Area Optimized Parallel Higher-Radix Modular Multipliers
Khalid Javeed, Xiaojun Wang
Modular multiplication is the fundamental and compute-intensive operation in many public-key cryptosystems. This paper presents two modular multipliers with efficient architectures based on Booth encoding, higher-radix, and Montgomery powering ladder approaches. The Montgomery powering ladder technique enables concurrent execution of the main operations in the proposed designs, while higher-radix techniques have been adopted to reduce the iteration count, which formally dictates the cycle count. It is also shown that adopting Booth encoding logic in the designs helps to reduce their area cost with a slight degradation in the maximum achievable frequencies. The proposed designs are implemented in Verilog HDL and synthesized targeting the Virtex-6 FPGA platform using the Xilinx ISE 14.2 design suite. The radix-4 multiplier computes a 256-bit modular multiplication in 0.93 ms, occupies 1.6K slices, at 137.87 MHz in a cycle count of n/2+2, whereas the radix-8 multiplier completes the operation in 0.69 ms, occupies 3.6K slices, and achieves a 123.43 MHz frequency in a cycle count of n/3+4. The implementation results reveal that the proposed designs consume 18% fewer FPGA slices without any significant performance degradation compared to the best contemporary designs.
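The cycle counts of roughly n/2 and n/3 come from scanning two or three multiplier bits per iteration. The sketch below is a plain software model of radix-2^k interleaved modular multiplication under that assumption; it is not the authors' Booth-encoded hardware datapath and shows none of their concurrency, only where the iteration-count saving comes from.

```python
# Software model of radix-2^k interleaved modular multiplication:
# the multiplier a is scanned k bits (one "digit") per iteration,
# so the iteration count drops from n to about n/k. Illustrative only;
# the paper's Booth encoding and ladder-style concurrency are omitted.

def interleaved_modmul(a, b, m, radix_bits=2):
    a %= m
    b %= m
    steps = -(-m.bit_length() // radix_bits)  # ceil(n / k) digits
    acc = 0
    for i in reversed(range(steps)):
        digit = (a >> (i * radix_bits)) & ((1 << radix_bits) - 1)
        # shift accumulator by one digit, add digit * b, reduce mod m
        acc = ((acc << radix_bits) + digit * b) % m
    return acc
```

In hardware the `% m` reduction is the expensive part of each iteration; Booth encoding recodes the digits to cheapen the `digit * b` partial products, at the price the abstract notes in maximum frequency.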
Last updated:  2016-01-22
Fault-Tolerant Aggregate Signatures
Gunnar Hartung, Björn Kaidel, Alexander Koch, Jessica Koch, Andy Rupp
Aggregate signature schemes allow for the creation of a short aggregate of multiple signatures. This feature leads to significant reductions of bandwidth and storage space in sensor networks, secure routing protocols, certificate chains, software authentication, and secure logging mechanisms. Unfortunately, in all prior schemes, adding a single invalid signature to a valid aggregate renders the whole aggregate invalid. Verifying such an invalid aggregate provides no information on the validity of any individual signature. Hence, adding a single faulty signature destroys the proof of integrity and authenticity for a possibly large amount of data. This is largely impractical in a range of scenarios, e.g. secure logging, where a single tampered log entry would render the aggregate signature of all log entries invalid. In this paper, we introduce the notion of fault-tolerant aggregate signature schemes. In such a scheme, the verification algorithm is able to determine the subset of all messages belonging to an aggregate that were signed correctly, provided that the number of aggregated faulty signatures does not exceed a certain bound. We give a generic construction of fault-tolerant aggregate signatures from ordinary aggregate signatures based on cover-free families. A signature in our scheme is a small vector of aggregated signatures of the underlying scheme. Our scheme is bounded, i.e. the number of signatures that can be aggregated into one signature must be fixed in advance. However the length of an aggregate signature is logarithmic in this number. We also present an unbounded construction, where the size of the aggregate signature grows linearly in the number of aggregated messages, but the factor in this linear function can be made arbitrarily small. 
The additional information encoded in our signatures can also be used to speed up verification (compared to ordinary aggregate signatures) in cases where one is only interested in verifying the validity of a single message in an aggregate, a feature beyond fault-tolerance that might be of independent interest. For concreteness, we give an instantiation using a suitable cover-free family.
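The cover-free-family mechanism can be read as group testing: each row of an incidence matrix defines one ordinary aggregate, a row-aggregate verifies iff every signature it covers is valid, and a signature is declared valid iff it appears in at least one verifying row; cover-freeness guarantees each good signature has such a row as long as the number of faulty signatures stays below the bound. A toy decoder under these assumptions (the matrix, validity oracle, and names are illustrative, not the paper's notation):

```python
# Toy group-testing decoder for fault-tolerant aggregation:
# rows of `matrix` are aggregates (subsets of signature indices),
# `row_valid[r]` is the verification result of row r's aggregate.
# A signature is valid iff some verifying aggregate contains it.

def identify_valid(matrix, row_valid):
    cols = len(matrix[0])
    valid = [False] * cols
    for r, row in enumerate(matrix):
        if row_valid[r]:
            for c in range(cols):
                if row[c]:
                    valid[c] = True  # covered by a fully-valid aggregate
    return valid
```

With a d-cover-free matrix, up to d faulty signatures cannot cover all rows containing a given good signature, so every good signature is correctly flagged; this is also why checking a single message can be faster than re-verifying the whole aggregate.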
Last updated:  2016-01-22
Capacity and Data Complexity in Multidimensional Linear Attack
Jialin Huang, Serge Vaudenay, Xuejia Lai, Kaisa Nyberg
Multidimensional linear attacks are among the most powerful variants of linear cryptanalytic techniques today. However, little has been known so far about their key-dependent capacity and data complexity; their values were assumed to be close to the average value for a vast majority of keys. This assumption is not accurate. In this paper, under a reasonable condition, we explicitly formulate the capacity as a Gamma distribution and the data complexity as an Inverse Gamma distribution, in terms of the average linear potential and the dimension. The capacity distribution is experimentally verified on 5-round PRESENT. Regarding complexity, we solve the problem of estimating the average data complexity, which was difficult to estimate because of the existence of zero correlations. We solve the problem of using the median complexity in multidimensional linear attacks, an open problem since it was posed at Eurocrypt 2011. We also evaluate the difference among the median complexity, the average complexity, and a lower bound on the average complexity -- the reciprocal of the average capacity. In addition, we assess the key equivalence hypothesis more accurately, and reveal the fact that the average complexity only provides an accurate estimate for less than half of the keys, no matter how many linear approximations are involved. Finally, we revisit the so far best attack on PRESENT based on our theoretical results.
Last updated:  2016-01-19
Improved Fully Homomorphic Encryption with Composite Number Modulus
Masahiro Yagisawa
Gentry’s bootstrapping technique is the most famous method of obtaining fully homomorphic encryption. In previous work I proposed a fully homomorphic encryption without bootstrapping which had a weak point in the plaintext. I also proposed a fully homomorphic encryption with a composite number modulus which avoids the weak point by adopting a plaintext that includes random numbers. In this paper I propose another fully homomorphic encryption with a composite number modulus where the complexity required for enciphering and deciphering is smaller than in the RSA scheme with the same modulus. For the proposed scheme it is proved that if there exists a PPT algorithm that decrypts the plaintext from any ciphertext of the proposed scheme, then there exists a PPT algorithm that factors the given composite number modulus. In addition, the proposed fully homomorphic encryption scheme is immune to the “p and -p attack”. The scheme is based on the computational difficulty of solving multivariate algebraic equations of high degree, while almost all multivariate cryptosystems proposed until now are based on quadratic equations so as to avoid an explosion of the coefficients. Because the proposed fully homomorphic encryption scheme is based on multivariate algebraic equations of high degree or with many variables, it withstands the Gröbner basis attack, the differential attack, the rank attack and so on.
Last updated:  2016-01-25
Implementing a Toolkit for Ring-LWE Based Cryptography in Arbitrary Cyclotomic Number Fields
Christoph M. Mayer
Recent research in the field of lattice-based cryptography, especially on the topic of the ring-based primitive ring-LWE, has provided efficient and practical ring-based cryptographic schemes which can compete with more traditional number-theoretic ones. In the case of ring-LWE these cryptographic schemes operated mainly in power-of-two cyclotomics, which vastly restricted the variety of possible applications. Due to the toolkit for ring-LWE of Lyubashevsky, Peikert and Regev, there are now cryptographic schemes that operate in arbitrary cyclotomics, with no loss in their underlying hardness guarantees, and only little loss in computational efficiency. Next to some further refinements and explanations of the theory and additional implementation notes, we provide the - as far as we know - first implementation of the toolkit of Lyubashevsky, Peikert and Regev. This includes a complete framework with fast and modular algorithms that can be used to build cryptographic schemes around ring-LWE. Our framework is easy to use, open source and has only a few third-party dependencies. For demonstration purposes we implemented two public-key cryptographic schemes using our framework. The complete source code is available at https://github.com/CMMayer/Toolkit-for-Ring-LWE.git.
Last updated:  2016-09-26
Better Preprocessing for Secure Multiparty Computation
Carsten Baum, Ivan Damgård, Tomas Toft, Rasmus Zakarias
We present techniques and protocols for the preprocessing of secure multiparty computation (MPC), focusing on the so-called SPDZ MPC scheme and its derivatives. These MPC schemes consist of a so-called preprocessing or offline phase, where correlated randomness is generated that is independent of the inputs and the evaluated function, and an online phase, where such correlated randomness is consumed to securely and efficiently evaluate circuits. In recent years, it has been shown that such protocols turn out to be very efficient in practice. While much research has been conducted towards optimizing the online phase of the MPC protocols, there seems to have been less focus on the offline phase of such protocols. With this work, we want to close this gap and give a toolbox of techniques that aim at optimizing the preprocessing. We support both instantiations over small fields and large rings using somewhat homomorphic encryption and the Paillier cryptosystem, respectively. In the case of small fields, we show how the preprocessing overhead can basically be made independent of the field characteristic and present a more efficient (amortized) zero-knowledge proof of plaintext knowledge. In the case of large rings, we present a protocol based on the Paillier cryptosystem which has a lower message complexity than previous protocols and employs more efficient zero-knowledge proofs that, to the best of our knowledge, were not presented in previous work.
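To make concrete what the offline phase produces: SPDZ-style preprocessing generates multiplication ("Beaver") triples (a, b, c = ab) in additive secret sharing, which the online phase consumes as sketched below. The sketch uses a trusted dealer and omits the MACs and the actual triple-generation protocols that are the paper's subject; the modulus and names are illustrative.

```python
import random

# Sketch: consuming one preprocessed Beaver triple in the online phase,
# over additive secret sharing mod a prime. The "dealer" (plain calls to
# share()) is a stand-in for the secure offline protocols the paper studies.

P = 2**61 - 1  # an arbitrary prime modulus for illustration

def share(x, n=2):
    """Additively share x among n parties."""
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def reveal(shares):
    return sum(shares) % P

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared x and y using a triple (a, b, c = a*b)."""
    d = reveal([(xs - as_) % P for xs, as_ in zip(x_sh, a_sh)])  # open x - a
    e = reveal([(ys - bs) % P for ys, bs in zip(y_sh, b_sh)])    # open y - b
    # z = c + d*b + e*a + d*e = x*y; the public d*e term goes to one party
    z_sh = [(cs + d * bs + e * as_) % P for cs, as_, bs in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh
```

Each secure multiplication burns one triple, which is why the throughput of the offline phase, the paper's subject, bounds the throughput of the whole computation.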
Last updated:  2016-01-19
Comb to Pipeline: Fast Software Encryption Revisited
Andrey Bogdanov, Martin M. Lauridsen, Elmar Tischhauser
AES-NI, or Advanced Encryption Standard New Instructions, is an extension of the x86 architecture proposed by Intel in 2008. With a pipelined implementation utilizing AES-NI, parallelizable modes such as AES-CTR become extremely efficient. However, out of the four non-trivial NIST-recommended encryption modes, three are inherently sequential: CBC, CFB, and OFB. This inhibits the advantage of using AES-NI significantly. Similar observations apply to CMAC, CCM and a great deal of other modes. We address this issue by proposing the comb scheduler -- a fast scheduling algorithm based on an efficient look-ahead strategy, featuring a low overhead -- with which sequential modes profit from the AES-NI pipeline in real-world settings by filling it with multiple, independent messages. As our main target platform we apply the comb scheduler to implementations on Haswell, a recent Intel microarchitecture, for a wide range of modes. We observe a drastic speed-up of factor 5 for NIST's CBC, CFB, OFB and CMAC performing around 0.88 cpb. Surprisingly, contrary to the entire body of previous performance analysis, the throughput of the authenticated encryption (AE) mode CCM gets very close to that of GCM and OCB3, with about 1.64 cpb (vs. 1.63 cpb and 1.51 cpb, respectively), when message lengths are sampled according to a realistic distribution for Internet packets, despite Haswell's heavily improved binary field multiplication. This suggests CCM as an AE mode of choice as it is NIST-recommended, does not have any weak-key issues like GCM, and is royalty-free as opposed to OCB3. Among the CAESAR contestants, the comb scheduler significantly speeds up CLOC/SILC, JAMBU, and POET, with the mostly sequential nonce-misuse resistant design of POET, performing at 2.14 cpb, becoming faster than the well-parallelizable COPA. 
Despite Haswell being the target platform, we also include performance figures for the more recent Skylake microarchitecture, which provides further optimizations to AES-NI instructions. Finally, this paper provides the first optimized AES-NI implementations for the novel AE modes OTR, CLOC/SILC, COBRA, POET, McOE-G, and Julius.
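The comb idea can be modeled in software: run several independent sequential chains (here CBC) in lockstep, so a pipelined cipher always has an independent block in flight. A toy scheduling sketch only; `enc_block` stands in for one AES call, and plain Python exhibits the interleaving, not the AES-NI speedup:

```python
# Toy model of comb-style scheduling: one block from each independent
# CBC chain per "pipeline slot", so sequential chains never stall a
# pipelined cipher. `enc_block` is a stand-in for a single AES call.

def comb_cbc(messages, ivs, enc_block):
    """messages: list of lists of 16-byte blocks; returns ciphertext blocks."""
    chains = list(ivs)
    outs = [[] for _ in messages]
    for slot in range(max(len(m) for m in messages)):
        for i, msg in enumerate(messages):  # round-robin over active chains
            if slot < len(msg):
                xored = bytes(p ^ c for p, c in zip(msg[slot], chains[i]))
                chains[i] = enc_block(xored)
                outs[i].append(chains[i])
    return outs
```

The per-message ciphertexts are identical to sequential CBC; only the issue order changes, which is exactly what lets the hardware pipeline stay full when messages are independent.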
Last updated:  2021-04-25
How To Simulate It - A Tutorial on the Simulation Proof Technique
Yehuda Lindell
One of the most fundamental notions of cryptography is that of \emph{simulation}. It stands behind the concepts of semantic security, zero knowledge, and security for multiparty computation. However, writing a simulator and proving security via the use of simulation is a non-trivial task, and one that many newcomers to the field often find difficult. In this tutorial, we provide a guide to how to write simulators and prove security via the simulation paradigm. Although we have tried to make this tutorial as stand-alone as possible, we assume some familiarity with the notions of secure encryption, zero-knowledge, and secure computation.
Last updated:  2016-01-19
New Approaches for Secure Outsourcing Algorithm for Modular Exponentiations
Xi-Jun Lin, Lin Sun, Haipeng Qu, Xiaoshuai Zhang
The outsourcing paradigm is one of the most attractive benefits of cloud computing: computation workloads can be outsourced to cloud servers by resource-constrained devices, such as RFID tags. With this paradigm, cloud users can avoid setting up their own infrastructures. As a result, some new challenges, such as security and checkability, are inevitably introduced. In this paper, we address the problem of secure outsourcing algorithms for modular exponentiations in the one-malicious version of the two untrusted program model. We show that our proposed algorithm is more efficient than the state-of-the-art algorithms. On the other hand, we point out that the first outsource-secure algorithm for simultaneous modular exponentiations, proposed recently, is insecure: sensitive information can be leaked to the malicious servers. As a result, we propose a new and more efficient algorithm for simultaneous modular exponentiations. We also propose constructions for outsource-secure Cramer-Shoup encryptions and Schnorr signatures, which are likewise more efficient than the state-of-the-art algorithms.
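As background on the operation being outsourced (this is standard material, not the paper's protocol): a simultaneous modular exponentiation computes a^x * b^y mod p, and Shamir's trick does so with one combined square-and-multiply pass instead of two separate exponentiations:

```python
def simultaneous_modexp(a, b, x, y, p):
    """Compute (a**x * b**y) % p with one combined square-and-multiply
    pass (Shamir's trick): precompute a*b once, then scan the exponent
    bits of x and y together from most to least significant."""
    ab = (a * b) % p
    result = 1
    for i in range(max(x.bit_length(), y.bit_length()) - 1, -1, -1):
        result = (result * result) % p          # square for every bit
        xi, yi = (x >> i) & 1, (y >> i) & 1
        if xi and yi:
            result = (result * ab) % p          # both bits set: multiply by a*b
        elif xi:
            result = (result * a) % p
        elif yi:
            result = (result * b) % p
    return result
```

The outsourcing algorithms in the paper aim to let a weak device obtain such results from untrusted servers without revealing the exponents; that protocol layer is not shown here.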
Last updated:  2016-01-19
Defeating the Ben-Zvi, Blackburn, and Tsaban Attack on the Algebraic Eraser
Iris Anshel, Derek Atkins, Dorian Goldfeld, Paul E. Gunnells
The \emph{Algebraic Eraser Diffie--Hellman} (AEDH) protocol was introduced in 2005 and published in 2006 by I.~Anshel, M.~Anshel, D.~Goldfeld, and S.~Lemieux as a protocol suitable for use on platforms with constrained computational resources, such as FPGAs, ASICs, and wireless sensors. It is a group-theoretic cryptographic protocol that allows two users to construct a shared secret via a Diffie--Hellman-type scheme over an insecure channel. Building on the refuted 2012 permutation-based attack of Kalka--Teichner--Tsaban (KTT), Ben-Zvi, Blackburn, and Tsaban (BBT) present a heuristic attack, published November 13, 2015, that attempts to recover the AEDH shared secret. In their paper BBT reference the AEDH protocol as presented to ISO for certification (ISO 29167-20) by SecureRF. The ISO 29167-20 draft contains two profiles using the Algebraic Eraser. One profile is unaffected by this attack; the second profile is subject to their attack provided the attack runs in real time, which is not the case in most practical deployments. The BBT attack is simply a targeted attack that does not attempt to break the method, system parameters, or recover any private keys. Rather, its limited focus is to recover the shared secret in a single transaction. In addition, the BBT attack is based on several conjectures that are assumed to hold when parameters are chosen according to standard distributions, which can be mitigated, if not avoided. This paper shows how to choose special distributions so that these conjectures do not hold, making the BBT attack ineffective for braid groups with sufficiently many strands. Further, the BBT attack assumes that certain data is available to an attacker, but there are realistic deployment scenarios where this is not the case, making the attack fail completely. In summary, the BBT attack is flawed (with respect to the SecureRF ISO draft) and, at a minimum, over-reaches as to its applicability.
Last updated:  2016-01-24
Strong Continuous Non-malleable Encoding Schemes with Tamper-Detection
Uncategorized
Amir S. Mortazavi, Mahmoud Salmasizadeh, Amir Daneshgar
Show abstract
Uncategorized
A non-malleable encoding scheme is a keyless encoding scheme which is resilient to tampering attacks. Such a scheme is said to be continuously secure if it is resilient to attacks containing more than one tampering procedure. Also, such a scheme is said to have the tamper-detection property if any kind of tampering attack is detected. In [S. Faust, et al., Continuous non-malleable codes, TCC Proc., LNCS Vol. 8349, 2014.] a general continuous non-malleable encoding scheme based on NIZK is introduced which is secure in a strong model, for which the adversary receives a no-tamper response to its tampering query if the decoding of the tampered codeword is identical to the original message. In this article we introduce a new strongly secure continuous non-malleable encoding scheme with the tamper-detection property, whose security is based on the existence of secure MACs. Moreover, we introduce and justify the importance of an intermediate security model called semi-strong continuous non-malleability, and we provide a secure semi-strong continuous non-malleable encoding scheme whose security is based on the existence of CCA-secure public-key encryption. Considering the applications of encoding schemes in tamper-proof devices, it is instructive to note that our proposed schemes can be used to implement an algorithmic tamper-detection level as well as maintaining the security conditions.
Last updated:  2016-02-04
Neeva: A Lightweight Hash Function
Uncategorized
Khushboo Bussi, Dhananjoy Dey, Manoj Kumar, B. K. Dass
Show abstract
Uncategorized
RFID technology is one of the major applications of lightweight cryptography, where security and cost are equally essential; in other words, cost-friendly cryptographic tools are given more weight. In this paper, we propose a lightweight hash function, \textit{Neeva-hash}, satisfying the very basic idea of lightweight cryptography. Neeva-hash is based on the sponge mode of iteration with a software-friendly permutation, which provides great efficiency and the required security in RFID technology. The proposed hash can be used for many application-based purposes.
Last updated:  2016-01-17
A NEW UNLINKABLE SECRET HANDSHAKES SCHEME BASED ON ZSS
Preeti Kulshrestha, Arun Kumar
A secret handshake (SH) scheme is a key agreement protocol between two members of the same group. Under this scheme, two members share a common key if and only if they both belong to the same group. If the protocol fails, none of the parties involved gets any idea about the group affiliation of the other. Moreover, if the transcript of communication is available to a third party, she/he does not get any information about the group affiliation of the communicating parties. The concept of SH was introduced by Balfanz et al. in 2003, who also gave a practical SH scheme using pairing-based cryptography. Their protocol uses one-time credentials to ensure that handshakes performed by the same party cannot be linked. Xu and Yung proposed a SH scheme that achieves unlinkability with reusable credentials. In this paper, a new unlinkable secret handshake scheme is presented. Our scheme is constructed from the ZSS signature and inspired by an identity-based authenticated key agreement protocol proposed by McCullagh et al. Most recently proposed unlinkable secret handshake schemes have either a design flaw or a security flaw; we prove the security of the proposed scheme assuming the intractability of the bilinear inverse Diffie-Hellman and k-CAA problems.
Last updated:  2020-05-15
Packet Header Anomaly Detection Using Bayesian Topic Models
Xuefei Cao, Bo Chen, Hui Li, Yulong Fu
A method of network intrusion detection is proposed based on Bayesian topic models. The method employs tcpdump packets and extracts multiple features from the packet headers. A topic model is trained using the normal traffic in order to learn feature patterns of the normal traffic. Then the test traffic is analyzed against the learned normal feature patterns to measure the extent to which the test traffic resembles the learned feature patterns. Since the feature patterns are learned using only the normal traffic, the test traffic is likely to be normal if its feature pattern resembles the learned feature patterns. An attack alarm is raised when the test traffic's resemblance to the learned feature patterns is lower than a threshold. Experiments show that our method is efficient in attack detection. It answers the open question of how to detect network intrusions using topic models.
Last updated:  2016-01-17
Standard quantum bit commitment – an indefinite commitment time
Muhammad Nadeem
Currently, it is believed in the literature that unconditionally secure bit commitment is impossible in non-relativistic quantum cryptography, while only a weaker notion of bit commitment with finite commitment time is achievable in the relativistic quantum setting. Moreover, relativistic classical bit commitment protocols allow arbitrarily long commitment time, but such protocols are not practically feasible; either because of multiple rounds of communication that result in an exponential increase in communication complexity, or due to the asymptotic nature of the security argument. The impossibility of practically feasible standard bit commitment leaves an obvious skepticism about the completeness of Hilbert space quantum cryptography and its claims of unconditional security. Contrary to the previously proposed results, we demonstrate here that an information-theoretic standard bit commitment scheme can be devised by using rules of purely non-relativistic quantum mechanics only; neither additional resources from the theory of relativity nor multiple rounds of communication are required. The proposed bit commitment scheme can be applied efficiently with existing quantum technologies without long-term quantum memory; quantum entanglement is required only for time t+δ, where t is the communication time between the committer and receiver while δ << t is the processing time at the laboratory.
Last updated:  2016-01-18
Collateral Damage in Online Social Networks: computing the significance of information collection
Uncategorized
Iraklis Symeonidis, Bart Preneel
Show abstract
Uncategorized
Third-party apps enable a personalized experience on social networking platforms; however, they give rise to privacy interdependence issues. Apps installed by a user's friends can collect and potentially misuse her own personal data inflicting \textit{collateral damage} on the user herself while leaving her without proper means of control. In this paper, we present a study on the \textit{collateral information collection} of apps in social networks. Based on real data, we compute the proportion of exposed user attributes including the case of profiling, when several apps are offered by the same provider.
Last updated:  2016-01-14
A Framework for Outsourcing of Secure Computation
Thomas P. Jakobsen, Jesper Buus Nielsen, Claudio Orlandi
We study the problem of how to efficiently outsource a sensitive computation on secret inputs to a number of untrusted workers, under the assumption that at least one worker is honest. In our setting there are a number of clients $C_1,\ldots,C_n$ with inputs $x_1,\ldots,x_n$. The clients want to delegate a secure computation of $f(x_1,\ldots,x_n)$ to a set of untrusted workers $W_1,\ldots,W_m$. We want to do so in such a way that as long as there is at least one honest worker (and everyone else might be actively corrupted) the following holds: * the privacy of the inputs is preserved; * the output of the computation is correct (in particular workers cannot change the inputs of honest clients). We propose a solution where the clients' work is minimal and the interaction pattern simple (one message to upload inputs, one to receive results), while at the same time reducing the overhead for the workers to a minimum. Our solution is generic and can be instantiated with any underlying reactive MPC protocol where linear operations are ``for free''. In contrast, previous solutions were less generic and could only be instantiated for specific numbers of clients/workers.
Last updated:  2016-01-14
Characterizations of the Degraded Boolean Function and Cryptanalysis of the SAFER Family
Wentan Yi, Shaozhen Chen
This paper investigates the degradation properties of Boolean functions from the aspects of the distributions of differences and linear masks, and shows two characterizations of the degraded Boolean function. One is that there exists a linear space of the input differences, where the differentials with the zero output difference have probability 1; the other is that the input linear masks of the nonzero-correlation linear approximations are included in a linear space. These two linear spaces are orthogonal. Moreover, the degradation properties are shown for the exponentiation-type S-box of the SAFER block ciphers, and are applied to reduce the computational complexity in the zero-correlation linear attacks on 5-round SAFER SK/128, 4(5)-round SAFER+/128(256) and 5(6)-round SAFER++/128(256). In the attacks, some of the linear properties of the PHT employed as the linear layer by the SAFER block ciphers are investigated, and some zero-correlation approximations for SAFER SK, SAFER+, and SAFER++ are identified, when only the one or two least significant bits are considered. The results show that more rounds of some of the SAFER block ciphers can be attacked, by considering the degradation properties and the zero-correlation linear relations.
Last updated:  2019-02-28
Simple Proofs of Space-Time and Rational Proofs of Storage
Tal Moran, Ilan Orlov
We introduce a new cryptographic primitive: Proofs of Space-Time (PoSTs) and construct an extremely simple, practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a ``space-time'' resource (storing data---space---over a period of time). Formally, we define the PoST resource as a trade-off between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work). Compared to a proof-of-work, a PoST requires less energy use, as the ``difficulty'' can be increased by extending the time period over which data is stored without increasing computation costs. Our definition is very similar to ``Proofs of Space'' [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and much simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff, and doesn't require any non-standard assumptions (beyond random oracles). Unlike previous constructions, our protocol allows incremental difficulty adjustment, which can gracefully handle increases in the price of storage compared to CPU work. In addition, we show how, in a cryptocurrency context, the parameters of the scheme can be adjusted using a market-based mechanism, similar in spirit to the difficulty adjustment for PoW protocols.
Last updated:  2016-09-08
Universal Composition with Responsive Environments
Jan Camenisch, Robert R. Enderlein, Stephan Krenn, Ralf Kuesters, Daniel Rausch
In universal composability frameworks, adversaries (or environments) and protocols/ideal functionalities often have to exchange meta-information on the network interface, such as algorithms, keys, signatures, ciphertexts, signaling information, and corruption-related messages. For these purely modeling-related messages, which do not reflect actual network communication, it would often be very reasonable and natural for adversaries/environments to provide the requested information immediately or give control back to the protocol/functionality immediately after having received some information. However, in none of the existing models for universal composability is this guaranteed. We call this the \emph{non-responsiveness problem}. As we will discuss in the paper, while formally non-responsiveness does not invalidate any of the universal composability models, it has many disadvantages, such as unnecessarily complex specifications and less expressivity. Also, this problem has often been ignored in the literature, leading to ill-defined and flawed specifications. Protocol designers really should not have to care about this problem at all, but currently they have to: giving the adversary/environment the option to not respond immediately to modeling-related requests does not translate to any real attack scenario. This paper solves the non-responsiveness problem and its negative consequences completely, by avoiding this artificial modeling problem altogether. We propose the new concepts of responsive environments and adversaries. Such environments and adversaries must provide a valid response to modeling-related requests before any other protocol/functionality is activated. Hence, protocol designers no longer have to worry about artifacts resulting from such requests not being answered promptly.
Our concepts apply to all existing models for universal composability, as exemplified for the UC, GNUC, and IITM models, with full definitions and proofs (simulation relations, transitivity, equivalence of various simulation notions, and composition theorems) provided for the IITM model.
Last updated:  2016-01-13
Towards a Unified Security Model for Physically Unclonable Functions
Frederik Armknecht, Daisuke Moriyama, Ahmad-Reza Sadeghi, Moti Yung
The use of Physically Unclonable Functions (PUFs) in cryptographic protocols has attracted increased interest over recent years. Since sound security analysis requires a concise specification of the alleged properties of the PUF, there have been numerous attempts to provide formal security models for PUFs. However, all these approaches have been tailored to specific types of applications or specific PUF instantiations. For the sake of applicability, composability, and comparability, however, there is a strong need for a unified security model for PUFs (for example, to answer whether the requirements of a future protocol match the properties of a new PUF realization). In this work, we propose a PUF model which generalizes various existing PUF models and includes security properties that have not been modeled so far. We prove the relation between some of the properties, and also discuss the relation of our model to existing ones.
Last updated:  2016-01-13
On the Leakage-Resilient Key Exchange
Janaka Alawatugoda
Typically, secure channels are constructed from an authenticated key exchange (AKE) protocol, which authenticates the communicating parties based on long-term public keys and establishes secret session keys. In this paper we address the partial leakage of long-term secret keys of key exchange protocol participants due to various side-channel attacks. Security models for two-party authenticated key exchange protocols have developed over time to provide security even when the adversary learns certain secret values. This paper combines and extends the advances of security modelling for AKE protocols addressing more granular partial leakage of long-term secrets of protocol participants.
Last updated:  2016-01-12
Beyond the selective disclosure of ABCs on RAM-constrained devices
Antonio de la Piedra
The utilization of private attribute-based credentials (ABCs) in everyday life could enable citizens to only partially reveal their identity in economic transactions and in communication with public institutions. This means citizens could control, in a practical way, the information related to their own life and identity in many contexts. At the time of writing, the Identity Mixer (Idemix) by IBM is the only credential system that offers enough flexibility to prove a considerable variety of properties of the attributes of a credential. Although many practitioners have proposed different strategies for implementing ABCs on smart cards in the last few years, the complexity of the assumptions these primitives usually rely on undermines fast and practical implementations of ABCs. The lack of smart cards with powerful hardware arithmetic accelerators is not the only obstacle to speeding up the computation of these primitives, since one also needs to perform fast arithmetic operations with operands stored in RAM. Moreover, the implementation of complex zero-knowledge proofs (ZKPs) requires a considerable amount of pseudorandomness. In order to overcome these limitations, we propose to use a pseudo-random number generator (PRNG) for recomputing pseudorandomness, and we use it in tandem with variable reconstruction in order to implement complex proofs. This simple technique enables us to compute pseudonyms, domain pseudonyms and multi-credential proofs, and to rely on the AND, NOT and OR operators to prove inner properties of the attributes of the credential, whereas prior art only addressed the selective disclosure of one attribute of a given credential. Moreover, we show how to increase the number of attributes stored on the card via this construction. Finally, we show how to chain proofs based on the AND, NOT and OR operators in order to extend the set of properties of a credential that can be shown via external and internal commitment reordering.
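The recomputation idea can be illustrated in a few lines: instead of storing every pseudorandom value used in a proof in scarce RAM, the card keeps only a short seed and a counter and re-derives each value on demand. A minimal sketch, assuming a hash-based counter-mode PRNG (our own illustration; the paper's card implementation and its PRNG are not specified here):

```python
import hashlib

def prng_value(seed: bytes, counter: int) -> int:
    """Toy counter-mode PRNG: derive the counter-th pseudorandom value
    from a short seed, so a value consumed earlier in a zero-knowledge
    proof can later be recomputed instead of kept in RAM."""
    digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big")
```

Because the derivation is deterministic, the prover can regenerate the exact blinding values it committed to when it later has to answer the verifier's challenge, trading a little computation for RAM.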
Last updated:  2016-11-17
An Efficient Lattice-Based Signature Scheme with Provably Secure Instantiation
Sedat Akleylek, Nina Bindel, Johannes Buchmann, Juliane Krämer, Giorgia Azzurra Marson
In view of the expected progress in cryptanalysis, it is important to find alternatives for currently used signature schemes such as RSA and ECDSA. The most promising lattice-based signature schemes to replace these are BLISS (CRYPTO 2013) and GLP (CHES 2012). Both come with a security reduction from a lattice problem and have high performance. However, their parameters are not chosen according to their provided security reduction, i.e., the instantiation is not provably secure. In this paper, we present the first lattice-based signature scheme with good performance when instantiated in a provably secure manner. To this end, we provide a tight security reduction for the new scheme from the ring learning with errors problem, which allows for provably secure and efficient instantiations. We present experimental results obtained from a software implementation of our scheme. They show that our scheme, when instantiated in a provably secure manner, performs comparably with BLISS and the GLP scheme.
Last updated:  2016-01-12
Simple SIMON: FPGA implementations of the SIMON 64/128 Block Cipher
Jos Wetzels, Wouter Bokslag
In this paper we will present various hardware architecture designs for implementing the SIMON 64/128 block cipher as a cryptographic component offering encryption, decryption and self-contained key-scheduling capabilities and discuss the issues and design options we encountered and the tradeoffs we made in implementing them. Finally, we will present the results of our hardware architectures' implementation performances on the Xilinx Spartan-6 FPGA series.
Last updated:  2016-01-12
Sponges and Engines: An introduction to Keccak and Keyak
Jos Wetzels, Wouter Bokslag
In this document we present an introductory overview of the algorithms and design components underlying the Keccak cryptographic primitive and the Keyak encryption scheme for authenticated (session-supporting) encryption. This document aims to familiarize readers with the basic principles of authenticated encryption, the Sponge and Duplex constructions (full-state, keyed as well as regular versions), the permutation functions underlying Keccak and Keyak, as well as Keyak v2's Motorist mode of operation.
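The sponge construction itself is simple enough to sketch: a state of rate + capacity bytes absorbs message blocks by XORing them into the outer (rate) part and permuting, then squeezes output blocks from the outer part. A toy Python sketch (our own illustration with simplified padding; the stand-in mixing function is a hash truncation, not an actual permutation like Keccak-f):

```python
import hashlib

RATE = 8        # bytes absorbed/squeezed per round (toy parameter)
CAPACITY = 24   # hidden part of the state (toy parameter)
STATE_LEN = RATE + CAPACITY

def toy_mix(state: bytes) -> bytes:
    # Stand-in for the Keccak-f permutation; NOT a real permutation,
    # just a fixed public mixing step for illustration.
    return hashlib.sha256(state).digest()[:STATE_LEN]

def sponge_hash(message: bytes, out_len: int = 16) -> bytes:
    state = bytes(STATE_LEN)
    # Simplified padding: append 0x01, then zeros up to a RATE multiple.
    padded = message + b"\x01" + b"\x00" * (-(len(message) + 1) % RATE)
    # Absorb: XOR each block into the outer state, then mix.
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE]
        outer = bytes(s ^ b for s, b in zip(state[:RATE], block))
        state = toy_mix(outer + state[RATE:])
    # Squeeze: emit RATE bytes at a time, mixing in between.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = toy_mix(state)
    return out[:out_len]
```

The capacity bytes are never output directly, which is what gives the sponge its security margin; the duplex mode interleaves absorb and squeeze calls on the same state.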
Last updated:  2017-05-12
Balloon Hashing: A Memory-Hard Function Providing Provable Protection Against Sequential Attacks
Uncategorized
Dan Boneh, Henry Corrigan-Gibbs, Stuart Schechter
Show abstract
Uncategorized
We present the Balloon password-hashing algorithm. This is the first practical cryptographic hash function that: (i) has proven memory-hardness properties in the random-oracle model, (ii) uses a password-independent access pattern, and (iii) meets or exceeds the performance of the best heuristically secure password-hashing algorithms. Memory-hard functions require a large amount of working space to evaluate efficiently and when used for password hashing, they dramatically increase the cost of offline dictionary attacks. In this work, we leverage a previously unstudied property of a certain class of graphs (“random sandwich graphs”) to analyze the memory-hardness of the Balloon algorithm. The techniques we develop are general: we also use them to give a proof of security of the scrypt and Argon2i password-hashing functions in the random-oracle model. Our security analysis uses a sequential model of computation, which essentially captures attacks that run on single-core machines. Recent work shows how to use massively parallel special-purpose machines (e.g., with hundreds of cores) to attack Balloon and other memory-hard functions. We discuss these important attacks, which are outside of our adversary model, and propose practical defenses against them. To motivate the need for security proofs in the area of password hashing, we demonstrate and implement a practical attack against Argon2i that successfully evaluates the function with less space than was previously claimed possible. Finally, we use experimental results to compare the performance of the Balloon hashing algorithm to other memory-hard functions.
Last updated:  2016-01-12
A Practical Template Attack on MICKEY-128 2.0 Using PSO Generated IVs and LS-SVM
Abhishek Chakraborty, Debdeep Mukhopadhyay
The reported power analysis attacks on hardware implementations of the MICKEY family of stream ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on the MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. The outcomes of the experiments reveal that our proposed power analysis attack strategy requires far fewer IVs than a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.
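The Hamming distance classes at the heart of such template attacks are elementary to compute: the power drawn by a register update is modeled as proportional to the number of bits that flip between consecutive states. A minimal sketch of this grouping (our own illustration of the standard HD model, not the paper's LS-SVM classifier):

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two register states differ;
    the HD leakage model takes power consumption of a register update
    as proportional to this value."""
    return bin(a ^ b).count("1")

def hd_classes(transitions):
    """Group (prev_state, next_state) register transitions into their
    Hamming-distance classes, as the template classifiers do."""
    classes = {}
    for prev, new in transitions:
        classes.setdefault(hamming_distance(prev, new), []).append((prev, new))
    return classes
```

In the attack, each captured trace is assigned to one of these classes by the trained classifier, and the class labels constrain the secret state.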
Last updated:  2017-05-12
Human-readable Proof of the Related-Key Security of AES-128
Khoongming Khoo, Eugene Lee, Thomas Peyrin, Siang Meng Sim
The related-key model is now considered an important scenario for block cipher security and many schemes were broken in this model, even AES-192 and AES-256. Recently, efficient computer-based search tools were introduced that can produce the best possible related-key truncated differential paths for AES. However, one has to trust the implementation of these tools, and they do not provide any meaningful information on how to design a good key schedule, which remains a challenge for the community as of today. We provide in this article the first human-readable proof of the minimal number of active Sboxes in the related-key model for AES-128, without any help from a computer. More precisely, we show that any related-key differential path for AES-128 will contain at least 0, 1, 3 and 9 active Sboxes for 1, 2, 3 and 4 rounds, respectively. Our proof is tight, not trivial, and actually exhibits for the first time the interplay between the key state and the internal state of an AES-like block cipher with an AES-like key schedule. As an application example, we leverage our proofs to propose a new key schedule that is not only faster (a simple permutation on the byte positions) but also ensures a higher number of active Sboxes than AES-128's key schedule. We believe this is an important step towards a good understanding of efficient and secure key schedule designs.
Last updated:  2016-01-12
Refund attacks on Bitcoin’s Payment Protocol
Patrick McCorry, Siamak F. Shahandashti, Feng Hao
BIP70 is a community-accepted Payment Protocol standard that governs how merchants and customers perform payments in Bitcoin. This standard is supported by most major wallets and the two dominant Payment Processors: Coinbase and BitPay, who collectively provide the infrastructure for accepting Bitcoin as a form of payment to more than 100,000 merchants. In this paper, we present new attacks on the Payment Protocol, which affect all BIP70 merchants. The Silkroad Trader attack highlights an authentication vulnerability in the Payment Protocol while the Marketplace Trader attack exploits the refund policies of existing Payment Processors. Both attacks have been experimentally verified on real-life merchants using a modified Bitcoin wallet. The attacks have been acknowledged by both Coinbase and BitPay with temporary mitigation measures put in place. However, fully addressing the identified issues will require revising the BIP70 standard. We present a concrete proposal to revise BIP70 by providing the merchant with publicly verifiable evidence to prevent both attacks.
Last updated:  2016-01-12
Improved on an improved remote user authentication scheme with key agreement
Uncategorized
Yalin Chen, Jue-Sam Chou, I - Chiung Liao
Show abstract
Uncategorized
Recently, Kumari et al. pointed out that Chang et al.'s scheme "Untraceable dynamic-identity-based remote user authentication scheme with verifiable password update" not only has several drawbacks, but also does not provide any session key agreement. Hence, they proposed an improved remote user authentication scheme with key agreement based on Chang et al.'s scheme. After cryptanalysis, they confirmed the security properties of the improved scheme. However, we determine that the scheme suffers from both an anonymity breach and the smart-card-loss password guessing attack, which violate two of the ten basic requirements for secure identity authentication using smart cards formulated by Liao et al. Therefore, we modify the scheme to include the desired security functionality, which is significantly important in a user authentication system using smart cards.
Last updated:  2016-01-10
On derivatives of polynomials over finite fields through integration
Enes Pasalic, Amela Muratovic-Ribic, Samir Hodzic, Sugata Gangopadhyay
In this note, using a rather elementary technique and a derived formula that relates the coefficients of a polynomial over a finite field to those of its derivative, we deduce many interesting results related to derivatives of Boolean functions and derivatives of mappings over finite fields. For instance, we easily identify several infinite classes of polynomials which cannot possess linear structures. The same technique can be applied to deduce a nontrivial upper bound on the degree of so-called planar mappings.
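The formal derivative that relates the two coefficient sequences is a one-liner: the derivative of sum_i c_i x^i over GF(p) is sum_{i>=1} (i * c_i mod p) x^(i-1). A minimal sketch (our own illustration of this standard definition, not the note's specific formula):

```python
def formal_derivative(coeffs, p):
    """Formal derivative over GF(p): coeffs[i] is the coefficient of x^i;
    the result lists the coefficients of sum_{i>=1} (i*c_i mod p) x^(i-1).
    Note that terms x^(kp) vanish, since kp = 0 mod p."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]
```

For example, over GF(3) the polynomial x^3 + 2x + 1 has derivative 3x^2 + 2 = 2, so the cubic term disappears entirely; it is exactly this collapse that the note's coefficient formula exploits.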
Last updated:  2016-04-29
Quasi-Linear Size Zero Knowledge from Linear-Algebraic PCPs
Eli Ben-Sasson, Alessandro Chiesa, Ariel Gabizon, Madars Virza
The seminal result that every language having an interactive proof also has a zero-knowledge interactive proof assumes the existence of one-way functions. Ostrovsky and Wigderson (ISTCS 1993) proved that this assumption is necessary: if one-way functions do not exist, then only languages in BPP have zero-knowledge interactive proofs. Ben-Or et al. (STOC 1988) proved that, nevertheless, every language having a multi-prover interactive proof also has a zero-knowledge multi-prover interactive proof, unconditionally. Their work led to, among many other things, a line of work studying zero knowledge without intractability assumptions. In this line of work, Kilian, Petrank, and Tardos (STOC 1997) defined and constructed zero-knowledge probabilistically checkable proofs (PCPs). While PCPs with quasilinear-size proof length, but without zero knowledge, are known, no such result is known for zero knowledge PCPs. In this work, we show how to construct ``2-round'' PCPs that are zero knowledge and of length \tilde{O}(K) where K is the number of queries made by a malicious polynomial time verifier. Previous solutions required PCPs of length at least K^6 to maintain zero knowledge. In this model, which we call *duplex PCP* (DPCP), the verifier first receives an oracle string from the prover, then replies with a message, and then receives another oracle string from the prover; a malicious verifier can make up to K queries in total to both oracles. Deviating from previous works, our constructions do not invoke the PCP Theorem as a blackbox but instead rely on certain algebraic properties of a specific family of PCPs. We show that if the PCP has a certain linear algebraic structure --- which many central constructions can be shown to possess, including [BFLS91,ALMSS98,BS08] --- we can add the zero knowledge property at virtually no cost (up to additive lower order terms) while introducing only minor modifications in the algorithms of the prover and verifier. 
We believe that our linear-algebraic characterization of PCPs may be of independent interest, as it gives a simplified way to view previous well-studied PCP constructions.
Last updated:  2016-02-02
Truncated Differential Based Known-Key Attacks on Round-Reduced Simon
Yonglin Hao, Willi Meier
At Crypto 2015, Blondeau, Peyrin and Wang proposed a truncated-differential-based known-key attack on full PRESENT, a nibble-oriented lightweight block cipher with an SPN structure. The truncated difference they used is derived from existing multidimensional linear characteristics. An innovative technique of their work is the design of a MITM layer added before the characteristic, which covers extra rounds with a complexity lower than that of a generic construction. We notice that there are good linear hulls for the bit-oriented block cipher Simon that correspond to highly qualified truncated differential characteristics. Based on these characteristics, we propose known-key distinguishers on the round-reduced Simon block cipher family, which is bit-oriented and has a Feistel structure. Similar to the MITM layer, we design a specific start-from-the-middle method for pre-adding extra rounds with complexities lower than the generic bounds. With these techniques, we launch basic known-key attacks on round-reduced Simon. We also employ key-guessing techniques to further extend the basic attacks to more rounds. Our known-key attacks reach as many as 29/32/38/48/63 rounds of Simon32/48/64/96/128, which comes quite close to the full number of rounds. To the best of our knowledge, these are the first known-key results on the block cipher Simon.
Last updated:  2016-06-20
Analysis of Gong et al.'s CCA2-Secure Homomorphic Encryption
Hyung Tae Lee, San Ling, Huaxiong Wang
It is well known that homomorphic encryption is not secure against adaptive chosen-ciphertext attacks (CCA2) because of its malleability. Very recently, however, Gong et al. proposed a construction asserted to be a CCA2-secure additively homomorphic encryption (AHE) scheme; in their construction, the adversary is not able to obtain a correct answer when querying the decryption oracle on a ciphertext obtained by modifying the challenge ciphertext (Theoretical Computer Science, 2016). Because their construction is very similar to Paillier's AHE, it appeared to support an additively homomorphic property, though they did not specify an evaluation algorithm for the scheme in their paper. In this paper, we present a simple CCA2 attack on their construction by re-randomizing the challenge ciphertext. Furthermore, we examine the additively homomorphic property of their construction. To do this, we first consider a typical candidate for an addition algorithm on ciphertexts, as provided in previous AHE constructions, and establish that it does not function correctly. Subsequently, we provide plausible evidence for the hardness of achieving an additively homomorphic property with their construction. According to our analysis, it seems hard to preserve an additively homomorphic property of their construction without modification. In addition, as a minor contribution, we point out a flaw in the decryption algorithm of their construction and present a rectified algorithm for correct decryption.
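The re-randomization step of such an attack is easy to illustrate on plain (textbook) Paillier, whose ciphertexts are malleable by design. The sketch below uses toy parameters and is not the Gong et al. scheme itself: multiplying a ciphertext by a fresh encryption of zero yields a different ciphertext of the same plaintext, which a CCA2 adversary may submit to the decryption oracle.

```python
import math, random

# Toy Paillier instance (tiny primes, illustration only).
p, q = 47, 59
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1  # standard choice: g^m = 1 + m*n (mod n^2)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n   # L(c^lam mod n^2) = m * lam (mod n)
    return (L * pow(lam, -1, n)) % n

m = 42
c = encrypt(m)
# CCA2-style mauling: multiplying by a fresh encryption of 0 re-randomizes
# the ciphertext without changing the underlying plaintext.
c_mauled = (c * encrypt(0)) % n2
assert c_mauled != c and decrypt(c_mauled) == m
```

The same multiplicative structure is what gives Paillier its additive homomorphism: the product of two ciphertexts decrypts to the sum of the plaintexts.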
Last updated:  2016-01-08
Private Functional Encryption: Indistinguishability-Based Definitions and Constructions from Obfuscation
Afonso Arriaga, Manuel Barbosa, Pooya Farshim
Private functional encryption guarantees that not only is the information in ciphertexts hidden but also the circuits in decryption tokens are protected. A notable use case of this notion is query privacy in searchable encryption. Prior privacy models in the literature were fine-tuned for specific functionalities (namely, identity-based encryption and inner-product encryption), did not model correlations between ciphertexts and decryption tokens, or fell under strong uninstantiability results. We develop a new indistinguishability-based privacy notion that overcomes these limitations and give constructions supporting different circuit classes and meeting varying degrees of security. Obfuscation is a common building block that these constructions share, although the obfuscators necessary for each construction rely on different assumptions. Our feasibility results go beyond previous constructions in a number of ways. In particular, a keyword search scheme that we base on point obfuscators tolerates arbitrary and possibly low-entropy correlations between encrypted data and queried keywords, even under (mildly restricted) adaptive token-extraction queries. Our more elaborate keyword search scheme achieves the strongest notion of privacy that we put forth (with no restrictions), but relies on stronger forms of obfuscation. We also develop a composable and distributionally secure inner-product obfuscator and use it to build an inner-product encryption scheme that achieves an unprecedented level of privacy. This, in particular, positively answers a question left open by Boneh, Raghunathan and Segev (ASIACRYPT 2013) concerning the extension and realization of enhanced security for hyperplane membership.
Last updated:  2016-01-08
Valiant's Universal Circuit: Improvements, Implementation, and Applications
Helger Lipmaa, Payman Mohassel, Saeed Sadeghian
A Universal Circuit (UC) is a circuit that can simulate any circuit up to a maximum size, given its description as input. In this work, we look back at Valiant's universal circuit construction (STOC 1976). Although it yields an asymptotically optimal UC and has implications for important problems in cryptography such as ''private function evaluation'' (PFE) and ''cryptographic program obfuscation'', somewhat surprisingly, no implementations of the construction exist. We provide a more approachable description, improve its constant factors, and put forth the first complete implementation. We observe that our improved implementation of Valiant's UC performs better than estimated and, in fact, is almost always smaller than the UC construction of Kolesnikov and Schneider (FC 2008). The UC circuits generated by our implementation can be used for benchmarking MPC protocols, and provide a point of comparison for any future PFE. We also observe, for the first time, that the same construction can be adapted to yield a size-optimized \emph{universal arithmetic circuit} (UAC).
Last updated:  2016-01-08
A trustless privacy-preserving reputation system
Alexander Schaub, Rémi Bazin, Omar Hasan, Lionel Brunie
Reputation systems are crucial for distributed applications in which users have to be held accountable for their actions, such as e-commerce websites. However, existing systems often disclose the identity of the raters, which might deter honest users from posting reviews out of fear of retaliation from the ratees. While many privacy-preserving reputation systems have been proposed, none of them is at the same time truly decentralized, trustless, and suitable for real-world usage in, for example, e-commerce applications. After discussing the weaknesses and shortcomings of existing solutions, we present our own blockchain-based trustless reputation system and analyze its correctness and the security guarantees it provides.
Last updated:  2016-01-08
Quantum Collision-Resistance of Non-Uniformly Distributed Functions
Ehsan Ebrahimi Targhi, Gelo Noel Tabia, Dominique Unruh
We study the quantum query complexity of finding a collision for a function $f$ whose outputs are chosen according to a distribution with min-entropy $k$. We prove that $\Omega(2^{k/9})$ quantum queries are necessary to find a collision for function $f$. This is needed in some security proofs in the quantum random oracle model (e.g. Fujisaki-Okamoto transform).
Last updated:  2016-01-07
Foundations of Hardware-Based Attested Computation and Application to SGX
Manuel Barbosa, Bernardo Portela, Guillaume Scerri, Bogdan Warinschi
Exciting new capabilities of modern trusted hardware technologies allow for the execution of arbitrary code within environments completely isolated from the rest of the system and provide cryptographic mechanisms for securely reporting on these executions to remote parties. Rigorously proving the security of protocols that rely on this type of hardware faces two obstacles. The first is to develop models appropriate for the induced trust assumptions (e.g., what is the correct notion of a party when the peer one wishes to communicate with is a specific instance of an outsourced program). The second is to develop scalable analysis methods, as the inherently stateful nature of the platforms precludes the application of existing modular analysis techniques that require high degrees of independence between the components. We take the first steps in this direction by studying three cryptographic tools commonly associated with this new generation of trusted hardware solutions. Specifically, we provide formal security definitions, generic constructions and security analysis for attested computation, key exchange for attestation and secure outsourced computation. Our approach is incremental: each of the concepts relies on the previous ones in a quasi-modular fashion. For example, we show how to build a secure outsourced computation scheme from an arbitrary attestation protocol combined with a key-exchange and an encryption scheme.
Last updated:  2016-01-27
Threshold-optimal DSA/ECDSA signatures and an application to Bitcoin wallet security
Rosario Gennaro, Steven Goldfeder, Arvind Narayanan
While threshold signature schemes have been presented before, there has never been an optimal threshold signature algorithm for DSA. Due to the properties of DSA, it is far more difficult to create a threshold scheme for it than for other signature algorithms. In this paper, we present a breakthrough scheme that provides a threshold DSA algorithm that is efficient and optimal. We also present a compelling application to use our scheme: securing Bitcoin wallets. Bitcoin thefts are on the rise, and threshold DSA is necessary to secure Bitcoin wallets. Our scheme is the first general threshold DSA scheme that does not require an honest majority and is useful for securing Bitcoin wallets.
Last updated:  2016-01-06
Cryptography for Big Data Security
Ariel Hamlin, Nabil Schear, Emily Shen, Mayank Varia, Sophia Yakoubov, Arkady Yerukhimovich
As big data collection and analysis becomes prevalent in today’s computing environments, there is a growing need for techniques to ensure the security of the collected data. To make matters worse, due to its large volume and velocity, big data is commonly stored on distributed or shared computing resources not fully controlled by the data owner. Thus, tools are needed to ensure both the confidentiality of the stored data and the integrity of the analytics results even in untrusted environments. In this chapter, we present several cryptographic approaches for securing big data and discuss the appropriate use scenarios for each. We begin with the problem of securing big data storage. We first address the problem of secure block storage for big data, allowing data owners to store and retrieve their data from an untrusted server. We present techniques that allow a data owner to both control access to their data and ensure that none of their data is modified or lost while in storage. However, in most big data applications, it is not sufficient to simply store and retrieve one’s data; a search functionality is necessary to allow one to select only the relevant data. Thus, we present several techniques for searchable encryption allowing database-style queries over encrypted data. We review the performance, functionality, and security provided by each of these schemes and describe appropriate use cases. However, the volume of big data often makes it infeasible for an analyst to retrieve all relevant data. Instead, it is desirable to be able to perform analytics directly on the stored data without compromising the confidentiality of the data or the integrity of the computation results. We describe several recent cryptographic breakthroughs that make such processing possible for varying classes of analytics.
We review the performance and security characteristics of each of these schemes and summarize how they can be used to protect big data analytics especially when deployed in a cloud setting. We hope that the exposition in this chapter will raise awareness of the latest types of tools and protections available for securing big data. We believe better understanding and closer collaboration between the data science and cryptography communities will be critical to enabling the future of big data processing.
Last updated:  2016-01-28
Better Security for Functional Encryption for Inner Product Evaluations
Michel Abdalla, Florian Bourse, Angelo De Caro, David Pointcheval
Functional encryption is a new public-key paradigm that solves, in a non-interactive way, most of the security challenges raised by cloud computing. A recent paper by Abdalla, Bourse, De Caro, and Pointcheval presents a functional encryption scheme for inner-product evaluations whose security can be proven under simple assumptions. Inner-product evaluation is a simple but quite powerful functionality that suffices for many concrete applications. We analyze the different security notions for functional encryption for inner-product evaluation and propose a new generic construction that achieves security against adaptive adversaries. We show three instantiations based on ElGamal encryption (plain DDH assumption), Paillier/BCP encryption (DCR assumption), and Regev encryption (LWE assumption). Each has its own advantages and drawbacks, but with acceptable trade-offs, and all rely on standard assumptions.
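The flavor of the ElGamal-based instantiation can be conveyed by the selectively secure DDH inner-product scheme of Abdalla et al. that this line of work builds on. The sketch below uses a toy group and is illustrative only: the functional key for a vector y recovers exactly the inner product <x, y> (as a small discrete logarithm) and nothing more about x.

```python
import random

# Toy DDH group: p = 2q + 1 with p, q prime; g = 4 generates the
# order-q subgroup of squares. Far too small to be secure.
p, q, g = 2039, 1019, 4
l = 3  # vector length

# Setup: master secret s, public key h_i = g^{s_i}.
s = [random.randrange(q) for _ in range(l)]
h = [pow(g, si, p) for si in s]

def encrypt(x):
    r = random.randrange(1, q)
    return pow(g, r, p), [(pow(h[i], r, p) * pow(g, x[i], p)) % p for i in range(l)]

def keygen(y):
    # The functional key for y is just the scalar <s, y> mod q.
    return sum(si * yi for si, yi in zip(s, y)) % q

def decrypt(ct, y, sk_y, bound):
    ct0, cts = ct
    num = 1
    for c, yi in zip(cts, y):
        num = (num * pow(c, yi, p)) % p
    target = (num * pow(ct0, -sk_y, p)) % p  # equals g^{<x, y>}
    acc = 1
    for m in range(bound):                   # brute-force small discrete log
        if acc == target:
            return m
        acc = (acc * g) % p
    raise ValueError("inner product out of range")

x, y = [1, 2, 3], [2, 0, 1]
ct = encrypt(x)
assert decrypt(ct, y, keygen(y), bound=50) == 5  # <x, y> = 2 + 0 + 3
```

The inner product must stay within a range small enough for the final discrete-logarithm search; the Paillier and Regev instantiations avoid this restriction in different ways.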
Last updated:  2016-01-06
Eliminating Decryption Failures from the Simple Matrix Encryption Scheme
Albrecht Petzoldt, Jintai Ding, Lih-Chung Wang
The SimpleMatrix encryption scheme proposed by Tao et al. \cite{TD13} is one of the very few existing approaches to building a secure and efficient encryption scheme on the basis of multivariate polynomials. However, in its basic version, decryption failures occur with non-negligible probability. Although this problem has been addressed in several papers \cite{DP14,TX15}, a general solution is still missing. In this paper we propose an improved version of the SimpleMatrix scheme, which eliminates decryption failures completely and therefore solves the biggest problem of the SimpleMatrix encryption scheme. Additionally, we propose a second version of the scheme, which reduces the blow-up factor between plaintext and ciphertext size to a value arbitrarily close to 1.
Last updated:  2016-01-10
PUF-Based Solutions for Secure Communications in Advanced Metering Infrastructure (AMI)
Mahshid Delavar, Sattar Mirzakuchaki, Mohammad Hassan Ameri, Javad Mohajeri
Advanced Metering Infrastructure (AMI) provides two-way communication between the utility and the smart meters. Developing authenticated key exchange (AKE) and broadcast authentication (BA) protocols to secure the unicast and broadcast communications in AMI is an essential part of AMI design. The security of all existing cryptographic protocols is based on the assumption that secret information is stored in the non-volatile memory of each party and kept unknown to the adversary. Unfortunately, in an AMI network, attackers can obtain some or all of the stored secret information from non-volatile memories by a great variety of inexpensive and fast side-channel attacks. In particular, the smart meters, which are located in physically insecure environments, are more vulnerable to these attacks. Thus, all existing AKE and BA protocols are no longer secure against such attacks. In this paper, we investigate how to develop AKE and BA protocols that remain secure in the presence of memory attacks. As a solution, we propose to embed a Physical Unclonable Function (PUF) in each communicating party, which generates the secret values as required without the need to store them. By combining PUFs with two well-known and secure protocols, we propose a PUF-based Authenticated Key Exchange protocol (PUF-AKE) for unicast communications and a PUF-based Broadcast Authentication protocol (PUF-BA) for broadcast communications. We show that our proposed protocols are memory-leakage resilient and prove their security in a standard model. Performance analysis shows that both protocols are efficient for AMI applications and can be easily implemented in AMI networks.
Last updated:  2018-03-21
cMix: Mixing with Minimal Real-Time Asymmetric Cryptographic Operations
David Chaum, Debajyoti Das, Farid Javani, Aniket Kate, Anna Krasnova, Joeri de Ruiter, Alan T. Sherman
We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive real-time public-key operations --- at the senders, recipients and mixnodes --- thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine other approaches, including onion routing. Existing mixnet designs, however, suffer from high latency, in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixnet suitable for low-latency chat on lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.
Last updated:  2016-01-04
Easing Coppersmith Methods using Analytic Combinatorics: Applications to Public-Key Cryptography with Weak Pseudorandomness
Fabrice Benhamouda, Céline Chevalier, Adrian Thillard, Damien Vergnaud
\emph{Coppersmith methods} are a family of lattice-based techniques for finding small integer roots of polynomial equations. They have found numerous applications in cryptanalysis and, in recent developments, we have seen applications where the number of unknowns and the number of equations are non-constant. In these cases, the combinatorial analysis required to settle the complexity and the success condition of the method becomes very intricate. We provide a toolbox based on \emph{analytic combinatorics} for these studies. It uses the structure of the considered polynomials to derive their generating functions and applies complex-analysis techniques to obtain asymptotics. The toolbox is versatile and can be used for many different applications, including multivariate polynomial systems with arbitrarily many unknowns (of possibly different sizes) and simultaneous modular equations over different moduli. To demonstrate the power of this approach, we apply it to recent cryptanalytic results on number-theoretic pseudorandom generators, for which we easily derive a precise and formal analysis. We also present new theoretical applications to two problems on RSA key generation and on randomness generation used in padding functions for encryption.
Last updated:  2016-01-04
Indistinguishability Obfuscation with Non-trivial Efficiency
Huijia Lin, Rafael Pass, Karn Seth, Sidharth Telang
It is well known that *inefficient* indistinguishability obfuscators (iO) with running time poly(|C|,lambda) . 2^n, where C is the circuit to be obfuscated, lambda is the security parameter, and n is the input length of C, exist *unconditionally*: simply output the function table of C (i.e., the output of C on all possible inputs). Such inefficient obfuscators, however, are not useful for applications. We here consider iO with a slightly ``non-trivial'' notion of efficiency: the running time of the obfuscator may still be ``trivial'' (namely, poly(|C|,lambda) . 2^n), but we now require that the obfuscated code be just slightly smaller than the truth table of C (namely poly(|C|,lambda) . 2^{n(1-epsilon)}, where epsilon > 0); we refer to this notion as *iO with exponential efficiency*, or simply *exponentially-efficient iO (XiO)*. We show that, perhaps surprisingly, under the subexponential LWE assumption, subexponentially-secure XiO for polynomial-size circuits implies (polynomial-time computable) iO for all polynomial-size circuits.
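The unconditional "trivial" obfuscator described above is just function-table output. A minimal sketch makes concrete the 2^n size benchmark that an XiO must beat by a 2^{n·epsilon} factor:

```python
from itertools import product

def trivial_obfuscate(circuit, n):
    # Trivial iO: the obfuscation is the full truth table of the circuit,
    # i.e. 2^n entries. XiO instead demands size poly(|C|, lambda) * 2^{n(1-eps)}.
    return {bits: circuit(bits) for bits in product((0, 1), repeat=n)}

def evaluate(obf, bits):
    # Evaluation is a single table lookup.
    return obf[bits]

# Example circuit: 3-input majority.
maj = lambda b: int(sum(b) >= 2)
obf = trivial_obfuscate(maj, 3)
assert len(obf) == 2 ** 3
assert evaluate(obf, (1, 0, 1)) == 1 and evaluate(obf, (1, 0, 0)) == 0
```

Since two functionally equivalent circuits have identical truth tables, this trivial construction is even perfectly indistinguishable; its only defect is size, which is exactly the parameter XiO relaxes.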
Last updated:  2016-01-04
A Columnar Transposition cipher in a contemporary setting.
John Jones
A simple cryptographic method, a type of columnar transposition cipher, is described which may be used in series with other methods to provide practical hybrid encryption. The method involves the use of a deterministic cryptographic pseudo-random number generator (CPRNG) to specify an unbiased random transposition of blocks of plaintext or intermediate text. The decryption process applies the reverse transposition at the appropriate stage. The method may be applied at a single stage or at several stages, and the unit of transposition may be a bit, a byte, or a larger block. When added in series with existing encryption methods, the method can deter attacks which exploit known weaknesses. It could be applied at several scales in order to obscure structures within the plaintext, e.g. at the bit level, to obscure the almost-fixed transmission headers, or at the computer-word level, to disguise application record structures. Two (incompatible) outline implementations are presented: one for use with a block cipher and one for use with a stream cipher. The specification of the configuration parameters required to amend or construct a software or hardware implementation is avoided in this preliminary article.
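The core idea can be sketched in a few lines. In this illustration Python's seeded `random.Random` stands in for a proper CPRNG (it is not cryptographically secure); the key-derived Fisher-Yates shuffle yields an unbiased permutation of byte positions, and decryption applies its inverse.

```python
import random

def keyed_permutation(key, block_len):
    # Derive an unbiased permutation of block positions from a key.
    # random.Random is a stand-in for a deterministic CPRNG.
    rng = random.Random(key)
    perm = list(range(block_len))
    rng.shuffle(perm)  # Fisher-Yates shuffle: every permutation equally likely
    return perm

def transpose(block, perm):
    # Position i of the output takes the byte at position perm[i].
    return bytes(block[p] for p in perm)

def untranspose(block, perm):
    # Inverse mapping: put byte i back at position perm[i].
    out = bytearray(len(block))
    for i, p in enumerate(perm):
        out[p] = block[i]
    return bytes(out)

perm = keyed_permutation("secret key", 16)
pt = b"attack at dawn!!"
ct = transpose(pt, perm)
assert ct != pt and untranspose(ct, perm) == pt
```

In series with a block or stream cipher, the same routine would be applied to ciphertext or intermediate text blocks, at bit, byte, or word granularity.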
Last updated:  2016-01-05
Bounding basis reduction properties
Arnold Neumaier
The paper describes improved analysis techniques for basis reduction that allow one to prove strong complexity bounds and reduced basis guarantees for traditional reduction algorithms and some of their variants. This is achieved by a careful exploitation of the linear equations and inequalities relating various bit sizes before and after one or more reduction steps.
Last updated:  2016-01-06
On Splitting a Point with Summation Polynomials in Binary Elliptic Curves
Nicolas T. Courtois
Recent research into efficient algorithms for solving the discrete logarithm (DL) problem on elliptic curves depends on the difficult question of the feasibility of index calculus, which would consist of splitting EC points into sums of points lying in a certain subspace. A natural algebraic approach towards this goal is solving systems of non-linear multivariate equations derived from the so-called summation polynomials, a method proposed by Semaev in 2004 [12]. In this paper we consider simplified variants of this problem with splitting into two or three parts in binary curves. We propose three algorithms with running time of the order of 2^(n/3) for both problems. It is not clear how to interpret these results, but they do in some sense violate the generic group model for these curves.
Last updated:  2016-01-04
Remote Cache-Timing Attack without Learning Phase
Ali Can Atici, Cemal Yilmaz, Erkay Savas
Theoretically secure cryptographic algorithms can be vulnerable to attacks due to implementation flaws which disclose side-channel information about the secret key. Bernstein's attack is a well-known cache-timing attack which uses execution time as the side channel. The major drawback of this attack is that it needs an identical target machine to perform its learning phase, in which the attacker models the cache-timing behavior of the target machine. This assumption makes the attack unrealistic in many circumstances. In this work, we present an effective method to eliminate the learning phase: we propose a methodology to model the cache-timing behavior of the target machine by hypothetical modeling. To test the validity of the proposed method, we performed Bernstein's attack and showed that, in the majority of cases, the new attack is actually superior to the original attack, which uses a learning phase.
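The profiling step that the learning phase produces can be illustrated as follows. This is a deliberately simplified sketch, not the paper's method: a toy table-lookup routine stands in for AES's first-round S-box lookups, and the collected per-byte timing profile is the kind of data that Bernstein's attack correlates between victim and reference machine (and that the paper above replaces with a hypothetical cache model).

```python
import time

# Toy substitution table standing in for an AES S-box lookup.
SBOX = list(range(256))

def toy_encrypt(key_byte, pt_byte):
    # Timing of real table lookups depends on cache state; this toy
    # version only shows where the measurements are taken.
    return SBOX[key_byte ^ pt_byte]

def timing_profile(key_byte, samples=200):
    # One profile: average "encryption" time per plaintext-byte value.
    profile = [0.0] * 256
    for b in range(256):
        t0 = time.perf_counter()
        for _ in range(samples):
            toy_encrypt(key_byte, b)
        profile[b] = (time.perf_counter() - t0) / samples
    return profile

profile = timing_profile(0x3A)
assert len(profile) == 256 and min(profile) >= 0.0
```

In the real attack, correlating the victim's profile against a reference profile (or, here, a hypothetical model) ranks candidate key bytes.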
Last updated:  2016-01-04
Improved on an efficient user authentication scheme for heterogeneous wireless sensor network tailored for the Internet of Things environment
Yalin Chen, Jue-Sam Chou, Hung-Sheng Wu
Recently, Farasha et al. proposed an efficient user authentication and key agreement scheme for heterogeneous wireless sensor networks tailored for the Internet of Things environment. Using BAN logic and the AVISPA tool, they confirmed the security properties of their scheme. However, our analysis shows that the scheme cannot resist a password-guessing attack after smart-card loss, which violates one of the ten basic requirements for secure smart-card-based user authentication given by Liao et al. We therefore modify the scheme to include the desired security functionality, which is significantly important in a smart-card-based user authentication system.