All papers in 2018 (Page 11 of 1249 results)

Last updated:  2020-12-23
CALYPSO: Private Data Management for Decentralized Ledgers
Eleftherios Kokoris-Kogias, Enis Ceyhun Alp, Linus Gasser, Philipp Jovanovic, Ewa Syta, Bryan Ford
Distributed ledger technologies provide high availability and integrity, making them a key enabler for practical and secure computation of distributed workloads among mutually distrustful parties. However, many practical applications also require confidentiality, the third pillar of the CIA triad. In this work, we enhance permissioned and permissionless blockchains with the ability to manage confidential data without forfeiting availability or decentralization. More specifically, CALYPSO sets out to achieve two orthogonal goals that challenge modern distributed ledgers: (a) enable blockchains to auditably manage secrets and (b) protect distributed computations against arbitrage attacks when their results depend on the ordering and secrecy of inputs. To this end, CALYPSO proposes on-chain secrets, a novel abstraction that enforces atomic deposition of an auditable trace whenever users access confidential data. Furthermore, CALYPSO provides user-controlled consent management that ensures revocation atomicity and accountable anonymity. Finally, to enable the permissionless deployment of CALYPSO, we introduce an incentive scheme and provide users with the option to select their preferred trustees. We evaluated our CALYPSO prototype with a confidential document sharing application and a decentralized lottery. Our benchmarks show that the latency of processing transactions increases linearly with the added security (in number of trustees) and is in the range of 0.2 to 8 seconds for 16 to 128 trustees.
Last updated:  2022-02-21
TinyKeys: A New Approach to Efficient Multi-Party Computation
Carmit Hazay, Emmanuela Orsini, Peter Scholl, Eduardo Soria-Vazquez
We present a new approach to designing concretely efficient MPC protocols with semi-honest security in the dishonest majority setting. Motivated by the fact that within the dishonest majority setting the efficiency of most practical protocols does not depend on the number of honest parties, we investigate how to construct protocols which improve in efficiency as the number of honest parties increases. Our central idea is to take a protocol which is secure for $n-1$ corruptions and modify it to use short symmetric keys, with the aim of basing security on the concatenation of all honest parties' keys. This results in a more efficient protocol tolerating fewer corruptions, whilst also introducing an LPN-style syndrome decoding assumption. We first apply this technique to a modified version of the semi-honest GMW protocol, using OT extension with short keys, to improve the efficiency of standard GMW with fewer corruptions. We also obtain more efficient constant-round MPC, using BMR-style garbled circuits with short keys, and present an implementation of the online phase of this protocol. Our techniques start to improve upon existing protocols when there are around $n=20$ parties with $h=6$ honest parties, and as these increase we obtain up to a 13 times reduction (for $n=400,h=120$) in communication complexity for our GMW variant, compared with the best-known GMW-based protocol modified to use the same threshold.
Last updated:  2018-02-22
Non-Malleable Codes for Small-Depth Circuits
Marshall Ball, Dana Dachman-Soled, Siyao Guo, Tal Malkin, Li-Yang Tan
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e.~$\mathsf{AC^0}$ tampering functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n\leq k^{1+\epsilon})$, and extending our result beyond this would require separating $\mathsf{P}$ from $\mathsf{NC^1}$. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction.
Last updated:  2020-10-26
Reading in the Dark: Classifying Encrypted Digits with Functional Encryption
Edouard Dufour-Sans, Romain Gay, David Pointcheval
As machine learning grows into a ubiquitous technology that finds many interesting applications, the privacy of data is becoming a major concern. This paper deals with machine learning and encrypted data. Namely, our contribution is twofold: we first build a new Functional Encryption scheme for quadratic multi-variate polynomials, which outperforms previous schemes. It enables the efficient computation of quadratic polynomials on encrypted vectors, so that only the result is in the clear. We then turn to quadratic networks, a class of machine learning models, and show that their design makes them particularly suited to our encryption scheme. This synergy yields a technique for efficiently recovering a plaintext classification of encrypted data. Finally, we prototype our construction and run it on the MNIST dataset to demonstrate practical relevance. We obtain 97.54% accuracy, with decryption and encryption taking a few seconds.
Last updated:  2018-09-25
Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time
Thaddeus Dryja, Quanquan C. Liu, Sunoo Park
A series of recent research starting with (Alwen and Serbinenko, STOC 2015) has deepened our understanding of the notion of memory-hardness in cryptography — a useful property of hash functions for deterring large-scale password-cracking attacks — and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions of memory-hardness are not yet unified in the somewhat nascent field of memory-hardness, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows. First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read/written at runtime), and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function’s runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game (“black-magic pebble game”), and new graph constructions with optimal complexity under our proposed measure. 
Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple “cylinder” graph), providing an initial demonstration of practical feasibility for a limited range of parameter settings. Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear tradeoffs can in fact cause adversaries to employ different strategies from linear tradeoffs. Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to $\log \log n$ factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
Last updated:  2018-02-22
Short Non-Malleable Codes from Related-Key Secure Block Ciphers
Serge Fehr, Pierre Karpman, Bart Mennink
A non-malleable code is an unkeyed randomized encoding scheme that offers the strong guarantee that decoding a tampered codeword either results in the original message, or in an unrelated message. We consider the simplest possible construction in the computational split-state model, which simply encodes a message $m$ as $k||E_k(m)$ for a uniformly random key $k$, where $E$ is a block cipher. This construction is comparable to, but greatly simplifies over, the one of Kiayias et al. (ACM CCS 2016), who eschewed this simple scheme in fear of related-key attacks on $E$. In this work, we prove this construction to be a strong non-malleable code as long as $E$ is: (i) a pseudorandom permutation under leakage and (ii) related-key secure with respect to an arbitrary but fixed key relation. Both properties are believed to hold for "good" block ciphers, such as AES-128, making this non-malleable code very efficient with short codewords of length $|m| + 2\tau$ (where $\tau$ is the security parameter, e.g., 128 bits), without significant security penalty.
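The $k||E_k(m)$ encoding is simple enough to sketch directly. The following Python snippet uses a toy SHA-256-based Feistel network as a stand-in for the block cipher $E$; the paper's security argument requires a cipher that is a pseudorandom permutation under leakage and related-key secure (e.g., AES-128), so the toy PRP is purely illustrative:

```python
import hashlib, os

def _round_fn(half: bytes, key: bytes, i: int) -> bytes:
    # SHA-256-based round function (illustrative stand-in for a real cipher round).
    return hashlib.sha256(key + bytes([i]) + half).digest()[:len(half)]

def prp_encrypt(key: bytes, m: bytes, rounds: int = 4) -> bytes:
    # Toy 4-round Feistel network acting as the block cipher E_k.
    L, R = m[:len(m) // 2], m[len(m) // 2:]
    for i in range(rounds):
        L, R = R, bytes(a ^ b for a, b in zip(L, _round_fn(R, key, i)))
    return L + R

def prp_decrypt(key: bytes, c: bytes, rounds: int = 4) -> bytes:
    # Run the Feistel rounds in reverse to invert the permutation.
    L, R = c[:len(c) // 2], c[len(c) // 2:]
    for i in reversed(range(rounds)):
        L, R = bytes(a ^ b for a, b in zip(R, _round_fn(L, key, i))), L
    return L + R

def nmc_encode(m: bytes) -> bytes:
    # Codeword is k || E_k(m) for a fresh uniformly random key k.
    k = os.urandom(16)
    return k + prp_encrypt(k, m)

def nmc_decode(cw: bytes) -> bytes:
    k, c = cw[:16], cw[16:]
    return prp_decrypt(k, c)
```

Split-state tampering acts independently on the two halves $k$ and $E_k(m)$; the theorem shows that under the stated assumptions on $E$, any such tampering yields the original message or an unrelated one.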
Last updated:  2019-10-24
Impeccable Circuits
Anita Aghaie, Amir Moradi, Shahram Rasoolzadeh, Aein Rezaei Shahmirzadi, Falk Schellenberg, Tobias Schneider
By injecting faults, active physical attacks pose serious threats to cryptographic hardware, for which Concurrent Error Detection (CED) schemes are promising countermeasures. These schemes are usually based on an Error-Detecting Code (EDC) which enables detecting certain injected faults depending on the specification of the underlying code. Here, we propose a methodology to enable correct, practical, and robust implementation of code-based CEDs. We show that straightforward hardware implementations of given code-based CEDs can suffer from severe vulnerabilities, not providing the desired protection level. In particular, propagation of faults into combinatorial logic is often ignored in the security evaluation of these schemes. First, we formally define this detrimental effect and demonstrate its destructive impact. Second, we introduce an implementation strategy to limit the fault propagation effect. Third, in contrast to many other works where the fault coverage is the main focus, we present a detailed implementation strategy which can guarantee the detection of any fault covered by the underlying EDC. This holds at any point in the computation and at any location in the circuit, both in the data processing and the control unit. In short, we provide practical guidelines on how to construct efficient CED schemes with arbitrary EDCs to achieve the desired protection level. We practically evaluate the efficiency of our methodology by case studies covering different symmetric block ciphers and various linear EDCs.
Last updated:  2018-02-22
Doing Real Work with FHE: The Case of Logistic Regression
Jack L. H. Crawford, Craig Gentry, Shai Halevi, Daniel Platt, Victor Shoup
We describe our recent experience, building a system that uses fully-homomorphic encryption (FHE) to approximate the coefficients of a logistic-regression model, built from genomic data. The aim of this project was to examine the feasibility of a solution that operates "deep within the bootstrapping regime," solving a problem that appears too hard to be addressed just with somewhat-homomorphic encryption. As part of this project, we implemented optimized versions of many "bread and butter" FHE tools. These tools include binary arithmetic, comparisons, partial sorting, and low-precision approximation of "complicated functions" such as reciprocals and logarithms. Our eventual solution can handle thousands of records and hundreds of fields, and it takes a few hours to run. To achieve this performance we had to be extremely frugal with expensive bootstrapping and data-movement operations. We believe that our experience in this project could serve as a guide for what is or is not currently feasible to do with fully-homomorphic encryption.
Last updated:  2018-02-22
Efficient Parallel Binary Operations on Homomorphic Encrypted Real Numbers
Jim Basilakis, Bahman Javadi
A number of homomorphic encryption application areas, such as privacy-preserving machine learning analysis in the cloud, could be better enabled if there existed a general solution for combining sufficiently expressive logical and numerical circuit primitives to form higher-level algorithms relevant to the application domain. Logical primitives are more efficient in a binary plaintext message space, whereas numeric primitives favour a word-based message space before encryption. In a step closer to an overall strategy of combining logical and numeric operation types, this paper examines accelerating binary operations on real numbers suitable for somewhat homomorphic encryption. A parallel solution based on SIMD can be used to efficiently perform addition, subtraction and comparison operations in a single step. The result maximises computational and memory efficiency while minimising multiplicative circuit depth. The performance of these primitives and their application in min-max and sorting operations is demonstrated. In sorting real numbers, a speed-up of 25-30 times is observed.
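The binary primitives in question reduce to XOR and AND gates on encrypted bits. A plaintext Python sketch of the ripple-carry adder that such schemes evaluate gate-by-gate (in the paper's SIMD setting each gate would additionally act on many packed slots at once; this sketch is illustrative and not the authors' implementation):

```python
def ripple_add(a_bits, b_bits):
    # Binary addition over bit lists (LSB first). XOR and AND are the only
    # gate types used, matching the operations a somewhat-homomorphic
    # scheme evaluates on encrypted bits; the carry chain is what drives
    # the multiplicative depth the paper seeks to minimise.
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) ^ (carry & (a ^ b))
    return out, carry

def to_bits(x, n):
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

Comparison can be built the same way from a bitwise subtraction, with the final borrow bit indicating which operand is larger.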
Last updated:  2018-02-22
Hermes. A framework for cryptographically assured access control and data security
Eugene Pilyankevich, Ignat Korchagin, Andrey Mnatsakanov
This paper presents Hermes – a practical data security scheme with a reference implementation, which enables distributed sharing and collaboration, enforcing access control with the help of cryptographic methods (public key cryptography and traditional symmetric cryptography).
Last updated:  2021-03-09
Bloom Filter Encryption and Applications to Efficient Forward-Secret 0-RTT Key Exchange
David Derler, Kai Gellert, Tibor Jager, Daniel Slamanig, Christoph Striecks
Forward secrecy is considered an essential design goal of modern key establishment (KE) protocols, such as TLS 1.3, for example. Furthermore, efficiency considerations such as zero round-trip time (0-RTT), where a client is able to send cryptographically protected payload data along with the very first KE message, are motivated by the practical demand for secure low-latency communication. For a long time, it was unclear whether protocols that simultaneously achieve 0-RTT and full forward secrecy exist. Only recently, the first forward-secret 0-RTT protocol was described by Günther et al. (Eurocrypt 2017). It is based on Puncturable Encryption. Forward secrecy is achieved by "puncturing" the secret key after each decryption operation, such that a given ciphertext can only be decrypted once (cf. also Green and Miers, S&P 2015). Unfortunately, their scheme is completely impractical, since one puncturing operation takes between 30 seconds and several minutes for reasonable security and deployment parameters, such that this solution is only a first feasibility result, but not efficient enough to be deployed in practice. In this paper, we introduce a new primitive that we term Bloom Filter Encryption (BFE), which is derived from the probabilistic Bloom filter data structure. We describe different constructions of BFE schemes, and show how these yield new puncturable encryption mechanisms with extremely efficient puncturing. Most importantly, a puncturing operation only involves a small number of very efficient computations, plus the deletion of certain parts of the secret key, which outperforms previous constructions by orders of magnitude. This gives rise to the first forward-secret 0-RTT protocols that are efficient enough to be deployed in practice. We believe that BFE will find applications beyond forward-secret 0-RTT protocols.
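A toy Python model may help fix the key-management idea: one secret-key component per Bloom-filter slot, with puncturing implemented as deletion of the slots a tag hashes to. The class and component names here are hypothetical, and real BFE components are IBE-style keys rather than strings; this sketch only captures the bookkeeping:

```python
import hashlib

class BloomPuncture:
    """Toy model of BFE key management. A ciphertext for a tag can be
    decrypted as long as at least one of the tag's k Bloom-filter slots
    still has its secret-key component; puncturing deletes all k of them.
    Illustrative only (names and structure are hypothetical)."""
    def __init__(self, m=64, k=3):
        self.m, self.k = m, k
        self.sk = {i: f"component-{i}" for i in range(m)}  # placeholder components

    def _slots(self, tag: bytes):
        # k hash functions of the Bloom filter, derived from one SHA-256.
        return [int.from_bytes(hashlib.sha256(bytes([j]) + tag).digest(), "big")
                % self.m for j in range(self.k)]

    def can_decrypt(self, tag: bytes) -> bool:
        # Any one surviving component among the tag's slots suffices.
        return any(i in self.sk for i in self._slots(tag))

    def puncture(self, tag: bytes):
        # Extremely cheap: just delete the affected key components.
        for i in self._slots(tag):
            self.sk.pop(i, None)
```

As with any Bloom filter, distinct tags may share slots, so puncturing one tag can (with small probability) disable decryption for another; this is the correctness error the constructions in the paper account for.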
Last updated:  2018-06-03
A Key-recovery Attack on 855-round Trivium
Ximing Fu, Xiaoyun Wang, Xiaoyang Dong, Willi Meier
In this paper, we propose a key-recovery attack on Trivium reduced to 855 rounds. Since the output is a complex Boolean polynomial over secret key and IV bits, making it hard to solve for the secret key, we propose a novel nullification technique to reduce the output Boolean polynomial of 855-round Trivium. We then determine an upper bound on the degree of the reduced nonlinear Boolean polynomial and detect the right keys. These techniques apply to most stream ciphers based on nonlinear feedback shift registers (NFSR). Our attack on $855$-round Trivium has time complexity $2^{77}$. As far as we know, this is the best key-recovery attack on round-reduced Trivium. To verify our attack, we also give some experimental data on 721-round reduced Trivium.
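For readers unfamiliar with the cipher, the following Python sketch implements the Trivium keystream generator following the eSTREAM specification, with a tunable number of initialization rounds (1152 is the full cipher; the attacked variant reduces this to 855). The bit ordering is one common convention and is not claimed to match official test vectors:

```python
def trivium_keystream(key_bits, iv_bits, n_out, init_rounds=1152):
    """Trivium keystream bits after `init_rounds` warm-up updates.
    State: three NFSRs of lengths 93, 84, 111 (288 bits total)."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    # Load key into s1..s80, IV into s94..s173, set s286..s288 to 1.
    s = key_bits + [0] * 13 + iv_bits + [0] * 112 + [1, 1, 1]
    out = []
    for rnd in range(init_rounds + n_out):
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if rnd >= init_rounds:          # keystream is output after warm-up
            out.append(t1 ^ t2 ^ t3)
        # Nonlinear feedback (the AND gates are the NFSR component).
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift all three registers by one position.
        s = [t3] + s[0:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

Passing `init_rounds=855` produces the round-reduced variant targeted by the attack: the output polynomial over key and IV bits has had fewer rounds to grow in degree.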
Last updated:  2018-02-22
Green Mining: toward a less energetic impact of cryptocurrencies
Philippe Jacquet, Bernard Mans
While cryptocurrencies continue to gain popularity, their energy cost is increasingly becoming unsustainable. In this paper, we present an innovative scheme which eliminates the burden of the proof of work, the main cause of the energy waste in cryptocurrencies such as Bitcoin. The scheme is based on a green leader election scheme which guarantees a bounded average number of simultaneous miners regardless of the size of the competing population.
Last updated:  2018-02-22
Non-Profiled Deep Learning-Based Side-Channel Attacks
Benjamin Timon
Deep Learning has recently been introduced as a new alternative for performing side-channel analysis. Until now, studies have focused on applying Deep Learning techniques to Profiled side-channel attacks, where an attacker has full control of a profiling device and is able to collect a large amount of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a Non-Profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method is that it makes it possible to use the power of Deep Learning and Neural Networks in a Non-Profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks against de-synchronized traces, and to use Data Augmentation techniques during Non-Profiled side-channel attacks as well. Additionally, the present work shows that in some conditions this method can outperform classic Non-Profiled attacks such as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakage-combination pre-processing and with fewer assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and on real traces collected from the ChipWhisperer board and from the ASCAD database. The results of our experiments demonstrate the interest of this new method and show that this attack can be performed in practice.
Last updated:  2022-05-10
Breach-Resistant Structured Encryption
Ghous Amjad, Seny Kamara, Tarik Moataz
Motivated by the problem of data breaches, we formalize a notion of security for dynamic structured encryption (STE) schemes that guarantees security against a snapshot adversary; that is, an adversary that receives a copy of the encrypted structure at various times but does not see the transcripts related to any queries. In particular, we focus on the construction of dynamic encrypted multi-maps which are used to build efficient searchable symmetric encryption schemes, graph encryption schemes and encrypted relational databases. Interestingly, we show that a form of snapshot security we refer to as breach resistance implies previously-studied notions such as (a weaker version of) history independence and write-only obliviousness. Moreover, we initiate the study of dual-secure dynamic STE constructions: schemes that are forward-private against a persistent adversary and breach-resistant against a snapshot adversary. The notion of forward privacy guarantees that updates to the encrypted structure do not reveal their association to any query made in the past. As a concrete instantiation, we propose a new dual-secure dynamic multi-map encryption scheme that outperforms all existing constructions; including schemes that are not dual-secure. Our construction has query complexity that grows with the selectivity of the query and the number of deletes since the client last executed a linear-time rebuild protocol, which can be de-amortized. We implemented our scheme (with the de-amortized rebuild protocol) and evaluated its concrete efficiency empirically. Our experiments show that it is highly efficient with queries taking less than 1 microsecond per label/value pair.
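To fix intuition for the forward-privacy goal, here is a toy Python multi-map in which server-side addresses are derived from a per-label counter, so a fresh update is unlinkable to anything the server saw before. This is the generic counter-based approach, not the authors' construction, and XOR masking stands in for real encryption:

```python
import hmac, hashlib, secrets

class EncryptedMultiMap:
    """Toy forward-private multi-map: each (label, counter) pair maps to a
    pseudorandom server-side address, so an add operation reveals nothing
    about which earlier queries it matches. Sketch only."""
    def __init__(self):
        self.key = secrets.token_bytes(32)
        self.counters = {}   # client state: per-label update count
        self.store = {}      # server state: address -> masked value

    def _prf(self, purpose: bytes, label: bytes, i: int) -> bytes:
        return hmac.new(self.key, purpose + label + i.to_bytes(4, "big"),
                        hashlib.sha256).digest()

    def add(self, label: bytes, value: bytes):
        assert len(value) <= 32          # toy limit: one mask block
        i = self.counters.get(label, 0)
        self.counters[label] = i + 1
        mask = self._prf(b"mask", label, i)
        self.store[self._prf(b"addr", label, i)] = bytes(
            a ^ b for a, b in zip(value, mask))

    def query(self, label: bytes):
        # The client re-derives every (address, mask) pair for this label.
        out = []
        for i in range(self.counters.get(label, 0)):
            c = self.store[self._prf(b"addr", label, i)]
            mask = self._prf(b"mask", label, i)
            out.append(bytes(a ^ b for a, b in zip(c, mask)))
        return out
```

Breach resistance asks for more than this sketch provides: the snapshot itself (not just the update transcript) must hide the association between entries, which is what the paper's dual-secure construction achieves.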
Last updated:  2018-02-22
Proofs of Catalytic Space
Krzysztof Pietrzak
Proofs of space (PoS) [DFKP15] are proof systems where a prover can convince a verifier that he ``wastes'' disk space. PoS were introduced as a more ecological and economical replacement for proofs of work, which are currently used to secure blockchains like Bitcoin. In this work we investigate extensions of PoS which allow the prover to embed useful data into the dedicated space, which later can be recovered. The first contribution of this paper is a security proof for the PoS from [DFKP15] in the random oracle model (the original proof only applied to a restricted class of adversaries which can store a subset of the data an honest prover would store). When this PoS is instantiated with recent constructions of maximally depth robust graphs, our proof implies basically optimal security. As a second contribution we introduce and construct proofs of catalytic space (PoCS), which are defined like classical PoS, but most of the space required by the prover can at the same time be used to store useful data. Our first construction has almost no overhead (i.e., the useful data is almost as large as the dedicated space), whereas our second construction has a slightly larger overhead, but allows for efficient updates of the data. Our constructions are extensions of the [DFKP15] PoS, and our tight proof for the PoS extends (non-trivially) to the PoCS. As our last contribution we construct a proof of replication (PoR); constructing such an object was recently stated as an open problem in the Filecoin paper. This construction (and its proof) is also an extension of the [DFKP15] PoS.
Last updated:  2018-05-21
A New Family of Pairing-Friendly Elliptic Curves
Michael Scott, Aurore Guillevic
There have been recent advances in solving the finite extension field discrete logarithm problem as it arises in the context of pairing-friendly elliptic curves. This has led to the abandonment of approaches based on super-singular curves of small characteristic, and to the reconsideration of the field sizes required for implementation based on non-supersingular curves of large characteristic. This has resulted in a revision of recommendations for suitable curves, particularly at a higher level of security. Indeed for AES-256 levels of security the BLS48 curves have been suggested, and demonstrated to be superior to other candidates. These curves have an embedding degree of 48. The well known taxonomy of Freeman, Scott and Teske only considered curves with embedding degrees up to 50. Given some uncertainty around the constants that apply to the best discrete logarithm algorithm, it would seem to be prudent to push a little beyond 50. In this note we announce the discovery of a new family of pairing-friendly elliptic curves which includes a new construction for a curve with an embedding degree of 54.
Last updated:  2018-02-20
SoK: unraveling Bitcoin smart contracts
Nicola Atzei, Massimo Bartoletti, Tiziana Cimoli, Stefano Lande, Roberto Zunino
Although the primary usage of Bitcoin is to exchange currency, its blockchain and consensus mechanism can also be exploited to securely execute some forms of smart contracts. These are agreements among mutually distrusting parties, which can be automatically enforced without resorting to a trusted intermediary. Over the last few years a variety of smart contracts for Bitcoin have been proposed, both by the academic community and by developers. However, the heterogeneity in their treatment, the informal (often incomplete or imprecise) descriptions, and the use of poorly documented Bitcoin features pose obstacles to research. In this paper we present a comprehensive survey of smart contracts on Bitcoin, in a uniform framework. Our treatment is based on a new formal specification language for smart contracts, which also helps us to highlight some subtleties in existing informal descriptions, making a step towards automatic verification. We discuss some obstacles to the diffusion of smart contracts on Bitcoin, and we identify the most promising open research challenges.
Last updated:  2018-09-20
Signatures with Flexible Public Key: Introducing Equivalence Classes for Public Keys
Michael Backes, Lucjan Hanzlik, Kamil Kluczniak, Jonas Schneider
We introduce a new cryptographic primitive called signatures with flexible public key (SFPK). We divide the key space into equivalence classes induced by a relation $\mathcal{R}$. A signer can efficiently change his or her key pair to a different representative of the same class, but without a trapdoor it is hard to distinguish if two public keys are related. Our primitive is motivated by structure-preserving signatures on equivalence classes (SPSEQ), where the partitioning is done on the message space. Therefore, both definitions are complementary and their combination has various applications. We first show how to efficiently construct static group signatures and self-blindable certificates by combining the two primitives. When properly instantiated, the result is a group signature scheme that has a shorter signature size than the current state-of-the-art scheme by Libert, Peters, and Yung from Crypto'15, but is secure in the same setting. In its own right, our primitive has stand-alone applications in the cryptocurrency domain, where it can be seen as a straightforward formalization of so-called stealth addresses. Finally, it can be used to build the first efficient ring signature scheme in the plain model without trusted setup, where signature size depends only sub-linearly on the number of ring members. Thus, we solve an open problem stated by Malavolta and Schröder at ASIACRYPT 2017.
Last updated:  2018-05-08
New Lower Bounds on Predicate Entropy for Function Private Public-Key Predicate Encryption
Sikhar Patranabis, Debdeep Mukhopadhyay
We present function private public-key predicate encryption schemes from standard cryptographic assumptions, that achieve new lower bounds on the min-entropy of underlying predicate distributions. Existing function private predicate encryption constructions in the public-key setting can be divided into two broad categories. The first category of constructions are based on standard assumptions, but impose highly stringent requirements on the min-entropy of predicate distributions, thereby limiting their applicability in the context of real-world predicates. For example, the statistically function private constructions of Boneh, Raghunathan and Segev (CRYPTO'13 and ASIACRYPT'13) are inherently restricted to predicate distributions with min-entropy roughly proportional to the security parameter $\lambda$. The second category of constructions mandate more relaxed min-entropy requirements, but are either based on non-standard assumptions (such as indistinguishability obfuscation) or are secure in the generic group model. In this paper, we affirmatively bridge the gap between these categories by presenting new public-key constructions for identity-based encryption, hidden-vector encryption, and subspace-membership encryption (a generalization of inner-product encryption) that are both data and function private under variants of the well-known DBDH, DLIN and matrix DDH assumptions, while relaxing the min-entropy requirement on the predicate distributions to $\omega(\log\lambda)$. In summary, we establish that the minimum predicate entropy necessary for any meaningful notion of function privacy in the public-key setting, is in fact, sufficient, for a fairly rich class of predicates.
Last updated:  2018-02-20
Threshold Implementation in Software - Case Study of PRESENT
Pascal Sasdrich, René Bock, Amir Moradi
Masking is one of the predominantly deployed countermeasures in order to prevent side-channel analysis (SCA) attacks. Over the years, various masking schemes have been proposed. However, the implementation of Boolean masking schemes has proven to be difficult in particular for embedded devices due to undisclosed architecture details and device internals. In this article, we investigate the application of Threshold Implementation (TI) in terms of Boolean masking in software using the PRESENT cipher as a case study. Since TI has proven to be a proper solution in order to implement Boolean masking for hardware circuits, we apply the same concept for software implementations and compare it to classical first- and second-order Boolean masking schemes. Eventually, our practical security evaluations reveal that amongst all our considered implementation variants only the TI can provide first-order security while all others still exhibit detectable first-order leakage.
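The nonlinear operations are where TI differs from plain Boolean masking. A Python sketch of the well-known three-share TI of an AND gate illustrates the non-completeness property (each output share ignores one input share index) that underlies its first-order security; this is the textbook gadget, not the paper's PRESENT implementation:

```python
import secrets

def share(bit, n=3):
    # Split a bit into n Boolean shares: bit = s[0] ^ ... ^ s[n-1].
    s = [secrets.randbits(1) for _ in range(n - 1)]
    last = bit
    for v in s:
        last ^= v
    s.append(last)
    return s

def unshare(shares):
    out = 0
    for v in shares:
        out ^= v
    return out

def ti_and(x, y):
    # Three-share threshold implementation of z = x AND y.
    # Non-completeness: output share i never touches input shares of index i,
    # so no single output share leaks the unshared inputs.
    x1, x2, x3 = x
    y1, y2, y3 = y
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)
    return [z1, z2, z3]
```

Linear operations (XOR, bit permutations) are simply applied to each share independently; only nonlinear layers such as the PRESENT S-box need a TI decomposition of this kind.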
Last updated:  2019-07-21
Kissing numbers and transference theorems from generalized tail bounds
Stephen D. Miller, Noah Stephens-Davidowitz
We generalize Banaszczyk's seminal tail bound for the Gaussian mass of a lattice to a wide class of test functions. From this we obtain quite general transference bounds, as well as bounds on the number of lattice points contained in certain bodies. As applications, we bound the lattice kissing number in $\ell_p$ norms by $e^{n+o(n)}/p$ for $0<p\leq2$, and also give a proof of a new transference bound in the $\ell_1$ norm.
Last updated:  2018-02-21
Making Groth's zk-SNARK Simulation Extractable in the Random Oracle Model
Sean Bowe, Ariel Gabizon
We describe a variant of Groth's zk-SNARK [Groth, Eurocrypt 2016] that satisfies simulation extractability, which is a strong form of adaptive non-malleability. The proving time is almost identical to [Groth] and requires only two additional group operations. Our proof consists of 5 group elements rather than 3 as in [Groth], and the security proof requires the random oracle model.
Last updated:  2018-02-20
RKHD ElGamal signing and 1-way sums
Daniel R. L. Brown
An ECDSA modification with signing equation $s=rk+hd$ has the properties that the signer avoids modular inversion and that passive universal forgery is equivalent to inverting a sum of two functions with freely independent inputs. Let $\sigma:s\mapsto sG$ and $\rho:R\mapsto -rR$, where $r$ is an integer representation of the point $R$. The free sum of $\rho$ and $\sigma$ is $\nu: (R,s) \mapsto \rho(R)+\sigma(s)$. An RKHD signature $(R,s)$ verifies if and only if $\nu(R,s) = hQ$, where $h$ is the hash of the message and $Q$ is the public key. So RKHD security relies upon, among other things, the assumption that the free sum $\nu$ is 1-way (or unforgeable, to be precise). Other free sums are 1-way under plausible assumptions: elliptic curve discrete logs, integer factoring, and secure small-key Wegman--Carter--Shoup authentication. Yet other free sums of 1-way functions (integer-factoring based) fail to be 1-way. The ease with which these free sums arise hints at the ease of determining RKHD security. RKHD signatures are very similar to ECGDSA (an elliptic curve version of Agnew--Mullin--Vanstone signatures): variable-$G$ forgers of the two schemes are algorithmically equivalent. But ECGDSA requires the signer to perform one modular inversion, a small implementation security risk.
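The signing equation $s = rk + hd$ and its verification $\nu(R,s) = hQ$ can be sketched in a toy Schnorr group, using multiplicative notation as a stand-in for the elliptic curve ($g^k$ plays the role of $kG$, so $sG = rR + hQ$ becomes $g^s = R^r Q^h$). The tiny parameters are purely illustrative and offer no security.

```python
import secrets

# Toy Schnorr group: p = 23, subgroup of prime order q = 11 generated by g = 2.
p, q, g = 23, 11, 2

def keygen():
    d = secrets.randbelow(q - 1) + 1      # private key
    return d, pow(g, d, p)                # Q = g^d (stands in for Q = dG)

def sign(d, h):
    while True:
        k = secrets.randbelow(q - 1) + 1
        R = pow(g, k, p)                  # R = g^k (stands in for R = kG)
        r = R % q                         # integer representation of R
        if r:                             # skip the degenerate r = 0 case
            return R, (r * k + h * d) % q # s = rk + hd: no modular inversion

def verify(Q, h, sig):
    R, s = sig
    r = R % q
    # g^s == R^r * Q^h is the multiplicative form of sG = rR + hQ
    return pow(g, s, p) == (pow(R, r, p) * pow(Q, h, p)) % p

d, Q = keygen()
h = 5                                     # hash of the message, reduced mod q
assert verify(Q, h, sign(d, h))
```

Note how `sign` needs only multiplications and additions mod $q$, in contrast to ECDSA's (and ECGDSA's) modular inversion.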
Last updated:  2020-04-21
A privacy-preserving method for temporarily linking/revoking pseudonym certificates in vehicular networks
Marcos A. Simplicio Jr., Eduardo Lopes Cominetti, Harsh Kupwade Patil, Jefferson E. Ricardini, Leonardo T. D. Ferraz, Marcos Vinicius M. Silva
Vehicular communication (V2X) technologies are expected to become increasingly common in the future. Although they enable improvements on transportation safety and efficiency, the large scale deployment of V2X requires addressing some challenges. In particular, to prevent abuse by drivers and by the system itself, V2X architectures must: (1) ensure the authenticity of messages, which is usually accomplished by means of digital certification; and (2) preserve the privacy of honest users, so owners of non-revoked certificates cannot be easily identified and tracked by eavesdroppers. A promising design to address these requirements is the Security Credential Management System (SCMS), which is currently among the main candidates for protecting V2X communications in the United States. Even though SCMS provides efficient, scalable and privacy-preserving mechanisms for managing V2X-oriented certificates, in this article we show that its certificate revocation process can be further enhanced. Namely, we present two birthday attacks against SCMS's revocation process, both of which degrade the system's security as time passes and more certificates are revoked. We then describe an alternative design to prevent such security degradation with minimal computational overhead. In complement to these security gains, we also describe a mechanism for improving the flexibility of the revocation procedure, allowing certificates (as well as their owner's privacy) to be temporarily revoked in an efficient manner. This should be useful, for example, to implement suspension mechanisms or to aid in investigations by law-enforcement authorities.
Last updated:  2018-08-24
Can you find the one for me? Privacy-Preserving Matchmaking via Threshold PSI
Yongjun Zhao, Sherman S. M. Chow
Private set-intersection (PSI) allows a client to learn only the intersection between his/her set $C$ and the set $S$ of another party, while this latter party learns nothing. We aim to enhance PSI in different dimensions, motivated by the use cases of increasingly popular online matchmaking --- meeting "the one" who possesses all desired qualities and is free from any undesirable attributes may be a bit idealistic. In this paper, we realize over- (resp. below-) threshold PSI, such that the client learns the intersection (or other auxiliary private data) only when $|C \cap S| > t$ (resp. $\leq t$). The threshold corresponds to tunable criteria for (mis)matching, without marking all possible attributes as desired or not. In other words, the matching criteria are in a succinct form and the matching computation does not exhaust the whole universe of attributes. To the best of our knowledge, our constructions are the very first solutions to these two open problems posed by Bradley et al. (SCN '16) and Zhao and Chow (PoPETS '17), without resorting to the asymptotically less efficient generic approach based on garbled circuits. Moreover, we consider an "outsourced" setting with a service provider coordinating the PSI execution, instead of requiring two strangers to be online simultaneously to run a highly interactive PSI directly with each other. Outsourcing our protocols is arguably optimal --- the two users perform $O(|C|)$ and $O(1)$ decryptions, for unlocking the private set $C$ and the outcome of the matching.
Last updated:  2018-02-14
Simple Proofs of Sequential Work
Bram Cohen, Krzysztof Pietrzak
At ITCS 2013, Mahmoody, Moran and Vadhan [MMV'13] introduced and constructed publicly verifiable proofs of sequential work, which are protocols for proving that one spent sequential computational work related to some statement. The original motivation for such proofs included non-interactive time-stamping and universally verifiable CPU benchmarks. A more recent application, and our main motivation, is blockchain design, where proofs of sequential work can be used -- in combination with proofs of space -- as a more ecological and economical substitute for the proofs of work which currently secure Bitcoin and other cryptocurrencies. The construction proposed by [MMV'13] is based on a hash function and can be proven secure in the random oracle model, or assuming inherently sequential hash functions, a new standard-model assumption introduced in their work. In a proof of sequential work, a prover gets a "statement" $\chi$, a time parameter $N$ and access to a hash function $H$, which for the security proof is modelled as a random oracle. Correctness requires that an honest prover can make a verifier accept while making only $N$ queries to $H$, while soundness requires that any prover who makes the verifier accept must have made (almost) $N$ sequential queries to $H$. Thus a solution constitutes a proof that $N$ units of time have passed since $\chi$ was received. Solutions must be publicly verifiable in time at most polylogarithmic in $N$. The construction of [MMV'13] is based on "depth-robust" graphs, and as a consequence has rather poor concrete parameters. But the major drawback is that the prover needs not just $N$ time, but also $N$ space to compute a proof. In this work we propose a proof of sequential work which is much simpler, more efficient and achieves much better concrete bounds. Most importantly, the space required can be as small as $\log(N)$ (but we get better soundness using slightly more memory than that). 
An open problem stated by [MMV'13] that our construction does not solve either is achieving a "unique" proof, where even a cheating prover can only generate a single accepting proof. This property would be extremely useful for applications to blockchains.
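The interface described above (prover makes $N$ sequential calls to $H$ on a statement $\chi$) has a naive baseline that is useful for contrast: a plain hash chain. It is inherently sequential, but verifying it requires recomputing all $N$ hashes, whereas the constructions discussed here are verifiable in time polylogarithmic in $N$. This sketch is the baseline only, not the paper's construction.

```python
import hashlib

def sequential_work(statement: bytes, N: int) -> bytes:
    """Naive hash chain: N inherently sequential calls to H = SHA-256.
    Unlike a real proof of sequential work, checking this baseline
    means redoing all N hashes rather than polylog(N) verification."""
    h = statement
    for _ in range(N):
        h = hashlib.sha256(h).digest()
    return h

proof = sequential_work(b"chi", 1000)
assert proof == sequential_work(b"chi", 1000)   # verification = recomputation
```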
Last updated:  2022-04-25
Truncated Differential Properties of the Diagonal Set of Inputs for 5-round AES
Lorenzo Grassi, Christian Rechberger
In the last couple of years, a new wave of results appeared, proposing and exploiting new properties of round-reduced AES. In this paper we survey and combine some of these results (namely, the multiple-of-n property and the mixture differential cryptanalysis) in a systematic way in order to answer more general questions regarding the probability distribution of encrypted diagonal sets. This allows us to analyze this special set of inputs, and to report new properties regarding the probability distribution of the number of different pairs of corresponding ciphertexts that are equal in certain anti-diagonal(s) after 5 rounds. An immediate corollary of the multiple-of-8 property is that the variance of this distribution can be shown to be higher than for a random permutation. Surprisingly, the mean of the distribution is also significantly different from random, something which cannot be explained by the multiple-of-8 property. We propose a theoretical explanation of this by making an APN-like assumption on an S-Box which closely resembles the AES S-box. By combining the multiple-of-8 property, the mixture differential approach, and the results just mentioned about the mean and the variance, we are finally able to formulate the probability distribution of the diagonal set after 5-round AES as a sum of independent binomial distributions.
Last updated:  2018-06-21
Rasta: A cipher with low ANDdepth and few ANDs per bit
Christoph Dobraunig, Maria Eichlseder, Lorenzo Grassi, Virginie Lallemand, Gregor Leander, Eik List, Florian Mendel, Christian Rechberger
Recent developments in multi-party computation (MPC) and fully homomorphic encryption (FHE) have promoted the design and analysis of symmetric cryptographic schemes that minimize multiplications in one way or another. In this paper, we propose Rasta, a design strategy for symmetric encryption that has ANDdepth d and at the same time needs only d ANDs per encrypted bit. Even for very low values of d between 2 and 6 we can give strong evidence that attacks may not exist. This contributes to a better understanding of the limits of what concrete symmetric-key constructions can theoretically achieve with respect to AND-related metrics, and is to the best of our knowledge the first attempt to minimize both metrics simultaneously. Furthermore, we can give evidence that for choices of d between 4 and 6 the resulting implementation properties may well be competitive, by testing our construction in the use case of removing the large ciphertext expansion when using the BGV scheme.
Last updated:  2018-06-11
Two-Round Multiparty Secure Computation Minimizing Public Key Operations
Sanjam Garg, Peihan Miao, Akshayaram Srinivasan
We show new constructions of semi-honest and malicious two-round multiparty secure computation protocols using only (a fixed) $\mathsf{poly}(n,\lambda)$ invocations of a two-round oblivious transfer protocol (which use expensive public-key operations) and $\mathsf{poly}(\lambda, |C|)$ cheaper one-way function calls, where $\lambda$ is the security parameter, $n$ is the number of parties, and $C$ is the circuit being computed. All previously known two-round multiparty secure computation protocols required $\mathsf{poly}(\lambda,|C|)$ expensive public-key operations.
Last updated:  2018-02-14
Efficient and Constant-Rounds Secure Comparison through Dynamic Groups and Asymmetric Computations
Ken Goss, Wei Jiang
Within recent years, secure comparison protocols have been proposed using binary decomposition and properties of algebraic fields. These have been repeatedly optimized and increased in efficiency, but have seemingly reached a plateau. We propose a new approach to this problem that takes advantage of dynamic group sizes for intermediate calculations and asymmetric computations among the participating parties. As a consequence, according to our analysis, communication and computation costs have been brought to a very low and efficient level. In particular, the communication costs have been considerably reduced, both in asymptotic order and in the order of magnitude of the dominating term. In addition, our proposed protocol requires no invocations of secure multi-party multiplication, in contrast to existing protocols, whose reliance on such multiplications leads to inefficient constructions of secure comparisons.
Last updated:  2018-02-28
--Withdrawn--
Zhi Hu, Lin Wang, Chang-An Zhao
Last updated:  2018-02-14
On the Use of Independent Component Analysis to Denoise Side-Channel Measurements
Houssem Maghrebi, Emmanuel Prouff
Independent Component Analysis (ICA) is a powerful technique for blind source separation. It has been successfully applied to signal processing problems, such as feature extraction and noise reduction, in many different areas including medical signal processing and telecommunication. In this work, we propose a framework to apply ICA to denoise side-channel measurements and hence to reduce the complexity of key recovery attacks. Based on several case studies, we then demonstrate the overwhelming advantages of ICA with respect to commonly used preprocessing techniques such as singular spectrum analysis. Mainly, we target a masked software implementation of AES and an unprotected hardware one. Our results show a significant Signal-to-Noise Ratio (SNR) gain, which translates into a gain in the number of traces needed for a successful side-channel attack. This establishes ICA as an important new tool for the security assessment of cryptographic implementations.
Last updated:  2018-02-14
Fine-Tuning Decentralized Anonymous Payment Systems based on Arguments for Arithmetic Circuit Satisfiability
Kamil Kluczniak, Man Ho Au
Digital currencies like Bitcoin and other blockchain-based systems provide means to record monetary transfers between accounts. In Bitcoin-like systems, transactions are published on a decentralized ledger and reveal the sender, receiver and amount of a transfer; hence such systems give only moderate anonymity guarantees. Payment systems like ZCash attempt to offer much stronger anonymity by hiding the origin, destination and value of a payment. The ZCash system is able to offer strong anonymity mainly due to its use of Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (ZK-SNARKs) of arithmetic circuit satisfiability. One drawback of ZCash is that the arithmetic circuit is rather large and thus requires a large common reference string and a complex prover for the ZK-SNARK. In fact, the memory and prover complexity is dominated by the ZK-SNARK in use and is mainly determined by the complexity of the circuit. In this paper we design a Decentralized Anonymous Payment system (DAP), functionally similar to ZCash, but with significantly smaller arithmetic circuits, thus greatly reducing the memory and prover complexity of the system. Our construction is based on algebraic primitives, from the realm of elliptic curve and lattice based cryptography, whose satisfiability can be efficiently verified by an arithmetic circuit.
Last updated:  2018-02-14
Scalable Key Rank Estimation (and Key Enumeration) Algorithm for Large Keys
Vincent Grosso
Evaluation of security margins after a side-channel attack is an important step of side-channel resistance evaluation. The security margin indicates the brute force effort needed to recover the key given the leakages. In the recent years, several solutions for key rank estimation algorithms have been proposed. All these solutions give an interesting trade-off between the tightness of the result and the time complexity for symmetric key. Unfortunately, none of them has a linear complexity in the number of subkeys, hence these solutions are slow for large (asymmetric) keys. In this paper, we present a solution to obtain a key rank estimation algorithm with a reasonable trade-off between the efficiency and the tightness that is suitable for large keys. Moreover, by applying backtracking we obtain a parallel key enumeration algorithm.
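A common ingredient of such key rank estimation algorithms is histogram convolution: each subkey's candidate scores are binned into a histogram, the histograms are convolved to get the distribution of full-key scores, and the rank of the correct key is read off the tail. The following is an illustrative sketch of that generic idea (not the paper's specific algorithm); using integer scores keeps the toy estimate exact.

```python
import numpy as np

def estimate_rank(scores, key):
    """Histogram-convolution key rank estimation (illustrative sketch).
    scores[i][v]: integer score of candidate v for subkey i (higher = better).
    Returns the number of full keys scoring strictly higher than `key`."""
    total = sum(int(s[k]) for s, k in zip(scores, key))
    offset = sum(int(s.min()) for s in scores)
    hist = np.array([1.0])
    for s in scores:
        h = np.bincount((s - s.min()).astype(int))  # per-subkey score histogram
        hist = np.convolve(hist, h)                 # distribution of summed scores
    return int(hist[total - offset + 1:].sum())     # keys with larger total score

# 2 subkeys, 4 candidates each; brute force confirms the estimate.
scores = [np.array([3, 1, 0, 2]), np.array([0, 2, 3, 1])]
key = (1, 3)                                        # total score 1 + 1 = 2
brute = sum(1 for a in scores[0] for b in scores[1] if a + b > 2)
assert estimate_rank(scores, key) == brute
```

Real evaluations use floating-point scores, so the bin width trades tightness against speed, which is exactly the trade-off the abstract discusses.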
Last updated:  2018-02-14
A New Framework for Finding Nonlinear Superpolies in Cube Attacks against Trivium-Like Ciphers
Chen-Dong Ye, Tian Tian
In this paper, we study experimental cube attacks against Trivium-like ciphers and we focus on improving the recovery of nonlinear superpolies. We first present a general framework in cube attacks to test nonlinear superpolies, by exploiting a kind of linearization technique. It is worth noting that, in the new framework, the complexities of testing and recovering nonlinear superpolies are almost the same as those of testing and recovering linear superpolies. To demonstrate the effectiveness of our new attack framework, we carry out extensive experiments on Trivium, Kreyvium, and TriviA-SC-v2. We obtain several linear and quadratic superpolies for 802-round Trivium, which are the best experimental results against Trivium regarding the number of initialization rounds. For Kreyvium, it is shown that the probability of finding a quadratic superpoly using the new framework is twice as large as that of finding a linear superpoly. Hopefully, this new framework will provide some new insights on cube attacks against NFSR-based ciphers, and in particular make nonlinear superpolies potentially useful in future cube attacks.
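The linearity testing that underpins superpoly recovery can be illustrated with the classic BLR test over GF(2): a function $f$ is affine-linear exactly when $f(x) \oplus f(y) \oplus f(x \oplus y) \oplus f(0) = 0$ for all $x, y$. This is a generic toy (the paper's framework extends such tests to quadratic superpolies via linearization); the example functions are hypothetical superpolies.

```python
import random

def blr_linear_test(f, nvars, trials=200):
    """Probabilistic BLR test: f is linear over GF(2) iff
    f(x) ^ f(y) ^ f(x ^ y) ^ f(0) == 0 for all x, y."""
    for _ in range(trials):
        x = random.getrandbits(nvars)
        y = random.getrandbits(nvars)
        if f(x) ^ f(y) ^ f(x ^ y) ^ f(0):
            return False   # witness of nonlinearity found
    return True            # consistent with linearity (no proof)

linear = lambda v: (v ^ (v >> 2)) & 1      # x0 + x2 over GF(2): linear
quadratic = lambda v: (v & (v >> 1)) & 1   # x0*x1: a degree-2 "superpoly"

assert blr_linear_test(linear, 3)
assert not blr_linear_test(quadratic, 3)
```

Each trial costs two extra evaluations of the superpoly, which is why testing nonlinear superpolies can be made nearly as cheap as testing linear ones once a suitable linearization is in place.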
Last updated:  2018-02-14
Vectorizing Higher-Order Masking
Benjamin Grégoire, Kostas Papagiannopoulos, Peter Schwabe, Ko Stoffelen
The cost of higher-order masking as a countermeasure against side-channel attacks is often considered too high for practical scenarios, as protected implementations become very slow. At Eurocrypt 2017, the bounded moment leakage model was proposed to study the (theoretical) security of parallel implementations of masking schemes. Work at CHES 2017 then brought this to practice by considering an implementation of AES with 32 shares, bitsliced inside 32-bit registers of ARM Cortex-M processors. In this paper we show how the NEON vector instructions of larger ARM Cortex-A processors can be exploited to build much faster masked implementations of AES. Specifically, we present AES with 4 and 8 shares, which in theory provide security against 3rd and 7th-order attacks, respectively. The software is publicly available and optimized for the ARM Cortex-A8. We use refreshing and multiplication algorithms that are proven to be secure in the bounded moment leakage model and to be strongly non-interfering. Additionally, we perform a concrete side-channel evaluation on a BeagleBone Black, using a combination of test vector leakage assessment (TVLA), leakage certification tools and information-theoretic bounds.
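The masked multiplications at the heart of such implementations descend from the classic ISW AND gadget: given $n$ Boolean shares of bits $a$ and $b$, it outputs $n$ shares of $a \wedge b$ using fresh randomness for each share pair. A minimal bit-level sketch of that generic gadget follows (not the paper's vectorized NEON code, which packs many such bits into SIMD lanes).

```python
import secrets

def share(x, n):
    """Split a bit x into n Boolean shares."""
    s = [secrets.randbelow(2) for _ in range(n - 1)]
    last = x
    for v in s:
        last ^= v
    return s + [last]

def unshare(s):
    out = 0
    for v in s:
        out ^= v
    return out

def isw_and(a, b):
    """ISW secure multiplication on n Boolean shares of single bits:
    returns shares of (a AND b), using fresh randomness r_ij per pair."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbelow(2)
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c

for x in (0, 1):
    for y in (0, 1):
        assert unshare(isw_and(share(x, 4), share(y, 4))) == (x & y)
```

The vectorization discussed in the paper amounts to running many independent instances of such gadgets in parallel inside wide registers, so the per-bit cost of the randomness and partial products is amortized across lanes.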
Last updated:  2018-02-14
A First-Order SCA Resistant AES without Fresh Randomness
Felix Wegener, Amir Moradi
Since the advent of Differential Power Analysis (DPA) in the late 1990s protecting embedded devices against Side-Channel Analysis (SCA) attacks has been a major research effort. Even though many different first-order secure masking schemes are available today, when applied to the AES S-box they all require fresh random bits in every evaluation. As the quality criteria for generating random numbers on an embedded device are not well understood, an integrated Random Number Generator (RNG) can be the weak spot of any protected implementation and may invalidate an otherwise secure implementation. We present a new construction based on Threshold Implementations and Changing of the Guards to realize a first-order secure AES with zero per-round randomness. Hence, our design does not need a built-in RNG, thereby enhancing security and reducing the overhead.
Last updated:  2018-02-14
On the Complexity of Simulating Auxiliary Input
Yi-Hsiu Chen, Kai-Min Chung, Jyun-Jie Liao
We construct a simulator for the auxiliary-input simulation problem with complexity better than all previous results, and prove its optimality up to logarithmic factors by establishing a black-box lower bound. Specifically, let $\ell$ be the length of the auxiliary input and $\epsilon$ be the indistinguishability parameter. Our simulator is $\tilde{O}(2^{\ell}\epsilon^{-2})$ times more complicated than the distinguisher family. For the lower bound, we show that the complexity of any simulator relative to the distinguisher is at least $\Omega(2^{\ell}\epsilon^{-2})$, assuming the simulator uses the distinguishers in a black-box way and satisfies a mild restriction.
Last updated:  2021-11-02
On the Ring-LWE and Polynomial-LWE problems
Miruna Rosca, Damien Stehlé, Alexandre Wallet
The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, consisting in distinguishing from uniform a distribution depending on a secret belonging to the dual $O_K^\vee$ of the ring of integers $O_K$ of a specified number field $K$. In primal-RLWE, the secret instead belongs to $O_K$. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is not defined using a ring of integers $O_K$ of a number field $K$ but a polynomial ring $\mathbb{Z}[x]/f$ for a monic irreducible $f \in \mathbb{Z}[x]$. We show that there exist reductions between all of these six problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual to primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is non-uniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal RLWE and PLWE that work for a family of polynomials $f$ that is exponentially large as a function of $\deg f$ (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search to decision reduction for RLWE for arbitrary number fields. The reductions incur error rate increases that depend on intrinsic quantities related to $K$ and $f$.
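The polynomial ring underlying PLWE is easy to make concrete for the popular choice $f = x^n + 1$: arithmetic is "negacyclic", i.e. $x^n$ wraps around to $-1$. A toy schoolbook multiplication in $\mathbb{Z}_q[x]/(x^n+1)$ follows (illustrative only; real lattice schemes use NTT-based multiplication).

```python
def negacyclic_mul(a, b, q):
    """Schoolbook multiplication in Z_q[x] / (x^n + 1), the ring used in
    PLWE for f = x^n + 1. Coefficient lists are low-degree first."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:                 # x^n = -1: wrap around with a sign flip
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

# (1 + x) * x^3 = x^3 + x^4 = x^3 - 1  in  Z_17[x]/(x^4 + 1)
assert negacyclic_mul([1, 1, 0, 0], [0, 0, 0, 1], 17) == [16, 0, 0, 1]
```

The sign flip at the wrap-around is the only difference from plain cyclic convolution, and it is one concrete source of the "intrinsic quantities related to $f$" that the reductions' error growth depends on.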
Last updated:  2018-02-11
Full Indifferentiable Security of the Xor of Two or More Random Permutations Using the $\chi^2$ Method
Srimanta Bhattacharya, Mridul Nandi
The construction $\mathsf{XORP}$ (bitwise xor of outputs of two independent $n$-bit random permutations) has gained broad attention over the last two decades due to its high security. Very recently, Dai et al. (CRYPTO'17), by using a method which they term the Chi-squared method ($\chi^2$ method), have shown $n$-bit security of $\mathsf{XORP}$ when the underlying random permutations are kept secret from the adversary. In this work, we consider the case where the underlying random permutations are publicly available to the adversary. The best known security of $\mathsf{XORP}$ in this security game (also known as indifferentiable security) is $\frac{2n}{3}$-bit, due to Mennink et al. (ACNS'15). Later, Lee (IEEE-IT'17) proved a better $\frac{(k-1)n}{k}$-bit security for the general construction $\mathsf{XORP}[k]$, which returns the xor of $k$ ($\geq 2$) independent random permutations. However, the security was shown only for the cases where $k$ is an even integer. In this paper, we improve all these known bounds and prove full, i.e., $n$-bit (indifferentiable) security of $\mathsf{XORP}$ as well as $\mathsf{XORP}[k]$ for any $k$. Our main result is the $n$-bit security of $\mathsf{XORP}$, and we use the $\chi^2$ method to prove it.
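The $\mathsf{XORP}[k]$ construction itself is tiny to state: sample $k$ independent random permutations of $\{0,1\}^n$ and xor their outputs. A toy instantiation over a small domain (purely illustrative; the analysis concerns ideal permutations and large $n$):

```python
import random

def xorp(n_bits, k=2, seed=0):
    """XORP[k]: bitwise xor of k independent random permutations of
    {0,1}^n. Toy instantiation via shuffled lookup tables."""
    rng = random.Random(seed)
    perms = []
    for _ in range(k):
        p = list(range(2 ** n_bits))
        rng.shuffle(p)
        perms.append(p)
    def f(x):
        out = 0
        for p in perms:
            out ^= p[x]
        return out
    return f

F = xorp(4, k=2)
# Over the whole domain, each permutation's outputs xor to 0 (for n >= 2),
# hence so do the outputs of their xor:
acc = 0
for x in range(16):
    acc ^= F(x)
assert acc == 0
```

The xor-to-zero invariant above is one simple way $\mathsf{XORP}$ deviates from a truly random function; the security bounds quantify how little such deviations help a distinguisher.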
Last updated:  2018-02-11
Statistical Witness Indistinguishability (and more) in Two Messages
Yael Tauman Kalai, Dakshita Khurana, Amit Sahai
Two-message witness indistinguishable protocols were first constructed by Dwork and Naor (FOCS 00). They have since proven extremely useful in the design of several cryptographic primitives. However, so far no two-message arguments for NP provided statistical privacy against malicious verifiers. In this paper, we construct the first: - Two-message statistical witness indistinguishable (SWI) arguments for NP. - Two-message statistical zero-knowledge arguments for NP with super-polynomial simulation (Statistical SPS-ZK). These were previously believed to be impossible to construct via black-box reductions (Chung et al., ePrint 2012). - Two-message statistical distributional weak zero-knowledge (SwZK) arguments for NP with polynomial simulation, where the instance is sampled by the prover in the second round. These protocols are based on quasi-polynomial hardness of two-message oblivious transfer (OT) with game-based security, which can in turn be based on quasi-polynomial DDH or QR or N'th residuosity. We also demonstrate simple applications of these arguments to constructing more secure forms of oblivious transfer. Along the way, we show that the Kalai and Raz (Crypto 09) transform compressing interactive proofs to two-message arguments can be generalized to compress certain types of interactive arguments. We introduce and construct a new technical tool, which is a variant of extractable two-message statistically hiding commitments, by extending the work of Khurana and Sahai (FOCS 17). These techniques may be of independent interest.
Last updated:  2018-02-11
On the Existence of Three Round Zero-Knowledge Proofs
Nils Fleischhacker, Vipul Goyal, Abhishek Jain
We study the round complexity of zero-knowledge (ZK) proof systems. While five round ZK proofs for NP are known from standard assumptions [Goldreich-Kahan, J. Cryptology'96], Katz [TCC'08] proved that four rounds are insufficient for this task w.r.t. black-box simulation. In this work, we study the feasibility of ZK proofs using non-black-box simulation. Our main result is that three round private-coin ZK proofs for NP do not exist (even w.r.t. non-black-box simulation), under certain assumptions on program obfuscation. Our approach builds upon the recent work of Kalai et al. [Crypto'17] who ruled out constant round public-coin ZK proofs under the same assumptions as ours.
Last updated:  2018-04-27
Optimal Forgeries Against Polynomial-Based MACs and GCM
Atul Luykx, Bart Preneel
Polynomial-based authentication algorithms, such as GCM and Poly1305, have seen widespread adoption in practice. Due to their importance, a significant amount of attention has been given to understanding and improving both proofs and attacks against such schemes. At EUROCRYPT 2005, Bernstein published the best known analysis of the schemes when instantiated with PRPs, thereby establishing the most lenient limits on the amount of data the schemes can process per key. A long line of work, initiated by Handschuh and Preneel at CRYPTO 2008, finds the best known attacks, advancing our understanding of the fragility of the schemes. Yet surprisingly, no known attacks perform as well as the predicted worst-case attacks allowed by Bernstein's analysis, nor has there been any advancement in proofs improving Bernstein's bounds, and the gap between attacks and analysis is significant. We settle the issue by finding a novel attack against polynomial-based authentication algorithms using PRPs, and combine it with new analysis, to show that Bernstein's bound, and our attacks, are optimal.
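The schemes in question hash a message by evaluating a polynomial in the secret key over a finite field: $h = m_1 r + m_2 r^2 + \cdots \bmod p$. A toy sketch with illustrative parameters shows the algebraic fact that attacks exploit: two distinct messages collide only for keys that are roots of their (low-degree) difference polynomial.

```python
def poly_hash(msg_blocks, r, p):
    """GCM/Poly1305-style polynomial hash over a toy prime field:
    h = m_1 * r + m_2 * r^2 + ... mod p (parameters illustrative only)."""
    h = 0
    for m in reversed(msg_blocks):   # Horner evaluation of sum m_i * r^i
        h = (h + m) * r % p
    return h

p = 101
m1, m2 = [3, 7], [5, 9]              # two distinct 2-block messages
# Their difference is a nonzero polynomial of degree 2 in r, so they
# collide for at most 2 of the p possible hash keys:
collisions = sum(poly_hash(m1, r, p) == poly_hash(m2, r, p) for r in range(p))
assert collisions <= 2
```

Forgery attacks amount to choosing message pairs whose difference polynomial has many roots, and the security proofs bound how far such strategies can go; in practice $p$ is near $2^{128}$ and blocks come from AES-GCM or Poly1305 encodings.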
Last updated:  2018-02-11
The Wonderful World of Global Random Oracles
Jan Camenisch, Manu Drijvers, Tommaso Gagliardoni, Anja Lehmann, Gregory Neven
The random-oracle model by Bellare and Rogaway (CCS'93) is an indispensable tool for the security analysis of practical cryptographic protocols. However, the traditional random-oracle model fails to guarantee security when a protocol is composed with arbitrary protocols that use the same random oracle. Canetti, Jain, and Scafuro (CCS'14) put forth a global but non-programmable random oracle in the Generalized UC framework and showed that some basic cryptographic primitives with composable security can be efficiently realized in their model. Because their random-oracle functionality is non-programmable, there are many practical protocols that have no hope of being proved secure using it. In this paper, we study alternative definitions of a global random oracle and, perhaps surprisingly, show that these allow one to prove GUC-secure existing, very practical realizations of a number of essential cryptographic primitives including public-key encryption, non-committing encryption, commitments, Schnorr signatures, and hash-and-invert signatures. Some of our results hold generically for any suitable scheme proven secure in the traditional ROM, some hold for specific constructions only. Our results include many highly practical protocols, for example, the folklore commitment scheme H(m|r) (where m is a message and r is the random opening information) which is far more efficient than the construction of Canetti et al.
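The folklore commitment scheme H(m|r) mentioned at the end is simple enough to sketch directly; this is a minimal illustration with SHA-256 standing in for the random oracle, and the length prefix (our addition, not part of the folklore scheme as stated) just removes ambiguity between message and opening.

```python
import hashlib, secrets

def commit(m: bytes):
    """Folklore commitment H(m | r): hash the message with a random opening."""
    r = secrets.token_bytes(32)
    c = hashlib.sha256(len(m).to_bytes(8, "big") + m + r).digest()
    return c, r

def verify(c: bytes, m: bytes, r: bytes) -> bool:
    return c == hashlib.sha256(len(m).to_bytes(8, "big") + m + r).digest()

c, r = commit(b"hello")
assert verify(c, b"hello", r)
assert not verify(c, b"hellx", r)
```

Hiding and binding here lean entirely on modelling the hash as a random oracle; the paper's point is about which global random-oracle formalizations let such simple, practical schemes be proven GUC-secure.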
Last updated:  2018-07-19
An Efficiency-Preserving Transformation from Honest-Verifier Statistical Zero-Knowledge to Statistical Zero-Knowledge
Pavel Hubáček, Alon Rosen, Margarita Vald
We present an unconditional transformation from any honest-verifier statistical zero-knowledge (HVSZK) protocol to standard SZK that preserves round complexity and efficiency of both the verifier and the prover. This improves over currently known transformations, which either rely on some computational assumptions or introduce significant computational overhead. Our main conceptual contribution is the introduction of instance-dependent SZK proofs for NP, which serve as a building block in our transformation. Instance-dependent SZK for NP can be constructed unconditionally based on instance-dependent commitment schemes of Ong and Vadhan (TCC'08). As an additional contribution, we give a simple constant-round SZK protocol for Statistical-Difference resembling the textbook HVSZK proof of Sahai and Vadhan (J.ACM'03). This yields a conceptually simple constant-round protocol for all of SZK.
Last updated:  2019-10-22
OPAQUE: An Asymmetric PAKE Protocol Secure Against Pre-Computation Attacks
Stanislaw Jarecki, Hugo Krawczyk, Jiayu Xu
Password-Authenticated Key Exchange (PAKE) protocols allow two parties that only share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) strengthens this notion for the more common client-server setting where the server stores a mapping of the password and security is required even upon server compromise, that is, the only allowed attack in this case is an (inevitable) offline exhaustive dictionary attack against individual user passwords. Unfortunately, current aPAKE protocols (that dispense with the use of servers' public keys) allow for pre-computation attacks that lead to the instantaneous compromise of user passwords upon server compromise, thus forgoing much of the intended aPAKE security. Indeed, these protocols use - in essential ways - deterministic password mappings or random "salt" transmitted in the clear from servers to users, and thus are vulnerable to pre-computation attacks. We initiate the study of "Strong aPAKE" protocols that are secure as aPAKEs but are also secure against pre-computation attacks. We formalize this notion in the Universally Composable (UC) setting and present two modular constructions using an Oblivious PRF as a main tool. The first builds a Strong aPAKE from any aPAKE (which in turn can be constructed from any PAKE [GMR'06]), while the second builds a Strong aPAKE from any authenticated key-exchange protocol secure against reverse impersonation (a.k.a. KCI). Using the latter transformation, we show a practical instantiation of a UC-secure Strong aPAKE in the Random Oracle model. 
The protocol ("OPAQUE") consists of 2 messages (3 with mutual authentication), requires 3 and 4 exponentiations for server and client, respectively (2 to 4 of which can be fixed-base depending on optimizations), provides forward secrecy, is PKI-free, supports user-side hash iterations, has a built-in facility for password-based storage and retrieval of secrets and credentials, and accommodates a user-transparent server-side threshold implementation.
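The main tool above, an Oblivious PRF, can be sketched as a toy Diffie-Hellman OPRF: the client blinds a hash of the password, the server exponentiates with its key, and the client unblinds, so the server never sees the password and the client learns only $F_k(pw)$. This is a hedged illustration of the OPRF primitive, not the OPAQUE protocol itself; the tiny group and the "hash to exponent" are stand-ins (real protocols use large groups and a proper hash-to-group map). Requires Python 3.8+ for `pow(r, -1, q)`.

```python
import secrets

# Toy Schnorr group (p = 2q + 1; g generates the order-q subgroup).
p, q, g = 23, 11, 2

def H(pw: str) -> int:
    """Toy 'hash password to nonzero exponent' (illustrative stand-in)."""
    return sum(pw.encode()) % (q - 1) + 1

def oprf_eval(pw, k):
    """One toy DH-OPRF evaluation: client learns F_k(pw), server learns nothing."""
    e = H(pw)
    r = secrets.randbelow(q - 1) + 1
    blinded = pow(pow(g, e, p), r, p)   # client -> server: H'(pw)^r
    evaluated = pow(blinded, k, p)      # server -> client: (H'(pw)^r)^k
    r_inv = pow(r, -1, q)               # exponents live mod the group order q
    return pow(evaluated, r_inv, p)     # unblind: F_k(pw) = H'(pw)^k

k = 7                                   # server's OPRF key
out = oprf_eval("hunter2", k)
assert out == pow(g, (H("hunter2") * k) % q, p)
```

Because the blinding factor $r$ is fresh and uniform, the server's view is independent of the password; this is what removes the cleartext salt that enables pre-computation attacks against older aPAKEs.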
Last updated:  2018-11-06
Untagging Tor: A Formal Treatment of Onion Encryption
Jean Paul Degabriele, Martijn Stam
Tor is a primary tool for maintaining anonymity online. It provides a low-latency, circuit-based, bidirectional secure channel between two parties through a network of onion routers, with the aim of obscuring exactly who is talking to whom, even to adversaries controlling part of the network. Tor relies heavily on cryptographic techniques, yet its onion encryption scheme is susceptible to tagging attacks (Fu and Ling, 2009), which allow an active adversary controlling the first and last node of a circuit to deanonymize with near-certainty. This contrasts with less active traffic correlation attacks, where the same adversary can at best deanonymize with high probability. The Tor project has been actively looking to defend against tagging attacks and its most concrete alternative is proposal 261, which specifies a new onion encryption scheme based on a variable-input-length tweakable cipher. We provide a formal treatment of low-latency, circuit-based onion encryption, relaxed to the unidirectional setting, by expanding existing secure channel notions to the new setting and introducing circuit hiding to capture the anonymity aspect of Tor. We demonstrate that circuit hiding prevents tagging attacks and show proposal 261's relay protocol is circuit hiding and thus resistant against tagging attacks.
Last updated:  2018-02-11
Boomerang Connectivity Table: A New Cryptanalysis Tool
Carlos Cid, Tao Huang, Thomas Peyrin, Yu Sasaki, Ling Song
A boomerang attack is a cryptanalysis framework that regards a block cipher $E$ as the composition of two sub-ciphers $E_1\circ E_0$ and builds a particular characteristic for $E$ with probability $p^2q^2$ by combining differential characteristics for $E_0$ and $E_1$ with probability $p$ and $q$, respectively. Crucially the validity of this figure is under the assumption that the characteristics for $E_0$ and $E_1$ can be chosen independently. Indeed, Murphy has shown that independently chosen characteristics may turn out to be incompatible. On the other hand, several researchers observed that the probability can be improved to $p$ or $q$ around the boundary between $E_0$ and $E_1$ by considering a positive dependency of the two characteristics, e.g.~the ladder switch and S-box switch by Biryukov and Khovratovich. This phenomenon was later formalised by Dunkelman et al.~as a sandwich attack that regards $E$ as $E_1\circ E_m \circ E_0$, where $E_m$ satisfies some differential propagation among four texts with probability $r$, and the entire probability is $p^2q^2r$. In this paper, we revisit the issue of dependency of two characteristics in $E_m$, and propose a new tool called Boomerang Connectivity Table (BCT), which evaluates $r$ in a systematic and easy-to-understand way when $E_m$ is composed of a single S-box layer. With the BCT, previous observations on the S-box including the incompatibility, the ladder switch and the S-box switch are represented in a unified manner. Moreover, the BCT can detect a new switching effect, which shows that the probability around the boundary may be even higher than $p$ or $q$. To illustrate the power of the BCT-based analysis, we improve boomerang attacks against Deoxys-BC, and disclose the mechanism behind an unsolved probability amplification for generating a quartet in SKINNY. Lastly, we discuss the issue of searching for S-boxes having good BCT and extending the analysis to modular addition.
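As a concrete illustration (not taken from the paper), a BCT can be computed directly from its definition for any small S-box; the 4-bit S-box below is the PRESENT S-box, used here purely as an example:

```python
# 4-bit S-box of PRESENT, used purely as a toy example (not from the paper).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(y) for y in range(16)]  # inverse S-box

def bct(sbox, inv):
    """BCT(di, do) = #{x : S^{-1}(S(x) ^ do) ^ S^{-1}(S(x ^ di) ^ do) = di}."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for di in range(n):
        for do in range(n):
            for x in range(n):
                if inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di:
                    table[di][do] += 1
    return table

T = bct(SBOX, INV)
# The first row and column always equal 2^n (here 16): these are the
# trivial switches, where the probability r at the boundary is 1.
assert all(T[0][j] == 16 for j in range(16))
assert all(T[i][0] == 16 for i in range(16))
```

Entries with BCT(di, do) larger than the corresponding DDT entry are exactly where the new switching effect raises the probability around the E_0/E_1 boundary.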
Last updated:  2018-06-27
DelegaTEE: Brokered Delegation Using Trusted Execution Environments
Sinisa Matetic, Moritz Schneider, Andrew Miller, Ari Juels, Srdjan Capkun
We introduce a new concept called brokered delegation. Brokered delegation allows users to flexibly delegate credentials and rights for a range of service providers to other users and third parties. We explore how brokered delegation can be implemented using novel trusted execution environments (TEEs). We introduce a system called DelegaTEE that enables users (Delegatees) to log into different online services using the credentials of other users (Owners). Credentials in DelegaTEE are never revealed to Delegatees and Owners can restrict access to their accounts using a range of rich, contextually dependent delegation policies. DelegaTEE fundamentally shifts existing access control models for centralized online services. It does so by using TEEs to permit access delegation at the user's discretion. DelegaTEE thus effectively reduces mandatory access control (MAC) in this context to discretionary access control (DAC). The system demonstrates the significant potential for TEEs to create new forms of resource sharing around online services without the direct support from those services. We present a full implementation of DelegaTEE using Intel SGX and demonstrate its use in four real-world applications: email access (SMTP/IMAP), restricted website access using a HTTPS proxy, e-banking/credit card, and a third-party payment system (PayPal).
Last updated:  2018-02-13
The Missing Difference Problem, and its Applications to Counter Mode Encryption
Gaëtan Leurent, Ferdinand Sibleyras
The counter mode (CTR) is a simple, efficient and widely used encryption mode using a block cipher. It comes with a security proof that guarantees no attacks up to the birthday bound (i.e. as long as the number of encrypted blocks $\sigma$ satisfies $\sigma \ll 2^{n/2}$), and a matching attack that can distinguish plaintext/ciphertext pairs from random using about $2^{n/2}$ blocks of data. The main goal of this paper is to study attacks against the counter mode beyond this simple distinguisher. We focus on message recovery attacks, with realistic assumptions about the capabilities of an adversary, and evaluate the full time complexity of the attacks rather than just the query complexity. Our main result is an attack to recover a block of message with complexity $\tilde{\mathcal{O}}(2^{n/2})$. This shows that the actual security of CTR is similar to that of CBC, where collision attacks are well known to reveal information about the message. To achieve this result, we study a simple algorithmic problem related to the security of the CTR mode: the missing difference problem. We give efficient algorithms for this problem in two practically relevant cases: where the missing difference is known to be in some linear subspace, and when the amount of data is higher than strictly required. As a further application, we show that the second algorithm can also be used to break some polynomial MACs such as GMAC and Poly1305, with a universal forgery attack with complexity $\tilde{\mathcal{O}}(2^{2n/3})$.
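A minimal sketch of the missing difference problem with toy parameters (all names and sizes below are illustrative, not the paper's algorithms): since the keystream blocks of CTR are distinct, the XOR of a ciphertext of a repeated secret block with a known keystream block from a different counter can never equal the secret itself, so sieving out every observed difference isolates the secret:

```python
import random

def missing_difference(A, B, n):
    """Brute-force sieve: discard every difference a ^ b that occurs.
    With enough data only the secret difference survives.  (The paper's
    algorithms solve this far more efficiently than this quadratic scan.)"""
    candidates = set(range(1 << n))
    for a in A:
        for b in B:
            candidates.discard(a ^ b)
    return candidates

# Toy CTR instance with block size n = 8 bits.
n, secret = 8, 0x5A
random.seed(1)
keystream = list(range(1 << n))
random.shuffle(keystream)                  # stand-in for E(ctr): distinct blocks
A = [k ^ secret for k in keystream[:200]]  # ciphertexts of a fixed secret block
B = keystream[200:]                        # known keystream blocks, disjoint counters
assert missing_difference(A, B, n) == {secret}
```

Because the keystream values are pairwise distinct, `secret` never appears as a difference between the two lists, while every other value does once enough data is collected.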
Last updated:  2018-02-11
Correlation Cube Attacks: From Weak-Key Distinguisher to Key Recovery
Meicheng Liu, Jingchun Yang, Wenhao Wang, Dongdai Lin
In this paper, we describe a new variant of cube attacks called the correlation cube attack. The new attack recovers the secret key of a cryptosystem by exploiting conditional correlation properties between the superpoly of a cube and a specific set of low-degree polynomials that we call a basis, which satisfies that the superpoly is a zero constant when all the polynomials in the basis are zeros. We present a detailed procedure of the correlation cube attack for the general case, including how to find a basis of the superpoly of a given cube. One of the most significant advantages of this new analysis technique over other variants of cube attacks is that it converts a weak-key distinguisher into a key recovery attack. As an illustration, we apply the attack to round-reduced variants of the stream cipher Trivium. Based on the tool of numeric mapping introduced by Liu at CRYPTO 2017, we develop a specific technique to efficiently find a basis of the superpoly of a given cube as well as a large set of potentially good cubes used in the attack on Trivium variants, and further set up deterministic or probabilistic equations on the key bits according to the conditional correlation properties between the superpolys of the cubes and their bases. For a variant when the number of initialization rounds is reduced from 1152 to 805, we can recover about 7-bit key information on average with time complexity $2^{44}$, using $2^{45}$ keystream bits and preprocessing time $2^{51}$. For a variant of Trivium reduced to 835 rounds, we can recover about 5-bit key information on average with the same complexity. All the attacks are practical and fully verified by experiments. To the best of our knowledge, they are thus far the best known key recovery attacks for these variants of Trivium, and this is the first time that a weak-key distinguisher on the Trivium stream cipher has been converted into a key recovery attack.
Last updated:  2019-04-30
ROYALE: A Framework for Universally Composable Card Games with Financial Rewards and Penalties Enforcement
Bernardo David, Rafael Dowsley, Mario Larangeira
While many tailor-made card game protocols are known, the vast majority of them suffer from three main issues: lack of mechanisms for distributing financial rewards and punishing cheaters, lack of composability guarantees, and little flexibility, focusing on the specific game of poker. Even though folklore holds that poker protocols can be used to play any card game, this conjecture remains unproven and, in fact, does not hold for a number of protocols (including recent results). We both tackle the problem of constructing protocols for general card games and initiate a treatment of such protocols in the Universal Composability (UC) framework, introducing an ideal functionality that captures general card games constructed from a set of core card operations. Based on this formalism, we introduce Royale, the first UC-secure protocol for general card games that supports enforcement of financial rewards and penalties. We remark that Royale also yields the first UC-secure poker protocol. Interestingly, Royale performs better than most previous works (that do not have composability guarantees), which we highlight through a detailed concrete complexity analysis and benchmarks from a prototype implementation.
Last updated:  2018-04-27
A New Approach to Black-Box Concurrent Secure Computation
Sanjam Garg, Susumu Kiyoshima, Omkant Pandey
We consider the task of constructing concurrently composable protocols for general secure computation by making only black-box use of underlying cryptographic primitives. Existing approaches for this task first construct a black-box version of CCA-secure commitments which provide a strong form of concurrent security to the committed value(s). This strong form of security is then crucially used to construct higher level protocols such as concurrently secure OT/coin-tossing (and eventually all functionalities). This work explores a fresh approach. We first aim to construct a concurrently-secure OT protocol whose concurrent security is proven directly using concurrent simulation techniques; in particular, it does not rely on the usual ``non-polynomial oracles'' of CCA-secure commitments. The notion of concurrent security we target is super-polynomial simulation (SPS). We show that such an OT protocol can be constructed from polynomial hardness assumptions in a black-box manner, and within a constant number of rounds. In fact, we only require the existence of (constant round) semi-honest OT and standard collision-resistant hash functions. Next, we show that such an OT protocol is sufficient to obtain SPS-secure (concurrent) multiparty computation (MPC) for general functionalities. This transformation does not require any additional assumptions; it also maintains the black-box nature as well as the constant round feature of the original OT protocol. Prior to our work, the only known black-box construction of constant-round concurrently composable MPC required stronger assumptions; namely, verifiable perfectly binding homomorphic commitment schemes and PKE with oblivious public-key generation.
Last updated:  2018-05-09
Memory Lower Bounds of Reductions Revisited
Yuyu Wang, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
In Crypto 2017, Auerbach et al. initiated the study of memory-tight reductions and proved two negative results on the memory-tightness of restricted black-box reductions from multi-challenge security to single-challenge security for signatures and an artificial hash function. In this paper, we revisit the results of Auerbach et al. and show that for a large class of reductions treating multi-challenge security, it is impossible to avoid loss of memory-tightness unless we sacrifice running-time efficiency. Specifically, we show three lower bound results. Firstly, we show a memory lower bound for natural black-box reductions from the multi-challenge unforgeability of unique signatures to any computational assumption. Then we show a lower bound for restricted reductions from multi-challenge security to single-challenge security for a wide class of cryptographic primitives with unique keys in the multi-user setting. Finally, we extend the lower bound result shown by Auerbach et al. treating a hash function to one treating any hash function with a large domain.
Last updated:  2018-06-04
Constrained PRFs for NC1 in Traditional Groups
Nuttapong Attrapadung, Takahiro Matsuda, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
We propose new constrained pseudorandom functions (CPRFs) in traditional groups. By traditional groups, we mean multiplicative cyclic groups of prime order that were widely used in the 1980s and 1990s (sometimes called ``pairing-free'' groups). Our main constructions are as follows. - We propose a selectively single-key secure CPRF for circuits with depth $O(\log n)$ (that is, $\textbf{NC}^1$ circuits) in traditional groups, where $n$ is the input size. It is secure under the $L$-decisional Diffie-Hellman inversion ($L$-DDHI) assumption in the group of quadratic residues $\mathbb{QR}_q$ and the decisional Diffie-Hellman (DDH) assumption in a traditional group of order $q$ in the standard model. - We propose a selectively single-key private bit-fixing CPRF in traditional groups. It is secure under the DDH assumption in any prime-order cyclic group in the standard model. - We propose an adaptively single-key secure CPRF for $\textbf{NC}^1$ and a private bit-fixing CPRF in the random oracle model. To achieve security in the standard model, we develop a new technique using correlated-input secure hash functions.
Last updated:  2018-02-11
Bootstrapping for Approximate Homomorphic Encryption
Jung Hee Cheon, Kyoohyung Han, Andrey Kim, Miran Kim, Yongsoo Song
This paper extends the leveled homomorphic encryption scheme for approximate arithmetic of Cheon et al. (ASIACRYPT 2017) to a fully homomorphic encryption, i.e., we propose a new technique to refresh low-level ciphertexts based on Gentry's bootstrapping procedure. The modular reduction operation is the main bottleneck in the homomorphic evaluation of the decryption circuit. We exploit a scaled sine function as an approximation of the modular reduction operation and present an efficient evaluation strategy. Our method requires only one homomorphic multiplication for each iteration, so the total computation cost grows linearly with the depth of the decryption circuit. We also show how to recrypt packed ciphertexts on the RLWE construction with an open-source implementation. For example, it takes 139.8 seconds to refresh a ciphertext that encrypts 128 numbers with 12 bits of precision, yielding an amortized rate of 1.1 seconds per slot.
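The core numerical idea behind the sine approximation can be checked in a few lines (a plaintext-only sketch with illustrative parameters, not the paper's homomorphic evaluation): when the reduced value is small relative to the modulus, a scaled sine closely tracks centered modular reduction.

```python
import math

def mod_centered(t, q):
    """Exact centered modular reduction: result in [-q/2, q/2)."""
    r = t % q
    return r - q if r >= q / 2 else r

def sine_mod(t, q):
    """Scaled sine approximation of modular reduction, accurate when
    |t mod q| is small compared with q (the regime used in bootstrapping)."""
    return q / (2 * math.pi) * math.sin(2 * math.pi * t / q)

q = 2 ** 20
for k in range(-5, 6):
    t = k * q + 37  # inputs lying close to a multiple of q
    assert abs(sine_mod(t, q) - mod_centered(t, q)) < 1e-3
```

The sine is a smooth periodic function, so (unlike the true modular reduction) it admits a low-depth polynomial approximation that a homomorphic scheme can evaluate.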
Last updated:  2019-10-15
A General Framework for the Related-key Linear Attack against Block Ciphers with Linear Key Schedules
Jung-Keun Lee, Bonwook Koo, Woo-Hwan Kim
We present a general framework for the related-key linear attack that can be applied to iterative block ciphers with linear key schedules. The attack utilizes a newly introduced related-key linear approximation that is obtained directly from a linear trail. The attack makes use of known related-key data consisting of triplets of a plaintext, a ciphertext, and a key difference such that the ciphertext is the encrypted value of the plaintext under the key that is the xor of the key to be recovered and the specified key difference. If such a block cipher has a linear trail with linear correlation $\epsilon$, it admits attacks with related-key data of size $\epsilon^{-2}$ just as in the case of classical Matsui's algorithms. But since the attack makes use of related-key data, the attacker can use a linear trail with squared correlation less than $2^{-n}$, $n$ being the block size, in case the key size is larger than $n$. Moreover, the standard key hypotheses seem to be appropriate even when the trail is not dominant, as validated by experiments. The attack can be applied in two ways. First, using a linear trail with squared correlation smaller than $2^{-n}$, one can get an effective attack covering more rounds than existing attacks against some ciphers, such as Simon48/96, Simon64/128 and Simon128/256. Secondly, using a trail with large squared correlation, one can use related-key data for key recovery even when the data is not suitable for existing linear attacks.
Last updated:  2018-02-11
Adaptively Secure Garbling with Near Optimal Online Complexity
Sanjam Garg, Akshayaram Srinivasan
We construct an adaptively secure garbling scheme with an online communication complexity of $n+m+\mathsf{poly}(\log |C|, \sec)$ where $C: \{0,1\}^n \rightarrow \{0,1\}^{m}$ is the circuit being garbled, and $\sec$ is the security parameter. The security of our scheme can be based on (polynomial hardness of) the Computational Diffie-Hellman (CDH) assumption, or the Factoring assumption or the Learning with Errors assumption. This is nearly the best achievable in the standard model (i.e., without random oracles) as the online communication complexity must be larger than both $n$ and $m$. The online computational complexity of our scheme is $O(n+m)+\mathsf{poly}(\log |C|, \sec)$. Previously known standard model adaptively secure garbling schemes had asymptotically worse online cost or relied on exponentially hard computational assumptions.
Last updated:  2019-11-14
Analysis of Error-Correcting Codes for Lattice-Based Key Exchange
Tim Fritzmann, Thomas Pöppelmann, Johanna Sepulveda
Lattice problems allow the construction of very efficient key exchange and public-key encryption schemes. When using the Learning with Errors (LWE) or Ring-LWE (RLWE) problem such schemes exhibit an interesting trade-off between decryption error rate and security. The reason is that secret and error distributions with a larger standard deviation lead to better security but also increase the chance of decryption failures. As a consequence, various message/key encoding or reconciliation techniques have been proposed that usually encode one payload bit into several coefficients. In this work, we analyze how error-correcting codes can be used to enhance the error resilience of protocols like NewHope, Frodo, or Kyber. For our case study, we focus on the recently introduced NewHope Simple and propose and analyze four different options for error correction: i) BCH code; ii) combination of BCH code and additive threshold encoding; iii) LDPC code; and iv) combination of BCH and LDPC code. We show that lattice-based cryptography can profit from classical and modern codes by combining BCH and LDPC codes. This way we achieve quasi-error-free communication and an increase of the estimated post-quantum bit-security level by 20.39% and a decrease of the communication overhead by 12.8%.
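A toy version of additive threshold encoding (illustrative parameters; the actual NewHope Simple encoding differs in detail): a payload bit is repeated across several coefficients and recovered by comparing the total circular distance to the two codewords, so moderate per-coefficient noise averages out.

```python
import random

q, r = 12289, 4   # NewHope-like modulus; r coefficients per payload bit (illustrative)

def cdist(a, b):
    """Circular distance modulo q."""
    d = abs(a - b) % q
    return min(d, q - d)

def encode(bit):
    # Bit 0 -> r coefficients near 0; bit 1 -> r coefficients near q/2.
    return [(bit * (q // 2)) % q] * r

def decode(coeffs):
    # Threshold decoding: pick the bit whose codeword is closer in total
    # circular distance, tolerating noise on each coefficient.
    d0 = sum(cdist(c, 0) for c in coeffs)
    d1 = sum(cdist(c, q // 2) for c in coeffs)
    return 0 if d0 < d1 else 1

random.seed(7)
for bit in (0, 1):
    noisy = [(c + random.randint(-q // 8, q // 8)) % q for c in encode(bit)]
    assert decode(noisy) == bit
```

Each coefficient tolerates noise up to q/4 on its own; summing distances over r coefficients is what lets the decoder survive occasional larger errors, which is exactly the error-rate/security trade-off the paper pushes further with BCH and LDPC codes.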
Last updated:  2021-03-16
Another Step Towards Realizing Random Oracles: Non-Malleable Point Obfuscation
Ilan Komargodski, Eylon Yogev
The random oracle paradigm allows us to analyze the security of protocols and constructions in an idealized model, where all parties have access to a truly random function. This is one of the most popular and well-studied models in cryptography. However, being such a strong idealized model, it is known to be susceptible to various weaknesses when implemented naively in ``real-life'', as shown by Canetti, Goldreich and Halevi (J. ACM 2004). As a counter-measure, one could try to identify and implement only one or few of the properties a random oracle possesses that are needed for a specific setting. Such a systematic study was initiated by Canetti (CRYPTO 1997), who showed how to implement the property that the output of the function does not reveal anything regarding the input by constructing a point function obfuscator. This property turned out to suffice in many follow-up works and applications. In this work, we tackle another natural property of random oracles and implement it in the standard model. The property we focus on is non-malleability, where it is required that the output on an input cannot be used to generate an output on any related point. We construct a point obfuscator that is both hiding (a la Canetti) and is non-malleable for a non-trivial class of mauling functions. Our construction does not use heavy cryptographic machinery (such as zero-knowledge proofs) and is comparable to that of Canetti in terms of time complexity and obfuscation size. The security of our construction relies on variants of the DDH and power-DDH assumptions. On the technical side, we introduce a new technique for proving security of a construction based on a DDH-like assumption. We call this technique ``double-exponentiation'' and believe it will be useful in the future.
Last updated:  2018-02-08
The Complexity of Multiparty PSM Protocols and Related Models
Amos Beimel, Eyal Kushilevitz, Pnina Nissim
We study the efficiency of computing arbitrary k-argument functions in the Private Simultaneous Messages (PSM) model of (Feige et al. STOC'94, Ishai and Kushilevitz ISTCS'97). This question was recently studied by (Beimel et al. TCC'14), in the two-party case (k = 2). We tackle this question in the general case of PSM protocols for k > 2 parties. Our motivation is two-fold: On one hand, there are various applications (old and new) of PSM protocols for constructing other cryptographic primitives, where obtaining more efficient PSM protocols implies more efficient primitives. On the other hand, improved PSM protocols are an interesting goal in their own right. In particular, we pay careful attention to the case of a small number of parties (e.g., k = 3, 4, 5), which may be especially interesting in practice, and optimize our protocols for those cases. Our new upper bounds include a k-party PSM protocol, for any k > 2 and any function f : [N]^k --> {0,1}, of complexity O(poly(k) N^{k/2}) (compared to the previous upper bound of O(poly(k) N^{k-1})), and even better bounds for small values of k; e.g., an O(N) PSM protocol for the case k = 3. We also handle the more involved case where different parties have inputs of different sizes, which is useful both in practice and for applications. As applications, we obtain more efficient Non-Interactive secure Multi-Party (NIMPC) protocols (a variant of PSM, where some of the parties may collude with the referee (Beimel et al. CRYPTO'14)), improved ad-hoc PSM protocols (another variant of PSM, where the subset of participating parties is not known in advance (Beimel et al. ITCS'16, Beimel et al. EUROCRYPT'17)), secret-sharing schemes for strongly-homogeneous access structures with smaller share size than previously known, and better homogeneous distribution designs (Beimel et al. ITCS'16), a primitive with many cryptographic applications in its own right.
Last updated:  2018-02-09
Sustained Space Complexity
Joel Alwen, Jeremiah Blocki, Krzysztof Pietrzak
Memory-hard functions (MHF) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains. Alwen and Serbinenko [STOC'15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient, as it fails to capture possible time-memory trade-offs; as memory cost doesn't scale linearly, functions with the same cmc could still have very different actual hardware cost. In this work we address this problem, and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using $n$ steps and $O(n/\log(n))$ memory, in each step making one query to the (fixed-input length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs $\Omega(n/\log(n))$ memory for $\Omega(n)$ steps. As has been done for various notions (including cmc) before, we reduce the task of constructing an MHF with high sustained-memory complexity to proving pebbling lower bounds on DAGs.
Our main technical contribution is the construction of a family of DAGs on $n$ nodes with constant indegree and high ``sustained-space complexity'', meaning that any parallel black-pebbling strategy requires $\Omega(n/\log(n))$ pebbles for at least $\Omega(n)$ steps. Along the way we construct a family of maximally ``depth-robust'' DAGs with maximum indegree $O(\log n)$, improving upon the construction of Mahmoody et al. [ITCS'13] which had maximum indegree $O\left(\log^2 n \cdot \mathsf{polylog}(\log n)\right)$.
Last updated:  2018-02-08
Polynomial Time Bounded Distance Decoding near Minkowski’s Bound in Discrete Logarithm Lattices
Léo Ducas, Cécile Pierrot
We propose a concrete family of dense lattices of arbitrary dimension n in which the lattice Bounded Distance Decoding (BDD) problem can be solved in deterministic polynomial time. This construction is directly adapted from the Chor-Rivest cryptosystem (1988). The lattice construction needs discrete logarithm computations that can be made in deterministic polynomial time for well-chosen parameters. Each lattice comes with a deterministic polynomial time decoding algorithm able to decode up to a large radius: namely, we reach a decoding radius within an O(log n) factor of Minkowski's bound, for both the $\ell_1$ and $\ell_2$ norms.
Last updated:  2018-02-08
Fast Near Collision Attack on the Grain v1 Stream Cipher
Bin Zhang, Chao Xu, Willi Meier
Modern stream ciphers often adopt a large internal state to resist various attacks, where the cryptanalysts have to deal with a large number of variables when mounting state recovery attacks. In this paper, we propose a general new cryptanalytic method on stream ciphers, called fast near collision attack, to address this situation. It combines a near collision property with the divide-and-conquer strategy so that only subsets of the internal state, associated with different keystream vectors, are recovered first and merged carefully later to retrieve the full large internal state. A self-contained method is introduced and improved to derive the target subset of the internal state from the partial state difference efficiently. As an application, we propose a new key recovery attack on Grain v1, one of the $7$ finalists selected by the eSTREAM project, in the single-key setting. Both the pre-computation and the online phases are tailored according to its internal structure, to provide an attack for any fixed IV in $2^{75.7}$ cipher ticks after the pre-computation of $2^{8.1}$ cipher ticks, given $2^{28}$-bit memory and about $2^{19}$ keystream bits. Practical experiments on Grain v1 itself whenever possible and on an 80-bit reduced version confirmed our results.
Last updated:  2019-11-18
The Communication Complexity of Private Simultaneous Messages, Revisited
Benny Applebaum, Thomas Holenstein, Manoj Mishra, Ofer Shayevitz
Private Simultaneous Message (PSM) protocols were introduced by Feige, Kilian and Naor (STOC '94) as a minimal non-interactive model for information-theoretic three-party secure computation. While it is known that every function $f:\{0,1\}^k\times \{0,1\}^k \rightarrow \{0,1\}$ admits a PSM protocol with exponential communication of $2^{k/2}$ (Beimel et al., TCC '14), the best known (non-explicit) lower-bound is $3k-O(1)$ bits. To prove this lower-bound, FKN identified a set of simple requirements, showed that any function that satisfies these requirements is subject to the $3k-O(1)$ lower-bound, and proved that a random function is likely to satisfy the requirements. We revisit the FKN lower-bound and prove the following results: (Counterexample) We construct a function that satisfies the FKN requirements but has a PSM protocol with communication of $2k+O(1)$ bits, revealing a gap in the FKN proof. (PSM lower-bounds) We show that, by imposing additional requirements, the FKN argument can be fixed leading to a $3k-O(\log k)$ lower-bound for a random function. We also get a similar lower-bound for a function that can be computed by a polynomial-size circuit (or even polynomial-time Turing machine under standard complexity-theoretic assumptions). This yields the first non-trivial lower-bound for an explicit Boolean function partially resolving an open problem of Data, Prabhakaran and Prabhakaran (Crypto '14, IEEE Information Theory '16). We further extend these results to the setting of imperfect PSM protocols which may have small correctness or privacy error. (CDS lower-bounds) We show that the original FKN argument applies (as is) to some weak form of PSM protocols which are strongly related to the setting of Conditional Disclosure of Secrets (CDS). This connection yields a simple combinatorial criterion for establishing linear $\Omega(k)$-bit CDS lower-bounds. 
As a corollary, we settle the complexity of the Inner Product predicate resolving an open problem of Gay, Kerenidis, and Wee (Crypto '15).
Last updated:  2018-02-14
Conjecturally Superpolynomial Lower Bound for Share Size
Shahram Khazaei
Information ratio, which measures the maximum/average share size per shared bit, is a criterion of efficiency of a secret sharing scheme. It is generally believed that there exists a family of access structures such that the information ratio of any secret sharing scheme realizing it is $2^{\Omega(n)}$, where the parameter $n$ stands for the number of participants. The best known lower bound, due to Csirmaz (1994), is $\Omega(n/\log n)$. Closing this gap is a long-standing open problem in cryptology. In this paper, using a technique called \emph{substitution}, we recursively construct a family of access structures having information ratio $n^{\frac{\log n}{\log \log n}}$, assuming a well-stated information-theoretic conjecture is true. Our conjecture emerges after introducing the notion of \emph{convec set} for an access structure, a subset of $n$-dimensional real space. We prove some topological properties about convec sets and raise several open problems.
Last updated:  2019-09-24
MILP-Aided Related-Tweak/Key Impossible Differential Attack and Its applications to QARMA, Joltik-BC
Rui Zong, Xiaoyang Dong
In this paper, we study the relation of single-key impossible differentials to related-tweakey/key ones and propose an algorithm that can efficiently derive longer related-tweakey/key impossible differentials from single-key ones. With application of the MILP technique, the algorithm can be converted into an automatic tool for searching for related-tweakey/key impossible differentials. We use this automatic tool to analyze QARMA-64 and give an 11-round key recovery attack, which attacks one more round than the best previous result. Moreover, we also analyze Joltik-BC-128, the internal tweakable block cipher of Joltik, an authenticated encryption candidate of the CAESAR competition; our result attacks two more rounds than the result given by the cipher designers.
Last updated:  2019-10-04
Symbolic security of garbled circuits
Baiyu Li, Daniele Micciancio
We present the first computationally sound symbolic analysis of Yao's garbled circuit construction for secure two-party computation. Our results include an extension of the symbolic language for cryptographic expressions from previous work on computationally sound symbolic analysis, and a soundness theorem for this extended language. We then demonstrate how the extended language can be used to formally specify not only the garbled circuit construction, but also the formal (symbolic) simulator required by the definition of security. The correctness of the simulation is proved in a purely syntactical way, within the symbolic model of cryptography, and then translated into a concrete computational indistinguishability statement via our general computational soundness theorem. We also implement our symbolic security framework and the garbling scheme in Haskell, and our experiments show that the symbolic analysis performs well and can be completed within several seconds even for large circuits that are useful for real-world applications.
Last updated:  2018-02-07
A Reaction Attack on LEDApkc
Tomas Fabsic, Viliam Hromada, Pavol Zajac
We propose a new reaction attack on the public-key cryptosystem LEDApkc. The adversary uses the decoding failure rate (DFR) analysis to learn information about the secret masking matrix $Q$. Provided the adversary learns information about $Q$ within $10^4\times \text{DFR}^{-1}$ decryptions (as prescribed by LEDApkc design to thwart previously known attacks), the adversary builds a small set of candidates for $Q$. Using these candidates, the adversary obtains candidates for a generator matrix of the secret LDPC code. Afterwards, the adversary applies Stern's algorithm to recover the secret matrix $H$, thus recovering the full private key. Provided the adversary can learn information about the matrix $Q$, the complexity of the attack is below $2^{99}$ for a parameter set for 128-bit security. In order to study whether the adversary can learn information about $Q$ from $10^4\times \text{DFR}^{-1}$ decryptions, we conducted experiments with a modified parameter set. The parameter set was modified only in order to increase the DFR, and thus make experiments less computationally expensive. We show that with the modified parameter set it is indeed possible to learn the required information about the matrix $Q$.
Last updated:  2018-05-15
Faster Multiplication Triplet Generation from Homomorphic Encryption for Practical Privacy-Preserving Machine Learning under a Narrow Bandwidth
Wen-jie Lu, Jun Sakuma
Machine learning algorithms are used by more and more online applications to improve their services. Machine learning-based online services are usually accessed by thousands of clients concurrently through a relatively narrow bandwidth, such as a WiFi network or a cell phone network. When applying secure computation to such online services, however, current methods for generating multiplication triplets might take a long time, especially when only a narrow bandwidth is available or large-scale matrices are involved in the computation. In this paper, we present a more practical method for generating multiplication triplets, specialized for additively shared matrices, from homomorphic encryption. With our algorithmic and implementation optimizations, our protocol is faster and consumes less communication traffic than existing methods. Experimental results show that, under a 100~Mbps network, our protocol took about $18.0$ seconds to generate triplets for matrices with more than $2.6\times 10^5$ entries, about $20 - 108$ times faster than existing methods. As a concrete example, we applied our protocol to two existing secure computation frameworks for machine learning, i.e., SecureML (S\&P'17) and MiniONN (CCS'17). Experimental results show that our method reduced the triplet generation time of these frameworks by about $74\% - 97\%$ when a narrow bandwidth was used.
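For readers unfamiliar with multiplication triplets for additively shared matrices, the invariant the protocol produces can be sketched in plain Python. This toy computes $C = A \cdot B$ in the clear instead of via homomorphic encryption, purely to show what a matrix triplet is; the modulus and dimensions are illustrative:

```python
import random

Q = 2**31 - 1  # toy modulus for additive secret sharing

def rand_matrix(rows, cols):
    return [[random.randrange(Q) for _ in range(cols)] for _ in range(rows)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % Q
             for col in zip(*B)] for row in A]

def additive_share(M):
    """Split M into two random summands with M0 + M1 = M (mod Q)."""
    M0 = rand_matrix(len(M), len(M[0]))
    M1 = [[(m - s) % Q for m, s in zip(r, r0)] for r, r0 in zip(M, M0)]
    return M0, M1

# A matrix multiplication triplet: random A, B and C = A*B, each additively
# shared between the two parties.  (The paper produces the cross terms
# A0*B1 and A1*B0 under homomorphic encryption; here we cheat and compute
# C in the clear, just to exhibit the invariant the parties end up with.)
A, B = rand_matrix(3, 4), rand_matrix(4, 2)
A0, A1 = additive_share(A)
B0, B1 = additive_share(B)
C0, C1 = additive_share(matmul(A, B))
recombined = [[(x + y) % Q for x, y in zip(r0, r1)] for r0, r1 in zip(C0, C1)]
assert recombined == matmul(A, B)
```

With such a triplet in hand, the parties can later multiply their own shared matrices using only cheap local arithmetic plus one round of communication.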
Last updated:  2019-02-16
But Why does it Work? A Rational Protocol Design Treatment of Bitcoin
Christian Badertscher, Juan Garay, Ueli Maurer, Daniel Tschudi, Vassilis Zikas
An exciting recent line of work has focused on formally investigating the core cryptographic assumptions underlying the security of Bitcoin. In a nutshell, these works conclude that Bitcoin is secure if and only if the majority of the mining power is honest. Despite their great impact, however, these works do not address an incisive question asked by positivists and Bitcoin critics, which is fuelled by the fact that Bitcoin indeed works in reality: Why should the real-world system adhere to these assumptions? In this work we employ the machinery from the Rational Protocol Design (RPD) framework by Garay et al. [FOCS'13] to analyze Bitcoin and address questions such as the above. We show that, assuming a natural class of incentives for the miners' behavior (i.e., rewarding them for adding blocks to the blockchain but having them pay for mining), one can reserve the honest majority assumption as a fallback, or even, depending on the application, completely replace it by the assumption that the miners aim to maximize their revenue. Our results underscore the appropriateness of RPD as a ``rational cryptography'' framework for analyzing Bitcoin. Along the way, we devise significant extensions to the original RPD machinery that broaden its applicability to cryptocurrencies, which may be of independent interest.
Last updated:  2018-02-07
Naor-Reingold Goes Public: The Complexity of Known-key Security
Pratik Soni, Stefano Tessaro
We study the complexity of building secure block ciphers in the setting where the key is known to the attacker. In particular, we consider two security notions with useful implications, namely public-seed pseudorandom permutations (or psPRPs, for short) (Soni and Tessaro, EUROCRYPT '17) and correlation-intractable ciphers (Knudsen and Rijmen, ASIACRYPT '07; Mandal, Seurin, and Patarin, TCC '12). For both these notions, we exhibit constructions which make only two calls to an underlying non-invertible primitive, matching the complexity of building a pseudorandom permutation in the secret-key setting. Our psPRP result instantiates the round functions in the Naor-Reingold (NR) construction with a secure UCE hash function. For correlation intractability, we instead instantiate them from a (public) random function, and replace the pairwise-independent permutations in the NR construction with (almost) $O(k^2)$-wise independent permutations, where $k$ is the arity of the relations for which we want correlation intractability. Our constructions improve upon the current state of the art, requiring five- and six-round Feistel networks, respectively, to achieve psPRP security and correlation intractability. To do so, we rely on techniques borrowed from Impagliazzo-Rudich-style black-box impossibility proofs for our psPRP result, for which we give what we believe to be the first constructive application, and on techniques for studying randomness with limited independence for correlation intractability.
Last updated:  2022-01-19
Revisiting AES-GCM-SIV: Multi-user Security, Faster Key Derivation, and Better Bounds
Priyanka Bose, Viet Tung Hoang, Stefano Tessaro
This paper revisits the multi-user (mu) security of symmetric encryption, from the perspective of delivering an analysis of the AES-GCM-SIV AEAD scheme. Our end result shows that its mu security is comparable to that achieved in the single-user setting. In particular, even when instantiated with short keys (e.g., 128 bits), the security of AES-GCM-SIV is not impacted by the collisions of two user keys, as long as each individual nonce is not re-used by too many users. Our bounds also improve existing analyses in the single-user setting, in particular when messages of variable lengths are encrypted. We also validate security against a general class of key-derivation methods, including one that halves the complexity of the final proposal. As an intermediate step, we consider mu security in a setting where the data processed by every user is bounded, and where user keys are generated according to arbitrary, possibly correlated distributions. This viewpoint generalizes the currently adopted one in mu security, and can be used to analyze re-keying practices.
Last updated:  2018-02-07
A note on the equivalence of IND-CCA & INT-PTXT and IND-CCA & INT-CTXT
Daniel Jost, Christian Badertscher, Fabio Banfi
The security for authenticated encryption schemes is often captured by demanding CCA security (IND-CCA) and integrity of plaintexts (INT-PTXT). In this short note, we prove that this implies in particular integrity of ciphertexts, i.e., INT-CTXT. Hence, the two sets of requirements mentioned in the title are equivalent.
Last updated:  2018-02-07
A Las Vegas algorithm to solve the elliptic curve discrete logarithm problem
Ayan Mahalanobis, Vivek Mallick
In this paper, we describe a new Las Vegas algorithm to solve the elliptic curve discrete logarithm problem. The algorithm depends on a property of the group of rational points of an elliptic curve and is thus not a generic algorithm. The algorithm that we describe has some similarities with the most powerful index-calculus algorithm for the discrete logarithm problem over a finite field.
Last updated:  2018-02-07
Quasi-Optimal SNARGs via Linear Multi-Prover Interactive Proofs
Dan Boneh, Yuval Ishai, Amit Sahai, David J. Wu
Succinct non-interactive arguments (SNARGs) enable verifying NP computations with significantly less complexity than that required for classical NP verification. In this work, we focus on simultaneously minimizing the proof size and the prover complexity of SNARGs. Concretely, for a security parameter $\lambda$, we measure the asymptotic cost of achieving soundness error $2^{-\lambda}$ against provers of size $2^\lambda$. We say a SNARG is quasi-optimally succinct if its proof length is $\tilde{O}(\lambda)$, and that it is quasi-optimal, if moreover, its prover complexity is only polylogarithmically greater than the running time of the classical NP prover. We show that this definition is the best we could hope for assuming that NP does not have succinct proofs. Our definition strictly strengthens the previous notion of quasi-optimality introduced in the work of Boneh et al. (Eurocrypt 2017). This work gives the first quasi-optimal SNARG for Boolean circuit satisfiability from a concrete cryptographic assumption. Our construction takes a two-step approach. The first is an information-theoretic construction of a quasi-optimal linear multi-prover interactive proof (linear MIP) for circuit satisfiability. Then, we describe a generic cryptographic compiler that transforms our quasi-optimal linear MIP into a quasi-optimal SNARG by relying on the notion of linear-only vector encryption over rings introduced by Boneh et al. Combining these two primitives yields the first quasi-optimal SNARG based on linear-only vector encryption. Moreover, our linear MIP construction leverages a new robust circuit decomposition primitive that allows us to decompose a circuit satisfiability instance into several smaller circuit satisfiability instances. This primitive may be of independent interest. Finally, we consider (designated-verifier) SNARGs that provide optimal succinctness for a non-negligible soundness error. 
Concretely, we put forward the notion of "1-bit SNARGs" that achieve soundness error 1/2 with only one bit of proof. We first show how to build 1-bit SNARGs from indistinguishability obfuscation, and then show that 1-bit SNARGs also suffice for realizing a form of witness encryption. The latter result highlights a two-way connection between the soundness of very succinct argument systems and powerful forms of encryption.
Last updated:  2019-02-19
On Isogeny Graphs of Supersingular Elliptic Curves over Finite Fields
Gora Adj, Omran Ahmadi, Alfred Menezes
We study the isogeny graphs of supersingular elliptic curves over finite fields, with an emphasis on the vertices corresponding to elliptic curves of $j$-invariant 0 and 1728.
Last updated:  2018-02-05
Fiat-Shamir and Correlation Intractability from Strong KDM-Secure Encryption
Ran Canetti, Yilei Chen, Leonid Reyzin, Ron D. Rothblum
A hash function family is called correlation intractable if for all sparse relations, it is hard to find, given a random function from the family, an input-output pair that satisfies the relation (Canetti et al., STOC 98). Correlation intractability (CI) captures a strong Random-Oracle-like property of hash functions. In particular, when security holds for all sparse relations, CI suffices for guaranteeing the soundness of the Fiat-Shamir transformation from any constant round, statistically sound interactive proof to a non-interactive argument. However, to date, the only CI hash function for all sparse relations (Kalai et al., Crypto 17) is based on general program obfuscation with exponential hardness properties. We construct a simple CI hash function for arbitrary sparse relations, from any symmetric encryption scheme that satisfies some natural structural properties, and in addition guarantees that key recovery attacks mounted by polynomial-time adversaries have only exponentially small success probability - even in the context of key-dependent messages (KDM). We then provide parameter settings where ElGamal encryption and Regev encryption plausibly satisfy the needed properties. Our techniques are based on those of Kalai et al., with the main contribution being substituting a statistical argument for the use of obfuscation, therefore greatly simplifying the construction and basing security on better-understood intractability assumptions. In addition, we extend the definition of correlation intractability to handle moderately sparse relations so as to capture the properties required in proof-of-work applications (e.g. Bitcoin). We also discuss the applicability of our constructions and analyses in that regime.
Last updated:  2018-02-05
SMT-based Cube Attack on Simeck32/64
Mojtaba Zaheri, Babak Sadeghiyan
Satisfiability modulo theories, or SMT, can be stated as a generalization of the Boolean satisfiability problem, or SAT. The core idea behind the introduction of SMT solvers is to reduce the complexity by providing more information about the problem environment. In this paper, we take advantage of a similar idea and feed the SMT solver itself with extra information provided through middle-state cube characteristics, to introduce a new method which we call the SMT-based cube attack, and apply it to improve the success of the solver in attacking reduced-round versions of the Simeck32/64 lightweight block cipher. We first propose a new algorithm to find cubes with the largest number of middle-state characteristics. Then, we apply these cubes and their characteristics as extra information in the SMT definition of the cryptanalysis problem to evaluate its effectiveness. Our cryptanalysis results in a full key recovery attack on 12 rounds of the cipher with 64 plaintext/ciphertext pairs in just 122.17 seconds. This is the first practical attack presented so far against reduced-round versions of Simeck32/64. We also conduct the cube attack on Simeck32/64 to compare with the SMT-based cube attack. The results indicate that the proposed attack is more powerful than the cube attack.
Last updated:  2018-02-05
Multi-mode Cryptocurrency Systems
Tuyet Duong, Alexander Chepurnoy, Hong-Sheng Zhou
In the past years, the security of Bitcoin-like protocols has been intensively studied. However, previous investigations are mainly focused on the single-mode version of Bitcoin protocol, where the protocol is running among full nodes (miners). In this paper, we initiate the study of multi-mode cryptocurrency protocols. We generalize the recent framework by Garay et al. (Eurocrypt 2015) with new security definitions that capture the security of realistic cryptocurrency systems (e.g. Bitcoin with full and lightweight nodes). We provide the first rigorous security model for addressing the "blockchain bloat" issue. As an immediate application of our new framework, we analyze the security of existing blockchain pruning proposals for Bitcoin aiming to improve the storage efficiency of network nodes by pruning unnecessary information from the ledger.
Last updated:  2018-02-05
Authenticated Encryption Mode IAPM using SHA-3's Public Random Permutation
Charanjit S. Jutla
We study instantiating the random permutation of the block-cipher mode of operation IAPM (Integrity-Aware Parallelizable Mode) with the public random permutation of Keccak, on which the draft standard SHA-3 is built. IAPM and the related mode OCB are single-pass highly parallelizable authenticated-encryption modes, and while they were originally proven secure in the private random permutation model, Kurosawa has shown that they are also secure in the public random permutation model assuming the whitening keys are uniformly chosen with double the usual entropy. In this paper, we show a general composability result that shows that the whitening key can be obtained from the usual entropy source by a key-derivation function which is itself built on Keccak. We stress that this does not follow directly from the usual indifferentiability of key-derivation function constructions from Random Oracles. We also show that a simple and general construction, again employing Keccak, can also be used to make the IAPM scheme key-dependent-message secure. Finally, implementations on modern AMD-64 architecture supporting 128-bit SIMD instructions, and not supporting the native AES instructions, show that IAPM with Keccak runs three times faster than IAPM with AES.
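A key-derivation function built on Keccak, as the abstract describes, can be sketched with the SHAKE-256 XOF from Python's hashlib. The domain-separation label and output lengths below are illustrative assumptions, not the paper's construction:

```python
import hashlib

def derive_whitening_keys(master_key: bytes, nonce: bytes):
    """Expand a usual-entropy master key into two 256-bit whitening keys
    using SHAKE-256, a Keccak-based extendable-output function.
    The label "IAPM-whitening" is a hypothetical domain separator."""
    xof = hashlib.shake_256(b"IAPM-whitening" + master_key + nonce)
    okm = xof.digest(64)           # 2 x 32 bytes of output key material
    return okm[:32], okm[32:]

w0, w1 = derive_whitening_keys(b"\x00" * 16, b"sector-0")
assert w0 != w1 and len(w0) == len(w1) == 32
```

The paper's point is subtler than this sketch: proving that such a Keccak-based KDF composes securely with IAPM (which also uses the Keccak permutation) does not follow directly from indifferentiability, and requires the composability result they establish.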
Last updated:  2020-09-11
Accountability in Security Protocols
Robert Künnemann, Deepak Garg, Michael Backes
A promising paradigm in protocol design is to hold parties accountable for misbehavior, instead of postulating that they are trustworthy. Recent approaches in defining this property, called accountability, characterized malicious behavior as a deviation from the protocol that causes a violation of the desired security property, but did so under the assumption that all deviating parties are controlled by a single, centralized adversary. In this work, we investigate the setting where multiple parties can deviate with or without coordination in a variant of the applied-pi calculus. We first demonstrate that, under realistic assumptions, it is impossible to determine all misbehaving parties; however, we show that accountability can be relaxed to exclude causal dependencies that arise from the behavior of deviating parties, and not from the protocol as specified. We map out the design space for the relaxation, point out protocol classes separating these notions and define conditions under which we can guarantee fairness and completeness. Most importantly, we discover under which circumstances it is correct to consider accountability in the single-adversary setting, where this property can be verified with off-the-shelf protocol verification tools.
Last updated:  2018-02-05
Onion-AE: Foundations of Nested Encryption
Phillip Rogaway, Yusi Zhang
Nested symmetric encryption is a well-known technique for low-latency communication privacy. But just what problem does this technique aim to solve? In answer, we provide a provable-security treatment for onion authenticated-encryption (onion-AE). Extending the conventional notion for authenticated-encryption, we demand indistinguishability from random bits and time-of-exit authenticity verification. We show that the encryption technique presently used in Tor does not satisfy our definition of onion-AE security, but that a construction by Mathewson (2012), based on a strong, tweakable, wideblock PRP, does do the job. We go on to discuss three extensions of onion-AE, giving definitions to handle inbound flows, immediate detection of authenticity errors, and corrupt ORs.
Last updated:  2018-02-05
Challenges in cyber security - Ransomware Phenomenon
Pasca Vlad-Raul, Simion Emil
Ransomware has become one of the major threats nowadays due to its huge impact and increased rate of infections around the world. CryptoWall 3 was responsible for damages of over 325 million dollars since its discovery in 2015. Recently, another family of ransomware called WannaCry appeared in cyberspace; over 230,000 computers around the world, in over 150 countries, were infected. Ransomware usually uses the RSA algorithm to protect the encryption key and AES for encrypting the files. If these algorithms are correctly implemented, then it is impossible to recover the encrypted information. Some attacks, nonetheless, work against the implementation of RSA. These attacks are not against the basic algorithm, but against the protocol. In the following sections we present a full analysis of three representative ransomware families: Spora, DMA Locker and WannaCry.
Last updated:  2018-02-02
Evaluating the indistinguishability of the XTS mode in the proposed security model
Nguyen Tuan Anh, Nguyen Bui Cuong
In this paper, we consider the indistinguishability of XTS in some security models for both the full final block and partial final block cases. First, some evaluations of the indistinguishability up-to-block are presented. Then, we present a new security model in which the adversary cannot control the sector number, based on an $\epsilon$-collision resistant function. In this model, we give a bound on the distinguishing advantage that the adversary can achieve when attacking XTS. The obtained results are an extension of \cite{6}.
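For context, XTS derives the per-block tweak inside a sector by repeated multiplication by $\alpha$ in $GF(2^{128})$ (IEEE P1619). A minimal sketch of that doubling step:

```python
def xts_mul_alpha(tweak: bytes) -> bytes:
    """Multiply a 16-byte XTS tweak by alpha in GF(2^128), per IEEE P1619:
    little-endian doubling, reduced by x^128 + x^7 + x^2 + x + 1 (0x87)."""
    t = int.from_bytes(tweak, "little") << 1
    if t >> 128:                        # carry out of bit 127
        t = (t ^ 0x87) & ((1 << 128) - 1)
    return t.to_bytes(16, "little")

t0 = b"\x01" + bytes(15)
assert xts_mul_alpha(t0) == b"\x02" + bytes(15)
# A carry out of the top bit folds back in as the reduction polynomial:
assert xts_mul_alpha(bytes(15) + b"\x80") == b"\x87" + bytes(15)
```

Block $j$ of a sector is masked with $T_j = E_{K_2}(\text{sector}) \cdot \alpha^j$, which is why control over the sector number matters in the security model above.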
Last updated:  2018-02-02
Distributed Time-Memory Tradeoff Attacks on Ciphers (with Application to Stream Ciphers and Counter Mode)
Howard M. Heys
In this paper, we consider the implications of parallelizing time-memory tradeoff attacks using a large number of distributed processors. It is shown that Hellman’s original tradeoff method and the Biryukov-Shamir attack on stream ciphers, which incorporates data into the tradeoff, can be effectively distributed to reduce both time and memory, while other approaches are less advantaged in a distributed approach. Distributed tradeoff attacks are specifically discussed as applied to stream ciphers and the counter mode operation of block ciphers, where their feasibility is considered in relation to distributed exhaustive key search. In particular, for counter mode with an unpredictable initial count, we show that distributed tradeoff attacks are applicable, but can be made infeasible if the entropy of the initial count is at least as large as the key. In general, the analyses of this paper illustrate the effectiveness of a distributed tradeoff approach and show that, when enough processors are involved in the attack, it is possible that some systems, such as lightweight cipher implementations, are practically susceptible to attack.
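The chain structure underlying Hellman's tradeoff, which the paper distributes across processors, can be sketched on a toy 20-bit one-way map. Each processor would build and search tables like this over its own portion of the space; all parameters here are illustrative:

```python
import hashlib

def toy_cipher(key: int) -> int:
    """Stand-in one-way map f: key -> keystream fingerprint (20-bit toy)."""
    d = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return int.from_bytes(d[:3], "big") & 0xFFFFF

def build_table(n_chains, chain_len, seed=0):
    """One Hellman table: iterate f and store only (end, start) per chain."""
    table = {}
    for i in range(n_chains):
        start = x = (seed + i) & 0xFFFFF
        for _ in range(chain_len):
            x = toy_cipher(x)          # reduction function = identity here
        table[x] = start
    return table

def invert(table, chain_len, target):
    """Try to find some key with toy_cipher(key) == target."""
    y = target
    for _ in range(chain_len):
        if y in table:                 # landed on a stored chain endpoint
            x = table[y]
            for _ in range(chain_len):
                if toy_cipher(x) == target:
                    return x
                x = toy_cipher(x)
        y = toy_cipher(y)
    return None

table = build_table(n_chains=1024, chain_len=32)
target = toy_cipher(1234)
found = invert(table, 32, target)
# Coverage is probabilistic: a single small table may miss the target.
assert found is None or toy_cipher(found) == target
```

The distribution argument in the paper amounts to each of many processors holding its own (smaller) table and searching in parallel, cutting both per-processor memory and wall-clock time.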
Last updated:  2018-10-03
BitML: A Calculus for Bitcoin Smart Contracts
Massimo Bartoletti, Roberto Zunino
We introduce BitML, a domain-specific language for specifying contracts that regulate transfers of bitcoins among participants, without relying on trusted intermediaries. We define a symbolic and a computational model for reasoning about BitML security. In the symbolic model, participants act according to the semantics of BitML, while in the computational model they exchange bitstrings, and read/append transactions on the Bitcoin blockchain. A compiler is provided to translate contracts into standard Bitcoin transactions. Participants can execute a contract by appending these transactions on the Bitcoin blockchain, according to their strategies. We prove the correctness of our compiler, showing that computational attacks on compiled contracts are also observable in the symbolic model.
Last updated:  2018-02-01
ECC mod 8^91+5
Daniel R. L. Brown
The field size $8^{91}+5$ for elliptic curve cryptography offers simplicity, security, and efficiency.
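A quick sanity check that $8^{91}+5 = 2^{273}+5$ is a plausible prime field size, using a standard Miller-Rabin test (a probabilistic check, not a proof):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

p = 8**91 + 5          # = 2^273 + 5, a 274-bit candidate field size
assert is_probable_prime(p)
```

The shape $2^{273}+5$ is what yields the simplicity and efficiency claimed in the abstract: reduction modulo $p$ is a shift-and-add, much like the Mersenne-style primes used elsewhere in ECC.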
Last updated:  2019-01-24
Efficient Circuit-based PSI via Cuckoo Hashing
Benny Pinkas, Thomas Schneider, Christian Weinert, Udi Wieder
While there has been a lot of progress in designing efficient custom protocols for computing Private Set Intersection (PSI), there has been less research on using generic Multi-Party Computation (MPC) protocols for this task. However, there are many variants of the set intersection functionality that are not addressed by the existing custom PSI solutions and are easy to compute with generic MPC protocols (e.g., comparing the cardinality of the intersection with a threshold or measuring ad conversion rates). Generic PSI protocols work over circuits that compute the intersection. For sets of size $n$, the best known circuit constructions conduct $O(n \log n)$ or $O(n \log n / \log\log n)$ comparisons (Huang et al., NDSS'12 and Pinkas et al., USENIX Security'15). In this work, we propose new circuit-based protocols for computing variants of the intersection with an almost linear number of comparisons. Our constructions are based on new variants of Cuckoo hashing in two dimensions. We present an asymptotically efficient protocol as well as a protocol with better concrete efficiency. For the latter protocol, we determine the required sizes of tables and circuits experimentally, and show that the run-time is concretely better than that of existing constructions. The protocol can be extended to a larger number of parties. The proof technique for analyzing Cuckoo hashing in two dimensions is new and can be generalized to analyzing standard Cuckoo hashing as well as other new variants of it.
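Plain (one-table-per-hash) Cuckoo hashing, the starting point for the paper's two-dimensional variants, can be sketched as follows; the table sizes, hash functions, and kick limit are illustrative:

```python
import random

def cuckoo_insert(tables, hashes, item, max_kicks=100):
    """Insert into a 2-choice cuckoo hash, evicting occupants on collision."""
    t = 0
    for _ in range(max_kicks):
        slot = hashes[t](item)
        if tables[t][slot] is None:
            tables[t][slot] = item
            return True
        tables[t][slot], item = item, tables[t][slot]  # evict the occupant
        t = 1 - t                        # reinsert evictee into other table
    return False  # a real implementation would rehash or use a stash

SIZE = 256
random.seed(1)
a1, b1, a2, b2 = (random.randrange(1, 2**31) for _ in range(4))
hashes = [lambda x: (a1 * x + b1) % (2**31 - 1) % SIZE,
          lambda x: (a2 * x + b2) % (2**31 - 1) % SIZE]
tables = [[None] * SIZE, [None] * SIZE]
items = random.sample(range(10**6), 30)
inserted = [x for x in items if cuckoo_insert(tables, hashes, x)]
# Every successfully inserted item sits in one of its two candidate slots:
assert all(x in (tables[0][hashes[0](x)], tables[1][hashes[1](x)])
           for x in inserted)
```

The point for circuit-based PSI is that each item has only two possible locations, so the intersection circuit need compare each element against a constant number of slots rather than the whole set.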
Last updated:  2018-01-31
Drive-by Key-Extraction Cache Attacks from Portable Code
Daniel Genkin, Lev Pachmanov, Eran Tromer, Yuval Yarom
We show how malicious web content can extract cryptographic secret keys from the user's computer. The attack uses portable scripting languages supported by modern browsers to induce contention for CPU cache resources, and thereby gleans information about the memory accesses of other programs running on the user's computer. We show how this side-channel attack can be realized in both WebAssembly and PNaCl; how to attain very fine-grained measurements; and how to use these to extract ElGamal, ECDH and RSA decryption keys from various cryptographic libraries. The attack does not rely on bugs in the browser's nominal sandboxing mechanisms, or on fooling users. It applies even to locked-down platforms with strong confinement mechanisms and browser-only functionality, such as Chromebook devices. Moreover, on browser-based platforms the attacked software too may be written in portable JavaScript; and we show that in this case even implementations of supposedly-secure constant-time algorithms, such as Curve25519's, are vulnerable to our attack.
Last updated:  2019-12-19
Updatable Encryption with Post-Compromise Security
Anja Lehmann, Bjoern Tackmann
An updatable encryption scheme allows one to periodically rotate the encryption key and move already existing ciphertexts from the old to the new key. These ciphertext updates are done with the help of a so-called update token and can be performed by an untrusted party, as the update never decrypts the data. Updatable encryption is particularly useful in settings where encrypted data is outsourced, e.g., stored on a cloud server. The data owner can produce an update token, and the cloud server can update the ciphertexts. We provide a comprehensive treatment of ciphertext-independent schemes, where a single token is used to update all ciphertexts. We show that the existing ciphertext-independent schemes and models by Boneh et al. (CRYPTO’13) and Everspaugh et al. (CRYPTO’17) do not guarantee the post-compromise security one would intuitively expect from key rotation. In fact, the simple scheme recently proposed by Everspaugh et al. allows recovery of the current key upon corruption of a single old key. Surprisingly, none of the models so far reflects the timely aspect of key rotation which makes it hard to grasp when an adversary is allowed to corrupt keys. We propose strong security models that clearly capture post-compromise and forward security under adaptive attacks. We then analyze various existing schemes and show that none of them is secure in this strong model, but we formulate the additional constraints that suffice to prove their security in a relaxed version of our model. Finally, we propose a new updatable encryption scheme that achieves our strong notions while being (at least) as efficient as the existing solutions.
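The flavor of ciphertext-independent updates, and the old-key weakness the authors point out, can be illustrated with a toy ElGamal-style scheme (a hypothetical sketch, not any scheme from the paper; all parameters are illustrative):

```python
import random

P = 2**127 - 1        # toy prime modulus
G = 5                 # toy base; illustrative parameters only

def keygen():
    return random.randrange(2, P - 1)

def encrypt(k, m):
    r = random.randrange(2, P - 1)
    return (pow(G, r, P), m * pow(G, r * k % (P - 1), P) % P)

def decrypt(k, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, k, P), -1, P) % P

def make_token(k_old, k_new):
    return (k_new - k_old) % (P - 1)     # single token for all ciphertexts

def update(delta, ct):
    c1, c2 = ct
    return (c1, c2 * pow(c1, delta, P) % P)  # re-keys without decrypting

k0, m = keygen(), 424242
ct = encrypt(k0, m)
k1 = keygen()
ct1 = update(make_token(k0, k1), ct)
assert decrypt(k1, ct1) == m
# The weakness flagged in the abstract, in miniature: anyone holding the
# old key k0 and the token delta recovers k1 = k0 + delta, so corrupting
# one old key defeats the rotation.
```

This is why the paper demands models that track *when* keys are corrupted relative to rotations, rather than treating all key corruptions alike.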
Last updated:  2019-03-06
An Improved RNS Variant of the BFV Homomorphic Encryption Scheme
Shai Halevi, Yuriy Polyakov, Victor Shoup
We present an optimized implementation of the Fan-Vercauteren variant of Brakerski's scale-invariant homomorphic encryption scheme. Our algorithmic improvements focus on optimizing decryption and homomorphic multiplication in the Residue Number System (RNS), using the Chinese Remainder Theorem (CRT) to represent and manipulate the large coefficients in the ciphertext polynomials. In particular, we propose efficient procedures for scaling and CRT basis extension that do not require translating the numbers to standard (positional) representation. Compared to the previously proposed RNS design due to Bajard et al., our procedures are simpler and faster, and introduce a lower amount of noise. We implement our optimizations in the PALISADE library and evaluate the runtime performance for the range of multiplicative depths from 1 to 100. For example, homomorphic multiplication for a depth-20 setting can be executed in 62 ms on a modern server system, which is already practical for some outsourced-computing applications. Our algorithmic improvements can also be applied to other scale-invariant homomorphic encryption schemes, such as YASHE.
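The core RNS idea, multiplying large numbers componentwise over a basis of coprime moduli and reconstructing via the CRT only when needed, can be sketched as follows; the three toy moduli are illustrative, far smaller than the ciphertext moduli used in BFV:

```python
from math import prod

MODULI = [1000003, 1000033, 1000037]   # pairwise-coprime RNS basis (toy)

def to_rns(x):
    return [x % q for q in MODULI]

def rns_mul(a, b):
    """Multiplication is componentwise -- no big-integer arithmetic."""
    return [(ai * bi) % q for ai, bi, q in zip(a, b, MODULI)]

def from_rns(residues):
    """CRT reconstruction back to positional representation."""
    Q = prod(MODULI)
    x = 0
    for r, q in zip(residues, MODULI):
        Qi = Q // q
        x += r * Qi * pow(Qi, -1, q)   # Qi^{-1} mod q via modular inverse
    return x % Q

x, y = 123456789, 987654321
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % prod(MODULI)
```

The hard part the paper addresses is exactly what this sketch sidesteps: BFV decryption and multiplication need scaling and basis-extension steps, and doing those without ever leaving the residue representation is where the new procedures come in.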
Last updated:  2018-01-31
Unbounded ABE via Bilinear Entropy Expansion, Revisited
Jie Chen, Junqing Gong, Lucas Kowalczyk, Hoeteck Wee
We present simpler and improved constructions of unbounded attribute-based encryption (ABE) schemes with constant-size public parameters under static assumptions in bilinear groups. Concretely, we obtain: - a simple and adaptively secure unbounded ABE scheme in composite-order groups, improving upon a previous construction of Lewko and Waters (Eurocrypt '11) which only achieves selective security; - an improved adaptively secure unbounded ABE scheme based on the $k$-linear assumption in prime-order groups with shorter ciphertexts and secret keys than those of Okamoto and Takashima (Asiacrypt '12); - the first adaptively secure unbounded ABE scheme for arithmetic branching programs under static assumptions. At the core of all of these constructions is a "bilinear entropy expansion" lemma that allows us to generate any polynomial amount of entropy starting from constant-size public parameters; the entropy can then be used to transform existing adaptively secure "bounded" ABE schemes into unbounded ones.
Last updated:  2018-01-31
An Improved Affine Equivalence Algorithm for Random Permutations
Itai Dinur
In this paper we study the affine equivalence problem, where given two functions $\vec{F},\vec{G}: \{0,1\}^n \rightarrow \{0,1\}^n$, the goal is to determine whether there exist invertible affine transformations $A_1,A_2$ over $GF(2)^n$ such that $\vec{G} = A_2 \circ \vec{F} \circ A_1$. Algorithms for this problem have several well-known applications in the design and analysis of Sboxes, cryptanalysis of white-box ciphers and breaking a generalized Even-Mansour scheme. We describe a new algorithm for the affine equivalence problem and focus on the variant where $\vec{F},\vec{G}$ are permutations over $n$-bit words, as it has the widest applicability. The complexity of our algorithm is about $n^3 2^n$ bit operations with very high probability whenever $\vec{F}$ (or $\vec{G})$ is a random permutation. This improves upon the best known algorithms for this problem (published by Biryukov et al. at EUROCRYPT 2003), where the first algorithm has time complexity of $n^3 2^{2n}$ and the second has time complexity of about $n^3 2^{3n/2}$ and roughly the same memory complexity. Our algorithm is based on a new structure (called a \emph{rank table}) which is used to analyze particular algebraic properties of a function that remain invariant under invertible affine transformations. Besides its standard application in our new algorithm, the rank table is of independent interest and we discuss several of its additional potential applications.
Last updated:  2018-07-03
Offline Assisted Group Key Exchange
Colin Boyd, Gareth T. Davies, Kristian Gjøsteen, Yao Jiang
We design a group key exchange protocol with forward secrecy where most of the participants remain offline until they wish to compute the key. This is well suited to a cloud storage environment where users are often offline, but have online access to the server which can assist in key exchange. We define and instantiate a new primitive, a blinded KEM, which we show can be used in a natural way as part of our generic protocol construction. Our new protocol has a security proof based on a well-known model for group key exchange. Our protocol is efficient, requiring only a handful of standard public-key operations per user in our concrete Diffie-Hellman-based instantiation.
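The blinded-KEM idea can be illustrated with a Diffie-Hellman-style toy example: the helper decapsulates a *blinded* ciphertext, so it never sees the actual key. The group parameters below are tiny illustrative assumptions, not the paper's full protocol or security model.

```python
# Toy Diffie-Hellman-style blinded KEM (illustrative parameters only).
import random

random.seed(1)
p, q, g = 23, 11, 4  # toy safe-prime group: g generates the order-q subgroup of Z_p^*

a = random.randrange(1, q)            # recipient's long-term secret
pk = pow(g, a, p)

r = random.randrange(1, q)            # encapsulation: ciphertext c, shared key k
c, k = pow(g, r, p), pow(pk, r, p)

s = random.randrange(1, q)            # user blinds c before sending it to the helper
c_blind = pow(c, s, p)

k_blind = pow(c_blind, a, p)          # helper decapsulates, learning only g^{ras}

k_recovered = pow(k_blind, pow(s, -1, q), p)  # user unblinds with s^{-1} mod q
assert k_recovered == k
```

The unblinding works because $g$ has prime order $q$: raising $g^{ras}$ to $s^{-1} \bmod q$ yields $g^{ra} = pk^r = k$.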
Last updated:  2019-05-27
Classification of Balanced Quadratic Functions
Lauren De Meyer, Begül Bilgin
S-boxes, typically the only nonlinear part of a block cipher, are the heart of symmetric cryptographic primitives. They significantly impact the cryptographic strength and the implementation characteristics of an algorithm. Due to their simplicity, quadratic vectorial Boolean functions are preferred when efficient implementations for a variety of applications are of concern. Many characteristics of a function stay invariant under affine equivalence. So far, all 6-bit Boolean functions as well as all 3- and 4-bit permutations have been classified up to affine equivalence. At FSE 2017, Božilov et al. presented the first classification of 5-bit quadratic permutations. In this work, we propose an adaptation of their approach, resulting in a highly efficient algorithm to classify $n \times m$ functions for $n \geq m$. Our algorithm enables for the first time a complete classification of 6-bit quadratic permutations as well as all balanced quadratic functions for $n \leq 6$. These functions can be valuable for new cryptographic algorithm designs with goals such as efficient multi-party computation or side-channel analysis resistance. In addition, we provide a second tool for finding decompositions of length two. We demonstrate its use by decomposing existing higher-degree S-boxes and constructing new S-boxes with good cryptographic and implementation properties.
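A small worked example of one characteristic that stays invariant under affine equivalence is the differential spectrum. The sketch below (our own illustration, not the paper's classification algorithm) computes the differential uniformity of the quadratic permutation $x \mapsto x^3$ over $GF(2^3)$.

```python
# Differential uniformity: an affine-equivalence invariant, shown on x -> x^3 in GF(2^3).

def gf8_mul(a, b):
    """Multiplication in GF(2^3) modulo x^3 + x + 1."""
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return r

def diff_uniformity(sbox, n):
    """Max number of x with S(x) ^ S(x ^ a) = b, over all a != 0 and all b."""
    best = 0
    for a in range(1, 1 << n):
        counts = [0] * (1 << n)
        for x in range(1 << n):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        best = max(best, max(counts))
    return best

S = [gf8_mul(gf8_mul(x, x), x) for x in range(8)]  # x -> x^3: a quadratic permutation
```

Composing `S` with any invertible affine maps leaves `diff_uniformity` unchanged, which is why such quantities can serve as fingerprints when sorting functions into equivalence classes.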
Last updated:  2020-04-23
Just in Time Hashing
Benjamin Harsha, Jeremiah Blocki
In the past few years, billions of user passwords have been exposed to the threat of offline cracking attempts. Such brute-force cracking attempts are increasingly dangerous as password cracking hardware continues to improve and as users continue to select low entropy passwords. Key-stretching techniques such as hash iteration and memory hard functions can help to mitigate the risk, but increased key-stretching effort necessarily increases authentication delay, so this defense is fundamentally constrained by usability concerns. We introduce Just in Time Hashing (JIT), a client-side key-stretching algorithm to protect user passwords against offline brute-force cracking attempts without increasing delay for the user. The basic idea is to exploit idle time while the user is typing in their password to perform extra key-stretching. As soon as the user types in the first character(s) of their password our algorithm immediately begins filling memory with hash values derived from the character(s) that the user has typed thus far. We conduct a user study to guide the development of JIT, e.g., by determining how much extra key-stretching could be performed during idle cycles or how many consecutive deletions JIT may need to handle. Our security analysis demonstrates that JIT can substantially increase guessing costs over traditional key-stretching algorithms with equivalent (or less) authentication delay. Specifically, an empirical evaluation using existing password datasets demonstrates that JIT increases guessing costs by nearly an order of magnitude in comparison to standard key-stretching techniques with comparable delay. We provide a proof-of-concept implementation of a Just in Time Hashing algorithm by modifying Argon2.
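The per-keystroke stretching idea can be sketched as a chain of checkpointed hash states. This is a hypothetical simplification using SHA-256 instead of the paper's Argon2-based construction; `ROUNDS_PER_IDLE` stands in for whatever work fits between keystrokes.

```python
# Hypothetical simplification of JIT hashing: absorb each typed character, then
# keep stretching the state while waiting for the next keystroke.
import hashlib

ROUNDS_PER_IDLE = 10_000  # assumed amount of stretching done during idle time

def jit_stretch(state, char, rounds=ROUNDS_PER_IDLE):
    """Absorb one typed character, then iterate the hash until the next keystroke."""
    state = hashlib.sha256(state + char.encode()).digest()
    for _ in range(rounds):
        state = hashlib.sha256(state).digest()
    return state

def jit_hash(password, salt=b"per-user salt"):
    states = [salt]  # checkpoints: a deletion simply pops back one state
    for ch in password:  # in a real client this runs incrementally as keys are pressed
        states.append(jit_stretch(states[-1], ch))
    return states[-1]
```

Keeping the intermediate `states` list is one way to handle the consecutive-deletions issue the user study investigates: deleting a character rolls back to the previous checkpoint instead of restarting the whole computation.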
Last updated:  2018-01-30
MRHS Solver Based on Linear Algebra and Exhaustive Search
Håvard Raddum, Pavol Zajac
We show how to build a binary matrix from the MRHS representation of a symmetric-key cipher. The matrix contains the cipher represented as an equation system and can be used to assess a cipher's resistance against algebraic attacks. We give an algorithm for solving the system and compute its complexity. The complexity is normally close to exhaustive search on the variables representing the user-selected key. Finally, we show that for some variants of LowMC, the joined MRHS matrix representation can be used to speed up regular encryption in addition to exhaustive key search.
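The "linear algebra plus exhaustive search on key variables" flavour can be illustrated in miniature (this is our own sketch, not the MRHS solver): guess a key variable, append the guess as an extra equation, and test the resulting GF(2)-linear system for consistency.

```python
# Guess-and-eliminate over GF(2). A row is a bitmask: bit i is the coefficient
# of x_i, bit n is the right-hand-side constant.

def consistent(rows, n):
    """Gaussian elimination over GF(2); False iff some row reduces to 0 = 1."""
    rows, rank = rows[:], 0
    for bit in range(n):
        piv = next((i for i in range(rank, len(rows)) if rows[i] >> bit & 1), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return all(r != 1 << n for r in rows)

n = 3
# x0+x1 = 1, x1+x2 = 0, x0+x2 = 1, x0+x1+x2 = 1  (unique solution x = (1, 0, 0))
system = [0b1011, 0b0110, 0b1101, 0b1111]
# exhaustive search on the "key" variable x0: append the guess as equation x0 = g
solutions = [g for g in (0, 1) if consistent(system + [0b0001 | g << 3], n)]
```

Each wrong guess is discarded by cheap linear algebra, so the overall cost is driven by the number of guessed variables, mirroring the abstract's observation that the complexity is normally close to exhaustive search on the key variables.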
Last updated:  2018-01-30
Rank Analysis of Cubic Multivariate Cryptosystems
John Baena, Daniel Cabarcas, Daniel Escudero, Karan Khathuria, Javier Verbel
In this work we analyze the security of cubic cryptographic constructions with respect to rank weakness. We detail how to extend the big field idea from quadratic to cubic, and show that the same rank defect occurs. We extend the min-rank problem and propose an algorithm to solve it in this setting. We show that for fixed small rank, the complexity is even lower than for the quadratic case. However, the rank of a cubic polynomial in $n$ variables can be larger than $n$, and in this case the algorithm is very inefficient. We show that the rank of the differential is not necessarily smaller, rendering this line of attack useless if the rank is large enough. Similarly, the algebraic attack is exponential in the rank, thus useless for high rank.
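For intuition, the underlying MinRank problem asks for a nonzero linear combination of given matrices that has low rank. The toy brute-force search below over GF(2) is purely illustrative; the paper's algorithm for the cubic setting is far more refined.

```python
# Toy brute-force MinRank over GF(2): find a nonzero combination of matrices
# with minimal rank. Matrices are lists of row bitmasks; M1 ^ M3 has rank 1.
from itertools import product

def rank_gf2(rows):
    """Rank of a matrix over GF(2), rows given as integer bitmasks."""
    pivots = {}
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb not in pivots:
                pivots[hb] = r
                break
            r ^= pivots[hb]
    return len(pivots)

M1 = [0b100, 0b010, 0b001]
M2 = [0b110, 0b010, 0b001]
M3 = [0b111, 0b001, 0b010]

best = None
for ys in product((0, 1), repeat=3):  # GF(2) coefficients y1, y2, y3
    if not any(ys):
        continue
    comb = [0, 0, 0]
    for y, M in zip(ys, (M1, M2, M3)):
        if y:
            comb = [c ^ row for c, row in zip(comb, M)]
    r = rank_gf2(comb)
    if best is None or r < best[0]:
        best = (r, ys)
```

Enumerating all coefficient vectors is only feasible for toy sizes; in the cryptanalytic setting the whole point is to find the low-rank combination far faster, which is where the rank analysis of the abstract comes in.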