All papers in 2018 (Page 10 of 1249 results)
An Analysis of the NIST SP 800-90A Standard
We investigate the security properties of the three deterministic random bit generator (DRBG) mechanisms in the NIST SP 800-90A standard [2]. This standard received a considerable amount of negative attention, due to the controversy surrounding the now retracted DualEC-DRBG, which was included in earlier versions. Perhaps because of the attention paid to the DualEC, the other algorithms in the standard have received surprisingly patchy analysis to date, despite widespread deployment. This paper addresses a number of these gaps in analysis, with a particular focus on HASH-DRBG and HMAC-DRBG. We uncover a mix of positive and less positive results. On the positive side, we prove (with a caveat) the robustness [16] of HASH-DRBG and HMAC-DRBG in the random oracle model (ROM). Regarding the caveat, we show that if an optional input is omitted, then – contrary to claims in the standard — HMAC-DRBG does not even achieve the (weaker) property of forward security. We also conduct a more informal and practice-oriented exploration of flexibility in implementation choices permitted by the standard. Specifically, we argue that these DRBGs have the property that partial state leakage may lead security to break down in unexpected ways. We highlight implementation choices allowed by the overly flexible standard that exacerbate both the likelihood, and impact, of such attacks. While our attacks are theoretical, an analysis of two open source implementations of CTR-DRBG shows that potentially problematic implementation choices are made in the real world.
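For orientation, here is a minimal Python sketch of the HMAC-DRBG generate and update steps from SP 800-90A, instantiated arbitrarily with SHA-256 and omitting instantiation, reseed counters and all error handling; the additional_input argument is the optional input whose omission the forward-security caveat above concerns. This is only an illustration of the mechanism, not a reference implementation.

    import hmac, hashlib

    def hmac_drbg_update(K, V, provided_data=b""):
        # SP 800-90A HMAC_DRBG Update: mixes provided_data into the state (K, V).
        K = hmac.new(K, V + b"\x00" + provided_data, hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
        if provided_data:
            K = hmac.new(K, V + b"\x01" + provided_data, hashlib.sha256).digest()
            V = hmac.new(K, V, hashlib.sha256).digest()
        return K, V

    def hmac_drbg_generate(K, V, n_bytes, additional_input=b""):
        # Generate: optionally mix in additional_input, produce output, update the state.
        if additional_input:
            K, V = hmac_drbg_update(K, V, additional_input)
        out = b""
        while len(out) < n_bytes:
            V = hmac.new(K, V, hashlib.sha256).digest()
            out += V
        K, V = hmac_drbg_update(K, V, additional_input)
        return out[:n_bytes], K, V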
Monero Ring Attack: Recreating Zero Mixin Transaction Effect
Monero is one of the privacy-preserving cryptocurrencies employing the CryptoNote protocol. The privacy features in Monero are provided by cryptographic techniques called linkable ring signatures and one-time public keys. Recent studies show that the majority of Monero inputs were traceable prior to the mandatory RingCT transactions; after RingCT was implemented, the problem was mitigated. We propose a novel attack to reduce the anonymity of Monero transactions or even to fully deanonymise the inputs. The proposed protocol can be launched in the RingCT scenario and enables multiple attackers to collaborate without trusting each other. The attack scheme can be planted in existing Monero services without extra fees and without putting the users' money at risk.
3PC ORAM with Low Latency, Low Bandwidth, and Fast Batch Retrieval
Multi-Party Computation of Oblivious RAM (MPC ORAM) implements secret-shared random access memory in a way that protects access pattern privacy against a threshold of corruptions. MPC ORAM enables secure computation of any RAM program on large data held by different entities, e.g. MPC processing of database queries on a secret-shared database. MPC ORAM can be constructed from any (client-server) ORAM, but there is an efficiency gap between known MPC ORAMs and ORAMs. The asymptotically best current MPC ORAM is implied by an "MPC friendly" variant of Path-ORAM called Circuit-ORAM, due to Wang et al. However, using garbled circuits for Circuit-ORAM's client implies an MPC ORAM which matches Path-ORAM in rounds but increases bandwidth by an \Omega(kappa) factor, while using the GMW or BGW protocols implies an MPC ORAM which matches Path-ORAM in bandwidth but increases round complexity by an \Omega(log n loglog n) factor, for kappa the security parameter and n the memory size.
In this paper we bridge the gap between MPC ORAM and client-server ORAM by showing a specialized 3PC ORAM protocol, i.e. MPC ORAM for 3 parties tolerating 1 fault, which uses only symmetric ciphers and asymptotically matches client-server Path-ORAM in round complexity and, for large records, also in bandwidth.
Our 3PC ORAM also allows for fast pipelined processing: with postponed clean-up it processes b=O(log n) accesses in O(b+log n) rounds with O(D+poly(log n)) bandwidth per item, where D is the record size.
Collusion Resistant Traitor Tracing from Learning with Errors
In this work we provide a traitor tracing construction with ciphertexts that grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from other cryptographic primitives that have seen recent progress in LWE solutions.
We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or master secret key. Ciphertexts encrypted using the public key can only encrypt one type of functionality. On the other hand the secret key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts.
We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second we build Mixed FE systems for polynomial sized branching programs (which corresponds to the complexity class LOGSPACE) by relying on the polynomial hardness of the LWE assumption with super-polynomial modulus-to-noise ratio.
In-region Authentication
Location information has wide applications in customization and personalization of services, as well as secure authentication and access control. We introduce {\em in-Region Authentication (inRA)}, a novel type of authentication that allows a prover to prove to a set of cooperating verifiers that it is in possession of the correct secret key and is inside a specified (policy) region of arbitrary shape. These requirements naturally arise when a privileged service is offered to registered users within an area. Locating a prover without assuming a GPS (Global Positioning System) signal, however, incurs error. We discuss the challenge of designing secure protocols that have quantifiable error in this setting, define and formalize correctness and security properties of the protocols, and propose a systematic approach to designing a family of protocols with provable security where error can be flexibly defined and efficiently minimized. We give a concrete instance of this family that starts with two verifiers, prove its security, and evaluate its application to four different policy regions. Our results show that in all cases false acceptance and false rejection of below can be achieved. We compare our results with related works, and propose directions for future research.
Nothing Refreshes Like a RePSI: Reactive Private Set Intersection
Private Set Intersection (PSI) is a popular cryptographic primitive that allows two parties, a client and a server, to compute the intersection of their private sets, so that the client only receives the output of the computation, while the server learns nothing besides the size of the client's set. A common limitation of PSI is that a dishonest client can progressively learn the server's set by enumerating it over different executions. Although these "oracle attacks" do not formally violate security according to traditional secure computation definitions, in practice, they often hamper real-life deployment of PSI instantiations, especially if the server's set does not change much over multiple interactions.
In a first step to address this problem, this paper presents and studies the concept of Reactive PSI (RePSI). We model PSI as a reactive functionality, whereby the output depends on previous instances, and use it to limit the effectiveness of oracle attacks. We introduce a general security model for RePSI in the (augmented) semi-honest model and a construction which enables the server to control how many inputs have been used by the client across several executions. In the process, we also present the first practical construction of a Size-Hiding PSI (SHI-PSI) protocol in the standard model, which may be of independent interest.
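To make the 'oracle attack' mentioned above concrete, the following toy sketch treats PSI as an ideal black box and shows how a dishonest client can enumerate a static server set over repeated executions; all element names are hypothetical, and RePSI is precisely about bounding this behaviour.

    # Toy model: psi() plays the role of an ideal PSI functionality that returns
    # the intersection to the client and nothing (but the set size) to the server.
    def psi(client_set, server_set):
        return client_set & server_set

    server_set = {"alice@x.com", "bob@y.org", "carol@z.net"}          # hypothetical data
    universe = [f"user{i}@example.com" for i in range(1000)] + list(server_set)

    recovered = set()
    for i in range(0, len(universe), 10):       # ten fresh inputs per execution
        guesses = set(universe[i:i + 10])
        recovered |= psi(guesses, server_set)   # each run leaks any matching guesses

    print(recovered)   # after enough executions the client has learned the whole set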
Flexible Signatures: Towards Making Authentication Suitable for Real-Time Environments
This work introduces the concept of flexible signatures. In a flexible signature scheme, the verification algorithm quantifies the validity of a signature based on the number of computations performed, such that the signature's validation (or confidence) level improves as the algorithm performs more computations. Importantly, the definition of flexible signatures does not assume the resource restriction to be known in advance: the verification process may be hard-stopped at any time by a system interrupt. Although prominent traditional signature schemes such as RSA, (EC)DSA, and EdDSA seem unfit for building flexible signatures, we find updated versions of the Lamport-Diffie one-time signature and Merkle authentication tree to be suitable for building flexible signatures. We present a flexible signature construction based on these hash-based primitives and prove its security with a concrete security analysis. We also perform a thorough validity-level analysis demonstrating an attractive computation-vs-validity trade-off offered by our construction: a security level of bits can be ensured by performing only around rd of the total hash computations for our flexible signature construction with a Merkle tree of height .
We see this work as the first step towards realizing flexible-security cryptographic primitives. Beyond flexible signatures, our flexible-security conceptualization offers an interesting opportunity to build similar primitives in the asymmetric as well as symmetric cryptographic domains. Apart from being theoretically interesting, these flexible security primitives can be of particular interest to real-time systems as well as the Internet of Things: rigid all-or-nothing guarantees offered by the traditional cryptographic primitives have been particularly unattractive to these unpredictably resource-constrained environments.
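One natural way to picture the computation-vs-validity trade-off is partial verification of a Lamport-Diffie one-time signature: checking only a random subset of positions costs proportionally fewer hash evaluations and yields a correspondingly lower confidence level. The sketch below illustrates this idea only; it is not the paper's construction, which additionally involves a Merkle authentication tree.

    import hashlib, secrets

    H = lambda x: hashlib.sha256(x).digest()
    N = 256  # number of message-digest bits

    def keygen():
        sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(N)]
        pk = [[H(sk[i][b]) for b in range(2)] for i in range(N)]
        return sk, pk

    def sign(sk, msg):
        d = hashlib.sha256(msg).digest()
        bits = [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
        return [sk[i][bits[i]] for i in range(N)]

    def flexible_verify(pk, msg, sig, budget):
        # Verify only `budget` randomly chosen positions and report the fraction
        # checked as a rough validity (confidence) level; 1.0 means full verification.
        d = hashlib.sha256(msg).digest()
        bits = [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
        for i in secrets.SystemRandom().sample(range(N), budget):
            if H(sig[i]) != pk[i][bits[i]]:
                return 0.0                # definite reject
        return budget / N                 # partial confidence

    sk, pk = keygen()
    sig = sign(sk, b"hello")
    print(flexible_verify(pk, b"hello", sig, budget=85))   # roughly a third of the hash work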
MergeMAC: A MAC for Authentication with Strict Time Constraints and Limited Bandwidth
This paper presents MergeMAC, a MAC that is particularly suitable for environments with strict time requirements and extremely limited bandwidth. MergeMAC computes the MAC by splitting the message into two parts. We use a pseudorandom function (PRF) to map messages to random bit strings and then merge them with a very efficient keyless function. The advantage of this approach is that the outputs of the PRF can be cached for frequently needed message parts. We demonstrate the merits of MergeMAC for authenticating messages on the CAN bus where bandwidth is extremely limited and caching can be used to recover parts of the message counter instead of transmitting it. We recommend an instantiation of the merging function MERGE and analyze the security of our construction. Requirements for a merging function are formally defined and the resulting EUF-CMA security of MergeMAC is proven.
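A rough sketch of the split-PRF-and-merge idea with caching: HMAC-SHA-256 stands in for the PRF and XOR is used as a placeholder merge function; the paper recommends a specific keyless MERGE, and the keys and message parts below are purely illustrative.

    import hmac, hashlib
    from functools import lru_cache

    K1, K2 = b"k1" * 16, b"k2" * 16     # hypothetical PRF keys

    @lru_cache(maxsize=1024)            # PRF outputs for frequent message parts are cached
    def prf(key, part):
        return hmac.new(key, part, hashlib.sha256).digest()

    def merge(a, b):
        # Placeholder merge: XOR of the two PRF outputs (NOT the MERGE recommended in the paper).
        return bytes(x ^ y for x, y in zip(a, b))

    def merge_mac(part1, part2):
        return merge(prf(K1, part1), prf(K2, part2))

    # A static part (e.g. a recovered counter prefix on the CAN bus) hits the cache,
    # so only the fresh part costs a new PRF evaluation.
    tag = merge_mac(b"counter-prefix", b"payload-0x17")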
Comparison of Cost of Protection Against Differential Power Analysis of Selected Authenticated Ciphers
Authenticated ciphers, like all physical implementations of cryptography, are vulnerable to side-channel attacks, including differential power analysis (DPA). The t-test leakage detection methodology has been used to verify improved resistance of block ciphers to DPA after application of countermeasures. However, extension of the t-test methodology to authenticated ciphers is non-trivial, since authenticated ciphers require additional input and output conditions, complex interfaces, and long test vectors interlaced with protocol necessary to describe authenticated cipher operations. In this research we augment an existing side-channel analysis architecture (FOBOS) with t-test leakage detection for authenticated ciphers. We use this capability to show that implementations in the Spartan-6 FPGA of the CAESAR Round 3 candidates ACORN, ASCON, CLOC (AES and TWINE), SILC (AES, PRESENT, and LED), JAMBU (AES and SIMON), and Ketje Jr., as well as AES-GCM, are vulnerable to 1st order DPA. We then implement versions of the above ciphers, protected against 1st order DPA, using threshold implementations. The t-test leakage detection methodology is used to verify improved resistance to 1st order DPA of the protected cipher implementations. Finally, we benchmark unprotected and protected cipher implementations in the Spartan-6 FPGA, and compare the costs of 1st order DPA protection in terms of area, frequency, throughput, throughput-to-area (TP/A) ratio, power, and energy-per-bit. Our results show that ACORN has the lowest area (in LUTs), the highest TP/A ratio, and is the most energy-efficient of all DPA-resistant implementations. However, Ketje Jr. has the highest throughput.
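The fixed-vs-random t-test at the heart of this leakage detection methodology is just Welch's t-statistic computed point-wise across two sets of traces; a minimal numpy sketch follows, with random arrays standing in for measured traces and the customary |t| > 4.5 threshold used as the detection criterion (a common convention, not something specific to this paper).

    import numpy as np

    def welch_t(fixed_traces, random_traces):
        # Both inputs have shape (num_traces, num_samples).
        m1, m2 = fixed_traces.mean(axis=0), random_traces.mean(axis=0)
        v1, v2 = fixed_traces.var(axis=0, ddof=1), random_traces.var(axis=0, ddof=1)
        n1, n2 = len(fixed_traces), len(random_traces)
        return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

    # Leakage is flagged wherever |t| exceeds the customary 4.5 threshold.
    t = welch_t(np.random.randn(1000, 500), np.random.randn(1000, 500))
    print(np.where(np.abs(t) > 4.5)[0])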
Delegatable Attribute-based Anonymous Credentials from Dynamically Malleable Signatures
In this paper, we introduce the notion of delegatable attribute-based anonymous credentials (DAAC). Such systems offer fine-grained anonymous access control and give the credential holder the ability to issue more restricted credentials to other users. In our model, credentials are parameterized with attributes that (1) express what the credential holder himself has been certified for and (2) define which attributes he may issue to others. Furthermore, we present a practical construction of DAAC. For this construction, we deviate from the usual approach of embedding a certificate chain in the credential. Instead, we introduce a novel approach for which we identify a new primitive we call dynamically malleable signatures (DMS) as the main ingredient. This primitive may be of independent interest. We also give a first instantiation of DMS with efficient protocols.
Two attacks on rank metric code-based schemes: RankSign and an Identity-Based-Encryption scheme
RankSign [GRSZ14a] is a code-based signature scheme proposed to the NIST competition for quantum-safe cryptography [AGHRZ17] and, moreover, is a fundamental building block of a new Identity-Based Encryption (IBE) scheme [GHPT17a]. This signature scheme is based on the rank metric and enjoys remarkably small key sizes, about 10 KBytes for an intended security level of 128 bits. Unfortunately, we will show that all the parameters proposed for this scheme in [AGHRZ17] can be broken by an algebraic attack that exploits the fact that the augmented LRPC codes used in this scheme have very low weight codewords. Therefore, without RankSign the IBE cannot be instantiated at this time. As a second contribution, we will show that the problem is deeper than finding a new signature scheme in rank-based cryptography: we also found an attack on the generic problem upon which its security reduction relies. However, contrary to the RankSign scheme, it seems that the parameters of the IBE scheme could be chosen in order to avoid our attack. Finally, we have also shown that if one replaces the rank metric in the [GHPT17a] IBE scheme by the Hamming metric, then a devastating attack can be found.
Quantum FHE (Almost) As Secure As Classical
Fully homomorphic encryption (FHE) schemes allow arbitrary efficient computation to be applied to encrypted data without decrypting it first. In Quantum FHE (QFHE) we may want to apply an arbitrary efficient quantum computation to (classical or quantum) encrypted data.
We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with comparable properties to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded depth multi-key classical FHE.
Technically, we rely on the outline of Mahadev (arXiv 2017) which achieves this functionality by relying on super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.
Invisible Sanitizable Signatures and Public-Key Encryption are Equivalent
Sanitizable signature schemes are signature schemes which support the delegation of modification rights. The signer can allow a sanitizer to perform a set of admissible operations on the original message and then to update the signature, in such a way that basic security properties like unforgeability or accountability are preserved. Recently, Camenisch et al. (PKC 2017) devised new schemes with the previously unattained invisibility property. This property says that the set of admissible operations for the sanitizer remains hidden from outsiders. Subsequently, Beck et al. (ACISP 2017) gave an even stronger version of this notion and constructions achieving it. Here we characterize the invisibility property in both forms by showing that invisible sanitizable signatures are equivalent to IND-CPA-secure encryption schemes, and strongly invisible signatures are equivalent to IND-CCA2-secure encryption schemes. The equivalence is established by proving that invisible (resp. strongly invisible) sanitizable signature schemes yield IND-CPA-secure (resp. IND-CCA2-secure) public-key encryption schemes and that, vice versa, we can build (strongly) invisible sanitizable signatures given a corresponding public-key encryption scheme.
SoK: The Problem Landscape of SIDH
The Supersingular Isogeny Diffie-Hellman protocol (SIDH) has recently been the subject of increased attention in the cryptography community. Conjecturally quantum-resistant, SIDH has the feature that it shares the same data flow as ordinary Diffie-Hellman: two parties exchange a pair of public keys, each generated from a private key, and combine them to form a shared secret. To create a potentially quantum-resistant scheme, SIDH depends on a new family of computational assumptions involving isogenies between supersingular elliptic curves which replace both the discrete logarithm problem and the computational and decisional Diffie-Hellman problems. Like in the case of ordinary Diffie-Hellman, one is interested in knowing if these problems are related. In fact, more is true: there is a rich network of reductions between the isogeny problems securing the private keys of the participants in the SIDH protocol, the computational and decisional SIDH problems, and the problem of validating SIDH public keys. In this article we explain these relationships, which do not appear elsewhere in the literature, in hopes of providing a clearer picture of the SIDH problem landscape to the cryptography community at large.
Fast modular squaring with AVX512IFMA
Modular exponentiation represents a significant workload for public key cryptosystems. Examples include not only the classical RSA, DSA, and DH algorithms, but also the partially homomorphic Paillier encryption. As a result, efficient software implementations of modular exponentiation are an important target for optimization. This paper studies methods for using Intel's forthcoming AVX512 Integer Fused Multiply Accumulate (AVX512IFMA) instructions in order to speed up modular (Montgomery) squaring, which dominates the cost of the exponentiation. We further show how a minor tweak in the architectural definition of AVX512IFMA has the potential to further speed up modular squaring.
Impossible Differential Attack on QARMA Family of Block Ciphers
QARMA is a family of lightweight tweakable block ciphers, which is used to support a software protection feature in the ARMv8 architecture. In this paper, we study the security of the QARMA family against the impossible differential attack. First, we generalize the concept of truncated differences. Then, based on the generalized truncated difference, we construct the first 6-round impossible differential distinguisher of QARMA. Using the 6-round distinguisher and the time-and-memory trade-off technique, we present a 10-round impossible differential attack on QARMA. This attack requires (resp. ) encryption units, (resp. ) chosen plaintexts and 72-bit (resp. 144-bit) space for QARMA-64 (resp. QARMA-128). Further, if allowed a higher memory complexity (about 120-bit and 240-bit space for QARMA-64 and QARMA-128, respectively), our attack can break up to 11 rounds of QARMA. To the best of our knowledge, these results are currently the best results with respect to attacked rounds.
Breaking the Circuit-Size Barrier in Secret Sharing
We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F: {0,1}^n -> {0,1}. In such a scheme, a dealer distributes shares of a secret s among the n parties. Any subset of parties T should be able to put together their shares and reconstruct the secret s if F(T) = 1, and should have no information about s if F(T) = 0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F.
There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size , but the best lower bound is . Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. Indeed, several researchers have suggested the existence of a {\em representation size barrier} which implies that the right answer is closer to the upper bound, namely, .
In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size and a linear secret sharing scheme for any access structure with shares of size . As a contribution of independent interest, we also construct a secret sharing scheme with shares of size for monotone access structures, out of a total of of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
Differential Cryptanalysis of Round-Reduced Sparx-64/128
Sparx is a family of ARX-based block ciphers designed according to the long-trail strategy (LTS) that were both introduced by Dinu et al. at ASIACRYPT'16. Similar to the wide-trail strategy, the LTS allows provable upper bounds on the length of differential characteristics and linear paths. Thus, the cipher is a highly interesting target for third-party cryptanalysis. However, the only third-party cryptanalysis on Sparx-64/128 to date was given by Abdelkhalek et al. at AFRICACRYPT'17 who proposed impossible-differential attacks on 15 and 16 (out of 24) rounds.
In this paper, we present chosen-ciphertext differential attacks on 16 rounds of Sparx-64/128. First, we show a truncated-differential analysis that requires chosen ciphertexts and approximately encryptions. Second, we illustrate the effectiveness of boomerangs on Sparx by a rectangle attack that requires approximately chosen ciphertexts and about encryption equivalents. Finally, we also considered a yoyo attack on 16 rounds that, however, requires the full codebook and approximately encryption equivalents.
Estimate all the {LWE, NTRU} schemes!
We consider all LWE- and NTRU-based encryption, key encapsulation, and digital signature schemes proposed for standardisation as part of the Post-Quantum Cryptography process run by the US National Institute of Standards and Technology (NIST). In particular, we investigate the impact that different estimates for the asymptotic runtime of (block-wise) lattice reduction have on the predicted security of these schemes. Relying on the ``LWE estimator'' of Albrecht et al., we estimate the cost of running primal and dual lattice attacks against every LWE-based scheme, using every cost model proposed as part of a submission. Furthermore, we estimate the security of the proposed NTRU-based schemes against the primal attack under all cost models for lattice reduction.
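As an example of what a 'cost model for lattice reduction' looks like, the widely used core-SVP estimates charge a single SVP call in block size beta about 2^(0.292*beta) operations classically and 2^(0.265*beta) quantumly (standard sieving exponents from the literature); the block size itself would come from an estimator such as the LWE estimator of Albrecht et al. The snippet below only evaluates such cost models for a hypothetical block size and is not tied to any particular submission.

    # Core-SVP style cost models: cost (in bits) of one SVP call in dimension beta.
    COST_MODELS = {
        "classical_sieving": lambda beta: 0.292 * beta,
        "quantum_sieving":   lambda beta: 0.265 * beta,
    }

    beta = 480   # hypothetical block size reported by a lattice-reduction estimator
    for name, model in COST_MODELS.items():
        print(f"{name}: 2^{model(beta):.1f} operations")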
Time-Based Direct Revocable Ciphertext-Policy Attribute-Based Encryption with Short Revocation List
In this paper, we propose an efficient revocable Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme. We build on the direct revocation approach, embedding the revocation list into the ciphertext. However, since the revocation list will grow longer as time goes by, we further address this by proposing a secret key time validation technique so that users' keys expire on a given date, and the revocation list only needs to include those user keys revoked before their intended expiry date (e.g. those user keys which have been stolen before expiry). These keys can be removed from the revocation list after their expiry date in order to keep the revocation list short, as they can no longer be used to decrypt ciphertexts generated after their expiry time. This technique is derived from the Hierarchical Identity-Based Encryption (HIBE) mechanism and thus time periods are organized in a hierarchical structure: year, month, day. Users whose keys are valid for the whole year can decrypt any ciphertext associated with a time period of any month or any day within that year. By using this technique, the size of the public parameters and user secret keys can be greatly reduced. A bonus advantage of this technique is the support of discontinuity of user validity (e.g. taking unpaid leave).
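The year/month/day hierarchy can be pictured as prefix matching on time paths: a key issued for a prefix (for example a whole year) covers every ciphertext whose time path extends that prefix. The toy sketch below captures only this bookkeeping; the actual scheme realizes it with HIBE key delegation.

    def time_path(year, month=None, day=None):
        # Hierarchical time identity, e.g. ("2018",), ("2018", "5"), ("2018", "5", "21").
        return tuple(str(x) for x in (year, month, day) if x is not None)

    def key_covers(key_period, ct_period):
        # A key for a period decrypts ciphertexts whose time path extends the key's path.
        return ct_period[:len(key_period)] == key_period

    year_key = time_path(2018)                             # valid for the whole year
    print(key_covers(year_key, time_path(2018, 5, 21)))    # True
    print(key_covers(year_key, time_path(2019, 1, 1)))     # False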
Symbolic Side-Channel Analysis for Probabilistic Programs
In this paper we describe symbolic side-channel analysis techniques for detecting and quantifying information leakage, given in terms of Shannon entropy and min-entropy. Measuring the precise leakage is challenging due to the randomness and noise often present in program executions and side-channel observations. We account for this noise by introducing additional (symbolic) program inputs which are interpreted probabilistically, using symbolic execution with parameterized model counting. We also explore an approximate sampling approach for increased scalability. In contrast to typical Monte Carlo techniques, our approach works by sampling symbolic paths, representing multiple concrete paths, and uses pruning to accelerate computation and guarantee convergence to the optimal results. The key novelty of our approach is to provide bounds on the leakage that provably under- and over-approximate the real leakage. We implemented the techniques in the Symbolic PathFinder tool and demonstrate them on Java programs.
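For reference, the Shannon-entropy leakage being bounded is the mutual information I(S;O) = H(S) - H(S|O) between the secret and the side-channel observation. A tiny numpy example for a discrete channel given as a joint probability table (toy numbers, unrelated to the paper's benchmarks):

    import numpy as np

    def shannon_leakage(joint):
        # joint[s, o] = Pr[secret = s, observation = o]; returns I(S; O) in bits.
        ps = joint.sum(axis=1, keepdims=True)
        po = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ po)[nz])))

    # Toy channel: the observation mostly reveals the secret bit, with some noise.
    joint = np.array([[0.45, 0.05],
                      [0.05, 0.45]])
    print(shannon_leakage(joint))   # about 0.53 bits here, less than the full 1 bit due to noise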
Improved High-Order Conversion From Boolean to Arithmetic Masking
Masking is a very common countermeasure against side-channel attacks. When combining Boolean and arithmetic masking, one must be able to convert between the two types of masking, and the conversion algorithm itself must be secure against side-channel attacks. An efficient high-order Boolean to arithmetic conversion scheme was recently described at CHES 2017, with complexity independent of the register size. In this paper we describe a simplified variant with fewer mask refreshes, still with a proof of security in the ISW probing model. In practical implementations, our variant is roughly 25% faster.
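For orientation, the first-order version of this conversion problem is solved by Goubin's classic algorithm, sketched below for 32-bit words; the CHES 2017 scheme that this paper simplifies generalizes the idea to arbitrary masking order, so the snippet is background rather than the paper's algorithm.

    import secrets

    MASK = 0xFFFFFFFF   # work on 32-bit words

    def bool_to_arith_first_order(x_prime, r):
        # Input: Boolean shares with x = x_prime ^ r.
        # Output: arithmetic share A with x = (A + r) mod 2^32.
        gamma = secrets.randbits(32)
        T = x_prime ^ gamma
        T = (T - gamma) & MASK
        T ^= x_prime
        gamma ^= r
        A = x_prime ^ gamma
        A = (A - gamma) & MASK
        A ^= T
        return A

    x, r = 0xDEADBEEF, secrets.randbits(32)
    A = bool_to_arith_first_order(x ^ r, r)
    assert (A + r) & MASK == x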
A Note On Groth-Ostrovsky-Sahai Non-Interactive Zero-Knowledge Proof System
In 2006, Groth, Ostrovsky and Sahai designed a non-interactive zero-knowledge (NIZK) proof system [new version, J. ACM, 59(3), 1-35, 2012] for a plaintext being zero or one, using bilinear groups of composite order. Based on this system, they presented the first perfect NIZK argument system for any NP language and the first universally composable secure NIZK argument for any NP language in the presence of a dynamic/adaptive adversary.
This resolves a central open problem concerning NIZK protocols.
In this note, we remark that in their proof system the prover does not have to invoke the trapdoor key to generate witnesses. The mechanism is dramatically different from previous works, such as the Blum-Feldman-Micali proof system and the Blum-Santis-Micali-Persiano proof system. We would like to stress that the prover can cheat the verifier into accepting a false claim if the trapdoor key is available to him.
Last updated: 2018-07-10
Verifier Non-Locality in Interactive Proofs
In multi-prover interactive proofs, the verifier interrogates the provers and attempts to steal their knowledge. Other than that, the verifier's role has not been studied. Augmentation of the provers with non-local resources results in classes of languages that may not be NEXP. We have discovered that the verifier plays a much more important role than previously thought. Simply put, the verifier has the capability of providing non-local resources for the provers intrinsically. Therefore, standard MIPs may already contain protocols equivalent to one in which the prover is augmented non-locally. Existing MIPs' proofs of soundness implicitly depend on the fact that the verifier is not a non-local resource provider. The verifier's non-locality is a new unused tool and liability for protocol design and analysis. Great care should have been taken when claiming that ZKMIP=MIP. We show specific issues with existing protocols and revisit the proof of this statement. For this purpose, we also define a new model of multi-prover interactive proofs which we call "correlational confinement form".
Multi-power Post-quantum RSA
Special purpose factoring algorithms have discouraged the adoption of multi-power RSA, even in a post-quantum setting. We revisit the known attacks and find that a general recommendation against repeated factors is unwarranted. We find that one-terabyte RSA keys of the form are competitive with one-terabyte RSA keys of the form . Prime generation can be made to be a factor of 100000 times faster at a loss of at least but not more than bits of security against known attacks. The range depends on the relative cost of bit and qubit operations under the assumption that qubit operations cost bit operations for some constant .
ACPC: Efficient revocation of pseudonym certificates using activation codes
Vehicular communication (V2X) technologies allow vehicles to exchange information about road conditions and their own status, and thereby enhance transportation safety and efficiency. For broader deployment, however, such technologies are expected to address security and privacy concerns, preventing abuse by users and by the system's entities. In particular, the system is expected to enable the revocation of malicious vehicles, e.g., in case they send invalid information to their peers or to the roadside infrastructure; it should also prevent the system from being misused for tracking honest vehicles. Both features are enabled by Vehicular Public Key Infrastructure (VPKI) solutions such as Security Credential Management Systems (SCMS), one of the leading candidates for protecting V2X communication in the United States. Unfortunately, though, SCMS's original revocation mechanism can lead to large Certificate Revocation Lists (CRLs), which in turn impacts the bandwidth usage and processing overhead of the system. In this article, we propose a novel design called Activation Codes for Pseudonym Certificates (ACPC), which can be integrated into SCMS to address this issue. Our proposal is based on activation codes, short bitstrings without which certificates previously issued to a vehicle cannot be used by the latter, which are periodically distributed to non-revoked vehicles using an efficient broadcast mechanism. As a result, the identifiers of the corresponding certificates do not need to remain on the CRL for a long time, reducing the CRLs' size and streamlining the distribution and verification of any vehicle's revocation status. Besides describing ACPC in detail, we also compare it to similar-purpose solutions such as Issue First Activate Later (IFAL) and Binary Hash Tree based Certificate Access Management (BCAM). This analysis shows that our proposal not only brings security improvements (e.g., in terms of resilience against colluding system authorities), but also leads to processing and bandwidth overheads that are orders of magnitude smaller than those observed in the state of the art.
PPAD: Privacy Preserving Group-Based ADvertising in Online Social Networks
Services provided for free by Online Social Networks (OSNs) come with privacy concerns. Users' information kept by OSN providers is vulnerable to the risk of being sold to advertising firms. To protect user privacy, existing proposals utilize data encryption, which prevents the providers from monetizing users' information. Therefore, the providers would not be financially motivated to establish secure OSN designs based on users' data encryption. Addressing these problems, we propose the first Privacy Preserving Group-Based Advertising (PPAD) system that gives monetizing ability to the OSN providers. PPAD performs profile and advertisement matching without requiring the users or advertisers to be online, and is shown to be secure in the presence of honest but curious servers that are allowed to create fake users or advertisers. We also present advertisement accuracy metrics under various system parameters, providing a range of security-accuracy trade-offs.
DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks
This paper proposes DeepMarks, a novel end-to-end framework for systematic fingerprinting in the context of Deep Learning (DL). Remarkable progress has been made in the area of deep learning. Sharing trained DL models has become a trend that is ubiquitous in various fields ranging from biomedical diagnosis to stock prediction. As the availability and popularity of pre-trained models increase, it is critical to protect the Intellectual Property (IP) of the model owner. DeepMarks introduces the first fingerprinting methodology that enables the model owner to embed unique fingerprints within the parameters (weights) of her model and later identify undesired usages of her distributed models. The proposed framework embeds the fingerprints in the Probability Density Function (pdf) of trainable weights by leveraging the extra capacity available in contemporary DL models. DeepMarks is robust against fingerprint collusion as well as network transformation attacks, including model compression and model fine-tuning. Extensive proof-of-concept evaluations on the MNIST and CIFAR10 datasets, as well as a wide variety of deep neural network architectures such as Convolutional Neural Networks (CNNs) and Wide Residual Networks (WRNs), corroborate the effectiveness and robustness of the DeepMarks framework.
Revisiting Proxy Re-Encryption: Forward Secrecy, Improved Security, and Applications
We revisit the notion of proxy re-encryption (PRE), an enhanced public-key encryption primitive envisioned by Blaze et al. (Eurocrypt'98) and formalized by Ateniese et al. (NDSS'05) for delegating decryption rights from a delegator to a delegatee using a semi-trusted proxy. PRE notably allows one to craft re-encryption keys in order to equip the proxy with the power of transforming ciphertexts under a delegator's public key to ciphertexts under a delegatee's public key, while not learning anything about the underlying plaintexts.
We study an attractive cryptographic property for PRE, namely that of forward secrecy. In our forward-secret PRE (fs-PRE) definition, the proxy periodically evolves the re-encryption keys and permanently erases old versions while the delegator's public key is kept constant. As a consequence, ciphertexts for old periods are no longer re-encryptable and, in particular, cannot be decrypted anymore at the delegatee's end. Moreover, delegators evolve their secret keys too, and, thus, not even they can decrypt old ciphertexts once their key material from past periods has been deleted. This, as we will discuss, directly has application in short-term data/message-sharing scenarios.
Technically, we formalize fs-PRE. Thereby, we identify a subtle but significant gap in the well-established security model for conventional PRE and close it with our formalization (which we dub fs-PRE^+). We present the first provably secure and efficient constructions of fs-PRE as well as PRE (implied by the former) satisfying the strong fs-PRE^+ and PRE^+ notions, respectively. All our constructions are instantiable in the standard model under standard assumptions and our central building block are hierarchical identity-based encryption (HIBE) schemes that only need to be selectively secure.
General State Channel Networks
One of the fundamental challenges that hinder further adoption of decentralized cryptocurrencies is scalability. Because current cryptocurrencies require that all transactions are processed and stored on a distributed ledger -- the so-called blockchain -- transaction throughput is inherently limited. An important proposal to significantly improve scalability is off-chain protocols, where the massive amount of transactions is executed without requiring the costly interaction with the blockchain. Examples of off-chain protocols include payment channels and networks, which are currently deployed by popular cryptocurrencies such as Bitcoin and Ethereum. A further extension of payment networks envisioned for cryptocurrencies are so-called state channel networks. In contrast to payment networks that only support off-chain payments between users, state channel networks allow execution of arbitrary complex smart contracts. The main contribution of this work is to give the first full specification for general state channel networks. Moreover, we provide formal security definitions and prove the security of our construction against powerful adversaries. An additional benefit of our construction is the use of channel virtualization, which further reduces latency and costs in complex channel networks.
HydRand: Practical Continuous Distributed Randomness
A reliable source of randomness is not only an essential building block in various cryptographic, security, and distributed systems protocols, but also plays an integral part in the design of many new blockchain proposals. Consequently, the topic of publicly-verifiable, bias-resistant and unpredictable randomness has recently enjoyed increased attention. In particular random beacon protocols, aimed at continuous operation, can be a vital component for current Proof-of-Stake based distributed ledger proposals. We improve upon previous random beacon approaches with HydRand, a novel distributed protocol based on publicly-verifiable secret sharing (PVSS) to ensure unpredictability, bias-resistance, and public-verifiability of a continuous sequence of random beacon values. Furthermore, HydRand provides guaranteed output delivery of randomness at regular and predictable intervals in the presence of adversarial behavior and does not rely on a trusted dealer for the initial setup. Compared to existing PVSS based approaches that strive to achieve similar properties, our solution improves scalability by lowering the communication complexity from to . Furthermore, we are the first to present a detailed comparison of recently described schemes and protocols that can be used for implementing random beacons.
Practical attacks against the Walnut digital signature scheme
Recently, NIST started the process of standardizing quantum-resistant public-key cryptographic algorithms. WalnutDSA, the subject of this paper, is one of the 20 proposed signature schemes that are being considered for standardization. Walnut relies on a one-way function called E-Multiplication, which has a rich algebraic structure. This paper shows that this structure can be exploited to launch several practical attacks against the Walnut cryptosystem. The attacks work very well in practice; it is possible to forge signatures and compute equivalent secret keys for the 128-bit and 256-bit security parameters submitted to NIST in less than a second and in less than a minute respectively.
Sliding-Window Correlation Attacks Against Encryption Devices with an Unstable Clock
Power analysis side channel attacks rely on aligned traces. As a counter-measure, devices can use a jittered clock to misalign the power traces. In this paper we suggest a way to overcome this counter-measure, using an old method of integrating samples over time followed by a correlation attack (Sliding Window CPA). We theoretically re-analyze this general method with characteristics of jittered clocks and show that it is stronger than previously believed. We show that integration of samples over a suitably chosen window size actually amplifies the correlation both with and without jitter - as long as multiple leakage points are present within the window. We then validate our analysis on a new data-set of traces measured on a board implementing a jittered clock. Our experiments show that the SW-CPA attack with a well-chosen window size is very successful against a jittered clock counter-measure and significantly outperforms previous suggestions, requiring a much smaller set of traces to correctly identify the correct key.
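The preprocessing behind SW-CPA is simple: replace each sample by the sum of the samples inside a sliding window, so that a leakage point smeared by clock jitter still contributes to the integrated value, and then run ordinary CPA. A numpy sketch with random data standing in for traces and a hypothetical Hamming-weight model; window size, trace counts and the model are placeholders.

    import numpy as np

    def sliding_window_sum(traces, window):
        # traces: (num_traces, num_samples); sum over a sliding window along time.
        kernel = np.ones(window)
        return np.apply_along_axis(lambda t: np.convolve(t, kernel, mode="valid"), 1, traces)

    def cpa(traces, hypotheses):
        # Pearson correlation between each time sample and the hypothetical leakage
        # values (e.g. Hamming weight of a predicted S-box output) for one key guess.
        t = traces - traces.mean(axis=0)
        h = hypotheses - hypotheses.mean()
        return (h @ t) / np.sqrt((h @ h) * (t * t).sum(axis=0))

    traces = np.random.randn(2000, 5000)                       # stand-in for jittered measurements
    hyp = np.random.randint(0, 9, size=2000).astype(float)     # hypothetical Hamming-weight values
    corr = cpa(sliding_window_sum(traces, window=50), hyp)
    print(corr.argmax(), corr.max())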
Non-Malleable Secret Sharing
A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and error-correcting or detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or, the original secret is ''destroyed'' and the reconstruction outputs a string which is completely ''unrelated'' to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret, and, additionally, guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting.
Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers with each of the shares independently. Our construction is unconditional and features statistical non-malleability.
As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper with multiple shares. Our construction is unconditional and the adversary is allowed to jointly tamper with subsets of up to (t-1) shares. We believe that the techniques introduced in our construction may be of independent interest.
Inspired by the well-studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. of ACM '93), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or, the original message is essentially destroyed and the receiver receives an ''unrelated'' message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission.
Secure Multiplication for Bitslice Higher-Order Masking: Optimisation and Comparison
In this paper, we optimize the performance of several recent masking schemes in bitslice on 32-bit ARM devices and compare them, with a focus on multiplication. Our main conclusion is that efficiency (or randomness) gains always come at a cost, either in terms of composability or in terms of resistance against horizontal attacks. Our evaluations should therefore allow a designer to select a masking scheme based on implementation constraints and security requirements. They also highlight the increasing feasibility of (very) high-order masking that is offered by increasingly powerful embedded devices, with new opportunities for high-security devices in various contexts.
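For context, the baseline secure multiplication that such masking schemes refine is the ISW AND gadget: given d+1 Boolean shares of a and b, it outputs d+1 shares of a AND b using fresh randomness for every pair of share indices. The bit-level Python sketch below shows the gadget itself (bitsliced ARM code applies the same recipe word-wise); it is generic background, not one of the optimized variants compared in the paper.

    import secrets

    def isw_and(a_shares, b_shares):
        # a_shares, b_shares: lists of bits whose XORs equal a and b respectively.
        n = len(a_shares)
        c = [a_shares[i] & b_shares[i] for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r = secrets.randbits(1)
                c[i] ^= r
                c[j] ^= r ^ (a_shares[i] & b_shares[j]) ^ (a_shares[j] & b_shares[i])
        return c

    # Functional check only; security is about what a probing adversary learns,
    # which this test says nothing about.
    a, b = 1, 1
    a_sh = [secrets.randbits(1), secrets.randbits(1), 0]
    a_sh[2] = a ^ a_sh[0] ^ a_sh[1]
    b_sh = [secrets.randbits(1), secrets.randbits(1), 0]
    b_sh[2] = b ^ b_sh[0] ^ b_sh[1]
    c_sh = isw_and(a_sh, b_sh)
    assert c_sh[0] ^ c_sh[1] ^ c_sh[2] == (a & b)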
Secure top most significant genome variants search: iDASH 2017 competition
One of the three tracks of the iDASH Privacy & Security Workshop 2017 competition was to execute a whole genome variants search on private genomic data. Particularly, the search application was to find the top most significant SNPs (Single-Nucleotide Polymorphisms) in a database of genome records labeled with control or case. Privacy and confidentiality of genome data had to be ensured using Intel SGX enclaves. The typical use-case of this application is the multi-party computation (each party possessing one or several genome records) of the SNPs which statistically differentiate control and case genome datasets.
In this paper we discuss the solution submitted by our team to this competition. Our solution consists of two applications: (i) compress and encrypt genome files, and (ii) perform genome processing (search for the top most significant SNPs). We opted for a horizontal treatment of genome records and heavily used parallel processing. The Rust programming language was employed to develop both applications. Execution performance of the processing application scales well and very good performance metrics are obtained. The contest organizers selected it as the best submission among the received competition entries, and our team was awarded the first prize on this track.
On the cost of computing isogenies between supersingular elliptic curves
The security of the Jao-De Feo Supersingular Isogeny Diffie-Hellman (SIDH) key agreement scheme is based on the intractability of the Computational Supersingular Isogeny (CSSI) problem --- computing F_{p^2}-rational isogenies of degrees 2^e and 3^e between certain supersingular elliptic curves defined over F_{p^2}. The classical meet-in-the-middle attack on CSSI has an expected running time of O(p^{1/4}), but also has O(p^{1/4}) storage requirements. In this paper, we demonstrate that the van Oorschot-Wiener collision finding algorithm has a lower cost (but higher running time) for solving CSSI, and thus should be used instead of the meet-in-the-middle attack to assess the security of SIDH against classical attacks. The smaller parameter p brings significantly improved performance for SIDH.
Multilinear maps via secret ring
Garg, Gentry and Halevi (GGH13) described the first candidate multilinear maps using ideal lattices. However, Hu and Jia recently presented an efficient attack on the GGH13 map, which breaks the multipartite key exchange (MPKE) and witness encryption (WE) based on GGH13. In this work, we describe a new variant of GGH13 using a secret ring, which preserves the original functionality of GGH13. The security of our variant depends upon the following new hardness problem: given the determinant of the circular matrix of some element in a secret ring, find this secret ring and reconstruct this element.
DeepSigns: A Generic Watermarking Framework for Protecting the Ownership of Deep Learning Models
Deep Learning (DL) models have caused a paradigm shift in our ability to comprehend raw data in various important fields, ranging from intelligence warfare and healthcare to autonomous transportation and automated manufacturing. A practical concern, in the rush to adopt DL models as a service, is protecting the models against Intellectual Property (IP) infringement. The DL models are commonly built by allocating significant computational resources that process vast amounts of proprietary training data. The resulting models are therefore considered to be the IP of the model builder and need to be protected to preserve the owner's competitive advantage.
This paper proposes DeepSigns, a novel end-to-end IP protection framework that enables insertion of coherent digital watermarks in contemporary DL models. DeepSigns, for the first time, introduces a generic watermarking methodology that can be used for protecting the DL owner's IP rights in both white-box and black-box settings, where the adversary may or may not have knowledge of the model internals. The suggested methodology is based on embedding the owner's signature (watermark) in the probability density function (pdf) of the data abstraction obtained in different layers of a DL model. DeepSigns can demonstrably withstand various removal and transformation attacks, including model compression, model fine-tuning, and watermark overwriting. Proof-of-concept evaluations on the MNIST and CIFAR10 datasets, as well as a wide variety of neural network architectures including Wide Residual Networks, Convolutional Neural Networks, and Multi-Layer Perceptrons, corroborate DeepSigns' effectiveness and applicability.
Chosen Message Attack on Multivariate Signature ELSA at Asiacrypt 2017
One of the most efficient post-quantum signature schemes is Rainbow, whose hardness is based on the multivariate quadratic polynomial (MQ) problem. ELSA, a new multivariate signature scheme proposed at Asiacrypt 2017, has a similar construction to Rainbow. Its advantages, compared to Rainbow, are its smaller secret key and faster signature generation. In addition, its existential unforgeability against an adaptive chosen-message attack has been proven under the hardness of the MQ-problem induced by a public key of ELSA with a specific parameter set in the random oracle model. The high efficiency of ELSA is derived from a set of hidden quadratic equations used in the process of signature generation. However, the hidden quadratic equations yield a vulnerability. In fact, a piece of information about these equations can be recovered by using valid signatures, and an equivalent secret key can be partially recovered from it. In this paper, we describe how to recover an equivalent secret key of ELSA by a chosen message attack. Our experiments show that we can recover an equivalent secret key for the claimed -bit security parameter of ELSA on a standard PC in seconds with valid signatures.
Last updated: 2019-02-01
Error Estimation of Practical Convolution Discrete Gaussian Sampling with Rejection Sampling
Discrete Gaussian sampling is a fundamental tool in lattice cryptography which has been used in digital signatures, identity-based encryption, attribute-based encryption, zero-knowledge proofs and fully homomorphic cryptosystems. As a subroutine of lattice-based schemes, high precision sampling usually leads to a high security level but also brings large time and space complexity. In order to optimize security and efficiency, how to achieve a higher security level with a lower precision has become a widely studied open question. A popular method for addressing this question is to use metrics other than the statistical distance to measure errors. The proposed metrics include KL-divergence, Rényi-divergence, and Max-log distance, and these techniques are supposed to achieve security with precision or even less. However, we note that error bounds are not universal but depend on specific sampling methods. For example, if we use the popular rejection sampling, there will be large gaps between some existing results and practical experiments in terms of error bounds. In this paper, we make two novel observations about practical errors. As an application of the observations, we consider the convolution theorem of discrete Gaussian sampling using the rejection method and reformulate it into a practical one with much more accurate error bounds. We describe a rigorous proof of it and demonstrate that the bounds are tightly matched by our experiments. It seems that the statistical distance is a better metric to distinguish distributions by looking at characteristic functions of probabilities. Our bounds under KL-divergence, Rényi-divergence and Max-log distance using rejection sampling may have no influence on estimating security levels, but this successful application reveals that the proposed observations are very effective in analyzing practical probabilities. Moreover, some technical tools, including several improved inequalities for the discrete Gaussian measure, are developed.
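For readers unfamiliar with the sampler under discussion, rejection sampling for a discrete Gaussian draws a candidate integer from a bounded interval and accepts it with probability proportional to the Gaussian weight. The straightforward floating-point sketch below (tail cut and parameters chosen arbitrarily) is exactly the kind of finite-precision implementation whose error such analyses are about.

    import math, secrets

    def sample_discrete_gaussian(sigma, center=0.0, tail=12):
        # Rejection sampling from the discrete Gaussian D_{Z, sigma, center},
        # truncated to center +/- tail*sigma.
        lo = int(math.floor(center - tail * sigma))
        hi = int(math.ceil(center + tail * sigma))
        rng = secrets.SystemRandom()
        while True:
            x = rng.randint(lo, hi)
            rho = math.exp(-((x - center) ** 2) / (2 * sigma ** 2))   # unnormalized weight
            if rng.random() < rho:
                return x

    samples = [sample_discrete_gaussian(3.2) for _ in range(10000)]
    print(sum(samples) / len(samples))   # should be close to the center, 0.0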
On perfectly secure 2PC in the OT-hybrid model
A well-known result by Kilian (ACM 1988) asserts that general secure two-party computation (2PC) with statistical security can be based on OT. Specifically, in the client-server model, where only one party -- the client -- receives an output, Kilian's result shows that given the ability to call an ideal oracle that computes OT, two parties can securely compute an arbitrary function of their inputs with unconditional security. Ishai et al. (EUROCRYPT 2011) further showed that this can be done efficiently for every two-party functionality in in a single round.
However, their results only achieve statistical security, namely, it is allowed to have some error in security. This leaves open the natural question as to which client-server functionalities can be computed with perfect security in the OT-hybrid model, and what is the round complexity of such computation. So far, only a handful of functionalities were known to have such protocols. In addition to the obvious theoretical appeal of the question towards better understanding secure computation, perfect, as opposed to statistical reductions, may be useful for designing secure multiparty protocols with high concrete efficiency, achieved by eliminating the dependence on a security parameter.
In this work, we identify a large class of client-server functionalities , where the server's domain is larger than the client's domain , that have a perfect reduction to OT. Furthermore, our reduction is 1-round using an oracle to secure evaluation of many parallel invocations of -bit-OT, as done by Ishai et al. (EUROCRYPT 2011). Interestingly, the set of functions that we are able to compute was previously identified by Asharov (TCC 2014) in the context of fairness in two-party computation, naming these functions full-dimensional. Our result also extends to randomized non-Boolean functions satisfying .
Isolated Curves and the MOV Attack
We present a variation on the CM method that produces elliptic curves over prime fields with nearly prime order that do not admit many efficiently computable isogenies. Assuming the Bateman-Horn conjecture, we prove that elliptic curves produced this way almost always have a large embedding degree, and thus are resistant to the MOV attack on the ECDLP.
State Separation for Code-Based Game-Playing Proofs
The security analysis of real-world protocols involves reduction steps that are conceptually simple but still have to account for many protocol complications found in standards and implementations. Taking inspiration from universal composability, abstract cryptography, process algebras, and type-based verification frameworks, we propose a method to simplify large reductions, avoid mistakes in carrying them out, and obtain concise security statements.
Our method decomposes monolithic games into collections of stateful *packages* representing collections of oracles that call one another using well-defined interfaces. Every component scheme yields a pair of a real and an ideal package. In security proofs, we then successively replace each real package with its ideal counterpart, treating the other packages as the reduction. We build this reduction by applying a number of algebraic operations on packages justified by their state separation. Our method handles reductions that emulate the game perfectly, and leaves more complex arguments to existing game-based proof techniques such as the code-based analysis suggested by Bellare and Rogaway. It also facilitates computer-aided proofs, inasmuch as the perfect reduction steps can be automatically discharged by proof assistants.
We illustrate our method on two generic composition proofs: (1) a proof of self-composition using a hybrid argument; and (2) the composition of keying and keyed components. For concreteness, we apply them to the KEM-DEM proof of hybrid-encryption by Cramer and Shoup and to the composition of forward-secure game-based key exchange protocols with symmetric-key protocols.
Efficient four-dimensional GLV curve with high security
We apply Smith's construction to generate four-dimensional GLV curves with fast arithmetic in the group law as well as in the base field. As Costello and Longa did in [5] for a 128-bit security level, we obtained an interesting curve for fast GLV scalar multiplication, providing a high level of security (254 bits). Our curve is defined over a well-known finite field: where . Finally, we make explicit the two endomorphisms used during the GLV decomposition.
Geosocial Query with User-Controlled Privacy
Geosocial applications collect (and record) users’ precise location data to perform proximity computations, such as notifying a user or triggering a service when a friend is within geographic proximity. With the growing popularity of mobile devices that have sophisticated localization capability, it becomes more convenient and tempting to share location data. But precise location data in plaintext exposes not only a user’s whereabouts but also mobility patterns that are sensitive and cannot be changed easily. This paper proposes cryptographic protocols on top of spatial cloaking to reduce the resolution of location and balance between data utility and privacy. Specifically, we are interested in the setting that allows users to send periodic updates of precise coordinates and define privacy preferences to control the granularity of the location, both in an encrypted format. Our system supports three kinds of user queries — “Where is this user?”, “Who is nearby?”, and “How close is this user from another user?”. Also, we develop a new algorithm to improve multidimensional data access by reducing significant masking error. Our prototype and various performance evaluations on different platforms demonstrate that our system is practical.
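As an aside, the resolution-reduction step performed by spatial cloaking can be illustrated in the clear with a small sketch (our own illustration; the grid size, the privacy-level scaling and the function name are assumptions, and the paper applies this idea to encrypted coordinates rather than plaintext ones):
```python
# Hypothetical sketch of grid-based spatial cloaking in the clear: a precise
# coordinate is snapped to the centre of a grid cell whose size grows with the
# user's privacy level. The paper performs the corresponding computations on
# encrypted data, which is not modelled here.
import math

def cloak(lat: float, lon: float, privacy_level: int) -> tuple:
    """Return the centre of the grid cell containing (lat, lon).

    privacy_level 0 uses ~0.001-degree cells; each extra level makes the
    cells 10x coarser (both numbers are illustrative assumptions).
    """
    cell = 0.001 * (10 ** privacy_level)               # cell size in degrees
    snapped_lat = (math.floor(lat / cell) + 0.5) * cell
    snapped_lon = (math.floor(lon / cell) + 0.5) * cell
    return snapped_lat, snapped_lon

print(cloak(53.34981, -6.26031, 0))   # fine-grained location
print(cloak(53.34981, -6.26031, 2))   # coarse-grained location for a stricter preference
```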
21 - Bringing Down the Complexity: Fast Composable Protocols for Card Games Without Secret State
While many cryptographic protocols for card games have been proposed, all of them focus on card games where players have some state that must be kept secret from each other, e.g. closed cards and bluffs in Poker. This scenario poses many interesting technical challenges, which are addressed with cryptographic tools that introduce significant computational and communication overheads (e.g. zero-knowledge proofs). In this paper, we consider the case of games that do not require any secret state to be maintained (e.g. Blackjack and Baccarat). Basically, in these games, cards are chosen at random and then publicly advertised, allowing for players to publicly announce their actions (before or after cards are known). We show that protocols for such games can be built from very lightweight primitives such as digital signatures and canonical random oracle commitments, yielding constructions that far outperform all known card game protocols in terms of communication, computational and round complexities. Moreover, in constructing highly efficient protocols, we introduce a new technique based on verifiable random functions for extending coin tossing, which is at the core of our constructions. Besides ensuring that the games are played correctly, our protocols support financial rewards and penalties enforcement, guaranteeing that winners receive their rewards and that cheaters get financially penalized. In order to do so, we build on blockchain-based techniques that leverage the power of stateful smart contracts to ensure fair protocol execution.
Rethinking Large-Scale Consensus
In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permissionless setting is fundamentally more difficult than in the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. Finally, we survey recent results, including how to avoid well-known pain points in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting while achieving new properties that are not attained by classical approaches.
On the Ineffectiveness of Internal Encodings - Revisiting the DCA Attack on White-Box Cryptography
The goal of white-box cryptography is to implement cryptographic algorithms securely in software in the presence of an adversary that has complete access to the software's program code and execution environment. In particular, white-box cryptography needs to protect the embedded secret key from being extracted. As of today, all publicly available white-box implementations have turned out to be susceptible to key extraction attacks. Meanwhile, white-box cryptography is widely deployed in commercial implementations that claim to be secure. Bos, Hubain, Michiels and Teuwen (CHES 2016) introduced differential computational analysis (DCA), the first automated attack on white-box cryptography.
The DCA attack performs a statistical analysis on execution traces. These traces contain information about the execution, such as memory addresses or register values, that is collected via binary instrumentation tooling during the encryption process. The white-box implementations that were attacked by Bos et al., as well as white-box implementations that have been described in the literature, protect the embedded key by using internal encoding techniques that were introduced by Chow, Eisen, Johnson and van Oorschot (SAC 2002). In this paper, we prove rigorously that such internal encodings are too weak to protect against the DCA attack and thereby explain the experimental success of the DCA attack of Bos et al.
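For readers unfamiliar with DCA, the statistical core of such an attack can be sketched as follows (our own simplified illustration under assumed inputs: `plaintexts` holds one plaintext byte per trace, `traces` holds the recorded software traces as a 0/1 matrix, and `sbox` is the AES S-box as a 256-entry array; this is not the tooling used by Bos et al.):
```python
# Simplified DCA-style key ranking: for every key-byte guess, predict one bit
# of the first-round AES S-box output and correlate the prediction with every
# sample of the recorded software execution traces. Trace acquisition via
# binary instrumentation is assumed to have happened already.
import numpy as np

def dca_rank_key_guesses(plaintexts, traces, sbox, target_bit=0):
    """plaintexts: (N,) uint8 array; traces: (N, T) 0/1 array; sbox: 256-entry array."""
    scores = np.zeros(256)
    for guess in range(256):
        predicted = (sbox[plaintexts ^ guess] >> target_bit) & 1   # shape (N,)
        if predicted.std() == 0:
            continue
        best = 0.0
        for t in range(traces.shape[1]):
            column = traces[:, t]
            if column.std() == 0:
                continue
            best = max(best, abs(np.corrcoef(predicted, column)[0, 1]))
        scores[guess] = best
    return np.argsort(scores)[::-1]   # most likely key-byte guesses first
```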
Outsourcing Modular Exponentiation in Cryptographic Web Applications
Modern web applications using advanced cryptographic methods may need to calculate a large number of modular exponentiations. Performing such calculations in the web browser efficiently is a known problem. We propose a solution to this problem based on outsourcing the computational effort to untrusted exponentiation servers. We present several efficient outsourcing protocols for different settings and a practical implementation consisting of a JavaScript client library and a server application. Compared to browser-only computation, our solution improves the overall computation time by an order of magnitude.
This is an extended version of a paper accepted and presented at the Voting’18 workshop of the Financial Cryptography and Data Security 2018 conference. It will be included in the conference’s LNCS proceedings and available on the Springer web site.
Clusters of Re-used Keys
We survey the long-term cryptographic public keys (for SSH, e-mail and HTTP protocols) on hosts that run the SMTP protocol in ten countries. We find that keys are very widely re-used across multiple IP addresses, and even across autonomous systems. From one run scanning 18,268 hosts in Ireland that run at least one TLS or SSH service, approximately 53% of the hosts involved are using keys that are also seen on some other IP address. When two IP addresses share a key, those two IP addresses are considered members of the same cluster. In the same scan we find a maximum cluster size of 1,991 hosts and a total of 1,437 clusters, mostly with relatively few hosts per cluster (median cluster size was 26.5, most common cluster size is two). In that scan, of the 54,447 host/port combinations running cryptographic protocols, we only see 20,053 unique keys (36%), indicating significant key re-use across hosts and ports. Scans in other countries demonstrate the same issue. We describe the methodology followed and the published source code and public data sources that enable researchers to replicate, validate and extend these results. Clearly, such key re-use can create undesirable security and privacy dependencies between cluster members. A range of causes for key sharing have been confirmed, including multi-homed hosts, mirroring, large-scale use of wildcard public key certificates, cloning of virtual machines that already contain host keys, and vendors shipping products with hard-coded or default key pairs. Discussions with local (Irish) asset-owners to better understand the reasons for key re-use and to possibly assist with improving network posture are ongoing, and we will continue to incorporate resulting findings in revisions of this article.
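The clustering rule described above ("two IP addresses that share a key belong to the same cluster") is transitive, so clusters are simply the connected components of the key-sharing graph. A minimal sketch of that computation (our own illustration; the `(ip, key_fingerprint)` input format is an assumption, not the authors' published tooling):
```python
# Cluster hosts by shared keys with a tiny union-find: any two IPs ever seen
# with the same key fingerprint end up in the same cluster.
from collections import defaultdict

def cluster_hosts(observations):
    """observations: iterable of (ip, key_fingerprint) pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    first_ip_for_key = {}
    for ip, key in observations:
        find(ip)                            # register the IP even if its key is unique
        if key in first_ip_for_key:
            union(ip, first_ip_for_key[key])
        else:
            first_ip_for_key[key] = ip

    clusters = defaultdict(set)
    for ip in parent:
        clusters[find(ip)].add(ip)
    return list(clusters.values())

# Two hosts sharing key K1 form one cluster; the third host is its own cluster.
print(cluster_hosts([("192.0.2.1", "K1"), ("192.0.2.2", "K1"), ("198.51.100.7", "K2")]))
```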
In search of CurveSwap: Measuring elliptic curve implementations in the wild
We survey elliptic curve implementations from several vantage points. We perform internet-wide scans for TLS on a large number of ports, as well as SSH and IPsec to measure elliptic curve support and implementation behaviors, and collect passive measurements of client curve support for TLS. We also perform active measurements to estimate server vulnerability to known attacks against elliptic curve implementations, including support for weak curves, invalid curve attacks, and curve twist attacks. We estimate that 0.77% of HTTPS hosts, 0.04% of SSH hosts, and 4.04% of IKEv2 hosts that support elliptic curves do not perform curve validity checks as specified in elliptic curve standards. We describe how such vulnerabilities could be used to construct an elliptic curve parameter downgrade attack called CurveSwap for TLS, and observe that there do not appear to be combinations of weak behaviors we examined enabling a feasible CurveSwap attack in the wild. We also analyze source code for elliptic curve implementations, and find that a number of libraries fail to perform point validation for JSON Web Encryption, and find a flaw in the Java and NSS multiplication algorithms.
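As a concrete illustration of the "curve validity checks" mentioned above, the most basic check is that a received point actually lies on the intended curve; skipping it is what enables invalid-curve and twist attacks. A minimal sketch for NIST P-256 (the curve constants are the public standard values; the helper name is ours):
```python
# Point-on-curve check for NIST P-256: verify y^2 = x^3 + ax + b (mod p).
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x: int, y: int) -> bool:
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

# The standard P-256 base point must pass; a perturbed point must be rejected.
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
assert on_curve(GX, GY)
assert not on_curve(GX, GY + 1)
```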
Fine-Grained Secure Computation
This paper initiates a study of Fine-Grained Secure Computation: i.e. the construction of secure computation primitives against "moderately complex" adversaries. We present definitions and constructions for compact Fully Homomorphic Encryption and Verifiable Computation secure against (non-uniform) adversaries. Our results do not require the existence of one-way functions and hold under a widely believed separation assumption, namely . We also present two application scenarios for our model: (i) hardware chips that prove their own correctness, and (ii) protocols against rational adversaries, potentially relevant to the Verifier's Dilemma in smart-contract transactions such as Ethereum.
Asynchronous ratcheted key exchange
Ratcheted key exchange (RKE) is a cryptographic technique used in instant messaging systems like Signal and the WhatsApp messenger for attaining strong security in the face of state exposure attacks. RKE received academic attention in the recent works of Cohn-Gordon et al. (EuroS&P 2017) and Bellare et al. (CRYPTO 2017). While the former is analytical in the sense that it aims primarily at assessing the security that one particular protocol does achieve (which might be weaker than the notion that it should achieve), the authors of the latter develop and instantiate a notion of security from scratch, independently of existing implementations. Unfortunately, however, their model is quite restricted, e.g. for considering only unidirectional communication and the exposure of only one of the two parties.
In this article we resolve the limitations of prior work by developing alternative security definitions, for unidirectional RKE as well as for RKE where both parties contribute. We follow a purist approach, aiming at finding strong yet convincing notions that cover a realistic communication model with fully concurrent operation of both participants. We further propose secure instantiations (as the protocols analyzed or proposed by Cohn-Gordon et al. and Bellare et al. turn out to be weak in our models). While our scheme for the unidirectional case builds on a generic KEM as the main building block (unlike prior work, which explicitly requires Diffie-Hellman), our schemes for bidirectional RKE require a stronger, HIBE-like component.
ExpFault: An Automated Framework for Exploitable Fault Characterization in Block Ciphers (Revised Version)
Malicious exploitation of faults for extracting secrets is one of the most practical and potent threats to modern cryptographic primitives. Interestingly, not every possible fault for a cryptosystem is maliciously exploitable, and evaluation of the exploitability of a fault is nontrivial. In order to devise precise defense mechanisms against such rogue faults, comprehensive knowledge is required about the exploitable part of the fault space of a cryptosystem.
Unfortunately, the fault space is diversified and of formidable size even when a single crypto-primitive is considered, and traditional manual fault analysis techniques may often fall short of practically covering such a fault space within reasonable time. An automation for analyzing individual fault instances for their exploitability is thus inevitable. Such an automation is supposed to work as the core engine for analyzing the fault spaces of cryptographic primitives. In this paper, we propose an automation for evaluating the exploitability status of fault instances in block ciphers, mainly in the context of Differential Fault Analysis (DFA) attacks. The proposed framework is generic and scalable, which are perhaps the two most important features for covering diversified fault spaces of formidable size originating from different ciphers. As a proof of concept, we reconstruct some known attack examples on AES and PRESENT using the framework and finally analyze the recently proposed cipher GIFT [BPP+17] for the first time. It is found that the secret key of GIFT can be determined with 2 nibble fault instances injected consecutively at the beginning of the 25th and 23rd rounds, with a remaining key space complexity of 2^7.06.
Learning strikes again: the case of the DRS signature scheme
Lattice signature schemes generally require particular care when it comes to preventing secret information from leaking through signature transcript. For example, the Goldreich-Goldwasser-Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev (Eurocrypt 2006). Several heuristic countermeasures were also shown vulnerable to similar statistical attacks.
At PKC 2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that is in the round 1 of the NIST post-quantum cryptography project.
In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover some partial information of the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret seems more intricate. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis.
While we only recover partial secret coefficients, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided that 100 000 signatures are available, the secret key may be recovered using BKZ-138 for the first set of DRS parameters submitted to the NIST. This puts the security level of this parameter set below 80 bits (maybe even 70 bits), to be compared with an original claim of 128 bits.
Furthermore, we review the DRS v2 scheme that is proposed to resist the above statistical attack. For this countermeasure, while one may not recover partial secret coefficients exactly by learning, it seems feasible to gain some information on the secret key. Exploiting this information, we can still effectively reduce the cost of lattice attacks.
Privacy Amplification from Non-malleable Codes
Non-malleable Codes give us the following property: their codewords cannot be tampered into codewords of related messages. Privacy Amplification allows parties to convert their weak shared secret into a fully hidden, uniformly distributed secret key, while communicating on a fully tamperable public channel. In this work, we show how to construct a constant round privacy amplification protocol from any augmented split-state non-malleable code. Existentially, this gives us another primitive (in addition to optimal non-malleable extractors) whose optimal construction would solve the long-standing open problem of building constant round privacy amplification with optimal entropy loss. Instantiating our code with the current best known NMC gives us an -round privacy amplification protocol with entropy loss and min-entropy requirement , where is the security parameter and is the length of the shared weak secret. In fact, for our result, even the weaker primitive of Non-malleable Randomness Encoders suffice.
We view our result as an exciting connection between two of the most fascinating and well-studied information theoretic primitives, non-malleable codes and privacy amplification.
Linear Biases in AEGIS Keystream
AEGIS is an authenticated cipher introduced at SAC 2013, which takes advantage of AES-NI instructions to reach outstanding speed in software. Like LEX, Fides, as well as many sponge-based designs, AEGIS leaks part of its inner state each round to form a keystream. In this paper, we investigate the existence of linear biases in this keystream. Our main result is a linear mask with bias on the AEGIS-256 keystream. The resulting distinguisher can be exploited to recover bits of a partially known message encrypted times, regardless of the keys used. We also consider AEGIS-128, and find a surprising correlation between ciphertexts at rounds and , although the biases would require data to be detected. Due to their data requirements, neither attack threatens the practical security of the cipher.
Simulations of Optical Emissions for Attacking AES and Masked AES
In this paper we present a novel attack based on photonic emission analysis targeting software implementations of AES. We focus on the particular case in which the attacker can collect the photonic emission of a limited number of sense amplifiers (e.g. only one) of the SRAM storing the S-Box. The attack consists in making hypotheses on the secret key based on the knowledge of the partial output of the SubBytes operation. We also consider the possibility of attacking a masked implementation of AES using photonic emission analysis. In the case of masking, the attacker needs 2 leakages of the same encryption to overcome the randomization of the masks. For our analysis, we assume the same physical setup described in other previous works. Reported results are based on simulations with some hypotheses on the probability of photonic emission of a single transistor.
Direct Anonymous Attestation with Efficient Verifier-Local Revocation for Subscription System
In an anonymous subscription system (ASS), a subscribed user (SU) is able to access the services of a service provider without having to reveal its true identity. For an SU computing platform that is compliant with the Trusted Platform Module (TPM) standard, direct anonymous attestation (DAA) is an appropriate cryptographic protocol for realizing ASS, since DAA enables privacy-preserving authentication of the SU platform. This approach takes advantage of a cryptographic key that is securely embedded in the platform's hardware. Although the computing industry and academia have made significant strides in developing secure and sound DAA schemes, these schemes share a common drawback that may act as a major obstacle to their widespread deployment. In all of the existing schemes, the SU suffers from significant computational and communication costs that increase proportionally to the size of the revocation list. This drawback renders the existing schemes impractical when the size of the revocation list grows beyond a relatively modest size. In this paper, we propose a novel scheme called Lightweight Anonymous Subscription with Efficient Revocation (LASER) that addresses this very problem. In LASER, the computational and communication costs of the SU's signature are multiple orders of magnitude lower than the prior art. LASER achieves this significant performance improvement by shifting most of the computational and communication costs from the DAA's online procedure (i.e., signature generation) to its offline procedure (i.e., acquisition of keys/credentials). We have conducted a thorough analysis of LASER's performance-related features and compared the findings to the prior art. We have also conducted a comprehensive evaluation of LASER by implementing it on a laptop platform with an on-board TPM. To the best of our knowledge, the results presented in this paper represent the first implementation and analysis of a scheme using an actual TPM cryptoprocessor that is compliant with the most recent TPM specification version 2.0. We have thoroughly analyzed the security of LASER in the random oracle model.
Secure and Scalable Document Similarity on Distributed Databases: Differential Privacy to the Rescue
Privacy-preserving collaborative data analysis enables richer models than what each party can learn with their own data. Secure Multi-Party Computation (MPC) offers a robust cryptographic approach to this problem, and in fact several protocols have been proposed for various data analysis and machine learning tasks. In this work, we focus on secure similarity computation between text documents, and the application to k-nearest neighbors (k-NN) classification. Due to its non-parametric nature, k-NN presents scalability challenges in the MPC setting. Previous work addresses these by introducing non-standard assumptions about the abilities of an attacker, for example by relying on non-colluding servers. In this work, we tackle the scalability challenge from a different angle, and instead introduce a secure preprocessing phase that reveals differentially private (DP) statistics about the data. This allows us to exploit the inherent sparsity of text data and significantly speed up all subsequent classifications.
Constant Size Traceable Ring Signature Scheme without Random Oracles
Several traceable (or linkable) identity-based ring signature schemes have been proposed to date. However, most of them are constructed in the random oracle model. In this paper, we present a fully traceable ring signature (TRS) scheme without random oracles, which has constant-size signatures and a security reduction to the computational Diffie-Hellman (CDH) assumption. We also give a formal security model for traceable ring signatures and prove that the proposed scheme has the properties of traceability and anonymity.
Secure Cloud Storage Scheme Based On Hybrid Cryptosystem
This paper presents a secure cloud storage scheme based on a hybrid cryptosystem, which consists of Elliptic Curve Cryptography (ECC), the Advanced Encryption Standard (AES), and a one-way hash function. Here, the data owner exports a large volume of encrypted data to a cloud storage provider. The exported encrypted data is over-encrypted by the cloud storage provider, and the data is sent to the requesting user. An existing hybrid-cryptosystem-based dynamic key management scheme with hierarchical access control has been incorporated in our scheme. The key management scheme groups users into various security classes and helps to derive, efficiently and directly, the secret keys of the lower-order security classes. The incorporated key management scheme in our proposed scheme incurs low computational, communication, and storage overheads for key generation and derivation purposes. The security analysis and the simulation results obtained with the AVISPA tool (a formal security verification tool) show that the proposed scheme is protected from adversaries. This scheme is useful in `owner-write-users-read' application areas, and end users may securely use resource-constrained wireless mobile devices in this proposed scheme.
AuCPace: Efficient verifier-based PAKE protocol tailored for the IIoT
Increasingly, connectivity is becoming integrated in products and devices that previously operated in a stand-alone setting. This observation holds for many consumer applications in the so-called "Internet of Things" (IoT) as well as for corresponding industry applications (IIoT), such as industrial process sensors. Often the only practicable means for authentication of human users is a password. The security of password-based authentication schemes frequently forms the weakest point of the security infrastructure.
The missing integration of IoT or IIoT devices in a WEB-PKI should be considered a significant real-world risk. In this setting, verifier-based password-authenticated key-exchange (V-PAKE) protocols are known to provide a significant security improvement by preventing phishing and offline dictionary attacks.
For the IIoT, availability concerns in the case of failures of (part of) the communication infrastructure make local storage of access credentials mandatory. The larger threat of physical attacks makes it important to use memory-hard password hashing.
This paper presents a correspondingly tailored protocol, AuCPace, together with a security proof within the Universal Composability (UC) framework considering fully adaptive adversaries. AuCPace uses CPace as a building block, which could also be used as a stand-alone balanced PAKE protocol. Moreover, we show how AuCPace could optionally provide pre-computation attack resistance. In this paper we also introduce a new security notion of partially augmented PAKE that provides specific performance advantages for constrained servers.
We also present an actual instantiation of our protocol, AuCPace25519, and present performance results on ARM Cortex-M0 and Cortex-M4 microcontrollers, demonstrating the suitability of AuCPace for the constrained-server setting.
This specific paper revision is an update of the journal version. It was set up for the PAKE selection process of the CFRG working group of the IETF, for which AuCPace and CPace have been nominated.
Collateral Damage of Facebook Applications: a Comprehensive Study
Third-party applications on Facebook can collect personal data of the users who install them, but also of their friends. This raises serious privacy issues as these friends are not notified by the applications nor by Facebook, and they have not given consent. This paper presents a detailed multi-faceted study of the collateral information collection of the applications on Facebook. To investigate the views of the users, we designed a questionnaire and collected the responses of 114 participants. The results show that participants are concerned about the collateral information collection and in particular about the lack of notification and of mechanisms to control the data collection. Based on real data, we compute the likelihood of collateral information collection affecting users: we show that the probability is significant and greater than 80% for popular applications such as TripAdvisor. We also demonstrate that a substantial amount of profile data can be collected by applications, which enables application providers to profile users. To investigate whether collateral information collection is an issue to users’ privacy we analysed the legal framework in light of the new General Data Protection Regulation. We provide a detailed analysis of the entities involved and investigate which entity is accountable for the collateral information collection. To provide countermeasures, we propose a privacy dashboard extension that implements privacy scoring computations to enhance transparency towards collateral information collection. Furthermore, we discuss alternative solutions highlighting other countermeasures such as notification and access control mechanisms, cryptographic solutions and application auditing. To the best of our knowledge, this is the first work that provides a detailed multi-faceted study of this problem and that analyses the threat of user profiling by application providers.
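One simple way to see why this likelihood becomes large for popular applications (our own back-of-the-envelope model, not necessarily the exact computation in the paper): if a fraction q of all users installs a given application and installs are treated as independent, a user with k friends is affected by collateral collection unless none of those friends installed it, i.e.
```latex
\Pr[\text{user affected}] \;=\; 1 - (1 - q)^{k},
\qquad q = \frac{\text{number of installers}}{\text{number of users}} .
```
With purely illustrative numbers, q = 0.5% and k = 300 friends already give a probability of roughly 78%, which matches the intuition behind the figures reported above.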
Hadamard Matrices, -Linearly Independent Sets and Correlation-Immune Boolean Functions with Minimum Hamming Weights
It is known that correlation-immune (CI) Boolean functions used in the framework of side channel attacks need to have low Hamming weights. In 2013, Bhasin et al. studied the minimum Hamming weight of -CI Boolean functions, and presented an open problem: the minimal weight of a -CI function in variables might not increase with . Very recently, Carlet and Chen proposed some constructions of low-weight CI functions, and gave a conjecture on the minimum Hamming weight of -CI functions in variables.
In this paper, we determine the values of the minimum Hamming weights of -CI Boolean functions in variables for infinitely many 's and give a negative answer to the open problem proposed by Bhasin et al. We then present a method to construct minimum-weight 2-CI functions through Hadamard matrices, which can provide all minimum-weight 2-CI functions in variables. Furthermore, we prove that the Carlet-Chen conjecture is equivalent to the famous Hadamard conjecture. Most notably, we propose an efficient method to construct low-weight -variable CI functions through -linearly independent sets, which can provide numerous minimum-weight -CI functions. Particularly, we obtain some new values of the minimum Hamming weights of -CI functions in variables for . We conjecture that the functions constructed by us are of the minimum Hamming weights if the sets are of absolute maximum -linearly independent. If our conjecture holds, then all the values for and most values for general are determined.
Homomorphic Rank Sort Using Surrogate Polynomials
In this paper we propose a rank-based algorithm for sorting encrypted data using monomials. Greedy Sort is a sorting technique that minimizes the depth of the homomorphic evaluations. It is a costly algorithm due to excessive ciphertext multiplications, and its implementation is cumbersome. Another method, Direct Sort, has a slightly deeper circuit than Greedy Sort; nevertheless, it is simpler to implement and scales better with the size of the input array. Our proposed method minimizes both the circuit depth and the number of ciphertext multiplications. In addition to its performance, its simple design makes it more favorable compared to the alternative methods, which are hard to parallelize, e.g. not suitable for fast GPU implementations. Furthermore, we improve the performance of the homomorphic sorting algorithm by adapting SIMD operations alongside message-slot rotation techniques. This method allows us to pack integers into a single ciphertext and compute comparisons at once, thus reducing comparisons to .
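For intuition, the rank-based idea can be shown on plaintext data: an element's rank is the number of pairwise comparisons it wins, and the sorted array is obtained by writing each element at its rank. The sketch below is our own plaintext analogue; the homomorphic version evaluates the same comparisons on ciphertexts and performs the placement obliviously.
```python
# Plaintext analogue of rank-based sorting (a sketch, not the homomorphic circuit).
def rank_sort(values):
    n = len(values)
    ranks = [0] * n
    for i in range(n):
        for j in range(n):
            # Count comparisons won; ties are broken by index so ranks stay unique.
            if values[j] < values[i] or (values[j] == values[i] and j < i):
                ranks[i] += 1
    out = [None] * n
    for i in range(n):
        out[ranks[i]] = values[i]
    return out

assert rank_sort([5, 1, 4, 1, 3]) == [1, 1, 3, 4, 5]
```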
Modeling Quantum-Safe Authenticated Key Establishment, and an Isogeny-Based Protocol
We propose a security model for authenticated key establishment in the quantum setting. Our model is the first for authenticated key establishment that allows for quantum superpositions of queries. The model builds on the classical Canetti-Krawczyk model but allows quantum interactions between the adversary and quantum oracles that emulate classical parties. We demonstrate that this new security definition is satisfiable by giving a generic construction from simpler cryptographic primitives and a specific protocol which is secure in the quantum random oracle model, under the supersingular isogeny decisional Diffie-Hellman assumption (SIDH).
Upgrading to Functional Encryption
The notion of Functional Encryption (FE) has recently emerged as a strong primitive with several exciting applications. In this work, we initiate the study of the following question: Can existing public-key encryption schemes be ``upgraded'' to Functional Encryption schemes without changing their public keys or the encryption algorithm? We call a public-key encryption scheme with this property FE-compatible.
Indeed, assuming ideal obfuscation, it is easy to see that every CCA-secure public-key encryption scheme is FE-compatible. Despite the recent success in using indistinguishability obfuscation to replace ideal obfuscation for many applications, we show that this phenomenon most likely will not apply here.
We show that assuming fully homomorphic encryption and the learning with errors (LWE) assumption, there exists a CCA-secure encryption scheme that is provably not FE-compatible. We also show that a large class of natural CCA-secure encryption schemes proven secure in the random oracle model are not FE-compatible in the random oracle model.
Nevertheless, we identify a key structure that, if present, is sufficient to provide FE-compatibility. Specifically, we show that assuming sub-exponentially secure iO and sub-exponentially secure one-way functions, there exists a class of public-key encryption schemes, which we call Special-CCA secure encryption schemes, that are in fact FE-compatible.
In particular, each of the following popular CCA-secure encryption schemes (some of which existed even before the notion of FE was introduced) falls into the class of Special-CCA secure encryption schemes and is thus FE-compatible:
1) The scheme of Canetti, Halevi and Katz (Eurocrypt 2004) when instantiated with the IBE scheme of Boneh-Boyen (Eurocrypt 2004).
2) The scheme of Canetti, Halevi and Katz (Eurocrypt 2004) when instantiated with any Hierarchical IBE scheme.
3) The scheme of Peikert and Waters (STOC 2008) when instantiated with any Lossy Trapdoor Function.
Updatable and Universal Common Reference Strings with Applications to zk-SNARKs
By design, existing (pre-processing) zk-SNARKs embed a secret trapdoor in a relation-dependent common reference string (CRS). The trapdoor is exploited by a (hypothetical) simulator to prove the scheme is zero knowledge, and the secret-dependent structure facilitates a linear-size CRS and linear-time prover computation. If known by a real party, however, the trapdoor can be used to subvert the security of the system. The structured CRS that makes zk-SNARKs practical also makes deploying zk-SNARKs problematic, as it is difficult to argue why the trapdoor would not be available to the entity responsible for generating the CRS. Moreover, for pre-processing zk-SNARKs a new trusted CRS needs to be computed every time the relation is changed.
In this paper, we address both issues by proposing a model where a number of users can update a universal CRS. The updatable CRS model guarantees security if at least one of the users updating the CRS is honest.
We provide both a negative result, by showing that zk-SNARKs with private secret-dependent polynomials in the CRS cannot be updatable, and a positive result, by constructing a zk-SNARK based on a CRS consisting only of secret-dependent monomials. The CRS is of quadratic size, is updatable, and is universal in the sense that it can be specialized into one or more relation-dependent CRSs of linear size with linear-time prover computation.
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires extremely high noise rate of . Thus we can only show that ``very hard'' LPN is harder than some ``very mildly hard'' worst case problem. We note that LPN with noise already implies symmetric cryptography.
Specifically, we consider the (n,m,w)-nearest codeword problem ( -NCP) which takes as input a generating matrix for a binary linear code in dimensions and rank , and a target vector which is very close to the code (Hamming distance at most ), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error , -NCP can be solved given oracle access to an LPN distinguisher with noise ratio .
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that -NCP with the aforementioned parameters lies in the complexity class (i.e. it is reducible to a problem that has a statistical zero-knowledge protocol), implying that it is unlikely to be -hard. We then show that LPN with very low noise rate implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in ).
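Since the parameter values were lost from this abstract, it may help to recall the standard LPN sample distribution (the usual textbook formulation, supplied here only for readability and not taken from the paper):
```latex
b_i \;=\; \langle a_i, s \rangle \oplus e_i ,
\qquad a_i \xleftarrow{\$} \mathbb{F}_2^{\,n} ,
\qquad e_i \sim \mathrm{Ber}(\tau) ,
```
where the (search) problem is to recover the secret $s \in \mathbb{F}_2^{n}$ from many such samples and $\tau$ is the noise rate.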
Mixed-radix Naccache-Stern encryption
In this work we explore a combinatorial optimization problem stemming from the Naccache-Stern cryptosystem. We show that solving this problem results in bandwidth improvements, and suggest a polynomial-time approximation algorithm to find an optimal solution. Our work suggests that using optimal radix encoding results in an asymptotic 50% increase in bandwidth.
Approximate and Probabilistic Differential Privacy Definitions
This technical report discusses three subtleties related to the widely used notion of differential privacy (DP). First, we discuss how the choice of a distinguisher influences the privacy notion and why we should always have a distinguisher if we consider approximate DP. Secondly, we draw a line between the very intuitive probabilistic differential privacy (with probability at least 1-δ we have ε-DP) and the commonly used approximate differential privacy ((ε,δ)-DP). Finally, we see that and why probabilistic differential privacy (and similar notions) are not complete under post-processing, which has significant implications for notions used in the literature.
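For reference, the two notions being contrasted can be stated as follows (standard textbook formulations for a mechanism M and neighbouring databases D, D'; the report itself should be consulted for its exact formalisation):
```latex
% Approximate ((\varepsilon,\delta)-)DP: for all measurable sets S,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta .
% Probabilistic DP: with probability at least 1-\delta over the output o of M(D),
\Pr[M(D) = o] \;\le\; e^{\varepsilon} \, \Pr[M(D') = o] .
```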
How to Record Quantum Queries, and Applications to Quantum Indifferentiability
The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions. Unfortunately, none of the known proof techniques allow the reduction to record information about the adversary's queries, a crucial feature of many classical ROM proofs, including all proofs of indifferentiability for hash function domain extension.
In this work, we give a new QROM proof technique that overcomes this ``recording barrier''. Our central observation is that when viewing the adversary's query and the oracle itself in the Fourier domain, an oracle query switches from writing to the adversary's space to writing to the oracle itself. This allows a reduction to simulate the oracle by simply recording information about the adversary's query in the Fourier domain.
We then use this new technique to show the indifferentiability of the Merkle-Damgard domain extender for hash functions. We also give a proof of security for the Fujisaki-Okamoto transformation; previous proofs required modifying the scheme to include an additional hash term. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.
Lattice-Based zk-SNARKs from Square Span Programs
Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short (i.e., independent of the size of the witness) and efficiently verifiable proofs. They elegantly resolve the juxtaposition of individual privacy and public trust by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs are widely deployed all over the planet and are used to keep alive a system worth billions of euros, namely the cryptocurrency Zcash. However, all current SNARK implementations rely on so-called pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades.
In this work, we introduce a new zk-SNARK that can be instantiated from lattice-based assumptions, and which is thus believed to be post-quantum secure. We provide a generalization, in the spirit of Gennaro et al. (Eurocrypt'13), of the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions.
We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings.
We provide a concrete choice of parameters, showing that our construction is practically instantiable.
G-Merkle: A Hash-Based Group Signature Scheme From Standard Assumptions
Hash-based signature schemes are the most promising cryptosystem candidates in a post-quantum world, but offer little structure to enable more sophisticated constructions such as group signatures.
Group signatures allow a group member to anonymously sign messages on behalf of the whole group (as needed for anonymous remote attestation).
In this work, we introduce G-Merkle, the first (stateful) hash-based group signature scheme.
Our proposal relies on minimal assumptions, namely the existence of one-way functions, and offers performance equivalent to the Merkle single-signer setting. The public key size (as small as in the single-signer setting) outperforms all other post-quantum group signatures. Moreover, for group members issuing at most signatures each, the size of a hash-based group signature is just as large as a Merkle signature with a tree composed of leaf nodes. This directly translates into fast signing and verification engines.
Unlike its lattice-based counterparts, our construction does not require any random oracle. Note that, due to the randomized structure of our Merkle tree, the signature authentication paths are pre-stored or deduced from a public tree, which seems to be a requirement that is hard to circumvent. To conclude, we present implementation results to demonstrate the practicality of our proposal.
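For readers who have not seen hash-based constructions, the underlying single-tree building block is easy to sketch (a generic Merkle-root computation, our own illustration; the randomized leaf arrangement and the group-signature layer that make G-Merkle a group signature scheme are not modelled here):
```python
# Generic Merkle-tree root over a power-of-two number of leaves, the basic
# hash-based building block that G-Merkle extends. In a signature scheme the
# leaves would be (hashes of) one-time public keys.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """leaves: list of byte strings whose length is a power of two."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"pk0", b"pk1", b"pk2", b"pk3"]).hex())
```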
Towards Attribute-Based Encryption for RAMs from LWE: Sub-linear Decryption, and More
Attribute based encryption (ABE) is an advanced encryption system with a built-in mechanism to generate keys associated with functions which in turn provide restricted access to encrypted data. Most of the known candidates of attribute based encryption model the functions as circuits. This results in significant efficiency bottlenecks, especially in the setting where the function associated with the ABE key is represented by a random access machine (RAM) and a database, with the runtime of the RAM program being sublinear in the database size. In this work we study the notion of attribute based encryption for random access machines (RAMs), introduced in the work of Goldwasser, Kalai, Popa, Vaikuntanathan and Zeldovich (Crypto 2013). We present a construction of attribute based encryption for RAMs satisfying sublinear decryption complexity assuming learning with errors; this is the first construction based on standard assumptions. Previously, Goldwasser et al. achieved this result based on non-falsifiable knowledge assumptions. We also consider a dual notion of ABE for RAMs, where the database is in the ciphertext and we show how to achieve this dual notion, albeit with large attribute keys, also based on learning with errors.
Multi-Theorem Preprocessing NIZKs from Lattices
Non-interactive zero-knowledge (NIZK) proofs are fundamental to modern cryptography. Numerous NIZK constructions are known in both the random oracle and the common reference string (CRS) models. In the CRS model, there exist constructions from several classes of cryptographic assumptions such as trapdoor permutations, pairings, and indistinguishability obfuscation. Notably absent from this list, however, are constructions from standard lattice assumptions. While there has been partial progress in realizing NIZKs from lattices for specific languages, constructing NIZK proofs (and arguments) for all of NP from standard lattice assumptions remains open.
In this work, we make progress on this problem by giving the first construction of a multi-theorem NIZK for NP from standard lattice assumptions in the preprocessing model. In the preprocessing model, a (trusted) setup algorithm generates proving and verification keys. The proving key is needed to construct proofs and the verification key is needed to check proofs. In the multi-theorem setting, the proving and verification keys should be reusable for an unbounded number of theorems without compromising soundness or zero-knowledge. Existing constructions of NIZKs in the preprocessing model (or even the designated-verifier model) that rely on weaker assumptions like one-way functions or oblivious transfer are only secure in a single-theorem setting. Thus, constructing multi-theorem NIZKs in the preprocessing model does not seem to be inherently easier than constructing them in the CRS model.
We begin by constructing a multi-theorem preprocessing NIZK directly from context-hiding homomorphic signatures. Then, we show how to efficiently implement the preprocessing step using a new cryptographic primitive called blind homomorphic signatures. This primitive may be of independent interest. Finally, we show how to leverage our new lattice-based preprocessing NIZKs to obtain new malicious-secure MPC protocols purely from standard lattice assumptions.
MathCoin: A Blockchain Proposal that Helps Verify Mathematical Theorems In Public
A public blockchain is proposed in an attempt to enable the coin holders to participate in verifying mathematical theorems for public access. Incentives are designed to encourage any party to contribute their knowledge by buying tokens of mathematical propositions that they believe are true. The proposed blockchain is a platform for people to exchange their belief in mathematical propositions. An implementation of this blockchain proposal, once established, will provide the general public with easy and instant access to reliable knowledge without having to read difficult proofs or having to blindly trust a small number of experts. Conversely, experts from various fields may find it much easier to have their work appreciated by more people, leading to a better impact. According to the incentive inherently provided by the blockchain, they can even earn significantly if they do prove some theorems that were not previously known by the blockchain. Foundations who are interested in the validity of a particular proposition not yet explicitly recorded on the blockchain can donate a fund, which will be distributed to experts who contribute positive efforts toward solving the specified problems. Only the people who erroneously create or buy tokens of a proposition that is eventually proven false will lose money. A reference design of the proposed blockchain that attempts to achieve the above-mentioned goal is described and reasoned about.
A Brief Retrospective Look at the Cayley-Purser Public-key Cryptosystem, 19 Years Later
The purpose of this paper is to describe and analyze the Cayley-Purser algorithm, which is a public-key cryptosystem proposed by Flannery in 1999. I will present two attacks on it, one of which is apparently new. I will also examine a variant of the Cayley-Purser algorithm that was patented by Slavin in 2008, and show that it is also insecure.
Vault: Fast Bootstrapping for the Algorand Cryptocurrency
Decentralized cryptocurrencies rely on participants to keep track of the state of the system in order to verify new transactions. As the number of users and transactions grows, this requirement becomes a significant burden, requiring users to download, verify, and store a large amount of data to participate.
Vault is a new cryptocurrency design based on Algorand that minimizes these storage and bootstrapping costs for participants. Vault’s design is based on Algorand’s proof-of-stake consensus protocol and uses several techniques to achieve its goals. First, Vault decouples the storage of recent transactions from the storage of account balances, which enables Vault to delete old account state. Second, Vault allows sharding state across participants in a way that preserves strong security guarantees. Finally, Vault introduces the notion of stamping certificates, which allow a new client to catch up securely and efficiently in a proof-of-stake system without having to verify every single block.
Experiments with a prototype implementation of Vault’s data structures show that Vault’s design reduces the bandwidth cost of joining the network as a full client by 99.7% compared to Bitcoin and 90.5% compared to Ethereum when downloading a ledger containing 500 million transactions.
Perfectly Secure Oblivious RAM with Sublinear Bandwidth Overhead
Oblivious RAM (ORAM) has established itself as a fundamental cryptographic building block. Understanding which bandwidth overheads are possible under which assumptions has been the topic of a vast amount of previous work. In this work, we focus on perfectly secure ORAM and we present the first construction with sublinear bandwidth overhead in the worst case. All prior constructions with perfect security require linear communication overhead in the worst case and only achieve sublinear bandwidth overheads in the amortized sense. We present a fundamentally new approach for constructing ORAM, and our results significantly advance our understanding of what is possible with perfect security.
Our main construction, Lookahead ORAM, is perfectly secure, has a worst-case bandwidth overhead of , and a total storage cost of on the server side, where is the maximum number of stored data elements. In terms of concrete server-side storage costs, our construction has the smallest storage overhead among all perfectly and statistically secure ORAMs and is only a factor 3 worse than the most storage-efficient computationally secure ORAM. Assuming a client-side position map, our construction is the first, among all ORAMs with worst-case sublinear overhead, that allows for an online bandwidth overhead without server-side computation. Along the way, we construct a conceptually extremely simple statistically secure ORAM with a worst-case bandwidth overhead of , which may be of independent interest.
A Note on Post-Quantum Authenticated Key Exchange from Supersingular Isogenies
In this work, we study several post-quantum authenticated key exchange protocols in the setting of supersingular isogenies. Leveraging the design of the well-studied schemes by Krawczyk (2003), Boyd et al. (2008), Fujioka et al. (2013), Krawczyk and Wee (2015), and others, we show how to use the Supersingular Isogeny Diffie-Hellman (SIDH) and Supersingular Isogeny Key Encapsulation (SIKE) protocols as basic building blocks to construct efficient and flexible authenticated key exchange schemes featuring different functionalities and levels of security.
This note is also intended to be a ``gentle'' introduction to supersingular isogeny based cryptography, and its most relevant constructions, for protocol designers and cryptographers.
Authenticated key exchange for SIDH
We survey authenticated key exchange (AKE) in the context of supersingular isogeny Diffie-Hellman key exchange (SIDH). We discuss different approaches to achieve authenticated key exchange, and survey the literature. We explain some challenges that arise in the SIDH setting if one wants to do a ``Diffie-Hellman-like'' AKE, and present several candidate authenticated key exchange protocols suitable for SIDH. We also discuss some open problems.
Compact, Scalable, and Efficient Discrete Gaussian Samplers for Lattice-Based Cryptography
Lattice-based cryptography, one of the leading candidates for post-quantum security, relies heavily on discrete Gaussian samplers to provide necessary uncertainty, obfuscating computations on secret information. For reconfigurable hardware, the cumulative distribution table (CDT) scheme has previously been shown to achieve the highest throughput and the smallest resource utilisation, easily outperforming other existing samplers. However, the CDT sampler does not scale well. In fact, for large parameters, the lookup tables required are far too large to be practically implemented. This research proposes a hierarchy of multiple smaller samplers, extending the Gaussian convolution lemma to compute optimal parameters, where the individual samplers require much smaller lookup tables. A large range of parameter sets, covering encryption, signatures, and key exchange, are evaluated. Hardware-optimised parameters are formulated and a practical implementation on a Xilinx Artix-7 FPGA device is realised. The proposed sampling designs demonstrate promising performance on reconfigurable hardware, even for large parameters that were otherwise thought infeasible.
Security proof for Quantum Key Recycling with noise
Quantum Key Recycling aims to re-use the keys employed in quantum encryption and quantum authentication schemes. QKR protocols can achieve better round complexity than Quantum Key Distribution.
We consider a QKR protocol that works with qubits, as opposed to high-dimensional qudits. A security proof was given by Fehr and Salvail [1] in the case where there is practically no noise. A high-rate scheme for the noisy case was proposed by Skoric and de Vries [2], based on eight-state encoding. However, a security proof was not given.
In this paper we introduce a protocol modification to [2]. We provide a security proof. The modified protocol has a high rate not only for 8-state encoding, but also for 6-state and BB84 encoding. Our proof is based on a bound on the trace distance between the real quantum state of the system and a state in which the keys are completely secure. It turns out that the rate is higher than suggested by previous results. Asymptotically the rate equals the rate of Quantum Key Distribution with one-way postprocessing.
Last updated: 2018-12-01
An Efficient and Secure Attribute-Based Signcryption Scheme for Smart Grid Applications
With regard to the development of modern power systems, the Smart Grid (SG), as an intelligent generation of electricity networks, has received tremendous attention. Fine-grained data sharing plays a vital role in efficiently managing data flow in the SG. As these data commonly contain sensitive information, designing secure and efficient privacy-preserving schemes for such networks, with their plenty of resource-constrained devices, is one of the most challenging issues. In this paper, we propose a secure Ciphertext-Policy Attribute-Based SignCryption (CP-ABSC) scheme which simultaneously provides the authenticity and privacy of the users by enforcing an arbitrary access control policy on encrypted data. Since the number of required pairings in the signcryption and designcryption algorithms is independent of the number of involved attributes, the computational overhead is reduced in comparison with the existing schemes in the literature. In addition, we formally prove that the unforgeability and indistinguishability of the proposed scheme are reducible to the well-known hardness assumption of the q-Bilinear Diffie-Hellman Exponent (q-BDHE) problem. Moreover, we show that embedding a Physical Unclonable Function (PUF) in each smart meter significantly reduces the storage overhead of the protocol and secures it against non-volatile memory attackers.
Chimeric Ledgers: Translating and Unifying UTXO-based and Account-based Cryptocurrencies
Cryptocurrencies are historically divided into two broad groups with respect to the style of transactions that they accept. In the account-based style, each address is seen as an account with a balance, and transactions are transfers of value from one account to another. In the UTXO-based style, transactions inductively spend outputs generated by previous transactions and create new unspent outputs, and there is no intrinsic notion of an account associated with an address. Each style has advantages and disadvantages. This paper formally defines: the two styles; translations that allow one style to be simulated by the other; new transaction types that allow both styles of transactions to co-exist on the same ledger; and a new transaction type that combines features from both styles.
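The difference between the two styles can be made concrete with a minimal data model (field names are our own illustration, not taken from the paper or from any particular cryptocurrency):
```python
# Minimal data-model sketch of the two transaction styles.
from dataclasses import dataclass
from typing import List

@dataclass
class AccountTx:              # account-based style: a transfer between balances
    sender: str
    recipient: str
    amount: int
    nonce: int                # prevents replaying the same transfer

@dataclass
class OutPoint:               # reference to an output of a previous transaction
    txid: str
    index: int

@dataclass
class TxOutput:
    value: int
    address: str

@dataclass
class UTXOTx:                 # UTXO-based style: consumes old outputs, creates new ones
    inputs: List[OutPoint]    # each referenced output must be unspent and is spent entirely
    outputs: List[TxOutput]   # total output value must not exceed total input value
```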
Post-Quantum EPID Signatures from Symmetric Primitives
EPID signatures are used extensively in real-world systems for hardware enclave attestation. As such, there is a strong interest in making these schemes post-quantum secure. In this paper we initiate the study of EPID signature schemes built only from symmetric primitives, such as hash functions and PRFs. We present two constructions in the random oracle model. The first is a scheme satisfying the EPID signature syntax and security definitions needed for private hardware attestation used in Intel’s SGX. The second achieves significantly shorter signatures for many applications, including the use case of remote hardware attestation. While our EPID signatures for attestation are longer than standard post-quantum signatures, they are short enough for applications where the data being signed is large, such as analytics on large private data sets, or streaming media to a trusted display. We evaluate several instantiations of our schemes so that the costs and benefits of these constructions are clear. Along the way we also give improvements to the zero-knowledge Merkle inclusion proofs of Derler et al. (2017).
MDS Matrices with Lightweight Circuits
MDS matrices are an important element for the design of block ciphers such as the AES. In recent years, there has been a lot of work on the construction of MDS matrices with a low implementation cost, in the context of lightweight cryptography.
Most of the previous efforts focused on local optimization, constructing MDS matrices with coefficients that can be efficiently computed. In particular, this led to a matrix with a direct xor count of only 106, while a direct implementation of the MixColumn matrix of the AES requires 152 bitwise xors.
More recently, techniques based on global optimization have been introduced, where the implementation can reuse some intermediate variables. In particular, Kranz \emph{et al.} used optimization tools to find a good implementation from the description of an MDS matrix. They lowered the cost of implementing the MixColumn matrix to 97 bitwise xors, and proposed a new matrix with only 72 bitwise xors, the lowest cost known so far.
In this work we propose a different approach to global optimization. Instead of looking for an optimized circuit of a given matrix, we run a search through a space of circuits, to find optimal circuits yielding MDS matrices. This results in MDS matrices with an even lower cost, with only 67 bitwise xors.
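The gain from reusing intermediate values can be illustrated with a toy example (ours, not one of the matrices studied in the paper):

    # Toy illustration (not a matrix from the paper) of why global optimization
    # helps: sharing the intermediate a ^ b computes the same two outputs with
    # three XOR gates instead of four.
    def naive(a, b, c, d):
        y0 = a ^ b ^ c        # 2 XORs
        y1 = a ^ b ^ d        # 2 XORs  -> 4 XORs in total
        return y0, y1

    def shared(a, b, c, d):
        t = a ^ b             # 1 XOR, computed once and reused
        y0 = t ^ c            # 1 XOR
        y1 = t ^ d            # 1 XOR  -> 3 XORs in total
        return y0, y1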
The Death and Rebirth of Privacy-Preserving WiFi Fingerprint Localization with Paillier Encryption
Localization based on premeasured WiFi fingerprints is a popular method for indoor localization where satellite based positioning systems are unavailable. In these systems, privacy of the users' location is lost because the location is computed by the service provider. In INFOCOM'14, Li et al. presented PriWFL, a WiFi fingerprint localization system based on additively homomorphic Paillier encryption, that was claimed to protect both the users' location privacy and the service provider's database privacy. In this paper, we demonstrate a severe weakness in PriWFL that allows an attacker to compromise the service provider's database under a realistic attack model and also identify certain other problems in PriWFL that decrease its localization accuracy. Hence, we show that PriWFL does not solve the privacy problems of WiFi fingerprint localization. We also explore different solutions to implement secure privacy-preserving WiFi fingerprint localization and propose two schemes based on Paillier encryption which do not suffer from the weakness of PriWFL and offer the same localization accuracy as the privacy-violating schemes.
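For background, the additive homomorphism of Paillier encryption that PriWFL-style schemes build on can be sketched as follows (toy parameters and names of our own choosing; this is not the PriWFL protocol itself):

    # Minimal Paillier sketch (toy parameters, Python 3.8+) showing the additive
    # homomorphism that schemes like PriWFL rely on: ciphertexts can be combined
    # without decrypting them.  Illustrative only, not a secure implementation.
    from math import gcd
    import random

    def keygen(p=2357, q=2551):                        # toy primes, NOT secure sizes
        n = p * q
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
        mu = pow(lam, -1, n)                           # valid because g = n + 1
        return (n, n + 1), (lam, mu, n)                # (public key, secret key)

    def encrypt(pk, m):
        n, g = pk
        r = random.randrange(2, n)                     # random r coprime to n
        while gcd(r, n) != 1:
            r = random.randrange(2, n)
        return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

    def decrypt(sk, c):
        lam, mu, n = sk
        return (pow(c, lam, n * n) - 1) // n * mu % n

    pk, sk = keygen()
    c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
    combined = c1 * c2 % (pk[0] ** 2)                  # E(17) * E(25) = E(17 + 25)
    assert decrypt(sk, combined) == 42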
Fault Analysis of the KTANTAN Family of Block Ciphers: A Revisited Work of Fault Analysis of the KATAN Family of Block Ciphers
This paper investigates the security of the KTANTAN block cipher against differential fault analysis. This attack is considered to be the first side-channel analysis of KTANTAN in the literature. KTANTAN is a relative of the KATAN block cipher; therefore, the previous fault analysis on the KATAN family of block ciphers is revisited. Similar to KATAN, KTANTAN has three variants, namely KTANTAN32, KTANTAN48 and KTANTAN64. The inner structure of KTANTAN is similar to that of KATAN except for the key schedule algorithms. KATAN has been practically broken using fault analysis, employing a transient single-bit fault model, under the assumption that the attacker is able to inject faults randomly into the internal state of the cipher. The attack is empowered by the extended cube method, similar to the approach applied to KATAN. The complexity of this attack is for KTANTAN32 and for both KTANTAN48 and KTANTAN64. Furthermore, based on the obtained results, this paper concludes that KTANTAN is more robust against fault analysis compared to KATAN.
On Quantum Indifferentiability
We study the indifferentiability of classical constructions in the quantum setting, such as the Sponge construction or Feistel networks. (But the approach easily generalizes to other constructions, too.) We give evidence that, while those constructions are known to be indifferentiable in the classical setting, they are not indifferentiable in the quantum setting. Our approach is based on a quantum-information-theoretic conjecture.
QC-MDPC: A Timing Attack and a CCA2 KEM
In 2013, Misoczki, Tillich, Sendrier and Barreto proposed a variant of the McEliece cryptosystem based on quasi-cyclic moderate-density parity-check (QC-MDPC) codes. This proposal uses an iterative bit-flipping algorithm in its decryption procedure. Such algorithms fail with a small probability.
At Asiacrypt 2016, Guo, Johansson and Stankovski (GJS) exploited these failures to perform a key recovery attack. They introduced the notion of the distance spectrum of a sparse vector and showed that the knowledge of the spectrum is enough to find the vector. By observing many failing plaintexts they recovered the distance spectrum of the QC-MDPC secret key.
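A minimal sketch of the distance-spectrum computation described above (our notation, assuming the spectrum is taken as the set of cyclic distances between nonzero positions):

    # Sketch of the distance spectrum of a sparse length-n vector, in the sense
    # used by the GJS attack: the set of cyclic distances between pairs of
    # nonzero positions (our notation).
    from itertools import combinations

    def distance_spectrum(support, n):
        """support: positions of nonzero coordinates; n: length of the cyclic block."""
        spectrum = set()
        for i, j in combinations(sorted(support), 2):
            d = j - i
            spectrum.add(min(d, n - d))       # distances are taken cyclically
        return spectrum

    print(distance_spectrum({0, 3, 9}, 17))   # -> {3, 6, 8}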
In this work, we explore the underlying causes of this attack, ways in which it can be improved, and how it can be mitigated.
We prove that correlations between the spectrum of the key and the spectrum of the error induce a bias on the distribution of the syndrome weight. Hence, the syndrome weight is the fundamental quantity from which secret information leaks.
Assuming a side-channel allows the observation of the syndrome weight, we are able to perform a key-recovery attack, which has the advantage of exploiting all known plaintexts, not only those leading to a decryption failure.
Based on this study, we derive a timing attack. It performs well on most decoding algorithms, even on the recent variants where the decryption failure rate is low, a case which is more challenging to the GJS attack. To our knowledge, this is the first timing attack on a QC-MDPC scheme.
Finally, we show how to construct a new KEM, called ParQ, that can reduce the decryption failure rate to a level negligible in the security parameter, without altering the QC-MDPC parameters. This is done through repeated encryption. We formally prove the IND-CCA2 security of ParQ, in a model that considers decoding failures. This KEM offers smaller key sizes and is suitable for purposes where the public key is used statically.
Topology-Hiding Computation Beyond Semi-Honest Adversaries
Topology-hiding communication protocols allow a set of parties, connected by an incomplete network with unknown communication graph, where each party only knows its neighbors, to construct a complete communication network such that the network topology remains hidden even from a powerful adversary who can corrupt parties. This communication network can then be used to perform arbitrary tasks, for example secure multi-party computation, in a topology-hiding manner.
Previously proposed protocols could only tolerate passive corruption. This paper proposes protocols that can also tolerate fail-corruption (i.e., the adversary can crash any party at any point in time) and so-called semi-malicious corruption (i.e., the adversary can control a corrupted party's randomness), without leaking more than an arbitrarily small fraction of a bit of information about the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt'18], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary, and semi-malicious corruption is not tolerated. Since leaking a small amount of information is unavoidable, as is the need to abort the protocol in case of failures, our protocols seem to achieve the best possible goal in a model with fail-corruption.
Further contributions of the paper are applications of the protocol to obtain secure MPC protocols, which requires a way to bound the aggregated leakage when multiple small-leakage protocols are executed in parallel or sequentially. Moreover, while previous protocols are based on the DDH assumption, a new so-called PKCR public-key encryption scheme based on the LWE assumption is proposed, allowing topology-hiding computation to be based on LWE. Furthermore, a protocol using fully-homomorphic encryption achieving very low round complexity is proposed.
Logistic Regression Model Training based on the Approximate Homomorphic Encryption
Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models using training data which contain sensitive information about individuals. The cryptography community is considering secure computation as a solution for privacy protection. In particular, practical requirements have triggered research on the efficiency of cryptographic primitives.
This paper presents a practical method to train a logistic regression model while preserving data confidentiality. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method to reduce the storage of the encrypted database. In addition, we adapt Nesterov's accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of the output classifier.
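For orientation, a plaintext sketch of the Nesterov-accelerated iteration for logistic regression is shown below; the paper evaluates a comparable update under homomorphic encryption (variable names, learning rate and iteration count are assumptions of this sketch):

    # Plaintext sketch of Nesterov-accelerated gradient descent for logistic
    # regression; the paper evaluates a comparable iteration on encrypted data.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_nesterov(X, y, iters=30, lr=0.1):
        """X: (m, d) feature matrix, y: (m,) labels in {0, 1}."""
        m, d = X.shape
        w = np.zeros(d)                       # current model
        v = np.zeros(d)                       # look-ahead (momentum) point
        t = 1.0
        for _ in range(iters):
            grad = X.T @ (sigmoid(X @ v) - y) / m            # gradient at look-ahead
            w_next = v - lr * grad                           # gradient step
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            v = w_next + (t - 1.0) / t_next * (w_next - w)   # momentum update
            w, t = w_next, t_next
        return w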
Our method demonstrates the state-of-the-art performance of homomorphic encryption systems in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model given a dataset consisting of 1579 samples, each of which has 18 features and a binary outcome variable.
Capsule: A Protocol for Secure Collaborative Document Editing
Today's global society strongly relies on collaborative document editing, which plays an increasingly large role in sensitive workflows. While other collaborative venues, such as secure messaging, have seen secure protocols being standardized and widely implemented, the same cannot be said for collaborative document editing. Popular tools such as Google Docs, Microsoft Office365 and Etherpad are used to collaboratively write reports and other documents which are frequently sensitive and confidential, in spite of the server having the ability to read and modify text undetected.
Capsule is the first formalized and formally verified protocol standard that addresses secure collaborative document editing. Capsule provides confidentiality and integrity on encrypted document data, while also guaranteeing the ephemeral identity of collaborators and preventing the server from adding new collaborators to the document. Capsule also, to an extent, prevents the server from serving different versions of the document being collaborated on.
In this paper, we provide a full protocol description of Capsule. We also provide formal verification results on the Capsule protocol in the symbolic model. Finally, we present a full software implementation of Capsule, which includes a novel formally verified signing primitive implementation.
The Limit of Blockchains: Infeasibility of a Smart Obama-Trump Contract
Blockchains have become a buzzword and many blockchain proponents believe that smart contract is a panacea to redefine the digital economy. The community has a misconception that any kind of contracts could be implemented as a blockchain smart contract. There is no doubt that Turing-complete scripting languages in blockchain techniques such as Ethereum can be used to draft many important smart contracts. However, digital economy is much more than Turing-complete smart contracts. Many protocols/contracts in our daily life could not be implemented using Turing-complete smart contracts. As an example, we formulate an Obama-Trump contract and show that this kind of contract could not be implemented using blockchain smart contract techniques. It is straightforward to observe that many contracts in our daily life could be described in terms of an Obama-Trump contract. As a background discussion, we also give a comprehensive review of historical cryptographic currency techniques, bitcoin smart contract techniques, and Turing-complete smart contract techniques in modern blockchains.
VeritasDB: High Throughput Key-Value Store with Integrity
While businesses shift their databases to the cloud, they continue to depend on them to operate correctly. Alarmingly, cloud services constantly face threats from exploits in the privileged computing layers (e.g. OS, Hypervisor) and attacks from rogue datacenter administrators, which tamper with the database's storage and cause it to produce incorrect results. Although integrity verification of outsourced storage and file systems is a well-studied problem, prior techniques impose prohibitive overheads (up to 30x in throughput) and place additional responsibility on clients.
We present VeritasDB, a key-value store that guarantees data integrity to the client in the presence of exploits or implementation bugs in the database server. VeritasDB is implemented as a network proxy that mediates communication between the unmodified client(s) and the unmodified database server, which can be any off-the-shelf database engine (e.g., Redis, RocksDB, Apache Cassandra). The proxy transforms each client request before forwarding it to the server and checks the correctness of the server's response before forwarding it to the client.
To ensure the proxy is trusted, we use the protections of modern trusted hardware platforms, such as Intel SGX, to host the proxy's code and trusted state, thus completely eliminating trust in the cloud provider. To maintain high performance in VeritasDB while scaling to large databases, we design an authenticated Merkle B+-tree that leverages features of SGX (a modest amount of protected RAM, direct access to large unprotected RAM, and CPU parallelism) to implement several novel optimizations based on caching, concurrency, and compression. On standard YCSB and Visa transaction workloads, we observe an average overhead of 2.8x in throughput and 2.5x in latency, compared to the (insecure) system with no integrity checks --- using CPU parallelism, we bring the throughput overhead down to 1.05x.
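As a rough illustration of the kind of integrity check such a proxy performs, here is a generic Merkle-path verification sketch (this is not VeritasDB's authenticated Merkle B+-tree; names and hashing conventions are illustrative):

    # Generic Merkle-path verification, sketched to show how a trusted proxy can
    # check a server's response against a root digest it maintains.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf_hash(key: bytes, value: bytes) -> bytes:
        return h(b"leaf" + key + b"\x00" + value)

    def verify(root: bytes, key: bytes, value: bytes, path) -> bool:
        """path: (sibling_digest, sibling_is_left) pairs, ordered leaf to root."""
        node = leaf_hash(key, value)
        for sibling, sibling_is_left in path:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root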
Making Public Key Functional Encryption Function Private, Distributively
We put forth a new notion of distributed public key functional encryption. In such a functional encryption scheme, the secret key for a function f is split into shares sk_1, ..., sk_n. Given a ciphertext that encrypts a message m and a secret key share sk_i, one can evaluate and obtain a shared value y_i. Adding all the shares up recovers the actual value of f(m), while partial shares reveal nothing about the plaintext. More importantly, this new model allows us to establish {\em function privacy}, which was not possible in the setting of regular public key functional encryption. We formalize this notion and construct such a scheme from any public key functional encryption scheme together with the learning with errors assumption.
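The reconstruction property described above can be illustrated with plain additive secret sharing (a toy sketch; the paper derives the shares y_i from functional-encryption key shares rather than sharing f(m) directly):

    # Toy additive sharing over Z_p illustrating the reconstruction property:
    # all shares together recover the value, while any proper subset is uniform
    # and reveals nothing about it.
    import random

    p = 2**61 - 1                               # prime modulus for the toy example

    def share(value, n):
        parts = [random.randrange(p) for _ in range(n - 1)]
        parts.append((value - sum(parts)) % p)
        return parts

    y = share(12345, 4)                         # stand-ins for the shares y_1, ..., y_n
    assert sum(y) % p == 12345                  # adding all shares recovers f(m) = 12345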
We then consider the problem of hosting services in the untrusted cloud. Boneh, Gupta, Mironov, and Sahai (Eurocrypt 2014) first studied this application and gave a construction based on indistinguishability obfuscation. Their construction had the restriction that the number of corrupted clients has to be bounded and known. They left open the problem of how to remove this restriction. We resolve this problem by applying our function-private (distributed) public key functional encryption to the setting of hosting services in multiple clouds. Furthermore, our construction provides a much simpler and more flexible paradigm, which is of both conceptual and practical interest.
Along the way, we strengthen and simplify the security notions of the underlying primitives, including function secret sharing.