## All papers in 2019 (Page 15 of 1498 results)

A Post-Quantum UC-Commitment Scheme in the Global Random Oracle Model from Code-Based Assumptions

In this work, we propose the first post-quantum UC-commitment scheme in the Global Random Oracle Model, where only one non-programmable random oracle is available. The security of our proposal is based on two well-established post-quantum hardness assumptions from coding theory: the Syndrome Decoding problem and the Goppa Code Distinguishing problem. We prove that our proposal is perfectly hiding and computationally binding. The scheme is secure against static malicious adversaries.

Linearly equivalent S-boxes and the Division Property

Division property is a cryptanalysis method introduced by Todo at Eurocrypt'15 that has proven very effective against block ciphers and stream ciphers.
It can be viewed as a generalization, or a more precise version, of integral cryptanalysis that takes bit-level properties into account.
However, it is very cumbersome to study the propagation of a given division property through the layers of a block cipher.
Fortunately, computer-aided techniques can be used to this end and many new results have been found.
Nonetheless, we claim that the previous techniques do not consider the full search space.
Indeed, we show that even if the previous techniques fail to find a distinguisher based on the division property over a given function $E$, we can find a distinguisher over a linearly equivalent function, i.e. over $L_{out} \circ E \circ L_{in}$ with $L_{out}$ and $L_{in}$ being linear mappings, and such a distinguisher is still relevant.
We first show that the representation of the block cipher heavily influences the propagation of the division property, and exploiting this, we give an algorithm to efficiently search for such linear mappings $L_{out}$ and $L_{in}$.
As a result, we are able to exhibit a new distinguisher over 10 rounds of RECTANGLE, while the previous best known distinguisher was over 9 rounds.
Our algorithm also allows us to rule out such distinguishers over strictly more than 9 rounds of PRESENT; 9 rounds is the highest number of rounds for which an integral distinguisher on PRESENT is known.
Finally, we also give some insight about the construction of an S-box to strengthen a block cipher against our technique.
We first prove that if an S-box satisfies a certain criterion, then using this S-box is optimal in terms of resistance against classical division property.
Based on this, we exhibit some stronger variants of RECTANGLE and PRESENT, for which the maximum number of rounds we can distinguish is 2 rounds lower than for the originals, making them more resistant to division-property attacks.
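The core observation above can be illustrated with a toy sketch (the 4-bit S-box and the linear maps below are ours, purely for illustration, not taken from the paper): composing a permutation $E$ with invertible GF(2) matrices $L_{in}$ and $L_{out}$ yields a linearly equivalent permutation, over which a division-property distinguisher can then be searched.

```python
N = 4  # toy block size in bits

def apply_linear(M, x):
    """Apply a GF(2) matrix M (list of N row bitmasks) to an N-bit value x."""
    y = 0
    for i, row in enumerate(M):
        y |= (bin(row & x).count("1") & 1) << i  # parity of row AND x
    return y

# Hypothetical 4-bit S-box standing in for the function E.
E = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
     0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]

# Example invertible linear maps, given as row masks over GF(2).
L_in  = [0b0001, 0b0011, 0b0100, 0b1000]   # triangular, hence invertible
L_out = [0b1001, 0b0010, 0b0100, 0b1000]

def E_equiv(x):
    """The linearly equivalent function L_out . E . L_in."""
    return apply_linear(L_out, E[apply_linear(L_in, x)])

# E_equiv is still a permutation, so a division-property distinguisher
# found on E_equiv is as meaningful as one found on E itself.
assert sorted(E_equiv(x) for x in range(2 ** N)) == list(range(2 ** N))
```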

On Recovering Affine Encodings in White-Box Implementations

Ever since the first candidate white-box implementations by Chow et al. in 2002, producing a secure white-box implementation of AES has remained an enduring challenge.
Following the footsteps of the original proposal by Chow et al., other constructions were later built around the same framework. In this framework, the round function of the cipher is "encoded" by composing it with non-linear and affine layers known as encodings. However, all such attempts were broken by a series of increasingly efficient attacks that are able to peel off these encodings, eventually uncovering the underlying round function, and with it the secret key.
These attacks, however, were generally ad hoc and did not enjoy wide applicability. As our main contribution, we propose a generic and efficient algorithm to recover affine encodings, for any Substitution-Permutation Network (SPN) cipher, such as AES, and any form of affine encoding.
For AES parameters, namely 128-bit blocks split into 16 parallel 8-bit S-boxes, affine encodings are recovered with a time complexity estimated at $2^{32}$ basic operations, independently of how the encodings are built.
This algorithm is directly applicable to a large class of schemes. We illustrate this on a recent proposal due to Baek, Cheon and Hong, which was not previously analyzed. While Baek et al. evaluate the security of their scheme to 110 bits, a direct application of our generic algorithm is able to break the scheme with an estimated time complexity of only $2^{35}$ basic operations.
As a second contribution, we show a different approach to cryptanalyzing the Baek et al. scheme, which reduces the analysis to a standalone combinatorial problem, ultimately achieving key recovery in time complexity $2^{31}$. We also provide an implementation of the attack, which is able to recover the secret key in about 12 seconds on a standard desktop computer.
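To make the notion of an affine encoding concrete, here is a minimal sketch (toy 4-bit tables and randomly drawn maps, purely illustrative; this shows only what the encodings are, not the paper's recovery algorithm): a round-function table is composed with random invertible affine maps over GF(2), producing the encoded table that an attacker actually observes.

```python
import random

def gf2_invertible(rows):
    """Check that a GF(2) matrix (list of row bitmasks) has full rank."""
    rows, rank = rows[:], 0
    for bit in range(len(rows)):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> bit & 1), None)
        if pivot is None:
            return False
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]   # Gauss-Jordan elimination over GF(2)
        rank += 1
    return True

def random_affine(bits, rng):
    """Draw a random invertible affine map x -> M*x XOR c over GF(2)^bits."""
    while True:
        M = [rng.randrange(1, 1 << bits) for _ in range(bits)]
        if gf2_invertible(M):
            return M, rng.randrange(1 << bits)

def apply_affine(aff, x):
    M, c = aff
    y = c
    for i, row in enumerate(M):
        y ^= (bin(row & x).count("1") & 1) << i
    return y

BITS = 4
rng = random.Random(2019)
T = list(range(1 << BITS))
rng.shuffle(T)                        # toy round-function table
A_in, A_out = random_affine(BITS, rng), random_affine(BITS, rng)

# The white-box designer publishes only the encoded table A_out . T . A_in;
# recovering A_in and A_out from it is the problem the paper addresses.
T_enc = [apply_affine(A_out, T[apply_affine(A_in, x)]) for x in range(1 << BITS)]
assert sorted(T_enc) == list(range(1 << BITS))  # still a permutation
```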

Variants of the AES Key Schedule for Better Truncated Differential Bounds

Differential attacks are one of the main ways to attack block ciphers.
Hence, we need to evaluate the security of a given block cipher against these attacks.
One way to do so is to determine the minimal number of active S-boxes and to use this number, along with the maximal differential probability of the S-box, to upper-bound the probability of any differential characteristic.
Thus, if one wants to build a new block cipher, one should try to maximize the minimal number of active S-boxes.
On the other hand, the related-key security model is now quite important, hence, we also need to study the security of block ciphers in this model.
In this work, we investigate how to design a key schedule that maximizes the number of active S-boxes in the related-key model.
However, we also want this key schedule to be efficient, and therefore choose to only consider permutations.
Our target is AES, and along with a few generic results about the best reachable bounds, we exhibit a permutation that, used in place of the original key schedule, reaches a minimum of 20 active S-boxes over 6 rounds, while no differential characteristic with a probability larger than $2^{-128}$ exists.
We also describe an algorithm that allowed us to show that no permutation can reach 18 or more active S-boxes over 5 rounds.
Finally, we give several pairs $(P_s, P_k)$, replacing respectively the ShiftRows operation and the key schedule of the AES, reaching a minimum of 21 active S-boxes over 6 rounds, while again, there is no differential characteristic with a probability larger than $2^{-128}$.
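The counting argument above can be sketched as simple log-arithmetic (assuming the well-known maximal differential probability $2^{-6}$ of the AES S-box; note that the $2^{-128}$ claims in the abstract come from a finer search, since 20 active S-boxes alone only guarantee $2^{-120}$).

```python
AES_SBOX_MDP_LOG2 = -6  # log2 of the AES S-box's maximal differential probability

def characteristic_bound_log2(active_sboxes):
    """log2 of the generic upper bound on any differential characteristic:
    (maximal differential probability) raised to the active-S-box count."""
    return AES_SBOX_MDP_LOG2 * active_sboxes

# 20 active S-boxes give a generic bound of only 2^-120 ...
assert characteristic_bound_log2(20) == -120
# ... while reaching 2^-128 from the S-box bound alone needs 22 of them.
assert characteristic_bound_log2(22) <= -128
```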

CHES 2018 Side Channel Contest CTF - Solution of the AES Challenges

Alongside CHES 2018 the side channel contest 'Deep learning vs. classic profiling' was held.
Our team won both AES challenges (masked AES implementation), working under the handle AGSJWS.
Here we describe and analyse our attack.
We solved the more difficult of the two challenges with $2$ to $5$ power traces, far fewer than were available in the contest.
Our attack combines techniques from machine learning with classical techniques. It was superior to all classical and deep-learning-based attacks that we tried. Moreover, it provides some insights into the implementation.

Last updated: 2019-11-01

Key Encapsulation Mechanism From Modular Multivariate Linear Equations

In this article we discuss modular pentavariate and hexavariate linear equations and their usefulness for asymmetric cryptography. The construction of our key encapsulation mechanism relies on such modular linear equations, whose unknown roots can be interpreted as long vectors within a lattice that surpass the Gaussian heuristic and hence cannot be identified by the LLL lattice reduction algorithm. By utilizing our specially constructed public key when computing the modular hexavariate linear ciphertext equation, the decapsulation mechanism can correctly output the shared secret parameter.
The scheme has short key length, no decapsulation failure issues, a one-to-one plaintext-to-ciphertext expansion, and uses ``simple'' mathematics in order to achieve maximum simplicity in design, such that even practitioners with limited mathematical background will be able to understand the arithmetic.
Since no efficient algorithm, even on a quantum computer, is known to obtain the roots of our modular pentavariate and hexavariate linear equations or to retrieve the private key from the public key, our key encapsulation mechanism is a plausible candidate for a seamless post-quantum drop-in replacement for current traditional asymmetric schemes.

Partitions in the S-Box of Streebog and Kuznyechik

Streebog and Kuznyechik are the latest symmetric cryptographic primitives standardized by the Russian GOST. They share the same S-Box, $\pi$, whose design process was not described by its authors. In previous works, Biryukov, Perrin and Udovenko recovered two completely different decompositions of this S-Box.
We revisit their results and identify a third decomposition of $\pi$. It is an instance of a fairly small family of permutations operating on $2m$ bits which we call TKlog and which is closely related to finite field logarithms. Its simplicity and the small number of components it uses lead us to claim that it has to be the structure intentionally used by the designers of Streebog and Kuznyechik.
The $2m$-bit permutations of this type have a very strong algebraic structure: they map multiplicative cosets of the subfield $\mathbb{F}_{2^{m}}^{*}$ to additive cosets of $\mathbb{F}_{2^{m}}^{*}$. Furthermore, the function relating each multiplicative coset to the corresponding additive coset is always essentially the same. To the best of our knowledge, we are the first to expose this very strong algebraic structure.
We also investigate other properties of the TKlog and show in particular that it can always be decomposed in a fashion similar to the first decomposition of Biryukov et al., thus explaining the relation between the two previous decompositions. It also means that it is always possible to implement a TKlog efficiently in hardware and that it always exhibits a visual pattern in its LAT similar to the one present in $\pi$.
While we could not find attacks based on these new results, we discuss the impact of our work on the security of Streebog and Kuznyechik. To this end, we provide a new simpler representation of the linear layer of Streebog as a matrix multiplication in the exact same field as the one used to define $\pi$. We deduce that this matrix interacts in a non-trivial way with the partitions preserved by $\pi$.

Efficient Zero-Knowledge for NP from Secure Two-Party Computation

Ishai et al. [28, 29] introduced a powerful technique, called “MPC-in-the-head”, that provides a general black-box transformation from secure multiparty computation (MPC) protocols to zero-knowledge (ZK) proofs. A recent work [27] extends this technique and shows two ZK proof protocols from a secure two-party computation (2PC) protocol. The works [28, 27] both show a basic three-round ZK proof protocol which can be made negligibly sound by standard sequential repetition [19]. Under the general black-box zero-knowledge notion, neither ZK proofs nor arguments with negligible soundness error can be achieved in fewer than four rounds without additional assumptions [15].
In this paper, we address this problem under the notion of augmented black-box zero knowledge [26], which is defined with a new simulation method, called augmented black-box simulation. This method permits the simulator to access the verifier's current private state (i.e., the “random coins” used to compute the current message) in a special manner. We first show a three-round augmented black-box ZK proof for the language of graph 3-colorability, denoted G3C. We then generalize the construction to a three-round augmented black-box ZK proof for any NP relation R(x, w) without relying on expensive Karp reductions. The two constructions are based on a family of claw-free permutations, and the general construction additionally makes black-box use of a secure 2PC for a related two-party functionality. Finally, we show that our protocols can be made negligibly sound directly by parallel repetition.

Round5: Compact and Fast Post-Quantum Public-Key Encryption

We present the ring-based configuration of the NIST submission Round5, a Ring Learning with Rounding (RLWR)-based IND-CPA secure public-key encryption scheme. It combines elements of the NIST candidates Round2 (use of RLWR as the underlying problem, having $1+x+\ldots+x^n$ with $n+1$ prime as reduction polynomial, allowing for a large design space) and HILA5 (the constant-time error-correction code XEf). Round5 performs part of encryption and decryption via multiplication in $\mathbb{Z}_{p}[x]/(x^{n+1}-1)$, and uses secret-key polynomials that have a factor $(x-1)$. This technique reduces the failure probability and makes correlation in the decryption error negligibly low. The latter allows the effective application of error correction through XEf to further reduce the failure rate and shrink parameters, improving both security and performance.
We argue for the security of Round5, both formally and concretely. We further analyze the decryption error, and give analytical as well as experimental results arguing that the decryption failure rate is lower than in Round2, with negligible correlation in errors. IND-CCA secure parameters constructed using Round5 and offering more than 232 and 256 bits of quantum and classical security respectively, under the conservative core sieving model, require only 2144 B of bandwidth. For comparison, similar, competing proposals require over 30% more bandwidth. Furthermore, the high flexibility of Round5's design allows choosing finely tuned parameters fitting the needs of diverse applications -- ranging from the IoT to high-security levels.

The General Sieve Kernel and New Records in Lattice Reduction

We propose the General Sieve Kernel (G6K, pronounced /ʒe.si.ka/), an abstract stateful machine supporting a wide variety of lattice reduction strategies based on sieving algorithms. Using the basic instruction set of this abstract stateful machine, we first give concise formulations of previous sieving strategies from the literature and then propose new ones. We then also give a light variant of BKZ exploiting the features of our abstract stateful machine. This encapsulates several recent suggestions (Ducas at Eurocrypt 2018; Laarhoven and Mariano at PQCrypto 2018) to move beyond treating sieving as a blackbox SVP oracle and to utilise strong lattice reduction as preprocessing for sieving. Furthermore, we propose new tricks to minimise the sieving computation required for a given reduction quality with mechanisms such as recycling vectors between sieves, on-the-fly lifting and flexible insertions akin to Deep LLL and recent variants of Random Sampling Reduction.
Moreover, we provide a highly optimised, multi-threaded and tweakable implementation of this machine which we make open-source. We then illustrate the performance of this implementation of our sieving strategies by applying G6K to various lattice challenges. In particular, our approach allows us to solve previously unsolved instances of the Darmstadt SVP (151, 153, 155) and LWE (e.g. (75, 0.005)) challenges. Our solution for the SVP-151 challenge was found 400 times faster than the time reported for the SVP-150 challenge, the previous record. For exact SVP, we observe a performance crossover between G6K and FPLLL's state of the art implementation of enumeration at dimension 70.

Continuous Key Agreement with Reduced Bandwidth

Continuous Key Agreement (CKA) is a two-party procedure used by Double Ratchet protocols (e.g., Signal). It is a continuous and synchronous protocol that generates a fresh key for every sent/received message. It guarantees forward secrecy and Post-Compromise Security (PCS). PCS allows reestablishing security within a few rounds after the state of one of the parties has been compromised.
Alwen et al. have recently proposed a new KEM-based CKA construction where every message contains a ciphertext and a fresh public key. This can be made quantum-safe by deploying a quantum-safe KEM. They mention that the bandwidth can be reduced when using an ElGamal KEM (which is not quantum-safe). In this paper, we generalize their approach by defining a new primitive, namely the Merged KEM (MKEM). This primitive merges the key generation and the encapsulation steps of a KEM. This is not possible for every KEM, and we discuss cases where a KEM can be converted to an MKEM. One example is the quantum-safe proposal BIKE1, where the BIKE-MKEM saves 50% of the communication bandwidth compared to the original construction. In addition, we offer the notion of and two constructions for hybrid CKA.

The Secure Link Prediction Problem

Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form.
We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We evaluate our algorithms on real-life datasets.
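The common-neighbors score used above is easy to state in the clear (the toy graph and names below are ours, not a dataset from the paper): prediction ranks non-adjacent pairs by how many neighbors they share, and the paper's contribution is computing such scores when the adjacency data is encrypted.

```python
from itertools import combinations

# Toy social graph as a dict of neighbor sets; names are purely illustrative.
graph = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "eve"},
    "dave":  {"alice"},
    "eve":   {"carol"},
}

def common_neighbor_scores(graph):
    """Score every non-adjacent pair by its number of common neighbors."""
    return {
        (u, v): len(graph[u] & graph[v])
        for u, v in combinations(sorted(graph), 2)
        if v not in graph[u]
    }

scores = common_neighbor_scores(graph)
# The highest-scoring absent edges are the predicted future links.
assert scores[("alice", "eve")] == 1   # carol is a shared neighbor
assert scores[("dave", "eve")] == 0    # no shared neighbors
```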

Reinterpreting and Improving the Cryptanalysis of the Flash Player PRNG

Constant blinding is an efficient countermeasure against just-in-time (JIT) spraying attacks. Unfortunately, this mitigation mechanism is not always implemented correctly. One such example is the constant blinding mechanism found in the Adobe Flash Player. Instead of choosing a strong mainstream pseudo-random number generator (PRNG), the Flash Player designers chose to implement a proprietary one. This led to the discovery of a vulnerability that can be exploited to recover the initial seed used by the PRNG and thus, to bypass the constant blinding mechanism. Using this vulnerability as a starting point, we show that no matter which parameters are used, the previously mentioned PRNG remains a weak construction. A consequence of this study is an improvement of the seed recovery mechanism from the previously known complexity of $\mathcal O(2^{21})$ to one of $\mathcal O(2^{11})$.

The Lattice-Based Digital Signature Scheme qTESLA

We present qTESLA, a family of post-quantum digital signature schemes that exhibits several attractive features, such as simplicity, strong security guarantees against quantum adversaries, and built-in protection against certain side-channel and fault attacks. qTESLA---selected for round 2 of NIST's post-quantum cryptography standardization project---consolidates a series of recent schemes originating in works by Lyubashevsky, and Bai and Galbraith.
We provide full-fledged, constant-time portable C implementations that showcase the code compactness of the proposed scheme, e.g., our code requires only about 300 lines of C code. Finally, we also provide AVX2-optimized assembly implementations that achieve a factor-1.5 speedup.

An Information Obfuscation Calculus for Encrypted Computing

Relative cryptographic semantic security for encrypted words of user data at runtime holds in the emerging field of encrypted computing, in conjunction with an appropriate instruction set and compiler. An information obfuscation calculus for program compilation in that context is introduced here that quantifies the security exactly, improving markedly on previous results.

Cryptanalysis of an NTRU-based Proxy Encryption Scheme from ASIACCS'15

In ASIACCS 2015, Nuñez, Agudo, and Lopez proposed a proxy re-encryption scheme, NTRUReEncrypt, based on NTRU, which allows a proxy to translate a ciphertext under the delegator's public key into a re-encrypted ciphertext that can be decrypted correctly with the delegatee's private key. In addition to its potential resistance to quantum algorithms, the scheme was also considered to be efficient. However, in this paper we point out that the re-encryption process increases the decryption error, and the increased decryption error leads to a reaction attack that enables the proxy to recover the private keys of the delegator and the delegatee. Moreover, we propose a second attack which enables the delegatee to recover the private key of the delegator when he collects enough re-encrypted ciphertexts of the same message. We reevaluate the security of NTRUReEncrypt, and also give suggestions and discussions on potential mitigation methods.

Arithmetic Garbling from Bilinear Maps

We consider the problem of garbling arithmetic circuits and present a garbling scheme for inner-product predicates over exponentially large fields. Our construction stems from a generic transformation from predicate encryption which makes only blackbox calls to the underlying primitive. The resulting garbling scheme has practical efficiency and can be used as a garbling gadget to securely compute common arithmetic subroutines. We also show that inner-product predicates are complete by generically bootstrapping our construction to arithmetic garbling for polynomial-size circuits, albeit with a loss of concrete efficiency.
In the process of instantiating our construction we propose two new predicate encryption schemes, which might be of independent interest. More specifically, we construct (i) the first pairing-free (weakly) attribute-hiding non-zero inner-product predicate encryption scheme, and (ii) a key-homomorphic encryption scheme for linear functions from bilinear maps. Both schemes feature constant-size keys and practical efficiency.

Practical Group-Signatures with Privacy-Friendly Openings

Group signatures allow creating signatures on behalf of a group, while remaining anonymous. To prevent misuse, there exists a designated entity, named the opener, which can revoke anonymity by generating a proof which links a signature to its creator. Still, many intermediate cases have been discussed in the literature, where not the full power of the opener is required, or the users themselves require the power to claim (or deny) authorship of a signature and (un-)link signatures in a controlled way. However, these concepts were only considered in isolation. We unify these approaches, supporting all these possibilities simultaneously, providing fine-granular openings, even by members. Namely, a member can prove itself whether it has created a given signature (or not), and can create a proof which makes two created signatures linkable (or unlinkable resp.) in a controlled way. Likewise, the opener can show that a signature was not created by a specific member and can prove whether two signatures stem from the same signer (or not) without revealing anything else. Combined, these possibilities can make full openings irrelevant in many use-cases. This has the additional benefit that the requirements on the reachability of the opener are lessened. Moreover, even in the case of an involved opener, our framework is less privacy-invasive, as the opener no longer requires access to the signed message.
Our provably secure black-box CCA-anonymous construction with dynamic joins requires only standard building blocks. We prove its practicality by providing a performance evaluation of a concrete instantiation, and show that our non-optimized implementation is competitive compared to other, less feature-rich, notions.

Turbospeedz: Double Your Online SPDZ! Improving SPDZ using Function Dependent Preprocessing

Secure multiparty computation allows a set of mutually distrusting parties to securely compute a function of their private inputs, revealing only the output, even if some of the parties are corrupt. Recent years have seen an enormous amount of work that drastically improved the concrete efficiency of secure multiparty computation protocols. Many secure multiparty protocols work in an ``offline-online" model. In this model, the computation is split into two main phases: a relatively slow ``offline phase", which the parties execute before they know their input, and a fast ``online phase", which the parties execute after receiving their input.
One of the most popular and efficient protocols for secure multiparty computation working in this model is the SPDZ protocol (Damgaard et al., CRYPTO 2012). The SPDZ offline phase is function independent, i.e., it does not require knowledge of the computed function during the offline phase. Thus, a natural question is: can the efficiency of the SPDZ protocol be improved if the function is known at the offline phase?
In this work, we answer the above question affirmatively. We show that by using a function dependent preprocessing protocol, the online communication of the SPDZ protocol can be brought down significantly, almost by a factor of 2, and the online computation is often also significantly reduced. In scenarios where communication is the bottleneck, such as strong computers on low bandwidth networks, this could potentially almost double the online throughput of the SPDZ protocol, when securely computing the same circuit many times in parallel (on different inputs).
We present two versions of our protocol: Our first version uses the SPDZ offline phase protocol as a black-box, which achieves the improved online communication at the cost of slightly increasing the offline communication. Our second version works by modifying the state-of-the-art SPDZ preprocessing protocol, Overdrive (Keller et al., Eurocrypt 2018). This version improves the overall communication over the state-of-the-art SPDZ when the function is known at the offline phase.
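The offline/online split described above can be illustrated with the classic Beaver-triple multiplication underlying SPDZ-style protocols (a minimal two-party additive-sharing sketch over a prime field; MACs and the actual SPDZ/Turbospeedz machinery are omitted, and the modulus is illustrative).

```python
import random

P = 2 ** 61 - 1  # prime modulus (illustrative)

def share(x):
    """Additively secret-share x between two parties."""
    a = random.randrange(P)
    return a, (x - a) % P

# Offline phase (function independent): a random multiplication triple
# with a * b = c, held in shared form by the two parties.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)

def beaver_multiply(x_sh, y_sh):
    """Online phase: multiply shared x and y using the triple (a, b, c)."""
    # Each party opens x - a and y - b; this is the only online communication.
    eps = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % P
    delta = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % P
    # Locally: z_i = c_i + eps*b_i + delta*a_i (one party also adds eps*delta),
    # since x*y = c + eps*b + delta*a + eps*delta.
    z0 = (c_sh[0] + eps * b_sh[0] + delta * a_sh[0] + eps * delta) % P
    z1 = (c_sh[1] + eps * b_sh[1] + delta * a_sh[1]) % P
    return z0, z1

x_sh, y_sh = share(20), share(19)
z_sh = beaver_multiply(x_sh, y_sh)
assert sum(z_sh) % P == 20 * 19
```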

New Results about the Boomerang Uniformity of Permutation Polynomials

In EUROCRYPT 2018, Cid et al. introduced a new concept for the cryptographic properties of S-boxes: the Boomerang Connectivity Table (BCT for short), for evaluating the subtleties of boomerang-style attacks. Very recently, the BCT and the boomerang uniformity, the maximum value in the BCT, were further studied by Boura and Canteaut. Aiming at providing new insights, we show some new theoretical and experimental results about the BCT and the boomerang uniformity of permutations in this paper. Firstly, we present an equivalent technique to compute the BCT and the boomerang uniformity, which seems much simpler than the original definition. Secondly, thanks to Carlet's idea, we give a characterization of functions $f$ from $\mathbb{F}_{2}^n$ to itself with boomerang uniformity $\delta_{f}$ by means of the Walsh transform. Thirdly, by our method, we consider the boomerang uniformities of some specific permutations, mainly those with low differential uniformity. Finally, we obtain another class of $4$-uniform BCT permutation polynomials over $\mathbb{F}_{2^n}$, which is the first such binomial.
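For reference, the BCT of Cid et al. records, for each pair $(a, b)$, the number of $x$ with $S^{-1}(S(x)\oplus b)\oplus S^{-1}(S(x\oplus a)\oplus b)=a$, and the boomerang uniformity is the maximum entry over nonzero $a, b$. A direct-from-definition sketch (using PRESENT's well-known 4-bit S-box as an example input; this is the original definition, not the paper's simpler equivalent technique):

```python
def bct(sbox):
    """Boomerang Connectivity Table, computed directly from the definition."""
    n = len(sbox)
    inv = [0] * n
    for x, y in enumerate(sbox):
        inv[y] = x  # inverse permutation
    return [[sum(inv[sbox[x] ^ b] ^ inv[sbox[x ^ a] ^ b] == a for x in range(n))
             for b in range(n)]
            for a in range(n)]

def boomerang_uniformity(sbox):
    """Maximum BCT entry over nonzero (a, b)."""
    table, n = bct(sbox), len(sbox)
    return max(table[a][b] for a in range(1, n) for b in range(1, n))

# PRESENT's S-box, a standard 4-bit example.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

table = bct(PRESENT_SBOX)
assert all(table[0][b] == 16 for b in range(16))  # row a = 0 is trivially full
assert boomerang_uniformity(PRESENT_SBOX) >= 4    # BCT entries dominate the DDT
```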

Testing the Randomness of Cryptographic Function Mappings

A cryptographic function with a fixed-length output, such as a block cipher, hash function, or message authentication code (MAC), should behave as a random mapping. The mapping's randomness can be evaluated with statistical tests. Statistical test suites typically used to evaluate cryptographic functions, such as the NIST test suite, are not well-suited for testing fixed-output-length cryptographic functions. Also, these test suites employ a frequentist approach, making it difficult to obtain an overall evaluation of the mapping's randomness. This paper describes CryptoStat, a test suite that overcomes the aforementioned deficiencies. CryptoStat is specifically designed to test the mappings of fixed-output-length cryptographic functions, and CryptoStat employs a Bayesian approach that quite naturally yields an overall evaluation of the mappings' randomness. Results of applying CryptoStat to reduced-round and full-round versions of the AES block ciphers and the SHA-1 and SHA-2 hash functions are reported; the results are analyzed to determine the algorithms' randomness margins.

Pairing Implementation Revisited

Pairing-based cryptography is now a mature science. However, implementing a pairing-based protocol can be challenging, as the efficient computation of a pairing is difficult, and the existing literature on implementation might not match the requirements of a particular application. Furthermore, developments in our understanding of the true security of pairings render much of the existing literature redundant. Here we take a fresh look and develop a simpler three-stage algorithm for calculating pairings as they arise in real applications.

Analysis and Improvement of Differential Computation Attacks against Internally-Encoded White-Box Implementations

White-box cryptography is the last security barrier for a cryptographic software implementation deployed in an untrusted environment. The principle of internal encodings is a commonly used white-box technique to protect block cipher implementations. It consists in representing an implementation as a network of look-up tables which are then encoded using randomly generated bijections (the internal encodings). When this approach is implemented based on nibble (i.e. 4-bit wide) encodings, the protected implementation has been shown to be vulnerable to differential computation analysis (DCA). The latter is essentially an adaptation of differential power analysis techniques to computation traces consisting of runtime information, e.g., memory accesses, of the target software.
In order to thwart DCA, it has then been suggested to use wider encodings, and in particular byte encodings, at least to protect the outer rounds of the block cipher which are the prime targets of DCA.
In this work, we provide an in-depth analysis of when and why DCA works. We pinpoint the properties of the target variables and the encodings that make the attack (in)feasible. In particular, we show that DCA can break encodings wider than 4 bits, such as byte encodings. Additionally, we propose new DCA-like attacks inspired by side-channel analysis techniques. Specifically, we describe a collision attack particularly effective against the internal encoding countermeasure. We also investigate mutual information analysis (MIA), which naturally applies in this context. Compared to the original DCA, these attacks are also passive and require very limited knowledge of the attacked implementation, yet they achieve significant improvements in terms of trace complexity. All the analyses in our work are experimentally backed up with various attack simulation results. We also verified the practicality of our analyses and attack techniques against a publicly available white-box AES implementation protected with byte encodings --which DCA had failed to break before-- and against a ``masked'' white-box AES implementation --which intends to resist DCA.

Assessment of the Key-Reuse Resilience of NewHope

NewHope is a suite of two efficient Ring-Learning-With-Error based key encapsulation mechanisms (KEMs) that has been proposed to the NIST call for proposals for post-quantum standardization. In this paper, we study the security of NewHope when an active adversary accesses a key establishment and is given access to an oracle, called a key mismatch oracle, which indicates whether her guess of the shared key value derived by the party targeted by the attack is correct or not. This attack model turns out to be relevant in key reuse situations since an attacker may then be able to access such an oracle repeatedly with the same key either directly or using faults or side channels, depending on the considered instance of NewHope. Following this model we show that, by using NewHope recommended parameters, several thousands of queries are sufficient to recover the full private key with high probability. This result has been experimentally confirmed using an implementation in the Magma CAS. While the presented key mismatch oracle attacks do not break any of the designers' security claims for the NewHope KEMs, they provide better insight into the resilience of these KEMs against key reuse. In the case of the CPA-KEM instance of NewHope, they confirm that key reuse (e.g. key caching at server side) should be strictly avoided, even for an extremely short duration. In the case of the CCA-KEM instance of NewHope, they allow us to point out critical steps inside the CCA transform that should be carefully protected against faults or side channels in case of potential key reuse.

Efficient and Secure Multiparty Computation from Fixed-Key Block Ciphers

Many implementations of secure computation use fixed-key AES (modeled as a random permutation); this yields substantial performance benefits due to existing hardware support for AES and the ability to avoid recomputing the AES key schedule. Surveying these implementations, however, we find that most utilize AES in a heuristic fashion; in the best case this leaves a gap in the security proof, but in many cases we show it allows for explicit attacks.
Motivated by this unsatisfactory state of affairs, we initiate a comprehensive study of how to use fixed-key block ciphers for secure computation---in particular for OT extension and circuit garbling---efficiently and securely. Specifically:
- We consider several notions of pseudorandomness for hash functions (e.g., correlation robustness), and show provably secure schemes for OT extension, garbling, and other applications based on hash functions satisfying these notions.
- We provide provably secure constructions, in the random-permutation model, of hash functions satisfying the different notions of pseudorandomness we consider.
Taken together, our results provide end-to-end security proofs for implementations of secure-computation protocols based on fixed-key block ciphers (modeled as random permutations). Perhaps surprisingly, at the same time our work also results in noticeable performance improvements over the state-of-the-art.
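
A common idiom in this setting is to build a hash from a fixed public permutation with a feed-forward, rather than using the permutation directly. The following toy sketch is not the paper's construction: it models the fixed permutation as a seeded shuffle over 8-bit blocks (standing in for fixed-key AES over 128-bit blocks) and applies a Matyas-Meyer-Oseas-style feed-forward.

```python
import random

# Toy model: a fixed public random permutation pi over 8-bit blocks,
# standing in for fixed-key AES (a 128-bit permutation in practice).
rng = random.Random(0)          # fixed seed: pi is public and fixed
pi = list(range(256))
rng.shuffle(pi)

def mmo_hash(x: int) -> int:
    """Feed-forward compression: H(x) = pi(x) XOR x.
    Unlike pi(x) alone, which an adversary can invert, the feed-forward
    destroys invertibility, which is what the hash-function notions
    (e.g. correlation robustness) build on in the random-permutation model."""
    return pi[x] ^ x

# H is not invertible the way pi is: distinct inputs may collide.
outputs = {mmo_hash(x) for x in range(256)}
print(len(outputs) < 256)  # collisions exist with overwhelming probability
```

The seeded shuffle is purely illustrative; any fixed public permutation plays the same structural role.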

Destructive Privacy and Mutual Authentication in Vaudenay's RFID Model

With the large-scale adoption of Radio Frequency Identification (RFID) technology, a variety of security and privacy risks need to be addressed. Arguably, the most general and most widely used RFID security and privacy model is the one proposed by Vaudenay. It considers concurrency, corruption (with or without destruction) of tags, and the possibility to get the result of a protocol session on the reader side. Security in Vaudenay's model takes two forms, unilateral (tag) authentication and mutual (tag and reader) authentication, while privacy is very flexible and dependent on the adversary class. The construction of destructive private RFID schemes in Vaudenay's model was left open when the model was initially proposed. It was solved three years later in the context of unilateral authentication.
In this paper we propose a destructive private and mutual authentication RFID scheme in Vaudenay's model. The security and privacy of our scheme are rigorously proved. We also show that the only two RFID schemes proposed so far that claimed to achieve destructive privacy and mutual authentication are not even narrow forward private. Thus, our RFID scheme is the first one to achieve this kind of privacy and security. The paper also points out some privacy proof flaws encountered in previous constructions.

ZeroCT: Improving ZeroCoin with Confidential Transactions and more

The Zerocoin protocol is a set of cryptographic algorithms which, embedded in a cryptocurrency, provide anonymous exchange of tokens in a mathematically provable way by using cryptographic accumulators. Functionally, it can be described as a black box into which an actor can introduce an arbitrary number of coins and later withdraw them without leaving evidence of a connection between the two actions. The withdrawal step admits a destination for the coins different from the original minter, but it unconditionally requires a previous mint action and does not accept the transfer of coins without their leaving the accumulator, thus exposing the coins to traceability. We propose an alternative design which, for the first time, combines the virtues of Zerocoin with those of Confidential Transactions, offering fully-featured anonymous transactions between individuals with private amounts.

Repeatable Oblivious Shuffling of Large Outsourced Data Blocks

As data outsourcing becomes popular, oblivious algorithms have attracted extensive attention, since their control flow and data access pattern appear to be independent of the input data they compute on, making them especially suitable for secure processing in outsourced environments. In this work, we focus on oblivious shuffling algorithms that aim to shuffle encrypted data blocks outsourced to the cloud server without disclosing the permutation of blocks to the server. Existing oblivious shuffling algorithms suffer from heavy communication and client computation costs when blocks are large, because all outsourced blocks must be downloaded to the client for shuffling or for peeling off extra encryption layers. To fill this void, we introduce the ``repeatable oblivious shuffling'' notion, which restricts the communication and client computation costs to be independent of the block size. We present an efficient construction of repeatable oblivious shuffling using additively homomorphic encryption schemes. A comprehensive evaluation of our construction shows its effective usability in practice for shuffling large-sized blocks.

Uncle Traps: Harvesting Rewards in a Queue-based Ethereum Mining Pool

Mining pools in Proof-of-Work cryptocurrencies allow miners to pool their computational resources as a means of reducing payout variance. In Ethereum, uncle blocks are valid Proof-of-Work solutions which do not become the head of the blockchain, yet yield rewards if later referenced by main chain blocks. Mining pool operators are faced with the non-trivial task of fairly distributing rewards for both block types among pool participants.
Inspired by empirical observations, we formally reconstruct a Sybil attack exploiting the uncle block distribution policy in a queue-based mining pool. To ensure fairness of the queue-based payout scheme, we propose a mitigation. We examine the effectiveness of the attack strategy under the current and the proposed policy via a discrete-event simulation.
Our findings show that the observed attack can indeed be obviated by altering the current reward scheme.

Quantum Indistinguishability of Random Sponges

In this work we show that the sponge construction can be used to construct quantum-secure pseudorandom functions. As our main result we prove that random sponges are quantum indistinguishable from random functions. In this setting the adversary is given superposition access to the input-output behavior of the construction but not to the internal function. Our proofs hold under the assumption that the internal function is a random function or permutation. We then use this result to obtain a quantum-security version of a result by
Andreeva, Daemen, Mennink, and Van Assche (FSE'15), which shows that a sponge that uses a secure PRP or PRF as its internal function is a secure PRF. This result also proves that the recent attacks against CBC-MAC in the quantum-access model by Kaplan, Leurent, Leverrier, and Naya-Plasencia (Crypto'16) and by Santoli and Schaffner (QIC'16) can be prevented by introducing a state with a non-trivial inner part.
The proof of our main result is derived by analyzing the joint distribution of any $q$ input-output pairs.
Our method analyzes the statistical behavior of the considered construction in great detail. The used techniques might prove useful in future analysis of different cryptographic primitives considering quantum adversaries.
Using Zhandry's PRF/PRP switching lemma we then obtain that quantum indistinguishability also holds if the internal block function is a random permutation.
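
For readers unfamiliar with the construction analysed above, the sponge's absorb/squeeze structure can be sketched in miniature. This toy sketch is purely structural (not the paper's proof object): it uses an 8-bit state split into a 4-bit rate and a 4-bit capacity (the "inner part"), with the internal permutation modeled as a seeded shuffle.

```python
import random

# Toy sponge with an 8-bit state: rate r = 4 bits, capacity c = 4 bits.
# The internal permutation f stands in for the random permutation of the
# random-sponge setting; here it is a fixed seeded shuffle for illustration.
rng = random.Random(1)
f = list(range(256))
rng.shuffle(f)

C = 4  # capacity in bits; high nibble of the state is the rate part

def sponge(msg_nibbles, out_nibbles):
    state = 0
    for m in msg_nibbles:            # absorb: XOR block into the rate, permute
        state = f[state ^ (m << C)]
    out = []
    for _ in range(out_nibbles):     # squeeze: read the rate part, permute
        out.append(state >> C)
        state = f[state]
    return out

print(sponge([0x3, 0xA], 2))
```

The capacity nibble is never output directly, which is exactly the "non-trivial inner part" that the result above identifies as the barrier to the CBC-MAC-style quantum attacks.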

Sampling the Integers with Low Relative Error

Randomness is an essential part of any secure cryptosystem, but many constructions rely on distributions that are not uniform. This is particularly true for lattice-based cryptosystems, which more often than not make use of discrete Gaussian distributions over the integers. For practical purposes it is crucial to evaluate the impact that approximation errors have on the security of a scheme, in order to provide the best possible trade-off between security and performance. Recent years have seen surprising results allowing the use of relatively low precision while maintaining high levels of security. A key insight in these results is that sampling a distribution with low relative error can provide very strong security guarantees. Since floating-point numbers provide guarantees on the relative approximation error, they seem a suitable tool in this setting, but it is not obvious which sampling algorithms can actually profit from them. While previous works have shown that inversion sampling can be adapted to provide a low relative error (Pöppelmann et al., CHES 2014; Prest, ASIACRYPT 2017), other works have called into question whether this is possible for other sampling techniques (Zheng et al., Eprint report 2018/309). In this work, we consider all sampling algorithms that are popular in the cryptographic setting and analyze the relationship between floating-point precision and the resulting relative error. We show that all of the algorithms either natively achieve a low relative error or can be adapted to do so.
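
Inversion sampling, the technique the abstract cites as adaptable to low relative error, can be sketched as follows. This is a minimal illustration with assumed toy parameters (sigma = 4, tail cut t = 40), not the paper's analysis: double-precision floats bound the relative error of each stored probability, which is the quantity argued to control the security loss.

```python
import math
import random

# Inversion (CDF-table) sampling of a discrete Gaussian over Z,
# truncated to [-t, t].
sigma, t = 4.0, 40

weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-t, t + 1)]
total = sum(weights)

# Cumulative distribution table; index 0 corresponds to x = -t.
cdf = []
acc = 0.0
for w in weights:
    acc += w / total
    cdf.append(acc)

def sample(rng):
    """Draw u uniform in [0,1) and invert the CDF (linear scan for clarity)."""
    u = rng.random()
    for i, c in enumerate(cdf):
        if u <= c:
            return i - t
    return t  # guard against accumulated rounding in the last entry

rng = random.Random(42)
xs = [sample(rng) for _ in range(10000)]
print(abs(sum(xs) / len(xs)) < 0.5)  # empirical mean is near 0
```

In practice a binary search over the table replaces the linear scan, and constant-time table access is needed against side channels; neither changes the relative-error argument.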

Managing Your Kleptographic Subscription Plan

In the classical kleptographic business models, the manufacturer of a device $D$ is paid either in advance or in installments by a malicious entity to backdoor $D$. Unfortunately, these models carry an inherently high risk for the manufacturer. This translates into high costs for clients. To address this issue, we introduce a subscription-based business model and tackle some of the technical difficulties that arise.

Publicly Verifiable Proofs from Blockchains

A proof system is publicly verifiable if anyone, by looking at the transcript of the proof, can be convinced that the corresponding theorem is true. Public verifiability is important in many applications since it allows a proof to be computed only once while convincing an unlimited number of verifiers.
Popular interactive proof systems (e.g., $\Sigma$-protocols) protect the witness through various properties (e.g., witness indistinguishability (WI) and zero knowledge (ZK)) but typically they are not publicly verifiable since such proofs are convincing only for those verifiers who contributed to the transcripts of the proofs.
The only known proof systems that are publicly verifiable rely on a non-interactive (NI) prover, through trust assumptions (e.g., NIZK in the CRS model), heuristic assumptions (e.g., NIZK in the random oracle model), specific number-theoretic assumptions on bilinear groups, or obfuscation assumptions (obtaining NIWI with no setup).
In this work we construct publicly verifiable witness-indistinguishable proof systems from any $\Sigma$-protocol, based only on the existence of a very generic blockchain. The novelty of our approach is in enforcing a non-interactive verification (thus guaranteeing public verifiability) while allowing the prover to be interactive and talk to the blockchain (this allows us to circumvent the need of strong assumptions and setups). This opens interesting directions for the design of cryptographic protocols leveraging on blockchain technology.

Multi-Protocol UC and its Use for Building Modular and Efficient Protocols

We want to design and analyze protocols in a modular way by combining idealized components that we realize individually. While this is in principle possible using security frameworks that provide generic composition theorems, we notice that actually applying this methodology in practical protocols is far from trivial and, worse, is sometimes not even possible. As an example, we use a natural combination of zero-knowledge proofs with signature and commitment schemes, where the goal is to have a party prove in zero-knowledge that it knows a signature on a committed message, i.e., prove knowledge of a witness to a statement involving algorithms of the signature and commitment scheme. We notice that, unfortunately, the composition theorem of the widely used UC framework does not allow one to modularly prove the security of this example protocol.
We then describe a new variant of the UC framework, multi-protocol UC, and show a composition theorem that generalizes the one from the standard framework. We use this new framework to provide a modular analysis of a practical protocol that follows the above structure and is based on discrete-logarithm-based primitives. Besides the individual security proofs of the protocol components, we also describe a new methodology for idealizing them as components that can then be composed.

A Revocable Group Signature Scheme with Scalability from Simple Assumptions and Its Application to Identity Management

Group signatures are signatures providing signer anonymity where signers can produce signatures on behalf of the group that they belong to. Although such anonymity is quite attractive considering privacy issues, it is not trivial to check whether a signer has been revoked or not. Thus, how to revoke the rights of signers is one of the major topics in the research on group signatures. In particular, scalability, where the signing and verification costs and the signature size are constant in terms of the number of signers N, and other costs regarding signers are at most logarithmic in N, is quite important.
In this paper, we propose a revocable group signature scheme that is more efficient than all previous scalable schemes. Moreover, our revocable group signature scheme is secure under simple assumptions (in the random oracle model), whereas all previous scalable schemes are secure under q-type assumptions. We implemented our scheme by employing Barreto-Lynn-Scott curves of embedding degree 12 over a 455-bit prime field (BLS-12-455) and Barreto-Naehrig curves of embedding degree 12 over a 382-bit prime field (BN-12-382), respectively, using the RELIC library. The online running times of our signing algorithm were approximately 14 msec (BLS-12-455) and 11 msec (BN-12-382), and those of our verification algorithm were approximately 20 msec (BLS-12-455) and 16 msec (BN-12-382), respectively. Finally, we show that our scheme can be applied to an identity management system proposed by Isshiki et al.

Efficient Non-Interactive Zero-Knowledge Proofs in Cross-Domains without Trusted Setup

Efficient zero-knowledge (ZK) proofs of algebraic statements have existed for decades; with the recent emergence of efficient ZK proofs for general circuits, a natural challenge arose to combine algebraic and non-algebraic statements. Chase et al. (CRYPTO 2016) proposed an interactive ZK proof system for this cross-domain problem. As a use case, they show that their system can be used to prove knowledge of an RSA/DSA signature on a message m with respect to a publicly known Pedersen commitment g^m h^r. One drawback of their system is that it requires interaction between the prover and the verifier. This is due to the interactive nature of garbled circuits, which are used in their construction. Subsequently, Agrawal et al. (CRYPTO 2018) proposed an efficient non-interactive ZK (NIZK) proof system for cross-domains based on SNARKs, which however requires a trusted setup assumption.
In this paper, we propose a NIZK proof system for cross-domains that requires no trusted setup and is efficient both for the prover and the verifier. Our system combines Schnorr-based ZK proofs with ZK proofs for general circuits by Giacomelli et al. (USENIX 2016). The proof size and the running time of our system are comparable to the approach by Chase et al. Compared to Bulletproofs (S&P 2018), a recent NIZK proof system on committed inputs, our techniques achieve asymptotically better performance for the prover and the verifier, thus presenting a different trade-off between the proof size and the running time.

Additively Homomorphic IBE from Higher Residuosity

We present an identity-based encryption (IBE) scheme that is group homomorphic for addition modulo a ``large'' (i.e. superpolynomial) integer, the first such group homomorphic IBE. Our first result is the construction of an IBE scheme supporting homomorphic addition modulo a poly-sized prime $e$. Our construction builds upon the IBE scheme of Boneh, LaVigne and Sabin (BLS). BLS relies on a hash function that maps identities to $e$-th residues. However, there is no known way to securely instantiate such a function. Our construction extends BLS so that it can use a hash function that can be securely instantiated. We prove our scheme IND-ID-CPA secure under the (slightly modified) $e$-th residuosity assumption in the random oracle model and show that it supports a (modular) additive homomorphism. By using multiple instances of the scheme with distinct primes and leveraging the Chinese Remainder Theorem, we can support homomorphic addition modulo a ``large'' (i.e. superpolynomial) integer. We also show that our scheme for $e > 2$ is anonymous, additionally assuming the hardness of deciding solvability of a special system of multivariate polynomial equations. We provide a justification for this assumption by considering known attacks.
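
The CRT trick described above is plain modular arithmetic and can be illustrated independently of the IBE scheme. The sketch below (toy primes, no encryption) shows how per-prime additive homomorphisms combine into one modulo the product of the primes.

```python
from math import prod

# Stand-ins for the poly-sized primes e of the per-instance schemes.
primes = [5, 7, 11]
M = prod(primes)  # homomorphism modulus after CRT combination: 385

def encode(m):
    # One "ciphertext slot" per instance: the residue of m mod each prime.
    return [m % p for p in primes]

def add(c1, c2):
    # Component-wise addition = homomorphic addition in each instance.
    return [(a + b) % p for a, b, p in zip(c1, c2, primes)]

def crt_decode(c):
    # Reconstruct the sum modulo M via the Chinese Remainder Theorem.
    x = 0
    for r, p in zip(c, primes):
        n = M // p
        x = (x + r * n * pow(n, -1, p)) % M
    return x

m1, m2 = 123, 321
print(crt_decode(add(encode(m1), encode(m2))))  # (123 + 321) % 385 = 59
```

In the actual scheme each slot is an IBE ciphertext that is only additively homomorphic modulo its prime; the CRT reconstruction happens on decrypted residues.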

SigAttack: New High-level SAT-based Attack on Logic Encryptions

Logic encryption is a powerful hardware protection technique that uses extra key inputs to lock a circuit against piracy or unauthorized use. The recent discovery of the SAT-based attack with Distinguishing Input Pattern (DIP) generation has rendered all traditional logic encryptions vulnerable, and has thus prompted the creation of new encryption methods. However, a critical question for any new encryption method is whether security against the DIP-generation attack implies security against all other attacks. In this paper, a new high-level SAT-based attack called SigAttack is discovered and thoroughly investigated. It is based on extracting a key-revealing signature in the encryption. A majority of all known SAT-resilient encryptions are shown to be vulnerable to SigAttack. By formulating the conditions under which SigAttack is effective, the paper also provides guidance for future logic encryption design.

CycSAT-Unresolvable Cyclic Logic Encryption Using Unreachable States

Logic encryption has attracted much attention due to increasing IC design costs and the growing number of untrusted foundries. Unreachable states in a design provide a space of flexibility for logic encryption to explore. However, because of the available access to the scan chain, traditional combinational encryption cannot leverage the benefit of such flexibility. Cyclic logic encryption inserts key-controlled feedbacks into the original circuit to prevent piracy and overproduction. Based on our discovery, cyclic logic encryption can utilize unreachable states to improve security. Even though cyclic encryption is vulnerable to a powerful attack called CycSAT, we develop a new form of cyclic encryption that utilizes unreachable states to defeat CycSAT. The attack complexity of the proposed scheme is discussed and its robustness is demonstrated.

BeSAT: Behavioral SAT-based Attack on Cyclic Logic Encryption

Cyclic logic encryption is a newly proposed technique in the area of hardware security. It introduces feedback cycles into the circuit to defeat existing logic decryption techniques. To ensure that the circuit is acyclic under the correct key, CycSAT was developed to add the acyclic condition as a CNF formula to the SAT-based attack. However, we find that it is impossible to capture all cycles in an arbitrary graph with any set of feedback signals, as is done in the CycSAT algorithm. In this paper, we propose a behavioral SAT-based attack called BeSAT. BeSAT observes the behavior of the encrypted circuit on top of the structural analysis, so the stateful and oscillatory keys missed by CycSAT can still be blocked. The experimental results show that BeSAT successfully overcomes the drawback of CycSAT.

Tightly secure hierarchical identity-based encryption

We construct the first tightly secure hierarchical identity-based encryption (HIBE) scheme based on standard assumptions, which solves an open problem from Blazy, Kiltz, and Pan (CRYPTO 2014). At the core of our constructions is a novel randomization technique that enables us to randomize user secret keys for identities with flexible length.
The security reductions of previous HIBEs lose at least a factor of Q, which is the number of user secret key queries. Different to that, the security loss of our schemes is only dependent on the security parameter. Our schemes are adaptively secure based on the Matrix Diffie-Hellman assumption, which is a generalization of standard Diffie-Hellman assumptions such as k-Linear. We have two tightly secure constructions, one with constant ciphertext size, and the other with tighter security at the cost of linear ciphertext size. Among other things, our schemes imply the first tightly secure identity-based signature scheme by a variant of the Naor transformation.

Short Discrete Log Proofs for FHE and Ring-LWE Ciphertexts

In applications of fully-homomorphic encryption (FHE) that involve computation on encryptions produced by several users, it is important that each user proves that her input is indeed well-formed. This may simply mean that the inputs are valid FHE ciphertexts or, more generally, that the plaintexts $m$ additionally satisfy $f(m)=1$ for some public function $f$. The most efficient FHE schemes are based on the hardness of the Ring-LWE problem and so a natural solution would be to use lattice-based zero-knowledge proofs for proving properties about the ciphertext. Such methods, however, require larger-than-necessary parameters and result in rather long proofs, especially when proving general relationships.
In this paper, we show that one can get much shorter proofs (roughly $1.25$KB) by first creating a Pedersen commitment from the vector corresponding to the randomness and plaintext of the FHE ciphertext. To prove validity of the ciphertext, one can then prove that this commitment is indeed to the message and randomness, and that these values are in the correct range. Our protocol utilizes a connection between polynomial operations in the lattice scheme and the inner product proofs for Pedersen commitments of Bünz et al. (IEEE S&P 2018). Furthermore, our proof of equality between the ciphertext and the commitment is very amenable to amortization -- proving the equivalence of $k$ ciphertext / commitment pairs only requires an additive factor of $O(\log{k})$ extra space over one such proof. For proving additional properties of the plaintext(s), one can then directly use the logarithmic-space proofs of Bootle et al. (Eurocrypt 2016) and Bünz et al. (IEEE S&P 2018) for proving arbitrary relations over discrete log commitments.
Our technique is not restricted to FHE ciphertexts and can be applied to proving many other relations that arise in lattice-based cryptography. For example, we can create very efficient verifiable encryption / decryption schemes with short proofs in which confidentiality is based on the hardness of Ring-LWE while the soundness is based on the discrete logarithm problem. While such proofs are not fully post-quantum, they are adequate in scenarios where secrecy needs to be future-proofed, but one only needs to be convinced of the validity of the proof in the pre-quantum era. We furthermore show that our zero-knowledge protocol can be easily modified to have the property that breaking soundness implies solving discrete log in a short amount of time. Since building quantum computers capable of solving discrete logarithm in seconds requires overcoming many more fundamental challenges, such proofs may even remain valid in the post-quantum era.
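
The Pedersen commitment at the heart of the approach is standard and easy to illustrate. The sketch below uses deliberately tiny, insecure demo parameters (a prime-order subgroup mod 107) and shows the additive homomorphism that the inner-product proofs above exploit.

```python
# Toy Pedersen commitment in a prime-order subgroup of Z_p^*.
# Demo-only parameters: p = 2q + 1 with p = 107, q = 53.
p, q = 107, 53
g, h = 4, 9  # squares mod p, hence elements of order q

def commit(m, r):
    """Com(m; r) = g^m * h^r mod p: perfectly hiding, and binding as
    long as log_g(h) is unknown (discrete log assumption)."""
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 10, 7
m2, r2 = 20, 3
# Additive homomorphism: Com(m1; r1) * Com(m2; r2) = Com(m1 + m2; r1 + r2).
lhs = (commit(m1, r1) * commit(m2, r2)) % p
rhs = commit((m1 + m2) % q, (r1 + r2) % q)
print(lhs == rhs)  # True
```

In the protocol above the committed vector is the ciphertext's randomness and plaintext, and a vector of generators replaces the single pair (g, h); the homomorphic structure is the same.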

Obfuscating simple functionalities from knowledge assumptions

This paper shows how to obfuscate several simple functionalities from a new Knowledge of OrthogonALity Assumption (KOALA) in cyclic groups which is shown to hold in the Generic Group Model. Specifically, we give simpler and stronger security proofs for obfuscation schemes for point functions, general-output point functions and pattern matching with wildcards. We also revisit the work of Bishop et al. (CRYPTO 2018) on obfuscating the pattern matching with wildcards functionality. We improve upon the construction and the analysis in several ways:
- attacks and stronger guarantees: We show that the construction achieves virtual black-box security for a simulator that runs in time roughly $2^{n/2}$, as well as distributional security for larger classes of distributions. We give attacks that show that our results are tight.
- weaker assumptions: We prove security under KOALA.
- better efficiency: We also provide a construction that outputs $n+1$ instead of $2n$ group elements.
We obtain our results by first obfuscating a simpler "big subset functionality", for which we establish full virtual black-box security; this yields a simpler and more modular analysis for pattern matching. Finally, we extend our distinguishing attacks to a large class of simple linear-in-the-exponent schemes.

Rate-Optimizing Compilers for Continuously Non-Malleable Codes

We study the *rate* of so-called *continuously* non-malleable codes, which allow one to encode a message in such a way that (possibly adaptive) continuous tampering attacks on the codeword yield a decoded value that is unrelated to the original message. Our results are as follows:
-) For the case of bit-wise independent tampering, we establish the existence of rate-one continuously non-malleable codes with information-theoretic security, in the plain model.
-) For the case of split-state tampering, we establish the existence of rate-one continuously non-malleable codes with computational security, in the (non-programmable) random oracle model. We further exhibit a rate-1/2 code and a rate-one code in the common reference string model, but the latter only withstands *non-adaptive* tampering.
It is well known that computational security is inherent for achieving continuous non-malleability in the split-state model (even in the presence of non-adaptive tampering).
Continuously non-malleable codes are useful for protecting *arbitrary* cryptographic primitives against related-key attacks, as well as for constructing non-malleable public-key encryption schemes. Our results directly improve the efficiency of these applications.

Deep Learning to Evaluate Secure RSA Implementations

This paper presents the results of several successful profiled side-channel attacks against a secure implementation of the RSA algorithm. The implementation was running on an ARM Core SC 100 completed with a certified EAL4+ arithmetic co-processor. The analyses were conducted by three expert teams, each working on a specific attack path and exploiting information extracted either from the electromagnetic emanation or from the power consumption. Particular attention is paid to the description of all the steps that are usually followed during a security evaluation by a laboratory, including the acquisitions and the pre-processing of the observations, practical issues usually put aside in the literature. Remarkably, the profiling portability issue is also taken into account, and different device samples are involved in the profiling and testing phases. Among other aspects, this paper shows the high potential of deep learning attacks against secure implementations of RSA and raises the need for dedicated countermeasures.

A New Code-based Signature Scheme with Shorter Public Key

Code-based signatures are believed to be a useful authentication tool for post-quantum cryptography. There have been some attempts to construct efficient code-based signatures; however, existing code-based signature schemes suffer from a large public-key size, which has limited their applicability. Constructing efficient code-based signatures with a shorter public-key size has been a challenging research task. In this paper, we propose an efficient code-based signature scheme which offers a short public key size. Our scheme is an analogue of the Schnorr signature, where we utilize random rank double circulant codes and the matrix-vector product used in the Rank Quasi-Cyclic (RQC) scheme introduced by Melchor et al. (NIST 2017). We provide a security proof of our signature scheme by reducing it to the Rank Quasi-Cyclic Syndrome Decoding (RQCSD) problem. Our work provides an example of the construction of code-based signatures for applications which require short public keys.
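
For context, the classical Schnorr paradigm that the scheme is an analogue of can be sketched over a toy discrete-log group. This is not the paper's code-based scheme (which replaces the group operations with rank-metric code arithmetic); parameters and the seeded nonce derivation are demo-only assumptions.

```python
import hashlib
import random

# Toy parameters: g = 4 has prime order q = 53 in Z_107^*.
p, q, g = 107, 53, 4

def H(*parts):
    """Fiat-Shamir challenge: hash the transcript down to Z_q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

sk = random.Random(0).randrange(1, q)  # secret key (fixed seed: demo only)
pk = pow(g, sk, p)                     # public key pk = g^sk

def sign(msg):
    k = random.Random(msg).randrange(1, q)  # demo-only nonce derivation
    R = pow(g, k, p)                        # commitment
    c = H(R, pk, msg)                       # challenge
    s = (k + c * sk) % q                    # response
    return (R, s)

def verify(msg, sig):
    R, s = sig
    c = H(R, pk, msg)
    return pow(g, s, p) == (R * pow(pk, c, p)) % p  # g^s = R * pk^c

print(verify("hello", sign("hello")))  # True
```

The code-based analogue follows the same commit-challenge-response shape, with security resting on RQCSD instead of the discrete logarithm.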

Key Encapsulation Mechanism with Explicit Rejection in the Quantum Random Oracle Model

The recent post-quantum cryptography standardization project launched by NIST has increased interest in generic key encapsulation mechanism (KEM) constructions in the quantum random oracle model (QROM).
Based on a OW-CPA-secure public-key encryption scheme (PKE), Hofheinz, Hövelmanns and Kiltz (TCC 2017) first presented two generic constructions of an IND-CCA-secure KEM with quartic security loss in the QROM: one with implicit rejection (a pseudorandom key is returned for an invalid ciphertext) and the other with explicit rejection (an abort symbol is returned for an invalid ciphertext).
Both are widely used in the NIST Round-1 KEM submissions, and those with explicit rejection account for 40%.
Recently, the security reductions have been improved to quadratic loss under a standard assumption, and to tight under a nonstandard assumption, by Jiang et al. (Crypto 2018) and Saito, Xagawa and Yamakawa (Eurocrypt 2018).
However, these improvements only apply to the KEM submissions with implicit rejection and the techniques do not seem to carry over to KEMs with explicit rejection.
In this paper, we provide three generic constructions of an IND-CCA-secure KEM with explicit rejection, under the same assumptions and with the same tightness in the security reductions as the aforementioned KEM constructions
with implicit rejection (Crypto 2018, Eurocrypt 2018).
Specifically, we develop a novel approach to verify the validity of a ciphertext in the QROM and use it to extend the proof techniques for KEM constructions with implicit rejection (Crypto 2018, Eurocrypt 2018) to our KEM constructions with explicit rejection.
Moreover, using an improved version of one-way to hiding lemma by Ambainis, Hamburg and Unruh (ePrint 2018/904),
for two of our constructions, we present tighter reductions to the standard IND-CPA assumption.
Our results directly apply to 9 KEM submissions with explicit rejection, and provide tighter reductions than previously known (TCC 2017).

Deterministic Identity-Based Encryption from Lattice-Based Programmable Hash Functions with High Min-Entropy

There exists only one deterministic identity-based encryption (DIBE) scheme that is adaptively secure in the auxiliary-input setting, under the learning with errors (LWE) assumption. However, its master public key consists of $\mathcal{O}(\lambda)$ basic matrices. In this paper, we consider constructing adaptively secure DIBE schemes with more compact public parameters from the LWE problem.
On the one hand, we give a generic DIBE construction from lattice-based programmable hash functions (LPHFs) with high min-entropy.
On the other hand, by instantiating our generic DIBE construction with four LPHFs with high min-entropy, we obtain four adaptively secure DIBE schemes with more compact public parameters. In one of our DIBE schemes, the master public key consists of only $\omega(\log \lambda)$ basic matrices.

Improved Security Evaluation Techniques for Imperfect Randomness from Arbitrary Distributions

Dodis and Yu (TCC 2013) studied how the security of cryptographic primitives that are secure in the "ideal" model, in which the randomness is uniformly distributed, degrades when the ideal distribution is switched to a "real-world" (possibly biased) distribution with some lower bound on its min-entropy or collision-entropy. However, in many constructions, security is guaranteed only when the randomness is sampled from some non-uniform distribution (such as a Gaussian in lattice-based cryptography), in which case the results of Dodis and Yu do not directly apply.
In this paper, we generalize the results of Dodis and Yu using the Rényi divergence, and show how the security of a cryptographic primitive whose security is guaranteed when the ideal distribution of its randomness is a general (possibly non-uniform) distribution $Q$ degrades when the distribution is switched to another (real-world) distribution $R$. More specifically, we derive two general inequalities relating the Rényi divergence of $R$ from $Q$ to an adversary's advantage against the security of a cryptographic primitive. As applications of our results, we show (1) an improved reduction for switching the distributions of distinguishing problems with public samplability, which is simpler and much tighter than the reduction by Bai et al. (ASIACRYPT 2015), and (2) how the differential privacy of a mechanism degrades when its randomness comes not from an ideal distribution $Q$ but from a real-world distribution $R$. Finally, we show methods for approximately sampling from an arbitrary distribution $Q$ with a guaranteed upper bound on the Rényi divergence (of the distribution $R$ of our sampling methods from $Q$).
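For concreteness, the Rényi divergence used in such distribution-switching arguments can be computed directly for discrete distributions. The sketch below is an illustration only (the function name and defaults are our choices); it follows the multiplicative convention common in lattice-based cryptography:

```python
from math import inf

def renyi_divergence(R, Q, alpha=2.0):
    """Order-alpha Rényi divergence of R from Q over a finite support,
    in the multiplicative convention:
        RD_alpha(R||Q) = ( sum_x R(x)^alpha / Q(x)^(alpha-1) )^(1/(alpha-1))
    R and Q are dicts mapping outcomes to probabilities (alpha > 1)."""
    acc = 0.0
    for x, r in R.items():
        if r == 0.0:
            continue
        q = Q.get(x, 0.0)
        if q == 0.0:
            return inf  # infinite unless supp(R) is contained in supp(Q)
        acc += r ** alpha / q ** (alpha - 1)
    return acc ** (1.0 / (alpha - 1))

# A slightly biased "real-world" coin R versus the ideal fair coin Q:
R = {"heads": 0.55, "tails": 0.45}
Q = {"heads": 0.5, "tails": 0.5}
rd2 = renyi_divergence(R, Q, alpha=2.0)  # small divergence => small security loss
```

A divergence close to 1 means switching $Q$ for $R$ costs the adversary little advantage, which is the quantity the paper's inequalities bound.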

The Relationship between the Construction and Solution of the MILP Models and Applications

The automatic search method based on Mixed-Integer Linear Programming (MILP) is one of the most common tools for searching for distinguishers of block ciphers. For differential analysis, the byte-oriented MILP model is usually used to count the number of differentially active S-boxes, and the bit-oriented MILP model is used to search for the optimal differential characteristic. In this paper, we present four ways in which the construction of MILP models influences their solution by Gurobi: 1) the number of variables; 2) the number of constraints; 3) the order of the constraints; 4) the order of variables within constraints. We carefully construct the MILP models according to these influences in order to find the desired results in a reasonable time.
As applications, we search for differential characteristics of PRESENT, GIFT-64 and GIFT-128 in the single-key setting. We apply dual processing to the constraints of the S-box. It takes only 298 seconds to finish the search for the 8-round optimal differential characteristic based on the new MILP model. We also obtain the optimal differential characteristics of 9/10/11-round PRESENT. With a special initial constraint, it takes only 4 seconds to obtain a 9-round differential characteristic with probability $2^{-42}$. We also obtain 12/13-round differential characteristics with probability $2^{-58}/2^{-62}$. For GIFT-128, we improve the probability of the differential characteristics for $9 \sim 21$ rounds and give the first attack on 26-round GIFT-128, based on a 20-round differential characteristic with probability $2^{-121.415}$.

Sub-logarithmic Distributed Oblivious RAM with Small Block Size

Oblivious RAM (ORAM) is a cryptographic primitive that allows a client to securely execute RAM programs over data that is stored on an untrusted server. Distributed Oblivious RAM is a variant of ORAM in which the data is stored on $m>1$ servers. Extensive research over the last few decades has succeeded in reducing the bandwidth overhead of ORAM schemes, both in the single-server and the multi-server setting, from $O(\sqrt{N})$ to $O(1)$. However, all known protocols that achieve a sub-logarithmic overhead either require heavy server-side computation (e.g. homomorphic encryption), or a large block size of at least $\Omega(\log^3 N)$.
In this paper, we present a family of distributed ORAM constructions that follow the hierarchical approach of Goldreich and Ostrovsky [GO96]. We enhance known techniques, and develop new ones, to take better advantage of the existence of multiple servers. By plugging efficient known hashing schemes in our constructions, we get the following results:
1. For any number $m\geq 2$ of servers, we show an $m$-server ORAM scheme with $O(\log N/\log\log N)$ overhead, and block size $\Omega(\log^2 N)$. This scheme is private even against an $(m-1)$-server collusion.
2. A three-server ORAM construction with $\omega(1)\cdot\log N/\log\log N$ overhead and an almost logarithmic block size, i.e. $\Omega(\log^{1+\epsilon}N)$.
We also investigate a model where the servers are allowed to perform a linear amount of light local computation, and show that constant overhead is achievable in this model through a simple four-server ORAM protocol. From a theoretical viewpoint, this is the first ORAM scheme with asymptotically constant overhead and polylogarithmic block size that does not use homomorphic encryption. Practically speaking, although we do not provide an implementation of the suggested construction, evidence from related work (e.g. [DS17]) suggests that despite the linear computational overhead, our construction is practical, in particular when applied to secure computation.

NIST Post-Quantum Cryptography- A Hardware Evaluation Study

Experts forecast that quantum computers will be able to break classical cryptographic algorithms. Scientists are developing post-quantum cryptographic (PQC) algorithms that are resistant to quantum computer attacks. The National Institute of Standards and Technology (NIST) started a public evaluation process to standardize quantum-resistant public key algorithms. The objective of our study is to provide a hardware comparison of the NIST PQC competition candidates. To this end, we use a High-Level Synthesis (HLS) hardware design methodology to map high-level C specifications of selected PQC candidates into both FPGA and ASIC implementations.

Block-Anti-Circulant Unbalanced Oil and Vinegar

We introduce a new technique for compressing the public keys of the UOV signature scheme that makes use of block-anti-circulant matrices. These matrices admit a compact representation: for every block, the remaining elements can be inferred from the first row. This space saving translates to the public key, which as a result of this technique can be shrunk by a small integer factor. We propose parameter sets that take into account several important attacks.
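The compact representation can be made concrete with a small sketch. This assumes the usual convention that an anti-circulant matrix has constant anti-diagonals, i.e. $A[i][j] = a[(i+j) \bmod n]$; the function names are ours, not the paper's:

```python
def anti_circulant(first_row):
    """Expand a first row a into the anti-circulant matrix A with
    A[i][j] = a[(i + j) mod n]: each row is the previous one
    cyclically shifted left by one position."""
    n = len(first_row)
    return [[first_row[(i + j) % n] for j in range(n)] for i in range(n)]

def block_anti_circulant(first_rows):
    """Assemble a block matrix whose (r, c) block is the anti-circulant
    matrix generated by first_rows[r][c]. Only the first row of each
    block needs to be stored: an n-fold space saving per block."""
    out = []
    for block_row in first_rows:
        blocks = [anti_circulant(row) for row in block_row]
        n = len(block_row[0])
        for i in range(n):
            out.append([b[i][j] for b in blocks for j in range(n)])
    return out
```

Storing only the first rows is exactly where the "small integer factor" of key compression comes from: each $n \times n$ block costs $n$ field elements instead of $n^2$.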

Leakage-resilient Identity-based Encryption in Bounded Retrieval Model with Nearly Optimal Leakage-Ratio

We propose new constructions of leakage-resilient public-key encryption (PKE) and identity-based encryption (IBE) schemes in the bounded retrieval model (BRM). In the BRM, adversaries are allowed to obtain at most $\ell$ bits of leakage from a secret key, and we can increase $\ell$ only by increasing the size of secret keys without losing efficiency in any other performance measure. We call $\ell/|\textsf{sk}|$ the leakage-ratio, where $|\textsf{sk}|$ denotes the bit-length of a secret key.
Several PKE/IBE schemes in the BRM are known. However, none of these constructions achieve a constant leakage-ratio under a standard assumption in the standard model. Our PKE/IBE schemes are the first schemes in the BRM that achieve leakage-ratio $1-\epsilon$ for any constant $\epsilon>0$ under standard assumptions in the standard model.
As in previous works, we use identity-based hash proof systems (IB-HPS) to construct IBE schemes in the BRM. It is known that a parameter of IB-HPS called the universality-ratio translates into the leakage-ratio of the resulting IBE scheme in the BRM. We construct an IB-HPS with universality-ratio $1-\epsilon$ for any constant $\epsilon>0$ based on any inner-product predicate encryption (IPE) scheme with compact secret keys. Such IPE schemes exist under the $d$-linear, subgroup decision, learning with errors, or computational bilinear Diffie-Hellman assumptions. As a result, we obtain IBE schemes in the BRM with leakage-ratio $1-\epsilon$ under any of these assumptions. Our PKE schemes are immediately obtained from our IBE schemes.

Toha Key Hardened Function

TOHA is a key hardening function designed in the general spirit of sequential memory-hard functions and based on a secure cryptographic hash function. The idea behind its design is to make generic attacks harder and more costly for an attacker. TOHA can be used to derive keys from a master password or to generate 256-bit keys for use in other algorithm schemes. The general approach is to use a password and a salt, as in a standard scheme, plus other parameters. The salt can be thought of as an index into a large set of keys derived from the same password, and it does not need to be kept secret.
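The TOHA specification itself is not reproduced in this abstract. As a generic illustration of the "sequential memory-hard" idea it appeals to, here is a toy ROMix-style sketch of our own; it is NOT the TOHA algorithm, and all names and parameters are ours:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def toy_memory_hard_kdf(password: bytes, salt: bytes, n: int = 1024) -> bytes:
    """Toy sequential memory-hard KDF (ROMix-style sketch, not TOHA):
    fill a table sequentially, then walk it in a data-dependent order so
    an attacker must either keep the whole table in memory or recompute
    long hash chains for each lookup."""
    x = _h(password + salt)
    table = []
    for _ in range(n):              # sequential fill: table[i+1] depends on table[i]
        table.append(x)
        x = _h(x)
    for _ in range(n):              # data-dependent mixing pass
        j = int.from_bytes(x[:4], "big") % n
        x = _h(bytes(a ^ b for a, b in zip(x, table[j])))
    return x                        # 256-bit derived key
```

The data-dependent indexing in the second loop is what forces the memory/time trade-off; a plain iterated hash without the table would be cheap to attack on parallel hardware.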

A Generic Attack on Lattice-based Schemes using Decryption Errors with Application to ss-ntru-pke

Hard learning problems are central topics in recent cryptographic research. Many cryptographic primitives relate their security to difficult problems in lattices, such as the shortest vector problem. Such schemes include the possibility of decryption errors with some very small probability. In this paper we propose and discuss a generic attack for secret key recovery based on generating decryption errors. In a standard PKC setting, the model first consists of a precomputation phase where
special messages and their corresponding error vectors are generated. Secondly, the messages are submitted for decryption and some decryption errors are observed. Finally, a phase with a statistical analysis of the messages/errors causing the decryption errors reveals the secret key. The idea is that conditioned on certain secret keys, the decryption error probability is significantly higher than the average case used in the error probability estimation. The attack is demonstrated in detail on one
NIST Post-Quantum proposal, ss-ntru-pke, which is attacked with complexity below the claimed security level.

Hunting and Gathering - Verifiable Random Functions from Standard Assumptions with Short Proofs

A verifiable random function (VRF) is a pseudorandom function, where outputs can be publicly verified. That is, given an output value together with a proof, one can check that the function was indeed correctly evaluated on the corresponding input. At the same time, the output of the function is computationally indistinguishable from random for all non-queried inputs.
We present the first construction of a VRF which meets the following properties at once: It supports an exponential-sized input space, it achieves full adaptive security based on a non-interactive constant-size assumption and its proofs consist of only a logarithmic number of group elements for inputs of arbitrary polynomial length.
Our construction can be instantiated in symmetric bilinear groups with security based on the decision linear assumption.
We build on the work of Hofheinz and Jager (TCC 2016), who were the first to construct a verifiable random function with security based on a non-interactive constant-size assumption. Basically, their VRF is a matrix product in the exponent, where each matrix is chosen according to one bit of the input. In order to allow verification given a symmetric bilinear map, a proof consists of all intermediary results. This entails a proof size of $\Omega(L)$ group elements, where $L$ is the bit-length of the input.
Our key technique, which we call hunting and gathering, allows us to break this barrier by rearranging the function, which, combined with the partitioning techniques of Bitansky (TCC 2017), results in a proof size of $\ell$ group elements for arbitrary $\ell \in \omega(1)$.

Message Authentication (MAC) Algorithm For The VMPC-R (RC4-like) Stream Cipher

We propose an authenticated encryption scheme for the VMPC-R stream cipher. VMPC-R is an RC4-like algorithm proposed in 2013. It was created in a challenge to find a bias-free cipher within the RC4 design scope and to the best of our knowledge no security weakness in it has been published to date. The contribution of this paper is an algorithm to compute Message Authentication Codes (MACs) along with VMPC-R encryption. We also propose a simple method of transforming the MAC computation algorithm into a hash function.

NTTRU: Truly Fast NTRU Using NTT

We present NTTRU -- an IND-CCA2 secure NTRU-based key encapsulation scheme that uses the number theoretic transform (NTT) over the cyclotomic ring $Z_{7681}[X]/(X^{768}-X^{384}+1)$ and produces public keys and ciphertexts of approximately $1.25$ KB at the $128$-bit security level. The number of cycles on a Skylake CPU of our constant-time AVX2 implementation of the scheme for key generation, encapsulation and decapsulation is approximately $6.4$K, $6.1$K, and $7.9$K, which is more than 30X, 5X, and 8X faster than these respective procedures in the NTRU schemes that were submitted to the NIST post-quantum standardization process. These running times are also, by a large margin, smaller than those for all the other schemes in the NIST process. We also give a simple transformation that allows one to provably deal with small decryption errors in OW-CPA encryption schemes (such as NTRU) when using them to construct an IND-CCA2 key encapsulation.
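The ring arithmetic behind the scheme can be illustrated without the NTT itself. The following sketch (an illustration of ours, not the authors' optimized code) performs schoolbook multiplication in $Z_{7681}[X]/(X^{768}-X^{384}+1)$, using the reduction rule $X^{768} \equiv X^{384} - 1$; NTTRU replaces the schoolbook step with a fast NTT over the same ring:

```python
Q, N, H = 7681, 768, 384  # modulus and ring Z_Q[X] / (X^N - X^H + 1)

def ring_mul(a, b):
    """Schoolbook multiplication in Z_7681[X]/(X^768 - X^384 + 1).
    Coefficient lists a, b have length N, index = degree."""
    c = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[i + j] = (c[i + j] + ai * bj) % Q
    # Fold high degrees down using X^k = X^(k-N) * (X^H - 1) for k >= N.
    for k in range(2 * N - 2, N - 1, -1):
        if c[k]:
            c[k - N + H] = (c[k - N + H] + c[k]) % Q  # + coefficient at X^(k-N+H)
            c[k - N] = (c[k - N] - c[k]) % Q          # - coefficient at X^(k-N)
            c[k] = 0
    return c[:N]
```

For instance, $X^{384}\cdot X^{384} = X^{768}$ reduces to $X^{384}-1$ in this ring, which the fold loop reproduces.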

Fully Invisible Protean Signatures Schemes

Protean Signatures (PS), recently introduced by Krenn et al. (CANS '18), allow a semi-trusted third party, named the sanitizer, to modify a signed message in a controlled way.
The sanitizer can edit signer-chosen parts to arbitrary bitstrings, and can also redact admissible parts, which are likewise chosen by the signer. Thus, PSs generalize both redactable signatures (RSS) and sanitizable signatures (SSS) into a single notion.
However, the current definition of invisibility does not prevent an outsider from deciding which parts of a message are redactable; only the editable parts are hidden. This negatively impacts the privacy guarantees provided by the state-of-the-art definition.
We extend PSs to be fully invisible.
This strengthened notion guarantees that an outsider can neither decide which parts of a message can be edited nor which
parts can be redacted. To achieve our goal, we introduce the new notions of Invisible RSSs and Invisible Non-Accountable SSSs (SSS'), along with a consolidated framework for aggregate signatures.
Using those building blocks, our resulting construction is significantly
more efficient than the original scheme by Krenn et al., which we demonstrate in a prototypical implementation.

Identity-based Broadcast Encryption with Efficient Revocation

Identity-based broadcast encryption (IBBE) is an effective method to protect data security and privacy in multi-receiver scenarios, and it makes broadcast encryption more practical. This paper further expands the study of scalable revocation methodology in the setting of IBBE, where a key authority releases key update material periodically in such a way that only non-revoked users can update their decryption keys. Following the binary tree data structure approach, a concrete instantiation of a revocable IBBE scheme is proposed using asymmetric pairings of prime-order bilinear groups. Moreover, this scheme can withstand decryption key exposure, and it is proven to be semi-adaptively secure under chosen-plaintext attacks in the standard model by reduction to static complexity assumptions. In particular, the proposed scheme is very efficient both in terms of computation costs and communication bandwidth, as the ciphertext size is constant, regardless of the number of recipients. To demonstrate its practicality, it is further implemented in Charm, a framework for rapid prototyping of cryptographic primitives.

Improving Attacks on Round-Reduced Speck32/64 using Deep Learning

This paper has four main contributions. First, we calculate the predicted difference distribution of Speck32/64 with one specific input difference under the Markov assumption completely for up to eight rounds and verify that this yields a globally fairly good model of the difference distribution of Speck32/64. Secondly, we show that contrary to conventional wisdom, machine learning can produce very powerful cryptographic distinguishers: for instance, in a simple low-data, chosen plaintext attack on nine rounds of Speck, we present distinguishers based on deep residual neural networks that achieve a mean key rank roughly five times lower than an analogous classical distinguisher using the full difference distribution table. Thirdly, we develop a highly selective key search policy based on a variant of Bayesian optimization which, together with our neural distinguishers, can be used to reduce the remaining security of 11-round Speck32/64 to roughly 38 bits. This is a significant improvement over previous literature. Lastly, we show that our neural distinguishers successfully use features of the ciphertext pair distribution that are invisible to all purely differential distinguishers even given unlimited data.
While our attack is based on a known input difference taken from the literature, we also show that neural networks can be used to rapidly (within a matter of minutes on our machine) find good input differences without using prior human cryptanalysis.
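Since the attacks above target the Speck32/64 round function, a minimal reference implementation may help fix notation. This is a plain Python sketch of the cipher as specified by its designers (16-bit words, rotation amounts 7 and 2, 22 rounds); it illustrates only the primitive, not the neural distinguishers:

```python
MASK, ALPHA, BETA, ROUNDS = 0xFFFF, 7, 2, 22  # Speck32/64 parameters

def rol(v, r):
    return ((v << r) | (v >> (16 - r))) & MASK

def ror(v, r):
    return ((v >> r) | (v << (16 - r))) & MASK

def expand_key(key):
    """key = (l2, l1, l0, k0) as 16-bit words, most significant first."""
    l = [key[2], key[1], key[0]]
    ks = [key[3]]
    for i in range(ROUNDS - 1):
        l.append(((ks[i] + ror(l[i], ALPHA)) & MASK) ^ i)
        ks.append(rol(ks[i], BETA) ^ l[-1])
    return ks

def encrypt(block, ks):
    x, y = block
    for k in ks:  # one ARX round per round key
        x = ((ror(x, ALPHA) + y) & MASK) ^ k
        y = rol(y, BETA) ^ x
    return x, y
```

The designers' published test vector (key 1918 1110 0908 0100, plaintext 6574 694c, ciphertext a868 42f2) can be used to check an implementation like this one.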

Non-Zero Inner Product Encryption Schemes from Various Assumptions: LWE, DDH and DCR

In non-zero inner product encryption (NIPE) schemes, ciphertexts and secret keys are associated with vectors, and decryption is possible whenever the inner product of these vectors does not equal zero. So far, much effort has been put into constructing bilinear map-based NIPE schemes, and this has led to many efficient schemes. However, constructions of NIPE schemes without bilinear maps are much less investigated. The only other known NIPE constructions are based on lattices; however, they are all highly inefficient due to the need to convert inner product operations into circuits or branching programs.
To remedy our rather poor understanding of NIPE schemes without bilinear maps, we provide two methods for constructing NIPE schemes: a direct construction from lattices and a generic construction from functional encryption schemes for inner products (LinFE). Our first, direct construction departs significantly from traditional lattice-based constructions, and we rely heavily on new tools concerning Gaussian measures over multi-dimensional lattices to prove security. For our second, generic construction, using recent constructions of LinFE schemes as building blocks, we obtain the first NIPE constructions based on the DDH and DCR assumptions. In particular, we obtain the first NIPE schemes without bilinear maps or lattices.

Using TopGear in Overdrive: A more efficient ZKPoK for SPDZ

The HighGear protocol (Eurocrypt 2018) is the fastest currently known approach to preprocessing for the SPDZ multi-party computation scheme. Its backbone is formed by an ideal-lattice-based somewhat homomorphic encryption scheme and accompanying zero-knowledge proofs. Unfortunately, due to certain characteristics of HighGear, current implementations use far too low security parameters in a number of places, mainly because of memory and bandwidth consumption constraints. In this work we present a new approach to the ZKPoKs introduced in the HighGear work. We rigorously formalize their approach and show how to improve upon it using a different proof strategy. This allows us to increase the security of the underlying protocols, all while maintaining roughly the same performance in terms of memory and bandwidth consumption.

A Formal Treatment of Hardware Wallets

Bitcoin, being the most successful cryptocurrency, has been repeatedly attacked with many users losing their funds. The industry's response to securing the user's assets is to offer tamper-resistant hardware wallets. Although such wallets are considered to be the most secure means for managing an account, no formal attempt has been previously done to identify, model and formally verify their properties. This paper provides the first formal model of the Bitcoin hardware wallet operations. We identify the properties and security parameters of a Bitcoin wallet and formally define them in the Universal Composition (UC) Framework. We present a modular treatment of a hardware wallet ecosystem, by realizing the wallet functionality in a hybrid setting defined by a set of protocols. This approach allows us to capture in detail the wallet's components, their interaction and the potential threats. We deduce the wallet's security by proving that it is secure under common cryptographic assumptions, provided that there is no deviation in the protocol execution. Finally, we define the attacks that are successful under a protocol deviation, and analyze the security of commercially available wallets.

FE for Inner Products and Its Application to Decentralized ABE

In this work, we revisit the primitive of functional encryption (FE) for inner products and show its application to decentralized attribute-based encryption (ABE). Particularly, we derive an FE for inner products that satisfies a stronger notion, and show how to use such an FE to construct decentralized ABE for the class of $\{0,1\}$-LSSS access structures against bounded collusions in the plain model. We formalize the FE notion and show how to achieve such an FE under the LWE or DDH assumption. Therefore, our resulting decentralized ABE can be constructed under the same standard assumptions, improving on the prior construction by Lewko and Waters (Eurocrypt 2011). Finally, we also point out challenges in constructing decentralized ABE for general functions by establishing a relation between such an ABE and witness encryption for general NP statements.

Safety in Numbers: On the Need for Robust Diffie-Hellman Parameter Validation

We consider the problem of constructing Diffie-Hellman (DH) parameters which pass standard approaches to parameter validation but for which the Discrete Logarithm Problem (DLP) is relatively easy to solve. We consider both the finite field setting and the elliptic curve setting.
For finite fields, we show how to construct DH parameters $(p,q,g)$ for the safe prime setting in which $p=2q+1$ is prime, $q$ is relatively smooth but fools random-base Miller-Rabin primality testing with some reasonable probability, and $g$ is of order $q$ mod $p$. The construction involves modifying and combining known methods for obtaining Carmichael numbers. Concretely, we provide an example with 1024-bit $p$ which passes OpenSSL's Diffie-Hellman validation procedure with probability $2^{-24}$ (for versions of OpenSSL prior to 1.1.0i). Here, the largest factor of $q$ has 121 bits, meaning that the DLP can be solved with about $2^{64}$ effort using the Pohlig-Hellman algorithm. We go on to explain how this parameter set can be used to mount offline dictionary attacks against PAKE protocols.
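The reason a smooth $q$ makes the DLP easy is the Pohlig-Hellman decomposition. The following minimal sketch uses toy parameters of our choosing (and brute force in each small subgroup instead of baby-step giant-step), not the paper's 1024-bit construction:

```python
def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    x, m = 0, 1
    for r, n in zip(residues, moduli):
        x += m * (((r - x) * pow(m, -1, n)) % n)
        m *= n
    return x % m

def pohlig_hellman(g, h, p, prime_powers):
    """Solve g^x = h (mod p), where g has order n = product(prime_powers):
    project into each small subgroup, solve the digit there, then
    recombine with the CRT. The cost is dominated by the largest factor
    of n, which is why a smooth group order makes the DLP easy."""
    n = 1
    for q in prime_powers:
        n *= q
    residues = []
    for q in prime_powers:
        gq, hq = pow(g, n // q, p), pow(h, n // q, p)
        residues.append(next(y for y in range(q) if pow(gq, y, p) == hq))
    return crt(residues, prime_powers)

# Toy parameters: p = 211, group order 210 = 2 * 3 * 5 * 7 is smooth.
p, factors = 211, [2, 3, 5, 7]
g = next(a for a in range(2, p)
         if all(pow(a, 210 // q, p) != 1 for q in factors))  # a generator
```

With the paper's parameters the largest factor of $q$ has 121 bits, so each subgroup step costs about $2^{60.5}$ group operations with baby-step giant-step, matching the quoted $2^{64}$ overall effort.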
In the elliptic curve case, we use an algorithm of Broker and Stevenhagen to construct an elliptic curve $E$ over a finite field ${\mathbb{F}}_p$ having a specified number of points $n$. We are able to select $n$ of the form $h\cdot q$ such that $h$ is a small co-factor, $q$ is relatively smooth but fools random-base Miller-Rabin primality testing with some reasonable probability, and $E$ has a point of order $q$. Concretely, we provide example curves at the 128-bit security level with $h=1$, where $q$ passes a single random-base Miller-Rabin primality test with probability $1/4$ and where the elliptic curve DLP can be solved with about $2^{44}$ effort. Alternatively, we can pass the test with probability $1/8$ and solve the elliptic curve DLP with about $2^{35.5}$ effort. These ECDH parameter sets lead to similar attacks on PAKE protocols relying on elliptic curves.
Our work shows the importance of performing proper (EC)DH parameter validation in cryptographic implementations and/or the wisdom of relying on standardised parameter sets of known provenance.
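The notion of a composite number "fooling" random-base Miller-Rabin can be made concrete with the standard single-base test. This is a textbook sketch (not OpenSSL's implementation):

```python
def miller_rabin_passes(n, base):
    """One round of Miller-Rabin: True if n is a strong probable prime
    to the given base. Composites can still pass for unlucky bases,
    which is the effect the adversarial parameters described above
    are engineered to maximize."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = 2^s * d with d odd
        d, s = d // 2, s + 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

# 2047 = 23 * 89 is composite, yet it is the smallest strong pseudoprime
# to base 2: a single unlucky random-base test accepts it.
```

A validation routine that runs only one (or few) random-base rounds therefore accepts a crafted composite $q$ with noticeable probability, exactly the gap this paper exploits.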

Collusion Resistant Broadcast and Trace from Positional Witness Encryption

An emerging trend is for researchers to identify cryptography primitives for which feasibility was first established under obfuscation and then move the realization to a different setting. In this work we explore a new such avenue — to move obfuscation-based cryptography to the assumption of (positional) witness encryption. Our goal is to develop techniques and tools, which we will dub “witness encryption friendly” primitives and use these to develop a methodology for building advanced cryptography from positional witness encryption.
We take a bottom-up approach and pursue our general agenda by attacking the specific problem of building collusion-resistant broadcast systems with tracing from positional witness encryption. We achieve a system where the sizes of the ciphertext, public key and private key are polynomial in the security parameter $\lambda$ and independent of the number of users $N$ in the broadcast system. Currently, systems with such parameters are only known from indistinguishability obfuscation.

Last updated: 2019-01-21

Analysis of Two Countermeasures against the Signal Leakage Attack

In 2017, a practical attack, referred to as the signal leakage
attack, against reconciliation-based RLWE key exchange protocols was
proposed. In particular, this attack can recover a long-term private key
if a key pair is reused.
Directly motivated by this attack, Ding et al. recently proposed two
countermeasures against it. One is the RLWE key exchange protocol with
reusable keys (KERK), which is included in the Ding Key Exchange, a NIST
submission; the idea behind this construction is to use a zero-knowledge
proof. The other is the practical randomized RLWE-based key exchange
(PRKE) (TOC'18), which introduces more randomization.
We find that the two countermeasures above can effectively prevent a
malicious Alice from recovering the private key of Bob when keys are
reused. However, neither countermeasure considers the case where a
malicious Bob tries to recover the private key of Alice. In particular,
a malicious Bob can recover the private key of Alice by carefully
choosing what he sends and observing whether the shared keys match. By
analyzing the complexities of these attacks, we show that they are
practical and effective.
Notice that the key to carrying out these attacks is that the malicious
Bob chooses an RLWE sample with a special structure as his public key.
Therefore, other RLWE-based schemes, including KEMs (or key exchanges)
and PKEs, are also vulnerable to these attacks. In response to these
attacks, we propose a mechanism in which one party can construct a new
"public key" for the other party, and to illustrate the mechanism, we
give an improved KERK.

Last updated: 2019-01-23

Upper Bound on $\lambda_1(\Lambda^{\bot}(\mathbf A))$

It has been shown that, for appropriate parameters, solving the $\mathsf{SIS}$ problem in the average case is at least as hard as approximating certain lattice problems in the worst case
to within polynomial factor $\beta\cdot\widetilde{O}(\sqrt n)$, where typically $\beta=O(\sqrt{n\log n})$ such that random $\mathsf{SIS}$ instances admit a solution. In this work, we show that $\beta=O(1)$, i.e., $\beta$ is essentially upper-bounded by a constant. This directly gives us a polynomial-time exhaustive search algorithm for solving the $\mathsf{SIS}$ problem (resulting in approximating certain worst-case lattice problems to within an $\widetilde{O}(\sqrt n)$ factor). Although the exhaustive search algorithm is rather inefficient for typical settings of parameters, our result indicates that lattice-based cryptography is not secure, at least in an asymptotic sense. Our work is based on an observation of the lower/upper bounds on the smoothing parameter for lattices.

nQUIC: Noise-Based QUIC Packet Protection

We present nQUIC, a variant of QUIC-TLS that uses the Noise protocol framework for its key exchange and as the basis of its packet protection, with no semantic transport changes. nQUIC is designed for deployment in systems and for applications that assert trust in raw public keys rather than PKI-based certificate chains. It uses a fixed key exchange algorithm, trading agility for ease of implementation and verification. nQUIC provides mandatory server and optional client authentication, resistance to Key Compromise Impersonation attacks, and forward and future secrecy of traffic key derivation, which makes it preferable to QUIC-TLS for long-lived QUIC connections in comparable applications. We developed two interoperable prototype implementations written in Go and Rust. Experimental results show that nQUIC finishes its handshake in a comparable amount of time as QUIC-TLS.

Group Signatures with Selective Linkability

Group signatures allow members of a group to anonymously produce signatures on behalf of the group. They are an important building block for privacy-enhancing applications, e.g., enabling user data to be collected in authenticated form while preserving the user's privacy. The linkability between the signatures thereby plays a crucial role in balancing utility and privacy: knowing the correlation of events significantly increases the utility of the data but also severely harms the user's privacy. Therefore group signatures are unlinkable by default, and either support linking or identity escrow through a dedicated central party or offer user-controlled linkability. However, both approaches have significant limitations. The former relies on a fully trusted entity and reveals too much information, and the latter requires exact knowledge of the needed linkability at the moment the signatures are created. Often, however, the exact purpose of the data is not clear at the point of data collection. In fact, data collectors tend to gather large amounts of data at first, but will need linkability only for selected, small subsets of the data. We introduce a new type of group signature that provides flexible and privacy-friendly access to such selective linkability. When created, all signatures are fully unlinkable. Only when strictly needed or desired are the required pieces made linkable, with the help of a central entity. For privacy, this linkability is established in an oblivious and non-transitive manner. We formally define the requirements for this new type of group signature and provide an efficient instantiation that provably satisfies these requirements under discrete-logarithm-based assumptions.

Non-malleable encryption with proofs of plaintext knowledge and applications to voting

Non-malleable asymmetric encryption schemes which prove plaintext knowledge are sufficient for secrecy in some domains. For example, ballot secrecy in voting. In these domains, some applications derive encryption schemes by coupling malleable ciphertexts with proofs of plaintext knowledge, without evidence that the sufficient condition (for secrecy) is satisfied nor an independent security proof (of secrecy). Consequently, it is unknown whether these applications satisfy desirable secrecy properties. In this article, we propose a generic construction for such a coupling and show that our construction produces non-malleable encryption schemes which prove plaintext knowledge. Furthermore, we show how our results can be used to prove ballot secrecy of voting systems. Accordingly, we facilitate the development of applications satisfying their security objectives.

STP Models of Optimal Differential and Linear Trail for S-box Based Ciphers

Automatic tools have played an important role in designing new cryptographic primitives and evaluating the security of ciphers. The Simple Theorem Prover (STP) constraint solver has been used to search for differential/linear trails of ciphers. This paper proposes general STP-based models for searching for differential and linear trails with optimal probability and correlation for S-box-based ciphers. In order to obtain trails with the best probability or correlation for ciphers with arbitrary S-boxes, we give an efficient algorithm to describe the probability or correlation of an S-box. Based on this algorithm we present a search model for optimal differential and linear trails, which is efficient for ciphers with S-boxes whose DDTs/LATs contain entries that are not powers of two. Meanwhile, an STP-based model for single-key impossible differentials that takes the key schedule into account is proposed, which traces the propagation of values from plaintext to ciphertext instead of the propagation of differences. We find that there is no 5-round AES-128 single-key truncated impossible differential, taking the key schedule into account, where the input and output differences each have only one active byte. Finally, our proposed models are applied to search for trails of the bit-wise ciphers GIFT-128, DES, DESL and ICEBERG and the word-wise ciphers ARIA, SM4 and SKINNY-128. As a result, improved results are presented in terms of the number of rounds or probabilities/correlations.

A publicly verifiable quantum signature scheme based on asymmetric quantum cryptography

In 2018, Shi et al. showed that Kaushik et al.'s quantum signature scheme is defective: it suffers from a forgery attack. They further proposed an improvement intended to avoid the attack. However, upon examination we found that their improved quantum signature scheme is deniable, because the verifier can impersonate the signer to sign a message. Afterwards, when a dispute occurs, he can argue that the signature was not signed by him but originated from the signer. To overcome this drawback, in this paper we propose an improvement that makes the scheme publicly verifiable and hence more suitable for application in real life. After cryptanalysis, we confirm that our improvement not only resists the forgery attack but is also undeniable.

Biased Nonce Sense: Lattice Attacks against Weak ECDSA Signatures in Cryptocurrencies

In this paper, we compute hundreds of Bitcoin private keys and dozens of Ethereum, Ripple, SSH, and HTTPS private keys by carrying out cryptanalytic attacks against digital signatures contained in public blockchains and Internet-wide scans. The ECDSA signature algorithm requires the generation of a per-message secret nonce. If this nonce is not generated uniformly at random, an attacker can potentially exploit this bias to compute the long-term signing key. We use a lattice-based algorithm for solving the hidden number problem to efficiently compute private ECDSA keys that were used with biased signature nonces due to multiple apparent implementation vulnerabilities.
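The simplest instance of biased-nonce recovery is full nonce reuse, where the hidden number problem degenerates into linear algebra on two signature equations. A toy sketch using schoolbook DSA over a tiny group (the signature equations, and hence the recovery, are identical for ECDSA; all parameters below are illustrative, not secure, and the paper's actual lattice attack handles much milder biases):

```python
# Toy nonce-reuse key recovery. p, q, g, x, k, h1, h2 are made-up demo values.
p, q, g = 607, 101, 64         # g has order q modulo p
x = 57                         # long-term secret signing key
k = 33                         # per-message secret nonce -- reused twice (the bug)

def sign(h, k):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

h1, h2 = 10, 77                # hashes of two different messages
r, s1 = sign(h1, k)
_, s2 = sign(h2, k)            # same nonce => same r

# From s1 = k^-1 (h1 + x r) and s2 = k^-1 (h2 + x r) mod q:
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
print(k_rec, x_rec)  # 33 57
```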

The BIG Cipher: Design, Security Analysis, and Hardware-Software Optimization Techniques

Secure block cipher design is a complex discipline which combines mathematics, engineering, and computer science. In order to develop cryptographers who are grounded in all three disciplines, it is necessary to undertake synergistic research as early as possible in technical curricula, particularly at the undergraduate university level. In this work, students are presented with a new block cipher, which is designed to offer moderate security while providing engineering and analysis challenges suitable for the senior undergraduate level. The BIG (Block) (Instructional, Generic) cipher is analyzed for vulnerability to linear cryptanalysis. Further, the cipher is implemented using the Nios II microprocessor and two configurations of memory-mapped hardware accelerators, in the Cyclone V FPGA on the Terasic DE1 System-on-chip (SoC). Three distinct implementations are realized: 1) Purely software (optimized for latency), 2) Purely hardware (optimized for area), and 3) A hardware-software codesign (optimized for throughput-to-area ratio). All three implementations are evaluated in terms of latency (encryption and decryption), throughput (Mbps), area (ALMs), and throughput-to-area (TP/A) ratio (Mbps/ALM); all metrics account for a fully functional Nios II, 8 kilobytes of on-chip RAM, Avalon interconnect, benchmark timer, and any hardware accelerators. In terms of security, we demonstrate recovery of a relationship among 12 key bits using as few as 16,000 plaintext/ciphertext pairs in a 6-round reduced round attack and reveal a diffusion rate of only 43.3 percent after 12 rounds. The implementation results show that the hardware-software codesign achieves a 67x speed-up and 37x increase in TP/A ratio over the software implementation, and 5x speed-up and 5x increase in TP/A ratio compared to the hardware implementation.

CryptoNote+

CryptoNote protocol proved to be very popular among cryptocurrency startups. We propose several features to extend the basic protocol. Among them are Hybrid Mining (a different mining scheme preventing a straightforward 51% attack), Slow Emission (an emission curve better suited for real-world adoption), Return Addresses (transaction-specific addresses anonymously linking transactions to their originators), and Tiny Addresses (short numerical addresses that are easy to remember and relay). For brevity, we call these features CryptoNote+.

Decentralizing Inner-Product Functional Encryption

Multi-client functional encryption (MCFE) is a more flexible variant of functional encryption whose functional decryption involves multiple ciphertexts from different parties. Each party holds a different secret key $\mathsf{sk}_i$ and can independently and adaptively be corrupted by the adversary. We present two compilers for MCFE schemes for the inner-product functionality, both of which support encryption labels. Our first compiler transforms any scheme with a special key-derivation property into a decentralized scheme, as defined by Chotard et al. (ASIACRYPT 2018), thus allowing for a simple distributed way of generating functional decryption keys without a trusted party. Our second compiler allows lifting an unnatural restriction present in existing (decentralized) MCFE schemes, which requires the adversary to ask for a ciphertext from each party. We apply our compilers to the works of Abdalla et al. (CRYPTO 2018) and Chotard et al. (ASIACRYPT 2018) to obtain schemes with hitherto unachieved properties. From Abdalla et al., we obtain instantiations of DMCFE schemes in the standard model (from DDH, Paillier, or LWE) but without labels. From Chotard et al., we obtain a DMCFE scheme with labels still in the random oracle model, but without pairings.
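To make the inner-product functionality concrete, here is a minimal sketch of a single-input DDH-style IPFE in the spirit of Abdalla et al.'s centralized scheme (toy group, no labels, not decentralized; the compilers above operate on schemes of roughly this shape, and the group parameters here are made-up and insecure):

```python
# Toy inner-product FE: a key for vector y decrypts only <x, y>, not x.
import random

p, q, g = 467, 233, 4                            # g generates a subgroup of prime order q

n = 3
s = [random.randrange(q) for _ in range(n)]      # msk: one s_i per slot
h = [pow(g, si, p) for si in s]                  # mpk

def encrypt(x):
    r = random.randrange(1, q)
    ct0 = pow(g, r, p)
    cts = [pow(h[i], r, p) * pow(g, x[i], p) % p for i in range(n)]
    return ct0, cts

def keygen(y):
    return sum(s[i] * y[i] for i in range(n)) % q    # functional key for <., y>

def decrypt(ct, y, sk_y, bound=1000):
    ct0, cts = ct
    num = 1
    for i in range(n):
        num = num * pow(cts[i], y[i], p) % p
    val = num * pow(ct0, -sk_y % q, p) % p           # equals g^{<x, y>}
    for m in range(bound):                           # brute-force small dlog
        if pow(g, m, p) == val:
            return m
    raise ValueError("inner product out of range")

x, y = [2, 3, 5], [1, 4, 2]
ct = encrypt(x)
print(decrypt(ct, y, keygen(y)))  # 2*1 + 3*4 + 5*2 = 24
```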

Improving the MILP-based Security Evaluation Algorithm against Differential/Linear Cryptanalysis Using A Divide-and-Conquer Approach

In recent years, Mixed Integer Linear Programming (MILP) has been widely used in cryptanalysis of symmetric-key primitives. For differential and linear cryptanalysis, MILP can be used to solve two kinds of problems: calculation of the minimum number of differentially/linearly active S-boxes, and search for the best differential/linear characteristics. There are already numerous papers published in this area. However, the efficiency is not satisfactory enough for many symmetric-key primitives.
In this paper, we greatly improve the efficiency of the MILP-based search algorithm for both problems. Each of the two problems for an $r$-round cipher can be converted to an MILP model whose feasible region is the set of all possible $r$-round differential/linear characteristics. Generally, high-probability differential/linear characteristics are likely to have a low number of active S-boxes at a certain round. Inspired by the idea of a divide-and-conquer approach, we divide the set of all possible differential/linear characteristics into several smaller subsets, then separately search them. That is to say, the search of the whole set is split into easier searches of smaller subsets, and optimal solutions within the smaller subsets are combined to give the optimal solution within the whole set. In addition, we use several techniques to further improve the efficiency of the search algorithm. As applications, we apply our search algorithm to five lightweight block ciphers: PRESENT, GIFT-64, RECTANGLE, LBLOCK and TWINE. For each cipher, we obtain better results than the best-known ones obtained from the MILP method. For the minimum number of differentially/linearly active S-boxes, we reach 31/31, 16/15, 16/16, 20/20 and 20/20 rounds for the five ciphers respectively. For the best differential/linear characteristics, we reach 18/18, 15/13, 15/14, 16/15 and 15/16 rounds for the five ciphers respectively.

Generic Constructions of Robustly Reusable Fuzzy Extractor

Robustly reusable Fuzzy Extractor (rrFE) considers reusability and robustness simultaneously.
We present two approaches to the generic construction of rrFE. Both approaches make use of a secure sketch and universal hash functions. The first approach also employs a special pseudo-random function (PRF), namely a unique-input key-shift (ui-ks) secure PRF, and the second uses a key-shift secure auxiliary-input authenticated encryption (AIAE) scheme. The ui-ks security of the PRF (resp. key-shift security of the AIAE), together with the homomorphic properties of the secure sketch and universal hash function, guarantees the reusability and robustness of the rrFE. We also give an instantiation of each approach: the first yields the first rrFE from the LWE assumption, while the second yields the first rrFE from the DDH assumption over non-pairing groups.
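The error-tolerance layer both approaches rely on is a secure sketch. A minimal code-offset sketch with a 3x repetition code, shown only for intuition (it omits the universal hash and PRF/AIAE layers, and therefore all of the reusability and robustness machinery the abstract is about):

```python
# Code-offset secure sketch: publish s = w XOR c for a random codeword c;
# a close reading w' recovers w by decoding (w' XOR s) back to c.
# 3x repetition code: corrects one flipped bit per 3-bit block.
import random

def encode(bits):                         # 3x repetition encode
    return [b for b in bits for _ in range(3)]

def decode(bits):                         # majority decode per 3-bit block
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def sketch(w):
    k = [random.randrange(2) for _ in range(len(w) // 3)]
    c = encode(k)                         # random codeword
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w2, s):
    c2 = [wi ^ si for wi, si in zip(w2, s)]      # = c XOR (w XOR w')
    c = encode(decode(c2))                       # error-correct back to c
    return [ci ^ si for ci, si in zip(c, s)]     # = w if w' is close to w

random.seed(0)
w = [random.randrange(2) for _ in range(12)]     # enrolled noisy reading
s = sketch(w)
w2 = list(w); w2[5] ^= 1                         # re-reading with one bit flipped
print(recover(w2, s) == w)  # True
```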

CHURP: Dynamic-Committee Proactive Secret Sharing

We introduce CHURP (CHUrn-Robust Proactive secret sharing). CHURP enables secure secret-sharing in dynamic settings, where the committee of nodes storing a secret changes over time. Designed for blockchains, CHURP has lower communication complexity than previous schemes: $O(n)$ on-chain and $O(n^2)$ off-chain in the optimistic case of no node failures.
CHURP includes several technical innovations: An efficient new proactivization scheme of independent interest, a technique (using asymmetric bivariate polynomials) for efficiently changing secret-sharing thresholds, and a hedge against setup failures in an efficient polynomial commitment scheme. We also introduce a general new technique for inexpensive off-chain communication across the peer-to-peer networks of permissionless blockchains.
We formally prove the security of CHURP, report on an implementation, and present performance measurements.
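The core proactivization idea can be illustrated with plain Shamir sharing: adding a fresh sharing of zero re-randomizes every share while preserving the secret, so old (possibly leaked) shares become useless. This sketch uses a static committee and a single dealer for the zero-sharing, unlike CHURP's dynamic, dealerless protocol:

```python
# Shamir sharing plus share refresh via a sharing of zero.
import random

P = 2**61 - 1                      # prime field
t, n = 2, 5                        # degree-t polynomial, n parties

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):           # Lagrange interpolation at 0
    acc = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc

secret = 123456789
shares = share(secret)
zero_shares = share(0)                                   # fresh sharing of 0
new_shares = [(a + b) % P for a, b in zip(shares, zero_shares)]

old = [(i + 1, shares[i]) for i in range(t + 1)]
new = [(i + 1, new_shares[i]) for i in range(t + 1)]
print(reconstruct(old) == reconstruct(new) == secret)  # True
```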

Fast Message Franking: From Invisible Salamanders to Encryptment

Message franking enables cryptographically verifiable reporting of abusive content in end-to-end encrypted messaging. Grubbs, Lu, and Ristenpart recently formalized the needed underlying primitive, what they call compactly committing authenticated encryption (AE), and analyzed the security of a number of approaches. But all known secure schemes are still slow compared to the fastest standard AE schemes. For this reason Facebook Messenger uses AES-GCM for franking of attachments such as images or videos.
We show how to break Facebook's attachment franking scheme: a malicious user can send an objectionable image to a recipient but that recipient cannot report it as abuse. The core problem stems from use of fast but non-committing AE, and so we build the fastest compactly committing AE schemes to date. To do so we introduce a new primitive, called encryptment, which captures the essential properties needed. We prove that, unfortunately, schemes with a performance profile similar to AES-GCM won't work. Instead, we show how to efficiently transform Merkle-Damgård-style hash functions into secure encryptments, and how to efficiently build compactly committing AE from encryptment. Ultimately our main construction allows franking using just a single computation of SHA-256 or SHA-3. Encryptment proves useful for a variety of other applications, such as remotely keyed AE and concealments, and our results imply the first single-pass schemes in these settings as well.

More Efficient Algorithms for the NTRU Key Generation using the Field Norm

NTRU lattices are a class of polynomial rings which allow for compact and efficient representations of the lattice basis, thereby offering very good performance characteristics for the asymmetric algorithms that use them. Signature algorithms based on NTRU lattices have fast signature generation and verification, and relatively small signatures, public keys and private keys.
A few lattice-based cryptographic schemes entail, generally during the key generation, solving the NTRU equation:
$$ f G - g F = q \mod x^n + 1 $$
Here $f$ and $g$ are fixed, the goal is to compute solutions $F$ and $G$ to the equation, and all the polynomials are in $\mathbb{Z}[x]/(x^n + 1)$. The existing methods for solving this equation are quite cumbersome: their time and space complexities are at least cubic and quadratic in the dimension $n$, and for typical parameters they therefore require several megabytes of RAM and take more than a second on a typical laptop, precluding onboard key generation in embedded systems such as smart cards.
In this work, we present two new algorithms for solving the NTRU equation. Both algorithms make a repeated use of the field norm in tower of fields; it allows them to be faster and more compact than existing algorithms by factors $\tilde O(n)$. For lattice-based schemes considered in practice, this reduces both the computation time and RAM usage by factors at least 100, making key pair generation within range of smart card abilities.
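The field norm map at the heart of both algorithms sends $f \in \mathbb{Z}[x]/(x^n+1)$ to $N(f) \in \mathbb{Z}[x]/(x^{n/2}+1)$ via $N(f)(x^2) = f(x)\cdot f(-x)$, halving the ring degree at each level of the tower. A small sketch of just this map, not the full NTRU-equation solver:

```python
# Field norm in the power-of-two cyclotomic tower. Polynomials are lists of
# integer coefficients, lowest degree first; the ring is Z[x]/(x^n + 1).

def mul_mod(a, b):                 # multiply in Z[x]/(x^n + 1), n = len(a)
    n = len(a)
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] += ai * bj
            else:
                res[k - n] -= ai * bj   # reduce using x^n = -1
    return res

def field_norm(f):                 # N(f) lives in Z[x]/(x^{n/2} + 1)
    n = len(f)
    f_neg = [(-1) ** i * c for i, c in enumerate(f)]  # f(-x)
    prod = mul_mod(f, f_neg)       # = N(f)(x^2); odd coefficients cancel
    assert all(c == 0 for c in prod[1::2])
    return prod[0::2]

print(field_norm([1, 2, 0, 1]))  # N(1 + 2x + x^3) = 5 - 3x, i.e. [5, -3]
```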

BlAnC: Blockchain-based Anonymous and Decentralized Credit Networks

Distributed credit networks, such as Ripple and Stellar, are becoming popular as an alternative means for financial transactions. However, the current designs do not preserve user privacy or are not truly decentralized. In this paper, we explore the creation of a distributed credit network that preserves user and transaction privacy and unlinkability.
We propose BlAnC, a novel, fully decentralized blockchain-based credit network where credit transfer between a sender-receiver pair happens on demand. In BlAnC, multiple concurrent transactions can occur seamlessly, and malicious network actors that do not follow the protocols and/or disrupt operations can be identified efficiently. We perform security analysis of our proposed protocols in the universal composability framework to demonstrate its strength, and discuss how our network handles operational dynamics. We also present preliminary experiments and scalability analyses.

The Science of Guessing in Collision Optimized Divide-and-Conquer Attacks

Recovering keys ranked in very deep candidate space efficiently is a very important but challenging issue in Side-Channel Attacks (SCAs). State-of-the-art Collision Optimized Divide-and-Conquer Attacks (CODCAs) extract collision information from a collision attack to optimize the key recovery of a divide-and-conquer attack, and transform the very huge guessing space to a much smaller collision space. However, the inefficient collision detection makes them time-consuming. The very limited collisions exploited and large performance difference between the collision attack and the divide-and-conquer attack in CODCAs also prevent their application in much larger spaces. In this paper, we propose a Minkowski Distance enhanced Collision Attack (MDCA) with performance closer to Template Attack (TA) compared to traditional Correlation-Enhanced Collision Attack (CECA), thus making the optimization more practical and meaningful. Next, we build a more advanced CODCA named Full-Collision Chain (FCC) from TA and MDCA to exploit all collisions. Moreover, to minimize the thresholds while guaranteeing a high success probability of key recovery, we propose a fault-tolerant scheme to optimize FCC. The full-key is divided into several big ``blocks'', on which a Fault-Tolerant Vector (FTV) is exploited to flexibly adjust its chain space. Finally, guessing theory is exploited to optimize thresholds determination and search orders of sub-keys. Experimental results show that FCC notably outperforms the existing CODCAs.

A Proof of the Beierle-Kranz-Leander’s Conjecture related to Lightweight Multiplication in $F_{2^n}$

Lightweight cryptography is an important tool for building strong security solutions for pervasive devices with limited resources. Due to the stringent cost constraints inherent in extremely large-scale deployments, the efficient implementation of cryptographic hardware and software algorithms is of utmost importance to realize the vision of pervasive computing.
At CRYPTO 2016, Beierle, Kranz and Leander considered lightweight multiplication in ${F}_{2^n}$. Specifically, they considered the fundamental question of optimizing finite field multiplication by a fixed element and investigated which field representation, that is, which choice of basis, allows for an optimal implementation. They left open a conjecture related to an XOR-count of two. Using the theory of linear algebra, we prove in the present paper that their conjecture is correct. Consequently, this proved conjecture can be used as a reference for further developing and implementing cryptographic algorithms on lightweight devices.

Learning to Reconstruct: Statistical Learning Theory and Encrypted Database Attacks

We show that the problem of reconstructing encrypted databases from access pattern leakage is closely related to statistical learning theory. This new viewpoint enables us to develop broader attacks that are supported by streamlined performance analyses.
As an introduction to this viewpoint, we first present a general reduction from reconstruction with known queries to PAC learning. Then, we directly address the problem of $\epsilon$-approximate database reconstruction ($\epsilon$-ADR) from range query leakage, giving attacks whose query cost scales only with the relative error $\epsilon$, and is independent of the size of the database, or the number $N$ of possible values of data items. This already goes significantly beyond the state of the art for such attacks, as represented by Kellaris et al. (ACM CCS 2016) and Lacharité et al. (IEEE S&P 2018).
We also study the new problem of $\epsilon$-approximate order reconstruction ($\epsilon$-AOR), where the adversary is tasked with reconstructing the order of records, except for records whose values are approximately equal. We show that as few as ${\mathcal{O}}(\epsilon^{-1} \log \epsilon^{-1})$ uniformly random range queries suffice. Our analysis relies on an application of learning theory to PQ-trees, special data structures tuned to compactly record certain ordering constraints.
We then show that when an auxiliary distribution is available, $\epsilon$-AOR can be enhanced to achieve $\epsilon$-ADR; using real data, we show that devastatingly small numbers of queries are needed to attain very accurate database reconstruction.
Finally, we generalize from ranges to consider what learning theory tells us about the impact of access pattern leakage for other classes of queries, focusing on prefix and suffix queries. We illustrate this with both concrete attacks for prefix queries and with a general lower bound for all query classes.
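For intuition on why range-query access patterns leak values at all (in the counting-attack spirit of Kellaris et al., without this paper's learning-theoretic machinery): a value $v \in \{1,\dots,N\}$ lies in exactly $v(N+1-v)$ of the $N(N+1)/2$ possible ranges, so a record's matching frequency reveals its value up to the reflection $v \leftrightarrow N+1-v$. A toy simulation with made-up record values:

```python
# Counting attack from range-query access-pattern leakage (toy).
import random

N = 20
values = {"r1": 3, "r2": 11, "r3": 17}        # hidden record values

random.seed(1)
ranges = [(a, b) for a in range(1, N + 1) for b in range(a, N + 1)]
Q = 200000
counts = {rid: 0 for rid in values}
for _ in range(Q):
    a, b = random.choice(ranges)              # a uniformly random range query
    for rid, v in values.items():             # leaked: which records matched
        if a <= v <= b:
            counts[rid] += 1

def estimate(c):
    # v lies in v*(N+1-v) of the len(ranges) ranges; invert that frequency,
    # reporting the smaller of the two reflection-symmetric candidates
    best = min(range(1, N + 1),
               key=lambda v: abs(v * (N + 1 - v) / len(ranges) - c / Q))
    return min(best, N + 1 - best)

print({rid: estimate(c) for rid, c in counts.items()})
```

Each estimate lands on the true value or its reflection (e.g. 17 is reported as 4 = 21 - 17), which is exactly the symmetry the order-reconstruction results above have to live with.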

Survey for Performance & Security Problems of Passive Side-channel Attacks Countermeasures in ECC

The main objective of the Internet of Things is to interconnect everything around us to obtain information which was unavailable to us before, thus enabling us to make better decisions. This interconnection of things involves security issues for any Internet of Things key technology. Here we focus on elliptic curve cryptography (ECC) for embedded devices, which offers a high degree of security compared to other encryption mechanisms. However, ECC also has security issues, such as Side-Channel Attacks (SCA), which are a growing threat in the implementation of cryptographic devices. This paper analyzes the state of the art of several proposed algorithmic countermeasures that prevent passive SCA on ECC defined over prime fields. This work evaluates the trade-offs between security and the performance of side-channel attack countermeasures for scalar multiplication algorithms without pre-computation, i.e. for a variable base point.
Although a number of results are required to survey the state of the art of side-channel attacks on elliptic curve cryptosystems, the interest of this work is to present explicit solutions that may be used for the future implementation of security mechanisms suitable for embedded devices applied to the Internet of Things. In addition, security problems of the countermeasures themselves are also analyzed.
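One classic countermeasure covered by such surveys is the Montgomery ladder, which performs one addition and one doubling per scalar bit regardless of the bit's value, removing the key-dependent operation pattern of double-and-add. A toy sketch on a textbook curve (affine coordinates and Python branches; a real constant-time implementation would also need branch-free conditional swaps and constant-time field arithmetic):

```python
# Montgomery ladder scalar multiplication on y^2 = x^3 + 2x + 2 over F_17.
P_MOD, A = 17, 2

def ec_add(P, Q):                         # affine point addition; None = infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                       # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return x3, (lam * (x1 - x3) - y1) % P_MOD

def ladder(k, P, bits=8):
    R0, R1 = None, P                      # invariant: R1 = R0 + P
    for i in reversed(range(bits)):       # one add + one double for EVERY bit
        if (k >> i) & 1:
            R0, R1 = ec_add(R0, R1), ec_add(R1, R1)
        else:
            R0, R1 = ec_add(R0, R0), ec_add(R0, R1)
    return R0

print(ladder(2, (5, 1)))  # (6, 3): doubling the generator (5, 1)
```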

On the Asymptotics of Solving the LWE Problem Using Coded-BKW with Sieving

The Learning with Errors problem (LWE) has become a central topic in recent cryptographic research. In this paper, we present a new solving algorithm combining important ideas from previous work on improving the Blum-Kalai-Wasserman (BKW) algorithm and ideas from sieving in lattices. The new algorithm is analyzed and demonstrates an improved asymptotic performance. For the Regev parameters $q=n^2$ and noise level $\sigma = n^{1.5}/(\sqrt{2\pi}\log_{2}^{2}n)$, the asymptotic complexity is $2^{0.893n}$ in the standard setting, improving on the previously best known complexity of roughly $2^{0.930n}$. The newly proposed algorithm also provides asymptotic improvements when a quantum computer is assumed or when the number of samples is limited.

One Fault is All it Needs: Breaking Higher-Order Masking with Persistent Fault Analysis

Persistent fault analysis (PFA) was proposed at CHES 2018 as a novel fault analysis technique. It was shown to completely defeat standard redundancy-based countermeasures against fault analysis. In this work, we investigate the security of masking schemes against PFA. We show that with only one fault injection, masking countermeasures can be broken at any masking order. The study is performed on publicly available implementations of masking.

Tight Security Bounds for Generic Stream Cipher Constructions

The design of modern stream ciphers is strongly influenced by the fact that Time-Memory-Data tradeoff attacks (TMD-TO attacks) reduce their effective key length to $\mathit{SL}/2$, where $\mathit{SL}$ denotes the inner state length. The classical solution, employed, e.g., by eSTREAM portfolio members Trivium and Grain v1, is to design the cipher in accordance with the Large-State-Small-Key construction, which implies that $\mathit{SL}$ is at least twice as large as the session key length $\mathit{KL}$.
In the last years, a new line of research looking for alternative stream cipher constructions guaranteeing a higher TMD-TO resistance with smaller inner state lengths has emerged. So far, this has led to three generic constructions: the LIZARD construction, having a provable TMD-TO resistance of $2\cdot \mathit{SL}/3$; the Continuous-Key-Use construction, underlying the stream cipher proposals Sprout, Plantlet, and Fruit; and the Continuous-IV-Use construction, very recently proposed by Hamann, Krause, and Meier. Meanwhile, it could be shown that the Continuous-Key-Use construction is vulnerable against certain nontrivial distinguishing attacks.
In this paper, we present a formal framework for proving security lower bounds on the resistance of generic stream cipher constructions against TMD-TO attacks and analyze two of the constructions mentioned above. First, we derive a tight security lower bound of approximately $\min\{\mathit{KL},\mathit{SL}/2\}$ on the resistance of the Large-State-Small-Key construction. This shows that the feature $\mathit{KL}\le \mathit{SL}/2$ does not open the door for new nontrivial TMD-TO attacks against Trivium and Grain v1 which are more dangerous than the known ones. Second, we prove a maximal security bound on the TMD-TO resistance of the Continuous-IV-Use construction, which shows that designing concrete instantiations of ultra-lightweight Continuous-IV-Use stream ciphers is a hopeful direction of future research.
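The $\mathit{SL}/2$ baseline comes from generic birthday-style TMD-TO attacks (Babbage-Golić): precompute roughly $2^{\mathit{SL}/2}$ state-to-output-prefix pairs, observe roughly $2^{\mathit{SL}/2}$ keystream windows, and a collision reveals an inner state. A toy simulation, using an arbitrary made-up generator rather than a real cipher:

```python
# Birthday-style TMD-TO state recovery on a toy keystream generator.
import random

SL = 20                                   # toy inner state length in bits
W = SL + 8                                # window length used for matching
MASK = (1 << SL) - 1

def step(s):                              # toy full-period state update (an LCG)
    return (s * 0x9E3779B1 + 1) & MASK

def keystream_bits(s, m):                 # m output bits (MSB first) from state s
    out = 0
    for _ in range(m):
        s = step(s)
        out = out << 1 | (s >> (SL - 1))  # output the state's top bit
    return out

random.seed(7)
M = D = 1 << (SL // 2 + 2)                # ~2^{SL/2} memory and data

# offline phase: table mapping W-bit output prefix -> inner state
table = {keystream_bits(s, W): s
         for s in random.sample(range(1 << SL), M)}

# online phase: slide a W-bit window over the observed keystream
secret = random.getrandbits(SL)
stream = keystream_bits(secret, D + W)
recovered = None
for i in range(D):
    window = (stream >> (D - i)) & ((1 << W) - 1)
    if window in table:
        recovered = table[window]         # generator state at some step i
        break

print(recovered is not None)  # True: with M*D ~ 16 * 2^SL pairs, a collision
                              # occurs except with negligible probability
```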

Minimizing Trust in Hardware Wallets with Two Factor Signatures

We introduce the notion of two-factor signatures (2FS), a generalization of a two-out-of-two threshold signature scheme in which one of the parties is a hardware token which can store a high-entropy secret, and the other party is a human who knows a low-entropy password. The security (unforgeability) property of 2FS requires that an external adversary corrupting either party (the token or the computer the human is using) cannot forge a signature.
This primitive is useful in contexts like hardware cryptocurrency wallets in which a signature conveys the authorization of a transaction. By the above security property, a hardware wallet implementing a two-factor signature scheme is secure against attacks mounted by a malicious hardware vendor; in contrast, all currently used wallet systems break under such an attack (and as such are not secure under our definition).
We construct efficient provably-secure 2FS schemes which produce either Schnorr signatures (assuming the DLOG assumption) or EC-DSA signatures (assuming security of EC-DSA and the CDH assumption) in the Random Oracle Model, and evaluate the performance of our implementations. Our EC-DSA based 2FS scheme can directly replace currently used hardware wallets for Bitcoin and other major cryptocurrencies to enable security against malicious hardware vendors.

ScanSAT: Unlocking Obfuscated Scan Chains

While financially advantageous, outsourcing key steps such as testing to potentially untrusted Outsourced Semiconductor Assembly and Test (OSAT) companies may pose a risk of compromising on-chip assets. Obfuscation of scan chains is a technique that hides the actual scan data from untrusted testers: logic inserted between the scan cells, driven by a secret key, hides the transformation functions between the scan-in stimulus (scan-out response) and the delivered scan pattern (captured response). In this paper, we propose ScanSAT: an attack that transforms a scan-obfuscated circuit into its logic-locked version and applies a variant of the Boolean satisfiability (SAT) based attack, thereby extracting the secret key. Our empirical results demonstrate that ScanSAT can easily break naive scan obfuscation techniques using only three or fewer attack iterations, even for large key sizes and in the presence of scan compression.

On the Bright Side of Darkness: Side-Channel Based Authentication Protocol Against Relay Attacks

Relay attacks are nowadays well known and most designers of secure authentication protocols are aware of them. At present, the main methods to prevent these attacks are based on the so-called distance bounding technique, which consists of measuring the round-trip time of the exchanged authentication messages between the prover and the verifier to estimate an upper bound on the distance between these entities. Based on this bound, the verifier checks whether the prover is sufficiently close by to rule out an unauthorized entity.
Recently, a new work proposed an authentication protocol that, surprisingly, uses side-channel leakage to prevent relay attacks. In this paper, we exhibit some practical and security issues of this protocol and provide a new one that fixes all of them. We then argue the resistance of our proposal against both side-channel and relay attacks under some realistic assumptions. Our experimental results show the efficiency of our protocol in terms of false acceptance and false rejection rates.

Last updated: 2019-04-04

Secure and Effective Logic Locking for Machine Learning Applications

Logic locking has been proposed as a strong protection of intellectual property (IP) against security threats in the IC supply chain especially when the fabrication facility is untrusted. Various techniques have proposed circuit configurations which do not allow the untrusted fab to decipher the true functionality and/or produce usable versions of the chip without having access to the locking key.
These techniques rely on additional locking circuitry which injects incorrect behavior into the digital functionality when the key is incorrect. However, much of this conventional research focuses on locking individual modules (such as adders, ALUs, etc.). While locking these modules is useful, the true test for any locking scheme should consider its impact on the application running on a processor with such modules. A locked module within a processor may or may not have a substantial impact at the application level, thereby allowing the attacker (untrusted foundry or unauthorized user) to still get useful work out of the system despite not having access to the key details.
In this work, we show that even when state-of-the-art locking schemes are used to lock the modules within a processor, a large class of workloads derived from machine learning (ML) applications (which are increasingly becoming the most relevant ones) continue to function correctly. This has huge implications for the effectiveness of current locking techniques. The main reason for this behavior is the inherent error resiliency of such applications.
To counter this threat, we propose a novel secure and effective logic locking scheme, called Strong Anti-SAT (SAS), to lock the entire processor and make sure that the ML applications undergo significant accuracy loss when any wrong key is applied.
We provide two types of SAS, namely SAS-A and SAS-B.
Experiments show that, for both types of SAS, 1) the application-level accuracy loss is significant (for ML applications) given any wrong key, 2) the attacker needs extremely long time to find a correct key, and 3) the hardware overhead is very small.
Lastly, even though our techniques target machine learning type application workloads, the impact on conventional workloads will also be similar. Due to the inherent error resilience of ML, locking ML workloads is a harder problem to tackle.

Leakage-Resilient Group Signature: Definitions and Constructions

Group signature schemes provide group members a way to sign messages without revealing their identities. Anonymity and traceability are two essential properties of a group signature system. However, these two security properties hold based on the assumption that all the signing keys are perfectly secret and leakage-free. On the other hand, on account of the physical imperfection of cryptosystems in practice, malicious attackers can learn a fraction of the secret state (including secret keys and intermediate randomness) of the cryptosystem via side-channel attacks, thereby breaking the security of the whole system.
To address this issue, Ono et al. introduced a new security model of group signatures which captures randomness exposure attacks. They proved that their proposed construction satisfies the security requirements of a group signature scheme. Nevertheless, their scheme is only provably secure against randomness exposure and assumes the secret keys remain leakage-free. In this work, we focus on the security model of leakage-resilient group signatures in the bounded leakage setting and propose three new black-box constructions of leakage-resilient group signatures secure under the proposed security models.

Sanctorum: A lightweight security monitor for secure enclaves

Enclaves have emerged as a particularly compelling primitive to implement trusted execution environments: strongly isolated sensitive user-mode processes in a largely untrusted software environment. While the threat models employed by various enclave systems differ, the high-level guarantees they offer are essentially the same: attestation of an enclave's initial state, as well as a guarantee of enclave integrity and privacy in the presence of an adversary.
This work describes Sanctorum, a small trusted code base (TCB), consisting of a generic enclave-capable system, which is sufficient to implement secure enclaves akin to the primitive offered by Intel's SGX. While enclaves may be implemented via unconditionally trusted hardware and microcode, as is the case in SGX, we employ a smaller TCB principally consisting of authenticated, privileged software, which may be replaced or patched as needed. Sanctorum implements a formally verified specification for generic enclaves on an in-order multiprocessor system meeting baseline security requirements, e.g., the MIT Sanctum processor and the Keystone enclave framework. Sanctorum requires trustworthy hardware including a random number generator, a private cryptographic key pair derived via a secure bootstrapping protocol, and a robust isolation primitive to safeguard sensitive information. Sanctorum's threat model is informed by the threat model of the isolation primitive, and is suitable for adding enclaves to a variety of processor systems.
