All papers in 2018 (1249 results)

Last updated:  2019-01-03
Accountable Tracing Signatures from Lattices
San Ling, Khoa Nguyen, Huaxiong Wang, Yanhong Xu
Group signatures allow users of a group to sign messages anonymously in the name of the group, while incorporating a tracing mechanism to revoke anonymity and identify the signer of any message. Since its introduction by Chaum and van Heyst (EUROCRYPT 1991), numerous proposals have been put forward, yielding various improvements on security, efficiency and functionality. However, a drawback of traditional group signatures is that the opening authority is given too much power, i.e., he can indiscriminately revoke anonymity and there is no mechanism to keep him accountable. To overcome this problem, Kohlweiss and Miers (PoPET 2015) introduced the notion of accountable tracing signatures ($\mathsf{ATS}$) - an enhanced group signature variant in which the opening authority is kept accountable for his actions. Kohlweiss and Miers demonstrated a generic construction of $\mathsf{ATS}$ and put forward a concrete instantiation based on number-theoretic assumptions. To the best of our knowledge, no other $\mathsf{ATS}$ scheme has been known, and the problem of instantiating $\mathsf{ATS}$ under post-quantum assumptions, e.g., lattices, remains open to date. In this work, we provide the first lattice-based accountable tracing signature scheme. The scheme satisfies the security requirements suggested by Kohlweiss and Miers, assuming the hardness of the Ring Short Integer Solution ($\mathsf{RSIS}$) and the Ring Learning With Errors ($\mathsf{RLWE}$) problems. At the heart of our construction are a lattice-based key-oblivious encryption scheme and a zero-knowledge argument system allowing one to prove that a given ciphertext is a valid $\mathsf{RLWE}$ encryption under some hidden yet certified key. These technical building blocks may be of independent interest, e.g., they can be useful for the design of other lattice-based privacy-preserving protocols.
Last updated:  2019-01-03
Function Private Predicate Encryption for Low Min-Entropy Predicates
Sikhar Patranabis, Debdeep Mukhopadhyay, Somindu C. Ramanna
In this work, we propose new predicate encryption schemes for zero inner-product encryption (ZIPE) and non-zero inner-product encryption (NIPE) predicates from prime-order bilinear pairings, which are both attribute and function private in the public-key setting. Our ZIPE scheme is adaptively attribute private under the standard Matrix DDH assumption for unbounded collusions. It is additionally computationally function private under a min-entropy variant of the Matrix DDH assumption for predicates sampled from distributions with super-logarithmic min-entropy. Existing (statistically) function private ZIPE schemes due to Boneh et al. [Crypto’13, Asiacrypt’13] necessarily require predicate distributions with significantly larger min-entropy in the public-key setting. Our NIPE scheme is adaptively attribute private under the standard Matrix DDH assumption, albeit for bounded collusions. It is also computationally function private under a min-entropy variant of the Matrix DDH assumption for predicates sampled from distributions with super-logarithmic min-entropy. To the best of our knowledge, existing NIPE schemes from bilinear pairings were neither attribute private nor function private. Our constructions are inspired by the linear FE constructions of Agrawal et al. [Crypto’16] and the simulation secure ZIPE of Wee [TCC’17]. In our ZIPE scheme, we show a novel way of embedding two different hard problem instances in a single secret key - one for unbounded collusion-resistance and the other for function privacy. With respect to NIPE, we introduce new techniques for simultaneously achieving attribute and function privacy. We also show natural generalizations of our ZIPE and NIPE constructions to a wider class of subspace membership, subspace non-membership and hidden-vector encryption predicates.
Last updated:  2019-01-03
Two round multiparty computation via Multi-key fully homomorphic encryption with faster homomorphic evaluations
NingBo Li, TanPing Zhou, XiaoYuan Yang, YiLiang Han, Longfei Liu, WenChao Liu
Multi-key fully homomorphic encryption (MKFHE) allows computations on ciphertexts encrypted by different users (public keys), and the results can be jointly decrypted using the secret keys of all the users involved. NTRU-based schemes are an important candidate for post-quantum cryptography, but NTRU-based MKFHE has the following drawbacks, which make it inefficient in scenarios such as secure multi-party computation (MPC). One is that the relinearization technique used for key switching takes up most of the time of the scheme’s homomorphic evaluation; the other is that each user needs to decrypt in sequence, which makes the decryption process complicated. We propose an efficient leveled MKFHE scheme, which improves the efficiency of homomorphic evaluations, and construct a two-round MPC protocol based on it. Firstly, we construct an efficient single-key FHE scheme with fewer relinearization operations. We greatly reduce the number of relinearization operations in the homomorphic evaluation process by separating the homomorphic multiplication and relinearization techniques. Furthermore, the batching technique and a specialization of the modulus can be applied to our scheme to improve its efficiency. Secondly, the efficient single-key homomorphic encryption scheme proposed in this paper is transformed into a multi-key version following the method of the LTV12 scheme. Finally, we construct a distributed decryption process which can be performed independently by all participating users, reducing the number of interactions between users during decryption. Based on this, a two-round MPC protocol is proposed. Experimental analysis shows that the homomorphic evaluation of the single-key FHE scheme constructed in this paper is 2.4 times faster than DHS16, and that the MKFHE scheme constructed in this paper can be used to implement a two-round MPC protocol effectively, which can be applied to secure MPC between multiple users in a cloud computing environment.
Last updated:  2019-06-20
Fiat-Shamir: From Practice to Theory, Part II (NIZK and Correlation Intractability from Circular-Secure FHE)
Ran Canetti, Alex Lombardi, Daniel Wichs
We construct non-interactive zero-knowledge (NIZK) arguments for $\mathsf{NP}$ from any circular-secure fully homomorphic encryption (FHE) scheme. In particular, we obtain such NIZKs under a circular-secure variant of the learning with errors (LWE) problem while only assuming a standard (poly/negligible) level of security. Our construction can be modified to obtain NIZKs which are either: (1) statistically zero-knowledge arguments in the common random string model or (2) statistically sound proofs in the common reference string model. We obtain our result by constructing a new correlation-intractable hash family [Canetti, Goldreich, and Halevi, JACM~'04] for a large class of relations, which suffices to apply the Fiat-Shamir heuristic to specific 3-message proof systems that we call ``trapdoor $\Sigma$-protocols.'' In particular, assuming circular secure FHE, our hash function $h$ ensures that for any function $f$ of some a-priori bounded circuit size, it is hard to find an input $x$ such that $h(x)=f(x)$. This continues a recent line of works aiming to instantiate the Fiat-Shamir methodology via correlation intractability under progressively weaker and better-understood assumptions. Another consequence of our hash family construction is that, assuming circular-secure FHE, the classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP~'89] is not zero knowledge when repeated in parallel. We also show that, under the plain LWE assumption (without circularity), our hash family is a universal correlation intractable family for general relations, in the following sense: If there exists any hash family of some description size that is correlation-intractable for general (even inefficient) relations, then our specific construction (with a comparable size) is correlation-intractable for general (efficiently verifiable) relations.
Last updated:  2019-02-15
qSCMS: Post-quantum certificate provisioning process for V2X
Paulo S. L. M. Barreto, Jefferson E. Ricardini, Marcos A. Simplicio Jr., Harsh Kupwade Patil
Security and privacy are paramount in the field of intelligent transportation systems (ITS). This motivates many proposals aiming to create a Vehicular Public Key Infrastructure (VPKI) for managing vehicles’ certificates. Among them, the Security Credential Management System (SCMS) is one of the leading contenders for standardization in the US. SCMS provides a wide array of security features, which include (but are not limited to) data authentication, vehicle privacy and revocation of misbehaving vehicles. In addition, the key provisioning process in SCMS is realized via the so-called butterfly key expansion, which issues arbitrarily large batches of pseudonym certificates in response to a single client request. Although promising, this process is based on classical elliptic curve cryptography (ECC), which is known to be susceptible to quantum attacks. Aiming to address this issue, in this work we propose a post-quantum butterfly key expansion process. The proposed protocol relies on lattice-based cryptography, which leads to competitive key, ciphertext and signature sizes. Moreover, it provides low bandwidth utilization when compared with other lattice-based schemes, and, like the original SCMS, addresses the security and functionality requirements of vehicular communication.
Last updated:  2019-01-03
Senopra: Reconciling Data Privacy and Utility via Attested Smart Contract Execution
Dat Le Tien, Frank Eliassen
The abundance of smart devices and sensors has given rise to unprecedented large-scale data collection. While this benefits various data-driven application domains, it raises numerous security and privacy concerns. In particular, recent high-profile data breach incidents demonstrate the security dangers and single points of vulnerability of multiple systems. Moreover, even if the data is properly protected at rest (i.e., during storage), data confidentiality may still be compromised once it is fed as input to computations. In this paper, we introduce Senopra, a privacy-preserving data management framework that leverages a trusted execution environment and a confidentiality-preserving smart contract system to empower data owners with absolute control over their data. More specifically, the data owners can specify fine-grained access policies governing how their captured data is accessed. The access policies are then enforced by a policy agent that operates in an autonomous and confidentiality-preserving manner. To attain scalability and efficiency, Senopra exploits the Key Aggregation Cryptosystem (KAC) for key management, and incorporates an optimisation that significantly improves KAC's key reconstruction cost. Our experimental study shows that Senopra can support privacy-preserving data management at scale with low latency.
Last updated:  2019-01-03
Multi-dimensional Packing for HEAAN for Approximate Matrix Arithmetics
Jung Hee Cheon, Andrey Kim, Donggeon Yhee
HEAAN is a homomorphic encryption (HE) scheme for approximate arithmetic. Its vector packing technique proved its potential in cryptographic applications requiring approximate computations, including data analysis and machine learning. In this paper, we propose MHEAAN - a generalization of HEAAN to the case of a tensor structure of plaintext slots. Our design takes advantage of a property of the HEAAN scheme: the precision loss during evaluation is bounded by the depth of the circuit, and exceeds that of unencrypted approximate arithmetic, such as floating-point operations, by at most one bit. Due to the multi-dimensional structure of plaintext slots along with rotations in various dimensions, MHEAAN is a more natural choice for applications involving matrices and tensors. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix multiplication, matrix transposition, and inversion. As an application, we implement the non-interactive Deep Neural Network (DNN) classification algorithm on encrypted data and an encrypted model. Due to our efficient bootstrapping, the implementation can easily be extended to DNN structures with an arbitrary number of hidden layers.
Last updated:  2020-07-25
Fully Deniable Interactive Encryption
Ran Canetti, Sunoo Park, Oxana Poburinnaya
Deniable encryption (Canetti et al., Crypto 1996) enhances secret communication over public channels, providing the additional guarantee that the secrecy of communication is protected even if the parties are later coerced (or willingly bribed) to expose their entire internal states: plaintexts, keys and randomness. To date, constructions of deniable encryption --- and more generally, interactive deniable communication --- only address restricted cases where only one party is compromised (Sahai and Waters, STOC 2014). The main question --- whether deniable communication is at all possible if both parties are coerced at once --- has remained open. We resolve this question in the affirmative, presenting a communication protocol that is fully deniable under coercion of both parties. Our scheme has three rounds, assumes subexponentially secure indistinguishability obfuscation and one-way functions, and uses a short global reference string that is generated once at system set-up and suffices for an unbounded number of encryptions and decryptions. Of independent interest, we introduce a new notion called off-the-record deniability, which protects parties even when their claimed internal states are inconsistent (a case not covered by prior definitions). Our scheme satisfies both standard deniability and off-the-record deniability.
Last updated:  2020-08-19
BoxDB: Realistic Adversary Model for Distance Bounding
Ioana Boureanu, David Gerault, Pascal Lafourcade
Recently, the worldwide-used EMVCo standard for electronic payments included the “EMV RRP (Europay Mastercard Visa Relay-Resistant Protocol)” protocol. This uses distance bounding to counteract relay attacks in contactless payments. Last year, EMV RRP was widely analysed by symbolic verification methods, with several distance-bounding attacks and fixes proposed. Yet, one version of EMV RRP was found secure by all such formal analyses. Contrary to this, we exhibit an attack on this version of EMV RRP. Moreover, we exhibit similar vulnerabilities on another EMV RRP variant and on 13 distance-bounding protocols. We then propose a secure version of the EMV RRP protocol, called PayPass+, and prove its security in a strong adversary model. Our attacks stem from a new, fine-grained corruption model. We formalise it in a provable-security model, called BoxDB. In BoxDB, we express traditional as well as new DB security properties. All our positive and negative security results are given in this formal model. Also, to fill a gap in computer-aided verification of security, BoxDB is designed specifically to lay the foundations for machine-checked cryptographic proofs for protocols based on distance bounding. The threat model in BoxDB is modular and can be tailored to different applications. Importantly, the corruption model in BoxDB also leads us to show that the strongest threat against DB protocols, namely terrorist frauds, need not be considered in formal DB-security models.
Last updated:  2019-09-12
Structural Nonlinear Invariant Attacks on T-310: Attacking Arbitrary Boolean Functions
Nicolas T. Courtois
Recent papers show how to construct polynomial invariant attacks for block ciphers; however, almost all such results are somewhat weak: the invariants are simple and of low degree, and the Boolean functions tend to be very simple if not degenerate. Is there a better, more realistic attack, with invariants of higher degree, which is likely to work with stronger Boolean functions? In this paper we show that such attacks exist and can be constructed explicitly through, on the one side, the study of the Fundamental Equation of eprint/2018/807, and, on the other side, a study of the space of annihilators of any given Boolean function. The main contribution of this paper is to show that the ``product attack'', where the invariant polynomial is a product of simpler polynomials, is interesting and quite powerful. Our approach is suitable for backdooring a block cipher in the presence of an arbitrarily strong Boolean function not chosen by the attacker. The attack is constructed using exceedingly simple paper-and-pencil maths. We also outline a potential application to the Data Encryption Standard (DES).
Last updated:  2019-12-06
Universally Composable Accumulators
Foteini Baldimtsi, Ran Canetti, Sophia Yakoubov
Accumulators, first introduced by Benaloh and de Mare (Eurocrypt 1993), are compact representations of arbitrarily large sets and can be used to prove claims of membership or non-membership about the underlying set. They are almost exclusively used as building blocks in real-world complex systems, including anonymous credentials, group signatures and, more recently, anonymous cryptocurrencies. Having rigorous security analysis for such systems is crucial for their adoption and safe use in the real world, but it can turn out to be extremely challenging given their complexity. In this work, we provide the first universally composable (UC) treatment of cryptographic accumulators. There are many different types of accumulators: some support additions, some support deletions and some support both; and, orthogonally, some support proofs of membership, some support proofs of non-membership, and some support both. Additionally, some accumulators support public verifiability of set operations, and some do not. Our UC definition covers all of these types of accumulators concisely in a single functionality, and captures the two basic security properties of accumulators: correctness and soundness. We then prove the equivalence of our UC definition to standard accumulator definitions. This implies that existing popular accumulator schemes, such as the RSA accumulator, already meet our UC definition, and that the security proofs of existing systems that leverage such accumulators can be significantly simplified. Finally, we use our UC definition to get simple proofs of security. We build an accumulator in a modular way out of two weaker accumulators (in the style of Baldimtsi et al. (Euro S&P 2017)), and we give a simple proof of its UC security. We also show how to simplify the proofs of security of complex systems such as anonymous credentials. Specifically, we show how to extend an anonymous credential system to support revocation by utilizing our results on UC accumulators.
Last updated:  2019-01-03
Jevil's Encryption Systems
Nadim Kobeissi
Imagine if, given a puzzle, you could encrypt a plaintext to the solution of the puzzle without knowing the solution yourself! The Jevil family of encryption systems is a novel set of real-world encryption systems based on the promising foundation of witness encryption. The first Jevil encryption systems comprise Pentomino-, Sudoku- and Nonogram-based encryption, allowing for the encryption of plaintext such that solving a Pentomino, Sudoku or Nonogram puzzle yields decryption. Jevil encryption systems are shown to be correct, secure and to achieve high performance with modest overhead.
Last updated:  2018-12-31
Proof-of-Stake Sidechains
Peter Gaži, Aggelos Kiayias, Dionysis Zindros
Sidechains have long been heralded as the key enabler of blockchain scalability and interoperability. However, no modeling of the concept or a provably secure construction has so far been attempted. We provide the first formal definition of what a sidechain system is and how assets can be moved between sidechains securely. We put forth a security definition that augments the known transaction ledger properties of persistence and liveness to hold across multiple ledgers and enhance them with a new ``firewall'' security property which safeguards each blockchain from its sidechains, limiting the impact of an otherwise catastrophic sidechain failure. We then provide a sidechain construction that is suitable for proof-of-stake (PoS) sidechain systems. As an exemplary concrete instantiation we present our construction for an epoch-based PoS system consistent with Ouroboros (Crypto~2017), the PoS blockchain protocol used in Cardano which is one of the largest pure PoS systems by market capitalisation, and we also comment how the construction can be adapted for other protocols such as Ouroboros Praos (Eurocrypt~2018), Ouroboros Genesis (CCS~2018), Snow White and Algorand. An important feature of our construction is {\em merged-staking} that prevents ``goldfinger'' attacks against a sidechain that is only carrying a small amount of stake. An important technique for pegging chains that we use in our construction is cross-chain certification which is facilitated by a novel cryptographic primitive we introduce called ad-hoc threshold multisignatures (ATMS) which may be of independent interest. We show how ATMS can be securely instantiated by regular and aggregate digital signatures as well as succinct arguments of knowledge such as STARKs and bulletproofs with varying degrees of storage efficiency.
Last updated:  2018-12-31
Memory-Constrained Implementation of Lattice-based Encryption Scheme on the Standard Java Card Platform
Ye Yuan, Kazuhide Fukushima, Junting Xiao, Shinsaku Kiyomoto, Tsuyoshi Takagi
Memory-constrained devices, including widely used smart cards, require resistance against attacks by quantum computers. Lattice-based encryption schemes possess high efficiency and reliability and can run on small devices with limited storage capacity and computation resources, such as IoT sensor nodes or smart cards. We present the first implementation of a lattice-based encryption scheme on the standard Java Card platform by combining the number theoretic transform and an improved Montgomery modular multiplication. The running time of decryption is nearly optimal (about 7 seconds for the 128-bit security level). We also optimize the discrete Ziggurat algorithm and the Knuth-Yao algorithm to sample from prescribed probability distributions on the Java Card platform. More importantly, we show that polynomial multiplication can be performed efficiently on Java Card even though long integers are not supported, which makes running more lattice-based cryptosystems on smart cards achievable.
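The arithmetic workhorse named in this abstract, Montgomery modular multiplication, is easy to illustrate. Below is a minimal sketch of textbook Montgomery reduction (REDC) in Python; it is not the authors' improved Java Card variant, and the modulus and parameters are illustrative only.

```python
# Minimal sketch of textbook Montgomery reduction (REDC) -- the classical
# primitive behind improved modular multiplication on constrained devices.
# Not the paper's optimized Java Card variant; parameters are illustrative.

def montgomery_setup(n, r_bits):
    """Precompute constants for modulus n and R = 2^r_bits (n odd, n < R)."""
    r = 1 << r_bits
    n_prime = -pow(n, -1, r) % r   # n * n' == -1 (mod R); needs Python 3.8+
    return r, n_prime

def redc(t, n, r_bits, n_prime):
    """Return t * R^{-1} mod n for 0 <= t < n*R, without trial division."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

# Multiply a*b mod n in the Montgomery domain.
n, r_bits = 12289, 14            # a small NTT-friendly prime, for example
r, n_prime = montgomery_setup(n, r_bits)
a, b = 5000, 7777
a_mont = (a * r) % n             # convert into Montgomery form
b_mont = (b * r) % n
prod_mont = redc(a_mont * b_mont, n, r_bits, n_prime)
assert redc(prod_mont, n, r_bits, n_prime) == (a * b) % n
```

The appeal of the technique on constrained targets is visible in `redc`: the reduction uses only shifts, masks, additions and multiplications, with no division by the modulus.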
Last updated:  2019-04-01
Sum-of-Squares Meets Program Obfuscation, Revisited
Boaz Barak, Samuel B. Hopkins, Aayush Jain, Pravesh Kothari, Amit Sahai
We develop attacks on the security of variants of pseudo-random generators computed by quadratic polynomials. In particular we give a general condition for breaking the one-way property of mappings where every output is a quadratic polynomial (over the reals) of the input. As a corollary, we break the degree-2 candidates for security assumptions recently proposed for constructing indistinguishability obfuscation by Ananth, Jain and Sahai (ePrint 2018) and Agrawal (ePrint 2018). We present conjectures that would imply our attacks extend to a wider variety of instances, and in particular offer experimental evidence that they break the assumption of Lin-Matt (ePrint 2018). Our algorithms use semidefinite programming, and in particular, results on low-rank recovery (Recht, Fazel, Parrilo 2007) and matrix completion (Gross 2009).
Last updated:  2018-12-31
Fast Secure Comparison for Medium-Sized Integers and Its Application in Binarized Neural Networks
Mark Abspoel, Niek J. Bouman, Berry Schoenmakers, Niels de Vreede
In 1994, Feige, Kilian, and Naor proposed a simple protocol for secure $3$-way comparison of integers $a$ and $b$ from the range $[0,2]$. Their observation is that for $p=7$, the Legendre symbol $(x | p)$ coincides with the sign of $x$ for $x=a-b\in[-2,2]$, thus reducing secure comparison to secure evaluation of the Legendre symbol. More recently, in 2011, Yu generalized this idea to handle secure comparisons for integers from substantially larger ranges $[0,d]$, essentially by searching for primes for which the Legendre symbol coincides with the sign function on $[-d,d]$. In this paper, we present new comparison protocols based on the Legendre symbol that additionally employ some form of error correction. We relax the prime search by requiring that the Legendre symbol encodes the sign function in a noisy fashion only. Practically, we use the majority vote over a window of $2k+1$ adjacent Legendre symbols, for small positive integers $k$. Our technique significantly increases the comparison range: e.g., for a modulus of $60$ bits, $d$ increases by a factor of $2.9$ (for $k=1$) and $5.4$ (for $k=2$) respectively. We give a practical method to find primes with suitable noisy encodings. We demonstrate the practical relevance of our comparison protocol by applying it in a secure neural network classifier for the MNIST dataset. Concretely, we discuss a secure multiparty computation based on the binarized multi-layer perceptron of Hubara et al., using our comparison for the second and third layers.
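The two encodings described in this abstract are simple enough to demonstrate directly. The sketch below checks the exact Feige-Kilian-Naor observation for $p=7$ and implements a majority vote over $2k+1$ adjacent Legendre symbols; the precise windowing used in the paper may differ (a centered window is assumed here), so this is one plausible reading, not the authors' exact construction.

```python
# Hedged sketch: for p = 7 the Legendre symbol (x|p) equals sign(x) on
# [-2,2] (Feige-Kilian-Naor); the paper relaxes this to a *noisy* sign
# encoding corrected by a majority vote over 2k+1 adjacent symbols.
# Window centering here is an assumption, not taken from the paper.

def legendre(x, p):
    """Legendre symbol (x|p) as an integer in {-1, 0, 1}."""
    s = pow(x % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def noisy_sign(x, p, k):
    """Majority vote over a window of 2k+1 adjacent Legendre symbols."""
    votes = sum(legendre(x + j, p) for j in range(-k, k + 1))
    return (votes > 0) - (votes < 0)

# k = 0 recovers the exact FKN encoding for p = 7 on [-2, 2]:
assert all(legendre(x, 7) == (x > 0) - (x < 0) for x in range(-2, 3))

def encodes_sign_up_to(p, k):
    """Largest d such that noisy_sign matches sign(x) for 0 < |x| <= d."""
    d = 0
    while noisy_sign(d + 1, p, k) == 1 and noisy_sign(-(d + 1), p, k) == -1:
        d += 1
    return d

# Compare how far the encoding matches the sign for k = 0, 1, 2:
p = 1009
print(encodes_sign_up_to(p, 0), encodes_sign_up_to(p, 1), encodes_sign_up_to(p, 2))
```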
Last updated:  2018-12-31
Setup-Free Secure Search on Encrypted Data: Faster and Post-Processing Free
Adi Akavia, Craig Gentry, Shai Halevi, Max Leibovich
We present a novel $\textit{secure search}$ protocol on data and queries encrypted with Fully Homomorphic Encryption (FHE). Our protocol enables organizations (client) to (1) securely upload an unsorted data array $x=(x[1],\ldots,x[n])$ to an untrusted honest-but-curious server, where data may be uploaded over time and from multiple data-sources; and (2) securely issue repeated search queries $q$ for retrieving the first element $(i^*,x[i^*])$ satisfying an agreed matching criterion $i^* = \min\ \left\{ \left.i\in[n] \;\right\vert \mathsf{IsMatch}(x[i],q)=1 \right\}$, as well as fetching the next matching elements with further interaction. For security, the client encrypts the data and queries with FHE prior to uploading, and the server processes the ciphertexts to produce the result ciphertext for the client to decrypt. Our secure search protocol improves over the prior state-of-the-art for secure search on FHE encrypted data (Akavia, Feldman, Shaul (AFS), CCS'2018) in achieving: (1) $\textit{Post-processing free}$ protocol where the server produces a ciphertext for the correct search outcome with overwhelming success probability. This is in contrast to returning a list of candidates for the client to post-process, or suffering from a noticeable error probability, in AFS. Our post-processing freeness enables the server to use secure search as a sub-component in a larger computation without interaction with the client. (2) $\textit{Faster protocol:}$ (a) Client time and communication bandwidth are improved by a $\log^2n/\log\log n$ factor. (b) The server evaluates a polynomial of degree linear in $\log n$ (compared to cubic in AFS), and the overall number of multiplications is improved by up to a $\log n$ factor. (c) Only $\textrm{GF}(2)$ computations are employed (compared to $\textrm{GF}(p)$ for $p \gg 2$ in AFS) to gain both further speedup and compatibility with all current FHE candidates. (3) $\textit{Order of magnitude speedup exhibited by extensive benchmarks}$ we executed on identical hardware for implementations of our protocols versus AFS's. Additionally, like other FHE based solutions, our solution is setup-free: to outsource elements from the client to the server, no additional actions are performed on $x$ except for encrypting it element by element (each element bit by bit) and uploading the resulting ciphertexts to the server.
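For readers unfamiliar with the functionality, here is a plaintext reference for what the protocol computes homomorphically over ciphertexts; `is_match` stands in for the agreed matching criterion, and the function itself is only a semantic sketch, not part of the protocol.

```python
# Plaintext reference semantics of secure search: return the first
# (1-indexed) element matching the query. The actual protocol computes
# this over FHE ciphertexts without the server learning x, q or i*.

def secure_search_semantics(x, q, is_match):
    """Return (i*, x[i*]) for i* = min{ i in [n] : is_match(x[i], q) = 1 }."""
    for i, xi in enumerate(x, start=1):
        if is_match(xi, q):
            return i, xi
    return None  # no matching element

x = [5, 12, 7, 12, 3]
print(secure_search_semantics(x, 12, lambda xi, q: xi == q))  # (2, 12)
```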
Last updated:  2020-09-25
FACCT: FAst, Compact, and Constant-Time Discrete Gaussian Sampler over Integers
Raymond K. Zhao, Ron Steinfeld, Amin Sakzad
The discrete Gaussian sampler is one of the fundamental tools in implementing lattice-based cryptosystems. However, a naive discrete Gaussian sampling implementation suffers from side-channel vulnerabilities, and the existing countermeasures usually introduce significant overhead in either running speed or memory consumption. In this paper, we propose a fast, compact, and constant-time implementation of the binary sampling algorithm, originally introduced in the BLISS signature scheme. Our implementation applies the Rényi divergence and polynomial approximation of transcendental functions as techniques. The efficiency of our scheme is independent of the standard deviation, and we show evidence that our implementations are either faster or more compact than several existing constant-time samplers. In addition, we show the performance of our implementation techniques applied to and integrated with two existing signature schemes: qTesla and Falcon. Separately, convolution theorems are typically used to sample from larger standard deviations by combining samples with much smaller standard deviations. As an additional contribution, we show better parameters for the convolution theorems.
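The convolution idea mentioned in the last two sentences can be sketched numerically. For continuous Gaussians the variance of a linear combination adds exactly; for discrete Gaussians the convolution theorems give the analogous statement under smoothing conditions on the component parameters. The sketch below uses continuous Gaussians and illustrative parameters only; it is not the paper's construction.

```python
# Sketch of the convolution idea: reach a large standard deviation by
# combining two samples of much smaller standard deviation.
# Var(z1 + c*z2) = s1^2 + c^2 * s2^2 holds exactly for continuous
# Gaussians; discrete-Gaussian convolution theorems give the analogue
# under smoothing conditions. Parameters below are illustrative.
import random, math

def combined_sample(s1, s2, c):
    z1 = random.gauss(0, s1)
    z2 = random.gauss(0, s2)
    return z1 + c * z2        # std dev approx sqrt(s1^2 + c^2 * s2^2)

s1, s2, c = 4.0, 4.0, 17
target = math.sqrt(s1**2 + (c * s2)**2)
xs = [combined_sample(s1, s2, c) for _ in range(200_000)]
empirical = math.sqrt(sum(x * x for x in xs) / len(xs))
print(f"target sigma = {target:.2f}, empirical = {empirical:.2f}")
```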
Last updated:  2018-12-31
Key Assignment Scheme with Authenticated Encryption
Suyash Kandele, Souradyuti Paul
The Key Assignment Scheme (KAS) is a well-studied cryptographic primitive used for hierarchical access control (HAC) in a multilevel organisation where the classes of people with higher privileges can access files of those with lower ones. Our first contribution is the formalization of a new cryptographic primitive, namely, KAS-AE that supports the aforementioned HAC solution with an additional authenticated encryption property. Next, we present three efficient KAS-AE schemes that solve the HAC and the associated authenticated encryption problem more efficiently -- both with respect to time and memory -- than the existing solutions that achieve it by executing KAS and AE separately. Our first KAS-AE construction is built by using the cryptographic primitive MLE (EUROCRYPT 2013) as a black box; the other two constructions (which are the most efficient ones) have been derived by cleverly tweaking the hash function FP (Indocrypt 2012) and the authenticated encryption scheme APE (FSE 2014). This high efficiency of our constructions is critically achieved by using two techniques: design of a mechanism for reverse decryption used for reduction of time complexity, and a novel key management scheme for optimizing storage requirements when organizational hierarchy forms an arbitrary access graph (instead of a linear graph). We observe that constructing a highly efficient KAS-AE scheme using primitives other than MLE, FP and APE is a non-trivial task. We leave it as an open problem. Finally, we provide a detailed comparison of all the KAS-AE schemes.
Last updated:  2018-12-31
Certificate Transparency Using Blockchain
D S V Madala, Mahabir Prasad Jhanwar, Anupam Chattopadhyay
The security of web communication via the SSL/TLS protocols relies on safe distributions of public keys associated with web domains in the form of $\mathsf{X.509}$ certificates. Certificate authorities (CAs) are trusted third parties that issue these certificates. However, the CA ecosystem is fragile and prone to compromises. Starting with Google's Certificate Transparency project, a number of research works have recently looked at adding transparency for better CA accountability, effectively through public logs of all certificates issued by certification authorities, to augment the current $\mathsf{X.509}$ certificate validation process into SSL/TLS. In this paper, leveraging recent progress in blockchain technology, we propose a novel system, called $\mathsf{CTB}$, that makes it impossible for a CA to issue a certificate for a domain without obtaining consent from the domain owner. We further make progress to equip $\mathsf{CTB}$ with a certificate revocation mechanism. We implement $\mathsf{CTB}$ using IBM's Hyperledger Fabric blockchain platform. $\mathsf{CTB}$'s smart contract, written in Go, is provided for complete reference.
Last updated:  2018-12-30
Post-quantum verifiable random functions from ring signatures
Endre Abraham
One of the greatest challenges in exchanging seemingly random nonces or data, either over a trusted or an untrusted channel, is the hardness of verifying the correctness of such output. If one of the parties or an eavesdropper can gain a game-theoretic advantage by manipulating this seed, the others cannot efficiently notice modifications nor accuse the oracle in some way. Decentralized applications where an oracle can go unnoticed with biased outputs are highly vulnerable to attacks of this kind, limiting the applicability of these parties even though they can introduce great scalability to such systems. Verifiable random functions [1], presented by Micali, can be viewed as keyed hash functions where the key(s) used are asymmetric. They allow the oracle to prove correctness of a defined pseudorandom function on seed s without actually making it public, thus not compromising the unpredictability of the function. Our contribution here is to provide a variant of this scheme and prove its security against known quantum attacks and quantum oracles.
Last updated:  2018-12-30
Pooled Mining Makes Selfish Mining Tricky
Suhyeon Lee, Seungjoo Kim
Bitcoin, the first successful cryptocurrency, uses the blockchain structure and the PoW mechanism to generate blocks. PoW makes it difficult for an adversary to control the network unless she holds over 50\% of the hashrate of the total network. Another cryptocurrency, Ethereum, also uses this mechanism, and it has not caused problems so far. In PoW research, however, several attack strategies have been studied. In this paper, we study selfish mining in the pooled mining environment and find that pooled mining exposes information about the block the adversary is mining to random miners. Using this leaked information, other miners can exploit the selfish miner. At the same time, the adversary earns less revenue than when mining honestly. Because of the existence of our counter method, an adversary using pooled mining cannot easily perform selfish mining on Bitcoin or on blockchains using PoW.
Last updated:  2018-12-30
On Some Computational Problems in Local Fields
Yingpu Deng, Lixia Luo, Guanju Xiao
Lattices in Euclidean spaces are important research objects in geometric number theory, and they have important applications in many areas, such as cryptology. The shortest vector problem (SVP) and the closest vector problem (CVP) are two famous computational problems about lattices. In this paper, we define so-called p-adic lattices, and consider the p-adic analogues of SVP and CVP in local fields. We find that, in contrast with lattices in Euclidean spaces, the situation is completely different and interesting. We also develop relevant algorithms, indicating that these problems are computable.
Last updated:  2019-02-20
Multi-Party Oblivious RAM based on Function Secret Sharing and Replicated Secret Sharing Arithmetic
Marina Blanton, Chen Yuan
In this work, we study the problem of constructing oblivious RAM for secure multi-party computation to obliviously access memory at private locations during secure computation. We build on the recent two-party Floram construction that uses function secret sharing for a point function and incurs $O(\sqrt N)$ secure computation and $O(N)$ local computation per ORAM access for an $N$-element data set. Our new construction, Top ORAM, is designed for multi-party computation with $n \ge 3$ parties and uses replicated secret sharing. We reduce the secure computation component to $O(\log N)$, which has a notable effect on performance. As a result, when Top ORAM is instantiated with $n=3$ parties, it outperforms all other 2- and 3-party ORAM constructions that we tested for datasets up to a few million elements (at which point the $O(N)$ local work becomes the bottleneck). To be able to accomplish the above, we design a number of secure $n$-party protocols for semi-honest adversaries in the setting with honest majority for replicated secret sharing. They are suitable to be instantiated over any finite ring, which has the advantage of using native hardware arithmetic with rings $\mathbb{Z}_{2^k}$ for some $k$. We also provide conversion procedures between other, more common types of secret sharing and replicated secret sharing to enable integration of Top ORAM with other secure computation frameworks. As an additional contribution of this work, we show how our ORAM techniques can be used to realize private binary search at the cost of only a single ORAM access and $\log N$ comparisons, instead of the conventional $O(\log N)$ ORAM accesses and comparisons. Because of this property, performance of our binary search is significantly faster than binary search using other ORAM schemes for all ranges of values that we tested.
Last updated:  2018-12-30
Efficient Information Theoretic Multi-Party Computation from Oblivious Linear Evaluation
Louis Cianciullo, Hossein Ghodosi
Oblivious linear evaluation (OLE) is a two party protocol that allows a receiver to compute an evaluation of a sender's private, degree $1$ polynomial, without letting the sender learn the evaluation point. OLE is a special case of oblivious polynomial evaluation (OPE) which was first introduced by Naor and Pinkas in 1999. In this article we utilise OLE for the purpose of computing multiplication in multi-party computation (MPC). MPC allows a set of $n$ mutually distrustful parties to privately compute any given function across their private inputs, even if up to $t<n$ of these participants are corrupted and controlled by an external adversary. In terms of efficiency and communication complexity, multiplication in MPC has always been a large bottleneck. The typical method employed by most current protocols has been to utilise Beaver's method, which relies on some precomputed information. In this paper we introduce an OLE-based MPC protocol which also relies on some precomputed information. Our proposed protocol has a more efficient communication complexity than Beaver's protocol by a multiplicative factor of $t$. Furthermore, to compute a share to a multiplication, a participant in our protocol need only communicate with one other participant; unlike Beaver's protocol which requires a participant to contact at least $t$ other participants.
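The OLE functionality defined in this abstract is small enough to state as code. Below is a trusted-third-party sketch of what an OLE protocol computes; an actual protocol realizes the same input/output behavior without any trusted party, and the modulus here is an illustrative choice, not one from the paper.

```python
# Ideal functionality of oblivious linear evaluation (OLE) over Z_p:
# the sender holds a degree-1 polynomial f(X) = a*X + b, the receiver
# holds an evaluation point x; the receiver learns only f(x) and the
# sender learns nothing. A real OLE protocol removes the trusted party
# that this function simulates.

P = 2**61 - 1  # a prime modulus (illustrative choice)

def ole_ideal(a, b, x, p=P):
    """Trusted-party OLE: the receiver's output f(x) = a*x + b mod p."""
    return (a * x + b) % p

# In MPC, such evaluations are used to derive shares of products,
# which is exactly how the paper employs OLE for multiplication.
a, b, x = 123456789, 987654321, 42
print(ole_ideal(a, b, x))
```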
Last updated:  2018-12-30
Boolean Exponent Splitting
Michael Tunstall, Louiza Papachristodoulou, Kostas Papagiannopoulos
A typical countermeasure against side-channel attacks consists of masking intermediate values with a random number. In symmetric cryptographic algorithms, Boolean shares of the secret are typically used, whereas in asymmetric algorithms the secret exponent/scalar is typically masked using algebraic properties. This paper presents a new exponent splitting technique with minimal impact on performance based on Boolean shares. More precisely, it is shown how an exponent can be efficiently split into two shares, where the exponent is the XOR sum of the two shares, typically requiring only an extra register and a few register copies per bit. Our novel exponentiation and scalar multiplication algorithms can be randomized for every execution and combined with other blinding techniques. In this way, both the exponent and the intermediate values can be protected against various types of side-channel attacks. We perform a security evaluation of our algorithms using the mutual information framework and provide proofs that they are secure against first-order side-channel attacks. The side-channel resistance of the proposed algorithms is also practically verified with test vector leakage assessment performed on Xilinx's Zynq zc702 evaluation board.
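The splitting relation used throughout this abstract, that the exponent is the XOR sum of two shares, is easy to verify concretely. The demo below checks correctness of the split representation with a plain square-and-multiply; it deliberately recombines each exponent bit, whereas the paper's algorithms are designed to process the shares without forming the exponent bits in a register and with side-channel protections, so this is only a correctness sketch under that caveat.

```python
# Hedged sketch of Boolean exponent splitting: e = r XOR s with
# s = e XOR r. This demo only checks that exponentiating from the two
# shares reproduces g^e mod n; it recombines e bit by bit and is NOT
# the paper's side-channel-protected algorithm.
import secrets

def split_exponent(e, bits):
    r = secrets.randbits(bits)
    return r, e ^ r                        # e = r XOR s

def exp_from_shares(g, r, s, bits, n):
    """Left-to-right square-and-multiply steered by the share bits."""
    acc = 1
    for i in reversed(range(bits)):
        acc = (acc * acc) % n
        bit = (r >> i & 1) ^ (s >> i & 1)  # e_i, recombined per bit
        if bit:
            acc = (acc * g) % n
    return acc

g, e, n, bits = 5, 0b1011001, 1_000_003, 7
r, s = split_exponent(e, bits)
assert exp_from_shares(g, r, s, bits, n) == pow(g, e, n)
```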
Last updated:  2020-03-08
XMSS and Embedded Systems - XMSS Hardware Accelerators for RISC-V
Wen Wang, Bernhard Jungk, Julian Wälde, Shuwen Deng, Naina Gupta, Jakub Szefer, Ruben Niederhagen
We describe a software-hardware co-design for the hash-based post-quantum signature scheme XMSS on a RISC-V embedded processor. We provide software optimizations for the XMSS reference implementation for SHA-256 parameter sets and several hardware accelerators that allow balancing area usage and performance based on individual needs. By integrating our hardware accelerators into the RISC-V processor, the version with the best time-area product generates a key pair (that can be used to generate 2^10 signatures) in 3.44s, achieving an over 54x speedup in wall-clock time compared to the pure software version. For such a key pair, signature generation takes less than 10 ms and verification takes less than 6 ms, bringing speedups of over 42x and 17x respectively. This shows that embedded systems equipped with scheme-specific hardware accelerators are able to practically use XMSS. We tested and measured the cycle count of our implementation on an Intel Cyclone V SoC FPGA. The integration of our XMSS accelerators into an embedded RISC-V processor shows that it is possible to use hash-based post-quantum signatures for a large variety of embedded applications.
Last updated:  2019-05-10
Further Lower Bounds for Structure-Preserving Signatures in Asymmetric Bilinear Groups
Essam Ghadafi
Structure-Preserving Signatures (SPSs) are a useful tool for the design of modular cryptographic protocols. Recent series of works have shown that by limiting the message space of those schemes to the set of Diffie-Hellman (DH) pairs, it is possible to circumvent the known lower bounds in the Type-3 bilinear group setting thus obtaining the shortest signatures consisting of only 2 elements from the shorter source group. It has been shown that such a variant yields efficiency gains for some cryptographic constructions, including attribute-based signatures and direct anonymous attestation. Only the cases of signing a single DH pair or a DH pair and a vector from $\mathbb{Z}_p$ have been considered. Signing a vector of group elements is required for various applications of SPSs, especially if the aim is to forgo relying on heuristic assumptions. An open question is whether such an improved lower bound also applies to signing a vector of $\ell > 1$ messages. We answer this question negatively for schemes existentially unforgeable under an adaptive chosen-message attack (EUF-CMA) whereas we answer it positively for schemes existentially unforgeable under a random-message attack (EUF-RMA) and those which are existentially unforgeable under a combined chosen-random-message attack (EUF-CMA-RMA). The latter notion is a leeway between the two former notions where it allows the adversary to adaptively choose part of the message to be signed whereas the remaining part of the message is chosen uniformly at random by the signer. Another open question is whether strongly existentially unforgeable under an adaptive chosen-message attack (sEUF-CMA) schemes with 2-element signatures exist. We answer this question negatively, proving it is impossible to construct sEUF-CMA schemes with 2-element signatures even if the signature consists of elements from both source groups. On the other hand, we prove that sEUF-RMA and sEUF-CMA-RMA schemes with 2-element (unilateral) signatures are possible by giving constructions for those notions. Among other things, our findings show a gap between random-message/combined chosen-random-message security and chosen-message security in this setting.
Last updated:  2019-01-03
Error Amplification in Code-based Cryptography
Alexander Nilsson, Thomas Johansson, Paul Stankovski Wagner
Code-based cryptography is one of the main techniques enabling cryptographic primitives in a post-quantum scenario. In particular, the MDPC scheme is a basic scheme from which many other schemes have been derived. These schemes rely on iterative decoding in the decryption process and thus have a certain small probability $p$ of having a decryption (decoding) error. In this paper we show a very fundamental and important property of code-based encryption schemes. Given one initial error pattern that fails to decode, the time needed to generate another message that fails to decode is strictly much less than $1/p$. We show this by developing a method for fast generation of undecodable error patterns (error pattern chaining), which additionally proves that a measure of closeness in ciphertext space can be exploited through its strong linkage to the difficulty of decoding these messages. Furthermore, if side-channel information is also available (time to decode), then the initial error pattern no longer needs to be given since one can be easily generated in this case. These observations are fundamentally important because they show that a, say, $128$-bit encryption scheme is not inherently safe from reaction attacks even if it employs a decoder with a failure rate of $2^{-128}$. In fact, unless explicit protective measures are taken, having a failure rate at all -- of any magnitude -- can pose a security problem because of the error amplification effect of our method. A key-recovery reaction attack was recently shown on the MDPC scheme as well as similar schemes, taking advantage of decoding errors in order to recover the secret key. It was also shown that knowing the number of iterations in the iterative decoding step, which could be received in a timing attack, would also enable and enhance such an attack. In this paper we apply our error pattern chaining method to show how to improve the performance of such reaction attacks in the CPA case. We show that after identifying a single decoding error (or a decoding step taking more time than expected in a timing attack), we can adaptively create new error patterns that have a much higher decoding error probability than for a random error. This leads to a significant improvement of the attack based on decoding errors in the CPA case and it also gives the strongest known attack on MDPC-like schemes, both with and without using side-channel information.
Last updated:  2020-12-01
Implementing Token-Based Obfuscation under (Ring) LWE
Cheng Chen, Nicholas Genise, Daniele Micciancio, Yuriy Polyakov, Kurt Rohloff
Token-based obfuscation (TBO) is an interactive approach to cryptographic program obfuscation that was proposed by Goldwasser et al. (STOC 2013) as a potentially more practical alternative to conventional non-interactive security models, such as Virtual Black Box (VBB) and Indistinguishability Obfuscation. We introduce a query-revealing variant of TBO, and implement in PALISADE several optimized query-revealing TBO constructions based on (Ring) LWE covering a relatively broad spectrum of capabilities: linear functions, conjunctions, and branching programs. Our main focus is the obfuscation of general branching programs, which are asymptotically more efficient and expressive than permutation branching programs traditionally considered in program obfuscation studies. Our work implements read-once branching programs that are significantly more advanced than those implemented by Halevi et al. (ACM CCS 2017), and achieves program evaluation runtimes that are two orders of magnitude smaller. Our implementation introduces many algorithmic and code-level optimizations, as compared to the original theoretical construction proposed by Chen et al. (CRYPTO 2018). These include new trapdoor sampling algorithms for matrices of ring elements, extension of the original LWE construction to Ring LWE (with a hardness proof for non-uniform Ring LWE), asymptotically and practically faster token generation procedure, Residue Number System procedures for fast large integer arithmetic, and others. We also present efficient implementations for TBO of conjunction programs and linear functions, which significantly outperform prior implementations of these obfuscation capabilities, e.g., our conjunction obfuscation implementation is one order of magnitude faster than the VBB implementation by Cousins et al. (IEEE S&P 2018). We also provide an example where linear function TBO is used for classifying an ovarian cancer data set. All implementations done as part of this work are packaged in a TBO toolkit that is made publicly available.
Last updated:  2019-01-11
Using the Cloud to Determine Key Strengths -- Triennial Update
M. Delcourt, T. Kleinjung, A. K. Lenstra, S. Nath, D. Page, N. Smart
We develop a new methodology to assess cryptographic key strength using cloud computing, by calculating the true economic cost of (symmetric- or private-) key retrieval for the most common cryptographic primitives. Although the present paper gives the current year (2018), 2015, 2012 and 2011 costs, more importantly it provides the tools and infrastructure to derive new data points at any time in the future, while allowing for improvements such as new algorithmic approaches. Over time the resulting data points will provide valuable insight into the selection of cryptographic key sizes. For instance, we observe that the past clear cost advantage of total cost of ownership compared to cloud computing seems to be evaporating.
Last updated:  2019-06-29
Tight Reductions for Diffie-Hellman Variants in the Algebraic Group Model
Taiga Mizuide, Atsushi Takayasu, Tsuyoshi Takagi
Fuchsbauer, Kiltz, and Loss (Crypto'18) gave a simple and clean definition of an \emph{algebraic group model (AGM)} that lies in between the standard model and the generic group model (GGM). Specifically, an algebraic adversary is able to exploit group-specific structures as in the standard model, while the AGM successfully provides meaningful hardness results as the GGM does. As an application of the AGM, they show a tight computational equivalence between the computational Diffie-Hellman (CDH) assumption and the discrete logarithm (DL) assumption. For this purpose, they used the square Diffie-Hellman assumption as a bridge, i.e., they first proved the equivalence between the DL assumption and the square Diffie-Hellman assumption, then used the known equivalence between the square Diffie-Hellman assumption and the CDH assumption. In this paper, we provide an alternative proof that directly shows the tight equivalence between the DL assumption and the CDH assumption. The crucial benefit of the direct reduction is that we can easily extend the approach to variants of the CDH assumption, e.g., the bilinear Diffie-Hellman assumption. Indeed, we show several tight computational equivalences and discuss the applicability of our techniques.
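To convey the flavor of a direct DL-to-CDH reduction in the AGM, here is a schematic sketch; the notation ($r, u, \alpha, \beta, \gamma$) is ours and the paper's actual reduction may differ in its details.

```latex
% Schematic direct reduction (our reading, not the paper's exact proof).
% Given a DL instance $A = g^x$, rerandomize it into a CDH instance:
\[
  B_1 = A \cdot g^{r} = g^{x+r}, \qquad
  B_2 = A \cdot g^{u} = g^{x+u}, \qquad
  r, u \leftarrow \mathbb{Z}_p .
\]
% An algebraic CDH adversary returns $Z = g^{(x+r)(x+u)}$ together with a
% representation in the received elements, $Z = g^{\gamma} B_1^{\alpha} B_2^{\beta}$,
% so taking discrete logarithms to base $g$:
\[
  (x+r)(x+u) \;=\; \gamma + \alpha (x+r) + \beta (x+u) \pmod{p},
\]
% a monic quadratic in the unknown $x$ with at most two roots. The
% reduction solves it over $\mathbb{Z}_p$ and tests each candidate
% $\tilde{x}$ against $g^{\tilde{x}} = A$, using the adversary once,
% which is what makes the equivalence tight.
```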
Last updated:  2018-12-30
Cryptanalysis of the Full DES and the Full 3DES Using a New Linear Property
Tomer Ashur, Raluca Posteuca
In this paper we extend the work presented by Ashur and Posteuca in BalkanCryptSec 2018, by designing 0-correlation key-dependent linear trails covering more than one round of DES. First, we design a 2-round 0-correlation key-dependent linear trail which we then connect to Matsui's original trail in order to obtain a linear approximation covering the full DES and 3DES. We show how this approximation can be used for a key recovery attack against both ciphers. To the best of our knowledge, this paper is the first to use this kind of property to attack a symmetric-key algorithm, and our linear attack against 3DES is the first statistical attack against this cipher.
Last updated:  2018-12-30
Exploring Crypto Dark Matter: New Simple PRF Candidates and Their Applications
Dan Boneh, Yuval Ishai, Alain Passelègue, Amit Sahai, David J. Wu
Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. We explore a new space of plausible PRF candidates that are obtained by mixing linear functions over different small moduli. Our candidates are motivated by the goals of maximizing simplicity and minimizing complexity measures that are relevant to cryptographic applications such as secure multiparty computation. We present several concrete new PRF candidates that follow the above approach. Our main candidate is a weak PRF candidate (whose conjectured pseudorandomness only holds for uniformly random inputs) that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. This candidate can be implemented by depth-2 $ACC^0$ circuits. We also put forward a similar depth-3 strong PRF candidate. Finally, we present a different weak PRF candidate that can be viewed as a deterministic variant of ``Learning Parity with Noise'' (LPN) where the noise is obtained via a mod-3 inner product of the input and the key. The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of depth-2 $ACC^0$ circuits and width-3 branching programs, interpolation and property testing for sparse polynomials, and natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the ``piecewise-linear'' structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs). Our advantage over competing approaches is maximized in the setting of MPC with an honest majority, or alternatively, MPC with preprocessing. Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging error-correcting codes.
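The main weak-PRF candidate described above, a secret mod-2 linear map followed by a public mod-3 linear map, fits in a few lines. The toy instance below follows that description; the dimensions and the choice of public matrix are illustrative, not the paper's recommended parameters.

```python
# Toy instance of the paper's main weak-PRF candidate: apply a secret
# mod-2 linear map to the input, then a public mod-3 linear map to the
# result. Dimensions are illustrative only, not recommended parameters.
import secrets

n, m, t = 16, 16, 8                       # input, middle, output dims (toy)

def rand_matrix(rows, cols, q):
    return [[secrets.randbelow(q) for _ in range(cols)] for _ in range(rows)]

A = rand_matrix(m, n, 2)                  # secret key: mod-2 matrix
G = rand_matrix(t, m, 3)                  # public: mod-3 matrix

def matvec(M, x, q):
    return [sum(r * v for r, v in zip(row, x)) % q for row in M]

def weak_prf(x_bits):
    """F_A(x) = G . (A . x mod 2) mod 3, a vector over Z_3."""
    y = matvec(A, x_bits, 2)              # secret mod-2 layer
    return matvec(G, y, 3)                # public mod-3 layer

# Weak PRF: pseudorandomness is conjectured only on uniformly random inputs.
x = [secrets.randbelow(2) for _ in range(n)]
print(weak_prf(x))
```

The mixing of two incompatible moduli is the entire source of conjectured hardness here: each layer alone is linear and trivially learnable.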
Last updated:  2018-12-30
Changing Points in APN Functions
Lilya Budaghyan, Claude Carlet, Tor Helleseth, Nikolay Kaleyski
We investigate the differential properties of a construction in which a given function $F : \mathbb{F}_{2^n} \rightarrow \mathbb{F}_{2^n}$ is modified at $K \in \mathbb{N}$ points in order to obtain a new function $G$. This is motivated by the question of determining the minimum Hamming distance between two APN functions and can be seen as a generalization of a previously studied construction in which a given function is modified at a single point. We derive necessary and sufficient conditions which the derivatives of $F$ must satisfy for $G$ to be APN, and use these conditions as the basis for an efficient filtering procedure for searching for APN functions whose value differs from that of a given APN function $F$ at a given set of points. We define a quantity $m_F$ related to $F$ counting the number of derivatives of a given type, and derive a lower bound on the distance between an APN function $F$ and its closest APN neighbor in terms of $m_F$. Furthermore, the value $m_F$ is shown to be invariant under CCZ-equivalence and easier to compute in the case of quadratic functions. We give a formula for $m_F$ in the case of $F(x) = x^3$ which allows us to express a lower bound on the distance between $F(x)$ and the closest APN function in terms of the dimension $n$ of the underlying field. We observe that this distance tends to infinity with $n$. We also compute $m_F$ and the distance to the closest APN function for a representative $F$ from each of the switching classes over $\mathbb{F}_{2^n}$ for $4 \le n \le 8$. For a given function $F$ and value $v$, we describe an efficient method for finding all sets of points $\{ u_1, u_2, \dots, u_K \}$ such that setting $G(u_i) = F(u_i) + v$ and $G(x) = F(x)$ for $x \ne u_i$ is APN.
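The APN condition on derivatives that this abstract works with can be checked exhaustively in small fields. The sketch below verifies it for the function $F(x) = x^3$ discussed above, over GF(2^4) with the modulus $x^4 + x + 1$ (our choice of field representation; any irreducible polynomial of degree 4 works).

```python
# Small illustration of the APN property: F over GF(2^n) is APN if for
# every nonzero a the derivative D_a F(x) = F(x+a) + F(x) takes each
# value at most twice. Checked here for F(x) = x^3 over GF(2^4) with
# field polynomial x^4 + x + 1 (an illustrative representation choice).
from collections import Counter

N, MOD = 4, 0b10011                    # GF(2^4), modulus x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication reduced modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:                     # reduce when degree reaches N
            a ^= MOD
        b >>= 1
    return r

def F(x):                              # F(x) = x^3
    return gf_mul(x, gf_mul(x, x))

def is_apn(f):
    for a in range(1, 1 << N):         # every nonzero derivative direction
        counts = Counter(f(x ^ a) ^ f(x) for x in range(1 << N))
        if max(counts.values()) > 2:   # some value hit more than twice
            return False
    return True

print(is_apn(F))                       # True: x^3 is APN over GF(2^4)
```

The same loop structure, restricted to the modified points, is essentially what the paper's filtering conditions make unnecessary to run in full.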
Last updated:  2018-12-30
This is Not an Attack on Wave
Thomas Debris-Alazard, Nicolas Sendrier, Jean-Pierre Tillich
Very recently, a preprint ``Cryptanalysis of the Wave Signature Scheme'', eprint 2018/1111, appeared claiming to break Wave ``Wave: A New Code-Based Signature Scheme'', eprint 2018/996. We explain here why this claim is incorrect.
Last updated:  2019-08-12
New Hybrid Method for Isogeny-based Cryptosystems using Edwards Curves
Suhri Kim, Kisoon Yoon, Jihoon Kwon, Young-Ho Park, Seokhie Hong
Along with the resistance against quantum computers, isogeny-based cryptography offers attractive cryptosystems due to small key sizes and compatibility with current elliptic curve primitives. While the state-of-the-art implementation uses Montgomery curves, which facilitate efficient elliptic curve arithmetic and isogeny computations, other forms of elliptic curves can be used to produce an efficient result. In this paper, we present a new hybrid method for isogeny-based cryptosystems using Edwards curves. Unlike previous hybrid methods, we exploit Edwards curves for recovering the curve coefficients and Montgomery curves for the other operations. To this end, we first carefully examine and compare the computational cost of Montgomery and Edwards isogenies. Then, we fine-tune and tailor Edwards isogenies in order to blend them with Montgomery isogenies efficiently. Additionally, we present implementation results of the Supersingular Isogeny Diffie--Hellman (SIDH) key exchange using the proposed method. We demonstrate that our method outperforms the previously proposed hybrid method and is as fast as the Montgomery-only implementation. Our results show that proper use of Edwards curves in isogeny-based cryptosystems can be quite practical.
Last updated:  2018-12-24
Instant Privacy-Preserving Biometric Authentication for Hamming Distance
Joohee Lee, Dongwoo Kim, Duhyeong Kim, Yongsoo Song, Junbum Shin, Jung Hee Cheon
In recent years, there has been enormous research attention on privacy-preserving biometric authentication, which enables a user to verify him or herself to a server without disclosing raw biometric information. Since biometrics are irrevocable once exposed, it is very important to protect their privacy. In IEEE TIFS 2018, Zhou and Ren proposed a privacy-preserving user-centric biometric authentication scheme named PassBio, where end-users encrypt their own templates and the authentication server never sees the raw templates during the authentication phase. In their approach, it takes about 1 second to encrypt and compare 2000-bit templates based on Hamming distance on a laptop. However, this result is still far from practical because the templates used in commercialized products are much larger: according to the NIST IREX IX report of 2018, which analyzed 46 iris recognition algorithms, template sizes vary from 4,632 bits (579 bytes) to 145,832 bits (18,229 bytes). In this paper, we propose a new privacy-preserving user-centric biometric authentication scheme (HDM-PPBA) based on Hamming distance, which shows a big improvement in efficiency over previous works. It is based on our new single-key function-hiding inner product encryption, which encrypts and computes the Hamming distance of 145,832-bit binary templates in about 0.3 seconds on an Intel Core i5 2.9GHz CPU. We show that it satisfies simulation-based security under the hardness assumption of the Learning with Errors (LWE) problem. The storage requirements, bandwidth, and time complexity of HDM-PPBA depend linearly on the bit-length of the biometrics, and it is applicable with high efficiency to any of the large templates in the NIST IREX IX report.
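The arithmetic identity that lets an inner-product scheme evaluate Hamming distance is standard: $\mathsf{HD}(x,y) = \mathsf{wt}(x) + \mathsf{wt}(y) - 2\langle x,y\rangle$ for binary vectors. A minimal Python sketch of this reduction on plaintexts (the inner-product encryption layer of HDM-PPBA is elided; all names are ours):

```python
import secrets

def hamming_from_inner_product(x_bits, y_bits):
    """HD(x, y) = wt(x) + wt(y) - 2*<x, y> for binary vectors: this reduction
    is what lets an inner-product encryption evaluate Hamming distance; the
    encryption layer itself is elided here."""
    ip = sum(a * b for a, b in zip(x_bits, y_bits))
    return sum(x_bits) + sum(y_bits) - 2 * ip

n = 145_832  # the largest template size from the NIST IREX IX report cited above
x = [secrets.randbits(1) for _ in range(n)]
y = [secrets.randbits(1) for _ in range(n)]
assert hamming_from_inner_product(x, y) == sum(a != b for a, b in zip(x, y))
```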
Last updated:  2018-12-23
Deep Learning vs Template Attacks in front of fundamental targets: experimental study
Yevhenii Zotkin, Francis Olivier, Eric Bourbao
This study compares the experimental results of Template Attacks (TA) and of two Deep Learning (DL) techniques, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), on classical use cases often encountered in the side-channel analysis of cryptographic devices (restricted to secret-key (SK) algorithms). The starting point is their comparative effectiveness against masked encryption, which appears intrinsically vulnerable. Surprisingly, TA improved with Principal Component Analysis (PCA) and normalization holds its own against the latest DL methods, which demand more computing power. Another result is that both approaches face great difficulty against static targets such as secret data transfers or the key schedule; the explanation of these observations lies in cross-matching. Beyond masking, the effects of other protections such as jittering, shuffling and coding size are also tested. At the end of the day, the benefit of DL techniques lies in the better resistance of CNN to misalignment.
Last updated:  2019-02-27
Multi-Target Attacks on the Picnic Signature Scheme and Related Protocols
Itai Dinur, Niv Nadler
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST's post-quantum standardization project. Among all submissions to NIST's project, Picnic is one of the most innovative, making use of recent progress in construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to $S$ signatures, produced by a single or by several users, our attack can (information theoretically) recover the $\kappa$-bit signing key of a user in complexity of about $2^{\kappa - 7}/S$. This is faster than Picnic's claimed $2^{\kappa}$ security against classical (non-quantum) attacks by a factor of $2^7 \cdot S$ (as each signature contains about $2^7$ attack targets). Whereas in most multi-target attacks, the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $\kappa = 128$ has time complexity of $T = 2^{77}$ for $S = 2^{64}$. Alternatively, we can reach the information theoretic complexity of $T = 2^{64}$ for $S = 2^{57}$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way that the Picnic signing algorithm uses a pseudo-random generator. The weakness is fixed in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
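As a quick arithmetic check of the trade-offs stated above: with $\kappa = 128$ and $S = 2^{57}$ signatures under one key, the information-theoretic complexity is $T = 2^{\kappa-7}/S = 2^{121-57} = 2^{64}$, matching the claimed speed-up factor of $2^7 \cdot S$ over the nominal $2^{\kappa}$ security.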
Last updated:  2022-03-04
Countering Block Withholding Attack Efficiently
Suhyeon Lee, Seungjoo Kim
Bitcoin, the well-known cryptocurrency, adopted Proof-of-Work (PoW) for its security. The PoW mechanism incentivizes participants and deters attacks on the network, and Bitcoin has so far operated a stable distributed network with it. Researchers have found, however, several vulnerabilities in PoW, such as selfish mining and the block withholding attack. In particular, after Rosenfeld introduced the block withholding attack and Eyal made it practical, many variants and countermeasures have been proposed. Most countermeasures, however, require changes to the mining algorithm to make the attack impossible, which lowers their practical adoptability. In this paper, we propose a countermeasure that prevents the block withholding attack effectively. Mining pools can adopt our method without changing their mining environment.
Last updated:  2019-04-24
MProve: A Proof of Reserves Protocol for Monero Exchanges
Arijit Dutta, Saravanan Vijayakumaran
Theft from cryptocurrency exchanges due to cyberattacks or internal fraud is a major problem. Exchanges can partially alleviate customer concerns by providing periodic proofs of solvency. We describe MProve, a proof of reserves protocol for Monero exchanges which can be combined with a known proof of liabilities protocol to provide a proof of solvency. It is the first protocol for Monero which provides address privacy by allowing an exchange to hide its own addresses within a larger anonymity set. MProve also provides a simple proof of non-collusion between exchanges.
Last updated:  2018-12-19
Teleportation-based quantum homomorphic encryption scheme with quasi-compactness and perfect security
Min Liang
Quantum homomorphic encryption (QHE) is an important cryptographic technology for delegated quantum computation. It enables a remote server to perform quantum computation on encrypted quantum data, and the specific algorithm performed by the server need not be known to the client. Quantum fully homomorphic encryption (QFHE) is a QHE that satisfies both compactness and $\mathcal{F}$-homomorphism, i.e., it is homomorphic for all quantum circuits. However, Yu et al. [Phys. Rev. A 90, 050303(2014)] proved a negative result: assuming interaction is not allowed, it is impossible to construct a perfectly secure QFHE scheme. This article therefore focuses on non-interactive and perfectly secure QHE schemes with a loosened requirement, specifically quasi-compactness. This article defines the encrypted gate, denoted by $EG[U]:|\alpha\rangle\rightarrow\left((a,b),Enc_{a,b}(U|\alpha\rangle)\right)$. We present a gate-teleportation-based two-party computation scheme for $EG[U]$, where one party gives an arbitrary quantum state $|\alpha\rangle$ as input and obtains the encrypted $U$-computing result $Enc_{a,b}(U|\alpha\rangle)$, and the other party obtains the random bits $a,b$. Based on $EG[P^x](x\in\{0,1\})$, we propose a method to remove the $P$-error generated in the homomorphic evaluation of the $T/T^\dagger$-gate. Using this method, we design two non-interactive and perfectly secure QHE schemes named \texttt{GT} and \texttt{VGT}. Both of them are $\mathcal{F}$-homomorphic and quasi-compact (the decryption complexity depends on the $T/T^\dagger$-gate complexity). Assuming that $\mathcal{F}$-homomorphism, non-interaction, and perfect security are necessary properties, the quasi-compactness is proved to be bounded by $O(M)$, where $M$ is the total number of $T/T^\dagger$-gates in the evaluated circuit. \texttt{VGT} is proved to be optimal and has $M$-quasi-compactness. In our QHE schemes, decryption would be inefficient if the evaluated circuit contains an exponential number of $T/T^\dagger$-gates. Thus our schemes are suitable for the homomorphic evaluation of any quantum circuit with low $T/T^\dagger$-gate complexity, such as any polynomial-size quantum circuit or any quantum circuit with a polynomial number of $T/T^\dagger$-gates.
Last updated:  2018-12-19
Revisiting Orthogonal Lattice Attacks on Approximate Common Divisor Problems and their Applications
Jun Xu, Santanu Sarkar, Lei Hu
In this paper, we revisit three existing types of orthogonal lattice (OL) attacks and propose optimized cases to solve approximate common divisor (ACD) problems. In order to reduce both space and time costs, we also construct an improved lattice using the rounding technique. Further, we present asymptotic formulas for the time complexities of our optimizations as well as of the three known OL attacks. Besides, we give specific conditions under which the optimized OL attacks can work, and show how the attack ability depends on the blocksize $\beta$ in the BKZ-$\beta$ algorithm. We then put forward a method to estimate the concrete cost of solving random ACD instances, which can be used in the choice of practical parameters for ACD problems. Finally, we give security estimates of some ACD-based FHE constructions from the literature and also analyze the implicit factorization problem with a sufficient number of samples. In all of the above situations, our optimized OL attack using the rounding technique performs fastest in practice.
Last updated:  2018-12-19
On the Decoding Failure Rate of QC-MDPC Bit-Flipping Decoders
Nicolas Sendrier, Valentin Vasseur
Quasi-cyclic moderate density parity check codes allow the design of McEliece-like public-key encryption schemes with compact keys and a security that provably reduces to hard decoding problems for quasi-cyclic codes. In particular, QC-MDPC codes are among the most promising code-based key encapsulation mechanisms (KEM) proposed to the NIST call for standardization of quantum safe cryptography (two proposals, BIKE and QC-MDPC KEM). The first generation of decoding algorithms suffers from a small, but not negligible, decoding failure rate (DFR in the order of $10^{-7}$ to $10^{-10}$). This allows a key recovery attack presented by Guo, Johansson, and Stankovski (GJS attack) at Asiacrypt 2016 which exploits a small correlation between the faulty message patterns and the secret key of the scheme, and limits the usage of the scheme to KEMs using ephemeral public keys. It does not impact the interactive establishment of secure communications (e.g. TLS), but the use of static public keys for asynchronous applications (e.g. email) is rendered dangerous. Understanding and improving the decoding of QC-MDPC is thus of interest for cryptographic applications. In particular, finding parameters for which the failure rate is provably negligible (typically as low as $2^{-64}$ or $2^{-128}$) would allow static keys and increase the applicability of the mentioned cryptosystems. We study here a simple variant of bit-flipping decoding, which we call step-by-step decoding. It has a higher DFR but its evolution can be modelled by a Markov chain, within the theoretical framework of Julia Chaulet's PhD thesis. We study two other, more efficient, decoders. One is the textbook algorithm. The other is (close to) the BIKE decoder. For all those algorithms we provide simulation results, and, assuming an evolution similar to the step-by-step decoder, we extrapolate the value of the DFR as a function of the block length. This gives an indication of how much the code parameters must be increased to ensure resistance to the GJS attack.
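For readers unfamiliar with bit-flipping decoding, the following is a minimal Python/NumPy sketch of the textbook parallel variant (not the step-by-step or BIKE decoders studied above, and with toy code parameters far below real QC-MDPC sizes); a returned None is exactly a decoding failure of the kind the DFR counts:

```python
import numpy as np

rng = np.random.default_rng(1)

def bit_flip_decode(H, s, max_iters=50):
    """Parallel bit flipping: repeatedly flip the positions meeting the most
    unsatisfied parity checks until the syndrome vanishes.
    H: r x n binary parity-check matrix; s: syndrome of the sought error."""
    e = np.zeros(H.shape[1], dtype=np.uint8)
    for _ in range(max_iters):
        syn = (s + H @ e) % 2                 # residual syndrome
        if not syn.any():
            return e                          # success
        counters = H.T @ syn                  # unsatisfied checks per position
        e[counters >= counters.max()] ^= 1    # flip the worst positions
    return None                               # a decoding failure: this is the DFR

# Toy sparse code, far below real QC-MDPC sizes (block lengths in the thousands).
r, n, w, t = 60, 120, 10, 4                   # checks, length, column weight, errors
H = np.zeros((r, n), dtype=np.uint8)
for j in range(n):
    H[rng.choice(r, size=w, replace=False), j] = 1
err = np.zeros(n, dtype=np.uint8)
err[rng.choice(n, size=t, replace=False)] = 1
s = (H @ err) % 2
print(np.array_equal(bit_flip_decode(H, s), err))  # True when decoding succeeds
```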
Last updated:  2018-12-19
ARPA Whitepaper
Derek Zhang, Alex Su, Felix Xu, Jiang Chen
We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious-majority condition using an information-theoretic Message Authentication Code (MAC), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain, cryptographically, while reducing the heavy-lifting computation job to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support private smart contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data during data-at-use, so that data ownership and data usage are safely separated. Last but not least, the computation and verification processes are separated, which can be perceived as computational sharding; this effectively makes the transaction processing speed linear in the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution for any blockchain system. Smart contracts\cite{smartcontract} will be used as a bridge to link the blockchain and computation networks. Additionally, they will be used as verifiers to ensure that outsourced computation is completed correctly. In order to achieve this, we first develop a general MPC network with advanced features, such as: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
Last updated:  2019-03-20
Cryptanalysis of a code-based one-time signature
Jean-Christophe Deneuville, Philippe Gaborit
In 2012, Lyubashevsky introduced a new framework for building lattice-based signature schemes without resorting to any trapdoor (such as GPV [6] or NTRU [7]). The idea is to sample a set of short lattice elements and construct the public key as a Short Integer Solution (SIS for short) instance. Signatures are obtained using a small subset sum of the secret key, hidden by a (large) Gaussian mask. (Information leakage is dealt with using rejection sampling.) Recently, Persichetti proposed an efficient adaptation of this framework to coding theory [12]. In this paper, we show that this adaptation cannot be secure, even for one-time signatures (OTS), due to an inherent difference between bounds in Hamming and Euclidean metrics. The attack consists in rewriting a signature as a noisy syndrome decoding problem, which can be handled efficiently using the extended bit flipping decoding algorithm. We illustrate our results by breaking Persichetti’s OTS scheme built upon this approach [12]: using a single signature, we recover the secret (signing) key in about the same amount of time as required for a couple of signature verifications.
Last updated:  2018-12-18
The Lord of the Shares: Combining Attribute-Based Encryption and Searchable Encryption for Flexible Data Sharing
Antonis Michalas
Secure cloud storage is one of the most important issues that both businesses and end-users consider before moving their private data to the cloud. Lately, we have seen some interesting approaches that are based either on the promising concept of Symmetric Searchable Encryption (SSE) or on the well-studied field of Attribute-Based Encryption (ABE). In the first case, researchers are trying to design protocols where users' data will be protected from both \textit{internal} and \textit{external} attacks, without paying the necessary attention to the problem of user revocation. In the second case, existing approaches address the problem of revocation, but the overall efficiency of these systems is compromised since the proposed protocols are solely based on ABE schemes, in which the size of the produced ciphertexts and the time required to decrypt grow with the complexity of the access formula. In this paper, we propose a protocol that combines \textit{both} SSE and ABE in a way that retains the main advantages of each scheme. The proposed protocol allows users to search directly over encrypted data by using an SSE scheme, while the corresponding symmetric key needed for decryption is protected via a Ciphertext-Policy Attribute-Based Encryption scheme.
Last updated:  2019-04-29
DAGS Reloaded: Revisiting Dyadic Key Encapsulation
Gustavo Banegas, Paulo S. L. M. Barreto, Brice Odilon Boidje, Pierre-Louis Cayrel, Gilbert Ndollane Dione, Kris Gaj, Cheikh Thiecoumba Gueye, Richard Haeussler, Jean Belo Klamti, Ousmane N'diaye, Duc Tri Nguyen, Edoardo Persichetti, Jefferson E. Ricardini
In this paper we revisit some of the main aspects of the DAGS Key Encapsulation Mechanism, one of the code-based candidates to NIST's standardization call for the key exchange/encryption functionalities. In particular, we modify the algorithms for key generation, encapsulation and decapsulation to fit an alternative KEM framework, and we present a new set of parameters that use binary codes. We discuss advantages and disadvantages for each of the variants proposed.
Last updated:  2018-12-18
AuthCropper: Authenticated Image Cropper for Privacy Preserving Surveillance Systems
Jihye Kim, Jiwon Lee, Hankyung Ko, Donghwan Oh, Semin Han, Kwonho Jeong, Hyunok Oh
As surveillance systems become widespread, the privacy of recorded video becomes more important. On the other hand, the authenticity of video images should be guaranteed when they are used as evidence in court. It is challenging to satisfy both (personal) privacy and authenticity of a video simultaneously, since privacy requires modifications (e.g., partial deletions) of the original video image while authenticity does not allow any modifications of the original image. This paper proposes a novel method to convert an encryption scheme to support partial decryption with a constant number of keys and, by combining it with a signature scheme, constructs a privacy-aware authentication scheme. The security of the proposed scheme is implied by the security of the underlying encryption and signature schemes. Experimental results show that the proposed scheme can handle a UHD video stream at more than 17 fps on a real embedded system, which validates its practicality.
Last updated:  2018-12-18
Subversion in Practice: How to Efficiently Undermine Signatures
Joonsang Baek, Willy Susilo, Jongkil Kim, Yang-Wai Chow
Algorithm substitution attacks (ASAs) on signatures should be treated seriously, as the authentication services of numerous systems and applications rely on signature schemes and compromising them has a significant impact on the security of users. We present a somewhat alarming result in this regard: a highly efficient ASA on the Digital Signature Algorithm (DSA), together with its implementation. Compared with the generic ASAs on signature schemes proposed in the literature, our attack provides fast and undetectable subversion, which extracts the user's private signing key from at most three arbitrary signatures. Moreover, our ASA is proven to be robust against state reset. We implemented the proposed ASA by replacing the original DSA in Libgcrypt (a popular cryptographic library used in many applications) with our subverted DSA. Experiments show that the user's private key can readily be recovered once the subverted DSA is used to sign messages. In our implementation, various measures have been taken to significantly reduce the possibility of detection by comparing the running time of the original DSA with that of the subverted one (i.e. timing analysis). To our knowledge, this is the first implementation of an ASA in practice, which shows that ASAs are a real threat rather than only a theoretical speculation.
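The paper's specific subversion mechanism is not reproduced here, but the standard algebraic fact that any nonce-leaking DSA subversion ultimately exploits is easy to state: whoever learns the per-signature nonce $k$ recovers the private key from a single signature, since $s = k^{-1}(H(m) + xr) \bmod q$ gives $x = (sk - H(m))r^{-1} \bmod q$. A toy Python sketch (all parameter values are illustrative, not real DSA parameters):

```python
q = 101                        # toy prime group order (not a real DSA parameter)
def inv(a, m): return pow(a, -1, m)

x, k, h, r = 57, 33, 88, 45    # private key, nonce, H(m) mod q, and r
                               # (in real DSA, r = (g^k mod p) mod q)
s = (inv(k, q) * (h + x * r)) % q        # the DSA signing equation

# Anyone who learns k recovers the private key from this single signature:
x_recovered = ((s * k - h) * inv(r, q)) % q
assert x_recovered == x
```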
Last updated:  2018-12-18
On a Rank-Metric Code-Based Cryptosystem with Small Key Size
Julian Renner, Sven Puchinger, Antonia Wachter-Zeh
A repair of the Faure-Loidreau (FL) public-key code-based cryptosystem is proposed. The FL cryptosystem is based on the hardness of list decoding Gabidulin codes, which are special rank-metric codes. We prove that the recent structural attack on the system by Gaborit et al. is equivalent to decoding an interleaved Gabidulin code. Since all known polynomial-time decoders for these codes fail for a large constructive class of error patterns, we are able to construct public keys that resist the attack. It is also shown that all other known attacks fail for our repair and parameter choices. Compared to other code-based cryptosystems, we obtain significantly smaller key sizes for the same security level.
Last updated:  2021-06-10
Quantum Equivalence of the DLP and CDHP for Group Actions
Steven Galbraith, Lorenz Panny, Benjamin Smith, Frederik Vercauteren
In this short note we give a polynomial-time quantum reduction from the vectorization problem (DLP) to the parallelization problem (CDHP) for efficiently computable group actions. Combined with the trivial reduction from parallelization to vectorization, we thus prove the quantum equivalence of these problems, which is the post-quantum counterpart to classic results of den Boer and Maurer in the classical Diffie-Hellman setting. In contrast to the classical setting, our reduction holds unconditionally and does not assume knowledge of suitable auxiliary algebraic groups. We discuss the implications of this reduction for isogeny-based cryptosystems including CSIDH.
Last updated:  2019-02-12
On Lions and Elligators: An efficient constant-time implementation of CSIDH
Michael Meyer, Fabio Campos, Steffen Reith
The recently proposed CSIDH primitive is a promising candidate for post-quantum static-static key exchange with very small keys. However, until now there has been only a variable-time proof-of-concept implementation by Castryck, Lange, Martindale, Panny, and Renes, recently optimized by Meyer and Reith, which can leak various information about the private key. Therefore, we present an efficient constant-time implementation that samples key elements only from intervals of nonnegative numbers and uses dummy isogenies, which prevents certain kinds of side-channel attacks. We apply several optimizations, e.g. Elligator and the newly introduced SIMBA, in order to get a more efficient implementation.
Last updated:  2018-12-18
Automated software protection for the masses against side-channel attacks
Nicolas Belleville, Damien Couroussé, Karine Heydemann, Henri-Pierre Charles
We present an approach and a tool to answer the need for effective, generic and easily applicable protections against side-channel attacks. The protection mechanism is based on code polymorphism, so that the observable behaviour of the protected component is variable and unpredictable to the attacker. Our approach combines lightweight specialized runtime code generation with the optimization capabilities of static compilation. It is extensively configurable. Experimental results show that programs secured by our approach present strong security levels and meet the performance requirements of constrained systems.
Last updated:  2020-06-04
Gradient Visualization for General Characterization in Profiling Attacks
Loïc Masure, Cécile Dumas, Emmanuel Prouff
In Side-Channel Analysis (SCA), several papers have shown that neural networks can be trained to efficiently extract sensitive information from implementations running on embedded devices. This paper introduces a new tool called Gradient Visualization that aims to perform post-mortem information-leakage characterization after the successful training of a neural network. It relies on the computation of the gradient of the loss function used during training. The gradient is computed not with respect to the model parameters, but with respect to the input trace components. It can thus accurately highlight the temporal moments where sensitive information leaks. We theoretically show that this method, based on Sensitivity Analysis, can be used to efficiently localize points of interest in the SCA context. The efficiency of the proposed method does not depend on the particular countermeasures that may be applied to the measured traces, as long as the profiled neural network can still learn in the presence of such difficulties. In addition, the characterization can be made for each trace individually. We verified the soundness of our proposed method on simulated data and on experimental traces from a public side-channel database. Finally, we empirically show that Sensitivity Analysis is at least as good as state-of-the-art characterization methods, in the presence (or not) of countermeasures.
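A minimal sketch of the underlying idea, with a numerical gradient standing in for the backpropagated one and a toy loss function in place of a trained network (both our own simplifications):

```python
import numpy as np

def sensitivity_map(loss_fn, trace, eps=1e-4):
    """Gradient of the loss w.r.t. the *input trace* components, here by finite
    differences; the paper obtains the same quantity by backpropagation."""
    grad = np.zeros_like(trace)
    for i in range(len(trace)):
        up, down = trace.copy(), trace.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (loss_fn(up) - loss_fn(down)) / (2 * eps)
    return np.abs(grad)  # large entries = candidate points of interest

# Toy stand-in for a trained network: the loss depends only on samples 3 and 7,
# so the sensitivity map should peak exactly there.
loss = lambda t: (t[3] * t[7] - 1.0) ** 2
trace = np.random.default_rng(0).normal(size=10)
print(sensitivity_map(loss, trace).round(3))
```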
Last updated:  2018-12-18
M&M: Masks and Macs against Physical Attacks
Lauren De Meyer, Victor Arribas, Svetla Nikova, Ventzislav Nikov, Vincent Rijmen
Cryptographic implementations on embedded systems need to be protected against physical attacks. Today, this means that apart from incorporating countermeasures against side-channel analysis, implementations must also withstand fault attacks and combined attacks. Recent proposals in this area have shown that there is a big tradeoff between the implementation cost and the strength of the adversary model. In this work, we introduce a new combined countermeasure M&M that combines Masking with information-theoretic MAC tags and infective computation. It works in a stronger adversary model than the existing scheme ParTI, yet is a lot less costly to implement than the provably secure MPC-based scheme CAPA. We demonstrate M&M with a SCA- and DFA-secure implementation of the AES block cipher. We evaluate the side-channel leakage of the second-order secure design with a non-specific t-test and use simulation to validate the fault resistance.
Last updated:  2019-11-25
On Degree-d Zero-Sum Sets of Full Rank
Christof Beierle, Alex Biryukov, Aleksei Udovenko
A set $S \subseteq \mathbb{F}_2^n$ is called degree-$d$ zero-sum if the sum $\sum_{s \in S} f(s)$ vanishes for all $n$-bit Boolean functions of algebraic degree at most $d$. Those sets correspond to the supports of the $n$-bit Boolean functions of degree at most $n-d-1$. We prove some results on the existence of degree-$d$ zero-sum sets of full rank, i.e., those that contain $n$ linearly independent elements, and show relations to degree-1 annihilator spaces of Boolean functions and semi-orthogonal matrices. We are particularly interested in the smallest of such sets and prove bounds on the minimum number of elements in a degree-$d$ zero-sum set of rank $n$. The motivation for studying those objects comes from the fact that degree-$d$ zero-sum sets of full rank can be used to build linear mappings that preserve special kinds of \emph{nonlinear invariants}, similar to those obtained from orthogonal matrices and exploited by Todo, Leander and Sasaki for breaking the block ciphers Midori, Scream and iScream.
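A small Python sketch of the definition (our own example: by linearity it suffices to test monomials, and a linear subspace of dimension $d+1$ is degree-$d$ zero-sum, though of rank only $d+1$ rather than the full rank $n$ studied above):

```python
from itertools import combinations

def is_degree_d_zero_sum(S, n, d):
    """By linearity it suffices that every monomial of degree <= d sums to 0
    (mod 2) over S; elements of S are n-bit integers."""
    for deg in range(d + 1):
        for idx in combinations(range(n), deg):
            acc = 0
            for s in S:
                acc ^= all((s >> i) & 1 for i in idx)
            if acc:
                return False
    return True

# A linear subspace of dimension d+1 is degree-d zero-sum (its rank is only
# d+1, however, not the full rank n that the paper is after).
n, d = 6, 2
basis = [0b000111, 0b011001, 0b101010]   # d+1 = 3 independent vectors
S = set()
for mask in range(1 << (d + 1)):
    v = 0
    for i, b in enumerate(basis):
        if (mask >> i) & 1:
            v ^= b
    S.add(v)
print(is_degree_d_zero_sum(S, n, d))     # True
```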
Last updated:  2018-12-18
Quantum Chosen-Ciphertext Attacks against Feistel Ciphers
Gembu Ito, Akinori Hosoyamada, Ryutaroh Matsumoto, Yu Sasaki, Tetsu Iwata
Seminal results by Luby and Rackoff show that the 3-round Feistel cipher is secure against chosen-plaintext attacks (CPAs), and the 4-round version is secure against chosen-ciphertext attacks (CCAs). However, the security significantly changes when we consider attacks in the quantum setting, where the adversary can make superposition queries. By using Simon's algorithm that detects a secret cycle-period in polynomial-time, Kuwakado and Morii showed that the 3-round version is insecure against quantum CPA by presenting a polynomial-time distinguisher. Since then, Simon's algorithm has been heavily used against various symmetric-key constructions. However, its applications are still not fully explored. In this paper, based on Simon's algorithm, we first formalize a sufficient condition of a quantum distinguisher against block ciphers so that it works even if there are multiple collisions other than the real period. This distinguisher is similar to the one proposed by Santoli and Schaffner, and it does not recover the period. Instead, we focus on the dimension of the space obtained from Simon's quantum circuit. This eliminates the need to evaluate the probability of collisions, which was needed in the work by Kaplan et al. at CRYPTO 2016. Based on this, we continue the investigation of the security of Feistel ciphers in the quantum setting. We show a quantum CCA distinguisher against the 4-round Feistel cipher. This extends the result of Kuwakado and Morii by one round, and follows the intuition of the result by Luby and Rackoff where the CCA setting can extend the number of rounds by one. We also consider more practical cases where the round functions are composed of a public function and XORing the subkeys. We show the results of both distinguishing and key recovery attacks against these constructions.
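The quantum part of these attacks cannot be reproduced classically, but the hidden Simon period behind the 3-round CPA distinguisher of Kuwakado and Morii (the starting point that this paper extends) can be verified classically. A Python sketch with a toy branch width and random round functions; the function $f(b,x)$, built from the left output half of $E(x,\alpha_b)$, satisfies $f(b,x) = f(b\oplus 1,\, x\oplus F_1(\alpha_0)\oplus F_1(\alpha_1))$:

```python
import secrets

n = 8                                    # toy branch width in bits
rand_func = lambda: [secrets.randbits(n) for _ in range(1 << n)]
F1, F2, F3 = rand_func(), rand_func(), rand_func()

def feistel3(L, R):
    for F in (F1, F2, F3):
        L, R = R, L ^ F[R]
    return L, R

a0, a1 = 0x00, 0xFF                      # the two public constants
def f(b, x):
    L3, _ = feistel3(x, (a0, a1)[b])     # encrypt (x, alpha_b)
    return L3 ^ (a0, a1)[b]              # equals F2(x XOR F1(alpha_b))

# Hidden period s = (1, F1(alpha_0) XOR F1(alpha_1)), found by Simon's
# algorithm with superposition queries; classically we can only verify it.
shift = F1[a0] ^ F1[a1]
assert all(f(0, x) == f(1, x ^ shift) for x in range(1 << n))
```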
Last updated:  2018-12-18
Durandal: a rank metric based signature scheme
Nicolas Aragon, Olivier Blazy, Philippe Gaborit, Adrien Hauteville, Gilles Zémor
We describe a variation of the Schnorr-Lyubashevsky approach to devising signature schemes that is adapted to rank based cryptography. This new approach enables us to obtain a randomization of the signature, which previously seemed difficult to derive for code-based cryptography. We provide a detailed analysis of attacks and an EUF-CMA proof for our scheme. Our scheme relies on the security of the Ideal Rank Support Learning and the Ideal Rank Syndrome problems and a newly introduced problem: Product Spaces Subspaces Indistinguishability, for which we give a detailed analysis. Overall the parameters we propose are efficient and comparable in terms of signature size to the Dilithium lattice-based scheme, with a signature size of less than 4kB for a public key of size less than 20kB.
Last updated:  2018-12-10
Cryptanalysis of 2-round KECCAK-384
Rajendra Kumar, Nikhil Mittal, Shashank Singh
In this paper, we present a cryptanalysis of 2-round Keccak-384. The best known preimage attack for this variant of Keccak has time complexity $2^{129}$. In our analysis, we find a preimage in time complexity $2^{89}$, with almost the same memory requirement.
Last updated:  2019-11-21
Large Universe Subset Predicate Encryption Based on Static Assumption (without Random Oracle)
Sanjit Chatterjee, Sayantan Mukherjee
In a recent work, Katz et al. (CANS'17) generalized the notion of Broadcast Encryption to define Subset Predicate Encryption (SPE), which emulates the \emph{subset containment} predicate in the encrypted domain. They proposed two selectively secure constructions of SPE in the small universe setting. Their first construction is based on a $q$-type assumption while the second one is based on DBDH. Both achieve constant-size secret keys while the ciphertext size depends on the size of the privileged set. They also showed black-box transformations of SPE to well-known primitives like WIBE and ABE to establish the richness of the SPE structure. This work investigates the question of large universe realizations of SPE based on static assumptions without random oracles. We propose two constructions, both of which achieve constant-size secret keys. The first construction, $\mathsf{SPE}_1$, instantiated in composite-order bilinear groups, achieves constant-size ciphertexts and is proven secure in a restricted version of the selective security model under the subgroup decision assumption (SDP). Our main construction, $\mathsf{SPE}_2$, is adaptively secure in prime-order bilinear groups under the symmetric external Diffie-Hellman assumption (SXDH). Thus $\mathsf{SPE}_2$ is the first large universe instantiation of SPE to achieve adaptive security without random oracles. Both our constructions have efficient decryption functions, suggesting their practical applicability. Thus the primitives like WIBE and ABE resulting from black-box transformations of our constructions become more practical.
Last updated:  2018-12-10
The Role of the Adversary Model in Applied Security Research
Quang Do, Ben Martini, Kim-Kwang Raymond Choo
Adversary models have been integral to the design of provably-secure cryptographic schemes or protocols. However, their use in other computer science research disciplines is relatively limited, particularly in the case of applied security research (e.g., mobile app and vulnerability studies). In this study, we conduct a survey of prominent adversary models used in the seminal field of cryptography, and more recent mobile and Internet of Things (IoT) research. Motivated by the findings from the cryptography survey, we propose a classification scheme for common app-based adversaries used in mobile security research, and classify key papers using the proposed scheme. Finally, we discuss recent work involving adversary models in the contemporary research field of IoT. We contribute recommendations to aid researchers working in applied (IoT) security based upon our findings from the mobile and cryptography literature. The key recommendation is for authors to clearly define adversary goals, assumptions and capabilities.
Last updated:  2021-05-20
Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains
Dan Boneh, Benedikt Bünz, Ben Fisch
We present batching techniques for cryptographic accumulators and vector commitments in groups of unknown order. Our techniques are tailored for distributed settings where no trusted accumulator manager exists and updates to the accumulator are processed in batches. We develop techniques for non-interactively aggregating membership proofs that can be verified with a constant number of group operations. We also provide a constant sized batch non-membership proof for a large number of elements. These proofs can be used to build the first positional vector commitment (VC) with constant sized openings and constant sized public parameters. As a core building block for our batching techniques we develop several succinct proof systems in groups of unknown order. These extend a recent construction of a succinct proof of correct exponentiation, and include a succinct proof of knowledge of an integer discrete logarithm between two group elements. We use these new constructions to design a stateless blockchain, where nodes only need a constant amount of storage in order to participate in consensus. Further, we show how to use these techniques to reduce the size of IOP instantiations, such as STARKs.
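A minimal RSA-accumulator sketch in Python showing the membership-witness mechanics that the batching techniques build on (toy modulus with known factorization, purely for illustration; the paper's succinct proofs of exponentiation, which make batched verification constant-size, are elided):

```python
from math import prod

N = 2953 * 3581          # stand-in for an RSA modulus of unknown factorization
g = 3                    # accumulator base
elems = [5, 7, 11, 13]   # accumulated set, elements mapped to distinct primes

A = pow(g, prod(elems), N)                            # accumulator value

# Membership witness for 11: the accumulator computed *without* 11.
w = pow(g, prod(e for e in elems if e != 11), N)
assert pow(w, 11, N) == A

# Aggregation idea: one witness attests to {7, 11} in a single check.
w_batch = pow(g, prod(e for e in elems if e not in (7, 11)), N)
assert pow(w_batch, 7 * 11, N) == A
```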
Last updated:  2018-12-10
Automatic Search for A Variant of Division Property Using Three Subsets (Full Version)
Kai Hu, Meiqin Wang
The division property proposed at Eurocrypt'15 is a novel technique to find integral distinguishers, and it has been applied to most kinds of symmetric ciphers such as block ciphers, stream ciphers, and authenticated encryption. The original division property is word-oriented; later the bit-based one was proposed at FSE'16 to obtain better integral properties, and it comes in two forms: conventional bit-based division property (two-subset division property) and bit-based division property using three subsets (three-subset division property). The three-subset division property has more potential to achieve better integral distinguishers than the two-subset division property. However, the bit-based division property could not be applied to ciphers with large block sizes due to its impractical complexity. At Asiacrypt'16, the two-subset division property was modeled using the Mixed Integer Linear Programming (MILP) technique, and the limits on block sizes were eliminated. However, there is still no efficient method for searching for the three-subset division property. The propagation rule of the \texttt{XOR} operation for $\mathbb{L}$ \footnote{The definition of $\mathbb{L}$ and $\mathbb{K}$ is introduced in Section 2.}, a set used in the three-subset division property but not in the two-subset one, requires removing some specific vectors, and new vectors generated from $\mathbb{L}$ should be appended to $\mathbb{K}$ when a \texttt{Key-XOR} operation is applied; both steps are difficult for common automatic tools such as MILP, SMT or CP. In this paper, we overcome one of these two challenges: concretely, we address the problem of adding new vectors into $\mathbb{K}$ from $\mathbb{L}$ in an automatic search model. Moreover, we present a new model that automatically searches for a variant three-subset division property (VTDP) with the STP solver. The variant is weaker than the original three-subset division property (OTDP) but is still powerful for some ciphers. Most importantly, this model has no constraints on the block size of target ciphers and can also be applied to ARX and S-box based ciphers. As illustrations, improved integral distinguishers have been obtained for SIMON32, SIMON32/48/64(102), SPECK32 and KATAN/KTANTAN32/48/64 in terms of the number of rounds or the number of even/odd-parity bits.
Last updated:  2018-12-10
MILP Method of Searching Integral Distinguishers Based on Division Property Using Three Subsets
Senpeng Wang, Bin Hu, Jie Guan, Kai Zhang, Tairong Shi
Division property is a generalized integral property proposed by Todo at EUROCRYPT 2015; conventional bit-based division property (CBDP) and bit-based division property using three subsets (BDPT) were then proposed by Todo and Morii at FSE 2016. The huge time and memory complexities that once restricted the applications of CBDP were overcome by Xiang et al. at ASIACRYPT 2016, who extended the Mixed Integer Linear Programming (MILP) method to search for integral distinguishers based on CBDP. BDPT can find more accurate integral distinguishers than CBDP, but it cannot be modeled efficiently; thus it cannot be applied to block ciphers with block size larger than 32 bits. In this paper, we focus on the feasibility of applying the MILP-aided method to search for integral distinguishers based on BDPT. We first study how to obtain the BDPT propagation rules of an S-box, based on which we can efficiently describe the BDPT propagation of a cipher that has S-boxes. Moreover, we propose a technique called ``fast propagation'', which translates BDPT into CBDP, so that the balanced bits based on BDPT can be presented. Together with the propagation properties of BDPT, we can use the MILP method based on CBDP to search for integral distinguishers based on BDPT. In order to prove the efficiency of our method, we search for integral distinguishers on SIMON, SIMECK, PRESENT, RECTANGLE, LBlock, and TWINE. For SIMON64, PRESENT, and RECTANGLE, we find more balanced bits than the previous longest distinguishers. For LBlock, we find a 17-round integral distinguisher, one round longer than the previous longest integral distinguisher, and a better 16-round integral distinguisher with fewer active bits can be obtained. For the other ciphers, our results are in accordance with the previous longest distinguishers.
Last updated:  2018-12-10
On Quantum Chosen-Ciphertext Attacks and Learning with Errors
Gorjan Alagic, Stacey Jeffery, Maris Ozols, Alexander Poremba
Large-scale quantum computing is a significant threat to classical public-key cryptography. In strong “quantum access” security models, numerous symmetric-key cryptosystems are also vulnerable. We consider classical encryption in a model which grants the adversary quantum oracle access to encryption and decryption, but where the latter is restricted to non-adaptive (i.e., pre-challenge) queries only. We define this model formally using appropriate notions of ciphertext indistinguishability and semantic security (which are equivalent by standard arguments) and call it QCCA1 in analogy to the classical CCA1 security model. Using a bound on quantum random-access codes, we show that the standard PRF- and PRP-based encryption schemes are QCCA1-secure when instantiated with quantum-secure primitives. We then revisit standard IND-CPA-secure Learning with Errors (LWE) encryption and show that leaking just one quantum decryption query (and no other queries or leakage of any kind) allows the adversary to recover the full secret key with constant success probability. In the classical setting, by contrast, recovering the key requires a linear number of decryption queries, and this is optimal. The algorithm at the core of our attack is a large-modulus version of the well-known Bernstein-Vazirani algorithm. We emphasize that our results should not be interpreted as a weakness of these cryptosystems in their stated security setting (i.e., post-quantum chosen-plaintext secrecy). Rather, our results mean that, if these cryptosystems are exposed to chosen-ciphertext attacks (e.g., as a result of deployment in an inappropriate real-world setting) then quantum attacks are even more devastating than classical ones.
Last updated:  2019-02-20
Uncontrolled Randomness in Blockchains: Covert Bulletin Board for Illicit Activity
Nasser Alsalami, Bingsheng Zhang
Public blockchains can be abused to covertly store and disseminate potentially harmful digital content. Consequently, this threat jeopardizes the future of such applications and poses a serious regulatory issue. In this work, we show the severity of the problem by demonstrating that blockchains can be exploited as a covert bulletin board to secretly store and distribute arbitrary content. More specifically, all major blockchain systems use randomized cryptographic primitives, such as digital signatures and non-interactive zero-knowledge proofs, and we illustrate how the uncontrolled randomness in such primitives can be maliciously manipulated to enable covert communication and hidden persistent storage. To clarify the potential risk, we design, implement and evaluate our technique against the widely-used ECDSA signature scheme, the CryptoNote's ring signature scheme, and Monero's ring confidential transactions. Importantly, the significance of the demonstrated attacks stems from their undetectability, their adverse effect on the future of decentralized blockchains, and their serious repercussions on users' privacy and crypto funds. Finally, besides presenting the attacks, we examine existing countermeasures and devise two new steganography-resistant blockchain architectures to practically thwart this threat in the context of blockchains.
Last updated:  2018-12-05
Lossy Trapdoor Permutations with Improved Lossiness
Benedikt Auerbach, Eike Kiltz, Bertram Poettering, Stefan Schoenen
Lossy trapdoor functions (Peikert and Waters, STOC 2008 and SIAM J. Computing 2011) imply, via black-box transformations, a number of interesting cryptographic primitives, including chosen-ciphertext secure public-key encryption. Kiltz, O'Neill, and Smith (CRYPTO 2010) showed that the RSA trapdoor permutation is lossy under the Phi-hiding assumption, but syntactically it is not a lossy trapdoor function since it acts on Z_N and not on strings. Using a domain extension technique by Freeman et al. (PKC 2010 and J. Cryptology 2013) it can be extended to a lossy trapdoor permutation, but with considerably reduced lossiness. In this work we give new constructions of lossy trapdoor permutations from the Phi-hiding assumption, the quadratic residuosity assumption, and the decisional composite residuosity assumption, all with improved lossiness. Furthermore, we propose the first all-but-one lossy trapdoor permutation from the Phi-hiding assumption. A technical vehicle used for achieving this is a novel transform that converts trapdoor functions with index-dependent domain into trapdoor functions with fixed domain.
Last updated:  2019-03-25
Code-based Cryptosystem from Quasi-Cyclic Elliptic Codes
Fangguo Zhang, Zhuoran Zhang
With the fast development of quantum computation, code-based cryptography has attracted public attention as a candidate for post-quantum cryptography. However, large key sizes remain a main drawback, so code-based schemes are seldom practical although they perform well in terms of encryption and decryption speed. Algebraic geometry (AG) codes were considered a good solution to reduce key sizes, but because of their special structure, there are many attacks against them. In this paper, we propose a public-key encryption scheme based on elliptic codes which resists the known attacks. By using an automorphism on the rational points of the elliptic curve, we construct quasi-cyclic elliptic codes, which reduce the key size further. We apply a list-decoding algorithm to decryption, so that more errors, beyond half of the minimum distance of the code, can be corrected, which is the key point in resisting the known attacks on AG-code-based cryptosystems.
Last updated:  2018-12-21
Horizontal DEMA Attack as the Criterion to Select the Best Suitable EM Probe
Christian Wittke, Ievgen Kabin, Dan Klann, Zoya Dyka, Anton Datsuk, Peter Langendoerfer
Implementing cryptographic algorithms in a tamper-resistant way is an extremely complex task, as the algorithm used and the target platform have a significant impact on the potential leakage of the implementation. In addition, the quality of the tools used for the attacks is of importance. In order to evaluate the resistance of a certain design against electromagnetic emanation attacks – as a highly relevant type of attack – we discuss the quality of different electromagnetic (EM) probes as attack tools. In this paper we propose to use the results of horizontal attacks for the comparison of measurement setups and for determining the best suitable instruments for measurements. We performed horizontal differential electromagnetic analysis (DEMA) attacks against our ECC design, an implementation of the Montgomery kP algorithm for the NIST elliptic curve B-233. We experimented with 7 different EM probes under the same conditions: the attacked FPGA, design, inputs, measurement point and measurement equipment were the same, except for the EM probes. The EM probe used significantly influences the success rate of the performed attack. We used this fact for the comparison of probes and for determining the best suitable one.
Last updated:  2020-01-23
Lattice-Based Signature from Key Consensus
Leixiao Cheng, Boru Gong, Yunlei Zhao
Given the current state of research in lattice-based cryptography, it is commonly suggested that lattice-based signatures are subtler and harder to achieve. Dilithium is one of the most promising signature candidates for the post-quantum era, for its simplicity, efficiency, small public key size, and resistance against side channel attacks. The design of Dilithium is based on a list of pioneering works (e.g., [VL09,VL12,BG14]), and has very remarkable performance thanks to very careful and comprehensive optimizations in implementation and parameter selection. Whether better trade-offs on the already remarkable performance of Dilithium can be made is left in \cite{CRYSTALS} as an interesting open question. In this work, we provide new insights in interpreting the design of Dilithium, in terms of the key consensus previously proposed in the literature for key encapsulation mechanisms (KEM) and key exchange (KEX). Based on the deterministic version of the optimal key consensus with noise (OKCN) mechanism, originally developed in [JZ16] for KEM/KEX, we present \emph{signature from key consensus with noise} (SKCN), which can be viewed as a generalization and optimization of Dilithium. The construction of SKCN is generic, modular and flexible, which in particular allows a much broader range of parameters for searching for better trade-offs among security, computational efficiency, and bandwidth. For example, on the recommended parameters, compared with Dilithium our SKCN scheme is more efficient both in computation and in bandwidth, while preserving the same level of post-quantum security. In addition, using the same OKCN routine for both KEM/KEX and digital signatures eases (hardware) implementation and deployment in practice, and helps simplify the system complexity of lattice-based cryptography in general.
Last updated:  2020-10-12
Elliptic Curves in Generalized Huff's Model
Ronal Pranil Chand, Maheswara Rao Valluri
This paper introduces a new form of elliptic curves in the generalized Huff model. These curves, endowed with the addition operation, are shown to form a group over a finite field. We present formulae for point addition and point doubling on the curves, and evaluate the computational cost of point addition and point doubling using projective, Jacobian, and Lopez-Dahab coordinate systems, as well as an embedding of the curves into the $\mathbb{P}^{1}\times\mathbb{P}^{1}$ system. We also prove that the curves are birationally equivalent to the Weierstrass form. We observe that the computational cost of point addition and point doubling on the curves is lowest when embedding the curves into the $\mathbb{P}^{1}\times\mathbb{P}^{1}$ system rather than using the other mentioned coordinate systems, and is nearly optimal compared to other known Huff models.
Last updated:  2020-12-03
Pseudo-Free Families of Computational Universal Algebras
Mikhail Anokhin
Let $\Omega$ be a finite set of finitary operation symbols. We initiate the study of (weakly) pseudo-free families of computational $\Omega$-algebras in arbitrary varieties of $\Omega$-algebras. Most of our results concern (weak) pseudo-freeness in the variety $\mathfrak O$ of all $\Omega$-algebras. A family $(H_d)_{d\in D}$ of computational $\Omega$-algebras (where $D\subseteq\{0,1\}^*$) is called polynomially bounded (resp., having exponential size) if there exists a polynomial $\eta$ such that for all $d\in D$, the length of any representation of every $h\in H_d$ is at most $\eta(\lvert d\rvert)$ (resp., $\lvert H_d\rvert\le2^{\eta(\lvert d\rvert)}$). First, we prove the following trichotomy: (i) if $\Omega$ consists of nullary operation symbols only, then there exists a polynomially bounded pseudo-free family in $\mathfrak O$; (ii) if $\Omega=\Omega_0\cup\{\omega\}$, where $\Omega_0$ consists of nullary operation symbols and the arity of $\omega$ is $1$, then there exist an exponential-size pseudo-free family and a polynomially bounded weakly pseudo-free family (both in $\mathfrak O$); (iii) in all other cases, the existence of polynomially bounded weakly pseudo-free families in $\mathfrak O$ implies the existence of collision-resistant families of hash functions. Second, assuming the existence of collision-resistant families of hash functions, we construct a polynomially bounded weakly pseudo-free family and an exponential-size pseudo-free family in the variety of all $m$-ary groupoids, where $m$ is an arbitrary positive integer. In particular, for arbitrary $m\ge2$, polynomially bounded weakly pseudo-free families in the variety of all $m$-ary groupoids exist if and only if collision-resistant families of hash functions exist.
Last updated:  2018-12-03
Excalibur Key-Generation Protocols For DAG Hierarchic Decryption
Louis Goubin, Geraldine Monsalve, Juan Reutter, Francisco Vial Prado
Public-key cryptography applications often require structuring decryption rights according to some hierarchy. This is typically addressed with re-encryption procedures or relying on trusted parties, in order to avoid secret-key transfers and leakages. Using a novel approach, Goubin and Vial-Prado (2016) take advantage of the Multikey FHE-NTRU encryption scheme to establish decryption rights at key-generation time, thus preventing leakage of all secrets involved (even by powerful key-holders). Their algorithms are intended for two parties, and can be composed to form chains of users with inherited decryption rights. In this article, we provide new protocols for generating Excalibur keys under any DAG-like hierarchy, and present formal proofs of security against semi-honest adversaries. Our protocols are compatible with the homomorphic properties of FHE-NTRU, and the base case of our security proofs may be regarded as a more formal, simulation-based proof of said work.
Last updated:  2018-12-03
Downgradable Identity-based Encryption and Applications
Olivier Blazy, Paul Germouty, Duong Hieu Phan
In identity-based cryptography, in order to generalize one-receiver encryption to multi-receiver encryption, wildcards were introduced: WIBE enables wildcards in the receivers' pattern and Wicked-IBE allows one to generate a key for identities with wildcards. However, the use of wildcards makes the constructions of WIBE and Wicked-IBE more complicated and significantly less efficient than the underlying IBE. The main reason is that the conventional binary alphabet for identities is extended to a ternary alphabet $\{0,1,*\}$ and the wildcard $*$ is always treated in a convoluted way in encryption or in key generation. In this paper, we show that when dealing with the multi-receiver setting, wildcards are not necessary. We introduce a new downgradable property for IBE schemes and show that any IBE with this property, called DIBE, can be efficiently transformed into WIBE or Wicked-IBE. While WIBE and Wicked-IBE have been used to construct broadcast encryption, we go a step further by employing DIBE to construct attribute-based encryption whose access policy is expressed as a boolean formula in disjunctive normal form.
Last updated:  2019-03-14
New Privacy Threat on 3G, 4G, and Upcoming 5G AKA Protocols
Ravishankar Borgaonkar, Lucca Hirschi, Shinjo Park, Altaf Shaik
Mobile communications are used by more than two-thirds of the world population who expect security and privacy guarantees. The 3rd Generation Partnership Project (3GPP) responsible for the worldwide standardization of mobile communication has designed and mandated the use of the AKA protocol to protect the subscribers’ mobile services. Even though privacy was a requirement, numerous subscriber location attacks have been demonstrated against AKA, some of which have been fixed or mitigated in the enhanced AKA protocol designed for 5G. In this paper, we reveal a new privacy attack against all variants of the AKA protocol, including 5G AKA, that breaches subscriber privacy more severely than known location privacy attacks do. Our attack exploits a new logical vulnerability we uncovered that would require dedicated fixes. We demonstrate the practical feasibility of our attack using low cost and widely available setups. Finally we conduct a security analysis of the vulnerability and discuss countermeasures to remedy our attack.
Last updated:  2018-12-03
A Comparison of NTRU Variants
John M. Schanck
We analyze the size vs. security trade-offs that are available when selecting parameters for perfectly correct key encapsulation mechanisms based on NTRU.
Last updated:  2019-02-06
The 9 Lives of Bleichenbacher's CAT: New Cache ATtacks on TLS Implementations
Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, David Wong, Yuval Yarom
At CRYPTO’98, Bleichenbacher published his seminal paper describing a padding oracle attack against RSA implementations that follow the PKCS #1 v1.5 standard. Over the last twenty years, researchers and implementors have spent a huge amount of effort developing and deploying numerous mitigation techniques which were supposed to plug all the possible sources of Bleichenbacher-like leakages. However, as we show in this paper, most implementations are still vulnerable to several novel types of attack based on leakage from various microarchitectural side channels: out of nine popular implementations of TLS that we tested, we were able to break the security of seven with practical proof-of-concept attacks. We demonstrate the feasibility of using those Cache-like ATacks (CATs) to perform a downgrade attack against any TLS connection to a vulnerable server, using a BEAST-like Man-in-the-Browser attack. The main difficulty we face is how to perform the thousands of oracle queries required before the browser’s imposed timeout (which is 30 seconds for almost all browsers, with the exception of Firefox, which can be tricked into extending this period). The attack seems to be inherently sequential (due to its use of adaptive chosen ciphertext queries), but we describe a new way to parallelize Bleichenbacher-like padding attacks by exploiting any available number of TLS servers that share the same public key certificate. With this improvement, we could demonstrate the feasibility of a downgrade attack which recovers all 2048 bits of the RSA plaintext (including the premaster secret value, which suffices to establish a secure connection) from five available TLS servers in under 30 seconds. This sequential-to-parallel transformation of such attacks can be of independent interest, speeding up and facilitating other side channel attacks on RSA implementations.
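For intuition, here is a minimal Python sketch (ours, not the authors' attack code) of the PKCS #1 v1.5 conformance check whose outcome the microarchitectural side channels leak; parameter choices are illustrative:

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    """Encode a message into a k-byte PKCS #1 v1.5 block: 00 02 <nonzero pad> 00 <msg>."""
    pad_len = k - 3 - len(message)
    assert pad_len >= 8, "message too long for modulus size"
    # nonzero padding bytes (slightly biased mapping; fine for illustration)
    padding = bytes(b % 255 + 1 for b in os.urandom(pad_len))
    return b"\x00\x02" + padding + b"\x00" + message

def is_conforming(block: bytes, k: int) -> bool:
    """The yes/no check that a Bleichenbacher-style oracle leaks (here, via cache timing)."""
    if len(block) != k or block[0:2] != b"\x00\x02":
        return False
    # require at least 8 nonzero padding bytes before the zero separator
    sep = block.find(b"\x00", 2)
    return sep != -1 and sep >= 10

k = 256  # byte length of a 2048-bit modulus
assert is_conforming(pkcs1_v15_pad(b"premaster secret", k), k)
```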
Last updated:  2019-02-20
The impact of error dependencies on Ring/Mod-LWE/LWR based schemes
Jan-Pieter D'Anvers, Frederik Vercauteren, Ingrid Verbauwhede
Current estimation techniques for the probability of decryption failures in Ring/Mod-LWE/LWR based schemes assume independence of the failures in individual bits of the transmitted message to calculate the full failure rate of the scheme. In this paper we disprove this assumption both theoretically and practically for schemes based on Ring/Mod-Learning with Errors/Rounding. We provide a method to estimate the decryption failure probability, taking into account the bit failure dependency. We show that the independence assumption is suitable for schemes without error correction, but that it might lead to underestimating the failure probability of algorithms using error correcting codes. In the worst case, for LAC-128, the failure rate is $2^{48}$ times bigger than estimated under the assumption of independence. This higher-than-expected failure rate could lead to more efficient cryptanalysis of the scheme through decryption failure attacks.
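The effect can be illustrated with a toy correlated-noise model (our illustration; the paper works with the actual Ring/Mod-LWE/LWR failure distributions): when bit errors share a common component, the binomial tail computed under independence underestimates the chance that more than $t$ bits flip at once, which is exactly the event an error-correcting code fails on.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n_bits, trials, threshold, t = 256, 20_000, 4.0, 8  # t = errors the code can correct

# Toy noise model: each bit's error shares a common component (think: the norm of
# the secret in one ciphertext), which correlates failures across bits.
shared = rng.normal(0.0, 1.0, size=(trials, 1))
indep = rng.normal(0.0, 1.0, size=(trials, n_bits))
bit_fail = np.abs(shared + indep) > threshold

p = bit_fail.mean()  # marginal per-bit failure probability
# Failure rate predicted under the independence assumption (binomial tail):
predicted = 1 - sum(math.comb(n_bits, k) * p**k * (1 - p)**(n_bits - k)
                    for k in range(t + 1))
# Observed failure rate under the correlated noise:
observed = (bit_fail.sum(axis=1) > t).mean()
print(f"per-bit {p:.1e}, independence predicts {predicted:.1e}, observed {observed:.1e}")
```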
Last updated:  2018-12-06
PwoP: Intrusion-Tolerant and Privacy-Preserving Sensor Fusion
Chenglu Jin, Marten van Dijk, Michael K. Reiter, Haibin Zhang
We design and implement PwoP, an efficient and scalable system for intrusion-tolerant and privacy-preserving multi-sensor fusion. PwoP develops and unifies techniques from dependable distributed systems and modern cryptography and, in contrast to prior works, can 1) provably defend against pollution attacks where some malicious sensors lie about their values to sway the final result, and 2) perform within the computation and bandwidth limitations of cyber-physical systems. PwoP is flexible and extensible, covering a variety of application scenarios. We demonstrate the practicality of our system using the Raspberry Pi Zero W, and we show that PwoP is efficient in both failure-free and failure scenarios.
Last updated:  2020-02-11
Toward RSA-OAEP without Random Oracles
Nairen Cao, Adam O'Neill, Mohammad Zaheri
We show new partial and full instantiation results under chosen-ciphertext security for the widely implemented and standardized RSA-OAEP encryption scheme of Bellare and Rogaway (EUROCRYPT 1994) and two variants. Prior work on such instantiations either showed negative results or settled for ``passive'' security notions like IND-CPA. More precisely, recall that RSA-OAEP adds redundancy and randomness to a message before composing two rounds of an underlying Feistel transform, whose round functions are modeled as random oracles (ROs), with RSA. Our main results are: \begin{itemize} \item Either of the two oracles (while still modeling the other as a RO) can be instantiated in RSA-OAEP under IND-CCA2 using mild standard-model assumptions on the round functions and generalizations of algebraic properties of RSA shown by Barthe, Pointcheval, and Zanella-Béguelin (CCS 2012). The algebraic properties are only shown to hold at practical parameters for small encryption exponent ($e=3$), but we argue they have value for larger $e$ as well. \item Both oracles can be instantiated simultaneously for two variants of RSA-OAEP, called ``$t$-clear'' and ``$s$-clear'' RSA-OAEP. For this we use extractability-style assumptions in the sense of Canetti and Dakdouk (TCC 2010) on the round functions, as well as novel yet plausible ``XOR-type'' assumptions on RSA. While admittedly strong, such assumptions may nevertheless be necessary at this point to make positive progress. \end{itemize} In particular, our full instantiations evade impossibility results of Shoup (J.~Cryptology 2002), Kiltz and Pietrzak (EUROCRYPT 2009), and Bitansky et al. (STOC 2014). Moreover, our results for $s$-clear RSA-OAEP yield the most efficient RSA-based encryption scheme proven IND-CCA2 in the standard model (using bold assumptions on cryptographic hashing) to date.
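As a reminder of the object being instantiated, here is a minimal sketch of OAEP's two Feistel rounds (ours; a SHA-256-based MGF1 stands in for the random oracles, and the parameters are illustrative, not the paper's):

```python
import hashlib
import os

def mgf1(seed: bytes, length: int) -> bytes:
    """Mask generation function: one candidate instantiation of a round function."""
    out = b""
    for counter in range((length + 31) // 32):
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
    return out[:length]

def oaep_pad(msg: bytes, k: int) -> bytes:
    """Redundancy + randomness, then two Feistel rounds; RSA is applied afterwards."""
    h_len = 32
    assert len(msg) <= k - 2 * h_len - 2, "message too long"
    # data block: hash redundancy || zero padding || 0x01 || message
    db = hashlib.sha256(b"").digest() + b"\x00" * (k - len(msg) - 2 * h_len - 2) \
         + b"\x01" + msg
    seed = os.urandom(h_len)
    masked_db = bytes(a ^ b for a, b in zip(db, mgf1(seed, len(db))))         # round 1: G
    masked_seed = bytes(a ^ b for a, b in zip(seed, mgf1(masked_db, h_len)))  # round 2: H
    return b"\x00" + masked_seed + masked_db

assert len(oaep_pad(b"attack at dawn", k=256)) == 256  # input to a 2048-bit RSA
```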
Last updated:  2018-12-03
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe
Benny Applebaum, Prashant Nalini Vasudevan
In the Conditional Disclosure of Secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold $n$-bit inputs $x$ and $y$ respectively, wish to release a common secret $z$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate $f$. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate $f$ to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of $\Omega(n)$ or $\Omega(n^{1-\epsilon})$, providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication -- a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even $\text{AM}\cap \text{co-AM}$ -- a well known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the ``civilized'' part of the communication complexity world for which explicit lower bounds are known.
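For readers new to the model, a classic toy CDS instance (folklore, not from the paper) for the equality predicate over a prime field shows the one-message-each pattern:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime field

def cds_equality(x: int, y: int, z: int) -> int:
    """Toy one-message CDS for f(x, y) = [x == y].

    Alice (holding x and the secret z) and Bob (holding y) share randomness
    (a, b); each sends one field element. Carol, who knows x and y, recovers
    z iff x == y.
    """
    a, b = secrets.randbelow(P), secrets.randbelow(P)  # shared randomness
    msg_alice = (z + a * x + b) % P                    # Alice's single message
    msg_bob = (a * y + b) % P                          # Bob's single message
    # Carol's reconstruction: z + a*(x - y); if x != y this is uniform, since
    # the map (a, b) -> (msg_alice, msg_bob) is then a bijection for every z.
    return (msg_alice - msg_bob) % P

assert cds_equality(42, 42, z=7) == 7  # x == y: Carol learns z
```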
Last updated:  2018-12-03
Result Pattern Hiding Searchable Encryption for Conjunctive Queries
Shangqi Lai, Sikhar Patranabis, Amin Sakzad, Joseph K. Liu, Debdeep Mukhopadhyay, Ron Steinfeld, Shi-Feng Sun, Dongxi Liu, Cong Zuo
The recently proposed Oblivious Cross-Tags (OXT) protocol (CRYPTO 2013) has broken new ground in designing efficient searchable symmetric encryption (SSE) protocols with support for conjunctive keyword search in a single-writer single-reader framework. While the OXT protocol offers high performance by adopting a number of specialised data structures, it also trades off security by leaking ‘partial’ database information to the server. Recent attacks have exploited similar partial information leakage to breach database confidentiality. Consequently, it is an open problem to design SSE protocols that plug such leakages while retaining similar efficiency. In this paper, we propose a new SSE protocol, called Hidden Cross-Tags (HXT), that removes ‘Keyword Pair Result Pattern’ (KPRP) leakage for conjunctive keyword search. We avoid this leakage by adopting two additional cryptographic primitives - Hidden Vector Encryption (HVE) and probabilistic (Bloom filter) indexing - into the HXT protocol. We propose a ‘lightweight’ HVE scheme that only uses efficient symmetric-key building blocks, and entirely avoids elliptic curve-based operations. At the same time, it affords selective simulation-security against an unbounded number of secret-key queries. Adopting this efficient HVE scheme, the overall practical storage and computational overheads of HXT over OXT are relatively small (no more than 10% for two-keyword queries, and 21% for six-keyword queries), while providing a higher level of security.
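The probabilistic indexing ingredient is a standard Bloom filter; a generic sketch (ours, with illustrative parameters):

```python
import hashlib

class BloomFilter:
    """Probabilistic set index: no false negatives, tunable false-positive rate."""
    def __init__(self, m: int = 1 << 16, k: int = 7):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item: bytes):
        # k independent hash positions, derived by domain separation
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"keyword1|doc42")
assert bf.might_contain(b"keyword1|doc42")
```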
Last updated:  2018-12-03
On the Price of Proactivizing Round-Optimal Perfectly Secret Message Transmission
Ravi Kishore, Ashutosh Kumar, Chiranjeevi Vanarasa, Kannan Srinathan
In a network of $n$ nodes (modelled as a digraph), the goal of a perfectly secret message transmission (PSMT) protocol is to replicate sender's message $m$ at the receiver's end without revealing any information about $m$ to a computationally unbounded adversary that eavesdrops on any $t$ nodes. The adversary may be mobile too -- that is, it may eavesdrop on a different set of $t$ nodes in different rounds. We prove a necessary and sufficient condition on the synchronous network for the existence of $r$-round PSMT protocols, for any given $r > 0$; further, we show that round-optimality is achieved without trading off the communication complexity; specifically, our protocols have an overall communication complexity of $O(n)$ elements of a finite field to perfectly transmit one field element. Apart from optimality/scalability, two interesting implications of our results are: (a) adversarial mobility does not affect its tolerability: PSMT tolerating a static $t$-adversary is possible if and only if PSMT tolerating a mobile $t$-adversary is possible; and (b) mobility does not affect the round optimality: the fastest PSMT protocol tolerating a static $t$-adversary is not faster than the one tolerating a mobile $t$-adversary.
Last updated:  2022-10-09
Keeping Time-Release Secrets through Smart Contracts
Jianting Ning, Hung Dang, Ruomu Hou, Ee-Chien Chang
A time-release protocol enables one to send secrets into a future release time. The main technical challenge lies in incorporating timing control into the protocol, especially in the absence of a central trusted party. To leverage the regular heartbeats emitted by decentralized blockchains, in this paper we advocate an incentive-based approach that combines threshold secret sharing and blockchain-based smart contracts. In particular, the secret is split into shares and distributed to a set of incentivized participants, with the payment settlement contractualized and enforced by the autonomous smart contract. We highlight that such an approach needs to achieve two goals: to reward honest participants who release their shares honestly after the release date (the “carrots”), and to punish premature leakage of the shares (the “sticks”). While it is not difficult to contractualize a carrot mechanism for punctual releases, it is not clear how to realise the stick. In the first place, it is not clear how to identify premature leakage. Our main idea is to encourage public vigilantism by incorporating an informer-bounty mechanism that pays a bounty to any informer who can provide evidence of the leakage. The possibility of being punished constitutes a deterrent to the misbehaviour of premature releases. Since various entities, including the owner, the participants and the informers, might act maliciously for their own interests, there are many security requirements. In particular, to prevent a malicious owner from acting as the informer, the protocol must ensure that the owner does not know the distributed shares, which is counter-intuitive and not addressed by known techniques. We investigate various attack scenarios, and propose a secure and efficient protocol based on a combination of cryptographic primitives. Our technique could be of independent interest to other applications of threshold secret sharing in deterring sharing.
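The threshold-sharing layer the protocol builds on is vanilla Shamir sharing; a self-contained sketch (ours; the incentive and informer-bounty mechanisms sit on top of this):

```python
import secrets

P = 2**127 - 1  # prime field (2^127 - 1 is a Mersenne prime)

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```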
Last updated:  2018-12-03
Identity-Concealed Authenticated Encryption and Key Exchange
Yunlei Zhao
Identity concealment and zero-round-trip-time (0-RTT) connections are two current research focuses in the design and analysis of secure transport protocols, like TLS 1.3 and Google's QUIC, in the client-server setting. In this work, we introduce a new primitive for identity-concealed authenticated encryption in the public-key setting, referred to as higncryption, which can be viewed as a novel monolithic integration of public-key encryption, digital signature, and identity concealment. We present the security definitional framework for higncryption, and a conceptually simple (yet carefully designed) protocol construction. As a new primitive, higncryption can have many applications. In this work, we focus on its applications to 0-RTT authentication, showing that higncryption is well suited to and compatible with QUIC and OPTLS, and on its applications to identity-concealed authenticated key exchange (CAKE) and unilateral CAKE (UCAKE). In particular, we make a systematic study of applying and incorporating higncryption into TLS. Of independent interest is a new concise security definitional framework for CAKE and UCAKE proposed in this work, which unifies the traditional BR and (post-ID) frameworks, enjoys composability, and ensures very strong security guarantees. Along the way, we make a systematic comparative study with related protocols and mechanisms, including Zheng's signcryption, one-pass HMQV, QUIC, TLS 1.3 and OPTLS, most of which are widely standardized or in use.
Last updated:  2018-12-03
Can you sign a quantum state?
Gorjan Alagic, Tommaso Gagliardoni, Christian Majenz
Cryptography with quantum states exhibits a number of surprising and counterintuitive features. In a 2002 work, Barnum et al. argued informally that these strange features should imply that digital signatures for quantum states are impossible (Barnum et al., FOCS 2002). In this work, we perform the first rigorous study of the problem of signing quantum states. We first show that the intuition of Barnum et al. was correct, by proving an impossibility result which rules out even very weak forms of signing quantum states. Essentially, we show that any non-trivial combination of correctness and security requirements results in negligible security. This rules out all quantum signature schemes except those which simply measure the state and then sign the outcome using a classical scheme. In other words, only classical signature schemes exist. We then show a positive result: it is possible to sign quantum states, provided that they are also encrypted with the public key of the intended recipient. Following classical nomenclature, we call this notion quantum signcryption. Classically, signcryption is only interesting if it provides superior efficiency to simultaneous encryption and signing. Our results imply that, quantumly, it is far more interesting: by the laws of quantum mechanics, it is the only signing method available. We develop security definitions for quantum signcryption, ranging from a simple one-time two-user setting, to a chosen-ciphertext-secure many-time multi-user setting. We also give secure constructions based on post-quantum public-key primitives. Along the way, we show that a natural hybrid method of combining classical and quantum schemes can be used to "upgrade" a secure classical scheme to the fully-quantum setting, in a wide range of cryptographic settings including signcryption, authenticated encryption, and chosen-ciphertext security.
Last updated:  2018-12-03
More on sliding right
Joachim Breitner
This text can be thought of as an “external appendix” to the paper Sliding right into disaster: Left-to-right sliding windows leak by Daniel J. Bernstein, Joachim Breitner, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal and Yuval Yarom [1, 2]. It goes into the details of: an alternative way to find the knowable bits of the secret exponent, which is complete and can (in rare corner cases) find more bits than the rewrite rules in Section 3.1 of [1]; an algorithm to calculate the collision entropy $H$ that is used in Theorem 3 of [1]; and a proof of Theorem 3.
Last updated:  2019-01-31
On the Concrete Security of Goldreich’s Pseudorandom Generator
Geoffroy Couteau, Aurélien Dupin, Pierrick Méaux, Mélissa Rossi, Yann Rotella
Local pseudorandom generators expand a short random string into a long pseudorandom string, such that each output bit depends on a constant number $d$ of input bits. Due to its extreme efficiency features, this intriguing primitive enjoys a wide variety of applications in cryptography and complexity. In the polynomial regime, where the seed is of size $n$ and the output of size $n^s$ for $s > 1$, the only known solution, commonly known as Goldreich's PRG, proceeds by applying a simple $d$-ary predicate to public random size-$d$ subsets of the bits of the seed. While the security of Goldreich's PRG has been thoroughly investigated, with a variety of results deriving provable security guarantees against classes of attacks in some parameter regimes and necessary criteria to be satisfied by the underlying predicate, little is known about its concrete security and efficiency. Motivated by its numerous theoretical applications and the hope of getting practical instantiations for some of them, we initiate a study of the concrete security of Goldreich's PRG and evaluate its resistance to cryptanalytic attacks. Along the way, we develop a new guess-and-determine-style attack and identify new criteria which refine existing criteria and capture the security guarantees of candidate local PRGs in a more fine-grained way.
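Structurally, the PRG is only a few lines; a sketch (ours) using the well-known 5-ary XOR-AND predicate as the candidate:

```python
import random

def goldreich_prg(seed_bits, m, subset_seed=0):
    """Goldreich-style local PRG: each output bit applies a fixed d-ary predicate
    to a public random d-subset of the seed. Here d = 5 with the common XOR-AND
    candidate P5(x) = x1 ^ x2 ^ x3 ^ (x4 & x5)."""
    rng = random.Random(subset_seed)  # the subsets are public, so a fixed seed is fine
    n, x = len(seed_bits), seed_bits
    out = []
    for _ in range(m):
        i1, i2, i3, i4, i5 = rng.sample(range(n), 5)  # public subset choice
        out.append(x[i1] ^ x[i2] ^ x[i3] ^ (x[i4] & x[i5]))
    return out

seed = [random.getrandbits(1) for _ in range(256)]
stretch = goldreich_prg(seed, m=4096)  # polynomial stretch: 4096 = 256^1.5
```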
Last updated:  2023-02-14
Adaptively Secure MPC with Sublinear Communication Complexity
Ran Cohen, abhi shelat, Daniel Wichs
A central challenge in the study of MPC is to balance security guarantees, hardness assumptions, and the resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. (2) Under the same assumptions, we construct a "Bob-optimized" 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice's input size. We prove impossibility of "Alice-optimized" protocols. (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS'18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a little sacrifice to the concrete round complexity and asymptotic communication complexity.
Last updated:  2018-12-03
Algebraic normal form of a bent function: properties and restrictions
Natalia Tokareva
Maximally nonlinear Boolean functions in $n$ variables, where $n$ is even, are called bent functions. There are several ways to represent Boolean functions; one of the most useful is the algebraic normal form (ANF). What can we say about the ANF of a bent function? We collect all known facts, and several new ones, related to the ANF of a bent function. A new problem on bent functions is stated and studied: is it true that the linear, quadratic, cubic, etc. part of the ANF of a bent function can be arbitrary? The case of the linear part was well studied before. In this paper we prove that the quadratic part of a bent function can be arbitrary as well.
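For concreteness, a small self-contained check (ours) of the two representations involved: the ANF via the binary Moebius transform, and bentness via the Walsh spectrum, on the canonical quadratic bent function in four variables:

```python
def anf(truth_table):
    """ANF coefficients via the binary Moebius transform."""
    a = list(truth_table)
    n = len(a).bit_length() - 1
    for i in range(n):
        for x in range(len(a)):
            if x >> i & 1:
                a[x] ^= a[x ^ (1 << i)]
    return a  # a[m] = 1 iff the monomial with variable support m appears in the ANF

def is_bent(truth_table):
    """Bent iff every Walsh coefficient has absolute value 2^(n/2)."""
    n = len(truth_table).bit_length() - 1
    for w in range(len(truth_table)):
        s = sum((-1) ** (truth_table[x] ^ bin(x & w).count("1") % 2)
                for x in range(len(truth_table)))
        if abs(s) != 2 ** (n // 2):
            return False
    return True

# f(x1,x2,x3,x4) = x1*x2 ^ x3*x4, with x1 as the top bit of the index
tt = [(x >> 3 & 1) & (x >> 2 & 1) ^ (x >> 1 & 1) & (x & 1) for x in range(16)]
assert is_bent(tt)
print([m for m, c in enumerate(anf(tt)) if c])  # -> [3, 12]: monomials x3*x4 and x1*x2
```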
Last updated:  2018-12-04
Improved upper bound on root number of linearized polynomials and its application to nonlinearity estimation of Boolean functions
Sihem Mesnager, Kwang Ho Kim, Myong Song Jo
Determining the dimension of the null space of a given linearized polynomial is one of the vital problems in finite field theory, with relevance to the design of modern symmetric cryptosystems. However, the known general theory is far from giving the exact dimension when applied to a specific linearized polynomial. The first contribution of this paper is a better general method to obtain a more precise upper bound on the root number of any given linearized polynomial. We anticipate that this result will serve as a useful tool in many research branches of finite fields and cryptography. Indeed, we apply it to obtain tighter estimations of the lower bounds on the second-order nonlinearities of general cubic Boolean functions, which has been an active research problem during the past decade, with many examples showing great improvements. Furthermore, this paper shows that by studying the distribution of radicals of derivatives of a given Boolean function one can obtain a better lower bound on the second-order nonlinearity, through an example of the monomial Boolean function $g_{\mu}=Tr(\mu x^{2^{2r}+2^r+1})$ over any finite field $GF(2^n)$.
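As a brute-force baseline for such bounds (ours, not the paper's method), the exact root number of a linearized polynomial $L(x)=\sum_i a_i x^{2^i}$ over $GF(2^n)$ is $2^{\dim\ker L}$ for the induced $GF(2)$-linear map, computable by linear algebra at small $n$; the modulus below is illustrative:

```python
def gf_mul(a, b, n=8, mod=0x11B):
    """Multiply in GF(2^n); 0x11B is the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> n:
            a ^= mod
        b >>= 1
    return r

def frob(x, i):
    """x -> x^(2^i), the i-fold Frobenius (repeated squaring)."""
    for _ in range(i):
        x = gf_mul(x, x)
    return x

def kernel_dim(coeffs, n=8):
    """GF(2)-dimension of the null space of L(x) = sum_i coeffs[i] * x^(2^i)."""
    # image of each basis vector e_j = (1 << j) under the linear map L
    cols = [0] * n
    for j in range(n):
        for i, a in enumerate(coeffs):
            cols[j] ^= gf_mul(a, frob(1 << j, i))
    # Gaussian elimination over GF(2) to find the rank of the columns
    rank, rows = 0, cols[:]
    for bit in reversed(range(n)):
        pivot = next((r for r in rows if r >> bit & 1), 0)
        if pivot:
            rows = [r ^ pivot if r >> bit & 1 else r for r in rows]
            rank += 1
    return n - rank  # the root number of L is 2**kernel_dim

# L(x) = x^2 + x has kernel GF(2) = {0, 1}: dimension 1, hence 2 roots
assert kernel_dim([1, 1]) == 1
```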
Last updated:  2019-02-20
Adversarially Robust Property Preserving Hash Functions
Elette Boyle, Rio LaVigne, Vinod Vaikuntanathan
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing. Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios. We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property which checks if x and y are “too far” or “too close”. Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity check codes.
Last updated:  2018-12-03
Special Soundness Revisited
Douglas Wikström
We generalize and abstract the problem of extracting a witness from a prover of a special-sound protocol into a combinatorial problem induced by a sequence of matroids and a predicate, and we present a parametrized algorithm for solving this problem. The parametrization provides a tight tradeoff between the running time and the extraction error of the algorithm, which allows optimizing the parameters to minimize either the soundness error, for interactive proofs, or the extraction time, for proofs of knowledge. In contrast to previous work, we bound the distribution of the running time and not only the expected running time: tail bounds give a tighter analysis when the extractor is applied recursively, and show that the running time is concentrated.
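The simplest instance of the extraction problem being generalized is the classic two-transcript extractor for Schnorr's protocol; a toy sketch (ours, with insecure illustrative parameters):

```python
import secrets

# Toy Schnorr group: g = 2 has prime order q = 11 in Z_23^* (NOT secure sizes)
p, q, g = 23, 11, 2

w = secrets.randbelow(q)         # prover's witness: discrete log of h
h = pow(g, w, p)

# Two accepting transcripts with the same commitment and distinct challenges,
# exactly what a 2-special-sound extractor is handed:
r = secrets.randbelow(q)
commitment = pow(g, r, p)
c1, c2 = 7, 3
z1, z2 = (r + c1 * w) % q, (r + c2 * w) % q

# Extraction: w = (z1 - z2) / (c1 - c2) mod q
extracted = (z1 - z2) * pow(c1 - c2, q - 2, q) % q
assert extracted == w
```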
Last updated:  2020-12-22
Towards Round-Optimal Secure Multiparty Computations: Multikey FHE without a CRS
Eunkyung Kim, Hyang-Sook Lee, Jeongeun Park
Multikey fully homomorphic encryption (MFHE) allows homomorphic operations between ciphertexts encrypted under different keys. In applications to secure multiparty computation (MPC) protocols, MFHE can be more advantageous than plain fully homomorphic encryption (FHE), since users do not need to agree on a common public key before the computation. At EUROCRYPT 2016, Mukherjee and Wichs constructed a secure MPC protocol in only two rounds via MFHE, relying on a common random/reference string (CRS) in key generation. Brakerski et al. subsequently replaced the CRS with a distributed setup for its calculation, obtaining a four-round secure MPC protocol. Thus, recent improvements in the round complexity of MPC protocols have been made using MFHE. In this paper, we go further and obtain round-efficient secure MPC protocols. The underlying MFHE schemes in previous works still involve a common value, the CRS, which seems to weaken the power of MFHE to let users generate their own keys independently. We resolve this issue by constructing an MFHE scheme without a CRS based on the LWE assumption, and we thereby obtain a three-round MPC protocol secure against semi-malicious adversaries.
Last updated:  2018-12-03
Universally Composable Oblivious Transfer Protocol based on the RLWE Assumption
Pedro Branco, Jintai Ding, Manuel Goulão, Paulo Mateus
We use an RLWE-based key exchange scheme to construct a simple and efficient post-quantum oblivious transfer based on the Ring Learning with Errors assumption. We prove that our protocol is secure in the Universal Composability framework against static malicious adversaries in the random oracle model. The main idea of the protocol is that the receiver and the sender interact using the RLWE-based key exchange in such a way that the sender computes two keys, one of them shared with the receiver. It is infeasible for the sender to know which is the shared key and for the receiver to get information about the other one. The sender encrypts each message with each key using a symmetric-key encryption scheme and the receiver can only decrypt one of the ciphertexts. The protocol is extremely efficient in terms of computational and communication complexity, and thus a strong candidate for post-quantum applications.
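The "two keys, one shared" pattern the abstract describes can be previewed through its classic discrete-log analogue (a Chou-Orlandi-style sketch, ours; the paper's protocol instead uses RLWE key exchange, and these parameters are illustrative and insecure):

```python
import hashlib
import secrets

p = 2**127 - 1  # prime modulus (toy choice; NOT a secure instantiation)
g = 3

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

# Sender
a = secrets.randbelow(p - 1)
A = pow(g, a, p)
# Receiver with choice bit c: the message B hides c from the sender
c = 1
b = secrets.randbelow(p - 1)
B = pow(g, b, p) if c == 0 else A * pow(g, b, p) % p
# Sender derives two keys and cannot tell which one the receiver shares;
# it then encrypts message 0 under k0 and message 1 under k1.
k0 = H(pow(B, a, p))
k1 = H(pow(B * pow(A, p - 2, p) % p, a, p))
# Receiver derives exactly one of the two keys
k_c = H(pow(A, b, p))
assert k_c == (k1 if c else k0)
```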
Last updated:  2019-08-19
Leakage Resilient Secret Sharing and Applications
Akshayaram Srinivasan, Prashant Nalini Vasudevan
A secret sharing scheme allows a dealer to share a secret among a set of $n$ parties such that any authorized subset of the parties can recover the secret, while any unauthorized subset of the parties learns no information about the secret. A local leakage-resilient secret sharing scheme (introduced in independent works by (Goyal and Kumar, STOC 18) and (Benhamouda, Degwekar, Ishai and Rabin, Crypto 18)) additionally requires the secrecy to hold against every unauthorized set of parties even if they obtain some bounded local leakage from every other share. The leakage is said to be local if it is computed independently for each share. So far, the only known constructions of local leakage resilient secret sharing schemes are for threshold access structures for very low ($O(1)$) or very high ($n -o(\log n)$) thresholds. In this work, we give a compiler that takes a secret sharing scheme for any monotone access structure and produces a local leakage resilient secret sharing scheme for the same access structure, with only a constant-factor blow-up in the sizes of the shares. Furthermore, the resultant secret sharing scheme has optimal leakage-resilience rate i.e., the ratio between the leakage tolerated and the size of each share can be made arbitrarily close to $1$. Using this secret sharing scheme as the main building block, we obtain the following results: 1. Rate Preserving Non-Malleable Secret Sharing: We give a compiler that takes any secret sharing scheme for a 4-monotone access structure with rate $R$ and converts it into a non-malleable secret sharing scheme for the same access structure with rate $\Omega(R)$. The prior such non-zero rate construction (Badrinarayanan and Srinivasan, 18) only achieves a rate of $\Theta(R/{t_{\max}\log^2 n})$, where $t_{\max}$ is the maximum size of any minimal set in the access structure. As a special case, for any threshold $t \geq 4$ and an arbitrary $n \geq t$, we get the first constant rate construction of $t$-out-of-$n$ non-malleable secret sharing. 2. Leakage-Tolerant Multiparty Computation for General Interaction Pattern: For any function, we give a reduction from constructing leakage-tolerant secure multi-party computation protocols obeying any interaction pattern to constructing a secure (and not necessarily leakage-tolerant) protocol for a related function obeying the star interaction pattern. This improves upon the result of (Halevi et al., ITCS 2016), who constructed a protocol that is secure in a leak-free environment.
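Why local leakage resilience is nontrivial can be seen from a folklore counterexample (our illustration, not from the paper): for additive sharing over $\mathbb{Z}_{2^k}$, one locally leaked bit per share already reveals the secret's parity.

```python
import secrets

# Additive sharing over Z_{2^k}: shares sum to the secret modulo 2^k
k, n = 64, 10
secret = 123456789
shares = [secrets.randbelow(2**k) for _ in range(n - 1)]
shares.append((secret - sum(shares)) % 2**k)
assert sum(shares) % 2**k == secret

# Local leakage: each party independently leaks the LSB of its own share.
leaked_lsbs = [s & 1 for s in shares]
# Carries never reach bit 0, so the XOR of the leaked bits is the secret's parity.
assert sum(leaked_lsbs) % 2 == secret & 1
```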
Last updated:  2018-12-03
Dfinity Consensus, Explored
Ittai Abraham, Dahlia Malkhi, Kartik Nayak, Ling Ren
We explore a Byzantine consensus protocol called Dfinity Consensus, recently published in a technical report. Dfinity Consensus solves synchronous state machine replication among $n = 2f + 1$ replicas with up to $f$ Byzantine faults. We provide a succinct explanation of the core mechanism of Dfinity Consensus to the best of our understanding, and we prove the safety and liveness of the protocol specification we provide. Our complexity analysis of the protocol reveals the following: the protocol achieves expected $O(f \times \Delta)$ latency against an adaptive adversary (where $\Delta$ is the synchronous bound on message delay), and expected $O(\Delta)$ latency against a mildly adaptive adversary. In either case, the communication complexity is unbounded. We then explain how the protocol can be modified to reduce the communication complexity to $O(n^3)$ in the former case, and to $O(n^2)$ in the latter.
Last updated:  2019-03-22
Improvements of Blockchain’s Block Broadcasting:An Incentive Approach
Qingzhao Zhang, Yijun Leng, Lei Fan
In order to achieve a truthful distributed ledger, homogeneous nodes in blockchain systems propagate messages on a P2P network so that they can synchronize the status of the ledger. Current blockchain systems target better scalability and higher throughput to support divergent applications, which leads to heavier message propagation, especially the broadcasting of blocks. The heavier traffic on the P2P network causes longer block-synchronization latency, which may damage system consistency and expose the system to potential attacks. Even worse, when heavy communication consumes a lot of network capacity, nodes in the P2P network may decline to relay blocks in order to save their bandwidth, which may damage the efficiency of network synchronization. In order to alleviate these problems, we propose an improved block broadcasting protocol which combines block data sharding with financial incentive mechanisms. In the proposed scheme, a block is sliced into pieces in order to keep the network traffic smooth and speed up content delivery; any node which relays a piece of the block is rewarded financially. By applying data sharding, our proposed scheme speeds up block broadcasting and thereby shortens the synchronization time by 90\%, as shown in our simulation experiments. In addition, we carry out a game-theoretic analysis to prove that nodes are efficiently incentivized to relay blocks honestly and actively.
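The data-sharding half of the proposal is mechanically simple; a minimal sketch (ours, with an illustrative piece size), separate from the incentive contract:

```python
def shard(block: bytes, piece_size: int):
    """Slice a block into indexed pieces so peers can relay different pieces in parallel."""
    return [(i, block[i:i + piece_size]) for i in range(0, len(block), piece_size)]

def reassemble(pieces):
    """Rebuild the block from pieces received in any order."""
    return b"".join(data for _, data in sorted(pieces))

block = bytes(range(256)) * 4096                  # ~1 MB toy block
pieces = shard(block, piece_size=64 * 1024)       # 16 pieces of 64 KiB
assert reassemble(reversed(pieces)) == block      # order-independent reassembly
```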