All papers in 2016 (1195 results)
MASCAT: Stopping Microarchitectural Attacks Before Execution
Microarchitectural attacks have gained popularity in recent years since they use only standard resources, e.g. memory and cache access timing, which are available to applications even at the lowest privilege levels. Further, microarchitectural attacks have proven successful on shared cloud instances across VMs, on smartphones with sandboxing, and on numerous embedded platforms.
Given the rise of malicious code in app stores and in online repositories, it becomes essential to scan applications for such stealthy attacks. We present a static code analysis tool, MASCAT, capable of scanning for ever-evolving microarchitectural attacks. Our proposed tool MASCAT can be used by app store service providers to perform large-scale, fully automated analysis of applications. The initial MASCAT suite is built to cover popular cache/DRAM access attacks and Rowhammer. Further, our tool is easily extensible to cover newer attack vectors as they emerge.
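As a hedged illustration of what such static scanning can look like in its simplest form, the following toy script flags binaries whose disassembly mixes instructions commonly seen in cache-attack and Rowhammer code (clflush, rdtsc, fences, non-temporal stores). The mnemonic list, the objdump-based workflow and the threshold are our own assumptions for the sketch; this is not the MASCAT tool or its ruleset.
\begin{verbatim}
# Toy static scan for instruction mnemonics often used by cache-timing and
# Rowhammer attack code.  Illustration only: the mnemonic list, threshold and
# objdump workflow are assumptions for this sketch, not MASCAT's ruleset.
import subprocess
import sys
from collections import Counter

SUSPICIOUS = {"clflush", "clflushopt", "rdtsc", "rdtscp", "mfence", "movnti"}
THRESHOLD = 3  # flag a binary that mixes several distinct suspicious mnemonics

def scan(binary_path):
    """Disassemble with objdump and count suspicious mnemonics."""
    asm = subprocess.run(["objdump", "-d", binary_path],
                         capture_output=True, text=True, check=True).stdout
    counts = Counter()
    for line in asm.splitlines():
        parts = line.split("\t")     # address : raw bytes : mnemonic + operands
        if len(parts) < 3 or not parts[2].strip():
            continue
        mnemonic = parts[2].split()[0]
        if mnemonic in SUSPICIOUS:
            counts[mnemonic] += 1
    return counts

if __name__ == "__main__":
    hits = scan(sys.argv[1])
    print("suspicious mnemonics:", dict(hits))
    if len(hits) >= THRESHOLD:
        print("flagged: mixes timing, flush and fence instructions")
\end{verbatim}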
Constant-Time Callees with Variable-Time Callers
Side-channel attacks are a serious threat to security-critical software. To mitigate remote timing and cache-timing attacks, many ubiquitous cryptography software libraries feature constant-time implementations of cryptographic primitives. In this work, we disclose a vulnerability in OpenSSL 1.0.1u that recovers ECDSA private keys for the standardized elliptic curve P-256 despite the library featuring both constant-time curve operations and modular inversion with microarchitecture attack mitigations. Exploiting this defect, we target the errant modular inversion code path with a cache-timing and improved performance-degradation attack, recovering the inversion state sequence. We propose a new approach of extracting a variable number of nonce bits from these sequences, and improve upon the best theoretical result to recover private keys in a lattice attack with as few as 50 signatures and corresponding traces. As far as we are aware, this is the first timing attack against OpenSSL ECDSA that does not target scalar multiplication, and the first side-channel attack on cryptosystems leveraging P-256 constant-time scalar multiplication. Furthermore, we extend our attack to the TLS and SSH protocols, both linked to OpenSSL for P-256 ECDSA signing.
Efficient Encryption from Random Quasi-Cyclic Codes
We propose a framework for constructing efficient code-based encryption schemes from codes that do not hide any structure in their public matrix. The framework is in the spirit of the schemes first proposed by Alekhnovich in 2003 and based on the difficulty of decoding random linear codes from random errors of low weight. We depart somewhat from Alekhnovich's approach and propose an encryption scheme based on the difficulty of decoding random quasi-cyclic codes.
We propose two new cryptosystems instantiated within our framework: the Hamming Quasi-Cyclic cryptosystem (HQC), based on the Hamming metric, and the Rank Quasi-Cyclic cryptosystem (RQC), based on the rank metric. We give a security proof, which reduces the IND-CPA security of our systems to a decisional version of the well-known problem of decoding random families of quasi-cyclic codes for the Hamming and rank metrics (the respective QCSD and RQCSD problems). We also provide an analysis of the decryption failure probability of our scheme in the Hamming metric case; for the rank metric there is no decryption failure. Our schemes benefit from a very fast decryption algorithm together with small key sizes of only a few thousand bits. The cryptosystems are very efficient for low encryption rates and are very well suited to key exchange and authentication. Asymptotically, for $\lambda$ the security parameter, the public key sizes are respectively in $\mathcal{O}({\lambda}^2)$ for HQC and in $\mathcal{O}(\lambda^{\frac{4}{3}})$ for RQC.
Practical parameters compare well with systems based on ring-LPN or the recent MDPC system.
The Secret Processor Will Go to The Ball: Benchmark Insider-Proof Encrypted Computing
`Encrypted computing' is an approach to the prevention of insider attacks by the privileged operator against the unprivileged user on a computation system. It requires a processor that works natively on encrypted data in user mode, and the security barrier that protects the user is hardware-based encryption, not access protocols. We report on progress and practical experience with our superscalar RISC class prototype processor for encrypted computing and the supporting software infrastructure. It has been shown formally impossible for operator mode to read (or write to order) the plaintext form of data originating from or being operated on in the user mode of this class of processor, given that the encryption is independently secure. This paper aims to alert the secure hardware community that encrypted computing is possibly practical, not only theoretically plausible. The standard Dhrystone benchmark reported here for AES-128 encrypted computation shows performance equivalent to a 433MHz classic Pentium at the prototype's 1GHz base clock.
Non-Malleable Codes with Split-State Refresh
Non-malleable codes for the split-state model allow one to encode a message into two parts such that arbitrary independent tampering on the parts either completely destroys the content or leaves the message untouched.
If the code is also leakage resilient, it allows limited independent leakage from the two parts. We propose a model where the two parts can be refreshed independently. We give an abstract framework for building codes for this model, instantiate the construction under the external Diffie-Hellman assumption, and give applications of such split-state refreshing. An advantage of our new model is that it allows arbitrarily many tampering attacks and arbitrarily large leakage over the lifetime of the system, as long as each part of the code is occasionally refreshed. Our model also tolerates refreshes that are occasionally leaky or tampered with.
On the Security of Practical and Complete Homomorphic Encrypted Computation
Security with respect to the operator as an adversary is considered for processors supporting unbounded general purpose homomorphic encrypted computation. An efficient machine code architecture is defined for those platforms and it is proved that user programs expressed in it are cryptographically obfuscated, guaranteeing privacy though they, their traces and (encrypted) data are visible to the operator.
It is proved that encrypted user data cannot be deciphered by the operator, nor may programs be altered to give an intended result. A compiler is defined and it is proved that any recompilation produces uniformly distributed random variations in runtime data, supporting cryptographic obfuscation.
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Yao's garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds?
We answer this question in the affirmative. We define a new type of encryption, called {\sf functionally equivocal encryption (FEE)}, and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function.
Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within.
On the Bit Security of Elliptic Curve Diffie--Hellman
This paper gives the first bit security result for the elliptic curve Diffie--Hellman key exchange protocol for elliptic curves defined over prime fields. About $5/6$ of the most significant bits of the $x$-coordinate of the Diffie--Hellman key are as hard to compute as the entire key. A similar result can be derived for the $5/6$ lower bits. The paper also generalizes and improves the result for elliptic curves over extension fields, showing that computing one component (in the ground field) of the Diffie--Hellman key is as hard as computing the entire key.
Farfalle: parallel permutation-based cryptography
In this paper, we introduce Farfalle, a new permutation-based construction for building a pseudorandom function (PRF). The PRF takes as input a key and a sequence of arbitrary-length data strings, and returns an arbitrary-length output. It has a compression layer and an expansion layer, each involving the parallel application of a permutation. The construction also makes use of LFSR-like rolling functions for generating input and output masks and for updating the inner state during expansion. On top of the inherent parallelism, Farfalle instances can be very efficient because the construction imposes fewer requirements on the underlying primitive than, e.g., the duplex construction or typical block cipher modes. Farfalle has an incremental property: compression of common prefixes of inputs can be factored out. Thanks to its input-output characteristics, Farfalle is very versatile. We specify simple modes on top of it for authentication, encryption and authenticated encryption, as well as a wide block cipher mode. As a showcase, we present Kravatte, a very efficient instance of Farfalle based on Keccak-p[1600] permutations and formulate concrete security claims against classical and quantum adversaries. The permutations in the compression and expansion layers of Kravatte have only 6 rounds apiece and the rolling functions are lightweight. We provide a rationale for our choices and report on software performance.
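To make the compress-then-expand shape concrete, here is a structural toy in Python: a key-derived mask is rolled and XORed into each input block, the blocks are absorbed in parallel by XORing permutation outputs, and output blocks are squeezed from a rolled state. The stand-in permutation (a SHA-256 call) and the byte-rotation rolling function are our own simplifications; this sketch is not the Farfalle definition and certainly not Kravatte.
\begin{verbatim}
# Structural toy of a parallel compress-then-expand PRF: mask each input block
# with a rolled key-derived mask, absorb the blocks in parallel by XORing
# permutation outputs, then squeeze output blocks from a rolled state.
# A hash stands in for the permutation and a byte rotation for the rolling
# function -- this illustrates the shape only, not Farfalle or Kravatte.
import hashlib

B = 32  # toy block size in bytes

def perm(block):
    # stand-in for a cryptographic permutation (SHA-256 is not actually one)
    return hashlib.sha256(block).digest()

def roll(block, i):
    # toy rolling function: rotate the block by i bytes
    i %= B
    return block[i:] + block[:i]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def prf(key, blocks):
    k = perm(key.ljust(B, b"\x00"))            # key-derived input/output mask
    acc = bytes(B)
    for i, m in enumerate(blocks):             # compression layer: parallelizable
        acc = xor(acc, perm(xor(m.ljust(B, b"\x00"), roll(k, i))))
    state = perm(acc)                          # middle permutation call
    def squeeze(j):                            # expansion layer: parallelizable
        return xor(perm(roll(state, j)), k)
    return squeeze

out = prf(b"toy key", [b"block one", b"block two"])
print(out(0).hex(), out(1).hex())
\end{verbatim}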
Computing Optimal Ate Pairings on Elliptic Curves with Embedding Degree $9,15$ and $27$
Much attention has been given to efficient computation of pairings on elliptic curves with even embedding degree since the advent of pairing-based cryptography. The few existing works in the case of odd embedding degrees require some improvements.
This paper considers the computation of optimal ate pairings on elliptic curves of embedding degrees $k=9, 15 \mbox{ and } 27$ which have twists of order three. Mainly, we provide a detailed arithmetic and cost estimation of operations in the towers of the corresponding extension fields. A good selection of parameters enables us to improve the theoretical cost for the Miller step and the final exponentiation using the lattice-based method, compared to the few previous works that exist in these cases. In particular, for $k=15$ and $k=27$ we obtain an improvement, in terms of operations in the base field, of up to $25\%$ and $29\%$ respectively in the computation of the final exponentiation.
We also find that elliptic curves with embedding degree $k=15$ give faster results than BN$12$ curves at the $128$-bit security level.
We provide a MAGMA implementation in each case to ensure the correctness of the formulas used in this work.
On the Complexity of Breaking Pseudoentropy
Pseudoentropy has found many important applications in cryptography and complexity theory.
In this paper we focus on a foundational problem that has not been investigated so far, namely: by how much does pseudoentropy (the amount seen by computationally bounded attackers) differ from its information-theoretic counterpart (seen by unbounded observers), given certain limits on the attacker's computational power?
We provide the following answer for HILL pseudoentropy, which exhibits a \emph{threshold behavior} around the size exponential in the entropy amount:
\begin{itemize}
\item If the attacker size ($s$) and advantage ($\epsilon$) satisfy $s \gg 2^k\epsilon^{-2}$, where $k$ is the claimed amount of pseudoentropy, then the pseudoentropy boils down to the information-theoretic smooth entropy.
\item If $s \ll 2^k\epsilon^2$, then pseudoentropy could be arbitrarily bigger than the information-theoretic smooth entropy.
\end{itemize}
Besides answering the posed question, we show an elegant application of our result to complexity theory, namely that it implies the classical result on the existence of functions hard to approximate (due to Pippenger). In our approach we utilize non-constructive techniques: the duality of linear programming and the probabilistic method.
Last updated: 2016-12-31
A Digital Signature Scheme Based On Supersingular Isogeny Problem
In this paper we propose a digital signature scheme based on the supersingular isogeny problem. We design a signature scheme using the Fiat-Shamir transform. The scheme uses a modified version of the zero-knowledge proof proposed by De Feo, Jao, and Plût. Unlike the original version, our zero-knowledge proof uses only one curve as a commitment. A digital signature scheme using a similar idea was proposed recently by Galbraith et al., but our proposal uses a different method for computing isogenies. We take advantage of our proposed version of the zero-knowledge proof to speed up the signature generation process. We also present a method for compressing signatures.
Bitcoin Private Key Locked Transactions
Bitcoin smart contracts allow the development of new protocols on top of Bitcoin itself. This usually involves the definition of complex scripts, far beyond the requirement of a single signature. In this paper we introduce the concept of private key locked transactions, a novel type of transaction that allows the atomic verification of a given private key (belonging to an asymmetric key pair) during script execution.
Some Results on the Known Classes of Quadratic APN Functions
In this paper, we determine the Walsh spectra of three classes of quadratic APN functions and we prove that the class of quadratic trinomial APN functions constructed by Göloğlu is affine equivalent to Gold functions.
Public Key Encryption with Equality Test in the Standard Model
Public key encryption with equality test (PKEET) is a cryptosystem that allows a tester who has trapdoors issued by one or more users $U_i$ to perform equality tests on ciphertexts encrypted using public key(s) of $U_i$. Since this feature has a lot of practical applications, including search on encrypted data, several PKEET schemes have been proposed so far. However, to the best of our knowledge, all the existing proposals are proven secure only under the hardness of number-theoretic problems and the random oracle heuristic.
In this paper, we show that this primitive can be achieved not only generically from well-established other primitives but also even without relying on the random oracle heuristic. More precisely, our generic construction for PKEET employs a two-level hierarchical identity-based encryption scheme, which is selectively secure against chosen plaintext attacks, a strongly unforgeable one-time signature scheme and a cryptographic hash function. Our generic approach toward PKEET has several advantages over all the previous works: it directly yields the first standard-model construction and also directly implies the first lattice-based construction. Finally, we show how to extend our approach to the identity-based setting.
New Impossible Differential Search Tool from Design and Cryptanalysis Aspects
In this paper, a new tool for searching for impossible differentials against symmetric-key primitives is presented. Compared to previous tools, our tool can detect any contradiction between input and output differences, and it can take into account the properties inside the S-box when its size is small, e.g. 4 bits. In addition, several techniques are proposed to evaluate 8-bit S-boxes. With this tool, the number of rounds of impossible differentials is improved from the previous best results by 1 round for Midori128, Lilliput, and Minalpher. The tool also finds new impossible differentials for ARIA and MIBS. We manually verify the impossibility of the searched results, which reveals new structural properties of those designs.
Our tool can be implemented by only slightly modifying the previous differential search tool using Mixed Integer Linear Programming (MILP), while the previous tools needed to be implemented independently of the differential search tools. This motivates us to discuss the usage of our tool particularly in the design process. With this tool, the maximum number of rounds of impossible differentials can be proven under reasonable assumptions, and the tool is applied to various concrete designs.
Last updated: 2019-04-16
How to Meet Big Data When Private Set Intersection Realizes Constant Communication Complexity
Electronic information is increasingly often shared among unreliable entities. In this context, one interesting problem involves two parties that secretly want to determine the intersection of their respective private data sets, while neither of them wishes to disclose its whole set to the other. One can adopt a Private Set Intersection (PSI) protocol to address this problem while preserving the associated security and privacy requirements. This paper presents the first PSI protocol that achieves constant ($p(\kappa)$) communication complexity with linear computation overhead and is fast even for the case of large input sets, where $p(\kappa)$ is a polynomial in the security parameter $\kappa$. The scheme is proven secure in the standard model against semi-honest parties. We combine a somewhere statistically binding (SSB) hash function with indistinguishability obfuscation (iO) and a Bloom filter to construct our PSI protocol.
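Of the three ingredients named above, the Bloom filter is the simplest to illustrate; the minimal sketch below (our own toy parameters and hashing, not the paper's instantiation) shows the approximate set-membership structure a party can encode its input set into. How the SSB hash and iO are wrapped around it is beyond a short example.
\begin{verbatim}
# Minimal Bloom filter: the set-membership structure one party can encode its
# input set into.  Parameters and hashing choices are toy assumptions of this
# sketch, not those of the PSI protocol above.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=8192, k_hashes=5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # no false negatives; false-positive rate depends on m, k and load
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
for x in [b"alice@example.com", b"bob@example.com"]:
    bf.add(x)
print(b"alice@example.com" in bf, b"carol@example.com" in bf)  # True False (w.h.p.)
\end{verbatim}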
Updatable Functional Encryption
Functional encryption (FE) allows an authority to issue tokens associated with various functions, allowing the holder of some token for function f to learn only f(D) from a ciphertext that encrypts D. The standard approach is to model f as a circuit, which yields inefficient evaluations over large inputs. Here, we propose a new primitive that we call updatable functional encryption (UFE), where instead of circuits we deal with RAM programs, which are closer to how programs are expressed in von Neumann architecture. We impose strict efficiency constraints in that the run-time of a token P' on ciphertext CT is proportional to the run-time of its clear-form counterpart (program P on memory D) up to a polylogarithmic factor in the size of D, and we envision tokens that are capable of updating the ciphertext, over which other tokens can be subsequently executed. We define a security notion for our primitive and propose a candidate construction from obfuscation, which serves as a starting point towards the realization of other schemes and contributes to the study of how to compute RAM programs over public-key encrypted data.
Implementing and Proving the TLS 1.3 Record Layer
The record layer is the main bridge between TLS applications and internal sub-protocols. Its core functionality is an elaborate authenticated encryption: streams of messages for each sub-protocol (handshake, alert, and application data) are fragmented, multiplexed, and encrypted with optional padding to hide their lengths. Conversely, the sub-protocols may provide fresh keys or signal stream termination to the record layer.
Compared to prior versions, TLS 1.3 discards obsolete schemes in favor of a common construction for Authenticated Encryption with Associated Data (AEAD), instantiated with algorithms such as AES-GCM and ChaCha20-Poly1305. It differs from TLS 1.2 in its use of padding, associated data and nonces. It encrypts the content-type used to multiplex between sub-protocols. New protocol features such as early application data (0-RTT and 0.5-RTT) and late handshake messages require additional keys and a more general model of stateful encryption.
We build and verify a reference implementation of the TLS record layer and its cryptographic algorithms in F*, a dependently typed language where security and functional guarantees can be specified as pre- and post-conditions. We reduce the high-level security of the record layer to cryptographic assumptions on its ciphers. Each step in the reduction is verified by typing an F* module; when the step incurs a security loss, this module precisely captures the corresponding game-based security assumption.
We first verify the functional correctness and injectivity properties of our implementations of one-time MAC algorithms (Poly1305 and GHASH) and provide a generic proof of their security given these properties. We show the security of AEAD given any secure one-time MAC and PRF. We extend AEAD, first to stream encryption, then to length-hiding, multiplexed encryption. Finally, we build a security model of the record layer against an adversary that controls the TLS sub-protocols. We compute concrete security bounds for the AES-GCM and ChaCha20-Poly1305 ciphersuites, and derive recommended limits on sent data before re-keying. Combining our functional correctness and security results, we obtain the first verified implementations of the main TLS 1.3 record ciphers.
We plug our implementation of the record layer into an existing TLS library and confirm that the combination interoperates with Chrome and Firefox, and thus that experimentally the new TLS record layer (as described in RFCs and cryptographic standards) is provably secure.
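For readers who want to see the record-protection step in miniature, the sketch below follows the RFC 8446 description of TLS 1.3 AEAD: the inner plaintext is content || content-type || padding, the per-record nonce is the static IV XORed with the padded 64-bit sequence number, and the 5-byte record header is the associated data. It uses the Python cryptography package's ChaCha20-Poly1305 rather than the verified F* code discussed above, and the key and IV are random placeholders instead of handshake-derived secrets.
\begin{verbatim}
# Sketch of TLS 1.3 record protection (RFC 8446, section 5): inner plaintext =
# content || content-type || zero padding, per-record nonce = static IV XOR
# padded 64-bit sequence number, and the 5-byte record header is the AAD.
# Uses the `cryptography` package, not the verified F* implementation; key and
# IV are random placeholders rather than handshake-derived secrets.
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
import os

key = os.urandom(32)
static_iv = os.urandom(12)

def protect(seq, content, content_type=23, pad=0):
    inner = content + bytes([content_type]) + b"\x00" * pad   # TLSInnerPlaintext
    nonce = bytes(a ^ b for a, b in zip(static_iv, seq.to_bytes(12, "big")))
    length = len(inner) + 16                                   # ciphertext + tag
    header = bytes([23, 3, 3]) + length.to_bytes(2, "big")     # opaque type, legacy version
    return header + ChaCha20Poly1305(key).encrypt(nonce, inner, header)

record = protect(0, b"hello over TLS 1.3")
print(len(record), record[:5].hex())
\end{verbatim}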
Efficient Slide Attacks
The slide attack, presented in 1999 by Biryukov and Wagner, has already become a classical tool in cryptanalysis of block ciphers. While it was used to mount practical attacks on a few cryptosystems, its practical applicability is limited, as typically, its time complexity is lower bounded by $2^n$ (where $n$ is the block size). There are only a few known scenarios in which the slide attack performs better than the $2^n$ bound.
In this paper we concentrate on {\it efficient} slide attacks, whose time complexity is less than $2^n$. We present a number of new attacks that apply in scenarios in which previously known slide attacks are either inapplicable, or require at least $2^n$ operations. In particular, we present the first known slide attack on a Feistel construction with a {\it 3-round} self-similarity, and an attack with practical time complexity of $2^{40}$ on a 128-bit key variant of the GOST block cipher with {\it unknown} S-boxes. The best previously known attack on the same variant, with {\it known} S-boxes (by Courtois, 2014), has time complexity of $2^{91}$.
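To recall the mechanics these attacks build on: for a cipher $E_k$ that is the $r$-fold iteration of a single keyed round $f_k$, a slid pair $(P, C), (P', C')$ satisfies $P' = f_k(P)$ and $C' = f_k(C)$, and each candidate pair suggests key material that the second equation then filters. The toy below demonstrates this on an invented 16-bit cipher with $f_k(x) = S(x \oplus k)$; it is only a classroom-style sketch of the classical slide attack, not GOST and not any of the constructions analyzed in the paper.
\begin{verbatim}
# Classic slide attack on a toy 16-bit cipher E_k = f_k^r with
# f_k(x) = S(x XOR k), where S is a public (unkeyed) permutation.  A slid pair
# (P,C),(P',C') satisfies P' = f_k(P) and C' = f_k(C); the first equation gives
# k = S^{-1}(P') XOR P and the second filters wrong guesses.  The cipher is
# invented for this demo and is not GOST or any cipher attacked in the paper.
import random

SBOX4 = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box
INV4 = [SBOX4.index(i) for i in range(16)]

def S(x):            # public 16-bit permutation: nibble S-boxes, then rotate left 3
    y = 0
    for i in range(4):
        y |= SBOX4[(x >> (4 * i)) & 0xF] << (4 * i)
    return ((y << 3) | (y >> 13)) & 0xFFFF

def S_inv(x):
    y = ((x >> 3) | (x << 13)) & 0xFFFF
    z = 0
    for i in range(4):
        z |= INV4[(y >> (4 * i)) & 0xF] << (4 * i)
    return z

def encrypt(x, k, rounds=32):
    for _ in range(rounds):
        x = S(x ^ k)
    return x

random.seed(1)
k_secret = random.randrange(1 << 16)
pts = random.sample(range(1 << 16), 1024)          # known plaintexts
pairs = [(p, encrypt(p, k_secret)) for p in pts]

candidates = set()
for p1, c1 in pairs:                               # search for slid pairs
    for p2, c2 in pairs:
        k = S_inv(p2) ^ p1                         # from p2 = S(p1 ^ k)
        if S(c1 ^ k) == c2:                        # slid pairs also satisfy this
            candidates.add(k)

# discard false positives with a few extra known plaintext/ciphertext pairs
good = [k for k in candidates if all(encrypt(p, k) == c for p, c in pairs[:4])]
print("recovered:", [hex(k) for k in good], "secret:", hex(k_secret))
\end{verbatim}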
Leakage of Signal function with reused keys in RLWE key exchange
In this paper, we show that the signal function used in Ring-Learning with Errors (RLWE) key exchange can leak information that allows recovery of the secret $s$ of a reused public key $p=as+2e$. This work is motivated by an attack proposed in \cite{cryptoeprint:2016:085} and gives an insight into how public keys reused long term in RLWE key exchange protocols can be exploited. This work specifically focuses on the attack on the KE protocol in \cite{Ding}, initiating multiple sessions with the honest party and analyzing the output of the signal function. Experiments have confirmed the success of our attack in recovering the secret.
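For readers unfamiliar with the ingredient being exploited: in Ding-style RLWE key exchange, each coefficient of the approximately shared value is accompanied by a one-bit signal indicating whether it lies in the inner interval around zero, which lets both parties round to the same bit. The coefficient-wise sketch below follows one common convention for the signal and mod-2 extraction; exact boundary handling varies between papers, and the key-reuse attack itself is not reproduced.
\begin{verbatim}
# Coefficient-wise sketch of the signal ('hint') function in Ding-style RLWE
# key exchange, following one common convention: the signal says whether a
# coefficient lies in the inner interval around 0, and the mod-2 extractor
# turns two close values (differing by an even error, cf. p = a*s + 2*e) into
# the same bit.  Boundary conventions vary between papers, and the key-reuse
# attack described above is not reproduced here.
q = 12289  # a typical RLWE modulus

def centered(v):
    """Representative of v mod q in [-(q//2), q//2]."""
    v %= q
    return v - q if v > q // 2 else v

def signal(v):
    """0 if v lies in the inner interval [-q/4, q/4], else 1."""
    return 0 if -(q // 4) <= centered(v) <= q // 4 else 1

def extract(v, w):
    """Shared bit from v given the other party's signal w."""
    return centered(v + w * (q - 1) // 2) % 2

# the two parties hold close values differing by a small even error 2e
v, two_e = 5000, 6
w = signal(v)                                  # signal computed and sent by one party
print(extract(v, w), extract(v + two_e, w))    # both recover the same bit
\end{verbatim}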
On the Security Notions for Homomorphic Signatures
Homomorphic signature schemes allow anyone to perform computation on signed data in such a way that the correctness of the computation’s results is publicly certified. In this work we analyze the security notions for this powerful primitive considered in previous work, with a special focus on adaptive security. Motivated by the complications of existing security models in the adaptive setting, we consider a simpler and (at the same time) stronger security definition inspired by that proposed by Gennaro and Wichs (ASIACRYPT’13) for homomorphic MACs. In addition to strength and simplicity, this definition has the advantage of enabling the adoption of homomorphic signatures in dynamic data outsourcing scenarios, such as delegation of computation on data streams. Then, since no existing homomorphic signature satisfies this stronger notion, our main technical contributions are general compilers which turn a homomorphic signature scheme secure under a weak definition into one secure under the new stronger notion. Our compilers are totally generic with respect to the underlying scheme. Moreover, they preserve two important properties of homomorphic signatures: context-hiding (i.e. signatures on a computation’s output do not reveal information about the input) and efficient verification (i.e. verifying a signature against a program P can be made faster, in an amortized, asymptotic sense, than recomputing P from scratch).
Revisiting Full-PRF-Secure PMAC and Using It for Beyond-Birthday Authenticated Encryption
This paper proposes an authenticated encryption scheme, called SIVx, that preserves beyond-birthday-bound (BBB) security even in the case of unlimited nonce reuse. For this purpose, we propose a single-key BBB-secure message authentication code with 2n-bit outputs, called PMAC2x, based on a tweakable block cipher. PMAC2x is motivated by PMAC_TBC1k by Naito; we revisit its security proof and point out an invalid assumption. As a remedy, we provide an alternative proof for our construction, and derive a corrected bound for PMAC_TBC1k.
Construction of Lightweight MDS Matrices over the Matrix Polynomial Residue Ring
Firstly, by analyzing non-singular matrices with few XORs in the matrix polynomial residue ring, we present an efficient method for building lightweight maximum distance separable (MDS) matrices with elements chosen from a fixed matrix polynomial residue ring. Whereas the constructions of previous methods usually cost several days or several weeks, our new method costs only several minutes. With this method, many different types of lightweight MDS matrices can be constructed quickly, which is significant for research on lightweight MDS matrices; surprisingly, this approach did not receive much attention previously. We give 5 matrix templates which are suitable for constructing lightweight MDS matrices.
Secondly, we investigate the existence of involutory MDS matrices for several matrix templates. In addition, we present an efficient necessary and sufficient condition for judging whether a Hadamard matrix is involutory. With this condition, an extremely efficient algorithm for constructing lightweight Hadamard involutory MDS matrices is given. Through experiments, we obtain many new Hadamard involutory MDS matrices with far fewer XORs than the previously optimal results. Thirdly, we discuss, in theory, the reasons why our methods work so efficiently. Finally, we prove a series of propositions about the parity of the XOR counts of the element matrices and of the entire matrix.
On the Provable Security of the Tweakable Even-Mansour Cipher Against Multi-Key and Related-Key Attacks
Cogliati et al. introduced the tweakable Even-Mansour cipher constructed from a single permutation and an almost-XOR-universal (AXU) family of hash functions with tweak and key schedule. Most previous papers considered the security of the (iterated) tweakable Even-Mansour cipher in the single-key setting. In this paper, we focus on the security of the tweakable Even-Mansour cipher in the multi-key and related-key settings. We prove that the tweakable Even-Mansour cipher with related-key-AXU hash functions is secure against multi-key and related-key attacks, and derive tight bounds using the H-coefficients technique. Our work is of high practical relevance because of rekeying requirements and the inevitability of related keys in real-world implementations.
A Salad of Block Ciphers
This book is a survey on the state of the art in block cipher design and analysis.
It is work in progress, and it has been for the good part of the last three years -- sadly, for various reasons no significant change has been made during the last twelve months.
However, it is also in a self-contained, usable, and relatively polished state, and for this reason I have decided to release this \textit{snapshot} to the public as a service to the cryptographic community, both in order to obtain feedback, and also as a means to give something back to the community from which I have learned much.
At some point I will produce a final version -- whatever a ``final version'' means in the constantly evolving field of block cipher design -- and I will publish it. In the meantime I hope the material contained here will be useful to other people.
Impossible-Differential and Boomerang Cryptanalysis of Round-Reduced Kiasu-BC
Kiasu-BC is a tweakable block cipher proposed by Jean et al. at ASIACRYPT 2014 alongside their TWEAKEY framework. The cipher is almost identical to the AES-128 except for the tweak, which renders it an attractive primitive for various modes of operation and applications requiring tweakable block ciphers. Therefore, studying how the additional tweak input affects security compared to that of the AES is highly valuable to gain trust in future instantiations.
This work proposes impossible-differential and boomerang attacks on eight rounds of Kiasu-BC in the single-key model, using the core idea that the tweak input allows the construction of local collisions. While our results do not threaten the security of the full-round version, they help concretize the security of Kiasu-BC in the single-key model.
LWE from Non-commutative Group Rings
The Ring Learning-With-Errors (LWE) problem, whose security is based on hard ideal lattice problems, has proven to be a promising primitive with diverse applications in cryptography. There are, however, recent discoveries of faster algorithms for the principal ideal SVP problem, and attempts to generalize the attack to non-principal ideals. In this work, we study the LWE problem on group rings, and build cryptographic schemes based on this new primitive. One can regard LWE on cyclotomic integers as a special case when the underlying group is cyclic, while our proposal utilizes non-commutative groups, which eliminates the weakness associated with principal ideal lattices. In particular, we show how to build public key encryption schemes from dihedral group rings, which maintain the efficiency of ring-LWE and improve its security.
Last updated: 2017-01-17
Generic Zero-Knowledge and Multivariate Quadratic Systems
Zero-knowledge proofs are a core building block for a broad range of cryptographic protocols. This paper introduces a generic zero-knowledge proof system capable of proving the correct computation of any circuit. Our protocol draws on recent advancements in multiparty computation and its security relies only on the underlying commitment scheme. Furthermore, we optimize this protocol for use with multivariate quadratic systems of polynomials, leading to provably secure signatures from multivariate quadratic systems, with keys that scale linearly and signatures that scale quadratically with the security parameter.
Mobile Commerce: Secure Multi-party Computation & Financial Cryptography
The basic objective of this work is to construct an efficient and secure mechanism for mobile commerce applying the concepts of financial cryptography and secure multi-party computation. The mechanism (MCM) is defined by various types of elements: a group of agents or players, actions, a finite set of inputs of each agent, a finite set of outcomes as defined by an output function, a set of objective functions and constraints, a payment function, a strategy profile, a dominant strategy and a revelation principle. The mechanism adopts a set of intelligent moves as dominant strategies: (a) flexible use of a hybrid payment system which supports cash, e-payment and m-payment, (b) secure multi-party computation to ensure information security and privacy, and (c) intelligent analytics to assess and mitigate possible threats to the m-commerce service. The mechanism supports three different types of transaction processing protocols (P1, P2 and P3) and calls a cryptographic protocol (Pc). The cryptographic protocol performs a set of functions sequentially, such as authentication, authorization, correct identification, privacy verification and audit of correctness, fairness, rationality, accountability and transparency of secure multi-party computation on each m-transaction. The basic building blocks of the cryptographic protocol are signcryption, proofs of knowledge, commitments and secret sharing. This work also presents a complexity analysis of the mechanism in terms of computational cost, communication cost, security and business intelligence.
Keywords: Secure multi-party computation, Financial cryptography, Mobile commerce mechanism, Threat analytics, Digital economy
Group key exchange protocols withstanding ephemeral-key reveals
When a group key exchange protocol is executed, the session key is typically extracted from two types of secrets: long-term keys (for authentication) and freshly generated (often random) values. The leakage of the latter, so-called ephemeral keys, has been extensively analyzed in the 2-party case, yet very few works are concerned with it in the group setting. We provide a generic {group key exchange} construction that is strongly secure, meaning that the attacker is allowed to learn both long-term and ephemeral keys (but not both from the same participant, as this would trivially disclose the session key). Our design can be seen as a compiler, in the sense that it builds on a 2-party key exchange protocol which is strongly secure and transforms it into a strongly secure group key exchange protocol by adding only one extra round of communication. When applied to an existing 2-party protocol from Bergsma et al., the result is a 2-round group key exchange protocol which is strongly secure in the standard model, thus yielding the first construction with this property.
Efficient Transparent Redactable Signatures with a Single Signature Invocation
A redactable signature scheme is one that allows the original signature to be used, usually along with some additional data, to verify certain carefully specified changes to the original document that was signed, namely the removal or redaction of subdocuments. For redactable signatures, the term "transparency" has been used to describe a scheme that hides the number and locations of redacted subdocuments. We present here two efficient transparent redactable signature schemes, which are the first such schemes in the literature that are based solely on tools of symmetric cryptography, along with a single application of an ordinary digital signature.
As with several previous schemes for redactable signatures, we sign a sequence of randomized commitments that depend on the contents of the subdocuments of the document to be signed. In order to hide their number and location, we randomize their order, and mix them with a sequence of "dummy nodes" that are indistinguishable from commitment values. Our first scheme uses a data structure of size quadratic in the number of subdocuments, encoding all the precedence relations between pairs of subdocuments. By embedding these precedence relations in a smaller family of graphs, our second scheme is more efficient, with expected cost linear in the number of subdocuments in the document to be signed. We introduce a quantified version of the transparency property, precisely describing the uncertainty about the number of redacted subdocuments that is guaranteed by the two schemes.
We prove that our schemes are secure, i.e. unforgeable, private, and transparent, based on the security of collision-free hash functions, pseudorandom generators, and digital signature schemes. While providing such strong security, our scheme is also efficient, in terms of both computation and communication.
Attacking FHE-based applications by software fault injections
The security of fully homomorphic encryption is often studied at the primitive level, and a lot of questions remain open when the cryptographer needs to choose between incompatible options, like IND-CCA1 security versus circular security or search-to-decision reduction.
The aim of this report is to emphasize the well known (and often underestimated) fact that the ability to compute every function, which is the most desired feature of homomorphic encryption schemes, is also their main weakness. We show that it can be exploited to perform very realistic attacks in the context of secure homomorphic computations in the cloud. In order to break a fully homomorphic system, the cloud provider who runs the computation will not target the primitive but the overall system. The attacks we describe are a combination of safe-error attacks (well known in the smart card domain) and reaction attacks; they are easy to perform and they can reveal one secret key bit per query. Furthermore, as homomorphic primitives get improved, and become T times faster with K times smaller keys, these attacks become KT times more practical. Our purpose is to highlight the fact that, while a semantically-secure model is in general enough to design homomorphic primitives, additional protections need to be adopted at a system level to secure cloud applications. We do not attack a specific construction but the entire idea of homomorphic encryption, by pointing out all the possible targets of this attack (encrypted data, bootstrapping keys, trans-ciphering keys, etc.). We also propose some possible countermeasures (or better precautions) in order to prevent the loss of information.
Using Fully Homomorphic Encryption for Statistical Analysis of Categorical, Ordinal and Numerical Data
In recent years, there has been a growing trend towards the outsourcing of computational tasks with the development of cloud services. Gentry's pioneering work on fully homomorphic encryption (FHE) and successive works have opened a new vista for secure and practical cloud computing. In this paper, we consider performing statistical analysis on encrypted data. To improve the efficiency of the computations, we take advantage of batched computation based on the Chinese Remainder Theorem. We propose two building blocks that work with FHE: a novel batch greater-than primitive, and a matrix primitive for encrypted matrices. With these building blocks, we construct secure procedures and protocols for different types of statistics, including the histogram (count) and contingency table (with cell suppression) for categorical data; the k-percentile for ordinal data; and principal component analysis and linear regression for numerical data. To demonstrate the effectiveness of our methods, we ran experiments on five real datasets. For instance, we can compute a contingency table with more than 50 cells from 4000 data records in just 5 minutes, and we can train a linear regression model on more than 40k data records with dimension as high as 6 within 15 minutes. We show that FHE is not as slow as commonly believed and that it becomes feasible to perform a broad range of statistical analysis on thousands of encrypted data records.
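The batching idea can be previewed with an integer analogue: the Chinese Remainder Theorem packs several small values into one residue, and a single addition or multiplication on the packed value then acts slot-wise. The sketch below shows only this packing arithmetic with toy moduli of our own choosing; the batching used in the paper operates in a polynomial plaintext ring and, of course, on ciphertexts.
\begin{verbatim}
# Integer-CRT analogue of plaintext-slot batching: pack several small values
# into one integer modulo M = m_1*...*m_t; one addition or multiplication
# mod M then acts on all slots at once.  Real FHE batching (as used above)
# works in a polynomial ring and on ciphertexts; this sketch shows only the
# underlying packing arithmetic with toy moduli.
from math import prod

MODULI = [257, 263, 269, 271]            # pairwise coprime slot moduli (toy choice)
M = prod(MODULI)

def pack(values):
    assert len(values) == len(MODULI)
    x = 0
    for v, m in zip(values, MODULI):
        n = M // m
        x += v * n * pow(n, -1, m)       # standard CRT reconstruction
    return x % M

def unpack(x):
    return [x % m for m in MODULI]

a, b = pack([1, 2, 3, 4]), pack([10, 20, 30, 40])
print(unpack((a + b) % M))               # [11, 22, 33, 44]  slot-wise addition
print(unpack((a * b) % M))               # [10, 40, 90, 160] slot-wise multiplication
\end{verbatim}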
Meet-in-the-Middle Attacks on Classes of Contracting and Expanding Feistel Constructions
We show generic attacks on unbalanced Feistel ciphers based on the meet-in-the-middle technique. We analyze two general classes of unbalanced Feistel structures, namely contracting Feistels and expanding Feistels. In both cases, we consider the practical scenario where the round functions are keyless and known to the adversary. In the case of contracting Feistels with 4 branches, we show attacks on 16 rounds when the key length k (in bits) is as large as the block length n (in bits), and up to 24 rounds when k = 2n. In the case of expanding Feistels, we consider two scenarios: one, where different nonlinear functions without particular structures are used in the round function, and a more practical one, where a single nonlinear function is used but different linear functions are introduced in the state update. In the former case, we propose generic attacks on 13 rounds when k = n, and up to 21 rounds when k = 2n. In the latter case, 16 rounds can be attacked for k = n, and 24 rounds for k = 2n.
Impossible Differential Attack on Simpira v2
Simpira v2 is a family of cryptographic permutations proposed at ASIACRYPT 2016 which can be used to construct high-throughput block ciphers using the Even-Mansour construction, permutation-based hashing and wide-block authenticated encryption. In this paper, we give a 9-round impossible differential of Simpira-4, which is the first impossible differential covering 9 rounds.
In order to get efficient key recovery attacks on its block cipher mode (EM construction with Simpira-4), we use some 6/7-round shrunken impossible differentials. Based on eight different 6-round impossible differentials, we propose a series of 7-round key recovery attacks on the block cipher mode; each 6-round impossible differential helps to recover 32 bits of the 512-bit master key, and in total half of the master key bits are recovered. The attacks need $2^{57}$ chosen plaintexts and $2^{57}$ 7-round encryptions.
Furthermore, based on ten 7-round impossible differentials, we add one round on the top or at the bottom to mount ten 8-round key recovery attacks on the block cipher mode, which recover the full key space (512 bits) with a data complexity of $2^{170}$ chosen plaintexts and a time complexity of $2^{170}$ 8-round encryptions. These are the first attacks on round-reduced Simpira v2 and do not threaten the EM mode with the full 15-round Simpira-4.
Meet-in-the-Middle Attack on QARMA Block Cipher
QARMA is a recently published lightweight tweakable block cipher, which has been used by the ARMv8 architecture to support a software protection feature. In this paper, using a meet-in-the-middle (MITM) approach, we give the first distinguisher for the QARMA block cipher. It is made up of the \emph{Pseudo-Reflector} construction with two forward rounds and three backward rounds. By adding two rounds on the top and three rounds at the bottom of the distinguisher, together with the idea of the differential enumeration technique and the key-dependent sieve technique, we achieve a 10-round (of 16-round) key recovery attack with memory complexity of $2^{116}$ 192-bit states, data complexity of $2^{53}$ chosen plaintexts and time complexity of $2^{70.1}$ encryption units. Furthermore, we use the same distinguisher to attack QARMA-128, which also covers 10 (of 24) round functions and the \emph{Pseudo-Reflector} construction. The memory complexity is $2^{232}$ 384-bit states, the data complexity is $2^{105}$ chosen plaintexts and the time complexity is $2^{141.7}$ encryption units. These are the first attacks on QARMA and do not threaten the security of full-round QARMA.
SPECTRE: A Fast and Scalable Cryptocurrency Protocol
A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto's blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes.
We present SPECTRE, a new protocol for the consensus core of cryptocurrencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50\% of the computational power (up until the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip-time in the network).
Key to SPECTRE's achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system.
Our framework formalizes this weaker set of requirements for a cryptocurrency's distributed ledger.
We then provide a formal proof that SPECTRE satisfies these requirements.
Activate Later Certificates for V2X -- Combining ITS efficiency with privacy
We specify Issue First Activate Later (IFAL). This is an ETSI-type V2X Public Key Infrastructure based on short-lived pseudonymous certificates without Certificate Revocation Lists. IFAL certificates are valid in the future but can only be used together with periodically provided activation codes. IFAL supports controlled de-pseudonymization, enabling provisioning to stop for misbehaving vehicles.
IFAL allows for flexible policies and trade-offs between three essential V2X properties: trust, privacy and usability. IFAL activation codes are small and can be sent in an SMS, through roadside equipment or even broadcast. Like the Butterfly scheme, IFAL uses key derivation with one base private/public key pair. However, in IFAL the security module can be simple, as it can be kept oblivious of the key derivation.
NewHope without reconciliation
In this paper we introduce NewHope-Simple, a variant of the NewHope Ring-LWE-based key exchange that uses a straightforward transformation from Ring-LWE encryption to a passively secure KEM (or key-exchange scheme). The main advantage of NewHope-Simple over NewHope is simplicity. In particular, it avoids the error-reconciliation mechanism originally proposed by Ding. The explanation of this method, combined with other tricks, like unbiasing the key following Peikert's tweak and using the quantizer $D_4$ to extract one key bit from multiple coefficients, takes more than three pages in the NewHope paper.
The price for this simplicity is small: one of the exchanged messages increases in size by $6.25\%$, from $2048$ bytes to $2176$ bytes. The security of NewHope-Simple is the same as the security of NewHope; the performance is very similar.
Scripting smart contracts for distributed ledger technology
We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.
Comparative Study of Various Approximations to the Covariance Matrix in Template Attacks
Template attacks have been shown to be among the strongest attacks when it comes to side-channel attacks. An essential ingredient there is to calculate the inverse of a covariance matrix. In this paper we make a comparative study of the effectiveness of some 24 different variants of template attacks based on different approximations of this covariance matrix. As an example, we have chosen a recent smart card where the plaintext, ciphertext, as well as the key leak strongly. Some of the more commonly chosen approximations to the covariance matrix turn out to be not very effective, whilst others are.
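As a point of reference for the variants being compared, the sketch below implements the textbook pooled-covariance template attack on synthetic traces: one mean vector per key-dependent class, a single pooled covariance, and classification by Gaussian log-likelihood using the inverse covariance mentioned above. The data, dimensions and noise levels are invented for the example and are unrelated to the smart card measured in the paper.
\begin{verbatim}
# Classical template attack with a pooled covariance matrix, on synthetic
# traces: build one mean vector per key-dependent class and a single pooled
# covariance, then classify a fresh trace by Gaussian log-likelihood.  The
# traces and leakage model are synthetic toys; the paper compares many
# variants of the covariance estimate itself.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_profiling, n_samples = 9, 200, 20     # e.g. Hamming-weight classes

# synthetic profiling traces: class-dependent means plus Gaussian noise
means_true = rng.normal(0, 1, (n_classes, n_samples))
profiling = {c: means_true[c] + rng.normal(0, 0.5, (n_profiling, n_samples))
             for c in range(n_classes)}

# templates: per-class mean and one pooled covariance over all classes
mu = np.array([profiling[c].mean(axis=0) for c in range(n_classes)])
centered = np.vstack([profiling[c] - mu[c] for c in range(n_classes)])
cov_pooled = np.cov(centered, rowvar=False)
cov_inv = np.linalg.inv(cov_pooled)                # the inverse discussed above

def classify(trace):
    # maximize the multivariate-Gaussian log-likelihood (shared covariance,
    # so the constant and log-determinant terms cancel between classes)
    d = trace - mu                                  # shape (n_classes, n_samples)
    scores = -0.5 * np.einsum("ci,ij,cj->c", d, cov_inv, d)
    return int(np.argmax(scores))

attack_trace = means_true[5] + rng.normal(0, 0.5, n_samples)
print("predicted class:", classify(attack_trace))   # expected: 5
\end{verbatim}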
Identification Protocols and Signature Schemes Based on Supersingular Isogeny Problems
We present signature schemes whose security relies on computational assumptions relating to isogeny graphs of supersingular elliptic curves. We give two schemes, both of them based on interactive identification protocols. The first identification protocol is due to De Feo, Jao and Plût. The second one, which is the main contribution of the paper, makes novel use of an algorithm of Kohel-Lauter-Petit-Tignol for the quaternion version of the $\ell$-isogeny problem, for which we provide a more complete description and analysis, and is based on a more standard and potentially stronger computational problem. Both identification protocols lead to signatures that are existentially unforgeable under chosen message attacks in the random oracle model using the well-known Fiat-Shamir transform, and in the quantum random oracle model using another transform due to Unruh. A version of the first signature scheme was independently published by Yoo, Azarderakhsh, Jalali, Jao and Soukharev. This is the full version of a paper published at ASIACRYPT 2017.
Attacks against search Poly-LWE
The Ring-LWE (RLWE) problem is expected to be a computationally hard problem even with quantum algorithms. The Poly-LWE (PLWE) problem is closely related to the RLWE problem, and in practice serves as a security foundation for various recently proposed cryptosystems. In 2014, Eisentraeger et al. proposed attacks against the decision variant of the PLWE problem (and in 2015, Elias et al. precisely described and extended their attacks to apply to that of the RLWE problem). Their attacks against the decision-PLWE problem succeed with sufficiently high probability in polynomial time under certain assumptions, one of which is that the defining polynomial of the PLWE instance splits completely over the ground field.
In this paper, we present polynomial-time attacks against the search-variant of the PLWE problem. Our attacks are viewed as search-case variants of the previous attacks, but can deal with more general cases where the defining polynomial of the PLWE problem does not split completely over the ground field.
Leak Me If You Can: Does TVLA Reveal Success Rate?
Test Vector Leakage Assessment Methodology (TVLA) has emerged as a popular side-channel testing methodology, as it can detect the presence of side-channel information in leakage measurements. However, in its current form, TVLA results cannot be used to quantify side-channel vulnerability. In this paper, we extend TVLA testing beyond its current scope. Precisely, we derive a concrete relationship between TVLA and the signal-to-noise ratio (SNR). Linking the two metrics allows direct computation of the success rate (SR) from TVLA, and thus unifies these popular side-channel detection and evaluation metrics. This, to our knowledge, is the first work in this direction. An end-to-end methodology is proposed, which can be easily automated, to derive the attack SR starting from TVLA testing. The proposed methodology can take a leakage model as input and report the attack SR, and it is validated on simulated and practical measurements. Unsurprisingly, the methodology performs better when the leakage model is accurately profiled. The methodology, although still limited to first-order leakage, is also further extended to the (first-order) multivariate setting.
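The two quantities being linked are easy to state: TVLA is Welch's t-statistic between a fixed-input and a random-input trace set, and the SNR is the variance of the per-class means over the mean within-class noise variance. The sketch below computes both on synthetic univariate traces of our own making; the paper's contribution, the analytical mapping from these quantities to the success rate, is not reproduced here.
\begin{verbatim}
# Compute the two metrics the paper relates, on synthetic univariate traces:
# the TVLA statistic (Welch's t-test between fixed and random trace groups)
# and the SNR (variance of per-class means over mean within-class variance).
# The traces and leakage model are invented; the paper's analytical
# TVLA -> SNR -> success-rate mapping is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n, leak, noise = 5000, 0.8, 2.0

iv_fixed = np.full(n, 7)            # fixed group always processes the same intermediate
iv_random = rng.integers(0, 9, n)   # random group: toy intermediate values 0..8
t_fixed = leak * iv_fixed + rng.normal(0, noise, n)
t_random = leak * iv_random + rng.normal(0, noise, n)

# Welch's t-statistic (TVLA); |t| > 4.5 is the usual detection threshold
m1, m2 = t_fixed.mean(), t_random.mean()
v1, v2 = t_fixed.var(ddof=1), t_random.var(ddof=1)
tvla = (m1 - m2) / np.sqrt(v1 / n + v2 / n)

# SNR on the random group, using the intermediate value as the class label
classes = [t_random[iv_random == c] for c in range(9)]
signal_var = np.var([c.mean() for c in classes])
noise_var = np.mean([c.var(ddof=1) for c in classes])
snr = signal_var / noise_var

print(f"TVLA t = {tvla:.2f}, SNR = {snr:.3f}")
\end{verbatim}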
A Novel Multi-factor ID-based Designated Verifier Signature scheme
In a classic digital signature scheme, the global community is capable of verifying a signature. In a designated verifier scheme (DVS), only the designated verifier has this capability. In a classic DVS scheme the signer themselves ``designates'' the entity that will have the capability of verifying their signature. In a pure identity-based signature scheme a Trusted Authority is introduced, and is responsible for issuing secret signing keys to all participants. In our proposed scheme it is this TA, not the signer, that designates the verifier, and to this end the TA issues the designated verifier with its own secret. Finally we propose a variation that supports non-repudiation, plus a hardware-free multi-factor signature capability.
Simple Homomorphisms of Cocks IBE and Applications
The Cocks Identity Based Encryption (IBE) scheme, proposed in 2001 by Clifford Cocks, has been the standard for Quadratic Residue-based IBE. It had long been believed that this IBE did not have enough structure to have homomorphic properties. In 2013, Clear, Hughes, and Tewari (Africacrypt 2013) created a Cocks scheme derivative where they viewed ciphertexts as polynomials modulo a quadratic. While the scheme was homomorphic, it required sending twice as much information per ciphertext as the original Cocks scheme. A recent result by Joye (PKC 2016) used complex algebraic structures to demonstrate that Cocks IBE, on its own, is additively homomorphic.
In this work, we build upon the results from CHT and Joye. We take the simple intuition from CHT, that ciphertexts can be seen as polynomials, but also demonstrate that we only need to send as much data as in the original Cocks scheme. This perspective leads to better intuition as to why these ciphertexts are homomorphic and to explicit efficient algorithms for computing this homomorphic addition.
We believe that our approach will facilitate other extensions of Cocks IBE. As an example, we exhibit a two-way proxy re-encryption algorithm, which arises as a simple consequence of the structure we propose. That is, given a re-encryption key, we can securely convert a ciphertext under one key to a ciphertext under the other key and vice-versa (hence two-way).
Exploiting Safe Error based Leakage of RFID Authentication Protocol using Hardware Trojan Horse
Radio-Frequency Identification (RFID) tags are used for several applications requiring authentication mechanisms, which if subverted can lead to dire consequences. Many of these devices are based on low-cost integrated circuits which are designed in off-shore fabrication facilities, thus raising concerns about their trustworthiness. Recently, a lightweight entity authentication protocol called LCMQ was proposed, which is based on the Learning Parity with Noise, Circulant Matrix, and Multivariate Quadratic problems. This protocol was proven to be secure against man-in-the-middle attacks and ciphertext-only attacks. In this paper, we show that in the standard setting, although the authentication uses two $m$-bit keys, $\mathbf{K_1}$ and $\mathbf{K_2}$, knowledge of only $\mathbf{K_2}$ is sufficient to forge the authentication. Based on this observation, we design a stealthy malicious modification to the circuitry based on the idea of safe errors to leak $\mathbf{K_2}$, which can thus be used to forge the entire authentication mechanism. We develop a Field Programmable Gate Array prototype of the design which is extremely lightweight and can be implemented using four lookup tables.
Splinter: Practical Private Queries on Public Data
Many online services let users query public datasets such as maps, flight prices, or restaurant reviews. Unfortunately, the queries to these services reveal highly sensitive information that can compromise users’ privacy. This paper presents Splinter, a system that protects users’ queries on public data and scales to realistic applications. A user splits her query into multiple parts and sends each part to a different provider that holds a copy of the data. As long as any one of the providers is honest and does not collude with the others, the providers cannot determine the query. Splinter uses and extends a new cryptographic primitive called Function Secret Sharing (FSS) that makes it up to an order of magnitude more efficient than prior systems based on Private Information Retrieval and garbled circuits. We develop protocols extending FSS to new types of queries, such as MAX and TOPK queries. We also provide an optimized implementation of FSS using AES-NI instructions and multicores. Splinter achieves end-to-end latencies below 1.6 seconds for realistic workloads including a Yelp clone, flight search, and map routing.
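Function secret sharing is easiest to see in its naive form: additively share the truth table of the query predicate between the two providers, so that each key alone is uniformly random yet the two evaluations sum to the true answer on every record. The toy below does exactly that for a point function; real FSS keys, as used by Splinter, are logarithmic rather than linear in the domain size, so this only illustrates the interface.
\begin{verbatim}
# Naive function secret sharing for a point function f_{a,b}(x) = b if x == a
# else 0: the truth table is split into two additive shares mod 2**32, so each
# provider's key alone is uniformly random, yet the two evaluations on every
# record sum to the real predicate.  Real FSS (as used by Splinter) has keys
# logarithmic in the domain size; this toy's keys are as large as the domain
# and only illustrate the interface.
import secrets

MOD = 2 ** 32

def gen(domain, a, b):
    share0 = [secrets.randbelow(MOD) for _ in range(domain)]
    share1 = [(-s) % MOD for s in share0]
    share1[a] = (share1[a] + b) % MOD          # shares of f sum to b only at x = a
    return share0, share1

def eval_share(share, x):
    return share[x]

# example: privately count how many database rows equal the hidden value 3
db = [1, 3, 5, 3, 2, 3, 7]
k0, k1 = gen(domain=8, a=3, b=1)
ans0 = sum(eval_share(k0, row) for row in db) % MOD     # computed by provider 0
ans1 = sum(eval_share(k1, row) for row in db) % MOD     # computed by provider 1
print((ans0 + ans1) % MOD)                              # client learns the count: 3
\end{verbatim}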
Cryptanalysis of a certificateless aggregate signature scheme
Recently, Nie et al. proposed a certificateless aggregate signature scheme. In the standard security model considered in certificateless cryptography, two types of adversaries are dealt with. In this paper, we show that Nie et al.'s scheme is insecure against the adversary of the first type. In other words, although they claimed that their proposed scheme is existentially unforgeable against adaptive chosen-message attacks by the adversaries considered in certificateless settings, we prove that such a forgery can be done.
Preventing Adaptive Key Recovery Attacks on the Gentry-Sahai-Waters Leveled Homomorphic Encryption Scheme
A major open problem is to protect leveled homomorphic encryption from adaptive attacks that allow an adversary to learn the private key. The only positive results in this area are by Loftus, May, Smart and Vercauteren. They use a notion of "valid ciphertexts" and obtain an IND-CCA1 scheme under a strong knowledge assumption, but they also show their scheme is not secure under a natural adaptive attack based on a "ciphertext validity oracle". However, due to recent cryptanalysis their scheme is no longer considered secure.
The main contribution of this paper is to explore a new approach to achieving this goal, which does not rely on a notion of "valid ciphertexts". The idea is to generate a "one-time" private key every time the decryption algorithm is run, so that even if an attacker can learn some bits of the one-time private key from each decryption query, this does not allow them to compute a valid private key.
This is the full version of the paper. The short version, which appeared in ProvSec 2016, presented a variant of the Gentry-Sahai-Waters (GSW) leveled homomorphic encryption scheme. Damien Stehlé pointed out an attack on our variant of this scheme that had not been anticipated in the ProvSec paper; we explain the attack in this full version. This version of the paper also contains a new "dual" version of the GSW scheme. We explain why the known attacks no longer break the system. It remains an open problem to develop a scheme for which one can prove IND-CCA1 security.
Evolving S-Boxes with Reduced Differential Power Analysis Susceptibility
Uncategorized
Uncategorized
Differential power analysis targets S-boxes to break ciphers that resist cryptanalysis. We relax cryptanalytic constraints to lower S-box leakage, as quantified by the transparency order. We apply genetic algorithms to generate 8-bit S-boxes, optimizing transparency order and nonlinearity as in existing work (Picek et al. 2015). We apply multiobjective evolutionary algorithms to generate a Pareto front. We find a tight relationship where nonlinearity drops substantially before transparency order does, suggesting the difficulty of finding S-boxes with high nonlinearity and low transparency order, if they exist. Additionally, we show that the cycle crossover yields more efficient single-objective genetic algorithms for generating S-boxes than those in the existing literature. We demonstrate this in the first side-by-side comparison of the genetic algorithms of Millan et al. 1999, Wang et al. 2012, and Picek et al. 2015. Finally, we propose and compare several methods for avoiding fixed points in S-boxes; repairing a fixed point after evolution in a way that preserves fitness was superior to including a fixed-point penalty in the objective function or randomly repairing fixed points during or after evolution.
Private Projections & Variants
There are many realistic settings where two mutually suspicious parties need to share some specific information while keeping everything else private. Various privacy-preserving techniques (such as Private Set Intersection) have been proposed as general solutions.
Based on timely real-world examples, this paper motivates the need for a new privacy tool, called Private Set Intersection with Projection (PSI-P). In it, Server has (at least) a two-attribute table and Client has a set of values. At the end of the protocol, based on all matches between Client's set and values in one (search) attribute of Server’s database, Client should learn the set of elements corresponding to the second attribute, and nothing else. In particular the intersection of Client's set and the set of values in the search attribute must remain hidden.
We construct several efficient (linear complexity) protocols that approximate privacy required by PSI-P and suffice in many practical scenarios. We also provide a new construction for PSI-P with full privacy, albeit slightly less efficient. Its key building block is a new primitive called Existential Private Set Intersection (PSI-X) which yields a binary flag indicating whether the intersection of two private sets is empty or non-empty.
Ciphertext and Plaintext Leakage Reveals the Entire TDES Key
Uncategorized
Uncategorized
Side-channel analysis (SCA) is a well-known method for recovering sensitive data stored in security products. Fortunately, numerous countermeasures for hardware implementations of cryptographic algorithms have been proposed to protect the internal data against this attack. However, some designs are not aware that protecting the plaintext and ciphertext is also crucial. In this work, we attack an implementation of TDES (triple DES) by taking advantage of such leakages detected in a widely used commercial product built on a hardware platform that passed EAL5+ certification. In particular, we guess entire DES keys to construct hypotheses for the intermediate outputs in a TDES calculation. The time cost of this approach is nearly $\frac{1}{2^{32}}$ of that of a brute-force search. Furthermore, if leakage about the key also becomes available, the attack cost becomes practical. That is, reducing the entropy of every DES key to $2^{28}$ allows an enumeration of the entire TDES key in 21.6 hours.
New construction of single-cycle T-function families
The single-cycle T-function is a particular permutation function with complex algebraic structure, maximum period and efficient implementation in software and hardware. In this paper, building on existing methods, we present a new construction that uses a class of single-cycle T-functions meeting certain conditions to construct a family of new single-cycle T-functions, and we also give a lower bound on the number of newly constructed single-cycle T-functions.
An Oblivious Parallel RAM with $O(\log^2 N)$ Parallel Runtime Blowup
Uncategorized
Uncategorized
Oblivious RAM (ORAM) is a cryptographic primitive that allows a client to access memory locations from a server without revealing its access patterns. Oblivious Parallel RAM (OPRAM) is a PRAM counterpart of Oblivious RAM, i.e., it allows $m$ clients that trust each other to simultaneously access data from a server without revealing their access patterns. The best known OPRAM scheme achieves amortized client-server bandwidth of $O(\log^2 N)$ per lookup, but it does not achieve a perfectly linear access-time speedup in the number of clients. In fact, for each access, the blowup for the slowest client (also known as the parallel runtime blowup) is $O(f(m)\log m\log^2 N)$ for some $f(m) = \omega(1)$. This implies that, for most accesses, some clients remain idle while others are accessing data. In this work, we show an OPRAM scheme that has a parallel runtime blowup of $O(\log^2 N)$ while maintaining an $O(\log^2 N)$ client-server bandwidth blowup for each client.
Attribute Based Encryption: Traitor Tracing, Revocation and Fully Security on Prime Order Groups
Uncategorized
Uncategorized
A Ciphertext-Policy Attribute-Based Encryption (CP-ABE) allows users to specify the access policies without having to know the identities of users. In this paper, we contribute by proposing an ABE scheme which enables revoking corrupted users. Given a key-like blackbox, our system can identify at least one of the users whose key must have been used to construct the blackbox and can revoke the key from the system.
This paper extends the work of Liu and Wong to achieve traitor revocability. We construct an Augmented Revocable CP-ABE (AugR-CP-ABE) scheme, and describe its security by message-hiding and index-hiding games. Then we prove that an AugR-CP-ABE scheme with the message-hiding and index-hiding properties can be transformed into a secure Revocable CP-ABE with fully collusion-resistant blackbox traceability. In the proof of index-hiding, we divide the adversary's behavior into two cases and build direct reductions that use the adversary to solve the D3DH problem.
Our scheme achieves the sub-linear overhead of $O(\sqrt{N})$, where $N$ is the number of users in the system. This scheme is highly expressive and can take any monotonic access structures as ciphertext policies.
Last updated: 2016-12-23
Comments on “Flaw in the Security Analysis of Leakage-resilient Authenticated Key Exchange Protocol from CT-RSA 2016 and Restoring the Security Proof”
In CT-RSA 2016, Chen, Mu, Yang, Susilo and Guo proposed a strongly leakage-resilient authenticated key exchange (AKE) protocol. In a recent work, Chakraborty et al. claimed that they identified a flaw in the security analysis of Chen et al.'s protocol. In this letter, we point out that the flaw identified by Chakraborty et al. is invalid and does not exist in the original proof presented in Chen et al.'s paper.
Pairing Cryptography Meets Isogeny: A New Framework of Isogenous Pairing Groups
We put forth a new mathematical framework called Isogenous Pairing Groups (IPG) and new intractable assumptions in the framework, the Isogenous DBDH (Isog-DBDH) assumption and its variants. Three operations, i.e., exponentiation, pairing and isogeny on elliptic curves are treated under a unified notion of trapdoor homomorphisms, and combinations of the operations have potential new cryptographic applications, in which the compatibility of pairing and isogeny is a main ingredient in IPG. As an example, we present constructions of (small and large universe) key-policy attribute-based encryption (KP-ABE) schemes secure against pre-challenge quantum adversaries in the quantum random oracle model (QROM). Note that our small universe KP-ABE has asymptotically the same efficiency as Goyal et al.'s small universe KP-ABE, which has only classical security. As a by-product, we also propose practical (hierarchical) identity-based encryption ((H)IBE) schemes secure against pre-challenge quantum adversaries in the QROM from isogenies, which are based on the Boneh-Franklin IBE and the Gentry-Silverberg HIBE, respectively.
New Impossible Differential Characteristic of SPECK64 using MILP
Uncategorized
Uncategorized
Impossible differential attacks are among the most powerful methods for analyzing block ciphers, so block ciphers must be designed to resist them. An impossible differential attack starts from finding an impossible differential characteristic. For ARX-based block ciphers, however, such analyses have been difficult because of the modular addition. In this paper, we introduce 157 new six-round impossible differential characteristics of the ARX-based block cipher SPECK64, using the Mixed Integer Linear Programming (MILP) based impossible differential characteristic search proposed by Cui et al. [3].
Cryptography During the French and American Wars in Vietnam
Uncategorized
Uncategorized
After Vietnam's Declaration of Independence on 2 September 1945, the country had to suffer through two long, brutal wars, first against the French and then against the Americans, before finally in 1975 becoming a unified country free of colonial domination. Our purpose is to examine the role of cryptography in those two wars. Despite the far greater technological resources of their opponents, the communications
intelligence specialists of the Viet Minh, the National Liberation Front, and the Democratic Republic of Vietnam had considerable
success in both protecting Vietnamese communications and acquiring tactical and strategic secrets from the enemy. Perhaps surprisingly,
in both wars there was a balance between the sides. Generally speaking, cryptographic knowledge and protocol design were at a high level
at the central commands, but deployment for tactical communications in the field was difficult, and there were many failures on all sides.
Static Power Side-Channel Analysis of a Threshold Implementation Prototype Chip
The static power consumption of modern CMOS devices has become a substantial concern in the context of the side-channel security of cryptographic hardware. The continuous growth of the leakage power dissipation in nanometer-scaled CMOS technologies is not only inconvenient for effective low-power designs, but also creates a new target for power analysis adversaries. In this paper, we present the first experimental results of a static power side-channel analysis targeting an ASIC implementation of a provably first-order secure hardware masking scheme. The investigated 150 nm CMOS prototype chip realizes the PRESENT-80 lightweight block cipher as a threshold implementation and allows us to draw a comparison between the information leakage through its dynamic and static power consumption. By employing a sophisticated measurement setup dedicated to static power analysis, including a very low-noise DC amplifier as well as a climate chamber, we are able to recover the key of our target implementation with significantly fewer traces than the corresponding dynamic power analysis attack. In particular, a successful third-order attack exploiting the static currents needs less than 200 thousand traces, whereas the same attack in the dynamic power domain requires around 5 million measurements. Furthermore, we are able to show that only-first-order resistant approaches like the investigated threshold implementation do not significantly increase the complexity of a static power analysis. Therefore, we firmly believe that this side channel can actually become the target of choice for real-world adversaries against masking countermeasures implemented in advanced CMOS technologies.
Privacy-preserving Hybrid Recommender System
Uncategorized
Uncategorized
Privacy issues in recommender systems have attracted the attention of researchers for many years. So far, a number of solutions have been proposed. Unfortunately, most of them are far from practical, as they either degrade the utility or are very inefficient. In this paper, we aim at a more practical solution (particularly in the sense of relieving the tension between utility and privacy) by proposing a privacy-preserving hybrid recommender system which consists of an incremental matrix factorization (IMF) component and a user-based collaborative filtering (UCF) component. The IMF component provides the fundamental utility while allowing the service provider to efficiently learn feature vectors in the plaintext domain, and the UCF component improves the utility while allowing users to carry out their computations in an offline manner. Leveraging somewhat homomorphic encryption (SWHE) schemes, we provide privacy-preserving candidate instantiations for both components. Interestingly, as a side effect of the hybrid design, the individual components can enhance each other's privacy guarantees. With respect to efficiency, our experiments demonstrate that the hybrid solution is much more efficient than existing solutions.
Implementing Complete Formulas on Weierstrass Curves in Hardware
This work revisits the recent complete addition formulas for prime order elliptic curves of Renes, Costello and Batina in light of parallelization. We introduce the first hardware implementation of the new formulas on an FPGA based on three arithmetic units performing Montgomery multiplication. Our results are competitive with current literature and show the potential of the new complete formulas in hardware design. Furthermore, we present algorithms to compute the formulas using anywhere between two and six processors, using the minimum number of parallel field multiplications.
Some results on ACORN
In this paper we identify a weakness in the design specification of ACORN, a candidate in the CAESAR competition. We show that there exists a probabilistic linear relation between message bits and ciphertext bits that holds with probability greater than $\frac{1}{2}$. This is the first paper to find such a probabilistic linear relation between the message and the corresponding ciphertext bits of ACORN. We also propose a new type of chosen-plaintext (CPA) attack on ACORN. With our attack method, it is possible to recover the full initial state of the encryption phase of the cipher with complexity $\approx 2^{40}$. After obtaining the initial state of the encryption phase, one can invert the associated data loading phase and the key-IV initialization phase to recover the secret key bits.
New construction of single cycle T-function families
Uncategorized
Uncategorized
The single-cycle T-function is a particular permutation function with complex algebraic structure, maximum period and efficient implementation in software and hardware. In this paper, on the basis of existing methods and using a class of single-cycle T-functions that satisfy certain conditions, we first present a new construction of single-cycle T-function families. Unlike previous approaches, this method can construct multiple single-cycle T-functions at once. We then give a mathematical proof of the construction's feasibility and investigate the enumeration of the newly constructed single-cycle T-functions. Finally, the paper ends with a discussion of the properties that these newly constructed functions preserve, such as linear complexity and stability (k-error complexity), as well as a comparison with previous construction methods.
Are RNGs Achilles’ heel of RFID Security and Privacy Protocols ?
Security and privacy concerns have been growing with the increased usage of RFID technology in our daily lives. To mitigate these issues, numerous privacy-friendly authentication protocols have been published in the last decade. Random number generators (RNGs) are commonly used in RFID tags to provide the security and privacy of RFID protocols. However, an RNG can be the weak spot of a protocol, and its misuse causes security and privacy problems. Moreover, providing a secure RNG with large entropy imposes a trade-off between security and cost for low-cost RFID tags, and an RNG used in an RFID tag may not keep working properly over time. Therefore, we claim that the vulnerability of a weak RNG may deeply influence the security and privacy level of the whole system. To the best of our knowledge, this concern has not been considered in the RFID literature. Motivated by this need, in this study we first revisit Vaudenay's privacy model, which combines the early models and presents a mature and elegant privacy model with different adversary classes. Then, we enhance the model by introducing a new oracle, which allows analyzing the usage of RNGs in RFID protocols. We also analyze a couple of proposed protocols under our improved model.
Last updated: 2017-07-22
Certificateless Public Key Encryption with Equality Test
In this paper, we propose the concept of certificateless public key encryption with equality test (CL-PKEET) to solve the key escrow problem in IBEET. In more detail, we first give the definition of CL-PKEET and define its security model. We then propose a concrete CL-PKEET scheme based on the Decision Bilinear Diffie-Hellman (DBDH) assumption and prove its security.
Modifying Shor’s algorithm to compute short discrete logarithms
Uncategorized
Uncategorized
We revisit Shor's algorithm for computing discrete logarithms in the multiplicative group of $\mathrm{GF}(p)$ on a quantum computer and modify it to compute logarithms $d$ in groups $\langle g \rangle$ of prime order $q$ in the special case where $d \lll q$. As a stepping stone to performing this modification, we first introduce a modified algorithm for computing logarithms on the general interval $0 < d < q$ for comparison.
We demonstrate conservative lower bounds on the success probability of our algorithms in both the general and the special case. In both cases, our algorithms initially set the index registers to a uniform superposition of all states, compared to $p-1$ states in Shor's original algorithm.
In the special case where $d \lll q$, our algorithm uses $3\lceil \log d \rceil$ qubits for the two index registers and computes two QFTs of size $2^{\lceil \log d \rceil}$ and $2^{2\lceil \log d \rceil}$, compared to $2\lfloor \log q \rfloor$ qubits for the index registers and two QFTs both of size $2^{\lfloor \log q \rfloor}$ in the general case.
A quantum circuit for computing $[a - bd]\,g$ is furthermore required, where $0 \le a < 2^{2\lceil \log d \rceil}$ and $0 \le b < 2^{\lceil \log d \rceil}$ in the special case, compared to $0 \le a, b < 2^{\lfloor \log q \rfloor}$ in the general case.
This implies that the complexity of computing discrete logarithms on a quantum computer can be made to depend not only on the choice of group and on its order $q$, but also on the logarithm $d$.
In the special case where $d \lll q$, our algorithm does not require $q$ to be prime. It may hence be generalized to finite abelian groups.
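For a quick sense of the savings, here is a small Python sketch (our own illustrative arithmetic, assuming base-2 logarithms) that tabulates the index-register and QFT sizes quoted above for a concrete short logarithm and group order:

    def register_sizes(d, q):
        """Tabulate the index-register and QFT sizes quoted above
        (illustrative arithmetic only, not an implementation of the algorithm)."""
        m = (d - 1).bit_length()        # ceil(log2 d), exact for integers d > 1
        n = q.bit_length() - 1          # floor(log2 q)
        special = {"index qubits": 3 * m, "QFT sizes": (2 ** m, 2 ** (2 * m))}
        general = {"index qubits": 2 * n, "QFT sizes": (2 ** n, 2 ** n)}
        return special, general

    # Example: a 2048-bit prime-order group with a 256-bit logarithm d.
    special, general = register_sizes(2 ** 256 - 1, 2 ** 2048 - 1)
    print(special["index qubits"], "vs", general["index qubits"])   # 768 vs 4094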
Related-Key Impossible-Differential Attack on Reduced-Round SKINNY
Uncategorized
Uncategorized
At CRYPTO'16, Beierle et al. presented SKINNY, a family of lightweight tweakable block ciphers intended to compete with SIMON. SKINNY can be implemented efficiently in both soft- and hardware, possesses a Substitution-Permutation-Network structure, and supports block sizes of 64 and 128 bits as well as key and tweak sizes of 64, 128, 192, and 256 bits. This paper outlines a related-tweakey impossible-differential attack on 21 rounds of SKINNY-64/128 and two attacks on 22 and 23 rounds of SKINNY-64/128 under the assumption that 48 bits of the tweakey are public.
Lizard: Cut off the Tail! Practical Post-Quantum Public-Key Encryption from LWE and LWR
Uncategorized
Uncategorized
The LWE problem has been widely used in many constructions for post-quantum cryptography due to its strong security reduction from the worst case of lattice hard problems and its lightweight operations. PKE schemes based on the LWE problem have simple and fast decryption, but the encryption phase is rather slow due to the large parameter sizes required by the leftover hash lemma or to expensive Gaussian samplings.
In this paper, we propose a novel PKE scheme, called Lizard, without relying on either of them. The encryption procedure of Lizard first combines several LWE samples as in the previous LWE-based PKEs,
but the following step to re-randomize this combination before adding a plaintext is different: it removes several least significant bits of each component of the computed vector rather than adding an auxiliary error vector. Lizard is IND-CPA secure under the hardness assumptions of the LWE and LWR problems, and its variant achieves IND-CCA security in the quantum random oracle model.
Our approach accelerates encryption to a large extent and also reduces the size of ciphertexts, and Lizard is very competitive for applications requiring fast encryption and decryption phases. In our single-core implementation on a laptop, the encryption and decryption of IND-CCA Lizard with a 256-bit plaintext space under 128-bit quantum security take 0.014 and 0.027 milliseconds, respectively, which are comparable to those of NTRU. To achieve these results, we further take advantage of sparse small secrets.
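To make the "drop the least significant bits instead of adding an auxiliary error vector" step concrete, here is a minimal one-bit LWE/LWR-style toy in Python; the parameters, dimensions and layout are our own illustrative choices and are unrelated to Lizard's actual parameter sets or security claims:

    import secrets

    # Toy one-bit LWE/LWR-style scheme illustrating rounding-based re-randomization.
    q, p, t, n, m_samples = 2**12, 2**9, 2, 16, 8

    def rnd(x, frm, to):
        # deterministic rounding Z_frm -> Z_to (removes the low-order bits)
        return ((to * x + frm // 2) // frm) % to

    s = [secrets.choice((-1, 0, 1)) for _ in range(n)]                    # small secret
    A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m_samples)]
    b = [(sum(A[i][j] * s[j] for j in range(n))
          + secrets.choice((-1, 0, 1))) % q for i in range(m_samples)]    # b = A s + e

    def encrypt(bit):
        r = [secrets.randbelow(2) for _ in range(m_samples)]              # sparse combiner
        c1 = [rnd(sum(r[i] * A[i][j] for i in range(m_samples)) % q, q, p)
              for j in range(n)]
        c2 = rnd((sum(r[i] * b[i] for i in range(m_samples)) + (q // t) * bit) % q, q, p)
        return c1, c2

    def decrypt(c1, c2):
        v = (c2 - sum(c1[j] * s[j] for j in range(n))) % p
        return rnd(v, p, t)                                               # recover the bit

    assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1))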
Last updated: 2017-11-06
Estonian Voting Verification Mechanism Revisited
Uncategorized
Uncategorized
After the Estonian Parliamentary Elections held in 2011, an additional verification mechanism was integrated into the i-voting system in order to resist corrupted voting devices, including the so-called Student's Attack, in which a student practically showed that the voting system is indeed not verifiable by developing several versions of malware capable of blocking or even changing the vote. This mechanism gives voters the opportunity to verify whether the vote they cast is stored in the central system correctly. However, the verification phase ends by displaying the cast vote in plain form on the verification device. In other words, the device on which the verification is done learns the voter's choice. In this work, our aim is to investigate this verification phase in detail and to point out that leaking the voter's choice to the verification application may harm voter privacy. Additionally, when applied on a wide scale, this could even compromise the fairness and the overall secrecy of the elections. In this respect, we propose an alternative verification mechanism for the Estonian i-voting system to overcome this vulnerability. Not only is the proposed mechanism secure and resistant against corrupted verification devices, it also successfully verifies whether the vote is correctly stored in the system. We also highlight that our proposed mechanism requires only symmetric encryptions and hash functions on the verification device, thereby mitigating these weaknesses efficiently and at negligible cost. More concretely, it adds only $m$ symmetric-key decryptions to the verification device, where $m$ denotes the number of candidates. Finally, we prove the security of the proposed verification mechanism and compare the cost complexity of the proposed method with that of the current mechanism.
Integrity Analysis of Authenticated Encryption Based on Stream Ciphers
We study the security of authenticated encryption based on a stream cipher and a universal hash function. We consider ChaCha20-Poly1305 and generic constructions proposed by Sarkar, where the generic constructions include 14 AEAD (authenticated encryption with associated data) schemes and 3 DAEAD (deterministic AEAD) schemes. In this paper, we analyze the integrity of these schemes both in the standard INT-CTXT notion and in the RUP (releasing unverified plaintext) setting called INT-RUP notion. We present INT-CTXT attacks against 3 out of the 14 AEAD schemes and 1 out of the 3 DAEAD schemes. We then show INT-RUP attacks against 1 out of the 14 AEAD schemes and the 2 remaining DAEAD schemes. We next show that ChaCha20-Poly1305 is provably secure in the INT-RUP notion. Finally, we show that 4 out of the remaining 10 AEAD schemes are provably secure in the INT-RUP notion.
Dude, is my code constant time?
This paper introduces dudect: a tool to assess whether a piece of code runs in constant time or not on a given platform. We base our approach on leakage detection techniques, resulting in a very compact, easy to use and easy to maintain tool. Our methodology fits in around 300 lines of C and runs on the target platform. The approach is substantially different from previous solutions. Contrary to others, our solution requires no modeling of hardware behavior. Our solution can be used in black-box testing, yet benefits from implementation details if available. We show the effectiveness of our approach by detecting several variable-time cryptographic implementations. We place a prototype implementation of dudect in the public domain.
Quantum Key Recycling with eight-state encoding (The Quantum One Time Pad is more interesting than we thought)
Uncategorized
Uncategorized
Perfect encryption of quantum states using the Quantum One-Time Pad (QOTP) requires 2 classical key bits per qubit.
Almost-perfect encryption, with information-theoretic security, requires only slightly more than 1 key bit per qubit. We slightly improve lower bounds on the key length. We show that key length $n+2\log\frac1\varepsilon$ suffices to encrypt $n$ qubits in such a way that the cipherstate's $L_1$-distance from uniformity is upper bounded by $\varepsilon$. For a stricter security definition involving the $\infty$-norm, we prove sufficient key length $n+\log n +2\log\frac1\varepsilon+1+\frac1n\log\frac1\delta+\log\frac{\ln 2}{1-\varepsilon}$, where $\delta$ is a small probability of failure. Our proof uses Pauli operators, whereas previous results on the $\infty$-norm needed Haar measure sampling.
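As a worked example of the first bound (our own arithmetic, assuming base-2 logarithms), the following few lines of Python evaluate $n+2\log\frac1\varepsilon$ for a concrete $n$ and $\varepsilon$ and compare it with the $2n$ key bits of the perfect QOTP:

    import math

    n, eps = 1024, 2.0 ** -64          # example block size and distance bound
    l1_key_bits = n + 2 * math.log2(1 / eps)
    print(l1_key_bits, "vs", 2 * n)    # 1152.0 vs 2048 for the perfect QOTP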
We show how to QOTP-encrypt classical plaintext in a nontrivial way:
we encode a plaintext bit as the vector $\pm(1,1,1)/\sqrt3$ on the Bloch sphere. Applying the Pauli encryption operators results in eight possible cipherstates which are equally spread out on the Bloch sphere. This encoding, especially when combined with the half-keylength option of QOTP, has advantages over 4-state and 6-state encoding in applications such as Quantum Key Recycling and Unclonable Encryption. We propose a key recycling scheme that is more efficient and can tolerate more noise than a recent scheme by Fehr and Salvail.
For 8-state QOTP encryption with pseudorandom keys we do a statistical analysis of the cipherstate eigenvalues. We present numerics up to 9 qubits.
Insecurity of RCB: Leakage-Resilient Authenticated Encryption
Uncategorized
Uncategorized
Leakage-resilient cryptography is about security in the presence of leakage from side-channels. In this paper, we present several issues of the RCB block cipher mode. Agrawal et al. [2] recently proposed RCB as a leakage-resilient authenticated encryption (AE) scheme. Our main result is that RCB fails to provide authenticity, even in the absence of leakage.
Cryptanalysis of Reduced round SKINNY Block Cipher
SKINNY is a family of lightweight tweakable block ciphers designed to have the smallest hardware footprint. In this paper, we present zero-correlation linear approximations and related-tweakey impossible differential characteristics for different versions of SKINNY. We utilize a meet-in-the-middle approach to construct 9-round and 10-round zero-correlation linear distinguishers. We also obtain 12-round related-tweakey impossible differential characteristics for both SKINNY-64 and SKINNY-128 in the TK1 and TK2 models. To the best of our knowledge, the zero-correlation characteristics presented in this paper are the first investigation of the security of SKINNY against this attack.
A Code-Based Group Signature Scheme
This work is the extended version of [1], which proposed the first code-based group signature. The new group signature scheme we present here has numerous advantages over all existing post-quantum constructions and even competes (in terms of properties) with pairing-based constructions: it allows new members to be added during the lifetime of the group (it is dynamic). Moreover, it appears that our scheme might be extended into a traceable signature according to the definition of Kiayias, Tsiounis and Yung [2] (KTY model) while handling membership revocation. Our security is based on a relaxation of the model of Bellare, Shi and Zhang [3] (BSZ model), verifying the properties of anonymity, traceability and non-frameability. The main idea of our scheme consists in building an offset collision of two syndromes associated to two different matrices: a random matrix, which enables building a random syndrome from a chosen small-weight vector, and a trapdoor matrix for the syndrome decoding problem, which permits finding a small-weight preimage of the previous random syndrome to which a fixed syndrome is added. These two small-weight vectors constitute the group member's secret signing key, whose knowledge is proved by means of a variation of Stern's authentication protocol. For applications, we consider the case of the code-based CFS signature scheme [4] of Courtois, Finiasz and Sendrier. If one denotes by $N$ the number of group members, CFS leads to signature and public key sizes in $N^{1/\sqrt{\log N}}$. Along with this work, we also introduce a new kind of proof of knowledge, Testable weak Zero Knowledge (TwZK), implicitly covered in the short version of this paper [1]. TwZK proofs appear particularly well fitted to the context of group signature schemes: they allow a verifier to test whether a specific witness is used without learning anything more from the proof. In the Random Oracle Model (ROM), we ensure the security of our scheme by defining the One More Syndrome Decoding problem, a new code-based problem related to the Syndrome Decoding problem [5].
Designing Optimal Implementations of Linear Layers (Full Version)
Linear layers are fundamental primitives in computer science, electronic engineering, and telecommunication. The performance of a linear layer depends on two aspects: its diffusion ability and its implementation cost, where the latter is usually measured by the number of XORs needed to implement it. For many years, linear layers have been implemented by computing the coordinates of the output independently. With this method, costs are determined only by the matrices representing the linear layers. However, we note that the implementation cost of a given linear layer depends not only on its matrix but also on the way in which we implement it. So, in this paper, we focus on another implementation method: modifying input vectors to output step by step (MIOSS). This method uses fewer XORs than previous methods and needs no extra temporary register. Besides, this method makes the implementation cost of a linear layer the same as that of its inverse. With the new implementation method, we first clarify the measurement of implementation cost and the optimal implementation procedure of linear layers, where ``optimal'' means using the fewest XORs. Then, to find the optimal implementation procedure of a given linear layer, we construct a graph-theoretical model and transfer the problem to the shortest path problem in graph theory. Although there are several algorithms for the shortest path problem, they do not perform best for the graph that we construct in this paper because of its regularity. Therefore, we adopt a new ``double-direction'' algorithm that uses less storage and makes the search for a shortest path more efficient in a regular graph. Unfortunately, this algorithm is not practical for large linear layers because of its high space/time complexity. So, we finally construct another algorithm for finding efficient implementations of linear layers. An important advantage of this last algorithm is its extremely low complexity. We apply it to the linear layer of AES and obtain very efficient implementations.
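The following small Python sketch is our own toy illustration of the in-place idea on a 4x4 lower-triangular binary matrix (an example we chose, not one from the paper): computing each output independently needs 6 XORs, while updating the input in place needs 3, with no temporary register, and the inverse is the same XOR sequence in reverse order.

    import random

    # Toy binary linear layer y = M x over GF(2), with M the 4x4
    # lower-triangular all-ones matrix (rows: 1000, 1100, 1110, 1111).

    def per_output(x):
        # compute each output coordinate independently: 1 + 2 + 3 = 6 XORs
        return [x[0],
                x[0] ^ x[1],
                x[0] ^ x[1] ^ x[2],
                x[0] ^ x[1] ^ x[2] ^ x[3]]

    def in_place(x):
        # modify the input vector step by step: only 3 XORs, no temporaries
        x = list(x)
        x[1] ^= x[0]
        x[2] ^= x[1]
        x[3] ^= x[2]
        return x

    def in_place_inverse(y):
        # the inverse layer costs the same: the same 3 XORs, reversed
        y = list(y)
        y[3] ^= y[2]
        y[2] ^= y[1]
        y[1] ^= y[0]
        return y

    x = [random.randint(0, 1) for _ in range(4)]
    assert per_output(x) == in_place(x)
    assert in_place_inverse(in_place(x)) == x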
Privacy-friendly Forecasting for the Smart Grid using Homomorphic Encryption and the Group Method of Data Handling
While the smart grid has the potential to have a positive impact on the sustainability and efficiency of the electricity market, it also poses some serious challenges with respect to the privacy of the consumer. One traditional use-case of this privacy-sensitive data is forecast prediction. In this paper we show how to compute the forecast prediction such that the supplier does not learn any individual consumer's usage information. This is achieved by using the Fan-Vercauteren somewhat homomorphic encryption scheme. Typical prediction algorithms are based on artificial neural networks that require the computation of an activation function, which is complicated to compute homomorphically. We investigate a different approach and show that Ivakhnenko's group method of data handling is suitable for homomorphic computation.
Our results show this approach is practical: prediction for a small apartment complex of $10$ households can be computed homomorphically in less than four seconds using a parallel implementation or in about half a minute using a sequential implementation. Expressed in terms of the mean average percentage error, the prediction accuracy is roughly 21\%.
Evaluating Entropy for TRNGs: Efficient, Robust and Provably Secure
Uncategorized
Uncategorized
Estimating the entropy of randomness sources is a task of critical importance in the context of true random number generators, as feeding cryptographic applications with insufficient entropy is a serious real-world security risk. The challenge is to maximize accuracy and confidence under certain data models and resource constraints.
In this paper we analyze the performance of a simple collision-counting estimator, under the assumption that source outputs are independent but their distribution can change due to adversarial influences. For $n$ samples and confidence $1-\epsilon$ we achieve the following features:
(a) Efficiency: the estimator reads the stream in one pass and uses constant memory (forward-only mode);
(b) Accuracy: it estimates the amount of extractable bits with a relative error $O(n^{-1/2}\log(1/\epsilon))$ when the source outputs are i.i.d.;
(c) Robustness: it keeps the same error when the source outputs are independent but the distribution changes up to $t=O(\sqrt{n})$ times during runtime.
We demonstrate that the estimator is accurate enough to adjust post-processing components dynamically, estimating entropy on the fly instead of investigating it off-line. Our work thus continues the line of research on "testable random number generators" originated by Bucci and Luzzi at CHES'05.
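As a rough illustration of collision-counting entropy estimation, here is a one-pass, constant-memory Python sketch based on comparing consecutive outputs (our own simplification; the estimator and the error analysis in the paper are more refined):

    import math
    import random

    def collision_entropy_estimate(samples):
        """Estimate the collision (Renyi order-2) entropy, in bits, of an i.i.d.
        stream by counting collisions between consecutive outputs. One pass,
        constant memory."""
        prev, pairs, collisions = None, 0, 0
        for x in samples:
            if prev is not None:
                pairs += 1
                collisions += (x == prev)
            prev = x
        p_col = max(collisions, 1) / pairs          # crude guard against log(0)
        return -math.log2(p_col)

    # Example: a heavily biased 8-symbol source (symbol 0 appears ~60% of the time).
    stream = (random.choice([0] * 10 + list(range(8))) for _ in range(200_000))
    h2 = collision_entropy_estimate(stream)
    print(f"estimated H2 ~ {h2:.2f} bits; min-entropy is at least H2/2")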
Impossible Differential Cryptanalysis of Reduced-Round SKINNY
Uncategorized
Uncategorized
SKINNY is a new lightweight tweakable block cipher family proposed by Beierle $et$ $al$. in CRYPTO 2016. SKINNY-$n$-$t$ is a block cipher with $n$-bit state and $t$-bit tweakey (key and tweak). It is designed to compete with the recent NSA SIMON block cipher. In this paper, we present impossible differential attacks against reduced-round versions of all the 6 SKINNY's variants, namely, SKINNY-$n$-$n$, SKINNY-$n$-2$n$ and SKINNY-$n$-3$n$ ($n=64$ or $n=128$) in the single-tweakey model. More precisely, we present impossible differential attacks against 18, 20 and 22 rounds of SKINNY-$n$-$n$, SKINNY-$n$-2$n$ and SKINNY-$n$-3$n$ ($n=64$ or $n=128$), respectively. These attacks are based on the same 11-round impossible differential distinguisher. To the best of our knowledge, these are the best attacks against these 6 variants of the cipher in the single-tweakey model.
Full Disk Encryption: Bridging Theory and Practice
We revisit the problem of Full Disk Encryption (FDE), which refers to the encryption of each sector of a disk volume. In the context of FDE, it is assumed that there is no space to store additional data, such as an IV (Initialization Vector) or a MAC (Message Authentication Code) value. We formally define the security notions in this model against chosen-plaintext and chosen-ciphertext attacks. Then, we classify various FDE modes of operation according to their security in this setting, in the presence of various restrictions on the queries of the adversary. We will find that our approach leads to new insights for both theory and practice. Moreover, we introduce the notion of a diversifier, which does not require additional storage, but allows the plaintext of a particular sector to be encrypted to different ciphertexts. We show how a 2-bit diversifier can be implemented in the EagleTree simulator for solid state drives (SSDs), while decreasing the total number of Input/Output Operations Per Second (IOPS) by only 4%.
Efficient Construction of Visual Cryptographic Scheme for Compartmented Access Structures
Uncategorized
Uncategorized
In this paper, we consider a special type of secret sharing scheme known as a Visual Cryptographic Scheme (VCS), in which the secret reconstruction is done visually without any mathematical computation, unlike in other secret sharing schemes. We put forward an efficient direct construction of a visual cryptographic scheme for compartmented access structures, which generalize the threshold access structure as well as the threshold access structure with certain essential participants. To the best of our knowledge, this is the first proposed scheme for compartmented access structures in the literature of visual cryptography. Finding the closed form of the relative contrast of a scheme is, in general, a combinatorially hard problem. We come up with closed forms for both the pixel expansion and the relative contrast. Numerical evidence shows that our scheme performs better in terms of both relative contrast and pixel expansion than the cumulative-array-based construction obtained as a particular case of the general access structure.
Direct construction of quasi-involutory recursive-like MDS matrices from $2$-cyclic codes
Uncategorized
Uncategorized
A good linear diffusion layer is a prerequisite in the design of block ciphers.
Usually it is obtained by combining matrices with an optimal diffusion property over the S-box alphabet. These matrices are constructed either directly, using some algebraic properties, or by enumerating a search space and testing the optimal diffusion property for every element. For implementation purposes, two types of structures are considered: structures where all the rows derive from the first row, and recursive structures built from powers of companion matrices. In this paper, we propose a direct construction for new recursive-like MDS matrices. We show they are quasi-involutory in the sense that the matrix-vector product with the matrix or with its inverse can be implemented by clocking the same LFSR-like architecture.
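To illustrate the "clocking an LFSR-like architecture" point, here is a small self-contained Python sketch (our own toy over GF(2^8) with arbitrary feedback coefficients, making no MDS or quasi-involutory claim) showing that multiplying by a power of a companion matrix amounts to clocking a shift register:

    # Toy demonstration over GF(2^8) (AES polynomial 0x11b) that multiplying by a
    # power of a companion matrix equals clocking an LFSR-like register.

    def gf_mul(a, b, mod=0x11b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= mod
            b >>= 1
        return r

    COEFFS = (0x02, 0x03, 0x01, 0x04)          # arbitrary feedback taps c0..c3

    def clock(state):
        # one LFSR-like step: shift, feed back a GF(2^8) linear combination
        fb = 0
        for c, s in zip(COEFFS, state):
            fb ^= gf_mul(c, s)
        return state[1:] + [fb]

    def eval_row(row, x):
        acc = 0
        for m, v in zip(row, x):
            acc ^= gf_mul(m, v)
        return acc

    def matvec(M, x):
        return [eval_row(row, x) for row in M]

    # Companion matrix of the step: its j-th column is clock(e_j).
    E = [[1 if i == j else 0 for i in range(4)] for j in range(4)]
    C = [[clock(E[j])[i] for j in range(4)] for i in range(4)]

    # A "recursive" layer C^4 applied to x, computed two ways.
    x = [0x57, 0x83, 0x1a, 0xf0]
    y_clock, y_mat = x, x
    for _ in range(4):
        y_clock = clock(y_clock)
        y_mat = matvec(C, y_mat)
    assert y_clock == y_mat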
Hiding Higher-Order Side-Channel Leakage - Randomizing Cryptographic Implementations in Reconfigurable Hardware
First-order secure Threshold Implementations (TI) of symmetric cryptosystems provide provable security at a moderate overhead; yet attacks using higher-order statistical moments are still feasible. Cryptographic instances compliant to Higher-Order Threshold Implementation (HO-TI) can prevent such attacks, however, usually at unacceptable implementation costs. As an alternative concept we investigate in this work the idea of dynamic hardware modification, i.e., random changes and transformations of cryptographic implementations in order to render higher-order attacks on first-order TI impractical. In a first step, we present a generic methodology which can be applied to (almost) every cryptographic implementation. In order to investigate the effectiveness of our proposed strategy, we use an instantiation of our methodology that adapts ideas from White-Box Cryptography and applies this construction to a first-order secure TI. Further, we show that dynamically updating cryptographic implementations during operation provides the ability to avoid higher-order leakages to be practically exploitable.
Efficient Post-Quantum Zero-Knowledge and Signatures
In this paper, we present a new post-quantum digital signature algorithm that derives its security entirely from assumptions about symmetric-key primitives, which are very well studied and believed to be quantum-secure (with increased parameter sizes). We present our new scheme with a complete post-quantum security analysis, and benchmark results from a prototype implementation.
Our construction is an efficient instantiation of the following design: the public key is $y=f(x)$ for preimage-resistant function $f$, and $x$ is the private key. A signature is a non-interactive zero-knowledge proof of $x$, that incorporates a message to be signed. Our security analysis uses recent results of Unruh (EUROCRYPT'12,'15,'16) that show how to securely convert an
interactive sigma protocol to a non-interactive one in the quantum random oracle model (QROM). The Unruh construction is generic, and does not immediately yield compact proofs. However, when we specialize the construction to our application, we can reduce the size overhead of the QROM-secure construction to 1.6x, when compared to the Fiat-Shamir transform, which does not have a rigorous post-quantum security analysis. Our implementation results compare both instantiations, with multiple choices of $f$ for comparison. Our signature scheme proposal uses the block cipher LowMC for $f$, as it gives the shortest signatures.
In addition to reducing the size of signatures with Unruh's construction, we also improve the size of proofs in the underlying sigma protocol (of Giacomelli et al., USENIX'16) by a factor of two. This is of independent interest as it yields more compact proofs for arbitrary choices of $f$ in the classical case as well. Further, this reduction in size comes at no additional computational
cost.
Practical CCA2-Secure and Masked Ring-LWE Implementation
During the last years, public-key encryption schemes based on the hardness of ring-LWE have gained significant popularity. For real-world security applications assuming strong adversary models, a number of practical issues still need to be addressed. In this work we thus present an instance of ring-LWE encryption that is protected against active attacks (i.e., adaptive chosen-ciphertext attacks) and equipped with countermeasures against side-channel analysis. Our solution is based on a post-quantum variant of the Fujisaki-Okamoto (FO) transform combined with provably secure first-order masking. To protect the key and message during decryption, we developed a masked binomial sampler that secures the re-encryption process required by FO. Our work shows that CCA2-secure RLWE-based encryption can be achieved with reasonable performance on constrained devices, but it also stresses that the required transformation and handling of decryption errors implies a performance overhead that has been overlooked by the community so far. With parameters providing 233 bits of quantum security, our implementation requires 4,176,684 cycles for encryption and 25,640,380 cycles for decryption with masking and hiding countermeasures on a Cortex-M4F. The first-order security of our masked implementation is also practically verified using the non-specific t-test evaluation methodology.
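For readers unfamiliar with why re-encryption appears in decryption at all, the following Python sketch shows the generic FO-style decapsulation structure (decrypt, re-derive the randomness, re-encrypt, compare). The placeholder "CPA scheme" here is deliberately trivial and insecure; only the wrapper logic, which is what has to be masked in the paper's setting, is of interest.

    import hashlib
    import secrets

    SK = secrets.token_bytes(16)                       # placeholder secret key
    PK = hashlib.sha256(SK).digest()                   # placeholder public key

    def cpa_encrypt(pk, msg, coins):
        # toy, deterministic-given-coins "CPA encryption" (stand-in only)
        pad = hashlib.sha256(pk + coins).digest()[:len(msg)]
        return coins + bytes(a ^ b for a, b in zip(msg, pad))

    def cpa_decrypt(sk, ct):
        coins, body = ct[:32], ct[32:]
        pad = hashlib.sha256(hashlib.sha256(sk).digest() + coins).digest()[:len(body)]
        return bytes(a ^ b for a, b in zip(body, pad))

    def encaps(pk):
        m = secrets.token_bytes(32)
        coins = hashlib.sha256(b"coins" + m).digest()  # randomness derived from m
        key = hashlib.sha256(b"key" + m).digest()
        return cpa_encrypt(pk, m, coins), key

    def decaps(sk, pk, ct):
        m = cpa_decrypt(sk, ct)                        # CPA decryption
        coins = hashlib.sha256(b"coins" + m).digest()  # re-derive the randomness
        if cpa_encrypt(pk, m, coins) != ct:            # re-encrypt and compare
            return None                                # reject malformed ciphertexts
        return hashlib.sha256(b"key" + m).digest()

    ct, key = encaps(PK)
    assert decaps(SK, PK, ct) == key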
Security Analysis of SKINNY under Related-Tweakey Settings
Uncategorized
Uncategorized
In CRYPTO'16, a new family of tweakable lightweight block ciphers, SKINNY, was introduced. Denoting the variants of SKINNY as SKINNY-$n$-$t$, where $n$ represents the block size and $t$ represents the tweakey length, the design specifies $t \in \{n, 2n, 3n\}$. In this work, we evaluate the security of SKINNY against differential cryptanalysis in the related-tweakey model. First, we investigate truncated related-tweakey differential trails of SKINNY and search for the longest impossible and rectangle distinguishers with only one active cell in the input and the output. Based on the distinguishers obtained, $19$, $23$ and $27$ rounds of SKINNY-$n$-$n$, SKINNY-$n$-$2n$ and SKINNY-$n$-$3n$ can be attacked, respectively. Next, actual differential trails for SKINNY in the related-tweakey model are explored, and optimal differential trails of SKINNY-64 within a certain number of rounds are searched for with an indirect searching method based on Mixed-Integer Linear Programming. The results show a trend that, as the number of rounds increases, the probability of the optimal differential trails is much lower than the probability derived from the lower bounds on active S-boxes in SKINNY.
Magic Adversaries Versus Individual Reduction: Science Wins Either Way
We prove that, assuming there exists an injective one-way function $f$, \emph{at least} one of the following statements is true:
\begin{itemize}
\item (Infinitely-often) Non-uniform public-key encryption and key agreement exist;
\item The Feige-Shamir protocol instantiated with $f$ is distributional concurrent zero knowledge for a large class of distributions over any OR NP-relations with small distinguishability gap.
\end{itemize}
The questions of whether we can achieve these goals are known to be subject to black-box limitations. Our win-win result also establishes an unexpected connection between the complexity of public-key encryption and the round-complexity of concurrent zero knowledge.
As the main technical contribution, we introduce a dissection procedure for concurrent adversaries, which enables us to transform a magic concurrent adversary that breaks the distributional concurrent zero knowledge of the Feige-Shamir protocol into non-black-box constructions of (infinitely-often) public-key encryption and key agreement.
This dissection of complex algorithms gives insight into the fundamental gap between the known \emph{universal} security reductions/simulations, in which a single reduction algorithm or simulator works for \emph{all} adversaries, and the natural security definitions (that are sufficient for almost all cryptographic primitives/protocols), which switch the order of qualifiers and only require that for every adversary there \emph{exists} an \emph{individual} reduction or simulator.
Functional Encryption for Quadratic Functions, and Applications to Predicate Encryption
Uncategorized
Uncategorized
We present a functional encryption scheme based on standard assumptions where ciphertexts are associated with a tuple of values \((x_1,\ldots,x_n) \in \mathbb{Z}_p^n\), secret keys are associated with a degree-two polynomial, and the decryption of a ciphertext \(\mathsf{ct}_{(x_1,\ldots,x_n) \in \mathbb{Z}_p^n}\) with a secret key \(\mathsf{sk}_{P \in \mathbb{Z}_p[X_1,\ldots,X_n], \mathsf{deg}(P) \leq 2}\) recovers \(P(x_1,\ldots,x_n)\), where the ciphertext contains only \(O(n)\) group elements. Our scheme, which achieves selective security based on pairings, also yields a new predicate encryption scheme that supports degree-two polynomial evaluation, generalizing both [KSW 08] and [BSW 06].
Generic Transformations of Predicate Encodings: Constructions and Applications
Predicate encodings (Wee, TCC 2014; Chen, Gay, Wee, EUROCRYPT 2015), are symmetric primitives that can be used for building predicate encryption schemes. We give an algebraic characterization of the notion of privacy from predicate encodings, and explore several of its consequences. Specifically, we propose more efficient predicate encodings for boolean formulae and arithmetic span programs, and generic optimizations of predicate encodings. We define new constructions to build boolean combination of predicate encodings. We formalize the relationship between predicate encodings and pair encodings (Attrapadung, EUROCRYPT 2014), another primitive that can be transformed generically into predicate encryption schemes, and compare our constructions for boolean combinations of pair encodings with existing similar constructions from pair encodings. Finally, we demonstrate that our results carry to tag-based encodings (Kim, Susilo, Guo, and Au, SCN 2016).
Practical Functional Encryption for Bilinear Forms
We present a practically efficient functional encryption scheme for the class of functionalities that can be expressed via bilinear forms over the integers. Bilinear forms are a general class of quadratic functions that includes, for instance, multivariate quadratic polynomials. Our realization works over asymmetric bilinear groups and is surprisingly simple, efficient and easy to implement. For instance, in our scheme the public key and each ciphertext consist of $2n+1$ and $4n+2$ group elements respectively, where $n$ is the dimension of the encrypted vectors, while secret keys are only two group elements.
The scheme is proved secure under the standard (adaptive) indistinguishability based security notion of Boneh, Sahai and Waters (TCC 2011). The proof is rather convoluted and relies on the so-called generic bilinear group model. Specifically, our proof comes in two main stages. In a preliminary step, we put forward and prove a new master theorem to argue hardness in the generic bilinear group model of a broad family of interactive decisional problems, which includes the indistinguishability-based security game for our functional encryption scheme. Next, the more technically involved part of the proof consists in showing that our scheme actually fits the requirements of our master theorem.
A Fast Single-Key Two-Level Universal Hash Function
Uncategorized
Uncategorized
Universal hash functions based on univariate polynomials are well known, e.g. \sym{Poly1305} and \sym{GHASH}. Using
Horner's rule to evaluate such hash functions requires $\ell-1$ field multiplications for hashing a message consisting of $\ell$
blocks where each block is one field element. A faster method is based on the class of Bernstein-Rabin-Winograd (BRW) polynomials
which require $\lfloor\ell/2\rfloor$ multiplications and $\lfloor\lg\ell\rfloor$ squarings for $\ell\geq 3$ blocks.
Though this is significantly smaller than Horner's rule based hashing, implementation of BRW polynomials for variable length
messages presents significant difficulties. In this work, we propose a two-level hash function where BRW polynomial based
hashing is done at the lower level and Horner's rule based hashing is done at the higher level. The BRW polynomial based
hashing is applied to a fixed number of blocks and hence the difficulties in handling variable length messages are avoided.
Even though the hash function has two levels, we show that it is sufficient to use a single field element as the hash key. The
basic idea is instantiated to propose two new hash functions, one which hashes a single binary string and the other a vector
of binary strings. We describe two actual implementations, one over $\mathbb{F}_{2^{128}}$ and the other over
$\mathbb{F}_{2^{256}}$ both using the {\tt pclmulqdq} instruction available in modern Intel processors. On both the Haswell and
Skylake processors, the implementation over $\mathbb{F}_{2^{128}}$ is faster than the highly optimised implementation of \sym{GHASH}
by Gueron.
We further show that the Fast Fourier Transform based field multiplication over $\mathbb{F}_{2^{256}}$ proposed by Bernstein and Chou
can be used to evaluate the new hash function at a cost of at most about 46 bit operations per bit of digest, but, unlike the
Bernstein-Chou analysis, there is no hidden cost of generating the hash key.
More generally, the new idea of building a two-level hash function having a single field element as the hash key can be applied to other
finite fields to build new hash functions.
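As a point of reference for the block counts mentioned above, here is a small Python sketch of Horner-rule polynomial hashing and of the two-level layout, over a toy prime field and with the same key reused at both levels purely for illustration (the paper works over $\mathbb{F}_{2^{128}}$/$\mathbb{F}_{2^{256}}$, uses BRW polynomials at the lower level, and handles the key differently):

    # Toy prime field standing in for GF(2^128); not the construction in the paper.
    P = 2 ** 127 - 1

    def horner_hash(key, blocks):
        # ell - 1 field multiplications for ell blocks
        acc = blocks[0] % P
        for b in blocks[1:]:
            acc = (acc * key + b) % P
        return acc

    def two_level_hash(key, blocks, chunk=4):
        # lower level: hash fixed-size chunks (BRW polynomials in the paper);
        # upper level: combine the chunk digests with Horner's rule
        digests = [horner_hash(key, blocks[i:i + chunk])
                   for i in range(0, len(blocks), chunk)]
        return horner_hash(key, digests)

    key = 1234567890123456789
    message = list(range(1, 23))             # 22 one-"block" toy message words
    print(two_level_hash(key, message))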
Challenges with Assessing the Impact of NFS Advances on the Security of Pairing-based Cryptography
In the past two years there have been several advances in Number Field Sieve (NFS) algorithms for computing discrete logarithms in finite fields $\mathbb{F}_{p^n}$ where $p$ is prime and $n > 1$ is a small integer. This article presents a concise overview of these algorithms and discusses some of the challenges with assessing their impact on keylengths for pairing-based cryptosystems.
MILP-Aided Bit-Based Division Property for ARX-Based Block Cipher
Uncategorized
Uncategorized
The huge time and memory complexities of utilizing the bit-based division property, which was first presented by Todo and Morii at FSE 2016, bothered cryptographers for quite some time until the issue was solved by Xiang \textit{et al.} at ASIACRYPT 2016. They applied the MILP method to search for integral distinguishers based on the division property, and used it to analyze six lightweight block ciphers. Later on, Sun \textit{et al.} handled the feasibility of the MILP-aided bit-based division property for primitives with non-bit-permutation linear layers. Although the MILP-aided bit-based division property has given many good results since its appearance, many problems remain open when we want to develop further applications. In this paper, we focus on the feasibility of the MILP-aided bit-based division property for ARX-based primitives.
More specifically, we consider the construction of MILP models for some components of ARX-based structures. First, the \texttt{Modulo} model is proposed by using its iterated expression and introducing some auxiliary variables. Then, to propagate the operations of \texttt{AND} and \texttt{OR} with a constant (or a subkey), we prove that the known region deduced from the input division property is always included in the known region derived from the output division property, which allows us to ignore these operations. Furthermore, with its help, we also handle the \texttt{Modulo} operation with a constant (or a subkey). As a result, these new models are exploited to search for integral distinguishers of some ARX-based block ciphers. For HIGHT and LEA, the lengths of the distinguishers are both improved by one round. Some 15-round integral distinguishers for TEA/XTEA are presented. Compared with the existing ones, transformed by utilizing the equivalence between zero-correlation and integral cryptanalysis, our newly obtained distinguishers either reduce the data requirement or increase the number of zero-sum bits. Moreover, the bit-based division properties for the KATAN and KTANTAN families of block ciphers are also provided.
Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability
Uncategorized
Uncategorized
Fuzzy extractors (Dodis \textit{et al.}, Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key.
\textit{Reusable} fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. Simoens \textit{et al.} (IEEE S\&P, 2009) then showed deployed constructions suffer severe security breaks when reused.
Canetti \textit{et al.} (Eurocrypt 2016) proposed using computational security to sidestep this problem. They constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a \emph{sublinear} fraction of errors.
We introduce a generic approach to constructing reusable fuzzy extractors.
We define a new primitive called a \emph{reusable pseudoentropic isometry} that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces.
A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key.
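As a rough illustration of this two-step composition, the toy Python sketch below randomizes a noisy reading with a distance-preserving XOR mask and then applies a stand-in fuzzy extractor to the result; ToyIsometry and ToyFuzzyExtractor are hypothetical placeholders with no security whatsoever and are not the constructions of this paper.

# Toy sketch of the generic composition: randomize the noisy secret with a
# distance-preserving map, then run an ordinary fuzzy extractor on the result.
import os, hashlib

class ToyIsometry:
    """Distance-preserving randomization of a bit string: XOR with a fresh
    random pad.  (A real reusable pseudoentropic isometry must also preserve
    entropy across many independent instances.)"""
    def __init__(self, n_bytes):
        self.pad = os.urandom(n_bytes)
    def apply(self, w: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(w, self.pad))

class ToyFuzzyExtractor:
    """Stand-in for a Hamming-metric fuzzy extractor; it tolerates no errors
    at all and merely hashes, purely to show the data flow."""
    def gen(self, w: bytes):
        key = hashlib.sha256(b"key" + w).digest()
        helper = b""            # a real scheme outputs a helper string here
        return key, helper
    def rep(self, w: bytes, helper: bytes):
        return hashlib.sha256(b"key" + w).digest()

def gen(w: bytes):
    iso = ToyIsometry(len(w))
    key, helper = ToyFuzzyExtractor().gen(iso.apply(w))
    return key, (iso, helper)   # (iso, helper) plays the role of the public value

def rep(w_noisy: bytes, pub):
    iso, helper = pub
    return ToyFuzzyExtractor().rep(iso.apply(w_noisy), helper)

if __name__ == "__main__":
    w = os.urandom(16)
    key, pub = gen(w)
    assert rep(w, pub) == key   # an exact reading reproduces the key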
We propose reusable pseudoentropic isometries for the set difference and Hamming metrics.
The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008) yielding the first reusable fuzzy extractor that corrects a {\it linear} fraction of errors.
For the Hamming metric, we show that the second construction of Canetti \textit{et al.} (Eurocrypt 2016) can be seen as an instantiation of our framework.
In both cases, the pseudoentropic isometry's reusability requires the noisy secret's distribution to have entropy in each symbol of the alphabet.
Lastly, we implement our set difference solution and describe two use cases.
Improved Parameters for the Ring-TESLA Digital Signature Scheme
Akleylek et al. have proposed Ring-TESLA, a practical and efficient digital signature scheme based on the Ring Learning With Errors problem. However, we have identified problems with the parameters proposed for Ring-TESLA: we believe they neither ensure the correct operation of the scheme nor provide the targeted security levels, under either the provable Ring-TESLA reduction or an assessment of practical modern attacks such as lattice sieving.
We recommend new Ring-TESLA parameters that target more security levels and provide for correct, secure, and efficient instantiation. We describe the necessary preliminaries, recap the Ring-TESLA scheme, and present our parameter recommendations, selection methodology, and analysis.
We have implemented Ring-TESLA using our recommended parameters, and we place this software in the public domain.
Multi-key Analysis of Tweakable Even-Mansour with Applications to Minalpher and OPP
The tweakable Even-Mansour construction generalizes the conventional Even-Mansour scheme by replacing round keys with strings derived from a master key and a tweak. Besides providing plenty of inherent variability, such a design builds a tweakable block cipher from a lower-level primitive. In the present paper, we evaluate the multi-key security of TEM-1, one of the most commonly used one-round tweakable Even-Mansour schemes (introduced at CRYPTO 2015), which is constructed from a single $n$-bit permutation $E$ and a function $f(k,t)$, linear in $k$, from some tweak space to $\{0,1\}^n$. Based on the giant component theorem in random graph theory,
we propose collision-based multi-key attacks on TEM-1 in the known-plaintext setting. Furthermore,
inspired by the methodology of Fouque et al. presented at ASIACRYPT 2014, we devise a novel way
of detecting collisions to obtain memory-efficient attacks in the blockwise-adaptive chosen-plaintext
setting.
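For reference, the toy Python sketch below shows the one-round tweakable Even-Mansour structure, in which the public permutation's input and output are masked with the same value $f(k,t)$, i.e. $C = E(M \oplus f(k,t)) \oplus f(k,t)$; the concrete permutation and mask function used here are hypothetical placeholders, not the instances analyzed in the paper.

# Minimal sketch of the TEM-1 structure on a toy 16-bit block.
N = 16                                     # toy block size in bits

def toy_perm(x: int) -> int:
    # a fixed public permutation of {0,1}^N: an affine map mod 2^N,
    # which is a bijection because the multiplier is odd
    return (x * 0x2b5d + 0x1337) % (1 << N)

def toy_perm_inv(y: int) -> int:
    inv = pow(0x2b5d, -1, 1 << N)
    return ((y - 0x1337) * inv) % (1 << N)

def f(k: int, t: int) -> int:
    # toy mask that is linear in k for each fixed tweak t (bitwise AND with t);
    # real instantiations use other linear maps
    return k & t

def tem1_encrypt(k: int, t: int, m: int) -> int:
    mask = f(k, t)
    return toy_perm(m ^ mask) ^ mask       # C = E(M xor f(k,t)) xor f(k,t)

def tem1_decrypt(k: int, t: int, c: int) -> int:
    mask = f(k, t)
    return toy_perm_inv(c ^ mask) ^ mask

if __name__ == "__main__":
    k, t, m = 0xBEEF, 0x0042, 0x1234
    assert tem1_decrypt(k, t, tem1_encrypt(k, t, m)) == m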
As applications, we utilize our techniques to analyze the authenticated encryption algorithm Minalpher
(a second-round candidate of CAESAR) and OPP (proposed at EUROCRYPT 2016) in the multi-key setting.
First, we present our known-plaintext attacks on Minalpher and OPP without nonce misuse, which enable
us to recover almost all $O(2^{n/3})$ independent equivalent keys by making $O(2^{n/3})$ queries per key and using $O(2^{2n/3})$ memory overall. Moreover, after defining appropriate iterated
functions and accordingly changing the mode of creating chains, we improve the basic blockwise-adaptive
chosen-plaintext attack to make it applicable for the nonce-respecting setting.
While our attacks do not contradict the security proofs of Minalpher and OPP in the classical setting, nor pose an immediate threat to their use, our results demonstrate that their security margins in the multi-user setting should be carefully considered. We emphasize that this is the first third-party analysis of Minalpher and OPP.
Projective Arithmetic Functional Encryption and Indistinguishability Obfuscation From Degree-5 Multilinear Maps
In this work, we propose a variant of functional encryption called projective
arithmetic functional encryption (PAFE). Roughly speaking, our notion is like functional
encryption for arithmetic circuits, but where secret keys only yield partially decrypted values. These partially decrypted values can be linearly combined with known coefficients and the result can be tested to see if it is a small value.
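To fix ideas, the Python sketch below spells out one possible interface for this syntax: a function key yields only a partial decryption, and partial decryptions are later combined linearly with known coefficients and tested for smallness; the method names and the combine_and_test helper are our own guesses at an API, not the authors' formalization.

# Interface-level sketch of the PAFE syntax described above.
from abc import ABC, abstractmethod
from typing import List

class PAFE(ABC):
    """Projective arithmetic functional encryption: keys yield only partial
    decryptions, which may then be linearly combined with known coefficients
    and tested for being a small value."""

    @abstractmethod
    def setup(self, degree: int):
        """Produce keys supporting arithmetic circuits of the given degree."""

    @abstractmethod
    def keygen(self, msk, circuit) -> "FunctionKey": ...

    @abstractmethod
    def encrypt(self, mpk, x: List[int]) -> "Ciphertext": ...

    @abstractmethod
    def partial_decrypt(self, fk, ct) -> int:
        """Return a partially decrypted value, not the plain circuit output."""

def combine_and_test(partials: List[int], coeffs: List[int],
                     modulus: int, bound: int) -> bool:
    """Linearly combine partial decryptions with known coefficients and test
    whether the centred result is 'small' (within the bound)."""
    acc = sum(c * p for c, p in zip(coeffs, partials)) % modulus
    if acc > modulus // 2:
        acc -= modulus
    return abs(acc) <= bound

if __name__ == "__main__":
    # toy combination whose result cancels to a small value
    print(combine_and_test([5, 7, 3], [1, 1, -4], modulus=97, bound=2))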
We give a degree-preserving construction of PAFE from multilinear maps. That is, we show how to achieve PAFE for arithmetic circuits of degree $d$ using only degree-$d$ multilinear maps. Our construction is based on an assumption over such multilinear maps, which we justify in a generic model. We then turn to applying our notion of PAFE to one of the most pressing open problems in the foundations of cryptography: building secure indistinguishability obfuscation (iO) from simpler building blocks.
iO from degree-5 multilinear maps. Recently, the works of Lin [Eurocrypt 2016] and Lin-Vaikuntanathan [FOCS 2016] showed how to build iO from constant-degree multilinear maps. However, no explicit constant was given in these works, and an analysis of these published works shows that the degree requirement would be in excess of 30. The ultimate "dream" goal of this line of work would be to reduce the degree requirement all the way to 2, allowing for the use of well-studied bilinear maps, or barring that, to a low constant that may be supportable by alternative secure low-degree multilinear map candidates. We make substantial progress toward this goal by showing how to leverage PAFE for degree-5 arithmetic circuits to achieve iO, thus yielding the first iO construction from degree-5 multilinear maps.