All papers in 2018 (1249 results)

Last updated:  2018-12-30
Post-quantum verifiable random functions from ring signatures
Endre Abraham
One of the greatest challenges in exchanging seemingly random nonces or data, whether over a trusted or an untrusted channel, is the hardness of verifying the correctness of such output. If one of the parties or an eavesdropper can gain a game-theoretic advantage by manipulating this seed, the others can neither efficiently notice the modification nor hold the oracle accountable. Decentralized applications where an oracle can go unnoticed with biased outputs are highly vulnerable to attacks of this kind, which limits the applicability of such oracles even though they can bring great scalability to these systems. Verifiable random functions [1], introduced by Micali, can be viewed as keyed hash functions where the keys used are asymmetric. They allow the oracle to prove the correctness of a defined pseudorandom function on a seed s without actually making the key public, thus not compromising the unpredictability of the function. Our contribution is to provide a variant of this scheme and to prove its security against known quantum attacks and quantum oracles.
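A minimal sketch of the VRF interface described above, using a toy classical RSA-FDH-style instantiation (this is not the paper's post-quantum construction, and all parameters are illustratively small):

    import hashlib

    # Toy RSA-FDH-style VRF sketch: the proof is an RSA signature on the
    # input, and the VRF output is a hash of that proof. Only the key holder
    # can compute the output, but anyone can verify it with the public key.
    p, q = 10007, 10009              # toy primes; real moduli need >= 2048 bits
    N = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def h_to_group(m: bytes) -> int:
        return int.from_bytes(hashlib.sha256(m).digest(), "big") % N

    def vrf_prove(m: bytes):
        proof = pow(h_to_group(m), d, N)     # requires the secret key d
        output = hashlib.sha256(str(proof).encode()).digest()
        return output, proof

    def vrf_verify(m: bytes, output: bytes, proof: int) -> bool:
        ok = pow(proof, e, N) == h_to_group(m)   # public check with (N, e)
        return ok and output == hashlib.sha256(str(proof).encode()).digest()

    out, pi = vrf_prove(b"nonce-42")
    assert vrf_verify(b"nonce-42", out, pi)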
Last updated:  2018-12-30
Pooled Mining Makes Selfish Mining Tricky
Suhyeon Lee, Seungjoo Kim
Bitcoin, the first successful cryptocurrency, uses the blockchain structure and the PoW mechanism to generate blocks. PoW makes it difficult for an adversary to control the network unless she holds over 50\% of the total hashrate. Another cryptocurrency, Ethereum, also uses this mechanism, and it has not caused problems so far. In PoW research, however, several attack strategies have been studied. In this paper, we study selfish mining in the pooled mining environment and find that pooled mining exposes information about the block the adversary is mining to arbitrary miners. Using this leaked information, other miners can exploit the selfish miner. At the same time, the adversary earns less revenue than she would by mining honestly. Because of the existence of our counter method, an adversary using pooled mining cannot easily perform selfish mining on Bitcoin or other blockchains using PoW.
Last updated:  2018-12-30
On Some Computational Problems in Local Fields
Yingpu Deng, Lixia Luo, Guanju Xiao
Lattices in Euclidean spaces are important research objects in geometric number theory, and they have important applications in many areas, such as cryptology. The shortest vector problem (SVP) and the closest vector problem (CVP) are two famous computational problems about lattices. In this paper, we define so-called p-adic lattices, and consider the p-adic analogues of SVP and CVP in local fields. We find that, in contrast with lattices in Euclidean spaces, the situation is completely different and interesting. We also develop relevant algorithms, indicating that these problems are computable.
Last updated:  2019-02-20
Multi-Party Oblivious RAM based on Function Secret Sharing and Replicated Secret Sharing Arithmetic
Marina Blanton, Chen Yuan
In this work, we study the problem of constructing oblivious RAM for secure multi-party computation to obliviously access memory at private locations during secure computation. We build on the recent two-party Floram construction, which uses function secret sharing for a point function and incurs $O(\sqrt N)$ secure computation and $O(N)$ local computation per ORAM access for an $N$-element data set. Our new construction, Top ORAM, is designed for multi-party computation with $n \ge 3$ parties and uses replicated secret sharing. We reduce the secure computation component to $O(\log N)$, which has a notable effect on performance. As a result, when Top ORAM is instantiated with $n=3$ parties, it outperforms all other 2- and 3-party ORAM constructions that we tested for datasets up to a few million elements (at which point the $O(N)$ local work becomes the bottleneck). To accomplish the above, we design a number of secure $n$-party protocols for semi-honest adversaries in the honest-majority setting with replicated secret sharing. They can be instantiated over any finite ring, which has the advantage of permitting native hardware arithmetic with rings $\mathbb{Z}_{2^k}$ for some $k$. We also provide conversion procedures between other, more common types of secret sharing and replicated secret sharing to enable integration of Top ORAM with other secure computation frameworks. As an additional contribution of this work, we show how our ORAM techniques can be used to realize private binary search at the cost of only a single ORAM access and $\log N$ comparisons, instead of the conventional $O(\log N)$ ORAM accesses and comparisons. Because of this property, our binary search is significantly faster than binary search using other ORAM schemes for all ranges of values that we tested.
Last updated:  2018-12-30
Efficient Information Theoretic Multi-Party Computation from Oblivious Linear Evaluation
Louis Cianciullo, Hossein Ghodosi
Oblivious linear evaluation (OLE) is a two party protocol that allows a receiver to compute an evaluation of a sender's private, degree $1$ polynomial, without letting the sender learn the evaluation point. OLE is a special case of oblivious polynomial evaluation (OPE) which was first introduced by Naor and Pinkas in 1999. In this article we utilise OLE for the purpose of computing multiplication in multi-party computation (MPC). MPC allows a set of $n$ mutually distrustful parties to privately compute any given function across their private inputs, even if up to $t<n$ of these participants are corrupted and controlled by an external adversary. In terms of efficiency and communication complexity, multiplication in MPC has always been a large bottleneck. The typical method employed by most current protocols has been to utilise Beaver's method, which relies on some precomputed information. In this paper we introduce an OLE-based MPC protocol which also relies on some precomputed information. Our proposed protocol has a more efficient communication complexity than Beaver's protocol by a multiplicative factor of $t$. Furthermore, to compute a share to a multiplication, a participant in our protocol need only communicate with one other participant; unlike Beaver's protocol which requires a participant to contact at least $t$ other participants.
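For reference, the toy sketch below shows the OLE functionality itself, i.e. what an ideal trusted party would compute; it is not a secure protocol, and the modulus and values are illustrative:

    # OLE functionality: the sender holds a degree-1 polynomial f(x) = a*x + b,
    # the receiver holds an evaluation point t and learns only f(t); the
    # sender learns nothing about t.
    P = 2**61 - 1                    # illustrative prime modulus

    def ole_ideal(a: int, b: int, t: int) -> int:
        return (a * t + b) % P

    a, b = 12345, 67890              # sender's private polynomial
    t = 31337                        # receiver's private evaluation point
    print(ole_ideal(a, b, t))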
Last updated:  2018-12-30
Boolean Exponent Splitting
Michael Tunstall, Louiza Papachristodoulou, Kostas Papagiannopoulos
A typical countermeasure against side-channel attacks consists of masking intermediate values with a random number. In symmetric cryptographic algorithms, Boolean shares of the secret are typically used, whereas in asymmetric algorithms the secret exponent/scalar is typically masked using algebraic properties. This paper presents a new exponent splitting technique with minimal impact on performance based on Boolean shares. More precisely, it is shown how an exponent can be efficiently split into two shares, where the exponent is the XOR sum of the two shares, typically requiring only an extra register and a few register copies per bit. Our novel exponentiation and scalar multiplication algorithms can be randomized for every execution and combined with other blinding techniques. In this way, both the exponent and the intermediate values can be protected against various types of side-channel attacks. We perform a security evaluation of our algorithms using the mutual information framework and provide proofs that they are secure against first-order side-channel attacks. The side-channel resistance of the proposed algorithms is also practically verified with test vector leakage assessment performed on Xilinx's Zynq zc702 evaluation board.
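A functional illustration of the splitting idea, with modular exponentiation standing in for scalar multiplication: the exponent k exists only as two XOR shares. This toy recombines each bit in the clear, so it demonstrates correctness only, not the leakage-protected algorithms of the paper:

    import secrets

    p, g = 2**255 - 19, 5            # illustrative group parameters

    def split(k: int, bits: int):
        k1 = secrets.randbits(bits)
        return k1, k ^ k1            # k = k1 XOR k2

    def exp_split(g, k1, k2, bits, mod):
        acc = 1
        for i in reversed(range(bits)):
            acc = acc * acc % mod                  # square
            if ((k1 >> i) ^ (k2 >> i)) & 1:        # bit of k, from the shares
                acc = acc * g % mod                # multiply
        return acc

    k = secrets.randbits(128)
    k1, k2 = split(k, 128)
    assert exp_split(g, k1, k2, 128, p) == pow(g, k, p)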
Last updated:  2020-03-08
XMSS and Embedded Systems - XMSS Hardware Accelerators for RISC-V
Wen Wang, Bernhard Jungk, Julian Wälde, Shuwen Deng, Naina Gupta, Jakub Szefer, Ruben Niederhagen
We describe a software-hardware co-design for the hash-based post-quantum signature scheme XMSS on a RISC-V embedded processor. We provide software optimizations for the XMSS reference implementation for SHA-256 parameter sets and several hardware accelerators that make it possible to balance area usage and performance based on individual needs. By integrating our hardware accelerators into the RISC-V processor, the version with the best time-area product generates a key pair (that can be used to generate 2^10 signatures) in 3.44s, achieving an over 54x speedup in wall-clock time compared to the pure software version. For such a key pair, signature generation takes less than 10 ms and verification takes less than 6 ms, bringing speedups of over 42x and 17x respectively. This shows that embedded systems equipped with scheme-specific hardware accelerators are able to use XMSS in practice. We tested and measured the cycle count of our implementation on an Intel Cyclone V SoC FPGA. The integration of our XMSS accelerators into an embedded RISC-V processor shows that it is possible to use hash-based post-quantum signatures for a large variety of embedded applications.
Last updated:  2019-05-10
Further Lower Bounds for Structure-Preserving Signatures in Asymmetric Bilinear Groups
Essam Ghadafi
Structure-Preserving Signatures (SPSs) are a useful tool for the design of modular cryptographic protocols. A recent series of works has shown that by limiting the message space of those schemes to the set of Diffie-Hellman (DH) pairs, it is possible to circumvent the known lower bounds in the Type-3 bilinear group setting, thus obtaining the shortest signatures, consisting of only 2 elements from the shorter source group. It has been shown that such a variant yields efficiency gains for some cryptographic constructions, including attribute-based signatures and direct anonymous attestation. However, only the cases of signing a single DH pair, or a DH pair and a vector from $\mathbb{Z}_p$, have been considered. Signing a vector of group elements is required for various applications of SPSs, especially if the aim is to forgo relying on heuristic assumptions. An open question is whether such an improved lower bound also applies to signing a vector of $\ell > 1$ messages. We answer this question negatively for schemes existentially unforgeable under an adaptive chosen-message attack (EUF-CMA), whereas we answer it positively for schemes existentially unforgeable under a random-message attack (EUF-RMA) and those existentially unforgeable under a combined chosen-random-message attack (EUF-CMA-RMA). The latter notion lies between the two former notions: it allows the adversary to adaptively choose part of the message to be signed, whereas the remaining part of the message is chosen uniformly at random by the signer. Another open question is whether strongly existentially unforgeable under an adaptive chosen-message attack (sEUF-CMA) schemes with 2-element signatures exist. We answer this question negatively, proving it is impossible to construct sEUF-CMA schemes with 2-element signatures even if the signature consists of elements from both source groups. On the other hand, we prove that sEUF-RMA and sEUF-CMA-RMA schemes with 2-element (unilateral) signatures are possible, by giving constructions for those notions. Among other things, our findings show a gap between random-message/combined chosen-random-message security and chosen-message security in this setting.
Last updated:  2019-01-03
Error Amplification in Code-based Cryptography
Alexander Nilsson, Thomas Johansson, Paul Stankovski Wagner
Code-based cryptography is one of the main techniques enabling cryptographic primitives in a post-quantum scenario. In particular, the MDPC scheme is a basic scheme from which many other schemes have been derived. These schemes rely on iterative decoding in the decryption process and thus have a certain small probability $p$ of having a decryption (decoding) error. In this paper we show a very fundamental and important property of code-based encryption schemes. Given one initial error pattern that fails to decode, the time needed to generate another message that fails to decode is strictly much less than $1/p$. We show this by developing a method for fast generation of undecodable error patterns (error pattern chaining), which additionally proves that a measure of closeness in ciphertext space can be exploited through its strong linkage to the difficulty of decoding these messages. Furthermore, if side-channel information is also available (time to decode), then the initial error pattern no longer needs to be given since one can be easily generated in this case. These observations are fundamentally important because they show that a, say, $128$-bit encryption scheme is not inherently safe from reaction attacks even if it employs a decoder with a failure rate of $2^{-128}$. In fact, unless explicit protective measures are taken, having a failure rate at all -- of any magnitude -- can pose a security problem because of the error amplification effect of our method. A key-recovery reaction attack was recently shown on the MDPC scheme as well as similar schemes, taking advantage of decoding errors in order to recover the secret key. It was also shown that knowing the number of iterations in the iterative decoding step, which could be received in a timing attack, would also enable and enhance such an attack. In this paper we apply our error pattern chaining method to show how to improve the performance of such reaction attacks in the CPA case. We show that after identifying a single decoding error (or a decoding step taking more time than expected in a timing attack), we can adaptively create new error patterns that have a much higher decoding error probability than for a random error. This leads to a significant improvement of the attack based on decoding errors in the CPA case and it also gives the strongest known attack on MDPC-like schemes, both with and without using side-channel information.
Last updated:  2020-12-01
Implementing Token-Based Obfuscation under (Ring) LWE
Cheng Chen, Nicholas Genise, Daniele Micciancio, Yuriy Polyakov, Kurt Rohloff
Token-based obfuscation (TBO) is an interactive approach to cryptographic program obfuscation that was proposed by Goldwasser et al. (STOC 2013) as a potentially more practical alternative to conventional non-interactive security models, such as Virtual Black Box (VBB) and Indistinguishability Obfuscation. We introduce a query-revealing variant of TBO, and implement in PALISADE several optimized query-revealing TBO constructions based on (Ring) LWE covering a relatively broad spectrum of capabilities: linear functions, conjunctions, and branching programs. Our main focus is the obfuscation of general branching programs, which are asymptotically more efficient and expressive than permutation branching programs traditionally considered in program obfuscation studies. Our work implements read-once branching programs that are significantly more advanced than those implemented by Halevi et al. (ACM CCS 2017), and achieves program evaluation runtimes that are two orders of magnitude smaller. Our implementation introduces many algorithmic and code-level optimizations, as compared to the original theoretical construction proposed by Chen et al. (CRYPTO 2018). These include new trapdoor sampling algorithms for matrices of ring elements, extension of the original LWE construction to Ring LWE (with a hardness proof for non-uniform Ring LWE), asymptotically and practically faster token generation procedure, Residue Number System procedures for fast large integer arithmetic, and others. We also present efficient implementations for TBO of conjunction programs and linear functions, which significantly outperform prior implementations of these obfuscation capabilities, e.g., our conjunction obfuscation implementation is one order of magnitude faster than the VBB implementation by Cousins et al. (IEEE S&P 2018). We also provide an example where linear function TBO is used for classifying an ovarian cancer data set. All implementations done as part of this work are packaged in a TBO toolkit that is made publicly available.
Last updated:  2019-01-11
Using the Cloud to Determine Key Strengths -- Triennial Update
M. Delcourt, T. Kleinjung, A. K. Lenstra, S. Nath, D. Page, N. Smart
We develop a new methodology to assess cryptographic key strength using cloud computing, by calculating the true economic cost of (symmetric- or private-) key retrieval for the most common cryptographic primitives. Although the present paper gives the current year (2018), 2015, 2012 and 2011 costs, more importantly it provides the tools and infrastructure to derive new data points at any time in the future, while allowing for improvements such as new algorithmic approaches. Over time the resulting data points will provide valuable insight into the selection of cryptographic key sizes. For instance, we observe that the past clear cost-advantage of total cost of ownership compared to cloud computing seems to be evaporating.
Last updated:  2019-06-29
Tight Reductions for Diffie-Hellman Variants in the Algebraic Group Model
Taiga Mizuide, Atsushi Takayasu, Tsuyoshi Takagi
Fuchsbauer, Kiltz, and Loss~(Crypto'18) gave a simple and clean definition of an \emph{algebraic group model~(AGM)} that lies in between the standard model and the generic group model~(GGM). Specifically, an algebraic adversary is able to exploit group-specific structures as in the standard model, while the AGM still provides meaningful hardness results as the GGM does. As an application of the AGM, they showed a tight computational equivalence between the computational Diffie-Hellman~(CDH) assumption and the discrete logarithm~(DL) assumption. For this purpose, they used the square Diffie-Hellman assumption as a bridge, i.e., they first proved the equivalence between the DL assumption and the square Diffie-Hellman assumption, then used the known equivalence between the square Diffie-Hellman assumption and the CDH assumption. In this paper, we provide an alternative proof that directly shows the tight equivalence between the DL assumption and the CDH assumption. The crucial benefit of the direct reduction is that we can easily extend the approach to variants of the CDH assumption, e.g., the bilinear Diffie-Hellman assumption. Indeed, we show several tight computational equivalences and discuss the applicability of our techniques.
Last updated:  2018-12-30
Cryptanalysis of the Full DES and the Full 3DES Using a New Linear Property
Tomer Ashur, Raluca Posteuca
In this paper we extend the work presented by Ashur and Posteuca in BalkanCryptSec 2018, by designing 0-correlation key-dependent linear trails covering more than one round of DES. First, we design a 2-round 0-correlation key-dependent linear trail which we then connect to Matsui's original trail in order to obtain a linear approximation covering the full DES and 3DES. We show how this approximation can be used for a key recovery attack against both ciphers. To the best of our knowledge, this paper is the first to use this kind of property to attack a symmetric-key algorithm, and our linear attack against 3DES is the first statistical attack against this cipher.
Last updated:  2018-12-30
Exploring Crypto Dark Matter: New Simple PRF Candidates and Their Applications
Dan Boneh, Yuval Ishai, Alain Passelègue, Amit Sahai, David J. Wu
Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. We explore a new space of plausible PRF candidates that are obtained by mixing linear functions over different small moduli. Our candidates are motivated by the goals of maximizing simplicity and minimizing complexity measures that are relevant to cryptographic applications such as secure multiparty computation. We present several concrete new PRF candidates that follow the above approach. Our main candidate is a weak PRF candidate (whose conjectured pseudorandomness only holds for uniformly random inputs) that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. This candidate can be implemented by depth-2 $ACC^0$ circuits. We also put forward a similar depth-3 strong PRF candidate. Finally, we present a different weak PRF candidate that can be viewed as a deterministic variant of ``Learning Parity with Noise'' (LPN) where the noise is obtained via a mod-3 inner product of the input and the key. The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of depth-2 $ACC^0$ circuits and width-3 branching programs, interpolation and property testing for sparse polynomials, and natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the ``piecewise-linear'' structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs). Our advantage over competing approaches is maximized in the setting of MPC with an honest majority, or alternatively, MPC with preprocessing. Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging error-correcting codes.
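A sketch of the shape of the main weak-PRF candidate as stated above, i.e. a secret mod-2 linear map followed by a public mod-3 linear map; the dimensions are illustrative, not the paper's concrete parameters:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, t = 256, 256, 128

    K = rng.integers(0, 2, size=(m, n))   # secret key: mod-2 linear map
    G = rng.integers(0, 3, size=(t, m))   # public: mod-3 linear map

    def f(x):
        w = K @ x % 2                     # secret mod-2 layer
        return G @ w % 3                  # public mod-3 layer mixing the moduli

    x = rng.integers(0, 2, size=n)        # weak PRF: inputs uniformly random
    print(f(x))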
Last updated:  2018-12-30
Changing Points in APN Functions
Lilya Budaghyan, Claude Carlet, Tor Helleseth, Nikolay Kaleyski
We investigate the differential properties of a construction in which a given function $F : \mathbb{F}_{2^n} \rightarrow \mathbb{F}_{2^n}$ is modified at $K \in \mathbb{N}$ points in order to obtain a new function $G$. This is motivated by the question of determining the minimum Hamming distance between two APN functions and can be seen as a generalization of a previously studied construction in which a given function is modified at a single point. We derive necessary and sufficient conditions which the derivatives of $F$ must satisfy for $G$ to be APN, and use these conditions as the basis for an efficient filtering procedure for searching for APN functions whose value differs from that of a given APN function $F$ at a given set of points. We define a quantity $m_F$ related to $F$ counting the number of derivatives of a given type, and derive a lower bound on the distance between an APN function $F$ and its closest APN neighbor in terms of $m_F$. Furthermore, the value $m_F$ is shown to be invariant under CCZ-equivalence and easier to compute in the case of quadratic functions. We give a formula for $m_F$ in the case of $F(x) = x^3$ which allows us to express a lower bound on the distance between $F(x)$ and the closest APN function in terms of the dimension $n$ of the underlying field. We observe that this distance tends to infinity with $n$. We also compute $m_F$ and the distance to the closest APN function for a representative $F$ from each of the switching classes over $\mathbb{F}_{2^n}$ for $4 \le n \le 8$. For a given function $F$ and value $v$, we describe an efficient method for finding all sets of points $\{ u_1, u_2, \dots, u_K \}$ such that setting $G(u_i) = F(u_i) + v$ and $G(x) = F(x)$ for $x \ne u_i$ is APN.
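For concreteness, a small script checking the APN property via the derivative condition (every derivative $D_a F(x) = F(x+a) + F(x)$ with $a \ne 0$ takes each value at most twice), instantiated for $F(x) = x^3$ over $\mathbb{F}_{2^8}$ with the AES field polynomial as an illustrative choice:

    def gf_mul(a, b, poly=0x11B, n=8):
        # multiplication in GF(2^n) modulo the chosen irreducible polynomial
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a >> n:
                a ^= poly
        return r

    def is_apn(F, n=8):
        for a in range(1, 1 << n):
            counts = {}
            for x in range(1 << n):
                d = F(x ^ a) ^ F(x)       # derivative in direction a (+ is XOR)
                counts[d] = counts.get(d, 0) + 1
                if counts[d] > 2:         # more than 2 solutions: not APN
                    return False
        return True

    cube = lambda x: gf_mul(gf_mul(x, x), x)   # F(x) = x^3
    print(is_apn(cube))                        # True: x^3 is APN for every n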
Last updated:  2018-12-30
This is Not an Attack on Wave
Thomas Debris-Alazard, Nicolas Sendrier, Jean-Pierre Tillich
Very recently, a preprint ``Cryptanalysis of the Wave Signature Scheme'', eprint 2018/1111, appeared claiming to break Wave ``Wave: A New Code-Based Signature Scheme'', eprint 2018/996. We explain here why this claim is incorrect.
Last updated:  2019-08-12
New Hybrid Method for Isogeny-based Cryptosystems using Edwards Curves
Suhri Kim, Kisoon Yoon, Jihoon Kwon, Young-Ho Park, Seokhie Hong
Along with resistance against quantum computers, isogeny-based cryptography offers attractive cryptosystems due to small key sizes and compatibility with current elliptic curve primitives. While the state-of-the-art implementation uses Montgomery curves, which facilitate efficient elliptic curve arithmetic and isogeny computations, other forms of elliptic curves can be used to produce an efficient result. In this paper, we present a new hybrid method for isogeny-based cryptosystems using Edwards curves. Unlike previous hybrid methods, we exploit Edwards curves for recovering the curve coefficients and Montgomery curves for the other operations. To this end, we first carefully examine and compare the computational costs of Montgomery and Edwards isogenies. Then, we fine-tune and tailor Edwards isogenies in order to blend them with Montgomery isogenies efficiently. Additionally, we present implementation results of Supersingular Isogeny Diffie--Hellman (SIDH) key exchange using the proposed method. We demonstrate that our method outperforms the previously proposed hybrid method and is as fast as the Montgomery-only implementation. Our results show that proper use of Edwards curves for isogeny-based cryptosystems can be quite practical.
Last updated:  2018-12-24
Instant Privacy-Preserving Biometric Authentication for Hamming Distance
Joohee Lee, Dongwoo Kim, Duhyeong Kim, Yongsoo Song, Junbum Shin, Jung Hee Cheon
In recent years, there has been enormous research attention on privacy-preserving biometric authentication, which enables a user to verify him or herself to a server without disclosing raw biometric information. Since biometrics are irrevocable once exposed, it is very important to protect their privacy. In IEEE TIFS 2018, Zhou and Ren proposed a privacy-preserving user-centric biometric authentication scheme named PassBio, where the end-users encrypt their own templates, and the authentication server never sees the raw templates during the authentication phase. In their approach, it takes about 1 second to encrypt and compare 2000-bit templates based on Hamming distance on a laptop. However, this result is still far from practice because the size of templates used in commercialized products is much larger: according to the NIST IREX IX report of 2018, which analyzed 46 iris recognition algorithms, the size of their templates varies from 4,632 bits (579 bytes) to 145,832 bits (18,229 bytes). In this paper, we propose a new privacy-preserving user-centric biometric authentication scheme (HDM-PPBA) based on Hamming distance, which shows a big improvement in efficiency over previous works. It is based on our new single-key function-hiding inner product encryption, which encrypts and computes the Hamming distance of 145,832-bit binary templates in about 0.3 seconds on an Intel Core i5 2.9GHz CPU. We show that it satisfies simulation-based security under the hardness assumption of the Learning with Errors (LWE) problem. The storage requirements, bandwidth and time complexity of HDM-PPBA depend linearly on the bit-length of the biometrics, and it is applicable with high efficiency to any of the large templates used in the NIST IREX IX report.
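The plaintext computation that such a scheme emulates under encryption is simply a thresholded Hamming distance; a minimal sketch with illustrative sizes and threshold:

    import secrets

    BITS = 145_832                   # iris template size cited from NIST IREX IX
    enrolled = secrets.randbits(BITS)
    probe = enrolled ^ (secrets.randbits(BITS) & ((1 << 1000) - 1))  # noisy copy

    def hamming(a: int, b: int) -> int:
        return (a ^ b).bit_count()   # popcount of the XOR (Python 3.10+)

    THRESHOLD = 2000                 # illustrative match threshold
    print(hamming(enrolled, probe) <= THRESHOLD)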
Last updated:  2018-12-23
Deep Learning vs Template Attacks in front of fundamental targets: experimental study
Yevhenii Zotkin, Francis Olivier, Eric Bourbao
This study compares the experimental results of Template Attacks (TA) and of the Deep Learning (DL) techniques called Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN) on classical use cases often encountered in the side-channel analysis of cryptographic devices (restricted to secret-key algorithms). The starting point is their comparative effectiveness against masked encryption, which appears intrinsically vulnerable. Surprisingly, TA improved with Principal Component Analysis (PCA) and normalization holds its own against the latest DL methods, which demand more computing power. Another result is that both approaches face great difficulty against static targets such as secret data transfers or the key schedule. The explanation for these observations resides in cross-matching. Beyond masking, the effects of other protections such as jittering, shuffling and coding size are also tested. At the end of the day, the benefit of DL techniques stands in the better resistance of CNNs to misalignment.
Last updated:  2019-02-27
Multi-Target Attacks on the Picnic Signature Scheme and Related Protocols
Itai Dinur, Niv Nadler
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST's post-quantum standardization project. Among all submissions to NIST's project, Picnic is one of the most innovative, making use of recent progress in construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to $S$ signatures, produced by a single or by several users, our attack can (information theoretically) recover the $\kappa$-bit signing key of a user in complexity of about $2^{\kappa - 7}/S$. This is faster than Picnic's claimed $2^{\kappa}$ security against classical (non-quantum) attacks by a factor of $2^7 \cdot S$ (as each signature contains about $2^7$ attack targets). Whereas in most multi-target attacks, the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $\kappa = 128$ has time complexity of $T = 2^{77}$ for $S = 2^{64}$. Alternatively, we can reach the information theoretic complexity of $T = 2^{64}$ for $S = 2^{57}$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way that the Picnic signing algorithm uses a pseudo-random generator. The weakness is fixed in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
Last updated:  2022-03-04
Countering Block Withholding Attack Efficiently
Suhyeon Lee, Seungjoo Kim
Bitcoin, the well-known cryptocurrency, selected Proof-of-Work (PoW) for its security. The PoW mechanism incentivizes participants and deters attacks on the network. Bitcoin seems to have operated a stable distributed network with PoW until now. Researchers have found, however, some vulnerabilities in PoW such as selfish mining, the block withholding attack, and so on. Especially after Rosenfeld suggested the block withholding attack and Eyal made this attack practical, many variants and countermeasures have been proposed. Most countermeasures, however, were accompanied by changes in the mining algorithm to make the attack impossible, which lowered their practical adaptability. In this paper, we propose a countermeasure to prevent the block withholding attack effectively. Mining pools can adopt our method without changing their mining environment.
Last updated:  2019-04-24
MProve: A Proof of Reserves Protocol for Monero Exchanges
Arijit Dutta, Saravanan Vijayakumaran
Theft from cryptocurrency exchanges due to cyberattacks or internal fraud is a major problem. Exchanges can partially alleviate customer concerns by providing periodic proofs of solvency. We describe MProve, a proof of reserves protocol for Monero exchanges which can be combined with a known proof of liabilities protocol to provide a proof of solvency. It is the first protocol for Monero which provides address privacy by allowing an exchange to hide its own addresses within a larger anonymity set. MProve also provides a simple proof of non-collusion between exchanges.
Last updated:  2018-12-19
Teleportation-based quantum homomorphic encryption scheme with quasi-compactness and perfect security
Min Liang
Quantum homomorphic encryption (QHE) is an important cryptographic technology for delegated quantum computation. It enables a remote Server to perform quantum computation on encrypted quantum data, while the specific algorithm performed by the Server need not be known to the Client. Quantum fully homomorphic encryption (QFHE) is a QHE that satisfies both compactness and $\mathcal{F}$-homomorphism, i.e., it is homomorphic for all quantum circuits. However, Yu et al. [Phys. Rev. A 90, 050303(2014)] proved a negative result: assuming interaction is not allowed, it is impossible to construct a perfectly secure QFHE scheme. This article therefore focuses on non-interactive and perfectly secure QHE schemes with a loosened requirement, specifically quasi-compactness. This article defines the encrypted gate, denoted by $EG[U]:|\alpha\rangle\rightarrow\left((a,b),Enc_{a,b}(U|\alpha\rangle)\right)$. We present a gate-teleportation-based two-party computation scheme for $EG[U]$, where one party provides an arbitrary quantum state $|\alpha\rangle$ as input and obtains the encrypted $U$-computing result $Enc_{a,b}(U|\alpha\rangle)$, while the other party obtains the random bits $a,b$. Based on $EG[P^x](x\in\{0,1\})$, we propose a method to remove the $P$-error generated in the homomorphic evaluation of the $T/T^\dagger$-gate. Using this method, we design two non-interactive and perfectly secure QHE schemes named \texttt{GT} and \texttt{VGT}. Both of them are $\mathcal{F}$-homomorphic and quasi-compact (the decryption complexity depends on the $T/T^\dagger$-gate complexity). Assuming $\mathcal{F}$-homomorphism, non-interaction and perfect security are necessary properties, the quasi-compactness is proved to be bounded by $O(M)$, where $M$ is the total number of $T/T^\dagger$-gates in the evaluated circuit. \texttt{VGT} is proved to be optimal and achieves $M$-quasi-compactness. In our QHE schemes, decryption would be inefficient if the evaluated circuit contained an exponential number of $T/T^\dagger$-gates. Thus our schemes are suitable for homomorphic evaluation of any quantum circuit with low $T/T^\dagger$-gate complexity, such as any polynomial-size quantum circuit or any quantum circuit with a polynomial number of $T/T^\dagger$-gates.
Last updated:  2018-12-19
Revisiting Orthogonal Lattice Attacks on Approximate Common Divisor Problems and their Applications
Jun Xu, Santanu Sarkar, Lei Hu
In this paper, we revisit three existing types of orthogonal lattice (OL) attacks and propose optimized cases to solve approximate common divisor (ACD) problems. In order to reduce both space and time costs, we also construct an improved lattice using the rounding technique. Further, we present asymptotic formulas for the time complexities of our optimizations as well as of the three known OL attacks. Besides, we give specific conditions under which the optimized OL attacks can work, and show how the attack ability depends on the blocksize $\beta$ in the BKZ-$\beta$ algorithm. We then put forward a method to estimate the concrete cost of solving random ACD instances, which can be used in the choice of practical parameters for ACD problems. Finally, we give security estimates of some ACD-based FHE constructions from the literature and also analyze the implicit factorization problem with a sufficient number of samples. In all of the above situations, our optimized OL attack using the rounding technique performs fastest in practice.
Last updated:  2018-12-19
On the Decoding Failure Rate of QC-MDPC Bit-Flipping Decoders
Nicolas Sendrier, Valentin Vasseur
Quasi-cyclic moderate density parity check codes allow the design of McEliece-like public-key encryption schemes with compact keys and a security that provably reduces to hard decoding problems for quasi-cyclic codes. In particular, QC-MDPC are among the most promising code-based key encapsulation mechanisms (KEM) that are proposed to the NIST call for standardization of quantum safe cryptography (two proposals, BIKE and QC-MDPC KEM). The first generation of decoding algorithms suffers from a small, but not negligible, decoding failure rate (DFR in the order of $10^{-7}$ to $10^{-10}$). This allows a key recovery attack presented by Guo, Johansson, and Stankovski (GJS attack) at Asiacrypt 2016 which exploits a small correlation between the faulty message patterns and the secret key of the scheme, and limits the usage of the scheme to KEMs using ephemeral public keys. It does not impact the interactive establishment of secure communications (e.g. TLS), but the use of static public keys for asynchronous applications (e.g. email) is rendered dangerous. Understanding and improving the decoding of QC-MDPC is thus of interest for cryptographic applications. In particular, finding parameters for which the failure rate is provably negligible (typically as low as $2^{-64}$ or $2^{-128}$) would allow static keys and increase the applicability of the mentioned cryptosystems. We study here a simple variant of bit-flipping decoding, which we call step-by-step decoding. It has a higher DFR but its evolution can be modelled by a Markov chain, within the theoretical framework of Julia Chaulet's PhD thesis. We study two other, more efficient, decoders. One is the textbook algorithm. The other is (close to) the BIKE decoder. For all those algorithms we provide simulation results, and, assuming an evolution similar to the step-by-step decoder, we extrapolate the value of the DFR as a function of the block length. This gives an indication of how much the code parameters must be increased to ensure resistance to the GJS attack.
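A minimal sketch of the textbook bit-flipping idea underlying the decoders discussed here (a toy random sparse code, not QC-MDPC, and not the BIKE or step-by-step variants):

    import numpy as np

    rng = np.random.default_rng(0)
    r, n, w = 60, 120, 5                       # toy code: n columns of weight w
    H = np.zeros((r, n), dtype=np.uint8)       # sparse parity-check matrix
    for j in range(n):
        H[rng.choice(r, size=w, replace=False), j] = 1

    def bit_flip(H, s, max_iter=50):
        """Recover e with H e = s (mod 2), or None on decoding failure."""
        e = np.zeros(H.shape[1], dtype=np.uint8)
        syn = s.copy()
        for _ in range(max_iter):
            if not syn.any():
                return e                       # syndrome cleared: success
            upc = H.T @ syn                    # unsatisfied parity checks per bit
            e[upc == upc.max()] ^= 1           # flip the most suspicious bits
            syn = (s + H @ e) % 2
        return None                            # a decoding failure (DFR event)

    err = np.zeros(n, dtype=np.uint8)
    err[[3, 77]] = 1
    dec = bit_flip(H, H @ err % 2)
    print(dec is not None and np.array_equal(dec, err))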
Last updated:  2018-12-19
ARPA Whitepaper
Derek Zhang, Alex Su, Felix Xu, Jiang Chen
We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious majority condition using an information-theoretic Message Authentication Code (MAC), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain, cryptographically, while offloading the heavy computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support Private Smart Contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data during data-at-use, so that data ownership and data usage are safely separated. Last but not least, the computation and verification processes are separated, which can be perceived as computational sharding; this effectively makes the transaction processing speed linear in the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution to any blockchain system. Smart Contracts\cite{smartcontract} will be used as a bridge to link the blockchain and computation networks. Additionally, they will be used as verifiers to ensure that outsourced computation is completed correctly. In order to achieve this, we first develop a general MPC network with advanced features, such as: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
Last updated:  2019-03-20
Cryptanalysis of a code-based one-time signature
Jean-Christophe Deneuville, Philippe Gaborit
In 2012, Lyubashevsky introduced a new framework for building lattice-based signature schemes without resorting to any trapdoor (such as GPV [6] or NTRU [7]). The idea is to sample a set of short lattice elements and construct the public key as a Short Integer Solution (SIS for short) instance. Signatures are obtained using a small subset sum of the secret key, hidden by a (large) Gaussian mask. (Information leakage is dealt with using rejection sampling.) Recently, Persichetti proposed an efficient adaptation of this framework to coding theory [12]. In this paper, we show that this adaptation cannot be secure, even for one-time signatures (OTS), due to an inherent difference between bounds in Hamming and Euclidean metrics. The attack consists in rewriting a signature as a noisy syndrome decoding problem, which can be handled efficiently using the extended bit flipping decoding algorithm. We illustrate our results by breaking Persichetti’s OTS scheme built upon this approach [12]: using a single signature, we recover the secret (signing) key in about the same amount of time as required for a couple of signature verifications.
Last updated:  2018-12-18
The Lord of the Shares: Combining Attribute-Based Encryption and Searchable Encryption for Flexible Data Sharing
Antonis Michalas
Secure cloud storage is considered one of the most important issues that both businesses and end-users take into account before moving their private data to the cloud. Lately, we have seen some interesting approaches that are based either on the promising concept of Symmetric Searchable Encryption (SSE) or on the well-studied field of Attribute-Based Encryption (ABE). In the first case, researchers are trying to design protocols where users' data will be protected from both \textit{internal} and \textit{external} attacks, without paying the necessary attention to the problem of user revocation. In the second case, existing approaches address the problem of revocation; however, the overall efficiency of these systems is compromised since the proposed protocols are solely based on ABE schemes, and the size of the produced ciphertexts and the time required to decrypt grow with the complexity of the access formula. In this paper, we propose a protocol that combines \textit{both} SSE and ABE in a way that exploits the main advantages of each scheme. The proposed protocol allows users to directly search over encrypted data by using an SSE scheme, while the corresponding symmetric key that is needed for the decryption is protected via a Ciphertext-Policy Attribute-Based Encryption scheme.
Last updated:  2019-04-29
DAGS Reloaded: Revisiting Dyadic Key Encapsulation
Gustavo Banegas, Paulo S. L. M. Barreto, Brice Odilon Boidje, Pierre-Louis Cayrel, Gilbert Ndollane Dione, Kris Gaj, Cheikh Thiecoumba Gueye, Richard Haeussler, Jean Belo Klamti, Ousmane N'diaye, Duc Tri Nguyen, Edoardo Persichetti, Jefferson E. Ricardini
In this paper we revisit some of the main aspects of the DAGS Key Encapsulation Mechanism, one of the code-based candidates to NIST's standardization call for the key exchange/encryption functionalities. In particular, we modify the algorithms for key generation, encapsulation and decapsulation to fit an alternative KEM framework, and we present a new set of parameters that use binary codes. We discuss advantages and disadvantages for each of the variants proposed.
Last updated:  2018-12-18
AuthCropper: Authenticated Image Cropper for Privacy Preserving Surveillance Systems
Jihye Kim, Jiwon Lee, Hankyung Ko, Donghwan Oh, Semin Han, Kwonho Jeong, Hyunok Oh
As surveillance systems become popular, the privacy of recorded video becomes more important. On the other hand, the authenticity of video images should be guaranteed when they are used as evidence in court. It is challenging to satisfy both (personal) privacy and authenticity of a video simultaneously, since privacy requires modifications (e.g., partial deletions) of the original video image while authenticity does not allow any modifications of the original image. This paper proposes a novel method to convert an encryption scheme to support partial decryption with a constant number of keys, and constructs a privacy-aware authentication scheme by combining it with a signature scheme. The security of our proposed scheme is implied by the security of the underlying encryption and signature schemes. Experimental results show that the proposed scheme can handle a UHD video stream at more than 17 fps on a real embedded system, which validates the practicality of the proposed scheme.
Last updated:  2018-12-18
Subversion in Practice: How to Efficiently Undermine Signatures
Joonsang Baek, Willy Susilo, Jongkil Kim, Yang-Wai Chow
Algorithm substitution attacks (ASAs) on signatures should be treated seriously, as the authentication services of numerous systems and applications rely on signature schemes, and compromising them has a significant impact on the security of users. We present a somewhat alarming result in this regard: a highly efficient ASA on the Digital Signature Algorithm (DSA) and its implementation. Compared with the generic ASAs on signature schemes proposed in the literature, our attack provides fast and undetectable subversion, which extracts the user's private signing key by collecting at most three arbitrary signatures. Moreover, our ASA is proven to be robust against state reset. We implemented the proposed ASA by replacing the original DSA in Libgcrypt (a popular cryptographic library used in many applications) with our subverted DSA. Experiments show that the user's private key can readily be recovered once the subverted DSA is used to sign messages. In our implementation, various measures have been considered to significantly reduce the possibility of detection through comparing the running time of the original DSA and the subverted one (i.e. timing analysis). To our knowledge, this is the first implementation of an ASA in practice, which shows that ASAs are a real threat rather than only a theoretical speculation.
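As background, the well-known algebraic fact that makes nonce-leaking subversions of DSA devastating (this is not the paper's specific ASA): knowing the per-signature nonce $k$ of a single signature reveals the private key. A toy numerical check:

    # If the attacker learns the nonce k of one signature (r, s) on hash h,
    # the private key follows as x = r^{-1} (s*k - h) mod q.
    p, q, g = 23, 11, 4              # toy parameters; g has order q modulo p
    x = 7                            # private key
    h, k = 9, 6                      # message hash and per-signature nonce

    r = pow(g, k, p) % q             # DSA signing
    s = pow(k, -1, q) * (h + x * r) % q

    x_rec = pow(r, -1, q) * (s * k - h) % q   # recovery from the known nonce
    assert x_rec == x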
Last updated:  2018-12-18
On a Rank-Metric Code-Based Cryptosystem with Small Key Size
Julian Renner, Sven Puchinger, Antonia Wachter-Zeh
A repair of the Faure-Loidreau (FL) public-key code-based cryptosystem is proposed. The FL cryptosystem is based on the hardness of list decoding Gabidulin codes, which are special rank-metric codes. We prove that the recent structural attack on the system by Gaborit et al. is equivalent to decoding an interleaved Gabidulin code. Since all known polynomial-time decoders for these codes fail for a large constructive class of error patterns, we are able to construct public keys that resist the attack. It is also shown that all other known attacks fail for our repair and parameter choices. Compared to other code-based cryptosystems, we obtain significantly smaller key sizes for the same security level.
Last updated:  2021-06-10
Quantum Equivalence of the DLP and CDHP for Group Actions
Steven Galbraith, Lorenz Panny, Benjamin Smith, Frederik Vercauteren
In this short note we give a polynomial-time quantum reduction from the vectorization problem (DLP) to the parallelization problem (CDHP) for efficiently computable group actions. Combined with the trivial reduction from parallelization to vectorization, we thus prove the quantum equivalence of these problems, which is the post-quantum counterpart to classic results of den Boer and Maurer in the classical Diffie-Hellman setting. In contrast to the classical setting, our reduction holds unconditionally and does not assume knowledge of suitable auxiliary algebraic groups. We discuss the implications of this reduction for isogeny-based cryptosystems including CSIDH.
Last updated:  2019-02-12
On Lions and Elligators: An efficient constant-time implementation of CSIDH
Michael Meyer, Fabio Campos, Steffen Reith
The recently proposed CSIDH primitive is a promising candidate for post-quantum static-static key exchanges with very small keys. However, until now there has been only a variable-time proof-of-concept implementation by Castryck, Lange, Martindale, Panny, and Renes, recently optimized by Meyer and Reith, which can leak various information about the private key. Therefore, we present an efficient constant-time implementation that samples key elements only from intervals of nonnegative numbers and uses dummy isogenies, which prevents certain kinds of side-channel attacks. We apply several optimizations, e.g. Elligator and the newly introduced SIMBA, in order to obtain a more efficient implementation.
Last updated:  2018-12-18
Automated software protection for the masses against side-channel attacks
Nicolas Belleville, Damien Couroussé, Karine Heydemann, Henri-Pierre Charles
We present an approach and a tool to answer the need for effective, generic and easily applicable protections against side-channel attacks. The protection mechanism is based on code polymorphism, so that the observable behaviour of the protected component is variable and unpredictable to the attacker. Our approach combines lightweight specialized runtime code generation with the optimization capabilities of static compilation. It is extensively configurable. Experimental results show that programs secured by our approach present strong security levels and meet the performance requirements of constrained systems.
Last updated:  2020-06-04
Gradient Visualization for General Characterization in Profiling Attacks
Loïc Masure, Cécile Dumas, Emmanuel Prouff
In Side-Channel Analysis (SCA), several papers have shown that neural networks can be trained to efficiently extract sensitive information from implementations running on embedded devices. This paper introduces a new tool called Gradient Visualization that aims to perform a post-mortem information-leakage characterization after the successful training of a neural network. It relies on the computation of the gradient of the loss function used during the training. The gradient is no longer computed with respect to the model parameters, but with respect to the input trace components. Thus, it can accurately highlight temporal moments where sensitive information leaks. We theoretically show that this method, based on Sensitivity Analysis, may be used to efficiently localize points of interest in the SCA context. The efficiency of the proposed method does not depend on the particular countermeasures that may be applied to the measured traces, as long as the profiled neural network can still learn in the presence of such difficulties. In addition, the characterization can be made for each trace individually. We verified the soundness of our proposed method on simulated data and on experimental traces from a public side-channel database. Finally, we empirically show that Sensitivity Analysis is at least as good as state-of-the-art characterization methods, in the presence (or not) of countermeasures.
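A minimal sketch of the idea, assuming a PyTorch-style setup with an illustrative placeholder model: differentiate the loss with respect to the input trace rather than the weights, and read off large gradient components as candidate points of interest:

    import torch

    model = torch.nn.Sequential(      # placeholder profiling model
        torch.nn.Linear(700, 64), torch.nn.ReLU(), torch.nn.Linear(64, 256)
    )
    loss_fn = torch.nn.CrossEntropyLoss()

    trace = torch.randn(1, 700, requires_grad=True)   # one side-channel trace
    label = torch.tensor([42])                        # e.g. an S-box output

    loss = loss_fn(model(trace), label)
    grad, = torch.autograd.grad(loss, trace)          # d loss / d trace
    poi = grad.abs().argmax().item()                  # candidate point of interest
    print(poi)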
Last updated:  2018-12-18
M&M: Masks and Macs against Physical Attacks
Lauren De Meyer, Victor Arribas, Svetla Nikova, Ventzislav Nikov, Vincent Rijmen
Cryptographic implementations on embedded systems need to be protected against physical attacks. Today, this means that apart from incorporating countermeasures against side-channel analysis, implementations must also withstand fault attacks and combined attacks. Recent proposals in this area have shown that there is a big tradeoff between the implementation cost and the strength of the adversary model. In this work, we introduce a new combined countermeasure M&M that combines Masking with information-theoretic MAC tags and infective computation. It works in a stronger adversary model than the existing scheme ParTI, yet is a lot less costly to implement than the provably secure MPC-based scheme CAPA. We demonstrate M&M with a SCA- and DFA-secure implementation of the AES block cipher. We evaluate the side-channel leakage of the second-order secure design with a non-specific t-test and use simulation to validate the fault resistance.
Last updated:  2019-11-25
On Degree-d Zero-Sum Sets of Full Rank
Christof Beierle, Alex Biryukov, Aleksei Udovenko
A set $S \subseteq \mathbb{F}_2^n$ is called degree-$d$ zero-sum if the sum $\sum_{s \in S} f(s)$ vanishes for all $n$-bit Boolean functions of algebraic degree at most $d$. Those sets correspond to the supports of the $n$-bit Boolean functions of degree at most $n-d-1$. We prove some results on the existence of degree-$d$ zero-sum sets of full rank, i.e., those that contain $n$ linearly independent elements, and show relations to degree-1 annihilator spaces of Boolean functions and semi-orthogonal matrices. We are particularly interested in the smallest of such sets and prove bounds on the minimum number of elements in a degree-$d$ zero-sum set of rank $n$. The motivation for studying those objects comes from the fact that degree-$d$ zero-sum sets of full rank can be used to build linear mappings that preserve special kinds of \emph{nonlinear invariants}, similar to those obtained from orthogonal matrices and exploited by Todo, Leander and Sasaki for breaking the block ciphers Midori, Scream and iScream.
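A direct checker for the definition (it suffices to test all monomials of degree at most $d$, since they span the functions of degree at most $d$); the set below is a toy example:

    from itertools import combinations

    def is_degree_d_zero_sum(S, n, d):
        for size in range(d + 1):
            for T in combinations(range(n), size):
                acc = 0
                for s in S:
                    acc ^= all((s >> i) & 1 for i in T)  # monomial prod_{i in T} s_i
                if acc:
                    return False
        return True

    S = {0b0000, 0b0011, 0b0101, 0b0110}   # a small linear code in F_2^4
    print(is_degree_d_zero_sum(S, 4, 1))   # True: degree-1 zero-sum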
Last updated:  2018-12-18
Quantum Chosen-Ciphertext Attacks against Feistel Ciphers
Gembu Ito, Akinori Hosoyamada, Ryutaroh Matsumoto, Yu Sasaki, Tetsu Iwata
Seminal results by Luby and Rackoff show that the 3-round Feistel cipher is secure against chosen-plaintext attacks (CPAs), and the 4-round version is secure against chosen-ciphertext attacks (CCAs). However, the security significantly changes when we consider attacks in the quantum setting, where the adversary can make superposition queries. By using Simon's algorithm that detects a secret cycle-period in polynomial-time, Kuwakado and Morii showed that the 3-round version is insecure against quantum CPA by presenting a polynomial-time distinguisher. Since then, Simon's algorithm has been heavily used against various symmetric-key constructions. However, its applications are still not fully explored. In this paper, based on Simon's algorithm, we first formalize a sufficient condition of a quantum distinguisher against block ciphers so that it works even if there are multiple collisions other than the real period. This distinguisher is similar to the one proposed by Santoli and Schaffner, and it does not recover the period. Instead, we focus on the dimension of the space obtained from Simon's quantum circuit. This eliminates the need to evaluate the probability of collisions, which was needed in the work by Kaplan et al. at CRYPTO 2016. Based on this, we continue the investigation of the security of Feistel ciphers in the quantum setting. We show a quantum CCA distinguisher against the 4-round Feistel cipher. This extends the result of Kuwakado and Morii by one round, and follows the intuition of the result by Luby and Rackoff where the CCA setting can extend the number of rounds by one. We also consider more practical cases where the round functions are composed of a public function and XORing the subkeys. We show the results of both distinguishing and key recovery attacks against these constructions.
Last updated:  2018-12-18
Durandal: a rank metric based signature scheme
Nicolas Aragon, Olivier Blazy, Philippe Gaborit, Adrien Hauteville, Gilles Zémor
We describe a variation of the Schnorr-Lyubashevsky approach to devising signature schemes that is adapted to rank based cryptography. This new approach enables us to obtain a randomization of the signature, which previously seemed difficult to derive for code-based cryptography. We provide a detailed analysis of attacks and an EUF-CMA proof for our scheme. Our scheme relies on the security of the Ideal Rank Support Learning and the Ideal Rank Syndrome problems and a newly introduced problem: Product Spaces Subspaces Indistinguishability, for which we give a detailed analysis. Overall the parameters we propose are efficient and comparable in terms of signature size to the Dilithium lattice-based scheme, with a signature size of less than 4kB for a public key of size less than 20kB.
Last updated:  2018-12-10
Cryptanalysis of 2-round KECCAK-384
Rajendra Kumar, Nikhil Mittal, Shashank Singh
In this paper, we present a cryptanalysis of Keccak-384 reduced to 2 rounds. The best previously known preimage attack for this variant of Keccak has time complexity $2^{129}$. In our analysis, we find a preimage with time complexity $2^{89}$ and almost the same memory requirement.
Last updated:  2019-11-21
Large Universe Subset Predicate Encryption Based on Static Assumption (without Random Oracle)
Sanjit Chatterjee, Sayantan Mukherjee
In a recent work, Katz et al. (CANS'17) generalized the notion of Broadcast Encryption to define Subset Predicate Encryption (SPE), which emulates the \emph{subset containment} predicate in the encrypted domain. They proposed two selectively secure constructions of SPE in the small universe setting. Their first construction is based on a $q$-type assumption while the second one is based on DBDH. Both achieve constant-size secret keys while the ciphertext size depends on the size of the privileged set. They also showed black-box transformations of SPE to well-known primitives like WIBE and ABE to establish the richness of the SPE structure. This work investigates the question of a large universe realization of an SPE scheme based on a static assumption without random oracles. We propose two constructions, both of which achieve constant-size secret keys. The first construction, $\mathsf{SPE}_1$, instantiated in composite order bilinear groups, achieves constant-size ciphertexts and is proven secure in a restricted version of the selective security model under the subgroup decision assumption (SDP). Our main construction, $\mathsf{SPE}_2$, is adaptively secure in prime order bilinear groups under the symmetric external Diffie-Hellman assumption (SXDH). Thus $\mathsf{SPE}_2$ is the first large universe instantiation of SPE to achieve adaptive security without random oracles. Both our constructions have efficient decryption functions, suggesting their practical applicability. Thus the primitives like WIBE and ABE resulting from black-box transformations of our constructions become more practical.
Last updated:  2018-12-10
The Role of the Adversary Model in Applied Security Research
Quang Do, Ben Martini, Kim-Kwang Raymond Choo
Adversary models have been integral to the design of provably-secure cryptographic schemes or protocols. However, their use in other computer science research disciplines is relatively limited, particularly in the case of applied security research (e.g., mobile app and vulnerability studies). In this study, we conduct a survey of prominent adversary models used in the field of cryptography, and in more recent mobile and Internet of Things (IoT) research. Motivated by the findings from the cryptography survey, we propose a classification scheme for common app-based adversaries used in mobile security research, and classify key papers using the proposed scheme. Finally, we discuss recent work involving adversary models in the contemporary research field of IoT. We contribute recommendations to aid researchers working in applied (IoT) security based upon our findings from the mobile and cryptography literature. The key recommendation is for authors to clearly define adversary goals, assumptions and capabilities.
Last updated:  2021-05-20
Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains
Dan Boneh, Benedikt Bünz, Ben Fisch
We present batching techniques for cryptographic accumulators and vector commitments in groups of unknown order. Our techniques are tailored for distributed settings where no trusted accumulator manager exists and updates to the accumulator are processed in batches. We develop techniques for non-interactively aggregating membership proofs that can be verified with a constant number of group operations. We also provide a constant sized batch non-membership proof for a large number of elements. These proofs can be used to build the first positional vector commitment (VC) with constant sized openings and constant sized public parameters. As a core building block for our batching techniques we develop several succinct proof systems in groups of unknown order. These extend a recent construction of a succinct proof of correct exponentiation, and include a succinct proof of knowledge of an integer discrete logarithm between two group elements. We use these new constructions to design a stateless blockchain, where nodes only need a constant amount of storage in order to participate in consensus. Further, we show how to use these techniques to reduce the size of IOP instantiations, such as STARKs.
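For intuition about the building blocks, the "succinct proof of correct exponentiation" that these proof systems extend can be sketched in a few lines. The following toy Python sketch follows the Wesolowski-style protocol (challenge prime derived by hashing, one group element as the proof, verification with two small exponentiations); the tiny hard-coded modulus and the naive hash-to-prime routine are illustrative stand-ins, not the construction used in the paper.

```python
import hashlib

def is_prime(m):
    # trial division -- adequate for the 32-bit toy challenges used here
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def hash_to_prime(u, w, x, bits=32):
    # Fiat-Shamir: derive the challenge prime l from the statement
    ctr = 0
    while True:
        h = hashlib.sha256(repr((u, w, x, ctr)).encode()).digest()
        cand = (int.from_bytes(h, "big") % (1 << bits)) | 1
        if is_prime(cand):
            return cand
        ctr += 1

def prove(u, x, N):
    # prover shows w = u^x (mod N), sending the single group element Q
    w = pow(u, x, N)
    l = hash_to_prime(u, w, x)
    Q = pow(u, x // l, N)              # the one big exponentiation
    return w, Q

def verify(u, w, x, N, Q):
    # since x = l*(x // l) + (x mod l), check Q^l * u^(x mod l) == w
    l = hash_to_prime(u, w, x)
    return (pow(Q, l, N) * pow(u, x % l, N)) % N == w

N = 101 * 103                  # toy stand-in for a group of unknown order
w, Q = prove(3, 2**64 + 5, N)
assert verify(3, w, 2**64 + 5, N, Q)
```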
Last updated:  2018-12-10
Automatic Search for A Variant of Division Property Using Three Subsets (Full Version)
Kai Hu, Meiqin Wang
The division property proposed at Eurocrypt'15 is a novel technique to find integral distinguishers, and it has been applied to most kinds of symmetric ciphers, such as block ciphers, stream ciphers, and authenticated encryption. The original division property is word-oriented; later, the bit-based one was proposed at FSE'16 to obtain better integral properties, and it comes in two flavors: conventional bit-based division property (two-subset division property) and bit-based division property using three subsets (three-subset division property). The three-subset division property has more potential to achieve better integral distinguishers than the two-subset division property. The bit-based division property could not be applied to ciphers with large block sizes due to its impractical complexity. At Asiacrypt'16, the two-subset division property was modeled using the Mixed Integer Linear Programming (MILP) technique, and the limits on block sizes were eliminated. However, there is still no efficient method for searching for the three-subset division property. The propagation rule of the \texttt{XOR} operation for $\mathbb{L}$ \footnote{The definition of $\mathbb{L}$ and $\mathbb{K}$ is introduced in Section 2.}, which is a set used in the three-subset division property but not in the two-subset one, requires removing some specific vectors, and new vectors generated from $\mathbb{L}$ should be appended to $\mathbb{K}$ when the \texttt{Key-XOR} operation is applied; both of these are difficult for common automatic tools such as MILP, SMT or CP. In this paper, we overcome one of the two challenges; concretely, we address the problem of adding new vectors into $\mathbb{K}$ from $\mathbb{L}$ in an automatic search model. Moreover, we present a new model that automatically searches for a variant of the three-subset division property (VTDP) with the STP solver. The variant is weaker than the original three-subset division property (OTDP) but is still powerful for some ciphers. Most importantly, this model has no constraints on the block size of target ciphers and can also be applied to ARX and S-box based ciphers. As illustrations, improved integral distinguishers have been achieved for SIMON32, SIMON32/48/64(102), SPECK32 and KATAN/KTANTAN32/48/64 in terms of the number of rounds or the number of even/odd-parity bits.
Last updated:  2018-12-10
MILP Method of Searching Integral Distinguishers Based on Division Property Using Three Subsets
Senpeng Wang, Bin Hu, Jie Guan, Kai Zhang, Tairong Shi
Division property is a generalized integral property proposed by Todo at EUROCRYPT 2015; conventional bit-based division property (CBDP) and bit-based division property using three subsets (BDPT) were subsequently proposed by Todo and Morii at FSE 2016. The huge time and memory complexity that once restricted the applications of CBDP was overcome by Xiang et al. at ASIACRYPT 2016, who extended the Mixed Integer Linear Programming (MILP) method to search for integral distinguishers based on CBDP. BDPT can find more accurate integral distinguishers than CBDP, but it cannot be modeled efficiently, so it could not be applied to block ciphers with block size larger than 32 bits. In this paper, we focus on the feasibility of applying the MILP-aided method to search for integral distinguishers based on BDPT. We first study how to obtain the BDPT propagation rules of an S-box, based on which we can efficiently describe the BDPT propagation of ciphers with S-boxes. Moreover, we propose a technique called ``fast propagation'', which translates BDPT into CBDP so that the balanced bits based on BDPT can be presented. Together with the propagation properties of BDPT, we can use the MILP method based on CBDP to search for integral distinguishers based on BDPT. To demonstrate the efficiency of our method, we search for integral distinguishers on SIMON, SIMECK, PRESENT, RECTANGLE, LBlock, and TWINE. For SIMON64, PRESENT, and RECTANGLE, we find more balanced bits than the previous longest distinguishers. For LBlock, we find a 17-round integral distinguisher, which is one more round than the previous longest integral distinguisher, and a better 16-round integral distinguisher with fewer active bits can be obtained. For the other ciphers, our results are in accordance with the previous longest distinguishers.
Last updated:  2018-12-10
On Quantum Chosen-Ciphertext Attacks and Learning with Errors
Gorjan Alagic, Stacey Jeffery, Maris Ozols, Alexander Poremba
Large-scale quantum computing is a significant threat to classical public-key cryptography. In strong “quantum access” security models, numerous symmetric-key cryptosystems are also vulnerable. We consider classical encryption in a model which grants the adversary quantum oracle access to encryption and decryption, but where the latter is restricted to non-adaptive (i.e., pre-challenge) queries only. We define this model formally using appropriate notions of ciphertext indistinguishability and semantic security (which are equivalent by standard arguments) and call it QCCA1 in analogy to the classical CCA1 security model. Using a bound on quantum random-access codes, we show that the standard PRF- and PRP-based encryption schemes are QCCA1-secure when instantiated with quantum-secure primitives. We then revisit standard IND-CPA-secure Learning with Errors (LWE) encryption and show that leaking just one quantum decryption query (and no other queries or leakage of any kind) allows the adversary to recover the full secret key with constant success probability. In the classical setting, by contrast, recovering the key requires a linear number of decryption queries, and this is optimal. The algorithm at the core of our attack is a large-modulus version of the well-known Bernstein-Vazirani algorithm. We emphasize that our results should not be interpreted as a weakness of these cryptosystems in their stated security setting (i.e., post-quantum chosen-plaintext secrecy). Rather, our results mean that, if these cryptosystems are exposed to chosen-ciphertext attacks (e.g., as a result of deployment in an inappropriate real-world setting), then quantum attacks are even more devastating than classical ones.
Last updated:  2019-02-20
Uncontrolled Randomness in Blockchains: Covert Bulletin Board for Illicit Activity
Nasser Alsalami, Bingsheng Zhang
Public blockchains can be abused to covertly store and disseminate potentially harmful digital content. Consequently, this threat jeopardizes the future of such applications and poses a serious regulatory issue. In this work, we show the severity of the problem by demonstrating that blockchains can be exploited as a covert bulletin board to secretly store and distribute arbitrary content. More specifically, all major blockchain systems use randomized cryptographic primitives, such as digital signatures and non-interactive zero-knowledge proofs, and we illustrate how the uncontrolled randomness in such primitives can be maliciously manipulated to enable covert communication and hidden persistent storage. To clarify the potential risk, we design, implement and evaluate our technique against the widely-used ECDSA signature scheme, the CryptoNote ring signature scheme, and Monero's ring confidential transactions. Importantly, the significance of the demonstrated attacks stems from their undetectability, their adverse effect on the future of decentralized blockchains, and their serious repercussions on users' privacy and crypto funds. Finally, besides presenting the attacks, we examine existing countermeasures and devise two new steganography-resistant blockchain architectures to practically thwart this threat in the context of blockchains.
Last updated:  2018-12-05
Lossy Trapdoor Permutations with Improved Lossiness
Benedikt Auerbach, Eike Kiltz, Bertram Poettering, Stefan Schoenen
Lossy trapdoor functions (Peikert and Waters, STOC 2008 and SIAM J. Computing 2011) imply, via black-box transformations, a number of interesting cryptographic primitives, including chosen-ciphertext secure public-key encryption. Kiltz, O'Neill, and Smith (CRYPTO 2010) showed that the RSA trapdoor permutation is lossy under the Phi-hiding assumption, but syntactically it is not a lossy trapdoor function since it acts on Z_N and not on strings. Using a domain extension technique by Freeman et al. (PKC 2010 and J. Cryptology 2013) it can be extended to a lossy trapdoor permutation, but with considerably reduced lossiness. In this work we give new constructions of lossy trapdoor permutations from the Phi-hiding assumption, the quadratic residuosity assumption, and the decisional composite residuosity assumption, all with improved lossiness. Furthermore, we propose the first all-but-one lossy trapdoor permutation from the Phi-hiding assumption. A technical vehicle used for achieving this is a novel transform that converts trapdoor functions with index-dependent domain into trapdoor functions with fixed domain.
Last updated:  2019-03-25
Code-based Cryptosystem from Quasi-Cyclic Elliptic Codes
Fangguo Zhang, Zhuoran Zhang
With the fast development of quantum computation, code-based cryptography has attracted public attention as a candidate for post-quantum cryptography. However, the large key size remains a main drawback, so code-based schemes are seldom practical even though both their encryption and decryption algorithms perform well in terms of speed. Algebraic geometry codes were considered a good solution for reducing the key size, but because of their special structure, many attacks against them exist. In this paper, we propose a public-key encryption scheme based on elliptic codes that resists the known attacks. By using an automorphism on the rational points of the elliptic curve, we construct quasi-cyclic elliptic codes, which reduce the key size further. We apply a list-decoding algorithm to decryption, so that more errors, beyond half of the minimum distance of the code, can be corrected, which is the key point in resisting the known attacks on AG-code-based cryptosystems.
Last updated:  2018-12-21
Horizontal DEMA Attack as the Criterion to Select the Best Suitable EM Probe
Christian Wittke, Ievgen Kabin, Dan Klann, Zoya Dyka, Anton Datsuk, Peter Langendoerfer
Implementing cryptographic algorithms in a tamper-resistant way is an extremely complex task, as the algorithm used and the target platform have a significant impact on the potential leakage of the implementation. In addition, the quality of the tools used for the attacks is of importance. In order to evaluate the resistance of a certain design against electromagnetic emanation attacks – a highly relevant type of attack – we discuss the quality of different electromagnetic (EM) probes as attack tools. In this paper we propose to use the results of horizontal attacks for the comparison of measurement setups and for determining the best suitable instruments for measurements. We performed horizontal differential electromagnetic analysis (DEMA) attacks against our ECC design, an implementation of the Montgomery kP algorithm for the NIST elliptic curve B-233. We experimented with 7 different EM probes under the same conditions: the attacked FPGA, design, inputs, measurement point and measurement equipment were the same, except for the EM probes. The EM probe used significantly influences the success rate of the performed attack. We used this fact for the comparison of probes and for determining the best suitable one.
Last updated:  2020-01-23
Lattice-Based Signature from Key Consensus
Leixiao Cheng, Boru Gong, Yunlei Zhao
Given the current research status in lattice-based cryptography, it is commonly suggested that lattice-based signatures could be subtler and harder to achieve. Among them, Dilithium is one of the most promising signature candidates for the post-quantum era, for its simplicity, efficiency, small public key size, and resistance against side channel attacks. The design of Dilithium is based on a list of pioneering works (e.g., [VL09,VL12,BG14]), and it achieves very remarkable performance through careful and comprehensive optimizations in implementation and parameter selection. Whether better trade-offs can be made on the already remarkable performance of Dilithium is left as an interesting open question in \cite{CRYSTALS}. In this work, we provide new insights in interpreting the design of Dilithium, in terms of the key consensus previously proposed in the literature for key encapsulation mechanisms (KEM) and key exchange (KEX). Based on the deterministic version of the optimal key consensus with noise (OKCN) mechanism, originally developed in [JZ16] for KEM/KEX, we present \emph{signature from key consensus with noise} (SKCN), which can be viewed as a generalization and optimization of Dilithium. The construction of SKCN is generic, modular and flexible; in particular, it allows a much broader range of parameters for searching for better trade-offs among security, computational efficiency, and bandwidth. For example, on the recommended parameters, compared with Dilithium our SKCN scheme is more efficient both in computation and in bandwidth, while preserving the same level of post-quantum security. In addition, using the same OKCN routine for both KEM/KEX and digital signatures eases (hardware) implementation and deployment in practice, and is useful for simplifying the system complexity of lattice-based cryptography in general.
Last updated:  2020-10-12
Elliptic Curves in Generalized Huff's Model
Ronal Pranil Chand, Maheswara Rao Valluri
This paper introduces a new form of elliptic curves in a generalized Huff's model. These curves, endowed with the addition law, are shown to form a group over a finite field. We present formulae for point addition and point doubling on the curves, and evaluate the computational cost of point addition and point doubling using projective, Jacobian, and Lopez-Dahab coordinate systems, as well as an embedding of the curves into $\mathbb{P}^{1}\times\mathbb{P}^{1}$. We also prove that the curves are birationally equivalent to the Weierstrass form. We observe that the computational cost of point addition and point doubling on the curves is lowest when embedding them into $\mathbb{P}^{1}\times\mathbb{P}^{1}$, compared with the other mentioned coordinate systems, and is nearly optimal relative to other known Huff's models.
Last updated:  2020-12-03
Pseudo-Free Families of Computational Universal Algebras
Mikhail Anokhin
Let $\Omega$ be a finite set of finitary operation symbols. We initiate the study of (weakly) pseudo-free families of computational $\Omega$-algebras in arbitrary varieties of $\Omega$-algebras. Most of our results concern (weak) pseudo-freeness in the variety $\mathfrak O$ of all $\Omega$-algebras. A family $(H_d)_{d\in D}$ of computational $\Omega$-algebras (where $D\subseteq\{0,1\}^*$) is called polynomially bounded (resp., having exponential size) if there exists a polynomial $\eta$ such that for all $d\in D$, the length of any representation of every $h\in H_d$ is at most $\eta(\lvert d\rvert)$ (resp., $\lvert H_d\rvert\le2^{\eta(\lvert d\rvert)}$). First, we prove the following trichotomy: (i) if $\Omega$ consists of nullary operation symbols only, then there exists a polynomially bounded pseudo-free family in $\mathfrak O$; (ii) if $\Omega=\Omega_0\cup\{\omega\}$, where $\Omega_0$ consists of nullary operation symbols and the arity of $\omega$ is $1$, then there exist an exponential-size pseudo-free family and a polynomially bounded weakly pseudo-free family (both in $\mathfrak O$); (iii) in all other cases, the existence of polynomially bounded weakly pseudo-free families in $\mathfrak O$ implies the existence of collision-resistant families of hash functions. Second, assuming the existence of collision-resistant families of hash functions, we construct a polynomially bounded weakly pseudo-free family and an exponential-size pseudo-free family in the variety of all $m$-ary groupoids, where $m$ is an arbitrary positive integer. In particular, for arbitrary $m\ge2$, polynomially bounded weakly pseudo-free families in the variety of all $m$-ary groupoids exist if and only if collision-resistant families of hash functions exist.
Last updated:  2018-12-03
Excalibur Key-Generation Protocols For DAG Hierarchic Decryption
Louis Goubin, Geraldine Monsalve, Juan Reutter, Francisco Vial Prado
Public-key cryptography applications often require structuring decryption rights according to some hierarchy. This is typically addressed with re-encryption procedures or relying on trusted parties, in order to avoid secret-key transfers and leakages. Using a novel approach, Goubin and Vial-Prado (2016) take advantage of the Multikey FHE-NTRU encryption scheme to establish decryption rights at key-generation time, thus preventing leakage of all secrets involved (even by powerful key-holders). Their algorithms are intended for two parties, and can be composed to form chains of users with inherited decryption rights. In this article, we provide new protocols for generating Excalibur keys under any DAG-like hierarchy, and present formal proofs of security against semi-honest adversaries. Our protocols are compatible with the homomorphic properties of FHE-NTRU, and the base case of our security proofs may be regarded as a more formal, simulation-based proof of said work.
Last updated:  2018-12-03
Downgradable Identity-based Encryption and Applications
Olivier Blazy, Paul Germouty, Duong Hieu Phan
In identity-based cryptography, in order to generalize one-receiver encryption to multi-receiver encryption, wildcards were introduced: WIBE enables wildcards in the receivers' pattern, and Wicked-IBE allows one to generate a key for identities with wildcards. However, the use of wildcards makes the constructions of WIBE and Wicked-IBE more complicated and significantly less efficient than the underlying IBE. The main reason is that the conventional identity's binary alphabet is extended to a ternary alphabet $\{0,1,*\}$, and the wildcard $*$ is always treated in a convoluted way in encryption or in key generation. In this paper, we show that when dealing with the multi-receiver setting, wildcards are not necessary. We introduce a new downgradable property for IBE schemes and show that any IBE with this property, called DIBE, can be efficiently transformed into WIBE or Wicked-IBE. While WIBE and Wicked-IBE have been used to construct broadcast encryption, we go a step further by employing DIBE to construct attribute-based encryption whose access policy is expressed as a boolean formula in disjunctive normal form.
Last updated:  2019-03-14
New Privacy Threat on 3G, 4G, and Upcoming 5G AKA Protocols
Ravishankar Borgaonkar, Lucca Hirschi, Shinjo Park, Altaf Shaik
Mobile communications are used by more than two-thirds of the world population who expect security and privacy guarantees. The 3rd Generation Partnership Project (3GPP) responsible for the worldwide standardization of mobile communication has designed and mandated the use of the AKA protocol to protect the subscribers’ mobile services. Even though privacy was a requirement, numerous subscriber location attacks have been demonstrated against AKA, some of which have been fixed or mitigated in the enhanced AKA protocol designed for 5G. In this paper, we reveal a new privacy attack against all variants of the AKA protocol, including 5G AKA, that breaches subscriber privacy more severely than known location privacy attacks do. Our attack exploits a new logical vulnerability we uncovered that would require dedicated fixes. We demonstrate the practical feasibility of our attack using low cost and widely available setups. Finally we conduct a security analysis of the vulnerability and discuss countermeasures to remedy our attack.
Last updated:  2018-12-03
A Comparison of NTRU Variants
John M. Schanck
We analyze the size vs. security trade-offs that are available when selecting parameters for perfectly correct key encapsulation mechanisms based on NTRU.
Last updated:  2019-02-06
The 9 Lives of Bleichenbacher's CAT: New Cache ATtacks on TLS Implementations
Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, David Wong, Yuval Yarom
At CRYPTO’98, Bleichenbacher published his seminal paper which described a padding oracle attack against RSA implementations that follow the PKCS #1 v1.5 standard. Over the last twenty years, researchers and implementers have spent a huge amount of effort in developing and deploying numerous mitigation techniques which were supposed to plug all the possible sources of Bleichenbacher-like leakages. However, as we show in this paper, most implementations are still vulnerable to several novel types of attack based on leakage from various microarchitectural side channels: out of nine popular implementations of TLS that we tested, we were able to break the security of seven implementations with practical proof-of-concept attacks. We demonstrate the feasibility of using those Cache-like ATacks (CATs) to perform a downgrade attack against any TLS connection to a vulnerable server, using a BEAST-like Man in the Browser attack. The main difficulty we face is how to perform the thousands of oracle queries required before the browser’s imposed timeout (which is 30 seconds for almost all browsers, with the exception of Firefox, which can be tricked into extending this period). The attack seems to be inherently sequential (due to its use of adaptive chosen ciphertext queries), but we describe a new way to parallelize Bleichenbacher-like padding attacks by exploiting any available number of TLS servers that share the same public key certificate. With this improvement, we could demonstrate the feasibility of a downgrade attack which could recover all the 2048 bits of the RSA plaintext (including the premaster secret value, which suffices to establish a secure connection) from five available TLS servers in under 30 seconds. This sequential-to-parallel transformation of such attacks can be of independent interest, speeding up and facilitating other side channel attacks on RSA implementations.
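For readers unfamiliar with the underlying oracle, here is a toy sketch (our illustration, with a hypothetical "decrypt_and_check" callback standing in for the leaky implementation): the attack only needs a binary signal telling whether a decrypted block is PKCS #1 v1.5 conformant, and each adaptive query is a multiplicatively related ciphertext.

```python
def pkcs1_v15_conformant(em: bytes) -> bool:
    """The conformance check whose outcome Bleichenbacher-style attacks
    learn: 0x00 0x02 || at least 8 nonzero padding bytes || 0x00 || msg."""
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        sep = em.index(0x00, 2)   # first zero byte after the header
    except ValueError:
        return False
    return sep >= 10              # at least eight nonzero padding bytes

def oracle_query(c, s, e, N, decrypt_and_check):
    # The attacker submits c' = c * s^e (mod N); the receiver decrypts
    # m' = m * s (mod N), and a side channel leaks whether m' conforms.
    return decrypt_and_check((c * pow(s, e, N)) % N)
```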
Last updated:  2019-02-20
The impact of error dependencies on Ring/Mod-LWE/LWR based schemes
Jan-Pieter D'Anvers, Frederik Vercauteren, Ingrid Verbauwhede
Current estimation techniques for the probability of decryption failures in Ring/Mod-LWE/LWR based schemes assume independence of the failures in individual bits of the transmitted message to calculate the full failure rate of the scheme. In this paper we disprove this assumption both theoretically and practically for schemes based on Ring/Mod-Learning with Errors/Rounding. We provide a method to estimate the decryption failure probability, taking into account the bit failure dependency. We show that the independence assumption is suitable for schemes without error correction, but that it might lead to underestimating the failure probability of algorithms using error correcting codes. In the worst case, for LAC-128, the failure rate is $2^{48}$ times bigger than estimated under the assumption of independence. This higher-than-expected failure rate could lead to more efficient cryptanalysis of the scheme through decryption failure attacks.
Last updated:  2018-12-06
PwoP: Intrusion-Tolerant and Privacy-Preserving Sensor Fusion
Chenglu Jin, Marten van Dijk, Michael K. Reiter, Haibin Zhang
We design and implement PwoP, an efficient and scalable system for intrusion-tolerant and privacy-preserving multi-sensor fusion. PwoP develops and unifies techniques from dependable distributed systems and modern cryptography, and in contrast to prior works, can 1) provably defend against pollution attacks where some malicious sensors lie about their values to sway the final result, and 2) perform within the computation and bandwidth limitations of cyber-physical systems. PwoP is flexible and extensible, covering a variety of application scenarios. We demonstrate the practicality of our system using Raspberry Pi Zero W, and we show that PwoP is efficient in both failure-free and failure scenarios.
Last updated:  2020-02-11
Toward RSA-OAEP without Random Oracles
Nairen Cao, Adam O'Neill, Mohammad Zaheri
We show new partial and full instantiation results under chosen-ciphertext security for the widely implemented and standardized RSA-OAEP encryption scheme of Bellare and Rogaway (EUROCRYPT 1994) and two variants. Prior work on such instantiations either showed negative results or settled for ``passive'' security notions like IND-CPA. More precisely, recall that RSA-OAEP adds redundancy and randomness to a message before composing two rounds of an underlying Feistel transform, whose round functions are modeled as random oracles (ROs), with RSA. Our main results are: \begin{itemize} \item Either of the two oracles (while still modeling the other as a RO) can be instantiated in RSA-OAEP under IND-CCA2 using mild standard-model assumptions on the round functions and generalizations of algebraic properties of RSA shown by Barthe, Pointcheval, and Zanella-Béguelin (CCS 2012). The algebraic properties are only shown to hold at practical parameters for small encryption exponent ($e=3$), but we argue they have value for larger $e$ as well. \item Both oracles can be instantiated simultaneously for two variants of RSA-OAEP, called ``$t$-clear'' and ``$s$-clear'' RSA-OAEP. For this we use extractability-style assumptions in the sense of Canetti and Dakdouk (TCC 2010) on the round functions, as well as novel yet plausible ``XOR-type'' assumptions on RSA. While admittedly strong, such assumptions may nevertheless be necessary at this point to make positive progress. \end{itemize} In particular, our full instantiations evade impossibility results of Shoup (J.~Cryptology 2002), Kiltz and Pietrzak (EUROCRYPT 2009), and Bitansky et al. (STOC 2014). Moreover, our results for $s$-clear RSA-OAEP yield the most efficient RSA-based encryption scheme proven IND-CCA2 in the standard model (using bold assumptions on cryptographic hashing) to date.
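To recall the structure being instantiated, here is a toy Python sketch (our illustration) of the original Bellare-Rogaway OAEP padding: two Feistel rounds with round functions $G$ and $H$ applied to the redundancy-padded message and the randomness. SHA-256-based $G$ and $H$ are stand-ins for the random oracles, and this omits the RFC 8017 details as well as the paper's $t$-clear and $s$-clear variants.

```python
import hashlib, os

HLEN = 32  # output length of the hash behind G and H

def G(seed: bytes, length: int) -> bytes:
    # expand the seed to `length` bytes (MGF1-style counter mode)
    out = b""
    for c in range((length + HLEN - 1) // HLEN):
        out += hashlib.sha256(seed + c.to_bytes(4, "big")).digest()
    return out[:length]

def H(s: bytes) -> bytes:
    return hashlib.sha256(s).digest()

def oaep_pad(m: bytes, k1: int = 16) -> bytes:
    """Bellare-Rogaway OAEP: s = (m || 0^k1) xor G(r), t = r xor H(s);
    RSA is then applied to s || t."""
    r = os.urandom(HLEN)                      # the randomness
    data = m + b"\x00" * k1                   # the redundancy
    s = bytes(a ^ b for a, b in zip(data, G(r, len(data))))
    t = bytes(a ^ b for a, b in zip(r, H(s)))
    return s + t
```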
Last updated:  2018-12-03
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe
Benny Applebaum, Prashant Nalini Vasudevan
In the Conditional Disclosure of Secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold $n$-bit inputs $x$ and $y$ respectively, wish to release a common secret $z$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate $f$. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate $f$ to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of $\Omega(n)$ or $\Omega(n^{1-\epsilon})$, providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication -- a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even $\text{AM}\cap \text{co-AM}$ -- a well known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the ``civilized'' part of the communication complexity world for which explicit lower bounds are known.
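As a concrete toy example of the model (folklore, not a construction from the paper), the equality predicate $f(x,y) = [x = y]$ admits a perfectly secure CDS with a single field element per party:

```python
import secrets

P = 2**61 - 1  # prime field

def cds_equality(x, y, z):
    """Alice holds x, Bob holds y and the secret z; they share the
    randomness (a, b) and each sends one message to Carol."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    m_alice = (a * x + b) % P          # depends only on x and (a, b)
    m_bob = (a * y + b + z) % P        # depends only on y, z and (a, b)
    return m_alice, m_bob

def carol(m_alice, m_bob):
    # m_bob - m_alice = a*(y - x) + z: equals z iff x == y; otherwise
    # a*(y - x) is uniform and perfectly hides z.
    return (m_bob - m_alice) % P

ma, mb = cds_equality(7, 7, z=42)
assert carol(ma, mb) == 42
```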
Last updated:  2018-12-03
Result Pattern Hiding Searchable Encryption for Conjunctive Queries
Shangqi Lai, Sikhar Patranabis, Amin Sakzad, Joseph K. Liu, Debdeep Mukhopadhyay, Ron Steinfeld, Shi-Feng Sun, Dongxi Liu, Cong Zuo
The recently proposed Oblivious Cross-Tags (OXT) protocol (CRYPTO 2013) has broken new ground in designing an efficient searchable symmetric encryption (SSE) protocol with support for conjunctive keyword search in a single-writer single-reader framework. While the OXT protocol offers high performance by adopting a number of specialised data structures, it also trades off security by leaking ‘partial’ database information to the server. Recent attacks have exploited similar partial information leakage to breach database confidentiality. Consequently, it is an open problem to design SSE protocols that plug such leakages while retaining similar efficiency. In this paper, we propose a new SSE protocol, called Hidden Cross-Tags (HXT), that removes ‘Keyword Pair Result Pattern’ (KPRP) leakage for conjunctive keyword search. We avoid this leakage by adopting two additional cryptographic primitives - Hidden Vector Encryption (HVE) and probabilistic (Bloom filter) indexing - into the HXT protocol. We propose a ‘lightweight’ HVE scheme that only uses efficient symmetric-key building blocks, and entirely avoids elliptic curve-based operations. At the same time, it affords selective simulation-security against an unbounded number of secret-key queries. Adopting this efficient HVE scheme, the overall practical storage and computational overheads of HXT over OXT are relatively small (no more than 10% for two-keyword queries, and 21% for six-keyword queries), while providing a higher level of security.
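As background for the "probabilistic (Bloom filter) indexing" mentioned above, a minimal Bloom filter sketch (illustrative only; HXT combines such an index with its lightweight HVE):

```python
import hashlib

class BloomFilter:
    """Probabilistic set index: no false negatives, tunable false positives."""
    def __init__(self, m_bits=1 << 16, k=5):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):   # k independent hash positions
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item: bytes) -> bool:
        return all((self.bits[p // 8] >> (p % 8)) & 1
                   for p in self._positions(item))
```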
Last updated:  2018-12-03
On the Price of Proactivizing Round-Optimal Perfectly Secret Message Transmission
Ravi Kishore, Ashutosh Kumar, Chiranjeevi Vanarasa, Kannan Srinathan
In a network of $n$ nodes (modelled as a digraph), the goal of a perfectly secret message transmission (PSMT) protocol is to replicate sender's message $m$ at the receiver's end without revealing any information about $m$ to a computationally unbounded adversary that eavesdrops on any $t$ nodes. The adversary may be mobile too -- that is, it may eavesdrop on a different set of $t$ nodes in different rounds. We prove a necessary and sufficient condition on the synchronous network for the existence of $r$-round PSMT protocols, for any given $r > 0$; further, we show that round-optimality is achieved without trading-off the communication complexity; specifically, our protocols have an overall communication complexity of $O(n)$ elements of a finite field to perfectly transmit one field element. Apart from optimality/scalability, two interesting implications of our results are: (a) adversarial mobility does not affect its tolerability: PSMT tolerating a static $t$-adversary is possible if and only if PSMT tolerating mobile $t$-adversary is possible; and (b) mobility does not affect the round optimality: the fastest PSMT protocol tolerating a static $t$-adversary is not faster than the one tolerating a mobile $t$-adversary.
Last updated:  2022-10-09
Keeping Time-Release Secrets through Smart Contracts
Jianting Ning, Hung Dang, Ruomu Hou, Ee-Chien Chang
A time-release protocol enables one to send secrets into a future release time. The main technical challenge lies in incorporating timing control into the protocol, especially in the absence of a central trusted party. To leverage the regular heartbeats emitted by decentralized blockchains, in this paper, we advocate an incentive-based approach that combines threshold secret sharing and blockchain-based smart contracts. In particular, the secret is split into shares and distributed to a set of incentivized participants, with the payment settlement contractualized and enforced by the autonomous smart contract. We highlight that such an approach needs to achieve two goals: to reward honest participants who release their shares honestly after the release date (the “carrots”), and to punish premature leakage of the shares (the “sticks”). While it is not difficult to contractualize a carrot mechanism for punctual releases, it is not clear how to realise the stick. In the first place, it is not clear how to identify premature leakage. Our main idea is to encourage public vigilantism by incorporating an informer-bounty mechanism that pays a bounty to any informer who can provide evidence of the leakage. The possibility of being punished constitutes a deterrent to the misbehaviour of premature releases. Since various entities, including the owner, participants and the informers, might act maliciously for their own interests, there are many security requirements. In particular, to prevent a malicious owner from acting as the informer, the protocol must ensure that the owner does not know the distributed shares, which is counter-intuitive and not addressed by known techniques. We investigate various attack scenarios, and propose a secure and efficient protocol based on a combination of cryptographic primitives. Our technique could be of independent interest to other applications of threshold secret sharing in deterring sharing.
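The threshold secret sharing building block here is standard Shamir sharing; a minimal sketch (illustrative, not the paper's contractualized protocol):

```python
import secrets

P = 2**127 - 1  # prime modulus (a Mersenne prime)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shs = share(123456789, t=3, n=5)
assert reconstruct(shs[:3]) == 123456789
```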
Last updated:  2018-12-03
Identity-Concealed Authenticated Encryption and Key Exchange
Yunlei Zhao
Identity concealment and zero-round trip time (0-RTT) connection are two current research focuses in the design and analysis of secure transport protocols, like TLS1.3 and Google's QUIC, in the client-server setting. In this work, we introduce a new primitive for identity-concealed authenticated encryption in the public-key setting, referred to as higncryption, which can be viewed as a novel monolithic integration of public-key encryption, digital signature, and identity concealment. We present the security definitional framework for higncryption, and a conceptually simple (yet carefully designed) protocol construction. As a new primitive, higncryption can have many applications. In this work, we focus on its applications to 0-RTT authentication, showing higncryption is well suited to and compatible with QUIC and OPTLS, and on its applications to identity-concealed authenticated key exchange (CAKE) and unilateral CAKE (UCAKE). In particular, we make a systematic study of applying and incorporating higncryption into TLS. Of independent interest is a new concise security definitional framework for CAKE and UCAKE proposed in this work, which unifies the traditional BR and (post-ID) frameworks, enjoys composability, and ensures a very strong security guarantee. Along the way, we make a systematic comparative study with related protocols and mechanisms including Zheng's signcryption, one-pass HMQV, QUIC, TLS1.3 and OPTLS, most of which are widely standardized or in use.
Last updated:  2018-12-03
Can you sign a quantum state?
Gorjan Alagic, Tommaso Gagliardoni, Christian Majenz
Cryptography with quantum states exhibits a number of surprising and counterintuitive features. In a 2002 work, Barnum et al. argued informally that these strange features should imply that digital signatures for quantum states are impossible (Barnum et al., FOCS 2002). In this work, we perform the first rigorous study of the problem of signing quantum states. We first show that the intuition of Barnum et al. was correct, by proving an impossibility result which rules out even very weak forms of signing quantum states. Essentially, we show that any non-trivial combination of correctness and security requirements results in negligible security. This rules out all quantum signature schemes except those which simply measure the state and then sign the outcome using a classical scheme. In other words, only classical signature schemes exist. We then show a positive result: it is possible to sign quantum states, provided that they are also encrypted with the public key of the intended recipient. Following classical nomenclature, we call this notion quantum signcryption. Classically, signcryption is only interesting if it provides superior efficiency to simultaneous encryption and signing. Our results imply that, quantumly, it is far more interesting: by the laws of quantum mechanics, it is the only signing method available. We develop security definitions for quantum signcryption, ranging from a simple one-time two-user setting, to a chosen-ciphertext-secure many-time multi-user setting. We also give secure constructions based on post-quantum public-key primitives. Along the way, we show that a natural hybrid method of combining classical and quantum schemes can be used to "upgrade" a secure classical scheme to the fully-quantum setting, in a wide range of cryptographic settings including signcryption, authenticated encryption, and chosen-ciphertext security.
Last updated:  2018-12-03
More on sliding right
Joachim Breitner
This text can be thought of as an “external appendix” to the paper Sliding right into disaster: Left-to-right sliding windows leak by Daniel J. Bernstein, Joachim Breitner, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal and Yuval Yarom [1, 2]. It goes into the details of: an alternative way to find the knowable bits of the secret exponent, which is complete and can (in rare corner cases) find more bits than the rewrite rules in Section 3.1 of [1]; an algorithm to calculate the collision entropy H that is used in Theorem 3 of [1]; and a proof of Theorem 3.
Last updated:  2019-01-31
On the Concrete Security of Goldreich’s Pseudorandom Generator
Geoffroy Couteau, Aurélien Dupin, Pierrick Méaux, Mélissa Rossi, Yann Rotella
Local pseudorandom generators allow one to expand a short random string into a long pseudorandom string, such that each output bit depends on a constant number d of input bits. Due to its extreme efficiency features, this intriguing primitive enjoys a wide variety of applications in cryptography and complexity. In the polynomial regime, where the seed is of size n and the output of size n^s for s > 1, the only known solution, commonly known as Goldreich's PRG, proceeds by applying a simple d-ary predicate to public random size-d subsets of the bits of the seed. While the security of Goldreich's PRG has been thoroughly investigated, with a variety of results deriving provable security guarantees against classes of attacks in some parameter regimes and necessary criteria to be satisfied by the underlying predicate, little is known about its concrete security and efficiency. Motivated by its numerous theoretical applications and the hope of getting practical instantiations for some of them, we initiate a study of the concrete security of Goldreich's PRG, and evaluate its resistance to cryptanalytic attacks. Along the way, we develop a new guess-and-determine-style attack, and identify new criteria which refine existing criteria and capture the security guarantees of candidate local PRGs in a more fine-grained way.
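A minimal sketch of the construction under discussion, assuming the frequently studied predicate $P_5(x_1,\dots,x_5) = x_1 \oplus x_2 \oplus x_3 \oplus (x_4 \wedge x_5)$; in a real instance the size-d subsets are fixed public parameters rather than sampled on the fly:

```python
import random

def goldreich_prg(seed_bits, stretch, seed_for_subsets=2018):
    """Each output bit applies a fixed 5-ary predicate to a public
    random size-5 subset of the seed bits."""
    rng = random.Random(seed_for_subsets)   # public randomness
    n = len(seed_bits)
    out = []
    for _ in range(stretch):
        idx = rng.sample(range(n), 5)       # public size-5 subset
        x = [seed_bits[i] for i in idx]
        out.append(x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4]))  # P5 predicate
    return out

# toy usage: expand a 64-bit seed to 512 output bits
# (the secret seed should come from a real CSPRNG in practice)
seed = [random.getrandbits(1) for _ in range(64)]
stream = goldreich_prg(seed, 512)
```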
Last updated:  2023-02-14
Adaptively Secure MPC with Sublinear Communication Complexity
Ran Cohen, abhi shelat, Daniel Wichs
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online-computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. (2) Under the same assumptions, we construct a "Bob-optimized" 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice's input size. We prove impossibility of "Alice-optimized" protocols. (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS'18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a little sacrifice to the concrete round complexity and asymptotic communication complexity.
Last updated:  2018-12-03
Algebraic normal form of a bent function: properties and restrictions
Natalia Tokareva
Maximally nonlinear Boolean functions in $n$ variables, where $n$ is even, are called bent functions. There are several ways to represent Boolean functions; one of the most useful is the algebraic normal form (ANF). What can we say about the ANF of a bent function? We collect all known and new facts related to the ANF of a bent function. A new problem on bent functions is stated and studied: is it true that the linear, quadratic, cubic, etc., parts of the ANF of a bent function can be arbitrary? The case of the linear part has been well studied before. In this paper we prove that the quadratic part of a bent function can be arbitrary as well.
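For concreteness, the ANF of any Boolean function can be computed from its truth table with the fast Möbius transform; the sketch below (a standard technique, not specific to this paper) checks the classic 4-variable bent function $x_1x_2 \oplus x_3x_4$:

```python
def anf(truth_table):
    """ANF coefficients of a Boolean function from its truth table
    (length 2^n, entries 0/1), via the fast Moebius transform:
    coeff[u] = 1 iff the monomial prod_{i: bit i of u is 1} x_{i+1}
    occurs in the ANF."""
    coeff = list(truth_table)
    step = 1
    while step < len(coeff):
        for i in range(len(coeff)):
            if i & step:
                coeff[i] ^= coeff[i ^ step]
        step <<= 1
    return coeff

# truth table of the bent function x1*x2 + x3*x4 (x1 = lowest bit of x)
tt = [((x >> 0) & (x >> 1) & 1) ^ ((x >> 2) & (x >> 3) & 1)
      for x in range(16)]
assert [u for u, c in enumerate(anf(tt)) if c] == [0b0011, 0b1100]
```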
Last updated:  2018-12-04
Improved upper bound on root number of linearized polynomials and its application to nonlinearity estimation of Boolean functions
Sihem Mesnager, Kwang Ho Kim, Myong Song Jo
To determine the dimension of the null space of any given linearized polynomial is one of the vital problems in finite field theory, with relevance to the design of modern symmetric cryptosystems. However, the known general theory for this task falls far short of giving the exact dimension when applied to a specific linearized polynomial. The first contribution of this paper is a better general method for obtaining a more precise upper bound on the number of roots of any given linearized polynomial. We anticipate that this result will serve as a useful tool in many research branches of finite fields and cryptography. We then apply this result to obtain tighter estimations of the lower bounds on the second-order nonlinearities of general cubic Boolean functions, which has been an active research problem during the past decade, with many examples showing great improvements. Furthermore, this paper shows that by studying the distribution of radicals of derivatives of a given Boolean function one can obtain a better lower bound on the second-order nonlinearity, through an example of the monomial Boolean function $g_{\mu}=Tr(\mu x^{2^{2r}+2^r+1})$ over any finite field $\mathrm{GF}(2^n)$.
Last updated:  2019-02-20
Adversarially Robust Property Preserving Hash Functions
Elette Boyle, Rio LaVigne, Vinod Vaikuntanathan
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing. Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios. We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property which checks if x and y are “too far” or “too close”. Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity check codes.
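To see why adversarial robustness is the crux, consider the classic non-robust sketch for Hamming distance, which keeps the bits at a random public subset of coordinates: it errs with small probability on any fixed pair of inputs, yet an adversary who knows the sampled positions can concentrate all the differences outside them. A toy sketch of this failure-prone baseline (our illustration, not one of the paper's constructions):

```python
import random

def make_hamming_sketch(n, k, public_seed):
    """Sample k of the n coordinates; the sketch of x keeps only those
    bits.  Non-robust: once `positions` is known, an adversary can hide
    arbitrarily many differences in the unsampled coordinates."""
    positions = random.Random(public_seed).sample(range(n), k)
    def h(x):                      # x: sequence of n bits
        return tuple(x[i] for i in positions)
    return h

def estimated_distance(hx, hy, n, k):
    # scale the sampled disagreement count back up to the full length
    return sum(a != b for a, b in zip(hx, hy)) * n // k
```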
Last updated:  2018-12-03
Special Soundness Revisited
Douglas Wikström
We generalize and abstract the problem of extracting a witness from a prover of a special sound protocol into a combinatorial problem induced by a sequence of matroids and a predicate, and present a parametrized algorithm for solving this problem. The parametrization provides a tight tradeoff between the running time and the extraction error of the algorithm, which allows optimizing the parameters to minimize: the soundness error for interactive proofs, or the extraction time for proofs of knowledge. In contrast to previous work we bound the distribution of the running time and not only the expected running time. Tail bounds give a tighter analysis when applied recursively and concentrated running time.
Last updated:  2020-12-22
Towards Round-Optimal Secure Multiparty Computations: Multikey FHE without a CRS
Eunkyung Kim, Hyang-Sook Lee, Jeongeun Park
Multikey fully homomorphic encryption (MFHE) allows homomorphic operations between ciphertexts encrypted under different keys. In applications to secure multiparty computation (MPC) protocols, MFHE can be more advantageous than usual fully homomorphic encryption (FHE), since with MFHE users do not need to agree on a common public key before the computation. In EUROCRYPT 2016, Mukherjee and Wichs constructed a secure MPC protocol in only two rounds via MFHE, which relies on a common random/reference string (CRS) in key generation. After that, Brakerski et al. replaced the role of the CRS with a distributed setup for CRS calculation to form a four-round secure MPC protocol. Thus, recent improvements in the round complexity of MPC protocols have been made using MFHE. In this paper, we go further to obtain round-efficient and secure MPC protocols. The underlying MFHE schemes in previous works still involve a common value, the CRS, which seems to weaken the point of using MFHE, namely allowing users to generate their own keys independently. We resolve this issue by constructing an MFHE scheme without a CRS based on the LWE assumption, and then obtain an MPC protocol secure against semi-malicious adversaries in three rounds.
Last updated:  2018-12-03
Universally Composable Oblivious Transfer Protocol based on the RLWE Assumption
Pedro Branco, Jintai Ding, Manuel Goulão, Paulo Mateus
We use an RLWE-based key exchange scheme to construct a simple and efficient post-quantum oblivious transfer based on the Ring Learning with Errors assumption. We prove that our protocol is secure in the Universal Composability framework against static malicious adversaries in the random oracle model. The main idea of the protocol is that the receiver and the sender interact using the RLWE-based key exchange in such a way that the sender computes two keys, one of them shared with the receiver. It is infeasible for the sender to know which is the shared key and for the receiver to get information about the other one. The sender encrypts each message with each key using a symmetric-key encryption scheme and the receiver can only decrypt one of the ciphertexts. The protocol is extremely efficient in terms of computational and communication complexity, and thus a strong candidate for post-quantum applications.
Last updated:  2019-08-19
Leakage Resilient Secret Sharing and Applications
Akshayaram Srinivasan, Prashant Nalini Vasudevan
A secret sharing scheme allows a dealer to share a secret among a set of $n$ parties such that any authorized subset of the parties can recover the secret, while any unauthorized subset of the parties learns no information about the secret. A local leakage-resilient secret sharing scheme (introduced in independent works by (Goyal and Kumar, STOC 18) and (Benhamouda, Degwekar, Ishai and Rabin, Crypto 18)) additionally requires the secrecy to hold against every unauthorized set of parties even if they obtain some bounded local leakage from every other share. The leakage is said to be local if it is computed independently for each share. So far, the only known constructions of local leakage resilient secret sharing schemes are for threshold access structures for very low ($O(1)$) or very high ($n -o(\log n)$) thresholds. In this work, we give a compiler that takes a secret sharing scheme for any monotone access structure and produces a local leakage resilient secret sharing scheme for the same access structure, with only a constant-factor blow-up in the sizes of the shares. Furthermore, the resultant secret sharing scheme has optimal leakage-resilience rate i.e., the ratio between the leakage tolerated and the size of each share can be made arbitrarily close to $1$. Using this secret sharing scheme as the main building block, we obtain the following results: 1. Rate Preserving Non-Malleable Secret Sharing: We give a compiler that takes any secret sharing scheme for a 4-monotone access structure with rate $R$ and converts it into a non-malleable secret sharing scheme for the same access structure with rate $\Omega(R)$. The prior such non-zero rate construction (Badrinarayanan and Srinivasan, 18) only achieves a rate of $\Theta(R/{t_{\max}\log^2 n})$, where $t_{\max}$ is the maximum size of any minimal set in the access structure. As a special case, for any threshold $t \geq 4$ and an arbitrary $n \geq t$, we get the first constant rate construction of $t$-out-of-$n$ non-malleable secret sharing. 2. Leakage-Tolerant Multiparty Computation for General Interaction Pattern: For any function, we give a reduction from constructing leakage-tolerant secure multi-party computation protocols obeying any interaction pattern to constructing a secure (and not necessarily leakage-tolerant) protocol for a related function obeying the star interaction pattern. This improves upon the result of (Halevi et al., ITCS 2016), who constructed a protocol that is secure in a leak-free environment.
Last updated:  2018-12-03
Dfinity Consensus, Explored
Ittai Abraham, Dahlia Malkhi, Kartik Nayak, Ling Ren
We explore a Byzantine Consensus protocol called Dfinity Consensus, recently published in a technical report. Dfinity Consensus solves synchronous state machine replication among $n = 2f + 1$ replicas with up to $f$ Byzantine faults. We provide a succinct explanation of the core mechanism of Dfinity Consensus to the best of our understanding. We prove the safety and liveness of the protocol specification we provide. Our complexity analysis of the protocol reveals the following: the protocol achieves expected $O(f \times \Delta)$ latency against an adaptive adversary (where $\Delta$ is the synchronous bound on message delay), and expected $O(\Delta)$ latency against a mildly adaptive adversary. In either case, the communication complexity is unbounded. We then explain how the protocol can be modified to reduce the communication complexity to $O(n^3)$ in the former case, and to $O(n^2)$ in the latter.
Last updated:  2019-03-22
Improvements of Blockchain’s Block Broadcasting: An Incentive Approach
Qingzhao Zhang, Yijun Leng, Lei Fan
In order to achieve a truthful distributed ledger, homogeneous nodes in blockchain systems propagate messages over a P2P network so that they can synchronize the status of the ledger. Currently, blockchain systems aim at better scalability and higher throughput to support divergent applications, which leads to heavier message propagation, especially the broadcasting of blocks. The heavier traffic on the P2P network causes longer block synchronization latency, which may damage system consistency and expose the system to potential attacks. Even worse, when heavy communication consumes a lot of network capacity, nodes in the P2P network may decline to relay blocks to save their bandwidth, which may damage the efficiency of network synchronization. In order to alleviate these problems, we propose an improved block broadcasting protocol which combines block data sharding with financial incentive mechanisms. In the proposed scheme, a block is sliced into pieces in order to keep the network traffic smooth and speed up content delivery. Any node which relays a piece of the block is rewarded financially. By applying data sharding, our proposed scheme speeds up block broadcasting and thereby shortens the synchronization time by 90\%, as shown in our simulation experiments. In addition, we carry out a game-theoretical analysis to prove that nodes are efficiently incentivized to relay blocks honestly and actively.
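The sharding step can be pictured in a few lines (a hypothetical piece format for illustration, not the paper's wire protocol): slice a block into fixed-size indexed pieces so relays can forward, and be rewarded for, pieces in parallel rather than whole blocks.

```python
# Illustrative sketch only: slice a block into indexed pieces and
# reassemble them in any order of arrival.
import random

def slice_block(block: bytes, piece_size: int) -> list[tuple[int, bytes]]:
    """Tag each fixed-size piece with its index within the block."""
    return [(i // piece_size, block[i:i + piece_size])
            for i in range(0, len(block), piece_size)]

def reassemble(pieces: list[tuple[int, bytes]]) -> bytes:
    """Order-independent reassembly: sort by piece index, then concatenate."""
    return b"".join(data for _, data in sorted(pieces))

block = bytes(1024)                 # dummy 1 KiB block
pieces = slice_block(block, 128)
random.shuffle(pieces)              # pieces may arrive out of order
assert reassemble(pieces) == block
```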
Last updated:  2018-12-03
Analysis Of The Simulatability Of An Oblivious Transfer
Bing Zeng
In the Journal of Cryptology (25(1): 158-193, 2012), Shai Halevi and Yael Kalai proposed a general framework for constructing two-message oblivious transfer protocols using smooth projective hashing. The authors assert that this framework gives a simulation-based security guarantee when the sender is corrupted. This work has since been regarded in the literature as half-simulatable. In this paper, we show that the assertion is not true and present our ideas to construct a fully-simulatable oblivious transfer framework.
Last updated:  2023-04-20
Quantum-secure message authentication via blind-unforgeability
Gorjan Alagic, Christian Majenz, Alexander Russell, Fang Song
Formulating and designing authentication of classical messages in the presence of adversaries with quantum query access has been a longstanding challenge, as the familiar classical notions of unforgeability do not directly translate into meaningful notions in the quantum setting. A particular difficulty is how to fairly capture the notion of "predicting an unqueried value" when the adversary can query in quantum superposition. We propose a natural definition of unforgeability against quantum adversaries called blind unforgeability. This notion defines a function to be predictable if there exists an adversary who can use "partially blinded" oracle access to predict values in the blinded region. We support the proposal with a number of technical results. We begin by establishing that the notion coincides with EUF-CMA in the classical setting and go on to demonstrate that the notion is satisfied by a number of simple guiding examples, such as random functions and quantum-query-secure pseudorandom functions. We then show the suitability of blind unforgeability for supporting canonical constructions and reductions. We prove that the "hash-and-MAC" paradigm and the Lamport one-time digital signature scheme are indeed unforgeable according to the definition. To support our analysis, we additionally define and study a new variety of quantum-secure hash functions called Bernoulli-preserving. Finally, we demonstrate that blind unforgeability is stronger than a previous definition of Boneh and Zhandry [EUROCRYPT '13, CRYPTO '13] in the sense that we can construct an explicit function family which is forgeable by an attack that is recognized by blind-unforgeability, yet satisfies the definition by Boneh and Zhandry.
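The blinding mechanism is easy to state classically; the sketch below is only the classical analogue of the partially blinded oracle (the definition's substance is quantum), with membership in the blinded region $B$ sampled lazily, message by message.

```python
# Classical toy of the "partially blinded" oracle: every message falls into
# the blinded region B independently with probability eps; the oracle refuses
# to answer on B, and a forger wins by producing a valid tag on some m in B.
import hashlib
import hmac
import random

def make_blinded_oracle(key: bytes, eps: float, seed: int = 0):
    rng = random.Random(seed)
    membership: dict[bytes, bool] = {}

    def in_B(msg: bytes) -> bool:
        # Lazily sample whether msg belongs to the blinded region B.
        if msg not in membership:
            membership[msg] = rng.random() < eps
        return membership[msg]

    def oracle(msg: bytes) -> bytes | None:
        # Blinded messages get no tag; everything else is MACed normally.
        return None if in_B(msg) else hmac.new(key, msg, hashlib.sha256).digest()

    return oracle, in_B

oracle, in_B = make_blinded_oracle(b"\x00" * 32, eps=0.5)
tag = oracle(b"hello")   # None if b"hello" happens to be blinded
```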
Last updated:  2018-12-03
Compressive Sensing based Leakage Sampling and Reconstruction: A First Study
Changhai Ou, Chengju Zhou, Siew-Kei Lam
An important prerequisite for Side-channel Attack (SCA) is leakage sampling, where side-channel measurements (e.g. power traces) of the cryptographic device are collected for further analysis. However, as the operating frequency of cryptographic devices continues to increase with advancing technology, leakage sampling imposes ever higher requirements on the sampling equipment. This paper undertakes the first study to show that effective leakage sampling can be achieved without relying on sophisticated equipment through Compressive Sensing (CS). In particular, CS can obtain low-dimensional samples from high-dimensional power traces by simply projecting the useful information onto the observation matrix. The leakage information can then be reconstructed on a workstation for further analysis. With this approach, the sampling rate required to obtain the side-channel measurements is no longer limited by the operating frequency of the cryptographic device and the Nyquist sampling theorem; instead it depends on the sparsity of the leakage signal. Our study reveals that there is a large amount of information redundancy in power traces obtained from the leaky device. As such, CS can employ a much lower sampling rate and yet obtain equivalent leakage sampling performance, which significantly lowers the requirements on sampling equipment. The feasibility of our approach is verified theoretically and through experiments.
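The projection-and-reconstruction idea can be illustrated with a few lines of numpy (an idealized sketch with a synthetic sparse signal and orthogonal matching pursuit for recovery; the paper's measurement setup and reconstruction method may differ).

```python
# Compressive-sensing sketch: a k-sparse "leakage" vector x of dimension N
# is measured through a random M x N observation matrix (M << N), then
# recovered from the low-dimensional measurements.
import numpy as np

def omp(Phi: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Recover a k-sparse x from y = Phi @ x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
N, M, k = 512, 64, 5                    # N-sample trace, M measurements
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k)   # sparse signal
Phi = rng.normal(size=(M, N)) / np.sqrt(M)                 # observation matrix
y = Phi @ x                             # low-dimensional "sampled" trace
x_hat = omp(Phi, y, k)
print("recovery error:", np.linalg.norm(x - x_hat))
```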
Last updated:  2018-12-03
Towards Practical Security of Pseudonymous Signature on the BSI eIDAS Token
Mirosław Kutyłowski, Lucjan Hanzlik, Kamil Kluczniak
In this paper we present an extension of the Pseudonymous Signature scheme introduced by the German Federal BSI authority as a part of its technical recommendations for electronic identity documents. Without switching to pairing-friendly groups we enhance the scheme so that: (a) the issuer does not know the private keys of the citizen (so it cannot impersonate the citizen); (b) a powerful adversary that breaks any number of ID cards created by the Issuer cannot forge new cards that cannot be proven to be fake; (c) deanonymization of the pseudonyms used by a citizen is a multi-party protocol, where the consent of each authority is necessary to reveal the identity of a user; (d) we propose extended features concerning fully anonymous signatures and a pragmatic revocation approach; and (e) we present an argument for unlinkability (cross-domain anonymity) of the presented schemes. In this way we take a step forward towards overcoming the substantial weaknesses of the Pseudonymous Signature scheme. Moreover, the extension is built on top of the original scheme with a relatively small number of changes, following the strategy of reusing the previous schemes, thereby reducing the costs of a potential technology update.
Last updated:  2020-05-29
Stronger Leakage-Resilient and Non-Malleable Secret-Sharing Schemes for General Access Structures
Divesh Aggarwal, Ivan Damgard, Jesper Buus Nielsen, Maciej Obremski, Erick Purwanto, Joao Ribeiro, Mark Simkin
In this work we present a collection of compilers that take secret sharing schemes for arbitrary access structures as input and produce either leakage-resilient or non-malleable secret sharing schemes for the same access structure. A leakage-resilient secret sharing scheme hides the secret from an adversary, who has access to an unqualified set of shares, even if the adversary additionally obtains some size-bounded leakage from all other secret shares. A non-malleable secret sharing scheme guarantees that a secret that is reconstructed from a set of tampered shares is either equal to the original secret or completely unrelated. To the best of our knowledge we present the first generic compiler for leakage-resilient secret sharing for general access structures. In the case of non-malleable secret sharing, we strengthen previous definitions, provide separations between them, and construct a non-malleable secret sharing scheme for general access structures that fulfills the strongest definition with respect to independent share tampering functions. More precisely, our scheme is secure against concurrent tampering: the adversary is allowed to (non-adaptively) tamper the shares multiple times, and in each tampering attempt can freely choose the qualified set of shares to be used by the reconstruction algorithm to reconstruct the tampered secret. This is a strong analogue of the multiple-tampering setting for split-state non-malleable codes and extractors. We show how to use leakage-resilient and non-malleable secret sharing schemes to construct leakage-resilient and non-malleable threshold signatures. Classical threshold signatures allow distributing the secret key of a signature scheme among a set of parties, such that certain qualified subsets can sign messages. We construct threshold signature schemes that remain secure even if an adversary leaks from or tampers with all secret shares.
Last updated:  2018-12-05
Functional Analysis Attacks on Logic Locking
Deepak Sirone, Pramod Subramanyan
This paper proposes Functional Analysis attacks on state-of-the-art logic locking algorithms (abbreviated as FALL attacks). FALL attacks have two stages. The first stage identifies nodes involved in the locking functionality and analyzes functional properties of these nodes to shortlist a small number of candidate locking keys. In many cases, this shortlists exactly one locking key, so no further analysis is needed. However, if more than one key is shortlisted, the second stage introduces a SAT-based algorithm to identify the correct locking key from a list of alternatives using simulations on an unlocked circuit. In comparison to past work, the FALL attack is more practical as it can often succeed (90% of successful attempts in our experiments) by only analyzing the locked netlist, without requiring oracle access to an unlocked circuit. Further, FALL attacks successfully defeat Secure Function Logic Locking (SFLL), the only locking algorithm that is resilient to known attacks on logic locking. Our experimental evaluation shows that FALL is able to defeat 65 out of 80 (81%) circuits locked using SFLL.
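The second stage can be pictured with a naive stand-in for the SAT-based algorithm (the helper functions below are hypothetical, and random testing replaces SAT): simulate the locked circuit under each shortlisted key on random inputs and keep only the keys that agree with the unlocked-circuit oracle.

```python
# Toy sketch: eliminate candidate locking keys by random testing against
# an unlocked-circuit oracle. `locked_sim(x, key)` and `unlocked_oracle(x)`
# are hypothetical stand-ins for the circuit simulations in the paper.
import random

def identify_key(candidates, locked_sim, unlocked_oracle, n_inputs, trials=64):
    survivors = list(candidates)
    for _ in range(trials):
        x = tuple(random.getrandbits(1) for _ in range(n_inputs))
        ref = unlocked_oracle(x)
        # Keep only keys whose locked-circuit simulation matches the oracle.
        survivors = [k for k in survivors if locked_sim(x, k) == ref]
        if len(survivors) <= 1:
            break
    return survivors
```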
Last updated:  2018-12-03
Privacy Computing: Concept, Computing Framework And Future Development Trends
Fenghua Li, Hui Li, Ben Niu, Jinjun Chen
With the rapid development of information technology and the continuous evolution of personalized services, huge amounts of data are accumulated by large Internet companies in the process of serving users. Moreover, dynamic data interactions increase the intentional or unintentional persistence of private information across different information systems. However, problems such as the weakest-link effect of privacy preservation among different information systems and the difficulty of tracing the source of privacy violations are becoming more and more serious. Existing privacy-preserving schemes therefore cannot provide systematic preservation. In this paper, we focus on the stages of the information lifecycle, such as information collection, storage, processing, distribution and destruction. We then propose a theory of privacy computing and a key technology system, including a privacy computing framework, a formal definition of privacy computing, four principles that should be followed in privacy computing, algorithm design criteria, evaluation of the privacy-preserving effect, a privacy computing language, and so on. Finally, we employ four application scenarios to describe the universal application of privacy computing and discuss future research trends. We expect this work to guide theoretical research on users' privacy preservation in open environments.
Last updated:  2019-08-19
Revisiting Non-Malleable Secret Sharing
Saikrishna Badrinarayanan, Akshayaram Srinivasan
A threshold secret sharing scheme (with threshold $t$) allows a dealer to share a secret among a set of parties such that any group of $t$ or more parties can recover the secret and no group of at most $t-1$ parties learns any information about the secret. A non-malleable threshold secret sharing scheme, introduced in the recent work of Goyal and Kumar (STOC'18), additionally protects a threshold secret sharing scheme when its shares are subject to tampering attacks. Specifically, it guarantees that the secret reconstructed from the tampered shares is either the original secret or something unrelated to it. In this work, we continue the study of threshold non-malleable secret sharing against the class of tampering functions that tamper each share independently. We focus on achieving greater efficiency and guaranteeing a stronger security property. We obtain the following results:
- Rate Improvement. We give the first construction of a threshold non-malleable secret sharing scheme that has rate $> 0$. Specifically, for every $n, t \geq 4$, we give a construction of a $t$-out-of-$n$ non-malleable secret sharing scheme with rate $\Theta(\frac{1}{t\log ^2 n})$. In the prior constructions, the rate was $\Theta(\frac{1}{n\log m})$ where $m$ is the length of the secret, and thus the rate tends to 0 as $m \rightarrow \infty$. Furthermore, we also optimize the parameters of our construction and give a concretely efficient scheme.
- Multiple Tampering. We give the first construction of a threshold non-malleable secret sharing scheme secure in the stronger setting of bounded tampering, wherein the shares are tampered by multiple (but bounded in number) possibly different tampering functions. The rate of such a scheme is $\Theta(\frac{1}{k^3t\log^2 n})$ where $k$ is an a priori bound on the number of tamperings. We complement this positive result by proving that it is impossible to have a threshold non-malleable secret sharing scheme that is secure in the presence of an a priori unbounded number of tamperings.
- General Access Structures. We extend our results beyond threshold secret sharing and give constructions of rate-efficient, non-malleable secret sharing schemes for more general monotone access structures that are secure against multiple (bounded) tampering attacks.
Last updated:  2019-08-27
A new SNOW stream cipher called SNOW-V
Patrik Ekdahl, Thomas Johansson, Alexander Maximov, Jing Yang
In this paper we propose a new member of the SNOW family of stream ciphers, called SNOW-V. The motivation is to meet an industry demand for very high-speed encryption in a virtualized environment, something that can be expected to be relevant in a future 5G mobile communication system. We revise the SNOW 3G architecture to be competitive in such a pure software environment, making use of both existing acceleration instructions for the AES encryption round function and the ability of modern CPUs to handle large vectors of integers (e.g. SIMD instructions). We have kept the general design of SNOW 3G, in terms of the linear feedback shift register (LFSR) and Finite State Machine (FSM), but both entities are updated to better align with vectorized implementations. The LFSR part is new and operates at 8 times the speed of the FSM. We have furthermore increased the total state size by using 128-bit registers in the FSM, we use the full AES encryption round function in the FSM update, and, finally, the initialization phase includes a masking with key bits at its end. The result is an algorithm generally much faster than AES-256 and with expected security not worse than AES-256.
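For readers unfamiliar with the LFSR half of an LFSR+FSM design, here is a textbook 16-bit Galois LFSR (a generic sketch only; SNOW-V's actual LFSR uses 16-bit cells with its own feedback polynomials, which are not reproduced here).

```python
# Generic 16-bit Galois LFSR; 0xB400 is the standard textbook
# maximal-length tap mask for this width.
def galois_lfsr16(state: int, taps: int = 0xB400):
    """Yield one output bit per step of the shift register."""
    assert state != 0, "the all-zero state is a fixed point"
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:                 # feed back the tapped bits
            state ^= taps
        yield lsb

bits = galois_lfsr16(0xACE1)
keystream_byte = sum(next(bits) << i for i in range(8))
```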
Last updated:  2019-01-18
Factoring Products of Braids via Garside Normal Form
Simon-Philipp Merz, Christophe Petit
Braid groups are infinite non-abelian groups naturally arising from geometric braids. For two decades they have been proposed for cryptographic use. In braid group cryptography public braids often contain secret braids as factors and it is hoped that rewriting the product of braid words hides individual factors. We provide experimental evidence that this is in general not the case and argue that under certain conditions parts of the Garside normal form of factors can be found in the Garside normal form of their product. This observation can be exploited to decompose products of braids of the form $ABC$ when only $B$ is known. Our decomposition algorithm yields a universal forgery attack on WalnutDSA, which is one of the 20 proposed signature schemes that are being considered by NIST for standardization of quantum-resistant public-key cryptography. Our attack on WalnutDSA can universally forge signatures within seconds for both the 128-bit and 256-bit security level, given one random message-signature pair. The attack worked on 99.8% and 100% of signatures for the 128-bit and 256-bit security levels in our experiments. Furthermore, we show that the decomposition algorithm can be used to solve instances of the conjugacy search problem and decomposition search problem in braid groups. These problems are at the heart of other cryptographic schemes based on braid groups.
Last updated:  2018-11-29
Fast Authentication from Aggregate Signatures with Improved Security
Muslum Ozgur Ozmen, Rouzbeh Behnia, Attila A. Yavuz
An attempt to derive signer-efficient digital signatures from aggregate signatures was made in a signature scheme referred to as Structure-free Compact Rapid Authentication (SCRA) (IEEE TIFS 2017). In this paper, we first mount a practical universal forgery attack against the NTRU instantiation of SCRA by observing only 8161 signatures. Second, we propose a new signature scheme (FAAS), which transforms any single-signer aggregate signature scheme into a signer-efficient scheme. We show two efficient instantiations of FAAS, namely, FAAS-NTRU and FAAS-RSA, both of which achieve high computational efficiency. Our experiments confirmed that FAAS schemes achieve up to 100x faster signature generation compared to their underlying schemes. Moreover, FAAS schemes eliminate some of the costly operations such as Gaussian sampling, rejection sampling, and exponentiation at signature generation that are shown to be susceptible to side-channel attacks. This enables FAAS schemes to enhance the security and efficiency of their underlying schemes. Finally, we prove that FAAS schemes are secure (in the random oracle model), and open-source both our attack and FAAS implementations for public testing purposes.
Last updated:  2021-06-03
Efficient Fully-Leakage Resilient One-More Signature Schemes
Antonio Faonio
In a recent paper, Faonio, Nielsen and Venturi (ICALP 2015) gave new constructions of leakage-resilient signature schemes. The signature schemes proposed remain unforgeable against an adversary leaking arbitrary information on the entire state of the signer, including the random coins of the signing algorithm. The main feature of their signature schemes is that they offer a graceful degradation of security in situations where standard existential unforgeability is impossible. The notion, put forward by Nielsen, Venturi, and Zottarel (PKC 2014), defines a slack parameter $\gamma$ which, roughly speaking, describes how gracefully the security degrades. Unfortunately, the standard-model signature scheme of Faonio, Nielsen and Venturi has a slack parameter that depends on the number of signatures queried by the adversary. In this paper we show two new constructions in the standard model where the above limitation is avoided. Specifically, the first scheme achieves slack parameter $O(1/\lambda)$, where $\lambda$ is the security parameter, and is based on standard number-theoretic assumptions; the second scheme achieves the optimal slack parameter (i.e. $\gamma = 1$) and is based on knowledge-of-exponent assumptions. Our constructions are efficient and have leakage rate $1 - o(1)$; most notably, our second construction has a signature size of only 8 group elements, which, to the best of our knowledge, makes it the leakage-resilient signature scheme with the shortest known signature size.
Last updated:  2018-11-29
Breaking the Binding: Attacks on the Merkle Approach to Prove Liabilities and its Applications
Kexin Hu, Zhenfeng Zhang, Kaiven Guo
Proofs of liabilities are used by applications such as banks or Bitcoin exchanges to prove the total sums of money in their dataset that they owe. The Maxwell protocol, a cryptographic proof-of-liabilities scheme which relies on a data structure known as the summation Merkle tree, uses a Merkle approach to prove liabilities in the decentralized setting, i.e., clients independently verify that they are in the dataset with no trusted auditor. In this paper, we examine the Maxwell protocol and the summation Merkle tree. We formalize the Maxwell protocol and show that it is not secure. We present an attack with which the liabilities proven using the Maxwell protocol are less than the actual value. This attack can have significant consequences: a Bitcoin exchange controlling a total of $n$ client accounts can present valid liabilities proofs including only one account balance in its dataset. We suggest two improvements to the Maxwell protocol and the summation Merkle tree, and present a formal proof for the improvement that is closest in spirit to the Maxwell protocol. Moreover, we show that the DAM scheme, a micropayment scheme of Zerocash which adopts the Maxwell protocol as a tool to disincentivize double/multiple spending, is vulnerable to a multi-spending attack. We also show that the Provisions scheme, which adopts the Maxwell protocol to extend its privacy-preserving proof of liabilities, is affected by a similar attack.
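For reference, a summation Merkle tree carries a (hash, sum) pair at every node. The sketch below is an illustration, not the paper's exact construction: it binds each child's individual sum inside the parent hash, which is in the spirit of the fixes the paper suggests, whereas hashing only the combined sum is the kind of shortcut an underreporting attack can exploit.

```python
# Minimal summation Merkle tree sketch: every node is a (hash, sum) pair.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(account_id: bytes, balance: int) -> tuple[bytes, int]:
    """Leaf commits to one client account and its balance."""
    return H(account_id + balance.to_bytes(8, "big")), balance

def node(left: tuple[bytes, int], right: tuple[bytes, int]) -> tuple[bytes, int]:
    """Parent hash binds BOTH children's hashes and individual sums."""
    (lh, ls), (rh, rs) = left, right
    h = H(lh + ls.to_bytes(8, "big") + rh + rs.to_bytes(8, "big"))
    return h, ls + rs

root = node(leaf(b"alice", 5), leaf(b"bob", 7))
print("total liabilities committed at the root:", root[1])   # 12
```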
Last updated:  2018-12-14
Leakage-Resilient Secret Sharing
Ashutosh Kumar, Raghu Meka, Amit Sahai
In this work, we consider the natural goal of designing secret sharing schemes that ensure security against a powerful adaptive adversary who may learn some ``leaked'' information about all the shares. We say that a secret sharing scheme is $p$-party leakage-resilient if the secret remains statistically hidden even after an adversary learns a bounded amount of leakage, where each bit of leakage can depend jointly on the shares of an adaptively chosen subset of $p$ parties. Many works have focused on designing secret sharing schemes that handle individual and (mostly) non-adaptive leakage for (some) threshold secret sharing schemes [DP07,DDV10,LL12,ADKO15,GK18,BDIR18]. We give an unconditional compiler that transforms any standard secret sharing scheme with arbitrary access structure into a $p$-party leakage-resilient one for $p$ logarithmic in the number of parties. This yields the first secret sharing schemes secure against adaptive and joint leakage for more than two parties. As a natural extension, we initiate the study of leakage-resilient non-malleable secret sharing and build such schemes for general access structures. We empower the computationally unbounded adversary to adaptively leak from the shares and then use the leakage to tamper with each of the shares arbitrarily and independently. Leveraging our $p$-party leakage-resilient schemes, we also construct such non-malleable secret sharing schemes: any such tampering either preserves the secret or completely 'destroys' it. This improves upon the non-malleable secret sharing scheme of Goyal and Kumar (CRYPTO 2018) where no leakage was permitted. Leakage-resilient non-malleable codes can be seen as 2-out-of-2 schemes satisfying our guarantee and have already found several applications in cryptography [LL12,ADKO15,GKPRS18,GK18,CL18,OPVV18]. Our constructions rely on a clean connection we draw to communication complexity in the well-studied number-on-forehead (NOF) model and rely on functions that have strong communication-complexity lower bounds in the NOF model (in a black-box way). We get efficient $p$-party leakage-resilient schemes for $p$ up to $O(\log n)$, as our share sizes have exponential dependence on $p$. We observe that improving this dependence from $2^{O(p)}$ to $2^{o(p)}$ will lead to progress on longstanding open problems in complexity theory.
Last updated:  2018-11-29
Genus 2 curves with given split Jacobian
Jasper Scholten
We construct a genus 2 curve inside the product of two elliptic curves. The formula for this construction appeared in a previous paper. The current paper discusses how this formula arises naturally from the theory of elliptic Kummer surfaces.
Last updated:  2019-11-22
A Provably-Secure Unidirectional Proxy Re-Encryption Scheme Without Pairing in the Random Oracle Model
S. Sharmila Deva Selvi, Arinjita Paul, C. Pandu Rangan
Proxy re-encryption (PRE) enables delegation of decryption rights by entrusting a proxy server with special information that allows it to transform a ciphertext under one public key into a ciphertext of the same message under a different public key. It is important to note that the proxy which performs the re-encryption learns nothing about the message encrypted under either public key. Due to its transformation property, proxy re-encryption has practical applications in distributed storage, encrypted email forwarding, Digital Rights Management (DRM) and cloud storage. Since its introduction, several proxy re-encryption schemes have been proposed in the literature, and a majority of them have been realized using bilinear pairing. In Africacrypt 2010, the first PKI-based collusion-resistant CCA-secure PRE scheme without pairing was proposed in the random oracle model. In this paper, we point out an important weakness in that scheme. We also present the first collusion-resistant pairing-free unidirectional proxy re-encryption scheme which meets CCA security under a variant of the computational Diffie-Hellman hardness assumption in the random oracle model.
Last updated:  2018-11-29
PoTS - A Secure Proof of TEE-Stake for Permissionless Blockchains
Sébastien Andreina, Jens-Matthias Bohli, Ghassan O. Karame, Wenting Li, Giorgia Azzurra Marson
Proof-of-Stake (PoS) protocols have been actively researched for the past few years. PoS finds direct applicability in permissionless blockchain platforms and emerges as one of the strongest candidates to replace the largely inefficient Proof-of-Work mechanism that is currently used by the majority of existing permissionless blockchain systems. Although a number of PoS variants have been proposed, these protocols suffer from a number of security shortcomings. Namely, most existing PoS variants are subject to the nothing-at-stake, long-range, or stake-grinding attacks, which considerably degrade security in the blockchain. These shortcomings do not result from a lack of foresight when designing these protocols, but are inherently due to the ease of manipulating "stake" when compared to other more established resources, such as "work". In this paper, we address these problems and propose a secure Proof-of-Stake protocol, PoTS, that leverages Trusted Execution Environments (TEEs), such as Intel SGX, to ensure that each miner can generate at most one block per "height" for strictly increasing heights, thus thwarting the problem of nothing at stake and a large class of long-range attacks. In combination with TEEs, PoTS additionally uses cryptographic techniques to prevent grinding attacks and to protect against posterior corruption. We show that our protocol is secure, in the sense of well-established cryptographic notions for blockchain protocols, under realistic hardware assumptions on the TEE and well-established cryptographic assumptions. Finally, we evaluate the performance of our proposal by means of an implementation. Our evaluation results show that PoTS offers a strong tradeoff between the security and performance of the underlying PoS protocol.
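The enclave-enforced invariant, at most one certified block per strictly increasing height, can be modeled in a few lines (a toy model with a hypothetical API; a real deployment would rely on SGX-sealed keys, monotonic counters, and remote attestation rather than a Python object).

```python
# Toy model of the TEE invariant: the enclave refuses to certify a second
# block at any height it has already certified, ruling out equivocation.
import hashlib

class MiningEnclave:
    def __init__(self, secret_key: bytes):
        self._sk = secret_key          # stands in for a key sealed in the TEE
        self._last_height = -1         # monotonic: heights strictly increase

    def certify_block(self, height: int, block_hash: bytes) -> bytes:
        if height <= self._last_height:
            raise PermissionError("a block was already certified at this height")
        self._last_height = height
        # Toy "signature": a keyed hash standing in for a real signature scheme.
        return hashlib.sha256(
            self._sk + height.to_bytes(8, "big") + block_hash
        ).digest()

enclave = MiningEnclave(b"\x01" * 32)
cert = enclave.certify_block(1, hashlib.sha256(b"block-1").digest())
# A second call with height 1 now raises PermissionError.
```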
Last updated:  2018-11-29
Echoes of the Past: Recovering Blockchain Metrics From Merged Mining
Nicholas Stifter, Philipp Schindler, Aljosha Judmayer, Alexei Zamyatin, Andreas Kern, Edgar Weippl
So far, the topic of merged mining has mainly been considered in a security context, covering issues such as mining power centralization or cross-chain attack scenarios. In this work we show that key information for determining blockchain metrics such as the fork rate can be recovered through data extracted from merge-mined cryptocurrencies. Specifically, we reconstruct a long-ranging view of forks and stale blocks in Bitcoin from its merge-mined child chains, and compare our results to previous findings that were derived from live measurements. Thereby, we show that live monitoring alone is not sufficient to capture a large majority of these events, as we are able to identify a non-negligible portion of stale blocks that were previously unaccounted for. Their authenticity is ensured by cryptographic evidence regarding both their position in the respective blockchain and the Proof-of-Work difficulty. Furthermore, by applying this new technique to Litecoin and its child cryptocurrencies, we are able to provide the first extensive view and lower bound on the stale block and fork rate in the Litecoin network. Finally, we outline that a recovery of other important metrics and blockchain characteristics through merged mining may also be possible.
Last updated:  2018-11-29
A Public Key Exchange Cryptosystem Based on Ideal Secrecy
Vamshi Krishna Kammadanam, Virendra R. Sule, Yi Hong
This paper proposes two closely related asymmetric-key (public-key) schemes for key exchange whose security is based on the notion of ideal secrecy. In the first scheme, the private key consists of two singular matrices, a polar code matrix and a random permutation matrix, all over the binary field. The sender transmits the sum of two messages over a public channel using the public key of the receiver. The receiver can decrypt the individual messages using the private key. An adversary, without the knowledge of the private key, can only compute multiple equiprobable solutions in a space of sufficiently large size related to the dimension of the kernel of the singular matrices. This achieves security in the sense of ideal secrecy. The second scheme extends to general matrices. The two schemes are cryptanalyzed against various attacks.
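The kernel argument can be seen in a toy GF(2) example (illustrative only, with a small hand-picked singular matrix rather than the scheme's polar-code and permutation matrices): a singular matrix maps several messages to the same ciphertext, so an adversary without the private key faces equiprobable preimages, one per kernel element.

```python
# Toy GF(2) illustration: a singular 3x3 matrix has a kernel of size 2,
# so every ciphertext has exactly 2 equiprobable preimages.
import itertools
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]]) % 2          # singular over GF(2): row3 = row1 + row2

def encrypt(m: np.ndarray) -> np.ndarray:
    return (A @ m) % 2

c = encrypt(np.array([1, 0, 1]))
preimages = [m for m in itertools.product((0, 1), repeat=3)
             if np.array_equal(encrypt(np.array(m)), c)]
print("equiprobable preimages:", preimages)   # kernel size 2 -> 2 preimages
```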
Last updated:  2019-05-15
Ouroboros Crypsinous: Privacy-Preserving Proof-of-Stake
Thomas Kerber, Markulf Kohlweiss, Aggelos Kiayias, Vassilis Zikas
We present Ouroboros Crypsinous, the first formally analysed privacy-preserving proof-of-stake (PoS) blockchain protocol. To model its security we give a thorough treatment of private ledgers in the universal composition (UC) setting that might be of independent interest. To prove our protocol secure against adaptive attacks, which are particularly critical in the PoS setting, we introduce a new coin evolution technique that relies on SNARKs and key-private forward-secure encryption. The latter primitive, and the associated construction, can be of independent interest. We stress that existing approaches to private blockchains, such as the proof-of-work-based Zerocash, are analyzed only against static corruptions.