All papers in 2022 (Page 14 of 1781 results)

Last updated:  2022-04-23
India’s “Aadhaar” Biometric ID: Structure, Security, and Vulnerabilities
Pratyush Ranjan Tiwari, Dhruv Agarwal, Prakhar Jain, Swagam Dasgupta, Preetha Datta, Vineet Reddy, Debayan Gupta
India's Aadhaar is the largest biometric identity system in history, designed to help deliver subsidies, benefits, and services to India's 1.4 billion residents. The Unique Identification Authority of India (UIDAI) is responsible for providing each resident (not each citizen) with a distinct identity - a 12-digit Aadhaar number - using their biometric and demographic details. We provide the first comprehensive description of the Aadhaar infrastructure, collating information across thousands of pages of public documents and releases, as well as direct discussions with Aadhaar developers. Critically, we describe the first known cryptographic issue within the system, and discuss how a workaround prevents it from being exploitable at scale. Further, we categorize and rate various security and privacy limitations and the corresponding threat actors, examine the legitimacy of alleged security breaches, and discuss improvements and mitigation strategies.
Last updated:  2022-10-12
Medha: Microcoded Hardware Accelerator for computing on Encrypted Data
Ahmet Can Mert, Aikata, Sunmin Kwon, Youngsam Shin, Donghoon Yoo, Yongwoo Lee, Sujoy Sinha Roy
Homomorphic encryption enables computation on encrypted data, and hence it has great potential in privacy-preserving outsourcing of computations to the cloud. Hardware acceleration of homomorphic encryption is crucial as software implementations are very slow. In this paper, we present design methodologies for building a programmable hardware accelerator for speeding up the cloud-side homomorphic evaluations on encrypted data. First, we propose a divide-and-conquer technique that enables homomorphic evaluations in the polynomial ring $R_{Q,2N} = \mathbb{Z}_{Q}[x]/(x^{2N} + 1)$ to use a hardware accelerator that has been built for the smaller ring $R_{Q,N} = \mathbb{Z}_{Q}[x]/(x^{N} + 1)$. The technique makes it possible to use a single hardware accelerator flexibly for supporting several homomorphic encryption parameter sets. Next, we present several architectural design methods that we use to realize the flexible, instruction-set accelerator architecture, which we call 'Medha'. At every level of the implementation hierarchy, we explore possibilities for parallel processing. Starting from hardware-friendly parallel algorithms for the basic building blocks, we gradually build heavily parallel RNS polynomial arithmetic units. Next, many of these parallel units are interconnected elegantly so that their interconnections require the minimum number of nets, making the overall architecture placement-friendly on the platform. As homomorphic encryption is computation- as well as data-centric, the speed of homomorphic evaluations depends greatly on the way the data variables are handled. For Medha, we take a memory-conservative design approach and avoid any off-chip memory access during homomorphic evaluations. Finally, we implement Medha in a Xilinx Alveo U250 FPGA and measure the timing performance of the microcoded homomorphic addition, multiplication, key-switching, and rescaling routines for the leveled fully homomorphic encryption scheme RNS-HEAAN at a 200 MHz clock frequency. For the large parameter sets $(\log Q, N) = (438, 2^{14})$ and $(546, 2^{15})$, Medha achieves accelerations of up to $68\times$ and $78\times$, respectively, compared to the highly optimized software implementation Microsoft SEAL running at 2.3 GHz.
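For intuition about the divide-and-conquer step, one standard algebraic route to such a splitting (a textbook CRT argument, not necessarily Medha's exact decomposition) applies whenever $Q \equiv 1 \pmod 4$, so that some $r \in \mathbb{Z}_Q$ satisfies $r^2 \equiv -1 \pmod Q$: $$x^{2N} + 1 \equiv (x^N - r)(x^N + r) \pmod{Q}, \qquad R_{Q,2N} \cong \mathbb{Z}_Q[x]/(x^N - r) \times \mathbb{Z}_Q[x]/(x^N + r).$$ A multiplication in $R_{Q,2N}$ then reduces to two independent degree-$N$ multiplications, each of which can reuse arithmetic built for $R_{Q,N}$ after twisting the coefficients, provided the required roots of unity exist modulo $Q$ (e.g., when $Q \equiv 1 \pmod{4N}$).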
Last updated:  2022-04-23
Short Lattice Signature Scheme with Tighter Reduction under Ring-SIS Assumption
Kaisei Kajita, Go Ohtake, Kazuto Ogawa, Koji Nuida, Tsuyoshi Takagi
We propose a short signature scheme under the ring-SIS assumption in the standard model. Specifically, by revisiting an existing construction [Ducas and Micciancio, CRYPTO 2014], we demonstrate lattice-based signatures with improved reduction loss. As far as we know, there is no known way to use multiple tags in the signature simulation of the security proof for lattice tag-based signatures. We address the possibility of tag collisions in the lattice setting, which improves the reduction loss. Our scheme generates tags from messages by constructing a scheme under a mild security condition: existential unforgeability against random message attacks with auxiliary information. Thus our scheme reduces the signature size, since it does not need to send tags with the signatures. Our scheme has a short signature size of $O(1)$ and achieves a tighter reduction loss than that of Ducas et al.'s scheme. Our proposed scheme has two variants. One variant has a tighter reduction and the same verification key size of $O(\log n)$ as Ducas et al.'s scheme, where $n$ is the security parameter. The other variant achieves a much tighter reduction loss of $O(Q/n)$ and a verification key size of $O(n)$, where $Q$ is the number of signing queries.
Last updated:  2022-04-23
Property-Preserving Hash Functions and Combinatorial Group Testing
Kazuhiko Minematsu
A property-preserving hash (PPH) function is a hash function that allows evaluating a property of the inputs from their hash values. Boyle et al. (ITCS 2019) recently introduced PPH, considered its robustness against an adversary who accesses the internal randomness of the PPH, and proposed two robust PPH constructions for a weak form of the Hamming distance predicate. The second construction received attention for its short hash value, although it relies on an ad-hoc security assumption. The first construction, which is entirely hash-based and relies on the classical collision-resistance assumption, has been largely overlooked. We study their first construction and discover its close connection to the seemingly different field of hash/MAC-based (adversarial) error detection via the theory of Combinatorial Group Testing (CGT). We show several consequences of this discovery. In particular, we show that some existing proposals in the field of CGT-based error detection can be converted into PPHs for the Hamming distance property, and they immediately improve and generalize Boyle et al.'s hash-based PPH proposal. We also show that the idea of Boyle et al. is useful in the context of a variant of the CGT problem.
Last updated:  2023-11-28
Subverting Cryptographic Hardware used in Blockchain Consensus
Pratyush Ranjan Tiwari and Matthew Green
In this work, we study and formalize security notions for algorithm substitution attacks (ASAs) on cryptographic puzzles. Puzzles are difficult problems that require an investment of computation, memory, or some other related resource. They are heavily used as a building block for the consensus networks used by cryptocurrencies. These include primitives such as proof-of-work, proof-of-space, and verifiable delay functions (VDFs). Due to economies of scale, these networks increasingly rely on a small number of companies to construct opaque hardware or software (e.g., GPU or FPGA images): this dependency raises concerns about cryptographic subversion. Unlike the algorithms considered by previous ASAs, cryptographic puzzles do not rely on secret keys and thus enable a very different set of attacks. We first explore the threat model for these systems and then propose concrete attacks that (1) selectively reduce a victim's solving capability (e.g., hashrate) and (2) exfiltrate puzzle solutions to an attacker. We then propose defenses, several of which can be applied to existing cryptocurrency hardware with minimal changes. We also find that mining devices for many major proof-of-work cryptocurrencies already exhibit errors exactly as a potentially subverted device would. Given that these attacks are relevant to all proof-of-work cryptocurrencies, which had a combined market capitalization of several hundred billion dollars in 2022, we recommend that all vulnerable mining protocols consider making the suggested adaptations today.
Last updated:  2022-08-31
On the Security of TrCBC
Debrup Chakraborty, Samir Kundu
TrCBC is a variant of CBC-MAC which appeared in Information Processing Letters, 112(7):302-307, 2012. The authors claimed TrCBC to be a secure message authentication code (MAC) with some interesting properties. If TrCBC is instantiated with a block cipher with block length n, then it requires ⌈λ/n⌉ block cipher calls for authenticating a λ-bit message and requires a single key, which is the block cipher key. The authors state that TrCBC can have tag lengths of size less than n/2. We show that with high probability, an adversary can forge TrCBC with tag length n/2 − 1 with just three queries. The attack that we show can be applied to forge a large class of messages. The authors proved TrCBC to be a pseudorandom function (PRF). A scrutiny of the claimed PRF bound shows that for some recommended values of tag lengths, the bound turns out to be quite large. Thus, the security theorem does not imply security of TrCBC for all recommended tag lengths.
Last updated:  2022-04-25
SIDH-sign: an efficient SIDH PoK-based signature
Jesús-Javier Chi-Domínguez, Víctor Mateu, Lucas Pandolfo Perin
We analyze and implement the SIDH PoK-based construction from De Feo, Dobson, Galbraith, and Zobernig. We improve the SIDH-PoK built-in functions to allow an efficient constant-time implementation. We then combine it with the Fiat-Shamir transform to obtain an SIDH PoK-based signature scheme, which we label SIDH-sign for short. We suggest SIDH-sign-p377, SIDH-sign-p546, and SIDH-sign-p697 as instances that provide security comparable to NIST levels L1, L3, and L5. To the best of our knowledge, the three proposed instances provide the best performance among digital signature schemes based on isogenies.
Last updated:  2022-05-20
Side-Channel Analysis of Lattice-Based Post-Quantum Cryptography: Exploiting Polynomial Multiplication
Catinca Mujdei, Arthur Beckers, Jose Maria Bermudo Mera, Angshuman Karmakar, Lennert Wouters, Ingrid Verbauwhede
Polynomial multiplication algorithms such as Toom-Cook and the Number Theoretic Transform are fundamental building blocks for lattice-based post-quantum cryptography. In this work, we present correlation power analysis-based side-channel analysis methodologies targeting every polynomial multiplication strategy for all lattice-based post-quantum key encapsulation mechanisms in the final round of the NIST post-quantum standardization procedure. We perform practical experiments on real side-channel measurements, demonstrating that our method can extract the secret key from all lattice-based post-quantum key encapsulation mechanisms. Our analysis demonstrates that the polynomial multiplication strategy used can significantly impact the time complexity of the attack.
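A minimal toy sketch of the attack idea (simulated Hamming-weight leakage of one secret coefficient during modular multiplication; the modulus, noise level, and leakage model are illustrative assumptions, not the paper's measurement setup):

import random

q = 257                      # toy modulus, chosen only to keep the example fast
secret = 123                 # one secret polynomial coefficient to recover
n_traces = 500

def hw(v):                   # Hamming-weight leakage model
    return bin(v).count("1")

# simulate traces: leakage of (public * secret mod q) plus Gaussian noise
publics = [random.randrange(q) for _ in range(n_traces)]
traces = [hw(p * secret % q) + random.gauss(0, 1.0) for p in publics]

def corr(xs, ys):            # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy)

# CPA: the guess whose predicted leakage correlates best with the traces wins
best = max(range(q), key=lambda guess:
           corr([hw(p * guess % q) for p in publics], traces))
print("recovered coefficient:", best, "true:", secret)

Against a full scheme, the same statistic would be evaluated per secret coefficient and per point of interest in the measured traces.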
Last updated:  2023-09-07
Understanding binary-Goppa decoding
Daniel J. Bernstein
This paper reviews, from bottom to top, a polynomial-time algorithm to correct t errors in classical binary Goppa codes defined by squarefree degree-t polynomials. The proof is factored through a proof of a simple Reed-Solomon decoder, and the algorithm is simpler than Patterson's algorithm. All algorithm layers are expressed as Sage scripts backed by test scripts. All theorems are formally verified. The paper also covers the use of decoding inside the Classic McEliece cryptosystem, including reliable recognition of valid inputs.
Last updated:  2022-12-01
On the Hardness of Module Learning With Errors with Short Distributions
Katharina Boudgoust, Corentin Jeudy, Adeline Roux-Langlois, Weiqiang Wen
The Module Learning With Errors problem (M-LWE) is a core computational assumption of lattice-based cryptography which offers an interesting trade-off between guaranteed security and concrete efficiency. The problem is parameterized by a secret distribution as well as an error distribution. There is a gap between the choices of those distributions for theoretical hardness results (standard formulation of M-LWE, i.e., uniform secret modulo $q$ and Gaussian error) and practical schemes (small bounded secret and error). In this work, we make progress towards narrowing this gap. More precisely, we prove that M-LWE with uniform $\eta$-bounded secret for any $1 \leq \eta \ll q$ and Gaussian error, in both its search and decision variants, is at least as hard as the standard formulation of M-LWE, provided that the module rank $d$ is at least logarithmic in the ring degree $n$. We also prove that the search version of M-LWE with large uniform secret and uniform $\eta$-bounded error is at least as hard as the standard M-LWE problem, if the number of samples $m$ is close to the module rank $d$ and with further restrictions on $\eta$. The latter result can be extended to provide the hardness of M-LWE with uniform $\eta$-bounded secret and error under specific parameter conditions. Overall, the results apply to all cyclotomic fields, but most of the intermediate results are proven in more general number fields.
Last updated:  2022-08-16
Breaking Masked Implementations of the Clyde-Cipher by Means of Side-Channel Analysis - A Report on the CHES Challenge Side-Channel Contest 2020
Aron Gohr, Friederike Laus, Werner Schindler
In this paper we present our solution to the CHES Challenge 2020, the task of which was to break masked hardware and software implementations of the lightweight cipher Clyde by means of side-channel analysis. We target the secret cipher state after processing of the first $S$-box layer. Using the provided trace data we obtain a strongly biased posterior distribution for the secret-shared cipher state at the targeted point; this enables us to see exploitable biases on the cipher state even before the secret-sharing-based masking. These biases on the unshared state can be evaluated one $S$-box at a time and combined across traces, which enables us to recover likely key hypotheses $S$-box by $S$-box. In order to see the shared cipher state, we employ a deep neural network similar to the one used by Gohr, Jacob and Schindler to solve the CHES 2018 AES challenge. We modify their architecture to predict the exact bit sequence of the secret-shared cipher state. We find that convergence of training on this task is unsatisfactory with the standard encoding of the shared cipher state and therefore introduce a different encoding of the prediction target, which we call the scattershot encoding. In order to further investigate how exactly the scattershot encoding helps to solve the task at hand, we construct a simple synthetic task where convergence problems very similar to those we observed in our side-channel task appear with the naive target data encoding but disappear with the scattershot encoding. We complete our analysis by showing results that we obtained with a classical method (as opposed to an AI-based method), namely the stochastic approach, which we first generalize to the setting of shared keys for this purpose. We show that the neural network draws on a much broader set of features, which may partially explain why the neural-network-based approach massively outperforms the stochastic approach. On the other hand, the stochastic approach provides insights into properties of the implementation, in particular the observation that the $S$-boxes behave very differently regarding how easy or hard they are to predict.
Last updated:  2022-04-22
Designated-Verifier Linkable Ring Signatures
Pourandokht Behrouz, Panagiotis Grontas, Vangelis Konstantakatos, Aris Pagourtzis, Marianna Spyrakou
We introduce Designated-Verifier Linkable Ring Signatures (DVLRS), a novel cryptographic primitive which combines designated-verifier and linkable ring signatures. Our goal is to guarantee signer ambiguity and give the designated verifier the capability to add 'noise' using simulated signatures that are publicly verifiable. This increases the privacy of the participants, as it does not allow an adversary to bypass the anonymity provided by ring signatures by using the content of a message to identify the signer. We model unforgeability, anonymity, linkability and non-transferability for DVLRS and provide a secure construction in the Random Oracle model. Finally, we explore some first applications for our primitive, which revolve around the use case of an anonymous assessment system that also protects the subject of the evaluation, even if the private key is compromised.
Last updated:  2022-04-22
Efficient ASIC Architectures for Low Latency Niederreiter Decryption
Daniel Fallnich, Shutao Zhang, Tobias Gemmeke
Post-quantum cryptography addresses the increasing threat that quantum computing poses to modern communication systems. Among the available "quantum-resistant" systems, the Niederreiter cryptosystem is positioned as a conservative choice with strong security guarantees. As a code-based cryptosystem, the Niederreiter system enables high-performance operations and is thus ideally suited for applications such as the acceleration of server workloads. However, until now, no ASIC architecture has been available for low-latency computation of Niederreiter operations. Therefore, the present work targets the design, implementation and optimization of tailored architectures for low-latency Niederreiter decryption. Two architectures utilizing different decoding algorithms are proposed and implemented using a 22nm FDSOI CMOS technology node. One of these optimized architectures improves the decryption latency by 27% compared to a state-of-the-art reference while requiring only 25% of the area.
Last updated:  2022-05-01
Improved Pump and Jump BKZ by Sharp Simulator
Leizhang Wang, Wenwen Xia, Geng Wang, Baocang Wang, Dawu Gu
The General Sieve Kernel (G6K) implements a variety of lattice reduction algorithms based on sieving. One representative of these lattice reduction algorithms is the Pump and jump-BKZ (pnj-BKZ) algorithm, which is currently considered the fastest lattice reduction algorithm. pnj-BKZ is a BKZ-type lattice reduction algorithm that includes a jump strategy and uses Pump as its SVP oracle. Pump, which was also proposed in G6K, is an SVP solving algorithm that combines progressive sieving and the dimensions-for-free technique. However, unlike for classical BKZ, there is no simulator for predicting the behavior of the pnj-BKZ algorithm when the jump is greater than 1, which would be helpful for finding better lattice reduction strategies. There are two main differences between pnj-BKZ and the classical BKZ algorithm. The first is that after pnj-BKZ performs the SVP oracle on a certain projected sublattice, it does not call the SVP oracle on the next adjacent projected sublattice; instead, pnj-BKZ jumps ahead by J indices and runs the SVP algorithm on the corresponding projected sublattice. By using this jump technique, the number of SVP-oracle calls per round of pnj-BKZ is reduced to about 1/J of the original; a sketch of this tour structure is given below. The second is that pnj-BKZ uses Pump as the SVP oracle on the projected sublattices. Based on the BKZ 2.0 simulator, we propose a pnj-BKZ simulator by using properties of HKZ-reduced bases. Experiments show that our proposed pnj-BKZ simulator can accurately predict the behavior of pnj-BKZ with jump greater than 1. Besides, we use this pnj-BKZ simulator to derive an optimized strategy for choosing the jump, which improves the reduction efficiency of pnj-BKZ. Our optimized pnj-BKZ is 2.9 and 2.6 times faster in solving the TU LWE challenges (n=75, alpha=0.005) and (n=60, alpha=0.010) than G6K's default LWE solving strategy.
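A minimal sketch of the tour structure just described (my reading of the abstract, with a stand-in pump; the index bounds and the progressiveness of real G6K tours are simplified):

# Hedged sketch of one pnj-BKZ tour with jump J; `pump` is a stand-in callable,
# not G6K's real Pump routine.
def pnj_bkz_tour(dim, beta, J, pump):
    kappa, calls = 0, 0
    while kappa < dim - 1:
        left = kappa
        right = min(kappa + beta, dim)
        pump(left, right)     # SVP oracle on the projected sublattice [left, right)
        kappa += J            # jump J indices ahead (classical BKZ: J = 1)
        calls += 1
    return calls

# With J > 1 the number of SVP-oracle calls per tour drops by roughly a factor J:
print(pnj_bkz_tour(180, 80, 1, lambda l, r: None))   # ~ classical BKZ tour
print(pnj_bkz_tour(180, 80, 9, lambda l, r: None))   # pnj-BKZ with jump 9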
Last updated:  2022-07-13
Armistice: Micro-Architectural Leakage Modelling for Masked Software Formal Verification
Arnaud de Grandmaison, Karine Heydemann, Quentin L. Meunier
Side-channel attacks are powerful attacks for retrieving secret data by exploiting physical measurements such as power consumption or electromagnetic emissions. Masking is a popular countermeasure as it can be proven secure against an attacker model. In practice, masked software implementations suffer from a security reduction due to a mismatch between the leakage sources considered in the security proof and the real ones, which depend on the micro-architecture. We present the model of a system comprising an Arm Cortex-M3, obtained from its RTL description and test vectors, as well as a model of the memory of an STM32F1 board, built exclusively from test vectors. Based on these models, we propose Armistice, a framework for formally verifying the absence of leakage in first-order masked implementations, taking into account the modelled micro-architectural sources of leakage. We show that Armistice makes it possible to pinpoint vulnerable instructions in real-world masked implementations and helps design masked software implementations that are practically secure.
Last updated:  2022-04-22
Quantum binary quadratic form reduction
Nicolas David, Thomas Espitau, Akinori Hosoyamada
Quadratic form reduction enjoys broad use in both classical and quantum algorithms, for instance in the celebrated LLL algorithm for lattice reduction. In this paper, we propose the first quantum circuit for definite binary quadratic form reduction that achieves O(n log n) depth, O(n^2) width and O(n^2 log n) quantum gates. The proposed work is based on a binary variant of the reduction algorithm for definite quadratic forms. As side results, we show a quantum circuit performing bit rotation with O(log n) depth, O(n) width, and O(n log n) gates, in addition to a circuit performing integer logarithm computation with O(log n) depth, O(n) width, and O(n) gates.
Last updated:  2022-04-22
Băhēm: A Provably Secure Symmetric Cipher
M. Rajululkahf
This paper proposes Băhēm, a symmetric cipher with the property that, when it is used with a pre-shared secret key k, no cryptanalysis can degrade its security below H(k) bits of entropy, even under Grover's algorithm or even if it turned out that P = NP. Băhēm's security is very similar to that of the one-time pad (OTP), except that it does not impose on the communicating parties the inconvenient constraint of generating a large random pad in advance of their communication. Instead, Băhēm allows the parties to agree on a small pre-shared secret key, such as |k| = 128 bits, and then generate their random pads in the future as they go. For any operation, be it encryption or decryption, Băhēm performs only 4 exclusive-or operations (XORs) per cleartext bit, including its 2 overhead bits. If it takes a CPU 1 cycle to perform an XOR between a pair of 64-bit variables, then a Băhēm operation takes 4 / 8 = 0.5 cycles per byte. Further, all of Băhēm's operations are independent, so a system with n CPU cores can perform an operation in 0.5 / n CPU cycles per byte of wall-clock time. While Băhēm has an overhead of 2 extra bits per encrypted cleartext bit, its early single-threaded prototype implementation achieves faster decryption than OpenSSL's ChaCha20, despite the fact that Băhēm's ciphertext is 3 times larger than ChaCha20's. This supports the claim that the 2-bit overhead is practically negligible for most applications. Băhēm's early prototype has a slower encryption time than OpenSSL's ChaCha20 due to its use of a true random number generator (TRNG). However, this can be trivially optimised by gathering the true random bits in advance, so that Băhēm gets the entropy conveniently when it runs. Aside from Băhēm's usage as a provably-secure general-purpose symmetric cipher, it can also be used, in some applications such as password verification, to make existing hash functions provably one-way, by using Băhēm to encrypt a predefined string using the hash as the key. A password is then verified if its hash decrypts the Băhēm ciphertext to retrieve the predefined string.
Last updated:  2022-08-27
Superposition Attacks on Pseudorandom Schemes based on Two or Less Permutations
Shaoxuan Zhang, Chun Guo, Qingju Wang
We study quantum superposition attacks against permutation-based pseudorandom cryptographic schemes. We first extend Kuwakado and Morii's attack against the Even-Mansour cipher (ISITA 2012), and exhibit key recovery attacks against a large class of pseudorandom schemes based on a single call to an $n$-bit permutation, with polynomial $O(n)$ quantum steps. We also show how to overcome restrictions on available quantum data in certain relevant settings. We then consider TPPR schemes, namely Two Permutation-based PseudoRandom cryptographic schemes. Using the improved Grover-meet-Simon method of Bonnetain et al. (ASIACRYPT 2019), we show that the keys of a wide class of TPPR schemes can be recovered with $O(n)$ superposition queries and $O(n2^{n/2})$ quantum steps. We also exhibit sub-classes of "degenerate" TPPR schemes that lack certain internal operations, and present more efficient key recovery attacks against them using either Simon's algorithm or Chailloux et al.'s algorithm for collision searching (ASIACRYPT 2017). Further using the all-subkeys-recovery idea of Isobe and Shibutani (SAC 2012), our results give rise to key recovery attacks against several recently proposed permutation-based PRFs, as well as the 2-round Even-Mansour ciphers with generic key schedule functions (Chen et al., JoC 2018) and their tweakable variants (Cogliati et al., CRYPTO 2015). From a constructive perspective, our results establish new quantum Q2 security upper bounds for two permutation-based pseudorandom schemes as well as sound design choices.
Last updated:  2022-04-22
Reducing the Depth of Quantum FLT-Based Inversion Circuit
Harashta Tatimma Larasati, Dedy Septono Catur Putranto, Rini Wisnu Wardhani, Howon Kim
In this study, we propose a way to reduce the depth of the existing quantum Fermat's Little Theorem (FLT)-based inversion circuit for binary finite fields. In particular, we follow a complete waterfall approach to translate the Itoh-Tsujii variant of FLT to the corresponding quantum circuit, and we remove the inverse squaring operations employed in the previous work by Banegas et al., lowering the number of CNOT gates (CNOT count), which contributes to a reduced overall depth and gate count. Furthermore, we compare the costs by first constructing our method and the previous work's in the Qiskit quantum computer simulator and then performing a resource analysis. Our approach can serve as an alternative for a time-efficient implementation.
Last updated:  2022-04-22
New optimization techniques for PlonK’s arithmetization
Miguel Ambrona, Anne-Laure Schmitt, Raphael R. Toledo, Danny Willems
PlonK is a universal and updatable zk-SNARK for general circuit satisfiability that allows a verifier to check the validity of a certain NP statement very efficiently, optionally in zero-knowledge. PlonK requires that the NP relation of interest be expressed as a system of so-called PlonK constraints. Such conversion is complex and can be implemented in various ways, having a great impact on the prover complexity (which is typically linearithmic in the number of PlonK constraints). We propose several general results for simplifying PlonK constraint systems, which produce more compact but equivalent systems and can lead to significant performance improvements. We also develop an automated optimizer of constraints, based on our techniques, that can be used to construct very compact and less error-prone constraint systems, favoring a more auditable circuit design. Finally, we demonstrate the potential of our techniques by implementing optimized constraint systems for the Poseidon hash, obtaining the most compact representations in the Turbo-PlonK model with minimal custom gates. En route, we devise a novel optimization idea for implementing Poseidon partial rounds and show that it can be applied to both simplifying SNARK circuits and achieving performance improvements in CPU implementations of the Poseidon hash.
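For context, a standard presentation of the constraint format that such simplifications target (this is the usual textbook form, not a contribution of the paper): a PlonK arithmetic gate over wire values $a, b, c$ is defined by selector constants $q_L, q_R, q_O, q_M, q_C$ and enforces $$q_L \cdot a + q_R \cdot b + q_O \cdot c + q_M \cdot a b + q_C = 0.$$ For example, $(q_L, q_R, q_O, q_M, q_C) = (1, 1, -1, 0, 0)$ yields an addition gate $a + b = c$, while $(0, 0, -1, 1, 0)$ yields a multiplication gate $ab = c$; constraint-system optimizations reduce how many such gates (and copy constraints between their wires) are needed to express a given relation.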
Last updated:  2022-05-16
Information Leakage in Code-based Masking: A Systematic Evaluation by Higher-Order Attacks
Wei Cheng, Sylvain Guilley, Jean-Luc Danger
Code-based masking is a recent line of research on masking schemes aiming at provably counteracting side-channel attacks. It generalizes and unifies many masking schemes within a coding-theoretic formalization. In code-based masking schemes, the tuning parameters are the underlying linear codes, whose choice significantly affects the side-channel resilience. In this paper, we investigate the exploitability of the information leakage in code-based masking and present attack-based evaluation results using the higher-order optimal distinguisher (HOOD). In particular, we consider two representative instances of code-based masking, namely inner product masking (IPM) and Shamir's secret sharing (SSS) based masking. Our results empirically confirm the state-of-the-art theoretical derivations using numerically simulated measurements. Specifically, the theoretical results are based on quantifying information leakage; we complete the panorama with attack-based evaluations that investigate the exploitability of this leakage. Moreover, we classify all possible candidates of linear codes in IPM with 2 and 3 shares and in (3,1)-SSS based masking, and highlight both the optimal and the worst codes for them. Relying on our empirical evaluations, we therefore recommend investigating the coding-theoretic properties to find the best linear codes for strengthening instances of code-based masking. As for applications, our attack-based evaluation directly empowers designers to enhance the protection of code-based masking by employing optimal linear codes. Our framework leverages simulated leakage traces, hence allowing for source code validation or patching in case an implementation is found to be attackable.
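As a minimal illustration of the IPM instance (a 2-share encoding over GF(2^8); the public constant L2 below is an arbitrary choice, not one of the codes ranked in the paper):

import random

def gf256_mul(a, b):
    # carry-less multiplication modulo the AES polynomial x^8 + x^4 + x^3 + x + 1
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

L2 = 0x35                       # public non-zero constant defining the IPM code

def ipm_encode(x):
    x2 = random.randrange(256)  # fresh random share
    x1 = x ^ gf256_mul(L2, x2)  # so that x = x1 + L2 * x2 over GF(2^8)
    return (x1, x2)

def ipm_decode(shares):
    x1, x2 = shares
    return x1 ^ gf256_mul(L2, x2)

x = 0xA7
assert ipm_decode(ipm_encode(x)) == x

The choice of L2 (more generally, of the underlying linear code) is exactly the tuning parameter whose side-channel impact the paper evaluates.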
Last updated:  2022-06-17
A Novel NIZK-based Privacy Preserving Biometric Identification Scheme for Internet of Things
Lin You, Qiang Zhu, Gengran Hu
With the popularity of biometric-based identity authentication in the field of the Internet of Things, more and more attention has been paid to the privacy protection of biometric data. Gunasinghe et al. presented PrivBioMTAuth, the first authentication solution for mobile phones that protects the user's privacy by performing an interactive zero-knowledge proof. However, PrivBioMTAuth still requires considerable storage overhead and communication overhead during the registration phase. Meanwhile, the user's biometric images and password need to be revealed to the identity provider. In this paper, we present an authentication solution for the Internet of Things with fully succinct verification and significantly lower storage and communication overhead. Different from PrivBioMTAuth, we rely on the non-interactive zero-knowledge arguments given in Groth's work to reduce the proof size and simplify the verification complexity. In addition, we build on multi-exponentiation arguments based on Bayer et al.'s work to ensure the correctness of the operation results provided by the identity provider.
Last updated:  2022-04-12
SIPFA: Statistical Ineffective Persistent Faults Analysis on Feistel Ciphers
Nasour Bagheri, Sadegh Sadeghi, Prasanna Ravi, Shivam Bhasin, Hadi Soleimany
Persistent Fault Analysis (PFA) is an innovative and powerful analysis technique in which a fault persists throughout the execution. The prior prominent results on PFA were on SPN block ciphers, and the security of Feistel ciphers against this attack has received less attention. In this paper, we introduce a framework to utilize Statistical Ineffective Fault Analysis (SIFA) in the persistent fault setting by proposing Statistical Ineffective Persistent Faults Analysis (SIPFA), which can be efficiently applied to Feistel ciphers in a variety of scenarios. To demonstrate the effectiveness of our technique, we apply SIPFA to three widely used Feistel schemes, DES, 3DES, and Camellia. Our analysis reveals that the secret key of these block ciphers can be extracted with a complexity of at most $2^{50}$ utilizing a single unknown fault. Furthermore, we demonstrate that the secret can be recovered in a fraction of a second by increasing the adversary's control over the injected faults. To evaluate SIPFA in a variety of scenarios, we conducted both simulations and real experiments utilizing electromagnetic fault injection on DES and 3DES.
Last updated:  2023-09-27
Multilinear Schwartz-Zippel mod N with Applications to Succinct Arguments
Benedikt Bünz and Ben Fisch
We show that for $\mathbf{x}\gets [0,2^\lambda)^\mu$, any integer $N$, and any non-zero multilinear polynomial $f\in \mathbb{Z}[X_1, \dots,X_\mu]$ co-prime to $N$, the probability that $f(\mathbf{x})\equiv 0 \bmod N$ is inversely proportional to $N$. As a corollary we show that if $\log_2 N\geq \log_2(2\mu)\lambda+8\mu^2$ then the probability is bounded by $\frac{\mu+1}{2^\lambda}$. We also give tighter numerically derived bounds, showing that if $\log_2 N\geq 418$ and $\mu\leq 20$, the probability is bounded by $\frac{\mu}{2^\lambda}+2^{-120}$. We then apply this Multilinear Composite Schwartz-Zippel Lemma (LCSZ) to resolve an open problem in the literature on succinct arguments: that the Bulletproofs protocol for linear relations over classical Pedersen commitments in prime-order groups remains knowledge sound when generalized to commitment schemes that are binding only over short integer vectors. In particular, this means that the Bulletproofs protocol can be instantiated with plausibly post-quantum commitments from lattice hardness assumptions (SIS/R-SIS/M-SIS). It can also be instantiated with commitments based on groups of unknown order (GUOs), in which case the verification time becomes logarithmic instead of linear time. Prior work on lattice-based Bulletproofs (Crypto 2020) and its extensions required modifying the protocol to sample challenges from special sets of polynomial size. This results in a non-negligible knowledge error, necessitating parallel repetition to amplify soundness, which impacts efficiency and poses issues for the Fiat-Shamir transform. Our analysis shows knowledge soundness for the original Bulletproofs protocol with the exponential-size integer challenge set $[0,2^\lambda]$ and thus achieves a negligible soundness error without repetition, circumventing a previous impossibility result (Crypto 2021). Our analysis also closes a critical gap in the original security proof of DARK, a GUO-based polynomial commitment scheme (Eurocrypt 2020). Along the way to achieving our result we also define Almost Special Soundness (AMSS), a generalization of Special-Soundness. Our main result is divided into two parts: (1) that the Bulletproofs protocol over generalized commitments is AMSS, and (2) that AMSS implies knowledge soundness. This framework serves to simplify the application of our analytical techniques to protocols beyond Bulletproofs in the future.
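A small numerical sanity check of the statement (toy parameters far below the paper's bounds; the polynomial, modulus, and trial count are arbitrary illustrative choices):

import random, itertools

lam, mu, N = 8, 3, 10007    # toy sizes; N prime, so a non-zero f is co-prime to N

# a fixed non-zero multilinear polynomial: one random coefficient per subset of variables
coeffs = {s: random.randrange(1, N) for s in itertools.product((0, 1), repeat=mu)}

def f(x):
    total = 0
    for s, c in coeffs.items():
        term = c
        for xi, bit in zip(x, s):
            if bit:
                term *= xi
        total += term
    return total % N

trials = 200_000
zeros = sum(f([random.randrange(2 ** lam) for _ in range(mu)]) == 0 for _ in range(trials))
# for generic coefficients the empirical rate is close to 1/N
print(f"empirical Pr[f(x) = 0 mod N] = {zeros / trials:.2e}, 1/N = {1 / N:.2e}")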
Last updated:  2022-11-13
Improving Differential-Neural Distinguisher Model For DES, Chaskey and PRESENT
Liu Zhang, Zilong Wang
In CRYPTO 2019, Gohr first introduced deep learning methods into cryptanalysis, for Speck32/64, obtaining a differential-neural distinguisher using a ResNet neural network. Zhang et al. used multiple parallel convolutional layers with different kernel sizes to capture information from multiple dimensions, thus improving the accuracy or obtaining distinguishers covering more rounds for Speck32/64 and Simon32/64. Inspired by Zhang's work, we apply this network structure to other ciphers. We not only improve the accuracy of the distinguishers, but also increase the number of rounds they cover, that is, we distinguish more rounds of ciphertext from random data for DES, Chaskey and PRESENT.
Last updated:  2022-04-13
Robust, Revocable and Adaptively Secure Attribute-Based Encryption with Outsourced Decryption
Anis Bkakria
Attribute-based encryption (ABE) is a cryptographic technique allowing fine-grained access control by enabling one-to-many encryption. Existing ABE constructions suffer from at least one of the following limitations. First, a single point of failure on security, meaning that, once an authority is compromised, an adversary can either easily break the confidentiality of the encrypted data or effortlessly prevent legitimate users from accessing data; second, the lack of a user and/or attribute revocation mechanism achieving forward secrecy; third, a heavy computation workload placed on the data user; last but not least, the lack of adaptive security in standard models. In this paper, we propose the first single-point-of-failure-free multi-authority ciphertext-policy ABE that simultaneously (1) ensures robustness for both decryption key issuing and access revocation while achieving forward secrecy; (2) enables outsourced decryption to reduce the decryption overhead for data users that have limited computational resources; and (3) achieves adaptive (full) security in standard models. The provided theoretical complexity comparison shows that our construction introduces linear storage and computation overheads that occur only once during its setup phase, which we believe to be a reasonable price to pay to achieve all the previous features.
Last updated:  2022-04-26
Proof of Availability & Retrieval in a Modular Blockchain Architecture
Shir Cohen, Guy Goren, Lefteris Kokoris-Kogias, Alberto Sonnino, Alexander Spiegelman
This paper explores a modular design architecture aimed at helping blockchains (and other SMR implementations) scale to a very large number of processes. This comes in contrast to existing monolithic architectures that interleave transaction dissemination, ordering, and execution in a single functionality. To achieve this, we first split the monolith into multiple layers which can use existing distributed computing primitives. The exact specification of the data dissemination is formally defined by the Proof of Availability & Retrieval (PoA&R) abstraction. Solutions to the PoA&R problem contain two related sub-protocols: one that "pushes" information into the network and another that "pulls" this information. Regarding the latter, there is a dearth of research literature, which this paper rectifies. We present a family of pulling sub-protocols and rigorously analyze them. Extensive simulations support the theoretical claims of efficiency and robustness in the case of a very large number of players. Finally, an actual implementation and deployment on a small number of machines (roughly the size of several industrial systems) demonstrates the viability of the architecture's paradigm.
Last updated:  2022-04-12
Efficient Compiler to Covert Security with Public Verifiability for Honest Majority MPC
Thomas Attema, Vincent Dunning, Maarten Everts, Peter Langenkamp
We present a novel compiler for transforming arbitrary, passively secure MPC protocols into efficient protocols with covert security and public verifiability in the honest majority setting. Our compiler works for protocols with any number of parties > 2 and treats the passively secure protocol in a black-box manner. In multi-party computation (MPC), covert security provides an attractive trade-off between the security of actively secure protocols and the efficiency of passively secure protocols. In this security notion, honest parties are only required to detect an active attack with some constant probability, referred to as the deterrence rate. Extending covert security with public verifiability additionally ensures that any party, even an external one not participating in the protocol, is able to identify the cheaters if an active attack has been detected. Recently, Faust et al. (EUROCRYPT 2021) and Scholl et al. (Pre-print 2021) introduced similar covert security compilers based on computationally expensive time-lock puzzles. At the cost of requiring an honest majority, our work avoids the use of time-lock puzzles completely. Instead, we adopt a much more efficient publicly verifiable secret sharing scheme to achieve a similar functionality. This obviates the need for a trusted setup and a general-purpose actively secure MPC protocol. We show that our computation and communication costs are orders of magnitude lower while achieving the same deterrence rate.
Last updated:  2022-04-16
Dependable Intrusion Detection System for IoT: A Deep Transfer Learning-based Approach
Sk. Tanzir Mehedi, Adnan Anwar, Ziaur Rahman, Kawsar Ahmed, Rafiqul Islam
Security concerns for IoT applications have been alarming because of their widespread use in different enterprise systems. The potential threats to these applications are constantly emerging and changing, and therefore, sophisticated and dependable defense solutions are necessary against such threats. With the rapid development of IoT networks and evolving threat types, traditional machine learning-based IDSs must be updated to cope with the security requirements of the current sustainable IoT environment. In recent years, deep learning and deep transfer learning have progressed and experienced great success in different fields and have emerged as a potential solution for dependable network intrusion detection. However, new and emerging challenges have arisen related to the accuracy, efficiency, scalability, and dependability of traditional IDSs in a heterogeneous IoT setup. This manuscript proposes a deep transfer learning-based dependable IDS model that outperforms several existing approaches. The unique contributions include effective attribute selection, which is best suited to identify normal and attack scenarios for a small amount of labeled data, the design of a dependable deep transfer learning-based ResNet model, and an evaluation considering real-world data. To this end, a comprehensive experimental performance evaluation has been conducted. Extensive analysis and performance evaluation show that the proposed model is robust, more efficient, and demonstrates better performance, ensuring dependability.
Last updated:  2022-04-12
UTT: Decentralized Ecash with Accountable Privacy
Alin Tomescu, Adithya Bhat, Benny Applebaum, Ittai Abraham, Guy Gueta, Benny Pinkas, Avishay Yanai
We present UnTraceable Transactions (UTT), a system for decentralized ecash with accountable privacy. UTT is the first ecash system that obtains three critical properties: (1) it provides decentralized trust by implementing the ledger, bank, auditor, and registration authorities via threshold cryptography and Byzantine Fault Tolerant infrastructure; (2) it balances accountability and privacy by implementing anonymity budgets: users can anonymously send payments, but only up to a limited amount of currency per month. Past this point, transactions can either be made public or subjected to customizable auditing rules; (3) by carefully choosing cryptographic building blocks and co-designing the cryptography and decentralization, UTT is tailored for high throughput and low latency. With a combination of optimized cryptographic building blocks and vertical scaling (optimistic concurrency control), UTT can provide almost 1,000 payments with accountable privacy per second, with latencies of around 100 milliseconds and less. Through horizontal scaling (multiple shards), UTT can scale to tens of thousands of such transactions per second. With 60 shards we measure over 10,000 transactions with accountable privacy per second. We formally define and prove the security of UTT using an MPC-style ideal functionality. Along the way, we define a new MPC framework that captures the security of reactive functionalities in a stand-alone setting, thus filling an important gap in the MPC literature. Our new framework is compatible with practical instantiations of cryptographic primitives and provides a trade-off between concrete efficiency and provable security that may be also useful for future work.
Last updated:  2022-05-03
Improved Stock Market Structure Using Cryptography
Charanjit S. Jutla, Barry Mishra
The stock markets have two primary functions, that of providing liquidity and price discovery. While the market micro-structure was mostly ignored or assumed to function ideally for the purpose of asset pricing, O'Hara (Journal of Finance, 2003) has established that both liquidity and price discovery affect asset pricing, and in particular asset returns. Easley and O'Hara (Journal of Finance 2004) have demonstrated that informed investors' private information is not reflected efficiently in price discovery. We argue that the periodic price discovery has both positive and negative consequences for asset returns. In particular, the inefficient reflection of investors' information during price discovery incentivizes them to conduct research. However, this requires that the auctioneer be ideal or fully trusted. In this work we propose using cryptography, and in particular multi-party secure computation, to set up a novel stock market structure that, to a large extent, removes the negative consequences of liquidity costs and periodic price discovery, as well as incentivizes investors to conduct research. Interestingly, the proposed market structure takes us back to the early days of stock markets, i.e. periodic call markets, but with the not-so-"trusted" auctioneer replaced by a decentralized set of parties where no individual party (or small coalition) gets to know the order book.
Last updated:  2022-04-12
Astrape: Anonymous Payment Channels with Boring Cryptography
Yuhao Dong, Ian Goldberg, Sergey Gorbunov, Raouf Boutaba
The increasing use of blockchain-based cryptocurrencies like Bitcoin has run into inherent scalability limitations of blockchains. Payment channel networks, or PCNs, promise to greatly increase scalability by conducting the vast majority of transactions outside the blockchain while leveraging it as a final settlement protocol. Unfortunately, first-generation PCNs have significant privacy flaws. In particular, even though transactions are conducted off-chain, anonymity guarantees are very weak. In this work, we present Astrape, a novel PCN construction that achieves strong security and anonymity guarantees with simple, black-box cryptography, given a blockchain with flexible scripting. Existing anonymous PCN constructions often integrate with specific, often custom-designed, cryptographic constructions. But at a slight cost to asymptotic performance, Astrape can use any generic public-key signature scheme and any secure hash function, modeled as a random oracle, to achieve strong anonymity, by using a unique construction reminiscent of onion routing. This allows Astrape to achieve provable security that is "generic" over the computational hardness assumptions of the underlying primitives. Astrape's simple cryptography also lends itself to more straightforward security proofs compared to existing systems. Furthermore, we evaluate Astrape's performance, including that of a concrete implementation on the Bitcoin Cash blockchain. We show that despite worse theoretical time complexity compared to state-of-the-art systems that use custom cryptography, Astrape operations on average have a very competitive performance of less than 10 milliseconds of computation and 1 KB of communication on commodity hardware. Astrape explores a new avenue to secure and anonymous PCNs that achieves similar or better performance compared to existing solutions.
Last updated:  2022-04-12
On End-to-End Encryption
Britta Hale, Chelsea Komlo
End-to-end encryption (E2EE) is vitally important to security and privacy online, yet currently under-defined. In this note, we map intuitive notions of end-to-end encryption to existing notions of encryption. In particular, we introduce endness as a notion which end-to-end systems must achieve in addition to traditional security notions associated with encryption, and provide formalizations to capture practical requirements. We demonstrate how the notion of encryption plus endness relates to a variety of case studies that either meet the normative security understanding of E2EE or are considered normative failures. Finally, we extend these observations to authentication and real-world authenticated channel use variants, including authenticated encryption with associated data and message franking.
Last updated:  2022-08-16
Attacks Against White-Box ECDSA and Discussion of Countermeasures - A Report on the WhibOx Contest 2021
Sven Bauer, Hermann Drexler, Maximilian Gebhardt, Dominik Klein, Friederike Laus, Johannes Mittmann
This paper deals with white-box implementations of the Elliptic Curve Digital Signature Algorithm (ECDSA). First, we consider attack paths to break such implementations. In particular, we provide a systematic overview of various fault attacks, to which ECDSA white-box implementations are especially susceptible. Then, we propose different mathematical countermeasures, mainly based on masking/blinding of sensitive variables, in order to prevent or at least make such attacks more difficult. We also briefly mention some typical implementational countermeasures and their challenges in the ECDSA white-box scenario. Our work was initiated by the CHES challenge WhibOx Contest 2021, which consisted of designing and breaking white-box ECDSA implementations, so-called challenges. We illustrate our results and findings by means of the submitted challenges and provide a comprehensive overview of which challenges could be solved in which way. Furthermore, we analyze selected challenges in more detail.
Last updated:  2022-04-12
Leveled Multikey FHE with constant-size ciphertexts from RLWE
Vanesa Daza, Paz Morillo, Sergi Rovira
A multi-key fully homomorphic encryption (MKFHE) scheme allows a public server to evaluate arbitrary circuits over ciphertexts encrypted under different keys. One of the main drawbacks of MKFHE schemes is the need for a ciphertext expansion procedure prior to evaluation, which combines ciphertexts encrypted under different keys to a (much larger) ciphertext encrypted under a concatenated key. In this paper, we present a new (leveled) RLWE-based MKFHE scheme without ciphertext expansion.
Last updated:  2022-04-12
Fast Side-Channel Key-Recovery Attack against Elephant Dumbo
Louis Vialar
In this paper, we present an efficient side-channel key recovery attack against Dumbo, the 160-bit variant of the NIST lightweight cryptography contest candidate Elephant. We use Correlation Power Analysis to attack the first round of the Spongent permutation during the absorption of the first block of associated data. The full attack runs in about a minute on a common laptop and only requires around 30 power traces to recover the entire secret key on an ARM Cortex-M4 microcontroller clocked at 7.4 MHz. This is, to the best of our knowledge, the first attack of this type presented against Elephant.
Last updated:  2022-04-12
TWAP Oracle Attacks: Easier Done than Said?
Torgin Mackinga, Tejaswi Nadahalli, Roger Wattenhofer
Blockchain "on-chain" oracles are critical to the functioning of many Decentralized Finance (DeFi) protocols. We analyze these oracles for manipulation resistance. Specifically, we analyze the cost of manipulating on-chain time-weighted average price (TWAP) oracles that use the arithmetic mean. It has been assumed that manipulating a TWAP oracle with the well-known multi-block attack is expensive and scales linearly with the length of the TWAP. We question this assumption with two novel results. First, we describe a single-block attack that works under the same setting as the multi-block attack but costs less to execute. Second, we describe a multi-block MEV (MMEV) style attack where the attacker colludes with a miner/proposer who can mine/propose two blocks in a row. This MMEV-style attack makes oracle manipulation orders of magnitude cheaper than previously known attacks. In the proof-of-work setting, MMEV can be carried out by selfish mining even with very low shares of hashpower.
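A toy calculation of why single-block manipulation of an arithmetic-mean TWAP was assumed to be expensive (my illustration of the averaging effect only, not the paper's single-block or MMEV analysis; prices and window length are arbitrary):

def twap(prices):
    # arithmetic-mean TWAP over one price observation per block
    return sum(prices) / len(prices)

k = 30                       # TWAP window length in blocks
true_price = 100.0
honest = [true_price] * k

# attacker pushes the spot price to `spot_manip` for exactly one block
spot_manip = 1000.0
manipulated = [true_price] * (k - 1) + [spot_manip]

shift = twap(manipulated) - twap(honest)
print(f"TWAP shift from one manipulated block: {shift:.2f}")
# equals (spot_manip - true_price) / k: to move the TWAP by some target amount,
# the spot price must be moved k times further, which is why the cost was
# assumed to grow with the window length k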
Last updated:  2022-04-12
A White-Box Speck Implementation using Self-Equivalence Encodings (Full Version)
Joachim Vandersmissen, Adrián Ranea, Bart Preneel
In 2002, Chow et al. initiated the formal study of white-box cryptography and introduced the CEJO framework. Since then, various white-box designs based on their framework have been proposed, all of them broken. Ranea and Preneel proposed a different method in 2020, called self-equivalence encodings and analyzed its security for AES. In this paper, we apply this method to generate the first academic white-box Speck implementations using self-equivalence encodings. Although we focus on Speck in this work, our design could easily be adapted to protect other add-rotate-xor (ARX) ciphers. Then, we analyze the security of our implementation against key-recovery attacks. We propose an algebraic attack to fully recover the master key and external encodings from a white-box Speck implementation, with limited effort required. While this result shows that the linear and affine self-equivalences of self-equivalence encodings are insecure, we hope that this negative result will spur additional research in higher-degree self-equivalence encodings for white-box cryptography. Finally, we created an open-source Python project implementing our design, publicly available at https://github.com/jvdsn/white-box-speck. We give an overview of five strategies to generate output code, which can be used to improve the performance of the white-box implementation. We compare these strategies and determine how to generate the most performant white-box Speck code. Furthermore, this project could be employed to test and compare the efficiency of attacks on white-box implementations using self-equivalence encodings.
Last updated:  2023-04-27
Attack on SHealS and HealS: the Second Wave of GPST
Steven D. Galbraith, Yi-Fu Lai
We cryptanalyse the isogeny-based public key encryption schemes SHealS and HealS, and the key exchange scheme HealSIDH of Fouotsa and Petit from Asiacrypt 2021.
Last updated:  2022-04-12
Quantum Attacks on PRFs Based on Public Random Permutations
Tingting Guo, Peng Wang, Lei Hu, Dingfeng Ye
We propose three general frameworks F1, F2, and F3 for n-to-n-bit PRFs built from one, two parallel, and two serial public permutation calls respectively, where every permutation is preceded and followed by arbitrary bitwise linear mappings. We analyze them in the Q2 model, where attackers have quantum-query access to the PRFs and permutations. Our results show that F1 is not secure against O(n) quantum queries, while its PRFs achieve n/2-bit security in the classical setting, and that F2 and F3 are not secure against O(2^{n/2}n) quantum queries, while their PRFs, such as SoEM, PDMMAC, and pEDM, achieve 2n/3-bit security in the classical setting. Besides, we attack three general instantiations XopEM, EDMEM, and EDMDEM of F2 and F3, which derive from replacing the two PRPs in Xop, EDM, and EDMD with two independent EM constructions, as well as the concrete PRF instantiations DS-SoEM, PDMMAC, pEDM, and SoKAC21 of F2 and F3, with at most O(2^{n/2}n) quantum queries.
Last updated:  2023-05-10
Two-Client Inner-Product Functional Encryption, with an Application to Money-Laundering Detection
Paola de Perthuis, David Pointcheval
In this paper, we extend Inner-Product Functional Encryption (IPFE), where there is just a vector in the key and a vector in the single sender's ciphertext, to two-client ciphertexts. More precisely, in our two-client functional encryption scheme, there are two data providers who can independently encrypt vectors $\mathbf{x}$ and $\mathbf{y}$ for a data consumer who can, from a functional decryption key associated to a vector $\mathbf{\alpha}$, compute $\sum \alpha_i x_i y_i = \mathbf{x} \cdot \mathsf{Diag}(\mathbf{\alpha}) \cdot \mathbf{y}^\top$. Ciphertexts are linear in the dimension of the vectors, whereas the functional decryption keys are of constant size. We study two interesting particular cases: - 2-party Inner-Product Functional Encryption, with $\mathbf{\alpha}= (1,\ldots,1)$. There is a unique functional decryption key, which enables the computation of $\mathbf{x}\cdot \mathbf{y}^\top$ by a third party, where $\mathbf{x}$ and $\mathbf{y}$ are provided by two independent clients; - Inner-Product Functional Encryption with a Selector, with $\mathbf{x}= \mathbf{x}_0 \| \mathbf{x}_1$ and $\mathbf{y}= \bar{b}^n \| b^n \in \{ 1^n \| 0^n, 0^n \| 1^n \}$, for some bit $b$, on the public coefficients $\mathbf{\alpha} = \mathbf{\alpha}_0 \| \mathbf{\alpha}_1$, in the functional decryption key, so that one gets $\mathbf{x}_b \cdot \mathbf{\alpha}_b^\top$, where $\mathbf{x}$ and $b$ are provided by two independent clients. This result is based on the fundamental Product-Preserving Lemma, which is of independent interest. It exploits Dual Pairing Vector Spaces (DPVS), with security proofs under the $\mathsf{SXDH}$ assumption. We provide two practical applications: to medical diagnosis for the latter IPFE with Selector, and to money-laundering detection for the former 2-party IPFE, both with strong privacy properties, with adaptive security and the use of labels granting Multi-Client Functional Encryption (MCFE) security to the scheme, thus enabling its use in practical situations.
Last updated:  2022-09-16
A Security Model for Randomization-based Protected Caches
Jordi Ribes-González, Oriol Farràs, Carles Hernández, Vatistas Kostalabros, Miquel Moretó
Cache side-channel attacks allow adversaries to learn sensitive information about co-running processes by using only access latency measurements and cache contention. This vulnerability has been shown to lead to several microarchitectural attacks. As a promising solution, recent work proposes Randomization-based Protected Caches (RPCs). RPCs randomize cache addresses, changing keys periodically so as to avoid long-term leakage. Unfortunately, recent attacks have called the security of state-of-the-art RPCs into question. In this work, we tackle the problem of formally defining and analyzing the security properties of RPCs. We first give security definitions against access-based cache side-channel attacks that capture security against known attacks such as Prime+Probe and Evict+Probe. Then, using these definitions, we obtain results that make it possible to guarantee security by adequately choosing the rekeying period, the key generation algorithm and the cache randomizer, thus providing security proofs for RPCs under certain assumptions.
Last updated:  2022-10-22
Efficient Multiplication of Somewhat Small Integers using Number-Theoretic Transforms
Hanno Becker, Vincent Hwang, Matthias J. Kannwischer, Lorenz Panny, Bo-Yin Yang
Conventional wisdom purports that FFT-based integer multiplication methods (such as the Schönhage-Strassen algorithm) begin to compete with Karatsuba and Toom-Cook only for integers of several tens of thousands of bits. In this work, we challenge this belief, leveraging recent advances in the implementation of number-theoretic transforms (NTT) stimulated by their use in post-quantum cryptography. We report on implementations of NTT-based integer arithmetic on two Arm Cortex-M CPUs on opposite ends of the performance spectrum: Cortex-M3 and Cortex-M55. Our results indicate that NTT-based multiplication is capable of outperforming the big-number arithmetic implementations of popular embedded cryptography libraries for integers as small as 2048 bits. To provide a realistic case study, we benchmark implementations of the RSA encryption and decryption operations. Our cycle counts on Cortex-M55 are about 10× lower than on Cortex-M3.
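A small, hedged Python sketch of the underlying idea follows: NTT-based multiplication of big integers over the NTT-friendly prime 998244353 with 8-bit limbs. This is a generic illustration of the technique, not the Cortex-M implementations benchmarked in the paper, and the example operands are arbitrary.

```python
# Illustrative NTT-based integer multiplication (not the paper's Arm code).
P = 998244353            # NTT-friendly prime: P - 1 is divisible by 2^23
G = 3                    # primitive root modulo P
BASE_BITS = 8            # small limbs so convolution terms cannot wrap mod P

def ntt(a, root):
    # Recursive radix-2 transform; `root` is a primitive len(a)-th root of unity.
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], root * root % P)
    odd = ntt(a[1::2], root * root % P)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        w = w * root % P
    return out

def multiply(x, y):
    # Split into base-2^8 limbs, convolve via NTT, then propagate carries.
    # Correct while (number of limbs) * 255^2 < P; larger inputs would need
    # several NTT primes combined via the CRT.
    ax = [(x >> (BASE_BITS * i)) & 0xFF for i in range(max(1, (x.bit_length() + 7) // 8))]
    ay = [(y >> (BASE_BITS * i)) & 0xFF for i in range(max(1, (y.bit_length() + 7) // 8))]
    n = 1
    while n < len(ax) + len(ay):
        n *= 2
    root = pow(G, (P - 1) // n, P)
    fx = ntt(ax + [0] * (n - len(ax)), root)
    fy = ntt(ay + [0] * (n - len(ay)), root)
    fz = [a * b % P for a, b in zip(fx, fy)]
    z = ntt(fz, pow(root, P - 2, P))          # inverse NTT (unnormalised)
    inv_n = pow(n, P - 2, P)
    result, carry = 0, 0
    for i, c in enumerate(z):
        digit = c * inv_n % P + carry
        carry = digit >> BASE_BITS
        result |= (digit & 0xFF) << (BASE_BITS * i)
    return result + (carry << (BASE_BITS * len(z)))

a, b = 123456789 ** 20, 987654321 ** 20       # ~600-bit example operands
assert multiply(a, b) == a * b
```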
Last updated:  2022-04-06
Computing isogenies between finite Drinfeld modules
Benjamin Wesolowski
We prove that isogenies between Drinfeld modules over a finite field can be computed in polynomial time. This breaks Drinfeld analogs of isogeny-based cryptosystems.
Last updated:  2022-11-02
Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures
Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan
We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem. Previously, this was known only under quantum worst-case hardness of lattice problems. More broadly, with our reductions between the two problems, any future developments to LWE will also apply to CLWE and its downstream applications. As a concrete application, we show an improved hardness result for density estimation for mixtures of Gaussians. In this computational problem, given sample access to a mixture of Gaussians, the goal is to output a function that estimates the density function of the mixture. Under the (plausible and widely believed) exponential hardness of the classical LWE problem, we show that Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$ Gaussian components given $\mathsf{poly}(n)$ samples requires time quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions. Our key technical tool is a reduction from classical LWE to LWE with $k$-sparse secrets where the multiplicative increase in the noise is only $O(\sqrt{k})$, independent of the ambient dimension $n$.
Last updated:  2023-05-16
Publicly Accountable Robust Multi-Party Computation
Marc Rivinius, Pascal Reisert, Daniel Rausch, Ralf Kuesters
In recent years, lattice-based secure multi-party computation (MPC) has seen a rise in popularity and is used more and more in large scale applications like privacy-preserving cloud computing, electronic voting, or auctions. Many of these applications come with the following high security requirements: a computation result should be publicly verifiable, with everyone being able to identify a malicious party and hold it accountable, and a malicious party should not be able to corrupt the computation, force a protocol restart, or block honest parties or an honest third-party (client) that provided private inputs from receiving a correct result. The protocol should guarantee verifiability and accountability even if all protocol parties are malicious. While some protocols address one or two of these often essential security features, we present the first publicly verifiable and accountable, and (up to a threshold) robust SPDZ-like MPC protocol without restart. We propose protocols for accountable and robust online, offline, and setup computations. We adapt and partly extend the lattice-based commitment scheme by Baum et al. (SCN 2018) as well as other primitives like ZKPs. For the underlying commitment scheme and the underlying BGV encryption scheme we determine ideal parameters. We give a performance evaluation of our protocols and compare them to state-of-the-art protocols both with and without our target security features: public accountability, public verifiability and robustness.
Last updated:  2024-02-21
Fiat-Shamir for Proofs Lacks a Proof Even in the Presence of Shared Entanglement
Frédéric Dupuis, Philippe Lamontagne, and Louis Salvail
We explore the cryptographic power of arbitrary shared physical resources. The most general such resource is access to a fresh entangled quantum state at the outset of each protocol execution. We call this the Common Reference Quantum State (CRQS) model, in analogy to the well-known Common Reference String (CRS). The CRQS model is a natural generalization of the CRS model but appears to be more powerful: in the two-party setting, a CRQS can sometimes exhibit properties associated with a Random Oracle queried once by measuring a maximally entangled state in one of many mutually unbiased bases. We formalize this notion as a Weak One-Time Random Oracle (WOTRO), where we only ask that the $m$-bit output have some randomness when conditioned on the $n$-bit input. We show that when $n-m\in\omega(\lg n)$, any protocol for WOTRO in the CRQS model can be attacked by an (inefficient) adversary. Moreover, our adversary is efficiently simulatable, which rules out the possibility of proving the computational security of a scheme by a fully-black-box reduction to a cryptographic game assumption. On the other hand, we introduce a non-game quantum assumption for hash functions that implies WOTRO in the CRQ\$ model (where the CRQS consists only of EPR pairs). We first build a statistically secure WOTRO protocol where $m=n$, then hash the output. The impossibility of WOTRO has the following consequences. First, we show the fully-black-box impossibility of a quantum Fiat-Shamir transform, extending the impossibility result of Bitansky et al. (TCC '13) to the CRQS model. Second, we show a black-box impossibility result for a strengthened version of quantum lightning (Zhandry, Eurocrypt '19) where quantum bolts have an additional parameter that cannot be changed without generating new bolts. Our results also apply to $2$-message protocols in the plain model.
Last updated:  2022-06-17
Verifiable Quantum Advantage without Structure
Takashi Yamakawa, Mark Zhandry
We show the following hold, unconditionally unless otherwise stated, relative to a random oracle with probability 1: - There are NP search problems solvable by BQP machines but not BPP machines. - There exist functions that are one-way, and even collision resistant, against classical adversaries but are easily inverted quantumly. Similar separations hold for digital signatures and CPA-secure public key encryption (the latter requiring the assumption of a classically CPA-secure encryption scheme). Interestingly, the separation does not necessarily extend to the case of other cryptographic objects such as PRGs. - There are unconditional publicly verifiable proofs of quantumness with the minimal rounds of interaction: for uniform adversaries, the proofs are non-interactive, whereas for non-uniform adversaries the proofs are two message public coin. - Our results do not appear to contradict the Aaronson-Ambanis conjecture. Assuming this conjecture, there exist publicly verifiable certifiable randomness, again with the minimal rounds of interaction. By replacing the random oracle with a concrete cryptographic hash function such as SHA2, we obtain plausible Minicrypt instantiations of the above results. Previous analogous results all required substantial structure, either in terms of highly structured oracles and/or algebraic assumptions in Cryptomania and beyond.
Last updated:  2023-07-26
McFly: Verifiable Encryption to the Future Made Practical
Nico Döttling, Lucjan Hanzlik, Bernardo Magri, Stella Wohnig
Blockchain protocols have revolutionized the way individuals and devices can interact and transact over the internet. More recently, a trend has emerged to harness blockchain technology as a catalyst to enable advanced security features in distributed applications, in particular fairness. However, the tools employed to achieve these security features are either resource-wasteful (e.g., time-lock primitives) or only efficient in theory (e.g., witness encryption). We present McFly, a protocol that allows one to efficiently ``encrypt a message to the future'' such that the receiver can decrypt the message almost effortlessly. Towards this goal, we design and implement a novel primitive we call signature-based witness encryption and combine it with a BFT blockchain (or a blockchain finality layer) in such a way that the decryption of the message can be piggybacked on the tasks already performed by the blockchain committee, resulting in almost-for-free decryption. To demonstrate the practicality of the McFly protocol, we implemented our signature-based witness encryption scheme and evaluated it on a standard laptop with an Intel i7 at 2.3 GHz. For the popular BLS12-381 curve, a $381$-bit message and a committee of size $500$, the encryption time is $9.8$ s and decryption takes $14.8$ s. The scheme remains practical for a committee of size $2000$, with an encryption time of $58$ s and a decryption time of $218$ s.
Last updated:  2022-04-06
Classical Verification of Quantum Computations in Linear Time
Jiayu Zhang
In the quantum computation verification problem, a quantum server wants to convince a client that the output of evaluating a quantum circuit $C$ is some result that it claims. This problem is considered very important both theoretically and practically in quantum computation [arXiv:1709.06984, 1704.04487, 1209.0449]. The client is considered to be limited in computational power, and one desirable property is that the client can be completely classical, which leads to the classical verification of quantum computation (CVQC) problem. In terms of the time complexity of server-side quantum computations (which typically dominate the total time complexity of both the client and the server), the fastest single-server CVQC protocol so far has complexity $O(poly(\kappa)|C|^3)$ where $|C|$ is the size of the circuit to be verified, given by Mahadev [arXiv:1804.01082]. This leads to a similar cubic time blowup in many existing protocols including multiparty quantum computation, zero knowledge and obfuscation [ia.cr/2021/964, arXiv:1902.05217, 2106.06094, 1912.00990, 2012.04848, 1911.08101]. Considering the preciousness of quantum computation resources, this cubic complexity barrier could be a big obstacle for taking protocols for these problems into practice. In this work, by developing new techniques, we give a new CVQC protocol with complexity $O(poly(\kappa)|C|)$ (in terms of the total time complexity of both the client and the server), which is significantly faster than existing protocols. Our protocol is secure in the quantum random oracle model [arXiv:1008.0931] assuming the existence of noisy trapdoor claw-free functions [arXiv:1804.00640], which are both extensively used assumptions in quantum cryptography. Along the way, we also give a new classical channel remote state preparation protocol for states in $\{|+_\theta\rangle=\frac{1}{\sqrt{2}}(|0\rangle+e^{i\theta\pi/4}|1\rangle):\theta\in \{0,1\cdots 7\}\}$, another basic primitive in quantum cryptography. Our protocol allows for parallel verifiable preparation of $L$ independently random states in this form (up to a constant overall error and a possibly unbounded server-side isometry), and runs in only $O(poly(\kappa)L)$ time and constant rounds; for comparison, existing works (even for possibly simpler state families) all require very large or unestimated time and round complexities [arXiv:1904.06320, 1904.06303, 2201.13445, 2201.13430].
Last updated:  2023-02-28
Non-Adaptive Universal One-Way Hash Functions from Arbitrary One-Way Functions
Xinyu Mao, Noam Mazor, Jiapeng Zhang
In this work we give the first non-adaptive construction of universal one-way hash functions (UOWHFs) from arbitrary one-way functions. Our construction uses $O(n^9)$ calls to the one-way function, has a key of length $O(n^{10})$, and can be implemented in NC1 assuming the underlying one-way function is in NC1. Prior to this work, the best UOWHF construction used $O(n^{13})$ adaptive calls and a key of size $O(n^5)$ (Haitner, Holenstein, Reingold, Vadhan and Wee [Eurocrypt ’10]). By the result of Applebaum, Ishai and Kushilevitz [FOCS ’04], the above implies the existence of UOWHFs in NC0, given the existence of one-way functions in NC1. We also show that the PRG construction of Haitner, Reingold and Vadhan (HRV, [STOC ’10]), with small modifications, yields a relaxed notion of UOWHFs, i.e., a function family that can be (inefficiently) converted into a UOWHF by changing the functions on a negligible fraction of the inputs. In order to analyze this construction, we introduce the notion of next-bit unreachable entropy, which replaces the next-bit pseudoentropy notion used by HRV.
Last updated:  2024-02-13
Is the JCJ voting system really coercion-resistant?
Véronique Cortier, Pierrick Gaudry, and Quentin Yang
Coercion-resistance is a security property of electronic voting, often considered as a must-have for high-stake elections. The JCJ voting scheme, proposed in 2005 by Juels, Catalano and Jakobsson, is still the reference paradigm when designing a coercion-resistant protocol. We highlight a weakness in JCJ that is also present in all the systems following its general structure. This comes from the procedure that precedes the tally, where the trustees remove the ballots that should not be counted. This phase leaks more information than necessary, leading to potential threats for the coerced voters. Fixing this leads to the notion of cleansing-hiding, that we apply to form a variant of JCJ that we call CHide. One reason for the problem not being seen before is the fact that the associated formal definition of coercion-resistance was too weak. We therefore propose a definition that takes into account more behaviors such as revoting or the addition of fake ballots by authorities. We then prove that CHide is coercion-resistant for this definition.
Last updated:  2022-09-15
Resurrecting Xifrat - Compact Cryptosystems 2nd Attempt
Jianfang "Danny" Niu
Xifrat was a group-theoretic public-key cryptosystem based on a quasigroup with the special property of "restricted-commutativity". It was broken within half a month of its publication, due to a mistake made in the "mixing" function. In this paper, we revisit the design decisions made, proposing new constructions, and attempt (again) to build secure digital signature schemes and key encapsulation mechanisms. If the schemes can be proven secure, then this will be the most compact and the most efficient post-quantum cryptosystem ever proposed to date.
Last updated:  2022-08-16
Implicit White-Box Implementations: White-Boxing ARX Ciphers
Adrián Ranea, Joachim Vandersmissen, Bart Preneel
Since the first white-box implementation of AES was published twenty years ago, no significant progress has been made in the design of secure implementations against an attacker with full control of the device. Designing white-box implementations of existing block ciphers is a challenging problem, as all proposals have been broken. Only two white-box design strategies have been published thus far: the CEJO framework, which can only be applied to ciphers with small S-boxes, and self-equivalence encodings, which were only applied to AES. In this work we propose implicit implementations, a new design of white-box implementations based on implicit functions, and we show that current generic attacks that break CEJO or self-equivalence implementations are not successful against implicit implementations. The generation and the security of implicit implementations are related to the self-equivalences of the non-linear layer of the cipher, and we propose a new method to obtain self-equivalences based on CCZ-equivalence. We implemented this method and many other functionalities in a new open-source tool BoolCrypt, which we used to obtain for the first time affine, linear, and even quadratic self-equivalences of the permuted modular addition. Using the implicit framework and these self-equivalences, we describe for the first time a practical white-box implementation of a generic Addition-Rotation-XOR (ARX) cipher, and we provide an open-source tool to easily generate implicit implementations of ARX ciphers.
Last updated:  2022-04-06
Constant Size Secret Sharing: with General Thresholds, Towards Standard Assumptions, and Applications
Katarzyna Kapusta, Matthieu Rambaud, Ferdinand Sibleyras
We consider threshold Computational Secret Sharing Schemes, i.e., such that the secret can be recovered from any $t+1$ out of $n$ shares, and such that no computationally bounded adversary can distinguish between $t$ shares of a chosen secret and a uniform string. We say that such a scheme has Constant Size (CSSS) if, in the asymptotic regime of many shares of small size (the security parameter), the total size of the shares reaches the minimum, which is the size of an erasures-correction encoding of the secret with the same threshold. But all CSSS so far support only the maximum threshold, i.e., $t=n-1$. They are known as All Or Nothing Transforms (AONT). On the other hand, for arbitrary thresholds $t<n-1$, the shortest scheme known so far is [Kra93, Crypto], which instead has twice the size in the previous regime, due to a size overhead of $n$ times the security parameter. The other limitation of known CSSS is that they require a number of calls to idealized primitives which grows linearly with the size of the secret. Our first contribution is to show that the CSSS of [Des00, Crypto], which holds under the ideal cipher assumption, loses its privacy when instantiated with a plain pseudorandom permutation. Our main contribution is a scheme which is the first CSSS for any threshold $t$ and, furthermore, the first whose security holds under any plain pseudorandom function, with the only idealized assumption being in the key-derivation function. It is based on the possibly new observation that the scheme of [Des00] can be seen as an additive secret-sharing of an encryption key, using the ciphertext itself as a source of randomness. A variation of our construction improves upon known schemes, which we denote as Encryption into Shares with Resilience against Key exposure (ESKE), having the property that all ciphertext blocks are needed to obtain any information, even when the key is leaked. We obtain the first ESKE with arbitrary threshold $t$ and constant size, furthermore in one pass of encryption. Also, for the first time, the only idealized assumption is in the key derivation. Then, we demonstrate how to establish fast revocable storage on an untrusted server from any black-box ESKE. Instantiated with our ESKE, encryption and decryption both require only $1$ pass of symmetric primitives under standard assumptions (except the key derivation), compared to at least $2$ consecutive passes in [MS18, CT-RSA] and more in [Bac+16, CCS]. We finally bridge the gap between two conflicting specifications of AONT in the literature: one very similar to CSSS, which has indistinguishability, and one which does not.
Last updated:  2022-04-06
Spectre Declassified: Reading from the Right Place at the Wrong Time
Basavesh Ammanaghatta Shivakumar, Jack Barnes, Gilles Barthe, Sunjay Cauligi, Chitchanok Chuengsatiansup, Daniel Genkin, Sioli O'Connell, Peter Schwabe, Rui Qi Sim, Yuval Yarom
Practical information-flow programming languages commonly allow controlled leakage via a “declassify” construct—programmers can use this construct to declare intentional leakage. For instance, cryptographic signatures and ciphertexts, which are computed from private keys, are viewed as secret by information-flow analyses. Cryptographic libraries can use declassify to make this data public, as it is no longer sensitive. In this paper, we study the impact of speculative execution in practical information-flow programming languages. First, we show that speculative execution leads to unintended leakage that violates the programmer’s intent. Concretely, we present a PoC that recovers the AES key of an implementation of AES written in FaCT, a domain-specific language for constant-time programming. Our PoC is an instance of a Spectre-PHT attack; interestingly, it remains effective even if the program is compiled with Speculative Load Hardening (SLH), a compiler-based countermeasure against Spectre-PHT. Second, we propose compiler-based countermeasures for protecting programs against leakage, and show that these countermeasures achieve relative non-interference: informally, speculative leakage of the transformed programs must correspond to sequential leakage of the original programs. One of our countermeasures is a new transformation of independent interest called selective speculative load hardening (selSLH). SelSLH optimizes SLH as implemented by the LLVM compiler, reducing the number of inserted mitigations. Third, we implement one of our countermeasures in the FaCT compiler and evaluate performance overhead for core cryptographic routines from several open-source projects. The results indicate a moderate overhead. Although we do not implement selSLH, we carry out a preliminary evaluation which suggests a significant gain over SLH for cryptographic implementations.
Last updated:  2023-02-07
SoK: New Insights into Fully Homomorphic Encryption Libraries via Standardized Benchmarks
Charles Gouert, Dimitris Mouris, Nektarios Georgios Tsoutsos
Fully homomorphic encryption (FHE) enables arbitrary computation on encrypted data, allowing users to upload ciphertexts to cloud servers for computation while mitigating privacy risks. Many cryptographic schemes fall under the umbrella of FHE, and each scheme has several open-source implementations with its own strengths and weaknesses. Nevertheless, developers have no straightforward way to choose which FHE scheme and implementation is best suited for their application needs, especially considering that each scheme offers different security, performance, and usability guarantees. To allow programmers to effectively utilize the power of FHE, we employ a series of benchmarks called the Terminator 2 Benchmark Suite and present new insights gained from running these algorithms with a variety of FHE back-ends. Contrary to generic benchmarks that do not take into consideration the inherent challenges of encrypted computation, our methodology is tailored to the secure computational primitives of each target FHE implementation. To ensure fair comparisons, we developed a versatile compiler (called T2) that converts arbitrary benchmarks written in a domain-specific language into identical encrypted programs running on different popular FHE libraries as a backend. Our analysis exposes for the first time the advantages and disadvantages of each FHE library as well as the types of applications most suited for each computational domain (i.e., binary, integer, and floating-point).
Last updated:  2022-04-06
Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2
Dor Amzaleg, Itai Dinur
At EUROCRYPT 2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in time $2^{40}$ using $44$ GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security. While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given less consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block). In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2. For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13} = 8192$ from $44$ GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in average time of 2.5 hours on a modern laptop. For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack in the range of (roughly) $\ell \leq 7000$. Specifically, for $\ell = 1100$ the complexity of our attack is about $2^{54}$, while the previous one is not faster than the $2^{64}$ brute force complexity. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of $512$ from 32 GiB to 64 MiB with no time complexity penalty. Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).
Last updated:  2022-04-06
Polynomial Approximation of Inverse sqrt Function for FHE
Samanvaya Panda
The inverse sqrt and sqrt functions have numerous applications in linear algebra and machine learning, such as vector normalisation, eigenvalue computation, dimensionality reduction, clustering, etc. This paper presents a method to approximate and securely evaluate the inverse sqrt function using the CKKS homomorphic encryption scheme. Since the CKKS scheme allows only the computation of polynomial functions, we propose a method to approximate the inverse sqrt function polynomially. Finally, we provide an implementation of our method for the inverse sqrt function.
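One standard way to approximate $1/\sqrt{x}$ using only additions and multiplications (and hence only operations a CKKS-style scheme can perform) is Newton's iteration $y_{k+1} = \tfrac{1}{2}y_k(3 - x y_k^2)$. The sketch below shows the plaintext iteration as a generic illustration of this approach; it is not necessarily the paper's exact polynomial, and the input range and initial guess are assumptions.

```python
# Plaintext Newton iteration for 1/sqrt(x) using only + and *,
# i.e. the kind of polynomial evaluation CKKS can perform homomorphically.
def inv_sqrt_newton(x, y0, iterations=6):
    y = y0                      # initial guess; must be close enough to converge
    for _ in range(iterations):
        y = 0.5 * y * (3.0 - x * y * y)
    return y

x = 0.37                        # example input, assumed scaled into a known range
approx = inv_sqrt_newton(x, y0=1.0)
print(approx, x ** -0.5)        # both close to 1.6440...
```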
Last updated:  2023-10-16
Verifiable Mix-Nets and Distributed Decryption for Voting from Lattice-Based Assumptions
Diego F. Aranha, Carsten Baum, Kristian Gjøsteen, and Tjerand Silde
Cryptographic voting protocols have recently seen much interest from practitioners due to their (planned) use in countries such as Estonia, Switzerland, France, and Australia. Practical protocols usually rely on tested designs such as the mixing-and-decryption paradigm. There, multiple servers verifiably shuffle encrypted ballots, which are then decrypted in a distributed manner. While several efficient protocols implementing this paradigm exist from discrete log-type assumptions, the situation is less clear for post-quantum alternatives such as lattices. This is because the design ideas of the discrete log-based voting protocols do not carry over easily to the lattice setting, due to specific problems such as noise growth and approximate relations. This work proposes a new verifiable secret shuffle for BGV ciphertexts and a compatible verifiable distributed decryption protocol. The shuffle is based on an extension of a shuffle of commitments to known values which is combined with an amortized proof of correct re-randomization. The verifiable distributed decryption protocol uses noise drowning, proving the correctness of decryption steps in zero-knowledge. Both primitives are then used to instantiate the mixing-and-decryption electronic voting paradigm from lattice-based assumptions. We give concrete parameters for our system, estimate the size of each component and provide implementations of all important sub-protocols. Our experiments show that the shuffle and decryption protocol is suitable for use in real-world e-voting schemes.
Last updated:  2022-04-29
Multiverse of HawkNess: A Universally-Composable MPC-based Hawk Variant
Aritra Banerjee, Hitesh Tewari
The evolution of smart contracts in recent years inspired a crucial question: Do smart contract evaluation protocols provide the required level of privacy when executing contracts on the Blockchain? The Hawk (IEEE S&P '16) paper introduces a way to solve the problem of privacy in smart contracts by evaluating the contracts off-chain, albeit with the trust assumption of a manager. To avoid the partially trusted manager altogether, a novel approach named zkHawk (IEEE BRAINS '21) explains how we can evaluate the contracts privately off-chain using a multi-party computation (MPC) protocol instead of trusting said manager. This paper dives deeper into the detailed construction of a variant of the zkHawk protocol titled V-zkHawk, using formal proofs to construct the said protocol and model its security in the universal composability (UC) framework (FOCS '01). The V-zkHawk protocol discussed here does not support immediate closure, i.e., all the parties ($n$) have to send a message to inform the blockchain that the contract has been executed, with corruption allowed for up to $t$ parties, where $t<n$. In the most quintessential sense, V-zkHawk is a variant because the outcome of the protocol is similar (i.e., execution of a smart contract via an MPC function evaluation) to zkHawk, but we modify key aspects of the protocol, essentially creating a small trade-off (removing immediate closure) to provide UC (stronger) security. The V-zkHawk protocol leverages joint Schnorr signature schemes, encryption schemes, Non-Interactive Zero-Knowledge Proofs (NIZKs), and commitment schemes with Common Reference String (CRS) assumptions, MPC function evaluations, and assumes the existence of asynchronous, authenticated broadcast channels. We achieve malicious security in a dishonest majority setting in the UC framework.
Last updated:  2022-04-06
Gemini: Elastic SNARKs for Diverse Environments
Jonathan Bootle, Alessandro Chiesa, Yuncong Hu, Michele Orrù
We introduce and study elastic SNARKs, a class of succinct arguments where the prover has multiple configurations with different time and memory tradeoffs, which can be selected depending on the execution environment and the proved statement. The output proof is independent of the chosen configuration. We construct an elastic SNARK for rank-1 constraint satisfiability (R1CS). In a time-efficient configuration, the prover uses a linear number of cryptographic operations and a linear amount of memory. In a space-efficient configuration, the prover uses a quasilinear number of cryptographic operations and a logarithmic amount of memory. A key component of our construction is an elastic probabilistic proof. Along the way, we also formulate a streaming framework for R1CS that we deem of independent interest. We additionally contribute Gemini, a Rust implementation of our protocol. Our benchmarks show that Gemini, on a single machine, supports R1CS instances with tens of billions of constraints.
Last updated:  2022-07-01
Dew: Transparent Constant-sized zkSNARKs
Arasu Arun, Chaya Ganesh, Satya Lokam, Tushar Mopuri, Sriram Sridhar
We construct polynomial commitment schemes with constant sized evaluation proofs and logarithmic verification time in the transparent setting. To the best of our knowledge, this is the first result achieving this combination of properties. Our starting point is a transparent inner product commitment scheme with constant-sized proofs and linear verification. We build on this to construct a polynomial commitment scheme with constant size evaluation proofs and logarithmic (in the degree of the polynomial) verification time. Our constructions make use of groups of unknown order instantiated by class groups. We prove security of our construction in the Generic Group Model (GGM). Using our polynomial commitment scheme to compile an information-theoretic proof system yields Dew -- a transparent and constant-sized zkSNARK (Zero-knowledge Succinct Non-interactive ARguments of Knowledge) with logarithmic verification. Finally, we show how to recover the result of DARK (Bünz et al., Eurocrypt 2020). DARK presented a succinct transparent polynomial commitment scheme with logarithmic proof size and verification. However, it was recently discovered to have a gap in its security proof (Block et al, CRYPTO 2021). We recover its extractability based on our polynomial commitment construction, thus obtaining a transparent polynomial commitment scheme with logarithmic proof size and verification under the same assumptions as DARK, but with a prover time that is quadratic.
Last updated:  2022-04-06
LLTI: Low-Latency Threshold Implementations
Victor Arribas, Zhenda Zhang, Svetla Nikova
With the enormous increase in portable cryptographic devices, physical attacks are becoming similarly popular. One of the most common physical attacks is Side-Channel Analysis (SCA), extremely dangerous due to its non-invasive nature. Threshold Implementations (TI) was proposed as the first countermeasure to provide provable security in masked hardware implementations. While most works on hardware masking are focused on optimizing the area requirements, with the newer and smaller technologies area is taking a backseat, and low latency is gaining importance. In this work, we revisit the scheme proposed by Arribas et al. in TCHES 2018 to secure unrolled implementations. We formalize and expand this methodology to devise a masking scheme, derived from TI, designed to secure hardware implementations optimized for latency, named Low-Latency Threshold Implementations (LLTI). By applying the distributive property and leveraging a divide-and-conquer strategy, we split a non-linear operation into layers which are masked separately. The result is a more efficient scheme than the former TI for any operation of algebraic degree greater than two, achieving great optimizations both in terms of speed and area. We compare the performance of first-order LLTI with first-order TI in securing a cubic gate and a degree-7 AND gate without using any registers in between. We achieve a 137% increase in maximum frequency and a 60% reduction in area for the cubic gate, and a 3131 times reduction in area in the case of a degree-7 AND gate compared to TI. To further illustrate the power of our scheme we take a low-latency PRINCE implementation from the literature and, by simply changing the secure S-box with the LLTI version, we achieve a 46% max. frequency improvement and a 38% area reduction. Moreover, we apply LLTI to secure a low-latency AES implementation and compare it with the TI version, achieving a 6.9 times max. freq. increase and a 47.2% area reduction.
Last updated:  2022-04-06
Efficient, Actively Secure MPC with a Dishonest Majority: a Survey
Emmanuela Orsini
The last ten years have seen a tremendous growth in the interest and practicality of secure multiparty computation (MPC) and its possible applications. Secure MPC is indeed a very hot research topic and recent advances in the field have already been translated into commercial products world-wide. A major pillar in this advance has been in the case of active security with a dishonest majority, mainly due to the SPDZ line of protocols. This survey gives an overview of these protocols, with a focus on the original SPDZ paper (CRYPTO 2012) and its subsequent optimizations. It also covers some alternative approaches based on oblivious transfer, oblivious linear-function evaluation, and constant-round protocols.
Last updated:  2022-04-04
Post-Quantum ID-based Ring Signatures from Symmetric-key Primitives
Maxime Buser, Joseph K. Liu, Ron Steinfeld, Amin Sakzad
Ring signatures and ID-based cryptography are considered promising in terms of application. A ring signature authenticates messages while the author of the message remains anonymous. ID-based cryptographic primitives suppress the need for certificates in public key infrastructures (PKI). In this work, we propose a generic construction for post-quantum ID-based ring signatures (IDRS) based on symmetric-key primitives from which we derive the first two constructions of IDRS. The first construction named PicRS utilizes the Picnic digital signature to ensure its security while the second construction XRS is motivated by the stateful digital signature XMSS instead of Picnic, allowing a signature size reduction. Both constructions have a competitive signature size when compared with state-of-the-art lattice-based IDRS. XRS can achieve a competitive signature size of 889KB for a ring of 4096 users while the fully stateless PicRS achieves a signature size of 1.900MB for a ring of 4096 users. In contrast, the shortest lattice-based IDRS achieves a signature size of 335MB for the same ring size.
Last updated:  2022-04-04
Efficient and Tight Oblivious Transfer from PKE with Tight Multi-User Security
Saikrishna Badrinarayanan, Daniel Masny, Pratyay Mukherjee
We propose an efficient oblivious transfer in the random oracle model based on public key encryption with pseudorandom public keys. The construction is as efficient as the state of the art, though it has a significant advantage: it has a tight security reduction to the multi-user security of the underlying public key encryption. In previous constructions, the security reduction has a multiplicative loss that is at least the number of adversarial random oracle queries. When considering this loss for a secure parameter choice, the underlying public key encryption or elliptic curve would require a significantly higher security level, which would decrease the overall efficiency. Our OT construction can be instantiated from a wide range of assumptions, such as DDH, LWE, or code-based assumptions, as well as many public key encryption schemes, such as the NIST PQC finalists. Since tight multi-user security is a very natural requirement that many public key encryption schemes satisfy, many public key encryption schemes can be straightforwardly plugged into our construction without the need to reevaluate or adapt any parameter choices.
Last updated:  2022-11-05
PQ-HPKE: Post-Quantum Hybrid Public Key Encryption
Mila Anastasova, Panos Kampanakis, Jake Massimo
Public key cryptography is used to asymmetrically establish keys, authenticate or encrypt data between communicating parties at a relatively high performance cost. To reduce computational overhead, modern network protocols combine asymmetric primitives for key establishment and authentication with symmetric ones. Similarly, Hybrid Public Key Encryption, a relatively new scheme, uses public key cryptography for key derivation and symmetric key cryptography for data encryption. In this paper, we present the first quantum-resistant implementation of HPKE to address concerns that quantum computers bring to asymmetric algorithms. We propose PQ-only and PQ-hybrid HPKE variants and analyze their performance for two post-quantum key encapsulation mechanisms and various plaintext sizes. We compare these variants with RSA and classical HPKE and show that the additional post-quantum overhead is amortized over the plaintext size. Our PQ-hybrid variant with a lattice-based KEM shows an overhead of 52% for 1KB of encrypted data which is reduced to 17% for 1MB of plaintext. We report 1.83, 1.78, and 2.15 x10^6 clock cycles needed for encrypting 1MB of message based on classical, PQ-only, and PQ-hybrid HPKE respectively, where we note that the cost of introducing quantum-resistance to HPKE is relatively low.
Last updated:  2022-03-31
Instachain: Breaking the Sharding Limits via Adjustable Quorums
Mustafa Safa Ozdayi, Yue Guo, Mahdi Zamani
Sharding is a key approach to scaling the performance of on-chain transactions: the network is randomly partitioned into smaller groups of validator nodes, known as \textit{shards}, each growing a disjoint ledger of transactions via state-machine replication (SMR) in parallel to other shards. As SMR protocols typically incur a quadratic message complexity in the network size, shards can process transactions significantly faster than the entire network. On the downside, shards cannot be made arbitrarily small due to the exponentially-increasing probability that malicious nodes take over a sufficient majority in small shards, compromising the security of SMR. In practice, this dictates relatively large shards with hundreds of nodes each, significantly limiting the scalability benefits of sharding. In this paper, we propose \textit{Instachain}, a novel sharding approach that breaks the scalability limits of sharding by reducing the shard size to significantly-smaller numbers than was previously considered possible. We achieve this by relaxing the liveness property for some of the shards while still preserving the safety property across all shards. To do this, we carefully adjust the quorum size parameter of the intra-shard SMR protocol to achieve maximum parallelism across all shards without compromising security. In addition, Instachain is the first sharding protocol to adopt the stateless blockchain model in shards, which in conjunction with a novel cross-shard verification technique allows the protocol to efficiently prevent double-spending attempts across significantly-more shards than previous work.
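To make the "exponentially-increasing probability that malicious nodes take over small shards" concrete, the following hedged Python sketch computes the hypergeometric probability that a randomly sampled shard contains at least a given fraction of malicious nodes. The network size, adversarial fraction, and intra-shard threshold are hypothetical example parameters, not values from the paper.

```python
from math import comb

def shard_takeover_probability(n, f, shard_size, threshold):
    """Probability that a uniformly sampled shard of `shard_size` nodes,
    drawn from `n` nodes of which `f` are malicious, contains at least
    `threshold` malicious nodes (hypergeometric tail)."""
    total = comb(n, shard_size)
    bad = sum(comb(f, k) * comb(n - f, shard_size - k)
              for k in range(threshold, shard_size + 1))
    return bad / total

# Hypothetical example: 2000 nodes, 25% malicious, >1/3 intra-shard threshold.
for s in (50, 100, 200, 400):
    p = shard_takeover_probability(2000, 500, s, s // 3 + 1)
    print(f"shard size {s:4d}: takeover probability ~ {p:.2e}")
```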
Last updated:  2022-09-05
Complete and Improved FPGA Implementation of Classic McEliece
Po-Jen Chen, Tung Chou, Sanjay Deshpande, Norman Lahr, Ruben Niederhagen, Jakub Szefer, Wen Wang
We present the first specification-compliant constant-time FPGA implementation of the Classic McEliece cryptosystem from the third round of NIST's Post-Quantum Cryptography standardization process. In particular, we present the first complete implementation including encapsulation and decapsulation modules as well as key generation with seed expansion. All the hardware modules are parametrizable, at compile time, with security level and performance parameters. As the most time-consuming operation of Classic McEliece is the systemization of the public key matrix during key generation, we present and evaluate three new algorithms that can be used for systemization while complying with the specification: hybrid early-abort systemizer (HEA), single-pass early-abort systemizer (SPEA), and dual-pass early-abort systemizer (DPEA). All of the designs outperform the prior systemizer designs for Classic McEliece by 2.2x to 2.6x in average runtime and by 1.7x to 2.4x in time-area efficiency. We show that our complete Classic McEliece design for example can perform key generation in 5.2-20ms, encapsulation in 0.1-0.5ms, and decapsulation in 0.7-1.5ms for all security levels on a Xilinx Artix 7 FPGA. The performance can be increased even further at the cost of resources by increasing the level of parallelization using the performance parameters of our design.
Last updated:  2022-04-08
Quotient Approximation Modular Reduction
Aurélien Greuet, Simon Montoya, Clémence Vermeersch
Modular reduction is a core operation in public-key cryptography. While a standard modular reduction is often required, a partial reduction limiting the growth of the coefficients is enough for several use cases. Knowing the quotient of the Euclidean division of an integer by the modulus allows one to easily recover the remainder. We propose a way to compute efficiently, without divisions, an approximation of this quotient. From this approximation, both full and partial reductions are deduced. The resulting algorithms are modulus-specific: the sequence of operations to perform in order to get a reduction depends on the modulus and the size of the input. We analyse the cost of our algorithms for a use case coming from post-quantum cryptography. We show that with this modulus, on a CPU with a slow multiplication, our method gives an algorithm faster than prior art algorithms.
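A classical instance of recovering the remainder from an approximated quotient, without divisions, is Barrett reduction. The sketch below illustrates that general idea in Python; it is not the modulus-specific sequence of operations proposed in the paper, and the modulus 3329 is only a familiar example.

```python
# Barrett-style reduction: approximate the quotient q = floor(a / m) using a
# precomputed reciprocal and shifts, then recover the remainder.
# (Generic illustration of quotient approximation, not the paper's algorithm.)
def barrett_setup(m, k):
    # Requires m < 2**k; inputs must satisfy a < 2**(2*k).
    return (1 << (2 * k)) // m

def barrett_reduce(a, m, k, mu):
    q = (a * mu) >> (2 * k)      # underestimates floor(a / m) by at most 2
    r = a - q * m                # candidate remainder, r < 3 * m
    while r >= m:                # at most two correction subtractions
        r -= m
    return r

m = 3329                         # example modulus (the Kyber prime, as an illustration)
k = 12                           # 2**12 > 3329
mu = barrett_setup(m, k)
for a in (0, 1, 3328, 123456, (1 << 24) - 1):
    assert barrett_reduce(a, m, k, mu) == a % m
```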
Last updated:  2022-03-31
Enhancing AES Using Chaos and Logistic Map-Based Key Generation Technique for Securing IoT-Based Smart Home
Ziaur Rahman, Xun Yi, Mustain Billah, Mousumi Sumi, Adnan Anwar
The Internet of Things (IoT) has brought new ways for humans and machines to communicate with each other over the internet. Though sensor-driven devices have largely eased our everyday lives, most IoT infrastructures have been suffering from security challenges. Since the emergence of IoT, lightweight block ciphers have been a better option for intelligent and sensor-based applications. While public-key infrastructure dominates worldwide, symmetric-key encipherment such as the Advanced Encryption Standard (AES) shows immense promise for smart-home IoT appliances. As investigated, the chaos-motivated logistic map shows enormous potential to secure IoT-aligned real-time data communication. The unpredictability and randomness features of the logistic map, in sync with chaos-based scheduling techniques, can pave the way to build a dynamic key propagation technique for data confidentiality, availability and integrity. Motivated by the security prospects of AES and chaos cryptography, the paper illustrates a key scheduling technique using a 3-dimensional S-box (substitution-box). The logistic map algorithm has been incorporated to enhance security. The proposed approach has applicability for lightweight IoT devices such as smart home appliances. The work determines how seeming chaos accelerates the desired key initiation before message transmission. The proposed model is evaluated based on the key generation delay required for the smart-home sensor devices.
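A minimal sketch of the logistic map used as a byte-stream source for key material follows; it is purely illustrative, does not reproduce the paper's 3-dimensional S-box or scheduling details, and the parameter and seed values are hypothetical.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in the chaotic regime, used here
# to derive a stream of key bytes. Illustration only: floating-point chaos on
# its own is NOT a substitute for a vetted key-derivation function.
def logistic_key_bytes(seed, r=3.99, n_bytes=16, burn_in=100):
    x = seed
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

print(logistic_key_bytes(seed=0.613).hex())   # example 128-bit key material
```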
Last updated:  2023-11-22
Proof-of-Stake Is a Defective Mechanism
Vicent Sus
Proof-of-Stake (PoS) algorithms, implemented as foundational components of the consensus mechanism of distributed ledgers, are defective cryptosystems by nature. This paper presents intuitive arguments for why PoS, by trying to improve the energy efficiency of Proof-of-Work (PoW) when implemented as a Sybil control mechanism in distributed ledgers, introduces a set of significant new flaws. Such systems are plutocratic, oligopolistic, and permissioned.
Last updated:  2022-04-29
On the weightwise nonlinearity of weightwise perfectly balanced functions
Agnese Gini, Pierrick Méaux
In this article we perform a general study on the criterion of weightwise nonlinearity for the functions which are weightwise perfectly balanced (WPB). First, we investigate the minimal value this criterion can take over WPB functions, deriving theoretic bounds, and exhibiting the first values. We emphasize the link between this minimum and weightwise affine functions, and we prove that for $n\ge 8$ no $n$-variable WPB function can have this property. Then, we focus on the distribution and the maximum of this criterion over the set of WPB functions. We provide theoretic bounds on the latter and algorithms to either compute or estimate the former, together with the results of our experimental studies for $n$ up to $8$. Finally, we present two new constructions of WPB functions obtained by modifying the support of linear functions for each set of fixed Hamming weight. This provides a large corpus of WPB functions with proven weightwise nonlinearity, and we compare the weightwise nonlinearity of these constructions to the average value, and to the parameters of former constructions in $8$ and $16$ variables.
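For reference, a small Python checker for the weightwise perfectly balanced property is sketched below, under the definition usually used in this line of work as we understand it: $f(0,\ldots,0)=0$, $f(1,\ldots,1)=1$, and $f$ is balanced on every slice of Hamming weight $k$ with $0<k<n$ (with $n$ a power of two). The example function is an arbitrary hypothetical one, not one of the paper's constructions.

```python
from itertools import combinations

def is_wpb(f, n):
    """Check the weightwise perfectly balanced property (as usually defined):
    f(0^n) = 0, f(1^n) = 1, and f is balanced on every slice of Hamming
    weight k for 0 < k < n. `f` maps an n-bit tuple to 0 or 1."""
    if f((0,) * n) != 0 or f((1,) * n) != 1:
        return False
    for k in range(1, n):
        ones = size = 0
        for support in combinations(range(n), k):
            x = tuple(1 if i in support else 0 for i in range(n))
            ones += f(x)
            size += 1
        if 2 * ones != size:
            return False
    return True

# Hypothetical 4-variable example (an arbitrary function, very likely not WPB).
def f_example(x):
    x0, x1, x2, x3 = x
    return (x0 & x1) ^ x2 ^ (x1 & x2 & x3) ^ x3

print(is_wpb(f_example, 4))   # prints True or False for this arbitrary choice
```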
Last updated:  2022-03-31
Improving the Privacy of Tor Onion Services
Edward Eaton, Sajin Sasy, Ian Goldberg
Onion services enable bidirectional anonymity for parties that communicate over the Tor network, thus providing improved privacy properties compared to standard TLS connections. Since these services are designed to support server-side anonymity, the entry points for these services shuffle across the Tor network periodically. In order to connect to an onion service at a given time, the client has to resolve the .onion address for the service, which requires querying volunteer Tor nodes called Hidden Service Directories (HSDirs). However, previous work has shown that these nodes may be untrustworthy, and can learn or leak the metadata about which onion services are being accessed. In this paper, we present a new class of attacks that can be performed by malicious HSDirs against the current generation (v3) of onion services. These attacks target the unlinkability of onion services, allowing some services to be tracked over time. To restore unlinkability, we propose a number of concrete designs that use Private Information Retrieval (PIR) to hide information about which service is being queried, even from the HSDirs themselves. We examine the three major classes of PIR schemes, and analyze their performance, security, and how they fit into Tor in this context. We provide and evaluate implementations and end-to-end integrations, and make concrete suggestions to show how these schemes could be used in Tor to minimize the negative impact on performance while providing the most security.
Last updated:  2022-06-23
Counting Vampires: From Univariate Sumcheck to Updatable ZK-SNARK
Helger Lipmaa, Janno Siim, Michal Zajac
We propose a univariate sumcheck argument $\mathfrak{Count}$ of essentially optimal communication efficiency of one group element. While the previously most efficient univariate sumcheck argument of Aurora is based on polynomial commitments, $\mathfrak{Count}$ is based on inner-product commitments. We use $\mathfrak{Count}$ to construct a new pairing-based updatable and universal zk-SNARK $\mathfrak{Vampire}$ with the shortest known argument length (four group and two finite field elements) for $\mathsf{NP}$. In addition, $\mathfrak{Vampire}$ uses the aggregated polynomial commitment scheme of Boneh \emph{et al}.
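One algebraic fact commonly used in univariate sumcheck arguments (for example in Aurora) is that, over a multiplicative subgroup $H$ of a finite field, $\sum_{a\in H} a^i$ equals $|H|$ when $|H|$ divides $i$ and $0$ otherwise, so for $\deg f < |H|$ one has $\sum_{a\in H} f(a) = |H|\cdot f(0)$. The hedged sketch below is only a numeric check of that background identity, not the $\mathfrak{Count}$ protocol; the field and subgroup sizes are arbitrary example choices.

```python
# Numeric check of the univariate sumcheck identity over F_p:
# for a multiplicative subgroup H with |H| > deg(f),
#   sum_{a in H} f(a) == |H| * f(0)  (mod p).
import random

p = 257                      # small prime field
g = 3                        # a generator of F_257^* (order 256)
m = 16                       # subgroup size, divides p - 1
omega = pow(g, (p - 1) // m, p)
H = [pow(omega, i, p) for i in range(m)]
assert len(set(H)) == m      # omega really has order m

coeffs = [random.randrange(p) for _ in range(m - 1)]   # deg f < |H|
def f(x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

lhs = sum(f(a) for a in H) % p
rhs = m * coeffs[0] % p      # |H| times the constant coefficient f(0)
assert lhs == rhs
print(lhs, rhs)
```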
Last updated:  2023-07-14
Benchmarking and Analysing the NIST PQC Lattice-Based Signature Schemes Standards on the ARM Cortex M7
James Howe, Bas Westerbaan
This paper presents an analysis of the two lattice-based digital signature schemes, Dilithium and Falcon, which have been chosen by NIST for standardisation, on the ARM Cortex M7 using the STM32F767ZI NUCLEO-144 development board. This research is motivated by the ARM Cortex M7 device being the only processor in the Cortex-M family to offer a double precision (i.e., 64-bit) floating-point unit, making Falcon's implementations, requiring 53 bits of double precision, able to fully run native floating-point operations without any emulation. When benchmarking natively, Falcon shows significant speed-ups between 6.2-8.3x in clock cycles, 6.2-11.8x in runtime, and Dilithium does not show much improvement other than those gained by the slightly faster processor. We then present profiling results of the two schemes on the ARM Cortex M7 to show their respective bottlenecks and operations where the improvements are and can be made. This demonstrates, for example, that some operations in Falcon's procedures observe speed-ups by an order of magnitude. Finally, since Falcon's use of floating points is so rare in cryptography, we test the native FPU instructions on 4 different STM32 development boards with the ARM Cortex M7 and also a Raspberry Pi 3 which is used in some of Falcon's official benchmarking results. We find constant-time irregularities in all of these devices, which makes Falcon insecure on these devices for applications where signature generation can be timed by an attacker.
Last updated:  2022-03-31
Constant Latency in Sleepy Consensus
Atsuki Momose, Ling Ren
Dynamic participation support is an important feature of Bitcoin's longest-chain protocol and its variants. But these protocols suffer from long latency as a fundamental trade-off. Specifically, the latency depends at least on the following two factors: 1) the desired security level of the protocol, and 2) the actual participation level of the network. Classic BFT protocols, on the other hand, can achieve constant latency but cannot make progress under dynamic participation. In this work, we present a protocol that simultaneously supports dynamic participation and achieves constant latency. Our core technique is to extend the classic BFT approach from static quorum size to dynamic quorum size, i.e., according to the current participation level, while preserving important properties of static quorum. We also present a recovery mechanism for rejoining nodes that is efficient in terms of both communication and storage. Our experimental evaluation shows our protocol has much lower latency than a longest-chain protocol, especially when there is a sudden decrease of participation.
Last updated:  2023-12-01
Horst Meets Fluid-SPN: Griffin for Zero-Knowledge Applications
Lorenzo Grassi, Yonglin Hao, Christian Rechberger, Markus Schofnegger, Roman Walch, and Qingju Wang
Zero-knowledge (ZK) applications form a large group of use cases in modern cryptography, and recently gained in popularity due to novel proof systems. For many of these applications, cryptographic hash functions are used as the main building blocks, and they often dominate the overall performance and cost of these approaches. Therefore, in the last years several new hash functions were built in order to reduce the cost in these scenarios, including Poseidon and Rescue among others. These hash functions often look very different from more classical designs such as AES or SHA-2. For example, they work natively over prime fields rather than binary ones. At the same time, for example Poseidon and Rescue share some common features, such as being SPN schemes and instantiating the nonlinear layer with invertible power maps. While this allows the designers to provide simple and strong arguments for establishing their security, it also introduces crucial limitations in the design, which may affect the performance in the target applications. In this paper, we propose the Horst construction, in which the addition in a Feistel scheme (x, y) -> (y + F(x), x) is extended via a multiplication, i.e., (x, y) -> (y * G(x) + F(x), x). By carefully analyzing the performance metrics in SNARK and STARK protocols, we show how to combine an expanding Horst scheme with a Rescue-like SPN scheme in order to provide security and better efficiency in the target applications. We provide an extensive security analysis for our new design Griffin and a comparison with all current competitors.
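A toy sketch of the Horst round (x, y) -> (y * G(x) + F(x), x) over a prime field, together with its inverse, is given below. The field and the choices F(x) = x^3 and G(x) = x^2 + 1 are hypothetical illustrations, not the concrete Griffin instantiation; for invertibility G(x) must never vanish, which x^2 + 1 guarantees when p = 3 (mod 4).

```python
# Toy Horst round (x, y) -> (y * G(x) + F(x), x) over F_p and its inverse.
# Illustration only; not the actual Griffin parameters.
p = 2**31 - 1                 # prime with p = 3 (mod 4), so x^2 + 1 != 0 for all x

def F(x):
    return pow(x, 3, p)

def G(x):
    return (x * x + 1) % p    # never zero mod p because -1 is a non-residue

def horst_round(x, y):
    return (y * G(x) + F(x)) % p, x

def horst_round_inv(u, v):
    # v is the old x; recover the old y from u = y * G(v) + F(v).
    y = (u - F(v)) * pow(G(v), p - 2, p) % p
    return v, y

x, y = 123456789, 987654321
u, v = horst_round(x, y)
assert horst_round_inv(u, v) == (x, y)
print(u, v)
```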
Last updated:  2022-03-31
Improved Rotational-XOR Cryptanalysis of Simon-like Block Ciphers
Jinyu Lu, Yunwen Liu, Tomer Ashur, Bing Sun, Chao Li
Rotational-XOR (RX) cryptanalysis is a cryptanalytic method aimed at finding distinguishable statistical properties in ARX-C ciphers, i.e., ciphers that can be described only by using modular addition, cyclic rotation, XOR, and the injection of constants. In this paper we extend RX-cryptanalysis to AND-RX ciphers, a similar design paradigm where the modular addition is replaced by vectorial bitwise AND; such ciphers include the block cipher families Simon and Simeck. We analyze the propagation of RX-differences through AND-RX rounds and develop a closed-form formula for their expected probability. Inspired by the MILP verification model proposed by Sadeghi et al., we develop a SAT/SMT model for searching compatible RX-characteristics in Simon-like ciphers, i.e., characteristics for which there is at least one right pair of messages/keys satisfying the RX-characteristic. To the best of our knowledge, this is the first model that takes the RX-difference transitions and value transitions simultaneously into account in Simon-like ciphers. Meanwhile, we investigate how the choice of the round constants affects the resistance of Simon-like ciphers against RX-cryptanalysis. Finally, we show how to use an RX-distinguisher for a key recovery attack. Evaluating our model, we find compatible RX-characteristics of up to 20, 27, and 34 rounds with respective probabilities of $2^{-26}$, $2^{-44}$, and $2^{-56}$ for versions of Simeck with block sizes of 32, 48, and 64 bits, respectively, for large classes of weak keys in the related-key model. In most cases, these are the longest published distinguishers for the respective variants of Simeck. In the case of Simon, we present compatible RX-characteristics for round-reduced versions of all ten instances. We observe that for equal block and key sizes, the RX-distinguishers cover fewer rounds in Simon than in Simeck. Concluding the paper, we present a key recovery attack on Simeck64 reduced to 28 rounds using a 23-round RX-characteristic.
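For reference, a hedged Python sketch of the Simeck round function and of computing the rotational-XOR difference of a pair of states is given below; the rotation offset gamma = 1 and the example values are illustrative, and the characteristic search from the paper is not reproduced.

```python
# Simeck round function and RX-difference of a pair of words.
# Illustrative only; the SAT/SMT characteristic search is not shown.
WORD = 16                       # Simeck32/64 word size
MASK = (1 << WORD) - 1

def rol(x, r):
    return ((x << r) | (x >> (WORD - r))) & MASK

def simeck_round(x, y, k):
    # f(x) = (x AND (x <<< 5)) XOR (x <<< 1); (x, y) -> (y ^ f(x) ^ k, x)
    fx = (x & rol(x, 5)) ^ rol(x, 1)
    return y ^ fx ^ k, x

def rx_difference(a, b, gamma=1):
    # RX-difference of a pair (a, b): rotate one side, then XOR with the other.
    return rol(a, gamma) ^ b

# Example: a rotational pair of inputs and round keys with rotation 1.
x, y, k = 0x1234, 0xABCD, 0x0F0F
x2, y2, k2 = rol(x, 1), rol(y, 1), rol(k, 1)
u1, v1 = simeck_round(x, y, k)
u2, v2 = simeck_round(x2, y2, k2)
print(hex(rx_difference(u1, u2)), hex(rx_difference(v1, v2)))
```

Because AND, XOR, and rotation all commute with a cyclic rotation, a rotational pair propagates through this constant-free round with probability 1, so both printed RX-differences are zero; it is the injected round constants that disturb this behaviour in the real ciphers.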
Last updated:  2022-03-28
A Logic and an Interactive Prover for the Computational Post-Quantum Security of Protocols
Cas Cremers, Caroline Fontaine, Charlie Jacomme
We provide the first mechanized post-quantum sound security protocol proofs. We achieve this by developing PQ-BC, a computational first-order logic that is sound with respect to quantum attackers, and corresponding mechanization support in the form of the PQ-Squirrel prover. Our work builds on the classical BC logic [Bana, Comon, CCS14] and its mechanization in the Squirrel prover [BDJKM, S&P21]. Our development of PQ-BC requires making the BC logic sound for a single interactive quantum attacker. We implement the PQ-Squirrel prover by modifying Squirrel, relying on the soundness results of PQ-BC and enforcing a set of syntactic conditions; additionally, we provide new tactics for the logic that extend the tool’s scope. Using PQ-Squirrel, we perform several case studies, thereby giving the first mechanical proofs of their computational post-quantum security. These include two generic constructions of KEM-based key exchange, two sub-protocols from IKEv1 and IKEv2, and a proposed post-quantum variant of Signal’s X3DH protocol. Additionally, we use PQ-Squirrel to prove that several classical Squirrel case studies are already post-quantum sound. We provide the sources of PQ-Squirrel and all our models for reproducibility, as well as a long version of this paper with full details.
Last updated:  2022-09-30
Quantum Advantage from Any Non-Local Game
Yael Tauman Kalai, Alex Lombardi, Vinod Vaikuntanathan, Lisa Yang
We show a general method of compiling any $k$-prover non-local game into a single-prover interactive game maintaining the same (quantum) completeness and (classical) soundness guarantees (up to negligible additive factors in a security parameter). Our compiler uses any quantum homomorphic encryption scheme (Mahadev, FOCS 2018; Brakerski, CRYPTO 2018) satisfying a natural form of correctness with respect to auxiliary (quantum) input. The homomorphic encryption scheme is used as a cryptographic mechanism to simulate the effect of spatial separation, and is required to evaluate $k-1$ prover strategies (out of $k$) on encrypted queries. In conjunction with the rich literature on (entangled) multi-prover non-local games starting from the celebrated CHSH game (Clauser, Horne, Shimony and Holt, Physical Review Letters 1969), our compiler gives a broad framework for constructing mechanisms to classically verify quantum advantage.
Last updated:  2022-03-28
The Inverse of $\chi$ and Its Applications to Rasta-like Ciphers
Fukang Liu, Santanu Sarkar, Willi Meier, Takanori Isobe
At ASIACRYPT 2021, Liu et al. pointed out a weakness of the Rasta-like ciphers neglected by the designers. The main strategy is to construct exploitable equations of the $n$-bit $\chi$ operation, denoted by $\chi_n$. However, these equations are all obtained by first studying $\chi_n$ for small $n$. In this note, we demonstrate that if the explicit formula of the inverse of $\chi_n$, denoted by $\chi_n^{-1}$, were known, all these exploitable equations would have been quite obvious and the weakness of the Rasta-like ciphers could have been avoided at the design phase. However, the explicit formula of $\chi_n^{-1}$ does not seem to be well known, and the most relevant work was published by Biryukov et al. at ASIACRYPT 2014. In this work, we give a very simple formula for $\chi_n^{-1}$ that can be written down in only one line, and we prove its correctness in a rigorous way. Based on this formula, the exploitable equations for Rasta-like ciphers can be easily derived, and as a result more exploitable equations are found.
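Since the note centers on the $\chi_n$ map, the following minimal sketch uses the standard definition $y_i = x_i \oplus ((x_{i+1} \oplus 1) \cdot x_{i+2})$ (indices mod $n$) and checks by exhaustive enumeration that $\chi_n$ is a permutation for a small odd $n$; the paper's one-line inverse formula is not reproduced here, so inversion is done by table lookup.

```python
# Minimal sketch of the n-bit chi map (Keccak-style definition); the paper's explicit
# inverse formula is not reproduced, so we invert by table lookup for a small odd n.

def chi(bits):
    n = len(bits)
    return [bits[i] ^ ((bits[(i + 1) % n] ^ 1) & bits[(i + 2) % n]) for i in range(n)]

def int_to_bits(v, n):
    return [(v >> i) & 1 for i in range(n)]

n = 5  # chi_n is invertible for odd n
table = {tuple(chi(int_to_bits(v, n))): v for v in range(2 ** n)}
assert len(table) == 2 ** n  # all outputs distinct, so chi_5 is a permutation

# Inverting a value here simply means looking it up in the table.
y = chi(int_to_bits(0b10110, n))
assert table[tuple(y)] == 0b10110
```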
Last updated:  2022-03-28
Auditable, Available and Resilient Private Computation on the Blockchain via MPC
Christopher Cordi, Michael P. Frank, Kasimir Gabert, Carollan Helinski, Ryan C. Kao, Vladimir Kolesnikov, Abrahim Ladha, Nicholas Pattengale
Simple but mission-critical internet-based applications that require extremely high reliability, availability, and verifiability (e.g., auditability) could benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is normally publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. In this work, we investigate using MPC techniques to protect the privacy of a blockchain computation. While our main goal is to hide both the data and the computed function itself, we also consider the standard MPC setting where the function is public. We describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a blockchain MPC architecture and system. The GABLE architecture specifies the roles and capabilities of the players. GABLE includes two approaches for implementing MPC over blockchain: Garbled Circuits (GC) evaluating universal circuits, and Garbled Finite State Automata (GFSA). We formally model and prove the security of GABLE implemented over garbling schemes, a popular abstraction of GC and GFSA from (Bellare et al., CCS 2012). We analyze in detail the performance (including Ethereum gas costs) of both approaches and discuss the trade-offs. We implement a simple prototype of GABLE and report on the implementation issues and experience.
Last updated:  2022-03-28
Revocable Hierarchical Attribute-based Signatures from Lattices
Daniel Gardham, Mark Manulis
Attribute-based Signatures (ABS) allow users to obtain attributes from issuing authorities, and sign messages whilst simultaneously proving compliance of their attributes with a verification policy. ABS demands that both the signer and the set of attributes used to satisfy a policy remain hidden to the verifier. Hierarchical ABS (HABS) supporting roots of trust and delegation were recently proposed to alleviate scalability issues in centralised ABS schemes. An important yet challenging property for privacy-preserving ABS is revocation, which may be applied to signers or some of the attributes they possess. Existing ABS schemes lack efficient revocation of either signers or their attributes, relying on costly generic proofs. Moreover, in HABS there is a further need to support revocation of authorities on the delegation paths, which is not provided by existing HABS constructions. This paper proposes a direct HABS scheme with a Verifier-Local Revocation (VLR) property. We extend the original HABS security model to address revocation and develop a new attribute delegation technique with an appropriate VLR mechanism for HABS, which also implies the first ABS scheme to support VLR. Moreover, our scheme supports inner-product signing policies, offering a wider class of attribute relations than previous HABS schemes, and is the first to be based on lattices, which are thought to offer post-quantum security.
Last updated:  2022-11-19
Side-channel attacks based on power trace decomposition
Fanliang Hu, Huanyu Wang, Junnian Wang
Side Channel Attacks (SCAs), attacks that exploit the physical information generated when an encryption algorithm is executed on a device in order to recover the key, have become one of the key threats to the security of encrypted devices. Recently, with the development of deep learning, deep learning techniques have been applied to side channel attacks with good results on publicly available datasets. In this paper, we propose a power trace decomposition method that divides the original power trace into two parts, where the data-influenced part is defined as the data power trace and the other part is defined as the device constant power trace, and uses the data power trace for training the network model, which has clear advantages over training the network model on the original power trace. To verify the effectiveness of the approach, we evaluated the ATxmega128D4 microcontroller by capturing the power traces generated when implementing AES-128. Experimental results show that network models trained using data power traces outperform network models trained using raw power traces in terms of classification accuracy, training time, cross-subkey key recovery and cross-device key recovery.
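One plausible way to realize the split described above (an interpretation for illustration, not necessarily the authors' exact procedure) is to estimate the device-constant component as the per-sample mean over many traces and keep the residual as the data power trace used for training.

```python
# Hedged sketch: splitting raw traces into a device-constant part (per-sample mean)
# and a data-dependent residual. The traces here are synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
num_traces, num_samples = 1000, 500
raw_traces = rng.normal(loc=5.0, scale=0.1, size=(num_traces, num_samples))  # stand-in data

device_constant = raw_traces.mean(axis=0)      # shape: (num_samples,)
data_traces = raw_traces - device_constant     # residual, data-influenced part

# data_traces (instead of raw_traces) would then be fed to the profiling network.
print(data_traces.shape, float(data_traces.mean()))
```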
Last updated:  2022-03-28
A lightweight verifiable secret sharing scheme in IoTs
Likang Lu, Jianzhu Lu
Verifiable secret sharing (VSS) is a fundamental tool of cryptography and distributed computing in the Internet of Things (IoT). Since network bandwidth is a scarce resource, minimizing the amount of verification data improves the performance of VSS. Existing VSS schemes, however, face limitations in the amount of verification data and the energy consumption they impose on low-end devices, which makes their adoption challenging in resource-limited IoT settings. To address the above limitations, we propose a VSS scheme based on Nyberg’s one-way accumulator for one-way hash functions (NAHFs). The proposed scheme has two distinguishing features: first, its security is based on NAHFs, whose computational requirements are within the basic capabilities of known IoT devices, and second, upon receiving only one piece of verification data, participants can verify the correctness of both their shares and the secret without any communication. Experimental results demonstrate that, compared to the Feldman scheme and the Rajabi-Eslami scheme, the energy consumption of a participant in the proposed scheme is reduced by at least 24% and 83%, respectively, for a secret.
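For contrast with the single-value verification described above, here is a minimal textbook sketch of Feldman VSS verification (the baseline the abstract compares against, not the proposed NAHF-based scheme); the toy group parameters are illustrative only and far too small to be secure. Note that each participant must check its share against a list of commitments whose length grows with the threshold.

```python
# Classical Feldman VSS verification (textbook sketch, toy parameters).
# p prime, q | p-1, g of order q; the secret polynomial lives over Z_q.

p, q, g = 23, 11, 4                 # g = 4 has order 11 mod 23 (toy sizes)
coeffs = [7, 3, 5]                  # secret polynomial f(x) = 7 + 3x + 5x^2 over Z_q

commitments = [pow(g, a, p) for a in coeffs]   # broadcast by the dealer

def share(i):
    return sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q

def verify(i, s):
    lhs = pow(g, s, p)
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return lhs == rhs

assert all(verify(i, share(i)) for i in range(1, 6))
```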
Last updated:  2022-03-28
Fuzz, Penetration, and AI Testing for SoC Security Verification: Challenges and Solutions
Kimia Zamiri Azar, Muhammad Monir Hossain, Arash Vafaei, Hasan Al Shaikh, Nurun N. Mondol, Fahim Rahman, Mark Tehranipoor, Farimah Farahmandi
The ever-increasing usage and application of system-on-chips (SoCs) has resulted in the tremendous modernization of these architectures. For a modern SoC design, with the inclusion of numerous complex and heterogeneous intellectual properties (IPs), and its privacy-preserving declaration, there exists a wide variety of highly sensitive assets. These assets must be protected from any unauthorized access and against a diverse set of attacks. Attacks aimed at obtaining such assets can originate from different sources, including malicious IPs, malicious or vulnerable firmware/software, unreliable and insecure interconnection and communication protocols, and side-channel vulnerabilities through power/performance profiles. Any unauthorized access to such highly sensitive assets may result in either a breach of company secrets for original equipment manufacturers (OEMs) or identity theft for the end user. Unlike the enormous advances in functional testing and verification of the SoC architecture, security verification is still maturing, and comparatively little effort has been made by academia and industry. Unfortunately, there exists a huge gap between the modernization of SoC architectures and their security verification approaches. With the lack of automated SoC security verification in modern electronic design automation (EDA) tools, we provide a comprehensive overview of the requirements that must be realized as the fundamentals of the SoC security verification process in this paper. By reviewing these requirements, including the creation of a unified language for SoC security verification, the definition of security policies, the formulation of security verification, etc., we put forward the use of self-refinement techniques, such as fuzz, penetration, and AI testing, for security verification purposes. We evaluate the challenges and possible resolutions, and outline potential approaches for realizing SoC security verification via these self-refinement techniques.
Last updated:  2022-04-01
Improved Straight-Line Extraction in the Random Oracle Model With Applications to Signature Aggregation
Yashvanth Kondi, abhi shelat
The goal of this paper is to improve the efficiency and applicability of straight-line extraction techniques in the random oracle model. Straight-line extraction in the random oracle model refers to the existence of an extractor, which given the random oracle queries made by a prover $P^*(x)$ on some theorem $x$, is able to produce a witness $w$ for $x$ with roughly the same probability that $P^*$ produces a verifying proof. This notion applies to both zero-knowledge protocols and verifiable computation where the goal is compressing a proof. Pass (CRYPTO '03) first showed how to achieve this property for NP using a cut-and-choose technique which incurred a $\lambda^2$-bit overhead in communication where $\lambda$ is a security parameter. Fischlin (CRYPTO '05) presented a more efficient technique based on ``proofs of work'' that sheds this $\lambda^2$ cost, but only applies to a limited class of Sigma Protocols with a ``quasi-unique response'' property, which for example, does not necessarily include the standard OR composition for Sigma protocols. With Schnorr/EdDSA signature aggregation as a motivating application, we develop new techniques to improve the computation cost of straight-line extractable proofs. Our improvements to the state of the art range from 70x--200x for the best compression parameters. This is due to a uniquely suited polynomial evaluation algorithm, and the insight that a proof-of-work that relies on multicollisions and the birthday paradox is faster to solve than inverting a fixed target. Our collision-based proof-of-work more generally improves the Prover's random oracle query complexity when applied in the NIZK setting as well. In addition to reducing the query complexity of Fischlin's Prover, for a special class of Sigma protocols we can for the first time closely match a new lower bound we present. Finally we extend Fischlin's technique so that it applies to a more general class of strongly-sound Sigma protocols, which includes the OR composition. We achieve this by carefully randomizing Fischlin's technique---we show that its current deterministic nature prevents its application to certain multi-witness languages.
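As a small, generic illustration (not the paper's actual construction) of why a collision-style proof-of-work needs fewer random-oracle queries than inverting a fixed target: by the birthday paradox, a collision on a $b$-bit output takes roughly $2^{b/2}$ queries, while hitting a fixed $b$-bit target takes roughly $2^b$ queries.

```python
# Generic birthday-paradox illustration: collision search vs. fixed-target inversion
# on a b-bit truncation of SHA-256. Parameter b is a toy choice.
import hashlib

def h(msg: bytes, b: int) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") >> (256 - b)

def queries_until_collision(b: int) -> int:
    seen = {}
    i = 0
    while True:
        i += 1
        d = h(i.to_bytes(8, "big"), b)
        if d in seen:
            return i
        seen[d] = i

def queries_until_preimage(b: int, target: int) -> int:
    i = 0
    while True:
        i += 1
        if h(i.to_bytes(8, "big"), b) == target:
            return i

b = 16
print("collision after", queries_until_collision(b), "queries (~2^8 expected)")
print("preimage after", queries_until_preimage(b, target=0), "queries (~2^16 expected)")
```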
Last updated:  2022-03-28
Poly Onions: Achieving Anonymity in the Presence of Churn
Megumi Ando, Miranda Christ, Anna Lysyanskaya, Tal Malkin
Onion routing is a popular approach towards anonymous communication. Practical implementations are widely used (for example, Tor has millions of users daily), but are vulnerable to various traffic correlation attacks, and the theoretical foundations, despite recent progress, still lag behind. In particular, all works that model onion routing protocols and prove their security only address a single run, where each party sends and receives a single message of fixed length, once. Moreover, they all assume a static network setting, where the parties are stable throughout the lifetime of the protocol. In contrast, real networks have a high rate of churn (nodes joining and exiting the network), real users want to send multiple messages, and realistic adversaries may observe multiple runs of the protocol. In this paper, we initiate a formal treatment of onion routing in a setting with multiple runs over a dynamic network with churn. We provide the following contributions. - We define the cryptographic primitive of poly onion encryption, which is appropriate for a setting with churn. This primitive is inspired by duo onions, introduced by Iwanik, Klonowski, and Kutylowski (Communications and Multimedia Security, 2005) towards improving onion delivery rate. We generalize the idea, change it to add auxiliary helpers towards supporting better security, and propose formal definitions. - We construct an instantiation of poly onion encryption based on standard cryptographic primitives (CCA secure public key encryption with tags, PRP, MAC, and secret sharing). Our construction is secure against an active adversary, and is parameterized to allow flexible instantiations supporting a range of corruption thresholds and churn limits. - We formally model anonymous onion routing for multiple runs in the setting with churn, including a definition of strong anonymity, where the adversary has CCA-like access to oracles for generating and processing onions. - We prove that if an onion routing protocol satisfies a natural condition we define ("simulatability"), then strong single-run anonymity implies strong multiple-run anonymity. This condition is satisfied by existing onion routing schemes, such as the $\Pi_p$ protocol of Ando, Lysyanskaya, and Upfal (ICALP 2018). As a consequence, these schemes are anonymous also for multiple runs (although not when there is churn). - We provide an anonymous routing protocol, "Poly $\Pi_p$," and prove that it is anonymous in the setting with churn, against a passive adversary. We obtain this construction by using an instance of our poly onion encryption within the $\Pi_p$ protocol.
Last updated:  2022-06-01
An Improved Model on the Vague Sets-Based DPoS’s Voting Phase in Blockchain
Lin You, Zhuobiao Wang, Gengran Hu, Chengtang Cao
As a common consensus mechanism used in blockchain systems, DPoS uses voting to select the committee members who will generate the corresponding blocks. In order to elect committee members as fairly as possible, vague sets are introduced into the voting phase of DPoS. In the vague-set-based model proposed by Xu et al., the voting nodes can vote for a candidate, vote against it, or abstain. In this paper, we improve this vague-set-based model by introducing a new mapping from vague sets to fuzzy sets and considering the case in which each voting node is assigned a weight. In addition, several nice properties of our improved model are proved, and the model makes the voting phase of DPoS fairer.
Last updated:  2022-06-17
An Efficient and Robust Multidimensional Data Aggregation Scheme for Smart Grid Based on Blockchain
Lin You, Xinhua Zhang, Gengran Hu, Longbo Han
In order to analyze real-time power data without revealing users' privacy, privacy-preserving data aggregation has been extensively researched in the smart grid setting. However, most of the existing schemes either incur too much computation overhead and cannot handle dynamic users, or require a trusted center. In this paper, we propose an efficient and robust multidimensional data aggregation scheme based on blockchain. In our scheme, the leader election algorithm of the Raft protocol is used to select a mining node from all smart meters to aggregate data. A dynamically verifiable secret sharing homomorphism scheme is adopted to realize flexible dynamic user management. In addition, our scheme can not only resist internal and external attackers but also support multidimensional data aggregation and fault tolerance. Compared with other schemes, our scheme not only supports user fault tolerance, but also supports fault tolerance of the intermediate aggregation node. The security analysis shows that our proposed scheme is IND-CPA secure and meets stronger security requirements. Our performance analyses show that, compared with other schemes, our scheme can be implemented with lower computation cost and communication overhead.
Last updated:  2023-10-29
Higher-order masked Saber
Suparna Kundu, Jan-Pieter D’Anvers, Michiel Van Beirendonck, Angshuman Karmakar, and Ingrid Verbauwhede
Side-channel attacks are formidable threats to the cryptosystems deployed in the real world. An effective and provably secure countermeasure against side-channel attacks is masking. In this work, we present a detailed study of higher-order masking techniques for the key-encapsulation mechanism Saber. Saber is one of the lattice-based finalist candidates in the National Institute of Standards and Technology's post-quantum standardization procedure. We provide a detailed analysis of different masking algorithms proposed for Saber in the recent past and propose an optimized implementation of higher-order masked Saber. Our proposed techniques for first-, second-, and third-order masked Saber have performance overheads of 2.7x, 5x, and 7.7x, respectively, compared to unmasked Saber. We show that, compared to Kyber, another lattice-based finalist scheme, Saber's performance degrades less with an increase in the order of masking. We also show that higher-order masked Saber needs fewer random bytes than higher-order masked Kyber. Additionally, we adapt our masked implementation to uSaber, a variant of Saber that was specifically designed to allow an efficient masked implementation. We present the first masked implementation of uSaber, showing that it indeed outperforms masked Saber by at least 12% for any order. We provide optimized implementations of all our proposed masking schemes on ARM Cortex-M4 microcontrollers.
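For readers unfamiliar with masking, here is a generic first-order arithmetic-masking sketch (illustrative only, not the paper's optimized gadgets): a secret coefficient modulo Saber's $q = 2^{13}$ is split into two random shares so that any single share is independent of the secret, and linear operations act share-wise.

```python
# Generic first-order arithmetic masking sketch (not the paper's gadgets).
import secrets

q = 8192  # Saber's modulus 2^13

def mask(s):
    r = secrets.randbelow(q)
    return (r, (s - r) % q)          # shares sum to s mod q; each share alone is uniform

def add_public(shares, c):
    a, b = shares
    return ((a + c) % q, b)          # linear operations only need to touch one share

s = 1234
shares = add_public(mask(s), 17)
assert sum(shares) % q == (s + 17) % q
```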
Last updated:  2022-03-28
Shaduf++: Non-Cycle and Privacy-Preserving Payment Channel Rebalancing
Zhonghui Ge, Yi Zhang, Yu Long, Dawu Gu
A leading approach to enhancing the performance and scalability of permissionless blockchains is to use the payment channel, which allows two users to perform off-chain payments with almost unlimited frequency. By linking payment channels together to form a payment channel network, users connected by a path of channels can perform off-chain payments rapidly. However, payment channels risk encountering fund depletion, which threatens the availability of both the payment channel and the network. The most recent method needs a cycle-based channel rebalancing procedure, which requires a fair leader and users with rebalancing demands forming directed cycles in the network. Therefore, its large-scale applications are restricted. In this work, we introduce Shaduf, a novel non-cycle off-chain rebalancing protocol that offers a new solution for users to shift coins between channels directly without relying on the cycle setting. Shaduf can be applied to more general rebalancing scenarios. We provide the details of Shaduf and formally prove its security under the Universal Composability framework. Our prototype demonstrates its feasibility and the experimental evaluation shows that Shaduf enhances the Lightning Network performance in payment success ratio and volume. Experimental results also show that our protocol prominently reduces users’ deposits in channels while maintaining the same amount of payments. Moreover, as a privacy enhancement of Shaduf, we propose Shaduf++. Shaduf++ not only retains all the advantages of Shaduf, but also preserves privacy for the rebalancing operations.
Last updated:  2024-02-25
Phase-shift Fault Analysis of Grain-128
HRIDYA P R and Jimmy Jose
Phase-shift fault attacks are a type of fault attack used for the cryptanalysis of stream ciphers. They involve clocking a cipher’s feedback shift registers out of phase in order to generate faulted keystream. Grain-128 is a 128-bit variant of the Grain cipher, one of the finalists of the eSTREAM project. In this work, we propose a phase-shift fault attack against Grain-128 loaded with key-IV pairs that result in an all-zero LFSR after initialisation. We frame equations in terms of the input and output bits of the cipher and solve them using a SAT solver. By correctly guessing 40 internal state bits, we are able to recover the entire 128-bit key with just 2 phase-shift faults for keystreams of length 200 bits.
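To illustrate the fault model on a toy register (not Grain-128 itself, and with arbitrary taps and initial state chosen for illustration), the sketch below clocks an LFSR one extra time at some step: the keystream before the fault is unchanged, while the keystream after it is shifted out of phase relative to the fault-free run.

```python
# Toy phase-shift fault: an extra clock of a feedback shift register at step t
# leaves earlier output untouched but desynchronizes everything afterwards.

def lfsr_stream(state, taps, length, extra_clock_at=None):
    state = list(state)
    out = []
    for t in range(length):
        if t == extra_clock_at:                  # the injected phase shift
            fb = 0
            for tap in taps:
                fb ^= state[tap]
            state = state[1:] + [fb]
        out.append(state[0])                     # output bit, then clock normally
        fb = 0
        for tap in taps:
            fb ^= state[tap]
        state = state[1:] + [fb]
    return out

init = [1, 0, 0, 1, 0, 1, 1, 0]
normal = lfsr_stream(init, taps=[0, 2, 3, 5], length=20)
faulted = lfsr_stream(init, taps=[0, 2, 3, 5], length=20, extra_clock_at=10)
assert normal[:10] == faulted[:10] and normal[10:] != faulted[10:]
```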
Last updated:  2022-03-28
Secure Two-party Computation Approach for NTRUEncrypt
Lin You, Yan Wang, Liang Li, Gengran Hu
Secure multi-party computation can provide a solution for privacy protection and ensure the correctness of the final computation results. Lattice-based algorithms are considered to be among the most promising post-quantum cryptographic algorithms due to their good balance among security, key sizes and computation speed. NTRUEncrypt is a lattice-based cryptographic algorithm resistant to quantum attacks. However, there have not been many candidate post-quantum cryptographic algorithms for secure multi-party computation. In this paper, we propose a novel secure two-party computation scheme based on NTRUEncrypt and implement the polynomial multiplication operations under NTRUEncrypt-OT. Our secure two-party computation scheme mainly uses oblivious transfer and private set intersection. We prove the security of our scheme in the semi-honest model. Our scheme can be applied to multi-party computation scenarios, such as quantum-attack-resistant e-voting or e-auctions.
Last updated:  2022-03-28
ECDSA White-Box Implementations: Attacks and Designs from WhibOx 2021 Contest
Guillaume Barbu, Ward Beullens, Emmanuelle Dottax, Christophe Giraud, Agathe Houzelot, Chaoyun Li, Mohammad Mahzoun, Adrián Ranea, Jianrui Xie
Despite the growing demand for software implementations of ECDSA secure against attackers with full control of the execution environment, the scientific literature on white-box ECDSA design is scarce. To assess the state-of-the-art and encourage practical research on this topic, the WhibOx 2021 contest invited developers to submit white-box ECDSA implementations and attackers to break the corresponding submissions. In this work we describe several attack techniques and designs used during the WhibOx 2021 contest. We explain the attack methods used by the team TheRealIdefix, who broke the largest number of challenges, and we show the success of each method against all the implementations in the contest. Moreover, we describe the designs, submitted by the team zerokey, of the two winning challenges; these designs represent the ECDSA signature algorithm by a sequence of systems of low-degree equations, which are obfuscated with affine encodings and extra random variables and equations. The WhibOx contest has shown that securing ECDSA in the white-box model is an open and challenging problem, as no implementation survived more than two days. To this end, our designs provide a starting methodology for further research, and our attacks highlight the weak points future work should address.
Last updated:  2024-01-11
Light Clients for Lazy Blockchains
Ertem Nusret Tas, David Tse, Lei Yang, and Dionysis Zindros
Lazy blockchains decouple consensus from transaction verification and execution to increase throughput. Although they can contain invalid transactions (e.g., double spends) as a result, these can easily be filtered out by full nodes that check if there have been previous conflicting transactions. However, creating light (SPV) clients that do not see the whole transaction history becomes a challenge: A record of a transaction on the chain does not necessarily entail transaction confirmation. In this paper, we devise a protocol that enables the creation of efficient light clients for lazy blockchains. The number of interaction rounds and the communication complexity of our protocol are logarithmic in the blockchain execution time. Our construction is based on a bisection game that traverses the Merkle tree containing the ledger of all - valid or invalid - transactions. We prove that our proof system is succinct, complete and sound, and empirically demonstrate the feasibility of our scheme.
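As a simplified sketch of the bisection idea (an assumption-laden illustration, not the paper's exact protocol), a light client can binary-search, in logarithmically many rounds, for the first position at which two provers' views of the Merkle-committed ledger disagree, and then request evidence only about that position.

```python
# Simplified bisection sketch: binary search for the first index where two provers'
# prefix commitments over the ledger diverge. Hash and ledger contents are toy data.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def first_disagreement(ledger_a, ledger_b):
    # Invariant: prefixes of length lo agree, prefixes of length hi disagree.
    lo, hi = 0, len(ledger_a)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if merkle_root(ledger_a[:mid]) == merkle_root(ledger_b[:mid]):
            lo = mid
        else:
            hi = mid
    return lo  # ledger_a[lo] is the first transaction the two provers dispute

honest = [f"tx{i}".encode() for i in range(16)]
cheating = list(honest)
cheating[9] = b"double-spend"
assert first_disagreement(honest, cheating) == 9
```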
Last updated:  2022-03-28
On Succinct Non-Interactive Arguments in Relativized Worlds
Megan Chen, Alessandro Chiesa, Nicholas Spooner
Succinct non-interactive arguments of knowledge (SNARKs) are cryptographic proofs with strong efficiency properties. Applications of SNARKs often involve proving computations that include the SNARK verifier, a technique called recursive composition. Unfortunately, SNARKs with desirable features such as a transparent (public-coin) setup are known only in the random oracle model (ROM). In applications this oracle must be heuristically instantiated and used in a non-black-box way. In this paper we identify a natural oracle model, the low-degree random oracle model, in which there exist transparent SNARKs for all NP computations relative to this oracle. Informally, letting $\mathcal{O}$ be a low-degree encoding of a random oracle, and assuming the existence of (standard-model) collision-resistant hash functions, there exist SNARKs relative to $\mathcal{O}$ for all languages in $\mathsf{NP}^{\mathcal{O}}$. Such a SNARK can directly prove a computation about its own verifier. This capability leads to proof-carrying data (PCD) in the oracle model $\mathcal{O}$ based solely on the existence of (standard-model) collision-resistant hash functions. To analyze this model, we introduce a more general framework, the linear code random oracle model (LCROM). We show how to obtain SNARKs in the LCROM for computations that query the oracle, given an accumulation scheme for oracle queries in the LCROM. Then we construct such an accumulation scheme for the special case of a low degree random oracle.
Last updated:  2023-02-10
Witness-Authenticated Key Exchange Revisited: Improved Models, Simpler Constructions, Extensions to Groups
Matteo Campanelli, Rosario Gennaro, Kelsey Melissaris, Luca Nizzardo
We study witness-authenticated key exchange (WAKE), in which parties authenticate through knowledge of a witness to any NP statement. WAKE achieves generic authenticated key exchange in the absence of trusted parties; WAKE is most suitable when a certificate authority is either unavailable or undesirable, as in highly decentralized networks. In practice WAKE approximates witness encryption, its elusive non-interactive analogue, at the cost of minimal interaction. This work is the first to propose, model and build witness-authenticated key exchange amongst groups of more than two parties, as well as the first to provide practical and provably secure constructions in the two-party case for general NP statements. Specifically our contributions are: (1) both game-based and universally composable (Canetti, FOCS '01) definitions for WAKE along with equivalence conditions between the two definitions, (2) a highly general compiler that introduces witness-authentication to any key exchange protocol along with, as a direct consequence, a three-round group WAKE protocol from DDH and signatures of knowledge (SOK), and (3) an optimized two-round group WAKE construction from DDH and SOK along with experimental benchmarks to demonstrate concrete practicality. Additionally, we study the specialized two-party case and provide a critique of prior work on this topic (Ngo et al., Financial Crypto '21) by pinpointing nontrivial weaknesses in the model, constructions and security proofs seen therein. We rectify those limitations with this work, significantly diverging in our techniques, design and approach.