All papers in 2021 (Page 3 of 1705 results)

Last updated:  2021-11-15
EVA Improved: Compiler and Extension Library for CKKS
Sangeeta Chowdhary, Wei Dai, Kim Laine, Olli Saarikivi
Homomorphic encryption (HE), especially the CKKS scheme, can be extremely challenging to use. The EVA language and compiler (Dathathri et al., PLDI 2020) was an attempt at addressing this challenge. EVA allows a developer to express their encrypted computation in a simple form with a Python-integrated language called PyEVA. It then compiles the program into an executable form by inserting operations such as relinearization and rescaling, applying optimizations, and choosing encryption parameters with the objective of minimizing execution time. Compiled programs can be executed with a parallelizing back-end against a library of HE primitives. Our work improves upon the EVA toolchain in several ways: changes to the Python front-end make writing PyEVA programs more natural, while a rework of EVA's C++ APIs makes writing new passes easier. We also implement two new optimizations, common subexpression elimination and reduction balancing, which we show allow users to write simpler and more modular PyEVA programs. We argue that the abstraction EVA provides is insufficient to resolve some common usability challenges. For example, managing vectors of arbitrary size is non-trivial. To resolve these problems, we demonstrate how building a library of commonly used data structures and functions is simple in PyEVA. EVA's automation allows writing very concise code, which gets fused and optimized together with the user program. We create the beginnings of an EVA Extension Library (EXL) that provides vector and matrix classes and a collection of common statistical functions, to demonstrate the power of this approach.
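To make the compiler-pass discussion concrete, here is a minimal sketch of common subexpression elimination by hash-consing over an expression DAG; the Node class and pass structure are our own Python illustration, not EVA's actual C++ pass API or PyEVA.

    # Hash-consing CSE: structurally identical subtrees collapse to one shared node.
    class Node:
        def __init__(self, op, *children):
            self.op, self.children = op, children

    def cse(root, cache=None):
        """Rebuild the tree bottom-up, reusing a cached node per (op, child-ids) key."""
        if cache is None:
            cache = {}
        children = tuple(cse(c, cache) for c in root.children)
        key = (root.op, tuple(id(c) for c in children))
        if key not in cache:
            cache[key] = Node(root.op, *children)
        return cache[key]

    # (x + y) * (x + y): after CSE both operands are the same object,
    # so later passes and the backend handle the addition only once.
    x, y = Node("x"), Node("y")
    shared = cse(Node("mul", Node("add", x, y), Node("add", x, y)))
    assert shared.children[0] is shared.children[1]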
Last updated:  2021-11-15
CCA Secure A Posteriori Openable Encryption in the Standard Model
Xavier Bultel
A Posteriori Openable Public Key Encryptions (APOPKE) allow any user to generate a constant-size key that decrypts the messages they have sent over a chosen period of time. As an important feature, the period can be dynamically chosen after the messages have been sent. This primitive was introduced in 2016 by Bultel and Lafourcade. They also defined the Chosen-Plaintext Attack (CPA) security for APOPKE, and designed a scheme called GAPO, which is CPA secure in the random oracle model. In this paper, we formalize the Chosen-Ciphertext Attack (CCA) security for APOPKE, then we design a scheme called CHAPO (for CHosen-ciphertext attack resistant A Posteriori Openable encryption), and we prove its CCA security in the standard model. CHAPO is approximately twice as efficient as GAPO and is more generic. We also give new applications, and discuss the practical impact of its CCA security.
Last updated:  2021-11-15
Interaction-Preserving Compilers for Secure Computation
Nico Döttling, Vipul Goyal, Giulio Malavolta, Justin Raizes
In this work we consider the following question: What is the cost of security for multi-party protocols? Specifically, given an insecure protocol where parties exchange (in the worst case) $\Gamma$ bits in $N$ rounds, is it possible to design a secure protocol with communication complexity close to $\Gamma$ and $N$ rounds? We systematically study this problem in a variety of settings and we propose solutions based on the intractability of different cryptographic problems. For the case of two parties we design an interaction-preserving compiler where the number of bits exchanged in the secure protocol approaches $\Gamma$ and the number of rounds is exactly $N$, assuming the hardness of standard problems over lattices. For the more general multi-party case, we obtain the same result assuming either (i) an additional round of interaction or (ii) the existence of extractable witness encryption and succinct non-interactive arguments of knowledge. As a contribution of independent interest, we construct the first multi-key fully homomorphic encryption scheme with message-to-ciphertext ratio (i.e., rate) of $1 - o(1)$, assuming the hardness of the learning with errors (LWE) problem. We view our work as a support for the claim that, as far as interaction and communication are concerned, one does not need to pay a significant price for security in multi-party protocols.
Last updated:  2021-11-15
Strong and Tight Security Guarantees against Integral Distinguishers
Phil Hebborn, Baptiste Lambin, Gregor Leander, Yosuke Todo
Integral attacks belong to the classical attack vectors against any given block cipher. However, providing arguments that a given cipher is resistant against those attacks is notoriously difficult. In this paper, based solely on the assumption of independent round keys, we develop significantly stronger arguments than what was possible before: our main result is that we show how to argue that the sum of ciphertexts over any possible subset of plaintexts is key-dependent, i.e., the non-existence of integral distinguishers.
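As a toy illustration of the property at stake (our own example, not the paper's proof technique): an integral distinguisher exists only if the sum of ciphertexts over some plaintext subset is the same for every key, as it trivially is over the full codebook of the 4-bit cipher below; the paper's result rules this out for non-trivial subsets.

    # Toy 4-bit cipher: two rounds of key XOR followed by an S-box.
    S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    def encrypt(p, keys):
        x = p
        for k in keys:
            x = S[x ^ k]
        return x

    def codebook_sum(keys):
        # XOR-sum of ciphertexts over the full 4-bit codebook.
        acc = 0
        for p in range(16):
            acc ^= encrypt(p, keys)
        return acc

    # The full codebook is mapped to a permutation for every key, so the sum
    # is always 0: a (trivial) key-independent integral property.
    print({codebook_sum((k0, k1)) for k0 in range(16) for k1 in range(16)})  # {0}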
Last updated:  2021-11-15
Relations between Privacy, Verifiability, Accountability and Coercion-Resistance in Voting Protocols
Alisa Pankova, Jan Willemson
This paper studies quantitative relationships between privacy, verifiability, accountability, and coercion-resistance of voting protocols. We adapt existing definitions to make them better comparable with each other and determine which bounds a certain requirement on one property poses on some other property. It turns out that, in terms of the proposed definitions, verifiability and accountability do not necessarily put constraints on privacy and coercion-resistance. However, the relations between these notions become more interesting in the context of particular attacks. Depending on the assumptions and the attacker's goal, voter coercion may benefit from verifiability that is too weak as well as from verifiability that is too strong.
Last updated:  2022-02-03
Succinct Erasure Coding Proof Systems
Nicolas Alhaddad, Sisi Duan, Mayank Varia, Haibin Zhang
Erasure coding is a key tool to reduce the space and communication overhead in fault-tolerant distributed computing. State-of-the-art distributed primitives, such as asynchronous verifiable information dispersal (AVID), reliable broadcast (RBC), multi-valued Byzantine agreement (MVBA), and atomic broadcast, all use erasure coding. This paper introduces an erasure coding proof (ECP) system, which allows the encoder to prove succinctly and non-interactively that an erasure-coded fragment is consistent with a constant-sized commitment to the original data block. Each fragment can be verified independently of the other fragments. Our proof system is based on polynomial commitments, with new batching techniques that may be of independent interest. To illustrate the benefits of our ECP system, we show how to build the first AVID protocol with optimal message complexity, word complexity, and communication complexity.
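For context, a minimal sketch of the polynomial view of erasure coding that an ECP commitment attests to: the block defines a degree-$<k$ polynomial over a prime field, fragments are its evaluations, and any $k$ fragments recover the block. The field and parameters here are illustrative, and the commitment/proof layer is omitted.

    P = 2**61 - 1  # illustrative prime field modulus

    def lagrange_at(points, x):
        """Evaluate the unique degree-<len(points) polynomial through `points` at x."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def encode(block, n):
        """Systematic Reed-Solomon: block[t] = f(t); fragment i is (i, f(i))."""
        pts = list(enumerate(block))
        return [(i, lagrange_at(pts, i)) for i in range(n)]

    block = [7, 13, 42]                         # k = 3 field elements
    frags = encode(block, 6)                    # n = 6 fragments
    survivors = [frags[1], frags[4], frags[5]]  # any 3 suffice
    assert [lagrange_at(survivors, t) for t in range(3)] == block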
Last updated:  2022-04-06
Improved Lattice-Based Mix-Nets for Electronic Voting
Valeh Farzaliyev, Jan Willemson, Jaan Kristjan Kaasik
Mix-networks were first proposed by Chaum in the late 1970s -- early 1980s as a general tool for building anonymous communication systems. Classical mix-net implementations rely on standard public key primitives (e.g. ElGamal encryption) that will become vulnerable once a sufficiently powerful quantum computer is built. Thus, there is a need to develop quantum-resistant mix-nets. This paper focuses on the application case of electronic voting, where the number of votes to be mixed may reach hundreds of thousands or even millions. We propose an improved architecture for lattice-based post-quantum mix-nets featuring more efficient zero-knowledge proofs while maintaining established security assumptions. Our current implementation scales up to 100,000 votes, still leaving a lot of room for future optimisation.
Last updated:  2021-11-15
Rectangular, Range, and Restricted AONTs: Three Generalizations of All-or-Nothing Transforms
Navid Nasr Esfahani, Douglas Stinson
All-or-nothing transforms (AONTs) were originally defined by Rivest as bijections from $s$ input blocks to $s$ output blocks such that no information can be obtained about any input block in the absence of any output block. Numerous generalizations and extensions of all-or-nothing transforms have been discussed in recent years, many of which are motivated by diverse applications in cryptography, information security, secure distributed storage, etc. In particular, $t$-AONTs, in which no information can be obtained about any $t$ input blocks in the absence of any $t$ output blocks, have received considerable study. In this paper, we study three generalizations of AONTs that are motivated by applications due to Pham et al. and Oliveira et al. We term these generalizations rectangular, range, and restricted AONTs. Briefly, in a rectangular AONT, the number of outputs is greater than the number of inputs. A range AONT satisfies the $t$-AONT property for a range of consecutive values of $t$. Finally, in a restricted AONT, the unknown outputs are assumed to occur within a specified set of "secure" output blocks. We study existence and non-existence and provide examples and constructions for these generalizations. We also demonstrate interesting connections with combinatorial structures such as orthogonal arrays, split orthogonal arrays, MDS codes and difference matrices.
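For readers new to the area, a hedged sketch of Rivest's original package transform, the kind of $s$-block bijection these generalizations extend; SHA-256 stands in for the block cipher purely for illustration.

    import hashlib, os

    def prf(key_bytes, i):
        d = hashlib.sha256(key_bytes + i.to_bytes(4, "big")).digest()
        return int.from_bytes(d[:8], "big")

    def package(blocks):
        k = os.urandom(8)                                  # fresh package key
        out = [m ^ prf(k, i) for i, m in enumerate(blocks)]
        mask = 0
        for i, c in enumerate(out):
            mask ^= prf(c.to_bytes(8, "big"), 1_000_000 + i)
        # The key is only recoverable once *all* output blocks are known.
        return out + [int.from_bytes(k, "big") ^ mask]

    def unpackage(out):
        *body, last = out
        mask = 0
        for i, c in enumerate(body):
            mask ^= prf(c.to_bytes(8, "big"), 1_000_000 + i)
        k = (last ^ mask).to_bytes(8, "big")
        return [c ^ prf(k, i) for i, c in enumerate(body)]

    msg = [0x1111, 0x2222, 0x3333]
    assert unpackage(package(msg)) == msg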
Last updated:  2021-11-15
GMMT: A Revocable Group Merkle Multi-Tree Signature Scheme
Mahmoud Yehia, Riham AlTawy, T. Aaron Gulliver
G-Merkle (GM) (PQCrypto 2018) is the first hash-based group signature scheme; it was stated that multi-tree approaches are not applicable to it, thus limiting the maximum number of supported signatures to $2^{20}$. DGM (ESORICS 2019) is a dynamic and revocable GM-based group signature scheme that utilizes a computationally expensive puncturable encryption for revocation and requires interaction between verifiers and the group manager for signature verification. In this paper, we propose GMMT, a hash-based group signature scheme that provides solutions to the aforementioned challenges of the two schemes. GMMT builds on GM and adopts a multi-tree construction that constructs new GM trees for the assignment of new signing leaves while keeping the group public key unchanged. Compared to a single GM instance which enables $2^{20}$ signatures, GMMT allows growing the multi-tree structure adaptively to support $2^{64}$ signatures under the same public key. Moreover, GMMT has a revocation mechanism that attains linkable anonymity of revoked signatures and has a logarithmic verification computational complexity compared to the linear complexity of DGM. The group manager in GMMT requires storage that is linear in the number of members, while the corresponding storage in DGM is linear in the number of signatures supported by the system. Concretely, for a system that supports $2^{64}$ signatures with $2^{15}$ members and provides 256-bit security, the required storage of the group manager is 1 MB (resp. $10^{8.7}$ TB) in GMMT (resp. DGM).
Last updated:  2021-11-15
Security Analysis Of DGM and GM Group Signature Schemes Instantiated With XMSS-T
Mahmoud Yehia, Riham AlTawy, T. Aaron Gulliver
Group Merkle (GM) (PQCrypto 2018) and Dynamic Group Merkle (DGM) (ESORICS 2019) are recent proposals for post-quantum hash-based group signature schemes. They are designed as generic constructions that employ any stateful Merkle hash-based signature scheme. XMSS-T (PKC 2016, RFC8391) is the latest stateful Merkle hash-based signature scheme, for which (almost) optimal parameters are provided. In this paper, we show that the setup phase of both GM and DGM does not enable drop-in instantiation with XMSS-T, which limits both designs to earlier XMSS versions with sub-optimal parameters and negatively affects the performance of both schemes. Thus, we provide a tweak to the setup phase of GM and DGM to overcome this limitation and enable the adoption of XMSS-T. Moreover, we analyze the bit security of DGM when instantiated with XMSS-T and show that it is susceptible to multi-target attacks because of the parallel Signing Merkle Trees (SMT) approach. More precisely, when DGM is used to sign $2^{64}$ messages, its bit security is 44 bits less than that of XMSS-T. Finally, we provide a DGM variant that mitigates multi-target attacks and show that it attains the same bit security as XMSS-T.
Last updated:  2021-11-15
Verifiable Obtained Random Subsets for Improving SPHINCS+
Mahmoud Yehia, Riham AlTawy, T. Aaron Gulliver
SPHINCS+ is a stateless hash-based digital signature scheme and an alternate candidate in round 3 of the NIST Post-Quantum Cryptography standardization competition. Although not considered as a finalist because of its performance, SPHINCS+ may be considered for standardization by NIST after another round of evaluations. In this paper, we propose a Verifiable Obtained Random Subsets (v-ORS) generation mechanism which, with one extra hash computation, binds the message with the signing FORS instance (the underlying few-time signature algorithm). This enables SPHINCS+ to offer more security against generic attacks because the proposed modification restricts the ORS generation to use a hash key from the utilized signing FORS instance. Consequently, such a modification enables the exploration of different parameter sets for FORS to achieve better performance at the same security level. For instance, when using v-ORS, one parameter set for SPHINCS+-256s provides an 82.9% reduction in the computation cost of FORS, which leads to around a 27% reduction in the number of hash calls of the signing procedure. Given that NIST has identified the performance of SPHINCS+ as its main drawback, these results are a step forward on the path to standardization.
Last updated:  2021-11-15
On the efficiency of a general attack against the MOBS cryptosystem
Christopher Battarbee, Delaram Kahrobaei, Dylan Tailor, Siamak F. Shahandashti
All instances of the semidirect key exchange protocol, a generalisation of the famous Diffie-Hellman key exchange protocol, satisfy the so-called ``telescoping equality''; in some cases, this equality has been used to construct an attack. In this report we present computational evidence suggesting that an instance of the scheme called `MOBS (Matrices Over Bitstrings)' is an example of a scheme where the telescoping equality has too many solutions to be a practically viable means to conduct an attack.
Last updated:  2021-11-20
VASA: Vector AES Instructions for Security Applications
Jean-Pierre Münch, Thomas Schneider, Hossein Yalame
Due to standardization, AES is today's most widely used block cipher. Its security is well-studied and hardware acceleration is available on a variety of platforms. Following the success of the Intel AES New Instructions (AES-NI), support for Vectorized AES (VAES) was added in 2018 and has already been shown to be useful for accelerating many implementations of AES-based algorithms where the order of AES evaluations is fixed a priori. In our work, we focus on using VAES to accelerate the computation in secure multi-party computation protocols and applications. For some MPC building blocks, such as OT extension, the AES operations are independent and known a priori and hence can be easily parallelized, similar to the original paper on VAES by Drucker et al. (ITNG'19). We evaluate the performance impact of using VAES in the AES-CTR implementations used in Microsoft CrypTFlow2 and the EMP-OT library, which we accelerate by up to 24%. The more complex case that we study for the first time in our paper is dependent AES calls that are not yet fixed in advance and hence cannot be parallelized manually. This is the case for garbling schemes. To get optimal efficiency from the hardware, enough independent calls need to be combined for each batch of AES executions. We identify such batches using a deferred execution technique paired with early execution to reduce non-locality issues, and more static techniques using circuit depth and explicit gate independence. We present a performance-focused and a modularity-focused technique to compute the AES operations efficiently while also immediately using the results and preparing the inputs. Using these techniques, we achieve a performance improvement via VAES of up to 244% for the ABY framework and of up to 28% for the EMP-AGMPC framework. By implementing several garbling schemes from the literature using VAES acceleration, we obtain a 171% better performance for ABY.
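A schematic of the deferred-execution idea in Python (not the authors' C++ implementation): gate evaluations enqueue AES inputs together with continuations, and the queue is flushed through a wide batch primitive once enough independent calls have accumulated.

    BATCH = 4  # e.g., four 128-bit blocks per wide vectorized AES call

    class DeferredAES:
        def __init__(self, aes_batch):
            self.aes_batch = aes_batch   # callable processing a list of blocks at once
            self.pending = []            # (block, continuation) pairs

        def enqueue(self, block, cont):
            self.pending.append((block, cont))
            if len(self.pending) >= BATCH:
                self.flush()

        def flush(self):
            batch, self.pending = self.pending, []
            outs = self.aes_batch([b for b, _ in batch])
            for out, (_, cont) in zip(outs, batch):
                cont(out)                # may enqueue follow-up, dependent calls

    # Toy usage: a stand-in "cipher"; results arrive through continuations.
    engine = DeferredAES(lambda blocks: [b ^ 0xDEADBEEF for b in blocks])
    results = []
    for i in range(8):
        engine.enqueue(i, results.append)
    engine.flush()  # drain any partial batch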
Last updated:  2022-03-12
SoK: Password-Authenticated Key Exchange -- Theory, Practice, Standardization and Real-World Lessons
Feng Hao, Paul C. van Oorschot
Password-authenticated key exchange (PAKE) is a major area of cryptographic protocol research and practice. Many PAKE proposals have emerged in the 30 years following the original 1992 Encrypted Key Exchange (EKE), some accompanied by new theoretical models to support rigorous analysis. To reduce confusion and encourage practical development, major standards bodies including IEEE, ISO/IEC and the IETF have worked towards standardizing PAKE schemes, with mixed results. Challenges have included contrasts between heuristic protocols and schemes with security proofs, and subtleties in the assumptions of such proofs rendering some schemes unsuitable for practice. Despite initial difficulty identifying suitable use cases, the past decade has seen PAKE adoption in numerous large-scale applications such as Wi-Fi, Apple's iCloud, browser synchronization, e-passports, and the Thread network protocol for Internet of Things devices. Given this backdrop, we consolidate three decades of knowledge on PAKE protocols, integrating theory, practice, standardization and real-world experience. We provide a thorough and systematic review of the field, a summary of the state-of-the-art, a taxonomy to categorize existing protocols, and a comparative analysis of protocol performance using representative schemes from each taxonomy category. We also review real-world applications, summarize lessons learned, and highlight open research problems related to PAKE protocols.
Last updated:  2021-11-15
The Hidden Lattice Problem
Luca Notarnicola, Gabor Wiese
We consider the problem of revealing a small hidden lattice from the knowledge of a low-rank sublattice modulo a given sufficiently large integer – the Hidden Lattice Problem. A central motivation of study for this problem is the Hidden Subset Sum Problem, whose hardness is essentially determined by that of the hidden lattice problem. We describe and compare two algorithms for the hidden lattice problem: we first adapt the algorithm by Nguyen and Stern for the hidden subset sum problem, based on orthogonal lattices, and propose a new variant, which we explain to be related by duality in lattice theory. Following heuristic, rigorous and practical analyses, we find that our new algorithm offers some advantages as well as a competitive alternative to existing algorithms for problems of cryptographic interest, such as Approximate Common Divisor Problems and the Hidden Subset Sum Problem. Finally, we study variations of the problem and highlight its relevance to cryptanalysis.
Last updated:  2023-08-15
Precio: Private Aggregate Measurement via Oblivious Shuffling
F. Betül Durak, Chenkai Weng, Erik Anderson, Kim Laine, Melissa Chase
We introduce Precio, a new secure aggregation method for computing layered histograms and sums over secret shared data in a client-server setting. Precio is motivated by ad conversion measurement scenarios, where online advertisers and ad networks want to measure the performance of ad campaigns without requiring privacy-invasive techniques, such as third-party cookies. Precio has linear (communication) complexity in the number of data points and guarantees differentially private outputs. We formally analyze its security and privacy and present a thorough performance evaluation. The protocol supports much larger domains than Prio. It supports much more flexible aggregates than the DPF-based solution and in some settings has up to four orders of magnitude better performance.
Last updated:  2021-11-15
Estimating the Effectiveness of Lattice Attacks
Kotaro Abe, Makoto Ikeda
Lattice attacks are a threat to (EC)DSA and have been used in cryptanalysis. In a lattice attack, a leak of a few nonce bits across multiple signatures is sufficient to recover the secret key. Currently, the BKZ algorithm is frequently used as the lattice reduction algorithm in lattice attacks, and there are many reports on the conditions for successful attacks. However, experimental attacks using the BKZ algorithm have only reported results for specific key lengths, and it is not clear how the conditions change as the key length changes. In this study, we conducted experiments to simulate lattice attacks on P256, P384, and P521 and confirmed that attacks on P256 with a 3-bit nonce leak, P384 with a 4-bit nonce leak, and P521 with a 5-bit nonce leak are feasible. The result for P521 is a new record. We also investigated in detail the reasons for the failure of the attacks and propose a model to estimate the feasibility of lattice attacks using the BKZ algorithm. We believe that this model can be used to estimate the effectiveness of lattice attacks when the key length is changed.
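For background, one common formulation of the underlying hidden number problem lattice (Boneh-Venkatesan style; stated for context, not as the paper's exact construction): from each signature $(r_i, s_i)$ on hash $h_i$ under secret key $x$, set $t_i = r_i s_i^{-1} \bmod q$ and $u_i = h_i s_i^{-1} \bmod q$, so the nonce satisfies $k_i \equiv t_i x + u_i \pmod{q}$. Knowing $\ell$ bits of each $k_i$ makes the residual error smaller than $q/2^{\ell}$, and BKZ is asked to find a short vector encoding $x$ in the lattice spanned by the rows of

    \begin{pmatrix}
      q      &        &        &            \\
             & \ddots &        &            \\
             &        & q      &            \\
      t_1    & \cdots & t_m    & 1/2^{\ell}
    \end{pmatrix}

The target vector built from the known nonce bits lies unusually close to this lattice, and lattice reduction recovers it when the bias is large enough.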
Last updated:  2022-06-21
Accelerating the Delfs-Galbraith algorithm with fast subfield root detection
Maria Corte-Real Santos, Craig Costello, Jia Shi
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve $E/\mathbb{F}_{p^2}$ to a subfield elliptic curve $E'/\mathbb{F}_p$, which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial $f \in L[X]$ has any roots in a subfield $K \subset L$, while crucially avoiding expensive root-finding algorithms. In the special case when $f=\Phi_{\ell,p}(X,j) \in \mathbb{F}_{p^2}[X]$, i.e. when $f$ is the $\ell$-th modular polynomial evaluated at a supersingular $j$-invariant, this provides a means of efficiently determining whether there is an $\ell$-isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many $\ell$-isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic $\tilde{O}(p^{1/2})$ complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem, the foundational problem underlying isogeny-based cryptography.
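For background, the classical criterion behind subfield root detection (stated here for context; the paper's contribution is detecting the condition far more cheaply than the naive computation): a polynomial $f \in \mathbb{F}_{p^2}[X]$ has a root in $\mathbb{F}_p$ if and only if

    \gcd\big(f(X),\, X^p - X\big) \neq 1,
    \quad\text{since}\quad X^p - X = \prod_{a \in \mathbb{F}_p} (X - a),

where $X^p \bmod f(X)$ is computed by square-and-multiply before taking the gcd.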
Last updated:  2021-11-15
A Cryptographic View of Deep-Attestation, or how to do Provably-Secure Layer-Linking
Ghada Arfaoui, Pierre-Alain Fouque, Thibaut Jacques, Pascal Lafourcade, Adina Nedelcu, Cristina Onete, Léo Robert
Deep attestation is a particular case of remote attestation, i.e., verifying the integrity of a platform with a remote verification server. We focus on the remote attestation of hypervisors and their hosted virtual machines (VMs), for which two solutions are currently supported by ETSI. The first is single-channel attestation, requiring for each VM an attestation of that VM and the underlying hypervisor through the physical TPM. The second, multi-channel attestation, allows VMs to be attested via virtual TPMs, separately from the hypervisor -- this is faster and requires fewer attestations overall, but the server cannot verify the link between VM and hypervisor attestations, which comes for free in single-channel attestation. We design a new approach to provide linked remote attestation which achieves the best of both worlds: we benefit from the efficiency of multi-channel attestation while simultaneously allowing attestations to be linked. Moreover, we formalize a security model for deep attestation and prove the security of our approach. Our contribution is agnostic of the precise underlying secure component (which could be instantiated as a TPM or something equivalent) and can be of independent interest. Finally, we implement our proposal using TPM 2.0 and vTPM (KVM/QEMU), and show that it is practical and efficient.
Last updated:  2022-05-28
Mitaka: a simpler, parallelizable, maskable variant of Falcon
Thomas Espitau, Pierre-Alain Fouque, François Gérard, Mélissa Rossi, Akira Takahashi, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection. We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the ``hybrid'' sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon's restriction to power-of-two cyclotomics. We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers. Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
Last updated:  2022-03-10
Don't Reject This: Key-Recovery Timing Attacks Due to Rejection-Sampling in HQC and BIKE
Qian Guo, Clemens Hlauschek, Thomas Johansson, Norman Lahr, Alexander Nilsson, Robin Leander Schröder
Well before large-scale quantum computers become available, traditional cryptosystems must be transitioned to post-quantum (PQ) secure schemes. The NIST PQC competition aims to standardize suitable cryptographic schemes. Candidates are evaluated not only on their formal security strengths, but are also judged based on the security with regard to resistance against side-channel attacks. Although round 3 candidates have already been intensively vetted with regard to such attacks, one important attack vector has hitherto been missed: PQ schemes often rely on rejection sampling techniques to obtain pseudorandomness from a specific distribution. In this paper, we reveal that rejection sampling routines that are seeded with secret-dependent information and leak timing information result in practical key recovery attacks in the code-based key encapsulation mechanisms HQC and BIKE. Both HQC and BIKE have been selected as alternate candidates in the third round of the NIST competition, which puts them on track for getting standardized separately from the finalists. They have already been specifically hardened with constant-time decoders to avoid side-channel attacks. However, in this paper, we show novel timing vulnerabilities in both schemes: (1) Our secret key recovery attack on HQC requires only approx. 866,000 idealized decapsulation timing oracle queries in the 128-bit security setting. It is structurally different from previously identified attacks on the scheme: previously, exploitable side-channel leakages have been identified in the BCH decoder of a previously submitted HQC version, in the ciphertext check, as well as in the pseudorandom function of the Fujisaki-Okamoto transformation. In contrast, our attack uses the fact that the rejection sampling routine invoked during the deterministic re-encryption of the decapsulation leaks secret-dependent timing information, which can be efficiently exploited to recover the secret key when HQC is instantiated with the (now constant-time) BCH decoder, as well as with the RMRS decoder of the current submission. (2) From the timing information of the constant weight word sampler in the BIKE decapsulation, we demonstrate how to distinguish whether the decoding step is successful or not, and how this distinguisher is then used in the framework of the GJS attack to derive the distance spectrum of the secret key, using $5.8 \times 10^7$ idealized timing oracle queries. We provide details and analyses of the fully implemented attacks, as well as a discussion on possible countermeasures and their limits.
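A toy model of the leak (schematic Python, not the HQC or BIKE code): a rejection sampler seeded with secret-dependent data makes a variable number of draws, and that count shows up as a timing difference observable by the attacker.

    import random

    def sample_fixed_weight(seed, n=100, w=10):
        """Sample w distinct positions out of n by rejection; count the draws."""
        rng = random.Random(seed)            # secret-dependent seed
        draws, positions = 0, set()
        while len(positions) < w:
            draws += 1
            positions.add(rng.randrange(n))  # a repeat is an implicit rejection
        return sorted(positions), draws

    # Different secret-dependent seeds -> different draw counts -> timing signal.
    for seed in (b"secret-0", b"secret-1"):
        _, draws = sample_fixed_weight(seed)
        print(seed, draws)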
Last updated:  2021-11-08
On Forging SPHINCS+-Haraka Signatures on a Fault-tolerant Quantum Computer
Robin M. Berger, Marcel Tiepelt
SPHINCS+ is a state-of-the-art hash-based signature scheme, the security of which is based on either SHA-256, SHAKE-256, or the Haraka hash function. In this work, we perform an in-depth analysis of how the hash functions are embedded into SPHINCS+ and how the quantum pre-image resistance impacts the security of the signature scheme. Subsequently, we evaluate the cost of implementing Grover's quantum search algorithm to find a pre-image that admits a universal forgery. In particular, we provide quantum implementations of the Haraka and SHAKE-256 hash functions in Q# and consider the efficiency of attacks in the context of fault-tolerant quantum computers. We restrict our findings to SPHINCS+-128 due to the limited security margin of Haraka. Nevertheless, we present an attack that performs better, to the best of our knowledge, than previously published attacks. We can forge a SPHINCS+-128-Haraka signature in about $1.5 \cdot 2^{90}$ surface code cycles and $2.03 \cdot 10^{6}$ physical qubits, translating to about $1.55 \cdot 2^{101}$ logical-qubit-cycles. For SHAKE-256, the same attack requires $8.65 \cdot 10^{6}$ qubits and $1.6 \cdot 2^{84}$ cycles, resulting in about $1.17 \cdot 2^{99}$ logical-qubit-cycles.
Last updated:  2021-11-08
A Practical Forward-Secure DualRing
Nan Li, Yingjiu Li, Atsuko Miyaji, Yangguang Tian, Tsz Hon Yuen
A ring signature allows a signer to generate a signature on behalf of a set of public keys, while a verifier can verify the signature without identifying who the actual signer is. In Crypto 2021, Yuen et al. proposed a new type of ring signature scheme called DualRing. However, it lacks forward security: the security of DualRing cannot be guaranteed if the signer's secret key is compromised. In this work, we introduce forward-secure DualRing. The signer can periodically update his secret key using our proposed ``split-and-combine'' method to mitigate the security risks caused by the leakage of secret keys. We present a practical scheme based on the discrete logarithm assumption. We show a detailed evaluation to validate its practicality.
Last updated:  2021-11-08
The Optimal Error Resilience of Interactive Communication Over Binary Channels
Meghal Gupta, Rachel Yun Zhang
In interactive coding, Alice and Bob wish to compute some function $f$ of their individual private inputs $x$ and $y$. They do this by engaging in a non-adaptive (fixed order, fixed length) interactive protocol to jointly compute $f(x,y)$. The goal is to do this in an error-resilient way, such that even given some fraction of adversarial corruptions to the protocol, both parties still learn $f(x,y)$. In this work, we study the optimal error resilience of such a protocol in the face of adversarial bit flip or erasures. While the optimal error resilience of such a protocol over a large alphabet is well understood, the situation over the binary alphabet has remained open. In this work, we resolve this problem of determining the optimal error resilience over binary channels. In particular, we construct protocols achieving $\frac16$ error resilience over the binary bit flip channel and $\frac12$ error resilience over the binary erasure channel, for both of which matching upper bounds are known. We remark that the communication complexity of our binary bit flip protocol is polynomial in the size of the inputs, and the communication complexity of our binary erasure protocol is linear in the size of the minimal noiseless protocol computing $f$.
Last updated:  2021-11-08
Interactive Error Correcting Codes Over Binary Erasure Channels Resilient to $>\frac12$ Adversarial Corruption
Meghal Gupta, Yael Tauman Kalai, Rachel Zhang
An error correcting code ($\mathsf{ECC}$) allows a sender to send a message to a receiver such that even if a constant fraction of the communicated bits are corrupted, the receiver can still learn the message correctly. Due to their importance and fundamental nature, $\mathsf{ECC}$s have been extensively studied, one of the main goals being to maximize the fraction of errors that the $\mathsf{ECC}$ is resilient to. For adversarial erasure errors (over a binary channel) the maximal error resilience of an $\mathsf{ECC}$ is $\frac12$ of the communicated bits. In this work, we break this $\frac12$ barrier by introducing the notion of an interactive error correcting code ($\mathsf{iECC}$) and constructing an $\mathsf{iECC}$ that is resilient to adversarial erasure of $\frac35$ of the total communicated bits. We emphasize that the adversary can corrupt both the sending party and the receiving party, and that both parties' rounds contribute to the adversary's budget. We also prove an impossibility (upper) bound of $\frac23$ on the maximal resilience of any binary $\mathsf{iECC}$ to adversarial erasures. In the bit flip setting, we prove an impossibility bound of $\frac27$.
Last updated:  2023-08-16
Extractors: Low Entropy Requirements Colliding With Non-Malleability
Divesh Aggarwal, Eldon Chung, Maciej Obremski
Two-source extractors are deterministic functions that, given two independent weak sources of randomness, output a (close to) uniformly random string of bits. Cheraghchi and Guruswami (TCC 2015) introduced two-source non-malleable extractors that combine the properties of randomness extraction with tamper resilience. Two-source non-malleable extractors have since then attracted a lot of attention, and have very quickly become fundamental objects in cryptosystems involving communication channels that cannot be fully trusted. Various applications of two-source non-malleable extractors include in particular non-malleable codes, non-malleable commitments, non-malleable secret sharing, network extraction, and privacy amplification with tamperable memory. The best known constructions of two-source non-malleable extractors are due to Chattopadhyay, Goyal, and Li (STOC 2016), Li (STOC 2017), and Li (CCC 2019). All of these constructions require both sources to have min-entropy at least $0.99 n$, where $n$ is the bit-length of each source. In this work, we introduce collision-resistant randomness extractors. This allows us to design a compiler that, given a two-source non-malleable extractor, and a collision-resistant extractor, outputs a two-source non-malleable extractor that inherits the non-malleability property from the non-malleable extractor, and the entropy requirement from the collision-resistant extractor. Nested application of this compiler leads to a dramatic improvement of the state-of-the-art mentioned above. We obtain a construction of a two-source non-malleable extractor where one source is required to have min-entropy greater than $0.8n$, and the other source is required to have only $\text{polylog} (n)$ min-entropy. Moreover, the other parameters of our construction, i.e., the output length, and the error remain comparable to prior constructions.
Last updated:  2021-11-08
Reducing the Cost of Machine Learning Differential Attacks Using Bit Selection and a Partial ML-Distinguisher
Amirhossein Ebrahimi, Francesco Regazzoni, Paolo Palmieri
In a differential cryptanalysis attack, the attacker tries to observe a block cipher's behavior under an input difference: if the system's resulting output differences show any non-random behavior, a differential distinguisher is obtained. While differential cryptanalysis has been known for several decades, Gohr was the first to propose, in 2019, the use of machine learning (ML) to build a distinguisher. In this paper, we present the first Partial Differential (PD) ML-distinguisher, and demonstrate its effectiveness on the lightweight cipher SPECK32/64. As a PD-ML-distinguisher is based on a selection of bits rather than all bits in a block, we also study whether different selections of bits have a different impact on the accuracy of the distinguisher, and we find that to be the case. More importantly, we also establish that certain bits have reliably higher effectiveness than others, through a series of independent experiments on different datasets, and we propose an algorithm for assigning an effectiveness score to each bit in the block. By selecting the highest-scoring bits, we are able to train a partial ML-distinguisher over 8 bits that is almost as accurate as an equivalent ML-distinguisher over the entire 32 bits (68.8% against 72%), for six rounds of SPECK32/64. The reduced input size implies a significant reduction in the complexity of achieving a distinguisher, and also leads to a reduction in the number of bits of possible subkeys to be guessed in a potential subsequent key recovery attack. These results may therefore open the way to the application of (partial) ML-based distinguishers to ciphers whose block size has so far been considered too large.
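A hedged sketch of the bit-scoring idea (the paper's exact scoring algorithm may differ): rank each output-difference bit by how well it alone separates real difference pairs from random ones, then keep the top-$k$ bits as the partial distinguisher's input.

    import numpy as np

    def bit_scores(X, y):
        """X: (samples, 32) 0/1 array of output-difference bits; y: 0/1 labels."""
        scores = []
        for b in range(X.shape[1]):
            p_real = X[y == 1, b].mean()         # bit frequency on real pairs
            p_rand = X[y == 0, b].mean()         # bit frequency on random pairs
            scores.append(abs(p_real - p_rand))  # larger gap = more distinguishing power
        return np.array(scores)

    def select_bits(X, y, k=8):
        """Indices of the k highest-scoring bits, e.g. for an 8-bit PD-ML-distinguisher."""
        return np.argsort(bit_scores(X, y))[::-1][:k]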
Last updated:  2022-03-15
Zarcanum: A Proof-of-Stake Scheme for Confidential Transactions with Hidden Amounts
sowle, koe
This article explores a Proof-of-Stake mining algorithm in an environment where amounts are hidden with homomorphic commitments, in particular, using confidential transactions. Our goal was to avoid revealing amounts and other sensitive information (like which output was used to stake a given block) to blockchain observers when doing staking. Our contribution is a Proof-of-Stake mining scheme that does not reveal amounts and is compatible with ring confidential transactions. We also present an extension to the Bulletproofs+ protocol that allows range proofs on double-blinded commitments, with corresponding security statements.
Last updated:  2021-11-06
Multisignature with double threshold condition in the blockchain and its application to strong key generation
Ruslan Skuratovskii, Alexandr Kalenyk
Improving the reliability of account protection in the blockchain is one of the most important goals of the entire cryptographic arsenal used in the blockchain and cryptocurrency exchange. We propose a new threshold multisignature scheme with a double boundary condition. Access to funds stored on a multisig wallet is possible only when two or more signatures are provided at the same time.
Last updated:  2022-06-09
Multivariate public key cryptography with polynomial composition
Emile Hautefeuille
This paper presents a new public key cryptography scheme using multivariate polynomials over a finite field. Each multivariate polynomial from the public key is obtained by secretly and repeatedly composing affine transformations with series of quadratic polynomials (in a single variable). The main drawback of this scheme is the length of the public key.
Last updated:  2021-11-06
Circuit-based PSI for Covid-19 Risk Scoring
Leonie Reichert, Marcel Pazelt, Björn Scheuermann
Many solutions have been proposed to improve manual contact tracing for infectious diseases through automation. Privacy is crucial for the deployment of such a system, as it greatly influences adoption. Approaches for digital contact tracing like Google Apple Exposure Notification (GAEN) protect the privacy of users by decentralizing risk scoring. But GAEN leaks information about diagnosed users, as ephemeral pseudonyms are broadcast to everyone. To combat deanonymisation based on the time of encounter while providing extensive risk scoring functionality, we propose to use a private set intersection (PSI) protocol based on garbled circuits. Using oblivious programmable pseudorandom function PSI (OPPRF-PSI), we implement our solution CERTAIN, which leaks no information to querying users other than one risk score for each of the last 14 days, representing their risk of infection. We implement payload inclusion for OPPRF-PSI and evaluate the efficiency and performance of different risk scoring mechanisms on an Android device.
Last updated:  2022-11-04
Foundations of Transaction Fee Mechanism Design
Hao Chung, Elaine Shi
In blockchains such as Bitcoin and Ethereum, users compete in a transaction fee auction to get their transactions confirmed in the next block. A line of recent works set forth the desiderata for a “dream” transaction fee mechanism (TFM), and explored whether such a mechanism existed. A dream TFM should satisfy 1) user incentive compatibility (UIC), i.e., truthful bidding should be a user’s dominant strategy; 2) miner incentive compatibility (MIC), i.e., the miner’s dominant strategy is to faithfully implement the prescribed mechanism; and 3) miner-user side contract proofness (SCP), i.e., no coalition of the miner and one or more user(s) can increase their joint utility by deviating from the honest behavior. The weakest form of SCP is called 1-SCP, where we only aim to provide resilience against the collusion of the miner and a single user. Sadly, despite the various attempts, to the best of our knowledge, no existing mechanism can satisfy all three properties in all situations. Since the TFM departs from classical mechanism design in modeling and assumptions, to date, our understanding of the design space is relatively limited. In this paper, we further unravel the mathematical structure of transaction fee mechanism design by proving the following results: - Can we have a dream TFM? We prove a new impossibility result: assuming finite block size, no single-parameter, non-trivial, possibly randomized TFM can simultaneously satisfy UIC and 1-SCP. Consequently, no non-trivial TFM can satisfy all three desired properties simultaneously. This answers an important open question raised by Roughgarden in his recent work. - Rethinking the incentive compatibility notions. We observe that the prevalently adopted incentive compatibility notions may be too draconian and somewhat flawed. We rectify the existing modeling techniques, and suggest a relaxed incentive compatibility notion that captures additional hidden costs of strategic deviation. We construct a new mechanism called the “burning second-price auction”, and show that it indeed satisfies the new incentive compatibility notions. We additionally prove that the use of randomness is necessary under the new incentive compatibility notions for “useful” mechanisms that resist the coalitions of the miner and at least 2 users. - Do the new design elements make a difference? Unlike classical mechanisms, TFMs may employ a couple of new design elements that are idiosyncratic to blockchains. For example, a burn rule (employed by Ethereum’s EIP-1559) allows part or all of the payment from the users to be burnt rather than paid to the miner. Some mechanisms also allow unconfirmed transactions to be included in the block, to set the price for others. Our work unveils how these new design elements actually make a difference in TFM design, allowing us to achieve incentive compatible properties that would otherwise be impossible.
Last updated:  2021-11-06
Computational self-testing for entangled magic states
Akihiro Mizutani, Yuki Takeuchi, Ryo Hiromasa, Yusuke Aikawa, Seiichiro Tani
In a seminal paper [Quantum ’21], Metger and Vidick proposed a computational self-testing protocol for Bell states in a single quantum device. Their protocol relies on the fact that the target states are stabilizer states, and hence it is highly non-trivial to reveal whether the other class of quantum states, non-stabilizer states, can be self-tested within their framework. Among non-stabilizer states, magic states are indispensable resources for universal quantum computation. In this letter, we show that a magic state for the CCZ gate can be self-tested while that for the T gate cannot. Our result is applicable to a proof of quantumness, where we can classically verify whether a quantum device generates a quantum state having nonzero magic.
Last updated:  2021-11-11
Improving Cryptography Based On Entropoids
Anisha Mukherjee, Saibal K. Pal
Entropic quasigroups or entropoids provide an attractive option for development of post-quantum cryptographic schemes. We elaborate on the mathematical properties of entropoids with modifications in the initial operation. The starting entropic quasigroups obtained by this process can be applied to generate higher-order structures suitable for cryptography. We also propose an encryption/decryption scheme analogous to the ElGamal scheme with quasigroup string transformations in the entropoid setting. We then move on to enumerate important properties that are beneficial in cryptographic use together with algorithms for their verification.
Last updated:  2022-08-30
Efficient Searchable Symmetric Encryption for Join Queries
Charanjit Jutla, Sikhar Patranabis
The Oblivious Cross-Tags (OXT) protocol due to Cash et al. (CRYPTO'13) is a highly scalable searchable symmetric encryption (SSE) scheme that allows fast processing of conjunctive and more general Boolean queries over encrypted relational databases. A longstanding open question has been to extend OXT to also support queries over joins of tables without pre-computing the joins. In this paper, we solve this open question without compromising on the nice properties of OXT with respect to both security and efficiency. We propose Join Cross-Tags (JXT) - a purely symmetric-key solution that supports efficient conjunctive queries over (equi) joins of encrypted tables without any pre-computation at setup. JXT is fully compatible with OXT, and can be used in conjunction with OXT to support a wide class of SQL queries directly over encrypted relational databases. JXT incurs a storage cost (over OXT) of a factor equal to the number of potential join-attributes in a table, which is usually compensated by the fact that JXT is a fully symmetric-key solution (as opposed to OXT which relies on discrete-log hard groups). We prove the (adaptive) simulation-based security of JXT with respect to a rigorously defined leakage profile.
Last updated:  2021-11-06
Concurrent-Secure Two-Party Computation in Two Rounds from Subexponential LWE
Saikrishna Badrinarayanan, Rex Fernando, Amit Sahai
Very recently, two works were able to construct two-round secure multi-party computation (MPC) protocols in the plain model, without setup, relying on the superpolynomial simulation framework of Pass [Pas03]. The first work [ABG+21] achieves this relying on subexponential non-interactive witness indistinguishable arguments, the subexponential SXDH assumption, and the existence of a special type of non-interactive non-malleable commitment. The second work [FJK21] additionally achieves concurrent security, and relies on subexponential quantum hardness of the learning-with-errors (LWE) problem, subexponential classical hardness of SXDH, the existence of a subexponentially-secure (classically-hard) indistinguishability obfuscation (iO) scheme, and time-lock puzzles. This paper focuses on the assumptions necessary to construct secure computation protocols in two rounds without setup, focusing on the subcase of two-party functionalities. In this particular case, we show how to build a two-round, concurrent-secure, two-party computation (2PC) protocol based on a single, standard, post-quantum assumption, namely subexponential hardness of the learning-with-errors (LWE) problem. We note that our protocol is the first two-round concurrent-secure 2PC protocol that does not require the existence of a one-round non-malleable commitment (NMC). Instead, we are able to use the two-round NMCs of [KS17a], which are instantiable from subexponential LWE.
Last updated:  2021-11-06
New Indifferentiability Security Proof of MDPH Hash Function
Chun Guo, Tetsu Iwata, Kazuhiko Minematsu
MDPH is a double-block-length hash function proposed by Naito at Latincrypt 2019. This is a combination of Hirose's compression function and the domain extender called Merkle-Damgård with permutation (MDP). When instantiated with an $n$-bit block cipher, Naito proved that this achieves the (nearly) optimal indifferentiable security bound of $O(n-\log n)$-bit security. In this paper, we first point out that the proof of the claim contains a gap, which is related to the definition of the simulator in simulating the decryption of the block cipher. We then show that the proof can be fixed. We introduce a new simulator that addresses the issue, showing that MDPH retains its (nearly) optimal indifferentiable security bound of $O(n-\log n)$-bit security.
Last updated:  2022-07-15
LeakageVerif: Scalable and Efficient Leakage Verification in Symbolic Expressions
Quentin L. Meunier, Etienne Pons, Karine Heydemann
Side-channel attacks are a powerful class of attacks targeting cryptographic devices. Masking is a popular protection technique to thwart such attacks as it can be theoretically proven secure. However, correctly implementing masking schemes is a non-trivial task and error-prone. While several techniques have been proposed to formally verify masked implementations, they all come with limitations regarding expressiveness, scalability, or accuracy. In this work, we propose a symbolic approach, based on a variant of the classical substitution method, for formally verifying arithmetic and boolean masked programs. This approach is more accurate and scalable than existing approaches thanks to a careful design and implementation of the key heuristics, algorithms, and data structures involved in the verification process. We present all the details of this approach and the open-source tool called LeakageVerif which implements it as a Python library, and which offers constructions for symbolic expressions and functions for their verification. We compare LeakageVerif to three existing state-of-the-art tools on a set of 46 masked programs, and we show that it has very good scalability and accuracy results while providing all the necessary constructs for describing algorithmic to assembly-level masking schemes. Finally, we also provide the set of 46 benchmarks, named MaskedVerifBenchs and written to compare the different verification tools, in the hope that they will be useful to the community for future comparisons.
Last updated:  2021-11-06
On the Round Complexity of Black-box Secure MPC
Yuval Ishai, Dakshita Khurana, Amit Sahai, Akshayaram Srinivasan
We consider the question of minimizing the round complexity of secure multiparty computation (MPC) protocols that make a black-box use of simple cryptographic primitives in the setting of security against any number of malicious parties. In the plain model, previous black-box protocols required a high constant number of rounds (>15). This is far from the known lower bound of 4 rounds for protocols with black-box simulators. When allowing a random oblivious transfer (OT) correlation setup, 2-round protocols making a black-box use of a pseudorandom generator were previously known. However, such protocols were obtained via a round-collapsing ``protocol garbling'' technique that has poor concrete efficiency and makes a non-black-box use of an underlying malicious-secure protocol. We improve this state of affairs by presenting the following types of black-box protocols. - 4-round ``pairwise MPC'' in the plain model. This round-optimal protocol enables each ordered pair of parties to compute a function of both inputs whose output is delivered to the second party. The protocol makes black-box use of any public-key encryption (PKE) with pseudorandom public keys. As a special case, we get a black-box round-optimal realization of secure (copies of) OT between every ordered pair of parties. - 2-round MPC from OT correlations. This round-optimal protocol makes a black-box use of any general 2-round MPC protocol satisfying an augmented notion of {\em semi-honest} security. In the two-party case, this yields new kinds of 2-round black-box protocols. - 5-round MPC in the plain model. This protocol makes a black-box use of PKE with pseudorandom public keys, and 2-round oblivious transfer with ``semi-malicious'' security. A key technical tool for the first result is a novel combination of split-state non-malleable codes (Dziembowski, Pietrzak and Wichs, JACM '18) with standalone secure two-party protocols. The second result is based on a new round-optimized variant of the ``IPS compiler'' (Ishai, Prabhakaran and Sahai, Crypto '08). The third result is obtained via a specialized combination of these two techniques.
Last updated:  2021-11-06
On semigroups of multivariate transformations constructed in terms of time dependent linguistic graphs and solutions of Post Quantum Multivariate Cryptography.
V. Ustimenko
Time dependent linguistic graphs over an abelian group H are introduced. In the case $H = K^*$, such a bipartite graph with point set $P = H^n$ can be used to generate an Eulerian transformation of $(K^*)^n$, i.e., an endomorphism of $K[x_1, x_2, \ldots, x_n]$ sending each variable to a monomial term. Subsemigroups of such endomorphisms, together with their special homomorphic images, are used as platforms for cryptographic protocols of noncommutative cryptography. The security of these protocols is evaluated via the complexity of the hard problem of decomposing an Eulerian transformation into a product of known generators of the semigroup. This problem is currently intractable in the post-quantum setting. A symbiotic combination of such protocols with special graph-based stream ciphers working with a plaintext space of the form $K^m$, where $m = n^t$ for an arbitrarily chosen parameter $t$, is proposed. In this way we obtain a cryptosystem with an encryption/decryption procedure of complexity $O(m^{1+2/t})$.
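One concrete way to read ``Eulerian transformation'' (our own small illustration, not the paper's notation): an endomorphism sending each variable to a monomial is a coefficient vector plus an exponent matrix, and composing two such maps multiplies their exponent matrices.

    import numpy as np

    MOD = 11  # work in the multiplicative group (Z/11)^* for illustration

    def apply(coeffs, A, x):
        """y_i = coeffs[i] * prod_j x_j^(A[i][j]) over (Z/11)^*."""
        y = []
        for i in range(len(x)):
            v = coeffs[i]
            for j, xj in enumerate(x):
                v = v * pow(xj, int(A[i][j]), MOD) % MOD
            y.append(v)
        return y

    A = np.array([[1, 2], [0, 3]])  # x1 -> x1 * x2^2,  x2 -> x2^3
    B = np.array([[2, 1], [1, 1]])
    x = [3, 5]
    # Composing two monomial maps corresponds to the exponent-matrix product B @ A.
    assert apply([1, 1], B, apply([1, 1], A, x)) == apply([1, 1], B @ A, x)

In this simplified picture, decomposing a composite map into known generators amounts to factoring its exponent matrix over the generating set, which mirrors the hard problem the protocols rely on.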
Last updated:  2022-11-29
Themis: Fast, Strong Order-Fairness in Byzantine Consensus
Mahimna Kelkar, Soubhik Deb, Sishan Long, Ari Juels, Sreeram Kannan
We introduce Themis, a scheme for introducing fair ordering of transactions into (permissioned) Byzantine consensus protocols with at most $f$ faulty nodes among $n \geq 4f +1$. Themis enforces the strongest notion of fair ordering proposed to date. It also achieves standard liveness, rather than the weaker notion of previous work with the same fair ordering property. We show experimentally that Themis can be integrated into state-of-the-art consensus protocols with minimal modification or performance overhead. Additionally, we introduce a suite of experiments of general interest for evaluating the practical strength of various notions of fair ordering and the resilience of fair-ordering protocols to adversarial manipulation. We use this suite of experiments to show that the notion of fair ordering enforced by Themis is significantly stronger in practical settings than those of competing systems. We believe Themis offers strong practical protection against many types of transaction-ordering attacks---such as front-running and back-running---that are currently impacting commonly used smart contract systems.
Last updated:  2021-11-06
Polynomial-time targeted attacks on coin tossing for any number of corruptions
Omid Etesami, Ji Gao, Saeed Mahloujifar, Mohammad Mahmoody
Consider an $n$-message coin-tossing protocol between $n$ parties $P_1,\dots,P_n$, in which $P_i$ broadcasts a single message $w_i$ in round $i$ (possibly based on the previously shared messages) and at the end they agree on a bit $b$. A $k$-replacing adversary $A_k$ can change up to $k$ of the messages as follows: in every round $i$, the adversary, who knows all the messages broadcast so far as well as the message $w_i$ that $P_i$ has prepared to send, can replace the prepared message $w_i$ with its own choice. A targeted adversary prefers the outcome $b'=1$, and its bias is defined as $\mu'-\mu$, where $\mu'=\Pr[b'=1]$ (resp. $\mu = \Pr[b=1]$) refers to the probability of outputting $1$ when the attack happens (resp. does not happen). In this work, we study $k$-replacing targeted attacks, their computational efficiency, and optimality, for all $k \in [n]$. Large messages: when the messages are allowed to be arbitrarily long, we show that polynomial-time $k$-replacing targeted attacks can achieve bias $\Omega(\mu k/\sqrt n)$ for any $k$ (and any protocol), which is optimal up to a constant factor for any $\mu = \Theta(1)$. Previously, it was known how to achieve such bias only for $k = \Omega(\sqrt n)$ (Komargodski-Raz [DISC'18], Mahloujifar-Mahmoody [ALT'19], and Etesami-Mahloujifar-Mahmoody [SODA'20]). This proves a computational variant of the isoperimetric inequality for product spaces under $k=o(\sqrt n)$ Hamming distance. As a corollary, we also obtain improved $poly(n)$-time targeted poisoning attacks on deterministic learners, in which the adversary can increase the probability of any efficiently testable bad event over the produced model from $\mu=1/poly(n)$ to $\mu + \Omega(\mu k /\sqrt n)$ by changing $k$ out of $n$ training examples. Binary messages: when the messages $w_1,\dots,w_n$ are uniformly random bits, we show that if $\mu=\Pr[b=1]= \Pr[\sum w_i \geq t] = \beta^{(t)}_n$ for $t \in [n]$ is the probability of falling into a Hamming ball, then polynomial-time $k$-replacing targeted attacks can achieve $\mu'=\Pr[b'=1]=\beta^{(t-k)}_n$, which is optimal due to the simple majority protocol. Thus, as a corollary, we obtain an alternative proof of Harper's celebrated vertex isoperimetric inequality in which the optimal adversary (that maps random points to a set of measure $\mu$ by changing at most $k$ bits) is limited to be online and run in polynomial time. Previously, Lichtenstein, Linial, and Saks [Combinatorica'89] showed how to achieve $\mu'=\Pr[b'=1] = \beta^{(t-k)}_{n-k}$ (using computationally unbounded attacks), which is optimal for adaptive adversaries who decide on corrupting parties before seeing their messages.
Last updated:  2021-11-06
3-Party Distributed ORAM from Oblivious Set Membership
Brett Hemenway Falk, Daniel Noble, Rafail Ostrovsky
Distributed Oblivious RAM (DORAM) protocols allow a group of participants to obliviously access a secret-shared array at a secret-shared index, and DORAM is the key tool for secure multiparty computation (MPC) in the RAM model. In this work, we present a novel 3-party semi-honest DORAM protocol with $O((\kappa + D) \log N)$ communication per access, where $N$ is the size of the memory, $\kappa$ is a security parameter and $D$ is the block size. Our protocol performs polylogarithmic computation and does not require homomorphic encryption. Under natural parameter choices, this is the most communication-efficient DORAM with these properties. To build this DORAM protocol, we first present an extremely efficient oblivious data structure for answering set membership queries. From this we build an oblivious hash table with asymptotically optimal memory usage and access cost and with negligible failure probability. We believe these are of independent interest.
Last updated:  2021-11-06
Prime pairing in algorithms searching for smooth group order
Pavel Atnashev, George Woltman
Factorization methods like P−1, P+1, and ECM have a stage which deals with primes of the form $a \pm b$, where $a + b$ and $a - b$ are processed by a single operation. Selecting $a$ and $b$ such that both $a + b$ and $a - b$ are prime is called "prime pairing" and can significantly improve the performance of the stage. This paper introduces new methods of pairing, which in some cases find pairs for up to 99.9% of primes in a range. A practical algorithm and its implementations are presented.
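The pairing idea is simple to demonstrate. Below is a hedged toy sketch (a naive greedy search of ours, not one of the paper's new methods) that writes stage primes as $a \pm b$ with $a$ a multiple of a stride $D$ and reports how many primes in a window get paired:

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def pair_primes(lo, hi, D=2310):
    """Greedy pairing: represent primes in [lo, hi) as a -+ b with a a
    multiple of D, so that p = a - b and q = a + b can be handled by one
    stage-2 operation of P-1 / P+1 / ECM."""
    primes = {p for p in range(lo | 1, hi, 2) if is_prime(p)}
    pairs = []
    for a in range(lo - lo % D, hi + D, D):
        for b in range(1, D, 2):
            p, q = a - b, a + b
            if p in primes and q in primes:
                pairs.append((p, q))
                primes -= {p, q}
    return pairs, sorted(primes)   # paired primes and left-over singles

pairs, singles = pair_primes(1_000_000, 1_010_000)
total = 2 * len(pairs) + len(singles)
print(f"paired {2 * len(pairs)} of {total} primes ({200 * len(pairs) / total:.1f}%)")
```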
Last updated:  2022-10-13
A Unified Cryptoprocessor for Lattice-based Signature and Key-exchange
Aikata Aikata, Ahmet Can Mert, David Jacquemin, Amitabh Das, Donald Matthews, Santosh Ghosh, Sujoy Sinha Roy
We propose design methodologies for building a compact, unified and programmable cryptoprocessor architecture that computes post-quantum key agreement and digital signature. Synergies in the two types of cryptographic primitives are used to make the cryptoprocessor compact. As a case study, the cryptoprocessor architecture has been optimized targeting the signature scheme 'CRYSTALS-Dilithium' and the key encapsulation mechanism (KEM) 'Saber', both finalists in the NIST’s post-quantum cryptography standardization project. The programmable cryptoprocessor executes key generations, encapsulations, decapsulations, signature generations, and signature verifications for all the security levels of Dilithium and Saber. On a Xilinx Ultrascale+ FPGA, the proposed cryptoprocessor consumes 18,406 LUTs, 9,323 FFs, 4 DSPs, and 24 BRAMs. It achieves a 200 MHz clock frequency and finishes CCA-secure key-generation/encapsulation/decapsulation operations for LightSaber in 29.6/40.4/58.3$\mu$s; for Saber in 54.9/69.7/94.9$\mu$s; and for FireSaber in 87.6/108.0/139.4$\mu$s, respectively. It finishes key-generation/sign/verify operations for Dilithium-2 in 70.9/151.6/75.2$\mu$s; for Dilithium-3 in 114.7/237/127.6$\mu$s; and for Dilithium-5 in 194.2/342.1/228.9$\mu$s, respectively, for the best-case scenario. On an ASIC using the UMC 65nm library, the latency improves by a factor of two due to a 2$\times$ increase in clock frequency.
Last updated:  2024-03-10
Fine-Grained Cryptanalysis: Tight Conditional Bounds for Dense k-SUM and k-XOR
Itai Dinur, Nathan Keller, and Ohad Klein
An average-case variant of the $k$-SUM conjecture asserts that finding $k$ numbers that sum to 0 in a list of $r$ random numbers, each of the order $r^k$, cannot be done in much less than $r^{\lceil k/2 \rceil}$ time. On the other hand, in the dense regime of parameters, where the list contains more numbers and many solutions exist, the complexity of finding one of them can be significantly improved by Wagner's $k$-tree algorithm. Such algorithms for $k$-SUM in the dense regime have many applications, notably in cryptanalysis. In this paper, assuming the average-case $k$-SUM conjecture, we prove that known algorithms are essentially optimal for $k= 3,4,5$. For $k>5$, we prove the optimality of the $k$-tree algorithm for a limited range of parameters. We also prove similar results for $k$-XOR, where the sum is replaced with exclusive or. Our results are obtained by a self-reduction that, given an instance of $k$-SUM which has a few solutions, produces from it many instances in the dense regime. We solve each of these instances using the dense $k$-SUM oracle, and hope that a solution to a dense instance also solves the original problem. We deal with potentially malicious oracles (that repeatedly output correlated useless solutions) by an obfuscation process that adds noise to the dense instances. Using discrete Fourier analysis, we show that the obfuscation eliminates correlations among the oracle's solutions, even though its inputs are highly correlated.
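As background for the dense regime, here is a minimal sketch of Wagner's $k$-tree algorithm for 4-XOR (the generic textbook algorithm whose optimality the paper studies, not the paper's reduction itself): merge two pairs of lists on the low half of the bits, then match the surviving partial sums on the high half.

```python
import random
from collections import defaultdict

def four_xor(n_bits=32, list_size=1 << 13, seed=0):
    """Wagner's k-tree for k = 4: find x1 ^ x2 ^ x3 ^ x4 == 0 with each xi
    drawn from its own random list, in roughly list_size work per level."""
    rng = random.Random(seed)
    ell, mask = n_bits // 2, (1 << (n_bits // 2)) - 1
    L = [[rng.getrandbits(n_bits) for _ in range(list_size)] for _ in range(4)]

    def merge(A, B):
        """All pairs agreeing on the low ell bits, kept as (a ^ b, a, b)."""
        buckets = defaultdict(list)
        for a in A:
            buckets[a & mask].append(a)
        return [(a ^ b, a, b) for b in B for a in buckets[b & mask]]

    L12, L34 = merge(L[0], L[1]), merge(L[2], L[3])
    tops = {v >> ell: (a, b) for v, a, b in L12}   # low bits are already 0
    for v, c, d in L34:
        if v >> ell in tops:
            a, b = tops[v >> ell]
            return a, b, c, d
    return None

sol = four_xor()
if sol:
    a, b, c, d = sol
    assert a ^ b ^ c ^ d == 0
    print("solution:", [hex(x) for x in sol])
```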
Last updated:  2021-11-06
Privacy-preserving Identity Management System
Jeonghyuk Lee, Jaekyung Choi, Hyunok Oh, Jihye Kim
Recently, the self-sovereign identity model has been researched actively as an alternative to existing identity models such as the centralized identity model, federated identity model, and user-centric model. The self-sovereign identity model allows a user to have complete control of his identity. Meanwhile, the core component of the self-sovereign identity model is data minimization. Data minimization signifies that the extent of the exposure of a user's private identity should be minimized. As a solution to data minimization, zero-knowledge proofs can be grafted onto the self-sovereign identity model. Specifically, zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs) enable proving the truth of a statement about an arbitrary relation. In this paper, we propose a privacy-preserving self-sovereign identity model based on zk-SNARKs to allow any type of data minimization beyond selective disclosure and range proofs. The security of the proposed model is formally proven under the security of the zero-knowledge proof and the unforgeability of the signature in the random oracle model. Furthermore, we optimize the proving time by checking the correctness of the commitment outside of the proof relation for practical use. The resulting scheme improves the proving time for hash computation (to verify a commitment input) from 0.5 s to about 0.1 ms on a 32-bit input.
Last updated:  2021-11-06
QC-MDPC codes DFR and the IND-CCA security of BIKE
Valentin Vasseur
The aim of this document is to clarify the DFR (Decoding Failure Rate) claims made for BIKE, a third-round alternate candidate KEM (Key Encapsulation Mechanism) in the NIST call for post-quantum cryptography standardization. For the most part, the material presented here is not new; it is extracted from the relevant scientific literature, in particular [V21]. Even though a negligible DFR is not needed for a KEM using ephemeral keys (e.g. TLS), which only requires IND-CPA security, it seems that IND-CCA security, relevant for reusable/static keys, has become a requirement. Therefore, a negligible DFR is needed both for the security reduction [FO99, HHK17] and to thwart existing attacks [GJS16]. Proving a DFR lower than $2^{-\lambda}$, where $\lambda$ is the security parameter (e.g. $\lambda=128$ or $256$), is hardly possible with mere simulation. Instead, a methodology based on modeling, simulation, and extrapolation with confidence estimates was devised [V21]. Models are backed up by theoretical results [T18,SV19], but do not account for some combinatorial properties of the underlying error correcting code. Those combinatorial properties give rise to what is known in telecommunication as "error floors" [R03]. The statistical modeling predicts a fast decrease of the DFR as the block size grows, the waterfall region, whereas the combinatorial properties, weak keys [DGK19] or near-codewords [V21], predict a slower decrease, the error floor region. The issue here is to show that the error floor occurs in a region where the DFR is already below the security requirement. This would validate the extrapolation approach, and, as far as we can say, this appears to be the case for the QC-MDPC codes corresponding to BIKE parameters. The impact of the QC-MDPC code combinatorial properties on decoding, as reported in this document, is better and better understood. In particular, it strongly relates to the spectrum of low-weight vectors, as defined in [GJS16]. At this point, none of the results we are aware of, and which are presented here, contradict in any way the DFR claims made for BIKE. Admittedly those claims remain heuristic in part, but they could be understood as an additional assumption, just like the computational assumptions made for all similar primitives, under which the BIKE scheme is IND-CCA secure.
Last updated:  2021-11-06
An In-Depth Symbolic Security Analysis of the ACME Standard
Karthikeyan Bhargavan, Abhishek Bichhawat, Quoc Huy Do, Pedram Hosseyni, Ralf Kuesters, Guido Schmitz, Tim Wuertele
The ACME certificate issuance and management protocol, standardized as IETF RFC 8555, is an essential element of the web public key infrastructure (PKI). It has been used by Let’s Encrypt and other certification authorities to issue over a billion certificates, and a majority of HTTPS connections are now secured with certificates issued through ACME. Despite its importance, however, the security of ACME has not been studied at the same level of depth as other protocol standards like TLS 1.3 or OAuth. Prior formal analyses of ACME only considered the cryptographic core of early draft versions of ACME, ignoring many security-critical low-level details that play a major role in the 100-page RFC, such as recursive data structures, long-running sessions with asynchronous sub-protocols, and the issuance of certificates that cover multiple domains. We present the first in-depth formal security analysis of the ACME standard. Our model of ACME is executable and comprehensive, with a level of detail that lets our ACME client interoperate with other ACME servers. We prove the security of this model using a recent symbolic protocol analysis framework called DY*, which in turn is based on the F* programming language. Our analysis accounts for all prior attacks on ACME in the literature, including both cryptographic attacks and low-level attacks on stateful protocol execution. To analyze ACME, we extend DY* with authenticated channels, key substitution attacks, and a concrete execution framework, which are of independent interest. Our security analysis of ACME, totaling over 16,000 lines of code, is one of the largest proof developments for a cryptographic protocol standard in the literature, and it serves to provide formal security assurances for a crucial component of web security.
Last updated:  2022-09-08
Server-Aided Continuous Group Key Agreement
Joël Alwen, Dominik Hartmann, Eike Kiltz, Marta Mularczyk
Continuous Group Key Agreement (CGKA) -- or Group Ratcheting -- lies at the heart of a new generation of scalable End-to-End secure (E2E) cryptographic multi-party applications. One of the most important (and first deployed) CGKAs is ITK, which underpins the IETF's upcoming Messaging Layer Security E2E secure group messaging standard. To scale beyond the group sizes possible with earlier E2E protocols, a central focus of CGKA protocol design is to minimize bandwidth requirements (i.e. communication complexity). In this work, we advance both the theory and design of CGKA, culminating in an extremely bandwidth-efficient CGKA. To that end, we first generalize the standard CGKA communication model by introducing server-aided CGKA (saCGKA), which generalizes CGKA and more accurately models how most E2E protocols are deployed in the wild. Next, we introduce the SAIK protocol; a modification of ITK, designed for real-world use, that leverages the new capabilities available to an saCGKA to greatly reduce its communication (and computational) complexity in practical concrete terms. Further, we introduce an intuitive, yet precise, security model for saCGKA. It improves upon existing security models for CGKA in several ways. It more directly captures the intuitive security goals of CGKA. Yet, formally it also relaxes certain requirements, allowing us to take advantage of the saCGKA communication model. Finally, it is significantly simpler, making it more tractable to work with and easier to build intuition for. As a result, the security proof of SAIK is also simpler and more modular. Finally, we provide empirical data comparing the (at times, quite dramatically improved) complexity profile of SAIK to state-of-the-art CGKAs. For example, in a newly created group with 10K members, to change the group state (e.g. add/remove parties) ITK requires each group member to download 1.38MB. However, with SAIK, members download no more than 2.7KB.
Last updated:  2021-10-29
Dynamic Random Probing Expansion with Quasi Linear Asymptotic Complexity
Sonia Belaïd, Matthieu Rivain, Abdul Rahman Taleb, Damien Vergnaud
The masking countermeasure is widely used to protect cryptographic implementations against side-channel attacks. While many masking schemes are shown to be secure in the widely deployed probing model, the latter has raised a number of concerns regarding its relevance in practice. Offering the adversary the knowledge of a fixed number of intermediate variables, it does not capture the so-called horizontal attacks which exploit the repeated manipulation of sensitive variables. Therefore, recent works have focused on the random probing model in which each computed variable leaks with some given probability $p$. This model benefits from better fitting the reality of embedded devices. In particular, Belaïd, Coron, Prouff, Rivain, and Taleb (CRYPTO 2020) introduced a framework to generate random probing circuits. Their compiler somehow extends base gadgets as soon as they satisfy a notion called random probing expandability (RPE). A subsequent work from Belaïd, Rivain, and Taleb (EUROCRYPT 2021) went a step further with tighter properties and improved complexities. In particular, their construction reaches a complexity of $\mathcal{O}(\kappa^{3.9})$, for $\kappa$-bit security, while tolerating a leakage probability of $p=2^{-7.5}$. In this paper, we generalize the random probing expansion approach by considering a dynamic choice of the base gadgets at each step in the expansion. This approach makes it possible to use gadgets with a high number of shares --which enjoy better asymptotic complexity in the expansion framework-- while still tolerating the best leakage rate usually obtained for small gadgets. We investigate strategies for the choice of the sequence of compilers and show that this can reduce the complexity of an AES implementation by a factor of $10$. We also significantly improve the asymptotic complexity of the expanding compiler by exhibiting new asymptotic gadget constructions. Specifically, we introduce RPE gadgets for linear operations featuring quasi-linear complexity as well as an RPE multiplication gadget with a linear number of multiplications. These new gadgets drop the complexity of the expanding compiler from quadratic to quasi-linear.
Last updated:  2022-08-22
Russian Federal Remote E-voting Scheme of 2021 -- Protocol Description and Analysis
Jelizaveta Vakarjuk, Nikita Snetkov, Jan Willemson
This paper presents the details of one of the two cryptographic remote e-voting protocols used in the Russian parliamentary elections of 2021. As the official full version of the scheme has never been published by the election organisers, our paper aims at putting together as complete a picture as possible from various incomplete sources. As all the currently available sources are in Russian, our presentation also aims at serving the international community by making the description available in English for further studies. In the second part of the paper, we provide an initial analysis of the protocol, identifying potential weaknesses under the assumption that the relevant key components are corrupted. As a result, we conclude that the biggest problems of the system stem from weak voter authentication. In addition, as it was possible to vote from any device with a browser and Internet access, the attack surface was relatively large in general.
Last updated:  2023-10-13
A State-Separating Proof for Yao’s Garbling Scheme
Chris Brzuska and Sabine Oechsner
Secure multiparty computation enables mutually distrusting parties to compute a public function of their secret inputs. One of the main approaches for designing MPC protocols is garbled circuits, whose core component is usually referred to as a garbling scheme. In this work, we revisit the security of Yao’s garbling scheme and provide a modular security proof which composes the security of multiple layer garblings to prove security of the full circuit garbling. We perform our security proof in the style of state-separating proofs (ASIACRYPT 2018).
Last updated:  2021-10-29
A Lightweight Implementation of Saber Resistant Against Side-Channel Attacks
Abubakr Abdulgadir, Kamyar Mohajerani, Viet Ba Dang, Jens-Peter Kaps, Kris Gaj
The field of post-quantum cryptography aims to develop and analyze algorithms that can withstand classical and quantum cryptanalysis. The NIST PQC standardization process, now in its third round, specifies ease of protection against side-channel analysis as an important selection criterion. In this work, we develop and validate a masked hardware implementation of Saber key encapsulation mechanism, a third-round NIST PQC finalist. We first design a baseline lightweight hardware architecture of Saber and then apply side-channel countermeasures. Our protected hardware implementation is significantly faster than previously reported protected software and software/hardware co-design implementations. Additionally, applying side-channel countermeasures to our baseline design incurs approximately 2.9x and 1.4x penalty in terms of the number of LUTs and latency, respectively, in modern FPGAs.
Last updated:  2021-10-29
High-Performance Hardware Implementation of CRYSTALS-Dilithium
Luke Beckwith, Duc Tri Nguyen, Kris Gaj
Many currently deployed public-key cryptosystems are based on the difficulty of the discrete logarithm and integer factorization problems. However, given an adequately sized quantum computer, these problems can be solved in polynomial time as a function of the key size. Due to the future threat of quantum computing to current cryptographic standards, alternative algorithms that remain secure under quantum computing are being evaluated for future use. One such algorithm is CRYSTALS-Dilithium, a lattice-based digital signature scheme, which is a finalist in the NIST Post Quantum Cryptography (PQC) competition. As a part of this evaluation, high-performance implementations of these algorithms must be investigated. This work presents a high-performance implementation of CRYSTALS-Dilithium targeting FPGAs. In particular, we present a design that achieves the best latency for an FPGA implementation to date. We also compare our results with the most-relevant previous work on hardware implementations of NIST Round 3 post-quantum digital signature candidates.
Last updated:  2022-10-01
Efficient Zero-Knowledge Argument in Discrete Logarithm Setting: Sublogarithmic Proof or Sublinear Verifier
Sungwook Kim, Hyeonbum Lee, Jae Hong Seo
We propose three interactive zero-knowledge arguments for arithmetic circuits of size $N$ in the common random string model, which can be made non-interactive via the Fiat-Shamir heuristic in the random oracle model. The first argument features $O(\sqrt{\log N})$ communication and round complexities and $O(N)$ computational complexity for the verifier. The second argument features $O(\log N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. The third argument features $O(\log N)$ communication and $O(\sqrt{N}\log N)$ computational complexity for the verifier. Contrary to the first and second arguments, the third argument is free of reliance on pairing-friendly elliptic curves. The soundness of all three arguments is proven under the standard discrete logarithm and/or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption.
Last updated:  2021-10-29
One-more Unforgeability of Blind ECDSA
Xianrui Qin, Cailing Cai, Tsz Hon Yuen
In this paper, we give the first formal security analysis on the one-more unforgeability of blind ECDSA. We start with giving a general attack on blind ECDSA, which is similar to the ROS attack on the blind Schnorr signature. We formulate the ECDSA-ROS problem to capture this attack. Next, we give a generic construction of blind ECDSA based on an additive homomorphic encryption and a corresponding zero-knowledge proof. Our concrete instantiation is about 40 times more bandwidth efficient than the blind ECDSA in AsiaCCS 2019. After that, we give the first formal proof of one-more unforgeability for blind ECDSA, under a new model called algebraic bijective random oracle. The security of our generic blind ECDSA relies on the hardness of a discrete logarithm-based interactive assumption and an assumption of the underlying elliptic curve. Finally, we analyze the hardness of the ECDSA-ROS problem in the algebraic bijective random oracle model.
Last updated:  2021-10-27
Secure Featurization and Applications to Secure Phishing Detection
Akash Shah, Nishanth Chandran, Mesfin Dema, Divya Gupta, Arun Gururajan, Huan Yu
Secure inference allows a server holding a machine learning (ML) inference algorithm with private weights, and a client with a private input, to obtain the output of the inference algorithm, without revealing their respective private inputs to one another. While this problem has received plenty of attention, existing systems are not applicable to a large class of ML algorithms (such as in the domain of Natural Language Processing) that perform featurization as their first step. In this work, we address this gap and make the following contributions: 1. We initiate the formal study of secure featurization and its use in conjunction with secure inference protocols. 2. We build secure featurization protocols in the one/two/three-server settings that provide a tradeoff between security and efficiency. 3. Finally, we apply our algorithms in the context of secure phishing detection and evaluate our end-to-end protocol on models that are commonly used for phishing detection.
Last updated:  2021-10-27
Mixed Certificate Chains for the Transition to Post-Quantum Authentication in TLS 1.3
Sebastian Paul, Yulia Kuzovkova, Norman Lahr, Ruben Niederhagen
Large-scale quantum computers will be able to efficiently solve the underlying mathematical problems of widely deployed public key cryptosystems in the near future. This threat has sparked increased interest in the field of Post-Quantum Cryptography (PQC) and standardization bodies like NIST, IETF, and ETSI are in the process of standardizing PQC schemes as a new generation of cryptography. This raises the question of how to ensure a fast, reliable, and secure transition to upcoming PQC standards in today’s highly interconnected world. In this work, we propose and investigate a migration strategy towards post-quantum (PQ) authentication for the network protocol Transport Layer Security (TLS). Our strategy is based on the concept of “mixed certificate chains” which use different signature algorithms within the same certificate chain. In order to demonstrate the feasibility of our migration strategy we combine the well-studied and trusted hash-based signature schemes SPHINCS+ and XMSS with elliptic curve cryptography first and subsequently with lattice-based PQC signature schemes (CRYSTALS-Dilithium and Falcon). Furthermore, we combine authentication based on mixed certificate chains with the lattice-based key encapsulation mechanism (KEM) CRYSTALS-Kyber as representative for PQC KEMs to evaluate a fully post-quantum and mutually authenticated TLS 1.3 handshake. Our results show that mixed certificate chains containing hash-based signature schemes only at the root certificate authority level lead to feasible connection establishment times despite the increase in communication size. By analyzing code size and peak memory usage of our client and server programs we further demonstrate the suitability of our migration strategy even for embedded devices.
Last updated:  2023-09-21
Batch point compression in the context of advanced pairing-based protocols
Dmitrii Koshelev
This paper continues previous ones about compression of points on elliptic curves $E_b\!: y^2 = x^3 + b$ (with $j$-invariant $0$) over a finite field $\mathbb{F}_{\!q}$ of characteristic $p > 3$. It is shown in detail how any two (resp., three) points from $E_b(\mathbb{F}_{\!q})$ can be quickly compressed to two (resp., three) elements of $\mathbb{F}_{\!q}$ (apart from a few auxiliary bits) in such a way that the corresponding decompression stage requires extracting only one cubic (resp., sextic) root in $\mathbb{F}_{\!q}$. As a result, for many fields $\mathbb{F}_{\!q}$ occurring in practice, the new compression-decompression methods are more efficient than the classical one with the two (resp., three) $x$ or $y$ coordinates of the points, which extracts two (resp., three) roots in $\mathbb{F}_{\!q}$. As a by-product, it is also explained how to sample uniformly at random two (resp., three) ``independent'' $\mathbb{F}_{\!q}$-points on $E_b$ essentially at the cost of only one cubic (resp., sextic) root in $\mathbb{F}_{\!q}$. Finally, the cases of four and more points from $E_b(\mathbb{F}_{\!q})$ are commented on as well.
Last updated:  2022-10-15
Sleepy Channels: Bitcoin-Compatible Bi-directional Payment Channels without Watchtowers
Lukas Aumayr, Sri AravindaKrishnan Thyagarajan, Giulio Malavolta, Pedro Moreno-Sanchez, Matteo Maffei
Payment channels (PC) are a promising solution to the scalability issue of cryptocurrencies, allowing users to perform the bulk of their transactions off-chain without needing to post everything on the blockchain. Many PC proposals, however, suffer from a severe limitation: both parties need to constantly monitor the blockchain to ensure that the other party did not post an outdated transaction. If this event happens, the honest party needs to react promptly and engage in a punishment procedure. This means that prolonged absence periods (e.g., a power outage) may be exploited by malicious users. As a mitigation, the community has introduced watchtowers, third parties that monitor the blockchain on behalf of off-line users. Unfortunately, watchtowers are either trusted, which is critical from a security perspective, or they have to lock a certain amount of coins, called collateral, for each monitored PC in order to be held accountable, which is financially infeasible for a large network. We present Sleepy Channels, the first bi-directional PC protocol without watchtowers (or any other third party) that supports an unbounded number of payments and does not require parties to be persistently online. The key idea is to confine the period in which PC updates can be validated on-chain to a short, pre-determined time window, which is when the PC parties have to be online. This behavior is incentivized by letting the parties lock a collateral in the PC, which can be adjusted depending on their mutual trust and which they get back much sooner if they are online during this time window. Our protocol is compatible with any blockchain that is capable of verifying digital signatures (e.g., Bitcoin), as shown by our proof of concept. Moreover, our experimental results show that Sleepy Channels impose a communication and computation overhead similar to state-of-the-art PC protocols while removing the watchtower's collateral and fees for the monitoring service.
Last updated:  2022-11-18
Streamlined NTRU Prime on FPGA
Bo-Yuan Peng, Adrian Marotzke, Ming-Han Tsai, Bo-Yin Yang, Ho-Lin Chen
We present a novel full hardware implementation of Streamlined NTRU Prime, with two variants: a high-speed, high-area implementation, and a slower, low-area implementation. We introduce several new techniques that improve performance, including a batch inversion for key generation, a high-speed schoolbook polynomial multiplier, an NTT polynomial multiplier combined with a CRT map, a new DSP-free modular reduction method, a high-speed radix sorting module, and new en- and decoders. With the high-speed design, we achieve the to-date fastest speeds for Streamlined NTRU Prime, with speeds of 5007, 10989 and 64026 cycles for encapsulation, decapsulation, and key generation respectively, while running at 285 MHz on a Xilinx Zynq Ultrascale+. The entire design uses 40,060 LUTs, 26,384 flip-flops, 36.5 BRAMs, and 31 DSPs.
Last updated:  2022-09-05
Platypus: A Central Bank Digital Currency with Unlinkable Transactions and Privacy Preserving Regulation
Karl Wüst, Kari Kostiainen, Noah Delius, Srdjan Capkun
Due to the popularity of blockchain-based cryptocurrencies, the increasing digitalization of payments, and the constantly reducing role of cash in society, central banks have shown an increased interest in deploying central bank digital currencies (CBDCs) that could serve as a digital cash-equivalent. While most recent research on CBDCs focuses on blockchain technology, it is not clear that this choice of technology provides the optimal solution. In particular, the centralized trust model of a CBDC offers opportunities for different designs. In this paper, we depart from blockchain designs and instead build on ideas from traditional e-cash schemes. We propose a new style of building digital currencies that combines the transaction processing model of e-cash with an account-based fund management model. We argue that such a style of building digital currencies is especially well-suited to CBDCs. We also design the first such digital currency system, called Platypus, that provides strong privacy, high scalability, and expressive but simple regulation, which are all critical features for a CBDC. Platypus achieves these properties by adapting techniques similar to those used in anonymous blockchain cryptocurrencies like Zcash to fit our account model and applying them to the e-cash context.
Last updated:  2024-04-22
On the {\sf P/poly} Validity of the Agr17 FE Scheme
Yupu Hu, Siyue Dong, Baocang Wang, Jun Liu, Yanbin Pan, and Xingting Dong
Functional encryption (FE) is a cutting-edge research topic in cryptography. The Agr17 FE scheme is a major scheme in the FE area. This scheme had the novelty of “being applied for the group of general functions (that is, {\sf P/poly} functions) without IO”. It took the BGG+14 ABE scheme as a bottom structure, which was upgraded into a “partially hiding attribute” scheme, and combined with a fully homomorphic encryption (FHE) scheme. However, the Agr17 FE scheme had a strange operation. For noise cancellation at the FHE decryption stage, it used bulky “searching noise” rather than elegant “filtering”. It searched the total modulus interval, so the FHE modulus had to be polynomially large. In this paper we discuss the {\sf P/poly} validity of the Agr17 FE scheme. First, we obtain the result that the Agr17 FE scheme is {\sf P/poly} invalid. More concretely, when the Agr17 FE scheme is applied to the group of randomly chosen {\sf P/poly} Boolean functions, the FHE modulus at the “searching” stage cannot be polynomially large. Our analysis is based on three restrictions of the BGG+14 ABE scheme: (1) The modulus of the BGG+14 ABE should be adapted to being super-polynomially large, if it is applied for the group of randomly chosen {\sf P/poly} functions. (2) The modulus of the BGG+14 ABE cannot be switched. (3) If the BGG+14 ABE is upgraded into a “partially hiding attribute” scheme, permitted operations on the hidden part of the attribute can only be affine operations. Then, to check whether {\sf P/poly} validity can be obtained by modifying the scheme, we consider two modified versions. The first modified version controls the FHE noise by repeatedly applying bootstrapping, and replaces a modular inner product with an arithmetic inner product. The second modified version replaces the search over the modulus interval with a search over a public noise interval, hoping that such a noise interval is polynomially large while tolerating a modulus which may be super-polynomially large. The first modified version may be {\sf P/poly} valid, but it is weaker. There is no evidence to support the {\sf P/poly} validity of the second modified version. Finally, we present an additional conclusion that there is no evidence to support the {\sf P/poly} validity of the GVW15 PE scheme.
Last updated:  2023-11-01
Length-preserving encryption with HCTR2
Paul Crowley, Nathan Huckleberry, and Eric Biggers
On modern processors HCTR is one of the most efficient constructions for building a tweakable super-pseudorandom permutation. However, a bug in the specification and another in Chakraborty and Nandi's security proof invalidate the claimed security bound. We here present HCTR2, which fixes these issues and improves the security bound, performance and flexibility. GitHub: https://github.com/google/hctr2
Last updated:  2022-08-25
Improved Circuit-based PSI via Equality Preserving Compression
Kyoohyung Han, Dukjae Moon, Yongha Son
Circuit-based private set intersection (circuit-PSI) enables two parties with input sets $X$ and $Y$ to compute a function $f$ over the intersection $X \cap Y$, without revealing any other information. State-of-the-art protocols for circuit-PSI commonly involve a procedure that securely checks whether two input strings are equal and outputs an additive share of the equality result. This procedure is typically performed by generic two-party computation protocols, and its cost occupies quite a large portion of the total cost of circuit-PSI. In this work, we propose an {\textit{equality preserving compression}} (EPC) protocol that compresses the length of equality-check targets while preserving equality, using a homomorphic encryption (HE) scheme, and is secure against semi-honest adversaries. It can be seamlessly applied to state-of-the-art circuit-PSI protocol frameworks. We demonstrate by implementation that our EPC provides a $10-40\%$ speed-up for circuit-PSI with set sizes from $2^{16}$ to $2^{20}$ on a LAN network. We believe that the EPC protocol is of independent interest and can be applied to applications other than PSI.
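To convey only the "compression preserves equality" idea in isolation (the paper's actual EPC runs on secret-shared inputs under HE, which this plaintext toy does not attempt), one can compress long strings with a shared random linear map: equal inputs always collide, and unequal inputs collide with probability at most len/P by a Schwartz-Zippel argument.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; digests live in GF(P)

def epc_digest(bits, r):
    """Compress a long bit-string to one field element as sum(bits[i] * r^i)
    mod P for a shared random r, preserving equality with high probability."""
    acc, pw = 0, 1
    for b in bits:
        acc = (acc + b * pw) % P
        pw = (pw * r) % P
    return acc

rng = random.Random(42)
r = rng.randrange(1, P)
x = [rng.randint(0, 1) for _ in range(1024)]
y = list(x)
y[500] ^= 1
assert epc_digest(x, r) == epc_digest(list(x), r)  # equality is preserved
assert epc_digest(x, r) != epc_digest(y, r)        # distinct w.h.p.
```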
Last updated:  2021-10-27
An Addendum to the ZUC-256 Stream Cipher
ZUC Design Team
ZUC-256 is a stream cipher, proposed alongside AES-256 and SNOW-V as a core primitive in the future set of 3GPP confidentiality and integrity algorithms for upcoming 5G applications offering 256-bit security. While the original initialization scheme of ZUC-256 works with a 256-bit key and an IV of length up to 184 bits, in this paper we describe a new initialization scheme of ZUC-256 that supports an IV of exactly 128 bits. Compared to the original initialization scheme, the new key/IV setup algorithm avoids splitting key/IV bytes and provides a simple and natural-looking initialization scheme for ZUC-256.
Last updated:  2023-11-17
Incremental Offline/Online PIR (extended version)
Yiping Ma, Ke Zhong, Tal Rabin, and Sebastian Angel
Recent private information retrieval (PIR) schemes preprocess the database with a query-independent offline phase in order to achieve sublinear computation during a query-specific online phase. These offline/online protocols expand the set of applications that can profitably use PIR, but they make a critical assumption: that the database is immutable. In the presence of changes such as additions, deletions, or updates, existing schemes must preprocess the database from scratch, wasting prior effort. To address this, we introduce incremental preprocessing for offline/online PIR schemes, allowing the original preprocessing to continue to be used after database changes, while incurring an update cost proportional to the number of changes rather than the size of the database. We adapt two offline/online PIR schemes to use incremental preprocessing and show how it significantly improves the throughput and reduces the latency of applications where the database changes over time.
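To make the incremental idea concrete, here is a hedged toy sketch (ours, not either of the adapted schemes): the offline phase stores parities of random subsets of the database, and an update patches exactly those hints whose subset contains the changed index, so cost scales with the number of changes rather than the database size.

```python
import random

class ParityHints:
    """Toy offline/online-PIR-style hint table. Real schemes need carefully
    structured sets for privacy and sublinear online time; this only shows
    how preprocessing can absorb updates incrementally."""

    def __init__(self, db, num_hints=64, seed=0):
        rng = random.Random(seed)
        n = len(db)
        self.sets = [frozenset(rng.sample(range(n), n // 2))
                     for _ in range(num_hints)]
        self.parities = [self._xor(db, s) for s in self.sets]

    @staticmethod
    def _xor(db, s):
        acc = 0
        for i in s:
            acc ^= db[i]
        return acc

    def update(self, i, old, new):
        """Incremental preprocessing: touch only hints whose set contains i."""
        delta = old ^ new
        for h, s in enumerate(self.sets):
            if i in s:
                self.parities[h] ^= delta

db = [random.getrandbits(32) for _ in range(256)]
hints = ParityHints(db)
old, db[17] = db[17], random.getrandbits(32)
hints.update(17, old, db[17])
assert hints.parities == [ParityHints._xor(db, s) for s in hints.sets]
```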
Last updated:  2021-10-26
ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking
Anuj Dubey, Afzal Ahmad, Muhammad Adeel Pasha, Rosario Cammarota, Aydin Aysu
Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Indeed, several recent works have shown reverse engineering of the model internals using such attacks, but research on building defenses remains largely unexplored. There is a critical need to efficiently and securely transfer defenses from cryptography, such as masking, to ML frameworks. Existing works, however, revealed that a straightforward adaptation of such defenses either provides partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction to construct neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layers. We demonstrate our approach on the edge-computing friendly binarized neural networks (BNN) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the unique operations of ML such as the activation function and the output layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, quantify that they can achieve a similar latency while reducing the FF and LUT costs over the state-of-the-art protected implementations by 34.2% and 42.6%, respectively, and demonstrate their first-order side-channel security with up to 1M traces.
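A minimal sketch of the modular-arithmetic reformulation alone (not the paper's DOM masking gadgets), under the assumption that the modulus is large enough that pre-activations never wrap around:

```python
import numpy as np

Q = 2**13  # assumed modulus, chosen so |pre-activation| < Q/2 always holds

def bnn_layer_mod(x_bits, W_pm1, q=Q):
    """One binarized layer evaluated in Z_q: weights in {-1, +1} are embedded
    as {q-1, 1}, the dot product is taken mod q, the result is re-centred to
    a signed integer, and the sign activation is applied."""
    W_mod = np.mod(W_pm1, q)                     # -1 -> q - 1
    acc = np.mod(W_mod @ x_bits, q)              # arithmetic entirely in Z_q
    signed = np.where(acc > q // 2, acc - q, acc)
    return (signed >= 0).astype(np.int64)        # sign activation -> {0, 1}

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=64)                  # binary inputs
W = rng.choice([-1, 1], size=(16, 64)).astype(np.int64)
assert np.array_equal(bnn_layer_mod(x, W), ((W @ x) >= 0).astype(np.int64))
```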
Last updated:  2022-02-23
Efficient Representation of Numerical Optimization Problems for SNARKs
Sebastian Angel, Andrew J. Blumberg, Eleftherios Ioannidis, Jess Woods
This paper introduces Otti, a general-purpose compiler for (zk)SNARKs that provides support for numerical optimization problems. Otti produces efficient arithmetizations of programs that contain optimization problems including linear programming (LP), semi-definite programming (SDP), and a broad class of stochastic gradient descent (SGD) instances. Numerical optimization is a fundamental algorithmic building block: applications include scheduling and resource allocation tasks, approximations to NP-hard problems, and training of neural networks. Otti takes as input arbitrary programs written in a subset of C that contain optimization problems specified via an easy-to-use API. Otti then automatically produces rank-1 constraint satisfiability (R1CS) instances that express a succinct transformation of those programs. Correct execution of the transformed program implies the optimality of the solution to the original optimization problem. Our evaluation on real benchmarks shows that Otti, instantiated with the Spartan proof system, can prove the optimality of solutions in zero-knowledge in as little as 100 ms---over 4 orders of magnitude faster than existing approaches.
Last updated:  2021-11-22
Vectorial Decoding Algorithm for Fast Correlation Attack and Its Applications to Stream Cipher Grain-128a
ZhaoCun Zhou, DengGuo Feng, Bin Zhang
Fast correlation attacks, pioneered by Meier and Staffelbach, are an important cryptanalysis tool for LFSR-based stream ciphers: they exploit the correlation between the LFSR state and the keystream and aim at recovering the initial state of the LFSR via a decoding algorithm. In this paper, we develop a vectorial decoding algorithm for fast correlation attacks, which is a natural generalization of the original binary approach. Our approach benefits from the contributions of all correlations in a subspace. We propose two novel criteria to improve the iterative decoding algorithm. We also give some cryptographic properties of the new FCA which allow us to estimate efficiency and complexity bounds. Furthermore, we apply this technique to the well-analyzed stream cipher Grain-128a. Based on a hypothesis, an interesting result for its security bound is deduced from the perspective of iterative decoding. Our analysis reveals a potential vulnerability for LFSRs over generic linear groups and also for nonlinear functions with high-SEI multidimensional linear approximations, such as Grain-128a.
Last updated:  2021-10-26
The Language's Impact on the Enigma Machine
Daniel Matyas Perendi, Prosanta Gope
The infamous Enigma machine was believed to be unbreakable before 1932, simply because of its variable settings and incredible complexity. However, people realised that there is a known pattern in the German messages, which significantly reduced the number of possible settings and made the code breaker's job easier. Modern cryptanalysis techniques provide a much more powerful way to break the Enigma cipher using letter frequencies and a concept called the index of coincidence. In turn, this technique only works well for the English language (using the characters of the English alphabet), but what if we encountered an Enigma machine designed for the Hungarian language, where the alphabet consists of more than 26 characters? Experiments on the Enigma cipher with different languages have not been done to date; hence, in this article we show the language's impact on both the machine and the cipher. Not only Hungarian, but in fact any language using more characters than English could have a significant effect on the Enigma machine and its complexity, if such a machine existed. By a broad comparative analysis, it is proven that the size of the alphabet has a significant impact on the complexity and therefore the cryptanalysis.
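For reference, the index of coincidence mentioned above is the standard statistic $\mathrm{IC} = \sum_i f_i(f_i-1)/(N(N-1))$, and a few lines of code (textbook material, not from the article) show why alphabet size matters: uniformly random text over $m$ letters scores about $1/m$, so a larger Hungarian-style alphabet shifts the baseline.

```python
from collections import Counter

def index_of_coincidence(text, alphabet):
    """Probability that two letters drawn from the text coincide. English
    plaintext scores roughly 0.066; random text over m letters scores ~1/m."""
    letters = [c for c in text.upper() if c in alphabet]
    n = len(letters)
    freqs = Counter(letters)
    return sum(f * (f - 1) for f in freqs.values()) / (n * (n - 1))

sample = "CRYPTANALYSIS OF THE ENIGMA RELIES ON STATISTICS OF THE LANGUAGE " * 5
print(index_of_coincidence(sample, set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")))
```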
Last updated:  2022-01-05
Oblivious Transfer from Trapdoor Permutations in Minimal Rounds
Arka Rai Choudhuri, Michele Ciampi, Vipul Goyal, Abhishek Jain, Rafail Ostrovsky
Oblivious transfer (OT) is a foundational primitive within cryptography owing to its connection with secure computation. One of the oldest constructions of oblivious transfer was from certified trapdoor permutations (TDPs). However, several decades later, we do not know if a similar construction can be obtained from TDPs in general. In this work, we study the problem of constructing round-optimal oblivious transfer from trapdoor permutations. In particular, we obtain the following new results (in the plain model) relying on TDPs in a black-box manner: 1) a three-round oblivious transfer protocol that guarantees indistinguishability-security against malicious senders (and semi-honest receivers); 2) a four-round oblivious transfer protocol secure against malicious adversaries with black-box simulation-based security. By combining our second result with an already known compiler, we obtain the first round-optimal 2-party computation protocol that relies in a black-box way on TDPs. A key technical tool underlying our results is a new primitive we call dual witness encryption (DWE) that may be of independent interest.
Last updated:  2021-10-26
Wavelet: Code-based postquantum signatures with fast verification on microcontrollers
Gustavo Banegas, Thomas Debris-Alazard, Milena Nedeljković, Benjamin Smith
This work presents the first full implementation of Wave, a postquantum code-based signature scheme. We define Wavelet, a concrete Wave scheme at the 128-bit classical security level (or NIST postquantum security Level 1) equipped with a fast verification algorithm targeting embedded devices. Wavelet offers 930-byte signatures, with a public key of 3161 kB. We include implementation details using AVX instructions and on the ARM Cortex-M4, including a solution to deal with Wavelet’s large public keys, which do not fit in the SRAM of a typical embedded device. Our verification algorithm is ≈ 4.65× faster than the original, and verifies in 1 087 538 cycles using AVX instructions, or 13 172 ticks on an ARM Cortex-M4.
Last updated:  2021-10-26
Secure and Efficient Multi-Key FHE Scheme Supporting Multi-bit Messages from LWE Preserving Non-Interactive Decryption
Chinmoy Biswas, Ratna Dutta
We consider multi-key fully homomorphic encryption (multi-key FHE), which is the richest variant of fully homomorphic encryption (FHE), allowing complex computation on encrypted data under different keys. Since its introduction by Lopez-Alt, Tromer and Vaikuntanathan in 2012, numerous proposals have been presented yielding various improvements in security and efficiency. However, most of these multi-key FHE schemes encrypt a single-bit message. Constructing a multi-key FHE scheme encrypting multi-bit messages has been notoriously difficult without losing efficiency for homomorphic evaluation and ciphertext extension under additional keys. In this work, we study multi-key FHE that can encrypt multi-bit messages. Motivated by the goal of improving efficiency, we propose a new construction with non-interactive decryption and security against chosen-plaintext attacks (IND-CPA) from the standard learning with errors (LWE) assumption. We consider a binary matrix as plaintext instead of a single bit. Our approach supports efficient homomorphic matrix addition and multiplication. Another interesting feature is that our technique of extending a ciphertext under additional keys yields a significant reduction in the computational overhead. More interestingly, when contrasted with previous multi-key FHE schemes for multi-bit messages, our candidate exhibits favorable results in the length of the secret key, public key and ciphertext while preserving non-interactive decryption. Keywords: lattice based cryptosystem, multi-key fully homomorphic encryption, learning with errors, multi-bit messages
Last updated:  2021-10-26
Improved Zero-Knowledge Argument of Encrypted Extended Permutation
Yi Liu, Qi Wang, Siu-Ming Yiu
Extended permutation (EP) is a generalized notion of the standard permutation. Unlike the one-to-one correspondence mapping of the standard permutation, EP allows elements to be replicated or omitted as many times as needed during the mapping. EP is useful in the area of secure multi-party computation (MPC), especially for the problem of private function evaluation (PFE). As a special class of MPC problems, PFE focuses on the scenario where a party holds a private circuit $C$ while all other parties hold their private inputs $x_1, \ldots, x_n$, respectively. The goal of PFE protocols is to securely compute the evaluation result $C(x_1, \ldots, x_n)$, while any other information beyond $C(x_1, \ldots, x_n)$ is hidden. EP here is introduced to describe the topological structure of the circuit $C$, and it is further used to support the evaluation of $C$ privately. For an actively secure PFE protocol, it is crucial to guarantee that the private circuit provider cannot deviate from the protocol to learn more information. Hence, we need to ensure that the private circuit provider correctly performs an EP. This seeks the help of the so-called \emph{zero-knowledge argument of encrypted extended permutation} protocol. In this paper, we provide an improvement of this protocol. Our new protocol can be instantiated to be non-interactive, while the previous protocol has to be interactive. Meanwhile, compared with the previous protocol, our protocol is significantly (e.g., more than $3.4\times$) faster, and the communication cost is only around $24\%$ of that of the previous one.
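Since EP is the central object here, a tiny illustration (ours) may help: an extended permutation is simply a map from output positions to input positions, which makes replication and omission immediate.

```python
def apply_extended_permutation(ep, inputs):
    """ep[j] names the input wire feeding output wire j; an input may be
    used several times (replication) or not at all (omission)."""
    return [inputs[i] for i in ep]

# input "b" feeds two output wires, input "c" feeds none:
print(apply_extended_permutation([1, 1, 0, 3], ["a", "b", "c", "d"]))
# -> ['b', 'b', 'a', 'd']
```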
Last updated:  2021-10-26
Reviewing ISO/IEC Standard for Time-stamping Services
Long Meng, Liqun Chen
Time-stamping services are used to prove that a data item existed at a given point in time. This proof is represented by a time-stamp token that is created by a time-stamping authority. ISO/IEC 18014 specifies time-stamping services and requires them to hold the following two properties: (1) The data being time-stamped is not disclosed to the time-stamping authority; hash values of the data are provided to the authority instead. (2) A time-stamp token can be renewed, so that the validity duration of a time-stamp token is not restricted by the lifetimes of the underlying algorithms or policies. In this paper, we review this standard and discover several issues: due to inconsistent writing or missing information, a time-stamping service following the standard specification may not be able to achieve these designed properties. We provide a solution to each issue.
Last updated:  2021-10-24
Non-randomness of S-unit lattices
Daniel J. Bernstein, Tanja Lange
Spherical models of lattices are standard tools in the study of lattice-based cryptography, except for variations in terminology and minor details. Spherical models are used to predict the lengths of short vectors in lattices and the effectiveness of reduction modulo those short vectors. These predictions are consistent with an asymptotic theorem by Gauss, theorems on short vectors in almost all lattices from the invariant distribution, and a variety of experiments in the literature. $S$-unit attacks are a rapidly developing line of attacks against structured lattice problems. These include the quantum polynomial-time attacks that broke the cyclotomic case of Gentry's original STOC 2009 FHE system under minor assumptions, and newer attacks that have broken through various barriers previously claimed for this line of work. $S$-unit attacks take advantage of auxiliary lattices, standard number-theoretic lattices called $S$-unit lattices. Spherical models have recently been applied to these auxiliary lattices to deduce core limits on the power of $S$-unit attacks. This paper shows that these models underestimate the power of $S$-unit attacks: $S$-unit lattices, like the lattice $Z^d$, have much shorter vectors and reduce much more effectively than predicted by these models. The attacker can freely choose $S$ to make the gap as large as desired, breaking through the core limits previously asserted for $S$-unit attacks.
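For context, the spherical-model prediction referred to above is usually instantiated as the Gaussian heuristic; the snippet below (a standard formula, not from the paper) computes it in log space and makes the gap concrete for $Z^d$, whose true $\lambda_1$ is $1$.

```python
from math import exp, lgamma, log, pi

def gaussian_heuristic(d, log_det=0.0):
    """Spherical-model prediction for the shortest-vector length of a
    d-dimensional lattice with the given log-determinant:
    lambda_1 ~ (det / V_d)^(1/d), with V_d the volume of the unit d-ball."""
    log_ball = (d / 2) * log(pi) - lgamma(d / 2 + 1)
    return exp((log_det - log_ball) / d)

for d in (64, 256, 1024):
    # grows like sqrt(d / (2*pi*e)); Z^d nevertheless has lambda_1 = 1
    print(d, round(gaussian_heuristic(d), 3))
```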
Last updated:  2022-04-30
Public-Key Quantum Money with a Classical Bank
Omri Shmueli
Quantum money is a main primitive in quantum cryptography that enables a bank to distribute to parties in the network, called wallets, unclonable quantum banknotes that serve as a medium of exchange between wallets. While quantum money suggests a theoretical solution to some of the fundamental problems in currency systems, it still requires a strong model to be implemented; namely, quantum computation and a quantum communication infrastructure. A central open question in this context is whether we can have a quantum money scheme that uses "minimal quantumness", namely, local quantum computation and only classical communication. Public-key semi-quantum money (Radian and Sattath, AFT 2019) is a quantum money scheme where the algorithm of the bank is completely classical, and quantum banknotes are publicly verifiable on any quantum computer. In particular, such a scheme relies on local quantum computation and only classical communication. The only known construction of public-key semi-quantum money is based on quantum lightning (Zhandry, EUROCRYPT 2019), which is based on a computational assumption that is now known to be broken. In this work, we construct public-key semi-quantum money based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning With Errors problem. The technical centerpiece of our construction is a new 3-message protocol, where a classical computer can delegate to a quantum computer the generation of a quantum state that is both unclonable and publicly verifiable.
Last updated:  2021-10-24
On Unpadded NTRU Quantum (In)Security
Théodore Conrad-Frenkiel, Rémi Géraud-Stewart, David Naccache
This paper utilizes the techniques used by Regev \cite{DBLP:journals/jacm/Regev09} and Lyubashevsky, Peikert \& Regev in the security reduction of LWE and its algebraic variants \cite{DBLP:conf/eurocrypt/LyubashevskyPR13} to exhibit a quantum reduction from the decryption of NTRU to leaking information about the secret key. Since this reduction requires decryption with the same key one wishes to attack, it renders NTRU vulnerable to the same type of attacks that affect the Rabin--Williams scheme \cite{DBLP:conf/eurocrypt/Bernstein08} -- albeit requiring a quantum decryption query. A common practice thwarting such attacks consists in applying the Fujisaki-Okamoto (FO, \cite{DBLP:conf/pkc/FujisakiO99}) transformation before encrypting. However, not all NTRU protocols enforce this protection. In particular the DPKE version of NTRU \cite{DBLP:conf/eurocrypt/SaitoXY18} is susceptible to such an attack.
Last updated:  2021-10-24
Improving First-Order Threshold Implementations of SKINNY
Andrea Caforio, Daniel Collins, Ognjen Glamocanin, Subhadeep Banik
Threshold Implementations have become a popular generic technique to construct circuits resilient against power analysis attacks. In this paper, we look to devise efficient threshold circuits for the lightweight block cipher family SKINNY. The only threshold circuits for this family are those proposed by its designers, who decomposed the 8-bit S-box into four quadratic S-boxes and constructed a 3-share byte-serial threshold circuit that executes the substitution layer over four cycles. In particular, we revisit the algebraic structure of the S-box and prove that it is possible to decompose it into (a) three quadratic S-boxes and (b) two cubic S-boxes. Such decompositions allow us to construct threshold circuits that require three shares and execute each round function in three cycles instead of four, and similarly circuits that use four shares and require two cycles per round. Our constructions significantly reduce latency and energy consumption per encryption operation. Notably, to validate our designs, we synthesize our circuits on standard CMOS cell libraries to evaluate performance, and we conduct leakage detection via statistical tests on power traces on FPGA platforms to assess security.
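To illustrate the kind of sharing involved, here is a generic first-order 3-share threshold implementation of a single AND gate (the classic gadget of Nikova et al., shown for intuition; it is not the paper's SKINNY S-box decomposition):

```python
import random

def share3(v, rng):
    """Split a bit into three XOR shares."""
    s1, s2 = rng.randint(0, 1), rng.randint(0, 1)
    return s1, s2, v ^ s1 ^ s2

def ti_and(xs, ys):
    """First-order TI of z = x AND y: each output share omits one input-share
    index (non-completeness) and the shares XOR to x*y (correctness)."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of share 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share 3
    return z1, z2, z3

rng = random.Random(7)
for x in (0, 1):
    for y in (0, 1):
        z = ti_and(share3(x, rng), share3(y, rng))
        assert z[0] ^ z[1] ^ z[2] == (x & y)
```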
Last updated:  2023-05-13
PREs with HRA Security and Key Privacy Based on Standard LWE Assumptions
Yang Wang, Yanmin Zhao, Mingqiang Wang
Proxy re-encryption (PRE) schemes, which nicely solve the problem of delegating decryption rights, enable a semi-trusted proxy to transform a ciphertext encrypted under one key into a ciphertext of the same message under another arbitrary key. For a long time, the semantic security of PREs was treated quite similarly to that of public key encryption (PKE) schemes. Cohen first pointed out the insufficiency of security under chosen-plaintext attacks (CPA) for PREs in PKC 2019, and proposed a {\it{strictly stronger}} security notion, named security under honest re-encryption attacks (HRA), for PREs. Surprisingly, only a few PREs satisfy the stronger HRA security, and almost all of them are pairing-based to date. To the best of our knowledge, we present the first detailed construction of HRA-secure single-hop PREs based on standard LWE problems with {\it{comparably small and polynomially-bounded}} parameters in this paper. Combining known reductions, the HRA security of our PREs can also be guaranteed by worst-case basic lattice problems (e.g. SIVP$_{\gamma}$). Meanwhile, our single-hop PRE schemes are also key-private, which means that the implicit identities of a re-encryption key will not be revealed even in the case of a proxy colluding with some corrupted users. Some discussions about the key-privacy of multi-hop PREs are also provided, which indicate that several constructions of multi-hop PREs do not satisfy their key-privacy definitions.
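For readers new to PREs, the toy sketch below shows the bare functionality using the classic BBS98-style ElGamal construction over a tiny Schnorr group; this is a deliberately different, discrete-log-based scheme, and it is neither LWE-based, nor HRA-secure, nor key-private.

```python
import random

p, q, g = 2039, 1019, 2     # toy Schnorr group: p = 2q + 1, g of order q
rng = random.Random(1)

def keygen():
    sk = rng.randrange(1, q)
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    r = rng.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pk, r, p)    # (m * g^r, g^(sk*r))

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)                # c2^(1/sk) = g^r
    return (c1 * pow(g_r, -1, p)) % p

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q              # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                       # g^(a*r) -> g^(b*r)

a, pk_a = keygen()
b, pk_b = keygen()
m = pow(g, rng.randrange(1, q), p)                  # message in the subgroup
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(a, b), ct_a)
assert decrypt(a, ct_a) == decrypt(b, ct_b) == m
```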
Last updated:  2022-09-12
Encryption to the Future: A Paradigm for Sending Secret Messages to Future (Anonymous) Committees
Matteo Campanelli, Bernardo David, Hamidreza Khoshakhlagh, Anders Konring, Jesper Buus Nielsen
A number of recent works have constructed cryptographic protocols with flavors of adaptive security by having a randomly-chosen anonymous committee run at each round. Since most of these protocols are stateful, transferring secret states from past committees to future, but still unknown, committees is a crucial challenge. Previous works have tackled this problem with approaches tailor-made for their specific setting, which mostly rely on using a blockchain to orchestrate auxiliary committees that aid in the state hand-over process. In this work, we look at this challenge as an important problem on its own and initiate the study of Encryption to the Future (EtF) as a cryptographic primitive. First, we define a notion of an EtF scheme where time is determined with respect to an underlying blockchain and a lottery selects parties to receive a secret message at some point in the future. While this notion seems overly restrictive, we establish two important facts: 1. if used to encrypt towards parties selected in the ``far future'', EtF implies witness encryption for NP over a blockchain; 2. if used to encrypt only towards parties selected in the ``near future'', EtF is not only sufficient for transferring state among committees as required by previous works, but also captures previous tailor-made solutions. To corroborate these results, we provide a novel construction of EtF based on witness encryption over commitments (cWE), which we instantiate from a number of standard assumptions via a construction based on generic cryptographic primitives. Finally, we show how to use ``near future'' EtF to obtain ``far future'' EtF with a protocol based on an auxiliary committee whose communication complexity is \emph{independent} of the length of plaintext messages being sent to the future.
Last updated:  2022-02-21
Higher-Order Masked Ciphertext Comparison for Lattice-Based Cryptography
Jan-Pieter D'Anvers, Daniel Heinz, Peter Pessl, Michiel van Beirendonck, Ingrid Verbauwhede
Checking the equality of two arrays is a crucial building block of the Fujisaki-Okamoto transformation, and as such it is used in several post-quantum key encapsulation mechanisms including Kyber and Saber. While this comparison operation is easy to perform in a black box setting, it is hard to protect efficiently against side-channel attacks. For instance, the hash-based method by Oder et al. is limited to first-order masking, a higher-order method by Bache et al. was shown to be flawed, and a very recent higher-order technique by Bos et al. suffers from high runtime. In this paper, we first demonstrate that the hash-based approach, and likely many similar first-order techniques, succumb to a relatively simple side-channel collision attack: we can successfully recover a Kyber512 key using just 6000 traces. While this does not break the security claims, it does show the need for efficient higher-order methods. We then present a new higher-order masked comparison algorithm based on the (insecure) higher-order method of Bache et al. Our new method is 4.2x (resp. 7.5x) faster than the method of Bos et al. for 2nd-order (resp. 3rd-order) masking on the ARM Cortex-M4, and unlike the method of Bache et al., the new technique takes ciphertext compression into account. We prove correctness, security, and masking security in detail and provide performance numbers for 2nd- and 3rd-order implementations. Finally, we verify the side-channel security of our implementation using the test vector leakage assessment (TVLA) methodology.
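To illustrate the kind of construction at stake, here is a minimal first-order, hash-based comparison sketch in Python: a secret array is boolean-masked into two shares and compared against a public value by hashing the XOR-adjusted shares, so the secret is never reassembled in a single variable. This is a simplification in the spirit of the hash-based approach, not the exact algorithm of Oder et al. or the new higher-order method, and it is precisely the style of first-order technique shown here to succumb to collision attacks.

    import hashlib
    import secrets

    def mask(x: bytes):
        """Split x into two boolean shares with x = s0 XOR s1."""
        s1 = secrets.token_bytes(len(x))
        s0 = bytes(a ^ b for a, b in zip(x, s1))
        return s0, s1

    def masked_equal(s0: bytes, s1: bytes, y: bytes) -> bool:
        # x == y  <=>  (s0 XOR y) == s1, so it suffices to compare
        # hashes of the two sides without ever recombining the shares
        t = bytes(a ^ b for a, b in zip(s0, y))
        return hashlib.sha3_256(t).digest() == hashlib.sha3_256(s1).digest()

    s0, s1 = mask(b"re-encrypted ciphertext bytes")
    assert masked_equal(s0, s1, b"re-encrypted ciphertext bytes")
    assert not masked_equal(s0, s1, b"re-encrypted ciphertext byteZ")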
Last updated:  2023-10-21
Revisiting Meet-in-the-Middle Cryptanalysis of SIDH/SIKE with Application to the $IKEp182 Challenge
Aleksei Udovenko and Giuseppe Vitto
This work focuses on concrete cryptanalysis of the isogeny-based cryptosystems SIDH/SIKE under realistic memory/storage constraints. More precisely, we are solving the problem of finding an isogeny of a given smooth degree between two given supersingular elliptic curves. Recent works by Adj et al. (SAC 2018), Costello et al. (PKC 2020), and Longa et al. (CRYPTO 2021) suggest that parallel "memoryless" golden collision search by van Oorschot-Wiener (JoC 1999) is the best realistic approach for the problem. We show instead that the classic meet-in-the-middle attack is still competitive due to its very low computational overhead, at least for small parameters. As a concrete application, we apply the meet-in-the-middle attack with optimizations to the $IKEp182 challenge posed by Microsoft Research. The attack was executed on a cluster and required less than 10 core-years and 256TiB of high-performance network storage (GPFS). Different trade-offs allow execution of the attack with similar time complexity and reduced storage requirements of only about 70TiB.
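At its core, the classic meet-in-the-middle attack is a table lookup over half-depth walks from both sides: store the endpoints (j-invariants of the middle curves) of all half-degree walks from E0, then enumerate walks from E1 until an endpoint collides. The Python sketch below shows only this generic shape, with toy arithmetic standing in for the isogeny walks, which in practice require a dedicated isogeny library and, as in this work, enormous storage for the table.

    def meet_in_the_middle(left_walks, right_walks):
        """Generic MITM: find half-walks from both sides meeting at one endpoint.

        left_walks / right_walks yield (path, endpoint) pairs; the endpoints
        play the role of j-invariants of the middle curves.
        """
        table = {}
        for path, end in left_walks:
            table.setdefault(end, path)      # storage cost: all left endpoints
        for path, end in right_walks:
            if end in table:
                return table[end], path      # full route E0 -> middle -> E1
        return None

    # Toy stand-in: "walks" are fourth-power maps modulo a small prime.
    p = 1009
    left = (((s,), pow(s, 4, p)) for s in range(1, p))
    right = (((t,), pow(3 * t + 7, 4, p)) for t in range(p))
    print(meet_in_the_middle(left, right))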
Last updated:  2021-10-24
Extending the Tally-Hiding Ordinos System: Implementations for Borda, Hare-Niemeyer, Condorcet, and Instant-Runoff Voting
Fabian Hertel, Nicolas Huber, Jonas Kittelberger, Ralf Kuesters, Julian Liedtke, Daniel Rausch
Modern electronic voting systems (e-voting systems) are designed to achieve a variety of security properties, such as verifiability, accountability, and vote privacy. Some of these systems aim at so-called tally-hiding: they compute the election result, according to some result function, like the winner of the election, without revealing any other information to any party. In particular, if desired, they neither reveal the full tally consisting of all (aggregated or even individual) votes nor parts of it, except for the election result according to the result function. Tally-hiding systems offer many attractive features, such as strong privacy guarantees both for voters and for candidates, and protection against Italian attacks. The Ordinos system is a recent provably secure framework for accountable tally-hiding e-voting that extends Helios and can be instantiated for various election methods and result functions. So far, practical instantiations and implementations of Ordinos have been developed only for rather simple result functions (e.g., computing the $k$ best candidates) and single/multi-vote elections. In this paper, we propose and implement several new Ordinos instantiations in order to support Borda voting, the Hare-Niemeyer method for proportional representation, multiple Condorcet methods, and Instant-Runoff Voting. Our instantiations, which are based on suitable secure multi-party computation (MPC) components, offer the first tally-hiding implementations for these voting methods. To evaluate the practicality of our MPC components and the resulting e-voting systems, we provide extensive benchmarks for all our implementations.
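For concreteness, the plaintext version of one of the newly supported result functions, Hare-Niemeyer (largest remainder) apportionment, is sketched below in Python. The paper's contribution is evaluating such result functions inside MPC so that only the final allocation is revealed; this cleartext sketch only illustrates what the MPC components must compute.

    from fractions import Fraction

    def hare_niemeyer(votes: dict, seats: int) -> dict:
        """Largest-remainder apportionment with the Hare quota (plaintext)."""
        total = sum(votes.values())
        # exact quota of each party: votes_p * seats / total
        quota = {p: Fraction(v * seats, total) for p, v in votes.items()}
        alloc = {p: int(q) for p, q in quota.items()}   # integer parts first
        leftover = seats - sum(alloc.values())
        # remaining seats go to the largest fractional remainders
        by_remainder = sorted(quota, key=lambda p: quota[p] - alloc[p], reverse=True)
        for p in by_remainder[:leftover]:
            alloc[p] += 1
        return alloc

    # prints {'A': 4, 'B': 3, 'C': 2, 'D': 1}
    print(hare_niemeyer({"A": 41000, "B": 29000, "C": 17000, "D": 13000}, 10))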
Last updated:  2021-10-24
With a Little Help from My Friends: Constructing Practical Anonymous Credentials
Lucjan Hanzlik, Daniel Slamanig
Anonymous credentials (ACs) are a powerful cryptographic tool for the secure use of digital services, when aiming simultaneously for strong privacy guarantees for users and strong authentication guarantees for service providers. They allow users to selectively prove possession of attributes encoded in a credential without revealing any other meaningful information about themselves. While there is a significant body of research on AC systems, modern use-cases of ACs such as mobile applications come with various requirements not sufficiently considered so far. These include preventing the sharing of credentials and coping with the resource constraints of the platforms (e.g., smart cards such as SIM cards in smartphones). Such aspects are typically out of scope of AC constructions, and thus AC systems that can be considered entirely practical have been elusive so far. In this paper we address this problem by introducing and formalizing the notion of core/helper anonymous credentials (CHAC). The model considers a constrained core device (e.g., a SIM card) and a powerful helper device (e.g., a smartphone). The key idea is that the core device performs operations that do not depend on the size of the credential or the number of attributes, while at the same time the helper device is unable to use the credential without its help. We present a provably secure generic construction of CHACs using a combination of signatures with flexible public keys (SFPK) and the novel notion of aggregatable attribute-based equivalence class signatures (AAEQ), along with a concrete instantiation. The key characteristics of our scheme are that the size of showing tokens is independent of the number of attributes in the credential(s) and that the core device only needs to compute a single elliptic curve scalar multiplication, regardless of the number of attributes. We confirm the practical efficiency of our CHACs with an implementation of our scheme on a Multos smart card as the core and an Android smartphone as the helper device. A credential showing requires less than 500 ms on the smart card and around 200 ms on the smartphone (even for a credential with 1000 attributes).
Last updated:  2022-08-29
Autoencoder Assist: An Efficient Profiling Attack on High-dimensional Datasets
Qi Lei, Zijia Yang, Qin Wang, Yaoling Ding, Zhe Ma, An Wang
Deep learning (DL)-based profiled attacks have proven to be a powerful tool in side-channel analysis. A variety of multi-layer perceptron (MLP) networks and convolutional neural networks (CNNs) have been applied to cryptographic algorithm implementations to recover the correct keys with fewer traces and in a shorter time. However, most attacks merely focus on small datasets, in which the points of interest are well-trimmed for the attack. Countermeasures applied in embedded systems often result in high-dimensional side-channel traces, i.e., each input trace has high dimension. Time jittering and random delay techniques introduce desynchronization and further increase SCA complexity. Such traces inevitably require complicated neural network designs with large numbers of trainable parameters to recover the correct keys. Therefore, performing profiled attacks directly on high-dimensional datasets is difficult. To bridge this gap, we propose a dimension reduction tool for high-dimensional traces that combines signal-to-noise ratio (SNR) analysis and an autoencoder. With the designed asymmetric undercomplete autoencoder (UAE) architecture, we extract a small group of critical features from numerous time samples. The compression rate of our UAE method reaches 40x on synchronized datasets and 30x on desynchronized datasets. This preprocessing step facilitates profiled attacks by extracting the potential leakage features. To demonstrate its effectiveness, we evaluate our proposed method on the raw ASCAD dataset with 100,000 samples in each trace. We also derive desynchronized datasets from the raw ASCAD dataset and validate our method under the random delay effect. We further propose a $2^n$-structure MLP network as the attack model. By applying the UAE and the $2^n$-structure MLP network to these traces, experimental results show that all correct subkeys on the synchronized datasets (16 S-boxes) and the desynchronized datasets are successfully revealed within hundreds of seconds. This shows that our autoencoder can significantly facilitate DL-based profiled attacks on high-dimensional datasets.
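A minimal PyTorch sketch of an undercomplete autoencoder for trace compression is shown below. The abstract does not specify the paper's asymmetric UAE layer sizes or the SNR-guided feature selection, so the dimensions here are illustrative placeholders: traces are compressed from n_samples time samples to a small latent vector, and the encoder half is what would feed the downstream profiling model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UndercompleteAE(nn.Module):
        """Compress side-channel traces into a small latent feature vector."""
        def __init__(self, n_samples: int, n_latent: int):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_samples, 512), nn.ReLU(),
                nn.Linear(512, n_latent),        # bottleneck: n_latent << n_samples
            )
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 512), nn.ReLU(),
                nn.Linear(512, n_samples),       # reconstruct the raw trace
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = UndercompleteAE(n_samples=10_000, n_latent=250)   # 40x compression
    traces = torch.randn(32, 10_000)                          # placeholder batch
    loss = F.mse_loss(model(traces), traces)                  # reconstruction loss
    loss.backward()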
Last updated:  2022-04-04
How to Handle Invalid Queries for Malicious-Private Protocols Based on Homomorphic Encryption
Koji Nuida
We consider a setting of two-party computation between a server and a client where every message received by the server is encrypted by a fully homomorphic encryption (FHE) scheme and its decryption key is held by the client only. Akavia and Vald (IACR ePrint Archive, 2021) revisited the privacy problem in such protocols against malicious servers and revealed (as opposed to a naive expectation) that under certain conditions, a malicious server can recover the client's input even if the underlying FHE scheme is IND-CPA secure. They also gave some sufficient conditions for the FHE scheme to ensure privacy against malicious servers. However, their argument did not consider the possibility that a query from a malicious server to a client involves an invalid ciphertext. In this paper, we show, by giving a concrete example, that if such an invalid query is simply rejected by the client, then the sufficient conditions of Akavia and Vald's result do not in general ensure privacy against malicious servers. We also propose an alternative way of handling an invalid query, in which the client returns a random ciphertext (without explicitly rejecting the query), and show that such a protocol is private against malicious servers if the underlying FHE scheme is IND-CPA secure, sanitizable (in the sense of Ducas and Stehl\'{e}, EUROCRYPT 2016), and circular secure.
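The proposed client-side behavior can be sketched as follows. The FHE interface here is hypothetical scaffolding (a real protocol would use a concrete sanitizable FHE implementation), and f stands for whatever per-query processing the protocol prescribes; the point is only the control flow: an invalid ciphertext is answered with a fresh encryption of a random plaintext rather than an explicit rejection, so the reply reveals nothing about how validity was handled.

    import secrets

    class FHE:
        """Hypothetical stand-in for an IND-CPA secure, sanitizable FHE scheme."""
        def is_valid_ciphertext(self, ct: bytes) -> bool: ...
        def decrypt(self, sk, ct: bytes) -> int: ...
        def encrypt(self, pk, m: int) -> bytes: ...
        def sanitize(self, ct: bytes) -> bytes: ...

    def answer_query(fhe: FHE, sk, pk, ct: bytes, f) -> bytes:
        if not fhe.is_valid_ciphertext(ct):
            # no explicit rejection: return a random ciphertext instead
            return fhe.encrypt(pk, secrets.randbits(32))
        m = fhe.decrypt(sk, ct)
        # sanitization (Ducas-Stehle, EUROCRYPT 2016) hides the noise profile
        return fhe.sanitize(fhe.encrypt(pk, f(m)))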
Last updated:  2021-10-24
SME: Scalable Masking Extensions
Ben Marshall, Dan Page
Supporting masking countermeasures for non-invasive side-channel security in instruction set architectures is a hard problem. Masked operations often have a large number of inputs and outputs, and enabling portable higher-order masking has remained difficult. However, there are clear benefits to enabling this in terms of performance, code density and security guarantees. We present SME, an instruction set extension for enabling secure and efficient software masking of cryptographic code at higher security orders. Our design improves on past work by enabling the same software to run at higher masking orders, depending on the level of security the CPU/SoC implementer has deemed appropriate for their product or device at design time. Our approach relies on similarities between implementations of higher-order masking schemes and traditional vector programming. It greatly simplifies the task of writing masked software, and restores the basic promise of ISAs: that the same software will run correctly and securely on any correctly implemented CPU with the necessary security guarantees. We describe our concept as a custom extension to the RISC-V ISA and its soon-to-be-ratified scalar cryptography extension. An example implementation is also described, with performance and area tradeoffs detailed for several masking security orders. To our knowledge, ours is the first example of enabling flexible side-channel secure implementations of the official RISC-V lightweight cryptography instructions.
Last updated:  2021-10-24
A Note on the Pseudorandomness of Low-Degree Polynomials over the Integers
Aayush Jain, Alexis Korb, Paul Lou, Amit Sahai
We initiate the study of a problem called the Polynomial Independence Distinguishing Problem (PIDP). The problem is parameterized by a set of polynomials $\mathcal{Q}=(q_1,\ldots, q_m)$, where each $q_i:\mathbb{R}^n\rightarrow \mathbb{R}$, and an input distribution $\mathcal{D}$ over the reals. The goal of the problem is to distinguish a tuple of the form $\{ q_i, q_i(\mathbf{x})\}_{i\in [m]}$ from $\{ q_i, q_i(\mathbf{x}_i)\}_{i\in [m]}$, where $\mathbf{x}, \mathbf{x}_1, \ldots, \mathbf{x}_m$ are each sampled independently from the distribution $\mathcal{D}^n$. Refutation and search versions of this problem are conjectured to be hard in general for polynomial time algorithms (Feige, STOC 02) and are also subject to known theoretical lower bounds for various hierarchies (such as Sum-of-Squares and Sherali-Adams). Nevertheless, we show polynomial time distinguishers for the problem in several scenarios, including settings where such lower bounds apply to the search or refutation versions of the problem. Our results apply to the setting where each polynomial is a constant-degree multilinear polynomial. We show that this natural problem admits polynomial time distinguishing algorithms for the following scenarios. Non-trivial Distinguishers: We define a non-trivial distinguisher to be an algorithm that runs in time $n^{O(1)}$ and distinguishes between the two distributions with probability at least $n^{-O(1)}$. We show that such non-trivial distinguishers exist for large classes of worst-case families of polynomials and essentially any non-trivial input distribution that is symmetric around zero and is not equivalent to a distribution over Boolean values. In particular, we show that when $m\geq n$ and the sets of indices corresponding to the variables present in each monomial exhibit a weak expansion property with expansion factor greater than $1/2$ for unions of at most $4$ sets, then a non-trivial distinguisher exists. Overwhelming Distinguishers: Next we consider the problem of amplifying the success probability of the distinguisher, to guarantee that it succeeds with probability $1-n^{-\omega(1)}$. We obtain such an overwhelming distinguisher for natural random classes of homogeneous multilinear constant-degree-$d$ polynomials, denoted by $\mathcal{Q}_{n,d,p}$, and natural input distributions $\mathcal{D}$ such as discrete Gaussians or uniform distributions over bounded intervals. The polynomials are chosen by independently sampling each coefficient to be $0$ with probability $p$ and uniformly from $\mathcal{D}$ otherwise. For these polynomials, we show a surprisingly simple distinguisher that requires $p> n\log n/\binom{n}{d}$ and $m\geq \tilde{O}(n^{2})$ samples, independent of the degree $d$. This is in contrast with the setting for refutation, where we have sum-of-squares lower bounds against constant-degree sum-of-squares algorithms (Grigoriev, TCS 01; Schoenebeck, FOCS 08) for this parameter regime for degree $d>6$.
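To make the two distributions concrete, the NumPy sketch below builds a small PIDP instance with random multilinear quadratics and a rounded-Gaussian stand-in for $\mathcal{D}$. The polynomial ensemble and parameters are illustrative choices, not the exact $\mathcal{Q}_{n,d,p}$ family of the paper; the distinguisher's task is to tell sample(dependent=True) (one shared input $\mathbf{x}$) from sample(dependent=False) (a fresh $\mathbf{x}_i$ per polynomial).

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, d = 20, 50, 2              # variables, polynomials, degree (multilinear)

    def D(size):                     # rounded Gaussian as a stand-in for D
        return np.round(rng.normal(0.0, 3.0, size))

    # each q_i is a sparse multilinear quadratic: sum_j c_j * x_{a_j} * x_{b_j}
    polys = []
    for _ in range(m):
        monomials = [rng.choice(n, size=d, replace=False) for _ in range(5)]
        coeffs = D(5)
        polys.append((monomials, coeffs))

    def evaluate(poly, x):
        monomials, coeffs = poly
        return sum(c * np.prod(x[idx]) for idx, c in zip(monomials, coeffs))

    def sample(dependent: bool):
        if dependent:                # all q_i evaluated on one shared x ~ D^n
            x = D(n)
            return [evaluate(q, x) for q in polys]
        return [evaluate(q, D(n)) for q in polys]   # independent x_i per q_i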
Last updated:  2022-04-20
Exploring Feature Selection Scenarios for Deep Learning-based Side-Channel Analysis
Guilherme Perin, Lichao Wu, Stjepan Picek
One of the main promoted advantages of deep learning in profiling side-channel analysis is the possibility of skipping the feature engineering process. Despite that, most recent publications still perform feature selection, as the attacked interval of the side-channel measurements is pre-selected. This is similar to the worst-case security assumptions in security evaluations, where the random secret shares (e.g., mask shares) are known during the profiling phase: an evaluator can identify the locations of points of interest and efficiently trim the trace interval. To broadly understand how feature selection impacts the performance of deep learning-based profiling attacks, this paper investigates three different feature selection scenarios that could realistically be used in practical security evaluations. The scenarios range from the minimum possible number of features (worst-case security assumptions) to the whole available traces. Our results emphasize that deep neural networks as profiling models achieve successful key recovery independently of the explored feature selection scenarios against first-order masked software implementations of AES-128. First, we show that feature selection under the worst-case security assumptions results in optimal profiling models that are highly dependent on the number of features and the signal-to-noise ratio levels. Second, we demonstrate that attacking raw side-channel measurements with small deep neural networks also provides optimal models, which shortens the gap between worst-case security evaluations and online (realistic) profiling attacks. In all explored feature selection scenarios, the hyperparameter search always indicates a successful model with up to eight hidden layers for MLPs and CNNs, suggesting that complex models are not required for the considered datasets. Our results demonstrate key recovery with fewer than ten attack traces for all datasets for at least one of the feature selection scenarios. Additionally, in several cases, we can recover the target key with a single attack trace.
Last updated:  2021-10-24
Three Attacks on Proof-of-Stake Ethereum
Caspar Schwarz-Schilling, Joachim Neu, Barnabé Monnot, Aditya Asgaonkar, Ertem Nusret Tas, David Tse
Recently, two attacks were presented against Proof-of-Stake (PoS) Ethereum: one where short-range reorganizations of the underlying consensus chain are used to increase individual validators' profits and delay consensus decisions, and one where adversarial network delay is leveraged to stall consensus decisions indefinitely. We provide refined variants of these attacks, considerably relaxing the requirements on adversarial stake and network timing, and thus rendering the attacks more severe. Combining techniques from both refined attacks, we obtain a third attack which allows an adversary with a vanishingly small fraction of stake and no control over network message propagation (assuming instead probabilistic message propagation) to cause even long-range consensus chain reorganizations. Honest-but-rational or ideologically motivated validators could use this attack to increase their profits or stall the protocol, threatening incentive alignment and security of PoS Ethereum. The attack can also lead to destabilization of consensus from congestion in vote processing.
Last updated:  2024-03-21
A General Framework of Homomorphic Encryption for Multiple Parties with Non-Interactive Key-Aggregation
Hyesun Kwak, Dongwon Lee, Yongsoo Song, and Sameer Wagh
Homomorphic Encryption (HE) is a useful primitive for secure computation, but it is not generally applicable when multiple parties are involved, as the authority is solely concentrated in a single party, the secret key owner. To solve this issue, several variants of HE have emerged in the multiparty setting, resulting in two major lines of work -- Multi-Party HE (MPHE) and Multi-Key HE (MKHE). In short, MPHEs tend to be more efficient, but all parties must be specified at the beginning to collaboratively generate a public key, and the access structure is fixed throughout the entire computation. On the other hand, MKHEs have relatively poor performance but provide better flexibility in that a new party can generate its own key and join the computation at any time. In this work, we propose a new HE primitive, called Multi-Group HE (MGHE). Stated informally, an MGHE scheme provides seamless integration between MPHE and MKHE, and has the best of both worlds. In an MGHE scheme, a group of parties jointly generates a public key for efficient single-key encryption and homomorphic operations, similar to MPHE. However, it also supports computation on encrypted data under different keys, in the MKHE manner. We formalize the security and correctness notions for MGHE and discuss the relation with previous approaches. We also present a concrete instantiation of MGHE from the BFV scheme and provide a proof-of-concept implementation to demonstrate its performance. In particular, our MGHE construction has a useful property that the key generation is simply done by aggregating individual keys without any interaction between the parties, whereas all existing MPHE constructions rely on multi-round key-generation protocols. Finally, we describe a method to design a general multi-party computation protocol from our MGHE scheme.
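The non-interactive key-aggregation property can be illustrated with a toy RLWE-flavored sketch: under a common random polynomial $a$ (a CRS), each party independently publishes $b_i = -a\cdot s_i + e_i$, and a joint public key under the combined secret $s = \sum_i s_i$ is obtained by simply summing the published shares, with no rounds of interaction. The ring dimension and noise below are insecure demo values, and this is a generic sketch of additive aggregation, not the paper's BFV-based MGHE construction.

    import numpy as np

    N, Q = 8, 2**16 + 1   # toy ring R_Q = Z_Q[X]/(X^N + 1); insecure demo sizes
    rng = np.random.default_rng(0)

    def polymul(a, b):
        """Negacyclic convolution: multiply in Z_Q[X]/(X^N + 1)."""
        c = np.zeros(2 * N, dtype=np.int64)
        for i in range(N):
            c[i:i + N] = (c[i:i + N] + a[i] * b) % Q
        return (c[:N] - c[N:]) % Q

    a = rng.integers(0, Q, N)                 # common random polynomial (CRS)

    def keygen():
        s = rng.integers(-1, 2, N) % Q        # small ternary secret
        e = rng.integers(-2, 3, N) % Q        # small noise
        return s, (e - polymul(a, s)) % Q     # public share b_i = -a*s_i + e_i

    parties = [keygen() for _ in range(3)]
    b = sum(pk for _, pk in parties) % Q      # aggregate WITHOUT interaction
    s = sum(sk for sk, _ in parties) % Q      # (never actually reconstructed)

    err = (b + polymul(a, s)) % Q             # equals the summed noise mod Q
    err = np.minimum(err, Q - err)            # center mod Q
    assert np.all(err <= 6)                   # 3 parties * noise in [-2, 2]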
Last updated:  2021-10-24
Analysis of Client-side Security for Long-term Time-stamping Services
Long Meng, Liqun Chen
Time-stamping services produce time-stamp tokens as evidence to prove that digital data existed at given points in time. Time-stamp tokens contain verifiable cryptographic bindings between data and time, which are produced using cryptographic algorithms. In the ANSI, ISO/IEC and IETF standards for time-stamping services, cryptographic algorithms are addressed in two aspects: (i) client-side hash functions used to hash data into digests for nondisclosure, and (ii) server-side algorithms used to bind the time and digests of data. These algorithms have limited lifespans due to their operational life cycles and the increasing computational power of attackers. After an algorithm is compromised, time-stamp tokens using that algorithm are no longer trusted. The ANSI and ISO/IEC standards provide renewal mechanisms for time-stamp tokens. However, the renewal mechanisms for client-side hash functions are specified ambiguously, which may lead to flawed implementations. Moreover, existing security analyses of long-term time-stamping schemes only cover server-side renewal; client-side renewal is missing. In this paper, we analyse the necessity of client-side renewal, and propose a comprehensive long-term time-stamping scheme that addresses both client-side and server-side renewal mechanisms. After that, we formally analyse and evaluate the client-side security of our proposed scheme.
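The general shape of client-side renewal can be sketched as follows: before the original hash function is compromised, the client re-hashes the original data with a stronger hash and obtains a new token binding the fresh digest together with the old token. This is a simplified illustration of the renewal idea, not the paper's precise scheme; request_timestamp is a hypothetical stand-in for the interaction with a time-stamping authority (TSA).

    import hashlib
    import time

    def request_timestamp(digest: bytes) -> dict:
        """Hypothetical TSA call; a real token would carry a TSA signature."""
        return {"digest": digest, "time": time.time()}

    def initial_token(data: bytes) -> dict:
        # client-side hashing: the TSA never sees the data itself
        return request_timestamp(hashlib.sha256(data).digest())

    def client_side_renew(data: bytes, old_token: dict) -> dict:
        # re-hash the ORIGINAL data with a stronger hash function, and bind
        # the new digest to the old token before SHA-256 becomes untrusted
        new_digest = hashlib.sha3_512(data).digest()
        return request_timestamp(new_digest + old_token["digest"])

    tok0 = initial_token(b"contract v1")
    tok1 = client_side_renew(b"contract v1", tok0)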
Last updated:  2021-10-24
Franchised Quantum Money
Bhaskar Roberts, Mark Zhandry
The construction of public key quantum money based on standard cryptographic assumptions is a longstanding open question. Here we introduce franchised quantum money, an alternative form of quantum money that is easier to construct. Franchised quantum money retains the features of a useful quantum money scheme, namely unforgeability and local verification: anyone can verify banknotes without communicating with the bank. In franchised quantum money, every user gets a unique secret verification key, and the scheme is secure against counterfeiting and sabotage, a new security notion that appears in the franchised model. Finally, we construct franchised quantum money and prove security assuming one-way functions.
Last updated:  2022-03-02
Hiding in Plain Sight: Memory-tight Proofs via Randomness Programming
Ashrujit Ghoshal, Riddhi Ghosal, Joseph Jaeger, Stefano Tessaro
This paper continues the study of memory-tight reductions (Auerbach et al., CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known, and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight. This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary. We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions or probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions we avoid the generic impossibility results of Auerbach et al. (CRYPTO '17) and Ghoshal et al. (CRYPTO '20). As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
Last updated:  2021-10-24
Focus is Key to Success: A Focal Loss Function for Deep Learning-based Side-channel Analysis
Maikel Kerkhof, Lichao Wu, Guilherme Perin, Stjepan Picek
The deep learning-based side-channel analysis represents one of the most powerful side-channel attack approaches. Thanks to its capability in dealing with raw features and countermeasures, it has become the de facto standard evaluation method in evaluation labs and certification schemes. To reach this performance level, recent works significantly improved deep learning-based attacks from various perspectives, like hyperparameter tuning, design guidelines, or custom neural network architecture elements. Still, limited attention has been given to the core of the learning process: the loss function. This paper analyzes the limitations of the existing loss functions and then proposes a novel side-channel analysis-optimized loss function, the Focal Loss Ratio (FLR), to cope with the drawbacks identified in other loss functions. To validate our design, we 1) conduct a thorough experimental study considering various scenarios (datasets, leakage models, neural network architectures) and 2) compare with other loss functions commonly used in deep learning-based side-channel analysis (both ``traditional'' ones and those designed for side-channel analysis). Our results show that the FLR loss outperforms other loss functions in various conditions while incurring no computational overhead compared to common loss functions like categorical cross-entropy.
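For background, the sketch below implements the standard focal loss of Lin et al. (2017) in PyTorch, which down-weights well-classified examples by a factor $(1-p)^\gamma$. The paper's Focal Loss Ratio is not specified in the abstract, so this is explicitly not FLR, only the standard loss family it builds on.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits: torch.Tensor, target: torch.Tensor, gamma: float = 2.0):
        """Standard focal loss (Lin et al., 2017); NOT the paper's FLR."""
        log_p = F.log_softmax(logits, dim=-1)
        log_p_true = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
        p_true = log_p_true.exp()
        # down-weight easy examples: the factor (1 - p)^gamma vanishes as p -> 1
        return (-(1.0 - p_true) ** gamma * log_p_true).mean()

    logits = torch.randn(16, 256)                 # e.g., 256 key-byte hypotheses
    labels = torch.randint(0, 256, (16,))
    print(focal_loss(logits, labels))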
Last updated:  2021-10-24
A Concrete Treatment of Efficient Continuous Group Key Agreement via Multi-Recipient PKEs
Keitaro Hashimoto, Shuichi Katsumata, Eamonn Postlethwaite, Thomas Prest, Bas Westerbaan
Continuous group key agreements (CGKAs) are a class of protocols that can provide strong security guarantees to secure group messaging protocols such as Signal and MLS. Protection against device compromise is provided by commit messages: at a regular rate, each group member may refresh their key material by uploading a commit message, which is then downloaded and processed by all the other members. In practice, propagating commit messages dominates the bandwidth consumption of existing CGKAs. We propose Chained CmPKE, a CGKA with an asymmetric bandwidth cost: in a group of $N$ members, a commit message costs $O(N)$ to upload and $O(1)$ to download, for a total bandwidth cost of $O(N)$. In contrast, TreeKEM [19, 24, 76] costs $\Omega(\log N)$ in both directions, for a total cost of $\Omega(N\log N)$. Our protocol relies on generic primitives, and is therefore readily post-quantum. We go one step further and propose post-quantum primitives that are tailored to Chained CmPKE, which allows us to cut the growth rate of uploaded commit messages by two or three orders of magnitude compared to naive instantiations. Finally, we realize a software implementation of Chained CmPKE. Our experiments show that even for groups with a size as large as $N = 2^{10}$, commit messages can be computed and processed in less than 100 ms.
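A back-of-envelope comparison of per-commit bandwidth, counting abstract ciphertext units and ignoring all constants, illustrates the asymmetry: the Chained CmPKE committer uploads $O(N)$ units while each of the other $N-1$ members downloads $O(1)$, whereas a TreeKEM-style protocol pays on the order of $\log N$ in both directions. The unit costs below are asymptotic shapes only, not measured sizes from the paper.

    import math

    def total_bandwidth(n: int, upload: float, per_member_download: float) -> float:
        """Committer upload plus downloads by the n-1 other group members."""
        return upload + (n - 1) * per_member_download

    for n in (2**6, 2**8, 2**10):
        chained = total_bandwidth(n, upload=n, per_member_download=1)
        treekem = total_bandwidth(n, upload=math.log2(n),
                                  per_member_download=math.log2(n))
        print(f"N={n:5d}  chained~{chained:8.0f} units  treekem~{treekem:8.0f} units")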
Last updated:  2021-10-24
Non-Slanderability of Linkable Spontaneous Anonymous Group Signature (LSAG)
Veronika Kuchta, Joseph K. Liu
In this paper, we formally prove the non-slanderability property of the first linkable ring signature scheme, presented at ACISP 2004 (where the notion was called linkable spontaneous anonymous group signature (LSAG)). The rigorous security analysis gives confidence to any future construction of Ring Confidential Transaction (RingCT) protocols for blockchain systems that may use this signature scheme as their basis.