All papers in 2024 (Page 5 of 767 results)

Last updated:  2024-03-15
Accelerating SLH-DSA by Two Orders of Magnitude with a Single Hash Unit
Markku-Juhani O. Saarinen
We report on efficient and secure hardware implementation techniques for the FIPS 205 SLH-DSA Hash-Based Signature Standard. We demonstrate that very significant overall performance gains can be obtained from hardware that optimizes the padding formats and iterative hashing processes specific to SLH-DSA. A prototype implementation, SLotH, contains Keccak/SHAKE, SHA2-256, and SHA2-512 cores and supports all 12 parameter sets of SLH-DSA. SLotH also supports side-channel secure PRF computation and Winternitz chains. SLotH drivers run on a small RISC-V control core, as is common in current Root-of-Trust (RoT) systems. The new features make SLH-DSA on SLotH many times faster compared to similarly-sized general-purpose hash accelerators. Compared to unaccelerated microcontroller implementations, the performance of SLotH's SHAKE variants is up to $300\times$ faster; signature generation with the 128f parameter set takes 4,903,978 cycles, while signature verification with the 128s parameter set takes only 179,603 cycles. The SHA2 parameter sets have approximately half of the speed of the SHAKE parameter sets. We observe that the signature verification performance of SLH-DSA's ``s'' parameter sets is generally better than that of accelerated ECDSA or Dilithium on similarly-sized RoT targets. The area of the full SLotH system is small, from 63 kGE (SHA2, Cat 1 only) to 155 kGE (all parameter sets). Keccak Threshold Implementation adds another 130 kGE. We provide sensitivity analysis of SLH-DSA in relation to side-channel leakage. We show experimentally that an SLH-DSA implementation with CPU hashing will rapidly leak the SK.seed master key. We perform a 100,000-trace TVLA leakage assessment with a protected SLotH unit.
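For a rough sense of scale, the cycle counts above convert to wall-clock latency once a clock frequency is fixed; the 100 MHz figure below is an illustrative assumption for a small RoT core, not a number taken from the paper.

```python
# Convert SLotH's reported cycle counts into wall-clock latency.
# The 100 MHz clock is an illustrative assumption (typical for small
# RoT subsystems); the paper reports cycles, not a target frequency.

CLOCK_HZ = 100_000_000  # assumed clock frequency

measurements = {
    "SLH-DSA-128f sign":   4_903_978,   # cycles, from the abstract
    "SLH-DSA-128s verify":   179_603,
}

for op, cycles in measurements.items():
    print(f"{op}: {cycles:,} cycles = {cycles / CLOCK_HZ * 1e3:.2f} ms @ 100 MHz")
```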
Last updated:  2024-02-28
Key Recovery Attack on the Partial Vandermonde Knapsack Problem
Dipayan Das and Antoine Joux
The Partial Vandermonde (PV) Knapsack problem is an algebraic variant of the low-density inhomogeneous SIS problem. The problem has been used as a building block for various lattice-based constructions, including signatures (ACNS'14, ACISP'18), encryptions (DCC'15, DCC'20), and signature aggregation (Eprint'20). At Crypto'22, Boudgoust, Gachon, and Pellet-Mary proposed a key distinguishing attack on the PV Knapsack exploiting algebraic properties of the problem. Unfortunately, their attack doesn't offer key recovery, except for worst-case keys. In this paper, we propose an alternative attack on the PV Knapsack problem, which provides key recovery for a much larger set of keys. Like the Crypto'22 attack, it is based on lattice reduction and uses a dimension reduction technique to speed up the underlying lattice reduction algorithm and enhance its performance. As a side bonus, our attack transforms the PV Knapsack problem into uSVP instances instead of the SVP instances of the Crypto'22 attack. This also helps the lattice reduction algorithm, both from a theoretical and a practical point of view. We use our attack to re-assess the hardness of the concrete parameters used in the literature. It appears that many contain a non-negligible fraction of weak keys, which are easily identified and extremely susceptible to our attack. For example, a fraction of $2^{-19}$ of the public keys of a parameter set from ACISP'18 can be solved in about $30$ hours on a moderate server using off-the-shelf lattice reduction. This parameter set was initially claimed to have $129$-bit security against key recovery attacks. Its security was reduced to $87$ bits by the distinguishing attack from Crypto'22. Similarly, the ACNS'14 proposal also includes a parameter set containing a fraction of $2^{-19}$ of weak keys; those can be solved in about $17$ hours.
Last updated:  2024-04-19
Combined Threshold Implementation
Jakob Feldtkeller, Jan Richter-Brockmann, Pascal Sasdrich, and Tim Güneysu
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault-injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects. In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove the combined security in a well-established theoretical security model. Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient in the number of required bits of randomness (up to 100%), the latency in clock cycles (up to 40%), and even the area for pipelined designs (up to 40%) than the state of the art for an adversary restricted to manipulating a single gate and probing a single wire.
Last updated:  2024-03-07
Algebraic Algorithm for the Alternating Trilinear Form Equivalence Problem
Lars Ran, Simona Samardjiska, and Monika Trimoska
The Alternating Trilinear Form Equivalence (ATFE) problem was recently used by Tang et al. as a hardness assumption in the design of a Fiat-Shamir digital signature scheme ALTEQ. The scheme was submitted to the additional round for digital signatures of the NIST standardization process for post-quantum cryptography. ATFE is a hard equivalence problem known to be in the class of equivalence problems that includes, for instance, the Tensor Isomorphism (TI), Quadratic Maps Linear Equivalence (QMLE) and the Matrix Code Equivalence (MCE) problems. Due to the increased cryptographic interest, the understanding of its practical hardness has also increased in the last couple of years. Currently, there are several combinatorial and algebraic algorithms for solving it, the best of which is a graph-theoretic algorithm that also includes an algebraic subroutine. In this paper, we take a purely algebraic approach to the ATFE problem, but we use a coding theory perspective to model the problem. This modelling was introduced earlier for the MCE problem. Using it, we improve the cost of algebraic attacks against ATFE compared to previously known ones. Taking into account the algebraic structure of alternating trilinear forms, we show that the obtained system has fewer variables but also fewer equations than for MCE, and gives rise to structural degree-3 syzygies. Under the assumption that outside of these syzygies the system behaves semi-regularly, we provide a concrete, non-asymptotic complexity estimate of the performance of our algebraic attack. Our results show that the complexity of our attack is below the estimated security levels of ALTEQ by more than 20 bits for NIST level I (and more for the others); thus, the scheme requires re-parametrization for all three NIST security levels.
Last updated:  2024-03-12
Time-Averaged Analysis of Selfish Mining in Bitcoin
Roozbeh Sarenche, Ren Zhang, Svetla Nikova, and Bart Preneel
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase his relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level gets adjusted in favor of the attacker. Selfish mining profitability lies in the fact that orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). Therefore, it is believed that considering the count of orphan blocks in the DAM can result in selfish mining unprofitability. In this paper, we disprove this belief by providing a formal analysis of the selfish mining time-averaged profit. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target and then introduce two modified versions of DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first one dominates the normal intermittent selfish mining and the second one results in selfish mining profitability under the modified DAMs. Moreover, we present the orphan exclusion attack with the help of which the attacker can stop honest miners from reporting the orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our result shows that even when considering the orphan blocks in the DAM, normal selfish mining can still be profitable; however, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of Bitcoin DAM.
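As a baseline for the time-averaged analysis discussed above, the closed-form relative revenue of a selfish miner in the classic Eyal-Sirer model (with the unmodified DAM) can be evaluated directly; $\alpha$ is the attacker's share of hash power and $\gamma$ the fraction of honest miners that build on the attacker's block during a race. A small sketch:

```python
# Relative revenue of a selfish miner in the classic Eyal-Sirer model
# ("Majority is not Enough", FC 2014), as a baseline for the
# time-averaged analysis above; it does not model the modified DAMs or
# the orphan exclusion attack introduced in the paper.

def selfish_revenue(alpha: float, gamma: float) -> float:
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

for alpha in (0.25, 0.30, 0.35, 0.40):
    r = selfish_revenue(alpha, gamma=0.5)
    print(f"alpha={alpha:.2f}: selfish share={r:.3f} vs honest share={alpha:.2f}")
```

At $\gamma = 0.5$ the formula returns exactly $\alpha$ at $\alpha = 0.25$, recovering the well-known profitability threshold of the original analysis.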
Last updated:  2024-02-28
Integrating Causality in Messaging Channels
Shan Chen and Marc Fischlin
Causal reasoning plays an important role in the comprehension of communication, but it has been elusive so far how causality should be properly preserved by instant messaging services. To the best of our knowledge, causality preservation is not even treated as a desired security property by most (if not all) existing secure messaging protocols like Signal. This is probably due to the intuition that causality seems already preserved when all received messages are intact and displayed according to their sending order. Our starting point is to notice that this intuition is wrong. Until now, for messaging channels (where conversations take place), both the proper causality model and the provably secure constructions have been left open. Our work fills this gap, with the goal to facilitate the formal understanding of causality preservation in messaging. First, we focus on the common two-user secure messaging channels and model the desired causality preservation property. We take the popular Signal protocol as an example and analyze the causality security of its cryptographic core (the double-ratchet mechanism). We show its inadequacy with a simple causality attack, then fix it such that the resulting Signal channel is causality-preserving, even in a strong sense that guarantees post-compromise security. Our fix is actually generic: it can be applied to any bidirectional channel to gain strong causality security. Then, we model causality security for the so-called message franking channels. Such a channel additionally enables end users to report individual abusive messages to a server (e.g., the service provider), where this server relays the end-to-end-encrypted communication between users. Causality security in this setting further allows the server to retrieve the necessary causal dependencies of each reported message, essentially extending isolated reported messages to message flows. This has great security merit for dispute resolution, because a benign message may be deemed abusive when isolated from the context. As an example, we apply our model to analyze Facebook’s message franking scheme. We show that a malicious user can easily trick Facebook (i.e., the server) to accuse an innocent user. Then we fix this issue by amending the underlying message franking channel to preserve the desired causality.
Last updated:  2024-02-28
Key Exchange with Tight (Full) Forward Secrecy via Key Confirmation
Jiaxin Pan, Doreen Riepel, and Runzhi Zeng
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages which compute a message authentication code (MAC) over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobson and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction. Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones. Finally, we lift our study on FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023), namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, and it is more realistic than the single-challenge setting as in Pan et al.
Last updated:  2024-02-28
The NISQ Complexity of Collision Finding
Yassine Hamoudi, Qipeng Liu, and Makrand Sinha
Collision-resistant hashing, a fundamental primitive in modern cryptography, ensures that there is no efficient way to find distinct inputs that produce the same hash value. This property underpins the security of various cryptographic applications, making it crucial to understand its complexity. The complexity of this problem is well-understood in the classical setting and $\Theta(N^{1/2})$ queries are needed to find a collision. However, the advent of quantum computing has introduced new challenges since quantum adversaries, equipped with the power of quantum queries, can find collisions much more efficiently. Brassard, Høyer, and Tapp, and Aaronson and Shi established that full-scale quantum adversaries require $\Theta(N^{1/3})$ queries to find a collision, prompting a need for longer hash outputs, which impacts efficiency in terms of the key lengths needed for security. This paper explores the implications of quantum attacks in the Noisy-Intermediate Scale Quantum (NISQ) era. In this work, we investigate three different models for NISQ algorithms and achieve tight bounds for all of them: (1) a hybrid algorithm making adaptive quantum or classical queries but with a limited quantum query budget, (2) a quantum algorithm with access to a noisy oracle, subject to a dephasing or depolarizing channel, or (3) a hybrid algorithm with an upper bound on its maximum quantum depth, i.e., a classical algorithm aided by low-depth quantum circuits. In fact, our results handle all regimes between NISQ and full-scale quantum computers. Previously, only results for the pre-image search problem were known for these models, by Sun and Zheng, by Rosmanis, and by Chen, Cotler, Huang and Li, while nothing was known about the collision finding problem. Along with our main results, we develop an information-theoretic framework for recording query transcripts of quantum-classical algorithms. The main feature of this framework is that it allows us to record queries in two incompatible bases, classical queries in the standard basis and quantum queries in the Fourier basis, consistently. We call the framework the hybrid compressed oracle as it naturally interpolates between the classical way of recording queries and the compressed oracle framework of Zhandry for recording quantum queries.
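To make the quoted bounds concrete: for an $n$-bit hash there are $N = 2^n$ outputs, so the classical and full-quantum query complexities differ by a factor of $2^{n/6}$. A quick illustration with $n = 256$ (an illustrative, SHA-256-sized choice):

```python
# Concrete query counts implied by the collision bounds above, for an
# n-bit hash with N = 2^n outputs (constants ignored; n = 256 is an
# illustrative parameter, e.g., a SHA-256-sized output).

n = 256
print(f"classical Theta(N^(1/2)): ~2^{n / 2:.1f} queries")
print(f"quantum   Theta(N^(1/3)): ~2^{n / 3:.1f} queries")
print(f"quantum advantage: factor 2^{n / 2 - n / 3:.1f}")
```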
Last updated:  2024-02-28
Key-Recovery Attack on a Public-Key Encryption Related to Planted Clique
Caicai Chen and Chris Jones
Hudoba proposed a public key encryption (PKE) scheme and conjectured its security to be based on the Planted Clique problem. In this note, we show that this scheme is not secure. We do so by devising an efficient algorithm for the even neighbor independent set problem proposed by Hudoba. This leaves open the possibility of building PKE based on Planted Clique.
Last updated:  2024-02-28
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
Qi Feng, Kang Yang, Kaiyi Zhang, Xiao Wang, Yu Yu, Xiang Xie, and Debiao He
EdDSA, standardized by both IRTF and NIST, is a variant of the well-known Schnorr signature based on Edwards curves, and enjoys the benefit of statelessly and deterministically deriving nonces (i.e., it does not require a reliable source of randomness or state continuity). Recently, NIST has called for multi-party threshold EdDSA signatures in a mode where nonces are derived statelessly and deterministically and such derivation is verified via zero-knowledge (ZK) proofs. Multi-party full-threshold EdDSA signatures in the dishonest-majority malicious setting have the advantage of a strong security guarantee, and in particular cover the two-party case. However, it is challenging to translate the stateless and deterministic benefit of EdDSA to the multi-party setting, as no fresh randomness is available for the protocol execution. We present the notion of information-theoretic message authentication codes (IT-MACs) over groups in the multi-verifier setting, and adopt the recent pseudorandom correlation function (PCF) to generate IT-MACs statelessly and deterministically. Furthermore, we generalize the two-party IT-MACs-based ZK protocol by Baum et al. (Crypto'21) into the multi-verifier setting, which may be of independent interest. Together with multi-verifier extended doubly-authenticated bits (mv-edabits) with errors, we design a multi-verifier zero-knowledge (MVZK) protocol to derive nonces statelessly and deterministically. Building upon the MVZK protocol, we propose a stateless deterministic multi-party EdDSA signature, tolerating all-but-one malicious corruptions. Compared to the state-of-the-art multi-party EdDSA signature by Garillot et al. (Crypto'21), we improve communication cost by a factor of $61\times$, at the cost of increasing computation cost by about $2.25\times$ and requiring three extra rounds.
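As context for the stateless-and-deterministic property that the paper ports to the multi-party setting, here is a minimal sketch of how single-party Ed25519 derives its nonce per RFC 8032; the all-zero secret key is a placeholder.

```python
# Deterministic nonce derivation in single-party Ed25519 (RFC 8032):
# the nonce r is a hash of a secret prefix and the message, so signing
# needs no fresh randomness and no state. A minimal sketch; the secret
# key below is an illustrative placeholder.

import hashlib

L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 subgroup order

def derive_nonce(secret_key: bytes, message: bytes) -> int:
    h = hashlib.sha512(secret_key).digest()
    prefix = h[32:]                       # second half of H(sk): nonce-derivation key
    r = int.from_bytes(hashlib.sha512(prefix + message).digest(), "little")
    return r % L                          # scalar used to compute R = [r]B

sk = bytes(32)                            # placeholder secret key (all zeros)
assert derive_nonce(sk, b"msg") == derive_nonce(sk, b"msg")  # stateless & deterministic
```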
Last updated:  2024-02-28
Security analysis of the iMessage PQ3 protocol
Douglas Stebila
The iMessage PQ3 protocol is an end-to-end encrypted messaging protocol designed for exchanging data in long-lived sessions between two devices. It aims to provide classical and post-quantum confidentiality for forward secrecy and post-compromise secrecy, as well as classical authentication. Its initial authenticated key exchange is constructed from digital signatures plus elliptic curve Diffie–Hellman and post-quantum key exchanges; to derive per-message keys on an ongoing basis, it employs an adaptation of the Signal double ratchet that includes a post-quantum key encapsulation mechanism. This paper presents the cryptographic details of the PQ3 protocol and gives a reductionist security analysis by adapting the multi-stage key exchange security analysis of Signal by Cohn-Gordon et al. (J. Cryptology, 2020). The analysis shows that PQ3 provides confidentiality with forward secrecy and post-compromise security against both classical and quantum adversaries, in both the initial key exchange as well as the continuous rekeying phase of the protocol.
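PQ3's hybrid design hinges on combining a classical elliptic-curve DH secret and a post-quantum KEM secret through a KDF, so that the derived key stays secure if either input remains secret. The sketch below shows only this generic pattern; the HKDF instantiation, labels, and placeholder secrets are illustrative assumptions, not PQ3's actual key schedule.

```python
# Generic hybrid key derivation pattern used by PQ3-style designs: feed
# both the ECDH secret and the post-quantum KEM secret into one KDF.
# HKDF per RFC 5869; labels and placeholder secrets are illustrative.

import hashlib, hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # HKDF-Extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # HKDF-Expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

ecdh_secret = b"\x01" * 32   # placeholder X25519 shared secret
kem_secret  = b"\x02" * 32   # placeholder post-quantum KEM shared secret

session_key = hkdf(ecdh_secret + kem_secret, salt=b"\x00" * 32,
                   info=b"example-hybrid-ratchet")
print(session_key.hex())
```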
Last updated:  2024-02-27
On Central Primitives for Quantum Cryptography with Classical Communication
Kai-Min Chung, Eli Goldin, and Matthew Gray
Recent work has introduced the "Quantum-Computation Classical-Communication" (QCCC) (Chung et al.) setting for cryptography. There has been some evidence that One Way Puzzles (OWPuzz) are the natural central cryptographic primitive for this setting (Khurana and Tomer). For a primitive to be considered central it should have several characteristics. It should be well behaved (which for this paper we will think of as having amplification, combiners, and universal constructions); it should be implied by a wide variety of other primitives; and it should be equivalent to some class of useful primitives. We present combiners, correctness and security amplification, and a universal construction for OWPuzz. Our proof of security amplification uses a new and cleaner construction of EFI from OWPuzz (in comparison to the result of Khurana and Tomer) that generalizes to weak OWPuzz and is the most technically involved section of the paper. It was previously known that OWPuzz are implied by other primitives of interest including commitments, symmetric key encryption, one way state generators (OWSG), and therefore pseudorandom states (PRS). However, we are able to rule out OWPuzz's equivalence to many of these primitives by showing a black-box separation between general OWPuzz and a restricted class of OWPuzz (those with efficient verification, which we call EV-OWPuzz). We then show that EV-OWPuzz are also implied by most of these primitives, which separates them from OWPuzz as well. This separation also separates extending PRS from highly compressing PRS, answering an open question of Ananth et al.
Last updated:  2024-02-27
Adaptively Secure Streaming Functional Encryption
Pratish Datta, Jiaxin Guan, Alexis Korb, and Amit Sahai
This paper introduces the first adaptively secure streaming functional encryption (sFE) scheme for P/Poly. sFE stands as an evolved variant of traditional functional encryption (FE), catering specifically to contexts with vast and/or dynamically evolving data sets. sFE is designed for applications where data arrives in a streaming fashion and is computed on in an iterative manner as the stream arrives. Unlike standard FE, in sFE: (1) encryption is possible without knowledge of the full data set, (2) partial decryption is possible given only a prefix of the input. Guan, Korb, and Sahai introduced this concept in their recent publication [CRYPTO 2023], where they constructed an sFE scheme for P/Poly using a compact standard FE scheme for the same. However, their sFE scheme only achieved semi-adaptive-function-selective security, which constrains the adversary to obtain all functional keys prior to seeing any ciphertext for the challenge stream. This restriction severely limits the scenarios where sFE can be applied, and therefore fails to provide a suitable theoretical basis for sFE. In contrast, the adaptive security model empowers the adversary to arbitrarily interleave requests for functional keys with ciphertexts related to the challenge stream. Guan, Korb, and Sahai identified achieving adaptive security for sFE as the key question left open by their work. We resolve this open question positively by constructing an adaptively secure sFE construction from indistinguishability obfuscation for P/Poly and injective PRGs. By combining our work with that of Jain, Lin, and Sahai [STOC 2021, EUROCRYPT 2022], we obtain the first adaptively secure sFE scheme for P/Poly based on the sub-exponential hardness of well-studied computational problems.
Last updated:  2024-02-27
WARPfold: Wrongfield ARithmetic for Protostar folding
Lev Soukhanov
Inspired by the range-check trick from the recent LatticeFold paper, we construct an elliptic-curve-based IVC capable of simulating non-native arithmetic efficiently. We explain the general principle (which can be applied to both Protostar and Hypernova) and describe the Wrongfield ARithmetic for Protostar folding in detail. Our construction supports circuits over multiple non-native fields simultaneously and allows interfacing between them using range-checked elements. WARPfold can be used to warp between different proof systems and to construct folding schemes over curves not admitting a dual partner (such as BLS12-381).
Last updated:  2024-02-27
FuLeakage: Breaking FuLeeca by Learning Attacks
Felicitas Hörmann and Wessel van Woerden
FuLeeca is a signature scheme submitted to the recent NIST call for additional signatures. It is an efficient hash-and-sign scheme based on quasi-cyclic codes in the Lee metric and resembles the lattice-based signature Falcon. FuLeeca proposes a so-called concentration step within the signing procedure to avoid leakage of secret-key information from the signatures. However, FuLeeca is still vulnerable to learning attacks, which were first observed for lattice-based schemes. We present three full key-recovery attacks by exploiting the proximity of the code-based FuLeeca scheme to lattice-based primitives. More precisely, we use a few signatures to extract an $n/2$-dimensional circulant sublattice of the given length-$n$ code, that still contains the exceptionally short secret-key vector. This significantly reduces the classical attack cost and, in addition, leads to a full key recovery in quantum-polynomial time. Furthermore, we exploit a bias in the concentration procedure to classically recover the full key for any security level with at most 175,000 signatures in less than an hour.
Last updated:  2024-02-27
Improved Meet-in-the-Middle Nostradamus Attacks on AES-like Hashing
Xiaoyang Dong, Jian Guo, Shun Li, Phuong Pham, and Tianyu Zhang
The Nostradamus attack was originally proposed as a security vulnerability for a hash function by Kelsey and Kohno at EUROCRYPT 2006. It requires the attacker to commit to a hash value y of an iterated hash function H. Subsequently, upon being provided with a message prefix P, the adversary’s task is to identify a suffix S such that H(P||S) equals y. Kelsey and Kohno demonstrated a herding attack requiring $O(\sqrt{n}\cdot 2^{2n/3})$ evaluations of the compression function of H, where n represents the output and state size of the hash, placing this attack between preimage attacks and collision searches in terms of complexity. At ASIACRYPT 2022, Benedikt et al. transform Kelsey and Kohno’s attack into a quantum variant, decreasing the time complexity from $O(\sqrt{n}\cdot 2^{2n/3})$ to $O(\sqrt[3]{n}\cdot 2^{3n/7})$. At ToSC 2023, Zhang et al. proposed the first dedicated Nostradamus attack on AES-like hashing in both classical and quantum settings. In this paper, we have made revisions to the multi-target technique incorporated into the meet-in-the-middle automatic search framework. This modification leads to a decrease in time complexity during the online linking phase, effectively reducing the overall attack time complexity in both classical and quantum scenarios. Specifically, we can achieve more rounds in the classical setting and reduce the time complexity for the same round in the quantum setting.
Last updated:  2024-03-01
Improved Differential Meet-In-The-Middle Cryptanalysis
Zahra Ahmadian, Akram Khalesi, Dounia M'foukh, Hossein Moghimi, and María Naya-Plasencia
In this paper, we extend the applicability of differential meet-in-the-middle attacks, proposed at Crypto 2023, to truncated differentials, and in addition, we introduce three new ideas to improve this type of attack: we show how to add longer structures than the original paper, we show how to improve the key recovery steps by introducing some probability in them, and we combine this type of attack with the state-test technique, which was introduced in the context of impossible differential attacks. Furthermore, we have developed a MILP-based tool to automate the search for a truncated differential-MITM attack with optimized overall complexity, incorporating some of the proposed improvements. Thanks to this, we can build the best known attacks on the cipher CRAFT, reaching 23 rounds against 21 previously; we provide a new attack on 23-round SKINNY-64-192, and we improve the best attacks on SKINNY-128-384.
Last updated:  2024-02-27
Automating Collision Attacks on RIPEMD-160
Yingxin Li, Fukang Liu, and Gaoli Wang
As an ISO/IEC standard, the hash function RIPEMD-160 has been used to generate the Bitcoin address with SHA-256. However, due to the complex double-branch structure of RIPEMD-160, the best collision attack only reaches 36 out of 80 steps of RIPEMD-160, and the best semi-free-start (SFS) collision attack only reaches 40 steps. To improve the 36-step collision attack proposed at EUROCRYPT 2023, we explored the possibility of using different message differences to increase the number of attacked steps, and we finally identified one choice allowing a 40-step collision attack. To find the corresponding 40-step differential characteristic, we re-implement the MILP-based method to search for signed differential characteristics with SAT/SMT. As a result, we can find a colliding message pair for 40-step RIPEMD-160 in practical time, which significantly improves the best collision attack on RIPEMD-160. For the best SFS collision attack published at ToSC 2019, we observe that the bottleneck is the probability of the right-branch differential characteristics as they are fully uncontrolled in the message modification. To address this issue, we utilize our SAT/SMT-based tool to search for high-probability differential characteristics for the right branch. Consequently, we can mount successful SFS collision attacks on 41, 42 and 43 steps of RIPEMD-160, thus significantly improving the SFS collision attacks. In addition, we also searched for a 44-step differential characteristic, but the differential probability is too low to allow a meaningful SFS collision attack.
Last updated:  2024-02-27
New Records in Collision Attacks on SHA-2
Yingxin Li, Fukang Liu, and Gaoli Wang
The SHA-2 family, including SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256, is a U.S. federal standard published by NIST. In particular, there is no doubt that SHA-256 is one of the most important hash functions used in real-world applications. Due to its complex design compared with SHA-1, there has been almost no progress in collision attacks on SHA-2 since ASIACRYPT 2015. In this work, we retake this challenge and aim to significantly improve collision attacks on the SHA-2 family. First, we observe from many existing attacks on SHA-2 that the current advanced tool to search for SHA-2 characteristics has reached a bottleneck. Specifically, longer differential characteristics could not be found, and this prevents the collision attack from reaching more steps. To address this issue, we adopt Liu et al.'s MILP-based method and implement it with SAT/SMT for SHA-2, where we also add more techniques to detect contradictions in SHA-2 characteristics. This answers an open problem left in Liu et al.'s paper of applying the technique to SHA-2. With this SAT/SMT-based tool, we search for SHA-2 characteristics by controlling their sparsity in a dedicated way. As a result, we successfully find the first practical semi-free-start (SFS) colliding message pair for 39-step SHA-256, improving the best 38-step SFS collision attack published at EUROCRYPT 2013. In addition, we also report the first practical free-start (FS) collision attack on 40-step SHA-224, while the previously best theoretic 40-step attack has time complexity $2^{110}$. Moreover, for the first time, we can mount practical and theoretic collision attacks on 28-step and 31-step SHA-512, respectively, which improve the best collision attack reaching only 27 steps of SHA-512 at ASIACRYPT 2015. In a word, with new techniques to find SHA-2 characteristics, we have made some notable progress in the analysis of SHA-2 after the major achievements made at EUROCRYPT 2013 and ASIACRYPT 2015.
Last updated:  2024-02-27
A Computational Tsirelson's Theorem for the Value of Compiled XOR Games
David Cui, Giulio Malavolta, Arthur Mehta, Anand Natarajan, Connor Paddock, Simon Schmidt, Michael Walter, and Tina Zhang
Nonlocal games are a foundational tool for understanding entanglement and constructing quantum protocols in settings with multiple spatially separated quantum devices. In this work, we continue the study initiated by Kalai et al. (STOC '23) of compiled nonlocal games, played between a classical verifier and a single cryptographically limited quantum device. Our main result is that the compiler proposed by Kalai et al. is sound for any two-player XOR game. A celebrated theorem of Tsirelson shows that for XOR games, the quantum value is exactly given by a semidefinite program, and we obtain our result by showing that the SDP upper bound holds for the compiled game up to a negligible error arising from the compilation. This answers a question raised by Natarajan and Zhang (FOCS '23), who showed soundness for the specific case of the CHSH game. Using our techniques, we obtain several additional results, including (1) tight bounds on the compiled value of parallel-repeated XOR games, (2) operator self-testing statements for any compiled XOR game, and (3) a ``nice'' sum-of-squares certificate for any XOR game, from which operator rigidity is manifest.
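Tsirelson's bound for CHSH can be sanity-checked numerically: with the optimal observables, the largest eigenvalue of the Bell operator is $2\sqrt{2}$. A small sketch of this standard textbook computation (it does not model the paper's compiler):

```python
# Numerical check of Tsirelson's bound for CHSH: with optimal observables
# A0 = X, A1 = Z, B0 = (X+Z)/sqrt(2), B1 = (X-Z)/sqrt(2), the Bell
# operator's largest eigenvalue is 2*sqrt(2) ~ 2.828.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

A0, A1 = X, Z
B0 = (X + Z) / np.sqrt(2)
B1 = (X - Z) / np.sqrt(2)

bell = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))

print(np.linalg.eigvalsh(bell).max())   # ~2.8284 = 2*sqrt(2)
```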
Last updated:  2024-02-27
The Algebraic FreeLunch: Efficient Gröbner Basis Attacks Against Arithmetization-Oriented Primitives
Augustin Bariant, Aurélien Boeuf, Axel Lemoine, Irati Manterola Ayala, Morten Øygarden, Léo Perrin, and Håvard Raddum
In this paper, we present a new type of algebraic attack that applies to many recent arithmetization-oriented families of permutations, such as those used in Griffin, Anemoi, ArionHash, and XHash8, whose security relies on the hardness of the constrained-input constrained-output (CICO) problem. We introduce the FreeLunch approach: the monomial ordering is chosen so that the natural polynomial system encoding the CICO problem already is a Gröbner basis. In addition, we present a new dedicated resolution algorithm for FreeLunch systems with complexity lower than that of applicable state-of-the-art FGLM algorithms. We show that the FreeLunch approach challenges the security of full-round instances of Anemoi, Arion and Griffin. We confirm these theoretical results with experimental results on those three permutations. In particular, using the FreeLunch attack combined with a new technique to bypass 3 rounds of Griffin, we recover a CICO solution for 7 out of 10 rounds of Griffin in less than four hours on one core of an AMD EPYC 7352 (2.3 GHz).
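The "free lunch" is that, for a suitable monomial ordering, the encoding polynomials already form a Gröbner basis, so the usual basis computation costs nothing. A toy illustration via Buchberger's first criterion (coprime leading monomials), on a made-up system unrelated to the actual CICO systems of Anemoi, Griffin, or Arion:

```python
# Toy "free lunch" via Buchberger's first criterion: under lex order with
# x > y > z, the leading monomials of the system below are x**2 and y**2,
# which are coprime, so the system is ALREADY a Groebner basis and no
# basis computation is needed. Illustrative system only.

from sympy import symbols, groebner

x, y, z = symbols("x y z")
system = [x**2 - y, y**2 - z]

gb = groebner(system, x, y, z, order="lex")
print(gb)   # GroebnerBasis([x**2 - y, y**2 - z], ...): the input, unchanged
```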
Last updated:  2024-02-27
A data aggregation protocol based on TFHE
Maria Ferrara, Antonio Tortora, and Maria Tota
Torus Fully Homomorphic Encryption (TFHE) is a probabilistic cryptosystem over the real torus which allows one to operate directly on encrypted data without first decrypting them. We present an aggregation protocol based on a variant of TFHE for computing the sum of sensitive data, working only with the corresponding ciphertexts. Our scheme is an ideal choice for a system of smart meters - electronic devices for measuring energy consumption - that demands consumers' privacy. In contrast to some other solutions, our proposal does not require any communication among smart meters and it is quantum-safe.
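To see the principle behind such aggregation, here is a didactic additively homomorphic LWE-style scheme in which the aggregator sums ciphertexts without decrypting any individual reading; this toy stand-in is not the paper's TFHE variant, and its parameters are not secure.

```python
# Didactic LWE-style additively homomorphic encryption: the aggregator
# can sum smart-meter ciphertexts without ever decrypting a single
# reading. Illustrative only; NOT the paper's TFHE variant, and the
# parameters below offer no real security.

import random

N, Q, T = 64, 2**32, 2**10          # dimension, ciphertext & plaintext moduli
DELTA = Q // T

secret = [random.randrange(2) for _ in range(N)]

def encrypt(m: int):
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randrange(-8, 9)      # small noise
    b = (sum(ai * si for ai, si in zip(a, secret)) + DELTA * m + e) % Q
    return (a, b)

def add(c1, c2):                     # homomorphic addition, no secret needed
    a = [(x + y) % Q for x, y in zip(c1[0], c2[0])]
    return (a, (c1[1] + c2[1]) % Q)

def decrypt(c):
    a, b = c
    phase = (b - sum(ai * si for ai, si in zip(a, secret))) % Q
    return round(phase / DELTA) % T

readings = [17, 42, 101]             # individual meter readings
total = encrypt(0)
for r in readings:
    total = add(total, encrypt(r))
print(decrypt(total))                # 160 = 17 + 42 + 101
```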
Last updated:  2024-02-27
An Efficient Adaptive Attack Against FESTA
Guoqing Zhou and Maozhi Xu
At EUROCRYPT’23, Castryck and Decru, Maino et al., and Robert present efficient attacks against supersingular isogeny Diffie-Hellman key exchange protocol (SIDH). Drawing inspiration from these attacks, Andrea Basso, Luciano Maino, and Giacomo Pope introduce FESTA, an isogeny-based trapdoor function, along with a corresponding IND-CCA secure public key encryption (PKE) protocol at ASIACRYPT’23. FESTA incorporates either a diagonal or circulant matrix into the secret key to mask torsion points. In this paper, we employ a side-channel attack to construct an auxiliary verification oracle. By querying this oracle, we propose an adaptive attack strategy to recover the secret key in FESTA when the secret matrix is circulant. Compared with existing attacks, our strategy is more efficient and formal. Leveraging these findings, we implement our attack algorithms to recover the circulant matrix in secret key. Finally, we demonstrate that if the secret matrix is circulant, then the adversary can successfully recover FESTA’s secret key with a polynomial number of decryption machine queries. Consequently, our paper illustrates that FESTA PKE protocol with secret circulant matrix does not achieve IND-CCA security.
Last updated:  2024-02-27
Probabilistic Extensions: A One-Step Framework for Finding Rectangle Attacks and Beyond
Ling Song, Qianqian Yang, Yincen Chen, Lei Hu, and Jian Weng
In differential-like attacks, the process typically involves extending a distinguisher forward and backward with probability 1 for some rounds and recovering the key involved in the extended part. Particularly in rectangle attacks, a holistic key recovery strategy can be employed to yield the most efficient attacks tailored to a given distinguisher. In this paper, we treat the distinguisher and the extended part as an integrated entity and give a one-step framework for finding rectangle attacks with the purpose of reducing the overall complexity or attacking more rounds. In this framework, we propose to allow probabilistic differential propagations in the extended part and incorporate the holistic recovery strategy. Additionally, we introduce the ``split-and-bunch technique'' to further reduce the time complexity. Beyond rectangle attacks, we extend these foundational concepts to encompass differential attacks as well. To demonstrate the efficiency of our framework, we apply it to Deoxys-BC-384, SKINNY, ForkSkinny, and CRAFT, achieving a series of refined and improved rectangle attacks and differential attacks. Notably, we obtain the first 15-round attack on Deoxys-BC-384, narrowing its security margin to only one round. Furthermore, our differential attack on CRAFT extends to 23 rounds, covering two more rounds than the previous best attacks.
Last updated:  2024-02-27
Partial Differential Fault Analysis on Ascon
Yang Gao
Authenticated Encryption with Associated Data (AEAD) is a trend in applied cryptography because it combines confidentiality, integrity, and authentication into one algorithm and is more efficient than using block ciphers and hash functions separately. The Ascon algorithm, as the winner of both the CAESAR competition and the NIST LwC competition, will soon become the AEAD standard for protecting the Internet of Things and micro devices with limited computing resources. We propose a partial differential fault analysis (PDFA) technique for the Ascon algorithm, using stuck-at fault and random-nibble fault models respectively. Theoretically, the 128-bit key can be completely recovered after 9.9 full-round fault injections or 263 single-nibble fault injections. In addition, we conduct the first discussion of this analysis method under different nonce configurations. In the nonce-respecting case, an average of 130 additional tag queries are required to complete the guessing of the faulty tag, after which this case becomes equivalent to the nonce-misuse case. Subsequent experimental results prove the correctness of the theoretical model. Finally, we discuss some countermeasures against the proposed attacks, and we propose a new S-box that can replace the existing S-box in Ascon to render PDFA ineffective.
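The core mechanism of differential fault analysis can be illustrated at the S-box level: a known fault-induced input difference, together with an observed output difference, pins down a short list of candidate inputs via the differential distribution table. The 5-bit S-box table below is transcribed from the public Ascon specification (treat the transcription itself as an assumption):

```python
# DFA intuition on the S-box level: for a known input difference din and
# observed output difference dout, only the inputs x with
# S(x) ^ S(x ^ din) == dout are possible, shrinking the key space.
# S-box table transcribed from the Ascon specification (assumption).

SBOX = [0x4, 0xb, 0x1f, 0x14, 0x1a, 0x15, 0x9, 0x2,
        0x1b, 0x5, 0x8, 0x12, 0x1d, 0x3, 0x6, 0x1c,
        0x1e, 0x13, 0x7, 0xe, 0x0, 0xd, 0x11, 0x18,
        0x10, 0xc, 0x1, 0x19, 0x16, 0xa, 0xf, 0x17]

def candidates(din: int, dout: int):
    """Inputs x consistent with the difference pair (din, dout)."""
    return [x for x in range(32) if SBOX[x] ^ SBOX[x ^ din] == dout]

din = 0x01                       # e.g., a single-bit fault
for dout in range(32):
    c = candidates(din, dout)
    if c:
        print(f"dout={dout:#04x}: {len(c)} candidate inputs")
```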
Last updated:  2024-05-11
Massive Superpoly Recovery with a Meet-in-the-middle Framework -- Improved Cube Attacks on Trivium and Kreyvium
Jiahui He, Kai Hu, Hao Lei, and Meiqin Wang
The cube attack extracts information about the secret key bits by recovering the coefficient, called the superpoly, of the output bit with respect to a subset of plaintext/IV bits, which is called a cube. While the division property provides an efficient way to detect the structure of the superpoly, superpoly recovery could still be prohibitively costly if the number of rounds is sufficiently high. In particular, Core Monomial Prediction (CMP) was proposed at ASIACRYPT 2022 as a scaled-down version of Monomial Prediction (MP), which sacrifices accuracy for efficiency but ultimately gets stuck at 848 rounds of Trivium. In this paper, we provide new insights into CMP by elucidating the algebraic meaning of core monomial trails. We prove that it is sufficient to recover the superpoly by extracting all the core monomial trails, an approach based solely on CMP, thus demonstrating that CMP can achieve perfect accuracy as MP does. We further reveal that CMP is still MP in essence, but with variable substitutions on the target function. Inspired by the divide-and-conquer strategy that has been widely used in previous literature, we design a meet-in-the-middle (MITM) framework, in which the CMP-based approach can be embedded to achieve a speedup. To illustrate the power of these new techniques, we apply the MITM framework to Trivium, Grain-128AEAD and Kreyvium. As a result, not only can the previous computational cost of superpoly recovery be reduced (e.g., 5x faster for superpoly recovery on 192-round Grain-128AEAD), but we also succeed in recovering superpolies for up to 851 rounds of Trivium and up to 899 rounds of Kreyvium. This surpasses the previous best results by 3 and 4 rounds, respectively. Using the memory-efficient Möbius transform proposed at EUROCRYPT 2021, we can perform key-recovery attacks on the target ciphers, even though the superpoly may contain over $2^{40}$ monomials. This leads to the best cube attacks on the target ciphers.
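The basic cube-attack step is easy to see on a toy Boolean function: XOR-summing the output over all assignments of the cube variables cancels every term that does not contain the full cube monomial and leaves the superpoly. The function below is purely illustrative (it is not Trivium or Kreyvium):

```python
# Toy cube-attack step: summing f over the cube {v0, v1} (all four
# assignments, XOR over GF(2)) kills the v0*k1 and v2 terms and leaves
# the superpoly k0 + k1 of the maxterm v0*v1. Illustrative function only.

from itertools import product

def f(v0, v1, v2, k0, k1):
    return (v0 & v1 & (k0 ^ k1)) ^ (v0 & k1) ^ v2

def cube_sum(k0, k1, v2=0):
    s = 0
    for v0, v1 in product((0, 1), repeat=2):   # sum over the cube
        s ^= f(v0, v1, v2, k0, k1)
    return s

for k0, k1 in product((0, 1), repeat=2):
    assert cube_sum(k0, k1) == k0 ^ k1          # superpoly is k0 + k1
print("superpoly of cube {v0, v1} is k0 + k1")
```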
Last updated:  2024-02-27
VeriSimplePIR: Verifiability in SimplePIR at No Online Cost for Honest Servers
Leo de Castro and Keewoo Lee
We present VeriSimplePIR, a verifiable version of the state-of-the-art semi-honest SimplePIR protocol. VeriSimplePIR is a stateful verifiable PIR scheme guaranteeing that all queries are consistent with a fixed, well-formed database. It is the first efficient verifiable PIR scheme to not rely on an honest digest to ensure security; any digest, even one produced by a malicious server, is sufficient to commit to some database. This is due to our extractable verification procedure, which can extract the entire database from the consistency proof checked against each response. Furthermore, VeriSimplePIR ensures this strong security guarantee without compromising the performance of SimplePIR. The online communication overhead is roughly $1.1$-$1.5\times$ SimplePIR, and the online computation time on the server is essentially the same. We achieve this low overhead via a novel one-time preprocessing protocol that generates a reusable proof that can verify any number of subsequent query-response pairs as long as no malicious behavior is detected. As soon as the verification procedure rejects a response from the server, the offline phase must be rerun to compute a new proof. VeriSimplePIR represents an approach to maliciously secure cryptography that is highly optimized for honest parties while maintaining security even in the presence of malicious adversaries.
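For orientation, the semi-honest SimplePIR access pattern that VeriSimplePIR hardens looks roughly as follows; the parameters are toy-sized, and the consistency-proof layer that VeriSimplePIR adds on top is not sketched here.

```python
# Toy version of the semi-honest SimplePIR access pattern: offline hint
# H = D*A downloaded once, online query = LWE encryption of a unit
# vector, server answer = one matrix-vector product. Toy parameters;
# VeriSimplePIR's verification layer is not modeled.

import numpy as np

rng = np.random.default_rng(0)
Q, P = 2**20, 2**4                 # ciphertext / plaintext moduli (toy-sized)
DELTA = Q // P
ROWS = COLS = 16                   # database viewed as sqrt(N) x sqrt(N)
N_LWE = 32                         # LWE secret dimension

D = rng.integers(0, P, (ROWS, COLS))        # database of 4-bit entries
A = rng.integers(0, Q, (COLS, N_LWE))       # public LWE matrix
H = (D @ A) % Q                             # offline hint, downloaded once

def make_query(col):
    s = rng.integers(0, Q, N_LWE)
    e = rng.integers(0, 4, COLS)            # small LWE noise
    u = np.zeros(COLS, dtype=np.int64); u[col] = 1
    return s, (A @ s + e + DELTA * u) % Q   # LWE encryption of a unit vector

def answer(qu):
    return (D @ qu) % Q                     # server: one matrix-vector product

def recover(s, ans):
    noisy = (ans - H @ s) % Q               # = DELTA * D[:, col] + D @ e
    return (noisy + DELTA // 2) // DELTA % P

s, qu = make_query(col=3)
assert np.array_equal(recover(s, answer(qu)), D[:, 3])
print("recovered database column 3")
```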
Last updated:  2024-02-29
A New Approach for Non-Interactive Zero-Knowledge from Learning with Errors
Brent Waters
We put forward a new approach for achieving non-interactive zero-knowledge proofs (NIZKs) from the learning with errors (LWE) assumption (with subexponential modulus-to-noise ratio). We provide an LWE-based construction of a hidden bits generator that gives rise to a NIZK via the celebrated hidden bits paradigm. A notable feature of our construction is its simplicity. Our construction employs lattice trapdoors, but beyond that uses only simple operations. Unlike prior solutions, we do not rely on a correlation intractability argument nor do we utilize fully homomorphic encryption techniques. Our solution provides a new methodology that adds to the diversity of techniques for solving this fundamental problem.
Last updated:  2024-03-04
From Random Probing to Noisy Leakages Without Field-Size Dependence
Gianluca Brian, Stefan Dziembowski, and Sebastian Faust
Side-channel attacks are devastating attacks targeting cryptographic implementations. To protect against these attacks, various countermeasures have been proposed -- in particular, so-called masking schemes. Masking schemes work by hiding sensitive information via secret-sharing all intermediate values that occur during the evaluation of a cryptographic implementation. Over the last decade, there has been broad interest in designing and formally analyzing such schemes. The random probing model considers leakage where the value on each wire leaks with some probability $\epsilon$. This model is important as it implies security in the noisy leakage model via a reduction by Duc et al. (Eurocrypt 2014). Noisy leakages are considered the "gold standard" for analyzing masking schemes as they accurately model many real-world physical leakages. Unfortunately, the reduction of Duc et al. is non-tight, and in particular requires that the amount of noise increases by a factor of $|\mathbb{F}|$ for circuits that operate over $\mathbb{F}$ (where $\mathbb{F}$ is a finite field). In this work, we give a generic transformation from random probing to average probing, which avoids this loss of $|\mathbb{F}|$. Since average probing is identical to the noisy leakage model (Eurocrypt 2014), this yields for the first time a security analysis of masked circuits where the noise parameter $\delta$ in the noisy leakage model is independent of $|\mathbb{F}|$. The latter is particularly important for cryptographic schemes operating over large fields, e.g., the AES or the recently standardized post-quantum schemes.
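The random probing model is easy to simulate in miniature: with $k$-share additive masking, the secret is determined only when all $k$ shares leak, each leaking independently with probability $\epsilon$. An idealized sketch (real masked circuits also leak intermediate wires, so this is only the best case):

```python
# Random probing in miniature: a bit is split into k additive shares and
# each share (wire) leaks independently with probability eps. The secret
# is fully determined only when ALL k shares leak, i.e. with probability
# eps**k. Idealized: intermediate wires of a real circuit are ignored.

import random

def leak_experiment(k: int, eps: float, trials: int = 100_000) -> float:
    full_leaks = sum(
        all(random.random() < eps for _ in range(k))
        for _ in range(trials)
    )
    return full_leaks / trials

eps = 0.1
for k in (1, 2, 3):
    print(f"k={k}: observed {leak_experiment(k, eps):.4f}  vs  eps^k = {eps**k:.4f}")
```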
Last updated:  2024-04-15
Tight Indistinguishability Bounds for the XOR of Independent Random Permutations by Fourier Analysis
Itai Dinur
The XOR of two independent permutations (XoP) is a well-known construction for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). The idealized construction (where the permutations are uniformly chosen and independent) and its variants have been extensively analyzed over nearly 25 years. The best-known asymptotic information-theoretic indistinguishability bound for the XoP construction is $O(q/2^{1.5n})$, derived by Eberhard in 2017, where $q$ is the number of queries and $n$ is the block length. A generalization of the XoP construction outputs the XOR of $r \geq 2$ independent permutations, and has also received significant attention in both the single-user and multi-user settings. In particular, for $r = 3$, the best-known bound (obtained by Choi et al. [ASIACRYPT'22]) is about $q^2/2^{2.5 n}$ in the single-user setting and $\sqrt{u} q_{\max}^2/2^{2.5 n}$ in the multi-user setting (where $u$ is the number of users and $q_{\max}$ is the number of queries per user). In this paper, we prove an indistinguishability bound of $q/2^{(r - 0.5)n}$ for the (generalized) XoP construction in the single-user setting, and a bound of $\sqrt{u} q_{\max}/2^{(r - 0.5)n}$ in the multi-user setting. In particular, for $r=2$, we obtain the bounds $q/2^{1.5n}$ and $\sqrt{u} q_{\max}/2^{1.5n}$ in single-user and multi-user settings, respectively. For $r=3$ the corresponding bounds are $q/2^{2.5n}$ and $\sqrt{u} q_{\max}/2^{2.5n}$. All of these bounds hold assuming $q < 2^{n}/2$ (or $q_{\max} < 2^{n}/2$). Compared to previous works, we improve all the best-known bounds for the (generalized) XoP construction in the multi-user setting, and the best-known bounds for the generalized XoP construction for $r \geq 3$ in the single-user setting (assuming $q \geq 2^{n/2}$). For the basic two-permutation XoP construction in the single-user setting, our concrete bound of $q/2^{1.5n}$ stands in contrast to the asymptotic bound of $O(q/2^{1.5n})$ by Eberhard. Since all of our bounds are matched (up to constant factors) for $q \geq 2^{n/2}$ by attacks published by Patarin in 2008 (and their generalizations to the multi-user setting), they are all tight. We obtain our results by Fourier analysis of Boolean functions. Most of our technical work involves bounding (sums of) Fourier coefficients of the density function associated with sampling without replacement. While the proof of Eberhard relies on similar bounds, our proof is elementary and simpler.
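The construction and the matching full-codebook observation are both simple to state in code: XoP outputs XOR to zero over the entire domain (each permutation's outputs XOR to zero over $\{0,1\}^n$ for $n \ge 2$), whereas a random function passes this test with probability only $2^{-n}$. This distinguisher needs all $2^n$ queries, consistent with the bounds above holding for $q < 2^n/2$:

```python
# XoP construction F(x) = P1(x) xor P2(x), plus the full-codebook
# distinguisher: XOR of F over the whole domain is always 0 for XoP,
# but is 0 only with probability 2^-n for a truly random function.

import random
from functools import reduce

n = 8
domain = list(range(2 ** n))

def random_perm():
    p = domain.copy()
    random.shuffle(p)
    return p

P1, P2 = random_perm(), random_perm()
F = [P1[x] ^ P2[x] for x in domain]             # XoP outputs
R = [random.randrange(2 ** n) for _ in domain]  # random function outputs

print(reduce(lambda a, b: a ^ b, F))   # always 0
print(reduce(lambda a, b: a ^ b, R))   # 0 with probability only 2^-n
```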
Last updated:  2024-05-15
Solving the Tensor Isomorphism Problem for special orbits with low rank points: Cryptanalysis and repair of an Asiacrypt 2023 commitment scheme
Valerie Gilchrist, Laurane Marco, Christophe Petit, and Gang Tang
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development. One of these, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures. In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations of these algorithms are that these special tensors contain some low-rank points, and their stabilizer groups are not trivial. With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
Last updated:  2024-03-02
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Khai Hanh Tang, Minh Pham, and Chan Nam Ngo
Incremental Verifiable Computation (IVC) allows a prover to prove to a verifier the correct execution of a sequential computation. Recent works focus on improving the universality and efficiency of IVC schemes, which can be categorized into accumulation-based and folding-based IVCs, with folding-based ones being more efficient (due to their deferred proof generation until the final step). Unfortunately, both approaches satisfy only heuristic security, as they model the Random Oracle (RO) as a circuit in their non-constant-depth recursive composition of the base scheme. This drawback is twofold: to connect consecutive execution steps, the RO is recursively modeled as a circuit during the folding or accumulating process, and again in the final SNARK wrapper circuit (a common practice in folding-based IVCs). We revisit this problem, with a focus on folding-based IVCs due to their efficiency, and propose detaching the RO invocation from the folding circuit. We can instead accumulate such invocations, yielding the so-called Conditional Folding (CF) scheme to overcome the first drawback. One can consider our CF scheme a hybrid folding-accumulation scheme with provable security. We provide a non-trivial practical construction for our CF scheme that is natively parallelizable, which offers great efficiency. We rigorously prove the security of our CF scheme (also for the case of folding in parallel; and our scheme can be made non-interactive using Fiat-Shamir). Our CF scheme is generic and does not require a trusted setup. It can be adapted to construct the first IVC for RAM programs, i.e., Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs, which we dub RAMenPaSTA, that can be used to build zero-knowledge virtual machines (zkVMs). Both our CF scheme and RAMenPaSTA can be of independent research interest.
Last updated:  2024-02-26
Split-State Non-Malleable Codes and Secret Sharing Schemes for Quantum Messages
Naresh Goud Boddu, Vipul Goyal, Rahul Jain, and João Ribeiro
Non-malleable codes are fundamental objects at the intersection of cryptography and coding theory. These codes provide security guarantees even in settings where error correction and detection are impossible, and have found applications to several other cryptographic tasks. One of the strongest and most well-studied adversarial tampering models is $2$-split-state tampering. Here, a codeword is split into two parts which are stored in physically distant servers, and the adversary can then independently tamper with each part using arbitrary functions. This model can be naturally extended to the secret sharing setting with several parties by having the adversary independently tamper with each share. Previous works on non-malleable coding and secret sharing in the split-state tampering model only considered the encoding of classical messages. Furthermore, until recent work by Aggarwal, Boddu, and Jain (IEEE Trans. Inf. Theory 2024 & arXiv 2022), adversaries with quantum capabilities and shared entanglement had not been considered, and it is a priori not clear whether previous schemes remain secure in this model. In this work, we introduce the notions of split-state non-malleable codes and secret sharing schemes for quantum messages secure against quantum adversaries with shared entanglement. Then, we present explicit constructions of such schemes that achieve low-error non-malleability. More precisely, we construct efficiently encodable and decodable split-state non-malleable codes and secret sharing schemes for quantum messages preserving entanglement with external systems and achieving security against quantum adversaries having shared entanglement with codeword length $n$, any message length at most $n^{\Omega(1)}$, and error $\varepsilon=2^{-{n^{\Omega(1)}}}$. In the easier setting of average-case non-malleability, we achieve efficient non-malleable coding with rate close to $1/11$.
Last updated:  2024-02-26
The Impact of Reversibility on Parallel Pebbling
Jeremiah Blocki, Blake Holman, and Seunghoon Lee
The (parallel) classical black pebbling game is a helpful abstraction which allows us to analyze the resources (time, space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$ on a (parallel) computer. In particular, the parallel black pebbling game has been used as a tool to quantify the (in)security of Data-Independent Memory-Hard Functions (iMHFs). Recently Blocki et al. (TCC 2022) introduced the parallel reversible pebbling game as a tool to analyze resource requirements when we additionally require that computation is reversible. Intuitively, the parallel reversible pebbling game extends the classical parallel black pebbling game by imposing restrictions on when pebbles can be removed. By contrast, the classical black pebbling game imposes no restrictions on when pebbles can be removed to free up space. One of the primary motivations of the parallel reversible pebbling game is to provide a tool to analyze the full cost of quantum preimage attacks against an iMHF. However, while there is an extensive line of work analyzing pebbling complexity in the (parallel) black pebbling game, comparatively little is known about the parallel reversible pebbling game. Our first result is a lower bound of $\Omega\left(N^{1+1/\sqrt{\log N}} \right)$ on the reversible cumulative pebbling cost for a line graph on $N$ nodes. This yields a separation between classical and reversible pebbling costs demonstrating that the reversibility constraint can increase cumulative pebbling costs (and space-time costs) by a multiplicative factor of $\Omega\left(N^{1/\sqrt{\log N}} \right)$ --- the classical pebbling cost (space-time or cumulative) for a line graph is just $\mathcal{O}(N)$. On the positive side, we prove that any classical parallel pebbling can be transformed into a reversible pebbling strategy whilst increasing space-time (resp. cumulative memory) costs by a multiplicative factor of at most $\mathcal{O}\left(N^{2/\sqrt{\log N}}\right)$ (resp. $\mathcal{O}\left(N^{\mathcal{O}(1)/\sqrt[4]{\log N}}\right)$). We also analyze the impact of the reversibility constraint on the cumulative pebbling cost of depth-robust and depth-reducible DAGs exploiting reversibility to improve constant factors in a prior lower bound of Alwen et al. (EUROCRYPT 2017). For depth-reducible DAGs we show that the state-of-the-art recursive pebbling techniques of Alwen et al. (EUROCRYPT 2017) can be converted into a recursive reversible pebbling attack without any asymptotic increases in pebbling costs. Finally, we extend a result of Blocki et al. (ITCS 2020) to show that it is Unique Games hard to approximate the reversible cumulative pebbling cost of a DAG $G$ to within any constant factor.
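The cost of reversibility is already visible in Bennett's classic strategy for the line graph: advancing $2^k$ nodes with $k+1$ pebbles takes $3^k$ moves, because a pebble may only be removed by reversing its placement. The snippet below tabulates this textbook trade-off; it is background intuition, not the new bounds proven in the paper.

```python
# Bennett's classic reversible strategy for a line of length 2^k: pebble
# the midpoint, extend to the endpoint, then UN-pebble the midpoint
# (reversibility forbids simply dropping it). Moves satisfy
# T(k) = 3*T(k-1), so k+1 pebbles cost 3^k moves, versus 2^k moves for an
# ordinary irreversible pebbling with O(1) pebbles.

def bennett_moves(k: int) -> int:
    if k == 0:
        return 1                      # one move pebbles the next node
    return 3 * bennett_moves(k - 1)   # pebble midpoint, extend, unpebble

for k in range(1, 11):
    N = 2 ** k
    print(f"N={N:5d}: reversible moves={bennett_moves(k):7d} "
          f"(pebbles={k + 1}), classical moves={N}")
```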
Last updated:  2024-02-26
Practical Attack on All Parameters of the DME Signature Scheme
Pierre Briaud, Maxime Bros, Ray Perlner, and Daniel Smith-Tone
DME is a multivariate scheme submitted to the call for additional signatures recently launched by NIST. Its performance is one of the best among all the candidates. The public key is constructed from the alternation of very structured linear and non-linear components that constitute the private key, the latter being defined over an extension field. We exploit these structures by proposing an algebraic attack which is practical on all DME parameters.
Last updated:  2024-05-16
Leakage-Tolerant Circuits
Yuval Ishai and Yifan Song
A leakage-resilient circuit for $f:\{0,1\}^n\to\{0,1\}^m$ is a randomized Boolean circuit $C$ mapping a randomized encoding of an input $x$ to an encoding of $y=f(x)$, such that applying any leakage function $L\in \cal L$ to the wires of $C$ reveals essentially nothing about $x$. A leakage-tolerant circuit achieves the stronger guarantee that even when $x$ and $y$ are not protected by any encoding, the output of $L$ can be simulated by applying some $L'\in \cal L$ to $x$ and $y$ alone. Thus, $C$ is as secure as an ideal hardware implementation of $f$ with respect to leakage from $\cal L$. Leakage-resilient circuits were constructed for low-complexity classes $\cal L$, including (length-$t$ output) $\mathcal{AC}^0$ functions, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where $L$ outputs the values of $t$ wires in $C$. We initiate a systematic study of leakage-tolerant circuits for natural classes $\cal L$ of global leakage functions, obtaining the following main results. $\textbf{Leakage-tolerant circuits for depth-1 leakage.}$ Every circuit $C_f$ for $f$ can be efficiently compiled into an $\cal L$-tolerant circuit $C$ for $f$, where $\cal L$ includes all leakage functions $L$ that output either $t$ parities or $t$ disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in $2^{O(t)}$ time. We provide partial evidence that this may be inherent. $\textbf{Application to stateful leakage-resilient circuits.}$ We present a general transformation from (stateless) leakage-tolerant circuits to stateful leakage-resilient circuits. Using this transformation, we obtain the first constructions of stateful $t$-leakage-resilient circuits that tolerate a continuous parity/disjunction/conjunction leakage in which the circuit size grows sub-quadratically with $t$. Interestingly, here we can obtain $\mathtt{poly}(t)$-time simulation even in the case of parities.
Last updated:  2024-02-26
Transaction Fee Mechanism Design in a Post-MEV World
Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase ``maximal extractable value (MEV).'' We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: With active block producers, no non-trivial or approximately welfare maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These impossibility results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as orderflow auctions, block producer competition, trusted hardware, or cryptographic techniques. We then proceed to a more fine-grained model of block production that is inspired by current practice, in which we distinguish the roles of ``searchers'' (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and ``proposers'' (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an ``MEV oracle'' for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a transaction fee mechanism that resembles how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the design space with searchers more generally, and design a mechanism that circumvents our impossibility results for mechanisms without searchers. Our mechanism (the ``SAKA'' mechanism) is deterministic, incentive-compatible (for users, searchers, and the block producer), and sybil-proof, and it guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transactions are small relative to blocks, no incentive-compatible, sybil proof, and deterministic transaction fee mechanism can guarantee more than 50% of the maximum-possible welfare.
Last updated:  2024-05-20
Fuzzy Private Set Intersection with Large Hyperballs
Aron van Baarsen and Sihang Pu
Traditional private set intersection (PSI) involves a receiver and a sender holding sets $X$ and $Y$, respectively, with the receiver learning only the intersection $X\cap Y$. We turn our attention to its fuzzy variant, where the receiver holds \(|X|\) hyperballs of radius \(\delta\) in a metric space and the sender has $|Y|$ points. Representing the hyperballs by their centers, the receiver learns the points $x\in X$ for which there exists $y\in Y$ such that $\mathsf{dist}(x,y)\leq \delta$ with respect to some distance metric. Previous approaches either require general-purpose multi-party computation (MPC) techniques like garbled circuits or fully homomorphic encryption (FHE), leak details about the sender's precise inputs, support limited distance metrics, or scale poorly with the hyperballs' volume. This work presents the first black-box construction for fuzzy PSI (including other variants such as PSI cardinality, labeled PSI, and circuit PSI), which can handle polynomially large radius and dimension (i.e., a potentially exponentially large volume) in two interaction messages, supporting general \(L_{p\in[1,\infty]}\) distance, without relying on garbled circuits or FHE. The protocol excels in both asymptotic and concrete efficiency compared to existing works. For security, we solely rely on the Decisional Diffie-Hellman (DDH) assumption in the random oracle model.
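For intuition, the functionality that fuzzy PSI realizes can be stated in a few lines of Python; the sketch below computes in the clear what the receiver is allowed to learn (names and parameters are illustrative only; the protocol, of course, achieves this output without revealing anything else).

```python
# Ideal functionality of fuzzy PSI, computed in the clear (illustration only).
import math

def lp_dist(x, y, p):
    """L_p distance between two points; p = math.inf gives L_infinity."""
    if p == math.inf:
        return max(abs(a - b) for a, b in zip(x, y))
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def fuzzy_psi(receiver_centers, sender_points, delta, p=2):
    """Return the hyperball centers that contain at least one sender point."""
    return [x for x in receiver_centers
            if any(lp_dist(x, y, p) <= delta for y in sender_points)]

centers = [(0, 0), (10, 10)]
points = [(1, 1), (50, 50)]
print(fuzzy_psi(centers, points, delta=2))              # [(0, 0)] under L2
print(fuzzy_psi(centers, points, delta=1, p=math.inf))  # [(0, 0)] under L_inf
```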
Last updated:  2024-02-26
How to Validate a Verification?
Houda Ferradi
This paper introduces \textsl{signature validation}, a primitive allowing any \underline{t}hird party $T$ (\underline{T}héodore) to verify that a \underline{v}erifier $V$ (\underline{V}adim) computationally verified a signature $\sigma_s$ on a message $m$ issued by a \underline{s}igner $S$ (\underline{S}arah). A naive solution consists in having Sarah send $x=\{m,\sigma_s\}$, where $\sigma_s$ is Sarah's signature on $m$, and having Vadim confirm reception with a signature $\sigma_v$ on $x$. Unfortunately, this only attests \textsl{proper reception} by Vadim, i.e. that Vadim \textsl{could have checked} $x$, not that Vadim \textsl{actually verified} $x$. By ``actually verifying'' we mean providing a proof or a convincing argument that a program running on Vadim's machine checked the correctness of $x$. This paper proposes several solutions for doing so, thereby providing a useful building block in numerous commercial and legal interactions for proving informed consent.
Last updated:  2024-02-26
Attribute-Based Signatures with Advanced Delegation, and Tracing
Cécile Delerablée, Lénaïck Gouriou, and David Pointcheval
Attribute-based cryptography allows fine-grained control on the use of the private key. In particular, an attribute-based signature (ABS) specifies the capabilities of the signer, who can only sign messages associated to a policy that is authorized by his set of attributes. Furthermore, we can expect signatures not to leak any information about the identity of the signer. ABS is a useful tool for identity-preserving authentication processes that require granular access control, and it can furthermore be enhanced with additional properties, for example delegation, where users are able to manage a set of keys derived from their original one. In this paper, we address delegation of signing keys. Our first delegation works for any subset of the original attributes, which is the intuitive approach to delegation. Furthermore, we also provide another kind of delegation where the delegator can choose a policy at delegation time to produce keys that can sign any message under this specific policy. This last approach to delegation is a direct application of a new version of the indexing technique, which was first introduced by Okamoto and Takashima in order to prove adaptive security in ABS and its counterpart for encryption, ABE. On top of that, we prove that our scheme is compatible with a well-studied feature of ABS, traceability, by using an approach based on linearly homomorphic signatures. All our schemes also guarantee the anonymity of the real signer. The unforgeability of our schemes is proven under the SXDH assumption, and our constructions use the Dual Pairing Vector Spaces (DPVS) framework developed by Okamoto and Takashima, which has been widely used for all kinds of attribute-based and functional cryptography mechanisms.
Last updated:  2024-02-26
Registered Functional Encryptions from Pairings
Ziqi Zhu, Jiangtao Li, Kai Zhang, Junqing Gong, and Haifeng Qian
This work initiates the study of concrete registered functional encryption (Reg-FE) beyond ``all-or-nothing'' functionalities: - We build the first Reg-FE for linear functions or inner-product evaluation (Reg-IPFE) from pairings. The scheme achieves adaptive IND-security under the $k$-Lin assumption in the prime-order bilinear group. A minor modification yields the first Registered Inner-Product Encryption (Reg-IPE) scheme from the $k$-Lin assumption. Prior work achieves the same security in the generic group model. - We build the first Reg-FE for quadratic functions (Reg-QFE) from pairings. The scheme achieves very selective simulation-based security (SIM-security) under the bilateral $k$-Lin assumption in the prime-order bilinear group. Here, ``very selective'' means that the adversary claims challenge messages, all quadratic functions to be registered, and all corrupted users at the beginning. Besides focusing on the compactness of the master public key and helper keys, we also aim for compact ciphertexts in Reg-FE. Let $L$ be the number of slots and $n$ be the input size. Our first Reg-IPFE has weakly compact ciphertexts of size $O(n\cdot\log L)$ while our second Reg-QFE has compact ciphertexts of size $O(n+\log L)$. Technically, for our first Reg-IPFE, we employ the nested dual-system method within the context of Reg-IPFE; for our second Reg-QFE, we follow Wee's ``IPFE-to-QFE'' transformation [TCC '20] but devise a set of new techniques that make our pairing-based Reg-IPFE compatible. Along the way, we introduce a new notion named Pre-Constrained Registered IPFE which generalizes slotted Reg-IPFE by constraining the form of functions that can be registered.
Last updated:  2024-02-26
Haven++: Batched and Packed Dual-Threshold Asynchronous Complete Secret Sharing with Applications
Nicolas Alhaddad, Mayank Varia, and Ziling Yang
Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require secrecy. Dual-threshold ACSS permits a dealer to distribute a secret to a collection of $n$ servers so that everyone holds shares of a polynomial containing the dealer's secret. This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol. The result is an asymptotic improvement in amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation. We implement Haven++ and find that it improves performance over the hbACSS protocol of Yurek et al. by a factor of 3-10$\times$ or more across a wide range of parameters for the number of parties and batch size.
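The packing idea behind the single sharing phase is the classical packed Shamir technique. Below is a simplified sketch under our own demo parameters (a small Mersenne prime field, secrets embedded at negative evaluation points); this is the textbook packing idea only, not the Haven++ protocol itself.

```python
# Packed Shamir sharing sketch: k secrets ride on one degree-`deg` polynomial.
import random

P = 2**31 - 1  # a Mersenne prime field, chosen purely for the demo

def interpolate(points, x, p=P):
    """Lagrange-interpolate [(x_i, y_i)] and evaluate at x (mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def packed_share(secrets, n, deg):
    """Embed k secrets at points -1..-k, randomize the remaining degrees of
    freedom, and hand party i the evaluation at point i. Privacy holds
    against up to deg + 1 - k colluding parties."""
    k = len(secrets)
    pts = [(-(i + 1) % P, s) for i, s in enumerate(secrets)]
    pts += [(P - k - 1 - i, random.randrange(P)) for i in range(deg + 1 - k)]
    return [(i, interpolate(pts, i)) for i in range(1, n + 1)]

shares = packed_share([11, 22, 33], n=7, deg=4)
# Any deg + 1 = 5 shares reconstruct all three packed secrets at once.
rec = [interpolate(shares[:5], -(i + 1) % P) for i in range(3)]
print(rec)  # [11, 22, 33]
```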
Last updated:  2024-04-22
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
We construct two new accumulation schemes. The first one is for checking that $\ell$ read and write operations were performed correctly from a memory of size $T$. The prover time is entirely independent of $T$ and only requires committing to $6\ell$ field elements, which is an over $100\times$ improvement over prior work. The second one is for deterministic computations. It does not require committing to the intermediate wires of the computation but only to the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS (Crypto 21) compiler, these protocols yield an efficient, incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
Last updated:  2024-03-09
Under What Conditions Is Encrypted Key Exchange Actually Secure?
Jake Januzelli, Lawrence Roy, and Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE. In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks: 1. We show that among the seven existing results on the UC-security of (O)EKE, six are flawed; 2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy the properties of strong pseudorandomness, pseudorandom non-malleability, and collision resistance, all of which are missing in existing works; 3. We give UC-security proofs for EKE and OEKE using Programmable-Once Random Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC). Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also give a security analysis of POPF in a new composition framework called almost UC, which we believe is interesting in its own right.
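For orientation, the EKE compile pattern itself is small: each party's unauthenticated KA message is encrypted under the password before being sent. The toy sketch below uses a password-derived XOR mask as a stand-in for the Ideal Cipher/POPF and made-up group parameters; as the paper's results stress, it is precisely this instantiation step that determines security, so this is a structural illustration only, not a secure PAKE.

```python
# Toy EKE structure: mask Diffie-Hellman KA messages under the password.
# The XOR mask below is NOT an Ideal Cipher or POPF; illustration only.
import hashlib, secrets

p = 2**127 - 1   # toy prime-order-ish group parameters, demo values only
g = 2

def mask(pw, data: bytes) -> bytes:
    """Password-derived keystream XOR; self-inverse, stands in for IC/POPF."""
    stream = hashlib.shake_256(pw.encode()).digest(len(data))
    return bytes(a ^ b for a, b in zip(stream, data))

pw = "correct horse"
a, b = secrets.randbelow(p), secrets.randbelow(p)
msg_a = mask(pw, pow(g, a, p).to_bytes(16, "big"))  # Alice -> Bob
msg_b = mask(pw, pow(g, b, p).to_bytes(16, "big"))  # Bob -> Alice (OEKE would send this unencrypted)
key_b = pow(int.from_bytes(mask(pw, msg_a), "big"), b, p)
key_a = pow(int.from_bytes(mask(pw, msg_b), "big"), a, p)
assert key_a == key_b
```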
Last updated:  2024-02-25
Circuit Bootstrapping: Faster and Smaller
Ruida Wang, Yundi Wen, Zhihao Li, Xianhui Lu, Benqiang Wei, Kun Liu, and Kunpeng Wang
We present a novel circuit bootstrapping algorithm that outperforms the state-of-the-art TFHE method with a 9.9× speedup and a 15.6× key size reduction. These improvements can be attributed to two technical contributions. Firstly, we redesigned the circuit bootstrapping workflow to operate exclusively on the ring ciphertext type, which eliminates the need for conversion between LWE and RLWE ciphertexts. Secondly, we improve the LMKC+ blind rotation algorithm by reducing the number of automorphisms, and then propose the first automorphism-type multi-value functional bootstrapping. These automorphism-based techniques lead to further key size optimization, and are of independent interest beyond circuit bootstrapping. Based on our new circuit bootstrapping, we can evaluate AES-128 in 26.2s (single thread), achieving a 10.3× speedup compared with the state-of-the-art TFHE-based approach.
Last updated:  2024-02-25
Theoretical Explanation and Improvement of Deep Learning-aided Cryptanalysis
Weixi Zheng, Liu Zhang, and Zilong Wang
At CRYPTO 2019, Gohr demonstrated that differential-neural distinguishers (DNDs) for Speck32/64 can learn more features than classical cryptanalysis's differential distribution tables (DDT). Furthermore, a non-classical key recovery procedure is devised by combining the Upper Confidence Bound (UCB) strategy and the BayesianKeySearch algorithm. Consequently, the time complexity of 11-round key recovery attacks on Speck32/64 is significantly reduced compared with the state-of-the-art results in classical cryptanalysis. This advancement in deep learning-assisted cryptanalysis has opened up new possibilities. However, the specific encryption features exploited by DNDs remain unclear. In this paper, we begin by analyzing the features learned by DND based on the probability distribution of a ciphertext pair. Our analysis reveals that DND not only learns the differential features of the ciphertext pair but also captures the XOR information of the left and right branches of the ciphertext pair. This explains why the performance of DND can outperform DDT in certain cases. For other ciphers, we can also predict whether deep learning methods can achieve superior results to classical methods based on the probability distribution of the ciphertext pair. Next, we modify the input data format and network structure based on the specific features that can be learned to train DND specifically. With these modifications, it is possible to reduce the size of their parameters to only 1/16 of their previous networks while maintaining high precision. Additionally, the training time for the DNDs is significantly reduced. Finally, to improve the efficiency of deep learning-assisted cryptanalysis, we introduce Bayes-UCB to select promising ciphertext structures more efficiently. We also introduce an improved BayesianKeySearch algorithm to retain guessed keys with the highest scores in key guessing. We use both methods to launch 11-round, 12-round, and 13-round key recovery attacks on Speck32/64. The results show that under the same conditions, the success rate of 11-round key recovery attacks has increased from Gohr's 36.1% to 52.8%, the success rate of 12-round key recovery attacks has increased from Gohr's 39% to 50%, and the success rate of 13-round key recovery attacks has increased from Zhang et al.'s 21% to 24%. In addition, the time complexity of these experiments is also significantly reduced.
Last updated:  2024-04-19
Formal Verification of Emulated Floating-Point Arithmetic in Falcon
Vincent Hwang
We show that there is a discrepancy between the emulated floating-point multiplications in the submission package of Falcon and the claimed behavior. In particular, we show that floating-point products with absolute values smaller than the smallest normal positive floating-point number are incorrectly zeroized. However, we show that the discrepancy doesn't affect the complex fast Fourier transform by modeling the floating-point addition, subtraction, and multiplication in CryptoLine. We later implement our own floating-point multiplications in Armv7-M assembly and Jasmin and prove their equivalence with our model, demonstrating the possibility of transferring the challenging verification task (verifying highly optimized assembly) to the presumably more readable code base (Jasmin).
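The zeroization regime is easy to reproduce in miniature. Below is a toy flush-to-zero model (our own sketch, not Falcon's emulation code) that disagrees with IEEE-754 doubles exactly on subnormal products.

```python
# Toy model of the discrepancy: an emulated multiplier that flushes subnormal
# results to zero disagrees with IEEE-754 doubles whenever |a*b| lands below
# the smallest positive *normal* double but above the subnormal floor.
import sys

MIN_NORMAL = sys.float_info.min            # 2**-1022

def emul_mul_flush_subnormals(a, b):
    r = a * b                              # what the hardware computes
    return 0.0 if 0 < abs(r) < MIN_NORMAL else r

a, b = 2.0**-600, 2.0**-450                # product 2**-1050 is subnormal
print(a * b)                               # ~8.3e-317: IEEE keeps a subnormal
print(emul_mul_flush_subnormals(a, b))     # 0.0: the reported zeroization
```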
Last updated:  2024-02-24
POPSTAR: Lightweight Threshold Reporting with Reduced Leakage
Hanjun Li, Sela Navot, and Stefano Tessaro
This paper proposes POPSTAR, a new lightweight protocol for the private computation of heavy hitters, also known as a private threshold reporting system. In such a protocol, the users provide input measurements, and a report server learns which measurements appear more than a pre-specified threshold. POPSTAR follows the same architecture as STAR (Davidson et al, CCS 2022) by relying on a helper randomness server in addition to a main server computing the aggregate heavy hitter statistics. While STAR is extremely lightweight, it leaks a substantial amount of information, consisting of an entire histogram of the provided measurements (but only reveals the actual measurements that appear beyond the threshold). POPSTAR shows that this leakage can be reduced at a modest cost ($\sim$7$\times$ longer aggregation time). Our leakage is closer to that of Poplar (Boneh et al, S&P 2021), which relies however on distributed point functions and a different model which requires interactions of two non-colluding servers (with equal workloads) to compute the heavy hitters.
Last updated:  2024-02-24
On the cryptosystems based on two Eulerian transformations defined over the commutative rings $Z_{2^s}$, $s>1$.
Vasyl Ustimenko
We suggest the family of ciphers ${}^sE^n$, $n=2,3,\dots$, with the space of plaintexts $(Z^*_{2^s})^n$, $s>1$, such that the encryption map is a composition of the kind $G=G_1A_1G_2A_2$, where the $A_i$ are affine transformations from $AGL_n(Z_{2^s})$ preserving the variety $(Z^*_{2^s})^n$, and the Eulerian endomorphisms $G_i$, $i=1,2$, of $K[x_1, x_2,\dots, x_n]$ move each $x_i$ to a monomial term $\mu (x_1)^{d(1)}(x_2)^{d(2)}\dots(x_n)^{d(n)}$, $\mu \in Z^*_{2^s}$, and act on $(Z^*_{2^s})^n$ as bijective transformations. The cipher is converted to a protocol-supported cryptosystem. Protocols of Noncommutative Cryptography implemented on the platform of Eulerian endomorphisms are used for the delivery of the $G_i$ and $A_i$ from Alice to Bob. One can use twisted Diffie-Hellman protocols, whose security rests on the complexity of the Conjugacy Power problem, or the hidden tame homomorphism protocol, whose security rests on the word decomposition problem. Instead of the delivery of the $G_i$, Alice and Bob can elaborate these transformations via the inverse twisted Diffie-Hellman protocol implemented on the platform of tame Eulerian transformations of $(Z^*_{2^s})^n$. The cost of a single protocol is $O(n^3)$ and the cost of computing the preimage of the used nonlinear map is $O(n^2)$. So the verification of $n^t$, $t\geq 1$, signatures takes time $O(n^{t+2})$. Instead of the inverse twisted Diffie-Hellman protocol, correspondents can use the inverse hidden tame homomorphism protocol, which rests on the complexity of word decomposition for tame Eulerian transformations. We use natural bijections between $Z^*_{2^s}$ and $Z_{2^{s-1}}$, between $Z^*_{2^s}$ and the finite field $F_{2^{s-1}}$, and between $Z^*_{2^s}$ and the Boolean ring $B_{s-1}$ of order $2^{s-1}$ to modify the family of ciphers or cryptosystems via the change of $AGL_n(Z_{2^s})$ for $AGL_n(K)$, where $K$ is one of the rings $Z_{2^{s-1}}$, $F_{2^{s-1}}$, or $B_{s-1}$. The new ciphers are defined via the multiplications of two different commutative rings, $Z_{2^s}$ and $K$. This does not allow one to treat them as stream ciphers of multivariate cryptography and to use the corresponding cryptanalytic techniques; an adversary is not able to use known cryptanalytic methods such as linearisation attacks. We discuss the option of changing the above-mentioned elements of $AGL_n(Z_{2^s})$ or $AGL_n(K)$ for a nonlinear multivariate transformation $F$ of $(Z_{2^s})^n$ or $K^n$ with a symmetric trapdoor accelerator $T$, i.e. a piece of information such that the knowledge of $T$ allows one to compute the value $F(p)$ at an arbitrarily chosen $p \in P$ in time $O(n^2)$ and to solve the equation $F(x)=c$ for each $c$ from $C$ in time $O(n^2)$.
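As a concrete illustration of the central object, the sketch below evaluates a monomial Eulerian map on $(Z^*_{2^s})^n$; all parameter values ($s$, $n$, the multipliers, and the exponent matrix) are arbitrary demo choices, not the paper's.

```python
# Evaluate an "Eulerian" monomial map on the units of Z_{2^s}:
# x_i  |->  mu_i * prod_j x_j^{d[i][j]}  (mod 2^s). Demo parameters only.
s, n = 5, 3
M = 2**s                      # work modulo 2^s; the units are the odd residues

def prod_pow(x, exps):
    r = 1
    for xj, e in zip(x, exps):
        r = r * pow(xj, e, M) % M
    return r

def eulerian_map(x, mu, d):
    assert all(v % 2 == 1 for v in x)   # inputs must be units mod 2^s
    return [mu[i] * prod_pow(x, d[i]) % M for i in range(len(x))]

mu = [3, 5, 7]                          # odd multipliers (units)
d = [[1, 2, 0], [0, 1, 2], [2, 0, 1]]   # exponent matrix
print(eulerian_map([9, 11, 13], mu, d)) # the image stays inside the units
```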
Last updated:  2024-05-14
Plinko: Single-Server PIR with Efficient Updates via Invertible PRFs
Alexander Hoover, Sarvar Patel, Giuseppe Persiano, and Kevin Yeo
We study single-server private information retrieval (PIR) where a client wishes to privately retrieve the $x$-th entry from a database held by a server without revealing the index $x$. In our work, we focus on PIR with client pre-processing where the client may compute hints during an offline phase. The hints are then leveraged during queries to obtain sub-linear online time. We present Plinko that is the first single-server PIR with client pre-processing that obtains optimal trade-offs between client storage and total (client and server) query time for all parameters. Our scheme uses $t = \tilde{O}(n/r)$ query time for any client storage size $r$. This matches known lower bounds of $r \cdot t = \Omega(n)$ up to logarithmic factors for all parameterizations whereas prior works could only match the lower bound when $r = \tilde{O}(\sqrt{n})$. Moreover, Plinko is also the first updateable PIR scheme where an entry can be updated in worst-case $\tilde{O}(1)$ time. As our main technical tool, we define the notion of an invertible pseudorandom function (iPRF) that generalizes standard PRFs to be equipped with an efficient inversion algorithm. We present a construction of an iPRF from one-way functions where forward evaluation runs in $\tilde{O}(1)$ time and inversion runs in time linear in the inverse set (output) size. Furthermore, our iPRF construction is the first that remains efficient and secure for arbitrary domain and range sizes (including small domains and ranges). In the context of single-server PIR, we show that iPRFs may be used to construct the first hint set representation where finding a hint containing an entry $x$ may be done in $\tilde{O}(1)$ time.
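The iPRF interface can be illustrated with a deliberately naive stand-in (our own toy, not the paper's one-way-function-based construction): derive $f:[N]\to[M]$ from a small keyed permutation, so that forward evaluation is one permutation call and inversion enumerates exactly the fiber of $y$.

```python
# Toy iPRF interface: f(x) = P(x) mod M for a keyed permutation P on [N], so
# f^{-1}(y) = { P^{-1}(z) : z = y (mod M) }, costing time linear in the
# preimage-set size. Illustration only; not the paper's construction.
import hashlib

N, M = 2**16, 2**8

def rf(k: bytes, r: int, v: int) -> int:
    # round function: one byte of SHA-256(key || round || half)
    return hashlib.sha256(k + bytes([r, v])).digest()[0]

def perm(k: bytes, x: int, inverse: bool = False) -> int:
    # 4-round balanced Feistel permutation on 16-bit values
    L, R = x >> 8, x & 0xFF
    if not inverse:
        for r in range(4):
            L, R = R, L ^ rf(k, r, R)
    else:
        for r in reversed(range(4)):
            L, R = R ^ rf(k, r, L), L
    return (L << 8) | R

def iprf(k: bytes, x: int) -> int:
    return perm(k, x) % M            # forward evaluation: one permutation call

def iprf_inv(k: bytes, y: int) -> list:
    # walk the fiber {z : z = y (mod M)} back through the permutation
    return [perm(k, z, inverse=True) for z in range(y, N, M)]

k = b"demo-key"
pre = iprf_inv(k, iprf(k, 12345))
assert 12345 in pre and all(iprf(k, x) == iprf(k, 12345) for x in pre)
print(len(pre))                      # exactly N/M = 256 preimages here
```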
Last updated:  2024-02-29
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
Giovanni Deligios and Mose Mizrahi Erbes
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$. Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated. Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols. In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocols into a network-agnostic one. This process only incurs a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Our compiler also preserves under a variety of assumptions the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus. As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
Last updated:  2024-02-23
Threshold Garbled Circuits with Low Overhead
Schuyler Rosefield, abhi shelat, and LaKyah Tyner
The folklore approach to designing a threshold variant of symmetric cryptographic algorithms involves applying generic MPC methods to secret sharing techniques: the MPC first combines participant input shares using the secret sharing scheme, and then evaluates the cryptographic function on the reconstructed key. Hardening this to be secure against $n-1$ malicious parties requires some mechanism to ensure input consistency, e.g., adding MACs to inputs, which consequently increases the number of inputs and gates to the MPC. In many cases, this extra overhead is substantially more than the underlying cost of evaluating the symmetric cryptographic algorithm. We present a scheme that can convert any suitable maliciously secure dishonest-majority boolean-circuit $F_{\mathsf{MPC}}$ into a threshold scheme $F_{\mathsf{thresh}}$ with almost no overhead. Specifically, we present an SUC-secure scheme that allows for reactive threshold $t$-of-$n$ boolean circuit evaluation amongst a group of $n$ parties $P$, for any $t \leq n$, against a malicious adversary that corrupts any number of parties less than the threshold $t$. Moreover, multiple circuits can be evaluated sequentially, with the secret-shared authenticated outputs of a circuit to be used subsequently as inputs for a new circuit by any $S \subseteq P$ of size $|S| \geq t$. Building upon the works of Wang et al., Hazay et al., and Yang et al. [WRK17, HSSV17, YWZ20] for dishonest-majority $F_{\mathsf{MPC}}$, our key insight is to create threshold versions of the ``authenticated bits'' used to handle input in these recent $n$-party garbled circuits protocols. The resulting design incurs a small overhead to produce the reusable ``threshold authenticated bits'' during preprocessing, and adds no extra communication to evaluate with the authenticated input during the online phase. Using our methods, thresholdizing a boolean circuit has essentially no performance overhead. For example, to compute HMAC, a full Setup+Eval execution of the $(n-2)$-out-of-$n$ thresholdized version is approximately 4% more expensive than the state-of-the-art $n$-party MPC. In contrast, using the folklore method is approximately 100% more expensive. This is especially true for small circuits such as AES, which has 6800 gates and thus incurs the most overhead for thresholdizing. Simply considering the online Eval cost, our approach can evaluate AES blocks at 2.3 blocks/s with 16 parties, exceeding the baseline MPC cost without preprocessing, and surpassing the folklore method that only achieves 0.33 blocks/s. Ultimately, this result makes threshold boolean circuit MPC as feasible as any MPC application.
Last updated:  2024-02-23
Alternative Key Schedules for the AES
Christina Boura, Patrick Derbez, and Margot Funk
The AES block cipher is today the most important and analyzed symmetric algorithm. While all versions of the AES are known to be secure in the single-key setting, this is not the case in the related-key scenario. In this article we try to answer the question whether the AES would resist differential-like related-key attacks better if the key schedule were different. For this, we search for alternative permutation-based key schedules by extending the work of Khoo et al. at ToSC 2017 and Derbez et al. at SAC 2018. We first show that the model of Derbez et al. was flawed. Then, we develop different approaches together with MILP-based tools to find good permutations that could be used as the key schedule for AES-128, AES-192 and AES-256. Our methods permitted us to find permutations that outperform the permutation exhibited by Khoo et al. for AES-128. Moreover, our new approach based on two MILP models that call one another allowed us to handle a larger search space and thus to search for alternative key schedules for the two bigger versions of AES. This method permitted us to find permutations for AES-192 and AES-256 that provide better resistance to related-key differential attacks. Most importantly, we showed that these variants can resist full-round boomerang attacks.
Last updated:  2024-02-23
Exploring the Advantages and Challenges of Fermat NTT in FHE Acceleration
Andrey Kim, Ahmet Can Mert, Anisha Mukherjee, Aikata Aikata, Maxim Deryabin, Sunmin Kwon, HyungChul Kang, and Sujoy Sinha Roy
Recognizing the importance of fast and resource-efficient polynomial multiplication in homomorphic encryption, in this paper we introduce a novel method that enables an integer-multiplier-less Number Theoretic Transform (NTT) for computing polynomial multiplication. First, we use a Fermat number as an auxiliary modulus of the NTT. However, this approach scales poorly with the degree of the polynomial. Hence, we propose a transformation of a large-degree univariate polynomial into small-degree multivariate polynomials. After that, we compute these NTTs on small-degree polynomials with a Fermat number as modulus. We design an accelerator architecture customized for the novel multivariate NTT and use it for benchmarking practical homomorphic encryption applications. The accelerator can achieve a 1,200× speed-up compared to software implementations. We further discuss the potential and limitations of the proposed polynomial multiplication method in the context of homomorphic encryption.
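The multiplier-less property is visible already in a toy transform: modulo the Fermat number $F=2^{16}+1$, the element $2$ has order $32$, so every twiddle factor of a short NTT is a power of two. Below is a naive length-8 sketch of that arithmetic (our own illustration; the paper's multivariate transformation and accelerator go well beyond this).

```python
# Fermat-modulus NTT sketch: every twiddle multiplication is a shift plus a
# fold that uses 2^16 = -1 (mod F). Demo only, O(n^2) and length 8.
F = 2**16 + 1                        # the Fermat number F_4 = 65537

def shift_mul(a, k):
    """a * 2^k mod F using shifts and folds only (2 has order 32 mod F)."""
    a <<= (k % 32)
    while not (0 <= a < F):          # fold: hi*2^16 + lo = lo - hi (mod F)
        a = (a & 0xFFFF) - (a >> 16)
    return a

def ntt8(coeffs):
    """Naive length-8 NTT over Z_F with omega = 2^4 = 16 (order 8)."""
    return [sum(shift_mul(c, 4 * i * j) for j, c in enumerate(coeffs)) % F
            for i in range(8)]

x = [1, 2, 3, 4, 0, 0, 0, 0]
# Cross-check the shift-based twiddles against ordinary modular powers.
assert ntt8(x) == [sum(c * pow(16, i * j, F) for j, c in enumerate(x)) % F
                   for i in range(8)]
print(ntt8(x)[:4])
```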
Last updated:  2024-02-26
The Complexity of Algebraic Algorithms for LWE
Matthias Johann Steiner
Arora & Ge introduced a noise-free polynomial system to compute the secret of a Learning With Errors (LWE) instance via linearization. Albrecht et al. later utilized the Arora-Ge polynomial model to study the complexity of Gröbner basis computations on LWE polynomial systems under the assumption of semi-regularity. In this paper we revisit the Arora-Ge polynomial and prove that it satisfies a genericity condition recently introduced by Caminata & Gorla, called being in generic coordinates. For polynomial systems in generic coordinates one can always estimate the complexity of DRL Gröbner basis computations in terms of the Castelnuovo-Mumford regularity and henceforth also via the Macaulay bound. Moreover, we generalize the Gröbner basis algorithm of Semaev & Tenti to arbitrary polynomial systems with a finite degree of regularity. In particular, existence of this algorithm yields another approach to estimate the complexity of DRL Gröbner basis computations in terms of the degree of regularity. In practice, the degree of regularity of LWE polynomial systems is not known, though one can always estimate the lowest achievable degree of regularity. Consequently, from a designer's worst case perspective this approach yields sub-exponential complexity estimates for general, binary secret and binary error LWE. In recent works by Dachman-Soled et al. the hardness of LWE in the presence of side information was analyzed. Utilizing their framework we discuss how hints can be incorporated into LWE polynomial systems and how they affect the complexity of Gröbner basis computations.
Last updated:  2024-02-23
Trapdoor Memory-Hard Functions
Benedikt Auerbach, Christoph U. Günther, and Krzysztof Pietrzak
Memory-hard functions (MHF) are functions whose evaluation provably requires a lot of memory. While MHFs are an unkeyed primitive, it is natural to consider the notion of trapdoor MHFs (TMHFs). A TMHF is like an MHF, but when sampling the public parameters one also samples a trapdoor which allows evaluating the function much more cheaply. Biryukov and Perrin (Asiacrypt'17) were the first to consider TMHFs and put forth a candidate TMHF construction called Diodon that is based on the Scrypt MHF (Percival, BSDCan'09). To allow for a trapdoor, Scrypt's initial hash chain is replaced by a sequence of squares in a group of unknown order where the order of the group is the trapdoor. For a length $n$ sequence of squares and a group of order $N$, Diodon's cumulative memory complexity (CMC) is $O(n^2\log N)$ without the trapdoor and $O(n \log(n) \log(N)^2)$ with knowledge of it. While Scrypt is proven to be optimally memory-hard in the random oracle model (Alwen et al., Eurocrypt'17), Diodon's memory-hardness has not been proven so far. In this work, we fill this gap by rigorously analyzing a specific instantiation of Diodon. We show that its CMC is lower bounded by $\Omega(\frac{n^2}{\log n} \log N)$, which almost matches the upper bound. Our proof is based on Alwen et al.'s lower bound on Scrypt's CMC but requires non-trivial modifications due to the algebraic structure of Diodon. Most importantly, our analysis involves a more elaborate compression argument and a solvability criterion for certain systems of Diophantine equations.
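The trapdoor mechanism is compact enough to sketch: without the trapdoor, $g^{2^n} \bmod N$ takes $n$ sequential squarings; with the group order, the exponent collapses first. Toy parameters only (small Mersenne factors, not a secure instantiation).

```python
# Trapdoor shortcut for repeated squaring in a group of unknown order.
p, q = 2**31 - 1, 2**61 - 1          # demo primes; real moduli are far larger
N, phi = p * q, (p - 1) * (q - 1)
g, n = 3, 10**5

def slow(g, n, N):                   # n sequential squarings; no known shortcut
    for _ in range(n):
        g = g * g % N
    return g

def fast(g, n, N, phi):              # trapdoor: reduce 2^n modulo the order
    return pow(g, pow(2, n, phi), N)

assert slow(g, n, N) == fast(g, n, N, phi)
```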
Last updated:  2024-02-23
Aggregating Falcon Signatures with LaBRADOR
Marius A. Aardal, Diego F. Aranha, Katharina Boudgoust, Sebastian Kolby, and Akira Takahashi
Several prior works have suggested to use non-interactive arguments of knowledge with short proofs to aggregate signatures of Falcon, which is part of the first post-quantum signatures selected for standardization by NIST. Especially LaBRADOR, based on standard structured lattice assumptions and published at CRYPTO’23, seems promising to realize this task. However, no prior work has tackled this idea in a rigorous way. In this paper, we thoroughly prove how to aggregate Falcon signatures using LaBRADOR. First, we improve LaBRADOR by moving from a low-splitting to a high-splitting ring, allowing for faster computations. This modification leads to some additional technical challenges for proving the knowledge soundness of LaBRADOR. Moreover, we provide the first complete knowledge soundness analysis for the non-interactive version of LaBRADOR. Here, the multi-round and recursive nature of LaBRADOR requires a complex and thorough analysis. For this purpose, we introduce the notion of predicate special soundness (PSS). This is a general framework for evaluating the knowledge error of complex Fiat-Shamir arguments of knowledge protocols in a modular fashion, which we believe to be of independent interest. Lastly, we explain the exact steps to take in order to adapt the LaBRADOR proof system for aggregating Falcon signatures and provide concrete estimates for proof sizes. Additionally, we formalize the folklore approach of obtaining aggregate signatures from the class of hash-then-sign signatures through arguments of knowledge.
Last updated:  2024-03-20
A Zero-Dimensional Gröbner Basis for Poseidon
Matthias Johann Steiner
In this paper we construct dedicated weight orders $>$ so that a $>$-Gröbner basis of Poseidon can be found via linear transformations for the preimage as well as the CICO problem. In particular, with our Gröbner bases we can exactly compute the $\mathbb{F}_q$-vector space dimension of the quotient space for all possible Poseidon configurations. This in turn resolves previous attempts to assess the security of Poseidon against Gröbner basis attacks, since the vector space dimension quantifies the complexity of computing the variety of a zero-dimensional polynomial system.
Last updated:  2024-02-23
NiLoPher: Breaking a Modern SAT-Hardened Logic-Locking Scheme via Power Analysis Attack
Prithwish Basu Roy, Johann Knechtel, Akashdeep Saha, Saideep Sreekumar, Likhitha Mankali, Mohammed Nabeel, Debdeep Mukhopadhyay, Ramesh Karri, and Ozgur Sinanoglu
LoPher brings, for the first time, cryptographic security promises to the field of logic locking in a bid to break the game of cat-and-mouse seen in logic locking. Toward this end, LoPher embeds the circuitry to lock within multiple rounds of a block cipher, by carefully configuring all the S-Boxes. To realize general Boolean functionalities and to support varying interconnect topologies, LoPher also introduces additional layers of MUXes between S-Boxes and the permutation operations. The authors of LoPher claim resilience against SAT-based attacks in particular. Here, we show the first successful attack on LoPher. First, we uncover a significant limitation of LoPher's key-space configuration, resulting in large numbers of equivalent keys and, thus, a largely simplified search space for attackers in practice. Second, motivated by the well-proven effectiveness of such attacks against ciphers, we employ a power side-channel attack against LoPher. We find that ISCAS-85 benchmarks locked with LoPher can all be broken in a few thousand traces. Finally, we also outline a simple and low-cost countermeasure to render LoPher more secure.
Last updated:  2024-02-23
C'est très CHIC: A compact password-authenticated key exchange from lattice-based KEM
Afonso Arriaga, Manuel Barbosa, Stanislaw Jarecki, and Marjan Skrobot
Several Password Authenticated Key Exchange (PAKE) protocols have been recently proposed that leverage a Key-Encapsulation Mechanism (KEM) to create an efficient and easy-to-implement post-quantum secure PAKE. This line of work is driven by the intention of the National Institute of Standards and Technology (NIST) to soon standardize a lattice-based post-quantum KEM called $\mathsf{Kyber}$. In two recent works, Beguinet et al. (ACNS 2023) and Pan and Zeng (ASIACRYPT 2023) proposed generic compilers that transform KEM into PAKE, relying on an Ideal Cipher (IC) defined over a group. However, although IC on a group is often used in cryptographic protocols, special care must be taken to instantiate such objects in practice, especially when a low-entropy key is used. To address this concern, Dos Santos et al. (EUROCRYPT 2023) proposed a relaxation of the IC model under the Universal Composability (UC) framework called Half-Ideal Cipher (HIC). They demonstrate how to construct a UC-secure PAKE protocol, named $\mathsf{EKE\textrm{-}KEM}$, from a KEM and a modified 2-round Feistel construction called $\mathsf{m2F}$. Remarkably, $\mathsf{m2F}$ sidesteps the use of IC over a group, instead employing an IC defined over a fixed-length bitstring domain, which is easier to instantiate. In this paper, we introduce a novel PAKE protocol called $\mathsf{CHIC}$ that improves the communication and computation efficiency of $\mathsf{EKE\textrm{-}KEM}$. We do so by opening $\mathsf{m2F}$ construction in a white-box manner and avoiding the HIC abstraction in our analysis. We provide a detailed proof of the security of $\mathsf{CHIC}$ and establish precise security requirements for the underlying KEM, including one-wayness and anonymity of ciphertexts, and uniformity of public keys. Our analysis improves prior work by pinpointing the necessary and sufficient conditions for a tight security proof. Our findings extend to general KEM-based EKE-style protocols, under both game-based definitions (with Perfect Forward Secrecy) and UC PAKE definitions, and show that a passively secure KEM is not sufficient. In this respect, our results align with those of Pan and Zeng (ASIACRYPT 2023), but contradict the analyses of KEM-to-PAKE compilers by Beguinet et al. (ACNS 2023) and Dos Santos et al. (EUROCRYPT 2023). Finally, we provide an implementation of $\mathsf{CHIC}$, highlighting its minimal overhead compared to an underlying CCA-secure KEM - $\mathsf{Kyber}$. An interesting aspect of the implementation is that we reuse existing $\mathsf{Kyber}$ reference code to solve an open problem concerning instantiating the half-ideal cipher construction. Specifically, we reuse the rejection sampling procedure, originally designed for public-key compression, to implement the hash onto the public key space, which is a component in the half-ideal cipher. As of now, to the best of our knowledge, CHIC stands as the most efficient PAKE protocol from black-box KEM that offers rigorously proven UC security.
Last updated:  2024-02-23
SweetPAKE: Key exchange with decoy passwords
Afonso Arriaga, Peter Y.A. Ryan, and Marjan Skrobot
Decoy accounts are often used as an indicator of the compromise of sensitive data, such as password files. An attacker targeting only specific known-to-be-real accounts might, however, remain undetected. A more effective method proposed by Juels and Rivest at CCS'13 is to maintain additional fake passwords associated with each account. An attacker who gains access to the password file is unable to tell apart real passwords from fake passwords, and the attempted usage of a false password immediately sets off an alarm indicating a password file compromise. Password-Authenticated Key Exchange (PAKE) has long been recognised for its strong security guarantees when it comes to low-entropy password authentication and secure channel establishment, without having to rely on the setup of a PKI. In this paper, we introduce SweetPAKE, a new cryptographic primitive that offers the same security guarantees as PAKE for key exchange, while allowing clients with a single password to authenticate against servers with $n$ candidate passwords for that account and establish a secure channel. Additional security properties are identified and formalized to ensure that (a) high-entropy session keys are indistinguishable from random, even if later on the long-term secret password becomes corrupted (forward secrecy); (b) upon password file leakage, an adversary cannot tell apart real from fake passwords; and (c) a malicious client cannot trigger a false alarm. We capture these properties by extending well-established game-based definitions of PAKE. Furthermore, we propose a new UC formulation that comprehensively unifies both SweetPAKE (session key indistinguishability and sugarword indistinguishability) and a related notion known as Oblivious-PAKE. Finally, we propose efficient SweetPAKE and Oblivious-PAKE protocols constructed from Password-Authenticated Public-Key Encryption (PAPKE) that satisfy all the proposed notions.
Last updated:  2024-02-23
Concretely Efficient Lattice-based Polynomial Commitment from Standard Assumptions
Intak Hwang, Jinyeong Seo, and Yongsoo Song
Polynomial commitment is a crucial cryptographic primitive in constructing zkSNARKs. To date, most practical constructions are either insecure against quantum adversaries or lack homomorphic properties, which are useful in recursive compositions of SNARKs. Recently, lattice-based constructions from functional commitments have drawn attention for possessing all the desirable properties, but they yet lack concrete efficiency, and their extractability, which is essential for SNARKs, requires further analysis. In this paper, we propose a novel construction of an extractable polynomial commitment scheme based on standard lattice-based assumptions, which is transparent and publicly verifiable. Our polynomial commitment has a square-root proof size and verification complexity, but it provides concrete efficiency in proof size, proof generation, and verification. When compared with the recent code-based construction based on Brakedown (CRYPTO 23), our construction provides comparable performance in all aspects.
Last updated:  2024-02-23
Single-Input Functionality against a Dishonest Majority: Practical and Round-Optimal
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
In this work, we focus on Single-Input Functionality (SIF), which can be viewed as a special case of MPC. In a SIF, only one distinguished party called the dealer holds a private input. SIF allows the dealer to perform a computation task with other parties without revealing any additional information about the private input. SIF has diverse applications, including multiple-verifier zero-knowledge, and verifiable relation sharing. As our main contribution, we propose the first 1-round SIF protocol against a dishonest majority in the preprocessing model, which is highly efficient. The only prior work that achieves 1-round online communication assumes an honest majority and is only a feasibility result (Applebaum et al., Crypto 2022). We implement our protocols and conduct extensive experiments to illustrate the practical efficiency of our protocols. As our side product, we extend the subfield Vector Oblivious Linear Evaluation (sVOLE) into the multi-party setting, and propose a new primitive called multi-verifier sVOLE, which may be of independent interest.
Last updated:  2024-02-23
A Two-Layer Blockchain Sharding Protocol Leveraging Safety and Liveness for Enhanced Performance
Yibin Xu, Jingyi Zheng, Boris Düdder, Tijs Slaats, and Yongluan Zhou
Sharding is a critical technique that enhances the scalability of blockchain technology. However, existing protocols often assume adversarial nodes in a general term without considering the different types of attacks, which limits transaction throughput at runtime because attacks on liveness could be mitigated. There have been attempts to increase transaction throughput by separately handling the attacks; however, they have security vulnerabilities. This paper introduces Reticulum, a novel sharding protocol that overcomes these limitations and achieves enhanced scalability in a blockchain network without security vulnerabilities. Reticulum employs a two-phase design that dynamically adjusts transaction throughput based on runtime adversarial attacks on either or both liveness and safety. It consists of `control' and `process' shards in two layers corresponding to the two phases. Process shards are subsets of control shards, with each process shard expected to contain at least one honest node with high confidence. Conversely, control shards are expected to have a majority of honest nodes with high confidence. Reticulum leverages unanimous voting in the first phase to involve fewer nodes in accepting/rejecting a block, allowing more parallel process shards. The control shard finalizes the decision made in the first phase and serves as a lifeline to resolve disputes when they surface. Experiments demonstrate that the unique design of Reticulum empowers high transaction throughput and robustness in the face of different types of attacks in the network, making it superior to existing sharding protocols for blockchain networks.
Last updated:  2024-02-29
Single Pass Client-Preprocessing Private Information Retrieval
Arthur Lazzaretti and Charalampos Papamanthou
Recently, many works have considered Private Information Retrieval (PIR) with client-preprocessing: In this model a client and a server jointly run a preprocessing phase, after which client queries can run in time sublinear in the size of the database. In addition, such approaches store no additional bits per client at the server, allowing us to scale PIR to a large number of clients. In this work, we propose the first client-preprocessing PIR scheme with ``single pass'' client-preprocessing. In particular, our scheme is concretely optimal with respect to preprocessing, in the sense that it requires exactly one linear pass over the database. This is in stark contrast with existing works, whose preprocessing is proportional to $\lambda \cdot N$, where $\lambda$ is the security parameter (e.g., $\lambda=128$). Our approach yields a preprocessing speedup of 45-100$\times$ and a query speedup of up to 20$\times$ when compared to previous state-of-the-art schemes (e.g., Checklist, USENIX 2021), making preprocessing PIR more attractive for a myriad of use cases that are ``session-based''. In addition to fast preprocessing, our scheme features extremely fast updates (additions and edits)---in constant time. Previously, the best known approach for handling updates in client-preprocessing PIR had time complexity $O(\log N)$, while also adding a $\log N$ factor to the bandwidth. We implement our update algorithm and show concrete speedups of about 20$\times$ in update time when compared to the previous state-of-the-art updatable scheme (e.g., Checklist, USENIX 2021).
Last updated:  2024-04-24
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following: 1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts. 2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts. 3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
Last updated:  2024-02-22
Recommendations for the Design and Validation of a Physical True Random Number Generator Integrated in an Electronic Device
David Lubicz and Viktor Fischer
These Recommendations describe essential elements of the design of a secure physical true random number generator (PTRNG) integrated in an electronic device. Based on these elements, we describe and justify requirements for the design, validation and testing of PTRNGs, which are intended to guarantee the security of generators aimed at cryptographic applications.
Last updated:  2024-03-11
Diving Deep into the Preimage Security of AES-like Hashing
Shiyao Chen, Jian Guo, Eik List, Danping Shi, and Tianyu Zhang
Since the seminal works by Sasaki and Aoki, Meet-in-the-Middle (MITM) attacks have been recognized as an effective technique for preimage and collision attacks on hash functions. At Eurocrypt 2021, Bao et al. automated MITM attacks on AES-like hashing and improved upon the best manual result. The attack framework has been refined by subsequent works, yet it remains far from complete. This paper makes three key contributions dedicated to further generalizing the idea of MITM and refining the automatic model on AES-like hashing. (1) We introduce S-box linearization to MITM pseudo-preimage attacks on AES-like hashing. The technique suits perfectly with superposition states to preserve information after the S-box at an affordable cost. (2) We propose distributed initial structures, an extension of the original concept of initial states, that selects initial degrees of freedom in a more versatile manner to enlarge the search space. (3) We exploit the structural similarities between encryption and key schedule in constructions (e.g., Whirlpool and Streebog) to model propagations more accurately and avoid repeated costs. Armed with these innovative techniques, we further empower the MITM framework and improve the attack results on AES-like designs for preimage and collision. We obtain the first preimage attacks on 10-round AES-192, 10-round Rijndael-192/256, and 7.75-round Whirlpool, reduced time and/or memory complexities for preimage attacks on 5- and 6-round Whirlpool and 7.5- and 8.5-round Streebog, as well as improved collision attacks on 6- and 6.5-round Whirlpool.
Last updated:  2024-03-27
Divide and Surrender: Exploiting Variable Division Instruction Timing in HQC Key Recovery Attacks
Robin Leander Schröder, Stefan Gast, and Qian Guo
We uncover a critical side-channel vulnerability in the Hamming Quasi-Cyclic (HQC) round 4 optimized implementation arising due to the use of the modulo operator. In some cases, compilers optimize uses of the modulo operator with compile-time known divisors into constant-time Barrett reductions. However, this optimization is not guaranteed: for example, when a modulo operation is used in a loop the compiler may emit division (div) instructions which have variable execution time depending on the numerator. When the numerator depends on secret data, this may yield a timing side-channel. We name vulnerabilities of this kind Divide and Surrender (DaS) vulnerabilities. For processors supporting Simultaneous Multithreading (SMT) we propose a new approach called DIV-SMT which enables precisely measuring small division timing variations using scheduler and/or execution unit contention. We show that using only 100 such side-channel traces we can build a Plaintext-Checking (PC) oracle with above 90% accuracy. Our approach may also prove applicable to other instances of the DaS vulnerability, such as KyberSlash. We stress that exploitation with DIV-SMT requires co-location of the attacker on the same physical core as the victim. We then apply our methodology to HQC and present a novel way to recover HQC secret keys faster, achieving an 8-fold decrease in the number of idealized oracle queries when compared to previous approaches. Our new PC oracle attack uses our newly developed Zero Tester method to quickly determine whether an entire block of bits contains only zero-bits. The Zero Tester method enables the DIV-SMT powered attack on HQC-128 to complete in under 2 minutes on our targeted AMD Zen2 machine.
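The code pattern behind the vulnerability is worth seeing: a reduction by a compile-time-known divisor can be compiled either to a constant-time Barrett-style multiply-and-shift or to a variable-time div instruction. The sketch below only checks the arithmetic identity that compilers rely on (the divisor and shift are demo choices); the timing difference itself exists only at the ISA level.

```python
# Barrett-style constant-time reduction for a fixed divisor, as a compiler
# would emit it for `x % 3` on 32-bit inputs: one multiply and one shift,
# with data-independent latency, versus a variable-time div instruction.
D = 3                          # the compile-time-known divisor
SHIFT = 33
M = (1 << SHIFT) // D + 1      # precomputed reciprocal (0xAAAAAAAB for D = 3)

def barrett_mod(x):
    q = (x * M) >> SHIFT       # equals x // D for all 32-bit x
    return x - q * D

# Spot-check the identity across the 32-bit range, including the endpoint.
assert all(barrett_mod(x) == x % D
           for x in list(range(0, 2**32, 65537)) + [2**32 - 1])
```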
Last updated:  2024-02-21
New Models for the Cryptanalysis of ASCON
Mathieu Degré, Patrick Derbez, Lucie Lahaye, and André Schrottenloher
This paper focuses on the cryptanalysis of the ASCON family using automatic tools. We analyze two different problems with the goal of obtaining new models that are both simpler and less computationally demanding than previous work (all our models require only a small amount of code and run on regular desktop computers). The first problem is the search for Meet-in-the-middle attacks on reduced-round ASCON-Hash. Starting from the MILP modeling of Qin et al. (EUROCRYPT 2023 & ePrint 2023), we rephrase the problem in SAT, which significantly accelerates the solving time and removes the need for the ``weak diffusion structure'' heuristic. This allows us to reduce the memory complexity of Qin et al.'s attacks and to prove some optimality results. The second problem is the search for lower bounds on the probability of differential characteristics for the ASCON permutation. We introduce a lossy MILP encoding of the propagation rules based on the Hamming weight, in order to quickly find lower bounds comparable to the state of the art. We obtain a small improvement over the existing bound on 7 rounds.
Last updated:  2024-02-21
Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography
ABHISAR, Madhav Yadav, and Girish Mishra
This research extends Abadi and Andersen's exploration of neural networks that use secret keys to protect information in multi-agent systems. Focusing on enhancing confidentiality properties, we employ end-to-end adversarial training with the neural networks Alice, Bob, and Eve. Unlike prior work limited to 64-bit messages, our study spans message sizes from 4 to 1024 bits, with varying batch sizes and training steps. One novel aspect is training the model Bob until its error approaches a minimal value close to zero and examining the effect this has on the feasibility of the model. This research demonstrates the adaptability and scalability of neural networks for encryption and decryption across diverse scenarios, offering insights into their optimization potential for secure communication.
Last updated:  2024-02-21
Attacking ECDSA with Nonce Leakage by Lattice Sieving: Bridging the Gap with Fourier Analysis-based Attacks
Yiming Gao, Jinghui Wang, Honggang Hu, and Binang He
The Hidden Number Problem (HNP) has found extensive applications in side-channel attacks against cryptographic schemes, such as ECDSA and Diffie-Hellman. There are two primary algorithmic approaches to solving the HNP: lattice-based attacks and Fourier analysis-based attacks. Lattice-based attacks are more efficient and require fewer samples when sufficiently long substrings of the nonces are known. However, they face significant challenges when only a small fraction of the nonce is leaked, such as 1-bit leakage, and their performance degrades in the presence of errors. In this paper, we address an open question by introducing an algorithmic tradeoff that significantly bridges the gap between these two approaches. By introducing a parameter $x$ to modify Albrecht and Heninger's lattice, the lattice dimension is reduced by approximately $(\log_2 x)/l$, where $l$ is the number of leaked bits. We present a series of new methods, including the interval reduction algorithm, several predicates, and the pre-screening technique. Furthermore, we extend our algorithms to solve the HNP with erroneous input. Our attack outperforms existing state-of-the-art lattice-based attacks against ECDSA. We break several records, including 1-bit and less-than-1-bit leakage on a 160-bit curve, whereas the best previous lattice-based attack for 1-bit leakage was carried out only on a 112-bit curve.
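For orientation, here is one textbook uSVP embedding of the HNP (in the style of Boneh-Venkatesan with a Kannan embedding; this is not the authors' modified Albrecht-Heninger lattice, which additionally incorporates the tradeoff parameter $x$). Given samples $(t_i, a_i)$ such that $e_i = \alpha t_i - a_i \bmod p$ is smaller in absolute value than $p/2^{l+1}$, one reduces the lattice generated by the rows of
\[
B = \begin{pmatrix}
p & & & & \\
& \ddots & & & \\
& & p & & \\
t_1 & \cdots & t_m & \tfrac{1}{2^{l+1}} & \\
a_1 & \cdots & a_m & & \tfrac{p}{2^{l+1}}
\end{pmatrix},
\]
whose span contains the unusually short vector $(e_1, \dots, e_m, \alpha/2^{l+1}, -p/2^{l+1})$, so that solving the resulting uSVP instance reveals the hidden number $\alpha$.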
Last updated:  2024-02-21
An Efficient Hash Function for Imaginary Class Groups
Kostas Kryptos Chalkias, Jonas Lindstrøm, and Arnab Roy
This paper presents a new efficient hash function for imaginary class groups. Many class-group-based protocols, such as verifiable delay functions, timed commitments, and accumulators, rely on the existence of an efficient and secure hash function, but few concrete constructions are available in the literature, and the existing ones are too inefficient for practical use cases. Our approach, building on Wesolowski's initial scheme, achieves a 500-fold increase in computation speed, making it practical for real-world applications. This optimisation comes at the cost of a smaller image of the hash function, but we show that the image is still sufficiently large for the hash function to be secure. Additionally, our construction parallelizes almost linearly, which significantly enhances its computational efficiency on multi-processor systems and makes it highly suitable for modern computing environments.
Last updated:  2024-02-21
Multiplex: TBC-based Authenticated Encryption with Sponge-Like Rate
Thomas Peters, Yaobin Shen, and François-Xavier Standaert
Authenticated Encryption (AE) modes of operation based on Tweakable Block Ciphers (TBC) usually measure efficiency in the number of calls to the underlying primitive per message block. On the one hand, many existing solutions reach a primitive-rate of 1, meaning that each n-bit block of message asymptotically needs a single call to the TBC with output length n. On the other hand, while these modes look optimal in a blackbox setting, they become less attractive when leakage comes into play, since all these calls must then be equally well protected to maintain security. Leakage-resistant modes improve this situation by generating ephemeral keys every constant number of calls. However, rekeying is inherently suboptimal in primitive-rate, since a TBC call can only be used either to refresh a key or to encrypt a block. Even worse, existing solutions achieving almost n bits of security for n-bit secret keys reach a primitive-rate of at most 2/3. Hence the question: Can we design a highly-secure TBC-based rekeying mode with ``nearly optimal'' primitive-rate? We answer this question positively with Multiplex, a new mode that has primitive-rate d/(d+1) given a TBC with a dn-bit tweak. Multiplex achieves $n-\log_2(dn)$ bits of security for both (i) misuse-resilient CCA security in the blackbox setting and (ii) Ciphertext Integrity with Misuse-resistant and unbounded Leakage in encryption and decryption (CIML2). It also provides (iii) confidentiality with leakage up to the birthday bound. Furthermore, Multiplex can run d+1 calls in parallel in each iteration. The combination of these features gives a mode of operation that inherits most of the good implementation features and flexibility of a Duplex sponge -- therefore paving the way towards sound comparisons between TBC-based and permutation-based AE.
Last updated:  2024-02-21
Registered Attribute-Based Signature
Yijian Zhang, Jun Zhao, Ziqi Zhu, Junqing Gong, and Jie Chen
This paper introduces the notion of registered attribute-based signature (registered ABS). Distinctly different from classical attribute-based signature (ABS), registered ABS allows any user to generate their own public/secret key pair and register it with the system. The key curator, a fully transparent entity that retains no secrets, is critical to keeping the system running. Our results can be summarized as follows. - This paper provides the first definition of registered ABS. - This paper presents the first generic fully secure registered ABS over prime-order groups from the $k$-Lin assumption in the standard model, which supports various classes of predicates. - This paper gives the first concrete registered ABS scheme for arithmetic branching programs (ABP), which achieves full security in the standard model. Technically, our registered ABS is inspired by the blueprint of Okamoto and Takashima [PKC'11]. We convert the prime-order registered attribute-based encryption (registered ABE) scheme of Zhu et al. [ASIACRYPT'23] via predicate encoding into registered ABS by employing the technique of re-randomization with specialized delegation, while employing a different dual-system method that accounts for the registration property. Prior to our work, Okamoto and Takashima [PKC'13] addressed the key-escrow issue, but only against a weak adversary in the random oracle model.
Last updated:  2024-02-21
IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials via ZK-JSON
Shuhao Zheng, Zonglun Li, Junliang Luo, Ziyue Xin, and Xue Liu
Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose ZK-JSON, a unique R1CS circuit design tailored for IDE over generic JSON documents. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision.
Last updated:  2024-02-20
Quantum Pseudorandomness Cannot Be Shrunk In a Black-Box Way
Samuel Bouaziz--Ermann and Garazi Muguruza
Pseudorandom Quantum States (PRS) were introduced by Ji, Liu and Song as a quantum analogue of Pseudorandom Generators. They are ensembles of states that are efficiently computable but computationally indistinguishable from Haar-random states. Subsequent works have shown that some cryptographic primitives can be constructed from PRSs. Moreover, recent classical and quantum oracle separations of PRS from One-Way Functions strengthen the interest in a purely quantum alternative building block for quantum cryptography, potentially weaker than OWFs. However, our limited understanding of how to extend or shrink the number of qubits of a PRS output still makes it difficult to reproduce some of the classical proof techniques and results. Short-PRSs, that is, PRSs with logarithmic-size output, have been introduced in the literature along with cryptographic applications, but we still do not know how they relate to PRSs. Here we answer half of this question, by showing that it is not possible to shrink the output of a PRS from polynomial to logarithmic qubit length while still preserving the pseudorandomness property, in a relativized way. More precisely, we show that relative to Kretschmer's quantum oracle (TQC 2021), short-PRSs cannot exist (while PRSs do exist, as shown by Kretschmer's work).
Last updated:  2024-04-22
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-02-20
SoK: Parameterization of Fault Adversary Models - Connecting Theory and Practice
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Since the first fault attack by Boneh et al. in 1997, various physical fault injection mechanisms have been explored to induce errors in electronic systems. Fault analysis methods exploiting these errors have subsequently been studied and successfully used to attack many cryptographic implementations. This poses a significant challenge to the secure implementation of cryptographic algorithms. To address this, numerous countermeasures have been proposed. Nevertheless, these countermeasures are primarily designed to protect against the particular assumptions made by the fault analysis methods. These assumptions, however, cover only a limited range of the capabilities inherent to physical fault injection mechanisms. In this paper, we narrow our focus to fault attacks and countermeasures specific to ASICs, and introduce a novel parameterized fault adversary model capturing an adversary's control over an ASIC. We systematically map (a) the physical fault injection mechanisms, (b) the adversary models assumed in fault analysis, and (c) the adversary models used to design countermeasures into our introduced model. This model forms the basis of a comprehensive survey covering a broad spectrum of fault attacks and countermeasures within symmetric-key cryptography. Our investigation further highlights a notable misalignment between the adversary models assumed in countermeasures, those assumed in fault attacks, and the intrinsic capabilities of the physical fault injection mechanisms. Through this study, we emphasize the need to reevaluate existing fault adversary models and advocate for the development of a unified model.
Last updated:  2024-02-27
A generic algorithm for efficient key recovery in differential attacks – and its associated tool
Christina Boura, Nicolas David, Patrick Derbez, Rachelle Heim Boissier, and María Naya-Plasencia
Differential cryptanalysis is an old and powerful attack against block ciphers. While different techniques have been introduced throughout the years to improve the complexity of this attack, the key recovery phase remains a tedious and error-prone procedure. In this work, we propose a new algorithm, and its associated tool, that, given a distinguisher, outputs an efficient key-guessing strategy. Our tool can be applied to SPN ciphers whose linear layer consists of a bit-permutation and whose key schedule is linear or almost linear. It can be used not only to help cryptanalysts find the best differential attack on a given cipher but also to assist designers in their security analysis. We applied our tool to four targets: RECTANGLE, PRESENT-80, SPEEDY-7-192 and GIFT-64. We extend the previous best attack on RECTANGLE-128 by one round and the previous best differential attack against PRESENT-80 by 2 rounds. We improve a previous key recovery step in an attack against SPEEDY and present more efficient key recovery strategies for RECTANGLE-80 and GIFT. Our tool outputs the results within a second for most targets.
Last updated:  2024-02-20
CAPABARA: A Combined Attack on CAPA
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Physical attacks pose a substantial threat to the secure implementation of cryptographic algorithms. While considerable research efforts are dedicated to protecting against passive physical attacks (e.g., side-channel analysis (SCA)), the landscape of protection against other types of physical attacks remains a challenge. Fault attacks (FA), though attracting growing attention in research, still lack the prevalence of provably secure designs when compared to SCA. The realm of combined attacks, which leverage the capabilities of both SCA and FA adversaries, introduces powerful adversarial models, rendering protection against them challenging. This challenge has consequently led to a relatively unexplored area of research, resulting in a notable gap in understanding and efficiently protecting against combined attacks. The CAPA countermeasure, published at CRYPTO 2018, addresses this challenge with a robust adversarial model that goes beyond conventional SCA and FA adversarial models. Drawing inspiration from the principles of Multiparty Computation (MPC), CAPA claims security against higher-order SCA, higher-order fault attacks, and their combination. In this work, we present a combined attack that breaks CAPA within the constraints of its assumed adversarial model. In response, we propose potential fixes to the design of CAPA that increase the complexity of the proposed attack, although not provably thwarting it. With this presented combined attack, we highlight the difficulty of effectively protecting against combined attacks.
Last updated:  2024-02-20
Efficient Zero-Knowledge Arguments and Digital Signatures via Sharing Conversion in the Head
Jules Maire and Damien Vergnaud
We present a novel technique within the MPC-in-the-Head framework, aiming to design efficient zero-knowledge protocols and digital signature schemes. The technique allows for the simultaneous use of additive and multiplicative sharings of secret information, enabling efficient proofs of linear and multiplicative relations. The applications of our technique are manifold. It is first applied to construct zero-knowledge arguments of knowledge for the Double Discrete Logarithm Problem (DDLP). The resulting protocol achieves improved communication complexity without compromising efficiency. We also propose a new zero-knowledge argument of knowledge for the Permuted Kernel Problem. Additionally, we suggest a short (candidate) post-quantum digital signature scheme constructed from a new one-way function based on simple polynomials known as fewnomials. This scheme offers simplicity and ease of implementation. Finally, we present two additional results inspired by this work but using alternative approaches. We propose a zero-knowledge argument of knowledge of an RSA plaintext for a small public exponent that significantly improves the state-of-the-art communication complexity, and we detail a more efficient forward-backward construction for the DDLP.
Last updated:  2024-02-20
Mirrored Commitment: Fixing ``Randomized Partial Checking'' and Applications
Paweł Lorek, Moti Yung, and Filip Zagórski
Randomized Partial Checking (RPC) was proposed by Jakobsson, Juels, and Rivest and attracted attention as an efficient method of verifying the correctness of the mixing process in numerous applied scenarios. In fact, RPC is a building block for many electronic voting schemes, including Prêt à Voter, Civitas, and Scantegrity II, as well as voting systems used in real-world elections (e.g., in Australia). Mixing is also used in anonymous transfers of cryptocurrencies. However, a series of works revealed subtle issues with the analyses behind RPC. First, the actual security level of the RPC protocol is far from the claimed bounds: the probability of successfully manipulating $k$ votes is $(\frac{3}{4})^k$ instead of the claimed $\frac{1}{2^k}$. This difference, in turn, negatively affects actual implementations of the notion within existing election systems, since concretely implemented procedures of a given length were directly based on this parameter. Further, the claimed privacy guarantee that a constant number of mix-servers suffices also turned out to be incorrect. We conclude from the above that these analyses of the mixing process are not trivial. In this paper, we review the relevant attacks, and we present Mirrored-RPC -- a fix to RPC based on ``mirrored commitment'' which makes it optimally secure; namely, the probability of successfully manipulating $k$ votes is $\frac{1}{2^k}$. Then, we present an analysis of the privacy level of both RPC and mRPC. We show that for $n$ messages, the number of mix-servers (rounds) needed to be $\varepsilon$-close to the uniform distribution in total variation distance is lower bounded by: \[ r(n, \varepsilon) \geq \log_{2}\binom{n}{2}/\varepsilon. \] This proof of privacy, in turn, gives insights into the anonymity of various cryptocurrencies (e.g., Zerocash) using anonymizing pools. If a random fraction $q$ of $n$ existing coins is mixed (in each block), then to achieve full anonymity, the number of blocks one needs to run the protocol for is: \[ rb(n, q, \varepsilon) \geq - \frac{\log n + \log (n-1) - \log (2\varepsilon)}{\log(1-q^2)}. \]
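The quoted bounds are easy to evaluate numerically; here is a small Python script (ours) implementing the abstract's formulas verbatim:

```python
import math

# Back-of-the-envelope evaluation (ours) of the bounds quoted above.

def manipulation_prob(k: int):
    """Chance of manipulating k votes undetected: RPC vs mRPC."""
    return (3 / 4) ** k, (1 / 2) ** k

def min_rounds(n: int, eps: float) -> float:
    """r(n, eps) >= log2(C(n, 2)) / eps mix rounds for eps-closeness."""
    return math.log2(math.comb(n, 2)) / eps

def min_blocks(n: int, q: float, eps: float) -> float:
    """rb(n, q, eps) >= -(ln n + ln(n-1) - ln(2 eps)) / ln(1 - q^2)."""
    return -(math.log(n) + math.log(n - 1) - math.log(2 * eps)) / math.log(1 - q ** 2)

print(manipulation_prob(20))            # (3/4)^20 ~ 0.3%, vs 2^-20 ~ 1e-6
print(min_rounds(1_000_000, 0.25))      # ~ 156 rounds
print(min_blocks(1_000_000, 0.1, 0.25)) # ~ 2818 blocks
```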
Last updated:  2024-02-20
Practical Improvements to Statistical Ineffective Fault Attacks
Barış Ege, Bob Swinkels, Dilara Toprakhisar, and Praveen Kumar Vadnala
Statistical Fault Attacks (SFA), introduced by Fuhr et al., exploit the statistical bias resulting from injected faults. Unlike prior fault analysis attacks, which require both faulty and correct ciphertexts under the same key, SFA leverages only faulty ciphertexts. At CHES 2018, more powerful attacks called Statistical Ineffective Fault Attacks (SIFA) were proposed. In contrast to previous fault attacks that utilize faulty ciphertexts, SIFA exploits the distribution of the intermediate values leading to fault-free ciphertexts. As a result, SIFA attacks were shown to be effective even in the presence of widely used fault injection countermeasures based on detection and infection. In this work, we build upon the core idea of SIFA and provide two main practical improvements over the previously proposed analysis methods. Firstly, we show how to perform SIFA from the input side, which, in contrast to the original SIFA, requires injecting faults in the earlier rounds of an encryption or decryption operation. If we consider the start of the operation as the trigger for fault injection, the cumulative jitter in the first few rounds of a cipher is much lower than in the last rounds. Hence, performing the attack in the first or second round requires a narrower parameter range for fault injection, and hence fewer fault injection attempts, to recover the secret key. Secondly, in comparison to the straightforward SIFA approach of guessing 32 bits at a time, we propose a chosen-input approach that reduces the guessing effort to 16 bits at a time. This decreases the key search space for full key recovery of an AES-128 implementation from $2^{34}$ to $2^{19}$.
Last updated:  2024-02-20
Toward Malicious Constant-Rate 2PC via Arithmetic Garbling
Carmit Hazay and Yibin Yang
A recent work by Ball, Li, Lin, and Liu [Eurocrypt'23] presented a new instantiation of the arithmetic garbling paradigm introduced by Applebaum, Ishai, and Kushilevitz [FOCS'11]. In particular, Ball et al.'s garbling scheme is the first constant-rate garbled circuit over large enough bounded integer computations, yielding the first constant-round constant-rate secure two-party computation (2PC) over bounded integer computations in the presence of semi-honest adversaries. The main source of difficulty in lifting the security of garbling-scheme-based protocols to the malicious setting lies in proving the correctness of the underlying garbling scheme. In this work, we analyze the security of Ball et al.'s scheme in the presence of malicious attacks. - We demonstrate an overflow attack, which is inevitable in this computational model, even if the garbled circuit is fully correct. Our attack works by defining an adversary, corrupting either the garbler or the evaluator, that chooses a bad input and causes the computation to overflow, thus leaking information about the honest party's input. By utilizing overflow attacks, we show that $1$-bit leakage is necessary for achieving security against a malicious garbler, ruling out the possibility of full malicious security in this model. We further demonstrate a wider range of overflow attacks against a malicious evaluator with more than $1$ bit of leakage. - We boost the security level of Ball et al.'s scheme by utilizing two variants of Vector Oblivious Linear Evaluation, denoted VOLEc and aVOLE. We present the first constant-round constant-rate 2PC protocol over bounded integer computations, in the presence of a malicious garbler with $1$-bit leakage and a semi-honest evaluator, in the {VOLEc,aVOLE}-hybrid model, black-box in the underlying group and ring. Compared to the semi-honest variant, our protocol incurs only a constant factor overhead, both in computation and communication. The constant-round and constant-rate properties hold even in the plain model.
Last updated:  2024-02-19
A Concrete Analysis of Wagner's $k$-List Algorithm over $\mathbb{Z}_p$
Antoine Joux, Hunter Kippen, and Julian Loss
Since its introduction by Wagner (CRYPTO `02), the $k$-list algorithm has found significant utility in cryptanalysis. One important application is in computing forgeries on several interactive signature schemes that implicitly rely on the hardness of the ROS problem formulated by Schnorr (ICICS `01). The current best attack strategy for these schemes relies on the conjectured runtime of the $k$-list algorithm over $\mathbb{Z}_p$. The tightest known analysis of Wagner's algorithm over $\mathbb{Z}_p$ is due to Shallue (ANTS `08). However, it hides large polynomial factors and leaves a gap with respect to desirable concrete parameters for the attack. In this work, we develop a degraded version of the $k$-list algorithm which provably enforces the heuristic invariants of Wagner's original algorithm. In the process, we devise and analyze a new list merge procedure that we dub the interval merge. We give a thorough analysis of the runtime and success probability of our degraded algorithm, and show that it beats the runtime projected by Shallue's analysis for parameters relevant to the generalized ROS attack of Benhamouda et al. (EUROCRYPT `21). For a $256$-bit prime $p$ and $k = 8$, our degraded $k$-list algorithm runs in time $\approx 2^{70.4}$, while Shallue's analysis states that Wagner's original algorithm runs in time $\approx 2^{98.3}$.
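For readers new to the technique, here is a toy Python sketch (ours) of Wagner's classical 4-list algorithm over $\mathbb{Z}_p$ with a sort-and-bisect interval merge; the paper's degraded variant and its interval merge procedure refine this idea, and real attack parameters ($256$-bit $p$, $k = 8$) are vastly larger:

```python
import random
from bisect import bisect_left
from collections import defaultdict

# Toy sketch (ours) of Wagner's classical 4-list algorithm over Z_p.

def merge_interval(L1, L2, p, B):
    """Return all (a, b) in L1 x L2 with (a + b) mod p in [0, B)."""
    S = sorted(L2)
    out = []
    for a in L1:
        lo, hi = (-a) % p, (B - a) % p       # admissible range for b, mod p
        for r_lo, r_hi in ([(lo, hi)] if lo < hi else [(lo, p), (0, hi)]):
            out.extend((a, b) for b in S[bisect_left(S, r_lo):bisect_left(S, r_hi)])
    return out

def four_list(p, B, lists):
    """Find x1..x4, one per list, with x1 + x2 + x3 + x4 = 0 (mod p)."""
    neg = [[(-x) % p for x in lists[i]] for i in (2, 3)]
    by_sum = defaultdict(list)
    for a, b in merge_interval(lists[0], lists[1], p, B):
        by_sum[(a + b) % p].append((a, b))
    for c, d in merge_interval(neg[0], neg[1], p, B):
        for a, b in by_sum.get((c + d) % p, []):
            return a, b, (-c) % p, (-d) % p  # x3 + x4 = -(x1 + x2) mod p
    return None

random.seed(1)
p, B = (1 << 20) - 3, 1 << 10                # toy modulus and interval width
lists = [[random.randrange(p) for _ in range(2 * B)] for _ in range(4)]
sol = four_list(p, B, lists)
assert sol and sum(sol) % p == 0
print("x1 + x2 + x3 + x4 = 0 (mod p):", sol)
```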
Last updated:  2024-02-19
Polynomial Commitments from Lattices: Post-Quantum Security, Fast Verification and Transparent Setup
Valerio Cini, Giulio Malavolta, Ngoc Khanh Nguyen, and Hoeteck Wee
A polynomial commitment scheme allows a prover to commit to a polynomial $f \in \mathcal{R}[X]$ of degree $L$, and later prove that the committed function was correctly evaluated at a specified point $x$; in other words, that $f(x)=u$ for public $x,u \in\mathcal{R}$. Most applications of polynomial commitments, e.g. succinct non-interactive arguments of knowledge (SNARKs), require that (i) both the commitment and evaluation proof are succinct (i.e., polylogarithmic in the degree $L$), with the latter being efficiently verifiable, and (ii) no pre-processing step is allowed. Surprisingly, as far as plausibly quantum-safe polynomial commitments are concerned, the currently most efficient constructions rely only on weak cryptographic assumptions, such as the security of hash functions. Indeed, despite making use of the underlying algebraic structure, prior lattice-based polynomial commitments still seem to lag far behind the hash-based ones. Moreover, the security of the aforementioned lattice constructions against quantum adversaries was never formally discussed. In this work, we bridge the gap and propose the first (asymptotically and concretely) efficient lattice-based polynomial commitment with transparent setup and post-quantum security. Our interactive variant relies on the standard (Module-)SIS problem and can be made non-interactive in the random oracle model using the Fiat-Shamir transformation. In addition, we equip the scheme with a knowledge soundness proof against quantum adversaries, which can be of independent interest. In terms of concrete efficiency, for $L=2^{20}$ our scheme yields proofs of size $2\times$ smaller than the hash-based \textsf{FRI} commitment (Block et al., Asiacrypt 2023), and $70\times$ smaller than the very recent lattice-based construction by Albrecht et al. (Eurocrypt 2024).
Last updated:  2024-02-19
HARTS: High-Threshold, Adaptively Secure, and Robust Threshold Schnorr Signatures
Renas Bacho, Julian Loss, Gilad Stern, and Benedikt Wagner
Threshold variants of the Schnorr signature scheme have recently been at the center of attention due to their applications to Bitcoin, Ethereum, and other cryptocurrencies. However, existing constructions for threshold Schnorr signatures among a set of $n$ parties with corruption threshold $t_c$ suffer from at least one of the following drawbacks: (i) security only against static (i.e., non-adaptive) adversaries, (ii) cubic or higher communication cost to generate a single signature, (iii) strong synchrony assumptions on the network, or (iv) $t_c+1$ parties are sufficient to generate a signature, i.e., the corruption threshold of the scheme equals its reconstruction threshold. Especially (iv) turns out to be a severe limitation for many asynchronous real-world applications where $t_c < n/3$ is necessary to maintain liveness, but a higher signing threshold of $n-t_c$ is needed. A recent scheme, ROAST, proposed by Ruffing et al. (ACM CCS `22), addresses (iii) and (iv), but still falls short of obtaining subcubic complexity and adaptive security. In this work, we present HARTS, the first threshold Schnorr signature scheme to incorporate all these desiderata. More concretely: - HARTS is adaptively secure and remains fully secure and operational even under asynchronous network conditions in the presence of up to $t_c < n/3$ malicious parties. This is optimal. - HARTS outputs a Schnorr signature of size $\lambda$ with a near-optimal amortized communication cost of $O(\lambda n^2 \log{n})$ bits and $O(1)$ rounds per signature. - HARTS is a high-threshold scheme: no fewer than $t_r+1$ signature shares can be combined to yield a full signature, where $t_r\geq 2n/3 > 2t_c$. This is optimal. We prove our result in a modular fashion in the algebraic group model. At the core of our construction, we design a new, simple, and adaptively secure high-threshold AVSS scheme, which may be of independent interest.
Last updated:  2024-03-13
Polynomial-Time Key-Recovery Attack on the ${\tt NIST}$ Specification of ${\tt PROV}$
River Moreira Ferreira and Ludovic Perret
In this paper, we present an efficient attack against ${\tt PROV}$, a recent variant of the popular Unbalanced Oil and Vinegar (${\tt UOV}$) multivariate signature scheme, that has been submitted to the ongoing ${\tt NIST}$ standardization process for additional post-quantum signature schemes. A notable feature of ${\tt PROV}$ is its proof of security, namely, existential unforgeability under a chosen-message attack (${\tt EUF-CMA}$), assuming the hardness of solving the system formed by the public-key non-linear equations. We present a polynomial-time key-recovery attack against the first specification of ${\tt PROV}$ (v$1.0$). To do so, we remark that a small fraction of the ${\tt PROV}$ secret key is leaked during the signature process. Adapting and extending previous works on basic ${\tt UOV}$, we show that the entire secret key can then be recovered from such a small fraction in polynomial time. This leads to an efficient attack against ${\tt PROV}$ that we validated in practice. For all the security parameters suggested by the authors of ${\tt PROV}$, our attack recovers the secret key in at most $8$ seconds. We conclude the paper by discussing the apparent mismatch between such a practical attack and the theoretical security claimed by the ${\tt PROV}$ designers. Our attack is not structural but exploits the fact that the current specification of ${\tt PROV}$ differs from the required security model. A simple countermeasure makes ${\tt PROV}$ immune to the attack presented here and led the designers to update the specification of ${\tt PROV}$ (v$1.1$).
Last updated:  2024-02-19
Circle STARKs
Ulrich Haböck, David Levit, and Shahar Papini
Traditional STARKs require a cyclic group of smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Part I and II) introduced a way to make efficient STARKs for any finite field by using a cyclic group of an elliptic curve. We show a simpler construction along the lines of ECFFT, over the circle curve $x^2 + y^2 = 1$. When $p + 1$ is divisible by a large power of $2$, this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime $p = 2^{31} - 1$, which was recently advertised in IACR ePrint 2023/824, our preliminary benchmarks indicate a speed-up by a factor of $1.4$ compared to a traditional STARK using the BabyBear prime $p = 2^{31} - 2^{27} + 1$.
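For concreteness, the points of the circle curve over $\mathbb{F}_p$ form a cyclic group of order $p+1$ under the angle-addition law $(x_1,y_1)\oplus(x_2,y_2) = (x_1x_2 - y_1y_2, x_1y_2 + y_1x_2)$, and for the Mersenne prime that order is exactly $2^{31}$; a quick Python check (ours):

```python
# Quick check (ours): points on x^2 + y^2 = 1 over F_p form a cyclic group
# of order p + 1 under angle addition; for the Mersenne prime p = 2^31 - 1
# this order is exactly 2^31, the smooth two-power structure STARKs need.

p = (1 << 31) - 1                      # Mersenne prime, p = 3 (mod 4)

def add(P, Q):
    """Group law inherited from (cos, sin) angle addition."""
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + y1 * x2) % p)

def sqrt_mod(a):
    """Square root mod p for p = 3 (mod 4), or None if a is a non-residue."""
    r = pow(a, (p + 1) // 4, p)
    return r if r * r % p == a else None

def point_with_x(x):
    y = sqrt_mod((1 - x * x) % p)
    return (x, y) if y is not None else None

P = next(pt for pt in map(point_with_x, range(2, 50)) if pt)
Q = P
for _ in range(31):                    # compute 2^31 * P by repeated doubling
    Q = add(Q, Q)
assert Q == (1, 0)                     # (1, 0) is the identity, "angle zero"
print("2^31 * P is the identity; sample point:", P)
```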
Last updated:  2024-02-19
Fault Attacks on UOV and Rainbow
Juliane Krämer and Mirjam Loiero
Multivariate cryptography is one of the main candidates for building post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. The signature schemes UOV and Rainbow are two of the most promising and best studied multivariate schemes, which have proven secure for more than a decade. However, so far the security of multivariate signature schemes against physical attacks has not been appropriately assessed. Towards a better understanding of the physical security of multivariate signature schemes, this paper presents fault attacks against SingleField schemes, especially UOV and Rainbow. Our analysis shows that although promising attack vectors exist, multivariate signature schemes inherently offer good protection against fault attacks.
Last updated:  2024-02-19
Reduce and Prange: Revisiting Prange's Information Set Decoding for LPN and RSD
Jiseung Kim and Changmin Lee
The learning parity with noise (LPN) problem has been widely utilized in classical cryptography to construct cryptographic primitives. Various variants of LPN have been proposed, including LPN over large fields and LPN with regular noise, depending on the underlying space and the noise regularity, and these variants have proven useful in constructing cryptographic primitives. We propose an improvement to the Gaussian elimination attack, also known as Prange's information set decoding algorithm, for solving the LPN problem. Contrary to prevailing knowledge, we find that the Gaussian elimination attack is highly competitive and currently the best method for solving LPN over large fields. Our improvement involves applying partial Gaussian elimination repeatedly, rather than the whole Gaussian algorithm, which we have named the ``Reduce and Prange'' algorithm. Moreover, we provide two applications of the Reduce and Prange algorithm: a hybrid of our algorithm with Bernstein, Lange, and Peters's algorithm from PQCrypto'08, and a Reduce and Prange algorithm for LPN with regular noise. Finally, we provide a concrete estimation of the bit-security of LPN variants using our Reduce and Prange frameworks. Our results show that the bit-security of LPN over $\mathbb{F}_q$ is reduced by 5-11 bits when $\log q = 128$ compared to the previous analysis by Liu et al. (to appear at Eurocrypt'24). Furthermore, we show that our algorithm outperforms recent work by Briaud and Øygard (Eurocrypt'23) and by Liu et al. for certain parameters, reducing the bit-security of LPN with regular noise by 5-28 bits.
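As background, here is a toy Python sketch (ours) of the plain Prange / Gaussian elimination attack on binary LPN; the paper's Reduce and Prange algorithm instead applies partial Gaussian elimination repeatedly and targets large fields and regular noise, so treat this only as the baseline being improved:

```python
import numpy as np

# Toy sketch (ours) of Prange's information set decoding against LPN:
# hope the n chosen samples are noise-free, solve the linear system over
# GF(2), and test the candidate secret against all samples.

rng = np.random.default_rng(0)
n, m, tau = 16, 600, 0.05                 # tiny parameters for illustration

s = rng.integers(0, 2, n)
A = rng.integers(0, 2, (m, n))
e = (rng.random(m) < tau).astype(np.int64)
b = (A @ s + e) % 2

def solve_gf2(M, v):
    """Solve M x = v over GF(2); return x, or None if M is singular."""
    M, v = M.copy() % 2, v.copy() % 2
    nn = M.shape[1]
    row, where = 0, [-1] * nn
    for col in range(nn):
        piv = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if piv is None:
            return None
        M[[row, piv]], v[[row, piv]] = M[[piv, row]], v[[piv, row]]
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] ^= M[row]; v[r] ^= v[row]
        where[col] = row
        row += 1
    return np.array([v[where[c]] for c in range(nn)])

for attempt in range(1000):
    idx = rng.choice(m, n, replace=False)  # guess an error-free information set
    cand = solve_gf2(A[idx], b[idx])
    if cand is None:
        continue
    # Accept if the residual weight matches the noise rate, not m/2.
    if ((A @ cand + b) % 2).sum() < m * (tau + 0.5) / 2:
        print(f"recovered after {attempt + 1} trials:", np.array_equal(cand, s))
        break
```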
Last updated:  2024-02-22
The Multi-user Constrained PRF Security of Generalized GGM Trees for MPC and Hierarchical Wallets
Chun Guo, Xiao Wang, Xiang Xie, and Yu Yu
Multi-user (mu) security considers large-scale attackers that, given access to a number of cryptosystem instances, attempt to compromise at least one of them. We initiate the study of the mu security of the so-called GGM tree that stems from the PRG-to-PRF transformation of Goldreich, Goldwasser, and Micali, with the goal of providing a reference for its recently popularized use in applied cryptography. We propose a generalized model for GGM trees and analyze its mu prefix-constrained PRF security in the random oracle model. Our model allows us to derive concrete bounds and improvements for various protocols, and we showcase it on the Bitcoin Improvement Proposal standard Bip32 hierarchical wallets and on function secret sharing (FSS) protocols. In both scenarios, we propose improvements with better performance and concrete security bounds at the same time. Compared with the state-of-the-art designs, our SHACAL3- and Keccak-p-based Bip32 variants reduce the communication cost of MPC-based implementations by 73.3%-93.8%, while our AES-based FSS substantially improves mu security while reducing computations by 50%.
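For reference, here is the classical GGM tree underlying the generalized model, as a minimal Python sketch (ours) with SHA-256 standing in for the length-doubling PRG; a prefix-constrained key is simply an internal node of the tree:

```python
import hashlib

# Minimal sketch (ours) of the GGM PRG-to-PRF tree: a length-doubling
# PRG G(k) = (G0(k), G1(k)) is applied along the bit path of the input x.
# SHA-256 stands in for the PRG here; the paper's generalized model and
# its Bip32 / FSS instantiations build on (and go beyond) this shape.

def prg(key: bytes) -> tuple:
    """Expand a 32-byte key into two 32-byte children."""
    return (hashlib.sha256(b"\x00" + key).digest(),
            hashlib.sha256(b"\x01" + key).digest())

def ggm_prf(key: bytes, x: int, depth: int) -> bytes:
    """Evaluate the GGM PRF at input x in [0, 2^depth)."""
    node = key
    for i in reversed(range(depth)):   # walk from the root, MSB first
        node = prg(node)[(x >> i) & 1]
    return node

# The internal node at a prefix acts as a prefix-constrained key: it can
# evaluate every input starting with that prefix, and nothing else.
master = bytes(32)
print(ggm_prf(master, 0b1011, depth=4).hex())
```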
Last updated:  2024-02-19
Amortized Large Look-up Table Evaluation with Multivariate Polynomials for Homomorphic Encryption
Heewon Chung, Hyojun Kim, Young-Sik Kim, and Yongwoo Lee
We present a new method for efficient look-up table (LUT) evaluation in homomorphic encryption (HE), based on Ring-LWE-based HE schemes, including both integer-message schemes such as Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV), and complex-number-message schemes like the Cheon-Kim-Kim-Song (CKKS) scheme. Our approach encodes bit streams into codewords and translates LUTs into low-degree multivariate polynomials, allowing for the simultaneous evaluation of multiple independent LUTs with minimal overhead. To mitigate noise accumulation in the CKKS scheme, we propose a novel noise-reduction technique, accompanied by a proof that it asymptotically decreases noise levels. We demonstrate our algorithm's effectiveness through a proof-of-concept implementation, showcasing significant efficiency gains, including 0.029 ms per-slot evaluation of 8-input, 8-output LUTs and a 280 ms amortized decryption time for AES-128 using CKKS on a single GPU. This work not only advances LUT evaluation in HE but also introduces a transciphering method for the CKKS scheme utilizing standard symmetric-key encryption, bridging the gap between discrete bit strings and numerical data.
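To illustrate the basic step of translating a LUT into a low-degree multivariate polynomial, here is a minimal Python sketch (ours) using plain multilinear interpolation over input bits; the paper's codeword encoding, slot packing, and noise-reduction technique go well beyond this:

```python
from itertools import product

# Minimal sketch (ours): a k-input LUT as a multilinear polynomial via
# f(x) = sum_a table[a] * prod_i (x_i if a_i = 1 else 1 - x_i).

def lut_to_multilinear(table):
    """table maps k-bit tuples to outputs; returns a polynomial evaluator."""
    def indicator(a, x):
        t = 1
        for ai, xi in zip(a, x):
            t *= xi if ai else (1 - xi)
        return t
    return lambda x: sum(v * indicator(a, x) for a, v in table.items())

# Example: the 3-input XOR table becomes a degree-3 polynomial in x1, x2, x3.
xor3 = {bits: bits[0] ^ bits[1] ^ bits[2] for bits in product((0, 1), repeat=3)}
f = lut_to_multilinear(xor3)
assert all(f(b) == xor3[b] for b in product((0, 1), repeat=3))
print("multilinear polynomial agrees with the LUT on all 8 inputs")
```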
Last updated:  2024-02-26
Information-Theoretic Homomorphic Encryption and 2-Party Computation
Jonathan Trostle
Homomorphic encryption has been an active area of research since Gentry's breakthrough results on fully homomorphic encryption. We present secret-key somewhat homomorphic schemes where client privacy is information-theoretic (the server can be computationally unbounded). As the group order in our schemes gets larger, the entropy approaches maximal entropy (perfect security). Our basic scheme is additively somewhat homomorphic. In one scheme, the server handles circuit multiplication gates by returning the multiplicands to the client, which performs the multiplication and sends back the encrypted product. We give a 2-party protocol that also incorporates server inputs, where client privacy is information-theoretic. Server privacy is not information-theoretic, but rather depends on the hardness of the subset sum problem. Correctness for the server in the malicious model can be verified by a third party, with client and server privacy information-theoretically protected from the verifier. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to grow logarithmically as circuit size grows.
Last updated:  2024-02-26
Deep Learning Based Analysis of Key Scheduling Algorithm of Advanced Ciphers
Narendra Kumar Patel and Hemraj Shobharam Lamkuche
The advancements in information technology have made the Advanced Encryption Standard (AES) and the PRESENT cipher indispensable in ensuring data security and facilitating private transactions. AES is renowned for its flexibility and widespread use in various fields, while the PRESENT cipher excels in lightweight cryptographic situations. This paper delves into a dual examination of the Key Scheduling Algorithms (KSAs) of AES and the PRESENT cipher, which play a crucial role in generating round keys for their respective encryption techniques. By implementing deep learning methods, particularly a Neural Network model, our study aims to unravel the complexities of these KSAs and shed light on their inner workings.
Last updated:  2024-02-19
Understanding User-Perceived Security Risks and Mitigation Strategies in the Web3 Ecosystem
Janice Jianing Si, Tanusree Sharma, and Kanye Ye Wang
The advent of Web3 technologies promises unprecedented levels of user control and autonomy. However, this decentralization shifts the burden of security onto the users, making it crucial to understand their security behaviors and perceptions. To address this, our study introduces a comprehensive framework that identifies four core components of user interaction within the Web3 ecosystem: blockchain infrastructures, Web3-based Decentralized Applications (DApps), online communities, and off-chain cryptocurrency platforms. We delve into the security concerns perceived by users in each of these components and analyze the mitigation strategies they employ, ranging from risk assessment and aversion to diversification and acceptance. We further discuss the landscape of both technical and human-induced security risks in the Web3 ecosystem, identify the unique security differences between Web2 and Web3, and highlight key challenges that render users vulnerable, to provide implications for security design in Web3.
Last updated:  2024-02-18
YPIR: High-Throughput Single-Server PIR with Silent Preprocessing
Samir Jordan Menon and David J. Wu
We introduce YPIR, a single-server private information retrieval (PIR) protocol that achieves high throughput (up to 75% of the memory bandwidth of the machine) without any offline communication. For retrieving a 1-bit (or 1-byte) record from a 32-GB database, YPIR achieves 10.9 GB/s/core server throughput and requires 2.5 MB of total communication. On the same setup, the state-of-the-art SimplePIR protocol achieves a 12.6 GB/s/core server throughput, requires 1.5 MB total communication, but additionally requires downloading a 724 MB hint in an offline phase. YPIR leverages a new lightweight technique to remove the hint from high-throughput single-server PIR schemes with small overhead. We also show how to reduce the server preprocessing time in the SimplePIR family of protocols by a factor of $10$-$15\times$. By removing the need for offline communication, YPIR significantly reduces the server-side costs for private auditing of Certificate Transparency logs. Compared to the best previous PIR-based approach, YPIR reduces the server-side costs by a factor of $5.6\times$. Note that to reduce communication costs, the previous approach assumed that updates to the Certificate Transparency log servers occurred in weekly batches. Since there is no offline communication in YPIR, our approach allows clients to always audit the most recent Certificate Transparency logs (e.g., updating once a day). Supporting daily updates using the prior scheme would cost $30\times$ more than YPIR (based on current AWS compute costs).
Last updated:  2024-02-17
A note on PUF-Based Robust and Anonymous Authentication and Key Establishment Scheme for V2G Networks
Milad Seddigh and Seyed Hamid Baghestani
Vehicle-to-grid (V2G) provides effective charging services, allows bidirectional energy communication between the power grid and electric vehicles (EVs), and reduces environmental pollution and energy crises. Recently, Sungjin Yu et al. proposed a PUF-based, robust, and anonymous authentication and key establishment scheme for V2G networks. In this paper, we show that the proposed protocol does not provide user anonymity and is vulnerable to a tracing attack. We also find that their scheme is vulnerable to ephemeral secret leakage attacks.
Last updated:  2024-02-17
A New Approach to Generic Lower Bounds: Classical/Quantum MDL, Quantum Factoring, and More
Minki Hhan
This paper studies the limitations of generic approaches to solving cryptographic problems in classical and quantum settings in various models. - In the classical generic group model (GGM), we find simple alternative proofs for the lower bounds of variants of the discrete logarithm (DL) problem: the multiple-instance DL and one-more DL problems (and their mixture). We also re-prove the unknown-order GGM lower bounds, such as those for order finding, root extraction, and repeated squaring. - In the quantum generic group model (QGGM), we study the complexity of variants of the discrete logarithm. We prove a logarithmic DL lower bound in the QGGM, even for the composite-order setting. We also prove an asymptotically tight lower bound for the multiple-instance DL problem. Both results resolve open problems suggested in a recent work by Hhan, Yamakawa, and Yun. - In a quantum generic ring model that we newly introduce, we give a logarithmic lower bound for order-finding algorithms, an important step in Shor's algorithm. We also give a logarithmic lower bound for a certain generic factoring algorithm outputting relatively small integers, which includes a modified version of Regev's algorithm. - Finally, we prove a lower bound for the basic index calculus method for solving the DL problem in a new idealized group model regarding smooth numbers. The quantum lower bounds in both models allow certain (different) types of classical preprocessing. All of the proofs are significantly simpler than previous ones and proceed through a single tool, the so-called compression lemma, together with linear algebra tools. Our use of this lemma may be of independent interest.