Papers updated in last 183 days (Page 2 of 1449 results)

Last updated:  2024-05-01
New self-orthogonal codes from weakly regular plateaued functions and their application in LCD codes
Melike Çakmak, Ahmet Sınak, and Oğuz Yayla
Linear codes with few weights form a significant code family in coding theory. A linear code is self-orthogonal if it is contained in its dual code. Self-orthogonal codes have applications in linear complementary dual (LCD) codes, quantum codes, and elsewhere. The construction of linear codes is an interesting research problem, and one approach among the various known methods utilizes cryptographic functions defined over finite fields. The construction of linear codes (in particular, self-orthogonal codes) from functions has been studied in the literature. In this paper, we generalize the construction method given by Heng et al. in [Des. Codes Cryptogr. 91(12), 2023] to weakly regular plateaued functions. We first construct several families of p-ary linear codes with few weights from weakly regular plateaued unbalanced (resp. balanced) functions over finite fields of odd characteristic. We observe that the constructed codes are self-orthogonal when p = 3. We then use the constructed ternary self-orthogonal codes to build new families of ternary LCD codes. Consequently, we obtain (almost) optimal ternary self-orthogonal codes and LCD codes.
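As a concrete illustration of the self-orthogonality condition (not taken from the paper): a code with generator matrix $G$ over a prime field $\mathbb{F}_p$ is self-orthogonal exactly when $G G^T \equiv 0 \pmod{p}$. A minimal Python sketch, using the classical ternary tetracode as the example matrix:

    import numpy as np

    # A linear code C with generator matrix G over F_p is self-orthogonal
    # (C is contained in its dual) iff all rows of G are pairwise orthogonal
    # and orthogonal to themselves, i.e. (G @ G.T) vanishes mod p.
    def is_self_orthogonal(G, p):
        return not np.any((G @ G.T) % p)

    # The ternary tetracode, a classical self-dual [4,2,3] code over F_3,
    # used here purely as an illustration.
    G = np.array([[1, 1, 1, 0],
                  [0, 1, 2, 1]])
    print(is_self_orthogonal(G, 3))   # True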
Last updated:  2024-05-01
Eagle: Efficient Privacy Preserving Smart Contracts
Carsten Baum, James Hsin-yu Chiang, Bernardo David, and Tore Kasper Frederiksen
The proliferation of Decentralised Finance (DeFi) and Decentralised Autonomous Organisations (DAOs), which in their current form are exposed to front-running of token transactions and proposal voting, demonstrates the need to shield user inputs and internal state from the parties executing smart contracts. In this work we present “Eagle”, an efficient UC-secure protocol which realises a notion of privacy-preserving smart contracts where both the amounts of tokens and the auxiliary data given as input to a contract are kept private from all parties but the one providing the input. Prior proposals realising privacy-preserving smart contracts on public, permissionless blockchains generally offer limited contract functionality or require a trusted third party to manage private inputs and state. We achieve our results through a combination of secure multi-party computation (MPC) and zero-knowledge proofs on Pedersen commitments. Although other approaches leverage MPC in this setting, they incur impractical computational overheads by requiring the computation of cryptographic primitives within MPC. Our solution achieves security without computing any cryptographic primitives inside the MPC instance and requires only a constant number of exponentiations per client input.
Last updated:  2024-05-01
REED: Chiplet-Based Accelerator for Fully Homomorphic Encryption
Aikata Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, and Sujoy Sinha Roy
Fully Homomorphic Encryption (FHE) enables privacy-preserving computation and has many applications. However, its practical implementation faces massive computation and memory overheads. To address this bottleneck, several Application-Specific Integrated Circuit (ASIC) FHE accelerators have been proposed. All these prior works put every component needed for FHE onto one chip (monolithic), hence offering high performance. However, they suffer from practical problems associated with large-scale chip design, such as inflexibility, low yield, and high manufacturing cost. In this paper, we present the \emph{first-of-its-kind} multi-chiplet-based FHE accelerator `REED' for overcoming the limitations of prior monolithic designs. To utilize the advantages of multi-chiplet structures while matching the performance of larger monolithic systems, we propose and implement several novel strategies in the context of FHE. These include a scalable chiplet design approach, an effective framework for workload distribution, a custom inter-chiplet communication strategy, and advanced pipelined Number Theoretic Transform and automorphism design to enhance performance. Experimental results demonstrate that the REED 2.5D microprocessor consumes 96.7 mm$^2$ of chip area and 49.4 W of average power in 7 nm technology. It achieves a speedup of up to 2,991$\times$ compared to a CPU (24-core 2$\times$ Intel X5690) and offers 1.9$\times$ better performance, along with a 50\% reduction in development costs, when compared to state-of-the-art ASIC FHE accelerators. Furthermore, our work presents the \textit{first} instance of benchmarking encrypted deep neural network (DNN) training. Overall, the REED architecture offers a highly effective solution for accelerating FHE, thereby significantly advancing the practicality and deployability of FHE in real-world applications.
Last updated:  2024-05-01
SASTA: Ambushing Hybrid Homomorphic Encryption Schemes with a Single Fault
Aikata Aikata, Ahaan Dabholkar, Dhiman Saha, and Sujoy Sinha Roy
The rising tide of data breaches targeting large data storage centres and servers has raised serious privacy and security concerns. Homomorphic Encryption schemes offer an effective defence against such attacks, but their adoption has been hindered by substantial computational and communication overheads, particularly on the client's side. The Hybrid Homomorphic Encryption (HHE) protocol was developed to mitigate these issues. However, the susceptibility of HHE to strong attacks, specifically physical attacks, has been largely unexplored. While physical attacks like Differential Fault Analysis (DFA) have proved very effective in the field of symmetric cryptography, prior works have largely relied on strong assumptions like nonce reuse, limiting their feasibility in a real-world setting. In this work, we introduce a novel attack, SASTA, which presents, to the best of our knowledge, the first generalized analysis of HHE under DFA. Our analysis uncovers a significant limitation of the HHE protocol where a single fault leads to complete key recovery not only for the standard scheme AES but also for the new HHE-tailored Symmetric Encryption (SE) schemes -- RASTA, PASTA, MASTA, and HERA. We further extend SASTA to effectively target Authenticated Transciphering protocols. Unlike prior works, the key advantage of SASTA is that it does not require nonce reuse. We demonstrate a proof-of-concept of our attack on an off-the-shelf ATXmega128D4-AU microcontroller running HHE firmware and mount end-to-end key recovery attacks. Finally, we discuss conventional countermeasures to defend against SASTA. Our work highlights that despite HHE's advantages of improving performance and reducing communication overhead, further analysis of its security guarantees is required.
Last updated:  2024-05-01
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Enrico Bottazzi
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can corrupt the entire election. Because of this, users must prove that the ciphertext they submitted is a valid Ring Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 0 or a 1). Greco uses zero-knowledge proofs to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
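To see why unvalidated ciphertexts break an additively homomorphic tally, consider the following minimal Python sketch. It uses textbook Paillier encryption with tiny demo primes as a stand-in; Greco itself targets RLWE-based FHE, and all values here are illustrative only:

    from math import gcd
    import random

    # Textbook Paillier with tiny demo primes -- NOT secure, and not the
    # RLWE scheme Greco targets; it only illustrates why an additively
    # homomorphic tally needs validity proofs for submitted ciphertexts.
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)

    def enc(m):
        while True:
            r = random.randrange(1, n)
            if gcd(r, n) == 1:
                return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    ballots = [enc(v) for v in [1, 0, 1, 1, 0]]    # valid ballots: E(0)/E(1)
    ballots.append(enc(145127835 % n))             # one malicious ciphertext
    tally = 1
    for c in ballots:
        tally = (tally * c) % n2                   # ciphertext product = plaintext sum
    print(dec(tally))                              # 116571, far from the honest count 3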
Last updated:  2024-05-01
A New Approach to Efficient and Secure Fixed-point Computation
Tore Kasper Frederiksen, Jonas Lindstrøm, Mikkel Wienberg Madsen, and Anne Dorte Spangsberg
Secure Multi-Party Computation (MPC) constructions typically allow computation over a finite field or ring. While useful for many applications, certain real-world applications require the use of decimal numbers. While it is possible to emulate floating-point operations in MPC, fixed-point computation has gained more traction in the practical space due to its simplicity and efficient realizations. Even so, current protocols for fixed-point MPC still require computing a secure truncation after each multiplication gate. In this paper, we show a new paradigm for realizing fixed-point MPC. Starting from an existing MPC protocol over arbitrary, large, finite fields or rings, we show how to realize MPC over a residue number system (RNS). This allows us to leverage certain mathematical structures to construct a secure algorithm for efficient approximate truncation by a static and public value. We then show how this can be used to realize highly efficient secure fixed-point computation. In contrast to previous approaches, our protocol does not require any multiplications of secret values in the underlying MPC scheme to realize truncation but instead relies on preprocessed pairs of correlated random values, which we show can be constructed very efficiently when accepting a small amount of leakage and robustness in the strong, covert model. We proceed to implement our protocol, with SPDZ as the underlying MPC protocol, and achieve significantly faster fixed-point multiplication.
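For readers unfamiliar with fixed-point arithmetic, the sketch below shows, in plain Python outside any MPC, why every multiplication must be followed by a truncation; securely realizing this truncation step is what the paper's approximate RNS technique targets:

    # Fixed-point encoding: a real x is stored as the integer round(x * 2^f).
    # Multiplying two encodings yields a value at scale 2^(2f), so every
    # multiplication must be followed by a truncation (division by 2^f)
    # to restore the scale.
    f = 16                        # fractional bits
    encode = lambda x: round(x * 2**f)
    decode = lambda a: a / 2**f

    a, b = encode(3.25), encode(-1.5)
    prod = a * b                  # scale is now 2^(2f)
    print(decode(prod >> f))      # truncate back to scale 2^f -> -4.875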
Last updated:  2024-04-30
Private Analytics via Streaming, Sketching, and Silently Verifiable Proofs
Mayank Rathee, Yuwen Zhang, Henry Corrigan-Gibbs, and Raluca Ada Popa
We present Whisper, a system for privacy-preserving collection of aggregate statistics. Like prior systems, a Whisper deployment consists of a small set of non-colluding servers; these servers compute aggregate statistics over data from a large number of users without learning the data of any individual user. Whisper’s main contribution is that its server-to-server communication cost and its server-side storage costs scale sublinearly with the total number of users. In particular, prior systems required the servers to exchange a few bits of information to verify the well-formedness of each client submission. In contrast, Whisper uses silently verifiable proofs, a new type of proof system on secret-shared data that allows the servers to verify an arbitrarily large batch of proofs by exchanging a single 128-bit string. This improvement comes with increased client-to-server communication, which, in cloud computing, is typically cheaper (or even free) than the cost of egress for server-to-server communication. To reduce server storage, Whisper approximates certain statistics using small-space sketching data structures. Applying randomized sketches in an environment with adversarial clients requires a careful and novel security analysis. In a deployment with two servers and 100,000 clients of which 1% are malicious, Whisper can improve server-to-server communication for vector sum by three orders of magnitude while each client’s communication increases by only 10%.
Last updated:  2024-04-30
Intmax2: A ZK-rollup with Minimal Onchain Data and Computation Costs Featuring Decentralized Aggregators
Erik Rybakken, Leona Hioki, and Mario Yaksetig
We present a blockchain scaling solution called Intmax2, which is a Zero-Knowledge rollup (ZK-rollup) protocol with stateless and permissionless block production, while minimizing the usage of data and computation on the underlying blockchain. Our architecture distinctly diverges from existing ZK-rollups since essentially all of the data and computational costs are shifted to the client-side as opposed to imposing heavy requirements on the block producers or the underlying Layer 1 blockchain. The only job for block producers is to periodically generate a commitment to a set of transactions, distribute inclusion proofs to each sender, and collect and aggregate signatures by the senders. This design allows permissionless and stateless block production, and is highly scalable with the number of users.
Last updated:  2024-04-30
Conan: Distributed Proofs of Compliance for Anonymous Data Collection
Mingxun Zhou, Elaine Shi, and Giulia Fanti
We consider how to design an anonymous data collection protocol that enforces compliance rules. Imagine that each client contributes multiple data items (e.g., votes, location crumbs, or secret shares of its input) to an anonymous network, which mixes all clients' data items so that the receiver cannot determine which data items belong to the same user. Now, each user must prove to an auditor that the set it contributed satisfies a compliance predicate, without identifying which items it contributed. For example, the auditor may want to ensure that no one voted for the same candidate twice, or that a user's location crumbs are not too far apart in a given time interval. Our main contribution is a novel anonymous, compliant data collection protocol that realizes the above goal. In comparison with naive approaches such as generic multi-party computation or earlier constructions of collaborative zero-knowledge proofs, the most compelling advantage of our approach is that each client's communication and computation overhead do not grow with respect to the number of clients $n$. In this sense, we save a factor of at least $n$ over prior work, which allows our technique to scale to applications with a large number of clients, such as anonymous voting and privacy-preserving federated learning. We first describe our protocol using generic cryptographic primitives that can be realized from standard assumptions. We then suggest a concrete instantiation called {\sc Conan} which we implement and evaluate. In this concrete instantiation, we are willing to employ SNARKs and the random oracle model for better practical efficiency. Notably, in this practical instantiation, each client's additional communication overhead (not counting the overhead of sending its data items over the anonymous network) is only $\widetilde{O}(1)$. We evaluated our technique in various application settings, including secure voting, and secure aggregation protocols for histogram, summation, and vector summation. Our evaluation results show that each client's additional communication overhead is only 2.2KB or 2.6KB, depending on which SNARK implementation we use. Further, each client's computation is only 0.2s - 0.5s for almost all cases, except for the vector summation application where the data items are high-dimensional and each client's computation is 8.5-10.6s.
Last updated:  2024-04-30
Homomorphic Evaluation of LWR-based PRFs and Application to Transciphering
Amit Deo, Marc Joye, Benoit Libert, Benjamin R. Curtis, and Mayeul de Bellabre
Certain applications such as FHE transciphering require randomness while operating over encrypted data. This randomness has to be obliviously generated in the encrypted domain and remain encrypted throughout the computation. Moreover, it should be guaranteed that independent-looking random coins can be obliviously generated for different computations. In this work, we consider the homomorphic evaluation of pseudorandom functions (PRFs) with a focus on practical lattice-based candidates. In the homomorphic PRF evaluation setting, given a fully homomorphic encryption of the PRF secret key $\vec{s}$, it should be possible to homomorphically compute encryptions of PRF evaluations $\{ \text{PRF}_{\vec{s}}(x_i) \}_{i=1}^M$ for public inputs $\{ x_i\}_{i=1}^M$. We consider this problem for PRF families based on the hardness of the Learning-With-Rounding (LWR) problem introduced by Banerjee, Peikert and Rosen (Eurocrypt '12). We build on the random-oracle variant of a PRF construction suggested by Banerjee et al. and demonstrate that it can be evaluated using only two sequential programmable bootstraps in the TFHE homomorphic encryption scheme. We also describe several modifications of this PRF---which we prove as secure as the original function---that support homomorphic evaluations using only one programmable bootstrap per slot. Numerical experiments were conducted using practically relevant FHE parameter sets from the TFHE-rs library. Our benchmarks show that a throughput of about $1000$ encrypted pseudorandom bits per second (resp. $900$ encrypted pseudorandom bits per second) can be achieved on an AWS hpc7a.96xlarge machine (resp. on a standard laptop with an Apple M2 chip), on a single thread. The PRF evaluation keys in our experiments have sizes roughly $40\%$ and $60\%$ of a bootstrapping key. Applying our solution to transciphering enables important bandwidth savings, typically trading $64$-bit values for $4$-bit values per transmitted ciphertext.
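As a rough sketch of the random-oracle-style LWR PRF shape described above (with made-up toy parameters and a hypothetical hash-to-vector map, not the paper's construction or parameter sets): hash the input to a vector $a(x) \in \mathbb{Z}_q^n$, take the inner product with the key $\vec{s}$, and round from $\mathbb{Z}_q$ to $\mathbb{Z}_p$:

    import hashlib

    # Toy LWR-style PRF: PRF_s(x) = round((p/q) * <a(x), s>) mod p, where
    # a(x) is derived from x by an extendable-output hash modeled as a
    # random oracle.  Parameters and key are illustrative only.
    q, p, n = 2**16, 2**4, 8

    def hash_to_vector(x: bytes):
        out = []
        for ctr in range(n):
            h = hashlib.shake_128(x + ctr.to_bytes(4, "big")).digest(4)
            out.append(int.from_bytes(h, "big") % q)
        return out

    def lwr_prf(s, x: bytes) -> int:
        inner = sum(ai * si for ai, si in zip(hash_to_vector(x), s)) % q
        return (inner * p + q // 2) // q % p   # round (p/q)*inner to nearest

    s = [123, 4567, 89, 1011, 1213, 1415, 1617, 1819]   # secret key in Z_q^n
    print([lwr_prf(s, i.to_bytes(4, "big")) for i in range(8)])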
Last updated:  2024-04-30
Batch PIR and Labeled PSI with Oblivious Ciphertext Compression
Alexander Bienstock, Sarvar Patel, Joon Young Seo, and Kevin Yeo
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts with a subset of encryptions of zeroes whose positions are known only to the client. The goal is for the server to effectively compress the ciphertexts obliviously, while preserving the non-zero plaintexts and without learning the plaintext values. For oblivious decompression, the client instead succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, though the zeroes may be replaced with arbitrary values. We present solutions to both problems that produce lossless compressions only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms involves embedding ciphertexts in random linear systems that are efficiently solvable. Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings. Additionally, we study labeled private set intersection (PSI) in the unbalanced setting where one party's set is significantly smaller than the other party's set and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol with 65-88% reduction in communication with comparable computation compared to prior works.
Last updated:  2024-04-30
Verifiable FHE via Lattice-based SNARKs
Shahla Atapoor, Karim Baghery, Hilder V. L. Pereira, and Jannik Spiessens
Fully Homomorphic Encryption (FHE) is a prevalent cryptographic primitive that allows for computation on encrypted data. In various cryptographic protocols, this enables outsourcing computation to a third party while retaining the privacy of the inputs to the computation. However, these schemes make an honest-but-curious assumption about the adversary. Previous work has tried to remove this assumption by combining FHE with Verifiable Computation (VC). Recent work has increased the flexibility of this approach by introducing integrity checks for homomorphic computations over rings. However, efficient FHE for circuits of large multiplicative depth also requires non-ring computations called maintenance operations, i.e., modulus switching and key switching, which cannot be efficiently verified by existing constructions. We propose the first efficiently verifiable FHE scheme that allows for arbitrary depth homomorphic circuits by utilizing the double-CRT representation in which FHE schemes are typically computed, and using lattice-based SNARKs to prove components of this computation separately, including the maintenance operations. Therefore, our construction can theoretically handle bootstrapping operations. We also present the first implementation of a verifiable computation on encrypted data for a computation that contains multiple ciphertext-ciphertext multiplications. Concretely, we verify the homomorphic computation of an approximate neural network containing three layers and more than 100 ciphertexts in less than 1 second while maintaining reasonable prover costs.
Last updated:  2024-04-30
Complete Knowledge: Preventing Encumbrance of Cryptographic Secrets
Mahimna Kelkar, Kushal Babel, Philip Daian, James Austgen, Vitalik Buterin, and Ari Juels
Most cryptographic protocols model a player’s knowledge of secrets in a simple way. Informally, the player knows a secret in the sense that she can directly furnish it as a (private) input to a protocol, e.g., to digitally sign a message. The growing availability of Trusted Execution Environments (TEEs) and secure multiparty computation, however, undermines this model of knowledge. Such tools can encumber a secret sk and permit a chosen player to access sk conditionally, without actually knowing sk. By permitting selective access to sk by an adversary, encumbrance of secrets can enable vote-selling in cryptographic voting schemes, illegal sale of credentials for online services, and erosion of deniability in anonymous messaging systems. Unfortunately, existing proof-of-knowledge protocols fail to demonstrate that a secret is unencumbered. We therefore introduce and formalize a new notion called complete knowledge (CK). A proof (or argument) of CK shows that a prover does not just know a secret, but also has fully unencumbered knowledge, i.e., unrestricted ability to use the secret. We introduce two practical CK schemes that use special-purpose hardware, specifically TEEs and off-the-shelf mining ASICs. We prove the security of these schemes and explore their practical deployment with a complete, end-to-end prototype with smart-contract verification that supports both. We show how CK can address encumbrance attacks identified in previous work. Finally, we introduce two new applications enabled by CK that involve proving ownership of blockchain assets.
Last updated:  2024-04-30
Solving the Tensor Isomorphism Problem for special orbits with low rank points: Cryptanalysis and repair of an Asiacrypt 2023 commitment scheme
Valerie Gilchrist, Laurane Marco, Christophe Petit, and Gang Tang
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development. One of these, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures. In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations of these algorithms are that these special tensors contain some low-rank points, and their stabilizer groups are not trivial. With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
Last updated:  2024-04-30
FiveEyes: Cryptographic Biometric Authentication from the Iris
Luke Demarest, Sohaib Ahmad, Sixia Chen, Benjamin Fuller, and Alexander Russell
Despite decades of effort, a stubborn chasm exists between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees. This is particularly frustrating given the long line of research that has developed theoretical tools—known as fuzzy extractors—that enable secure, privacy preserving biometric authentication with public enrollment data (Dodis et al., SIAM Journal of Computing 2008). Unfortunately, the best known constructions either: 1. Assume that bits of biometrics are i.i.d. (or that all correlation is captured in pairs of features (Hine et al., TIFS 2023)), which is not true for the biometrics themselves or for features extracted using modern learning techniques, or 2. Only provide substantial true accept rates with an estimated security of $32$ bits for the iris (Simhadri et al., ISC 2019) and $45$ bits for the face (Zhang, Cui, and Yu, ePrint 2021/1559). This work introduces FiveEyes, an iris key derivation system powered by technical advances in both 1) feature extraction from the iris and 2) the fuzzy extractor used to secure authentication keys. FiveEyes’ feature extractor’s loss focuses on quality for key derivation. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). FiveEyes’ fuzzy extractor uses statistics of the produced features to sample non-uniformly, which significantly improves the security vs. true accept rate (TAR) tradeoff. Irises used to evaluate TAR and security are class disjoint from those used for training and collecting statistics. We state assumptions sufficient for security. We present various parameter regimes to highlight different TARs: 1. $65$ bits of security (equivalent to $87$ bits with a password) at $12$% TAR, and 2. $50$ bits of security (equivalent to $72$ bits with a password) at $45$% TAR. Applying known TAR (Davida et al., IEEE S&P 1998) amplification techniques additively boosts TAR by $30$% for the above settings.
Last updated:  2024-04-30
Succinct Arguments over Towers of Binary Fields
Benjamin E. Diamond and Jim Posen
We introduce an efficient SNARK for towers of binary fields. Adapting Brakedown (CRYPTO '23), we construct a multilinear polynomial commitment scheme suitable for polynomials over tiny fields, including the field with 2 elements. Our commitment scheme, unlike those of previous works, treats small-field polynomials with zero embedding overhead. We further introduce binary-field adaptations of HyperPlonk's (EUROCRYPT '23) product and permutation checks, as well as of Lasso's lookup. Our scheme's binary PLONKish variant captures standard hash functions—like Keccak-256 and Grøstl—extremely efficiently. With recourse to thorough performance benchmarks, we argue that our scheme can efficiently generate precisely those Keccak-256 proofs which critically underlie modern efforts to scale Ethereum.
Last updated:  2024-04-30
Swiper: a new paradigm for efficient weighted distributed protocols
Andrei Tonkikh and Luciano Freitas
The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights to be at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst-case and, sometimes, even smaller than the number of participants. While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include erasure-coded distributed storage and broadcast protocols, verifiable secret sharing, and asynchronous consensus. Although there are ad-hoc weighted solutions to some of these problems, the protocols yielded by our transformations enjoy all the benefits of nominal solutions, including simplicity, efficiency, and a wider range of possible cryptographic assumptions. Since the release of the first version of this paper online, a version of the weight reduction approach has been integrated into a major layer-1 blockchain system for implementing a randomness beacon.
Last updated:  2024-04-30
Ipotane: Achieving the Best of All Worlds in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Hai Jin, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23) guarantees good latency in all situations, but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Ipotane, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations—in both throughput and latency. Ipotane also runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Ipotane runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Ipotane’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Ipotane’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across two paths. We conduct extensive experiments to demonstrate that Ipotane achieves high throughput and low latency in all situations.
Last updated:  2024-04-29
Faster Private Decision Tree Evaluation for Batched Input from Homomorphic Encryption
Kelong Cong, Jiayi Kang, Georgio Nicolas, and Jeongeun Park
Privacy-preserving decision tree evaluation (PDTE) allows a client that holds feature vectors to perform inferences against a decision tree model on the server side without revealing feature vectors to the server. Our work focuses on the non-interactive batched setting where the client sends a batch of encrypted feature vectors and then obtains classifications, without any additional interaction. This is useful in privacy-preserving credit scoring, biometric authentication, and many more applications. In this paper, we propose two novel non-interactive batched PDTE protocols, BPDTE_RCC and BPDTE_CW, based on two new ciphertext-plaintext comparison algorithms, the improved range cover comparison (RCC) comparator and the constant-weight (CW) piece-wise comparator, respectively. Compared to the current state-of-the-art Level Up (CCS'23), our comparison algorithms are up to $72\times$ faster for batched inputs of 16 bits. Moreover, we introduce a new tree traversal method called Adapted SumPath to achieve $\mathcal{O}(1)$ complexity of the server's response, whereas Level Up has $\mathcal{O}(2^d)$ for a depth-$d$ tree, where the client needs to look up classification values in a table. Overall, our PDTE protocols attain the optimal server-to-client communication complexity and are up to $17\times$ faster than Level Up at batch size 16384.
Last updated:  2024-04-29
Efficient Linkable Ring Signatures: New Framework and Post-Quantum Instantiations
Yuxi Xue, Xingye Lu, Man Ho Au, and Chengru Zhang
In this paper, we introduce a new framework for constructing linkable ring signatures (LRS). Our framework is based purely on signatures of knowledge (SoK) which allows one to issue signatures on behalf of any NP-statement using the corresponding witness. Our framework enjoys the following advantages: (1) the security of the resulting LRS depends only on the security of the underlying SoK; (2) the resulting LRS naturally supports online/offline signing (resp. verification), where the output of the offline signing (resp. verification) can be re-used across signatures of the same ring. For a ring size $n$, our framework requires an SoK of the NP statement with size $\log n$. To instantiate our framework, we adapt the well-known post-quantum secure non-interactive argument of knowledge (NIAoK), ethSTARK, into an SoK. This SoK is inherently post-quantum secure and has a signature size poly-logarithmic in the size of the NP statement. Thus, our resulting LRS has a signature size of $O(\text{polylog}(\log n))$. By comparison, existing post-quantum ring signatures, regardless of linkability considerations, have signature sizes of $O(\log n)$ at best. Furthermore, leveraging online/offline verification, part of the verification of signatures on the same ring can be shared, resulting in a state-of-the-art amortized verification cost of $O(\text{polylog}(\log n))$. Our LRS also performs favourably against existing schemes in practical scenarios. Concretely, our scheme has the smallest signature size among all post-quantum linkable ring signatures with non-slanderability for ring size larger than $32$. In our experiment, at $128$-bit security and ring size of $1024$, our LRS has a size of $29$KB, and an amortized verification cost of $0.3$ ms, surpassing the state-of-the-art by a significant margin. Even without considering amortization, the verification time for a single signature is $128$ ms, comparable to schemes featuring linear signature size. A similar performance advantage can also be seen for signing. Furthermore, our LRS has extremely short public keys ($32$ bytes), while public keys of existing constructions are in the order of kilobytes.
Last updated:  2024-04-29
FE[r]Chain: Enforcing Fairness in Blockchain Data Exchanges Through Verifiable Functional Encryption
Camille Nuoskala, Reyhaneh Rabbaninejad, Tassos Dimitriou, and Antonis Michalas
Functional Encryption (FE) allows users to extract specific function-related information from encrypted data while preserving the privacy of the underlying plaintext. Though significant research has been devoted to developing secure and efficient Multi-Input Functional Encryption schemes supporting diverse functions, there remains a noticeable research gap in the development of verifiable FE schemes. Functionality and performance have received considerable attention; however, the crucial aspect of verifiability in FE has been relatively understudied. Another important aspect that prior research in FE with outsourced decryption has not adequately addressed is the fairness of the data-for-money exchange between a curator and an analyst. This paper focuses on addressing these gaps by proposing a verifiable FE scheme for inner product computation. The scheme not only supports the multi-client setting but also extends its functionality to accommodate multiple users -- an essential feature in modern privacy-respecting services. Additionally, it demonstrates how this FE scheme can be effectively utilized to ensure fairness and atomicity in a payment protocol, further enhancing the trustworthiness of data exchanges.
Last updated:  2024-04-29
Secure Latent Dirichlet Allocation
Thijs Veugen, Vincent Dunning, Michiel Marcus, and Bart Kamphorst
Topic modelling refers to a popular set of techniques used to discover hidden topics that occur in a collection of documents. These topics can, for example, be used to categorize documents or label text for further processing. One popular topic modelling technique is Latent Dirichlet Allocation (LDA). In topic modelling scenarios, the documents are often assumed to be in one, centralized dataset. However, sometimes documents are held by different parties, and contain privacy- or commercially-sensitive information that cannot be shared. We present a novel, decentralized approach to train an LDA model securely without having to share any information about the content of the documents with the other parties. We preserve the privacy of the individual parties using a combination of privacy enhancing technologies. We show that our decentralized, privacy preserving LDA solution has a similar accuracy compared to an (insecure) centralised approach. With $1024$-bit Paillier keys, a topic model with $5$ topics and $3000$ words can be trained in around $16$ hours. Furthermore, we show that the solution scales linearly in the total number of words and the number of topics.
Last updated:  2024-04-29
Information-theoretic security with asymmetries
Tim Beyne and Yu Long Chen
In this paper, we study the problem of lower bounding any given cost function depending on the false positive and false negative probabilities of adversaries against indistinguishability security notions in symmetric-key cryptography. We take the cost model as an input, so that this becomes a purely information-theoretical question. We propose power bounds as an easy-to-use alternative for advantage bounds in the context of indistinguishability with asymmetric cost functions. We show that standard proof techniques such as hybrid arguments and the H-coefficient method can be generalized to the power model, and apply these techniques to the PRP-PRF switching lemma, the Even-Mansour (EM) construction, and the sum-of-permutations (SoP) construction. As the final and perhaps most useful contribution, we provide two methods to convert single-user power bounds into multi-user power bounds, and investigate their relation to the point-wise proximity method of Hoang and Tessaro (Crypto 2016). These methods are applied to obtain tight multi-user power bounds for EM and SoP.
Last updated:  2024-04-29
Cryptanalytic Audit of the XHash Sponge Function and its Components
Vincent Rijmen
In this audit we started from the security analysis provided in the design documentation of XHash8/12. We extended the analysis in several directions and confirmed the security claims that were made by the designers.
Last updated:  2024-04-29
Implementation and Performance Analysis of Homomorphic Signature Schemes
Davide Carnemolla, Dario Catalano, Mario Di Raimondo, and Federico Savasta
Homomorphic signatures allow one to validate computation on signed data. Alice, holding a dataset $\{m_1, \ldots, m_t\}$, uses her secret key $\sf sk$ to sign these data and stores the authenticated dataset on a remote server. The server can later (publicly) compute $m = f(m_1, \ldots, m_t)$ together with a signature $\sigma$ certifying that $m$ is indeed the correct output of the computation $f$. Over the last fifteen years, the problem of realizing homomorphic signatures has been the focus of numerous research works, with constructions now ranging from very efficient ones supporting linear functions to very expressive ones supporting (up to) arbitrary circuits. In this work we tackle the question of assessing the practicality of schemes belonging to this latter class. Specifically, we implement the GVW lattice-based scheme for circuits from STOC 2015 and two recently proposed pairing-based constructions building on functional commitments. Our experiments show that both pairing-based schemes outperform GVW on all fronts.
Last updated:  2024-04-29
Monchi: Multi-scheme Optimization For Collaborative Homomorphic Identification
Alberto Ibarrondo, Ismet Kerenciler, Hervé Chabanne, Vincent Despiegel, and Melek Önen
This paper introduces a novel protocol for privacy-preserving biometric identification, named Monchi, that combines the use of homomorphic encryption for the computation of the identification score with function secret sharing to obliviously compare this score with a given threshold and finally output the binary result. Given the cost of homomorphic encryption (BFV in this solution), we study and evaluate the integration of two packing solutions that enable the regrouping of multiple templates in one ciphertext to improve efficiency meaningfully. We propose an end-to-end protocol, prove it secure, and implement it. Our experimental results attest to Monchi's applicability to the real-life use case of an airplane boarding scenario with 1000 passengers, taking less than one second to authorize/deny access to the plane for each passenger via biometric identification while maintaining the privacy of all passengers.
Last updated:  2024-04-29
A Complete Beginner Guide to the Number Theoretic Transform (NTT)
Ardianto Satriawan, Rella Mareta, and Hanho Lee
The Number Theoretic Transform (NTT) is a powerful mathematical tool that has become increasingly important in developing Post Quantum Cryptography (PQC) and Homomorphic Encryption (HE). Its ability to efficiently calculate polynomial multiplication using the convolution theorem with quasi-linear complexity $O(n \log{n})$ instead of $O(n^2)$, when implemented with Fast Fourier Transform-style algorithms, has made it a key component in modern cryptography. The FFT-style NTT algorithm, or fast-NTT, is particularly useful in lattice-based cryptography. In this short note, we briefly introduce the basic concepts of linear, cyclic, and negacyclic convolutions via traditional schoolbook algorithms, traditional NTT, its inverse (INTT), and FFT-like versions of NTT/INTT. We then provide consistent toy examples across these concepts and algorithms to build an understanding of the basics of the NTT.
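In the spirit of the note's toy examples, here is a minimal negacyclic NTT-based polynomial multiplication in $\mathbb{Z}_{17}[x]/(x^4+1)$, written as a naive $O(n^2)$ transform for clarity (the parameters are illustrative and not drawn from the note itself):

    # Toy negacyclic NTT in Z_17[x]/(x^4 + 1): q = 17, n = 4, and psi = 2 is
    # a primitive 2n-th root of unity mod q (2^8 = 1 and 2^4 = -1 mod 17),
    # so omega = psi^2 = 4 is a primitive n-th root of unity.
    q, n, psi = 17, 4, 2
    omega = psi * psi % q

    def ntt(a):  # naive O(n^2) forward transform with the psi-twist
        return [sum(a[j] * pow(psi, j, q) * pow(omega, i * j, q)
                    for j in range(n)) % q for i in range(n)]

    def intt(A):  # inverse transform, undoing the twist
        n_inv, psi_inv = pow(n, -1, q), pow(psi, -1, q)
        raw = [sum(A[i] * pow(omega, -i * j, q) for i in range(n)) % q
               for j in range(n)]
        return [x * n_inv * pow(psi_inv, j, q) % q for j, x in enumerate(raw)]

    a, b = [1, 2, 3, 4], [5, 6, 7, 8]
    # Pointwise product in the NTT domain equals the negacyclic convolution.
    c = intt([x * y % q for x, y in zip(ntt(a), ntt(b))])
    print(c)  # [12, 15, 2, 9], matching schoolbook reduction mod x^4 + 1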
Last updated:  2024-04-28
A New Hash-based Enhanced Privacy ID Signature Scheme
Liqun Chen, Changyu Dong, Nada El Kassem, Christopher J.P. Newton, and Yalan Wang
The elliptic curve-based Enhanced Privacy ID (EPID) signature scheme is broadly used for hardware enclave attestation by many platforms that implement Intel Software Guard Extensions (SGX) and other devices. This scheme has also been included in the Trusted Platform Module (TPM) specifications and ISO/IEC standards. However, it is insecure against quantum attackers. While research into quantum-resistant EPID has resulted in several lattice-based schemes, Boneh et al. have initiated the study of EPID signature schemes built only from symmetric primitives. We observe that for this line of research, there is still room for improvement. In this paper, we propose a new hash-based EPID scheme, which includes a novel and efficient signature revocation scheme. In addition, our scheme can handle a large group size (up to $2^{60}$ group members), which meets the requirements of rapidly developing hardware enclave attestation applications. The security of our scheme is proved under the Universal Composability (UC) model. Finally, we have implemented our EPID scheme, which, to the best of our knowledge, is the first implementation of EPID from symmetric primitives.
Last updated:  2024-04-28
Hash-based Direct Anonymous Attestation
Liqun Chen, Changyu Dong, Nada El Kassem, Christopher J.P. Newton, and Yalan Wang
Direct Anonymous Attestation (DAA) was designed for the Trusted Platform Module (TPM), and versions using RSA and elliptic curve cryptography have been included in the TPM specifications and in ISO/IEC standards. These standardised DAA schemes have their security based on the factoring or discrete logarithm problems and are therefore insecure against quantum attackers. Research into quantum-resistant DAA has resulted in several lattice-based schemes. In this paper, we propose the first post-quantum DAA scheme from symmetric primitives. We make use of a hash-based signature scheme, which is a slight modification of SPHINCS+, as a DAA credential. A DAA signature, proving the possession of such a credential, is a multiparty computation-based non-interactive zero-knowledge proof. The security of our scheme is proved under the Universal Composability (UC) model. While maintaining all the security properties required for a DAA scheme, we try to make the TPM's workload as low as possible. Our DAA scheme can handle a large group size (up to $2^{60}$ group members), which meets the requirements of rapidly developing TPM applications.
Last updated:  2024-04-28
Sphinx-in-the-Head: Group Signatures from Symmetric Primitives
Liqun Chen, Changyu Dong, Christopher J. P. Newton, and Yalan Wang
Group signatures and their variants have been widely used in privacy-sensitive scenarios such as anonymous authentication and attestation. In this paper, we present a new post-quantum group signature scheme from symmetric primitives. Using only symmetric primitives makes the scheme less prone to unknown attacks than basing the design on newly proposed hard problems whose security is less well-understood. However, symmetric primitives do not have rich algebraic properties, and this makes it extremely challenging to design a group signature scheme on top of them. It is even more challenging if we want a group signature scheme suitable for real-world applications, one that can support large groups and require few trust assumptions. Our scheme is based on MPC-in-the-head non-interactive zero-knowledge proofs, and we specifically design a novel hash-based group credential scheme, which is rooted in the SPHINCS+ signature scheme but with various modifications to make it MPC (multi-party computation) friendly. The security of the scheme has been proved under the fully dynamic group signature model. We provide an implementation of the scheme and demonstrate the feasibility of handling a group size as large as $2^{60}$. This is the first group signature scheme from symmetric primitives that supports such a large group size and meets all the security requirements.
Last updated:  2024-04-28
Encrypted KNN Implementation on Distributed Edge Device Network
B Pradeep Kumar Reddy, Ruchika Meel, and Ayantika Chatterjee
Machine learning (ML) as a service has emerged as a rapidly expanding field across various industries like healthcare, finance, marketing, retail and e-commerce, Industry 4.0, etc., where a huge amount of data is generated. To handle this amount of data, huge computational power is required, for which cloud computing used to be the first choice. However, there are several challenges in cloud computing, like limitations of bandwidth, network connectivity, higher latency, etc. To address these issues, edge computing is prominent nowadays, where the data from sensor nodes is collected and processed on low-cost edge devices. As simple sensor nodes are not capable of handling the complex computations of ML models, data from sensor nodes needs to be transferred to nearby edge devices for further processing. If this sensor data is related to some security-critical application, the privacy of such sensitive data needs to be preserved both during communication from sensor node to edge device and during computation in edge nodes. This increased need to perform edge-based ML on privacy-preserved data has led to a surge in interest in homomorphic encryption (HE) due to its ability to perform computations on the encrypted form of data. The highest form of HE, Fully Homomorphic Encryption (FHE), is capable of theoretically handling arbitrary encrypted algorithms but comes with huge computational overhead. Hence, the implementation of such a complex encrypted ML model on a single edge node is not very practical in terms of latency requirements. Our paper introduces a low-cost encrypted ML framework on a distributed edge cluster, where multiple low-cost edge devices (Raspberry Pi boards) are clustered to perform encrypted distributed K-Nearest Neighbours (KNN) algorithm computations. Our experimental results show that KNN prediction on the standard Wisconsin breast cancer dataset takes approximately 1.2 hours when implemented on a cluster of six Pi boards, maintaining end-to-end data confidentiality of critical medical data without any requirement for costly cloud-based computational resources.
Last updated:  2024-04-28
Ceno: Non-uniform, Segment and Parallel Zero-knowledge Virtual Machine
Tianyi Liu, Zhenfei Zhang, Yuncong Zhang, Wenqing Hu, and Ye Zhang
In this paper, we explore a novel Zero-knowledge Virtual Machine (zkVM) framework leveraging succinct, non-interactive zero-knowledge proofs for verifiable computation over any code. Our approach divides program execution proof into two stages. In the first stage, the process breaks down program execution into segments, identifying and grouping identical sections. These segments are then proved through data-parallel circuits that allow for varying amounts of duplication. In the subsequent stage, the verifier examines these segment proofs, reconstructing the program's control and data flow based on the segments' duplication number and the original program. The second stage can be further attested by a uniform recursive proof. We propose two specific designs of this concept, where segmentation and parallelization happen at two levels: opcode and basic block. Both designs try to minimize control flow that affects the circuit size and support dynamic copy numbers, ensuring that computational costs directly correlate with the actual code executed (i.e., you only pay as much as you use). In our second design, in particular, by proposing an innovative data-flow reconstruction technique in the second stage, we can drastically cut down on the stack operations even compared to the original program execution. Note that the two designs are complementary rather than mutually exclusive. Integrating both approaches in the same zkVM could unlock more significant potential for accommodating diverse program patterns. We present an asymmetric GKR scheme to implement our designs, pairing a non-uniform prover and a uniform verifier to generate proofs for dynamic-length data-parallel circuits. The use of a GKR prover also significantly reduces the size of the commitment: GKR allows us to commit only the circuit's input and output, whereas in Plonkish-based solutions, the prover needs to commit to all the witnesses.
Last updated:  2024-04-28
High-assurance field inversion for curve-based cryptography
Benjamin Salling Hvass, Diego F. Aranha, and Bas Spitters
The security of modern cryptography depends on multiple factors, from sound hardness assumptions to correct implementations that resist side-channel cryptanalysis. Curve-based cryptography is not different in this regard, and substantial progress in the last few decades has been achieved in both selecting parameters and devising secure implementation strategies. In this context, the security of implementations of field inversion is sometimes overlooked in the research literature, because (i) the approach based on Fermat's Little Theorem (FLT) suffices performance-wise for many parameters used in practice; (ii) it is typically invoked only at the very end of a cryptographic computation, with a small impact on performance; (iii) it is challenging to implement securely for general parameters without a significant performance penalty. However, field inversion can process sensitive information and must be protected with side-channel countermeasures like any other cryptographic operation, as illustrated by recent attacks. In this work, we focus on implementing field inversion for primes of cryptographic interest with security against timing attacks, irrespective of whether the FLT-based inversion can be efficiently implemented. We extend the Fiat-Crypto framework, which synthesizes provably correct-by-construction implementations, to implement the Bernstein-Yang inversion algorithm as a step towards this goal. This allows a correct implementation of prime field inversion to be synthesized for any prime. We benchmark the implementations across a range of primes for curve-based cryptography and they outperform traditional FLT-based approaches in most cases, with observed speedups of up to 2 for the largest parameters. Our work is already used in production in the MirageOS unikernel operating system, the $\mathtt{zig}$ programming language, and the ECCKiila framework.
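For context, the FLT-based baseline mentioned in (i) is a single modular exponentiation: by Fermat's Little Theorem, $a^{-1} = a^{p-2} \bmod p$ for nonzero $a$. A short Python sketch (the built-in pow is not constant-time, so this illustrates only the arithmetic, not a side-channel-safe implementation of the kind the paper synthesizes):

    # FLT-based field inversion: for prime p and nonzero a,
    # a^(p-2) = a^(-1) (mod p) by Fermat's Little Theorem.
    p = 2**255 - 19                  # the Curve25519 field prime
    a = 1234567890123456789
    a_inv = pow(a, p - 2, p)         # one modular exponentiation
    assert a * a_inv % p == 1
    print(hex(a_inv))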
Last updated:  2024-04-28
Weightwise (almost) perfectly balanced functions based on total orders
Pierrick Méaux
The unique design of the FLIP cipher necessitated a generalization of standard cryptographic criteria for Boolean functions used in stream ciphers, prompting a focus on properties specific to subsets of $\mathbb{F}_2^n$ rather than the entire set. This led to heightened interest in properties related to fixed Hamming weight sets and the corresponding partition of $\mathbb{F}_2^n$ into $n+1$ such sets. Consequently, the concept of Weightwise Almost Perfectly Balanced (WAPB) functions emerged, which are balanced on each of these sets. Various studies have since proposed WAPB constructions and examined their cryptographic parameters for use in stream cipher filters. In this article, we introduce a general approach to constructing WAPB functions using the concept of order, which simplifies implementation and enhances cryptographic strength. We present two new constructions: a recursive method employing multiple orders on binary strings, and another utilizing just two orders. We establish lower bounds for nonlinearity and weightwise nonlinearities within these classes. By instantiating specific orders, we demonstrate that some achieve minimal algebraic immunity, while others provide functions with guaranteed optimal algebraic immunity. Experimental results in 8 and 16 variables indicate that using orders based on field representation significantly outperforms other methods in terms of both global and weightwise algebraic immunity and nonlinearity. Additionally, we extend the recursive construction to create WAPB functions for any value of $n$, with experiments in 10, 12, and 14 variables confirming that these order-based functions exhibit robust cryptographic parameters. In particular, those based on field orders display optimal degrees and algebraic immunity, and strong weightwise nonlinearities and algebraic immunities.
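To make the weightwise notion concrete (an illustration, not one of the paper's constructions): a function is weightwise (almost) perfectly balanced if it is (nearly) balanced on every slice $\{x \in \mathbb{F}_2^n : \mathrm{wt}(x) = k\}$. The Python sketch below profiles a function slice by slice; the majority function, being constant on each slice, is the opposite extreme and shows why dedicated constructions are needed:

    from itertools import combinations

    # Profile a Boolean function on each fixed-Hamming-weight slice of F_2^n.
    # A weightwise (almost) perfectly balanced function takes the value 1 on
    # (almost) exactly half of each slice.
    def weightwise_profile(f, n):
        profile = {}
        for k in range(n + 1):
            ones, total = 0, 0
            for support in combinations(range(n), k):  # all x with wt(x) = k
                x = [0] * n
                for i in support:
                    x[i] = 1
                ones += f(x)
                total += 1
            profile[k] = (ones, total)
        return profile

    # Majority is constant on every slice: maximally unbalanced weightwise.
    maj = lambda x: int(sum(x) > len(x) // 2)
    for k, (ones, total) in weightwise_profile(maj, 6).items():
        print(f"weight {k}: {ones}/{total} inputs map to 1")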
Last updated:  2024-04-28
Zero-Knowledge Proof Vulnerability Analysis and Security Auditing
Xueyan Tang, Lingzhi Shi, Xun Wang, Kyle Charbonnet, Shixiang Tang, and Shixiao Sun
Zero-Knowledge Proof (ZKP) technology marks a revolutionary advancement in the field of cryptography, enabling the verification of certain information ownership without revealing any specific details. This technology, with its paradoxical yet powerful characteristics, provides a solid foundation for a wide range of applications, especially in enhancing the privacy and security of blockchain technology and other cryptographic systems. As ZKP technology increasingly becomes a part of the blockchain infrastructure, its importance for security and completeness becomes more pronounced. However, the complexity of ZKP implementation and the rapid iteration of the technology introduce various vulnerabilities, challenging the privacy and security it aims to offer. This study focuses on the completeness, soundness, and zero-knowledge properties of ZKP to meticulously classify existing vulnerabilities and deeply explores multiple categories of vulnerabilities, including completeness issues, soundness problems, information leakage, and non-standardized cryptographic implementations. Furthermore, we propose a set of defense strategies that include a rigorous security audit process and a robust distributed network security ecosystem. This audit strategy employs a divide-and-conquer approach, segmenting the project into different levels, from the application layer to the platform-nature infrastructure layer, using threat modelling, line-by-line audit, and internal cross-review, among other means, aimed at comprehensively identifying vulnerabilities in ZKP circuits, revealing design flaws in ZKP applications, and accurately identifying inaccuracies in the integration process of ZKP primitives.
Last updated:  2024-04-28
SOK: Research Motivations of Public-Key Cryptography
Fuchun Guo, Willy Susilo, Xiaofeng Chen, Peng Jiang, Jianchang Lai, and Zhen Zhao
The design, proposal, and analysis of cryptographic primitives and protocols (schemes) are one of the primary research fields in cryptology. To advance this research field, it is crucial to fully understand their research motivations. In this paper, we systematically introduce the research motivations for designing and proposing new schemes in public-key cryptography. We found that all research motivations aim to produce benefits for humanity including efficiency, security, and functionality, although some of them may not be obvious or may only hold conditionally. We categorize benefits in research motivations into 3 ways, 6 types, and 17 areas. As examples, we introduce 40 research strategies within these areas for exploring benefits, each presented as ``From less-adj (in the first scheme) To more-adj (in the second scheme)'', where ``adj'' here refers to an adjective representing a positive outcome. This SOK paper aims to provide valuable insights into the driving forces behind advancements in public-key cryptography, facilitating future research efforts in this field.
Last updated:  2024-04-27
Asymptotics for the standard block size in primal lattice attacks: second order, formally verified
Daniel J. Bernstein
Many proposals of lattice-based cryptosystems estimate security levels by following a recipe introduced in the New Hope proposal. This recipe, given a lattice dimension $n$, modulus $q$, and standard deviation $s$, outputs a "primal block size" $\beta$ and a security level growing linearly with $\beta$. This $\beta$ is minimal such that some $\kappa$ satisfies $((n+\kappa)s^2+1)^{1/2} < (d/\beta)^{1/2}\,\delta^{2\beta-d-1}\,q^{\kappa/d}$, where $d = n + \kappa + 1$ and $\delta = (\beta(\pi\beta)^{1/\beta}/(2\pi \exp 1))^{1/(2(\beta-1))}$. This paper identifies how $\beta$ grows with $n$, with enough precision to show the impact of adjusting $q$ and $s$ by constant factors. Specifically, this paper shows that if $\lg q$ grows as $Q_0 \lg n + Q_1 + o(1)$ and $\lg s$ grows as $S_0 \lg n + S_1 + o(1)$, where $0 \le S_0 \le 1/2 < Q_0 - S_0$, then $\beta/n$ grows as $z_0 + (z_1+o(1))/\lg n$, where $z_0 = 2Q_0/(Q_0-S_0+1/2)^2$ and $z_1$ has a formula given in the paper. The paper provides a traditional-format proof and a proof verified by the HOL Light proof assistant.
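Since the recipe is fully specified by the inequality above, it is straightforward to evaluate numerically. A minimal Python sketch (our own illustration; the parameters in the example call are arbitrary, not taken from the paper):

```python
import math

def delta(beta):
    # delta = (beta * (pi*beta)^(1/beta) / (2*pi*e))^(1/(2*(beta-1)))
    return (beta * (math.pi * beta) ** (1 / beta)
            / (2 * math.pi * math.e)) ** (1 / (2 * (beta - 1)))

def primal_block_size(n, q, s, kappa_max=2048):
    """Minimal beta such that some kappa satisfies
    sqrt((n+kappa)*s^2 + 1) < sqrt(d/beta) * delta(beta)^(2*beta-d-1) * q^(kappa/d)
    with d = n + kappa + 1, i.e., the recipe described in the abstract."""
    for beta in range(50, n + kappa_max):  # start above tiny beta, where delta is not meaningful
        for kappa in range(kappa_max):
            d = n + kappa + 1
            lhs = math.sqrt((n + kappa) * s * s + 1)
            rhs = math.sqrt(d / beta) * delta(beta) ** (2 * beta - d - 1) * q ** (kappa / d)
            if lhs < rhs:
                return beta, kappa
    return None

print(primal_block_size(n=512, q=3329, s=1.5))  # illustrative parameters only
```

The paper's contribution is an asymptotic closed form for this $\beta$; a script like this can only probe one parameter set at a time.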
Last updated:  2024-04-27
Efficient Quantum Algorithm for SUBSET-SUM Problem
Sanchita Ghosh, Anant Sharma, Sreetama Das, and Shibdas Roy
Problems in the complexity class $NP$ are not all known to be solvable in polynomial time by a classical computer, but given a solution they are verifiable in polynomial time. The complexity class $BQP$ includes all problems solvable in polynomial time by a quantum computer. Prime factorization is in the $NP$ class, and is also in the $BQP$ class, owing to Shor's algorithm. The hardest of all problems within the $NP$ class are called $NP$-complete. If a quantum algorithm can solve an $NP$-complete problem in polynomial time, it would imply that a quantum computer can solve all problems in $NP$ in polynomial time. Here, we present a polynomial-time quantum algorithm to solve an $NP$-complete variant of the $SUBSET-SUM$ problem, thereby rendering $NP\subseteq BQP$. We illustrate that given a set of integers, which may be positive or negative, a quantum computer can decide in polynomial time whether there exists any subset that sums to zero. There are many real-world applications of our result, such as finding patterns efficiently in stock-market data, or in recordings of the weather or brain activity. As an example, the decision problem of matching two images in image processing is $NP$-complete, and can be solved in polynomial time when amplitude amplification is not required.
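The $NP$ membership claimed in the opening sentence is easy to make concrete: a classical verifier, handed a candidate subset as a witness, checks it in linear time. A minimal sketch (our own illustration):

```python
def verify_zero_subset_sum(instance, witness):
    """Polynomial-time NP verifier: accept iff the witness is a non-empty
    sub-multiset of the instance whose elements sum to zero."""
    if not witness:
        return False
    pool = list(instance)
    for x in witness:
        if x not in pool:      # every witness element must come from the instance
            return False
        pool.remove(x)
    return sum(witness) == 0

print(verify_zero_subset_sum([3, -7, 2, 5], [-7, 2, 5]))  # True: -7 + 2 + 5 = 0
```

The decision problem the paper targets is whether such a witness exists at all, for which no classical polynomial-time algorithm is known.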
Last updated:  2024-04-27
Toward Independent Key Encryption based on Q-Problem
Abdelkader Laouid, Mostefa Kara, and Mohammad Hammoudeh
This paper defines a post-quantum encryption scheme based on discussion cryptography by introducing a new post-quantum hard problem called the Q-Problem. The idea behind this scheme is to hide the keys of each entity, and the encryption process is based on secret message holders using only random private keys.
Last updated:  2024-04-27
SECDSA: Mobile signing and authentication under classical ``sole control''
Eric Verheul
The 2014 European eIDAS regulation regulates strong electronic authentication and legally binding electronic signatures. Both require user "sole control". Historically, smartcards have been used based on direct interaction between the user and the relying party. Here, sole control is provided by giving users both physical possession and control of the cryptographic key used for signing/authentication through a PIN. Under some interpretations, such **classical** sole control is required by the 1999 electronic signature directive. The eIDAS regulation repeals the directive and explicitly relaxes its sole control requirements in a trade-off between security and usability. This allows user interaction to be outsourced to intermediary parties (authentication providers, signing services). It also allows mobile applications as user-friendly alternatives to smartcards. However, current mobile platforms are only equipped with limited cryptographic hardware that does not support secure knowledge factors (PINs) controlling keys. The eIDAS relaxation raises concerns about sole control; intermediary parties should not be able to act as a man-in-the-middle and impersonate users. In this paper we present a simple cryptographic design for signing and authentication on standard mobile platforms providing classical sole control. We argue that our design can meet the highest eIDAS requirements, effectively introducing a new signature category in a 2016 decision of the European Commission. We also sketch a SECDSA-based implementation of the European Digital Identity Wallet recently proposed by the European Commission as part of the eIDAS regulation update.
Last updated:  2024-04-27
Jumping for Bernstein-Yang Inversion
Li-Jie Jian, Ting-Yuan Wang, Bo-Yin Yang, and Ming-Shing Chen
This paper achieves fast polynomial inversion specifically tailored for the NTRU Prime KEM on the ARMv8 NEON instruction set, benchmarked on four processor architectures: Cortex-A53, Cortex-A72, Cortex-A76, and Apple M1. We utilize the jumping division steps of the constant-time GCD algorithm from Bernstein and Yang (TCHES'19) and optimize the underlying polynomial multiplications of various lengths to improve the efficiency of computing polynomial inverses in NTRU Prime.
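For intuition about the underlying algorithm: Bernstein and Yang replace the data-dependent quotients of the classical extended Euclidean algorithm with a fixed number of uniform "division steps" that are easy to execute in constant time, and "jumping" batches many such steps into matrix products. The sketch below shows the integer version of divstep from their paper (the NTRU Prime setting uses the analogous polynomial version; the iteration count here is a loose illustrative choice, not the paper's precise bound):

```python
def divstep(delta, f, g):
    # One division step: f stays odd throughout, and only the sign of
    # delta and the parity of g decide which branch is taken.
    if delta > 0 and g & 1:
        return 1 - delta, g, (g - f) // 2
    return 1 + delta, f, (g + (g & 1) * f) // 2

def gcd_via_divsteps(f, g, iterations):
    """For odd f: after enough division steps, g reaches 0 and |f| is gcd(f, g)."""
    delta = 1
    for _ in range(iterations):
        delta, f, g = divstep(delta, f, g)
    return abs(f)

print(gcd_via_divsteps(21, 14, iterations=30))  # 7
```

The constant-time character comes from the fact that both branches can be evaluated with arithmetic and masking, with no secret-dependent control flow.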
Last updated:  2024-04-26
Verifiable Encryption from MPC-in-the-Head
Akira Takahashi and Greg Zaverucha
Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.
Last updated:  2024-04-26
MiniCast: Minimizing the Communication Complexity of Reliable Broadcast
Thomas Locher and Victor Shoup
We give a new protocol for reliable broadcast with improved communication complexity for long messages. Namely, to reliably broadcast a message $m$ over an asynchronous network to a set of $n$ parties, of which fewer than $n/3$ may be corrupt, our protocol achieves a communication complexity of $1.5 |m| n + O( \kappa n^2 \log(n) )$, where $\kappa$ is the output length of a collision-resistant hash function. This result improves on the previously best known bound for long messages of $2 |m| n + O( \kappa n^2 \log(n) )$.
Last updated:  2024-04-26
Sing a song of Simplex
Victor Shoup
We flesh out some details of the recently proposed Simplex atomic broadcast protocol, and modify it so that leaders disperse blocks in a more communication-efficient fashion. The resulting protocol, called DispersedSimplex, maintains the simplicity and excellent -- indeed, optimal -- latency characteristics of the original Simplex protocol. We also present several variations, including a variant that supports "stable leaders", variants that incorporate very recently developed data dissemination techniques that allow us to disperse blocks even more efficiently, and variants that are "signature free". We also suggest a number of practical optimizations and provide concrete performance estimates that take into account not just network latency but also network bandwidth limitations and computational costs. Based on these estimates, we argue that despite its simplicity, DispersedSimplex should, in principle, perform in practice as well as or better than any other state-of-the-art atomic broadcast protocol, at least in terms of common-case throughput and latency.
Last updated:  2024-04-26
Key-Homomorphic and Aggregate Verifiable Random Functions
Giulio Malavolta
A verifiable random function (VRF) allows one to compute a random-looking image, while at the same time providing a unique proof that the function was evaluated correctly. VRFs are a cornerstone of modern cryptography and, among other applications, are at the heart of recently proposed proof-of-stake consensus protocols. In this work we initiate the formal study of aggregate VRFs, i.e., VRFs that allow for the aggregation of proofs/images into a small digest, whose size is independent of the number of input proofs/images, yet it still enables sound verification. We formalize this notion along with its security properties and we propose two constructions: The first scheme is conceptually simple, concretely efficient, and uses (asymmetric) bilinear groups of prime order. Pseudorandomness holds in the random oracle model and aggregate pseudorandomness is proven in the algebraic group model. The second scheme is in the standard model and is proven secure under the learning with errors (LWE) assumption. As a cryptographic building block of independent interest, we introduce the notion of key-homomorphic VRFs, where the verification keys and the proofs are endowed with a group structure. We conclude by discussing several applications of key-homomorphic and aggregate VRFs, such as distributed VRFs and aggregate proof-of-stake protocols.
Last updated:  2024-04-26
Efficient Post-Quantum Secure Deterministic Threshold Wallets from Isogenies
Poulami Das, Andreas Erwig, Michael Meyer, and Patrick Struck
Cryptocurrency networks crucially rely on digital signature schemes, which are used as an authentication mechanism for transactions. Unfortunately, most major cryptocurrencies today, including Bitcoin and Ethereum, employ signature schemes that are susceptible to quantum adversaries, i.e., an adversary with access to a quantum computer can forge signatures and thereby spend coins of honest users. In cryptocurrency networks, signature schemes are typically not executed in isolation, but within a so-called cryptographic wallet. In order to achieve security against quantum adversaries, the signature scheme and the cryptographic wallet must withstand quantum attacks. In this work, we advance the study on post-quantum secure signature and wallet schemes. That is, we provide the first formal model for deterministic threshold wallets and we show a generic post-quantum secure construction from any post-quantum secure threshold signature scheme with rerandomizable keys. We then instantiate our construction from the isogeny-based signature scheme CSI-FiSh and we show that our instantiation significantly improves over prior work.
Last updated:  2024-04-26
GraphOS: Towards Oblivious Graph Processing
Javad Ghareh Chamani, Ioannis Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou, and Rasool Jalili
We propose GraphOS, a system that allows a client that owns a graph database to outsource it to an untrusted server for storage and querying. It relies on doubly-oblivious primitives and trusted hardware to achieve a very strong privacy and efficiency notion which we call oblivious graph processing: the server learns nothing besides the number of graph vertices and edges, and for each query its type and response size. At a technical level, GraphOS stores the graph on a doubly-oblivious data structure, so that all vertex/edge accesses are indistinguishable. For this purpose, we propose Omix++, a novel doubly-oblivious map that outperforms the previous state of the art by up to 34×, and may be of independent interest. Moreover, to avoid any leakage from CPU instruction fetching during query evaluation, we propose algorithms for four fundamental graph queries (BFS/DFS traversal, minimum spanning tree, and single-source shortest paths) that have a fixed execution trace, i.e., the sequence of executed operations is independent of the input. By combining these techniques, we eliminate all information that a hardware adversary observing the memory access pattern within the protected enclave can infer. We benchmarked GraphOS against the best existing solution, based on an oblivious relational DBMS (translating graph queries to relational operators). GraphOS is not only significantly more performant (by up to two orders of magnitude for our tested graphs) but it eliminates leakage related to the graph topology that is practically inherent when a relational DBMS is used, unless all operations are “padded” to the worst case.
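To illustrate what a fixed execution trace looks like at the smallest scale, here is a toy branch-free selection primitive of the kind such algorithms are assembled from (our own minimal example, not code from GraphOS): the same operations execute regardless of the secret condition, so neither the instruction sequence nor the access pattern depends on the data.

```python
def oblivious_select(bit, x, y):
    # Returns x if bit == 1 and y otherwise, with no data-dependent branch.
    return bit * x + (1 - bit) * y

def oblivious_min(a, b):
    # Fixed-trace minimum: the comparison result is consumed arithmetically,
    # never used to choose a code path.
    lt = int(a < b)
    return oblivious_select(lt, a, b)

print(oblivious_min(3, 8))  # 3, via the same operation sequence as oblivious_min(8, 3)
```

GraphOS applies this discipline to entire graph algorithms, so that BFS/DFS, MST, and shortest-path queries execute the same operation sequence for any input of a given size.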
Last updated:  2024-04-26
Earn While You Reveal: Private Set Intersection that Rewards Participants
Aydin Abadi
In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that those parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so would cost them their privacy. In this work, for the first time, we propose a multi-party PSI, called “Anesidora”, that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric key primitives and its computation and communication complexities are linear with the number of parties and set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.
Last updated:  2024-04-26
A note on ``Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme''
Mustafa Khairallah
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its claimed security bound degrades only linearly with the maximum input length. We show in this note that this is not true by presenting a single-query distinguisher with advantage $O(l^2/2^n)$, where $l$ is the length of that query. The distinguisher does not break the beyond-birthday-bound claim but achieves a higher advantage than the claimed bound.
Last updated:  2024-04-26
Properties of Lattice Isomorphism as a Cryptographic Group Action
Benjamin Benčina, Alessandro Budroni, Jesús-Javier Chi-Domínguez, and Mukul Kulkarni
In recent years, the Lattice Isomorphism Problem (LIP) has served as an underlying assumption to construct quantum-resistant cryptographic primitives, e.g. the zero-knowledge proof and digital signature scheme by Ducas and van Woerden (Eurocrypt 2022), and the HAWK digital signature scheme (Asiacrypt 2022). While prior lines of work in group action cryptography, e.g. the works of Brassard and Yung (Crypto 1990), and more recently Alamati, De Feo, Montgomery and Patranabis (Asiacrypt 2020), focused on studying the discrete logarithm problem and isogeny-based problems in the group action framework, in recent years this framing has been used for studying the cryptographic properties of computational problems based on the difficulty of determining equivalence between algebraic objects. Examples include Permutation and Linear Code Equivalence Problems used in LESS (Africacrypt 2020), and the Tensor Isomorphism Problem (TCC 2019). This study delves into the quadratic form version of LIP, examining it through the lens of group actions. In this work we (1) give formal definitions and study the cryptographic properties of this group action (LIGA), (2) demonstrate that LIGA lacks both weak unpredictability and weak pseudorandomness, and (3) under certain assumptions, establish a theoretical trade-off between time complexity and the required number of samples for breaking weak unpredictability, for large dimensions. We also conduct experiments supporting our analysis. Additionally, we employ our findings to formulate new hard problems on quadratic forms.
Last updated:  2024-04-26
Rondo: Scalable and Reconfiguration-Friendly Randomness Beacon
Xuanji Meng, Xiao Sui, Zhaoxin Yang, Kang Rong, Wenbo Xu, Shenglong Chen, Ying Yan, and Sisi Duan
We present Rondo, a scalable and reconfiguration-friendly distributed randomness beacon (DRB) protocol in the partially synchronous model. Rondo is the first DRB protocol that is built from batched asynchronous verifiable secret sharing (bAVSS) while avoiding the high $O(n^3)$ message cost, where $n$ is the number of nodes. Our key contribution lies in the introduction of a new variant of bAVSS called batched asynchronous verifiable secret sharing with partial output (bAVSS-PO). bAVSS-PO is a weaker primitive than bAVSS but allows us to build a secure and more efficient DRB protocol. We propose a bAVSS-PO protocol Breeze. Breeze achieves the optimal $O(n)$ messages for the sharing stage and allows Rondo to offer better scalability than prior DRB protocols. Additionally, to support reconfiguration, we introduce Rondo-BFT, a dynamic and partially synchronous Byzantine fault-tolerant protocol inspired by Dyno (S&P 2022). Unlike Dyno, Rondo-BFT provides a communication pattern that generates randomness beacon output periodically, making it well-suited for DRB applications. We implement our protocols and evaluate the performance on Amazon EC2 using up to 91 instances. Our evaluation results show that Rondo achieves higher throughput than existing works while offering better scalability, where the performance does not degrade as significantly as $n$ grows.
Last updated:  2024-04-26
On Proving Pairings
Andrija Novakovic and Liam Eagen
In this paper we explore efficient ways to prove the correctness of elliptic curve pairing relations. Pairing-based cryptographic protocols such as the Groth16 and Plonk SNARKs and the BLS signature scheme are used extensively in public blockchains such as Ethereum due in large part to their small size. However, the relatively high cost of pairing computation remains a practical problem for many use cases such as verification ``in circuit" inside a SNARK. This naturally arises in recursive SNARK composition and SNARKs of BLS-based consensus protocols. To improve pairing verification, we first show that the final exponentiation step of pairing verification can be replaced with a more efficient ``residue check," which can be incorporated into the Miller loop. Then, we show how to reduce the cost of the Miller loop by pre-computing all the necessary lines, and how this is especially efficient when the second pairing argument is fixed in advance. This is the case for BLS signatures with a fixed public key, as well as for KZG-based SNARKs like Plonk and two of the three Groth16 pairings. Finally, we show how to improve on the protocol of [gar] by combining quotients, which allows us to more efficiently prove higher-degree relations. These techniques also carry over naturally to pairing verification, for example on-chain verification or as part of the BitVM(2) protocol for Bitcoin smart contracts. We instantiate algorithms and show results for the BN254 curve.
Last updated:  2024-04-26
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray and Caroline El Jazmi
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when needing to ensure privacy and the correctness of work carried out by a potentially untrusted party. Recent advancements in the discipline of applied zero-knowledge cryptography, and probabilistic proof systems in general, have led to a means of generating proofs of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we present the application of a non-interactive, plausibly-post-quantum-secure, probabilistically-checkable argument system utilized as an efficiently verifiable guarantee that a privacy mechanism was irrefutably applied to a machine-learning model during the training process. That is, we prove the correct training of a differentially-private (DP) linear regression over a dataset of 60,000 samples on a single machine in 55 minutes, verifying the entire computation in 47 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable-DP over a dataset of this size. Finally, we show how this task can be run in parallel, leading to further dramatic reductions in prover and verifier runtime complexity. We believe this result constitutes a key stepping-stone towards end-to-end private MLaaS.
Last updated:  2024-04-26
A note on ``a lightweight mutual and transitive authentication mechanism for IoT network''
Zhengjun Cao and Lihua Liu
We show the authentication mechanism [Ad Hoc Networks, 2023, 103003] fails to keep user anonymity, not as claimed.
Last updated:  2024-04-25
Towards Permissionless Consensus in the Standard Model via Fine-Grained Complexity
Marshall Ball, Juan Garay, Peter Hall, Aggelos Kiayias, and Giorgos Panagiotakos
We investigate the feasibility of permissionless consensus (aka Byzantine agreement) under standard assumptions. A number of protocols have been proposed to achieve permissionless consensus, most notably based on the Bitcoin protocol; however, to date no protocol is known that can be provably instantiated outside of the random oracle model. In this work, we take the first steps towards achieving permissionless consensus in the standard model. In particular, we demonstrate that worst-case conjectures in fine-grained complexity, in particular the orthogonal vectors conjecture (implied by the Strong Exponential Time Hypothesis), imply permissionless consensus in the random beacon model—a setting where a fresh random value is delivered to all parties at regular intervals. This gives a remarkable win-win result: either permissionless consensus exists relative to a random beacon, or there are non-trivial worst-case algorithmic speed-ups for a host of natural algorithmic problems (including SAT). Our protocol achieves resilience against adversaries that control an inverse-polynomial fraction of the honest computational power, i.e., adversarial power $A = T^{1-\epsilon}$ for some constant $\epsilon > 0$, where $T$ denotes the honest computational power. This relatively low threshold is a byproduct of the slack in the fine-grained complexity conjectures. One technical highlight is the construction of a Seeded Proof of Work: a Proof of Work where many (correlated) challenges can be derived from a single short public seed, and yet still no non-trivial amortization is possible.
Last updated:  2024-04-25
Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
Samuel Jaques
The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but depend on exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists. Under some assumptions about the constant terms of memory access, we estimate increases in bit security between $7$ and $23$ bits for different Kyber parameter sets and $8$ and $22$ bits for Dilithium.
Last updated:  2024-04-25
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, and Abishanka Saha
Liskov, Rivest and Wagner laid the theoretical foundations for tweakable block ciphers (TBC). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several of the follow-up works consider cascading of LRW-type TBCs to construct beyond-the-birthday bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 can already give BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- that has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. On a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just re-establish the BBB security, but also amplifies it up to $ 2^{3n/4} $ queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW-design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us to provide a modular proof covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1, and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
Last updated:  2024-04-25
Organizing Records for Retrieval in Multi-Dimensional Range Searchable Encryption
Mahdieh Heidaripour, Ladan Kian, Maryam Rezapour, Mark Holcomb, Benjamin Fuller, Gagan Agrawal, and Hoda Maleki
Storage of sensitive multi-dimensional arrays must be secure and efficient in storage and processing time. Searchable encryption allows one to trade between security and efficiency. Searchable encryption design focuses on building indexes, overlooking the crucial aspect of record retrieval. Gui et al. (PoPETS 2023) showed that understanding the security and efficiency of record retrieval is critical to understanding the overall system. A common technique for improving security is partitioning data tuples into parts. When a tuple is requested, the entire relevant part is retrieved, hiding the tuple of interest. This work assesses tuple partitioning strategies in the dense data setting, considering parts that are random, $1$-dimensional, and multi-dimensional. We consider synthetic datasets of $2$, $3$ and $4$ dimensions, with sizes extending up to $2$M tuples. We compare security and efficiency across a variety of record retrieval methods. Our findings are: 1. For most configurations, multi-dimensional partitioning yields better efficiency and less leakage. 2. $1$-dimensional partitioning outperforms multi-dimensional partitioning when the query is large in all dimensions except the first (indexed) dimension, which can be of any size. 3. The leakage of $1$-dimensional partitioning is reduced the most when using a bucketed ORAM (Demertzis et al., USENIX Security 2020).
Last updated:  2024-04-25
NTRU-based FHE for Larger Key and Message Space
Robin Jadoul, Axel Mertens, Jeongeun Park, and Hilder V. L. Pereira
The NTRU problem has proven a useful building block for efficient bootstrapping in Fully Homomorphic Encryption (FHE) schemes, and different such schemes have been proposed. FINAL (ASIACRYPT 2022) first constructed FHE using homomorphic multiplexer (CMux) gates for the blind rotation operation. Later, XZD+23 (CRYPTO 2023) gave an asymptotic optimization by changing the ciphertext format to enable ring automorphism evaluations. In this work, we examine an adaptation of FINAL that evaluates CMux gates of higher arity, and the resulting trade-offs in running time and bootstrapping key size. In this setting, we can compare the time and space efficiency of both bootstrapping protocols with larger key space against each other and the state of the art.
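For background, a CMux ("controlled multiplexer") gate selects one of two values under an encrypted control bit using only homomorphism-friendly arithmetic, and a higher-arity variant selects among $2^k$ values with $k$ control bits via a tree of binary gates. The sketch below shows only the underlying plaintext arithmetic (our own illustration; in FINAL-style schemes the control bit is an NTRU/NGS ciphertext and the multiplication is an external product):

```python
def cmux(c, d0, d1):
    # Select d1 if c == 1, else d0, using only ring operations: d0 + c*(d1 - d0).
    return d0 + c * (d1 - d0)

def cmux_tree(control_bits, table):
    """Higher-arity selection: return table[index] where index is encoded by
    control_bits (least-significant bit first), as k layers of binary CMuxes."""
    layer = list(table)
    for bit in control_bits:
        layer = [cmux(bit, layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

print(cmux_tree([0, 1], [10, 11, 12, 13]))  # index 0b10 = 2 selects 12
```

This picture also suggests the trade-off the paper examines: gate arity affects both the number of homomorphic operations performed and the size of the bootstrapping keys.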
Last updated:  2024-04-25
Further Investigations on Nonlinear Complexity of Periodic Binary Sequences
Qin Yuan, Chunlei Li, Xiangyong Zeng, Tor Helleseth, and Debiao He
Nonlinear complexity is an important measure for assessing the randomness of sequences. In this paper we investigate how circular shifts affect the nonlinear complexities of finite-length binary sequences and then reveal a more explicit relation between nonlinear complexities of finite-length binary sequences and their corresponding periodic sequences. Based on the relation, we propose two algorithms that can generate all periodic binary sequences with any prescribed nonlinear complexity.
Last updated:  2024-04-25
Kronos: A Secure and Generic Sharding Blockchain Consensus with Optimized Overhead
Yizhong Liu, Andi Liu, Yuan Lu, Zhuocheng Pan, Yinuo Li, Jianwei Liu, Song Bian, and Mauro Conti
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. As a newly introduced transaction type, cross-shard transactions pose a critical challenge to the security and efficiency of sharding blockchains. Currently, there is a lack of a generic sharding consensus pattern that achieves both security and low overhead. In this paper, we present Kronos, a secure sharding blockchain consensus achieving optimized overhead. In particular, we propose a new secure sharding consensus pattern, based on a buffer managed jointly by shard members. Valid transactions are transferred to the payee via the buffer, while invalid ones are rejected through happy or unhappy paths. Kronos is proved to achieve security with atomicity under malicious clients with optimal intra-shard overhead $k\mathcal{B}$ ($k$ for the number of involved shards and $\mathcal{B}$ for the cost of Byzantine fault tolerance (BFT)). Efficient rejection even requires no BFT execution in happy paths, and the cost in unhappy paths is still lower than a two-phase commit. Besides, we propose secure cross-shard certification methods based on batch certification and reliable cross-shard transfer. The former combines hybrid trees or vector commitments, while the latter integrates erasure coding. Handling $b$ transactions, Kronos is proved to achieve reliability with low cross-shard overhead $\mathcal{O}(n b \lambda)$ ($n$ for shard size and $\lambda$ for the security parameter). Notably, Kronos imposes no restrictions on the underlying BFT protocol and does not rely on time assumptions, offering optional constructions in various modules. Kronos could serve as a universal framework for enhancing the performance and scalability of existing BFT protocols, supporting generic models, including asynchronous networks, and increasing throughput by several orders of magnitude. We implement Kronos using two prominent BFT protocols: asynchronous Speeding Dumbo (NDSS'22) and partially synchronous HotStuff (PODC'19). Extensive experiments (over up to 1000 AWS EC2 nodes across 4 AWS regions) demonstrate that Kronos scales the consensus nodes to thousands, achieving a substantial throughput of 320 ktx/sec with 2.0 sec latency. Compared with past solutions, Kronos outperforms them, achieving up to a 12$\times$ improvement in throughput and a 50% reduction in latency when cross-shard transactions dominate the workload.
Last updated:  2024-04-25
Attribute-based Keyed (Fully) Homomorphic Encryption
Keita Emura, Shingo Sato, and Atsushi Takayasu
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption, where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. KHPKE satisfies CCA2 security against users who do not have a homomorphic evaluation key, while satisfying CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under standard Diffie-Hellman-type assumptions, and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices, although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, its security is based on a $q$-type assumption, and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since its design is conceptually much simpler than that of known KFHE schemes, we replace their building blocks with attribute-based ones and obtain the proposed ABKFHE scheme. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes, including the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Last updated:  2024-04-25
BackMon: IC Backside Tamper Detection using On-Chip Impedance Monitoring
Tahoura Mosavirik and Shahin Tajik
The expansion of flip-chip technologies and a lack of backside protection make the integrated circuit (IC) vulnerable to certain classes of physical attacks mounted from the IC’s backside. Laser-assisted probing, electromagnetic, and body-biasing injection attacks are examples of such attacks. Unfortunately, there are few countermeasures proposed in the literature, and none are available commercially. Those that do exist are not only expensive but are incompatible with current IC manufacturing processes. They also cannot be integrated into legacy systems, such as field-programmable gate arrays (FPGAs), which are integral parts of many industrial and defense systems. In this paper, we demonstrate how impedance monitoring of the printed circuit board (PCB) and IC package’s power distribution network (PDN) using on-chip circuit-based network analyzers can detect IC backside tampering. Our method is based on the fact that any tampering attempt to expose the backside silicon substrate, such as the removal of the fan and heat sinks, leads to changes in the equivalent impedance of the package’s PDN, and hence, scanning the package impedance will reveal whether the package integrity has been violated. To validate our claims, we deploy an on-FPGA network analyzer on an AMD Zynq UltraScale+ MPSoC manufactured with 16 nm technology, which is part of a multi-PCB system. We conduct a series of experiments at different temperatures, leveraging the difference of means as the statistical metric, to demonstrate the effectiveness of our method in detecting tamper events required to expose the IC backside silicon.
Last updated:  2024-04-24
Conditional disclosure of secrets with quantum resources
Vahid R. Asadi, Kohdai Kuroiwa, Debbie Leung, Alex May, Sabrina Pasterski, and Chris Waddell
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret $z$ to a referee if and only if a Boolean function $f$ has $f(x,y)=1$. Alice knows $x,z$, Bob knows $y$, and the referee knows $x,y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to $f$-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information-theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so, we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
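For readers new to the primitive, a standard textbook-style example (not a construction from this paper) realizes classical CDS for the equality predicate $f(x,y)=1 \iff x=y$ over a prime field: with shared randomness $(a,b)$ hidden from the referee, the difference of the two messages is $a(x-y)+z$, which equals $z$ exactly when $x=y$ and is uniformly random otherwise.

```python
import secrets

P = 2**61 - 1  # prime modulus (illustrative choice)

def shared_randomness():
    # Sampled jointly by Alice and Bob beforehand; never seen by the referee.
    return secrets.randbelow(P), secrets.randbelow(P)  # (a, b)

def alice_message(x, z, rand):
    a, b = rand
    return (a * x + b + z) % P

def bob_message(y, rand):
    a, b = rand
    return (a * y + b) % P

def referee(u, v):
    # u - v = a*(x - y) + z: equals z iff x == y, uniform otherwise.
    return (u - v) % P

rand = shared_randomness()
print(referee(alice_message(5, 42, rand), bob_message(5, rand)))  # x == y: outputs z = 42
print(referee(alice_message(5, 42, rand), bob_message(6, rand)))  # x != y: uniformly random
```

CDQS asks the same question when the parties may instead share entanglement and send quantum messages.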
Last updated:  2024-04-24
Encryption Based Covert Channel for Large Language Models
Yongge Wang
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries and blocks our attack within two days of our initial successful breach. This raises two concerns within the community: (1) The extensive logging of user inputs by large language models could pose privacy risks for users. (2) It may deter academic research on the security of such models due to the lack of experiment repeatability.
Last updated:  2024-04-24
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following: 1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts. 2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts. 3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
Last updated:  2024-04-24
Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms
Cédric Pilatte
In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
Last updated:  2024-04-24
Rate-1 Fully Local Somewhere Extractable Hashing from DDH
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, and Riccardo Zanotto
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest. Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$ and in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE). In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Last updated:  2024-04-24
MUSEN: Aggregatable Key-Evolving Verifiable Random Functions and Applications
Bernardo David, Rafael Dowsley, Anders Konring, and Mario Larangeira
A Verifiable Random Function (VRF) can be evaluated on an input by a prover who holds a secret key, generating a pseudorandom output and a proof of output validity that can be verified using the corresponding public key. VRFs are a central building block of committee election mechanisms that sample parties to execute tasks in cryptographic protocols, e.g. generating blocks in a Proof-of-Stake (PoS) blockchain or executing a round of MPC protocols. We propose the notion, and a matching construction, of an Aggregatable Key-Evolving VRF (A-KE-VRF) with the following extra properties: 1. Aggregation: combining proofs for several VRF evaluations of different inputs under different secret keys into a single constant size proof; 2. Key-Evolving: preventing adversaries who corrupt a party (learning their secret key) from ``forging'' proofs of past VRF evaluations. As an immediate application, we improve on the block size of PoS blockchains and on the efficiency of Proofs of Proof-of-Stake (PoPoS). Furthermore, the A-KE-VRF notion allows us to construct Encryption to the Future (EtF) and Authentication from the Past (AfP) schemes with a Key-Evolving property, which provides forward security. An EtF scheme allows for sending a message to a party who is randomly selected to execute a role in the future, while an AfP scheme allows for this party to authenticate their messages as coming from a past execution of this role. These primitives are essential for realizing the YOSO MPC Framework (CRYPTO'21).
Last updated:  2024-04-24
Distributed & Scalable Oblivious Sorting and Shuffling
Nicholas Ngai, Ioannis Demertzis, Javad Ghareh Chamani, and Dimitrios Papadopoulos
Existing oblivious systems offer robust security by concealing memory access patterns, but they encounter significant scalability and performance challenges. Recent efforts to enhance the practicality of these systems involve embedding oblivious computation, e.g., oblivious sorting and shuffling, within Trusted Execution Environments (TEEs). For instance, oblivious sort has been heavily utilized: in Oblix (S&P'18), when oblivious indexes are created and accessed; in Snoopy's high-throughput oblivious key-value store (SOSP'21) during initialization and when the input requests are deduplicated and prepared for delivery; in Opaque (NSDI'17) for all the proposed oblivious SQL operators; in the state-of-the-art non-foreign key oblivious join approach (PVLDB'20). Additionally, oblivious sort/shuffle find applications in Signal's commercial solution for contact discovery, anonymous Google's Key Transparency, Searchable Encryption, software monitoring, and differentially private federated learning with user privacy. In this work, we address the scalability bottleneck of oblivious sort and shuffle by re-designing these approaches to achieve high efficiency in distributed multi-enclave environments. First, we propose a multi-threaded bitonic sort optimized for the distributed setting, making it the most performant oblivious sort for a small number of enclaves (up to 4). For larger numbers of enclaves, we propose a novel oblivious bucket sort, which improves data locality and network consumption and outperforms our optimized distributed bitonic sort by up to 5-6x. To the best of our knowledge, these are the first distributed oblivious TEE-based sorting solutions. For reference, we are able to sort 2 GiB of data in 1 second and 128 GiB in 53.4 seconds in a multi-enclave test. A fundamental building block of our oblivious bucket sort is an oblivious shuffle that improves the prior state-of-the-art result (CCS'22) by up to 9.5x in the distributed multi-enclave setting---interestingly, it is better by 10% even in the single-enclave/multi-threaded setting.
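Bitonic sort is the natural starting point here because it is a sorting network: the sequence of compare-exchange operations is fixed by the input length alone, so the memory access pattern leaks nothing about the data. A minimal single-threaded Python sketch of the network (our own illustration; the paper's contribution is the multi-threaded, distributed redesign):

```python
def bitonic_sort(a):
    """In-place bitonic sort for len(a) a power of two. The (i, l) pairs
    compared depend only on len(a), never on the values: this fixed
    schedule is what makes the algorithm oblivious."""
    n = len(a)
    k = 2
    while k <= n:              # size of the bitonic subsequences being merged
        j = k // 2
        while j >= 1:          # compare-exchange distance within a merge step
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) if ascending else (a[i] < a[l]):
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 1, 8, 5, 2, 6, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The network performs $O(n \log^2 n)$ comparisons regardless of the input, which is the price paid for obliviousness relative to ordinary $O(n \log n)$ sorting.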
Last updated:  2024-04-24
Hadamard Product Argument from Lagrange-Based Univariate Polynomials
Jie Xie, Yuncong Hu, and Yu Yu
The Hadamard product is the point-wise product of two vectors. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses a linear number of cryptographic operations to generate a proof containing logarithmically many field elements. Verification takes a logarithmic number of cryptographic operations with a constant number of pairings in a bilinear group. The construction of the scheme is based on the Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg at Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and by carefully choosing random polynomials suitable for the folding technique, we construct a Hadamard-product protocol from the inner-product protocol, giving an alternative way to prove linear algebra relations in linear time; the protocol also has a better concrete proof size than previous works.
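To see why an inner-product protocol is the right building block, recall the standard random-challenge reduction (our own illustration of the general idea, not the paper's concrete protocol): the claim $c = a \circ b$ is equivalent, except with probability about $1/|\mathbb{F}|$ over a random challenge vector $r$, to the inner-product claim $\langle a \circ r, b \rangle = \langle c, r \rangle$.

```python
import secrets

P = 2**61 - 1  # prime field (illustrative choice)

def hadamard(a, b):
    return [x * y % P for x, y in zip(a, b)]

def check_hadamard_claim(a, b, c):
    """Probabilistic check that c == a o b, reduced to a single
    inner-product identity; a false claim passes with probability
    about 1/P (Schwartz-Zippel)."""
    r = [secrets.randbelow(P) for _ in a]
    lhs = sum(ai * ri % P * bi for ai, ri, bi in zip(a, r, b)) % P  # <a o r, b>
    rhs = sum(ci * ri for ci, ri in zip(c, r)) % P                  # <c, r>
    return lhs == rhs

a, b = [3, 1, 4], [1, 5, 9]
print(check_hadamard_claim(a, b, hadamard(a, b)))  # True
print(check_hadamard_claim(a, b, [3, 5, 35]))      # False (w.h.p.)
```

In the paper, this kind of reduction is carried out over committed polynomials in Lagrange form, so the verifier never sees the vectors themselves.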
Last updated:  2024-04-23
Guarding the First Order: The Rise of AES Maskings
Amund Askeland, Siemen Dhooghe, Svetla Nikova, Vincent Rijmen, and Zhenda Zhang
We provide three first-order hardware maskings of the AES, each allowing for a different trade-off between the number of shares and the number of register stages. All maskings use a generalization of the changing of the guards method enabling the re-use of randomness between masked S-boxes. As a result, the maskings do not require fresh randomness while still allowing for a minimal number of shares and providing provable security in the glitch-extended probing model. The low-area variant has five cycles of latency and a serialized area cost of $8.13~kGE$. The low-latency variant reduces the latency to three cycles while increasing the serialized area by $67.89\%$ compared to the low-area variant. The maskings of the AES encryption are implemented on FPGA and evaluated with Test Vector Leakage Assessment (TVLA).
Last updated:  2024-04-23
Interactive Threshold Mercurial Signatures and Applications
Masaya Nanri, Octavio Perez Kempner, Mehdi Tibouchi, and Masayuki Abe
Equivalence class signatures allow a controlled form of malleability based on equivalence classes defined over the message space. As a result, signatures can be publicly randomized and adapted to a new message representative in the same equivalence class. Notably, security requires that an adapted signature-message pair looks indistinguishable from a random signature-message pair in the space of valid signatures for the new message representative. Together with the decisional Diffie-Hellman assumption, this yields an unlinkability notion (class-hiding), making them a very attractive building block for privacy-preserving primitives. Mercurial signatures are an extension of equivalence class signatures that allow malleability for the key space. Unfortunately, the most efficient construction to date suffers from a severe limitation that restricts its application: only a weak form of public key class-hiding is supported. In other words, given knowledge of the original signing key and a randomization of the corresponding public key, it is possible to identify whether they are related. In this work, we put forth the notion of interactive threshold mercurial signatures and show how they help to overcome the above-mentioned limitation. Moreover, we present constructions in the two-party and multi-party settings, assuming at least one honest signer. We also discuss related applications, including blind signatures, multi-signatures, and threshold ring signatures. To showcase the practicality of our approach, we implement the proposed constructions, comparing them against related alternatives.
Last updated:  2024-04-23
Quantum Implementation and Analysis of SHA-2 and SHA-3
Kyungbae Jang, Sejin Lim, Yujin Oh, Hyunjun Kim, Anubhab Baksi, Sumanta Chakraborty, and Hwajeong Seo
Quantum computers have the potential to solve hard problems that are nearly impossible for classical computers, which has sparked a surge of research applying quantum technology and algorithms to cryptographic systems to evaluate their quantum resistance. In the process of selecting post-quantum standards, NIST categorizes security levels based on the complexity that quantum computers would require to crack AES encryption (levels 1, 3 and 5) and SHA-2 or SHA-3 (levels 2 and 4). In assessing the security strength of cryptographic algorithms against quantum threats, accurate predictions of quantum resources are crucial. Following the work of Jaques et al. in Eurocrypt 2020, NIST estimated security levels 1, 3, and 5, corresponding to the quantum circuit size for finding the key for AES-128, AES-192, and AES-256, respectively. This work has recently been followed up by Huang et al. (Asiacrypt'22) and Liu et al. (Asiacrypt'23), among others, though the most up-to-date results are available in the work by Jang et al. (ePrint'22). However, for levels 2 and 4, which relate to collision finding for the SHA-2 and SHA-3 hash functions, quantum attack complexities are not yet well studied. In this paper, we present novel techniques for optimizing the quantum circuit implementations of the SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of the target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at both the component level and the architecture level.
Last updated:  2024-04-23
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes in which secret randomness is hidden within a signature. Adaptor signatures are a recent cryptographic primitive and are becoming an important tool for blockchain applications such as cryptocurrencies, where they reduce on-chain costs, improve fungibility, and enable off-chain forms of payment in payment-channel networks, payment-channel hubs, and atomic swaps. However, currently used adaptor signature constructions are vulnerable to quantum adversaries due to Shor's algorithm. In this work, we introduce $\mathsf{SQIAsignHD}$, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, as the underlying hard relation. Furthermore, we show that our scheme is secure in the Quantum Random Oracle Model (QROM).
Last updated:  2024-04-22
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we build a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. We provide three instantiations of our framework: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
Last updated:  2024-04-22
Worst-Case to Average-Case Hardness of LWE: A Simple and Practical Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems, specifically the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem, to LWE. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any PPT algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
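For readers less familiar with the problem, here is a minimal sketch of the standard LWE definitions referenced above. The parameters are toy values chosen only for illustration; real instantiations use much larger dimensions and a discrete Gaussian error distribution:

```python
import random

q, n, m = 3329, 8, 16                   # toy modulus/dimension, far below real sizes
s = [random.randrange(q) for _ in range(n)]          # secret vector

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]      # uniform vector a
    e = random.choice([-2, -1, 0, 1, 2])             # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(m)]
# search-LWE: recover s from `samples`
# decision-LWE: distinguish `samples` from uniformly random (a, b) pairs
```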
Last updated:  2024-04-22
POKE: A Framework for Efficient PKEs, Split KEMs, and OPRFs from Higher-dimensional Isogenies
Andrea Basso
We introduce a new framework, POKE, to build cryptographic protocols from irrational isogenies using higher-dimensional representations. The framework enables two parties to manipulate higher-dimensional representations of isogenies to efficiently compute their pushforwards, and ultimately to obtain a shared secret. We provide three constructions based on POKE: the first is a PKE protocol, which is one of the most compact post-quantum PKEs and possibly the most efficient isogeny-based PKE to date. We then introduce a validation technique to ensure the correctness of uniSIDH public keys: by combining the validation method with a POKE-based construction, we obtain a split KEM, a primitive that generalizes NIKEs and can be used to instantiate a post-quantum version of Signal's X3DH protocol. The third construction builds upon the split KEM and its validation method to obtain a round-optimal verifiable OPRF. It is the first such construction that does not require more than $\lambda$ isogeny computations, and it is significantly more compact and more efficient than all other isogeny-based OPRFs.
Last updated:  2024-04-22
flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size
Ariel Gabizon and Dmitry Khovratovich
We present a protocol for checking that the values of a committed polynomial $\phi(X)$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $m$ are contained in a table $T\in \mathbb{F}^N$. After an $O(N \log^2 N)$ preprocessing step, the prover algorithm runs in *quasilinear* time $O(m\log^2 m)$. We improve upon the recent breakthrough results of Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size $N$, with prover times of $O(m^2+m\log N)$ and $O(m^2)$, respectively. We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.
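The relation being proven is easy to state and to check naively; the contribution lies in proving it succinctly with a quasilinear prover. A toy restatement over a small prime field, with the field, table, and polynomial chosen only for illustration:

```python
# Naive O(m*N) check of the lookup relation: phi(h) must lie in the table T
# for every h in the subgroup H. The protocol proves exactly this statement,
# but with an O(m log^2 m) prover after preprocessing.
p = 97
H = [pow(64, i, p) for i in range(8)]   # 64 has order 8 mod 97, so |H| = m = 8
assert pow(64, 8, p) == 1 and len(set(H)) == 8
T = set(range(80))                      # table of size N = 80
phi = lambda x: (x * x + 3) % p         # the committed polynomial, in the clear here
assert all(phi(h) in T for h in H)
```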
Last updated:  2024-04-22
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
We construct two new accumulation schemes. The first one is for checking that $\ell$ read and write operations were performed correctly from a memory of size $T$. The prover time is entirely independent of $T$ and only requires committing to $6\ell$ field elements, which is more than a $100\times$ improvement over prior work. The second one is for deterministic computations. It does not require committing to the intermediate wires of the computation, but only to the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible, and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS compiler (CRYPTO 2021), these protocols yield an efficient, incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
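For background, read/write consistency of this kind is classically reduced to a multiset equality over (address, value, timestamp) tuples, in the style of offline memory checking (Blum et al.); the sketch below shows that standard formulation, not the paper's specific accumulation protocol:

```python
from collections import Counter

def check_log(T, ops):
    """Replay a log of ('r'/'w', addr, val) ops on a memory of size T and
    verify consistency via multiset equality of (addr, value, time) tuples."""
    reads, writes = Counter(), Counter()
    mem, clock = {}, {}
    for a in range(T):                        # time-0 initialization writes
        writes[(a, 0, 0)] += 1
        mem[a], clock[a] = 0, 0
    for t, (op, a, v) in enumerate(ops, start=1):
        reads[(a, mem[a], clock[a])] += 1     # every op first reads the cell...
        if op == 'w':
            mem[a] = v
        writes[(a, mem[a], t)] += 1           # ...then rewrites it, timestamped
        clock[a] = t
    for a in range(T):                        # final read-out closes the log
        reads[(a, mem[a], clock[a])] += 1
    return reads == writes                    # multiset equality <=> consistency

assert check_log(2, [('w', 0, 7), ('r', 0, None), ('w', 1, 3)])
```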
Last updated:  2024-04-22
Complete group law for genus 2 Jacobians on Jacobian coordinates
Elif Ozbay Gurler and Huseyin Hisil
This manuscript provides complete, inversion-free, and explicit group law formulas in Jacobian coordinates for the genus 2 hyperelliptic curves of the form $y^2 = x^5 + a_3 x^3 + a_2 x^2 + a_1 x + a_0$ over a field $K$ with $char(K) \ne 2$. The formulas do not require the use of polynomial arithmetic operations such as resultant, mod, or gcd computations but only operations in $K$.
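As a quick illustration of the curve shape in question (a toy computation over a small prime field, unrelated to the paper's explicit group law formulas; the coefficients are arbitrary):

```python
p = 10007                                # toy prime with p % 4 == 3, so sqrt is easy
a3, a2, a1, a0 = 1, 2, 3, 4              # arbitrary illustrative coefficients
f = lambda x: (pow(x, 5, p) + a3 * x**3 + a2 * x**2 + a1 * x + a0) % p
for x in range(p):                       # find a point (x, y) with y^2 = f(x)
    y = pow(f(x), (p + 1) // 4, p)       # candidate square root mod p
    if y * y % p == f(x):
        break
assert (y * y - f(x)) % p == 0           # (x, y) lies on the genus-2 curve
```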
Last updated:  2024-04-22
Deep Selfish Proposing in Longest-Chain Proof-of-Stake Protocols
Roozbeh Sarenche, Svetla Nikova, and Bart Preneel
It has been shown that the selfish mining attack enables a miner to achieve an unfair relative revenue, posing a threat to the progress of longest-chain blockchains. Although selfish mining is a well-studied attack in the context of Proof-of-Work blockchains, its impact on longest-chain Proof-of-Stake (LC-PoS) protocols has yet to be addressed. This paper uses both theoretical and implementation-based approaches to analyze the selfish proposing attack in LC-PoS protocols. We discuss how factors such as the nothing-at-stake phenomenon and proposer predictability in PoS protocols can make the selfish proposing attack in LC-PoS protocols more destructive than selfish mining in PoW. In the first part of the paper, we use combinatorial tools to theoretically assess the selfish proposer's block ratio in simplistic LC-PoS environments under simplified network-connection assumptions. However, neither these theoretical tools nor classical MDP-based approaches can be applied to analyze the selfish proposing attack in real-world, more complicated LC-PoS environments. To overcome this issue, in the second part of the paper, we employ deep reinforcement learning techniques to find the near-optimal strategy of selfish proposing in more sophisticated protocols. The tool implemented in the paper can help us analyze the selfish proposing attack across diverse blockchain protocols with different reward mechanisms, predictability levels, and network conditions.
Last updated:  2024-04-22
How to Lose Some Weight - A Practical Template Syndrome Decoding Attack
Sebastian Bitzer, Jeroen Delvaux, Elena Kirshanova, Sebastian Maaßen, Alexander May, and Antonia Wachter-Zeh
We study the hardness of the Syndrome Decoding problem, the basis of most code-based cryptographic schemes such as Classic McEliece, in the presence of side-channel information. We use ChipWhisperer equipment to perform a template attack on Classic McEliece running on an ARM Cortex-M4, and accurately classify the Hamming weights of consecutive 32-bit blocks of the secret error vector. With these weights at hand, we optimize Information Set Decoding algorithms. Technically, we show how to speed up information set decoding via a dimension reduction, additional parity-check equations, and an improved information set search, all derived from the Hamming weight information. Consequently, using our template attack, we can practically recover an error vector of dimension n=2197 in a matter of seconds. Without side-channel information, such an instance has a complexity of around 88 bits. We also estimate how our template attack affects the security of the proposed McEliece parameter sets. Roughly speaking, even an error-prone leak of our Hamming weight information leads, for n=3488, to a security drop of 89 bits.
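To illustrate the kind of leakage involved (the block size of 32 bits and the dimension n=2197 come from the abstract, while the error weight below is an arbitrary toy value): knowing that a block has Hamming weight 0 rules out all of its positions at once, which directly shrinks the decoding instance.

```python
import random

n, B, w = 2197, 32, 30                    # n and B per the abstract; w is a toy weight
e = [0] * n                               # secret error vector of weight w
for i in random.sample(range(n), w):
    e[i] = 1

blocks = [e[i:i + B] for i in range(0, n, B)]
weights = [sum(b) for b in blocks]        # what the template attack classifies
ruled_out = sum(len(b) for b, wt in zip(blocks, weights) if wt == 0)
print(f"{ruled_out} of {n} positions eliminated by weight-0 blocks alone")
```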
Last updated:  2024-04-22
New SAT-based Model for Quantum Circuit Decision Problem: Searching for Low-Cost Quantum Implementation
Jingwen Chen, Qun Liu, Yanhong Fan, Lixuan Wu, Boyun Li, and Meiqin Wang
In recent years, quantum technology has developed rapidly. As security analyses for symmetric ciphers continue to emerge, many require an evaluation of the resources needed for the quantum circuit implementation of the encryption algorithm. In this regard, we propose the quantum circuit decision problem, which asks whether there exists a quantum circuit for a given permutation f using M ancilla qubits and no more than K quantum gates within circuit depth D. First, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we propose an improved SAT-based model incorporating three metrics of quantum circuits. The model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal using SAT solvers, and it proves the optimality of circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides the optimal quantum circuit, which requires only 8 gates with a depth of 31. Furthermore, our model generalizes to linear layers and improves the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.
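A decision procedure of this shape turns into an optimizer in the usual way: allowing more gates can only help, so the feasible gate counts form an upward-closed set, and the minimum is found by binary search on K. A minimal sketch, with a hypothetical stand-in for the SAT model:

```python
def minimize_gates(decide, f, M, D, K_max):
    """Smallest K such that decide(f, M, K, D) holds; `decide` plays the
    role of the SAT-based decision procedure and is monotone in K."""
    lo, hi = 0, K_max
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(f, M, mid, D):
            hi = mid                      # a circuit with mid gates exists
        else:
            lo = mid + 1                  # mid gates are not enough
    return lo

toy_decide = lambda f, M, K, D: K >= 7    # hypothetical stand-in for the solver
assert minimize_gates(toy_decide, None, M=0, D=31, K_max=64) == 7
```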
Last updated:  2024-04-22
WESP: An encryption method that, as the key size increases, requires an exponentially growing time to break
Sam Widlund
WESP is a new encryption algorithm based on equation systems, in which the equations are generated using the values of tables that act as the encryption key, and the equations have features making them suitable for cryptographic use. The algorithm is defined and its properties are discussed; besides describing the algorithm, we also present the reasons why it works the way it does. The key size in WESP can be altered and has no upper limit, and typically the key size is bigger than commonly used keys today. A calculation is presented of how many bytes can be securely encrypted before the algorithm might start to repeat its sequence of encrypting bytes, and it is shown that this period can be made arbitrarily large. It is also shown that within the period the resulting stream of encrypting bytes is statistically uniformly distributed. Furthermore, it is shown that if the encryption tables are not known, the equations in the equation system cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known, so the encryption cannot be broken in closed form. Then, we calculate, for all symbols used in the algorithm, the minimum number of trials needed for an attacker to be able to verify those trials. Since the algorithm constantly updates key values, verification becomes impossible if equations are not evaluated in order. The calculation shows that the minimum number of trials required, i.e., the time required to break the encryption, increases exponentially as the key size grows. Since there is no upper limit on the key size, there is no upper limit on the time required to break the encryption either.
Last updated:  2024-04-22
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-04-22
Efficient KZG-based Univariate Sum-check and Lookup Argument
Yuncong Zhang, Shi-Feng Sun, and Dawu Gu
We propose a novel KZG-based sum-check scheme, dubbed $\mathsf{Losum}$, with optimal efficiency. In particular, its proving cost is one multi-scalar multiplication of size $k$ (the number of non-zero entries in the vector), its verification cost is one pairing plus one group scalar multiplication, and the proof consists of only one group element. Using $\mathsf{Losum}$ as a component, we then construct a new lookup argument, named $\mathsf{Locq}$, which enjoys a smaller proof size and a lower verification cost than the state-of-the-art schemes $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++. Specifically, the proving cost of $\mathsf{Locq}$ is comparable to $\mathsf{cq}$, keeping the advantage that the proving cost is independent of the table size after preprocessing. For verification, $\mathsf{Locq}$ costs four pairings, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ require five, five and six pairings, respectively. For proof size, a $\mathsf{Locq}$ proof consists of four $\mathbb{G}_1$ elements and one $\mathbb{G}_2$ element; when instantiated with the BLS12-381 curve, the proof size of $\mathsf{Locq}$ is $2304$ bits, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ have $3840$, $3328$ and $2944$ bits, respectively. Moreover, $\mathsf{Locq}$ is zero-knowledge, like $\mathsf{cq}$+ and $\mathsf{cq}$++, whereas $\mathsf{cq}$ is not. $\mathsf{Locq}$ is more efficient even when compared to the non-zero-knowledge (and more efficient) versions of $\mathsf{cq}$+ and $\mathsf{cq}$++.
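For orientation, the stated proof size is consistent with the usual compressed BLS12-381 encodings, in which a $\mathbb{G}_1$ element occupies $384$ bits and a $\mathbb{G}_2$ element $768$ bits (a consistency check against standard sizes, not an additional claim from the paper):

$$4 \cdot |\mathbb{G}_1| + 1 \cdot |\mathbb{G}_2| = 4 \cdot 384 + 768 = 2304 \ \text{bits}.$$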
Last updated:  2024-04-22
Lattice-Based Succinct Mercurial Functional Commitment for Circuits: Definitions and Constructions
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, Zoe L. Jiang, and Min Xie
Vector commitments (VCs) have gained considerable attention because of their wide use in applications such as blockchains and accumulators. Mercurial vector commitments and mercurial functional commitments (MFCs), significant variants of VCs, are the central techniques used to construct more advanced cryptographic primitives, such as zero-knowledge sets and zero-knowledge functional elementary databases (ZK-FEDBs). However, current MFCs only support linear functions, which limits their applications: the resulting ZK-FEDBs only support linear function queries. Besides, to the best of our knowledge, the existing MFCs and ZK-FEDBs, including the one proposed by Zhang and Deng (ASIACRYPT '23) using RSA accumulators, are all in the group model and cannot resist attacks by quantum computers. To break these limitations, we formalize the first system model and security model of MFCs for circuits. Then, we exploit specific properties of a new falsifiable assumption, the $\mathsf{BASIS}$ assumption proposed by Wee and Wu (EUROCRYPT '23), to construct the first lattice-based succinct mercurial functional commitment for circuits. As an application, we show that our construction can be used to build the first lattice-based ZK-FEDB directly within the existing generic framework.
Last updated:  2024-04-22
On the {\sf P/poly} Validity of the Agr17 FE Scheme
Yupu Hu, Siyue Dong, Baocang Wang, Jun Liu, Yanbin Pan, and Xingting Dong
Functional encryption (FE) is a cutting-edge research topic in cryptography. The Agr17 FE scheme is a major scheme in the FE area, notable for being applicable to the class of general functions (that is, {\sf P/poly} functions) without IO. It took the BGG+14 ABE scheme as its base structure, upgraded it into a "partially hiding attribute" scheme, and combined it with a fully homomorphic encryption (FHE) scheme. However, the Agr17 FE scheme has a peculiar operation: for noise cancellation at the FHE decryption stage, it uses bulky "searching noise" rather than elegant "filtering". It searches the entire modulus interval, so the FHE modulus must be polynomially large. In this paper we discuss the {\sf P/poly} validity of the Agr17 FE scheme. First, we obtain the result that the Agr17 FE scheme is {\sf P/poly} invalid. In more detail, when the Agr17 FE scheme is applied to the class of randomly chosen {\sf P/poly} Boolean functions, the FHE modulus at the "searching" stage cannot be polynomially large. Our analysis is based on three restrictions of the BGG+14 ABE scheme: (1) the modulus of the BGG+14 ABE must be super-polynomially large if it is applied to the class of randomly chosen {\sf P/poly} functions; (2) the modulus of the BGG+14 ABE cannot be switched; (3) if the BGG+14 ABE is upgraded into a "partially hiding attribute" scheme, the permitted operations on the hidden part of the attribute can only be affine. Then, to check whether {\sf P/poly} validity can be obtained by modifying the scheme, we consider two modified versions. The first controls the FHE noise by repeatedly applying bootstrapping and replaces a modular inner product with an arithmetic inner product. The second replaces the search over the modulus interval with a search over a public noise interval, in the hope that such a noise interval is polynomially large, while tolerating a modulus that may be super-polynomially large. The first modified version may be {\sf P/poly} valid, but it is weaker. There is no evidence to support the {\sf P/poly} validity of the second modified version. Finally, we present an additional conclusion: there is no evidence to support the {\sf P/poly} validity of the GVW15 PE scheme.
Last updated:  2024-04-22
$\mathsf{Cougar}$: Cubic Root Verifier Inner Product Argument under Discrete Logarithm Assumption
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, and Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct zero-knowledge proof (ZKP) systems, a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cubic root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square root verifier IPAs under the DL assumption. Our main objective is to overcome the square root complexity limitation in the DL setting. To achieve this, we combine two distinct square root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Furthermore, we provide a soundness proof for $\mathsf{Cougar}$ under the DL assumption.
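To see what moving from a square root to a cubic root verifier buys in concrete terms (an illustrative calculation implied by the stated asymptotics, not a figure from the paper), consider a statement of length $N = 2^{30}$:

$$N^{1/2} = 2^{15} = 32768 \quad \text{vs.} \quad N^{1/3} = 2^{10} = 1024,$$

a roughly $32\times$ reduction in verifier work at this size.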
Last updated:  2024-04-22
Subverting Cryptographic Protocols from A Fine-Grained Perspective - A Case Study on 2-Party ECDSA
Jialiu Cheng, Yi Wang, Rongmao Chen, and Xinyi Huang
The revelations of Edward Snowden in 2013 rekindled concerns within the cryptographic community regarding the potential subversion of cryptographic systems. Bellare et al. (CRYPTO'14) introduced the notion of Algorithm Substitution Attacks (ASAs), which aim to covertly leak sensitive information by undermining individual cryptographic primitives. In this work, we delve deeply into the realm of ASAs against protocols built upon cryptographic primitives. In particular, we revisit the existing ASA model proposed by Berndt et al. (AsiaCCS'22), providing a more fine-grained perspective. We introduce a novel ASA model tailored for protocols, capable of capturing a wide spectrum of subversion attacks. Our model features a modular representation of subverted parties within protocols, along with fine-grained definitions of undetectability. To illustrate the practicality of our model, we applied it to Lindell's two-party ECDSA protocol (CRYPTO'17), unveiling a range of ASAs targeting the protocol's parties with the objective of extracting secret key shares. Our work offers a comprehensive ASA model suited to cryptographic protocols, providing a useful framework for understanding ASAs against protocols.
Last updated:  2024-04-22
Non-interactive Blind Signatures from Lattices
Foteini Baldimtsi, Jiaqi Cheng, Rishab Goyal, and Aayush Yadav
Blind signatures enable a receiver to obtain signatures on messages of its choice without revealing any message to the signer. Round-optimal blind signatures are designed as a two-round interactive protocol between a signer and receiver. Coincidentally, the choice of message is not important in many applications, and is routinely set as a random (unstructured) message by a receiver. With the goal of designing more efficient blind signatures for such applications, Hanzlik (Eurocrypt '23) introduced a new variant called non-interactive blind signatures (NIBS). These allow a signer to asynchronously generate partial signatures for any recipient such that only the intended recipient can extract a blinded signature for a random message. This bypasses the two-round barrier for traditional blind signatures, yet enables many known applications. Hanzlik provided new practical designs for NIBS from bilinear pairings. In this work, we investigate efficient NIBS with post-quantum security. We design the first practical NIBS, as well as non-interactive partially blind signatures called tagged NIBS, from lattice-based assumptions. We also propose a new generic paradigm for NIBS from circuit-private leveled homomorphic encryption achieving optimal-sized signatures (i.e., same as any non-blind signature). Finally, we propose new enhanced security properties for NIBS, that could be of practical and theoretical interest.
Last updated:  2024-04-22
Quantum copy-protection of compute-and-compare programs in the quantum random oracle model
Andrea Coladangelo, Christian Majenz, and Alexander Poremba
Copy-protection allows a software distributor to encode a program in such a way that it can be evaluated on any input, yet it cannot be "pirated" - a notion that is impossible to achieve in a classical setting. Aaronson (CCC 2009) initiated the formal study of quantum copy-protection schemes, and speculated that quantum cryptography could offer a solution to the problem thanks to the quantum no-cloning theorem. In this work, we introduce a quantum copy-protection scheme for a large class of evasive functions known as "compute-and-compare programs" - a more expressive generalization of point functions. A compute-and-compare program $\mathsf{CC}[f,y]$ is specified by a function $f$ and a string $y$ within its range: on input $x$, $\mathsf{CC}[f,y]$ outputs $1$, if $f(x) = y$, and $0$ otherwise. We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM), which makes it the first copy-protection scheme to enjoy any level of provable security in a standard cryptographic model. As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing", introduced very recently by Ananth and La Placa (eprint 2020), with a standard security bound in the QROM, i.e. guaranteeing negligible adversarial advantage. Finally, as a third contribution, we elucidate the relationship between unclonable encryption and copy-protection for multi-bit output point functions.
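Since the abstract fully specifies the compute-and-compare functionality, the (classical) program being copy-protected is easy to write down; the choices of $f$ and the inputs below are placeholders for illustration:

```python
import hashlib

def make_cc(f, y):
    """The compute-and-compare program CC[f, y]: outputs 1 iff f(x) == y."""
    return lambda x: 1 if f(x) == y else 0

f = lambda x: hashlib.sha256(x).digest()   # an example choice of f
y = f(b"secret")                           # a string y in the range of f
cc = make_cc(f, y)
assert cc(b"secret") == 1 and cc(b"guess") == 0
# Point functions are the special case where f is the identity.
```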
Last updated:  2024-04-22
On the Two-sided Permutation Inversion Problem
Gorjan Alagic, Chen Bai, Alexander Poremba, and Kaiyan Shi
In the permutation inversion problem, the task is to find the preimage of some challenge value, given oracle access to the permutation. This is a fundamental problem in query complexity, and appears in many contexts, particularly cryptography. In this work, we examine the setting in which the oracle allows for quantum queries to both the forward and the inverse direction of the permutation---except that the challenge value cannot be submitted to the latter. Within that setting, we consider two options for the inversion algorithm: whether it can get quantum advice about the permutation, and whether it must produce the entire preimage (search) or only the first bit (decision). We prove several theorems connecting the hardness of the resulting variations of the inversion problem, and establish a number of lower bounds. Our results indicate that, perhaps surprisingly, the inversion problem does not become significantly easier when the adversary is granted oracle access to the inverse, provided it cannot query the challenge itself.
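The oracle interface studied here is easy to phrase classically; the paper's setting additionally allows quantum (superposition) queries, which this sketch does not model:

```python
import random

class TwoSidedOracle:
    """Permutation oracle: forward queries are unrestricted, inverse
    queries refuse the challenge value (classical sketch of the model)."""
    def __init__(self, n, preimage):
        self.perm = list(range(n))
        random.shuffle(self.perm)
        self._inv = {v: i for i, v in enumerate(self.perm)}
        self.challenge = self.perm[preimage]
    def forward(self, x):
        return self.perm[x]
    def inverse(self, y):
        if y == self.challenge:
            raise ValueError("inverse query on the challenge is forbidden")
        return self._inv[y]

O = TwoSidedOracle(1 << 10, preimage=42)
# Search task: find 42 given O.challenge, O.forward, and the restricted O.inverse.
# Decision task: output only the first bit of the preimage.
```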
Last updated:  2024-04-22
Generic MitM Attack Frameworks on Sponge Constructions
Xiaoyang Dong, Boxin Zhao, Lingyue Qin, Qingliang Hou, Shun Zhang, and Xiaoyun Wang
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on (generalized) sponge constructions. As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those with a lower claimed security level for preimages than their output size. Such hash functions have been very widely standardized (e.g., Ascon-Hash, PHOTON, etc.), but are rarely studied against preimage attacks. Even the recent MitM attack framework on sponge constructions by Qin et al. (EUROCRYPT 2023) cannot attack these hash functions. As the second contribution, our MitM collision attack framework provides a new tool for collision cryptanalysis on sponge constructions, whereas previous collision attacks on sponge constructions are mainly based on differential attacks. Most of the results in this paper are the first third-party cryptanalysis results. Where cryptanalysis previously existed, our new results significantly improve it, for instance improving the previous 2-round collision attack on Ascon-Hash to the current 4 rounds, and the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to a 4-round classical preimage attack.
Last updated:  2024-04-22
LLRing: Logarithmic Linkable Ring Signatures with Transparent Setup
Xiangyu Hui and Sid Chi-Kin Chau
Linkable ring signatures are an important cryptographic primitive for anonymized applications, such as e-voting, e-cash and confidential transactions. To eliminate backdoors and the overhead of a trusted setup, transparent setup in the discrete logarithm or pairing settings has received considerable attention in practice. Recent advances have improved the proof sizes and verification efficiency of linkable ring signatures with a transparent setup to achieve logarithmic bounds. Omniring (CCS '19) and RingCT 3.0 (FC '20) proposed linkable ring signatures in the discrete logarithm setting with proof sizes logarithmic in the ring size, whereas DualDory (ESORICS '22) achieves logarithmic verifiability in the pairing setting. We make three novel contributions in this paper to improve the efficiency and soundness of logarithmic linkable ring signatures: (1) we identify an attack on DualDory that breaks its linkability; (2) to eliminate such attacks, we present a new linkable ring signature scheme in the pairing setting with logarithmic verifiability; (3) we also improve the verification efficiency of linkable ring signatures in the discrete logarithm setting, by a technique that reduces the number of group exponentiations for verification in Omniring by 50%. Furthermore, our technique is applicable to general inner-product relation proofs, which might be of independent interest. Finally, we empirically evaluate our schemes and compare them with existing linkable ring signatures in concrete implementations.
Last updated:  2024-04-21
FHERMA: Building the Open-Source FHE Components Library for Practical Use
Gurgen Arakelov, Nikita Kaskov, Daria Pianykh, and Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful Privacy-Enhancing Technology (PET) that enables computations on encrypted data without access to the secret key. While FHE holds immense potential for enhancing data privacy and security, building practical applications with it remains difficult. A significant barrier is the absence of easy-to-use, standardized components that developers can use as foundational building blocks. Addressing this gap requires constructing a comprehensive library of FHE components, a complex endeavor due to multiple inherent problems. We propose a competition-based approach for building such a library. More concretely, we present FHERMA, a new challenge platform that introduces black-box and white-box challenges, and fully automated evaluation of submitted FHE solutions. The initial challenges on the FHERMA platform are motivated by practical problems in machine learning and blockchain. The winning solutions are integrated into an open-source library of FHE components, which is available to all members of the PETs community under the Apache 2.0 license.
Last updated:  2024-04-21
Ponyta: Foundations of Side-Contract-Resilient Fair Exchange
Hao Chung, Elisaweta Masserova, Elaine Shi, and Sri AravindaKrishnan Thyagarajan
This paper is subsumed by Rapidash (https://eprint.iacr.org/2022/1063). Please use Rapidash for the citation. Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players, and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV). We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing instantiability of Ponyta with a wide range of cryptocurrencies.
Last updated:  2024-04-21
Maximizing Miner Revenue in Transaction Fee Mechanism Design
Ke Wu, Elaine Shi, and Hao Chung
Transaction fee mechanism design is a new decentralized mechanism design problem where users bid for space on the blockchain. Several recent works showed that transaction fee mechanism design fundamentally departs from classical mechanism design. They then systematically explored the mathematical landscape of this new decentralized mechanism design problem in two settings: in the plain setting where no cryptography is employed, and in a cryptography-assisted setting where the rules of the mechanism are enforced by a multi-party computation protocol. Unfortunately, in both settings, prior works showed that if we want the mechanism to incentivize honest behavior for both users and miners (possibly colluding with users), then the miner revenue has to be zero. Although adopting a relaxed, approximate notion of incentive compatibility gets around this zero miner-revenue limitation, the scaling of the miner revenue is nonetheless poor. In this paper, we show that if we make a mildly stronger reasonable-world assumption than prior works, we can circumvent the known limitations on miner revenue and design auctions that generate optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new reasonable-world assumption, and demonstrate how such assumptions can alter the feasibility and infeasibility landscape.
Last updated:  2024-04-21
A Security Analysis of Restricted Syndrome Decoding Problems
Ward Beullens, Pierre Briaud, and Morten Øygarden
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures. This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.