Papers updated in last 183 days (1443 results)

Last updated:  2024-05-01
A Low-Depth Homomorphic Circuit for Logistic Regression Model Training
Eric Crockett
Machine learning is an important tool for analyzing large data sets, but its use on sensitive data may be limited by regulation. One solution to this problem is to perform machine learning tasks on encrypted data using homomorphic encryption, which enables arbitrary computation on encrypted data. We take a fresh look at one specific task: training a logistic regression model on encrypted data. The most important factor in the efficiency of a solution is the multiplicative depth of the homomorphic circuit. Two prior works have given circuits with multiplicative depth of five per training iteration. We optimize one of these solutions, by Han et al. [Han+18], and give a circuit with half the multiplicative depth per iteration on average, which allows us to perform twice as many training iterations in the same amount of time. In the process of improving the state-of-the-art circuit for this task, we identify general techniques to improve homomorphic circuit design for two broad classes of algorithms: iterative algorithms, and algorithms based on linear algebra over real numbers. First, we formalize the encoding scheme from [Han+18] for encoding linear algebra objects as plaintexts in the CKKS homomorphic encryption scheme. We also show how to use this encoding to homomorphically compute many basic linear algebra operations, including novel operations not discussed in prior work. This “toolkit” is generic, and can be used in any application based on linear algebra. Second, we demonstrate how generic compiler techniques for loop optimization can be used to reduce the multiplicative depth of iterative algorithms.
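Since multiplicative depth is the efficiency measure this abstract revolves around, a minimal sketch of how depth is counted in an arithmetic circuit may help (illustrative only; the paper's CKKS circuits, encodings, and loop optimizations are far richer):

```python
# Toy depth accounting for a homomorphic arithmetic circuit (not the paper's
# construction). Additions are cheap; each ciphertext-ciphertext
# multiplication consumes one level, and the circuit's depth is the longest
# multiplication chain -- the quantity the paper halves per iteration.
class Ct:
    def __init__(self, depth=0):
        self.depth = depth

def add(a, b):
    return Ct(max(a.depth, b.depth))      # additions do not add depth

def mul(a, b):
    return Ct(max(a.depth, b.depth) + 1)  # one level per multiplication

x, w = Ct(), Ct()
s = mul(w, x)   # e.g., an inner-product step of one training iteration
p = mul(s, s)   # e.g., squaring inside a polynomial activation
print(p.depth)  # 2: two sequential multiplications => depth 2
```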
Last updated:  2024-05-01
Truncator: Time-space Tradeoff of Cryptographic Primitives
Foteini Baldimtsi, Konstantinos Chalkias, Panagiotis Chatzigiannis, and Mahimna Kelkar
We present mining-based techniques to reduce the size of various cryptographic outputs without loss of security. Our approach can be generalized for multiple primitives, such as cryptographic key generation, signing, hashing and encryption schemes, by introducing a brute-forcing step to provers/senders aiming at compressing submitted cryptographic material. Interestingly, mining can result in record-size cryptographic outputs, and we show that 5%-12% shorter hash digests and signatures are practically feasible even with commodity hardware. As a result, our techniques make compressing addresses and transaction signatures possible in order to pay lower fees in blockchain applications while decreasing the demand for blockchain space, a major bottleneck for initial syncing, communication and storage. Also, the effects of "compressing once - then reuse" at mass scale can be economically profitable in the long run for both the Web2 and Web3 ecosystems. Our paradigm relies on a brute-force search operation in order to craft the primitive's output such that it fits into fewer bytes, while the "missing" fixed bytes are implied by the system parameters and omitted from the actual communication. While such compression requires computational effort depending on the level of compression, this cost is only paid at the source (i.e. in blockchains, senders are rewarded by lowered transaction fees), and the benefits of the compression are enjoyed by the whole ecosystem. As a starting point, we show how our paradigm applies to some basic primitives commonly used in blockchain applications but also traditional Web2 transactions (such as shorter digital certificates), and show how security is preserved using a bit security framework. Surprisingly, we also identified cases where wise mining strategies require proportionally less effort than naive brute-forcing, shorter hash-based signatures being one of the best examples. We also evaluate our approach for several primitives based on different levels of compression. Our evaluation concretely demonstrates the benefits both in terms of financial cost and storage if adopted by the community, and we showcase how our technique can achieve up to 83.21% reduction in smart contract gas fees at a cost of less than 4 seconds of computation on a single core.
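The brute-force step described here is easy to picture with a toy digest miner (a sketch under illustrative assumptions: SHA-256, byte-level truncation, and a counter as the mined randomness; the paper covers many primitives plus a bit-security analysis that this omits):

```python
import hashlib

def mine_short_digest(msg: bytes, k: int = 1):
    """Grind a counter until the digest starts with k zero bytes."""
    ctr = 0
    while True:
        d = hashlib.sha256(msg + ctr.to_bytes(8, "big")).digest()
        if d[:k] == b"\x00" * k:
            # The k fixed bytes are implied by convention and omitted;
            # only the counter and the truncated digest are kept.
            return ctr, d[k:]
        ctr += 1

ctr, short = mine_short_digest(b"example transaction", k=1)
print(ctr, len(short), short.hex())  # a 31-byte digest instead of 32
```

With one omitted byte the expected work is 256 hash calls; each additional omitted byte multiplies the work by 256, which is the time-space tradeoff of the title.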
Last updated:  2024-05-01
Exponential Quantum Speedup for the Traveling Salesman Problem
Anant Sharma, Nupur Deshpande, Sanchita Ghosh, Sreetama Das, and Shibdas Roy
The traveling salesman problem is the problem of finding the shortest route in a network of cities that a salesman needs to travel to cover all the cities, without visiting the same city more than once. This problem is known to be $NP$-hard with a brute-force complexity of $O(N^N)$ or $O(N^{2N})$ for $N$ cities. This problem is equivalent to finding the shortest Hamiltonian cycle in a given graph, if at least one Hamiltonian cycle exists in it. Quantum algorithms for this problem typically provide only a quadratic speedup, using Grover's search, thereby having a complexity of $O(N^{N/2})$ or $O(N^N)$. We present a bounded-error quantum polynomial-time (BQP) algorithm for solving the problem, providing an exponential speedup. The overall complexity of our algorithm is $O(N^3\log(N)\kappa/\epsilon + 1/\epsilon^3)$, where the errors $\epsilon$ are $O(1/{\rm poly}(N))$, and $\kappa$ is the not-too-large condition number of the matrix encoding all Hamiltonian cycles.
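For contrast with the claimed exponential speedup, here is the classical brute-force baseline in its simplest form (a sketch, not the quantum algorithm; it enumerates all $(N-1)!$ tours):

```python
from itertools import permutations

def shortest_tour(dist):
    """Exhaustively check every tour starting (and ending) at city 0."""
    n = len(dist)
    best, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):          # (n-1)! candidates
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best:
            best, best_tour = length, tour
    return best, best_tour

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(shortest_tour(dist))  # (21, (0, 2, 3, 1)) for this toy instance
```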
Last updated:  2024-05-01
Vision Mark-32: ZK-Friendly Hash Function Over Binary Tower Fields
Tomer Ashur, Mohammad Mahzoun, Jim Posen, and Danilo Šijačić
Zero-knowledge proof systems are widely used in different applications on the Internet. Among zero-knowledge proof systems, SNARKs are a popular choice because of their fast verification time and small proof size. The efficiency of zero-knowledge systems is crucial for usability, resulting in the development of so-called arithmetization-oriented ciphers. In this work, we introduce Vision Mark-32, a modified instance of Vision defined over binary tower fields, with an optimized number of rounds and an efficient MDS matrix. We implement a fully-pipelined Vision Mark-32 permutation on Alveo U55C FPGA accelerator card and argue an order of magnitude better hardware efficiency compared to the popular Poseidon hash. Our fully-pipelined Vision Mark-32 implementation runs at 250 MHz and uses 398 kLUT and 104 kFF. Lastly, we delineate how to implement each step efficiently in hardware.
Last updated:  2024-05-01
A Scalable Coercion-resistant Voting Scheme for Blockchain Decision-making
Zeyuan Yin, Bingsheng Zhang, Andrii Nastenko, Roman Oliynykov, and Kui Ren
Typically, a decentralized collaborative blockchain decision-making mechanism is realized by remote voting. To date, a number of blockchain voting schemes have been proposed; however, to the best of our knowledge, none of these schemes achieve coercion-resistance. In particular, for most blockchain voting schemes, the randomness used by the voting client can be viewed as a witness/proof of the actual vote, which enables improper behaviors such as coercion and vote-buying. Unfortunately, the existing coercion-resistant voting schemes cannot be directly adopted in the blockchain context. In this work, we design the first scalable coercion-resistant blockchain voting scheme that supports private differential voting power and 1-layer liquid democracy as introduced by Zhang et al. (NDSS '19). Its overall complexity is $O(n)$, where $n$ is the number of voters. Moreover, the ballot size is reduced from Zhang et al.'s $\Theta(m)$ to $\Theta(1)$, where $m$ is the number of experts and/or candidates. We formally prove that our scheme has ballot privacy, verifiability, and coercion-resistance. We implement a prototype of the scheme and the evaluation result shows that our scheme's tally procedure is more than 6x faster than VoteAgain (USENIX '20) in an election with over 10,000 voters and over 50\% extra ballot rate.
Last updated:  2024-05-01
New self-orthogonal codes from weakly regular plateaued functions and their application in LCD codes
Melike Çakmak, Ahmet Sınak, and Oğuz Yayla
A linear code with few weights is a significant code family in coding theory. A linear code is considered self-orthogonal if it is contained within its dual code. Self-orthogonal codes have applications in linear complementary dual codes, quantum codes, etc. The construction of linear codes is an interesting research problem. There are various methods to construct linear codes, and one approach involves utilizing cryptographic functions defined over finite fields. The construction of linear codes (in particular, self-orthogonal codes) from functions has been studied in the literature. In this paper, we generalize the construction method given by Heng et al. in [Des. Codes Cryptogr. 91(12), 2023] to weakly regular plateaued functions. We first construct several families of p-ary linear codes with few weights from weakly regular plateaued unbalanced (resp. balanced) functions over finite fields of odd characteristic. We observe that the constructed codes are self-orthogonal when p = 3. Then, we use the constructed ternary self-orthogonal codes to build new families of ternary LCD codes. Consequently, we obtain (almost) optimal ternary self-orthogonal codes and LCD codes.
Last updated:  2024-05-01
Eagle: Efficient Privacy Preserving Smart Contracts
Carsten Baum, James Hsin-yu Chiang, Bernardo David, and Tore Kasper Frederiksen
The proliferation of Decentralised Finance (DeFi) and Decentralised Autonomous Organisations (DAO), which in their current form are exposed to front-running of token transactions and proposal voting, demonstrates the need to shield user inputs and internal state from the parties executing smart contracts. In this work we present “Eagle”, an efficient UC-secure protocol which realises a notion of privacy preserving smart contracts where both the amounts of tokens and the auxiliary data given as input to a contract are kept private from all parties but the one providing the input. Prior proposals realizing privacy preserving smart contracts on public, permissionless blockchains generally offer limited contract functionality or require a trusted third party to manage private inputs and state. We achieve our results through a combination of secure multi-party computation (MPC) and zero-knowledge proofs on Pedersen commitments. Although other approaches leverage MPC in this setting, these incur impractical computational overheads by requiring the computation of cryptographic primitives within MPC. Our solution achieves security without the need for any cryptographic primitives to be computed inside the MPC instance and requires only a constant number of exponentiations per client input.
Last updated:  2024-05-01
REED: Chiplet-Based Accelerator for Fully Homomorphic Encryption
Aikata Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, and Sujoy Sinha Roy
Fully Homomorphic Encryption (FHE) enables privacy-preserving computation and has many applications. However, its practical implementation faces massive computation and memory overheads. To address this bottleneck, several Application-Specific Integrated Circuit (ASIC) FHE accelerators have been proposed. All these prior works put every component needed for FHE onto one chip (monolithic), hence offering high performance. However, they suffer from practical problems associated with large-scale chip design, such as inflexibility, low yield, and high manufacturing cost. In this paper, we present the first-of-its-kind multi-chiplet-based FHE accelerator 'REED' for overcoming the limitations of prior monolithic designs. To utilize the advantages of multi-chiplet structures while matching the performance of larger monolithic systems, we propose and implement several novel strategies in the context of FHE. These include a scalable chiplet design approach, an effective framework for workload distribution, a custom inter-chiplet communication strategy, and advanced pipelined Number Theoretic Transform and automorphism design to enhance performance. Experimental results demonstrate that the REED 2.5D microprocessor consumes 96.7 mm$^2$ of chip area and 49.4 W average power in 7 nm technology. It achieves a remarkable speedup of up to 2,991$\times$ compared to a CPU (24-core 2$\times$Intel X5690) and offers 1.9$\times$ better performance, along with a 50% reduction in development costs, when compared to state-of-the-art ASIC FHE accelerators. Furthermore, our work presents the first instance of benchmarking encrypted deep neural network (DNN) training. Overall, the REED architecture design offers a highly effective solution for accelerating FHE, thereby significantly advancing the practicality and deployability of FHE in real-world applications.
Last updated:  2024-05-01
SASTA: Ambushing Hybrid Homomorphic Encryption Schemes with a Single Fault
Aikata Aikata, Ahaan Dabholkar, Dhiman Saha, and Sujoy Sinha Roy
The rising tide of data breaches targeting large data storage centres and servers has raised serious privacy and security concerns. Homomorphic Encryption schemes offer an effective defence against such attacks, but their adoption has been hindered by substantial computational and communication overheads, particularly on the client's side. The Hybrid Homomorphic Encryption (HHE) protocol was developed to mitigate these issues. However, the susceptibility of HHE to strong attacks, specifically physical attacks, has been largely unexplored. While physical attacks like Differential Fault Analysis (DFA) have proved very effective in the field of symmetric cryptography, prior works have largely relied on strong assumptions like nonce reuse, limiting their feasibility in a real-world setting. In this work, we introduce a novel attack, SASTA, which presents, to the best of our knowledge, the first generalized analysis of HHE under DFA. Our analysis uncovers a significant limitation of the HHE protocol where a single fault leads to complete key recovery not only for the standard scheme AES but also for the new HHE-tailored Symmetric Encryption (SE) schemes -- RASTA, PASTA, MASTA, and HERA. We further extend SASTA to effectively target Authenticated Transciphering protocols. Unlike prior works, the key advantage of SASTA is that it does not require nonce reuse. We demonstrate a proof-of-concept of our attack on an off-the-shelf ATXmega128D4-AU microcontroller running HHE firmware and mount end-to-end key recovery attacks. Finally, we discuss conventional countermeasures to defend against SASTA. Our work highlights that despite HHE's advantages of improving performance and reducing communication overhead, further analysis of its security guarantees is required.
Last updated:  2024-05-01
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Enrico Bottazzi
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 0 or a 1). Greco uses zero-knowledge proofs to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subject to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
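As a toy illustration of the kind of statement such proofs arithmetize, a binary vote is exactly a root of $m(m-1)$ over the plaintext space (a hypothetical simplification; Greco's actual Halo2 circuit proves well-formedness of the full RLWE encryption, not just this constraint):

```python
def vote_is_binary(m: int, q: int) -> bool:
    # In the arithmetized statement this is the constraint m * (m - 1) = 0,
    # whose only solutions modulo a prime q are m = 0 and m = 1.
    return m * (m - 1) % q == 0

q = 65537                                            # toy plaintext modulus
print(vote_is_binary(0, q), vote_is_binary(1, q))    # True True
print(vote_is_binary(145127835 % q, q))              # False: the bad vote above
```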
Last updated:  2024-05-01
A New Approach to Efficient and Secure Fixed-point Computation
Tore Kasper Frederiksen, Jonas Lindstrøm, Mikkel Wienberg Madsen, and Anne Dorte Spangsberg
Secure Multi-Party Computation (MPC) constructions typically allow computation over a finite field or ring. While useful for many applications, certain real-world applications require the use of decimal numbers. While it is possible to emulate floating-point operations in MPC, fixed-point computation has gained more traction in the practical space due to its simplicity and efficient realizations. Even so, current protocols for fixed-point MPC still require computing a secure truncation after each multiplication gate. In this paper, we show a new paradigm for realizing fixed-point MPC. Starting from an existing MPC protocol over arbitrary, large, finite fields or rings, we show how to realize MPC over a residue number system (RNS). This allows us to leverage certain mathematical structures to construct a secure algorithm for efficient approximate truncation by a static and public value. We then show how this can be used to realize highly efficient secure fixed-point computation. In contrast to previous approaches, our protocol does not require any multiplications of secret values in the underlying MPC scheme to realize truncation but instead relies on preprocessed pairs of correlated random values, which we show can be constructed very efficiently when accepting a small amount of leakage and robustness in the strong, covert model. We proceed to implement our protocol, with SPDZ as the underlying MPC protocol, and achieve significantly faster fixed-point multiplication.
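A plaintext sketch shows why a truncation must follow every fixed-point multiplication, the exact step the paper replaces with RNS-based approximate truncation (the scale parameter is illustrative):

```python
# With f fractional bits, a real x is encoded as round(x * 2^f).
# Multiplying two encodings doubles the scale to 2^(2f), so the product
# must be truncated by 2^f -- in MPC this is the expensive secure step.
F = 16                       # fractional bits (illustrative)

def enc(x: float) -> int:
    return round(x * (1 << F))

def dec(v: int) -> float:
    return v / (1 << F)

a, b = enc(3.25), enc(-1.5)
prod = a * b                 # scale is now 2^(2F)
trunc = prod >> F            # truncation back to scale 2^F
print(dec(trunc))            # -4.875
```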
Last updated:  2024-05-01
Compact and Secure Zero-Knowledge Proofs for Quantum-Resistant Cryptography from Modular Lattice Innovations
Samuel Lavery
This paper presents a comprehensive security analysis of the Adh zero-knowledge proof system, a novel lattice-based, quantum-resistant proof of possession system. The Adh system offers compact key and proof sizes, making it suitable for real-world digital signature and public key agreement protocols. We explore its security by reducing it to the hardness of the Module-ISIS problem and introduce three new variants: Module-ISIS+, Module-ISIS*, and Module-ISIS**. These constructions enhance security through variations on chaining mechanisms. We also provide a reduction to the module modulus subset sum problem under conservative assumptions. Empirical evidence and statistical testing support the zero-knowledge, completeness, and soundness properties of the Adh proof system. Comparative analysis demonstrates the Adh system's advantages in terms of key and proof sizes over existing post-quantum schemes like Kyber and Dilithium. This paper represents an early preprint and is a work in progress. The core security arguments and experimental results are present, and formal proofs and additional analysis are forthcoming. We invite feedback and collaboration from the research community to further strengthen the security foundations of the Adh system and explore its potential applications in quantum-resistant cryptography.
Last updated:  2024-04-30
Intmax2: A ZK-rollup with Minimal Onchain Data and Computation Costs Featuring Decentralized Aggregators
Erik Rybakken, Leona Hioki, and Mario Yaksetig
We present a blockchain scaling solution called Intmax2, which is a Zero-Knowledge rollup (ZK-rollup) protocol with stateless and permissionless block production, while minimizing the usage of data and computation on the underlying blockchain. Our architecture distinctly diverges from existing ZK-rollups since essentially all of the data and computational costs are shifted to the client-side as opposed to imposing heavy requirements on the block producers or the underlying Layer 1 blockchain. The only job for block producers is to periodically generate a commitment to a set of transactions, distribute inclusion proofs to each sender, and collect and aggregate signatures by the senders. This design allows permissionless and stateless block production, and is highly scalable with the number of users.
Last updated:  2024-04-30
Conan: Distributed Proofs of Compliance for Anonymous Data Collection
Mingxun Zhou, Elaine Shi, and Giulia Fanti
We consider how to design an anonymous data collection protocol that enforces compliance rules. Imagine that each client contributes multiple data items (e.g., votes, location crumbs, or secret shares of its input) to an anonymous network, which mixes all clients' data items so that the receiver cannot determine which data items belong to the same user. Now, each user must prove to an auditor that the set it contributed satisfies a compliance predicate, without identifying which items it contributed. For example, the auditor may want to ensure that no one voted for the same candidate twice, or that a user's location crumbs are not too far apart in a given time interval. Our main contribution is a novel anonymous, compliant data collection protocol that realizes the above goal. In comparison with naive approaches such as generic multi-party computation or earlier constructions of collaborative zero-knowledge proofs, the most compelling advantage of our approach is that each client's communication and computation overhead do not grow with respect to the number of clients $n$. In this sense, we save a factor of at least $n$ over prior work, which allows our technique to scale to applications with a large number of clients, such as anonymous voting and privacy-preserving federated learning. We first describe our protocol using generic cryptographic primitives that can be realized from standard assumptions. We then suggest a concrete instantiation called Conan which we implement and evaluate. In this concrete instantiation, we are willing to employ SNARKs and the random oracle model for better practical efficiency. Notably, in this practical instantiation, each client's additional communication overhead (not counting the overhead of sending its data items over the anonymous network) is only $\widetilde{O}(1)$. We evaluated our technique in various application settings, including secure voting and secure aggregation protocols for histogram, summation, and vector summation. Our evaluation results show that each client's additional communication overhead is only 2.2KB or 2.6KB, depending on which SNARK implementation we use. Further, each client's computation is only 0.2s - 0.5s for almost all cases, except for the vector summation application where the data items are high-dimensional and each client's computation is 8.5-10.6s.
Last updated:  2024-04-30
Batch PIR and Labeled PSI with Oblivious Ciphertext Compression
Alexander Bienstock, Sarvar Patel, Joon Young Seo, and Kevin Yeo
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts with a subset of encryptions of zeroes whose positions are only known to the client. The goal is for the server to effectively compress the ciphertexts obliviously, while preserving the non-zero plaintexts and without learning the plaintext values. For oblivious decompression, the client, instead, succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, but the zeroes may be replaced with arbitrary values. We present solutions to both problems that achieve lossless compression only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms involves embedding ciphertexts as random linear systems that are efficiently solvable. Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings. Additionally, we study labeled private set intersection (PSI) in the unbalanced setting where one party's set is significantly smaller than the other party's set and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol with 65-88% reduction in communication with comparable computation compared to prior works.
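The "random linear system" idea can be seen in a plaintext toy, with integers standing in for additively homomorphic ciphertexts and all parameters illustrative: the server folds $n$ slots into $m < n$ random combinations, and the client, knowing the $t$ non-zero positions, solves an $m \times t$ system:

```python
import random

P = 2**31 - 1                         # toy prime modulus

def compress(vals, m, seed):
    """Server side: m random linear combinations of the n slots (only
    additions/scalar multiplications, so additive HE suffices)."""
    rng = random.Random(seed)
    A = [[rng.randrange(P) for _ in vals] for _ in range(m)]
    return A, [sum(a * v for a, v in zip(row, vals)) % P for row in A]

def recover(A, y, nz):
    """Client side: Gaussian elimination mod P restricted to the columns
    at the non-zero positions nz, which only the client knows."""
    m, t = len(A), len(nz)
    M = [[A[i][j] for j in nz] + [y[i]] for i in range(m)]
    for c in range(t):
        piv = next(r for r in range(c, m) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], P - 2, P)
        M[c] = [x * inv % P for x in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * xc) % P for x, xc in zip(M[r], M[c])]
    return [M[i][t] for i in range(t)]

vals = [0, 0, 7, 0, 42, 0, 0, 5]      # 8 slots, zeros at known positions
A, y = compress(vals, m=4, seed=1)    # compressed to 4 values
print(recover(A, y, nz=[2, 4, 7]))    # [7, 42, 5]
```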
Last updated:  2024-04-30
Verifiable FHE via Lattice-based SNARKs
Shahla Atapoor, Karim Baghery, Hilder V. L. Pereira, and Jannik Spiessens
Fully Homomorphic Encryption (FHE) is a prevalent cryptographic primitive that allows for computation on encrypted data. In various cryptographic protocols, this enables outsourcing computation to a third party while retaining the privacy of the inputs to the computation. However, these schemes make an honest-but-curious assumption about the adversary. Previous work has tried to remove this assumption by combining FHE with Verifiable Computation (VC). Recent work has increased the flexibility of this approach by introducing integrity checks for homomorphic computations over rings. However, efficient FHE for circuits of large multiplicative depth also requires non-ring computations called maintenance operations, i.e. modswitching and keyswitching, which cannot be efficiently verified by existing constructions. We propose the first efficiently verifiable FHE scheme that allows for arbitrary depth homomorphic circuits by utilizing the double-CRT representation in which FHE schemes are typically computed, and using lattice-based SNARKs to prove components of this computation separately, including the maintenance operations. Therefore, our construction can theoretically handle bootstrapping operations. We also present the first implementation of a verifiable computation on encrypted data for a computation that contains multiple ciphertext-ciphertext multiplications. Concretely, we verify the homomorphic computation of an approximate neural network containing three layers and more than 100 ciphertexts in less than 1 second while maintaining reasonable prover costs.
Last updated:  2024-04-30
Complete Knowledge: Preventing Encumbrance of Cryptographic Secrets
Mahimna Kelkar, Kushal Babel, Philip Daian, James Austgen, Vitalik Buterin, and Ari Juels
Most cryptographic protocols model a player’s knowledge of secrets in a simple way. Informally, the player knows a secret in the sense that she can directly furnish it as a (private) input to a protocol, e.g., to digitally sign a message. The growing availability of Trusted Execution Environments (TEEs) and secure multiparty computation, however, undermines this model of knowledge. Such tools can encumber a secret sk and permit a chosen player to access sk conditionally, without actually knowing sk. By permitting selective access to sk by an adversary, encumbrance of secrets can enable vote-selling in cryptographic voting schemes, illegal sale of credentials for online services, and erosion of deniability in anonymous messaging systems. Unfortunately, existing proof-of-knowledge protocols fail to demonstrate that a secret is unencumbered. We therefore introduce and formalize a new notion called complete knowledge (CK). A proof (or argument) of CK shows that a prover does not just know a secret, but also has fully unencumbered knowledge, i.e., unrestricted ability to use the secret. We introduce two practical CK schemes that use special-purpose hardware, specifically TEEs and off-the-shelf mining ASICs. We prove the security of these schemes and explore their practical deployment with a complete, end-to-end prototype with smart-contract verification that supports both. We show how CK can address encumbrance attacks identified in previous work. Finally, we introduce two new applications enabled by CK that involve proving ownership of blockchain assets.
Last updated:  2024-04-30
Solving the Tensor Isomorphism Problem for special orbits with low rank points: Cryptanalysis and repair of an Asiacrypt 2023 commitment scheme
Valerie Gilchrist, Laurane Marco, Christophe Petit, and Gang Tang
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development. One of these, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures. In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations of these algorithms are that these special tensors contain some low-rank points, and their stabilizer groups are not trivial. With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
Last updated:  2024-04-30
FiveEyes: Cryptographic Biometric Authentication from the Iris
Luke Demarest, Sohaib Ahmad, Sixia Chen, Benjamin Fuller, and Alexander Russell
Despite decades of effort, a stubborn chasm exists between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees. This is particularly frustrating given the long line of research that has developed theoretical tools—known as fuzzy extractors—that enable secure, privacy preserving biometric authentication with public enrollment data (Dodis et al., SIAM Journal of Computing 2008). Unfortunately, the best known constructions either: 1. Assume that bits of biometrics are i.i.d. (or that all correlation is captured in pairs of features (Hine et al., TIFS 2023)), which is not true for the biometrics themselves or for features extracted using modern learning techniques, or 2. Only provide substantial true accept rates with an estimated security of $32$ bits for the iris (Simhadri et al., ISC 2019) and $45$ bits for the face (Zhang, Cui, and Yu, ePrint 2021/1559). This work introduces FiveEyes, an iris key derivation system powered by technical advances in both 1) feature extraction from the iris and 2) the fuzzy extractor used to secure authentication keys. FiveEyes’ feature extractor’s loss focuses on quality for key derivation. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). FiveEyes’ fuzzy extractor uses statistics of the produced features to sample non-uniformly, which significantly improves the security vs. true accept rate (TAR) tradeoff. Irises used to evaluate TAR and security are class disjoint from those used for training and collecting statistics. We state assumptions sufficient for security. We present various parameter regimes to highlight different TARs: 1. $65$ bits of security (equivalent to $87$ bits with a password) at $12$% TAR, and 2. $50$ bits of security (equivalent to $72$ bits with a password) at $45$% TAR. Applying known TAR (Davida et al., IEEE S&P 1998) amplification techniques additively boosts TAR by $30$% for the above settings.
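The sample-then-lock idea that the fuzzy extractor builds on can be sketched in a few lines (everything concrete below -- the hash, subset size, locker count, uniform sampling -- is an illustrative assumption; FiveEyes crucially samples non-uniformly using feature statistics, which this toy does not do):

```python
import hashlib, os, random

def lock(bits, key, lockers=200, subset=12, seed=0):
    """Enrollment: lock `key` under many random subsets of biometric bits."""
    rng = random.Random(seed)
    out = []
    for _ in range(lockers):
        pos = rng.sample(range(len(bits)), subset)
        sub = bytes(bits[i] for i in pos)
        pad = hashlib.sha256(b"pad" + sub).digest()[:len(key)]
        tag = hashlib.sha256(b"tag" + sub).digest()[:8]  # unlock check
        out.append((pos, bytes(a ^ b for a, b in zip(key, pad)), tag))
    return out

def unlock(bits, lockers):
    """A later reading unlocks if it agrees with enrollment on one subset."""
    for pos, ct, tag in lockers:
        sub = bytes(bits[i] for i in pos)
        if hashlib.sha256(b"tag" + sub).digest()[:8] == tag:
            pad = hashlib.sha256(b"pad" + sub).digest()[:len(ct)]
            return bytes(a ^ b for a, b in zip(ct, pad))
    return None

rng = random.Random(42)
enroll = [rng.randrange(2) for _ in range(512)]
noisy = list(enroll)
for i in (3, 97, 300):                # a few bit errors in the new reading
    noisy[i] ^= 1
key = os.urandom(16)
print(unlock(noisy, lock(enroll, key)) == key)  # True with high probability
```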
Last updated:  2024-04-30
Succinct Arguments over Towers of Binary Fields
Benjamin E. Diamond and Jim Posen
We introduce an efficient SNARK for towers of binary fields. Adapting Brakedown (CRYPTO '23), we construct a multilinear polynomial commitment scheme suitable for polynomials over tiny fields, including that with 2 elements. Our commitment scheme, unlike those of previous works, treats small-field polynomials with zero embedding overhead. We further introduce binary-field adaptations of HyperPlonk's (EUROCRYPT '23) product and permutation checks, as well as of Lasso's lookup. Our scheme's binary PLONKish variant captures standard hash functions—like Keccak-256 and Grøstl—extremely efficiently. With recourse to thorough performance benchmarks, we argue that our scheme can efficiently generate precisely those Keccak-256-proofs which critically underlie modern efforts to scale Ethereum.
Last updated:  2024-04-30
Swiper: a new paradigm for efficient weighted distributed protocols
Andrei Tonkikh and Luciano Freitas
The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights to be at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst-case and, sometimes, even smaller than the number of participants. While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include erasure-coded distributed storage and broadcast protocols, verifiable secret sharing, and asynchronous consensus. Although there are ad-hoc weighted solutions to some of these problems, the protocols yielded by our transformations enjoy all the benefits of nominal solutions, including simplicity, efficiency, and a wider range of possible cryptographic assumptions. Since the release of the first version of this paper online, a version of the weight reduction approach has been integrated into a major layer-1 blockchain system for implementing a randomness beacon.
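A naive version of weight reduction is easy to state, which makes clear what the paper actually contributes: choosing the rounding so that adversarial weight fractions are provably preserved, rather than hoping (the scheme below is an illustrative strawman, not the paper's solution):

```python
def reduce_weights(weights, target_sum):
    """Scale real weights down to small integers summing to ~target_sum.
    Naive rounding like this can break threshold properties at the margin,
    which is exactly what the paper's optimization problems prevent."""
    total = sum(weights)
    return [max(1, round(w * target_sum / total)) for w in weights]

stakes = [1_200_000.5, 350_000.0, 75.3, 900_000.0, 12.0]
ints = reduce_weights(stakes, target_sum=16)
print(ints, sum(ints))   # e.g. [8, 2, 1, 6, 1], summing to 18
```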
Last updated:  2024-04-30
Post-Quantum Searchable Encryption Supporting User-Authorization for Outsourced Data Management
Shiyuan Xu, Yibo Cao, Xue Chen, Yu Guo, Yuer Yang, Fangda Guo, and Siu-Ming Yiu
With the widespread development of database systems, data security has become crucial when it comes to sharing among users and servers. A straightforward approach involves using searchable encryption to ensure the confidentiality of shared data. However, in certain scenarios, varying user tiers are granted disparate data searching privileges, and administrators need to restrict the searchability of ciphertexts to select users exclusively. To address this issue, public key encryption with authorized keyword search (PEAKS) was proposed, wherein solely authorized users possess the ability to conduct targeted keyword searches. Nonetheless, it is vulnerable to quantum computing attacks. As a result, research focusing on authorizing users to search for keywords while achieving quantum security is far-reaching. In this paper, we propose a lattice-based variant of PEAKS (L-PEAKS) that enables keyword dataset authorization for outsourced data management. Unlike existing schemes, our design incorporates identity-based encryption (IBE) to overcome the bottleneck of public key management. Besides, we utilize several lattice sampling algorithms to defend against attacks from quantum adversaries. Specifically, each authorized user must obtain a search privilege from an authority. The authority distributes an authorized token to the user within a specific time period, and the user generates a trapdoor for any authorized keywords. Our scheme is proven to achieve IND-sID-CKA and T-EUF security in a quantum setting. We also conduct comprehensive evaluations on a commodity machine to assess completeness and provide theoretical complexity comparisons with existing state-of-the-art schemes.
Last updated:  2024-04-30
Ipotane: Achieving the Best of All Worlds in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Hai Jin, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23) guarantees good latency in all situations, but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Ipotane, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations—in both throughput and latency. Ipotane also runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Ipotane runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Ipotane’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Ipotane’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across two paths. We conduct extensive experiments to demonstrate that Ipotane achieves high throughput and low latency in all situations.
Last updated:  2024-04-29
Efficient Linkable Ring Signatures: New Framework and Post-Quantum Instantiations
Yuxi Xue, Xingye Lu, Man Ho Au, and Chengru Zhang
In this paper, we introduce a new framework for constructing linkable ring signatures (LRS). Our framework is based purely on signatures of knowledge (SoK) which allows one to issue signatures on behalf of any NP-statement using the corresponding witness. Our framework enjoys the following advantages: (1) the security of the resulting LRS depends only on the security of the underlying SoK; (2) the resulting LRS naturally supports online/offline signing (resp. verification), where the output of the offline signing (resp. verification) can be re-used across signatures of the same ring. For a ring size $n$, our framework requires an SoK of the NP statement with size $\log n$. To instantiate our framework, we adapt the well-known post-quantum secure non-interactive argument of knowledge (NIAoK), ethSTARK, into an SoK. This SoK is inherently post-quantum secure and has a signature size poly-logarithmic in the size of the NP statement. Thus, our resulting LRS has a signature size of $O(\text{polylog}(\log n))$. By comparison, existing post-quantum ring signatures, regardless of linkability considerations, have signature sizes of $O(\log n)$ at best. Furthermore, leveraging online/offline verification, part of the verification of signatures on the same ring can be shared, resulting in a state-of-the-art amortized verification cost of $O(\text{polylog}(\log n))$. Our LRS also performs favourably against existing schemes in practical scenarios. Concretely, our scheme has the smallest signature size among all post-quantum linkable ring signatures with non-slanderability for ring size larger than $32$. In our experiment, at $128$-bit security and ring size of $1024$, our LRS has a size of $29$KB, and an amortized verification cost of $0.3$ ms, surpassing the state-of-the-art by a significant margin. Even without considering amortization, the verification time for a single signature is $128$ ms, comparable to those featuring linear signature size. A similar performance advantage can also be seen at signing. Furthermore, our LRS has extremely short public keys ($32$ bytes), while public keys of existing constructions are in the order of kilobytes.
Last updated:  2024-04-29
Cryptographic Accumulators: New Definitions, Enhanced Security, and Delegatable Proofs
Anaïs Barthoulot, Olivier Blazy, and Sébastien Canard
Cryptographic accumulators, introduced in 1993 by Benaloh and De Mare, represent a set with a concise value and offer proofs of (non-)membership. Accumulators have evolved, becoming essential in anonymous credentials, e-cash, and blockchain applications. Various properties, such as dynamic and universal accumulators, emerged for specific needs, leading to multiple accumulator definitions. In 2015, Derler, Hanser, and Slamanig proposed a unified model, but new properties, including zero-knowledge security, have arisen since. We offer a new definition of accumulators, based on Derler et al.'s, that is suitable for all properties. We also introduce a new security property, unforgeability of private evaluation, to protect accumulators from forgery, and we verify this property in Barthoulot, Blazy, and Canard's recent accumulator. Finally, we provide discussions on the security properties of accumulators and on the delegatable (non-)membership proofs property.
Last updated:  2024-04-29
Information-theoretic security with asymmetries
Tim Beyne and Yu Long Chen
In this paper, we study the problem of lower bounding any given cost function depending on the false positive and false negative probabilities of adversaries against indistinguishability security notions in symmetric-key cryptography. We take the cost model as an input, so that this becomes a purely information-theoretical question. We propose power bounds as an easy-to-use alternative for advantage bounds in the context of indistinguishability with asymmetric cost functions. We show that standard proof techniques such as hybrid arguments and the H-coefficient method can be generalized to the power model, and apply these techniques to the PRP-PRF switching lemma, the Even-Mansour (EM) construction, and the sum-of-permutations (SoP) construction. As the final and perhaps most useful contribution, we provide two methods to convert single-user power bounds into multi-user power bounds, and investigate their relation to the point-wise proximity method of Hoang and Tessaro (Crypto 2016). These methods are applied to obtain tight multi-user power bounds for EM and SoP.
Last updated:  2024-04-29
Cryptanalytic Audit of the XHash Sponge Function and its Components
Vincent Rijmen
In this audit we started from the security analysis provided in the design documentation of XHash8/12. We extended the analysis in several directions and confirmed the security claims that were made by the designers.
Last updated:  2024-04-29
Implementation and Performance Analysis of Homomorphic Signature Schemes
Davide Carnemolla, Dario Catalano, Mario Di Raimondo, and Federico Savasta
Homomorphic signatures allow one to validate computation on signed data. Alice, holding a dataset $\{m_1, \ldots, m_t\}$, uses her secret key $\sf sk$ to sign these data and stores the authenticated dataset on a remote server. The server can later (publicly) compute $m = f(m_1, \ldots, m_t)$ together with a signature $\sigma$ certifying that $m$ is indeed the correct output of the computation $f$. Over the last fifteen years, the problem of realizing homomorphic signatures has been the focus of numerous research works, with constructions now ranging from very efficient ones supporting linear functions to very expressive ones supporting (up to) arbitrary circuits. In this work we tackle the question of assessing the practicality of schemes belonging to this latter class. Specifically, we implement the GVW lattice-based scheme for circuits from STOC 2015 and two recently proposed pairing-based constructions building from functional commitments. Our experiments show that both pairing-based schemes outperform GVW on all fronts.
Last updated:  2024-04-29
Monchi: Multi-scheme Optimization For Collaborative Homomorphic Identification
Alberto Ibarrondo, Ismet Kerenciler, Hervé Chabanne, Vincent Despiegel, and Melek Önen
This paper introduces a novel protocol for privacy-preserving biometric identification, named Monchi, that combines the use of homomorphic encryption for the computation of the identification score with function secret sharing to obliviously compare this score with a given threshold and finally output the binary result. Given the cost of homomorphic encryption, BFV in this solution, we study and evaluate the integration of two packing solutions that enable the regrouping of multiple templates in one ciphertext to improve efficiency meaningfully. We propose an end-to-end protocol, prove it secure and implement it. Our experimental results attest to Monchi's applicability to the real-life use case of an airplane boarding scenario with 1000 passengers, taking less than one second to authorize/deny access to the plane to each passenger via biometric identification while maintaining the privacy of all passengers.
Last updated:  2024-04-29
A Complete Beginner Guide to the Number Theoretic Transform (NTT)
Ardianto Satriawan, Rella Mareta, and Hanho Lee
The Number Theoretic Transform (NTT) is a powerful mathematical tool that has become increasingly important in developing Post Quantum Cryptography (PQC) and Homomorphic Encryption (HE). Its ability to efficiently calculate polynomial multiplication using the convolution theorem with a quasi-linear complexity $O(n \log{n})$ instead of $O(n^2)$ when implemented with Fast Fourier Transform-style algorithms has made it a key component in modern cryptography. FFT-style NTT algorithm or fast-NTT is particularly useful in lattice-based cryptography. In this short note, we briefly introduce the basic concepts of linear, cyclic, and negacyclic convolutions via traditional schoolbook algorithms, traditional NTT, its inverse (INTT), and FFT-like versions of NTT/INTT. We then provide consistent toy examples through different concepts and algorithms to understand the basics of NTT concepts.
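A toy $O(n^2)$ NTT over $\mathbb{Z}_{17}$ makes the convolution theorem concrete (assumptions: $p = 17$, $n = 4$, $\omega = 4$ a primitive 4th root of unity mod 17; the note's fast-NTT replaces the quadratic loop with an FFT-style butterfly):

```python
P, N, W = 17, 4, 4                      # prime p, length n, primitive N-th root

def ntt(a, w=W):
    # Naive O(n^2) forward transform: A_i = sum_j a_j * w^(ij) mod p.
    return [sum(a[j] * pow(w, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(A):
    inv_n = pow(N, P - 2, P)            # N^{-1} mod p via Fermat
    a = ntt(A, pow(W, P - 2, P))        # inverse transform uses w^{-1}
    return [x * inv_n % P for x in a]

def cyclic_conv(a, b):                  # schoolbook product mod x^N - 1
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % P
    return c

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
via_ntt = intt([x * y % P for x, y in zip(ntt(a), ntt(b))])
print(via_ntt == cyclic_conv(a, b))     # True: pointwise products suffice
```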
Last updated:  2024-04-28
A New Hash-based Enhanced Privacy ID Signature Scheme
Liqun Chen, Changyu Dong, Nada El Kassem, Christopher J.P. Newton, and Yalan Wang
The elliptic curve-based Enhanced Privacy ID (EPID) signature scheme is broadly used for hardware enclave attestation by many platforms that implement Intel Software Guard Extensions (SGX) and other devices. This scheme has also been included in the Trusted Platform Module (TPM) specifications and ISO/IEC standards. However, it is insecure against quantum attackers. While research into quantum-resistant EPID has resulted in several lattice-based schemes, Boneh et al. have initiated the study of EPID signature schemes built only from symmetric primitives. We observe that for this line of research, there is still room for improvement. In this paper, we propose a new hash-based EPID scheme, which includes a novel and efficient signature revocation scheme. In addition, our scheme can handle a large group size (up to $2^{60}$ group members), which meets the requirements of rapidly developing hardware enclave attestation applications. The security of our scheme is proved under the Universal Composability (UC) model. Finally, we have implemented our EPID scheme, which, to our best knowledge, is the first implementation of EPID from symmetric primitives.
Last updated:  2024-04-28
Hash-based Direct Anonymous Attestation
Liqun Chen, Changyu Dong, Nada El Kassem, Christopher J.P. Newton, and Yalan Wang
Direct Anonymous Attestation (DAA) was designed for the Trusted Platform Module (TPM) and versions using RSA and elliptic curve cryptography have been included in the TPM specifications and in ISO/IEC standards. These standardised DAA schemes have their security based on the factoring or discrete logarithm problems and are therefore insecure against quantum attackers. Research into quantum-resistant DAA has resulted in several lattice-based schemes. Now in this paper, we propose the first post-quantum DAA scheme from symmetric primitives. We make use of a hash-based signature scheme, which is a slight modification of SPHINCS+, as a DAA credential. A DAA signature, proving the possession of such a credential, is a multiparty computation-based non-interactive zero-knowledge proof. The security of our scheme is proved under the Universal Composability (UC) model. While maintaining all the security properties required for a DAA scheme, we try to make the TPM's workload as low as possible. Our DAA scheme can handle a large group size (up to $2^{60}$ group members), which meets the requirements of rapidly developing TPM applications.
Last updated:  2024-04-28
Sphinx-in-the-Head: Group Signatures from Symmetric Primitives
Liqun Chen, Changyu Dong, Christopher J. P. Newton, and Yalan Wang
Group signatures and their variants have been widely used in privacy-sensitive scenarios such as anonymous authentication and attestation. In this paper, we present a new post-quantum group signature scheme from symmetric primitives. Using only symmetric primitives makes the scheme less prone to unknown attacks than basing the design on newly proposed hard problems whose security is less well-understood. However, symmetric primitives do not have rich algebraic properties, and this makes it extremely challenging to design a group signature scheme on top of them. It is even more challenging if we want a group signature scheme suitable for real-world applications, one that can support large groups and require few trust assumptions. Our scheme is based on MPC-in-the-head non-interactive zero-knowledge proofs, and we specifically design a novel hash-based group credential scheme, which is rooted in the SPHINCS+ signature scheme but with various modifications to make it MPC (multi-party computation) friendly. The security of the scheme has been proved under the fully dynamic group signature model. We provide an implementation of the scheme and demonstrate the feasibility of handling a group size as large as $2^{60}$. This is the first group signature scheme from symmetric primitives that supports such a large group size and meets all the security requirements.
Last updated:  2024-04-28
Encrypted KNN Implementation on Distributed Edge Device Network
B Pradeep Kumar Reddy, Ruchika Meel, and Ayantika Chatterjee
Machine learning (ML) as a service has emerged as a rapidly expanding field across various industries like healthcare, finance, marketing, retail and e-commerce, Industry 4.0, etc., where a huge amount of data is generated. To handle this amount of data, huge computational power is required, for which cloud computing used to be the first choice. However, there are several challenges in cloud computing like limitations of bandwidth, network connectivity, higher latency, etc. To address these issues, edge computing is prominent nowadays, where the data from sensor nodes is collected and processed on low-cost edge devices. As simple sensor nodes are not capable of handling complex computations of ML models, data from sensor nodes need to be transferred to some nearest edge devices for further processing. If this sensor data is related to some security-critical application, the privacy of such sensitive data needs to be preserved both during communication from sensor node to edge device and computation in edge nodes. This increased need to perform edge-based ML on privacy-preserved data has led to a surge in interest in homomorphic encryption (HE) due to its ability to perform computations on encrypted form of data. The highest form of HE, Fully Homomorphic Encryption (FHE), is capable of theoretically handling arbitrary encrypted algorithms but comes with huge computational overhead. Hence, the implementation of such a complex encrypted ML model on a single edge node is not very practical in terms of latency requirements. Our paper introduces a low-cost encrypted ML framework on a distributed edge cluster, where multiple low-cost edge devices (Raspberry Pi boards) are clustered to perform encrypted distributed K-Nearest Neighbours (KNN) algorithm computations. Our experimental result shows KNN prediction on the standard Wisconsin breast cancer dataset takes approximately 1.2 hours, implemented on a cluster of six Pi boards, maintaining end-to-end data confidentiality of critical medical data without any requirement of costly cloud-based computation resource support.
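For reference, the cleartext KNN step being evaluated under FHE looks as follows (a sketch; in the encrypted version every subtraction, multiplication, and comparison below becomes a homomorphic operation, which is what makes distributing the work across the Pi cluster necessary):

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    # Squared Euclidean distance avoids the (FHE-hostile) square root.
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train]
    nearest = sorted(range(len(train)), key=dists.__getitem__)[:k]
    # Majority vote among the k nearest neighbours.
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

train = [(1.0, 1.0), (1.2, 0.9), (8.0, 8.0), (8.2, 7.9)]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(train, labels, (1.1, 1.0)))   # benign
```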
Last updated:  2024-04-28
Ceno: Non-uniform, Segment and Parallel Zero-knowledge Virtual Machine
Tianyi Liu, Zhenfei Zhang, Yuncong Zhang, Wenqing Hu, and Ye Zhang
In this paper, we explore a novel Zero-knowledge Virtual Machine (zkVM) framework leveraging succinct, non-interactive zero-knowledge proofs for verifiable computation over any code. Our approach divides program execution proof into two stages. In the first stage, the process breaks down program execution into segments, identifying and grouping identical sections. These segments are then proved through data-parallel circuits that allow for varying amounts of duplication. In the subsequent stage, the verifier examines these segment proofs, reconstructing the program's control and data flow based on the segments' duplication number and the original program. The second stage can be further attested by a uniform recursive proof. We propose two specific designs of this concept, where segmentation and parallelization happen at two levels: opcode and basic block. Both designs try to minimize control flow that affects the circuit size and support dynamic copy numbers, ensuring that computational costs directly correlate with the actual code executed (i.e., you only pay as much as you use). In our second design, in particular, by proposing an innovative data-flow reconstruction technique in the second stage, we can drastically cut down on the stack operations even compared to the original program execution. Note that the two designs are complementary rather than mutually exclusive. Integrating both approaches in the same zkVM could unlock more significant potential for accommodating diverse program patterns. We present an asymmetric GKR scheme to implement our designs, pairing a non-uniform prover and a uniform verifier to generate proofs for dynamic-length data-parallel circuits. The use of a GKR prover also significantly reduces the size of the commitment: GKR allows us to commit only the circuit's input and output, whereas in Plonkish-based solutions, the prover needs to commit to all the witnesses.
Last updated:  2024-04-28
High-assurance field inversion for curve-based cryptography
Benjamin Salling Hvass, Diego F. Aranha, and Bas Spitters
The security of modern cryptography depends on multiple factors, from sound hardness assumptions to correct implementations that resist side-channel cryptanalysis. Curve-based cryptography is no different in this regard, and substantial progress has been achieved in the last few decades in both selecting parameters and devising secure implementation strategies. In this context, the security of implementations of field inversion is sometimes overlooked in the research literature, because (i) the approach based on Fermat's Little Theorem (FLT) suffices performance-wise for many parameters used in practice; (ii) it is typically invoked only at the very end of a cryptographic computation, with a small impact on performance; (iii) it is challenging to implement securely for general parameters without a significant performance penalty. However, field inversion can process sensitive information and must be protected with side-channel countermeasures like any other cryptographic operation, as illustrated by recent attacks. In this work, we focus on implementing field inversion for primes of cryptographic interest with security against timing attacks, irrespective of whether the FLT-based inversion can be efficiently implemented. We extend the Fiat-Crypto framework, which synthesizes provably correct-by-construction implementations, to implement the Bernstein-Yang inversion algorithm as a step towards this goal. This allows a correct implementation of prime field inversion to be synthesized for any prime. We benchmark the implementations across a range of primes for curve-based cryptography and they outperform traditional FLT-based approaches in most cases, with observed speedups of up to 2$\times$ for the largest parameters. Our work is already used in production in the MirageOS unikernel operating system, the $\mathtt{zig}$ programming language, and the ECCKiila framework.
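For context, here is a minimal sketch (ours, not the paper's synthesized code) of the FLT-based inversion the abstract contrasts against: for a prime $p$ and nonzero $a$, Fermat's Little Theorem gives $a^{p-2} = a^{-1} \bmod p$. Real implementations must additionally be constant-time at the machine level, which is the concern motivating the Bernstein-Yang approach.

```python
# Minimal sketch of FLT-based field inversion (illustrative only; Python's
# pow() is not constant-time, so this is not side-channel safe).
def invert_flt(a: int, p: int) -> int:
    """Fermat's Little Theorem: a^(p-2) is a^(-1) mod p for prime p, a != 0."""
    return pow(a, p - 2, p)

p = 2**255 - 19          # the Curve25519 field prime, as an example
a = 123456789
assert (invert_flt(a, p) * a) % p == 1
```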
Last updated:  2024-04-28
Weightwise (almost) perfectly balanced functions based on total orders
Pierrick Méaux
The unique design of the FLIP cipher necessitated a generalization of standard cryptographic criteria for Boolean functions used in stream ciphers, prompting a focus on properties specific to subsets of $\mathbb{F}_2^n$ rather than the entire set. This led to heightened interest in properties related to fixed Hamming weight sets and the corresponding partition of $\mathbb{F}_2^n$ into $n+1$ such sets. Consequently, the concept of Weightwise Almost Perfectly Balanced (WAPB) functions emerged, which are balanced on each of these sets. Various studies have since proposed WAPB constructions and examined their cryptographic parameters for use in stream cipher filters. In this article, we introduce a general approach to constructing WAPB functions using the concept of order, which simplifies implementation and enhances cryptographic strength. We present two new constructions: a recursive method employing multiple orders on binary strings, and another utilizing just two orders. We establish lower bounds for nonlinearity and weightwise nonlinearities within these classes. By instantiating specific orders, we demonstrate that some achieve minimal algebraic immunity, while others provide functions with guaranteed optimal algebraic immunity. Experimental results in 8 and 16 variables indicate that using orders based on field representation significantly outperforms other methods in terms of both global and weightwise algebraic immunity and nonlinearity. Additionally, we extend the recursive construction to create WAPB functions for any value of n, with experiments in 10, 12, and 14 variables confirming that these order-based functions exhibit robust cryptographic parameters. In particular, those based on field orders display optimal degrees and algebraic immunity, and strong weightwise nonlinearities and algebraic immunities.
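To make the weightwise balance property concrete, here is a small checker of ours (the paper's order-based constructions are not reproduced): it tests, for each Hamming-weight slice, that the number of ones deviates from perfect balance by at most one. Plain parity fails immediately, since it is constant on every slice, which illustrates why such constructions are non-trivial.

```python
# Hedged sketch: check the weightwise (almost) perfect balance property of a
# Boolean function f on n-bit inputs, slice by slice.
from itertools import combinations

def is_wapb(f, n):
    for k in range(n + 1):                    # one slice per Hamming weight k
        size = ones = 0
        for support in combinations(range(n), k):
            x = [0] * n
            for i in support:
                x[i] = 1
            size += 1
            ones += f(tuple(x))
        # exactly balanced on even-size slices, off by at most one on odd ones
        if abs(2 * ones - size) > (size % 2):
            return False
    return True

print(is_wapb(lambda x: sum(x) % 2, 4))   # False: parity is constant per slice
```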
Last updated:  2024-04-28
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Huiqiang Liang, Haining Lu, and Geng Wang
Machine learning as a service requires the client to trust the server and provide its own private information to use this service. Usually, clients may worry that their private data is being collected by the server without effective supervision, and the server also aims to ensure proper management of the user data to foster the advancement of its services. In this work, we focus on private decision tree evaluation (PDTE), which can alleviate such privacy concerns associated with classification tasks using decision trees. After the evaluation, except for some hyperparameters, the client only receives the classification results from the server, while the server learns nothing. Firstly, we propose three amortized efficient private comparison algorithms: TECMP, RDCMP, and CDCMP, which are based on leveled homomorphic encryption. They are non-interactive, high-precision (up to 26624-bit), many-to-many, and output expressive, achieving an amortized cost of less than 1 ms at 32-bit precision, which is an order of magnitude faster than the state-of-the-art. Secondly, we propose three batch PDTE schemes using this private comparison: TECMP-PDTE, RDCMP-PDTE, and CDCMP-PDTE. Due to the batch operations, we utilize a clear rows relation (CRR) algorithm, which obfuscates the positions and classification results of the different row data. Finally, for decision trees exceeding 1000 nodes at 16-bit precision, the amortized runtimes of TECMP-PDTE and RDCMP-PDTE are both more than 56$\times$ faster than the state-of-the-art, while TECMP-PDTE with CRR still achieves a 14$\times$ speedup. Even for a single row and a tree of fewer than 100 nodes at 64-bit precision, TECMP-PDTE maintains performance comparable to current work.
Last updated:  2024-04-28
Zero-Knowledge Proof Vulnerability Analysis and Security Auditing
Xueyan Tang, Lingzhi Shi, Xun Wang, Kyle Charbonnet, Shixiang Tang, and Shixiao Sun
Zero-Knowledge Proof (ZKP) technology marks a revolutionary advancement in the field of cryptography, enabling the verification of certain information ownership without revealing any specific details. This technology, with its paradoxical yet powerful characteristics, provides a solid foundation for a wide range of applications, especially in enhancing the privacy and security of blockchain technology and other cryptographic systems. As ZKP technology increasingly becomes a part of the blockchain infrastructure, its importance for security and completeness becomes more pronounced. However, the complexity of ZKP implementation and the rapid iteration of the technology introduce various vulnerabilities, challenging the privacy and security it aims to offer. This study focuses on the completeness, soundness, and zero-knowledge properties of ZKP to meticulously classify existing vulnerabilities, and deeply explores multiple categories of vulnerabilities, including completeness issues, soundness problems, information leakage, and non-standardized cryptographic implementations. Furthermore, we propose a set of defense strategies that include a rigorous security audit process and a robust distributed network security ecosystem. This audit strategy employs a divide-and-conquer approach, segmenting the project into different levels, from the application layer down to the underlying platform and infrastructure layer, using threat modelling, line-by-line auditing, and internal cross-review, among other means, aimed at comprehensively identifying vulnerabilities in ZKP circuits, revealing design flaws in ZKP applications, and accurately identifying inaccuracies in the integration process of ZKP primitives.
Last updated:  2024-04-28
SOK: Research Motivations of Public-Key Cryptography
Fuchun Guo, Willy Susilo, Xiaofeng Chen, Peng Jiang, Jianchang Lai, and Zhen Zhao
The design, proposal, and analysis of cryptographic primitives and protocols (schemes) are among the primary research fields in cryptology. To advance this research field, it is crucial to fully understand their research motivations. In this paper, we systematically introduce the research motivations for designing and proposing new schemes in public-key cryptography. We found that all research motivations aim to produce benefits for humanity, including efficiency, security, and functionality, although some of them may not be obvious or may only hold conditionally. We categorize the benefits in research motivations into 3 ways, 6 types, and 17 areas. As examples, we introduce 40 research strategies within these areas for exploring benefits, each presented as ``From less-adj (in the first scheme) To more-adj (in the second scheme)'', where ``adj'' here refers to an adjective representing a positive outcome. This SOK paper aims to provide valuable insights into the driving forces behind advancements in public-key cryptography, facilitating future research efforts in this field.
Last updated:  2024-04-27
Asymptotics for the standard block size in primal lattice attacks: second order, formally verified
Daniel J. Bernstein
Many proposals of lattice-based cryptosystems estimate security levels by following a recipe introduced in the New Hope proposal. This recipe, given a lattice dimension $n$, modulus $q$, and standard deviation $s$, outputs a "primal block size" $\beta$ and a security level growing linearly with $\beta$. This $\beta$ is minimal such that some $\kappa$ satisfies $((n+\kappa)s^2+1)^{1/2} < (d/\beta)^{1/2} \delta^{2\beta-d-1} q^{\kappa/d}$, where $d = n + \kappa + 1$ and $\delta = (\beta(\pi\beta)^{1/\beta}/(2\pi e))^{1/(2(\beta-1))}$. This paper identifies how $\beta$ grows with $n$, with enough precision to show the impact of adjusting $q$ and $s$ by constant factors. Specifically, this paper shows that if $\lg q$ grows as $Q_0 \lg n + Q_1 + o(1)$ and $\lg s$ grows as $S_0 \lg n + S_1 + o(1)$, where $0 \le S_0 \le 1/2 < Q_0 - S_0$, then $\beta/n$ grows as $z_0 + (z_1+o(1))/\lg n$, where $z_0 = 2Q_0/(Q_0-S_0+1/2)^2$ and $z_1$ has a formula given in the paper. The paper provides a traditional-format proof and a proof verified by the HOL Light proof assistant.
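The recipe in this abstract is explicit enough to execute directly. The following sketch (ours) searches for the minimal $\beta$ by brute force; the parameter choices and search bounds are illustrative, not the paper's.

```python
# Brute-force evaluation of the New Hope-style recipe quoted above.
from math import exp, pi, sqrt

def delta(beta):
    return (beta * (pi * beta) ** (1 / beta) / (2 * pi * exp(1))) ** (1 / (2 * (beta - 1)))

def satisfied(n, q, s, beta):
    for kappa in range(1, 2 * n):             # illustrative search range
        d = n + kappa + 1
        lhs = sqrt((n + kappa) * s * s + 1)
        rhs = sqrt(d / beta) * delta(beta) ** (2 * beta - d - 1) * q ** (kappa / d)
        if lhs < rhs:
            return True
    return False

def primal_beta(n, q, s):
    beta = 50
    while not satisfied(n, q, s, beta):
        beta += 1
    return beta

# toy New Hope-like parameters: n = 512, q = 12289, s = sqrt(8)
print(primal_beta(512, 12289, sqrt(8)))
```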
Last updated:  2024-04-27
Efficient Quantum Algorithm for SUBSET-SUM Problem
Sanchita Ghosh, Anant Sharma, Sreetama Das, and Shibdas Roy
Problems in the complexity class $NP$ are not all known to be solvable in polynomial time by a classical computer, but given a solution, they are verifiable in polynomial time. The complexity class $BQP$ includes all problems solvable in polynomial time by a quantum computer. Prime factorization is in the class $NP$, and is also in the class $BQP$, owing to Shor's algorithm. The hardest of all problems within the $NP$ class are called $NP$-complete. If a quantum algorithm can solve an $NP$-complete problem in polynomial time, it would imply that a quantum computer can solve all problems in $NP$ in polynomial time. Here, we present a polynomial-time quantum algorithm to solve an $NP$-complete variant of the $SUBSET-SUM$ problem, thereby rendering $NP\subseteq BQP$. We illustrate that given a set of integers, which may be positive or negative, a quantum computer can decide in polynomial time whether there exists any subset that sums to zero. There are many real-world applications of our result, such as finding patterns efficiently in stock-market data, or in recordings of the weather or brain activity. As an example, the decision problem of matching two images in image processing is $NP$-complete, and can be solved in polynomial time when amplitude amplification is not required.
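For reference, the classical picture the abstract alludes to (our illustration): deciding the zero-subset question by brute force takes exponential time, while verifying a claimed witness is immediate.

```python
# Classical baseline for the decision problem above: does any non-empty subset
# of (possibly negative) integers sum to zero? Brute force is 2^n; verifying a
# witness is polynomial.
from itertools import combinations

def has_zero_subset(nums):
    return any(sum(c) == 0
               for r in range(1, len(nums) + 1)
               for c in combinations(nums, r))

def verify(nums, indices):
    # polynomial-time check of a candidate witness (a set of indices)
    return len(indices) > 0 and sum(nums[i] for i in indices) == 0

nums = [3, -7, 2, 5, -1]
print(has_zero_subset(nums))        # True: 3 + (-7) + 5 + (-1) = 0
print(verify(nums, (0, 1, 3, 4)))   # True
```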
Last updated:  2024-04-27
Toward Independent Key Encryption based on Q-Problem
Abdelkader Laouid, Mostefa Kara, and Mohammad Hammoudeh
This paper defines a post-quantum encryption scheme based on discussion cryptography by introducing a new post-quantum hard problem called the Q-Problem. The idea behind this scheme is to hide the keys of each entity; the encryption process relies on secret-message holders using only random private keys.
Last updated:  2024-04-27
SECDSA: Mobile signing and authentication under classical ``sole control''
Eric Verheul
The 2014 European eIDAS regulation regulates strong electronic authentication and legally binding electronic signatures. Both require user "sole control". Historically, smartcards have been used, based on direct interaction between the user and the relying party. Here sole control is provided by giving users both physical possession and control of the cryptographic key used for signing/authentication through a PIN. Such **classical** sole control is required in the 1999 electronic signature directive by some interpretations. The eIDAS regulation repeals the directive and explicitly relaxes its sole control requirements in a trade-off between security and usability. This allows user interaction to be outsourced to intermediary parties (authentication providers, signing services). This also allows mobile applications as user-friendly alternatives to smartcards. However, current mobile platforms are only equipped with limited cryptographic hardware that does not support secure knowledge factors (PINs) controlling keys. The eIDAS relaxation raises concerns about sole control; intermediary parties should not be able to act as a man-in-the-middle and impersonate users. In this paper we present a simple cryptographic design for signing and authentication on standard mobile platforms providing classical sole control. We argue that our design can meet the highest eIDAS requirements, effectively introducing a new signature category in a 2016 decision of the European Commission. We also sketch a SECDSA-based implementation of the European Digital Identity Wallet recently proposed by the European Commission as part of the eIDAS regulation update.
Last updated:  2024-04-27
Jumping for Bernstein-Yang Inversion
Li-Jie Jian, Ting-Yuan Wang, Bo-Yin Yang, and Ming-Shing Chen
This paper presents fast polynomial inversion operations specifically tailored for the NTRU Prime KEM on the ARMv8 NEON instruction set, benchmarked on four processor architectures: Cortex-A53, Cortex-A72, Cortex-A76 and Apple M1. We utilize the jumping division steps of the constant-time GCD algorithm from Bernstein and Yang (TCHES’19) and optimize the underlying polynomial multiplications of various lengths to improve the efficiency of computing polynomial inversions in NTRU Prime.
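For background, the primitive being optimized is the Bernstein-Yang division step. The integer version is sketched below (our reference rendering of the published recurrence, not the paper's vectorized polynomial code); the "jumping" idea batches many such steps into matrix products that vectorize well on NEON.

```python
# One Bernstein-Yang division step (divstep) and its naive iteration.
# Plain Python, not constant time; f must be odd.
def divstep(delta, f, g):
    if delta > 0 and g % 2 == 1:
        return 1 - delta, g, (g - f) // 2
    if g % 2 == 1:
        g = g + f            # make g even without changing gcd(f, g)
    return 1 + delta, f, g // 2

def gcd_odd(f, g, iterations):
    delta = 1
    for _ in range(iterations):
        delta, f, g = divstep(delta, f, g)
    return abs(f)            # once g reaches 0, |f| = gcd

print(gcd_odd(21, 35, 50))   # 7
```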
Last updated:  2024-04-26
Verifiable Encryption from MPC-in-the-Head
Akira Takahashi and Greg Zaverucha
Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.
Last updated:  2024-04-26
A Characterization of AE Robustness as Decryption Leakage Indistinguishability
Ganyuan Cao
Robustness has emerged as an important criterion for authenticated encryption, alongside the requirements of confidentiality and integrity. We introduce a novel notion, denoted IND-rCCA, to formalize the robustness of authenticated encryption from the perspective of decryption leakage. This notion augments common notions defined for AEAD schemes by considering the indistinguishability of potential leakage due to decryption failure, particularly in the presence of multiple checks for failures. With this notion, we study the disparity between a single-error decryption function and the actual leakage incurred during decryption. We introduce the concept of error unicity, which requires that only one error is disclosed, whether explicitly via decryption or implicitly via leakage, even when there are multiple checks for failures. This aims to mitigate the security issues caused by disclosing multiple errors via leakage. We further extend this notion to IND-sf-rCCA to formalize stateful security involving out-of-order ciphertexts. Furthermore, we revisit the robustness of the Encode-then-Encrypt-then-MAC (EEM) paradigm, addressing concerns arising from the disclosure of multiple error messages. We then propose a modification to boost its robustness, thereby ensuring error unicity.
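As a toy rendering of error unicity (our example, with a contrived scheme that is not the paper's): decryption below runs two internal checks, authenticity and format, but always evaluates both and discloses a single undifferentiated error.

```python
# Toy "error unicity" decryption: two internal checks, one disclosed error.
# Contrived stream-cipher-like scheme, for illustration only.
import hashlib, hmac

FAIL = "decryption error"            # the single error symbol ever revealed

def keystream(key, nonce, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key, nonce, msg):
    msg += b"\x00"                   # toy format byte, re-checked on decryption
    ct = bytes(m ^ k for m, k in zip(msg, keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt(key, nonce, ct, tag):
    tag_ok = hmac.compare_digest(hmac.new(key, nonce + ct, hashlib.sha256).digest(), tag)
    pt = bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
    fmt_ok = pt.endswith(b"\x00")
    # evaluate *all* checks, then disclose at most one undifferentiated error
    return pt[:-1] if (tag_ok and fmt_ok) else FAIL

key, nonce = b"k" * 32, b"n" * 12
ct, tag = encrypt(key, nonce, b"hello")
assert decrypt(key, nonce, ct, tag) == b"hello"
assert decrypt(key, nonce, ct, b"\x00" * 32) == FAIL   # bad tag, same lone error
```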
Last updated:  2024-04-26
Bake It Till You Make It: Heat-induced Power Leakage from Masked Neural Networks
Dev M. Mehta, Mohammad Hashemi, David S. Koblah, Domenic Forte, and Fatemeh Ganji
Masking has become one of the most effective approaches for securing hardware designs against side-channel attacks. Regardless of the effort put into correctly implementing masking schemes on a field-programmable gate array (FPGA), leakage can be unexpectedly observed. This is due to the fact that the assumption underlying all masked designs, i.e., the leakages of different shares are independent of each other, may no longer hold in practice. In this regard, extreme temperatures have been shown to be an important factor in inducing leakage, even in correctly masked designs. This has previously been verified using an external heat generator (i.e., a climate chamber). In this paper, we examine whether the leakage can be induced using the circuit components themselves without making any changes to the design. Specifically, we target masked neural networks (NNs) in FPGAs, one of the main building blocks of which is block random access memory (BRAM). In this respect, thanks to the inherent characteristics of NNs, our novel internal heat generators leverage solely the memories devoted to storing the user’s input, especially when frequently writing alternating patterns into BRAMs. The possibility of observing first-order leakage is evaluated by considering one of the most recent and successful first-order secure masked NNs, namely ModuloNET. ModuloNET is specifically designed for FPGAs, where BRAMs are used to store inputs and intermediate computations. Our experimental results demonstrate that undesirable first-order leakage can be observed and exploited by increasing the temperature when an alternating input is applied to the masked NN. To give a better understanding of the impact of extreme heat, we further perform a similar test on the design using an external heat generator, where a similar conclusion can be drawn.
Last updated:  2024-04-26
MiniCast: Minimizing the Communication Complexity of Reliable Broadcast
Thomas Locher and Victor Shoup
We give a new protocol for reliable broadcast with improved communication complexity for long messages. Namely, to reliably broadcast a message $m$ over an asynchronous network to a set of $n$ parties, of which fewer than $n/3$ may be corrupt, our protocol achieves a communication complexity of $1.5 |m| n + O( \kappa n^2 \log(n) )$, where $\kappa$ is the output length of a collision-resistant hash function. This result improves on the previously best known bound for long messages of $2 |m| n + O( \kappa n^2 \log(n) )$.
Last updated:  2024-04-26
Sing a song of Simplex
Victor Shoup
We flesh out some details of the recently proposed Simplex atomic broadcast protocol, and modify it so that leaders disperse blocks in a more communication-efficient fashion. The resulting protocol, called DispersedSimplex, maintains the simplicity and excellent -- indeed, optimal -- latency characteristics of the original Simplex protocol. We also present several variations, including a variant that supports "stable leaders", variants that incorporate very recently developed data dissemination techniques that allow us to disperse blocks even more efficiently, and variants that are "signature free". We also suggest a number of practical optimizations and provide concrete performance estimates that take into account not just network latency but also network bandwidth limitations and computational costs. Based on these estimates, we argue that despite its simplicity, DispersedSimplex should, in principle, perform in practice as well as or better than any other state-of-the-art atomic broadcast protocol, at least in terms of common-case throughput and latency.
Last updated:  2024-04-26
Key-Homomorphic and Aggregate Verifiable Random Functions
Giulio Malavolta
A verifiable random function (VRF) allows one to compute a random-looking image, while at the same time providing a unique proof that the function was evaluated correctly. VRFs are a cornerstone of modern cryptography and, among other applications, are at the heart of recently proposed proof-of-stake consensus protocols. In this work we initiate the formal study of aggregate VRFs, i.e., VRFs that allow for the aggregation of proofs/images into a small digest, whose size is independent of the number of input proofs/images, yet still enables sound verification. We formalize this notion along with its security properties and we propose two constructions: The first scheme is conceptually simple, concretely efficient, and uses (asymmetric) bilinear groups of prime order. Pseudorandomness holds in the random oracle model and aggregate pseudorandomness is proven in the algebraic group model. The second scheme is in the standard model and is proven secure under the learning with errors (LWE) assumption. As a cryptographic building block of independent interest, we introduce the notion of key-homomorphic VRFs, where the verification keys and the proofs are endowed with a group structure. We conclude by discussing several applications of key-homomorphic and aggregate VRFs, such as distributed VRFs and aggregate proof-of-stake protocols.
Last updated:  2024-04-26
Efficient Post-Quantum Secure Deterministic Threshold Wallets from Isogenies
Poulami Das, Andreas Erwig, Michael Meyer, and Patrick Struck
Cryptocurrency networks crucially rely on digital signature schemes, which are used as an authentication mechanism for transactions. Unfortunately, most major cryptocurrencies today, including Bitcoin and Ethereum, employ signature schemes that are susceptible to quantum adversaries, i.e., an adversary with access to a quantum computer can forge signatures and thereby spend coins of honest users. In cryptocurrency networks, signature schemes are typically not executed in isolation, but within a so-called cryptographic wallet. In order to achieve security against quantum adversaries, the signature scheme and the cryptographic wallet must withstand quantum attacks. In this work, we advance the study on post-quantum secure signature and wallet schemes. That is, we provide the first formal model for deterministic threshold wallets and we show a generic post-quantum secure construction from any post-quantum secure threshold signature scheme with rerandomizable keys. We then instantiate our construction from the isogeny-based signature scheme CSI-FiSh and we show that our instantiation significantly improves over prior work.
Last updated:  2024-04-26
GraphOS: Towards Oblivious Graph Processing
Javad Ghareh Chamani, Ioannis Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou, and Rasool Jalili
We propose GraphOS, a system that allows a client that owns a graph database to outsource it to an untrusted server for storage and querying. It relies on doubly-oblivious primitives and trusted hardware to achieve a very strong privacy and efficiency notion which we call oblivious graph processing: the server learns nothing besides the number of graph vertices and edges, and for each query its type and response size. At a technical level, GraphOS stores the graph on a doubly-oblivious data structure, so that all vertex/edge accesses are indistinguishable. For this purpose, we propose Omix++, a novel doubly-oblivious map that outperforms the previous state of the art by up to 34×, and may be of independent interest. Moreover, to avoid any leakage from CPU instruction fetching during query evaluation, we propose algorithms for four fundamental graph queries (BFS/DFS traversal, minimum spanning tree, and single-source shortest paths) that have a fixed execution trace, i.e., the sequence of executed operations is independent of the input. By combining these techniques, we eliminate all information that a hardware adversary observing the memory access pattern within the protected enclave can infer. We benchmarked GraphOS against the best existing solution, based on an oblivious relational DBMS (translating graph queries to relational operators). GraphOS is not only significantly more performant (by up to two orders of magnitude for our tested graphs) but it eliminates leakage related to the graph topology that is practically inherent when a relational DBMS is used, unless all operations are “padded” to the worst case.
Last updated:  2024-04-26
Earn While You Reveal: Private Set Intersection that Rewards Participants
Aydin Abadi
In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that those parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so would cost them their privacy. In this work, for the first time, we propose a multi-party PSI, called “Anesidora”, that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric key primitives and its computation and communication complexities are linear with the number of parties and set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.
Last updated:  2024-04-26
A note on -Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme-
Mustafa Khairallah
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its claimed security bound degrades only linearly with the maximum input length. We show in this note that this claim does not hold, by exhibiting a single-query distinguisher with advantage $O(l^2/2^n)$, where $l$ is the length of that query. The distinguisher does not break the beyond-birthday-bound claim, but it achieves a higher advantage than the claimed bound.
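To see the gap at a glance, compare a linearly degrading term against the distinguisher's quadratic advantage at a concrete length (our numbers, for illustration):

```latex
\[
  \text{for } n = 128,\ l = 2^{48}:\qquad
  \underbrace{\tfrac{l}{2^{n}} = 2^{-80}}_{\text{linear in } l}
  \qquad\text{vs.}\qquad
  \underbrace{\tfrac{l^{2}}{2^{n}} = 2^{-32}}_{\text{single-query advantage}}
\]
```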
Last updated:  2024-04-26
Properties of Lattice Isomorphism as a Cryptographic Group Action
Benjamin Benčina, Alessandro Budroni, Jesús-Javier Chi-Domínguez, and Mukul Kulkarni
In recent years, the Lattice Isomorphism Problem (LIP) has served as an underlying assumption to construct quantum-resistant cryptographic primitives, e.g. the zero-knowledge proof and digital signature scheme by Ducas and van Woerden (Eurocrypt 2022), and the HAWK digital signature scheme (Asiacrypt 2022). While prior lines of work in group action cryptography, e.g. the works of Brassard and Yung (Crypto 1990), and more recently Alamati, De Feo, Montgomery and Patranabis (Asiacrypt 2020), focused on studying the discrete logarithm problem and isogeny-based problems in the group action framework, in recent years this framing has been used for studying the cryptographic properties of computational problems based on the difficulty of determining equivalence between algebraic objects. Examples include Permutation and Linear Code Equivalence Problems used in LESS (Africacrypt 2020), and the Tensor Isomorphism Problem (TCC 2019). This study delves into the quadratic form version of LIP, examining it through the lens of group actions. In this work we (1) give formal definitions and study the cryptographic properties of this group action (LIGA), (2) demonstrate that LIGA lacks both weak unpredictability and weak pseudorandomness, and (3) under certain assumptions, establish a theoretical trade-off between time complexity and the required number of samples for breaking weak unpredictability, for large dimensions. We also conduct experiments supporting our analysis. Additionally, we employ our findings to formulate new hard problems on quadratic forms.
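The group action in question can be stated very concretely (our toy rendering): a unimodular matrix $U$ acts on the Gram matrix $Q$ of a quadratic form by congruence, $Q \mapsto U^T Q U$, and LIP asks to recover such a $U$ from an equivalent pair.

```python
# Toy sketch of the quadratic-form group action underlying LIP.
import numpy as np

def act(U: np.ndarray, Q: np.ndarray) -> np.ndarray:
    return U.T @ Q @ U

Q = np.array([[2, 1], [1, 2]])    # Gram matrix of a small lattice
U = np.array([[1, 1], [0, 1]])    # det = 1, hence unimodular
Q2 = act(U, Q)                    # a form equivalent to Q

# compatibility with composition (a right action): (UV).Q == V.(U.Q)
V = np.array([[1, 0], [2, 1]])
assert np.array_equal(act(U @ V, Q), act(V, act(U, Q)))
print(Q2)                         # LIP: given only Q and Q2, find some such U
```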
Last updated:  2024-04-26
Rondo: Scalable and Reconfiguration-Friendly Randomness Beacon
Xuanji Meng, Xiao Sui, Zhaoxin Yang, Kang Rong, Wenbo Xu, Shenglong Chen, Ying Yan, and Sisi Duan
We present Rondo, a scalable and reconfiguration-friendly distributed randomness beacon (DRB) protocol in the partially synchronous model. Rondo is the first DRB protocol that is built from batched asynchronous verifiable secret sharing (bAVSS) while avoiding the high $O(n^3)$ message cost, where $n$ is the number of nodes. Our key contribution lies in the introduction of a new variant of bAVSS called batched asynchronous verifiable secret sharing with partial output (bAVSS-PO). bAVSS-PO is a weaker primitive than bAVSS but allows us to build a secure and more efficient DRB protocol. We propose a bAVSS-PO protocol Breeze. Breeze achieves the optimal $O(n)$ messages for the sharing stage and allows Rondo to offer better scalability than prior DRB protocols. Additionally, to support reconfiguration, we introduce Rondo-BFT, a dynamic and partially synchronous Byzantine fault-tolerant protocol inspired by Dyno (S&P 2022). Unlike Dyno, Rondo-BFT provides a communication pattern that generates randomness beacon output periodically, making it well-suited for DRB applications. We implement our protocols and evaluate the performance on Amazon EC2 using up to 91 instances. Our evaluation results show that Rondo achieves higher throughput than existing works while offering better scalability, where the performance does not degrade as significantly as $n$ grows.
Last updated:  2024-04-26
On Proving Pairings
Andrija Novakovic and Liam Eagen
In this paper we explore efficient ways to prove the correctness of elliptic curve pairing relations. Pairing-based cryptographic protocols such as the Groth16 and Plonk SNARKs and the BLS signature scheme are used extensively in public blockchains such as Ethereum, due in large part to their small size. However, the relatively high cost of pairing computation remains a practical problem for many use cases such as verification ``in circuit" inside a SNARK. This naturally arises in recursive SNARK composition and SNARKs of BLS-based consensus protocols. To improve pairing verification, we first show that the final exponentiation step of pairing verification can be replaced with a more efficient ``residue check," which can be incorporated into the Miller loop. Then, we show how to reduce the cost of the Miller loop by pre-computing all the necessary lines, and how this is especially efficient when the second pairing argument is fixed in advance. This is the case for BLS signatures with a fixed public key, as well as for KZG-based SNARKs like Plonk and two of the three Groth16 pairings. Finally, we show how to improve on the protocol of [gar] by combining quotients, which allows us to prove higher-degree relations more efficiently. These techniques also carry over naturally to pairing verification, for example on-chain verification or as part of the BitVM(2) protocol for Bitcoin smart contracts. We instantiate algorithms and show results for the BN254 curve.
Last updated:  2024-04-26
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray and Caroline El Jazmi
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when needing to ensure privacy and the correctness of work carried out by a potentially untrusted party. Recent advancements in the discipline of applied zero-knowledge cryptography, and probabilistic proof systems in general, have led to a means of generating proofs of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we present the application of a non-interactive, plausibly-post-quantum-secure, probabilistically-checkable argument system utilized as an efficiently verifiable guarantee that a privacy mechanism was irrefutably applied to a machine-learning model during the training process. That is, we prove the correct training of a differentially-private (DP) linear regression over a dataset of 60,000 samples on a single machine in 55 minutes, verifying the entire computation in 47 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable-DP over a dataset of this size. Finally, we show how this task can be run in parallel, leading to further dramatic reductions in prover and verifier runtime complexity. We believe this result constitutes a key stepping-stone towards end-to-end private MLaaS.
Last updated:  2024-04-26
A note on ``a lightweight mutual and transitive authentication mechanism for IoT network''
Zhengjun Cao and Lihua Liu
We show the authentication mechanism [Ad Hoc Networks, 2023, 103003] fails to keep user anonymity, not as claimed.
Last updated:  2024-04-25
Towards Permissionless Consensus in the Standard Model via Fine-Grained Complexity
Marshall Ball, Juan Garay, Peter Hall, Aggelos Kiayias, and Giorgos Panagiotakos
We investigate the feasibility of permissionless consensus (aka Byzantine agreement) under standard assumptions. A number of protocols have been proposed to achieve permissionless consensus, most notably based on the Bitcoin protocol; however, to date no protocol is known that can be provably instantiated outside of the random oracle model. In this work, we take the first steps towards achieving permissionless consensus in the standard model. In particular, we demonstrate that worst-case conjectures in fine-grained complexity, in particular the orthogonal vectors conjecture (implied by the Strong Exponential Time Hypothesis), imply permissionless consensus in the random beacon model—a setting where a fresh random value is delivered to all parties at regular intervals. This gives a remarkable win-win result: either permissionless consensus exists relative to a random beacon, or there are non-trivial worst-case algorithmic speed-ups for a host of natural algorithmic problems (including SAT). Our protocol achieves resilience against adversaries that control an inverse-polynomial fraction of the honest computational power, i.e., adversarial power $A = T^{1−ε} $ for some constant $ε > 0$, where $T$ denotes the honest computational power. This relatively low threshold is a byproduct of the slack in the fine-grained complexity conjectures. One technical highlight is the construction of a Seeded Proof of Work: a Proof of Work where many (correlated) challenges can be derived from a single short public seed, and yet still no non-trivial amortization is possible.
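The fine-grained problem underpinning this result is easy to state (our illustration): the orthogonal vectors problem asks whether two sets of Boolean vectors contain an orthogonal pair, and the conjecture says the quadratic brute force below is essentially optimal.

```python
# Quadratic brute force for the orthogonal vectors (OV) problem.
def has_orthogonal_pair(A, B):
    return any(all(a & b == 0 for a, b in zip(u, v)) for u in A for v in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(has_orthogonal_pair(A, B))   # True: (1,0,1) . (0,1,0) = 0
```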
Last updated:  2024-04-25
Regev Factoring Beyond Fibonacci: Optimizing Prefactors
Seyoon Ragavan
In this note, we improve the space-efficient variant of Regev's quantum factoring algorithm [Reg23] proposed by Ragavan and Vaikuntanathan [RV24] by constant factors in space and/or size. This allows us to bridge the significant gaps in concrete efficiency between the circuits by [Reg23] and [RV24]; [Reg23] uses far fewer gates, while [RV24] uses far fewer qubits. The main observation is that the space-efficient quantum modular exponentiation technique by [RV24] can be modified to work with more general sequences of integers than the Fibonacci numbers. We parametrize this in terms of a linear recurrence relation, and through this formulation construct three different circuits for quantum factoring: - A circuit that uses $\approx 12.4n$ qubits and $\approx 54.9n^{1/2}$ multiplications of $n$-bit integers. - A circuit that uses $(9+\epsilon)n$ qubits and $O_\epsilon(n^{1/2})$ multiplications of $n$-bit integers, for any $\epsilon > 0$. - A circuit that uses $(24+\epsilon)n^{1/2}$ multiplications of $n$-bit integers, and $O_\epsilon(n)$ qubits, for any $\epsilon > 0$. In comparison, the original circuit by [Reg23] uses at least $\approx 3n^{3/2}$ qubits and $\approx 6n^{1/2}$ multiplications of $n$-bit integers, while the space-efficient variant by [RV24] uses $\approx 10.32n$ qubits and $\approx 138.3n^{1/2}$ multiplications of $n$-bit integers (although a very simple modification of their Fibonacci-based circuit uses $\approx 11.32n$ qubits and only $\approx 103.7n^{1/2}$ multiplications of $n$-bit integers). The improvements proposed in this note take effect for sufficiently large values of $n$; it remains to be seen whether they can also provide benefits for practical problem sizes.
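Plugging a sample modulus size into the leading-order counts quoted above makes the trade-offs visible (our arithmetic; the $o(\cdot)$ and $\epsilon$ terms are dropped, and the note cautions that its improvements take effect only for sufficiently large $n$):

```python
# Leading-order qubit/multiplication counts quoted in the abstract, at n = 2048.
n = 2048

circuits = {
    "Regev [Reg23]":                 (3 * n ** 1.5, 6 * n ** 0.5),
    "Ragavan-Vaikuntanathan [RV24]": (10.32 * n, 138.3 * n ** 0.5),
    "this note, first circuit":      (12.4 * n, 54.9 * n ** 0.5),
}
for name, (qubits, mults) in circuits.items():
    print(f"{name}: ~{qubits:,.0f} qubits, ~{mults:,.0f} n-bit multiplications")
```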
Last updated:  2024-04-25
Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
Samuel Jaques
The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but depend on exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists. Under some assumptions about the constant terms of memory access, we estimate increases in bit security between $7$ and $23$ bits for different Kyber parameter sets and between $8$ and $22$ bits for Dilithium.
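Comparing the two quoted exponents at sample dimensions shows the scale of the 2-dimensional penalty (our arithmetic, leading order only; the paper's 7-23 bit estimates additionally depend on assumptions about memory-access constants):

```python
# Leading-order sieving cost exponents quoted above, at illustrative dimensions d.
for d in (400, 600, 900):                  # illustrative sieving dimensions
    ram, flat = 0.2925 * d, 0.3113 * d     # >= 3 spatial dims vs 2 spatial dims
    print(f"d = {d}: 2^{ram:.1f} vs 2^{flat:.1f}  (+{flat - ram:.1f} bits)")
```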
Last updated:  2024-04-25
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, and Abishanka Saha
Liskov, Rivest and Wagner laid the theoretical foundations for tweakable block ciphers (TBC). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several of the follow-up works consider cascading of LRW-type TBCs to construct beyond-the-birthday bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 can already give a BBB security. Bao et al. undertook a similar exercise in context of LRW1 with TNT --- a three-round cascading of LRW1 --- that has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further support our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. Furthering on to a more positive note, we show that adding just one more block cipher call, referred as 4-LRW1, does not just re-establish the BBB security, but also amplifies it up to $ 2^{3n/4} $ queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW-design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us to provide a modular proof covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1, and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
Last updated:  2024-04-25
Organizing Records for Retrieval in Multi-Dimensional Range Searchable Encryption
Mahdieh Heidaripour, Ladan Kian, Maryam Rezapour, Mark Holcomb, Benjamin Fuller, Gagan Agrawal, and Hoda Maleki
Storage of sensitive multi-dimensional arrays must be secure and efficient in storage and processing time. Searchable encryption allows one to trade between security and efficiency. Searchable encryption design focuses on building indexes, overlooking the crucial aspect of record retrieval. Gui et al. (PoPETS 2023) showed that understanding the security and efficiency of record retrieval is critical to understanding the overall system. A common technique for improving security is partitioning data tuples into parts. When a tuple is requested, the entire relevant part is retrieved, hiding the tuple of interest. This work assesses tuple partitioning strategies in the dense data setting, considering parts that are random, $1$-dimensional, and multi-dimensional. We consider synthetic datasets of $2$, $3$ and $4$ dimensions, with sizes extending up to $2$M tuples. We compare security and efficiency across a variety of record retrieval methods. Our findings are: 1. For most configurations, multi-dimensional partitioning yields better efficiency and less leakage. 2. $1$-dimensional partitioning outperforms multi-dimensional partitioning when the query is large in all dimensions except the first (indexed) dimension, which can be of any size. 3. The leakage of $1$-dimensional partitioning is reduced the most when using a bucketed ORAM (Demertzis et al., USENIX Security 2020).
Last updated:  2024-04-25
NTRU-based FHE for Larger Key and Message Space
Robin Jadoul, Axel Mertens, Jeongeun Park, and Hilder V. L. Pereira
The NTRU problem has proven a useful building block for efficient bootstrapping in Fully Homomorphic Encryption (FHE) schemes, and different such schemes have been proposed. FINAL (ASIACRYPT 2022) first constructed FHE using homomorphic multiplexer (CMux) gates for the blind rotation operation. Later, XZD+23 (CRYPTO 2023) gave an asymptotic optimization by changing the ciphertext format to enable ring automorphism evaluations. In this work, we examine an adaptation of FINAL that evaluates CMux gates of higher arity, and the resulting tradeoffs in running time and bootstrapping key size. In this setting, we can compare the time and space efficiency of both bootstrapping protocols with larger key space against each other and against the state of the art.
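The CMux gate at the center of this comparison is a one-line arithmetic identity (shown on plaintexts in our sketch; homomorphically, the selector and data are ciphertexts processed during blind rotation):

```python
# The controlled multiplexer (CMux) identity on plaintexts.
def cmux(c, d0, d1):
    return d0 + c * (d1 - d0)   # d0 if c == 0, d1 if c == 1

assert cmux(0, 5, 9) == 5 and cmux(1, 5, 9) == 9
```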
Last updated:  2024-04-25
Further Investigations on Nonlinear Complexity of Periodic Binary Sequences
Qin Yuan, Chunlei Li, Xiangyong Zeng, Tor Helleseth, and Debiao He
Nonlinear complexity is an important measure for assessing the randomness of sequences. In this paper we investigate how circular shifts affect the nonlinear complexities of finite-length binary sequences and then reveal a more explicit relation between nonlinear complexities of finite-length binary sequences and their corresponding periodic sequences. Based on the relation, we propose two algorithms that can generate all periodic binary sequences with any prescribed nonlinear complexity.
Last updated:  2024-04-25
Kronos: A Secure and Generic Sharding Blockchain Consensus with Optimized Overhead
Yizhong Liu, Andi Liu, Yuan Lu, Zhuocheng Pan, Yinuo Li, Jianwei Liu, Song Bian, and Mauro Conti
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. As an introduced new transaction type, cross-shard transactions pose a critical challenge to the security and efficiency of sharding blockchains. Currently, there is a lack of a generic sharding consensus pattern that achieves both security and low overhead. In this paper, we present Kronos, a secure sharding blockchain consensus achieving optimized overhead. In particular, we propose a new secure sharding consensus pattern, based on a buffer managed jointly by shard members. Valid transactions are transferred to the payee via the buffer, while invalid ones are rejected through happy or unhappy paths. Kronos is proved to achieve security with atomicity under malicious clients with optimal intra-shard overhead $k\mathcal{B}$ ($k$ for involved shard number and $\mathcal{B}$ for a Byzantine fault tolerance (BFT) cost). Efficient rejection even requires no BFT execution in happy paths, and the cost in unhappy paths is still lower than a two-phase commit. Besides, we propose secure cross-shard certification methods based on batch certification and reliable cross-shard transfer. The former combines hybrid trees or vector commitments, while the latter integrates erasure coding. Handling $b$ transactions, Kronos is proved to achieve reliability with low cross-shard overhead $\mathcal{O}(n b \lambda)$ ($n$ for shard size and $\lambda$ for the security parameter). Notably, Kronos imposes no restrictions on BFT and does not rely on time assumptions, offering optional constructions in various modules. Kronos could serve as a universal framework for enhancing the performance and scalability of existing BFT protocols, supporting generic models, including asynchronous networks, increasing the throughput by several orders of magnitude. We implement Kronos using two prominent BFT protocols: asynchronous Speeding Dumbo (NDSS'22) and partial synchronous Hotstuff (PODC'19). Extensive experiments (over up to 1000 AWS EC2 nodes across 4 AWS regions) demonstrate Kronos scales the consensus nodes to thousands, achieving a substantial throughput of 320 ktx/sec with 2.0 sec latency. Compared with the past solutions, Kronos outperforms, achieving up to a 12$\times$ improvement in throughput and a 50% reduction in latency when cross-shard transactions dominate the workload.
Last updated:  2024-04-25
Attribute-based Keyed (Fully) Homomorphic Encryption
Keita Emura, Shingo Sato, and Atsushi Takayasu
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption, where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. Then, KHPKE satisfies the CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies the CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under the standard Diffie-Hellman-type assumptions and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, the security is based on a $q$-type assumption and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since the design is conceptually much simpler than known KFHE schemes, we replace their building blocks with attribute-based ones and obtain the proposed ABKFHE schemes. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes that contain the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Last updated:  2024-04-25
BackMon: IC Backside Tamper Detection using On-Chip Impedance Monitoring
Tahoura Mosavirik and Shahin Tajik
The expansion of flip-chip technologies and a lack of backside protection make the integrated circuit (IC) vulnerable to certain classes of physical attacks mounted from the IC’s backside. Laser-assisted probing, electromagnetic, and body-biasing injection attacks are examples of such attacks. Unfortunately, there are few countermeasures proposed in the literature, and none are available commercially. Those that do exist are not only expensive but are incompatible with current IC manufacturing processes. They also cannot be integrated into legacy systems, such as field-programmable gate arrays (FPGAs), which are integral parts of many industrial and defense systems. In this paper, we demonstrate how impedance monitoring of the printed circuit board (PCB) and IC package’s power distribution network (PDN) using on-chip circuit-based network analyzers can detect IC backside tampering. Our method is based on the fact that any tampering attempt to expose the backside silicon substrate, such as the removal of the fan and heat sinks, leads to changes in the equivalent impedance of the package’s PDN; hence, scanning the package impedance will reveal whether the package integrity has been violated. To validate our claims, we deploy an on-FPGA network analyzer on an AMD Zynq UltraScale+ MPSoC manufactured with 16 nm technology, which is part of a multi-PCB system. We conduct a series of experiments at different temperatures, leveraging the difference of means as the statistical metric, to demonstrate the effectiveness of our method in detecting the tamper events required to expose the IC backside silicon.
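The detection statistic named in the abstract is simple to state (our synthetic sketch; real thresholds come from calibration across frequency points and temperatures):

```python
# Difference-of-means tamper check on impedance scans (synthetic values).
import statistics

def tamper_detected(reference, scan, threshold):
    return abs(statistics.mean(scan) - statistics.mean(reference)) > threshold

intact  = [50.1, 50.3, 49.9, 50.2]    # ohms at one probed frequency
no_sink = [47.0, 46.8, 47.1, 46.9]    # heat sink removed shifts the PDN impedance
print(tamper_detected(intact, no_sink, threshold=1.0))   # True
```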
Last updated:  2024-04-24
Conditional disclosure of secrets with quantum resources
Vahid R. Asadi, Kohdai Kuroiwa, Debbie Leung, Alex May, Sabrina Pasterski, and Chris Waddell
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret $z$ to a referee if and only if a Boolean function $f$ has $f(x,y)=1$. Alice knows $x,z$, Bob knows $y$, and the referee knows $x,y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to f-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
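The classical functionality is small enough to show in full for a single-gate predicate (our toy example, one-time-pad based; it is not one of the paper's quantum schemes): for f(x, y) = x AND y, correlated randomness set up in advance lets the referee recover z exactly when f = 1.

```python
# Toy classical CDS for f(x, y) = x AND y.
import secrets

def cds_and(x, y, z):
    r = secrets.randbits(1)                   # pre-shared randomness
    msg_alice = (z ^ r) if x == 1 else None   # Alice holds x and the secret z
    msg_bob = r if y == 1 else None           # Bob holds y
    return msg_alice, msg_bob

def referee(msg_alice, msg_bob):
    # both shares arrive iff f(x, y) = 1; a lone share is a uniform bit
    if msg_alice is None or msg_bob is None:
        return None
    return msg_alice ^ msg_bob

z = 1
assert referee(*cds_and(1, 1, z)) == z     # f = 1: secret revealed
assert referee(*cds_and(1, 0, z)) is None  # f = 0: nothing learned
```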
Last updated:  2024-04-24
A provably masked implementation of BIKE Key Encapsulation Mechanism
Loïc Demange and Mélissa Rossi
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of the NIST’s standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance. In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm or keeping the benefit of the bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices and we propose a full proof of masking in the Ishai, Sahai and Wagner (Crypto 2003) model. More practically, we also provide an open C-code masked implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than existing masked lattice-based algorithms, the scaling in the masking order is still encouraging and no Boolean to Arithmetic conversion has been used. We hope that this work can be a starting point for future analysis and optimization.
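The core masking idea the gadgets build on can be shown in a few lines (our minimal sketch of Boolean sharing in the Ishai-Sahai-Wagner style, not the paper's C gadgets): a secret is split into d+1 XOR shares so that any d of them are uniformly random.

```python
# Boolean (XOR) masking: split a secret into d+1 shares, recombine by XOR.
import secrets

def share(x, d, bits=64):
    shares = [secrets.randbits(bits) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s                 # last share fixes the XOR to equal x
    return shares + [last]

def unshare(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

x = 0xDEADBEEF
assert unshare(share(x, d=3)) == x   # 4 shares: 3rd-order masking
```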
Last updated:  2024-04-24
Encryption Based Covert Channel for Large Language Models
Yongge Wang
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries and blocks our attack within two days of our initial successful breach. This raises two concerns within the community: (1) The extensive logging of user inputs by large language models could pose privacy risks for users. (2) It may deter academic research on the security of such models due to the lack of experiment repeatability.
Last updated:  2024-04-24
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following: 1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts. 2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts. 3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
Last updated:  2024-04-24
Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms
Cédric Pilatte
In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
Last updated:  2024-04-24
Rate-1 Fully Local Somewhere Extractable Hashing from DDH
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, and Riccardo Zanotto
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest. Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that the $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$; in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE). In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and batch arguments (BARGs). Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Last updated:  2024-04-24
MUSEN: Aggregatable Key-Evolving Verifiable Random Functions and Applications
Bernardo David, Rafael Dowsley, Anders Konring, and Mario Larangeira
A Verifiable Random Function (VRF) can be evaluated on an input by a prover who holds a secret key, generating a pseudorandom output and a proof of output validity that can be verified using the corresponding public key. VRFs are a central building block of committee election mechanisms that sample parties to execute tasks in cryptographic protocols, e.g. generating blocks in a Proof-of-Stake (PoS) blockchain or executing a round of MPC protocols. We propose the notion, and a matching construction, of an Aggregatable Key-Evolving VRF (A-KE-VRF) with the following extra properties: 1. Aggregation: combining proofs for several VRF evaluations of different inputs under different secret keys into a single constant size proof; 2. Key-Evolving: preventing adversaries who corrupt a party (learning their secret key) from ``forging'' proofs of past VRF evaluations. As an immediate application, we improve on the block size of PoS blockchains and on the efficiency of Proofs of Proof-of-Stake (PoPoS). Furthermore, the A-KE-VRF notion allows us to construct Encryption to the Future (EtF) and Authentication from the Past (AfP) schemes with a Key-Evolving property, which provides forward security. An EtF scheme allows for sending a message to a party who is randomly selected to execute a role in the future, while an AfP scheme allows for this party to authenticate their messages as coming from a past execution of this role. These primitives are essential for realizing the YOSO MPC Framework (CRYPTO'21).
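For readers new to VRFs, the following toy illustrates the evaluate/verify interface using an RSA-FDH-style VRF with demo-sized (insecure) primes; it deliberately omits the aggregation and key-evolution features that are the paper's actual contribution.

```python
import hashlib

# Toy RSA key pair; these primes are far too small for real use.
p, q = 999983, 1000003
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H1(msg: bytes) -> int:
    """Hash the input into Z_N."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def vrf_eval(msg: bytes):
    proof = pow(H1(msg), d, N)                       # RSA-FDH "signature"
    output = hashlib.sha256(proof.to_bytes(8, "big")).digest()
    return output, proof                             # pseudorandom output + proof

def vrf_verify(msg: bytes, output: bytes, proof: int) -> bool:
    if pow(proof, e, N) != H1(msg):                  # check proof validity
        return False
    return output == hashlib.sha256(proof.to_bytes(8, "big")).digest()

out, pi = vrf_eval(b"epoch-42")
assert vrf_verify(b"epoch-42", out, pi)
```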
Last updated:  2024-04-24
Distributed & Scalable Oblivious Sorting and Shuffling
Nicholas Ngai, Ioannis Demertzis, Javad Ghareh Chamani, and Dimitrios Papadopoulos
Existing oblivious systems offer robust security by concealing memory access patterns, but they encounter significant scalability and performance challenges. Recent efforts to enhance the practicality of these systems involve embedding oblivious computation, e.g., oblivious sorting and shuffling, within Trusted Execution Environments (TEEs). For instance, oblivious sort has been heavily utilized: in Oblix (S&P'18), when oblivious indexes are created and accessed; in Snoopy's high-throughput oblivious key-value store (SOSP'21), during initialization and when the input requests are deduplicated and prepared for delivery; in Opaque (NSDI'17), for all the proposed oblivious SQL operators; and in the state-of-the-art non-foreign-key oblivious join approach (PVLDB'20). Additionally, oblivious sort/shuffle find applications in Signal's commercial solution for contact discovery, Google's anonymous Key Transparency, searchable encryption, software monitoring, and differentially private federated learning with user privacy. In this work, we address the scalability bottleneck of oblivious sort and shuffle by re-designing these approaches to achieve high efficiency in distributed multi-enclave environments. First, we propose a multi-threaded bitonic sort optimized for the distributed setting, making it the most performant oblivious sort for a small number of enclaves (up to 4). For larger numbers of enclaves, we propose a novel oblivious bucket sort, which improves data locality and network consumption and outperforms our optimized distributed bitonic sort by up to 5-6x. To the best of our knowledge, these are the first distributed oblivious TEE-based sorting solutions. For reference, we are able to sort 2 GiB of data in 1 second and 128 GiB in 53.4 seconds in a multi-enclave test. A fundamental building block of our oblivious bucket sort is an oblivious shuffle that improves the prior state-of-the-art result (CCS'22) by up to 9.5x in the distributed multi-enclave setting; interestingly, it is better by 10% even in the single-enclave/multi-thread setting.
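The obliviousness of bitonic sort is easy to see in code: the schedule of compare-exchange pairs below depends only on the input length, never on the data. This single-threaded sketch is illustrative; the paper's contribution lies in distributing and optimizing such networks across enclaves.

```python
import random

def bitonic_sort(a):
    """In-place bitonic sort; the length must be a power of two."""
    n = len(a)
    assert n and n & (n - 1) == 0
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                l = i ^ j           # partner index, fixed by the network
                if l > i:
                    ascending = (i & k) == 0
                    if (ascending and a[i] > a[l]) or \
                       (not ascending and a[i] < a[l]):
                        a[i], a[l] = a[l], a[i]   # compare-exchange
            j //= 2
        k *= 2
    return a

data = random.sample(range(1000), 16)
assert bitonic_sort(list(data)) == sorted(data)
```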
Last updated:  2024-04-24
Hadamard Product Argument from Lagrange-Based Univariate Polynomials
Jie Xie, Yuncong Hu, and Yu Yu
The Hadamard product is the point-wise product of two vectors. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses a linear number of cryptographic operations to generate a proof containing a logarithmic number of field elements. Verification takes a logarithmic number of cryptographic operations with a constant number of pairings in a bilinear group. The construction of the scheme is based on Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg, Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and by carefully choosing random polynomials suitable for folding, we construct a Hadamard-product protocol from the inner-product protocol. This gives an alternative way to prove linear algebra relations in linear time, and the protocol has a better concrete proof size than previous works.
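To illustrate the folding step at the heart of such inner-product protocols, the sketch below (in generic notation, with all commitments omitted) halves two vectors under a random challenge while updating the claimed inner product with the two cross terms.

```python
import random

P = 2**61 - 1                     # a Mersenne prime, stand-in field modulus

def ip(u, v):
    """Inner product over the field F_P."""
    return sum(x * y for x, y in zip(u, v)) % P

def fold(a, b, c):
    """One folding round: halve the vectors, update the claim c = <a, b>."""
    m = len(a) // 2
    aL, aR, bL, bR = a[:m], a[m:], b[:m], b[m:]
    L, R = ip(aL, bR), ip(aR, bL)           # cross terms sent to the verifier
    x = random.randrange(1, P)              # verifier challenge
    xi = pow(x, -1, P)
    a2 = [(x * u + xi * v) % P for u, v in zip(aL, aR)]
    b2 = [(xi * u + x * v) % P for u, v in zip(bL, bR)]
    c2 = (c + x * x * L + xi * xi * R) % P  # <a2, b2> = c + x^2 L + x^-2 R
    return a2, b2, c2

a = [random.randrange(P) for _ in range(8)]
b = [random.randrange(P) for _ in range(8)]
c = ip(a, b)
while len(a) > 1:
    a, b, c = fold(a, b, c)
assert ip(a, b) == c              # the fully folded claim still holds
```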
Last updated:  2024-04-23
Guarding the First Order: The Rise of AES Maskings
Amund Askeland, Siemen Dhooghe, Svetla Nikova, Vincent Rijmen, and Zhenda Zhang
We provide three first-order hardware maskings of the AES, each allowing for a different trade-off between the number of shares and the number of register stages. All maskings use a generalization of the changing of the guards method enabling the re-use of randomness between masked S-boxes. As a result, the maskings do not require fresh randomness while still allowing for a minimal number of shares and providing provable security in the glitch-extended probing model. The low-area variant has five cycles of latency and a serialized area cost of $8.13~kGE$. The low-latency variant reduces the latency to three cycles while increasing the serialized area by $67.89\%$ compared to the low-area variant. The maskings of the AES encryption are implemented on FPGA and evaluated with Test Vector Leakage Assessment (TVLA).
Last updated:  2024-04-23
Interactive Threshold Mercurial Signatures and Applications
Masaya Nanri, Octavio Perez Kempner, Mehdi Tibouchi, and Masayuki Abe
Equivalence class signatures allow a controlled form of malleability based on equivalence classes defined over the message space. As a result, signatures can be publicly randomized and adapted to a new message representative in the same equivalence class. Notably, security requires that an adapted signature-message pair looks indistinguishable from a random signature-message pair in the space of valid signatures for the new message representative. Together with the decisional Diffie-Hellman assumption, this yields an unlinkability notion (class-hiding), making them a very attractive building block for privacy-preserving primitives. Mercurial signatures are an extension of equivalence class signatures that allow malleability for the key space. Unfortunately, the most efficient construction to date suffers from a severe limitation that restricts its applications: only a weak form of public key class-hiding is supported. In other words, given knowledge of the original signing key and a randomization of the corresponding public key, it is possible to identify whether they are related. In this work, we put forth the notion of interactive threshold mercurial signatures and show how they help to overcome the above-mentioned limitation. Moreover, we present constructions in the two-party and multi-party settings, assuming at least one honest signer. We also discuss related applications, including blind signatures, multi-signatures, and threshold ring signatures. To showcase the practicality of our approach, we implement the proposed constructions, comparing them against related alternatives.
Last updated:  2024-04-23
Quantum Implementation and Analysis of SHA-2 and SHA-3
Kyungbae Jang, Sejin Lim, Yujin Oh, Hyunjun Kim, Anubhab Baksi, Sumanta Chakraborty, and Hwajeong Seo
Quantum computers have the potential to solve hard problems that are nearly impossible to solve by classical computers; this has sparked a surge of research applying quantum technology and algorithms against cryptographic systems to evaluate their quantum resistance. In the process of selecting post-quantum standards, NIST categorizes security levels based on the complexity that quantum computers would require to crack AES encryption (levels 1, 3, and 5) and SHA-2 or SHA-3 (levels 2 and 4). In assessing the security strength of cryptographic algorithms against quantum threats, accurate predictions of quantum resources are crucial. Following the work of Jaques et al. at Eurocrypt 2020, NIST estimated security levels 1, 3, and 5, corresponding to the quantum circuit size for finding the key for AES-128, AES-192, and AES-256, respectively. This work has recently been followed up by Huang et al. (Asiacrypt'22) and Liu et al. (Asiacrypt'23) among others, though the most up-to-date results are available in the work by Jang et al. (ePrint'22). However, for levels 2 and 4, which relate to collision finding for the SHA-2 and SHA-3 hash functions, quantum attack complexities have arguably not been well studied. In this paper, we present novel techniques for optimizing the quantum circuit implementations of the SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of the target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at both the component level and the architecture level.
Last updated:  2024-04-23
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes in which a secret randomness is hidden within a signature. Adaptor signatures are a recent cryptographic primitive and are becoming an important tool for blockchain applications such as cryptocurrencies, where they reduce on-chain costs, improve fungibility, and enable off-chain forms of payment in payment-channel networks, payment-channel hubs, and atomic swaps. However, currently used adaptor signature constructions are vulnerable to quantum adversaries due to Shor's algorithm. In this work, we introduce $\mathsf{SQIAsignHD}$, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol (SIDH) as the underlying hard relation. We furthermore show that our scheme is secure in the Quantum Random Oracle Model (QROM).
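The generic adaptor-signature interface (pre-sign, adapt, extract) is easiest to see over a familiar group. The toy below uses a Schnorr-style scheme with insecure demo parameters; it illustrates only the shared blueprint, not the paper's isogeny-based construction.

```python
import hashlib, random

q, p, g = 11, 23, 2   # toy group: order-q subgroup of Z_p* (insecure sizes)

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q); X = pow(g, x, p)   # signing key pair
t = random.randrange(1, q); T = pow(g, t, p)   # hidden witness and statement

# Pre-sign: a "pre-signature" that verifies only relative to T
r = random.randrange(1, q)
R = pow(g, r, p)
e = H((R * T) % p, X, "msg")
s_pre = (r + e * x) % q
assert pow(g, s_pre, p) == (R * pow(X, e, p)) % p      # pre-verification

# Adapt: whoever knows t completes it into an ordinary Schnorr signature
s = (s_pre + t) % q
R_full = (R * T) % p
assert pow(g, s, p) == (R_full * pow(X, e, p)) % p     # standard verification

# Extract: the pre-signer recovers the witness from the completed signature
assert (s - s_pre) % q == t
```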
Last updated:  2024-04-22
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we build a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. We provide three instantiations of our framework: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
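The correctness idea behind the subtractive-sharing approach can be rendered in a few lines. In this illustrative sketch (not the paper's exact scheme; SHA-256 models the random oracle and no security is claimed), the constrained key is the master key shifted by a secret multiple of the constraint vector $z$, so evaluations agree exactly on inputs $x$ with $\langle x, z \rangle = 0$.

```python
import hashlib, random

P = 2**61 - 1
n = 4

def H(s: int, x: tuple) -> bytes:
    """Random oracle stand-in."""
    return hashlib.sha256(f"{s}|{x}".encode()).digest()

def F(key, x):
    """PRF evaluation: hash the key-input inner product together with x."""
    s = sum(k * xi for k, xi in zip(key, x)) % P
    return H(s, x)

k = [random.randrange(P) for _ in range(n)]            # master key
z = [1, P - 2, 0, 1]                                   # constraint vector
delta = random.randrange(1, P)                         # secret shift
k_z = [(ki - delta * zi) % P for ki, zi in zip(k, z)]  # constrained key

x_ok = (2, 1, 5, 0)    # <x_ok, z> = 2 + (P - 2) = 0 (mod P)
x_bad = (1, 0, 0, 0)   # <x_bad, z> = 1 != 0

assert F(k_z, x_ok) == F(k, x_ok)      # agrees on the constrained set
assert F(k_z, x_bad) != F(k, x_bad)    # differs elsewhere
```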
Last updated:  2024-04-22
Worst-Case to Average-Case Hardness of LWE: A Simple and Practical Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems, specifically the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem, to LWE. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any probabilistic polynomial-time algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing the practical security implications.
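As a quick numerical reminder of the LWE/BDD connection the abstract builds on (an illustration, not the paper's reduction): an LWE sample $b = As + e \bmod q$ is exactly a bounded-distance-decoding target for the $q$-ary lattice generated by $A$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 8, 16, 97
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)          # small error vector
b = (A @ s + e) % q                      # LWE sample

# b lies within ||e|| of the lattice point A s (mod q): recover the
# centered residue and check its norm equals that of the planted error.
residue = ((b - A @ s + q // 2) % q) - q // 2
assert np.linalg.norm(residue) == np.linalg.norm(e)
print("distance from b to the lattice point A s (mod q):",
      np.linalg.norm(residue))
```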
Last updated:  2024-04-22
POKE: A Framework for Efficient PKEs, Split KEMs, and OPRFs from Higher-dimensional Isogenies
Andrea Basso
We introduce a new framework, POKE, to build cryptographic protocols from irrational isogenies using higher-dimensional representations. The framework enables two parties to manipulate higher-dimensional representations of isogenies to efficiently compute their pushforwards, and ultimately to obtain a shared secret. We provide three constructions based on POKE: the first is a PKE protocol, which is one of the most compact post-quantum PKEs and possibly the most efficient isogeny-based PKE to date. We then introduce a validation technique to ensure the correctness of uniSIDH public keys: by combining the validation method with a POKE-based construction, we obtain a split KEM, a primitive that generalizes NIKEs and can be used to instantiate a post-quantum version of Signal's X3DH protocol. The third construction builds upon the split KEM and its validation method to obtain a round-optimal verifiable OPRF. It is the first such construction that does not require more than $\lambda$ isogeny computations, and it is significantly more compact and more efficient than all other isogeny-based OPRFs.
Last updated:  2024-04-22
flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size
Ariel Gabizon and Dmitry Khovratovich
We present a protocol for checking that the values of a committed polynomial $\phi(X)$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $m$ are contained in a table $T\in \mathbb{F}^N$. After an $O(N \log^2 N)$ preprocessing step, the prover algorithm runs in *quasilinear* time $O(m\log^2 m)$. We improve upon the recent breakthrough results Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size $N$, with prover time $O(m^2+m\log N)$ and $O(m^2)$, respectively. We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.
Last updated:  2024-04-22
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
We construct two new accumulation schemes. The first is for checking that $\ell$ read and write operations were performed correctly on a memory of size $T$. The prover time is entirely independent of $T$ and only requires committing to $6\ell$ field elements, an over $100\times$ improvement over prior work. The second is for deterministic computations. It does not require committing to the intermediate wires of the computation, but only to the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible, and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS compiler (Crypto 2021), these protocols yield an efficient, incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
Last updated:  2024-04-22
Complete group law for genus 2 Jacobians on Jacobian coordinates
Elif Ozbay Gurler and Huseyin Hisil
This manuscript provides complete, inversion-free, and explicit group law formulas in Jacobian coordinates for genus 2 hyperelliptic curves of the form $y^2 = x^5 + a_3 x^3 + a_2 x^2 + a_1 x + a_0$ over a field $K$ with $\mathrm{char}(K) \ne 2$. The formulas do not require polynomial arithmetic operations such as resultant, mod, or gcd computations, but only operations in $K$.
Last updated:  2024-04-22
Deep Selfish Proposing in Longest-Chain Proof-of-Stake Protocols
Roozbeh Sarenche, Svetla Nikova, and Bart Preneel
It has been shown that the selfish mining attack enables a miner to achieve an unfair relative revenue, posing a threat to the progress of longest-chain blockchains. Although selfish mining is a well-studied attack in the context of Proof-of-Work blockchains, its impact on longest-chain Proof-of-Stake (LC-PoS) protocols has yet to be addressed. This paper takes both theoretical and implementation-based approaches to analyzing the selfish proposing attack in LC-PoS protocols. We discuss how factors such as the nothing-at-stake phenomenon and proposer predictability in PoS protocols can make the selfish proposing attack in LC-PoS protocols more destructive than selfish mining in PoW. In the first part of the paper, we use combinatorial tools to theoretically assess the selfish proposer's block ratio in simplistic LC-PoS environments under simplified network conditions. However, these theoretical tools and classical MDP-based approaches cannot be applied to analyze the selfish proposing attack in real-world, more complicated LC-PoS environments. To overcome this issue, in the second part of the paper, we employ deep reinforcement learning techniques to find near-optimal selfish proposing strategies in more sophisticated protocols. The tool implemented in the paper can help analyze the selfish proposing attack across diverse blockchain protocols with different reward mechanisms, predictability levels, and network conditions.
Last updated:  2024-04-22
How to Lose Some Weight - A Practical Template Syndrome Decoding Attack
Sebastian Bitzer, Jeroen Delvaux, Elena Kirshanova, Sebastian Maaßen, Alexander May, and Antonia Wachter-Zeh
We study the hardness of the Syndrome Decoding problem, the basis of most code-based cryptographic schemes such as Classic McEliece, in the presence of side-channel information. We use ChipWhisperer equipment to perform a template attack on Classic McEliece running on an ARM Cortex-M4, and accurately classify the Hamming weights of consecutive 32-bit blocks of the secret error vector. With these weights at hand, we optimize Information Set Decoding algorithms. Technically, we show how to speed up information set decoding via a dimension reduction, additional parity-check equations, and an improved information set search, all derived from the Hamming weight information. Consequently, using our template attack, we can practically recover an error vector in dimension n=2197 in a matter of seconds. Without side-channel information, such an instance has a complexity of around 88 bits. We also estimate how our template attack affects the security of the proposed McEliece parameter sets. Roughly speaking, even an error-prone leak of our Hamming weight information leads to a security drop of 89 bits for n=3488.
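A back-of-the-envelope sketch shows why the per-block Hamming weights are so valuable: blocks classified as weight zero contain no error positions, so their columns can be deleted before running information set decoding. The parameters below are illustrative (only the dimension matches the instance mentioned above).

```python
import random

n, t, block = 2197, 37, 32               # dimension, error weight (illustrative), block size
e = [0] * n
for i in random.sample(range(n), t):     # plant t random error positions
    e[i] = 1

# Per-block Hamming weights, as a template attack would classify them
weights = [sum(e[i:i + block]) for i in range(0, n, block)]

# Every weight-0 block is error-free: all its coordinates can be dropped
removed = sum(min(block, n - i)
              for i, w in zip(range(0, n, block), weights) if w == 0)
zero_blocks = sum(1 for w in weights if w == 0)
print(f"{zero_blocks} of {len(weights)} blocks are error-free;")
print(f"ISD dimension shrinks from {n} to {n - removed}")
```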
Last updated:  2024-04-22
New SAT-based Model for Quantum Circuit Decision Problem: Searching for Low-Cost Quantum Implementation
Jingwen Chen, Qun Liu, Yanhong Fan, Lixuan Wu, Boyun Li, and Meiqin Wang
In recent years, quantum technology has developed rapidly. As security analyses of symmetric ciphers continue to emerge, many require an evaluation of the resources needed for the quantum circuit implementation of the encryption algorithm. In this regard, we propose the quantum circuit decision problem, which asks whether there exists a quantum circuit for a given permutation f using M ancilla qubits and no more than K quantum gates within circuit depth D. Firstly, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we propose an improved SAT-based model incorporating three metrics of quantum circuits. The model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal using SAT solvers, which proves the optimality of circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides an optimal quantum circuit that requires only 8 gates with a depth of 31. Furthermore, our model can be generalized to linear layers, and it improves the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.
Last updated:  2024-04-22
WESP: An encryption method that, as the key size increases, requires an exponentially growing time to break
Sam Widlund
WESP is a new encryption algorithm based on equation systems, in which the equations are generated using the values of tables that act as the encryption key, and the equations have features making them suitable for cryptographic use. The algorithm is defined and its properties are discussed. Besides describing the algorithm, we also present reasons why the algorithm works the way it does. The key size in WESP can be altered and has no upper limit, and typically the key size is larger than commonly used keys. A calculation is presented of how many bytes can be securely encrypted before the algorithm might start to repeat its sequence of encrypting bytes, and it is shown that this period can be adjusted to be arbitrarily large. It is also shown that within this period the resulting stream of encrypting bytes is statistically uniformly distributed. Furthermore, it is shown that if the encryption tables are not known, the equations in the system of equations cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known; thus the encryption cannot be broken in closed form. Finally, we calculate, for all symbols used in the algorithm, the minimum number of trials needed in order to be able to verify the trials. Since the algorithm constantly updates key values, verification becomes impossible if the equations are not evaluated in order. The calculation shows that the minimum number of trials required, i.e., the time required to break the encryption, grows exponentially as the key size increases. Since there is no upper limit on the key size, there is no upper limit on the time required to break the encryption either.
Last updated:  2024-04-22
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-04-22
Efficient KZG-based Univariate Sum-check and Lookup Argument
Yuncong Zhang, Shi-Feng Sun, and Dawu Gu
We propose a novel KZG-based sum-check scheme, dubbed $\mathsf{Losum}$, with optimal efficiency. In particular, its proving cost is one multi-scalar multiplication of size $k$, the number of non-zero entries in the vector; its verification cost is one pairing plus one group scalar multiplication; and the proof consists of only one group element. Using $\mathsf{Losum}$ as a component, we then construct a new lookup argument, named $\mathsf{Locq}$, which enjoys a smaller proof size and a lower verification cost than the state-of-the-art schemes $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++. Specifically, the proving cost of $\mathsf{Locq}$ is comparable to $\mathsf{cq}$, keeping the advantage that the proving cost is independent of the table size after preprocessing. For verification, $\mathsf{Locq}$ costs four pairings, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ require five, five and six pairings, respectively. For proof size, a $\mathsf{Locq}$ proof consists of four $\mathbb{G}_1$ elements and one $\mathbb{G}_2$ element; when instantiated with the BLS12-381 curve, the proof size of $\mathsf{Locq}$ is $2304$ bits, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ have $3840$, $3328$ and $2944$ bits, respectively. Moreover, $\mathsf{Locq}$ is zero-knowledge, like $\mathsf{cq}$+ and $\mathsf{cq}$++, whereas $\mathsf{cq}$ is not. $\mathsf{Locq}$ is more efficient even when compared to the non-zero-knowledge (and more efficient) versions of $\mathsf{cq}$+ and $\mathsf{cq}$++.
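The quoted proof size is easy to reproduce from the compressed element sizes on BLS12-381 ($384$ bits in $\mathbb{G}_1$, $768$ bits in $\mathbb{G}_2$):

```python
# Proof-size arithmetic for Locq on BLS12-381 (compressed encodings).
G1, G2 = 384, 768                  # bits per compressed group element
locq_proof = 4 * G1 + 1 * G2       # four G1 elements plus one G2 element
assert locq_proof == 2304          # bits, matching the figure above
```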
Last updated:  2024-04-22
Lattice-Based Succinct Mercurial Functional Commitment for Circuits: Definitions and Constructions
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, Zoe L. Jiang, and Min Xie
Vector commitments (VC) have gained a lot of attention due to their wide use in applications such as blockchains and accumulators. Mercurial vector commitments and mercurial functional commitments (MFC), as significant variants of VC, are the central techniques used to construct more advanced cryptographic primitives, such as zero-knowledge sets and zero-knowledge functional elementary databases (ZK-FEDB). However, current MFCs only support linear functions, which limits their applications: they can only build ZK-FEDBs that support linear function queries. Besides, to the best of our knowledge, the existing MFCs and ZK-FEDBs, including the one proposed by Zhang and Deng (ASIACRYPT '23) using RSA accumulators, are all in the group model and cannot resist attacks by quantum computers. To break these limitations, we formalize the first system model and security model of MFC for circuits. Then, we exploit specific properties of a new falsifiable assumption, the $\mathsf{BASIS}$ assumption proposed by Wee and Wu (EUROCRYPT '23), to construct the first lattice-based succinct mercurial functional commitment for circuits. As an application, we show that our construction can be used to build the first lattice-based ZK-FEDB directly within the existing generic framework.
Last updated:  2024-04-22
On the {\sf P/poly} Validity of the Agr17 FE Scheme
Yupu Hu, Siyue Dong, Baocang Wang, Jun Liu, Yanbin Pan, and Xingting Dong
Functional encryption (FE) is a cutting-edge research topic in cryptography. The Agr17 FE scheme is a major scheme in the FE area. This scheme had the novelty of “being applied to the group of general functions (that is, {\sf P/poly} functions) without IO”. It took the BGG+14 ABE scheme as a bottom structure, which was upgraded into a “partially hiding attribute” scheme and combined with a fully homomorphic encryption (FHE) scheme. However, the Agr17 FE scheme had a strange operation: for noise cancellation at the FHE decryption stage, it used bulky “searching noise” rather than elegant “filtering”. It searched the total modulus interval, so the FHE modulus had to be polynomially large. In this paper we discuss the {\sf P/poly} validity of the Agr17 FE scheme. First, we obtain the result that the Agr17 FE scheme is {\sf P/poly} invalid. In more detail, when the Agr17 FE scheme is applied to the group of randomly chosen {\sf P/poly} Boolean functions, the FHE modulus at the “searching” stage cannot be polynomially large. Our analysis is based on three restrictions of the BGG+14 ABE scheme: (1) The modulus of the BGG+14 ABE scheme should be adapted to be super-polynomially large if it is applied to the group of randomly chosen {\sf P/poly} functions. (2) The modulus of the BGG+14 ABE scheme cannot be switched. (3) If the BGG+14 ABE scheme is upgraded into a “partially hiding attribute” scheme, the permitted operations on the hidden part of the attribute can only be affine operations. Then, to check whether {\sf P/poly} validity can be obtained by modifying the scheme, we consider two modified versions. The first modified version controls the FHE noise by repeatedly applying bootstrapping, and replaces a modular inner product with an arithmetic inner product. The second modified version replaces the search over the modulus interval with a search over a public noise interval, hoping that such a noise interval is polynomially large and tolerating a modulus which may be super-polynomially large. The first modified version may be {\sf P/poly} valid, but it is weaker. There is no evidence to support the {\sf P/poly} validity of the second modified version. Finally, we present an additional conclusion: there is no evidence to support the {\sf P/poly} validity of the GVW15 PE scheme.
Last updated:  2024-04-22
$\mathsf{Cougar}$: Cubic Root Verifier Inner Product Argument under Discrete Logarithm Assumption
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, and Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct zero-knowledge proof (ZKP) systems, a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cubic root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square root verifier IPAs under the DL assumption. Our main objective is to overcome the limitation of square root verifier complexity in the DL setting. To achieve this, we combine two distinct square root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Furthermore, we provide a soundness proof for $\mathsf{Cougar}$ under the DL assumption.
Last updated:  2024-04-22
Subverting Cryptographic Protocols from A Fine-Grained Perspective - A Case Study on 2-Party ECDSA
Jialiu Cheng, Yi Wang, Rongmao Chen, and Xinyi Huang
The revelations of Edward Snowden in 2013 rekindled concerns within the cryptographic community regarding the potential subversion of cryptographic systems. Bellare et al. (CRYPTO'14) introduced the notion of Algorithm Substitution Attacks (ASAs), which aim to covertly leak sensitive information by undermining individual cryptographic primitives. In this work, we delve deeply into the realm of ASAs against protocols built upon cryptographic primitives. In particular, we revisit the existing ASA model proposed by Berndt et al. (AsiaCCS'22), providing a more fine-grained perspective. We introduce a novel ASA model tailored for protocols, capable of capturing a wide spectrum of subversion attacks. Our model features a modular representation of subverted parties within protocols, along with fine-grained definitions of undetectability. To illustrate the practicality of our model, we applied it to Lindell's two-party ECDSA protocol (CRYPTO'17), unveiling a range of ASAs targeting the protocol's parties with the objective of extracting secret key shares. Our work offers a comprehensive ASA model suited to cryptographic protocols, providing a useful framework for understanding ASAs against protocols.
Last updated:  2024-04-22
Non-interactive Blind Signatures from Lattices
Foteini Baldimtsi, Jiaqi Cheng, Rishab Goyal, and Aayush Yadav
Blind signatures enable a receiver to obtain signatures on messages of its choice without revealing any message to the signer. Round-optimal blind signatures are designed as a two-round interactive protocol between a signer and receiver. Coincidentally, the choice of message is not important in many applications, and is routinely set as a random (unstructured) message by a receiver. With the goal of designing more efficient blind signatures for such applications, Hanzlik (Eurocrypt '23) introduced a new variant called non-interactive blind signatures (NIBS). These allow a signer to asynchronously generate partial signatures for any recipient such that only the intended recipient can extract a blinded signature for a random message. This bypasses the two-round barrier for traditional blind signatures, yet enables many known applications. Hanzlik provided new practical designs for NIBS from bilinear pairings. In this work, we investigate efficient NIBS with post-quantum security. We design the first practical NIBS, as well as non-interactive partially blind signatures called tagged NIBS, from lattice-based assumptions. We also propose a new generic paradigm for NIBS from circuit-private leveled homomorphic encryption achieving optimal-sized signatures (i.e., same as any non-blind signature). Finally, we propose new enhanced security properties for NIBS, that could be of practical and theoretical interest.