Papers updated in last 31 days (351 results)

Last updated:  2024-07-16
Identity-Based Matchmaking Encryption, Revisited: Improved Constructions with Strong Security
Sohto Chiku, Keitaro Hashimoto, Keisuke Hara, and Junji Shikata
Identity-based matchmaking encryption (IB-ME) [Ateniese et al. Crypto 2019] allows users to communicate privately in an anonymous and authenticated manner. After the seminal paper by Ateniese et al., a lot of work has been done on the security and construction of IB-ME. In this work, we revisit the security definitions of IB-ME and provide improved constructions of it. First, we classify the existing security notions of IB-ME, systematically categorizing privacy into three categories (CPA, CCA, and privacy in the case of mismatch) and authenticity into four categories (NMA and CMA, both against insiders and outsiders). In particular, we reconsider privacy in the case where the sender's identity is mismatched during decryption, and provide a new simple security game, called mismatch security, capturing its essence. Second, we propose efficient and strongly secure IB-ME schemes from the bilinear Diffie-Hellman assumption in the random oracle model and from anonymous identity-based encryption, identity-based signature, and reusable extractors in the standard model. The first scheme is based on Boneh-Franklin IBE similar to the Ateniese et al. scheme, but ours achieves a more compact decryption key and ciphertext and stronger CCA-privacy, CMA-authenticity, and mismatch security. The second scheme is an improved generic construction, which achieves not only stronger security but also the shortest ciphertext among existing generic constructions. Through this construction, we obtain, for example, a more efficient scheme from the symmetric external Diffie-Hellman assumption in the standard model, and a practical scheme from lattices in the quantum random oracle model.
Last updated:  2024-07-15
A Univariate Attack against the Limited-Data Instance of Ciminion
Augustin Bariant
With the increasing interest in advanced protocols for Multi-Party Computation, Fully-Homomorphic Encryption or Zero-Knowledge proofs, a need for cryptographic algorithms with new constraints has emerged. These algorithms, called Arithmetization-Oriented ciphers, seek to minimize the number of field multiplications in large finite fields $\mathbb{F}_{2^n}$ or $\mathbb{F}_{p}$. Among them, Ciminion is an encryption algorithm proposed by Dobraunig et al. at Eurocrypt 2021. In this paper, we present a new univariate modelization of a variant of Ciminion proposed by the designers. This instance restricts the attacker to at most $2^{s/2}$ data, where $s$ is the security level. Because the designers chose to reduce the number of rounds in that specific attacker model, we are able to attack the cipher for large security levels. We also propose some slight modifications of Ciminion that would overcome this vulnerability.
Last updated:  2024-07-15
Crypto Dark Matter on the Torus: Oblivious PRFs from shallow PRFs and FHE
Martin R. Albrecht, Alex Davidson, Amit Deo, and Daniel Gardham
Partially Oblivious Pseudorandom Functions (POPRFs) are 2-party protocols that allow a client to learn pseudorandom function (PRF) evaluations on inputs of its choice from a server. The client submits two inputs, one public and one private. The security properties ensure that the server cannot learn the private input, and the client cannot learn more than one evaluation per POPRF query. POPRFs have many applications including password-based key exchange and privacy-preserving authentication mechanisms. However, most constructions are based on classical assumptions, and those with post-quantum security suffer from large efficiency drawbacks. In this work, we construct a novel POPRF from lattice assumptions and the “Crypto Dark Matter” PRF candidate (TCC’18) in the random oracle model. At a conceptual level, our scheme exploits the alignment between this family of PRF candidates, which rely on mixed-modulus computations, and programmable bootstrapping in the torus fully homomorphic encryption scheme (TFHE). We show that our construction achieves malicious client security based on circuit-private FHE, and client privacy from the semantic security of the FHE scheme. We further explore a heuristic approach to extend our scheme to support verifiability, based on the difficulty of computing cheating circuits in low depth. This would yield a verifiable (P)OPRF. We provide a proof-of-concept implementation and preliminary benchmarks of our construction. For the core online OPRF functionality, we require amortised 10.0KB communication per evaluation and a one-time per-client setup communication of 2.5MB.
Last updated:  2024-07-15
Phase Modulation Side Channels: Jittery JTAG for On-Chip Voltage Measurements
Colin O'Flynn
Measuring the fluctuations of the clock phase of a target was identified as a leakage source in early electromagnetic side-channel investigations. Despite this, only recently has directly measuring the clock phase (or jitter) of digital signals from a target been connected to being a source of exploitable leakage. As the phase of a clock output will be related to signal propagation delay through the target, and this propagation delay is related to voltage, this means that most digital devices perform an unintended phase modulation (PM) of their internal voltage onto clock output phases. This paper first demonstrates an unprofiled CPA attack against a Cortex-M microcontroller using the phase of a clock output, observing the signal on both optically isolated and capacitively isolated paths. The unprofiled attack takes only 2-4x more traces than an attack using a classic shunt-resistor measurement. It is then demonstrated how the JTAG bypass mode can be used to force a clock through a digital device. This forced clock signal can then be used as a highly effective oscilloscope that is located on the target device. As the attack does not require modifications to the device (such as capacitor removal or heat spreader removal) it is difficult to detect using existing countermeasures. The example attack over JTAG uses an unprofiled CPA attack, requiring only about 5x more traces than an ideal shunt-resistor based measurement. In addition, a version of this attack using a fault correlation analysis attack is also demonstrated. Countermeasures are discussed, and a simple resampling countermeasure is tested. All tools, both offensive and defensive, presented in the paper have been released under open-source licenses.
Last updated:  2024-07-15
Improved Universal Thresholdizer from Iterative Shamir Secret Sharing
Jung Hee Cheon, Wonhee Cho, and Jiseung Kim
The universal thresholdizer, introduced at CRYPTO'18, is a cryptographic scheme that transforms any cryptosystem into a threshold variant, thereby enhancing its applicability in threshold cryptography. It enables black-box construction of one-round threshold signature schemes based on the Learning with Errors problem, and similarly, facilitates one-round threshold chosen-ciphertext-attack secure public key encryption when integrated with non-threshold schemes. Current constructions of the universal thresholdizer are fundamentally built upon linear secret sharing schemes. One approach employs Shamir's secret sharing, which lacks compactness and results in ciphertext sizes of $O(N \log N)$, and another approach uses the $\{0,1\}$-linear secret sharing scheme ($\{0,1\}$-LSSS), which is compact but induces high communication costs due to requiring $O(N^{5.3})$ secret shares. In this work, we introduce a communication-efficient universal thresholdizer by revising the linear secret sharing scheme. We propose a specialized linear secret sharing scheme, called TreeSSS, which reduces the number of required secret shares to $O(N^{3+o(1)})$ while maintaining the compactness of the universal thresholdizer. TreeSSS can also serve as a subroutine for constructing lattice-based $t$-out-of-$N$ threshold cryptographic primitives such as threshold fully homomorphic encryption and threshold signatures. In this context, TreeSSS offers the advantage of lower communication overhead due to the reduced number of secret shares involved.
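As background for the Shamir-based approach mentioned above, the following is a minimal sketch of $t$-out-of-$N$ Shamir secret sharing over a prime field in Python (illustrative only; the paper's TreeSSS and the $\{0,1\}$-LSSS are not shown, and the prime and parameters are arbitrary):

    # Minimal t-out-of-N Shamir secret sharing over F_p (illustrative sketch).
    import random

    P = 2**61 - 1  # an arbitrary Mersenne prime used as the field modulus

    def share(secret, t, n):
        # Random polynomial of degree t-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret from any t shares.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789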
Last updated:  2024-07-15
Round Efficient Byzantine Agreement from VDFs
Poulami Das, Lisa Eckey, Sebastian Faust, Julian Loss, and Monosij Maitra
Byzantine agreement (BA) is a fundamental primitive in distributed systems and has received huge interest as an important building block for blockchain systems. Classical Byzantine agreement considers a setting where $n$ parties with fixed, known identities want to agree on an output in the presence of an adversary. Motivated by blockchain systems, the assumption of fixed identities is weakened by using a \emph{resource-based model}. In such models, parties do not have fixed known identities but instead have to invest some expensive resources to participate in the protocol. Prominent examples of such resources are computation (measured by, e.g., proofs-of-work) or money (measured by proofs-of-stake). Unlike in the classical setting, where BA without trusted setup (e.g., a PKI or an unpredictable beacon) is impossible for $t \geq n/3$ corruptions, in such resource-based models BA can be constructed for the optimal threshold of $t < n/2$. In this work, we investigate BA without a PKI in the model where parties have restricted computational resources. Concretely, we consider sequential computation modeled via computing a verifiable delay function (VDF) and establish the following results: Positive Result: We present the first protocol for BA with expected constant round complexity and termination under adaptive corruption, honest majority and without a PKI. Earlier work achieved round complexity $O(n\kappa^2)$ (CRYPTO'15) or $O(\kappa)$ (PKC'18), where $\kappa$ is the security parameter. Negative Result: We give the first lower bound on the communication complexity of BA in a model where parties have restricted computational resources. Concretely, we show that a multicast complexity of $O(\sqrt{n})$ is necessary even if the parties have access to a VDF oracle.
Last updated:  2024-07-15
SIGMA: Secure GPT Inference with Function Secret Sharing
Kanav Gupta, Neha Jawalkar, Ananta Mukherjee, Nishanth Chandran, Divya Gupta, Ashish Panwar, and Rahul Sharma
Secure 2-party computation (2PC) enables secure inference that offers protection for both proprietary machine learning (ML) models and sensitive inputs to them. However, the existing secure inference solutions suffer from high latency and communication overheads, particularly for transformers. Function secret sharing (FSS) is a recent paradigm for obtaining efficient 2PC protocols with a preprocessing phase. We provide SIGMA, the first end-to-end system for secure transformer inference based on FSS. By constructing new FSS-based protocols for complex machine learning functionalities, such as Softmax, GeLU and SiLU, and also accelerating their computation on GPUs, SIGMA improves the latency of secure inference of transformers by $11-19\times$ over the state-of-the-art that uses preprocessing and GPUs. We present the first secure inference of generative pre-trained transformer (GPT) models. In particular, SIGMA executes Meta's LLaMA2 (available on HuggingFace) with 13 billion parameters in 44 seconds and GPT2 in 1.6 seconds.
Last updated:  2024-07-15
Breaking Free: Efficient Multi-Party Private Set Union Without Non-Collusion Assumptions
Minglang Dong, Yu Chen, Cong Zhang, and Yujie Bai
A multi-party private set union (MPSU) protocol enables $m$ $(m > 2)$ parties, each holding a set, to collectively compute the union of their sets without revealing any additional information to other parties. There are two main categories of MPSU protocols: The first builds on public-key techniques. All existing works in this category involve a super-linear number of public-key operations, resulting in poor practical efficiency. The second builds on oblivious transfer and symmetric-key techniques. The only existing work in this category is proposed by Liu and Gao (ASIACRYPT 2023), which features the best concrete performance among all existing protocols, despite its super-linear computation and communication. Unfortunately, it does not achieve the standard semi-honest security, as it inherently relies on a non-collusion assumption, which is unlikely to hold in practice. Therefore, the problem of constructing a practical MPSU protocol based on oblivious transfer and symmetric-key techniques in the standard semi-honest model remains open. Furthermore, there is no MPSU protocol achieving both linear computation and linear communication complexity, which leaves another unresolved problem. In this work, we resolve these two open problems. - We propose the first MPSU protocol based on oblivious transfer and symmetric-key techniques in the standard semi-honest model. This protocol is $4.9-9.3 \times$ faster than Liu and Gao in the LAN setting. Concretely, our protocol requires only $3.6$ seconds in the online phase for 3 parties with sets of $2^{20}$ items each. - We propose the first MPSU protocol achieving both linear computation and linear communication complexity, based on public-key operations. This protocol has the lowest overall communication costs and shows a factor of $3.0-36.5\times$ improvement in terms of overall communication compared to Liu and Gao. We implement our protocols and conduct extensive experiments to compare the performance of our protocols and the state-of-the-art. To the best of our knowledge, our implementation is the first correct and secure implementation of MPSU that reports on large-size experiments.
Last updated:  2024-07-14
On the Security of KOS
Benjamin E. Diamond
We study the security of the random oblivious transfer extension protocol of Keller, Orsini, and Scholl (CRYPTO '15), whose security proof was recently invalidated by Roy (CRYPTO '22). We show that KOS is asymptotically secure. Our proof involves a subtle analysis of the protocol's "correlation check", and introduces several new techniques. We also study the protocol's concrete security. We establish concrete security for security parameter values on the order of 5,000. We present evidence that a stronger result than ours—if possible—is likely to require radically new ideas.
Last updated:  2024-07-14
STIR: Reed–Solomon Proximity Testing with Fewer Queries
Gal Arnon, Alessandro Chiesa, Giacomo Fenzi, and Eylon Yogev
We present STIR (Shift To Improve Rate), an interactive oracle proof of proximity (IOPP) for Reed-Solomon codes that achieves the best known query complexity of any concretely efficient IOPP for this problem. For $\lambda$ bits of security, STIR has query complexity $O(\log d + \lambda \cdot \log \log d )$, while FRI, a popular protocol, has query complexity $O(\lambda \cdot \log d )$ (including variants of FRI based on conjectured security assumptions). STIR relies on a new technique for recursively improving the rate of the tested Reed-Solomon code. We provide an implementation of STIR compiled to a SNARK. Compared to a highly-optimized implementation of FRI, STIR achieves an improvement in argument size that ranges from $1.25\times$ to $2.46\times$ depending on the chosen parameters, with similar prover and verifier running times. For example, in order to achieve 128 bits of security for degree $2^{26}$ and rate $1/4$, STIR has argument size $114$ KiB, compared to $211$ KiB for FRI.
Last updated:  2024-07-14
A Practical and Scalable Implementation of the Vernam Cipher, under Shannon Conditions, using Quantum Noise
Adrian Neal
The one-time pad cipher is renowned for its theoretical perfect security, yet its practical deployment is primarily hindered by the key-size and distribution challenge. This paper introduces a novel approach to key distribution called q-stream, designed to make symmetric-key cryptography, and the one-time pad cipher in particular, a viable option for contemporary secure communications, and specifically, post-quantum cryptography, leveraging quantum noise and combinatorics to ensure secure and efficient key-distribution between communicating parties. We demonstrate that our key-distribution mechanism has a variable, yet quantifiable hardness of at least 504 bits, established from immutable mathematical laws, rather than conjectured intractability, and how we overcome the one-time pad key-size issue with a localised quantum-noise seeded key-generation function, having a system hardness of at least 2304 bits, while introducing sender authentication and message integrity. Whilst the proposed solution has potential applications in fields requiring very high levels of security, such as military communications and large financial transactions, we show from our research with a prototype of q-stream that it is sufficiently practical and scalable for use in common browser-based web-applications, without any modification to the browser (i.e. plug-ins), running above SSL/TLS at the application level, where in tests it achieved a key-distribution rate of around 7 million keys over a 5-minute surge-window, in a single (multi-threaded) instance of q-stream.
Last updated:  2024-07-14
A Note on ``Secure and Distributed IoT Data Storage in Clouds Based on Secret Sharing and Collaborative Blockchain''
Zhengjun Cao and Lihua Liu
We show that the data storage scheme [IEEE/ACM Trans. Netw., 2023, 31(4), 1550-1565] is flawed due to its faulty secret sharing protocol, which requires that some random $4\times 4$ matrices over the finite field $F_p$ ($p$ a prime) be invertible. But we find that its mathematical proof of invertibility is incorrect. To fix this flaw, one needs to check the invertibility of all 35 matrices so as to generate the proper 7 secret shares.
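The suggested fix amounts to verifying that each candidate matrix is actually invertible over $F_p$; below is a minimal sketch of such a check via Gaussian elimination in Python (illustrative only; the $4\times 4$ dimension matches the scheme, but the prime used here is arbitrary):

    # Check invertibility of a square matrix over F_p by Gaussian elimination (sketch).
    import random

    def is_invertible_mod_p(M, p):
        A = [row[:] for row in M]
        n = len(A)
        for col in range(n):
            # Find a pivot row with a nonzero entry in this column.
            pivot = next((r for r in range(col, n) if A[r][col] % p != 0), None)
            if pivot is None:
                return False  # singular over F_p
            A[col], A[pivot] = A[pivot], A[col]
            inv = pow(A[col][col], p - 2, p)  # field inverse, since p is prime
            for r in range(col + 1, n):
                factor = A[r][col] * inv % p
                A[r] = [(A[r][c] - factor * A[col][c]) % p for c in range(n)]
        return True

    p = 7919  # arbitrary prime for the example
    M = [[random.randrange(p) for _ in range(4)] for _ in range(4)]
    print(is_invertible_mod_p(M, p))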
Last updated:  2024-07-13
LR-OT: Leakage-Resilient Oblivious Transfer
Francesco Berti, Carmit Hazay, and Itamar Levi
Oblivious Transfer (OT) is a fundamental cryptographic primitive and a crucial component of practical secure protocols. OT is typically implemented in software, and one way to accelerate its running time is by using hardware implementations. However, such implementations are vulnerable to side-channel attacks (SCAs). On the other hand, protecting interactive protocols against SCA is highly challenging because of their longer secrets (which include inputs and randomness), more complicated design, and the running of multiple instances. Consequently, there are no truly practical leakage-resistant OT protocols yet. In this paper, we first introduce two tailored indistinguishability-based security definitions for leakage-resilient OT, focusing on protecting the sender's state. Second, we propose a practical semi-honest secure OT protocol that achieves these security levels while minimizing the assumptions on the protocol's building blocks and the use of a secret state. Finally, we extend our protocol to support sequential composition and explore efficiency-security tradeoffs.
Last updated:  2024-07-13
Predicting one class of truncated matrix congruential generators with unknown parameters
Changcun Wang and Zhaopeng Dai
Matrix congruential generators are an important class of pseudorandom number generators. In this paper, we show how to predict a class of matrix congruential generators with unknown parameters. Given a few truncated digits of high-order bits output by a matrix congruential generator, we give a method based on lattice reduction to recover the parameters and the initial state of the generator.
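To fix ideas about the object under attack, here is a toy matrix congruential generator that reveals only the high-order bits of each state coordinate, in Python (illustrative only; the dimensions, modulus, and truncation are arbitrary, and the lattice-based recovery itself is not shown):

    # Toy matrix congruential generator X_{i+1} = A*X_i + C (mod m) that outputs
    # only the truncated high-order bits of each state coordinate (sketch).
    import random

    m, dim, kept_bits = 2**32, 2, 8
    A = [[random.randrange(m) for _ in range(dim)] for _ in range(dim)]
    C = [random.randrange(m) for _ in range(dim)]
    X = [random.randrange(m) for _ in range(dim)]

    def step(state):
        return [(sum(A[i][j] * state[j] for j in range(dim)) + C[i]) % m
                for i in range(dim)]

    def truncate(state):
        # Reveal only the top `kept_bits` bits of each 32-bit coordinate.
        return [x >> (32 - kept_bits) for x in state]

    outputs = []
    for _ in range(5):
        X = step(X)
        outputs.append(truncate(X))
    print(outputs)  # an attacker sees these truncated digits, not A, C, or X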
Last updated:  2024-07-13
Optimized Privacy-Preserving Clustering with Fully Homomorphic Encryption
Chen Yang, Jingwei Chen, Wenyuan Wu, and Yong Feng
Clustering is a crucial unsupervised learning method extensively used in the field of data analysis. For analyzing big data, outsourced computation is an effective solution, but privacy concerns arise when sensitive information is involved. Fully homomorphic encryption (FHE) enables computations on encrypted data, making it ideal for such scenarios. However, existing privacy-preserving clustering protocols based on FHE are often constrained by the high computational overhead incurred by FHE, typically requiring decryption and interactions after only one iteration of the clustering algorithm. In this work, we propose a more efficient approach to evaluate the one-hot vector for the index of the minimum in an array with FHE, which fully exploits the single-instruction-multiple-data parallelism of FHE schemes. By combining this with FHE bootstrapping, we present a practical FHE-based k-means clustering protocol whose required number of interaction rounds between the data owner and the server is optimal, i.e., it accomplishes the entire clustering process on encrypted data in a single round. We implement this protocol using the CKKS FHE scheme. Experiments show that our protocol significantly outperforms the state-of-the-art FHE-based k-means clustering protocols on various public datasets and achieves accuracy comparable to the plaintext result. Additionally, we adapt our protocol to support mini-batch k-means for large-scale datasets and report its performance.
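The core subroutine described above evaluates, under encryption, the one-hot indicator of the minimum entry of an array (e.g., the distances from a point to the cluster centroids); a plaintext sketch of that functionality in Python follows (illustrative only; the SIMD-packed FHE evaluation and the CKKS details are not shown):

    # Plaintext sketch of the "one-hot vector for the index of the minimum"
    # functionality used for cluster assignment in k-means; the protocol in the
    # paper evaluates this on encrypted data.
    def one_hot_argmin(values):
        m = min(values)
        first = values.index(m)           # break ties toward the first minimum
        return [1 if i == first else 0 for i in range(len(values))]

    distances = [4.2, 1.7, 3.3, 1.7]      # e.g., squared distances to 4 centroids
    print(one_hot_argmin(distances))      # [0, 1, 0, 0]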
Last updated:  2024-07-13
Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults
Simon Tollec, Vedad Hadžić, Pascal Nasahl, Mihail Asavoae, Roderick Bloem, Damien Couroussé, Karine Heydemann, Mathieu Jan, and Stefan Mangard
Fault injection attacks are a serious threat to system security, enabling attackers to bypass protection mechanisms or access sensitive information. To evaluate the robustness of CPU-based systems against these attacks, it is essential to analyze the consequences of the fault propagation resulting from the complex interplay between the software and the processor. However, current formal methodologies combining hardware and software face scalability issues due to the monolithic approach used. To address this challenge, this work formalizes the $k$-fault resistant partitioning notion to solve the fault propagation problem when assessing redundancy-based hardware countermeasures in a first step. Proven security guarantees can then reduce the remaining hardware attack surface when introducing the software in a second step. First, we validate our approach against previous work by reproducing known results on cryptographic circuits. In particular, we outperform state-of-the-art tools for evaluating AES under a three-fault-injection attack. Then, we apply our methodology to the OpenTitan secure element and formally prove the security of its CPU's hardware countermeasure to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a $k$-fault resistant partitioning. We also report a potential exploitation of the register file vulnerability in two other software use cases. Finally, we provide a security fix for the register file, prove its robustness, and integrate it into the OpenTitan project.
Last updated:  2024-07-13
Prime Masking vs. Faults - Exponential Security Amplification against Selected Classes of Attacks
Thorben Moos, Sayandeep Saha, and François-Xavier Standaert
Fault injection attacks are a serious concern for cryptographic hardware. Adversaries may extract sensitive information from the faulty output that is produced by a cryptographic circuit after actively disturbing its computation. Alternatively, the information of whether an output would have been faulty, even if it is withheld from being released, may be exploited. The former class of attacks, which requires the collection of faulty outputs, such as Differential Fault Analysis (DFA), then either exploits some knowledge about the position of the injected fault or about its value. The latter class of attacks, which can be applied without ever obtaining faulty outputs, such as Statistical Ineffective Fault Attacks (SIFA), then either exploits a dependency between the effectiveness of the fault injection and the value to be faulted (e.g., an LSB stuck-at-0 only affecting odd numbers), denoted as SIFA-1, or a conditional propagation of a faulted value based on a sensitive intermediate (e.g., multiplication of a faulted value by 0 prevents propagation), denoted as SIFA-2. The aptitude of additive masking schemes, which were designed to prevent side-channel analysis, to also thwart fault attacks is typically assumed to be limited. Common fault models, such as toggle/bit-flip, stuck-at-0 or stuck-at-1 survive the recombination of Boolean shares well enough for generic attacks to succeed. More precisely, injecting a fault into one or multiple Boolean shares often results in the same, or at least a predictable, error appearing in the sensitive variable after recombination. In this work, we show that additive masking in prime-order fields breaks such relationships, causing frequently exploited biases to decrease exponentially in the number of shares. As a result, prime masking offers surprisingly strong protection against generic statistical attacks, which require a dependency between the effectiveness of an injected fault and the secret variable that is manipulated, such as SIFA-1. Operation-dependent statistical attacks, such as SIFA-2 and Fault Template Attacks (FTA), may still be performed against certain prime-field structures, even if they are masked with many shares. Yet, we analyze the corresponding cases and are able to provide specific guidelines on how to avoid vulnerabilities either at the cipher design or implementation level by making informed decisions about the primes, non-linear mappings and masked gadgets used. Since prime-field masking appears to be one of the rare instances of affordable countermeasures that naturally provide sound protection against side-channel analysis and certain fault injection attacks, we believe there is a strong incentive for developing new ciphers to leverage these advantages.
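A toy simulation of the recombination effect discussed above, in Python: a single-bit fault on one Boolean share always yields the same error on the unmasked value, whereas the same fault on an additive prime-field share yields a value-dependent error (illustrative only; two shares, a small prime, and a simple bit-flip model are assumed, and no real countermeasure is modeled):

    # Toy comparison: effect of flipping bit j of one share on the recombined secret.
    import random

    def boolean_error(secret, j, bits=8):
        s1 = random.randrange(2**bits)
        s2 = secret ^ s1                           # Boolean (XOR) sharing
        s1_faulted = s1 ^ (1 << j)
        return (s1_faulted ^ s2) ^ secret          # always equals 1 << j

    def prime_error(secret, j, p=251):
        s1 = random.randrange(p)
        s2 = (secret - s1) % p                     # additive sharing mod a prime
        s1_faulted = s1 ^ (1 << j)                 # bit flip on the share's encoding
        return ((s1_faulted + s2) - secret) % p    # depends on the random share

    secret = 42
    print({boolean_error(secret, 3) for _ in range(1000)})  # one fixed error: {8}
    print({prime_error(secret, 3) for _ in range(1000)})    # value-dependent errors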
Last updated:  2024-07-13
Permutation Superposition Oracles for Quantum Query Lower Bounds
Christian Majenz, Giulio Malavolta, and Michael Walter
We propose a generalization of Zhandry’s compressed oracle method to random permutations, where an algorithm can query both the permutation and its inverse. We show how to use the resulting oracle simulation to bound the success probability of an algorithm for any predicate on input-output pairs, a key feature of Zhandry’s technique that had hitherto resisted attempts at generalization to random permutations. One key technical ingredient is to use strictly monotone factorizations to represent the permutation in the oracle’s database. As an application of our framework, we show that the one-round sponge construction is unconditionally preimage resistant in the random permutation model. This proves a conjecture by Unruh.
Last updated:  2024-07-13
Bake It Till You Make It: Heat-induced Power Leakage from Masked Neural Networks
Dev M. Mehta, Mohammad Hashemi, David S. Koblah, Domenic Forte, and Fatemeh Ganji
Masking has become one of the most effective approaches for securing hardware designs against side-channel attacks. Regardless of the effort put into correctly implementing masking schemes on a field-programmable gate array (FPGA), leakage can be unexpectedly observed. This is due to the fact that the assumption underlying all masked designs, i.e., the leakages of different shares are independent of each other, may no longer hold in practice. In this regard, extreme temperatures have been shown to be an important factor in inducing leakage, even in correctly masked designs. This has previously been verified using an external heat generator (i.e., a climate chamber). In this paper, we examine whether the leakage can be induced using the circuit components themselves without making any changes to the design. Specifically, we target masked neural networks (NNs) in FPGAs, one of the main building blocks of which is block random access memory (BRAM). In this respect, thanks to the inherent characteristics of NNs, our novel internal heat generators leverage solely the memories devoted to storing the user’s input, especially when frequently writing alternating patterns into BRAMs. The possibility of observing first-order leakage is evaluated by considering one of the most recent and successful first-order secure masked NNs, namely ModuloNET. ModuloNET is specifically designed for FPGAs, where BRAMs are used to store inputs and intermediate computations. Our experimental results demonstrate that undesirable first-order leakage can be observed and exploited by increasing the temperature when an alternating input is applied to the masked NN. To give a better understanding of the impact of extreme heat, we further perform a similar test on the design using an external heat generator, where a similar conclusion can be drawn.
Last updated:  2024-07-12
Anonymous Outsourced Statekeeping with Reduced Server Storage
Dana Dachman-Soled, Esha Ghosh, Mingyu Liang, Ian Miers, and Michael Rosenberg
Strike-lists are a common technique for rollback and replay prevention in protocols that require that clients remain anonymous or that their current position in a state machine remain confidential. Strike-lists are heavily used in anonymous credentials, e-cash schemes, and trusted execution environments, and are widely deployed on the web in the form of Privacy Pass (PoPETS '18) and Google Private State Tokens. In such protocols, clients submit pseudorandom tokens associated with each action (e.g., a page view in Privacy Pass) or state transition, and the token is added to a server-side list to prevent reuse. Unfortunately, the size of a strike-list, and hence the storage required by the server, is proportional to the total number of issued tokens, $N \cdot t$, where $N$ is the number of clients and $t$ is the maximum number of tickets per client. In this work, we ask whether it is possible to realize a strike-list-like functionality, which we call the anonymous tickets functionality, with storage requirements proportional to $N \log(t)$. For the anonymous tickets functionality we construct a secure protocol from standard assumptions that achieves server storage of $O(N)$ ciphertexts, where each ciphertext encrypts a message of length $O(\log(t))$. We also consider an extension of the strike-list functionality where the server stores an arbitrary state for each client and clients advance their state with some function $s_i\gets f(s_{i-1},\mathsf{auxinput})$, which we call the anonymous outsourced state-keeping functionality. In this setting, malicious clients are prevented from rolling back their state, while honest clients are guaranteed anonymity and confidentiality against a malicious server. We achieve analogous results in this setting for two different classes of functions. Our results rely on a new technique to preserve client anonymity in the face of selective failure attacks by a malicious server. Specifically, our protocol guarantees that misbehavior of the server either (1) does not prevent the honest client from redeeming a ticket or (2) provides the honest client with an escape hatch that can be used to simulate a redeem in a way that is indistinguishable to the server.
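For reference, the baseline strike-list functionality whose server storage the paper reduces can be sketched in a few lines of Python (illustrative only; real deployments issue pseudorandom tokens via anonymous credentials such as Privacy Pass, which is not modeled here):

    # Toy strike-list: server storage grows with the total number of redeemed
    # tokens, i.e., O(N * t) for N clients with up to t tokens each.
    import os

    class StrikeListServer:
        def __init__(self):
            self.used = set()          # one entry per redeemed token

        def redeem(self, token):
            if token in self.used:
                return False           # replay / rollback attempt detected
            self.used.add(token)
            return True

    server = StrikeListServer()
    t1 = os.urandom(16)                # a client's pseudorandom per-action token
    assert server.redeem(t1) is True
    assert server.redeem(t1) is False  # reuse of the same token is rejected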
Last updated:  2024-07-12
Toward A Practical Multi-party Private Set Union
Jiahui Gao, Son Nguyen, and Ni Trieu
This paper studies multi-party private set union (mPSU), a fundamental cryptographic problem that allows multiple parties to compute the union of their respective datasets without revealing any additional information. We propose an efficient mPSU protocol which is secure in the presence of any number of colluding semi-honest participants. Our protocol avoids computationally expensive homomorphic operations or generic multi-party computation, thus providing an efficient solution for mPSU. The crux of our protocol lies in the utilization of a new cryptographic tool, namely, Membership Oblivious Transfer (mOT). We believe that the mOT and cOPRF may be of independent interest. We implement our mPSU protocol and evaluate its performance. Our protocol shows an improvement of up to $80.84\times$ in terms of running time and $405.73\times$ in bandwidth cost compared to the existing state-of-the-art protocols. *Note: June 2024 - Fixed a bug in the original version and simplified the protocol by removing the OPRF.
Last updated:  2024-07-12
A Two-Layer Blockchain Sharding Protocol Leveraging Safety and Liveness for Enhanced Performance
Yibin Xu, Jingyi Zheng, Boris Düdder, Tijs Slaats, and Yongluan Zhou
Sharding is a critical technique that enhances the scalability of blockchain technology. However, existing protocols often assume adversarial nodes in general terms without considering the different types of attacks, which limits transaction throughput at runtime even though attacks on liveness could be mitigated. There have been attempts to increase transaction throughput by handling the attacks separately; however, they have security vulnerabilities. This paper introduces Reticulum, a novel sharding protocol that overcomes these limitations and achieves enhanced scalability in a blockchain network without security vulnerabilities. Reticulum employs a two-phase design that dynamically adjusts transaction throughput based on runtime adversarial attacks on either or both liveness and safety. It consists of `control' and `process' shards in two layers corresponding to the two phases. Process shards are subsets of control shards, with each process shard expected to contain at least one honest node with high confidence. Conversely, control shards are expected to have a majority of honest nodes with high confidence. Reticulum leverages unanimous voting in the first phase to involve fewer nodes in accepting/rejecting a block, allowing more parallel process shards. The control shard finalizes the decision made in the first phase and serves as a lifeline to resolve disputes when they surface. Experiments demonstrate that the unique design of Reticulum empowers high transaction throughput and robustness in the face of different types of attacks in the network, making it superior to existing sharding protocols for blockchain networks.
Last updated:  2024-07-12
Dot-Product Proofs and Their Applications
Nir Bitansky, Prahladh Harsha, Yuval Ishai, Ron D. Rothblum, and David J. Wu
A dot-product proof (DPP) is a simple probabilistic proof system in which the input statement $\mathbf{x}$ and the proof $\boldsymbol{\pi}$ are vectors over a finite field $\mathbb{F}$, and the proof is verified by making a single dot-product query $\langle \mathbf{q},(\mathbf{x} \| \boldsymbol{\pi}) \rangle$ jointly to $\mathbf{x}$ and $\boldsymbol{\pi}$. A DPP can be viewed as a 1-query fully linear PCP. We study the feasibility and efficiency of DPPs, obtaining the following results: - Small-field DPP. For any finite field $\mathbb{F}$ and Boolean circuit $C$ of size $S$, there is a DPP for proving that there exists $\mathbf{w}$ such that $C(\mathbf{x}, \mathbf{w})=1$ with a proof $\boldsymbol{\pi}$ of length $S\cdot\mathsf{poly}(|\mathbb{F}|)$ and soundness error $\varepsilon=O(1 / \sqrt{|\mathbb{F}|})$. We show this error to be asymptotically optimal. In particular, and in contrast to the best known PCPs, there exist strictly linear-length DPPs over constant-size fields. - Large-field DPP. If $|\mathbb{F}|\ge\mathsf{poly}(S/\varepsilon)$, there is a similar DPP with soundness error $\varepsilon$ and proof length $O(S)$ (in field elements). The above results do not rely on the PCP theorem and their proofs are considerably simpler. We apply our DPP constructions toward two kinds of applications. - Hardness of approximation. We obtain a simple proof for the NP-hardness of approximating MAXLIN (with dense instances) over any finite field $\mathbb{F}$ up to some constant factor $c>1$, independent of $\mathbb{F}$. Unlike previous PCP-based proofs, our proof yields exponential-time hardness under the exponential time hypothesis (ETH). - Succinct arguments. We improve the concrete efficiency of succinct interactive arguments in the generic group model using input-independent preprocessing. In particular, the communication is comparable to sending two group elements and the verifier's computation is dominated by a single group exponentiation. We also show how to use DPPs together with linear-only encryption to construct succinct commit-and-prove arguments.
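The verification interface defined above is a single inner product over the field; a minimal sketch of the verifier's query step in Python (illustrative only; how the query $\mathbf{q}$ and the proof $\boldsymbol{\pi}$ are generated by a sound DPP is the substance of the paper and is not shown):

    # Sketch of the DPP verifier's single query: <q, (x || pi)> over F_p.
    P = 2**31 - 1   # arbitrary prime field for the example

    def dot_product_query(q, x, pi):
        concatenated = x + pi                      # the vector (x || pi)
        assert len(q) == len(concatenated)
        return sum(qi * vi for qi, vi in zip(q, concatenated)) % P

    x  = [3, 1, 4, 1]        # public statement vector
    pi = [5, 9, 2]           # proof vector supplied by the prover
    q  = [7, 0, 2, 6, 1, 8, 3]
    answer = dot_product_query(q, x, pi)
    # The verifier's accept/reject decision depends only on this single answer.
    print(answer)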
Last updated:  2024-07-12
Secret Key Recovery in a Global-Scale End-to-End Encryption System
Graeme Connell, Vivian Fang, Rolfe Schmidt, Emma Dauterman, and Raluca Ada Popa
End-to-end encrypted messaging applications ensure that an attacker cannot read a user's message history without their decryption keys. While this provides strong privacy, it creates a usability problem: if a user loses their devices and cannot access their decryption keys, they can no longer access their account. To solve this usability problem, users should be able to back up their account information with the messaging provider. For privacy, this backup should be encrypted and the provider should not have access to users' decryption keys. To solve this problem, we present Secure Value Recovery 3 (SVR3), a secret key recovery system that distributes trust across different types of hardware enclaves run by different cloud providers in order to protect users' decryption keys. SVR3 is the first deployed secret key recovery system to split trust across heterogeneous enclaves managed by different cloud providers: this design ensures that a single type of enclave does not become a central point of attack. SVR3 protects decryption keys via rollback protection and fault tolerance techniques tailored to the enclaves' security guarantees. SVR3 costs \$0.0025/user/year and takes 365ms for a user to recover their key, which is a rare operation. A part of SVR3 has been rolled out to millions of real users in a deployment with capacity for over 500 million users, demonstrating the ability to operate at scale.
Last updated:  2024-07-12
Cryptanalysis of EagleSign
Ludo N. Pulles and Mehdi Tibouchi
EagleSign is one of the 40 “Round 1 Additional Signatures” that is accepted for consideration in the supplementary round of the Post-Quantum Cryptography standardization process, organized by NIST. Its design is based on structured lattices, and it boasts greater simplicity and performance compared to the two lattice signatures already selected for standardization: Falcon and Dilithium. In this paper, we show that those claimed advantages come at the cost of security. More precisely, we show that the distribution of EagleSign signatures leaks information about the private key, to the point that only a few hundred signatures on arbitrary known messages suffice for a full key recovery, for all proposed parameters. A related vulnerability also affects EagleSign-V2, a subsequent version of the scheme specifically designed to thwart the initial attack. Although a larger number of signatures is required for key recovery, the idea of the attack remains largely similar. Both schemes come with proofs of security that we show are flawed.
Last updated:  2024-07-12
Probabilistic Linearization: Internal Differential Collisions in up to 6 Rounds of SHA-3
Zhongyi Zhang, Chengan Hou, and Meicheng Liu
The SHA-3 standard consists of four cryptographic hash functions, called SHA3-224, SHA3-256, SHA3-384 and SHA3-512, and two extendable-output functions (XOFs), called SHAKE128 and SHAKE256. In this paper, we study the collision resistance of the SHA-3 instances. By analyzing the nonlinear layer, we introduce the concept of maximum difference density subspace, and develop a new target internal difference algorithm by probabilistic linearization. We also exploit new strategies for optimizing the internal differential characteristic. Furthermore, we figure out the expected size of collision subsets in internal differentials, by analyzing the collision probability of the digests rather than the intermediate states input to the last nonlinear layer. These techniques enhance the analysis of internal differentials, leading to the best collision attacks on four round-reduced variants of the SHA-3 instances. In particular, the number of attacked rounds is extended to 5 from 4 for SHA3-384, and to 6 from 5 for SHAKE256.
Last updated:  2024-07-12
Counting Unpredictable Bits: A Simple PRG from One-way Functions
Noam Mazor and Rafael Pass
A central result in the theory of cryptography, by Håstad, Impagliazzo, Luby and Levin [SICOMP’99], demonstrates that the existence of one-way functions (OWFs) implies the existence of pseudo-random generators (PRGs). Despite the fundamental importance of this result, and several elegant improvements/simplifications, analyses of constructions of PRGs from OWFs remain complex (both conceptually and technically). Our goal is to provide a construction of a PRG from OWFs with a simple proof of security; we thus focus on the setting of non-uniform security (i.e., we start off with a OWF secure against non-uniform PPT, and we aim to get a PRG secure against non-uniform PPT). Our main result is a construction of a PRG from OWFs with a self-contained, simple proof of security, relying only on the Goldreich-Levin Theorem (and the Chernoff bound). Although our main goal is simplicity, the construction, and a variant thereof, also improves the efficiency, in terms of invocations and seed lengths, of the state-of-the-art constructions due to [Haitner-Reingold-Vadhan, STOC’10] and [Vadhan-Zheng, STOC’12], by a factor $O(\log^2 n)$. The key novelty in our analysis is a generalization of the Blum-Micali [FOCS’82] notion of unpredictability: rather than requiring that every bit in the output of a function is unpredictable, we count how many unpredictable bits a function has, and we show that any OWF on $n$ input bits (after hashing the input and the output) has $n + O(\log n)$ unpredictable output bits. Such unpredictable bits can next be “extracted” into a pseudorandom string using standard techniques.
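Since the proof relies only on the Goldreich-Levin Theorem, a one-line reminder of the Goldreich-Levin hardcore predicate may help; a toy sketch in Python (illustrative only; the paper's PRG construction and the counting of unpredictable bits are not shown):

    # Goldreich-Levin hardcore predicate: the inner product <x, r> over GF(2),
    # i.e., the parity of the bitwise AND of x and r (toy sketch).
    def gl_bit(x, r):
        return bin(x & r).count("1") % 2

    x = 0b10110011   # a secret input to some one-way function f
    r = 0b01101010   # a public random string
    # Given f(x) and r, predicting gl_bit(x, r) noticeably better than 1/2
    # would allow inverting f; such bits are what gets "extracted" into a PRG.
    print(gl_bit(x, r))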
Last updated:  2024-07-12
Scalable and Lightweight State-Channel Audits
Christian Badertscher, Maxim Jourenko, Dimitris Karakostas, and Mario Larangeira
Payment channels are one of the most prominent off-chain scaling solutions for blockchain systems. However, regulatory institutions have difficulty embracing them, as the channels lack the insights needed for Anti-Money Laundering (AML) auditing purposes. Our work tackles the problem of a formal, reliable, and controllable inspection of off-ledger payment channels, by offering a novel approach for maintaining and reliably auditing statistics of payment channels. We extend a typical trustless Layer 2 protocol and provide a lightweight and scalable protocol such that: - every state channel is provably auditable w.r.t. a configurable set of policy queries, such that a regulator can retrieve reliable insights about the channel; - no information beyond the answers to auditing queries is leaked; - the cryptographic operations are inexpensive, the setup is simple, and storage complexity is independent of the transaction graph's size. We present a concrete protocol, based on Hydra Isomorphic State Channels (FC'21), and tie the creation of a state channel to real-world identifiers, both in a plain and in a privacy-preserving manner. For this, we employ verifiable credentials for decentralized identifiers, specifically verifiable Legal Entity Identifiers (vLEI) that increasingly gain traction among financial service providers and regulated institutions.
Last updated:  2024-07-12
Exploiting signature leakages: breaking Enhanced pqsigRM
Thomas Debris-Alazard, Pierre Loisel, and Valentin Vasseur
Enhanced pqsigRM is a code-based hash-and-sign scheme proposed to the second National Institute of Standards and Technology call for post-quantum signatures. The scheme is based on the $(U,U+V)$-construction and it enjoys remarkably small signature lengths, about $1$ KByte for a security level of $128$ bits. Unfortunately, we show that signatures leak information about the underlying $(U,U+V)$-structure. This allows the private key to be retrieved with $100,000$ signatures.
Last updated:  2024-07-12
SQISignHD: New Dimensions in Cryptography
Pierrick Dartois, Antonin Leroux, Damien Robert, and Benjamin Wesolowski
We introduce SQIsignHD, a new post-quantum digital signature scheme inspired by SQIsign. SQIsignHD exploits the recent algorithmic breakthrough underlying the attack on SIDH, which makes it possible to efficiently represent isogenies of arbitrary degrees as components of a higher-dimensional isogeny. SQIsignHD overcomes the main drawbacks of SQIsign. First, it scales well to high security levels, since the public parameters for SQIsignHD are easy to generate: the characteristic of the underlying field need only be of the form $2^{f}3^{f'}-1$. Second, the signing procedure is simpler and more efficient. Our signing procedure implemented in C runs in 28 ms, which is a significant improvement compared to SQIsign. Third, the scheme is easier to analyse, allowing for a much more compelling security reduction. Finally, the signature sizes are even more compact than (the already record-breaking) SQIsign, with compressed signatures as small as 109 bytes for the post-quantum NIST-1 level of security. These advantages may come at the expense of the verification, which now requires the computation of an isogeny in dimension $4$, a task whose optimised cost is still uncertain, as it has been the focus of very little attention. Our experimental SageMath implementation of the verification runs in around 600 ms, indicating the potential cryptographic interest of dimension-$4$ isogenies after optimisations and low-level implementation.
Last updated:  2024-07-12
Parameters of Algebraic Representation vs. Efficiency of Algebraic Cryptanalysis
Hossein Arabnezhad and Babak Sadeghiyan
The aim of an algebraic attack is to find the secret key by solving a collection of relations that describe the internal structure of a cipher, given observations of plaintext/ciphertext pairs. Although algebraic attacks have been applied to the cryptanalysis of block and stream ciphers, there is a limited understanding of the impact of the algebraic representation of the cipher on the efficiency of solving the resulting collection of equations. In this paper, we investigate how different S-box representations affect the complexity of algebraic attacks, in an empirical manner. In the literature, some algebraic properties have been intuitively proposed to evaluate the optimality of an algebraic description of S-boxes for algebraic cryptanalysis. We compare different S-box representations for algebraic cryptanalysis by performing experiments with the SR family of block ciphers. We also show that the so-called \textit{Forward-Backward} representation, which contradicts all the mentioned criteria for optimal representations, in practice gives better results than the compliant representations. We also compare the representations for both $GF(2)$ and $GF(2^n)$ fields.
Last updated:  2024-07-12
KAIME : Central Bank Digital Currency with Realistic and Modular Privacy
Ali Dogan and Kemal Bicakci
Recently, with the increasing interest in Central Bank Digital Currency (CBDC), many countries have been working on researching and developing digital currency. The most important reasons for this interest are that CBDC eliminates the disadvantages of traditional currencies and provides a safer, faster, and more efficient payment system. These benefits also come with challenges, such as safeguarding individuals’ privacy and ensuring regulatory mechanisms. While most research addresses the privacy conflict between users and regulatory agencies, it misses an important detail: banks and financial institutions are important parts of a financial system. Some studies ignore the need for privacy and include these institutions in the CBDC system, but no system currently offers a solution to the privacy conflict between banks, financial institutions, and users. In this study, while we offer a solution to the privacy conflict between the user and the regulatory agencies, we also provide a solution to the privacy conflict between the user and the banks. Our solution, KAIME, also has a modular structure. The privacy of the sender and receiver can be hidden if desired. Compared to previous related research, the security analysis and implementation of KAIME are substantially simpler because simple and well-known cryptographic methods are used.
Last updated:  2024-07-12
BG: A Modular Treatment of BFT Consensus
Xiao Sui, Sisi Duan, and Haibin Zhang
We provide an expressive framework that allows analyzing and generating provably secure, state-of-the-art Byzantine fault-tolerant (BFT) protocols. Our framework is hierarchical, including three layers. The top layer is used to model the message pattern and abstract key functions on which BFT algorithms can be built. The intermediate layer provides the core functions with high-level properties sufficient to prove the security of the top-layer algorithms. The bottom layer carefully defines predicates according to which we offer operational realizations for the core functions. All three layers in our framework are extensible and enable innovation. One may modify or extend any layer to theoretically cover all BFT protocols, known and unknown. Indeed, unlike prior BFT frameworks, our framework can analyze and recast BFT protocols in an exceedingly fine-grained manner. More importantly, our framework can readily generate new BFT protocols by simply enumerating the parameters in the framework. In this paper, we show that the framework allows us to fully specify and formally prove the security for 23 BFT protocols, including protocols matching HotStuff, Fast-HotStuff, Jolteon, and Marlin, and among these protocols, seven new protocols outperforming existing ones or achieving meaningful trade-offs among various performance metrics.
Last updated:  2024-07-12
Synchronous Distributed Key Generation without Broadcasts
Nibesh Shrestha, Adithya Bhat, Aniket Kate, and Kartik Nayak
Distributed key generation (DKG) is an important building block in designing many efficient distributed protocols. In this work, we initiate the study of the communication complexity and latency of distributed key generation protocols in a synchronous point-to-point network. Our key result is the first synchronous DKG protocol for discrete log-based cryptosystems with $O(\kappa n^3)$ communication complexity ($\kappa$ denotes a security parameter) that tolerates $t < n/2$ Byzantine faults among $n$ parties. We show two variants of the protocol: a deterministic protocol with $O(t\Delta)$ latency and a randomized protocol with $O(\Delta)$ latency in expectation, where $\Delta$ denotes the bounded synchronous delay. In the process of achieving our results, we design (1) a gradecast protocol with optimal communication complexity of $O(\kappa n^2)$ for linear-sized inputs and latency of $O(\Delta)$, (2) a primitive called ``recoverable set of shares'' for ensuring recovery of shared secrets, (3) an oblivious leader election protocol with $O(\kappa n^3)$ communication and $O(\Delta)$ latency, and (4) a multi-valued validated Byzantine agreement (MVBA) protocol with $O(\kappa n^3)$ communication complexity for linear-sized inputs and $O(\Delta)$ latency in expectation. Each of these primitives may be of independent interest.
Last updated:  2024-07-11
Constraint-Packing and the Sum-Check Protocol over Binary Tower Fields
Quang Dao and Justin Thaler
SNARKs based on the sum-check protocol often invoke the ``zero-check PIOP''. This reduces the vanishing of many constraints to a single sum-check instance applied to an $n$-variate polynomial of the form $g(x) = \text{eq}(r,x) \cdot p(x)$, where $p$ is a product of multilinear polynomials, $r$ is a random vector, and $\text{eq}$ is the multilinear extension of the equality function. In recent SNARK designs, $p(x)$ is defined over a ``small'' base field, while $r$ is drawn from a large extension field $\mathbb{F}$ for security. Recent papers (Bagad, Domb, and Thaler 2024; Gruen 2024) have optimized the sum-check protocol prover for this setting. However, these works still require the prover to ``pre-compute'' all evaluations of $\text{eq}(r, x)$ as $x$ ranges over $\{0, 1\}^{n}$, and this computation involves about $2^n$ multiplications over the extension field $\mathbb{F}$. In this note, we describe a modification to the zero-check PIOP in the case of binary tower fields that reduces this pre-computation cost by a factor of close to $\log |\mathbb{F}|$, which is $128$ in important applications. We show that our modification is sound, and that it strictly generalizes a (possibly folklore) technique of constraint-packing over field extensions.
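For context, the pre-computation whose cost the note reduces is the standard table of eq(r, x) over the Boolean hypercube; a plaintext sketch over a prime field in Python (illustrative only; the binary-tower-field setting and the constraint-packing modification itself are not shown):

    # Standard precomputation of eq(r, x) for all x in {0,1}^n over F_p (sketch),
    # where eq(r, x) = prod_i ( r_i * x_i + (1 - r_i) * (1 - x_i) ).
    P = 2**31 - 1   # arbitrary prime field standing in for the extension field

    def eq_table(r):
        table = [1]
        for ri in r:   # one doubling pass per variable
            table = [t * (1 - ri) % P for t in table] + [t * ri % P for t in table]
        return table   # length 2^n; costs on the order of 2^n field multiplications

    r = [5, 11, 7]     # random challenge point, n = 3
    T = eq_table(r)
    # Entry at index b equals eq(r, x) where bit i of b is x_i (bit 0 = x_0).
    assert len(T) == 8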
Last updated:  2024-07-11
Jolt-b: recursion friendly Jolt with basefold commitment
Hang Su, Qi Yang, and Zhenfei Zhang
The authors of Jolt [AST24] pioneered a unique method for creating zero-knowledge virtual machines, known as the lookup singularity. This technique extensively uses lookup tables to create virtual machine circuits. Despite Jolt’s performance being twice as efficient as the previous state-of-the-art, there is potential for further enhancement. The initial release of Jolt uses Spartan [Set20] and Hyrax [WTs+18] as its backend, leading to two constraints. First, Hyrax employs Pedersen commitments to build inner product arguments, which requires elliptic curve operations. Second, the verification of a Hyrax commitment takes square-root time $O(\sqrt{N})$ relative to the circuit size $N$. This makes the recursive verification of a Jolt proof impractical, as the verification circuit would need to execute all the Hyrax verification logic in-circuit. A later version of Jolt includes Zeromorph [KT23] and HyperKZG as its commitment backend, making the system recursion-friendly, as now the recursive verifier only needs to perform $O(\log N)$ operations, but at the expense of a need for a trusted setup. Our scheme, Jolt-b, addresses these issues by transitioning to an extension field of the Goldilocks field and using the Basefold commitment scheme [ZCF23], which has an $O(\log^2 N)$ verifier time. This scheme mirrors the modifications of Plonky2 over the original Plonk [GWC19]: it transitions from EC fields to the Goldilocks field; it replaces the EC-based commitment scheme with an encoding-based commitment scheme. We implemented Jolt-b, along with an optimized version of the Basefold scheme. Our benchmarks show that at the cost of a 2.47x slowdown for the prover, we achieve recursion friendliness for the original Jolt. In comparison with other recursion-friendly Jolt variants, our scheme is 1.24x and 1.52x faster in prover time than the Zeromorph and HyperKZG variants of Jolt, respectively.
Last updated:  2024-07-11
Distributed Verifiable Random Function With Compact Proof
Ahmet Ramazan Ağırtaş, Arda Buğra Özer, Zülfükar Saygı, and Oğuz Yayla
Verifiable Random Functions (VRFs) are cryptographic primitives that generate unpredictable randomness along with proofs that are verifiable, a critical requirement for blockchain applications in decentralized finance, online gaming, and more. Existing VRF constructions often rely on centralized entities, creating security vulnerabilities. Distributed VRFs (DVRFs) offer a decentralized alternative but face challenges like large proof sizes or dependence on computationally expensive bilinear pairings. In this research, we propose a distributed VRF (DVRF) system, called DVRFwCP, with considerable improvements. DVRFwCP has constant-size proofs, which means that the size of the proof does not change based on the number of participants. This overcomes a significant drawback of earlier DVRF systems, in which proof size increased with participant count. Furthermore, DVRFwCP achieves more efficient verification than previous systems by eliminating the requirement for bilinear pairings throughout the verification process. These innovations contribute to a more secure and scalable solution for generating verifiable randomness in decentralized environments. We compare our construction to well-established DVRF instantiations such as DDH-DVRF and GLOW-DVRF while also pointing out the major improvement in the estimated gas cost of these algorithms.
Last updated:  2024-07-11
Proof-Carrying Data from Multi-folding Schemes
Zibo Zhou, Zongyang Zhang, Zhiyu Zhang, and Jin Dong
Proof-carrying data (PCD) is a cryptographic primitive enabling mutually distrustful parties to perform distributed computations on directed acyclic graphs with efficient and incremental verification. Key performance metrics include the prover cost at each step and the recursion overhead, which measures the additional cost beyond proving the original computation. Despite substantial advancements in constructing efficient PCD schemes, these metrics continue to be bottlenecks hindering their widespread application. In this paper, we advance the research by constructing a new PCD scheme based on a new generalized construction of multi-folding schemes. Compared with the state-of-the-art PCD scheme by Bünz et al. (CRYPTO'21), our scheme reduces the prover cost at each step from $4r+6$ multi-scalar multiplications (MSMs) of size $O(|C|)$ to $1$ MSM of the same size, and the recursion overhead from $6$ MSMs of size $2r-1$ plus $1$ MSM of size $6r-3$ to $1$ MSM of size $2r-1$, where $r$ is the number of incoming edges at a certain step and $|C|$ is the proving computation size. Additionally, our PCD scheme supports a more expressive constraint system for encoding computations, namely the customizable constraint system, which supports high-degree constraints, in contrast to the rank-1 constraint system adopted by existing PCD schemes, which only supports quadratic constraints. We implement our PCD scheme and report the concrete recursion overhead and practical efficiency for different values of $r$ and $|C|$. Compared with Bünz et al. (CRYPTO'21), our PCD scheme achieves a $2.5$ times lower recursion overhead when $r=2$ and $|C|=2^{20}$. Additionally, when $r=2$ and the proving computation size (excluding recursion overhead) is $2^{24}$, it takes $49$ seconds to generate a PCD proof at each step. Using a SNARK to compress the proof reduces the proof size from $1031$ MB to $13$ KB, with a tradeoff in the verifier time, which increases from $11$ seconds to $22$ seconds.
Last updated:  2024-07-11
PARScoin: A Privacy-preserving, Auditable, and Regulation-friendly Stablecoin
Amirreza Sarencheh, Aggelos Kiayias, and Markulf Kohlweiss
Stablecoins are digital assets designed to maintain a consistent value relative to a reference point, serving as a vital component of the blockchain and Decentralized Finance (DeFi) ecosystems. Typical implementations of stablecoins via smart contracts come with important downsides such as a questionable level of privacy, potentially high fees, and lack of scalability. We put forth a new design, PARScoin, for a Privacy-preserving, Auditable, and Regulation-friendly Stablecoin that mitigates these issues while enabling high performance both in terms of speed of settlement and for scaling to large numbers of users, as our performance analysis demonstrates. Our construction is blockchain-agnostic and is analyzed in the Universal Composition (UC) framework, offering a secure and modular approach for its integration into the broader blockchain ecosystem.
Last updated:  2024-07-11
A New PPML Paradigm for Quantized Models
Tianpei Lu, Bingsheng Zhang, Xiaoyuan Zhang, and Kui Ren
Model quantization has become a common practice in machine learning (ML) to improve efficiency and reduce computational/communication overhead. However, adopting quantization in privacy-preserving machine learning (PPML) remains challenging due to the complex internal structure of quantized operators, which leads to inefficient protocols under the existing PPML frameworks. In this work, we propose a new PPML paradigm that is tailor-made for and can benefit from quantized models. Our main observation is that a lookup table can abstract away the complex internal structure of any function, which can be used to simplify the evaluation of quantized operators. We view the model inference process as a sequence of quantized operators, and each operator is implemented by a lookup table. We then develop an efficient private lookup table evaluation protocol, and its online communication cost is only $\log n$, where $n$ is the size of the lookup table. On a single CPU core, our protocol can evaluate $2^{15}$ tables with 8-bit input and 8-bit output per second. The resulting PPML framework for quantized models offers extremely fast online performance. The experimental results demonstrate that our quantization strategy achieves substantial speedups over SOTA PPML solutions, improving the online performance by $40\sim 60 \times$ w.r.t. convolutional neural network (CNN) models, such as AlexNet, VGG16, and ResNet18, and by $10\sim 25 \times$ w.r.t. large language models (LLMs), such as GPT-2, GPT-Neo, and Llama2.
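To make the lookup-table view concrete, the following plaintext Python sketch (our own illustration, not the paper's protocol) shows how an 8-bit quantized operator, however complicated internally, collapses into a table of $n = 256$ entries; in the PPML setting, each such table would then be evaluated with the private lookup-table protocol at $\log n$ online communication per lookup.

```python
# Plaintext illustration (an assumption for exposition; not the paper's protocol) of the
# "quantized operator as a lookup table" idea: any 8-bit-to-8-bit operator, no matter how
# complicated internally, collapses into a 256-entry table that a private table-lookup
# protocol could then evaluate obliviously.
import math

SCALE_IN, SCALE_OUT = 16.0, 255.0  # toy fixed-point scales for input/output quantization

def quantized_sigmoid(x_q: int) -> int:
    """Reference 8-bit operator: dequantize, apply sigmoid, requantize."""
    x = (x_q - 128) / SCALE_IN
    return round(SCALE_OUT / (1.0 + math.exp(-x)))

# Precompute the operator once as a lookup table of size n = 256.
table = [quantized_sigmoid(i) for i in range(256)]

# Inference then reduces to a sequence of table lookups on quantized activations;
# in the PPML setting each lookup would be replaced by the private LUT protocol.
activations = [0, 64, 128, 192, 255]
print([table[a] for a in activations])
```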
Last updated:  2024-07-11
Attribute-Based Signatures for Circuits with Optimal Parameter Size from Standard Assumptions
Ryuya Hayashi, Yusuke Sakai, and Shota Yamada
Attribute-based signatures (ABS) allow users to simultaneously sign messages and prove their possession of some attributes while hiding the attributes and revealing only the fact that they satisfy a public policy. In this paper, we propose a generic construction of ABS for circuits of unbounded depth and size with optimal parameter size, meaning that the lengths of public parameters, keys, and signatures are all constant. Our generic construction can be instantiated from various standard assumptions including LWE or DLIN. The only previous ABS construction with optimal parameter size requires a succinct non-interactive argument of knowledge, which can only be constructed from non-standard assumptions. Our generic construction is based on RAM delegation, which intuitively allows us to compress the evaluation of a circuit when its inputs are public. At a high level, we find a way to compress the computation of the policy circuit on an input user attribute to achieve optimal overall parameter size, while hiding the user attribute at the same time.
Last updated:  2024-07-11
LLRing: Logarithmic Linkable Ring Signatures with Transparent Setup
Xiangyu Hui and Sid Chi-Kin Chau
Linkable ring signatures are an important cryptographic primitive for anonymized applications, such as e-voting, e-cash and confidential transactions. To eliminate the backdoor and overhead of a trusted setup, transparent setup in the discrete logarithm or pairing settings has received considerable attention in practice. Recent advances have improved the proof sizes and verification efficiency of linkable ring signatures with a transparent setup to achieve logarithmic bounds. Omniring (CCS '19) and RingCT 3.0 (FC '20) proposed linkable ring signatures in the discrete logarithm setting with logarithmic proof sizes with respect to the ring size, whereas DualDory (ESORICS '22) achieves logarithmic verifiability in the pairing setting. We make three novel contributions in this paper to improve the efficiency and soundness of logarithmic linkable ring signatures: (1) We identify an attack on DualDory that breaks its linkability. (2) To eliminate such an attack, we present a new linkable ring signature scheme in the pairing setting with logarithmic verifiability. (3) We also improve the verification efficiency of linkable ring signatures in the discrete logarithm setting, via a technique that reduces the number of group exponentiations required for verification in Omniring by 50%. Furthermore, our technique is applicable to general inner-product relation proofs, which might be of independent interest. Finally, we empirically evaluate our schemes and compare them with existing linkable ring signatures in concrete implementations.
Last updated:  2024-07-11
Cryptiny: Compacting Cryptography for Space-Restricted Channels and its Use-case for IoT-E2EE
Liron David, Omer Berkman, Avinatan Hassidim, David Lazarov, Yossi Matias, and Moti Yung
We present a novel cryptographic paradigm denoted ``cryptiny:'' Employing a single cryptographic value for several security goals, thus ``compacting'' the communication sent over a space-restricted (narrow) channel, while still proving security. Cryptiny is contrary to the classical cryptographic convention of using a separate cryptographic element for each security goal. Demonstrating the importance of cryptiny, we employ it for securing a critical IoT configuration in which a broadcasting ``thing'' (called beacon) operates within stringent bandwidth constraints. In this setting, a compact BLE-broadcasting beacon lacking Internet connectivity efficiently directs brief (non-fragmented) messages to its remotely pre-paired owner in real-time. Communication transpires through BLE-to-IP gateway devices denoted observers (typically smartphones in the beacon's vicinity), and subsequently via a cloud app server. The gateway device, in turn, piggybacks a secure and private message to the owner on the transmission. This configuration is a generic setting for the current and future IoT real-time ecosystems, where billions of owners, beacons, and observers operate. The configuration instances (analogous to TLS instances over the Internet) impose high security and privacy demands. We prove that our cryptiny-based protocol for securing the above configuration achieves CCA-secrecy for the beacon's and the observer's messages with backward and forward security for the observer's message, while simultaneously achieving mutual privacy for beacons and for observers. Achieving backward and forward security is important since beacon devices may be far from their owners for a long duration and may be passively tampered with. In addition, for the backward security proof we develop a new encryption scheme we call ``shifted-DHIES'' (``SDHIES'' for short), which generalizes DHIES. An interesting feature of SDHIES is that encryption is performed with a function of the public key rather than the public key itself.
Last updated:  2024-07-11
Generalized Indifferentiable Sponge and its Application to Polygon Miden VM
Tomer Ashur and Amit Singh Bhati
Cryptographic hash functions are said to be the work-horses of modern cryptography. One of the strongest approaches to assess a cryptographic hash function's security is indifferentiability. Informally, indifferentiability measures to what degree the function resembles a random oracle when instantiated with an ideal underlying primitive. However, proving the indifferentiability security of hash functions has been challenging due to complex simulator designs and proof arguments. The Sponge construction is one of the prevalent hashing methods used in various systems. The Sponge has been shown to be indifferentiable from a random oracle when instantiated with a random permutation. In this work, we first introduce $\mathsf{GSponge}$, a generalized form of the Sponge construction offering enhanced flexibility in input chaining, field sizes, and padding types. $\mathsf{GSponge}$ not only captures all existing sponge variants but also unveils new, efficient ones. The generic structure of $\mathsf{GSponge}$ facilitates the discovery of two micro-optimizations for already deployed sponges. Firstly, it allows a new padding rule based on zero-padding and domain-separated inputs, saving one full permutation call in certain cases without increasing the generation time of zero-knowledge proofs. Secondly, it allows absorbing up to $\mathsf{c}/2$ more elements (that can save another permutation call for certain message lengths) without compromising the indifferentiability security. These optimizations enhance hashing time for practical use cases such as Merkle-tree hashing and short message processing. We then propose a new efficient instantiation of $\mathsf{GSponge}$ called $\mathsf{Sponge2}$ capturing these micro-optimizations and provide a formal indifferentiability proof to establish both $\mathsf{Sponge2}$ and $\mathsf{GSponge}$'s security. This proof, simpler than the original for Sponges, offers clarity and ease of understanding for real-world practitioners. Additionally, it is demonstrated that $\mathsf{GSponge}$ can be safely instantiated with permutations defined over large prime fields, a result not previously formally proven.
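For context, the toy Python sketch below shows the classical Sponge absorb/squeeze structure that $\mathsf{GSponge}$ generalizes; the permutation, field, and rate/capacity parameters are placeholders of our own and do not correspond to $\mathsf{Sponge2}$ or to any deployed instance.

```python
# Context sketch of the classical Sponge construction (toy parameters, assumptions for
# illustration only): message elements are added into the "rate" part of the state, the
# permutation is applied, and output elements are read back from the rate.
R, C = 2, 2           # rate and capacity, in field elements
P = 2**31 - 1         # toy prime field modulus

def toy_permutation(state):
    """Toy invertible round function over (Z_P)^(R+C); real sponges use e.g. Keccak-f or Poseidon."""
    n = len(state)
    for r in range(8):
        state = [pow((x + r + 1) % P, 5, P) for x in state]   # x -> x^5 is a bijection mod P
        for i in range(n):                                     # simple invertible mixing layer
            state[i] = (state[i] + state[(i + 1) % n]) % P
    return state

def sponge_hash(message, out_len=2):
    # classic 10* padding to a multiple of the rate (GSponge also studies zero-padding
    # with domain separation, which can save a permutation call in certain cases)
    msg = list(message) + [1] + [0] * ((-len(message) - 1) % R)
    state = [0] * (R + C)
    for i in range(0, len(msg), R):             # absorbing phase
        for j in range(R):
            state[j] = (state[j] + msg[i + j]) % P
        state = toy_permutation(state)
    out = []
    while len(out) < out_len:                   # squeezing phase
        out.extend(state[:R])
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash([5, 7, 11]))
```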
Last updated:  2024-07-11
FIN: Practical Signature-Free Asynchronous Common Subset in Constant Time
Sisi Duan, Xin Wang, and Haibin Zhang
Asynchronous common subset (ACS) is a powerful paradigm enabling applications such as Byzantine fault-tolerance (BFT) and multi-party computation (MPC). The most efficient ACS framework in the information-theoretic setting is due to Ben-Or, Kelmer, and Rabin (BKR, 1994). The BKR ACS protocol has been both theoretically and practically impactful. However, the BKR protocol has an $O(\log n)$ running time (where $n$ is the number of replicas) due to the usage of $n$ parallel asynchronous binary agreement (ABA) instances, impacting both performance and scalability. Indeed, for a network of 16$\sim$64 replicas, the parallel ABA phase occupies about 95%$\sim$97% of the total runtime in BKR. A long-standing open problem is whether we can build an ACS framework with $O(1)$ time while not increasing the message or communication complexity of the BKR protocol. In this paper, we resolve the open problem, presenting the first constant-time ACS protocol with $O(n^3)$ messages in the information-theoretic and signature-free settings. Moreover, as a key ingredient of our new ACS framework and an interesting primitive in its own right, we provide the first information-theoretic multivalued validated Byzantine agreement (MVBA) protocol with $O(1)$ time and $O(n^3)$ messages. Both results can improve---asymptotically and concretely---various applications using ACS and MVBA in the information-theoretic, quantum-safe, or signature-free settings. As an example, we implement FIN, a BFT protocol instantiated using our framework. Via a 121-server deployment on Amazon EC2, we show FIN is significantly more efficient than PACE (CCS 2022), the state-of-the-art asynchronous BFT protocol of the same type. In particular, FIN reduces the overhead of the ABA phase to as low as 1.23% of the total runtime, and FIN achieves up to 3.41x the throughput of PACE. We also show that FIN outperforms other BFT protocols with the standard liveness property such as Dumbo and Speeding Dumbo.
Last updated:  2024-07-11
A Note on Efficient Computation of the Multilinear Extension
Ron D. Rothblum
The multilinear extension of an $m$-variate function $f : \{0,1\}^m \to \mathbb{F}$, relative to a finite field $\mathbb{F}$, is the unique multilinear polynomial $\hat{f} : \mathbb{F}^m \to \mathbb{F}$ that agrees with $f$ on inputs in $\{0,1\}^m$. In this note we show how, given oracle access to $f : \{0,1\}^m \to \mathbb{F}$ and a point $z \in \mathbb{F}^m$, to compute $\hat{f}(z)$ using exactly $2^{m+1}$ multiplications, $2^m$ additions and $O(m)$ additional operations. The amount of space used corresponds to $O(m)$ field elements.
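A minimal Python sketch of one standard way such an evaluation can be organized (our own reconstruction for illustration, not necessarily the exact algorithm of the note): the table of equality-polynomial values $\mathrm{eq}(z,x)$ for all $x \in \{0,1\}^m$ is built one variable at a time, spending one multiplication and one subtraction per existing entry, and the final inner product with the truth table of $f$ costs another $2^m$ multiplications, giving roughly $2^{m+1}$ multiplications and $2^m$ additions overall.

```python
# Illustrative sketch: evaluating the multilinear extension f_hat(z) of a function given
# as a table f of length 2^m over a toy prime field (the field choice is an assumption).
p = 2**61 - 1  # toy prime field modulus

def mle_eval(f, z):
    m = len(z)
    assert len(f) == 1 << m
    eq = [1]                           # eq-table over the processed prefix of variables
    for zi in z:
        new = [0] * (2 * len(eq))
        for j, e in enumerate(eq):
            hi = (e * zi) % p          # entry with the current variable set to 1
            new[2 * j + 1] = hi
            new[2 * j] = (e - hi) % p  # entry with the variable set to 0, via a subtraction
        eq = new
    # inner product  sum_x f(x) * eq(z, x)
    return sum(fx * ex for fx, ex in zip(f, eq)) % p

# tiny check: f(x1, x2) = x1 + 2*x2 on {0,1}^2, evaluated at z = (3, 5)
f = [0, 2, 1, 3]                       # indexed by 2*x1 + x2
print(mle_eval(f, [3, 5]))             # the multilinear extension gives 3 + 2*5 = 13
```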
Last updated:  2024-07-11
QuickPool: Privacy-Preserving Ride-Sharing Service
Banashri Karmakar, Shyam Murthy, Arpita Patra, and Protik Paul
Online ride-sharing services (RSS) have become very popular owing to increased awareness of environmental concerns and as a response to increased traffic congestion. To request a ride, users submit their locations and route information for ride matching to a service provider (SP), leading to possible privacy concerns caused by leakage of users' location data. We propose QuickPool, an efficient SP-aided RSS solution that can obliviously match multiple riders and drivers simultaneously, without involving any other auxiliary server. End-users, namely riders and drivers, share their route information with the SP as encryptions of the ordered set of points-of-interest (PoI) of their route from their start to end locations. The SP performs a zone-based oblivious matching of drivers and riders, based on partial route overlap as well as proximity of start and end points. QuickPool operates in the semi-honest setting and makes use of secure multi-party computation. We provide a security proof of our protocol, perform extensive testing of our implementation, and show that our protocol simultaneously matches multiple drivers and riders very efficiently. We compare the performance of QuickPool with state-of-the-art works and observe a run time improvement of 1.6 - 2$\times$, and a communication improvement of at least 8$\times$.
Last updated:  2024-07-11
The Insecurity of SHA2 under the Differential Fault Characteristic of Boolean Functions
Weiqiong Cao, Hua Chen, Hongsong Shi, Haoyuan Li, Jian Wang, and Jingyi Feng
SHA2 has been widely adopted across various traditional public-key cryptosystems, post-quantum cryptography, personal identification, and network communication protocols. Hence, ensuring the robust security of SHA2 is of critical importance. There have been several differential fault attacks based on random word faults targeting SHA1 and SHACAL-2. However, extending such random word-based fault attacks to SHA2 proves significantly more difficult due to the heightened complexity of the boolean functions in SHA2. In this paper, assuming random word faults, we find some distinctive differential properties within the boolean functions in SHA2. Leveraging these findings, we propose a new differential fault attack methodology that can be effectively utilized to recover the final message block and its corresponding initial vector in SHA2, forge HMAC-SHA2 messages, extract the key of SHACAL-2, and extend our analysis to similar algorithms such as SM3. We validate the effectiveness of these attacks through rigorous simulations and theoretical deductions, revealing that they indeed pose substantial threats to the security of SHA2. In our simulation-based experiments, our approach necessitates guessing $T$ bits within a register, with $T$ being at most $5$, and an approximate $95\%$ probability (for SHA512) of guessing just $1$ bit. Moreover, upon implementing a consecutive series of 15 fault injections, the success probability for recovering one register (excluding the guessed bits) approaches $100\%$. Ultimately, approximately 928 faulty outputs based on random word faults are required to carry out the attack successfully.
Last updated:  2024-07-10
Curl: Private LLMs through Wavelet-Encoded Look-Up Tables
Manuel B. Santos, Dimitris Mouris, Mehmet Ugurbil, Stanislaw Jarecki, José Reis, Shubho Sengupta, and Miguel de Vega
Recent advancements in transformers have revolutionized machine learning, forming the core of Large language models (LLMs). However, integrating these systems into everyday applications raises privacy concerns as client queries are exposed to model owners. Secure multiparty computation (MPC) allows parties to evaluate machine learning applications while keeping sensitive user inputs and proprietary models private. Due to inherent MPC costs, recent works introduce model-specific optimizations that hinder widespread adoption by machine learning researchers. CrypTen (NeurIPS'21) aimed to solve this problem by exposing MPC primitives via common machine learning abstractions such as tensors and modular neural networks. Unfortunately, CrypTen and many other MPC frameworks rely on polynomial approximations of the non-linear functions, resulting in high errors and communication complexity. This paper introduces Curl, an easy-to-use MPC framework that evaluates non-linear functions as lookup tables, resulting in better approximations and significant round and communication reduction. Curl exposes a similar programming model as CrypTen and is highly parallelizable through tensors. At its core, Curl relies on discrete wavelet transformations to reduce the lookup table size without sacrificing accuracy, which results in up to $19\times$ round and communication reduction compared to CrypTen for non-linear functions such as logarithms and reciprocals. We evaluate Curl on a diverse set of LLMs, including BERT, GPT-2, and GPT Neo, and compare against state-of-the-art related works such as Iron (NeurIPS'22) and Bolt (S&P'24) achieving at least $1.9\times$ less communication and latency. Finally, we resolve a long-standing debate regarding the security of widely used probabilistic truncation protocols by proving their security in the stand-alone model. This is of independent interest as many related works rely on this truncation style.
Last updated:  2024-07-10
HRA-Secure Homomorphic Lattice-Based Proxy Re-Encryption with Tight Security
Aloni Cohen, David Bruce Cousins, Nicholas Genise, Erik Kline, Yuriy Polyakov, and Saraswathy RV
We construct an efficient proxy re-encryption (PRE) scheme secure against honest re-encryption attacks (HRA-secure) with precise concrete security estimates. To get these precise concrete security estimates, we introduce the tight, fine-grained noise-flooding techniques of Li et al. (CRYPTO'22) to RLWE-based (homomorphic) PRE schemes, as well as a mixed statistical-computational security analysis of HRA security. Our solution also supports homomorphic operations on the ciphertexts. Such homomorphism allows for advanced applications, e.g., encrypted computation of network statistics across networks and unlimited hops, in the case of full homomorphism, i.e., bootstrapping. We implement our PRE scheme in the OpenFHE software library and apply it to a problem of secure multi-hop data distribution in the context of 5G virtual network slices. We also experimentally evaluate the performance of our scheme, demonstrating that the implementation is practical. In addition, we compare our PRE method with other lattice-based PRE schemes and approaches to achieving HRA security; these achieve HRA security, but not in a tight, practical scheme such as ours. Further, we present an attack on the PRE scheme proposed by Davidson et al. (ACISP'19), which was claimed to achieve HRA security without noise flooding.
Last updated:  2024-07-10
DiLizium 2.0: Revisiting Two-Party Crystals-Dilithium
Peeter Laud, Nikita Snetkov, and Jelizaveta Vakarjuk
In previous years there has been an increased interest in designing threshold signature schemes. Most of the recent works focus on constructing threshold versions of ECDSA or Schnorr signature schemes due to their appealing usage in blockchain technologies. Additionally, a lot of research is being done on cryptographic schemes that are resistant to quantum computer attacks. In this work, we propose a new version of the two-party Dilithium signature scheme. The security of our scheme is based on the hardness of Module-LWE and Module-SIS problems. In our construction, we follow a similar logic as Damgård et al. (PKC 2021) and use an additively homomorphic commitment scheme. However, compared to them, our protocol uses signature compression techniques from the original Dilithium signature scheme which makes it closer to the version submitted to the NIST PQC competition. We focus on two-party signature schemes in the context of user authentication.
Last updated:  2024-07-10
Is ML-Based Cryptanalysis Inherently Limited? Simulating Cryptographic Adversaries via Gradient-Based Methods
Avital Shafran, Eran Malach, Thomas Ristenpart, Gil Segev, and Stefano Tessaro
Given the recent progress in machine learning (ML), the cryptography community has started exploring the applicability of ML methods to the design of new cryptanalytic approaches. While current empirical results show promise, the extent to which such methods may outperform classical cryptanalytic approaches is still somewhat unclear. In this work, we initiate exploration of the theory of ML-based cryptanalytic techniques, in particular providing new results towards understanding whether they are fundamentally limited compared to traditional approaches. Whereas most classic cryptanalysis crucially relies on directly processing individual samples (e.g., plaintext-ciphertext pairs), modern ML methods thus far only interact with samples via gradient-based computations that average a loss function over all samples. It is, therefore, conceivable that such gradient-based methods are inherently weaker than classical approaches. We introduce a unifying framework for capturing both ``sample-based'' adversaries that are provided with direct access to individual samples and ``gradient-based'' ones that are restricted to issuing gradient-based queries that are averaged over all given samples via a loss function. Within our framework, we establish a general feasibility result showing that any sample-based adversary can be simulated by a seemingly-weaker gradient-based one. Moreover, the simulation exhibits a nearly optimal overhead in terms of the gradient-based simulator's running time. Finally, we extend and refine our simulation technique to construct a gradient-based simulator that is fully parallelizable (crucial for avoiding an undesirable overhead for parallelizable cryptanalytic tasks), which is then used to construct a gradient-based simulator that executes the particular and highly useful gradient-descent method. Taken together, although the extent to which ML methods may outperform classical cryptanalytic approaches is still somewhat unclear, our results indicate that such gradient-based methods are not inherently limited by their seemingly restricted access to the provided samples.
Last updated:  2024-07-10
Revisiting PACD-based Attacks on RSA-CRT
Guillaume Barbu, Laurent Grémy, and Roch Lescuyer
In this work, we use some recent developments in lattice-based cryptanalytic tools to revisit a fault attack on RSA-CRT signatures based on the Partial Approximate Common Divisor (PACD) problem. By reducing the PACD to a Hidden Number Problem (HNP) instance, we decrease the number of required faulted bits from 32 to 7 in the case of a 1024-bit RSA. We successfully apply the attack to RSA instances up to 8192-bit and present an enhanced analysis of the error-tolerance in the Bounded Distance Decoding (BDD) with predicate approach. Finally, evaluating the impact of standard side-channel and fault countermeasures, we show that merely verifying the signature before output is not an adequate protection against this attack. The reduction from PACD to HNP might be of independent interest.
Last updated:  2024-07-10
Oryx: Private detection of cycles in federated graphs
Ke Zhong and Sebastian Angel
This paper proposes Oryx, a system for efficiently detecting cycles in federated graphs where parts of the graph are held by different parties and are private. Cycle detection is an important building block in designing fraud detection algorithms that operate on confidential transaction data held by different financial institutions. Oryx allows detecting cycles of various lengths while keeping the topology of the graphs secret, and it does so efficiently; Oryx achieves quasilinear computational complexity and scales well with more machines thanks to a parallel design. Our implementation of Oryx running on a single 32-core AWS machine (for each party) can detect cycles of up to length 6 in under 5 hours in a financial transaction graph that consists of tens of millions of nodes and edges. While the costs are high, adding more machines further reduces the completion time. Furthermore, Oryx is, to our knowledge, the first and only system that can handle this task.
Last updated:  2024-07-10
DeCAF: Decentralizable Continuous Group Key Agreement with Fast Healing
Joël Alwen, Benedikt Auerbach, Miguel Cueto Noval, Karen Klein, Guillermo Pascual-Perez, and Krzysztof Pietrzak
Continuous group key agreement (CGKA) allows a group of users to maintain a continuously updated shared key in an asynchronous setting where parties only come online sporadically and their messages are relayed by an untrusted server. CGKA captures the basic primitive underlying group messaging schemes. Current solutions including TreeKEM (``Messaging Layer Security'' (MLS) IETF RFC 9420) cannot handle concurrent requests while retaining low communication complexity. The exception is CoCoA, which is concurrent while having extremely low communication complexity (in groups of size $n$ and for $m$ concurrent updates the communication per user is $\log(n)$, i.e., independent of $m$). The main downside of CoCoA is that in groups of size $n$, users might have to do up to $\log(n)$ update requests to the server to ensure their (potentially corrupted) key material has been refreshed. In this work we present a ``fast healing'' concurrent CGKA protocol, named DeCAF, where users will heal after at most $\log(t)$ requests, with $t$ being the number of corrupted users. While also suitable for the standard central-server setting, our protocol is particularly interesting for realizing decentralized group messaging, where protocol messages (add, remove, update) are being posted on some append-only data structure rather than sent to a server. In this setting, concurrency is crucial once the rate of requests exceeds, say, the rate at which new blocks are added to a blockchain. In the central-server setting, CoCoA (the only alternative with concurrency, sub-linear communication and basic post-compromise security) enjoys much lower download communication. However, in the decentralized setting - where there is no server which can craft specific messages for different users to reduce their download communication - our protocol significantly outperforms CoCoA. DeCAF heals in fewer rounds ($\log(t)$ vs. $\log(n)$) while incurring a similar per-round, per-user communication cost.
Last updated:  2024-07-10
On the Semidirect Discrete Logarithm Problem in Finite Groups
Christopher Battarbee, Giacomo Borin, Ryann Cartor, Nadia Heninger, David Jao, Laura Maddison, Edoardo Persichetti, Angela Robinson, Daniel Smith-Tone, and Rainer Steinwandt
We present an efficient quantum algorithm for solving the semidirect discrete logarithm problem (SDLP) in any finite group. The believed hardness of the semidirect discrete logarithm problem underlies more than a decade of works constructing candidate post-quantum cryptographic algorithms from nonabelian groups. We use a series of reduction results to show that it suffices to consider SDLP in finite simple groups. We then apply the celebrated Classification of Finite Simple Groups to consider each family. The infinite families of finite simple groups admit, in a fairly general setting, linear algebraic attacks providing a reduction to the classical discrete logarithm problem. For the sporadic simple groups, we show that their inherent properties render them unsuitable for cryptographically hard SDLP instances, which we illustrate via a Baby-Step Giant-Step style attack against SDLP in the Monster Group. Our quantum SDLP algorithm is fully constructive for all but three remaining cases that appear to be gaps in the literature on constructive recognition of groups; for these cases SDLP is no harder than finding a linear representation. We conclude that SDLP is not a suitable post-quantum hardness assumption for any choice of finite group.
Last updated:  2024-07-10
OPPID: Single Sign-On with Oblivious Pairwise Pseudonyms
Maximilian Kroschewski, Anja Lehmann, and Cavit Özbay
Single Sign-On (SSO) allows users to conveniently authenticate to many Relying Parties (RPs) through a central Identity Provider (IdP). SSO supports unlinkable authentication towards the RPs via pairwise pseudonyms, where the IdP assigns the user an RP-specific pseudonym. This feature has been rolled out prominently within Apple's SSO service. While establishing unlinkable identities provides privacy towards RPs, it actually emphasizes the main privacy problem of SSO: with every authentication request, the IdP learns the RP that the user wants to access. Solutions to overcome this limitation exist, but either assume users to behave honestly or require them to manage long-term cryptographic keys. In this work, we propose the first SSO system that can provide such pseudonymous authentication in an unobservable yet strongly secure and convenient manner. That is, the IdP blindly derives the user's pairwise pseudonym for the targeted RP without learning the RP's identity and without requiring key material handled by the user. We formally define the desired security and privacy properties for such unlinkable, unobservable, and strongly secure SSO. In particular, our model includes the often neglected RP authentication: the IdP typically wants to limit its services to registered RPs only and thus must be able to (blindly) verify that it issues the token and pseudonym to such a registered RP. We propose a simple construction that combines signatures with efficient proofs-of-knowledge with a blind, yet verifiable, evaluation of the Hashed-Diffie-Hellman PRF. We prove the security of our construction and demonstrate its efficiency through a prototypical implementation, which requires a running time of 2-20ms per involved party.
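To illustrate the blind evaluation step named above, here is a toy Python sketch of a blinded Hashed-Diffie-Hellman PRF evaluation (our own simplification with deliberately tiny, insecure parameters and an illustration-only hash-to-group shortcut; the actual OPPID protocol additionally handles RP authentication, signatures, and proofs of knowledge).

```python
# Toy sketch of a blinded Hashed-Diffie-Hellman PRF evaluation: the IdP holds k and
# returns H(uid, rp)^k without learning the RP identity. Parameters and the hash-to-group
# shortcut below are assumptions for illustration only.
import hashlib, secrets

p, q, g = 1019, 509, 4            # toy group: order-q subgroup of Z_p^*, with q = (p - 1) / 2

def hash_to_group(data: bytes) -> int:
    # Illustration-only shortcut: hash to an exponent of g. A real instantiation must hash
    # directly onto the group, otherwise the discrete log of H(x) is publicly computable.
    e = int.from_bytes(hashlib.sha256(data).digest(), "big") % q
    return pow(g, e, p)

k = secrets.randbelow(q - 1) + 1  # IdP's long-term PRF key

def client_blind(uid: bytes, rp_id: bytes):
    G = hash_to_group(uid + b"|" + rp_id)
    r = secrets.randbelow(q - 1) + 1
    return pow(G, r, p), r        # blinded element sent to the IdP; r kept by the client

def idp_evaluate(blinded: int) -> int:
    return pow(blinded, k, p)     # the IdP raises to k without learning uid or rp_id

def client_unblind(evaluated: int, r: int) -> int:
    return pow(evaluated, pow(r, -1, q), p)    # removes r, leaving H(uid, rp_id)^k

blinded, r = client_blind(b"alice", b"rp.example.com")
prf_value = client_unblind(idp_evaluate(blinded), r)
print(hashlib.sha256(str(prf_value).encode()).hexdigest()[:16])  # RP-specific pseudonym material
```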
Last updated:  2024-07-10
Strong Existential Unforgeability and More of MPC-in-the-Head Signatures
Mukul Kulkarni and Keita Xagawa
NIST started the standardization of additional post-quantum signatures in 2022. Among the 40 candidates, a few showed security stronger than existential unforgeability, namely strong existential unforgeability and BUFF (beyond unforgeability features) security. Recently, Aulbach, Düzlü, Meyer, Struck, and Weishäupl (PQCrypto 2024) examined the BUFF security of 17 out of the 40 candidates. Unfortunately, for the so-called MPC-in-the-Head (MPCitH) signature schemes, nothing is known about strong existential unforgeability and BUFF security. This paper studies these stronger security notions for all nine MPCitH signature candidates: AIMer, Biscuit, FAEST, MIRA, MiRitH, MQOM, PERK, RYDE, and SDitH. We show that the MPCitH signature schemes are strongly existentially unforgeable under chosen message attacks in the (quantum) random oracle model. To do so, we introduce a new property of the underlying multi-pass identification scheme, which we call _non-divergency_. This property can be considered a weakened version of the computational unique response property for three-pass identification defined by Kiltz, Lyubashevsky, and Schaffner (EUROCRYPT 2018) and its extension to multi-pass identification defined by Don, Fehr, and Majenz (CRYPTO 2020). In addition, we show that the SSH11 protocol proposed by Sakumoto, Shirai, and Hiwatari (CRYPTO 2011) is _not_ computational unique response, although Don et al. (CRYPTO 2020) claimed it is. We also survey the BUFF security of the nine MPCitH candidates in the quantum random oracle model. In particular, we show that Biscuit and MiRitH do not satisfy some of the BUFF security notions.
Last updated:  2024-07-10
Small Private Key Attack Against a Family of RSA-like Cryptosystems
George Teseleanu and Paul Cotan
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Elkamchouchi, Elshenawy and Shaban presented in 2002 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2-1)(q^2-1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is more secure than RSA. Unfortunately, the common attacks developed against RSA can be adapted for Elkamchouchi \emph{et al.}'s scheme. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k (p^n-1)(q^n-1) = 1$, where $n>0$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
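For readers unfamiliar with such attacks, the toy Python sketch below illustrates the classical continued-fraction (Wiener-style) technique in the $n=1$ case, i.e., standard RSA with a small private exponent; it is our own illustration of the underlying idea on tiny parameters, not the paper's generalized attack on the equation $ed - k(p^n-1)(q^n-1) = 1$.

```python
# Minimal sketch of the classical continued-fraction (Wiener-style) attack for n = 1,
# i.e., standard RSA with e*d - k*(p-1)*(q-1) = 1 and a small d, on toy numbers.
from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents k/d of num/den."""
    a, b = num, den
    h0, h1, k0, k1 = 0, 1, 1, 0
    while b:
        qt = a // b
        a, b = b, a - qt * b
        h0, h1 = h1, qt * h1 + h0
        k0, k1 = k1, qt * k1 + k0
        yield h1, k1

def wiener(e, N):
    """Try to recover a small private exponent d from the convergents of e/N."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                 # if phi is correct, p and q are roots of x^2 - s*x + N
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

# toy instance: small balanced primes and a private exponent below N^(1/4)/3
p_, q_ = 65063, 65033
N, phi = p_ * q_, (p_ - 1) * (q_ - 1)
d = 71
e = pow(d, -1, phi)
print(wiener(e, N) == d)
```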
Last updated:  2024-07-10
Lower Bound on Number of Compression Calls of a Collision-Resistance Preserving Hash
Debasmita Chakraborty and Mridul Nandi
The collision-resistant hash function is an early cryptographic primitive that finds extensive use in various applications. Remarkably, the Merkle-Damgård and Merkle tree hash structures possess the collision-resistance preserving property, meaning the hash function remains collision-resistant when the underlying compression function is collision-resistant. This raises the intriguing question of whether reducing the number of underlying compression function calls with the collision-resistance preserving property is possible. In pursuit of addressing these inquiries, we prove that for an $\ell n$-to-$sn$-bit collision-resistance preserving hash function designed using $r$ calls to a $tn$-to-$n$-bit compression function, we must have $r \geq \lceil (\ell - s)/(t-1) \rceil$. Throughout the paper, all operations other than the compression function are assumed to be linear (which we call linear hash mode).
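As an illustrative instance of this bound (our own example, not drawn from the paper): for $\ell = 4$, $s = 1$, and $t = 2$, i.e., hashing $4n$ bits down to $n$ bits using $2n$-to-$n$-bit compression calls, the bound gives $r \geq \lceil (4-1)/(2-1) \rceil = 3$, which a plain chain of three compression calls (the first call absorbing two message blocks, each later call absorbing one) matches exactly.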
Last updated:  2024-07-10
Switching Off your Device Does Not Protect Against Fault Attacks
Paul Grandamme, Pierre-Antoine Tissot, Lilian Bossuet, Jean-Max Dutertre, Brice Colombier, and Vincent Grosso
Physical attacks, and among them fault injection attacks, are a significant threat to the security of embedded systems. Among the means of fault injection, laser has the significant advantage of being extremely spatially accurate. Numerous state-of-the-art studies have investigated the use of lasers to inject faults into a target at run-time. However, the high precision of laser fault injection comes with requirements on the knowledge of the implementation and exact execution time of the victim code. The main contribution of this work is the experimental demonstration that it is also possible to perform laser fault injection on an unpowered device. Specifically, we targeted the Flash non-volatile memory of a 32-bit microcontroller. The advantage of this new attack path is that it does not require any synchronisation between the victim and the attacker. We provide an experimental characterization of this phenomenon with a description of the fault model from the physical level up to the software level. Finally, we applied these results to carry out a persistent fault analysis (PFA) on a 128-bit AES with a particularly realistic attacker model, which reinforces the interest of PFA.
Last updated:  2024-07-10
Structural Lower Bounds on Black-Box Constructions of Pseudorandom Functions
Amos Beimel, Tal Malkin, and Noam Mazor
We address the black-box complexity of constructing pseudorandom functions (PRF) from pseudorandom generators (PRG). The celebrated GGM construction of Goldreich, Goldwasser, and Micali (Crypto 1984) provides such a construction, which (even when combined with Levin's domain-extension trick) has super-logarithmic depth. Despite many years and much effort, this remains essentially the best construction we have to date. On the negative side, one step is provided by the work of Miles and Viola (TCC 2011), which shows that a black-box construction which just calls the PRG once and outputs one of its output bits, cannot be a PRF. In this work, we make significant further progress: we rule out black-box constructions of PRF from PRG that follow certain structural constraints, but may call the PRG adaptively polynomially many times. In particular, we define ``tree constructions" which generalize the GGM structure: they apply the PRG $G$ along a tree path, but allow for different choices of functions to compute the children of a node on the tree and to compute the next node on the computation path down the tree. We prove that a tree construction of logarithmic depth cannot be a PRF (while GGM is a tree construction of super-logarithmic depth). We also show several other results and discuss the special case of one-call constructions. Our main results in fact rule out even weak PRF constructions with one output bit. We use the oracle separation methodology introduced by Gertner, Malkin, and Reingold (FOCS 2001), and show that for any candidate black-box construction $F^G$ from $G$, there exists an oracle relative to which $G$ is a PRG, but $F^G$ is not a PRF.
Last updated:  2024-07-10
Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models
Elijah Pelofske, Vincent Urias, and Lorie M. Liebrock
Generative pre-trained transformers (GPTs) are a type of large language machine learning model that is unusually adept at producing novel and coherent natural language. Notably, these technologies have also been extended to computer programming languages with great success. However, GPT model outputs in general are stochastic and not always correct. For programming languages, the exact specification of the computer code, syntactically and algorithmically, is strictly required in order to ensure the security of computing systems and applications. Therefore, using GPT models to generate computer code poses an important security risk -- while at the same time allowing for potential innovation in how computer code is generated. In this study, the ability of GPT models to generate novel and correct versions, and notably very insecure versions, of implementations of the cryptographic hash function SHA-1 is examined. The GPT models Llama-2-70b-chat-hf, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used. The GPT models are prompted to re-write each function using a modified version of the localGPT framework and langchain to provide word embedding context of the full source code and header files to the model, resulting in over $130,000$ function re-write GPT output text blocks (that are potentially correct source code), approximately $40,000$ of which were able to be parsed as C code and subsequently compiled. The generated code is analyzed for being compilable, correctness of the algorithm, memory leaks, compiler optimization stability, and character distance to the reference implementation. Remarkably, several generated function variants have a high implementation security risk of being correct for some test vectors, but incorrect for other test vectors. Additionally, many function implementations were not correct with respect to the reference algorithm of SHA-1, but produced hashes that have some of the basic characteristics of hash functions. Many of the function re-writes contained serious flaws such as memory leaks, integer overflows, out of bounds accesses, use of uninitialised values, and compiler optimization instability. Compiler optimization settings and SHA-256 hash checksums of the compiled binaries are used to cluster implementations that are equivalent but may not have identical syntax - using this clustering, over $100,000$ distinct, novel, and correct versions of the SHA-1 codebase were generated, where each component C function of the reference implementation is different from the original code.
Last updated:  2024-07-10
An NVMe-based Secure Computing Platform with FPGA-based TFHE Accelerator
Yoshihiro Ohba, Tomoya Sanuki, Claude Gravel, and Kentaro Mihara
In this paper, we introduce a new approach to secure computing by implementing a platform that utilizes an NVMe-based system with an FPGA-based Torus FHE accelerator, SSD, and middleware on the host side. Our platform is the first of its kind to offer complete secure computing capabilities for TFHE using an FPGA-based accelerator. We have defined secure computing instructions to evaluate 14-bit to 14-bit functions using TFHE, and our middleware allows for communication of ciphertexts, keys, and secure computing programs while invoking secure computing programs through NVMe commands with metadata. Our CMux gate implementation features an optimized NTT/INTT circuit that eliminates pre-NTT and post-INTT operations by pre-scaling and pre-transforming constant polynomials such as the bootstrapping and private-functional key-switching keys. Our performance evaluation demonstrates that our secure computing platform outperforms CPU-based and GPU-based platforms by 15 to 120 times and by 2.5 to 3 times, respectively, in gate bootstrapping execution time. Additionally, our platform consumes 7 to 12 times less energy during gate bootstrapping than CPU-based platforms and 1.15 to 1.2 times less than GPU-based platforms.
Last updated:  2024-07-10
Towards Achieving Asynchronous MPC with Linear Communication and Optimal Resilience
Vipul Goyal, Chen-Da Liu-Zhang, and Yifan Song
Secure multi-party computation (MPC) allows a set of $n$ parties to jointly compute a function over their private inputs. The seminal works of Ben-Or, Canetti and Goldreich [STOC '93] and Ben-Or, Kelmer and Rabin [PODC '94] settled the feasibility of MPC over asynchronous networks. Despite the significant line of work devoted to improving the communication complexity, current protocols with information-theoretic security and optimal resilience $t<n/3$ communicate $\Omega(n^4C)$ field elements for a circuit with $C$ multiplication gates. In contrast, synchronous MPC protocols with $O(nC)$ communication have long been known. In this work we make progress towards closing this gap. We provide a novel MPC protocol in the asynchronous setting with statistical security that makes black-box use of an asynchronous complete secret-sharing (ACSS) protocol. The cost per multiplication reduces to the cost of distributing a constant number of sharings via ACSS, improving a linear factor over the state of the art by Choudhury and Patra [IEEE Trans. Inf. Theory '17]. With a recent concurrent work achieving ACSS with linear cost per sharing, we achieve an MPC with ${O}(nC)$ communication and optimal resilience.
Last updated:  2024-07-09
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we provide a novel framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. This results in three new CPRF constructions: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
Last updated:  2024-07-09
Finding Bugs and Features Using Cryptographically-Informed Functional Testing
Giacomo Fenzi, Jan Gilcher, and Fernando Virdia
In 2018, Mouha et al. (IEEE Trans. Reliability, 2018) performed a post-mortem investigation of the correctness of reference implementations submitted to the SHA3 competition run by NIST, finding previously unidentified bugs in a significant portion of them, including two of the five finalists. Their innovative approach allowed them to identify the presence of such bugs in a black-box manner, by searching for counterexamples to expected cryptographic properties of the implementations under test. In this work, we extend their approach to key encapsulation mechanisms (KEMs) and digital signature schemes (DSSs). We perform our tests on multiple versions of the LibOQS collection of post-quantum schemes, to capture implementations at different points of the recent Post-Quantum Cryptography Standardization Process run by NIST. We identify multiple bugs, ranging from software bugs (segmentation faults, memory overflows) to cryptographic bugs, such as ciphertext malleability in KEMs claiming IND-CCA security. We also observe various features of KEMs and DSS that do not contradict any security guarantees, but could appear counter-intuitive.
Last updated:  2024-07-09
Implementation and Performance Evaluation of Elliptic Curve Cryptography over SECP256R1 on STM32 Microprocessor
Onur İşler
The use of Internet of Things (IoT) devices in embedded systems has become increasingly popular with advancing technologies. These devices become vulnerable to cyber attacks as they gain popularity. The cryptographic operations performed to protect against cyber attacks must yield fast results in open networks and not slow down network traffic. Therefore, to enhance communication security, studies have been conducted in the literature on using asymmetric encryption and symmetric encryption together in IoT devices for activities such as key sharing, encryption, decryption, data signing, and verifying signed data. In this study, we first propose a cryptographic system consisting of IoT devices operated from a server. We then perform a performance analysis of our proposal. In particular, we evaluate the elliptic curve Diffie-Hellman key exchange and elliptic curve digital signature algorithms on the Secp256r1 elliptic curve, together with AES symmetric encryption, via the Micro uECC library on the 32-bit STM32F410RB Nucleo development board microprocessor running at 48 MHz.
Last updated:  2024-07-09
QuietOT: Lightweight Oblivious Transfer with a Public-Key Setup
Geoffroy Couteau, Lalita Devadas, Srinivas Devadas, Alexander Koch, and Sacha Servan-Schreiber
Oblivious Transfer (OT) is at the heart of secure computation and is a foundation for many applications in cryptography. Over two decades of work have led to extremely efficient protocols for evaluating OT instances in the preprocessing model, through a paradigm called OT extension. A few OT instances generated in an offline phase can be used to perform many OTs in an online phase efficiently, i.e., with very low communication and computational overheads. Specifically, traditional OT extension protocols use a small number of “base” OTs, generated using any black-box OT protocol, and convert them into many OT instances using only lightweight symmetric-key primitives. Recently, a new paradigm of OT with a *public-key setup* has emerged, which replaces the base OTs with a non-interactive setup: Using only the public key of the other party, two parties can efficiently compute a virtually unbounded number of OT instances on-the-fly. In this paper, we put forth a novel framework for OT extension with a public-key setup and concretely efficient instantiations. An implementation of our framework is over 30 times faster when compared to the previous state-of-the-art public-key OT protocols, and remains competitive even when compared to OT protocols that *do not* offer a public-key setup. Additionally, our instantiations result in the first public-key schemes with plausible post-quantum security. In summary, this paper contributes: - QuietOT: A framework for OT extension with a public-key setup that uses fast, symmetric-key primitives to generate OT instances following a one-time public-key setup, and offering additional features such as precomputability. - A public-key setup for QuietOT from the RingLWE assumption, resulting in the first post-quantum construction of OT extension with a public-key setup. - An optimized, open-source implementation of our construction that can generate up to 1M OT extensions per second on commodity hardware. In contrast, the state-of-the-art public-key OT protocol is limited to approximately 20K OTs per second. - The first formal treatment of the security of OT with a public-key setup in a multi-party setting, which addresses several subtleties that were overlooked in prior work.
Last updated:  2024-07-09
A Fast and Efficient SIKE Co-Design: Coarse-Grained Reconfigurable Accelerators with Custom RISC-V Microcontroller on FPGA
Jing Tian, Bo Wu, Lang Feng, Haochen Zhang, and Zhongfeng Wang
This paper proposes a fast and efficient FPGA-based hardware-software co-design for the supersingular isogeny key encapsulation (SIKE) protocol controlled by a custom RISC-V processor. Firstly, we highly optimize the core unit, the polynomial-based field arithmetic logic unit (FALU), with the proposed fast convolution-like multiplier (FCM) to significantly reduce the resource consumption while still maintaining low latency and constant time for all the four SIKE parameters. Secondly, we pack the small isogeny and point operations in hardware, devise a coarse-grained reconfigurable hardware architecture (CGRHA) based on FALU as the co-processor, and apply it to the RISC-V core with customized instructions, effectively avoiding extra time consumption for the data exchange with the software side and meanwhile increasing flexibility. Finally, we code the hardware in SystemVerilog language and the software in C language and run experiments on FPGAs. In the co-processor implementation, the experiment results show that our design for the four SIKE parameters achieves 2.6-4.4x speedup and obtains comparable or better area-time product to or than the state-of-the-art. In the hardware-software co-design experiments, we still have the superiority in speed and only <10\% of extra time is introduced by mutual communication.
Last updated:  2024-07-09
Ring Signatures for Deniable AKEM: Gandalf's Fellowship
Phillip Gajland, Jonas Janneck, and Eike Kiltz
Ring signatures, a cryptographic primitive introduced by Rivest, Shamir and Tauman (ASIACRYPT 2001), offer signer anonymity within dynamically formed user groups. Recent advancements have focused on lattice-based constructions to improve efficiency, particularly for large signing rings. However, current state-of-the-art solutions suffer from significant overhead, especially for smaller rings. In this work, we present a novel NTRU-based ring signature scheme, Gandalf, tailored towards small rings. Our post-quantum scheme achieves a 50% reduction in signature sizes compared to the linear ring signature scheme Raptor (ACNS 2019). For rings of size two, our signatures are approximately a quarter the size of DualRing (CRYPTO 2021), another linear scheme, and remain more compact for rings up to size seven. Compared to the sublinear scheme Smile (CRYPTO 2021), our signatures are more compact for rings of up to 26. In particular, for rings of size two, our ring signatures are only 1236 bytes. Additionally, we explore the use of ring signatures to obtain deniability in authenticated key exchange mechanisms (AKEMs), the primitive behind the recent HPKE standard used in MLS and TLS. We take a fine-grained approach at formalising sender deniability within AKEM and seek to define the strongest possible notions. Our contributions extend to a black-box construction of a deniable AKEM from a KEM and a ring signature scheme for rings of size two. Our approach attains the highest level of confidentiality and authenticity, while simultaneously preserving the strongest forms of deniability in two orthogonal settings. Finally, we present parameter sets for our schemes, and show that our deniable AKEM, when instantiated with our ring signature scheme, yields ciphertexts of 2004 bytes.
Last updated:  2024-07-09
A Long Tweak Goes a Long Way: High Multi-user Security Authenticated Encryption from Tweakable Block Ciphers
Benoît Cogliati, Jérémy Jean, Thomas Peyrin, and Yannick Seurin
We analyze the multi-user (mu) security of a family of nonce-based authentication encryption (nAE) schemes based on a tweakable block cipher (TBC). The starting point of our work is an analysis of the mu security of the SCT-2 mode which underlies the nAE scheme Deoxys-II, winner of the CAESAR competition for the defense-in-depth category. We extend this analysis in two directions, as we detail now. First, we investigate the mu security of several TBC-based variants of the counter encryption mode (including CTRT, the encryption mode used within SCT-2) that differ by the way a nonce, a random value, and a counter are combined as tweak and plaintext inputs to the TBC to produce the keystream blocks that will mask the plaintext blocks. Then, we consider the authentication part of SCT-2 and study the mu security of the nonce-based MAC Nonce-as-Tweak (NaT) built from a TBC and an almost universal (AU) hash function. We also observe that the standard construction of an AU hash function from a (T)BC can be proven secure under the assumption that the underlying TBC is unpredictable rather than pseudorandom, allowing much better conjectures on the concrete AU advantage. This allows us to derive the mu security of the family of nAE modes obtained by combining these encryption/MAC building blocks through the NSIV composition method. Some of these modes require an underlying TBC with a larger tweak length than what is usually available for existing ones. We then show the practicality of our modes by instantiating them with two new TBC constructions, Deoxys-TBC-512 and Deoxys-TBC-640, which can be seen as natural extensions of the Deoxys-TBC family to larger tweak input sizes. Designing such TBCs with unusually large tweaks is prone to pitfalls: Indeed, we show that a large-tweak proposal for SKINNY published at EUROCRYPT 2020 presents an inherent construction flaw. We therefore provide a sound design strategy to construct large-tweak TBCs within the Superposition Tweakey (STK) framework, leading to new Deoxys-TBC and SKINNY variants. We provide software benchmarks indicating that while ensuring a very high security level, the performances of our proposals remain very competitive.
Last updated:  2024-07-09
Generic Anamorphic Encryption, Revisited: New Limitations and Constructions
Dario Catalano, Emanuele Giunta, and Francesco Migliaro
The notion of Anamorphic Encryption (Persiano et al. Eurocrypt 2022) aims at establishing private communication against an adversary who can access secret decryption keys and influence the chosen messages. Persiano et al. gave a simple, black-box, rejection sampling-based technique to send anamorphic bits using any IND-CPA secure scheme as underlying PKE. In this paper, however, we provide evidence that their solution is not as general as claimed: indeed, there exists a (contrived yet secure) PKE which leads to insecure anamorphic instantiations. Actually, our result implies that such stateless black-box realizations of AE are impossible to achieve, unless weaker notions are targeted or extra assumptions are made on the PKE. Even worse, this holds true even if one resorts to powerful non-black-box techniques, such as NIZKs, $ i\mathcal{O} $ or garbling. From a constructive perspective, we shed light on these required assumptions. Specifically, we show that one could bypass (to some extent) our impossibility by either considering a weaker (but meaningful) notion of AE or by assuming the underlying PKE to (always) produce high min-entropy ciphertexts. Finally, we prove that, for the case of Fully-Asymmetric AE, $ i\mathcal{O}$ can actually be used to overcome existing impossibility barriers. We show how to use $ i\mathcal{O} $ to build Fully-Asymmetric AE (with small anamorphic message space) generically from any IND-CPA secure PKE with sufficiently high min-entropy ciphertexts. Put together, our results provide a clearer picture of what black-box constructions can and cannot achieve.
Last updated:  2024-07-09
Shared-Custodial Password-Authenticated Deterministic Wallets
Poulami Das, Andreas Erwig, and Sebastian Faust
Cryptographic wallets are an essential tool in Blockchain networks to ensure the secure storage and maintenance of a user's cryptographic keys. Broadly, wallets can be divided into three categories, namely custodial, non-custodial, and shared-custodial wallets. The first two are centralized solutions, i.e., the wallet is operated by a single entity, which inherently introduces a single point of failure. Shared-custodial wallets, on the other hand, are maintained by two independent parties, e.g., the wallet user and a service provider, and hence avoid the single point of failure of centralized solutions. Unfortunately, current shared-custodial wallets suffer from significant privacy issues. In our work, we introduce password-authenticated deterministic wallets (PADW), a novel and efficient shared-custodial wallet solution, which exhibits strong security and privacy guarantees. In a nutshell, in a PADW scheme, the secret key of the user is shared between the user and the server. In order to generate a signature, the user first authenticates itself to the server by providing a password and afterwards engages in an interactive signing protocol with the server. Security is guaranteed as long as at most one of the two parties is corrupted. Privacy, on the other hand, guarantees that a corrupted server cannot link a transaction to a particular user. We formally model the notion of PADW schemes and we give an instantiation from blind Schnorr signatures. Our construction allows for deterministic key derivation, a feature that is widely used in practice by existing wallet schemes, and it does not rely on any heavy cryptographic primitives. We prove our scheme secure against adaptive adversaries in the random oracle model and under standard assumptions. That is, our security proof only relies on the assumption that the Schnorr signature scheme is unforgeable and that a public key encryption scheme is CCA-secure.
Last updated:  2024-07-09
Reducing the Share Size of Weighted Threshold Secret Sharing Schemes via Chow Parameters Approximation
Oriol Farràs and Miquel Guiot
A secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets. In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of their weights is at least the threshold value. For these access structures, the share size of the best known secret sharing schemes is either linear in the weights or quasipolynomial in the number of parties, which in general leads to long shares. In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: Finding secret sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation. We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is close to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties. In addition, applying the recent results on computational secret sharing schemes by Applebaum et al. [STOC, 2023], we show that there exist computational secret sharing schemes whose security is based on the RSA assumption and whose share size is polylogarithmic in the number of parties.
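For readers unfamiliar with the access structure in question, the following minimal Python sketch shows the weighted threshold authorization rule defined in the abstract (a subset is authorized iff its total weight reaches the threshold). The weights and threshold are hypothetical values chosen for illustration; the paper's contribution, the Chow-parameter approximation, is not shown here.

```python
# Weighted threshold access structure: a subset is authorized iff the
# sum of its parties' weights is at least the threshold T.
# Weights and threshold below are illustrative only.
def is_authorized(weights, subset, threshold):
    """weights: dict party -> weight; subset: iterable of parties."""
    return sum(weights[p] for p in subset) >= threshold

weights = {"A": 5, "B": 3, "C": 3, "D": 1}
T = 7
print(is_authorized(weights, {"A", "B"}, T))  # True  (5 + 3 >= 7)
print(is_authorized(weights, {"B", "C"}, T))  # False (3 + 3 <  7)
```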
Last updated:  2024-07-09
Securely Training Decision Trees Efficiently
Divyanshu Bhardwaj, Sandhya Saravanan, Nishanth Chandran, and Divya Gupta
Decision trees are an important class of supervised learning algorithms. When multiple entities contribute data to train a decision tree (e.g. for fraud detection in the financial sector), data privacy concerns necessitate the use of a privacy-enhancing technology such as secure multi-party computation (MPC) in order to secure the underlying training data. The prior state-of-the-art (Hamada et al.) constructs an MPC protocol for decision tree training with a communication of $\mathcal{O}(hmN\log N)$, when building a decision tree of height $h$ for a training dataset of $N$ samples, each having $m$ attributes. In this work, we significantly reduce the communication complexity of secure decision tree training. We construct a protocol with communication complexity $\mathcal{O}(mN\log N + hmN + hN\log N)$, thereby achieving an improvement of $\approx \mathsf{min}(h, m, \log N)$ over Hamada et al. At the core of our technique is an improved protocol to regroup sorted private elements further into additional groups (according to a flag vector) while maintaining their relative ordering. We implement our protocol in the MP-SPDZ framework and show that it requires $10\times$ less communication and is $9\times$ faster than the state-of-the-art.
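As a plaintext illustration of the regrouping functionality mentioned in the abstract (splitting existing groups according to a flag vector while preserving relative order), here is a minimal Python sketch. It shows only the functionality computed in the clear; the paper's contribution is the oblivious MPC realization, and the variable names and example data are hypothetical.

```python
# Plaintext sketch of the regrouping functionality: within each existing
# group, elements are split by a binary flag (flag-0 part first, then the
# flag-1 part), preserving relative order in both parts (a stable
# per-group partition). Group ids, flags and values are illustrative.
def regroup(group_ids, flags):
    """Return new group ids after splitting each group by the flag vector."""
    new_ids = [0] * len(group_ids)
    next_id = 0
    for g in sorted(set(group_ids)):
        idx = [i for i, gid in enumerate(group_ids) if gid == g]
        for f in (0, 1):                       # stable: 0-part, then 1-part
            part = [i for i in idx if flags[i] == f]
            if part:
                for i in part:
                    new_ids[i] = next_id
                next_id += 1
    return new_ids

group_ids = [0, 0, 0, 1, 1, 1]
flags     = [1, 0, 1, 0, 0, 1]
print(regroup(group_ids, flags))   # [1, 0, 1, 2, 2, 3]
```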
Last updated:  2024-07-09
Linear-Communication Asynchronous Complete Secret Sharing with Optimal Resilience
Xiaoyu Ji, Junru Li, and Yifan Song
Secure multiparty computation (MPC) allows a set of $n$ parties to jointly compute a function on their private inputs. In this work, we focus on information-theoretic MPC in the \emph{asynchronous network} setting with optimal resilience ($t<n/3$). The best-known result in this setting is achieved by Choudhury and Patra [J. Cryptol '23], which requires $O(n^4\kappa)$ bits per multiplication gate, where $\kappa$ is the size of a field element. An asynchronous complete secret sharing (ACSS) protocol allows a dealer to share a batch of Shamir sharings such that all parties eventually receive their shares. ACSS is an important building block in asynchronous MPC (AMPC). The best-known result of ACSS is due to Choudhury and Patra [J. Cryptol '23], which requires $O(n^3\kappa)$ bits per sharing. On the other hand, in the synchronous setting, it is known that distributing Shamir sharings can be achieved with $O(n\kappa)$ bits per sharing. There is a gap of $n^2$ in the communication between the synchronous setting and the asynchronous setting. Our work closes this gap by presenting the first ACSS protocol that achieves $O(n\kappa)$ bits per sharing. When combined with the compiler from ACSS to AMPC by Choudhury and Patra [IEEE Trans. Inf. Theory '17], we obtain an AMPC with $O(n^2\kappa)$ bits per multiplication gate, improving the previously best-known result by a factor of $n^2$. Moreover, with a concurrent work that improves the compiler by Choudhury and Patra by a factor of $n$, we obtain the first AMPC with $O(n\kappa)$ bits per multiplication gate.
Last updated:  2024-07-09
A Simple Post-Quantum Oblivious Transfer Protocol from Mod-LWR
Shen Dong, Hongrui Cui, Kaiyi Zhang, Kang Yang, and Yu Yu
Oblivious transfer (OT) is a fundamental cryptographic protocol that plays a crucial role in secure multi-party computation (MPC). Most practical OT protocols by, e.g., Naor and Pinkas (SODA'01) or Chou and Orlandi (Latincrypt'15), are based on Diffie-Hellman (DH)-like assumptions and not post-quantum secure. In contrast, many other components of MPC protocols, including garbled circuits and secret sharing, are post-quantum secure. The reliance on non-post-quantum OT protocols presents a significant security bottleneck with the advent of quantum computing. In this paper, we address this issue by constructing a simple, efficient OT protocol based on Saber, a Mod-LWR-based key exchange protocol. We implemented our OT protocol and conducted experiments to evaluate its performance. Our results show that our OT protocol significantly outperforms the state-of-the-art Kyber-based post-quantum OT protocol by Masny and Rindal (CCS'19) in terms of both computation and communication costs. Furthermore, the computation speed of our OT protocol is faster than that of the best-known DH-based OT protocol by Chou and Orlandi (Latincrypt'15), making it a competitive replacement for DH-based OT in the high-bandwidth network setting.
Last updated:  2024-07-09
Public vs Private Blockchains lineage storage
Bilel Zaghdoudi and Maria Potop Butucaru
This paper reports experimental results related to lineage event storage via smart contracts deployed on private and public blockchains. In our experiments we measure three metrics: the cost to deploy the storage smart contract on the blockchain, i.e., the initial expenditure, typically in gas units, required to deploy the contract that facilitates lineage event storage, and the time and gas costs needed to store a lineage event. We investigated both single- and multi-client scenarios. We considered the following public blockchains: Hedera, Fantom, Harmony Shard0, Polygon Amoy, Ethereum Sepolia, Optimism Sepolia, Klaytn Baobab and Arbitrum Sepolia. Furthermore, we investigate the performance of Hyperledger Besu with different consensus algorithms as private blockchains.
Last updated:  2024-07-09
Time-Memory Trade-off Algorithms for Homomorphically Evaluating Look-up Table in TFHE
Shintaro Narisada, Hiroki Okada, Kazuhide Fukushima, and Takashi Nishide
We propose time-memory trade-off algorithms for evaluating look-up tables (LUTs) in both the leveled homomorphic encryption (LHE) and fully homomorphic encryption (FHE) modes in TFHE. For an arbitrary $n$-bit Boolean function, we reduce evaluation time by a factor of $O(n)$ at the expense of an additional memory of "only" $O(2^n)$ as a trade-off: The total asymptotic memory is also $O(2^n)$, which is the same as that of prior works. Our empirical results demonstrate that a $7.8 \times$ speedup in runtime is obtained with a $3.8 \times$ increase in memory usage for 16-bit Boolean functions in the LHE mode. Additionally, in the FHE mode, we achieve reductions in both runtime and memory usage by factors of $17.9 \times$ and $2.5 \times $, respectively, for 8-bit Boolean functions. The core idea is to decompose the function $f$ into sufficiently small subfunctions and leverage the precomputed results for these subfunctions, thereby achieving significant performance improvements at the cost of additional memory.
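To make the decomposition idea concrete in the clear (no TFHE or homomorphic operations shown), the following Python sketch splits an $n$-bit Boolean function on its top $n-k$ input bits into $2^{n-k}$ subfunctions of $k$ bits whose truth tables are precomputed once and reused; the parameters and example function are illustrative, not the paper's.

```python
# Plaintext sketch of decomposing an n-bit Boolean function f into small
# subfunctions with precomputed truth tables. n, k and f are illustrative.
n, k = 8, 4

def f(x):                       # example 8-bit Boolean function (parity)
    return bin(x).count("1") & 1

# One-time precomputation: one small truth table per subfunction.
sub_tables = [
    [f((hi << k) | lo) for lo in range(1 << k)]
    for hi in range(1 << (n - k))
]

def evaluate(x):
    hi, lo = x >> k, x & ((1 << k) - 1)
    return sub_tables[hi][lo]   # select sub-table, then small-table lookup

assert all(evaluate(x) == f(x) for x in range(1 << n))
```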
Last updated:  2024-07-09
Information-Theoretic 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret key additive somewhat homomorphic schemes where the client has perfect privacy (server can be computationally unbounded). Our basic scheme is additive somewhat homomorphic and we give protocols to handle addition and multiplication. In one scheme, the server handles circuit multiplication gates by returning the multiplicands to the client which does the multiplication and sends back the encrypted product. We give a 2-party protocol that also incorporates server inputs where the client has perfect privacy. Server privacy is not information-theoretic, but rather depends on hardness of the subset sum problem. Correctness for the server in the malicious model can be verified by a 3rd party with high probability where the client and server privacy are information-theoretically protected from the verifier. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
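To illustrate the round structure for a multiplication gate described in the abstract (the server returns the multiplicands; the client multiplies and re-encrypts), here is a Python sketch built on a stand-in additively homomorphic secret-key scheme (simple additive masking over $Z_N$ with client-side bookkeeping). This stand-in is NOT the paper's construction and has no security claim; it only exposes Enc/Dec/Add so the message flow can be shown.

```python
# Round-structure sketch only; the additive scheme is a placeholder.
import secrets

N = 2**61 - 1                            # illustrative modulus

class Client:
    def __init__(self):
        self.keys = {}                   # ciphertext id -> additive mask
    def enc(self, m):
        cid, r = len(self.keys), secrets.randbelow(N)
        self.keys[cid] = r
        return cid, (m + r) % N
    def dec(self, ct):
        cid, c = ct
        return (c - self.keys[cid]) % N

def server_add(ct1, ct2, client):
    # Additions are local to the server; in this stand-in the client also
    # records the combined mask (bookkeeping of the placeholder scheme).
    (i1, c1), (i2, c2) = ct1, ct2
    cid = len(client.keys)
    client.keys[cid] = (client.keys[i1] + client.keys[i2]) % N
    return cid, (c1 + c2) % N

def mult_gate(ct1, ct2, client):
    # Multiplication gate: server returns both operand ciphertexts;
    # the client decrypts, multiplies, and re-encrypts the product.
    a, b = client.dec(ct1), client.dec(ct2)
    return client.enc((a * b) % N)

cl = Client()
x, y, z = cl.enc(5), cl.enc(7), cl.enc(3)
out = server_add(mult_gate(x, y, cl), z, cl)   # computes 5*7 + 3
assert cl.dec(out) == 38
```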
Last updated:  2024-07-09
Fast, Large Scale Dimensionality Reduction Schemes Based on CKKS
Haonan Yuan, Wenyuan Wu, and Jingwei Chen
The proliferation of artificial intelligence and big data has resulted in a surge in data demand and increased data dimensionality. This escalation has consequently heightened the costs associated with storage and processing. Concurrently, the confidential nature of data collected by various institutions, which cannot be disclosed due to personal privacy concerns, has exacerbated the challenges associated with data analysis and machine learning model training. Therefore, designing a secure and efficient high-dimensional data reduction method that supports multi-party joint participation becomes critical to solving these problems. This paper proposes a novel homomorphic encryption dimensionality reduction scheme (HE-DR) based on CKKS, which modifies the Rank-Revealing (RR) method to make it more applicable to fully homomorphic encryption, thereby achieving fast and secure dimension reduction for high-dimensional data. Compared to traditional homomorphic encryption dimensionality reduction schemes, our approach does not transmit the user's original data to other participants in any form (ciphertext or plaintext). Moreover, our method is nearly $60-200$ times faster than similar algorithms, and its communication overhead is only $1/3$ of theirs. Finally, we have shown that our proposed scheme can preserve its computational efficiency and accuracy even when dealing with high-dimensional data. As dimensionality escalates, the ratio of ciphertext to plaintext computational efficiency plateaus at approximately 5 times, while the computational error (distance between subspaces) remains around $10^{-11}$.
Last updated:  2024-07-09
Ringtail: Practical Two-Round Threshold Signatures from Learning with Errors
Cecilia Boschini, Darya Kaviani, Russell W. F. Lai, Giulio Malavolta, Akira Takahashi, and Mehdi Tibouchi
A threshold signature scheme splits the signing key among $\ell$ parties, such that any $t$-subset of parties can jointly generate signatures on a given message. Designing concretely efficient post-quantum threshold signatures is a pressing question, as evidenced by NIST's recent call. In this work, we propose, implement, and evaluate a lattice-based threshold signature scheme, Ringtail, which is the first to achieve a combination of desirable properties: (i) The signing protocol consists of only two rounds, where the first round is message-independent and can thus be preprocessed offline. (ii) The scheme is concretely efficient and scalable to $t \leq 1024$ parties. For $128$-bit security and $t = 1024$ parties, we achieve $13.4$ KB signature size and $10.5$ KB of online communication. (iii) The security is based on the standard learning with errors (LWE) assumption in the random oracle model. This improves upon the state-of-the-art (with comparable efficiency) which either has a three-round signing protocol [Eurocrypt'24] or relies on a new non-standard assumption [Crypto'24]. To substantiate the practicality of our scheme, we conduct the first WAN experiment deploying a lattice-based threshold signature, across 8 countries in 5 continents. We observe that an overwhelming majority of the end-to-end latency is consumed by network latency, underscoring the need for round-optimized schemes.
Last updated:  2024-07-08
Simple Logarithmic-size LSAG signature
Edsger Hughes
A number of existing cryptosystems use the well-known linear-size LSAG signature concept, extending it in many ways. This article presents a simple logarithmic-size signature LS-LSAG which, despite a radical reduction in size, retains the basic code block of LSAG. Therefore, substituting LS-LSAG for LSAG requires minimal changes to almost any existing LSAG/CLSAG-based solution, making it logarithmic instead of linear.
Last updated:  2024-07-08
HERatio: Homomorphic Encryption of Rationals using Laurent Polynomials
Luke Harmon, Gaetan Delavignette, and Hanes Oliveira
In this work we present $\mathsf{HERatio}$, a homomorphic encryption scheme that builds on the schemes of Brakerski and of Fan and Vercauteren (BFV). Our scheme naturally accepts Laurent polynomials as inputs, allowing it to work with rationals via their bounded base-$b$ expansions. This eliminates the need for a specialized encoder and streamlines encryption, while maintaining comparable efficiency to BFV. To achieve this, we introduce a new variant of the Polynomial Learning With Errors (PLWE) problem which employs Laurent polynomials instead of the usual ``classic'' polynomials, and provide a reduction to the PLWE problem.
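As a rough illustration of representing a rational by a bounded base-$b$ expansion read off as a Laurent polynomial (negative exponents for the fractional part), here is a minimal Python sketch; the base, digit range and precision are assumptions, and the actual HERatio encoding and parameters may differ.

```python
# Sketch: x ~ sum_e d_e * b^e with digits d_e in [0, b) and exponents e
# ranging over a bounded window (Laurent-polynomial coefficients).
from fractions import Fraction

def to_laurent(x, b=3, frac_digits=8):
    """Return {exponent: digit} approximating x >= 0 in base b."""
    x = Fraction(x)
    coeffs, int_part, e = {}, int(x), 0
    while int_part:                               # non-negative exponents
        coeffs[e] = int_part % b
        int_part //= b
        e += 1
    frac = x - int(x)
    for e in range(-1, -frac_digits - 1, -1):     # negative exponents
        frac *= b
        d = int(frac)
        if d:
            coeffs[e] = d
        frac -= d
    return coeffs

def evaluate(coeffs, b=3):
    return sum(Fraction(d) * Fraction(b) ** e for e, d in coeffs.items())

c = to_laurent(Fraction(22, 7), b=3)
print(c, float(evaluate(c)))                      # approx. 3.142857...
```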
Last updated:  2024-07-08
Polytopes in the Fiat-Shamir with Aborts Paradigm
Henry Bambury, Hugo Beguinet, Thomas Ricosset, and Eric Sageloli
The Fiat-Shamir with Aborts paradigm (FSwA) uses rejection sampling to remove a secret’s dependency on a given source distribution. Recent results revealed that unlike the uniform distribution in the hypercube, both the continuous Gaussian and the uniform distribution within the hypersphere minimise the rejection rate and the size of the proof of knowledge. However, in practice both these distributions suffer from the complexity of their sampler. So far, those three distributions are the only available alternatives, but none of them offer the best of all worlds: competitive proof of knowledge size and rejection rate with a simple sampler. We introduce a new generic framework for FSwA using polytope based rejection sampling to enable a wider variety of constructions. As a matter of fact, this framework is the first to generalise these results to integral distributions. To complement the lack of alternatives, we also propose a new polytope construction, whose uniform sampler approaches in simplicity that of the hypercube. At the same time, it provides competitive proof of knowledge size compared to that obtained from the Gaussian distribution. Concurrently, we share some experimental improvements of our construction to further reduce the proof size. Finally, we propose a signature based on the FSwA paradigm using both our framework and construction. We prove it to be competitive with Haetae in signature size and with Dilithium on sampler simplicity.
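For context, the following Python sketch shows the classical rejection step of Fiat-Shamir with Aborts for the uniform distribution in a hypercube, the baseline that the polytope framework above generalizes; all parameters are illustrative and the paper's polytope samplers are not shown.

```python
# Generic FSwA rejection step for a uniform hypercube mask: sample y in
# [-B, B]^n, set z = y + c*s, and retry unless z lies in the smaller cube
# [-(B - beta), B - beta]^n, so that z is independent of the secret term.
import secrets

n, B, beta = 8, 1000, 50      # dimension, mask bound, bound on |c*s| coords

def sample_mask():
    return [secrets.randbelow(2 * B + 1) - B for _ in range(n)]

def fswa_response(cs):
    """cs = c*s with |cs_i| <= beta (illustrative secret-dependent vector)."""
    while True:
        y = sample_mask()
        z = [yi + ci for yi, ci in zip(y, cs)]
        if all(abs(zi) <= B - beta for zi in z):   # rejection condition
            return z

cs = [secrets.randbelow(2 * beta + 1) - beta for _ in range(n)]
print(fswa_response(cs))
```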
Last updated:  2024-07-08
Ad Hoc Broadcast, Trace, and Revoke --- Plus Time-Space Trade-Offs for Attribute-Based Encryption
Ji Luo
Traitor tracing schemes [Chor–Fiat–Naor, Crypto ’94] help content distributors fight against piracy and are defined with the content distributor as a trusted authority having access to the secret keys of all users. While the traditional model caters well to its original motivation, its centralized nature makes it unsuitable for many scenarios. For usage among mutually untrusted parties, a notion of *ad hoc* traitor tracing (naturally with the capability of broadcast and revocation) is proposed and studied in this work. Such a scheme allows users in the system to generate their own public/secret key pairs, without trusting any other entity. To encrypt, a list of public keys is used to identify the set of recipients, and decryption is possible with a secret key for any of the public keys in the list. In addition, there is a tracing algorithm that given a list of recipients’ public keys and a pirate decoder capable of decrypting ciphertexts encrypted to them, identifies at least one recipient whose secret key must have been used to construct the said decoder. Two constructions are presented. The first is based on functional encryption for circuits (conceptually, obfuscation) and has constant-size ciphertext, yet its decryption time is linear in the number of recipients. The second is a generic transformation that reduces decryption time at the cost of increased ciphertext size. A matching lower bound on the trade-off between ciphertext size and decryption time is shown, indicating that the two constructions achieve all possible optimal trade-offs, i.e., they fully demonstrate the Pareto front of efficiency. The lower bound also applies to broadcast encryption (hence all mildly expressive attribute-based encryption schemes) and is of independent interest.
Last updated:  2024-07-08
Collision Attacks on Galois/Counter Mode (GCM)
John Preuß Mattsson
Advanced Encryption Standard Galois/Counter Mode (AES-GCM) is the most widely used Authenticated Encryption with Associated Data (AEAD) algorithm in the world. In this paper, we analyze the use of GCM with all the Initialization Vector (IV) constructions and lengths approved by NIST SP 800-38D when encrypting multiple plaintexts with the same key. We derive attack complexities in both ciphertext-only and known-plaintext models, with or without nonce hiding, for collision attacks compromising integrity and confidentiality. Our analysis shows that GCM with random IVs provides less than 128 bits of security. When 96-bit IVs are used, as recommended by NIST, the security drops to less than 97 bits. Therefore, we strongly recommend NIST to forbid the use of GCM with 96-bit random nonces.
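The order of magnitude behind the "less than 97 bits" figure can be recovered from the standard birthday bound; the sketch below is an illustrative back-of-the-envelope calculation, while the paper derives the precise attack complexities for the various IV constructions and attack models.

```latex
% Illustrative birthday-bound estimate (not the paper's exact bounds).
% With $\nu$-bit random IVs and $q$ messages encrypted under one key:
\Pr[\text{IV collision}] \;\approx\; \binom{q}{2}\, 2^{-\nu}
  \;\approx\; \frac{q^{2}}{2^{\nu+1}} .
% For $\nu = 96$ this becomes close to $1/2$ around $q \approx 2^{48}$.
% A single IV collision reveals the XOR of the two keystreams and, via the
% two tags, information on the GHASH key, so the effective security with
% 96-bit random IVs is bounded well below 97 bits, not the nominal 128.
```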
Last updated:  2024-07-08
Legacy Encryption Downgrade Attacks against LibrePGP and CMS
Falko Strenzke and Johannes Roth
This work describes vulnerabilities in the specification of the AEAD packets as introduced in the novel LibrePGP specification that is implemented by the widely used GnuPG application and the AES-based AEAD schemes as well as the Key Wrap Algorithm specified in the Cryptographic Message Syntax (CMS). These new attacks exploit the possibility of downgrading AEAD or AES Key Wrap ciphertexts to valid legacy CFB- or CBC-encrypted related ciphertexts and require that the attacker learns the content of the legacy decryption result. This can happen either due to the human recipient returning the decryption output, which has an entirely pseudo-random appearance, to the attacker or due to a programmatic decryption oracle in the receiving system. The attacks enable the decryption of low-entropy plaintext blocks in AEAD ciphertexts and, in the case of LibrePGP, also the manipulation of existing AEAD ciphertexts. For AES Key Wrap in CMS, full key decryption is possible. Some of the attacks require multiple successful oracle queries. The attacks thus demonstrate that CCA2 security is not achieved by the LibrePGP and CMS AEAD or Key Wrap encryption in the presence of a legacy cipher mode decryption oracle. The proper countermeasure to thwart the attacks is a key derivation that ensures the use of unrelated block cipher keys for the different encryption modes.
Last updated:  2024-07-08
On Efficient and Secure Compression Modes for Arithmetization-Oriented Hashing
Elena Andreeva, Rishiraj Bhattacharyya, Arnab Roy, and Stefano Trevisani
ZK-SNARKs, a fundamental component of privacy-oriented payment systems, identity protocols, or anonymous voting systems, are advanced cryptographic protocols for verifiable computation: modern SNARKs allow encoding the invariants of a program, expressed as an arithmetic circuit, in an appropriate constraint language from which short, zero-knowledge proofs for correct computations can be constructed. One of the most important computations that is run through SNARK systems is the verification of Merkle tree (MT) opening proofs, which relies on the evaluation of a fixed-input-length (FIL) cryptographic compression function over binary MTs. As classical, bit-oriented hash functions like SHA-2 are not compactly representable in SNARK frameworks, Arithmetization-Oriented (AO) cryptographic designs have emerged as an alternative, efficient solution. Today, the majority of AO compression functions are built from the Sponge permutation-based hashing mode. While this approach allows cost savings compared to blockcipher-based modes, as it does not require key scheduling, AO blockcipher key schedules are often cheap to compute. Furthermore, classical bit-oriented cryptography has long studied how to construct provably secure compression functions from blockciphers, following the Preneel-Govaerts-Vandewalle (PGV) framework. The potential efficiency gains, together with the strong provable security foundations in the classic setting, motivate the study of AO blockcipher-based compression functions. In this work, we propose PGV-LC and PGV-ELC, two AO blockcipher-based FIL compression modes inspired by and extending the classical PGV approach, offering flexible input and output sizes and coming with provable security guarantees in the AO setting. We prove collision and preimage resistance in the ideal cipher model, and give bounds for collision and opening resistance over MTs of arbitrary arity. We compare experimentally the AO PGV-ELC mode over the Hades blockcipher with its popular and widely adopted Sponge instantiation, Poseidon, and its improved variant Poseidon2. Our resulting constructions are up to \(3\times \) faster than Poseidon and \(2\times \) faster than Poseidon2 in native x86 execution, and up to \(50\% \) faster in the Groth16 SNARK framework. Finally, we study the benefits of using MTs of arity wider than two, proposing a new strategy to obtain a compact R1CS constraint system in such a case. In fact, by combining an efficient parametrization of the Hades blockcipher over the PGV-ELC mode, together with an optimal choice of the MT arity, we measured an improvement of up to \(9\times \) in native MT construction time, and up to \(2.5\times \) in proof generation time, compared to Poseidon over binary MTs.
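As background on the classical PGV framework referenced above, the Python sketch below shows one of the twelve provably secure PGV modes, Davies-Meyer ($h' = E_m(h) \oplus h$), chained Merkle-Damgaard style. The "blockcipher" used here is a toy keyed permutation for illustration only; it is neither secure nor related to the paper's PGV-LC/PGV-ELC modes.

```python
# Classic blockcipher-based compression (Davies-Meyer), for background only.
MASK64 = 2**64 - 1

def toy_blockcipher(key: int, block: int, rounds: int = 8) -> int:
    """Toy 64-bit keyed permutation (illustrative stand-in, not secure)."""
    x = block
    for i in range(rounds):
        x = ((x * 0x9E3779B97F4A7C15) ^ key ^ i) & MASK64  # odd mult = bijection
        x = ((x << 7) | (x >> 57)) & MASK64                # 64-bit rotation
    return x

def davies_meyer(h: int, m: int) -> int:
    # Message block acts as the cipher key, chaining value as the plaintext.
    return toy_blockcipher(m, h) ^ h

def hash_blocks(blocks, iv=0x0123456789ABCDEF):
    h = iv
    for m in blocks:           # Merkle-Damgaard-style chaining
        h = davies_meyer(h, m)
    return h

print(hex(hash_blocks([1, 2, 3])))
```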
Last updated:  2024-07-08
Faster Asynchronous Blockchain Consensus and MVBA
Matthieu Rambaud
Blockchain consensus protocols, a.k.a. BFT SMR, enable $n$ processes to decide on an ever-growing chain. The fastest known asynchronous one is called 2-chain VABA (PODC'21 and FC'22), and is used as fallback chain in Abraxas* (CCS'23). It has a claimed $9.5\delta$ expected latency when used for a single shot instance, a.k.a. an MVBA. We exhibit attacks breaking it. Hence, the title of the fastest asynchronous MVBA with quadratic message complexity goes to sMVBA (CCS'22), with $10\delta$ expected latency. Our positive contributions are two new and complementary designs.
$\bullet$ 2PAC (2-phase asynchronous consensus). It has a simpler and lighter chaining than in previous approaches. Instantiated with either quadratic or cubic phases of voting, it yields:
2PAC$^\text{lean}$: $+90\%$ throughput and $9.5\delta$ expected latency, with quadratic ($O(n^2)$) message complexity. In both 2-chain VABA and sMVBA (as if chained, with pipelining), the quorum-certified transactions which were produced in the worst-case 1/3 of views with a slow leader were dumped, so the work was lost. The simpler design of 2PAC inserts such blocks in straight-line in the chain. Thus, contrary to naive uncle-referencing, this comes with no computational overhead, yielding a net $+50\%$ throughput gain over chained sMVBA. Both the remaining throughput and latency ($-0.5\delta$) gains come from the lighter interactive construction of proofs of consistency appended to proposed blocks, compared to sMVBA.
2PAC$^\text{BIG}$: the fastest asynchronous blockchain consensus with cubic ($O(n^3)$) message complexity. Fault-free single shot MVBA runs decide in just $4\delta$, as soon as no message is delivered more than twice faster than others; GradedDAG (SRDS'23) furthermore required no message reordering.
$\bullet$ Super Fast Pipelined Blocks. This is an upgrade of previous approaches for pipelining: in 2-chain VABA, Cordial Miners (DISC'23) and GradedDAG, a block pipelined by a leader in the middle of the view had almost twice the latency of the non-pipelined block. Our design provides a fast path deciding the pipelined block with even smaller latency than the non-pipelined block. The fast delay is guaranteed in all executions with a fair scheduler and, remarkably, regardless of the behavior of faulty processes. Consistency is preserved by a lightweight mechanism, of one threshold signature appended per proposal. Instantiated with the previous protocols, it yields: s2PAC$^\text{lean}$, with fast decision of pipelined blocks in $4\delta$; s2PAC$^\text{BIG}$, in $3\delta$; and sGradedDAG, in $3\delta$.
Last updated:  2024-07-08
MUSEN: Aggregatable Key-Evolving Verifiable Random Functions and Applications
Bernardo David, Rafael Dowsley, Anders Konring, and Mario Larangeira
A Verifiable Random Function (VRF) can be evaluated on an input by a prover who holds a secret key, generating a pseudorandom output and a proof of output validity that can be verified using the corresponding public key. VRFs are a central building block of committee election mechanisms that sample parties to execute tasks in cryptographic protocols, e.g. generating blocks in a Proof-of-Stake (PoS) blockchain or executing a round of MPC protocols. We propose the notion, and a matching construction, of an Aggregatable Key-Evolving VRF (A-KE-VRF) with the following extra properties: 1. Aggregation: combining proofs for several VRF evaluations of different inputs under different secret keys into a single constant size proof; 2. Key-Evolving: preventing adversaries who corrupt a party (learning their secret key) from ``forging'' proofs of past VRF evaluations. As an immediate application, we improve on the block size of PoS blockchains and on the efficiency of Proofs of Proof-of-Stake (PoPoS). Furthermore, the A-KE-VRF notion allows us to construct Encryption to the Future (EtF) and Authentication from the Past (AfP) schemes with a Key-Evolving property, which provides forward security. An EtF scheme allows for sending a message to a party who is randomly selected to execute a role in the future, while an AfP scheme allows for this party to authenticate their messages as coming from a past execution of this role. These primitives are essential for realizing the YOSO MPC Framework (CRYPTO'21).
Last updated:  2024-07-08
Squirrel: A Scalable Secure Two-Party Computation Framework for Training Gradient Boosting Decision Tree
Wen-jie Lu, Zhicong Huang, Qizhi Zhang, Yuchen Wang, and Cheng Hong
Gradient Boosting Decision Tree (GBDT) and its variants are widely used in industry, due to their strong interpretability. Secure multi-party computation allows multiple data owners to compute a function jointly while keeping their input private. In this work, we present Squirrel, a two-party GBDT training framework on a vertically split dataset, where two data owners each hold different features of the same data samples. Squirrel is private against semi-honest adversaries, and no sensitive intermediate information is revealed during the training process. Squirrel is also scalable to datasets with millions of samples even under a Wide Area Network (WAN). Squirrel achieves its high performance via several novel co-designs of the GBDT algorithms and advanced cryptography. In particular, 1) we propose a new and efficient mechanism to hide the sample distribution on each node using oblivious transfer. 2) We propose a highly optimized method for gradient aggregation using lattice-based homomorphic encryption (HE). Our empirical results show that our method can be three orders of magnitude faster than the existing HE approaches. 3) We propose a novel protocol to evaluate the sigmoid function on secretly shared values, showing 19×-200× improvements over two existing methods. Combining all these improvements, Squirrel costs less than 6 seconds per tree on a dataset with 50 thousand samples, which outperforms Pivot (VLDB 2020) by more than 28×. We also show that Squirrel can scale up to datasets with more than one million samples, e.g., about 170 seconds per tree over a WAN.
Last updated:  2024-07-08
Bringing Order to Chaos: The Case of Collision-Resistant Chameleon-Hashes
David Derler, Kai Samelin, and Daniel Slamanig
Chameleon-hash functions, introduced by Krawczyk and Rabin at NDSS 2000, are trapdoor collision-resistant hash-functions parametrized by a public key. If the corresponding secret key is known, arbitrary collisions for the hash function can be efficiently found. Chameleon-hash functions have prominent applications in the design of cryptographic primitives, such as lifting non-adaptively secure signatures to adaptively secure ones. Recently, this primitive also received a lot of attention as a building block in more complex cryptographic applications ranging from editable blockchains to advanced signature and encryption schemes. We observe that in the latter applications various notions of collision-resistance are used, and it is not always clear if the respective notion really covers what seems intuitively required by the application. Therefore, we revisit existing collision-resistance notions in the literature, study their relations, and - using the example of the recent redactable blockchain proposals - discuss which practical impact different notions of collision-resistance might have. Moreover, we provide a stronger, and arguably more desirable, notion of collision-resistance than what is known from the literature. Finally, we present a surprisingly simple and efficient black-box construction of chameleon-hash functions achieving this strong notion.
Last updated:  2024-07-08
BumbleBee: Secure Two-party Inference Framework for Large Transformers
Wen-jie Lu, Zhicong Huang, Zhen Gu, Jingyu Li, Jian Liu, Cheng Hong, Kui Ren, Tao Wei, and WenGuang Chen
Large transformer-based models have realized state-of-the-art performance on many real-world tasks such as natural language processing and computer vision. However, with the increasing sensitivity of the data and tasks they handle, privacy has become a major concern during model deployment. In this work, we focus on private inference in two-party settings, where one party holds private inputs and the other holds the model. We introduce BumbleBee, a fast and communication-friendly two-party private transformer inference system. Our contributions are three-fold: First, we propose optimized protocols for matrix multiplication, which significantly reduce communication costs by 80%-90% compared to previous techniques. Secondly, we develop a methodology for constructing efficient protocols tailored to the non-linear activation functions employed in transformer models. The proposed activation protocols have realized a significant enhancement in processing speed, alongside a remarkable reduction in communication costs by 80%-95% compared with two prior methods. Lastly, we have performed extensive benchmarks on five transformer models. BumbleBee demonstrates its capability by evaluating the LLaMA-7B model, generating one token in approximately 14 minutes using CPUs. Our results further reveal that BumbleBee outperforms Iron (NeurIPS22) by over an order of magnitude and is three times faster than BOLT (Oakland24) with one-tenth the communication.
Last updated:  2024-07-08
Consolidated Linear Masking (CLM): Generalized Randomized Isomorphic Representations, Powerful Degrees of Freedom and Low(er)-cost
Itamar Levi and Osnat Keren
Masking is a widely adopted countermeasure against side-channel analysis (SCA) that protects cryptographic implementations from information leakage. However, current masking schemes often incur significant overhead in terms of electronic cost. RAMBAM, a recently proposed masking technique that fits elegantly with the AES algorithm, offers ultra-low latency/area by utilizing redundant representations of finite field elements. This paper presents a comprehensive generalization of RAMBAM and various other masking schemes within a unified framework and a mathematical representation known as Consolidated Linear Masking (CLM), where masking schemes are formalized by their encoding. We establish a theoretical foundation for CLM linking randomized isomorphic (code) representations and the entropy provided by the redundancy to a revised notion of masking order. Our analysis reveals that RAMBAM is a specific instance of CLM, as are other masking constructions, thus paving the way for significant enhancements. For example, a $1^{st}$-order secure design can be achieved almost without increasing the size of the representation of the variables. This property scales up to any order and is versatile. We demonstrate how CLM enables: (1) randomized selection of the isomorphic field for improved security; (2) flexible choice of the randomization polynomial; (3) embedded mask-refreshing via the randomized isomorphic representation that reduces randomness requirements significantly as well as improves performance; (4) a wider range of isomorphic randomized mappings that significantly increases the available randomization space compared to RAMBAM; (5) considerable improvement in security against fault-injection attacks and inherent security against probing adversaries, i.e., more probes are required. In addition, our framework addresses ways to improve the brute-force parameter choices in the original RAMBAM. By offering a unifying theoretical perspective for masking and practical enhancements, this work advances the design of efficient and secure masking countermeasures against SCA threats.
Last updated:  2024-07-07
Practical Non-interactive Multi-signatures, and a Multi-to-Aggregate Signatures Compiler
Matthieu Rambaud and Christophe Levrat
In a fully non-interactive multi-signature, resp. aggregate-signature scheme (fNIM, resp. fNIA), signatures issued by many signers on the same message, resp. on different messages, can be succinctly ``combined'', resp. ``aggregated''. fNIMs are used in the Ethereum consensus protocol to produce the certificates of validity of blocks which are to be verified by billions of clients. fNIAs are used in some PBFT-like consensus protocols, such as the production version of Diem by Aptos, to replace the forwarding of many signatures by a new leader. In this work we address three complexity bottlenecks. (i) fNIAs are costlier than fNIMs: e.g., we observe that verifying a 3000-wise aggregate signature of BGLS (Eurocrypt'03) takes 300x longer than verifying a 3000-wise pairing-based multisignature. (ii) fNIMs impose that each verifier processes the setup published by the group of potential signers. This processing consists either in verifying proofs of possession (PoPs), as in Pixel (Usenix'20) and in the IETF'22 draft inherited from Ristenpart-Yilek (Eurocrypt'07), which costs a product of pairings over all published keys, or in re-randomizing the keys, as in SMSKR (FC'24). (iii) Existing proven security bounds on efficient fNIMs do not give any guarantee in practical curves with 256-bit groups, such as BLS12-381 (used in Ethereum) or BLS12-377 (used in Zexe). Thus, computing in much larger curves is required to have provable guarantees. Our first contribution is a new fNIM called $\mathsf{dms}$, which addresses both (ii) and (iii). It is as simple as adding Schnorr PoPs to the schoolbook pairing-based fNIM of Boldyreva (PKC'03). (ii) For a group of 1000 signers, verification of these PoPs is: $5+$ times faster than for the previous pairing-based PoPs; and $3+$ times faster than the Verifier's processing of the setup in SMSKR (and contrary to the latter, need not be re-started when a new member joins the group). (iii) We prove a tight reduction to the discrete logarithm (DL), in the algebraic group model (AGM). Given the current estimation of roughly 128 bits of security for the DL in both the curves BLS12-381 and BLS12-377, we deduce a probability of forgery of $\mathsf{dms}$ no higher than about $2^{-93}$ for a time $2^{80}$ adversary. This reduction is our main technical contribution. The only related proof before was for an interactive Schnorr-based multi-signature scheme, using Schnorr PoPs. Our approach easily fills a gap in this proof, since we take into account that the adversary has access to a signing oracle even before publishing its PoPs. But in our context of pairing-based multi-signatures, extraction of the keys of the adversary is significantly more complicated, since the signing oracle produces a correlated random string. We finally provide another application of $\mathsf{dms}$, which is that it can be plugged into recent threshold signatures without setup (presented by Das et al. at CCS'23 and Garg et al. at SP'24), since these schemes implicitly build on any arbitrary BLS-based fNIM. Our second contribution addresses (i): a very simple compiler, $\mathcal{M}to\mathcal{A}$ (multi-to-aggregate). It turns any fNIM into an fNIA, suitable for aggregation of signatures on messages with a prefix in common, with the restriction that a signer must not sign twice using the same prefix. The resulting fNIA is post-quantum secure as soon as the fNIM is, such as Chipmunk (CCS'23).
We demonstrate the relevance for Diem by applying $\mathcal{M}to\mathcal{A}$ to $\mathsf{dms}$: the resulting fNIA verifies an aggregate of 129 signatures, over messages with $7$-bit variable parts, 39x faster than BGLS.
Last updated:  2024-07-07
Masked Vector Sampling for HQC
Maxime Spyropoulos, David Vigilant, Fabrice Perion, Renaud Pacalet, and Laurent Sauvage
Anticipating the advent of large quantum computers, NIST started a worldwide competition in 2016 aiming to define the next cryptographic standards. HQC is one of these post-quantum schemes still in contention, with four others already in the process of being standardized. In 2022, Guo et al. introduced a timing attack that exploited an inconsistency in HQC's rejection sampling function to recover its secret key in 866,000 calls to an oracle. The authors of HQC updated its specification by applying an algorithm to sample vectors in constant time. A masked implementation of this function was then proposed for BIKE, but it is not directly applicable to HQC. In this paper we propose a masked specification-compliant version of HQC's vector sampling function which relies, to our knowledge, on the first masked implementation of the Barrett reduction.
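For reference, the classical (unmasked) Barrett reduction that such a masked implementation builds on is sketched below in Python; the modulus and bit-width are illustrative, and the paper's masked, side-channel-protected variant is not shown.

```python
# Classical Barrett reduction: reduce x mod q without a runtime division,
# using the precomputed constant mu = floor(2^(2k) / q) for q < 2^k.
def barrett_setup(q: int, k: int) -> int:
    return (1 << (2 * k)) // q

def barrett_reduce(x: int, q: int, k: int, mu: int) -> int:
    """Reduce 0 <= x < 2^(2k) modulo q."""
    t = (x * mu) >> (2 * k)        # approximate quotient
    r = x - t * q                  # 0 <= r < 2q
    if r >= q:                     # at most one correction needed
        r -= q
    return r

q, k = 12289, 14                   # illustrative 14-bit prime modulus
mu = barrett_setup(q, k)
for x in [0, 1, q - 1, q, 123456, (1 << (2 * k)) - 1]:
    assert barrett_reduce(x, q, k, mu) == x % q
```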
Last updated:  2024-07-07
Improved Alternating-Moduli PRFs and Post-Quantum Signatures
Navid Alamati, Guru-Vamsi Policharla, Srinivasan Raghuraman, and Peter Rindal
We revisit the alternating moduli paradigm for constructing symmetric key primitives with a focus on constructing highly efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating moduli paradigm of Boneh et al. (TCC 2018) enables the construction of various symmetric key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli, first over $\mathbb{F}_2$ and then over $\mathbb{F}_3$. The first contribution focuses on efficient two-party evaluation of alternating moduli PRFs, effectively building an oblivious pseudorandom function. We present a generalization of the PRF proposed by Boneh et al. (TCC 2018) along with methods to lower the communication and computation. We then provide several variants of our protocols, with different computation and communication tradeoffs, for evaluating the PRF. Most are in the OT/VOLE hybrid model while one is based on specialized garbling. Our most efficient protocol is effectively about $3\times$ faster and requires $1.3\times$ less communication. Our next contribution is the efficient evaluation of the OWF $f(x)=B\cdot_3 (A\cdot_2 x)$ proposed by Dinur et al. (CRYPTO 21) where $A \in \mathbb{F}^{m\times n}_2, B\in\mathbb{F}^{t\times m}_3$ and $\cdot_p$ is multiplication mod $p$. This surprisingly simple OWF can be evaluated within MPC by secret sharing $[\hspace{-3px}[x]\hspace{-3px}]$ over $\mathbb{F}_2$, locally computing $[\hspace{-3px}[v]\hspace{-3px}]=A\cdot_2 [\hspace{-3px}[x]\hspace{-3px}]$, performing a modulus switching protocol to $\mathbb{F}_3$ shares, followed by locally computing the output shares $[\hspace{-3px}[y]\hspace{-3px}]=B\cdot_3 [\hspace{-3px}[v]\hspace{-3px}]$. We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the OWF, achieving state-of-the-art performance. The resulting signature has a size ranging from 4.0-5.5 KB, achieving a $2\text{-}3\times$ reduction compared to Dinur et al. To the best of our knowledge, this is only $\approx 5\%$ larger than the smallest signature based on symmetric key primitives, including the latest NIST PQC competition submissions. We additionally show that our core techniques can be extended to build very small post-quantum ring signatures for small-medium sized rings that are competitive with state-of-the-art lattice based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH.
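The alternating-moduli OWF $f(x)=B\cdot_3 (A\cdot_2 x)$ from the abstract is easy to evaluate in the clear; the Python sketch below does exactly that with random matrices and illustrative dimensions. The MPC evaluation (secret sharing, modulus switching) and the MPCitH signature are not shown.

```python
# Plaintext evaluation of f(x) = B (A x mod 2) mod 3.
# A, B and the dimensions n, m, t below are random / illustrative.
import secrets

n, m, t = 16, 24, 12

A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]  # over F_2
B = [[secrets.randbelow(3) for _ in range(m)] for _ in range(t)]  # over F_3

def owf(x):
    assert len(x) == n and all(b in (0, 1) for b in x)
    v = [sum(a * b for a, b in zip(row, x)) % 2 for row in A]     # A ._2 x
    y = [sum(a * b for a, b in zip(row, v)) % 3 for row in B]     # B ._3 v
    return y

x = [secrets.randbelow(2) for _ in range(n)]
print(owf(x))
```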
Last updated:  2024-07-07
A New CRT-based Fully Homomorphic Encryption
Anil Kumar Pradhan
We propose a novel FHE scheme that uniquely encodes the plaintext with noise in a way that prevents the increasing noise from overflowing and corrupting the plaintext. This allows users to perform computations on encrypted data smoothly. The scheme is constructed using the Chinese Remainder Theorem (CRT), supporting a predefined number of modular operations on encrypted plaintext without the need for bootstrapping. Although FHE recently became popular after Gentry's work and various developments have occurred in the last decade, the idea of a "Fully Homomorphic Encryption (FHE)" scheme was first introduced in the 1970s by Rivest. The Chinese Remainder Theorem is one of the most suitable tools for developing an FHE scheme because it yields a ring isomorphism \( Z_{p_1} \times Z_{p_2} \times \ldots \times Z_{p_k} \cong Z_{p_1 p_2 \ldots p_k} \). Various attempts have been made to develop an FHE scheme using CRT, but most of them were unsuccessful, mainly due to the chosen plaintext attack (CPA). The proposed scheme overcomes the chosen plaintext attack. The scheme also adds random errors to the message during encryption. However, these errors are added in such a way that, when homomorphic operations are performed over encrypted data, the increasing values of errors never overwrite the values of the messages, as happens in LWE-based homomorphic schemes. Therefore, one can perform a predefined number of homomorphic operations (both addition and multiplication) without worrying about the increasing values of errors.
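The CRT ring isomorphism the abstract refers to is illustrated by the Python sketch below: componentwise addition and multiplication of residue vectors agree with arithmetic modulo the product of the moduli. This shows only the underlying algebra with illustrative moduli, not the proposed encryption scheme or its noise handling.

```python
# CRT isomorphism Z_{p1} x ... x Z_{pk} ~ Z_{p1...pk}: componentwise
# add/mul on residues matches add/mul mod the product N.
from math import prod

primes = [5, 7, 11]                      # illustrative pairwise-coprime moduli
N = prod(primes)                         # 385

def to_crt(x):
    return [x % p for p in primes]

def from_crt(residues):                  # CRT reconstruction
    x = 0
    for r, p in zip(residues, primes):
        Np = N // p
        x += r * Np * pow(Np, -1, p)     # Np^{-1} mod p (Python 3.8+)
    return x % N

a, b = 123, 200
ca, cb = to_crt(a), to_crt(b)
c_add = [(x + y) % p for x, y, p in zip(ca, cb, primes)]
c_mul = [(x * y) % p for x, y, p in zip(ca, cb, primes)]
assert from_crt(c_add) == (a + b) % N
assert from_crt(c_mul) == (a * b) % N
```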