Papers updated in last 7 days (76 results)

Last updated:  2024-10-25
Update to the Sca25519 Library: Mitigating Tearing-based Side-channel Attacks
Lukasz Chmielewski and Lubomír Hrbáček
This short note describes an update to the sca25519 library, an ECC implementation computing the X25519 key-exchange protocol on the Arm Cortex-M4 microcontroller. The sca25519 software came with extensive mitigations against various side-channel and fault attacks and was, to the best of our knowledge, the first to claim affordable protection against multiple classes of attacks that are motivated by distinct real-world application scenarios. This library is protected against various passive and active side-channel threats. However, both classes of attacks were considered separately, i.e., combining the attacks was considered out of scope because, to successfully execute such a combined attack, the adversary would need to be very powerful (e.g., a very well-equipped security laboratory). Protection against such powerful adversaries is considered infeasible without dedicated protected hardware, with which the Arm Cortex-M4 is not equipped. However, there exists a particular class of easy and cheap active attacks: tearing attacks, which are well known in the smartcard context. In this note, we extend the scope of the library to also consider a combination of tearing and side-channel attacks, and we show how such a combination can be mitigated by a small code update. The update does not affect the efficiency of the library.
Last updated:  2024-10-24
Efficiently-Thresholdizable Batched Identity Based Encryption, with Applications
Amit Agarwal, Rex Fernando, and Benny Pinkas
We propose a new cryptographic primitive called "batched identity-based encryption" (Batched IBE) and its thresholdized version. The new primitive allows encrypting messages with specific identities and batch labels, where the latter can represent, for example, a block number on a blockchain. Given an arbitrary subset of identities for a particular batch, our primitive enables efficient issuance of a single decryption key that can be used to decrypt all ciphertexts having identities that are included in the subset while preserving the privacy of all ciphertexts having identities that are excluded from the subset. At the heart of our construction is a new technique that enables public aggregation (i.e. without knowledge of any secrets) of any subset of identities, into a succinct digest. This digest is used to derive, via a master secret key, a single succinct decryption key for all the identities that were digested in this batch. In a threshold system, where the master key is distributed as secret shares among multiple authorities, our method significantly reduces the communication (and in some cases, computation) overhead for the authorities. It achieves this by making their costs for key issuance independent of the batch size. We present a concrete instantiation of a Batched IBE scheme based on the KZG polynomial commitment scheme by Kate et al. (Asiacrypt'10) and a modified form of the BLS signature scheme by Boneh et al. (Asiacrypt'01). The construction is proven secure in the generic group model (GGM). In a blockchain setting, the new construction can be used for achieving mempool privacy by encrypting transactions to a block, opening only the transactions included in a given block and hiding the transactions that are not included in it. With the thresholdized version, multiple authorities (validators) can collaboratively manage the decryption process. Other possible applications include scalable support via blockchain for fairness of dishonest majority MPC, and conditional batched threshold decryption that can be used for implementing secure Dutch auctions and privacy preserving options trading.
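As an illustration of the aggregation idea described in this abstract, the toy Python sketch below folds a subset of identities into the coefficients of a vanishing polynomial over a prime field and tests membership by evaluation; the succinctness of the real digest comes from committing to such data with KZG and pairings, which is omitted here, and the modulus, hash and identity strings are illustrative assumptions rather than anything from the paper.

    # Toy sketch: publicly aggregating a set of identities into the polynomial
    # Z(X) = prod_i (X - id_i) over F_p and testing membership by evaluation.
    # Illustrative only; the paper's succinct digest is a pairing-based (KZG)
    # commitment to such data, which is not reproduced here.
    import hashlib

    P = 2**127 - 1  # a toy prime modulus, not a pairing-friendly group order

    def id_to_field(identity: str) -> int:
        """Hash an identity string to a field element."""
        return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % P

    def aggregate(identities: list[str]) -> list[int]:
        """Coefficients of Z(X) = prod_i (X - id_i) over F_p, low degree first."""
        coeffs = [1]  # the constant polynomial 1
        for ident in identities:
            a = id_to_field(ident)
            new = [0] * (len(coeffs) + 1)
            for i, c in enumerate(coeffs):   # multiply by (X - a)
                new[i + 1] = (new[i + 1] + c) % P
                new[i] = (new[i] - c * a) % P
            coeffs = new
        return coeffs

    def contains(poly: list[int], identity: str) -> bool:
        """An identity is in the batch iff Z(id) = 0."""
        x = id_to_field(identity)
        acc = 0
        for c in reversed(poly):             # Horner evaluation
            acc = (acc * x + c) % P
        return acc == 0

    digest = aggregate(["alice@example.com", "bob@example.com"])
    assert contains(digest, "alice@example.com")
    assert not contains(digest, "carol@example.com")
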
Last updated:  2024-10-24
The Learning Stabilizers with Noise problem
Alexander Poremba, Yihui Quek, and Peter Shor
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory. Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
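For reference, the classical LPN problem that LSN generalizes can be stated in a few lines of code: the sketch below (a hedged, toy-sized illustration, not the paper's quantum setting) samples noisy inner products of random vectors with a secret over GF(2) and recovers the secret by exhaustive search.

    # Toy LPN instance: samples (a_i, <a_i, s> + e_i mod 2) with Bernoulli noise,
    # then recovers s by brute force. Parameters are far below cryptographic size.
    import random
    from itertools import product

    n, m, tau = 8, 64, 0.1          # secret length, number of samples, noise rate
    secret = [random.randint(0, 1) for _ in range(n)]

    def sample():
        a = [random.randint(0, 1) for _ in range(n)]
        noise = 1 if random.random() < tau else 0
        b = (sum(x * y for x, y in zip(a, secret)) + noise) % 2
        return a, b

    samples = [sample() for _ in range(m)]

    def disagreements(candidate):
        """Number of samples the candidate secret fails to explain."""
        return sum((sum(x * y for x, y in zip(a, candidate)) - b) % 2
                   for a, b in samples)

    # Exhaustive search over all 2^n candidates; feasible only because n is tiny.
    best = min(product([0, 1], repeat=n), key=lambda c: disagreements(c))
    print("recovered:", list(best))
    print("true key :", secret)
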
Last updated:  2024-10-24
How (not) to hash into class groups of imaginary quadratic fields?
István András Seres, Péter Burcsi, and Péter Kutas
Class groups of imaginary quadratic fields (class groups for short) have seen a resurgence in cryptography as transparent groups of unknown order. They are a prime candidate for being a trustless alternative to RSA groups because class groups do not need a (distributed) trusted setup to sample a cryptographically secure group of unknown order. Class groups have recently found many applications in verifiable secret sharing, secure multiparty computation, transparent polynomial commitments, and perhaps most importantly, in time-based cryptography, i.e., verifiable delay functions, (homomorphic) time-lock puzzles, timed commitments, etc. However, there are various roadblocks to making class groups widespread in practical cryptographic deployments. We initiate the rigorous study of hashing into class groups. Specifically, we want to sample a uniformly distributed group element in a class group such that nobody knows its discrete logarithm with respect to any public parameter. We point out several flawed algorithms in numerous publicly available class group libraries. We further illustrate the insecurity of these hash functions by showing concrete attacks against cryptographic protocols, i.e., verifiable delay functions, if they were deployed with one of those broken hash-to-class group functions. We propose two families of cryptographically secure hash functions into class groups. We implement these constructions and evaluate their performance. We release our implementation as an open-source library.
Last updated:  2024-10-24
OpenNTT: An Automated Toolchain for Compiling High-Performance NTT Accelerators in FHE
Florian Krieger, Florian Hirner, Ahmet Can Mert, and Sujoy Sinha Roy
Modern cryptographic techniques such as fully homomorphic encryption (FHE) have recently gained broad attention. Most of these cryptosystems rely on lattice problems wherein polynomial multiplication forms the computational bottleneck. A popular method to accelerate these polynomial multiplications is the Number-Theoretic Transform (NTT). Recent works aim to improve the practical deployability of NTT and propose toolchains supporting the NTT hardware accelerator design processes. However, existing design tools do not provide on-the-fly twiddle factor generation (TFG), which leads to high memory demands. Inspired by this situation, we present OpenNTT, a fully automated, open-source framework to compile NTT hardware accelerators with TFG for various NTT types and parameter sets. We address the challenge of combining conflict-free memory accesses and efficient, linear twiddle factor generation through a dedicated NTT processing order. Following this order, we develop a flexible twiddle factor generation method with minimal memory usage. These core concepts together with a frequency-optimized hardware architecture form our OpenNTT framework. We use OpenNTT to compile and test NTT hardware designs with various parameter sets on FPGAs. The obtained results show a clear memory reduction due to TFG and a speedup by 2.7× in latency and 2.2× in area-time-product, compared to prior art.
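For readers unfamiliar with the transform being accelerated, the following is a textbook iterative radix-2 NTT in Python with twiddle factors generated on the fly from a single root of unity rather than read from a precomputed table; the modulus, size and generator are illustrative and unrelated to the hardware parameter sets OpenNTT compiles.

    # Textbook iterative radix-2 NTT over Z_q, with per-stage twiddles generated
    # on the fly from one n-th root of unity instead of a stored table.
    q = 257                              # prime with n | q - 1
    n = 16
    g = 3                                # a primitive root modulo 257
    omega = pow(g, (q - 1) // n, q)      # primitive n-th root of unity

    def bit_reverse(a):
        bits = n.bit_length() - 1
        out = list(a)
        for i in range(n):
            j = int(format(i, f"0{bits}b")[::-1], 2)
            if i < j:
                out[i], out[j] = out[j], out[i]
        return out

    def ntt(a, root):
        a = bit_reverse(a)
        length = 2
        while length <= n:
            w_m = pow(root, n // length, q)     # twiddle base for this stage
            for start in range(0, n, length):
                w = 1                           # twiddles generated on the fly
                for j in range(length // 2):
                    u = a[start + j]
                    v = a[start + j + length // 2] * w % q
                    a[start + j] = (u + v) % q
                    a[start + j + length // 2] = (u - v) % q
                    w = w * w_m % q
            length *= 2
        return a

    def intt(a):
        out = ntt(a, pow(omega, q - 2, q))      # inverse root = omega^{-1}
        n_inv = pow(n, q - 2, q)
        return [x * n_inv % q for x in out]

    x = list(range(n))
    assert intt(ntt(list(x), omega)) == x       # round-trip check
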
Last updated:  2024-10-24
Provably Robust Watermarks for Open-Source Language Models
Miranda Christ, Sam Gunn, Tal Malkin, and Mariana Raykova
The recent explosion of high-quality language models has necessitated new methods for identifying AI-generated text. Watermarking is a leading solution and could prove to be an essential tool in the age of generative AI. Existing approaches embed watermarks at inference and crucially rely on the large language model (LLM) specification and parameters being secret, which makes them inapplicable to the open-source setting. In this work, we introduce the first watermarking scheme for open-source LLMs. Our scheme works by modifying the parameters of the model, but the watermark can be detected from just the outputs of the model. Perhaps surprisingly, we prove that our watermarks are unremovable under certain assumptions about the adversary's knowledge. To demonstrate the behavior of our construction under concrete parameter instantiations, we present experimental results with OPT-6.7B and OPT-1.3B. We demonstrate robustness to both token substitution and perturbation of the model parameters. We find that the stronger of these attacks, the model-perturbation attack, requires deteriorating the quality score to 0 out of 100 in order to bring the detection rate down to 50%.
Last updated:  2024-10-24
Polylogarithmic Proofs for Multilinears over Binary Towers
Benjamin E. Diamond and Jim Posen
We present a succinct polynomial commitment scheme for multilinears over tiny binary fields, including that with just two elements. To achieve this, we develop two main ideas. Our first adapts Zeilberger, Chen and Fisch's BaseFold ('23) PCS to the binary setting; it uses FRI (ICALP '18)'s lesser-known binary variant, and reveals a new connection between that work and Lin, Chung and Han (FOCS '14)'s additive NTT. We moreover present a novel large-field-to-small-field compiler for polynomial commitment schemes. Using a technique we call "ring-switching", our compiler generically bootstraps any multilinear PCS over a large, power-of-two degree extension field into a further PCS over that field's ground field. The resulting small-field PCS lacks embedding overhead, in that its commitment cost is identical to that of the large-field scheme on each input size (measured in bits). We attain concretely small proofs for enormous binary multilinears, shrinking the proofs of Diamond and Posen ('23) by an order of magnitude.
Last updated:  2024-10-24
CountCrypt: Quantum Cryptography between QCMA and PP
Eli Goldin, Tomoyuki Morimae, Saachi Mutreja, and Takashi Yamakawa
We construct a quantum oracle relative to which $\mathbf{BQP}=\mathbf{QCMA}$ but quantum-computation-classical-communication (QCCC) key exchange, QCCC commitments, and two-round quantum key distribution exist. We also construct an oracle relative to which $\mathbf{BQP}=\mathbf{QMA}$, but quantum lightning (a stronger variant of quantum money) exists. This extends previous work by Kretschmer [Kretschmer, TQC22], which showed that there is a quantum oracle relative to which $\mathbf{BQP}=\mathbf{QMA}$ but pseudorandom state generators (a quantum variant of pseudorandom generators) exist. We also show that QCCC key exchange, QCCC commitments, and two-round quantum key distribution can all be used to build one-way puzzles. One-way puzzles are a version of "quantum samplable" one-wayness and are an intermediate primitive between pseudorandom state generators and EFI pairs, the minimal quantum primitive. In particular, one-way puzzles cannot exist if $\mathbf{BQP}=\mathbf{PP}$. Our results together imply that aside from pseudorandom state generators, there is a large class of quantum cryptographic primitives which can exist even if $\mathbf{BQP} = \mathbf{QCMA}$, but are broken if $\mathbf{BQP} = \mathbf{PP}$. Furthermore, one-way puzzles are a minimal primitive for this class. We denote this class "CountCrypt".
Last updated:  2024-10-24
Lattice-based Broadcast Authenticated Searchable Encryption for Cloud Storage
Yibo Cao, Shiyuan Xu, Xiu-Bo Chen, Gang Xu, Siu-Ming Yiu, and Zongpeng Li
For security, data in the cloud is stored encrypted, and searching encrypted data (without decryption) is a practical and important problem. Public key authenticated encryption with keyword search (PAEKS) enables the retrieval of encrypted data while resisting insider keyword guessing attacks (IKGAs). Most PAEKS schemes only work in the single-receiver model, which severely limits their applicability. To address this concern, broadcast authenticated encryption with keyword search (BAEKS) has been studied to achieve multi-receiver ciphertext search, but to the best of our knowledge, existing BAEKS schemes are not quantum resistant. In this paper, we propose lattice-based BAEKS, the first post-quantum broadcast authenticated encryption with keyword search in the multi-receiver model. In particular, we leverage several lattice sampling algorithms and the rejection sampling technique to construct our BAEKS scheme. We also incorporate the minimal cover set technique and a lattice basis extension algorithm to construct an enhanced version, namely FS-BAEKS, which addresses the secret-key leakage problem. We give a rigorous security analysis of our schemes. The BAEKS and Test algorithms of our scheme are at least 2x and 89x faster, respectively, than the state-of-the-art schemes, making it practical for cloud storage systems.
Last updated:  2024-10-24
Basic Lattice Cryptography: The concepts behind Kyber (ML-KEM) and Dilithium (ML-DSA)
Vadim Lyubashevsky
This tutorial focuses on describing the fundamental mathematical concepts and design decisions used in the two ``main'' lattice schemes standardized by NIST and included in the CNSA 2.0 algorithmic suite: the KEM / encryption scheme CRYSTALS-Kyber (ML-KEM) and the signature scheme CRYSTALS-Dilithium (ML-DSA). In addition, we give the main ideas behind other lattice-based KEMs like Frodo and NTRU.
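To complement the tutorial's summary, here is a minimal Regev-style LWE encryption of a single bit, meant only to illustrate the noisy-inner-product idea underlying schemes such as Kyber; the parameters are toy-sized and insecure, and the sketch ignores the module structure, compression and the Fujisaki-Okamoto transform used by the real schemes.

    # Toy Regev-style LWE encryption of one bit. All parameters are illustrative
    # and insecure; real schemes work over module lattices with carefully chosen
    # moduli, error distributions and compression.
    import random

    q, n = 3329, 16                       # Kyber's modulus, toy dimension
    def small_error():
        return random.randint(-2, 2)      # stand-in for a centered binomial error

    # Key generation: A is public, s is secret, b = A*s + e
    s = [small_error() for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + small_error()) % q
         for i in range(n)]

    def encrypt(bit):
        r = [small_error() for _ in range(n)]
        u = [(sum(A[i][j] * r[i] for i in range(n)) + small_error()) % q
             for j in range(n)]                    # u = A^T r + e1
        v = (sum(b[i] * r[i] for i in range(n)) + small_error()
             + bit * (q // 2)) % q                 # v = <b, r> + e2 + bit*(q/2)
        return u, v

    def decrypt(ct):
        u, v = ct
        d = (v - sum(u[j] * s[j] for j in range(n))) % q
        return 1 if q // 4 < d < 3 * q // 4 else 0  # round to nearest multiple of q/2

    assert decrypt(encrypt(0)) == 0 and decrypt(encrypt(1)) == 1
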
Last updated:  2024-10-24
More Efficient Isogeny Proofs of Knowledge via Canonical Modular Polynomials
Thomas den Hollander, Sören Kleine, Marzio Mula, Daniel Slamanig, and Sebastian A. Spindler
Proving knowledge of a secret isogeny has recently been proposed as a means to generate supersingular elliptic curves of unknown endomorphism ring, but is equally important for cryptographic protocol design as well as for real-world deployments. Recently, Cong, Lai and Levin (ACNS'23) have investigated the use of general-purpose (non-interactive) zero-knowledge proof systems for proving the knowledge of an isogeny of degree $2^k$ between supersingular elliptic curves. In particular, their approach is to model this relation via a sequence of $k$ successive steps of a walk in the supersingular isogeny graph and to show that the respective $j$-invariants are roots of the second modular polynomial. They then arithmetize this relation and show that this approach, when compared to state-of-the-art tailor-made proofs of knowledge by Basso et al. (EUROCRYPT'23), gives a 3-10$\times$ improvement in proof and verification times, with comparable proof sizes. In this paper we ask whether we can further improve the modular polynomial-based approach and generalize its application to primes ${\ell>2}$, as used in some recent isogeny-based constructions. We will answer these questions affirmatively, by designing efficient arithmetizations for each ${\ell \in \{2, 3, 5, 7, 13\}}$ that achieve an improvement over Cong, Lai and Levin of up to 48%. Our main technical tool and source of efficiency gains is to switch from classical modular polynomials to canonical modular polynomials. Adapting the well-known results on the former to the latter polynomials, however, is not straightforward and requires some technical effort. We prove various interesting connections via novel use of resultant theory, and advance the understanding of canonical modular polynomials, which might be of independent interest.
Last updated:  2024-10-24
Embedded Curves and Embedded Families for SNARK-Friendly Curves
Aurore Guillevic and Simon Masson
Based on the CM method for primality testing (ECPP) by Atkin and Morain published in 1993, we present two algorithms: one to generate embedded elliptic curves of SNARK-friendly curves, with a variable discriminant D; and another to generate families (parameterized by polynomials) with a fixed discriminant D. When D = 3 mod 4, it is possible to obtain a prime-order curve, and form a cycle. We apply our technique first to generate more embedded curves like Bandersnatch with BLS12-381, and we propose a plain twist-secure cycle above BLS12-381 with D = 6673027. We also elaborate on the scarcity of Bandersnatch-like CM curves, and show that with our algorithm, finding them is only a question of core-hours. Second, we obtain families of prime-order embedded curves of discriminant D = 3 for BLS and KSS18 curves. Our method obtains families of embedded curves above KSS16 and can work for any KSS family. Our work generalizes the work on Bandersnatch (Masson, Sanso, and Zhang, and Sanso and El Housni).
Last updated:  2024-10-24
Improving and Automating BFV Parameters Selection: An Average-Case Approach
Beatrice Biasioli, Chiara Marcolla, Marco Calderini, and Johannes Mono
The Brakerski/Fan-Vercauteren (BFV) scheme is a state-of-the-art scheme in Fully Homomorphic Encryption based on the Ring Learning with Errors (RLWE) problem. Thus, ciphertexts contain an error that increases with each homomorphic operation and has to stay below a certain threshold for correctness. This can be achieved by setting the ciphertext modulus big enough. On the other hand, a larger ciphertext modulus decreases the level of security and computational efficiency, making parameter selection challenging. Our work aims to improve the bound on the ciphertext modulus, minimizing it. Our main contributions are the following. Primarily, we perform the first average-case analysis of the error growth for the BFV scheme, significantly improving its estimation. For a circuit with a multiplicative depth of only 5, our bounds are up to 25.2 bits tighter than previous analyses and within 1.2 bits of the experimentally observed values. Secondly, we give a general way to bound the ciphertext modulus for correct decryption that allows closed formulas. Finally, we use our theoretical advances and propose the first parameter generation tool for the BFV scheme. Here, we add support for arbitrary but use-case-specific circuits, as well as the ability to generate easy-to-use code snippets, making our theoretical work accessible to both researchers and practitioners.
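A quick way to see why average-case error analysis can yield much tighter modulus bounds than worst-case analysis is to compare the worst-case coefficient bound of one negacyclic product of error polynomials with the sizes that actually occur; the short simulation below does this under simplified, illustrative assumptions (uniform errors, a single multiplication) and is not the paper's analysis of the actual BFV error terms.

    # Simplified illustration of worst-case vs. observed error growth for one
    # negacyclic polynomial multiplication, the basic step driving BFV noise.
    # Uniform errors in [-B, B] are an illustrative stand-in.
    import random

    n, B, trials = 256, 8, 10
    worst_case = n * B * B                # |coeff| <= n * B^2 for any inputs

    def negacyclic_product_max(a, b):
        c = [0] * n
        for i in range(n):
            for j in range(n):
                k = i + j
                if k < n:
                    c[k] += a[i] * b[j]
                else:
                    c[k - n] -= a[i] * b[j]   # reduction modulo X^n + 1
        return max(abs(x) for x in c)

    observed = 0
    for _ in range(trials):
        a = [random.randint(-B, B) for _ in range(n)]
        b = [random.randint(-B, B) for _ in range(n)]
        observed = max(observed, negacyclic_product_max(a, b))

    print(f"worst-case bound: {worst_case}, largest observed: {observed}")
    # The observed maximum is typically an order of magnitude below the bound,
    # which is roughly the slack an average-case analysis can exploit.
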
Last updated:  2024-10-24
$\Sigma$-Check: Compressed $\Sigma$-protocol Theory from Sum-check
Shang Gao, Chen Qian, Tianyu Zheng, Yu Guo, and Bin Xiao
The theory of compressed $\Sigma$-protocols [AC20, ACF21] provides a standardized framework for creating efficient $\Sigma$-protocols. This method involves two main phases: first, amortization, which combines multiple instances that satisfy a homomorphic relation into a single instance; and second, Bulletproofs compression [BBB+18], which minimizes communication overhead to a logarithmic scale during the verification of the combined instance. For high-degree polynomial (non-homomorphic) relations, a circuit-based linearization technique is used to transform each instance into a linear relation, resulting in a protocol with at least linear complexity. In this paper, we provide a direct method to extend the compressed $\Sigma$-protocol theory to polynomial relations, named $\Sigma$-Check. One major contribution of $\Sigma$-Check is to eliminate the linear cost associated with linearization in amortization. To achieve this, we employ a sum-check during this phase to ensure a logarithmic communication cost. To the best of our knowledge, $\Sigma$-Check is the first work to achieve a logarithmic amortization for polynomial relations. Nevertheless, without linearization, the amortized relation may not be linear, which hinders us from using Bulletproofs compression. To overcome this problem, we employ another sum-check during the compression phase to effectively manage high-degree relations. Additionally, we propose several variants of our techniques and adapt them for arithmetic circuit relations. We also demonstrate the practicality of our compressed $\Sigma$-protocol theory through applications such as binary proofs, range proofs, and partial knowledge proofs. Our basic protocols are initially based on the Discrete Logarithm (DL) assumption, and we have further extended these to incorporate the Strong-RSA assumption and the Generalized Discrete Logarithm Representation (GDLR) assumption. Our work expands the scope of compressed $\Sigma$-protocol theory and provides a robust foundation for real-world cryptographic applications.
Last updated:  2024-10-24
Theoretical Approaches to Solving the Shortest Vector Problem in NP-Hard Lattice-Based Cryptography with Post-SUSY Theories of Quantum Gravity in Polynomial Time
Trevor Nestor
The Shortest Vector Problem (SVP) is a cornerstone of lattice-based cryptography, underpinning the security of numerous cryptographic schemes like NTRU. Given its NP-hardness, efficient solutions to SVP have profound implications for both cryptography and computational complexity theory. This paper presents an innovative framework that integrates concepts from quantum gravity, noncommutative geometry, spectral theory, and post-SUSY particle physics to address SVP. By mapping high-dimensional lattice points to spin foam networks and by means of Hamiltonian engineering, it is theoretically possible to devise new algorithms that leverage the interactions topologically protected Majorana fermion particles have with the gravitational field through the spectral action principle to loop through these spinfoam networks where SVP vectors could then be encoded onto the spectrum of the corresponding Dirac operators within the system. We establish a novel approach that leverages post-SUSY physics and theories of quantum gravity to achieve algorithmic speedups beyond those expected by conventional quantum computers. This interdisciplinary methodology not only proposes potential polynomial-time algorithms for SVP but also bridges gaps between theoretical physics and cryptographic applications, providing further insights into the Riemann Hypothesis (RH) and the Hilbert-Polya Conjecture.
Last updated:  2024-10-23
Convex Consensus with Asynchronous Fallback
Andrei Constantinescu, Diana Ghinea, Roger Wattenhofer, and Floris Westermann
Convex Consensus (CC) allows a set of parties to agree on a value $v$ inside the convex hull of their inputs with respect to a predefined abstract convexity notion, even in the presence of byzantine parties. In this work, we focus on achieving CC in the best-of-both-worlds paradigm, i.e., simultaneously tolerating at most $t_s$ corruptions if communication is synchronous, and at most $t_a \leq t_s$ corruptions if it is asynchronous. Our protocol is randomized, which is a requirement under asynchrony, and we prove that it achieves optimal resilience. In the process, we introduce communication primitives tailored to the network-agnostic model. These are a deterministic primitive allowing parties to obtain intersecting views (Gather), and a randomized primitive leading to identical views (Agreement on a Core-Set). Our primitives provide stronger guarantees than previous counterparts, making them of independent interest.
Last updated:  2024-10-23
A graph-theoretic approach to analyzing decoding failures of BIKE
Sarah Arpin, Tyler Raven Billingsley, Daniel Rayor Hast, Jun Bo Lau, Ray Perlner, and Angela Robinson
We present experimental findings on the decoding failure rate (DFR) of BIKE, a fourth-round candidate in the NIST Post-Quantum Standardization process, at the 20-bit security level using graph-theoretic approaches. We select parameters according to BIKE design principles and conduct a series of experiments using Rust to generate significantly more decoding failure instances than in prior work using SageMath. For each decoding failure, we study the internal state of the decoder at each iteration and find that for 97% of decoding failures at block size $r=587$, the decoder reaches a fixed point within 7 iterations. We then consider the corresponding Tanner graphs of each decoding failure instance to determine whether the decoding failures are due to absorbing sets. We find that 81% of decoding failures at $r=587$ were caused by absorbing sets, and of these the majority were $(d,d)$-near codewords.
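For context, a decoding failure in this setting is a run of an iterative bit-flipping decoder that never reaches a zero syndrome; the toy decoder below shows that flip-until-fixed-point loop on a random sparse code, with the matrix, threshold rule and sizes being illustrative stand-ins rather than BIKE's QC-MDPC structure and tuned thresholds.

    # Toy bit-flipping (syndrome) decoder over GF(2). A "decoding failure" here is
    # a run that ends with a nonzero residual syndrome, the kind of event studied
    # in the paper. The random sparse parity-check matrix is illustrative only.
    import random

    n_bits, n_checks, col_weight, err_weight, max_iters = 120, 60, 5, 4, 7
    H = [[0] * n_bits for _ in range(n_checks)]
    for col in range(n_bits):                  # fixed column weight, random rows
        for row in random.sample(range(n_checks), col_weight):
            H[row][col] = 1

    def mul(H, v):
        return [sum(H[r][c] & v[c] for c in range(n_bits)) % 2 for r in range(n_checks)]

    # unknown sparse error and its syndrome (the decoder only sees the syndrome)
    e = [0] * n_bits
    for c in random.sample(range(n_bits), err_weight):
        e[c] = 1
    s = mul(H, e)

    e_hat = [0] * n_bits
    success = False
    for it in range(max_iters):
        residual = [a ^ b for a, b in zip(s, mul(H, e_hat))]
        if not any(residual):
            success = True
            break
        # flip every bit involved in a maximal number of unsatisfied parity checks
        upc = [sum(residual[r] for r in range(n_checks) if H[r][c]) for c in range(n_bits)]
        threshold = max(upc)
        for c in range(n_bits):
            if upc[c] == threshold:
                e_hat[c] ^= 1

    success = success or not any(a ^ b for a, b in zip(s, mul(H, e_hat)))
    if success:
        print("decoded; recovered error equals the true error:", e_hat == e)
    else:
        print("decoding failure: nonzero syndrome after", max_iters, "iterations")
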
Last updated:  2024-10-23
Public-Key Cryptography through the Lens of Monoid Actions
Hart Montgomery and Sikhar Patranabis
We provide a novel view of public-key cryptography by showing full equivalences of certain primitives to "hard" monoid actions. More precisely, we show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show new black-box separation results. Concretely, we obtain the following results: - Two-Party Key Exchange: We show that that any two-party noninteractive key exchange protocol is equivalent to the existence of an abelian monoid action equipped with a natural hardness property, namely (distributional) unpredictability. More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid action with certain commutator-like properties. Rudich (Crypto '91) shows a black-box separation of $k$-round and $(k+1)$-round key exchange for any $k$; we use our generic primitive here to formalize this result and extend it to efficient key exchange protocols (where communication is poly($k$)). - Two-Party Computation: We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $(k+1)$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.
Last updated:  2024-10-23
The Mysteries of LRA: Roots and Progresses in Side-channel Applications
Jiangshan Long, Changhai Ou, Zhu Wang, and Fan Zhang
Evaluation of cryptographic implementations with respect to side channels is nowadays mandated at high security levels. Typically, the evaluation involves four stages: detection, modeling, certification and secret recovery. In pursuit of the specific goal of each stage, inherently different techniques were traditionally considered necessary. However, since the recent works at Eurocrypt 2022 and Eurocrypt 2024, linear regression analysis (LRA) has become the one technique that applies well throughout all the stages. In this paper, we concentrate on this silver-bullet technique within the field of side-channel analysis. First, we address the fundamental problems of why and how to use LRA. A discussion of its nominal and binary nature explains its strong applicability. To sustain effective outcomes, we provide in-depth analyses of the design matrix, regarding the sample distribution of the plaintext and the chosen polynomial degree. We summarize ideal conditions that entirely avoid the multicollinearity problem, and explore the novel evaluator-advantageous property of LRA by means of model diagnosis. Then, we trace its roots, theoretically elaborating its connections with traditional side-channel techniques, including Correlation Power Analysis (CPA), Distance-of-Means analysis (DoM) and Partition Power Analysis (PPA), in terms of regression coefficients, regression model and coefficient of determination. Finally, we probe into the state-of-the-art combined LRA with the so-called collapse function, demonstrating its relationship with another refined technique, G-DoM. We argue that properly relaxing the definition of bit groups equally satisfies our conclusions. Experimental results are in line with the theory, confirming its correctness.
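As a concrete reference point for what LRA computes at the recovery stage, the sketch below fits a bit-level linear model from a predicted intermediate value to simulated power samples and ranks key guesses by the coefficient of determination; the 4-bit PRESENT S-box, Hamming-weight leakage and noise level are common illustrative assumptions, not the paper's setup.

    # Minimal LRA sketch on simulated traces: leakage = HW(Sbox[p ^ k]) + noise,
    # each key guess scored by the R^2 of a linear model over the output bits.
    import random
    import numpy as np

    SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
    true_key, n_traces, sigma = 0xA, 2000, 0.5

    plaintexts = [random.randrange(16) for _ in range(n_traces)]
    traces = np.array([bin(SBOX[p ^ true_key]).count("1") + random.gauss(0, sigma)
                       for p in plaintexts])

    def r_squared(key_guess):
        # design matrix: intercept plus the four bits of the predicted S-box output
        v = [SBOX[p ^ key_guess] for p in plaintexts]
        X = np.array([[1] + [(x >> b) & 1 for b in range(4)] for x in v], dtype=float)
        coeffs, *_ = np.linalg.lstsq(X, traces, rcond=None)
        residuals = traces - X @ coeffs
        ss_res = float(residuals @ residuals)
        ss_tot = float(((traces - traces.mean()) ** 2).sum())
        return 1 - ss_res / ss_tot

    scores = {k: r_squared(k) for k in range(16)}
    best = max(scores, key=scores.get)
    print(f"best key guess: {best:#x} (true key {true_key:#x}), R^2 = {scores[best]:.3f}")
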
Last updated:  2024-10-23
Optimizing Message Range and Ciphertext Storage in GSW Encryption Using CRT and PVW-like Compression Scheme
Kung-Wei Hu, Huan-Chih Wang, and Ja-Ling Wu
This paper explores advancements in the Gentry-Sahai-Waters (GSW) fully homomorphic encryption scheme, addressing challenges related to message data range limitations and ciphertext size constraints. We introduce a novel approach utilizing the Chinese Remainder Theorem (CRT) for message decomposition, significantly expanding the allowable message range to the entire plaintext space. This method enables unrestricted message selection and supports parallel homomorphic operations without intermediate decryption. Additionally, we adapt existing ciphertext compression techniques, such as the PVW-like scheme, to reduce memory overhead associated with ciphertexts. Our experimental results demonstrate the effectiveness of the CRT-based decomposition in increasing the upper bound of message values and improving the scheme's capacity for consecutive homomorphic operations. However, compression introduces a trade-off, necessitating a reduced message range due to error accumulation. This research contributes to enhancing the practicality and efficiency of the GSW encryption scheme for complex computational scenarios while managing the balance between expanded message range, computational complexity, and storage requirements.
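The CRT decomposition this abstract relies on can be shown in a few lines: a large message is split into residues modulo pairwise-coprime moduli, the residues are processed independently (homomorphically in the real scheme), and the result is recombined by the Chinese Remainder Theorem; the moduli below are illustrative and no GSW operations are performed.

    # CRT message decomposition and recombination, the arithmetic idea behind
    # expanding the message range described above. The homomorphic (GSW) layer
    # is intentionally omitted: in the scheme, each residue would be encrypted
    # and operated on in its own ciphertext.
    from math import prod

    moduli = [257, 263, 269, 271]          # pairwise coprime plaintext moduli
    M = prod(moduli)

    def decompose(message):
        assert 0 <= message < M
        return [message % m for m in moduli]

    def recombine(residues):
        """Standard CRT recombination."""
        result = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            result += r * Mi * pow(Mi, -1, m)
        return result % M

    a, b = 123456789, 987654
    # operate residue-wise (in the real scheme: homomorphically, in parallel)
    prod_residues = [(x * y) % m for x, y, m in zip(decompose(a), decompose(b), moduli)]
    assert recombine(prod_residues) == (a * b) % M
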
Last updated:  2024-10-23
One Time Pad and the Short Key Dream
Umberto Cerruti
This is a survey on the One Time Pad (OTP) and its derivatives, from its origins to modern times. OTP, if used correctly, is (the only) cryptographic code that no computing power, present or future, can break. Naturally, the discussion shifts to the creation of long random sequences, starting from short ones, which can be easily shared. We could call it the Short Key Dream. Many problems inevitably arise, which affect many fields of computer science, mathematics and knowledge in general. This work presents a vast bibliography that includes fundamental classical works and current papers on randomness, pseudorandom number generators, compressibility, unpredictability and more.
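For completeness, the one-time pad itself is a one-line XOR, sketched below; the survey's subject is everything around it, namely how to obtain and share the long random key that this toy simply draws from the operating system.

    # The one-time pad in its entirety: XOR with a uniformly random key that is
    # as long as the message and never reused. The hard part discussed in the
    # survey is generating and sharing such keys.
    import secrets

    def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
        key = secrets.token_bytes(len(message))
        ciphertext = bytes(m ^ k for m, k in zip(message, key))
        return key, ciphertext

    def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    key, ct = otp_encrypt(b"attack at dawn")
    assert otp_decrypt(key, ct) == b"attack at dawn"
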
Last updated:  2024-10-23
Keeping Up with the KEMs: Stronger Security Notions for KEMs and automated analysis of KEM-based protocols
Cas Cremers, Alexander Dax, and Niklas Medinger
Key Encapsulation Mechanisms (KEMs) are a critical building block for hybrid encryption and modern security protocols, notably in the post-quantum setting. Given the asymmetric public key of a recipient, the primitive establishes a shared secret key between sender and recipient. In recent years, a large number of abstract designs and concrete implementations of KEMs have been proposed, e.g., in the context of the NIST process for post-quantum primitives. In this work, we (i) establish stronger security notions for KEMs, and (ii) develop a symbolic analysis method to analyze security protocols that use KEMs. First, we generalize existing security notions for KEMs in the computational setting, introduce several stronger security notions, and prove their relations. Our new properties formalize in which sense outputs of the KEM uniquely determine, i.e., bind, other values. Our new binding properties can be used, e.g., to prove the absence of attacks that were not captured by prior security notions, such as re-encapsulation attacks. Second, we develop a family of fine-grained symbolic models that correspond to our hierarchy of computational security notions, and are suitable for the automated analysis of KEM-based security protocols. We encode our models as a library in the framework of the Tamarin prover. Given a KEM-based protocol, our approach can automatically derive the minimal binding properties required from the KEM; or, if also given a concrete KEM, can analyze whether the protocol meets its security goals. In case studies, Tamarin automatically discovers, e.g., that the key exchange protocol proposed in the original Kyber paper requires stronger properties from the KEM than were proven in the paper.
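To make the flavor of a binding property more tangible, the sketch below shows a toy Diffie-Hellman-style KEM whose derived key also hashes in the recipient public key and the ciphertext, a generic way of tying the output key to those values; this is only an informal illustration of the kind of property the paper formalizes, not one of its definitions or Tamarin models, and the group parameters are insecure.

    # Toy KEM whose output key is derived from the raw shared secret *and*
    # (public key, ciphertext), so the key "binds" those values. Parameters are
    # toy-sized and insecure; shown only to illustrate the binding idea.
    import hashlib
    import secrets

    P = 2**127 - 1   # a toy prime; real deployments use a standardized group or a PQ KEM
    G = 3

    def keygen():
        sk = secrets.randbelow(P - 2) + 1
        return sk, pow(G, sk, P)

    def _bind(raw, pk, ct):
        """Derive the output key from the raw secret and (pk, ct)."""
        data = b"|".join(x.to_bytes(16, "big") for x in (raw, pk, ct))
        return hashlib.sha256(data).hexdigest()

    def encapsulate(pk):
        r = secrets.randbelow(P - 2) + 1
        ct = pow(G, r, P)                 # ephemeral value sent to the recipient
        return ct, _bind(pow(pk, r, P), pk, ct)

    def decapsulate(sk, pk, ct):
        return _bind(pow(ct, sk, P), pk, ct)

    sk, pk = keygen()
    ct, key_sender = encapsulate(pk)
    assert decapsulate(sk, pk, ct) == key_sender
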
Last updated:  2024-10-22
Radical 2-isogenies and cryptographic hash functions in dimensions 1, 2 and 3
Sabrina Kunzweiler, Luciano Maino, Tomoki Moriya, Christophe Petit, Giacomo Pope, Damien Robert, Miha Stopar, and Yan Bo Ti
We provide explicit descriptions for radical 2-isogenies in dimensions one, two and three using theta coordinates. These formulas allow us to efficiently navigate in the corresponding isogeny graphs. As an application of this, we implement different versions of the CGL hash function. Notably, the three-dimensional version is fastest, which demonstrates yet another potential of using higher dimensional isogeny graphs in cryptography.
Last updated:  2024-10-22
$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan and Antigoni Polychroniadou
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation ($\mathsf{OPA}$) where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once. We construct $\mathsf{OPA}$ based on LWR, LWE, class groups, DCR and demonstrate applications to privacy-preserving Federated Learning (FL) where clients speak once. This is a sharp departure from prior multi-round FL protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, while also building an MLP classifier to train on MNIST, CIFAR-10, and CIFAR-100 datasets. We build two flavors of $\mathsf{OPA}$: (1) from a (threshold) key-homomorphic PRF and (2) from a seed-homomorphic PRG and secret sharing. Our threshold key-homomorphic PRF addresses shortcomings observed in the previous DDH- and LWR-based constructions of Boneh et al. (CRYPTO, 2013), marking it as an independent contribution of our work. Moreover, we also present new threshold key-homomorphic PRFs based on class groups, DCR, or the LWR assumption.
Last updated:  2024-10-22
Arc: Accumulation for Reed--Solomon Codes
Benedikt Bünz, Pratyush Mishra, Wilson Nguyen, and William Wang
Proof-Carrying Data (PCD) is a foundational tool for ensuring the correctness of incremental distributed computations that has found numerous applications in theory and practice. The state-of-the-art PCD constructions are obtained via accumulation or folding schemes. Unfortunately, almost all known constructions of accumulation schemes rely on homomorphic vector commitments (VCs), which results in relatively high computational costs and insecurity in the face of quantum adversaries. A recent work of Bünz, Mishra, Nguyen, and Wang removes the dependence on homomorphic VCs by relying only on the random oracle model, but introduces a bound on the number of consecutive accumulation steps, which in turn bounds the depth of the PCD computation graph and greatly affects prover and verifier efficiency. In this work, we propose Arc, a novel hash-based accumulation scheme that overcomes this restriction and supports an unbounded number of accumulation steps. The core building block underlying Arc is a new accumulation scheme for claims about proximity of claimed codewords to the Reed--Solomon code. Our approach achieves near-optimal efficiency, requiring a small number of Merkle tree openings relative to the code rate, and avoids the efficiency loss associated with bounded accumulation depth. Unlike prior work, our scheme is also able to accumulate claims up to list-decoding radius, resulting in concrete efficiency improvements. We use this accumulation scheme to construct two distinct accumulation schemes, again relying solely on random oracles. The first approach accumulates RS proximity claims and can be used as an almost-drop-in replacement in existing PCD deployments based on IOP-based SNARKs. The second approach directly constructs an accumulation scheme for rank-1 constraint systems (and more generally polynomial constraint systems) that is simpler and more efficient than the former and prior approaches. We introduce the notion of Interactive Oracle Reductions (IORs) to enable a modular and simple security analysis. These extend prior notions of Reductions of Knowledge to the setting of IOPs.
Last updated:  2024-10-22
AQQUA: Augmenting Quisquis with Auditability
George Papadoulis, Danai Balla, Panagiotis Grontas, and Aris Pagourtzis
We propose AQQUA: a digital payment system that combines auditability and privacy. AQQUA extends Quisquis by adding two authorities: one for registration and one for auditing. These authorities do not intervene in the everyday transaction processing; as a consequence, the decentralized nature of the cryptocurrency is not disturbed. Our construction is account-based. An account consists of an updatable public key which functions as a cryptographically unlinkable pseudonym, and of commitments to the balance, the total amount of coins spent, and the total amount of coins received. In order to participate in the system, a user creates an initial account with the registration authority. To protect their privacy, whenever users want to transact they create unlinkable new accounts by updating their public key and the total number of accounts they own (maintained in committed form). The audit authority may request an audit at will. The user must prove in zero-knowledge that all their accounts are compliant with specific policies. We formally define a security model capturing the properties that a private and auditable digital payment system should possess and we analyze the security of AQQUA under this model.
Last updated:  2024-10-22
Dynamic Decentralized Functional Encryptions from Pairings in the Standard Model
Duy Nguyen
Dynamic Decentralized Functional Encryption (DDFE), introduced by Chotard et al. (CRYPTO'20), represents a robust generalization of (Multi-Client) Functional Encryption. It allows users to dynamically join and contribute private inputs to individually controlled joint functions without requiring a trusted authority. Recently, Shi et al. (PKC'23) proposed the first Multi-Client Functional Encryption scheme for function-hiding inner products (FH-IP) without relying on random oracles. Unfortunately, their construction still requires a trusted key authority, leaving open the question of whether a full-fledged FH-IP-DDFE can exist in the standard model. In this work, we answer this question affirmatively by introducing Updatable Pseudorandom Zero Sharing, a novel concept that provides both the critical functionality and security properties needed to construct secure DDFE schemes in the standard model. Our second contribution is a novel proof strategy, which preserves adaptive security when transforming any functional encryption scheme for FH-IP into FH-IP-DDFE. Together, these two techniques enable a modular construction of FH-IP-DDFE that is secure against adaptive message and key queries in the standard model. Additionally, our pseudorandom zero-sharing scheme is highly versatile, enabling the first DDFE for attribute-weighted sums in the standard model, complementing the recent ROM-based construction by Agrawal et al. (CRYPTO'23).
Last updated:  2024-10-22
Secure and Efficient Outsourced Matrix Multiplication with Homomorphic Encryption
Aikata Aikata and Sujoy Sinha Roy
Fully Homomorphic Encryption (FHE) is a promising privacy-enhancing technique that enables secure and private data processing on untrusted servers, such as privacy-preserving neural network (NN) evaluations. However, its practical application presents significant challenges. Limitations in how data is stored within homomorphic ciphertexts and restrictions on the types of operations that can be performed create computational bottlenecks. As a result, a growing body of research focuses on optimizing existing evaluation techniques for efficient execution in the homomorphic domain. One key operation in this space is matrix multiplication, which forms the foundation of most neural networks. Several studies have even proposed new FHE schemes specifically to accelerate this operation. The optimization of matrix multiplication is also the primary goal of our work. We leverage the Single Instruction Multiple Data (SIMD) capabilities of FHE to increase data packing and significantly reduce the KeySwitch operation count— an expensive low-level routine in homomorphic encryption. By minimizing KeySwitching, we surpass current state-of-the-art solutions, requiring only a minimal multiplicative depth of two. The best-known complexity for matrix multiplication at this depth is $\mathcal{O}(d)$ for matrices of size $d\times d$. Remarkably, even the leading techniques that require a multiplicative depth of three still incur a KeySwitch complexity of $\mathcal{O}(d)$. In contrast, our method reduces this complexity to $\mathcal{O}(\log{d})$ while maintaining the same level of data packing. Our solution broadly applies to all FHE schemes supporting Single Instruction Multiple Data (SIMD) operations. We further generalize the technique in two directions: allowing arbitrary packing availability and extending it to rectangular matrices. This versatile approach offers significant improvements in matrix multiplication performance and enables faster evaluation of privacy-preserving neural network applications.
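For intuition about the SIMD-packing style of computation being optimized, the plaintext simulation below implements the classical diagonal method for a matrix-vector product using only slot-wise multiplications and rotations, the operations FHE packing provides; it is shown for background only and is not the paper's depth-2 matrix-matrix algorithm or its KeySwitch accounting.

    # Plaintext simulation of the classical "diagonal method" for SIMD
    # matrix-vector multiplication: vector packed in slots, matrix packed in
    # wrapped diagonals, product built from rotations and slot-wise products.
    def rotate(v, k):
        """Cyclic left rotation, the analogue of a homomorphic rotation."""
        k %= len(v)
        return v[k:] + v[:k]

    def diag_matvec(M, v):
        d = len(v)
        result = [0] * d
        for k in range(d):
            diag = [M[i][(i + k) % d] for i in range(d)]   # k-th wrapped diagonal
            rot = rotate(v, k)
            result = [r + a * b for r, a, b in zip(result, diag, rot)]
        return result

    M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    v = [1, 0, 2]
    expected = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    assert diag_matvec(M, v) == expected
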
Last updated:  2024-10-22
Spec-o-Scope: Cache Probing at Cache Speed
Gal Horowitz, Eyal Ronen, and Yuval Yarom
Over the last two decades, microarchitectural side channels have been the focus of a large body of research on the development of new attack techniques, exploiting them to attack various classes of targets and designing mitigations. One line of work focuses on increasing the speed of the attacks, achieving higher levels of temporal resolution that can allow attackers to learn finer-grained information. The most recent addition to this line of work is Prime+Scope [CCS '21], which only requires a single access to the L1 cache to confirm the absence of victim activity in a cache set. While significantly faster than prior attacks, Prime+Scope is still an order of magnitude slower than cache access. In this work, we set out to close this gap. We draw on techniques from research into microarchitectural weird gates, software constructs that exploit transient execution to perform arbitrary computation on cache state. We design the Spec-o-Scope gate, a new weird gate that performs 10 cache probes in quick succession, and forms the basis for our eponymous attack. Our Spec-o-Scope attack achieves an order of magnitude improvement in temporal resolution compared to the previous state-of-the-art of Prime+Scope, reducing the measurement time from ~70 cycles to only 5 --- only one cycle more than an L1 cache access. We experimentally verify that our attack can detect timing differences in a 5 cycle resolution. Finally, using our Spec-o-Scope attack, we show the first microarchitectural side-channel attack on an unmodified AES S-box-based implementation, which uses generic CPU features and does not require manipulation of the operating system's scheduler.
Last updated:  2024-10-22
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Debadrita Talapatra, Sikhar Patranabis, and Debdeep Mukhopadhyay
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions. We solve this open question by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
Last updated:  2024-10-22
Relations among new CCA security notions for approximate FHE
Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, and Renaud Sirdey
In a recent Eurocrypt'24 paper, Manulis and Nguyen have proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve their work by defining and investigating a variant of their security notion which is suitable for a more general case where approximate FHE are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our vCCAD new security notion. Although, we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security notions landscape is much simpler for correct FHE than when approximate ones are included --- in which case, for example, we establish that multiple challenges security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we also give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Last updated:  2024-10-22
cuTraNTT: A Novel Transposed Number Theoretic Transform Targeting Low Latency Homomorphic Encryption for IoT Applications
Supriya Adhikary, Wai Kong Lee, Angshuman Karmakar, Yongwoo Lee, Seong Oun Hwang, and Ramachandra Achar
Large polynomial multiplication is one of the computational bottlenecks in fully homomorphic encryption implementations. Usually, these multiplications are implemented using the number-theoretic transform to speed up the computation. State-of-the-art GPU-based implementations of fully homomorphic encryption compute the number-theoretic transform in two different kernels, due to the synchronization between GPU blocks necessary to ensure correctness in computation. This can be a serious limitation in embedded systems that only have constrained computational resources to support the time-consuming homomorphic encryption. In this paper, we propose a series of techniques to improve the performance of the number-theoretic transform targeting homomorphic encryption on a GPU device. First, we propose arranging the polynomials in a transposed manner and skipping the last two levels of the radix-4 number-theoretic transform, allowing us to completely avoid block synchronization in the NTT implementation. This technique improves the performance of homomorphic encryption by 1.37× and 1.34× on RTX 4060 and Jetson Orin Nano respectively, compared to the conventional approach that uses the full NTT without skipping any levels. However, such an approach also introduces extra overhead in the subsequent point-wise multiplication, which slows down homomorphic multiplication. To reduce this negative impact, we propose a fast 16 × 16 point-wise multiplication implementation, which relies on the heavily optimized Toom-Cook 4-way algorithm. Experimental results show that our proposed homomorphic multiplication achieves latency similar to Jung et al. and Yang et al., which are the best results to date. This shows that the proposed cuTraNTT is able to reduce the latency of homomorphic encryption without sacrificing performance in homomorphic multiplication.
Last updated:  2024-10-22
Compact Pseudorandom Functional Encryption from Evasive LWE
Shweta Agrawal, Simran Kumari, and Shota Yamada
We provide the first construction of compact Functional Encryption (FE) for pseudorandom functionalities from the evasive LWE and LWE assumptions. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for every input seen by the adversary. This yields the first compact FE for a nontrivial class of functions which does not rely on pairings. We demonstrate the power of our new tool by using it to achieve optimal parameters for both key-policy and ciphertext-policy Attribute Based Encryption (ABE) schemes for circuits of unbounded depth, from just the LWE and evasive LWE assumptions. This improves prior work along the twin axes of assumptions and performance. In more detail, this allows to: (i) replace the assumption of circular evasive LWE used in the work of Hsieh, Lin and Luo (FOCS 2023) by plain evasive LWE, (ii) remove the need for the circular tensor LWE assumption in the work of Agrawal, Kumari and Yamada (CRYPTO, 2024), (iii) improve parameters obtained by both aforementioned works to achieve asymptotic optimality. Previously, optimal parameters for ABE schemes were only achieved using compact FE for P (Jain, Lin and Luo, Eurocrypt 2023) – we show that compact FE for a much weaker class (albeit with incomparable security) suffices. Thus we obtain the first optimal ABE schemes for unbounded depth circuits which can be conjectured post-quantum secure. Along the way, we define and construct a new primitive which we term laconic pseudorandom obfuscation from the same assumptions – this may be of independent interest.
Last updated:  2024-10-22
On Key Substitution Attacks against Aggregate Signatures and Multi-Signatures
Yuuki Fujita, Yusuke Sakai, Kyosuke Yamashita, and Goichiro Hanaoka
When we use signature schemes in practice, we should sometimes consider security beyond unforgeability. This paper considers security against key substitution attacks of multi-signer signatures (i.e., aggregate signatures and multi-signatures). Intuitively, this security property ensures that a malicious party cannot claim the ownership of a signature that is created by an honest signer. We investigate security against key substitution attacks of a wide range of aggregate signature schemes and multi-signature schemes: the Boneh-Gentry-Lynn-Shacham aggregate signature scheme, the sequential aggregate signature scheme by Lysyanskaya et al., the multi-signature scheme by Bellare and Neven, MuSig2, and the ordered multi-signature scheme by Boldyreva et al. Furthermore, when a scheme does not provide security against key substitution attacks, we modify the scheme so that it becomes secure against such attacks.
Last updated:  2024-10-22
Maximizing the Utility of Cryptographic Setups: Secure PAKEs, with either functional RO or CRS
Yuting Xiao, Rui Zhang, and Hong-Sheng Zhou
For Password-Based Authenticated Key Exchange (PAKE), an idealized setup such as random oracle (RO) or a trusted setup such as common reference string (CRS) is a must in the universal composability (UC) framework (Canetti, FOCS 2001). Given the potential failure of a CRS or RO setup, it is natural to consider distributing trust among the two setups, resulting in a CRS-or-RO setup (i.e., CoR-setup). However, the infeasibility highlighted by Katz et al. (PODC 2014) suggested that it is impossible to construct UC-secure PAKE protocols with a straightforward CoR-setup (i.e., either the CRS is functional but the RO is compromised, or the RO is functional but the CRS is compromised). To circumvent this impossibility result, we investigate how to design UC-secure PAKE protocols with a fine-grained CoR-setup, where either the CRS is functional but the RO is non-functional, or vice versa. Different from the straightforward CoR-setup, a fine-grained non-functional setup is not necessarily completely compromised and fully controlled by the adversary; instead, we consider that this non-functional setup may still offer certain security properties. Certainly, the non-functional setup alone should be useless for achieving UC-security. We present a UC-secure PAKE protocol under two conditions: either the CRS is functional while the RO is non-functional (falling back to a collision-resistant hash function), or the RO is functional while the CRS is non-functional (falling back to a global CRS). Before presenting our construction, we first prove that a global CRS setup alone is insufficient for achieving UC-secure PAKE. This impossibility result highlights the non-triviality of our approach. To obtain our construction, we introduce several techniques as follows: (1) We propose a new variant of Non-Interactive Key Exchange (NIKE), called homomorphic NIKE with associated functions, which captures key properties of existing RO-based PAKE protocols. This new primitive serves as an important component in our construction. (2) We develop a ``Brute Force'' extraction strategy which allows us to provide a security analysis for our UC-secure PAKE with a fine-grained CoR-setup for polynomial-sized password spaces. (3) We introduce a novel password space extension technique that enables the expansion of PAKE protocols from polynomial-sized to arbitrary-sized password spaces. (4) Finally, to ensure provable security for our password space extension in UC-secure PAKEs, we modify existing PAKE functionalities to prevent responses that reveal the correctness of password guesses. This is a reasonable adjustment, as our protocol provides only implicit authentication. We further present a PAKE protocol in the BPR framework (Bellare, Pointcheval, Rogaway, EuroCrypt 2000), assuming either the CRS is functional while the RO falls back to a collision-resistant hash function, or the RO is functional but the CRS trapdoor is allowed to be learned by the adversary.
Last updated:  2024-10-22
Scalable Mixnets from Two-Party Mercurial Signatures on Randomizable Ciphertexts
Masayuki Abe, Masaya Nanri, Miyako Ohkubo, Octavio Perez Kempner, Daniel Slamanig, and Mehdi Tibouchi
A mixnet developed by Hébant et al. (PKC '20) employs certified ciphertexts that carry homomorphic signatures from an authority, reducing the complexity of the shuffling proof, and thereby enabling efficient large-scale deployment. However, their privacy relies on trusting the authority, making it unsuitable for voting, the primary application of mixnets. Building on the prior work, we leverage recent advances in equivalence class signatures by replacing homomorphic signatures with newly developed two-party mercurial signatures on randomizable ciphertexts. This allows users and the authority to jointly sign ciphertexts and randomize keys, ciphertexts, and signatures, all while preserving the embedded messages. We demonstrate that our mixnet is suitable for receipt-free voting without requiring trust in the signing authority for privacy. To assess scalability, we compare our approach to other scalable mixnet solutions, implement our protocols, and provide concrete performance benchmarks. Our results show that our mixnet significantly outperforms existing alternatives in both computation and communication efficiency. Specifically, verifying the mixing process for 50,000 ciphertexts takes just 135 seconds on a commodity laptop using ten mixers, illustrating the practical viability of our approach.
Last updated:  2024-10-22
(Quantum) Indifferentiability and Pre-Computation
Joseph Carolan, Alexander Poremba, and Mark Zhandry
Indifferentiability is a popular cryptographic paradigm for analyzing the security of ideal objects---both in a classical as well as in a quantum world. It is typically stated in the form of a composable and simulation-based definition, and captures what it means for a construction (e.g., a cryptographic hash function) to be ``as good as'' an ideal object (e.g., a random oracle). Despite its strength, indifferentiability is not known to offer security against pre-processing attacks in which the adversary gains access to (classical or quantum) advice that is relevant to the particular construction. In this work, we show that indifferentiability is (generically) insufficient for capturing pre-computation. To remedy this shortcoming, we propose a strengthening of indifferentiability which is not only composable but also takes arbitrary pre-computation into account. As an application, we show that the one-round sponge is indifferentiable (with pre-computation) from a random oracle. This yields the first (and tight) classical/quantum space-time trade-off for one-round sponge inversion.
Last updated:  2024-10-21
Certified Randomness implies Secure Classical Position-Verification
Omar Amer, Kaushik Chakraborty, David Cui, Fatih Kaleoglu, Charles Lim, Minzhao Liu, and Marco Pistoia
Liu et al. (ITCS22) initiated the study of designing a secure position verification protocol based on a specific proof of quantumness protocol and classical communication. In this paper, we study this interesting topic further and answer some of the open questions that are left in that paper. We provide a new generic compiler that can convert any single round proof of quantumness-based certified randomness protocol to a secure classical communication-based position verification scheme. Later, we extend our compiler to different kinds of multi-round proof of quantumness-based certified randomness protocols. Moreover, we instantiate our compiler with a random circuit sampling (RCS)-based certified randomness protocol proposed by Aaronson and Hung (STOC 23). RCS-based techniques are within reach of today's NISQ devices; therefore, our design overcomes the limitation of the Liu et al. protocol that would require a fault-tolerant quantum computer to realize. Moreover, this is one of the first cryptographic applications of RCS-based techniques other than certified randomness.
Last updated:  2024-10-21
PISA: Privacy-Preserving Smart Parking
Sayon Duttagupta and Dave Singelée
In recent years, urban areas have experienced a rapid increase in vehicle numbers, while the availability of parking spaces has remained largely static, leading to a significant shortage of parking spots. This shortage creates considerable inconvenience for drivers and contributes to traffic congestion. A viable solution is the temporary use of private parking spaces by homeowners during their absence, providing a means to alleviate the parking problem and generate additional income for the owners. However, current systems for sharing parking spaces often neglect security and privacy concerns, exposing users to potential risks. This paper presents PISA, a novel Privacy-Preserving Smart Parking scheme designed to address these issues through a cryptographically secure protocol. PISA enables the anonymous sharing of parking spots and allows vehicle owners to park without revealing any personal identifiers. Our primary contributions include the development of a comprehensive bi-directional anonymity framework that ensures neither party can identify the other, and the use of formal verification methods to substantiate the soundness and reliability of our security measures. Unlike existing solutions, which often lack a security focus, fail to provide formal validation, or are computationally intensive, PISA is designed to be both secure and efficient.
Last updated:  2024-10-21
Straight-Line Knowledge Extraction for Multi-Round Protocols
Lior Rotem and Stefano Tessaro
The Fiat-Shamir (FS) transform is the standard approach to compiling interactive proofs into non-interactive ones. However, the fact that knowledge extraction typically requires rewinding limits its applicability without having to rely on further heuristic conjectures. A better alternative is a transform that guarantees straight-line knowledge extraction. Two such transforms were given by Pass (CRYPTO '03) and Fischlin (CRYPTO '05), respectively, with the latter giving the most practical parameters. Pass's approach, which is based on cut-and-choose, was also adapted by Unruh (EUROCRYPT '12, '14, '15) to the quantum setting, where rewinding poses a different set of challenges. All of these transforms are tailored to three-round Sigma protocols, and do not apply to a number of popular paradigms for building succinct proofs (e.g., those based on folding or sumcheck) which rely on multi-round protocols. This work initiates the study of transforms with straight-line knowledge extraction for multi-round protocols. We give two transforms, which can be thought of as multi-round analogues of those by Fischlin and Pass. Our first transform leads to more efficient proofs, but applies to a smaller class of protocols than the second one. Our second transform also admits a proof of security in the Quantum Random Oracle Model (QROM), making it the first transform for multi-round protocols which does not incur the super-polynomial security loss affecting the existing QROM analysis of the FS transform (Don et al., CRYPTO '20).
Last updated:  2024-10-21
FOLEAGE: $\mathbb{F}_4$OLE-Based Multi-Party Computation for Boolean Circuits
Maxime Bombar, Dung Bui, Geoffroy Couteau, Alain Couvreur, Clément Ducros, and Sacha Servan-Schreiber
Secure Multi-party Computation (MPC) allows two or more parties to compute any public function over their privately-held inputs, without revealing any information beyond the result of the computation. Modern protocols for MPC generate a large amount of input-independent preprocessing material called multiplication triples, in an offline phase. This preprocessing can later be used by the parties to efficiently instantiate an input-dependent online phase computing the function. To date, the state-of-the-art secure multi-party computation protocols in the preprocessing model are tailored to secure computation of arithmetic circuits over large fields and require little communication in the preprocessing phase, typically O(N · m) to generate m triples among N parties. In contrast, when it comes to computing preprocessing for computations that are naturally represented as Boolean circuits, the state-of-the-art techniques have not evolved since the 1980s, and in particular, require every pair of parties to execute a large number of oblivious transfers before interacting to convert them to N-party triples, which induces an Ω(N^2 · m) communication overhead. In this paper, we introduce FOLEAGE, which addresses this gap by introducing an efficient preprocessing protocol tailored to Boolean circuits. FOLEAGE exhibits excellent performance: It generates m multiplication triples over F2 using only N · m + O(N^2 · log m) bits of communication for N-parties, and can concretely produce over 12 million triples per second in the 2-party setting on one core of a commodity machine. Our result builds upon an efficient Pseudorandom Correlation Generator (PCG) for multiplication triples over the field F4. Roughly speaking, a PCG enables parties to stretch a short seed into a large number of pseudorandom correlations non-interactively, which greatly improves the efficiency of the offline phase in MPC protocols. Our construction significantly outperforms the state-of-the-art, which we demonstrate via a prototype implementation. This is achieved by introducing a number of protocol-level, algorithmic-level, and implementation-level optimizations on the recent PCG construction of Bombar et al. (Crypto 2023) from the Quasi-Abelian Syndrome Decoding assumption.
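To put the quoted preprocessing costs in perspective, the following back-of-the-envelope Python comparison contrasts the classical pairwise-OT cost of roughly N^2 · m bits with FOLEAGE's N · m + O(N^2 · log m) bits. The constant hidden in the O(·) term is not stated in the abstract, so the value c = 128 below is purely an illustrative assumption.

```python
# Back-of-the-envelope comparison of the two preprocessing communication costs
# quoted above. The constant c in the O(N^2 * log m) term is an assumption made
# only for illustration; it is not a figure from the paper.
import math

def classical_bits(N, m):
    return N * N * m                      # ~Omega(N^2 * m): pairwise-OT style preprocessing

def foleage_bits(N, m, c=128):
    return N * m + c * N * N * math.log2(m)

if __name__ == "__main__":
    N, m = 16, 10**7                      # 16 parties, 10 million Boolean triples
    print(f"classical : {classical_bits(N, m) / 8e9:.2f} GB")
    print(f"FOLEAGE   : {foleage_bits(N, m) / 8e9:.2f} GB")
```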
Last updated:  2024-10-21
Proving the Security of the Extended Summation-Truncation Hybrid
Avijit Dutta and Eik List
Since designing a dedicated secure symmetric PRF is difficult, various works studied optimally secure PRFs from the sum of independent permutations (SoP). At CRYPTO'20, Gunsing and Mennink proposed the Summation-Truncation Hybrid (STH). While based on SoP, STH releases additional $a \leq n$ bits of the permutation calls and sums $n-a$ bits of them. Thus, it produces $n+a$ bits at $O(n-a/2)$-bit PRF security. Both SoP and STH can be used directly in encryption schemes or MACs in place of permutation calls for higher security. However, simply replacing every call as in GCM-SIV$r$ would demand more calls. For encryption schemes, Iwata's XORP scheme has long been known to provide a better trade-off between efficiency and security. It extends SoP to variable-length outputs by using $r+1$ calls to a block cipher where the output of one call is added to each of the other $r$ outputs. A similar extension can be made for STH, which we call XTH, the XORP-Truncation Hybrid. Such an extension was already suggested in the final discussion by Gunsing and Mennink, but left as an open problem. This work fills the gap by formalizing and proving the security of XTH. For a rate of $r/(r+1)$ as in XORP, we show $O(n-a/2-1.5\log(r))$-bit security for XTH.
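As a toy illustration of the XORP structure described above ($r+1$ block-cipher calls per nonce, with the output of one call masking each of the other $r$ outputs), the following Python sketch uses HMAC-SHA-256 truncated to $n$ bits as a stand-in for the block cipher. This stand-in and the parameter choices are assumptions made purely for illustration; the sketch also omits the truncation step that distinguishes XTH.

```python
# Toy sketch of the XORP structure: r+1 "block-cipher" calls per nonce, where the
# first call masks each of the other r outputs. HMAC-SHA-256 truncated to n bits
# stands in for the block cipher purely for illustration (an assumption); the
# security analyses above concern a real block cipher/permutation.
import hmac, hashlib

N_BYTES = 16  # n = 128-bit blocks

def E(key: bytes, block_index: int, nonce: int) -> int:
    msg = nonce.to_bytes(8, "big") + block_index.to_bytes(8, "big")
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest()[:N_BYTES], "big")

def xorp(key: bytes, nonce: int, r: int) -> list[int]:
    mask = E(key, 0, nonce)                                     # the single masking call
    return [mask ^ E(key, i, nonce) for i in range(1, r + 1)]   # r masked outputs

if __name__ == "__main__":
    blocks = xorp(b"\x00" * 16, nonce=1, r=4)                   # rate r/(r+1) = 4/5
    print([hex(b) for b in blocks])
```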
Last updated:  2024-10-21
UC Non-Interactive, Proactive, Threshold ECDSA with Identifiable Aborts
Ran Canetti, Rosario Gennaro, Steven Goldfeder, Nikolaos Makriyannis, and Udi Peled
We present a distributed ECDSA protocol, for any number of signatories. The protocol improves on that of the authors (CCS'20), which in turn builds on the Gennaro & Goldfeder and Lindell & Nof protocols (CCS '18). Specifically: ** Only the last round of the protocol requires knowledge of the message, and the other rounds can take place in a preprocessing stage, lending itself to a non-interactive threshold ECDSA protocol. ** The protocol withstands adaptive corruption of signatories. Furthermore, it includes a periodic refresh mechanism and guarantees proactive security. ** The protocol achieves accountability by identifying corrupted signatories in case of failure to generate a valid signature (identifiable abort). Furthermore, we formulate a distributed signature ideal functionality within the UC framework that guarantees unforgeability, proactive security, and identifiable abort, and show that the protocol realizes this functionality in the global random oracle model, assuming Strong RSA, DDH, semantic security of the Paillier encryption, and a somewhat enhanced variant of existential unforgeability of ECDSA. This combination of properties (low latency, compatibility with cold-wallet architectures, proactive security, identifiable abort and universally composable security) makes our protocol a good fit for threshold wallets for ECDSA-based cryptocurrencies.
Last updated:  2024-10-21
Computational Analysis of Plausibly Post-Quantum-Secure Recursive Arguments of Knowledge
Dustin Ray and Paulo L. Barreto
With the recent standardization of post-quantum cryptographic algorithms, research efforts have largely remained centered on public key exchange and encryption schemes. Argument systems, which allow a party to efficiently argue the correctness of a computation, have received comparatively little attention regarding their quantum-resilient design. These computational integrity frameworks often rely on cryptographic assumptions, such as pairings or group operations, which are vulnerable to quantum attacks. In this work, we present a fully implemented post-quantum secure argument system that compresses unbounded computation into a constant-sized space. The prover can argue the truth of a computation of any size, and the verifier can check its correctness in constant time. This work extends the utility of computational integrity statements into the quantum domain. We provide real-world performance metrics demonstrating that post-quantum secure argument systems not only exist but can outperform classical systems in both efficiency and scalability, making such systems an attractive choice for practical deployment.
Last updated:  2024-10-21
SEC: Symmetric Encrypted Computation via Fast Look-ups
Debadrita Talapatra, Nimish Mishra, Arnab Bag, Sikhar Patranabis, and Debdeep Mukhopadhyay
Encrypted computation allows a client to securely outsource the storage and processing of sensitive private data to an untrusted third party cloud server. Fully homomorphic encryption (FHE) allows computing arbitrary functions over encrypted data, but incurs huge overheads and does not practically scale to large databases. In contrast, slightly weaker yet efficient constructions, such as Searchable Symmetric Encryption (SSE), support lookup-based evaluations of a restricted class of Boolean circuits over symmetrically encrypted data. In this paper, we investigate the use of SSE to efficiently perform arbitrary Boolean circuit evaluations over symmetrically encrypted data via look-ups. To this end, we propose Symmetric Encrypted Computation (SEC): the first practically efficient and provably secure lookup-based construction, analogous to traditional FHE, that supports evaluation of arbitrary Boolean circuits over symmetrically encrypted data. SEC relies on purely symmetric-key cryptographic primitives and achieves flexible performance versus leakage trade-offs. SEC extends and generalizes the functional capabilities of SSE, while inheriting its data privacy guarantees and desirable performance benefits. We provide a concrete construction of SEC and analyze its security with respect to a rigorously defined and thoroughly analyzed leakage profile. We also present a prototype implementation of SEC and experimentally validate its practical efficiency. Our experiments show that SEC outperforms state-of-the-art FHE schemes (such as Torus FHE) substantially, with around 1000× speed-up in basic Boolean gate evaluations. We further showcase the scalability of SEC for functions with multi-bit inputs via experiments performing encrypted evaluation of the entire AES-128 circuit, as well as three max-pooling layers of the AlexNet architecture. For both sets of experiments, SEC outperforms state-of-the-art and accelerated FHE implementations by 1000× in terms of processing time, while incurring 250× lower storage.
Last updated:  2024-10-21
A zero-trust swarm security architecture and protocols
Alex Shafarenko
This report presents the security protocols and general trust architecture of the SMARTEDGE swarm computing platform. Part 1 describes the coordination protocols for use in a swarm production environment, e.g. a smart factory, and Part 2 deals with crowd-sensing scenarios characteristic of traffic-control swarms.
Last updated:  2024-10-21
SIGNITC: Supersingular Isogeny Graph Non-Interactive Timed Commitments
Knud Ahrens
Non-Interactive Timed Commitment schemes (NITC) allow any commitment to be opened after a specified delay $t_{\mathrm{fd}}$. This is useful for sealed-bid auctions and as a primitive for more complex protocols. We present the first NITC without repeated squaring or theoretical black-box algorithms like NIZK proofs or one-way functions. It has fast verification, an almost arbitrary delay, and satisfies IND-CCA hiding and perfect binding. Our protocol is based on isogenies between supersingular elliptic curves, making it presumably quantum-secure, and all algorithms have been implemented as part of SQISign or other well-known isogeny-based cryptosystems. Additionally, it needs no trusted setup and can use known primes for SIKE or SQISign.
Last updated:  2024-10-21
State of the art of HFE variants: Is it possible to repair HFE with appropriate perturbations?
Benoit Cogliati, Gilles Macariot-Rat, Jacques Patarin, and Pierre Varjabedian
HFE (which stands for Hidden Field Equations) belongs to multivariate cryptography and was designed by Jacques Patarin in 1996 as a public key trapdoor suitable for encryption or signature. This original basic version is unfortunately known to have a super-polynomial attack, but, as envisioned from the beginning, it comes with various variants, which can be described as combinations of “modifiers”. In this work, we first present the state of the art of these HFE modifiers, along with their effect on the complexity of the main cryptanalysis techniques against HFE-based schemes. This then allows us to identify a combination of two modifiers that has not yet been explored and may still be secure with efficient parameters. Based on our analysis, we propose a new signature scheme that offers extremely short signature sizes, with reasonable public key sizes and performance. In particular, we rely on the classical Feistel-Patarin technique to reduce signature sizes below two times the security parameter.
Last updated:  2024-10-21
Revisiting Fermat's Factorization Method
Gajraj Kuldeep and Rune Hylsberg Jacobsen
This paper addresses the problem of factoring composite numbers by introducing a novel approach to represent their prime divisors. We develop a method to efficiently identify smaller divisors based on the difference between the primes involved in forming the composite number. Building on these insights, we propose an algorithm that significantly reduces the computational complexity of factoring, requiring half as many iterations as traditional quadratic residue-based methods. The presented algorithm offers a more efficient solution for factoring composite numbers, with potential applications in fields such as cryptography and computational number theory.
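For reference, the classical baseline that this work builds on is Fermat's method, which searches for a representation n = a^2 - b^2 = (a - b)(a + b) and is fast precisely when the two prime factors are close. The Python sketch below is only this textbook algorithm, not the improved variant proposed in the paper.

```python
# Textbook Fermat factorization: search for a, b with n = a^2 - b^2 = (a - b)(a + b).
# This is the classical baseline referred to above, not the paper's improved algorithm.
from math import isqrt

def fermat_factor(n: int) -> tuple[int, int]:
    assert n % 2 == 1 and n > 1
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:               # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

if __name__ == "__main__":
    p, q = 10007, 10009               # close primes -> very few iterations
    print(fermat_factor(p * q))       # (10007, 10009)
```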
Last updated:  2024-10-21
SoK: Descriptive Statistics Under Local Differential Privacy
René Raab, Pascal Berrang, Paul Gerhart, and Dominique Schröder
Local Differential Privacy (LDP) provides a formal guarantee of privacy that enables the collection and analysis of sensitive data without revealing any individual's data. While LDP methods have been extensively studied, there is a lack of a systematic and empirical comparison of LDP methods for descriptive statistics. In this paper, we first provide a systematization of LDP methods for descriptive statistics, comparing their properties and requirements. We demonstrate that several mean estimation methods based on sampling from a Bernoulli distribution are equivalent in the one-dimensional case and introduce methods for variance estimation. We then empirically compare methods for mean, variance, and frequency estimation. Finally, we provide recommendations for the use of LDP methods for descriptive statistics and discuss their limitations and open questions.
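As a minimal sketch of one of the Bernoulli-sampling-based mechanisms such a survey compares, the snippet below implements classical randomized response for estimating the mean of private bits under epsilon-LDP, together with the standard debiasing step. The parameter names and values are illustrative and not taken from the paper.

```python
# Minimal epsilon-LDP randomized response for estimating the mean of private bits.
# Each user reports their bit truthfully with probability p = e^eps / (e^eps + 1)
# and flips it otherwise; the aggregator debiases the noisy average.
import math, random

def perturb(bit: int, eps: float) -> int:
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports: list[int], eps: float) -> float:
    p = math.exp(eps) / (math.exp(eps) + 1)
    y = sum(reports) / len(reports)
    return (y - (1 - p)) / (2 * p - 1)     # since E[report] = p*mu + (1-p)*(1-mu)

if __name__ == "__main__":
    random.seed(0)
    data = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]  # true mean ~0.3
    eps = 1.0
    reports = [perturb(b, eps) for b in data]
    print(round(estimate_mean(reports, eps), 3))
```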
Last updated:  2024-10-21
An Efficient Noncommutative NTRU from Semidirect Product
Vikas Kumar, Ali Raya, Aditi Kar Gangopadhyay, Sugata Gangopadhyay, and Md Tarique Hussain
NTRU is one of the most extensively studied lattice-based schemes. Its flexible design has inspired different proposals constructed over different rings, with some aiming to enhance security and others focusing on improving performance. The literature has introduced a line of noncommutative NTRU-like designs that claim to offer greater resistance to existing attacks. However, most of these proposals are either theoretical or fall short in terms of time and memory requirements when compared to standard NTRU. To our knowledge, DiTRU (Africacrypt 2024) is the first noncommutative analog of NTRU provided as a complete package. Although DiTRU is practical, it operates about two times slower than NTRU when parametrized for no decryption failure; with negligible decryption failure, its key generation, encryption, and decryption are 1.2, 1.7, and 1.7 times slower than NTRU, respectively. In this work, we introduce a noncommutative version of NTRU that offers comparable performance and key sizes to NTRU while improving upon DiTRU. Our cryptosystem is based on the GR-NTRU framework, utilizing the group ring of a semidirect product of cyclic groups over the ring of Eisenstein integers. This design allows for an efficient construction with key generation speeds approximately two (three) times faster than NTRU (DiTRU). Further, the proposed scheme provides a speed-up by a factor of roughly 1.2 (2) over NTRU (DiTRU) when encrypting/decrypting messages of the same length. We provide a reference implementation in C for the proposed cryptosystem to prove our claims.
Last updated:  2024-10-21
Pseudorandom Multi-Input Functional Encryption and Applications
Shweta Agrawal, Simran Kumari, and Shota Yamada
We construct the first multi-input functional encryption (MIFE) and indistinguishability obfuscation (iO) schemes for pseudorandom functionalities, where the output of the functionality is pseudorandom for every input seen by the adversary. Our MIFE scheme relies on LWE and evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) for constant arity functions, and a strengthening of evasive LWE for polynomial arity. Thus, we obtain the first MIFE and iO schemes for a nontrivial functionality from conjectured post-quantum assumptions. Along the way, we identify subtle issues in the proof of witness encryption from evasive LWE by prior work and believe that a similar strengthening of evasive LWE should also be required for their proof, for the same reasons as ours. We demonstrate the power of our new tools via the following applications: 1. Multi Input Predicate Encryption for Constant Arity. Assuming evasive LWE and LWE, we construct a multi-input predicate encryption scheme (MIPE) for P, supporting constant arity. The only prior work to support MIPE for P with constant arity by Agrawal et al. (Crypto, 2023) relies on a strengthening of Tensor LWE in addition to LWE and evasive LWE. 2. Multi Input Predicate Encryption for Polynomial Arity. Assuming a stronger variant of evasive LWE and LWE, we construct MIPE for P for polynomial arity. MIPE for polynomial arity supporting P was not known before, to the best of our knowledge. 3. Two Party ID Based Key Exchange. Assuming a stronger variant of evasive LWE and LWE, along with Decision Bilinear Diffie-Hellman, we provide the first two-party ID based Non-Interactive Key Exchange (ID-NIKE) scheme in the standard model. This leads to the first ID-NIKE in the standard model without using multilinear maps or indistinguishability obfuscation. 4. Instantiating the Random Oracle. We use our pseudorandom iO to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014) such as full-domain hash signature based on trapdoor permutations and more. Our tools of MIFE and iO for pseudorandom functionalities appear quite powerful and yield extremely simple constructions when used in applications. We believe they provide a new pathway for basing “extreme” cryptography, which has so far required full fledged iO, on the presumably weaker evasive LWE in the post quantum regime.
Last updated:  2024-10-21
Drifting Towards Better Error Probabilities in Fully Homomorphic Encryption Schemes
Olivier Bernard, Marc Joye, Nigel P. Smart, and Michael Walter
There are two security notions for FHE schemes: the traditional notion of IND-CPA, and a more stringent notion of IND-CPA$^D$. The notions are equivalent if the FHE schemes are perfectly correct; however, for schemes with negligible failure probability, the FHE parameters needed to obtain IND-CPA$^D$ security can be much larger than those needed to obtain IND-CPA security. This paper uses the notion of ciphertext drift in order to understand the practical difference between IND-CPA and IND-CPA$^D$ security in schemes such as FHEW, TFHE and FINAL. This notion allows us to define a modulus switching operation (the main culprit for the difference in parameters) such that one does not need to adapt the IND-CPA cryptographic parameters to meet the IND-CPA$^D$ security level. Further, the extra cost incurred by the new techniques has no noticeable performance impact in practical applications. The paper also formally defines a stronger version of IND-CPA$^D$ security, called sIND-CPA$^D$, which is proved to be strictly separated from the IND-CPA$^D$ notion. A criterion for turning an IND-CPA$^D$-secure public-key encryption scheme into an sIND-CPA$^D$-secure one is also provided.
Last updated:  2024-10-21
Free-XOR Gate Bootstrapping
Chunling Chen, Xianhui Lu, Ruida Wang, Zhihao Li, Xuan Shen, and Benqiang Wei
The FHEW-like gate bootstrapping framework operates in a 2-bit plaintext space, where logic gates such as NAND, XOR, and AND are implemented by adding two ciphertexts and extracting the most significant bit. However, each gate operation requires a bootstrapping whose primary cost is one blind rotation, which becomes expensive when processing circuit operations for applications. We propose a novel Free-XOR gate bootstrapping framework based on a single-bit plaintext space, in which the XOR operation is realized by simply adding two ciphertexts, resulting in an almost free computational cost. To form a minimal complete set for logical operations, we design an algorithm for the AND gate within this framework. The AND gate cost of our Free-XOR gate bootstrapping involves two blind rotations. However, by utilizing a single-bit plaintext space to enhance noise tolerance and swapping some operations of the bootstrapping process, we can adopt a more compact parameter setting, which in turn accelerates the speed of blind rotation. We propose an instantiation of the NTRU-based AND gate operation, which requires two blind rotations. Despite the additional rotation, the overall computational cost is marginally lower than that of the state-of-the-art gate bootstrapping scheme LLW+ [TCHES24], which utilizes only a single blind rotation. In addition, our approach achieves a significant reduction in key size, reducing it to 3.3 times the size of LLW+ [TCHES24].
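To make the "free XOR in a single-bit plaintext space" intuition concrete, here is a toy symmetric-LWE sketch in Python: bits are encoded as b·(q/2), so adding two ciphertexts component-wise adds the plaintext bits modulo 2, up to noise growth. The parameters and the plain LWE encryption used here are assumptions for illustration only; they are not the NTRU-based instantiation or parameter set proposed in the paper.

```python
# Toy illustration only: with bits encoded as b * q/2 in a symmetric LWE ciphertext,
# adding two ciphertexts XORs the underlying bits (the noise merely grows).
import random

Q, N = 2**15, 64
secret = [random.randrange(Q) for _ in range(N)]

def enc(bit: int):
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-4, 4)
    b = (sum(x * s for x, s in zip(a, secret)) + bit * (Q // 2) + e) % Q
    return a, b

def add(c1, c2):                                   # the "free" XOR: one vector addition
    a1, b1 = c1; a2, b2 = c2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q

def dec(c) -> int:
    a, b = c
    phase = (b - sum(x * s for x, s in zip(a, secret))) % Q
    return 1 if Q // 4 < phase < 3 * Q // 4 else 0  # round to the nearest multiple of q/2

if __name__ == "__main__":
    for x in (0, 1):
        for y in (0, 1):
            assert dec(add(enc(x), enc(y))) == x ^ y
    print("homomorphic XOR by ciphertext addition: OK")
```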
Last updated:  2024-10-21
Quantum Security of a Compact Multi-Signature
Shaoquan Jiang
With the rapid advance in quantum computing, quantum security is now an indispensable property for any cryptographic system. In this paper, we study how to prove the security of a complex cryptographic system in the quantum random oracle model. We first give a variant of Zhandry's compressed quantum random oracle (${\bf CStO}$), called compressed quantum random oracle with adaptive special points (${\bf CStO}_s$). Then, we extend the on-line extraction technique of Don et al. (EUROCRYPT'22) from ${\bf CStO}$ to ${\bf CStO}_s$. We also extend the random experiment technique of Liu and Zhandry (CRYPTO'19) for extracting the ${\bf CStO}$ query that witnesses the future adversarial output. With these preparations, a systematic security proof in the quantum random oracle model can start with a random ${\bf CStO}$ experiment (that extracts the witness for the future adversarial output) and then convert this game to one involving ${\bf CStO}_s$. Next, the on-line extraction technique for ${\bf CStO}_s$ can be applied to extract the witness for any on-line commitment. With this strategy, we give a security proof of our recent compact multi-signature framework that is converted from any weakly secure linear ID scheme. We also prove the quantum security of our recent lattice realization of this linear ID scheme, by iteratively applying the weakly collapsing protocol technique of Liu and Zhandry (CRYPTO'19). Combining these two results, we obtain the first quantum security proof for a compact multi-signature.
Last updated:  2024-10-21
Practical Asynchronous MPC from Lightweight Cryptography
Atsuki Momose
We present an asynchronous secure multi-party computation (MPC) protocol that is practically efficient. Our protocol can evaluate any arithmetic circuit with linear communication in the number of parties per multiplication gate, while relying solely on computationally lightweight cryptography such as hash functions and symmetric encryption. Our protocol is optimally resilient and tolerates $t$ malicious parties among $n = 3t+1$ parties. At the technical level, we manage to apply the \emph{player-elimination} paradigm to asynchronous MPC. This framework enables the detection and eviction of cheating parties by repeatedly attempting to generate Beaver triples. Once all malicious parties are eliminated, honest parties can proceed with efficient Beaver triple generation. While this approach is standard in synchronous MPC, it presents several technical challenges when adopted in an asynchronous network, which we address in this work.
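For readers unfamiliar with the preprocessing model referred to above, the following single-process Python sketch shows the standard way a preprocessed Beaver triple (a, b, c = a·b) is consumed to multiply two additively shared values in an online phase. It simulates the openings locally and shows only the textbook technique, not the asynchronous, player-elimination protocol of this paper.

```python
# Textbook Beaver-triple multiplication over additive shares mod a prime, with the
# network interaction simulated locally for illustration.
import random

P = 2**61 - 1                      # a prime field

def share(x, n=3):
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def open_(shares):
    return sum(shares) % P

def beaver_triple(n=3):
    a, b = random.randrange(P), random.randrange(P)
    return share(a, n), share(b, n), share(a * b % P, n)

def mul_shares(xs, ys, triple):
    As, Bs, Cs = triple
    d = open_([(x - a) % P for x, a in zip(xs, As)])   # open d = x - a
    e = open_([(y - b) % P for y, b in zip(ys, Bs)])   # open e = y - b
    # [xy] = [c] + d*[b] + e*[a] + d*e  (the public constant is added by one party)
    zs = [(c + d * b + e * a) % P for a, b, c in zip(As, Bs, Cs)]
    zs[0] = (zs[0] + d * e) % P
    return zs

if __name__ == "__main__":
    x, y = 1234567, 7654321
    assert open_(mul_shares(share(x), share(y), beaver_triple())) == x * y % P
    print("Beaver multiplication: OK")
```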
Last updated:  2024-10-20
Rate-1 Statistical Non-Interactive Zero-Knowledge
Pedro Branco, Nico Döttling, and Akshayaram Srinivasan
We give the first construction of a rate-1 statistical non-interactive zero-knowledge argument of knowledge. For the $\mathsf{circuitSAT}$ language, our construction achieves a proof length of $|w| + |w|^\epsilon \cdot \mathsf{poly}(\lambda)$ where $w$ denotes the witness, $\lambda$ is the security parameter, $\epsilon$ is a small constant less than 1, and $\mathsf{poly}(\cdot)$ is a fixed polynomial that is independent of the instance or the witness size. The soundness of our construction follows from either the LWE assumption, or the $O(1)$-$\mathsf{LIN}$ assumption on prime-order groups with efficiently computable bilinear maps, or the sub-exponential DDH assumption. Previously, Gentry et al. (Journal of Cryptology, 2015) achieved NIZKs with statistical soundness and computational zero-knowledge with the aforementioned proof length by relying on the Learning with Errors (LWE) assumption.
Last updated:  2024-10-20
OT-PCA: New Key-Recovery Plaintext-Checking Oracle Based Side-Channel Attacks on HQC with Offline Templates
Haiyue Dong and Qian Guo
In this paper, we introduce OT-PCA, a novel approach for conducting Plaintext-Checking (PC) oracle based side-channel attacks, specifically designed for Hamming Quasi-Cyclic (HQC). By calling the publicly accessible HQC decoder, we build offline templates that enable efficient extraction of soft information for hundreds of secret positions with just a single PC oracle call. Our method addresses critical challenges in optimizing key-related information extraction, including maximizing decryption output entropy and ensuring error pattern independence, through the use of genetic-style algorithms. Extensive simulations demonstrate that our new attack method significantly reduces the required number of oracle calls, achieving a 2.4-fold decrease for hqc-128 and even greater reductions for hqc-192 and hqc-256 compared to current state-of-the-art methods. Notably, the attack shows strong resilience against inaccuracy in the PC oracle—when the oracle accuracy decreases to 95%, the reduction factor in oracle call requirements increases to 7.6 for hqc-128. Lastly, a real-world evaluation conducted using power analysis on a platform with an ARM Cortex-M4 microcontroller validates the practical applicability and effectiveness of our approach.
Last updated:  2024-10-20
Updatable Privacy-Preserving Blueprints
Bernardo David, Felix Engelmann, Tore Frederiksen, Markulf Kohlweiss, Elena Pagnin, and Mikhail Volkhov
Privacy-preserving blueprint schemes (Kohlweiss et al., EUROCRYPT'23) offer a mechanism for safeguarding user's privacy while allowing for specific legitimate controls by a designated auditor agent. These schemes enable users to create escrows encrypting the result of evaluating a function $y=P(t,x)$, with $P$ being publicly known, $t$ a secret used during the auditor's key generation, and $x$ the user's private input. Crucially, escrows only disclose the blueprinting result $y=P(t,x)$ to the designated auditor, even in cases where the auditor is fully compromised. The original definition and construction only support the evaluation of functions $P$ on an input $x$ provided by a single user. We address this limitation by introducing updatable privacy-preserving blueprint schemes (UPPB), which enhance the original notion with the ability for multiple users to non-interactively update the private user input $x$ while blueprinting. Moreover, UPPBs contain a proof that $y$ is the result of a sequence of valid updates, while revealing nothing else about the private inputs $\{x_i\}$ of updates. As in the case of privacy-preserving blueprints, we first observe that UPPBs can be realized via a generic construction for arbitrary predicates $P$ based on FHE and NIZKs. Our main result is UBlu, an efficient instantiation for a specific predicate comparing the values $x$ and $t$, where $x$ is the cumulative sum of users' private inputs and $t$ is a fixed private value provided by the auditor in the setup phase. This rather specific setting already finds interesting applications such as privacy-preserving anti-money laundering and location tracking, and can be extended to support more generic predicates. From the technical perspective, we devise a novel technique to keep the escrow size concise, independent of the number of updates, and reasonable for practical applications. We achieve this via a novel characterization of malleability for the algebraic NIZK by Couteau and Hartmann (CRYPTO’20) that allows for an additive update function.
Last updated:  2024-10-20
Universally Composable Non-Interactive Zero-Knowledge from Sigma Protocols via a New Straight-line Compiler
Megan Chen, Pousali Dey, Chaya Ganesh, Pratyay Mukherjee, Pratik Sarkar, and Swagata Sasmal
Non-interactive zero-knowledge proofs (NIZK) are essential building blocks in threshold cryptosystems like multiparty signatures, distributed key generation, and verifiable secret sharing, allowing parties to prove correct behavior without revealing secrets. Furthermore, universally composable (UC) NIZKs enable seamless composition in the larger cryptosystems. A popular way to construct NIZKs is to compile interactive protocols using the Fiat-Shamir transform. Unfortunately, Fiat-Shamir transformed NIZK requires rewinding the adversary and is not $\textit{straight-line extractable}$, making it at odds with UC. Using Fischlin's transform gives straight-line extractability, but at the expense of many repetitions of the underlying protocol leading to poor concrete efficiency and difficulty in setting parameters. In this work, we propose a simple new transform that compiles a Sigma protocol for an algebraic relation into a UC-NIZK protocol $\textit{without any overheads of repetition}$. - Given a Sigma protocol for proving m algebraic statements over n witnesses, we construct a compiler to transform it into a $\textit{straight-line extractable}$ protocol using an additively homomorphic encryption scheme AHE. Our prover executes the Sigma protocol's prover once and computes 2n encryptions. The verification process involves running the Sigma protocol verifier once and then computing n encryptions, which are homomorphically verified against the prover generated encryptions. - We apply the Fiat-Shamir transform to the above straight-line extractable Sigma protocol to obtain a UC-NIZK. We instantiate AHE using class group-based encryption where the public key of the encryption scheme is obliviously sampled using a suitable hash function. This yields a UC-NIZK protocol in the random oracle model.
Last updated:  2024-10-20
Low-Communication Updatable PSI from Asymmetric PSI and PSU
Guowei Ling, Peng Tang, and Weidong Qiu
Private Set Intersection (PSI) allows two mutually untrusted parties to compute the intersection of their private sets without revealing additional information. In general, PSI operates in a static setting, where the computation is performed only once on the input sets of both parties. Badrinarayanan et al. (\textit{PoPETs} 2022) initiated the study of Updatable PSI (UPSI), which extends this capability to dynamically updating sets, enabling both parties to securely compute the intersection as their sets are modified while incurring significantly less overhead than re-executing a conventional PSI. However, existing UPSI protocols either do not support arbitrary deletion of elements or incur high computational and communication overhead. In this work, we combine asymmetric PSI with Private Set Union (PSU) to present a novel UPSI protocol. Our UPSI protocol supports arbitrary additions and deletions of elements, offering a flexible approach to update sets. Furthermore, our protocol enjoys extremely low communication overhead, scaling linearly with the size of the update set while remaining independent of the total set size. We implement our protocol and compare it against state-of-the-art conventional PSI and UPSI protocols. Experimental results demonstrate that our UPSI protocol incurs $587$ to $755$ times less communication overhead than the recently proposed UPSI protocol (\textit{AsiaCrypt} 2024) that supports arbitrary additions and deletions. Moreover, our UPSI protocol has a significant advantage in low-bandwidth environments due to the exceptionally low communication overhead. Specifically, with an input size of $2^{22}$ and the size of the addition/deletion set being $2^{10}$, the existing UPSI protocol requires approximately $1650.45$, $1789.5$, and $3458.1$ seconds at bandwidths of $200$ Mbps, $50$ Mbps, and $5$ Mbps, respectively, whereas our UPSI protocol only requires around $13.01$, $13.75$, and $22.53$ seconds under the same conditions. Our open-source implementation is available at: \href{https://github.com/ShallMate/upsi}{https://github.com/ShallMate/upsi}.
Last updated:  2024-10-19
Good things come to those who wait: Dishonest-Majority Coin-Flipping Requires Delay Functions
Joseph Bonneau, Benedikt Bünz, Miranda Christ, and Yuval Efron
We reconsider Cleve's famous 1986 impossibility result on coin-flipping without an honest majority. Recently proposed constructions have circumvented this limit by using cryptographic delay functions. We show that this is necessary: a (weak) notion of delay functions is in fact implied by the existence of a protocol circumventing Cleve's impossibility. However, such delay functions are weaker than those used in existing constructions. We complete our result by showing an equivalence, that these weaker delay functions are also sufficient to construct not just fair dishonest-majority coin-flipping protocols, but also the stronger notion of a distributed randomness beacon. We also show that this is possible in a weaker communication model than previously considered, without the assumption of reliable broadcast or a public bulletin board.
Last updated:  2024-10-19
Volatile and Persistent Memory for zkSNARKs via Algebraic Interactive Proofs
Alex Ozdemir, Evan Laufer, and Dan Boneh
In verifiable outsourcing, an untrusted server runs an expensive computation and produces a succinct proof (called a SNARK) of the results. In many scenarios, the computation accesses a RAM that the server maintains a commitment to (persistent RAM) or that is initially zero (volatile RAM). But, SNARKs for such scenarios are limited by the high overheads associated with existing techniques for RAM checking. We develop new proofs about volatile, persistent, and sparse persistent RAM that reduce SNARK proving times. Our results include both asymptotic and concrete improvements---including a proving time reduction of up to 51.3$\times$ for persistent RAM. Along the way, we apply two tools that may be of independent interest. First, we generalize an existing construction to convert any algebraic interactive proof (AIP) into a SNARK. An AIP is a public-coin, non-succinct, interactive proof with a verifier that is an arithmetic circuit. Second, we apply Bézout's identity for polynomials to construct new AIPs for uniqueness and disjointness. These are useful for showing the independence of accesses to different addresses.
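As a standalone illustration of the Bézout identity mentioned above: for coprime polynomials a(x) and b(x), the extended Euclidean algorithm yields u, v with u·a + v·b = 1, which is a natural witness that their root sets (for instance, two address sets encoded as vanishing polynomials) are disjoint. The sympy snippet below only sketches this generic algebraic fact, not the paper's AIP construction.

```python
# Polynomial Bezout identity as a disjointness witness: coprime vanishing
# polynomials admit u, v with u*a + v*b = 1. Generic sympy illustration.
from sympy import symbols, gcdex, expand

x = symbols("x")
a = (x - 1) * (x - 2)          # vanishes on address set {1, 2}
b = (x - 3) * (x - 4)          # vanishes on the disjoint set {3, 4}

u, v, g = gcdex(a, b, x)       # u*a + v*b = g
print("gcd:", g)               # 1, since the root sets are disjoint
print("check:", expand(u * a + v * b))
```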
Last updated:  2024-10-19
$\widetilde{\mbox{O}}$ptimal Adaptively Secure Hash-based Asynchronous Common Subset
Hanwen Feng, Zhenliang Lu, and Qiang Tang
Asynchronous multiparty computation (AMPC) requires an input agreement phase where all participants have a consistent view of the set of private inputs. While the input agreement problem can be precisely addressed by a Byzantine fault-tolerant consensus known as Asynchronous Common Subset (ACS), existing ACS constructions with potential post-quantum security have a large $\widetilde{\mathcal{O}}(n^3)$ communication complexity for a network of $n$ nodes. This poses a bottleneck for AMPC in the same setting. In contrast, ACS has optimal constructions with quadratic communication complexity based on bilinear map assumptions. In this paper, we bridge this gap by introducing a nearly optimal ACS, which solely relies on the black-box use of collision-resistant hash functions. It exhibits $\widetilde{\mathcal{O}}(n^2)$ communication complexity, expected constant round complexity, and security against adaptive adversaries who can corrupt up to $n/3$ nodes and perform ``after-the-fact removal'' attacks. At the core of our new ACS is the first nearly optimal hash-based Multi-valued Validated Byzantine Agreement (MVBA). To reduce cubic communication while avoiding heavy cryptographic tools, we introduce a new design paradigm, with several new components. We define and analyze our MVBA and components within the UC framework, facilitating their modular use in broader applications, particularly in AMPC.
Last updated:  2024-10-19
Do Not Disturb a Sleeping Falcon: Floating-Point Error Sensitivity of the Falcon Sampler and Its Consequences
Xiuhan Lin, Mehdi Tibouchi, Yang Yu, and Shiduo Zhang
Falcon is one of the three post-quantum signature schemes already selected by NIST for standardization. It is the most compact among them, and offers excellent efficiency and security. However, it is based on a complex algorithm for lattice discrete Gaussian sampling which presents a number of implementation challenges. In particular, it relies on (possibly emulated) floating-point arithmetic, which is often regarded as a cause for concern, and has been leveraged in, e.g., side-channel analysis. The extent to which Falcon's use of floating-point arithmetic can cause security issues has yet to be thoroughly explored in the literature. In this paper, we contribute to filling this gap by identifying a way in which Falcon's lattice discrete Gaussian sampler, due to specific design choices, is singularly sensitive to floating-point errors. In the presence of small floating-point discrepancies (which can occur in various ways, including the use of the two almost but not quite equivalent signing procedures ``dynamic'' and ``tree'' exposed by the Falcon API), we find that, when called twice on the same input, the Falcon sampler has a small but significant chance (on the order of once in a few thousand calls) of outputting two different lattice points with a very structured difference, that immediately reveals the secret key. This is in contrast to other lattice Gaussian sampling algorithms like Peikert's sampler and Prest's hybrid sampler, that are stable with respect to small floating-point errors. Correctly generated Falcon signatures include a salt that should in principle prevent the sampler from ever being called on the same input twice. In that sense, our observation has little impact on the security of Falcon signatures per se (beyond echoing warnings about the dangers of repeated randomness). On the other hand, it is critical for derandomized variants of Falcon, which have been proposed for use in numerous settings. One can mention in particular identity-based encryption, SNARK-friendly signatures, and sublinear signature aggregation. For all these settings, small floating-point discrepancies have a chance of resulting in full private key exposure, even when using the slower, integer-based emulated floating-point arithmetic of Falcon's reference implementation.
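As a generic illustration of the kind of floating-point sensitivity discussed above (and emphatically not Falcon's sampler itself), the Python snippet below computes the same quantity in two mathematically equivalent ways; the results differ by one unit in the last place, and a subsequent rounding step then produces two different integers.

```python
# Two mathematically equivalent computations of 0.8 * 10 that land on adjacent
# doubles; the later rounding decision then diverges. A generic IEEE-754 effect,
# shown only to illustrate why tiny discrepancies can flip a discrete decision.
import math

u = (0.1 + 0.7) * 10                 # 7.999999999999999
v = 0.8 * 10                         # 8.0
print(u == v)                        # False: one ulp apart
print(math.floor(u), math.floor(v))  # 7 8 -- the rounded results disagree
```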
Last updated:  2024-10-18
Subliminal Encrypted Multi-Maps and Black-Box Leakage Absorption
Amine Bahi, Seny Kamara, Tarik Moataz, and Guevara Noubir
We propose a dynamic, low-latency encrypted multi-map (EMM) that operates in two modes: low-leakage mode, which reveals minimal information such as data size, expected response length, and query arrival rate; and subliminal mode, which reveals only the data size while hiding metadata including query and update times, the number of operations executed, and even whether an operation was executed at all---albeit at the cost of full correctness. We achieve this by exploiting a tradeoff between leakage and latency, a previously underexplored resource in EMM design. In low-leakage mode, our construction improves upon existing work both asymptotically and empirically: it achieves optimal server-side storage, as well as communication and computational complexity that is independent of the maximum response length. In subliminal mode, it is the first construction to hide metadata. To analyze the latency and client-side storage of our construction, we utilize queuing theory and introduce a new queuing model, which may be of independent interest. To examine its metadata-hiding properties, we extend standard security definitions to account for metadata and prove a surprising result: if a scheme is subliminal in that it hides the execution of its operations, then it absorbs the leakage of any scheme that makes black-box use of it without sending additional messages. In other words, if a scheme is subliminal, then any scheme that makes black-box use of it will also be subliminal. We implement and evaluate our construction, demonstrating that our empirical results align with our theoretical analysis and that the scheme achieves a median query latency below $10$ milliseconds, making it practical for some applications.
Last updated:  2024-10-18
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently-proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques in order to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
Last updated:  2024-10-18
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray and Caroline El Jazmi
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when privacy and the correctness of work carried out must be ensured. Recent advancements in the field of zero-knowledge cryptography have led to a means of generating arguments of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we prove the correct training of a differentially-private (DP) linear regression over a dataset of 50,000 samples on a single machine in less than 6 minutes, verifying the entire computation in 0.17 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable-DP over a dataset of this size. We believe this result constitutes a key stepping-stone towards end-to-end private MLaaS.
Last updated:  2024-10-18
Improved Polynomial Division in Cryptography
Kostas Kryptos Chalkias, Charanjit Jutla, Jonas Lindstrom, Varun Madathil, and Arnab Roy
Several cryptographic primitives, especially succinct proofs of various forms, transform the satisfaction of high-level properties to the existence of a polynomial quotient between a polynomial that interpolates a set of values with a cleverly arranged divisor. Some examples are SNARKs, like Groth16, and polynomial commitments, such as KZG. Such a polynomial division naively takes $O(n \log n)$ time with Fast Fourier Transforms, and is usually the asymptotic bottleneck for these computations. Several works have targeted specific constructions to optimize these computations and trade off one-time setup costs with faster online computation times. In this paper, we present a unified approach to polynomial division related computations for a diverse set of schemes. We show how our approach provides a common abstract lens which recasts and improves existing approaches. Additionally, we present benchmarks for the Groth16 and the KZG systems, illustrating the significant practical benefits of our approach in terms of speed, memory, and parallelizability. We get a speedup of $2\times$ over the state-of-the-art in computing all openings for KZG commitments and a speed-up of about $2-3\%$ for Groth16 proofs when compared against the Rust Arkworks implementation. Although our Groth16 speedup is modest, our approach supports twice the number of gates as Arkworks and SnarkJS as it avoids computations at higher roots of unity. Conversely, this reduces the need for employing larger groups for bigger circuits. For example, our approach can support $2^{28}$ gates with BN254, as compared to $2^{27}$ for coset-based approaches, without sacrificing computational advantages. Our core technical contributions are novel conjugate representations and compositions of the derivative operator and point-wise division under the Discrete Fourier Transform. These allow us to leverage l'Hôpital's rule to efficiently compute polynomial division, where in the evaluation basis such divisions may be of the form $0/0$. Our techniques are generic with potential applicability to many existing protocols.
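The l'Hôpital trick mentioned above can be checked on a tiny example: if f = Z·q and ω is a simple root of Z, then both f(ω) and Z(ω) vanish, yet q(ω) = f'(ω)/Z'(ω). The sympy snippet below verifies this on toy polynomials; it is a generic illustration, not the DFT/evaluation-basis routine developed in the paper.

```python
# Evaluating the quotient q = f / Z at a simple root w of Z via l'Hopital's rule:
# both f(w) and Z(w) are 0, but q(w) = f'(w) / Z'(w).
from sympy import symbols, diff, expand

x = symbols("x")
Z = x**4 - 1                      # vanishing polynomial of the 4th roots of unity
q = 3*x**2 + 2*x + 5              # the quotient we pretend not to know
f = expand(Z * q)                 # in practice only f and Z are given

w = 1                             # a simple root of Z
assert f.subs(x, w) == 0 and Z.subs(x, w) == 0          # naive evaluation is 0/0
q_at_w = diff(f, x).subs(x, w) / diff(Z, x).subs(x, w)  # l'Hopital
print(q_at_w, "==", q.subs(x, w))                       # 10 == 10
```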
Last updated:  2024-10-18
Sharp: Short Relaxed Range Proofs
Geoffroy Couteau, Dahmun Goudarzi, Michael Klooß, and Michael Reichle
We provide optimized range proofs, called $\mathsf{Sharp}$, in discrete logarithm and hidden order groups, based on square decomposition. In the former setting, we build on the paradigm of Couteau et al. (Eurocrypt '21) and optimize their range proof (from now on, CKLR) in several ways: (1) We introduce batching via vector commitments and an adapted $\Sigma$-protocol. (2) We introduce a new group switching strategy to reduce communication. (3) As repetitions are necessary to instantiate CKLR in standard groups, we provide a novel batch shortness test that allows for cheaper repetitions. The analysis of our test is nontrivial and forms a core technical contribution of our work. For example, for $\kappa = 128$ bit security and $B = 64$ bit ranges for $N = 1$ (resp. $N = 8$) proof(s), we reduce the proof size by $34\%$ (resp. $75\%$) in arbitrary groups, and by $66\%$ (resp. $88\%)$ in groups of order $256$-bit, compared to CKLR. As $\mathsf{Sharp}$ and CKLR proofs satisfy a “relaxed” notion of security, we show how to enhance their security with one additional hidden order group element. In RSA groups, this reduces the size of state of the art range proofs (Couteau et al., Eurocrypt '17) by $77\%$ ($\kappa = 128, B = 64, N = 1$). Finally, we implement our most optimized range proof. Compared to the state-of-the-art Bulletproofs (Bünz et al., S&P 2018), our benchmarks show a very significant runtime improvement. Finally, we sketch some applications of our new range proofs.
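For context, square-decomposition range proofs in this line of work commonly exploit the fact that an integer x lies in [0, B] exactly when 4x(B - x) + 1 is non-negative, and that for x in this range the integer 4x(B - x) + 1, which is 1 mod 4, is always a sum of three squares by Legendre's three-square theorem; exhibiting such a decomposition therefore certifies range membership. The brute-force Python search below only illustrates the decomposition; real provers use fast decomposition algorithms and commit to x and the three witnesses.

```python
# Illustration of the three-square decomposition underlying square-decomposition
# range proofs: find y1, y2, y3 with 4*x*(B - x) + 1 = y1^2 + y2^2 + y3^2.
from math import isqrt

def three_squares(n: int):
    for y1 in range(isqrt(n) + 1):
        for y2 in range(y1 + 1):
            r = n - y1 * y1 - y2 * y2
            if r < 0:
                break
            y3 = isqrt(r)
            if y3 * y3 == r:
                return y1, y2, y3
    return None

if __name__ == "__main__":
    B, x = 1000, 437
    n = 4 * x * (B - x) + 1
    print(n, "=", three_squares(n))   # a witness that 0 <= x <= B
```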
Last updated:  2024-10-18
The module action for isogeny based cryptography
Damien Robert
We extend the usual ideal action on oriented elliptic curves to a (Hermitian) module action on oriented (polarised) abelian varieties. Oriented abelian varieties are naturally enriched in $R$-modules, and our module action comes from the canonical power object construction on categories enriched in a closed symmetric monoidal category. In particular our action is canonical and gives a fully fledged symmetric monoidal action. Furthermore, we give algorithms to compute this action in practice, generalising the usual algorithms in rank~$1$. The action allows us to unify in the same framework, on the one hand isogeny based cryptography based on ordinary or oriented elliptic curves, and on the other hand the one based on supersingular elliptic curves defined over $\mathbb{F}_{p^2}$. In particular, from our point of view, supersingular elliptic curves over $\mathbb{F}_p$ are given by a rank~$1$ module action, while (the Weil restriction) of those defined over $\mathbb{F}_{p^2}$ are given by a rank~$2$ module action. As a consequence, rank~$2$ module action inversion is at least as hard as the supersingular isogeny path problem. We thus propose to use Hermitian modules as an avatar of a cryptographic symmetric monoidal action framework. This generalizes the more standard cryptographic group action framework, and still allows for a NIKE (Non Interactive Key Exchange). The main advantage of our action is that, presumably, Kuperberg's algorithm does not apply. Compared to CSIDH, this allows for more compact keys and much better scaling properties. In practice, we propose the key exchange scheme $\otimes$-MIKE (Tensor Module Isogeny Key Exchange). Alice and Bob start from a supersingular elliptic curve $E_0/\mathbb{F}_p$ and both compute a $2^n$-isogeny over $\mathbb{F}_{p^2}$. They each send the $j$-invariant of their curve. Crucially, unlike SIDH, no torsion information at all is required. Their common secret, given by the module action, is then a dimension~$4$ principally polarised abelian variety. We obtain a very compact post-quantum NIKE: only 64B for NIST level~$1$ security.
Last updated:  2024-10-18
Non-Interactive Threshold BBS+ From Pseudorandom Correlations
Sebastian Faust, Carmit Hazay, David Kretzler, Leandro Rometsch, and Benjamin Schlosser
The BBS+ signature scheme is one of the most prominent solutions for realizing anonymous credentials. Its prominence is due to properties like selective disclosure and efficient protocols for creating and showing possession of credentials. Traditionally, a single credential issuer produces BBS+ signatures, which poses significant risks due to a single point of failure. In this work, we address this threat via a novel $t$-out-of-$n$ threshold BBS+ protocol. Our protocol supports an arbitrary security threshold $t \leq n$ and works in the so-called preprocessing setting. In this setting, we achieve non-interactive signing in the online phase and sublinear communication complexity in the number of signatures in the offline phase, which, as we show in this work, are important features from a practical point of view. As it stands today, none of the widely studied signature schemes, such as threshold ECDSA and threshold Schnorr, achieve both properties simultaneously. To this end, we design specifically tailored presignatures that can be directly computed from pseudorandom correlations and allow servers to create signature shares without additional cross-server communication. Both our offline and online protocols are actively secure in the Universal Composability model. Finally, we evaluate the concrete efficiency of our protocol, including an implementation of the online phase and the expansion algorithm of the pseudorandom correlation generator (PCG) used during the offline phase. The online protocol without network latency takes less than $15 ms$ for $t \leq 30$ and credentials sizes up to $10$. Further, our results indicate that the influence of $t$ on the online signing is insignificant, $< 6 \%$ for $t \leq 30$, and the overhead of the thresholdization occurs almost exclusively in the offline phase. Our implementation of the PCG expansion is the first considering correlations between more than $3$ parties and shows that even for a committee size of $10$ servers, each server can expand a correlation of up to $2^{16}$ presignatures in about $600$ ms per presignature.
Last updated:  2024-10-18
Dumbo-MPC: Efficient Fully Asynchronous MPC with Optimal Resilience
Yuan Su, Yuan Lu, Jiliang Li, Yuyi Wang, Chengyi Dong, and Qiang Tang
Fully asynchronous multi-party computation (AMPC) has superior robustness in realizing privacy and guaranteed output delivery (G.O.D.) against asynchronous adversaries that can arbitrarily delay communications. However, none of the existing AMPC protocols is truly practical: they either have sub-optimal resilience, incur cumbersome communication cost, or suffer from an online phase with extra cryptographic overhead. The only attempted implementation---HoneyBadgerMPC (hbMPC)---merely ensures G.O.D. in some implausible optimistic cases due to a non-robust offline pre-processing phase. We propose Dumbo-MPC, a concretely efficient AMPC-as-a-service design with G.O.D. in all phases and optimal resilience against $t<n/3$ malicious parties (where $n$ is the total number of parties). Like hbMPC, Dumbo-MPC has a robust (almost) information-theoretic online phase that can efficiently perform online computations given pre-processed multiplication triples. To achieve G.O.D. in all phases, we design a novel dual-mode offline protocol that can robustly pre-process multiplication triples in asynchrony. The offline phase features $O(n)$ per-triple communication in the optimistic case, followed by a fully asynchronous fallback to a pessimistic path that securely restores G.O.D. in the bad case. To efficiently implement the pessimistic path, we devise a concretely efficient zk-proof for the product relationship of secret shares over compact KZG polynomial commitments, which enables us to reduce the degree of the product of two secret shares from $2t$ to $t$ and could be of independent interest. We also implement and extensively evaluate Dumbo-MPC (particularly its offline phase) in varying network settings with up to 31 AWS servers. To our knowledge, we provide the first implementation of AMPC with all-phase G.O.D. A recent asynchronous triple-generation protocol from Groth and Shoup (GS23) is also implemented and experimentally compared. When $n = 31$, Dumbo-MPC generates 94 triples/sec (almost twice GS23) in the pessimistic case and 349 triples/sec (6x GS23) in the good case, so that 31 parties require only 2-8 min to prepare a private Vickrey auction with 100 bidders or 10-36 min for a mixing network of $2^{10}$ inputs.
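To illustrate why preprocessed multiplication triples make the online phase cheap and (almost) information-theoretic, here is a minimal sketch of standard Beaver-triple multiplication over additive shares. It is a generic illustration, not Dumbo-MPC's protocol, which additionally handles asynchrony and malicious behaviour.

```python
# Minimal Beaver-triple multiplication over additive secret shares mod P.
# With a preprocessed triple (a, b, c = a*b), multiplying shared x and y
# needs only two public openings plus local arithmetic.
import random

P = 2**61 - 1   # toy prime modulus
N = 4           # number of parties (illustrative)

def share(v):
    """Additively secret-share v mod P among N parties."""
    parts = [random.randrange(P) for _ in range(N - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Open d = x - a and e = y - b; these reveal nothing about x, y
    # because a and b are uniformly random.
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    # Local shares of z = x*y = c + d*b + e*a + d*e; only party 0 adds d*e.
    return [(ci + d * bi + e * ai + (d * e if i == 0 else 0)) % P
            for i, (ai, bi, ci) in enumerate(zip(a_sh, b_sh, c_sh))]

x, y = 1234, 5678
a, b = random.randrange(P), random.randrange(P)
z_sh = beaver_multiply(share(x), share(y), share(a), share(b), share(a * b % P))
assert reconstruct(z_sh) == (x * y) % P
```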
Last updated:  2024-10-18
From One-Time to Two-Round Reusable Multi-Signatures without Nested Forking
Lior Rotem, Gil Segev, and Eylon Yogev
Multi-signature schemes are gaining significant interest due to their blockchain applications. Of particular interest are two-round schemes in the plain public-key model that offer key aggregation and whose security is based on the hardness of the DLOG problem. Unfortunately, despite substantial recent progress, the security proofs of existing schemes provide rather insufficient concrete guarantees (especially for 256-bit groups). This frustrating situation has so far been approached either by relying on seemingly stronger assumptions or by considering restricted classes of attackers (e.g., algebraic attackers, which are assumed to provide an algebraic justification of each group element that they produce). We present a complementing approach by constructing multi-signature schemes that satisfy two relaxed notions of security, whose applicability nevertheless ranges from serving as drop-in replacements to enabling expressive smart-contract validation procedures. Our first notion, one-time unforgeability, extends the analogous single-signer notion by considering attackers that obtain a single signature for some message and set of signers of their choice. We construct a non-interactive one-time scheme based on any ring-homomorphic one-way function, admitting efficient instantiations based on the DLOG and RSA assumptions. Aggregated verification keys and signatures consist of two group elements and a single group element, respectively, and our security proof consists of a single application of the forking lemma (thus avoiding the substantial security loss exhibited by existing two-round schemes). Additionally, we demonstrate that our scheme naturally extends to a $t$-time scheme, where aggregated verification keys consist of $t+1$ group elements, while aggregated signatures still consist of a single group element. Our second notion, single-set unforgeability, considers attackers that obtain any polynomial number of signatures but are restricted to a single set of signers of their choice. We transform any non-interactive one-time scheme into a two-round single-set scheme via a novel forking-free construction that extends the seminal Naor-Yung tree-based approach to the multi-signer setting. Aggregated verification keys are essentially identical to those of the underlying one-time scheme, and the length of aggregated signatures is determined by that of the underlying scheme while scaling linearly with the length of messages (noting that long messages can always be hashed using a collision-resistant function). Instantiated with our one-time scheme, we obtain aggregated verification keys and signatures whose lengths are completely independent of the number of signers.
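As a point of reference, the key-aggregation interface that such multi-signature schemes expose (syntax only; the algorithm names below are illustrative and the concrete one-time and single-set constructions are those described above):
\[
\widetilde{pk} \leftarrow \mathsf{KeyAgg}(pk_1, \dots, pk_n), \qquad
\sigma \leftarrow \mathsf{SigAgg}(\sigma_1, \dots, \sigma_n), \qquad
\mathsf{Verify}(\widetilde{pk}, m, \sigma) \in \{0, 1\},
\]
with $|\widetilde{pk}|$ and $|\sigma|$ independent of the number of signers $n$; for the one-time instantiation above, these are two group elements and one group element, respectively.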
Last updated:  2024-10-18
Secure and efficient transciphering for FHE-based MPC
Diego F. Aranha, Antonio Guimarães, Clément Hoffmann, and Pierrick Méaux
Transciphering (or Hybrid Homomorphic Encryption, HHE) is an established technique for avoiding ciphertext expansion in HE applications, saving communication and storage resources. Recently, it has also been shown to be a fundamental component in the practical construction of HE-based multi-party computation (MPC) protocols, being used both for input data and for intermediary results (Smart, IMACC 2023). In these protocols, however, ciphers are used with keys that are jointly generated by multiple (possibly malicious) parties, which may require additional security assumptions that have so far been overlooked in the HHE literature. In this paper, we formalize this issue as a related-key attack (RKA) security problem and provide efficient solutions for it. We start by presenting an efficient method for homomorphically evaluating Mixed-Filter-Permutator (MFP) ciphers in leveled mode, enabling speedups of up to thousands of times compared to the previous literature. For the multi-party scenario, we focus specifically on the Margrethe cipher (Hoffmann et al., INDOCRYPT 2023). We show that, contrary to other commonly used HHE ciphers (e.g. FLIP), Margrethe is out-of-the-box secure for any protocol that allows malicious parties to learn up to two related key streams, enabling security for the vast majority of static MPC protocols. For other cases, we quantify the loss of security based on the number of related key streams (which often depends on the number of malicious parties and the specific protocol). Performance-wise, our implementation of Margrethe takes just 3.9 ms to transcipher 4-bit messages, being significantly faster than the state of the art in terms of latency.
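A minimal data-flow sketch of generic transciphering is given below, assuming a toy hash-based keystream and a stand-in ToyHE class in place of a real FHE library. It only shows where ciphertext expansion is avoided; it is not the Margrethe/MFP evaluation from the paper.

```python
# Data-flow sketch of transciphering/HHE with placeholder primitives:
# the client uploads a compact symmetric ciphertext plus an HE encryption
# of the (short) symmetric key, instead of HE-encrypting all the data.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream; NOT an HHE-friendly cipher such as Margrethe or FLIP."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

class ToyHE:
    """Stand-in for an FHE ciphertext: stores the value in the clear so the
    sketch is runnable; a real deployment would use FHE ciphertexts."""
    def __init__(self, value: bytes):
        self.value = value
    def xor(self, public_bytes: bytes):
        # Homomorphic XOR with a public value.
        return ToyHE(bytes(a ^ b for a, b in zip(self.value, public_bytes)))

# Client: symmetric-encrypt the data (compact) and HE-encrypt only the key.
data, sym_key = b"client input for MPC", b"0123456789abcdef"
sym_ct = bytes(a ^ b for a, b in zip(data, keystream(sym_key, len(data))))
he_key = ToyHE(sym_key)                       # small, one-time HE upload

# Server: re-derive the keystream (homomorphically, in a real system) and
# XOR it into sym_ct, obtaining HE encryptions of the data without seeing it.
he_keystream = ToyHE(keystream(he_key.value, len(sym_ct)))
he_data = he_keystream.xor(sym_ct)
assert he_data.value == data
```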
Last updated:  2024-10-18
A New Fine Tuning Method for FHEW/TFHE Bootstrapping with IND-CPA$^\text{D}$ Security
Deokhwa Hong, Young-Sik Kim, Yongwoo Lee, and Eunyoung Seo
Fully homomorphic encryption (FHE) schemes enable computations on encrypted data, making them a crucial component of privacy-enhancing technologies. Ducas and Micciancio introduced the FHEW scheme (Eurocrypt '15), which was further enhanced by Chillotti et al. with TFHE (Asiacrypt '17). These schemes support low-latency homomorphic evaluation of binary (or larger) gates due to their small parameter size. However, the evaluation failure probability in these schemes is highly sensitive to the choice of parameters, resulting in a limited range of viable parameters and a trade-off between failure probability and runtime. Recently, Cheon et al. proposed a key-recovery attack on the FHEW/TFHE schemes based on a novel security model for FHE, known as IND-CPA$^\text{D}$ security (CCS '24). Mitigating this attack requires achieving a negligible failure probability (e.g., $2^{-64}$). However, the limited range of parameter options in FHEW/TFHE necessitates the adoption of parameter sets with unnecessarily low failure probabilities, leading to inefficient runtime. We propose a new bootstrapping method for the FHEW/TFHE schemes that optimizes the trade-off between runtime and failure probability while remaining easy to implement. The proposed method allows selecting parameter sets that achieve the desired failure probabilities at various security levels, thereby maximizing runtime efficiency.
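As a back-of-the-envelope illustration of the failure-probability/runtime trade-off (this is the standard Gaussian-tail heuristic, not the paper's analysis): if the accumulated bootstrapping noise is modelled as Gaussian with standard deviation $\sigma$ and decryption fails when it exceeds a bound $B$, the per-gate failure probability is roughly $\mathrm{erfc}\bigl(B / (\sqrt{2}\,\sigma)\bigr)$.

```python
# Heuristic failure-probability estimate for a Gaussian noise model
# (illustrative only; not the parameter analysis from the paper).
import math

def failure_probability(sigma: float, bound: float) -> float:
    return math.erfc(bound / (math.sqrt(2.0) * sigma))

# Tightening parameters (a larger bound-to-sigma ratio) pushes p_fail toward
# the ~2^-64 needed to mitigate the IND-CPA^D attack, at the cost of runtime.
for ratio in (6.0, 8.0, 9.5):
    p = failure_probability(1.0, ratio)
    print(f"bound/sigma = {ratio:4.1f}  ->  p_fail ~ 2^{math.log2(p):.1f}")
```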