Papers updated in the last 183 days (1563 results)

Last updated:  2024-06-12
Fully Homomorphic Encryption on large integers
Philippe Chartier, Michel Koskas, Mohammed Lemou, and Florian Méhats
At the core of fully homomorphic encryption lies a procedure to refresh ciphertexts whose noise component has grown too big. The efficiency of this so-called bootstrap is of paramount importance, as it is usually regarded as the main bottleneck towards real-life deployment of fully homomorphic cryptosystems. In two of the fastest implementations so far, the space of messages is limited to binary integers. If the message space is extended to the discretized torus $T_{p_i}$, or equivalently to $Z_{p_i}$, with values of $p_i$ that are large compared to the dimension of the polynomial ring in which the operations are realised, the bootstrap delivers incorrect results with far too high a probability. As a consequence, the use of a residue number system to address large integers modulo $p = p_1 \times \cdots \times p_\kappa$ would be of limited interest in practical situations without the following remedy: rather than increasing the polynomial degree and thus the computational cost, we introduce here a novel and simple technique (hereafter referred to as “collapsing”) which, by grouping the components of the mask, attenuates both rounding errors and computational costs, and greatly sharpens the correctness of the bootstrap. We then rigorously estimate the probability of success as well as the output error, determine practical parameters to reach a given correctness threshold, and present implementation results.
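To make the residue-number-system idea concrete, here is a minimal Python sketch of CRT encoding and reconstruction; the moduli are illustrative toy values, not parameters from the paper.

    # Toy sketch of a residue number system (RNS): a large integer modulo
    # p = p_1 * ... * p_k is represented by its residues modulo each p_i.
    from math import prod

    moduli = [7, 11, 13, 17]           # pairwise coprime p_i (illustrative)
    p = prod(moduli)                   # p = p_1 * ... * p_k = 17017

    def to_rns(x):
        return [x % pi for pi in moduli]

    def from_rns(residues):
        # Chinese Remainder Theorem reconstruction.
        x = 0
        for ri, pi in zip(residues, moduli):
            qi = p // pi               # product of the other moduli
            x += ri * qi * pow(qi, -1, pi)
        return x % p

    a, b = 1234, 5678
    # Addition and multiplication act component-wise on the residues.
    c = [(ai * bi) % pi for ai, bi, pi in zip(to_rns(a), to_rns(b), moduli)]
    assert from_rns(c) == (a * b) % p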
Last updated:  2024-06-12
Application-Aware Approximate Homomorphic Encryption: Configuring FHE for Practical Use
Andreea Alexandru, Ahmad Al Badawi, Daniele Micciancio, and Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful tool for performing privacy-preserving analytics over encrypted data. A promising method for FHE over real and complex numbers is approximate homomorphic encryption, instantiated with the Cheon-Kim-Kim-Song (CKKS) scheme. The CKKS scheme enables efficient evaluation for many privacy-preserving machine learning applications. While the efficiency advantages of CKKS are clear, there is currently a lot of confusion about how to securely instantiate the scheme for any given application, especially after secret-key recovery attacks were discovered by Li and Micciancio (EUROCRYPT'21) for the $IND-CPA^D$ setting, i.e., where decryption results are shared with other parties. On the one hand, the generic definition of $IND-CPA^D$ is application-agnostic and often requires impractically large parameters. On the other hand, practical CKKS implementations target specific applications and use tighter parameters. A good illustration is provided by the recent secret-key recovery attacks against a CKKS implementation in the OpenFHE library by Guo et al. (USENIX Security'24). These attacks misuse the library by employing different circuits during parameter estimation and run-time computation, yet they do not violate the generic (application-agnostic) $IND-CPA^D$ definition. To address this confusion, we introduce the notion of application-aware homomorphic encryption and devise related security definitions, which correspond more closely to how homomorphic encryption schemes are implemented and used in practice. We then formulate guidelines for implementing the application-aware homomorphic encryption model to achieve $IND-CPA^D$ security for practical applications of CKKS. We also show that our application-aware model can be used for the secure, efficient instantiation of exact homomorphic encryption schemes.
Last updated:  2024-06-12
Homomorphic sign evaluation with an RNS representation of integers
Philippe Chartier, Michel Koskas, Mohammed Lemou, and Florian Méhats
In the context of fully homomorphic encryption, we consider the representation of large integers by their decomposition over a product of rings (through the Chinese Remainder Theorem) and introduce a new algorithm for the determination of the sign solely through the knowledge of the ring components. Our implementation with 128 bits of security delivers a correct result, with an error probability below $10^{-9}$, in less than 100 milliseconds for 32-bit integers on a laptop.
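For context, the following toy Python sketch shows the naive baseline: deciding the sign via full CRT reconstruction. The paper's contribution is precisely to avoid this reconstruction and decide the sign from the ring components alone; the moduli below are illustrative.

    from math import prod

    moduli = [97, 101, 103]            # illustrative toy moduli
    p = prod(moduli)

    def crt(residues):
        x = 0
        for ri, pi in zip(residues, moduli):
            qi = p // pi
            x += ri * qi * pow(qi, -1, pi)
        return x % p

    def sign_from_rns(residues):
        x = crt(residues)              # representative in [0, p)
        return 1 if x < p / 2 else -1  # values above p/2 encode negatives

    assert sign_from_rns([(-5) % pi for pi in moduli]) == -1
    assert sign_from_rns([5 % pi for pi in moduli]) == 1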
Last updated:  2024-06-12
Anonymous, Timed and Revocable Proxy Signatures
Ghada Almashaqbeh and Anca Nitulescu
A proxy signature enables a party to delegate her signing power to another. This is useful in practice to achieve goals related to robustness, crowdsourcing, and workload sharing. Such applications, especially in the blockchain model, usually require delegation to satisfy several properties, including time bounds, anonymity, revocability, and policy enforcement. Despite the large amount of work on proxy signatures in the literature, none of the existing schemes satisfy all these properties; there is not even a unified formal notion that captures them. In this work, we close this gap and propose an anonymous, timed, and revocable proxy signature scheme. We achieve this in two steps: First, we introduce a tokenizable digital signature based on the Schnorr signature, allowing for secure distribution of signing tokens. Second, we utilize a public bulletin board, instantiated as a blockchain, and timelock encryption to support: (1) one-time usage of the signing tokens by tracking tokens used so far based on unique values associated with them, (2) timed delegation so that a proxy signer cannot sign outside a given period, and (3) delegation revocation allowing the original signer to end a delegation earlier than provisioned. All of these are done in a decentralized and anonymous way so that no one can tell that someone else signed on behalf of the original signer or even that a delegation took place. We define a formal notion for proxy signatures capturing all these properties, and prove that our construction realizes this notion. We also discuss several design considerations addressing issues related to deployment in practice.
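Since the construction builds on Schnorr signatures, the following minimal textbook Schnorr sketch in Python (toy group parameters; the paper's tokenization, timelock, and revocation layers are not shown) may help fix notation.

    # Minimal textbook Schnorr signature over a toy safe-prime group
    # p = 2q + 1; parameters are for illustration only.
    import hashlib, secrets

    p, q, g = 2039, 1019, 2            # g generates the order-q subgroup

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen():
        sk = secrets.randbelow(q - 1) + 1
        return sk, pow(g, sk, p)

    def sign(sk, msg):
        k = secrets.randbelow(q - 1) + 1
        R = pow(g, k, p)
        return R, (k + H(R, msg) * sk) % q   # (commitment, response)

    def verify(pk, msg, sig):
        R, s = sig
        return pow(g, s, p) == (R * pow(pk, H(R, msg), p)) % p

    sk, pk = keygen()
    assert verify(pk, "delegate", sign(sk, "delegate"))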
Last updated:  2024-06-12
Computational Differential Privacy for Encrypted Databases Supporting Linear Queries
Ferran Alborch Escobar, Sébastien Canard, Fabien Laguillaumie, and Duong Hieu Phan
Differential privacy is a fundamental concept for protecting individual privacy in databases while enabling data analysis. Conceptually, it is assumed that the adversary has no direct access to the database, and therefore, encryption is not necessary. However, with the emergence of cloud computing and the “on-cloud” storage of vast databases potentially contributed by multiple parties, it is becoming increasingly necessary to consider the possibility of the adversary having (at least partial) access to sensitive databases. A consequence is that, to protect the online database, it is now necessary to employ encryption. Differential privacy was first considered for encrypted databases at PoPETs'19, but only for a limited type of query, namely histograms. Subsequently, a new type of query, summation, was considered at CODASPY'22. These works achieve statistical differential privacy by still assuming that the adversary has no access to the encrypted database. In this paper, we argue that it is essential to assume that the adversary may eventually access the encrypted data, rendering statistical differential privacy inadequate. Therefore, the appropriate privacy notion for encrypted databases that we use is computational differential privacy, which was introduced by Beimel et al. at CRYPTO '08. In our work, we focus on the case of functional encryption, which is an extensively studied primitive permitting some authorized computation over encrypted data. Technically, we show that any randomized functional encryption scheme that satisfies simulation-based security and differential privacy of the output can achieve computational differential privacy for multiple queries to one database. Our work also extends the summation query to a much broader range of queries, specifically linear queries, by utilizing inner-product functional encryption. Hence, we provide an instantiation for inner-product functionalities by proving its simulation soundness and present a concrete randomized inner-product functional encryption with computational differential privacy against multiple queries. In terms of efficiency, our protocol is almost as practical as the underlying inner-product functional encryption scheme. As evidence, we provide a full benchmark, based on our concrete implementation for databases with up to 1,000,000 entries. Our work can be considered as a step towards achieving privacy-preserving encrypted databases for a wide range of query types and considering the involvement of multiple database owners.
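As a point of reference, here is a minimal Python sketch of a linear query answered under classical (statistical) differential privacy with Laplace noise; the inner-product functional encryption layer that the paper adds is not shown, and all parameters are illustrative.

    # A linear query <w, x> over a database, answered with Laplace noise
    # calibrated to the query's sensitivity.
    import random

    def dp_linear_query(db, w, epsilon):
        answer = sum(wi * xi for wi, xi in zip(w, db))
        # If one binary record changes, the answer moves by at most max|w_i|.
        sensitivity = max(abs(wi) for wi in w)
        lam = epsilon / sensitivity
        # Difference of two exponentials is Laplace-distributed.
        noise = random.expovariate(lam) - random.expovariate(lam)
        return answer + noise

    db = [random.randint(0, 1) for _ in range(1000)]
    w = [1] * len(db)                  # summation is the all-ones linear query
    print(dp_linear_query(db, w, epsilon=0.5))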
Last updated:  2024-06-12
MATHEMATICAL SPECULATIONS ON CRYPTOGRAPHY
Anjali C B
The current cryptographic frameworks like RSA, ECC, and AES are potentially under quantum threat. Quantum cryptography and post-quantum cryptography are being extensively researched for securing future information. Quantum computers and quantum algorithms are still at an early developmental stage and thus lack scalability for practical application. As a result of these challenges, the most researched PQC methods are lattice-based, code-based, isogeny-based, hash-based, and multivariate crypto schemes. In this paper, we explore other mathematical topics such as stereographic projection, Möbius transformation, change of basis, Apollonian circles, binary quadratic form equivalence, the Gauss composition law, and their conjunctions. These fulfill preliminary conditions such as bijectivity, primality, and NP-hardness, as well as the feasibility of one-way functions, along with their interconnections. This allows the exploration of new realms of mathematics for the development of secure protocols for future communication.
Last updated:  2024-06-12
On Maximum Size Simultaneous Linear Approximations in Ascon and Keccak and Related Translation and Differential Properties
Nicolas T. Courtois, Frédéric Amiel, and Alexandre Bonnard de Fonvillars
In this paper we study the S-box known as Chi or $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many others. This type of cipher is typically analyzed [in recent research] in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question, then, is when different spaces are mapped to each other by translation by a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017. We introduce new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between [simultaneous] linear and [prominent] differential properties. We exhibit a new family of maximum-size and optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that this type of property is stable under arbitrary input-side translations, and which holds for all quadratic permutations. All this is placed in the context of previous work on the classification of 5-bit quadratic permutations.
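For readers who want to experiment, the 5-bit $\chi$ map is easy to reproduce; the following Python sketch implements the standard definition $y_i = x_i \oplus (\bar{x}_{i+1} \wedge x_{i+2})$, indices mod 5.

    # The 5-bit chi map used in Keccak and Ascon. A quick script like this
    # is enough to reproduce sets of points and their translations.
    def chi(x, n=5):
        bits = [(x >> i) & 1 for i in range(n)]
        out = [bits[i] ^ ((1 ^ bits[(i + 1) % n]) & bits[(i + 2) % n])
               for i in range(n)]
        return sum(b << i for i, b in enumerate(out))

    # chi is a permutation on 5-bit values (n odd):
    assert sorted(chi(x) for x in range(32)) == list(range(32))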
Last updated:  2024-06-12
Not Just Regular Decoding: Asymptotics and Improvements of Regular Syndrome Decoding Attacks
Andre Esser and Paolo Santini
Cryptographic constructions often base security on structured problem variants to enhance efficiency or to enable advanced functionalities. This led to the introduction of the Regular Syndrome Decoding (RSD) problem, which guarantees that a solution to the Syndrome Decoding (SD) problem follows a particular block-wise structure. Despite recent attacks exploiting that structure by Briaud and Øygarden (Eurocrypt ’23) and Carozza, Couteau and Joux (CCJ, Eurocrypt ’23), many questions about the impact of the regular structure on the problem hardness remain open. In this work we initiate a systematic study of the hardness of the RSD problem, starting from its asymptotics. We classify different parameter regimes, revealing large regimes for which RSD instances are solvable in polynomial time and, on the other hand, regimes that lead to particularly hard instances. Contrary to previous perceptions, we show that a classification solely based on the uniqueness of the solution is not sufficient for isolating the worst-case parameters. Further, we provide an in-depth comparison between SD and RSD in terms of reducibility and computational complexity, identifying regimes in which RSD instances are actually harder to solve. We provide the first asymptotic analyses of the algorithms presented by CCJ, establishing their worst-case decoding complexities as $2^{0.141n}$ and $2^{0.135n}$, respectively. We then introduce regular-ISD algorithms by showing how to tailor the whole machinery of advanced Information Set Decoding (ISD) techniques from attacking SD to the RSD setting. The fastest regular-ISD algorithm improves the worst-case decoding complexity significantly to $2^{0.112n}$. Finally, we show that, also with respect to suggested parameters, regular-ISD outperforms previous approaches in most cases, reducing security levels by up to 30 bits.
Last updated:  2024-06-12
The Committing Security of MACs with Applications to Generic Composition
Ritam Bhaumik, Bishwajit Chakraborty, Wonseok Choi, Avijit Dutta, Jérôme Govinden, and Yaobin Shen
Message Authentication Codes (MACs) are ubiquitous primitives deployed in multiple flavors through standards such as HMAC, CMAC, GMAC, LightMAC, and many others. Their versatility makes them essential building blocks in applications necessitating message authentication and integrity checks, in authentication protocols, in authenticated encryption schemes, and as pseudorandom or key derivation functions. Their usage in this variety of settings makes them susceptible to a broad range of attack scenarios. The latest attack trends leverage a lack of commitment or context-discovery security in AEAD schemes, and these attacks are mainly due to weaknesses in the underlying MAC part. However, these new attack models have been scarcely analyzed for MACs themselves. This paper provides a thorough treatment of the committing and context-discovery security of MACs. We reveal that the commitment and context-discovery security of MACs are of independent interest by highlighting real-world vulnerable scenarios. We formalize the required security notions for MACs, and analyze the security of standardized MACs for these notions. Additionally, as a constructive application, we analyze generic AEAD composition and provide simple and efficient ways to build committing and context-discovery secure AEADs.
Last updated:  2024-06-12
Scalable Collaborative zk-SNARK: Fully Distributed Proof Generation and Malicious Security
Xuanming Liu, Zhelei Zhou, Yinghao Wang, Bingsheng Zhang, and Xiaohu Yang
The notion of collaborative zk-SNARK was introduced by Ozdemir and Boneh (USENIX 2022); it allows multiple parties to jointly create a zk-SNARK proof over distributed secrets (also known as the witness). This approach ensures the privacy of the witness, as no corrupted servers involved in the proof generation can learn anything about the honest servers' witness. Later, Garg et al. continued the study, focusing on how to achieve faster proof generation (USENIX 2023). However, their approach requires a powerful server that is responsible for the most resource-intensive computations and communications during the proof generation. This requirement results in a scalability bottleneck, making their protocols unable to handle large-scale circuits. In this work, we address this issue by lifting a zk-SNARK called Libra (Crypto 2019) to a collaborative zk-SNARK and achieving fully distributed proof generation, where all servers take on roughly the same portion of the total workload. Further, our protocol can be adapted to be secure against a malicious adversary by incorporating some verification mechanisms. With 128 consumer machines and a 4Gbps network, we successfully generate a proof for a data-parallel circuit containing $2^{23}$ gates in merely 2.5 seconds, using only 0.5 GB of memory per server. This represents a $19\times$ speed-up compared to a local Libra prover. Our benchmark further indicates an impressive 877$\times$ improvement in running time and a 992$\times$ enhancement in communication compared to the implementation in previous work. Furthermore, our protocol is capable of handling larger circuits, making it scalable in practice.
Last updated:  2024-06-12
SmartZKCP: Towards Practical Data Exchange Marketplace Against Active Attacks
Xuanming Liu, Jiawen Zhang, Yinghao Wang, Xinpeng Yang, and Xiaohu Yang
The trading of data is becoming increasingly important as it holds substantial value. A blockchain-based data marketplace can provide a secure and transparent platform for data exchange. To facilitate this, developing fair data exchange protocols for digital goods has garnered considerable attention in recent decades. The Zero Knowledge Contingent Payment (ZKCP) protocol enables trustless fair exchange with the aid of blockchain and zero-knowledge proofs. However, applying this protocol in a practical data marketplace is not trivial. In this paper, we identify several potential attacks that arise when applying the ZKCP protocol in a practical public data marketplace. To address these issues, we propose SmartZKCP, an enhanced solution that offers improved security measures and increased performance. The protocol is formalized to ensure fairness and security against potential attacks. Moreover, SmartZKCP offers efficiency optimizations and minimized communication costs. Evaluation results show that SmartZKCP is both practical and efficient, making it applicable to a data exchange marketplace.
Last updated:  2024-06-12
Scalable Collaborative zk-SNARK and Its Application to Efficient Proof Outsourcing
Xuanming Liu, Zhelei Zhou, Yinghao Wang, Jinye He, Bingsheng Zhang, Xiaohu Yang, and Jiaheng Zhang
Collaborative zk-SNARK (USENIX'22) allows multiple parties to jointly create a zk-SNARK proof over distributed secrets (also known as the witness). It provides a promising approach to proof outsourcing, where a client wishes to delegate the tedious task of proof generation to many servers from different locations, while ensuring no corrupted server can learn its witness (USENIX'23). Unfortunately, existing work suffers from a significant efficiency problem, as the protocols rely heavily on a particularly powerful server, and thus face challenges in achieving scalability for complex applications. In this work, we address this problem by extending the existing zk-SNARKs Libra (Crypto'19) and HyperPlonk (Eurocrypt'23) into scalable collaborative zk-SNARKs. Crucially, our collaborative proof generation does not require a powerful server, and all servers take up roughly the same proportion of the total workload. In this way, we achieve privacy and scalability simultaneously for the first time in proof outsourcing. To achieve this, we develop an efficient MPC toolbox for a number of useful multivariate polynomial primitives, including sumcheck, productcheck, and multilinear polynomial commitment, which can also be applied to other applications and are of independent interest. For proof outsourcing purposes, when using $128$ servers to jointly generate a proof for a circuit of $2^{24}$ gates, our benchmarks for these two collaborative proofs show a speedup of $21\times$ and $24\times$ compared to a local prover, respectively. Furthermore, we are able to handle enormously large circuits, making our approach practical for real-world applications.
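To illustrate the flavor of the multivariate primitives involved, here is a minimal single-prover sumcheck sketch in Python for a multilinear polynomial; the collaborative, secret-shared version developed in the paper is not shown, and the field and polynomial are illustrative.

    # Honest-prover sumcheck over F_P for a multilinear polynomial g:
    # the prover convinces the verifier of sum of g over {0,1}^n.
    import itertools, random

    P = 2**61 - 1                       # a Mersenne prime field (illustrative)

    def g(x):                           # example multilinear polynomial, 3 vars
        a, b, c = x
        return (3*a*b + 5*b*c + 7*a + 11) % P

    n = 3
    claim = sum(g(x) for x in itertools.product((0, 1), repeat=n)) % P

    prefix = []
    for i in range(n):                  # one round per variable
        rest = n - i - 1
        # Prover: evaluations of the (linear) round polynomial at 0 and 1.
        ev = [sum(g(prefix + [t] + list(tail)) for tail in
                  itertools.product((0, 1), repeat=rest)) % P for t in (0, 1)]
        assert (ev[0] + ev[1]) % P == claim          # verifier's round check
        r = random.randrange(P)                      # verifier's challenge
        claim = (ev[0] + r * (ev[1] - ev[0])) % P    # interpolate at r
        prefix.append(r)

    assert g(prefix) % P == claim       # final oracle check at a random point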
Last updated:  2024-06-11
Two RSA-based Cryptosystems
A. Telveenus
RSA is one of the most popular cryptosystems in the study of cryptography. In this article, we explore how the idea of a primitive $m$-th root of unity in a ring can be integrated into the Discrete Fourier Transform, leading to the development of new cryptosystems known as RSA-DFT and RSA-HGR.
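As a baseline for the variants above, here is textbook RSA in a few lines of Python (toy primes, far too small for real use; the DFT-based extensions are not shown).

    # Textbook RSA keygen, encryption, and decryption with toy primes.
    pp, qq = 1009, 1013                 # toy primes, illustrative only
    n = pp * qq
    phi = (pp - 1) * (qq - 1)
    e = 65537                           # public exponent, coprime to phi
    d = pow(e, -1, phi)                 # private exponent

    m = 4242
    c = pow(m, e, n)                    # encryption: c = m^e mod n
    assert pow(c, d, n) == m            # decryption: m = c^d mod n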
Last updated:  2024-06-11
Certifying Private Probabilistic Mechanisms
Zoë Ruha Bell, Shafi Goldwasser, Michael P. Kim, and Jean-Luc Watson
In past years, entire research communities have arisen to address concerns of privacy and fairness in data analysis. At present, however, the public must trust that institutions will re-implement algorithms voluntarily to account for these social concerns. Due to the additional cost, widespread adoption is unlikely without effective legal enforcement. A technical challenge for enforcement is that the methods proposed are often probabilistic mechanisms, whose output must be drawn according to precise, and sometimes secret, distributions. The Differential Privacy (DP) case is illustrative: if a cheating curator answers queries according to an overly-accurate mechanism, privacy violations could go undetected. The need for effective enforcement raises the central question of our paper: Can we efficiently certify the output of a probabilistic mechanism enacted by an untrusted party? To this end: (1) We introduce two new notions: Certified Probabilistic Mechanisms (CPM) and Random Variable Commitment Schemes (RVCS). A CPM is an interactive protocol that forces a prover to enact a given probabilistic mechanism or be caught; importantly, the interaction does not reveal secret parameters of the mechanism. An RVCS—a key primitive for constructing CPMs—is a commitment scheme where the verifier is convinced that the commitment is to an RV sampled according to an agreed-upon distribution, but learns nothing else. (2) We instantiate the general notion of CPM for the special case of Certified DP. We build a lightweight, doubly-efficient interactive proof system to certify arbitrary-predicate counting queries released via the DP Binomial mechanism. The construction relies on a commitment scheme with perfect hiding and additive homomorphic properties that can be used to release a broad class of queries about a committed database, which we construct on top of Pedersen commitments. (3) Finally, we demonstrate the immediate feasibility of Certified DP via a highly-efficient and scalable prototype implementation to answer counting queries of arbitrary predicates. The mechanism is composed of an offline and an online stage, where the online stage allows for non-interactive certification of queries. For example, we show that Certified DP queries over a US Census Public Use Microdata Sample (PUMS) ($n=7000$) can be completed in only 1.6 ms and verified in just 38 $\mu \text{s}$. Our implementation is available in open source at https://github.com/jlwatson/certified-dp.
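The DP Binomial mechanism certified by the paper is simple to state; the following Python sketch (illustrative parameters) shows why it is commitment-friendly: the noise is just a centered sum of fair coin flips.

    # The DP Binomial mechanism for a counting query: add centered
    # binomial noise, i.e., a sum of fair coins minus its mean.
    import random

    def binomial_mechanism(count, n_coins=1000):
        noise = sum(random.randint(0, 1) for _ in range(n_coins))
        return count + noise - n_coins // 2

    records = [random.randint(0, 1) for _ in range(7000)]
    print(binomial_mechanism(sum(records)))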
Last updated:  2024-06-11
Distributed Point Function with Constraints, Revisited
Keyu Ji, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
A Distributed Point Function (DPF) provides a way for a dealer to split a point function $f_{\alpha, \beta}$ into multiple succinctly described function-shares, where the function $f_{\alpha, \beta}$ returns a special output value $\beta$ on a special input $\alpha$, and returns the fixed value $0$ otherwise. As the security requirement, any strict subset of the function-shares reveals nothing about the function $f_{\alpha,\beta}$. However, each function-share can be individually evaluated on a common input $x$, and these evaluation results can then be merged together to reconstruct the value $f_{\alpha, \beta}(x)$. Recently, Servan-Schreiber et al. (S&P 2023) investigated the access control problem for DPFs; namely, how the DPF evaluators can ensure, with privacy assurance, that the DPF dealer is authorized to share the given function. In this work, we revisit this problem, introducing a new notion called DPF with constraints; meanwhile, we identify a subtle flaw in their privacy definition as well as a soundness issue in one of their proposed schemes due to the lack of validation of the special output value $\beta$. Next, we show how to reduce both the storage size of the constraint representation and the server's computational overhead from $O(N)$ to $O(\log N)$, where $N$ is the number of authorized function sets. In addition, we show how to achieve fine-grained private access control, that is, wildcard-style constraints for the choice of the special output $\beta$. Our benchmarks show that the amortized running time of our logarithmic-storage scheme is $2\times$ - $3\times$ faster than the state of the art when $N=2^{15}$. Furthermore, we provide the first impossibility and feasibility results for DPFs with constraints where the evaluators do not need to communicate with each other.
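For concreteness, the following Python sketch states the object a DPF shares, the point function $f_{\alpha,\beta}$, and the correctness condition for two additive function-shares. The shares here are naive full truth tables, purely to convey the semantics; the whole point of a real DPF is that each share is succinct.

    import random

    N, alpha, beta, P = 16, 5, 7, 2**31 - 1

    f = [beta if x == alpha else 0 for x in range(N)]     # f_{alpha,beta}
    share0 = [random.randrange(P) for _ in range(N)]      # one uniform share
    share1 = [(fx - s0) % P for fx, s0 in zip(f, share0)] # the other share

    # Each share alone looks uniform; together they reconstruct f.
    for x in range(N):
        assert (share0[x] + share1[x]) % P == f[x]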
Last updated:  2024-06-11
Willow: Secure Aggregation with One-Shot Clients
James Bell-Clark, Adrià Gascón, Baiyu Li, Mariana Raykova, and Phillipp Schoppmann
A common drawback of secure vector summation protocols in the single-server model is that they impose at least one synchronization point between all clients contributing to the aggregation. This results in clients waiting on each other to advance through the rounds of the protocol, leading to large latency even if the protocol is computationally efficient. In this paper we propose protocols in the single-server model where clients contributing data to the aggregation send a single message to the server in an asynchronous fashion, i.e., without the need to synchronize their reporting time with any other clients. Our approach is based on a committee of parties, called decryptors, that aid in the computation. Decryptors run a setup phase before data collection starts, and a decryption phase once it ends. Unlike existing committee-based protocols such as Flamingo (S&P 2023), the cost for committee members can be made sub-linear in the number of clients and does not depend on the size of the input data. Our experimental evaluation shows that our protocol, even while enabling asynchronous client contributions, is competitive with state-of-the-art protocols that lack that feature, in both computation and communication.
Last updated:  2024-06-11
AE Robustness as Indistinguishable Decryption Leakage under Multiple Failure Conditions
Ganyuan Cao
Robustness has emerged as a critical criterion for authenticated encryption, alongside confidentiality and integrity. In this study, we revisit the robustness of AEAD by focusing on descriptive errors. We introduce a new notion, IND-CCLA, to formalize this robustness. IND-CCLA expands on classical security notions defined for AEAD by incorporating the indistinguishability of leakage resulting from decryption failures, including text-based values and descriptive error messages, especially in scenarios with multiple failure conditions. Through this notion, we investigate the discrepancy between a single-error decryption function and the actual leakage occurring during decryption. We introduce the concept of error unicity, which ensures that only one error is revealed—whether explicitly through decryption or implicitly through leakage—even when multiple failure conditions exist. Following this notion, we aim to mitigate the security risks associated with disclosing multiple errors through leakage and to maintain security even if one of the failure checks is flawed or compromised by an adversary. This notion captures the security of the scheme's actual implementation, taking into account scenarios where an adversary might exploit side channels or implementation-level attacks to bypass the failure checks. We further extend this notion to IND-sf-CCLA to address stateful security involving out-of-order ciphertexts. We provide a concrete proof of the robustness of the Encode-then-Encipher ($\textsf{EtE}$) paradigm using our notions, demonstrating its ability to handle multiple failure conditions. Additionally, we briefly present a transformation from our notion to a simulatable one, supporting future research on composable security regarding decryption leakage.
Last updated:  2024-06-11
MFKDF: Multiple Factors Knocked Down Flat
Matteo Scarlata, Matilda Backendal, and Miro Haller
Nair and Song (USENIX 2023) introduce the concept of a Multi-Factor Key Derivation Function (MFKDF), along with constructions and a security analysis. MFKDF integrates dynamic authentication factors, such as HOTP and hardware tokens, into password-based key derivation. The aim is to improve the security of password-derived keys, which can then be used for encryption or as an alternative to multi-factor authentication. The authors claim an exponential security improvement compared to traditional password-based key derivation functions (PBKDFs). We show that the MFKDF constructions proposed by Nair and Song fall short of the stated security goals. Underspecified cryptographic primitives and the lack of integrity of the MFKDF state lead to several attacks, ranging from full key recovery when an HOTP factor is compromised, to bypassing factors entirely or severely reducing their entropy. We reflect on the different threat models of key derivation and authentication, and conclude that MFKDF is always weaker than plain PBKDF and multi-factor authentication in each setting.
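As background, HOTP (RFC 4226) is the kind of dynamic factor MFKDF folds into key derivation; the following Python sketch implements it and reproduces the RFC test vector.

    # HOTP (RFC 4226): HMAC-SHA1 of a counter, then dynamic truncation.
    import hashlib, hmac, struct

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    # RFC 4226 test vector: counter 0 yields 755224.
    assert hotp(b"12345678901234567890", 0) == "755224"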
Last updated:  2024-06-11
SoK: Polynomial Multiplications for Lattice-Based Cryptosystems
Vincent Hwang
We survey various mathematical tools used in software implementations of polynomial multiplication in $\mathbb{Z}_q[x] / \langle x^n - \alpha x - \beta \rangle$. In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2. There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley–Tukey FFT, Good–Thomas FFT, Bruun's FFT, Rader's FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including localization, Schönhage's FFT, Nussbaumer's FFT, and coefficient ring switching; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, truncation, incomplete transformation, striding, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and vector arithmetic. We then go through several case studies: We compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.
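As one concrete example of the modular-arithmetic techniques surveyed, here is Montgomery reduction in Python for the Kyber prime $q = 3329$ and word size $R = 2^{16}$.

    # Montgomery reduction: REDC(T) returns T * R^{-1} mod q without any
    # division by q, only shifts and multiplications.
    q = 3329
    R = 1 << 16
    q_inv = pow(-q, -1, R)              # -q^{-1} mod R, precomputed

    def redc(T):                        # requires 0 <= T < q*R
        m = (T * q_inv) % R
        t = (T + m * q) >> 16           # exact division by R
        return t - q if t >= q else t

    a, b = 1234, 2345
    a_mont = (a * R) % q                # enter the Montgomery domain
    assert redc(a_mont * b) == (a * b) % q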
Last updated:  2024-06-11
On Sigma-Protocols and (packed) Black-Box Secret Sharing Schemes
Claudia Bartoli and Ignacio Cascudo
$\Sigma$-protocols are a widely utilized, relatively simple, and well-understood type of zero-knowledge proof. However, the well-known Schnorr $\Sigma$-protocol for proving knowledge of a discrete logarithm in a cyclic group of known prime order, and similar protocols working over this type of group, are hard to generalize to other groups, in particular hidden-order groups, due to the inability of the knowledge extractor to invert elements modulo the order. In this paper, we introduce a universal construction of $\Sigma$-protocols designed to prove knowledge of preimages of group homomorphisms for any finite abelian group. In order to do this, we first establish a general construction of a $\Sigma$-protocol for $\mathfrak{R}$-module homomorphisms given only a linear secret sharing scheme over the ring $\mathfrak{R}$, where zero knowledge and special soundness can be related to the privacy and reconstruction properties of the secret sharing scheme. Then, we introduce a new construction of a 2-out-of-$n$ packed black-box secret sharing scheme capable of sharing $k$ elements of an arbitrary (abelian, finite) group where each share consists of $k+\log n-3$ group elements. From these two building blocks we obtain a generic “batch” $\Sigma$-protocol for proving knowledge of $k$ preimages of elements via the same group homomorphism, which communicates $k+\lambda-3$ elements of the group to achieve $2^{-\lambda}$ knowledge error. For the case of class groups, we show that our $\Sigma$-protocol improves in several aspects on existing proofs of knowledge of discrete logarithms and other related statements that have been used in a number of works. Finally, we extend our constructions from group homomorphisms to the case of ZK-ready functions, introduced by Cramer and Damgård (CRYPTO '09), which in particular include proofs of knowledge of plaintext (and randomness) for some linearly homomorphic encryption schemes such as Joye-Libert encryption. In the case of Joye-Libert, however, we show an even better alternative, using Shamir secret sharing over Galois rings, which achieves $2^{-k}$ knowledge soundness by communicating $k$ ciphertexts to prove $k$ statements.
Last updated:  2024-06-11
Compact Encryption based on Module-NTRU problems
Shi Bai, Hansraj Jangir, Hao Lin, Tran Ngo, Weiqiang Wen, and Jinwei Zheng
The Module-NTRU problem, introduced by Cheon, Kim, Kim, Son (IACR ePrint 2019/1468), and Chuengsatiansup, Prest, Stehlé, Wallet, Xagawa (ASIACCS ’20), generalizes the versatile NTRU assumption. One of its main advantages lies in its ability to offer greater flexibility on parameters, such as the underlying ring dimension. In this work, we present several lattice-based encryption schemes, which are IND-CPA (or OW-CPA) secure in the standard model based on the Module-NTRU and Module-LWE problems. Leveraging the Fujisaki-Okamoto transformations, one can obtain IND-CCA secure key encapsulation schemes. Our first encryption scheme is based on the Module-NTRU assumption and uses the determinant of the secret matrix over the underlying ring for decryption. Our second scheme is analogous to the Module-LWE encryption scheme, but uses only a matrix as the public key, based on a vectorial variant of the Module-NTRU problem. In the end, we conduct a comprehensive analysis of known attacks and propose concrete parameters for the instantiations. In particular, our ciphertext size is about 614 (resp. 1228) bytes for NIST Level 1 (resp. Level 5) security and small decryption failure, placing it on par with the most recent schemes such as the one proposed by Zhang, Feng and Yan (ASIACRYPT ’23). We also present several competitive parameters for NIST Level 3, which has a ciphertext size of 921 bytes. Moreover, our schemes do not require specific codes for plaintext encoding and decoding.
Last updated:  2024-06-11
An Explicit High-Moment Forking Lemma and its Applications to the Concrete Security of Multi-Signatures
Gil Segev and Liat Shapira
In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability. Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.
Last updated:  2024-06-11
Polynomial Time Cryptanalytic Extraction of Neural Network Models
Isaac A. Canales-Martínez, Jorge Chavez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Nitin Satpute, and Adi Shamir
Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto’20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen-plaintext attack on a cryptosystem that has a secret key embedded in its black-box implementation, and it requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons). In this paper, we improve this attack by developing several new techniques that enable us to extract, with arbitrarily high precision, all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time. We demonstrate its practical efficiency by applying it to a full-sized neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million neuronal parameters. An attack following the approach by Carlini et al. would require an exhaustive search over $2^{256}$ possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer.
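The starting observation behind such extraction attacks is easy to demonstrate: between critical points, a ReLU network is an affine map, so black-box finite differences recover its local slope exactly. The following Python sketch uses a stand-in toy network, not the paper's target.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

    def oracle(x):                      # the "black-box" ReLU network
        return (W2 @ np.maximum(W1 @ x + b1, 0) + b2).item()

    x0, eps = rng.normal(size=4), 1e-6
    grad = np.array([(oracle(x0 + eps * e) - oracle(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(4)])
    # Within x0's linear region this matches the true affine slope exactly.
    mask = (W1 @ x0 + b1) > 0
    assert np.allclose(grad, (W2 * mask) @ W1, atol=1e-4)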
Last updated:  2024-06-11
Analyzing Pump and jump BKZ algorithm using dynamical systems
Leizhang Wang
The analysis of the reduction effort of lattice reduction algorithms is important in estimating the hardness of lattice-based cryptographic schemes. Recently, many lattice challenge records, such as the TU Darmstadt LWE and SVP Challenges, have been cracked using the Pnj-BKZ algorithm, the default lattice reduction algorithm in G6K. However, previous estimations of the Pnj-BKZ algorithm are simulator algorithms rather than theoretical upper-bound analyses. In this work, we present the first dynamical-systems analysis of the Pnj-BKZ algorithm. More precisely, let $L$ be the lattice spanned by $(\mathbf{a}_i)_{i\leq d}$. We show that the shortest vector $\mathbf{b}_1$ output by running $\Omega\left(\frac{2Jd^2}{\beta(\beta-J)}\left(\ln d + \ln\ln\max_{i}\frac{\left\|\mathbf{a}_i^{*}\right\|}{(\det L)^{1/d}}\right)\right)$ tours of Pnj-BKZ$(\beta,J)$ satisfies $\left\|\mathbf{b}_1\right\| \le \gamma_{\beta}^{\frac{d-1}{2(\beta-J)}+2} \cdot \left(\det L\right)^{\frac{1}{d}}$.
Last updated:  2024-06-11
Machine-Checked Security for $\mathrm{XMSS}$ as in RFC 8391 and $\mathrm{SPHINCS}^{+}$
Manuel Barbosa, François Dupressoir, Benjamin Grégoire, Andreas Hülsing, Matthias Meijers, and Pierre-Yves Strub
This work presents a novel machine-checked tight security proof for $\mathrm{XMSS}$ — a stateful hash-based signature scheme that is (1) standardized in RFC 8391 and NIST SP 800-208, and (2) employed as a primary building block of $\mathrm{SPHINCS}^{+}$, one of the signature schemes recently selected for standardization as a result of NIST’s post-quantum competition. In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw affecting the tight security proofs of $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$. For the case of $\mathrm{SPHINCS}^{+}$, this flaw was fixed in a subsequent tight security proof by Hülsing and Kudinov. Unfortunately, employing the fix from this proof to construct an analogous tight security proof for $\mathrm{XMSS}$ would merely demonstrate security with respect to an insufficient notion. At the cost of modeling the message-hashing function as a random oracle, we complete the tight security proof for $\mathrm{XMSS}$ and formally verify it using the EasyCrypt proof assistant. As part of this endeavor, we formally verify the crucial step common to (the security proofs of) $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$ that was found to be flawed before, thereby confirming that the core of the aforementioned security proof by Hülsing and Kudinov is correct. As this is the first work to formally verify proofs for hash-based signature schemes in EasyCrypt, we develop several novel libraries for the fundamental cryptographic concepts underlying such schemes — e.g., hash functions and digital signature schemes — establishing a common starting point for future formal verification efforts. These libraries will be particularly helpful in formally verifying proofs of other hash-based signature schemes such as $\mathrm{LMS}$ or $\mathrm{SPHINCS}^{+}$.
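As background, the hash-chain primitive at the core of the Winternitz-style one-time signatures used in $\mathrm{XMSS}$ and $\mathrm{SPHINCS}^{+}$ can be sketched in a few lines of Python; none of the tree or addressing machinery is shown, and the chain length is illustrative.

    # Winternitz-style hash chain: signing reveals an intermediate chain
    # value; verification iterates the hash the remaining steps.
    import hashlib

    W = 16                                        # chain length (illustrative)

    def chain(x: bytes, steps: int) -> bytes:
        for _ in range(steps):
            x = hashlib.sha256(x).digest()
        return x

    sk = b"one-time secret"
    pk = chain(sk, W)                             # public key: chain endpoint
    msg_digit = 9                                 # a message digit in [0, W)
    sig = chain(sk, msg_digit)                    # signature: midpoint value
    assert chain(sig, W - msg_digit) == pk        # verification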
Last updated:  2024-06-11
Cheetah: Lean and Fast Secure Two-Party Deep Neural Network Inference
Zhicong Huang, Wen-jie Lu, Cheng Hong, and Jiansheng Ding
Secure two-party neural network inference (2PC-NN) can offer privacy protection for both the client and the server and is a promising technique in the machine-learning-as-a-service setting. However, the large overhead of current 2PC-NN inference systems remains a major obstacle, especially when they are applied to deep neural networks such as ResNet50. In this work, we present Cheetah, a new 2PC-NN inference system that is faster and more communication-efficient than the state of the art. The main contributions of Cheetah are two-fold: the first part includes carefully designed homomorphic encryption-based protocols that can evaluate the linear layers (namely convolution, batch normalization, and fully-connected layers) without any expensive rotation operation. The second part includes several lean and communication-efficient primitives for the non-linear functions (e.g., ReLU and truncation). Using Cheetah, we present intensive benchmarks over several large-scale deep neural networks. Taking ResNet50 as an example, an end-to-end execution of Cheetah under a WAN setting costs less than 2.5 minutes and 2.3 gigabytes of communication, outperforming CrypTFlow2 (ACM CCS 2020) by about 5.6× and 12.9×, respectively.
Last updated:  2024-06-11
Pando: Extremely Scalable BFT Based on Committee Sampling
Xin Wang, Haochen Wang, Haibin Zhang, and Sisi Duan
Byzantine fault-tolerant (BFT) protocols are known to suffer from scalability issues. Indeed, their performance degrades drastically as the number of replicas $n$ grows. While a long line of work has attempted to achieve the scalability goal, these works can only scale to roughly a hundred replicas. In this paper, we develop BFT protocols from the so-called committee sampling approach that selects a small committee for consensus and conveys the results to all replicas. Such an approach, however, has so far focused on the Byzantine agreement (BA) problem (considering replicas only) instead of the BFT problem (in the client-replica model); moreover, the approach has mainly been of theoretical interest, as it concretely works only for impractically large $n$. We build an extremely efficient, scalable, and adaptively secure BFT protocol called Pando in partially synchronous environments based on the committee sampling approach. In particular, we devise novel BFT building blocks targeting scalability, including communication-efficient and computation-efficient consistent broadcast and atomic broadcast protocols. Pando inherits some inherent issues of committee sampling-based protocols: Pando can only achieve near-optimal resilience (i.e., $f<(1/3-\epsilon)n$, where $f$ is the number of faulty replicas and $\epsilon$ is a small constant), and Pando attains safety and liveness only probabilistically. Interestingly, to make $\epsilon$ come close to $0$ (near-optimal resilience), $n$ needs to be sufficiently large but not impractically large, e.g., $n>500$---just what we need for scalable BFT. Our evaluation on Amazon EC2 shows that, in contrast to existing protocols, Pando can easily scale to a thousand replicas in the WAN environment, achieving a throughput of $62.57$ ktx/sec.
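The committee sampling idea itself is compact; the following Python sketch (illustrative, not Pando's concrete sampler) shows how each replica can decide membership locally from a public seed, so an expected-size-$c$ committee emerges without coordination.

    import hashlib

    def in_committee(seed: bytes, replica_id: int, n: int, c: int) -> bool:
        # Hash (seed, id) to a 256-bit value; join with probability c/n.
        h = hashlib.sha256(seed + replica_id.to_bytes(4, "big")).digest()
        return int.from_bytes(h, "big") < (1 << 256) * c // n

    n, c = 1000, 60
    seed = b"epoch-42"
    committee = [i for i in range(n) if in_committee(seed, i, n, c)]
    print(len(committee))               # concentrated around c = 60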
Last updated:  2024-06-11
Crystalor: Recoverable Memory Encryption Mechanism with Optimized Metadata Structure
Rei Ueno, Hiromichi Haneda, Naofumi Homma, Akiko Inoue, and Kazuhiko Minematsu
This study presents an efficient recoverable memory encryption mechanism, named Crystalor. Existing memory encryption mechanisms, such as the Intel SGX integrity tree, offer neither crash consistency nor recoverability, which creates attack surfaces and non-trivially limits practical availability. Although the crash consistency of encrypted memory has been studied in the research field of microarchitecture, existing mechanisms lack formal security analysis and cannot incorporate metadata optimization mechanisms, which are essential to achieve practical performance. Crystalor efficiently realizes provably-secure recoverable memory encryption with metadata optimization. To establish Crystalor with provable security and practical performance, we develop a dedicated universal hash function, PXOR-Hash, and a microarchitecture equipped with PXOR-Hash. Crystalor incurs almost no latency overhead under nominal operations for recoverability, while having a simple construction that is compatible with existing microarchitectures. We evaluate its practical performance through both algorithmic analyses and system-level simulation in comparison with state-of-the-art mechanisms, such as SCUE. Crystalor requires 29–62% fewer clock cycles per memory read/write operation than SCUE for protecting a 4 TB memory. In addition, Crystalor and SCUE require 312 GB and 554 GB of memory overhead for metadata, respectively, which means that Crystalor achieves a memory overhead reduction of 44%. The results of the system-level simulation using the gem5 simulator indicate that Crystalor achieves a reduction of up to 11.5% in workload execution time compared to SCUE. Moreover, Crystalor achieves higher availability and memory recovery several thousand times faster than SCUE, as Crystalor offers lazy recovery.
Last updated:  2024-06-11
A Pure Indistinguishability Obfuscation Approach to Adaptively-Sound SNARGs for NP
Brent Waters and David J. Wu
We construct an adaptively-sound succinct non-interactive argument (SNARG) for NP in the CRS model from sub-exponentially-secure indistinguishability obfuscation ($i\mathcal{O}$) and sub-exponentially-secure one-way functions. Previously, Waters and Wu (STOC 2024), and subsequently, Waters and Zhandry (CRYPTO 2024) showed how to construct adaptively-sound SNARGs for NP by relying on sub-exponentially-secure indistinguishability obfuscation, one-way functions, and an additional algebraic assumption (i.e., discrete log, factoring, or learning with errors). In this work, we show that no additional algebraic assumption is needed and vanilla (sub-exponentially-secure) one-way functions already suffice in combination with $i\mathcal{O}$. We first give a direct construction of an adaptively-sound SNARG for NP assuming (sub-exponentially-secure) $i\mathcal{O}$ and an injective one-way function. Then, we show that it suffices to have an injective one-way function that has an inefficient sampler (i.e., sampling a challenge for the one-way function requires super-polynomial time). Because we rely on the existence of injective one-way functions only in the security proof and not in the actual construction, having an inefficient sampling procedure does not impact correctness. We then show that injective one-way functions with an inefficient sampler can be built generically from any vanilla one-way function. Our approach may be independently useful in other settings to replace injective one-way functions with standard one-way functions in applications of $i\mathcal{O}$.
Last updated:  2024-06-11
Scalable Private Set Union, with Stronger Security
Yanxue Jia, Shi-Feng Sun, Hong-Sheng Zhou, and Dawu Gu
A Private Set Union (PSU) protocol allows parties, each holding an input set, to jointly compute the union of the sets without revealing anything else. In the literature, scalable PSU protocols follow the “split-execute-assemble” paradigm (Kolesnikov et al., ASIACRYPT 2019); in addition, those fast protocols often use Oblivious Transfer as a building block. Kolesnikov et al. (ASIACRYPT 2019) and Jia et al. (USENIX Security 2022) pointed out that certain security issues can be introduced in the “split-execute-assemble” paradigm. In this work, surprisingly, we observe that the typical way of invoking Oblivious Transfer also causes unnecessary leakage, and that only PSU protocols based on additively homomorphic encryption (AHE) avoid this leakage. However, AHE-based PSU protocols are far from practical. To bridge the gap, we design a new PSU protocol that avoids the unnecessary leakage. Unlike the AHE-based PSU protocols, our new construction relies only on symmetric-key operations apart from the base OTs, thereby being much more scalable. The experimental results demonstrate that our protocol obtains at least a 873.74× speedup over the best-performing AHE-based scheme. Moreover, our performance is comparable to that of the state-of-the-art PSU protocol (Chen et al., USENIX Security 2023), which also suffers from the unnecessary leakage.
Last updated:  2024-06-11
SoK: Zero-Knowledge Range Proofs
Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang
Zero-knowledge range proofs (ZKRPs) allow a prover to convince a verifier that a secret value lies in a given interval. ZKRPs have numerous applications: from anonymous credentials and auctions, to confidential transactions in cryptocurrencies. At the same time, a plethora of ZKRP constructions exist in the literature, each with its own trade-offs. In this work, we systematize the knowledge around ZKRPs. We create a classification of existing constructions based on the underlying building techniques, and we summarize their properties. We provide comparisons between schemes both in terms of properties as well as efficiency levels, and construct a guideline to assist in the selection of an appropriate ZKRP for different application requirements. Finally, we discuss a number of interesting open research problems.
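As background, many of the surveyed ZKRPs are built on Pedersen commitments, whose additive homomorphism the following Python sketch demonstrates. The group is a toy; in a real scheme, $h$ must be sampled so that $\log_g h$ is unknown, otherwise binding fails.

    import secrets

    p, q = 2039, 1019                   # toy safe-prime group, order-q subgroup
    g, h = 2, 4                         # toy generators (log_g h is known here!)

    def commit(v, r):
        return (pow(g, v, p) * pow(h, r, p)) % p

    v1, v2 = 37, 5
    r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
    # Homomorphism: Com(v1)*Com(v2) commits to v1+v2 under randomness r1+r2;
    # range proofs exploit exactly this to reason about committed values.
    assert (commit(v1, r1) * commit(v2, r2)) % p == commit(v1 + v2, r1 + r2)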
Last updated:  2024-06-11
CISELeaks: Information Leakage Assessment of Cryptographic Instruction Set Extension Prototypes
Aruna Jayasena, Richard Bachmann, and Prabhat Mishra
Software-based cryptographic implementations provide flexibility but face performance limitations. In contrast, hardware-based cryptographic accelerators utilize application-specific customization to provide real-time security solutions. Cryptographic instruction-set extensions (CISE) combine the advantages of both hardware- and software-based solutions to provide higher performance combined with the flexibility of atomic-level cryptographic operations. While CISE is widely used to develop security solutions, side-channel analysis of CISE-based devices is in its infancy. Specifically, it is important to evaluate whether the power usage and electromagnetic emissions of CISE-based devices have any correlation with their internal operations, which an adversary could exploit to deduce cryptographic secrets. In this paper, we propose a test vector leakage assessment framework to evaluate pre-silicon prototypes at the early stages of the design life-cycle. Specifically, we first identify functional units with the potential for leaking information through power side-channel signatures and then evaluate them on system prototypes by generating the necessary firmware to maximize the side-channel signature. Our experimental results on two RISC-V based cryptographic extensions, RISCV-CRYPTO and XCRYPTO, demonstrate that seven out of eight prototype AES- and SHA-related functional units are vulnerable to leaking cryptographic secrets through their power side-channel signature, even in full-system mode, with a statistical significance of $\alpha = 0.05$.
Last updated:  2024-06-10
Leveled Fully-Homomorphic Signatures from Batch Arguments
Abtin Afshar, Jiaqi Cheng, and Rishab Goyal
Fully homomorphic signatures are a significant strengthening of digital signatures, enabling computations on \emph{secretly} signed data. Today, we have multiple approaches to design fully homomorphic signatures such as from lattices, or succinct functional commitments, or indistinguishability obfuscation, or mutable batch arguments. Unfortunately, all existing constructions for homomorphic signatures suffer from one or more limitations. We do not have homomorphic signatures with features such as multi-hop evaluation, context hiding, and fast amortized verification, while relying on standard falsifiable assumptions. In this work, we design homomorphic signatures satisfying all above properties. We construct homomorphic signatures for polynomial-sized circuits from a variety of standard assumptions such as sub-exponential DDH, standard pairing-based assumptions, or learning with errors. We also discuss how our constructions can be easily extended to the multi-key setting.
Last updated:  2024-06-10
MUSES: Efficient Multi-User Searchable Encrypted Database
Tung Le, Rouzbeh Behnia, Jorge Guajardo, and Thang Hoang
Searchable encrypted systems enable privacy-preserving keyword search on encrypted data. Symmetric systems achieve high efficiency (e.g., sublinear search), but they mostly support single-user search. Although systems based on public-key or hybrid models support multi-user search, they incur inherent security weaknesses (e.g., keyword-guessing vulnerabilities) and scalability limitations due to costly public-key operations (e.g., pairing). More importantly, most encrypted search designs leak statistical information (e.g., search, result, and volume patterns) and thus are vulnerable to devastating leakage-abuse attacks. Some pattern-hiding schemes were proposed. However, they incur significant user bandwidth/computation costs, and thus are not desirable for large-scale outsourced databases with resource-constrained users. In this paper, we propose MUSES, a new multi-writer encrypted search platform that addresses the functionality, security, and performance limitations in the existing encrypted search designs. Specifically, MUSES permits single-reader, multi-writer functionalities with permission revocation and hides all statistical information (including search, result, and volume patterns) while featuring minimal user overhead. In MUSES, we demonstrate a unique incorporation of various emerging distributed cryptographic protocols including Distributed Point Function, Distributed PRF, and Oblivious Linear Group Action. We also introduce novel distributed protocols for oblivious counting and shuffling on arithmetic shares for the general multi-party setting with a dishonest majority, which can be found useful in other applications. Our experimental results showed that the keyword search by MUSES is two orders of magnitude faster with up to 97× lower user bandwidth cost than the state-of-the-art.
Last updated:  2024-06-10
Information-Theoretic Single-Server PIR in the Shuffle Model
Yuval Ishai, Mahimna Kelkar, Daniel Lee, and Yiping Ma
We revisit the problem of private information retrieval (PIR) in the shuffle model, where queries can be made anonymously by multiple clients. We present the first single-server PIR protocol in this model that has sublinear per-client communication and information-theoretic security. Moreover, following one-time preprocessing on the server side, our protocol only requires sublinear per-client computation. Concretely, for every $\gamma>0$, the protocol has $O(n^{\gamma})$ communication and computation costs per (stateless) client, with $1/\text{poly}(n)$ statistical security, assuming that a size-$n$ database is simultaneously accessed by $\text{poly}(n)$ clients. This should be contrasted with the recent breakthrough result of Lin, Mook, and Wichs (STOC 2023) on doubly efficient PIR in the standard model, which is (inherently) limited to computational security.
Last updated:  2024-06-10
Generalized Indifferentiable Sponge and its Application to Polygon Miden VM
Tomer Ashur and Amit Singh Bhati
Cryptographic hash functions are said to be the work-horses of modern cryptography. One of the strongest approaches to assessing a cryptographic hash function's security is indifferentiability. Informally, indifferentiability measures to what degree the function resembles a random oracle when instantiated with an ideal underlying primitive. However, proving the indifferentiability security of hash functions has been challenging due to complex simulator designs and proof arguments. The Sponge construction is one of the most prevalent hashing methods used in various systems. The Sponge has been shown to be indifferentiable from a random oracle when instantiated with a random permutation. In this work, we first introduce $\mathsf{GSponge}$, a generalized form of the Sponge construction offering enhanced flexibility in input chaining, field sizes, and padding types. $\mathsf{GSponge}$ not only captures all existing sponge variants but also unveils new, efficient ones. The generic structure of $\mathsf{GSponge}$ facilitates the discovery of two micro-optimizations for already deployed sponges. Firstly, it allows a new padding rule based on zero-padding and domain-separated inputs, saving one full permutation call in certain cases without increasing the generation time of zero-knowledge proofs. Secondly, it allows absorbing up to $\mathsf{c}/2$ more elements (which can save another permutation call for certain message lengths) without compromising the indifferentiability security. These optimizations reduce hashing time for practical use cases such as Merkle-tree hashing and short-message processing. We then propose a new efficient instantiation of $\mathsf{GSponge}$ called $\mathsf{Sponge2}$ capturing these micro-optimizations and provide a formal indifferentiability proof to establish both $\mathsf{Sponge2}$'s and $\mathsf{GSponge}$'s security. This proof, simpler than the original one for Sponges, offers clarity and ease of understanding for real-world practitioners. Additionally, it is demonstrated that $\mathsf{GSponge}$ can be safely instantiated with permutations defined over large prime fields, a result not previously proven formally.
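For readers unfamiliar with the mode being generalized, the following is a minimal Python sketch of a plain sponge hash (absorb, then squeeze). The Feistel-based toy permutation, the byte-level rate/capacity values, and the simple 10* padding rule are illustrative stand-ins, not the parameters or padding rules analyzed in the paper.

```python
import hashlib

RATE, CAPACITY = 8, 24            # toy byte-level parameters
STATE = RATE + CAPACITY

def permutation(state: bytes) -> bytes:
    # Toy public permutation on STATE bytes: a 4-round Feistel network is a
    # genuine permutation for any round function, here built from SHA-256.
    half = STATE // 2
    L, R = state[:half], state[half:]
    for i in range(4):
        f = hashlib.sha256(bytes([i]) + R).digest()[:half]
        L, R = R, bytes(a ^ b for a, b in zip(L, f))
    return L + R

def sponge_hash(msg: bytes, outlen: int = 32) -> bytes:
    # Simplistic 10* padding; GSponge's zero-padding rule, which saves a
    # permutation call in some cases, differs from this.
    msg += b"\x01" + b"\x00" * (-(len(msg) + 1) % RATE)
    state = bytes(STATE)
    for i in range(0, len(msg), RATE):          # absorb phase
        block = msg[i:i + RATE] + bytes(CAPACITY)
        state = permutation(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < outlen:                    # squeeze phase
        out += state[:RATE]
        state = permutation(state)
    return out[:outlen]

print(sponge_hash(b"hello").hex())
```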
Last updated:  2024-06-10
Combining Outputs of a Random Permutation: New Constructions and Tight Security Bounds by Fourier Analysis
Itai Dinur
We consider constructions that combine outputs of a single permutation $\pi:\{0,1\}^n \rightarrow \{0,1\}^n$ using a public function. These are popular constructions for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). One of the best-known constructions (denoted SXoP$[2,n]$) XORs the outputs of 2 domain-separated calls to $\pi$. Modeling $\pi$ as a uniformly chosen permutation, several previous works proved a tight information-theoretic indistinguishability bound for SXoP$[2,n]$ of about $q/2^{n}$, where $q$ is the number of queries. On the other hand, tight bounds are unknown for the generalized variant (denoted SXoP$[r,n]$) which XORs the outputs of $r>2$ domain-separated calls to a uniform permutation. In this paper, we obtain two results. Our first result improves the known bounds for SXoP$[r,n]$ for all (constant) $r \geq 3$ (assuming $q \leq O(2^n/r)$ is not too large) in both the single-user and multi-user settings. In particular, for $r=3$, our bound is about $\sqrt{u}q_{\max}/2^{2.5n}$ (where $u$ is the number of users and $q_{\max}$ is the maximal number of queries per user), improving the best-known previous result by a factor of at least $2^n$. For odd $r$, our bounds are tight for $q > 2^{n/2}$, as they match known attacks. For even $r$, we prove that our single-user bounds are tight by providing matching attacks. Our second and main result is divided into two parts. First, we devise a family of constructions that output $n$ bits by efficiently combining outputs of 2 calls to a permutation on $\{0,1\}^n$, and achieve multi-user security of about $\sqrt{u} q_{\max}/2^{1.5n}$. Then, inspired by the CENC construction of Iwata [FSE'06], we further extend this family to output $2n$ bits by efficiently combining outputs of 3 calls to a permutation on $\{0,1\}^n$. The extended construction has similar multi-user security of $\sqrt{u} q_{\max}/2^{1.5n}$. The new single-user ($u=1$) bounds of $q/2^{1.5n}$ for both families should be contrasted with the previously best-known bounds of $q/2^n$, obtained by the comparable constructions of SXoP$[2,n]$ and CENC. All of our bounds are proved by Fourier analysis, extending the provable security toolkit in this domain in multiple ways.
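As a concrete reference point, here is a minimal sketch of the SXoP$[2,n]$ construction itself: two domain-separated calls to one public permutation, XORed together. The Feistel-based toy permutation is an illustrative stand-in for a uniformly random permutation (or a block cipher under a fixed key) and is not taken from the paper.

```python
import hashlib

N = 16  # block size in bytes (n = 128 bits)

def pi(x: bytes) -> bytes:
    """Toy public permutation on N bytes (4-round Feistel), a stand-in
    for a uniformly random permutation."""
    L, R = x[:N // 2], x[N // 2:]
    for i in range(4):
        f = hashlib.sha256(bytes([i]) + R).digest()[:N // 2]
        L, R = R, bytes(a ^ b for a, b in zip(L, f))
    return L + R

def sxop2(x: bytes) -> bytes:
    """SXoP[2, n]: XOR the outputs of two domain-separated calls to pi.
    Domain separation uses the top bit, so inputs carry n - 1 free bits."""
    assert len(x) == N and x[0] < 0x80
    y0 = pi(x)                               # pi(0 || x')
    y1 = pi(bytes([x[0] | 0x80]) + x[1:])    # pi(1 || x')
    return bytes(a ^ b for a, b in zip(y0, y1))

print(sxop2(b"\x00" * N).hex())
```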
Last updated:  2024-06-10
Verifiable and Private Vote-by-Mail
Henri Devillez, Olivier Pereira, and Thomas Peters
Vote-by-mail is increasingly used in Europe and worldwide for government elections. Nevertheless, very few attempts have been made towards the design of verifiable vote-by-mail, and none of them come with a rigorous security analysis. Furthermore, the ballot privacy of the currently deployed (non-verifiable) vote-by-mail systems relies on procedural means that a single malicious operator can bypass. We propose a verifiable vote-by-mail system that can accommodate the constraints of many of the large ballots that are common in Europe. Verifiability and privacy hold unless multiple system components collude to cheat on top of the postal channel. These security notions are expressed and analyzed in the simplified UC security framework. Our vote-by-mail system only makes limited requirements on the voters: voters can verify their vote by copying and comparing short strings and verifying the computation of a single hash on a short input, and they can vote normally if they want to ignore the verification steps completely. Our system also relies on common cryptographic components, all available in the ElectionGuard verifiable voting SDK for instance, which limits the risks of errors in the implementation of the cryptographic aspects of the system.
Last updated:  2024-06-10
MUSEN: Aggregatable Key-Evolving Verifiable Random Functions and Applications
Bernardo David, Rafael Dowsley, Anders Konring, and Mario Larangeira
A Verifiable Random Function (VRF) can be evaluated on an input by a prover who holds a secret key, generating a pseudorandom output and a proof of output validity that can be verified using the corresponding public key. VRFs are a central building block of committee election mechanisms that sample parties to execute tasks in cryptographic protocols, e.g. generating blocks in a Proof-of-Stake (PoS) blockchain or executing a round of MPC protocols. We propose the notion, and a matching construction, of an Aggregatable Key-Evolving VRF (A-KE-VRF) with the following extra properties: 1. Aggregation: combining proofs for several VRF evaluations of different inputs under different secret keys into a single constant size proof; 2. Key-Evolving: preventing adversaries who corrupt a party (learning their secret key) from ``forging'' proofs of past VRF evaluations. As an immediate application, we improve on the block size of PoS blockchains and on the efficiency of Proofs of Proof-of-Stake (PoPoS). Furthermore, the A-KE-VRF notion allows us to construct Encryption to the Future (EtF) and Authentication from the Past (AfP) schemes with a Key-Evolving property, which provides forward security. An EtF scheme allows for sending a message to a party who is randomly selected to execute a role in the future, while an AfP scheme allows for this party to authenticate their messages as coming from a past execution of this role. These primitives are essential for realizing the YOSO MPC Framework (CRYPTO'21).
Last updated:  2024-06-10
Two-Round Threshold Lattice-Based Signatures from Threshold Homomorphic Encryption
Kamil Doruk Gur, Jonathan Katz, and Tjerand Silde
Much recent work has developed efficient protocols for threshold signatures, where $n$ parties share a signing key and some threshold $t$ of those parties must interact to produce a signature. Yet efficient threshold signatures with post-quantum security have been elusive, with the state-of-the-art being a two-round scheme by Damgård et al. (PKC'21) based on lattices that supports only the full threshold case (i.e., $t=n$). We show here a two-round threshold signature scheme based on standard lattice assumptions that supports arbitrary thresholds $t\leq n$. Estimates of our scheme's performance at the $128$-bit security level show that in the 3-out-of-5 case, we obtain signatures of size $46.6$ KB and public keys of size $13.6$ KB. We achieve $\approx 5\times$ improved parameters if only a small number of signatures are ever issued with the same key. As an essential building block and independent contribution, we construct an actively secure threshold (linearly) homomorphic encryption scheme that supports arbitrary thresholds $t \leq n$.
Last updated:  2024-06-10
YPIR: High-Throughput Single-Server PIR with Silent Preprocessing
Samir Jordan Menon and David J. Wu
We introduce YPIR, a single-server private information retrieval (PIR) protocol that achieves high throughput (up to 83% of the memory bandwidth of the machine) without any offline communication. For retrieving a 1-bit (or 1-byte) record from a 32 GB database, YPIR achieves 12.1 GB/s/core server throughput and requires 2.5 MB of total communication. On the same setup, the state-of-the-art SimplePIR protocol achieves a 12.5 GB/s/core server throughput, requires 1.5 MB total communication, but additionally requires downloading a 724 MB hint in an offline phase. YPIR leverages a new lightweight technique to remove the hint from high-throughput single-server PIR schemes with small overhead. We also show how to reduce the server preprocessing time in the SimplePIR family of protocols by a factor of $10$-$15\times$. By removing the need for offline communication, YPIR significantly reduces the server-side costs for private auditing of Certificate Transparency logs. Compared to the best previous PIR-based approach, YPIR reduces the server-side costs by a factor of $8\times$. Note that to reduce communication costs, the previous approach assumed that updates to the Certificate Transparency log servers occurred in weekly batches. Since there is no offline communication in YPIR, our approach allows clients to always audit the most recent Certificate Transparency logs (e.g., updating once a day). Supporting daily updates using the prior scheme would cost $48\times$ more than YPIR (based on current AWS compute costs).
Last updated:  2024-06-10
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Chloé Baïsse, Antoine Moran, Guillaume Goy, Julien Maillard, Nicolas Aragon, Philippe Gaborit, Maxime Lecomte, and Antoine Loiseau
Soft Analytical Side Channel Attacks (SASCA) are a powerful family of Side Channel Attacks (SCA) that allows the recovery of secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber, and more recently, Hamming Quasi-Cyclic (HQC), have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate for high noise levels using a single trace. In this work, we present new SASCA on HQC, where both the shared key and the secret key are targeted. Our attacks are realized in simulation. Unlike the previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, enabling attacks targeting both the secret key and the shared key. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. In addition to the previous SASCA targeting HQC, we also manage to recover the secret key with two different chosen-ciphertext attacks. One of them requires a single trace and remains successful even at high noise levels.
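Since the attack's factor graph is built over the Fast Hadamard Transform used by the Reed-Muller decoder, a minimal sketch of the textbook FHT butterfly network (not the HQC reference code itself) may help fix ideas:

```python
def fht(v: list[int]) -> list[int]:
    """Fast (Walsh-)Hadamard transform of a length-2^k vector in
    O(k * 2^k) additions; RM(1, k) maximum-likelihood decoding, as used
    in HQC, picks the coordinate of maximal magnitude in the transform."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                # Butterfly step: these intermediates are the variable
                # nodes a SASCA factor graph would attach leakage to.
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v

print(fht([1, 0, 1, 0, 0, 1, 1, 0]))   # length 8 = 2^3
```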
Last updated:  2024-06-10
Privacy-Preserving Epidemiological Modeling on Mobile Graphs
Daniel Günther, Marco Holz, Benjamin Judkewitz, Helen Möllering, Benny Pinkas, Thomas Schneider, and Ajith Suresh
The COVID-19 pandemic led governments worldwide to adopt various containment measures to control its spread, such as contact tracing, social distancing regulations, and curfews. Epidemiological simulations are commonly used to assess the impact of such policies before they are implemented. Unfortunately, the scarcity of relevant empirical data, specifically detailed social contact graphs, has hampered their predictive accuracy. As this data is inherently privacy-critical, a method is urgently needed to perform powerful epidemiological simulations on real-world contact graphs without disclosing any sensitive information. In this work, we present RIPPLE, a privacy-preserving epidemiological modeling framework that enables standard models of infectious disease to be evaluated on a population's real contact graph while keeping all contact information locally on the participants' devices. As a building block of independent interest, we present PIR-SUM, a novel extension to private information retrieval for the secure download of element sums from a database. Our protocols are supported by a proof-of-concept implementation, demonstrating that a 2-week simulation over half a million participants completes in 7 minutes, with each participant communicating less than 50 KB.
Last updated:  2024-06-10
A note on Failing gracefully: Completing the picture for explicitly rejecting Fujisaki-Okamoto transforms using worst-case correctness
Kathrin Hövelmanns and Christian Majenz
The Fujisaki-Okamoto (FO) transformation is used in most proposals for post-quantum secure key encapsulation mechanisms (KEMs) like, e.g., Kyber [BDK+18]. The security analysis of FO in the presence of quantum attackers has made huge progress over the last years. Recently, [HHM22] made a particular improvement by giving a security proof that is agnostic towards how invalid ciphertexts are being treated: in contrast to previous proofs, it works regardless of whether invalid ciphertexts are rejected by reporting decryption failure explicitly or implicitly (by returning pseudorandom values). The proof in [HHM22] involves a new correctness notion for the encryption scheme that is used to encapsulate the keys. This allows in principle for a smaller additive security loss related to decryption failures, but requires analyzing this new notion for the encryption scheme on which a concrete KEM at hand is based. This note offers a trade-off between [HHM22] and its predecessors: it offers a bound for both rejection variants, being mostly based on [HHM22], but uses a more established correctness notion.
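To make the two rejection variants concrete, here is a hedged Python sketch of FO-style decapsulation with re-encryption. The underlying "PKE" is a deliberately insecure placeholder (the public key equals the secret key) purely so the control flow runs; all names are illustrative and not taken from [HHM22].

```python
import hashlib, os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Insecure placeholder "PKE" (pk == sk!) so the FO control flow below is
# runnable; a real KEM would use, e.g., a lattice-based PKE here.
def pke_enc_derand(pk, m, coins):
    return coins + bytes(a ^ b for a, b in zip(m, H(pk, coins)))

def pke_dec(sk, ct):
    r, c = ct[:32], ct[32:]
    return bytes(a ^ b for a, b in zip(c, H(sk, r)))

def fo_encaps(pk):
    m = os.urandom(32)
    ct = pke_enc_derand(pk, m, H(b"coins", m))
    return ct, H(b"key", m, ct)

def fo_decaps(sk, pk, s, ct, explicit_reject=True):
    """FO-transformed decapsulation (sketch). The explicit and implicit
    rejection variants differ only in the final branch; whether that
    difference matters for security is what [HHM22] and this note analyze."""
    m = pke_dec(sk, ct)
    # Re-encrypt with coins derived deterministically from m; accept only
    # if this reproduces the received ciphertext (the step the worst-case
    # correctness notion is about).
    if pke_enc_derand(pk, m, H(b"coins", m)) == ct:
        return H(b"key", m, ct)
    return None if explicit_reject else H(b"reject", s, ct)  # implicit: pseudorandom key

sk = pk = os.urandom(32)             # placeholder keys (see caveat above)
ct, key = fo_encaps(pk)
assert fo_decaps(sk, pk, os.urandom(32), ct) == key
```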
Last updated:  2024-06-10
Time Sharing - A Novel Approach to Low-Latency Masking
Dilip Kumar S. V., Siemen Dhooghe, Josep Balasch, Benedikt Gierlichs, and Ingrid Verbauwhede
We present a novel approach to small area and low-latency first-order masking in hardware. The core idea is to separate the processing of shares in time in order to achieve non-completeness. Resulting circuits are proven first-order glitch-extended PINI secure. This means the method can be straightforwardly applied to mask arbitrary functions without constraints which the designer must take care of. Furthermore we show that an implementation can benefit from optimization through EDA tools without sacrificing security. We provide concrete results of several case studies. Our low-latency implementation of a complete PRINCE core shows a 32% area improvement (44% with optimization) over the state-of-the-art. Our PRINCE S-Box passes formal verification with a tool and the complete core on FPGA shows no first-order leakage in TVLA with 100 million traces. Our low-latency implementation of the AES S-Box costs roughly one third (one quarter with optimization) of the area of state-of-the-art implementations. It shows no first-order leakage in TVLA with 250 million traces.
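The paper's technique is a hardware one, but the share-based computation it protects can be illustrated in software: below is a minimal sketch of generic two-share, first-order Boolean masking with an ISW-style AND gadget. This is standard background, not the paper's time-sharing construction, and glitches (the failure mode that motivates non-completeness) have no analogue in sequential Python.

```python
import secrets

def mask(x: int) -> tuple[int, int]:
    """Split a 1-bit secret into two Boolean shares: x = x0 ^ x1."""
    x0 = secrets.randbits(1)
    return x0, x0 ^ x

def masked_and(a, b):
    """First-order masked AND (two-share, ISW-style). Each individual
    intermediate value is statistically independent of the secrets; in
    hardware, glitches can still recombine shares inside combinational
    logic, which is why non-completeness (e.g., via time sharing, as in
    the paper) must be enforced."""
    (a0, a1), (b0, b1) = a, b
    r = secrets.randbits(1)                       # fresh randomness
    c0 = (a0 & b0) ^ r
    c1 = (((a1 & b1) ^ (r ^ (a0 & b1))) ^ (a1 & b0))
    return c0, c1

def unmask(x0: int, x1: int) -> int:
    return x0 ^ x1

# Sanity check over all inputs: the gadget computes AND correctly.
for a in (0, 1):
    for b in (0, 1):
        assert unmask(*masked_and(mask(a), mask(b))) == (a & b)
```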
Last updated:  2024-06-10
Hints from Hertz: Dynamic Frequency Scaling Side-Channel Analysis of Number Theoretic Transform in Lattice-Based KEMs
Tianrun Yu, Chi Cheng, Zilong Yang, Yingchen Wang, Yanbin Pan, and Jian Weng
The Number Theoretic Transform (NTT) has been widely used to accelerate computations in lattice-based cryptography. However, attackers can potentially launch power analysis targeting the NTT because it is usually the most time-consuming part of the implementation. This extended time frame provides a natural window of opportunity for attackers. In this paper, we investigate the first CPU frequency leakage (Hertzbleed-like) attacks against the NTT in lattice-based KEMs. Our key observation is that different inputs to the NTT incur different Hamming weights in its output and intermediate layers. By measuring the CPU frequency during the execution of the NTT, we propose a simple yet effective attack idea to find the input that causes the NTT to process data with significantly low Hamming weight. We further apply our attack idea to real-world applications that are built upon the NTT: CPA-secure Kyber without Compression and Decompression functions, and CCA-secure NTTRU. This allows us to extract information, in the form of frequency hints, about the secret key. Integrating these hints into the LWE-estimator framework, we estimate a minimum of $35\%$ security loss caused by the leakage. The frequency and timing measurements on the Reference and AVX2 implementations of the NTT in both Kyber and NTTRU align well with our theoretical analysis, confirming the existence of frequency side-channel leakage in the NTT. It is important to emphasize that our observation is not limited to a specific implementation but rather applies to the algorithm on which the NTT is based. Therefore, our results call for more attention to the analysis of power leakage against the NTT in lattice-based cryptography.
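To see the leakage target concretely, here is a schoolbook sketch: a naive NTT over a toy prime field together with the Hamming-weight measure that the frequency signal correlates with. The parameters ($Q$, $N$, the generator) are illustrative values chosen so a primitive root of unity exists, not Kyber's or NTTRU's, and real implementations use the $O(N \log N)$ butterfly network instead.

```python
Q, N, G = 257, 16, 3                 # toy prime Q, transform size N, primitive root G mod Q
OMEGA = pow(G, (Q - 1) // N, Q)      # a primitive N-th root of unity mod Q

def ntt(x: list[int]) -> list[int]:
    """Schoolbook O(N^2) number-theoretic transform mod Q."""
    return [sum(x[j] * pow(OMEGA, j * k, Q) for j in range(N)) % Q
            for k in range(N)]

def hamming_weight(vals: list[int]) -> int:
    """Total Hamming weight of a coefficient vector -- the data-dependent
    quantity that modulates CPU frequency in Hertzbleed-style leakage."""
    return sum(bin(v).count("1") for v in vals)

# A sparse or structured input yields a much lower-weight NTT output than
# a generic one, which is the signal the attack searches for.
zero = [0] * N
generic = [(7 * i + 3) % Q for i in range(N)]
print(hamming_weight(ntt(zero)), hamming_weight(ntt(generic)))
```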
Last updated:  2024-06-10
Leakage Certification Made Simple
Aakash Chowdhury, Carlo Brunetta, Arnab Roy, and Elisabeth Oswald
Side channel evaluations benefit from sound characterisations of adversarial leakage models, which are the determining factor for attack success. Two questions are of interest: can we define and estimate a quantity that captures the ideal adversary (who knows all the distributions that are involved in an attack), and can we define and estimate a quantity that captures a concrete adversary (represented by a given leakage model)? Existing work has led to a proliferation of custom quantities to measure both types of adversaries, which can be data intensive to estimate in the ideal case, even for discrete side channels and especially when the number of dimensions in the side channel traces grows. In this paper, we show how to define the mutual information between carefully chosen variables of interest and how to instantiate a recently suggested mutual information estimator for practical estimation. We apply our results to real-world data sets and are the first to provide a mutual information-based characterisation of ideal and concrete adversaries utilising up to 30 data points.
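As background for the quantities being estimated, here is a naive plug-in (histogram) mutual-information estimator for discrete samples. It illustrates the data-hungriness problem mentioned above; it is not the recently suggested estimator the paper instantiates, and the leakage model in the example is a hypothetical toy.

```python
import random
from collections import Counter
from math import log2

def mutual_information(xs, ys) -> float:
    """Plug-in (histogram) estimate of I(X; Y) in bits from paired samples.
    Naive estimators like this become very data-hungry as the number of
    trace dimensions grows -- the estimation problem the paper tackles."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy example: the "leakage" y is the Hamming weight of a 4-bit secret x.
xs = [random.randrange(16) for _ in range(100_000)]
ys = [bin(x).count("1") for x in xs]
print(f"estimated I(X; HW(X)) ~ {mutual_information(xs, ys):.3f} bits")
```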
Last updated:  2024-06-10
Climbing and descending tall volcanos
Steven Galbraith
We revisit the question of relating the elliptic curve discrete logarithm problem (ECDLP) between ordinary elliptic curves over finite fields with the same number of points. This problem was considered in 1999 by Galbraith and in 2005 by Jao, Miller, and Venkatesan. We apply recent results from isogeny cryptography and cryptanalysis, especially the Kani construction, to this problem. We improve the worst case bound in Galbraith's 1999 paper from $\tilde{O}( q^{1.5} )$ to (heuristically) $\tilde{O}( q^{0.4} )$ operations. The two cases of main interest for discrete logarithm cryptography are random curves (flat volcanoes) and pairing-based crypto (tall volcanoes with crater of constant or polynomial size). In both cases we show a rigorous $\tilde{O}( q^{1/4})$ algorithm to compute an isogeny between any two curves in the isogeny class. We stress that this paper is motivated by pre-quantum elliptic curve cryptography using ordinary elliptic curves, which is not yet obsolete.
Last updated:  2024-06-10
On Orchestrating Parallel Broadcasts for Distributed Ledgers
Peiyao Sheng, Chenyuan Wu, Dahlia Malkhi, Michael K. Reiter, Chrysoula Stathakopoulou, Michael Wei, and Maofan Yin
This paper introduces and develops the concept of ``ticketing'', through which atomic broadcasts are orchestrated by nodes in a distributed system. The paper studies different ticketing regimes that allow parallelism, yet prevent slow nodes from hampering overall progress. It introduces a hybrid scheme which combines managed and unmanaged ticketing regimes, striking a balance between adaptivity and resilience. The performance evaluation demonstrates how managed and unmanaged ticketing regimes benefit throughput in systems with heterogeneous resources, both in static and dynamic scenarios, with the managed regime performing better of the two as it adapts better. Finally, it demonstrates how, under the hybrid ticketing regime, performance can enjoy both the adaptivity of the managed regime and the liveness guarantees of the unmanaged regime.
Last updated:  2024-06-09
Traceable Secret Sharing: Strong Security and Efficient Constructions
Dan Boneh, Aditi Partap, and Lior Rotem
Suppose Alice uses a $t$-out-of-$n$ secret sharing to store her secret key on $n$ servers. Her secret key is protected as long as $t$ of them do not collude. However, what if a less-than-$t$ subset of the servers decides to offer the shares they have for sale? In this case, Alice should be able to hold them accountable, or else nothing prevents them from selling her shares. With this motivation in mind, Goyal, Song, and Srinivasan (CRYPTO 21) introduced the concept of {\em traceable secret sharing}. In such schemes, it is possible to provably trace the leaked secret shares back to the servers who leaked them. Goyal et al.~presented the first construction of a traceable secret sharing scheme. However, secret shares in their construction are quadratic in the secret size, and their tracing algorithm is quite involved as it relies on Goldreich-Levin decoding. In this work, we put forth new definitions and practical constructions for traceable secret sharing. In our model, some $f < t$ servers output a reconstruction box~$R$ that may arbitrarily depend on their shares. Given additional $t-f$ shares, $R$ reconstructs and outputs the secret. The task is to trace $R$ back to the corrupted servers given black-box access to $R$. Unlike Goyal et al., we do not assume that the tracing algorithm has any information on how the corrupted servers constructed~$R$ from the shares in their possession. We then present two very efficient constructions of traceable secret sharing based on two classic secret sharing schemes. In both of our schemes, shares are only twice as large as the secret, improving over the quadratic overhead of Goyal et al. Our first scheme is obtained by presenting a new practical tracing algorithm for the widely-used Shamir secret sharing scheme. Our second construction is based on an extension of Blakley's secret sharing scheme. Tracing in this scheme is optimally efficient, and requires just one successful query to $R$. We believe that our constructions are an important step towards bringing traceable secret-sharing schemes to practice. This work also raises several interesting open problems that we describe in the paper.
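For readers who want the baseline scheme in front of them, here is a minimal Shamir share/reconstruct sketch over a toy prime field. The tracing algorithm, which is the paper's contribution, is not reproduced here, and the field size below is illustrative only.

```python
import secrets

P = 2**127 - 1   # a Mersenne prime; illustrative field size only

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """t-out-of-n Shamir sharing: pick a random degree-(t-1) polynomial f
    with f(0) = secret; party i receives the share (i, f(i))."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42 and reconstruct(shares[2:]) == 42
```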
Last updated:  2024-06-09
Simple Logarithmic-size LSAG signature
Edsger Hughes
A number of existing cryptosystems use the well-known LSAG signature and its extensions. This article presents a simple logarithmic-size signature scheme, LS-LSAG, which, despite a radical reduction in size, retains the basic code block of the LSAG signature. Substituting LS-LSAG for LSAG therefore requires minimal changes to the code of almost any existing LSAG extension, making its signature size logarithmic instead of linear.
Last updated:  2024-06-09
Leveraging Small Message Spaces for CCA1 Security in Additively Homomorphic and BGN-type Encryption
Benoit Libert
We show that the smallness of message spaces can be used as a checksum allowing one to hedge against CCA1 attacks in additively homomorphic encryption schemes. We first show that the additively homomorphic variant of Damgård's Elgamal provides IND-CCA1 security under the standard DDH assumption. Earlier proofs either required non-standard assumptions or only applied to hybrid versions of Damgård's Elgamal, which are not additively homomorphic. Our security proof builds on hash proof systems and exploits the fact that encrypted messages must be contained in a polynomial-size interval in order to enable decryption. With $3$ group elements per ciphertext, this positions Damgård's Elgamal as the most efficient/compact DDH-based additively homomorphic CCA1 cryptosystem. Under the same assumption, the best candidate so far was the lite Cramer-Shoup cryptosystem, where ciphertexts consist of $4$ group elements. We extend this observation to build an IND-CCA1 variant of the Boneh-Goh-Nissim encryption scheme, which allows evaluating 2-DNF formulas on encrypted data. By computing tensor products of Damgård's Elgamal ciphertexts, we obtain product ciphertexts consisting of $9$ group elements (instead of $16$ elements if we were tensoring lite Cramer-Shoup ciphertexts) in the target group of a bilinear map. Using similar ideas, we also obtain a CCA1 variant of the Elgamal-Paillier cryptosystem by forcing $\lambda$ plaintext bits to be zeroes, which yields CCA1 security almost for free. In particular, the message space remains exponentially large and ciphertexts are as short as in the IND-CPA scheme. We finally adapt the technique to the Castagnos-Laguillaumie system.
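The interval restriction the proof leverages is visible in how additively homomorphic ("lifted") Elgamal decrypts: the message sits in the exponent and is recovered by searching a polynomial-size interval. The sketch below uses an insecure toy group and illustrative names; it shows only the homomorphism and the bounded-interval decryption, not the paper's hash-proof-system machinery.

```python
import secrets

# Toy group: order-Q subgroup of Z_P^* (parameters far too small for security)
P = 1019            # prime with P = 2*Q + 1
Q = 509             # prime order of the subgroup
G = 4               # generator of the order-Q subgroup (a quadratic residue)
MSG_BOUND = 100     # decryption only works for sums inside a small interval

sk = secrets.randbelow(Q)
h = pow(G, sk, P)   # public key

def enc(m: int):
    r = secrets.randbelow(Q)
    return pow(G, r, P), (pow(G, m, P) * pow(h, r, P)) % P   # (g^r, g^m * h^r)

def add(c1, c2):
    """Homomorphic addition: componentwise multiplication of ciphertexts."""
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P

def dec(c) -> int:
    """Recover g^m, then brute-force m over the small message interval --
    the polynomial-size-interval requirement the security proof exploits."""
    gm = (c[1] * pow(c[0], Q - sk, P)) % P   # c2 / c1^sk
    for m in range(MSG_BOUND):
        if pow(G, m, P) == gm:
            return m
    raise ValueError("message outside decryptable interval")

assert dec(add(enc(20), enc(22))) == 42
```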
Last updated:  2024-06-09
Multi-Input Functional Encryption for Unbounded Inner Products
Bishnu Charan Behera and Somindu C. Ramanna
In this work, we propose a construction for \emph{Multi-Input Inner-Product Functional Encryption} (MIPFE) that can handle vectors of variable length in different encryption slots. This construction is the first of its kind, as all existing MIPFE schemes allow only equal-length vectors. The scheme is constructed in the private-key setting, providing privacy for both the message and the function, thereby achieving the so-called \emph{full-hiding} security. Our MIPFE scheme uses bilinear groups of prime order and achieves security under the well-studied symmetric external Diffie-Hellman (SXDH) assumption.
Last updated:  2024-06-09
Cryptographic Analysis of Delta Chat
Yuanming Song, Lenka Mareková, and Kenneth G. Paterson
We analyse the cryptographic protocols underlying Delta Chat, a decentralised messaging application which uses e-mail infrastructure for message delivery. It provides end-to-end encryption by implementing the Autocrypt standard and the SecureJoin protocols, both making use of the OpenPGP standard. Delta Chat's adoption by categories of high-risk users such as journalists and activists, but also more generally users in regions affected by Internet censorship, makes it a target for powerful adversaries. Yet, the security of its protocols has not been studied to date. We describe five new attacks on Delta Chat in its own threat model, exploiting cross-protocol interactions between its implementation of SecureJoin and Autocrypt, as well as bugs in rPGP, its OpenPGP library. The findings have been disclosed to the Delta Chat team, who implemented fixes.
Last updated:  2024-06-09
Unbounded Non-Zero Inner Product Encryption
Bishnu Charan Behera and Somindu C. Ramanna
In a non-zero inner product encryption (NIPE) scheme, ciphertexts and keys are associated with vectors from some inner-product space. Decryption of a ciphertext for $\vec{x}$ is allowed by a key for $\vec{y}$ if and only if the inner product $\langle{\vec{x}},{\vec{y}}\rangle \neq 0$. Existing constructions of NIPE assume the lengths of the vectors are fixed a priori. We present the first constructions of \emph{unbounded} non-zero inner product encryption (UNIPE) with constant sized keys. Unbounded here refers to the size of vectors not being pre-fixed during setup. Both constructions, based on bilinear maps, are proven selectively secure under the decisional bilinear Diffie-Hellman (DBDH) assumption. Our constructions are obtained by transforming the unbounded inner product functional encryption (IPFE) schemes of Dufour-Sans and Pointcheval (ACNS 2019), one in the \emph{strict domain} setting and the other in the \emph{permissive domain} setting. Interestingly, in the latter case, we prove security from DBDH, a static assumption, while the original IPFE scheme relied on an interactive parameterised assumption. In terms of efficiency, the features of the IPFE constructions are retained after transformation to NIPE. Notably, the public key and decryption keys have constant size.
Last updated:  2024-06-08
Polymath: Groth16 Is Not The Limit
Helger Lipmaa
Shortening the argument (three group elements or 1536 / 3072 bits over the BLS12-381/BLS24-509 curves) of the Groth16 zk-SNARK for R1CS is a long-standing open problem. We propose a zk-SNARK Polymath for the Square Arithmetic Programming constraint system using the KZG polynomial commitment scheme. Polymath has a shorter argument (1408 / 1792 bits over the same curves) than Groth16. At 192-bit security, Polymath's argument is nearly half the size, making it highly competitive for high-security future applications. Notably, we handle public inputs in a simple way. We optimized Polymath's prover through an exhaustive parameter search. Polymath's prover does not output $\mathbb{G}_{2}$ elements, aiding in batch verification, SNARK aggregation, and recursion. Polymath's properties make it highly suitable to be the final SNARK in SNARK compositions.
Last updated:  2024-06-08
Secure Noise Sampling for DP in MPC with Finite Precision
Hannah Keller, Helen Möllering, Thomas Schneider, Oleksandr Tkachenko, and Liang Zhao
While secure multi-party computation (MPC) protects the privacy of inputs and intermediate values of a computation, differential privacy (DP) ensures that the output itself does not reveal too much about individual inputs. For this purpose, MPC can be used to generate noise and add this noise to the output. However, securely generating and adding this noise is a challenge considering real-world implementations on finite-precision computers, since many DP mechanisms guarantee privacy only when noise is sampled from continuous distributions requiring infinite precision. We introduce efficient MPC protocols that securely realize noise sampling for several plaintext DP mechanisms that are secure against existing precision-based attacks: the discrete Laplace and Gaussian mechanisms, the snapping mechanism, and the integer-scaling Laplace and Gaussian mechanisms. Due to their inherent trade-offs, the favorable mechanism for a specific application depends on the available computation resources, the type of function evaluated, and the desired $(\epsilon,\delta)$-DP guarantee. The benchmarks of our protocols implemented in the state-of-the-art MPC framework MOTION (Braun et al., TOPS'22) demonstrate highly efficient online runtimes of less than 32 ms/query, and down to about 1 ms/query with batching, in the two-party setting. The respective offline phases are also practical, requiring only 51 ms to 5.6 s per query, depending on the batch size.
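For orientation, here is a plaintext (non-MPC) sketch of the discrete Laplace mechanism, sampled as the difference of two geometric variables. It deliberately uses floating point for clarity, which is exactly the kind of finite-precision shortcut the paper's secure protocols are designed to avoid; all names and parameters are illustrative.

```python
import math, random

def discrete_laplace(scale: float) -> int:
    """Sample Z with P(Z = k) proportional to exp(-|k| / scale), as the
    difference of two i.i.d. geometric variables. NOTE: this uses floating
    point for readability only; a secure deployment needs exact,
    finite-precision (and, in the paper, MPC-distributed) sampling."""
    p = 1.0 - math.exp(-1.0 / scale)   # success probability of each geometric

    def geometric() -> int:
        g = 0
        while random.random() >= p:    # count failures before first success
            g += 1
        return g

    return geometric() - geometric()

def dp_sum(values, epsilon: float) -> int:
    """Discrete-Laplace mechanism for a sum query with sensitivity 1."""
    return sum(values) + discrete_laplace(scale=1.0 / epsilon)

print(dp_sum([1, 0, 1, 1, 0], epsilon=0.5))
```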
Last updated:  2024-06-08
DoS-resistant Oblivious Message Retrieval and Snake-eye Resistant PKE from LWE
Zeyu Liu, Katerina Sotiraki, Eran Tromer, and Yunhao Wang
Oblivious message retrieval (OMR) allows resource-limited recipients to outsource the message retrieval process without revealing which messages are pertinent to which recipient. Its realizations in recent works leave an open problem: can an OMR scheme be both practical and provably secure against spamming attacks from malicious senders (i.e., DoS-resistant) under standard assumptions? In this paper, we first prove that a prior construction OMRp2 is DoS-resistant under a standard LWE assumption, resolving an open conjecture of prior works. Then, we present DoS-PerfOMR: a provably DoS-resistant OMR construction that is 12x faster than OMRp2, and (almost) matches the performance of the state-of-the-art OMR scheme that is not DoS-resistant. As a building block, we analyze the snake-eye resistance property for general PKE schemes. We construct a new lattice-based PKE scheme, LWEmongrass, that is provably snake-eye resistant and has better efficiency than the PVW scheme underlying OMRp2. We also show that the natural candidates (e.g., RingLWE PKE) are not snake-eye resistant. Of independent interest, we introduce two variants of LWE with side information, as components towards proving the properties of LWEmongrass, and reduce standard LWE to them for the parameters of interest.
Last updated:  2024-06-08
SoK: Model Reverse Engineering Threats for Neural Network Hardware
Seetal Potluri and Farinaz Koushanfar
There has been significant progress over the past seven years in model reverse engineering (RE) for neural network (NN) hardware. Although there has been systematization of knowledge (SoK) in an overall sense, the treatment from the hardware perspective has been far from adequate. To bridge this gap, this paper systematically categorizes the types of NN hardware used prevalently by industry and academia, as well as the model RE attacks/defenses published in each category. Further, we sub-categorize existing NN model RE attacks based on different criteria, including the degree of hardware parallelism, threat vectors like side channels, fault injection, scan-chain attacks, system-level attacks, the type of asset under attack, the type of NN, exact versus approximate recovery, etc. We make important technical observations and identify key open research directions. Subsequently, we discuss the state-of-the-art defenses against NN model RE, identify certain categorization criteria, and compare the existing works based on these criteria. We note significant qualitative gaps for defenses and suggest recommendations for important open research directions for the protection of NN models. Finally, we discuss limitations of existing work in terms of the types of models for which security evaluations or defenses were proposed, and suggest open problems in terms of protecting practically expensive model IPs.
Last updated:  2024-06-08
Glitch-Stopping Circuits: Hardware Secure Masking without Registers
Zhenda Zhang, Svetla Nikova, and Ventzislav Nikov
Masking is one of the most popular countermeasures to protect implementations against power and electromagnetic side-channel attacks, because it offers provable security. Masking has been shown secure against $d$-threshold probing adversaries by Ishai et al. at CRYPTO'03, but this adversary model does not consider any physical hardware defaults, and thus such masking schemes were shown to be still vulnerable when implemented as hardware circuits. To address these limitations, glitch-extended probing adversaries and, correspondingly, glitch-immune masking schemes have been introduced. This paper introduces glitch-stopping circuits which, when instantiated with registers, coincide with circuits protected via glitch-immune masking. Then we show that one can instantiate glitch-stopping circuits without registers by using clocked logic gates or latches. This is illustrated for both ASIC and FPGA, offering a promising alternative to conventional register-based masked implementations. Compared to the traditional register-based approach, these register-free solutions can reduce the latency to a single cycle and achieve a lower area cost. We prove and experimentally confirm that the proposed solution is as secure as the register-based one. In summary, this paper proposes a novel method to address the latency of register-based hardware masking without jeopardising its security. This method not only reduces the latency down to one clock cycle, but also improves the area costs of the implementations.
Last updated:  2024-06-07
REACTIVE: Rethinking Effective Approaches Concerning Trustees in Verifiable Elections
Josh Benaloh, Michael Naehrig, and Olivier Pereira
For more than forty years, two principal questions have been asked when designing verifiable election systems: how will the integrity of the results be demonstrated and how will the privacy of votes be preserved? Many approaches have been taken towards answering the first question such as use of MixNets and homomorphic tallying. But in the academic literature, the second question has always been answered in the same way: decryption capabilities are divided amongst multiple independent “trustees” so that a collusion is required to compromise privacy. In practice, however, this approach can be fairly challenging to deploy. Human trustees rarely have a clear understanding of their responsibilities, and they typically all use identical software for their tasks. Rather than exercising independent judgment to maintain privacy, trustees are often reduced to automata who just push the buttons they are told to when they are told to, doing little towards protecting voter privacy. This paper looks at several aspects of the trustee experience. It begins by discussing various cryptographic protocols that have been used for key generation in elections, explores their impact on the role of trustees, and notes that even the theory of proper use of trustees is more challenging than it might seem. This is illustrated by showing that one of the only references defining a full threshold distributed key generation (DKG) for elections defines an insecure protocol. Belenios claims to rely on that reference for its DKG and security proof. Fortunately, it does not inherit the same vulnerability. We offer a security proof for the Belenios DKG. The paper then discusses various practical contexts, in terms of humans, software, and hardware, and their impact on the practical deployment of a trustee-based privacy model.
Last updated:  2024-06-07
Fast pairings via biextensions and cubical arithmetic
Damien Robert
Biextensions associated to line bundles on abelian varieties allow one to reinterpret the usual Weil, Tate, Ate, optimal Ate, \ldots, pairings as monodromy pairings. We introduce a cubical arithmetic, derived from the canonical cubical torsor structure of these line bundles, to obtain an efficient arithmetic of these biextensions. This unifies and extends Miller's standard algorithm for computing pairings, along with other algorithms like elliptic nets and theta functions, and allows these algorithms to be adapted to pairings on any model of abelian varieties with a polarisation $\Phi_D$, as long as we have an explicit theorem of the square for $D$. In particular, we give explicit formulas for the arithmetic of the biextension (and cubical torsor structure) associated to the divisor $D=2(0_E)$ on an elliptic curve. We derive very efficient pairing formulas on elliptic curves and Kummer lines. Notably, for generic pairings on Montgomery curves, our cubical biextension ladder algorithm computes pairings at a cost of only $15M$ per bit, which as far as I know is faster than any pairing doubling formula in the literature.
Last updated:  2024-06-07
Failed crypto: Matrices over non-standard arithmetic
Daniel R. L. Brown
A failed hypothesis is reported here. The hope was that large matrices over small non-standard arithmetic are likely to have infeasible division, and furthermore be secure for use in Rabi–Sherman associative cryptography.
Last updated:  2024-06-07
Cryptanalysis of Lattice-Based Sequentiality Assumptions and Proofs of Sequential Work
Chris Peikert and Yi Tang
This work *completely breaks* the sequentiality assumption (and broad generalizations thereof) underlying the candidate lattice-based proof of sequential work (PoSW) recently proposed by Lai and Malavolta at CRYPTO 2023. In addition, it breaks an essentially identical variant of the PoSW, which differs from the original in only an arbitrary choice that is immaterial to the design and security proof (under the falsified assumption). This suggests that whatever security the original PoSW may have is fragile, and further motivates the search for a construction based on a sound lattice-based assumption. Specifically, for sequentiality parameter $T$ and SIS parameters $n,q,m = n \log q$, the attack on the sequentiality assumption finds a solution of quasipolynomial norm $m^{\lceil \log T \rceil}$ (or norm $O(\sqrt{m})^{\lceil \log T \rceil}$ with high probability) in only *logarithmic* $\tilde{O}_{n,q}(\log T)$ depth; this strongly falsifies the assumption that finding such a solution requires depth *linear* in $T$. (The $\tilde{O}$ notation hides polylogarithmic factors in the variables appearing in its subscript.) Alternatively, the attack finds a solution of polynomial norm $m^{1/\varepsilon}$ in depth $\tilde{O}_{n,q}(T^{\varepsilon})$, for any constant $\varepsilon > 0$. Similarly, the attack on the (slightly modified) PoSW constructs a valid proof in \emph{polylogarithmic} $\tilde{O}_{n,q}(\log^2 T)$ depth, thus strongly falsifying the expectation that doing so requires linear sequential work.
Last updated:  2024-06-07
Secret Key Recovery in a Global-Scale End-to-End Encryption System
Graeme Connell, Vivian Fang, Rolfe Schmidt, Emma Dauterman, and Raluca Ada Popa
End-to-end encrypted messaging applications ensure that an attacker cannot read a user's message history without their decryption keys. While end-to-end encryption provides strong privacy, it creates a usability problem: if a user loses their devices and cannot access their decryption keys, they can no longer access their message history. To solve this usability problem, users should be able to back up their decryption keys with the messaging provider. For privacy, the provider should not have access to users' decryption keys. To solve this problem, we present Secure Value Recovery 3 (SVR3), a secret key recovery system that distributes trust across different types of hardware enclaves run by different cloud providers in order to protect users' decryption keys. SVR3 is the first deployed secret key recovery system to split trust across heterogeneous enclaves managed by different cloud providers: this design ensures that a single type of enclave does not become a central point of attack. SVR3 protects decryption keys via rollback protection and fault tolerance techniques tailored to the enclaves' security guarantees. SVR3 costs $0.0025/user/year and takes 365ms for a user to recover their key, which is a rare operation. A part of SVR3 has been rolled out to millions of real users in a deployment with capacity for over 500 million users, demonstrating the ability to operate at scale.
Last updated:  2024-06-07
Compact Key Storage: A Modern Approach to Key Backup and Delegation
Yevgeniy Dodis, Daniel Jost, and Antonio Marcedone
End-to-End (E2E) encrypted messaging, which prevents even the service provider from learning communication contents, is gaining popularity. Since users care about maintaining access to their data even if their devices are lost or broken or just replaced, these systems are often paired with cloud backup solutions: Typically, the user will encrypt their messages with a fixed key, and upload the ciphertexts to the server. Unfortunately, this naive solution has many drawbacks. First, it often undermines the fancy security guarantees of the core application, such as forward secrecy (FS) and post-compromise security (PCS), in case the single backup key is compromised. Second, they are wasteful for backing up conversations in large groups, where many users are interested in backing up the same sequence of messages. Instead, we formalize a new primitive called Compact Key Storage (CKS) as the "right" solution to this problem. Such CKS scheme allows a mutable set of parties to delegate to a server storage of an increasing set of keys, while each client maintains only a small state. Clients update their state as they learn new keys (maintaining PCS), or whenever they want to forget keys (achieving FS), often without the need to interact with the server. Moreover, access to the keys (or some subset of them) can be efficiently delegated to new group members, who all efficiently share the same server's storage. We carefully define syntax, correctness, privacy, and integrity of CKS schemes, and build two efficient schemes provably satisfying these notions. Our line scheme covers the most basic "all-or-nothing" flavor of CKS, where one wishes to compactly store and delegate the entire history of past secrets. Thus, new users enjoy the efficiency and compactness properties of the CKS only after being granted access to the entire history of keys. In contrast, our interval scheme is only slightly less efficient but allows for finer-grained access, delegation, and deletion of past keys.
Last updated:  2024-06-07
Information-theoretic security with asymmetries
Tim Beyne and Yu Long Chen
In this paper, we study the problem of lower bounding any given cost function depending on the false positive and false negative probabilities of adversaries against indistinguishability security notions in symmetric-key cryptography. We take the cost model as an input, so that this becomes a purely information-theoretical question. We propose power bounds as an easy-to-use alternative for advantage bounds in the context of indistinguishability with asymmetric cost functions. We show that standard proof techniques such as hybrid arguments and the H-coefficient method can be generalized to the power model, and apply these techniques to the PRP-PRF switching lemma, the Even-Mansour (EM) construction, and the sum-of-permutations (SoP) construction. As the final and perhaps most useful contribution, we provide two methods to convert single-user power bounds into multi-user power bounds, and investigate their relation to the point-wise proximity method of Hoang and Tessaro (Crypto 2016). These methods are applied to obtain tight multi-user power bounds for EM and SoP.
Last updated:  2024-06-07
Reducing the Number of Qubits in Quantum Factoring
Clémence Chevignard, Pierre-Alain Fouque, and André Schrottenloher
This paper focuses on the optimization of the number of logical qubits in quantum algorithms for factoring and computing discrete logarithms in $\mathbb{Z}_N^*$. These algorithms contain an exponentiation circuit modulo $N$, which is responsible for most of their cost, both in qubits and operations. In this paper, we show that using only $o(\log N)$ work qubits, one can obtain the least significant bits of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to solve the discrete logarithm problem in $\mathbb{Z}_N^*$ using only $d + o(\log N)$ qubits, where $d$ is the bit-size of the logarithm. Consequently we can factor $n$-bit RSA moduli using $n/2 + o(n)$ qubits, while current envisioned implementations require about $2n$ qubits. Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Being completely classical, we have implemented and tested it. For RSA factorization, we can reach a gate count $\mathcal{O}(n^3)$ for a depth $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). To factor an RSA-2048 instance, we estimate that 1730 logical qubits and $2^{36}$ Toffoli gates will suffice for a single run, and the algorithm needs on average 40 runs. To solve a discrete logarithm instance of 224 bits (112-bit classical security) in a safe-prime group of 2048 bits, we estimate that 684 logical qubits would suffice, and 20 runs with $2^{32}$ Toffoli gates each.
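A small sketch of the Residue Number System idea the algorithm builds on: a large integer becomes a vector of residues modulo pairwise-coprime moduli, arithmetic is carry-free and componentwise, and the Chinese Remainder Theorem reconstructs the result. The moduli below are toy values, unrelated to the paper's parameter choices.

```python
from math import prod

MODULI = [251, 253, 255, 256]   # pairwise coprime toy moduli (product ~ 2^32)

def to_rns(x: int) -> list[int]:
    """Represent x by its residues; arithmetic then parallelizes over
    independent small moduli, with no carries between components."""
    return [x % m for m in MODULI]

def rns_mul(a: list[int], b: list[int]) -> list[int]:
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]

def from_rns(residues: list[int]) -> int:
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

a, b = 123456, 7890
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b
```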
Last updated:  2024-06-07
Branching Heuristics in Differential Collision Search with Applications to SHA-512
Maria Eichlseder, Florian Mendel, and Martin Schläffer
In this work, we present practical semi-free-start collisions for SHA-512 on up to 38 (out of 80) steps with complexity $2^{40.5}$. The best previously published result was on 24 steps. The attack is based on extending local collisions as proposed by Mendel et al. in their Eurocrypt 2013 attack on SHA-256. However, for SHA-512, the search space is too large for direct application of these techniques. We achieve our result by improving the branching heuristic of the guess-and-determine approach to find differential characteristics and conforming message pairs. Experiments show that for smaller problems like 27 steps of SHA-512, the heuristic can also speed up the collision search by a factor of $2^{20}$.
Last updated:  2024-06-07
Malicious Hashing: Eve's Variant of SHA-1
Ange Albertini, Jean-Philippe Aumasson, Maria Eichlseder, Florian Mendel, and Martin Schläffer
We present collisions for a version of SHA-1 with modified constants, where the colliding payloads are valid binary files. Examples are given of colliding executables, archives, and images. Our malicious SHA-1 instances have round constants that differ from the original ones in only 40 bits (on average). Modified versions of cryptographic standards are typically used on closed systems (e.g., in pay-TV, media and gaming platforms) and aim to differentiate cryptographic components across customers or services. Our proof-of-concept thus demonstrates the exploitability of custom SHA-1 versions for malicious purposes, such as the injection of user surveillance features. To encourage further research on such malicious hash functions, we propose definitions of malicious hash functions and of associated security notions.
Last updated:  2024-06-07
Cryptanalysis of Simpira v1
Christoph Dobraunig, Maria Eichlseder, and Florian Mendel
Simpira v1 is a recently proposed family of permutations, based on the AES round function. The design includes recommendations for using the Simpira permutations in block ciphers, hash functions, or authenticated ciphers. The designers' security analysis is based on computer-aided bounds for the minimum number of active S-boxes. We show that the underlying assumptions of independence, and thus the derived bounds, are incorrect. For family member Simpira-4, we provide differential trails with only 40 (instead of 75) active S-boxes for the recommended 15 rounds. Based on these trails, we propose full-round collision attacks on the proposed Simpira-4 Davies-Meyer hash construction, with complexity $2^{82.62}$ for the recommended full 15 rounds and a truncated 256-bit hash value, and complexity $2^{110.16}$ for 16 rounds and the full 512-bit hash value. These attacks violate the designers' security claims that there are no structural distinguishers with complexity below $2^{128}$.
Last updated:  2024-06-07
Related-Key Forgeries for Prøst-OTR
Christoph Dobraunig, Maria Eichlseder, and Florian Mendel
We present a forgery attack on Prøst-OTR in a related-key setting. Prøst is a family of authenticated encryption algorithms proposed as candidates in the currently ongoing CAESAR competition, and Prøst-OTR is one of the three variants of the Prøst design. The attack exploits how the Prøst permutation is used in an Even-Mansour construction in the Feistel-based OTR mode of operation. Given the ciphertext and tag for any two messages under two related keys $K$ and $K + \Delta$ with related nonces, we can forge the ciphertext and tag for a modified message under $K$. If we can query ciphertexts for chosen messages under $K + \Delta$, we can achieve almost universal forgery for $K$. The computational complexity is negligible.
Last updated:  2024-06-07
Analysis of SHA-512/224 and SHA-512/256
Christoph Dobraunig, Maria Eichlseder, and Florian Mendel
In 2012, NIST standardized SHA-512/224 and SHA-512/256, two truncated variants of SHA-512, in FIPS 180-4. These two hash functions are faster than SHA-224 and SHA-256 on 64-bit platforms, while maintaining the same hash size and claimed security level. So far, no third-party analysis of SHA-512/224 or SHA-512/256 has been published. In this work, we examine the collision resistance of step-reduced versions of SHA-512/224 and SHA-512/256 by using differential cryptanalysis in combination with sophisticated search tools. We are able to generate practical examples of free-start collisions for 44-step SHA-512/224 and 43-step SHA-512/256. Thus, the truncation performed by these variants on their larger state allows us to attack several more rounds compared to the untruncated family members. In addition, we improve upon the best published collisions for 24-step SHA-512 and present practical collisions for 27 steps of SHA-512/224, SHA-512/256, and SHA-512.
Last updated:  2024-06-07
Quantum Evolving Secret Sharing for General Access Structures
Efrat Cohen and Anat Paskin-Cherniavsky
In the useful and well-studied model of secret-sharing schemes, there are $n$ parties and a dealer, which holds a secret. The dealer applies some randomized algorithm to the secret, resulting in $n$ strings, called shares; it gives the $i$'th share to the $i$'th party. There are two requirements. (1) Correctness: some predefined subsets of the parties can jointly reconstruct the secret from their shares. (2) Security: any other set gets no information on the secret. The collection of predefined qualified sets is called an access structure (AS). This model assumes that the number of parties is known when preparing and distributing the shares; furthermore, the sharing algorithm and the share size are determined by the number of parties, e.g., in the best-known secret-sharing scheme for an arbitrary $n$-party access structure, by Applebaum and Nir, the share size is $1.5^{n}$. The assumption that the number of parties is known in advance is problematic in many scenarios. Of course, one can take some upper bound on the number of parties. On the one hand, if this bound is big, then the share size will be large even if only a few parties actually participate in the scheme. On the other hand, if this bound is small, then there is a risk that too many parties will arrive and no further shares can be produced; this would require an expensive re-sharing of the secret and updating of all shares (which can be impossible if some parties are temporarily off-line). Thus, we need to consider models with an unbounded number of parties. To address these concerns, Komargodski, Naor, and Yogev defined \emph{evolving secret-sharing schemes} with an unbounded number of parties. In a nutshell, evolving AS's are defined as a monotone collection of finite qualified sets, such that at any time $t$ a set $A\subseteq [t]$ is either qualified or not, depending only on $A$ itself and not on $t$ (a `global' monotonicity notion). Quantum secret sharing (QSS) considers the standard $n$-party setting where the secret is an arbitrary quantum state (say, a qubit) rather than classical data. In the face of recent advancements in quantum computing, this is a natural notion to consider, and it has been studied before. In this work, we explore the natural notion of quantum evolving secret sharing (QESS). While this notion has been studied by Samadder 20', we make several new contributions. (1) The notion of QESS was only implicit in the above work. We formalize this notion (as well as the AS's for which it is applicable) and, in particular, argue that the variant implied by the above work did not require `global monotonicity' of the AS, which has been the standard in the evolving secret-sharing literature and appears to be useful for QESS as well. (2) We discuss the applicability and limitations of the notion in the quantum setting that follow from the no-cloning theorem, which make its usability more limited. Yet, we argue that fundamental advantages of the evolving setting, such as keeping parties' shares independent of the total number of parties that arrive, can be maintained in the quantum setting. (3) We characterize the AS's amenable to the construction of QESS, the so-called `no-cloning' evolving AS's, and point out that this class is not severely restricted relative to the class of all evolving AS's. On the positive side, our construction combines the compiler of [Smith 00'] with ideas of hybrid secret sharing of [Goyal et al. 23'] to obtain a construction with share size comparable to the best classical linear share complexity of the scheme.
Last updated:  2024-06-07
Practical Key Recovery Attack on MANTIS-5
Christoph Dobraunig, Maria Eichlseder, Daniel Kales, and Florian Mendel
MANTIS is a lightweight tweakable block cipher recently published at CRYPTO 2016. In addition to the full 14-round version, MANTIS-7, the designers also propose an aggressive 10-round version, MANTIS-5. The security claim for MANTIS-5 is resistance against "practical attacks", defined as related-tweak attacks with data complexity $2^d$ less than $2^{30}$ chosen plaintexts (or $2^{40}$ known plaintexts), and computational complexity at most $2^{126-d}$. We present a key-recovery attack against MANTIS-5 with $2^{28}$ chosen plaintexts and a computational complexity of about $2^{38}$ block cipher calls, which violates this claim. Our attack is based on a family of differential characteristics and exploits several properties of the lightweight round function and tweakey schedule. To verify the validity of the attack, we also provide a practical implementation which recovers the full key in about 1 core hour using $2^{30}$ chosen plaintexts.
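For concreteness, the violation can be checked directly from the numbers in the abstract: the attack uses $d = 28$, so
\[
2^{d} = 2^{28} < 2^{30}, \qquad 2^{126-d} = 2^{98}, \qquad 2^{38} \ll 2^{98},
\]
i.e., the attack stays within the claimed data regime while beating the claimed computational bound by a factor of about $2^{60}$.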
Last updated:  2024-06-07
Clustering Related-Tweak Characteristics: Application to MANTIS-6
Maria Eichlseder and Daniel Kales
The TWEAKEY/STK construction is an increasingly popular approach for designing tweakable block ciphers that notably uses a linear tweakey schedule. Several recent attacks have analyzed the implications of this approach for differential cryptanalysis and other attacks that can take advantage of related tweakeys. We generalize the clustering approach of a recent differential attack on the tweakable block cipher MANTIS-5 and describe a tool for efficiently finding and evaluating such clusters. More specifically, we consider the set of all differential characteristics compatible with a given truncated characteristic, tweak difference, and optional constraints for the differential. We refer to this set as a semi-truncated characteristic and estimate its probability by analyzing the distribution of compatible differences at each step. We apply this approach to find a semi-truncated differential characteristic for MANTIS-6 with probability about $2^{-67.73}$ and derive a key-recovery attack with a complexity of about $2^{53.94}$ chosen-plaintext queries and computations. The data-time product is $2^{107.88} \ll 2^{126}$.
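The quoted data-time product follows directly from the stated complexities:
\[
\underbrace{2^{53.94}}_{\text{data}} \times \underbrace{2^{53.94}}_{\text{time}} = 2^{107.88} \ll 2^{126},
\]
so the product of data and time complexity remains far below the $2^{126}$ threshold used in the MANTIS security argument.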
Last updated:  2024-06-07
Rasta: A cipher with low ANDdepth and few ANDs per bit
Christoph Dobraunig, Maria Eichlseder, Lorenzo Grassi, Virginie Lallemand, Gregor Leander, Eik List, Florian Mendel, and Christian Rechberger
Recent developments in multi-party computation (MPC) and fully homomorphic encryption (FHE) have promoted the design and analysis of symmetric cryptographic schemes that minimize multiplications in one way or another. In this paper, we propose Rasta, a design strategy for symmetric encryption that has ANDdepth $d$ and at the same time needs only $d$ ANDs per encrypted bit. Even for very low values of $d$ between 2 and 6, we can give strong evidence that attacks may not exist. This contributes to a better understanding of the limits of what concrete symmetric-key constructions can theoretically achieve with respect to AND-related metrics, and is, to the best of our knowledge, the first attempt to minimize both metrics simultaneously. Furthermore, we give evidence that for choices of $d$ between 4 and 6 the resulting implementation properties may well be competitive, by testing our construction in the use case of removing the large ciphertext expansion when using the BGV scheme.
Last updated:  2024-06-07
Cryptanalysis of MORUS
Tomer Ashur, Maria Eichlseder, Martin M. Lauridsen, Gaëtan Leurent, Brice Minaud, Yann Rotella, Yu Sasaki, and Benoît Viguier
MORUS is a high-performance authenticated encryption algorithm submitted to the CAESAR competition, and recently selected as a finalist. There are three versions of MORUS: MORUS-640 with a 128-bit key, and MORUS-1280 with 128-bit or 256-bit keys. For all versions the security claim for confidentiality matches the key size. In this paper, we analyze the components of this algorithm (initialization, state update and tag generation), and report several results. As our main result, we present a linear correlation in the keystream of full MORUS, which can be used to distinguish its output from random and to recover some plaintext bits in the broadcast setting. For MORUS-1280, the correlation is $2^{-76}$, which can be exploited after around $2^{152}$ encryptions, less than would be expected for a 256-bit secure cipher. For MORUS-640, the same attack results in a correlation of $2^{-73}$, which does not violate the security claims of the cipher. To identify this correlation, we make use of rotational symmetries in MORUS using linear masks that are invariant by word-rotations of the state. This motivates us to introduce single-word versions of MORUS called MiniMORUS, which simplifies the analysis. The attack has been implemented and verified on MiniMORUS, where it yields a correlation of $2^{-16}$. We also study reduced versions of the initialization and finalization of MORUS, aiming to evaluate the security margin of these components. We show a forgery attack when finalization is reduced from 10 steps to 3, and a key-recovery attack in the nonce-misuse setting when initialization is reduced from 16 steps to 10. These additional results do not threaten the full MORUS, but studying all aspects of the design is useful to understand its strengths and weaknesses.
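As a sanity check on these figures, distinguishing a keystream bias with correlation $c$ requires on the order of $c^{-2}$ samples:
\[
\left(2^{-76}\right)^{-2} = 2^{152} < 2^{256} \quad \text{(MORUS-1280, 256-bit key)}, \qquad
\left(2^{-73}\right)^{-2} = 2^{146} > 2^{128} \quad \text{(MORUS-640)},
\]
which is why only the 256-bit confidentiality claim is undermined.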
Last updated:  2024-06-07
Ring Signatures for Deniable AKEM: Gandalf's Fellowship
Phillip Gajland, Jonas Janneck, and Eike Kiltz
Ring signatures, a cryptographic primitive introduced by Rivest, Shamir and Tauman (ASIACRYPT 2001), offer signer anonymity within dynamically formed user groups. Recent advancements have focused on lattice-based constructions to improve efficiency, particularly for large signing rings. However, current state-of-the-art solutions suffer from significant overhead, especially for smaller rings. In this work, we present a novel NTRU-based ring signature scheme, Gandalf, tailored towards small rings. Our post-quantum scheme achieves a 50% reduction in signature sizes compared to the linear ring signature scheme Raptor (ACNS 2019). For rings of size two, our signatures are approximately a quarter the size of DualRing (CRYPTO 2021), another linear scheme, and remain more compact for rings up to size seven. Compared to the sublinear scheme Smile (CRYPTO 2021), our signatures are more compact for rings of size up to 26. In particular, for rings of size two, our ring signatures are only 1236 bytes. Additionally, we explore the use of ring signatures to obtain deniability in authenticated key exchange mechanisms (AKEMs), the primitive behind the recent HPKE standard used in MLS and TLS. We take a fine-grained approach to formalising sender deniability within AKEM and seek to define the strongest possible notions. Our contributions extend to a black-box construction of a deniable AKEM from a KEM and a ring signature scheme for rings of size two. Our approach attains the highest level of confidentiality and authenticity, while simultaneously preserving the strongest forms of deniability in two orthogonal settings. Finally, we present parameter sets for our schemes, and show that our deniable AKEM, when instantiated with our ring signature scheme, yields ciphertexts of 2004 bytes.
Last updated:  2024-06-07
A Tight Security Proof for $\mathrm{SPHINCS^{+}}$, Formally Verified
Manuel Barbosa, François Dupressoir, Andreas Hülsing, Matthias Meijers, and Pierre-Yves Strub
$\mathrm{SPHINCS^{+}}$ is a post-quantum signature scheme that, at the time of writing, is being standardized as $\mathrm{SLH\text{-}DSA}$. It is the most conservative option for post-quantum signatures, but the original tight proofs of security were flawed—as reported by Kudinov, Kiktenko and Fedorov in 2020. In this work, we formally prove a tight security bound for $\mathrm{SPHINCS^{+}}$ using the EasyCrypt proof assistant, establishing greater confidence in the general security of the scheme and that of the parameter sets considered for standardization. To this end, we reconstruct the tight security proof presented by Hülsing and Kudinov (in 2022) in a modular way. A small but important part of this effort involves a complex argument relating four different games at once, of a form not yet formalized in EasyCrypt (to the best of our knowledge). We describe our approach to overcoming this major challenge, and develop a general formal verification technique aimed at this type of reasoning. Enhancing the set of reusable EasyCrypt artifacts previously produced in the formal verification of stateful hash-based cryptographic constructions, we (1) improve and extend the existing libraries for hash functions and (2) develop new libraries for fundamental concepts related to hash-based cryptographic constructions, including Merkle trees. These enhancements, along with the formal verification technique we develop, further ease future formal verification endeavors in EasyCrypt, especially those concerning hash-based cryptographic constructions.
Last updated:  2024-06-07
Autoguess: A Tool for Finding Guess-and-Determine Attacks and Key Bridges
Hosein Hadipour and Maria Eichlseder
The guess-and-determine technique is one of the most widely used techniques in cryptanalysis to recover unknown variables in a given system of relations. In such attacks, a subset of the unknown variables is guessed such that the remaining unknowns can be deduced using the information from the guessed variables and the given relations. This idea can be applied in various areas of cryptanalysis, such as finding the internal state of stream ciphers when a sufficient amount of output data is available, or recovering the internal state and the secret key of a block cipher from very few known plaintexts. Another important application is the key-bridging technique in key-recovery attacks on block ciphers, where the attacker aims to find the minimum number of required sub-key guesses to deduce all involved sub-keys via the key schedule. Since the complexity of the guess-and-determine technique directly depends on the number of guessed variables, it is essential to find the smallest possible guess basis, i.e., the subset of guessed variables from which the remaining variables can be deduced. In this paper, we present Autoguess, an easy-to-use general tool to search for a minimal guess basis. We propose several new modeling techniques to harness SAT/SMT, MILP, and Gröbner basis solvers. We demonstrate their usefulness in guess-and-determine attacks on stream ciphers and block ciphers, as well as in finding key bridges in key-recovery attacks on block ciphers. Moreover, by integrating our CP models for the key-bridging technique into previous CP-based frameworks for searching for distinguishers, we propose a unified and general CP model to search for key-recovery-friendly distinguishers that supports both linear and nonlinear key schedules.
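As a toy illustration of the underlying guess-basis problem (a hypothetical brute-force sketch; Autoguess itself encodes the search as SAT/SMT, MILP, or Gröbner-basis problems to scale far beyond this):

```python
# Toy minimal-guess-basis search: each relation says that once all but one
# of its variables are known, the last one can be deduced. We look for the
# smallest set of guessed variables whose deduction closure covers everything.
# Brute force over subsets -- for illustration only.
from itertools import combinations

variables = {"k0", "k1", "k2", "k3"}
relations = [{"k0", "k1", "k2"},   # e.g. k0 XOR k1 XOR k2 = const
             {"k2", "k3"}]        # e.g. k3 = S(k2)

def closure(known):
    known, changed = set(known), True
    while changed:
        changed = False
        for rel in relations:
            missing = rel - known
            if len(missing) == 1:   # exactly one unknown left: deduce it
                known |= missing
                changed = True
    return known

for size in range(len(variables) + 1):
    basis = next((set(c) for c in combinations(sorted(variables), size)
                  if closure(c) == variables), None)
    if basis is not None:
        print(f"minimal guess basis of size {size}: {sorted(basis)}")
        break   # here: size 2, e.g. guessing {k0, k1} determines k2, then k3
```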
Last updated:  2024-06-07
Anonymous Tokens with Public Metadata and Applications to Private Contact Tracing
Tjerand Silde and Martin Strand
Anonymous single-use tokens have seen recent applications in private Internet browsing and anonymous statistics collection. We develop new schemes in order to include public metadata such as expiration dates for tokens. This inclusion enables planned mass revocation of tokens without distributing new keys, which for natural instantiations can give 77% and 90% amortized traffic savings compared to Privacy Pass (Davidson et al., 2018) and DIT: De-Identified Authenticated Telemetry at Scale (Huang et al., 2021), respectively. By transforming the public key, we are able to append public metadata to several existing protocols essentially without increasing computation or communication. Additional contributions include expanded definitions, a more complete framework for anonymous single-use tokens, and a description of how anonymous tokens can improve privacy in DP-3T-like digital contact tracing applications. We also extend the protocol to create efficient and conceptually simple tokens with both public and private metadata, and tokens with public metadata and public verifiability from pairings.
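To give a flavor of the key transformation (a hedged sketch of one natural discrete-log-style instantiation; the paper's actual protocols differ in details), suppose the issuer holds secret key $k$ with public key $K = g^{k}$. Deriving a metadata-specific key
\[
k_d = k + H(d), \qquad K_d = K \cdot g^{H(d)},
\]
lets anyone compute the transformed public key $K_d$ from $K$ and the public metadata $d$ alone, so tokens can be issued and verified under $k_d$ without distributing new keys, and all tokens bound to an expired $d$ are revoked at once.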
Last updated:  2024-06-07
Verifiable Decryption for BGV
Tjerand Silde
In this work we present a direct construction of verifiable decryption for the BGV encryption scheme by combining existing zero-knowledge proofs for linear relations and bounded values. This is one of the first constructions of verifiable decryption protocols for lattice-based cryptography, and we give a protocol that is simpler and at least as efficient as the state of the art when amortizing over many ciphertexts. To prove its practicality, we provide concrete parameters, resulting in a proof size of less than $44 \tau$ KB for $\tau$ ciphertexts with message space $2048$ bits. Furthermore, we provide an open-source implementation showing that the amortized cost of the verifiable decryption protocol is only $76$ ms per message when batching over $\tau = 2048$ ciphertexts.
Last updated:  2024-06-07
BRAKE: Biometric Resilient Authenticated Key Exchange
Pia Bauspieß, Tjerand Silde, Matej Poljuha, Alexandre Tullot, Anamaria Costache, Christian Rathgeb, Jascha Kolberg, and Christoph Busch
Biometric data are uniquely suited for connecting individuals to their digital identities. Deriving cryptographic key exchange from successful biometric authentication therefore gives an additional layer of trust compared to password-authenticated key exchange. However, biometric data are sensitive personal data that need to be protected on a long-term basis. Furthermore, efficient feature extraction and comparison components resulting in high intra-subject tolerance and inter-subject distinguishability, documented with good biometric performance, need to be applied in order to prevent zero-effort impersonation attacks. In this work, we present a novel protocol for Biometric Resilient Authenticated Key Exchange that fulfils the above requirements of biometric information protection compliant with the international ISO/IEC 24745 standard. In our protocol, we present a novel modification of unlinkable fuzzy vault schemes that allows their connection with oblivious pseudo-random functions to achieve resilient protection against offline attacks crucial for the protection of biometric data. Our protocol is independent of the biometric modality and can be implemented based on the security of discrete logarithms as well as lattices. We provide an open-source implementation of both instantiations of our protocol which achieve real-time efficiency with transaction times of less than one second from the image capture to the completed key exchange.
Last updated:  2024-06-07
Further Improvements of the Estimation of Key Enumeration with Applications to Solving LWE
Alessandro Budroni and Erik Mårtensson
In post-quantum cryptography, Learning With Errors (LWE) is one of the dominant underlying mathematical problems. The dual attack is one of the main strategies for solving the LWE problem, and it has recently gathered significant attention within the research community. The attack strategy consists of a lattice reduction part and a distinguishing part. The latter includes an enumeration subroutine over a certain number of positions of the secret key. Our contribution consists of giving a precise and efficient approach for calculating the expected complexity of such an enumeration procedure, which was missing in the literature. This allows us to decrease the estimated cost of the whole dual attack, both classically and quantumly, on well-known protocols such as Kyber, Saber, and TFHE. In addition, we explore different enumeration strategies to investigate some potential further improvements. As our method of calculating the expected cost of enumeration is pretty general, it might be of independent interest in other areas of cryptanalysis or even in different research areas.
Last updated:  2024-06-07
A Subexponential Quantum Algorithm for the Semidirect Discrete Logarithm Problem
Christopher Battarbee, Delaram Kahrobaei, Ludovic Perret, and Siamak F. Shahandashti
Group-based cryptography is a relatively unexplored family in post-quantum cryptography, and the so-called Semidirect Discrete Logarithm Problem (SDLP) is one of its most central problems. However, the complexity of SDLP and its relationship to more well-known hardness problems, particularly with respect to its security against quantum adversaries, has not been well understood and was a significant open problem for researchers in this area. In this paper we give the first dedicated security analysis of SDLP. In particular, we provide a connection between SDLP and group actions, a context in which quantum subexponential algorithms are known to apply. We are therefore able to construct a subexponential quantum algorithm for solving SDLP, thereby classifying the complexity of SDLP and its relation to known computational problems.
Last updated:  2024-06-07
Algebraic Cryptanalysis of Frit
Christoph Dobraunig, Maria Eichlseder, Florian Mendel, and Markus Schofnegger
Frit is a cryptographic 384-bit permutation recently proposed by Simon et al. and follows a novel design approach for built-in countermeasures against fault attacks. We analyze the cryptanalytic security of Frit in different use-cases and propose attacks on the full-round primitive. We show that the inverse Frit$^{-1}$ of Frit is significantly weaker than Frit from an algebraic perspective, despite the better diffusion of the inverse of the used mixing functions: Its round function has an effective algebraic degree of only about 1.325. We show how to craft structured input spaces to linearize up to 4 (or, conditionally, 5) rounds and thus further reduce the degree. As a result, we propose very low-dimensional start-in-the-middle zero-sum partitioning distinguishers for unkeyed Frit, as well as integral distinguishers for round-reduced Frit and full-round Frit$^{-1}$. We also consider keyed Frit variants using Even-Mansour or arbitrary round keys. By using optimized interpolation attacks and symbolically evaluating up to 5 rounds of Frit$^{-1}$, we obtain key-recovery attacks with a complexity of either $2^{59}$ chosen plaintexts and $2^{67}$ time, or $2^{18}$ chosen ciphertexts and time (about 10 seconds in practice).
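A back-of-envelope calculation, under the rough heuristic that the algebraic degree multiplies by about $1.325$ per round, shows why such low-dimensional distinguishers are plausible:
\[
1.325^{16} \approx 90 \ll 383, \qquad 1.325^{12} \approx 29,
\]
so even after 16 rounds of Frit$^{-1}$ the degree stays far below the maximum of $383$ for a 384-bit permutation, and linearizing the first 4 rounds shrinks the required input-space dimension to roughly 30.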
Last updated:  2024-06-07
Protecting against Statistical Ineffective Fault Attacks
Joan Daemen, Christoph Dobraunig, Maria Eichlseder, Hannes Gross, Florian Mendel, and Robert Primas
At ASIACRYPT 2018 it was shown that Statistical Ineffective Fault Attacks (SIFA) pose a threat for many practical implementations of symmetric primitives. In particular, countermeasures against both power analysis and fault attacks typically do not prevent straightforward SIFA attacks that require only very limited knowledge about the concrete attacked implementation. Consequently, the exploration of countermeasures against SIFA that do not rely on protocols or physical protection mechanisms is of great interest. In this paper, we explore different countermeasure strategies against SIFA. First, we introduce an abstraction layer between the algorithmic specification of a cipher and its implementation in hardware or software to study and describe resistance against SIFA. We then show that by basing the masked implementation on permutations as building blocks, we can build circuits that withstand single-fault SIFA and DPA attacks. We show how this approach can be applied to 3-bit, 4-bit, and 5-bit S-boxes and the AES S-box. Additionally, we present a strategy based on fine-grained fault detection suitable for protecting any circuit against SIFA attacks. Although this approach may lead to a higher implementation cost due to the fine-grained detection needed, it can be used to protect arbitrary circuits and can be generalized to cover multi-fault SIFA.
Last updated:  2024-06-07
Information-Combining Differential Fault Attacks on DEFAULT
Marcel Nageler, Christoph Dobraunig, and Maria Eichlseder
Differential fault analysis (DFA) is a very powerful attack vector on implementations of symmetric cryptography. Most countermeasures are applied at the implementation level. At ASIACRYPT 2021, Baksi et al. proposed a design strategy that aims to provide inherent cipher-level resistance against DFA by using S-boxes with linear structures. They argue that in their instantiation, the block cipher DEFAULT, a DFA adversary can learn at most 64 of the 128 key bits, so the remaining brute-force complexity of $2^{64}$ is impractical. In this paper, we show that a DFA adversary can combine information across rounds to recover the full key, invalidating their security claim. In particular, we observe that such ciphers exhibit large classes of equivalent keys that can be represented efficiently in normalized form using linear equations. We exploit this in combination with the specifics of DEFAULT's strong key schedule to recover the key using fewer than 100 faulty computations and negligible time complexity. Moreover, we show that even an idealized version of DEFAULT with independent round keys is vulnerable to our information-combining attacks based on normalized keys.
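The effect of linear structures on key equivalence can be seen on a toy S-box (a made-up 3-bit example for illustration; this is not DEFAULT's S-box):

```python
# Toy demonstration of key equivalence caused by an S-box linear structure.
# This 3-bit S-box satisfies S(x ^ 4) = S(x) ^ 1 for every x, i.e. the pair
# (a, b) = (4, 1) is a linear structure (made-up example, not DEFAULT's S-box).
S = [6, 3, 5, 0, 7, 2, 4, 1]
a, b = 4, 1
assert all(S[x ^ a] == S[x] ^ b for x in range(8))

# In a round y = S(x ^ k), key k ^ a produces the same outputs as key k up
# to the constant offset b, for every input x:
k = 0b011
assert all(S[x ^ (k ^ a)] == S[x ^ k] ^ b for x in range(8))

# A DFA adversary observing only output differences therefore cannot
# distinguish k from k ^ a: both keys explain the same fault statistics,
# which is what creates the large classes of equivalent keys.
```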
Last updated:  2024-06-07
Analyzing the Linear Keystream Biases in AEGIS
Maria Eichlseder, Marcel Nageler, and Robert Primas
AEGIS is one of the authenticated encryption designs selected for the final portfolio of the CAESAR competition. It combines the AES round function and simple Boolean operations to update its large state and extract a keystream to achieve an excellent software performance. In 2014, Minaud discovered slight biases in the keystream based on linear characteristics. For family member AEGIS-256, these could be exploited to undermine the confidentiality faster than generic attacks, but this still requires very large amounts of data. For final portfolio member AEGIS-128, these attacks are currently less efficient than generic attacks. We propose improved keystream approximations for the AEGIS family, but also prove upper bounds below $2^{-128}$ for the squared correlation contribution of any single suitable linear characteristic.
Last updated:  2024-06-07
Forgery Attacks on FlexAE and FlexAEAD
Maria Eichlseder, Daniel Kales, and Markus Schofnegger
FlexAEAD is one of the round-1 candidates in the ongoing NIST Lightweight Cryptography standardization project. In this note, we show several forgery attacks on FlexAEAD with complexity less than the security bound given by the designers, such as a block reordering attack on full FlexAEAD-128 with estimated success probability about $2^{-54}$. Additionally, we show some trivial forgeries and point out domain separation issues.
Last updated:  2024-06-07
Practical Key Recovery Attacks on FlexAEAD
Orr Dunkelman, Maria Eichlseder, Daniel Kales, Nathan Keller, Gaëtan Leurent, and Markus Schofnegger
FlexAEAD is a block cipher candidate submitted to the NIST Lightweight Cryptography standardization project, based on repeated application of an Even-Mansour construction. In order to optimize performance, the designers chose a relatively small number of rounds, using properties of the mode and bounds on differential and linear characteristics to substantiate their security claims. Due to a forgery attack with complexity $2^{46}$, FlexAEAD was not selected to the second round of evaluation in the NIST project. In this paper we present a practical key recovery attack on FlexAEAD, using clusters of differentials for the internal permutation and the interplay between different parts of the mode. Our attack, which was fully verified in practice, allows recovering the secret subkeys of FlexAEAD-64 with a time complexity of less than $2^{31}$ encryptions (with an experimental success rate of $75\,\%$). This is the first practical key recovery attack on a candidate of the NIST standardization project.
Last updated:  2024-06-07
Integral Cryptanalysis of WARP based on Monomial Prediction
Hosein Hadipour and Maria Eichlseder
WARP is a 128-bit block cipher published by Banik et al. at SAC 2020 as a lightweight alternative to AES. It is based on a generalized Feistel network and achieves the smallest area footprint among 128-bit block ciphers in many settings. Previous analysis results include integral key-recovery attacks on 21 out of 41 rounds. In this paper, we propose integral key-recovery attacks on up to 32 rounds by improving both the integral distinguisher and the key-recovery approach substantially. For the distinguisher, we show how to model the monomial prediction technique proposed by Hu et al. at ASIACRYPT 2020 as a SAT problem and thus create a bit-oriented model of WARP taking the key schedule into account. Together with two additional observations on the properties of WARP's construction, we extend the best previous distinguisher by 2 rounds (as a classical integral distinguisher) or 4 rounds (for a generalized integral distinguisher). For the key recovery, we create a graph-based model of the round function and demonstrate how to manipulate the graph to obtain a cipher representation amenable to FFT-based key recovery.
Last updated:  2024-06-07
Approximate CRT-Based Gadget Decomposition and Application to TFHE Blind Rotation
Olivier Bernard and Marc Joye
One of the main issues to deal with for fully homomorphic encryption is the noise growth when operating on ciphertexts. To some extent, this can be controlled thanks to a so-called gadget decomposition. A gadget decomposition typically relies on radix- or CRT-based representations to split elements as vectors of smaller chunks whose inner products with the corresponding gadget vector rebuild (an approximation of) the original elements. Radix-based gadget decompositions present the advantage of also supporting the approximate setting: for most homomorphic operations, this has a minor impact on the noise propagation but leads to substantial savings in bandwidth, memory requirements, and computational costs. A typical use case is the blind rotation as used, for example, in the bootstrapping of the TFHE scheme. On the other hand, CRT-based representations are convenient when machine words are too small for directly accommodating the arithmetic on large operands. This arises in two typical cases: (i) in the hardware case with multipliers of restricted size, e.g., 17 bits; (ii) in the software case for ciphertext moduli above, e.g., 128 bits. This paper presents new CRT-based gadget decompositions for the approximate setting, which combine the advantages of non-exact decompositions with those of CRT-based decompositions. Significantly, this enables certain hardware or software realizations otherwise hardly supported, like the two aforementioned cases. In particular, we show that our new gadget decompositions provide implementations of the (programmable) bootstrapping in TFHE relying solely on native arithmetic and offering extra degrees of parallelism.
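For intuition, here is a minimal sketch of the radix-$B$ case (the paper's contribution is the CRT-based analogue; parameters below are illustrative): an exact decomposition rebuilds an element as the inner product of its digits with the gadget vector $(1, B, \ldots, B^{\ell-1})$, while the approximate variant drops low-order digits in exchange for a bounded rounding error and shorter vectors.

```python
# Radix-B gadget decomposition: split x into l base-B digits d_i so that
# sum(d_i * B**i) = x, i.e. the inner product with the gadget vector
# (1, B, ..., B**(l-1)) rebuilds x. Dropping the k least significant digits
# gives the approximate setting: the rebuilt value is off by less than B**k.
B, l = 2**8, 4                      # radix and digit count (handles x < B**l)

def decompose(x, drop=0):
    digits = [(x // B**i) % B for i in range(l)]
    return digits[drop:], B**drop   # remaining digits and their base scale

def recompose(digits, scale):
    return sum(d * scale * B**i for i, d in enumerate(digits))

x = 0x12345678
exact, s = decompose(x)
assert recompose(exact, s) == x     # exact decomposition: no error

approx, s = decompose(x, drop=2)    # approximate: 2 digits dropped
err = x - recompose(approx, s)
assert 0 <= err < B**2              # rounding error bounded by B**2
print(f"approximate rebuild off by {err} < {B**2}")
```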
Last updated:  2024-06-07
Preliminary Analysis of Ascon-Xof and Ascon-Hash
Christoph Dobraunig, Maria Eichlseder, Florian Mendel, and Martin Schläffer
In this note, we present additional preliminary analysis dedicated to Ascon-Xof and Ascon-Hash [DEMS19].
Last updated:  2024-06-07
Ascon PRF, MAC, and Short-Input MAC
Christoph Dobraunig, Maria Eichlseder, Florian Mendel, and Martin Schläffer
The cipher suite Ascon v1.2 already provides authenticated encryption schemes, hash, and extendable output functions. Furthermore, the underlying permutation is also used in two instances of Isap v2.0, an authenticated encryption scheme designed to provide enhanced robustness against side-channel and fault attacks. In this paper, we enrich the functionality one can get out of Ascon's permutation by providing efficient Pseudorandom Functions (PRFs), a Message Authentication Code (MAC) and a fast short-input PRF for messages up to 128 bits.
Last updated:  2024-06-07
Practical and Scalable Access Control Mechanism for the Internet of Things using Time-bound Attribute-based Encryption
Clémentine Gritti, Emanuel Regnath, and Sebastian Steinhorst
Internet of Things (IoT) promises a strong connection between digital and physical environments. Nevertheless, such a framework comes with huge security vulnerabilities, due to the heterogeneous nature of devices and the diversity of their provenance. Furthermore, the resource constraints of weaker devices, such as sensors, require a lightweight design of security protocols. In 2018, Liu et al. presented a new system with access-control key updates and direct user revocation, which are beneficial features in IoT. Access control is done using Ciphertext-Policy Attribute-Based Encryption, where attributes represent roles of devices within their networks and time validity ranges. In this paper, we propose an extension of this system that enables several authorities to allocate those role attributes, mitigating the key escrow problem. Moreover, we devise a novel approach, based on a binary tree, to append the time credentials. This allows us to find an interesting trade-off between key update frequency and user revocation list length, which matters for time-sensitive data exchanged in IoT environments. We adapt the security model to the multi-authority setting and prove our scheme secure under the Decisional Bilinear Diffie-Hellman Exponent assumption. Finally, we implement and evaluate our solution, confirming that it is fully deployable in IoT networks.
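The binary-tree handling of time credentials can be sketched generically (a hedged illustration of the standard tree-cover technique; the paper's construction embeds time into the ABE attributes): time periods are the leaves of a complete binary tree, a credential for an internal node covers its whole subtree, and any contiguous validity range over $T$ periods is covered by $O(\log T)$ node credentials.

```python
# Cover a contiguous validity window [lo, hi] of time periods (leaves of a
# complete binary tree over T = 2**depth periods) by a minimal set of
# subtree roots; holding credentials for these O(log T) nodes covers exactly
# the window. Illustrative sketch of the generic tree-cover technique.
def cover(lo, hi, node=0, left=0, size=None, depth=4):
    if size is None:
        size = 2**depth
    right = left + size - 1
    if hi < left or right < lo:
        return []                        # subtree disjoint from the window
    if lo <= left and right <= hi:
        return [(node, left, right)]     # fully inside: one credential suffices
    half = size // 2
    return (cover(lo, hi, 2 * node + 1, left, half) +
            cover(lo, hi, 2 * node + 2, left + half, half))

# Periods 3..12 out of 16 are covered by just 4 subtree credentials:
print(cover(3, 12))  # [(node, first_period, last_period), ...]
```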
Last updated:  2024-06-07
Signatures with Memory-Tight Security in the Quantum Random Oracle Model
Keita Xagawa
Memory tightness of reductions in cryptography, in addition to the standard tightness related to advantage and running time, is important when the underlying problem can be solved efficiently with large memory, as discussed in Auerbach, Cash, Fersch, and Kiltz (CRYPTO 2017). Diemert, Geller, Jager, and Lyu (ASIACRYPT 2021) and Ghoshal, Ghosal, Jaeger, and Tessaro (EUROCRYPT 2022) gave memory-tight proofs for the multi-challenge security of digital signatures in the random oracle model. This paper studies the memory-tight reductions for _post-quantum_ signature schemes in the _quantum_ random oracle model. Concretely, we show that signature schemes from lossy identification are multi-challenge secure in the quantum random oracle model via memory-tight reductions. Moreover, we show that the signature schemes from lossy identification achieve more enhanced securities considering _quantum_ signing oracles proposed by Boneh and Zhandry (CRYPTO 2013) and Alagic, Majenz, Russel, and Song (EUROCRYPT 2020). We additionally show that signature schemes from preimage-sampleable functions achieve those securities via memory-tight reductions.
Last updated:  2024-06-07
QFESTA: Efficient Algorithms and Parameters for FESTA using Quaternion Algebras
Kohei Nakagawa and Hiroshi Onuki
In 2023, Basso, Maino, and Pope proposed FESTA (Fast Encryption from Supersingular Torsion Attacks), an isogeny-based public-key encryption (PKE) protocol that uses the SIDH attack for decryption. In the same paper, they proposed a parameter set for the protocol, but it requires high-degree isogeny computations. In this paper, we introduce QFESTA (Quaternion Fast Encapsulation from Supersingular Torsion Attacks), a new variant of FESTA that works with better parameters using quaternion algebras and achieves IND-CCA security in the QROM. To realize our protocol, we construct a new algorithm to compute an isogeny of non-smooth degree using quaternion algebras and the SIDH attack. Our protocol relies solely on $(2,2)$-isogeny and $3$-isogeny computations, promising a substantial reduction in computational costs. In addition, our protocol has significantly smaller data sizes for public keys and ciphertexts, approximately half the size of those of the original FESTA.
Last updated:  2024-06-07
Key lifting : Multi-key Fully Homomorphic Encryption in plain model without noise flooding
Xiaokang Dai, Wenyuan Wu, and Yong Feng
Multi-key Fully Homomorphic Encryption (MKFHE) based on the Learning With Errors (LWE) assumption usually lifts ciphertexts of different users to new ciphertexts under a common public key to enable homomorphic evaluation. The efficiency of current MKFHE schemes is mainly restricted by two aspects. (i) Expensive ciphertext expansion: in a Boolean circuit with input length $N$, multiplication depth $L$, and security parameter $\lambda$, the number of additional encryptions introduced to achieve ciphertext expansion is $O(N\lambda^6L^4)$. (ii) Noise flooding, resulting in a large modulus $q$: to prove the security of the scheme, the noise flooding introduced in the encryption and distributed decryption stages leads to a huge modulus $q = 2^{O(\lambda L)}B_\chi$, which burdens the whole scheme and leads to sub-exponential approximation factors $\gamma = \tilde{O}(n\cdot 2^{\sqrt{nL}})$. This paper addresses the first problem by presenting a Key-Lifting Multi-key Fully Homomorphic Encryption framework. With this \emph{key lifting} procedure, the number of encryptions for a local user is reduced to $O(N)$, similar to single-key fully homomorphic encryption (FHE). For the second problem, we prove a discrete Gaussian version of the smudging lemma and, combined with the anti-leakage properties of the encryption, remove the noise flooding introduced in the distributed decryption. Furthermore, we propose an analysis method based on Rényi divergence, which removes the noise flooding in the encryption stage. These approaches significantly reduce the size of the modulus $q$ (with $\log q = O(L)$) and the computational overhead of the entire scheme.