https://eprint.iacr.org/rss/atom.xml?order=recentCryptology ePrint Archive2024-05-20T15:25:16+00:00None of your businesshttps://iacr.org/img/logo/iacrlogo_small.pngMetadata is available under the CC0 license https://creativecommons.org/publicdomain/zero/1.0/. Each article has a PDF whose license is specified individually.The Cryptology ePrint Archive provides rapid access to recent
research in cryptology. Papers have been placed here by the
authors and did not undergo any refereeing process other than
verifying that the work seems to be within the scope of
cryptology and meets some minimal acceptance criteria and
publishing conditions.https://eprint.iacr.org/2024/773SQIPrime: A dimension 2 variant of SQISignHD with non-smooth challenge isogenies2024-05-20T15:25:16+00:00Max DuparcTako Boris FouotsaWe introduce SQIPrime, a post-quantum digital signature scheme based on the Deuring correspondence and Kani's Lemma. Compared to its predecessors SQISign and, especially, SQISignHD, SQIPrime further expands the use of high-dimensional isogenies, already used for verification in SQISignHD, to both key generation and commitment.
In doing so, it no longer relies on smooth-degree isogenies (of dimension 1). SQIPrime operates with a prime number of the form $p = 2^\alpha f-1$, as opposed to SQISignHD, which uses SIDH primes.
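To make the prime shape concrete, one can search for primes of the form $p = 2^\alpha f - 1$ with a probabilistic primality test. The sketch below is our illustration, not code or parameters from the paper; cryptographic instantiations would use $\alpha$ in the hundreds and properly certified primes.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test (stdlib only)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def find_prime_2af_minus_1(alpha, max_f=10**4):
    """Smallest odd cofactor f such that p = 2**alpha * f - 1 is (probably) prime."""
    for f in range(1, max_f, 2):
        p = (1 << alpha) * f - 1
        if is_probable_prime(p):
            return f, p
    return None
```

Such primes are plentiful, which is the point of relaxing the SIDH-prime requirement: the cofactor $f$ can stay small for essentially any power of two.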
The most intriguing novelty in SQIPrime is the use of non-smooth degree isogenies as the challenge isogeny. In fact, in the SQISign family identification scheme, the challenge isogeny is computed by the verifier, who is not well-equipped to compute an isogeny of large non-smooth degree. To overcome this obstacle, the verifier samples the kernel of the challenge isogeny, and the task of computing this isogeny is accomplished by the prover. The response is modified in such a way that the verifier can check that its challenge isogeny was correctly computed by the prover, on top of verifying the usual response in the SQISign family.
We describe two variants of SQIPrime: SQIPrime4D, which uses dimension 4 isogenies to represent the response isogeny, and SQIPrime2D, which uses solely dimension 2 isogenies for the response isogeny and is hence more efficient than both SQIPrime4D and SQISignHD.2024-05-20T15:25:16+00:00https://creativecommons.org/licenses/by/4.0/Max DuparcTako Boris Fouotsahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/772Reducing the Share Size of Weighted Threshold Secret Sharing Schemes via Chow Parameters Approximation2024-05-20T11:12:55+00:00Oriol FarràsMiquel GuiotA secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets.
In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of the weights is at least the threshold value. For these access structures, the share size of the best known secret sharing schemes is either linear in the weights or quasipolynomial in the number of parties, which leads to long shares in general.
In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: finding secret sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation.
We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is close to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties.
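For background, the Chow parameters of a Boolean function are its degree-0 and degree-1 Fourier coefficients, and a classical theorem of Chow states that they determine a linear threshold function uniquely. The brute-force toy computation below is our illustration of the object being approximated, not the algorithm of De et al., which avoids exponential enumeration.

```python
from itertools import product

def chow_parameters(weights, threshold):
    """Chow parameters (E[f], E[f*x_1], ..., E[f*x_n]) of the weighted threshold
    function f(x) = 1 if sum(w_i * x_i) >= threshold else -1, over x in {-1,1}^n.
    Brute force over all 2^n inputs, so only usable for small n."""
    n = len(weights)
    sums = [0] * (n + 1)
    for x in product((-1, 1), repeat=n):
        f = 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else -1
        sums[0] += f                      # constant Fourier coefficient
        for i, xi in enumerate(x):
            sums[i + 1] += f * xi         # degree-1 coefficient for party i
    return [s / 2 ** n for s in sums]
```

Approximating these $n+1$ numbers well, rather than the weights themselves, is what lets the authorized/unauthorized verdict be preserved for almost all subsets.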
In addition, applying the recent results on computational secret sharing schemes by Applebaum et al. [STOC, 2023], we show that there exist computational secret sharing schemes whose security is based on the RSA assumption and whose share size is polylogarithmic in the number of parties.2024-05-20T11:12:55+00:00https://creativecommons.org/licenses/by/4.0/Oriol FarràsMiquel Guiothttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/771SQIsign2D-East: A New Signature Scheme Using 2-dimensional Isogenies2024-05-20T11:10:39+00:00Kohei NakagawaHiroshi OnukiIsogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem; it is attracting attention as a candidate for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition. SQIsign has attracted much attention because its signature and key sizes are among the shortest of the NIST PQC candidates. Recently, many new schemes using high-dimensional isogenies have been proposed. Among them, the signature scheme SQIsignHD achieves an even shorter signature size than SQIsign; however, it requires 4-dimensional isogeny computations for signature verification.
In this paper, we propose a new signature scheme, SQIsign2D-East, which requires only two-dimensional isogeny computations for verification, thus reducing the computational cost of verification. First, we generalize an algorithm called RandIsogImg, which computes a random isogeny of non-smooth degree. Then, using this generalized RandIsogImg, we construct the new signature scheme SQIsign2D-East.2024-05-20T11:10:39+00:00https://creativecommons.org/licenses/by/4.0/Kohei NakagawaHiroshi Onukihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/770Sublinear-Round Broadcast without Trusted Setup2024-05-20T08:30:07+00:00Andreea B. AlexandruJulian LossCharalampos PapamanthouGiorgos TsimosBenedikt WagnerByzantine broadcast is one of the fundamental problems in distributed computing. Many of its practical applications, from multiparty computation to consensus mechanisms for blockchains, require increasingly weaker trust assumptions, as well as scalability for an ever-growing number of users $n$. This rules out existing solutions which run in a linear number of rounds in $n$ or rely on trusted setup requirements. In this paper, we propose the first sublinear-round and trustless Byzantine broadcast protocol for the dishonest majority setting. Unlike previous sublinear-round protocols, our protocol assumes neither the existence of a trusted dealer who honestly issues keys and correlated random strings to the parties nor random oracles. Instead, we present a solution whose setup is limited to an unstructured uniform reference string and a plain public key infrastructure (a.k.a. bulletin-board PKI).
Our broadcast protocol builds on top of a moderated gradecast protocol which parties can use to reach weak agreement on shared random strings. Using these strings, we can then run in an unbiased fashion a committee-based Byzantine protocol, similar to that of Chan et al. (PKC 2020), which terminates in a sublinear number of rounds. To this end, we propose a novel construction for committee election, which relies neither on random oracles nor on a trusted setup, and uses NIZKs and time-lock puzzles. Our protocol is resilient against an adaptive adversary who corrupts any constant fraction of parties.2024-05-20T08:30:07+00:00https://creativecommons.org/licenses/by/4.0/Andreea B. AlexandruJulian LossCharalampos PapamanthouGiorgos TsimosBenedikt Wagnerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/769Time-Based Cryptography From Weaker Assumptions: Randomness Beacons, Delay Functions and More2024-05-20T07:14:37+00:00Damiano AbramLawrence RoyMark SimkinThe assumption that certain computations inherently require some sequential time has established itself as a powerful tool for cryptography. It allows for security and liveness guarantees in distributed protocols that are impossible to achieve with classical hardness assumptions. Unfortunately, many constructions from the realm of time-based cryptography are based on new and poorly understood hardness assumptions, which tend not to stand the test of time (cf. Leurent et al. 2023, Peikert & Tang 2023).
In this work, we make progress on several fronts. We formally define the concept of a delay function and present a construction thereof from minimal assumptions. We show that these functions, in combination with classical cryptographic objects that satisfy certain efficiency criteria, would allow for constructing delay encryption, which is otherwise only known to exist based on a new hardness assumption about isogenies. We formally define randomness beacons as they are used in the context of blockchains, and we show that (linearly homomorphic) time-lock puzzles allow for efficiently constructing them. Finally, we resolve an open question of Malavolta and Thyagarajan by constructing fully homomorphic time-lock puzzles from linearly homomorphic time-lock puzzles and multi-key fully homomorphic encryption, whereas their previous construction required indistinguishability obfuscation.
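To make the time-lock-puzzle building block concrete, here is a toy version of the classical Rivest-Shamir-Wagner repeated-squaring puzzle. This is our sketch with insecure toy parameters, not the linearly homomorphic puzzles of the paper; it assumes p and q are prime and gcd(x, N) = 1 so Euler's theorem applies.

```python
def make_puzzle(p, q, x, T, secret):
    """Generator knows p and q, so it can reduce the exponent mod phi(N)
    and compute x^(2^T) mod N with two fast modular exponentiations."""
    N, phi = p * q, (p - 1) * (q - 1)
    key = pow(x, pow(2, T, phi), N)    # trapdoor shortcut via phi(N)
    return N, x, T, secret ^ key       # mask the secret with the key

def solve_puzzle(puzzle):
    """Solver without the trapdoor must do T squarings mod N,
    which is believed to be inherently sequential."""
    N, x, T, c = puzzle
    y = x
    for _ in range(T):
        y = pow(y, 2, N)
    return c ^ y
```

The gap between the generator's two exponentiations and the solver's T sequential squarings is exactly the "inherent sequential time" assumption the abstract refers to.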
Our work puts time-based cryptography on a firmer theoretical footing, provides new constructions from simpler assumptions, and opens new avenues for constructing delay encryption.2024-05-20T07:14:37+00:00https://creativecommons.org/licenses/by/4.0/Damiano AbramLawrence RoyMark Simkinhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/768The Ouroboros of ZK: Why Verifying the Verifier Unlocks Longer-Term ZK Innovation2024-05-20T06:31:02+00:00Denis FirsovBenjamin LivshitsVerifying the verifier in the context of zero-knowledge proofs is an essential part of ensuring the long-term integrity of the zero-knowledge ecosystem. This is vital both for zero-knowledge rollups and for other industrial applications of ZK. In addition to further minimizing the required trust and reducing the trusted computing base (TCB), having a verified verifier opens the door to decentralized proof generation by potentially untrusted parties. We outline a research program and justify the need for more work at the intersection of ZK and formal verification research.2024-05-20T06:31:02+00:00https://creativecommons.org/licenses/by/4.0/Denis FirsovBenjamin Livshitshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/767Bootstrapping Bits with CKKS2024-05-20T00:25:30+00:00Youngjin BaeJung Hee CheonJaehyung KimDamien StehléThe Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption scheme is designed to efficiently perform computations on real numbers in an encrypted state. Recently, Drucker et al. [J. Cryptol.] proposed an efficient strategy to use CKKS in a black-box manner to perform computations on binary data.
In this work, we introduce several CKKS bootstrapping algorithms designed specifically for ciphertexts encoding binary data. Crucially, the new CKKS bootstrapping algorithms make it possible to bootstrap ciphertexts that carry the binary data in their most significant bits. First, this allows decreasing the moduli used in bootstrapping, saving a larger share of the modulus budget for non-bootstrapping operations.
In particular, we obtain full-slot bootstrapping in ring degree $2^{14}$ for the first time. Second, the ciphertext format is compatible with the one used in the DM/CGGI fully homomorphic encryption schemes. Interestingly, we may combine our CKKS bootstrapping algorithms for bits with the fast ring packing technique from Bae et al. [CRYPTO'23]. This leads to a new bootstrapping algorithm for DM/CGGI that outperforms the state-of-the-art approaches when the number of bootstraps to be performed simultaneously is in the low hundreds.2024-05-20T00:25:30+00:00https://creativecommons.org/licenses/by/4.0/Youngjin BaeJung Hee CheonJaehyung KimDamien Stehléhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/766Breaking Verifiable Delay Functions in the Random Oracle Model2024-05-20T08:27:01+00:00Ziyi GuanArtur RiazanovWeiqiang YuanA verifiable delay function (VDF) is a cryptographic primitive that takes a long time to compute, but produces a unique output that is efficiently and publicly verifiable.
Mahmoody, Smith and Wu (ICALP 2020) prove that VDFs satisfying both perfect completeness and adaptive perfect uniqueness do not exist in the random oracle model. Moreover, Ephraim, Freitag, Komargodski, and Pass (EUROCRYPT 2020) construct a VDF with perfect completeness and computational uniqueness, a much weaker guarantee compared to perfect uniqueness, in the random oracle model under the repeated squaring assumption.
In this work, we close the gap between existing constructions and known lower bounds by showing that VDFs with imperfect completeness and non-adaptive computational uniqueness cannot be constructed in the pure random oracle model (without additional computational assumptions).2024-05-19T22:03:05+00:00https://creativecommons.org/licenses/by/4.0/Ziyi GuanArtur RiazanovWeiqiang Yuanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/765Information-Theoretic Multi-Server PIR with Global Preprocessing2024-05-19T20:20:13+00:00Ashrujit GhoshalBaitian LiYaohua MaChenxin DaiElaine ShiWe propose a new unified framework to construct multi-server,
information-theoretic Private Information Retrieval (PIR) schemes
that leverage global preprocessing to achieve sublinear computation per query.
Despite a couple of earlier attempts, our understanding of PIR schemes
in the global preprocessing model remains limited, and so far,
we only know a few sparse points in the broad design space.
With our new unified framework, we can
generalize the results of
Beimel, Ishai, and Malkin to broader parameter regimes, thus
enabling a tradeoff between bandwidth and computation.
Specifically, for any constant $S > 1$,
we can get an $S$-server scheme whose bandwidth consumption is as small as $n^{1/(S+1) + \epsilon}$ while achieving computation in the $n^\delta$ regime for some constant $\delta \in (0, 1)$.
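For contrast with these tradeoffs, the classical 2-server information-theoretic PIR baseline (in the style of Chor, Goldreich, Kushilevitz, and Sudan) spends $n$ bits of upload and linear server computation per query. A minimal sketch of that baseline, ours rather than the paper's scheme, looks like:

```python
import secrets

def pir_query(n, index):
    """Client: a uniformly random selection vector for server 0, and the same
    vector with the target position flipped for server 1. Each query on its
    own is uniformly random, so neither server learns the index."""
    q0 = [secrets.randbelow(2) for _ in range(n)]
    q1 = list(q0)
    q1[index] ^= 1
    return q0, q1

def pir_answer(database, query):
    """Server: XOR of the selected database bits (linear work, 1-bit answer)."""
    acc = 0
    for bit, sel in zip(database, query):
        acc ^= bit & sel
    return acc

def pir_reconstruct(a0, a1):
    """The two answers differ by exactly the target bit."""
    return a0 ^ a1
```

The preprocessing model studied in the abstract is about beating this scheme's linear per-query server computation, not just its bandwidth.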
Moreover, we can get a scheme with polylogarithmic bandwidth and computation, requiring only a polylogarithmic number of servers.2024-05-19T20:20:13+00:00https://creativecommons.org/licenses/by/4.0/Ashrujit GhoshalBaitian LiYaohua MaChenxin DaiElaine Shihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/764Decentralized Multi-Client Functional Encryption with Strong Security2024-05-19T17:14:22+00:00Ky NguyenDavid PointchevalRobert SchädlichDecentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. They can independently encrypt the multiple plaintext inputs to be given for evaluation to the function embedded in the functional decryption key, which is defined by multiple parameter inputs. The clients also keep control over these functions, as all of them have to contribute to the generation of the functional decryption keys. Tags can be used in the ciphertexts and the keys to specify which inputs can be combined together. Like any encryption scheme, DMCFE provides privacy of the plaintexts. But the functions associated with the functional decryption keys might be sensitive too (e.g. a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process.
In this paper, we provide new proof techniques to analyze a new concrete construction of function-hiding DMCFE for inner products, with strong security guarantees in the random oracle model: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys, with unbounded repetitions of the same message tags in the ciphertext-queries and a fixed polynomially-large number of repetitions of the same key tags in the key-queries, allowing static corruption of the secret encryption keys. Previous constructions were proven secure in the selective setting only.2024-05-19T17:14:22+00:00https://creativecommons.org/licenses/by/4.0/Ky NguyenDavid PointchevalRobert Schädlichhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/763On SIS-problem-based random Feistel ciphers and its statistical evaluation of resistance against differential cryptanalysis2024-05-19T05:18:40+00:00Yu MorishimaMasahiro KaminagaProvable security based on a robust mathematical framework is the gold standard for security evaluation in cryptography.
Several provably secure cryptosystems have been studied for public-key cryptography. However, provably secure symmetric-key cryptography has received little attention.
Although there are known provably secure symmetric-key cryptosystems based on the hardness of factorization and discrete logarithm problems, they are not only slower than conventional block ciphers but can also be broken by quantum computers.
Our study aims to tackle this latter problem by proposing a new provably secure Feistel cipher using collision-resistant hash functions based on the Short Integer Solution (SIS) problem.
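To fix ideas, a generic Feistel network whose round function is an Ajtai-style SIS hash $x \mapsto Ax \bmod q$ on a bit-decomposed input can be sketched as follows. This is our toy, with parameters far too small to be secure and a random matrix rather than the paper's specific lattice matrix and parameter set.

```python
import random

# Toy parameters: modulus q, half-block length n, rounds, bits per coordinate.
# Far too small to be secure -- purely illustrative.
Q, N, ROUNDS, B = 257, 4, 4, 9

random.seed(0)
# One public random matrix per round; the round function is the
# Ajtai-style SIS hash x -> A*x mod q applied to a binary vector x.
A = [[[random.randrange(Q) for _ in range(2 * N * B)] for _ in range(N)]
     for _ in range(ROUNDS)]

def to_bits(vec):
    """Bit-decompose each Z_q coordinate into B bits."""
    return [(x >> i) & 1 for x in vec for i in range(B)]

def round_function(r, key, right):
    x = to_bits(key) + to_bits(right)          # binary vector of length 2*N*B
    return [sum(a * xi for a, xi in zip(row, x)) % Q for row in A[r]]

def encrypt(L, R, keys):
    """Feistel rounds: (L, R) -> (R, L + F(key, R) mod q)."""
    for r in range(ROUNDS):
        L, R = R, [(l + f) % Q for l, f in zip(L, round_function(r, keys[r], R))]
    return L, R

def decrypt(L, R, keys):
    """Run the rounds backwards; addition mod q makes each round invertible."""
    for r in reversed(range(ROUNDS)):
        L, R = [(x - f) % Q for x, f in zip(R, round_function(r, keys[r], L))], L
    return L, R
```

The Feistel structure never needs to invert the hash, which is why a one-way SIS-based compression function suffices as the round function.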
Even if cipher primitives are resistant to quantum algorithms, it is crucial to determine whether the cipher is resilient to differential cryptanalysis, a fundamental and powerful attack against symmetric-key cryptosystems.
In this paper, we demonstrate that the proposed cipher family is secure against differential cryptanalysis by deriving an upper bound on the maximum differential probability. In addition, we demonstrate the potential success of differential cryptanalysis for short block sizes and statistically evaluate the average resistance of cipher instances based on differential characteristic probabilities. This method approximates the S-box output using a folded two-dimensional normal distribution and employs a generalized extreme value distribution.
This evaluation method is first introduced in this paper and serves as the basis for studying the differential characteristics of lattice matrices and the number of secure rounds. This study is foundational research on differential cryptanalysis against block ciphers using a lattice matrix based on SIS.2024-05-19T05:18:40+00:00https://creativecommons.org/licenses/by/4.0/Yu MorishimaMasahiro Kaminagahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/762Extractable Witness Encryption for Signed Vector Digests from Pairings and Trust-Scalable One-Time Programs2024-05-18T14:36:25+00:00Sora SuegamiWitness encryption (WE) allows a ciphertext to be encrypted under an NP problem such that anyone holding a valid witness for that problem can decrypt it (flexible decryptors), without interaction with others (non-interaction). However, existing schemes are either impractical or achieve only a part of these WE features. We propose a novel WE scheme that 1) is based on bilinear maps such as pairings, 2) achieves the property of flexible decryptors, and 3) still requires the decryptor's communication with a trusted signer, who only performs a fixed amount of computation and communication at regular intervals, regardless of the number of ciphertexts. It provides extractable security and can be extended to a threshold multiple signers setting, avoiding reliance on a single signer. As a significant application of our WE scheme, we build a novel one-time program (OTP) scheme in which the signers' computational and communication costs remain constant, independent of the number of OTPs to be evaluated simultaneously.
This feature ensures scalable OTP evaluations without risking decreased signer participation or compromised decentralization due to increased operational costs for the signers.Witness encryption (WE) allows a ciphertext to be encrypted under an NP problem such that anyone holding a valid witness for that problem can decrypt it (flexible decryptors), without interaction with others (non-interaction). However, existing schemes are either impractical or achieve only a part of these WE features. We propose a novel WE scheme that 1) is based on bilinear maps such as pairings, 2) achieves the property of flexible decryptors, and 3) still requires the decryptor's communication with a trusted signer, who only performs a fixed amount of computation and communication at regular intervals, regardless of the number of ciphertexts. It provides extractable security and can be extended to a threshold multiple signers setting, avoiding reliance on a single signer. As a significant application of our WE scheme, we build a novel one-time program (OTP) scheme in which the signers' computational and communication costs remain constant, independent of the number of OTPs to be evaluated simultaneously. This feature ensures scalable OTP evaluations without risking decreased signer participation or compromised decentralization due to increased operational costs for the signers.2024-05-18T14:36:25+00:00https://creativecommons.org/licenses/by/4.0/Sora Suegamihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/761Lattice-based Broadcast Authenticated Searchable Encryption for Cloud Storage2024-05-18T08:23:59+00:00Yibo CaoShiyuan XuXiu-Bo ChenGang XuSiu-Ming YiuThe extensive use of cloud storage has created an urgent need to search and share data. Public key authenticated encryption with keyword search (PAEKS) allows for the retrieval from encrypted data, while resisting the insider keyword guessing attacks (IKGAs). 
Most PAEKS schemes only work in the single-receiver model, exhibiting very limited applicability. To address this concern, there has been research on broadcast authenticated encryption with keyword search (BAEKS) to achieve multi-receiver ciphertext search. However, to the best of our knowledge, existing BAEKS schemes are susceptible to quantum computing attacks. In this paper, we propose lattice-based BAEKS, the first post-quantum broadcast authenticated encryption with keyword search, providing robust quantum-safety in the multi-receiver model. Specifically, we leverage several lattice sampling algorithms and the rejection sampling technique to construct our BAEKS scheme. Furthermore, we incorporate the minimal cover set technique and a lattice basis extension algorithm to construct an enhanced version, namely FS-BAEKS. Moreover, we give a rigorous security analysis of our scheme. Ultimately, the BAEKS and Test algorithms of our scheme run up to approximately 12x and 402x faster, respectively, than prior art when the number of receivers is six, which is practical for cloud storage systems.The extensive use of cloud storage has created an urgent need to search and share data. Public key authenticated encryption with keyword search (PAEKS) allows for the retrieval from encrypted data, while resisting the insider keyword guessing attacks (IKGAs). Most PAEKS schemes only work in the single-receiver model, exhibiting very limited applicability. To address this concern, there has been research on broadcast authenticated encryption with keyword search (BAEKS) to achieve multi-receiver ciphertext search. However, to the best of our knowledge, existing BAEKS schemes are susceptible to quantum computing attacks. In this paper, we propose lattice-based BAEKS, the first post-quantum broadcast authenticated encryption with keyword search, providing robust quantum-safety in the multi-receiver model.
Specifically, we leverage several lattice sampling algorithms and the rejection sampling technique to construct our BAEKS scheme. Furthermore, we incorporate the minimal cover set technique and a lattice basis extension algorithm to construct an enhanced version, namely FS-BAEKS. Moreover, we give a rigorous security analysis of our scheme. Ultimately, the BAEKS and Test algorithms of our scheme run up to approximately 12x and 402x faster, respectively, than prior art when the number of receivers is six, which is practical for cloud storage systems.2024-05-18T08:23:59+00:00https://creativecommons.org/licenses/by/4.0/Yibo CaoShiyuan XuXiu-Bo ChenGang XuSiu-Ming Yiuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/760SQIsign2D-West: The Fast, the Small, and the Safer2024-05-17T23:08:27+00:00Andrea BassoLuca De FeoPierrick DartoisAntonin LerouxLuciano MainoGiacomo PopeDamien RobertBenjamin WesolowskiWe introduce SQIsign2D-West, a variant of SQIsign using two-dimensional isogeny representations.
SQIsignHD was the first variant of SQIsign to use higher dimensional isogeny representations. Its eight-dimensional variant is geared towards provable security but is deemed impractical. Its four-dimensional variant is geared towards efficiency and has significantly faster signing times than SQIsign, but slower verification owing to the complexity of the four-dimensional representation. Its authors commented on the apparent difficulty of getting any improvement over SQIsign by using two-dimensional representations.
In this work, we introduce new algorithmic tools that make two-dimensional representations a viable alternative. These lead to a signature scheme with sizes comparable to SQIsignHD, slightly slower signing than SQIsignHD but still much faster than SQIsign, and the fastest verification of any known variant of SQIsign. We achieve this without compromising on the security proof: the assumptions behind SQIsign2D-West are similar to those of the eight-dimensional variant of SQIsignHD. Additionally, like SQIsignHD, SQIsign2D-West favourably scales to high levels of security. Concretely, for NIST level I we achieve signing times of 80 ms and verifying times of 4.5 ms, using optimised arithmetic based on intrinsics available to the Ice Lake architecture. For NIST level V, we achieve 470 ms for signing and 31 ms for verifying.We introduce SQIsign2D-West, a variant of SQIsign using two-dimensional isogeny representations.
SQIsignHD was the first variant of SQIsign to use higher dimensional isogeny representations. Its eight-dimensional variant is geared towards provable security but is deemed impractical. Its four-dimensional variant is geared towards efficiency and has significantly faster signing times than SQIsign, but slower verification owing to the complexity of the four-dimensional representation. Its authors commented on the apparent difficulty of getting any improvement over SQIsign by using two-dimensional representations.
In this work, we introduce new algorithmic tools that make two-dimensional representations a viable alternative. These lead to a signature scheme with sizes comparable to SQIsignHD, slightly slower signing than SQIsignHD but still much faster than SQIsign, and the fastest verification of any known variant of SQIsign. We achieve this without compromising on the security proof: the assumptions behind SQIsign2D-West are similar to those of the eight-dimensional variant of SQIsignHD. Additionally, like SQIsignHD, SQIsign2D-West favourably scales to high levels of security. Concretely, for NIST level I we achieve signing times of 80 ms and verifying times of 4.5 ms, using optimised arithmetic based on intrinsics available to the Ice Lake architecture. For NIST level V, we achieve 470 ms for signing and 31 ms for verifying.2024-05-17T23:08:27+00:00https://creativecommons.org/licenses/by/4.0/Andrea BassoLuca De FeoPierrick DartoisAntonin LerouxLuciano MainoGiacomo PopeDamien RobertBenjamin Wesolowskihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/759Enhancing Watermarked Language Models to Identify Users2024-05-17T22:29:13+00:00Aloni CohenAlexander HooverGabe SchoenbachA zero-bit watermarked language model produces text that is indistinguishable from that of the underlying model, but which can be detected as machine-generated using a secret key. Unfortunately, merely detecting AI-generated spam, say, as watermarked may not prevent future abuses. If we could additionally trace the text to a spammer's API token or account, we could then cut off their access or pursue legal action.
We introduce multi-user watermarks, which allow tracing model-generated text to individual users or to groups of colluding users. We construct multi-user watermarking schemes from undetectable zero-bit watermarking schemes. Importantly, our schemes provide both zero-bit and multi-user assurances at the same time: detecting shorter snippets just as well as the original scheme, and tracing longer excerpts to individuals. Along the way, we give a generic construction of a watermarking scheme that embeds long messages into generated text.
Ours are the first black-box reductions between watermarking schemes for language models. A major challenge for black-box reductions is the lack of a unified abstraction for robustness — that marked text is detectable even after edits. Existing works give incomparable robustness guarantees, based on bespoke requirements on the language model's outputs and the users' edits. We introduce a new abstraction to overcome this challenge, called AEB-robustness. AEB-robustness provides that the watermark is detectable whenever the edited text "approximates enough blocks" of model-generated output. Specifying the robustness condition amounts to defining approximates, enough, and blocks. Using our new abstraction, we relate the robustness properties of our message-embedding and multi-user schemes to that of the underlying zero-bit scheme, in a black-box way. Whereas prior works only guarantee robustness for a single text generated in response to a single prompt, our schemes are robust against adaptive prompting, a stronger and more natural adversarial model.A zero-bit watermarked language model produces text that is indistinguishable from that of the underlying model, but which can be detected as machine-generated using a secret key. Unfortunately, merely detecting AI-generated spam, say, as watermarked may not prevent future abuses. If we could additionally trace the text to a spammer's API token or account, we could then cut off their access or pursue legal action.
We introduce multi-user watermarks, which allow tracing model-generated text to individual users or to groups of colluding users. We construct multi-user watermarking schemes from undetectable zero-bit watermarking schemes. Importantly, our schemes provide both zero-bit and multi-user assurances at the same time: detecting shorter snippets just as well as the original scheme, and tracing longer excerpts to individuals. Along the way, we give a generic construction of a watermarking scheme that embeds long messages into generated text.
Ours are the first black-box reductions between watermarking schemes for language models. A major challenge for black-box reductions is the lack of a unified abstraction for robustness — that marked text is detectable even after edits. Existing works give incomparable robustness guarantees, based on bespoke requirements on the language model's outputs and the users' edits. We introduce a new abstraction to overcome this challenge, called AEB-robustness. AEB-robustness provides that the watermark is detectable whenever the edited text "approximates enough blocks" of model-generated output. Specifying the robustness condition amounts to defining approximates, enough, and blocks. Using our new abstraction, we relate the robustness properties of our message-embedding and multi-user schemes to that of the underlying zero-bit scheme, in a black-box way. Whereas prior works only guarantee robustness for a single text generated in response to a single prompt, our schemes are robust against adaptive prompting, a stronger and more natural adversarial model.2024-05-17T22:29:13+00:00https://creativecommons.org/licenses/by/4.0/Aloni CohenAlexander HooverGabe Schoenbachhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/758Admissible Parameters for the Crossbred Algorithm and Semi-regular Sequences over Finite Fields2024-05-17T14:53:27+00:00John BaenaDaniel CabarcasSharwan K. TiwariJavier VerbelLuis VillotaMultivariate public key cryptography (MPKC) is one of the most promising alternatives to build quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which seeks a common root to a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms to solve MQ over small fields, its complexity analysis stands on shaky ground.
In particular, it is not clear for what parameters it works and under what assumptions.
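To make the underlying MQ problem concrete, the sketch below brute-forces a toy quadratic system over GF(2). This is only an illustration of the problem statement, not of the Crossbred algorithm; the polynomial encoding (`(Q, L, c)` as quadratic index pairs, linear indices, and a constant) is an ad hoc choice. Exhaustive search is feasible only for tiny n, which is precisely why specialized algorithms like Crossbred matter.

```python
from itertools import product

def eval_quad(poly, x):
    # poly = (Q, L, c): quadratic terms as index pairs, linear terms as
    # indices, and a constant; all arithmetic is over GF(2), so addition
    # is XOR and multiplication is AND.
    Q, L, c = poly
    v = c
    for i, j in Q:
        v ^= x[i] & x[j]
    for i in L:
        v ^= x[i]
    return v

def solve_mq(system, n):
    # Exhaustive search over all 2^n assignments: returns every common
    # root of the system. Infeasible beyond tiny n.
    return [x for x in product((0, 1), repeat=n)
            if all(eval_quad(p, x) == 0 for p in system)]
```

For example, the single equation x0*x1 + x0 + x1 = 0 is encoded as `([(0, 1)], [0, 1], 0)` and has (0, 0) as its only root in GF(2)^2.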
In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story, hence we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.Multivariate public key cryptography (MPKC) is one of the most promising alternatives to build quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which seeks a common root to a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms to solve MQ over small fields, its complexity analysis stands on shaky ground. In particular, it is not clear for what parameters it works and under what assumptions.
In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story, hence we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.2024-05-17T14:53:27+00:00https://creativecommons.org/licenses/by/4.0/John BaenaDaniel CabarcasSharwan K. TiwariJavier VerbelLuis Villotahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/757Formal Definition and Verification for Combined Random Fault and Random Probing Security2024-05-17T12:16:11+00:00Sonia BelaidJakob FeldtkellerTim GüneysuAnna GuinetJan Richter-BrockmannMatthieu RivainPascal SasdrichAbdul Rahman TalebIn our highly digitalized world, an adversary is not constrained to purely digital attacks but can monitor or influence the physical execution environment of a target computing device. Such side-channel or fault-injection analysis poses a significant threat to otherwise secure cryptographic implementations. Hence, it is important to consider additional adversarial capabilities when analyzing the security of cryptographic implementations besides the default black-box model. For side-channel analysis, this is done by providing the adversary with knowledge of some internal values, while for fault-injection analysis the capabilities of the adversaries include manipulation of some internal values.
In this work, we extend probabilistic security models for physical attacks by introducing a general random probing model and a general random fault model to capture arbitrary leakage and fault distributions, as well as the combination of these models. Our aim is to enable a more accurate modeling of low-level physical effects. We then analyze important properties, such as the impact of adversarial knowledge on faults and compositions, and provide tool-based formal verification methods that allow the security assessment of design components. These methods are introduced as extensions of the previous tools VERICA and IronMask, which are implemented, evaluated, and compared.In our highly digitalized world, an adversary is not constrained to purely digital attacks but can monitor or influence the physical execution environment of a target computing device. Such side-channel or fault-injection analysis poses a significant threat to otherwise secure cryptographic implementations. Hence, it is important to consider additional adversarial capabilities when analyzing the security of cryptographic implementations besides the default black-box model. For side-channel analysis, this is done by providing the adversary with knowledge of some internal values, while for fault-injection analysis the capabilities of the adversaries include manipulation of some internal values.
In this work, we extend probabilistic security models for physical attacks by introducing a general random probing model and a general random fault model to capture arbitrary leakage and fault distributions, as well as the combination of these models. Our aim is to enable a more accurate modeling of low-level physical effects. We then analyze important properties, such as the impact of adversarial knowledge on faults and compositions, and provide tool-based formal verification methods that allow the security assessment of design components. These methods are introduced as extensions of the previous tools VERICA and IronMask, which are implemented, evaluated, and compared.2024-05-17T12:16:11+00:00https://creativecommons.org/licenses/by/4.0/Sonia BelaidJakob FeldtkellerTim GüneysuAnna GuinetJan Richter-BrockmannMatthieu RivainPascal SasdrichAbdul Rahman Talebhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/756(Strong) aPAKE Revisited: Capturing Multi-User Security and Salting2024-05-17T09:06:40+00:00Dennis DayanikliAnja LehmannAsymmetric Password-Authenticated Key Exchange (aPAKE) protocols, particularly Strong aPAKE (saPAKE), have enjoyed significant attention, both from academia and industry, with the well-known OPAQUE protocol currently undergoing standardization. In (s)aPAKE, a client and a server collaboratively establish a high-entropy key, relying on a previously exchanged password for authentication. A main feature is its resilience against offline and precomputation (for saPAKE) attacks. OPAQUE, as well as most other aPAKE protocols, have been designed and analyzed in a single-user setting, i.e., modelling that only a single user interacts with the server. By the composition framework of UC, security for the actual multi-user setting is then conjectured.
As any real-world (s)aPAKE instantiation will need to cater to multiple users, this introduces a dangerous gap in which developers are tasked with extending the single-user protocol securely and in a UC-compliant manner.
In this work, we extend the (s)aPAKE definition to directly model the multi-user setting, and explicitly capture the impact that a server compromise has across user accounts. We show that the currently standardized multi-user version of OPAQUE might not provide the expected security, as it is insecure against offline attacks as soon as the file for one user in the system is compromised. This is due to using shared state among different users, which violates the UC composition framework. However, we show that another change introduced in the standardization draft which also involves a shared state does not compromise security. When extending the aPAKE security in the multi-client setting, we notice that the widely used security definition captures significantly weaker security guarantees than what is offered by many protocols. Essentially, the aPAKE definition assumes that the server stores unsalted password-hashes, whereas several protocols explicitly use a salt to protect against precomputation attacks. We therefore propose a definitional framework that captures different salting approaches -- thus showing that the security gap between aPAKE and saPAKE can be smaller than expected.Asymmetric Password-Authenticated Key Exchange (aPAKE) protocols, particularly Strong aPAKE (saPAKE), have enjoyed significant attention, both from academia and industry, with the well-known OPAQUE protocol currently undergoing standardization. In (s)aPAKE, a client and a server collaboratively establish a high-entropy key, relying on a previously exchanged password for authentication. A main feature is its resilience against offline and precomputation (for saPAKE) attacks. OPAQUE, as well as most other aPAKE protocols, have been designed and analyzed in a single-user setting, i.e., modelling that only a single user interacts with the server. By the composition framework of UC, security for the actual multi-user setting is then conjectured.
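The salting distinction described above can be illustrated with plain password hashing. This is a toy sketch only: real (s)aPAKE protocols such as OPAQUE use an OPRF rather than hashing, and the helper names here are illustrative. The point is that against unsalted records an attacker can precompute a dictionary table once, while a per-user salt forces all hashing work to happen after compromise.

```python
import hashlib
import os

def unsalted_record(password: str) -> bytes:
    # Server stores H(pw): vulnerable to precomputation.
    return hashlib.sha256(password.encode()).digest()

def salted_record(password: str) -> tuple[bytes, bytes]:
    # Server stores (salt, H(salt || pw)): the salt is unknown to the
    # attacker before compromise, so no table can be built in advance.
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).digest()

# Precomputation attack against unsalted records: build the table once,
# then invert any stolen hash with a single lookup.
dictionary = ["hunter2", "letmein", "correct horse"]
table = {unsalted_record(pw): pw for pw in dictionary}

stolen = unsalted_record("letmein")
assert table[stolen] == "letmein"   # offline inversion is instant

# The same table is useless against a salted record.
salt, record = salted_record("letmein")
assert record not in table
```

This is exactly the gap the abstract refers to: the classical aPAKE definition models the unsalted case, while saPAKE (and salted variants in between) rule out such precomputation.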
As any real-world (s)aPAKE instantiation will need to cater to multiple users, this introduces a dangerous gap in which developers are tasked with extending the single-user protocol securely and in a UC-compliant manner.
In this work, we extend the (s)aPAKE definition to directly model the multi-user setting, and explicitly capture the impact that a server compromise has across user accounts. We show that the currently standardized multi-user version of OPAQUE might not provide the expected security, as it is insecure against offline attacks as soon as the file for one user in the system is compromised. This is due to using shared state among different users, which violates the UC composition framework. However, we show that another change introduced in the standardization draft which also involves a shared state does not compromise security. When extending the aPAKE security in the multi-client setting, we notice that the widely used security definition captures significantly weaker security guarantees than what is offered by many protocols. Essentially, the aPAKE definition assumes that the server stores unsalted password-hashes, whereas several protocols explicitly use a salt to protect against precomputation attacks. We therefore propose a definitional framework that captures different salting approaches -- thus showing that the security gap between aPAKE and saPAKE can be smaller than expected.2024-05-17T09:06:40+00:00https://creativecommons.org/licenses/by/4.0/Dennis DayanikliAnja Lehmannhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/755Efficient Second-Order Masked Software Implementations of Ascon in Theory and Practice2024-05-17T08:31:24+00:00Barbara GigerlFlorian MendelMartin SchläfferRobert PrimasIn this paper, we present efficient protected software implementations of the authenticated cipher Ascon, the recently announced winner of the NIST standardization process for lightweight cryptography.
Our implementations target theoretical and practical security against second-order power analysis attacks.
First, we propose an efficient second-order extension of a previously presented first-order masking of the Keccak S-box that does not require online randomness.
The extension itself is inspired by a previously presented second-order masking of an AND-XOR construction.
We then discuss implementation tricks that further improve performance and reduce the chance of unintended combination of shares during the execution of masked software on microprocessors.
This allows us to retain the theoretical protection orders of masking in practice with low performance overhead, which we also confirm via TVLA on ARM microprocessors.
The formal correctness of our designs is additionally verified using Coco on the netlist of a RISC-V IBEX core.
We benchmark our masked software designs on 32-bit ARM and RISC-V microprocessor platforms.
On both platforms, we can perform Ascon-128 authenticated encryption with a throughput of about 300 or 550 cycles/byte when operating on 2 or 3 shares, respectively.
When utilizing a leveled implementation technique, the performance of our masked implementations generally improves to about 90 cycles/byte.
We publish our masked software implementations together with a generic software framework for evaluating performance and side-channel resistance of various masked cryptographic implementations.In this paper, we present efficient protected software implementations of the authenticated cipher Ascon, the recently announced winner of the NIST standardization process for lightweight cryptography.
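The core idea behind the masking discussed above, splitting every secret into random shares so that no single intermediate value reveals it, can be sketched with a classic first-order ISW AND gadget. This is only a two-share, first-order illustration of the principle, not the paper's second-order Ascon/Keccak S-box masking; all names here are illustrative.

```python
import secrets

def mask(x: int) -> tuple[int, int]:
    # Split a 1-bit secret into two boolean shares: x = x0 XOR x1.
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(xs, ys):
    # First-order ISW multiplication: computes shares of x AND y such
    # that no single intermediate value depends on both full secrets.
    # (The paper targets second-order security with three shares; this
    # two-share gadget only illustrates the principle.)
    x0, x1 = xs
    y0, y1 = ys
    r = secrets.randbits(1)        # fresh randomness per multiplication
    z0 = (x0 & y0) ^ r
    z1 = (x1 & y1) ^ ((r ^ (x0 & y1)) ^ (x1 & y0))
    return z0, z1

def unmask(shares) -> int:
    # Recombine: XOR of the shares yields the plain value.
    s0, s1 = shares
    return s0 ^ s1
```

Correctness follows from z0 XOR z1 = (x0 XOR x1) AND (y0 XOR y1); the parenthesization in `z1` fixes the order in which randomness is folded in, which matters for side-channel security in real implementations.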
Our implementations target theoretical and practical security against second-order power analysis attacks.
First, we propose an efficient second-order extension of a previously presented first-order masking of the Keccak S-box that does not require online randomness.
The extension itself is inspired by a previously presented second-order masking of an AND-XOR construction.
We then discuss implementation tricks that further improve performance and reduce the chance of unintended combination of shares during the execution of masked software on microprocessors.
This allows us to retain the theoretical protection orders of masking in practice with low performance overhead, which we also confirm via TVLA on ARM microprocessors.
The formal correctness of our designs is additionally verified using Coco on the netlist of a RISC-V IBEX core.
We benchmark our masked software designs on 32-bit ARM and RISC-V microprocessor platforms.
On both platforms, we can perform Ascon-128 authenticated encryption with a throughput of about 300 or 550 cycles/byte when operating on 2 or 3 shares, respectively.
When utilizing a leveled implementation technique, the performance of our masked implementations generally improves to about 90 cycles/byte.
We publish our masked software implementations together with a generic software framework for evaluating performance and side-channel resistance of various masked cryptographic implementations.2024-05-17T08:31:24+00:00https://creativecommons.org/licenses/by/4.0/Barbara GigerlFlorian MendelMartin SchläfferRobert Primashttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/754Adversary Resilient Learned Bloom Filters2024-05-16T19:41:43+00:00Allison BishopHayder TirmaziCreating an adversary resilient Learned Bloom filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter. We also construct two adversary resilient variants of the Learned Bloom Filter called the Uptown Bodega Filter and the Downtown Bodega Filter. Our adversarial model extends an existing adversarial model designed for the classical (i.e., not ``learned'') Bloom Filter by Naor and Yogev and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). We show that if pseudo-random permutations exist, then a secure Learned Bloom Filter may be constructed with $\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We further show that, if pseudo-random permutations exist, then a high utility Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. Finally, we construct a hybrid adversarial model for the case where a fraction of the workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this model.Creating an adversary resilient Learned Bloom filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter.
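The idea of putting one keyed pseudorandom primitive in the critical path can be sketched with an ordinary (non-learned) Bloom filter whose inputs are first passed through a keyed PRF, so an adversary without the key cannot craft inputs that collide in the filter. This is a toy version of the idea only; the paper's Bodega constructions, their learned components, and their exact memory overheads differ, and HMAC-SHA256 stands in for the pseudo-random permutation.

```python
import hashlib
import hmac
import secrets

class KeyedBloomFilter:
    # Classical Bloom filter with a keyed PRF (HMAC-SHA256) applied to
    # every item before hashing. Without the key, an adversary cannot
    # predict which bits an input touches. (Illustrative sketch only.)
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.key = secrets.token_bytes(32)
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item: bytes):
        # One PRF call in the critical path; the k index hashes are
        # derived from its output, not from the raw item.
        tag = hmac.new(self.key, item, hashlib.sha256).digest()
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + tag).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item: bytes) -> bool:
        # No false negatives; false positives occur with small probability.
        return all(self.bits[p] for p in self._positions(item))
```

Queries for inserted items always return True, while the keyed preprocessing removes the adversary's ability to search offline for false-positive inputs.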
We also construct two adversary resilient variants of the Learned Bloom Filter called the Uptown Bodega Filter and the Downtown Bodega Filter. Our adversarial model extends an existing adversarial model designed for the classical (i.e., not ``learned'') Bloom Filter by Naor and Yogev and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). We show that if pseudo-random permutations exist, then a secure Learned Bloom Filter may be constructed with $\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We further show that, if pseudo-random permutations exist, then a high utility Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. Finally, we construct a hybrid adversarial model for the case where a fraction of the workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this model.2024-05-16T19:41:43+00:00https://creativecommons.org/licenses/by/4.0/Allison BishopHayder Tirmazihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/753Summation-based Private Segmented Membership Test from Threshold-Fully Homomorphic Encryption2024-05-16T18:35:58+00:00Nirajan KoiralaJonathan TakeshitaJeremy StevensTaeho JungIn many real-world scenarios, there are cases where a client wishes
to check if a data element they hold is included in a set segmented across a large number of data holders. To protect user privacy, the client’s query and the data holders’ sets should remain encrypted throughout the whole process. Prior work on Private Set Intersection (PSI), Multi-Party PSI (MPSI), Private Membership Test (PMT), and Oblivious RAM (ORAM) falls short in this scenario in many ways. They either require data holders to possess the sets in plaintext, incur prohibitively high latency for aggregating results from a large number of data holders, leak information about the party holding the intersection element, or induce a high false-positive rate.
This paper introduces the primitive of a Private Segmented Membership Test (PSMT). We give a basic construction of a protocol to solve PSMT using a threshold variant of approximate-arithmetic homomorphic encryption and show how to overcome existing challenges to construct a PSMT protocol without leaking information about the party holding the intersection element or incurring false positives, for a large number of data holders, while ensuring IND-CPA^D security. Our novel approach is superior to existing state-of-the-art approaches in scalability with regard to the number of supported data holders. This is enabled by a novel summation-based homomorphic membership check rather than a product-based one, as well as various novel ideas addressing technical challenges. Our PSMT protocol supports many more parties (up to 4096 in experiments) compared to prior related work that supports only around 100 parties efficiently. Our experimental evaluation shows that our method’s aggregation of results from data holders can run in 92.5s for 1024 data holders and a set size of 2^25, and our method’s overhead increases very slowly with the increasing number of senders. We also compare our PSMT protocol to other state-of-the-art PSI and MPSI protocols and discuss our improvements in usability with
a better privacy model and a larger number of parties.2024-05-16T18:35:58+00:00https://creativecommons.org/licenses/by/4.0/Nirajan KoiralaJonathan TakeshitaJeremy StevensTaeho Junghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/752More Embedded Curves for SNARK-Pairing-Friendly Curves2024-05-16T17:50:28+00:00Aurore GuillevicEmbedded curves are elliptic curves defined over a prime field whose order (characteristic) is the prime subgroup order (the scalar field) of a pairing-friendly curve. Embedded curves have a large prime-order subgroup of cryptographic size but are not pairing-friendly themselves. Sanso and El Housni published families of embedded curves for BLS pairing-friendly curves. Their families are parameterized by polynomials, like families of pairing-friendly curves are. However, their work did not find embedded families for KSS pairing-friendly curves. In this note we show how the problem of finding families of embedded curves is related to the problem of finding optimal formulas for $\mathbb{G}_1$ subgroup membership testing on the pairing-friendly curve side. Then we apply Smith's technique and the criteria of Dai, Lin, Zhao, and Zhou to obtain the formulas of embedded curves with KSS, and outline a generic algorithm for solving this problem in all cases. We provide two families of embedded curves for KSS18 and give examples of cryptographic size. We also suggest alternative embedded curves for BLS that have a seed of much lower Hamming weight than Sanso et al.'s and much higher 2-valuation for fast FFT. In particular we highlight BLS12 curves which have a prime-order embedded curve that forms a plain cycle (no pairing), and a second (plain) embedded curve in Montgomery form. A Brezing-Weng outer curve giving a pairing-friendly 2-chain is also possible, as in the BLS12-377-BW6-761 construction. 
All curves have $j$-invariant 0 and an endomorphism for faster arithmetic on the curve side.2024-05-16T17:50:28+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Aurore Guillevichttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2024/751Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography2024-05-16T17:42:42+00:00Prabhanjan AnanthFatih KaleogluHenry YuenUnclonable cryptography is concerned with leveraging the no-cloning principle to build cryptographic primitives that are otherwise impossible to achieve classically. Understanding the feasibility of unclonable encryption, one of the key unclonable primitives, satisfying indistinguishability security in the plain model has been a major open question in the area. So far, the existing constructions of unclonable encryption are either in the quantum random oracle model or are based on new conjectures.
We present a new approach to unclonable encryption via a reduction to a novel
question about nonlocal quantum state discrimination: how well can
non-communicating -- but entangled -- players distinguish between different distributions over quantum states? We call this task simultaneous state indistinguishability. Our main technical result is showing that the players cannot distinguish between each player receiving independently-chosen Haar random states versus all players receiving the same Haar random state.
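Concretely, the two scenarios being compared can be written as follows (our notation, not the paper's):
\[
\rho_{\mathrm{ind}} \;=\; \mathop{\mathbb{E}}_{|\psi_1\rangle,\dots,|\psi_k\rangle \sim \mu_{\mathrm{Haar}}}\bigl[\,|\psi_1\rangle\langle\psi_1| \otimes \cdots \otimes |\psi_k\rangle\langle\psi_k|\,\bigr]
\qquad\text{versus}\qquad
\rho_{\mathrm{same}} \;=\; \mathop{\mathbb{E}}_{|\psi\rangle \sim \mu_{\mathrm{Haar}}}\bigl[\,(|\psi\rangle\langle\psi|)^{\otimes k}\,\bigr].
\]
A global measurement can separate these two mixed states (for instance, $\rho_{\mathrm{same}}$ is supported on the symmetric subspace while $\rho_{\mathrm{ind}}$ largely is not); the point of the result is that $k$ non-communicating players, each holding one share, cannot exploit this even with shared prior entanglement.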
We leverage this result to present the first construction of unclonable
encryption satisfying indistinguishability security, with quantum decryption
keys, in the plain model. We also show other implications to single-decryptor
encryption and leakage-resilient secret sharing.2024-05-16T17:42:42+00:00https://creativecommons.org/licenses/by/4.0/Prabhanjan AnanthFatih KaleogluHenry Yuenhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/750Speeding Up Multi-Scalar Multiplications for Pairing-Based zkSNARKs2024-05-16T16:50:10+00:00Xinxin FanVeronika KuchtaFrancesco SicaLei XuMulti-scalar multiplication (MSM) is one of the core components of many zero-knowledge proof systems, and a primary performance bottleneck for proof generation in these schemes. One major strategy to accelerate MSM is utilizing precomputation. Several algorithms (e.g., Pippenger and BGMW) and their variants have been proposed in this direction. In this paper, we revisit the recent precomputation-based MSM calculation method proposed by Luo, Fu and Gong at CHES 2023 and generalize their approach. In particular, we present a general construction of optimal buckets. This improvement leads to significant performance improvements, which are verified by both theoretical analysis and experiments.2024-05-16T16:50:10+00:00https://creativecommons.org/licenses/by/4.0/Xinxin FanVeronika KuchtaFrancesco SicaLei Xuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/749Reducing the CRS Size in Registered ABE Systems2024-05-16T16:21:10+00:00Rachit GargGeorge LuBrent WatersDavid J. WuAttribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data. In (ciphertext-policy) ABE, a central trusted authority issues decryption keys for attributes $x$ to users. In turn, ciphertexts are associated with a decryption policy $\mathcal{P}$. Decryption succeeds and recovers the encrypted message whenever $\mathcal{P}(x) = 1$. Recently, Hohenberger, Lu, Waters, and Wu (Eurocrypt 2023) introduced the notion of registered ABE, which is an ABE scheme without a trusted central authority. Instead, users generate their own public/secret keys (just like in public-key encryption) and then register their keys (and attributes) with a key curator. The key curator is a transparent and untrusted entity.
Currently, the best pairing-based registered ABE schemes support monotone Boolean formulas and an a priori bounded number of users $L$. A major limitation of existing schemes is that they require a (structured) common reference string (CRS) of size $L^2 \cdot |\mathcal{U}|$ where $|\mathcal{U}|$ is the size of the attribute universe. In other words, the size of the CRS scales quadratically with the number of users and multiplicatively with the size of the attribute universe. The large CRS makes these schemes expensive in practice and limited to a small number of users and a small universe of attributes.
In this work, we give two ways to reduce the CRS size in pairing-based registered ABE schemes. First, we introduce a combinatoric technique based on progression-free sets that enables registered ABE for the same class of policies but with a CRS whose size is sub-quadratic in the number of users. Asymptotically, we obtain a scheme where the CRS size is nearly linear in the number of users $L$ (i.e., $L^{1 + o(1)}$). If we take a more concrete-efficiency-oriented focus, we can instantiate our framework to obtain a construction with a CRS of size $L^{\log_2 3} \approx L^{1.6}$. For instance, in a scheme for 100,000 users, our approach reduces the CRS by a factor of over $115\times$ compared to previous approaches (and without incurring any overhead in encryption/decryption time). Our second approach for reducing the CRS size is to rely on a partitioning-based argument when arguing security of the registered ABE scheme. Previous approaches took a dual-system approach. Using a partitioning-based argument yields a registered ABE scheme where the size of the CRS is independent of the size of the attribute universe. The cost is the resulting scheme satisfies a weaker notion of static security. Our techniques for reducing the CRS size can be combined, and taken together, we obtain a pairing-based registered ABE scheme that supports monotone Boolean formulas with a CRS size of $L^{1 + o(1)}$. Notably, this is the first pairing-based registered ABE scheme that does not require imposing a bound on the size of the attribute universe during setup time.
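The combinatorial ingredient named above, a progression-free set, can be illustrated with the classical digits-in-base-3 construction (a standard example of our choosing; the paper's concrete embedding of the CRS indices into such a set is not reproduced here):

```python
from itertools import product

def progression_free_set(num_digits):
    """Integers whose base-3 expansion uses only digits 0 and 1.
    Classical 3-term-AP-free family: if a + c = 2b with all three in the
    set, then (doubling digits 0/1 never carries in base 3) the digits
    force a = b = c, so no non-trivial arithmetic progression exists."""
    return sorted(sum(d * 3**i for i, d in enumerate(digits))
                  for digits in product((0, 1), repeat=num_digits))

def has_3term_ap(sorted_elems):
    """Check a sorted list for any 3-term arithmetic progression."""
    members = set(sorted_elems)
    return any(2 * b - a in members
               for i, a in enumerate(sorted_elems)
               for b in sorted_elems[i + 1:])
```

This gives $2^k$ elements inside $[3^k]$, i.e. a set of size $L$ living in an ambient range of size $L^{\log_2 3}$, which is consistent with the $L^{\log_2 3} \approx L^{1.6}$ CRS exponent quoted above.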
As an additional application, we also show how to apply our techniques based on progression-free sets to the batch argument (BARG) for $\mathsf{NP}$ scheme of Waters and Wu (Crypto 2022) to obtain a scheme with a nearly-linear CRS without needing to rely on non-black-box bootstrapping techniques.2024-05-16T16:21:10+00:00https://creativecommons.org/licenses/by/4.0/Rachit GargGeorge LuBrent WatersDavid J. Wuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/748PERK: Compact Signature Scheme Based on a New Variant of the Permuted Kernel Problem2024-05-16T14:11:45+00:00Slim BettaiebLoïc BidouxVictor DyserynAndre EsserPhilippe GaboritMukul KulkarniMarco PalumbiIn this work we introduce PERK, a compact digital signature scheme based on the hardness of a new variant of the Permuted Kernel Problem (PKP). PERK achieves the smallest signature sizes of any PKP-based scheme for NIST category I security, at 6 kB, while obtaining competitive signing and verification timings. PERK also compares well with the general state of the art. To substantiate those claims we provide an optimized constant-time AVX2 implementation, a detailed performance analysis and different size-performance trade-offs.
Technically, our scheme is based on a Zero-Knowledge Proof of Knowledge following the MPC-in-the-Head paradigm and employing the Fiat-Shamir transform. We provide comprehensive security proofs, ensuring EUF-CMA security for PERK in the random oracle model. The efficiency of PERK stems largely from our particular choice of PKP variant, which allows for an application of the challenge-space amplification technique due to Bidoux-Gaborit (C2SI 2023).
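For intuition, standard PKP (not the new variant introduced in this paper) asks for a secret permutation that maps a public vector into the kernel of a public matrix. A toy sampler and checker, purely illustrative (real instances use cryptographic sizes and extra constraints such as pairwise-distinct vector entries):

```python
import random

def pkp_instance(m, n, p, seed=0):
    """Sample a toy standard-PKP instance over F_p (p prime):
    public H (m x n) and v; secret permutation pi with H * v_pi = 0."""
    rng = random.Random(seed)
    # random kernel vector w, last entry nonzero hence invertible mod p
    w = [rng.randrange(p) for _ in range(n - 1)] + [rng.randrange(1, p)]
    H = []
    for _ in range(m):
        row = [rng.randrange(p) for _ in range(n - 1)]
        s = sum(r * x for r, x in zip(row, w)) % p
        row.append(-s * pow(w[-1], -1, p) % p)  # force (row . w) = 0 mod p
        H.append(row)
    pi = list(range(n))
    rng.shuffle(pi)
    v = [0] * n
    for i, j in enumerate(pi):   # hide w as the pi-permutation of v
        v[j] = w[i]
    return H, v, pi

def pkp_check(H, v, pi, p):
    """Verify that v permuted by pi lies in the kernel of H."""
    vp = [v[j] for j in pi]
    return all(sum(h * x for h, x in zip(row, vp)) % p == 0 for row in H)
```

The hardness comes from the $n!$ candidate permutations combined with the linear constraints; the names `pkp_instance`/`pkp_check` are ours, not PERK's API.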
Our second main contribution is an in-depth study of the hardness of the introduced problem variant. First, we establish a link between the hardness of our problem variant and the hardness of standard PKP. Then, we initiate an in-depth study of the concrete complexity of solving our variant. We present a novel algorithm which outperforms previous approaches for certain parameter regimes. However, the proximity of our problem variant to the standard variant can be controlled via a specific parameter. This enables us to effectively safeguard against our new attack and potential future extensions by a choice of parameters that ensures only a slight variation from standard PKP.2024-05-16T14:11:45+00:00https://creativecommons.org/licenses/by/4.0/Slim BettaiebLoïc BidouxVictor DyserynAndre EsserPhilippe GaboritMukul KulkarniMarco Palumbihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/747Scaling Lattice Sieves across Multiple Machines2024-05-16T13:51:54+00:00Martin R. AlbrechtJoe RowellLattice sieves are algorithms for finding short vectors in lattices. We present an implementation of two such sieves – known as “BGJ1” and “BDGL” in the literature – that scales across multiple servers (with varying success). This class of algorithms requires exponential memory, which had put into question their ability to scale across sieving nodes. We discuss our architecture and optimisations and report experimental evidence of the efficiency of our approach.2024-05-16T13:51:54+00:00https://creativecommons.org/publicdomain/zero/1.0/Martin R. AlbrechtJoe Rowellhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2024/746The Art of Bonsai: How Well-Shaped Trees Improve the Communication Cost of MLS2024-05-16T09:36:47+00:00Céline ChevalierGuirec LebrunAnge MartinelliJérôme PlûtMessaging Layer Security (MLS) is a Secure Group Messaging protocol that uses for its handshake a binary tree – called a Ratchet Tree – in order to reach a logarithmic communication cost w.r.t. the number of group members. This Ratchet Tree represents users as its leaves; therefore any change in the group membership results in adding or removing a leaf associated with that user. MLS consequently implements what we call a tree evolution mechanism, consisting of a user add algorithm – determining where to insert a new leaf – and a tree expansion process – stating how to increase the size of the tree when no space is available for a new user.
The tree evolution mechanism currently used by MLS is designed so that it naturally left-balances the Ratchet Tree. However, such a Ratchet Tree structure is often quite inefficient in terms of communication cost. Furthermore, one may wonder whether the binary tree used in that Ratchet Tree has a degree optimized for the features of a handshake in MLS – called a commit.
Therefore, we study in this paper how to improve the communication cost of a commit in MLS by considering both the tree evolution mechanism and the tree degree used for the Ratchet Tree. To do so, we determine the tree structure that optimizes its communication cost, and we propose optimized algorithms for both the user add and tree expansion processes, which allow us to remain close to that optimal structure and thus to keep the communication cost as close to optimal as possible.
We also determine the Ratchet Tree degree that is best suited to a given set of parameters induced by the encryption scheme used by MLS. This study shows that when using classical (i.e. pre-quantum) ciphersuites, a binary tree is indeed the most appropriate Ratchet Tree; nevertheless, when it comes to post-quantum algorithms, it generally becomes more interesting to use a ternary tree instead.
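The degree trade-off can be sketched with a toy ciphertext-count model (our simplification, not the paper's cost function, which also weighs the sizes of post-quantum public keys and ciphertexts):

```python
def tree_depth(n_leaves, arity):
    """Smallest d with arity**d >= n_leaves; integer arithmetic avoids
    floating-point log pitfalls on exact powers."""
    d, capacity = 0, 1
    while capacity < n_leaves:
        capacity *= arity
        d += 1
    return d

def commit_ciphertexts(n_leaves, arity):
    """Toy model: a commit re-keys one leaf-to-root path, encrypting the
    new secret to the (arity - 1) sibling subtrees at each level."""
    return (arity - 1) * tree_depth(n_leaves, arity)

# counting ciphertexts alone favours the binary tree:
costs = {m: commit_ciphertexts(1024, m) for m in (2, 3, 4)}
```

For 1024 members this yields 10, 14 and 15 ciphertexts for arity 2, 3 and 4, so pure ciphertext count favours binary trees; the abstract's finding that ternary trees win with post-quantum ciphersuites rests on size effects this toy count ignores.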
Our improvements do not change the TreeKEM protocol and are
easy to implement. With parameter sets corresponding to practical ciphersuites, they reduce TreeKEM’s communication cost by 5 to 10%. In particular, the 10% gain appears in the post-quantum setting – when both an optimized tree evolution mechanism and a ternary tree are necessary – which is precisely the context where any optimization of the protocol’s communication cost is welcome, due to the large bandwidth of PQ encrypted communication.2024-05-16T09:36:47+00:00https://creativecommons.org/licenses/by/4.0/Céline ChevalierGuirec LebrunAnge MartinelliJérôme Plûthttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/745$\mathsf{FRAST}$: TFHE-friendly Cipher Based on Random S-boxes2024-05-16T06:21:40+00:00Mingyu ChoWoohyuk ChungJincheol HaJooyoung LeeEun-Gyeol OhMincheol SonA transciphering framework, also known as hybrid homomorphic encryption, is a practical method of combining a homomorphic encryption (HE) scheme with a symmetric cipher in the client-server model to reduce computational and communication overhead on the client side. As a server homomorphically evaluates a symmetric cipher in this framework, new design rationales are required for ``HE-friendly'' ciphers that take into account the specific properties of the HE schemes.
In this paper, we propose a new TFHE-friendly cipher, dubbed $\mathsf{FRAST}$, with a TFHE-friendly round function based on a random S-box to minimize the number of rounds.
The round function of $\mathsf{FRAST}$ can be efficiently evaluated in TFHE by a new optimization technique, dubbed double blind rotation.
Combined with our new WoP-PBS method, the double blind rotation allows computing multiple S-box calls in the round function of $\mathsf{FRAST}$ at the cost of a single S-box call.
In this way, $\mathsf{FRAST}$ enjoys $2.768$ (resp. $10.57$) times higher throughput compared to $\mathsf{Kreyvium}$ (resp. $\mathsf{Elisabeth}$) for TFHE keystream evaluation in the offline phase of the transciphering framework, at the cost of slightly larger communication overhead.2024-05-16T06:21:40+00:00https://creativecommons.org/licenses/by/4.0/Mingyu ChoWoohyuk ChungJincheol HaJooyoung LeeEun-Gyeol OhMincheol Sonhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/744An NVMe-based Secure Computing Platform with FPGA-based TFHE Accelerator2024-05-16T05:07:10+00:00Yoshihiro OhbaTomoya SanukiClaude GravelKentaro MiharaIn this paper, we introduce a new approach to secure computing by implementing a platform that utilizes an NVMe-based system with an FPGA-based Torus FHE accelerator, SSD, and middleware on the host side. Our platform is the first of its kind to offer complete secure computing capabilities for TFHE using an FPGA-based accelerator. We have defined secure computing instructions to evaluate 14-bit to 14-bit functions using TFHE, and our middleware allows for communication of ciphertexts, keys, and secure computing programs while invoking secure computing programs through NVMe commands with metadata. Our CMux gate implementation features an optimized NTT/INTT circuit that eliminates pre-NTT and post-INTT operations by pre-scaling and pre-transforming constant polynomials such as the bootstrapping and private-functional key-switching keys. Our performance evaluation demonstrates that our secure computing platform outperforms CPU-based and GPU-based platforms by 15 to 120 times and by 2.5 to 3 times, respectively, in gate bootstrapping execution time.
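The pre-scaling idea can be sketched on a toy negacyclic NTT (the parameters $q = 17$, $n = 8$ and all names are ours, chosen only so the required roots of unity exist; real TFHE accelerators use much larger moduli): a constant polynomial, such as part of a bootstrapping key, is stored already twisted and transformed, so its forward NTT is never recomputed.

```python
Q, N = 17, 8
PSI = 3               # primitive 2N-th root of unity mod Q (3**8 = -1 mod 17)
OMEGA = PSI * PSI % Q  # primitive N-th root of unity

def ntt(a):   # naive O(n^2) forward transform, fine for illustration
    return [sum(a[j] * pow(OMEGA, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(A):  # inverse transform with 1/N normalization
    n_inv = pow(N, -1, Q)
    return [n_inv * sum(A[j] * pow(OMEGA, -i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def twist(a, s):  # multiply coefficient i by s**i: the "pre-scaling"
    return [a[i] * pow(s, i, Q) % Q for i in range(N)]

# Precompute once: the constant polynomial stored twisted + transformed,
# so every later multiplication skips its forward NTT entirely.
const = [1, 2, 3, 4, 5, 6, 7, 8]
const_hat = ntt(twist(const, PSI))

def negacyclic_mul(msg):
    """msg * const modulo (x^N + 1, Q), reusing the cached const_hat."""
    prod = intt([x * y % Q for x, y in zip(ntt(twist(msg, PSI)), const_hat)])
    return twist(prod, pow(PSI, -1, Q))
```

The twist by powers of $\psi$ folds the negacyclic reduction into the plain NTT, which is what makes caching the transformed constant sound.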
Additionally, our platform consumes 7 to 12 times less energy during gate bootstrapping than CPU-based platforms and 1.15 to 1.2 times less than GPU-based platforms.2024-05-16T05:07:10+00:00https://creativecommons.org/licenses/by-nc-nd/4.0/Yoshihiro OhbaTomoya SanukiClaude GravelKentaro Miharahttps://creativecommons.org/licenses/by-nc-nd/4.0/https://eprint.iacr.org/2024/743Improved Conditional Cube Attacks on Ascon AEADs in Nonce-Respecting Settings -- with a Break-Fix Strategy2024-05-15T23:14:23+00:00Kai HuThe best-known distinguisher on 7-round Ascon-128 and Ascon-128a AEAD uses a 60-dimensional cube where the nonce bits are set to be equal in the third and fourth rows of the Ascon state during initialization (Rohit et al. ToSC 2021/1).
It was not known how to use this distinguisher to mount key-recovery attacks.
In this paper, we investigate this problem using a new strategy called \textit{break-fix} for the conditional cube attack. The idea is to introduce slightly modified cubes that increase the degrees of the 7-round output bits beyond 59 (break phase) and then to find key conditions that bring the degree back to 59 (fix phase).
Using this idea, key-recovery attacks on 7-round Ascon-128, Ascon-128a and Ascon-80pq are proposed.
The attacks have better time/memory complexities than the existing attacks, and in some cases improve the weak-key attacks as well.2024-05-15T23:14:23+00:00https://creativecommons.org/licenses/by/4.0/Kai Huhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/742Efficient Universally-Verifiable Electronic Voting with Everlasting Privacy2024-05-15T20:08:35+00:00David PointchevalUniversal verifiability is a must-have for electronic voting schemes. It is essential to ensure honest behavior of all the players during the whole process, together with eligibility. However, it should not endanger the privacy of the individual votes, which is another major requirement.
Whereas the first property prevents attacks during the voting process, privacy of the votes should hold forever, which has been called everlasting privacy.
A classical approach to universal verifiability is to add some proofs together with the encrypted votes, which requires publication of the latter, while eligibility needs a link between the votes and the voters: this definitely excludes long-term privacy. An alternative is the use of perfectly-hiding commitments, on which proofs are published, while ciphertexts are kept private for computing the tally.
In this paper, we show how recent linearly-homomorphic signatures can be exploited for all the proofs, leading to very efficient procedures towards universal verifiability with both strong receipt-freeness and everlasting privacy.
Privacy will indeed be unconditional after the publication of the results and the proofs, whereas the soundness of the proofs holds in the algebraic group model and the random oracle model.2024-05-15T20:08:35+00:00https://creativecommons.org/licenses/by/4.0/David Pointchevalhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/741A Deniability Analysis of Signal's Initial Handshake PQXDH2024-05-15T15:48:37+00:00Rune FiedlerChristian JansonMany people use messaging apps such as Signal to exercise their right to private communication. To cope with the advent of quantum computing, Signal employs a new initial handshake protocol called PQXDH for post-quantum confidentiality, yet keeps guarantees of authenticity and deniability classical. Compared to its predecessor X3DH, PQXDH includes a KEM encapsulation and a signature on the ephemeral key. In this work we show that PQXDH does not meet the same deniability guarantees as X3DH due to the signature on the ephemeral key. Our analysis relies on plaintext awareness of the KEM, which Signal's implementation of PQXDH does not provide. As for X3DH, both parties (initiator and responder) obtain different deniability guarantees due to the asymmetry of the protocol.
For our analysis of PQXDH, we introduce a new model for deniability of key exchange that allows a more fine-grained analysis. Our deniability model picks up on the ideas of prior work and facilitates new combinations of deniability notions, such as deniability against malicious adversaries in the big brother model, i.e., where the distinguisher knows all secret keys. Our model may be of independent interest.2024-05-15T15:48:37+00:00https://creativecommons.org/licenses/by/4.0/Rune FiedlerChristian Jansonhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/740Multi-Client Functional Encryption with Public Inputs and Strong Security2024-05-15T08:08:55+00:00Ky NguyenDuong Hieu PhanDavid PointchevalRecent years have witnessed significant development of functional encryption (FE) in the multi-user setting, particularly with multi-client functional encryption (MCFE). The challenge becomes more important when combined with access control, such as attribute-based encryption (ABE), which was not covered by the FE and MCFE frameworks. On the other hand, as for complex primitives, many works have studied the admissibility of adversaries to ensure that the security model encompasses all realistic attack threats.
In this paper, by adding a public input to FE/MCFE, we cover many previous primitives, notably attribute-based function classes. Furthermore, with the strongest admissibility for the inner-product functionality, our framework is quite versatile: it encrypts multiple sub-vectors, allows repetitions and corruptions, and eventually also encompasses public-key FE and classical ABE, bridging the private setting of MCFE with the public setting of FE and ABE.
Finally, we propose an MCFE with public inputs for the class of functions that combines inner products (on private inputs) and attribute-based access control (on public inputs) for LSSS policies. We achieve the first AB-MCFE for inner products with strong admissibility and adaptive security. This also leads to MIFE for inner products, public-key single-input inner-product FE with LSSS key-policy, and KPABE for LSSS, all with adaptive security, whereas the previous AB-MCFE construction of Agrawal et al. from CRYPTO '23 considers a slightly larger functionality of average weighted sum but with selective security only.2024-05-15T08:08:55+00:00https://creativecommons.org/licenses/by/4.0/Ky NguyenDuong Hieu PhanDavid Pointchevalhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/739BGJ15 Revisited: Sieving with Streamed Memory Access2024-05-15T06:01:23+00:00Ziyu ZhaoJintai DingBo-Yin YangThe focus of this paper is to tackle the issue of memory access within sieving algorithms for lattice problems. We have conducted an in-depth analysis of an optimized BGJ sieve (Becker-Gama-Joux 2015), and our findings suggest that its inherent structure is significantly more memory-efficient than the asymptotically fastest BDGL sieve (Becker-Ducas-Gama-Laarhoven 2016). Specifically, it requires merely $2^{0.2075n + o(n)}$ streamed (non-random) main memory accesses to execute an $n$-dimensional sieve. We also provide evidence that the time complexity of this refined BGJ sieve could potentially be $2^{0.292n + o(n)}$, or at least remarkably close to it; indeed, it outperforms the BDGL sieve in all practically achievable dimensions. We hope that this study will contribute to resolving the ongoing debate regarding the measurement of RAM access overhead in large-scale, sieving-based lattice attacks.
These findings are supported by our implementation: we provide a CPU-based implementation of the refined BGJ sieve, highly efficient in both time and memory, within an optimized sieving framework. This implementation yields approximately 40% savings in RAM usage and is at least $2^{4.5}$ times more efficient in terms of gate count than the previous 4-GPU implementation (Ducas-Stevens-Woerden 2021). Notably, we have solved the 183-dimensional SVP Darmstadt Challenge in 30 days using a 112-core server and approximately 0.87 TB of RAM. Since the majority of previous sieving-based SVP computations relied on the HK3-sieve (Herold-Kirshanova 2017), this implementation offers further insight into the behavior of these asymptotically faster sieving algorithms when applied to large-scale problems. Moreover, our refined cost estimation of SVP based on this implementation suggests that some of the NIST PQC candidates, such as Falcon-512, are unlikely to achieve NIST's security requirements.2024-05-15T06:01:23+00:00https://creativecommons.org/licenses/by/4.0/Ziyu ZhaoJintai DingBo-Yin Yanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/738Quantum Key-Revocable Dual-Regev Encryption, Revisited2024-05-14T14:50:35+00:00Prabhanjan AnanthZihan HuZikuan HuangQuantum information can be used to achieve novel cryptographic primitives that are impossible to achieve classically. A recent work by Ananth, Poremba, Vaikuntanathan (TCC 2023) focuses on equipping the dual-Regev encryption scheme, introduced by Gentry, Peikert, Vaikuntanathan (STOC 2008), with key revocation capabilities using quantum information. They further showed that the key-revocable dual-Regev scheme implies the existence of fully homomorphic encryption and pseudorandom functions, with both of them also equipped with key revocation capabilities.
Unfortunately, they were only able to prove the security of their schemes based on new conjectures and left open the problem of basing the security of key-revocable dual-Regev encryption on well-studied assumptions.
In this work, we resolve this open problem. Assuming polynomial hardness of learning with errors (over sub-exponential modulus), we show that key-revocable dual-Regev encryption is secure. As a consequence, for the first time, we achieve the following results:
1. Key-revocable public-key encryption and key-revocable fully-homomorphic encryption satisfying classical revocation security and based on polynomial hardness of learning with errors. Prior works either did not achieve classical revocation or were based on sub-exponential hardness of learning with errors.
2. Key-revocable pseudorandom functions satisfying classical revocation from the polynomial hardness of learning with errors. Prior works relied upon unproven conjectures.2024-05-14T14:50:35+00:00https://creativecommons.org/licenses/by/4.0/Prabhanjan AnanthZihan HuZikuan Huanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/737Mutable Batch Arguments and Applications2024-05-13T19:45:35+00:00Rishab GoyalNon-interactive batch arguments (BARGs) let a prover compute a single proof $\pi$ proving validity of a `batch' of $k$ $\mathbf{NP}$ statements $x_1, \ldots, x_{k}$. The two central features of BARGs are succinctness and soundness. Succinctness states that the proof size $|\pi|$ does not grow with $k$, while soundness states that a polynomial-time cheating prover cannot create an accepting proof for any invalid batch of statements.
In this work, we put forth a new concept of mutability for batch arguments, called mutable batch arguments. Our goal is to re-envision how we think about and use BARGs. Traditionally, a BARG proof string $\pi$ is an immutable encoding of the $k$ $\mathbf{NP}$ witnesses $\omega_1, \ldots, \omega_{k}$. In a mutable BARG system, each proof string $\pi$ is a mutable encoding of the original witnesses. Thus, a mutable BARG captures and enables computations over a batch proof $\pi$. We also study new privacy notions for mutable BARGs, guaranteeing that a mutated proof hides all non-trivial information. Such mutable BARGs are a natural fit for many privacy-sensitive applications.
Our main contributions are introducing the general concept of mutable BARGs, identifying new non-trivial classes of feasible mutations over BARGs, designing new constructions of mutable BARGs satisfying mutation privacy for these classes from standard cryptographic assumptions, and developing applications of mutable BARGs to advanced signatures such as homomorphic signatures, redactable signatures, and aggregate signatures. Our results improve the state of the art for many such signature systems in terms of functionality, efficiency, security, or versatility (in terms of cryptographic assumptions).2024-05-13T19:45:35+00:00https://creativecommons.org/licenses/by/4.0/Rishab Goyalhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/736Secret Sharing with Certified Deletion2024-05-13T18:35:58+00:00James BartusekJustin RaizesSecret sharing allows a user to split a secret into many shares so that the secret can be recovered if, and only if, an authorized set of shares is collected. Although secret sharing typically does not require any computational hardness assumptions, its security does require that an adversary cannot collect an authorized set of shares. Over long periods of time where an adversary can benefit from multiple data breaches, this may become an unrealistic assumption.
We initiate the systematic study of secret sharing with certified deletion in order to achieve security even against an adversary that eventually collects an authorized set of shares. In secret sharing with certified deletion, a (classical) secret $s$ is split into quantum shares that can be destroyed in a manner verifiable by the dealer.
We put forth two natural definitions of security. No-signaling security roughly requires that if multiple non-communicating adversaries delete sufficiently many shares, then their combined view contains negligible information about $s$, even if the total set of corrupted parties forms an authorized set. Adaptive security requires privacy of $s$ against an adversary that can continuously and adaptively corrupt new shares and delete previously-corrupted shares, as long as the total set of corrupted shares minus deleted shares remains unauthorized.
Next, we show that these security definitions are achievable: we show how to construct (i) a secret sharing scheme with no-signaling certified deletion for any monotone access structure, and (ii) a threshold secret sharing scheme with adaptive certified deletion. Our first construction uses Bartusek and Khurana's (CRYPTO 2023) 2-out-of-2 secret sharing scheme with certified deletion as a building block, while our second construction is built from scratch and requires several new technical ideas. For example, we significantly generalize the ``XOR extractor'' of Agarwal, Bartusek, Khurana, and Kumar (EUROCRYPT 2023) in order to obtain better seedless extraction from certain quantum sources of entropy, and show how polynomial interpolation can double as a high-rate randomness extractor in our context of threshold sharing with certified deletion.2024-05-13T18:35:58+00:00https://creativecommons.org/licenses/by/4.0/James BartusekJustin Raizeshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/735Secure Multiparty Computation in the Presence of Covert Adaptive Adversaries2024-05-13T17:32:48+00:00Isheeta NargisAnwar HasanWe design a new MPC protocol for arithmetic circuits secure against erasure-free covert adaptive adversaries with deterrence 1/2. The new MPC protocol has the same asymptotic communication cost, number of PKE operations, and number of exponentiation operations as the most efficient MPC protocol for arithmetic circuits secure against covert static adversaries. That is, the new MPC protocol upgrades security from covert static to covert adaptive almost for free.
For MPC problems where the number of parties $n$ is much larger than the number of multiplication gates $M$, the new MPC protocol asymptotically improves communication complexity over the most efficient MPC protocol for arithmetic circuits secure against erasure-free active adaptive adversaries.2024-05-13T17:32:48+00:00https://creativecommons.org/licenses/by/4.0/Isheeta NargisAnwar Hasanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/734Proof of Stake and Activity: Rewarding On-Chain Activity Through Consensus2024-05-13T16:42:39+00:00Aram JivanyanKaren TerjanianWe introduce a novel consensus protocol for
blockchain, called Proof of Stake and Activity (PoSA), which augments traditional Proof of Stake methods by integrating a unique Proof of Activity system. PoSA offers a compelling economic model that promotes decentralization by rewarding validators based on both their staked capital and the business value they contribute to the chain. This protocol has already been implemented in a fully-fledged blockchain platform
called Bahamut (www.bahamut.io), which boasts hundreds of thousands of active users.2024-05-13T16:42:39+00:00https://creativecommons.org/licenses/by/4.0/Aram JivanyanKaren Terjanianhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/733Proxying is Enough: Security of Proxying in TLS Oracles and AEAD Context Unforgeability2024-05-13T16:27:08+00:00Zhongtang LuoYanxue JiaYaobin ShenAniket KateTLS oracles allow a TLS client to offer selective data provenance to an external (oracle) node such that the oracle node is ensured that the data is indeed coming from a pre-defined TLS server. Typically, the client/user supplies their credentials and reveals data in a zero-knowledge manner to demonstrate certain information to oracles while ensuring the privacy of the rest of the data. Conceptually, this is a standard three-party secure computation between the TLS server, TLS client (prover), and the oracle (verifier) node; however, the key practical requirement for TLS oracles is to ensure that the data provenance process remains transparent to the TLS server. Recent TLS oracle protocols such as DECO enforce the communication pattern of server-client-verifier and utilize a novel three-party handshake process during TLS to ensure data integrity against potential tampering by the client. However, this approach comes with a significant performance penalty on the client/prover and the verifier, and raises the question of whether it is possible to improve the performance by putting the verifier (as a proxy) between the server and the client such that the correct TLS transcript is always available to the verifier.
This work offers both positive and negative answers to this oracle proxy question: We first formalize the oracle proxy notion, which allows the verifier to directly proxy client-server TLS communication, without entering a three-party handshake or interfering with the connection in any way. We then show that for common TLS-based higher-level protocols such as HTTPS, data integrity to the verifier proxy is ensured by the variable padding built into the HTTP protocol semantics. On the other hand, if a TLS-based protocol comes without variable padding, we demonstrate that data integrity cannot be guaranteed. In this context, we then study the case where the TLS response is pre-determined and cannot be tampered with during the connection. We propose the concept of context unforgeability and show that it allows overcoming the impossibility. We further show that ChaCha20-Poly1305 satisfies the concept while AES-GCM does not, in the standard model.2024-05-13T16:27:08+00:00https://creativecommons.org/licenses/by/4.0/Zhongtang LuoYanxue JiaYaobin ShenAniket Katehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/732Compact Encryption based on Module-NTRU problems2024-05-13T15:31:40+00:00Shi BaiHansraj JangirHao LinTran NgoWeiqiang WenJinwei ZhengThe Module-NTRU problem, introduced by Cheon, Kim,
Kim, Son (IACR ePrint 2019/1468), and Chuengsatiansup, Prest, Stehlé, Wallet, Xagawa (ASIACCS ’20), generalizes the versatile NTRU assumption. One of its main advantages lies in its ability to offer greater flexibility on parameters, such as the underlying ring dimension. In this work, we present several lattice-based encryption schemes which are IND-CPA (or OW-CPA) secure in the standard model based on the Module-NTRU and Module-LWE problems. Leveraging the Fujisaki-Okamoto transformations, one can obtain IND-CCA secure key encapsulation schemes. Our first encryption scheme is based on the Module-NTRU assumption and uses the determinant of the secret matrix over the underlying ring for decryption. Our second scheme is analogous to the Module-LWE encryption scheme, but uses only a matrix as the public key, based on a vectorial variant of the Module-NTRU problem. Finally, we conduct a comprehensive analysis of known attacks and propose concrete parameters for the instantiations. In particular, our ciphertext size is about 614 (resp. 1228) bytes for NIST Level 1 (resp. Level 5) security with a small decryption failure rate, placing it on par with the most recent schemes such as the one proposed by Zhang, Feng and Yan (ASIACRYPT ’23). We also present several competitive parameters for NIST Level 3, with a ciphertext size of 921 bytes. Moreover, our schemes do not require specific codes for plaintext encoding and decoding.2024-05-13T15:31:40+00:00https://creativecommons.org/licenses/by/4.0/Shi BaiHansraj JangirHao LinTran NgoWeiqiang WenJinwei Zhenghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/731Tight Security of Double-Block Nonce-Based MACs2024-05-13T12:21:53+00:00Wonseok ChoiJooyoung LeeYeongmin LeeIn this paper, we study the security of MAC constructions among those classified by Chen et al. in ASIACRYPT '21. Precisely, $F^{\text{EDM}}_{B_2}$ (or $\mathsf{EWCDM}$, as named by Cogliati and Seurin in CRYPTO '16), $F^{\text{EDM}}_{B_3}$, $F^{\text{SoP}}_{B_2}$, and $F^{\text{SoP}}_{B_3}$ (all as named by Chen et al.) are proved to be fully secure up to $2^n$ MAC queries in the nonce-respecting setting, improving the previous bound of $\frac{3n}{4}$-bit security. In particular, $F^{\text{SoP}}_{B_2}$ and $F^{\text{SoP}}_{B_3}$ enjoy graceful degradation as the number of queries with repeated nonces grows (when the underlying universal hash function satisfies a certain property called multi-xor-collision resistance). To this end, we develop a new tool, namely an extended Mirror theory based on two independent permutations, applicable to a wide range of $\xi_{\max}$ including inequalities. Furthermore, we give a generic semi-black-box reduction from single-user security bounds in the standard model to multi-user security bounds in the ideal cipher model, yielding significantly better bounds than the naive hybrid argument. This reduction is applicable to all MAC constructions we consider in this paper and can be generalized further.
We also present matching attacks on $F^{\text{EDM}}_{B_4}$ and $F^{\text{EDM}}_{B_5}$ using $O(2^{3n/4})$ MAC queries and $O(1)$ verification queries, without using repeated nonces.2024-05-13T12:21:53+00:00https://creativecommons.org/publicdomain/zero/1.0/Wonseok ChoiJooyoung LeeYeongmin Leehttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2024/730New Solutions to Delsarte's Dual Linear Programs2024-05-13T12:02:37+00:00André ChaillouxThomas Debris-AlazardUnderstanding the maximum size of a code with a given minimum distance is a major question in computer science and discrete mathematics. The most fruitful approach for finding asymptotic bounds on such codes is Delsarte's theory of association schemes. With this approach, Delsarte constructs a linear program whose maximum value is an upper bound on the maximum size of a code with a given minimum distance. Bounding this value can be done by finding solutions to the corresponding dual linear program. Delsarte's theory is very general and goes well beyond binary codes.
In this work, we provide universal bounds in the framework of association schemes that generalize the Hamming bound and the Elias-Bassalygo bound, and that can be applied to any association scheme constructed from a distance function. These bounds are obtained by constructing new solutions to Delsarte's dual linear program. We instantiate these results and recover known bounds for $q$-ary codes and for constant-weight binary codes that did not previously follow from the linear programming method. Our other contribution is to recover, for essentially any $Q$-polynomial scheme, MRRW-type solutions to Delsarte's dual linear program which are inspired by the Laplacian approach of Friedman and Tillich instead of using the Christoffel-Darboux formulas. We show in particular how the second linear programming bound can be interpreted in this framework.2024-05-13T12:02:37+00:00https://creativecommons.org/licenses/by/4.0/André ChaillouxThomas Debris-Alazardhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/729Covert Adaptive Adversary Model: A New Adversary Model for Multiparty Computation2024-05-13T01:10:18+00:00Isheeta NargisAnwar HasanIn the covert adversary model, corrupted parties can behave in any possible way, like active adversaries, but any party that attempts to cheat is guaranteed to be caught by the honest parties with a minimum fixed probability, called the deterrence factor of the covert adversary model. Security-wise, a covert adversary is stronger than a passive adversary and weaker than an active adversary, and the model is more realistic than the passive adversary model. Protocols for covert adversaries are significantly more efficient than protocols for active adversaries. The covert adversary model is defined only for static corruption, whereas the adaptive adversary model is more realistic than static adversaries.
In this article, we define a new adversary model, the covert adaptive adversary model, by generalizing the definition of the covert adversary model to the more realistic adaptive corruption. We prove security relations between the new covert adaptive adversary model and existing adversary models such as the passive adaptive adversary model, the active adaptive adversary model, and the covert static adversary model. We prove the sequential composition theorem for the new adversary model, which is necessary to allow modular design of protocols for this model.2024-05-13T01:10:18+00:00https://creativecommons.org/licenses/by/4.0/Isheeta NargisAnwar Hasanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/728Relativized Succinct Arguments in the ROM Do Not Exist2024-05-12T19:28:38+00:00Annalisa BarbaraAlessandro ChiesaZiyi GuanA relativized succinct argument in the random oracle model (ROM) is a succinct argument in the ROM that can prove/verify the correctness of computations that involve queries to the random oracle. We prove that relativized succinct arguments in the ROM do not exist. The impossibility holds even if the succinct argument is interactive, and even if soundness is computational (rather than statistical).
This impossibility puts on a formal footing the commonly-held belief that succinct arguments require non-relativizing techniques. Moreover, our results stand in sharp contrast with other oracle models, for which a recent line of work has constructed relativized succinct non-interactive arguments (SNARGs). Indeed, relativized SNARGs are a powerful primitive that, e.g., can be used to obtain constructions of IVC (incrementally-verifiable computation) and PCD (proof-carrying data) based on falsifiable cryptographic assumptions. Our results rule out this approach for IVC and PCD in the ROM.2024-05-12T19:28:38+00:00https://creativecommons.org/licenses/by/4.0/Annalisa BarbaraAlessandro ChiesaZiyi Guanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/727Let Attackers Program Ideal Models: Modularity and Composability for Adaptive Compromise2024-05-12T18:55:15+00:00Joseph JaegerWe show that the adaptive compromise security definitions of Jaeger and Tyagi (Crypto '20) cannot be applied in several natural use cases. These include proving multi-user security from single-user security, the security of the cascade PRF, and the security of schemes sharing the same ideal primitive. We provide new variants of the definitions and show that they resolve these issues with composition. Extending these definitions to the asymmetric setting, we establish the security of the modular KEM/DEM and Fujisaki-Okamoto approaches to public-key encryption in the full adaptive compromise setting. This allows instantiations which are more efficient and standard than prior constructions.2024-05-12T18:55:15+00:00https://creativecommons.org/licenses/by/4.0/Joseph Jaegerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/726Challenger: Blockchain-based Massively Multiplayer Online Game Architecture2024-05-12T12:53:15+00:00Boris Chan Yip HonBilel ZaghdoudiMaria Potop-ButucaruSébastien TixeuilSerge FdidaWe propose Challenger, a peer-to-peer blockchain-based middleware architecture for narrative games, and discuss its resilience to cheating attacks. Our architecture orchestrates nine services in a fully decentralized manner, where nodes are aware of neither the entire composition of the system nor its size. All these components are orchestrated together to obtain (strong) resilience to cheaters.
The main contribution of the paper is to provide, for the first time, an architecture for narrative games that is agnostic of any particular blockchain and that brings together several distinct research areas, namely distributed ledgers, peer-to-peer networks, multiplayer online games, and resilience to attacks.2024-05-12T12:53:15+00:00https://creativecommons.org/licenses/by/4.0/Boris Chan Yip HonBilel ZaghdoudiMaria Potop-ButucaruSébastien TixeuilSerge Fdidahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/725Multi User Security of LightMAC and LightMAC_Plus2024-05-12T09:29:23+00:00Nilanjan DattaShreya DeyAvijit DuttaDevdutto KanungoIn FSE'16, Luykx et al. proposed $\textsf{LightMAC}$, which provably achieves a query-length-independent PRF security bound. To be precise, the construction achieves security roughly in the order of $O(q^2/2^n)$ when instantiated with two independently keyed $n$-bit block ciphers, where $q$ is the total number of queries made by the adversary. Subsequently, in ASIACRYPT'17, Naito proposed a beyond-birthday-bound variant of the $\textsf{LightMAC}$ construction, dubbed $\textsf{LightMAC_Plus}$, that is built on three independently keyed $n$-bit block ciphers and achieves $2n/3$-bit PRF security. Security analyses of these two constructions have been conducted in the single-user setting, where the adversary is assumed to have access to a single instance of the construction. In this paper, we investigate, for the first time, the security of the $\textsf{LightMAC}$ and $\textsf{LightMAC_Plus}$ constructions in the multi-user setting, where the adversary is assumed to have access to more than one instance of the construction.
In particular, we show that $\textsf{LightMAC}$ remains secure roughly up to $2^{n/2}$ construction queries and $2^k$ ideal-cipher queries, and that $\textsf{LightMAC_Plus}$ maintains security up to approximately $2^{2n/3}$ construction queries and $2^{2k/3}$ ideal-cipher queries, both in the ideal-cipher model, where $n$ denotes the block size and $k$ denotes the key size of the block cipher.2024-05-12T09:29:23+00:00https://creativecommons.org/publicdomain/zero/1.0/Nilanjan DattaShreya DeyAvijit DuttaDevdutto Kanungohttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2024/724zkSNARKs in the ROM with Unconditional UC-Security2024-05-13T08:31:00+00:00Alessandro ChiesaGiacomo FenziThe universal composability (UC) framework is a “gold standard” for security in cryptography. UC-secure protocols achieve strong security guarantees against powerful adaptive adversaries, and retain these guarantees when used as part of larger protocols. Zero knowledge succinct non-interactive arguments of knowledge (zkSNARKs) are a popular cryptographic primitive that are often used within larger protocols deployed in dynamic environments, and so UC-security is a highly desirable, if not necessary, goal.
In this paper we prove that there exist zkSNARKs in the random oracle model (ROM) that unconditionally achieve UC-security. Here, “unconditionally” means that security holds against adversaries that make a bounded number of queries to the random oracle, but are otherwise computationally unbounded.
Prior work studying UC-security for zkSNARKs obtains transformations that rely on computational assumptions and, in many cases, lose most of the succinctness property of the zkSNARK. Moreover, these transformations make the resulting zkSNARK more expensive and complicated.
In contrast, we prove that widely used zkSNARKs in the ROM are UC-secure without modifications. We prove that the Micali construction, which is the canonical construction of a zkSNARK, is UC-secure. Moreover, we prove that the BCS construction, which many zkSNARKs deployed in practice are based on, is UC-secure. Our results confirm the intuition that these natural zkSNARKs do not need to be augmented to achieve UC-security, and give confidence that their use in larger real-world systems is secure.2024-05-11T19:36:32+00:00https://creativecommons.org/licenses/by/4.0/Alessandro ChiesaGiacomo Fenzihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/723$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning2024-05-11T01:20:12+00:00Harish KarthikeyanAntigoni PolychroniadouOur work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting, where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is One-shot Private Aggregation ($\mathsf{OPA}$), where clients speak only once (or even choose not to speak) per aggregation evaluation. Since every client communicates just once per aggregation, this streamlines the management of dropouts and dynamic participation of clients, contrasting with multi-round state-of-the-art protocols for each aggregation.
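The one-shot, single-server aggregation pattern described above can be illustrated with a toy masking scheme built from a key-homomorphic PRF. This is a hypothetical sketch of the general idea only, not the $\mathsf{OPA}$ protocol: the PRF $F(k,t) = k \cdot H(t) \bmod Q$ and the modulus are illustrative stand-ins.

```python
import hashlib

Q = 2**61 - 1  # toy prime modulus (illustrative parameter)

def f(key: int, label: bytes) -> int:
    # Toy key-homomorphic PRF: F(k, t) = k * H(t) mod Q, so that
    # F(k1, t) + F(k2, t) = F(k1 + k2, t)  (mod Q).
    h = int.from_bytes(hashlib.sha256(label).digest(), "big") % Q
    return (key * h) % Q

def client_msg(key: int, x: int, label: bytes) -> int:
    # One-shot: each client sends a single masked value per round.
    return (x + f(key, label)) % Q

def server_aggregate(agg_key: int, msgs: list[int], label: bytes) -> int:
    # With agg_key = -(sum of client keys) mod Q, the key homomorphism
    # cancels every mask in the sum, revealing only the aggregate.
    return (sum(msgs) + f(agg_key, label)) % Q

# Example round with three clients
keys = [12345, 67890, 24680]
inputs = [10, 20, 30]
label = b"round-1"
msgs = [client_msg(k, x, label) for k, x in zip(keys, inputs)]
agg_key = (-sum(keys)) % Q
assert server_aggregate(agg_key, msgs, label) == sum(inputs)
```

How the server obtains the aggregation key without learning individual keys is exactly where the paper's distributed key-homomorphic PRF does the real work; the sketch simply hands it the summed key.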
We initiate the study of $\mathsf{OPA}$ in several ways. First, we formalize the model and present a security definition. Second, we construct $\mathsf{OPA}$ protocols based on class groups, DCR, and LWR assumptions. Third, we demonstrate $\mathsf{OPA}$ with two applications: private stream aggregation and privacy-preserving federated learning. Specifically, $\mathsf{OPA}$ can be used as a key building block to enable privacy-preserving federated learning and critically, where client speaks once. This is a sharp departure from prior multi-round protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We leverage $\mathsf{OPA}$ to develop a streaming variant named $\mathsf{SOPA}$, serving as the building block for privacy-preserving federated learning. We utilize $\mathsf{SOPA}$ to construct logistic regression classifiers for two datasets.
A new distributed key homomorphic PRF is at the core of our construction of $\mathsf{OPA}$. This key component addresses shortcomings observed in previous works that relied on DDH and LWR in the work of Boneh et al. (CRYPTO, 2013), marking it as an independent contribution to our work. Moreover, we also present new distributed key homomorphic PRFs based on class groups or DCR or the LWR assumption.Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is One-shot Private Aggregation ($\mathsf{OPA}$) where clients speak only once (or even choose not to speak) per aggregation evaluation. Since every client communicates just once per aggregation, this streamlines the management of dropouts and dynamic participation of clients, contrasting with multi-round state-of-the-art protocols for each aggregation.
We initiate the study of $\mathsf{OPA}$ in several ways. First, we formalize the model and present a security definition. Second, we construct $\mathsf{OPA}$ protocols based on class groups, DCR, and LWR assumptions. Third, we demonstrate $\mathsf{OPA}$ with two applications: private stream aggregation and privacy-preserving federated learning. Specifically, $\mathsf{OPA}$ can be used as a key building block to enable privacy-preserving federated learning and critically, where client speaks once. This is a sharp departure from prior multi-round protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We leverage $\mathsf{OPA}$ to develop a streaming variant named $\mathsf{SOPA}$, serving as the building block for privacy-preserving federated learning. We utilize $\mathsf{SOPA}$ to construct logistic regression classifiers for two datasets.
A new distributed key homomorphic PRF is at the core of our construction of $\mathsf{OPA}$. This key component addresses shortcomings observed in previous works that relied on DDH and LWR in the work of Boneh et al. (CRYPTO, 2013), marking it as an independent contribution of our work. Moreover, we also present new distributed key homomorphic PRFs based on class groups, DCR, or the LWR assumption.2024-05-11T01:20:12+00:00https://creativecommons.org/licenses/by/4.0/Harish KarthikeyanAntigoni Polychroniadouhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/722Ultrametric integral cryptanalysis2024-05-10T15:55:12+00:00Tim BeyneMichiel VerbauwhedeA systematic method to analyze \emph{divisibility properties} is proposed.
In integral cryptanalysis, divisibility properties interpolate between bits that sum to zero (divisibility by two) and saturated bits (divisibility by $2^{n - 1}$ for $2^n$ inputs).
From a theoretical point of view, we construct a new cryptanalytic technique that is a non-Archimedean multiplicative analogue of linear cryptanalysis. It lifts integral cryptanalysis to characteristic zero in the sense that, if all quantities are reduced modulo two, then one recovers the algebraic theory of integral cryptanalysis.
The new technique leads to a theory of trails. We develop a tool based on off-the-shelf solvers that automates the analysis of these trails and use it to show that many integral distinguishers on PRESENT and SIMON are stronger than expected.
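The divisibility interpolation described above can be checked by hand on a single S-box. The following toy check (not the paper's tool) sums each output bit of the 4-bit PRESENT S-box over all $2^4$ inputs and computes the 2-adic valuation of each sum:

```python
# PRESENT's 4-bit S-box (a bijection on 16 values)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def v2(n):
    """2-adic valuation: largest k such that 2^k divides n (n > 0)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# sum of each output bit over all 2^4 inputs
sums = [sum((SBOX[x] >> b) & 1 for x in range(16)) for b in range(4)]
# divisibility by 2 = zero XOR-sum; divisibility by 2^{n-1} = 8 = saturated bit
vals = [v2(s) for s in sums]   # → [3, 3, 3, 3]
```

Because the S-box is a bijection, every output bit is balanced: each coordinate sum equals $8 = 2^3$, i.e. it is divisible by $2^{n-1}$ for $2^n = 16$ inputs, the strongest end of the interpolation between zero XOR-sums and saturated bits.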
2024-05-10T15:55:12+00:00https://creativecommons.org/licenses/by/4.0/Tim BeyneMichiel Verbauwhedehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/721Real-world Universal zkSNARKs are non-malleable2024-05-10T12:50:08+00:00Antonio FaonioDario FioreLuigi RussoSimulation extractability is a strong security notion of zkSNARKs that guarantees that an attacker who produces a valid proof must know the corresponding witness, even if the attacker had prior access to proofs generated by other users. Notably, simulation extractability implies that proofs are non-malleable and is of fundamental importance for applications of zkSNARKs in distributed systems. In this work, we study sufficient and necessary conditions for constructing simulation-extractable universal zkSNARKs via the popular design approach based on compiling polynomial interactive oracle proofs (PIOP). Our main result is the first security proof that popular universal zkSNARKs, such as PLONK and Marlin, as deployed in the real world, are simulation-extractable. Our result fills a gap left by previous work (Faonio et al. TCC’23, and Kohlweiss et al. TCC’23), which could only prove the simulation extractability of the “textbook” versions of these schemes and does not capture their optimized variants, with all the popular optimization tricks in place, that are eventually implemented and deployed in software libraries.
2024-05-10T12:50:08+00:00https://creativecommons.org/licenses/by/4.0/Antonio FaonioDario FioreLuigi Russohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/720Multivariate Blind Signatures Revisited2024-05-13T08:34:19+00:00Ward BeullensIn 2017, Petzoldt, Szepieniec, and Mohamed proposed a blind signature scheme based on multivariate cryptography. This construction has been expanded on by several other works. This short paper shows that their construction is susceptible to an efficient polynomial-time attack. The problem is that the authors implicitly assumed that for a random multivariate quadratic map $\mathcal{R}:\mathbb{F}_q^m \rightarrow \mathbb{F}_q^m$ and a collision-resistant hash function $H: \{0,1\}^* \rightarrow \mathbb{F}_q^m$, the function $\mathsf{Com}(m;\mathbf{r}) := H(m) - \mathcal{R}(\mathbf{r})$ is a binding commitment, which is not the case. There is a "folklore" algorithm that can be used to, given any pair of messages, efficiently produce a commitment that opens to both of them.
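The folklore algorithm just mentioned can be sketched concretely. The key observation is that $\mathcal{R}(\mathbf{r}+\mathbf{d}) - \mathcal{R}(\mathbf{r})$ is affine in $\mathbf{r}$ for any fixed $\mathbf{d}$, so finding a double opening reduces to solving one linear system. The sketch below uses assumed toy parameters ($q = 11$, $m = 3$, a homogeneous $\mathcal{R}$, and a truncated SHA-256 as $H$):

```python
import hashlib
import random

q, m = 11, 3                       # assumed toy parameters
rng = random.Random(1)
# random homogeneous quadratic map R: F_q^m -> F_q^m with R_k(x) = x^T A_k x
A = [[[rng.randrange(q) for _ in range(m)] for _ in range(m)] for _ in range(m)]

def R(x):
    return [sum(x[i] * A[k][i][j] * x[j] for i in range(m) for j in range(m)) % q
            for k in range(m)]

def H(msg):  # toy hash into F_q^m
    return [b % q for b in hashlib.sha256(msg).digest()[:m]]

def com(msg, r):  # Com(m; r) = H(m) - R(r)
    return [(a - b) % q for a, b in zip(H(msg), R(r))]

def solve(M, b):  # Gaussian elimination mod prime q; None if singular
    M = [row[:] + [v] for row, v in zip(M, b)]
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r][c]), None)
        if piv is None:
            return None
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, q)
        M[c] = [v * inv % q for v in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(v - f * w) % q for v, w in zip(M[r], M[c])]
    return [M[r][m] for r in range(m)]

def collide(m1, m2):
    """Find r1, r2 with Com(m1; r1) == Com(m2; r2), i.e. R(r1) - R(r2) = H(m1) - H(m2)."""
    t = [(a - b) % q for a, b in zip(H(m1), H(m2))]
    while True:
        d = [rng.randrange(q) for _ in range(m)]
        c = R(d)  # constant term of the affine map r -> R(r + d) - R(r)
        # linear term: row k is ((A_k + A_k^T) d)^T
        L = [[sum((A[k][i][j] + A[k][j][i]) * d[j] for j in range(m)) % q
              for i in range(m)] for k in range(m)]
        r = solve(L, [(tk - ck) % q for tk, ck in zip(t, c)])
        if r is not None:
            return [(ri + di) % q for ri, di in zip(r, d)], r  # r1 = r + d, r2 = r

r1, r2 = collide(b"pay Alice 1", b"pay Mallory 999")
assert com(b"pay Alice 1", r1) == com(b"pay Mallory 999", r2)
```

If a sampled $\mathbf{d}$ yields a singular system, the loop simply resamples; for a random $\mathcal{R}$ this succeeds after a few tries.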
We hope that by pointing out that multivariate quadratic maps are not binding, similar problems can be avoided in the future.2024-05-10T08:43:34+00:00https://creativecommons.org/licenses/by/4.0/Ward Beullenshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/719Client-Efficient Online-Offline Private Information Retrieval2024-05-18T21:21:07+00:00Hoang-Dung NguyenJorge GuajardoThang HoangPrivate Information Retrieval (PIR) permits clients to query entries from a public database hosted on untrusted servers in a privacy-preserving manner. The traditional PIR model suffers from high computation and/or bandwidth cost due to entire database processing for privacy. Recently, Online-Offline PIR (OO-PIR) has been suggested to improve the practicality of PIR, where query-independent materials are precomputed beforehand to accelerate online access.
While state-of-the-art OO-PIR schemes (e.g., S&P’24, CRYPTO’23) successfully reduce the online processing overhead to sublinear, they still impose substantial bandwidth and storage burdens on the client, especially when operating on large databases.
In this paper, we propose Pirex, a new OO-PIR scheme with excellent client performance while maintaining sublinear server processing efficiency. Specifically, Pirex offers clients sublinear processing, minimal inbound bandwidth, and low storage requirements. Our Pirex design is fairly simple yet efficient, where the majority of operations are naturally low-cost and streamlined (e.g., XOR, PRF, modular arithmetic).
We have fully implemented Pirex and evaluated its real-world performance using commodity hardware. Our experimental results demonstrated that Pirex outperforms existing OO-PIR schemes by at least two orders of magnitude. Concretely, with a 1 TB database, Pirex only takes 0.8s to query a 256-KB entry, compared with 30-220s by the state-of-the-art.
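The offline/online split underlying OO-PIR can be illustrated with a minimal hint-based sketch in the spirit of Corrigan-Gibbs and Kogan's offline/online PIR; this is not Pirex itself, all parameters are assumed toy values, and privacy subtleties such as hint refresh are omitted:

```python
import random

rng = random.Random(0)
n = 64
db = [rng.randrange(2) for _ in range(n)]   # public database of n bits

# offline phase: client stores random index sets together with their parities (hints)
hints = []
for _ in range(64):
    s = frozenset(rng.sample(range(n), 8))
    hints.append((s, sum(db[i] for i in s) % 2))   # parity fetched during preprocessing

def online_query(i):
    """Read db[i] online: send a punctured set; one XOR against the stored hint recovers the bit."""
    for s, parity in hints:
        if i in s:
            punctured = s - {i}                         # the set with the target index removed
            answer = sum(db[j] for j in punctured) % 2  # server only touches a small set
            return parity ^ answer
    raise LookupError("no hint covers this index; rerun the offline phase")
```

The online work per query is proportional to the (small) set size rather than the whole database, which is the sublinear-online property the abstract refers to.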
2024-05-10T02:53:56+00:00https://creativecommons.org/licenses/by/4.0/Hoang-Dung NguyenJorge GuajardoThang Hoanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/718PAC-Private Algorithms2024-05-10T01:25:08+00:00Mayuri SridharHanshen XiaoSrinivas DevadasProvable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently-proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise.
Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We also propose new techniques in order to canonicalize algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
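The benefit of anisotropic over isotropic noise can be seen in a hedged toy simulation (illustrative only, with assumed toy data and a mean mechanism; not the paper's algorithm): estimate the mechanism's per-coordinate output spread by re-running it on resampled data, then compare noise calibrated per coordinate against noise calibrated to the worst coordinate everywhere.

```python
import random
import statistics

rng = random.Random(0)
# toy 2-D dataset whose coordinates have very different spread
data = [(rng.gauss(0, 5.0), rng.gauss(0, 0.2)) for _ in range(200)]

def mechanism(d):
    """Black-box mechanism under study: the coordinate-wise mean."""
    return tuple(statistics.fmean(c) for c in zip(*d))

# simulate the mechanism on resampled datasets to estimate output variability
outputs = [mechanism(rng.sample(data, 100)) for _ in range(300)]
spreads = [statistics.pstdev(c) for c in zip(*outputs)]

# total added noise power: per-coordinate calibration vs. worst coordinate everywhere
anisotropic_power = sum(s * s for s in spreads)
isotropic_power = len(spreads) * max(spreads) ** 2
```

When the coordinates have unequal spread, the anisotropic total is strictly smaller, so the same privacy level costs less accuracy.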
2024-05-10T01:25:08+00:00https://creativecommons.org/licenses/by/4.0/Mayuri SridharHanshen XiaoSrinivas Devadashttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/717An Improved Threshold Homomorphic Cryptosystem Based on Class Groups2024-05-09T17:02:32+00:00Lennart BraunGuilhem CastagnosIvan DamgårdFabien LaguillaumieKelsey MelissarisClaudio OrlandiIda TuckerWe present distributed key generation and decryption protocols for an additively homomorphic cryptosystem based on class groups, improving on a similar system proposed by Braun, Damgård, and Orlandi at CRYPTO '23. Our key generation is similarly constant round but achieves lower communication complexity than the previous work. This improvement is in part the result of relaxing the reconstruction property required of the underlying integer verifiable secret sharing scheme. This eliminates the reliance on potentially costly proofs of knowledge in unknown order groups. We present a new method to batch zero-knowledge proofs in unknown order groups which strengthens these improvements. We also present a protocol which is proven secure against adaptive adversaries in the single inconsistent player (SIP) model. Our protocols are secure in the universal composability (UC) framework and provide guaranteed output delivery.
We demonstrate the relative efficiency of our techniques by presenting the running times and communication costs associated with our implementation of the statically secure protocol and provide a direct comparison with alternate state-of-the-art constructions.2024-05-09T17:02:32+00:00https://creativecommons.org/licenses/by/4.0/Lennart BraunGuilhem CastagnosIvan DamgårdFabien LaguillaumieKelsey MelissarisClaudio OrlandiIda Tuckerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/716Unclonable Secret Sharing2024-05-09T14:55:40+00:00Prabhanjan AnanthVipul GoyalJiahui LiuQipeng LiuUnclonable cryptography utilizes the principles of quantum mechanics to address cryptographic tasks that are impossible classically.
We introduce a novel unclonable primitive in the context of secret sharing, called unclonable secret sharing (USS). In a USS scheme, there are $n$ shareholders, each holding a share of a classical secret represented as a quantum state. They can recover the secret once all parties (or at least $t$ parties) come together with their shares. Importantly, it should be infeasible for shareholders to copy their own shares and send the copies to two non-communicating parties, enabling both of them to recover the secret.
Our work initiates a formal investigation into the realm of unclonable secret sharing, shedding light on its implications, constructions, and inherent limitations.
**Connections: We explore the connections between USS and other quantum cryptographic primitives such as unclonable encryption and position verification, showing the difficulty of achieving USS in different scenarios.
**Limited Entanglement: In the case where the adversarial shareholders share no entanglement or only limited entanglement, we demonstrate information-theoretic constructions for USS.
**Large Entanglement: If we allow the adversarial shareholders to have unbounded entanglement resources (and unbounded computation), we prove that unclonable secret sharing is impossible. On the other hand, in the quantum random oracle model where the adversary can only make a bounded polynomial number of queries, we show a construction secure even with unbounded entanglement.
Furthermore, even when these adversaries possess only a polynomial amount of entanglement resources, we establish that any unclonable secret sharing scheme with a reconstruction function implementable using Cliffords and logarithmically many T-gates is also unattainable.
2024-05-09T14:55:40+00:00https://creativecommons.org/licenses/by/4.0/Prabhanjan AnanthVipul GoyalJiahui LiuQipeng Liuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/715A New Cryptographic Algorithm2024-05-09T12:03:37+00:00Ali Mahdoum“The advent of quantum computing technology will compromise many of the current cryptographic algorithms, especially public-key cryptography, which is widely used to protect digital information. Most algorithms on which we depend are used worldwide in components of many different communications, processing, and storage systems. Once access to practical quantum computers becomes available, all public-key algorithms and associated protocols will be vulnerable to criminals, competitors, and other adversaries. It is critical to begin planning for the replacement of hardware, software, and services that use public-key algorithms now so that information is protected from future attacks.” [1].
For this purpose, we have developed a new algorithm that helps to address the aforementioned problem. Instead of using a classical scheme of encoding/decoding methods (keys, prime numbers, etc.), our algorithm is based on a combination of functions. Because the cardinality of the set of functions is infinite, it would be impossible for a third party (e.g. a hacker) to decode the secret information transmitted by the sender (Bob) to the receiver (Alice).
2024-05-09T12:03:37+00:00https://creativecommons.org/licenses/by/4.0/Ali Mahdoumhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/714Learning with Quantization, Polar Quantizer, and Secure Source Coding2024-05-09T07:58:25+00:00Shanxiang LyuLing LiuCong LingThis paper presents a generalization of the Learning With Rounding (LWR) problem, initially introduced by Banerjee, Peikert, and Rosen, by applying the perspective of vector quantization. In LWR, noise is induced by rounding each coordinate to the nearest multiple of a fraction, a process inherently tied to scalar quantization. By considering a new variant termed Learning With Quantization (LWQ), we explore large-dimensional fast-decodable lattices with superior quantization properties, aiming to enhance the compression performance over conventional scalar quantization. We identify polar lattices as exemplary structures, effectively transforming LWQ into a problem akin to Learning With Errors (LWE), where the distribution of quantization noise is statistically close to discrete Gaussian. Furthermore, we develop a novel ``quancryption'' scheme for secure source coding. Notably, the scheme achieves near-optimal rate-distortion ratios for bounded rational signal sources, and can be implemented efficiently with quasi-linear time complexity.
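The LWR rounding described above, i.e. deterministic scalar quantization of $\langle \mathbf{a}, \mathbf{s} \rangle \bmod q$ to the nearest multiple of $q/p$, can be sketched as follows (all parameters are assumed toy values, not from the paper):

```python
import random

rng = random.Random(42)
q, p, n = 3329, 128, 8                     # assumed toy modulus, rounding modulus, dimension
s = [rng.randrange(q) for _ in range(n)]   # secret vector

def lwr_sample():
    """One LWR sample: b = round((p/q) * <a, s>) mod p; the only noise is the rounding error."""
    a = [rng.randrange(q) for _ in range(n)]
    inner = sum(x * y for x, y in zip(a, s)) % q
    b = round(p * inner / q) % p           # the scalar-quantization step
    return a, inner, b

a, inner, b = lwr_sample()
# measured modulo q, rescaling b back introduces an error of at most q / (2p)
err = min(abs(b * q / p - inner),
          abs(b * q / p + q - inner),
          abs(b * q / p - q - inner))
```

LWQ replaces this coordinate-wise rounding with quantization to a nearby point of a high-dimensional lattice, which is what gives the better rate-distortion behaviour claimed above.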
Python code of the polar-lattice quantizer is available at https://github.com/shx-lyu/PolarQuantizer.2024-05-09T07:58:25+00:00https://creativecommons.org/licenses/by/4.0/Shanxiang LyuLing LiuCong Linghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/713Analyzing Pump and jump BKZ algorithm using dynamical systems2024-05-17T04:39:41+00:00Leizhang WangThe analysis of the reduction effort of the lattice reduction algorithm is important in estimating the hardness of lattice-based cryptography schemes. Recently, many lattice challenge records, such as the TU Darmstadt LWE and SVP Challenges, have been cracked using the Pnj-BKZ algorithm, the default lattice reduction algorithm in G6K.
However, the previous estimations of the Pnj-BKZ algorithm are simulator algorithms rather than theoretical upper bound analyses. In this work, we present the first dynamic analysis of the Pnj-BKZ algorithm. More precisely, our analysis shows that, letting $L$ be the lattice spanned by $(\mathbf{a}_i)_{i\leq d}$, the shortest vector $\mathbf{b}_1$ output by running $\Omega\left(\frac{2Jd^2}{\beta(\beta-J)}\left(\ln d + \ln\ln \max_{i}\frac{\left\| \mathbf{a}_i^{*} \right\|}{(\mathrm{det} L)^{1/d}}\right)\right)$ tours of pnj-BKZ$(\beta,J)$ satisfies $\left\| \mathbf{b}_1 \right\| \le \gamma_{\beta}^{\frac{d-1}{2(\beta-J)}+2} \cdot \left(\mathrm{det} L\right)^{\frac{1}{d}}$.
2024-05-09T03:39:46+00:00https://creativecommons.org/licenses/by/4.0/Leizhang Wanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/712Quantum NV Sieve on Grover for Solving Shortest Vector Problem2024-05-15T05:03:48+00:00Hyunji KimKyungbae JangHyunjun KimAnubhab BaksiSumanta ChakrabortyHwajeong SeoQuantum computers can efficiently model and solve several challenging problems for classical computers, raising concerns about potential security reductions in cryptography. NIST is already considering potential quantum attacks in the development of post-quantum cryptography by estimating the quantum resources required for such quantum attacks. In this paper, we present quantum circuits for the NV sieve algorithm to solve the Shortest Vector Problem (SVP), which serves as the security foundation for lattice-based cryptography, achieving a quantum speedup of the square root. Although there has been extensive research on the application of quantum algorithms for lattice-based problems at the theoretical level, specific quantum circuit implementations for them have not been presented yet.
Notably, this work demonstrates that the required quantum complexity for the SVP in the lattice of rank 70 and dimension 70 is $2^{43}$ (a product of the total gate count and the total depth) with our optimized quantum implementation of the NV sieve algorithm.
This complexity is significantly lower than the NIST post-quantum security standard, where level 1 is $2^{157}$, corresponding to the complexity of Grover's key search for AES-128.
2024-05-09T03:06:35+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Hyunji KimKyungbae JangHyunjun KimAnubhab BaksiSumanta ChakrabortyHwajeong Seohttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2024/711Non-Transferable Anonymous Tokens by Secret Binding2024-05-10T06:51:17+00:00F. Betül DurakLaurane MarcoAbdullah TalayhanSerge VaudenayNon-transferability (NT) is a security notion which ensures that credentials are only used by their intended owners. Despite its importance, it has not been formally treated in the context of anonymous tokens (AT) which are lightweight anonymous credentials. In this work, we consider a client who "buys" access tokens which are forbidden to be transferred although anonymously redeemed. We extensively study the trade-offs between privacy (obtained through anonymity) and security in AT through the notion of non-transferability. We formalise new security notions, design a suite of protocols with various flavors of NT, prove their security, and implement the protocols to assess their efficiency. Finally, we study the existing anonymous credentials which offer NT, and show that they cannot automatically be used as AT without security and complexity implications.
We formalise new security notions, design a suite of protocols with various flavors of NT, prove their security, and implement the protocols to assess their efficiency. Finally, we study the existing anonymous credentials which offer NT, and show that they cannot automatically be used as AT without security and complexity implications.2024-05-08T17:13:54+00:00https://creativecommons.org/licenses/by/4.0/F. Betül DurakLaurane MarcoAbdullah TalayhanSerge Vaudenayhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/710BUFFing FALCON without Increasing the Signature Size2024-05-08T13:02:29+00:00Samed DüzlüRune FiedlerMarc FischlinThis work shows how FALCON can achieve the Beyond UnForgeability Features (BUFF) introduced by Cremers et al. (S&P'21) more efficiently than by applying the generic BUFF transform. Specifically, we show that applying a transform of Pornin and Stern (ACNS'05), dubbed PS-3 transform, already suffices for FALCON to achieve BUFF security. For FALCON, this merely means to include the public key in the hashing step in signature generation and verification, instead of hashing only the nonce and the message; the other signature computation steps and the signature output remain untouched. In comparison to the BUFF transform, which appends a hash value to the final signature, the PS-3 transform therefore achieves shorter signature sizes, without incurring additional computations.This work shows how FALCON can achieve the Beyond UnForgeability Features (BUFF) introduced by Cremers et al. (S&P'21) more efficiently than by applying the generic BUFF transform. Specifically, we show that applying a transform of Pornin and Stern (ACNS'05), dubbed PS-3 transform, already suffices for FALCON to achieve BUFF security. 
For FALCON, this merely means to include the public key in the hashing step in signature generation and verification, instead of hashing only the nonce and the message; the other signature computation steps and the signature output remain untouched. In comparison to the BUFF transform, which appends a hash value to the final signature, the PS-3 transform therefore achieves shorter signature sizes, without incurring additional computations.2024-05-08T13:02:29+00:00https://creativecommons.org/licenses/by-nc/4.0/Samed DüzlüRune FiedlerMarc Fischlinhttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2024/709Masked Computation the Floor Function and its Application to the FALCON Signature2024-05-08T06:36:24+00:00Justine PailletPierre-Augustin BerthetCédric TavernierFALCON is candidate for standardization of the new Post Quantum Cryptography (PQC) primitives by the National Institute of Standards and Technology (NIST). However, it remains a challenge to define efficient countermeasures against side-channel attacks (SCA) for this algorithm. FALCON is a lattice-based signature that relies on rational numbers which is unusual in the cryptography field. While recent work proposed a solution to mask the addition and the multiplication, some roadblocks remain, most noticeably how to protect the floor function. We propose in this work to complete the existing first trials of hardening FALCON against SCA. We perform the mathematical proofs of our methods as well as formal security proof in the probing model using the Non-Interference concepts.FALCON is candidate for standardization of the new Post Quantum Cryptography (PQC) primitives by the National Institute of Standards and Technology (NIST). However, it remains a challenge to define efficient countermeasures against side-channel attacks (SCA) for this algorithm. FALCON is a lattice-based signature that relies on rational numbers which is unusual in the cryptography field. 
While recent work proposed a solution to mask the addition and the multiplication, some roadblocks remain, most noticeably how to protect the floor function. We propose in this work to complete the existing first trials of hardening FALCON against SCA. We perform the mathematical proofs of our methods as well as formal security proof in the probing model using the Non-Interference concepts.2024-05-08T06:36:24+00:00https://creativecommons.org/licenses/by/4.0/Justine PailletPierre-Augustin BerthetCédric Tavernierhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/708Automated Generation of Fault-Resistant Circuits2024-05-07T20:24:42+00:00Nicolai MüllerAmir MoradiFault Injection (FI) attacks, which involve intentionally introducing faults into a system to cause it to behave in an unintended manner, are widely recognized and pose a significant threat to the security of cryptographic primitives implemented in hardware, making fault tolerance an increasingly critical concern. However, protecting cryptographic hardware primitives securely and efficiently, even with well-established and documented methods such as redundant computation, can be a time-consuming, error-prone, and expertise-demanding task.
In this research, we present a comprehensive and fully-automated software solution for the Automated Generation of Fault-Resistant Circuits (AGEFA). Our application employs a generic and extensively researched methodology for the secure integration of countermeasures based on Error-Correcting Codes (ECCs) into cryptographic hardware circuits. Our software tool allows designers without hardware security expertise to develop fault-tolerant hardware circuits with pre-defined correction capabilities under a comprehensive fault adversary model. Moreover, our tool applies to masked designs without violating the masking security requirements, in particular to designs generated by the tool AGEMA. We evaluate the effectiveness of our approach through experiments on various block ciphers and demonstrate its ability to produce fault-tolerant circuits. Additionally, we assess the security of examples generated by AGEFA against Side-Channel Analysis (SCA) and FI using state-of-the-art leakage and fault evaluation tools.
2024-05-07T20:24:42+00:00https://creativecommons.org/licenses/by/4.0/Nicolai MüllerAmir Moradihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/707Towards a Polynomial Instruction Based Compiler for Fully Homomorphic Encryption Accelerators2024-05-07T19:28:47+00:00Sejun KimWen WangDuhyeong KimAdish VartakMichael SteinerRosario CammarotaFully Homomorphic Encryption (FHE) is a transformative technology that enables computations on encrypted data without requiring decryption, promising enhanced data privacy. However, its adoption has been limited due to significant performance overheads. Recent advances include the proposal of domain-specific, highly-parallel hardware accelerators designed to overcome these limitations.
This paper introduces PICA, a comprehensive compiler framework designed to simplify the programming of these specialized FHE accelerators and integration with existing FHE libraries. PICA leverages a novel polynomial Instruction Set Architecture (p-ISA), which abstracts polynomial rings and their arithmetic operations, serving as a fundamental data type for the creation of compact, efficient code embracing high-level operations on polynomial rings, referred to as kernels, e.g., encompassing FHE primitives like arithmetic and ciphertext management. We detail a kernel generation framework that translates high-level FHE operations into pseudo-code using p-ISA, and a subsequent tracing framework that incorporates p-ISA functionalities and kernels into established FHE libraries. Additionally, we introduce a mapper to coordinate multiple FHE kernels for optimal application performance on targeted hardware accelerators. Our evaluations demonstrate PICA's efficacy in the creation of compact and efficient code when compared with an x64 architecture, particularly in managing complex FHE operations such as relinearization, where we observe a 25.24x instruction count reduction even when a large batch size (8192) is taken into account.
2024-05-07T19:28:47+00:00https://creativecommons.org/publicdomain/zero/1.0/Sejun KimWen WangDuhyeong KimAdish VartakMichael SteinerRosario Cammarotahttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2024/706Linicrypt in the Ideal Cipher Model2024-05-07T18:00:34+00:00Zahra JavarBruce M. KapronWe extend the Linicrypt framework for characterizing hash function security as proposed by McQuoid, Swope, and Rosulek (TCC 2018) to support constructions in the ideal cipher model.
In this setting, we give a characterization of collision- and second-preimage-resistance in terms of a linear-algebraic condition on Linicrypt programs, and present an efficient algorithm for determining whether a program satisfies the condition. As an application, we consider the case of the block cipher-based hash functions proposed by Preneel, Govaerts, and Vandewalle (Crypto 1993), and show that the semantic analysis of PGV given by Black et al. (J. Crypto. 2010) can be captured as a special case of our characterization. In addition, we model hash functions constructed through the Merkle-Damgård transformation within the Linicrypt framework. Finally, we apply this model to an analysis of how various attacks on the underlying compression functions can compromise the collision resistance of the resulting hash function.
2024-05-07T18:00:34+00:00https://creativecommons.org/licenses/by/4.0/Zahra JavarBruce M. Kapronhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/705Large-Scale MPC: Scaling Private Iris Code Uniqueness Checks to Millions of Users2024-05-07T16:38:57+00:00Remco BloemenDaniel KalesPhilipp SipplRoman WalchIn this work we tackle privacy concerns in biometric verification systems that typically require server-side processing of sensitive data (e.g., fingerprints and Iris Codes). Concretely, we design a solution that allows us to query whether a given Iris Code is similar to one contained in a given database, while all queries and datasets are being protected using secure multiparty computation (MPC). Addressing the substantial performance demands of operational systems like World ID and aid distributions by the Red Cross, we propose new protocols to improve performance by more than three orders of magnitude compared to the recent state-of-the-art system Janus (S&P 24).
Our final protocol can achieve a throughput of over a million Iris Code comparisons per second on a single CPU core, while protecting the privacy of both the query and database Iris Codes. We additionally investigate GPU acceleration for some building blocks of our protocol, which results in further speedups of over 38x compared to the respective multi-threaded CPU implementation.
2024-05-07T16:38:57+00:00https://creativecommons.org/licenses/by/4.0/Remco BloemenDaniel KalesPhilipp SipplRoman Walchhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/704Fully Automated Selfish Mining Analysis in Efficient Proof Systems Blockchains2024-05-07T09:58:20+00:00Krishnendu ChatterjeeAmirali Ebrahim-ZadehMehrdad KarrabiKrzysztof PietrzakMichelle YeoDjordje ZikelicWe study selfish mining attacks in longest-chain blockchains like Bitcoin, but where the proof of work is replaced with efficient proof systems -- like proofs of stake or proofs of space -- and consider the problem of computing an optimal selfish mining attack which maximizes expected relative revenue of the adversary, thus minimizing the chain quality. To this end, we propose a novel selfish mining attack that aims to maximize this objective and formally model the attack as a Markov decision process (MDP). We then present a formal analysis procedure which computes an $\epsilon$-tight lower bound on the optimal expected relative revenue in the MDP and a strategy that achieves this $\epsilon$-tight lower bound, where $\epsilon>0$ may be any specified precision. Our analysis is fully automated and provides formal guarantees on the correctness. We evaluate our selfish mining attack and observe that it achieves superior expected relative revenue compared to two considered baselines.
Concurrent work [Sarenche FC'24] performs an automated analysis of selfish mining in predictable longest-chain blockchains based on efficient proof systems. Predictable means that the randomness for the challenges is fixed for many blocks (as used, e.g., in Ouroboros), while we consider unpredictable (Bitcoin-like) chains where the challenge is derived from the previous block.
2024-05-07T09:58:20+00:00https://creativecommons.org/licenses/by/4.0/Krishnendu ChatterjeeAmirali Ebrahim-ZadehMehrdad KarrabiKrzysztof PietrzakMichelle YeoDjordje Zikelichttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/703An Efficient and Extensible Zero-knowledge Proof Framework for Neural Networks2024-05-07T09:11:40+00:00Tao LuHaoyu WangWenjie QuZonghui WangJinye HeTianyang TaoWenzhi ChenJiaheng ZhangIn recent years, cloud vendors have started to supply paid services for data analysis by providing interfaces of their well-trained neural network models. However, customers lack tools to verify whether outcomes supplied by cloud vendors are correct inferences from particular models, in the face of lazy or malicious vendors. The cryptographic primitive called zero-knowledge proof (ZKP) addresses this problem. It enables the outcomes to be verifiable without leaking information about the models. Unfortunately, existing ZKP schemes for neural networks have high computational overheads, especially for the non-linear layers in neural networks.
In this paper, we propose an efficient and extensible ZKP framework for neural networks. Our work improves the performance of the proofs for non-linear layers. Compared to previous works relying on the technology of bit decomposition, we convert complex non-linear relations into range and exponent relations, which significantly reduces the number of constraints required to prove non-linear layers. Moreover, we adopt a modular design to make our framework compatible with more neural networks. Specifically, we propose two enhanced range and lookup proofs as basic blocks. They are efficient in proving the satisfaction of range and exponent relations. Then, we constrain the correct calculation of primitive non-linear operations using a small number of range and exponent relations. Finally, we build our ZKP framework from the primitive operations to the entire neural networks, offering the flexibility for expansion to various neural networks.
We implement our ZKPs for convolutional and transformer neural networks. The evaluation results show that our work achieves over $168.6\times$ (up to $477.2\times$) speedup for separated non-linear layers and $41.4\times$ speedup for the entire ResNet-101 convolutional neural network, when compared with the state-of-the-art work, Mystique. In addition, our work can prove GPT-2, a transformer neural network with $117$ million parameters, in $287.1$ seconds, achieving $35.7\times$ speedup over ZKML, which is a state-of-the-art work supporting transformer neural networks.
2024-05-07T09:11:40+00:00https://creativecommons.org/licenses/by/4.0/Tao LuHaoyu WangWenjie QuZonghui WangJinye HeTianyang TaoWenzhi ChenJiaheng Zhanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/702Security Analysis of Signal's PQXDH Handshake2024-05-07T08:42:31+00:00Rune FiedlerFelix GüntherSignal recently deployed a new handshake protocol named PQXDH to protect against "harvest-now-decrypt-later" attacks of a future quantum computer. To this end, PQXDH adds a post-quantum KEM to the Diffie-Hellman combinations of the prior X3DH handshake.
In this work, we give a reductionist security analysis of Signal's PQXDH handshake in a game-based security model that captures the targeted "maximum-exposure" security, allowing fine-grained compromise of users' long-term, semi-static, and ephemeral key material. We augment prior such models to capture not only the added KEM component but also the signing of public keys, which prior analyses did not capture but which adds an additional flavor of post-quantum security in PQXDH. We then establish a fully parameterized, concrete security bound for the session key security of PQXDH, in particular shedding light on a KEM binding property we require for PQXDH's security, and how to avoid it.
Our discussion of KEM binding complements the tool-based analysis of PQXDH by Bhargavan, Jacomme, Kiefer, and Schmidt, which pointed out a potential re-encapsulation attack if the KEM shared secret does not bind the public key. We show that both Kyber (used in PQXDH) and its current NIST draft standard ML-KEM (foreseen to replace Kyber once standardized) satisfy a novel binding notion we introduce and rely on for our PQXDH analysis, which may be of independent interest.
2024-05-07T08:42:31+00:00https://creativecommons.org/licenses/by/4.0/Rune FiedlerFelix Güntherhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/701Quantum Unpredictability2024-05-07T07:00:23+00:00Tomoyuki MorimaeShogo YamadaTakashi YamakawaUnpredictable functions (UPFs) play essential roles in classical cryptography, including message authentication codes (MACs) and digital signatures. In this paper, we introduce a quantum analog of UPFs, which we call unpredictable state generators (UPSGs). UPSGs are implied by pseudorandom function-like state generators (PRFSs), which are a quantum analog of pseudorandom functions (PRFs), and therefore UPSGs could exist even if one-way functions do not exist, similar to other recently introduced primitives like pseudorandom state generators (PRSGs), one-way state generators (OWSGs), and EFIs. In classical cryptography, UPFs are equivalent to PRFs, but in the quantum case, the equivalence is not clear, and UPSGs could be weaker than PRFSs. Despite this, we demonstrate that all known applications of PRFSs are also achievable with UPSGs. They include IND-CPA-secure secret-key encryption and EUF-CMA-secure MACs with unclonable tags. Our findings suggest that, for many applications, quantum unpredictability, rather than quantum pseudorandomness, is sufficient.
2024-05-07T07:00:23+00:00https://creativecommons.org/licenses/by/4.0/Tomoyuki MorimaeShogo YamadaTakashi Yamakawahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/700Sublinear Distributed Product Checks on Replicated Secret-Shared Data over $\mathbb{Z}_{2^k}$ without Ring Extensions2024-05-07T02:08:22+00:00Yun LiDaniel EscuderoYufei DuanZhicong HuangCheng HongChao ZhangYifan SongMultiple works have designed or used maliciously secure honest majority MPC protocols over $\mathbb{Z}_{2^k}$ using replicated secret sharing (e.g. Koti et al. USENIX’21, and the references therein). A recent trend in the design of such MPC protocols is to first execute a semi-honest protocol, and then use a check that verifies the correctness of the computation requiring only a sublinear amount of communication in terms of the circuit size.
The so-called Galois ring extensions are needed in order to execute such checks over $\mathbb{Z}_{2^k}$, but these rings incur incredibly high computation overheads, which completely undermine any potential benefits the ring $\mathbb{Z}_{2^k}$ had to begin with.
In this work we revisit the task of designing sublinear distributed product checks on replicated secret-shared data over $\mathbb{Z}_{2^k}$ among three parties with an honest majority. We present a novel technique for verifying the correctness of a set of multiplication (in fact, inner product) triples, involving a sublinear cost in terms of the number of multiplications. Most importantly, unlike previous works, our tools entirely avoid Galois ring extensions, and only require computation over rings of the form $\mathbb{Z}_{2^l}$. In terms of communication, our checks are 3∼5× lighter than existing checks using ring extensions, which is already quite remarkable. However, our most noticeable improvement is in terms of computation: avoiding extensions allows our checks to be 17.7∼44.2× better than previous approaches, for many parameter regimes of interest. Our experimental results show that checking a 10 million gate circuit with the 3PC protocol from (Boyle et al., CCS’19) takes about two minutes, while our approach takes only 2.82 seconds.
Finally, our techniques are not restricted to the three-party case, and we generalize them to replicated secret-sharing with an arbitrary number of parties n. Even though the share size in this scheme grows exponentially with n, prior works have used it for n = 4 or n = 5—or even general n for feasibility results—and our distributed checks also represent improvements in these contexts.Multiple works have designed or used maliciously secure honest majority MPC protocols over $\mathbb{Z}_{2^k}$ using replicated secret sharing (e.g. Koti et al. USENIX’21, and the references therein). A recent trend in the design of such MPC protocols is to first execute a semi-honest protocol, and then use a check that verifies the correctness of the computation requiring only sublinear amount of communication in terms of the circuit size. The so-called Galois ring extensions are needed in order to execute such checks over $\mathbb{Z}_{2^k}$, but these rings incur incredibly high computation overheads, which completely undermine any potential benefits the ring $\mathbb{Z}_{2^k}$ had to begin with.
In this work we revisit the task of designing sublinear distributed product checks on replicated secret-shared data over $\mathbb{Z}_{2^k}$ among three parties with an honest majority. We present a novel technique for verifying the correctness of a set of multiplication (in fact, inner product) triples, involving a sublinear cost in terms of the amount of multiplications. Most importantly, unlike previous works, our tools entirely avoid Galois ring extensions, and only require computation over rings of the form $\mathbb{Z}_{2^l}$ . In terms of communication, our checks are 3∼5× lighter than existing checks using ring extensions, which is already quite remarkable. However, our most noticeable improvement is in terms of computation: avoiding extensions allows our checks to be 17.7∼44.2× better than previous approaches, for many parameter regimes of interest. Our experimental results show that checking a 10 million gate circuit with the 3PC protocol from (Boyle et al., CCS’19) takes about two minutes, while our approach takes only 2.82 seconds.
Finally, our techniques are not restricted to the three-party case, and we generalize them to replicated secret-sharing with an arbitrary number of parties n. Even though the share size in this scheme grows exponentially with n, prior works have used it for n = 4 or n = 5—or even general n for feasibility results—and our distributed checks also represent improvements in these contexts.2024-05-07T02:08:22+00:00https://creativecommons.org/licenses/by/4.0/Yun LiDaniel EscuderoYufei DuanZhicong HuangCheng HongChao ZhangYifan Songhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/699An Efficient All-to-All GCD Algorithm for Low Entropy RSA Key Factorization2024-05-06T21:17:16+00:00Elijah PelofskeRSA is an incredibly successful and useful asymmetric encryption algorithm. One of the types of implementation flaws in RSA is low entropy of the key generation, specifically the prime number creation stage. This can occur due to flawed usage of random prime number generator libraries, or on computers where there is a lack of a source of external entropy. These implementation flaws result in some RSA keys sharing prime factors, which means that the full factorization of the public modulus can be recovered incredibly efficiently by performing a computation GCD between the two public key moduli that share the prime factor. However, since one does not know which of the composite moduli share a prime factor a-priori, to determine if any such shared prime factors exist, an all-to-all GCD attack (also known as a batch GCD attack, or a bulk GCD attack) can be performed on the available public keys so as to recover any shared prime factors. This study describes a novel all-to-all batch GCD algorithm, which will be referred to as the binary tree batch GCD algorithm, that is more efficient than the current best batch GCD algorithm (the remainder tree batch GCD algorithm). 
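The baseline that this abstract improves upon, the product-tree-plus-remainder-tree batch GCD, can be sketched in a few lines. This is a minimal sketch of the standard method, not the paper's binary tree variant, and the function name is illustrative:

```python
from math import gcd, prod

def batch_gcd(moduli):
    # Product tree: leaves are the moduli; each parent holds the product
    # of its two children, so the root is the product P of all moduli.
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        level = tree[-1]
        tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
    # Remainder tree: push P back down, reducing modulo the square of each node.
    rems = tree.pop()
    while tree:
        level = tree.pop()
        rems = [rems[i // 2] % (x * x) for i, x in enumerate(level)]
    # At the leaves, rems[i] = P mod N_i^2, so rems[i] // N_i = (P / N_i) mod N_i,
    # and its gcd with N_i exposes any prime shared with another modulus.
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]
```

For example, the moduli 143 = 11·13 and 187 = 11·17 share the prime 11, so the check recovers 11 for both while returning 1 for an unrelated modulus.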
A comparison against the best existing batch GCD method (which is a product tree followed by a remainder tree computation) is given using a dataset of random RSA moduli constructed such that some of the moduli share prime factors. The proposed binary tree batch GCD algorithm has better runtime than the existing remainder tree batch GCD algorithm, although asymptotically the two have nearly identical scaling, and its complexity depends on how many shared prime factors exist in the set of RSA keys. In practice, the implementation of the proposed binary tree batch GCD algorithm has a roughly 6x speedup compared to the standard remainder tree batch GCD approach.2024-05-06T21:17:16+00:00https://creativecommons.org/licenses/by/4.0/Elijah Pelofskehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/698Private Computations on Streaming Data2024-05-06T18:58:41+00:00Vladimir BravermanKevin GarbeEli JaffeRafail OstrovskyWe present a framework for privacy-preserving streaming algorithms, which combine the memory-efficiency of streaming algorithms with strong privacy guarantees. These algorithms enable some number of servers to compute aggregate statistics efficiently on large quantities of user data without learning the user's inputs. While there exists limited prior work that fits within our model, our work is the first to formally define a general framework, interpret existing methods within this general framework, and develop new tools broadly applicable to this model. To highlight our model, we designed and implemented a new privacy-preserving streaming algorithm to compute heavy hitters, which are the most frequent elements in a data stream. We provide a performance comparison between our system and Poplar, the only other private statistics algorithm which supports heavy hitters, benchmarking both systems on the same hardware platform.
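As background for the heavy-hitters task: outside the privacy-preserving setting, small-memory heavy-hitter summaries are classical. A standard plaintext example (purely illustrative; the paper's private protocol is far more involved) is the Misra-Gries summary, which finds every element occurring more than a 1/k fraction of the time using only k-1 counters:

```python
def misra_gries(stream, k):
    # Keep at most k-1 candidate counters; any element occurring more than
    # len(stream)/k times is guaranteed to survive as a candidate.
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # No room: decrement every counter, dropping those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

A second pass over the stream (or exact counting of the few surviving candidates) removes false positives.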
Of note, Poplar requires linear space compared to our poly-logarithmic space, meaning our system is the first to compute heavy hitters within the privacy-preserving streaming model. A small memory footprint allows our algorithm (among other benefits) to run efficiently on very large input sizes without running out of memory or crashing.2024-05-06T18:58:41+00:00https://creativecommons.org/licenses/by/4.0/Vladimir BravermanKevin GarbeEli JaffeRafail Ostrovskyhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/697LINE: Cryptosystem based on linear equations for logarithmic signatures2024-05-06T16:40:41+00:00Gennady KhalimovYevgen KotukhMaksym KolisnykSvitlana KhalimovaOleksandr SievierinovThe discourse herein pertains to a directional encryption cryptosystem predicated upon logarithmic signatures interconnected via a system of linear equations (we call it LINE). A logarithmic signature serves as a foundational cryptographic primitive within the algorithm, characterized by distinct cryptographic attributes including nonlinearity, noncommutativity, unidirectionality, and factorizability by key. The confidentiality of the cryptosystem is contingent upon the presence of an incomplete system of equations and the substantial ambiguity inherent in the matrix transformations integral to the algorithm. Classical cryptanalysis endeavors are constrained by the potency of the secret matrix transformation and the indeterminacy surrounding solutions to the system of linear equations featuring logarithmic signatures. Such cryptanalysis methodologies, being exhaustive in nature, invariably exhibit exponential complexity. The absence of inherent group computations within the algorithm, and by extension, the inability to exploit group properties associated with the periodicity of group elements, serves to reduce quantum cryptanalysis to Grover's search algorithm.
LINE, predicated upon an incomplete system of linear equations, supports security levels ranging from 1 to 5, as stipulated by NIST, and thus presents a promising candidate for the construction of post-quantum cryptosystems.
2024-05-06T16:40:41+00:00https://creativecommons.org/licenses/by/4.0/Gennady KhalimovYevgen KotukhMaksym KolisnykSvitlana KhalimovaOleksandr Sievierinovhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/696A Theoretical Take on a Practical Consensus Protocol2024-05-17T16:51:56+00:00Victor ShoupThe Asynchronous Common Subset (ACS) problem is a fundamental problem in distributed computing. Very recently, Das et al. (2024) developed a new ACS protocol with several desirable properties: (i) it provides optimal resilience, tolerating up to $t < n/3$ corrupt parties out of $n$ parties in total, (ii) it does not rely on a trusted setup, (iii) it utilizes only "lightweight" cryptography, which can be instantiated using just a hash function, and (iv) it has expected round complexity $O(1)$ and expected communication complexity $O(\kappa n^3)$, where $\kappa$ is the output length of the hash function. The purpose of this paper is to give a detailed, self-contained exposition and analysis of this protocol from the point of view of modern theoretical cryptography, fleshing out a number of details of the definitions and proofs, providing a complete security analysis based on concrete security assumptions on the hash function (i.e., without relying on random oracles), and developing all of the underlying theory in the universal composability framework.2024-05-06T14:23:36+00:00https://creativecommons.org/licenses/by-nc-nd/4.0/Victor Shouphttps://creativecommons.org/licenses/by-nc-nd/4.0/https://eprint.iacr.org/2024/695Beale Cipher 1 and Cipher 3: Numbers With No Messages2024-05-06T13:42:33+00:00Richard WassmerThis paper's purpose is to give a new method of analyzing Beale Cipher 1 and Cipher 3 and to show that there is no key which will decipher them into sentences.
Previous research has largely used statistical methods to either decipher them or prove they have no solution. Some of these methods show that there is a high probability, but not certainty, that they are unsolvable. Both ciphers remain unsolved.
The methods used in this paper are not statistical ones based on thousands of samples. The evidence given here shows a high correlation between locations of certain numbers in the ciphers and locations in the written text that was given with these ciphers in the 1885 pamphlet called "The Beale Papers".
Evidence is correlated with a long monotonically increasing Gillogly String in Cipher 1, when translated with the Declaration of Independence given in the pamphlet.
The Beale Papers' writer was anonymous, and words in the three written letters in the 1885 pamphlet are compared with locations of numbers in the ciphers to show who the writer was.
Emphasis is on numbers which are controllable by the encipherer. Letter location sums are used when they are the most plausible ones found.
Evidence supports the statement that Cipher 1 and Cipher 3 are unintelligible. It also supports the statement that they were designed to have no intelligible sentences because they were part of a complex game made by the anonymous writer of The Beale Papers.2024-05-06T13:42:33+00:00https://creativecommons.org/licenses/by/4.0/Richard Wassmerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/694Lower-Bounds on Public-Key Operations in PIR2024-05-06T13:41:35+00:00Jesko DujmovicMohammad HajiabadiPrivate information retrieval (PIR) is a fundamental cryptographic primitive that allows a user to fetch a database entry without revealing to the server which database entry it learns. PIR becomes non-trivial if the server communication is less than the database size. We show that building (even) very weak forms of single-server PIR protocols, without pre-processing, requires the number of public-key operations to scale linearly in the database size. This holds irrespective of the number of symmetric-key operations performed by the parties.
We then use this bound to examine the related problem of communication-efficient oblivious transfer (OT) extension.
Oblivious transfer is a crucial building block in secure multi-party computation (MPC). In most MPC protocols, OT invocations are the main bottleneck in terms of computation and communication. OT extension techniques allow one to minimize the number of public-key operations in MPC protocols. One drawback of all existing OT extension protocols is their communication overhead. In particular, the sender’s communication is roughly double what is information-theoretically optimal.
We show that OT extension with close to optimal sender communication is impossible, illustrating that the communication overhead is inherent. Our techniques go much further; we can show many lower bounds on communication-efficient MPC. E.g., we prove that to build high-rate string OT from generic groups, the sender needs to do linearly many group operations.2024-05-06T13:41:35+00:00https://creativecommons.org/licenses/by/4.0/Jesko DujmovicMohammad Hajiabadihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/693A Note of $\mathsf{Anemoi}$ Gröbner Bases2024-05-06T13:28:21+00:00Pierre BriaudRecently, [eprint/2024/250] and [eprint/2024/347] proposed two algebraic attacks on the $\mathsf{Anemoi}$ permutation [Crypto '23]. In this note, we construct a Gröbner basis for the ideal generated by the naive modeling of the $\mathsf{CICO}$ problem associated to $\mathsf{Anemoi}$, in odd and in even characteristic, for one and several branches. We also infer the degree of the ideal from this Gröbner basis, while previous works relied on upper bounds.2024-05-06T13:28:21+00:00https://creativecommons.org/licenses/by/4.0/Pierre Briaudhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/692Blink: An Optimal Proof of Proof-of-Work2024-05-06T12:26:46+00:00Lukas AumayrZeta AvarikiotiMatteo MaffeiGiulia ScaffinoDionysis ZindrosDesigning light clients for Proof-of-Work blockchains has been a foundational problem since Nakamoto's SPV construction in the Bitcoin paper.
Over the years, communication was reduced from O(C) down to O(polylog(C)) in the system's lifetime C. We present Blink, the first provably secure O(1) light client that does not require a trusted setup.2024-05-06T08:32:01+00:00https://creativecommons.org/licenses/by-sa/4.0/Lukas AumayrZeta AvarikiotiMatteo MaffeiGiulia ScaffinoDionysis Zindroshttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2024/691White-box filtering attacks breaking SEL masking: from exponential to polynomial time2024-05-06T07:58:26+00:00Alex CharlèsAleksei UdovenkoThis work proposes a new white-box attack technique called filtering, which can be combined with any other trace-based attack method. The idea is to filter the traces based on the value of an intermediate variable in the implementation, aiming to fix a share of a sensitive value and degrade the security of an involved masking scheme.
Coupled with LDA (filtered LDA, FLDA), it leads to an attack defeating the state-of-the-art SEL masking scheme (CHES 2021) of arbitrary degree and number of linear shares with quartic complexity in the window size. In comparison, the current best attacks have exponential complexities in the degree (higher degree decoding analysis, HDDA), in the number of linear shares (higher-order differential computation analysis, HODCA), or the window size (white-box learning parity with noise, WBLPN).
The attack exploits the key idea of the SEL scheme - an efficient parallel combination of the nonlinear and linear masking schemes. We conclude that a proper composition of masking schemes is essential for security.
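For intuition on the LDA component referenced above: linear decoding analysis reduces to asking, over GF(2), whether some linear combination of recorded intermediate bits matches a predicted sensitive bit in every trace. The core subroutine is plain Gaussian elimination over GF(2); the following is a generic illustrative solver, not the authors' tooling:

```python
def solve_gf2(A, b):
    # Solve A x = b over GF(2) by Gaussian elimination.
    # A: list of rows of 0/1 coefficients; b: list of 0/1 right-hand sides.
    # Returns one solution (free variables set to 0), or None if inconsistent.
    n = len(A[0])
    rows = [sum(bit << j for j, bit in enumerate(row)) | (rhs << n)
            for row, rhs in zip(A, b)]
    pivots = []  # (pivot_column, packed_row), in insertion order
    for r in rows:
        for col, pr in pivots:          # reduce by existing pivots
            if (r >> col) & 1:
                r ^= pr
        col = next((j for j in range(n) if (r >> j) & 1), None)
        if col is None:
            if (r >> n) & 1:            # row reduced to 0 = 1
                return None
            continue
        pivots.append((col, r))
    x = [0] * n
    for col, pr in reversed(pivots):    # back-substitution
        s = (pr >> n) & 1
        for j in range(n):
            if j != col and (pr >> j) & 1:
                s ^= x[j]
        x[col] = s
    return x
```

In an LDA-style attack, A would be built from trace columns and b from key-guess predictions; a consistent system flags the correct guess.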
In addition, we propose several optimizations for linear algebraic attacks: redundant node removal (RNR), optimized parity check matrix usage, and chosen-plaintext filtering (CPF), significantly improving the performance of security evaluation of white-box implementations.2024-05-06T07:58:26+00:00https://creativecommons.org/licenses/by/4.0/Alex CharlèsAleksei Udovenkohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/690LPN-based Attacks in the White-box Setting2024-05-06T12:17:27+00:00Alex CharlèsAleksei UdovenkoIn white-box cryptography, early protection techniques have fallen to the automated Differential Computation Analysis attack (DCA), leading to new countermeasures and attacks. A standard side-channel countermeasure, Ishai-Sahai-Wagner's masking scheme (ISW, CRYPTO 2003), prevents Differential Computation Analysis but was shown to be vulnerable in the white-box context to the Linear Decoding Analysis attack (LDA). However, recent quadratic and cubic masking schemes by Biryukov-Udovenko (ASIACRYPT 2018) and Seker-Eisenbarth-Liskiewicz (CHES 2021) prevent LDA and force the use of its higher-degree generalizations, which have much higher complexity.
In this work, we study the relationship between the security of these and related schemes and the Learning Parity with Noise (LPN) problem, and propose a new automated attack that applies an LPN-solving algorithm to white-box implementations. The attack effectively exploits strong linear approximations of the masking scheme and thus can be seen as a combination of the DCA and LDA techniques. Unlike previous attacks, the complexity of this algorithm depends on the approximation error, thereby allowing new practical attacks on masking schemes that previously resisted automated analysis. We demonstrate it theoretically and experimentally, exposing multiple cases where the LPN-based method significantly outperforms LDA and DCA methods, including their higher-order variants.
This work applies the LPN problem beyond its usual post-quantum cryptography boundary, strengthening its interest to the cryptographic community, while expanding the range of automated attacks by presenting a new direction for breaking masking schemes in the white-box model.2024-05-06T07:50:55+00:00https://creativecommons.org/licenses/by/4.0/Alex CharlèsAleksei Udovenkohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/689Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models2024-05-06T06:23:21+00:00Elijah PelofskeVincent UriasLorie M. LiebrockGenerative pre-trained transformers (GPTs) are a type of large language model that is unusually adept at producing novel, coherent natural language. Notably, these technologies have also been extended to computer programming languages with great success. However, GPT model outputs in general are stochastic and not always correct. For programming languages, the exact specification of the computer code, syntactically and algorithmically, is strictly required in order to ensure the security of computing systems and applications. Therefore, using GPT models to generate computer code poses an important security risk, while at the same time allowing for potential innovation in how computer code is generated. In this study the ability of GPT models to generate novel and correct versions, and notably very insecure versions, of implementations of the cryptographic hash function SHA-1 is examined. The GPT models Llama-2-70b-chat-hf, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used.
The GPT models are prompted to re-write each function using a modified version of the localGPT framework and langchain to provide word embedding context of the full source code and header files to the model, resulting in over $130,000$ function re-write GPT output text blocks (that are potentially correct source code), approximately $40,000$ of which were able to be parsed as C code and subsequently compiled. The generated code is analyzed for being compilable, correctness of the algorithm, memory leaks, compiler optimization stability, and character distance to the reference implementation. Remarkably, several generated function variants have a high implementation security risk of being correct for some test vectors, but incorrect for other test vectors. Additionally, many function implementations were not correct to the reference algorithm of SHA-1, but produced hashes that have some of the basic characteristics of hash functions. Many of the function re-writes contained serious flaws such as memory leaks, integer overflows, out of bounds accesses, use of uninitialised values, and compiler optimization instability. Compiler optimization settings and SHA-256 hash checksums of the compiled binaries are used to cluster implementations that are equivalent but may not have identical syntax; using this clustering, over $100,000$ distinct, novel, and correct versions of the SHA-1 codebase were generated where each component C function of the reference implementation is different from the original code.2024-05-06T06:23:21+00:00https://creativecommons.org/licenses/by/4.0/Elijah PelofskeVincent UriasLorie M. Liebrockhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/688Succinct Functional Commitments for Circuits from k-Lin2024-05-05T22:14:21+00:00Hoeteck WeeDavid J. WuA functional commitment allows a user to commit to an input $\mathbf{x}$ and later, open the commitment to an arbitrary function $\mathbf{y} = f(\mathbf{x})$. The size of the commitment and the opening should be sublinear in $|\mathbf{x}|$ and $|f|$.
In this work, we give the first pairing-based functional commitment for arbitrary circuits where the size of the commitment and the size of the opening consist of a constant number of group elements. Security relies on the standard bilateral $k$-$\mathsf{Lin}$ assumption. This is the first scheme with this level of succinctness from falsifiable bilinear map assumptions (previous approaches required SNARKs for $\mathsf{NP}$). This is also the first functional commitment scheme for general circuits with $\mathsf{poly}(\lambda)$-size commitments and openings from any assumption that makes fully black-box use of cryptographic primitives and algorithms. As an immediate consequence, we also obtain a succinct non-interactive argument for arithmetic circuits (i.e., a SNARG for $\mathsf{P}/\mathsf{poly}$) with a universal setup and where the proofs consist of a constant number of group elements. In particular, the CRS in our SNARG only depends on the size of the arithmetic circuit $|C|$ rather than the circuit $C$ itself; the same CRS can be used to verify computations with respect to different circuits. Our construction relies on a new notion of projective chainable commitments which may be of independent interest.2024-05-05T22:14:21+00:00https://creativecommons.org/licenses/by/4.0/Hoeteck WeeDavid J. Wuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/687Levin–Kolmogorov Complexity is not in Linear Time2024-05-05T19:46:02+00:00Nicholas BrandtUnderstanding the computational hardness of Kolmogorov complexity is a central open question in complexity theory.
An important notion is Levin's version of Kolmogorov complexity, Kt, and its decisional variant, MKtP,
due to its connections to universal search, derandomization, and one-way functions, among others.
The question of whether MKtP can be computed in polynomial time is particularly interesting because it is not subject to known technical barriers, such as algebrization or natural proofs, that would otherwise explain the lack of a proof that MKtP is not in P.
We take a significant step towards proving MKtP not in P by developing an algorithmic approach for showing unconditionally that MKtP is not in DTIME[O(n)], i.e., that it cannot be decided in deterministic linear time in the worst case.
This allows us to partially affirm a conjecture by Ren and Santhanam [STACS:RS22] about a non-halting variant of Kt complexity.
Additionally, we give conditional lower bounds for MKtP that tolerate either more runtime or one-sided error.2024-05-05T19:46:02+00:00https://creativecommons.org/licenses/by-sa/4.0/Nicholas Brandthttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2024/686Unstructured Inversions of New Hope2024-05-04T21:07:54+00:00Ian MalloyIntroduced as a new protocol implemented in “Chrome Canary” for the Google Inc. Chrome browser,
“New Hope” is engineered as a post-quantum key exchange for the TLS 1.2 protocol. The exchange is built on revised lattice-based cryptography. New Hope incorporates the key-encapsulation mechanism of Peikert, which is itself a modified Ring-LWE scheme. The search space used to introduce the closest-vector problem is generated by the intersection of a tesseract and a hexadecachoron, i.e., the ℓ∞-ball and the ℓ1-ball respectively. This intersection results in the 24-cell 𝒱 of the lattice 𝒟̃4. With respect to the density of the Voronoi cell 𝒱, the mitigation against backdoor attacks proposed by the authors of New Hope may not withstand such attempts if enabled by a quantum computer capable of
implementing Grover’s search algorithm.2024-05-04T21:07:54+00:00https://creativecommons.org/licenses/by/4.0/Ian Malloyhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/685Committing AVID with Partial Retrieval and Optimal Storage2024-05-04T20:18:28+00:00Nicolas AlhaddadLeonid ReyzinMayank VariaAsynchronous Verifiable Information Dispersal (AVID) allows a dealer to disperse a message $M$ across a collection of server replicas consistently and efficiently, such that any future client can reliably retrieve the message $M$ if some servers fail.
Since AVID was introduced by Cachin and Tessaro in 2005, several works improved the asymptotic communication complexity of AVID protocols.
However, recent gains in communication complexity have come at the expense of sub-optimal storage, which is the dominant cost in long-term archiving.
Moreover, recent works do not provide a mechanism to detect errors until the retrieval stage, which may result in completely wasted long-term storage if the dealer is malicious.
In this work, we contribute a new AVID construction that achieves optimal storage and guaranteed output delivery, without sacrificing communication complexity during dispersal or retrieval.
First, we introduce a technique that bootstraps from dispersal of a message with sub-optimal storage to one with optimal storage.
Second, we define and construct an AVID protocol that is robust, meaning that all server replicas are guaranteed at dispersal time that their fragments will contribute toward retrieval of a valid message.
Third, we add the new possibility that some server replicas may lose their fragment in between dispersal and retrieval (as is likely in the long-term archiving scenario).
This allows us to rely on fewer available replicas for retrieval than are required for dispersal.2024-05-04T20:18:28+00:00https://creativecommons.org/licenses/by/4.0/Nicolas AlhaddadLeonid ReyzinMayank Variahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/684A Plug-and-Play Long-Range Defense System for Proof-of-Stake Blockchains2024-05-04T16:31:13+00:00Lucien K. L. NgPanagiotis ChatzigiannisDuc V. LeMohsen MinaeiRanjit KumaresanMahdi ZamaniIn recent years, many blockchain systems have progressively transitioned to proof-of-stake (PoS) consensus algorithms. These algorithms are not only more energy efficient than proof-of-work but are also well-studied and widely accepted within the community. However, PoS systems are susceptible to a particularly powerful "long-range" attack, where an adversary can corrupt the validator set retroactively and present forked versions of the blockchain. These versions would still be acceptable to clients, thereby creating the potential for double-spending. Several methods and research efforts have proposed countermeasures against such attacks. Still, they often necessitate modifications to the underlying blockchain, introduce heavy assumptions such as centralized entities, or prove inefficient for securely bootstrapping light clients.
In this work, we propose a method of defending against these attacks with the aid of external servers running our protocol. Our method does not require any soft or hard forks on the underlying blockchain and operates under reasonable assumptions, specifically the requirement of at least one honest server.
Central to our approach is a new primitive called "Insertable Proof of Sequential Work" (InPoSW). Traditional PoSW ensures that a server performs computational tasks that cannot be parallelized and require a minimum execution time, effectively timestamping the input data. InPoSW additionally allows the prover to "insert" new data into an ongoing InPoSW instance. This primitive can be of independent interest for other timestamp applications. Compared to naively adopting prior PoSW schemes for InPoSW, our construction achieves >22× storage reduction on the server side and >17900× communication cost reduction for each verification.2024-05-04T16:31:13+00:00https://creativecommons.org/licenses/by/4.0/Lucien K. L. NgPanagiotis ChatzigiannisDuc V. LeMohsen MinaeiRanjit KumaresanMahdi Zamanihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/683A note on ``a new password-authenticated module learning with rounding-based key exchange protocol: Saber.PAKE''2024-05-04T04:32:00+00:00Zhengjun CaoLihua LiuWe show the Seyhan-Akleylek key exchange protocol [J. Supercomput., 2023, 79:17859-17896] cannot resist offline dictionary attacks and impersonation attacks, not as claimed.2024-05-04T04:32:00+00:00https://creativecommons.org/publicdomain/zero/1.0/Zhengjun CaoLihua Liuhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2024/682Approximate PSI with Near-Linear Communication2024-05-04T01:57:49+00:00Wutichai ChongchitmateSteve LuRafail OstrovskyPrivate Set Intersection (PSI) is a protocol where two parties with individually held confidential sets want to jointly learn (or secret-share) the intersection of these two sets and reveal nothing else to each other. In this paper, we introduce a natural extension of this notion to approximate matching.
Specifically, given a distance metric between elements, an approximate PSI (Approx-PSI) allows one to run PSI where ``close'' elements match. Assuming that elements are either ``close'' or sufficiently ``far apart'', we present an Approx-PSI protocol for Hamming distance that dramatically improves the overall efficiency compared to all existing approximate-PSI solutions. In particular, we achieve a near-linear $\tilde{O}(n)$ communication complexity, an improvement over the previously best-known $\tilde{O}(n^2)$. We also show Approx-PSI protocols for Edit distance (also known as Levenshtein distance), Euclidean distance and angular distance by deploying results on low-distortion embeddings into Hamming distance. The latter two results further imply secure Approx-PSI for other metrics such as the cosine similarity metric. Our Approx-PSI for Hamming distance is up to 20x faster and communicates 30% less than the best known protocols when (1) matching small binary vectors; or (2) matching with a large threshold; or (3) matching large input sets. We demonstrate that the protocol can be used to match similar images through spread spectrum of the images.2024-05-04T01:57:49+00:00https://creativecommons.org/licenses/by-nc/4.0/Wutichai ChongchitmateSteve LuRafail Ostrovskyhttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2024/681HRA-Secure Homomorphic Lattice-Based Proxy Re-Encryption with Tight Security2024-05-03T21:49:12+00:00Aloni CohenDavid Bruce CousinsNicholas GeniseErik KlineYuriy PolyakovSaraswathy RVWe construct an efficient proxy re-encryption (PRE) scheme secure against honest re-encryption attacks (HRA-secure) with precise concrete security estimates. To get these precise concrete security estimates, we introduce the tight, fine-grained noise-flooding techniques of Li et al. (CRYPTO'22) to RLWE-based (homomorphic) PRE schemes, as well as a mixed statistical-computational approach to HRA security analysis. Our solution also supports homomorphic operations on the ciphertexts. Such homomorphism allows for advanced applications, e.g., encrypted computation of network statistics across networks and unlimited hops, in the case of full homomorphism, i.e., bootstrapping.
We implement our PRE scheme in the OpenFHE software library and apply it to a problem of secure multi-hop data distribution in the context of 5G virtual network slices. We also experimentally evaluate the performance of our scheme, demonstrating that the implementation is practical.
In addition, we compare our PRE method with other lattice-based PRE schemes and approaches to achieving HRA security. These approaches achieve HRA security, but not via a tight, practical scheme such as ours. Further, we present an attack on the PRE scheme of Davidson et al. (ACISP'19), which was claimed to achieve HRA security without noise flooding.2024-05-03T21:49:12+00:00https://creativecommons.org/licenses/by/4.0/Aloni CohenDavid Bruce CousinsNicholas GeniseErik KlineYuriy PolyakovSaraswathy RVhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/680Universal Vector Commitments2024-05-03T19:42:26+00:00Ojaswi AcharyaFoteini BaldimtsiSamuel Dov GordonDaniel McVickerAayush YadavWe propose a new notion of vector commitment schemes with proofs of (non-)membership that we call universal vector commitments. We show how to build them directly from (i) Merkle commitments, and (ii) a universal accumulator and a plain vector commitment scheme. We also present a generic construction for universal accumulators over large domains from any vector commitment scheme, using cuckoo hashing.
Leveraging the aforementioned generic constructions, we show that universal vector commitment schemes are implied by plain vector commitments and cuckoo hashing.2024-05-03T19:42:26+00:00https://creativecommons.org/licenses/by/4.0/Ojaswi AcharyaFoteini BaldimtsiSamuel Dov GordonDaniel McVickerAayush Yadavhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/679Isotropic Quadratic Forms, Diophantine Equations and Digital Signatures2024-05-03T17:58:41+00:00Martin FeussnerIgor SemaevThis work introduces DEFI - an efficient hash-and-sign digital signature scheme based on isotropic quadratic forms over a commutative ring of characteristic 0. The form is public, but the construction is a trapdoor that depends on the scheme's private key. For polynomial rings over integers and rings of integers of algebraic number fields, the cryptanalysis is reducible to solving a quadratic Diophantine equation over the ring or, equivalently, to solving a system of quadratic Diophantine equations over rational integers. It is still an open problem whether quantum computers will have any advantage in solving Diophantine problems.2024-05-03T17:58:41+00:00https://creativecommons.org/licenses/by/4.0/Martin FeussnerIgor Semaevhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/678Quantum-Safe Account Recovery for WebAuthn2024-05-09T20:13:31+00:00Douglas StebilaSpencer WilsonWebAuthn is a passwordless authentication protocol which allows users to authenticate to online services using public-key cryptography. Users prove their identity by signing a challenge with a private key, which is stored on a device such as a cell phone or a USB security token. This approach avoids many of the common security problems with password-based authentication.
WebAuthn's reliance on proof-of-possession leads to a usability issue, however: a user who loses access to their authenticator device either loses access to their accounts or is required to fall back on a weaker authentication mechanism. To solve this problem, Yubico has proposed a protocol which allows a user to link two tokens in such a way that one (the primary authenticator) can generate public keys on behalf of the other (the backup authenticator). With this solution, users authenticate with a single token, only relying on their backup token if necessary for account recovery. However, Yubico's protocol relies on the hardness of the discrete logarithm problem for its security and hence is vulnerable to an attacker with a powerful enough quantum computer.
We present a WebAuthn recovery protocol which can be instantiated with quantum-safe primitives. We also critique the security model used in previous analysis of Yubico's protocol and propose a new framework which we use to evaluate the security of both the group-based and the quantum-safe protocol. This leads us to uncover a weakness in Yubico's proposal which escaped detection in prior work but was revealed by our model. In our security analysis, we require the cryptographic primitives underlying the protocols to satisfy a number of novel security properties such as KEM unlinkability, which we formalize. We prove that well-known quantum-safe algorithms, including CRYSTALS-Kyber, satisfy the properties required for analysis of our quantum-safe protocol.2024-05-03T14:26:34+00:00https://creativecommons.org/licenses/by/4.0/Douglas StebilaSpencer Wilsonhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2024/677Asynchronous Consensus without Trusted Setup or Public-Key Cryptography2024-05-03T13:40:00+00:00Sourav DasSisi DuanShengqi LiuAtsuki MomoseLing RenVictor ShoupByzantine consensus is a fundamental building block in distributed cryptographic problems. Despite decades of research, most existing asynchronous consensus protocols require a strong trusted setup and expensive public-key cryptography. In this paper, we study asynchronous Byzantine consensus protocols that do not rely on a trusted setup and do not use public-key cryptography such as digital signatures. We give an Asynchronous Common Subset (ACS) protocol whose security is only based on cryptographic hash functions modeled as a random oracle. Our protocol has $O(\kappa n^3)$ total communication and runs in expected $O(1)$ rounds. The fact that we use only cryptographic hash functions also means that our protocol is post-quantum secure. The minimal use of cryptography and the small number of rounds make our protocol practical.
We implement our protocol and evaluate it in a geo-distributed setting with up to 128 machines. Our experimental evaluation shows that our protocol is more efficient than the only other setup-free consensus protocol that has been implemented to date. En route to our asynchronous consensus protocols, we also introduce new primitives called asynchronous secret key sharing and cover gather, which may be of independent interest.

License: https://creativecommons.org/publicdomain/zero/1.0/

https://eprint.iacr.org/2024/676
Composing Timed Cryptographic Protocols: Foundations and Applications
Published: 2024-05-03T01:42:50+00:00
Authors: Karim Eldefrawy, Benjamin Terner, Moti Yung

Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. Unfortunately, current analysis techniques of time-lock primitives provide no sound mechanism to build multi-party cryptographic protocols which use expiring security as a building block. We explain in this paper that all other attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks reduce to an impossibility by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone; yet still the analyses of algebraic puzzles today treat the solving process as if each step is a generic or random oracle.
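The notion of security that expires by computation can be made concrete with the classic Rivest-Shamir-Wagner repeated-squaring puzzle, which is the kind of component the paper's framework composes. A minimal toy sketch (parameters far too small for real use):

```python
# Toy RSW time-lock puzzle: the creator uses phi(n) as a trapdoor to compute
# 2^(2^t) mod n quickly; a solver without the trapdoor must perform t
# inherently sequential squarings, which is what meters out the delay.
p, q = 10007, 10009            # illustrative primes; real puzzles use ~1024-bit factors
n, phi = p * q, (p - 1) * (q - 1)
t = 1_000                      # number of sequential squarings (the "time" parameter)
secret = 42                    # value to keep hidden until the puzzle is solved

# Creator (fast, via the trapdoor): b = 2^(2^t) mod n by reducing the exponent mod phi.
b_fast = pow(2, pow(2, t, phi), n)
ciphertext = (secret + b_fast) % n

# Solver (slow, no trapdoor): t squarings that cannot be parallelized.
b = 2
for _ in range(t):
    b = b * b % n
recovered = (ciphertext - b) % n
```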
This paper presents a new complexity-theoretic framework and new structural theorems to analyze timed primitives in full generality and in composition (composition being the central modular protocol-design tool). The framework includes a model of security based on fine-grained circuit complexity, which we call residual complexity, and which accounts for possible leakage on timed primitives as they expire. Our definitions for multi-party computation protocols generalize the literature standards by accounting for fine-grained polynomial circuit depth to model computational hardness which expires in feasible time. Our composition theorems incur degradation of (fine-grained) security as items are composed. In our framework, simulators are given a polynomial “budget” for how much time they spend, and in composition these polynomials interact.
Finally, we demonstrate via a prototypical auction application how to apply our framework and theorems. For the first time, we show that it is possible to prove – in a way that is fully consistent, with falsifiable assumptions – properties of multi-party applications based on leaky, temporarily secure components.

License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2024/675
Olympic Privacy-Preserving Blueprints: Faster Communication, Highly Functional, Stronger Security
Published: 2024-05-02T19:49:30+00:00
Authors: Scott Griffy, Markulf Kohlweiss, Anna Lysyanskaya, Meghna Sengupta

Introduced by Kohlweiss, Lysyanskaya, and Nguyen (Eurocrypt'23), an $f$-privacy-preserving blueprint (PPB) system allows an auditor with secret input $x$ to create a public encoding of the function $f(x,\cdot)$ that verifiably corresponds to a commitment $C_x$ to $x$. The auditor will then be able to derive $f(x,y)$ from an escrow $Z$ computed by a user on input the user's private data $y$ corresponding to a commitment $C_y$. $Z$ verifiably corresponds to the commitment $C_y$ and reveals no other information about $y$.
PPBs provide an abuse-resistant escrow mechanism: for example, if $f$ is the watchlist function where $f(x,y)$ outputs $y$ only in the event that $y$ is on the list $x$, then an $f$-PPB allows the auditor to trace watchlisted users in an otherwise anonymous system. Yet, the auditor's $x$ must correspond to a publicly available (and potentially authorized by a transparent, lawful process) $C_x$, and the auditor will learn nothing except $f(x,y)$.
In this paper, we build on the original PPB results in three ways: (1) We define and satisfy a stronger notion of security where a malicious auditor cannot frame a user in a transaction to which this user was not a party. (2) We provide efficient schemes for a bigger class of functions $f$; for example, for the first time, we show how to realize $f$ that would allow the auditor to trace e-cash transactions of a criminal suspect. (3) For the watchlist and related functions, we reduce the size of the escrow $Z$ from linear in the size of the auditor's input $x$ to logarithmic.

License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2024/674
SigmaSuite: How to Minimize Foreign Arithmetic in ZKP Circuits While Keeping Succinct Final Verification.
Published: 2024-05-02T16:43:57+00:00
Author: Wyatt Benno

Foreign field arithmetic often creates significant additional overheads in zero-knowledge proof circuits. Previous work has offloaded foreign arithmetic from proof circuits by using effective and often simple primitives such as Sigma protocols. While these successfully move the foreign field work outside of the circuit, the costs for the Sigma protocol’s verifier still remain high. In use cases where the verifier is computationally constrained, this poses a major challenge. One such use case is proof composition, where foreign arithmetic causes a blowup in the costs for the verifier circuit. In this work we show that by using a folding scheme with Sigmabus and other such uniform verifier offloading techniques, we can remove foreign field arithmetic from zero-knowledge proof circuits while achieving succinct final verification. We do this by applying prior techniques iteratively and accumulating the resulting verifier work into one folding proof of size O(|F|) group elements, where F is the size of a single Sigma verifier’s computation.
Then by using an existing zkSNARK we can further compress to a proof size of O(log |F|), which can be checked succinctly by a computationally constrained verifier.

License: https://creativecommons.org/licenses/by/4.0/
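For a concrete sense of the Sigma-verifier computation F that the folding proof accumulates, here is a toy Fiat-Shamir-compiled Schnorr proof; the verifier's final check costs two group exponentiations, which is the kind of foreign arithmetic that is expensive inside a circuit (toy group parameters, illustrative only):

```python
import hashlib
import secrets

# Tiny Schnorr group: p = 2q + 1 with p, q prime; g = 4 has order q.
p, q, g = 2039, 1019, 4

def challenge(*elems: int) -> int:
    """Fiat-Shamir challenge: hash the transcript down to a scalar."""
    data = b"".join(e.to_bytes(2, "big") for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover: knows the discrete log w of h = g^w.
w = secrets.randbelow(q - 1) + 1
h = pow(g, w, p)
r = secrets.randbelow(q - 1) + 1
a = pow(g, r, p)                   # commitment
c = challenge(g, h, a)             # Fiat-Shamir challenge
z = (r + c * w) % q                # response

# Verifier: checks g^z == a * h^c (mod p) -- two exponentiations in the
# "foreign" group, the cost that folding-based offloading aims to amortize.
ok = pow(g, z, p) == (a * pow(h, c, p)) % p
```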