https://eprint.iacr.org/rss/atom.xmlCryptology ePrint Archive2023-05-28T20:24:12+00:00https://iacr.org/img/logo/iacrlogo_small.pngMetadata is available under the CC0 license https://creativecommons.org/publicdomain/zero/1.0/. Each article has a PDF with a different license specified for each one.The Cryptology ePrint Archive provides rapid access to recent research in cryptology. Papers have been placed here by the authors and did not undergo any refereeing process other than verifying that the work seems to be within the scope of cryptology and meets some minimal acceptance criteria and publishing conditions.https://eprint.iacr.org/2023/434The Self-Anti-Censorship Nature of Encryption: On the Prevalence of Anamorphic Cryptography2023-05-28T20:24:12+00:00Mirek KutylowskiGiuseppe PersianoDuong Hieu PhanMoti YungMarcin ZawadaAs part of the responses to the ongoing ``crypto wars,'' the notion of {\em Anamorphic Encryption} was put forth [Persiano-Phan-Yung Eurocrypt '22].
The notion allows private communication in spite of a dictator who (in violation of the usual normative conditions under which cryptography is developed) is engaged in an extreme form of surveillance and/or censorship: the dictator asks for all private keys, and knows, and may even dictate, all messages.
The original work pointed out efficient ways to use two known schemes in the anamorphic mode, bypassing the draconian censorship and hiding information from the all-powerful dictator.
A question left open was whether these examples are outlier results or whether anamorphic mode is pervasive in existing systems.
Here we answer the above question: we develop new techniques, expand the notion, and show that Anamorphic Cryptography is, in fact, very much prevalent.
We first refine the notion of Anamorphic Encryption with respect to the nature of covert communication.
Specifically, we distinguish {\em Single-Receiver Encryption} for many-to-one communication, and {\em Multiple-Receiver Encryption} for many-to-many communication within the group of users conspiring against the dictator. We then show that Anamorphic Encryption can be embedded in the randomness used in the encryption, and give families of constructions that can be applied to numerous ciphers. In total, the families cover classical encryption schemes, some of which are in actual use (RSA-OAEP, Paillier, Goldwasser-Micali, ElGamal, Cramer-Shoup, and Smooth Projective Hash based systems). Among our examples is an anamorphic channel with much higher capacity than the regular channel.
In sum, the work shows the very large extent of the potential futility of the dictator's control and censorship over the use of strong encryption (the dictator being typical of, and even stronger than, governments engaging in the ongoing ``crypto-wars''): while such limitations obviously hurt the utility that encryption typically brings to safety in computing systems, they essentially do not help the dictator.
The actual implications of what we show here, and what they mean in practice, require further policy and legal analyses and perspectives.2023-03-24T16:53:23+00:00https://creativecommons.org/licenses/by/4.0/Mirek KutylowskiGiuseppe PersianoDuong Hieu PhanMoti YungMarcin Zawadahttps://creativecommons.org/licenses/by/4.0/
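To make the randomness-embedding idea concrete, here is a minimal sketch (not the paper's construction) of an anamorphic channel in textbook ElGamal: the sender and a conspiring receiver share a symmetric double key, and the sender rejection-samples the encryption randomness until a PRF of the first ciphertext component equals the covert bit. The toy group parameters, the `prf_bit` function, and the shared key below are illustrative assumptions.

```python
import hashlib
import secrets

# Toy demo parameters (NOT secure): a small prime group for illustration only.
P, G = 1019, 2

def prf_bit(dk: bytes, c1: int) -> int:
    """One covert bit extracted from the first ElGamal component."""
    return hashlib.sha256(dk + c1.to_bytes(4, "big")).digest()[0] & 1

def anamorphic_encrypt(pk: int, msg: int, dk: bytes, covert_bit: int):
    # Rejection-sample the encryption randomness r until the shared-key PRF,
    # applied to g^r, equals the covert bit.  The result is a perfectly
    # ordinary-looking ElGamal ciphertext for msg (about 2 tries on average).
    while True:
        r = secrets.randbelow(P - 2) + 1
        c1 = pow(G, r, P)
        if prf_bit(dk, c1) == covert_bit:
            return c1, (msg * pow(pk, r, P)) % P

def covert_decrypt(dk: bytes, c1: int) -> int:
    return prf_bit(dk, c1)

sk = secrets.randbelow(P - 2) + 1
pk = pow(G, sk, P)
dk = b"shared double key"                       # known only to the conspirators
c1, c2 = anamorphic_encrypt(pk, 42, dk, covert_bit=1)
assert covert_decrypt(dk, c1) == 1              # conspirator reads the covert bit
assert (c2 * pow(c1, P - 1 - sk, P)) % P == 42  # dictator sees a normal decryption
```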
https://eprint.iacr.org/2022/055Key lifting: Multi-key Fully Homomorphic Encryption in plain model without noise flooding2023-05-28T16:49:49+00:00Xiaokang DaiWenyuan WuYong FengMulti-key Fully Homomorphic Encryption (\MK), based on the Learning With Errors assumption (\LWE), usually lifts ciphertexts of different users to new ciphertexts under a common public key to enable homomorphic evaluation. The efficiency of current \MK schemes is mainly restricted by two aspects:
- Expensive ciphertext expansion: in a Boolean circuit with input length $N$, multiplication depth $L$, and security parameter $\lambda$, the number of additional encryptions introduced to achieve ciphertext expansion is $O(N\lambda^6L^4)$.
- Noise flooding resulting in a large modulus $q$: in order to prove the security of the scheme, the noise flooding technique introduced in the encryption and distributed decryption stages leads to a huge modulus $q = 2^{O(\lambda L)}B_\chi$, which burdens the whole scheme and leads to sub-exponential approximation factors $\gamma = \tilde{O}(n\cdot 2^{\sqrt{nL}})$.
This paper solves the first problem by presenting a framework called Key-Lifting Multi-key Fully Homomorphic Encryption (\KL). With this \emph{key lifting} procedure, the number of encryptions for a local user is reduced to $O(N)$, similar to single-key fully homomorphic encryption (\FHE). For the second problem, based on R\'{e}nyi divergence, we propose an optimized proof method that removes the noise flooding technology in the encryption phase. Additionally, in the distributed decryption phase, we prove that the asymmetric nature of the DGSW ciphertext ensures that the noise after decryption does not leak the noise in the initial ciphertext, as long as the depth of the circuit is sufficient. Thus, our initial ciphertext remains semantically secure even without noise flooding, provided the encryption scheme is leakage-resilient. This approach significantly reduces the size of the modulus $q$ (with $\log q = O(L)$) and the computational overhead of the entire scheme.2022-01-18T16:16:25+00:00https://creativecommons.org/licenses/by/4.0/Xiaokang DaiWenyuan WuYong Fenghttps://creativecommons.org/licenses/by/4.0/
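The noise-flooding step that this work removes can be illustrated in a few lines: a decryptor publishes a partial decryption of the form mu + e, and flooding adds a "smudging" term exponentially larger than e so that the published value statistically hides e, at the cost of a much larger modulus. All parameters below are toy values chosen for illustration, not the paper's.

```python
import secrets

# Toy illustration of noise flooding ("smudging") in distributed decryption.
LAMBDA = 40          # statistical security parameter (toy value)
B = 8                # bound on the small decryption noise e

def partial_decryption(mu: int, e: int, flood: bool) -> int:
    assert abs(e) <= B
    if not flood:
        return mu + e                       # risks leaking information about e
    bound = B << LAMBDA                     # smudging bound 2^lambda * B
    e_flood = secrets.randbelow(2 * bound + 1) - bound   # uniform in [-bound, bound]
    return mu + e + e_flood                 # e is now statistically hidden

# The cost: the modulus must absorb the flooded noise, so log2(q) grows from
# O(log B) to roughly lambda + log2(B) per level -- the source of the
# 2^{O(lambda L)} factor in q that this paper eliminates.
print(partial_decryption(1000, 3, flood=False))
print(partial_decryption(1000, 3, flood=True))
```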
https://eprint.iacr.org/2023/669Classical substitution ciphers and group theory2023-05-28T02:10:16+00:00Thomas KaedingWe explore some connections between classical substitution ciphers, both monoalphabetic and polyalphabetic, and mathematical group theory. We try to do this in a way that is accessible to cryptographers who are not familiar with group theory, and to mathematicians who are not familiar with classical ciphers.2023-05-11T13:32:20+00:00https://creativecommons.org/licenses/by-sa/4.0/Thomas Kaedinghttps://creativecommons.org/licenses/by-sa/4.0/
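The connection the paper explores can be seen in a couple of lines: a monoalphabetic substitution cipher is a permutation of the alphabet, and composing two such ciphers is exactly the group operation in the symmetric group S_26. The keys below are arbitrary examples.

```python
import string

ALPHABET = string.ascii_uppercase

def encrypt(key: str, text: str) -> str:
    """A monoalphabetic substitution cipher is a permutation of the alphabet."""
    return text.translate(str.maketrans(ALPHABET, key))

def compose(key1: str, key2: str) -> str:
    """Composition of two substitutions is the product of the two permutations."""
    return encrypt(key2, key1)

k1 = "QWERTYUIOPASDFGHJKLZXCVBNM"   # arbitrary example key
k2 = "ZYXWVUTSRQPONMLKJIHGFEDCBA"   # the Atbash permutation
msg = "GROUPTHEORY"
# Encrypting twice equals encrypting once under the composed permutation,
# i.e. substitution ciphers form a group under composition.
assert encrypt(k2, encrypt(k1, msg)) == encrypt(compose(k1, k2), msg)
```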
https://eprint.iacr.org/2023/714A Two-Party Hierarchical Deterministic Wallets in Practice2023-05-28T00:15:38+00:00ChihYun ChuangIHung HsuTingFang LeeApplications of Hierarchical Deterministic Wallets are rapidly growing in various areas such as cryptocurrency exchanges and hardware wallets, and improving their privacy and security is more important than ever. In this study, we propose a protocol that fully supports two-party computation of BIP32. Our protocol, similar to distributed key generation, can generate each party's secret share, the common chain-code, and the public key without revealing the seed or any descendant private keys. We also provide a simulation-based proof of our protocol assuming a rushing, static, and malicious adversary in the hybrid model. Our master key generation protocol leaks at most a total of two bits from an honest party, given that the seeds are re-selected after each execution. The proposed hardened child key derivation protocol leaks at most one bit from an honest party in the worst case of the simulation, and this leakage accumulates with each execution. Fortunately, in practice, this issue can be largely mitigated by adding validation criteria on the Boolean circuits and masking the input shares before each execution. We implemented the proposed protocol and ran it in a single thread on a laptop, obtaining practically acceptable execution times. Lastly, the outputs of our protocol can be easily integrated with many threshold signing protocols.2023-05-18T01:47:58+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/ChihYun ChuangIHung HsuTingFang Leehttps://creativecommons.org/licenses/by-nc-sa/4.0/
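For reference, the single-party hardened child key derivation from the BIP32 specification, which the two-party protocol emulates without either party holding the key in the clear, can be sketched as follows (the rare edge case where the derived scalar falls outside the group order is omitted):

```python
import hmac
import hashlib

# secp256k1 group order, as used by BIP32.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def bip32_hardened_child(k_par: int, chain_code: bytes, index: int):
    """Single-party BIP32 hardened derivation: the 2PC protocol computes this
    same function on secret-shared inputs, never reconstructing k_par."""
    assert index >= 0x80000000, "hardened indices start at 2^31"
    data = b"\x00" + k_par.to_bytes(32, "big") + index.to_bytes(4, "big")
    I = hmac.new(chain_code, data, hashlib.sha512).digest()
    k_child = (int.from_bytes(I[:32], "big") + k_par) % N
    return k_child, I[32:]   # child private key and child chain code

k, cc = bip32_hardened_child(0x1234, b"\x01" * 32, 0x80000000)
```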
https://eprint.iacr.org/2023/391Additional Modes for ASCON2023-05-27T23:06:06+00:00Rhys WeatherleyNIST selected the ASCON family of cryptographic primitives for standardization in February 2023 as the final step in the Lightweight Cryptography Competition. The ASCON submission to the competition provided Authenticated Encryption with Associated Data (AEAD), hashing, and Extensible Output Function (XOF) modes. Real-world cryptography systems often need more than packet encryption and simple hashing. Keyed message authentication, key derivation, cryptographically secure pseudo-random number generation (CSPRNG), password hashing, and encryption of sensitive values in memory are also important. This paper defines additional modes that can be deployed on top of ASCON based on proven designs from the literature.2023-03-19T00:10:34+00:00https://creativecommons.org/licenses/by/4.0/Rhys Weatherleyhttps://creativecommons.org/licenses/by/4.0/
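As a flavour of what "additional modes" means, here is a hedged sketch of a KDF and a keyed MAC built generically on top of an XOF. SHAKE128 from the Python standard library stands in for ASCON-XOF (which is not available there), and the domain-separation bytes are invented for illustration; they are not the paper's encodings.

```python
import hashlib

def xof(data: bytes, outlen: int) -> bytes:
    # Stand-in XOF: SHAKE128; a real deployment would use ASCON-XOF here.
    return hashlib.shake_128(data).digest(outlen)

def kdf(key: bytes, context: bytes, outlen: int) -> bytes:
    # Domain-separated key derivation; a real design would also encode
    # field lengths so that (key, context) splits are unambiguous.
    return xof(b"\x01" + key + context, outlen)

def mac(key: bytes, msg: bytes) -> bytes:
    # Keyed message authentication from the same XOF primitive.
    return xof(b"\x02" + key + msg, 16)

session_key = kdf(b"master key", b"session 7", 32)
tag = mac(session_key, b"hello")
```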
https://eprint.iacr.org/2023/091Satisfiability Modulo Finite Fields2023-05-27T21:41:10+00:00Alex OzdemirGereon KremerCesare TinelliClark BarrettWe study satisfiability modulo the theory of finite fields and give a decision procedure for this theory. We implement our procedure for prime fields inside the cvc5 SMT solver. Using this theory, we construct SMT queries that encode translation validation for various zero knowledge proof compilers applied to Boolean computations. We evaluate our procedure on these benchmarks. Our experiments show that our implementation is superior to previous approaches (which encode field arithmetic using integers or bit-vectors).2023-01-25T03:56:57+00:00https://creativecommons.org/licenses/by/4.0/Alex OzdemirGereon KremerCesare TinelliClark Barretthttps://creativecommons.org/licenses/by/4.0/
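The kind of statement being checked can be miniaturised: a ZK compiler encodes the Boolean gate AND(a, b) as the field constraint c = a·b together with booleanity constraints x² = x, and translation validation asks whether the two semantics agree on every satisfying assignment. The brute-force checker below, over an assumed toy prime, is only an illustration; the paper's procedure decides such queries algebraically inside cvc5.

```python
P = 13  # toy prime field (real benchmarks use large ZK-friendly primes)

def and_translation_valid() -> bool:
    """Brute-force translation validation: the field constraint c = a*b,
    restricted by the booleanity constraints a^2 = a and b^2 = b,
    must agree with Boolean AND on every satisfying assignment."""
    for a in range(P):
        for b in range(P):
            if (a * a - a) % P or (b * b - b) % P:
                continue                     # booleanity forces a, b in {0, 1}
            if (a * b) % P != int(bool(a) and bool(b)):
                return False                 # a counterexample = a soundness bug
    return True

assert and_translation_valid()
```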
https://eprint.iacr.org/2023/486Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning2023-05-27T19:38:48+00:00Yiping MaJess WoodsSebastian AngelAntigoni PolychroniadouTal RabinThis paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning in which many consecutive summations (averages) of model weights are performed to derive a good model. Previous protocols, such as Bell et al. (CCS ’20), have been designed for a single round and are adapted to the federated learning setting by repeating the protocol multiple times. Flamingo eliminates the need for the per-round setup of previous protocols, and has a new lightweight dropout resilience protocol to ensure that if clients leave in the middle of a sum the server can still obtain a meaningful result. Furthermore, Flamingo introduces a new way to locally choose the so-called client neighborhood introduced by Bell et al. These techniques help Flamingo reduce the number of interactions between clients and the server, resulting in a significant reduction in the end-to-end runtime for a full training session over prior work.
We implement and evaluate Flamingo and show that it can securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and that the model converges without loss in accuracy compared to a non-private federated learning system.2023-04-04T11:56:02+00:00https://creativecommons.org/licenses/by/4.0/Yiping MaJess WoodsSebastian AngelAntigoni PolychroniadouTal Rabinhttps://creativecommons.org/licenses/by/4.0/
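The core idea of single-server secure aggregation, going back to the protocols Flamingo improves on, fits in a few lines: each pair of clients derives a common mask that one adds and the other subtracts, so individual inputs are blinded but the masks cancel in the sum. The mask derivation and the fully connected "neighborhood" below are simplified assumptions, and dropout handling, Flamingo's actual focus, is omitted.

```python
import hashlib

Q = 2**32  # arithmetic modulus for the aggregate

def pairwise_mask(i: int, j: int, seed: bytes) -> int:
    # Shared mask for the client pair (i, j); a real protocol would derive
    # the seed via a key agreement between clients i and j.
    data = seed + min(i, j).to_bytes(4, "big") + max(i, j).to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def masked_input(i: int, x: int, n: int, seed: bytes) -> int:
    y = x
    for j in range(n):
        if j != i:
            m = pairwise_mask(i, j, seed)
            y += m if i < j else -m    # each mask appears once with each sign
    return y % Q

inputs = [5, 17, 42]
seed = b"demo seed"
total = sum(masked_input(i, x, len(inputs), seed) for i, x in enumerate(inputs)) % Q
assert total == sum(inputs) % Q        # the server learns only the sum
```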
https://eprint.iacr.org/2023/750BAKSHEESH: Similar Yet Different From GIFT2023-05-27T18:07:24+00:00Anubhab BaksiJakub BreierAnupam ChattopadhyayTomáš GerlichSylvain GuilleyNaina GuptaKai HuTakanori IsobeArpan JatiPetr JedlickaHyunjun KimFukang LiuZdeněk MartinásekKosei SakamotoHwajeong SeoRentaro ShibaRitu Ranjan ShrivastwaWe propose a lightweight block cipher named BAKSHEESH, which follows up on the popular cipher GIFT-128 (CHES'17). BAKSHEESH runs for 35 rounds, 12.5 percent fewer than GIFT-128's 40 rounds, while maintaining the same security claims against classical attacks.
The crux of BAKSHEESH is its use of a 4-bit SBox that has a non-trivial Linear Structure (LS). An SBox with one or more non-trivial LS had not been used in a cipher construction until DEFAULT (Asiacrypt'21). DEFAULT is pitched to have inherent protection against the Differential Fault Attack (DFA), thanks to its SBox having 3 non-trivial LS. BAKSHEESH, however, uses an SBox with only 1 non-trivial LS, and is a traditional cipher just like GIFT-128.
The SBox requires a low number of AND gates, making BAKSHEESH suitable for side-channel countermeasures (when compared to GIFT-128) and other niche applications. Indeed, our study of the cost of threshold implementation shows that BAKSHEESH offers a few-fold advantage over other lightweight ciphers. The design does not deviate much from its predecessor (GIFT-128), thereby allowing for easy implementation (such as fix-slicing in software). However, BAKSHEESH opts for a full-round key XOR, compared to the half-round key XOR in GIFT.
Thus, when taking everything into account, we show how a cipher construction can benefit from the unique vantage point of using a 1-LS SBox, by combining the state-of-the-art progress in classical cryptanalysis and protection against device-dependent attacks. We thereby create a new paradigm of lightweight ciphers, through adequate deliberation on the design choices, and solidify it with appropriate security analysis and ample implementation results and benchmarks.2023-05-24T12:28:09+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Anubhab BaksiJakub BreierAnupam ChattopadhyayTomáš GerlichSylvain GuilleyNaina GuptaKai HuTakanori IsobeArpan JatiPetr JedlickaHyunjun KimFukang LiuZdeněk MartinásekKosei SakamotoHwajeong SeoRentaro ShibaRitu Ranjan Shrivastwahttps://creativecommons.org/licenses/by-nc-sa/4.0/
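The linear-structure property at the heart of the design is easy to test exhaustively for a 4-bit SBox: (a, b) is a linear structure if S(x ⊕ a) ⊕ S(x) = b for all x. The SBox below is a deliberately trivial affine example for demonstration, not BAKSHEESH's actual SBox.

```python
from itertools import product

def linear_structures(sbox):
    """Non-trivial pairs (a, b) with S(x ^ a) ^ S(x) = b for all 16 inputs x."""
    return [(a, b)
            for a, b in product(range(16), repeat=2)
            if a != 0 and all(sbox[x ^ a] ^ sbox[x] == b for x in range(16))]

# An affine toy SBox S(x) = x ^ 9 has the maximum of 15 non-trivial linear
# structures, since S(x ^ a) ^ S(x) = a for every a.  BAKSHEESH's SBox is
# chosen to have exactly one such pair, and DEFAULT's to have three.
affine_sbox = [x ^ 9 for x in range(16)]
assert len(linear_structures(affine_sbox)) == 15
```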
https://eprint.iacr.org/2023/699Lattice-based, more general anti-leakage model and its application in decentralization2023-05-27T12:50:56+00:00Dai xiaokangJingwei ChenWenyuan WuYong FengFor standard \LWE samples $(\mathbf{A},\mathbf{b = sA + e})$, $\mathbf{A}$ is typically uniform over $\mathbb{Z}_q^{n \times m}$, and under the \LWE assumption, the conditional distribution of $\mathbf{s}$ given $\mathbf{b}$ should be consistent with the prior distribution of $\mathbf{s}$. However, if an adversary chooses $\mathbf{A}$ adaptively, the gap between the two may be larger. In this work, we are mainly interested in quantifying $\tilde{H}_\infty(\mathbf{s}|\mathbf{sA + e})$ when $\mathbf{A}$ is chosen by an adversary. Brakerski and D\"{o}ttling answered the question in one case: they proved that when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_q^n$, it holds that $\tilde{H}_\infty(\mathbf{s}|\mathbf{sA + e}) \varpropto \rho_\sigma(\Lambda_q(\mathbf{A}))$. We prove that the above result still holds for any $d \leq q$ when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_d^n$ or sampled from a discrete Gaussian.
In addition, as an independent result, we also prove the regularity of hash functions mapping into prime-order groups and their Cartesian products.
As an application of the above results, we improve the multi-key fully homomorphic encryption scheme of \cite{TCC:BraHalPol17} and answer the question raised at the end of their work positively: we obtain GSW-type ciphertexts rather than Dual-GSW ones, and the improved scheme has shorter keys and ciphertexts.2023-05-16T09:21:49+00:00https://creativecommons.org/licenses/by/4.0/Dai xiaokangJingwei ChenWenyuan WuYong Fenghttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/620ProtoStar: Generic Efficient Accumulation/Folding for Special Sound Protocols2023-05-26T21:19:29+00:00Benedikt BünzBinyi ChenAccumulation is a simple yet powerful primitive that enables incrementally verifiable computation (IVC) without the need for recursive SNARKs. We provide a generic, efficient accumulation (or folding) scheme for any $(2k-1)$-move special-sound protocol with a verifier that checks $\ell$ degree-$d$ equations. The accumulation verifier only performs $k+2$ elliptic curve multiplications and $k+d+O(1)$ field/hash operations. Using the compiler from BCLMS21 (Crypto '21), this enables building efficient IVC schemes where the recursive circuit only depends on the number of rounds and the verifier degree of the underlying special-sound protocol but not the proof size or the verifier time. We use our generic accumulation compiler to build ProtoStar. ProtoStar is a non-uniform IVC scheme for Plonk that supports high-degree gates and (vector) lookups. The recursive circuit is dominated by $3$ group scalar multiplications and a hash of $d^*$ field elements, where $d^*$ is the degree of the highest gate. The scheme does not require a trusted setup or pairings, and the prover does not need to compute any FFTs. The prover in each accumulation/IVC step is also only logarithmic in the number of supported circuits and independent of the table size in the lookup.2023-05-01T03:53:42+00:00https://creativecommons.org/licenses/by/4.0/Benedikt BünzBinyi Chenhttps://creativecommons.org/licenses/by/4.0/
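The folding idea can be miniaturised for linear claims: two claimed relations A·w₁ = b₁ and A·w₂ = b₂ over a field can be folded with a random challenge r into the single claim A·(w₁ + r·w₂) = b₁ + r·b₂, which the verifier checks once; if either original claim is false, the folded claim is false except with probability about 1/|F|. ProtoStar's contribution is doing this for general special-sound protocols with high-degree checks, which this toy sketch does not capture.

```python
import secrets

P = 2**31 - 1  # toy prime field

def matvec(A, w):
    return [sum(a * x for a, x in zip(row, w)) % P for row in A]

def fold(w1, b1, w2, b2, r):
    """Fold the two claims 'A w_i = b_i' into one via a random challenge r."""
    w = [(x + r * y) % P for x, y in zip(w1, w2)]
    b = [(x + r * y) % P for x, y in zip(b1, b2)]
    return w, b

A = [[1, 2, 3], [4, 5, 6]]
w1, w2 = [7, 8, 9], [1, 0, 2]
b1, b2 = matvec(A, w1), matvec(A, w2)

r = secrets.randbelow(P)
w, b = fold(w1, b1, w2, b2, r)
# One linear check now vouches for both claims (sound up to ~1/P error).
assert matvec(A, w) == b
```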
https://eprint.iacr.org/2023/520Generic Security of the SAFE API and Its Applications2023-05-26T18:23:13+00:00Dmitry KhovratovichMario Marhuenda BeltránBart MenninkWe provide security foundations for SAFE, a recently introduced API framework for sponge-based hash functions tailored to prime-field-based protocols. SAFE aims to provide a robust and foolproof interface; it has been implemented in the Neptune hash framework and some zero-knowledge proof projects, but it currently lacks any security proof.
In this work we identify SAFECore, a versatile variant of the sponge construction underlying SAFE. We prove indifferentiability of SAFECore for all (binary and prime) fields up to around $|\mathbb{F}_p|^{c/2}$ queries, where $\mathbb{F}_p$ is the underlying field and $c$ the capacity, and we apply this security result to various use cases. We show that the SAFE-based protocols of plain hashing, authenticated encryption, verifiable computation, non-interactive proofs, and commitment schemes are secure against a wide class of adversaries, including those dealing with multiple invocations of a sponge in a single application. Our results pave the way to using SAFE with the full taxonomy of hash functions, including SNARK-, lattice-, and x86-friendly hashes.
2023-04-11T13:27:26+00:00https://creativecommons.org/licenses/by/4.0/Dmitry KhovratovichMario Marhuenda BeltránBart Menninkhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/550New Baselines for Local Pseudorandom Number Generators by Field Extensions2023-05-26T18:20:38+00:00Akin ÜnalWe revisit recent techniques and results on the cryptanalysis of local pseudorandom number generators (PRGs). In doing so, we achieve a new attack on PRGs whose time complexity only depends on the algebraic degree of the PRG. Concretely, for PRGs $F : \{0,1\}^n\rightarrow \{0,1\}^{n^{1+e}}$, we give an algebraic algorithm that distinguishes between random points and image points of $F$, whose time complexity is bounded by
\[\exp(O(\log(n)^{\deg F /(\deg F - 1)} \cdot n^{1-e/(\deg F -1)} ))\]
and whose advantage is at least $1 - o(1)$ in the worst case.
To the best of the author's knowledge, this attack outperforms current attacks on the pseudorandomness of local random functions with guaranteed noticeable advantage and gives a new baseline algorithm for local PRGs. Furthermore, this is the first subexponential attack that is applicable to polynomial PRGs of constant degree over fields of any size with a guaranteed noticeable advantage.
We extend this distinguishing attack further to achieve a search algorithm that can invert a uniformly random constant-degree map $F : \{0,1\}^n\rightarrow \{0,1\}^{n^{1+e}}$ with high advantage in the average case. This algorithm has the same runtime complexity as the distinguishing algorithm.2023-04-18T15:58:27+00:00https://creativecommons.org/licenses/by/4.0/Akin Ünalhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/436SQISignHD: New Dimensions in Cryptography2023-05-26T15:46:38+00:00Pierrick DartoisAntonin LerouxDamien RobertBenjamin WesolowskiWe introduce SQISignHD, a new post-quantum digital signature scheme inspired by SQISign.
SQISignHD exploits the recent algorithmic breakthrough underlying the attack on SIDH, which allows one to efficiently represent isogenies of arbitrary degrees as components of a higher-dimensional isogeny. SQISignHD overcomes the main drawbacks of SQISign. First, it scales well to high security levels, since the public parameters for SQISignHD are easy to generate: the characteristic of the underlying field need only be of the form $2^{f}3^{f'}-1$. Second, the signing procedure is simpler and more efficient. Third, the scheme is easier to analyse, allowing for a much more compelling security reduction. Finally, the signature sizes are even more compact than those of (the already record-breaking) SQISign, with compressed signatures as small as 116 bytes for the post-quantum NIST-1 level of security.
These advantages may come at the expense of the verification, which now requires the computation of an isogeny in dimension $4$, a task whose optimised cost is still uncertain, as it has so far received very little attention.2023-03-25T02:15:44+00:00https://creativecommons.org/licenses/by/4.0/Pierrick DartoisAntonin LerouxDamien RobertBenjamin Wesolowskihttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/1407Threshold Linear Secret Sharing to the Rescue of MPC-in-the-Head2023-05-26T14:39:58+00:00Thibauld FeneuilMatthieu RivainThe MPC-in-the-Head paradigm is a popular framework to build zero-knowledge proof systems using techniques from secure multi-party computation (MPC). While this paradigm is not restricted to a particular secret sharing scheme, all the efficient instantiations for small circuits proposed so far rely on additive secret sharing.
In this work, we show how applying a threshold linear secret sharing scheme (threshold LSSS) can be beneficial to the MPC-in-the-Head paradigm. For a general passively-secure MPC protocol model capturing most of the existing MPCitH schemes, we show that our approach improves the soundness of the underlying proof system from $1/N$ down to $1/\binom{N}{\ell}$, where $N$ is the number of parties and $\ell$ is the privacy threshold of the sharing scheme. While very general, our technique is limited to a number of parties $N \leq |\mathbb{F}|$, where $\mathbb{F}$ is the field underlying the statement, because of the MDS conjecture.
Applying our approach with a low-threshold LSSS also boosts the performance of the proof system by making the MPC emulation cost independent of $N$ for both the prover and the verifier. The gain is particularly significant for the verification time which becomes logarithmic in $N$ (while the prover still has to generate and commit the $N$ input shares). We further generalize and improve our framework: we show how homomorphic commitments can get rid of the linear complexity of the prover, we generalize our result to any quasi-threshold LSSS, and we describe an efficient batching technique relying on Shamir's secret sharing.
We finally apply our techniques to specific use cases. We first propose a variant of the recent SDitH signature scheme achieving interesting new trade-offs. In particular, for a signature size of 10 KB, we obtain a verification time lower than $0.5$ ms, which is competitive with SPHINCS+, while achieving much faster signing. We further apply our batching technique to two different contexts: batched SDitH proofs and batched proofs for general arithmetic circuits based on the Limbo proof system. In both cases, we obtain an amortized proof size lower than $1/10$ of the baseline scheme when batching a few dozen statements, while the amortized performances are also significantly improved.2022-10-17T11:41:11+00:00https://creativecommons.org/licenses/by/4.0/Thibauld FeneuilMatthieu Rivainhttps://creativecommons.org/licenses/by/4.0/
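The claimed soundness improvement is easy to quantify: with $N$ parties and privacy threshold $\ell$, the cheating probability per repetition drops from $1/N$ to $1/\binom{N}{\ell}$, so far fewer parallel repetitions reach a target soundness level. The parameters below are purely illustrative, not the paper's.

```python
from math import comb, log2, ceil

N, ELL, TARGET = 256, 3, 128          # illustrative parameters

def repetitions(eps: float) -> int:
    """Parallel repetitions needed to push soundness error below 2^-TARGET."""
    return ceil(TARGET / -log2(eps))

print(repetitions(1 / N))             # 16 repetitions with additive sharing
print(repetitions(1 / comb(N, ELL)))  # 6 repetitions with a threshold LSSS
```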
https://eprint.iacr.org/2023/670Behemoth: transparent polynomial commitment scheme with constant opening proof size and verifier time2023-05-26T14:06:44+00:00István András SeresPéter BurcsiPolynomial commitment schemes are fundamental building blocks in numerous cryptographic protocols such as verifiable secret sharing, zero-knowledge succinct non-interactive arguments, and many more. The most efficient polynomial commitment schemes rely on a trusted setup, which is undesirable in trust-minimized applications, e.g., cryptocurrencies. However, transparent polynomial commitment schemes are inefficient (polylogarithmic opening proofs and/or verification time) compared to their trusted counterparts. It has been an open problem to devise a transparent, succinct polynomial commitment scheme or to prove an impossibility result in the transparent setting. In this work, for the first time, we create a transparent, constant-size polynomial commitment scheme called Behemoth with constant-size opening proofs and a constant-time verifier. The downside of Behemoth is that it employs a prover cubic in the degree of the committed polynomial. We prove the security of our scheme in the generic group model and discuss parameter settings in which it remains practical even for the prover.
2023-05-11T14:11:31+00:00https://creativecommons.org/licenses/by/4.0/István András SeresPéter Burcsihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/030Earn While You Reveal: Private Set Intersection that Rewards Participants2023-05-26T14:02:57+00:00Aydin AbadiSteven MurdochIn Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that those parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so would cost them their privacy. In this work, for the first time, we propose a multi-party PSI, called “Anesidora”, that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric-key primitives, and its computation and communication complexities are linear in the number of parties and the set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.2023-01-10T10:10:55+00:00https://creativecommons.org/licenses/by/4.0/Aydin AbadiSteven Murdochhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1201Leakage Certification Made Simple2023-05-26T11:41:08+00:00Aakash ChowdhuryArnab RoyCarlo BrunettaElisabeth OswaldSide channel evaluations benefit from sound characterisations of adversarial leakage models, which are the determining factor for attack success. Two questions are of interest: can we estimate a quantity that captures the ideal adversary (who knows the distributions that are involved in an attack), and can we judge how good one (or several) given leakage models are in relation to the ideal adversary?
Existing work has led to a proliferation of custom quantities (the hypothetical information HI, perceived information PI, training information TI, and learnable information LI). These quantities all provide only (loose) bounds for the ideal adversary, are slow to estimate, have convergence guarantees only for discrete distributions, and are biased.
Our work shows that none of these quantities is necessary: it is possible to characterise the ideal adversary precisely via the mutual information between the device inputs and the observed side channel traces. We achieve this result by a careful characterisation of the distributions in play. We also put forward a mutual information based approach to leakage certification, with a consistent estimator, and demonstrate via a range of case studies that our approach is simpler, faster, and correct.2022-09-12T10:14:05+00:00https://creativecommons.org/licenses/by-nc/4.0/Aakash ChowdhuryArnab RoyCarlo BrunettaElisabeth Oswaldhttps://creativecommons.org/licenses/by-nc/4.0/
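The advocated quantity is directly computable in simple discrete settings: below, the mutual information between a uniform 4-bit secret and its Hamming weight (a noiseless toy leakage model, assumed here for illustration) is evaluated exactly from the joint distribution. Real traces require density estimation, which is where a consistent estimator comes in.

```python
from collections import Counter
from math import log2

# Exact mutual information I(S; L) between a uniform 4-bit secret S and the
# noiseless Hamming-weight leakage L = HW(S) -- a standard toy leakage model.
joint = Counter((s, bin(s).count("1")) for s in range(16))   # p(s, l) = 1/16 each
counts_l = Counter(l for _, l in joint)                      # p(l) = count / 16

# I(S; L) = sum over (s, l) of p(s,l) * log2( p(s,l) / (p(s) p(l)) ),
# which here simplifies to sum of (1/16) * log2(16 / count_l).
mi = sum((1 / 16) * log2(16 / counts_l[l]) for _, l in joint)
print(f"I(S; L) = {mi:.3f} bits")   # about 2.031 bits
```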
https://eprint.iacr.org/2023/247A New Sieving-Style Information-Set Decoding Algorithm2023-05-26T11:31:54+00:00Qian GuoThomas JohanssonVu NguyenThe problem of decoding random codes is a fundamental problem for code-based cryptography, including recent code-based candidates in the NIST post-quantum standardization process. In this paper, we present a novel sieving-style information-set decoding (ISD) algorithm, addressing the task of solving the syndrome decoding problem.
Our approach involves maintaining a list of weight-$2p$ solution vectors to a partial syndrome decoding problem and then creating new vectors by identifying pairs of vectors that collide in $p$ positions. By gradually increasing the parity-check condition by one and repeating this process iteratively, we find the final solution(s). We show that our novel algorithm performs better than other ISDs in the memory-restricted scenario when applied to McEliece. Notably, in the case of problems with very low relative weight, it seems to significantly outperform all previous algorithms. In particular, for the code-based candidates BIKE and HQC, the algorithm has lower bit complexity than the previous best results.2023-02-21T19:11:17+00:00https://creativecommons.org/licenses/by/4.0/Qian GuoThomas JohanssonVu Nguyenhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/489Shorter and Faster Identity-Based Signatures with Tight Security in the (Q)ROM from Lattices2023-05-26T10:53:57+00:00Eric SageloliPierre PébereauPierrick MéauxCéline ChevalierWe provide identity-based signature (IBS) schemes with tight security against adaptive adversaries, in the (classical or quantum) random oracle model (ROM or QROM), in both unstructured and structured lattices, based on the SIS or RSIS assumption. These signatures are short (of size independent of the message length).
Our schemes build upon a work from Pan and Wagner (PQCrypto’21) and improve on it in several ways. First, we prove their transformation from non-adaptive to adaptive IBS in the QROM. Then, we simplify the parameters used and give concrete values. Finally, we simplify the signature scheme by using a non-homogeneous relation, which helps us reduce the size of the signature and get rid of one costly trapdoor delegation.
On the whole, we get better security bounds, shorter signatures and faster algorithms.2023-04-04T13:17:54+00:00https://creativecommons.org/licenses/by/4.0/Eric SageloliPierre PébereauPierrick MéauxCéline Chevalierhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/1313Bounded Surjective Quadratic Functions over $\mathbb F_p^n$ for MPC-/ZK-/FHE-Friendly Symmetric Primitives2023-05-26T10:20:47+00:00Lorenzo GrassiMotivated by new applications such as secure Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Zero-Knowledge proofs (ZK), many MPC-, FHE- and ZK-friendly symmetric-key primitives that minimize the number of multiplications over $\mathbb F_p$ for a large prime $p$ have been recently proposed in the literature. These symmetric primitives are usually defined via invertible functions, including (i) Feistel and Lai-Massey schemes and (ii) SPN constructions instantiated with invertible non-linear S-Boxes. However, the ``invertibility'' property is actually never required in any of the mentioned applications.
In this paper, we discuss the possibility of setting up MPC-/FHE-/ZK-friendly symmetric primitives instantiated with non-invertible bounded surjective functions. In contrast to one-to-one functions, any output of an $l$-bounded surjective function admits at most $l$ pre-images. The simplest example is the square map $x\mapsto x^2$ over $\mathbb F_p$ for a prime $p\ge 3$, which is (obviously) $2$-bounded surjective.
When working over $\mathbb F_p^n$ for $n\ge 2$, we set up bounded surjective functions by re-considering the recent results proposed by Grassi, Onofri, Pedicini and Sozzi at FSE/ToSC 2022 as starting points. Given a quadratic local map $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2,3\}$, they proved that the shift-invariant non-linear function over $\mathbb F_p^n$ defined as $\mathcal S_F(x_0, x_1, \ldots, x_{n-1}) = y_0\| y_1\| \ldots \| y_{n-1}$ where $y_i := F(x_i, x_{i+1})$ is never invertible for any $n\ge 2\cdot m-1$. Here, we prove that
- the quadratic function $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2\}$ that minimizes the probability of having a collision for $\mathcal S_F$ over $\mathbb F_p^n$ is of the form $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent);
- the function $\mathcal S_F$ over $\mathbb F_p^n$ defined as before via $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent) is $2^n$-bounded surjective.
As concrete applications, we propose modified versions of the MPC-friendly schemes MiMC, HadesMiMC, and (partially) Hydra, and of the FHE-friendly schemes Masta, Pasta, and Rubato. By instantiating them with the bounded surjective quadratic functions proposed in this paper, we are able to improve the security and/or the performance in the target applications/protocols.
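To make the construction above concrete, here is a minimal sketch (mine, not the paper's code; it assumes cyclic indexing $y_i = F(x_i, x_{i+1 \bmod n})$ and the illustrative parameters $p=5$, $n=3$) that enumerates $\mathcal S_F$ for $F(x_0, x_1) = x_0^2 + x_1$ and reports the maximal pre-image count:

    # Toy check: enumerate the shift-invariant map S_F over F_p^n with
    # F(a, b) = a^2 + b and count pre-images per output value.
    from collections import Counter
    from itertools import product

    p, n = 5, 3  # small illustrative parameters

    def S_F(x):
        # y_i = F(x_i, x_{i+1}), indices taken cyclically
        return tuple((x[i] ** 2 + x[(i + 1) % n]) % p for i in range(n))

    counts = Counter(S_F(x) for x in product(range(p), repeat=n))
    print(max(counts.values()))  # expected to be at most 2^n = 8 here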
2022-10-04T07:33:44+00:00https://creativecommons.org/licenses/by-sa/4.0/Lorenzo Grassihttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2023/754Batch Proofs are Statistically Hiding2023-05-26T09:18:14+00:00Nir BitanskyChethan KamathOmer PanethRon RothblumPrashant Nalini VasudevanBatch proofs are proof systems that convince a verifier that $x_1,\dots, x_t \in L$, for some $NP$ language $L$, with communication that is much shorter than sending the $t$ witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient), interactive batch proofs are known for $UP$, the class of unique witness $NP$ languages. In the case of computational soundness (aka arguments, where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of $NP$, assuming standard cryptographic assumptions. We study the necessary conditions for the existence of batch proofs in these two settings. Our main results are as follows.
1. Statistical Soundness: the existence of a statistically-sound batch proof for $L$ implies that $L$ has a statistically witness indistinguishable ($SWI$) proof, with inverse polynomial $SWI$ error, and a non-uniform honest prover. The implication is unconditional for public-coin protocols and relies on one-way functions in the private-coin case.
This poses a barrier for achieving batch proofs beyond $UP$ (where witness indistinguishability is trivial). In particular, assuming that $NP$ does not have $SWI$ proofs, batch proofs for all of $NP$ do not exist. This motivates further study of the complexity class $SWI$, which, in contrast to the related class $SZK$, has been largely left unexplored.
2. Computational Soundness: the existence of batch arguments ($BARG$s) for $NP$, together with one-way functions, implies the existence of statistical zero-knowledge ($SZK$) arguments for $NP$ with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and a non-uniform honest prover.
Thus, constant-round interactive $BARG$s from one-way functions would yield constant-round $SZK$ arguments from one-way functions. This would be surprising as $SZK$ arguments are currently only known assuming constant-round statistically-hiding commitments (which in turn are unlikely to follow from one-way functions).
3. Non-interactive: the existence of non-interactive $BARG$s for $NP$, together with one-way functions, implies non-interactive statistical zero-knowledge arguments ($NISZKA$) for $NP$, with negligible soundness error, inverse polynomial zero-knowledge error, and a non-uniform honest prover. Assuming also lossy public-key encryption, the statistical zero-knowledge error can be made negligible. We further show that $BARG$s satisfying a notion of honest somewhere extractability imply lossy public-key encryption.
All of our results stem from a common framework showing how to transform a batch protocol for a language $L$ into an $SWI$ protocol for $L$.2023-05-25T04:47:15+00:00https://creativecommons.org/publicdomain/zero/1.0/Nir BitanskyChethan KamathOmer PanethRon RothblumPrashant Nalini Vasudevanhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2022/1662Revisiting cycles of pairing-friendly elliptic curves2023-05-26T08:56:58+00:00Marta Bellés-MuñozJorge Jiménez UrrozJavier SilvaA recent area of interest in cryptography is recursive composition of proof systems. One of the approaches to make recursive composition efficient involves cycles of pairing-friendly elliptic curves of prime order. However, known constructions have very low embedding degrees. This entails large parameter sizes, which makes the overall system inefficient.
In this paper, we explore $2$-cycles composed of curves from families parameterized by polynomials, and show that such cycles do not exist unless a strong condition holds. As a consequence, we prove that no $2$-cycles can arise from the known families, except for those cycles already known. Additionally, we show some general properties about cycles, and provide a detailed computation on the density of pairing-friendly cycles among all cycles.
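For readers new to the notion, a minimal sketch of the 2-cycle condition itself (my illustration, not the paper's method; it assumes sympy is available for primality testing):

    # A 2-cycle of prime-order curves: E1 over F_p has exactly q points and
    # E2 over F_q has exactly p points, with p and q both prime.
    from sympy import isprime  # assumed dependency

    def is_two_cycle(p: int, q: int, order_e1: int, order_e2: int) -> bool:
        # order_e1 = #E1(F_p), order_e2 = #E2(F_q)
        return isprime(p) and isprime(q) and order_e1 == q and order_e2 == p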
2022-11-29T15:25:10+00:00https://creativecommons.org/licenses/by/4.0/Marta Bellés-MuñozJorge Jiménez UrrozJavier Silvahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/705Deniable Cryptosystems: Simpler Constructions and Achieving Leakage Resilience2023-05-26T08:47:30+00:00Zhiyuan AnHaibo TianChao ChenFangguo ZhangDeniable encryption (Canetti et al. CRYPTO ’97) is an intriguing primitive, which provides a security guarantee against coercion by allowing a sender to convincingly open the ciphertext into a fake message. Despite the notable result by Sahai and Waters (STOC ’14) and other efforts in functionality extension, all the deniable public key encryption (DPKE) schemes suffer from intolerable overhead due to the heavy building blocks, e.g., translucent sets or indistinguishability obfuscation. Besides, none of them considers the possible damage from leakage in the real world, obstructing these protocols from practical use.
To fill the gap, in this work we first present a simple and generic approach to sender-DPKE from ciphertext-simulatable encryption, which can be instantiated with nearly all the common PKE schemes. The core of this design is a newly designed framework for flipping a bit-string that offers inverse polynomial distinguishability. Then we theoretically expound and experimentally show how classic side-channel attacks (timing or simple power attacks) can help the coercer to break deniability, along with feasible countermeasures.2023-05-17T08:34:28+00:00https://creativecommons.org/licenses/by/4.0/Zhiyuan AnHaibo TianChao ChenFangguo Zhanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1296Efficient Asymmetric Threshold ECDSA for MPC-based Cold Storage2023-05-26T08:06:42+00:00Constantin BlokhNikolaos MakriyannisUdi PeledMotivated by applications to cold-storage solutions for ECDSA-based cryptocurrencies, we present a new threshold ECDSA protocol between $n$ ``online'' parties and a single ``offline'' (aka.~cold) party. The primary objective of this protocol is to minimize the exposure of the offline party in terms of connected time and bandwidth. This is achieved through a unique asymmetric signing phase, in which the majority of computation, communication, and interaction is handled by the online parties.
Our protocol supports a very efficient non-interactive pre-signing stage; the parties calculate preprocessed data for future signatures where each party (offline or online) sends a single independently-generated short message per future signature. Then, to calculate the signature, the offline party simply receives a single short message (approx.~300B) and outputs the signature. All previous ECDSA protocols either have high exposure for all parties, or rely on non-standard coding assumptions. (We assume strong RSA, DCR, DDH and enhanced unforgeability of ECDSA.)
To achieve the above, we present a new batching technique for proving in zero-knowledge that the plaintexts of practically any number of Paillier ciphertexts all lie in a given range. The cost of the resulting batch proof is very close to that of the non-batch proof for a single ciphertext, and the technique is applicable to arbitrary Schnorr-style protocols.2022-09-29T10:08:58+00:00https://creativecommons.org/licenses/by/4.0/Constantin BlokhNikolaos MakriyannisUdi Peledhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/851Amun: Securing E-Voting Against Over-the-Shoulder Coercion2023-05-25T15:31:54+00:00Riccardo LongoChiara SpadaforaIn an election where each voter may express $P$ preferences among $M$ possible choices, the Amun protocol allows one to secure vote casting against over-the-shoulder adversaries, retaining privacy, fairness, end-to-end verifiability, and correctness.
Before the election, each voter receives a ballot containing valid and decoy tokens: only valid tokens contribute to the final tally, but they remain indistinguishable from the decoys.
Since the voter is the only one who knows which tokens are valid (without being able to prove it to a coercer), over-the-shoulder attacks are thwarted.
We prove the security of the construction under the standard Decisional Diffie-Hellman assumption in the random oracle model.
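As a toy rendering of the decoy-token idea (my illustration only; all names are mine, and this is not the Amun construction, since it omits the cryptography that makes validity unprovable to a coercer):

    import secrets

    def make_ballot(num_valid: int, num_decoy: int):
        # valid and decoy tokens are drawn from the same distribution, so an
        # over-the-shoulder observer cannot tell them apart
        valid = {secrets.token_hex(16) for _ in range(num_valid)}
        decoys = {secrets.token_hex(16) for _ in range(num_decoy)}
        ballot = sorted(valid | decoys)  # what the ballot shows
        return ballot, valid             # 'valid' is voter-only knowledge

    def tally(cast_tokens, valid_registry):
        # only valid tokens contribute to the final count
        return sum(1 for t in cast_tokens if t in valid_registry)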
2021-06-22T14:37:01+00:00https://creativecommons.org/licenses/by/4.0/Riccardo LongoChiara Spadaforahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/736Private Eyes: Zero-Leakage Iris Searchable Encryption2023-05-25T14:38:00+00:00Julie HaChloe CachetLuke DemarestSohaib AhmadBenjamin FullerBiometric databases are being deployed with few cryptographic protections. Because of the nature of biometrics, privacy breaches affect users for their entire life.
This work introduces Private Eyes, the first zero-leakage biometric database. The only leakage of the system is unavoidable: 1) the log of the dataset size and 2) the fact that a query occurred. Private Eyes is built from symmetric searchable encryption. Proximity queries are the required functionality: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric.
Private Eyes combines locality-sensitive hashing (LSH) (Indyk and Motwani, STOC 1998) and encrypted maps. One searches for the disjunction of the LSHs of a noisy biometric reading. The underlying encrypted map needs to efficiently answer disjunction queries.
We focus on the iris biometric. Iris biometric data requires a large number of LSHs, approximately 1000. The most relevant prior work is in zero-leakage k-nearest-neighbor search (Boldyreva and Tang, PoPETS 2021), but that work is designed for a small number of LSHs.
Our main cryptographic tool is a zero-leakage disjunctive map designed for the setting when most clauses do not match any records. For the iris, on average at most 6% of LSHs match any stored value.
To aid in evaluation, we produce a synthetic iris generation tool to evaluate sizes beyond available iris datasets. This generation tool is a simple generative adversarial network. Accurate statistics are crucial to optimizing the cryptographic primitives, so this tool may be of independent interest.
Our scheme is implemented and open-sourced. For the largest tested parameters of 5000 stored irises, search requires 26 rounds of communication and 26 minutes of single-threaded computation.
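To illustrate the query pattern, here is a plaintext sketch under my own assumptions (bit-sampling LSH, a toy in-memory map, made-up parameters); the real scheme runs this over an encrypted map with roughly 1000 LSHs:

    import random

    random.seed(1)
    BITS, NUM_LSH, BITS_PER_LSH = 256, 16, 12  # illustrative, not the paper's parameters
    MASKS = [random.sample(range(BITS), BITS_PER_LSH) for _ in range(NUM_LSH)]

    def lshes(code):
        # bit-sampling LSH: project the iris code onto fixed random bit positions
        return {(i, tuple(code[j] for j in MASKS[i])) for i in range(NUM_LSH)}

    def build_index(records):
        index = {}
        for rid, code in records.items():
            for key in lshes(code):
                index.setdefault(key, set()).add(rid)
        return index

    def search(index, noisy_code):
        # disjunction: return every record matching at least one LSH clause
        return set().union(*(index.get(key, set()) for key in lshes(noisy_code)))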
2023-05-22T17:17:10+00:00https://creativecommons.org/licenses/by/4.0/Julie HaChloe CachetLuke DemarestSohaib AhmadBenjamin Fullerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/761A Quantum Analysis of Nested Search Problems with Applications in Cryptanalysis2023-05-25T13:34:17+00:00André SchrottenloherMarc StevensIn this paper we study search problems that arise very often in cryptanalysis: nested search problems, where each search layer has known degrees of freedom and/or constraints. A generic quantum solution for such problems consists of nesting Grover's quantum search algorithm or amplitude amplification (QAA) by Brassard et al., obtaining up to a square-root speedup on classical algorithms. However, the analysis of nested Grover or QAA is complex and introduces technicalities that in previous works are handled in a case-by-case manner. Moreover, straightforward nesting introduces an overhead factor of $(\pi/2)^\ell$ in the complexity (for $\ell$ layers).
In this paper, we aim to remedy both these issues and introduce a generic framework and tools to transform a classical nested search into a quantum procedure. It improves the state-of-the-art in three ways:
1) our framework results in quantum procedures that are significantly simpler to describe and analyze;
2) it reduces the overhead factor from $(\pi/2)^\ell$ to $\sqrt{\ell}$;
3) it is simpler to apply and optimize, without needing manual quantum analysis.
We give a generic complexity formula that improves the state-of-the-art both for concrete instantiations and in the asymptotic setting. For concrete instances, we show that numerical optimizations enable further improvements over this formula, which results in a further decrease in the gap to an exact quadratic speedup.
We demonstrate our framework with applications in cryptanalysis and improve the complexity of quantum attacks on reduced-round AES.
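A quick numeric illustration of the improvement in the nesting overhead (mine, computed directly from the two factors stated above):

    import math

    # naive nesting pays (pi/2)^l; the framework above pays only sqrt(l)
    for layers in (2, 3, 5, 8):
        naive = (math.pi / 2) ** layers
        improved = math.sqrt(layers)
        print(f"l={layers}: (pi/2)^l = {naive:7.2f}, sqrt(l) = {improved:.2f}")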
2022-06-14T11:50:02+00:00https://creativecommons.org/licenses/by/4.0/André SchrottenloherMarc Stevenshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/756SDitH in the QROM2023-05-25T12:34:30+00:00Carlos Aguilar-MelchorAndreas HülsingDavid JosephChristian MajenzEyal RonenDongze YueThe MPC in the Head (MPCitH) paradigm has recently led to significant improvements for signatures in the code-based setting. In this paper we consider some modifications to a recent twist of MPCitH, called Hypercube-MPCitH, that in the code-based setting provides the currently best known signature sizes. By compressing the Hypercube-MPCitH five round code-based identification into three rounds we obtain two main benefits. On the one hand, it allows us to further
develop recent techniques to provide a tight security proof in the quantum-accessible random oracle model (QROM), avoiding the catastrophic reduction losses incurred using generic QROM-results
for Fiat-Shamir. On the other hand, we can reduce the already low-cost online part of the signature to just a hash and some serialization. In addition, we propose the introduction of proof-of-work techniques to allow for a reduction in signature size. On the technical side, we develop generalizations of several QROM proof techniques and introduce a variant of the recently proposed extractable QROM.2023-05-25T12:34:30+00:00https://creativecommons.org/licenses/by/4.0/Carlos Aguilar-MelchorAndreas HülsingDavid JosephChristian MajenzEyal RonenDongze Yuehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/755The security of Kyber's FO-transform2023-05-25T12:05:57+00:00Manuel BarbosaAndreas HülsingIn this short note we give another direct proof for the variant of the FO transform used by Kyber in the QROM. At PKC'23 Maram & Xagawa gave the first direct proof which does not require the indirection via FO with explicit rejection, thereby avoiding either a non-tight bound, or the necessity to analyze the failure probability in a new setting. However, on the downside their proof produces a bound that incurs an additive collision bound term. We explore a different approach for a direct proof, which results in a simpler argument closer to prior proofs, but a slightly worse bound.2023-05-25T12:05:57+00:00https://creativecommons.org/licenses/by/4.0/Manuel BarbosaAndreas Hülsinghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/741The Referendum Problem in Anonymous Voting for Decentralized Autonomous Organizations2023-05-25T08:08:46+00:00Artem GrigorVincenzo IovinoGiuseppe ViscontiA natural approach to anonymous voting over Ethereum assumes that there is an off-chain aggregator that performs the following task. The aggregator receives valid signatures of YES/NO preferences from eligible voters and uses them to compute a zk-SNARK proof of the fact that the majority of voters have cast a preference for YES or NO. Then, the aggregator sends the zk-SNARK proof to the smart contract; the smart contract verifies the proof and can trigger an action (e.g., a transfer of funds). As the zk-SNARK proof guarantees anonymity, the privacy of the voters is preserved against attackers not colluding with the aggregator. Moreover, if the SNARK proof verification is efficient, the GAS cost will be independent of the number of participating voters and signatures submitted by voters to the aggregator.
In this paper we show that this naive approach to running referenda over Ethereum can incur severe security problems. We propose both mitigations and hardness results for achieving voting procedures in which the proofs submitted on-chain are either ZK or succinct.
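The naive flow just described, as a hypothetical interface sketch (all names are mine, not from the paper or any Ethereum tooling):

    from typing import Protocol, Sequence, Tuple

    class Aggregator(Protocol):
        def aggregate(self, signed_votes: Sequence[Tuple[bytes, bool]]) -> Tuple[bool, bytes]:
            """Off-chain: turn signed YES/NO ballots into (majority_is_yes, zk_snark_proof)."""

    class VotingContract(Protocol):
        def submit(self, majority_is_yes: bool, proof: bytes) -> None:
            """On-chain: verify the succinct proof and trigger the action if it checks out."""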
2023-05-23T15:26:44+00:00https://creativecommons.org/licenses/by-nc/4.0/Artem GrigorVincenzo IovinoGiuseppe Viscontihttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2023/323Poseidon2: A Faster Version of the Poseidon Hash Function2023-05-25T07:26:59+00:00Lorenzo GrassiDmitry KhovratovichMarkus SchofneggerZero-knowledge proof systems for computational integrity have seen a rise in popularity in the last couple of years. One of the results of this development is the ongoing effort in designing so-called arithmetization-friendly hash functions in order to make these proofs more efficient. One of these new hash functions, Poseidon, is extensively used in this context, also thanks to being one of the first constructions tailored towards this use case. Many of the design principles of Poseidon have proven to be efficient and were later used in other primitives, yet parts of the construction have shown to be expensive in real-world scenarios.
In this paper, we propose an optimized version of Poseidon, called Poseidon2. The two versions differ in two crucial points. First, Poseidon is a sponge hash function, while Poseidon2 can be either a sponge or a compression function depending on the use case. Second, Poseidon2 is instantiated with new linear layers that are more efficient than Poseidon's. These changes allow us to decrease the number of multiplications in the linear layer by up to 90% and the number of constraints in Plonk circuits by up to 70%. This makes Poseidon2 the currently fastest arithmetization-oriented hash function without lookups.
Besides that, we address a recently proposed algebraic attack and propose a simple modification that makes both Poseidon and Poseidon2 secure against this approach.
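As a generic illustration of why structured linear layers save multiplications (a sketch under my own simplification, not the Poseidon2 matrix specification): multiplying a state of size $t$ by a dense $t\times t$ matrix costs $t^2$ field multiplications, while a matrix of the form ``all-ones plus diagonal'' needs only $t$:

    # apply (J + D) * x over F_p, where J is the all-ones matrix and D = diag(d):
    # (J x)_i = sum(x) for every i, so one shared sum replaces t^2 products
    def apply_j_plus_d(state, diag, p):
        s = sum(state) % p
        return [(s + d * x) % p for d, x in zip(diag, state)]  # t multiplications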
2023-03-04T13:00:41+00:00https://creativecommons.org/licenses/by/4.0/Lorenzo GrassiDmitry KhovratovichMarkus Schofneggerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/180Asymmetric Trapdoor Pseudorandom Generators: Definitions, Constructions, and Applications to Homomorphic Signatures with Shorter Public Keys2023-05-25T07:25:14+00:00Jinpeng HouYansong GaoAnmin FuJie ChenXiaofeng ChenYuqing ZhangWilly SusiloJosef PieprzykWe introduce a new primitive called the asymmetric trapdoor pseudorandom generator (ATPRG), which belongs to pseudorandom generators with two additional trapdoors (a public trapdoor and a secret trapdoor) or backdoor pseudorandom generators with an additional trapdoor (a secret trapdoor). Specifically, ATPRG can only generate public pseudorandom numbers $pr_1,\dots,pr_N$ for the users having no knowledge of the public trapdoor and the secret trapdoor; so this function is the same as pseudorandom generators. However, the users having the public trapdoor can use any public pseudorandom number $pr_i$ to recover the whole $pr$ sequence; so this function is the same as backdoor pseudorandom generators. Further, the users having the secret trapdoor can use the $pr$ sequence to generate a sequence $sr_1,\dots,sr_N$ of secret pseudorandom numbers. ATPRG can help design more space-efficient protocols where data/input/message should respect a predefined (unchangeable) order to be correctly processed in a computation or malleable cryptographic system.
As for applications of ATPRG, we construct the first homomorphic signature scheme (in the standard model) whose public key size is only $O(T)$, independent of the dataset size. As a comparison, the shortest existing public key has size $O(\sqrt{N}+\sqrt{T})$, proposed by Catalano et al. (CRYPTO'15), where $N$ is the dataset size and $T$ is the dimension of the message. In other words, we provide the first homomorphic signature scheme with $O(1)$-sized public keys for one-dimensional messages.
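The three access levels described above, as a hypothetical Python interface (all names are mine; a sketch of the functionality, not a construction):

    from typing import Protocol, Sequence

    class ATPRG(Protocol):
        def generate(self, seed: bytes, n: int) -> Sequence[bytes]:
            """Anyone: output public pseudorandom numbers pr_1, ..., pr_N."""

        def recover(self, public_trapdoor: bytes, pr_i: bytes) -> Sequence[bytes]:
            """Public trapdoor: reconstruct the whole pr sequence from any single pr_i."""

        def derive_secret(self, secret_trapdoor: bytes, pr: Sequence[bytes]) -> Sequence[bytes]:
            """Secret trapdoor: derive the secret sequence sr_1, ..., sr_N from pr."""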
2023-02-13T09:58:25+00:00https://creativecommons.org/licenses/by/4.0/Jinpeng HouYansong GaoAnmin FuJie ChenXiaofeng ChenYuqing ZhangWilly SusiloJosef Pieprzykhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/753A Faster Software Implementation of SQISign2023-05-25T07:15:29+00:00Kaizhan LinWeize WangZheng XuChang-An ZhaoIsogeny-based cryptography is famous for its short key size. As one of the most compact digital signatures, SQISign (Short Quaternion and Isogeny Signature) is attractive among post-quantum cryptography, but it is inefficient compared to other post-quantum competitors because of complicated procedures in ideal to isogeny translation, which is the efficiency bottleneck of the signing phase.
In this paper, we recall the current implementation of SQISign and
mainly discuss how to improve the execution of ideal to isogeny translation in SQISign. To be precise, we modify the SigningKLPT algorithm to accelerate the generation of the ideal $I_\sigma$. In addition, we explore how to save one of the two elliptic curve discrete logarithms and compute the remainder correctly and efficiently with the help of the reduced Tate pairing. We speed up other procedures in ideal to isogeny translation with various techniques as well. It should be noted that our improvements also benefit the performance of key generation and verification in SQISign. In particular, in the instantiation with p3923 the improvements lead to a speedup of 8.82%, 8.50% and 18.94% for key generation, signature and verification, respectively.2023-05-25T01:59:32+00:00https://creativecommons.org/licenses/by/4.0/Kaizhan LinWeize WangZheng XuChang-An Zhaohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/733Breaking the quadratic barrier: Quantum cryptanalysis of Milenage, telecommunications’ cryptographic backbone2023-05-25T05:16:11+00:00Vincent UlitzschJean-Pierre SeifertThe potential advent of large-scale quantum computers in the near future poses a threat to contemporary cryptography.
One ubiquitous use of cryptography today is in cellular networks.
The cryptography of cellular networks is centered around seven secret-key algorithms $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$, aggregated into an authentication and key agreement algorithm set.
Still, to the best of our knowledge, these secret key algorithms have not yet been subject to quantum cryptanalysis. Instead, many quantum security considerations for telecommunication networks argue that the threat posed by quantum computers is restricted to public-key cryptography. However, various recent works have presented quantum attacks on secret key cryptography that exploit quantum period finding to achieve more than a quadratic speedup compared to the best known classical attacks. Motivated by this quantum threat to symmetric cryptography, this paper presents a quantum cryptanalysis for the Milenage algorithm set, the prevalent instantiation of the seven secret-key $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$ algorithms that underpin cellular security.
Building upon recent quantum cryptanalytic results, we show attacks that go beyond a quadratic speedup.
Concretely, we provide quantum attack scenarios for all Milenage algorithms, including exponential speedups when the attacker is allowed to issue superposition queries. Our results do not constitute a quantum break of the Milenage algorithms, but they do show that Milenage suffers from structural weaknesses making it susceptible to quantum attacks.2022-06-08T15:35:54+00:00https://creativecommons.org/licenses/by/4.0/Vincent UlitzschJean-Pierre Seiferthttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/752Schnorr protocol in Jasmin2023-05-24T21:19:27+00:00Denis FirsovTiago OliveiraDominique UnruhWe implement the Schnorr proof system in assembler via the Jasmin toolchain, and prove the security (proof-of-knowledge property) and the absence of leakage through timing side-channels of that implementation in EasyCrypt.
In order to do so, we show how leakage-freeness of Jasmin programs can be proven for probabilistic programs (that are not constant-time). We implement and verify algorithms for fast constant-time modular multiplication and exponentiation (using Barrett reduction and the Montgomery ladder). We implement and verify the rejection sampling algorithm. Finally, we put it all together and show the security of the overall implementation (end-to-end verification) of the Schnorr protocol, by connecting our implementation to prior security analyses in EasyCrypt (Firsov, Unruh, CSF 2023).
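For reference, the protocol being implemented, as a toy Python sketch over a small Schnorr group (parameters are mine and purely illustrative; this says nothing about the Jasmin/EasyCrypt artifact itself):

    import secrets

    # toy parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup
    p, q, g = 2039, 1019, 4

    x = secrets.randbelow(q - 1) + 1  # prover's secret
    h = pow(g, x, p)                  # public statement h = g^x

    r = secrets.randbelow(q)          # prover: commitment
    a = pow(g, r, p)
    e = secrets.randbelow(q)          # verifier: random challenge
    z = (r + e * x) % q               # prover: response

    # verifier checks g^z == a * h^e (mod p)
    assert pow(g, z, p) == (a * pow(h, e, p)) % p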
2023-05-24T21:19:27+00:00https://creativecommons.org/licenses/by/4.0/Denis FirsovTiago OliveiraDominique Unruhhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/751Scalable Agreement Protocols with Optimal Optimistic Efficiency2023-05-24T20:04:04+00:00Yuval GellesIlan KomargodskiDesigning efficient distributed protocols for various agreement tasks such as Byzantine Agreement, Broadcast, and Committee Election is a fundamental problem. We are interested in $scalable$ protocols for these tasks, where each (honest) party communicates a number of bits which is sublinear in $n$, the number of parties. The first major step towards this goal is due to King et al. (SODA 2006) who showed a protocol where each party sends only $\tilde O(1)$ bits throughout $\tilde O(1)$ rounds, but guarantees only that a $1-o(1)$ fraction of honest parties end up agreeing on a consistent output, assuming a constant $<1/3$ fraction of static corruptions. A few years later, King et al. (ICDCN 2011) managed to get a full agreement protocol in the same model, but where each party sends $\tilde O(\sqrt{n})$ bits throughout $\tilde O(1)$ rounds. Getting a full agreement protocol with $o(\sqrt{n})$ communication per party has been a major challenge ever since.
In light of this barrier, we propose a new framework for designing efficient agreement protocols. Specifically, we design $\tilde O(1)$-round protocols for all of the above tasks (assuming constant $<1/3$ fraction of static corruptions) with optimistic and pessimistic guarantees:
$\bullet$ $Optimistic$ $complexity$: In an honest execution, (honest) parties send only $\tilde O(1)$ bits.
$\bullet$ $Pessimistic$ $complexity$: In any other case, (honest) parties send $\tilde O(\sqrt{n})$ bits.
Thus, all an adversary can gain from deviating from the honest execution is that honest parties will need to work harder (i.e., transmit more bits) to reach agreement and terminate. Besides the above agreement tasks, we also use our new framework to get a scalable secure multiparty computation (MPC) protocol with optimistic and pessimistic complexities.
Technically, we identify a relaxation of Byzantine Agreement (of independent interest) that allows us to fall back to a pessimistic execution in a coordinated way by all parties. We implement this relaxation with $\tilde O(1)$ communication bits per party and within $\tilde O(1)$ rounds.2023-05-24T20:04:04+00:00https://creativecommons.org/licenses/by/4.0/Yuval GellesIlan Komargodskihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1157Private Approximate Nearest Neighbor Search with Sublinear Communication2023-05-24T17:28:48+00:00Sacha Servan-SchreiberSimon LangowskiSrinivas DevadasNearest neighbor search is a fundamental building-block for a wide range of applications. A privacy-preserving protocol for nearest neighbor search involves a set of clients who send queries to a remote database. Each client retrieves the nearest neighbor(s) to its query in the database without revealing any information about the query. To ensure database privacy, clients must learn as little as possible beyond the query answer, even if behaving maliciously by deviating from the protocol.
Existing protocols for private nearest neighbor search require heavy cryptographic tools, resulting in high computational and bandwidth overheads. In this paper, we present the first lightweight protocol for private nearest neighbor search. Our protocol is instantiated using two non-colluding servers, each holding a replica of the database. Our design supports an arbitrary number of clients simultaneously querying the database through the two servers. Each query consists of a single round of communication between the client and the two servers. No communication is required between the servers to answer queries.
If at least one of the servers is non-colluding, we ensure that (1) no information is revealed about the client’s query, (2) the total communication between the client and the servers is sublinear in the database size, and (3) each query answer only leaks a small and bounded amount of information about the database to the client, even if the client is malicious.
We implement our protocol and report its performance on real-world data. Our construction requires between 10 and 20 seconds of query latency over large databases of 10M feature vectors. Client overhead remained under 10 ms of processing time per query and less than 10 MB of communication.2021-09-14T17:49:48+00:00https://creativecommons.org/licenses/by/4.0/Sacha Servan-SchreiberSimon LangowskiSrinivas Devadashttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2020/1158Don't throw your nonces out with the bathwater: Speeding up Dilithium by reusing the tail of y2023-05-24T09:53:33+00:00Amber SprenkelsBas WesterbaanWe suggest a small change to the Dilithium signature scheme that allows one to reuse computations between rejected nonces, for a speed-up in signing time.
The modification is based on the idea that, after rejecting on a too large $\|\mathbf{r}_0\|_\infty$, not all elements of the nonce $\mathbf{y}$ are spent.
We swap the order of the checks; if this $\mathbf{r}_0$-check fails, we only need to resample $y_1$.
We provide a proof that shows that the modification does not affect the security of the scheme.
We present measurements of the performance of the modified scheme on AVX2, Cortex M4, and Cortex M3,
which show a speed-up ranging from 11% for Dilithium2 on M3 to 22% for Dilithium3 on AVX2.
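A schematic of the reordered rejection loop (my paraphrase of the idea above; all helper names are placeholders and every real Dilithium detail is omitted):

    # sample_y1/sample_y2 draw the two parts of the nonce y; attempt() runs one
    # signing attempt; r0_ok/z_ok are the two rejection checks.
    def sign_schematic(sample_y1, sample_y2, attempt, r0_ok, z_ok):
        y1, y2 = sample_y1(), sample_y2()
        while True:
            sig, r0, z = attempt(y1, y2)
            if not r0_ok(r0):
                y1 = sample_y1()                   # r0-check first: only y1 is spent
                continue
            if not z_ok(z):
                y1, y2 = sample_y1(), sample_y2()  # z-rejection: fresh full nonce
                continue
            return sig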
2020-09-25T18:40:53+00:00https://creativecommons.org/licenses/by/4.0/Amber SprenkelsBas Westerbaanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/749Note on Subversion-Resilient Key Exchange2023-05-24T09:23:55+00:00Magnus RingerudIn this work, we set out to create a subversion-resilient authenticated key exchange protocol. The first step was to design a meaningful security model for this primitive, and our goal was to avoid using building blocks like reverse firewalls and public watchdogs. We wanted to exclude these kinds of tools because we desired that our protocols be self-contained, in the sense that we could prove security without relying on some outside, tamper-proof party. To define the model, we began by extending models for regular authenticated key exchange, as we wanted our model to retain all the properties from regular AKE.
While trying to design protocols that would be secure in this model, we discovered that security depended not only on the protocol itself, but also on engineering questions such as how keys are stored and accessed in memory. Moreover, even if we assume that we can find solutions to these engineering challenges, other problems arise when trying to develop a secure protocol, partly because it is hard to define what secure means in this setting. It is in particular not clear how a subverted algorithm should affect the freshness predicate inherited from trivial attacks in regular AKE. The attack variety is large, and it is not intuitive how one should treat or classify the different attacks.
In the end, we were unable to find a satisfying solution for our model, and hence we could not prove any meaningful security of the protocols we studied. This work is a summary of our attempt and the challenges we faced before concluding it.
2023-05-24T09:23:55+00:00https://creativecommons.org/licenses/by/4.0/Magnus Ringerudhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/730The Problem of Half Round Key XOR2023-05-24T08:58:50+00:00Anubhab Baksi
In the design of GIFT, a half-round key XOR is used. This leads to the undesired consequence that the security against differential/linear attacks is overestimated. This comes from the observation that, in the usual DDT/LAT-based analysis of differential/linear attacks, the inherent assumption is that the full round key is XORed at each round.
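For readers unfamiliar with the analysis being questioned, the following toy Python snippet computes the DDT that such differential arguments rely on. The SBox here is an arbitrary 4-bit example, not GIFT's, and the note's point is precisely that this table alone overestimates security when only half a round key is XORed.

```python
# Difference Distribution Table: ddt[a][b] counts inputs x with
# S[x] ^ S[x ^ a] == b.  Differential trail probabilities are read off
# this table under the assumption of a full round-key XOR every round.

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x] ^ sbox[x ^ a]] += 1
    return table

TOY_SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
print(ddt(TOY_SBOX)[1])  # output-difference counts for input difference 1
```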
2023-05-21T20:22:08+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Anubhab Baksihttps://creativecommons.org/licenses/by-nc-sa/4.0/
https://eprint.iacr.org/2023/451Non-interactive VSS using Class Groups and Application to DKG2023-05-24T08:22:48+00:00Aniket KateEaswar Vivek MangipudiPratyay MukherjeeHamza SaleemSri Aravinda Krishnan Thyagarajan
Verifiable secret sharing (VSS) allows a dealer to send shares of a secret value to parties such that each party receiving a share can verify (often interactively) whether the received share was correctly generated. Non-interactive VSS (NI-VSS) allows the dealer to perform secret sharing such that every party (including an outsider) can verify their shares, along with others', without any interaction with the dealer or among themselves. Existing NI-VSS schemes, employing either exponentiated ElGamal or lattice-based encryption schemes, involve zero-knowledge range proofs, resulting in higher computational and communication complexities.
In this work, we present cgVSS, an NI-VSS protocol that uses class groups for encryption. In cgVSS, the dealer encrypts the secret shares in the exponent through a class-group encryption such that the parties can directly decrypt their shares. The existence of a subgroup where the discrete logarithm is tractable in a class group allows the receiver to efficiently decrypt the share even though it is available in the exponent. This yields a novel-yet-simple VSS protocol where the dealer publishes the encryptions of the shares and a zero-knowledge proof of the correctness of the dealing. The linear homomorphic nature of the employed encryption scheme allows for an efficient zero-knowledge proof of correct sharing. Given the rise in demand for VSS protocols in the blockchain space, especially for publicly verifiable distributed key generation (DKG), our NI-VSS construction can be particularly impactful. We implement our cgVSS protocol using the BICYCL library and compare its performance with a simplified version of the state-of-the-art NI-VSS by Groth. Our protocol reduces the message complexity and the bit length of the broadcast message by at least 5.6x for a 150-party system, with a 2.7x and 2.4x speed-up in the dealer and receiver computation times, respectively.
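As a point of reference for the "publish checkable encodings of the shares" idea, here is a toy Feldman-style verification in a small Schnorr group. This is generic discrete-log folklore for intuition only, deliberately not cgVSS, whose class-group encryption additionally lets the receiver recover the share from the exponent.

```python
import random

P, Q, G = 23, 11, 2   # toy group: G = 2 has prime order Q = 11 modulo P = 23

def deal(secret, t):
    """Dealer: Shamir-share `secret` with threshold t, publish commitments."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(t)]
    commits = [pow(G, c, P) for c in coeffs]              # public values
    share = lambda i: sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q
    return share, commits

def verify(i, share_i, commits):
    """Anyone: check g^share_i against the committed polynomial at point i."""
    rhs = 1
    for j, cj in enumerate(commits):
        rhs = rhs * pow(cj, pow(i, j, Q), P) % P
    return pow(G, share_i, P) == rhs

share, commits = deal(secret=7, t=2)
assert all(verify(i, share(i), commits) for i in range(1, 6))
```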
2023-03-28T17:30:50+00:00https://creativecommons.org/licenses/by/4.0/Aniket KateEaswar Vivek MangipudiPratyay MukherjeeHamza SaleemSri Aravinda Krishnan Thyagarajanhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/748Towards the Links of Cryptanalytic Methods on MPC/FHE/ZK-Friendly Symmetric-Key Primitives2023-05-24T07:38:18+00:00Shiyao ChenChun GuoJian GuoLi LiuMeiqin WangPuwen WeiZeyu Xu
Symmetric-key primitives designed over the prime field $\mathbb{F}_p$ with odd characteristic, rather than the traditional $\mathbb{F}_2^{n}$, are becoming the most popular choice for MPC/FHE/ZK protocols, for their better efficiency. However, the security over $\mathbb{F}_p$ is less understood, as there are highly nontrivial gaps when extending the cryptanalysis tools and experience built on $\mathbb{F}_2^{n}$ over the past few decades to $\mathbb{F}_p$.
At CRYPTO 2015, Sun et al. established the links among impossible differential, zero-correlation linear, and integral cryptanalysis over $\mathbb{F}_2^{n}$ from the perspective of distinguishers. In this paper, following the definition of linear correlations over $\mathbb{F}_p$ by Baignères, Stern and Vaudenay at SAC 2007, we establish comprehensive links over $\mathbb{F}_p$ by reproducing the proofs and offering alternatives where necessary. Interesting and important differences between $\mathbb{F}_p$ and $\mathbb{F}_2^n$ are observed.
- Zero-correlation linear hulls cannot lead to integral distinguishers in some cases over $\mathbb{F}_p$, while this is always possible over $\mathbb{F}_2^n$, as proven by Sun et al.
- When the newly established links are applied to GMiMC, its impossible differential, zero-correlation linear hull, and integral distinguishers can be extended by up to 3 rounds in most cases, and even to an arbitrary number of rounds in some special and limited cases that appear only over $\mathbb{F}_p$. It should be noted that these distinguishers do not invalidate GMiMC's security claims.
The development of the theories over $\mathbb{F}_p$ behind these links, and the properties identified (be they similar or different), will bring a clearer and easier understanding of the security of primitives over this emerging field $\mathbb{F}_p$, which we believe will provide useful guidance for future cryptanalysis and design.
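For orientation, linear correlations over $\mathbb{F}_p$ are usually defined via complex $p$-th roots of unity rather than the $\pm 1$ signs familiar from $\mathbb{F}_2^n$; a representative form (illustrative, not necessarily the exact normalization used in the works cited above) for a function $f:\mathbb{F}_p^n\to\mathbb{F}_p$ and masks $u, v$ is
$$\mathrm{cor}_f(u,v)=\frac{1}{p^n}\sum_{x\in\mathbb{F}_p^n}\omega^{\langle u,x\rangle - v\,f(x)},\qquad \omega=e^{2\pi i/p},$$
which reduces to the classical sign-based definition when $p=2$.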
2023-05-24T07:38:18+00:00https://creativecommons.org/licenses/by/4.0/Shiyao ChenChun GuoJian GuoLi LiuMeiqin WangPuwen WeiZeyu Xuhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/747Key-Range Attribute-Based Signatures for Range of Inner Product and Its Applications2023-05-24T02:51:17+00:00Masahito Ishizaka
In attribute-based signatures (ABS) for range of inner product (ARIP), recently proposed by Ishizaka and Fukushima at ICISC 2022, a secret key labeled with an $n$-dimensional vector $\mathbf{x}\in\mathbb{Z}_p^n$ for a prime $p$ can be used to sign a message under an $n$-dimensional vector $\mathbf{y}\in\mathbb{Z}_p^n$ and a range $[L,R]=\{L, L+1, \cdots, R-1, R\}$ with $L,R\in\mathbb{Z}_p$ iff their inner product is within the range, i.e., $\langle \mathbf{x}, \mathbf{y} \rangle \in [L,R]\pmod p$. We consider its key-range version, named key-range ARIP (KARIP), where the range $[L,R]$ is associated with a secret key rather than with a signature. We propose three generic KARIP constructions based on linearly homomorphic signatures and non-interactive witness-indistinguishable proofs, which lead to concrete KARIP instantiations secure under standard assumptions with different efficiency features. We also show that KARIP has various applications, e.g., key-range ABS for range evaluation of polynomials/weighted averages/Hamming distance/Euclidean distance, key-range time-specific signatures, and key-range ABS for hyperellipsoid predicates.
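The signing predicate itself is simple to state in code; the following hypothetical helper only illustrates the condition a KARIP key must satisfy (treating $[L,R]$ as a non-wrapping interval), not the paper's constructions.

```python
def karip_predicate(x, y, L, R, p):
    # a key for vector x may sign under (y, [L, R]) iff <x, y> mod p is in range
    ip = sum(xi * yi for xi, yi in zip(x, y)) % p
    return L <= ip <= R

assert karip_predicate([1, 2, 3], [4, 5, 6], L=30, R=40, p=97)  # <x, y> = 32
```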
2023-05-24T02:51:17+00:00https://creativecommons.org/licenses/by/4.0/Masahito Ishizakahttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/746Homomorphic Signatures for Subset and Superset Mixed Predicates and Its Applications2023-05-24T02:43:13+00:00Masahito IshizakaKazuhide Fukushima
In homomorphic signatures for subset predicates (HSSB), each message (to be signed) is a set. Any signature on a set $M$ allows us to derive a signature on any subset $M'\subseteq M$. Its superset version, which should be called homomorphic signatures for superset predicates (HSSP), allows us to derive a signature on any superset $M'\supseteq M$. In this paper, we propose homomorphic signatures for subset and superset mixed predicates (HSSM) as a simple combination of HSSB and HSSP. In HSSM, any signature on a message of a set-pair $(M, W)$ allows us to derive a signature on any $(M', W')$ such that $M'\subseteq M$ and $W'\supseteq W$. We propose an original HSSM scheme which is unforgeable under the decisional linear assumption and completely context-hiding. We show that HSSM has various applications, which include disclosure-controllable HSSB, disclosure-controllable redactable signatures, (key-delegatable) superset/subset predicate signatures, and wildcarded identity-based signatures.
2023-05-24T02:43:13+00:00https://creativecommons.org/licenses/by/4.0/Masahito IshizakaKazuhide Fukushimahttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/745PSI from ring-OLE2023-05-24T02:17:11+00:00Wutichai ChongchitmateYuval IshaiSteve LuRafail Ostrovsky
Private set intersection (PSI) is one of the most extensively studied instances of secure computation. PSI allows two parties to compute the intersection of their input sets without revealing anything else. Other useful variants include PSI-Payload, where the output includes payloads associated with members of the intersection, and PSI-Sum, where the output includes the sum of the payloads instead of individual ones.
In this work, we make two related contributions. First, we construct simple and efficient protocols for PSI and PSI-Payload from a ring version of oblivious linear function evaluation (ring-OLE) that can be efficiently realized using recent ring-LPN based protocols. A standard OLE over a field $\mathbb{F}$ allows a sender with $a,b \in \mathbb{F}$ to deliver $ax+b$ to a receiver who holds $x \in \mathbb{F}$. Ring-OLE generalizes this to a ring $\mathcal{R}$, in particular, a polynomial ring over $\mathbb{F}$. Our second contribution is an efficient general reduction of a variant of PSI-Sum to PSI-Payload and secure inner product.
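To fix notation, the following Python toy implements the ideal ring-OLE functionality over the illustrative ring $\mathbb{F}_q[X]/(X^N-1)$ with tiny parameters; the actual protocols realize this functionality securely from ring-LPN, and the paper's ring choice may differ.

```python
import random

Q, N = 97, 8   # toy parameters, far below cryptographic size

def ring_mul(a, b):
    """Multiply in F_Q[X]/(X^N - 1): cyclic convolution of coefficient lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % Q
    return c

def ring_ole(a, b, x):
    """Ideal functionality: receiver holding x learns a*x + b, nothing else."""
    return [(m + bi) % Q for m, bi in zip(ring_mul(a, x), b)]

a, b, x = ([random.randrange(Q) for _ in range(N)] for _ in range(3))
print(ring_ole(a, b, x))
```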
Our protocols have better communication cost than state-of-the-art PSI protocols, especially when requiring security against malicious parties and when allowing input-independent preprocessing. Compared to previous maliciously secure PSI protocols that have a similar computational cost, our online communication is 2x better for small sets ($2^8$–$2^{12}$ elements) and 20% better for large sets ($2^{20}$–$2^{24}$). Our protocol is also simpler to describe and implement. We obtain even bigger improvements over the state of the art (4-5x better running time) for our variant of PSI-Sum.
2023-05-24T02:17:11+00:00https://creativecommons.org/licenses/by-nc/4.0/Wutichai ChongchitmateYuval IshaiSteve LuRafail Ostrovskyhttps://creativecommons.org/licenses/by-nc/4.0/
https://eprint.iacr.org/2023/744On Extremal Algebraic Graphs and implementations of new cubic Multivariate Public Keys2023-05-23T21:41:31+00:00Vasyl UstimenkoTymoteusz ChojeckiMichal Klisowski
Algebraic constructions of Extremal Graph Theory have been used efficiently for the construction of Low Density Parity Check codes for satellite communication, of stream ciphers, and of post-quantum protocols of Noncommutative Cryptography with corresponding ElGamal-type cryptosystems. We briefly survey some results in these applications and present the idea of using algebraic graphs for the development of Multivariate Public Keys (MPK). Some MPK schemes are presented at a theoretical level, and the implementation of one of them is discussed.
2023-05-23T21:41:31+00:00https://creativecommons.org/licenses/by/4.0/Vasyl UstimenkoTymoteusz ChojeckiMichal Klisowskihttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/743On Sustainable Ring-based Anonymous Systems2023-05-23T17:22:32+00:00Sherman S. M. ChowChristoph EggerRussell W. F. LaiViktoria RongeIvy K. Y. Woo
Anonymous systems (e.g. anonymous cryptocurrencies and updatable anonymous credentials) often follow a construction template where an account can only perform a single anonymous action, which in turn potentially spawns new (and still single-use) accounts (e.g. UTXO with a balance to spend or session with a score to claim). Due to the anonymous nature of the action, no party can be sure which account has taken part in an action and, therefore, must maintain an ever-growing list of potentially unused accounts to ensure that the system keeps running correctly. Consequently, anonymous systems constructed based on this common template are seemingly not sustainable.
In this work, we study the sustainability of ring-based anonymous systems, where a user performing an anonymous action is hidden within a set of decoy users, traditionally called a ``ring''.
On the positive side, we propose a general technique for ring-based anonymous systems to achieve sustainability. Along the way, we define a general model of decentralised anonymous systems (DAS) for arbitrary anonymous actions, and provide a generic construction which provably achieves sustainability. As a special case, we obtain the first construction of anonymous cryptocurrencies achieving sustainability without compromising availability. We also demonstrate the generality of our model by constructing sustainable decentralised anonymous social networks.
On the negative side, we show empirically that Monero, one of the most popular anonymous cryptocurrencies, is unlikely to be sustainable without altering its current ring sampling strategy. The main subroutine underlying our analysis is a sub-quadratic-time algorithm for detecting used accounts in a ring-based anonymous system.
2023-05-23T17:22:32+00:00https://creativecommons.org/licenses/by/4.0/Sherman S. M. ChowChristoph EggerRussell W. F. LaiViktoria RongeIvy K. Y. Woohttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/742Finding Desirable Substitution Box with SASQUATCH2023-05-23T16:02:04+00:00Manas WadhwaAnubhab BaksiKai HuAnupam ChattopadhyayTakanori IsobeDhiman Saha
This paper presents ``SASQUATCH'', an open-source tool that aids in finding an unknown substitution box (SBox) given its properties. The inspiration for our work can be directly attributed to the DCC 2022 paper by Lu, Mesnager, Cui, Fan and Wang. Taking their work as the foundation (i.e., converting the problem of SBox search to a satisfiability modulo theory instance and then invoking a solver), we extend it in multiple directions (including -- but not limited to -- coverage of more options, imposing a time limit, parallel execution for multiple SBoxes, and non-bijective SBoxes), and package everything within an easy-to-use interface. We also present ASIC benchmarks for some of the SBoxes.
2023-05-23T16:02:04+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Manas WadhwaAnubhab BaksiKai HuAnupam ChattopadhyayTakanori IsobeDhiman Sahahttps://creativecommons.org/licenses/by-nc-sa/4.0/
https://eprint.iacr.org/2022/1340Understanding the Duplex and Its Security2023-05-23T15:51:15+00:00Bart Mennink
At SAC 2011, Bertoni et al. introduced the keyed duplex construction as a tool to build permutation-based authenticated encryption schemes. The construction was generalized to full-state absorption by Mennink et al. (ASIACRYPT 2015). Daemen et al. (ASIACRYPT 2017) generalized it further to cover many more use cases and proved the security of this general construction, and Dobraunig and Mennink (ASIACRYPT 2019) derived a leakage-resilience security bound for this construction. Due to its generality, the full-state keyed duplex construction that we know today has a plethora of applications, but the flip side of the coin is that the general construction is hard to grasp and the corresponding security bounds are very complex. Consequently, the state-of-the-art results on the full-state keyed duplex construction are not used to the fullest. In this work, we revisit the history of the duplex construction, give a comprehensive discussion of its possibilities and limitations, and demonstrate how the two security bounds (of Daemen et al. and Dobraunig and Mennink) can be interpreted in particular applications of the duplex.
2022-10-07T12:09:22+00:00https://creativecommons.org/licenses/by/4.0/Bart Menninkhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/661Study of Arithmetization Methods for STARKs2023-05-23T14:09:02+00:00Tiago MartinsJoão Farinha
This technical paper explores two solutions for the arithmetization of computational-integrity statements in STARKs, namely the algebraic intermediate representation, AIR, and its preprocessed variant, PAIR. The work then focuses on their soundness implications for Reed-Solomon proximity testing. It proceeds by presenting a comparative study of these methods, providing their theoretical foundations and deriving the degree bounds for low-degree proximity testing. The study shows that using PAIR increases the degree bound for Reed-Solomon proximity testing, which affects its soundness and complexity. However, the possibility of reducing the degree bound with multiple selector columns is also explored, namely by following an approach based on the decomposition of the selector values. Focusing on performance optimization, the work proceeds by qualitatively comparing the computational demands of the components of both arithmetization methods, particularly their impact on the low-degree extensions. The paper concludes that, while PAIR might simplify constraint enforcement, it can be easily translated to AIR, and system testing with benchmarks is necessary to determine the application-specific superiority of either method. This work should provide insight into the strengths and limitations of each method, helping researchers and practitioners in the field of STARKs make informed design choices.
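As a rough picture of why preprocessed selectors raise the degree bound (illustrative notation, not the paper's formalism): a selector column $s$ taking values in $\{0,1\}$ over the trace domain $D$ can switch between two constraint polynomials via
$$s(x)\cdot C_1(x)+\bigl(1-s(x)\bigr)\cdot C_2(x)=0\quad\text{for all }x\in D,$$
so the composed constraint picks up the degree of $s$ on top of $\max(\deg C_1,\deg C_2)$; decomposing the selector values across several lower-degree columns, as discussed above, trades extra columns for a smaller degree bound.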
2023-05-10T10:46:53+00:00https://creativecommons.org/licenses/by/4.0/Tiago MartinsJoão Farinhahttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/740Practical Robust DKG Protocols for CSIDH2023-05-23T12:14:58+00:00Shahla AtapoorKarim BagheryDaniele CozzoRobi Pedersen
A Distributed Key Generation (DKG) protocol is an essential component of threshold cryptography. DKGs enable a group of parties to generate a secret and public key pair in a distributed manner so that the secret key is protected from being exposed, even if a certain number of parties are compromised. Robustness further guarantees that the construction of the key pair is always successful, even if malicious parties try to sabotage the computation. In this paper, we construct two efficient robust DKG protocols in the CSIDH setting that work with Shamir secret sharing. Both of the proposed protocols are proven to be actively secure in the quantum random oracle model and use an information-theoretically (IT) secure Verifiable Secret Sharing (VSS) scheme that is built using bivariate polynomials. As a tool, we construct a new piecewise verifiable proof system for structured public keys, which could be of independent interest. In terms of isogeny computations, our protocols outperform the previously proposed DKG protocols CSI-RAShi and Structured CSI-RAShi. As an instance, using our DKG protocols, 4 parties can sample a public key of size 4 kB for CSI-FiSh and CSI-SharK, respectively, 3.4 and 1.7 times faster than the current alternatives. On the other hand, since we use an IT-secure VSS, the fraction of corrupted parties is limited to less than a third, and the communication cost of our schemes scales slightly worse with an increasing number of parties. For a low number of parties, our scheme still outperforms the alternatives in terms of communication.
2023-05-23T12:14:58+00:00https://creativecommons.org/licenses/by/4.0/Shahla AtapoorKarim BagheryDaniele CozzoRobi Pedersenhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/141Efficient Hybrid Exact/Relaxed Lattice Proofs and Applications to Rounding and VRFs2023-05-23T05:49:05+00:00Muhammed F. EsginRon SteinfeldDongxi LiuSushmita Ruj
In this work, we study hybrid exact/relaxed zero-knowledge proofs from lattices, where the proved relation is exact in one part and relaxed in the other. Such proofs arise in important real-life applications such as those requiring verifiable PRF evaluation and have so far not received significant attention as a standalone problem.
We first introduce a general framework, LANES+, for realizing such hybrid proofs efficiently by combining standard relaxed proofs of knowledge (RPoK) and the LANES framework (due to a series of works in Crypto'20, Asiacrypt'20, ACM CCS'20). The latter framework is a powerful lattice-based proof system that can prove exact linear and multiplicative relations. The advantage of LANES+ is its ability to realize hybrid proofs more efficiently by exploiting an RPoK for the high-dimensional part of the secret witness while leaving a low-dimensional secret witness part for the exact proof, which is proven at a significantly lower cost via LANES. Thanks to the flexibility of LANES+, other exact proof systems can also be supported.
We apply our LANES+ framework to construct substantially shorter proofs of rounding, which is a central tool for verifiable deterministic lattice-based cryptography. Based on our rounding proof, we then design an efficient long-term verifiable random function (VRF), named LaV. LaV leads to the shortest VRF outputs among the proposals of standard (i.e., long-term and stateless) VRFs based on quantum-safe assumptions.
Of independent interest, we also present generalized results for challenge difference invertibility, a fundamental soundness requirement for many proof systems.
2022-02-09T09:00:19+00:00https://creativecommons.org/licenses/by/4.0/Muhammed F. EsginRon SteinfeldDongxi LiuSushmita Rujhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/739SMAUG: Pushing Lattice-based Key Encapsulation Mechanisms to the Limits2023-05-23T05:06:59+00:00Jung Hee CheonHyeongmin ChoeDongyeon HongMinJune Yi
Recently, NIST has announced Kyber, a lattice-based key encapsulation mechanism (KEM), as a post-quantum standard.
However, it is not the most efficient scheme among NIST's KEM finalists.
Saber enjoys more compact sizes and faster performance, and Mera et al. (TCHES '21) further pushed its efficiency, proposing a shorter KEM, Sable.
As KEMs are frequently used on the Internet, such as in the TLS protocol, it is essential to achieve high efficiency while maintaining sufficient security.
In this paper, we further push the efficiency limit of lattice-based KEMs by proposing SMAUG, a new post-quantum KEM scheme submitted to the Korean Post-Quantum Cryptography (KPQC) competition, whose IND-CCA2 security is based on the combination of MLWE and MLWR problems.
We adopt several recent developments in lattice-based cryptography, targeting the \textit{smallest} and the \textit{fastest} KEM while maintaining high enough security against various attacks, with a full-fledged use of sparse secrets.
Our design choices allow SMAUG to balance the decryption failure probability and ciphertext sizes without utilizing error correction codes, whose side-channel resistance remains open.
With a constant-time C reference implementation, SMAUG achieves ciphertext sizes up to 12% and 9% smaller than Kyber and Saber, with much faster running time, up to 103% and 58%, respectively.
Compared to Sable, SMAUG has the same ciphertext sizes but a larger public key, which gives a trade-off between public key size and performance; SMAUG has 39%-55% faster encapsulation and decapsulation speeds in the parameter sets having comparable security.
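For context, hedged textbook forms of the two underlying assumptions (standard definitions with illustrative notation; SMAUG's exact rings, dimensions, and distributions are specified in the paper): an adversary must distinguish from uniform either
$$\bigl(A,\ b = A s + e \bmod q\bigr)\ \text{(MLWE)}\qquad\text{or}\qquad \bigl(A,\ b = \big\lfloor \tfrac{p}{q}\, A s \big\rceil \bmod p\bigr)\ \text{(MLWR)},$$
where $A$ is uniform over a module ring and $s$ (and, for MLWE, $e$) are short. MLWR replaces explicit noise with deterministic rounding from modulus $q$ down to $p$, which is what lets a scheme shave ciphertext bits.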
2023-05-23T05:06:59+00:00https://creativecommons.org/licenses/by/4.0/Jung Hee CheonHyeongmin ChoeDongyeon HongMinJune Yihttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/1171Goldfish: No More Attacks on Proof-of-Stake Ethereum2023-05-23T01:29:42+00:00Francesco D'AmatoJoachim NeuErtem Nusret TasDavid Tse
The latest message driven (LMD) greedy heaviest observed sub-tree (GHOST) consensus protocol is a critical component of proof-of-stake (PoS) Ethereum. In its current form, the protocol is brittle, as evidenced by recent attacks and patching attempts. We report on Goldfish, a considerably simplified candidate under consideration for a future Ethereum protocol upgrade. We prove that Goldfish satisfies the properties required of a drop-in replacement for LMD GHOST: Goldfish is secure in synchronous networks under dynamic participation, assuming a majority of the nodes (called validators) follows the protocol. Goldfish is reorg resilient (i.e., honestly produced blocks are guaranteed inclusion in the ledger) and supports fast confirmation (i.e., the expected confirmation latency is independent of the desired security level). We show that subsampling validators can improve the communication efficiency of Goldfish, and that Goldfish is composable with finality gadgets and accountability gadgets, which improves state-of-the-art ebb-and-flow protocols. Attacks on LMD GHOST exploit the lack of coordination among honest validators, typically provided by a locking mechanism in classical BFT protocols. However, locking requires votes from a quorum of all participants and is not compatible with dynamic availability. Goldfish is powered by a novel coordination mechanism to synchronize the honest validators' actions under dynamic participation. Experiments with our implementation of Goldfish demonstrate the practicality of this mechanism for Ethereum.
2022-09-07T16:16:51+00:00https://creativecommons.org/licenses/by/4.0/Francesco D'AmatoJoachim NeuErtem Nusret TasDavid Tsehttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/738Extremal algebraic graphs, quadratic multivariate public keys and temporal rules2023-05-22T20:56:12+00:00Vasyl UstimenkoAneta Wróblewska
We introduce large groups of quadratic transformations of a vector space over finite fields, defined via symbolic computations using algebraic constructions of Extremal Graph Theory. These groups can serve as platforms for protocols of Noncommutative Cryptography whose security is based on the complexity of the word decomposition problem in a noncommutative polynomial transformation group. Modifications of these symbolic computations in the case of large fields of characteristic two allow us to define quadratic bijective multivariate public keys such that the inverses of the public maps have large polynomial degree. Another family of public keys is defined over an arbitrary commutative ring with unity.
We suggest using the constructed protocols for the private delivery of quadratic encryption maps, instead of the public usage of these transformations, i.e., the idea of temporal multivariate rules with their periodical change.
2023-05-22T20:56:12+00:00https://creativecommons.org/licenses/by/4.0/Vasyl UstimenkoAneta Wróblewskahttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/737Differential properties of integer multiplication2023-05-22T20:15:54+00:00Koustabh GhoshJoan Daemen
In this paper, we study the differential properties of integer multiplication between two $w$-bit integers, resulting in a $2w$-bit integer. Our objective is to gain insight into its resistance against differential cryptanalysis and to assess its suitability as a source of non-linearity in symmetric-key primitives.
2023-05-22T20:15:54+00:00https://creativecommons.org/licenses/by/4.0/Koustabh GhoshJoan Daemenhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/534Group Oblivious Message Retrieval2023-05-22T19:25:43+00:00Zeyu LiuEran TromerYunhao Wang
Anonymous message delivery, as in private communication and privacy-preserving blockchain applications, ought to protect recipient metadata: a message should not be inadvertently linkable to its destination.
But how can messages then be delivered to each recipient, without each recipient scanning all messages? Recent work constructed Oblivious Message Retrieval (OMR) protocols that outsource this job to untrusted servers in a privacy-preserving manner.
We consider the case of group messaging, where each message may have multiple recipients (e.g., in a group chat or blockchain transaction). Direct use of prior OMR protocols in the group setting increases the servers' work linearly in the group size, rendering it prohibitively costly for large groups.
We thus devise new protocols where the servers' cost grows very slowly with the group size, while recipients' cost is low and independent of the group size. Our approach uses Fully Homomorphic Encryption and other lattice-based techniques, building on and improving on prior work. The efficient handling of groups is attained by encoding multiple recipient-specific clues into a single polynomial or multilinear function that can be efficiently evaluated under FHE, and via preprocessing and amortization techniques.
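The polynomial-encoding idea can be previewed in the clear (our toy, with a hypothetical prime modulus; the real protocols evaluate such encodings under FHE so the server learns nothing): fold a message's recipient identifiers into one polynomial whose roots are exactly those identifiers.

```python
P_MOD = 65537  # toy prime modulus

def pack_recipients(ids):
    """Coefficients (lowest degree first) of prod_i (X - id_i) mod P_MOD."""
    coeffs = [1]
    for rid in ids:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] = (new[i + 1] + c) % P_MOD   # c * X^(i+1)
            new[i] = (new[i] - rid * c) % P_MOD     # -rid * c * X^i
        coeffs = new
    return coeffs

def is_addressed_to(coeffs, my_id):
    """The packed polynomial evaluates to 0 exactly at intended recipients."""
    return sum(c * pow(my_id, i, P_MOD) for i, c in enumerate(coeffs)) % P_MOD == 0

clue = pack_recipients([11, 22, 33])
assert is_addressed_to(clue, 22) and not is_addressed_to(clue, 44)
```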
We formally study several variants of Group Oblivious Message Retrieval (GOMR) and describe corresponding GOMR protocols. Our implementation and benchmarks show, for parameters of interest, cost reductions of orders of magnitude compared to prior schemes. For example, the servers' cost is about $3.36 per million messages scanned, where each message may address up to 15 recipients.
2023-04-13T05:37:55+00:00https://creativecommons.org/licenses/by/4.0/Zeyu LiuEran TromerYunhao Wanghttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/721A Fast RLWE-Based IPFE Library and its Application to Privacy-Preserving Biometric Authentication2023-05-22T15:47:39+00:00Supriya AdhikaryAngshuman Karmakar
With the increased use of data and communication through the internet and the abundant misuse of personal data by many organizations, people are more sensitive about their privacy. Privacy-preserving computation is becoming increasingly important in this era. Functional encryption allows a user to evaluate a function on encrypted data without revealing sensitive information. Most implementations of functional encryption schemes are too time-consuming for practical use. Mera et al. first proposed an inner-product functional encryption scheme based on ring learning with errors to improve efficiency. In this work, we optimize the implementation of their work and propose a fast inner-product functional encryption library. Specifically, we identify the main performance bottleneck, which is the number-theoretic-transform (NTT) based polynomial multiplication used in the scheme. We also identify the micro- and macro-level parallel components of the scheme and propose novel techniques to improve efficiency using $\textit{open multi-processing}$ and the $\textit{advanced vector extensions 2}$ vector processor. Compared to the original implementation, our optimization methods translate to $89.72\%$, $83.06\%$, $59.30\%$, and $53.80\%$ improvements in the $\textbf{Setup}$, $\textbf{Encrypt}$, $\textbf{KeyGen}$, and $\textbf{Decrypt}$ operations, respectively, at the standard security level. Designing privacy-preserving applications using functional encryption is ongoing research. Therefore, as an additional contribution to this work, we design a privacy-preserving biometric authentication scheme using inner-product functional encryption primitives.
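Since the NTT is named as the bottleneck, a minimal radix-2 NTT multiplication over a toy cyclic ring may help readers see why: it is the only superlinear step in such schemes. This sketch assumes the simple ring $\mathbb{Z}_{17}[X]/(X^8-1)$; the library itself targets production-size (and typically negacyclic) RLWE parameters.

```python
Q, N, G = 17, 8, 9   # 9 has multiplicative order 8 mod 17: an 8th root of unity

def ntt(a, omega):
    """Recursive radix-2 number-theoretic transform of length len(a)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % Q)
    odd = ntt(a[1::2], omega * omega % Q)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % Q
        out[i] = (even[i] + t) % Q
        out[i + n // 2] = (even[i] - t) % Q
        w = w * omega % Q
    return out

def intt(a):
    inv_n = pow(len(a), Q - 2, Q)
    return [x * inv_n % Q for x in ntt(a, pow(G, Q - 2, Q))]

def cyclic_mul(a, b):
    """Multiply in Z_Q[X]/(X^N - 1) via pointwise products in the NTT domain."""
    return intt([x * y % Q for x, y in zip(ntt(a, G), ntt(b, G))])

# (1 + 2X)(3 + X^2) = 3 + 6X + X^2 + 2X^3
print(cyclic_mul([1, 2, 0, 0, 0, 0, 0, 0], [3, 0, 1, 0, 0, 0, 0, 0]))
```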
2023-05-19T06:24:01+00:00https://creativecommons.org/licenses/by/4.0/Supriya AdhikaryAngshuman Karmakarhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/735Privacy-preserving Attestation for Virtualized Network Infrastructures2023-05-22T15:33:43+00:00Ghada ArfaouiThibaut JacquesMarc LacosteCristina OneteLéo Robert
In multi-tenant cloud environments, physical resources are shared between various parties (called tenants) through the use of virtual machines (VMs). Tenants can verify the state of their VMs by means of deep-attestation: a process by which a (physical or virtual) Trusted Platform Module -- TPM -- generates attestation quotes about the integrity state of the VMs. Unfortunately, most existing deep-attestation solutions are either limited to single-tenant environments, in which tenant privacy is irrelevant; are inefficient in terms of linking VM attestations to hypervisor attestations; or provide privacy and/or linking, but at the cost of modifying the TPM hardware.
In this paper, we propose a privacy-preserving TPM-based deep-attestation solution for multi-tenant environments, which provably guarantees: (i) Inter-tenant privacy: a tenant is unaware of whether or not the physical machine hosting its VMs also contains other VMs (belonging to other tenants); (ii) Configuration privacy: the hypervisor's configuration, used in the attestation process, remains private with respect to the tenants requesting a hypervisor attestation; and (iii) Layer linking: our protocol enables tenants to link hypervisors with the VMs, thus obtaining a guarantee that their VMs are running on specific physical machines.
Our solution relies on vector commitments and ZK-SNARKs. We build on the security model of Arfaoui et al. and provide both formalizations of the properties we require and proofs that our scheme does, in fact, attain them. Our protocol is scalable, and our implementation results show that it is viable, even for a large number of VMs hosted on a single platform.
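Of the two building blocks named above, vector commitments are the easier to picture. The toy Python sketch below uses a Merkle tree, a standard instantiation of the primitive; it is not the paper's construction, and the leaf contents are made up for illustration.

import hashlib

def H(*parts):
    # Domain-separated SHA-256 over the concatenated parts.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def commit(values):
    # Commit to a vector of byte strings (power-of-two length).
    level = [H(b"leaf", v) for v in values]
    tree = [level]
    while len(level) > 1:
        level = [H(b"node", level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def open_at(tree, i):
    # Opening proof for position i: the sibling hashes, bottom-up.
    proof = []
    for level in tree[:-1]:
        proof.append(level[i ^ 1])
        i //= 2
    return proof

def verify(root, i, value, proof):
    h = H(b"leaf", value)
    for sib in proof:
        h = H(b"node", h, sib) if i % 2 == 0 else H(b"node", sib, h)
        i //= 2
    return h == root

vec = [b"vm-quote-%d" % k for k in range(8)]   # hypothetical attestation quotes
root, tree = commit(vec)
assert verify(root, 5, vec[5], open_at(tree, 5))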
2023-05-22T15:33:43+00:00https://creativecommons.org/licenses/by/4.0/Ghada ArfaouiThibaut JacquesMarc LacosteCristina OneteLéo Roberthttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/934On Secure Computation of Solitary Output Functionalities With and Without Broadcast2023-05-22T12:55:19+00:00Bar AlonEran OmriSolitary output secure computation models scenarios where a single entity wishes to compute a function over an input that is distributed among several mutually distrusting parties. The computation should guarantee some security properties, such as correctness, privacy, and guaranteed output delivery; full security captures all of these properties together. This setting is becoming increasingly important, as it is relevant to many real-world scenarios, such as service providers wishing to learn some statistics on the private data of their users.
In this paper, we study full security for solitary output three-party functionalities in the point-to-point model (without broadcast), assuming at most a single party is corrupted. We give a characterization of the set of three-party Boolean functionalities and functionalities with up to three possible outputs (over a polynomial-size domain) that are computable with full security in the point-to-point model against a single corrupted party. We also characterize the set of three-party functionalities (over a polynomial-size domain) where the output-receiving party has no input. Using this characterization, we identify the set of parameters that allow certain functionalities related to private set intersection to be securely computable in this model.
Our main technical contribution is a reinterpretation of the hexagon argument due to Fischer et al. [Distributed Computing '86]. While the original argument relies on the agreement property (i.e., all parties output the same value) to construct an attack, we extend the argument to the solitary output setting, where there is no agreement. Furthermore, using our techniques, we are also able to advance our understanding of the set of solitary output three-party functionalities that can be computed with full security, assuming broadcast but where two parties may be corrupted. Specifically, we extend the set of such functionalities that were known to be computable, due to Halevi et al. [TCC '19].
2022-07-18T12:51:28+00:00https://creativecommons.org/licenses/by/4.0/Bar AlonEran Omrihttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/734TLS → Post-Quantum TLS: Inspecting the TLS landscape for PQC adoption on Android2023-05-22T12:03:53+00:00Dimitri MankowskiThom WiggersVeelasha MoonsamyThe ubiquitous use of smartphones has contributed to more and more users conducting their online browsing activities through apps rather than web browsers. In order to provide a seamless browsing experience, apps rely on a variety of HTTP-based APIs and third-party libraries, and use the TLS protocol to secure the underlying communication. With NIST's recent announcement of the first standards for post-quantum algorithms, there is a need to better understand the constraints and requirements of TLS usage by Android apps in order to make an informed decision for migration to the post-quantum world.
In this paper, we analyse TLS usage by the highest-ranked apps from the Google Play Store to assess the overhead that adopting post-quantum algorithms would add. Our results show that apps set up large numbers of TLS connections, with a median of 94, often to the same hosts. At the same time, many apps make little use of session resumption to reduce the overhead of the TLS handshake. This will greatly magnify the impact of the transition to post-quantum cryptography, and we recommend that developers, server operators, and mobile operating systems make more use of these mitigating features or improve their accessibility. Finally, we briefly discuss how alternative proposals for post-quantum TLS handshakes might reduce the overhead.
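Session resumption, the underused mitigation the authors point to, is available directly from Python's ssl module. A rough sketch follows; the host and request are placeholders, and with TLS 1.3 the resumption behaviour additionally depends on when the server delivers its session tickets.

import socket, ssl

HOST = "example.com"   # placeholder host
ctx = ssl.create_default_context()

def fetch_once(session=None):
    sock = socket.create_connection((HOST, 443))
    tls = ctx.wrap_socket(sock, server_hostname=HOST, session=session)
    # With TLS 1.3 the session ticket may only arrive after the handshake,
    # so perform one request before grabbing the session object.
    tls.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
    tls.recv(4096)
    sess, reused = tls.session, tls.session_reused
    tls.close()
    return sess, reused

sess, reused = fetch_once()             # full handshake
_, reused2 = fetch_once(session=sess)   # abbreviated (resumed) handshake
print("first reused:", reused, "second reused:", reused2)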
2023-05-22T12:03:53+00:00https://creativecommons.org/licenses/by/4.0/Dimitri MankowskiThom WiggersVeelasha Moonsamyhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/733On implemented graph based generator of cryptographically strong pseudorandom sequences of multivariate nature2023-05-22T10:38:57+00:00Vasyl UstimenkoTymoteusz ChojeckiClassical Multivariate Cryptography (MP) searches for special families of functions of the form ${}^nF=T_1FTT_2$ on the vector space $V=(F_q)^n$, where $F$ is a quadratic or cubic polynomial map of the space to itself, $T_1$ and $T_2$ are affine transformations, and $T$ is the piece of information such that knowledge of the triple $T_1$, $T_2$, $T$ allows computing a preimage $x$ of a given ${}^nF(x)$ in polynomial time $O(n^{\alpha})$. Traditionally, $F$ is given by the list of coefficients $C({}^nF)$ of its monomial terms, ordered lexicographically. We consider the Inverse Problem of MP: finding $T_1$, $T_2$, $T$ for $F$ given in its standard form. Solving the inverse problem is harder than finding a procedure to compute a preimage of ${}^nF$ in time $O(n^{\alpha})$. For general quadratic or cubic maps ${}^nF$, this is an NP-hard problem; in the case of a special family, arguments for its inclusion in the class NP have to be given.
2023-05-22T10:38:57+00:00https://creativecommons.org/licenses/by/4.0/Vasyl UstimenkoTymoteusz Chojeckihttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/732VerifMSI: Practical Verification of Hardware and Software Masking Schemes Implementations2023-05-22T10:35:21+00:00Quentin L. MeunierAbdul Rahman TalebSide-Channel Attacks are powerful attacks which can recover secret information in a cryptographic device by analysing physical quantities such as power consumption. Masking is a common countermeasure to these attacks which can be applied in both software and hardware, and consists in splitting the secrets into several parts.
Masking schemes and their implementations are often non-trivial and require automated tools to check their correctness.
In this work, we propose a new practical tool named VerifMSI, which extends an existing verification tool called LeakageVerif that targets software schemes. Compared to LeakageVerif, VerifMSI supports hardware constructs, namely gates and registers, which makes it possible to take glitch propagation into account. Moreover, it includes a new representation of the inputs, making it possible to verify three existing security properties (Non-Interference, Strong Non-Interference, Probe Isolating Non-Interference) as well as a newly defined one called Relaxed Non-Interference, compared to the single Threshold Probing Security property verified by LeakageVerif. Finally, optimisations have been integrated into VerifMSI in order to speed up the verification.
We evaluate VerifMSI on a set of 9 benchmarks from the literature, focusing on the hardware descriptions, and show that it performs well both in terms of accuracy and scalability.
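For readers unfamiliar with the countermeasure being verified, the core idea of Boolean masking fits in a few lines of Python: a toy at order d with byte-sized secrets, making no assumption about VerifMSI's internal representation.

import secrets

def share(x, d):
    # Split a byte x into d+1 XOR shares; any d of them are uniformly random.
    shares = [secrets.randbits(8) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    # Recombine: XOR of all shares recovers the secret.
    x = 0
    for s in shares:
        x ^= s
    return x

secret = 0x2A
sh = share(secret, d=3)    # 4 shares, i.e. 3rd-order masking
assert unshare(sh) == secret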
2023-05-22T10:35:21+00:00https://creativecommons.org/licenses/by/4.0/Quentin L. MeunierAbdul Rahman Talebhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/503Neural Network Quantisation for Faster Homomorphic Encryption2023-05-22T09:54:48+00:00Wouter LegiestFurkan TuranMichiel Van BeirendonckJan-Pieter D'AnversIngrid VerbauwhedeHomomorphic encryption (HE) enables calculating on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than calculation on unencrypted data. Neural networks are commonly trained using floating-point arithmetic, while most homomorphic encryption libraries calculate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g., 32 bit) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al., we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we reduce the execution time of an MNIST neural network by 80% and that of a CIFAR neural network by 40%.
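The trade-off the paper exploits, fewer integer bits versus larger quantisation error, can be reproduced with plain post-training quantisation; note that the paper itself uses quantisation-aware training, which this sketch does not implement.

import numpy as np

def quantise(w, bits):
    # Symmetric uniform quantisation of weights to signed `bits`-bit integers.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int64)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000)            # stand-in for a layer's weights
for bits in (4, 8, 16, 32):
    q, s = quantise(w, bits)
    err = np.abs(w - q * s).max()    # worst-case round-trip error
    print(f"{bits:2d}-bit integers -> max quantisation error {err:.2e}")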
2023-04-07T08:54:55+00:00https://creativecommons.org/licenses/by/4.0/Wouter LegiestFurkan TuranMichiel Van BeirendonckJan-Pieter D'AnversIngrid Verbauwhedehttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/731Fast Exhaustive Search for Polynomial Systems over F32023-05-22T09:48:11+00:00Bo-Yin YangWei-Jeng WangShang-Yi YangChar-Shin MiouChen-Mou ChengSolving multivariate polynomial systems over finite fields is an important problem in cryptography. For random F2 low-degree systems with equally many variables and equations, enumeration is more efficient than advanced solvers for all practical problem sizes. Whether the same holds over other fields remained an open problem.
We study and propose an exhaustive-search algorithm for low-degree systems over F3 which is suitable for parallelization. We implemented it on Graphics Processing Units (GPUs) and commodity CPUs. Its optimizations and differences from the F2 case are also analyzed.
We can solve 30+ quadratic equations in 30 variables on an NVIDIA GeForce GTX 980 Ti in 14 minutes; a cubic system takes 36 minutes. This significantly outperforms existing solvers. Using these results, we compare Gröbner bases vs. enumeration for polynomial systems over small fields as the sizes go up.
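As a baseline for what the paper accelerates, a naive exhaustive search over F_3 looks as follows. Efficient enumeration avoids re-evaluating each point from scratch (in the F2 case the classical trick is Gray-code enumeration of partial derivatives); the tiny instance below is made up for illustration.

from itertools import product

def eval_quad(poly, x):
    # poly = (Q, L, c): evaluate x^T Q x + L.x + c over F_3.
    Q, L, c = poly
    n = len(x)
    v = c
    for i in range(n):
        v += L[i] * x[i]
        for j in range(i, n):
            v += Q[i][j] * x[i] * x[j]
    return v % 3

def solve(polys, n):
    # Brute force: test all 3^n assignments.
    for x in product(range(3), repeat=n):
        if all(eval_quad(p, x) == 0 for p in polys):
            yield x

# Hypothetical toy system: two quadratic equations in three variables.
p1 = ([[1, 2, 0], [0, 0, 1], [0, 0, 2]], [1, 0, 2], 1)
p2 = ([[2, 1, 1], [0, 1, 0], [0, 0, 0]], [0, 2, 1], 2)
print(list(solve([p1, p2], 3)))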
2023-05-22T09:48:11+00:00https://creativecommons.org/publicdomain/zero/1.0/Bo-Yin YangWei-Jeng WangShang-Yi YangChar-Shin MiouChen-Mou Chenghttps://creativecommons.org/publicdomain/zero/1.0/
https://eprint.iacr.org/2022/422Verifiable Mix-Nets and Distributed Decryption for Voting from Lattice-Based Assumptions2023-05-22T08:42:48+00:00Diego F. AranhaCarsten BaumKristian GjøsteenTjerand SildeCryptographic voting protocols have recently seen much interest from practitioners due to their (planned) use in countries such as Estonia, Switzerland, France, and Australia. Practical protocols usually rely on tested designs such as the mixing-and-decryption paradigm. There, multiple servers verifiably shuffle encrypted ballots, which are then decrypted in a distributed manner. While several efficient protocols implementing this paradigm exist from discrete log-type assumptions, the situation is less clear for post-quantum alternatives such as lattices. This is because the design ideas of the discrete log-based voting protocols do not carry over easily to the lattice setting, due to specific problems such as noise growth and approximate relations.
In this work, we propose a new verifiable secret shuffle for BGV ciphertexts and a compatible verifiable distributed decryption protocol. The shuffle is based on an extension of a shuffle of commitments to known values which is combined with an amortized proof of correct re-randomization. The verifiable distributed decryption protocol uses noise drowning, proving the correctness of decryption steps in zero-knowledge. Both primitives are then used to instantiate the mixing-and-decryption electronic voting paradigm from lattice-based assumptions.
We give concrete parameters for our system, estimate the size of each component, and provide implementations of all important sub-protocols. Our experiments show that the shuffle and decryption protocols are suitable for use in real-world e-voting schemes.
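The mixing step itself, leaving aside the paper's lattice/BGV setting and its zero-knowledge proofs, can be illustrated with a toy re-encryption shuffle over ElGamal: each ciphertext is re-randomized and the list is permuted, so inputs and outputs are unlinkable yet decrypt to the same votes. All parameters below are deliberately naive placeholders, not secure choices.

import random

p = 2**127 - 1          # a Mersenne prime; toy group, not a vetted parameter
g = 3                   # toy generator choice
sk = random.randrange(2, p - 1)
h = pow(g, sk, p)       # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(ct):
    # Multiply in a fresh encryption of 1: same plaintext, unlinkable ciphertext.
    c1, c2 = ct
    t = random.randrange(2, p - 1)
    return (c1 * pow(g, t, p) % p, c2 * pow(h, t, p) % p)

def dec(ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - sk, p) % p   # c1^{-sk} via Fermat's little theorem

votes = [11, 22, 33, 44]
board = [enc(v) for v in votes]
mixed = [rerandomize(ct) for ct in board]
random.shuffle(mixed)                        # the (here unverified) shuffle step
assert sorted(dec(ct) for ct in mixed) == sorted(votes)

In the paper, this unverified permutation step is replaced by a verifiable shuffle with zero-knowledge proofs of correct re-randomization, which the toy omits entirely.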
2022-04-06T13:01:01+00:00https://creativecommons.org/licenses/by/4.0/Diego F. AranhaCarsten BaumKristian GjøsteenTjerand Sildehttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/697NFT Trades in Bitcoin with Off-chain Receipts2023-05-22T07:31:24+00:00Mehmet Sabir KirazEnrique LarraiaOwen VaughanNon-fungible tokens (NFTs) are digital representations of assets stored on a blockchain. They allow content creators to certify the authenticity of their digital assets and to transfer ownership in a transparent and decentralized way. Popular choices of NFT marketplace infrastructure include blockchains with smart contract functionality or layer-2 solutions. Surprisingly, researchers have largely avoided building NFT schemes over Bitcoin-like blockchains, most likely due to high transaction fees in the BTC network and the belief that Bitcoin lacks enough programmability to implement fair exchanges. In this work we fill this gap. We propose an NFT scheme where trades are settled in a single Bitcoin transaction, as opposed to executing complex smart contracts. We use zero-knowledge proofs (concretely, recursive SNARKs) to prove that two Bitcoin transactions, the issuance transaction $tx_0$ and the current trade transaction $tx_n$, are linked through a unique chain of transactions. These proofs function as “off-chain receipts” of ownership that can be transferred from the current owner to a new owner over an insecure channel. The proof receipt is short, its size is independent of the total number of trades $n$, and it can be updated incrementally by anyone at any time. Marketplaces typically require some degree of token-ownership delegation, e.g., escrow accounts, to execute trades between sellers and buyers who are not online concurrently, and to alleviate transaction fees they resort to off-chain trades. This raises concerns about the transparency and purportedly honest behaviour of marketplaces. We achieve fair and non-custodial trades by leveraging our off-chain receipts and letting the involved parties carefully sign the trade transaction with appropriate combinations of sighash flags.
2023-05-16T08:17:52+00:00https://creativecommons.org/licenses/by-nc-nd/4.0/Mehmet Sabir KirazEnrique LarraiaOwen Vaughanhttps://creativecommons.org/licenses/by-nc-nd/4.0/
https://eprint.iacr.org/2022/874Lattice Codes for Lattice-Based PKE2023-05-22T00:41:21+00:00Shanxiang LyuLing LiuCong LingJunzuo LaiHao ChenExisting error correction mechanisms in lattice-based public key encryption (PKE) rely on either naive modulation or its concatenation with error correction codes (ECC). This paper shows that lattice coding, as a joint ECC and modulation technique, can substitute for the naive modulation in existing lattice-based PKEs and achieve better correction performance. We begin by modeling the FrodoPKE protocol as a noisy point-to-point communication system, where the communication channel is similar to the additive white Gaussian noise (AWGN) channel. To employ lattice codes for this special channel, which hinges on hypercube shaping, we propose an efficient labeling function that converts between binary information bits and lattice codewords. The parameter sets of FrodoPKE are improved towards either higher security levels or smaller ciphertext sizes. For example, the proposed Frodo-1344-E$_\text{8}$ has a 10-bit classical security gain over Frodo-1344.
2022-07-04T13:46:05+00:00https://creativecommons.org/licenses/by/4.0/Shanxiang LyuLing LiuCong LingJunzuo LaiHao Chenhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/729Compact Lattice Gadget and Its Applications to Hash-and-Sign Signatures2023-05-21T14:41:47+00:00Yang YuHuiwen JiaXiaoyun WangLattice gadgets and the associated algorithms are the essential building blocks of lattice-based cryptography.
In the past decade, they have been applied to build versatile and powerful cryptosystems. However, the practical optimizations and designs of gadget-based schemes generally lag behind their theoretical constructions.
For example, gadget-based signatures have an elegant design and extend readily to more advanced primitives, but they are far less efficient than other lattice-based signatures.
This work aims to improve the practicality of gadget-based cryptosystems, with a focus on hash-and-sign signatures. To this end, we develop a compact gadget framework in which the gadget is a square matrix instead of the short, fat one used in previous constructions. To work with this compact gadget, we devise a specialized gadget sampler, called the semi-random sampler, to compute approximate preimages. It first deterministically computes the error and then randomly samples the preimage. We show that for uniformly random targets, the preimage and error distributions are simulatable without knowing the trapdoor, which ensures the security of the signature applications. Compared to the Gaussian-distributed errors in previous algorithms, the deterministic errors have a smaller size, which leads to a substantial gain in security and enables a practically working instantiation.
As applications, we present two practically efficient gadget-based signature schemes, based on NTRU and Ring-LWE respectively. The NTRU-based scheme offers efficiency comparable to Falcon and Mitaka and a simple implementation, without the need to generate an NTRU trapdoor. The LWE-based scheme also achieves desirable overall performance: it not only greatly outperforms the state-of-the-art LWE-based hash-and-sign signatures, but is even smaller than the LWE-based Fiat-Shamir signature scheme Dilithium. These results fill the long-standing gap in practical gadget-based signatures.
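For context, the classical "short and fat" gadget that the paper's square gadget replaces is the power-of-two vector g = (1, 2, 4, ..., 2^{k-1}), whose decomposition G^{-1} is plain bit decomposition. A minimal sketch with illustrative parameters (this is the baseline object, not the paper's compact gadget):

q, k = 2**10, 10   # illustrative modulus and gadget length

def g_inverse(t):
    # Bit-decompose t into a {0,1} vector x with sum x_i * 2^i = t (mod q).
    return [(t >> i) & 1 for i in range(k)]

t = 777
x = g_inverse(t)
assert sum(b << i for i, b in enumerate(x)) % q == t % q
print(t, "->", x)   # a short preimage of t under the gadget map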
2023-05-21T14:41:47+00:00https://creativecommons.org/licenses/by/4.0/Yang YuHuiwen JiaXiaoyun Wanghttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/863Effective and Efficient Masking with Low Noise using Small-Mersenne-Prime Ciphers2023-05-21T14:19:37+00:00Loïc MasurePierrick MéauxThorben MoosFrançois-Xavier StandaertEmbedded devices used in security applications are natural targets for physical attacks. Thus, enhancing their side-channel resistance is an important research challenge. A standard solution for this purpose is the use of Boolean masking schemes, as they are well adapted to current block ciphers with efficient bitslice representations. Boolean masking guarantees that the security of an implementation grows exponentially in the number of shares, under the assumption that leakages are sufficiently noisy (and independent). Unfortunately, it has been shown that this noise assumption is hardly met on low-end devices. In this paper, we therefore investigate techniques to mask cryptographic algorithms in such a way that their resistance can survive an almost complete lack of noise. Building on seminal theoretical results of Dziembowski et al., we put forward that arithmetic encodings in prime fields can reach this goal. We first exhibit the gains that such encodings lead to, thanks to a simulated information-theoretic analysis of their leakage (with up to six shares). We then provide figures showing that on platforms where optimized arithmetic adders and multipliers are readily available (i.e., most MCUs and FPGAs), performing masked operations in small-to-medium Mersenne-prime fields as opposed to binary extension fields will not lead to notable implementation overheads. We compile these observations into a new AES-like block cipher, called AES-prime, which is well-suited to illustrate the remarkable advantages of masking in prime fields. We also confirm the practical relevance of our findings by evaluating concrete software (ARM Cortex-M3) and hardware (Xilinx Spartan-6) implementations. Our experimental results show that security gains over Boolean masking (and, more generally, binary encodings) can reach orders of magnitude despite the same amount of information being leaked per share.
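The encoding at stake is ordinary additive sharing over a prime field. A minimal sketch with p = 2^13 - 1, a small Mersenne prime chosen here only for illustration:

import secrets

p = 2**13 - 1   # 8191, a Mersenne prime; illustrative parameter

def share(x, n):
    # Split x into n additive shares mod p; any n-1 shares are uniform.
    shares = [secrets.randbelow(p) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % p)
    return shares

def masked_add(a, b):
    # Addition is share-wise: no interaction between shares is needed.
    return [(x + y) % p for x, y in zip(a, b)]

x, y = 1234, 567
assert sum(masked_add(share(x, 3), share(y, 3))) % p == (x + y) % p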
2022-07-01T15:31:58+00:00https://creativecommons.org/licenses/by/4.0/Loïc MasurePierrick MéauxThorben MoosFrançois-Xavier Standaerthttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/1188High-order masking of NTRU2023-05-21T13:48:33+00:00Jean-Sebastien CoronFrançois GérardMatthias TrannoyRina ZeitounThe main protection against side-channel attacks consists in computing every function with multiple shares via the masking countermeasure. While the masking countermeasure was originally developed for securing block ciphers such as AES, the protection of lattice-based cryptosystems is often more challenging, because of the diversity of the underlying algorithms. In this paper, we introduce new gadgets for the high-order masking of the NTRU cryptosystem, with security proofs in the classical ISW probing model. We then describe the first fully masked implementation of the NTRU Key Encapsulation Mechanism submitted to NIST, including the key generation. To assess the practicality of our countermeasures, we provide a concrete implementation on the ARM Cortex-M3 architecture, and finally a t-test leakage evaluation.
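The ISW probing model referenced above comes with a canonical gadget: ISW multiplication (AND) over F_2. A compact Python rendition of that classical gadget follows; it illustrates the model, not the paper's NTRU-specific gadgets.

import secrets

def share_bit(b, n):
    # Split bit b into n XOR shares.
    sh = [secrets.randbits(1) for _ in range(n - 1)]
    last = b
    for s in sh:
        last ^= s
    return sh + [last]

def isw_and(xs, ys):
    # ISW multiplication gadget: shares of x, y in; shares of x AND y out.
    n = len(xs)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            r[j][i] = (r[i][j] ^ (xs[i] & ys[j])) ^ (xs[j] & ys[i])
    zs = []
    for i in range(n):
        z = xs[i] & ys[i]
        for j in range(n):
            if j != i:
                z ^= r[i][j]
        zs.append(z)
    return zs

x, y = 1, 1
zs = isw_and(share_bit(x, 4), share_bit(y, 4))
acc = 0
for z in zs:
    acc ^= z
assert acc == (x & y)   # XOR of output shares recovers x AND y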
2022-09-09T15:07:42+00:00https://creativecommons.org/licenses/by/4.0/Jean-Sebastien CoronFrançois GérardMatthias TrannoyRina Zeitounhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/728SoK: Distributed Randomness Beacons2023-05-21T13:44:00+00:00Kevin ChoiAathira ManojJoseph BonneauMotivated and inspired by the emergence of blockchains, many new protocols have recently been proposed for generating publicly verifiable randomness in a distributed yet secure fashion. These protocols work under different setups and assumptions, use various cryptographic tools, and entail unique trade-offs and characteristics. In this paper, we systematize the design of distributed randomness beacons (DRBs) as well as the cryptographic building blocks they rely on. We evaluate protocols on two key security properties, unbiasability and unpredictability, and discuss common attack vectors for predicting or biasing the beacon output and the countermeasures employed by protocols. We also compare protocols by communication and computational efficiency. Finally, we provide insights on the applicability of different protocols in various deployment scenarios and highlight possible directions for further research.
2023-05-21T13:44:00+00:00https://creativecommons.org/licenses/by/4.0/Kevin ChoiAathira ManojJoseph Bonneauhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/107The Tip5 Hash Function for Recursive STARKs2023-05-21T08:32:38+00:00Alan SzepieniecAlexander LemmensJan Ferdinand SauerBobbin ThreadbareAl-KindiThis paper specifies a new arithmetization-oriented hash function called Tip5. It uses the SHARK design strategy with low-degree power maps in combination with lookup tables, and is tailored to the field with $p=2^{64}-2^{32}+1$ elements.
The context motivating this design is the recursive verification of STARKs. This context imposes particular design constraints, and the hash function's arithmetization is therefore discussed at length.
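One such constraint is cheap field arithmetic. Since 2^64 ≡ 2^32 - 1 and 2^96 ≡ -1 modulo p = 2^64 - 2^32 + 1, a 128-bit product folds back into the field with a few shifts and adds, which the following sketch verifies; the final '% p' stands in for the small conditional correction a real implementation would use.

import random

p = 2**64 - 2**32 + 1
M = (1 << 64) - 1

def mul_mod_p(a, b):
    x = a * b
    x_lo, x_hi = x & M, x >> 64
    h_lo, h_hi = x_hi & 0xFFFFFFFF, x_hi >> 32
    # Fold using 2^64 ≡ 2^32 - 1 and 2^96 ≡ -1 (mod p).
    r = x_lo + h_lo * ((1 << 32) - 1) - h_hi
    return r % p

for _ in range(1000):
    a, b = random.randrange(p), random.randrange(p)
    assert mul_mod_p(a, b) == a * b % p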
2023-01-27T20:12:23+00:00https://creativecommons.org/licenses/by/4.0/Alan SzepieniecAlexander LemmensJan Ferdinand SauerBobbin ThreadbareAl-Kindihttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/1707Private Access Control for Function Secret Sharing2023-05-21T01:41:45+00:00Sacha Servan-SchreiberSimon BeyzerovEli YablonHyojae ParkFunction Secret Sharing (FSS; Eurocrypt 2015) allows a dealer to share a function f with two or more evaluators. Given secret shares of a function f, the evaluators can locally compute secret shares of f(x) on an input x, without learning information about f.
In this paper, we initiate the study of access control for FSS. Given the shares of f, the evaluators can ensure that the dealer is authorized to share the provided function. For a function family F and an access control list defined over the family, the evaluators receiving the shares of f ∈ F can efficiently check that the dealer knows the access key for f.
This model enables new applications of FSS, such as:
– anonymous authentication in a multi-party setting,
– access control in private databases, and
– authentication and spam prevention in anonymous communication systems.
Our definitions and constructions abstract and improve the concrete efficiency of several recent systems that implement ad-hoc mechanisms for access control over FSS. The main building block behind our efficiency improvement is a discrete-logarithm zero-knowledge proof-of-knowledge over secret-shared elements, which may be of independent interest.
We evaluate our constructions and show a 50–70× reduction in computational overhead compared to existing access control techniques used in anonymous communication. In other applications, such as private databases, the processing cost of introducing access control is only 1.5–3× when amortized over databases with 500,000 or more items.
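For intuition about the FSS interface that the access-control layer wraps, here is a deliberately naive FSS with full truth-table keys. Real FSS schemes, including those this paper targets, achieve keys exponentially smaller than this; only the Gen/Eval correctness interface is faithful.

import secrets

DOM, MOD = 16, 2**32   # tiny illustrative domain and share modulus

def gen(f):
    # Key 0 is a random table; key 1 is the complement so shares sum to f(x).
    k0 = [secrets.randbelow(MOD) for _ in range(DOM)]
    k1 = [(f(x) - k0[x]) % MOD for x in range(DOM)]
    return k0, k1

def eval_share(key, x):
    return key[x]

f = lambda x: 1 if x == 7 else 0   # a point function
k0, k1 = gen(f)
for x in range(DOM):
    # Each evaluator's share alone is uniformly random and reveals nothing about f.
    assert (eval_share(k0, x) + eval_share(k1, x)) % MOD == f(x)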
2022-12-09T10:22:58+00:00https://creativecommons.org/licenses/by/4.0/Sacha Servan-SchreiberSimon BeyzerovEli YablonHyojae Parkhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2021/1615High-order Polynomial Comparison and Masking Lattice-based Encryption2023-05-20T17:28:56+00:00Jean-Sébastien CoronFrançois GérardSimon MontoyaRina ZeitounThe main protection against side-channel attacks consists in computing every function with multiple shares via the masking countermeasure. For IND-CCA secure lattice-based encryption schemes, the masking of the decryption algorithm requires the high-order computation of a polynomial comparison. In this paper, we describe and evaluate a number of different techniques for such high-order comparison, always with a security proof in the ISW probing model. As an application, we describe the full high-order masking of the NIST finalists Kyber and Saber, with a concrete implementation.
2021-12-14T09:36:28+00:00https://creativecommons.org/licenses/by/4.0/Jean-Sébastien CoronFrançois GérardSimon MontoyaRina Zeitounhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2021/1314High-order Table-based Conversion Algorithms and Masking Lattice-based Encryption2023-05-20T17:17:42+00:00Jean-Sébastien CoronFrançois GérardSimon MontoyaRina ZeitounMasking is the main countermeasure against side-channel attacks on embedded devices. For cryptographic algorithms that combine Boolean and arithmetic masking, one must therefore convert between the two types of masking, without leaking additional information to the attacker. In this paper we describe a new high-order conversion algorithm between Boolean and arithmetic masking, based on table recomputation, and provably secure in the ISW probing model. We show that our technique is particularly efficient for masking structured LWE encryption schemes such as Kyber and Saber. In particular, for Kyber IND-CPA decryption, we obtain an order of magnitude improvement compared to existing techniques.
2021-09-30T07:25:45+00:00https://creativecommons.org/licenses/by/4.0/Jean-Sébastien CoronFrançois GérardSimon MontoyaRina Zeitounhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/727Safeguarding Physical Sneaker Sale Through a Decentralized Medium2023-05-20T17:02:41+00:00Marwan ZeggariAydin AbadiRenaud LambiotteMohamad KassabSneakers have been designated the most counterfeited fashion item online, with three times more risk in a trade than any other fashion purchase. As the market expands, the current sneaker scene displays several vulnerabilities and trust flaws, mostly related to the legitimacy of assets or actors. In this paper, we investigate various blockchain-based mechanisms to address these large-scale trust issues. We argue that (i) assets pre-certified and tracked through non-fungible tokens can ensure the genuine nature of an asset and authenticate its owner more effectively during peer-to-peer trading across a marketplace; (ii) a game-theoretic system with economic incentives for participating users can greatly reduce the rate of online fraud and address missed delivery deadlines; and (iii) a decentralized dispute resolution system biased in favour of an honest party can resolve potential conflicts more reliably.
2023-05-20T17:02:41+00:00https://creativecommons.org/licenses/by/4.0/Marwan ZeggariAydin AbadiRenaud LambiotteMohamad Kassabhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/726A Note on ``A Secure Anonymous D2D Mutual Authentication and Key Agreement Protocol for IoT''2023-05-19T22:38:16+00:00Zhengjun CaoLihua LiuWe show that the key agreement scheme [Internet of Things, 2022(18): 100493] is flawed: (1) it neglects the structure of an elliptic curve and presents some false computations; (2) the scheme is insecure against a key compromise impersonation attack.
2023-05-19T22:38:16+00:00https://creativecommons.org/publicdomain/zero/1.0/Zhengjun CaoLihua Liuhttps://creativecommons.org/publicdomain/zero/1.0/
https://eprint.iacr.org/2021/1014SoC Security Properties and Rules2023-05-19T18:37:53+00:00Nusrat Farzana DipuFarimah FarahmandiMark TehranipoorThe security of a system-on-chip (SoC) can be weakened by exploiting the potential vulnerabilities of the intellectual property (IP) cores used to implement the design and of the interaction among the IPs. These vulnerabilities not only increase the security verification effort but can also increase design complexity and time-to-market. Design and verification engineers should be knowledgeable about potential vulnerabilities and threat models early in the SoC design life cycle to protect their designs from potential attacks. However, there is currently no publicly available repository that can be used as a base to develop such knowledge in practice. To address this need, we develop the ‘SoC Security Property/Rule Database’ and make it publicly available to all researchers to facilitate and extend security verification efforts. The database gathers a comprehensive list of security vulnerabilities and properties. It also provides, for each vulnerability, the corresponding design behavior that should hold in the design to ensure the vulnerability does not exist. The database contains 67 different vulnerability scenarios, for which 105 corresponding security properties have been developed to date. This paper reviews the existing database and presents the methodologies we used to gather vulnerabilities and develop such comprehensive security properties. Additionally, this paper discusses the challenges of security verification and how this database can be utilized to overcome them.
2021-08-06T07:23:22+00:00https://creativecommons.org/licenses/by/4.0/Nusrat Farzana DipuFarimah FarahmandiMark Tehranipoorhttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/725On Perfect Linear Approximations and Differentials over Two-Round SPNs2023-05-19T18:16:58+00:00Christof BeierlePatrick FelkeGregor LeanderPatrick NeumannLukas StennesRecent constructions of (tweakable) block ciphers with an embedded cryptographic backdoor relied on the existence of probability-one differentials or perfect (non-)linear approximations over a reduced-round version of the primitive. In this work, we study how the existence of probability-one differentials or perfect linear approximations over two rounds of a substitution-permutation network can be avoided by design. More precisely, we develop criteria on the S-box and the linear layer that guarantee the absence of probability-one differentials for all keys. We further present an algorithm that allows us to efficiently exclude the existence of keys for which there exists a perfect linear approximation.
2023-05-19T18:16:58+00:00https://creativecommons.org/licenses/by/4.0/Christof BeierlePatrick FelkeGregor LeanderPatrick NeumannLukas Stenneshttps://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2023/724Not so Difficult in the End: Breaking the ASCADv2 Dataset2023-05-19T16:49:52+00:00Lichao WuGuilherme PerinStjepan PicekThe ASCADv2 dataset ranks among the most secure publicly available datasets today. Two layers of countermeasures protect it: affine masking and shuffling, and current attack approaches rely on strong assumptions. Specifically, besides having access to the source code, an adversary also requires prior knowledge of the random shares. This paper forgoes reliance on such knowledge and proposes two attack approaches based on vulnerabilities of the affine mask implementation. As a result, the first attack can reliably retrieve all secret keys in less than a minute.
https://eprint.iacr.org/2023/724
Not so Difficult in the End: Breaking the ASCADv2 Dataset
Lichao Wu, Guilherme Perin, Stjepan Picek
Published 2023-05-19T16:49:52+00:00. License: https://creativecommons.org/licenses/by/4.0/

The ASCADv2 dataset ranks among the most secure publicly available datasets today. Two layers of countermeasures protect it: affine masking and shuffling. Current attack approaches rely on strong assumptions: specifically, besides having access to the source code, an adversary also requires prior knowledge of random shares. This paper forgoes reliance on such knowledge and proposes two attack approaches based on the vulnerabilities of the affine mask implementation. As a result, the first attack can reliably retrieve all secret keys in less than a minute. Although the second attack is not entirely successful in recovering all keys, we believe more traces would help make such an attack fully functional.

https://eprint.iacr.org/2023/723
Non-Interactive Commitment from Non-Transitive Group Actions
Giuseppe D'Alconzo, Andrea Flamini, Andrea Gangemi
Published 2023-05-19T15:38:04+00:00. License: https://creativecommons.org/licenses/by/4.0/

Group actions are becoming a viable option for post-quantum cryptography assumptions. Indeed, in recent years some works have shown how to construct primitives from assumptions based on isogenies of elliptic curves, such as CSIDH, on tensors, or on code equivalence problems. This paper presents a bit commitment scheme, built on non-transitive group actions, which is shown to be secure in the standard model under the decisional Group Action Inversion Problem. In particular, the commitment is computationally hiding and perfectly binding, and is obtained from a novel and general framework that exploits the properties of some orbit-invariant functions, together with group actions. Previous constructions depend on an interaction between the sender and the receiver in the commitment phase, which results in an interactive bit commitment. We instead propose the first non-interactive bit commitment based on group actions. We then show that, when the sender is honest, the constructed commitment enjoys an additional feature, i.e., it is possible to tell whether two commitments were obtained from the same input, without revealing the input. We define the security properties that such a construction must satisfy, and we call this primitive linkable commitment. Finally, as an example, an instantiation of the scheme using tensors with coefficients in a finite field is provided. In this case, the invariant function is the computation of the rank of a tensor, and the cryptographic assumption is related to the Tensor Isomorphism problem.
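To see the mechanics of the orbit-invariant idea, here is a toy of our own, NOT the paper's scheme: pairs of invertible matrices act on n x n matrices, and the orbits are exactly the rank classes. Committing to a bit applies a random group element to a base matrix whose rank encodes the bit. Binding is perfect because the action cannot change the invariant; the toy is not hiding, though, since matrix rank is easy to compute, whereas the paper uses tensors, whose rank is believed hard to compute.

```python
# Toy sketch (ours, NOT the paper's construction): a commitment from a
# non-transitive group action.  GL_n x GL_n acts on n x n matrices by
# (A, B) * M = A @ M @ B; the orbit invariant is rank(M).
# Perfectly binding: the action preserves rank, so a commitment to bit 1
# (rank 2) can never be opened as bit 0 (rank 4).  NOT hiding: matrix rank
# is efficiently computable, unlike the tensor rank used in the paper.
import numpy as np

rng = np.random.default_rng(1)
N = 4
BASE = {0: np.eye(N, dtype=int), 1: np.diag([1, 1, 0, 0])}  # ranks 4 and 2

def random_invertible():
    while True:  # small integer matrices, rejection-sampled to be invertible
        A = rng.integers(-3, 4, (N, N))
        if abs(np.linalg.det(A)) > 0.5:  # integer determinant, so nonzero
            return A

def commit(bit):
    A, B = random_invertible(), random_invertible()
    return A @ BASE[bit] @ B, (A, B)          # (commitment, opening)

def verify(com, bit, opening):
    A, B = opening
    return np.array_equal(com, A @ BASE[bit] @ B)

com, op = commit(1)
assert verify(com, 1, op) and not verify(com, 0, op)
print(np.linalg.matrix_rank(com))             # 2: the invariant leaks the bit
```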
https://eprint.iacr.org/2022/376
Universally Composable End-to-End Secure Messaging
Ran Canetti, Palak Jain, Marika Swanberg, Mayank Varia
Published 2022-03-22T13:28:35+00:00, updated 2023-05-19T15:19:37+00:00. License: https://creativecommons.org/licenses/by/4.0/

We model and analyze the Signal end-to-end secure messaging protocol within the Universal Composability (UC) framework. Specifically:
(1) We formulate an ideal functionality that captures end-to-end secure messaging in a setting with Public Key Infrastructure (PKI) and an untrusted server, against an adversary that has full control over the network and can adaptively and momentarily compromise parties at any time, obtaining their entire internal states. Our analysis captures the forward secrecy and recovery-of-security properties of Signal and the conditions under which they break.
(2) We model the main components of the Signal architecture (PKI and long-term keys, the backbone continuous-key-exchange or "asymmetric ratchet", epoch-level symmetric ratchets, authenticated encryption) as individual ideal functionalities. These components are realized and analyzed separately, and then composed using the UC and Global-State UC theorems.
(3) We show how the ideal functionalities representing these components can be realized using standard cryptographic primitives with minimal hardness assumptions.
Our modeling introduces additional innovations that enable arguing about the security of Signal, irrespective of the underlying communication medium, and facilitate the secure composition of dynamically generated modules that share state. These features, in conjunction with the basic modularity of the UC framework, will hopefully facilitate the use of both Signal-as-a-whole and its individual components within cryptographic applications.
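For readers unfamiliar with the symmetric building block being modeled: an epoch-level symmetric ratchet is a KDF chain in which each step derives a fresh message key and a replacement chain key, so a compromised state cannot recompute past message keys. A minimal sketch (ours, using the constants of the Signal specification's symmetric-key ratchet, not the paper's ideal functionalities):

```python
# Minimal symmetric KDF ratchet (illustration, not the paper's UC model).
# Deleting the old chain key after each step yields forward secrecy at the
# symmetric level: a party compromised at step i cannot recompute keys < i.
import hmac, hashlib

def kdf_step(chain_key: bytes) -> tuple[bytes, bytes]:
    # Domain-separated HMAC calls, as in Signal's symmetric-key ratchet.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

ck = b"\x00" * 32   # in Signal, seeded from the asymmetric (DH) ratchet
for i in range(3):
    ck, mk = kdf_step(ck)
    print(i, mk.hex()[:16])
```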
https://eprint.iacr.org/2023/722
Composing Bridges
Mugurel Barcau, Vicentiu Pasol, George C Turcas
Published 2023-05-19T07:38:37+00:00. License: https://creativecommons.org/licenses/by/4.0/

The present work builds on previous investigations of the authors (and their collaborators) regarding bridges, a certain type of morphism between encryption schemes, taking a step forward in developing a (category-theory) language for studying relations between encryption schemes. Here we analyse the conditions under which bridges can be performed sequentially, formalizing the notion of composability. One of our results gives a sufficient condition for a pair of bridges to be composable. We illustrate that composing two bridges, each independently satisfying a previously established IND-CPA security definition, can actually lead to an insecure bridge. Our main result gives a sufficient condition that a pair of secure composable bridges should satisfy in order for their composition to be a secure bridge. We also introduce the concept of a complete bridge and show that it is connected to the notion of Fully composable Homomorphic Encryption (FcHE), recently considered by Micciancio. Moreover, we show that a result of Micciancio which gives a construction of FcHE schemes can be phrased in the language of complete bridges, where his insights can be formalised in greater generality.

https://eprint.iacr.org/2021/1398
Universally Composable Almost-Everywhere Secure Computation
Nishanth Chandran, Pouyan Forghani, Juan Garay, Rafail Ostrovsky, Rutvik Patel, Vassilis Zikas
Published 2021-10-18T06:13:31+00:00, updated 2023-05-19T05:31:41+00:00. License: https://creativecommons.org/licenses/by/4.0/

Most existing work on secure multi-party computation (MPC) ignores a key idiosyncrasy of modern communication networks: there are a limited number of communication paths between any two nodes, many of which might even be corrupted. The problem becomes particularly acute in the information-theoretic setting, where the lack of trusted setups (and the cryptographic primitives they enable) makes communication over sparse networks more challenging. The work by Garay and Ostrovsky [EUROCRYPT'08] on almost-everywhere MPC (AE-MPC) introduced ``best-possible security'' properties for MPC over such incomplete networks, where necessarily some of the honest parties may be excluded from the computation.

In this work, we provide a universally composable definition of almost-everywhere security, which allows us to automatically and accurately capture the guarantees of AE-MPC (as well as AE-communication, the analogous ``best-possible security'' version of secure communication) in the Universal Composability (UC) framework of Canetti. Our results offer the first simulation-based treatment of this important but under-investigated problem, along with the first simulation-based proof of AE-MPC. To achieve that goal, we state and prove a general composition theorem, which makes precise the level or ``quality'' of AE-security that is obtained when a protocol's hybrids are replaced with almost-everywhere components.
https://eprint.iacr.org/2023/381
Security of Nakamoto Consensus under Congestion
Lucianna Kiffer, Joachim Neu, Srivatsan Sridhar, Aviv Zohar, David Tse
Published 2023-03-16T06:26:28+00:00, updated 2023-05-19T01:25:05+00:00. License: https://creativecommons.org/licenses/by/4.0/

Nakamoto consensus (NC) powers major proof-of-work (PoW) and proof-of-stake (PoS) blockchains such as Bitcoin or Cardano. Given a network of nodes with certain communication and computation capacities, against what fraction of adversarial power (the resilience) is Nakamoto consensus secure for a given block production rate? Prior security analyses of NC used a bounded delay model which does not capture network congestion resulting from high block production rates, bursty release of adversarial blocks, and, in PoS, spamming due to equivocations. For PoW, we find a new attack, called the teasing attack, that exploits congestion to increase the time taken to download and verify blocks, thereby succeeding at lower adversarial power than the private attack, which was deemed to be the worst-case attack in prior analyses. By adopting a bounded bandwidth model to capture congestion, and through an improved analysis method, we identify the resilience of PoW NC for a given block production rate. In PoS, we augment our attack with equivocations to further increase congestion, making the vanilla PoS NC protocol insecure against any adversarial power except at very low block production rates. To counter equivocation spamming in PoS, we present a new NC-style protocol, Sanitizing PoS (SaPoS), which achieves the same resilience as PoW NC.
https://eprint.iacr.org/2021/1147
Clockwork Finance: Automated Analysis of Economic Security in Smart Contracts
Kushal Babel, Philip Daian, Mahimna Kelkar, Ari Juels
Published 2021-09-10T06:54:17+00:00, updated 2023-05-18T22:31:45+00:00. License: https://creativecommons.org/licenses/by/4.0/

We introduce the Clockwork Finance Framework (CFF), a general-purpose, formal verification framework for mechanized reasoning about the economic security properties of composed decentralized-finance (DeFi) smart contracts.
CFF features three key properties. It is contract complete, meaning that it can model any smart contract platform and all its contracts—Turing complete or otherwise. It does so with asymptotically constant model overhead. It is also attack-exhaustive by construction, meaning that it can automatically and mechanically extract all possible economic attacks on users’ cryptocurrency across modeled contracts.
Thanks to these properties, CFF can support multiple goals: economic security analysis of contracts by developers, analysis of DeFi trading risks by users, fees UX, and optimization of arbitrage opportunities by bots or miners. Because CFF offers composability, it can support these goals with reasoning over any desired set of potentially interacting smart contract models.
We instantiate CFF as an executable model for Ethereum contracts that incorporates a state-of-the-art deductive verifier. Building on previous work, we introduce extractable value (EV), a new formal notion of economic security in composed DeFi contracts that is both a basis for CFF and of general interest.
We construct modular, human-readable, composable CFF models of four popular, deployed DeFi protocols in Ethereum: Uniswap, Uniswap V2, Sushiswap, and MakerDAO, representing a combined 24 billion USD in value as of March 2022. We use these models, along with other common models such as flash loans, airdrops, and voting, to show experimentally that CFF is practical and can drive useful, data-based, EV-based insights from real-world transaction activity.
Without any explicitly programmed attack strategies, CFF uncovers on average an expected $56 million of EV per month in the recent past.

https://eprint.iacr.org/2022/1369
Network-Agnostic Security Comes (Almost) for Free in DKG and MPC
Renas Bacho, Daniel Collins, Chen-Da Liu-Zhang, Julian Loss
Published 2022-10-11T19:33:32+00:00, updated 2023-05-18T20:36:22+00:00. License: https://creativecommons.org/licenses/by/4.0/

Distributed key generation (DKG) protocols are an essential building block for threshold cryptosystems. Many DKG protocols tolerate up to $t_s<n/2$ corruptions assuming a well-behaved synchronous network, but become insecure as soon as the network delay becomes unstable. On the other hand, solutions in the asynchronous model operate under arbitrary network conditions, but only tolerate $t_a<n/3$ corruptions, even when the network is well-behaved.
In this work, we ask whether one can design a protocol that achieves security guarantees in either scenario. We show a complete characterization of network-agnostic DKG protocols, showing that the tight bound is $t_a+2t_s <n$. As a second contribution, we provide an optimized version of the network-agnostic MPC protocol by Blum, Liu-Zhang and Loss [CRYPTO'20] which improves over the communication complexity of their protocol by a linear factor. Moreover, using our DKG protocol, we can instantiate our MPC protocol in the plain PKI model, i.e., without the need to assume an expensive trusted setup.
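For a quick sanity check of the bound (our illustration, not an example from the paper): with $n = 10$ parties, $t_a + 2t_s < n$ admits $(t_s, t_a) = (4, 1)$, i.e., up to 4 corruptions are tolerated if the network turns out to be synchronous and up to 1 if it is fully asynchronous; strengthening this to $(t_s, t_a) = (4, 2)$ already fails, since $2 + 2 \cdot 4 = 10 \not< 10$.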
Our protocols incur the same communication complexity as state-of-the-art DKG and MPC protocols with optimal resilience in their respective purely synchronous and asynchronous settings, thereby showing that network-agnostic security comes (almost) for free.

https://eprint.iacr.org/2023/720
MUSES: Efficient Multi-User Searchable Encrypted Database
Tung Le, Rouzbeh Behnia, Jorge Guajardo, Thang Hoang
Published 2023-05-18T20:04:21+00:00. License: https://creativecommons.org/licenses/by-nc/4.0/

Searchable encrypted systems enable privacy-preserving keyword search on encrypted data. Symmetric Searchable Encryption (SSE) achieves high security (e.g., forward privacy) and efficiency (i.e., sublinear search), but it only supports a single user. Public Key Searchable Encryption (PEKS) supports multi-user settings; however, it suffers from inherent security limitations, such as vulnerability to keyword-guessing attacks and the lack of forward privacy. Recent work has combined SSE and PEKS to achieve the best of both worlds: supporting multi-user settings and providing forward privacy while retaining sublinear complexity. However, despite its elegant design, the existing hybrid scheme inherits some of the security limitations of the underlying paradigms (e.g., pattern leakage, keyword guessing) and might not be suitable for certain applications due to costly public-key operations (e.g., bilinear pairing). In this paper, we propose MUSES, a new multi-user encrypted search scheme that addresses the limitations of the existing hybrid design while offering user efficiency. Specifically, MUSES permits multi-user functionalities (reader/writer separation, permission revocation), prevents keyword-guessing attacks, protects search/result patterns, achieves forward/backward privacy, and features minimal user overhead. In MUSES, we demonstrate a unique incorporation of various state-of-the-art distributed cryptographic protocols, including Distributed Point Functions, Distributed PRFs, and Secret-Shared Shuffle. We also introduce a new oblivious shuffle protocol for the general L-party setting with a dishonest majority, which can be of independent interest. Our experimental results indicate that the keyword search in our scheme is two orders of magnitude faster, with 13× lower user bandwidth overhead, than the state-of-the-art.
https://eprint.iacr.org/2023/719
Lower Bounds for Lattice-based Compact Functional Encryption
Erkan Tairi, Akın Ünal
Published 2023-05-18T17:17:09+00:00. License: https://creativecommons.org/licenses/by/4.0/

Functional encryption (FE) is a primitive where the holder of a master secret key can control which functions a user can evaluate on encrypted data. It is a powerful primitive that even implies indistinguishability obfuscation (iO), given sufficiently compact ciphertexts (Ananth-Jain, CRYPTO'15 and Bitansky-Vaikuntanathan, FOCS'15). However, despite being extensively studied, there are FE schemes, such as function-hiding inner-product FE (Bishop-Jain-Kowalczyk, AC'15, Abdalla-Catalano-Fiore-Gay-Ursu, CRYPTO'18) and compact quadratic FE (Baltico-Catalano-Fiore-Gay, Lin, CRYPTO'17), that can only be realized using pairings. This raises the question of whether there are mathematical barriers that hinder us from realizing these FE schemes from other assumptions.
In this paper, we study the difficulty of constructing lattice-based compact FE. We generalize the impossibility results of Ünal (EC'20) for lattice-based function-hiding FE and extend them to the case of compact FE. Concretely, we prove lower bounds for lattice-based compact FE schemes which meet some (natural) algebraic restrictions at encryption and decryption, and which have messages and ciphertexts of constant dimensions. We see our results as important indications of why it is hard to construct lattice-based FE schemes for new functionalities, and of which mathematical barriers have to be overcome.
https://eprint.iacr.org/2023/718
Zero-Knowledge Proofs from the Action Subgraph
Giacomo Borin, Edoardo Persichetti, Paolo Santini
Published 2023-05-18T17:03:10+00:00. License: https://creativecommons.org/licenses/by/4.0/

In this work, we describe a technique to amplify the soundness of zero-knowledge proofs of knowledge for cryptographic group actions.

https://eprint.iacr.org/2023/568
Enhancing the Privacy of Machine Learning via faster arithmetic over Torus FHE
Marc Titus Trifan, Alexandru Nicolau, Alexander Veidenbaum
Published 2023-04-22T00:32:33+00:00, updated 2023-05-18T16:51:54+00:00. License: https://creativecommons.org/licenses/by/4.0/

The increased popularity of Machine Learning as a Service (MLaaS) makes the privacy of user data and network weights a critical concern. Using Torus FHE (TFHE) offers a solution for privacy-preserving computation in a cloud environment by allowing computation directly over encrypted data. However, software TFHE implementations of the ciphertext-ciphertext multiplication needed when both input data and weights are encrypted are either lacking or too slow. This paper proposes a new way to improve the performance of such multiplication by applying carry-save addition. Its theoretical speedup is proportional to the bit width of the plaintext integer operands. This also speeds up multi-operand summation. A speedup of 15x is obtained for 16-bit multiplication on a 64-core processor, when compared to previous results. Multiplication also becomes more than twice as fast on a GPU if our approach is utilized. This leads to much faster dot product and convolution computations, which combine multiplications and a multi-operand sum. A 45x speedup is achieved for a 16-bit, 32-element dot product and a ~30x speedup for a convolution with a 32x32 filter size.
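Carry-save addition is a classical technique; the plaintext sketch below (ours, for illustration) shows why it helps here: three addends are compressed into two using only positionwise full-adder logic, so there is no carry chain whose sequential propagation would be expensive when every bit is a ciphertext.

```python
# Plain carry-save addition (classical technique; plaintext illustration).
# Three bit-vectors are reduced to a sum vector and a carry vector with
# purely local, per-position operations: no carry propagation.

def carry_save(a, b, c):  # bit lists, least-significant bit first
    sums = [x ^ y ^ z for x, y, z in zip(a, b, c)]
    carries = [0] + [(x & y) | (x & z) | (y & z) for x, y, z in zip(a, b, c)]
    return sums + [0], carries  # carries are shifted up by one position

def to_bits(x, w): return [(x >> i) & 1 for i in range(w)]
def to_int(bits): return sum(bit << i for i, bit in enumerate(bits))

a, b, c, w = 13, 7, 9, 6
s, cy = carry_save(to_bits(a, w), to_bits(b, w), to_bits(c, w))
assert to_int(s) + to_int(cy) == a + b + c  # one final addition remains
print(to_int(s), to_int(cy))                # 3 26
```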
https://eprint.iacr.org/2022/1697
RISC-V Instruction Set Extensions for Lightweight Symmetric Cryptography
Hao Cheng, Johann Großschädl, Ben Marshall, Dan Page, Thinh Pham
Published 2022-12-07T13:14:54+00:00, updated 2023-05-18T15:08:52+00:00. License: https://creativecommons.org/licenses/by/4.0/

The NIST LightWeight Cryptography (LWC) selection process aims to standardise cryptographic functionality which is suitable for resource-constrained devices. Since the outcome is likely to have significant, long-lived impact, careful evaluation of each submission with respect to metrics explicitly outlined in the call is imperative. Beyond the robustness of submissions against cryptanalytic attack, metrics related to their implementation (e.g., execution latency and memory footprint) form an important example. Aiming to provide evidence allowing richer evaluation with respect to such metrics, this paper presents the design, implementation, and evaluation of one separate Instruction Set Extension (ISE) for each of the 10 LWC final-round submissions, namely Ascon, Elephant, GIFT-COFB, Grain-128AEADv2, ISAP, PHOTON-Beetle, Romulus, Sparkle, TinyJAMBU, and Xoodyak; although we base the work on use of RISC-V, we argue that it provides more general insight.

https://eprint.iacr.org/2022/638
Impossibilities in Succinct Arguments: Black-box Extraction and More
Matteo Campanelli, Chaya Ganesh, Hamidreza Khoshakhlagh, Janno Siim
Published 2022-05-24T05:59:47+00:00, updated 2023-05-18T14:51:25+00:00. License: https://creativecommons.org/licenses/by/4.0/

The celebrated result by Gentry and Wichs established a theoretical barrier for succinct non-interactive arguments (SNARGs), showing that for (expressive enough) hard-on-average languages, we must assume non-falsifiable assumptions.
We further investigate those barriers by showing new negative and positive results related to the proof size.
1. We start by formalizing a folklore lower bound for the proof size of black-box extractable arguments based on the hardness of the language. This separates knowledge-sound SNARGs (SNARKs) in the random oracle model (that can have black-box extraction) and those in the standard model.
2. We find a positive result in the non-adaptive setting. Under the existence of non-adaptively sound SNARGs (without extractability) and from standard assumptions, it is possible to build SNARKs with black-box extractability for a non-trivial subset of NP.
3. On the other hand, we show that (under some mild assumptions) all NP languages cannot have SNARKs with black-box extractability even in the non-adaptive setting.
4. The Gentry-Wichs result does not account for the preprocessing model, under which several efficient constructions fall. We show that, also in the preprocessing model, it is impossible to construct SNARGs that rely on falsifiable assumptions in a black-box way.
Along the way, we identify a class of non-trivial languages, which we dub "trapdoor languages", that bypass some of these impossibility results.

https://eprint.iacr.org/2023/717
Generic Error SDP and Generic Error CVE
Felice Manganiello, Freeman Slaughter
Published 2023-05-18T14:34:52+00:00. License: https://creativecommons.org/licenses/by/4.0/

This paper introduces a new family of CVE schemes built from generic errors (GE-CVE) and identifies a vulnerability therein. To introduce the problem, we generalize the concept of error sets beyond those defined by a metric, and use the set-theoretic difference operator to characterize when these error sets are detectable or correctable by codes. We prove the existence of a general, metric-less form of the Gilbert-Varshamov bound, and show that, as in the Hamming setting, a random code corrects a generic error set with overwhelming probability. We define the generic error SDP (GE-SDP), which is NP-hard, and use its hardness to demonstrate the security of GE-CVE. We prove that these schemes are complete, sound, and zero-knowledge. Finally, we identify a vulnerability of the GE-SDP for codes defined over large extension fields and without a very high rate. We show that certain GE-CVE parameters suffer from this vulnerability, notably the restricted CVE scheme.
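For orientation (our illustration): the classical Hamming-metric Gilbert-Varshamov bound, the special case the paper generalizes to arbitrary error sets, guarantees a q-ary code of length n and minimum distance d with at least q^n / V_q(n, d-1) codewords, where V_q(n, r) is the volume of a Hamming ball of radius r.

```python
# Classical (Hamming-metric) Gilbert-Varshamov bound, the special case that
# the paper's metric-less version generalizes (our illustration).
from math import comb

def ball_volume(q, n, r):
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def gv_lower_bound(q, n, d):
    return q ** n // ball_volume(q, n, d - 1)

print(gv_lower_bound(2, 10, 3))  # 18: a binary code of length 10, distance 3,
                                 # and at least 18 codewords exists
```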
https://eprint.iacr.org/2023/687
SoK: Delay-based Cryptography
Liam Medley, Angelique Faye Loe, Elizabeth A. Quaglia
Published 2023-05-15T09:39:08+00:00, updated 2023-05-18T12:26:04+00:00. License: https://creativecommons.org/licenses/by/4.0/

In this work, we provide a systematisation of knowledge of delay-based cryptography, in which we discuss and compare the existing primitives within cryptography that utilise a time-delay.
We start by considering the role of time within cryptography, explaining broadly what a delay aimed to achieve at its inception and what it achieves now, in the modern age.
We then move on to describing the underlying assumptions used to achieve these goals, and analyse topics including trust, decentralisation and concrete methods to implement a delay.
We then survey the existing primitives, discussing their security properties, instantiations and applications.
We make explicit the relationships between these primitives, identifying a hierarchy and the theoretical gaps that exist.
We end this systematisation of knowledge by highlighting relevant future research directions from which the field of delay-based cryptography would greatly benefit.
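One concrete delay mechanism such a survey covers is the Rivest-Shamir-Wagner time-lock puzzle, where opening requires T sequential modular squarings while the puzzle setter shortcuts via the factorization of the modulus. A toy sketch (ours, with throwaway parameters):

```python
# Toy RSW time-lock puzzle (our sketch; real deployments use ~2048-bit moduli).
import random

p, q, T = 1000003, 1000033, 10_000   # toy primes and delay parameter
N = p * q

def setup(secret):
    e = pow(2, T, (p - 1) * (q - 1))   # trapdoor: reduce 2^T mod phi(N)
    x = random.randrange(2, N)         # (toy: ignores the negligible chance
    return x, (secret + pow(x, e, N)) % N  # that gcd(x, N) != 1)

def solve(x, blinded):
    for _ in range(T):                 # inherently sequential squarings
        x = pow(x, 2, N)
    return (blinded - x) % N

x, blinded = setup(42)
assert solve(x, blinded) == 42
```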
https://eprint.iacr.org/2023/716
Towards High-speed ASIC Implementations of Post-Quantum Cryptography
Malik Imran, Aikata Aikata, Sujoy Sinha Roy, Samuel Pagliarini
Published 2023-05-18T12:10:35+00:00. License: https://creativecommons.org/licenses/by-nc/4.0/

In this brief, we realize different architectural techniques for improving the performance of post-quantum cryptography (PQC) algorithms when implemented as hardware accelerators on an application-specific integrated circuit (ASIC) platform. Taking SABER as a case study, we designed a 256-bit wide architecture geared for high-speed cryptographic applications that incorporates smaller and distributed SRAM memory blocks. Moreover, we have adapted the building blocks of SABER to process 256-bit words. We have also used a buffer technique for efficient polynomial-coefficient multiplication to reduce the clock cycle count. Finally, double sponge functions are combined serially (one after another) in a high-speed KECCAK core to improve the hash operations of SHA/SHAKE. For the key-generation, encapsulation, and decapsulation operations of SABER, our 256-bit wide accelerator with a single sponge function is 1.71x, 1.45x, and 1.78x faster compared to the raw clock cycle count of a serialized SABER design. Similarly, our 256-bit implementation with double sponge functions takes 1.08x, 1.07x, and 1.06x fewer clock cycles compared to its single-sponge counterpart. The studied optimization techniques are not specific to SABER; they can be utilized to improve the performance of other lattice-based PQC accelerators.
https://eprint.iacr.org/2023/715
Research Philosophy of Modern Cryptography
Fuchun Guo, Willy Susilo, Xiaofeng Chen, Peng Jiang, Jianchang Lai, Zhen Zhao
Published 2023-05-18T02:55:40+00:00. License: https://creativecommons.org/licenses/by/4.0/

Proposing novel cryptography schemes (e.g., encryption, signatures, and protocols) is one of the main research goals in modern cryptography. In this paper, based on more than 800 research papers since 1976 that we have surveyed, we introduce the research philosophy of cryptography behind these papers. We use ``benefits'' and ``novelty'' as the keywords to introduce the research philosophy of proposing new schemes, assuming that there is already one scheme proposed for a cryptography notion. Next, we introduce how benefits were explored in the literature, and we categorize the methodology into 3 ways for benefits, 6 types of benefits, and 17 benefit areas. As examples, we introduce 40 research strategies within these benefit areas that were invented in the literature. The introduced research strategies cover most cryptography schemes published in top-tier cryptography conferences.

https://eprint.iacr.org/2023/605
The Principal–Agent Problem in Liquid Staking
Apostolos Tzinas, Dionysis Zindros
Published 2023-04-27T23:06:57+00:00, updated 2023-05-17T21:52:09+00:00. License: https://creativecommons.org/licenses/by-sa/4.0/

Proof-of-stake systems require stakers to lock up their funds in order to participate in consensus validation. This leads to capital inefficiency, as locked capital cannot be invested in Decentralized Finance (DeFi). Liquid staking rewards stakers with fungible tokens in return for staking their assets. These fungible tokens can in turn be reused in the DeFi economy. However, liquid staking introduces unexpected risks, as all delegated stake is now fungible. This exacerbates the already existing Principal–Agent problem faced during any delegation, in which the interests of the delegator (the Principal) are not aligned with the interests of the validator (the Agent). In this paper, we study the Principal–Agent problem in the context of liquid staking.
We highlight the dilemma between the choice of proportional representation (having one's stake delegated to one's validator of choice) and fair punishment (being economically affected only when one's choice is misinformed). We put forth an attack illustrating that these two notions are fundamentally incompatible in an adversarial setting. We then describe the mechanism of exempt delegations, used by some staking systems today, and devise a precise formula for quantifying the correct choice of exempt delegation that allows balancing the two conflicting virtues in the rational model.

https://eprint.iacr.org/2023/713
KAIME: Central Bank Digital Currency with Realistic and Modular Privacy
Ali Dogan, Kemal Bicakci
Published 2023-05-17T19:36:59+00:00. License: https://creativecommons.org/licenses/by/4.0/

Recently, with the increasing interest in Central Bank Digital Currency (CBDC), many countries have been working on researching and developing digital currency. The most important reasons for this interest are that CBDC eliminates the disadvantages of traditional currencies and provides a safer, faster, and more efficient payment system. These benefits also come with challenges, such as safeguarding individuals' privacy and ensuring regulatory mechanisms. While most research addresses the privacy conflict between users and regulatory agencies, it misses an important detail: banks and financial institutions are important parts of a financial system. Some studies ignore the need for privacy and include these institutions in the CBDC system, but no system currently offers a solution to the privacy conflict between banks, financial institutions, and users. In this study, while we offer a solution to the privacy conflict between the user and the regulatory agencies, we also provide a solution to the privacy conflict between the user and the banks. Our solution, KAIME, also has a modular structure. The privacy of the sender and receiver can be hidden if desired.
Compared to previous related research, the security analysis and implementation of KAIME are substantially simpler because simple and well-known cryptographic methods are used.