https://eprint.iacr.org/rss/atom.xmlCryptology ePrint Archive2022-10-06T06:24:26+00:00None of your businesshttps://iacr.org/img/logo/iacrlogo_small.pngMetadata is available under the CC0 license https://creativecommons.org/publicdomain/zero/1.0/. Each article has a PDF with different license specified for each one.The Cryptology ePrint Archive provides rapid access to recent
research in cryptology. Papers have been placed here by the
authors and did not undergo any refereeing process other than
verifying that the work seems to be within the scope of
cryptology and meets some minimal acceptance criteria and
publishing conditions.https://eprint.iacr.org/2022/1277Compact GF(2) systemizer and optimized constant-time hardware sorters for Key Generation in Classic McEliece2022-09-26T14:09:03+00:00Yihong ZhuWenping ZhuChen ChenMin ZhuZhengdong LiShaojun WeiLeibo LiuClassic McEliece is a code-based quantum-resistant public-key scheme characterized by relatively high encapsulation/decapsulation speed, small ciphertexts, and an in-depth analysis of its security. However, its slow key generation and large public keys hinder wider adoption. Based on this observation, a high-throughput hardware key generator is proposed to accelerate key generation in Classic McEliece through algorithm-hardware co-design, while the storage overhead caused by the large keys is also minimized. First, a compact large-size GF(2) Gaussian elimination is presented, adopting a naive processing array, singular-matrix-detection-based early abort, and a memory-friendly scheduling strategy. Second, an optimized constant-time hardware sorter is proposed to support regular memory accesses with fewer comparators and less storage. Third, an algorithm-level pipeline enables high-throughput processing, allowing concurrent key generation by decoupling data access from computation.
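The GF(2) systemization step with early abort on a singular matrix can be sketched in software. A minimal sketch, assuming each matrix row is packed into an n-bit integer; the function name and row encoding are illustrative, not the paper's hardware design:

```python
def gf2_systemize(rows, n):
    """Reduce an n x n GF(2) matrix (rows packed as n-bit ints, MSB = column 0)
    to the identity via Gaussian elimination.
    Returns None (early abort) as soon as the matrix is found singular."""
    m = rows[:]
    for col in range(n):
        bit = 1 << (n - 1 - col)
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(col, n) if m[r] & bit), None)
        if pivot is None:
            return None               # singular: abort early
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r] & bit:
                m[r] ^= m[col]        # XOR-eliminate this column bit
    return m
```

Packing rows as integers makes each elimination step a single word-wide XOR, which loosely mirrors the wide processing-array idea.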
2022-09-26T14:09:03+00:00https://creativecommons.org/licenses/by-nc/4.0/Yihong ZhuWenping ZhuChen ChenMin ZhuZhengdong LiShaojun WeiLeibo Liuhttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2022/1278Fast Evaluation of S-boxes with Garbled Circuits2022-09-26T15:10:08+00:00Erik PohleAysajan AbidinBart PreneelGarbling schemes, a formalization of Yao's garbled circuit protocol, are useful cryptographic primitives both in privacy-preserving protocols and in secure two-party computation. In projective garbling schemes, $n$ values are assigned to each wire in the circuit. Current state-of-the-art schemes project two values.
More concretely, we present a projective garbling scheme that assigns $2^n$ values to wires in a circuit comprising XOR and unary projection gates. A generalization of FreeXOR allows the XOR of wires with $2^n$ values to be very efficient. We then analyze the performance of our scheme by evaluating substitution-permutation ciphers. Using our proposal, we measure high-speed evaluation of the ciphers at a moderately increased cost in garbling and bandwidth. Theoretical analysis suggests that for evaluating the nine examined ciphers, one can expect a 4- to 70-fold speed-up in evaluation with at most a 4-fold increase in garbling cost and at most an 8-fold increase in communication cost compared to state-of-the-art garbling schemes. In an offline/online setting, such as secure function evaluation as a service, the circuit garbling and communication to the evaluator can proceed before the input phase. Thus our scheme offers a fast online phase. Furthermore, we present efficient computation formulas for the S-boxes of TWINE and Midori64 in Boolean circuits. To our knowledge, our formulas give the smallest number of AND gates for the S-boxes of these two ciphers.
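The unary projection gates above can be garbled with one ciphertext per input value. A minimal hash-based sketch; the 16-byte labels, the row-indexing trick, and the function names are illustrative simplifications, not the paper's construction:

```python
import hashlib
import secrets

def H(label, tweak=b""):
    # Random-oracle-style derivation from a wire label.
    return hashlib.sha256(label + tweak).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_projection(sbox):
    """Garble a unary projection gate y = sbox[x] over len(sbox) values:
    one ciphertext per input value, indexed by a hash of the input label."""
    n = len(sbox)
    in_labels = [secrets.token_bytes(16) for _ in range(n)]
    out_labels = [secrets.token_bytes(16) for _ in range(n)]
    table = {H(in_labels[v], b"row"): xor(H(in_labels[v], b"key"),
                                          out_labels[sbox[v]])
             for v in range(n)}
    return in_labels, out_labels, table

def eval_projection(table, label):
    # The evaluator holds exactly one input label and can decrypt
    # exactly one row, learning a single output label.
    return xor(H(label, b"key"), table[H(label, b"row")])
```

The evaluator never sees which plaintext value a label encodes, only that one row decrypts; this is the basic mechanism behind evaluating an S-box as one projection gate rather than a Boolean sub-circuit.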
2022-09-26T15:10:08+00:00https://creativecommons.org/licenses/by/4.0/Erik PohleAysajan AbidinBart Preneelhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1255An ECDSA Nullifier Scheme for Unique Pseudonymity within Zero Knowledge Proofs2022-09-26T16:55:46+00:00Aayush GuptaKobi GurkanZK-SNARKs (Zero-Knowledge Succinct Non-interactive ARguments of Knowledge) are one of the most promising new applied cryptography tools: proofs allow anyone to prove a property about some data without revealing that data. Largely spurred by the adoption of cryptographic primitives in blockchain systems, ZK-SNARKs are rapidly becoming computationally practical in real-world settings, as shown by, e.g., tornado.cash and rollups.
These have enabled ideation for new identity applications based on anonymous proof-of-ownership. One of the primary technologies that would enable the jump from existing apps to such systems is the development of deterministic nullifiers.
Nullifiers are used as a public commitment to a specific anonymous account, to forbid actions like double spending or to allow a consistent identity across anonymous actions. We identify a new deterministic signature algorithm that both uniquely identifies the keypair and keeps the account identity secret. In this work, we define the full DDH-VRF construction and prove uniqueness, secrecy, and existential unforgeability. We also demonstrate a proof of concept of the nullifier.
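The nullifier idea can be illustrated with a toy VRF-style value $\mathrm{hash}(m)^{sk}$ in a group. A sketch over $\mathbb{Z}_p^*$; the modulus, function names, and group choice are illustrative only, since the paper instantiates the idea with ECDSA keypairs on an elliptic curve:

```python
import hashlib

# Toy prime modulus (the Curve25519 field prime, used here only as a
# convenient large prime; NOT the paper's group choice).
P = 2**255 - 19

def hash_to_group(msg):
    # Map a message to a nontrivial element of Z_p^* (toy hash-to-group).
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return h % (P - 2) + 2          # avoid 0 and 1

def nullifier(sk, msg):
    """Deterministic nullifier: the same key and message always yield the
    same value, so repeated actions by one account are linkable, while the
    discrete log hides which public key produced it."""
    return pow(hash_to_group(msg), sk, P)
```

Determinism is the whole point: a second spend attempt reproduces the same nullifier and is rejected, without the verifier ever learning the underlying key.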
2022-09-21T08:08:02+00:00https://creativecommons.org/licenses/by/4.0/Aayush GuptaKobi Gurkanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1279Improved Neural Distinguishers with Multi-Round and Multi-Splicing Construction2022-09-27T02:08:54+00:00Jiashuo LiuJiongjiong RenShaozhen ChenManMan LiIn CRYPTO 2019, Gohr successfully applied deep learning to differential cryptanalysis of the NSA block cipher Speck32/64, achieving higher accuracy than traditional differential distinguishers. Since then, improving neural differential distinguishers has been a mainstream research direction in neural-aided cryptanalysis. However, current training data formats for neural distinguishers present two barriers: (1) the source of data features is limited to linear combinations of ciphertexts, which does not provide the training samples with more learnable features for improving the neural distinguishers; and (2) there have been no breakthroughs in constructing data formats for network training from the deep learning perspective. In this paper, drawing on both domain knowledge about deep learning and information from differential cryptanalysis, we use the output features of the penultimate round to propose a two-dimensional and non-realistic input data generation method for neural differential distinguishers. We then validate through experiments and theoretical analysis that the proposed input data format has excellent features.
Moreover, combining the idea of multiple ciphertext pairs, we devise two specific models for input data construction: MRMSP (Multiple Rounds Multiple Splicing Pairs) and MRMSD (Multiple Rounds Multiple Splicing Differences), and build new neural distinguishers against the Speck and Simon families, which effectively improve performance compared with previous works. To the best of our knowledge, our neural distinguishers reach the longest rounds and the highest accuracy for the NSA block ciphers Speck and Simon.
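For reference, the target cipher of such distinguishers is compact enough to state in a few lines. A minimal Speck32/64 sketch following the public specification (the variable names are ours; the test vector is the one published with the cipher):

```python
MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    # One Speck32/64 round: rotate-add-xor on x, rotate-xor on y.
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def speck32_64_encrypt(pt, key):
    """Encrypt a 2-word block under a 4-word key (words given MSW first).
    The 22-round key schedule reuses the round function itself."""
    l = [key[2], key[1], key[0]]
    ks = [key[3]]
    for i in range(21):
        nl, nk = speck_round(l[i], ks[i], i)
        l.append(nl)
        ks.append(nk)
    x, y = pt
    for k in ks:
        x, y = speck_round(x, y, k)
    return x, y
```

A distinguisher's training data is typically built from ciphertext pairs produced by such an implementation with a chosen plaintext difference.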
2022-09-27T02:08:54+00:00https://creativecommons.org/licenses/by/4.0/Jiashuo LiuJiongjiong RenShaozhen ChenManMan Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1280Group Time-based One-time Passwords and its Application to Efficient Privacy-Preserving Proof of Location2022-09-27T02:49:38+00:00Zheng YangChenglu JinJianting NingZengpeng LiTien Tuan Anh DinhJianying ZhouTime-based One-Time Password (TOTP) provides a strong second factor for user authentication. In TOTP, a prover authenticates to a verifier by using the current time and a secret key to generate an authentication token (or password) that is valid for a short time period. Our goal is to extend TOTP to the group setting and to provide both authentication and privacy. To this end, we introduce a new authentication scheme, called Group TOTP (GTOTP), that allows the prover to prove that it is a member of an authenticated group without revealing its identity. We propose a novel construction that transforms any asymmetric TOTP scheme into a GTOTP scheme. Our approach combines a Merkle tree and a Bloom filter to reduce the verifier's state to constant size.
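The single-user TOTP primitive that GTOTP generalizes can be sketched in a few lines, RFC 6238 style. The parameter defaults below are the common ones, not necessarily those used in the paper:

```python
import hashlib
import hmac

def totp(secret, unix_time, step=30, digits=6, digestmod=hashlib.sha1):
    """RFC 6238-style TOTP: HMAC over the time-step counter,
    dynamically truncated to a short decimal code."""
    counter = (unix_time // step).to_bytes(8, "big")
    mac = hmac.new(secret, counter, digestmod).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Note that this symmetric construction cannot provide group anonymity by itself: the verifier must know the prover's secret, which is exactly the gap the asymmetric GTOTP transformation addresses.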
As a promising application of GTOTP, we show that it can be used to construct an efficient privacy-preserving Proof of Location (PoL) scheme. We combine a commitment protocol, a privacy-preserving location proximity scheme, and our GTOTP scheme to build the PoL scheme, in which GTOTP is used not only for user authentication but also as a tool to glue together the other building blocks. In the PoL scheme, with the help of some witnesses, a user can prove its location to a verifier while ensuring the identity and location privacy of both the prover and the witnesses. Our PoL scheme outperforms alternatives based on group digital signatures. We evaluate our schemes on Raspberry Pi hardware and demonstrate that they achieve practical performance. In particular, password generation and verification take on the order of microseconds and milliseconds, respectively, while proof generation takes less than $1$ second.2022-09-27T02:49:38+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Zheng YangChenglu JinJianting NingZengpeng LiTien Tuan Anh DinhJianying Zhouhttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2022/1281LARP: A Lightweight Auto-Refreshing Pseudonym Protocol for V2X2022-09-27T04:51:27+00:00Zheng YangTien Tuan Anh DinhChao YinYingying YaoDianshi YangXiaolin ChangJianying ZhouVehicle-to-everything (V2X) communication is the key enabler for emerging intelligent transportation systems. Applications built on top of V2X require both authentication and privacy protection for the vehicles. The common approach to meeting both requirements is to use pseudonyms, which are short-term identities. However, both industrial standards and state-of-the-art research are not designed for resource-constrained environments. In addition, they make a strong assumption about the security of the vehicle's on-board computation units.
In this paper, we propose LARP, a lightweight auto-refreshing pseudonym protocol for V2X. LARP supports efficient operations on resource-constrained devices and provides security even when parts of the vehicle are compromised. We provide a formal security proof showing that the protocol is secure. We conduct experiments on a Raspberry Pi 4; the results demonstrate that LARP is feasible and practical.2022-09-27T04:51:27+00:00https://creativecommons.org/licenses/by/4.0/Zheng YangTien Tuan Anh DinhChao YinYingying YaoDianshi YangXiaolin ChangJianying Zhouhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1631Secure Sampling of Constant-Weight Words – Application to BIKE2022-09-27T07:40:53+00:00Nicolas SendrierThe pseudorandom sampling of constant-weight words, as currently implemented in cryptographic schemes like BIKE or HQC, is prone to leaking information about the seed used for pseudorandom number generation. This creates a vulnerability when the semantic-security conversion requires a deterministic re-encryption. This observation was first made in [HLS21] about HQC, and a timing attack was presented to recover the secret key. As suggested in [HLS21], a similar attack applies to BIKE; instances of such an attack were presented in an earlier version of this work [Sen21] and independently in [GHJ+22].
The timing attack stems from the variation in the amount of pseudorandom data to draw and process when sampling a constant-weight word uniformly. We give here the exact distribution of this amount for BIKE. This allows us to estimate precisely the cost of the natural countermeasure, which consists in always drawing the same (large enough) amount of randomness so that the sampler terminates with probability overwhelmingly close to one.
The main contribution of this work is to suggest a new approach to fixing the issue. We first remark that, contrary to current practice, the sampling of constant-weight words does not need to produce a uniformly distributed output: if the distribution is close to uniform in the appropriate metric, the impact on security is negligible. We then propose a new variant of the Fisher-Yates shuffle that is (1) very well suited for implementations secure against timing and cache attacks, and (2) produces constant-weight words with a distribution close enough to uniform.
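The underlying classical idea of producing a constant-weight word with a Fisher-Yates partial shuffle can be sketched as follows. This is a plain sketch of the textbook algorithm, not the timing-hardened variant proposed in the work (the function name is ours):

```python
import secrets

def sample_constant_weight(n, w):
    """Sample a length-n binary word of Hamming weight w by a
    Fisher-Yates partial shuffle: shuffle the first w positions
    of [0, n) and use them as the support of the word."""
    pos = list(range(n))
    for i in range(w):
        j = i + secrets.randbelow(n - i)    # uniform in [i, n)
        pos[i], pos[j] = pos[j], pos[i]
    word = [0] * n
    for p in pos[:w]:
        word[p] = 1
    return word
```

The textbook version is exactly uniform but its memory access pattern depends on the secret indices, which is where timing and cache leakage enters and what the proposed variant is designed to avoid.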
2021-12-17T14:21:41+00:00https://creativecommons.org/licenses/by/4.0/Nicolas Sendrierhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1282Comparing Key Rank Estimation Methods2022-09-27T09:15:16+00:00Rebecca YoungLuke MatherElisabeth OswaldRecent works on key rank estimation claim that algorithmic key rank estimation is too slow and suggest two new ideas: replacing repeat attacks with simulated attacks (PS-TH-GE rank estimation), and a shortcut rank estimation method that works directly on distinguishing-vector distributions (GEEA). We take these ideas and provide a comprehensive comparison between them, a performant implementation of a classical algorithmic ranking approach, and some earlier work on estimating distinguisher distributions. Our results show, in contrast to the recent work, that the algorithmic ranking approach outperforms GEEA and that simulation-based ranks are unreliable.
2022-09-27T09:15:16+00:00https://creativecommons.org/licenses/by-nc/4.0/Rebecca YoungLuke MatherElisabeth Oswaldhttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2022/1283A Note on Reimplementing the Castryck-Decru Attack and Lessons Learned for SageMath2022-09-27T09:55:24+00:00Rémy OudomphengGiacomo PopeThis note describes the implementation of the Castryck-Decru key recovery attack on SIDH using the computer algebra system SageMath. We describe in detail alternate computation methods for the isogeny steps of the original attack ($(2,2)$-isogenies from a product of elliptic curves and from a Jacobian), using explicit formulas to compute values of these isogenies at given points, motivated both by performance considerations and by working around SageMath limitations. A performance analysis is provided, with a focus on the various algorithmic and SageMath-specific improvements made during development, which in total amounted to approximately an eight-fold performance improvement compared with a naïve reimplementation of the proof of concept.2022-09-27T09:55:24+00:00https://creativecommons.org/licenses/by-sa/4.0/Rémy OudomphengGiacomo Popehttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2022/1284(Inner-Product) Functional Encryption with Updatable Ciphertexts2022-09-27T10:04:21+00:00Valerio CiniSebastian RamacherDaniel SlamanigChristoph StriecksErkan TairiWe propose a novel variant of functional encryption that supports ciphertext updates, dubbed ciphertext-updatable functional encryption (CUFE). Such a feature further broadens the practical applicability of the functional encryption paradigm and is carried out via so-called update tokens. However, allowing update tokens requires some care in the security definition, as we want updates to be doable by any semi-trusted third party and only on ciphertexts. Our contribution is three-fold:
a) We define our new primitive with a security notion in the indistinguishability setting. Within CUFE, functional decryption keys and ciphertexts are labeled with tags, such that decryption succeeds only if the tags of the decryption key and the ciphertext match. Furthermore, we allow ciphertexts to switch their tags to any other tag via update tokens. Such tokens are generated by the holder of the main secret key and can only be used in the desired direction.
b) We present a generic construction of CUFE for any functionality, as well as for predicates other than equality testing on tags, which relies on the existence of (probabilistic) indistinguishability obfuscation (iO).
c) We present a practical construction of CUFE for the inner-product functionality from standard assumptions (i.e., LWE) in the random-oracle model. On the technical level, we build on the recent functional encryption schemes with fine-grained access control and linear operations on encrypted data (Abdalla et al., AC'20) and introduce an additional ciphertext-updatability feature. Proving security for such a construction turned out to be non-trivial, particularly when revealing keys for the updated challenge ciphertext is allowed. Overall, such a construction enriches the set of known inner-product functional-encryption schemes with the additional updatability feature of ciphertexts.2022-09-27T10:04:21+00:00https://creativecommons.org/licenses/by/4.0/Valerio CiniSebastian RamacherDaniel SlamanigChristoph StriecksErkan Tairihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1000Statistical Decoding 2.0: Reducing Decoding to LPN2022-09-27T12:26:33+00:00Kevin CarrierThomas Debris-AlazardCharles Meyer-HilfigerJean-Pierre TillichThe security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange; they are known under the name of information set decoders (ISD).
A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight and uses a kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even its best variants performed poorly compared to the simplest ISD algorithm.
We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to obtain LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able to significantly outperform information set decoding at code rates smaller than $0.3$. For the first time in $60$ years, this gives a better decoding algorithm, for a significant range of rates, that does not belong to the ISD family.2022-08-03T16:24:00+00:00https://creativecommons.org/licenses/by/4.0/Kevin CarrierThomas Debris-AlazardCharles Meyer-HilfigerJean-Pierre Tillichhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1273A Conjecture From a Failed Cryptanalysis2022-09-27T15:54:05+00:00David NaccacheOfer Yifrach-StavThis note describes an observation discovered during a failed cryptanalysis attempt.
Let $P(x,y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n\times n$ matrices $L(n)$ whose $(i,j)$ entries are $P(i,j)$. Define the matrices $M(n)=L(n)-\mbox{ID}_n$.
It appears that $\mu(n)=(-1)^n\det(M(n))$ is a polynomial in $n$, which we did not characterize.
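As an illustration (a hedged sketch using a hypothetical polynomial $P(x,y)=x+2y$ and 1-based indices, not the note's own numerical example), one can tabulate $\mu(n)$ exactly over the rationals and check that its high-order finite differences vanish, as expected of a polynomial in $n$:

```python
from fractions import Fraction

def det(mat):
    # Plain Gaussian elimination over exact rationals.
    a = [[Fraction(x) for x in row] for row in mat]
    n = len(a)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if a[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            a[c], a[piv] = a[piv], a[c]
            d = -d
        d *= a[c][c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n):
                a[r][k] -= f * a[c][k]
    return d

def P(x, y):
    # A hypothetical sample polynomial, chosen for illustration only.
    return x + 2 * y

def mu(n):
    # L(n)[i][j] = P(i, j) with 1-based indices; M(n) = L(n) - Id_n.
    M = [[P(i, j) - (1 if i == j else 0) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    return (-1) ** n * det(M)

values = [mu(n) for n in range(1, 8)]
```

For this choice of $P$ the fifth-order finite differences of the sequence $\mu(1),\mu(2),\dots$ are all zero, consistent with $\mu(n)$ being a polynomial of degree four in $n$.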
We provide a numerical example.2022-09-26T10:58:50+00:00https://creativecommons.org/publicdomain/zero/1.0/David NaccacheOfer Yifrach-Stavhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2022/927Fit The Joint Moments - How to Attack any Masking Schemes2022-09-27T17:03:57+00:00Valence CristianiMaxime LecomteThomas HiscockPhilippe MaurineSide-Channel Analysis (SCA) allows extracting secret keys manipulated by cryptographic primitives through leakages of their physical implementations. Supervised attacks, known to be optimal, can theoretically defeat any countermeasure, including masking, by learning the dependency between the leakage and the secret through the profiling phase. However, defeating masking is less trivial when it comes to unsupervised attacks. While classical strategies such as CPA or LRA have been extended to masked implementations, we show that these extensions only hold for Boolean and arithmetic schemes. Therefore, we propose a new unsupervised strategy, the Joint Moments Regression (JMR), able to defeat any masking scheme (multiplicative, affine, polynomial, inner product, ...), a class of schemes that is gaining popularity in real implementations. The main idea behind JMR is to directly regress the leakage model of the shares by fitting a system based on higher-order joint-moment conditions. We show that this idea can be seen as part of a more general framework known as the Generalized Method of Moments (GMM). This offers mathematical foundations on which we rely to derive optimizations of JMR. Simulation results confirm the interest of JMR over state-of-the-art attacks, even in the case of Boolean and arithmetic masking.
Finally, we apply this strategy to provide, to the best of our knowledge, the first unsupervised attack on the protected AES implementation proposed by the ANSSI for SCA research, which embeds affine masking and shuffling countermeasures.2022-07-15T19:18:10+00:00https://creativecommons.org/licenses/by/4.0/Valence CristianiMaxime LecomteThomas HiscockPhilippe Maurinehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1285Lower Bounds for the Number of Decryption Updates in Registration-Based Encryption2022-09-27T17:29:54+00:00Mohammad MahmoodyWei QiAhmadreza RahimiRegistration-based encryption (RBE; Garg, Hajiabadi, Mahmoody, Rahimi, TCC'18) aims to offer what identity-based encryption (IBE) offers without the key-escrow problem, which refers to the ability of the private-key generator to obtain parties' decryption keys at will. In RBE, parties generate their own secret and public keys and register their public keys with the key curator (KC), who updates a compact public parameter after each registration. The updated public parameter can then be used to securely encrypt messages to registered identities.
A major drawback of RBE, compared with IBE, is that in order to decrypt, parties might need to periodically request so-called decryption updates from the KC. Current RBE schemes require $\Omega(\log n)$ updates after $n$ registrations, while the public parameter is of length $\text{poly}(\log n)$. Clearly, it would be highly desirable to have RBEs with only, say, a constant number of updates. This leads to the following natural question: are so many (logarithmic) updates necessary for RBE schemes, or can we decrease the frequency of updates significantly?
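The trade-off quantified below ($n \geq \binom{k+d}{d+1}$ identities receiving at most $d$ updates force a public parameter of length $\Omega(k)$) can be explored numerically. A hedged sketch, where `forced_k` is a hypothetical helper not taken from the paper:

```python
from math import comb

# Illustrative sketch of the trade-off: given n identities that each receive
# at most d updates, find the largest k with n >= C(k+d, d+1), i.e. the k
# that the lower bound forces on the public-parameter length.
def forced_k(n, d):
    k = 0
    while n >= comb(k + 1 + d, d + 1):
        k += 1
    return k  # largest k such that n >= C(k+d, d+1)

# With n = 2**20 parties, a smaller update budget d forces a much larger k.
ks = {d: forced_k(2**20, d) for d in (1, 2, 4, 8)}
```

As expected from the bound, a constant number of updates $d$ forces $k$ (and hence the public-parameter length) to grow polynomially in $n$, while allowing $d \approx \log n/\log\log n$ updates is compatible with polylogarithmic parameters.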
In this paper, we prove an almost tight lower bound on the number of updates in RBE schemes, as long as the times at which parties receive updates depend only on the registration times of the parties, a natural property that holds for all known RBE constructions. More generally, we prove a trade-off between the number of updates in RBEs and the length of the public parameter for any scheme with fixed update times. Indeed, we prove that for any such RBE scheme, if there are $n \geq \binom{k+d}{d+1}$ identities that receive at most $d$ updates, the public parameter needs to be of length $\Omega(k)$. As a corollary, we find that RBE systems with fixed update times and public parameters of length $\text{poly}(\log n)$ require $\Omega(\log n/\log\log n)$ decryption updates, which is optimal up to a $O(\log\log n)$ factor.2022-09-27T17:29:54+00:00https://creativecommons.org/licenses/by/4.0/Mohammad MahmoodyWei QiAhmadreza Rahimihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/703Proof-of-possession for KEM certificates using verifiable generation2022-09-27T20:28:51+00:00Tim GüneysuPhilip HodgesGeorg LandMike OunsworthDouglas StebilaGreg ZaveruchaCertificate authorities in public key infrastructures typically require entities to prove possession of the secret key corresponding to the public key they want certified. While this is straightforward for digital signature schemes, the most efficient solution for public key encryption and key encapsulation mechanisms (KEMs) requires an interactive challenge-response protocol, requiring a departure from current issuance processes. In this work we investigate how to non-interactively prove possession of a KEM secret key, specifically for lattice-based KEMs, motivated by the recently proposed KEMTLS protocol which replaces signature-based authentication in TLS 1.3 with KEM-based authentication.
Although there are various zero-knowledge (ZK) techniques that can be used to prove possession of a lattice key, they yield large proofs or are inefficient to generate. We propose a technique called verifiable generation, in which a proof of possession is generated at the same time as the key itself is generated. Our technique is inspired by the Picnic signature scheme and uses the multi-party-computation-in-the-head (MPCitH) paradigm; this similarity to a signature scheme allows us to bind attribute data to the proof of possession, as required by certificate issuance protocols. We show how to instantiate this approach for two lattice-based KEMs in Round 3 of the NIST post-quantum cryptography standardization project, Kyber and FrodoKEM, and achieve reasonable proof sizes and performance. Our proofs of possession are faster and an order of magnitude smaller than the previous best MPCitH technique for knowledge of a lattice key, and in size-optimized cases can be comparable to even state-of-the-art direct lattice-based ZK proofs for Kyber. Our approach relies on a new result showing the uniqueness of Kyber and FrodoKEM secret keys, even if the requirement that all secret key components are small is partially relaxed, which may be of independent interest for improving efficiency of zero-knowledge proofs for other lattice-based statements.2022-06-02T15:24:40+00:00https://creativecommons.org/licenses/by/4.0/Tim GüneysuPhilip HodgesGeorg LandMike OunsworthDouglas StebilaGreg Zaveruchahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/257Guaranteed Output in $O(\sqrt{n})$ Rounds for Round-Robin Sampling Protocols2022-09-28T06:03:56+00:00Ran CohenJack DoernerYashvanth Kondiabhi shelatWe introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets.
Due to their round-robin structure, protocols of this class inherently require $n$ sequential broadcast rounds, where $n$ is the number of participants.
We describe how to compile them generically into protocols that require only $O(\sqrt{n})$ broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require $\Omega(n)$ sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.2022-03-02T14:01:32+00:00https://creativecommons.org/licenses/by/4.0/Ran CohenJack DoernerYashvanth Kondiabhi shelathttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/608Practical Provably Secure Flooding for Blockchains2022-09-28T06:19:17+00:00Chen-Da Liu-ZhangChristian MattUeli MaurerGuilherme RitoSøren Eller ThomsenIn recent years, permissionless blockchains have received a lot of attention both from industry and academia, where substantial effort has been spent to develop consensus protocols that are secure under the assumption that less than half (or a third) of a given resource (e.g., stake or computing power) is controlled by corrupted parties. The security proofs of these consensus protocols usually assume the availability of a network functionality guaranteeing that a block sent by an honest party is received by all honest parties within some bounded time. To obtain an overall protocol that is secure under the same corruption assumption, it is therefore necessary to combine the consensus protocol with a network protocol that achieves this property under that assumption. In practice, however, the underlying network is typically implemented by flooding protocols that are not proven to be secure in the setting where a fraction of the considered total weight can be corrupted.
This has led to many so-called eclipse attacks on existing protocols and tailor-made fixes against specific attacks.
To close this apparent gap, we present the first practical flooding protocol that provably delivers sent messages to all honest parties after a logarithmic number of steps. We prove security in the setting where all parties are publicly assigned a positive weight and the adversary can corrupt parties accumulating up to a constant fraction of the total weight. This can directly be used in the proof-of-stake setting, but is not limited to it. To prove the security of our protocol, we combine known results about the diameter of Erdős–Rényi graphs with reductions between different types of random graphs. We further show that the efficiency of our protocol is asymptotically optimal.
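The logarithmic-delivery behaviour can be sketched with a minimal flooding simulation (illustrative only, not the paper's protocol or its weighted-neighbour selection): each party forwards the message to a few random neighbours, and the number of rounds until everyone is informed grows roughly logarithmically in the number of parties.

```python
import random
from collections import deque

random.seed(7)

# Minimal flooding sketch: party 0 sends a message; every informed party
# forwards it to k randomly chosen neighbours. We count how many rounds it
# takes until (almost) all parties hold the message.
def flood_rounds(n, k):
    neighbours = [random.sample([j for j in range(n) if j != i], k)
                  for i in range(n)]
    informed = {0}
    frontier = deque([0])
    rounds = 0
    while len(informed) < n and frontier:
        rounds += 1
        next_frontier = deque()
        for party in frontier:
            for nb in neighbours[party]:
                if nb not in informed:
                    informed.add(nb)
                    next_frontier.append(nb)
        frontier = next_frontier
    return rounds, len(informed)

rounds, reached = flood_rounds(n=1000, k=8)
```

With 1000 parties and out-degree 8, delivery completes in a handful of rounds, in line with the diameter of the Erdős–Rényi-style graphs the proof relies on.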
The practicality of our protocol is supported by extensive simulations for different numbers of parties, weight distributions, and corruption strategies. The simulations confirm our theoretical results and show that messages are delivered quickly regardless of the weight distribution, whereas protocols that are oblivious of the parties' weights completely fail if the weights are unevenly distributed. Furthermore, the average message complexity per party of our protocol is within a small constant factor of that of such weight-oblivious protocols.2022-05-23T08:20:37+00:00https://creativecommons.org/licenses/by/4.0/Chen-Da Liu-ZhangChristian MattUeli MaurerGuilherme RitoSøren Eller Thomsenhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1648A Scalable SIMD RISC-V based Processor with Customized Vector Extensions for CRYSTALS-Kyber2022-09-28T08:35:24+00:00Huimin LiNele MentensStjepan PicekSHA-3 is considered to be one of the most secure standardized hash functions. It relies on the Keccak-f[1,600] permutation, which operates on an internal state of 1,600 bits, mostly represented as a $5\times5\times64$-bit matrix. While existing implementations process the state sequentially in chunks of typically 32 or 64 bits, the Keccak-f[1,600] permutation can benefit greatly from parallelization. This paper is the first to explore the full potential of parallelizing Keccak-f[1,600] in RISC-V based processors through custom vector extensions on 32-bit and 64-bit architectures.
We analyze the Keccak-f[1,600] permutation, composed of five different step mappings, and propose ten custom vector instructions to speed up the computation. We realize these extensions in a SIMD processor described in SystemVerilog. We compare the performance of our designs to existing architectures based on vectorized application-specific instruction set processors (ASIPs). We show that our designs outperform all related work thanks to our carefully selected custom vector instructions.2021-12-17T14:29:47+00:00https://creativecommons.org/licenses/by/4.0/Huimin LiNele MentensStjepan Picekhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1288Round-Optimal Black-Box Secure Computation from Two-Round Malicious OT2022-09-28T09:31:32+00:00Yuval IshaiDakshita KhuranaAmit SahaiAkshayaram SrinivasanWe give round-optimal {\em black-box} constructions of two-party and multiparty protocols in the common random/reference string (CRS) model, with security against malicious adversaries, based on any two-round oblivious transfer (OT) protocol in the same model. Specifically, we obtain two types of results.
\begin{enumerate}
\item {\bf Two-party protocol.} We give a (two-round) {\it two-sided NISC} protocol that makes black-box use of two-round (malicious-secure) OT in the CRS model. In contrast to the standard setting of non-interactive secure computation (NISC), two-sided NISC allows communication from both parties in each round and delivers the output to both parties at the end of the protocol. Prior black-box constructions of two-sided NISC relied on idealized setup assumptions such as OT correlations, or were proven secure in the random oracle model.
\item {\bf Multiparty protocol.} We give a three-round secure multiparty computation protocol for an arbitrary number of parties making black-box use of a two-round OT in the CRS model. The round optimality of this construction follows from a black-box impossibility proof of
Applebaum et al. (ITCS 2020). Prior constructions either required the use of random oracles, or were based on two-round malicious-secure OT protocols that satisfied additional security properties.
\end{enumerate}2022-09-28T09:31:32+00:00https://creativecommons.org/licenses/by/4.0/Yuval IshaiDakshita KhuranaAmit SahaiAkshayaram Srinivasanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1290Bool Network: An Open, Distributed, Secure Cross-chain Notary Platform2022-09-28T11:05:34+00:00Zeyuan YinBingsheng ZhangJingzhong XuKaiyu LuKui RenWith the advancement of blockchain technology, hundreds of cryptocurrencies have been deployed. The bloom of heterogeneous blockchain platforms raises a new problem: since various blockchains are typically isolated systems, how can one securely identify and/or transfer digital properties across blockchains? There are three main kinds of cross-chain approaches: sidechains/relays, notaries, and hashed time-lock contracts. Among them, notary-based cross-chain solutions offer the best compatibility and user-friendliness, but they are typically centralized. To resolve this issue, we present Bool Network -- an open, distributed, secure cross-chain notary platform powered by MPC-based distributed key management over evolving hidden committees. More specifically, to protect the identities of the committee members, we propose a Ring verifiable random function (Ring VRF) protocol, where the real public key of a VRF instance can be hidden among a ring, which may be of independent interest to other cryptographic protocols. Furthermore, all key management procedures are executed in a TEE, such as Intel SGX, to ensure the privacy and integrity of partial key components. A prototype of the proposed Bool Network is implemented in the Rust language, using Polkadot Substrate.2022-09-28T11:05:34+00:00https://creativecommons.org/licenses/by-nc/4.0/Zeyuan YinBingsheng ZhangJingzhong XuKaiyu LuKui Renhttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2022/139Sponge-based Authenticated Encryption: Security against Quantum Attackers2022-09-28T14:33:46+00:00Christian JansonPatrick StruckIn this work, we study the security of sponge-based authenticated encryption schemes against quantum attackers. In particular, we analyse the sponge-based authenticated encryption scheme SLAE as put forward by Degabriele et al. (ASIACRYPT'19). We show that the scheme achieves security in the post-quantum (QS1) setting in the quantum random oracle model by using the one-way to hiding lemma. Furthermore, we analyse the scheme in a fully-quantum (QS2) setting.
There we provide a set of attacks showing that SLAE does not achieve ciphertext indistinguishability and hence overall does not provide the desired level of security.2022-02-09T08:59:39+00:00https://creativecommons.org/licenses/by/4.0/Christian JansonPatrick Struckhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1291sMGM: parameterizable AEAD-mode2022-09-28T15:39:57+00:00Liliya AkhmetzyanovaEvgeny AlekseevAlexandra BabuevaAndrey BozhkoStanislav SmyshlyaevThe paper introduces a new AEAD mode called sMGM (strong Multilinear Galois Mode). The proposed construction can be treated as an extension of the Russian standardized MGM mode and its modification, the MGM2 mode, presented at the CTCrypt'21 conference. The distinctive feature of the new mode is that it provides an interface allowing one to choose the specific security properties required for a given application. Namely, the mode has additional parameters allowing one to switch on/off misuse-resistance or re-keying mechanisms.
The sMGM mode consists of two main "building blocks": a CTR-style gamma generation function with incorporated re-keying, and the multilinear function that lies at the core of the original MGM mode. Different ways of using these functions lead to different sets of security properties. Such an approach to constructing a parameterizable AEAD mode allows for reducing the code size, which can be crucial for constrained devices.
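As a rough illustration of the two building blocks named above, the following toy sketch uses SHAKE-256 as a stand-in PRF; the block size, re-keying interval, and prime modulus are illustrative assumptions, not the actual sMGM specification.

```python
# Toy sketch (NOT the sMGM specification): a CTR-style keystream ("gamma")
# generator with periodic re-keying, plus a multilinear authentication
# function, both built from SHAKE-256 as an assumed stand-in PRF.
import hashlib

BLOCK = 16            # bytes per keystream block (illustrative)
REKEY_EVERY = 4       # blocks processed before deriving a fresh working key
P = 2**127 - 1        # toy prime modulus for the multilinear function

def prf(key: bytes, msg: bytes, n: int = BLOCK) -> bytes:
    return hashlib.shake_256(key + msg).digest(n)

def gamma(key: bytes, nonce: bytes, nblocks: int):
    """CTR-style keystream with incorporated re-keying."""
    out = []
    for i in range(nblocks):
        if i > 0 and i % REKEY_EVERY == 0:
            key = prf(key, b"rekey")          # refresh the working key
        out.append(prf(key, nonce + i.to_bytes(8, "big")))
    return out

def multilinear_tag(key: bytes, nonce: bytes, blocks) -> int:
    """Tag = sum_i H_i * M_i mod p -- linear in each data block M_i."""
    acc = 0
    for i, m in enumerate(blocks):
        h = int.from_bytes(prf(key, b"H" + nonce + i.to_bytes(8, "big")), "big")
        acc = (acc + h * int.from_bytes(m, "big")) % P
    return acc
```

Combining the two components in different ways (encryption only, authentication only, or both) mirrors the parameterization idea the abstract describes.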
We provide security bounds for the proposed mode. We focus on proving the misuse-resistance of the sMGM mode, since the standard security properties were already analyzed during the development of the original MGM and MGM2 modes.2022-09-28T15:39:57+00:00https://creativecommons.org/licenses/by/4.0/Liliya AkhmetzyanovaEvgeny AlekseevAlexandra BabuevaAndrey BozhkoStanislav Smyshlyaevhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1292Bet-or-Pass: Adversarially Robust Bloom Filters2022-09-28T15:57:41+00:00Moni NaorNoa OvedA Bloom filter is a data structure that maintains a succinct and probabilistic representation of a set $S\subseteq U$ of elements from a universe $U$. It supports approximate membership queries. The price of the succinctness is allowing some error, namely false positives: for any $x\notin S$, it might answer `Yes' but with a small (non-negligible) probability.
When dealing with such data structures in adversarial settings, we need to define the correctness guarantee and formalize the requirement that bad events happen infrequently and that false positives are appropriately distributed. Recently, several papers investigated this topic, suggesting different robustness definitions.
In this work we unify this line of research and propose several robustness notions for Bloom filters that allow the adaptivity of queries. The goal is that a robust Bloom filter should behave like a random biased coin even against an adaptive adversary. The robustness definitions are expressed by the type of test that the Bloom filter should withstand. We explore the relationships between these notions and highlight the notion of Bet-or-Pass as capturing the desired properties of such a data structure.
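For readers unfamiliar with the underlying data structure, a textbook Bloom filter is easy to sketch; this is a generic illustration with assumed parameters, not the construction analyzed in the paper.

```python
# Minimal textbook Bloom filter, to illustrate the one-sided error the
# abstract describes: queries for inserted items always answer True, while
# queries for absent items may (rarely) answer True as false positives.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)          # one byte per bit, for clarity

    def _positions(self, item: bytes):
        # Derive k positions by hashing the item with k different prefixes.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item: bytes) -> bool:
        # True may be a false positive; False is always correct.
        return all(self.bits[p] for p in self._positions(item))
```

The adversarial-robustness question studied above is, roughly, whether an adaptive querier can make these false positives behave worse than independent biased coin flips.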
2022-09-28T15:57:41+00:00https://creativecommons.org/licenses/by/4.0/Moni NaorNoa Ovedhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/683Quantum Analysis of AES2022-09-28T16:49:01+00:00Kyungbae JangAnubhab BaksiGyeongju SongHyunji KimHwajeong SeoAnupam ChattopadhyayQuantum computing is considered among the next big leaps in computer science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of secret-key ciphers against a potent quantum adversary.
Keeping this in mind, our work explores key recovery attacks on the three variants of AES (-128, -192, -256) with respect to quantum implementation and quantum key search using Grover's algorithm. We develop a pool of implementations, mostly by reducing the circuit depth metrics. We consider various strategies for optimization, and make use of state-of-the-art advancements in the relevant fields.
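As context for Grover-based key search, the textbook estimate of about (pi/4) * 2^(k/2) oracle iterations for a k-bit key can be computed directly; this is a standard back-of-envelope figure, not a result from the paper.

```python
# Back-of-envelope Grover cost: recovering a k-bit key needs roughly
# floor((pi/4) * 2**(k/2)) oracle iterations, each wrapping a cipher circuit.
# Standard textbook estimate (assumed here for illustration).
from math import pi

def grover_iterations(key_bits: int) -> int:
    return int(pi / 4 * 2 ** (key_bits / 2))

for k in (128, 192, 256):
    # Print the iteration count as a rough power of two.
    print(f"AES-{k}: about 2^{grover_iterations(k).bit_length() - 1} iterations")
```

This quadratic speedup is why circuit depth and width of the AES oracle, the quantities optimized above, dominate the concrete cost of the attack.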
In a nutshell, we present the least Toffoli depth and full depth implementations of AES, improving on Zou et al.'s Asiacrypt'20 paper by more than 98 percent for all variants of AES. Our qubit count - Toffoli depth product improves on theirs by more than 75 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix their bugs, and report corrected benchmarks. To the best of our knowledge, our work improves on all previous works (including the recent ePrint'22 paper by Huang and Sun).2022-05-31T08:07:20+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Kyungbae JangAnubhab BaksiGyeongju SongHyunji KimHwajeong SeoAnupam Chattopadhyayhttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2022/1293Improving the Efficiency of Report and Trace Ring Signatures2022-09-28T18:47:57+00:00Xavier BultelAshley FraserElizabeth A. QuagliaRing signatures allow signers to produce verifiable signatures while remaining anonymous within a set of signers (i.e., the ring). They are well-suited to protocols that target anonymity as a primary goal, for example, anonymous cryptocurrencies. However, standard ring signatures do not ensure that signers are held accountable if they act maliciously. Fraser and Quaglia (CANS'21) introduced a ring signature variant, called report and trace ring signatures, which balances the anonymity guarantee of standard ring signatures with the need to hold signers accountable. In particular, report and trace ring signatures introduce a reporting system whereby ring members can report malicious message/signature pairs. A designated tracer can then revoke the signer's anonymity if, and only if, a ring member submits a report to the tracer. Fraser and Quaglia present a generic construction of a report and trace ring signature scheme and outline an instantiation for which it is claimed that the complexity of signing is linear in the size of the ring $|R|$.
In this paper, we introduce a new instantiation of Fraser and Quaglia's generic report and trace ring signature construction. Our instantiation uses a pairing-based variant of ElGamal that we define. We demonstrate that our instantiation is more efficient. In fact, we highlight that the efficiency analysis of Fraser and Quaglia's instantiation omits a scaling factor of $\lambda$, where $\lambda$ is a security parameter. As such, the complexity of signing for their instantiation grows linearly in $\lambda \cdot |R|$. Our instantiation, on the other hand, achieves signing complexity linear in $|R|$.
We also introduce a new pairing-free report and trace ring signature construction reaching a similar signing complexity. Whilst this construction requires some additional group exponentiations, it can be instantiated over any prime order group for which the Decisional Diffie-Hellman assumption holds.2022-09-28T18:47:57+00:00https://creativecommons.org/licenses/by/4.0/Xavier BultelAshley FraserElizabeth A. Quagliahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1294What Can Cryptography Do For Decentralized Mechanism Design?2022-09-28T20:04:13+00:00Elaine ShiHao ChungKe WuRecent works of Roughgarden (EC'21) and Chung and Shi (Highlights Beyond EC'22) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike the classical mechanism design literature, in the decentralized environment, even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM. First, any TFM that provides incentive compatibility for individual users and miner-user coalitions must always have zero miner revenue, no matter whether the block size is finite or infinite. Second, assuming finite block size, no non-trivial TFM can simultaneously provide incentive compatibility for any individual user, and for any miner-user coalition.
In this work, we explore what new models and meaningful relaxations can allow us to circumvent the impossibility results of Chung and Shi. Besides today’s model that does not employ cryptography, we introduce a new MPC-assisted model where the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. We prove several feasibility and infeasibility results for achieving strict and approximate incentive compatibility, respectively, in the plain model as well as the MPC-assisted model. We show that while cryptography is not a panacea, it indeed allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model. Our work is also the first to characterize the mathematical landscape of transaction fee mechanism design under approximate incentive compatibility, as well as in a cryptography-assisted model.2022-09-28T20:04:13+00:00https://creativecommons.org/licenses/by/4.0/Elaine ShiHao ChungKe Wuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1010Orion: Zero Knowledge Proof with Linear Prover Time2022-09-28T20:14:52+00:00Tiancheng XieYupeng ZhangDawn SongZero-knowledge proof is a powerful cryptographic primitive that has found various applications in the real world. However, existing schemes with succinct proof size suffer from a high overhead on the proof generation time that is super-linear in the size of the statement represented as an arithmetic circuit, limiting their efficiency and scalability in practice. In this paper, we present Orion, a new zero-knowledge argument system that achieves $O(N)$ prover time of field operations and hash functions and $O(\log^2 N)$ proof size. Orion is concretely efficient and our implementation shows that the prover time is 3.09s and the proof size is 1.5MB for a circuit with $2^{20}$ multiplication gates.
The prover time is the fastest among all existing succinct proof systems, and the proof size is an order of magnitude smaller than that of a recent scheme proposed in Golovnev et al. 2021.
In particular, we develop two new techniques leading to the efficiency improvement. (1) We propose a new algorithm to test whether a random bipartite graph is a lossless expander graph, based on the densest subgraph algorithm. It allows us to sample lossless expanders with overwhelming probability. The technique improves the efficiency and/or security of all existing zero-knowledge argument schemes with linear prover time. The testing algorithm based on the densest subgraph may be of independent interest for other applications of expander graphs. (2) We develop an efficient proof composition scheme, code switching, to reduce the proof size from square root to polylogarithmic in the size of the computation. The scheme is built on the encoding circuit of a linear code and shows that the witness of a second zero-knowledge argument is the same as the message in the linear code. The proof composition only introduces a small overhead on the prover time.2022-08-05T06:47:59+00:00https://creativecommons.org/licenses/by/4.0/Tiancheng XieYupeng ZhangDawn Songhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2016/846Survey of Approaches and Techniques for Security Verification of Computer Systems2022-09-29T01:59:13+00:00Ferhat ErataShuwen DengFaisal ZaghloulWenjie XiongOnur DemirJakub SzeferThis paper surveys the landscape of security verification approaches and techniques for computer systems at various levels: from the software-application level all the way down to the physical hardware level. Different existing projects are compared, based on the tools used and the security aspects being examined. Since many systems require both hardware and software components to work together to provide the system's promised security protections, it is not sufficient to verify just the software levels or just the hardware levels in a mutually exclusive fashion.
This survey especially highlights the system levels that are verified by the different existing projects and presents to the reader the state of the art in hardware and software system security verification. Few approaches come close to providing full-system verification, and there is still much room for improvement.2016-09-07T13:58:30+00:00https://creativecommons.org/licenses/by/4.0/Ferhat ErataShuwen DengFaisal ZaghloulWenjie XiongOnur DemirJakub Szeferhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/348Fast Subgroup Membership Testings for $\mathbb{G}_1$, $\mathbb{G}_2$ and $\mathbb{G}_T$ on Pairing-friendly Curves2022-09-29T03:10:03+00:00Yu DaiKaizhan LinZijian ZhouChang-An ZhaoPairing-based cryptographic protocols are typically vulnerable to small-subgroup attacks in the absence of protective measures. To thwart them, one feasible measure is to execute subgroup membership tests, which are generally considered expensive. Recently, Scott proposed an efficient method of subgroup membership testing for $\mathbb{G}_1$, $\mathbb{G}_2$ and $\mathbb{G}_T$ on the BLS family.
In this paper, we generalize the method proposed by Scott and show that the new technique is applicable to a large class of pairing-friendly curves. In addition, we confirm that the new method leads to a significant speedup for membership testing on many popular pairing-friendly curves.2022-03-14T11:58:29+00:00https://creativecommons.org/licenses/by/4.0/Yu DaiKaizhan LinZijian ZhouChang-An Zhaohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/362How to Backdoor (Classic) McEliece and How to Guard Against Backdoors2022-09-29T09:39:47+00:00Tobias HemmertAlexander MayJohannes MittmannCarl Richard Theodor SchneiderWe show how to backdoor the McEliece cryptosystem such that a backdoored public key is indistinguishable from a usual public key, but allows one to efficiently retrieve the underlying secret key.
For good cryptographic reasons, McEliece uses a small random seed 𝛅 that generates, via some pseudorandom generator (PRG), the randomness that determines the secret key. Our backdoor mechanism works by encoding an encryption of 𝛅 into the public key. Retrieving 𝛅 then allows one to efficiently recover the (backdoored) secret key. Interestingly, McEliece can itself be used to encrypt 𝛅, thereby protecting our backdoor mechanism with strong post-quantum security guarantees.
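The seed-to-key pipeline described above can be sketched generically; SHAKE-256 here is an illustrative stand-in for the PRG, not the expansion that the Classic McEliece specification actually uses.

```python
# Generic sketch of seed-derived key generation (SHAKE-256 is an assumed
# stand-in PRG; the real Classic McEliece expansion differs). The point:
# anyone who learns the short seed delta can re-run key generation and obtain
# the full secret key -- which is exactly what the described backdoor
# exploits, and why a stored delta lets the key owner re-derive and compare.
import hashlib, secrets

def keygen_from_seed(delta: bytes) -> bytes:
    # Expand the short seed into all randomness determining the secret key.
    return hashlib.shake_256(b"keygen" + delta).digest(128)

delta = secrets.token_bytes(32)     # the small random seed
sk = keygen_from_seed(delta)
assert keygen_from_seed(delta) == sk   # recovering delta recovers the key
```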
Our construction also works for the current Classic McEliece NIST standard proposal for non-compressed secret keys, and therefore opens the door for widespread maliciously backdoored implementations.
Fortunately, our backdoor mechanism can be detected by the owner of the (backdoored) secret key if 𝛅 is stored after key generation as specified by the Classic McEliece proposal. Thus, our results provide strong advice for implementers to store 𝛅 inside the secret key and use 𝛅 to guard against backdoor mechanisms.2022-03-18T09:47:47+00:00https://creativecommons.org/licenses/by/4.0/Tobias HemmertAlexander MayJohannes MittmannCarl Richard Theodor Schneiderhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1296Efficient Asymmetric Threshold ECDSA for MPC-based Cold Storage2022-09-29T10:08:58+00:00Constantin BlokhNikolaos MakriyannisUdi PeledMotivated by applications to cold-storage solutions for ECDSA-based cryptocurrencies, we present a new ECDSA protocol between $n$ ``online'' parties and a single ``offline'' party. Our protocol tolerates all-but-one adaptive corruptions, and it achieves full proactive security. Our protocol improves over the state of the art as follows.
** The preprocessing phase, which prepares data for future signatures, is lightweight and non-interactive; it consists of each party sending a single independently-generated short message per future signature per online party (approx. 300B for a typical choice of parameters).
** The signing phase is asymmetric in the following sense: to calculate the signature, it is enough for the offline party to receive a single short message from the online ``world'' (approx. 300B).
We note that all previous ECDSA protocols require many rounds of interaction between all parties, and thus all previous protocols require extensive ``interactive time'' from the offline party. In contrast, our protocol requires minimal involvement from the offline party, and it is thus ideal for MPC-based cold storage.
Our main technical innovation for achieving the above is twofold: First, building on recent protocols, we design a two-party protocol that we non-generically compile into a highly efficient $(n+1)$-party protocol. Second, we present a new batching technique for proving in zero-knowledge that the plaintext values of practically any number of Paillier ciphertexts lie in a given range. The cost of the resulting (batched) proof is very close to the cost of the underlying single-instance proof of MacKenzie and Reiter (CRYPTO'01, IJIS'04).
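The Paillier ciphertexts whose plaintext ranges the batched proof covers are additively homomorphic; the following textbook sketch uses tiny, insecure toy primes purely to show the mechanics, not the protocol's actual instantiation.

```python
# Textbook Paillier encryption with toy (insecure) parameters. Real
# deployments use ~2048-bit moduli; this only illustrates the additively
# homomorphic ciphertexts that range proofs in threshold ECDSA reason about.
import secrets
from math import gcd

p, q = 1009, 1013                              # toy primes -- never use in practice
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n)               # fresh randomness per ciphertext
        if r > 0 and gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n             # the standard L function
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (L * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts.
c1, c2 = encrypt(123), encrypt(456)
assert decrypt((c1 * c2) % n2) == 579
```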
We prove security in the UC framework, in the global random oracle model, assuming Strong RSA, semantic security of Paillier encryption, DDH, and enhanced existential unforgeability of ECDSA; these assumptions are widely used in the threshold-ECDSA literature and many commercially available MPC-based wallets.2022-09-29T10:08:58+00:00https://creativecommons.org/licenses/by/4.0/Constantin BlokhNikolaos MakriyannisUdi Peledhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1297Toward a Post-Quantum Zero-Knowledge Verifiable Credential System for Self-Sovereign Identity2022-09-29T12:57:54+00:00Simone DuttoDavide MargariaCarlo SannaAndrea VescoThe advent of quantum computers has brought a large interest in post-quantum cryptography and in the migration to quantum-resistant systems. Protocols for Self-Sovereign Identity (SSI) are among the fundamental scenarios touched by this need. The core concept of SSI is to move control of digital identity from third-party identity providers directly to individuals. This is achieved through Verifiable Credentials (VCs) supporting anonymity and selective disclosure. In turn, the implementation of VCs requires cryptographic signature schemes compatible with a proper Zero-Knowledge Proof (ZKP) framework. We describe the two main ZKP VC schemes based on classical cryptographic assumptions: the signature scheme with efficient protocols of Camenisch and Lysyanskaya, which is based on the strong RSA assumption, and the BBS+ scheme of Boneh, Boyen and Shacham, which is based on the strong Diffie-Hellman assumption.
Since these schemes are not quantum-resistant, we select as one of the possible post-quantum alternatives a lattice-based scheme proposed by Jeudy, Roux-Langlois, and Sander, and we identify the open problems for achieving VCs suitable for selective disclosure, non-interactive renewal mechanisms, and efficient revocation.2022-09-29T12:57:54+00:00https://creativecommons.org/publicdomain/zero/1.0/Simone DuttoDavide MargariaCarlo SannaAndrea Vescohttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2022/1298BLEACH: Cleaning Errors in Discrete Computations over CKKS2022-09-29T16:04:14+00:00Nir DruckerGuy MoshkowichTomer PellegHayim ShaulApproximated homomorphic encryption (HE) schemes such as CKKS are commonly used to perform computations over encrypted real numbers. It is commonly assumed that these schemes are not “exact” and thus they cannot execute circuits with unbounded depth over discrete sets, such as binary or integer numbers, without error overflows. These circuits are usually executed using BGV and B/FV for integers and TFHE for binary numbers. This artificial separation can cause users to favor one scheme over another for a given computation, without even exploring other, perhaps better, options.
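The error-overflow concern described above can be simulated outside of HE: real-valued logic gates accumulate a little approximation error at every gate, and a periodically applied step function snaps values back to {0, 1} before the error can grow. A plain-float sketch of this "clean-up" idea (not actual CKKS; the noise model and `cleanup` are stand-ins for ciphertext approximation error and the homomorphic step function):

```python
import random

def noisy(x, eps=1e-3):
    # Stand-in for CKKS approximation error added by each operation.
    return x + random.uniform(-eps, eps)

# Real-valued encodings of Boolean gates on {0, 1} inputs.
def AND(a, b): return noisy(a * b)
def XOR(a, b): return noisy(a + b - 2 * a * b)

def cleanup(x):
    # Stand-in for a homomorphic step function: snap back to {0, 1}.
    return round(x)

a, b = 1.0, 1.0
for _ in range(10_000):            # a deep circuit: errors would pile up...
    a, b = XOR(a, b), AND(a, b)
    a, b = cleanup(a), cleanup(b)  # ...unless periodically cleaned up
assert a in (0, 1) and b in (0, 1)
```

Because the cleanup runs before the accumulated error reaches 1/2, the computation stays exact at unbounded depth, which is the effect the paper obtains homomorphically.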
We show that by treating step functions as “clean-up” utilities and by leveraging the SIMD capabilities of CKKS, we can extend the homomorphic encryption toolbox with efficient tools. These tools use CKKS to run unbounded circuits that operate over binary and small-integer elements and even combine these circuits with fixed-point real-number circuits. We demonstrate the results using the Turing-complete Conway’s Game of Life. In our evaluation, for boards of size 128x128, these tools achieved an order-of-magnitude lower latency than previous implementations using other HE schemes. We argue and demonstrate that for large enough real-world inputs, performing binary circuits over CKKS, while treating it as an “exact” scheme, results in comparable or even better performance than using other schemes tailored for similar inputs.2022-09-29T16:04:14+00:00https://creativecommons.org/licenses/by/4.0/Nir DruckerGuy MoshkowichTomer PellegHayim Shaulhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1222Homomorphic Encryption on GPU2022-09-29T19:58:11+00:00Ali Şah ÖzcanCan AydumanEnes Recep TürkoğluErkay SavaşHomomorphic encryption (HE) is a cryptosystem that allows secure processing of encrypted data. One of the most popular HE schemes is the Brakerski-Fan-Vercauteren (BFV), which supports somewhat (SWHE) and fully homomorphic encryption (FHE). Since the highly involved arithmetic operations of HE schemes are amenable to concurrent computation, GPU devices can be instrumental in facilitating the practical use of HE in real-world applications thanks to their superior parallel processing capacity.
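The dominant cost in BFV-style schemes is polynomial multiplication, accelerated via the Number Theoretic Transform. A textbook quadratic-time NTT over $\mathbb{Z}_q$ with toy parameters (nothing like the $2^{14}$-point GPU kernels discussed in this abstract) shows the underlying math:

```python
# Textbook Number Theoretic Transform: a DFT over Z_q using a primitive
# n-th root of unity modulo a prime q with q ≡ 1 (mod n).
q = 257  # toy NTT-friendly prime (2^8 + 1); not a real BFV parameter
n = 16   # toy ring dimension; real parameters are far larger
g = pow(3, (q - 1) // n, q)  # 3 is a primitive root mod 257

def ntt(a):
    return [sum(a[j] * pow(g, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A):
    ninv, ginv = pow(n, -1, q), pow(g, -1, q)
    return [sum(A[j] * pow(ginv, i * j, q) for j in range(n)) * ninv % q
            for i in range(n)]

# Cyclic convolution via pointwise multiplication in the NTT domain:
# this is why fast NTTs dominate the cost of ciphertext multiplication.
a = [1, 2, 3, 4] + [0] * (n - 4)
b = [5, 6, 7, 8] + [0] * (n - 4)
c = intt([x * y % q for x, y in zip(ntt(a), ntt(b))])
```

Optimized implementations replace these $O(n^2)$ loops with $O(n \log n)$ butterfly networks and, on GPUs, with carefully scheduled kernels over the memory hierarchy.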
This paper presents an optimized and highly parallelized GPU library to accelerate the BFV scheme. This library includes state-of-the-art implementations of the Number Theoretic Transform (NTT) and inverse NTT that minimize GPU kernel function calls. It makes efficient use of the GPU memory hierarchy and computes 128 NTT operations for a ring dimension of $2^{14}$ in only $176.1~\mu s$ on an RTX~3060Ti GPU. To the best of our knowledge, this is the fastest implementation in the literature. The library also improves the performance of the homomorphic operations of the BFV scheme. Although the library can be used independently, it is also fully integrated with the Microsoft SEAL library, a well-known HE library that also implements the BFV scheme. For one ciphertext multiplication, for the ring dimension $2^{14}$ and the modulus bit size of $438$, our GPU implementation offers a $\mathbf{63.4}$ times speedup over the SEAL library running on a high-end CPU. The library compares favorably with other state-of-the-art GPU implementations of NTT and the BFV operations. Finally, we implement a privacy-preserving application that classifies encrypted genome data for tumor types and achieve speedups of $42.98$ and $5.7$ over CPU implementations using single and 16 threads, respectively. Our results indicate that GPU implementations can facilitate the deployment of homomorphic cryptographic libraries in real-world privacy-preserving applications.2022-09-15T13:26:51+00:00https://creativecommons.org/licenses/by/4.0/Ali Şah ÖzcanCan AydumanEnes Recep TürkoğluErkay Savaşhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1299Addax: A fast, private, and accountable ad exchange infrastructure2022-09-29T23:54:16+00:00Ke ZhongYiping MaYifeng MaoSebastian AngelThis paper proposes Addax, a fast, verifiable, and private online ad exchange.
When a user visits an ad-supported site, Addax runs an auction similar to those
of leading exchanges; Addax requests bids, selects the winner, collects
payment, and displays the ad to the user. A key distinction is that bids in
Addax’s auctions are kept private and the outcome of the auction is publicly
verifiable. Addax achieves these properties by adding public verifiability to
the affine aggregatable encodings in Prio (NSDI’17) and by building an auction
protocol out of them. Our implementation of Addax over WAN with hundreds of
bidders can run roughly half as many auctions per second as a non-private and
non-verifiable exchange, while delivering ads to users in under 600 ms with
little additional bandwidth. This efficiency makes Addax the first
architecture capable of bringing transparency to this otherwise opaque ecosystem.2022-09-29T23:54:16+00:00https://creativecommons.org/licenses/by/4.0/Ke ZhongYiping MaYifeng MaoSebastian Angelhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1286ZEBRA: Anonymous Credentials with Practical On-chain Verification and Applications to KYC in DeFi2022-09-30T02:08:56+00:00Deevashwer RatheeGuru Vamsi PolicharlaTiancheng XieRyan CottoneDawn SongZEBRA is an Anonymous Credential (AC) scheme, supporting auditability and revocation, that provides practical on-chain verification for the first time. It realizes efficient access control on permissionless blockchains while achieving both privacy and accountability. In all prior solutions, users either pay exorbitant fees or lose privacy since authorities granting access can map users to their wallets. Hence, ZEBRA is the first to enable DeFi platforms to remain compliant with imminent regulations without compromising user privacy.
We evaluate ZEBRA and show that it reduces the gas cost incurred on the Ethereum Virtual Machine (EVM) by 11.8x when compared to Coconut [NDSS 2019], the state-of-the-art AC scheme for blockchains. This translates to a reduction in transaction fees from 94 USD to 8 USD on Ethereum in August 2022. However, 8 USD is still high for most applications, and ZEBRA further drives down credential verification costs through batched verification. For a batch of 512 layer-1 and layer-2 wallets, the gas cost is reduced by 35x and 641x on EVM, and the transaction fee is reduced to just 0.23 USD and 0.0126 USD on Ethereum, respectively. For perspective, these costs are comparable to the minimum transaction costs on Ethereum.2022-09-28T02:32:17+00:00https://creativecommons.org/licenses/by/4.0/Deevashwer RatheeGuru Vamsi PolicharlaTiancheng XieRyan CottoneDawn Songhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1127GUC-Secure Commitments via Random Oracles: New Impossibility and Feasibility2022-09-30T02:51:47+00:00Zhelei ZhouBingsheng ZhangHong-Sheng ZhouKui RenIn the UC framework, protocols must be subroutine respecting; therefore, shared trusted setup might cause security issues. To address this drawback, the Generalized UC (GUC) framework was introduced by Canetti \emph{et al.} (TCC 2007).
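For background, the simplest commitment in the random-oracle model is the folklore construction $c = H(m \| r)$, whose hiding and binding follow from modeling $H$ as a random oracle. This sketch only illustrates the one-message-commit, one-message-open flow studied in the round-complexity results above; it is not the paper's GUC-secure construction and ignores the global-RO subtleties that make that problem hard:

```python
import hashlib
import os

# Folklore random-oracle commitment: c = H(len(m) || m || r).
# The length prefix rules out ambiguity between (m, r) splits.
def commit(m: bytes):
    r = os.urandom(32)
    c = hashlib.sha256(len(m).to_bytes(8, "big") + m + r).digest()
    return c, r  # send c in the committing round; keep r for opening

def open_check(c: bytes, m: bytes, r: bytes) -> bool:
    return c == hashlib.sha256(len(m).to_bytes(8, "big") + m + r).digest()

c, r = commit(b"vote: yes")
assert open_check(c, b"vote: yes", r)      # honest opening verifies
assert not open_check(c, b"vote: no", r)   # binding: cannot equivocate
```

The impossibility result above says that no 2-round protocol of this general shape can be GUC-secure in the global observable RO model, which is what makes the paper's round-optimal feasibility result interesting.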
In this work, we investigate the impossibility and feasibility of GUC-secure commitments using global random oracles (GRO) as the trusted setup. In particular, we show that it is impossible to have a 2-round (1-round committing and 1-round opening) GUC-secure commitment in the global observable RO model by Canetti \emph{et al.} (CCS 2014). We then give a new round-optimal GUC-secure commitment that uses only Minicrypt assumptions (i.e., the existence of one-way functions) in the global observable RO model. Furthermore, we also examine the complete picture of the round complexity of GUC-secure commitments in various global RO models.2022-08-30T08:55:12+00:00https://creativecommons.org/licenses/by-nc/4.0/Zhelei ZhouBingsheng ZhangHong-Sheng ZhouKui Renhttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2022/930Multi-Parameter Support with NTTs for NTRU and NTRU Prime on Cortex-M42022-09-30T07:16:17+00:00Erdem AlkimVincent HwangBo-Yin YangWe propose NTT implementations, each supporting at least one parameter of NTRU and one parameter of NTRU Prime. Our implementations are based on size-1440, size-1536, and size-1728 convolutions without algebraic assumptions on the target polynomial rings. We also propose several improvements for the NTT computation. Firstly, we introduce dedicated radix-(2,3) butterflies combining Good–Thomas FFT and vector-radix FFT. In general, there are six dedicated radix-(2,3) butterflies and they together support implicit permutations. Secondly, for odd prime radices, we show that the multiplications for one output can be replaced with additions/subtractions. We demonstrate the idea for radix-3 and show how to extend it to any odd prime. Our improvement also applies to radix-(2,3) butterflies. Thirdly, we implement an incomplete version of Good–Thomas FFT for addressing potential code size issues. For NTRU, our polynomial multiplications outperform the state-of-the-art by 2.8%−10.3%.
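The simplest instance of the "multiplications replaced with additions/subtractions" observation is visible in the radix-3 butterfly itself: with $\omega$ a primitive cube root of unity mod $q$, the output $X_0 = x_0 + x_1 + x_2$ needs no multiplications at all. A toy sketch over a small NTT-friendly prime (illustrative only; the paper's savings are more general than this one free output):

```python
q = 7681  # toy prime with q ≡ 1 (mod 3), so cube roots of unity exist
# Smallest nontrivial solution of x^3 ≡ 1 (mod q) is a primitive cube root.
w = next(x for x in range(2, q) if pow(x, 3, q) == 1)
w2 = w * w % q

def radix3_butterfly(x0, x1, x2):
    X0 = (x0 + x1 + x2) % q           # multiplication-free output
    X1 = (x0 + w * x1 + w2 * x2) % q
    X2 = (x0 + w2 * x1 + w * x2) % q
    return X0, X1, X2

# Cross-check against the defining length-3 DFT sums.
x = (5, 11, 42)
for k, X in enumerate(radix3_butterfly(*x)):
    assert X == sum(x[j] * pow(w, k * j, q) for j in range(3)) % q
```

Inside a larger Good–Thomas decomposition these butterflies run many times per transform, so shaving multiplications from them compounds across the whole NTT.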
For NTRU Prime, our polynomial multiplications are slower than the state-of-the-art. However, the state-of-the-art exploits the specific structure of coefficient rings or polynomial moduli, while our NTT-based multiplications exploit neither and apply across different schemes. This reduces the engineering effort, including testing and verification.2022-07-17T14:06:56+00:00https://creativecommons.org/publicdomain/zero/1.0/Erdem AlkimVincent HwangBo-Yin Yanghttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2022/1302Private Certifier Intersection2022-09-30T11:56:03+00:00Bishakh Chandra GhoshSikhar PatranabisDhinakaran VinayagamurthyVenkatraman RamakrishnaKrishnasuri NarayanamSandip ChakrabortyWe initiate the study of Private Certifier Intersection (PCI), which allows mutually distrusting parties to establish a trust basis for cross-validation of claims if they have one or more trust authorities (certifiers) in common. This is one of the essential requirements for verifiable presentations in Web 3.0, since it provides additional privacy without compromising on decentralization. A PCI protocol allows two or more parties holding certificates to identify a common set of certifiers while additionally validating the certificates issued by such certifiers, without leaking any information about the certifiers not in the output intersection. In this paper, we formally define the notion of multi-party PCI in the Simplified-UC framework for two different settings depending on whether certificates are required for any of the claims (called PCI-Any) or all of the claims (called PCI-All). We then design and implement two provably secure and practically efficient PCI protocols supporting validation of digital signature-based certificates: a PCI-Any protocol for ECDSA-based certificates and a PCI-All protocol for BLS-based certificates. The technical centerpiece of our proposals is the first secret-sharing-based MPC framework supporting efficient computation of elliptic curve-based arithmetic operations, including elliptic curve pairings, in a black-box way.
We implement this framework by building on top of the well-known MP-SPDZ library using OpenSSL and RELIC for elliptic curve operations, and use this implementation to benchmark our proposed PCI protocols in the LAN and WAN settings. In an intercontinental WAN setup with parties located in different continents, our protocols execute in less than a minute on input sets of size 40, which demonstrates the practicality of our proposed solutions.2022-09-30T11:56:03+00:00https://creativecommons.org/licenses/by/4.0/Bishakh Chandra GhoshSikhar PatranabisDhinakaran VinayagamurthyVenkatraman RamakrishnaKrishnasuri NarayanamSandip Chakrabortyhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1144MAYO: Practical Post-Quantum Signatures from Oil-and-Vinegar Maps2022-09-30T16:27:38+00:00Ward BeullensThe Oil and Vinegar signature scheme, proposed in 1997 by Patarin, is one of the oldest and best understood multivariate quadratic signature schemes. It has excellent performance and signature sizes but suffers from large key sizes on the order of 50 KB, which makes it less practical as a general-purpose signature scheme. To solve this problem, this paper proposes MAYO, a variant of the UOV signature scheme whose public keys are two orders of magnitude smaller. MAYO works by using a UOV map with an unusually small oil space, which makes it possible to represent the public key very compactly. The usual UOV signing algorithm fails if the oil space is too small, but MAYO works around this problem by ``whipping up'' the base oil and vinegar map into a larger map that does have a sufficiently large oil space. With parameters targeting NISTPQC security level I, MAYO has a public key size of only 614 bytes and a signature size of 392 bytes. This makes MAYO more compact than state-of-the-art lattice-based signature schemes such as Falcon and Dilithium.
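The structural property underlying UOV and MAYO is a quadratic map that vanishes on a secret linear "oil" subspace, which is what makes signing possible. A toy sketch of the plain OV central map over a small field (toy parameters, nothing like MAYO's sizes, and omitting both the "whipping" construction and the secret change of basis that hides the oil space in a real public key):

```python
import random
random.seed(1)  # deterministic toy example

q, v, o = 31, 4, 2  # field size, vinegar and oil variable counts (toy)
n = v + o

def quad_form(M, x):
    # Evaluate the quadratic form x^T M x over Z_q.
    return sum(M[i][j] * x[i] * x[j]
               for i in range(n) for j in range(n)) % q

def ov_matrix():
    # Central OV map: the oil-oil block (last o rows/columns) is zero.
    M = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    for i in range(v, n):
        for j in range(v, n):
            M[i][j] = 0
    return M

P = [ov_matrix() for _ in range(o)]
# Any vector supported only on the oil coordinates is a zero of every form.
oil_vec = [0] * v + [random.randrange(q) for _ in range(o)]
assert all(quad_form(M, oil_vec) == 0 for M in P)
```

Signing fixes the vinegar variables at random, which turns each quadratic equation into a linear one in the oil variables; MAYO's contribution is making this work when the oil space is too small for that linear system to be solvable directly.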
Moreover, we can choose MAYO parameters such that, unlike traditional UOV signatures, signatures provably leak only a negligible amount of information about the private key.2021-09-10T06:52:49+00:00https://creativecommons.org/licenses/by/4.0/Ward Beullenshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1303Towards perfect CRYSTALS in Helium2022-09-30T16:31:59+00:00Hanno BeckerFabien KleinIn this work, we present a tool for the automated super optimization of Armv8.1-M + Helium assembly on Cortex-M55.
It consists of two parts: Firstly, a generic framework SLOTHY - [S]uper ([L]azy) [O]ptimization of [T]ricky [H]andwritten assembl[Y] - for expressing the super optimization of small pieces of assembly as a constraint satisfaction problem which can be handed to an external solver -- concretely, we pick CP-SAT from Google OR-Tools. Secondly, an instantiation Helight55 of SLOTHY with the Armv8.1-M architecture and aspects of the Cortex-M55 microarchitecture. We demonstrate the power of SLOTHY and Helight55 by using them to optimize two workloads: First, a radix-4 complex Fast Fourier Transform (FFT) in fixed-point arithmetic, fundamental in Digital Signal Processing. Second, the instances of the Number Theoretic Transform (NTT) underlying CRYSTALS-Kyber and CRYSTALS-Dilithium, two recently announced winners of the NIST Post-Quantum Cryptography standardization project.2022-09-30T16:31:59+00:00https://creativecommons.org/licenses/by/4.0/Hanno BeckerFabien Kleinhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1304Unifying Quantum Verification and Error-Detection: Theory and Tools for Optimisations2022-09-30T18:45:00+00:00Theodoros KapourniotisElham KashefiDominik LeichtleLuka MusicHarold OllivierWith the recent availability of cloud quantum computing services, the question of verifying quantum computations delegated by a client to a quantum server is becoming of practical interest. While Verifiable Blind Quantum Computing (VBQC) has emerged as one of the key approaches to address this challenge, current protocols still need to be optimised before they are truly practical.
To this end, we establish a fundamental correspondence between error-detection and verification and provide sufficient conditions to both achieve security in the Abstract Cryptography framework and optimise resource overheads of all known VBQC-based protocols. As a direct application, we demonstrate how to systematise the search for new efficient and robust verification protocols for $\mathsf{BQP}$ computations. While we have chosen Measurement-Based Quantum Computing (MBQC) as the working model for the presentation of our results, one could expand the domain of applicability of our framework via known direct translations between the circuit model and MBQC.2022-09-30T18:45:00+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Theodoros KapourniotisElham KashefiDominik LeichtleLuka MusicHarold Ollivierhttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2022/698State Machine Replication under Changing Network Conditions2022-09-30T20:45:08+00:00Andreea B. AlexandruErica BlumJonathan KatzJulian LossProtocols for state machine replication (SMR) are typically designed for synchronous or asynchronous networks, with a lower corruption threshold in the latter case. Recent network-agnostic protocols are secure when run in either a synchronous or an asynchronous network. We propose two new constructions of network-agnostic SMR protocols that improve on existing protocols in terms of either the adversarial model or communication complexity:
1. an adaptively secure protocol with optimal corruption thresholds and quadratic amortized communication complexity per transaction;
2. a statically secure protocol with near-optimal corruption thresholds and linear amortized communication complexity per transaction.
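The abstract does not state the thresholds explicitly. As recalled from the network-agnostic consensus literature (e.g., Blum, Katz, and Loss, TCC 2019), tolerating $t_s$ corruptions under synchrony and $t_a$ under asynchrony is feasible if and only if $t_a \le t_s$ and $t_a + 2t_s < n$; note this is a recalled condition, not a claim from the paper above. A sketch of the check and its familiar special cases:

```python
# Network-agnostic corruption-threshold feasibility (recalled condition,
# e.g. Blum-Katz-Loss): t_s corruptions tolerated under synchrony, t_a
# under asynchrony, out of n parties.
def feasible(n: int, t_s: int, t_a: int) -> bool:
    return t_a <= t_s and t_a + 2 * t_s < n

assert feasible(10, 4, 1)      # honest-majority sync, small async budget
assert not feasible(10, 5, 0)  # recovers the synchronous bound t_s < n/2
assert feasible(10, 3, 3)      # t_a = t_s recovers the async bound t < n/3
assert not feasible(9, 3, 3)   # 3 + 2*3 = 9 is not < 9
```

Setting $t_a = 0$ recovers the classical synchronous bound and $t_a = t_s$ the classical asynchronous one, which is why such protocols are called "network-agnostic".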
We further explore SMR protocols run in a network that may change between synchronous and asynchronous arbitrarily often; parties can be uncorrupted (as in the proactive model), and the protocol should remain secure as long as the appropriate corruption thresholds are maintained. We show that purely asynchronous proactive secret sharing is impossible without some form of synchronization between the parties, ruling out a natural approach to proactively secure network-agnostic SMR protocols. Motivated by this negative result, we consider a model where the adversary is limited in the total number of parties it can corrupt over the duration of the protocol and show, in this setting, that our SMR protocols remain secure even under arbitrarily changing network conditions.2022-06-01T18:32:11+00:00https://creativecommons.org/licenses/by/4.0/Andreea B. AlexandruErica BlumJonathan KatzJulian Losshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/974PEReDi: Privacy-Enhanced, Regulated and Distributed Central Bank Digital Currencies2022-09-30T21:30:18+00:00Aggelos KiayiasMarkulf KohlweissAmirreza SarenchehCentral Bank Digital Currencies (CBDCs) aspire to offer a digital replacement for physical cash and as such need to tackle two fundamental requirements that are in conflict. On the one hand, it is desirable that they be private, so that a financial “panopticon” is avoided, while on the other, they should be regulation friendly in the sense of facilitating any threshold-limiting, tracing, and counterparty auditing functionality that is necessary to comply with regulations such as Know Your Customer (KYC), Anti-Money Laundering (AML) and Combating the Financing of Terrorism (CFT) as well as financial stability considerations. In this work, we put forth a new model for CBDCs and an efficient construction that, for the first time, fully addresses these issues simultaneously.
Moreover, recognizing the importance of avoiding a single point of failure, our construction is distributed so that all its properties can withstand a suitably bounded minority of participating entities getting corrupted by an adversary. Achieving all the above properties efficiently is technically involved; among others, our construction uses suitable cryptographic tools to thwart man-in-the-middle attacks, it showcases a novel traceability mechanism with significant performance gains compared to previously known techniques and, perhaps surprisingly, shows how to obviate Byzantine agreement or broadcast from the optimistic execution path of a payment, something that results in an essentially optimal communication pattern and communication overhead when the sender and receiver are honest. Going beyond “simple” payments, we also discuss how our scheme can facilitate one-off large transfers complying with Know Your Transaction (KYT) disclosure requirements. Our CBDC concept is expressed and realized in the Universal Composition (UC) framework, providing in this way a modular and secure way to embed it within a larger financial ecosystem.Central Bank Digital Currencies (CBDCs) aspire to offer a digital replacement for physical cash and as such need to tackle two fundamental requirements that are in conflict. On the one hand, it is desired they are private so that a financial “panopticon” is avoided, while on the other, they should be regulation friendly in the sense of facilitating any threshold-limiting, tracing, and counterparty auditing functionality that is necessary to comply with regulations such as Know Your Customer (KYC), Anti Money Laundering (AML) and Combating Financing of Terrorism (CFT) as well as financial stability considerations. In this work, we put forth a new model for CBDCs and an efficient construction that, for the first time, fully addresses these issues simultaneously. 
Moreover, recognizing the importance of avoiding a single point of failure, our construction is distributed so that all its properties can withstand a suitably bounded minority of participating entities getting corrupted by an adversary. Achieving all the above properties efficiently is technically involved; among others, our construction uses suitable cryptographic tools to thwart man-in-the-middle attacks, it showcases a novel traceability mechanism with significant performance gains compared to previously known techniques and, perhaps surprisingly, shows how to obviate Byzantine agreement or broadcast from the optimistic execution path of a payment, something that results in an essentially optimal communication pattern and communication overhead when the sender and receiver are honest. Going beyond “simple” payments, we also discuss how our scheme can facilitate one-off large transfers complying with Know Your Transaction (KYT) disclosure requirements. Our CBDC concept is expressed and realized in the Universal Composition (UC) framework, providing in this way a modular and secure way to embed it within a larger financial ecosystem.2022-07-29T23:15:35+00:00https://creativecommons.org/licenses/by/4.0/Aggelos KiayiasMarkulf KohlweissAmirreza Sarenchehhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/400Quantum Advantage from Any Non-Local Game2022-09-30T22:36:30+00:00Yael Tauman KalaiAlex LombardiVinod VaikuntanathanLisa YangWe show a general method of compiling any $k$-prover non-local game into a single-prover interactive game maintaining the same (quantum) completeness and (classical) soundness guarantees (up to negligible additive factors in a security parameter). Our compiler uses any quantum homomorphic encryption scheme (Mahadev, FOCS 2018; Brakerski, CRYPTO 2018) satisfying a natural form of correctness with respect to auxiliary (quantum) input. 
The homomorphic encryption scheme is used as a cryptographic mechanism to simulate the effect of spatial separation, and is required to evaluate $k-1$ prover strategies (out of $k$) on encrypted queries.
In conjunction with the rich literature on (entangled) multi-prover non-local games starting from the celebrated CHSH game (Clauser, Horne, Shimony and Holt, Physical Review Letters 1969), our compiler gives a broad framework for constructing mechanisms to classically verify quantum advantage.We show a general method of compiling any $k$-prover non-local game into a single-prover interactive game maintaining the same (quantum) completeness and (classical) soundness guarantees (up to negligible additive factors in a security parameter). Our compiler uses any quantum homomorphic encryption scheme (Mahadev, FOCS 2018; Brakerski, CRYPTO 2018) satisfying a natural form of correctness with respect to auxiliary (quantum) input. The homomorphic encryption scheme is used as a cryptographic mechanism to simulate the effect of spatial separation, and is required to evaluate $k-1$ prover strategies (out of $k$) on encrypted queries.
In conjunction with the rich literature on (entangled) multi-prover non-local games starting from the celebrated CHSH game (Clauser, Horne, Shimony and Holt, Physical Review Letters 1969), our compiler gives a broad framework for constructing mechanisms to classically verify quantum advantage.2022-03-28T14:48:17+00:00https://creativecommons.org/licenses/by/4.0/Yael Tauman KalaiAlex LombardiVinod VaikuntanathanLisa Yanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1295Daric: A Storage Efficient Payment Channel With Penalization Mechanism2022-10-01T02:27:08+00:00Arash MirzaeiAmin SakzadJiangshan YuRon SteinfeldLightning Network (LN), the most widely deployed payment channel for Bitcoin, requires channel parties to generate and store distinct revocation keys for all n payments of a channel to resolve fraudulent channel closures. To reduce the required storage in a payment channel, eltoo introduces a new signature type for Bitcoin to enable payment versioning. This allows a channel party to revoke all old payments by using a payment with a higher version number, reducing the storage complexity from O(n) to O(1). However, eltoo fails to achieve bounded closure, enabling a dishonest channel party to significantly delay the channel closure process. Eltoo also lacks a punishment mechanism, which may incentivize profit-driven channel parties to close a payment channel with an old state, to their own advantage.
This paper introduces Daric, a payment channel with unlimited lifetime for Bitcoin that achieves optimal storage and bounded closure. Moreover, Daric implements a punishment mechanism and simultaneously avoids the methods other schemes commonly use to enable punishment: 1) state duplication, which leads to an exponential increase in the number of transactions as applications are stacked on top of each other, or 2) a dedicated design of adaptor signatures, which introduces compatibility issues with BLS or most post-quantum digital signatures. We also formalise Daric and prove its security in the Universal Composability model.Lightning Network (LN), the most widely deployed payment channel for Bitcoin, requires channel parties to generate and store distinct revocation keys for all n payments of a channel to resolve fraudulent channel closures. To reduce the required storage in a payment channel, eltoo introduces a new signature type for Bitcoin to enable payment versioning. This allows a channel party to revoke all old payments by using a payment with a higher version number, reducing the storage complexity from O(n) to O(1). However, eltoo fails to achieve bounded closure, enabling a dishonest channel party to significantly delay the channel closure process. Eltoo also lacks a punishment mechanism, which may incentivize profit-driven channel parties to close a payment channel with an old state, to their own advantage.
This paper introduces Daric, a payment channel with unlimited lifetime for Bitcoin that achieves optimal storage and bounded closure. Moreover, Daric implements a punishment mechanism and simultaneously avoids the methods other schemes commonly use to enable punishment: 1) state duplication, which leads to an exponential increase in the number of transactions as applications are stacked on top of each other, or 2) a dedicated design of adaptor signatures, which introduces compatibility issues with BLS or most post-quantum digital signatures. We also formalise Daric and prove its security in the Universal Composability model.2022-09-29T05:15:55+00:00https://creativecommons.org/licenses/by/4.0/Arash MirzaeiAmin SakzadJiangshan YuRon Steinfeldhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1305Subset Product with Errors over Unique Factorization Domains and Ideal Class Groups of Dedekind Domains2022-10-01T04:29:11+00:00Trey LiIt has been half a century since the first several NP-complete problems were discovered by Cook, Karp and Levin in the early 1970s. Till today, thousands of NP-complete problems have been found. Most of them are of combinatorial flavor. We discover new possibilities in purer mathematics and introduce more structures to the theory of computation. We propose a family of abstract problems related to the subset product problem. To describe hardness of abstract problems, we propose a new hardness notion called global-case hardness, which is stronger than worst-case hardness and incomparable with average-case hardness. It is about whether all prespecified subproblems of a problem are NP-hard. 
We prove that our problems are generally NP-hard in all/a wide range of unique factorization domains with efficient multiplication or all/a wide range of ideal class groups of Dedekind domains with efficient ideal multiplication.It has been half a century since the first several NP-complete problems were discovered by Cook, Karp and Levin in the early 1970s. Till today, thousands of NP-complete problems have been found. Most of them are of combinatorial flavor. We discover new possibilities in purer mathematics and introduce more structures to the theory of computation. We propose a family of abstract problems related to the subset product problem. To describe hardness of abstract problems, we propose a new hardness notion called global-case hardness, which is stronger than worst-case hardness and incomparable with average-case hardness. It is about whether all prespecified subproblems of a problem are NP-hard. We prove that our problems are generally NP-hard in all/a wide range of unique factorization domains with efficient multiplication or all/a wide range of ideal class groups of Dedekind domains with efficient ideal multiplication.2022-10-01T04:29:11+00:00https://creativecommons.org/licenses/by/4.0/Trey Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1268Cryptographic Role-Based Access Control, Reconsidered2022-10-01T06:15:11+00:00Bin LiuAntonis MichalasBogdan WarinschiThe heavy reliance on reference monitors is a significant shortcoming of traditional access control mechanisms since monitors are single points of failure that need to run in protected mode and have to be permanently online to deal with every access request. Cryptographic access control offers an alternative solution that provides better scalability and deployability. It relies on security guarantees of the underlying cryptographic primitives and also the appropriate key distribution/management in the system. 
In order to rigorously study security guarantees that a cryptographic access control system can achieve, providing formal security definitions for the system is of great importance, since the security guarantee of the underlying cryptographic primitives cannot be directly translated into those of the system.
In this paper, we follow the line of the existing study on cryptographic enforcement of Role-Based Access Control (RBAC). Inspired by the study of the relation between the existing security definitions for such systems, we identify two different types of attacks which cannot be captured by the existing ones. Therefore, we propose two new security definitions towards the goal of appropriately modeling cryptographic enforcement of Role-Based Access Control policies and study the relation between our new definitions and the existing ones. In addition, we show that the cost of supporting dynamic policy updates is inherently expensive by presenting two lower bounds for such systems which guarantee the correctness and secure access.The heavy reliance on reference monitors is a significant shortcoming of traditional access control mechanisms since monitors are single points of failure that need to run in protected mode and have to be permanently online to deal with every access request. Cryptographic access control offers an alternative solution that provides better scalability and deployability. It relies on security guarantees of the underlying cryptographic primitives and also the appropriate key distribution/management in the system. In order to rigorously study security guarantees that a cryptographic access control system can achieve, providing formal security definitions for the system is of great importance, since the security guarantee of the underlying cryptographic primitives cannot be directly translated into those of the system.
In this paper, we follow the line of the existing study on cryptographic enforcement of Role-Based Access Control (RBAC). Inspired by the study of the relation between the existing security definitions for such systems, we identify two different types of attacks which cannot be captured by the existing ones. Therefore, we propose two new security definitions towards the goal of appropriately modeling cryptographic enforcement of Role-Based Access Control policies and study the relation between our new definitions and the existing ones. In addition, we show that the cost of supporting dynamic policy updates is inherently expensive by presenting two lower bounds for such systems which guarantee the correctness and secure access.2022-09-24T21:26:09+00:00https://creativecommons.org/licenses/by/4.0/Bin LiuAntonis MichalasBogdan Warinschihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1306Single-shuffle Full-open Card-based Protocols Imply Private Simultaneous Messages Protocols2022-10-01T07:39:53+00:00Kazumasa ShinagawaKoji NuidaIn this note, we introduce a class of card-based protocols called single-shuffle full-open (SSFO) protocols and show that any SSFO protocol for a function $f: \{0,1\}^n \rightarrow [d]$ using $k$ cards is generically converted to a private simultaneous messages (PSM) protocol for $f$ with $(nk)$-bit communication. As an example application, we obtain an 18-bit PSM protocol for the three-bit equality function from the six-card trick (Heather-Schneider-Teague, Formal Aspects of Computing 2014), which is an SSFO protocol in our terminology. We then generalize this result to another class of protocols which we name single-shuffle single-branch (SSSB) protocols, which contains SSFO protocols as a subclass. 
As an example application, we obtain an 8-bit PSM protocol for the two-bit AND function from the four-card trick (Mizuki-Kumamoto-Sone, ASIACRYPT 2012), which is an SSSB protocol in our terminology.In this note, we introduce a class of card-based protocols called single-shuffle full-open (SSFO) protocols and show that any SSFO protocol for a function $f: \{0,1\}^n \rightarrow [d]$ using $k$ cards is generically converted to a private simultaneous messages (PSM) protocol for $f$ with $(nk)$-bit communication. As an example application, we obtain an 18-bit PSM protocol for the three-bit equality function from the six-card trick (Heather-Schneider-Teague, Formal Aspects of Computing 2014), which is an SSFO protocol in our terminology. We then generalize this result to another class of protocols which we name single-shuffle single-branch (SSSB) protocols, which contains SSFO protocols as a subclass. As an example application, we obtain an 8-bit PSM protocol for the two-bit AND function from the four-card trick (Mizuki-Kumamoto-Sone, ASIACRYPT 2012), which is an SSSB protocol in our terminology.2022-10-01T07:39:53+00:00https://creativecommons.org/licenses/by/4.0/Kazumasa ShinagawaKoji Nuidahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1130A note on group membership tests for $\G_1$, $\G_2$ and $\G_T$ on BLS pairing-friendly curves2022-10-01T08:14:05+00:00Michael ScottHere we consider a method for quickly testing for group membership in the groups $\G_1$, $\G_2$ and $\G_T$ (all of prime order $r$) as they arise on a type-3 pairing-friendly curve. As is well known endomorphisms exist for each of these groups which allows for faster point multiplication for elements of order $r$. The endomorphism applies if an element is of
order $r$. Here we show that, under relatively mild conditions, the endomorphism applies {\bf if and only if} an element is of order $r$. This results in a faster method of confirming group membership. In particular we show that the conditions are met for the popular BLS family of curves.Here we consider a method for quickly testing for group membership in the groups $\G_1$, $\G_2$ and $\G_T$ (all of prime order $r$) as they arise on a type-3 pairing-friendly curve. As is well known endomorphisms exist for each of these groups which allows for faster point multiplication for elements of order $r$. The endomorphism applies if an element is of
order $r$. Here we show that, under relatively mild conditions, the endomorphism applies {\bf if and only if} an element is of order $r$. This results in a faster method of confirming group membership. In particular we show that the conditions are met for the popular BLS family of curves.2021-09-06T07:49:05+00:00https://creativecommons.org/licenses/by/4.0/Michael Scotthttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1232The Abe-Okamoto Partially Blind Signature Scheme Revisited2022-10-01T11:38:31+00:00Julia KastnerJulian LossJiayu XuPartially blind signatures, an extension of ordinary blind signatures, are a primitive with wide applications in e-cash and electronic voting. One of the most efficient schemes to date is the one by Abe and Okamoto (CRYPTO 2000), whose underlying idea - the OR-proof technique - has served as the basis for several works.
We point out several subtle flaws in the original proof of security, and provide a new detailed and rigorous proof, achieving bounds similar to those of the original work. We believe our insights on the proof strategy will prove useful in the security analyses of other OR-proof-based schemes.Partially blind signatures, an extension of ordinary blind signatures, are a primitive with wide applications in e-cash and electronic voting. One of the most efficient schemes to date is the one by Abe and Okamoto (CRYPTO 2000), whose underlying idea - the OR-proof technique - has served as the basis for several works.
We point out several subtle flaws in the original proof of security, and provide a new detailed and rigorous proof, achieving bounds similar to those of the original work. We believe our insights on the proof strategy will prove useful in the security analyses of other OR-proof-based schemes.2022-09-16T16:13:22+00:00https://creativecommons.org/licenses/by/4.0/Julia KastnerJulian LossJiayu Xuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/7482DT-GLS: Faster and exception-free scalar multiplication in the GLS254 binary curve2022-10-01T11:52:22+00:00Marius A. AardalDiego F. AranhaWe revisit and improve the performance of arithmetic in the binary GLS254 curve by introducing the 2DT-GLS scalar multiplication algorithm.
The algorithm includes theoretical and practice-oriented contributions of potential independent interest:
(i) for the first time, a proof that the GLS scalar multiplication algorithm does not incur exceptions, such that faster incomplete formulas can be used;
(ii) faster dedicated atomic formulas that alleviate the cost of precomputation;
(iii) a table compression technique that reduces the storage needed for precomputed points;
(iv) a refined constant-time scalar decomposition algorithm that is more robust to rounding.
We also present the first GLS254 implementation for Armv8. With our contributions, we set new speed records for constant-time scalar multiplication by $34.5\%$ and $6\%$ on 64-bit Arm and Intel platforms, respectively.We revisit and improve the performance of arithmetic in the binary GLS254 curve by introducing the 2DT-GLS scalar multiplication algorithm.
The algorithm includes theoretical and practice-oriented contributions of potential independent interest:
(i) for the first time, a proof that the GLS scalar multiplication algorithm does not incur exceptions, such that faster incomplete formulas can be used;
(ii) faster dedicated atomic formulas that alleviate the cost of precomputation;
(iii) a table compression technique that reduces the storage needed for precomputed points;
(iv) a refined constant-time scalar decomposition algorithm that is more robust to rounding.
We also present the first GLS254 implementation for Armv8. With our contributions, we set new speed records for constant-time scalar multiplication by $34.5\%$ and $6\%$ on 64-bit Arm and Intel platforms, respectively.2022-06-13T15:13:16+00:00https://creativecommons.org/licenses/by/4.0/Marius A. AardalDiego F. Aranhahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/210An Analysis of the Algebraic Group Model2022-10-01T17:10:07+00:00Jonathan KatzCong ZhangHong-Sheng ZhouThe algebraic group model (AGM), formalized by Fuchsbauer, Kiltz, and Loss, has recently received significant attention. One of the appealing properties of the AGM is that it is viewed as being (strictly) weaker than the generic group model (GGM), in the sense that hardness results for algebraic algorithms imply hardness results for generic algorithms, and generic reductions in the AGM (namely, between the algebraic formulations of two problems) imply generic reductions in the GGM. We highlight that as the GGM and AGM are currently formalized, this is not true: hardness in the AGM may not imply hardness in the GGM, and a generic reduction in the AGM may not imply a similar reduction in the GGM.The algebraic group model (AGM), formalized by Fuchsbauer, Kiltz, and Loss, has recently received significant attention. One of the appealing properties of the AGM is that it is viewed as being (strictly) weaker than the generic group model (GGM), in the sense that hardness results for algebraic algorithms imply hardness results for generic algorithms, and generic reductions in the AGM (namely, between the algebraic formulations of two problems) imply generic reductions in the GGM. 
We highlight that as the GGM and AGM are currently formalized, this is not true: hardness in the AGM may not imply hardness in the GGM, and a generic reduction in the AGM may not imply a similar reduction in the GGM.2022-02-22T16:18:21+00:00https://creativecommons.org/licenses/by/4.0/Jonathan KatzCong ZhangHong-Sheng Zhouhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/345On the decisional Diffie-Hellman problem for class group actions on oriented elliptic curves2022-10-01T18:59:38+00:00Wouter CastryckMarc HoubenFrederik VercauterenBenjamin WesolowskiWe show how the Weil pairing can be used to evaluate the assigned characters of an imaginary quadratic order $\mathcal{O}$ in an unknown ideal class $[\mathfrak{a}] \in \mathrm{Cl}(\mathcal{O})$ that connects two given $\mathcal{O}$-oriented elliptic curves $(E, \iota)$ and $(E', \iota') = [\mathfrak{a}](E, \iota)$.
When specialized to ordinary elliptic curves over finite fields, our method is conceptually simpler and often faster than a recent approach due to Castryck, Sot\'akov\'a and Vercauteren, who rely on the Tate pairing instead.
The main implication of our work is that it breaks the decisional Diffie–Hellman problem for practically all oriented elliptic curves that are acted upon by an even-order class group.
It can also be used to better handle the worst cases in Wesolowski's recent reduction from the vectorization problem for oriented elliptic curves to the endomorphism ring problem, leading to a method that always works in sub-exponential time.We show how the Weil pairing can be used to evaluate the assigned characters of an imaginary quadratic order $\mathcal{O}$ in an unknown ideal class $[\mathfrak{a}] \in \mathrm{Cl}(\mathcal{O})$ that connects two given $\mathcal{O}$-oriented elliptic curves $(E, \iota)$ and $(E', \iota') = [\mathfrak{a}](E, \iota)$.
When specialized to ordinary elliptic curves over finite fields, our method is conceptually simpler and often faster than a recent approach due to Castryck, Sot\'akov\'a and Vercauteren, who rely on the Tate pairing instead.
The main implication of our work is that it breaks the decisional Diffie–Hellman problem for practically all oriented elliptic curves that are acted upon by an even-order class group.
It can also be used to better handle the worst cases in Wesolowski's recent reduction from the vectorization problem for oriented elliptic curves to the endomorphism ring problem, leading to a method that always works in sub-exponential time.2022-03-14T11:56:43+00:00https://creativecommons.org/licenses/by/4.0/Wouter CastryckMarc HoubenFrederik VercauterenBenjamin Wesolowskihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1450Efficient Zero-Knowledge Argument in Discrete Logarithm Setting: Sublogarithmic Proof or Sublinear Verifier2022-10-01T19:50:50+00:00Sungwook KimHyeonbum LeeJae Hong SeoWe propose three interactive zero-knowledge arguments for arithmetic circuit of size $N$ in the common random string model, which can be converted to be non-interactive by Fiat-Shamir heuristics in the random oracle model. First argument features $O(\sqrt{\log N})$ communication and round complexities and $O(N)$ computational complexity for the verifier. Second argument features $O(\log N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. Third argument features $O(\log N)$ communication and $O(\sqrt{N}\log N)$ computational complexity for the verifier. Contrary to first and second arguments, the third argument is free of reliance on pairing-friendly elliptic curves. The soundness of three arguments is proven under the standard discrete logarithm and/or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption.We propose three interactive zero-knowledge arguments for arithmetic circuit of size $N$ in the common random string model, which can be converted to be non-interactive by Fiat-Shamir heuristics in the random oracle model. First argument features $O(\sqrt{\log N})$ communication and round complexities and $O(N)$ computational complexity for the verifier. Second argument features $O(\log N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. 
Third argument features $O(\log N)$ communication and $O(\sqrt{N}\log N)$ computational complexity for the verifier. Contrary to first and second arguments, the third argument is free of reliance on pairing-friendly elliptic curves. The soundness of three arguments is proven under the standard discrete logarithm and/or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption.2021-10-29T18:30:54+00:00https://creativecommons.org/licenses/by/4.0/Sungwook KimHyeonbum LeeJae Hong Seohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1308Jacobi Symbol Parity Checking Algorithm for Subset Product2022-10-02T00:48:46+00:00Trey LiIt is well-known that the subset product problem is NP-hard. We give a probabilistic polynomial time algorithm for the special case of high F_2-rank.It is well-known that the subset product problem is NP-hard. We give a probabilistic polynomial time algorithm for the special case of high F_2-rank.2022-10-02T00:48:46+00:00https://creativecommons.org/licenses/by/4.0/Trey Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/974Fast Keyword Search over Encrypted Data with Short Ciphertext in Clouds2022-10-02T01:30:46+00:00Yi-Fan TsengChun-I FanZi-Cheng LiuNowadays, it is convenient for people to store their data on clouds. To protect the privacy, people tend to encrypt their data before uploading them to clouds. Due to the widespread use of cloud services, public key searchable encryption is necessary for users to search the encrypted files efficiently and correctly. However, the existing public key searchable encryption schemes supporting monotonic queries suffer from either infeasibility in keyword testing or inefficiency such as heavy computing cost of testing, large size of ciphertext or trapdoor, and so on. In this work, we first propose a novel and efficient anonymous key-policy attribute-based encryption (KP-ABE). 
Then by applying Shen et al.'s generic construction to the proposed anonymous KP-ABE, we obtain an efficient and expressive public key searchable encryption scheme, which to the best of our knowledge achieves the best performance in testing among the existing such schemes. Only 2 pairings are needed in testing. Besides, we also implement our scheme and others in Python to compare the performance. From the implementation results, our scheme achieves the best testing performance, and the sizes of its ciphertexts and trapdoors are smaller than those of most existing schemes.Nowadays, it is convenient for people to store their data on clouds. To protect the privacy, people tend to encrypt their data before uploading them to clouds. Due to the widespread use of cloud services, public key searchable encryption is necessary for users to search the encrypted files efficiently and correctly. However, the existing public key searchable encryption schemes supporting monotonic queries suffer from either infeasibility in keyword testing or inefficiency such as heavy computing cost of testing, large size of ciphertext or trapdoor, and so on. In this work, we first propose a novel and efficient anonymous key-policy attribute-based encryption (KP-ABE). Then by applying Shen et al.'s generic construction to the proposed anonymous KP-ABE, we obtain an efficient and expressive public key searchable encryption scheme, which to the best of our knowledge achieves the best performance in testing among the existing such schemes. Only 2 pairings are needed in testing. Besides, we also implement our scheme and others in Python to compare the performance. 
2021-07-22T09:21:27+00:00https://creativecommons.org/licenses/by/4.0/Yi-Fan TsengChun-I FanZi-Cheng Liuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1548Just how hard are rotations of $\mathbb{Z}^n$? Algorithms and cryptography with the simplest lattice2022-10-02T02:13:36+00:00Huck BennettAtul GanjuPura PeetathawatchaiNoah Stephens-DavidowitzWe study the computational problem of finding a shortest non-zero vector in a rotation of $\mathbb{Z}^n$, which we call $\mathbb{Z}$SVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for $\mathbb{Z}$SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve $\mathbb{Z}$SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in $2^{n + o(n)}$ time.
We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for $\mathbb{Z}$SVP and instead ask what else we can say about the problem. E.g., can we find *any* non-trivial speedup over the best known SVP algorithm? And, what consequences would follow if $\mathbb{Z}$SVP actually is hard? Our results are as follows.
1) We show that $\mathbb{Z}$SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce $\mathbb{Z}$SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of $\mathbb{Z}$SVP from SVP. As a consequence of this reduction, we obtain a $2^{n/2 + o(n)}$-time algorithm for $\mathbb{Z}$SVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices---semi-stable lattices with not-too-large $\lambda_1$.)
2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) $\mathbb{Z}$SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of $\mathbb{Z}^n$ from either a lattice with all non-zero vectors longer than $\sqrt{n/\log n}$ or a lattice with smoothing parameter significantly smaller than the smoothing parameter of $\mathbb{Z}^n$. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that ``$\mathbb{Z}^n$ has the largest smoothing parameter.''
3) We show a distribution of bases $B$ for rotations of $\mathbb{Z}^n$ such that, if $\mathbb{Z}$SVP is hard for *any* input basis, then $\mathbb{Z}$SVP is hard on input $B$. This gives a satisfying theoretical resolution to the problem of sampling hard bases for $\mathbb{Z}^n$, which was studied by Blanks and Miller (PQCrypto, 2021). This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices (Eurocrypt, 2022), and they also used this to analyze the security of a public-key encryption scheme.) Along the way to this result, we show a new algorithm for converting a generating set to a basis, which might be of independent interest.
4) We perform experiments to determine how practical basis reduction performs on bases of $\mathbb{Z}^n$ that are generated in different ways and how heuristic sieving algorithms perform on $\mathbb{Z}^n$. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the ``provably hard'' distribution of bases described above. We also observe a threshold phenomenon in which ``basis reduction algorithms on $\mathbb{Z}^n$ nearly always find a shortest non-zero vector once they have found a vector with length less than $\sqrt{n}/2$,'' and we explore this further. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on $\mathbb{Z}^n$.
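As background for the sampling question in item 3 and the experiments in item 4: one naive way to generate a basis of $\mathbb{Z}^n$ (not the ``provably hard'' distribution above, whose details the abstract does not give) is to apply random elementary unimodular row operations to the identity matrix. A minimal sketch, with the step count and coefficient range chosen arbitrarily:

```python
from itertools import permutations
import random

def random_unimodular(n, steps=100, seed=0):
    """Apply random elementary row operations (row_i += c * row_j) to I_n.
    Each step preserves the determinant, so the result is unimodular and
    its rows form a basis of the integer lattice Z^n."""
    rng = random.Random(seed)
    B = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        c = rng.choice([-2, -1, 1, 2])
        for k in range(n):
            B[i][k] += c * B[j][k]
    return B

def det(M):
    """Exact integer determinant via the Leibniz formula (fine for small n)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(perm[a] > perm[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][perm[r]]
        total += (-1) ** inv * prod
    return total

B = random_unimodular(6)
assert det(B) == 1  # determinant exactly 1: the rows of B generate Z^6
```

Since only row-addition steps are used, the determinant stays exactly 1; row swaps and negations would also be permissible for a basis of $\mathbb{Z}^n$.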
2021-11-29T12:19:09+00:00https://creativecommons.org/licenses/by/4.0/Huck BennettAtul GanjuPura PeetathawatchaiNoah Stephens-Davidowitzhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1226Algebraic Relation of Three MinRank Algebraic Modelings2022-10-02T04:11:58+00:00Hao GuoJintai DingWe give algebraic relations among the equations of three algebraic modelings for the MinRank problem: support minors modeling, Kipnis–Shamir modeling and minors modeling.2022-09-16T05:10:46+00:00https://creativecommons.org/licenses/by/4.0/Hao GuoJintai Dinghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1251Flashproofs: Efficient Zero-Knowledge Arguments of Range and Polynomial Evaluation with Transparent Setup2022-10-02T13:02:46+00:00Nan WangSid Chi-Kin ChauWe propose Flashproofs, a new type of efficient special honest verifier zero-knowledge arguments with a transparent setup in the discrete logarithm (DL) setting.
First, we put forth gas-efficient range arguments that achieve $O(N^{\frac{2}{3}})$ communication cost, and involve $O(N^{\frac{2}{3}})$ group exponentiations for verification and a slightly sub-linear number of group exponentiations for proving with respect to the range $[0, 2^N-1]$, where $N$ is the bit length of the range. For typical confidential transactions on blockchain platforms supporting smart contracts, verifying our range arguments consumes only 234K and 315K gas for 32-bit and 64-bit ranges, which is comparable to the 220K gas incurred by verifying the most efficient zkSNARK with a trusted setup (EUROCRYPT 16) at present. Besides, the aggregation of multiple arguments can yield further efficiency improvements. Second, we present polynomial evaluation arguments based on the techniques of Bayer & Groth (EUROCRYPT 13). We provide two zero-knowledge arguments, which are optimised for lower-degree ($D \in [3, 2^9]$) and higher-degree ($D > 2^9$) polynomials, where $D$ is the polynomial degree. Our arguments yield a non-trivial improvement in the overall efficiency. Notably, the number of group exponentiations for proving drops from $8\log D$ to $3(\log D+\sqrt{\log D})$. The communication cost and the number of group exponentiations for verification decrease from $7\log D$ to $(\log D + 3\sqrt{\log D})$. To the best of our knowledge, our arguments instantiate the most communication-efficient arguments of membership and non-membership in the DL setting among those not requiring trusted setups. More importantly, our techniques enable a significant asymptotic improvement in the efficiency of communication and verification (group exponentiations) from $O(\log D)$ to $O(\sqrt{\log D})$ when multiple arguments satisfying different polynomials with the same degree and inputs are aggregated.
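For readers unfamiliar with the DL setting such arguments live in: schemes of this kind are commonly built from Pedersen commitments, whose additive homomorphism is also what makes aggregating several arguments possible. A toy sketch with tiny, insecure demonstration parameters (standard Pedersen commitments, not the Flashproofs construction itself):

```python
import random

# Toy DL group: the order-q subgroup of Z_p^* for the safe prime p = 2q + 1.
# Parameters are tiny and for illustration only -- not secure.
p, q = 1019, 509             # 1019 = 2*509 + 1, both prime
g = 4                        # 4 = 2^2 is a quadratic residue, so it has order q
h = pow(g, 77, p)            # second generator; in a real setup log_g(h) is unknown

def commit(v, r):
    """Pedersen commitment C = g^v * h^r mod p (hiding, and binding under DL)."""
    return (pow(g, v % q, p) * pow(h, r % q, p)) % p

# Additive homomorphism: Com(v1, r1) * Com(v2, r2) = Com(v1 + v2, r1 + r2).
r1, r2 = random.randrange(q), random.randrange(q)
assert (commit(3, r1) * commit(5, r2)) % p == commit(8, r1 + r2)
```

The homomorphic property is why committed values can be combined without opening them, the basic mechanism behind aggregation in DL-based argument systems.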
2022-09-21T00:59:16+00:00https://creativecommons.org/licenses/by/4.0/Nan WangSid Chi-Kin Chauhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1274Self Masking for Hardening Inversions2022-10-02T19:48:09+00:00Paweł CyprysShlomi DolevShlomo MoranThe question of whether one-way functions (i.e., functions that are easy to compute but hard to invert) exist is arguably one of the central problems in complexity theory, from both theoretical and practical aspects. While proving that such functions exist may be hard, there have been quite a few attempts to provide functions which are one way "in practice", namely, they are easy to compute, but there are no known polynomial time algorithms that compute their (generalized) inverse (or for which computing the inverse is as hard as notoriously difficult tasks, like factoring very large integers).
In this paper we study a different approach. We provide a simple heuristic, called self masking, which converts a given polynomial time computable function $f$ into a self-masked version $[{f}]$, which satisfies the following: for a random input $x$, $[{f}]^{-1}([{f}](x))=f^{-1}(f(x))$ w.h.p., but the part of $f(x)$ that is essential for computing $f^{-1}(f(x))$ is masked in $[{f}](x)$. Intuitively, this masking makes it hard to convert an efficient algorithm that computes $f^{-1}$ into an efficient algorithm that computes $[{f}]^{-1}$, since the masked parts are available to $f$ but not to $[{f}]$.
We apply this technique to variants of the subset sum problem that were studied in the context of one way functions, and obtain functions which, to the best of our knowledge, cannot be inverted in polynomial time by published techniques.
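As a concrete reference point, the subset sum function underlying these candidates is easy in the forward direction, while generic inversion, as far as is known, requires exponential search. A minimal sketch of the unmasked function only (the self-masking transform itself is not restated in the abstract and is not reproduced here):

```python
import itertools
import random

def subset_sum(a, x):
    """f(x) = sum of the weights a_i selected by the bit-vector x."""
    return sum(ai for ai, xi in zip(a, x) if xi)

def invert_by_search(a, s):
    """Generic inversion: brute force over all 2^n subsets."""
    for x in itertools.product((0, 1), repeat=len(a)):
        if subset_sum(a, x) == s:
            return x
    return None

rng = random.Random(1)
n = 12
a = [rng.randrange(1, 2 ** n) for _ in range(n)]   # public weights
x = tuple(rng.randrange(2) for _ in range(n))      # secret subset
s = subset_sum(a, x)                               # easy to compute ...
y = invert_by_search(a, s)                         # ... but inversion searches 2^n subsets
assert y is not None and subset_sum(a, y) == s
```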
2022-09-26T11:17:31+00:00https://creativecommons.org/publicdomain/zero/1.0/Paweł CyprysShlomi DolevShlomo Moranhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2022/1309MPC as a service using Ethereum Registry Smart Contracts - dCommon CIP2022-10-02T23:54:03+00:00Matt Shams(Anis)Bingsheng ZhangIn this paper we introduce dCommon - auditable and programmable MPC as a service for solving multichain governance coordination problems throughout DeFi and Web3 - along with its on-chain part, the Common Interest Protocol (CIP), an autonomous and immutable registry smart contract suite. CIP enables arbitrary business logic for off-chain computations using dCommon's network/subnetworks with Ethereum smart contracts. In Stakehouse, CIP facilitates trustless recovery of signing keys and key management for validator owners on demand.
2022-10-02T23:54:03+00:00https://creativecommons.org/licenses/by-nc/4.0/Matt Shams(Anis)Bingsheng Zhanghttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2022/1287On a Conjecture From a Failed CryptoAnalysis2022-10-03T04:36:07+00:00Shengtong ZhangLet $P(x, y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n \times n$ matrices $L_n$ whose elements are defined by $P(i, j)$. Define the matrices $M_n = I_n - L_n$.
We show that $\mu_n = \det(M_n)$ is a polynomial in $n$, thus answering a conjecture of Naccache and Yifrach-Stav.
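The claim is easy to check numerically for a concrete $P$. For example, $P(i, j) = ij$ makes $L_n$ the rank-one matrix $uu^{\top}$ with $u = (1, \dots, n)^{\top}$, so $\det(M_n) = 1 - u^{\top}u = 1 - n(n+1)(2n+1)/6$, visibly a polynomial in $n$. A short sketch with exact rational arithmetic:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    d = Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if A[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            d = -d                      # each row swap flips the sign
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    return d

P = lambda i, j: i * j  # example bivariate polynomial; L_n = u u^T has rank one

for n in range(1, 7):
    M = [[(1 if i == j else 0) - P(i, j) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    # det(I_n - u u^T) = 1 - u^T u = 1 - n(n+1)(2n+1)/6, a polynomial in n
    assert det(M) == 1 - n * (n + 1) * (2 * n + 1) // 6
```

More generally, a bivariate $P$ of bounded degree makes $L_n$ a sum of boundedly many rank-one terms, so $\det(M_n)$ reduces to a fixed-size determinant whose entries are power sums, each polynomial in $n$.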
2022-09-28T04:00:35+00:00https://creativecommons.org/licenses/by/4.0/Shengtong Zhanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1310Power Residue Symbol Order Detecting Algorithm for Subset Product over Algebraic Integers2022-10-03T04:36:10+00:00Trey LiWe give a probabilistic polynomial time algorithm for the high F_ell-rank subset product problem over the order O_K of any algebraic number field K such that O_K is a principal ideal domain and the ell-th power residue symbol in O_K is polynomial time computable, for some rational prime ell.2022-10-03T04:36:10+00:00https://creativecommons.org/licenses/by/4.0/Trey Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/992An $\mathcal{O}(n)$ Algorithm for Coefficient Grouping2022-10-03T05:16:24+00:00Fukang LiuIn this note, we study a specific optimization problem arising in the recently proposed coefficient grouping technique, which is used for algebraic degree evaluation. Specifically, we show that there exists an efficient algorithm running in time $\mathcal{O}(n)$ to solve this basic optimization problem, which is relevant to upper bounding the algebraic degree. Moreover, the main technique in this efficient algorithm can also be used to further improve the performance of off-the-shelf solvers on other optimization problems in the coefficient grouping technique.
We expect that some results in this note can inspire more studies on the coefficient grouping technique.2022-08-03T01:27:48+00:00https://creativecommons.org/licenses/by/4.0/Fukang Liuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/991Coefficient Grouping: Breaking Chaghri and More2022-10-03T07:22:11+00:00Fukang LiuRavi AnandLibo WangWilli MeierTakanori IsobeWe propose an efficient technique called coefficient grouping to evaluate the algebraic degree of the FHE-friendly cipher Chaghri, which has been accepted for ACM CCS 2022. It is found that the algebraic degree increases linearly rather than exponentially. As a consequence, we can construct a 13-round distinguisher with time and data complexity of $2^{63}$ and mount a 13.5-round key-recovery attack. In particular, a higher-order differential attack on 8 rounds of Chaghri can be achieved with time and data complexity of $2^{38}$. Hence, it indicates that the full 8 rounds are far from being secure. Furthermore, we also demonstrate the application of our coefficient grouping technique to the design of secure cryptographic components. As a result, a countermeasure is found for Chaghri and it has little overhead compared with the original design.
Since more and more symmetric primitives defined over a large finite field are emerging, we believe our new technique can have more applications in future research.2022-08-03T01:21:24+00:00https://creativecommons.org/licenses/by/4.0/Fukang LiuRavi AnandLibo WangWilli MeierTakanori Isobehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/509Lattice-Based Signature with Efficient Protocols, Revisited2022-10-03T07:36:18+00:00Corentin JeudyAdeline Roux-LangloisOlivier SandersThe digital signature is an essential primitive in cryptography, which can be used as the digital analogue of handwritten signatures but also as a building block for more complex systems. In the latter case, signatures with specific features are needed, so as to interact smoothly with the other components of the system, such as zero-knowledge proofs.
This has given rise to so-called signatures with efficient protocols, a versatile tool that has been used in countless applications. Designing such signatures is however quite difficult, in particular if one wishes to withstand quantum computing. We are indeed aware of only one post-quantum construction, proposed by Libert et al. at Asiacrypt'16, yielding very large signatures and proofs.
In this paper, we propose a new construction that can be instantiated in both standard lattices and structured ones, resulting in each case in dramatic performance improvements. In particular, the size of a proof of message-signature possession, which is one of the main metrics for such schemes, can be brought down to less than 650 KB. As our construction retains all the features expected from signatures with efficient protocols, it can be used as a drop-in replacement in all systems using them, which mechanically improves their own performance, and has thus an impact on many applications.
2022-04-28T17:17:27+00:00https://creativecommons.org/licenses/by/4.0/Corentin JeudyAdeline Roux-LangloisOlivier Sandershttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1289Exploring RNS for Isogeny-based Cryptography2022-10-03T12:36:11+00:00David JacqueminAhmet Can MertSujoy Sinha RoyIsogeny-based cryptography suffers from long running times due to the great amount of large-integer arithmetic it requires. The Residue Number System (RNS) can compensate for that drawback by making computation more efficient via parallelism. However, performing a modular reduction by a large prime which is not part of the RNS base is very expensive. In this paper, we propose a new fast and efficient modular reduction algorithm using RNS. Our method reduces the number of required multiplications by 40\% compared to the RNS Montgomery modular reduction algorithm. Also, we evaluate our modular reduction method by realizing a cryptoprocessor for the isogeny-based SIDH key exchange. On a Xilinx Ultrascale+ FPGA, the proposed cryptoprocessor consumes 151,009 LUTs, 143,171 FFs and 1,056 DSPs. It achieves a 250 MHz clock frequency and finishes the key exchange for SIDH in 3.8 and 4.9 ms.
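The abstract does not spell out the new reduction algorithm, but the RNS idea it builds on is standard: a large integer is represented by its residues modulo pairwise-coprime moduli, arithmetic proceeds channel-by-channel in parallel, and the Chinese Remainder Theorem reconstructs the result. A minimal sketch, with the moduli chosen arbitrarily for illustration:

```python
from math import prod

moduli = [2**13 - 1, 2**17 - 1, 2**19 - 1]  # pairwise-coprime Mersenne primes
M = prod(moduli)                            # dynamic range of the RNS base

def to_rns(x):
    return [x % m for m in moduli]

def from_rns(res):
    """Chinese Remainder Theorem reconstruction."""
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse (Python 3.8+)
    return x % M

a, b = 123456789, 987654321
# Each channel multiplies independently -- this is the parallelism RNS offers.
c = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli)]
assert from_rns(c) == (a * b) % M
```

The expensive step the abstract targets is reducing such a result modulo a large prime that is not one of the RNS channel moduli, which a plain CRT representation does not handle cheaply.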
2022-09-28T10:41:34+00:00https://creativecommons.org/licenses/by/4.0/David JacqueminAhmet Can MertSujoy Sinha Royhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2020/200Leakage and Tamper Resilient Permutation-Based Cryptography2022-10-03T12:37:46+00:00Christoph DobraunigBart MenninkRobert PrimasImplementation attacks such as power analysis and fault attacks have shown that, if potential attackers have physical access to a cryptographic device, achieving practical security requires more considerations apart from just cryptanalytic security. In recent years, and with the advent of micro-architectural or hardware-oriented attacks, it became more and more clear that similar attack vectors can also be exploited on larger computing platforms and without the requirement of physical proximity of an attacker. While newly discovered attacks typically come with implementation recommendations that help counteract a specific attack vector, the process of constantly patching cryptographic code is quite time consuming in some cases, and simply not possible in other cases.
Compounding this, the popular approach of leakage resilient cryptography provably solves only part of the problem: it discards the threat of faults. Therefore, we put forward the usage of leakage and tamper resilient cryptographic algorithms, as they can offer built-in protection against various types of physical and hardware oriented attacks, likely including attack vectors that will only be discovered in the future. In detail, we present the - to the best of our knowledge - first framework for proving the security of permutation-based symmetric cryptographic constructions in the leakage and tamper resilient setting. As a proof of concept, we apply the framework to a sponge-based stream encryption scheme called asakey and provide a practical analysis of its resistance against side channel and fault attacks.
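To illustrate the flavor of permutation-based stream encryption discussed here: a public permutation is iterated on a secret state and part of each state is squeezed out as keystream. This is a toy stand-in, not asakey: the permutation below is cryptographically meaningless and the rate/capacity structure of a real sponge is omitted.

```python
MASK64 = 2**64 - 1

def toy_perm(state):
    """A tiny 64-bit mixing permutation standing in for a real one
    (e.g. a Keccak-style permutation). Each step is invertible, so the
    map is a permutation of 64-bit values -- but it offers no security."""
    s = state & MASK64
    for _ in range(8):
        s = (s * 0x9E3779B97F4A7C15 + 1) & MASK64  # invertible affine step
        s ^= s >> 29                               # invertible xorshift step
    return s

def sponge_stream_encrypt(key, nonce, msg):
    """Permutation-based stream cipher sketch: absorb key and nonce into
    the state, then squeeze one keystream byte per permutation call and
    XOR it into the message. Decryption is the same operation."""
    state = toy_perm(((key & MASK64) << 32) ^ nonce)
    out = bytearray()
    for byte in msg:
        state = toy_perm(state)
        out.append(byte ^ (state & 0xFF))
    return bytes(out)

ct = sponge_stream_encrypt(0xDEADBEEF, 42, b"attack at dawn")
assert sponge_stream_encrypt(0xDEADBEEF, 42, ct) == b"attack at dawn"
```

Because encryption is an XOR with a keystream derived only from key, nonce and position, applying the function twice with the same key and nonce recovers the plaintext.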
2020-02-18T09:12:14+00:00https://creativecommons.org/licenses/by/4.0/Christoph DobraunigBart MenninkRobert Primashttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/929PH = PSPACE2022-10-03T15:12:54+00:00Valerii SopinIn this paper it is shown that PSPACE is equal to the 4th level of the polynomial hierarchy. Several important consequences are also deduced.
True quantified Boolean formula is indeed a generalisation of the Boolean Satisfiability Problem, in which determining an interpretation that satisfies a given Boolean formula is replaced by the existence of Boolean functions that make a given QBF a tautology. Such functions are called Skolem functions.
The essential idea is to skolemize, and then to use additional formulas from the second level of the polynomial hierarchy inside the skolemized prefix to enforce that the Skolem variables indeed depend only on the universally quantified variables they are supposed to. However, some dependence is lost when the quantification is reversed. This is called the "XOR issue" in the paper, because the functional dependence can be expressed by means of an XOR formula. Thus, these XORs need to be located, but there is no need to locate all chains with XORs: any chain includes an XOR of only two variables. The latter can be done locally in each iteration (keeping in mind the algebraic normal form (ANF)), once all arguments are specified.
Relativization is defeated due to the well-known fact that PH = PSPACE iff second-order logic over finite structures gains no additional power from the addition of a transitive closure operator. Boolean algebra is finite. The exchange is possible due to the finitely many possibilities for arguments. Hence, the theorems with oracles are not applicable, since a random oracle is an arbitrary set; this is why the Polynomial Hierarchy is infinite relative to a random oracle with probability 1.
2022-07-16T17:57:00+00:00https://creativecommons.org/licenses/by/4.0/Valerii Sopinhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1307BLOOM: Bimodal Lattice One-Out-of-Many Proofs and Applications2022-10-03T16:59:49+00:00Vadim LyubashevskyNgoc Khanh NguyenWe give a construction of an efficient one-out-of-many proof system, in which a prover shows that he knows the pre-image for one element in a set, based on the hardness of lattice problems. The construction employs the recent zero-knowledge framework of Lyubashevsky et al. (Crypto 2022) together with a recursive procedure improved over prior lattice-based one-out-of-many proofs, and a novel rejection sampling proof that allows the efficient bimodal rejection sampling to be used throughout the protocol.
Using these new primitives and techniques, we give instantiations of the most compact lattice-based ring and group signature schemes. The improvement in signature sizes over prior works ranges between $25\%$ and $2$X. Perhaps of even more significance, the size of the user public keys, which need to be stored somewhere publicly accessible in order for ring signatures to be meaningful, is reduced by factors ranging from $7$X to $15$X. In what could be of independent interest, we also provide noticeably improved proofs for integer relations which, together with one-out-of-many proofs, are key components of confidential payment systems.
2022-10-01T15:51:45+00:00https://creativecommons.org/licenses/by/4.0/Vadim LyubashevskyNgoc Khanh Nguyenhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1311Fully Adaptive Decentralized Multi-Authority ABE2022-10-03T18:41:14+00:00Pratish DattaIlan KomargodskiBrent WatersDecentralized multi-authority attribute-based encryption (MA-ABE) is a distributed generalization of standard (ciphertext-policy) attribute-based encryption where there is no trusted central authority: any party can become an authority and issue private keys, and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters.
We present the first multi-authority attribute-based encryption schemes that are provably fully-adaptively secure. Namely, our construction is secure against an attacker that may corrupt some of the authorities as well as perform key queries adaptively throughout the life-time of the system. Our main construction relies on a prime order bilinear group where the $k$-linear assumption holds as well as on a random oracle. Along the way, we present a conceptually simpler construction relying on a composite order bilinear group with standard subgroup decision assumptions as well as on a random oracle.
Prior to this work, there was no construction that could resist adaptive corruptions of authorities, no matter the assumptions used. In fact, we point out that even standard complexity leveraging style arguments do not work in the multi-authority setting.
2022-10-03T18:41:14+00:00https://creativecommons.org/licenses/by/4.0/Pratish DattaIlan KomargodskiBrent Watershttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1300Garrison: A Novel Watchtower Scheme for Bitcoin2022-10-04T02:20:31+00:00Arash MirzaeiAmin SakzadJiangshan YuRon SteinfeldIn this paper, we propose Garrison, a payment channel with a watchtower for Bitcoin. For this scheme, the storage requirements of both channel parties and their watchtower are O(log(N)), with N being the maximum number of channel updates. Furthermore, using properties of the adaptor signature, Garrison avoids state duplication, meaning that both parties store the same version of transactions for each state and hence the number of transactions does not increase exponentially with the number of applications on top of the channel.
2022-09-30T04:32:28+00:00https://creativecommons.org/licenses/by/4.0/Arash MirzaeiAmin SakzadJiangshan YuRon Steinfeldhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1312Multiple Modular Unique Factorization Domain Subset Product with Errors2022-10-04T04:19:31+00:00Trey LiWe propose the multiple modular subset product with errors problem over unique factorization domains and give a search-to-decision reduction as well as an average-case-solution to worst-case-solution reduction for it.2022-10-04T04:19:31+00:00https://creativecommons.org/licenses/by/4.0/Trey Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1313Weak Bijective Quadratic Functions over $\mathbb F_p^n$2022-10-04T07:33:44+00:00Lorenzo GrassiMotivated by new applications such as secure Multi-Party Computation (MPC), Homomorphic Encryption (HE), and Zero-Knowledge proofs (ZK),
many MPC-, HE- and ZK-friendly symmetric-key primitives that minimize the number of multiplications over $\mathbb F_p$ for a large prime $p$ have recently been proposed in the literature. These symmetric primitives are usually defined via invertible functions, including (i) Feistel and Lai--Massey schemes and (ii) SPN constructions instantiated with invertible non-linear S-Boxes (such as invertible power maps $x\mapsto x^d$). However, the ``invertibility'' property is actually never required in any of the mentioned applications.
In this paper, we discuss the possibility of setting up MPC-/HE-/ZK-friendly symmetric primitives instantiated with non-invertible weak bijective functions. In contrast to one-to-one functions, any output of a weak bijective function admits at most two pre-images. The simplest example of such a function is the square map over $\mathbb F_p$ for a prime $p\ge 3$, for which $x^2 = (-x)^2$.
When working over $\mathbb F_p^n$ for $n\gg 1$, a weak bijective function can be set up by reconsidering the recent results of Grassi, Onofri, Pedicini and Sozzi as a starting point.
Given a quadratic local map $F:\mathbb F_p^2 \rightarrow \mathbb F_p$, they proved that the non-linear function over $\mathbb F_p^n$ for $n\ge 3$ defined as $\mathcal S_F(x_0, x_1, \ldots, x_{n-1}) = y_0\| y_1\| \ldots \| y_{n-1}$ where $y_i := F(x_i, x_{i+1})$ is never invertible. Here, we prove that
-- the quadratic function $F:\mathbb F_p^2 \rightarrow \mathbb F_p$ that minimizes the probability of having a collision for $\mathcal S_F$ over $\mathbb F_p^n$ is of the form $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent);
-- the function $\mathcal S_F$ over $\mathbb F_p^n$ defined as before via $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent) is weak bijective.
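Both claims can be illustrated by brute force on a toy instance. The following is a minimal sketch (not the paper's implementation), assuming the small parameters $p = 3$, $n = 3$ and cyclic indices (i.e. $x_n := x_0$ in $y_i = F(x_i, x_{i+1})$):

```python
from collections import Counter
from itertools import product

p, n = 3, 3  # toy parameters for illustration; real instantiations use large primes p

def S_F(x):
    # y_i = F(x_i, x_{i+1}) with F(x0, x1) = x0^2 + x1, indices taken cyclically
    return tuple((x[i] ** 2 + x[(i + 1) % n]) % p for i in range(n))

# Count how many pre-images each output of S_F has over all p^n inputs.
preimages = Counter(S_F(x) for x in product(range(p), repeat=n))
print("max pre-images:", max(preimages.values()))  # 2: weak bijective, not invertible

# The square map itself already exhibits the at-most-two behaviour: x^2 = (-x)^2.
squares = Counter((x * x) % p for x in range(p))
print("square-map pre-image counts:", dict(squares))  # 0 has one root, 1 has two
```

For this toy instance the map is indeed non-invertible (e.g. $(0,0,1)$ and $(1,2,0)$ both map to $(0,1,1)$), yet no output has more than two pre-images.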
As concrete applications, we propose modified versions of the MPC-friendly schemes MiMC, HadesMiMC, and (partially of) Hydra, and of the HE-friendly schemes Masta, Pasta, and Rubato. By instantiating them with the weak bijective quadratic functions proposed in this paper, we are able to improve the security and/or the performance in the target applications/protocols.
2022-10-04T07:33:44+00:00https://creativecommons.org/licenses/by-sa/4.0/Lorenzo Grassihttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2022/1314Hash Gone Bad: Automated discovery of protocol attacks that exploit hash function weaknesses2022-10-04T08:44:33+00:00Vincent ChevalCas CremersAlexander DaxLucca HirschiCharlie JacommeSteve KremerMost cryptographic protocols use cryptographic hash functions as a building block. The security analyses of these protocols typically assume that the hash functions are perfect (such as in the random oracle model). However, in practice, most widely deployed hash functions are far from perfect -- and as a result, the analysis may miss attacks that exploit the gap between the model and the actual hash function used.
We develop the first methodology to systematically discover attacks on security protocols that exploit weaknesses in widely deployed hash functions. We achieve this by revisiting the gap between theoretical properties of hash functions and the weaknesses of real-world hash functions, from which we develop a lattice of threat models. For all of these threat models, we develop fine-grained symbolic models.
Our methodology's fine-grained models cannot be directly encoded in existing state-of-the-art analysis tools by just using their equational reasoning. We therefore develop extensions for the two leading tools, Tamarin and ProVerif. In extensive case studies using our methodology, the extended tools rediscover all attacks that were previously reported for these protocols and discover several new variants.
2022-10-04T08:44:33+00:00https://creativecommons.org/licenses/by/4.0/Vincent ChevalCas CremersAlexander DaxLucca HirschiCharlie JacommeSteve Kremerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/724A Power Side-Channel Attack on the Reed-Muller Reed-Solomon Version of the HQC Cryptosystem2022-10-04T09:45:37+00:00Thomas SchambergerLukas HolzbaurJulian RennerAntonia Wachter-ZehGeorg SiglThe code-based post-quantum algorithm Hamming Quasi-Cyclic (HQC) is a fourth-round candidate in the NIST standardization project. Since their third-round version, the authors utilize a new combination of error-correcting codes, namely a combination of a Reed-Muller and a Reed-Solomon code, which requires an adaption of published attacks. We identify that the power side-channel attack by Ueno et al. from CHES 2021 does not work in practice, as they miss the fact that the implemented Reed-Muller decoder does not have a fixed decoding boundary.
In this work we provide a novel attack strategy that again allows for a successful attack. Our attack does not rely on simulation to verify its success but is proven with high probability for the HQC parameter sets. In contrast to the timing side-channel attack by Guo et al. we are able to reduce the required attack queries by a factor of 12 and are able to eliminate the inherent uncertainty of their used timing oracle. We show practical attack results utilizing a power side-channel of the used Reed-Solomon decoder on an ARM Cortex-M4 microcontroller.
In addition, we provide a discussion on how or whether our attack strategy is usable with the side-channel targets of the mentioned related work. Finally, we use information set decoding to evaluate the remaining attack complexity for partially retrieved secret keys. This work again emphasizes the need for a side-channel-secure implementation of all relevant building blocks of HQC.
2022-06-07T08:19:07+00:00https://creativecommons.org/licenses/by-nc-nd/4.0/Thomas SchambergerLukas HolzbaurJulian RennerAntonia Wachter-ZehGeorg Siglhttps://creativecommons.org/licenses/by-nc-nd/4.0/https://eprint.iacr.org/2021/1274A Tight Computational Indistinguishability Bound for Product Distributions2022-10-04T12:10:20+00:00Nathan GeierAssume that distributions $X_0,X_1$ (respectively $Y_0,Y_1$) are $d_X$ (respectively $d_Y$) indistinguishable for circuits of a given size. It is well known that the product distributions $X_0Y_0,\,X_1Y_1$ are $d_X+d_Y$ indistinguishable for slightly smaller circuits. However, in probability theory where unbounded adversaries are considered through statistical distance, it is folklore knowledge that in fact $X_0Y_0$ and $X_1Y_1$ are $d_X+d_Y-d_X\cdot d_Y$ indistinguishable, and also that this bound is tight.
We formulate and prove the computational analog of this tight bound. Our proof is entirely different from the proof in the statistical case, which is non-constructive. As a corollary, we show that if $X$ and $Y$ are $d$ indistinguishable, then $k$ independent copies of $X$ and $k$ independent copies of $Y$ are almost $1-(1-d)^k$ indistinguishable for smaller circuits, as against $d\cdot k$ using the looser bound. Our bounds are useful in settings where only weak (i.e. non-negligible) indistinguishability is guaranteed. We demonstrate this in the context of cryptography, showing that our bounds yield simple analysis for amplification of weak oblivious transfer protocols.
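A quick numerical sketch (illustrative only, not taken from the paper) shows how much tighter $1-(1-d)^k$ is than the naive union-style bound $d\cdot k$; note also that the two-distribution statistical bound $d_X+d_Y-d_X\cdot d_Y$ is exactly $1-(1-d_X)(1-d_Y)$:

```python
# Compare the naive bound d*k with the tight bound 1-(1-d)^k for k copies.
d = 0.1
for k in (1, 5, 10, 20):
    naive = min(1.0, d * k)       # additive bound, capped at 1
    tight = 1 - (1 - d) ** k      # tight product-form bound
    print(f"k={k:2d}  naive={naive:.3f}  tight={tight:.3f}")

# Two-distribution case: d_X + d_Y - d_X*d_Y == 1 - (1-d_X)*(1-d_Y).
dX, dY = 0.2, 0.3
print(dX + dY - dX * dY, 1 - (1 - dX) * (1 - dY))
```

For $d=0.1$, $k=10$ the naive bound is already vacuous (capped at $1$), while the tight bound is roughly $0.651$.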
2021-09-24T17:48:44+00:00https://creativecommons.org/licenses/by/4.0/Nathan Geierhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/376Universally Composable End-to-End Secure Messaging2022-10-04T12:53:10+00:00Ran CanettiPalak JainMarika SwanbergMayank VariaWe provide a full-fledged security analysis of the Signal end-to-end messaging protocol within the UC framework. In particular:
(1) We formulate an ideal functionality that captures end-to-end secure messaging, in a setting with PKI and an untrusted server, against an adversary that has full control over the network and can adaptively and momentarily compromise parties at any time and obtain their entire internal states. In particular our analysis captures the forward and backwards secrecy properties of Signal and the conditions under which they break.
(2) We model the various components of Signal (PKI and long-term keys, backbone "asymmetric ratchet", epoch-level symmetric ratchets, authenticated encryption) as individual ideal functionalities that are analysed separately and then composed using the UC and Global-State UC theorems.
(3) We use the Random Oracle Model to model non-committing encryption for arbitrary-length messages, but the rest of the analysis is in the plain model based on standard primitives. In particular, we show how to realize Signal's key derivation functions in the standard model, from generic components, and under minimalistic cryptographic assumptions.
Our analysis improves on previous ones in the guarantees it provides, in its relaxed security assumptions, and in its modularity. We also uncover some weaknesses of Signal that were not previously discussed.
Our modeling differs from previous UC models of secure communication in that the protocol is modeled as a set of local algorithms, keeping the communication network completely out of scope. We also make extensive, layered use of global-state composition within the plain UC framework. These innovations may be of separate interest.
2022-03-22T13:28:35+00:00https://creativecommons.org/licenses/by/4.0/Ran CanettiPalak JainMarika SwanbergMayank Variahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/981FrodoPIR: Simple, Scalable, Single-Server Private Information Retrieval2022-10-04T13:57:30+00:00Alex DavidsonGonçalo PestanaSofía CeliWe design $\mathsf{FrodoPIR}$ — a highly configurable, stateful, single-server Private Information Retrieval (PIR) scheme that involves an offline phase that is completely client-independent. Coupled with small online overheads, it leads to much smaller amortized financial costs on the server-side than previous approaches. In terms of performance for a database of $1$ million $1$KB elements, $\mathsf{FrodoPIR}$ requires $< 1$ second for responding to a client query, has a server response size blow-up factor of $< 3.6\times$, and financial costs are $\sim \$1$ for answering $100,000$ client queries. Our experimental analysis is built upon a simple, non-optimized Rust implementation, illustrating that $\mathsf{FrodoPIR}$ is particularly suitable for deployments that involve large numbers of clients.
2022-07-31T17:51:44+00:00https://creativecommons.org/licenses/by/4.0/Alex DavidsonGonçalo PestanaSofía Celihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1315Hitchhiker’s Guide to a Practical Automated TFHE Parameter Setup for Custom Applications2022-10-04T14:07:06+00:00Jakub KlemsaAlso referred to as the Holy Grail of Cryptography, Fully Homomorphic Encryption (FHE) allows for arbitrary calculations over encrypted data. As a basic use-case, FHE enables a User to delegate a computation over her sensitive data to a semi-trusted Cloud: in such a setup, the User provides her data $x$ encrypted to the Cloud, the Cloud evaluates a function $f$ over the encrypted data without ever decrypting it, and sends the (encrypted) result back to the User, who finally decrypts it and obtains the desired value $f(x)$. However, even after more than twelve years of advances in this field, FHE schemes are still fairly slow in evaluation, therefore any optimization is welcome.
Among existing FHE schemes, in this contribution we focus on the TFHE Scheme by Chillotti et al., which currently achieves the best evaluation times for generic functions. To be instantiated, TFHE however requires an extensive set of parameters. These parameters affect several aspects, including but not limited to the cleartext size, the bit-security level, the probability of errors and also the evaluation time. We propose, implement and evaluate a (semi-)automated approach to generate a set of TFHE parameters with particular respect to the evaluation time, given just the desired security level, cleartext precision, and a parameter that relates to the properties of the evaluated function $f$. With our tool, we re-generate some of the existing TFHE parameters, while achieving up to 39% better execution times in an equivalent setup.
Among existing FHE schemes, in this contribution we focus on the TFHE Scheme by Chillotti et al., which currently achieves the best evaluation times for generic functions. To be instantiated, TFHE however requires an extensive set of parameters. These parameters affect several aspects, including but not limited to the cleartext size, the bit-security level, the probability of errors and also the evaluation time. We propose, implement and evaluate a (semi-)automated approach to generate a set of TFHE parameters with particular respect to the evaluation time, given just the desired security level, cleartext precision, and a parameter that relates to the properties of the evaluated function $f$. With our tool, we re-generate some of the existing TFHE parameters, while achieving up to 39% better execution times in an equivalent setup.2022-10-04T14:07:06+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Jakub Klemsahttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2022/403Horst Meets Fluid-SPN: Griffin for Zero-Knowledge Applications2022-10-04T14:44:18+00:00Lorenzo GrassiYonglin HaoChristian RechbergerMarkus SchofneggerRoman WalchQingju WangZero-knowledge (ZK) applications form a large group of use cases in modern cryptography, and recently gained in popularity due to novel proof systems. For many of these applications, cryptographic hash functions are used as the main building blocks, and they often dominate the overall performance and cost of these approaches. Therefore, in the last years several new hash functions were built in order to reduce the cost in these scenarios, including Poseidon and Rescue among others.
These hash functions often look very different from more classical designs such as AES or SHA-2. For example, they work natively with integer objects rather than bits. At the same time, Poseidon and Rescue, for example, share some common features, such as being SPN schemes and instantiating the nonlinear layer with invertible power maps. While this allows the designers to provide simple and strong arguments for establishing their security, it also introduces some crucial limitations in the design, which affect the performance in the target applications.
To overcome these limitations, we propose the Horst mode of operation, in which the addition in a Feistel scheme $(x,y)\mapsto (y+F(x), x)$ is replaced by a multiplication, i.e., $(x,y)\mapsto (y\times G(x), x)$.
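The Horst round above is easy to state concretely. The following toy sketch (not the Griffin specification) contrasts one Feistel round with one Horst round over a prime field; the prime p and the round functions F and G are illustrative choices, not taken from the paper. The structural requirement is that G(x) is never zero, so that the multiplicative round remains invertible.

```python
# Toy illustration of a Feistel round (x, y) -> (y + F(x), x) versus a Horst
# round (x, y) -> (y * G(x), x) over GF(p). All concrete choices here (p, F, G)
# are illustrative, not from the Griffin paper.
p = 2**61 - 1  # a Mersenne prime, chosen only for this demo

def F(x):            # arbitrary round function for the Feistel demo
    return (x**3 + 7) % p

def G(x):            # for Horst, G(x) must never be 0 so the round is invertible;
    return (x**2 + 1) % p  # x^2 + 1 has no root mod p because p % 4 == 3

def feistel_round(x, y):
    return ((y + F(x)) % p, x)

def feistel_round_inv(u, v):
    return (v, (u - F(v)) % p)

def horst_round(x, y):
    return ((y * G(x)) % p, x)

def horst_round_inv(u, v):
    # undo the multiplication with G(v)^(-1) mod p (Fermat's little theorem)
    return (v, (u * pow(G(v), p - 2, p)) % p)
```

Both rounds are permutations of the pair (x, y); the Horst inverse just replaces a subtraction with a modular inversion of G.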
By carefully analyzing the relevant performance metrics in SNARK and STARK protocols, we show how to combine an expanding Horst scheme with the strong points of existing schemes in order to provide security and better efficiency in the target applications. We provide an extensive security analysis for our new design Griffin and a comparison with all current competitors.2022-03-31T07:23:36+00:00https://creativecommons.org/licenses/by/4.0/Lorenzo GrassiYonglin HaoChristian RechbergerMarkus SchofneggerRoman WalchQingju Wanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1316TurboPack: Honest Majority MPC with Constant Online Communication2022-10-04T15:10:47+00:00Daniel EscuderoVipul GoyalAntigoni PolychroniadouYifan SongWe present a novel approach to honest-majority secure multiparty computation in the preprocessing model with information-theoretic security that achieves the best online communication complexity. The online phase of our protocol requires $12$ elements in total per multiplication gate with circuit-dependent preprocessing, or $20$ elements in total with circuit-independent preprocessing. Prior works achieved online communication complexity linear in $n$, the number of parties, with the best existing solution requiring $1.5n$ elements per multiplication gate. Only one recent work (Goyal et al., CRYPTO'22) achieves constant online communication complexity, but the constants are large ($108$ elements for passive security, and twice that for active security). That said, our protocol offers a very efficient information-theoretic online phase for any number of parties.
The total end-to-end communication cost, including the preprocessing phase, is linear in $n$, i.e., $10n + 44$, which is larger than the $4n$ complexity of state-of-the-art protocols. The gap is not significant when the online phase must be optimized as a priority and a reasonably large number of parties is involved. Unlike previous works that use packed secret-sharing to reduce communication complexity, we further reduce communication by avoiding complex and expensive network routing or permutation tools. Furthermore, we allow for a maximal honest majority adversary, while most previous works require the set of honest parties to be strictly larger than a majority.
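As a reading aid for the counts quoted above, here is a toy cost model comparing the element counts from the abstract. Treating all quoted figures as per-multiplication-gate costs is an assumption of this sketch; nothing of the protocol itself is modeled.

```python
# Back-of-the-envelope comparison of communication costs (field elements),
# using only the numbers quoted in the TurboPack abstract. Assumption: all
# quoted figures are per multiplication gate.

def turbopack_online(n, circuit_dependent=True):
    """TurboPack's online cost is a constant, independent of the party count n."""
    return 12 if circuit_dependent else 20

def prior_online(n):
    """Best prior online cost quoted in the abstract: linear in n."""
    return 1.5 * n

def turbopack_total(n):
    """End-to-end cost (online + preprocessing) quoted in the abstract."""
    return 10 * n + 44

def state_of_the_art_total(n):
    """Total cost of the state-of-the-art protocols, for comparison."""
    return 4 * n
```

For example, at $n = 45$ parties the online phase costs $12$ elements versus $1.5 \cdot 45 = 67.5$ for the prior linear solution, while the end-to-end totals are $494$ versus $180$, illustrating the trade-off the abstract describes.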
Our protocol is simple and offers concrete efficiency. To illustrate this, we present a full-fledged implementation together with experimental results that show improvements in online-phase runtimes of up to $5\times$ in certain settings (e.g., $45$ parties, LAN network, circuit of depth $10$ with $1$M gates).2022-10-04T15:10:47+00:00https://creativecommons.org/licenses/by/4.0/Daniel EscuderoVipul GoyalAntigoni PolychroniadouYifan Songhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1049Post Quantum Design in SPDM for Device Authentication and Key Establishment2022-10-04T16:32:09+00:00Jiewen YaoKrystian MatusiewiczVincent ZimmerThe Security Protocol and Data Model (SPDM) defines flows to authenticate the hardware identity of a computing device. It also allows for establishing a secure session for confidential and integrity-protected data communication between two devices. The present version of SPDM, namely version 1.2, relies on traditional asymmetric cryptographic algorithms that are known to be vulnerable to quantum attacks. This paper describes the means by which support for post-quantum (PQ) cryptography can be added to the SPDM protocol in order to enable SPDM for the upcoming world of quantum computing. We examine the SPDM 1.2 protocol and discuss how to negotiate the use of post-quantum cryptographic (PQC) algorithms, how to support device identity reporting, means to authenticate the device, and how to establish a secure session when using PQC algorithms. We consider so-called hybrid modes, where both classical and PQC algorithms are used to achieve security properties, as these modes are important during the transition period. We also share our experience with implementing PQ-SPDM and provide benchmarks for some of the winning NIST PQC algorithms.2022-08-12T13:26:59+00:00https://creativecommons.org/publicdomain/zero/1.0/Jiewen YaoKrystian MatusiewiczVincent Zimmerhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2022/674A Note on Key Ranking for Optimal Collision Side-Channel Attacks2022-10-04T16:53:03+00:00Cezary GlowaczIn "Optimal collision side-channel attacks" (https://eprint.iacr.org/2019/828) we studied key ranking and derived an optimal distinguisher for it. In this note we propose a heuristic estimation procedure for key ranking based on this distinguisher, and provide estimates of lower bounds for secret key ranks in collision side-channel attacks.2022-05-30T08:59:47+00:00https://creativecommons.org/licenses/by/4.0/Cezary Glowaczhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1317On the Optimal Succinctness and Efficiency of Functional Encryption and Attribute-Based Encryption2022-10-04T18:07:00+00:00Aayush JainHuijia LinJi LuoIn this work we investigate the asymptotically best-possible succinctness and efficiency of functional encryption (FE) and attribute-based encryption (ABE), focusing on simultaneously minimizing the sizes of secret keys and ciphertexts and the decryption time. To this end, we consider the notion of partially hiding functional encryption (PHFE), which captures both FE and ABE, and the most efficient computation model, the random access machine (RAM). A PHFE secret key $\mathsf{sk}_f$ is tied to a function $f$, whereas a ciphertext $\mathsf{ct}_x(y)$ is tied to a public input $x$ (e.g., an ABE attribute) and encrypts a private input $y$. Decryption reveals $f(x,y)$ and nothing else about $y$.
We present the first PHFE scheme for RAMs based on the necessary assumption of FE for circuits. It achieves nearly optimal succinctness and efficiency:
* The secret keys $\mathsf{sk}_f$ are of (optimal) *constant size*, independent of the description size $|f|$ of the function tied to it.
* The ciphertexts $\mathsf{ct}_x(y)$ have (nearly optimal) *rate-2* dependency on the private input length $|y|$ and are (optimally) independent of the public input length $|x|$.
* Decryption is efficient, running in time linear in the instance running time $T$ of the RAM computation, in addition to the input and function sizes, i.e., ${T_{\mathsf{Dec}}=(T+|f|+|x|+|y|)\operatorname{poly}(\lambda)}$.
Our construction significantly improves upon the asymptotic efficiency of prior schemes. As a corollary, we obtain the first ABE scheme with both constant-size keys and constant-size ciphertexts, and the best-possible decryption time matching an existing lower bound.
We show barriers to further improvement on the asymptotic efficiency of (PH-)FE. We prove the first unconditional space-time trade-offs for (PH-)FE. *No* secure (PH-)FE scheme can have both key size and decryption time sublinear in the function size $|f|$, and *no* secure PHFE scheme can have both ciphertext size and decryption time sublinear in the public input length $|x|$. These space-time trade-offs apply even in the simplest selective 1-key 1-ciphertext secret-key setting. Furthermore, we show a conditional barrier towards achieving the optimal decryption time ${T_{\mathsf{Dec}}=T\operatorname{poly}(\lambda)}$: any such (PH-)FE scheme implies a primitive called secret-key doubly efficient private information retrieval (SK-DE-PIR), for which the only known candidates so far rely on new and non-standard hardness conjectures.2022-10-04T18:07:00+00:00https://creativecommons.org/licenses/by/4.0/Aayush JainHuijia LinJi Luohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1318General Partially Fair Multi-Party Computation with VDFs2022-10-04T18:32:19+00:00Bolton BaileyAndrew MillerGordon and Katz, in "Partial Fairness in Secure Two-Party Computation", present a protocol for two-party computation with partial fairness
which depends on assumptions about the size of the input or output of the functionality. They also
show that for some other functionalities, this notion of partial fairness is impossible to achieve.
In this work, we get around this impossibility result using verifiable delay functions, a primitive
which brings in an assumption on the inability of an adversary to compute a certain function in a
specified time. We present a gadget using VDFs which allows for any MPC to be carried out with
≈ 1/R partial fairness, where R is the number of communication rounds.2022-10-04T18:32:19+00:00https://creativecommons.org/licenses/by/4.0/Bolton BaileyAndrew Millerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2019/1147Batching non-membership proofs with bilinear accumulators2022-10-04T23:43:37+00:00Steve ThakurIn this short paper, we provide a protocol to batch multiple non-membership proofs into a single proof of constant size with bilinear accumulators via a succinct argument of knowledge for polynomial commitments.
We use similar techniques to provide a constant-sized proof that a polynomial commitment as in [KZG10] is a commitment to a separable (square-free) polynomial. In the context of the bilinear accumulator, this can be used to prove that a committed multiset is, in fact, a set. This has applications to any setting where a Verifier needs to be convinced that no element was added more than once. This protocol easily generalizes to a succinct protocol that shows that no element was inserted more than k times.
We use the protocol for the derivative to link a committed polynomial to a commitment to its degree, in zero-knowledge.
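The algebraic fact underlying the separability statement is classical: a polynomial $f$ over a field is square-free exactly when $\gcd(f, f') = 1$, where $f'$ is the formal derivative. A minimal in-the-clear sketch over a small prime field follows (coefficient lists, lowest degree first; the prime 101 is a demo choice, and the paper's actual contribution, proving this against a *commitment*, is deliberately out of scope here):

```python
# Square-freeness check over GF(p) via gcd(f, f'). Polynomials are coefficient
# lists, lowest degree first. This verifies the property in the clear only.
p = 101  # demo prime

def trim(f):                      # drop trailing zero coefficients
    while f and f[-1] % p == 0:
        f = f[:-1]
    return f

def deriv(f):                     # formal derivative
    return trim([(i * c) % p for i, c in enumerate(f)][1:])

def polymod(f, g):                # remainder of f divided by g over GF(p)
    f = trim([c % p for c in f]); g = trim([c % p for c in g])
    inv_lead = pow(g[-1], p - 2, p)
    while len(f) >= len(g):
        factor = (f[-1] * inv_lead) % p
        shift = len(f) - len(g)
        f = trim([(c - factor * (g[i - shift] if i >= shift else 0)) % p
                  for i, c in enumerate(f)])
    return f

def polygcd(f, g):                # Euclidean algorithm, result made monic
    while g:
        f, g = g, polymod(f, g)
    inv = pow(f[-1], p - 2, p)
    return [(c * inv) % p for c in f]

def is_separable(f):
    return polygcd(f, deriv(f)) == [1]
```

For instance, $(x-1)(x-2)$, i.e. `[2, 98, 1]` mod 101, is separable, while $(x-1)^2$, i.e. `[1, 99, 1]`, is not, since it shares the factor $x-1$ with its derivative.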
We have designed all of the protocols so that the Verifier needs to store just four elliptic curve points for any verification, despite the linear CRS. We also provide ways to speed up the verification of membership and non-membership proofs and to shift most of the computational burden from the Verifier to the Prover. Since all the challenges are public coin, the protocols can be made non-interactive with the Fiat-Shamir heuristic.2019-10-07T08:18:53+00:00https://creativecommons.org/licenses/by/4.0/Steve Thakurhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1319Post-Quantum Key Exchange from Subset Product With Errors2022-10-05T00:56:56+00:00Trey LiWe introduce a new direction for post-quantum key exchange based on the multiple modular subset product with errors problem.2022-10-05T00:56:56+00:00https://creativecommons.org/licenses/by/4.0/Trey Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/318Efficient Online-friendly Two-Party ECDSA Signature2022-10-05T06:01:30+00:00Haiyang XueMan Ho AuXiang XieTsz Hon YuenHandong CuiTwo-party ECDSA signatures have received much attention due to their widespread deployment in cryptocurrencies. Depending on whether or not the message is required, two-party signing can be divided into two phases, namely offline and online. Ideally, the online phase should be made as lightweight as possible. At the same time, the cost of the offline phase should remain similar to that of normal signature generation. However, the existing two-party ECDSA protocols are not optimal: either their online phase requires decryption of a ciphertext, or their offline phase needs at least two executions of multiplicative-to-additive conversion, which dominates the overall complexity.
This paper proposes an online-friendly two-party ECDSA with a lightweight online phase and a single multiplicative-to-additive function in the offline phase. It is constructed from a novel re-sharing of the secret key and a linear sharing of the nonce. Our scheme significantly improves upon previous protocols based on either oblivious transfer or homomorphic encryption. We implement our scheme and show that it outperforms prior online-friendly schemes (i.e., those with a lightweight online cost) by a factor of roughly 2 to 9 in both communication and computation.
Furthermore, our two-party scheme can easily be extended to the $2$-out-of-$n$ threshold ECDSA.2022-03-08T12:49:52+00:00https://creativecommons.org/licenses/by/4.0/Haiyang XueMan Ho AuXiang XieTsz Hon YuenHandong Cuihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1320Boosting Batch Arguments and RAM Delegation2022-10-05T06:55:22+00:00Yael Tauman KalaiAlex LombardiVinod VaikuntanathanDaniel WichsWe show how to generically improve the succinctness of non-interactive publicly verifiable batch argument ($\mathsf{BARG}$) systems. In particular, we show (under a mild additional assumption) how to convert a $\mathsf{BARG}$ that generates proofs of length $\mathsf{poly}(m)\cdot k^{1-\epsilon}$, where $m$ is the length of a single instance and $k$ is the number of instances being batched, into one that generates proofs of length $\mathsf{poly}(m)\cdot \mathsf{poly}\log k$, which is the gold standard for succinctness of $\mathsf{BARG}$s. By prior work, such $\mathsf{BARG}$s imply the existence of $\mathsf{SNARG}$s for deterministic time-$T$ computation with optimal succinctness $\mathsf{poly}\log T$.
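To make the quoted succinctness gap concrete, here is a toy numeric comparison of the two proof-length regimes. The $\mathsf{poly}(m)$ factor is collapsed to a constant C, and C, EPS, and the polylog exponent are illustrative placeholders, not values from the paper.

```python
import math

# Toy reading aid for the quoted BARG proof lengths (units are arbitrary):
# a "somewhat succinct" BARG with proofs ~ C * k^(1 - EPS) versus the boosted
# target with proofs ~ C * (log2 k)^LOG_EXP. All constants are illustrative.
C, EPS, LOG_EXP = 100, 0.1, 2

def before_boost(k):
    """Proof length of the starting BARG: polynomial in k."""
    return C * k ** (1 - EPS)

def after_boost(k):
    """Proof length after the transformation: polylogarithmic in k."""
    return C * math.log2(k) ** LOG_EXP
```

Even with this modest EPS, at $k = 2^{20}$ batched instances the starting scheme's proof grows to about $C \cdot 2^{18}$ units, while the boosted scheme stays at $C \cdot 400$, which is the qualitative gap the abstract describes.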
Our result reduces the long-standing challenge of building publicly-verifiable delegation schemes to a much easier problem: building a batch argument system that beats the trivial construction. It also immediately implies new constructions of $\mathsf{BARG}$s and $\mathsf{SNARG}$s with polylogarithmic succinctness based on either bilinear maps or a combination of the $\mathsf{DDH}$ and $\mathsf{QR}$ assumptions.
Along the way, we prove an equivalence between $\mathsf{BARG}$s and a new notion of $\mathsf{SNARG}$s for (deterministic) $\mathsf{RAM}$ computations that we call ``flexible $\mathsf{RAM}$ $\mathsf{SNARG}$s with partial input soundness''. This is the first demonstration that $\mathsf{SNARG}$s for deterministic computation (of any kind) imply $\mathsf{BARG}$s. Our $\mathsf{RAM}$ $\mathsf{SNARG}$ notion is of independent interest and has already been used in a recent work on constructing rate-1 $\mathsf{BARG}$s (Devadas et al., FOCS 2022).2022-10-05T01:08:42+00:00https://creativecommons.org/licenses/by-sa/4.0/Yael Tauman KalaiAlex LombardiVinod VaikuntanathanDaniel Wichshttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2022/1322Efficient Linkable Ring Signature from Vector Commitment inexplicably named Multratug2022-10-05T07:27:26+00:00Anton A. SokolovIn this paper we continue our work started in the article ‘Lin2-Xor lemma and Log-size Linkable
Threshold Ring Signature’ by introducing another lemma called Lin2-Choice, which extends the Lin2-Xor lemma,
and creating a general-purpose log-size linkable threshold ring signature scheme of size 2·log2(n) + 3l + 3, where n
is the ring size and l is the threshold. The scheme is composed of several public coin honest verifier zero-knowledge
arguments that have computational witness-extended emulation. We use an arbitrary vector commitment argument
as the base building block, providing the possibility to use any concrete scheme for it, as long as the scheme is
honest verifier zero-knowledge and has computational witness-extended emulation. Also, we present an extended
version of our signature, of size 2·log2(n + l) + 6l + 6, which simultaneously proves the sum of hidden amounts
attached to the signing keys. All this is done in a prime-order group without bilinear pairings, in which the decisional
Diffie-Hellman assumption holds.2022-10-05T07:27:26+00:00https://creativecommons.org/licenses/by/4.0/Anton A. Sokolovhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1323On Constructing One-Way Quantum State Generators, and More2022-10-05T09:38:29+00:00Shujiao CaoRui XueAs a quantum analogue of the one-way function, the notion of a one-way quantum state generator was recently proposed by Morimae and Yamakawa (CRYPTO'22); it is implied by pseudorandom states and can be used to devise a construction of a one-time secure digital signature. Due to Kretschmer's result (TQC'20), it is believed that a pseudorandom state generator requires less than a post-quantum secure one-way function. Unfortunately, it remains unknown how to achieve a one-way quantum state generator without the existence of a post-quantum secure one-way function. In this paper, we study this problem and obtain the following results:
We propose two variants of the one-way quantum state generator, which we call the weak one-way quantum state generator and the distributionally one-way quantum state generator, and show the equivalence among these three primitives.
We obtain a distributionally one-way quantum state generator from the average-case hardness of a promise problem in \textsf{QSZK}, and hence a construction of a one-way quantum state generator.
We give a direct construction of quantum bit commitment with statistical binding (sum-binding) and computational hiding from the average-case hardness of a complete problem of $\textsf{QSZK}$.
To show the non-triviality of the constructions above, we devise a quantum oracle $\mathcal{U}$ relative to which such a promise problem in $\textsf{QSZK}$ does not belong to $\mathsf{QMA}^{\mathcal{U}}$.
Our results present the first non-trivial construction of a one-way quantum state generator from the hardness assumption of a complexity class, and give further evidence that one-way quantum state generators probably require less than post-quantum secure one-way functions.2022-10-05T09:38:29+00:00https://creativecommons.org/licenses/by/4.0/Shujiao CaoRui Xuehttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1324Adaptive Multiparty NIKE2022-10-05T10:43:31+00:00Venkata KoppulaBrent WatersMark ZhandryWe construct adaptively secure multiparty non-interactive key exchange (NIKE) from polynomially hard indistinguishability obfuscation and other standard assumptions. This improves on all prior such protocols, which required sub-exponential hardness. Along the way, we establish several compilers which simplify the task of constructing new multiparty NIKE protocols, and also establish a close connection with a particular type of constrained PRF.2022-10-05T10:43:31+00:00https://creativecommons.org/licenses/by/4.0/Venkata KoppulaBrent WatersMark Zhandryhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1321cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs2022-10-05T12:58:57+00:00Tao LuChengkun WeiRuijing YuYi ChenLi WangChaochao ChenZeke WangWenzhi ChenZero-knowledge proof (ZKP) is a critical cryptographic protocol, and it has been deployed in various privacy-preserving applications such as cryptocurrencies and verifiable machine learning.
Unfortunately, ZKP has a high overhead in its proof generation step, which consists of several time-consuming operations, including large-scale matrix-vector multiplication (MUL), number-theoretic transform (NTT), and multi-scalar multiplication (MSM) on elliptic curves. Several GPU-accelerated implementations of ZKP have been developed to improve its performance, but these existing GPU designs do not fully unleash the potential of GPUs. This paper therefore presents cuZK, an efficient GPU implementation of ZKP with the following three optimizations to achieve higher performance. First, we propose a new parallel MSM algorithm and deploy it in cuZK. This MSM algorithm is well adapted to the high parallelism provided by GPUs, and it achieves nearly perfect linear speedup over the Pippenger algorithm, a well-known serial MSM algorithm. Second, we parallelize the MUL operation, which is largely disregarded by other existing GPU designs. Indeed, along with our self-designed MSM parallel scheme and a well-studied NTT parallel scheme, cuZK achieves the parallelization of all operations in the proof generation step. Third, cuZK reduces the latency overhead caused by CPU-GPU data transfer (DT) by 1) reducing redundant data transfer and 2) overlapping data transfer and device computation with the multi-streaming technique. We design a series of evaluation schemes for cuZK. The evaluation results show that our MSM module provides over 2.08× (up to 2.63×) speedup versus the state-of-the-art GPU implementation. cuZK achieves over 2.65× (up to 4.47×) speedup on standard benchmarks and 2.18× speedup on a GPU-accelerated cryptocurrency application, Filecoin.
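For reference, the serial Pippenger bucket method that the abstract above measures against can be sketched in a few lines. This is a toy illustration, not cuZK's GPU kernel: the group is integer addition modulo a prime standing in for elliptic-curve point addition, and the modulus `P` and window width `c` are our own illustrative choices.

```python
# Toy Pippenger (bucket-method) multi-scalar multiplication.
P = 2**61 - 1  # toy modulus (illustrative choice, not from the paper)

def msm_naive(scalars, points):
    """Reference computation: sum(k_i * G_i), term by term."""
    return sum(k * g for k, g in zip(scalars, points)) % P

def msm_pippenger(scalars, points, c=4):
    """Bucket method with window width c bits."""
    nbits = max(k.bit_length() for k in scalars)
    nwin = (nbits + c - 1) // c
    acc = 0
    for w in reversed(range(nwin)):      # most-significant window first
        for _ in range(c):               # shift accumulator by one window
            acc = (acc + acc) % P        # "point doubling" in the toy group
        buckets = [0] * (1 << c)         # bucket d collects points whose digit is d
        for k, g in zip(scalars, points):
            d = (k >> (w * c)) & ((1 << c) - 1)
            if d:
                buckets[d] = (buckets[d] + g) % P
        running = partial = 0            # suffix sums yield sum_d d * buckets[d]
        for d in reversed(range(1, 1 << c)):
            running = (running + buckets[d]) % P
            partial = (partial + running) % P
        acc = (acc + partial) % P
    return acc
```

The bucket trick replaces one scalar multiplication per point with one group addition per point per window, which is the regular, data-parallel structure that GPU variants exploit.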
2022-10-05T02:41:14+00:00https://creativecommons.org/licenses/by-sa/4.0/Tao LuChengkun WeiRuijing YuYi ChenLi WangChaochao ChenZeke WangWenzhi Chenhttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2019/1126Encrypted Distributed Hash Tables2022-10-05T13:42:09+00:00Archita AgarwalSeny KamaraDistributed hash tables (DHT) are a fundamental building block in the design of
distributed systems with applications ranging from content distribution
networks to off-chain storage networks for blockchains and smart contracts.
When DHTs are used to store sensitive information, system designers use
end-to-end encryption in order to guarantee the confidentiality of their data.
A prominent example is Ethereum's off-chain network Swarm.
In this work, we initiate the study of end-to-end encryption in DHTs and the
many systems they support. We introduce the notion of an encrypted DHT and
provide simulation-based security definitions that capture the security
properties one would desire from such a system. Using our definitions, we
then analyze the security of a standard approach to storing encrypted
data in DHTs. Interestingly, we show that this "standard scheme" leaks information probabilistically,
where the probability is a function of how well the underlying DHT load balances its data.
We also show that, in order to be securely used with the standard scheme, a DHT
needs to satisfy a form of equivocation with respect to its overlay. To show
that these properties are indeed achievable in practice, we study the balancing
properties of the Chord DHT---arguably the most influential DHT---and show that
it is equivocable with respect to its overlay in the random oracle model.
Finally, we consider the problem of encrypted DHTs in the context of transient
networks, where nodes are allowed to leave and join.2019-10-02T07:57:00+00:00https://creativecommons.org/licenses/by/4.0/Archita AgarwalSeny Kamarahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1583Orientations and the supersingular endomorphism ring problem2022-10-05T17:19:08+00:00Benjamin WesolowskiWe study two important families of problems in isogeny-based cryptography and how they relate to each other: computing the endomorphism ring of supersingular elliptic curves, and inverting the action of class groups on oriented supersingular curves. We prove that these two families of problems are closely related through polynomial-time reductions, assuming the generalised Riemann hypothesis.
We identify two classes of essentially equivalent problems. The first class corresponds to the problem of computing the endomorphism ring of oriented curves. The security of a large family of cryptosystems (such as CSIDH) reduces to (and sometimes from) this class, for which there are heuristic quantum algorithms running in subexponential time. The second class corresponds to computing the endomorphism ring of orientable curves. The security of essentially all isogeny-based cryptosystems reduces to (and sometimes from) this second class, for which the best known algorithms are still exponential.
Some of our reductions not only generalise, but also strengthen previously known results. For instance, it was known that in the particular case of curves defined over $\mathbb F_p$, the security of CSIDH reduces to the endomorphism ring problem in subexponential time. Our reductions imply that the security of CSIDH is actually equivalent to the endomorphism ring problem, under polynomial-time reductions (circumventing arguments that proved such reductions unlikely).2021-12-03T07:59:55+00:00https://creativecommons.org/licenses/by/4.0/Benjamin Wesolowskihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1325Efficient and Complete Formulas for Binary Curves2022-10-05T22:26:29+00:00Thomas PorninBinary elliptic curves are elliptic curves defined over finite fields of characteristic 2. On software platforms that offer carryless multiplication opcodes (e.g. pclmul on x86), they have very good performance. However, they suffer from some drawbacks, in particular that non-supersingular binary curves have an even order, and that most known formulas for point operations have exceptional cases that are detrimental to safe implementation.
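The carryless multiplication opcode mentioned above computes the product of two binary polynomials over GF(2). A minimal software model of that idea (an illustrative sketch, not the paper's implementation) simply replaces the additions of the schoolbook multiplier with XOR:

```python
def clmul(a: int, b: int) -> int:
    """Carryless multiply: GF(2)[x] polynomial product of the bit patterns.

    Partial products are combined with XOR instead of addition,
    so no carries propagate between bit positions.
    """
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r
```

For example, `clmul(0b11, 0b11) == 0b101`, reflecting $(x+1)^2 = x^2 + 1$ over GF(2); instructions like pclmul compute the same product on 64-bit operands in a few cycles.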
In this paper, we show how to make a prime order group abstraction out of standard binary curves. We describe a new compression scheme that yields a canonical and compact encoding. We also describe complete formulas for operations on the group. The formulas have no exceptional case, and are furthermore faster than previously known complete and incomplete formulas (general point addition in cost 8M+2S+2mb on all curves, 7M+2S+2mb on half of the curves). We also show how the same formulas can be applied to computations on the entire original curve, if full backward compatibility with standard curves is needed. Finally, we implemented our method over the standard NIST curves B-233 and K-233. Our strictly constant-time code achieves generic point multiplication by a scalar on curve K-233 in as little as 29600 clock cycles on an Intel x86 CPU (Coffee Lake core).2022-10-05T22:26:29+00:00https://creativecommons.org/licenses/by/4.0/Thomas Porninhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/1301On the Invalidity of Lin16/Lin17 Obfuscation Schemes2022-10-06T01:47:43+00:00Hu YupuDong SiyueWang BaocangDong XingtingIndistinguishability obfuscation (IO) is at the frontier of cryptography research. The Lin16/Lin17 obfuscation schemes are well-known steps towards simplifying the obfuscation mechanism. Their basic structure can be described as follows: to obfuscate a polynomial-time-computable Boolean function $c(x)$, first divide it into a group of component functions with low degree and low locality by using randomized encoding, and then hide the shapes of these component functions by using constant-degree multilinear maps (rather than polynomial-degree ones).
In this short paper we point out that the Lin16/Lin17 schemes are invalid. More specifically, they cannot achieve reusability; therefore they are not true IO schemes, but rather garbling schemes, which are one-time schemes. Besides, this short paper presents further observations showing that component functions cannot be overly simple.2022-09-30T07:58:57+00:00https://creativecommons.org/publicdomain/zero/1.0/Hu YupuDong SiyueWang BaocangDong Xingtinghttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2019/468The Mersenne Low Hamming Combination Search Problem can be reduced to an ILP Problem2022-10-06T06:24:26+00:00Alessandro BudroniAndrea TentiIn 2017, Aggarwal, Joux, Prakash, and Santha proposed an innovative NTRU-like public-key cryptosystem that was believed to be quantum resistant, based on Mersenne prime numbers q = 2^N-1. After a successful attack designed by Beunardeau, Connolly, Geraud, and Naccache, the authors revised the protocol, which was accepted for Round 1 of the Post-Quantum Cryptography Standardization Process organized by NIST. The security of this protocol is based on the assumption that the so-called Mersenne Low Hamming Combination Search Problem (MLHCombSP) is hard to solve. In this work, we present a reduction of MLHCombSP to Integer Linear Programming (ILP). This opens new research directions for assessing the concrete robustness of this cryptosystem. In particular, we uncover a new family of weak keys, for which our attack runs in polynomial time.2019-05-10T12:34:11+00:00https://creativecommons.org/licenses/by/4.0/Alessandro BudroniAndrea Tentihttps://creativecommons.org/licenses/by/4.0/
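A structural fact that underlies low-Hamming-weight attacks in the Mersenne setting above is standard modular arithmetic (this is an illustration only, not the paper's ILP reduction, and the function names are ours): multiplying by a power of 2 modulo q = 2^N - 1 is a cyclic rotation of the N-bit representation, so it preserves Hamming weight.

```python
def mul_pow2_mod_mersenne(x: int, k: int, n: int) -> int:
    """Multiply x by 2^k modulo q = 2^n - 1.

    For 0 <= x < q this equals a k-step left rotation of the n-bit
    representation of x, because 2^n is congruent to 1 modulo q;
    the Hamming weight of x is therefore unchanged.
    """
    q = (1 << n) - 1
    return (x << k) % q

def hamming_weight(x: int) -> int:
    """Number of 1 bits in x."""
    return bin(x).count("1")
```

For instance, with n = 13 (q = 8191, a Mersenne prime), multiplying 2^12 by 2 wraps the top bit around to produce 1, with Hamming weight still 1.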